Inside System Storage -- by Tony Pearson
Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the IBM Systems Client Experience Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2016, Tony celebrated his 30th anniversary with IBM Storage. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software, and services.
(Short URL for this blog: ibm.co/Pearson)

My books are available on Lulu.com! Order your copies today!

Featured Redbooks and Redpapers:

  • IBM System Storage Solutions Handbook
  • IBM Software-Defined Storage Guide
  • IBM Private, Public, and Hybrid Cloud Storage Solutions
  • IBM Spectrum Archive Enterprise Edition V1.2: Installation and Configuration Guide
  • IBM Spectrum Scale and ECM FileNet Content Manager Are a Winning Combination
  • IBM Spectrum Scale in an OpenStack Environment



Links to other blogs...

  • Accelerate with ATS
  • Alltop - Top Storage Blogs
  • Anthony Vandewerdt
  • Barry Whyte (IBM)
  • Bob Sutor (IBM)
  • Brad Johns Consulting
  • Chris M. Evans
  • Chuck Hollis (Oracle)
  • Corporate Blogs
  • Greg Schulz
  • Hu Yoshida (HDS)
  • Jim Kelly (IBM)
  • Jon Toigo - DrunkenData
  • Kirby Wadsworth (F5)
  • Martin Glasborow
  • Raj Sharma, IBM Storage and Te...
  • Richard Swain (IBM)
  • Roger Leuthy, Storage CH Blog
  • Ron Riffe, "The Line"
  • Seb's SANblog
  • Stephen Foskett, Pack Rat
  • Steve Duplessie (ESG)
  • Storagezilla
  • Technology Blogs
  • Top 10 Storage Blogs
  • VMblog by David Marshall


Disclaimer

"The postings on this site solely reflect the personal views of each author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management."

(c) Copyright Tony Pearson and IBM Corporation. All postings are written by Tony Pearson unless noted otherwise.

Tony Pearson is employed by IBM. Mentions of IBM Products, solutions or services might be deemed as "paid endorsements" or "celebrity endorsements" by the US Federal Trade Commission.

This blog complies with the IBM Business Conduct Guidelines, IBM Social Computing Guidelines, and IBM Social Brand Governance. This blog is administered by Tony Pearson and Sarochin Tollette.

Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.

Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.

Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.

Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.


Lab Tour - Steward Observatory Mirror Lab


Most readers know that Tucson is home to one of the largest collections of world-renowned experts on IT storage. But what you may not know is that Tucson is also home to experts in optical sciences. This week, I was part of a delegation of IBMers invited on a tour of the Steward Observatory Mirror Lab [SOML].

[Image: Steward Observatory Mirror Lab]

SOML was built in 1990 underneath the football stadium at the University of Arizona. Why under the stadium? Their motivation was [Chicago Pile-1], the world's first nuclear reactor, built by Enrico Fermi under the football stadium at the University of Chicago.

We got to see all aspects of the process used to develop the huge mirrors found in large telescopes. SOML did not always offer lab tours. Back in 1993, two dozen members of the Earth First! terrorist organization [attacked the lab with hammers and monkey wrenches to destroy and dismantle the mirror lab]. Now, security is tight to ensure no one damages these mirrors, some of which fetch as much as $30 million.

Aluminum Silicate

At other mirror labs, mirrors start as a large, heavy, flat piece of glass that is then ground and polished to the correct parabolic curve. SOML created a new process that works much better, similar to making a [Pineapple Upside Down Cake]. For those not familiar with this cake, you arrange sliced pineapple rings on the bottom of the baking dish, pour in the liquid cake batter so it fills in and around the pineapple slices, then bake.

The first step is creating a base of 1,690 hexagonal tubes made of aluminum silicate. These are like the pineapple rings in the cake. The tubes are bolted to a baking dish 8.4 meters wide, with about an inch of space between them. These tubes form the base of the [parabolic shape] that focuses starlight to a small focal point. The aluminum silicate feels like clay.

Boron Silicate Glass

Once the base is built, chunks of glass are placed on the surface. Rather than pouring on a cake mix of molten glass, these chunks will be melted in place. This isn't normal glass, but a special Boron Silicate glass that does not expand or contract much with changes in temperature, made by the [Ohara Corporation] in Japan.

[Image: Steward Observatory Mirror Lab oven]

The oven is then lowered onto the baking dish. Once the temperature reaches 700 degrees, the entire system is rotated at 7 RPM. This allows the glass to melt and take its parabolic shape through [centrifugal force]. The people who run the oven are called "oven pilots", and they monitor the entire process to make sure nothing goes wrong.
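For the engineers in the audience, the rotation rate is what sets the curve: the free surface of a spinning liquid settles into a paraboloid whose focal length is f = g/(2ω²). Here is a quick back-of-the-envelope sketch in Python, assuming standard gravity; the real process surely controls far more variables than this:

```python
import math

RPM = 7                                # rotation rate quoted on the tour
omega = RPM * 2 * math.pi / 60         # angular speed in radians/second
g = 9.81                               # standard gravity, m/s^2

# Free surface of a rotating liquid: z = omega^2 * r^2 / (2g),
# a paraboloid with focal length f = g / (2 * omega^2).
focal_length = g / (2 * omega ** 2)

diameter = 8.4                         # mirror diameter in meters
print(f"focal length: ~{focal_length:.1f} m")             # ~9.1 m
print(f"focal ratio : ~f/{focal_length / diameter:.2f}")  # ~f/1.09
```

Spinning at just 7 RPM, in other words, casts a remarkably fast mirror that would otherwise take years of grinding to achieve from a flat blank.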

This particular mirror is one of the two that will go into the [Large Binocular Telescope]. The mirror will be 36 inches thick at the edges, and 18 inches in the middle. If the glass cools down too quickly, it may crack or form crystals, so instead the oven is kept in place and the temperature is lowered slowly over the course of a few months. This is called annealing.

[Image: Steward Observatory Mirror Lab - mirror]

Once a mirror has annealed, 24 suction cups are glued to the top surface to pull the mirror out of the baking dish. It is then tipped on its side so that all the bolts can be removed and the hexagonal tubes washed out, leaving behind a honeycomb pattern on the bottom of the mirror. This means the mirror is 80 percent air, making it strong and lightweight.
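The "80 percent air" figure is easy to sanity-check against the thicknesses quoted above. A rough estimate, using a crude average thickness and a typical borosilicate glass density (both round-number assumptions of mine, not SOML's figures):

```python
import math

diameter = 8.4                         # meters
edge = 36 * 0.0254                     # 36 inches at the edge, in meters
center = 18 * 0.0254                   # 18 inches in the middle, in meters
avg_thickness = (edge + center) / 2    # crude average over the blank

glass_density = 2230                   # kg/m^3, typical borosilicate glass

solid_mass = math.pi * (diameter / 2) ** 2 * avg_thickness * glass_density

# Honeycombing leaves the blank roughly 80% air, i.e. 20% glass.
honeycomb_mass = 0.20 * solid_mass

print(f"solid blank     : ~{solid_mass / 1000:.0f} metric tons")      # ~85 t
print(f"honeycomb blank : ~{honeycomb_mass / 1000:.0f} metric tons")  # ~17 t
```

Seventeen-ish tons is still heavy, but a solid blank five times heavier would be nearly impossible to support, cool evenly, and transport.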

[Image: LSST mirror]

The next step is grinding the surface with diamonds. In most cases, the spinning process creates the correct shape, so little grinding is required. However, for this mirror, destined for the Large Synoptic Survey Telescope [LSST], about five tons of glass will be ground out of the center. The surface will actually have two parabolic curves: the outer curve is shallow, and the inner curve is deep. This will allow the LSST to survey a wide area of space at a time.

[Image: LMT mirror]

Once the glass is ground to the right shape, it will be polished with cerium oxide, commonly known as Jeweler's Rouge. How smooth does it have to be? If this mirror were the size of the United States, there would be no bump higher than 2 inches tall!
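You can run that analogy backwards to see what tolerance it implies. Taking the continental United States as roughly 4,500 km across (my assumption for the comparison), a 2-inch bump scales down to about a tenth of a micron:

```python
mirror_diameter = 8.4          # meters
us_width = 4.5e6               # ~4,500 km coast to coast (assumption)
bump = 2 * 0.0254              # a 2-inch bump, in meters

implied_tolerance = bump * mirror_diameter / us_width
print(f"implied surface tolerance: ~{implied_tolerance * 1e9:.0f} nm")
# ~95 nm, a small fraction of a wavelength of visible light
```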

Most mirrors are symmetrical, so the polishing can be done on a spinning platform, but this mirror is not. The Large Magellan Telescope will consist of seven mirrors: a symmetrical one in the middle, surrounded by six outer mirrors that continue the parabolic shape in each direction. This is one of the outer mirrors, which means each part of the polishing process will be controlled by computers to get exactly the curve required.

[Image: Magellan Telescope scale model]

Here is a small scale model of the Magellan Telescope. Each of the seven mirrors will be 8.4 meters wide. At this point, one person asked why all the mirrors were 8.4 meters wide. I joked that this was the size of the oven! It reminded me of [the story where a newlywed had to ask her grandmother why she cut the ends off the pot roast]. The actual reason is that the posts of the football stadium are spaced 8.5 meters apart, so any mirror made inside the lab larger than that could not be removed easily for transportation.

[Image: SOML measuring station]

The LMT will be installed at [Cerro Tololo] in Chile, where my father worked earlier in his career. Why Chile? Observatories need high altitude, a dry climate, and clear skies. That is why Arizona is home to many observatories, including Kitt Peak National Observatory and the Vatican Observatory on Mount Graham. Cerro Tololo in Chile meets these requirements as well.

Once operational in 2020, it will gather 6 TB of images every evening. That got all of the IBMers on the tour very excited!
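It is easy to see why. At 6 TB per night, the data piles up quickly; here is a rough tally, ignoring compression, calibration copies, and cloudy nights (my simplifying assumptions):

```python
nightly_tb = 6                              # TB of images per night, as quoted
nights_per_year = 365                       # assume observing every night

yearly_pb = nightly_tb * nights_per_year / 1000   # 1 PB = 1,000 TB
print(f"~{yearly_pb:.1f} PB per year")                     # ~2.2 PB
print(f"~{yearly_pb * 10:.0f} PB over a ten-year survey")  # ~22 PB
```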

To verify the polishing is complete, the mirror is put on three red stands and measured with a laser. Once the measurements are complete, the surface is coated with aluminum to provide the reflective surface. You can't just paint the surface with a roller! Instead, the aluminum is vaporized and allowed to land evenly on the surface of the mirror, in a layer that is only three molecules thick. There is more aluminum in a standard-size beer can than on the surface of one of these 8.4-meter mirrors!
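The beer-can comparison checks out arithmetically. With rough figures for the atomic diameter of aluminum and its bulk density (my assumptions; the actual coating spec may differ):

```python
import math

diameter = 8.4                     # mirror diameter, meters
area = math.pi * (diameter / 2) ** 2

atom_diameter = 0.29e-9            # ~0.29 nm per aluminum atom (assumption)
thickness = 3 * atom_diameter      # "three molecules thick"
al_density = 2700                  # kg/m^3, bulk aluminum

coating_mass = area * thickness * al_density
print(f"coating: ~{coating_mass * 1000:.2f} g of aluminum")   # ~0.13 g
# An empty aluminum beverage can weighs roughly 13-15 g: about 100x more.
```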

So that was the tour. It took almost 2 hours. If you are ever in Tucson, consider contacting the SOML and arranging a tour for yourself. There is no other mirror lab like it!

technorati tags: IBM, SOML, LMT, LSST, LBT, Cerro Tololo, Kitt Peak



Tags:  kitt+peak lmt lbt ibm cerro+tololo lsst soml

IBM Pulse2012 in Las Vegas

[Image: IBM Pulse2012 Conference]

This week is the IBM Pulse2012 conference in Las Vegas. I am not there, for medical reasons this time. While my colleagues spend the week sipping Margaritas and enjoying the music in between inspiring technical sessions, I will be flat on my back, getting all my nutrients from a tube connected to my arm, listening to the hospital equivalent of [Muzak].

I found a great write-up from fellow blogger Jason Buffington of ESG. Here are some excerpts from his post [IBM Pulse 2012 — Day One Keynote]:

"IBM Pulse 2012 ‘s opening keynote talked about the realities of cloud as a delivery model – without the ‘private-‘, or the ‘public-‘, or even the quotes or capitalization of “The Cloud.” It was IBM’s perspective on what IBM knows better than most, how to deliver enterprise IT services that map to strategic business goals."
"In contrast to talking about ‘data-center/cloud’ stuff and then later about ‘consumerization-of-IT’ stuff , IBM’s core message was how mobility was in many ways driving cloud evolution."
"...cloud-based delivery was ‘more than just virtualization’"
"...the US Dept of Labor stating that jobs related to technology are forecast to be among the fastest growing segment thru 2018."

Hopefully, this post will hold you over until I regain consciousness.



Tags:  mobility ibm esg cloud jason+buffington

IBM Pulse2012 Video Library Now Available


[Image: IBM Pulse2012]

Did you miss IBM's Pulse 2012 conference? So did I. Last month, I told you all to [mark your calendars], but wasn't sure if I would be there myself or not.

I was invited to attend Pulse this year, but instead had to go to the hospital for surgery and spend the week recovering. I thought I had made it clear in my last post that I would be spending [the week on my back, with a tube in my arm], but apparently, people missed that subtlety.

The tube was actually connected to the back of my left hand, and I was tempted to take pictures of the entire process, but decided not to, since my gown had no pockets to hold my camera. Perhaps it is better that it went undocumented. The less you see of the inner workings of a hospital, as a patient, the better. The whole thing was quite a blur.

Despite a few mishaps, I managed to survive the week. Many thanks to Hilda, Dina, Crystal, Marcie, Mike, Joe, Ryan, Sue, Debra, Donna, Modrechai, and the rest of the fine medical staff at St. Joseph's for their hospitality! And of course, many thanks to Mo, my parents and sisters for helping me through the recovery!

Fortunately, for those like me who were unable to go to Las Vegas last week, there is the [IBM Pulse2012 Video Library] with highlights of the keynotes and other sessions during the week.

Enjoy!

technorati tags: IBM, Pulse2012



Tags:  pulse2012 ibm

Is this what HDS tells our mainframe clients?


Five years ago, I sprayed coffee all over my screen because of something I read in a blog post from fellow blogger Hu Yoshida of HDS. You can read what caused my reaction in my now infamous post [Hu Yoshida should know better]. Subsequently, over the years, I have disagreed with Hu on a variety of topics, as documented in my 2010 blog post [Hu Yoshida Does It Again].

(Apparently, I am not alone, as the act of spraying one's coffee onto one's computer screen while reading a blog post has been referred to as "Pulling a Tony" or "Doing a Tony" by other bloggers!)

Fortunately, my IBM colleague David Sacks doesn't drink coffee. Last month, David noticed that Hu had posted a graph in a recent blog entry titled [Additional Storage Performance Efficiencies for Mainframes], comparing the performance of HDS's Virtual Storage Platform (VSP) to IBM's DS8000.

[Image: HDS EAV comparison graph]

For those not familiar with disk performance graphs, flatter is better: sustaining low response times out to higher IOPS is always desired. This graph implies that the HDS disk system is astonishingly faster than IBM's DS8000 series disk system. Certainly, the HDS VSP qualifies as a member of the elite [Super High-End club] with impressive SPC benchmark numbers, and is generally recognized as a device that works in IBM mainframe environments. But this new comparison graph is just ridiculous!

(Note: While SPC benchmarks are useful for making purchase decisions, different disk systems respond differently to different workloads. As the former lead architect of DFSMS for z/OS, I am often brought in to consult on mainframe performance issues in complex situations. Several times, we have fixed performance problems for our mainframe clients by replacing their HDS systems with the IBM DS8000 series!)

Since Hu's blog entry contained very little information about the performance test used to generate the graph, David submitted a comment directly to Hu's blog asking a few simple questions to help IBM and Hu's readers determine whether the test was fair. Here is David's comment as submitted:

"Hello, Hu,
(Disclosure: I work for IBM. This comment is my own.)

I was quite surprised by the performance shown for the IBM DS8000 in the graph in your blog. Unfortunately, you provided very little detail about the benchmark. That makes it rather difficult (to say the least) to identify factors behind the results shown and to determine whether the comparison was a fair one.

Of the little information provided, an attribute that somewhat stands out is that the test appears to be limited to a single volume (at least, that's my interpretation of "LDEV: 1*3390-3"). IBM's internal tests for this kind of case show far better response time and I/Os per second than the graph you published.

Here are a few examples of details you could provide to help readers determine whether the benchmark was fair and whether the results have any relevance to their environment.

  1. What DS8000 model was the test run on? (the DS8000 is a family of systems with generations going back 8 years. The latest and fastest model is the DS8800.)
  2. What were the hardware and software configurations of the DS8000 and VSP systems, including the number and speed of performance-related components?
  3. What were the I/O workload characteristics (e.g., read:write ratio and block size(s))?
  4. What was the data capacity of each volume? (Allocated and used capacity.)
  5. What were the cache sizes and cache hit ratios for each system? (The average I/O response times under 1.5 milliseconds for each system imply the cache hit ratios were relatively high.)
  6. How many physical drives were volumes striped across in each system?"
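As an aside, David's fifth question rests on simple arithmetic: an average response time is a weighted blend of cache-hit and cache-miss service times, so a sub-1.5 ms average forces a high hit ratio. A minimal sketch, with illustrative service times of my own choosing rather than measured values for either system:

```python
def implied_hit_ratio(avg_ms, hit_ms=0.3, miss_ms=6.0):
    """Solve avg = h*hit + (1-h)*miss for the cache hit ratio h.

    hit_ms and miss_ms are illustrative assumptions, not measured
    values for the DS8000 or the VSP.
    """
    return (miss_ms - avg_ms) / (miss_ms - hit_ms)

print(f"hit ratio implied by a 1.5 ms average: {implied_hit_ratio(1.5):.0%}")
# ~79% -- both systems would be serving most I/Os from cache
```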

Unlike my blog here at IBM, blogs at HDS allow authors like Hu to reject or deny comments before they appear on the post. We were disappointed that HDS never posted David's comment, nor responded to it. That certainly raises questions about the quality of the comparison.

So, perhaps this is yet another case of [Hitachi Math], a phrase coined by fellow blogger Barry Burke from EMC back in 2007 in reference to outlandish HDS claims. My earliest mention of it was in my blog post [Not letting the Wookie Win].

By the way, since the test was about z/OS Extended Address Volumes (EAV), it is worth mentioning that IBM's DS8700 and DS8800 support 3390 volume capacities up to 1 TB each, while the HDS VSP is limited to only 223 GB per volume. Larger volume capacities help support ease-of-growth and help reduce the number of volumes storage administrators need to manage; that's just one example of how the DS8000 series continues to provide the best storage system support for z/OS environments.
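Those capacities fall straight out of 3390 geometry: 15 tracks per cylinder at 56,664 bytes per track. The 262,668-cylinder EAV limit quoted in Hu's post works out to the 223 GB figure, and the difference in management burden is easy to quantify (a sketch, using decimal GB):

```python
BYTES_PER_TRACK = 56_664                 # 3390 track capacity
TRACKS_PER_CYLINDER = 15
bytes_per_cyl = BYTES_PER_TRACK * TRACKS_PER_CYLINDER   # 849,960 bytes

def cylinders_to_gb(cylinders):
    return cylinders * bytes_per_cyl / 1e9

print(f"65,520 cylinders  = {cylinders_to_gb(65_520):.1f} GB")    # ~55.7 GB
print(f"262,668 cylinders = {cylinders_to_gb(262_668):.1f} GB")   # ~223.3 GB

# Carving 100 TB of capacity into volumes at each maximum size:
volumes = 100e12 / (cylinders_to_gb(262_668) * 1e9)
print(f"100 TB needs ~{volumes:.0f} volumes at 223 GB, versus ~100 at 1 TB")
```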

Personally, I am all for running both IBM and HDS boxes side-by-side and publishing the methodology, the workload characteristics, the configuration details, and the results. Sunshine is always the best disinfectant!

technorati tags: IBM, DS8000, DS8800, HDS, Hu Yoshida, USP, VSP, mainframe, EAV



Tags:  eav mainframe ds8800 ds8000 vsp hds usp ibm hu+yoshida

Cover-Up is Worse than the Original Crime


In my last blog post [Is this what HDS tells our mainframe clients?], I poked fun at Hu Yoshida's blog post that contained a graphic with questionable results. Suddenly, the blog post disappeared altogether. Poof! Gone!

Just so that I am not accused of taking a graph out of context, here is Hu's original post, in its entirety:

"Since my last post on Storage Performance Efficiency, Claus wrote on the use of HDP, Hitachi Dynamic Provisioning and HDT, Hitachi Dynamic Tiering for mainframes on Virtual Storage Platform (VSP). Naturally, this prompted me to think of the specific performance efficiency implications for mainframes.

HDP brings the performance benefits of automated wide striping and HDT automatically keeps the hot pages of data on the highest performance tier of storage for mainframes, just as it does for open systems. There are differences between open systems and mainframe implementation due to mainframe CKD and CCHHR formats for instance, the page size is optimized for mainframe storage formats and storage reclamation must be host initiate. For more information check out our website: http://www.hds.com/assets/pdf/how-to-apply-latest-advances-in-hitachi-mainframe-storage.pdf

There are also additional performance efficiencies specific for mainframes.

Mainframe HDP is the foundation for Extended Addressable Volumes, which increases the size of 3390 volumes from 65,520 cylinders to 262,668 cylinders. This, along with HyperPAV--which facilitates multiple accesses to a volume, addressing the problem of queuing on a very large volume with a single UCB--enhances throughput with many more concurrent I/O operations.

[graph]

The thin provisioning of HDP also increases the performance of mainframe functions that move, copy, or replicate these thin volumes like Concurrent Copy, FlashCopy V02, and HUR, since the actual volumes are smaller.

If you have mainframes, check out the capacity and performance efficiency of VSP with HDP and HDT.

For other posts on maximizing storage and capacity efficiencies, check these out: http://blogs.hds.com/capacity-efficiency.php"

At this point, you might be wondering: "If Hu Yoshida deleted his blog post, how did Tony get a copy of it? Did Tony save a copy of the HTML source before Hu deleted it?" No. In retrospect, I should have, in case lawyers got involved. It turns out that deleting a blog post does not clear the copies held in various RSS feed reader caches. I was able to dig out the previous version from the vast Google repository. (Many thanks to my friends at Google!)

The graph itself, hosted separately, has also been deleted, but it was taken from slide 10 of the HDS presentation [How to Apply the Latest Advances in Hitachi Mainframe Storage], so it was easy to recreate.

(Lesson to all bloggers: If you write a blog post and later decide to remove it for whatever legal, ethical, or moral reasons, it is better to edit the post to remove the offending content, and add a note that the post was edited, and why. Shrinking a 700-word article down to 'Sorry folks - I decided to remove this blog post because...' would do the trick. This edited version will then slowly propagate across all of the RSS feed reader caches, eliminating most traces of the original. Of course, the original may have been saved by any number of your readers, but at least the edited version can serve as the official or canonical one.)

Perhaps there was a reason why HDS did not want to make public the FUD its sales teams use in private meetings with IBM mainframe clients. Whatever it was, this appears to be another case where the cover-up is worse than the original crime!

technorati tags: HDS, Hu Yoshida, VSP, EAV



Tags:  hu+yoshida vsp eav hds