This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years with IBM Systems, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. He has held numerous senior technical roles during his 19-plus years at IBM. Most recently, Lloyd has been leading efforts across the Communication/CSI market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, he supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Well, another week has gone by, and I am now back from my grand "Digital IBMer" trip to Europe! Here's what the second week involved.
Prague, Czech Republic
The cold and rainy weather followed us from Berlin! We were able to go to the old castle, light a candle for a friend in the hospital at the St. Vitus Cathedral, and walk across the famous Charles Bridge to go see the Astronomical Clock.
We stayed in the "Blind Eye" hostel, which was an awesome place, with friendly and helpful staff.
Vienna, Austria
The weather was much nicer in Vienna, giving us a chance to see the palace and surrounding gardens. In front of the palace was an "Easter Market" where booths sold various arts and crafts, as well as delicious food and drink. I had a slab of ham, a pile of mustard, and a cup of [Glühwein], a hot drink made from mulled red wine.
I met several people at the hostel Ruthensteiner, from the UK, Argentina and Spain, and we all went out drinking at a Polish pub down the street.
The next morning we walked through the city center. We learned that this week leading up to Easter Sunday--known as Semana Santa in some countries--was also "Spring Break" for many students, which explained why we were starting to have a harder time finding hostels to stay at.
Salzburg, Austria
Salzburg means "City of Salt" and the salt mines in the area allowed the landlords to get rich. If you have seen the movie [The Sound of Music] then you already know how beautiful Salzburg is. The castle was incredible, and was used for military purposes until 1861, when it was opened to the public.
Inside the castle is an awesome museum for [Marionettes], which are puppets controlled from above by strings, still used in productions today.
Unable to find a youth hostel, we stayed in the lovely Alderhof Pension Hotel. It was quiet and well-situated near the main train station.
Munich, Germany
In Munich, we decided to take [Sandeman's "New Europe" free guided city walking tour]. It is free in that the tour guides work entirely for tips. Tours were available in English or Spanish. Ours was about three hours long, and we gladly tipped heavily for such an informative tour of everything from the Glockenspiel to the Residenz palace. One stop on the tour was the main "Beer Garden", where rows and rows of people were enjoying beer in the beautiful weather.
While in Munich, I was invited to see a sneak preview of the movie [Iron Sky], a campy, politically-incorrect, low-budget sci-fi comedy made in Europe with a mix of English-language dialogue and German-language dialogue with English subtitles. The year is 2018: a woman who looks a lot like Sarah Palin is now president of the United States, and Nazis who set up a moon-based space station back in 1945 are ready to attack. If you liked the movies "White Chicks" and "Battlefield Earth", then you might enjoy this one as well. You may need to know a bit about the history of the Third Reich, the operas of Wagner, and the movie [The Great Dictator] by Charlie Chaplin to make sense of some of the inside jokes.
Heidelberg, Germany
We visited Heidelberg on [Good Friday], and the place was a ghost town. The streets were nearly empty, and the tourist shops didn't open until 10am. Despite this, we managed to take the funicular train up the mountain to visit the castle, visit an interesting Pharmacy Museum, see the world's largest wine barrel, have a traditional German lunch, and take pictures of the old stone bridge.
We got back to Frankfurt and left Saturday morning to fly back to the United States.
We managed to visit 11 cities in six different countries over the course of 16 days. I was able to learn quite a lot about the use of mobile apps to book hotels and find the appropriate trains to get around each country, take advantage of social media to determine what to see and do, and the use of cloud to store my photos, videos and notes along the way.
Has it been a week already? I am here in Europe checking out various options for mobile, social media and cloud on my "Digital IBMer" tour. Here's where we have been so far...
We landed at the Frankfurt airport, which will serve as our starting and ending point. It is close to Mainz, where my IBM colleagues at the Executive Briefing Center for Germany are located. I looked throughout the airport for a SIM chip for my smartphone that would work in all European countries, but nobody had one for sale. We had lunch while we waited for the train to Brussels.
Brussels, Belgium
Our next stop was Brussels, capital of Belgium. The Belgians speak Flemish, which is like a Belgian version of Dutch, as well as French. I don't speak Flemish or Dutch, so I have been able to get by on French here. The Hotel Opera was near the central station, but we got off at Bruxelles-Midi, thinking that Midi meant the middle, or center, of the city. It turns out Midi is Flemish (er... make that French) for South instead, so we had a bit of walking to do!
Bruges, Belgium
Bruges is only an hour train ride from Brussels and is worth seeing. Our Eurail pass makes it easy just to go from city to city by train. Our particular pass allows us first-class travel through 23 countries for 15 contiguous days. We had lunch at the central square, and for dessert... Belgian Waffle-on-a-stick! Mine was covered in powdered sugar, and soon the rest of me was also.
Through tweets on Twitter, I was able to meet up with Stef, a local storage administrator and fan of my blog, and go out for beers. Stef was kind enough to lend me a pre-paid SIM chip for my phone that provided a data plan while I am in Belgium! Thank you, Stef!
Amsterdam, The Netherlands
Not surprisingly, Amsterdam is one of my favorite cities. It's like Las Vegas without the casinos. Our hotel, The Bulldog, is conveniently located in the center of town.
I met up with Joanne, a professional cellist (yes, she plays the Cello musical instrument for a living) who took us on a tour of the MuzikGebouw, which is where they hold concerts and events. Using the "Amsterdam City Guide" app from Travel Advisor on my smartphone, we followed one of their suggested self-guided walking tours. We also went to the Rijksmuseum, which is under construction, so only a subset of the art is on display.
Copenhagen, Denmark
From Amsterdam, we took a night train to Copenhagen. This is a 15-hour train ride with no dinner, but they give you breakfast. Men and women are in separate sleeping cars, and I was paired up with a businessman, Danny, from Taiwan, who was trying to sell clothing for firefighters.
Until now, I have managed with German, French and English, but I wasn't sure about Danish, so I brought this "European Phrase Book" that has 14 languages. We stayed at the DanHostel, conveniently located near "Tivoli Park".
Berlin, Germany
We have safely arrived in Berlin. Our train from Copenhagen to Hamburg went on a ferry boat to cross over the water! We are staying at the Plus Berlin Hostel, which has a nice indoor swimming pool and dry sauna.
Until now, we have had beautiful sunny weather, but today is cold and dreary. We started out taking photos of all the graffiti in East Berlin we could find, but it started raining, so we changed plans and went to the world famous Pergamonmuseum.
Well, that's my first week of adventure. Tomorrow, we leave for Prague in the Czech Republic!
"This week, IBM is launching a companywide effort to build the digital eminence of all IBMers. The goal is to arm you with the tools and knowledge to effectively use emerging technologies -- such as social, mobile, and cloud computing -- for strategic advantage."
This is how Rod Adkins, IBM Senior VP of Systems Technology Group, and my sixth-line manager, starts a memo declaring April "Digital IBMer awareness month". I am not sure if this is just for this April, or every April going forward. Included with it was a set of ten guidelines to improve cybersecurity.
In honor of this, I will be spending the next two weeks traveling through Europe. Instead of bringing a large suitcase and my laptop, I have decided instead to only take:
The clothes I am wearing on the plane
A heavy jacket with lots of pockets
A backpack with 15 pounds of clothes
A hipsack with my smartphone, digital camera, MP3 player and all the related adapters, chargers and cables
My smartphone uses a GSM chip, so I should be able to get a European SIM when I arrive. I have not booked any hotels, tours, or transportation. Instead, I will rely on social media and cloud computing to take care of things on a daily basis.
(Why only 15 pounds of clothing? I just had major surgery two weeks ago, and my doctor advised me not to lift more than 15 pounds for the next six weeks!)
I plan to have a series of blog posts documenting what I learn from this trip. For those who want to follow along, I will be tweeting from @az990tony. You do not need a Twitter account to read my tweets. You can read them directly from [http://twitter.com/#!/az990tony].
I can't remember the last time I have gone this long without the comforts of my laptop or desktop, so it will be interesting to see how it works out!
HDP brings the performance benefits of automated wide striping, and HDT automatically keeps the hot pages of data on the highest-performance tier of storage for mainframes, just as it does for open systems. There are differences between the open systems and mainframe implementations due to the mainframe CKD and CCHHR formats; for instance, the page size is optimized for mainframe storage formats, and storage reclamation must be host-initiated. For more information, check out our website: http://www.hds.com/assets/pdf/how-to-apply-latest-advances-in-hitachi-mainframe-storage.pdf
There are also additional performance efficiencies specific for mainframes.
Mainframe HDP is the foundation for Extended Addressable Volumes, which increases the size of 3390 volumes from 65,520 cylinders to 262,668 cylinders. This, along with HyperPAV--which facilitates multiple accesses to a volume, addressing the problem of queuing on a very large volume with a single UCB--enhances throughput with many more concurrent I/O operations.
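As a back-of-the-envelope check, those cylinder counts can be translated into volume capacities. This is just a sketch using the standard 3390 track geometry (56,664 bytes per track, 15 tracks per cylinder); vendors may quote slightly different usable capacities.

```python
# Rough 3390 volume capacities implied by the cylinder counts above.
# Standard 3390 geometry: 56,664 bytes per track, 15 tracks per cylinder.
BYTES_PER_TRACK = 56_664
TRACKS_PER_CYLINDER = 15
bytes_per_cylinder = BYTES_PER_TRACK * TRACKS_PER_CYLINDER  # 849,960 bytes

for cylinders in (65_520, 262_668):
    gb = cylinders * bytes_per_cylinder / 1e9
    print(f"{cylinders:,} cylinders ~= {gb:.0f} GB")
# 65,520 cylinders ~= 56 GB
# 262,668 cylinders ~= 223 GB
```

The 262,668-cylinder figure works out to about 223 GB per volume, which matches the per-volume limit quoted elsewhere for the VSP.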
The thin provisioning of HDP also increases the performance of mainframe functions that move, copy, or replicate these thin volumes, such as Concurrent Copy, FlashCopy V02, and HUR, since the actual volumes are smaller.
If you have mainframes, check out the capacity and performance efficiency of VSP with HDP and HDT.
At this point, you might be wondering: "If Hu Yoshida deleted his blog post, how did Tony get a copy of it? Did Tony save a copy of the HTML source before Hu deleted it?" No. I should have, in retrospect, in case lawyers got involved. It turns out that deleting a blog post does not clear the various copies in various RSS Feed Reader caches. I was able to dig out the previous version from the vast Google repository. (Many thanks to my friends at Google!!!).
(Lesson to all bloggers: If you write a blog post, and later decide to remove it for whatever legal, ethical, moral reasons, it is better to edit the post to remove offending content, and add a comment that the post was edited, and why. Shrinking a 700-word article down to 'Sorry Folks - I decided to remove this blog post because...' would do the trick. This new edited version will then slowly propagate across to all of the RSS Feed Reader caches, eliminating most traces to the original. Of course, the original may have been saved by any number of your readers, but at least if you have an edited version, it can serve as the official or canonical version.)
Perhaps there was a reason why HDS did not want to make public the FUD its sales team use in private meetings with IBM mainframe clients. Whatever it was, this appears to be another case where the cover-up is worse than the original crime!
Five years ago, I sprayed coffee all over my screen from something I read on a blog post from fellow blogger Hu Yoshida from HDS. You can read what caused my reaction in my now-infamous post [Hu Yoshida should know better]. Subsequently, over the years, I have disagreed with Hu on a variety of topics, as documented in my 2010 blog post [Hu Yoshida Does It Again].
(Apparently, I am not alone, as the process of spraying one's coffee onto one's computer screen while reading other blog posts has been referred to as "Pulling a Tony" or "Doing a Tony" by other bloggers!)
Fortunately, my IBM colleague David Sacks doesn't drink coffee. Last month, David noticed that Hu had posted a graph in a recent blog entry titled [Additional Storage Performance Efficiencies for Mainframes], comparing the performance of HDS's Virtual Storage Platform (VSP) to IBM's DS8000.
For those not familiar with disk performance graphs, flatter is better: lower response times and higher IOPS are always desired. This graph implies that the HDS disk system is astonishingly faster than IBM's DS8000 series disk system. Certainly, the HDS VSP qualifies as a member of the elite [Super High-End club] with impressive SPC benchmark numbers, and is generally recognized as a device that works in IBM mainframe environments. But this new comparison graph is just ridiculous!
(Note: While SPC benchmarks are useful for making purchase decisions, different disk systems respond differently to different workloads. As the former lead architect of DFSMS for z/OS, I am often brought in to consult on mainframe performance issues in complex situations. Several times, we have fixed performance problems for our mainframe clients by replacing their HDS systems with IBM DS8000 series!)
Since Hu's blog entry contained very little information about the performance test used to generate the graph, David submitted a comment directly to Hu's blog asking a few simple questions to help IBM and Hu's readers determine whether the test was fair. Here is David's comment as submitted:
(Disclosure: I work for IBM. This comment is my own.)
I was quite surprised by the performance shown for the IBM DS8000 in the graph in your blog. Unfortunately, you provided very little detail about the benchmark. That makes it rather difficult (to say the least) to identify factors behind the results shown and to determine whether the comparison was a fair one.
Of the little information provided, an attribute that somewhat stands out is that the test appears to be limited to a single volume; at least, that's my interpretation of "LDEV: 1*3390-3". IBM's internal tests for this kind of case show far better response times and I/Os per second than the graph you published.
Here are a few examples of details you could provide to help readers determine whether the benchmark was fair and whether the results have any relevance to their environment.
What DS8000 model was the test run on? (the DS8000 is a family of systems with generations going back 8 years. The latest and fastest model is the DS8800.)
What were the hardware and software configurations of the DS8000 and VSP systems, including the number and speed of performance-related components?
What were the I/O workload characteristics (e.g., read:write ratio and block size(s))?
What was the data capacity of each volume? (Allocated and used capacity.)
What were the cache sizes and cache hit ratios for each system? (The average I/O response times under 1.5 milliseconds for each system imply the cache hit ratios were relatively high.)
How many physical drives were volumes striped across in each system?
Unlike my blog here at IBM, HDS allows bloggers like Hu to reject or screen comments before they appear on a blog post. We were disappointed that HDS never posted David's comment nor responded to it. That certainly raises questions about the quality of the comparison.
So, perhaps this is yet another case of [Hitachi Math], a phrase coined by fellow blogger Barry Burke from EMC back in 2007 in reference to outlandish HDS claims. My earliest mention was in my blog post [Not letting the Wookie Win].
By the way, since the test was about z/OS Extended Address Volumes (EAV), it is worth mentioning that IBM's DS8700 and DS8800 support 3390 volume capacities up to 1 TB each, while the HDS VSP is limited to only 223 GB per volume. Larger volume capacities help support ease-of-growth and help reduce the number of volumes storage administrators need to manage; that's just one example of how the DS8000 series continues to provide the best storage system support for z/OS environments.
Personally, I am all for running both IBM and HDS boxes side-by-side and publishing the methodology, the workload characteristics, the configuration details, and the results. Sunshine is always the best disinfectant!
Did you miss IBM's Pulse 2012 conference? So did I. Last month, I told you all to [mark your calendars], but wasn't sure if I would be there myself or not.
I was invited to attend Pulse this year, but instead had to go to the hospital for surgery and spend the week recovering. I thought I made it clear in my last post that I would be spending [the week on my back, with a tube in my arm], but apparently, people missed that subtlety.
The tube was actually connected to the back of my left hand, and I was tempted to take pictures of the entire process, but decided not to, since my gown had no pockets to hold my camera. Perhaps it is better that it went undocumented. The less you see of the inner workings of a hospital, as a patient, the better. The whole thing was quite a blur.
Despite a few mishaps, I managed to survive the week. Many thanks to Hilda, Dina, Crystal, Marcie, Mike, Joe, Ryan, Sue, Debra, Donna, Modrechai, and the rest of the fine medical staff at St. Joseph's for their hospitality! And of course, many thanks to Mo, my parents and sisters for helping me through the recovery!
Fortunately, for those like me who were unable to go to Las Vegas last week, there is the [IBM Pulse2012 Video Library] with highlights of the keynotes and other sessions during the week.
This week is the IBM Pulse2012 conference in Las Vegas. I am not there, for medical reasons this time. While my colleagues will be spending this week sipping Margaritas and enjoying the music in between inspiring technical sessions, I will be flat on my back, getting all my nutrients from a tube connected to my arm, listening to the hospital equivalent of [Muzak].
"IBM Pulse 2012 ‘s opening keynote talked about the realities of cloud as a delivery model – without the ‘private-‘, or the ‘public-‘, or even the quotes or capitalization of “The Cloud.” It was IBM’s perspective on what IBM knows better than most, how to deliver enterprise IT services that map to strategic business goals."
"In contrast to talking about ‘data-center/cloud’ stuff and then later about ‘consumerization-of-IT’ stuff , IBM’s core message was how mobility was in many ways driving cloud evolution."
"...cloud-based delivery was ‘more than just virtualization’"
"...the US Dept of Labor stating that jobs related to technology are forecast to be among the fastest growing segment thru 2018."
Hopefully, this post will hold you over until I regain consciousness.
Most readers know that Tucson is home to one of the largest collections of world-renowned experts on IT storage. But what you may not know is that Tucson is also home to experts in optical sciences. This week, I was part of a delegation of IBMers invited on a tour of the Steward Observatory Mirror Lab [SOML].
SOML was built in 1990 underneath the football stadium at the University of Arizona. Why under the stadium? Their motivation was [Chicago Pile-1], the world's first nuclear reactor, built by Enrico Fermi under the football stadium at the University of Chicago.
At other mirror labs, mirrors start as a large, heavy, flat piece of glass that is then ground and polished to the correct parabolic curve. SOML created a new process that works a lot better, similar to making a [Pineapple Upside Down Cake]. For those not familiar with this cake, you arrange sliced pineapple rings on the bottom of the baking dish, then pour in the liquid cake batter, which fills in and around the pineapple slices, then bake.
The first step is creating a base of 1,690 hexagonal tubes made of aluminum silicate. These are like the pineapple rings in the cake. The tubes are bolted to a baking dish that is 8.4 meters wide. These tubes form the base of the [parabolic shape] that focuses starlight to a small focal point. The tubes are spaced about an inch apart. The aluminum silicate feels like clay.
Once the base is built, chunks of glass are placed on the surface. Rather than pouring on the cake mix of molten glass, these chunks will be melted in place. This isn't normal glass, but a special borosilicate glass, made by the [Ohara Corporation] in Japan, that does not expand or contract much during changes in temperature.
The oven is then lowered onto the baking dish. Once the temperature reaches 700 degrees, the entire system is then rotated at 7 RPM. This allows the glass to melt and take its parabolic shape through [centrifugal force]. The people who run the oven are called "oven pilots", and they monitor the entire process to make sure nothing goes wrong.
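The physics behind spin casting is the classic rotating-liquid result: a fluid spun at angular velocity ω settles into the paraboloid z = ω²r²/(2g), whose focal length is f = g/(2ω²). As a sketch (my own idealized numbers, not figures from the tour), plugging in 7 RPM shows why that rotation rate suits a fast 8.4-meter mirror:

```python
import math

# A liquid spun at angular velocity w settles into the paraboloid
# z = w^2 r^2 / (2 g), with focal length f = g / (2 w^2).
g = 9.81                        # gravitational acceleration, m/s^2
rpm = 7                         # oven rotation rate from the tour
w = rpm * 2 * math.pi / 60      # convert to rad/s

focal_length = g / (2 * w**2)              # ~9.1 m
sag_at_edge = w**2 * 4.2**2 / (2 * g)      # depth at r = 4.2 m (half of 8.4 m)

print(f"focal length ~ {focal_length:.1f} m")
print(f"sag at edge  ~ {sag_at_edge * 100:.0f} cm")
```

The roughly 48 cm of sag at the edge is consistent with the thickness difference between the edge and center of the mirror (36 inches versus 18 inches, about 46 cm).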
This particular mirror is one of the two that will go into the [Large Binocular Telescope]. The mirror will be 36 inches thick at the edges, and 18 inches in the middle. If the glass cools down too quickly, it may crack or form crystals, so instead the oven is kept in place and the temperature lowered slowly over the course of a few months. This is called annealing.
Once a mirror has annealed, 24 suction cups are glued to the top surface to pull the mirror out of the baking dish. It is then tipped on its side so that all the bolts can be removed and the hexagonal tubes washed out, leaving behind a honey-combed effect on the bottom of the mirror. This means the mirror is 80 percent air, making it strong and lightweight.
The next step is grinding the surface with diamonds. In most cases, the process of spinning creates the correct shape, so little grinding is required. However, for this mirror, destined for the Large Synoptic Survey Telescope [LSST], about five tons of glass will be ground out of the center. It will actually have two parabolic curves: the outer curve is shallow, and the inner curve is deep. This will allow the LSST to survey a wide area of space at a time.
Once the glass is ground to the right shape, it will be polished with cerium oxide, commonly known as Jeweler's Rouge. How smooth does it have to be? If this mirror were the size of the United States, there would be no bump higher than 2 inches tall!
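Out of curiosity, that analogy can be unpacked with a little arithmetic. Assuming a continental-US width of roughly 4,500 km (my own figure, not from the tour), scaling the 2-inch bump back down to an 8.4-meter mirror implies surface errors on the order of only about a hundred nanometers:

```python
# Unpacking the "mirror the size of the US" smoothness analogy.
mirror_diameter_m = 8.4
us_width_m = 4.5e6           # rough continental-US width (assumption)
bump_scaled_m = 2 * 0.0254   # the "2 inch" bump, in meters

scale = us_width_m / mirror_diameter_m        # ~536,000x magnification
actual_bump_nm = bump_scaled_m / scale * 1e9  # scale the bump back down
print(f"implied surface error: ~{actual_bump_nm:.0f} nm")
# implied surface error: ~95 nm
```

For comparison, a wavelength of visible light is around 400 to 700 nm, so the surface is held flat to a small fraction of a wavelength.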
Most mirrors are symmetrical, so the polishing can be done on a spinning platform, but this mirror is not. The Large Magellan Telescope will consist of seven mirrors: one in the middle that is symmetrical, surrounded by six others that continue the parabolic shape in each direction. This is one of the outer mirrors, which means each part of the polishing process will be controlled by computers to achieve exactly the curve required.
Here is a small scaled-down model of the Magellan Telescope. Each of the seven mirrors will be 8.4 meters wide. At this point, one person asked why all the mirrors were 8.4 meters wide. I joked that this was the size of the oven! It reminded me of [the story where a newlywed had to ask her grandmother why she cut the ends off the pot roast]. The actual reason is that the posts of the football stadium are 8.5 meters apart, so any mirror made inside the lab larger than that could not be removed easily for transportation.
The LMT will be installed on [Cerro Tololo] in Chile, where my father worked earlier in his career. Why Chile? Observatories need high altitude, dry climate and clear skies. That is why Arizona is home to many observatories, including Kitt Peak National Observatory and the Vatican Observatory on Mount Graham. Cerro Tololo in Chile is close to the equator and meets these requirements.
Once operational in 2020, it will gather 6 TB of images every evening. That got all of the IBMers on the tour very excited!
To verify the polishing is complete, the mirror is put on three red stands and measured with a laser. Once the measurements are complete, the surface will be coated with aluminum to provide the reflective surface. You can't just paint the surface with a roller! Instead, the aluminum is vaporized and allowed to land on the surface of the mirror evenly, in a layer that is only three molecules thick. There is more aluminum in a standard-size beer can than on the surface of one of these 8.4-meter mirrors!
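The beer-can claim is easy to sanity-check. This sketch uses assumed numbers: a coating of about 100 nm (much thicker than the three-molecule figure above, so it errs against the claim) and the standard density of aluminum:

```python
import math

# Mass of an aluminum coating on an 8.4-meter mirror (assumed numbers).
radius_m = 4.2                       # half of the 8.4 m diameter
area_m2 = math.pi * radius_m**2      # ~55 m^2 of mirror surface
coating_m = 100e-9                   # assume a generous ~100 nm coat
al_density_kg_m3 = 2700              # density of aluminum

mass_g = area_m2 * coating_m * al_density_kg_m3 * 1000
print(f"aluminum on mirror: ~{mass_g:.0f} g")
# aluminum on mirror: ~15 g
```

Even with the generous 100 nm assumption, the entire 55 m² surface carries only about 15 grams of aluminum, roughly the mass of a single beer can; the thinner coating described on the tour would use far less.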
So that was the tour. It took almost 2 hours. If you are ever in Tucson, consider contacting the SOML and arranging a tour for yourself. There is no other mirror lab like it!
The old adage applies: "You can't please everyone. Presidents can't. Prostitutes can't. Nobody can." I am reminded of that as I fielded a variety of interesting comments and emails about, of all things, my choice of the order of things in recent blog posts.
Certainly, there are times when the order of things matters greatly. In my now-infamous blog post [Sock Sock Shoe Shoe], I use a scene from a popular 1970's television show to explain why compression should be done before encryption.
In my case, I put things in the order that I felt made sense to me, but not everyone agrees. Here are three recent examples:
In my blog post [Two IBMers Earn Their Retirement], I congratulated two of my colleagues on their retirement. Since their retirement happened on the same day, I decided to mention Mark Doumas first, and Jim Rymarczyk second.
However, one of my readers, who I will assume is a member of the unofficial "Jim Rymarczyk fan club", felt that I should have listed Jim first, as Jim served IBM for 44 years, and Mark only 32 years.
Really? I realize that movie stars insist on having their name listed first on the poster, but neither of these guys would be confused with George Clooney!
So, to Jim and all his fans out there, I assure you I did not mean this as a slight in any way. I have updated the post to indicate that the ordering was strictly alphabetical by last name.
In my blog post [IBM Announcements for February 2012], I presented tape products first, and disk second. Normally, I cover them alphabetically, disk first, then tape. However, I was asked to promote tape this year in preparation for the upcoming 60th anniversary of tape, so I mentioned the tape announcements first, and the disk second.
The feedback from the XIV community was swift. Many felt that I [buried the lede] in not mentioning the XIV Gen3 SSD caching first.
(Note: For those not familiar with the phrase used in journalism, 'burying the lede' refers to the failure to mention the most interesting or attention grabbing elements of a story in the first paragraph. In American news journalism, it is spelled "lede" and elsewhere it is spelled "lead". Major US dictionaries apparently accept both spellings for this phrase.)
Technically, my lead paragraph stated clearly that: "This week we have announcements for both disk and tape, but since 2012 is the 60th Diamond Anniversary for tape, I will start with tape systems first."
So, while I don't claim to be a journalist by any means, I think the lead paragraph accurately reflected that I would talk about both disk and tape products in the rest of the blog post, and a reader who didn't care to learn more about tape could bypass those sections and go directly to the section on disk instead.
I have had my head handed to me on a platter so many times here at IBM that I am considering installing a zipper around my neck. My friends in XIV land insisted that I write a secondary post about XIV Gen3 SSD caching that had no mention of tape whatsoever. One suggestion was to compare and contrast XIV Gen3 SSD caching with EMC's announcement for VFCache. The result was my blog post [IBM XIV Gen3 SSD Caching versus EMC VFCache].
What could go wrong with an apples-to-oranges comparison of two different storage products, sprinkled with a small amount of FUD against a major competitor?
I had two complaints on this one. First, is the order of products in my side-by-side table of comparisons. I put EMC VFCache in the left column, and IBM XIV Gen3 SSD caching in the right. I meant nothing sinister by this. Alphabetically, EMC comes before IBM, and VFCache comes before XIV. Chronologically, EMC's announcement came out on Monday, and IBM's announcement came out the following day.
(Note: The term [sinister] comes from the Latin word sinistra, meaning "left hand". In the Middle Ages, it was believed that a person writing with their left hand was possessed by the Devil. Left-handed people were therefore considered to be evil. My poor mother was born left-handed and was forced as a child to write with her right hand to be accepted by society.)
Apparently, an unwritten convention within IBM is that comparison tables always have the newer product on the left column, followed by one or more older products to the right, or the IBM product on the left column, with one or more competitive alternatives to the right.
The second complaint came from a reader in the comments section: "... I think [what] you're doing is trying to ride EMC's release for your own marketing, did you really need to? XIV is an excellent array; adding SSD Cache to the Gen3 takes it further, Moshe would be fuming (which I think is a good thing), can you just stick to that and not ride someone else's wave?"
Both announcements relate to reducing the latency of read IOPS through the use of Solid State Drives. That both companies would announce these was no surprise to any employee at either company, as both IBM and EMC had been talking about their intent to do so since last year. IBM's announcement of XIV Gen3 SSD caching was certainly not in response to EMC's VFCache announcement, and I doubt EMC rushed out their VFCache announcement the day before as a pre-emptive strike against IBM's announcement of the XIV Gen3 SSD caching feature.
There you have it. I will gladly fix false or misleading information, but I am not going to re-arrange the order of things just to please some readers, only to have other readers complain that they liked it better in the original order. As always, feel free to comment on any of this in the section below.
I can't believe we got snow this week on Valentine's Day! It didn't last long on the ground here in Tucson, but there are still some snow-capped peaks in our mountains. For those of you "trapped" by snow, or too much work, here are two upcoming events you can attend from your desk and computer!
IBM Oracle Virtual University 2012
Please join us for the fourth annual IBM Oracle Virtual University that runs "live" for 24 hours, then continues 'on-demand' replay through the remainder of 2012.
From: Tuesday, February 21, 6:00 am US Eastern Time EST (6:00 pm China Time)
To: Wednesday, February 22, 6:00 am EST
This is a great educational event for IBM and Business Partner sales & technical teams who sell IBM Oracle solutions or have Oracle solutions installed in their account. It is for anyone who is new to or interested in the IBM Oracle Alliance as well as experienced sales & technical people who need all the latest on the IBM/Oracle co-opetition relationship for 2012 and beyond.
This VIRTUAL on-line event will cover key topics around the IBM Oracle Alliance. I am one of the speakers and will cover IBM System Storage offerings as they relate to Oracle software.
This is a chance for sellers to hear an update on what's new, unique and available to sell in 2012. The goal of this session is to help enable you to sell more IBM products and services with Oracle solutions in 2012! Learn where to go for help to better understand these solutions, close more deals and reach your targets.
The second event is a webcast. Even through economic challenges, storage requirements have continued to grow along with the information explosion.
Join us for this informative webcast and hear from Jon Toigo, CEO and Managing Principal of Toigo Partners, as he discusses six cutting-edge storage technologies that are ready for prime time and can help transform your data center.
Date: Tuesday, February 28
Time: 1:00 pm EST, 12:00 pm CST, 10:00 am PST
The featured speaker is fellow blogger Jon Toigo, CEO and Managing Principal, Toigo Partners, an outspoken technology consumer advocate and vendor watchdog whose articles, columns, and blog posts on [DrunkenData.com] are enjoyed by over a million readers per month.
Raj hails from Toronto, Canada and will be able to provide the Canadian perspective on all things storage. I had the pleasure of meeting Raj in person here in Tucson when he and dozens of his cohorts came down for a multi-customer briefing at the [IBM Executive Briefing Center] where I work.
It takes me 20-30 minutes to complete a crossword or Sudoku puzzle. I am in no hurry, and I find the process relaxing. But what if you were paid to complete a puzzle? In that case, finishing the puzzle sooner, in fewer minutes, means more money in your paycheck per hour worked! However, getting paid would mean that doing these puzzles may no longer be fun or relaxing.
The idea of converting a hobby into a revenue-generating activity is not new. Who wouldn't want to earn money doing something you were planning to do already? Television is full of commercials for credit cards where you can earn Double Miles or Cash Rewards just for spending money on things you were going to buy anyways.
But is "earn" the right word? The merchants pay a percentage fee every time a patron uses a credit card, and the bank is just providing a marketing incentive in the form of a portion of those fees back to the consumer, to encourage more usage of their card versus other forms of payment. Sort of like "profit sharing".
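To make the "profit sharing" concrete, here is the arithmetic with invented round numbers (real interchange fees and reward rates vary by card and merchant):

```python
# Hypothetical numbers for illustration only.
purchase = 100.00                 # a $100 purchase on the card
merchant_fee = purchase * 0.02    # assume the merchant pays a 2% processing fee
cash_back = purchase * 0.01       # assume the bank rebates 1% to the cardholder
bank_keeps = merchant_fee - cash_back
```

In this example the consumer "earns" $1.00, but it is really just $1.00 of the $2.00 the merchant already paid to process the transaction.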
(FTC Disclosure: I am a full-time employee and shareholder of the IBM Corporation. This blog post should not be considered an endorsement for anything. My opinions and writings are based on publicly available information and my own experiences doing freelance work prior to my employment at IBM. I have no hands-on experience with Amazon Mechanical Turk, neither as a worker nor requester, have not participated in TopCoder contests, nor have I used the Viggle app. I do not have any financial interest in Amazon, TopCoder, Viggle or any other third-party company mentioned on this blog post, nor has anyone paid me to mention their company names, brands or offerings.)
Here's how it works. You get the app on your phone, and register each television show as you watch it. You can watch the show live, or much later recorded on your TiVo. You watch the shows you were going to watch anyways, and just provide your demographics, all in the name of market research. You get two points per minute of watching, and after 7,500 points, you get a $5 gift card from retailers such as Burger King, Starbucks, Best Buy, Sephora, Fandango, and CVS drugstores. For the typical American, it would take about three weeks to watch that much television!
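A quick back-of-the-envelope check on that three-week figure, assuming roughly three hours of television a day (my assumption about the "typical American", not Viggle's published figure):

```python
points_per_minute = 2
points_for_gift_card = 7500
minutes_needed = points_for_gift_card / points_per_minute   # 3,750 minutes
hours_needed = minutes_needed / 60                          # 62.5 hours
weeks_needed = hours_needed / (3 * 7)                       # at 3 hours/day = 21 hours/week
```

62.5 hours at 21 hours per week comes out just shy of three weeks, which matches the claim.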
Of course, this is not the only way to earn money working from home. A reader asked me for my opinions of [Amazon Mechanical Turk]. While the other examples above are done for marketing purposes, Mechanical Turk can be used for a variety of other things. Up to now, the IT industry has regarded the Cloud as the delivery of computing as a service, with the infrastructure, hardware and software existing on networked servers, effectively invisible to the end user. This model is now being applied to people as well.
Basically, Mechanical Turk acts as a marketplace where employers post Human Intelligence Tasks (HITs) that workers can do. Most can be completed in minutes, and you are paid pennies to do so. Some examples might help illustrate what a HIT looks like:
Call a business and get the email address of the manager in charge.
Review a photograph and describe its style or content in three words or less.
Select among multiple choices to categorize a job listing or company position.
As a Mechanical Turk worker, you only work on the HITs you choose to work on, presumably those that interest you, and that you can do well and quickly. Workers can do this anytime, anywhere, such as at 2:00 am at home when you can't sleep, or while taking care of children. You can choose to work as much or as little as you like.
The employers--referred to as Mechanical Turk requesters--put money into their payroll accounts, load up their tasks, and hit publish. This gives them immediate access to a global, on-demand 24-by-7 workforce that can help complete thousands of HITs in minutes. These employers won't have to put an advertisement in the want ads and interview potential candidates, just to let them go later when the project is over.
Just like any other job, Mechanical Turk wages are reported to the IRS, and each person's work is evaluated for quality. In doing these tasks, you build up your "digital reputation" that will either prevent you or allow you to work on certain HITs. You can also take tests to reach Qualification levels to be eligible to work on HITs not available to everyone else.
Software engineers would have a hard time writing an Artificial Intelligence [AI] program to do these simple tasks, so being able to generate a HIT for something in the middle of a computer program might be the easiest way to get past a difficult part of an algorithm. Amusingly, Amazon describes this form of [crowdsourcing] as an artificial form of Artificial Intelligence!
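The "HIT in the middle of a program" idea can be sketched in a few lines. Everything below is my own illustration: `post_hit()` is a hypothetical stand-in for the real Mechanical Turk requester API, stubbed out here so the example is self-contained.

```python
def post_hit(question, reward_cents=5):
    """Hypothetical stand-in for publishing a HIT and waiting for a
    worker's answer; stubbed so this sketch runs without Amazon."""
    # In real life: publish the question, pay pennies, collect the answer.
    return "Engineering"

def categorize_listing(listing):
    """Categorize a job listing: easy cases by algorithm, hard cases by human."""
    # Try a trivial automated rule first...
    for category in ("Sales", "Marketing", "Engineering"):
        if category.lower() in listing.lower():
            return category
    # ...and fall back to a human worker when the algorithm is stumped.
    return post_hit("Categorize this job listing: " + listing)
```

The interesting part is the control flow: a human answer drops into the middle of an ordinary function call, which is exactly the "artificial Artificial Intelligence" Amazon describes.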
While this approach may work for small, easily defined tasks, what about work that requires a high amount of Human Intelligence, like storage software or hardware development?
When I was working for IBM as a software engineer in the 1980s and 1990s, it took us years to get a project done, using the traditional [Waterfall Model]. My job as a software architect was to estimate the thousands of lines of code (KLOC) a project would require, estimate the number of Person-Years (PY) it would take, and recommend the appropriate sized team. Back then, each engineer averaged only about 1,000 lines of software code per year, so KLOC and PY were often used interchangeably. Fellow IBM author Fred Brooks wrote an excellent book on the process called [The Mythical Man-Month].
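The sizing arithmetic was simple, even if the projects were not. A sketch with invented numbers (the 250 KLOC project and 50-person team are hypothetical):

```python
kloc = 250                  # hypothetical project: 250,000 lines of code
kloc_per_person_year = 1    # ~1,000 lines per engineer per year, as noted above
person_years = kloc / kloc_per_person_year   # 250 PY of effort
team_size = 50              # hypothetical team
calendar_years = person_years / team_size    # 5 calendar years
```

Of course, as The Mythical Man-Month famously argues, the division in that last line is the dangerous part: doubling the team rarely halves the schedule.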
The Waterfall model had the advantage that people only had to work a portion of the cycle on any given project. In between, there was plenty of downtime to attend training, improve your skills, or take vacation. As our director Lynn Yates would often complain, "if they are only writing two lines of code in the morning, and two in the afternoon, why do they need time to rest?"
The Waterfall model was not perfect, and had its share of critics. One downside was that the clients didn't see anything until General Availability (GA), with a few getting a glimpse a few months earlier during our Early Support Program (ESP). By the time clients could tell us it was not what they wanted or expected, it was too late to change until the next release.
To address this concern, 17 software engineers wrote the now famous [Agile Manifesto]. The authors felt that collaboration, between the developers and with the clients, is critical to success. Business people and developers must work together daily throughout the project. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation. The best architectures, requirements, and designs emerge from self-organizing teams. The result is an iterative approach that allows the client to see working prototypes early in the process, allowing last-minute changes to requirements to influence the final product.
Combining the Mechanical Turk concept with Agile programming methodology gives you what IBM calls an "Outcomes Model" approach. In the IBM research paper [Software Economies] (PDF, 5 pages), the authors argue that there are four fundamental principles needed for an "Outcomes Model" approach:
Autonomy. All of the actions necessary to bring jobs to completion should be driven by market forces; the process is never gated by an entity outside of the market.
Inclusiveness. Everyone who provides information or performs work that leads to improvements should share in the rewards.
Transparency. The system should be transparent with respect to both the flow of money in the market and the tasks performed by workers in the market.
Reliability. The system should be immune to manipulation, robust against attack (e.g., via insertion of untrusted code), and prevent "shallow" work which would have to be re-done later.
I was surprised to see that [the TopCoder Community is 390,593 strong], nearly the size of the entire IBM company. TopCoder is focused on computer programming and digital creation using the Outcomes Model approach. Rather than paying everyone for their work, however, the platform is designed around challenges and competitions, and the top players or contributors are rewarded with cash prizes.
As an innovative company, IBM constantly explores a variety of means and approaches to offer value to its clients and customers. These new approaches may have some distinct advantages not just for IBM and its shareholders, but also for its clients and the freelancers hired to work on these projects. The global marketplace is getting flatter, smaller and smarter. It will be interesting to see how this plays out. If the discussion above encourages you to hone your technical skills, perhaps that is motivation enough to get off the couch and stop watching so much television!
Have you ever noticed that sometimes two movies come out that seem eerily similar to each other, released by different studios within months or weeks of each other? My sister used to review film scripts for a living; she would read ten of them and have to pick her top three favorites, and she tells me that scripts for nearly identical concepts came in all the time. Here are a few of my favorite examples:
1993/1994: [Wyatt Earp] and [Tombstone] were Westerns recounting the famed gunfight at the O.K. Corral. Tombstone, Arizona is near Tucson, and the gunfight is recreated fairly often for tourists.
1998: [Armageddon] and [Deep Impact] were a pair of disaster movies dealing with a large rock heading to destroy all life on earth. I was in Mazatlan, Mexico to see the latter, dubbed in Spanish as "Impacto Profundo".
1998: [A Bug's Life] and [Antz] were computer-animated tales of the struggle of one individual ant in an ant colony.
2000: [Mission to Mars] and [Red Planet] were sci-fi pics exploring what a manned mission to our neighboring planet might entail.
This is different from copy-cat movies that are re-made or re-imagined many years later based on the previous success of an original. Ever since my 2010 blog post [VPLEX: EMC's Latest Wheel is Round] comparing EMC's copy-cat product that came out seven years after IBM's SAN Volume Controller (SVC), I've noticed EMC doesn't talk about VPLEX that much anymore.
This week, IBM announced [XIV Gen3 Solid-State Drive support] and our friends over at EMC announced [VFCache SSD-based PCIe cards]. Neither of these should be a surprise to anyone who follows the IT industry, as IBM had announced its XIV Gen3 as "SSD-Ready" last year specifically for this purpose, and EMC has been touting its "Project Lightning" since last May.
Fellow blogger Chuck Hollis from EMC has a blog post [VFCache means Very Fast Cache indeed] that provides additional detail. Chuck claims the VFCache is faster than popular [Fusion-IO PCIe cards] available for IBM servers. I haven't seen the performance spec sheets, but typically SSD is four to five times slower than the DRAM cache used in the XIV Gen3. The VFCache's SSD is probably similar in performance to the SSD supported in the IBM XIV Gen3, DS8000, DS5000, SVC, N series, and Storwize V7000 disk systems.
Nonetheless, I've been asked my opinions on the comparison between these two announcements, as they both deal with improving application performance through the use of Solid-State Drives as an added layer of read cache.
(FTC Disclosure: I am both a full-time employee and stockholder of the IBM Corporation. The U.S. Federal Trade Commission may consider this blog post as a paid celebrity endorsement of IBM servers and storage systems. This blog post is based on my interpretation and opinions of publicly-available information, as I have no hands-on access to any of these third-party PCIe cards. I have no financial interest in EMC, Fusion-IO, Texas Memory Systems, or any other third party vendor of PCIe cards designed to fit inside IBM servers, and I have not been paid by anyone to mention their name, brands or products on this blog post.)
The solutions differ in that with IBM XIV Gen3 the SSD is "storage-side" in the external storage device, while EMC VFCache is "server-side" as a PCI Express [PCIe] card. Aside from that, both implement SSD as an additional read cache layer in front of spinning disk to boost performance. Neither is an industry first, as IBM has offered server-side SSD since 2007, and IBM and EMC have offered storage-side SSD in many of their other external storage devices. The use of SSD as read cache has already been available in the IBM N series using [Performance Accelerator Module (PAM)] cards.
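Stripped of the hardware, both offerings boil down to the same idea: an extra tier of read cache in front of slower spinning disk, populated on misses and invalidated on writes. A minimal cache-aside sketch, purely my own illustration and not either vendor's implementation:

```python
from collections import OrderedDict

class ReadCache:
    """LRU read cache in front of a slow backing store."""
    def __init__(self, backing_store, capacity=3):
        self.backing = backing_store   # e.g., a dict standing in for spinning disk
        self.capacity = capacity
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)      # refresh LRU position
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]             # slow path: go to disk
        self.cache[block] = data               # populate cache on the way back
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least-recently-used
        return data

    def write(self, block, data):
        self.backing[block] = data
        self.cache.pop(block, None)            # invalidate any stale cached copy
```

The `write()` method illustrates why "aware of changes made to back-end disk" matters in the comparison below: a cache that never hears about writes hands out stale data.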
IBM has offered cooperative caching synergy between its servers and its storage arrays for some time now. The predecessors to today's POWER7-based systems were the iSeries i5 servers that used PCI-X IOP cards with cache to connect i5/OS applications to IBM's external disk and tape systems. To compete in this space, EMC created their own PCI-X cards to attach their own disk systems. In 2006, IBM did the right thing for our clients and fostered competition by entering into a [Landmark agreement] with EMC to [license the i5 interfaces]. Today, VIOS on IBM POWER systems allows a much broader choice of disk options for IBM i clients, including the IBM SVC, Storwize V7000 and XIV storage systems.
Can a little SSD really help performance? Yes! An IBM client running a [DB2 Universal Database] cluster across eight System x servers was able to replace an 800-drive EMC Symmetrix by putting eight SSD Fusion-IO cards in each server, for a total of 64 Solid-State Drives, saving money and improving performance. DB2 has the Data Partitioning Feature that allows multi-system DB2 configurations to use a Grid-like architecture similar to how XIV is designed. Most IBM System x and BladeCenter servers support internal SSD storage options, and many offer PCIe slots for third-party SSD cards. Sadly, you can't do this with a VFCache card: only one VFCache card is supported in each server, the data is unprotected, and direct-attached use is intended only for ephemeral data like transaction logs or other temporary files. With multiple Fusion-IO cards in an IBM server, you can configure a RAID rank across the SSD and use it for persistent storage like DB2 databases.
Here then is my side-by-side comparison, category by category, with EMC VFCache listed first and IBM XIV Gen3 SSD caching second:

Server hardware supported
- EMC VFCache: Selected x86-based models of Cisco UCS, Dell PowerEdge, HP ProLiant DL, and IBM xSeries and System x servers.
- IBM XIV Gen3 SSD caching: All of these, plus any other blade or rack-optimized server currently supported by XIV Gen3, including Oracle SPARC, HP Itanium, IBM POWER systems, and even IBM System z mainframes running Linux.

Operating system support
- EMC VFCache: Linux RHEL 5.6 and 5.7, VMware vSphere 4.1 and 5.0, and Windows 2008 x64 and R2.
- IBM XIV Gen3 SSD caching: All of these, plus all the other operating systems supported by XIV Gen3, including AIX, IBM i, Solaris, HP-UX, and Mac OS X.

Storage protocols supported
- IBM XIV Gen3 SSD caching: FCP and iSCSI.

Vendor-supplied driver required on the server
- EMC VFCache: Yes, the VFCache driver must be installed to use this feature.
- IBM XIV Gen3 SSD caching: No, IBM XIV Gen3 uses native OS-based multi-pathing drivers.

External disk storage systems required
- EMC VFCache: None; it appears the VFCache has no direct interaction with the back-end disk array, so in theory the benefits are the same whether you use this VFCache card in front of EMC storage or IBM storage.
- IBM XIV Gen3 SSD caching: XIV Gen3 is required, as the SSD slots are not available on older models of IBM XIV.

Shared disk support
- EMC VFCache: No, VFCache has to be disabled and removed for vMotion to take place.
- IBM XIV Gen3 SSD caching: Yes! XIV Gen3 SSD caching with shared disk supports VMware vMotion and Live Partition Mobility.

Support for multiple servers and active/active server clusters
- IBM XIV Gen3 SSD caching: An advantage of the XIV Gen3 SSD caching approach is that the cache can be dynamically allocated to the busiest data from any server or servers.

Aware of changes made to back-end disk
- EMC VFCache: No, it appears the VFCache has no direct interaction with the back-end disk array, so any changes made to the data on the box itself are not communicated back to the VFCache card to invalidate the cache contents.

Sequential access detection
- EMC VFCache: None identified. However, VFCache only caches blocks 64KB or smaller, so any sequential processing with larger blocks will bypass the VFCache.
- IBM XIV Gen3 SSD caching: Yes! XIV algorithms detect sequential access and avoid polluting the SSD with these blocks of data.

Number of SSDs supported
- EMC VFCache: One, which seems odd, as IBM supports multiple Fusion-IO cards for its servers. However, this is not really a single point of failure (SPOF): an application experiencing a VFCache failure merely drops down to external disk array speed, and no data is lost since it is only read cache.
- IBM XIV Gen3 SSD caching: 6 to 15 (one per XIV module) for high availability.

Pin data in SSD cache
- EMC VFCache: Yes; using split-card mode, you can designate a portion of the 300GB to serve as direct-attached storage (DAS). All data written to the DAS portion will be kept in SSD. However, since only one card is supported per server and the data is unprotected, this should only be used for ephemeral data like logs and temp files.
- IBM XIV Gen3 SSD caching: No, there is no option to designate an XIV Gen3 volume to be SSD-only. Consider a Fusion-IO PCIe card as a DAS alternative, or another IBM storage system for that requirement.

Pre-sales estimating tools
- IBM XIV Gen3 SSD caching: Yes! CDF and Disk Magic tools are available to help cost-justify the purchase of SSD based on workload performance analysis.
IBM has the advantage that it designs and manufactures both servers and storage, and can design optimal solutions for our clients in that regard.
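The sequential-access detection mentioned above deserves a closer look, since it is a classic cache-pollution defense: a large sequential scan would otherwise flush all the genuinely hot random-access blocks out of the SSD. Here is a simple run-length heuristic of my own devising; the real XIV algorithms are certainly more sophisticated:

```python
class SequentialDetector:
    """Skip caching once a stream looks sequential (my toy heuristic)."""
    def __init__(self, threshold=4):
        self.threshold = threshold   # consecutive blocks before declaring "sequential"
        self.last_block = None
        self.run = 0

    def should_cache(self, block):
        # Extend the run if this block directly follows the previous one.
        if self.last_block is not None and block == self.last_block + 1:
            self.run += 1
        else:
            self.run = 0             # any jump resets the run length
        self.last_block = block
        return self.run < self.threshold   # bypass cache on long sequential runs
```

Random reads keep populating the cache, while a long sequential scan is recognized after a few blocks and streams straight from disk.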
Well, it's Tuesday again, and you know what that means: IBM announcements! Typically, IBM System Storage has three to five major product launches per year. Making announcements every Tuesday would be too frequent, and having one big announcement every two or three years would be too far apart. Worldwide combined revenues for storage hardware and software grew double digits last year, comparing full-year 2011 to 2010, and I am sure that 2012 will be a good year for IBM as well! This week we have announcements for both disk and tape, but since 2012 is the 60th Diamond Anniversary of tape, I will start with tape systems first.
TS1140 support for JA/JJ tape cartridges
The TS1140 enterprise tape drive was announced at the [Storage Innovation Executive Summit] last May. It supported a new E07 format on three different new tape cartridges: "JC" cartridges were 4.0TB standard re-writeable tapes, "JY" were 4.0TB WORM tapes, and "JK" were 500GB economy tapes that were less expensive but offered faster random access.
Generally, IBM has adopted an N-2 read, N-1 write [backward compatibility] policy. This means that the TS1140 can read E05 and E06 formatted tapes on JB and JX media, and can write the E06 format on JB and JX media. However, there is a lot of older JA and JJ media out there, especially as part of TS7740 environments, so IBM now supports TS1140 drives reading J1A-formatted JA and JJ media. This is not just for TS7740 environments; any TS1140 in a stand-alone or tape library configuration will support this as well.
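The N-2 read, N-1 write rule is easy to express compactly. A sketch, treating the E05/E06/E07 formats as an ordered list of generations (my own shorthand, not IBM code):

```python
FORMATS = ["E05", "E06", "E07"]  # ordered tape formats; E07 is the TS1140's native format

def readable_by(drive_format):
    """N-2 read: a drive reads its own format plus the two prior generations."""
    i = FORMATS.index(drive_format)
    return FORMATS[max(0, i - 2): i + 1]

def writable_by(drive_format):
    """N-1 write: a drive writes its own format plus the one prior generation."""
    i = FORMATS.index(drive_format)
    return FORMATS[max(0, i - 1): i + 1]
```

This week's announcement then carves out an extra exception beyond the general rule: the TS1140 can now also read J1A-formatted JA and JJ media.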
TS7700 R2.1 enhancements
IBM is a leader in tape virtualization with or without physical tape as back-end media. There are two hardware models of the [IBM Virtualization Engine TS7700 family] for the IBM System z mainframe. These virtual libraries are referred to as "clusters" in IBM literature.
The TS7740 Virtual Tape Library supports putting virtual tape images on disk first, then moving less-active data to physical tape, which I covered in my blog post [IBM Announcements - July 2007].
A unique feature of the TS7700 series is support for a Grid configuration, which allows up to six different TS7700 clusters to be grouped into a single instance image. These clusters can be in local or remote locations, connected via WAN or LAN connections.
R2.1 is the latest software release of IBM's successful TS7700 series.
True Sync Mode Copy. Before R2.1, the TS7700 offered "immediate mode copy". An application would write to a virtual tape, and when it was done with the tape and performed an unmount, the TS7700 would then replicate the tape contents to a secondary cluster on the grid. With True Sync Mode, data contents are replicated per implicit or explicit SYNC points. This is another IBM first in the IT tape industry.
Remote Mount Fail-over. When you have two or more TS7700 clusters in a grid configuration, you can do remote mounts. We've added fail-over multi-pathing up to four paths, so that if a link to a remote cluster is down, it will try one of the others instead.
Parallel Copies and Pre-Migration. One of my 19 patents is for the pre-migration feature of the IBM 3494 Virtual Tape Server (VTS) that carries forward into the TS7700, and is also used in the SONAS and Information Archive products. However, when the grid architecture was introduced, the engineers decided not to allow pre-migration and copies to secondary clusters to occur concurrently. Now these two operations can be done in parallel.
Merge two grids into one grid. Now that we can support up to six clusters into a single grid, we have people with 2-cluster and 3-cluster grids looking to merge them into one. Of course, all the logical and physical volume serials (VOLSER) must be unique!
Accelerate off JA/JJ Media. There are a lot of older JA and JJ media still in TS7700 libraries. This feature allows customers to speed up the transition to newer physical tape media.
Copy Export to E06 format on JB media. This one is clever, and I have to say I would have never thought about it. Let's say you have a TS7740 with TS1140 drives, but you want to export some virtual tapes to physical media to be sent to someone who only has a TS7740 connected with older TS1130 drives. These older drives can't read new JC media nor make sense of the E07 format. This feature will let you export to older JB media in E06 format so that it will be fully readable at the new location on the TS1130 drives.
Copy Export Merge service offering. Thanks to mergers and acquisitions, it is sometimes necessary to split off a portion of data from a TS7700 grid. In the past, IBM supported sending this export to a completely empty TS7700 library, but this new service offering allows the export to be merged into an existing TS7700 that already contains data.
LTFS-SDE support for Mac OS X 10.7 Lion
How do people still not know about the Linear Tape File System [LTFS]? I mentioned it in my blog posts back in 2010 in [April], [September], and [November]. Last year, LTFS won the [NAB Show Pick Hits Award] and an [Emmy] for revolutionizing the use of digital tape in television broadcasting.
In layman's terms, the Single Drive Edition [LTFS-SDE] allows a tape cartridge to be treated like a USB memory stick. It is supported on the LTO5 tape drives for systems running various levels of Windows, Linux and Mac OS X. Prior to this announcement, IBM supported Leopard (10.5.6) and Snow Leopard (10.6), and now supports the Mac OS X 10.7 "Lion" release.
IBM first introduced Solid-State Drives (SSD) back in 2007 where it made sense the most, in [drive-for-drive replacements on blade servers in the IBM BladeCenter]. Blade servers typically only have a single drive, and SSD are both faster and use less energy on a drive-for-drive comparison, so this provided immediate benefit. Today, SSD are available on a variety of System x and POWER system servers.
In 2008, IBM rocked the world by being the first to reach [1 Million IOPS with Project Quicksilver]. This was an all-SSD configuration which many considered unrealistic (at the time), but it showed the potential for solid state drives.
When the [XIV Gen3 was Announced - July 2011], each module included a 1.8-inch "SSD-Ready" slot in the back. IBM made a Statement of Direction that it would someday offer SSD drives to put in these slots. Today's announcement is that IBM has finalized the qualification process, so now XIV Gen3 clients can have 400GB of usable non-volatile SSD read cache added to each module. This SSD can be added to existing XIV Gen3 boxes in the field, or it can be factory-installed in new shipments. If you have a 15-module XIV, that's 6TB of additional read cache! This SSD is entirely managed by the XIV Gen3, so you won't have to spend weeks reading manuals or specifying configuration parameters.
When you carve volumes on the XIV, you now have an option to enable or disable use of the SSD cache for each volume. Since XIV is being used in private and public cloud deployments, this offers the ability to offer premium performance at premium prices. The use of SSD is complementary to IBM XIV Quality of Service (QoS) performance levels, which are determined by host instead.
Well, that's the first major IBM System Storage launch of 2012. Let me know what you think in the comment section below.
Last week, on January 31, two of my colleagues retired from IBM. At IBM, retirements always happen on the last day of the month. Here are my memories of each, listed alphabetically by last name.
Mark Doumas retires after working 32 years with IBM. Mark was my manager for a few months in 2003. Back then, IBM was working on launching a variety of new products, including the IBM SAN File System (SFS), the IBM SAN Volume Controller (SVC), a new release of Tivoli Storage Manager (TSM), and TotalStorage Productivity Center (TPC), which was later renamed to IBM Tivoli Storage Productivity Center.
Mark was manager of the portfolio management team, and I was asked to manage the tape systems portfolio. I am no stranger to tape, as one of my 19 patents is for the pre-migration feature of the IBM 3494 Virtual Tape Server (VTS). The portfolio included LTO and Enterprise tape drives, tape libraries and virtual tape systems. My job was to help decide how much of IBM's money we should invest in each product area. This was less of a technical role, and more of a business-oriented project management position.
Portfolio management is actually part of a chain of project management roles. At the lowest level are team leads that manage individual features, referred to as line items of a release. Release managers are responsible for all the line items of a particular release. Product managers determine which line items will be shipped in which release, and often have to balance across three or more releases. Architects help determine which products in a portfolio should have certain features. Since I was chief architect for DFSMS and Productivity Center, stepping up to portfolio manager was naturally the next rung on the career ladder.
(Side note: If you were wondering why I was only a few months on the job, it was because I was offered an even better position as Technical Evangelist for SVC. See my 2007 blog post [The Art of Evangelism] for a humorous glimpse of the kind of trouble I got into with that title on my business card!)
While my stint in this role was brief, I am still considered an honorary member of the tape development team. Nearly every week I present an overview of our tape systems portfolio at the Tucson Executive Briefing Center, or on the road at conferences and marketing events.
This year, 2012, marks the 60th anniversary of IBM Tape, but I will save that for a future post!
Jim is an IBM Fellow for the IBM Systems and Technology Group. There are only 73 IBM Fellows currently working at IBM, and this is the highest honor IBM can bestow on an employee. He has been with IBM since 1968 and now retires after 44 years! Jim was tasked with predicting the future of IT and helping drive strategic direction for IBM. Cost pressures, requirements for growth, accelerating innovation and changing business needs all help influence this direction.
Many consider Jim one of the fathers of server virtualization. For those who think VMware invented the concept of running multiple operating systems on a single host machine, guess again! IBM developed the first server hypervisor in 1967, and introduced the industry's first [official VM product on August 2, 1972] for the mainframe.
When I joined IBM in 1986, my first job was to work on what was then called DFHSM software for the MVS operating system. Each software engineer had unlimited access to his or her own VM instance of a mainframe for development and testing. This was way better than what we had in college, having to share time on systems for only a few minutes or hours per day. Today, DFHSM is now called the DFSMShsm component of DFSMS, an element of the z/OS operating system.
At various conferences like [SHARE] and [WAVV], we celebrated VM's 25th anniversary in 1997, and its 30th anniversary in 2002. Today, it is called z/VM and IBM continues to invest in its future. Last October, IBM announced the [z/VM 6.2] release, which provides Live Guest Relocation (LGR) to seamlessly move VM guest images from one mainframe to another, similar to PowerVM's Live Partition Mobility or VMware's vMotion.
Lately, it seems employees at other companies jump from job to job, and from employer to employer, on average every 4.1 years. According to [National Longitudinal Surveys] conducted by the [U.S. Government's Bureau of Labor Statistics], the average baby boomer holds 11 jobs. In contrast, it is quite common to see IBMers work the majority of their career at IBM.
The next time you have a tasty beverage in your hand, raise your glass! To Mark and Jim, you have earned our respect, and you both have certainly earned your retirement!
Mark your calendars! If you work in IT and have an interest in storage, then there are two upcoming conferences you might be interested in attending!
Join a network of your peers at
[IBM Pulse2012] who are fundamentally and cost-effectively changing the economics of IT and speeding the delivery of innovative products and services. With four days of top-notch education, Pulse 2012 will help you react with agility in changing competitive landscapes, reduce vulnerability throughout the service lifecycle, and continuously improve the business impact of the technology.
I presented at the very first IBM Pulse back in May 2008, which was a combination event to cover Tivoli Storage, Maximo and Netcool. For a bit of nostalgia, read my 2008 blog posts:
The IBM Pulse conference has certainly evolved over the past few years! The agenda is not yet finalized, so I don't know if I will be there again this year.
The second event has a new name. [IBM Edge2012] is the premier storage event that brings together innovative IBM technologies, world class training, leading industry experts, and compelling client success stories and best practices. Edge2012 is dedicated to helping you design, build and implement efficient storage infrastructure solutions.
We started doing these back in the mid-90s, first as the "IBM Storage Symposium", then later the "IBM System Storage and Storage Networking Symposium". In 2007, I was there in Las Vegas presenting on a variety of topics. See my blog post [Storage Symposium 2007 recap].
In 2008, we had a version of the Storage Symposium down in Cuernavaca, Mexico. Not only did I present, but it was also a "book signing" event for my first book [Inside System Storage: Volume I]. Here were my blog posts: [Introduction], and [Conclusion]. We also had an event in the United States, as well as Montpelier, France, but since I already went to the one in Mexico, I let my colleagues go to these other ones instead.
In 2009, IBM experimented with combining two conferences under one roof in Chicago, IL. The IBM System Storage and Storage Networking Conference was combined with the IBM System x and BladeCenter Technical Conference. The idea was that server people would probably also be interested in storage, and storage admins might also be interested in x86-based servers. See my blog post
[Storage Symposium 2009 recap].
In 2010, System Storage and System x were once again combined, held in Washington DC, but the conferences were renamed to IBM System Storage Technical University and the IBM System x Technical University to give them a common look and feel. See my blog post [Storage University 2010 review].
In 2011, not satisfied that two data points was inconclusive, IBM continued the experiment, hosting both System Storage and System x conferences in Orlando, Florida. Here were my blog posts:
The results are now in. While running multiple conferences at the same time and in the same place can help reduce costs and consolidate administration, it has its drawbacks as well. In the case of System Storage and System x, we learned a few things:
Having System x and Storage in the same conference gave the appearance that the conference was not focused on either. At smaller companies, there might be people who manage both x86 servers and storage, but at larger companies, servers and storage are managed by separate people, often in separate departments with different travel budgets.
Nearly all of IBM's storage attaches to IBM System x servers. However, there are some clients that run AIX, IBM i or System z mainframes that might not have considered attending this conference, thinking that it was focused only on storage for System x servers.
Both conferences were considered technical education, and might not have appealed to upper IT executives and directors looking to make purchasing decisions from a business perspective, or to network with other decision makers.
The solution - IBM Edge. This conference is focused 100 percent on storage. There will be "Executive Edge" for decision makers to network with their peers, and "Technical Edge" for the storage admins to get the technical education they are looking for on IBM System Storage and Networking products and solutions. Please note that this conference was held in July or August in previous years, but will be held in June this year.
I am very excited about this new direction, and plan to be there in June 4-8 for this event!
My how time flies! The month is almost over, and people are asking if I plan to discuss my [New Year's Resolutions]. For those readers new to my blog, you can review the [resolutions I made in prior years]. I started blogging about my New Year's resolutions back in 2007 because IBM has a "black-out" period before it announces its year-end financial results, and I can't talk about IBM itself during that time.
"Tests done since 1933 show that people who talk about their intentions are less likely to make them happen.
Announcing your plans to others satisfies your self-identity just enough that you're less motivated to do the hard work needed.
In 1933, W. Mahler found that if a person announced the solution to a problem, and was acknowledged by others, it was now in the brain as a 'social reality', even if the solution hadn't actually been achieved."
The solution for this? Spread out your resolutions throughout the year. That is the advice from Jonah Lehrer in his Wall Street Journal article [Blame it on the Brain]. Here is an excerpt:
"Willpower, like a bicep, can only exert itself so long before it gives out; it's an extremely limited mental resource.
Given its limitations, New Year's resolutions are exactly the wrong way to change our behavior. It makes no sense to try to quit smoking and lose weight at the same time, or to clean the apartment and give up wine in the same month. Instead, we should respect the feebleness of self-control, and spread our resolutions out over the entire year. Human routines are stubborn things, which helps explain why 88% of all resolutions end in failure, according to a 2007 survey of over 3,000 people conducted by the British psychologist Richard Wiseman. Bad habits are hard to break—and they're impossible to break if we try to break them all at once."
Based on those two articles, I focused last year on a single resolution, to lose weight. It worked, I lost some weight, not as much as I wanted, and certainly not for the usual eat-less/exercise-more reasons.
First, I tried Tim Ferriss' [Four Hour Body] diet, and I had every intention to post about my progress throughout the year, but that didn't happen. The diet involved eating a restricted diet for six days--including beans, green vegetables, and lean meats--then having one cheat day where you eat a whole bunch of the bad foods you weren't allowed the prior week. The problem I had was that I got so used to eating the same way six days a week, that I forgot to cheat! On this diet, cheating is not optional, it is mandatory. Mo, on the other hand, had no problem with the cheat days, and even extended this to cheat afternoons and cheat evenings!
Mid-year, I saw the movie [Forks Over Knives]. I consulted with my doctor, and switched over to a plant-based, whole-foods diet with his approval. This is basically [dietary veganism]: no eggs, no dairy, no meat, no fish, no poultry. What's left? Lots of slow carbs like beans, spinach and quinoa, that I had already learned to cook and eat earlier on Tim Ferriss' diet, without the stress of remembering to cheat on the weekend.
The nice thing about this diet is that you can eat a lot more than usual, so you are never hungry. The bad news is that I developed a vitamin deficiency, and so my doctor asked me to switch to a relaxed mostly-vegetarian diet, with some eggs, some fish, some meat, and lots of vitamin supplements.
I thought I would start 2012 with a bunch of funny resolutions, like the ones in [Chuck & Beans], but I decided to keep things on a serious level. If you've made resolutions, do not tell anyone what they are, and try focusing on a single one at a time.
For all of you who had a bad year in 2011, I hope you have a much better one in 2012!
Some job titles can be vague. Have you ever given your title to a person at a cocktail party, only to have to explain exactly what you do? With a title like "IBM Master Inventor and Senior Managing Consultant", this happens to me all the time. To help explain what we do at the Tucson Executive Briefing Center (EBC), I use the following analogy.
People who want to see or interact with animals have several options. One option is to go visit the animals in their natural habitat. A more convenient option, however, is to visit the animals in a zoo. Zoos bring together a wide variety of animals, making it convenient to visit all of them at one time.
I did not fully appreciate the advantage of zoos until I took a safari in Kenya, Africa a few years ago. The word safari means "long journey" in Swahili. For two weeks, we drove around in a Land Rover on bumpy roads across the country. The best time to see the animals was early in the morning and late in the afternoon. We would drive around for hours looking for a type of animal we had not seen already. Most came to see the so-called "Big Five": Buffalo, Elephant, Leopard, Lion and Rhinoceros. After two weeks and hundreds of miles, we had seen the "Big Nine", which extends the Big Five to include the Cheetah, Zebra, Giraffe and Hippo, as well as a variety of other, lesser known animals.
When it comes to zoos, there are two kinds.
Self-guided -- offering the basic zoo experience where you are handed a map to visit the animals on your own.
Docent-guided -- offering a richer zoo experience where the docent provides added value, leading visitors around the zoo, answering questions, providing education, and comparing the differences between the animals.
Over the past 15 years, IBM has been consolidating storage development in Tucson, Arizona, moving storage-related projects from San Jose, CA, from Rochester, MN, and from Raleigh, NC. Tucson has the largest collection of IBM storage hardware and software development in North America. I am one of the three local "docents", guiding the clients that come to Tucson to visit the developers.
Here are some of the types of developers that our clients ask to interact with:
One of our colleagues was hired into IBM back in 1986 as a Research Scientist. When clients want to hear about IBM's future direction over the next 10-15 years, we bring in someone from IBM Research.
While tape systems may seem no more complicated than arranging books on a shelf, clients often want to talk to the hardware engineers behind IBM's tape libraries, especially the IBM System Storage TS3500 library and its High-Density frame, which can store multiple cartridges per slot in a spring-loaded manner.
I have a Bachelor's degree in Computer Engineering and Master's degree in Electrical Engineering, so I am able to speak both sides of the hardware/software divide. Software engineers here in Tucson develop the microcode that runs on disk and tape hardware, the various GUI, CLI and SMI-S API interfaces, as well as Tivoli Storage software, especially Tivoli Storage Manager (TSM) and Tivoli Storage Productivity Center.
IBM Tucson has a huge test lab, and our testers are very familiar with all of the subtle nuances of interoperability between servers, HBAs, switches and storage devices. We have system and function testers for the individual products, ISV testers to validate software compatibility, performance testers, and environment testers to verify the storage devices can handle extremes in temperature, humidity, vibration and noise.
IBM has architects for each product line to help decide which features and functions are developed for each product release. While many software engineers have expertise narrowly focused on an individual component, the system architects need to have a broad awareness of the entire environment. Earlier in my career, I was the chief architect for DFSMS, the storage management element of the z/OS mainframe operating system, and chief architect for what we now call Tivoli Storage Productivity Center.
Product and Portfolio Managers
Product and Portfolio managers are helpful to explain to clients why IBM invested more in some products than others. I had served as the Portfolio Manager for IBM tape systems. When clients want to talk about the business side of our products, such as pricing, licensing and leasing issues, we bring the product and portfolio managers in.
For some clients, high level executives want to speak to their counterparts at IBM, vice president to vice president, executive to executive. Our local IBM executives often help kick off the briefing in the morning, or provide the executive summary and discuss next steps at the end of the day. Golfing, dinners and drinks, of course, are always a popular scheduling option.
On behalf of the rest of the Tucson EBC, I would like to thank all the developers who have helped us last year with client briefings. There are too many to mention, and most are too humble to let me put their names in this blog. Team, your assistance is very appreciated!
Many IBMers consider Tucson to be the headquarters for storage, and I have heard IBM executives refer to Tucson as the center of the universe for storage products. However, IBM is a global company. Just as zoos do not pretend to be complete collections of animals, IBM storage development is not entirely contained in Tucson. IBM Research for storage is also done in Almaden CA, Yorktown Heights NY, and Haifa, Israel. Hardware development is also done in Japan, Europe and Israel. Tivoli Storage has locations in Beaverton, Oregon, and Austin, Texas, to name a few. IBM is a big company, so if I left your favorite location off the list, let me know in the comments below.
Some clients, sales reps and business partners have complained that Tucson is not the most convenient location to get to. I get that. One rep asked why we don't have briefing centers somewhere more accessible, such as Chicago or Atlanta, both of which offer major airline hubs. As much as I personally enjoy cities like Chicago or Atlanta, people don't visit zoos just to see the docents, they come to see the animals. Having docents located in Chicago or Atlanta, standing sadly in front of empty cages with no animals to interact with, makes no sense at all.
With over 350 days of sunshine per year, Tucson is actually a well-kept secret. Clients who have never been to Tucson discover the wonders of the Sonoran desert. Coyotes chase roadrunners across our parking lot. Several clients who have come to visit us have ended up buying retirement homes here. If you haven't been to Tucson, or it has been a while since your last trip, I encourage you to [schedule a briefing]. The weather right now is ideal!
This week I was aboard the Queen Mary in Long Beach, California! This was a business event organized by [Key Info Systems], a valued IBM Business Partner. Key Info resells IBM servers, storage and switches.
The Queen Mary retired in 1967, and has been converted into a hotel and events venue. The locals just parked their car and walked on board, but I got to stay Tuesday through Thursday in one of the cabins. It was long and narrow, with round windows! There were four dials for the bathtub: Cold Salt, Hot Fresh, Cold Fresh, and Hot Salt.
Stepping on the boat was like walking back in time through history! If you decide to go see it, check out the [Art Deco bar] at the front of the Promenade deck. The ship is still in the water, but is permanently docked. It is sectioned off to prevent the ocean waves from affecting it, so we did not experience the nauseating rocking motion normally associated with cruise ships.
(It is with a bit of irony that we are on the Queen Mary just days after the tragedy of the [Costa Concordia], the largest Italian cruise ship, which ran aground near Isola del Giglio. The captain will have to explain how he [fell into a lifeboat] before he had a chance to wait for everyone else to get safely off the shipwreck. He was certainly no [Captain Sully]! I am thankful that most of the 4,200 people survived the incident.)
Lief Morin, Founder and Chief Executive for Key Info Systems, kicked off the meeting with highlights of 2011 successes. I have known Lief for years, as Key Info comes to the Tucson EBC on a frequent basis. This event was designed to give his sellers an update of what is the latest for each product line, and what to look forward to in the next 12-18 months.
The next speaker was from Vision Solutions, which provides High Availability solutions for IBM i on Power Systems. In 2010, their company nearly doubled in size with the acquisition of Double-Take, which provides data replication for x86 servers running Windows, Linux, VMware, Hyper-V and other hypervisors. The capabilities of Double-Take sounded similar to what IBM offers with [Tivoli Storage Manager FastBack] and [Tivoli Storage Manager for Virtual Environments].
Dinner at Sir Winston's
Rather than take the "Ghosts and Legends" tour, I opted for dinner at the Queen Mary's signature restaurant, Sir Winston's. This is a fancy place, so dress accordingly. If you want the Raspberry soufflé, order it early as it takes 30 minutes to prepare!
[Storwize V7000], including the new Storwize V7000 Unified configuration
Storage is an important part of the Key Info Systems revenue stream, so I was glad to have lots of questions and interactions from the audience.
Murder Mystery Dinner
The acting troupe from [Dinner Detective] put on quite the show for us! With all that is going on in the world, it is good to laugh out loud every now and then.
In other murder mystery dinners I have participated in, each person is assigned a "character" and given a script of what to say and when to say it. This was different, as we got to pick our own characters. I chose "Doctor Watson", from the Sherlock Holmes series. Several attendees thought it was a double meaning with [IBM Watson], the computer that figured out the clues on the Jeopardy! television game show, and has since been [put to work at Wellpoint] to help out the Healthcare industry.
After the "murder" happened, two actors portraying policemen selected members of the audience to answer questions. We didn't get a script of what to say, so everyone had to "ad lib". I was singled out as a suspect, and had fun playing along in character. One of the attendees afterwards said he was impressed that I was able to fabricate such amusing and elaborate responses to their personal and embarrassing questions. As a public speaker for IBM, I have had a lot of practice thinking quickly on my feet.
Fibre Channel and Ethernet Switches
The next two speakers gave us an update on Fibre Channel and Ethernet switches, and their thoughts on the inevitability of Fibre Channel over Ethernet (FCoE). One of the exciting new developments is the [Brocade Network Subscription] which creates a flexible pay-per-use Ethernet port rental model for customers. This is especially timely given the Financial Accounting Standards Board proposed [FASB Change 13] that affects operating leases in the balance sheet.
With the Brocade Network Subscription, you pay monthly for the ports you are using. Need more ports? Brocade will install the added gear. Using fewer? Brocade will take the equipment back. There is no term endpoint or residual value like traditional leasing, so when you are done using the equipment, you can give it back at any time. This is ideal for companies that may need a lot of Ethernet ports for the next 2-3 years, but then plan to taper down, and don't want to get stuck with a long-term commitment or capital depreciation.
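The appeal of pay-per-use is easiest to see with a quick calculation. This sketch uses entirely hypothetical numbers (the port counts and price per port are made up for illustration; actual Brocade Network Subscription pricing is negotiated per contract):

```python
# Illustrative sketch of the pay-per-use idea. All numbers here are
# hypothetical examples, NOT actual Brocade Network Subscription pricing.

def subscription_cost(ports_per_month, price_per_port):
    """Total cost when you pay only for the ports in service each month."""
    return sum(ports * price_per_port for ports in ports_per_month)

# A project that ramps up, peaks, then tapers off over two years --
# exactly the scenario where a fixed-term lease would leave you
# over-committed during the taper:
usage = [48] * 6 + [96] * 12 + [24] * 6   # ports in service, month by month
print(subscription_cost(usage, price_per_port=20))
```

A fixed lease sized for the 96-port peak would charge for the full port count every month; the subscription model only charges for the ports actually in service.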
The last speaker was from VMware. IBM is the #1 reseller of VMware, and VMware commands an impressive 81 percent market share in the x86 virtualization space. The speaker presented VMware's strategy going forward, which aligns well with IBM's own strategy: to help companies Cloud-enable their existing IT infrastructures, in preparation for eventual moves to Hybrid or Public cloud deployments.
Special thanks to Lief Morin for sponsoring this event, Raquel Hernandez from IBM for coordinating my travel, and Pete, Christina and Kendrell from Key Info Systems for organizing the activities!
I hope everyone had a nice Winter break. For my birthday last month, my good friends at [StarTech.com] sent me a nice [double-headed USB combo cable] that has both Micro-USB and Mini-USB connectors. I am always looking to reduce the number of cables I take with me on trips, and this one is perfect, as I have a Samsung 4G smart phone that uses the Micro-USB connector, and a Canon PowerShot digital camera that uses the Mini-USB connector.
(FTC Disclosure: The U.S. Federal Trade Commission may consider this a "celebrity endorsement" for StarTech's product. I have used the cable and it works as expected. My review is based on my own experience using the cable, and information publicly available. IBM and StarTech are independent companies. Aside from giving me this nice cable at no cost, I have not received any payment from StarTech or any other third party to mention them or their product on this blog, I am not affiliated with StarTech in any way, nor do I have any financial interest in their company.)
When the [Universal Serial Bus] standard first came out in the mid-1990s, my colleagues and I were excited that it would finally put an end to the proprietary plugs and cables that each manufacturer kept wasting time re-inventing. For the most part, USB has delivered on that promise, and the same cable can be used for both data transfer and power charging.
Today, there are many alternatives to using a cable for data transfer, such as Wi-Fi and Bluetooth, but people are finding that their smart phones and other devices run out of juice way too often. At various conferences, I have seen several people panic looking for an electrical outlet to charge their device, and a few brazen enough to ask other attendees, "Can I plug my phone into your laptop?"
(Caution: Be careful allowing strangers to plug their device into your USB port, as this can allow data transfer in addition to power charging, potentially spreading viruses or other malware. On my Lenovo Thinkpad T410, one of the USB ports is colored yellow and is always powered on, even when my laptop is in suspend or hibernation mode. Charging from that port while the laptop is suspended would be a safer way to share power without concern for data transfer in either direction.)
Recently, I have flown on airplanes where each seat had a USB charging port, ideal if you want to listen to music or watch a video on your device. I have also driven a rental car that had USB charging ports in addition to the traditional cigarette lighter option, especially useful if you need to make an emergency phone call at the side of the road, or if you are using the GPS navigation feature to find your way. These are both good steps in the right direction!
Carrying one cable instead of two might not seem like much of a big deal, but if you think about it, complexity in the IT industry is all about the number of cables admins have to deal with. The push from 1GbE to 10GbE can help reduce the number of cables. Converged Enhanced Ethernet (CEE) takes it one step further, allowing NFS, CIFS, iSCSI and FCoE to all flow over a single cable. This can greatly reduce complexity in your IT environment.
If you are interested in reducing the complexity in your IT environment, contact your local IBM Business Partner or sales representative.
This is my final post on my coverage of the 30th annual [Data Center Conference]. IBM was a Platinum sponsor, and there were over 2,600 attendees, of which 27 percent were IT Directors or higher. Two thirds of the companies have 5000 employees or more. Here is a recap of the last few sessions I attended.
Best Practices for Data Center consolidation
As if the conference co-chairs aren't already super-busy, here they are presenting one of the breakout sessions. In the 1990s, consolidation was done purely to reduce total cost of ownership (TCO). Today, there are a variety of other reasons, including issues with power and cooling, service level agreements, and security.
Of the attendees polled, 25 percent plan to have more data centers in three years, and 47 percent plan to consolidate to fewer. The benefits of consolidation include economies of scale, staff reduction, reduced hardware facilities costs, and application retirement. Challenges include dealing with politics, building new facilities to replace the old ones, and bandwidth. Here were some of the primary reasons why data center consolidation projects fail:
Human Resources (HR) issues
Resources not freed up as expected
Lack of Project Management skills
No rationalization at consolidated site
Interactive Polling Results
The last keynote session was Thursday morning, where the conference co-chairs presented the highlights of the interactive polling done during the week at this conference.
The first topic was social media. There was a lot of Twitter activity with hashtag #GartnerDC that I followed throughout the week. Most of the tweets seem to be from people who were not actually at the conference.
Some 45 percent of the attendees have implemented social media initiatives at their companies. What tooling are they using to accomplish this? There are some provided by the major ITSM vendors, tools specific for corporate social media such as Yammer, collaboration tools like Microsoft SharePoint and IBM's Lotus Connections, and public sites like Facebook and Twitter. Here were the poll results:
The next topic was focused on Mobile devices and Cloud Computing. For example, do companies store data in public cloud, or plan to in the future, for mobile devices?
One third of the attendees allow employees to bring their own tablet to work with full IT support. Only 18 percent allow employees to bring their own PC or laptop. Over 40 percent felt that their IT department was not yet ready to support smartphones.
What are the main drivers to adopt private cloud? Some are deploying private clouds as a way to defend their IT jobs from going to the public cloud. Here were the poll results:
What problems are companies trying to solve with cloud computing? Here were the poll results:
A majority of attendees that use VMware are exploring alternatives such as Linux KVM (for example, Red Hat Enterprise Virtualization, or RHEV) or Microsoft Hyper-V. What storage protocol are attendees using for their server virtualization? Here were the poll results:
The next topic was the process for IT service management. The top three were ITIL, CMMI and DevOps, with the majority using ITIL or ITIL in combination with something else. These are needed for release management, change management, performance management, capacity management and incident management. How collaborative is the relationship between IT operations and application development? Here were the poll results:
How well does IT operations contribute to business innovation? This year 38 percent were satisfied, and 33 percent unsatisfied. This was a big improvement over last year, that found 19 percent satisfied, 64 percent unsatisfied.
Building a Private Storage Cloud: Is It a Science Experiment?
While everyone understands the benefits of private and public cloud computing, there seems to be hesitation about hosted cloud storage. Some people have already adopted some form of cloud storage, and others plan to within 12 months. Here were the poll results:
The top three reasons for considering public cloud storage were adopting a lower-cost storage tier, benefiting from off-site storage, and staff constraints. The top concerns were security and performance.
The IT department will need to start thinking like a cloud provider, and perhaps adopt a hybrid cloud approach. What IT equipment can be re-used? What will the new IT operations look like in a Cloud environment? What were the primary use cases for cloud storage? Here were the poll results:
In addition to the major cloud providers (IBM, Amazon, etc.) there are a variety of new cloud storage startups to address these business needs.
So that wraps up my coverage of this conference. In addition to attending great keynote and breakout sessions, I was able to have great one-on-one discussions with clients at the Solution Showcase booth, during breaks and at meals. IBM's focus on Big Data, Workload-optimized Systems, and Cloud seems to resonate well with the analysts and attendees. I want to give special thanks to Lynda, Dana, Peggy, Hugo, David, Rick, Cris, Richard, Denise, Chloe, and all my colleagues, friends and family from Arizona for their support!
Continuing my coverage of the 30th annual [Data Center Conference], here is a recap of Wednesday breakout sessions.
Aging Data: The Challenges of Long-Term Data Retention
The analyst defined "aging data" to be any data that is older than 90 days. A quick poll of the audience showed what type of data was the biggest challenge:
In addition to aging data, the analyst used the term "vintage" to refer to aging data that you might actually need in the future, and "digital waste" being data you have no use for. She also defined "orphaned" data as data that has been archived but not actively owned or managed by anyone.
You need policies for retention, deletion, legal hold, and access. Most people forget to include access policies. How are people dealing with data and retention policies? Here were the poll results:
The analyst predicts that half of all applications running today will be retired by 2020. Tools like "IBM InfoSphere Optim" can help with application retirement by preserving both the data and metadata needed to make sense of the information after the application is no longer available. App retirement has a strong ROI.
Another problem is that unstructured data keeps growing, but nobody is given the responsibility of "archivist" for this data, so it goes un-managed and becomes a "dumping ground". Long-term retention requires hardware, software and process working together. The reason that purpose-built archive hardware (such as IBM's Information Archive or EMC's Centera) fell short was that companies failed to pair it with the appropriate software and process to complete the solution.
Cloud computing will help. The analyst estimates that 40 percent of new email deployments will be done in the cloud, such as IBM LotusLive, Google Apps, and Microsoft Office 365. This offloads the archive requirement to the public cloud provider.
One case study is the University of Minnesota Supercomputing Institute, which has three tiers for its storage: 136TB of fast storage for scratch space, 600TB of slower disk for project space, and 640TB of tape for long-term retention.
What are people using today to hold their long-term retention data? Here were the poll results:
Bottom line: retention of aging data is a business problem, a technology problem, an economic problem and a 100-year problem.
A Case Study for Deploying a Unified 10G Ethernet Network
Brian Johnson from Intel presented the latest developments on 10Gb Ethernet. Case studies from Yahoo and NASA, both members of the [Open Data Center Alliance], found that upgrading from 1Gb to 10Gb Ethernet was more than just an improvement in speed. Other benefits include:
45 percent reduction in energy costs for Ethernet switching gear
80 percent fewer cables
15 percent lower costs
doubled bandwidth per server
Ruiping Sun, from Yahoo, found that 10Gb FCoE achieved 920 MB/sec, which was 15 percent faster than the 8Gb FCP they were using before.
IBM, Dell and other Intel-based servers support Single Root I/O Virtualization, or SR-IOV for short. NASA found that cloud-based HPC is feasible with SR-IOV. Using IBM General Parallel File System (GPFS) and 10Gb Ethernet, they were able to replace a previous environment based on 20 Gbps DDR InfiniBand.
While some companies are still arguing over whether to implement a private cloud, an archive retention policy, or 10Gb Ethernet, other companies have shown great success moving forward!
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of the Wednesday morning sessions.
A Data Center Perspective on MegaVendors
The morning started with a keynote session. The analyst felt that the most strategic or disruptive companies of the past few decades were IBM, HP, Cisco, SAP, Oracle, Apple and Google. Of these, he focused on the first three, which he termed the "Megavendors", presented in alphabetical order.
Cisco enjoys high margins and a loyal customer base with Ethernet switch gear. Their new strategy to sell UP and ACROSS the stack moves them into lower-margin business like servers. Their strong agenda with NetApp is not in sync with their partnership with EMC. They recently had senior management turnover.
HP enjoys a large customer base and is recognized for good design and manufacturing capabilities. Their challenges are mostly organizational, distracted by changes at the top and an untested and ever-changing vision, shifting gears and messages too often. Concerns over the Itanium have not helped them lately.
IBM defies simple description. One can easily recognize Cisco as an "Ethernet Switch" company, HP as a "Printer Company", Oracle as a "Database Company", but you can't say that IBM is an "XYZ" company, as it has re-invented itself successfully over its past 100 years, with a strong focus on client relationships. IBM enjoys high margins, a sustainable cost structure, huge resources, a proficient sales team, and is recognized for its innovation with a strong IBM Research division. Their "Smarter Planet" vision has been effective in supporting their individual brands and unlocking new opportunities. IBM's focus on growth markets takes advantage of their global reach.
His final advice was to look for "good enough" solutions that are "built for change" rather than "built to last".
Chris works in the Data Center Management and Optimization Services team. IBM owns and/or manages over 425 data centers, representing over 8 million square feet of floorspace. This includes 13 million desktops, 325,000 x86 and UNIX server images, and 1,235 mainframes. IBM is able to pool resources and segment the complexity for flexible resource balancing.
Chris gave an example of a company that selected a Cloud Compute service provider on the East coast and a Cloud Storage provider on the West coast, both for their low rates, but was disappointed by the latency between the two.
Chris asked "How did 5 percent utilization on x86 servers ever become acceptable?" When IBM is brought in to manage a data center, it takes a "No Server Left Behind" approach to reduce risk and allow for a strong focus on end-user transition. Each server is evaluated for its current utilization:
0 percent: Amazingly, many servers are unused. These are recycled properly.
1 to 19 percent: Workload is virtualized and moved to a new server.
20 to 39 percent: Use IBM's Active Energy Manager to monitor the server.
40 to 59 percent: Add more VMs to this virtualized server.
Over 60 percent: Manage the workload balance on this server.
This approach allows IBM to achieve a 60 to 70 percent utilization average on x86 machines, with an ROI payback period of 6 to 18 months, and 2x-3x increase of servers-managed-per-FTE.
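The utilization tiers above amount to a simple decision function. A minimal sketch, where the thresholds come from the talk but the function and action names are illustrative, not IBM's actual tooling:

```python
def server_action(cpu_utilization):
    """Map a server's average utilization (percent) to a consolidation
    action, following the tiers described above. Names are illustrative."""
    if cpu_utilization <= 0:
        return "recycle"        # unused: decommission and recycle properly
    if cpu_utilization < 20:
        return "virtualize"     # move the workload to a shared, virtualized host
    if cpu_utilization < 40:
        return "monitor"        # watch it with an energy/utilization monitor
    if cpu_utilization < 60:
        return "consolidate"    # headroom available: add more VMs
    return "balance"            # busy: manage the workload balance

print(server_action(5))   # virtualize
```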
Storage is classified using Information Lifecycle Management (ILM) best practices, using automation with pre-defined data placement and movement policies. This allows only 5 percent of data to be on Tier-1, 15 percent on Tier-2, 15 percent on Tier-3, and 65 percent on Tier-4 storage.
Chris recommends adopting IT Service Management, and to shift away from one-off builds, stand-alone apps, and siloed cost management structures, and over to standardization and shared resources.
You may have heard of "Follow-the-sun" but have you heard of "Follow-the-moon"? Global companies often establish "follow-the-sun" for customer service, re-directing phone calls to be handled by people in countries during their respective daytime hours. In the same manner, server and storage virtualization allows workloads to be moved to data centers during night-time hours, following the moon, to take advantage of "free cooling" using outside air instead of computer room air conditioning (CRAC).
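A follow-the-moon scheduler boils down to picking the sites where it is currently night. A minimal sketch, with made-up site names and UTC offsets:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical data centers and their UTC offsets in hours (illustrative).
DATA_CENTERS = {"Raleigh": -5, "Zurich": 1, "Singapore": 8}

def night_time_sites(now_utc, night_start=22, night_end=6):
    """Return sites where it is currently night (local hour >= 22 or < 6),
    i.e. candidates for workload placement to exploit free cooling."""
    sites = []
    for name, offset in DATA_CENTERS.items():
        local_hour = (now_utc + timedelta(hours=offset)).hour
        if local_hour >= night_start or local_hour < night_end:
            sites.append(name)
    return sites

# At 03:00 UTC it is 22:00 in Raleigh and 04:00 in Zurich, but 11:00 in Singapore.
print(night_time_sites(datetime(2011, 12, 7, 3, 0, tzinfo=timezone.utc)))
```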
Since 2007, IBM has been able to double computer processing capability without increasing energy consumption or carbon gas emissions.
It's Wednesday, Day 3, and I can tell already that the attendees are suffering from "information overload".
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of more of the Tuesday afternoon sessions:
IBM CIOs and Storage
Barry Becker, IBM Manager of Global Strategic Outsourcing Enablement for Data Center Services, presented this session on Storage Infrastructure Optimization (SIO).
A bit of context might help. I started my career in DFHSM which moved data from disk to tape to reduce storage costs. Over the years, I would visit clients, analyze their disk and tape environment, and provide a set of recommendations on how to run their operations better. In 2004, this was formalized into week-long "Information Lifecycle Management (ILM) Assessments", and I spent 18 months in the field training a group of folks on how to perform them. The IBM Global Technology Services team has taken a cross-brand approach, expanding this ILM approach to include evaluations of the application workloads and data types. These SIO studies take 3-4 weeks to complete.
Over the next decade, there will only be 50 percent more IT professionals than we have today, so new approaches will be needed for governance and automation to deal with the explosive growth of information.
SIO deals with both the demand and supply of data growth in five specific areas:
Data reclamation, rationalization and planning
Virtualization and tiering
Backup, business continuity and disaster recovery
Storage process and governance
Archive, Retention and Compliance
The process involves gathering data and interviewing business, financial and technical stakeholders, such as storage administrators and application owners. The interviews take less than one hour per person.
Over the past two years, the SIO team has uncovered disturbing trends. A big part of the problem is that 70 percent of data stored on disk has not been accessed in the past 90 days, and is unlikely to be accessed at all in the near future, so it would probably be better stored on lower-cost storage tiers.
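The 90-day rule lends itself to a simple age-based placement policy. A sketch, where the tier names and the boundaries beyond 90 days are my own assumptions:

```python
import os
import time

def days_since_access(path, now=None):
    """Days since a file was last read (POSIX atime; requires atime tracking)."""
    now = time.time() if now is None else now
    return (now - os.stat(path).st_atime) / 86400

def suggest_tier(age_days):
    """Illustrative placement policy built around the 90-day rule above."""
    if age_days <= 90:
        return "tier-1"   # actively accessed: keep on fast disk
    if age_days <= 365:
        return "tier-3"   # aging: move to slower, cheaper disk
    return "tier-4"       # cold: tape or archive storage
```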
Storage Resource Management (SRM) is also a mess, with over 85 percent of clients having serious reporting issues. Even rudimentary "Showback" systems to report back what every individual, group or department was using resulted in significant improvement.
Archive is not universally implemented mostly because retention requirements are often misunderstood. Barry attributed this to lack of collaboration between storage IT personnel, compliance officers, and application owners. A "service catalog" that identifies specific storage and data types can help address many of these concerns.
The results were impressive. Clients that follow SIO recommendations save on average 20 to 25 percent after one year, and 50 percent after three to five years. Implementing storage virtualization averaged 22 percent lower CAPEX costs. Those that implemented a "service catalog" saved on average $1.9 million US. Internally, IBM's own operations have saved $13 million implementing these recommendations over the past three years.
Reshaping Storage for Virtualization and Big Data
The two analysts presenting this topic acknowledged there is no downturn on the demand for storage. To address this, they recommend companies identify storage inefficiencies, develop better forecasting methodologies, implement ILM, and follow vendor management best practices during acquisition and outsourcing.
To deal with new challenges like virtualization and Big Data, companies must decide to keep, replace or supplement their SRM tools, and build a scalable infrastructure.
One suggestion to get upper management to accept new technologies like data deduplication, thin provisioning, and compression is to refer to them as "Green" technologies, as they help reduce energy costs as well. Thin provisioning can help drive up storage utilization to rates as high as you dare; typically, 60 to 70 percent is what most people are comfortable with.
A poll of the audience found that top three initiatives for 2012 are to implement data deduplication, 10Gb Ethernet, and Solid-State drives (SSD).
The analysts explained that there are two different types of cloud storage. The first kind is storage "for" the cloud, used for cloud compute instances (aka Virtual Machines), such as Amazon EBS for EC2. The second kind is storage "as" the cloud, storage as a data service, such as Amazon S3, Azure Blob and AT&T Synaptic.
The analysts feel that cloud storage deployments will be mostly private clouds, bursting as needed to public cloud storage. This creates the need for a concept called "Cloud Storage Gateways" that manage this hybrid of some local storage and some remote storage. IBM's SONAS Active Cloud Engine provides long-distance caching in this manner. Other smaller startups include cTera, Nasuni, Panzura, Riverbed, StorSimple, and TwinStrata.
A variation of this is the "storage gateway" for backup and archive providers, used as a staging area for data to be subsequently sent on to the remote location.
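The core of a cloud storage gateway is a bounded local cache in front of a remote object store. A minimal sketch, where a plain dict stands in for the remote S3-style service and no real vendor API is used:

```python
from collections import OrderedDict

class CloudStorageGateway:
    """Minimal sketch of a cloud storage gateway: a small local cache
    in front of a remote object store (here just a dict)."""

    def __init__(self, remote, cache_size=2):
        self.remote = remote
        self.cache = OrderedDict()
        self.cache_size = cache_size

    def _evict_if_full(self):
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)   # drop least recently used entry

    def read(self, key):
        if key in self.cache:                # local hit: no WAN round trip
            self.cache.move_to_end(key)
            return self.cache[key]
        data = self.remote[key]              # miss: fetch from the cloud
        self.cache[key] = data
        self._evict_if_full()
        return data

    def write(self, key, data):
        self.cache[key] = data               # keep the hot copy locally...
        self.cache.move_to_end(key)
        self.remote[key] = data              # ...and write through to the cloud
        self._evict_if_full()
```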
New projects like virtualization, Cloud computing and Big Data are giving companies a new opportunity to re-evaluate their strategies for storage, process and governance.
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of some of the Tuesday afternoon sessions:
Brocade: Maximizing Your Cloud: How Data Centers Must Evolve
This was a session sponsored by Brocade to promote their concept of the "Ethernet Fabric". The first speaker, John McHugh, was from Brocade, and the second speaker was a client testimonial, Jamie Shepard, EVP for International Computerware, Inc.
John had an interesting take on today's network challenges. He feels that most LANs are organized for "North-South" traffic, referring to upload/downloads between clients and servers. However, the networks of tomorrow will need to focus on "East-West" traffic, referring to servers talking to other servers.
John was also opposed to integrated stacks that combine servers, storage and networking into a single appliance, as this prevents independent scaling of resources.
The Future of Backup is Not Backup
Primary data is growing at 40 to 60 percent compound annual growth rate (CAGR), but backup data is growing faster. Why? Because data that was not backed up before is now being backed up, including test data, development data, and mobile application data.
Backup costs are 19x more expensive than production software costs. There is an enormous gap in data protection because companies fail to factor this into their budgets. It is not uncommon for IT departments to use multiple backup tools, for example one tool for VMs, and another tool for servers, and a third product for desktops.
Part of the problem is identifying who "buys" the backup software. The server team might focus on the operating systems supported. The storage team focuses on the disk and tape media supported. The application owners focus on the features and capabilities for backup that minimize impact to their application.
The analyst organized these issues into three "C's" of backup concerns: Cost, Capability and Complexity. Cost is not just the software license fee for the backup software, but the cost of backup media, courier fees, and transmission bandwidth. Capability refers to the features and functions, and IT folks are tired of having to augment their backup solution with additional tools and scripts to compensate for lack of capability. Complexity refers to the challenges trying to get existing backup software to tackle new sources like Virtual Machines, Mobile apps, and so on.
Has everyone moved to a tape-less backup system? Polling results found that people are shifting back to tape, either in a tape-only environment, or to supplement their disk or disk-based virtual tape library (VTL). Here are the polling results:
The poll also showed the top three backup software vendors were Symantec, IBM and Commvault, which is consistent with marketshare. However, the analyst feels that by 2014, an estimated 30 percent of companies will change their backup software vendor out of frustration over cost, capability and/or complexity.
There are a lot of new backup software products specifically for dealing with Virtual Machines. Some are focused exclusively on VMware. When asked what tool people used to back up their VMs, the polling results showed the following. Note that the 20 percent for "Other" includes products from major vendors, like IBM Tivoli Storage Manager for Virtual Environments, as the analyst was more interested in the uptake of backup software from startups.
Some companies are considering Cloud Computing for backup. This is one area where having the cloud service provider at a distance is an actual advantage for added protection. A poll asking whether some or most data is backed up to the Cloud, either already today, or plans for the near future within the next 12 or 24 months, showed the following:
In addition to backup service providers, there are now several startups that offer file sharing, and some are adding "versioning" to this that can serve as an alternative to backup. These include DropBox, SugarSync, iCloud, SpiderOak and ShareFile.
The final topic was Snapshot and Disk Replication. These tend to be hardware-based, so they may not have options for versioning, scheduling, or application-aware capabilities normally associated with backup software. Space-efficient snapshots, which point unchanged data back to the original source, may not provide full data protection that disparate backup copies would provide. Here were polling results on whether snapshot/replication was used to augment or replace some or most of their backups:
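The pointer-back behavior of a space-efficient snapshot can be illustrated with a short copy-on-write sketch: only blocks changed after the snapshot get preserved separately, and unchanged blocks resolve to the live volume, which is why a snapshot by itself is not an independent backup copy. The class and method names are invented for illustration:

```python
class SnapshotVolume:
    """Sketch of a space-efficient snapshot. Each snapshot starts as an
    empty map; when a live block is overwritten, the old data is saved
    once. Reads of unchanged blocks point back to the live volume."""

    def __init__(self, blocks):
        self.blocks = blocks          # the live volume, as a list of blocks
        self.snapshots = []           # each snapshot: {block_index: old_data}

    def take_snapshot(self):
        self.snapshots.append({})
        return len(self.snapshots) - 1

    def write(self, index, data):
        for snap in self.snapshots:
            if index not in snap:     # first overwrite since that snapshot
                snap[index] = self.blocks[index]
        self.blocks[index] = data

    def read_snapshot(self, snap_id, index):
        # Unchanged blocks resolve to the live volume (shared storage).
        return self.snapshots[snap_id].get(index, self.blocks[index])
```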
Some of his observations and recommendations:
Maintenance is more expensive than acquisition cost. Don't focus on the tip of the iceberg. Some backup software is more efficient for bandwidth and media which will save tons of money in the long run.
Try to optimize what you have. He calls this the "Starbucks effect". If you just need one coffee, then paying $4.50 for a cup makes sense. But if you need 100 coffees, you might be better off buying the beans.
Design backups to meet service level agreements (SLAs). In the past, backup was treated as one-size-fits-all, but today you can now focus on a workload by workload basis.
Be conservative in adopting new technologies until you have your backup procedures in place to handle data protection.
Backup is for operational recovery, not long-term retention of data. A poll showed two-thirds of the audience kept backup versions for longer than 60 days! Re-evaluate how long you keep backups, and how many versions you keep. If you need long-term retention, use an archive process instead.
Recovery testing is a dying art. Practice recovery procedures so that you can do it safely and correctly when it matters most.
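The retention advice above can be expressed as a small pruning policy. A sketch, using the 60-day figure from the poll and a hypothetical four-version limit standing in for a per-workload SLA:

```python
from datetime import datetime, timedelta

def prune_backups(backups, max_versions=4, max_age_days=60, now=None):
    """Keep at most max_versions backup timestamps, none older than
    max_age_days. Policy values are illustrative, not a recommendation."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_age_days)
    recent = sorted(b for b in backups if b >= cutoff)
    return recent[-max_versions:]   # newest N inside the retention window
```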
The analyst had a series of awesome pictures of large structures, the pyramids of Giza, the Chrysler building, and so on, and how they would look without their foundations in place. Backup is a foundation and should be treated as such in all IT planning.
IT is evolving, but some basic needs like networking and backup procedures don't change. As companies re-evaluate their IT operations for Big Data, Cloud Computing and other new technologies, it is best to remember that some basic needs must be met as part of those evaluations.
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of the Tuesday morning sessions:
Wells Fargo: Data Center Lessons Learned from the Wachovia Acquisition
This was the next in their "Mastermind Interview" series. The analyst interviewed Scott Dillon, EVP and Head of Technology Infrastructure Services for Wells Fargo bank. Some 13 years ago, Wells Fargo merged with Norwest, and three years ago, Wells Fargo merged again, this time with Wachovia bank. Today, the new merged Wells Fargo manages 1.2 Trillion USD in assets, some 12,000 ATMs, and 9,000 branch offices within two miles of 50 percent of the US population.
On the technical side, Scott's team has to deal with 10,000 IT changes per month, spanning 85 discrete businesses that Wells Fargo is involved in. To help drive the consolidation, they formed a culture group called "One Wells Fargo".
Often, Wells Fargo and Wachovia used different applications for the same function. The consolidation team took the A-or-B-but-not-C approach, which means they would either choose the existing application that Wells Fargo was already using (A), or the one that Wachovia was already using (B), but not look for a replacement (C). They also wanted to avoid re-platforming any apps during the merger. This simplified the process of developing target operating models (TOMs).
Before each application cut-over, the consolidation team did dry-run, dress rehearsals and walkthroughs over the phone to ensure smooth success. They wanted a Wachovia account holder to be able to walk into the bank on one day, and then come back the next day as a Wells Fargo account holder, into the same branch office but now with Wells Fargo signage, with minimal disruption.
Wells Fargo also adopted a test-to-learn approach of choosing small test markets to see how well the transition would work before tackling larger, more complicated markets. For example, they started in Colorado, where Wells Fargo has a huge presence, but Wachovia had a small presence.
This was first and foremost a business merger, not just an IT merger. Each decision took 6 to 18 months to act on, and the IT team spent the last three years working every weekend to make this a reality.
A Satirical Look at Business and Technology
Comedian Bob Hirschfeld presented a light-hearted look at the IT industry. Bob actually attended sessions on Monday at this conference so his satire was exceptionally hard-hitting. He took jabs at the latest IT job requirements, padding on light poles, IBM Watson, social media's impact on dictators, various industry acronyms, virtualization, the various reasons why printer ink is so expensive, and the evil masterminds behind Powerpoint.
Storing Big Data takes a Village
Two analysts co-presented this session on the 12 dimensions of information management that revolve around the volume, variety and velocity of "Big Data".
In the past, it took a while to gather data, and a while to process the data, so annual, quarterly and monthly reports were common. Today, with high-velocity streams like Twitter, especially during cultural events or natural disasters, data is produced and analyzed quickly. It is important to sort the steady-state from the anomalies.
Myth 1: All data fits nicely into relational databases. The analysts feel the concept of putting everything into one big database is dead. Some data sets are so complicated that traditional database joins would cause smoke to come out of the sides of the servers. Instead, new technologies have emerged, including NoSQL, Cassandra, Hadoop, Columnar databases, and In-memory databases. XML has helped to bring together disparate data formats.
Companies need to adapt to this new reality of Business Analytics. Here is a poll of the audience on how many are in what stage of adaptation:
Myth 2: Everyone will do Big Data with commodity hardware. Businesses want commercial offerings that don't fail every day. (For example, instead of using open-source Hadoop, consider IBM's [InfoSphere BigInsights] commercial product based on Hadoop designed for the Enterprise).
Myth 3: Big Data is too big for backup. Certainly, traditional full-plus-incremental approaches fail to scale, but that is not the only option you have. Consider disk replication, snapshots, and integrated disk-and-tape blended solutions that adopt a more progressive backup methodology.
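A progressive (incremental-forever) backup avoids periodic fulls by consulting a catalog of what was already backed up. A minimal sketch using file modification times; the catalog format is invented for illustration:

```python
import os

def progressive_backup(root, catalog):
    """Incremental-forever sketch: walk the tree and pick up only files
    whose modification time differs from what the catalog recorded.
    catalog maps path -> mtime of the last backed-up version."""
    changed = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            mtime = os.stat(path).st_mtime
            if catalog.get(path) != mtime:
                changed.append(path)   # would be sent to backup storage
                catalog[path] = mtime
    return changed
```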
Capacity forecasting can be difficult with Big Data. Scale-out NAS systems, including IBM SONAS and the various me-too competitive offerings, which were originally focused on High Performance Computing (HPC) and the Media & Entertainment (M&E) industries, are now ready for prime time and appropriate for other use cases.
It's like the game of Clue, but instead of Professor Plum with the candlestick in the library, it was Chuck with the Cluster in the Closet. To avoid shadow IT creating huge Hadoop Clusters in your closets, encourage the use of Cloud Computing for "sandbox" projects. IBM, Amazon and others offer hosted MapReduce engines for this purpose.
What type of storage do you plan to use for Big Data? The top five, weighted from a list during a poll of the audience were: (78) traditional disk arrays, (71) Scale-out NAS, (46) pre-configured appliances, (30) Hadoop clusters, and (23) Cloud Storage.
Big Data is about doing things differently. Do your employees understand analytical techniques? Your company may need to start thinking about policies for capturing Big Data, storing it correctly, and analyzing it for insights and patterns needed to stay competitive.
It was good to mix reality with a bit of humor. Some of these conference attendees take themselves too seriously, and it is good to be reminded that IT is just part of the overall business operation.
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of the Monday afternoon sessions:
IBM Watson and your Data Center
Steve Sams, IBM VP of Site and Facilities Services, cleverly used IBM Watson as a way to explain how analytics can be used to help manage your data center. Sadly, most of the people at my table missed the connection between IBM Watson and Analytics. How does answering a single trivia question in under three seconds relate to the ongoing operations of a data center? If you were similarly confused, take a peek at my series of IBM Watson blog posts:
The analyst who presented this topic was probably the fastest-speaking Texan I have met. He covered various aspects of Cloud Computing that people need to consider. Why hasn't Cloud taken off sooner? The analyst feels that Cloud Computing wasn't ready for us, and we weren't ready for Cloud Computing. The fundamentals of Cloud Computing have not changed, but we as a society have. Now that many end users are comfortable consuming public cloud resources, from Facebook to Twitter to Gmail, they are beginning to ask for similar from their corporate IT.
Legal issues - see this hour-long video, [Cloud Law & Order], which discusses legal issues related to Cloud Computing.
Employee staffing - need to re-tool and re-train IT employees to start thinking of their IT as a service provider internally.
Hybrid Cloud - rather than struggle choosing between private and public cloud methodologies, consider a combination of both.
University of Rochester Medical Center (URMC) Cracks Code on Data Growth
Oftentimes, the hour is split: 30 minutes of the sponsor talking about various products, followed by 30 minutes of the client giving a user experience. Instead, I decided to let the client speak for 45 minutes, and then I moderated the Q&A for the remaining 15 minutes. This revised format seemed to be well-received!
University of Rochester is in New York, about 60 miles east of Buffalo, and 90 miles from Toronto across Lake Ontario. Six years ago, Rick Haverty joined URMC as the Director of Infrastructure services, managing 130 of the 300 IT personnel at the Medical Center. I met Rick back in May, when he presented at the IBM [Storage Innovation Executive Summit] in New York City.
URMC has DS8000, DS5000, XIV, SONAS, Storwize V7000 and is in the process of deploying Storwize V7000 Unified. He presented how he has used these for continuous operations and high availability, while controlling storage growth and costs.
The Q&A was lively, focusing on how his team manages 1PB of disk storage with just four storage administrators, his choice of a "Vendor Neutral Archive" (VNA), and his experiences with integration.
This was a great afternoon, and I was glad to get all my speaking gigs done early in the week. I would like to thank Rick Haverty of URMC for doing a great job presenting this afternoon!
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of the other Monday morning keynote sessions:
Driving Innovation to Achieve Dramatic Improvements
What is Innovation? It is a process that starts with one or more ideas, that results in change, that creates value. Easier said than done!
Innovation drives business growth. The analyst indicated that the IT infrastructure can impede business growth, be neutral toward it, or contribute to it directly. Companies often cite downtime as an inhibitor to business growth. The analyst gave these typical numbers:
Unplanned downtime (hours per year)
Planned downtime (hours per year)
A big inhibitor to change is "cultural inertia": the way things are prevents them from becoming what they could be. Change requires both rewards and measures. Employees are often uncomfortable with change. Motivation should be with carrots, not sticks.
(I often joke that the only people who are comfortable with change are babies with soiled diapers and prisoners on death row!)
The resistance to change is further amplified by leadership, because what got leaders into their positions was their history of success, and they often perpetuate what worked for them in the past.
"There is nothing so useless as doing efficiently that which should not be done at all."
--- Peter Drucker
Nothing lasts forever, and companies should not try to avoid the inevitable. Innovators need to see themselves as change agents. The analyst feels that less than 10 percent of IT will adopt innovation to enact dramatic change. The analyst took a poll of the audience asking: Why isn't your IT Infrastructure and Operations more innovative? Over 800 attendees responded. Here were the results:
The analyst suggests treating Innovation like a team sport, with small 2-5 person teams. Search for breakthrough opportunities by setting audacious goals to inspire innovative thinking. Which approach are most people taking today? Here are some polling results:
The analyst suggests it is more important to establish a culture of innovation first, and process second. Skunkworks projects are back in favor. IT folks should avoid the worship of so-called "best practices" as a reason to avoid trying something different. To think "outside the box" you need to get outside the box, or office, or cubicle, or wherever you work that prevents you from interacting with your internal or external customers. Customers can bring great insights on new approaches to take.
One new approach, born in the Cloud and now coming to the Enterprise, is the concept of [DevOps], which promotes collaboration between the "Application Development" half of IT and the "Operations" half. If you had never heard of DevOps before, you are not alone; most of the attendees at this conference hadn't either. Here are the poll results:
Some companies have instituted a "Fresh Eyes" program, asking new-hires and early-tenure employees questions like: What surprised you the most when you joined the company? Was there anything that didn't make sense to you? Do you have any ideas to improve the way we do things?
"In a time of crisis we all have the potential to morph up to a new level and do things we never thought possible"
– Stuart Wilde
Why wait for a crisis?
Facebook: Efficient Infrastructure at a Massive Scale
Frank Frankovsky, the Director of Hardware Design and Supply Chain at Facebook, was sitting right next to me in the audience. I didn't know this until it was his turn to speak, and he jumped up and walked to the stage! For those who live under a rock and/or are over 40 years old, Facebook is a social media site that allows people to maintain personal profiles, share photos, news and messages, play games, and create groups to organize events. They now support over 800 million accounts, a healthy percentage of the 1.9 billion people on the internet today.
Started in 2004, Facebook was originally hosted on standard server and storage hardware in colocation facilities. Facebook saved 38 percent costs by bringing their operations in-house, building their own servers from parts, and using no third-party software. Facebook has the advantage of owning their entire software stack, leveraging open source as much as possible. They even re-wrote their own PHP compiler, which they pronounce "Hip-Hop", short for high-performance-PHP.
Facebook can stand up a new data center in less than 10 months, from breaking ground to serving users. Most of Facebook's data centers sport a PUE less than 1.5, but their newest one in Prineville, Oregon is down to an amazing 1.07 level for a 7.5 Megawatt facility! How did they do it? Here are a few of their tricks:
Use Scale-Out architecture. Having lots of small servers, scattered in various data centers, allows them to survive a server failure, as well as having the luxury to shut down a datacenter when needed for maintenance reasons.
Free Cooling. Instead of air-conditioning, they pump in cold air from the outside, and send the heated exhaust back outdoors. Frank does not believe servers should be treated like humans, so their data centers run uncomfortably hot. The 50-year climate data is used to determine data center locations that have the optimal "free cooling" opportunities.
Eliminate UPS and PDU energy losses. Rather than running 480 VAC power through UPS that represent a 6 to 12 percent loss, and then PDU that introduce another 3 percent loss getting down to 208 or 120 VAC, Frank's team builds servers that feed directly off the 480 VAC from the power company. For backup power, they use 48VDC batteries. One set of batteries can backup six racks of servers.
Target 6 to 8 KW per rack. Low-density racks are easier to keep cool.
Build their own IT equipment. Rather than buying commercially-available servers, Frank's team builds 1.5U servers based on the Intel "Westmere" chipset. The 1.5U height allows for a larger fan radius than the standard 1U pizza-box format. (IBM's iDataPlex uses 2U fans for the same reason!) Facebook has a "Vanity free" design philosophy, so no fancy plastic bezels. In most cases, the covers are left off. Most (65 percent) of their servers are web front-ends. They plan new IT equipment based on Intel's "Sandy Bridge" chipset.
Use SATA drives. They buy the largest SATA drives available, directly from manufacturers, in direct-attach storage (DAS) in their servers. Data is organized in a Hadoop cluster, and they have developed their own internal "Haystack" for photo storage. Despite the floods in Thailand, Facebook has secured all the SATA disk they plan to buy for 2012 from their suppliers.
Use Solid-State drives. Their Database tier uses 100 percent Solid-State drives.
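Two of the numbers in this talk are easy to check. PUE is simply total facility power divided by IT equipment power, and the conventional UPS-plus-PDU chain compounds its losses. A quick sketch using the figures quoted above:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# At PUE 1.07, a 7.5 MW facility spends very little on non-IT overhead:
it_load_kw = 7500 / 1.07            # roughly 7009 kW reaching IT equipment
overhead_kw = 7500 - it_load_kw     # roughly 491 kW for cooling and losses

# Conventional power chain: UPS (6-12 percent loss), then PDU (about 3 percent).
fraction_delivered = (1 - 0.09) * (1 - 0.03)   # about 0.88 with a mid-range UPS
print(round(fraction_delivered, 3))
```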
Frank is also a founder for the [Open Compute Project], which takes an "Open source" approach to IT hardware.
Facebook does not bother with hypervisors. Instead, they have adapted their own software to make full use of the CPU natively. This eliminates the "I/O Tax" penalty associated with VMware and other hypervisors.
Of course, not everyone owns their entire software stack or can build their own servers! It was nice to hear how a company without such limitations can innovate to its advantage.
This week, I will be in Las Vegas for the 30th annual [Data Center Conference]. For those on Twitter, follow the conference on hashtag #GartnerDC, and follow me at [@az990tony]. IBM is a Global Partner and Platinum Sponsor for this event. Here is a recap of some of the Monday morning keynote sessions:
Welcome and Introduction
Monday morning kicked off with a welcome introduction from the conference coordinators. This is the highest attendance for this conference in its 30-year history, with 60 percent of attendees here for their first time, and 18 percent having attended only once before. This is the fourth time I am attending. Half of the attendees represent corporations with 20,000 employees or more, the other half smaller companies and government agencies. The top five industries represented are financial services, public sector, healthcare, manufacturing, and energy.
This conference uses a clever "interactive polling" where hand-held devices can be used to select choices, and results of over 800 voters are presented immediately on the big screen.
For IT budgets, 42 percent plan to increase next year, 32 percent flat, and 26 percent lower, which are similar to the numbers last year. Of nine different IT challenges, the top three were managing storage growth, power/cooling issues, and adopting a Cloud strategy.
Top 10 Trends and how they will impact Data Center IT
The analyst presented top 10 business, technology and societal trends that will impact IT. He added a last-minute eleventh issue that he felt will impact everyone in 2012:
Consumerization and the Tablet. Back in 1997, a GB of flash memory cost $7,992 US dollars; today that same GB costs only 25 cents. Employees are bringing their own devices to the workplace, and expecting IT support.
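The drop from $7,992 per GB to 25 cents implies a remarkably steep annual decline. A quick sketch using the two figures quoted in the session (treating "today" as 2011, the year of the conference, is my assumption):

```python
# Implied average annual price decline for flash memory per GB,
# from the 1997 and present-day figures quoted by the analyst.
price_1997 = 7992.00   # USD per GB in 1997
price_now = 0.25       # USD per GB today (assumed to be 2011)
years = 2011 - 1997

annual_factor = (price_now / price_1997) ** (1 / years)
decline = 1 - annual_factor
print(f"Average annual price decline: {decline:.1%}")
```

That works out to prices roughly halving every year for fourteen years straight, which explains why tablets and smart phones became affordable consumer items.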
Infinite Data Center. You may never have to expand your floorspace again. Improvements in server and storage density can allow you to continually upgrade in place.
Energy Management. Data centers consume 100x more energy than the offices they support. The cost of energy is on par with the cost of the IT equipment itself. Energy management is becoming an enterprise-wide discipline. A key performance indicator (KPI) can be "compute per kW" or "compute per square foot".
Context Awareness. There are hundreds of thousands of apps for Android-based smart phones and iPhones. Context awareness allows an app to help business travelers in airports know what restaurants are nearby, their flight status, and alternate flights available, based entirely on their location.
Hybrid Clouds. By 2013, over 60 percent of cloud adoption will be to redeploy existing apps like email. Some 80 percent of cloud initiatives will be private or hybrid configurations. Customers want "good enough" technology, and thus Cloud will be mostly an augmentation strategy.
Fabric Computing. The opposite of fully-integrated stacks is the notion of having compute, memory and storage joined together via an interconnect fabric with software to manage the entire environment.
IT Complexity. Robert Glass's Law states that for every 25 percent increase in functionality, there is a 100 percent increase in complexity. See Roger Session's whitepaper [The IT Complexity Crisis: Danger and Opportunity] for more on this.
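Glass's Law can be restated as a power law: if a 25 percent increase in functionality doubles complexity, then complexity scales as functionality raised to log 2 / log 1.25, roughly the cube. A small sketch of that interpretation (my own rearrangement of the rule, not something presented in the session):

```python
import math

# Glass's Law: every 25 percent increase in functionality yields a
# 100 percent increase in complexity. As a power law, complexity
# scales as functionality ** (log 2 / log 1.25).
exponent = math.log(2) / math.log(1.25)

def relative_complexity(functionality_growth):
    """Relative complexity after scaling functionality by this factor."""
    return functionality_growth ** exponent

print(f"Exponent: {exponent:.2f}")  # about 3.11
print(f"Doubling functionality: {relative_complexity(2.0):.1f}x complexity")
```

In other words, doubling what a system does makes it well over eight times as complex under this rule, which is the crux of the complexity crisis the whitepaper describes.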
Patterns and Analytics. Big data and business analytics are a key platform, expected to grow at a 60 percent CAGR.
Impact of Virtualization. Virtualizing your environment should be considered a continuous process, not a one-time project. Many companies are running x86 servers at less than 55 percent utilization, which the speaker considers under-utilized. Virtual Desktop Infrastructure (VDI) is a trade-off: it may cost more, but there are other business benefits to consider. The problem is that many IT shops are organized vertically (a server team, storage team, network team) but problems surface horizontally, and there is no "ownership" for the resolution. Some use "tiger teams" to address this. Companies should reward lateral thinking.
Social Media. Of the communications on cell phones by college students, 98.4 percent are text messages, and only 1.6 percent voice phone calls. People search Google for "what was", but they search Twitter for "what is". Most of the growth on Twitter is in the 39-52 year-old demographic. The analyst felt that if your company is blocking or restricting access to Facebook, Twitter, YouTube or other social networking sites, then shame on you. I agree!
Flooding in Thailand. Over two million square feet of HDD production space were flooded, and this will impact HDD prices for 2012. Already, a 2TB drive that was selling for $79 at a local store is now selling for $190.
How To Get Your CFO's Support For Strategy and Funding
In the first of a series of "mastermind interviews", the analyst interviewed their own CFO, Chris Lafond. Ultimately, it is about business results. They have grown 15-20 percent annually, from 250 million US dollars in 2003 to 1.3 billion in 2011 in annual revenue, with 4,600 employees doing business in 85 countries. The company is focused on three business areas: Research, Consulting, and Events like this one. Chris does not approve 3-5 year projects, and instead requests that projects be broken up into year-long phases. ROI can be very misleading, so he asks instead for benefits and contributions to initiatives.
It is important to keep the horse in front of the cart. Accounting departments should not drive business decisions. For example, companies should not move to the public cloud just so that the accounting department can shift from CAPEX to OPEX. Try to depreciate as soon as possible. Likewise, green technologies and social responsibility are factors, but not drivers, of business decisions. Acquisitions are a natural evolution of the market, so risk mitigation strategies should be in place in case your vendor of choice is acquired by someone you don't like.
For BC/DR planning, the company has taken a single data center approach, but Chris indicated that IT is looking to expand this. The single data center for one part of their business was in Florida, and the other in Massachusetts, and both were recently impacted by hurricanes or earthquakes.
The "lightning round" asked Chris his thoughts, either thumbs up, thumbs down, or neutral, on single ideas or concepts. I liked this part of the interview!
Chargeback? Thumbs down. He doesn't feel you should have internal fighting over charge rates. He prefers showback instead.
BYO Device with stipend? Thumbs down, but inevitable. Giving people a chunk of money to buy their own laptop, smart phone or tablet of choice may wreak havoc on the IT department for support and service.
Telepresence? Thumbs down. Cool, but very expensive. I don't think people are prepared to exploit the benefits of this.
Corporate apps on public "app stores"? Thumbs down. Concerns over security and integration are the main issue.
Access to Social Networks? Thumbs up. This is how employees communicate and collaborate. Don't stifle them doing the right things just because you are afraid they might waste 20 minutes on Facebook per day.
Your IT budget? It's up slightly, 1-5 percent, for 2012.
Cloud? Promising, some challenges related to integration and security.
Chris finished up with a story about an application team that indicated that they would need to make 100 customizations to an off-the-shelf general ledger financial application. Chris and the other executives asked to be presented each and every customization, and he was able to eliminate most of them.
Positive comments I heard from the audience were that these keynotes had real "meat" to them, and were not just full of the cliches and platitudes that are common in keynote sessions. I would have to agree.