We have successfully arrived in Mumbai, India. Since this is my first time in India, I decided to check out the town by going to the local McDonald's® restaurant. As a former software engineer for McDonald's, I love the food, and try to visit a McDonald's in every country I visit. Wikipedia calls our transportation an [Auto Rickshaw], but the locals call it a "tuk-tuk". This is not my first time in one; they have them in Thailand and Mexico as well.
We had the hotel identify the address of the closest McDonald's to our hotel. From past experience, I know that tuk-tuk drivers will suggest alternatives, in an effort to earn a larger fare, or to redirect to a preferred location where the driver might get special kick-backs. Our driver was no different.
The traffic was treacherous, the roads were in roughshod condition, and sad-looking stray dogs digging through piles of rubbish were everywhere. The local "Daily News and Analysis" newspaper this week estimates that there are over 70,000 stray dogs in Mumbai alone. What to do with all of these strays is a matter of controversy. In preparation for the Olympic games, China has asked its restaurants to [take "dog" off their menus]. Having lived in one of the poorest countries, and one of the richest, nothing surprises me anymore.
My IBM colleague, Curtis Neal, decided to join me for this adventure. Finally, after about 20 minutes, our driver parked the tuk-tuk. He told us the restaurant was only about three blocks away by foot, that he would allow us to treat him to lunch, and that he would then take us back to the hotel. While we appreciated his fantastic imagination, we told him we just wanted to be taken one-way to the restaurant, dropped off at the front door, and we would find another tuk-tuk for the return.
After a bit of argument, we settled on being left only one block away, and we would walk the rest. While we could not see exactly where the restaurant was when we got out, he at least pointed us in the right direction.
The problem was that we approached the restaurant from behind, and came up to its equivalent of a "drive thru" window, ordered our food, and then went to the second window to pick up our order. We were eating on the street. It was not until I decided to take this photo of the restaurant that we discovered there was an entire seating area upstairs, and around the corner, the main entrance!
There were plenty of tuk-tuks picking up and dropping people off, so we have no idea why our previous driver was unwilling to take us the entire distance.
Cows are sacred here in India, so there are no beef-based hamburgers to choose from. My choices for sandwiches were:
Since my nutritionist asked me to avoid carbs and fried foods, I chose the McChicken with cheese combo meal with fries and a Coke.
Getting back was also a challenge. While we had no problem hailing a tuk-tuk, we had no idea of the address of our hotel, and our driver had no idea where it was. We ended up driving around the city until we found a different hotel, asked them if they knew where it was, and then eventually got to our hotel. This is something I should have planned for in advance: getting a card with the hotel details on it before leaving.
While it might seem like a simple trip, Curtis and I probably learned more about India this way than spending a week inside the comforts of our hotel.
Thirteen months ago, fellow IBM blogger Bob Sutor suggested the potential for avatars to [move from one virtual world to another]. I thought this was far, far in the future myself, but this week, IBM and Linden Labs, the makers of Second Life, successfully teleported an avatar from Second Life over to OpenSim. Here is the [Press Release].
If you are thinking there is no business value here, consider that Cisco has this incredible [11-minute demonstration video] that has presenters in one city on the stage at another city.
Well, my job is done here in Tokyo, and my team is off next to Mumbai, India. This of course will take the bulk of tomorrow in airplanes and airports, and not be as easy as teleporting in the metaverse!
Continuing my week in Tokyo, Japan, I was going to title this post "Chunks, Extents and Grains", but decided instead to use the fairy tale above.
Fellow blogger BarryB from EMC, on his The Storage Anarchist blog, once again shows off his [PhotoShop talents] in his post [the laurel and hardy of thin provisioning]. This time, BarryB depicts fellow blogger and IBM Master Inventor Barry Whyte as Stan Laurel, and fellow blogger Hu Yoshida from HDS as Oliver Hardy.
At stake is a comparison of the various implementations of thin provisioning among the major storage vendors. On the "thick end", Hu presents his case for 42MB chunks in his post [When is Thin Provisioning Too Thin]. On the "thin end", IBMer BarryW presents the "fine-grained" details of Space-efficient Volumes (SEV), made available with the IBM System Storage SAN Volume Controller (SVC) v4.3, in his series of posts:
BarryB paints both implementations as "extremes" in inefficiency. Some excerpts from his post:
"... Hitachi's "chubby" provisioning is probably more performance efficient with external storage than is the SVC's "thin" approach. But it is still horribly inefficient in context of capacity utilization.
... the "thin extent" size used by Symmetrix Virtual Provisioning is both larger than the largest that SVC uses, and (significantly) smaller than what Hitachi uses."
"free" may be the most expensive solution you can buy...
Before you rush off to put a bunch of SVCs running (free) SEV in front of your storage arrays, you might want to consider the performance implications of that choice. Likewise, for Hitachi's DP, you probably want to understand the impact on capacity utilization that DP will have. DP isn't free, and it isn't very space efficient, either."
BarryB would like you to think that since EMC has chosen an "extent" size between 257KB and 41MB, it must therefore be the optimal setting, not too hot, and not too cold. As I mentioned last January in my post [Does Size Really Matter for Performance?], EMC engineers had not yet decided what that extent size should be, and BarryB is noticeably vague on the current value. According to this [VMware whitepaper], the thin extent size is currently 768 KB. Future versions of the EMC Enginuity operating environment may change the thin extent size. (I am sure the EMC engineers are smarter and more decisive than BarryB would lead us to believe!)
BarryB is correct that any thin provisioning implementation is not "free", even though IBM's implementation is offered at no additional charge. Some writes may be slowed down waiting for additional storage to be allocated to satisfy the request, and some amount of storage must be set aside to hold the metadata directory that points to all these chunks, extents or grains. For the convenience of not having to manually expand LUNs as more space is needed, you will pay both a performance and capacity "price".
However, as they say, the [proof of the pudding is in the eating], or perhaps I should say the porridge in this case. Given that the DMX4 is slower than both the HDS USP-V and IBM SVC, you won't see EMC publishing industry-standard [SPC benchmarks] using their "thin extent" implementation anytime soon. IBM allows a choice of grain size, from 32KB to 256KB, in an elegant design that keeps the metadata directory at less than 0.1 to 0.5 percent overhead. I would be surprised if EMC can make a case to be more efficient than that! The performance tests are still being run, but from what I have seen so far, people will be very pleased with the minimal impact from IBM SEV, an acceptable trade-off for improved utilization and reduced out-of-space conditions.
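To get a feel for how grain size drives that metadata overhead, here is a minimal back-of-the-envelope sketch in Python; the bytes-per-grain directory entry size is a hypothetical figure chosen only for illustration, not a published SVC number:

    # Rough feel for thin-provisioning directory overhead versus grain size.
    # ENTRY_BYTES is a hypothetical per-grain directory entry size, picked
    # only to show the trend; the real SVC on-disk format is not shown here.
    ENTRY_BYTES = 64

    for grain_kb in (32, 64, 128, 256):
        overhead_pct = ENTRY_BYTES / (grain_kb * 1024) * 100
        print(f"{grain_kb:3d} KB grain -> {overhead_pct:.3f} percent overhead")

The trend is the point: a finer grain means more directory entries per volume, so the elegance is in keeping each entry small enough that even the 32KB grain stays well under one percent.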
So if you are a client waiting for your EMC equipment to be fully depreciated so you can replace it with faster equipment from IBM or HDS, you can at least improve its performance and capacity utilization today by virtualizing it with the IBM SAN Volume Controller.
Alan was a leader in blogging about IBM Lotus technologies and was very helpful to me over the past few years in deploying new Lotus technologies at the IBM Tucson Executive Briefing Center. The Lotus team taught me how to use Second Life, using the LotusSphere 2007 build to demonstrate the various possibilities, which we used to run IBM System Storage events last year.
Alan, I wish you the best of luck on your exciting new position!
"... firms don't have the detailed electricity consumption data they need to implement energy efficiency initiatives. What they have is an energy bill for a facility."
A common adage is that "you can't manage what you don't measure." IBM has beefed up the ability to measure and monitor electricity usage, not just for IBM servers and storage, but also for non-IBM IT equipment and facilities infrastructure like UPS, HVAC, lighting and security alarm systems.
Hitch Green IT to data centre refurbishment projects
"Energy savings alone don't constitute a business case to overhaul an existing data centre, undertake a refurbishment project or build a new Green Data Centre."
Either CIOs don't have the electricity measurements needed to perform an ROI or cost/benefit analysis, or the facilities folks who sense improvements are possible may not see the big picture compared to other business investments. Instead, IBM seeks to incorporate IT energy efficiency best practices into existing business plans for data center improvements.
Tackle corporate energy efficiency and emissions
"... a strategy discussion and corporate carbon diagnostic are the start point to stimulate demand. Not a cold sell on Green IT."
Project Big Green is more than just an IT project. IBM's Global Business Services consultants have transformed it into a Carbon Management Strategy encompassing employees, information, property, the supply chain, customers and products. For companies that are looking at reducing their carbon footprint overall, this approach makes a lot of sense.
Differentiate offerings by industry and country
"The inability to get more power into urban data centres has driven demand for energy efficiency by banks, telcos and outsourcers."
Different countries, and different industries, have different priorities. Europe, and in particular the UK, focuses on carbon emissions as much as energy costs due to mandatory emissions caps. For data centers in the largest cities, an increase in electrical supply may not be available, or may be too expensive, and the time it takes to build a new data center elsewhere, typically 12-18 months, may not be soon enough to handle current business growth rates. Energy efficiency projects can help buy them some time.
Plan for slow customer adoption
"IBM is developing the market for IT energy efficiency and carbon management services. And its very much an early stage market today."
IBM is frequently at the forefront of new technologies and emerging markets, so it is no surprise that we are used to dealing with slow customer adoption. The combination of high energy costs, tightening regulations and stakeholder pressure will drive the market. Larger companies and government organizations that have the means to make these necessary changes will probably lead the adoption curve.
Prepare for investment barriers to IT energy efficiency
"With the low hanging fruit picked, IBM has found that there is an unwillingness to spend money on planting a new orchard."
IBM has helped IT clients with quick fixes offering rapid payback, such as adjusting data center temperature and humidity to reduce energy consumption. But in the current economic environment, persuading firms to install variable speed fans with a 6-year payback is much tougher. Again, this is a matter of CIOs and other upper-level management balancing financial investment decisions with some foresight and vision for the future.
Project Big Green launched back in May 2007, and last month IBM renewed its commitment with Project Big Green 2.0, continuing to enhance product and service offerings in support of this much-needed area. And while the leaders at the G8 Summit will discuss a variety of topics, three top "green" issues on their agenda are rising energy costs, global climate change and controlling carbon emissions.
Well, the weather here has turned awful, so I better turn off my computer to avoid lightning-strike damage.
For those looking for something to do to enjoy the "4th of July" US Independence Day holiday tomorrow, there is the [Team America: Sing-a-long] at Tucson's Loft Cinema at 6pm, and you can still see the fireworks after the show is over. I did this last year and it was a lot of fun.
Also, you can check out the IBM Wimbledon build on Second Life. Here's the SLURL: [http://slurl.com/secondlife/IBM%207/133/180/23]. Several IBMers will be "in world" at this virtual location on the 4th of July. For all of my readers looking to check out Second Life, see what IBM can do, or talk to people who are familiar with this technology, here's your chance.
As for me, I'll be spending my "long weekend" in an airplane. Here's my travel schedule.
July 7-11: Tokyo, Japan - business meetings with IBM sales reps
July 13-18: Mumbai, India - business meetings with IBM business partners
If you will be at any of these locations on any of these dates and want to meet up, please let me know. You can click on the "send e-mail to Tony Pearson" button on the right panel of my blog.
(I was hoping that while I was in Asia, I could stop over and visit the schools I helped in Nepal and my friends at the Open Learning Exchange [OLE Nepal] as part of the One Laptop Per Child [OLPC Nepal] program, but I did not get all my ducks lined up for this with the appropriate travel approvals, visas and logistics. My apologies to Bryan, Sulochan and the rest of the team. Perhaps next year!)
Based on this success, and perhaps because I am also fluent in Spanish, I was asked to help with Proyecto Ceibal, the team for OLPC Uruguay. Normally the XS school server resides at the school location itself, so that even if the internet connection is disrupted or limited, the school kids can continue to access each other and the web cache content until the internet connection is resumed. However, with a diverse development team with people in the United States, Uruguay, and India, we first looked to Linux hosting providers that would agree to provide free or low-cost monthly access. We spent (make that "wasted") the month of May investigating. Most of the providers I talked to were not interested in having a customized Linux kernel on non-standard hardware on their shop floor; they wanted instead to offer their own standard Linux build on existing standard servers, managed by their own system administrators, or were not interested in providing it for free. Since the XS-163 kernel is customized for the x86 architecture, it is one of those exceptions where we could not host it on an IBM POWER or mainframe as a virtual guest.
This got picked up as an [idea] for Google's [Summer of Code], and we are mentoring Tarun, a 19-year-old student, to act as lead software developer. However, summer was fast approaching, and we wanted this ready for the next semester. In June, our project leader, Greg, came up with a new plan: build a machine and have it connected at an internet service provider that would cover the cost of bandwidth and be willing to accept remote administration. We found a volunteer organization to cover this -- thank you Glen and Vicki!
We found a location, so the request to me sounded simple enough: put together a PC from commodity parts that meets the requirements of the customized Linux kernel, the latest release being called [XS-163]. The server would have two disk drives, three Ethernet ports, and 2GB of memory; and be installed with the customized XS-163 software, SSHD for remote administration, the Apache web server, the PostgreSQL database and the PHP programming language. Of course, the team wanted this for as little cost as possible, and for me to document the process, so that it could be repeated elsewhere. Some stretch goals included a dual-boot with Debian 4.0 Etch Linux for development/test purposes, an alternative database such as MySQL for testing, a backup procedure, and a Recover-DVD in case something goes wrong.
Some interesting things happened:
The XS-163 is shipped as an ISO file representing a LiveCD bootable Linux that will wipe your system clean and lay down the exact customized software for a one-drive, three-Ethernet-port server. Since it is based on Red Hat's Fedora 7 Linux base, I found it helpful to install that instead, and experiment with moving sections of code over. This is similar to geneticists extracting the DNA from the cell of a pit bull and putting it into the cell of a poodle. I would not recommend this for anyone not familiar with Linux.
I also experimented with modifying the pre-built XS-163 CD image by cracking open the squashfs, hacking the contents, and then putting it back together and burning a new CD. This provided some interesting insight, but in the end I was able to do it all from the standard XS-163 image.
Once I figured out the appropriate "scaffolding" required, I managed to proceed quickly, with running versions of XS-163, plain vanilla Fedora 7, and Debian 4, in a multi-boot configuration.
The BIOS "raid" capability was really more like BIOS-assisted RAID for Windows operating system drivers. This"fake raid" wasn't supported by Linux, so I used Linux's built-in "software raid" instead, which allowed somepartitions to be raid-mirrored, and other partitions to be un-mirrored. Why not mirror everything? With two160GB SATA drives, you have three choices:
No RAID, for a total space of 320GB
RAID everything, for a total space of 160GB
Tiered information infrastructure: use RAID for some partitions, but not all.
The last approach made sense, as a lot of the data is cached web page images, easily retrievable from the internet. This also allowed some "scratch space" for downloading large files and so on. For example, 90GB mirrored that contained the OS images, settings and critical applications, plus 70GB on each drive for scratch and web cache, results in a total of 230GB of disk space, which is a 43 percent improvement over an all-RAID solution.
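A quick back-of-the-envelope calculation in Python shows where those numbers come from (the 90GB/70GB split is the one described above):

    # Two 160GB SATA drives: usable capacity under the three layouts above.
    drive_gb, drives = 160, 2

    no_raid  = drive_gb * drives                 # 320GB, nothing protected
    all_raid = drive_gb                          # 160GB, everything mirrored
    tiered   = 90 + 70 * drives                  # 90GB mirrored + 2 x 70GB scratch

    print(no_raid, all_raid, tiered)             # 320 160 230
    print((tiered - all_raid) / all_raid * 100)  # ~43.75 percent more space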
While [Linux LVM2] provides software-based "storage virtualization" similar to the hardware-based IBM System Storage SAN Volume Controller (SVC), it was a bad idea putting the different "root" directories of my many OS images on there. Linux, like most operating systems, expects things to be in the same place where it last shut down, but in a multi-boot environment, you might boot the first OS, move things around, and then when you try to boot the second OS, it doesn't work anymore, corrupts what it does find, or hangs with a "kernel panic". In the end, I decided to use RAID non-LVM partitions for the root directories, and only use LVM2 for data that is not needed at boot time.
While they are both Linux, Debian and Fedora were different enough to cause me headaches. Settings were different, parameters were different, file directories were different. Not quite as religious as MacOS-versus-Windows, but you get the picture.
During this time, the facility was out getting a domain name, IP address, subnet mask and so on, so I tested with my internal 192.168.x.y addresses and figured I would change this to whatever it should be the day I shipped the unit. (I'll find out next week if that was the right approach!)
Afraid that something might go wrong while I am in Tokyo, Japan next week (July 7-11), or Mumbai, India the following week (July 14-18), I added a Secure Shell [SSH] daemon that runs automatically at boot time. This involves putting the public key on the server, while each remote admin keeps their own private key on their own client machine. I know all about public/private key pairs, as IBM is a leader in encryption technology, and was the first to deliver built-in encryption with the IBM System Storage TS1120 tape drive.
To give users access to all their files from any OS image required that I either (a) have identical copies everywhere, or (b) have a shared partition. The latter turned out to be the best choice, with an LVM2 logical volume for the "/home" directory that is shared among all of the OS images. As we develop the application, we might find other directories that make sense to share as well.
For developing across platforms, I wanted the Ethernet devices (eth0, eth1, and so on) to match the actual ports they are supposed to be connected to in a static IP configuration. Most people use DHCP so it doesn't matter, but the XS software requires this, so here it did. For example, "eth0" is the 1 Gbps port to the WAN, and "eth1"/"eth2" are the two 10/100 Mbps PCI NIC cards to other servers. Pinning the network interfaces to specific hardware ports was different on Fedora and Debian, but I got it working.
While it was a stretch goal to develop a backup method, one that could perform Bare Machine Recovery from media burned to DVD, it turned out I needed to do this anyway, just to prevent losing my work in case things went wrong. I used an external USB drive to develop the process, and got everything to fit onto a single 4GB DVD. Using IBM Tivoli Storage Manager (TSM) for this seemed overkill, and [Mondo Rescue] didn't handle LVM2+RAID as well as I wanted, so I chose [partimage] instead, which backs up each primary partition, mirrored partition, or LVM2 logical volume, keeping all the time stamps, ownerships, and symbolic links intact. It has the ability to chop up the output into fixed-size pieces, which is helpful if you are going to burn them onto 700MB CDs or 4.7GB DVDs. In my case, my FAT32-formatted external USB disk drive can't handle files bigger than 2GB, so this feature was helpful for that as well. I standardized on 660 MiB [about 692MB] per piece, since that met all criteria.
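Checking that piece size against each constraint is simple arithmetic; a quick sketch:

    # Verify that 660 MiB pieces satisfy all three media constraints.
    piece_mb = 660 * 1024**2 / 1000**2   # 660 MiB is about 692 MB

    print(piece_mb < 700)                # fits on a 700 MB CD
    print(piece_mb < 2000)               # under the FAT32 2 GB file limit
    print(int(4700 / piece_mb))          # about 6 pieces fit on a 4.7 GB DVD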
The folks at [SysRescCD] saved the day. The standard "SysRescueCD" assigned eth0, eth1, and eth2 differently than the three base OS images, but the nice folks in France who write SysRescCD created a customized [kernel parameter that allowed the assignments to be fixed per MAC address] in support of this project. With this in place, I was able to make a live Boot-CD that brings up SSH, with all the users, passwords, and Ethernet devices matching the hardware. I installed this LiveCD as the "Rescue Image" on the hard disk itself, and also made a Recovery-DVD that boots up just like the Boot-CD, but contains the 4GB of backup files.
For testing, I used Linux's built-in Kernel-based Virtual Machine [KVM], which works like VMware, but is open source and included in the 2.6.20 kernels that I am using. IBM is the leading reseller of VMware and has been doing server virtualization for the past 40 years, so I am comfortable with the technology. The XS-163 platform, with its Apache and PostgreSQL servers, acts as a platform for [Moodle], an open source class management system, and the combination is memory-intensive enough that I did not want to incur the overhead of running production this way, but it was great for testing!
With all this in place, the system is designed to not need a Linux system admin or XS-163/Moodle expert at the facility. Instead, all we need is someone to insert the Boot-CD or Recover-DVD and reboot the system if needed.
Just before packing up the unit for shipment, I changed the IP addresses to the values they need at the destination facility, updated the [GRUB boot loader] default, and made a final backup, which burned the Recover-DVD. Hopefully, it will work by just turning on the unit, [headless], without any keyboard, monitor or configuration required. Fingers crossed!
So, thanks to the rest of my team: Greg, Glen, Vicki, Tarun, Marcel, Pablo and Said. I am very excited to be part of this, and look forward to seeing this become something remarkable!
Wrapping up this week's theme on why the System z10 EC mainframe can replace so many older, smaller, underutilized x86 boxes. This all started to help fellow bloggers Jon Toigo of DrunkenData and Jeff Savit from Sun Microsystems understand the IBM press release that we put out last February on this machine, with my post [Yes, Jon, there is a mainframe that can help replace 1500 x86 servers] and my follow-up post [Virtualization, Carpools and Marathons]. The computations were based on running 1500 unique workloads as Linux guests under z/VM, and not running them as z/OS applications.
My colleagues in IBM Poughkeepsie recommended these books to provide more insight and in-depth understanding. Looks like some interesting summer reading. I put in quotes the sections I excerpted from the synopsis I found for each.
"From Microsoft to IBM, Compaq to Sun to DEC, virtually every large computer company now uses clustering as a key strategy for high-availability, high-performance computing. This book tells you why-and how. It cuts through the marketing hype and techno-religious wars surrounding parallel processing, delivering the practical information you need to purchase, market, plan or design servers and other high-performance computing systems.
Microsoft Cluster Services ("Wolfpack")
IBM Parallel Sysplex and SP systems
DEC OpenVMS Cluster and Memory Channel
Tandem ServerNet and Himalaya
Intel Virtual Interface Architecture
Symmetric Multiprocessors (SMPs) and NUMA systems"
Fellow IBM author Gregory Pfister worked in IBM Austin as a Senior Technical Staff Member focused on parallel processing issues, but I never met him in person. He points out that workloads fall into regions called parallel hell, parallel nirvana, and parallel purgatory. Careful examination of machine designs and benchmark definitions will show that the "industry standard benchmarks" fall largely in parallel nirvana and parallel purgatory. Large UNIX machines tend to be designed for these benchmarks and so are particularly well suited to parallel purgatory. Clusters of distributed systems do very well in parallel nirvana. The mainframe resides in parallel hell, as do its primary workloads. The current confusion is where virtualization takes workloads, since there are no good benchmarks for it.
"In these days of shortened fiscal horizons and contracted time-to-market schedules, traditional approaches to capacity planning are often seen by management as tending to inflate their production schedules. Rather than giving up in the face of this kind of relentless pressure to get things done faster, Guerrilla Capacity Planning facilitates rapid forecasting of capacity requirements based on the opportunistic use of whatever performance data and tools are available in such a way that management insight is expanded but their schedules are not."
Neil Gunther points out that vendor claims of near-linear scaling are not to be trusted, and shows a method to "derate" scaling claims. His suggested scaling values for database servers are closer to IBM's LSPR-like scaling model than to TPC-C or SPEC scaling. I had mentioned "While a 1-way z10 EC can handle 920 MIPS, the 64-way can only handle 30,657 MIPS." in my post, but still people felt I was using "linear scaling". Linear scaling would mean that if a 1GHz single-core AMD Opteron can do four (4) MIPS, and a one-way z10 EC can do 920 MIPS, then one might assume that a 1GHz dual-core AMD could do eight (8) MIPS, and the largest 64-way z10 EC could theoretically do 64 x 920 = 58,880 MIPS. The reality is closer to 6.866 and 30,657 MIPS, respectively.
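Gunther's "Universal Scalability Law" is one common way to express this derating. Here is a minimal sketch in Python; the contention and coherency coefficients are picked purely for illustration, not measured z10 values:

    # Gunther's Universal Scalability Law: C(p) = p / (1 + a(p-1) + b*p(p-1))
    # a models contention (serialization), b models coherency delay.
    def usl(p, a=0.012, b=0.00005):      # illustrative coefficients only
        return p / (1 + a * (p - 1) + b * p * (p - 1))

    for p in (1, 8, 16, 32, 64):
        print(p, round(usl(p), 1))       # effective "processors worth" of work

    # For comparison, the z10 numbers above imply 30657 / 920, or roughly 33
    # effective engines at p = 64 -- far from the linear 64.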
This was never an IBM-vs-Sun debate. One could easily make the same argument that a large Sun or HP system could replace a bunch of small 2-way x86 servers from Dell. Both types of servers have their place and purpose, and IBM sells both to meet the different needs of our clients. The savings are in total cost of ownership, reducing power and cooling costs, floorspace, software licenses, administration costs, and outages.
I hope we covered enough information so that Jeff can go back to talking about Sun products, and I can go back to talking about IBM storage products.
Continuing this week's theme on the z10 EC mainframe being able to perform the workload of hundreds or thousands of small 2-way x86 servers, I offer a simple analogy.
One car, one driver
If you wonder why so many companies subscribe to the notion that you should only run a single application per server, blame Sun, who I think helped promote this idea. Not to be out-done, Microsoft, HP and Dell think that it is a great idea too. Imagine the convenience for operators of being able to switch off a single machine and impact only a single application. Imagine how much this simplifies new application development, knowing that you are the only workload on a set of dedicated resources.
This is analogous to a single car, single driver, where the car helps get the person from "point A" to "point B" and the single driver represents the driver and sole passenger of the vehicle. If this were a single driver on an energy-efficient motorcycle or scooter, that would be reasonable, but people often drive much bigger vehicles alone, which is what Jeff Savit would call "over-provisioning". Chips have increased in processing power much faster than individual applications have increased their requirements, and as a result, you have over-provisioning.
Carpooling - one bus, one driver, and many other passengers riding along
This is how z/OS operates. Yes, you can have up to 60 LPARs that you can individually turn on and off, but where z/OS gets most of its advantages is that you can run many applications in a single OS instance, through the use of "Address Spaces" which act as application containers. Of course, it is more difficult to write for this environment, because you have to be a good "z/OS citizen", share resources nicely, and be WLM-compliant to allow your application to be swapped out for others.
While you get efficiencies with this approach, when you bring the OS down, all the apps on that OS image have to stop with it. For those who have "Parallel Sysplex", that is not an issue. For example, let's say you have three mainframes, each running several LPARs of z/OS, and your various z/OS images are all able to process incoming transactions for a common shared DB2 database. Thanks to DB2 data sharing technology, you could take down an individual LPAR or z/OS image and not disrupt transaction processing, because the IP spreader just sends the transactions to the remaining LPARs. A "Coupling Facility" allows for smooth operations if any of the OS images are lost to an unexpected disaster or disruption.
Needless to say, IBM does not give each z/OS developer his or her own mainframe. Instead, we get to run z/OS guest images under z/VM. It was even possible to emulate the next-generation S/390 chipset, to allow us to test software on hardware that had not yet been built. With HiperSockets, we can have virtual TCP/IP LAN connections between images, virtual coupling facilities, virtual disk and virtual tape, and so on. It made development and test that much more efficient, which is why z/OS is recognized as one of the most rock-solid, bullet-proof operating systems in existence.
The negatives of carpooling or taking the bus apply here as well. I have been on buses that have stopped working, stranding 50 people. And you don't need more than two people to make the logistics of most carpools complicated. This feeds the fear that makes people prefer separate manageable units, one-car-one-driver, to putting all of their eggs into one basket, having to schedule outages together, and so on.
(Disclaimer: From 1986 to 2001 I helped with the development of z/OS and Linux on System z. Most of my 17 patents are from that time of my career!)
Bicycle races and Marathons
The third computing model is the Supercomputer. Here we take a lot of one-way and two-way machines, and lash them together to form an incredible machine able to perform mathematical computations faster than any mainframe. The supercomputer that IBM built for Los Alamos National Laboratory just clocked in at 1,000,000,000,000,000 floating point operations per second. This is not a single operating system; rather, each machine runs its own OS, is given its primary objective, and tries to get it done. NetworkWorld has a nice article on this titled [IBM, Los Alamos smash petaflop barrier, triple supercomputer speed record]. If every person in the world were armed with a handheld calculator and performed one calculation per second, it would take us 46 years collectively to do everything this supercomputer can do in one day.
I originally thought of bicycle races as an analogy for this, but having listened to Lance Armstrong at the [IBM Pulse 2008] conference, I learned that biking is a team sport, and I wanted something that had the "every-man-for-himself" approach to computing. So, I changed this to marathons.
The marathon was named after a fabled Greek soldier sent as a messenger from the [Battle of Marathon to the City of Athens], a distance that is now standardized to 26 miles and 385 yards, or 42.195 kilometers for my readers outside the United States.
If you were given the task to get thousands of people from "point A" to "point B" 26-plus miles away, would you choose thousands of cars, each with a lone driver? Conferences with a lot of people in a few hotels use shuttle buses instead. A few drivers, a few buses, and you can get thousands of people from a few places to a few places. But the workloads that are sent to supercomputers have a single end point, so a dispatcher node gives a message to each "Greek soldier" compute node, and has them run it on their own. Some make it, some don't, but for a supercomputer that is OK. When the message is delivered, the calculation for that little piece is done, and the compute node is given another message to process. All of the computations are assembled to come up with the final result. Applications must be coded very specially to be able to handle this approach, but for the ones that are, amazing things happen.
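In code, that dispatch pattern is simple to sketch; here is a toy Python version (the "message" and "calculation" are made up for illustration):

    # Toy dispatcher/worker pattern like the supercomputer description above.
    from queue import Queue

    work = Queue()
    for piece in range(10):              # each "message" is one little piece
        work.put(piece)

    results = []
    while not work.empty():              # a compute node runs on its own and,
        piece = work.get()               # when done, just takes another message
        results.append(piece * piece)    # stand-in for the real calculation

    print(sum(results))                  # assemble the final result

A real supercomputer spreads those messages across thousands of nodes in parallel, but the shape is the same: a queue of independent pieces, workers pulling from it, and a final assembly step.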
So, how does "server virtualization" come into play?
IBM has had Logical Partitions for quite some time. A logical partition, or LPAR, can run its own OS image, and can be turned on and off without impacting other LPARs. LPARs can have dedicated resources, or shared resources with other LPARs. The IBM z10 EC can have up to 60 LPARs. System p and System i, now merged into the new "POWER Systems" product line, also support LPARs in this manner. Depending on the size of your LPAR, this could be for a single OS and application, or a single OS with lots of applications.
Address Spaces/Application Containers
This is the bus approach. You have a single OS that is shared by a set of application containers. z/OS does this with address spaces, all running under a single z/OS image, and for x86 there are products like [Parallels Virtuozzo Containers] that can run hundreds of Windows instances under a single Windows OS image, or hundreds of Linux images under a single Linux OS image. However, you cannot mix and match Windows with Linux, just as all the address spaces on z/OS have to be coded for the same z/OS level of the LPAR they run in.
The term "guests" were chosen to model this after the way hotels are organized. Each guest has a roomwith its own lockable entrance and privacy, but shared lobby, and in some countries, shared bathroomson every hall. This approach is used by z/VM, VMware and others. The z/VM operating system can handle any S/390-chip operating system guest, so you could have a mix ofz/OS, TPF, z/VSE, Linux and OpenSolaris, and even other z/VM levels running as guests. Many z/VM developers runin this "second level" mode to develop new versions of the z/VM operating system!
As part of the One Laptop Per Child [OLPC] development team (yes, I am a member of their open source community, and now have developer keys to provide contributions), I have been experimenting with Linux KVM. This was [folded into the base Linux 2.6.20 kernel] and is available to run Linux and Windows guest images. There is a nice write-up on [Wikipedia].
The key advantage of this approach is that you are back to the one-car-one-driver simplistic mode of thinking. Each guest can be turned on and off without impacting other applications. Each guest has its own OS image, so you can mix different OSes on the same server hardware. You can have your own customized kernel modules, levels of Java, etc. Externally, it looks like you are running dozens of applications on a single server, but internally, each application thinks it is the only one running on its own OS. This gives you a simpler coding model to base your test and development on.
Jeff is correct that running less than 10 percent average utilization across your servers is a crying shame, and that servers could be managed in a manner that raises their utilization so that fewer are needed. Just as people could carpool, or could take the bus to work, it just doesn't happen, and data centers are full of single-application servers.
VMware has an architectural limit of 128 guests per machine, and IBM is able to reach this with its beefiest System x3850 M2 servers, but most of the x86 machines from HP, Dell and Sun are less powerful, and only run a dozen or so guests. In all cases, fewer servers means simpler management, so more applications per server is always the goal in mind.
VMware can soak up 30 to 40 percent of the cycles, meaning the most you can get from a VMware-based solution is 60 to 70 percent CPU utilization (which is still much better than the typical 5 to 10 percent average utilization we see today!). z/VM has been finely tuned to incur as little as 7 percent overhead, so IBM can achieve up to 93 percent utilization.
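Put in numbers, the hypervisor overhead sets a ceiling on guest-usable cycles; a quick comparison using the figures above:

    # Ceiling on guest-usable CPU after hypervisor overhead (figures above).
    for name, overhead in (("VMware, worst case", 0.40),
                           ("VMware, best case", 0.30),
                           ("z/VM", 0.07)):
        print(f"{name}: up to {(1 - overhead) * 100:.0f} percent for guests")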
Jeff argues that since many of the z/OS technologies that allow customers to get over 90 percent utilization don't apply to Linux guests under z/VM, all of the numbers must be wrong. My point is that there are two ways to achieve 90 percent utilization on the mainframe: one is through z/OS running many applications on a single LPAR (the application container approach), and the other is through z/VM supporting many Linux OS images, each with one (or a few) applications (the virtual guest approach).
I am still gathering more research on this topic, so I will try to have it ready later this week.
I am saddened to learn that one of my favorite comedians, [George Carlin], passed away yesterday. He was famous for a skit about "seven words" you could not say on television. A few of those came to mind in the response I got from my post [Yes, Jon, There is a mainframe that can help replace 1500 x86 servers], which attempted to provide an answer to a simple question about the IBM System z10 Enterprise Class (EC) mainframe.
Jon: So, where is the 1500 number coming from? Tony: I’ll investigate and get back to you.
My post tried to explain how IBM estimated that number. However, my fellow blogger from Sun, Jeff Savit, posted on his blog [No, there isn't a Santa Claus] in response. (If Sun's shareholders are expecting anything other than a [lump of coal] under the tree this year, they should probably read Sun's press release about their last [financial results].) A few others contacted me about this also, from a bunch of rather different angles, from reverse-engineering emulation of other companies' chipsets to my use of internal codenames. (There are now MORE than seven words I can't type in this blog!) Jon is just trying to gather information, but his [head hurts] from all of this debate.
This week I will try to clarify some of the confusion.
Two weeks ago, I mentioned in my post [Pulse 2008 - Day 2 Breakout sessions] that Henk de Ruiter from ABN Amro bank presented his success story implementing Information Lifecycle Management (ILM) across his various data centers. I am no stranger to ABN Amro, having helped the "ABN" and "Amro" banks merge their mainframe data in 1991. Henk has agreed to let me share more of this success story with my readers here on my blog:
Back in December 2005, Henk and his colleagues had come to visit the IBM Tucson Executive Briefing Center (EBC) to hear about IBM products and services. At the time, I was part of our "STG Lab Services" team that performed ILM assessments at client locations. I explained to ABN Amro that the ILM methodology does not require an all-IBM solution, and that ILM could even provide benefits with their current mix of storage, software and service providers. The ABN Amro team liked what I had to say, and my team was commissioned to perform ILM assessments at three of their data centers:
Sao Paulo (Brazil)
Chicago, IL (USA)
Each data center had its own management, its own decision making, and its own set of issues, so we structured each ILM assessment independently. When we presented our results, we showed what each data center could do better with their existing mixed bag of storage, software and service providers, and also showed how much better their life would be with IBM storage, software and services. They agreed to give IBM a chance to prove it, and so a new "Global Storage Study" was launched to take the recommendations from our three ILM studies and flesh out the details to make a globally-integrated enterprise work for them. Once completed, it was renamed the "Global Storage Solution" (GSS).
Henk summarized the above with "I am glad to see Tony Pearson in the audience, who was instrumental in making this all happen." As with many client testimonials, he presented a few charts on who ABN Amro is today: the 12th largest bank worldwide, and the 8th largest in Europe. They operate in 53 countries and manage over a trillion euros in assets.
They have over 20 data centers, with about 7 PB of disk and over 20 PB of tape, both growing at 50 to 70 percent CAGR. About two-thirds of their operations are now outsourced to IBM Global Services; the remaining one-third is non-IBM equipment managed by a different service provider.
ABN Amro deployed IBM TotalStorage Productivity Center, various IBM System Storage DS family disk systems, SAN Volume Controller (SVC), Tivoli Storage Manager (TSM), Tivoli Provisioning Manager (TPM), and several other products. Armed with these products, they performed the following:
Clean Up. IBM uses the term "rationalization" to refer to the assignment of business value, to avoid confusion with the term "classification", which many in IT relate to identifying ownership and read and write authorization levels. Often, in the initial phases of an ILM deployment, a portion of the data is determined to be eligible for clean up, either to be moved to a lower-cost tier or deleted immediately. ABN Amro and IBM set a goal to identify at least 20 percent of their data for clean up.
New tiers. Rather than traditional "storage tiers", which are often just Tier 1 for Fibre Channel disk and Tier 2 for SATA disk, ABN Amro and IBM came up with seven "information infrastructure tiers" that incorporate service levels, availability and protection status. They are:
High-performance, highly-available disk with remote replication
High-performance, highly-available disk (no remote replication)
Mid-performance, high-capacity disk with remote replication
Mid-performance, high-capacity disk (no remote replication)
Non-erasable, non-rewriteable (NENR) storage employing a blended disk and tape solution
Enterprise virtual tape library with remote replication and back-end physical tape
Mid-performance physical tape
These tiers are applied equally across their mainframe and distributed platforms. All of the tiers are priced per "primary GB", so any additional capacity required for replication or point-in-time copies, either local or remote, is folded into the base price. ABN Amro felt a mission-critical application on Windows or UNIX deserves the same Tier 1 service level as a mission-critical mainframe application. Exactly!
Deployed storage virtualization for disk and tape. This involved the SAN Volume Controller and the IBM TS7000 series library.
Implemented workflow automation. The key product here is IBM Tivoli Provisioning Manager.
Started an investigation of HSM on distributed platforms. This would be policy-based space management to migrate less frequently accessed data to the TSM pool for Windows or UNIX data.
While the deployment is not yet complete, ABN Amro feels they have already recognized business value:
Reduced cost by identifying data that should be stored on lower tiers
Simplified management, consolidated across all operating systems (mainframe, UNIX, Windows)
Increased utilization of existing storage resources
Reduced manual effort through policy-based automation, which can lead to fewer human errors and faster adaptability to new business opportunities
Standardized backup and other operational procedures
Henk and the rest of ABN Amro are quite pleased with the progress so far, although recent developments, namely the takeover of ABN AMRO by a consortium of banks, mean that the model is so far implemented only in Europe. Further rollout depends on the storage strategy of the new owners. Nonetheless, I am glad that I was able to work with Henk, Jason, Barbara, Steve, Tom, Dennis, Craig and others to be part of this from the beginning, and to see it roll out successfully over the years.
IBM is hosting a webcast about storage for SAP environments. Learn how integrated IBM infrastructure solutions, specifically customized for your SAP environments, can help lower your business costs, increase productivity in SAP development and test tasks, and improve resource utilization. This will include discussion of archive solutions with WebDAV, ArchiveLink and DR550; the IBM Business Intelligence (BI) Accelerator; IBM support for SAP [Adaptive Computing]; and performance benchmark results. The session is intended for SAP and storage administrators, IT directors and managers.
Here are the details:
Date: Wednesday, June 18, 2008
Time: 11:00am EDT (8:00am for those of us in Arizona or California)
(I cannot take credit for coining the new term "bleg". I first saw this term used over on the [Freakonomics Blog]. If you have not yet read the book "Freakonomics", I highly recommend it! The authors' blog is excellent as well.)
For this comparison, it is important to figure out how much workload a mainframe can support, how much an x86 server can support, and then divide one by the other. Sounds simple enough, right? And what workload should you choose? IBM chose a business-oriented "data-intensive" workload using an Oracle database. (If you wanted instead a scientific "compute-intensive" workload, consider an [IBM supercomputer] instead, the most recent of which clocked in at over 1 quadrillion floating point operations per second, or one PetaFLOP.) IBM compares the following two systems:
Sun Fire X2100 M2, model 1220 server (2-way)
IBM did not pick a wimpy machine to compare against. The model 1220 is the fastest in the series, with a 2.8GHz x86-64 dual-core AMD Opteron processor, capable of running various levels of Solaris, Linux or Windows. In our case, we will use Oracle workloads running on Red Hat Enterprise Linux. All of the technical specifications are available at the [Sun Microsystems Sun Fire X2100] Web site. I am sure that there are comparable models from HP, Dell or even IBM that could have been used for this comparison.
IBM z10 Enterprise Class mainframe model E64 (64-way)
This machine can run a variety of operating systems as well, including Red Hat Enterprise Linux (RHEL). The E64 has four "multiple processor modules" called "processor books", for a total of 77 processing units: 64 central processors, 11 system assist processors (SAP) and 2 spares. That's right, spare processors: in case any of the others go bad, IBM has got your back. You can designate a central processor in a variety of flavors. For running z/VM and Linux operating systems, the central processors can be put into "Integrated Facility for Linux" (IFL) mode. On IT Jungle, Timothy Prickett Morgan explains the z10 EC in his article [IBM Launches 64-Way z10 Enterprise Class Mainframe Behemoth]. For more information on the z10 EC, see the 110-page [Technical Introduction], or read the specifications on the [IBM z10 EC] Web site.
In a shop full of x86 servers, there are production servers, test and development servers, quality assurance servers, standby idle servers for high availability, and so on. On average, these are only 10 percent utilized. For example, consider the following mix of servers:
125 Production machines running 70 percent busy
125 Backup machines running idle, ready for active failover in case a production machine fails
1250 machines for test, development and quality assurance, running at 5 percent average utilization
While [some might question, dispute or challenge this ten percent] estimate, it matches the logic used to justify VMware, XEN, Virtual Iron or other virtualization technologies. Running 10 to 20 "virtual servers" on a single physical x86 machine assumes a similar 5-10 percent utilization rate.
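Working the arithmetic on the server mix above shows where the ten percent figure comes from:

    # Weighted average utilization of the 1500-server mix listed above.
    busy = 125 * 0.70 + 125 * 0.00 + 1250 * 0.05   # server-equivalents busy
    print(busy)                                    # 150.0
    print(busy / 1500 * 100)                       # 10.0 percent average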
Note: The following paragraphs have been revised per comments received.
Now the math. Jon, I want to make it clear I was not involved in writing the press release, nor did I assist with these math calculations. Please, don't shoot the messenger! Remember the cartoon where two scientists in white lab coats are writing math calculations on a chalkboard, and in the middle it says "and then a miracle happens..." before the rest of the calculations continue?
In this case, the miracle is the number that compares one server hardware platform to another. I am not going to bore people with details like the number of concurrent processor threads or the differences between L1 and L3 cache. IBM used sophisticated tools and third-party involvement that I am not allowed to talk about, and I have discussed this post with lawyers representing four (now five) different organizations already, so for the purposes of illustration and explanation only, I have reverse-engineered a new z10-to-Opteron conversion factor of 6.866 z10 EC MIPS per GHz of dual-core AMD Opteron for I/O-intensive workloads running only 10 percent average CPU utilization. Business applications that perform a lot of I/O don't use their CPU as much as other workloads. For compute-intensive or memory-intensive workloads, the conversion factor may be quite different, like 200 MIPS per GHz, as Jeff Savit from Sun Microsystems points out in the comments below.
Keep in mind that each processor is different, and we now have Intel, AMD, SPARC, PA-RISC and POWER (and others); 32-bit versus 64-bit; dual-core and quad-core; and different co-processor chip sets to worry about. AMD Opteron processors come in different speeds, but we are comparing against the 2.8GHz model, so 1500 times 6.866 times 2.8 is 28,837. Since these would be running as Linux guests under z/VM, we add an additional 7 percent overhead, or 2,019 MIPS. We then subtract 15 percent for "smoothing", which is what happens when you consolidate workloads that have different peaks and valleys, or 4,326 MIPS. The end result is that we need a machine capable of 26,530 MIPS. Thanks to advances in "Hypervisor" technological synergy between the z/VM operating system and the underlying z10 EC hardware, the mainframe can easily run 90 percent utilized when aggregating multiple workloads, so a 29,477 MIPS machine running at 90 percent utilization can handle these 26,530 MIPS.
N-way machines, from a little 2-way Sun Fire X2100 to the mighty 64-way z10 EC mainframe, are called "Symmetric Multiprocessors". All of the processors or cores are in play, but sometimes they have to take turns, waiting for exclusive access to a shared resource, such as cache or the bus. When your car is stopped at a red light, you are waiting for your turn to use the shared "intersection". As a result, you don't get linear improvement, but rather diminishing returns. This is known generically as the "SMP effect", and IBM documents this in the [Large System Performance Reference]. While a 1-way z10 EC can handle 920 MIPS, the 64-way can only handle 30,657 MIPS. The 29,477 MIPS needed for the Sun X2100 workload can be handled by a 61-way, giving you three extra processors to handle unexpected peaks in workload.
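For readers who want the whole sizing chain in one place, here it is as a small Python check of the arithmetic above:

    # The sizing math from the two paragraphs above, end to end.
    raw    = round(1500 * 6.866 * 2.8)   # 28,837 MIPS of x86 workload
    zvm    = round(raw * 0.07)           # +7 percent z/VM overhead: 2,019
    smooth = round(raw * 0.15)           # -15 percent peak smoothing: 4,326
    needed = raw + zvm - smooth          # 26,530 MIPS
    print(needed)                        # 26530
    print(int(needed / 0.90))            # ~29,477 MIPS machine at 90 percent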
But are 1500 Linux guest images architecturally possible? A long time ago, David Boyes of [Sine Nomine Associates] ran 41,400 Linux guest images on a single mainframe using his [Test Plan Charlie], and IBM internally was able to get 98,000 images; in both cases these were on machines less powerful than the z10 EC. Neither of these tests ran I/O-intensive workloads, but extreme limits are always worth testing. The 1500-to-1 reduction in IBM's press release is edge-of-the-envelope as well, so in production environments, several hundred guest images are probably more realistic, and still offer significant TCO savings.
The z10 EC can handle up to 60 LPARs, and each LPAR can run z/VM, which acts much like VMware in allowing multiple Linux guests per z/VM instance. For 1500 Linux guests, you could have 25 guests on each of 60 z/VM LPARs, or 250 guests on each of six z/VM LPARs, or 750 guests on each of two LPARs. With z/VM 5.3, each LPAR can support up to 256GB of memory and 32 processors, so you need at least two LPARs to use all 64 engines. Also, there are good reasons to have different guests under different z/VM LPARs, such as separating development/test from production workloads. If you had to re-IPL a specific z/VM LPAR, it could be done without impacting the workloads on other LPARs.
To access storage, IBM offers N-port ID Virtualization (NPIV). Without NPIV, two Linux guest images could not access the same LUN through the same FCP port, because this would confuse the Host Bus Adapter (HBA), which IBM calls "FICON Express" cards. For example, Linux guest 1 asks to read LUN 587 block 32, and this is sent out a specific port, to a switch, to a disk system. Meanwhile, Linux guest 2 asks to read LUN 587 block 49. The disk system sends the data back to the z10 EC, which gives it to the correct z/VM LPAR, but then what? How does z/VM know which of the many Linux guests to give the data to? Both touched the same LUN, so it is unclear which made the request. To solve this, NPIV assigns virtual "World Wide Port Names" (WWPNs), up to 256 of them per physical port, so you can have up to 256 Linux guests sharing the same physical HBA port to access the same LUN. If you had 250 guests on each of six z/VM LPARs, and each LPAR had its own set of HBA ports, then all 1500 guests could access the same LUN.
Yes, the z10 EC machines support Sysplex. The concept is confusing, but "Sysplex" in IBM terminology just means that you can have LPARs either on the same machine or on separate mainframes, all sharing the same time source, whether this be a "Sysplex Timer" or the "Server Time Protocol" (STP). The z10 EC can run STP over 6 Gbps InfiniBand over distance. If you wanted all 1500 Linux guests to time stamp data identically, all six z/VM LPARs would need access to the shared time source. This can help in a re-do or roll-back situation for Oracle databases, to complete or back out "Units of Work" transactions. This time stamp is also used to form consistency groups in "z/OS Global Mirror", formerly called "XRC" for Extended Remote Copy. Currently, the "timestamp" on I/O applies only to z/OS and Linux and not other operating systems. (The time stamp is done through the CKD driver on Linux, and was contributed back to the open source community so that it is available from both Novell SUSE and Red Hat distributions.) For XRC to maintain consistency between z/OS and Linux, the Linux guests would need to access native CKD volumes, rather than VM minidisks or FCP-oriented LUNs.
Note: this is different than "Parallel Sysplex", which refers to having up to 32 z/OS images sharing a common "Coupling Facility" that acts as shared memory for applications. z/VM and Linux do not participate in "Parallel Sysplex".
As for the price, mainframes list for as little as "six figures" to as much as several million dollars, but I have no idea how much this particular model would cost. And, of course, this is just the hardware cost. I could not find the math for the $667 per server replacement you mentioned, so I don't have details on that. You would need to purchase z/VM licenses, and possibly support contracts for Linux on System z, to be fully comparable to all of the software license and support costs of the VMware, Solaris, Linux and/or Windows licenses you run on the x86 machines.
This is where a lot of the savings come from, as a lot of software is licensed "per processor" or "per core", and so software on 64 mainframe processors can be substantially less expensive than on 1500 processors or 3000 cores (a small license-math sketch follows the list below). IBM does "eat its own cooking" in this case. IBM is consolidating 3900 one-application-each rack-mounted servers onto 30 mainframes, for a ratio of 130-to-1, and getting amazingly reduced TCO. The savings are in the following areas:
Hardware infrastructure. It's not just servers, but racks, PDUs, etc. It turns out to be less expensive to incrementally add more CPU and storage to an existing mainframe than to add or replace older rack-em-and-stack-em servers with newer models of the same.
Cables. Virtual servers can talk to each other in the same machine virtually, such as over HiperSockets, eliminating many cables. NPIV allows many guests to share expensive cables to external devices.
Networking ports. Both LAN and SAN networking gear can be greatly reduced because fewer ports are needed.
Administration. We have universities that can offer a guest image for every student without having a major impact on the sys-admins, as the students can do much of their administration remotely, without having physical access to the machinery. Companies that use mainframes to host hundreds of virtual guests find reductions too!
Connectivity. Consolidating distributed servers in many locations to a mainframe in one location allows you to reduce connections to the outside world. Instead of sixteen OC3 lines for sixteen different data centers, you could have one big OC48 line to a single data center.
Software licenses. Licenses based on servers, cores or CPUs are reduced when you consolidate to the mainframe.
Floorspace. Generally, floorspace is not in short supply in the USA, but in other areas it can be an issue.
Power and Cooling. IBM has experienced significant reduction in power consumption and cooling requirements in its own consolidation efforts.
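To put a rough number on the software license point above, here is a back-of-the-envelope sketch; the $2,000-per-core price is a made-up placeholder, not an actual quote from any vendor.

```python
# Back-of-the-envelope license math for the "per core" point above.
# The $2,000/core price is a made-up placeholder, not a real quote.
PRICE_PER_CORE_USD = 2_000

x86_cores = 1_500 * 2          # 1500 dual-core servers = 3000 cores
mainframe_cores = 64           # one 64-way z10 EC

print(f"x86 licenses:       ${x86_cores * PRICE_PER_CORE_USD:,}")
print(f"mainframe licenses: ${mainframe_cores * PRICE_PER_CORE_USD:,}")
# 3000 cores vs 64 cores is a ~47x difference before any per-platform
# pricing differences are even taken into account.
```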
All of the components of DFSMS (including DFP, DFHSM, DFDSS and DFRMM) were merged into a single product, "DFSMS for z/OS", which is now an included element in the base z/OS operating system. As a result of these features, customers typically have 80 to 90 percent utilization on their mainframe disk. For the 1500 Linux guests, however, most of the DFSMS features of z/OS do not apply. These functions were not "ported over" to z/VM, nor to Linux on any platform.
Instead, the DFSMS concepts have been re-implemented into a new product called "Scale-Out File Services" (SOFS), which provides NAS interfaces to a blended disk-and-tape environment. The SOFS disk can be kept at 90 percent utilization because policies can place data, move data and even expire files, just like DFSMS does for z/OS data sets. SOFS supports standard NAS protocols such as CIFS, NFS, FTP and HTTP, and these could be accessed from the 1500 Linux guests over an Ethernet Network Interface Card (NIC), which IBM calls "OSA Express" cards.
Lastly, the IBM z10 EC is not emulating x86 or x86-64 interfaces for any of these workloads. No doubt IBM and AMD could collaborate to come up with an AMD Opteron emulator for the S/390 chipset, and load Windows 2003 right on top of it, but that would just result in all kinds of emulation overhead. Instead, Linux on System z guests can run comparable workloads. There are many Linux applications that are functionally equivalent or identical to their Windows counterparts. If you run Oracle on Windows, you could run Oracle on Linux. If you run MS Exchange on Windows, you could run Bynari on Linux and let all of your Outlook Express users not even know their Exchange server had been moved! Linux guest images can be application servers, web servers, database servers, network infrastructure servers, file servers, firewalls, DNS, and so on. For nearly any business workload you can assign to an x86 server in a datacenter, there is likely an option for Linux on System z.
Hope this answers all of your questions, Jon. These were estimates based on basic assumptions. This is not to imply that the IBM z10 EC and VMware are the only technologies that help in this area; you can certainly find virtualization on other systems and through other software. I have asked IBM to make public the "TCO framework" that sheds more light on this. As they say, "Your mileage may vary."
For more on this series, check out my earlier posts in this blog.
If in your travels, Jon, you run into someone interested in seeing how IBM could help consolidate rack-mounted servers over to a z10 EC mainframe, have them ask IBM for a "Scorpion study". That is the name of the assessment that evaluates a specific client situation and can then recommend a more accurate configuration estimate.
Yesterday's post [Software Programmers as Bees] was not meant as "career advice", but I certainly got some interesting email as if it was. Orson Scott Card was poking fun at the culture clash between software programmers and management/marketers, and I gave my perspective, having worked both types of jobs.
This is June. Many students are graduating from high school or college and looking for jobs. Some of these might be jobs just for the summer to make some spending money, and others might be jobs like internships to explore different career paths. I found both programming and marketing to be rewarding and interesting work, but each person is different.
There are a variety of ways to find out what your personality traits are, and then focus on those jobs or career paths that are best suited to those strengths. Here is an online [Typology Test] based on the work of psychologists Carl Jung and Isabel Briggs Myers. The result is a four-letter score that represents one of 16 possible personalities. For example, mine is "ENTP", which stands for "Extroverted, Intuitive, Thinking, Perceiving". You can find out other famous people that match your personality type. For ENTP, I am lumped together with fellow master inventor Thomas Edison, fellow author Lewis Carroll (Alice in Wonderland), cooking great Julia Child, comedians George Carlin and Rodney Dangerfield (I get no respect!), movie director Alfred Hitchcock, and actor Tom Hanks.
USA Today had an article ["CEOs value lessons from teen jobs"] which offers some career advice from successful business people. Of course, what worked for them may not work for you, based on different personality types. Here is an excerpt of the advice I thought most useful:
"If you are committed, you will be successful." (unfortunately, the reverse is also true: if you are successful,you will be asked to move to a different job)
"Tackle offbeat jobs. Challenge conventional wisdom within reason. Come into contact with people from all walks of life."
"Show an interest, demonstrate you want to be on the job."
"Never limit yourself. Look beyond to what needs to be done, or should be done. Then do it. Stretch. Go beyond what others expect."
"Find a job that forces you to work effectively with people. No matter what you end up doing, dealing with others will be critical."
"Bring your best to the table every day. Learn professional responsibility and how to handle difficult situations."
"Listen carefully to what customers want."
Before IBM, I ran my own business. If you are thinking, "Maybe I will start my own business instead?", you might want to see this advice from venture capitalist [Guy Kawasaki on Innovation]. While running your own business has advantages, like avoiding the issues of "working for the man", it has some disadvantages as well. It is certainly not as easy as some people make it seem.
Of course, things are a lot different nowadays than they were when these CEOs were teenagers. And the pace of change does not seem to be slowing down, either. Here is a presentation on [SlideShare.net] that helps bring into focus the realities of globalization:
A faithful reader of this blog, Tom, sent me a link to Orson Scott Card's article titled [PROGRAMMERS AS BEES (or, how to kill a software company)]. "Is there any truth in this?" Tom asked. Having worked both sides of this fence as I approach my 22-year anniversary at IBM, I guess I can venture some opinions on this piece. Let's start with this excerpt:
"The environment that nurtures creative programmers kills management and marketing types - and vice versa."
By this, he means "kills" in the UNIX sense, I imagine, and not the "Grand Theft Auto IV" sense. Different people solve problems differently. Some programmers have the luxury that they can often focus on a single platform, single chipset, single OS, and so on, but marketing types are trying to come up with messaging that appeals to a broad audience, from people with business backgrounds to others with more technical backgrounds, and that can be more challenging. For programmers, "creative" is an adjective; for marketers, it's a noun.
"Programming is the Great Game. It consumes you, body and soul. When you're caught up in it, nothing else matters."
True. As a storage consultant, I find myself writing code a lot, from small programs to scripts and even HTML code for this blog. When you are in your zone, working on something, you can easily lose track of time.
"Here's the secret that every successful software company is based on: You can domesticate programmers the way beekeepers tame bees. You can't exactly communicate with them, but you can get them to swarm in one place and when they're not looking, you can carry off the honey. You keep these bees from stinging by paying them money. More money than they know what to do with. But that's less than you might think."
I have never tamed bees, but many of my friends who are still programmers are motivated by factors other than maximizing their income, such as friendly co-workers, job security, casual attire, and interesting challenges. A few make more than they know what to do with; the rest have significant others who solve that problem for them.
"One way or another, marketers get control. But...control of what? Instead of finding assembly lines of productive workers, they quickly discover that their product is produced by utterly unpredictable, uncooperative, disobedient, and worst of all, unattractive people who resist all attempts at management."
False. Either marketing had control in the first place (à la Apple, Inc.) or they never did. "Control of what?" is the key phrase here.
"The shock is greater for the coder, though. He suddenly finds that alien creatures control his life. Meetings, Schedules, Reports. And now someone demands that he PLAN all his programming and then stick to the plan, never improving, never tweaking, and never, never touching some other team's code."
True. But if you don't like surprises, perhaps software engineering is not the right career path for you.
"The hive has been ruined. The best coders leave. And the marketers, comfortable now because they're surrounded by power neckties and they have things under control, are baffled that each new iteration of their software loses market share as the code bloats and the bugs proliferate. Got to get some better packaging. Yeah, that's it."
This one depends. I've seen teams survive and manage, with junior programmers stepping up to backfill leadership roles; other times, projects are scrapped or started anew elsewhere. As for marketers, it doesn't take much to get one baffled, does it?
Continuing my catch-up on past posts: Jon Toigo, on his DrunkenData blog, posted a ["bleg"] for information about deduplication. The responses come from the "who's who" of the storage industry, so I will provide IBM's view. (Jon, as always, you have my permission to post this on your blog!)
Please provide the name of your company and the de-dupe product(s) you sell. Please summarize what you think are the key values and differentiators of your wares.
IBM offers two different forms of deduplication. The first is the IBM System Storage N series disk system with Advanced Single Instance Storage (A-SIS), and the second is IBM Diligent ProtecTIER software. Larry Freeman from NetApp already explains A-SIS in the [comments on Jon's post], so I will focus on the Diligent offering in this post. The key differentiators for Diligent are:
Data agnostic. Diligent does not require content-awareness, format-awareness nor identification of backup software used to send the data. No special client or agent software is required on servers sending data to an IBM Diligent deployment.
Inline processing. Diligent does not require temporarily storing data on back-end disk to post-process later.
Scalability. Up to 1PB of back-end disk managed with an in-memory dictionary.
Data Integrity. All data is diff-compared for full 100 percent integrity. No data is accidentally discarded based on assumptions about the rarity of hash collisions.
InfoPro has said that de-dupe is the number one technology that companies are seeking today — well ahead of even server or storage virtualization. Is there any appeal beyond squeezing more undifferentiated data into the storage junk drawer?
Diligent is focused on backup workloads, which has the best opportunity for deduplication benefits. The two main benefits are:
Keeping more backup data available online for fast recovery.
Mirroring the backup data to another remote location for added protection. With inline processing, only the deduplicated data is sent to the back-end disk, and this greatly reduces the amount of data sent over the wire to the remote location.
Every vendor seems to have its own secret sauce de-dupe algorithm and implementation. One, Diligent Technologies (just acquired by IBM), claims that theirs is best because it collapses two functions — de-dupe then ingest — into one inline function, achieving great throughput in the process. What should be the gating factors in selecting the right de-dupe technology?
As with any storage offering, the three gating factors are typically:
Will this meet my current business requirements?
Will this meet my future requirements for the next 3-5 years that I plan to use this solution?
What is the Total Cost of Ownership (TCO) for the next 3-5 years?
Assuming you already have backup software operational in your existing environment, it is possible to determine the necessary ingest rate: how many "Terabytes per Hour" (TB/h) must be received, processed and stored from the backup software during the backup window. IBM intends to document its performance test results for specific software/hardware combinations to provide guidance for clients' purchase and planning decisions.
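To make that concrete, here is a minimal back-of-the-envelope calculation; the 30 TB backup set and 8-hour window are made-up numbers, not IBM figures.

```python
# A minimal ingest-rate estimate, assuming made-up numbers: 30 TB of
# nightly backup data and an 8-hour backup window.
backup_set_tb = 30.0
window_hours = 8.0

required_tb_per_hour = backup_set_tb / window_hours
print(f"Required ingest rate: {required_tb_per_hour:.2f} TB/h "
      f"(~{required_tb_per_hour * 1e6 / 3600:.0f} MB/s)")
# Any inline deduplication appliance you evaluate must sustain at least
# this rate during the window, with headroom for data growth.
```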
For post-process deployments, such as the IBM N series A-SIS feature, the "ingest rate" during the backup only has to receive and store the data, and the rest of the 24-hour period can be spent doing the post-processing to find duplicates. This might be fine now, but as your data grows, you might find your backup window growing, and that leaves less time for the post-processing to catch up. IBM Diligent does the processing inline, so it is unaffected by an expansion of the backup window.
IBM Diligent can scale up to 1PB of back-end data, and the ingest rate does not suffer as more data is managed.
As for TCO, post-process solutions must have additional back-end storage to temporarily hold the data until the duplicates can be found. With IBM Diligent's inline methodology, only deduplicated data is stored, so less disk space is required for the same workloads.
Despite the nuances, it seems that all block level de-dupe technology does the same thing: removes bit string patterns and substitutes a stub. Is this technically accurate or does your product do things differently?
IBM Diligent emulates a tape library, so the incoming data appears as files to be written sequentially to tape. A file is a string of bytes. Unlike block-level algorithms that divide files into fixed chunks, IBM Diligent performs diff-compares of incoming data with existing data, and identifies ranges of bytes that duplicate what is already stored on the back-end disk. The file then becomes a sequence of "extents" representing either unique data or existing data, and is stored as a sequence of pointers to these extents. An extent can vary from 2KB to 16MB in size.
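As a rough mental model of that extent-based representation, here is a toy sketch; the field names are mine, and the real Diligent internals are certainly more sophisticated.

```python
# A toy model of the extent-based representation described above.
# Field names are mine; the real Diligent internals are not public.
from dataclasses import dataclass

@dataclass
class Extent:
    offset: int      # byte offset on back-end disk
    length: int      # 2 KB to 16 MB in the real product, per the post
    unique: bool     # False means "points at data already stored"

# An incoming "tape" file becomes an ordered list of extent pointers;
# only the unique extents consume new back-end disk space.
virtual_tape_file = [
    Extent(offset=0x0000_0000, length=4 * 2**20, unique=True),
    Extent(offset=0x7A00_0000, length=16 * 2**20, unique=False),  # dedup hit
    Extent(offset=0x0040_0000, length=2 * 2**10, unique=True),
]
new_bytes = sum(e.length for e in virtual_tape_file if e.unique)
print(f"New back-end bytes stored: {new_bytes:,}")
```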
De-dupe is changing data. To return data to its original state (pre-de-dupe) seems to require access to the original algorithm plus stubs/pointers to bit patterns that have been removed to deflate data. If I am correct in this assumption, please explain how data recovery is accomplished if there is a disaster. Do I need to backup your wares and store them off site, or do I need another copy of your appliance or software at a recovery center?
For IBM Diligent, all of the data needed to reconstitute the data is stored on back-end disks. Assuming that all of your back-end disks are available after the disaster, either the original or mirrored copy, then you only need the IBM Diligent software to make sense of the bytes written to reconstitute the data. If the data was written by backup software, you would also need compatible backup software to recover the original data.
De-dupe changes data. Is there any possibility that this will get me into trouble with the regulators or legal eagles when I respond to a subpoena or discovery request? Does de-dupe conflict with the non-repudiation requirements of certain laws?
I am not a lawyer, and certainly there are aspects of [non-repudiation] that may or may not apply to specific cases.
What I can say is that storage is expected to return a "bit-perfect" copy of the data that was written. There are laws against changing the format. For example, suppose an original document was in Microsoft Word format, but was converted and saved instead as an Adobe PDF file. It would be difficult to recreate the bit-perfect MS Word original from that PDF file. Laws in France and Germany specifically require that the original bit-perfect format be kept.
Based on that, IBM Diligent is able to return a bit-perfect copy of what was written, same as if it were written to regular disk or tape storage, because all data is diff-compared byte-for-byte with existing data.
In contrast, other solutions based on hash codes can have collisions that result in presenting a completely different set of data on retrieval. If the data you are trying to store happens to have the same hash code as completely different data already stored, the solution might just discard the new data as a "duplicate". The chance of collision might be rare, but could be enough to put doubt in the minds of a jury. For this reason, IBM N series A-SIS, which does perform hash code calculations, also does a full byte-for-byte comparison to ensure that the data is indeed a duplicate of an existing stored block.
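Here is a small sketch of the "hash first, then verify byte-for-byte" approach described above; it is purely illustrative, not the actual A-SIS implementation.

```python
# Sketch of "hash, then verify byte-for-byte" deduplication, the
# approach the post attributes to A-SIS. Purely illustrative.
import hashlib

store = {}  # sha256 digest -> stored block bytes

def dedupe_store(block: bytes) -> str:
    digest = hashlib.sha256(block).hexdigest()
    if digest in store:
        # A hash match alone is not proof: confirm byte-for-byte
        # before discarding the new block as a duplicate.
        if store[digest] == block:
            return digest              # true duplicate, store nothing
        raise RuntimeError("hash collision: must store separately")
    store[digest] = block
    return digest

dedupe_store(b"customer record 42")
dedupe_store(b"customer record 42")    # deduplicated, not stored twice
print(len(store), "unique block(s) stored")
```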
Some say that de-dupe obviates the need for encryption. What do you think?
I disagree. I've been to enough [Black Hat] conferences to know that it would be possible to read the data off the back-end disk, using a variety of forensic tools, and piece together strings of personal information, such as names, social security numbers, or bank account codes.
Currently, IBM provides encryption on real tape (both TS1120 and LTO-4 generation drives), and is working with open industry standards bodies and disk drive module suppliers to bring similar technology to disk-based storage systems. Until then, clients concerned about encryption should consider OS-based or application-based encryption from the backup software. IBM Tivoli Storage Manager (TSM), for example, can encrypt the data before sending it to the IBM Diligent offering, but this might reduce the number of duplicates found if different encryption keys are used.
Some say that de-duped data is inappropriate for tape backup, that data should be re-inflated prior to write to tape. Yet, one vendor is planning to enable an “NDMP-like” tape backup around his de-dupe system at the request of his customers. Is this smart?
Re-constituting the data back to the original format on tape allows the original backup software to interpret the tape data directly to recover individual files. For example, IBM TSM software can write its primary backup copies to an IBM Diligent offering onsite, and have a "copy pool" on physical tape stored at a remote location. The physical tapes can be used for recovery without any IBM Diligent software in the event of a disaster. If the IBM Diligent back-end disk images are lost, corrupted, or destroyed, IBM TSM software can point to the "copy pool" and be fully operational. Individual files or servers could be restored from just a few of these tapes.
An NDMP-like tape backup of a deduplicated back-end disk would require that all the tapes be intact, available, and fully restored to new back-end disk before the deduplication software could do anything. If a single cartridge from this set were unreadable or misplaced, it might impact access to many TBs of data, or render the entire system unusable.
In the case of 1PB of back-end disk for IBM Diligent, you would have to recover over a thousand tapes back to disk before you could recover any individual data from your backup software. Even with dozens of tape drives working in parallel, the complete process could take several days. This represents a longer "Recovery Time Objective" (RTO) than most people are willing to accept.
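The arithmetic behind that "several days" claim is easy to sketch; the drive count and throughput below are my own illustrative assumptions, roughly LTO-4-class native speed.

```python
# Rough RTO math for restoring 1 PB of deduplicated back-end disk from
# tape. Drive count and throughput are illustrative assumptions, not
# measured figures.
backend_pb = 1.0
drives = 24
mb_per_sec_per_drive = 120.0

total_mb = backend_pb * 1e9           # 1 PB expressed in MB (decimal)
seconds = total_mb / (drives * mb_per_sec_per_drive)
print(f"~{seconds / 86_400:.1f} days just to stage tapes back to disk")
# Roughly four days before the backup software can restore one file.
```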
Some vendors are claiming de-dupe is “green” — do you see it as such?
Certainly, "deduplicated disk" is greener than "non-deduplicated" disk, but I have argued in past posts, supportedby Analyst reports, that it is not as green as storing the same data on "non-deduplicated" physical tape.
De-dupe and VTL seem to be joined at the hip in a lot of vendor discussions: Use de-dupe to store a lot of archival data on line in less space for fast retrieval in the event of the accidental loss of files or data sets on primary storage. Are there other applications for de-duplication besides compressing data in a nearline storage repository?
Deduplication can be applied to primary data, as in the case of the IBM System Storage N series A-SIS. As Larry suggests, MS Exchange and SharePoint could be good use cases that represent the possible savings from squeezing out duplicates. On the mainframe, many master-in/master-out tape applications could also benefit from deduplication.
I do not believe that deduplication products will run efficiently with "update in place" applications, that is, those with high levels of random writes for non-appending updates. OLTP and database workloads would not benefit from deduplication.
Just suggested by a reader: What do you see as the advantages/disadvantages of software based deduplication vs. hardware (chip-based) deduplication? Will this be a differentiating feature in the future… especially now that Hifn is pushing their Compression/DeDupe card to OEMs?
In general, new technologies are introduced in software first, and then, as implementations mature, move into hardware to improve performance. The same was true for RAID, compression, encryption, etc. The Hifn card does "hash code" calculations that do not benefit the current IBM Diligent implementation. Currently, IBM Diligent performs LZH compression in software, but certainly IBM could provide hardware-based compression with an integrated hardware/software offering in the future. Since IBM Diligent's inline process is so efficient, the bottleneck in performance is often the speed of the back-end disk. IBM Diligent can get an improved "ingest rate" using FC instead of SATA disk.
Sorry, Jon, that it took so long to get back to you on this, but since IBM had just acquired Diligent when you posted, it took me a while to investigate and research all the answers.
I'm glad to be back home in Tucson for a few weeks. All of these conferences kept me from keeping up with what was going on in the blogosphere.
A few of us at IBM found it odd that EMC would announce their new Geographically Dispersed Disaster Restart (GDDR) the week BEFORE their "EMC World" conference. Why not announce all of the stuff at once at the conference? Were they worried that the admission that "Maui" software is still many months away would carry that much of a negative stigma? The decision probably went something like this:
EMCer #1: GDDR is finally ready. Should we announce now, or wait ONE week to make it part of the things we announce at EMC World?
EMCer #2: We are not announcing much at EMC World, and what people really want us to talk about, Maui, we aren't delivering for a while. Why can't people understand we are a company of hardware engineers, not software programmers! So, better not be associated with that quagmire at all.
EMCer #1: Yes, boss, I see your point. We'll announce this week then.
My fellow blogger and intellectual sparring partner, Barry Burke, on his Storage Anarchist blog, posted [are you wasting money on your mainframe dr solution?] to bring up the GDDR announcement. The key difference is that IBM GDPS works with IBM, EMC and HDS equipment, being the fair-and-balanced folks that IBM clients have come to expect, but it appears EMC GDDR works only with EMC equipment. Because GDDR does less, it also costs less. I can accept that. You get what you pay for. Of course, IBM does have a variety of protection levels, one of which will probably meet your budget and your business continuity needs.
To correct Barry's misperception, companies that buy IBM mainframe servers do have a choice. They can purchase their operating system from IBM, get their Linux or OpenSolaris from someone else like Red Hat or Novell, or build their own OS distribution from readily available open source. And unlike other servers that might require at least one OS partition from the vendor, IBM mainframes can run 100 percent Linux. GDPS supports a mix of OS data; z/OS and Linux data can all be managed by GDPS. Companies that own mainframes know this. I can forgive the misperception from Barry, as EMC is focused on distributed servers instead, and many in their company may not have much exposure to mainframe technology, or have ever spoken to mainframe customers.
But what almost had me fall out of my chair was this little nugget from his post:
"If you're an IBM mainframe customer, you are - by definition - IBM's profit stream."
Honestly, is there anyone out there who does not realize that IBM is a for-profit corporation? In contrast, Barry would like his readers to believe that EMC is selling GDDR at cost, and that EMC is a non-profit organization. While IBM has been delivering actual solutions that our clients want, EMC continues to hint that someday they might get around to offering something worthwhile. In the last six months, the shareholders have interpreted both strategies for what they really are, and the stock prices reflect that:
(Disclosure: I own IBM stock. I do not own EMC stock. Stock price comparisons by Yahoo were based on publicly reported information. The colors blue and red, representing IBM and EMC respectively, were selected by Yahoo's graph-making facility. The color red does not necessarily imply EMC is losing money or having financial troubles.)
Of course, I for one would love to help Barry's dream of EMC non-profitability come true. If anyone has any suggestions how we can help EMC approach this goal, please post a comment below.
Well, it's Tuesday again, and we had several announcements this month, so here is a quick recap. We had some things announced May 13, and then some more announcements today, but since I was busy with conferences, I will combine them into one post for the entire month of May 2008.
This time, I thought I would go "audio" with a recording from Charlie Andrews, IBM director of product marketing for IBM System Storage:
Today was a special day! IBM launched the world's first "Global Archive Solutions Center" in Guadalajara, Mexico. We had a formal "ribbon cutting"; shown here are the following dignitaries (from left to right):
Eugenio Godard, IBM Guadalajara site level executive
Andy Monshaw, IBM General Manager of IBM System Storage
Cindy Grossman, IBM VP of Tape and Archive solutions
Luis Guillermo Martinez Mora, Secretary of economic development for the state of Jalisco, Mexico
José Décurnex, IBM General Manager for the country of Mexico
In the morning, we had a series of speeches from Cindy Grossman, Andy Monshaw, Eugenio Godard, and Federico Lepe (technology advisor for the governor for the state of Jalisco, Mexico).
While the hordes of press journalists, analysts and clients were taking the lab tour, we took a snap of the front entrance. The day was packed with activity.
After the lab tour, IBMers Clod Barrera and Craig Butler presented to the analysts.
Cindy Grossman explained why IBM created a solutions center specific to archive solutions, and why we chose Guadalajara for its location.
I presented the pains and challenges companies are facing, and why they should partner with IBM for archive solutions to address those requirements.
Harley Puckett and I split the group. Harley is my colleague at the IBM Tucson Executive Briefing Center who has been the focal point for the various aspects of this launch for the past eight months. He presented and moderated the presentations and demos to a collection of prospective clients.
That's me on the left, with Harley on the right.
I moderated a series of speakers to press and analysts. These included:
Mark LaBelle, Spectrum Health server and storage manager, and Steve Lawrence, Spectrum Health image solutions architect, presented their success story using IBM Grid Medical Archive Solution (GMAS). [Spectrum Health] manages seven hospitals and 130 service locations in Michigan, USA.
Mark Uren, ABSA technical architect, presented their success story working with IBM in deploying their Information Lifecycle Management (ILM), which includes Enterprise Content Management and archiving. Mark flew in all the way from Johannesburg, South Africa. [ABSA] is the financial services subsidiary of Barclays serving the African continent.
Jeffrey Beallor, president of [Global Data Vaulting], presented his success story as both a client and IBM Business Partner, offering backup and archiving solutions through a "Software as a Service" (SaaS) business model. Global Data Vaulting has its data centers in Canada, but provides services to clients worldwide.
We had a Q&A panel with the company representatives from Spectrum Health, ABSA, and Global Data Vaulting; followed by a Q&A panel with the collection of IBM executives to take questions from the press and analysts.Special thanks to Cyntia, Daniela, Carlos, Raul and Salvador for their help in making this a successful event!
(All three photos on this blog post were taken by Mauricio, a professional photographer IBM hired for this event.)
Continuing my summary of Pulse 2008, the premiere service management conference focusing on IBM Tivoli solutions, I attended and presented breakout sessions on Monday afternoon.
Tivoli Storage "State-of-the-Subgroup" update
Kelly Beavers, IBM director of Tivoli Storage, presented the first breakout for the entire Tivoli Storage subgroup. Tivoli has several subgroups, but Tivoli Storage leads the others in revenues and profits. Tivoli Storage has the top-performing business partner channel of any subgroup in IBM's Software Group division. IBM is the world's #1 storage vendor (hardware, software and services), so this came as no surprise to most of the audience.
Looking at just the Storage Software segment, it is estimated that customers will spend $3.5 billion US dollars more in the year 2011 than they did last year, in 2007. IBM is #2 or #3 in each of the four major categories: Data Protection, Replication, Infrastructure Management, and Resource Management. In each category, IBM is growing market share, often taking share away from the established leaders.
There was a lot of excitement over the FilesX acquisition. I am still trying to learn more about this, but what I have gathered so far is that it can:
Like turning a "knob", you can adjust the level of backupprotection from traditional discrete scheduled backups, to morefrequent snapshots, to continuous data protection (CDP). Inthe past, you often used separate products or features to dothese three.
Perform "instantaneous restore" by performing a virtualmount of the backup copy. This gives the appearance that therestore is complete.
This year marks the 15th anniversary of IBM Tivoli Storage Manager (TSM), with over 20,000 customers. Also, this year marks the 6th year for IBM SAN Volume Controller, having sold over 12,000 SVC engines to over 4,000 customers.
Data Protection Strategies
Greg Tevis, IBM software architect for Tivoli Technical Strategy, and I presented this overview of data protection. We covered three key areas:
Protecting against unethical tampering with Non-erasable, Non-rewriteable (NENR) storage solutions
Protecting against unauthorized access with encryption on disk and tape
Protecting against unexpected loss or corruption with the seven "Business Continuity" tiers
There was so much interest in the first two topics that we only had about 9 minutes left to cover the third! Fortunately, Business Continuity will be covered in more detail throughout the week.
Henk de Ruiter from ABN AMRO bank presented his success story implementing Information Lifecycle Management (ILM) across his various data centers using IBM systems, software and services.
Making your Disk Systems more Efficient and Flexible
I did not come up with the titles of these presentations. The team that did chose specifically to focus on the "business value" rather than the "products and services" being presented. In this session, Dave Merbach, IBM software architect, and I presented how SAN Volume Controller (SVC), TotalStorage Productivity Center, System Storage Productivity Center, Tivoli Provisioning Manager and Tivoli Storage Process Manager work together to make your disk storage more efficient and flexible.
I attended the main tent sessions on Day 2 (Monday). The focus was on Visibility, Control and Automation.
Steve Mills is IBM senior VP and Group Executive of the IBM Software Group. He presented some insightful statistics from the IBM Global Technology Outlook study, some recent IBM wins, and other nuggets of IT trivia:
In 2001, there were about 60 million transistors per human being. By 2010, this is estimated to increase to one billion per human.
In 2005, there were about 1.3 billion RFID tags; by 2010, this is estimated to grow to over 30 billion.
IBM helped the City of Stockholm, Sweden, reduce traffic congestion 20-25% using computer technology
Only about 25% of data is original; the remaining 75% is replicated.
In 2007, there were approximately 281 Exabytes (EB) of data, expected to increase to 1800 EB by the year 2011.
70 percent of unstructured data is user-created content, but 85 percent of this will be managed by enterprises
Only 20% of data is subject to compliance rules and standards, and about 30% subject to security applications
Human error is the primary reason for breaches, with 34% of organizations experiencing a major breach in 2006.
10% of the IT budget is energy costs (power and cooling), and this could rise to 50% in the next decade.
30 to 60 percent of energy is wasted. During the next 5 years, people will spend as much on energy as they will on new hardware purchases.
Al Zollar is the General Manager of IBM Tivoli. He discussed the twenty-some recent software acquisitions, including Encentuate and FilesX earlier this year.
"The time has come to fully industrialize operations" -- Al Zollar
What did Al mean by "industrialize"? This is the closed-loop approach of continuous improvement, including design, delivery and management.
Al used several examples from other industries:
Henry Ford used standardized parts and process automation. Assembly of an automobile went from 12 hours by master craftsmen to delivering a new Model T off an assembly line every 23 seconds.
Power generation was developed by Thomas Edison. A satellite picture showed the extent of the [Blackout of 2003 in Northeast US and Canada]. The time for the "smart grid" has arrived, making sensors and meters more intelligent. This allows non-essential IP-enabled appliances in our homes or offices to be turned off to reduce energy consumption.
[McCarran International Airport] integrated the management of 13,000 assets with IBM Tivoli Maximo Enterprise Asset Management (EAM) software, and was able to increase revenues through more accurate charge-back. Unlike traditional Enterprise Resource Planning (ERP) applications, EAM offers deep management of four areas: production equipment, facilities, transportation, and IT.
When compared to these other industries, management of IT is in its infancy. The expansion of [Web 2.0] and Service-Oriented Architecture [SOA] is driving this need. What people need is a "new enterprise data center" that IBM Tivoli software can help you manage across operational boundaries. IBM can integrate through open standards with management software from Cisco, Sun, Oracle, Microsoft, CA, HP, BMC Software, Alcatel-Lucent, and SAP. Together with our ecosystem of technology partners, IBM is meeting these challenges.
IBM clients have achieved return on investment from getting better control of their environments. This week there are client experience presentations from Sandia National Labs, Spirit AeroSystems, Bank of America, and BT Converged Communication Services.
Chris O'Connor used some of his staff as "actors" to show an incredible live demo of various Tivoli and Maximo products for the mythical launch of "Project Vitalize", the new online web store for a new "Aero Z bike" from the mythical VCA Bike and Motorcycle company.
Shoel Perelman played the role of "CIO". The CIO locked down all spending, and asked the IT staff to make the shift from bricks-and-mortar to web sales of this new product in 15 months. While the company and situation were mythical, all the products that were part of the live demo are readily available. The CIO had three goals:
What do we have? Where is it? What's connected to what? Traditionally, these questions would be answered from lists in spreadsheets. The CIO had a goal to deploy IBM Tivoli Application Dependency Discovery Manager (TADDM), which discovered all hardware and software, with an easy-to-understand view of how each piece serves the business applications.
Each of the teams has processes, and needed them consistent, repeatable, and tightly linked together. Time is often wasted on the phone coordinating IT changes. For this, the CIO had a goal to deploy Tivoli Change and Configuration Management Database (CCMDB) for "strict change control". The process dashboard is accessible to all teams, to see how all projects are progressing. There is also a Compliance dashboard, which identifies all changes by role, clearly spelling out who can do what.
There is a lot of computerized machinery, manufacturing assets and robotics. The CIO set a goal to "do more with existing people", and needed to automate key processes. A sales rep wanted to add a new distributor to the key web portal; this was all done through their "service catalog". When they needed to deploy a new application, they were able to find servers with available capacity and adjust using automatic provisioning. Thanks to IBM, the IT staff no longer get paged at 3 AM, and fewer days are spent in the "war room". They now have confidence that the launch will be successful.
Ritika Gunnar played the role of "Operations Manager". She highlighted five areas:
"Service viewer" dashboard with green/yellow/red indicators forall of their edge, application and datbase servers. This allowsher to get data 4-5 times faster and more accurate.
Tivoli Enterprise Portal eliminates bouncing around between various products.
Tivoli Common Reporting for CPU utilization of all systems helps find excess capacity using IBM Tivoli Monitoring.
On average, 85 percent of problems are caused by IT changes to the environment. IBM can help find dependencies, so that changes in one area do not impact other areas unexpectedly
Process automation shows changes that have been completed, are in progress, or are overdue. She can see all steps in a task or change request. A "workflow" automates all the key steps that need to be taken.
Laura Knapp played the role of "Facilities Manager". She wanted to see all processes that apply to her work using a role-based process dashboard. The advantage of using IBM is that it changes work habits, reduces overtime by 42 percent, and improves morale. The IT staff now works as a team, collaborates more, and jobs get done faster with fewer mistakes. Employees are online, accessing, monitoring and managing data quicker, in days not weeks.
IBM Tivoli Enterprise Portal (TEP) served as a common vehicle. She was able to pull up the floor plan online, displaying all of the managed assets and mapped features. With the temperature overlay from Maximo Spatial, she was able to review hot spots on the data center floor. Heat can cause servers to fail or shut down.
A power utilization chart at peak loads means she can now anticipate, predict and watch power consumption, and the team was able to justify replacement with newer, more energy-efficient equipment.
The CIO got back on stage, and explained the great success of the launch. They use web store usage tracking, security tools tracking all new registrations, and tracking of server and storage load. It now takes only hours, not weeks, to add new business partners and distributors. Tivoli Service Quality Assurance tools track all orders placed, processed, and shipped. Faster responsiveness is a competitive advantage. Their IT department is no longer seen as a stodgy group, but as a world-class organization.
The live demo showed how IBM can help clients with rapid decision making, speed and accuracy of change processes, and automation to take actions quickly. The result is a strong return on investment (ROI).
Liz Smith, IBM General Manager of Infrastructure Services, presented the results of an IBM survey of CEOs and CIOs asking questions like: What is the next big impact? Where are you investing? What will the new data center look like?
Among the key traits they found for companies of the future:
They were hungry for change
Innovative beyond customer imagination
Disruptive by nature
Genuine, not just generous
The IT infrastructure must be secure, reliable, and flexible. Taking care of the environment is a corporate responsibility, not just a way to reduce costs.
The five entry points for IBM Service Management are: Integrate, Industrialize, Discover, Monitor and Protect. IBM Service Management and compliance are critical for the Globally Integrated Enterprise, with repeatable, scalable and consistent processes that enable change through automated workflow. This reduces errors, risks and costs, and improves productivity. IBM has the talent, assets and experience to help any client get there.
Lance Armstrong lives in Austin, TX, where IBM Tivoli is headquartered, so he made a good choice as a keynote speaker. He is best known for winning seven "Tour de France" bicycle races in a row, but instead he gave an inspirational talk about how he survived cancer.
In 1996, Lance was diagnosed with cancer. Surprisingly, he said it was the greatest thing that happened to him, giving him new perspective on his life, family and the sport of bicycling. Back then, there wasn't a WebMD, Google or other Web 2.0 social networking site for Lance to better understand what he was going through, learn more about treatment options, or find others going through the same ordeal.
After his treatment, he was considered "damaged goods" by many of the leading European bicycle teams. So, he joined the US Postal Service team, not known for their wins, but often invited to sell TV rights to American audiences. Collaborating with his coaches and other members of his team, he revolutionized the sport, analyzed everything about the race, and built up morale. He won his first "yellow jersey" in 1999, and did so each year after for a total of seven wins.
Lance formed the [Livestrong foundation] to help other cancer survivors. Nike came to him and proposed donating 5 million "rubber bracelets", colored yellow to match his seven yellow jerseys and embossed with the name "Livestrong", that his foundation could then sell for one dollar apiece to raise funds. What some thought was a silly idea at first has started a movement. At the 2004 Olympics, many athletes from all nations and religious backgrounds wore these yellow bracelets to show solidarity with this cause. To date, the foundation has sold over 72 million yellow bracelets, and these have served to provide a symbol, a brand, a color identity, for his cause.
He explained that doctors have a standard speech for cancer survivors. As a patient, you can go out this doorway and never tell anyone, keeping the situation private; or you can go out the other doorway and tell everybody your story. Lance chose the latter, and he felt it was the best decision he ever made. He wrote a book titled [It's Not About the Bike: My Journey Back to Life].
His call to action for the audience: find out what you can do to make a difference. A million non-governmental organizations [NGO] have started in the past 10 years. Don't just give cash; also give your time and passion.
It seems like I just get out of one conference and into another. This week I am at Pulse 2008, which combines the best of IBM Tivoli and Maximo into one conference. Like many conferences, this one starts on Sunday and ends on Thursday.
We're at the Swan and Dolphin hotels at [Walt Disney World] in Orlando, Florida. I've been to several conferences in Orlando, but this is my first time at the Swan and Dolphin. (When I walked into the main lobby, I had a bout of "deja vu". IBM Lotusphere was here last year, and they had a complete replica made in Second Life!)
If you haven't been to Walt Disney World resorts, whether for a conference or vacation,there are two things you need to know:
Nothing is within a short "walking distance", you need to take a bus or boat to get anywhere
Despite this, you will be doing a lot of walking, so wear comfortable shoes!
Pulse encouraged everyone to blog and to post pictures onto Flickr; here are a few from Sunday:
Lou and Elizabeth from [Syclo], an IBM Business Partner
Mike and Megha from [Birlasoft] show off their accreditation
Greg Tevis explains FilesX, recently acquired by IBM
I'm glad this is the final day of the IBM Systems Technical Conference (STC08) here in Los Angeles. While I enjoyed the conference, one quickly reaches the saturation point with all the information presented.
XIV Architecture Overview
Before this conference, many of the attendees didn't understand IBM's strategy, didn't understand Web 2.0 and digital archive workloads, and didn't understand why IBM acquired XIV to offer "yet another disk system that serves LUNs to distributed server platforms." Brian Sherman changed all that!
Brian Sherman, IBM Advanced Technical Support (ATS), is part of the exclusive dedicated XIV technical team that installs these boxes at client locations, so he is very knowledgeable about the technical aspects of the architecture. He presented the current XIV-branded model that clients can purchase now in select countries, and what will change with the IBM-branded model when it becomes available worldwide.
Those who missed my earlier series on XIV can find those posts on this blog.
Beyond this, Brian gave additional information on how thin provisioning, storage pools, disk mirroring, consistency groups, management consoles, and microcode updates are implemented.
N series and VMware Deep Dive
Norm Bogard, IBM Advanced Technical Support, presented why the IBM N series makes such great disk storage for VMware deployments. This was clearly labeled as a "deep dive", so anyone who got lost in all of the acronyms could not blame Norm for misrepresentation.
IBM has been doing server virtualization for over 40 years, so it makes sense that IBM happens to be the number one reseller of VMware offerings. VMware ESX Server is a hypervisor that runs on an x86 host, and provides an emulation layer for "guest operating systems". Each guest can have one or more virtual disks, which are represented by VMware as VMDK files. VMware ESX Server accepts read/write requests from the guests, and forwards them on to physical storage. Many of VMware's most exciting features require storage to be external to the host machine. [VMotion] allows guests to move from one host to another, [Distributed Resource Scheduler (DRS)] allows a set of hosts to load-balance the guests across the hosts, and [High Availability (HA)] allows the guests on a failed host to be resurrected on a surviving host. All of these require external disk storage.
ESX Server allows up to 256 LUNs, attached via FCP and/or iSCSI, and up to 32 NFS mount points. Across LUNs, ESX Server uses the VMFS file system, a clustered file system, like IBM GPFS, that allows multiple hosts to access the same LUNs. ESX Server has its own built-in native multipathing driver, and even provides multipathing across FCP and iSCSI. In other words, you can have a LUN on an IBM System Storage N series that is attached over both FCP and iSCSI, so if the SAN switch or HBA fails, ESX Server can fail over to the iSCSI connection.
ESX Server can use the NFS protocol to access the VMDK files instead. While the default is only 8 NFS mount points, you can increase this to 32 mount points. NAS can take advantage of Link Aggregation Control Protocol [LACP] groups, what some call "trunking" or "EtherChannel". This is the ability to consolidate multiple streams onto fewer inter-switch Ethernet links, similar to what happens on SAN switches. For the IBM N series, IBM recommends a "fixed" path policy, rather than "most recently used".
IBM recommends disabling Snapshot schedules, and setting the Snap reserve to 0 percent. Why? A Snapshot of an ESX Server datastore has the VMDK files of many guests, all of which would have had to quiesce or stop to make the data "crash consistent" for the Snapshot of the datastore to even make any sense. So, if you want to take Snapshots, it should be something you coordinate with the ESX Server and its guest OS images, and not scheduled by the N series itself.
If you are running the NFS protocol to the N series, you can turn off the "access time" updates. In normal file systems, when you read a file, it updates the "access time" in the file directory. This can be useful if you are looking for files that haven't been read in a while, such as for software that migrates infrequently accessed files to tape. Assuming you are not doing that on your N series, you might as well turn off this feature, and reduce the unnecessary write activity to the IBM N series box.
ESX Server can also support "thin provisioning" on the IBM N series. There is a checkbox for "space reserved": checked means "thick provisioning" and unchecked means "thin provisioning". If you decide to use thin provisioning with VMware, you should consider setting AutoSize to automatically increase your datastore when needed, and setting snap autodelete to delete your oldest Snapshots first.
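As a mental model of that thin-provisioning-plus-AutoSize behavior, here is a toy sketch; the growth step and threshold are invented numbers, not N series defaults.

```python
# Toy model of the thin-provisioning behavior described above: the
# volume only consumes space as guests write, and an "autosize"-style
# policy grows it before it fills. Thresholds and step are made up.
class ThinVolume:
    def __init__(self, size_gb, autosize_step_gb=50, threshold=0.9):
        self.size_gb = size_gb
        self.used_gb = 0.0
        self.step = autosize_step_gb
        self.threshold = threshold

    def write(self, gb):
        self.used_gb += gb
        while self.used_gb > self.size_gb * self.threshold:
            self.size_gb += self.step   # grow before guests see "disk full"
            print(f"autosize: volume grown to {self.size_gb} GB")

vol = ThinVolume(size_gb=200)
vol.write(185)   # crosses the 90% mark, triggers one growth step
```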
The key advantage of using NFS rather than FCP or iSCSI is that it eliminates the use of the VMFS file system. The IBM N series has the WAFL file system instead, so you don't have to worry about the VMFS partition alignment issue. Most VMDKs are misaligned, so their performance is sub-optimal. If you can align each VMDK to a 32KB or 64KB boundary (depending on the guest OS), then you get better performance. WAFL does this for you automatically, but VMFS does not. For Windows guests, use "Windows PE" to configure correctly-aligned disks. For UNIX or Linux guests, use the "fdisk" utility.
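A quick way to reason about alignment is to check whether a partition's starting byte offset falls on the boundary; here is a sketch, assuming the 64KB case mentioned above.

```python
# Quick alignment check, assuming the 32 KB/64 KB boundaries mentioned
# above. A partition starting at the classic 32,256-byte offset (sector
# 63 x 512 bytes) fails; one starting at 65,536 bytes passes.
def is_aligned(start_offset_bytes: int, boundary_kb: int = 64) -> bool:
    return start_offset_bytes % (boundary_kb * 1024) == 0

print(is_aligned(63 * 512))    # False -- typical misaligned VMDK layout
print(is_aligned(65_536))      # True  -- aligned on a 64 KB boundary
```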
What Industry Analysts are saying about IBM
Vic Peltz gave a presentation highlighting the accolades from securities analysts, IT analysts, and news agencies about IBM and IBM storage products. For example, analysts like that IBM offers many of the exciting new technologies their clients are demanding, like "thin provisioning", RAID-6 double-drive protection, and SATA and Solid State Disk (SSD) drive technology. Analysts also like that IBM is open to non-IBM heterogeneous environments. Whereas EMC Celerra gateways support only EMC disk, IBM N series gateways and IBM SAN Volume Controller support a mix of IBM and non-IBM equipment.
Analysts also like IBM's "datacenter-wide" approach to issues like security and "Green IT". Rather than focusing on these issues with individual point solutions, IBM attacks these challenges with a complete "end-to-end" solution approach. A typical 25,000 square foot data center consumes $2.6 million USD in power and cooling today, and IBM has proven technologies to cut this cost in half. IBM's DS8000 on average consumes 26.5 to 27.8 percent less electricity than a comparable EMC DMX-4 disk system. IBM's tape systems consume less energy than comparable Sun or HP models.
IBM iDataPlex product technical presentation
Vallard Benincosa, IBM Technical Sales Specialist, presented the recently-announced [IBM System x iDataPlex]. This is designed for our clients that have thousands of x86 servers, that buy servers "racks at a time" to support Web 2.0 and digital archive workloads. The iDataPlex is designed for efficient power and cooling, rapid scalability, and usable server density.
iDataPlex is such a radical design departure that it might be difficult to describe in words. Most racks take up two floor tiles, each tile being 2 foot by 2 foot square. In that space, a traditional rack would have servers that are 19 inches wide slide in horizontally, with flashing lights and hot-swappable disks in the front, and all the power supplies, fans and networking connections in the back. Even with IBM BladeCenter, you have chassis in these racks, and then servers slide in vertically in the front, with all of the power supply, fan and networking connections in the back. To access these racks, you have to be able to open the door on both the front and back. And the cooling has to travel at least 26.5 inches from the front of the equipment to the back.
iDataPlex turns the rack sideways. Instead of two feet wide and four feet deep, it is four feet wide and two feet deep. This gives you two 19-inch columns to slide equipment into, and the air only has to travel 15 inches from front to back. Less distance makes cooling more efficient.
Next, iDataPlex makes the power cord the only thing in the back, controlled by an intelligent power distribution unit (iPDU) so you can turn the power off without having to physically pull the plug. Everything else is serviced from the front door. This means that the back door can now be an optional "Rear Door Heat Exchanger" [RDHX], which is filled with running water to make cooling the rack extremely efficient. Water from a coolant distribution unit (CDU) can supply about three to four RDHX doors.
Let's say you wanted to compare traditional racks with iDataPlex for 84 servers. You can put 42 "1U" servers in each of two racks; each rack requires 10 kVA (kilovolt-amperes), so you give each rack two 8.6 kVA feeds, that is four feeds total, and at $1500-2000 USD per month per feed, they will cost you $6000-8000. With iDataPlex, you can fit all 84 servers in one 20 kVA rack with only three 8.6 kVA feeds, saving you $1500-2000 USD per month.
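Here is the same feed arithmetic in a few lines of Python, taking the midpoint of the $1500-2000 monthly feed cost quoted above.

```python
# The rack-feed arithmetic from the paragraph above, with the post's
# own numbers (feeds at $1,500-2,000/month each; midpoint used here).
feed_cost_monthly = 1_750          # midpoint of $1,500-2,000 USD

traditional_feeds = 2 * 2          # two racks, two 8.6 kVA feeds each
idataplex_feeds = 3                # one 20 kVA rack, three feeds

saving = (traditional_feeds - idataplex_feeds) * feed_cost_monthly
print(f"Feeds: {traditional_feeds} vs {idataplex_feeds}, "
      f"saving about ${saving:,}/month for the same 84 servers")
```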
Fans are also improved. Fan efficiency is based on diameter, so the small fans in 1U servers aren't as effective as iDataPlex's 2U fans, saving about 12-49W per server. Whereas typical 1U server racks spend 10-20 percent of their energy on the fans, the iDataPlex spends only about 1 percent, saving 8 to 36 kWh per year per rack.
Each 2U chassis snaps into a single power supply and a bank of 2U fans. A "Y" power cord allows you to have one cord for two power supplies. A chassis can hold either two small server "flexnodes" or one big "flexnode". An iDataPlex rack can hold up to 84 small servers or 42 big servers. Since each "Y" cord can power up to four "flexnode" servers, you greatly reduce the number of PDU sockets taken, leaving some sockets available for traditional 1U switches.
The small "flexnode" server can have one 3.5 inch HDD, or two 2.5 inch HDD, either SAS or SATA, and the big "flexnode" can have twice these.If you need more storage, there is a 2U chassis that holds five 3.5 inch HDD or eight 2.5 inch HDD. These areall "simple-swappable" (servers must be powered down to pull out the drives). For hot-swappable drives, a 3Uchassis with twelve 3.5 inch SAS or SATA drives.
The small "flexnode" server has one [PCI Express] slot; the big servers have two. These could be used for [Myrinet] clustering. With only 25W of power, the PCI Express slots cannot support graphics cards.
The iDataPlex is managed using the "Extreme Cluster Administration Toolkit" [XCAT], an open source project, released under the Eclipse Public License, to which IBM contributes.
Finally was the concept of "pitch". This is the distance from the center of one "cold aisle" to the next "cold aisle". In typical data centers, a pitch is 9 to 11 tiles. With the iDataPlex it is only three tiles when using the RDHX doors, or six tiles without. Most data centers run out of power and cooling before they run out of floor space, so having more dense equipment doesn't help if it doesn't also use less electricity. Since the iDataPlex uses 40 percent less power and cooling, you can pack more racks per square foot of an existing data center floor with the existing power and cooling available. That is what IBM calls "usable density"!
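To make the pitch arithmetic concrete, here is a small sketch showing how many rows of racks fit on a hypothetical 30-tile-deep floor at each pitch (the room depth is my assumption, chosen only for illustration):

```python
# Rows of racks that fit at each "pitch" (tiles between cold aisle centers).
room_depth_tiles = 30   # hypothetical floor depth, for illustration only

for label, pitch in [("typical raised floor", 10),
                     ("iDataPlex without RDHX", 6),
                     ("iDataPlex with RDHX", 3)]:
    rows = room_depth_tiles // pitch
    print(f"{label:24s} pitch={pitch:2d} tiles -> {rows} rows of racks")
```

Of course, as noted above, the denser layout only pays off because the racks also draw less power per server.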
What Did You Say? Effective Questioning and Listening Techniques
Maria L. Anderson, IBM Human Resources Learning, gave this "professional development" talk. I deal with different clients every week, so I fully understand that there is a mix of art and science in crafting the right questions and listening to the responses. The focus was on how to ask better questions and improve understanding and communication during consultative engagements. This involves the appropriate mix of closed and open-ended questions, exchanging or prefacing as needed. This was a good overview of the ERIC technique (Explore, Refine, Influence, and Confirm).
Well, that wraps up my week here in Los Angeles. Special thanks to my two colleagues, Jack Arnold and Glenn Hechler, both from the Tucson Executive Briefing Center, who helped me prepare and review my presentations!
Continuing this week in Los Angeles, I went to some interesting sessions today at the Systems Technical Conference (STC08).
System Storage Productivity Center (SSPC) - Install and Configuration
Dominic Pruitt, an IBM IT specialist in our Advanced Technical Support team, presented SSPC and how to install and configure it. For those confused about the difference between TotalStorage Productivity Center and System Storage Productivity Center: the former is pure software that you install on a Windows or Linux server, and the latter is an IBM server, pre-installed with Windows 2003, TotalStorage Productivity Center software, the TPCTOOL command line interface, DB2 Universal Database, the DS8000 Element Manager, the SVC GUI and CIMOM, and the [PuTTY] rlogin/SSH/Telnet terminal application software.
Of course, the problem with having a server pre-installed with a lot of software is that there is always someone that wants to customize it further. For those who just want to manage their DS8000 disk systems, for example, it is possible to uninstall the SVC GUI, CIMOM and PuTTY, and re-install them later when you change your mind. As a general rule, it is not wise to mix CIMOMs on the same machine, as it might cause conflicts with TCP ports or Java level requirements, so if you want a different CIMOM than SVC, uninstall the SVC CIMOM first. For those who have SVC, the SSPC replaces the SVC Master Console, so you can safely turn off the SVC CIMOM on your existing SVC Master Consoles.
The base level is TotalStorage Productivity Center "Basic Edition", but you can add the Productivity Center for Disk, Data and Fabric components with license keys. You can also run Productivity Center for Replication, but IBM recommends adding processor and memory to do this (IBM offers this as an orderable option). Whether you have the TotalStorage software or SSPC hardware, Productivity Center has a cool role-to-groups mapping feature. You can create user groups, either on the Windows server, in Active Directory, or in another LDAP, and then map which roles should be assigned to users in each group.
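Conceptually, the role-to-groups mapping works something like the sketch below. The group and role names here are invented for illustration, not taken from the product:

```python
# Hypothetical sketch of role-to-group mapping: user groups defined in
# Windows, Active Directory, or another LDAP are mapped to roles.
# All group and role names below are made up for illustration.
role_mapping = {
    "SSPC-Admins": ["Superuser"],
    "Disk-Team":   ["Disk Administrator"],
    "Fabric-Team": ["Fabric Administrator"],
    "Help-Desk":   ["Monitor"],   # read-only visibility
}

def roles_for(user_groups):
    """Union of roles a user receives through group membership."""
    return {role for g in user_groups for role in role_mapping.get(g, [])}

print(roles_for(["Disk-Team", "Help-Desk"]))  # {'Disk Administrator', 'Monitor'}
```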
Since Productivity Center manages a variety of different disk systems, it has made an attempt to standardize some terminology. The term "storage pool" refers to an extent pool on the DS8000, or a managed disk group on the SAN Volume Controller. Since the DS8000 can support both mainframe CKD volumes and LUNs for distributed systems, the term "volume" refers to a CKD volume or LUN, and "disk" refers to the hard disk drive (HDD).
To help people learn Productivity Center, IBM offers single-day "remote workshops" that use Windows Remote Desktop to allow participants to install, customize and use the software with no travel required.
IBM Integrated Approach to Archiving
Dan Marshall, IBM global program manager for storage and data services on our Global Technology Services team, presented IBM's corporate-wide integration to support archive across systems, software and services. One attendee asked me why I was there, given that "archive" is one of my areas of subject matter expertise that I present often at the Tucson Executive Briefing Center. I find it useful to watch others present the material, even material that I helped to develop, to see a different slant or spin on each talking point.
Archive is one area that brings all parts of IBM together: systems, software and services. Dan provided a look at archive from the services angle, providing an objective, unbiased view of the different software and systems available to solve specific challenges.
Encryption Key Manager (EKM) Design and Implementation
Jeff Ziehm, IBM tape technical sales specialist, presented IBM's EKM software, how it works in a tape environment, and how to deploy it in various environments. Since IBM is all about being open and non-proprietary, the EKM software runs on Java on a variety of IBM and non-IBM operating systems. IBM offers the "keytool" command line interface (CLI) for the LTO4 and TS1120 tape systems, and the "iKeyMan" graphical user interface (GUI) for the TS1120. Since it runs on Java, IBM Business Partners and technical support personnel often just [download and install EKM] onto their own laptops to learn how to use it.
Virtual Tape Update
We had three presenters at this one. First, Jeff Mulliken, formerly from Diligent and now a full IBM employee, presented the current ProtecTier software with the HyperFactor technology; then Abbe Woodcock, IBM tape systems, compared Diligent with IBM's TS7520 and just-announced TS7530 virtual tape libraries; and finally Randy Fleenor, IBM tape sales leader, presented IBM's strategy going forward in tape virtualization.
Let's start with Diligent. The ProtecTier software runs on any x86-64 server with at least four cores and the correct Emulex host bus adapter (HBA) cards. Using Red Hat Enterprise Linux (RHEL) as a base, the ProtecTier software performs its deduplication entirely in-line at an "ingest rate" of 400-450 MB/sec. This is all possible using a 4GB memory-resident "dictionary table" that can map up to 1 PB of back-end physical storage, which could represent as much as 25PB of "nominal" storage. The server is then point-to-point or SAN-attached to Fibre Channel disk systems.
As we learned yesterday from Toby Marek's session, there are four ways to perform deduplication (the sketch after this list illustrates the chunk-based approaches):
full-file comparisons. Store only one copy of identical files.
fixed-chunk comparisons. Files are carved up into fixed-size chunks, and each chunk is compared or hashed to existing chunks to eliminate duplicates.
variable-chunk comparisons. Variable-length chunks are hashed or diffed to eliminate duplicate data.
content-aware comparisons. If you knew data was in Powerpoint format, for example, you could compare text, photos or charts against other existing Powerpoint files to eliminate duplicates.
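Here is a minimal Python sketch of the hash-based chunking idea, not any vendor's actual implementation. Full-file comparison is just the degenerate case where the chunk is the whole file:

```python
import hashlib

store = {}   # hash -> chunk data; our deduplicated repository

def dedupe_fixed(data, chunk_size=4096):
    """Split data into fixed-size chunks, keep only one copy of each
    unique chunk, and return the 'recipe' of hashes that rebuilds it."""
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # store only the first copy seen
        recipe.append(digest)
    return recipe

file_a = b"A" * 8192 + b"B" * 4096
file_b = b"A" * 8192 + b"C" * 4096   # shares its first two chunks with file_a

recipe_a, recipe_b = dedupe_fixed(file_a), dedupe_fixed(file_b)
print(len(store))   # 3 unique chunks stored, instead of 6
```

The weakness of fixed chunks is that inserting a single byte shifts every boundary after it, which is exactly what the variable-chunk approaches are designed to avoid.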
IBM System Storage N series Advanced Single Instance Storage (A-SIS) uses the fixed-chunk method, and Diligent uses variable-chunk comparisons. Diligent does this using "data profiling". For example, let's say most of my photographs are pictures of people, buildings, landscapes, flowers and IT equipment. When I back these up, the Diligent server "profiles" each, and determines if any existing data has a similar profile that might have at least 50 percent similar content. Diligent then reads in the data that is most likely similar, does a byte-for-byte ["diff" comparison], and creates variable-length chunks that are either identical or unique to sections of the existing data. The unique data is compressed with LZH and written to disk, and the sequential series of pointer segments representing the ingested file is written in a separate section on disk.
That Diligent can represent profiles for 1PB of data in as little as a 4GB memory-resident dictionary is incredible. By comparison, 10TB of data would require 10 million entries on a content-aware solution, and 1.25 billion entries for one based on hash-codes.
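Those entry counts fall out of assumed average object sizes. Assuming roughly 1 MB per tracked object for a content-aware solution and 8 KB per chunk for a hash-based one (my assumptions, but they reproduce the quoted figures):

```python
KB, MB, TB = 10**3, 10**6, 10**12
data = 10 * TB

print(data // MB)        # 10,000,000 entries at ~1 MB per object (content-aware)
print(data // (8 * KB))  # 1,250,000,000 entries at 8 KB per chunk (hash-based)
```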
Abbe Woodcock presented the TS7530 tape system that IBM announced on Tuesday. It has some advantages over the current Diligent offering:
Hardware-based compression (TS7520 and Diligent use software-based compression)
1200 MB/sec (faster ingest rate than Diligent)
1.7PB of SATA disk (more disk capacity than Diligent)
Support for i5/OS (Diligent's emulation of ATL P3000 with DLT7000 tapes not supported on IBM's POWER systems running i5/OS)
Ability to attach a real tape library
NDMP backup to tape
tape "shredding" (virtual equivalent of degaussing a physical tape to erase all previously stored data)
Randy Fleenor wrapped up the session telling us IBM's strategy going forward with all of the virtual tape systems technologies. In the meantime, IBM is working on "recipes" or "bundles", putting Diligent software with specific models of IBM System x servers and IBM System Storage DS4000 disk systems to avoid the "do-it-yourself" problems of its current software-only packaging.
Understanding Web 2.0 and Digital Archive Workloads
I got to present this in the last time slot of the day, just before everyone headed off to the [Westin Bonaventure hotel] for our big fancy barbecue dinner. Like my previous session on IBM Strategy, this session was more oriented toward a sales audience, but both garnered a huge turn-out and were well-received by the technical attendees.
This session was requested because these new applications and workloads are what is driving IBM to acquire small start-ups like XIV, deploy Scale-Out File Services (SOFS), and develop the innovative iDataPlex server rack.
The session was fun because it was a mix of explanation of the characteristics of Web 2.0 services; my own experience as a blogger and user of Google Docs, FlickR, Second Life and Tivo; and an exploration of how databases and digital archives will impact the growth in computing and storage requirements.
I'll expand on some of these topics in later blog posts.
My session was the first in the morning, at 8:30am, but managed to pack the room full of people. A few looked like they had just rolled in from Brocade's special get-together in Casey's Irish Pub the night before. I presented how IBM's storage strategy for the information infrastructure fits into the greater corporate-wide themes. To liven things up, I gave out copies of my book [Inside System Storage: Volume I] to those who asked or answered the toughest questions.
Data Deduplication and IBM Tivoli Storage Manager (TSM)
IBM's Toby Marek compared and contrasted the various data deduplication technologies and products available, and how to deploy them as the repository for TSM workloads. She is a software engineer for our TSM software product, and gave a fair comparison between IBM System Storage N series Advanced Single Instance Storage (A-SIS), IBM Diligent, and other solutions out in the marketplace. If you are going to combine technologies, then it is best to dedupe first, then compress, and finally encrypt the data. She also explained the many clever ways that TSM does data reduction at the client side, which greatly reduces the bandwidth traffic over the LAN, as well as reducing disk and tape resources for storage. This includes progressive "incremental forever" backup for file selection, incremental backups for databases, and adaptive sub-file backup. Because of these data reduction techniques, you may not get as much benefit as deduplication vendors claim.
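A minimal sketch of why the dedupe/compress/encrypt ordering matters: both deduplication and compression rely on finding redundancy, and encryption deliberately destroys it. (The XOR keystream below is a toy stand-in for real encryption, used only to make the point without extra libraries.)

```python
import os
import zlib

data = b"nightly backup of the same customer table " * 500   # redundant sample

# Right order: exploit the redundancy first, then encrypt the (small) result.
compressed = zlib.compress(data)

# Wrong order: "encrypt" first (toy XOR keystream), then try to compress.
key = os.urandom(len(data))
encrypted_first = bytes(a ^ b for a, b in zip(data, key))

print(len(data))                            # 21000 bytes of input
print(len(compressed))                      # small: redundancy exploited
print(len(zlib.compress(encrypted_first)))  # ~input size: looks random, won't shrink
```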
The Business Value of Energy-Efficient Data Centers
Scott Barielle did a great job presenting the issues related to the Green IT data center. He is part of the IBM "STG Lab Services" team that does energy efficiency studies for customers. It is not unusual for his team to find potential savings of up to 80 percent of the Watts consumed in a client's data center.
IBM has done a lot to make its products more energy efficient. For example, in the United States, most data centers are supplied three-phase 480V AC current, but this is often stepped down to 208V or 110V with power distribution units (PDUs). IBM's equipment allows for direct connection to this 480V, eliminating the step-down loss. This is available for the IBM System z mainframe, the IBM System Storage DS8000 disk system, and larger full-frame models of our POWER-based servers, and will probably be rolled out to some of our other offerings later this year. The end result saves 8 to 14 percent in energy costs.
Scott had some interesting statistics. Typical US data centers spend only about 9 percent of their IT budget on power and cooling costs. The majority of clients that engage IBM for an energy efficiency study are not trying to reduce their operational expenditures (OPEX); rather, they have run out, or are close to running out, of the total kW rating of their current facility, and have been turned down by upper management for the average $20 million USD needed to build a new one. The cost of electricity in the USA has risen very slowly over the past 35 years, and is tied more to fluctuations in natural gas prices than to oil prices. (A recent article in the Dallas News confirmed this: ["As electricity rates go up, natural gas' high prices, deregulation blamed"])
Cognos v8 - Delivering Operational Business Intelligence (BI) on Mainframe
Mike Biere, author of the book [Business Intelligence for the Enterprise], presented Cognos v8 and how it is being deployed for the IBM System z mainframe. Typically, customers do their BI processing on distributed systems, but 70 percent of the world's business data is on mainframes, so it makes sense to do your BI there as well. Cognos v8 runs on Linux for System z, connecting to z/OS via [Hypersockets].
There are a variety of other BI applications on the mainframe already, including DataQuant, AlphaBlox, IBI WebFocus and SAS Enterprise Business Intelligence. In addition to accessing traditional online transaction processing (OLTP) repositories like DB2, IMS and VSAM using the [IBM WebSphere Classic Federation Server], Cognos v8 can also read Lotus databases.
Business Intelligence is traditionally query, reporting and online analytical processing (OLAP) for the top 10 to 15 percent of the company, mostly executives and analysts, for activities like business planning, budgeting and forecasting. Cognos PowerPlay stores numerical data in an [OLAP cube] for faster processing. OLAP cubes are typically constructed with a batch cycle, using either "Extract, Transform, Load" [ETL] or "Change Data Capture" [CDC], which plays to the strength of IBM System z mainframe batch processing capabilities. If you are not familiar with OLAP, Nigel Pendse has an article [What is OLAP?] for background information.
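To show what "storing data in a cube" means, here is a toy sketch (not Cognos PowerPlay itself): a batch pass pre-aggregates a fact table over every combination of dimensions, so later queries at any roll-up level become simple lookups. The sample data is invented.

```python
from collections import defaultdict
from itertools import product

facts = [  # (region, product, quarter, revenue) -- invented sample data
    ("East", "Widgets", "Q1", 120), ("East", "Gadgets", "Q1", 80),
    ("West", "Widgets", "Q1", 200), ("West", "Widgets", "Q2", 150),
]

# Batch "cube build": roll each fact up into every (dimension or ALL) combination.
cube = defaultdict(int)
for region, item, quarter, revenue in facts:
    for key in product((region, "ALL"), (item, "ALL"), (quarter, "ALL")):
        cube[key] += revenue

# Interactive queries are now constant-time lookups:
print(cube[("ALL", "Widgets", "Q1")])   # 320: Widgets in Q1 across all regions
print(cube[("West", "ALL", "ALL")])     # 350: everything sold in the West
```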
Over the past five years, BI has been deployed more and more for the rest of the company: knowledge workers tasked with doing day-to-day operations. This phenomenon is being called "Operational" Business Intelligence.
IBM's Glen Corneau, who is on the Advanced Technical Support team for AIX and System p, presented the IBM General Parallel File System (GPFS), which is available for AIX, Linux-x86 and Linux on POWER. Unfortunately, many of the questions were related to Scale Out File Services (SOFS), which my colleague Glenn Hechler was presenting in another room during this same time slot.
GPFS is now in its 11th release since its introduction in 1997. All of the IBM supercomputers on the [Top 500 list] use GPFS. The largest deployment of GPFS is 2241 nodes. A GPFS environment can support up to 256 file systems, and each file system can have up to 2 billion files across 2 PB of storage. GPFS supports "Direct I/O", making it a great candidate for Oracle RAC deployments. Oracle 10g automatically detects if it is using GPFS, and sets the appropriate DIO bits in the stream to take advantage of GPFS features.
Glen also covered the many new features of GPFS, such as the ability to place data on different tiers of storage, with policies to move data to lower tiers, or delete it after a certain time period, all concepts we call Information Lifecycle Management. GPFS also supports access across multiple locations and offers a variety of choices for disaster recovery (DR) data replication.
Perhaps the only problem with conferences like this is that it can be an overwhelming ["fire hose"] of information!
This week I'm in Los Angeles for the Systems Technology Conference (STC '08). We have over 1900 IT professionals attending, of which 1200 are IBMers from the North America, Latin America, and Asia Pacific regions, as well as another 350 IBM Business Partners. The rest, including me, are worldwide or from other areas.
Last January, IBM reorganized its team to be more client-focused. Instead of focusing on products, we are now client-centric, and have teams to cover our large enterprise systems through the direct sales force, business systems for sales through our channel business partners, and industry systems for specific areas like deep computing, digital surveillance and retail systems solutions.
In addition to 788 sessions to attend these next four days, we had a few main tent sessions. My third line (my boss' boss' boss) David Gelardi presented Enterprise Systems. This is the group I am in.
Akemi Watanabe presented for Business Systems. Her native language is Japanese, so to do an entire talk in English was quite impressive. Her focus is on SMB accounts, those customers with less than 1000 employees that are looking for easy-to-use solutions. She mentioned IBM's new [Blue Business Platform], which includes Lotus Foundations Start, an Application Integration Toolkit, and the Global Application Marketplace.
Part of this process is the merger of System p and System i into "POWER" systems, and then offering both midrange and enterprise versions of these that run AIX, i5/OS and Linux on POWER. It turns out that only 9 percent of our System i customers are only on this platform. Another 87 percent have Windows, so it makes sense to offer i5/OS on BladeCenter, to consolidate Windows servers from HP, Dell or Sun over to IBM.
Meanwhile, IBM's strategy to support Linux has proven successful. 25 percent of x86 servers now run Linux. IBM has 600 full-time developers for Linux, over 500 of whom contributed to the latest 2.6 kernel development. Our ["chiphopper"] program has successfully ported over 900 applications. There are now over 6500 applications that run on Linux, through our strategic alliances with the Red Hat (RHEL) and Novell (SUSE) distributions of Linux.
Her recommendation to SMB reps: learn POWER systems, BladeCenter, and Linux. I agree!
Mary Coucher presented Industry Systems. In addition to the game chips for the Sony Playstation, Nintendo Wii, and Microsoft Xbox-360, this segment focuses on Digital Video Surveillance (DVS), Retail Solutions, Healthcare and Life Sciences (HCLS), OEM and embedded solutions, and Deep Computing. She mentioned our recently announced iDataPlex solution.
IBM is focused on "real-world-aware" applications, which includes traffic, crime, surveillance, fraud, and RFID enablement. These are streams of data that happen in real-time, that need to be dealt with now, not later.
Most people know that IBM has the majority of the top 500 supercomputers, but few realize that IBM has also delivered solutions to the top 100 green companies. IBM's success is explained in more detail in this [Press Release].
The group split up into four different platform meetings: Storage, Modular, Power, and Mainframe. Barry Rudolph presented for the Storage platform. He talked about the explosion in information, business opportunities, risk and cost management. IBM has shifted from being product-focused, to the stack of servers and storage, to our latest focus on solutions across the infrastructure. He mentioned our DARPA win for [PERCS], which stands for productive, easy-to-use, reliable computing system.
My theme this week was to focus on "Do-it-Yourself" solutions, such as the "open storage" concept presented by Sun Microsystems, but it has morphed into a discussion on vendor lock-in. Both deserve a bit of further exploration.
There were several reasons offered as to why someone might pursue a "Do-it-Yourself" course of action.
Building up skills
In my post [Simply Dinners and Open Storage], I suggested that building a server-as-storage solution based on Sun's OpenSolaris operating system could serve to learn more about [OpenSolaris], and by extension, the Solaris operating system. Like Linux, OpenSolaris is open source and has distributions that run on a variety of chipsets, from Sun's own SPARC, to commodity x86 and x86-64 hardware. And as I mentioned in my post [Getting off the island], a version of OpenSolaris was even shown to run successfully on the IBM System z mainframe.
"Learning by Doing" is a strong part of the [Constructivism] movement in education. The One Laptop Per Child [OLPC] uses this approach. IBM volunteers in Tucson and 40 other sites [help young students build robots] constructed from [Lego Mindstorms] building blocks. Edward De Bono uses the term [operacy] to refer to the "skills of doing", preferred over just "knowing" facts and figures.
However, I feel OpenSolaris is late to the game. Linux, Windows and MacOS are all well-established x86-based operating systems that most home office/small office users would be familiar with, and OpenSolaris is positioning itself as "the fourth choice".
Familiarity
In my post [Washington Gets e-Discovery Wakeup Call], I suggested that the primary motivation for the White House to switch from Lotus Notes over to Microsoft Outlook was familiarity with Microsoft's offerings. Unfortunately, that also meant abandoning a fully-operational automated email archive system for a manual do-it-yourself approach of copying PST files from journal folders.
Familiarity also explains why other government employees might print out their emails and archive them on paper in filing cabinets. They are familiar with this process; it allows them to treat email in the same manner as they have treated paper documents in the past.
Cost, Control and Unique Requirements
The last category of reasons can often result if what you want is smaller or bigger than what is available commercially. There are minimum entry-points for many vendors. If you want something so small that it is not profitable, you may end up doing it yourself. On the other end of the scale, both Yahoo and Google ended up building their data centers with a do-it-yourself approach, because no commercial solutions were available at the time. (IBM now offers [iDataPlex], so that has changed!)
While you could hire a vendor to build a customized solution to meet your unique requirements, it might turn out to be less costly to do-it-yourself. This might also provide some added control over the technologies and components employed. However, as EMC blogger Chuck Hollis correctly pointed out for [Do-it-yourself storage], your solution may not be less costly than existing off-the-shelf solutions from existing storage vendors, when you factor in scalability and support costs.
Of course, this all assumes that storage admins building the do-it-yourself storage have enough spare time to do so. When was the last time your storage admins had spare time of any kind? Will your storage admins provide the 24x7 support you could get from established storage vendors? Will they be able to fix the problem fast enough to keep your business running?
From this, I would gather that if you have storage admins more familiar with Solaris than Linux, Windows or MacOS, and select commodity x86 servers from IBM, Sun, HP, or Dell, they could build a solution that has less vendor lock-in than something off-the-shelf from Sun. Let's explore the fears of vendor lock-in further.
The storage vendor goes out of business
Sun has not been doing so well, so perhaps "open storage" was a way to warn existing Sun storage customers that building your own may be the next alternative. The title of the New York Times article says it all: ["Sun Microsystems Posts Loss and Plans to Reduce Jobs"]. Sun is a big company, so I don't expect them to close their doors entirely this year, but certainly the fear of being locked in to any storage vendor's solution gets worse if you fear the vendor might go out of business.
The storage vendor will get acquired by a vendor you don't like
We've seen this before. You don't like vendor A, so you buy kit from vendor B, only to have vendor A acquire vendor B after your purchase. Surprise!
The storage vendor will not support new applications, operating systems, or other new equipment
Here the fear is that the decisions you make today might prevent you from choices you want to make in the future. You might want to upgrade to the latest level of your operating system, but your storage vendor doesn't support it yet. Or maybe you want to upgrade your SAN to a faster bandwidth speed, like 8 Gbps, but your storage vendor doesn't support it yet. Or perhaps that change would require re-writing lots of scripts using the existing command line interfaces (CLI). Or perhaps your admins would require new training for the new configuration.
The storage vendor will raise prices or charge you more than you expect on follow-on upgrades
For most monolithic storage arrays, adding additional disk capacity means buying it from the same vendor as the controller. I heard of one company recently who tried to order an entry-level disk expansion drawer, at a lower price, solely to move the individual disk drives into a higher-end disk system. Guess what? It didn't work. Most storage vendors would not support such mixed configurations.
If you are going to purchase additional storage capacity for an existing disk system, it should cost no more than the capacity price rate of your original purchase. IBM offers upgrades at the going market rate, but not all competitors are this nice. Some take advantage of the vendor lock-in, charging more for upgrades and pocketing the difference as profit.
Vendor lock-in represents the obstacles in switching vendors in the event the vendor goes out of business, fails to support new software or hardware in the data center, or charges more than you are comfortable with. These obstacles can make it difficult to switch storage vendors, upgrade your applications, or meet other business obligations. IBM SAN Volume Controller and TotalStorage Productivity Center can help reduce or eliminate many of these concerns. IBM Global Services can help you, as much or as little as you want, in this transformation. Here are the four levels of the do-it-yourself continuum:
Let me figure it out myself
Tell me what to do
Help me do it
Do it for me
This is the self-service approach. Go to our website, download an [IBM Redbook], figure out what you need, and order the parts to do-it-yourself.
IBM Global Business Services can help you understand your business requirements and tell you what you need to meet them.
IBM Global Technology Services can help design, assemble and deploy a solution, working with your staff to ensure skill and knowledge transfer.
IBM Managed Storage Services can manage your storage, on-site at your location, or at an IBM facility. IBM provides a variety of cloud computing and managed hosting services.
So, if you are currently a Sun server or storage customer concerned about these latest Sun announcements, give IBM a call, we'll help you switch over!
He feels I was unfair to accuse EMC of "proprietary interfaces" without spelling out what I was referring to. Here are just two, along with the whines we hear from customers that relate to them.
EMC Powerpath multipathing driver
Typical whine: "I just paid a gazillion dollars to renew my annual EMC Powerpath license, so you will have to come back in 12 months with your SVC proposal. I just can't see explaining to my boss that an SVC eliminates the need for EMC Powerpath, throwing away all the good money we just spent on it, or to explain that EMC chooses not to support SVC as one of Powerpath's many supported devices."
EMC SRDF command line interface
Typical whine: "My storage admins have written tons of scripts that all invoke EMC SRDF command line interfaces to manage my disk mirroring environment, and I would hate for them to re-write this to use IBM's (also proprietary) command line interfaces instead."
Certainly BarryB is correct that IBM still has a few remaining "proprietary" items of its own. IBM has been in business over 80 years, but it was only in the last 10-15 years that IBM made a strategic shift away from proprietary and over to open standards and interfaces. The transformation to "openness" is not yet complete, but we have made great progress. Take these examples:
The System z mainframe - IBM had opened the interfaces so that both Amdahl and Fujitsu made compatible machines. Unlike Apple, which forbids cloning of this nature, IBM is now the single source for mainframes because the other two competitors could not keep up with IBM's progress and advancements in technology.
Update: Due to legal reasons, the statements referring to Hercules and other S/390 emulators havebeen removed.
The z/OS operating system - While it is possible to run Linux on the mainframe, most people associate the z/OS operating system with the mainframe. This was opened up with UNIX System Services to satisfy requests from various governments. It is now a full-fledged UNIX operating system, recognized by the [Open Group], which certifies it as such.
As BarryB alludes, the unique interfaces for disk attachment to System z, known as Count-Key-Data (CKD), were published so that both EMC and HDS can offer disk systems to compete with IBM's high-end disk offerings. Linux on System z supports standard Fibre Channel, allowing you to attach an IBM SVC and anyone's storage. Both z/OS and Linux on System z support NAS storage, so IBM N series, NetApp, even EMC Celerra could be used in that case.
The System i itself is still proprietary, but recently IBM announced that it will now support the standard block size (512 bytes) instead of the awkward 528 byte blocks that only IBM and EMC support today. That means that any storage vendor will be able to sell disk into the System i environment.
Advanced copy services, like FlashCopy and Metro Mirror, are as proprietary as the similar offerings from EMC and HDS, with the exception that IBM has licensed them to both EMC and HDS. Thanks to cross-licensing, you can do [FlashCopy on EMC] equipment. Getting all the storage vendors to agree to open standards for these copy services is still work in progress under [SNIA], but at least people who have coded z/OS JCL batch jobs that invoke FlashCopy utilities can work the same between IBM and EMC equipment.
So for those out there who thought that my comment about EMC's proprietary interfaces in any way implied that IBM did not have any of its own, the proverbial ["pot calling the kettle black"] so to speak, I apologize.
BarryB shows off his [PhotoShop skills] with the graphic below. I take it as a compliment to be compared to an All-American icon of business success.
TonyP and Monopoly's Mr. Pennybags Separated at Birth?
However, BarryB meant it as a reference back to the days long ago when IBM was considered a monopoly of the IT industry, which, according to [IBM's History], ended in 1973. In other words, IBM stopped being a monopoly before EMC ever existed as a company, and long before I started working for IBM myself.
The anti-trust lawsuit that BarryB mentions happened in 1969, which forced IBM to separate some of the software from its hardware offerings, and prevented IBM from making various acquisitions for years to follow, forcing IBM instead into technology partnerships. I'm glad that's all behind us now!
Continuing my week's theme on how bad things can get following the "Do-it-yourself" plan, I start with James Rogers' piece in Byte and Switch, titled [Washington Gets E-Discovery Wakeup Call]. Here's an excerpt:
"A court filing today reveals there may be gaps in the backup tapes the White House IT shop used to store email. It appears that messages from the crucial early stages of the Iraq War, between March 1 and May 22, 2003, can't be found on tape. So, far from exonerating the White House staffers, the latest turn of events casts an even harsher light on their email policies.
Things are not exactly perfect elsewhere in the federal government, either. A recent [report from the Government Accountability Office (GAO)] identified glaring holes in agencies’ antiquated email preservation techniques. Case in point: printing out emails and storing them in physical files."
You might think that laws requiring email archives are fairly recent. For corporations, they began with laws like Sarbanes-Oxley, which the second President Bush signed into law back in 2002. However, it appears that laws requiring US Presidents to keep their emails have been in force since 1993, back when the first President Clinton was in office. (We might as well get used to saying this in case we have a "second" President Clinton next January!)
"The Federal Record Act requires the head of each federal agency to ensure that documents related to that agency's official business be preserved for federal archives. The Watergate-era Presidential Records Act augmented the FRA framework by specifically requiring the president to preserve documents related to the performance of his official duties. A [1993 court decision] held that these laws applied to electronic records, including e-mails, which means that the president has an obligation to ensure that the e-mails of senior executive branch officials are preserved.
In 1994, the Clinton administration reacted to the previous year's court decision by rolling out an automated e-mail-archiving system to work with the Lotus-Notes-based e-mail software that was in use at the time. The system automatically categorized e-mails based on the requirements of the FRA and PRA, and it included safeguards to ensure that e-mails were not deliberately or unintentionally altered or deleted.
When the Bush administration took office, it decided to replace the Lotus Notes-based e-mail system used under the Clinton Administration with Microsoft Outlook and Exchange. The transition broke compatibility with the old archiving system, and the White House IT shop did not immediately have a new one to put in its place.
Instead, the White House has instituted a comically primitive system called "journaling," in which (to quote from a [recent Congressional report]) "a White House staffer or contractor would collect from a 'journal' e-mail folder in the Microsoft Exchange system copies of e-mails sent and received by White House employees." These would be manually named and saved as ".pst" files on White House servers.
One of the more vocal critics of the White House's e-mail-retention policies is Steven McDevitt, who was a senior official in the White House IT shop from September 2002 until he left in disgust in October 2006. He points out what would be obvious to anyone with IT experience: the system wasn't especially reliable or tamper-proof."
So we have White House staffers manually creating PST files, and other government agencies printing out their emails and storing them in file cabinets. When I first started at IBM in 1986, before Notes or Exchange existed, we used PROFS on VM on the mainframe, and some of my colleagues printed out their emails and filed them in cabinets. I can understand how government employees, who might have grown up using mainframe systems like PROFS, might have just continued the practice when they switched to Personal Computers.
Perhaps the new incoming White House staff hired by George W. Bush were more familiar with Outlook and Exchange, and rather than learning to use IBM Lotus Notes and Domino, found it easier just to switch over. I am not going to debate the pros and cons of "Lotus Notes/Domino" versus "Microsoft Outlook/Exchange", as IBM has automated email archiving systems that work great for both of these, as well as for Novell Groupwise. So, giving the benefit of the doubt: when President Bush took over, he tossed out the previous administration's staff, brought in his own people, and let them choose the office productivity tools they were most comfortable with. Fair enough; it happens every time a new President takes office. No big surprise there.
However, doing this without a clear plan on how to continue to comply with the email archive laws already on the books, and letting the situation remain broken several years later, is appalling. I can understand why businesses are upset about deploying mandated archiving solutions when their own government doesn't have similar automation in place.
I had a great weekend, participating in this year's ["World Laughter Day"] yesterday. Preparing for tonight's festivities found me pulling out the various packages from "Simply Dinners" from my freezer.
A Tucson-based company, [Simply Dinners] offers an alternative to restaurant eating. My sister went there, assembled a set of freezer-proof plastic bags containing all the right ingredients based on specific recipes, and gave them to me for my birthday, and they have been sitting in my freezer ever since... until last weekend.
My sister was careful to choose items that fit the [Paleolithic Diet] that my nutritionist has me on. However, I was skeptical that any plastic bag full of frozen groceries would be any better than anything I could assemble on my own. I did, after all, attend "chef school" and do know how to cook well. Each package was intended to be a "dinner for two", but since I am single, each was two meals for me.
So, I decided to try them out, which would also give me more room in my freezer for incoming items, and they came out very well. On the outside of each plastic bag was a label that explained all the steps required to heat the food. Partially-cooked vegetables were wrapped in foil, and went in for the last 10 minutes of cooking the meat. The process was straightforward, and the meals were delicious, but nothing I could not have done on my own with a recipe and a trip to the grocery store.
The question is whether someone with little or no skills could achieve similar, or acceptable, results. I have friends who are limited to assembling sandwiches from luncheon meats and cheese slices, as anything involving heat other than simply boiling water is beyond their skills.
The key difference between "cooking for yourself" and "building your own storage" is that you aren't building storage for just yourself. Unless you are a one-person SMB company, you are building storage that all of your employees and managers count on to do their jobs, and by extension your customers and stockholders count on.
Of course, I had to read responses from others before jumping in with my thoughts. Dave Raffo from Storage Soup writes [Sun going down in storage], feeling this is yet another indication that Sun has lost their mind, recounting previous events that support that theory. EMC blogger Mark Twomey, in his StorageZilla post [When Open Isn't], felt a little bit guilty about kicking a competitor when they're down. EMC blogger Chuck Hollis questions the reasons people might be tempted to even try this in his post [Do-it-Yourself Storage]. Here's an excerpt:
"I really, really struggle with this concept, I do. Here's why:
Anything I use and get comfortable with -- well, I'm "locked in" to a certain degree. If I use a lot of storage software X; well, I'm sorta locked in, aren't I? Or, if I put my servers-as-storage on a three-year lease, I'm kind of locked in, aren't I?"
(For EMC, vendor lock-in is great when customers are using and comfortable with EMC products, and awful when they use and are comfortable with storage from someone else. But nobody who is "comfortable" with what they have ever complains about "vendor lock-in", do they? It's the ones who are growing uncomfortable and feel trapped that complain about changing. How involved a company's use of EMC's proprietary interfaces is can greatly determine the obstacles in switching to a different vendor. Of course, if you count yourself as someone growing uncomfortable with your existing storage vendor, IBM can help you fix that problem, but that is a subject for another post.)
Worried about "vendor lock-in"? Try "admin lock-in", where you must keep a storage admin around because he or she was the one that put your storage together. I've seen several companies held hostage by their system admins for home-grown scripts that serve as "duct tape for the enterprise". The other issue is whether you have storage admins who have the necessary hardware and software engineering skills to put suitable storage together. There are some very smart storage admins I know who could, and others that would have a difficult time with this.
No doubt this is promising for the home office. I myself have taken several PCs that were running older versions of Windows, but not powerful enough to upgrade to Windows Vista, wiped them clean, loaded Linux, and configured them as everything from simple browser workstations to full LAMP application server configurations. While this might sound easy, I am a professional hardware and software engineer with Linux skills. I have no doubt that someone with sufficient engineering and Solaris skills could put together a storage system for home use.
One area where Sun definitely benefits from this "Open Storage" approach is to develop Solaris skills. I have no personal experience with OpenSolaris, but assume that if you learn it, you would be able to switch over to full Solaris quite easily. Today, most people have Windows, Linux and/or MacOS skills coming into the workforce, and this could be Sun's way of getting fresh new faces who understand Solaris commands to replace retiring "baby boomers". The lack of Solaris-knowledgeable admins is perhaps one reason why companies are switching to IBM AIX, Linux or Windows in their data centers.
Certainly, IBM's strategic choice to support Linux has been a great success. People learn Linux on their home systems, and at school, and are able to carry those skills to Linux running on everything from the smallest IBM blade server to IBM's biggest mainframe.
The videos on Sun's site with "recipes" for putting together various "storage configurations in ten minutes" appear simpler than last summer's "How to hack an Apple iPhone to switch away from AT&T" procedures.
Continuing this week's theme on "best of breed", some questions arise: How is this calculated or determined? How is one storage solution "better" than another? Which attributes weigh more heavily in the decision?
Some attributes are directly measurable, like storage performance. For this, gather up a list of all the storage products you are interested in, go to the [Storage Performance Council website], determine whether SPC-1 or SPC-2 more closely matches your application workload, and then choose the best product from the benchmarks, discarding any vendors that don't bother to have benchmarks posted. The new SPC-2 benchmark was created, in part, to address new workloads for the Media and Entertainment industry. (For a comparison of the two, see my post [SPC benchmarks for Disk System Performance])
However, other attributes, like "easy to manage", are not as straightforward to measure. One client compared the complexity of different solutions by counting the number of cables involved to connect the various parts of each solution. Only external cables were considered; none of the cables inside an IBM System Storage DS8000 would be counted. By this measure, a single IBM System z10 EC mainframe connected to a single IBM DS8000 disk system over a few FICON cables would be "less complicated" than a thousand x86 servers connected via FCP SAN switches to dozens of disk systems.
But counting cables only handles the hardware part of the interconnections. You have to also consider the interconnections between the software, between users, and between IT administrators. It is not always obvious where those connections are, or how to take them into consideration.
This month, IBM introduced the first "Management Complexity Factor" (MCF) for the Media and Entertainment industry. IBM MCF is a result of IBM's acquisition of NovusCG, and is an essential part of the "Storage Optimization Services" being offered by IBM. Here is an excerpt from the [IBM Press Release]:
"Media companies are facing a double-edged sword with the exponential rise in digital media storage needs, coupled with concerns about optimizing storage to be more efficient," said Steve Canepa, vice president of Media and Entertainment, IBM. "By quickly and cost-effectively analyzing the interconnected IT and storage environments that increasingly comprise media operations, MCF for Media helps our clients identify opportunities for improvement and align their IT and business strategies."
Since 1995, IBM has invested more than $18 billion on public acquisitions, making it the most acquisitive company in the technology industry, based on volume of transactions.
IBM has a strong global focus on the media and entertainment industry across all of its services and products, serving all the major industry segments -- entertainment, publishing, information providers, media networks and advertising.
In his post on Rough Type titled ["McKinsey surveys the new software landscape"], Nick Carr discusses the growing acceptance in the marketplace for Software-as-a-Service, or SaaS. He summarizes the results of McKinsey's recent [Enterprise Software Customer Survey 2008]. IBM is already well established as part of the Web 2.0 Big "5" (the other four are Google, Yahoo, Amazon and Microsoft), so it may not be much of a surprise that it introduced some new offerings focused on this emerging market.
For managed hosting, [IBM Managed Storage Services] has been extended to support archive data through its entire lifecycle: supporting access, migration, non-erasable non-rewriteable (NENR) protection, and expiration/destruction. This offering supports locating the storage on the customer premises, at a hosting center, or at an IBM Service Delivery Center. IBM's blended disk and tape approach provides a better alignment between information value and storage costs.
Last December, IBM acquired Arsenal Digital, which offers a remote "Enterprise Email Archive" service, supporting retention policies that can apply per user, per group, or even per message, as needed. This service provides fast user access to email archives, as well as e-discovery search. The search is not just of the email body text, but supports over 370 different attachment types as well. Deduplication technology is used to reduce the actual amount of storage needed by 80 percent. All of this with the security and comfort of knowing that these email archives are encrypted and protected in a disaster-recovery-class data center managed by IBM. Blocks and Files presents their thoughts on this in the article ["IBM storing data and mail in the cloud"].
The Radicati Group has published some interesting statistics about email archive in [Volume 4, Issue 3]. Here's an excerpt:
"In 2007, a typical corporate email account receives about 18 MB of data per day. This number is expected to grow to over 28 MB by 2011. Today, there is no way to effectively manage these messages, but with the help of an archiving solution.
Today, the worldwide percentage of corporate mailboxes protected by archiving solutions is estimated to be around 14%, however it is growing at a fast pace, and is expected to reach over 70% by 2011.
A survey of 102 corporate organizations worldwide, showed that 68% of large businesses view compliance as their top security concern in 2007."
For those who are actually providing these services to others over the cloud, you might want to use the new [IBM System x iDataPlex]. Compared to traditional server environments, the iDataPlex provides five times the computing power by doubling the number of servers per rack, but with 40 percent less energy consumption. Thanks to clever cooling technology, the system can run in standard office "room temperature" environments. You can customize with a mix of compute, network and storage nodes to meet your application requirements. In addition to Web 2.0 and SaaS workloads, the iDataPlex can be useful for financial risk analysis, high performance computing, and even batch processing.
Whether you are looking to contract out for SaaS, or to provide a service to others over the cloud, IBM can help!
"Our survey data shows that over the past 12 months, more firms have bought their storage from a single vendor. While this might not be for everyone, it's worth serious consideration for your environment. Maybe you won't get the best price per gigabyte every time, but you'll probably save money in the long run because of simpler management, increased staff specialization, increased capacity utilization, and better customer service."
A Forrester survey of 170 companies ranging from SMBs to large enterprises in North America and Europe found that more than 80 percent bought their primary storage from one vendor over the last year. That includes 64 percent of the companies with more than 500 TB of raw storage.
The report, written by analyst Andrew Reichman, says using more than one primary storage vendor can make it more complex to manage, provision and support the storage environment. And while using multiple vendors can often bring better pricing, buying from one vendor can result in volume discounts.
“You may have tried to contain costs by forcing multiple incumbent vendors to continuously compete against each other, with price as the primary differentiator,” Reichman writes. “This strategy can reduce prices and limit vendor lock-in, but it can also lead to management complexity and poor capacity utilization.”
The report recommends keeping things simple by and using fewer vendors when possible. However, that advice comes with several caveats: buying all storage from one vendor means taking the bad with the good, and some vendors’ product families differ so much “they may as well come from different vendors.”
As if by coincidence, fellow blogger from EMC Chuck Hollis gives his reflections on this same topic. Here's an excerpt:
When it comes to buying storage (or any infrastructure technology, for that matter), there seem to be two camps:
Best-of-breed (i.e. multivendor): -- buy what's best, get the best price, keep all the vendors on their toes, etc. etc.
Single vendor: primarily use one vendor's offerings, and hold them accountable for the outcome.
If Chuck had said "multivendor" versus "single vendor", then that would have been a true dichotomy, but interestingly he equates best-of-breed with a multivendor approach. Let's consider two examples:
Disk from one vendor, Tape from another
Here is a multivendor strategy, and if you have a clear idea of what best-of-breed means to you, then you could pick the best disk in the market, and the best tape in the market. However, I don't think this keeps either vendor "on their toes", or helps you negotiate lower prices by threatening to switch to the other vendor. In shops like this, the staffing usually matches, so there are disk administration and tape operations teams, with little or no overlap, and little or no interest in retraining to use a new set of gear. It is true that disk-based VTL could be used where real tape libraries are used, but this may not be enough to threaten your existing vendors that you will switch all your disk to tape, or all your tape to disk.
One could argue that the vendor that sells the best tape could be the exact same vendor that sells the best disk. In this case, your multivendor strategy would actually work against you, forcing you away from one of your best-of-breed choices.
Disk and Tape from one vendor for some workloads, Disk and Tape from another vendor for other workloads
Here is a different multivendor strategy. Having disk and tape from the same vendor allows you to take advantage of possible synergies. The IT staff knows how to use the products from both vendors. This strategy does let you keep your vendors "on their toes". You can legitimately threaten to shift your budget from one vendor to another. However, whatever your definition of best-of-breed is, chances are the product from one vendor meets it, and the other vendor's does not. Both meet some lowest common denominator, some minimum set of requirements, which would allow you to swap out one for the other.
I guess I look at it differently. The equipment in your data center should be thought of as a team. Do your servers, storage and software work well together?
While Americans like to celebrate the accomplishments of individual musicians, athletes or executives, it is actually bands that compete against other bands, sports teams that compete against other sports teams, and companies that compete against other companies. Teamwork in the data center is not just for the people who work there, but also for the IT equipment. Just as a new incoming athlete may not get along well with teammates, shiny new equipment may not get along with your existing gear. Conversely, your existing infrastructure may not let the talents or features of your new equipment shine through.
While putting together the best parts from different teams might serve as a great diversion for those who enjoy ["fantasy football"], it may not be the best approach for the data center. Instead, focus on managing your data center as a team, perhaps with the use of IBM TotalStorage Productivity Center to minimize the heterogeneity of your different equipment. Pick an IT vendor that sells "team players" for your servers, storage and software, with broad support for interoperability and compatibility.
The "Storage Symposium Mexico - 2008" conference was a great success this week!
Day 1 - The plan was for me to arrive for the Wednesday night reception. Each attendee was given a copy of my latest book [Inside System Storage: Volume I] and I was planning to sign them. I thought perhaps we should have a "book signing" table like all of the other published authors have.
Things didn't go according to plan. Thunderstorms at the Mexico City airport forced our pilot to find an alternate airport. Nearby Acapulco airport was the logical choice, but was full from all the other flights, so the plane ended up in a tiny town called McAllen, Texas. I did not arrive until the morning of Day 2, so I ended up signing the books throughout Thursday and Friday, during breaks and meals, wherever they could find me!
Special thanks to fellow IBMer Ian Henderson, who picked me up from the airport at such an awkward hour and drove me all the way to Cuernavaca!
All of us, IBMers, Business Partners and clients alike, donned black tee-shirts with a white eightbar logo for a group photo with one of those "wide lens" cameras. While we were being assembled onto the bleachers, I took this quick snapshot of myself and some of the guys behind me.
I was originally scheduled to speak first, but with my flight delays, was moved to a time slot after lunch. After a big Mexican lunch, the conference coordinators were afraid the attendees might fall asleep, a Mexican tradition called [siesta], so I was instructed to WAKE THEM UP! Fortunately, my topic was Information Lifecycle Management, a topic I am very passionate about since my days working on DFSMS on the mainframe. With a 30 percent reduction in hardware capital expenditures, a 30 percent reduction in operational costs, and typical payback periods of 15 to 24 months, the presentation got everyone's attention.
Of course, a lot happens outside of the formal meetings. We had a Japanese theme dinner, where we wore Japanese Hachimaki [headbands] with the eightbar logo. For those not familiar with Japanese culture, hachimaki are worn today not so much for the practical purpose of catching perspiration, but rather for mental stimulation, to express one's determination. Some students wear hachimaki when they study to put themselves in the right spirit and frame of mind.
Shown here are presenters Mike Griese (Infrastructure Management with IBM TotalStorage Productivity Center), Dave Larimer (Backup and Storage Management with IBM Tivoli Storage Manager), myself, and John Hamano (Unified Storage with IBM System Storage N series).
Day 3 - Wrapping up the week, I presented two more times.
First, I covered IBM Disk Virtualization with IBM SAN Volume Controller. One interesting question was whether the SAN Volume Controller could be made to look like a Virtual Tape Library. I explained that this was never part of the original design, but that if you want to combine SVC with a VTL into a combined disk-and-tape blended solution, consider using the IBM product called Scale-Out File Services [SoFS], which I covered in my post [More details about IBM clustered scalable NAS].
During one of the breaks, I took a picture of the behind-the-scenes staff that put this together. They had created these huge blocks representing puzzle pieces, emphasizing how IBM is one of the few IT vendors that can bring all the pieces together for a complete solution.
Shown here are Mike Griese (presenter), Cyntia Martinez, Claudia Aviles, Cesar Campos (IBM Business Unit Executive for System Storage in Mexico), and Claudia Lopez. Each day the staff wore matching shirts so that it was easy to find them.
Later, I covered Archive and Compliance Solutions to highlight our complete end-to-end set of solutions. When asked to compare and contrast the architectures of the IBM System Storage DR550 with EMC Centera, I explained that the DR550 optimizes the use of online disk access for the most recent data. For example, if you are going to keep data for 10 years, maybe you keep the most recent 12 months on disk, and the rest is moved, using policy-based automation, to a tape library for the remaining nine years. This means that the disk inside the DR550 is always being used to read and write the most recent data, the data you are most likely to retrieve from an archive system. Data older than a year is still accessible, but might take a minute or two for the tape library robot to fetch. The EMC Centera, on the other hand, is a disk-only solution. It offers no option to move older data to tape, nor the option to spin down the drives to conserve power. It fills up after the same 12 months or so, and then you get to watch it for the remaining nine years, consuming electricity and heating your data center.
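For readers who like to see ideas in code, here is a minimal sketch of that kind of policy-based tiering, written in Python. The function and field names are my own invention for illustration, not the DR550's actual interface.

```python
from datetime import datetime, timedelta

DISK_RETENTION = timedelta(days=365)   # keep the most recent 12 months on disk

def tier_archive_objects(objects, now=None):
    """Split archive objects between the disk and tape tiers by age.

    `objects` is a list of (object_id, created) tuples; anything older
    than the disk retention window is destaged to the tape library.
    (Hypothetical helper, not the DR550's real interface.)
    """
    now = now or datetime.utcnow()
    disk, tape = [], []
    for object_id, created in objects:
        if now - created <= DISK_RETENTION:
            disk.append(object_id)   # recent data stays on spinning disk
        else:
            tape.append(object_id)   # older data moves to tape, still retrievable
    return disk, tape
```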
I don't know about you, but I have never seen anyone purposely put "space heaters" into their data center, yet a full EMC Centera does little else. Both devices use SATA drives and support disk mirroring between locations, but the IBM DR550 offers dual-parity RAID-6, and supports encryption of the data on both the disk and the tape inside the DR550. EMC Centera still uses only RAID-5, and has not yet, as far as I know, offered any level of encryption. The IBM System Storage DR550 was clocked at about three times faster than Centera at ingesting new archive objects over a 1GbE Ethernet connection.
This last photo is me and fellow IBMer Adriana Mondragón. She was one of my students in the [System Storage Portfolio Top Gun class] last February in Guadalajara, Mexico. She graduated in the top 10 percent of her group, earning her the prestigious title of "Top Gun" storage sales specialist.
The conference wrapped up with a Mexican lunch with a traditional Mariachi band. I took pictures, but figured you all already know what [Mariachi players] look like, and I didn't want to detract from the otherwise serious tone of this blog post! This was the first System Storage Symposium in Mexico, but based on its success, we might continue these annually.
Last week's focus was on tape libraries, both virtual and real, leading up to our IBM announcement of acquiring Diligent Technologies. My focus was on HDS blogger Hu Yoshida's post about his conversation with Mark, who was on an expert panel about these topics. Mark discovered that of the top energy consumers in his data center, his tape library was in the top five, a surprising result. Hu suggested that switching to a VTL with deduplication technology was a potential alternative, and I pointed to a whitepaper from the Clipper Group that suggested otherwise.
My response was that perhaps Highmark's choice of backup software was poorly written, or that they had set it up with the wrong parameters, and that just changing hardware might not be the right answer. I went too far, given that I didn't know which software they had, which parameters they were using, or which tape technology was involved. This came across wrong. I meant to poke fun at Hu's response. I did not mean to imply that Mark and his staff had made poor choices, or that they should automatically reject Hu's advice to consider other hardware alternatives.
I have discussed the situation with Mark, and agree that I should know his situation better before offering suggestions of my own.
And, it's not too late to sign up for IBM Tivoli's [Pulse 2008] conference, which will be held in Orlando, Florida, May 18-22, 2008. I'll be there Sunday and Monday only, in the Tivoli Storage track, so if you are planning to attend and wish to meet up with me while I am there, please send me a note!
[Earth Day] is celebrated in many countries on April 22, which marks the anniversary of the birth of the modern environmental movement in 1970. Others celebrate this on the March equinox.
IBM has finally aggregated everything that we are doing around "Green" initiatives onto a single [IBM Green] landing page. This covers everything from IBM's own activities to what we sell to our clients.
Also, to mark this occasion, IBM held an internal contest for employees to make videos about Earth Day, the environment, and IT's role in making the situation better. The grand prize winner, and 10 second-prize winners, are available on this [IBM Green Contest - YouTube channel]. Of these, I liked "New Life for Old Silicon" (shown here on the left).
IBM also developed [Power Up, the Game], which is the Earth Day Network's "official" game for today's festivities. It's a 3-D game created by IBM Research to help save a fictitious planet - the goal being to help students learn about ecology and climate change. The game is also meant to motivate young students to get interested in math, science and technology. Eightbar has a great post [PowerUp - A serious game out in the wild] discussing this. There's also a 3-minute [the making of "Power Up, the game" video] to get a behind-the-scenes look. The game is a downloadable Windows client that then connects to the main servers to run.
You could buy 10 liters of gasoline in Venezuela with this coin.
I'm back from South America, and am now in Chicago, Illinois. I'm having breakfast at the Starbucks downtown, and thought I would make a post before all of my meetings today.
On this trip, I met with IBM Business Partners and sales reps from Argentina, Colombia, Ecuador and Venezuela. While I have visited the first three countries on past trips, this was my first time in Caracas, Venezuela. I grew up in La Paz, Bolivia, and speak Spanish fluently, so I had no problem getting around and holding discussions with everyone. While my friends in the US are often surprised I speak multiple languages, it doesn't surprise anyone I visit in other countries. If you are going to have worldwide job responsibilities for a global company that does business in over 180 countries, the least you could do is learn a few additional languages. I suspect the majority of the 350,000 IBM employees speak at least two languages, the exceptions being mostly the 50,000 or so employees who live in the United States.
I flew on American Airlines from Tucson to Dallas to Caracas, and was only slightly delayed as a result of all of the flight cancellations that happened earlier that week. Some companies designate a single "official airline" for their employees to use. That makes sense if all of your employees are located in a single city, and that city is the hub for your designated airline. IBM is too big, too spread out, and sells technology to nearly every airline to make such a designation. Instead, IBM tries to spread its business out to multiple carriers, although all of my colleagues seem to have their own personal favorites. Mine are American Airlines, Singapore Airlines and Cathay Pacific.
While other people were upset over the delays, I found American Airlines did a great job keeping me informed, and all their employees I talked to seemed to be handling the situation fairly well. If you fly on American, I recommend you sign up for "text message" notifications. I did this for every leg of my trip, and was kept up to date on times, gates and status. Very helpful! American Airlines even started their own corporate blog: [AA Conversation]. (Special thanks to my friend [Paul Gillen] for pointing this out.)
(I read somewhere that if you are going to travel anywhere, you need to remember to bring both your sunscreen and your sense of humor, otherwise you are going to get burned. Good advice! Trust me, you don't even know how bad it can really be until you travel in the third world.)
Anyhoo, last week, IBM Venezuela celebrated its 70th anniversary. That's right, IBM has been doing business in Venezuela for the past 70 years. Also last week, IBM put out its impressive [1Q08 quarterly results], including 10 percent growth for the IBM System Storage product line worldwide, comparing what IBM earned this first quarter to what IBM earned the first quarter of last year. For just the Latin American countries, the growth for IBM System Storage was 20 percent! There are a lot of oil and gas companies in Venezuela. With a barrel of oil selling at more than $117 US dollars, these companies are looking to spend their newly earned profits on IBM systems, software and services.
As for the picture above, that is a one-thousand Bolivares coin, worth about 47 US cents at this week's official exchange rate. As with many Latin American countries going through [years of high inflation], Venezuela was tired of all those zeros on their money. For example, a cheeseburger, freedom fries and a Coke at McDonald's would set you back 20,000 Bolivares. This year the Venezuelan government created a new currency called "Bolivares Fuertes" (VEF), lopping off the last three zeros. So, the coin above would be replaced by a new coin with a big "1" on it instead, and an old 2000 Bolivares bill would be replaced by a new 2 Bolivares Fuertes bill. Unfortunately, I had to give all my new Venezuelan money back at the airport upon leaving, but they let me keep the coin above, since it is old money, as a souvenir, so that I could use it as a ball marker for playing golf.
(The term Bolivares is named after Simon Bolivar, who was born in Caracas. He is famous throughout South America, and was, and I am not making this up, the first president of Colombia, the second president of Venezuela, the first president of Bolivia, and the sixth president of Peru. Here is the [Wikipedia article] to learn more.)
Gasoline costs a mere 100 old Bolivares per liter. For those who don't do metric, gasoline therefore costs less than 18 cents per gallon. By comparison, in the USA, the average today was $3.47 US dollars per gallon, of which 18.4 cents is Federal tax. That's right, we pay more just in taxes for gasoline than los venezolanos pay for it all.
The side effect of cheap gas is bad traffic. Everybody in Venezuela drives their own car, and nobody thinks about the price of gasoline, carpooling, or taking public transportation, acting much like Americans used to, up until a few years ago. With some of the gridlock we faced, it might have been faster (but not safer) to walk instead.
Which makes me wonder if American Airlines fills up its airplanes with fuel at these lower prices when it picks up people in Caracas to take them back to the United States. In 2002, fuel represented 10 percent of the average airline's operating expenses, but today it is 25 percent. That is a drastic increase!
The same is happening in data centers. In the past, electricity was so cheap, and such a small percent of the total IT budget, that nobody gave it much thought. But as the usage of electricity increased and the cost per kWh went up, the effect multiplied: power and cooling costs are now growing four times faster than the average IT hardware budget.
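To see the multiplying effect, here is a tiny back-of-the-envelope projection in Python. The growth rates are assumptions chosen only to illustrate the four-times-faster relationship, not measured figures.

```python
# Illustrative rates only: a hardware budget growing 3% per year while
# power and cooling grow four times as fast (12% per year).
hw_budget, power_cost = 100.0, 10.0   # arbitrary starting units
for year in range(1, 6):
    hw_budget *= 1.03
    power_cost *= 1.12
    print(f"year {year}: power is {power_cost / hw_budget:.1%} of the hardware budget")
```

After just five years, power goes from 10 percent of the hardware budget to over 15 percent, and the gap keeps widening.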
Normally, IBM only makes announcements on Tuesdays, but today, Friday, IBM announced that it has acquired Diligent Technologies. What? I got a lot of questions about this, so I thought I would start with this...
When I posted in January that [IBM Acquires XIV], fellow EMC blogger Mark Twomey, of StorageZilla fame, sent me a comment:
"Ah now Tony I wasn't poking fun. Indeed I find it fascinating that Moshe who's been sitting out on the fringes for years having been banished for being an obstructionist to EMC entering the mid-market is now back.
Which reminds me what happens with Diligent? There his as well aren't they or has he packed his stake in that in?"
As you might have guessed, I am privy to a lot of stuff going on behind the scenes at IBM that I can't talk about in this blog, and all these rumors in the blogosphere about an IBM acquisition of Diligent were a topic I couldn't officially recognize, defend or deny until official IBM announcements were made.
In his latest post, Mark wonders about [the last Tape and Mainframe sales person on earth]. He recounts my interaction with fellow HDS blogger Hu Yoshida about the energy benefits of Virtual Tape Libraries. Knowing that we were going to announce IBM's acquisition of Diligent soon, I thought this would be a worthy exchange, driving up the sales of Diligent boxes (whether you buy them from IBM or HDS). Diligent already had reselling arrangements with HDS, and IBM plans to continue those arrangements going forward. As I have explained before in my post [Supermarkets and Specialty Shops], IBM and HDS cater to different customers: a customer who wants the best technology from a specialty shop can buy IBM Diligent products from HDS, but one who wants one-stop shopping can buy IBM Diligent directly from IBM or its other IBM Business Partners.
(Perhaps a more tricky situation is that Diligent also had an arrangement with Sun Microsystems, which competes directly against IBM as another IT supermarket vendor, but I have not heard how IBM has decided to handle this going forward.)
For more on this intricate mess of interconnected companies, alliances and partnerships, read Dave Raffo's article [Data dedupe dance card filling up] over at Storage Soup.
So, let's tackle the first question:
Q1. What will happen to IBM's real tape library business?
Come on! IBM is number one in tape. We've had virtual tape libraries since 1997 (the first in the industry) and continue to do well in both virtual and real tape libraries. Both provide value to the customer, and both have their place as part of the overall "information infrastructure". This acquisition provides yet another choice for clients on our "supermarket" shelf.
(For those following the ["which is greener"] discussion, the robot of the IBM TS3500 real tape library consumes 185W per frame (when moving) and each tape drive consumes 50W (when actively working on a tape). Compared to 13W per SATA disk drive, each 6-drive frame of a TS3500 consumes as much electricity as 37 SATA disk drives. If you are not running backups 24x7, the total kWh per day for your tape library is actually quite a bit less, but as several people have pointed out, there are customers that do run backups 80-90 percent of the time. LTO-4 tapes can hold 800GB uncompressed, and SATA disks are now available in 1TB (1000 GB) sizes, so you can have fun with your own comparisons.)
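Here is one way to run that comparison yourself, using only the wattage figures quoted above; the duty cycle is an assumption you can adjust.

```python
ROBOT_W, TAPE_DRIVE_W, SATA_W = 185, 50, 13   # watts: robot/frame, tape drive, SATA disk

frame_watts = ROBOT_W + 6 * TAPE_DRIVE_W   # a fully active 6-drive TS3500 frame
print(frame_watts)                          # 485 W
print(round(frame_watts / SATA_W))          # ~37 SATA drives' worth of power

# If backups run only 8 hours a day (ignoring idle draw for simplicity):
print(frame_watts * 8 / 1000, "kWh per day")   # vs. 485 * 24 / 1000 for a 24x7 shop
```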
Meanwhile, Scott Waterhouse, one of the few people at EMC who understand tape workloads like backup and archive, takes me to task in his Backup Blog with his post [I want a Red Ferrari]. For those who are surprised that anyone at EMC might understand backup workloads, EMC did acquire a company called Legato, and perhaps Scott came from that acquisition. I've never met Scott in person, but based solely on his writings, he seems to know his stuff and makes strong arguments for using IBM Tivoli Storage Manager (TSM) with deduplication and virtual tape libraries.
While TSM does a good job of "deduplicating" at the client first, backing up only changed data, Scott feels database and email repositories must be backed up entirely each time, which is what happens in many other backup software products. Some clients might have 80 percent database/email and only 20 percent files, while others might have less than 20 percent database/email and 80 percent files, so this mix might influence whether deduplication brings a small or a big benefit. If TSM has to back up an entire database, even though little has changed since the last backup, that is where deduplication on a virtual tape library can come in handy. For IBM DB2 and Oracle databases, TSM's application-aware Tivoli Data Protection module backs up only changed data, not the entire file. Thanks to IBM's FilesX acquisition (also coincidentally from Israel), IBM can now extend this support to SQL Server databases as well. However, to be fair, Scott is partly correct: TSM does back up some database and email repositories in their entirety, which is why it is a good idea to have BOTH an IBM virtual tape library with deduplication and Tivoli Storage Manager, to handle all cases.
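To make the workload-mix point concrete, here is a rough Python model. The 5 percent daily file change rate and the 20:1 dedupe ratio on full dumps are assumptions chosen only for illustration.

```python
def nightly_volumes(total_tb, db_fraction, file_change_rate=0.05, dedupe_ratio=20.0):
    """Rough model: databases/email are dumped in full every night, while
    files ride progressive backup (only changed data is sent). Dedupe
    mostly collapses the near-identical full dumps."""
    db_tb = total_tb * db_fraction                       # written in full nightly
    file_tb = total_tb * (1 - db_fraction) * file_change_rate
    written = db_tb + file_tb
    stored = db_tb / dedupe_ratio + file_tb              # changed files are mostly unique
    return written, stored

# An 80% database/email shop vs. a 20% one, both with 100 TB of data:
for frac in (0.8, 0.2):
    written, stored = nightly_volumes(100, frac)
    print(f"db/email {frac:.0%}: {written:.0f} TB written, {stored:.0f} TB stored "
          f"({written / stored:.0f}:1 effective)")
```

The database-heavy shop sees roughly a 16:1 effective reduction under these assumptions, while the file-heavy shop sees closer to 5:1. This brings us to the next question: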
Q2. What will happen to IBM's patented "progressive backup" technology?
IBM will continue to use TSM's progressive backup technology. TSM already works great with Diligent virtual tape libraries. One example is LAN-free backup. In this configuration, the TSM client writes its backups directly to a virtual or real tape library, over the SAN, and then sends the list of files backed up to the TSM server over the LAN to record in its database. This can greatly reduce IP traffic on your LAN during peak backup periods. For more about this, see the IBM Redbook titled ["Get More Out of Your SAN with IBM Tivoli Storage Manager"].
Jon Toigo from DrunkenData asks [Did IBM Do Due Diligence Before Making Diligent Acquisition a Done Deal?], which is probably always a valid question. Unlike XIV, I wasn't part of the Diligent acquisition team, so I can't provide a first-hand account of the process. I am told that the IBM team did all the right things to make sure everything is going to turn out right. Sadly, many companies that make acquisitions in the IT industry fail to make them work. Fortunately, IBM is one of the few companies with a great success record, with over 60 acquisitions in the past six years. In the Xconomy forum, Wade Rousch writes [IBM and the Art of Acquisitions] and gives some insight into why IBM is different. Jon did not understand why Cindy Grossman, IBM VP of tape and archive solutions, ran the analyst conference call for this announcement, which brings me to the next question:
Q3. What is the Diligent virtual tape library going to be categorized as: a disk system or a tape system?
IBM organizes its storage systems based on the host application workloads. Products that address disk workloads (SVC, DS8000 series, DS6000 series, DS4000 series, DS3000 series, N series, XIV Nextra) are in our disk systems group. Storage that appears to host applications like a tape system, addressing workloads like backup and archive (tape drives, libraries and tape virtualization), is in our tape and archive group. IBM Diligent has two products, one for big workloads and one for medium workloads. Both look like tape systems, so our tape and archive team, who understand tape workloads like backup and archive the best, are obviously the best choice to support IBM Diligent in the mix.
IBM will offer both N series and Diligent deduplication capabilities. For disk workloads, IBM N series offers a post-process deduplication feature at no additional charge. For tape workloads, IBM will now offer an in-line deduplication feature with Diligent Technologies. Different workloads, different offerings.
As with any acquisition, there will be some changes. The 100 folks from Diligent will get to learn the IBM way of doing things. This brings me to our fifth and final question:
Q5. What is the correct spelling: deduplication or de-duplication?
It appears that Diligent has a corporate-wide standard to hyphenate this term (de-duplication), but the "word police" at IBM who control and standardize all "proper spellings, trademarks, and capitalization" sent me corporate instructions a few days ago that IBM does not hyphenate this term (deduplication). So, going forward, it will be "deduplication", or "dedupe" for short. I suspect one of the first tasks our new IBMers from Diligent will take on is removing all those hyphens from the [Diligent Technologies website]!
That's all for now, I'm off to Chicago, Illinois tomorrow!
I am still wiping the coffee off my computer screen, inadvertently sprayed when I took a sip while reading HDS' uber-blogger Hu Yoshida's post on storage virtualization and vendor lock-in.
HDS is a major vendor for disk storage virtualization, and Hu Yoshida has been around for a while, so I felt it was fair to disagree with some of the generalizations he made, to set the record straight. He's been more careful ever since.
However, his latest post [The Greening of IT: Oxymoron or Journey to a New Reality] mentions an expert panel at SNW that included Mark O'Gara, Vice President of Infrastructure Management at Highmark. I was not at the SNW conference last week in Orlando, so I will just give the excerpt from Hu's account of what happened:
"Later I had the opportunity to have lunch with Mark O’Gara. Mark is a West Point graduate so he takes a very disciplined approach to addressing the greening of IT. He emphasized the need for measurements and setting targets. When he started out he did an analysis of power consumption based on vendor specifications and came up with a number of 513 KW for his data center infrastructure....
The physical measurements showed that the biggest consumers of power were in order: Business Intelligence Servers, SAN Storage, Robotic tape Library, and Virtual tape servers....
Another surprise may be that tape libraries are such large consumers of power. Since tape is not spinning most of the time they should consume much less power than spinning disk - right? Apparently not if they are sitting in a robotic tape library with a lot of mechanical moving parts and tape drives that have to accelerate and decelerate at tremendous speeds. A Virtual Tape Library with de-duplication factor of 25:1 and large capacity disks may draw significantly less power than a robotic tape library for a given amount of capacity.
Obviously, I know better than to sip coffee when reading Hu's blog. I am down here in South America this week, where the coffee is very hot and very delicious, so I am glad I didn't waste any on my laptop screen this time, especially reading that last sentence!
In that Clipper Group report, a 5-year comparison found that a repository based on SATA disk was 23 times more expensive overall, and consumed 290 times more energy, than a tape library based on LTO-4 tape technology. The analysts even considered a disk-based Virtual Tape Library (VTL). Focusing just on backups, at a 20:1 deduplication ratio, the VTL solution was still 5 times more expensive than the tape library. If you use the 25:1 ratio that Hu Yoshida mentions in his post above, that would still be 4 times more expensive than a tape library.
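The scaling behind that last sentence is simple proportionality. Here is a sketch, under the simplifying assumption that the VTL's cost scales with the disk capacity the dedupe ratio leaves behind:

```python
# Clipper Group figure: a VTL at 20:1 dedupe costs 5x an LTO-4 tape library.
baseline_ratio, baseline_cost_multiple = 20, 5.0
for ratio in (20, 25, 30):
    multiple = baseline_cost_multiple * baseline_ratio / ratio
    print(f"{ratio}:1 dedupe -> {multiple:.1f}x the cost of a tape library")
```

Even at a generous 30:1 ratio, the VTL in this simplified model still lands above 3x the cost of the tape library.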
I am not disputing Mark O'Gara's disciplined approach. It is possible that Highmark is using a poorly written backup program, taking full backups every day, to an older non-IBM tape library, in a manner that causes no end of activity for the poor tape robotics inside. But rather than changing over to a VTL, perhaps Mark might be better off investigating the use of IBM Tivoli Storage Manager, using progressive backup techniques, with appropriate policies, parameters and settings, to a more energy-efficient IBM tape library. In well-tuned backup workloads, the robotics are not very busy. The robot mounts the tape, and then the backup runs for a long time filling up that tape, all the while the robot sits idle waiting for another request.
(Update: My apologies to Mark and his colleagues at Highmark. The above paragraph implied that Mark was using bad products or had configured them incorrectly, and was inappropriate. Mark, my full apology is [here].)
If you do decide to go with a Virtual Tape Library, for reasons other than energy consumption, doesn't it make sense to buy it from a vendor that understands tape systems, rather than from one that focuses on disk systems? Tape system vendors like IBM, HP or Sun understand tape workloads as well as the related backup and archive software, and can provide better guidance and recommendations based on years of experience. Asking advice about tape systems, including Virtual Tape Libraries, from a disk vendor is like asking your butcher for advice on different types of bread, or asking the bakery about various cuts of meat.
The butchers and bakers might give you answers, but it may not be the best advice.
Fellow blogger and cartoon writer Scott Adams writes in his Dilbert Blog about the [Monte Hall problem]. Monte Hall was the host of the American game show Let's Make a Deal. Here is an excerpt:
"The set up is this. Game show host Monte Hall offers you three doors. One has a car behind it, which will be your prize if you guess that door. The other two doors have goats. In other words, you have a 1/3 chance of getting the car.
You pick a door, but before it is opened to reveal what is behind it, Monte opens one of the doors you did NOT choose, which he knows has a goat behind it. And he asks if you want to stick with your first choice or move to the other closed door. One of those two doors has a car behind it. Monte knows which one but you don’t."
Mathematically, on your initial choice of doors, you have a 1/3 chance of picking the car, and a 2/3 chance of picking a goat. But after you make a choice, Monte knows which door(s) have goats behind them, and opens one that exposes a goat. If you stay with your initial choice, you still have a 1/3 chance that you win a car, but if you change your mind and choose the other door, your odds double: you have a 2/3 chance of winning. This is not obvious at all to most people, so Scott points people to the [Wikipedia entry] that provides the mathematical details.
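If the math doesn't convince you, a simulation might. Here is a quick Monte Carlo sketch in Python; run it and watch the switching strategy win about two times out of three.

```python
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # Monte opens a goat door that is neither the car nor your pick...
        opened = next(d for d in doors if d != pick and d != car)
        if switch:
            # ...and switching means taking the remaining closed door.
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stay:  ", play(switch=False))   # ~0.333
print("switch:", play(switch=True))    # ~0.667
```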
What does this have to do with storage?
When you pick a disk system, you are hoping you pick the door with the car. You want a disk system that meets your performance requirements for your particular workload, and is easy to deploy, configure and manage, with a low total cost of ownership for the three, four or five years you plan to use it. However, with over forty different storage vendors, there are some doors that might have goats. Some vendors have only 90-day warranties for their software, and I don't know any customers that replace their disk systems that often.
(It was pointed out to me that it was unfair, in last week's post about [Xiotech's low cost RAID brick], to single out EMC for offering minuscule 90-day warranties for the software needed to run their disk systems. I apologize. I have since learned that HDS and HP also shaft their clients with 90-day warranties. Apparently there are a lot of vendors out there who lack confidence in the quality of their software!)
It would be nice if everyone published all of their performance benchmarks so that you could choose the right door, with the car behind it, but sadly in the storage industry, not everyone participates in industry-standard benchmarks like the [Storage Performance Council].
In other cases, people make their choices based on past decisions. Perhaps someone before them chose one vendor over another, and it seems simple enough just to stay with the original choice. It is amazing how often people stay with their company's original choice, what we call in the industry the "incumbent vendor", without exploring alternatives.
So, if you bought an EMC, HDS or HP disk system in the last 90 days, it's not too late for you. Tell your local IBM rep that you are afraid you picked the door with the goat, that you want to change your mind, choose the other door, and go with IBM instead.
You will double your chances of being happier with your new choice!
The Storage Networking World conference is over, and the buzz from the analysts appears to be focused on Xiotech's low-cost RAID brick (LCRB) called the Intelligent Storage Element, or ISE.
(Full disclosure: I work for IBM, not Xiotech, in case there weren't enough IBM references on this blog page to remind you of that. I am writing this piece entirely from publicly available sources of information, and not from any internal working relationships between IBM and Xiotech. Xiotech is a member of the IBM BladeCenter alliance, and our two companies collaborate in that regard.)
Fellow blogger Jon Toigo in his DrunkenData blog posted [I’m Humming “ISE ISE Baby” this Week] and then a follow-up post [ISE Launches]. I looked up Xiotech's SPC-1 benchmark numbers for the Emprise 5000 with both 73GB and 146GB drives, and at 8,202 IOPS per TB, it does not seem to be as fast as the IBM SAN Volume Controller's 11,354 IOPS per TB. Xiotech offers an impressive 5-year warranty (by comparison, IBM offers up to 4 years, and EMC, I think, is still only 90 days). Jon also wrote a review in [Enterprise Systems] that goes into more detail about the ISE.
Fellow blogger Robin Harris in his StorageMojo blog posted [SNW update - Xiotech’s ISE and the dilithium solution], feeling that Xiotech should win the "Best Announcement at SNW" prize. He points to the cool video on the [Xiotech website]. In that video, they claim 91,000 IOPS. Given that it took forty (40) 73GB drives (or 4 datapacs) in the previous example to get 8,202 IOPS for 1TB usable, I am guessing the 91,000 IOPS is probably 44 datapacs (440 drives) glommed together, representing 11TB usable. The ISE design appears very similar to the "data modules" used in IBM's XIV Nextra system.
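My guess is just arithmetic on the published numbers:

```python
iops_per_tb = 8202            # Emprise 5000 SPC-1 result, per usable TB
claimed_iops = 91000          # from the Xiotech video
print(claimed_iops / iops_per_tb)   # ~11.1 TB usable needed to hit the claim
```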
Fellow blogger Mark Twomey from EMC in his StorageZilla blog posted [Xiotech: Industry second], correctly pointing out that Xiotech's 520-byte block (512 bytes plus extra for added integrity) was not the first in the industry. Mark explains that EMC CLARiiON has had this since the early 1990s, and implies in the title that this must have been the first in the industry, making Xiotech an industry second. Sorry Mark, both EMC and Xiotech were late to the game. IBM had been using a 520-byte blocksize on its disk since 1980 with the System/38. This system morphed into the AS/400, where the blocksize was bumped up to 522 bytes in 1990, and is now called the System i, where the blocksize was bumped up yet again to 528 bytes in 2007.
While IBM was clever to do this, it actually means fewer choices for our System i clients, who are only able to choose external disk systems that explicitly support these non-standard blocksize values, such as the IBM System Storage DS8000 and DS6000 series. (Yes, BarryB, IBM still sells the DS6000!) The DS6000 was specifically designed with the System i and smaller System z mainframes in mind, and in that niche does very well. Fortunately, as I mentioned in my February post [Getting off the island - the new i5/OS V6R1], IBM has now used virtualization, in the form of the VIOS logical partition, to allow i5/OS systems to attach to standard 512-byte block devices, greatly expanding the storage choices for our clients.
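To visualize what a non-standard blocksize means, here is a toy Python sketch of a 520-byte block: 512 bytes of data plus an 8-byte trailer. The trailer layout and checksum are invented for illustration; they are not IBM's actual on-disk format.

```python
import struct

DATA_LEN = 512

def wrap_block(data: bytes, lba: int) -> bytes:
    """Append an invented 8-byte integrity trailer to a 512-byte sector."""
    assert len(data) == DATA_LEN
    checksum = sum(data) & 0xFFFF                 # toy checksum, not IBM's real one
    trailer = struct.pack(">IHH", lba & 0xFFFFFFFF, checksum, 0)
    return data + trailer                         # 520 bytes written to the platter

print(len(wrap_block(b"\x00" * DATA_LEN, lba=42)))   # 520
```

A device that only understands 512-byte sectors has nowhere to put that trailer, which is why only explicitly supporting disk systems qualified before VIOS came along.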
(Side note: SNW happens twice per year, so the challenge is having something new and fresh to talk about each time. While Andy Monshaw, General Manager of IBM System Storage, highlighted some of the many emerging technologies in his keynote address, IBM shipped many of them prior to his last appearance in October 2007: thin provisioning in the IBM System Storage N series, deduplication in the IBM System Storage N series Advanced Single Instance Storage (A-SIS) feature, and Solid State Disk (SSD) drives in the IBM BladeCenter HS21-XM models. Of course, not everyone buys IBM gear the first day it is available, and IBM is not the only vendor to offer these technologies. My point is that for many people, these are still not yet deployed in their own data centers, and so they are still in the future for them. However, since these IBM deliveries happened more than six months ago, they're old news in the eyes of the SNW attendees. While those who follow IBM closely would know that, others like [Britney Spears] may not.)
Back in the 1990s, when IBM was developing the IBM SAN Volume Controller (SVC), we generically called the managed disk arrays being virtualized by the SVC "low-cost RAID bricks", or LCRBs. The IBM DS3400 is a good example of this. However, as we learned, SVC is not just for LCRBs; it adds value in front of all kinds of disk systems, including the not-so-low-cost EMC DMX and IBM DS8000 disk systems. ISE might make a reasonable back-end managed disk device for IBM SVC to virtualize. This gives you the new cool features of Xiotech's ISE, with IBM SVC's faster performance, more robust functionality and advanced copy services.
Next week, I'll be in South America in meetings with IBM Business Partners and storage sales reps.
My colleague, Marissa Benekos, is on location with her video camera in Orlando, Florida for the ComputerWorld [Storage Networking World] conference.
The IT specialists from the IBM booth were excited at David Bricker's debut on YouTube. Here's the rest of the gang in this [video].
Here's Andy Monshaw, General Manager of IBM System Storage and keynote speaker at this SNW event, summarizing IBM's "Information Infrastructure" strategy in 60 seconds in this [YouTube video].
This last video is Clod Barrera talking about the importance of security. Clod is an IBM Distinguished Engineer and Chief Technical Strategist for the IBM System Storage product line. Here is his [YouTube video].
It looks like Marissa is having a lot of fun taking these videos at the event. More videos, as we get them, will be posted to the [IBM videos channel].
My IBM colleague Marissa Benekos brought her hand-held video camera to the [Storage Networking World] conference in Orlando, Florida. I am not there, as I had a conflict with another conference going on here in Tucson, so I am relying on Marissa to feed me information to blog about.
In this segment, she interviews "booth babe" David Bricker. I've known David a long time, and if you are at the conference, tell him I sent you to visit him at the IBM booth.
David Bricker shows off some of the IBM System Storage product line at SNW in this YouTube video (2 minutes).
Sadly, I can't be in two places at once. SNW is a great conference to attend!
Well, it's Tuesday, and that means more IBM announcements!
Let's do a quick recap of what was announced for storage:
We now support 1000GB SATA-II drives in the DS4000 series. This is available for the DS4200 model 7V, DS4700 and DS4800, as well as the expansion drawers EXP420 and EXP810. When I asked our marketing team why we weren't going to say "1TB" like everyone else, they said they thought 1000GB sounds bigger. I guess I should not have asked that on April Fool's Day. For more details, see the IBM press releases for the [DS4200/EXP420 and DS4700/DS4800/EXP810].
IBM announced new machine code Release 1.4a for the IBM Virtualization Engine™ TS7700 virtual tape library for our System z mainframe customers. Various features come with this new level of machine code; see the IBM [Press Release] for more details:
Load balancing across the grid
Host control over the copy of logical volumes on a cluster by cluster basis
Option to gracefully remove an individual cluster from an existing grid
Initial-state reset for TS7700 database for cluster cleanup
Option to upgrade single-cache to dual-cache configuration
Also announced were updates to the 7214 model 1U2. Technically this is not in the IBM System Storage product line, but instead is designed specifically for our System p server line. This is a "media drawer" that allows you to have tape on one side, and optical on the other, in a single enclosure. IBM announced that you can now have DAT160 80GB drives that are read-write compatible with DAT72 and DDS4 media, and half-high LTO-4 drives that can read LTO-2 media and are read-write compatible with LTO-3 media. Read the IBM [Press Release] for details.
Finally, if you are in the United States, Canada or the Caribbean, there is a special discount promotion for tape libraries purchased before June 20, 2008. This includes the IBM TS3100, TS3200, TS3310 and TS3500 libraries. See the [Promotion Details] for eligibility.
IBM has added capability to the IBM TotalStorage Productivity Center for Replication. Here is a quick review of the different options for this component:
Base replication (uni-directional from primary to disaster site)
Two-site replication (bi-directional, including failover and failback)
Three-site replication (site awareness for all the copy sessions between all three sites in all situations)
Productivity Center for Replication supported all these levels for the DS8000, DS6000 and ESS 800 disk models, but for SVC it only supported FlashCopy and Metro Mirror in the uni-directional base. IBM announced version 3.4 today, which adds SVC support for Global Mirror (asynchronous disk mirroring) and bi-directional failover/failback. This support lets you have "practice volumes" that allow IT managers to perform "disaster recovery exercises" without disrupting production workloads.
Also, for the DS8000, there is support for the new Space Efficient FlashCopy and Dynamic Volume Expansion features.
The Productivity Center for Replication server can run on either a Windows/Linux-x86 server or a z/OS mainframe server. The Productivity Center for Replication on System z offers all the same new support for SVC and DS8000, as well as incorporating the Basic HyperSwap capability that I mentioned in my post last February, [DS8000 Enhancements for the IBM System z10 EC].
Here are the IBM press releases for the TotalStorage Productivity Center for Replication on [Windows/Linux-x86 and System z] servers.
I'm at a Business Partner conference today, discussing these announcements and other topics, so I need to get back to those festivities.
On StorageZilla, fellow blogger Mark Twomey introduces the latest entrant from EMC to the blogosphere, in his post [Polly Pearson's blog].
Although we share the same last name, with the same exact spelling, I would like to be the first to point out that we are not related, at least as far as I know. Based solely on her post [Welcome to my Blog - Part 1], she is a year younger than I am, a lot better looking, majored in communications, and is not afraid to quit a crappy job for a much better job elsewhere. I, on the other hand, majored in engineering, but agree wholeheartedly about not sticking around in a crappy situation. There is such a skills shortage out there in the IT industry, with a cap on U.S. [H-1B visas] at a paltry [65,000 this year]. If you don't like your IT job, you should be able to quit and find another one in the IT industry that you are more passionate about.
On a similar theme, over at DrunkenData, Jon Toigo's latest post asks if you are [Feeling Insecure About Your Job?]. ScoreLogix's Job Security Index has fallen in the United States, with a sharp drop specifically for IT jobs. Jon points out that while it might be easy to note that a number went up or down, it is far more difficult to explain why it did so. He gives a good piece of career advice:
Want to keep your job? Play by the rules of the front office: demonstrate the value of what you do for the company from the standpoint of cost-savings, risk reduction and process improvement. Make yourself indispensable. If they don’t appreciate you then, you need to move on. You will always be hiding in your cubical and sweating a pink slip ...
So shine bright. Be remarkable. It is not always easy to communicate your value in a technical position to clueless non-technical managers. Certainly, writing a blog helps. Within IBM, there are over 3,500 bloggers. Most post within the safe confines behind the firewall, but manage to generate ideas, present valid arguments, and get the conversation rolling with the right set of people, which might be difficult otherwise in a company like IBM, with over 350,000 employees scattered around the world. A few of us daringly blog in full public view, and carry the conversation to our clients, prospects, analysts, journalists, Business Partners, and others within the IT industry.
So, Polly Pearson from EMC, although we have never met in person, I too welcome you to the blogosphere!
There is a difference between improving "energy efficiency" versus reducing "power consumption".
Let's consider the average 100-watt light bulb, of which 5 watts generate the desired feature (light), and the other 95 watts are given off as undesired waste (heat). In this case, it would be 5 percent efficient. If you delivered a new light bulb that generated 3 watts of light for only 30 watts of energy, then you would have an offering that was more energy efficient (10 percent instead of 5 percent) and used 70 percent less power (30 watts instead of 100 watts). This new "dim bulb" would not be as bright as the original, but it has other desirable energy qualities.
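The same arithmetic, as a trivial Python helper you can reuse for any device:

```python
def efficiency(useful_watts, total_watts):
    return useful_watts / total_watts

print(efficiency(5, 100))   # 0.05: the classic incandescent bulb
print(efficiency(3, 30))    # 0.10: dimmer, yet twice as efficient
print(1 - 30 / 100)         # 0.70: and it draws 70 percent less power
```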
Nearly all of the output of data center equipment results in heat. In The Raised Floor blog post [It's Too Darn Hot!], Will Runyon explains how IBM researcher Bruno Michel in Zurich has developed new ways to cool chips with water shot through thousands of nozzles, much like the capillaries in the human body. This is just one of many developments that are part of IBM's [Project Big Green].
But what if the desired feature is heat, and the undesired feature is light? In the case of Hasbro's toy [Easy-Bake Oven], a 100W incandescent light bulb is used to bake small cakes. This generates 95W of desired heat, wasting only 5W as light (unused inside the oven). That makes this little toy 95 percent energy efficient, but it consumes as much energy as any other 100W lamp or fixture in your house. With manufacturing switching from incandescent to compact fluorescent bulbs, this toy oven may not be around much longer.
While we all joke that it is just a matter of time before our employers make us ride stationary bicycles attached to generators to power our monstrous data centers, 23-year-old student Daniel Sheridan designed a see-saw for kids in Africa to play on that generates electricity for nearby schools. [Dan won the "most innovative product" award at the Enterprise Festival].
Another approach is to improve efficiency by converting previously undesirable outcomes into desirable ones. Brian Bergstein has a piece in Forbes titled ["Heat From Data Center to Warm a Pool"]. Here's an excerpt:
"In a few cases, the heat produced by the computers is used to warm nearby offices. In what appears to be a first, the town pool in Uitikon, Switzerland, outside Zurich, will be the beneficiary of the waste heat from a data center recently built by IBM Corp. (nyse: IBM) for GIB-Services AG.
As in all data centers, air conditioners will blast the computers with chilly air - to keep the machines from exceeding their optimum temperature of around 70 degrees - and pump hot air out.
Usually, the hot air is vented outdoors and wasted. In the Uitikon center, it will flow through heat exchangers to warm water that will be pumped into the nearby pool. The town covered the cost of some of the connecting equipment but will get to use the heat for free."
I see a business opportunity here. Next to every data center lamenting its power and cooling costs, build a state-of-the-art fitness center for the employees and nearby townspeople. Exercise on a stationary bicycle generating electricity, while your kids play on the see-saw generating electricity, and then afterwards the whole family can take a dip in the heated swimming pool. And if the company subscribes to the notion of a Results-Oriented Work Environment [ROWE], it could encourage its employees to take "fitness" breaks throughout the day, rather than having everyone there in the early morning or late evening hours, leveling out the energy generated.
In explaining the word "archive", we came up with two separate Japanese words. One was "katazukeru", and the other was "shimau". If you are clearing the dinner plates from the table after your meal, for example, it could be done for two reasons. Both words mean "to put away", but the motivation that drives this activity changes the word usage. The first reason, katazukeru, is because the table is important: you need the table to be empty or less cluttered to use it for something else, perhaps to play a card game, work on arts and crafts, or pay your bills. The second reason, shimau, is because the plates are important: perhaps they are your best tableware, used only for holidays or special occasions, and you don't want to risk having them broken.

As it turns out, IBM supports both senses of the word archive. We offer "space management" when the space on the table (or disk or database) is more important, so older low-access data can be moved off to less expensive disk or tape. We also offer "data retention" when the data itself is valuable, and must be kept on WORM or non-erasable, non-rewriteable storage to meet business or government regulatory compliance.
The process of archiving your data from primary disk to alternate storage media can satisfy both motivations.
IBM offers software specifically to help with this archival process. For email archive, IBM offers [IBM CommonStore] for Lotus Domino and Microsoft Exchange. For database archive, including support for various ERP and CRM applications, IBM offers [IBM Optim], from the acquisition of Princeton Softech.
The problems occur when companies, under the excuse of simplification or consolidation, feel they can just use their backups as archives. They take daily backups of their email repositories and databases, and keep these for seven to ten years. But what happens when their legal e-discovery team needs to find all emails or database records related to a particular situation, employee, client or account? Good luck! Most backups are not indexed for this purpose, so storage admins are stuck restoring many different backups to temporary storage and combing through the files in hopes of finding the right data.
Backups are intended for operational recovery of data that is lost or corrupted as a result of hardware failures, application defects, or human error. Disk mirroring or remote replication might help with hardware failures, but any logical deletion or corruption of data is immediately duplicated, so it is not a complete solution. FlashCopy or Snapshot point-in-time copies are useful for going back a short time to recover from logical failures, but since they are usually on the same hardware as the original copies, they may not protect against hardware failures. And then there's tape; while many people malign tape as a backup storage choice, 71 percent of customers send backups to tape, according to a 2007 Forrester Research report.
Backups often aren't viable unless restored to the same hardware platform, with the same operating system and application software to make sense of the ones and zeros. For this reason, people typically keep only two to five backup versions, for no more than 30 days, to support operational recovery scenarios. If you make updates to your hardware, OS or application software, be sure to take fresh new backups, as the old backups may no longer apply.
Archives are different. Often, these are copies that have been "hardened" or "fossilized" so that they make sense even if the original hardware, OS or application software is unavailable. They might be indexed so that they can be searched, letting you retrieve exactly the data you are looking for. Finally, they are often stored with "rendering tools" that can display the data using your standard web browser, eliminating the need to have a fully working application environment.
Take any backup you might have from five years ago and try to retrieve the information. Can you do it? This might be a real eye-opener. You might have inherited this backup-as-archive approach from someone else, and are trying to figure out what to do differently that makes more sense. Call IBM, we can help.
Tim Ferriss started the festivities with [The Grand Illusion: The Real Tim Ferriss speaks]. He claimed that for the past year, he had outsourced the writing of his blog to a writer from India and an editor from the Philippines. Given that his post was dated March 31, and he writes frequently about the benefits of outsourcing, it appeared to be a legitimate post. However, Tim fessed up the following day, claiming that it was April 1 in Japan when he wrote it.
Guy Kawasaki wrote [April Fools' Stories You Shouldn't Believe], including my favorite, #12: "Ruby on Rails cited Twitter as the centerpiece of its new 'Rails Can Scale' marketing program." Speaking of Twitter, fellow IBM blogger Alan Lepofsky from our Lotus Notes team wrote [Great, now there is Twitter Spam]. It looked like a real post, but then I realized ... everything on Twitter is spam!
Topics like energy consumption and global warming were fodder for posts and pranks. The post [Was Earth Hour a joke again?] argued that the preparation for "Earth Hour" last week in effect used up more energy than the hour of this annual "lights-off" event actually saved. This reminded me of John Tierney's piece in the New York Times ["How virtuous is Ed Begley, Jr.?"], where a scientist explains that it is more "green" for the environment to drive a car short distances than to walk:
If you walk 1.5 miles, Mr. Goodall calculates, and replace those calories by drinking about a cup of milk, the greenhouse emissions connected with that milk (like methane from the dairy farm and carbon dioxide from the delivery truck) are just about equal to the emissions from a typical car making the same trip. And if there were two of you making the trip, then the car would definitely be the more planet-friendly way to go.
Wayan Vota, my buddy over at OLPCnews, writes in his post [Windows XO Child Centric Development] that the "Sugar" operating environment on the innovative Linux-based XO laptops will soon be renamed the "Windows XO Operating System", with the new motto "Windows XO: A Child-Centric Operating Platform for Learning, Expression and Exploration." The mocked-up photo of an XO laptop with the Windows XO logo was excellent!
The economists from Freakonomics explain in [And While You're at it, Toss the Nickel] that it costs the US Government 1.7 cents to produce each penny, losing $50 million each year making pennies. Each nickel costs 10 cents to produce. This one was dated March 31, so it could actually be true. Sad, but true.
My favorite, however, was EMC blogger Barry Burke's post ["5773 > c"], explaining how their scientists were able to reduce latency on the EMC SRDF disk replication capability:
What the de-dupe team found is that there is a hidden feature within recent generations of this chip that allow a single bit, under certain circumstances, to represent TWO bits of information.
Still, almost 34% of the total bits transferred were in fact aligned double-zeros, far more than all other bit combinations - and most importantly, these were quite frequently byte-aligned, as required by this new-found capability. Makes sense, if you think about it - most of those 32- and 64-bit integers are used to store numbers that are relatively small (years, months, days, credit charges, account balances, etc.). So that's why the team decided to use this new two-fer bit to represent "00".
Mathematically, if you can transmit 34% of the data using half as many bits, you reduce the number of bits you have to transfer in total by 17%. Which, while not necessarily earth-shattering, is nothing to be ashamed of. On top of the SRDF performance enhancements delivered in 5772 (30% reduction in latency or 2x the distance), this new enhancement adds another 17% latency improvement (or ~1.4x more distance at the same latency). Combined with 5772, SRDF/S customers could see a 50% reduction in latency. And 5773 allows SRDF/A cycle times to be set below 5 seconds (with RPQ) - this new feature adds a little headroom to maximize bandwidth efficiency for the shortest possible RPO.
Again, this looked real, until I did the math. Start with the speed of light in a vacuum ("c" in BarryB's title), which is roughly 300,000 kilometers per second, or, put into more understandable units, 300 kilometers per millisecond. However, light travels slower through all other materials, and for fiber optic glass it is only about 200 kilometers per millisecond. Sending a block of data across 100km, and then getting a response back that it arrived safely, is a total round-trip distance of 200km, so roughly 1 millisecond. However, EMC SRDF often takes two or three round trips per write, versus IBM Metro Mirror on the IBM System Storage DS8000, which has this down to a single round trip. The number of round trips has a much bigger effect on latency than EMC's double-bit data compression technique. With IBM, you only experience about 1 millisecond of latency per write for every 100km of distance between locations, the shortest latency in the industry.
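You can redo my math in a few lines of Python; the 200 km/ms figure for glass is the same rule of thumb used above.

```python
FIBER_KM_PER_MS = 200   # light in fiber optic glass travels ~200 km per millisecond

def write_latency_ms(distance_km, round_trips=1):
    # Each round trip covers the distance out and back.
    return round_trips * 2 * distance_km / FIBER_KM_PER_MS

print(write_latency_ms(100, round_trips=1))   # 1.0 ms: a single-round-trip protocol
print(write_latency_ms(100, round_trips=3))   # 3.0 ms: a three-round-trip protocol
```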
It is good that, once a year, we are all reminded to be skeptical of what we read in the blogosphere, and to check the facts!