This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years at IBM, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI Market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported the industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist during his time in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
The developerWorks Connections platform is now in read-only mode and content is available for viewing only. No new wiki pages, posts, or messages may be added. The platform will officially shut down on March 31, 2020, after which content will no longer be available; please see our FAQ for more information.
Continuing my romp through Australia and New Zealand, today I presented in Hobart, the second city on my seven-city tour. Hobart is on a separate island called Tasmania, just south of the main Australian continent. The island is heart-shaped, and Hobart is in the lower right ventricle.
Hobart boasts the second deepest harbour in the Southern Hemisphere (yesterday's Sydney Harbour being the first). It is quite cold here, but at least the skies are clear.
I stayed in the [Henry Jones Art Hotel], named after the famous owner of the IXL Jam Company. When I arrived, they presented me with a list of 18 known convicts that shared my last name: PEARSON. I checked and made sure I was not on the list. Then it was explained to me that here in Australia, everyone values their criminal ancestors, as this is how the country was formed. The names were from registry archives from the 19th century.
In keeping with the concept of an art hotel, each of the rooms was unique, which is a nice way of saying that they fit whatever they could into the spaces available. It's been a while since I stayed at a hotel with the phone at one end of the room, but the electrical outlet at the other. The thermostat was hidden in the bathroom, and I had to master some 16 different ropes to put down the shades, as the bright light from the [Cenotaph] was keeping me awake. I was able to take pictures of some of the art sculptures from the balcony.
This was a smaller event than Sydney, with only about two dozen attendees. This makes sense, as Hobart's population is only about 250,000 people. The island of Tasmania holds about 1 million people overall, concentrated mostly along the center line of the island.
As we had done in Sydney, Anna Wells presented IBM strategy and products, Adam Beames, system administrator for Tennis Australia (shown here in the picture at left) presented his experiences transforming their datacenter, and I presented the future trends in storage.
In appreciation for Adam's presentations in Sydney and Hobart, I presented him with a copy of my book, [Inside System Storage: Volume I], available from my publisher, Lulu.com, in paperback, hard cover, and now also in eBook format for those with Kindle, Nook or other digital book readers. See panel at right on this blog for ordering information.
In last week's System Storage Portfolio Top Gun class in Dallas, some of the students were not familiar with Really Simple Syndication (RSS). For the uninitiated, this can be intimidating. I thought a quick overview of what I've done might help:
Choose a "feed reader". I chose Bloglines, but there are many others.
Use Technorati to search other blogs for keywords or phrases I am looking for.
When I find a blog that I like to continue tracking, I "add" it to my subscription list on Bloglines. Just hit "add" and copy the URL of the blog you want to track. Bloglines will figure out the RSS keywords required. I track eight blogs at the moment, but some people with lots of time on their hands track 20 or more. It is easy to unsubscribe, so don't be afraid to try some out for a few days.
Since I was actually going to run a blog of my own, I read a few books on the topic. One I recommend is "Naked Conversations" by Robert Scoble and Shel Israel, both experienced bloggers.
Finally, I am not big on spell checking, but most places have the option to preview your post or comment before it actually gets posted, which is not a bad idea if you use any HTML tags.
For a quick taste of blogging, consider using Data Storage Blogger Feed Reader. This has a lot of blogs on the topic of storage, already added and categorized for your convenience, ready for your perusal.
I am sure there are many other ways to enjoy the Blogosphere, but this works for me.
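For readers curious what a feed reader actually does under the covers, here is a minimal sketch in Python that pulls the title and link out of each item in an RSS 2.0 document, using only the standard library. The feed content and URLs below are made up for illustration; a real reader would fetch the XML over HTTP and poll it periodically.

```python
import xml.etree.ElementTree as ET

# A tiny RSS 2.0 document, standing in for a real feed fetched from a URL
sample_rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Inside System Storage</title>
    <item><title>Hobart</title><link>http://example.com/hobart</link></item>
    <item><title>RSS Basics</title><link>http://example.com/rss</link></item>
  </channel>
</rss>"""

def list_entries(rss_xml: str):
    """Return (title, link) pairs for each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(rss_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in list_entries(sample_rss):
    print(title, "->", link)
```

A feed reader like Bloglines is essentially this loop run on a schedule, remembering which items it has already shown you.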
Jeff Garten, a professor of International Trade at the Yale School of Management covered the Post-Crisis Global Economy. How well did the world's governments do? Here was his "scorecard" of the five "R's":
Jeff gives world governments an "A", pumping about $20 trillion US dollars onto the world stage to stave off the worst impacts.
Jeff gives an "I" (Incomplete). Not quite an "F", as government regulations just have not been adopted to address situations like this.
Jeff gives this one an "I" also. The major imbalance is the US borrowing so much from China, and China keeping its currency artificially low.
Jeff gives this a "B". Banks and other financial services have changed the way they do business and have taken some corrective actions on their own, often because of strings attached to bailout funds.
Jeff gives this one a "C+", in that he is not hopeful for a quick recovery. Economists have five recovery models. A quick recovery has a "V" shape. A slower full recovery has a "U" shape. Some recoveries have premature upticks followed by a second crash, representing a "W" shape. Japan still has not recovered from their crash from last decade, like an "L" shape. Jeff feels that the United States will probably have a "reverse J" where it looks like a slow "U" shaped recovery over the next three years, but we never get back to our original prominence.
Jeff did not give the impression the worst was over. Rather, he felt there were still problems ahead: banks are still carrying a lot of bad debt, and the real estate industry may take a while to recover. He feels the era of a dollar-centric world that started circa 1945 is over, and that the dollar will continue to decline for several decades. Replacing it will be a combination of the Euro, Japanese Yen and Chinese Yuan.
What can we look forward to? There is a definite shift to Asia and other large emerging markets like Brazil. The "Global Commons" like food and energy are under severe stress. Global rules will go into a sort of remission. A resurgence of national governments to protect citizens is underway. Finally, there will be a return of industrial policy.
Continuing my coverage of the Data Center 2010 conference, Monday afternoon included presentations from IBM executives.
Blueprint for a Smart data center
Steve Sams, IBM Vice President, Global Site and Facilities Services, is well known at this conference. In charge of designing and building data center facilities for IBM and its clients, he has lots of experience in various datacenter configurations.
The presentation was an update from last year's [Data Center Cost Saving Actions Your CFO Will Love]. 70 cents of every IT dollar is spent on just keeping the existing systems running, leaving only 30 percent to handle growth and business transformation. Over 70 percent of datacenters are more than seven years old, and may not be designed to handle today's density in IT equipment.
Many companies wanting to virtualize are stalled. IBM's Server Virtualization Analytics services can help cut this transformation time in half, with an ROI of only 6-18 months for complex Wintel environments. This is just one of the 17 end-to-end datacenter analytics tools IBM offers. The results have been 220 percent more VM instances per admin FTE than traditional deployments. IBM drinks its own champagne, having saved over $4 Billion USD in its own datacenter consolidation and virtualization projects.
Want to Cut the Cost of Storage in Half? Here’s How
The speaker of this session started out with a startling prediction: the amount of storage purchased in the five years 2010-2014 will be 25x what was purchased in 2009, on a PB basis. Most attempts to stem this capacity growth have failed. Therefore, the focus to cut storage costs needs to be elsewhere.
The first concern is poor utilization. Utilization on DAS averages 10 percent, SANs 40-50 percent. Thin provisioning can raise this to 60-75 percent. Thin provisioning was first introduced for mainframe storage in the 1990s by StorageTek, which IBM resold as the IBM RAMAC Virtual Array (RVA), but many credit 3PAR for porting this over to distributed operating systems in 2002. Other options include data deduplication and compression to reduce the cost of storing data on disk.
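The utilization argument is easy to put numbers on. The sketch below computes the effective cost per gigabyte of data actually stored; the $2/GB raw price is a hypothetical figure, but the 10, 45, and 70 percent utilization levels are the ones cited above.

```python
def effective_cost_per_used_gb(raw_cost_per_gb: float, utilization: float) -> float:
    """Cost per GB of data actually stored, given raw $/GB and utilization (0-1)."""
    return raw_cost_per_gb / utilization

# Illustrative $2/GB raw disk price, at the utilization levels cited in the talk
das = effective_cost_per_used_gb(2.00, 0.10)   # DAS at 10% utilization
san = effective_cost_per_used_gb(2.00, 0.45)   # SAN at ~45% utilization
thin = effective_cost_per_used_gb(2.00, 0.70)  # thin-provisioned at ~70%
print(das, round(san, 2), round(thin, 2))
```

At 10 percent utilization, every gigabyte of real data costs ten times the raw disk price, which is why utilization, not raw $/GB, dominates the economics.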
The second approach is the use of storage tiering. In this case, the speaker felt SATA was 3x cheaper ($/GB) but can also be 3x lower in performance. Moving data from faster FC/SAS 10K and 15K RPM drives to slower 7200 RPM drives can offer some cost reductions.
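A quick back-of-envelope model shows how the claimed 3x price gap translates into savings. The $6/GB FC/SAS price and the 60 percent of data assumed cool enough to move are hypothetical; only the 3x ratio comes from the talk.

```python
def blended_cost(fc_cost_per_gb: float, fraction_on_sata: float) -> float:
    """Blended $/GB after moving a fraction of data to SATA at 1/3 the price."""
    sata_cost = fc_cost_per_gb / 3.0  # the speaker's claim: SATA ~3x cheaper per GB
    return (1 - fraction_on_sata) * fc_cost_per_gb + fraction_on_sata * sata_cost

# Hypothetical: FC/SAS at $6/GB, with 60% of data cool enough to move down a tier
print(round(blended_cost(6.0, 0.6), 2))
```

Moving 60 percent of data to the cheap tier cuts the blended cost from $6 to $3.60 per GB, a 40 percent reduction, without shrinking the data at all.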
Implementing "quotas" in email, file systems or other applications is one of the worst financial decisions an IT department can make, as it merely shifts the storage management from experts (IT staff) to non-experts (end users).
The speaker recommended using archive instead. Keeping backup tapes long-term is not archive; backups should be no more than eight weeks old.
Interactive polls of the audience gave some interesting insight:
When asked their expected storage capacity "compound annual growth rate" (CAGR) for the next few years, 26 percent estimated 35-50 percent CAGR, 30 percent estimated 50-75 percent CAGR, and 15 percent estimated greater than 75 percent CAGR.
For thin provisioning, 43 percent of the audience already are using it, and 33 percent plan to next year.
Similarly, 41 percent of the audience is using data deduplication for their primary data, and 30 percent plan to next year.
For automated tiering that moves portions of data automatically between fast and slow tiers of storage to optimize performance, like IBM's Easy Tier, 20 percent are already using it, and 44 percent plan to next year.
41 percent already have some archiving for file systems, 17 percent plan to next year.
Only 6 percent have an all-disk backup/replication environment, but 20 percent plan to adopt this next year.
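To see what those CAGR estimates from the poll mean in practice, a one-line compounding formula is enough. The 100 TB starting capacity below is a hypothetical figure; the 50 percent growth rate is taken from the middle of the polled range.

```python
def project_capacity(start_tb: float, cagr: float, years: int) -> float:
    """Projected capacity after compounding growth at `cagr` (0.50 = 50%)."""
    return start_tb * (1 + cagr) ** years

# Hypothetical shop with 100 TB today, growing at 50% CAGR for three years
print(project_capacity(100, 0.50, 3))
```

At 50 percent CAGR, capacity more than triples in three years (100 TB becomes 337.5 TB), which is why the speaker argued that cost cutting has to come from somewhere other than stemming growth.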
The downside of trying to squeeze out costs with these approaches and technologies is that there can be a negative impact to performance. The speaker suggested a balanced approach of adding lower-cost storage to existing fast storage to meet both capacity and performance requirements.
Smarter Infrastructures Deliver Better Economics
Elaine Lennox, IBM Vice President and Business Line Executive for System Software, presented the "3 D's" of a Smarter Infrastructure: design, data and delivery.
Design: new technologies and approaches are forcing people to reconsider the design of their applications, their infrastructure and their facilities.
Data: on average, companies store 17 copies of the same piece of production data. Data needs to be managed better in the future.
Delivery: new types of cloud computing are changing the way IT services can be delivered, and how they are consumed by end users.
Roadmap to Enterprise Cloud Computing
This was a combo vendor/customer presentation. Rex Wang from Oracle presented an overview of Oracle's service and product offerings, and then Jonathan Levine, COO of LinkShare, presented his experiences deploying Oracle ExaData.
Rex presented Oracle's "Cloud maturity model" that has its customers go through the following steps:
Silo: each application on its own stack of software, server and storage.
Grid: virtualization for shared infrastructure and platforms (internal IaaS and PaaS).
Private cloud: self-service, policy-based management, metered chargeback and capacity planning.
Hybrid Cloud: workloads portable between private and public clouds, offering federation, cloud bursting, and interoperability.
Rex felt the standard "Buy vs Rent" argument in the business world applies to IT as well, and that there could be break-even points over long-term TCO analysis that favors one over the other. He cited internal research that showed 28 percent of Oracle customers have internal or private cloud, and 14 percent use public cloud. 25 percent use Application PaaS, 21 percent database PaaS, 5 percent Identity management PaaS, 10 percent Compute IaaS, 18 percent storage IaaS, and 15 percent Test/Dev IaaS.
Rex felt that in all the hype around taking a single host and dividing it into multiple VMs, people have forgotten that the opposite approach of taking multiple instances into clusters is also important. He also felt you have to look at the entire "Application Lifecycle" that goes from:
IT sets up the equipment as an internal PaaS or IaaS
Developers write the application
End users are trained and use the application
Application owners manage and monitor the application
IT meters the usage and does chargeback to each application owner
Oracle's ExaData and ExaLogic compete directly against IBM's Smart Analytics System, IBM CloudBurst, and IBM Smart Business Storage Cloud.
Next up was Jonathan Levine, COO of [LinkShare], a subsidiary of Rakuten in Japan. This is an [Affiliated Marketing] company. Instead of pay-per-view or pay-per-click web advertising, this company only gets paid when the "end user" actually buys something after clicking on web advertising.
The business runs on an 8TB data warehouse and 1 TB OLTP database, ingesting 50GB daily, with 400 million transactions per day with 8.5 GB/sec throughput.
They discovered that the Oracle ExaData did not work right out of the box. In fact, it took them about a year to get it working for them, roughly the same amount of time it took them on their last Oracle 10 to Oracle 11 conversion.
Part of their business allows advertisers and web content publishers to generate reports on activity. Jonathan indicates that if the response is longer than 5 seconds, it might as well be an hour. He called this the "Excel" rule, that results need to be as fast as local PC Microsoft Excel pivot table processing.
With the new Exadata, they met this requirement. Over 84 percent of their transactions happen under 2 seconds, 9 percent take 2-4 seconds, and another 4 percent in the 4-8 second range. They hope that as they approach the winter holiday season that they can handle 2-3x more traffic without negatively impacting this response time.
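Percentile buckets like these are simple to compute from raw response times. The sketch below, with a synthetic sample constructed to roughly match the reported distribution, sorts latencies into the same under-2s, 2-4s, and 4-8s bands.

```python
def bucket_percentages(latencies, bounds=(2.0, 4.0, 8.0)):
    """Percentage of samples under 2s, in 2-4s, in 4-8s, and over 8s."""
    counts = [0] * (len(bounds) + 1)
    for t in latencies:
        for i, bound in enumerate(bounds):
            if t < bound:
                counts[i] += 1
                break
        else:
            counts[-1] += 1  # slower than the last bound
    return [100.0 * c / len(latencies) for c in counts]

# Synthetic sample of 100 transactions, shaped like the reported distribution
sample = [1.0] * 84 + [3.0] * 9 + [6.0] * 4 + [9.0] * 3
print(bucket_percentages(sample))
```

Tracking the bands rather than just an average matters here: an average of 2 seconds can hide the long-tail reports that violate the 5-second "Excel rule".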
As I mentioned in my post [Moving Over to MyDeveloperWorks], those of us bloggers on IBM's DeveloperWorks are moving over to a new system called "MyDeveloperWorks" which has a host of new features.
Fortunately for me, I missed the note asking for volunteers to be among the first bloggers on the block to move over. I was traveling and decided not to deal with it until I got back. However, fellow IBM Master Inventor Barry Whyte was not so lucky. It is safe to say he was stupid enough to volunteer, and is probably regretting the decision every day since. In case you lost his RSS feed, or can't find him anymore on Google or whatever search engine, here is his [new blog].
As for my blog, I have asked to postpone the move until all the problems that Barry has encountered are resolved. That might be a while, but if you lose access to mine sometime in the near future, hopefully at least you have been warned as to what might have happened.
Continuing my week in Chicago, I decided to attend some of the presentations from the System x side. This is the advantage of running both conferences in the same hotel, attendees can choose how many of each they want to participate in.
Wayne Wigley, IBM Advanced Technical Support (ATS), presented a series of presentations on different server virtualization offerings available for System x and BladeCenter servers. I am very familiar with virtualization implemented on System z mainframes, as well as IBM's POWER systems, and have working knowledge of Linux KVM and Xen, so I was well prepared to handle hearing the latest about Microsoft's Hyper-V and VMware's vSphere version 4.
Microsoft Hyper-V 2008
Hyper-V can run as part of Windows 2008, or standalone on its own. Different levels of Windows 2008 include licenses for different numbers of Windows virtual machines (VMs). Windows Server 2008 Standard includes 1 Windows VM, Enterprise includes 4 Windows VMs, and the Datacenter edition includes an unlimited number of Windows VMs. If you want to run more Windows VMs than come included, you need to pay extra for each additional one. For example, to run 10 Windows VMs on a 2-socket server would cost about $9000 US dollars on Standard, but only $6000 US dollars on the Datacenter edition (list prices from the Microsoft Web site).
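The licensing arithmetic can be sketched as a small comparison. The per-license and per-socket prices below are hypothetical values chosen only to mirror the ~$9000 versus ~$6000 example above, not actual Microsoft list prices, and the licensing models are simplified.

```python
import math

def standard_cost(vms: int, price_per_license: float, vms_per_license: int = 1) -> float:
    """Simplified Standard-style licensing: each license covers a fixed VM count."""
    return math.ceil(vms / vms_per_license) * price_per_license

def datacenter_cost(sockets: int, price_per_socket: float) -> float:
    """Simplified Datacenter-style licensing: per socket, unlimited Windows VMs."""
    return sockets * price_per_socket

# Hypothetical prices mirroring the 10-VM, 2-socket example in the text
print(standard_cost(10, 900.0))    # pay per VM
print(datacenter_cost(2, 3000.0))  # pay per socket, run as many VMs as you like
```

The crossover point is the interesting part: below a handful of VMs per socket, per-VM licensing wins; beyond that, the unlimited per-socket edition is cheaper.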
Unlike VMware, which takes a monolithic approach as a hypervisor, Hyper-V is more like Xen with a microkernelized approach. This means you need a "parent" guest OS image, and the rest of the guest OS images are then considered "child" images. These child images can be various levels of Windows, from Windows XP Pro to Windows Server 2008, Xen-enabled Linux, or even a non-hypervisor-aware OS. The "parent" guest OS image provides networking and storage I/O services to these "child" images. For the hypervisor-aware versions of Windows and Linux, Hyper-V allows optimized access to the hypervisor, "synthetic devices", and hypercalls. Synthetic devices present themselves as network devices, but only serve to pass data along the VMBus to other networking resources. This process does not require software emulation, and therefore offers higher performance for virtual machines and lower host system overhead. For non-hypervisor-aware OS images, Hyper-V provides device emulation through the "parent" image, which is slower.
Microsoft System Center Virtual Machine Manager (SCVMM) can manage both Hyper-V and VMware VI3 images. Wayne showed various screen shots of the GUI available to manage Hyper-V images. In standalone mode, you lose the nice GUI and management console.
Hyper-V supports external, internal and private virtual LANs (VLAN). External means that VMs can communicate with the outside world over standard ethernet connections. Internal means that VMs can communicate with "parent" and "child" guest images on the same server only. Private means that only "child" guests can communicate with other "child" images.
Hyper-V supports disk attached via IDE, SATA, SCSI, SAS, FC, iSCSI, NFS and CIFS. One mode is "Virtual Hard Disk" (VHD) similar to VMware VMDK files. The other is "pass through" mode, which are actual disk LUNs accessed natively. VHDs can be dynamic (thin provisioned), fixed (fully allocated), or differencing. The concept of differencing is interesting, as you start with a base read-only VHD volume image, and have a separate "delta" file that contains changes from the base image.
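The differencing concept is essentially copy-on-write at the virtual disk level, and a toy model makes it concrete: writes land only in the child's delta, while reads fall through to the read-only base image for untouched blocks. The block-to-bytes dictionaries below stand in for real VHD file structures.

```python
class DifferencingDisk:
    """Toy model of a differencing VHD: a read-only base image plus a
    separate delta that records only the blocks changed since the base."""

    def __init__(self, base: dict):
        self.base = base   # read-only parent image: block number -> data
        self.delta = {}    # child image: only the blocks that were overwritten

    def write(self, block: int, data: bytes):
        self.delta[block] = data  # the base image is never modified

    def read(self, block: int) -> bytes:
        # Changed blocks come from the delta; everything else from the base
        return self.delta.get(block, self.base.get(block, b"\x00"))

base_image = {0: b"boot", 1: b"os"}
vm_disk = DifferencingDisk(base_image)
vm_disk.write(1, b"patched")
print(vm_disk.read(0), vm_disk.read(1))
```

One appeal of this layout is that many child VMs can share a single golden base image, each carrying only its own small delta file.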
Some of the key features of Hyper-V 2008 are:
Being able to run concurrently 32-bit and 64-bit versions of Linux and Windows guest images
Support for 64 GB of memory and 4-way symmetric multiprocessing (SMP) per VM
Clustering for High Availability and Quick Migration of VM images
Live backup with integration with Microsoft's Volume Shadow Copy Services (VSS)
Virtual LAN (VLAN) support, and Virtual and Pass-through physical disk support
A clever VMbus, virtual service parent/client approach to sharing hardware
Optimized performance options for hypervisor-aware versions of Windows and Linux, and emulated support for non-hypervisor-aware OS images.
VMware vSphere v4.0
This was titled as an "Overview" session, but really was an "Update" session on the newest features of this release. The big change appears to be that VMware added a "v" in front of everything.
Under vCompute, there are some new features on VMware's Distributed Resource Scheduler (DRS), which includes recommended VM migrations. Dynamic Power Management (DPM) will move VMs during periods of low usage to consolidate onto fewer physical servers so as to reduce energy consumption.
Under vStorage, vSphere introduces an enhanced Pluggable Storage Architecture (PSA), with support for Storage Array Type Plugins (SATP) and Path Selection Plugins (PSP). This vStorage API allows for third-party plugins for improved fault tolerance and complex I/O load balancing algorithms. This release also has improved support for iSCSI, including Challenge-Handshake Authentication Protocol (CHAP) support. Similar to Hyper-V's dynamic VHD, VMware supports "thin provisioning" for their virtual disk VMDK files. A feature of "Storage VMotion" allows conversion between "thick" and "thin" provisioning formats.
The vStorage API for Data Protection provides all the features of VMware Consolidated Backup (VCB). The API provides full, incremental and differential file-level backups for Windows and Linux guests, including support for snapshots and Volume Shadow Copy Services (VSS) quiescing.
VMware introduces direct I/O pass-through for both NIC and HBA devices. While this allows direct access to SAN-attached LUNs similar to Hyper-V, you lose a lot of features like VMotion, High Availability and Fault Tolerance. Wayne felt that these restrictions are temporary, and that hopefully VMware will resolve this over the next 12 months.
Under vNetwork, VMware has virtual LAN switches called vSwitches. This includes support for IPv6 and VLAN offloading.
The vSphere server can now run with up to 1TB of RAM and 64 logical CPUs to support up to 320 VM guest images. Each VM can have up to 255GB RAM and up to 8-way SMP. vSphere ESX 4 introduces a new virtual hardware platform called VM Hardware v7. While vSphere 4.0 can run VMs from ESX 2 and ESX 3, the problem is that if you have new VMs based on this newer VM Hardware v7, you cannot run them on older ESX versions.
vSphere comes in four editions: Standard, Advanced, Enterprise, and Enterprise Plus, ranging in list price from $795 US dollars to $3495 US dollars.
While IBM is the #1 reseller of VMware, we are also proud to support Hyper-V, Xen, KVM and other similar products. Analysts expect most companies will have two or more server virtualization solutions in their data center, and it is good to see that IBM supports them all.
Chris Anderson, of Wired magazine, wrote a great article called The Long Tail.
This article became a book by the same name published earlier this year, and I just discovered it on a recent visit to Second Life. A lot of IBMers are now also Second Lifers, and I suspect it is just a matter of time before we are conducting our customer briefings there, and getting our year-end bonuses paid directly in Linden bucks. (Those of you not familiar with Second Life can watch this 3-minute video from the folks at Text100.)
Anyway, the Long Tail describes the new economy of entertainment thanks to digital storage. Here are some of the key insights.
In the past, entertainment was all about hits: hit songs, hit movies, hit novels, and this was primarily because of the economic realities restricted by physical space. Chris writes: "An average movie theater will not show a film unless it can attract at least 1,500 people over a two-week run; that's essentially the rent for a screen. An average record store needs to sell at least two copies of a CD per year to make it worth carrying; that's the rent for a half inch of shelf space."
Things have changed. To drive the point home, Robbie Vann-Adibe (CEO of eCast) poses the trick question: "What percentage of the top 10,000 titles in any online media store (Netflix, iTunes, Amazon, or any other) will rent or sell at least once a month?" The answer will surprise you. Write down your guess first, then go read here. His digital jukeboxes are able to play from a list of 150,000 songs, not the few hundred you'd find at the Tap Room, which is rated as having the best jukebox in Tucson.
The phenomenon is not just limited to music. "Take books," Chris writes, "The average Barnes & Noble carries 130,000 titles. Yet more than half of Amazon's book sales come from outside its top 130,000 titles. Consider the implication: If the Amazon statistics are any guide, the market for books that are not even sold in the average bookstore is larger than the market for those that are..."
This has incredible implications for the storage industry. For one, content providers are going to dig deep into their archives to digitize and deliver "long tail" offerings. If they don't have a deep archive, many will start to build one. Second, the need to search through that large volume of content will become more critical. Classifying and indexing with the appropriate tags and metadata will be an important task.
In keeping with the spirit of a kinder, gentler 2011, I decided last week to refrain from raining on someone else's parade immediately before, during or after a competitor's announcement or annual conference, and to let EMC have their few moments in the spotlight. This of course allows me more time to learn about the announcements and reflect on marketplace reactions. Here's a quick look at the [EMC Press Release]:
A new VNXe disk system
Of the 41 new storage technologies and products EMC announced last week, the VNXe is EMC's "me-too" product to compete against other low-end disk systems like the IBM System Storage DS3524 and N3000 series. It looks truly new, developed organically from the ground up, with a new architecture and a new OS. It comes in either the 2U-high VNXe3100 or the 3U-high VNXe3300. These employ 3.5-inch SAS drives to provide Ethernet-based NFS, CIFS and iSCSI host attachment. The $10K USD price tag appears to be for the hardware only. As is typical for EMC, they charge for software features in bundles or "suites", so the actual TCO will be much higher. I have not seen any announcements on whether Dell plans to resell either the VNXe or the VNX models, now that they have acquired Compellent.
A new VNX disk system
Despite having a similar name as the VNXe, the VNX appears to be a re-hash of the Celerra/CLARiiON mess that EMC has been selling already, based on the old FLARE and DART operating systems of these older disk systems. This scales from 75 to 1000 SAS drives. While EMC calls the VNX "unified", it currently is only available in block-only and file-only models, with a future promise from EMC that they will offer a combined block-and-file version sometime in the future. EMC claims that the VNX will be faster than the predecessors, so hopefully that means EMC has joined the rest of the planet and will publish SPC-1 and SPC-2 benchmarks to back up that claim. They can compare against the SPC-1 benchmarks that our friends at NetApp ran against EMC CLARiiON.
New software for the VMAX
A long time ago, EMC announced they would provide non-disruptive automated tiering. Their first delivery, "FAST V1", handled entire LUNs at a time. EMC has now finally delivered "FAST VP", which we expected was going to be called "FAST V2", and which provides sub-LUN automated tiering between solid-state and spinning disk drives. Meanwhile, IBM has been delivering "Easy Tier" on the IBM System Storage DS8000 series, SAN Volume Controller, and Storwize V7000 disk systems.
Data Domain Archiver
Competing against IBM, HP and Oracle in the tape arena, EMC's latest addition to the Data Domain family is designed for the long-term retention of backups? Archives of backups? Backups are short-lived, protecting against the unexpected loss from hardware failure or data corruption. Keeping backups as "archives" is generally a bad mistake, as it makes it hard to e-Discover the data you need when you need it, and you may not have the appropriate hardware to restore these old backups when you do find them.
I will have to dig deeper into all of these different technologies in separate posts in the future.
Several of my IBM colleagues will be attending the "Virtual Worlds 2007" conference today and tomorrow. This conference sold out so quickly that they have already scheduled a second one for October. The focus is on 3-D internet technologies like Second Life. Attendance is expected at over 600 people.
IBM is investing heavily in this new concept of v-business. Last year, I was one of only 325 IBMers on Second Life. Now, according to this Better than Life blog entry from Grady Booch, IBM Fellow, the number is over 4000!
Of course, the challenge for IBM, and others, is learning to market in virtual worlds. Already, my team is in-world, and we meet several times a week. Using Second Life is quickly becoming an essential business skill, like participating in conference calls, or responding to instant messages.
What does meeting in-world entail?
Scheduling a time and a place
Finding a time that people can meet is no different than scheduling an audio or video conference call. In general, you don't have to worry about travel, but you do have to be somewhere actively connected to everyone else.
Finding a place involves actually determining the island, region and coordinates to hold the meeting. You need to find a place with enough seating. You don't have to worry about daylight, each person can control how much or little sunlight shows up on their screen. You do have to make sure you pick a spot that nobody else plans to use at that same time. Just like scheduling conference rooms at the site or hotel, we have to schedule rooms in advance.
To avoid this hassle, I have created the "pocket conference room". This is a single object that I can "rez" onto the ground, from my inventory, with 40 chairs, a PowerPoint presentation screen, a podium for a speaker to stand behind, and stools for speakers to sit on if they are next on the agenda. Now, I can hold impromptu meetings in any sandbox, grassy knoll, or the roof top of a building.
As with any other meeting, you need some basic ground rules. I am not talking about the usual "no shooting, no gambling, no selling" rules that you see everywhere in Second Life. Instead, rules like an avatar must stand up before speaking. Anyone with a question must first "raise their hand" and get recognized by the chair. These ground rules can be as formal as Robert's Rules of Order or more casual, depending on who is participating.
It costs 10 Linden Dollars (L$) per PAGE to upload a PowerPoint presentation. This has the immediate benefit that everyone spends more time and effort on their presentation, cutting down the number of charts and focusing more on what they are going to say.
Public Speaking Skills
It is amazing. People who are too scared to speak in front of an audience in Real Life have no problem having their avatar stand in front of other avatars in Second Life. This has greatly broadened the pool of speakers to tap into. Are you a woman with a husky masculine voice? Are you a man with a high-pitched feminine voice? Now, you can create an avatar that matches your voice.
Getting people to attend turns out to be the biggest challenge. In Real Life, organizing a face-to-face meeting involves time and effort making sure the venue has everything you need: a platform, a podium, a good Audio/Video system, and so on. All attendees have to do is show up, sit in a chair and listen.
In Second Life, however, the aspects of the venue are all covered, but getting people to show up is another story. People have to sign up for a Second Life account, create an avatar, wear appropriate virtual clothing, figure out how to teleport near the venue, walk or fly the remaining distance to the exact building and room, master the sitting-in-a-chair and hold-coffee-and-sip-occasionally process, and pay attention.
Perhaps the best part of Second Life is that if you are not paying attention, your avatar noticeably falls asleep into a hunched-over position, in what is called "afk" (short for Away From Keyboard). On the other hand, if you do need to step away from your desk, you can put your avatar in "afk" mode immediately, tell everyone why and perhaps when you'll be back, and then re-activate when you return. This is one of the best improvements over regular audio conference calls.
I suspect the demand for places in Second Life to hold meetings will keep growing. At a time when real-estate sales in the US are slowing down, Coldwell Banker's Second Life efforts are ramping up. I am not making this up. Coldwell Banker is one of the nation's largest real estate brokerage firms. They are trying to bring the same "adult supervision" to virtual real-estate transactions, offering to help people buy and rent properties in Second Life.
Continuing my week in Chicago for the IBM Storage Symposium 2009, I attended what was, in my opinion, the best session of the week. This was by a guy named Chip Copper, who covered IBM's set of Ethernet and Fibre Channel networking gear. The attributes are the four P's:
Power and Cooling (electricity usage)
Equipment comes in two flavors: Top-of-Rack (ToR) thin pizza-box switches, and Middle-of-Row (MoR) much larger directors. The MoR directors are engineered for up to 50Gbps per half-slot, so 10GbE and the future 40GbE can be easily accommodated in a single half-slot, and the future 100GbE can be done with a full slot (two half-slots).
While many companies might have been contemplating the switch from copper wires to optical fiber, there is a new reason for copper cables: Power-over-Ethernet (PoE). Many IP-phones, digital video surveillance cameras, and other equipment can have a single cable that delivers both signal and electricity over copper. If you have already deployed optical fiber throughout the building, there are "last mile" options where the signals are converted to copper wires and electrical energy added for these types of devices.
Two directors can be connected together with Inter-Chassis Link (ICL) cables to make them look like a single director with twice the number of ports. These are different from Inter-Switch Links (ISLs), as they are not counted as an extra "hop" for hop-counting purposes, which is especially important for FICON usage.
Today, we have 1Gbps, 2Gbps, 4Gbps and 8Gbps Fibre Channel. Since these all use 10-for-8 encoding (10 bits represent one 8-bit byte), it was easy to calculate throughput: 8Gbps was 800 MB/sec, for example. Auto-negotiation between speeds is not done at the HBA card, switch or director blade itself, but in the Small Form-factor Pluggable (SFP) optical connector. However, you can only auto-negotiate if the encoding matches. The 4/2/1 SFP can run at 4Gbps or auto-negotiate down to the slower 2Gbps and 1Gbps. The 8/4/2 SFP can run at 8Gbps, or auto-negotiate down to the slower 4Gbps and 2Gbps. Folks who still have legacy 1Gbps equipment, but want to run some things at 8Gbps, can buy 8Gbps-capable switches or director blades, but then put some 4/2/1 SFPs into them. These 4/2/1 SFPs are cheaper, so this might be something to consider if budgets are tight. Some SFPs handle distances up to 10km, but others only 4km, so be careful not to order the wrong ones.
Unfortunately, there are proposals in place for 10Gbps and 40Gbps that would use a different 66-for-64 encoding (66 bits represent 8 bytes), so 10Gbps would be 1200 MB/sec. These are used today for ISLs between directors and switches. In theory, 40Gbps could auto-negotiate down to 10Gbps, but not to any of the 8/4/2/1 Gbps speeds that use the different 10-for-8 encoding.
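The back-of-the-envelope throughput math above can be sketched in a few lines of Python. This is my own illustration, not any vendor tool; note that the official 1200 MB/sec figure for 10Gbps comes from a slightly higher actual line rate than the nominal 10 Gbps used here:

```python
# Payload throughput for a given line rate and encoding scheme.
# Illustration only: real Fibre Channel line rates are tuned slightly
# so the payload numbers come out even (e.g. 10GFC -> 1200 MB/sec).

def throughput_mb_per_sec(line_rate_gbps, data_bits, total_bits):
    """MB/sec of payload = line rate x (data bits / total bits) / 8 bits per byte."""
    bits_per_sec = line_rate_gbps * 1_000_000_000
    payload_bits_per_sec = bits_per_sec * data_bits / total_bits
    return payload_bits_per_sec / 8 / 1_000_000

# 10-for-8 (8b/10b) encoding: 8 Gbps -> 800 MB/sec, as stated above
print(throughput_mb_per_sec(8, 8, 10))     # 800.0

# 66-for-64 (64b/66b) encoding: a nominal 10 Gbps -> roughly 1212 MB/sec
print(throughput_mb_per_sec(10, 64, 66))
```

The same function also shows why auto-negotiation across encodings is awkward: the payload fraction changes from 8/10 to 64/66, so the speeds are not simple multiples of one another.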
For those who cannot afford a SAN768B, there is a smaller SAN384B that can carry: 192 ports (4Gbps/2Gbps), 128 ports (8Gbps) or 24 ports (10Gbps). The SAN384B can be ICL-connected to another SAN384B, or even to the SAN768B as your needs grow.
On the entry-level side, the SAN24B-4 offers a feature called "Access Gateway". This makes the SAN24B look like a SAN end-point host, rather than a switch, and makes initial deployment of integrated bundled solutions easier. Once connected to everything, you can convert it over to full "switch" mode. The SAN40B-4 and SAN80B-4 provide midrange-level support, including Fibre Channel routing at the 8Gbps level. In fact, all 8Gbps ports include routing capability. IBM offers both single-port and dual-port 8Gbps host bus adapter (HBA) cards to connect to these switches. These HBAs offer 16 virtual channels per port, so that if you have VMware running many guests, or want to connect both disk and tape to the same HBA, you can keep the channel traffic separate for Quality of Service (QoS).
Chip wrapped up his session by discussing Fibre Channel over Ethernet (FCoE), and explained why we need a loss-less Converged Enhanced Ethernet (CEE) to meet the needs of storage traffic as well as traditional Fibre Channel does today. IBM offers all of the equipment you need to get started today on this FCoCEE, with Converged Network Adapter (CNA) cards for your System x servers, and a new SAN32B that has 24 10GbE CEE ports and 8 traditional 8Gbps FC ports. This means that you can put the CNA card in your existing servers, connect to this switch, and then connect to your existing 10GbE LAN and your existing 8Gbps or 4Gbps FC-based SAN to the rest of your storage devices.
Worried that the FCoE or CEE standards could change after you deploy this gear? Aren't most LAN and SAN switches based on Application-Specific Integrated Circuit [ASIC] chips that are fixed in the factory? Don't worry: IBM's equipment puts all the standards-vulnerable portions of the logic into a separate Field-Programmable Gate Array [FPGA] that can be updated with simply a firmware upgrade. This is future-proofing I can agree with!
I had a great weekend, participating in this year's ["World Laughter Day"] yesterday. Preparing for tonight's festivities found me pulling out the various packages from "Simply Dinners" from my freezer.
[Simply Dinners], a Tucson-based company, offers an alternative to restaurant eating. My sister went there, assembled a set of freezer-proof plastic bags containing all the right ingredients based on specific recipes, and gave them to me for my birthday. They have been sitting in my freezer ever since... until last weekend.
My sister was careful to choose items that fit the [Paleolithic Diet] my nutritionist has me on. However, I was skeptical that any plastic bag full of frozen groceries would be any better than anything I could assemble on my own. I did, after all, attend "chef school" and do know how to cook well. Each package was intended to be a "dinner for two", but since I am single, each was two meals for me.
So, I decided to try them out, which would also give me more room in my freezer for incoming items, and they came out very well. The outside of each plastic bag had a label that explained all the steps required to heat the food. Partially-cooked vegetables were wrapped in foil, and went in for the last 10 minutes of cooking the meat. The process was straightforward, and the meals were delicious, but nothing I could not have done on my own with a recipe and a trip to the grocery store.
The question is whether someone with little or no skills could achieve similar, or acceptable, results. I have friends who are limited to assembling sandwiches from luncheon meats and cheese slices, as anything involving heat other than simply boiling water is beyond their skills.
The key difference between "cooking for yourself" and "building your own storage" is that you aren't building storage for just yourself. Unless you are a one-person SMB company, you are building storage that all of your employees and managers count on to do their jobs, and by extension your customers and stockholders count on.
Of course I had to read responses from others before jumping in with my thoughts. Dave Raffo from Storage Soup writes [Sun going down in storage], feeling this is yet another indication that Sun has lost their mind, recounting previous events that support that theory. EMC blogger Mark Twomey, in his StorageZilla post [When Open Isn't], felt a little bit guilty kicking a competitor when down. EMC blogger Chuck Hollis questions the reasons people might be tempted to even try this in his post [Do-it-Yourself Storage]. Here's an excerpt:
I really, really struggle with this concept, I do. Here's why:
Anything I use and get comfortable with -- well, I'm "locked in" to a certain degree. If I use a lot of storage software X; well, I'm sorta locked in, aren't I? Or, if I put my servers-as-storage on a three-year lease, I'm kind of locked in, aren't I?
(For EMC, vendor lock-in is great when customers are using and comfortable with EMC products, and awful when they use and are comfortable with storage from someone else. But nobody who is "comfortable" with what they have ever complains about "vendor lock-in", do they? It's the ones who are growing uncomfortable and feel trapped who do. How involved a company's use of EMC's proprietary interfaces is can greatly determine the obstacles in switching to a different vendor. Of course, if you count yourself as someone growing uncomfortable with your existing storage vendor, IBM can help you fix that problem, but that is a subject for another post.)
Worried about "vendor lock-in"? Try "admin lock-in", where you must keep a storage admin around because he or she was the one that put your storage together. I've seen several companies held hostage by their system admins for home-grown scripts that serve as "duct tape for the enterprise". The other issue is whether you have storage admins with the hardware and software engineering skills needed to put suitable storage together. There are some very smart storage admins I know who could, and others that would have a difficult time with this.
No doubt this is promising for the home office. I myself have taken several PCs that were running older versions of Windows, but not powerful enough to upgrade to Windows Vista, wiped them clean, loaded Linux, and configured them as everything from simple browser workstations to full LAMP application servers. While this might sound easy, I am a professional hardware and software engineer with Linux skills. I have no doubt that someone with sufficient engineering and Solaris skills could put together a storage system for home use.
One area where Sun definitely benefits from this "Open Storage" approach is in developing Solaris skills. I have no personal experience with OpenSolaris, but assume that if you learn it, you would be able to switch over to full Solaris quite easily. Today, most people enter the workforce with Windows, Linux and/or MacOS skills, and this could be Sun's way of getting fresh new faces who understand Solaris commands to replace retiring "baby boomers". The lack of Solaris-knowledgeable admins is perhaps one reason why companies are switching to IBM AIX, Linux or Windows in their data centers.
Certainly, IBM's strategic choice to support Linux has been a great success. People learn Linux on their home systems and at school, and are able to carry those skills to Linux running on everything from the smallest IBM blade server to IBM's biggest mainframe.
The videos on Sun for the "recipes" on how to put together various "storage configurations in ten minutes" appear simpler than last summer's "How to hack an Apple iPhone to switch away from AT&T" procedures.
Continuing my summary of Pulse 2008, the premiere service managementconference focusing on IBM Tivoli solutions, I attended and presentedbreakout sessions on Monday afternoon.
Tivoli Storage "State-of-the-Subgroup" update
Kelly Beavers, IBM director of Tivoli Storage, presented the first breakout for all of the Tivoli Storage subgroup. Tivoli has several subgroups, but Tivoli Storage leads all the others in revenues and profits. Tivoli Storage has the top performing business partner channel of any subgroup in IBM's Software Group division. IBM is the world's #1 storage vendor (hardware, software and services), so this came as no surprise to most of the audience.
Looking at just the Storage Software segment, it is estimated that customers will spend $3.5 billion US dollars more in the year 2011 than they did last year in 2007. IBM is #2 or #3 in each of the four major categories: Data Protection, Replication, Infrastructure management, and Resource management. In each category, IBM is growing market share, often taking away share from the established leaders.
There was a lot of excitement over the FilesX acquisition. I am still trying to learn more about this, but what I have gathered so far is that it can:
Like turning a "knob", adjust the level of backup protection from traditional discrete scheduled backups, to more frequent snapshots, to continuous data protection (CDP). In the past, you often used separate products or features to do these three.
Perform "instantaneous restore" by performing a virtual mount of the backup copy. This gives the appearance that the restore is complete.
This year marks the 15th anniversary of IBM Tivoli Storage Manager (TSM), with over 20,000 customers. Also, this year marks the 6th year for IBM SAN Volume Controller, having sold over 12,000 SVC engines to over 4,000 customers.
Data Protection Strategies
Greg Tevis, IBM software architect for Tivoli Technical Strategy, and I presented this overview of data protection. We covered three key areas:
Protecting against unethical tampering with Non-erasable, Non-rewriteable (NENR) storage solutions
Protecting against unauthorized access with encryption on disk and tape
Protecting against unexpected loss or corruption with the seven "Business Continuity" tiers
There was so much interest in the first two topics that we only had about 9 minutes left to cover the third! Fortunately, Business Continuity will be covered in more detail throughout the week.
Henk de Ruiter from ABN Amro bank presented his success story implementing Information Lifecycle Management (ILM) across his various data centers using IBM systems, software and services.
Making your Disk Systems more Efficient and Flexible
I did not come up with the titles of these presentations. The team that did specifically chose to focus on the "business value" rather than the "products and services" being presented. In this session, Dave Merbach, IBM software architect, and I presented how SAN Volume Controller (SVC), TotalStorage Productivity Center, System Storage Productivity Center, Tivoli Provisioning Manager and Tivoli Storage Process Manager work to make your disk storage more efficient and flexible.
I am no stranger to the Sugar learning platform, developed as part of the One Laptop per Child [OLPC] project.
As I mentioned in my post [Helping Young Students - part 1], I was part of the OLPC development team back in 2008, helped local volunteers deploy laptops to children in Nepal and Uruguay, mentored a college student in India, and learned a lot of Python programming language in the process.
Sugar is now developed by Sugar Labs, a nonprofit spin-off of OLPC. The code is a free and open-source desktop environment for many other machines, including a "Desktop Environment" option for Fedora Linux.
I kept my 40GB hard drive partitioned as follows. On the extended partition, sda5 will hold my system utilities, like Clonezilla and SystemRescue, and sda6 is my swap space, increased to 1500MB. Partition sda1 has Edubuntu 12.04 on it, and I will use sda2 to install Fedora with Sugar.
[Sugar-on-a-stick] is so named because it is designed so that each child has their own LiveUSB. This can run on a PC with Windows or on a Mac without affecting those operating systems, allowing a child to use Sugar in the classroom, then take the stick home and continue on their home PC.
A 2GB or larger USB memory stick can hold both Fedora and Sugar, and you can boot your desktop from it. Unfortunately, this requires 1GB of RAM, and I have only 512MB.
Can I just run Sugar natively on a Fedora install? Yes, thanks to the [Sugar not "on a stick"] instructions, I can install Fedora first, then add Sugar on top. Fedora offers three installation methods:
Fedora Desktop Edition - this is a LiveCD that requires 1GB RAM.
Fedora Network Install - this is a bootable CD that then uses the Internet to download the rest of the files required. Use this if you (a) have a fast Internet connection, or (b) do not have a DVD drive on your system.
Fedora Install DVD - this has all the software on the DVD itself.
I chose method 3 and downloaded the appropriate ISO file. While F17 only requires 512MB of RAM to run, the graphical installer requires 768MB. The process is fully explained in this [29-step F17 installation guide].
To get around this, select "Troubleshooting" which then lets you select low-graphics/text mode installation that ran well under 512MB. I installed both LXDE and Sugar, and everything worked fine!
Why both LXDE and Sugar? Well, Sugar is quite a different environment, and I wanted LXDE as an alternative for the admin and teacher to use.
"Unlike most other desktop environments, Sugar does not use the 'desktop', 'folder' and 'window' metaphors. Instead, Sugar's default full-screen activities require users to focus on only one program at a time. Sugar implements a novel file-handling metaphor (the Journal), which automatically saves the user's running program session and allows him or her to later use an interface to pull up their past works by date, activity used or file type."
Clean install of F18
Fedora Upgrader tool (FedUp) command line interface
Yum upgrade
Fedora upgrade script
As you can probably guess from the title of this post, I chose method 2 "FedUp" as it seemed to be the least invasive. I was unsure if method 1 "Clean Install" of F18 would work with 512MB of RAM, and I have been through enough horrors of failed yum upgrades on my own Red Hat Enterprise Linux [RHEL] at work to avoid method 3. Method 4 is just a script to automate the steps of method 3.
The steps are fairly straightforward. First, install the FedUp package, run "yum update" to ensure you have all the latest kernel and F17 packages for everything else, and reboot.
Then run the fedup-cli command, which upgrades all the packages to F18 level and creates a special kernel level that will then finish the install after the second reboot. It took a while, so I let it run unattended. I put the debug log on partition sda5 in case anything went wrong.
What could go wrong? Well, it turns out that FedUp works by updating the Grub2 boot loader configuration, but my Grub2 resides on the sda1 partition instead, owned by my existing Edubuntu. The reboot did not give me the option to run the specialized kernel to finish the process.
Fixing this was a hot mess, but I managed to configure Grub2 on Fedora, and complete the upgrade and get everything working as before. However, even though it just came out last year, [F18 version is already out of support]! This means I get a second chance to do FedUp, this time to F19 release. Oh boy! Fun!
While the second time went smoother, the problem was that F19 doesn't seem to run well in 512MB of RAM, and chances are F20 won't either.
So what have I learned from this?
Fedora is fully supported, has been around over 10 years, with a vibrant and helpful community.
Sugar is designed for kids, so adding a traditional desktop environment like XFCE or LXDE can be useful for administrator or teacher.
Offering multiple Linux versions in a dual-boot or triple-boot approach may complicate the Grub2 loader configuration and maintenance.
Fedora's "rolling upgrade" approach means that someone will need to consider upgrading to later versions at least every school year or semester to maintain support. Running fedup-cli or any of the other upgrade methods may be too complicated for your average teacher.
If you have any experience with Fedora or Sugar in the classroom, comment below!
Back in June, I mentioned this blog was [Moving to MyDeveloperWorks] which is based on IBM Lotus Connections.
Finally, the move is complete for all bloggers. If you are having problems with the redirects, you might need to unsubscribe and re-subscribe in your RSS feed reader. Here are the new links for several IBM bloggers that have moved over:
Kevin's perspective focused on the evolution over the past 100 years of "information science", in six chapters: sensing, memory, processing, logic, connecting, and architecture. He covers the technology from IBM Punched Cards and core memory, to the latest optical chips and the DeepQA technology in IBM Watson.
Steve's perspective was on IBM as a corporation, and how IBM and other corporations have evolved over the past century. In the late 19th century and early 20th century, "Internationals" had their headquarters in the United States, and regional sales and distribution offices elsewhere. The mid-20th century gave rise to "Multinationals" that invested more heavily in regional headquarters scattered across the globe. Today, in the 21st century, IBM and its clients are [Globally Integrated Enterprises] that move work to the lowest costs, best skills, and most attractive business climates.
Jeffrey M. O'Brien
Jeffrey M. O'Brien has been a senior editor at [Fortune] and [Wired] magazines, and his work has appeared in The Best of Technology Writing, The Best American Science and Nature Writing, and The Best American Science Writing.
Jeffrey's perspective is on the impact technology has on humanity, organized into five steps towards progress: Seeing, Mapping, Understanding, Believing, and Acting. These steps have been around long before IBM, and Jeffrey is able to draw parallels to such efforts as Lewis & Clark mapping out the Louisiana Purchase, advancements in genetically modified foods, and the thousands of IBMers required to land a man on the moon.
This afternoon, everyone at the IBM Tucson site will be getting together to celebrate IBM's Centennial!
Wrapping up my coverage of the 2013 IT Security and Storage Expo in Belgium, I noticed some interesting things in the other booths.
The EMC booth had a whiteboard so that clients could do some one-on-one collaboration. All of their cocktail waitresses were wearing sharp pin-stripe coats with matching mini-skirts.
Another booth had a "virtual graffiti wall". Using a "digital spraycan", you could write on the wall. I am not sure what connection this had with anything the company had to offer, but perhaps they also wanted to collaborate with attendees on solutions. In either case, it was very cool, and brought a lot of traffic.
(FTC Disclosure: I work for IBM. I was not paid to mention any of the other companies, their products or people on this blog post. Mentioning other companies is not to be considered an endorsement of any kind.)
There were some interesting costumes. Leila from [Aerohive] wore a "bee costume" complete with black wings. Hans from STS wore a bright orange business suit. (Orange is the national color of Belgium.) Sophie from Fortinet handed out champagne. The plastic glasses were cones that snapped onto her tray, but they had no flat bottom to set your glass down on, so you had to hold it until you finished drinking. A sticker of Homer Simpson eating the Apple logo shows the Belgians have a sense of humor!
The NetApp booth had a huge banner claiming that "Data OnTap" was the #1 storage OS. Obviously Windows, AIX, Solaris and Linux aren't considered "storage Operating Systems" per se. Is NetApp claiming they outsell FreeNAS, the only other storage OS that I can think of?
While IBM and I.R.I.S-ICT easily won the "Best Looking Big Booth" award, I have to give the "Best Looking Small Booth" award to my friends at Hitachi Data Systems. Like EMC, the Hitachi team did not have any equipment on the floor, but they made use of their tiny space by having a Japanese theme, with cocktail waitresses in kimonos.
Perhaps E.A.R.T.H. could stand for IBM's "Energy-efficient Archive, Retention, Tape and Hybrid" storage offerings, which, combined, had double-digit percent growth in Petabytes shipped (1Q10 versus 1Q09). This helped IBM gain market share. Last week's LTO-5 announcement was made at [NAB Show 2010] by the National Association of Broadcasters. Why? Because many digital media and entertainment people at this conference are interested in getting off "analog video". LTO-5 is 20 times cheaper than the professional versions of the BetaMax or VHS tape currently used. So while many are trying to go "tape-less" by switching to disk, like the IBM DCS9900, they are finding that perhaps LTO-5 tape might be the better alternative. A key advantage of LTO-5 is that the cartridges can now be used like DVD-RW or USB thumb drives, with drag-and-drop file capability using the new Long Term File System (LTFS) on the LTO-5 cartridges. This earned a "Pick Hit" at the conference.
Overall, IBM storage revenues grew double digits, which leads me to believe that the worst of the financial melt-down is over, at least from an IT industry perspective. To learn more, see [IBM 1Q10 Financial Results].
It's Thursday at the [Data Center Conference] here in Las Vegas. Trying to keep up with all the sessions and activities has been quite challenging. As is often the case, there are more sessions that I want to attend than I am physically able to, so I have to pick and choose.
Making the Green Data Center a Reality
The sixth and final keynote was an expert panel session, with Mark Bramfitt from Pacific Gas and Electric [PG&E], and Mark Thiele from VMware.
Mark explained PG&E's incentive program to help data centers be more energy efficient. They have spent $7 million US dollars on this so far, and he has requested another $50 million US dollars over the next three years. One idea was to put "shells" around each pod of 28 or so cabinets to funnel the hot air up to the ceiling, rather than having the hot air warm up the rest of the cold air supply.
The fundamental disconnect for a "green" data center is that the Facilities team pays for the electricity, but it is the IT department that makes the decisions that impact its use. The PG&E rebates reward IT departments for making better decisions. The best metric available is "Power Usage Effectiveness" or [PUE], calculated by dividing the total energy consumed by the data center by the energy consumed by the IT equipment itself. A typical PUE runs around 3.0, which means that for every Watt used for servers, storage or network switches, another 2 Watts are used for power, cooling, and facilities. Companies are trying to reduce their PUE down to 1.6 or so. The lower the better, and 1.0 is the ideal. The problem is that changing the data center infrastructure is as difficult as replacing the phone system or your primary ERP application.
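The PUE arithmetic is simple enough to sketch in a few lines; the kWh figures below are made-up illustrations, not measurements from PG&E or anyone else:

```python
# Power Usage Effectiveness (PUE): total facility energy divided by
# the energy consumed by the IT equipment itself. Lower is better; 1.0 is ideal.

def pue(total_facility_kwh, it_equipment_kwh):
    return total_facility_kwh / it_equipment_kwh

# A typical site: 3 kWh total for every 1 kWh of IT load -> PUE of 3.0,
# meaning 2 extra Watts of power/cooling overhead per Watt of IT gear
print(pue(300_000, 100_000))   # 3.0

# A site that has trimmed its overhead down toward the 1.6 target
print(pue(160_000, 100_000))   # 1.6
```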
While California has [Title 24], stating energy efficiency standards for both residential and commercial buildings, it does not apply to data centers. PG&E is working to add data center standards into this legislation.
The two speakers also covered Data Center [bogeymans], unsubstantiated myths that prevent IT departments from doing the right thing. Here are a few examples:
Power cycles - some people believe that x86 servers can typically only handle up to 3000 shutdowns, and so equipment is often left running 24 hours a day to minimize these. Most equipment is kept less than 5 years (1826 days), so turning off non-essential equipment at night, and powering it back on the next morning, is well below this 3000 limit and can greatly reduce kWh.
Dust - many are so concerned about dust that they run extra air filters, which impacts the efficiency of the cooling system's air flow. New IT equipment tolerates dust much better than older equipment.
Humidity - Mark had a great story on this one. He said their "de-humidifier" broke, and they never got around to fixing it. They went years without it, realizing they didn't need to de-humidify at all.
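A quick sanity check of the power-cycle bogeyman above (my own arithmetic; the 3000-cycle figure is the rumor being debunked, and the 400 W draw is a hypothetical server):

```python
# Even one shutdown per day over a 5-year service life stays well
# under the rumored 3000-cycle limit, and the kWh savings add up.

SERVICE_LIFE_DAYS = 5 * 365 + 1     # about 1826 days, as cited above
RUMORED_CYCLE_LIMIT = 3000          # the unsubstantiated "bogeyman" figure

cycles = SERVICE_LIFE_DAYS          # one power cycle per day
print(cycles, cycles < RUMORED_CYCLE_LIMIT)   # 1826 True

WATTS = 400                         # hypothetical server draw
HOURS_OFF_PER_NIGHT = 12
kwh_saved = WATTS * HOURS_OFF_PER_NIGHT * SERVICE_LIFE_DAYS / 1000
print(round(kwh_saved, 1))          # 8764.8 kWh saved over the server's life
```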
The session wrapped up with some "low hanging fruit", items that can provide immediate benefit with little effort:
Cold-aisle containment--Why are so few data centers doing this?
Colocation providers need to meter individual clients' energy usage -- IBM offers the instrumentation and software to make this possible
Air flow management--Simply organizing cables under the floor tiles could help this.
Virtualization and Consolidation.
High-efficiency power supplies
Managing IT from a Business Service Perspective
The "other" future of the data center is to manage it as a set of integrated IT services, rather than a collection of servers, storage and switches. The IT Infrastructure Library (ITIL) is widely accepted as a set of best practices for this "service management" approach. The presenter from ASG Software Solutions presented their Configuration Management Data Base (CMDB) and application dependency dashboard. They have some customers with as many as 200,000 configuration items (CIs) in their CMDB.
The solution looked similar to the IBM Tivoli software stack presented earlier this year at the [Pulse conference]. Both ASG and IBM "eat their own dog food", or perhaps more accurately "drink their own champagne", using these software products to run their own internal IT operations.
For many, the future of a "green" data center managed as a set of integrated services is years away, but the technologies and products are available today, and there is no reason to postpone these projects any longer than necessary. For more about IBM's approach to the green data center, see [Energy Efficiency Solutions]. You can also take IBM's [IT Service Management self-assessment] to help determine which IBM tools you need for your situation.
The "Storage Symposium Mexico - 2008" conference was a great success this week!
Day 1 - The plan was for me to arrive for the Wednesday night reception. Each attendee was given a copy of my latest book [Inside System Storage: Volume I], and I was planning to sign them. I thought perhaps we should have a "book signing" table like all of the other published authors have.
Things didn't go according to plan. Thunderstorms at the Mexico City airport forced our pilot to find an alternate airport. Nearby Acapulco airport was the logical choice, but was full from all the other flights, so the plane ended up in a tiny town called McAllen, Texas. I did not arrive until the morning of Day 2, so I ended up signing the books throughout Thursday and Friday, during breaks and meals, wherever they could find me!
Special thanks to fellow IBMer Ian Henderson who picked me up from the airport at such an awkward hour and drove me all the way to Cuernavaca!
All of us, IBMers, Business Partners and clients alike, donned black tee-shirts with a white eightbar logo for a group photo with one of those "wide lens" cameras. While we were being assembled onto the bleachers, I took this quick snapshot of myself and some of the guys behind me.
I was originally scheduled to be first to speak, but with my flight delays, was moved to a time slot after lunch. After a big Mexican lunch, the conference coordinators were afraid the attendees might fall asleep, a Mexican tradition called [siesta], so I was instructed to WAKE THEM UP! Fortunately, my topic was Information Lifecycle Management, a topic I am very passionate about, since my days working on DFSMS on the mainframe. With 30 percent reduction in hardware capital expenditures, 30 percent reduction in operational costs, and typical payback periods between 15 to 24 months, the presentation got everyone's attention.
Of course, a lot happens outside of the formal meetings. We had a Japanese theme dinner, where we wore Japanese Hachimaki [headbands] with the eightbar logo. For those not familiar with Japanese culture, hachimaki are worn today not so much for the practical purpose of catching perspiration, but rather for mental stimulation, to express one's determination. Some students wear hachimaki when they study to put themselves in the right spirit and frame of mind.
Shown here are presenters Mike Griese (Infrastructure Management with IBM TotalStorage Productivity Center),Dave Larimer (Backup and Storage Management with IBM Tivoli Storage Manager), myself, and John Hamano(Unified Storage with IBM System Storage N series).
Day 3 - Wrapping up the week, I presented two more times.
First, I covered IBM Disk Virtualization with IBM SAN Volume Controller. One interesting question was whether the SAN Volume Controller could be made to look like a Virtual Tape Library. I explained that this was never part of the original design, but that if you want to combine SVC with a VTL into a combined disk-and-tape blended solution, consider using the IBM product called Scale-Out File Services [SoFS], which I covered in my post [More details about IBM clustered scalable NAS].
During one of the breaks, I took a picture of the behind-the-scenes staff that put this together. They had created these huge blocks representing puzzle pieces, emphasizing how IBM is one of the few IT vendors that can bring all the pieces together for a complete solution.
Shown here are Mike Griese (presenter), Cyntia Martinez, Claudia Aviles, Cesar Campos (IBM Business Unit Executive for System Storage in Mexico), and Claudia Lopez. Each day the staff wore matching shirts so that it was easy to find them.
Later, I covered Archive and Compliance Solutions to highlight our complete end-to-end set of solutions. When asked to compare and contrast the architectures of the IBM System Storage DR550 with EMC Centera, I explained that the DR550 optimizes the use of online disk access for the most recent data. For example, if you are going to keep data for 10 years, maybe you keep the most recent 12 months on disk, and the rest is moved, using policy-based automation, to a tape library for the remaining nine years. This means that the disk inside the DR550 is always being used to read and write the most recent data, the data you are most likely to retrieve from an archive system. Data older than a year is still accessible, but might take a minute or two for the tape library robot to fetch. The EMC Centera, on the other hand, is a disk-only solution. It offers no option to move older data to tape, nor the option to spin down the drives to conserve power. It fills up after the same 12 months or so, and then you get to watch it for the remaining nine years, consuming electricity and heating your data center.
I don't know about you, but I have never seen anyone purposely put "space heaters" into their data center, but certainly a full EMC Centera does little else. Both devices use SATA drives and support disk mirroring between locations, but the IBM DR550 offers dual-parity RAID-6, and supports encryption of the data on both the disk and the tape in the DR550. EMC Centera still uses only RAID-5, and has not yet, as far as I know, offered any level of encryption. IBM System Storage DR550 was clocked at about three times faster than Centera at ingesting new archive objects over a 1GbE Ethernet connection.
This last photo is me and fellow IBMer Adriana Mondragón. She was one of my students in the [System Storage Portfolio Top Gun class], last February in Guadalajara, Mexico. She graduated in the top 10 percent of her group, earning her the prestigious title of "Top Gun" storage sales specialist.
The conference wrapped up with a Mexican lunch with a traditional Mariachi band. I took pictures, but figured you all already know what [Mariachi players] look like, and I didn't want to detract from the otherwise serious tone of this blog post! This was the first System Storage Symposium in Mexico, but based on its success, we might continue these annually.
Continuing my quest to "set the record straight" about [IBM XIV Storage System] and IBM's other products, I find myself amused at some of the FUD out there. Some are almost as absurd as the following analogy:
Humans share over 50 percent of DNA with bananas. [source]
If you peel a banana, and put the slippery skin down on the sidewalk outside your office building, it could pose a risk to your employees.
If you peel a human, the human skin placed on the sidewalk in a similar manner might also pose similar risks.
Mr. Jones, who applied for the opening in your storage administration team, is a human being.
You wouldn't hire a banana to manage your storage, would you? This might be too risky!
The conclusion we are led to believe is that hiring Mr. Jones, a human being, is as risky as putting a banana peel down on the sidewalk. Some bloggers argue that they are merely making a series of factual observations, and letting their readers form their own conclusions. For example, the IBM XIV storage system has ECC-protected mirrored cache writes. Some false claims about this were [properly retracted] using strike-out font to show the correction made; other times the same statement appears in another post from the same blogger that [has not yet been retracted] (Update: has now been corrected). Other bloggers borrow the false statement [for their own blog], perhaps not realizing the retractions were made elsewhere. Newspapers are unable to fix a previous edition, so are forced to publish retractions in future papers. With blogs, you can edit the original and post the changed version, annotated accordingly, so mistakes can be corrected quickly.
While it is possible to compare bananas and humans on a variety of metrics--weight, height, and dare I say it, caloric value--it misses the finer points of what makes them different. Humans might share 98 percent with chimpanzees, but having an opposable thumb allows humans to do things that chimpanzees and other animals cannot.
Full Disclosure: I am neither vegetarian nor cannibal, and harbor no ill will toward bananas nor chimpanzees. No bananas or chimpanzees were harmed in the writing of this blog post. Any similarity between the fictitious Mr. Jones in the above analogy and actual persons, living or dead, is purely coincidental.
So let's take a look at some of IBM XIV Storage System's "opposable thumbs".
The IBM XIV system comes pre-formatted and ready to use. You don't have to spend weeks in meetings deciding betweendifferent RAID levels and then formatting different RAID ranks to match those decisions. Instead, you can start using the storage on the IBM XIV Storage System right away.
The IBM XIV offers consistent performance, balancing I/O evenly across all disk drive modules, even when performing SnapShot processing, or recovering from component failure. You don't have to try to separate data to prevent one workload from stealing bandwidth from another. You don't have to purchase extra software to determine where the "hot spots" are on the disk. You don't have to buy other software to help re-locate and re-separate the data to re-balance the I/Os. Instead, you just enjoy consistent performance.
The IBM XIV offers thin provisioning, allowing LUNs to grow as needed to accommodate business needs. You don't have to estimate or over-allocate space for planned future projects. You don't have to monitor if a LUN is reaching 80 or 90 percent full. You don't have to carve larger and larger LUNs and schedule time on the weekends to move the data over to these new bigger spaces. Instead, you just write to the disk, monitoring the box as a whole, rather than individual LUNs.
The IBM XIV Storage System's innovative RAID-X design allows drives to be replaced with drives of any larger or smaller capacity. You don't have to find the exact same 73GB 10K RPM drive to match the existing 73GB 10K RPM drive that failed. Some RAID systems allow "larger than original" substitutions, for example a 146GB drive to replace a 73GB drive, but the added capacity is wasted, because of the way most RAID levels work. The problem is that many failures happen 3-5 years out, and disk manufacturers move on to bigger capacities and different form factors, making it sometimes difficult to find an exact replacement or forcing customers to keep their own stock of spare drives. Instead, with the IBM XIV architecture, you sleep well at night, knowing it allows future drive capacities to act as replacements, and you get the full value and usage of that capacity.
In the case of IBM XIV Storage System, it is not clear whether
"Vendors" are those from IBM and IBM Business Partners, including bloggers like me employed by IBM,and "everybody else" includes IBM's immediate competitors, including bloggers employed by them.
-- or --
"Vendors" includes IBM and its competitors including any bloggers, so that "everybody else" refers instead to anyone not selling storage systems, but opinionated enough to not qualify as "objective third-party sources".
-- or --
"Vendors" includes official statements from IBM and its competitors, and "everybody else" refers to bloggers presenting their own personal or professional opinions, which may or may not correspond to those of their employers.
That said, feel free to comment below on which of these you think the last two points of Steinhardt's rule are trying to capture. Certainly, I can't argue with the top two: a customer's own experience and the experiences of other customers, which I mentioned previously in my post [Deceptively Delicious].
In that light, here is a 5-minute video on IBM TV with a customer testimonial from the good folks at [NaviSite], one of our many customer references for the IBM XIV Storage System.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
(Yes, OK, it's actually Thursday. I wrote this post weeks ago, but was embargoed until Jan 10, and then was asked to wait until Jan 12 so that the IBM Marketing team could translate my text into 15 different languages.)
This week, the IBM DS8000 team announces a new High Performance Flash Enclosure (HPFE-Gen2) and a series of All-Flash Array DS8880F models that exploit this new technology.
New High Performance Flash Enclosure (HPFE-Gen2)
The original HPFE was 1U high with 16 or 30 flash cards, and could support RAID-5 or RAID-10. Most used RAID-5, resulting in four array sites of 6+P each, leaving two cards for spare. These 1.8-inch cards were only 400 or 800 GB in size, so the maximum raw capacity was only 24TB per 1U enclosure.
The new HPFE-Gen2 enclosure is a complete re-design, consisting of two Microbays and two TeraPacks. The I/O Bays attach to the Microbays via PCIe Gen3. The Microbays in turn attach to both TeraPacks via redundant 6 Gb or 12 Gb SAS.
Each TeraPack holds 24 flash cards. Since the TeraPacks come in pairs, you can install 16, 32 or 48 flash cards per enclosure. Each 16-card set represents two array sites, for a maximum of six array sites per HPFE-Gen2.
RAID-5 for 400/800 GB. Two 6+P arrays, four 7+P arrays, and two spares.
RAID-6 for 400/800/1600/3200 GB. Two 5+P+Q arrays, four 6+P+Q arrays, and two spares.
RAID-10 for 400/800/1600/3200 GB. Two 3+3 arrays, four 4+4 arrays, and four spares.
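As a quick sanity check, the card counts in the three fully-populated layouts above all add up to the 48-card maximum. Here is a minimal sketch of that arithmetic (my own tally, not an IBM configuration tool):

```python
# Card counts for a fully-populated (48-card) HPFE-Gen2, per the layouts above.
# Each array width is data drives plus parity/mirror drives.
layouts = {
    "RAID-5":  {"arrays": [7] * 2 + [8] * 4, "spares": 2},  # two 6+P, four 7+P
    "RAID-6":  {"arrays": [7] * 2 + [8] * 4, "spares": 2},  # two 5+P+Q, four 6+P+Q
    "RAID-10": {"arrays": [6] * 2 + [8] * 4, "spares": 4},  # two 3+3, four 4+4
}

for level, cfg in layouts.items():
    total = sum(cfg["arrays"]) + cfg["spares"]
    print(f"{level}: {total} flash cards")  # each prints 48
```

Every layout accounts for exactly 48 cards, which matches the two-TeraPack maximum.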
(Technically, these new "Flash cards" are 2.5-inch Solid State Drives (SSD) placed into the HPFE Gen2 connected to the PCIe Gen3 interface, with 50 percent additional capacity to tolerate up to 10 drive-writes-per-day (DWPD). IBM will continue to call them "Flash Cards" for naming consistency between the two generations of HPFE.)
The new HPFE-Gen2 enclosures are substantially faster, offering up to 90 percent more IOPS, and up to 268 percent more throughput (GB/sec). The Microbays use a new flash-optimized ASIC to perform the RAID calculations.
New All-Flash Array DS8880F models
IBM introduces the DS8884F, DS8886F and DS8888F that are based entirely on the HPFE-Gen2 enclosures described above.
DS8884 - Hybrid - HDD/SSD/HPFE mix
DS8886 - Hybrid - HDD/SSD/HPFE mix
DS8888 - AFA - HPFE only
DS8884F - AFA - HPFE-Gen2 only
DS8886F - AFA - HPFE-Gen2 only
DS8888F - AFA - HPFE-Gen2 only
New zHyperLink connection
Also, as a "Statement of Direction", IBM intends to deliver field upgradable support for zHyperLink on existing IBM System Storage DS8880 machines for connection to z System servers. zHyperLink is a short-distance, mainframe-attach link designed for lower latency than High Performance FICON.
Typical latency with FICON/zHPF is around 140-170 microseconds, and this new zHyperLink is estimated to reduce this down to 20-30 microseconds, but is limited to 150 meter fiber optic cable distance. zHyperLink is intended to speed up DB2® for z/OS® transaction processing and improve active log throughput.
I am still in the black-out period waiting for IBM to announce its results, so I will continue last week's theme on [New Year's Resolutions] to Eat Less and Exercise More.
(Note: I am neither a medical doctor nor registered dietician. I can share with you ideas that have worked for me, that might help you achieve your goals. I strongly suggest you read books and consult with medical experts as necessary.)
Take, for example, this group of fruits and vegetables. This is my week's haul from my local food co-op [Bountiful Baskets]: Avocados, Papayas, Potatoes, Strawberries, Grape Tomatoes, Oranges, Apples, Carrots, and Lemons.
So how many grams of carbs, fat and protein are in this set? This has 1,026 grams of carbs, 78 grams of fat, and 99 grams of protein, for a total of 4,875 calories.
On my diet, I am trying to have at least 90 grams of protein, but less than 150 grams of carbs, per day. While the fruits and veggies represent a full week's worth of carbs for me, it is only one day's worth of protein.
"Most adults would benefit from eating more than the recommended daily intake of 56 grams, says Donald Layman, Ph.D., a professor emeritus of nutrition at the University of Illinois. The benefit goes beyond muscles, he says: Protein dulls hunger and can help prevent obesity, diabetes, and heart disease.
Now, if you're trying to lose weight, protein is still crucial. The fewer calories you consume, the more calories should come from protein, says Layman. You need to boost your protein intake to between 0.45 and 0.68 gram per pound to preserve calorie-burning muscle mass."
For men who weigh between 135 and 200 pounds, like me, the 90 grams of protein is within this guideline.
To lose weight, I need to eat fewer carbs than my body requires. Here is an excerpt from Paul Jaminet on [Perfect Health Diet]:
"So the body's net glucose needs are on the order of 600 to 800 calories per day.
For most people, we suggest 400 to 600 carb calories per day, about 200 less than the body utilizes. The remainder is made up by gluconeogenesis -- manufacture of glucose from protein."
Since carbs are 4 calories per gram, then 400-600 calories equates to 100-150 grams of carbs per day.
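The two guidelines above boil down to simple arithmetic, which can be scripted. A minimal sketch (the 180-pound body weight is a hypothetical example, and this is diet bookkeeping, not medical advice):

```python
# Convert the published guidelines into daily gram targets.
CAL_PER_GRAM_CARB = 4  # standard conversion factor for carbohydrates

# Layman's guideline: 0.45-0.68 grams of protein per pound of body weight.
weight_lb = 180  # hypothetical body weight
protein_low = round(0.45 * weight_lb)   # 81 g
protein_high = round(0.68 * weight_lb)  # 122 g

# Jaminet's guideline: 400-600 carb calories per day, converted to grams.
carb_low = 400 // CAL_PER_GRAM_CARB   # 100 g
carb_high = 600 // CAL_PER_GRAM_CARB  # 150 g

print(f"protein: {protein_low}-{protein_high} g/day, carbs: {carb_low}-{carb_high} g/day")
```

For a 180-pound person this gives 81-122 grams of protein and 100-150 grams of carbs per day, which is consistent with my own 90-gram protein and 150-gram carb targets.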
On some days, I eat less than 100 grams of carbs, but I would rather err on the low side than the high side over 150 grams.
Tracking your Dietary Intake
It is not always easy to estimate the amount of carbs, fat and protein at any given meal.
If you want to stay within the guidelines above, at least initially to get started on your new diet, track your dietary intake. If you have a smartphone, there are apps that can take the guesswork out of eating.
For my Android-based phone, I use [Calorie Counter] by FatSecret. I can enter the foods that I eat at each meal, whether I am at home, at work, or eating out at a restaurant. It can help me decide between one choice and another, for example, or just let me know if I had enough for the day, or need to keep eating.
Here is a typical day. Notice that I had over 90 grams of Protein, but less than 150 grams of carbs.
Many restaurants now accommodate the low-carb, gluten-free diet. At Romano's Macaroni Grill, I asked them to substitute the pasta for some veggie, and they came out with grilled chicken and sautéed spinach with garlic. It was delicious!
At many hamburger places, you can ask for your burger "low-carb" or "protein-style" so that they replace the bun with lettuce leaves. You can eat this with your hands, or with fork and knife.
When I was in chef school, I learned what needed to be measured precisely, and what didn't. Over time, as you track your diet, you will find that you will be able to estimate the amount of each food item.
(FTC Disclosure: I work for IBM, and am a volunteer member of Bountiful Baskets co-op. I have no financial interest in, nor have I been paid to promote, any of the other companies or their products mentioned on this blog post.)
If you have come up with your own unique ways of meeting your dietary requirements and/or tracking your dietary intake, please post in the comments below!
Continuing my coverage of the Data Center 2010 conference, Tuesday morning I attended several sessions. The first was a serious IT discussion with Mazen Rawashdeh, Technology Executive from eBay; the second was a lighthearted review of the benefits of Cloud Computing from humorist Dave Barry; and the third focused on re-architecting backup strategies.
eBay – How One Fast Growing Company is Solving its Infrastructure and Data Center Challenges
"It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change." -- Charles Darwin
So far, this has been the best session I have attended. eBay operates in 32 countries in seven languages, helping 90 million users to buy or sell 245 million items in 50,000 categories. Let's start with some statistics of the volume of traffic that eBay handles:
$2000 traded every second
cell phone sold every six seconds
pair of shoes sold every nine seconds
a major appliance sold every minute
93 billion database actions every day
50 TB of data ingested daily
code changes to the eBay application are rolled in every day
In 2007, eBay discovered a disturbing trend: infrastructure costs were growing linearly with business listing volume, which was an unsustainable model. Mazen Rawashdeh, eBay Marketplace Technology Operations, presented their strategy to break free from this problem. They want to double the number of listings without doubling their costs. They are two years into their four-year plan:
Switched from expensive 12U high servers consuming 3 Kilowatts over to open source software on commodity 1-2U server hardware. Mazen owns all the costs from cement floor up to the web server.
Replaced team-optimized key performance indicators (KPI) with a common KPI. The server team focused on transactions per minute. The storage team was focused on utilization. The network team was focused on MB/sec bandwidth. The problem is that changes to optimize one might have negative impact to other teams. The new KPI was "Watts per listing" that allowed all teams to focus on a common goal.
Focused on changing the corporate culture for communicating clear measurable goals so that everyone understands the why and how of this new KPI. You have to spend money to save money in the long run. Consider costs at least 36 months out.
Changed from purchasing servers and depreciating them over 3 years to a lease model with server replacement tech refresh every 18 months. It is a bad idea to keep IT equipment after full depreciation, as energy savings alone on new equipment easily justifies 18-month replacement.
Adopted storage tiers. Storage is purchased not leased because it is more difficult to swap out disk arrays. They have 10-40 PB of disk. They do not use traditional backup, but rather use disk replication across distant locations. They are quick to delete or archive data that does not belong on their production systems.
Their results so far? They have reduced the Watts per listing by 70 percent over the past two years. They were able to double their volume with a relatively flat IT budget.
The Wit and Wisdom of Dave Barry, Humorist and Author
Dave Barry is a humor columnist. For 25 years he was a syndicated columnist whose work appeared in more than 500 newspapers in the United States and abroad, including the [Funny Times] that I subscribe to. In 1988 he won the Pulitzer Prize for Commentary about the election and politics in general. Dave has also written a total of 30 books, of which two of his books were used as the basis for the CBS TV sitcom "Dave's World," in which Harry Anderson played a much taller version of Dave.
I first met Dave about ten years ago at a SHARE conference in Minneapolis, MN. It was good to see him again.
Backup and Beyond
The analyst covered the "Three C's" of backup: cost, capability and complexity. There are many ways to implement backup, and he predicts that 30 percent of all companies will re-evaluate and re-architect their backup strategy, or at least change their backup software, by 2014 to address these three issues. Another survey indicates that 43 percent of companies consider backup the primary reason they are investigating public cloud service providers.
The top three primary backup software vendors for the audience were Symantec, IBM, and Commvault. An interactive poll of the audience offered some insight:
There appears to be a shift away from using disk to emulate tape (Virtual Tape Library) and instead toward direct disk interfaces.
Some of the recommended actions were:
Exploit backup software features. On average, people keep 11 versions of backup, try cutting this down to four versions. IBM Tivoli Storage Manager allows this to be done via management class policies.
Implement a separate archive. Once data is archived and backed up, it reduces the backup load of production systems. Any chance to backup semi-static data less frequently will help.
Switch to capacity-based pricing which will allow more flexibility on server options to run backup software.
Implement data deduplication and compression, such as with IBM ProtecTIER data deduplication solution.
Consider a tiered recovery approach, where less critical applications have less backup protection. Many keep 1-2 years of backups, but 90 percent of all recoveries are for backups from the most recent 27 days. Reduce backup retention to 90 days.
Consider adopting a "Unified Recovery Management" strategy that protects laptops and desktops, remote office and branch offices, mission critical applications, and provide for business continuity and disaster recovery.
Regularly test your recovery to validate your procedures and your assumptions about recoverability.
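A rough back-of-the-envelope on the first recommendation shows why it matters: cutting from the average 11 backup versions down to four shrinks the stored copies by nearly two-thirds. A sketch, using a hypothetical 100 TB of protected data (my own illustration, not a figure from the analyst session):

```python
# Illustrative savings from trimming backup versions, per the recommendation above.
protected_tb = 100                    # hypothetical amount of protected data
versions_before, versions_after = 11, 4

stored_before = protected_tb * versions_before  # 1100 TB of backup copies
stored_after = protected_tb * versions_after    # 400 TB of backup copies
savings_pct = 100 * (1 - stored_after / stored_before)

print(f"{stored_before} TB -> {stored_after} TB ({savings_pct:.0f}% less)")
```

Deduplication and less-frequent backup of semi-static data, the other recommendations above, compound on top of this.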
While the conference is divided into seven major tracks, it quickly becomes obvious that many of these IT datacenter issues overlap, and that approaches and decisions in one area can easily impact other areas.
Next week, I will be in Las Vegas for the 30th annual [Data Center Conference]. This is the fourth year attending this. For a bit of nostalgia, check out my blog posts from the [2008 event] and the [2009 event].
This week I'm in beautiful Guadalajara, Mexico teaching at our [System Storage Portfolio Top Gun class]. We have all of our various routes-to-market represented here, including our direct sales force, our technical teams, our online IBM.COM website sales, as well as IBM Business Partners. Everyone is excited over last week's IBM announcement of [4Q07 and full year 2007 results], which includes double-digit growth in our IBM System Storage business, led by sales of our DS8000, SAN Volume Controller and Tape systems. Obviously, as an IBM employee and stockholder, I am biased, so instead I thought I would provide some excerpts from other bloggers and journalists.
But what was striking in the company’s conference call on Thursday afternoon was the unhedged optimism in its outlook for 2008, given the strong whiff of recession fear elsewhere.
The questions from Wall Street analysts in the conference call had a common theme. Why are you so comfortable about the 2008 outlook? Now, that might just be professional churlishness, since so many of them have been so wrong recently about I.B.M. Wall Street had understandably thought, for example, that I.B.M.’s sales to financial services companies — the technology giant’s largest single customer category — would suffer in the fourth quarter, given the way banks have been battered by the mortgage credit crunch.
But Mr. Loughridge said that revenue from financial services customers rose 11 percent in the fourth quarter, to $8 billion. The United States, he noted, accounts for only 25 percent of I.B.M.’s financial services business.
The other thing that seems apparent is how much I.B.M.’s long-term strategy of moving up to higher-profit businesses and increasingly relying on services and software is working. Its huge services business grew 17 percent to $14.9 billion in the quarter. After the currency benefit, the gain was 10 percent, but still impressive. Software sales rose 12 percent to $6.3 billion.
Looking at IBM's business segments, it can be seen that they offer far more coverage of the technology space than those of the typical tech company:
IBM is just so big and diversified that there is little comparison between it and most other tech companies. IBM is a member of an elite group of companies like Cisco Systems (CSCO), Microsoft (MSFT), Oracle (ORCL) or Hewlett-Packard (HPQ).
IBM's wide international coverage and deep technological capabilities dwarf those of most tech companies. Not only do they have sales organizations worldwide but they have developers, consultants, R&D workers and supply chain workers in each geographic region. Their product mix runs from custom software to packaged enterprise software, hardware (mainframes and servers), semiconductors, databases, middleware technology, etc., etc. There are few tech companies that even attempt to support that many kinds and variations of products.
As color on the fourth quarter earnings announcement, there are a couple of observations that I would like to make. The first one speaks to IBM's international prowess. The company indicated that growth in the Americas was only 5%. International sales were a primary driver of IBM's good results. As an insight on the difference between IBM and most other tech companies, it is clear that nowadays, a tech company that isn't adept at selling internationally is going to be in trouble.
Terrific performance in a terrific year - no doubt a result of its strong global model. IBM operates in 170 countries, with about 65% of its employees outside US and about 30% in Asia Pacific. For fiscal 2007, revenues from Americas grew 4% to $41.1 billion (42% of total revenue), [EMEA] grew 14% to $34.7 billion (35%of total revenue), and Asia-Pacific grew by 11% to $19.5 billion (19.7% of total revenue). IBM sees growth prospects not just in [BRIC] but also countries like Malaysia, Poland, South Africa, Peru, and Singapore.
Thus far 2008–all two weeks of it–hasn’t been pretty for the tech industry. Worries about the economy prevail. And even companies that had relatively good things to say, like Intel, get clobbered. It’s ugly out there–unless you’re IBM.
I am sure there will be more write-ups and analyses on this over the coming weeks, and others will probably wait until more tech companies announce their results for comparison.
IDC announced that IBM was #1 in storage hardware (disk and tape combined) for 2006. Here are some excerpts from the IBM press release:
The newly released May 2007 report by leading industry analyst firm IDC, "Worldwide Combined Disk and Tape Storage 2006 Market Share Update," shows IBM in the #1 overall position for all disk and tape storage hardware for the full year 2006.
In a total disk and tape storage hardware segment that increased to $28.2 billion in 2006, IBM captured 22.2 percent of the combined revenue for full year 2006, besting HP's 20.9 percent and EMC's 13.2 percent.
Five years ago, IBM was only #3 in this area, but is this new standing from IBM doing things better, or HP and EMC doing things poorly? Probably a little of both, but since it's not polite to point out the flaws of others in a blog, I will focus on what IBM is doing right, and I think our leadership in tape accounts for a good measure of this.
The resurgence of tape comes from a variety of factors:
The focus on being "green", to conserve energy and reduce power and cooling costs. Tape is the cheapest storage in this regard, as the tape cartridges only consume power when read or written.
Government regulations where more data must be stored for longer periods of time, such as the Federal Rules of Civil Procedure (FRCP), Sarbanes-Oxley, SEC regulations, and so on.
The widening gap in dollars per MB. Advancements in tape are outpacing disk. Disk is slowing down to about 25% improvement year on year, but tape continues its 30-40% improvement curve. A solution like Information Lifecycle Management (ILM) that moves older less valuable data from disk to tape can result in excellent cost savings.
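To see how quickly those improvement rates compound into a widening gap, here is an illustrative projection. The starting prices and the 35 percent tape rate (the midpoint of 30-40%) are assumptions for the sketch, not measured figures:

```python
# Project the disk-vs-tape cost-per-MB gap over five years, using the
# year-on-year improvement rates cited above (all starting values assumed).
disk_improve = 0.25  # ~25% cost improvement per year for disk
tape_improve = 0.35  # 30-40% per year for tape; midpoint used here

disk_cost = 1.0   # normalized starting cost per MB for disk
tape_cost = 0.25  # assume tape starts roughly 4x cheaper per MB

for year in range(1, 6):
    disk_cost *= (1 - disk_improve)
    tape_cost *= (1 - tape_improve)
    print(f"year {year}: disk costs {disk_cost / tape_cost:.1f}x more than tape per MB")
```

Under these assumptions the ratio grows from about 4x to roughly 8x in five years, which is why moving cold data from disk to tape keeps getting more attractive.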
Exciting "combined storage" solutions like the IBM System Storage DR550 and the IBM Grid Medical Archive Solution (GMAS) that combine disk and tape with internal hierarchy storage management of data, based on policies.
I worked with the IBM Redbooks residency team to review this paper and ensure it had the right focus. I did not want a Redpaper that just listed all of the IBM technologies available, but rather one that spent some effort on the business benefits, and realistic use cases with actual client examples, that help illustrate not just what a Smart Storage Cloud is, but why your business may benefit from having one, and how others have already benefited from their deployment.
To help promote this new Redpaper, my colleagues Larry Coyne and Karen Orlando filmed me talking about the book. This has been posted as a [4-minute YouTube video]. This is the first time we have promoted a Redpaper using a video, so let me know what you think in the comment section below.
When new technologies are introduced to the marketplace, it is normal for customers to be skeptical.
My sister is a mechanical engineer, so when she needs to configure a part or component, she can design it on the computer, and then use a "Rapid Prototyping Machine" that acts like a 3D printer to generate a plastic part that matches the specifications. Some machines do this by taking a hunk of plastic and cutting it down to the appropriate shape, and others use glue and powder to assemble the piece.
But not everything is that simple. Harry Beckwith deals with the issue of selling services and software features in his book "Selling the Invisible". How do you sell a service before it is performed? How do you sell a software feature based on new technology that the customer is not familiar with?
Our good friends over at NetApp, our technology partners for the IBM System Storage N series, developed a "storage savings estimator" tool that can provide good insight into the benefits of the Advanced Single Instance Storage (A-SIS) deduplication feature.
I decided to run the tool to analyze my own IBM Thinkpad C: drive (Windows operating system and programs) and D: drive ("My Documents" folder containing all my data files) to see how much storage savings the tool would estimate. Here are my results:
WINXP-C-07G (C: drive)
Total Number of Directories: 1272
Total Number of Files: 56265
Total Number of Symbolic Links: 0
Total Number of Hard Links: 41996
Total Number of 4k Blocks: 2395884
Total Number of 512b Blocks: 18944730
Total Number of Blocks: 2395884
Total Number of Hole Blocks: 290258
Total Number of Unique Blocks: 1611792
Percentage of Space Savings: 20.61
Scan Start Time: Wed Sep 5 14:37:06 2007
Scan End Time: Wed Sep 5 14:53:51 2007
WINXP-D-07H (D: drive)
Total Number of Directories: 507
Total Number of Files: 7242
Total Number of Symbolic Links: 0
Total Number of Hard Links: 11744
Total Number of 4k Blocks: 3954712
Total Number of 512b Blocks: 31610595
Total Number of Blocks: 3954712
Total Number of Hole Blocks: 3204
Total Number of Unique Blocks: 3524605
Percentage of Space Savings: 10.79
Scan Start Time: Wed Sep 5 14:21:16 2007
Scan End Time: Wed Sep 5 14:34:30 2007
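The reported "Percentage of Space Savings" appears to follow from the other counts: blocks that are neither sparse "holes" nor unique are duplicates that deduplication can reclaim. This is my own inference from the numbers above, not a formula documented by NetApp, but it reproduces both of my scan results. A quick sketch:

```python
# Inferred (not documented) derivation of the estimator's savings figure:
# duplicate blocks = total blocks - hole blocks - unique blocks,
# and the savings percentage is duplicates as a share of total blocks.

def estimated_savings(total_blocks, hole_blocks, unique_blocks):
    """Percent of 4k blocks reclaimable as duplicates."""
    duplicate_blocks = total_blocks - hole_blocks - unique_blocks
    return 100.0 * duplicate_blocks / total_blocks

# C: drive scan counts from above
print(round(estimated_savings(2395884, 290258, 1611792), 2))  # 20.61
# D: drive scan counts from above
print(round(estimated_savings(3954712, 3204, 3524605), 2))    # 10.79
```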
I am impressed with the results, and have a better understanding of the way A-SIS works. A-SIS looks at every 4kB block of data, and creates a "fingerprint", a type of hash code of the contents. If two blocks have different "fingerprints", then the contents are known to be different. If two blocks have the same fingerprint, it is mathematically possible for their contents to still differ (a hash collision), so A-SIS schedules a byte-for-byte comparison to be sure they are indeed the same. This might happen hours after the block is initially written to disk, but it is a much safer implementation, and does not slow down the applications writing data.
(In an effort to support deduplication in "real time" as data was being written, earlier deduplication implementations had to either assume that a hash collision was a match, or take the time to perform the required byte-for-byte comparison during the write process. Doing this byte-for-byte comparison when the device is busiest doing write activities causes excessive, undesirable load on the CPU.)
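The fingerprint-then-verify approach described above can be sketched in a few lines. This is an illustrative toy, not NetApp's actual A-SIS code: the class name, SHA-256 fingerprint, and in-memory structures are all my own assumptions. The key idea is that the write path only records fingerprints, while a later background pass does the byte-for-byte check before blocks are treated as duplicates.

```python
import hashlib

# Toy sketch of fingerprint-based deduplication (illustrative only;
# names and data structures are assumptions, not the A-SIS design).
BLOCK_SIZE = 4096  # A-SIS fingerprints 4kB blocks

class DedupStore:
    def __init__(self):
        self.blocks = []          # physical block contents
        self.by_fingerprint = {}  # fingerprint -> first block index seen
        self.pending = []         # candidate duplicate pairs to verify later

    def write(self, data):
        """Fast path: store the block, just note fingerprint matches."""
        fp = hashlib.sha256(data).digest()
        idx = len(self.blocks)
        self.blocks.append(data)
        if fp in self.by_fingerprint:
            # Same fingerprint: *probably* a duplicate, verify later
            self.pending.append((self.by_fingerprint[fp], idx))
        else:
            self.by_fingerprint[fp] = idx
        return idx

    def dedup_pass(self):
        """Background pass: byte-for-byte check guards against collisions."""
        confirmed = 0
        for a, b in self.pending:
            if self.blocks[a] == self.blocks[b]:
                self.blocks[b] = self.blocks[a]  # share a single copy
                confirmed += 1
        self.pending.clear()
        return confirmed
```

For example, writing the same 4kB block twice and a different block once, then running `dedup_pass()`, confirms exactly one duplicate, and the comparison cost is paid off the write path.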
The estimator tool runs on any x86-based laptop, personal computer or server, and can scan direct-attached, SAN-attached, or NAS-attached file systems. If you are a customer shopping around for deduplication, ask your IBM pre-sales technical support, storage sales rep, or IBM Business Partner to analyze your data. Tools like this can support a simple cost-benefit analysis: the cost of licensing the A-SIS software feature versus the amount of storage savings.