The marketshare data for external disk systems has been released by IDC for 4Q09. Overall, the market dropped 0.7 percent, comparing 4Q09 versus 4Q08. While EMC was quick to remind everyone that they were able to [maintain their #1 position] in the storage subset of "external disk systems", with the same 23.7 percent marketshare they had back in 4Q08 and revenues that were essentially flat, the real story concerns the shifts in the marketplace for the other major players. IBM grew revenue 9 percent, putting it nearly 5 points of marketshare ahead of HP. HP revenues dropped 7 percent, moving it further behind. Not mentioned in the [IBM Press Release] were NetApp and Dell, neck and neck for fourth place, with NetApp gaining 16.8 percent in revenues, while Dell dropped 13.5 percent. Both NetApp and Dell now have about 8 percent marketshare each. These top five storage vendors represent nearly 70 percent of the marketshare.
Given that HP is IBM's number one competitor, not just in storage but in all things IT, this was a major win. Bob Evans from InformationWeek interviews my fifth-line manager, IBM executive Rod Adkins, in [IBM Claims Hardware Supremacy], where Rod shares his views and opinions about HP, Oracle-Sun, Cisco and Dell.
I'll add my two cents on what's going on:
Shift in Servers causes Shift in Storage
Hundreds of customers are moving away from HP and Sun over to IBM servers, and with that move, are choosing IBM's storage offerings as well. IBM's rock-solid strategy (which I outlined in my post [Foundations and Flavorings]) has helped explain the different products and how they are positioned. HP's use of Itanium processors, and Sun's aging SPARC line, are both reasons enough to switch to IBM's latest POWER7 processors, running AIX, IBM i (formerly i5/OS) and Linux operating systems.
Thunder in the Clouds
Some analysts predict that by 2013, one out of five companies won't even have their own IT assets. IBM supports all flavors of private, public and hybrid cloud computing models. IBM has its own strong set of offerings, is also the number one reseller of VMware, and has cloud partnerships with both Google and Amazon. HP and Microsoft have recently formed an alliance, but they have different takes on cloud computing. HP wants to be the "infrastructure" company, but Microsoft wants to focus on its ["three screens and a public cloud"] strategy. Microsoft has decided not to make its Azure Cloud operating system available for private cloud deployments. By contrast, IBM can start you with a private cloud, then help you transition to a hybrid cloud, and finally to a public cloud.
In the latest eX5 announcement, IBM's x86-based servers can run 78 percent more virtual machines per VMware license dollar. This will give IBM an advantage as HP shifts from Itanium to an all x86-based server line.
Network Attached Storage
There seems to be a shift away from FC and iSCSI towards NAS and FCoE storage networking protocols. This bodes ill for HP's acquisition of LeftHand, and Dell's acquisition of EqualLogic. IBM's SONAS for large deployments, and N series for smaller deployments, will compete nicely against HP's StorageWorks X9000 system.
Storage on Paper no longer Eco-friendly
HP beats IBM when you include consumer products like printers, which some might consider "Storage on Paper". At IBM, we often joke that 96 percent of HP's profits come from over-priced ink cartridges. With the latest focus on the environment, people are printing less. I have been printing less myself, setting my default printer to generate a PDF file instead. There are several tools available for this, including [CutePDF] and [BullZip]. IBM employees switching from Microsoft Office to IBM's [Lotus Symphony] get built-in "export-to-PDF" capability as well. People are also going to their local OfficeMax or CartridgeWorld to get their cartridges refilled, rather than purchase new ones. That has to be hurting HP's bottom line.
Don't Forget About Storage Management
The leading storage management suites today are IBM's Tivoli Storage Productivity Center and EMC's Control Center. HP's Storage Essentials doesn't quite beat either of these, and management software is growing in importance to more and more customers.
This week I got a comment on my blog post [IBM Announces another SSD Disk offering!]. The exchange involved Solid State Disk storage inside the BladeCenter and System x server line. Sandeep offered his amazing performance results, but we have no way to get in contact with him. So, for those interested, I have posted on SlideShare.net a quick five-chart presentation on recent tests with various SSD offerings on the eX5 product line here:
A long time ago, perhaps in the early 1990s, I was an architect on the component known today as DFSMShsm on the z/OS mainframe operating system. One of my job responsibilities was to attend the biannual [SHARE] conference, listen to the requirements of the attendees on what they would like added or changed in DFSMS, and ask enough questions so that I could accurately present the reasoning to the rest of the architects and software designers on my team. One person requested that the DFSMShsm RELEASE HARDCOPY command should release "all" the hardcopy. This command sends all the activity logs to the designated SYSOUT printer. I asked what he meant by "all", and the entire audience of 120-some attendees nearly fell on the floor laughing. He complained that some clever programmer wrote code to test whether an activity log contained only "Starting" and "Ending" messages, but no error messages, and skipped sending those to SYSOUT. I explained that this was done to save paper, good for the environment, and so on. Again, howls of laughter. Most customers reroute the SYSOUT from DFSMS from a physical printer to a logical one that saves the logs as data sets, with date and time stamps, so having any logs "skipped" leaves gaps in the sequence. The client wanted a complete set of data sets for his records. Fair enough.
When I returned to Tucson, I presented the list of requests, and the immediate reaction when I presented the one above was, "What did he mean by ALL? Doesn't it release ALL of the logs already?" I then had to recap our entire dialogue, and then it all made sense to the rest of the team. At the following SHARE conference six months later, I was presented with my own official "All" tee-shirt that listed, and I am not kidding, some 33 definitions for the word "all", in small font covering the front of the shirt.
I am reminded of this story because of the challenges of explaining complicated IT concepts in the English language, which is so full of overloaded words that have multiple meanings. Take for example the word "protect". What does it mean when a client asks for a solution or system to "protect my data" or "protect my information"? Let's take a look at three different meanings:
The first meaning is to protect the integrity of the data from within, especially from executives or accountants who might want to "fudge the numbers" to make quarterly results look better than they are, or to "change the terms of the contract" after agreements have been signed. Clients need to make sure that the people authorized to read and write data can be trusted to do so, and can store data in Non-Erasable, Non-Rewriteable (NENR) protected storage for added confidence. NENR storage includes Write-Once, Read-Many (WORM) tape and optical media, and disk and disk-and-tape blended solutions such as the IBM Grid Medical Archive Solution (GMAS) and the IBM Information Archive integrated system.
The second meaning is to protect access from without, especially from hackers or other criminals who might want to gather personally identifiable information (PII) such as social security numbers, health records, or credit card numbers and use these for identity theft. This is why it is so important to encrypt your data. As I mentioned in my post [Eliminating Technology Trade-Offs], IBM supports hardware-based full-disk-encryption (FDE) drives in its IBM System Storage DS8000 and DS5000 series. These FDE drives have AES-128 encryption built in to perform the encryption in real time. Neither HDS nor EMC supports these drives (yet). Fellow blogger Hu Yoshida (HDS) indicates that their USP-V has implemented data-at-rest encryption in their array differently, using backend directors instead. I am told EMC relies on the consumption of CPU cycles on the host servers to perform software-based encryption, either as MIPS consumed on the mainframe, or using their PowerPath multi-pathing driver on distributed systems.
There is also concern about internal employees having the right "need-to-know" for various research projects or upcoming acquisitions. On SANs, this is normally handled with zoning, and on NAS with appropriate group/owner bits and access control lists. That's fine for LUNs and files, but what about databases? IBM's DB2 offers Label-Based Access Control [LBAC], which provides a finer level of granularity, down to the row or column level. For example, if a hospital database contained patient information, the doctors and nurses would not see the columns containing credit card details, the accountants would not see the columns containing healthcare details, and the individual patients, if they had any access at all, would only be able to access the rows related to their own records, and possibly the records of their children or other family members.
The third meaning is to protect against the unexpected. There are lots of ways to lose data: physical failure, theft or even incorrect application logic. Whatever the cause, you can protect against this by having multiple copies of the data. You can either have multiple copies of the data in its entirety, or use RAID or a similar encoding scheme to store parts of the data in multiple separate locations. For example, with a RAID-5 rank in a 6+P+S configuration, you would have six parts of data and one part parity code scattered across seven drives. If you lost one of the disk drives, the data can be rebuilt from the remaining portions and written to the spare disk set aside for this purpose.
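To make that concrete, here is a minimal Python sketch of the XOR-parity idea behind RAID-5. It is illustrative only, not how any particular disk system implements parity, but it shows why any one lost strip can be rebuilt from the survivors:

```python
import os
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

STRIP_SIZE = 4096                                           # arbitrary strip size for this example
data_strips = [os.urandom(STRIP_SIZE) for _ in range(6)]    # the "6" in 6+P+S
parity = xor_blocks(data_strips)                            # the "P": XOR of all six data strips

# Simulate losing drive 3 and rebuilding its contents onto the spare ("S")
lost = 3
survivors = [s for i, s in enumerate(data_strips) if i != lost] + [parity]
rebuilt = xor_blocks(survivors)

assert rebuilt == data_strips[lost]
print("lost strip rebuilt from the five surviving strips plus parity")
```

The same XOR relationship is what the rebuild-to-spare process exploits after a drive failure.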
But what if the drive is stolen? Someone can walk up to a disk system, snap out the hot-swappable drive, and walk off with it. Since it contains only part of the data, the thief would not have the entire copy of the data, so no reason to encrypt it, right? Wrong! Even with part of the data, people can get enough information to cause your company or customers harm, lose business, or otherwise get you in hot water. Encryption of the data at rest can help protect against unauthorized access to the data, even in the case when the data is scattered in this manner across multiple drives.
To protect against site-wide loss, such as from a natural disaster, fire, flood, earthquake and so on, you might consider having data replicated to remote locations. For example, IBM's DS8000 offers two-site and three-site mirroring. Two-site options include Metro Mirror (synchronous) and Global Mirror (asynchronous). The three-site is cascaded Metro/Global Mirror with the second site nearby (within 300km) and the third site far away. For example, you can have two copies of your data at site 1, a third copy at nearby site 2, and two more copies at site 3. Five copies of data in three locations. IBM DS8000 can send this data over from one box to another with only a single round trip (sending the data out, and getting an acknowledgment back). By comparison, EMC SRDF/S (synchronous) takes one or two trips depending on blocksize, for example blocks larger than 32KB require two trips, and EMC SRDF/A (asynchronous) always takes two trips. This is important because for many companies, disk is cheap but long-distance bandwidth is quite expensive. Having five copies in three locations could be less expensive than four copies in four locations.
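Back-of-the-envelope arithmetic shows why the number of round trips matters so much for synchronous replication. A rough sketch, assuming light travels through fiber at roughly 200,000 km/s and ignoring switch and controller latency:

```python
def sync_write_delay_ms(distance_km, round_trips, fiber_speed_km_per_s=200_000):
    """Propagation delay added to every synchronous write, in milliseconds."""
    one_way_ms = distance_km / fiber_speed_km_per_s * 1000
    return 2 * one_way_ms * round_trips    # out and back, once per round trip

print(sync_write_delay_ms(300, 1))   # ~3.0 ms added per write with a single round trip
print(sync_write_delay_ms(300, 2))   # ~6.0 ms added per write when two round trips are needed
```

Doubling the round trips doubles the delay added to every single write, which is why protocol efficiency matters as much as link speed.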
Fellow blogger BarryB (EMC Storage Anarchist) felt I was unfair pointing out that their EMC Atmos GeoProtect feature only protects against "unexpected loss" and does not eliminate the need for encryption or appropriate access control lists to protect against "unauthorized access" or "unethical tampering".
(It appears I stepped too far on to ChuckH's lawn, as his Rottweiler BarryB came out barking, both in the [comments on my own blog post], as well as his latest titled [IBM dumbs down IBM marketing (again)]. Before I get another rash of comments, I want to emphasize this is a metaphor only, and that I am not accusing BarryB of having any canine DNA running through his veins, nor that Chuck Hollis has a lawn.)
As far as I know, the EMC Atmos does not support FDE disks that do this encryption for you, so you might need to find another way to encrypt the data and set up the appropriate access control lists. I agree with BarryB that "erasure codes" have been around for a while and that there is nothing unsafe about using them in this manner. All forms of RAID-5, RAID-6 and even RAID-X on the IBM XIV storage system can be considered a form of such encoding as well. As for the amount of long-distance bandwidth that Atmos GeoProtect would consume to provide this protection against loss, you might question any cost savings from this space-efficient solution. As always, you should consider both space and bandwidth costs in your total cost of ownership calculations.
Of course, if saving money is your main concern, you should consider tape, which can be ten to twenty times cheaper than disk, allowing you to keep a dozen or more copies, in as many time zones, at substantially lower cost. These can be encrypted and written to WORM media for even more thorough protection. The arithmetic below shows how this plays out.
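A purely illustrative cost comparison (the dollar figures are hypothetical placeholders, not list prices):

```python
def media_cost(copies, tb_per_copy, cost_per_tb):
    """Total media cost for a given number of copies of the same data."""
    return copies * tb_per_copy * cost_per_tb

disk_cost_per_tb = 1000                      # hypothetical $/TB, for illustration only
tape_cost_per_tb = disk_cost_per_tb / 15     # "ten to twenty times cheaper" -- use 15x here

print(media_cost(2, 100, disk_cost_per_tb))   # 200,000: two disk copies of 100 TB
print(media_cost(12, 100, tape_cost_per_tb))  # 80,000: a dozen tape copies of the same data
```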
Well, it's Tuesday again, and that means IBM announcements! Right on the heels of our big storage launch on February 9, today IBM announced some exciting options for its modular disk systems. Let's take a look:
2TB SATA-II drives
That's right, you can now DOUBLE your capacity with 2TB SATA type-II drives on the DS3950, DS4200, DS4700, DS5020, DS5100 and DS5300 disk controllers, as well as the DS4000 EXP420, EXP520, EXP810, EXP5000 and EXP5060 expansion drawers. Here are the Announcement Letters for the [HVEC] and [AAS] ordering systems.
300GB Solid State Drives
IBM also announces 300GB solid state drives (SSD) for the DS5100 and DS5300. These are four times larger than the 73GB drives IBM offered last year, for those workloads that need high read IOPS such as Online Transaction Processing (OLTP) and Enterprise Resource Planning (ERP) applications. Here is the [Announcement Letter].
New N series model N3400
For customers that need less than the minimum 21TB that our IBM Scale Out Network Attached Storage (SONAS) can provide, IBM offers the new N3400 unified storage disk system, with support for NFS, CIFS, iSCSI and FCP. This is a 2U-high, 12-drive model that can be expanded up to 136 drives, basically doubling all the stats from last year's N3300 model. Fellow blogger Rich Swain (IBM) does a great job recapping the speeds and feeds over on his blog [News and Information about IBM N series].
It also appears that the reports and rumors of the death of the DS6800 are premature. Don't believe misleading statements from competitors, such as those written by fellow blogger BarryB (EMC), aka "the Storage Anarchist", in his latest post [Bring Out Your Dead] showing a cute little tombstone with "Feb 2010" on the bottom. Actually, if he had bothered to read IBM's [Announcement Letter], he would have realized that IBM plans to continue to sell these until June. Of course, IBM will continue to support both new and existing DS6800 customers for many years to come.
Technically, BarryB does not make any factually incorrect statements for me to correct on his blog. The idea that a product is "dead" is, of course, just opinion, and competitors poke fun at each others' announcements every day. One could argue that the EMC V-Max was "dead" after the ITG whitepaper [Cost/Benefit Case for IBM XIV Storage System - Comparing Costs for IBM XIV and EMC V-Max Systems] demonstrated, back in July 2009, that the IBM XIV cost 63 percent less than a comparable EMC V-Max over a three-year total cost of ownership (TCO). The comparison was made with data from clients in a variety of industries including manufacturing, health care, life sciences, telecommunications, financial services, and the public sector. This could explain why so many EMC customers are buying or investigating the IBM XIV and the rest of the IBM storage portfolio.
The technology industry is full of trade-offs. Take for example solar cells that convert sunlight to electricity. Every hour, more energy hits the Earth in the form of sunlight than the entire planet consumes in an entire year. The general trade-off is between energy conversion efficiency versus abundance of materials:
Get 9-11 percent efficiency using rare materials like indium (In), gallium (Ga) or cadmium (Cd).
Get only 6.7 percent efficiency using abundant materials like copper (Cu), tin (Sn), zinc (Zn), sulfur (S), and selenium (Se)
A second trade-off is exemplified by EMC's recent GeoProtect announcement. This appears similar to the geographic dispersal method introduced by a company called [CleverSafe]. The trade-off is between the amount of space to store one or more copies of data and the protection of data in the event of disaster. Here's an excerpt from fellow blogger Chuck Hollis (EMC) titled ["Cloud Storage Evolves"]:
"Imagine a average-sized Atmos network of 9 nodes, all in different time zones around the world. And imagine that we were using, say, a 6+3 protection scheme.
The implication is clear: any 3 nodes could be completely lost: failed, destroyed, seized by the government, etc.
-- and the information could be completely recovered from the surviving nodes."
For organizations worried about their information falling into the wrong hands (whether criminal or government sponsored!), any subset of the nodes would yield nothing of value -- not only would the information be presumably encrypted, but only a few slices of a far bigger picture would be lost.
Seized by the government? Falling into the wrong hands? Is EMC positioning Atmos as "Storage for Terrorists"? I can certainly appreciate the value of being able to protect 6PB of data with only 9PB of storage capacity, instead of keeping two copies of 6PB each, but the trade-off means that you will be accessing the majority of your data across your intranet, which could impact performance. But, if you are in an illicit or illegal business that could have a third of your facilities "seized by the government", then perhaps you shouldn't house your data centers there in the first place. Having two copies of 6PB each, in two "friendly nations", might make more sense.
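The capacity side of that trade-off is simple arithmetic; here is a quick sketch using the numbers from the example above (6PB of data, a 6+3 scheme versus two full copies):

```python
def raw_capacity_pb(data_pb, data_fragments, parity_fragments):
    """Raw capacity needed for an erasure-coded layout such as 6+3."""
    return data_pb * (data_fragments + parity_fragments) / data_fragments

print(raw_capacity_pb(6, 6, 3))   # 9.0 PB for a 6+3 geo-dispersed layout
print(6 * 2)                      # 12 PB for two full copies of 6 PB each
```

The 6+3 layout saves 3PB of raw capacity, but as noted above, most reads will then travel across the intranet to reach fragments on remote nodes.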
(In reality, companies often keep way more than just two copies of data. It is not unheard of for companies to keep three to five copies scattered across two or three locations. Facebook keeps SIX copies of photographs you upload to their website.)
ChuckH argues that the governments that seize the three nodes won't have a complete copy of the data. However, merely having pieces of data is enough for governments to capture terrorists. Even if the striping is done at the smallest 512-byte block level, those 512 bytes of data might contain names, phone numbers, email addresses, credit cards or social security numbers. Hackers and computer forensics professionals take advantage of this.
You might ask yourself, "Why not just encrypt the data instead?" That brings me to the third trade-off, protection versus application performance. Over the past 30 years, companies have had a choice: they could encrypt and decrypt the data as needed, using server CPU cycles, but this would slow down application processing. Every time you wanted to read or update a database record, more cycles would be consumed. This forced companies to be very selective about what data they encrypted, which columns or fields within a database, which email attachments, and other documents or spreadsheets.
An initial attempt to address this was to introduce an outboard appliance between the server and the storage device. For example, the server would write to the appliance with data in the clear, the appliance would encrypt the data, and pass it along to the tape drive. When retrieving data, the appliance would read the encrypted data from tape, decrypt it, and pass the data in the clear back to the server. However, this had the unintended consequence of using 2x to 3x more tape cartridges. Why? Because encrypted data does not compress well, so tape drives with built-in compression capabilities would not be able to shrink down the data onto fewer tapes.
(I covered the importance of compressing data before encryption in my previous blog post
[Sock Sock Shoe Shoe].)
Like the trade-off between energy efficiency and abundant materials, this is a trade-off IBM eliminated, by offering compression and encryption on the tape drive itself. This is standard 256-bit AES encryption implemented on a chip, able to process the data as it arrives at near line speed. So now, instead of having to choose between protecting your data and running your applications with acceptable performance, you can do both: encrypt all of your data without having to be selective. This approach has been extended over to disk drives, so that disk systems like the IBM System Storage DS8000 and DS5000 can support full-disk-encryption [FDE] drives.
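A simple way to see why the order matters: well-encrypted data looks statistically random, and random data does not compress. Here is a minimal Python sketch, using random bytes as a stand-in for ciphertext so no cryptography library is needed:

```python
import os
import zlib

plaintext = b"backup backup backup " * 10_000        # highly repetitive, like many logs and databases
ciphertext_standin = os.urandom(len(plaintext))       # stand-in for the same data after encryption

print(len(zlib.compress(plaintext)) / len(plaintext))           # tiny fraction: compresses very well
print(len(zlib.compress(ciphertext_standin)) / len(plaintext))  # ~1.0 or worse: nothing left to squeeze
```

Compress first and you keep the savings; encrypt first and the compression stage downstream has nothing left to work with.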
Continuing my drawn out coverage of IBM's big storage launch of February 9, today I'll cover the IBM System Storage TS7680 ProtecTIER data deduplication gateway for System z.
On the host side, TS7680 connects to mainframe systems running z/OS or z/VM over FICON attachment, emulating an automated tape library with 3592-J1A devices. The TS7680 includes two controllers that emulate the 3592 C06 model, with 4 FICON ports each. Each controller emulates up to 128 virtual 3592 tape drives, for a total of 256 virtual drives per TS7680 system. The mainframe sees up to 1 million virtual tape cartridges, up to 100GB raw capacity each, before compression. For z/OS, the automated library has full SMS Tape and Integrated Library Management capability that you would expect.
Inside, the two control units are both connected to a redundant clustered pair of ProtecTIER engines running the HyperFactor deduplication algorithm, which is able to process the deduplication inline, as data is ingested, rather than using the post-process approach of other deduplication solutions. These engines are similar to the TS7650 gateway machines for distributed systems.
On the back end, these ProtecTIER deduplication engines are then connected to external disk, up to 1PB. If you get a 25x deduplication ratio on your data, that would be 25PB of mainframe data stored on only 1PB of physical disk. The disk can be any disk supported by ProtecTIER over FCP protocol, not just the IBM System Storage DS8000, but also the IBM DS4000, DS5000 or IBM XIV storage system, various models of EMC and HDS, and of course the IBM SAN Volume Controller (SVC) with all of its supported disk systems.
It's Tuesday, and that means more IBM announcements!
I haven't even finished blogging about all the other stuff that got announced last week, and here we are with more announcements. Since IBM's big [Pulse 2010 Conference] is next week, I thought I would cover this week's announcement on Tivoli Storage Manager (TSM) v6.2 release. Here are the highlights:
Client-Side Data Deduplication
This is sometimes referred to as "source-side" deduplication, as storage admins can get confused about which servers are the clients in a TSM client-server deployment. The idea is to identify duplicates at the TSM client node, before sending data to the TSM server. This is done at the block level, so even files that are similar but not identical, such as slight variations from a master copy, can benefit. The dedupe process is based on an index shared across all clients and the TSM server, so if you have a file that is similar to a file on a different node, the blocks that are identical in both will be deduplicated.
This feature is available for both backup and archive data, and can also be useful for archives using the IBM System Storage Archive Manager (SSAM) v6.2 interface.
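To illustrate the general idea (a conceptual sketch only; TSM's actual implementation uses its own chunking scheme and a shared server-side index), here is a fixed-size chunk fingerprinting example in Python:

```python
import hashlib
import os

CHUNK_SIZE = 4096
seen_chunks = set()        # stand-in for the index shared by the clients and the TSM server

def chunks_to_send(data: bytes):
    """Return only the chunks whose fingerprints have not been seen before."""
    new = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen_chunks:      # duplicate chunks are skipped, saving network and storage
            seen_chunks.add(digest)
            new.append(chunk)
    return new

master = os.urandom(5 * CHUNK_SIZE)                          # a 20 KB "master" file
variant = master[:4 * CHUNK_SIZE] + os.urandom(CHUNK_SIZE)   # the same file with its last chunk changed

print(len(chunks_to_send(master)))    # 5: everything is new the first time
print(len(chunks_to_send(variant)))   # 1: only the changed chunk needs to be sent
```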
Simplified management of Server virtualization
TSM 6.2 improves its support of VMware guests by adding auto-discovery. Now, when you spontaneously create a new virtual machine OS guest image, you won't have to tell TSM; it will discover this automatically! TSM's legendary support of VMware Consolidated Backup (VCB) now eliminates the manual process of keeping track of guest images. TSM also added support for the vStorage API for file-level backup and recovery.
While IBM is the #1 reseller of VMware, we also support other forms of server virtualization. In this release, IBM adds support for Microsoft Hyper-V, including support using Microsoft's Volume Shadow Copy Services (VSS).
Automated Client Deployment
Do you have clients at all different levels of TSM backup-archive client code deployed all over the place? TSM v6.2 can upgrade these clients up to the latest client level automatically, using push technology, from any client running v5.4 and above. This can be scheduled so that only certain clients are upgraded at a time.
Simultaneous Background Tasks
The TSM server has many background administrative tasks:
Migration of data from one storage pool to another, based on policies, such as moving backups and archives on a disk pool over to a tape pool to make room for new incoming data.
Storage pool backup, typically data on a disk pool is copied to a tape pool to be kept off-site.
Copy active data. In TSM terminology, if you have multiple backup versions, the most recent version is called the active version, and the older versions are called inactive. TSM can copy just the active versions to a separate, smaller disk pool.
In previous releases, these were done one at a time, so it could make for a long service window. With TSM v6.2, these three tasks are now run simultaneously, in parallel, so that they all get done in less time, greatly reducing the server maintenance window, and freeing up tape drives for incoming backup and archive data. Often, the same file on a disk pool is going to be processed by two or more of these scheduled tasks, so it makes sense to read it once and do all the copies and migrations at one time while the data is in buffer memory.
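Here is a rough sketch of that "read once, fan out to several destinations" idea (purely illustrative, not TSM's internal design; the file paths are hypothetical):

```python
def process_once(source_path, destinations):
    """Read each block of the source a single time and fan it out to every destination."""
    outputs = [open(d, "wb") for d in destinations]
    try:
        with open(source_path, "rb") as src:
            while block := src.read(1 << 20):     # 1 MB blocks, read only once
                for out in outputs:               # migration target, copy pool, active-data pool, ...
                    out.write(block)
    finally:
        for out in outputs:
            out.close()

# Hypothetical paths, for illustration only:
# process_once("/diskpool/backup.0001", ["/tapepool/backup.0001", "/copypool/backup.0001"])
```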
Enhanced Security during Data Transmission
Previous releases of TSM offered secure in-flight transmission of data for Windows and AIX clients. This security uses Secure Socket Layer (SSL) with 256-bit AES encryption. With TSM v6.2, this feature is expanded to support Linux, HP-UX and Solaris.
Improved support for Enterprise Resource Planning (ERP) applications
I remember back when we used to call these TDPs (Tivoli Data Protectors). TSM for ERP allows backup of ERP applications, seamlessly integrating with database-specific tools like IBM DB2, Oracle RMAN, and SAP BR*Tools. This allows one-to-many and many-to-one configurations between SAP servers and TSM servers. In other words, you can have one SAP server back up to several TSM servers, or several SAP servers back up to a single TSM server. This is done by splitting up databases into "sub-database objects" and then processing each object separately. This can be extremely helpful if you have databases over 1TB in size. In the event that backing up an object fails and has to be restarted, it does not impact the backup of the other objects.
Continuing my series of posts on the IBM Storage launch of February 9, I cover some new disk options.
IBM System Storage DCS9900
The DCS9900 uses a 4U enclosure to hold 60 (that's sixty, SIX-ZERO) drives! Normally, hot-swappable drives face the front or back surface of the rack, but these surfaces are valuable "real estate", so instead, the drives stick downward into a tray that rolls out, giving you full access to any of the drives. The DCS9900 added support for 2TB (7200 RPM) SATA drives, and 600GB (15K RPM) SAS drives. The systems use ten-pack RAID-6 ranks, 8+2P.
(If this sounds a lot like the newly announced SONAS product, it should! The two products share "DNA", and so can be considered sister products, packing 60 drives into a 4U enclosure. By comparison, the SONAS initially only supports 1TB SATA in RAID-6 ten-packs 8+2P, and 450GB SAS in RAID-5 ten-packs 8+P+S, but now that 2TB SATA and 600GB SAS drives have been qualified for the DCS9900, we hope to qualify these for the SONAS soon as well.)
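For those doing the capacity math on an 8+2P ten-pack layout, a quick sketch (raw arithmetic only; real usable capacity will differ once formatting, spares and file system overhead are taken into account):

```python
def usable_tb(drives, drive_tb, data_per_rank=8, rank_width=10):
    """Approximate usable capacity for RAID-6 ranks laid out as 8+2P ten-packs."""
    ranks = drives // rank_width
    return ranks * data_per_rank * drive_tb

print(usable_tb(60, 2.0))    # 96.0 TB per 60-drive enclosure with 2TB SATA drives
print(usable_tb(60, 0.6))    # 28.8 TB per 60-drive enclosure with 600GB SAS drives
```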
Continuing on the [IBM Storage Launch of February 9], John Sing has offered to write the following guest post about the [announcement] of IBM Scale Out Network Attached Storage [IBM SONAS]. John and I have known each other for a while, and have traveled the world together to work with clients and speak at conferences. He is an Executive IT Consultant on the SONAS team.
Guest Post written by John Sing, IBM San Jose, California
What is IBM SONAS? It’s many things, so let’s start with this list:
It’s IBM’s delivery of a productized, pre-packaged Scale Out NAS global virtual file server, delivered in an easy-to-use appliance
IBM’s solution for large enterprise file-based storage requirements, where massive scale in capacity and extreme performance is required, especially for today’s modern analytics-based Competitive Advantage IT applications
Scales to many petabytes of usable storage and billions of files in a single global namespace
Provides integrated central management, central deployment of petabyte levels of storage
Modular commercial-off-the-shelf [COTS] building blocks. I/O, storage, network capacity scale independently of each other. Up to 30 interface nodes and 60 storage nodes, in an IBM General Parallel File System [GPFS]-based cluster. Each 10Gb CEE interface node port is capable of streaming at 900 MB/sec
Files are written in block-sized chunks, striped over many disk drives in parallel – aggregating throughput on a massive scale (both read and write), as well as providing auto-tuning and auto-balancing (see the sketch after this list)
Functionality delivered via one program product, IBM SONAS Software, which provides all of above functions, along with clustered CIFS, NFS v2/v3 with session auto-failover, FTP, high availability, and more
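To illustrate the striping idea mentioned above (a toy sketch only; GPFS block allocation is far more sophisticated than simple round-robin), here is how a file split into block-sized chunks gets scattered across drives so that every read and write is spread over all of them:

```python
def stripe_file(data: bytes, num_disks: int, block_size: int = 256 * 1024):
    """Assign consecutive blocks of a file to disks round-robin, conceptually GPFS-style."""
    layout = {disk: [] for disk in range(num_disks)}
    for block_no, offset in enumerate(range(0, len(data), block_size)):
        layout[block_no % num_disks].append(data[offset:offset + block_size])
    return layout

layout = stripe_file(b"x" * (4 * 1024 * 1024), num_disks=8)    # a 4 MB file over 8 disks
print({disk: len(blocks) for disk, blocks in layout.items()})  # two 256 KB blocks land on each disk
```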
IBM SONAS makes automated tiered storage achievable and realistic at petabyte levels:
Integrated high performance parallel scan engine capable of identifying files at over 10 million files per minute per node
Integrated parallel data movement engine to physically relocate the data within tiered storage
And we’re just scratching the surface. IBM has plans to deploy additional protocols, storage hardware options, and software features.
However, the real question of interest should be, “who really needs that much storage capacity and throughput horsepower?”
The answer may surprise you. IMHO, the answer is: almost any modern enterprise that intends to stay competitive. Hmmm…… Consider this: the reason that IT exists today is no longer to simply save cost (that may have been true 10 years ago). Everyone is reducing cost… but how much competitive advantage is purchased through “let’s cut our IT budget by 10% this year”?
Notice that in today’s world, there are (many) bright people out there, changing our world every day through New Intelligence Competitive Advantage analytics-based IT applications such as real-time GPS traffic data, real-time energy monitoring and redirection, real-time video feed with analytics, text analytics, entity analytics, real-time stream computing, image recognition applications, HDTV video on demand, etc. Think of how the GPS industry, cell phone / Twitter / Facebook applications, and iPhone and iPad applications, as examples, are creating whole new industries and markets almost overnight.
Then start asking yourself, “What's behind these Competitive Advantage IT applications – as they are the ones that are driving all my storage growth? Why do they need so much storage? What do those applications mean for my storage requirements?”
To be “real-time”, long-held IT paradigms are being broken every day. Things like “data proximity”: we can no longer extract terabytes of data from production databases and load them to a data warehouse – where’s the “real-time” in that? Instead, today’s modern analytics-based applications demand:
Multiple processes and servers (sometimes numbering in the 100s) simultaneously ….
Running against hundreds of terabytes of live production data, streaming in from an expanding number of smarter sensors, input devices, and users
Producing digital image-intensive results that must be programmatically sent to an ever-increasing number of mobile devices in geographically dispersed locations
Requiring parallel performance levels that used to be the domain only of High Performance Computing (HPC)
This is a major paradigm shift in storage – and these are the solution and storage capabilities that IBM SONAS is designed to address. And of course, you should be able to save significant cost through the SONAS global virtual file server consolidation and virtualization as well.
Certainly, this topic warrants more discussion. If you found it interesting, contact me, your local IBM Business Partner or IBM Storage rep to discuss Competitive Advantage IT applications and SONAS further.
Well, it's Tuesday again, and that means IBM announcements! Today we had a major launch, with so many products, services and offerings
that I can't fit them all into a single post, so I will split them up into several posts to give them the attention they deserve. So, in this
post, I will focus on just the networking gear.
IBM Converged Switch B32
The "Converged" part of this switch refers to Converged Enhanced Ethernet (CEE), which is just a lossless Ethernet that meets certain standards to allow Fibre Channel over Ethernet (FCoE) that are still being discussed between Brocade and Cisco. Thankfully, IBM demanded both Brocade and Cisco stick to open agreed-upon standards, and the rest of the world gets to benefit from IBM's leadership in keeping everything as open and non-proprietary as possible.
The B32 ("B" because it was made by Brocade) starts with 24 10Gb Converged Enhanced Ethernet (CEE) ports, and then you can add eight Fibre Channel ports, for a total of 32 ports, hence the name B32. These are designed to be Top-of-Rack (TOR) switches. Basically, instead of having expensive optical cables for Ethernet and/or Fibre Channel out of each server, you have cheap twinax copper cables connecting the server's Converged Network Adapters (CNA) to this TOR switch, and then you can have the 10Gb Ethernet go to your regular Ethernet LAN, and your 8Gbps FC traffic go to your regular FC SAN. In other words, the CNA serves both the role of an Ethernet Network Interface Card (NIC) as well as a Fibre Channel Host Bus Adapter (HBA) card.
(You might see 8Gbps Fibre Channel represented as 8/4/2 or 2/4/8; this is just to remind you that these 8Gb FC ports can auto-negotiate down to 4Gbps and 2Gbps for legacy hardware, but not 1Gbps. If you are still using 1Gbps FC, you need 4Gbps SFP transceivers instead, often shown as 1/2/4 or 4/2/1.)
New SSN-16 module for Cisco directors and switches
When I present SAN gear to sales reps, I often get the question, "What is the difference between a switch and a director?" My quick and simple answer is that switches have fixed ports, but directors have slots into which you can slide different blades or expansion modules. The Cisco MDS9500 series are directors with slots, and the three model numbers provide a hint to their capacity. The last two digits represent the total number of slots, but the first two slots are already taken. In other words, model 9513 has 11 slots, model 9509 has seven slots, and model 9506 has four slots. You can have a 48-port blade in a slot, so in theory, you can have a maximum of 528 ports on the biggest model 9513.
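The slot-and-port arithmetic from that naming convention, as a tiny sketch:

```python
def usable_slots(model_number):
    """MDS9500 naming convention: the last two digits are total slots, two of which are already taken."""
    return model_number % 100 - 2

def max_ports(model_number, ports_per_blade=48):
    """Theoretical maximum ports if every usable slot holds a 48-port blade."""
    return usable_slots(model_number) * ports_per_blade

for model in (9506, 9509, 9513):
    print(model, usable_slots(model), max_ports(model))
# 9506 -> 4 slots, 192 ports; 9509 -> 7 slots, 336 ports; 9513 -> 11 slots, 528 ports
```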
However, if you want FCIP for disaster recovery, or I/O Acceleration (IOA) for remote e-vaulting tape libraries, you need a special 18/4 blade. This has 18 FC ports, four 1GbE ports and a special service processor that speaks FCIP or IOA. If you wanted two service processors for FCIP and two for IOA, you would need four of these blades, and that takes up slots that could have been used for 48-port blades instead. The solution? The new SSN-16 has sixteen 1GbE ports and four service processors, so with one slot you can handle the FCIP and IOA processing for which you previously used four blades, giving you three slots back to use for higher port-density cards.
Even better, you can put this new SSN-16 in the Cisco 9222i. The model 9222i is a "hybrid" switch with 22 fixed ports (18 FC ports, four fixed 1GbE ports, and a service processor, so basically the fixed port version of the 18/4 blade above), but it also has one slot! That one slot can be used for the SSN-16 to give you added FCIP or IOA capability.
For our mainframe clients, the FICON package includes four 24-port FICON blades and 96 SFP 4Gbps transceivers to fully populate them. Here is the IBM [Press Release].
Cisco Nexus 5000 series for IBM System Storage
The Cisco Nexus 5000 series is Cisco's entry into the Converged Enhanced Ethernet world; although Cisco sometimes refers to this as Data Center Ethernet (DCE), IBM will continue to use CEE when referring to either Brocade or Cisco gear. These are also Top-of-Rack aggregators that support CNA connections over cheaper twinax copper wires. Model 5010 has 10 ports that can be configured for either 1GbE or 10Gb CEE, 10 ports that are 10Gb CEE, and a slot for an expansion module. The Model 5020 has basically twice as much of everything, including two slots instead of one. Since 10Gb Ethernet does not auto-negotiate down to 1GbE, half the ports can be configured to run 1GbE instead. Frankly, that can be seen as wasting your precious Nexus ports on 1GbE connections, so you might use a 1GbE-to-10GbE aggregator that combines a dozen or more 1GbE connections into a few 10GbE links instead.
Today's announcement is that in addition to 10GbE and 4Gbps FC expansion modules, there is now an expansion module that supports 8Gbps Fibre Channel. Here is the IBM [Press Release].
Whether you choose Brocade or Cisco, nearly all of IBM System Storage disk and tape products can work today with Converged Enhanced Ethernet environments, either directly using iSCSI, NFS or CIFS, or using the FCoE methodology.
As you can see, it took me a whole post just to cover our networking gear announcements, and I haven't even covered our disk, tape and cloud storage offerings. I'll get to those in later posts.
In addition to dominating the gaming world, producing chips for the Nintendo Wii, Sony PlayStation, and Microsoft Xbox 360, IBM also dominates the world of Linux and UNIX servers. Today, IBM announced its new POWER7 processor, and a line of servers that use this technology. Here is a quick [3-minute video] about the POWER7.
While others might be [Dancing on Sun's grave], IBM instead is focused on providing value to the marketplace. Here is another quick [2-minute video] about why thousands of companies have switched from Sun, HP and Dell over to IBM.
Am I dreaming? On his Storagezilla blog, fellow blogger Mark Twomey (EMC) brags about EMC's standard benchmark results, in his post titled [Love Life. Love CIFS.]. Here is my take:
A Full 180 degree reversal
For the past several years, EMC bloggers have argued, both in comments on this blog and on their own blogs, that standard benchmarks are useless and should not be used to influence purchase decisions. While we all agree that "your mileage may vary", I find standard benchmarks useful as part of an overall approach in comparing and selecting which vendors to work with, which architectures or solution approaches to adopt, and which products or services to deploy. I am glad to see that EMC has finally joined the rest of the planet on this. I find it funny that this reversal sounds a lot like their reversal from "Tape is Dead" to "What? We never said tape was dead!"
Impressive CIFS Results
The Standard Performance Evaluation Corporation (SPEC) has developed a series of NFS benchmarks; the latest, [SPECsfs2008], added support for CIFS. So, on the CIFS side, EMC's benchmarks compare favorably against previous CIFS tests from other vendors.
On the NFS side, however, EMC is still behind Avere, BlueArc, Exanet, and IBM/NetApp. For example, EMC's combination of Celerra gateways in front of V-Max disk systems resulted in 110,621 OPS with overall response time of 2.32 milliseconds. By comparison, the IBM N series N7900 (tested by NetApp under their own brand, FAS6080) was able to do 120,011 OPS with 1.95 msec response time.
Even though Sun invented the NFS protocol in the early 1980s, they take an EMC-like approach against standard benchmarks to measure it. Last year, fellow blogger Bryan Cantrill (Sun) gave his [Eulogy for a Benchmark]. I was going to make points about this, but fellow blogger Mike Eisler (NetApp) [already took care of it]. We can all learn from this. Companies that don't believe in standard benchmarks can either reverse course (as EMC has done), or continue their downhill decline until they are acquired by someone else.
(My condolences to those at Sun getting laid off. Those of you who hire on with IBM can get re-united with your former StorageTek buddies! Back then, StorageTek people left Sun in droves, knowing that Sun didn't understand the mainframe tape marketplace that StorageTek focused on. Likewise, many question how well Oracle will understand Sun's hardware business in servers and storage.)
What's in a Protocol?
Both CIFS and NFS have been around for decades, and comparisons can sometimes sound like religious debates. Traditionally, CIFS was used to share files between Windows systems, and NFS for Linux and UNIX platforms. However, Windows can also handle NFS, while Linux and UNIX systems can use CIFS. If you are using a recent level of VMware, you can use NFS as an alternative to a Fibre Channel SAN to store your VMDK files on external disk.
The Bigger Picture
There is a significant shift going on from traditional database repositories to unstructured file content. Today, as much as [80 percent of data is unstructured]. Shipments this year are expected to grow 60 percent for file-based storage, and only 15 percent for block-based storage. With the focus on private and public clouds, NAS solutions will be the battleground for 2010.
So, I am glad to see EMC starting to cite standard benchmarks. Hopefully, SPC-1 and SPC-2 benchmarks are forthcoming?
To avoid overwhelming people with too many features and functions, IBM decided to keep things simple for the first release. Let's take a look:
The base frame (2231-IA3) supports a single collection, from as small as 3.6 TB to as large as 72 TB of usable capacity. You can attach one expansion frame (2231-IS3) that holds two additional collections, with 63 TB usable capacity for each collection. Disk capacity is increased in eight-drive (half-drawer) increments of 3.6 TB usable capacity each. A fully configured IA system (304 drives, 1 TB raw capacity per drive) provides 198 TB usable capacity.
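A quick sanity check of those capacity numbers, using only the figures quoted above:

```python
base_frame_max_tb = 72          # one collection in the 2231-IA3 base frame
expansion_collections = 2       # the 2231-IS3 expansion frame holds two more collections
expansion_collection_tb = 63

total_usable_tb = base_frame_max_tb + expansion_collections * expansion_collection_tb
print(total_usable_tb)          # 198 TB usable, matching the fully configured figure

print(base_frame_max_tb / 3.6)  # 20 half-drawer (eight-drive) increments to fill the base frame
```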
Of course, that is just the disk side of the solution. Like its predecessor, the IBM System Storage DR550, the IA v1.1 can also attach to external tape storage to store and protect petabytes (PB) of archive data. Hundreds of different IBM and non-IBM tape drives and libraries are supported, so that this can be easily incorporated into existing tape environments.
Each collection can be configured to one of three protection levels: basic, intermediate, and maximum.
Basic protection provides RAID protection of data using standard NFS group/user controls for access to read and write data. This can be useful for databases that need full read/write access. Users can assign expiration dates, but in Basic mode they can delete the data before the expiration date is reached.
Intermediate adds Non-Erasable Non-Rewriteable (NENR) protection against user actions to delete or modify protected data. However, similar to IBM N series "Enterprise SnapLock", intermediate mode allows authorized storage admins to clean up the mess, increase or reduce retention periods, and delete data if it is inadvertently protected. I often refer to this as "training wheels" for those who are trying to work out their workflow procedures before moving on to Maximum mode.
Maximum provides the strictest NENR protection for business, legal, government and industry requirements, comparable to IBM N series "Compliance SnapLock" mode, for data that traditionally were written to WORM optical media. Data cannot be deleted until the retention period ends. Retention periods of individual files and objects can be increased, but not decreased. Retention Hold (often referred to as Litigation Hold) can be used to keep a set of related data even longer in specific circumstances.
You can decide to upgrade your protection after data is written to a collection. Basic mode can be upgraded to Intermediate mode, for example, or Intermediate mode upgraded to Maximum.
To keep things simple, v1.1 of the Information Archive supports only two industry standard protocols: NFS and SSAM API. The NFS option allows standard file commands to read/write data. The System Storage Archive Manager (SSAM) API allows smooth transition from earlier IBM System Storage DR550 deployments. With this announcement, IBM will [discontinue selling the DR550 DR2 models].
As we say here at IBM, "Today is the best day to stop using EMC Centera." For more information, see the
IBM [Announcement Letter].
IBM makes another breakthrough today with an announcement about tape data density. Unlike hard disk drive technologies that are hitting physical limits, IBM is proving that tape technology still has plenty of life in its future.
When I first started working for IBM in Tucson, back in 1986, a 3420 tape reel held only 180MB of data, and a 3480 tape cartridge improved this to 200MB of data. Today's enterprise tapes, like 3592 cartridges for the TS1130 drives, or LTO4 cartridges for the IBM TS1040 drives, are half-inch wide, half-mile long, and can store 1 TB or more of data per cartridge, depending on how well the data can compress. To increase cartridge capacity, designers can make changes in three dimensions:
Wider tape: The film industry tried this, going from 35mm to 70mm film, only to find that most cinemas did not want to upgrade their equipment. Keeping the media dimensions to half inch wide allows much of the engineering hardware to continue unchanged.
Longer tape: The problem with longer tape is that either the reel inside gets fatter, or you need to develop flatter media to fit within the existing cartridge dimensions. Wider reels mean bigger external cartridge dimensions, forcing changes to shelving units, cartridge trays, and carrying units. The media just can't get any flatter without risking becoming more brittle.
Denser bit recording: once a convenient width and length were established, improving bit density turned out to be the best way to increase cartridge capacity.
Working with FujiFilm Corporation of Japan, my colleagues at the IBM Research facility in Zurich were able to demonstrate an incredible 29.5 Gigabits per square inch, nearly 40 times more dense than today's commercial tape technology. In the near future, we will be able to hold a 35TB tape cartridge in our hand. It actually took a lot to make this happen: improved giant magnetoresistive read/write heads, better servo patterns to stay on track, thinner tracks less than a micron across, and better signal-to-noise processing. To learn more, you can read the [Press Release] or watch this quick [4-minute YouTube video].
Last week's earthquake in Haiti reminds us all how fragile systems can be. Part of a complete Information Infrastructure is Information Security. Back in 2006, IBM [acquired Internet Security Systems]. This week, IBM announces two sets of ISS Data Security Services. These services can include assessments of your current environment, running workshops to help gather requirements, helping design security policies, and even following through with implementation.
Endpoint Data Protection
Here "endpoint" refers to laptops, desktops, PDAs and smart phones. Not surprisingly, more and more mobile employees are relying on data stored on these endpoint devices, and they need to be protected and secure. [Endpoint Data Protection services] includings software, consulting and implementation of a solution that fits your environment.
Enterprise Content Protection
Here "enterprise content" refers to data that is stored centrally, such as a data center, and accessed over one or more networks. [Enterprise Content Protection services] will evaluate the data that is most sensitive, determine the various formats, identify risks, and provide guidance on how best to protect. Software is available to identify network exits and leakage points.
Both of these services include implementation of help desk support as well. To learn more, check out the ISS [Virtual Briefing Center].
"With Cisco Systems, EMC, and VMware teaming up to sell integrated IT stacks, Oracle buying Sun Microsystems to create its own integrated stacks, and IBM having sold integrated legacy system stacks and rolling in profits from them for decades, it was only a matter of time before other big IT players paired off."
Once again we are reminded that IBM, as an IT "supermarket", is able to deliver integrated software/server/storage solutions, and our competitors are scrambling to form their own alliances to be "more like IBM." This week, IBM announced new ordering options for storage software with System x servers, including BladeCenter blade servers and IntelliStation workstations. Here's a quick recap:
Tivoli Storage Manager FastBack v6.1
IBM Tivoli Storage Manager FastBack v6.1 supports both Windows and Linux! FastBack is a data protection solution for ROBO (Remote Office, Branch Office) locations. It can protect Microsoft Exchange, Lotus Domino, DB2, and Oracle applications. FastBack can provide full volume-level recovery, as well as individual file recovery, and in some cases Bare Machine Recovery. FastBack v6.1 can be run stand-alone, or integrated with a full IBM Tivoli Storage Manager (TSM) unified recovery management solution.
FlashCopy Manager v2.1
FlashCopy Manager uses point-in-time copy capabilities, such as SnapShot or FlashCopy, to protect application data using an application-aware approach for Microsoft Exchange, Microsoft SQL Server, DB2, Oracle, and SAP. It can be used with IBM SAN Volume Controller (SVC), DS8000 series, DS5000 series, DS4000 series, DS3000 series, and XIV storage systems. When applicable, FlashCopy Manager coordinates its work with Microsoft's Volume Shadow Copy Services (VSS) interface. FlashCopy Manager can provide data protection using just point-in-time disk-resident copies, or can be integrated with a full IBM Tivoli Storage Manager (TSM) unified recovery management solution to move backup images to external storage pools, such as low-cost, energy-efficient tape cartridges.
General Parallel File System (GPFS) v3.3 Multiplatform
GPFS can support AIX, Linux, and Windows! Version 3.3 adds support for Windows Server 2008 on 64-bit chipset architectures from AMD and Intel. Now you can have a common GPFS cluster with AIX, Linux and Windows servers all sharing and accessing the same files. A GPFS cluster can have up to 256 file systems. Each of these file systems can hold up to 1 billion files and up to 1PB of data, and can have up to 256 snapshots. GPFS can be used stand-alone, or integrated with a full IBM Tivoli Storage Manager (TSM) unified recovery management solution with parallel backup streams.
For full details on these new ordering options, see the IBM [Press Release].
It's that time again to think about [New Year's resolutions]! This fine tradition dates back 4000 years to the early Babylonians, and the most popular resolution back then was to return borrowed farm equipment. Resolutions can be to work toward a specific goal, start doing something, or change your habits to do something more often, or less often, than last year. Jim Collins (cited by 37signals) suggests a [Stop Doing List]. Colin Beavan (aka [No Impact Man]) took this idea to the extreme, giving up electricity, coffee, toilet paper, and a bunch of other things for a year, in an effort to minimize his environmental impact. Before making any new resolutions, let's see how I did against last year's:
Spend more time with friends and family
This one was easy. Nearly all of my friends and family live in Tucson, so spending more time with them merely involves spending less time out of town. With the economic meltdown of 2008, IBM set down strict travel restrictions, so I only traveled 11 weeks in 2009.
Enjoy Life More
I have mixed feelings on this one. The four hardest hit areas of the current economic recession were southern Florida, southern Michigan, southern California and southern Arizona. Last year, I had friends that lost their job, their home, their business, or their battle with cancer. Trying to enjoy life while your friends are walking around like zombies after nuclear winter just doesn't feel right.
Learn Something New
I was able to keep this one, in an unexpected way. Shortly after making this resolution, I was asked to teach young kids the "C" programming language so they could program LEGO Mindstorms robots. While I already knew "C" in general, I had to learn to build the robots and program the interface for the robot "brick" in order to teach others. Sometimes, the best way to learn something new is to offer to teach it to others. This was a deeply rewarding way to give back to the community.
Make Tucson a better place, and enrich the lives of its residents
In addition to helping teach kids to build robots, I spent hundreds of hours and thousands of dollars to support local Tucson organizations this year. Did it help? It is hard to say. For example, you can spend an entire day sorting cans for the community food bank, only to learn that this will all be consumed in a matter of days. At least I will be paying less in taxes!
Get better organized
This has been an ongoing struggle, but I made progress in 2008 and 2009. Last year, I purchased a T-Mobile G1 smart phone with Google and I have been using this as my organization tool. It syncs up with my Gmail, Google Calendar, Google Contacts, Remember the Milk, Delicious, and other sites I use. It certainly works better for me than my past attempt using a [Hipster PDA].
Should people make their resolutions public? Derek Sivers cites research indicating that [announcing your plans makes you less motivated] to complete them. Given the long waits we saw between when storage vendors like EMC announce some new feature and when it is actually delivered, there might be a lot of truth to that. So, this year, I will do things differently and NOT make public any New Year's resolutions for 2010.
Instead, the new VBC was designed to supplement in-person briefings, maintained by technical folks like myself who are part of the physical briefing centers. The site has four major sections:
This section features presentations for the five product lines and eight solution areas. The voiced-over presentations are recorded as flash videos.
Dozens of subject matter experts (SME) are contributing to this site with their own voices, blogs, presentations and videos. If you have been to a briefing center recently, you might recognize some familiar faces.
Scheduling and Planning
The VBC can also help plan and schedule for online and in-person visits. Not familiar with IBM's products and solutions? The VBC can help you get familiar before your briefing. Interested in discussing more about a particular topic? The VBC can identify webinars and briefings that you can attend. Already attended an IBM briefing in person and now you want to share the excitement with your colleagues back in the office? Use the VBC to help share your knowledge with others.
Not sure which briefing center to visit? Systems and Technology Group has 13 of them worldwide, and this section describes each one.
The VBC is available worldwide to all companies looking to buy IBM products and solutions, as well as IBM Business Partners and sales reps. Check it out!
Wrapping up my coverage of the Data Center Conference 2009, the week ended with a celebration. This year we had six "Hospitality Suites" sponsored by various vendors. Each suite had its own theme, decorations and entertainment. The first suite was VMware's "Cloud 9 Ultra Lounge", which offered blue cotton candy martinis. IBM is the leading reseller of VMware.
When the red martini liquid was poured on top of the blue cotton candy, the result was a muddy brown-grey color. The guy on the left chose to get the martini without the blue cotton candy. We joked that this is perhaps a good metaphor for cloud computing in general: it looks good on paper, until you actually put it all together and realize it does not look as blue and puffy as you were expecting. However, it tasted good!
The next suite was sponsored by Cisco, one of IBM's storage networking partners. Cisco also decorated in blue, as Jake, in the middle of the photo, demonstrates.
The next suite was sponsored by Brocade, our supplier for IBM-branded networking gear. They went with a red-and-black color scheme. Sadly, many of my pictures inside involved straitjackets and unicycles, so they are not appropriate for this blog. However, it was easy to remember that they were talking about their "extraordinary networks". Makes you want to help out Brocade by contacting your nearest IBM storage sales rep and buying yourself a SAN768B or two.
Somewhere along the way, we picked up Hawaiian leis at the "Margaritaville" Hospitality Suite, compliments of sponsor APC by Schneider Electric. We had the best "Filet Mignon" appetizers at "Club Dedupe" by our competitor DataDomain, and some fun with my friends over at Computer Associates' "Top Gun" suite. Pictured at right is Paula Koziol with Christian Barrera from Argentina. A good time was had by all.
Well, I'm back in Tucson, and thought I would close out my coverage of this year's Data Center Conference 2009 with some pictures. These first few are from the Solution Showcase.
There were four stations at the IBM booth. I had the "Information Infrastructure" station; you can see here my blook (blog-based book) "Inside System Storage: Volume I" on display, a solid-state drive (in clear plexiglas to show all the chips inside), and the GUI panel for XIV.
What really stole the show was the IBM Portable Mobile Data Center (PMDC), which is a shipping crate with a fully running data center inside. In the one shown here, we had iDataPlex servers connected to an IBM XIV Storage System. Here is David Bricker striking a pose.
Inside, Monica Martinez shows off the iDataPlex servers. These are 1U servers that are only half as deep as regular servers, so you can pack 84 servers in the floorspace of 42 traditional 1U servers.
Two of these fit into a 2U chassis to share a common power supply and fan set. The trouble with traditional 1U servers is that their fans are limited to a small diameter; sharing larger 2U fans across two servers gives you much better airflow.
Monica Martinez, Ruth Weinheimer, and Tamara Rice.
Continuing my coverage of last week's Data Center Conference 2009, my last breakout session of the week was an analyst presentation on Solid State Drive (SSD) technology. There are two different classes of SSD, consumer grade multi-level cell (MLC) running currently at $2 US dollars per GB, and Enterprise grade single-level cell (SLC) running at $4.50 US dollars per GB. Roughly 80 to 90 percent of the SSD is used in consumer use cases, such as digital cameras, cell phones, mobile devices, USB sticks, camcorders, media players, gaming devices and automotive.
While the two classes are different, the large R&D budgets spent on consumer grade MLC carry forward to help out enterprise grade SLC as well. SLC means there is a single level for each cell, so each cell can only hold a single bit of data, a one or a zero. MLC means the cell can hold multiple levels of charge, each representing a different value. Typically MLC can hold 3 to 4 bits of data per cell.
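To make the SLC/MLC distinction concrete, here is a minimal Python sketch of my own (not from the analyst's presentation) relating bits per cell to the number of charge levels the flash controller must distinguish, and to the raw capacity of a hypothetical die. The cell count is a made-up round number, not a real part specification.

```python
# Illustrative only: relate bits-per-cell to charge levels and raw capacity.

def charge_levels(bits_per_cell):
    """Each extra bit per cell doubles the number of charge levels."""
    return 2 ** bits_per_cell

def raw_capacity_gb(cells, bits_per_cell):
    """Raw capacity in gigabytes for a given cell count."""
    bits = cells * bits_per_cell
    return bits / 8 / (1024 ** 3)

CELLS = 32 * 1024 ** 3   # hypothetical die; not a real product figure

for name, bits in [("SLC", 1), ("MLC (2-bit)", 2), ("MLC (3-bit)", 3), ("MLC (4-bit)", 4)]:
    print("%-12s %2d levels  %6.1f GB raw" %
          (name, charge_levels(bits), raw_capacity_gb(CELLS, bits)))
```

The extra capacity per cell is why MLC is so much cheaper per GB, and the extra charge levels are why it needs cleverer flash management to reach enterprise endurance.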
Back in 1997, SLC Enterprise Grade SSD cost roughly $7870 per GB. By 2013, Consumer Grade 4-bit MLC is expected to be only 24 cents per GB. Engineers are working on trade-offs between endurance cycles and retention periods. FLASH management software is the key differentiator, such as clever wear-leveling algorithms.
SSD is 10-15 times more expensive than spinning hard disk drives (HDD), and this price difference is expected to continue for a while. This is because of production volumes. In 4Q09, manufacturers will manufacture 50 Exabytes of HDD, but only 2 Exabytes of SSD. The analyst thinks that SSD will only be roughly 2 percent of the total SAN storage deployed over the next few years.
How well did the audience know about SSD technology?
4 percent not at all
30 percent some awareness
30 percent enough to make purchase decision
21 percent able to quantify benefits and trade-offs
15 percent experts
SSD does not change the design objectives of disk systems. We want disk systems that are more scalable and have higher performance. We want to fully utilize our investment. We want intelligent self-management similar to caching algorithms. We want an extensible architecture.
What will happen to fast Fibre Channel drives? Take out your Mayan calendar. Already 84mm 10K RPM drives are end of life (EOL) in 2009. The analyst expects 67mm and 70mm 10K drives to reach EOL in 2010, and 15K drives to reach EOL by 2012. A lot of this is because HDD performance has not kept up with CPU advancements, resulting in an I/O bottleneck. SSD is roughly 10x slower than DRAM, and some architectures use SSD as a cache extension; the IBM N series PAM II card and the Sun 7000 series are two examples.
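The "cache extension" idea can be illustrated with a toy two-tier read cache. This is a conceptual sketch of my own, not how the N series PAM II card or the Sun 7000 actually implement it; the DRAM and SSD tier sizes are arbitrary.

```python
from collections import OrderedDict

class TwoTierReadCache:
    """Toy model: a small DRAM tier in front of a larger SSD tier.
    Blocks evicted from DRAM spill into the SSD tier instead of being lost,
    so re-reads are served from flash rather than from spinning disk."""

    def __init__(self, dram_blocks, ssd_blocks):
        self.dram, self.ssd = OrderedDict(), OrderedDict()
        self.dram_blocks, self.ssd_blocks = dram_blocks, ssd_blocks

    def read(self, block):
        if block in self.dram:                  # fastest path
            self.dram.move_to_end(block)
            return "DRAM hit"
        if block in self.ssd:                   # second tier
            self.ssd.pop(block)
            self._promote(block)
            return "SSD hit"
        self._promote(block)                    # miss: fetch from HDD
        return "HDD read"

    def _promote(self, block):
        self.dram[block] = True
        if len(self.dram) > self.dram_blocks:   # spill LRU block to SSD
            victim, _ = self.dram.popitem(last=False)
            self.ssd[victim] = True
            if len(self.ssd) > self.ssd_blocks:
                self.ssd.popitem(last=False)

cache = TwoTierReadCache(dram_blocks=2, ssd_blocks=8)
for blk in [1, 2, 3, 1, 2, 3]:
    print(blk, cache.read(blk))
```

Running the little loop shows the first pass going to HDD and every re-read being served from the SSD tier, which is exactly the benefit these cache-extension cards are after.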
Let's take a look at a disk system with 120 drives, comparing the per-drive metrics of 73GB HDDs versus 32GB SSDs.
There are various use cases for SSD. These include internal DAS, stand-alone Tier 0 storage, replace or complement HDD in disk arrays, and as an extension of read cache or write cache. The analyst believes there will be mixed MLC/SLC devices that will allow for mixed workloads. His recommendations:
Use SSD to eliminate performance and throughput bottlenecks
Consolidate workloads to maximize value
Use SLAs to identify workload candidates
Evaluate emerging technologies along with established vendors
Do not expect SSD to drastically reduce power/cooling
SSD will continue to complement HDD, primarily SATA disk
Trust but verify, check out customer references offered by storage vendors
Continuing my coverage of last week's Data Center Conference 2009, I attended another "User Experience" that was very well received. This time, it was Henry Sienkiewicz of the Defense Information Systems Agency (DISA) presenting a real-world example of the business model behind a private cloud implementation. DISA is the US government agency that develops and runs software for the Army, Navy and Air Force.
Being part of the military presents its own unique set of challenges:
Acquisition of hardware to develop and test software is difficult
Budgets fluctuate so an elastic pay-for-use was desirable
End user access had to be secure and meet government regulations
It had to meet the technical requirements of being scalable, elastic, dynamic and multi-tenant, using shared resources
Using Cloud Computing simplifies provisioning, encourages the use of standards, and provides self-service. DISA has several solutions.
Rapid Access Computing Environment (RACE)
RACE is an internal private cloud with 24-hour provisioning for development and test requests, and 72 hour provisioning for production requests. The amount used is billed on a month-to-month basis, and offers a self-service portal so that developers and administrators can just pick and choose what they need. The result is a hosted server, similar to what you get from 1and1.com or GoDaddy.
Global Content Delivery Service (GCDS)
This provides long-term storage of data. An internal version of "Cloud Storage" for archive and fixed content.
Forge.mil
This provides a place to maintain source code, basically their internal version of the "SourceForge" site used by Open Source projects.
In their traditional approach, a software project would take six months to procure the hardware, another 6-12 months to code and test, and then another 6 months in certification, for a total of 18-24 months. With the new Cloud Computing approach that DISA adopted, procurement is down to 24-72 hours with RACE, code and test takes only 2-6 months with Forge.mil, and certification can be done in days on RACE, for a new total of only 3-6 months (see the quick arithmetic sketch after the challenge list below). Some challenges they found:
Service Level management and continuing the use of ITIL best practices
Balancing Military-level Security with Self-service Usability
Internal funding and chargeback; they even adopted a way for developers to pay with a credit card
Cultural inertia; developers don't like to change or do things in a different way
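Here is that quick arithmetic as a back-of-the-envelope Python sketch, using the months quoted in the session. The hours-to-months conversion and the "days" figure for certification are my own assumptions, so the totals are rough.

```python
# Back-of-the-envelope comparison of DISA's old and new project timelines,
# using the figures quoted in the session (expressed in months).

traditional = {
    "procure hardware": (6, 6),
    "code and test":    (6, 12),
    "certification":    (6, 6),
}

cloud = {
    "provision on RACE":          (24 / 720.0, 72 / 720.0),  # 24-72 hours
    "code and test on Forge.mil": (2, 6),
    "certification on RACE":      (0.1, 0.5),                # "days", rough guess
}

def total(phases):
    lo = sum(p[0] for p in phases.values())
    hi = sum(p[1] for p in phases.values())
    return lo, hi

print("traditional: %4.1f to %4.1f months" % total(traditional))
print("cloud:       %4.1f to %4.1f months" % total(cloud))
# roughly the 18-24 months versus 3-6 months quoted in the session
```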
Some lessons learned from this two-year experience:
It's a journey. Most of the user experiences for cloud adoption took two or more years to complete
Infrastructure Fundamentals continue to matter
Know your "marketplace", in this case, software development for military applications
Engage your end users early. In this case, Henry wished he had involved input from the software developers who would be using RACE, GCDS and Forge.mil earlier in the process.
Return on Value analysis; this is different from Return on Investment, as many of the benefits of cloud, like higher morale, are intangible at first
Avoid fixed costs in negotiations with vendors. For example, he cited they use a lot of IBM because of IBM's pay-for-use billing model. They pay for MIPS used on IBM mainframes, and their IBM Tivoli software pricing is usage-based.
Continuing my coverage of last week's Data Center Conference 2009, held Dec 1-4 in Las Vegas, I find some of the best sessions are those "user experiences" by the CIO or IT directors that successfully completed a project and showed the benefits and pitfalls. Matt Merchant, CTO of General Electric (GE), gave an awesome presentation on tapping Cloud Storage to reduce their backup and archive costs.
They were concerned over their lack of e-Discovery tools, the high fixed cost and large administrator personnel load of their Veritas NetBackup software environment, the possibility of corrupted tape media, new compliance and regulatory issues, and the risk of moving unencrypted cartridges to remote vaulting facilities like Iron Mountain. I found it interesting that, in their backup/archive approach, backups are re-classified as archive after they are 35 days old.
GE's Disk-to-Disk-to-Tape (D2D2T) approach was costing them 50 cents per GB/month. Changing to D2D with remote replication addressed some of their concerns over tape, but was more costly at 79 cents per GB/month. Given that backup and archive represent 30 percent of their IT budget, the largest non-application expense, they reviewed their options:
Continue with their Traditional BU/Archive approach
Adopt Internal DAS using cheaper SATA disk drives
Implement an Internal Cloud
Use External Cloud services
General Electric had a long list of requirements:
99.99 percent Availability
99.999 percent reliability and data integrity
Location independent access
Meets HIPAA, SAS70, PCI compliance requirements
Secure 3rd party access
Eliminate GE operations management personnel
Large file size uploads and resumable uploads (GE owns NBC Universal and some files are very large, movies can be 1.5 TB in size)
Encryption at rest
Multi-node capable, in other words, GE uploads it once and the Cloud Storage provider ensures that it is stored in two or more designated locations.
Child-level billing/management. Here child relates to department, division or other sub-division for reporting and management purposes.
Data integrity verification, such as with MD5 hash codes
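That last requirement, data integrity verification with MD5 hash codes, works along the lines of the following sketch. This is a generic illustration of the idea, not GE's or Nirvanix's actual API; the upload_func placeholder stands in for whatever cloud storage client is in use and is assumed to return the MD5 the provider computed on its copy.

```python
import hashlib

def md5_of_file(path, chunk_size=1024 * 1024):
    """Compute the MD5 digest of a file in chunks, so very large
    media files never have to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verified_upload(path, upload_func):
    """Upload a file and compare the provider-reported MD5 against the
    locally computed one. 'upload_func' is a hypothetical placeholder
    for the cloud storage client's upload call."""
    local_md5 = md5_of_file(path)
    remote_md5 = upload_func(path)
    if remote_md5 != local_md5:
        raise IOError("integrity check failed for %s: %s != %s"
                      % (path, local_md5, remote_md5))
    return local_md5
```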
GE evaluated Nirvanix, Amazon S3 and EMC and chose Nirvanix. They found Cloud storage worked best for backup, archive and large files, but was not a good fit for production/transactional data. However, they were not happy with proprietary APIs and vendor lock-in, so they wrote their own internal "Data Mover" called CloudStorage Manager that works with five different cloud storage providers through an abstraction layer. It can handle uploads of up to 8.8 GB per minute, and has a policy engine that does encryption, compression, and single-instance storage (data deduplication at the file level). Some lessons learned include:
Challenge the skeptics
Run small pilot projects to get familiar with the technology and provider
Socialize (have a beer or coffee with) your Security and Legal teams early and often
Consider using multiple cloud providers
Test many different scenarios
The end result? They now have Cloud-based backups and archive for their GE Corp, NBC Universal and GE Asset Management divisions running at only 32 cents per GB/month, representing a 40-60 percent savings over their previous methods. This includes backups of their external Web sites, archives of their digital and production assets, RMAN backups including development/staging databases. They plan to add out-of-region compliance archive in 2010. They also plan to monetize their intellectual property by offering "CloudStorage Manager" as a software offering for others.
Continuing my coverage of last week's Data Center Conference 2009, held Dec 1-4 in Las Vegas, I attended an interesting session related to the battles between Linux, UNIX, Windows and other operating systems. Of course, it is no longer just a battle between general purpose operating systems; there are also thin appliances and "Meta OS" layers such as cloud or Real Time Infrastructure (RTI).
One big development is "context awareness". For the most part, Operating Systems assume they are one-to-one with the hardware they are running on, and Hypervisors like PowerVM, VMware, Xen and Hyper-V have worked by giving OS guests the appearance that this is the case. However, there is growing technology for OS guests to be "aware" they are running as guests, and to be aware of other guests running on the same Hypervisor.
The analyst divided up Operating Systems into three categories:
Web/Infrastructure
Operating systems that are typically used to support other OS by offering Web support or other infrastructure. Linux on POWER was an example given.
DBMS/Industry Vertical Applications
Operating systems that are strong for Data Base Management Systems (DBMS) and vertical industry applications. z/OS, AIX, HP-UX, HP NonStop, HP OpenVMS were given as examples.
General Purpose for a variety of applications
Operating systems that can run a range of applications, from Web/Infrastructure, DBMS/Vertical Apps, to others. Windows, Linux x86 and Solaris were offered as examples.
The analyst indicated that what really drove the acceptance or decline of Operating Systems were the applications available. When Software Development firms must choose which OS to support, they typically have to evaluate the different categories of marketplace acceptance:
For developing new applications: Windows-x86 and Linux-x86 are must-haves now
Declining but still valid are UNIX-RISC and UNIX-Itanium platforms
Viable niches are non-x86 Windows (such as Windows-Itanium) and non-x86 Linux (Linux on POWER, Linux on System z)
Entrenched Legacy including z/OS and IBM i (formerly known as i5/OS or OS/400)
For the UNIX world, there is a three-legged stool. If any leg breaks, the entire system falls apart.
The CPU architecture: Itanium, SPARC and POWER based chipsets
Operating System: AIX, HP-UX and Solaris
Software stacks: SAP, Oracle, etc.
Of these, the analyst considers IBM POWER running AIX to be the safest investment. For those who prefer HP Integrity, consider waiting for the "Tukwila" project, which will introduce a new Itanium chipset in 2Q2010. For Sun SPARC, the European Union (EU) delay in approving the Oracle-Sun deal could impact user confidence in this platform. The future of SPARC now rests in the hands of Fujitsu and Oracle.
What platform will the audience invest in most over the next 5 years?
45 percent Windows
14 percent UNIX
37 percent Linux
4 percent z/OS
A survey of the audience about current comfort level of Solaris:
10 percent: still consider Solaris to be Strategic for their data center operations and will continue to use it
25 percent: will continue to use Solaris, but in more of a tactical way on a case-by-case basis
30 percent: have already begun migrating away
35 percent: Do not run Solaris
The analyst mentioned Microsoft's upcoming Windows Server 2008 R2, which will run only on 64-bit hardware but support both 32-bit and 64-bit applications. It will provide scalability up to 256 processor cores. Microsoft wants Windows to get into the High Performance Computing (HPC) marketplace, but this is currently dominated by Linux and AIX. The analyst's advice to Microsoft: System Center should manage both Windows and Linux.
Has Linux lost its popularity? The analyst indicated that companies are still running mission-critical applications on non-Linux platforms, primarily z/OS, Solaris and Windows. What helps Linux are old UNIX legacy applications, the existence of OpenSolaris x86, Oracle's Enterprise Linux, VMware and Hyper-V support for Linux, Linux on the System z mainframe, and other legacy operating systems that are growing obsolete. One issue cited with Linux is scalability: performance on systems with more than 32 processor cores is unpredictable, while more mature operating systems like z/OS and AIX have stronger support for high-core environments.
A survey of the audience on which Linux or UNIX OS was most strategic to their operations resulted in the following weighted scores:
140 points: Red Hat Linux
80 points: Solaris
71 points: AIX
41 points: Novell SUSE Linux
40 points: HP-UX
29 points: Other
19 points: Oracle Enterprise Linux
The analyst wrapped up with an incredibly useful chart that summarizes the key reasons companies migrate from one OS platform to another: reducing costs, adopting HPC, supporting DBMS and complex projects, availability of administrator skills, performance for mission-critical applications, availability of applications, leaving an incumbent UNIX server vendor, and consolidation.
Certainly, all three types of operating system have a place, but there are definite trends and shifts in this marketspace.
Continuing my coverage of the Data Center Conference 2009, held Dec 1-4 in Las Vegas, the title of this session refers to the mess of "management standards" for Cloud Computing.
The analyst quickly reviewed the concepts of IaaS (Amazon EC2, for example), PaaS (Microsoft Azure, for example), and SaaS (IBM LotusLive, for example). The problem is that each provider has developed their own set of APIs.
(One exception was [Eucalyptus], which adopts the Amazon EC2, S3 and EBS style of interfaces. Eucalyptus is an open-source infrastructure whose name stands for "Elastic Utility Computing Architecture Linking Your Programs To Useful Systems". You can build your own private cloud using the Cloud APIs included in Ubuntu Linux 9.10 "Karmic Koala", termed Ubuntu Enterprise Cloud (UEC). See the instructions in the InformationWeek article [Roll Your Own Ubuntu Private Cloud].)
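Because Eucalyptus keeps the Amazon EC2-style API, the same client libraries can be pointed at a private UEC cloud. The sketch below uses the boto library against a made-up endpoint; the host name, port, path and EMI image id are all hypothetical, and your UEC front end's connection details may differ.

```python
# A minimal sketch, assuming a UEC front end at cloud.example.com and
# credentials downloaded from its web interface. All values are placeholders.
from boto.ec2.connection import EC2Connection
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="eucalyptus", endpoint="cloud.example.com")
conn = EC2Connection(aws_access_key_id="YOUR-ACCESS-KEY",
                     aws_secret_access_key="YOUR-SECRET-KEY",
                     is_secure=False,
                     region=region,
                     port=8773,
                     path="/services/Eucalyptus")

# List the images registered in the private cloud, then start one instance.
for image in conn.get_all_images():
    print(image.id, image.location)

reservation = conn.run_instances("emi-12345678", instance_type="m1.small")
print("started", reservation.instances[0].id)
```

The point is portability: the same script, with a different endpoint, works against Amazon EC2 itself.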
The analyst went into specific Virtual Infrastructure (VI) and public cloud providers.
Private Clouds can be managed by VMware tools. For remote management of public IaaS clouds, there is [vCloud Express], and for SaaS, a new service called [VMware Go].
Citrix is the Open Service Champion. For private clouds based on XenServer, they have launched the [Xen Cloud Project] to help with management. For public clouds, they have [Citrix Cloud Center, C3], including an Amazon-based "Citrix C3 Labs" for developing and testing applications. For SaaS, they have [GoToMyPC] and [GoToAssist].
Amazon offers a set of Cloud computing capabilities called Amazon Web Services [AWS]. For virtual private clouds, use the AWS Management Console. For IaaS (Amazon EC2), use [CloudWatch] which includes Elastic Load Balancing.
If you prefer a common management system independent of cloud provider, or perhaps across multiple cloud providers, you may want to consider one of the "Big 4" instead. These are the top four system management software vendors: IBM, HP, BMC Software, and Computer Associates (CA).
A survey of the audience found the number one challenge was "integration": how to integrate new cloud services into an existing traditional data center. Who do attendees trust to deliver tools for remote management of external cloud services? The survey showed:
28 percent: VI Providers (VMware, Citrix, Microsoft)
19 percent: Big 4 System Management software vendors (IBM, HP, BMC, CA)
13 percent: Public cloud providers (Amazon, Google)
40 percent: Other/Don't Know
For internal, private, on-premises Clouds, the results were different:
40 percent: VI Providers (VMware, Citrix, Microsoft)
21 percent: Big 4 System Management software vendors (IBM, HP, BMC, CA)
13 percent: Emerging players (Eucalyptus)
26 percent: Other/Don't Know
Some final thoughts offered by the analyst. First, nearly a third of all IT vendors disappear after two years, and cloud providers will probably have a similar, if not worse, track record. Traditional server, storage and network administrators should not consider Cloud technologies as a death knell for in-house, on-premises IT. Companies should probably explore a mix of private and public cloud options.
Continuing my coverage of the Data Center Conference, held Dec 1-4 in Las Vegas, an analyst presented the challenges of managing the rapid growth in storage capacity. Administrators' ability to manage storage is not keeping up with the growth. His recommendations:
Aim to just meet but not exceed service level agreements (SLAs)
Revisit past IT decisions. This includes evaluating your SAN to NAS ratio.
Embrace new technologies when they are effective; these include cloud storage, solid state drives, and interconnect technologies like FCoCEE.
Follow vendor management best practices, update your vendor "short list".
A survey of the audience found:
20 percent have a single external storage vendor
39 percent have two external storage vendors
18 percent have three external storage vendors
23 percent have four or more external storage vendors
Throughout the industry, storage vendors are following IBM's example of using commodity hardware parts. This is because custom ASICs are expensive, and changes take a minimum of three months development time. Software-based implementations can be updated more quickly.
In terms of the technologies deployed, namely SAN, NAS, Compliance Archive (such as the IBM Information Archive), and Virtual Tape Library (VTL, such as the IBM TS7650 ProtecTIER data deduplication solution), here was the survey of the audience:
8 percent: SAN only
14 percent: SAN and NAS
23 percent: SAN, NAS and Compliance Archive
9 percent: SAN and VTL
14 percent: SAN, NAS and VTL
32 percent: SAN, NAS, Compliance Archive and VTL
Cost reduction techniques include thin provisioning, compression, data deduplication, Quality of Service tiers, and archiving. To reduce power and cooling requirements, switch from FC to SATA disk wherever possible, and move storage out of the data center, such as onto tape cartridges or cloud storage.
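Of these techniques, file-level data deduplication (single-instance storage) is easy to illustrate. Here is a toy Python sketch of my own that keeps one copy of identical files and lets duplicates reference it; real products work at the block or variable-chunk level and are far more sophisticated.

```python
import hashlib, os

class SingleInstanceStore:
    """Toy single-instance store: identical files are kept once,
    duplicates only add a reference to the existing copy."""

    def __init__(self):
        self.objects = {}       # content hash -> file bytes
        self.catalog = {}       # logical name -> content hash
        self.logical_bytes = 0

    def ingest(self, name, data):
        key = hashlib.sha1(data).hexdigest()
        self.logical_bytes += len(data)
        if key not in self.objects:    # first time we see this content
            self.objects[key] = data
        self.catalog[name] = key

    def physical_bytes(self):
        return sum(len(d) for d in self.objects.values())

store = SingleInstanceStore()
payload = os.urandom(1024)
for i in range(10):                    # ten backups of the same file
    store.ingest("backup-%d/report.doc" % i, payload)

print("logical:", store.logical_bytes, "physical:", store.physical_bytes())
# -> logical: 10240, physical: 1024, a 10:1 reduction on this toy data
```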
For emerging technologies, the following survey:
16 percent have already implemented a new emerging technology (IBM XIV, Pillar, 3PAR, etc.)
30 percent plan to do so in 12-24 months
4 percent plan to do so in 24-48 months
50 percent have no plans, and will continue to stick with traditional storage technologies
As for adopting Cloud storage, here was the survey:
14 percent already have
31 percent plan to use Cloud storage in 12-24 months
13 percent plan to use Cloud storage in 24-48 months
42 percent have no plans to adopt Cloud storage
My take-away from this is that many companies are still "exploring" the different options available to them. Fortunately, IBM offers a broad portfolio of complete end-to-end solutions, making it possible to acquire the right mix of technologies optimized for your workloads.
Continuing my coverage of the Data Center Conference 2009, we had a keynote session on Wednesday, Dec 2 (Day 3) that focused on the key technologies to watch for the data center.
It seems like every session this week mentioned Cloud Computing. It is service-based, scalable and elastic both upwards and downwards, uses shared resources and internet standards, and can be metered by use. There are three focal points related to Cloud Computing:
Consuming Cloud Services offered by other providers
Developing cloud-enabled applications and solutions
Implementing an internal "Private Cloud"
The analyst used the term "service boundary" to distinguish between the IaaS, PaaS and SaaS cloud service models. For those still confused, here is how I explain Cloud Computing, using the analogy of transportation as an example.
Traditional IT (buy your own)
You buy a car to get around town. You need to have a driver's license, carry liability insurance, and have a place to park your vehicle. You get to pick the make, model and color. You need to come up with thousands of dollars up front, or arrange some form of financing for monthly payments. It could take days or weeks to purchase, as you test drive different ones, research online, and check out feature comparisons between car dealers. You can drive wherever you want, whenever you want.
The same is done in the data center, you buy servers, storage and network gear, build a data center floor to hold it all, and hire server, storage and network administrators to manage it.
Infrastructure as a Service (IaaS)
You rent a car from a local Car Rental Agency. You still need a driver's license and liability insurance, but often you can buy the insurance just for the days or weeks that you are renting the car. You have limited choices of make, model and color. You don't need thousands of dollars, just enough to cover the daily or weekly rate. The rental process can be done in minutes.
IaaS providers have their own data centers, so you don't need your own. They can rent you floorspace and equipment on a monthly basis. Your server, storage and network administrators manage these remotely. Your OS choices are limited to the types of hardware available, typically x86 servers, SAN and NAS storage.
Platform as a Service (PaaS)
You take a taxi. Since you are not driving, you do not need a driver's license or liability insurance. The vehicle is typically a yellow four-door sedan. You don't need thousands of dollars, just enough to cover the ride, often metered by the distance traveled. Getting a taxi takes minutes, just a matter of calling the cab company, or hailing one streetside. Depending on the cab company, you can tell the taxi driver where to go, how to get there, and that you are in a hurry.
PaaS providers have data centers with servers, storage and networking gear. Your options are often Linux or Windows with some middleware, web serving and database software already running. You may still need some of your own server, storage and network admins to manage things remotely. Usage is metered; you pay for the bandwidth, CPU and storage used. A typical rate for Cloud Storage, for example, is 25 cents per GB per month.
Software as a Service (SaaS)
You take public transportation, like the subway. You are not driving, so no need for license or insurance. The vehicle holds hundreds of passengers, and you have no options on the make, model or color. You only need enough to cover the cost of the ticket, which is often based on the distance traveled. You have to get to the subway station nearest you, and it takes you to the subway station nearest your eventual destination, so other forms of transportation may be required if this does not completely meet your requirements.
SaaS providers offer you the application already running in their data center on their servers. You are charged per employee per month that uses this application. You won't need server, storage or network administrators, but you might need your own software developers to customize the application, or compensate for its lack of functionality with surrounding applications if it does not exactly meet your needs. Google Gmail and IBM LotusLive are two examples of this.
Virtualization for Availability and Business Continuity
No surprise here: virtualization has proven quite useful for improving both high availability and continuous operations within the data center, as well as for multiple-site configurations for disaster recovery and business continuity. P-to-V refers to the concept of running applications on physical servers at the primary location, but having them as virtual servers under VMware or Hyper-V at the secondary disaster recovery location to minimize the cost of standby equipment.
Reshaping the Data Center
Data Center facilities design is going modular, with design for server/storage/network "pod" and contained "power zones".
IT for Green
This is not making the IT department itself more environment-friendly, but using IT to make the entire company more environment-friendly, including using sensors to monitor input and output, reduce carbon footprint and monitor energy consumption per employee.
Virtual Desktop Infrastructure (VDI) is changing the way employees use IT services. Rather than having to maintain a full OS and application stack on each employee's PC, using VDI and browser-based applications can help centralize and take back control, minimizing help desk costs.
Business Intelligence and Operational Analytics is taking off. In the past, decision support systems were limited to just the highest levels of executives and analysts that work for them, but now the technology is reaching a broader portion of the company, allowing knowledge workers to have more information to make better business decisions. We have seen this transition from employees working off fixed rules of thumb that apply to all situations, to decisions supported by market data, to now a more predictive analysis.
FLASH memory (Solid State Drives, SSD)
Solid State Drives and advances in memory will impact the storage world in the data center, much as it has in consumer electronics.
Reshaping the Server
This last prediction seemed far-fetched. The analyst felt that we will begin to see server components separated into CPU, memory and I/O support, so that you can seamlessly add or remove each from running servers. Some of this has happened with blade servers, where some components are shared by multiple servers and are hot-swappable.
Certainly, an interesting list of technologies to watch.
Continuing my coverage of the Data Center Conference, Dec 1-4, 2009 here in Las Vegas, this post focused on data protection strategies.
Two analysts co-presented this session which provided an overview of various data protection techniques. A quick survey of the audience found that 27 percent have only a single data center, 13 percent have load sharing of their mission critical applications across multiple data centers, and the rest use a failover approach to either development/test resources, standby resources or an outsourced facility.
There are basically five ways to replicate data to secondary locations:
Array-based replication. Many high-end disk arrays offer this feature. IBM's DS8000 and XIV both have synchronous and asynchronous mirroring. Data Deduplication can help in this regard to reduce the amount of data transmitted across locations.
NAS-based replication. I consider this just another variant of the first, but this can be file-based instead of block-based, and can often be done over the public internet rather than dark fiber.
Network-based replication. This is the manner that IBM SAN Volume Controller, EMC RecoverPoint, and others can replicate. The analysts liked this approach as it was storage vendor-independent.
Host-based replication. This is often done by the host's Operating System, such as through a Logical Volume Manager (LVM) component.
Application/Database replication. There are a variety of techniques, including log shipping of transactions, SQL replication, and active/active application-specific implementations.
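For the last technique, log shipping, the basic idea can be sketched in a few lines of Python: the primary appends every committed transaction to a log, the log is periodically shipped to the secondary, and the secondary replays it, staying within the recovery point objective. This is a conceptual toy of my own, not any vendor's implementation.

```python
class Primary:
    def __init__(self):
        self.data = {}
        self.log = []            # committed but not yet shipped transactions

    def commit(self, key, value):
        self.data[key] = value
        self.log.append((key, value))

    def ship_log(self):
        """Return and clear the pending log entries, as if sending a
        log file across the wire to the secondary site."""
        batch, self.log = self.log, []
        return batch

class Secondary:
    def __init__(self):
        self.data = {}

    def apply(self, batch):
        for key, value in batch:
            self.data[key] = value

primary, secondary = Primary(), Secondary()
primary.commit("acct-1001", 250.00)
primary.commit("acct-2002", 975.50)
secondary.apply(primary.ship_log())     # runs on a schedule tied to the RPO
print(secondary.data == primary.data)   # True once the log has been applied
```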
The analysts felt that "DR Testing" has become a lost art. People are just not doing it as often as they should, or not doing it properly, resulting in surprises when a real disaster strikes.
A question came up about the confusion between "Disaster Recovery Tiers" and the Uptime Institute's "Data Center Facilities Tiers". I agree this is confusing. Many clients refer to their most mission-critical applications as Tier 1, less critical as Tier 2, and least critical as Tier 3. In 1983, the IBM user group GUIDE came up with "Business Continuity Tiers", where Tier 1 was the slowest recovery from manual tape, and Tier 7 was the fastest recovery with completely automated site, network, server and storage failover. However, for Data Center facility tiers, Uptime has the simplest, least available (99.3 percent uptime) data center as Tier 1, and the most advanced, redundant, highest-availability (99.995 percent) data center as Tier 4. This just goes to show that when one person starts using "Tier 1" or "Tier 4" terminology, it can be misinterpreted by others.
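To make the numbering clash concrete, here is a small Python snippet contrasting the two schemes. The one-line descriptions are paraphrased from the paragraph above, so treat them as reminders rather than official definitions.

```python
# Two "tier" vocabularies that point in opposite directions.

business_continuity_tiers = {   # GUIDE style: higher number = faster recovery
    1: "manual recovery from tape (slowest)",
    7: "fully automated site, network, server and storage failover (fastest)",
}

uptime_facility_tiers = {       # Uptime Institute: higher number = more available
    1: "simplest facility, about 99.3% uptime",
    4: "fully redundant facility, about 99.995% uptime",
}

# So "Tier 1" can mean either the least capable data center facility or the
# slowest business-continuity recovery method; always ask which scheme is meant.
print(business_continuity_tiers[1])
print(uptime_facility_tiers[1])
```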
This week several IBM executives will present at the 28th Annual Data Center Conference here in Las Vegas. Here is a quick recap:
Steve Sams: Data Center Cost Saving Actions Your CFO Will Love
A startling 78 percent of today's data centers were built in the last century, before the "dot com" era and the adoption of high-density blade servers. IBM Vice President of Global Site and Facility Services, Steve Sams, presented actions that can help extend the life of existing data centers, help rationalize the infrastructure across the company, and design a new data center that is flexible and responsive to changing needs.
In one example, an 85,000 square foot datacenter in Lexington had reached 98 percent capacity based on power/cooling requirements. They estimated it would take $53 million US dollars to either upgrade the facility or build a new facility to meet projected growth. Instead, IBM was able to consolidate servers six-to-one, an 85 percent reduction. IBM also made changes to the cooling equipment, redirected airflow and changed out the tiles, re-oriented the servers for more optimal placement, and implemented measurement and management tools. The end result? The facility now has eight times the compute capability and enjoys 15 percent headroom for additional growth. All this for an investment of only $1.5 million US dollars, instead of $53 million.
IBM builds hundreds of data centers for clients large and small. In addition to the "Portable Modular Data Center" (PMDC) shipping container on display at the Solution Showcase, IBM offers the "Scalable Modular Data Center", a turn-key system with a small 500 to 2500 square foot size for small customers. For larger deployments, the "Enterprise Modular Data Center" offers standardized deployments in 5000 square foot increments. IBM also offers "High Density Zones", which can be a perfect way to avoid a full site retrofit.
Helene Armitage: IT-wide Virtualization
Helene is IBM General Manager of the newly formed IBM System Software division. A smarter planet will require more dynamic infrastructures, which is IBM's approach to helping clients through the virtualization journey. The virtualization of resources, workloads and business processes will require end-to-end management. To help, IBM offers IBM Systems Director.
Helene indicated that there are four stages of adoption:
Physical consolidation - VMware and Hyper-V are the latest examples of running many applications on fewer physical servers. Of course, IBM has been doing this for decades with mainframes, and has had virtualization on System i and System p POWER systems as well. A quick survey of the audience found that about 20 percent were doing server virtualization on non-x86 platforms (for example, PowerVM or System z mainframe z/VM)
Pools of resources - SAN Volume Controller is an example solution to manage storage as a pool of disparate storage resources. Supercomputers manage pools of servers.
Integrated Service Management - in the past, resources were managed by domain, resulting in islands of management. Now, with IBM Systems Director, you can manage AIX, IBM i, Linux and Windows servers, including non-IBM servers running Linux and Windows.
Service management can provide monitoring, provisioning, service catalog, self-service, and business-aligned processes.
Cloud computing - Helene agreed that not everyone will get to this stage. Some will adopt cloud computing, whether public, private or some kind of hybrid, and others may be fine at stage 3.
For those clients that want assistance, IBM offers three levels of help:
Help me decide what is best for me
Help me implement what I have decided to do
Help me manage and run my operations
With IBM's compelling vision for the future, best of breed solutions, leadership in management software, extensive experience in services, and solid business industry knowledge, it makes sense to tap IBM to help with your next IT transformation.
Jeff Garten, a professor of International Trade at the Yale School of Management, covered the Post-Crisis Global Economy. How well did the world's governments do? Here was his "scorecard" of the five "R's":
Jeff gives world governments an "A", pumping about $20 trillion US dollars onto the world stage to stave off the worst impacts.
Jeff gives an "I" (Incomplete). Not quite an "F" as government regulations just have not been adopted to address situations like this.
Jeff gives this one an "I" also. The major imbalance is the US borrowing so much from China, and China keeping its currency artificially low.
Jeff gives this a "B". Banks and other financial services firms have changed the way they do business and have taken some corrective actions on their own, often because of strings attached to bailout funds.
Jeff gives this one a "C+", in that he is not hopeful for a quick recovery. Economists have five recovery models. A quick recovery has a "V" shape. A slower full recovery has a "U" shape. Some recoveries have premature upticks followed by a second crash, representing a "W" shape. Japan still has not recovered from its crash of the last decade, like an "L" shape. Jeff feels that the United States will probably have a "reverse J", where it looks like a slow "U"-shaped recovery over the next three years, but we never get back to our original prominence.
Jeff did not give the impression the worst was over. Rather, he felt there were still problems ahead, banks are still carrying a lot of bad debt and real estate industry may take a while to recover. He feels the era of a dollar-centric world that started circa 1945 is over, and that the dollar will continue to decline for several decades. Replacing this will be a combination of the Euro, Japanese Yen and Chinese Yuan.
What can we look forward to? There is a definite shift to Asia and other large emerging markets like Brazil. The "Global Commons" like food and energy are under severe stress. Global rules will go into a sort of remission. A resurgence of national governments acting to protect their citizens is underway. Finally, there will be a return of industrial policy.
Continuing my week in Las Vegas for the Data Center Conference 2009, I attended a keynote session on Service Management. There were two analysts that co-presented this session.
One analyst is the wife of a real CEO, and the other is the wife of a real CIO, so the two were well placed to explain the language gap between IT and business. They used the analogy of a clock: the business is concerned that the time shown on the front face is correct and ticking along properly, while behind the scenes the gears of the clock represent IT, finance, supply chain and other operations.
Based on recent surveys, there is a 45 percent "alignment" between the goals of a CEO and the goals of a CIO. CEOs are concerned about decision making, workforce productivity, and customer satisfaction. CIOs, on the other hand, are worried about costs, operations and change initiatives. Both CEOs and CIOs are focused on innovations that can improve business processes. Service management strives to bridge the language gap between business and IT by helping to drive operational excellence that benefits both CEO and CIO goals. Recent surveys found the key drivers for this are controlling costs, improving customer satisfaction, availability, agility, and making better business decisions.
Unfortunately, in this economy, the idea of "transformation" is out, and "restructuring" is in. In much the same way that employees have abandoned career development in favor of simple job preservation, companies are focused on tactical solutions to get through this financial meltdown, rather than launching transformation projects like deploying Service Management tools.
How much influence does the CIO have on running the rest of the business? Various surveys have found the following, ranked from most influential to least:
5-9 percent, Enterprise Leader
15-18 percent, Trusted Ally
25-32 percent, Partner
27-35 percent, Transactional
7-20 percent, At Risk
Those in the bottom rank not only have little or no influence, but are at risk of losing their jobs. Evaluations based on a maturity model find many I&O operations in trouble: 11 percent are taking some pro-active measures, 59 percent are committed to improvement, and 30 percent are aware of the problems.
IT Service Management tries to bring a similar discipline as Portfolio Management and Application Lifecycle Management. Why can't IT be treated like any other part of the business portfolio? What is the business value of IT? IT can help a business run, grow and even transform. IT can help consolidate and centralize shared services to help leverage resources and offer cost optimizations not just for itself, but for the business as a whole.
CIOs that can adopt IT Service Management can have a "Jacks or Better" chance for a seat at the executive table to help drive the business forward.
This week I am at the Data Center Conference 2009 in Las Vegas. There are some 1700 people registered this year for this conference, representing a variety of industries like Public sector, Services, Finance, Healthcare and Manufacturing. A survey of the attendees found:
55 percent are at this conference for the first time.
18 percent once before, like me
15 percent two or three times before
12 percent four or more times before
Plans for 2010 IT budgets were split evenly, one third planning to spend more, one third planning to spend about the same, and the final third looking to cut their IT budgets even further than in 2009. The biggest challenges were Power/Cooling/Floorspace issues, aligning IT with Business goals, and modernizing applications. The top three areas of IT spend will be for Data Center facilities, modernizing infrastructure, and storage.
There are six keynote sessions scheduled, and 66 breakout sessions for the week. A "Hot Topic" was added on "Why the marketplace prefers one-stop shopping" which plays to the strengths of IT supermarkets like IBM, encourages HP to acquire EDS and 3Com, and forces specialty shops like Cisco and EMC to form alliances.
Day 2 began with a series of keynote sessions. Normally when I see "IO" or "I/O", I immediately think of input/output, but here "I&O" refers to Infrastructure and Operations.
Business Sensitivity Analysis leads to better I&O Solutions
The analyst gave examples from Alan Greenspan's biography to emphasize his point that what this financial meltdown has caused is a decline in trust. Nobody trusts anyone else. This is true between people, companies, and entire countries. While the GDP declined 2 percent in 2009 worldwide, it is expected to grow 2 percent in 2010, with some emerging markets expected to grow faster, such as India (7 percent) and China (10 percent). Industries like Healthcare, Utilities and Public sector are expected to lead the IT spend by 2011.
While IT spend is expected to grow only 1 to 5 percent in 2010, there is a significant shift from Capital Expenditures (CapEx) to Operational Expenses (OpEx). OpEx represented only 64 percent of the IT budget in 2004, but today represents 76 percent and is still growing. Many companies are keeping their aging IT hardware in service longer, beyond traditional depreciation schedules. The analyst estimated over 1 million servers were kept longer than planned in 2009, and another 2 million will be kept longer in 2010.
An example of hardware kept too long was the November 17 delay of some 2,000 flights in the United States, caused by a failed router card in Utah that was part of the air traffic control system. Modernizing this system is estimated to cost $40 billion US dollars.
Top 10 priorities for the CIO were Virtualization, Cloud Computing, Business Intelligence (BI), Networking, Web 2.0, ERP applications, Security, Data Management, Mobile, and Collaboration. There is a growth in context-aware computing, connecting operational technologies with sensors and monitors to feed back into IT, with an opportunity for pattern-based strategy. Borrowing a concept from the military, "OpTempo" allows a CIO to speed up or slow down various projects as needed. By seeking out patterns, developing models to understand those patterns, and then adapting the business to fit those patterns, a strategy can be developed to address new opportunities.
Infrastructure and Operations: Charting the course for the coming decade
This analyst felt that strategies should not just be focused looking forward, but also look left and right, what IBM calls "adjacent spaces". He covered a variety of hot topics:
65 percent of the energy used to run x86 servers accomplishes nothing, with the average x86 server running at only 7 to 12 percent CPU utilization.
Virtualization of servers, networks and storage is transforming IT into one big logical system image, which plays well with Green IT initiatives. He joked that this is what IBM offered 20 years ago with mainframe "Single System Image" sysplexes, and that we have come around full circle.
One area of virtualization is desktop images (VDI). This goes back to the benefits of the green-screen 3270 terminals of the mainframe era, eliminating the headaches of managing thousands of PCs and instead having thin clients rely heavily on centralized services.
The deluge of data continues, as more convenient access drives demand for more data. The analyst estimates storage capacity will increase 650 percent over the next five years, with over 80 percent of this being unstructured data. Automated storage tiering, a la the Hierarchical Storage Manager (HSM) of the mainframe era, is once again popular, along with new technologies like thin provisioning and data deduplication.
IT is also being asked to do complex resource tracking, such as power consumption. In the past IT and Facilities were separate budgets, but that is beginning to change.
The fastest growing social network was Twitter, with 1382 percent growth in 2009; 69 percent of the new users that joined this year were 39 to 51 years old. By comparison, Facebook only grew by 249 percent. Social media is a big factor both inside and outside a company, and management should be aware of what the Tweets, blogs, and others in the collective are saying about you and your company.
The average 18 to 25 year old sends out 4,000 text messages per month. In 24 hours, more text messages are sent out than there are people on the planet (6.7 billion). Unified Communications is also getting attention; this is the idea that all forms of communication, from email to texts to voice over IP (VoIP), can be managed centrally.
Smart phones and other mobile devices are changing the way people view laptops. Many business tasks can be handled by these smaller devices.
It costs more in energy to run an x86 server for three years than it costs to buy it. The idea of blade servers and componentization can help address that.
Mashups and Portals are an unrecognized opportunity. An example of a Mashup is mapping a list of real estate listings to Google Maps so that you can see all the listings arranged geographically.
Lastly, Cloud Computing will change the way people deliver IT services. Amusingly, the conference was playing "Both Sides Now" by Joni Mitchell, which has the [lyrics about clouds].
Unlike other conferences that clump all the keynotes at the beginning, this one spreads the "Keynote" sessions out across several days, so I will cover the rest over separate posts.
This week I am blogging from beautiful Caesars Palace hotel in Las Vegas, Nevada to report on what I see and hear at the
28th annual Data Center Conference. Today was simply registration, which opened at 4pm, and I was able to get my conference backpack, badge, and details of the week.
Already, I can tell there will be more people here, and it looks like the economy is on the rebound versus last year. I also blogged from this conference when I attended 12 months ago, in 2008.
This year, we will have the IBM Portable Modular Data Center (PMDC) with XIV and iDataPlex inside, as well as several subject matter experts joining me at the solution center. Look for us in the "Hunter Green" shirts.
I almost sprayed coffee all over my screen when I read a post from fellow blogger Mark Twomey of EMC on his StorageZilla blog titled [Dead End]. In it he implies that you should only consider storage technologies based on x86 processors such as those from Intel, not other CPU technologies like POWER or MIPS.
When IBM first came out with the SAN Volume Controller in 2003, we were able to show that adding Intel-based SVC nodes can improve the performance and functionality of POWER-based DMX boxes from EMC. EMC salesmen often retorted with "Yes, but do you really want to risk your mission-critical data going through an Intel-based processor solution?" This FUD implied that Intel had a bad reputation for quality and reliability. The original Symmetrix was based on Motorola 68000 processors, but they modernized to use IBM's POWER chips in later models. EMC's previous attempt to use Intel technology was the EMC Invista, a commercial failure. It is no surprise then that EMC DMX customers are scared to death to move their mission-critical data over to the Intel-based V-Max.
I have found the primary reason people fear Intel-based solutions is their experience with poorly-written Windows programs. There were enough of these poorly-written Windows programs that everyone has either personal experience, or knows someone who has, and that was enough.
It reminds me of the time I was in Vac, Hungary, giving a lab tour to a set of prospective clients where we manufacture the DS8000 series and SAN Volume Controller. Rows and rows of beautiful Hungarian women sliding disk drives in place, and big hefty Hungarian beefcake moving the finished units to their appropriate places. The head of the facility explained all about the hardware technology, how we check and double check all of the equipment individually, and together as a system. One client stated "Yes, but how often are problems from the hardware? We find nearly all of our problems on disk systems from whichever storage vendor we buy from are in the microcode." It's true.
Both Intel-based processors and POWER-based processors have all the technological functions needed to run storage systems. The difference is all in the microcode. So, if you are looking for safe and stable microcode, the IBM System Storage DS8700 continues its POWER-based tradition for compatibility with previous models. For those that demand x86-based units, the IBM SAN Volume Controller has been around since 2003, the XIV Storage System has been in production since 2005, and our IBM N series are also Intel-based, running Version 7 of the ONTAP operating system.
For those who want to meet me in person, there are two opportunities coming up in December.
Data Center Conference, December 1-4
Once again, I will be blogging from Caesars Palace in Las Vegas at this year's [Data Center Conference 2009]! Last year's conference was a blast, and this one looks to be quite exciting. IBM is again a premier sponsor. Scheduled to speak are the following IBM executives:
Helene Armitage, the new General Manager of System Software, on "IT-Wide Virtualization, A Prerequisite for a Truly Dynamic Infrastructure"
Steve Sams, the VP of Sites and Facilities, on "Data Center Actions Your CFO Will Love"
Barry Rudolph, the VP of System Storage, on "Meeting the Information Infrastructure Challenge"
We will also have an IBM booth at the Solutions Showcase, showing off the latest in Cloud Computing,
Service Management, Information Infrastructure, and Workload-optimized systems. You will be able to schedule one-on-one sessions with IBM executives and subject matter experts. Best of all, we will have on display a Portable Modular Data Center [PMDC] that can hold a fully operational data center in a standard [20 foot shipping container].
IBM Virtualization and Consolidation Briefing, December 15
This is being done "open house" style. If you can get yourself to the IBM Tucson Executive Briefing Center, IBM will provide you breakfast, a series of presentations, lunch, and then even more presentations. Your stomach and brain will be full by the end of the day.
Well, it's Tuesday again, and we have more IBM announcements.
XIV asynchronous mirror
For those not using XIV behind SAN Volume Controller, [XIV now offers native asynchronous mirroring] support to another XIV far, far away. Unlike other disk systems that are limited to two or three sites, an XIV can mirror to up to 15 other sites. The mirroring can be at the individual volume level, or for a consistency group of multiple volumes. Each mirror pair can have its own recovery point objective (RPO). For example, a consistency group of mission-critical application data might be given an RPO of 30 seconds, but less important data might be given an RPO of 20 minutes. This allows the XIV to prioritize packets it sends across the network.
As with XIV synchronous mirror, this new asynchronous mirror feature can send the data over either its
Fibre Channel ports (via FCIP) or its Ethernet ports.
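As a conceptual illustration of what "prioritize by RPO" could look like, consider sorting pending consistency groups by how close each one is to missing its recovery point objective. This is my own sketch with made-up mirror pairs and timestamps, not XIV's actual replication engine.

```python
import time

# Hypothetical mirror pairs: name, RPO in seconds, and the time of the last
# replicated consistency point. None of these values come from a real XIV.
mirrors = [
    {"name": "crm-prod",   "rpo": 30,   "last_replicated": time.time() - 25},
    {"name": "data-mart",  "rpo": 1200, "last_replicated": time.time() - 300},
    {"name": "file-share", "rpo": 1200, "last_replicated": time.time() - 1150},
]

def urgency(mirror, now=None):
    """Fraction of the RPO window already used; >= 1.0 means the RPO is blown."""
    now = now or time.time()
    return (now - mirror["last_replicated"]) / mirror["rpo"]

# Replicate the most urgent consistency groups first.
for mirror in sorted(mirrors, key=urgency, reverse=True):
    print("%-10s %4.0f%% of RPO window used" % (mirror["name"], urgency(mirror) * 100))
```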
The IBM System Storage SAN384B and SAN768B directors now offer [two new blades!]
A 24-port FCoCEE blade, where each port can handle 10Gb Converged Enhanced Ethernet (CEE). CEE can be used to transmit Fibre Channel, TCP/IP, iSCSI and other Ethernet protocols. This connects directly to a server's converged network adapter (CNA) cards.
A 24-port mixed blade, with 12 FC ports (1Gbps, 2Gbps, 4Gbps, 8Gbps), 10 Ethernet ports (1GbE) and 2 Ethernet ports (10GbE). This would connect to traditional server NIC, TOE and HBA cards as well as traditional NAS, iSCSI and FC-based storage devices.
IBM also announced the IBM System Storage [SAN06B-R Fibre Channel router]. This has 16 FC ports (1Gbps up to 8Gbps) and six Ethernet ports (1GbE), with support for both FC routing as well as FCIP extended distance support.
With the holiday season coming up at the end of the year, now is a great time to ask Santa for a new shiny pair of XIV systems, and some extra networking gear to connect them.
Well, I had a pleasant vacation. I took a trip up to beautiful Lake Powell in Northern Arizona as part of a "Murder Mystery Dinner" weekend. This trip was organized by AAA and Lake Powell in association with the professionals at [Murder Ink Productions] out of Phoenix.
The trip involved two busloads of people from Tucson and Phoenix driving up to Lake Powell, with a series of meals that introduced all the characters and gave out clues to solve a murder. At the end of the dinner on the last evening, we had to guess who dunnit, how, and why. I solved it, and got this lovely tee-shirt.
More importantly, the trip gave me a chance to read
[The Numerati] by Stephen Baker. The author explains all the different ways that "analysts" are able to crunch through the large volumes of data to gain insight. He has chapters on how this is done for shoppers in retail sales, voters for upcoming elections, patients for medical care, and even matchmaking services like chemistry.com. Like the Murder Mystery Dinner, there are too many suspects and too many clues, but these number-crunchers, which Mr. Baker calls The Numerati, are able to figure out through advanced business analytics.
FTC Notice: I recommend this book. I did not receive any compensation to mention this book on this blog, I did not receive a free copy of the book for this review, and I do not know the author. Everyone on my staff is reading this book, and I borrowed a copy from a co-worker.
If you don't understand how this all works, here is a quick 6-minute [video] on YouTube.
In his Backup Blog, fellow blogger Scott Waterhouse from EMC has yet another post about Tivoli Storage Manager (TSM) titled [TSM and the Elephant]. He argues that only the cost of new TSM servers should be considered in any comparison, on the assumption that if you have to deploy another server, you have to attach fresh new disk storage to it, buy a brand new tape library, and hire an independent group of backup administrators to manage it. Of course, that is bull; people reuse much of their existing infrastructure and existing skilled labor pool every time new servers are added, as I tried to point out in my post [TSM Economies of Scale].
However, Scott does suggest that we should look at all the costs, not just the cost of a new server, which we in the industry call Total Cost of Ownership (TCO). Here is an excerpt:
Final point: there is actually a really important secondary point here--what is the TCO of your backup infrastructure. In some ways, TSM is one of the most expensive (number of servers and tape drives, for example), relative to other backup applications. However, I think it would be a really interesting exercise to critically examine the TCO of the various backup applications at different scales to evaluate if there is any genuine cost differentiation between them.
Fortunately, I have a recent TCO/ROI analysis for a large customer in the Eastern United States that compares their existing EMC Legato deployment to a new proposed TSM deployment. The assessment was performed by our IBM Tivoli ROI Analyst team, using a tool developed by Alinean. The process compares the TCO of the currently deployed solution (in this case EMC Legato) with the TCO of the proposed replacement solution (in this case IBM TSM) for 55,000 client nodes at expected growth rates over a three year period, and determines the amount of investment, cost savings and other benefits, and return on investment (ROI).
Here are the results:
"A risk adjusted analysis of the proposed solution's impact was conducted and it was projected that implementing the proposed solutions resulted in $16,174,919 of 3 year cumulative benefits. Of these projected benefits, $8,015,692 are direct benefits and $8,159,227 are indirect benefits.
Top cumulative benefits for the project include:
Backup Coverage Risk Avoidance - $6,749,796
Reduction in Maintenance of Competitive Products - $1,576,000
Reduction in Existing Tivoli Maintenance (Storage and Monitoring) - $1,490,000
IT Operations Labor Savings - Storage Management - $982,919
Network Bandwidth Savings - $575,196
Standardization - $366,667
Future cost avoidance of additional competitive licenses - $350,000
These benefits can be grouped regarding business impact as:
$6,456,025 in IT cost reductions
$1,559,667 in business operating efficiency improvements
$8,159,227 in business strategic advantage benefits
The proposed project is expected to help the company meet the following goals and drive the following benefits:
Reduce Business Risks $6,749,796
Consolidate and Standardize IT Infrastructure $4,975,667
Reduce IT Infrastructure Costs $2,057,107
Improve IT System Availability / Service Levels $1,409,431
Improve IT Staff Efficiency / Productivity $982,919
To implement the proposed project will require a 3 year cumulative investment of $5,760,094 including:
$0 in initial expenses
$4,650,000 in capital expenditures
$1,110,094 in operating expenditures
Comparing the costs and benefits of the proposed project using discounted cash flow analysis and factoring in a risk-adjusted discount rate of 9.5%, the proposed business case predicts:
Risk Adjusted Return on Investment (RA ROI) of 172%
Return on Investment (ROI) of 181%
Net Present Value (NPV) savings of $8,425,014
Payback period of 9.0 month(s)
Note: The project has been risk-adjusted for an overall deployment schedule of 5 months."
IBM Tivoli Storage Manager uses less bandwidth and fewer disk and tape storage resources than EMC Legato. Even for a large deployment of this kind, the payback period is only NINE MONTHS. Generally, if a new proposed investment has a payback period of less than 24 months, you have enough to get both the CFO and the CIO excited, so this one is a no-brainer.
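For those who like to check the arithmetic, here is a short Python sketch using only the figures quoted above; the simple ROI works out to the 181 percent in the report, while the risk-adjusted ROI, NPV and payback period depend on period-by-period cash flows that the excerpt does not spell out:

# Sanity-check the quoted TCO/ROI figures (simple ROI only; NPV and payback
# require the period-by-period cash flows, which are not given in the excerpt).
benefits_direct = 8_015_692
benefits_indirect = 8_159_227
benefits_total = benefits_direct + benefits_indirect   # $16,174,919

investment = 0 + 4_650_000 + 1_110_094                  # $5,760,094 over 3 years

simple_roi = (benefits_total - investment) / investment
print(f"3-year benefits:   ${benefits_total:,}")
print(f"3-year investment: ${investment:,}")
print(f"Simple ROI: {simple_roi:.0%}")                   # approximately 181%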
Perhaps this helps explain why TSM enjoys a much larger marketshare than EMC Legato in the backup software marketplace. No doubt Scott might be able to come up with a counter-example, a very small business with fewer than 10 employees where an EMC Legato deployment might be less expensive than a comparable TSM deployment. However, when it comes to scalability, TSM is king. The majority of the Fortune 1000 companies use Tivoli Storage Manager, and IBM uses TSM internally for its own IT, managed storage services, and cloud computing facilities.
Please welcome new IBM blogger Keith Stevenson; his new blog is called [Infovore]. He gives his take on the big October 20 announcement we had this week,
and will continue to cover topics related to storage of information.
Well, it's Tuesday again, and today we had our third big storage launch of 2009! A lot got announced today as part of IBM's big "Dynamic Infrastructure" marketing campaign. I will just focus on the
disk-related announcements today:
IBM System Storage DS8700
IBM adds a new model to its DS8000 series with the
[IBM System Storage DS8700]. Earlier this month, fellow blogger and arch-nemesis Barry Burke from EMC posted [R.I.P DS8300] on the mistaken assumption that the new DS8700 meant that the DS8300 was going away, or that anyone who bought a DS8300 recently would be out of luck. Obviously, I could not respond until today's announcement, as the last thing I want to do is lose my job by disclosing confidential information. BarryB is wrong on both counts:
IBM will continue to sell the DS8100 and DS8300, in addition to the new DS8700.
Clients can upgrade their existing DS8100 or DS8300 systems to DS8700.
BarryB's latest post [What's In a Name - DS8700] is fair game, given all the fun and ridicule everyone had at his expense over EMC's "V-Max" name.
So the DS8700 is new hardware with only 4 percent new software. On the hardware side, it uses faster POWER6 processors instead of POWER5+, has faster PCI-e buses instead of the RIO-G loops, and faster four-port device adapters (DAs) for added bandwidth between cache and drives. The DS8700 can be ordered as a single-frame dual 2-way that supports up to 128 drives and 128GB of cache, or as a dual 4-way, consisting of one primary frame, and up to four expansion frames, with up to 384GB of cache and 1024 drives.
Not mentioned explicitly in the announcements were the things the DS8700 does not support:
ESCON attachment - Now that FICON is well-established for the mainframe market, there is no need to support the slower, bulkier ESCON options. This greatly reduced testing effort. The 2-way DS8700 can support up to 16 four-port FICON/FCP host adapters, and the 4-way can support up to 32 host adapters, for a maximum of 128 ports. The FICON/FCP host adapter ports can auto-negotiate between 4Gbps, 2Gbps and 1Gbps as needed.
LPAR mode - When IBM and HDS introduced LPAR mode back in 2004, it sounded like a great idea the engineers came up with. Most other major vendors followed our lead to offer similar "partitioning". However, it turned out to be what we call in the storage biz a "selling apple" not a "buying apple". In other words, something the salesman can offer as a differentiating feature, but that few clients actually use. It turned out that supporting both LPAR and non-LPAR modes merely doubled the testing effort, so IBM got rid of it for the DS8700.
Update: I have been reminded that both IBM and HDS delivered LPAR mode within a month of each other back in 2004, so it was wrong for me to imply that HDS followed IBM's lead when obviously development happened in both companies for the most part concurrently prior to that. EMC was late to the "partition" party, but who's keeping track?
Initial performance tests show up to 50 percent improvement for random workloads, and up to 150 percent improvement for sequential workloads, and up to 60 percent improvement in background data movement for FlashCopy functions. The results varied slightly between Fixed Block (FB) LUNs and Count-Key-Data (CKD) volumes, and I hope to see some SPC-1 and SPC-2 benchmark numbers published soon.
The DS8700 is compatible for Metro Mirror, Global Mirror, and Metro/Global Mirror with the rest of the DS8000 series, as well as the ESS model 750, ESS model 800 and DS6000 series.
New 600GB FC and FDE drives
IBM now offers [600GB drives] for the DS4700 and DS5020 disk systems, as well as the EXP520 and EXP810 expansion drawers. In each case, we are able to pack up to 16 drives into a 3U enclosure.
Personally, I think the DS5020 should have been given a DS4xxx designation, as it resembles the DS4700
more than the other models of the DS5000 series. Back in 2006-2007, I was the marketing strategist for IBM System Storage product line, and part of my job involved all of the meetings to name or rename products. Mostly I gave reasons why products should NOT be renamed, and why it was important to name the products correctly at the beginning.
IBM System Storage SAN Volume Controller hardware and software
Fellow IBM Master Inventor Barry Whyte has been covering the latest on the [SVC 2145-CF8 hardware]. IBM put out a press release last week on this, and today is the formal announcement with prices and details. Barry's latest post
[SVC CF8 hardware and SSD in depth] covers just part of the entire announcement.
The other part of the announcement was the [SVC 5.1 software] which can be loaded
on earlier SVC models 8F2, 8F4, and 8G4 to gain better performance and functionality.
To avoid confusion on what is hardware machine type/model (2145-CF8 or 2145-8A4) and what is software program (5639-VC5 or 5639-VW2), IBM has introduced two new [Solution Offering Identifiers]:
5465-028 Standard SAN Volume Controller
5465-029 Entry Edition SAN Volume Controller
The latter is designed for smaller deployments, supports only a single SVC node-pair managing up to
150 disk drives, available in Raven Black or Flamingo Pink.
EXN3000 and EXP5060 Expansion Drawers
IBM offers the [EXN3000 for the IBM N series]. These expansion drawers can pack 24 drives in a 4U enclosure. The drives in a drawer can be either all SAS or all SATA, supporting 300GB, 450GB, 500GB and 1TB capacities.
The [EXP5060 for the IBM DS5000 series] is a high-density expansion drawer that can pack up to 60 drives into a 4U enclosure. A DS5100 or DS5300
can handle up to eight of these expansion drawers, for a total of 480 drives.
Pre-installed with Tivoli Storage Productivity Center Basic Edition. Basic Edition can be upgraded with license keys to support Data, Disk and Standard Edition to extend support and functionality to report and manage XIV, N series, and non-IBM disk systems.
Pre-installed with Tivoli Key Lifecycle Manager (TKLM). This can be used to manage the Full Disk Encryption (FDE) encryption-capable disk drives in the DS8000 and DS5000, as well as LTO and TS1100 series tape drives.
IBM Tivoli Storage FlashCopy Manager v2.1
The [IBM Tivoli Storage FlashCopy Manager V2.1] replaces two products with one. IBM used
to offer IBM Tivoli Storage Manager for Copy Services (TSM for CS) that protected Windows application data, and IBM Tivoli Storage Manager for Advanced Copy Services (TSM for ACS) that protected AIX application data.
The new product has some excellent advantages. FlashCopy Manager offers application-aware backup of LUNs containing SAP, Oracle, DB2, SQL server and Microsoft Exchange data. It can support IBM DS8000, SVC and XIV point-in-time copy functions, as well as the Volume Shadow Copy Services (VSS) interfaces of the IBM DS5000, DS4000 and DS3000 series disk systems. It is priced by the amount of TB you copy, not on the speed or number of CPU processors inside the server.
Don't let the name fool you. IBM FlashCopy Manager does not require that you use Tivoli Storage Manager (TSM) as your backup product. You can run IBM FlashCopy Manager on its own, and it will manage your FlashCopy target versions on disk, and these can be backed up to tape or another disk using any backup product. However, if you are lucky enough to also be using TSM, then there is optional integration that allows TSM to manage the target copies, move them to tape, inventory them in its DB2 database, and provide complete reporting.
Yup, that's a lot to announce in one day. And this was just the disk-related portion of the launch!
I am proud to announce we have yet another IBM blogger for the storosphere. Rich Swain, from IBM's Research Triangle Park in Raleigh, North Carolina, will blog about
[News and Information on IBM’s N series].
Rich is a Field Technical Sales Specialist with deep-dive knowledge and experience.
He's already posted a dozen or so entries, to give you a feel for the level of technical detail he will provide.
Please welcome Rich by following his blog and posting comments on his posts.
Well, it's Wednesday! Normally, IBM makes its announcements on Tuesdays, but this week that landed on the 13th, and some people are superstitious, so we pushed it back to today. Fellow IBM Master Inventor Barry Whyte starts the first in a series of posts with: New SVC v5 CF8 node with native SSD support.
There are really two separate items being announced for the IBM System Storage SAN Volume Controller (SVC):
SVC v5 software
The software moves from a 32-bit kernel to a 64-bit kernel. Fortunately, IBM had the foresight to know that would happen back in 2005, so models 8F2, 8F4 and 8G4 can be upgraded to this new software level and gain new functionality. This is because these models have 64-bit capable processors. Those with six-year-old 4F2 nodes will continue to run on SVC 4.3.1, but should consider that it's about time to upgrade.
New 2145-CF8 model
The CF8 is based on the IBM System x 3550M2. Each node can have up to 4 Solid-State Drives (SSD) that can be treated as SVC Managed Disk Groups. Virtual disks can be easily migrated from hard disk drives (HDD) over to SSD, processed, and then moved back to HDD. By treating the SSD as managed disks, rather than as an extension of the cache, we are able to support all of the features and functions in a seamless manner.
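Here is a toy Python sketch of that workflow, purely my own illustration and not the SVC command set: the virtual disk keeps its identity (and host mappings) while its backing extents move between an HDD-backed and an SSD-backed managed disk group.

# Toy model of migrating a virtual disk between managed disk groups (not SVC CLI).
mdisk_groups = {
    "hdd_group": {"vdisk_db01", "vdisk_web01"},
    "ssd_group": set(),
}

def migrate(vdisk, source, target):
    # Only the backing extents move; the vdisk identity and host mappings stay the same.
    mdisk_groups[source].remove(vdisk)
    mdisk_groups[target].add(vdisk)

migrate("vdisk_db01", "hdd_group", "ssd_group")   # speed up month-end processing
migrate("vdisk_db01", "ssd_group", "hdd_group")   # move it back when finished
print(mdisk_groups)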
As Barry says, IBM has been working on this for quite a while, and based on initial responses, it looks to be quite successful in the market!
(Note: I have been informed that this week the U.S. Federal Trade Commission has [announced an update] to its
[16 CFR Part 255: Guides Concerning the Use of Endorsements and Testimonials in Advertising]. As if it were not obvious enough already, I must emphasize that I work for IBM, IBM provides me all the equipment and related documentation that I need for me to blog about IBM solutions, and that I am paid to blog as part of my job description. Both my boss and I agree I am not paid enough, but that is another matter. Beginning December 1, 2009, all positive mentions of IBM products, solutions and services on this blog might be considered a "celebrity endorsement" by the FTC and others under these new guidelines. Negative mentions of IBM products are probably typos.)
At a conference once, a presenter discussing tips and techniques about public speaking told everyone to be
aware that everyone in the audience is "tuned into radio station WIIFM" (What's In It For Me). If a member of the audience cannot figure out why the information being presented is relevant to them individually, they may not pay attention for long. Likewise, when it comes to archiving data for long term retention, I think many people are tuned into KEFM (the Keep Everything Forever methodology). Two classic articles from Drew Robb on the subject are [Can Data Ever Be Deleted?] and [Experts Question 'Keep Everything' Philosophy].
(Note: For those of my readers who do not live in the US, most radio stations start with
the letter "K" if they are on the left half of the country, and "W" if they are on the right half. See
Thomas H. White's [Early Radio History] to learn more.)
Contrary to popular belief, IBM would rather have their clients implement a viable archive strategy than just mindlessly buying more disk and tape for a "Keep Everything Forever" methodology. Keeping all information around forever can be a liability, as data that you store can be used against you in a court of law. It can also make it difficult to find the information that you do need, because the sheer volume of information to sort through makes the process more time consuming.
The problem with most archive storage solutions is that they are inflexible, treating all data the same under a common set of rules. The IBM Information Archive is different. You can have up to three separate "collections".
Each collection can have its own set of policies and rules. You can have a collection that is locked down
for compliance with full Non-Erasable, Non-Rewriteable (NENR) enforcement, and another collection that allows
full read/write/delete capability.
Each collection can consist of either files or objects, unlike other storage devices that force you to convert files into objects, or objects into files, for their own benefit.
IBM Information Archive is scalable enough to support up to a billion files or objects per collection.
Each collection can span storage tiers, even across disk and tape resources.
Object collections are accessed using IBM System Storage Archive Manager (SSAM) application programming interface (API). People who use IBM Tivoli Storage Manager (TSM) archive or IBM System Storage DR550 are already familiar with this interface. An object can represent the archived slice of a repository, a set of rows from a database, a collection of emails from an individual mailbox user, etc.
File collections can be used for any type of data you would store on a NAS device. This includes databases, email repositories, static Web pages, seismic data, user documents, spreadsheets, presentations, medical images, photos, videos, and so on.
The IBM Information Archive solution was designed to work with a variety of Enterprise Content Management (ECM) software, and is part of the overall IBM Smart Archive strategy.
Well, it's Tuesday, and that means IBM announcements! Today's batch is bigger than usual, as there are a lot of Dynamic Infrastructure announcements throughout the company with a common theme: cloud computing and smart business systems that support the new way of doing things. Today, IBM announced its new "IBM Smart Archive" strategy that integrates software, storage, servers and services into solutions that help meet the challenges of today and tomorrow. IBM has been spending the past few years working across its various divisions and acquisitions to ensure that our clients have complete end-to-end solutions.
IBM is introducing new "Smart Business Systems" that can be used on-premises for private-cloud configurations, as well as by cloud-computing companies to offer IT as a service.
IBM [Information Archive] is the first to be unveiled, a disk-only or blended disk-and-tape Information Infrastructure solution that offers a "unified storage" approach with amazing flexibility for dealing with various archive requirements:
For those with applications using the IBM Tivoli Storage Manager (TSM) or IBM System Storage Archive Manager (SSAM) API of the IBM System Storage DR550 data retention solution, the Information Archive will provide a direct migration, supporting this API for existing applications.
For those with IBM N series using SnapLock or the File System Gateway of the DR550, the Information Archive will support various NAS protocols, deployed in stages, including NFS, CIFS, HTTP and FTP access, with Non-Erasable, Non-Rewriteable (NENR) enforcement that are compatible with current IBM N series SnapLock usage.
For those using NAS devices with PACS applications to store X-rays and other medical images, the Information Archive will provide similar NAS protocol interfaces. Information Archive will support both read-only data such as X-rays, as well as read/write data such as Electronic Medical Records.
Information Archive is not just for compliance data that was previously sent to WORM optical media. Instead, it can handle all kinds of data, rewriteable data, read-only data, and data that needs to be locked down for tamper protection. It can handle structured databases, emails, videos and unstructured files, as well as objects stored through the SSAM API.
The Information Archive has all the server, storage and software integrated together into a single machine type/model number. It is based on IBM's General Parallel File System (GPFS) to provide incredible scalability, the same clustered file system used by many of the top 500 supercomputers. Initially, Information Archive will support up to 304TB raw capacity of disk and Petabytes of tape. You can read the [Spec Sheet] for other technical details.
For those who prefer a more "customized" approach, similar to IBM Scale-Out File Services (SoFS), IBM has [Smart Business Storage Cloud]. IBM Global Services can customize a solution that is best for you, using many of the same technologies. In fact, IBM Global Services announced a variety of new cloud-computing services to help enterprises determine the best approach.
In a related announcement, IBM announced [LotusLive iNotes], which you can think of as a "business-ready" version of Google's GoogleApps, Gmail and GoogleCalendar. IBM is focused on security and reliability, but leaves out the advertising and data mining that people have been forced to tolerate from consumer-oriented Web 2.0-based solutions. IBM's clients that are already familiar with the on-premises version of Lotus Notes will have no trouble using LotusLive iNotes.
There was actually a lot more announced today, which I will try to get to in later posts.
Bruce Allen from BR Allen Associates LLC, an IT technology strategy and consulting firm, has written an excellent 9-page White Paper contrasting IBM and EMC's latest strategies. Here are some key excerpts:
"The term “information infrastructure” is over 40 years old, but its characteristics and requirements in today’s world are quite new indeed. Specifically, federating all storage enterprise wide, consolidating and standardizing onto virtualized, high-capacity media, and enabling dynamic, cloud-ready provisioning are among the major new IT challenges. Moreover, continued explosive storage growth demands that a systematic approach be crafted to address the full spectrum of current and future (information) compliance, availability, retention and security goals. For many customers, this transformation must occur amidst a storage growth rate of 50%-70% CAGR.
...IBM’s Information Infrastructure focus is a core element and foundational pillar in its Dynamic Infrastructure and New Intelligence initiatives, both well defined and tightly coupled to an umbrella vision and strategy called “Smarter Planet.” It is also important to remember that IBM has its own vast, internal infrastructure, and is transforming it in the same manner prescribed to customers. IBM’s increased investment in solution centers and expertise to develop and test drive customer solutions demonstrates its resolve in this area.
...In contrast, storage vendor EMC references information infrastructure as half of its bifurcated strategy, with virtualization being the other half. The two are represented by slightly overlapping circles, and interestingly, these two circles essentially mirror the EMC organization. ...Analysis of both the strategy and the organization indicates a continued strong product focus, a stark contrast to IBM’s strategy that puts solutions first and products second.
...IBM’s Information Infrastructure strategy and portfolio takes a more holistic approach and appears to be shifting its own organizations and partners from pure product focus to a true solution orientation that more directly addresses customer needs. ...IBM views these elements as integral to any information-led transformation, but its competitors fall well short in this arena.
...As a system vendor, IBM clearly has a more in-depth set of offerings and a more elegant strategy and vision for providing a dynamic information environment than its competitors. None of the other system vendors have made the strides, or the investments, that IBM has.
...Because of its size and breadth, IBM uniquely has all of the pieces, and also has a vast information infrastructure of its own to build and manage. IBM often uses its internal systems to showcase new capabilities, as shown in these examples:
In an early cloud computing production pilot, IBM was able to reduce costs by managing more than 92,000 worldwide users with one storage cloud and one delivery team. Lessons learned from this deployment helped IBM establish cloud computing requirements for today’s products and services.
In 2009, IBM deployed a unified, centralized customer support portal for all technical support tools and information. The portal unifies all IBM systems, software, and services support sites, including those from recent acquisitions. By leveraging its own portal, database, and storage technology, IBM was able to consolidate multiple support sites into a single portal. The new portal dramatically simplifies the user experience for clients with multiple IBM products, while helping IBM control infrastructure costs.
...a key difference between IBM and EMC is IBM’s orientation to total-solution provisioning, not just for one application at a time, but for the entire set of infrastructure needs that customers have. To ensure this, a clearly articulated strategy and vision keeps IBM’s focus on the bigger picture as it addresses each customer’s requirements.
...Efforts tied to cloud computing have helped vendor organizations to work together better toward composite and integrated solutions, but the vague specifications and lack of immediate revenue keep most vendor sales organizations focused on their respective products. The only other way to address the challenges of integrating people and technology as described above is to put a clear strategy in place with specific tactical goals and objectives. This is where IBM leads the industry in making demonstrable progress in building solutions that achieve the goals of its dynamic infrastructure model and strategy.
...IBM is in a unique position to deliver and support the full information infrastructure “stack” and address all of its clients’ information-centric challenges. The combination of IBM’s storage technology, information management products, aggressive financing, and best-of-breed integrated services supported by world-class expertise and proven experience, provide the building blocks for the world’s strongest information infrastructure portfolio.
Mr. Allen also discusses the successes of two real client examples, Virginia Commonwealth University Health Systems (VCUHS), and INTTRA, the largest multi-carrier e-commerce platform for the ocean shipping industry.
Well, it's Tuesday, which means IBM Announcements!
We have both disk and tape related announcements today.
2 TB Drives
Yes, they are finally here. IBM now offers [2 TB SATA drives for its IBM System Storage DCS9900 series] disk systems. These are 5400 RPM, slower than traditional 7200 RPM SATA drives. This increases the maximum capacity of a single DCS9900 from 1200 TB to 2400 TB. The DCS9900 is IBM's MAID (Massive Array of Idle Disks) system, which allows drive spin-down to reduce energy costs, and is ideal for long-term retention of archive data that must remain on disk for High Performance Computing or video streaming.
TS3000 System Console
The TS3000 System Console [provides improved features for service and support] of up to 24 tape library frames or 43 unique tape systems. Tape frames include those of the TS7740, TS7720 and TS7650. Tape systems include TS3500, TS3400 or 3494 libraries as well as stand-alone TS1120 and TS1130 drives. Having the TS3000 System Console in place is a benefit to both IBM and the customer, as it improves IBM's ability to provide service in a more timely manner.
Both announcements are part of IBM's strategy to provide cost-effective, energy-efficient, long-term retention storage for archive data.
I saw this as an opportunity to promote the new IBM Tivoli Storage Manager v6.1 which offers a variety of new scalability features, and continues to provide excellent economies of scale for large deployments, in my post [IBM has scalable backup solutions].
"So does TSM scale? Sure! Just add more servers. But this is not an economy of scale. Nothing gets less expensive as the capacity grows. You get a more or less linear growth of costs that is directly correlated to the growth of primary storage capacity. (Technically, it costs will jump at regular and predictable intervals, by regular and predictable and equal amounts, as you add TSM servers to the infrastructure--but on average it is a direct linear growth. Assuming you are right sized right now, if you were to double your primary storage capacity, you would double the size of the TSM infrastructure, and double your associated costs.)"
I talked about inaccurate vendor FUD in my post [The murals in restaurants], and recently, I saw StorageBod's piece, [FUDdy Waters]. So what would "economies of scale" look like? Using Scott's own words:
Without Economies of Scale
"If it costs you $5 to backup a given amount of data, it probably costs you $50 to back up 10 times that amount of data, and $500 to back up 100 times that amount of data."
With Economies of Scale
"If anybody can figure out how to get costs down to $40 for 10 times the amount of data, and $300 for 100 times the amount of data, they will have an irrefutable advantage over anybody that has not been able to leverage economies of scale."
So, let's do some simple examples. I'll focus on a backup solution just for employee workstations, each employee has 100GB of personal data to backup on their laptop or PC. We'll look at a one-person company, a ten-person company, and a hundred-person company.
Case 1: The one-person company
Here the sole owner needs a backup solution. Here are all the steps she might perform:
Spend hours of time evaluating different backup products available, and make sure her operating system, file system and applications are supported
Spend hours shopping for external media, this could be an external USB disk drive, optical DVD drive, or tape drive, and confirm it is supported by the selected backup software.
Purchase the backup software, external drive, and if optical or tape, blank media cartridges.
Spend time learning the product, purchase "Backup for Dummies" or a similar book, and/or take a training class.
Install and configure the software
Operate the software, or set it up to run automatically, take the media offsite at the end of the day, and bring it back each morning.
Case 2: The ten-person company
I guess if each of the ten employees went off and performed all of the same steps as above, there would be no economies of scale.
Fortunately, co-workers are amazingly efficient in avoiding unnecessary work.
Rather than have all ten people evaluate backup solutions, have one person do it. If everyone runs the same or similar operating system, file systems and applications, this can be done in about the same time as the one-person case.
Ditto on the storage media. Why should 10 people go off and evaluate their own storage media? One person can do it for all ten in about the same time as in the one-person case.
Purchasing the software and hardware. Ok, here is where some costs may be linear, depending on your choices. Some software vendors give bulk discounts, so purchasing 10 seats of the same software could be less than 10 times the cost of one license. As for storage hardware, it might be possible to share drives and even media. Perhaps one or two storage systems can be shared by the entire team.
For a lot of backup software, most of the work is in the initial setup; then it runs automatically afterwards. That is the case for TSM. You create a "dsm.opt" file, and it can list all of the include/exclude files and other rules and policies. Once the first person sets this up, they share it with their co-workers (see the sketch after this list).
Hopefully, if storage hardware was consolidated, such that you have fewer drives than people, you can probably have fewer people responsible for operations. For example, let's have the first five employees sharing one drive managed by Joe, and the second five employees sharing a second drive managed by Sally. Only two people need to spend time taking media offsite, bringing it back and so on.
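As a rough illustration of the "set it up once, share it" point above, here is a small Python sketch that stamps out a per-employee TSM client option file from one shared template; the option names follow the familiar dsm.opt style, but the server address, paths and policy choices are made up for the example:

# Generate per-employee dsm.opt files from one shared template.
# The option values, server name and paths below are illustrative only.
TEMPLATE = """\
NODENAME           {nodename}
TCPSERVERADDRESS   tsmserver.example.com
PASSWORDACCESS     GENERATE
EXCLUDE            "*:\\...\\temp\\...\\*"
INCLUDE            "C:\\Users\\...\\*"
"""

employees = ["joe", "sally", "pat"]
for name in employees:
    with open(f"dsm_{name}.opt", "w") as f:
        f.write(TEMPLATE.format(nodename=name.upper()))
print(f"Wrote {len(employees)} option files from one shared template.")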
Case 3: The hundred-person company
Again, it is possible that a hundred-person company consists of 10 departments of 10 people each, and they all follow the above approach independently, resulting in no economies of scale. But again, that is not likely.
Here one or a few people can invest time to evaluate backup solutions. Certainly far less than 100 times the effort for a one-person company.
Same with storage media. With 100 employees, you can now invest in a tape library with robotic automation.
Purchase of software and hardware. Again, discounts will probably apply for large deployments. Purchasing 1 tape library for all one hundred people is less than 10 times the cost and effort of 10 departments all making independent purchases.
With a hundred employees, you may have some differences in operating system, file systems and applications. Still, this might mean two to five versions of dsm.opt, and not 10 or 100 independent configurations.
Operations is where the big savings happen. TSM has "progressive incremental backup" so it only backs up changed data. Other backup schemes involve taking periodic full backups, which tie up the network and consume a lot of back-end resources. In head-to-head comparisons between IBM Tivoli Storage Manager and Symantec's NetBackup, IBM TSM was shown to use significantly less network LAN bandwidth, less disk storage capacity, and fewer tape cartridges than NetBackup.
The savings are even greater with data deduplication. Whether using hardware, like the IBM TS7650 ProtecTIER data deduplication solution, or software, like the data deduplication capability built into IBM TSM v6.1, you can take advantage of the fact that 100 employees might have a lot of common data between them (see the back-of-envelope sketch below).
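Here is a back-of-envelope Python sketch of those two effects for the hundred-person case; the daily change rate and deduplication ratio are assumptions I picked purely for illustration, not measured results:

# Back-of-envelope weekly backup volume for 100 employees with 100 GB each.
# The change rate and dedup ratio are illustrative assumptions, not measurements.
employees, gb_per_employee = 100, 100
daily_change_rate = 0.02      # assume 2 percent of data changes per day
dedup_ratio = 10              # assume 10:1 across common OS and application files

weekly_full_plus_incrementals = employees * gb_per_employee * (1 + 6 * daily_change_rate)
progressive_incremental = employees * gb_per_employee * (7 * daily_change_rate)

print(f"Weekly full + daily incrementals: {weekly_full_plus_incrementals:,.0f} GB moved per week")
print(f"Progressive incremental only:     {progressive_incremental:,.0f} GB moved per week")
print(f"With {dedup_ratio}:1 deduplication: {progressive_incremental / dedup_ratio:,.0f} GB stored per week")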
So, I have demonstrated how savings through economies of scale are achieved using IBM Tivoli Storage Manager. Adding one more person in each case costs less than the first person did, so the cost growth is not linear, contrary to what Scott suggests. But what about larger deployments? The IBM TS3500 Tape Library can hold one PB of data in only 10 square feet of data center floorspace. The IBM TS7650G gateway can manage up to 1 PB of disk, holding as much as 25 PB of backup copies. IT Analysts Tony Palmer, Brian Garrett and Lauren Whitehouse from Enterprise Strategy Group tried IBM TSM v6.1 out for themselves and wrote up a ["Lab Validation"] report. Here is an excerpt:
"Backup/recovery software that embeds data reduction technology can address all three of these factors handily. IBM TSM 6.1 now has native deduplication capabilities built into its Extended Edition (EE) as a no-cost option. After data is written to the primary disk pool, a deduplication operation can be scheduled to eliminate redundancy at the sub-file level. Data deduplication, as its name implies, identifies and eliminates redundant data.
TSM 6.1 also includes features that optimize TSM scalability and manageability to meet increasingly demanding service levels resulting from relentless data growth. The move from a proprietary back-end database to IBM DB2 improves scalability, availability, and performance without adding complexity; the DB2 database is automatically maintained and managed by TSM. IBM upgraded the monitoring and reporting capabilities to near real-time and completely redesigned the dashboard that provides visibility into the system. TSM and TSM EE include these enhanced monitoring and reporting capabilities at no cost."
The majority of Fortune 1000 customers use IBM Tivoli Storage Manager, and it is the backup software that IBM uses itself in its own huge data centers, including the cloud computing facilities. In combination with IBM Tivoli FastBack for remote office/branch office (ROBO) situations, and complemented with point-in-time and disk mirroring hardware capabilities such as IBM FlashCopy, Metro Mirror, and Global Mirror, IBM Tivoli Storage Manager can be an effective, scalable part of a complete Unified Recovery Management solution.
This week, some of my coworkers are out at
[VMworld 2009] in San Francisco. IBM is a platinum sponsor, and is the leading reseller of VMware software. Here is the floor plan for our IBM booth there:
Virtual Data Center in a Box & Virtual Networking on
IBM & VMware Joint Collaboration on Power Monitoring
“Always on IT” Business Continuity Solution
IBM System Storage™ XIV®
[IBM XIV Storage System] is a revolutionary, easily managed, open disk system, designed to meet today’s ongoing IT challenges. This system now supports VMware 4.0 and extends the benefits of virtualization to your storage system, enabling easy provisioning and self-tuning after hardware changes. Its unique grid-based architecture represents the next generation of high-end storage and delivers outstanding performance, scalability, reliability and features, along with management simplicity and exceptional TCO.
IBM Storage Solutions with VMware
Featured products include: The new IBM System Storage DS5020 , Virtual Disk solutions with IBM System Storage SAN Volume Controller, IBM Tivoli Storage Productivity Center, and IBM System Storage ProtecTIER Data Deduplication solutions.
Server virtualization with VMware vSphere offers significant benefits to an organization, including increased asset utilization, simplified management and faster server provisioning. In addition to these benefits, VMware enables business agility and business continuity with more advanced features such as VMotion, high availability, fault tolerance, and Site Recovery Manager that all require dependable high-performance shared storage. Adding storage solutions --including virtualized storage-- from IBM delivers complementary benefits to your information infrastructure that extend and enhance the benefits of VMware vSphere while increasing overall reliability, availability and performance to help you transform into a dynamic infrastructure. IBM can provide the right storage solution for your environment and requirements. Our solutions help maximize efficiency with lower costs and provide affordable, scalable storage solutions that help you solve your particular needs.
Stop by to learn how our exciting new storage solutions can help optimize VMware, including self-encrypting storage; automated, affordable disaster recovery with VMware SRM; easier and faster provisioning of storage for virtual machines; dramatically improved storage utilization with ProtecTIER deduplication; and how the DS5000 has a lower Total Cost of Acquisition (TCA) than typical competitors.
IBM Smart Business Desktop Cloud
IBM System x® iDataPlex™: Get More on the Floor
Virtual Client Solutions from IBM
IBM also is sponsoring some breakout sessions:
Leverage Storage Solutions for a Smarter Infrastructure
Simplify and Optimize with IBM N series
IBM SAN Volume Controller: Virtualized Storage for Virtual Servers
XIV: Storage Reinvented for today's dynamic world
Wish I was there, looks like a lot of good information!
This week, SHARE conference is being held at the Colorado Convention Center in Denver, Colorado. I covered this conference for 10 years earlier in my career. Now, my colleague Curtis Neal covers these on a regular basis, and is giving the following presentations this week:
IBM Virtual Tape Products: DILIGENT ProtecTIER and TS7700 Update
Wednesday (1:30-2:30pm), Curtis will present IBM's premier virtual tape libraries. The TS7650G ProtecTIER Data Deduplication solution supports distributed systems like Windows, UNIX and Linux. The TS7700 supports the IBM System z mainframe.
SAN Volume Controller Update
Thursday (8:00-9:00am), Curtis will cover the latest features of the IBM System Storage SAN Volume Controller (SVC). SVC has features like thin provisioning, vDisk mirroring, and cascaded FlashCopy support.
IBM System Storage DS5000 Update
Friday (8:00-9:00am), Curtis will cover the DS5100 and DS5300, and probably the new DS5020 model. These are midrange disk systems that provide excellent performance for distributed systems, full disk encryption, and intermix of FC and SATA drives.
Unlike other conferences where people just go once and are never seen again, SHARE brings the same people back year after year, so that you can maintain relationships across organizations and carry on forward-looking strategic discussions.
Well, it's Tuesday again, and that means IBM announcements!
We've got a variety of storage-related items today, so here's my quick recap:
DS5020 and EXP520 disk systems
[IBM System Storage DS5020]
provides the functional replacement for the DS4700 disk systems. It combines the controller and 16 drives in a compact 3U package.
The EXP520 expansion drawer provides an additional 16 drives per 3U drawer. A DS5020 can support up to six additional EXP520 drawers, for a total of 112 drives per system.
The DS5020 supports both 8 Gbps FC as well as 1GbE iSCSI.
New Remote Support Manager (DS-RSM model RS2)
The [IBM System Storage DS-RSM Model
RS2] supports up to 50 disk systems, in any mix of DS3000, DS4000 and DS5000 series.
It includes "call home" support, which is really "email home", sending error alerts to IBM
if there are any problems. The RSM also allows IBM to dial-in to perform diagnostics before
arrival, reducing the time needed to resolve a problem. The model RS2 is a beefier model
with more processing power than the prior generation RS1.
New Ethernet Switches
With the increased interest in iSCSI protocol, and the new upcoming Fibre Channel over
Convergence Enhanced Ethernet (FCoCEE), IBM's re-entrance into the ethernet switch market
has drawn a lot of interest.
The [IBM Ethernet Switch r-series] offers 4-slot, 8-slot, 16-slot, and 32-slot models. Each slot can handle either 16 10GbE ports, or 48 1GbE ports. This means up to 1,536 ports in the largest configuration.
The [c-series] now offers a
24-port model. This is either 24 copper and 4 fiber optic, or 24 fiber optic.
The "hybrid fiber" SFP fiber optic can handle either single or multi-mode, eliminating the
need to commit to one or the other, providing greater data center flexibility.
The [IBM Ethernet Switch B24X]
offers 24 fiber optic ports (that can handle 10GbE or 1GbE) and 4 copper ports (10/100/1000 Mbps RJ45).
Storage Optimization and Integration Services
[IBM Storage Optimization and
Integration Services] are available. IBM service consultants use IBM's own
Storage Enterprise Resource Planner (SERP) software to evaluate your environment and provide
recommendations on how to improve your information infrastructure. This can be especially
helpful if you are looking at deploying server virtualization like VMware or Hyper-V.
As people look towards deploying a dynamic infrastructure, these new offerings can be a welcome addition.
Eventually, there comes a time to drop support for older, outdated programs that don't meet the latest standards. I had several readers complain that they could not read my last post on Internet Explorer 6. The post reads fine on more modern browsers like Firefox 3 and even Google's Chrome browser, but not IE6.
Google confirms that warnings are appearing:
[Official: YouTube to stop IE6 support].
My choice is to either stop embedding YouTube videos, some of which are created by my own marketing team specifically on my behalf, or drop support for IE6. I choose the latter. If you are still using IE6, please consider switching to Firefox 3 or Google Chrome instead.
This week, scientists at IBM Research and the California Institute of
Technology announced a scientific advancement that could be a major
breakthrough in enabling the semiconductor industry to pack more power
and speed into tiny computer chips, while making them more energy
efficient and less expensive to manufacture. IBM is a leader in
solid-state technology, and this scientific breakthrough shows promise.
But first, a discussion of how solid-state chips are made in the first place. Basically, a round thin wafer is etched using [photolithography]
with lots of tiny transistor circuits. The same chip is repeated over
and over on a single wafer, and once the wafer is complete, it is
chopped up into little individual squares. Wikipedia has a nice article
on [semiconductor device fabrication], but I found this
[YouTube video] more illuminating.
Up until now, the industry was able to get features down to 22 nanometers, and was hitting physical limitations getting down to anything smaller. The new development from IBM and Caltech is to use
self-assembling DNA strands, folded into specific shapes using other
strands that act as staples, and then using these folded structures as
scaffolding to place in nanotubes. The result? Features as small as 6 nanometers. How cool is that?
While NAND Flash Solid-State Drives are available today, this new technique can help develop newer, better technologies like Phase Change Memory (PCM).
Over on his Backup Blog, fellow blogger Scott Waterhouse from EMC has a post titled
[Backup Sucks: Reason #38]. Here is an excerpt:
Unfortunately, we have not been able to successfully leverage economies of scale in the world of backup and recovery. If it costs you $5 to backup a given amount of data, it probably costs you $50 to back up 10 times that amount of data, and $500 to back up 100 times that amount of data.
If anybody can figure out how to get costs down to $40 for 10 times the amount of data, and $300 for 100 times the amount of data, they will have an irrefutable advantage over anybody that has not been able to leverage economies of scale.
I suspect that where Scott mentions "we" in the above excerpt, he is referring to EMC in general, with products like
Legato. Fortunately, IBM has scalable backup solutions, using either a hardware approach, or one purely with software.
The hardware approach involves using deduplication hardware technology as the storage pool for IBM Tivoli Storage Manager (TSM). Using this approach, IBM Tivoli Storage Manager would receive data from dozens, hundreds or even thousands
of client nodes, and the backup copies would be sent to an IBM TS7650 ProtecTIER data deduplication appliance, IBM TS7650G gateway, or IBM N series with A-SIS. In most cases, companies have standardized on the operating systems and applications used on these nodes, and multiple copies of data reside across employee laptops. As a result, as you have more nodes backing up, you are able to achieve benefits of scale.
Perhaps your budget isn't big enough to handle new hardware purchases at this time, in this economy. Have no fear,
IBM also offers deduplication built right into the IBM Tivoli Storage Manager v6 software itself. You can use a sequential-access disk storage pool for this. TSM scans and identifies duplicate chunks of data in the backup copies, as well as in archive and HSM data, and reclaims the space when found.
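To show what "identifying duplicate chunks" means, here is a toy Python sketch of chunk-level deduplication using fixed-size chunks and hashes; it is a conceptual illustration only, not TSM's actual chunking or hashing scheme:

# Toy chunk-level deduplication: store each unique chunk only once
# (conceptual illustration, not TSM's actual algorithm).
import hashlib

CHUNK_SIZE = 4096
store = {}   # chunk digest -> chunk bytes

def dedup_store(data: bytes) -> int:
    """Store data chunk by chunk; return how many bytes were already present."""
    saved = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in store:
            saved += len(chunk)      # duplicate chunk: just reference it
        else:
            store[digest] = chunk
    return saved

laptop_a = b"corporate standard image " * 10_000
laptop_b = b"corporate standard image " * 10_000   # mostly the same data
dedup_store(laptop_a)
saved = dedup_store(laptop_b)
print(f"Second laptop: {saved:,} of {len(laptop_b):,} bytes were already stored")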
If your company is using a backup software product that doesn't scale well, perhaps now is a good time to switch over to IBM Tivoli Storage Manager. TSM is perhaps the most scalable backup software product in the marketplace, giving IBM an "irrefutable advantage" over the competition.
This week, I was in the Phoenix area presenting at TechData's TechSelect University. TechData is one of IBM's IT distributors,
and TechSelect is their community of 440 resellers and 20 vendors. This year they celebrate the 10-year anniversary of this event. I covered three particular topics, and I was videotaped for those who were not able to attend my sessions. (There were very few empty seats at my sessions.)
IBM Business Partners now realize that the "killer app" for storage is combining the IBM System Storage SAN Volume Controller with entry-level or midrange disk storage systems for an awesome solution. Solutions based on either the Entry Edition or the standard hardware models can compete well with a variety of robust features, including thin provisioning, vDisk mirroring, FlashCopy, Metro and Global Mirror. This has the advantage that the SVC can extend these functions not just to newly purchased disk capacity, but also existing storage capacity. The newly purchased capacity can be DS3400, DS4700 or the new DS5000 models. This is great "investment protection" for small and medium sized businesses.
LTO-4 drives and automation
The Linear Tape Open (LTO) consortium--consisting of IBM, HP and Quantum--has proven wildly successful, ending the vendor lock-in of SDLT tape. I presented the latest LTO-4 offerings, including the TS2240, TS2340, TS2900, TS3100
and TS3200. The LTO consortium has already worked out a technology roadmap for LTO-5 and LTO-6. The LTO-4 drives
support WORM cartridges and on-board hardware-based encryption. The encryption keys can be managed with IBM Tivoli Key Lifecycle Manager (TKLM).
SAN and FCoCEE switches
IBM has agreements with Brocade, Cisco and Juniper Networks for various networking gear. I focused on entry-level switches for SAN fabrics, the SAN24B-4 and Cisco 9124, as well as new equipment for Convergence Enhanced Ethernet (CEE),
including IBM's Converged Network Adapter (CNA) for System x servers, and the SAN32B switch that has 24 10GbE CEE ports and 8 FC ports that support 8/4/2 and 4/2/1 Gbps SFP transceivers. Clients that want to deploy Fibre Channel over CEE (FCoCEE) today have everything they need to get started.
The venue was the
[Sheraton Wild Horse Pass Resort and Spa] in Chandler, just south of Phoenix. This compound includes [Rawhide], an 1800's era Western Town attraction, a rodeo arena, and a casino still under construction.
Dinners were held nearby at the infamous
[Rustler's Rooste] Steakhouse on South Mountain.
Back in June, I mentioned this blog was [Moving to MyDeveloperWorks] which is based on IBM Lotus Connections.
Finally, the move is complete for all bloggers. If you are having problems with the redirects, you might need to unsubscribe and re-subscribe in your RSS feed reader. Here are the new links for several IBM bloggers that have moved over:
Continuing my week in Chicago, for the IBM Storage Symposium 2008, we had sessions that focused on individual products. IBM System Storage SAN Volume Controller (SVC) was a popular topic.
SVC - Everything you wanted to know, but were afraid to ask!
Bill Wiegand, IBM ATS, who has been working with SAN Volume Controller since it was first introduced in 2003, answered some frequently asked questions about IBM System Storage SAN Volume Controller.
Do you have to upgrade all of your HBAs, switches and disk arrays to the recommended firmware levels before upgrading SVC? No. These are recommended levels, but not required. If you do plan to update firmware levels, focus on the host end first, switches next, and disk arrays last.
How do we request special support for stuff not yet listed on the Interop Matrix?
Submit an RPQ/SCORE, same as for any other IBM hardware.
How do we sign up for SVC hints and tips? Go to the IBM
[SVC Support Site] and select the "My Notifications" under the "Stay Informed" box on the right panel.
When we call IBM for SVC support, do we select "Hardware" or "Software"?
While the SVC is a piece of hardware, there are very few mechanical parts involved. Unless there are sparks,
smoke, or front bezel buttons dangling from springs, select "Software". Most of the questions are
related to the software components of SVC.
When we have SVC virtualizing non-IBM disk arrays, who should we call first?
IBM has world-renowned service, with some of IT's smartest people working the queues. All of the major storage vendors play nice
as part of the [TSAnet Agreement] when a mutual customer is impacted.
When in doubt, call IBM first, and if necessary, IBM will contact other vendors on your behalf to resolve.
What is the difference between livedump and a Full System Dump?
Most problems can be resolved with a livedump. While not complete information, it is generally enough,
and is completely non-disruptive. Other times, the full state of the machine is required, so a Full System Dump
is requested. This involves rebooting one of the two nodes, so virtual disks may temporarily run slower on that I/O group while the node restarts.
What does "svc_snap -c" do?The "svc_snap" command on the CLI generates a snap file, which includes the cluster error log and trace files from all nodes. The "-c" parameter includes the configuration and virtual-to-physical mapping that can be useful for
disaster recovery and problem determination.
I just sent IBM a check to upgrade my TB-based license on my SVC, how long should I wait for IBM to send me a software license key?
IBM trusts its clients. No software license key will be sent. Once the check clears, you are good to go.
During migration from old disk arrays to new disk arrays, I will temporarily have 79TB more disk under SVC management, do I need to get a temporary TB-based license upgrade during the brief migration period?
Nope. Again, we trust you. However, if you are concerned about this at all, contact IBM and they will print out
a nice "Conformance Letter" in case you need to show your boss.
How should I maintain my Windows-based SVC Master Console or SSPC server?
Treat this like any other Windows-based server in your shop, install Microsoft-recommended Windows updates,
run Anti-virus scans, and so on.
Where can I find useful "How To" information on SVC?
Specify "SAN Volume Controller" in the search field of the
[IBM Redbooks] vast library of helpful books.
I just added more managed disks to my managed disk group (MDG), can I get help writing a script to redistribute the extents to improve wide-striping performance?
Yes, IBM has scripting tools available for download on
[AlphaWorks]. For example, svctools will take
the output of the "lsinfo" command, and generate the appropriate SVC CLI to re-migrate the disks around to optimize
performance. Of course, if you prefer, you can use IBM Tivoli Storage Productivity Center instead for a more automated approach.
Any rules of thumb for sizing SVC deployments?
IBM's Disk Magic tool includes support for SVC deployments. Plan for 250 IOPS/TB for light workloads,
500 IOPS/TB for average workloads, and 750 IOPS/TB for heavy workloads.
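Those rules of thumb are easy to turn into a quick sizing helper; the sketch below is just the arithmetic from the answer above, not the Disk Magic tool itself:

# Quick SVC sizing estimate from the rule-of-thumb IOPS densities above.
IOPS_PER_TB = {"light": 250, "average": 500, "heavy": 750}

def estimated_iops(capacity_tb: float, workload: str = "average") -> float:
    return capacity_tb * IOPS_PER_TB[workload]

for workload in ("light", "average", "heavy"):
    print(f"40 TB, {workload} workload: {estimated_iops(40, workload):,.0f} IOPS")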
Can I migrate virtual disks from one managed disk group (MDG) to another with a different extent size?
Yes, the new Vdisk Mirroring capability can be used to do this. Create the mirror for your Vdisk between the
two MDGs, wait for the copy to complete, and then split the mirror.
Can I add or replace SVC nodes non-disruptively? Absolutely, see the Technotes
[SVC Node Replacement] page.
Can I really order an SVC EE in Flamingo Pink? Yes. While my blog post that started all
this [Pink It and Shrink It] was initially just some Photoshop humor, the IBM product manager for SVC accepted this color choice as an RPQ option.
The default color remains Raven Black.
Continuing my week in Chicago, for the IBM Storage Symposium 2008, I attended two presentations on XIV.
XIV Storage - Best Practices
Izhar Sharon, IBM Technical Sales Specialist for XIV, presented best practices for using XIV in various environments. He started out explaining the innovative XIV architecture: a SATA-based disk system from IBM can outperform FC-based disk systems from other vendors using massive parallelism. He used a sports analogy:
"The men's world record for running 800 meters was set in 1997 by Wilson Kipketer of Denmark in a time of 1:41.11.
However, if you have eight men running, 100 meters each, they will all cross the finish line in about 10 seconds."
Since XIV is already self-tuning, what kind of best practices are left to present? Izhar presented best practices for software, hosts, switches and storage virtualization products that attach to the XIV. Here are some quick points:
Use as many paths as possible.
IBM does not require you to purchase and install multipathing software as other competitors might. Instead, the XIV relies on multipathing capabilities inherent to each operating system. For multipathing preference, choose Round-Robin, which is now available on AIX and VMware vSphere 4.0, for example. Otherwise, fixed-path is preferred over most-recently-used (MRU).
Encourage parallel I/O requests.
XIV architecture does not subscribe to the outdated notion of a "global cache". Instead, the cache is distributed across the modules to reduce performance bottlenecks. Each HBA on the XIV can handle about 1400 requests. If you have fewer than 1400 hosts attached to the XIV, you can further increase parallel I/O requests by specifying a large queue depth in the host bus adapter (HBA). An HBA queue depth of 64 is a good start. Additional settings might be required in the BIOS, operating system or application for multiple threads and processes.
For sequential workloads, select a host stripe size less than 1MB. For random workloads, select a host stripe size larger than 1MB. Set rr_min_io between ten (10) and the queue depth (typically 64); setting it to half of the queue depth is a good starting point.
If you have long-running batch jobs, consider breaking them up into smaller steps and run in parallel.
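To codify the rr_min_io guidance above, a minimal sketch might look like this; the clamping simply restates the rule of thumb (between 10 and the queue depth, starting at half the queue depth), not any official formula.

```python
# Sketch of the rr_min_io rule of thumb: keep it between 10 and the HBA
# queue depth, starting at half the queue depth.
def suggested_rr_min_io(queue_depth=64):
    return max(10, min(queue_depth, queue_depth // 2))

print(suggested_rr_min_io())      # typical queue depth of 64 -> start at 32
print(suggested_rr_min_io(16))    # small queue depths get clamped up to 10
```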
Define fewer, larger LUNs
Generally, you no longer need to define many small LUNs, a practice that was often required on traditional disk systems. This means that you can now define just 1 or 2 LUNs per application, and greatly simplify management. If your application must have multiple LUNs in order to do multiple threads or concurrent I/O requests, then, by all means, define multiple LUNs.
Modern Data Base Management Systems (DBMS) like DB2 and Oracle already parallelize their I/O requests, so there is no need for host-based striping across many logical volumes. XIV already stripes the data for you. If you use Oracle Automated Storage Management (ASM), use 8MB to 16MB extent sizes for optimal performance.
For those virtualizing XIV with SAN Volume Controller (SVC), define managed disks as 1632GB LUNs, in multiples of six LUNs per managed disk group (MDG), to balance across the six interface modules. Define the SVC extent size as 1GB.
XIV is ideal for VMware. Create big LUNs for your VMFS that you can access via FCP or iSCSI.
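Here is a small sketch of that SVC-over-XIV layout rule; the usable capacity in the example is hypothetical, and the only logic is carving 1632GB managed disks in multiples of six.

```python
# Sketch of the SVC-over-XIV rule above: carve usable capacity into 1632 GB
# managed disks, rounded down to a multiple of six so the LUNs spread evenly
# across the six XIV interface modules.
MDISK_GB = 1632

def xiv_mdisk_count(usable_gb):
    count = usable_gb // MDISK_GB
    return count - (count % 6)        # keep a multiple of six

print(xiv_mdisk_count(79000))   # e.g. 79,000 GB usable -> 48 managed disks
```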
Organize data to simplify Snapshots.
You no longer need to separate logs from databases for performance reasons. However, for some backup products like IBM Tivoli Storage Manager (TSM) for Advanced Copy Services (ACS), you might want to keep them separate for snapshot reasons. Generally, putting all data for an application on one big LUN greatly simplifies administration and snapshot processing, without losing performance. If you define multiple LUNs for an application, simply put them into the same "consistency group" so that they are all snapshotted together.
OS boot image disks can be snapshotted before applying any patches, updates or application software, so that if there are any problems, you can reboot to the previous image.
Employ sizing tools to plan for capacity and performance.
The SAP Quicksizer tool can be used for new SAP deployments, employing either the user-based or throughput-based sizing model approach. The result is in a mythical unit called "SAPS", which represents 0.4 IOPS for ERP/OLTP workloads, and 0.6 IOPS for BI/BW and OLAP workloads.
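Turning SAPS into a rough IOPS estimate is simple arithmetic; the 20,000 SAPS in the example below is made up for illustration, and the 0.4/0.6 factors simply restate the figures above.

```python
# Rough SAPS-to-IOPS conversion sketch: 0.4 IOPS per SAPS for ERP/OLTP,
# 0.6 IOPS per SAPS for BI/BW and OLAP workloads.
FACTORS = {"oltp": 0.4, "olap": 0.6}

def saps_to_iops(saps, workload="oltp"):
    return saps * FACTORS[workload]

print(saps_to_iops(20000, "oltp"))   # 20,000 SAPS -> 8,000 IOPS
```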
If you already have SAP or other applications running, use actual I/O measurements. IBM Business Partners and field technical sales specialists have an updated version of Disk Magic that can help size XIV configurations from PERFMON and iostat figures.
Lee La Frese, IBM STSM for Enterprise Storage Performance Engineering, presented internal lab test results for the XIV under various workloads, based on the latest hardware/software levels [announced two weeks ago]. Three workloads were tested:
Web 2.0 (80/20/40) - 80 percent READ, 20 percent WRITE, 40 percent cache hits for READ. YouTube, FlickR, and the growing list at [GoWeb20] are applications with heavy read activity, but because of [long-tail effects], may not be as cache friendly.
Social Networking (50/50/50) - 50 percent READ, 50 percent WRITE, 50 percent cache hits for READ. Lotus Connections, Microsoft Sharepoint, and many other [social networking] workloads are more write intensive.
Database (70/30/50) - 70 percent READ, 30 percent WRITE, 50 percent cache hits for READ. These are the traditional workload characteristics for most business applications, especially databases like DB2 and Oracle on Linux, UNIX and Windows servers.
The results were quite impressive. There was more than enough performance for tier 2 application workloads, and most tier 1 applications. The performance was nearly linear from the smallest 6-module to the largest 15-module configuration. Some key points:
A full 15-module XIV overwhelms a single SVC 8F4 node-pair. For a full XIV, consider 4 to 8 nodes of the 8F4 model, or 2 to 4 nodes of the 8G4. For read-intensive cache-friendly workloads, an SVC in front of XIV was able to deliver over 300,000 IOPS.
A single-node TS7650G ProtecTIER can handle 6 to 9 XIV modules. Two nodes of TS7650G were needed to drive a full 15-module XIV. A single-node TS7650 in front of XIV was able to ingest 680 MB/sec on the seventh day of a test workload with a 17 percent per-day change rate using 64 virtual drives. Reading the data back achieved over 950 MB/sec.
For SAP environments where response times of 20-30 msec are acceptable, the 15-module XIV delivered over 60,000 IOPS. Reducing this to 25,000-30,000 IOPS cut the response time to a faster 10-15 msec.
These were all done as internal lab tests. Your mileage may vary.
Not surprisingly, XIV was quite the popular topic here this week at the Storage Symposium. There were many more sessions, but these were the only two that I attended.
Continuing my week in Chicago, at the IBM System x and BladeCenter Technical Conference, I attended an
awesome session that summarized IBM's Linux directions. Pat Byers presented the global forces driving customers to re-evaluate the TCO of their operating system choices, the need for rapid integration in an ever-changing business climate, government stimulus packages, and technology that has enabled much better solutions than we had during the last economic downturn in 2001-2003.
IBM has been committed to Linux for over 10 years now. I was part of the initial IBM team in the 1990s to work on Linux for the mainframe. In various roles, I helped get Linux attachment tested for disk and tape systems, and helped get Linux selected as an operating system platform of choice for our storage management software.
Today, Linux-based servers generate $7 Billion US dollars in revenues. For UNIX customers, Linux provides greater flexibility in hardware platforms. For Windows customers, Linux provides better security and reliability.
Initially, Linux was used for simple infrastructure applications, edge-of-the-network and Web-based workloads.
This evolved to Application and Data serving, Enterprise applications like ERP, CRM and SCM. Today,
Linux is well positioned to help IBM make our world a smarter planet, able to handle business-critical applications. It is the only operating system to scale to the full capability of the biggest IBM System x3950M2 server.
Pat gave examples of IBM's work with Linux helping clients.
City of Stockholm
The city of Stockholm, Sweden introduced congestion pricing to reduce traffic.
IBM helped them deploy systems to collect tariffs from 300,000 vehicles a day, with real-time scanning and recognition of vehicle license plates, Web-accessible payment processing, and analytics for metrics and reporting. This configuration was able to
[reduce traffic by 25 percent in the first month].
IBM helped [ConAgra Foods] switch their SAP environment from a monolithic Solaris on SPARC deployment, to a more distributed one using Novell SUSE Linux on x86. The result? Six times faster performance at 75 percent lower total cost of ownership!
IBM's strategy has been to focus on working with two of the major Linux distributors: Red Hat and Novell. It also works with [Asianux] which is like the UnitedLinux for Asia, internationalized for Japan, Korea, and China. It handles special requests for other distributions, from CentOS to Ubuntu, as needed on a case by case basis.
IBM's Linux Technology Center of 600 employees helps to enable IBM products for Linux, make Linux a better operating system, expand Linux's reach, and drive collaboration and innovation. In fact, IBM is the #3 corporate contributor to the open source Linux kernel, behind Red Hat (#1) and Novell (#2). For most IBM products, IBM tests with Linux as rigorously as it does Microsoft Windows. IBM offers complete RTS/ServicePac and SupportLine service and support contracts for Red Hat and Novell Linux.
At the IBM Solutions Center this week, several booths used Linux bootable USB sticks to run their software.
[Novell SUSE Studio] was developed to help customize Linux to the specific needs of independent vendors.
Both Red Hat and Novell offer distributions in four categories:
Standard - for small entry-level servers, with support for a few virtual guests
Advanced Platform - for bigger servers, and support for many or unlimited number of virtual guests
High Performance Computing - HPC and Analytics for large grid deployments
Real Time - for real time processing, such as with
[IBM WebSphere Real Time], where
sub-second response time is critical.
A key difference between Red Hat and Novell appears to be on their strategy towards server virtualization.
Red Hat wants to position itself as the hypervisor of choice, for both server and desktop virtualization, announcing the Kernel-based Virtual Machine
[KVM] in their Red Hat Enterprise Linux (RHEL) 5.4 release, and their upcoming
RHEV-H, a tight 128MB hypervisor to compete against VMware ESXi. Meanwhile, Novell is focusing SUSE on being
the perfect virtual guest OS, being hypervisor-aware and having consistent terms and licensing when run under any hypervisor, including VMware, Hyper-V, Citrix Xen, KVM or others.
IBM has tons of solutions that are based on Linux, including the IBM Information Server blade, the InfoSphere Balanced Warehouse, SAN Volume Controller (SVC), TS7650 ProtecTIER data deduplication virtual tape library, Grid Medical Archive Solution (GMAS), Scale-out File Services (SoFS), Lotus Foundations, and the IBM Smart Cube.
If you are interested in trying out Linux, IBM offers evaluation copies at no charge for 30 to 90 days. For
more on how to deploy Linux successfully on IBM servers, see the
[IBM Linux Blueprints] landing page.
Continuing my week in Chicago, for the IBM Storage Symposium 2009, I attended several sessions intended to answer the questions of the audience.
In an effort to be cute, the System x team had a "Meet the xPerts" session at their System x and BladeCenter Technical Conference, so the storage side decided to do the same. Traditionally, these have been called "Birds of a Feather", "Q&A Panel", or "Free-for-All". They allow anyone to throw out a question, and have the experts in the room, either
IBM, Business Partner or another client, answer the question from their experience.
Meet the Experts - Storage for z/OS environments
Here were some of the questions answered:
I've seen terms like "z/OS", "zSeries" and "System z" used interchangeably, can you help clarify what this particular session is about?
IBM's current mainframe servers are all named "System z", such as our System z9 or System z10. These replace the older zSeries models of hardware. z/OS is one of the six operating systems that run on this hardware platform. The other five are z/VM, z/VSE, z/TPF, Linux and OpenSolaris. The focus of this session will be storage attached and used for z/OS specifically, including discussions of Omegamon and DFSMS software products.
What can we do to reduce our MIPS-based software licensing costs from our third party vendors?
Consider using IBM System z Integrated Information Processor (zIIP) specialty engines, which can run eligible work without adding to the general-purpose processor capacity that drives MIPS-based licensing charges.
What about 8 Gbps FICON?
IBM has already announced
[FICON Express8] host bus adapter (HBA) cards that will auto-negotiate down to 4Gbps and 2Gbps speeds. If you don't need full 8Gbps speed now, you can
still get the Express8 cards, but populate them with 4/2/1 Gbps SFP ports instead. Currently, LongWave (LW) is only supported to 4km at 8Gbps speed.
I want to use Global Mirror from my DS8100 to my remote DS8100, but also make test copies of my production data to
an older ESS 800 I have locally. Any suggestions? Yes, consider using FlashCopy to simplify this process.
I have Global Mirror (GM) running now successfully with DSCLI, and now want to deploy IBM Tivoli Storage Productivity Center for Replication. Is that possible? Yes, Productivity Center for Replication will detect existing GM relationships, and start managing them.
I have already deployed HyperPAV and zHPF, is there any value in getting Solid-State Drives as well?
HyperPAV and zHPF impact CONN time, but SSD impacts DISC time, so they are complementary.
How should I size my FlashCopy SE pool? SE refers to "Space Efficient", which stores only the changes
between the source and destination copies of each LUN or CKD volume involved. The general recommendation is to start with 20 percent of the source capacity and adjust accordingly.
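The 20 percent starting point is easy to turn into a quick estimate; here is a trivial sketch, with a hypothetical 10TB set of source volumes.

```python
# Sketch of the FlashCopy SE sizing rule of thumb: start the repository at
# 20 percent of the source capacity and adjust from there.
def se_pool_gb(source_gb, change_rate=0.20):
    return source_gb * change_rate

print(se_pool_gb(10000))   # 10,000 GB of source volumes -> 2,000 GB SE pool to start
```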
How many RAID ranks should I configure per DS8000 extent pool? IBM recommends 4 to 8 ranks per pool.
Meet the Experts: Storage for Linux, UNIX and Windows distributed systems
This session was focused on storage systems attached to distributed servers, as well as products from Tivoli used to manage them. Here were some of the questions answered:
When we migrated from Tivoli Storage Manager v5 to v6, we lost our favorite "Operational Reporting" tool. How can we get TOR back? You now get the new Tivoli Common Reporting tool.
How can we identify appropriate port distribution for multiple SVC node pairs for load balancing?
IBM Tivoli Storage Productivity Center v4.1 has hot-spot analysis with recommendations for Vdisk migrations.
We tried TotalStorage Productivity Center way back when, but the frequent upgrades were killing us. How has it been lately? It has been much more stable since v3.3, and was renamed to Tivoli Storage Productivity Center to avoid association with versions 1 and 2 of the predecessor product. The new "lightweight agents" feature of v4.1 resolves many of the problems you were experiencing.
We have over 1600 SVC virtual disks, how do we handle this in IBM Tivoli Storage Productivity Center? Use the Filter capability in combination with clever naming conventions for your virtual disks.
How can we be clever when we are limited to only 15 characters? Ok. We understand.
We are currently using an SSPC with Windows 2003 and 2GB memory, but we are only using the Productivity Center for Replication feature of it. Can we move the DB2 database over to a Windows 2008 server with 4GB of memory?
Consider using the IBM Tivoli Storage Productivity Center for Replication software instead of SSPC for special
circumstances like this.
We love the XIV GUI, how soon will all other IBM storage products have it also? As with every acquisition,
IBM evaluates if there are technologies from new products that can be carried back to existing products.
We are currently using 12 ports on our existing XIV, and love it so much we plan to buy a second frame, but are concerned about consuming another 12 ports on our SAN switch. Any suggestions? Yes, use only six ports per frame. Just because you have more ports, doesn't mean you are required to use them.
We have heard there are concerns from the legal community about using deduplication technology, any ideas how to address that?
Nobody here in the room is a lawyer, and you should consult legal counsel for any particular situation.
None of the IBM offerings intended for non-erasable, non-rewriteable (NENR) data retention records (DR550, WORM tape, N series SnapLock) support dedupe today, and none of IBM's deduplication offerings (TS7650, N series A-SIS, TSM) make any fit-for-purpose claims for regulatory compliance storage. However, be assured that all of IBM's dedupe technology involves byte-for-byte comparisons so that you never lose any data due to false hash collisions. For all IBM compliance storage, what you write will be read back in the correct sequence of ones and zeros.
Continuing my week in Chicago, for the IBM Storage Symposium 2009, I attended what was, in my opinion, the best session of the week. This was by a guy named Chip Copper, who covered IBM's set of Ethernet and Fibre Channel networking gear. Attributes are the four P's:
Power and Cooling (electricity usage)
Equipment comes in two flavors: Top-of-Rack (ToR) thin pizza box switches, and Middle-of-Row (MoR) much larger directors. The MoR directors are engineered for up to 50Gbps per half-slot, so 10GbE and the future 40GbE can be easily accommodated in a single half-slot, and the future 100GbE can be done with a full slot (two half-slots).
While many companies might have been contemplating the switch from copper wires to optical fiber, there is a new reason for copper cables: Power-over-Ethernet (PoE). Many IP-phones, digital video surveillance cameras, and other equipment can have a single cable that delivers both signal and electricity over copper. If you have already deployed optical fiber throughout the building, there are "last mile" options where the signals are converted to copper wires and electrical energy added for these types of devices.
Two directors can be connected together with Inter-Chassis Link (ICL) cables to make them look like a single director with twice the number of ports. These are different from Inter-Switch Links (ISL) as they are not counted as an extra "hop" for hop-counting purposes, which is especially important for FICON usage.
Today, we have 1Gbps, 2Gbps, 4Gbps and 8Gbps Fibre Channel. Since these all use 10-for-8 encoding (10 bits represent one 8-bit byte), it is easy to calculate throughput: 8Gbps is 800 MB/sec, for example. Auto-negotiation between speeds is not done at the HBA card, switch or director blade itself, but in the Small Form-factor Pluggable (SFP) optical connector. However, you can only auto-negotiate if the encoding matches. The 4/2/1 SFP can run at 4Gbps or auto-negotiate to slower 2Gbps and 1Gbps. The 8/4/2 SFP can run at 8Gbps, or auto-negotiate down to slower 4Gbps and 2Gbps. Folks who still have legacy 1Gbps equipment, but want to run some things at 8Gbps, can buy 8Gbps-capable switches or director blades, but then put some 4/2/1 SFPs into them. These 4/2/1 SFPs are cheaper, so this might be something to consider if budgets are tight. Some SFPs handle up to 10km distances, but others only 4km, so be careful not to order the wrong ones.
Unfortunately, there are proposals in place for 10Gbps and 40Gbps that would use a different 66-for-64 encoding (66 bits represent 64 bits, or 8 bytes), so 10Gbps would be about 1200 MB/sec. These are used today for ISLs between directors and switches. In theory, the 40Gbps could auto-negotiate down to 10Gbps, but not to any of the 8/4/2/1 Gbps speeds that use the different 10-for-8 encoding.
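If you want to double-check the encoding arithmetic yourself, here is a short sketch that simply restates the 10-for-8 and 66-for-64 math above.

```python
# Usable throughput from the nominal link speed and its line encoding:
# 8b/10b for 1/2/4/8 Gbps Fibre Channel, 64b/66b for the 10/40 Gbps proposals.
def throughput_mb_per_sec(gbps, encoding="8b10b"):
    efficiency = 8 / 10 if encoding == "8b10b" else 64 / 66
    return gbps * 1000 * efficiency / 8        # bits per second -> bytes

print(round(throughput_mb_per_sec(8, "8b10b")))     # 8 Gbps FC  -> 800 MB/sec
print(round(throughput_mb_per_sec(10, "64b66b")))   # 10 Gbps    -> roughly 1200 MB/sec
```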
For those who cannot afford a SAN768B, there is a smaller SAN384B that can carry: 192 ports (4Gbps/2Gbps), 128 ports (8Gbps) or 24 ports (10Gbps). The SAN384B can be ICL connected to another SAN384B or even the SAN768B as your needs grow.
On the entry-level side, the SAN24B-4 offers a feature called "Access Gateway". This makes the SAN24B look like a SAN end-point host, rather than a switch, and makes initial deployment of integrated bundled solutions easier. Once connected to everything, you can convert it over to full "switch" mode. The SAN40B-4 and SAN80B-4 provide midrange level support, including Fibre Channel routing at the 8Gbps level. In fact, all 8Gbps ports include routing capability. IBM offers both single-port and dual-port 8Gbps host bus adapter (HBA) cards to connect to these switches. These HBAs offer 16 virtual channels per port, so that if you have VMware running many guests, or want to connect both disk and tape to the same HBA, you can keep the channel traffic separate for Quality of Service (QoS).
Chip wrapped up his session by discussing Fibre Channel over Ethernet (FCoE), and explained why we need a loss-less Convergence Enhanced Ethernet (CEE) to meet the needs of storage traffic as well as traditional Fibre Channel does today. IBM offers all of the equipment you need to get started today on this FCoCEE, with Converged Network Adapter (CNA) cards for your System x servers, and a new Converged B32 switch that has 24 10GbE CEE ports and 8 traditional 8Gbps FC ports. This means that you can put the CNA card in your existing servers, connect to this switch, and then connect to your existing 10GbE LAN and your existing 8Gbps or 4Gbps FC-based SAN to the rest of your storage devices.
Worried that the FCoE or CEE standards could change after you deploy this gear? Aren't most LAN and SAN switches based on Application-Specific Integrated Circuit [ASIC] chips which are created in the factory? Don't worry, IBM's equipment puts all of the standards-vulnerable portions of the logic into separate Field-Programmable Gate Arrays [FPGA] that can be updated with a simple firmware upgrade. This is future-proofing I can agree with!
Continuing my week in Chicago, I decided to attend some of the presentations from the System x side. This is the advantage of running both conferences in the same hotel, attendees can choose how many of each they want to participate in.
Wayne Wigley, IBM Advanced Technical Support (ATS), delivered a series of presentations on different server virtualization offerings available for System x and BladeCenter servers. I am very familiar with virtualization implemented on System z mainframes, as well as IBM's POWER systems, and have working knowledge of Linux KVM and Xen, so I was well prepared to handle hearing the latest about Microsoft's Hyper-V and VMware's vSphere version 4.
Microsoft Hyper-V 2008
Hyper-V can run as part of Windows 2008, or standalone on its own. Different levels of Windows 2008 include licenses for a different number of Windows virtual machines (VMs). Windows Server 2008 Standard includes 1 Windows VM, Enterprise includes 4 Windows VMs, and the Datacenter edition includes an unlimited number of Windows VMs. If you want to run more Windows VMs than come included, you need to pay extra for each additional one. For example, to run 10 Windows VMs on a 2-socket server would cost about $9000 US dollars on Standard but only $6000 US dollars on the Datacenter edition (list prices from the Microsoft Web site).
Unlike VMware, which takes a monolithic approach as hypervisor, Hyper-V is more like Xen with a microkernelized approach. This means you need a "parent" guest OS image, and the rest of the guest OS images are then considered "child" images. These child images can be various levels of Windows, from Windows XP Pro to Windows Server 2008, Xen-enabled Linux, or even a non-hypervisor-aware OS. The "parent" guest OS image provides networking and storage I/O services to these "child" images. For the hypervisor-aware versions of Windows and Linux, Hyper-V allows optimized access to the hypervisor, "synthetic devices", and hypercalls. Synthetic devices present themselves as network devices, but only serve to pass data along the VMBus to other networking resources. This process does not require software emulation, and therefore offers higher performance for virtual machines and lower host system overhead. For non-hypervisor-aware OS images, Hyper-V provides device emulation through the "parent" image, which is slower.
Microsoft System Center Virtual Machine Manager (SCVMM) can manage both Hyper-V and VMware VI3 images. Wayne showed various screen shots of the GUI available to manage Hyper-V images. In standalone mode, you lose the nice GUI and management console.
Hyper-V supports external, internal and private virtual LANs (VLAN). External means that VMs can communicate with the outside world over standard ethernet connections. Internal means that VMs can communicate with "parent" and "child" guest images on the same server only. Private means that only "child" guests can communicate with other "child" images.
Hyper-V supports disk attached via IDE, SATA, SCSI, SAS, FC, iSCSI, NFS and CIFS. One mode is "Virtual Hard Disk" (VHD), similar to VMware VMDK files. The other is "pass through" mode, which uses actual disk LUNs accessed natively. VHDs can be dynamic (thin provisioned), fixed (fully allocated), or differencing. The concept of differencing is interesting, as you start with a base read-only VHD volume image, and have a separate "delta" file that contains changes from the base image.
Some of the key features of Hyper-V 2008 are:
Being able to run concurrently 32-bit and 64-bit versions of Linux and Windows guest images
Support for 64 GB of memory and 4-way symmetric multiprocessing (SMP) per VM
Clustering for High Availability and Quick Migration of VM images
Live backup with integration with Microsoft's Volume Shadow Copy Services (VSS)
Virtual LAN (VLAN) support, and Virtual and Pass-through physical disk support
A clever VMbus, virtual service parent/client approach to sharing hardware
Optimized performance options for hypervisor-aware versions of Windows and Linux, and emulated support for non-hypervisor-aware OS images.
VMware vSphere v4.0
This was titled as an "Overview" session, but really was an "Update" session on the newest features of this release. The big change appears to be that VMware added "v" in front of everything.
Under vCompute, there are some new features in VMware's Distributed Resource Scheduler (DRS), which includes recommended VM migrations. Dynamic Power Management (DPM) will move VMs during periods of low usage to consolidate onto fewer physical servers so as to reduce energy consumption.
Under vStorage, vSphere introduces an enhanced Pluggable Storage Architecture (PSA), with support for Storage Array Type Plugins (SATP) and Path Selection Plugins (PSP). This vStorage API allows for third party plugins for improved fault-tolerance and complex I/O load balancing algorithms. This release also has improved support for iSCSI, including Challenge-Handshake Authentication Protocol (CHAP) support. Similar to Hyper-V's dynamic VHD, VMware supports "thin provisioning" for their virtual disk VMDK files. A feature of "Storage VMotion" allows conversion between "thick" and "thin" provisioning formats.
The vStorage API for Data Protection provides all the features of VMware Consolidated Backup (VCB). The API provides full, incremental and differential file-level backups for Windows and Linux guests, including support for snapshots and Volume Shadow Copy Services (VSS) quiescing.
VMware introduces direct I/O pass-through for both NIC and HBA devices. While this allows direct access to SAN-attached LUNs similar to Hyper-V, you lose a lot of features like VMotion, High Availability and Fault Tolerance. Wayne felt that these restrictions are temporary, and that hopefully VMware will resolve this over the next 12 months.
Under vNetwork, VMware has virtual LAN switches called vSwitches. This includes support for IPv6 and VLAN offloading.
The vSphere server can now run with up to 1TB of RAM and 64 logical CPUs to support up to 320 VM guest images. Each VM can have up to 255GB RAM and up to 8-way SMP. vSphere ESX 4 introduces a new virtual hardware platform called VM Hardware v7. While vSphere 4.0 can run VMs from ESX 2 and ESX 3, the problem is that if you have new VMs based on this newer VM Hardware v7, you cannot run them on older ESX versions.
vSphere comes in four editions: Standard, Advanced, Enterprise, and Enterprise Plus, ranging in list price from $795 to $3495 US dollars.
While IBM is the #1 reseller of VMware, we are also proud to support Hyper-V, Xen, KVM and other similar products. Analysts expect most companies will have two or more server virtualization solutions in their data center, and it is good to see that IBM supports them all.
Continuing my week in Chicago for the IBM Storage and Storage Networking Symposium and System x and BladeCenter Technical Conference, I presented a variety of topics.
Hybrid Storage for a Green Data Center
The cost of power and cooling has risen to be a #1 concern among data centers. I presented the following hybrid storage solutions that combine disk with tape. These provide the best of both worlds, the high performance access time of disk with the lower costs and reduced energy consumption of tape.
IBM [System Storage DR550] - IBM's Non-erasable, Non-rewriteable (NENR) storage for archive and compliance data retention
IBM Grid Medical Archive Solution [GMAS] - IBM's multi-site grid storage for PACS applications and electronic medical records [EMR]
IBM Scale-out File Services [SoFS] - IBM's scalable NAS solution that combines a global name space with a clustered GPFS file system, serving as the ideal basis for IBM's own [Cloud Computing and Storage] offerings
Not only do these help reduce energy costs, they provide an overall lower total cost of ownership (TCO) than traditional WORM optical or disk-only storage configurations.
The Convergence of Networks - Understanding SAN, NAS and iSCSI in the Data Center Network
This turned out to be my most popular session. Many companies are at a crossroads in choosing data and storage networking solutions in light of recent announcements from IBM and others. In the span of 75 minutes, I covered:
Block storage concepts, storage virtualization and RAID levels
File system concepts, how file systems map files to block storage
Network Attach Storage, the history of the NFS and CIFS protocols, Pros and Cons of using NAS
Storage Area Networks, the history of SAN protocols including ESCON, FICON and FCP, Pros and Cons of using SAN
IP SAN technologies, iSCSI and Fibre Channel over Ethernet (FCoE), Pros and Cons of using this approach
Network Convergence with Infiniband and Fibre Channel over Convergence Enhanced Ethernet (FCoCEE), why Infiniband was not adopted historically in the marketplace as a storage protocol, and the features and enhancements of Convergence Enhanced Ethernet (CEE) needed to merge NAS, SAN and iSCSI traffic onto a single converged data center network [DCN]
Yes, it was a lot of information to cover, but I managed to get it done on time.
IBM Tivoli Storage Productivity Center version 4.1 Overview and Update
In conferences like these, there are two types of product-level presentations. An "Overview" explains how products work today to those who are not familiar with them. An "Update" explains what's new in this version of the product for those who are already familiar with previous releases. I decided to combine these into one session for IBM's new version of [Tivoli Storage Productivity Center]. I was one of the original lead architects of this product many years ago, and was able to share many personal experiences about its evolution in development and in the field at client facilities. Analysts have repeatedly rated IBM Productivity Center as one of the top Storage Resource Management (SRM) tools available in the marketplace.
Information Lifecycle Management (ILM) Overview
Can you believe I have been doing ILM since 1986? I was the lead architect for DFSMS, which provides ILM support for z/OS mainframes. In 2003-2005, I spent 18 months in the field performing ILM assessments for clients, and now there are dozens of IBM practitioners in Global Technology Services and STG Lab Services that do this full time. This is a topic I cover frequently at the IBM Executive Briefing Center [EBC], because it addresses several top business challenges:
Reducing costs and simplifying management
Improving efficiency of personnel and application workloads
Managing risks and regulatory compliance
IBM has a solution based on five "entry points". The advantage of this approach is that it allows our consultants to craft the right solution to meet the specific requirements of each client situation. These entry points are:
Tiered Information Infrastructure - we don't limit ourselves to just "Tiered Storage" as storage is only part of a complete [information infrastructure] of servers, networks and storage
Storage Optimization and Virtualization - including virtual disk, virtual tape and virtual file solutions
Process Enhancement and Automation - an important part of ILM are the policies and procedures, such as IT Infrastructure Library [ITIL] best practices
Archive and Retention - space management and data retention solutions for email, database and file systems
I did not get as many attendees as I had hoped for this last one, as I was competing head-to-head in the same time slot as Lee La Frese covering IBM's DS8000 performance with Solid State Disk (SSD) drives, John Sing covering Cloud Computing and Storage with SoFS, and Eric Kern covering IBM Cloudburst.
I am glad that I was able to make all of my presentations at the beginning of the week, so that I can then sit back and enjoy the rest of the sessions as a pure attendee.
This week I am in Chicago for the IBM Storage and Storage Networking Symposium, which coincides with the System x and BladeCenter Technical Conference. This allows the 800 attendees to attend either storage or server presentations at their convenience. There were hundreds of sessions, over 20 time slots, so for each time slot, you had 15 or so topics to choose from. Mike Kuhn kicked off the series of keynote sessions. Here's my quick recap of each one:
Curtis Tearte, General Manager, IBM System Storage
Curtis replaced Andy Monshaw as General Manager for IBM System Storage. His presentation focused on how storage fits into IBM's Dynamic Infrastructure strategy. Some interesting points:
A billion camera-enabled cell phones were sold in 2007, compared to 450 million in 2006.
IBM expects that there will be 2 billion internet users by 2011, as well as trillions of "things".
In the US, there were 2.2 million medical pharmacy dispensing errors resulting from handwritten prescriptions.
Time wasted looking for parking spaces in Los Angeles consumed 47,000 gallons of gasoline, and generated 730 tons of carbon dioxide.
In the US, 4.2 billion hours are lost, and 2.9 billion gallons of gas consumed, due to traffic congestion.
Over the past decade, servers went from 8 watts to 100 watts per $1000 US dollars.
Data growth appears immune to the economic recession. The digital footprint per person is expected to grow from 1TB today to over 15TB by 2020.
10 hours of YouTube videos are uploaded every minute.
Bank of China manages 380 million bank accounts, processing over 10,000 transactions per second.
At the end of the session, Curtis transitioned from demonstrating his knowledge and passion of storage to his knowledge and passion in his favorite sport: baseball. Chicago is home to both the Cubs and the White Sox.
Roland Hagan, Vice President Business Line Executive, System x
IBM sets the infrastructure agenda for the entire industry. The Dynamic Infrastructure initiative is not just IT, but a complete end-to-end view across all of the infrastructures in play, including transportation, manufacturing, services and facilities. Companies spent over $60 billion US dollars on servers last year. Of these, 53 percent went to x86-based servers, 9 percent to Itanium-based, 26 percent to RISC-based (POWER6, SPARC, etc.), and 11 percent to mainframes. The economic downturn has impacted revenues, but the percentages continue about the same.
The dominant deployment model remains one application per server. As a result, power, cooling and management costs have grown tremendously. There are system admins opposed to consolidating server images with VMware, Hyper-V, Xen or other server virtualization technologies. Roland referred to these admins as "server huggers". To help clients adopt cloud computing technologies, IBM introduced [Cloudburst] appliances. IBM plans to offer specialized versions for developers, for service providers, and for enterprises.
IBM's Enterprise-X Architecture is what differentiates IBM's x86-based servers from all the competitors, surrounding Intel and AMD processors with technology that provides distinct advantages. For example, to support server virtualization, IBM's eX4 provides support for more memory, which often is more critical than CPU resources when deploying a large number of guest OS images. IBM System x servers have an integrated management module (IMM) and were the first to change over from BIOS to the new Unified Extensible Firmware Interface [UEFI] standard.
IBM servers offer double the performance, consume half the power, and cost a third less to manage than comparably priced servers from competitors. Of the top 20 most energy efficient server deployments, 19 are from IBM. Roland cited customer reference SciNet, a 4000-server supercomputer with 30,000 cores based on IBM [iDataPlex] servers. At 350 TeraFLOPs, it is ranked the #16 fastest supercomputer in the world, and #1 in Canada. With a power usage effectiveness (PUE) of less than 1.2, it also is very energy efficient. This means that for every 12 watts of electricity going into the data center, 10 watts are used for servers, storage and networking gear, and only 2 watts are used for power and cooling. Traditional data centers have a PUE around 2.5, consuming 25 watts total for every 10 watts used by servers, storage and networking gear.
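For anyone who has not worked with PUE before, the arithmetic is simply total facility power divided by the power delivered to the IT equipment; here is a quick sketch using the watt figures above.

```python
# Power usage effectiveness: total facility power / IT equipment power.
def pue(total_watts, it_watts):
    return total_watts / it_watts

print(round(pue(12, 10), 2))   # SciNet-style: 12 W in, 10 W to IT gear -> PUE 1.2
print(round(pue(25, 10), 2))   # traditional data center                -> PUE 2.5
```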
Clod Barrera, Distinguished Engineer, Chief Technical Strategist for IBM System Storage
Clod presented trends and directions for disk and tape technology, disk and tape systems, and the direction towards cloud computing.
Long-time readers of my blog know that typically IBM makes its announcements on Tuesdays, but this week, we had an announcement today, Wednesday!
IBM announced agreements with Brocade, Cisco and Juniper Networks to help build more dynamic infrastructures. An IBM study estimates that the "digital footprint" of each person will grow from 1TB today to 16TB by the year 2020, and all of that data will need bandwidth to get around. IBM's Data Center Networking (DCN) initiative is focused on providing clients with solutions to address these three key areas in networking:
Partner for choice of networking products based on open standards, to ensure clients can select from a full range of hardware options for data center networks. IBM will build on its strong partnerships with leading networking vendors to provide greater customer choice.
Integrate with IBM Data Center Products and Services. IBM continues to deliver a comprehensive portfolio of services for network design, integration and management from IBM Global Technology Services (GTS).
IBM differentiates itself with Common Data Center Management, where we can bring together software products, such as IBM Systems Director and Tivoli portfolio, to create what we refer to as a "Unified Service Management" software platform.
Here's a sample of what IBM announced:
IBM continues its strategic support for Fibre Channel over Ethernet (FCoE) with a Converged B32 switch, and Converged Network Adapters (CNA) for IBM System x servers. A CNA card does the job of both the Ethernet Network Interface Card (NIC) as well as the Fibre Channel Host Bus Adapter (HBA), reducing the number of cables from each server.
IBM and its Business Partners will resell the Cisco Nexus 5000 Series Switches, a leading family of high-performance, low-latency switches for data center networks supporting lossless 10GbE, Fibre Channel and Fibre Channel over Ethernet (FCoE).
IBM will OEM selected Juniper EX and MX switches and routers. This expands upon IBM and Juniper's long-term relationship that includes a reseller agreement with IBM Global Technology Services, collaboration on Juniper's Stratus Project, and IBM's ten worldwide Cloud Labs.
Last week, I was in Austin, and had dinner at [Rudy's Country Store and BBQ]. They offer their self-proclaimed "Worst BBQ in Austin!" with brisket, sausage and other meats by weight. I got a beer, some potato salad, and creamed corn, all at additional cost, of course. When I went to the cashier to pay, I was offered all the white bread I wanted at no additional charge. Are you kidding me? You are going to charge me for beer, but give me 8 to 12 complimentary slices of white bread (practically half a loaf)? Honestly, I consider bread and beer to be basically the same functional food item, differing only in solid versus liquid form. I chose to have only four slices. The food was awesome!
I am reminded of that by my latest exchange with EMC. It didn't take long after IBM's announcement yesterday of its continued investment in its strategic product set, the IBM System Storage DS8000 series, for competitors to respond. In particular, fellow blogger BarryB from EMC has a post [DS8000 Finally Gets Thin Provisioning] that pokes fun at the new Thin Provisioning feature.
Interestingly, the attack is not on the technical implementation, which is straightforward and rock-solid, but rather that the feature is charged at a flat rate of $69,000 US dollars (list price) per disk array. BarryB claims that EMC Corporate recently decided to reduce the price of their own thin provisioning, called Symmetrix Virtual Provisioning (VP), on a select subset of models in their storage portfolio, although I have not found an EMC press release to confirm this. In other words, EMC will bury the cost of thin provisioning into the total cost for new sales, and stop shafting, er.. over-charging their existing Symmetrix customers that are interested in licensing this feature.
BarryB claims it was just a lucky coincidence that his blog post happened days before IBM's announcement.
(Update: While the timing appears suspicious, I am not accusing Mr. Burke of any wrongdoing or of insider knowledge of IBM's plans, nor am I aware of any investigations on this matter from the SEC or any other government agency, and apologize if my previous attempt at humor suggested otherwise. BarryB claims that the reduction in price was motivated to counter HDS's publicly announced "Switch It On" program, and that it is not a secret that EMC reduced VP pricing weeks ago, effective beginning 3Q09, just not widely advertised in any formal EMC press releases. Perhaps this new VP pricing was only disclosed to EMC's existing Symmetrix customers, Business Partners, and employees. Perhaps EMC's decision not to announce this in a press release was to avoid upsetting all the EMC CLARiiON customers that continue to pay for Thin Provisioning, or to avoid a long line of existing VP customers asking for refunds. In any case, people are innocent until proven otherwise, and BarryB rightfully deserves the presumption of innocence in this regard. I'm sorry, BarryB, for any trouble my previous comments may have caused you.)
Instead, let's explore some events over the past year that have led up to this.
Let's start with what EMC previously charged for this feature. Software features like this often follow a common pricing method: priced per TB, so larger configurations pay more, but tiered so that larger configurations pay less per TB, combined with a yearly maintenance cost.
(Update: EMC has asked me nicely not to post their actual list prices, so I will provide rough estimates instead. According to BarryB, these are no longer the current prices, so I present them as historical figures for comparison purposes only.)
The comparison table covered: initial list price, software maintenance (SWMA) percentage, software maintenance per year, number of years, and total software license cost over four years.
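To make that pricing arithmetic concrete, here is a minimal sketch of a tiered per-TB license plus yearly maintenance. The tier boundaries, dollar amounts and maintenance percentage are hypothetical placeholders, not any vendor's actual price list.

```python
# Hypothetical tiered per-TB pricing sketch: bigger configurations pay less
# per TB, and software maintenance (SWMA) is a yearly percentage of the
# license cost. All numbers below are placeholders for illustration.
TIERS = [(10, 3000), (50, 2000), (float("inf"), 1000)]   # (tier ceiling in TB, $ per TB)

def four_year_cost(capacity_tb, swma_pct=0.20, years=4):
    license_cost, remaining, prev_limit = 0, capacity_tb, 0
    for limit, per_tb in TIERS:
        tb_in_tier = min(remaining, limit - prev_limit)
        license_cost += tb_in_tier * per_tb
        remaining -= tb_in_tier
        prev_limit = limit
        if remaining <= 0:
            break
    return license_cost + license_cost * swma_pct * years

print(four_year_cost(100))   # 100 TB at these made-up tiers and 20% yearly SWMA
```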
Holy cow! How did EMC get away with charging so much for this? To be fair, these prices are often deeply discounted, a practice common across the industry. However, it was easy for IBMers to show EMC customers that putting SVC or N series gateways in front of their existing EMC disks was more cost effective. Both SVC and N series, as well as IBM's XIV, provide thin provisioning at no additional charge.
HDS offers their own thin provisioning, called Hitachi Dynamic Provisioning. Hitachi also offers an SVC-like capability to virtualize storage behind the USP-V. However, I suspect that fewer than 10 percent of their installed base actually licensed this capability because it cost so much. Under the cost pressure from IBM's thin provisioning capabilities in SVC, XIV and N series, Hitachi launched its ["Switch It On"] marketing campaign to activate virtualization and provide some features at no additional charge, including the first 10TB of Hitachi Dynamic Provisioning.
Last week, Martin Glassborow on his StorageBod blog argued that EMC and HDS should [Set the Wide Stripes Free]. Here is an excerpt:
HDS and EMC are both extremely guilty in this regard, both Virtual Provisioning and Dynamic Provisioning cost me extra as an end-user to license. But this is the technology upon which all future block-based storage arrays will be built. If you guys want to improve the TCO and show that you are serious about reducing the complexity to manage your arrays, you will license for free. You will encourage the end-user to break free from the shackles of complexity and you will improve the image of Tier-1 storage in the enterprise.
Martin is using the term "free" in two contexts above. In the Linux community, we are careful to clarify "free, as in free speech" or "free, as in free beer". Technically, EMC's virtual provisioning is neither, as one has to purchase the hardware to get the feature, so the term "at no additional charge" is more legally correct.
However, the discussion of "free beer" brings me back to my first paragraph about Rudy's BBQ. Nearly everyone eats bread, with the exception of those with [Celiac Disease], which causes an intolerance for the gluten protein in wheat, so burying the cost of white bread in the base cost of the BBQ meat is reasonable. In contrast, not everyone drinks beer, and there are probably several people who would complain if the cost of beer was included in the cost of the BBQ meat, so charging separately for beer makes business sense.
The same applies in the storage industry. When all (or most) customers of a product can benefit from a feature, it makes sense to include it at no additional charge. When a significant subset might not want to pay a higher base price because they won't use or benefit from a feature, it makes sense to make it optionally priced.
For the IBM SVC, XIV and N series, all customers can benefit from thin provisioning, so it is included at no additional charge.
For the IBM System Storage DS8000, perhaps some 30 to 40 percent of our clients have only System z and/or System i servers attached, and therefore would not benefit from this new thin provisioning. It would seem unfair to raise the base price on everybody. The $69,000 flat rate was competitively priced against the prices EMC, HDS and 3PAR were charging for similar capability, and lower than the cost to add a new SVC cluster in front of the DS8000. IBM also charges an annual maintenance fee, but one far lower than what others charge as well.
(Note: These list prices are approximate, and vary slightly based on whether you are on legacy, ESA, Servicesuite or ServiceElect software and subscription (S&S) service plans, and the machine type/model. The tables were too complicated to include here in this post, so these numbers are rounded for comparison purposes only.)
The IBM figures covered the same categories: the flat-rate license charge, approximate software maintenance per year, number of years, and total software license cost over four years.
Pricing is more art than science. Getting the right pricing structure that appears fair to everyone involved can be a complicated process.
Well, it's Tuesday, and you know what that means? IBM announcements!
Today we had several for the IBM System Storage product line. Here are some of them:
DS8000 gets thinner, leaner and faster
The 4.3 level of microcode for the IBM System Storage DS8000 series disk systems [announced enhancements] for both fixed block architecture (FBA) LUNs and count key data (CKD) volumes.
For FBA LUNs that attach to Linux, UNIX and Windows distributed systems, IBM announced DS8000 Thin Provisioning native support. Of course, many people already had this by putting the IBM System Storage SAN Volume Controller (SVC) in front, but now DS8000 clients without SVC can also achieve the benefits of thin provisioning. This support also makes quick initialization a whopping 2.6 times faster.
For CKD volumes attached to z/OS on System z mainframes, IBM announced zHPF multitrack support for z/OS 1.9 and above. zHPF provides High Performance FICON, and can now handle multitrack I/O transfers for even better performance for zFS, HFS, PDSE, and extended striped data sets.
XIV gets better connected
A lot of the XIV [announced enhancements] and preview announcements centered around better connectivity. Here's a run down:
Better host attachment connectivity by beefing up the interface modules that hold the FCP and iSCSI interface cards. XIV disk arrays have 3 to 6 of these in different configurations, and since they manage both their own disks and receive host I/O requests for other disks, they are basically doing double duty. These interface modules can now be ordered as [Dual-CPU] modules.
Better infrastructure management by connecting XIV with the industry standard SMI-S interface to IBM Tivoli Storage Productivity Center. Now, XIV can be part of the single pane of glass console that manages all of your other disk arrays, tape libraries and SAN fabrics.
Better copy services for backups by connecting XIV with IBM Tivoli Storage Manager Advanced Copy Services. TSM for Advanced Copy Services is application aware and can coordinate XIV Snapshots similar to its current support for SVC and DS8000 FlashCopy capabilities.
Better connectivity to security systems by supporting LDAP credentials. Before, you had individual userids and passwords for each XIV, and these were probably different from all the other userid/password combinations you have for every other box on your data center floor. IBM is working on getting all products to support the Lightweight Directory Access Protocol, or [LDAP], so that we can reach the nirvana of "single sign-on", one userid/password per administrator for all IT devices in the company.
Better support with flexible warranty periods and non-disruptive code load options.
Better remote copy support by connecting to sites far, far away. IBM previewed that it will provide asynchronous disk mirroring from one XIV to another XIV natively. Before this, XIV's synchronous mirroring was limited to 300km distances. Many of our clients do long distance global mirroring of their XIV today behind an SVC, but again, for those out there that don't yet have an SVC, this can be a reasonable alternative.
TS7650 ProtecTIER data deduplication appliance now offers "no dedupe" option
In what some might consider a surprising move, IBM announced a "no dedupe" licensing option on its premier deduplication solution, which somewhat reminds me of IBM's NOCOPY option on DS8000 FlashCopy. At first I thought "Are you kidding me?!?!" However, this new license option allows the TS7650 appliance to compete on an even playing field with other virtual tape libraries (VTL) that do not offer deduplication capability. It also allows the TS7650 to be used for data that doesn't dedupe very well, such as seismic recordings, satellite images, or what have you. There are also clients who do not yet feel comfortable deduping their financial records for compliance reasons. This option now allows IBM to withdraw the TS7530 non-dedupe library from marketing. Having one technology that does both dedupe and no-dedupe is better than offering two separate libraries based on different technologies.
The ProtecTIER series also announced [IP remote distance replication]. This can be used to replicate virtual tape cartridges in one ProtecTIER over to another ProtecTIER at a remote location. You can decide to replicate all or just a subset of your virtual tapes, and this feature can be used to migrate, merge or split ProtecTIER configurations as your needs grow. Before this support, our TS7650G clients replicated the disk repository using native disk array replication technology, such as Global Mirror on the DS8000, but that meant that all data was replicated over to the secondary site. Now, with this new IP replication feature, you can be selective, and replicate only those virtual tapes that are mission critical.
The appliance now supports up to 36TB of disk capacity, and the new "IBM i" operating system on System i servers, formerly known as i5/OS.
GPFS does Windows
IBM's General Parallel File System (GPFS) has the lion's share of file systems used in the [Top 500 Supercomputers]. For a while, it was limited to just the Linux and AIX operating systems, but version 3.3 [extends this to Windows 2008 on 64-bit architectures]. GPFS is the file system used in IBM's Scale-out File Services, the underlying technology of IBM's Cloud Computing and Storage offerings.
In addition to keynote speakers Curtis Tearte, General Manager for IBM System Storage, and Clod Barrera, Chief Technical Strategist, my colleague Jack Arnold and I from the [IBM Tucson Executive Briefing Center] will present four topics each.
Now that the frozen economy is starting to thaw, I have been traveling like crazy this month. So far, I have been to Rochester, MN, Los Angeles and San Diego, CA, and now currently in Austin, TX. On the plus side, I was able to enjoy the [Fourth of July] holiday weekend on the beaches of San Diego.
(If you have not been to California beaches lately, here's a quick [video] reminder)
So the big news this week is that the auction over Data Domain is over, and EMC's bid finally won out over NetApp. Both NetApp and EMC have data deduplication capabilities in their existing product lines, but neither could compete against IBM's TS7650G ProtecTIER Data Deduplication gateway and TS7650 ProtecTIER appliances, and so both were hell-bent on buying Data Domain for large amounts. The final price agreed upon was over two billion US dollars.
For the most part, Data Domain's products are targeted towards small and medium sized businesses, whereas IBM's TS7650 and TS7650G products target medium and larger sized enterprises. So now that EMC has a viable data deduplication solution, it looks like it will be yet another IBM-vs-EMC debate going forward.
Ideally, every airline would use the most experienced, seasoned professional airline pilots money could buy, but some airlines, in an effort to compete on ticket price, may elect instead to have less experienced pilots. Here's a great excerpt:
Airline history lesson 101: It used to be, up until the mid 1980’s, that a young pilot would be hired on at a major carrier, become a flight engineer (FE), and then spend a few years managing the systems of the older-generation airplanes. But he or she was learning all the while. These new “pilots” sat in the FE seat and did their job, all the while observing the “pilots” doing the flying, day in and day out.
The FE’s learned from the seasoned pilots about the real world of flying into the Chicago O’Hares and New York LaGuardias. They learned decision making, delegation, and the reality of “captain’s final authority” as confirmed in the law. When they got the chance to upgrade, they became a copilot. The copilot’s duty was to assist the captain in flying; but even during their time as the new copilot, they had the luxury of the FE looking over their shoulders — i.e., more learning. This three-man-crew concept, now a fond memory in the domestic markets but used predominately in international flying, was considered one more layer of protection. But it’s gone.
To become the public speaker I am today, IBM put me through a variety of speaking classes. I taught high school and college classes to practice in front of groups. But most importantly, I traveled with seasoned colleagues and watched them in action from the front row. I learned how to handle tough questions, how to react to hecklers causing trouble, and how to deal with the unexpected before, during and after each presentation. In addition to speaking skills, I ended up having to learn travel skills, foreign language skills, and a variety of cultural social skills. All part of the job in my line of work.
Likewise, being a storage administrator is an important job, and for some data centers, not something to give lightly to a fresh college graduate. Unless they have had formal IT Infrastructure Library [ITIL] certification coursework, I doubt they would understand the processes and disciplines demanded by the typical data center. I have been to accounts where new hires are not allowed to touch production systems for the first two years. Instead, they watch the seasoned professionals do their jobs, and are given access only to "sandbox" systems that are used for application testing or Quality Assurance (QA). Sadly, I have also been to other accounts where people with no storage experience whatsoever were tossed into the admin pool and let loose with superuser passwords, all in an effort to save money during times of exponential data growth, only to pay the price later with outages or lost data.
The parallels between the airline industry and the IT industry are eerie.
As I mentioned in my post [Moving Over to MyDeveloperWorks], those of us bloggers on IBM's DeveloperWorks are moving over to a new system called "MyDeveloperWorks" which has a host of new features.
Fortunately for me, I missed the note asking for volunteers to be among the first bloggers on the block to move over. I was traveling and decided not to deal with it until I got back. However, fellow IBM Master Inventor Barry Whyte was not so lucky. It is safe to say he volunteered, and has probably regretted the decision every day since. In case you lost his RSS feed, or can't find him anymore on Google or whatever search engine, here is his [new blog].
As for my blog, I have asked to postpone the move until all the problems that Barry has encountered are resolved. That might be a while, but if you lose access to mine sometime in the near future, at least you will have been warned as to what might have happened.
Jon Toigo has a funny cartoon on his post, [As I Listen to EMC Brag on “New” Functionality…]. Basically, it pokes fun at how many of us bloggers argue over which vendor was first to introduce some technology or another. We all do this, myself included.
Recently, Claus Mikkelsen, currently with HDS, [brought up accurately some past history from the 1990s], which is before many storage bloggers hired on with their current employers. Claus and I worked together at IBM back then, so I recognized many of the events he mentions, including some that I can't talk about either. In many cases, IBM or HDS delivered new features before EMC.
I've been reading with some amusement as fellow blogger Barry Burke asked Claus a series of questions about Hitachi's latest High Availability Manager (HAM) feature. Claus was too busy with his "day job" and chose to shut Barry down. Sadly, HDS set themselves up for ridicule this round, first by over-hyping a function before its announcement, and then by announcing a feature that IBM and EMC have offered for a while. The problem and confusion for many is that each vendor uses different terminology. Hitachi's HAM is similar to IBM's HyperSwap and EMC's AutoSwap. The implementations are different, of course, which is why vendors are often asked to compare and contrast one implementation against another.
In his latest response, [how to mind the future of a mission-critical world], Barry reports that several HDS bloggers now censor his comments. That's too bad. I don't censor comments, within reason, including Barry's inane questions about IBM's products, and am glad that he does not censor my inane questions to him about EMC products in return. The entire blogosphere benefits from these exchanges, even if they get a bit heated sometimes.
We all have day jobs, and often are just too busy, or too lazy, to read dozens or hundreds of pages of materials, if we can even find them in the first place. Not everyone has the luxury of a "competitive marketing" team to help do the research for you, so if we can get an accurate answer or clarification about a product that is generally available directly from a vendor's subject matter expert, I am all for that.
This week, I have been presenting on how to do important things without travel. Of course, there are times when you need some boots on the ground, while your support team remains remote.
Last month, fellow co-worker Liz Goodman reached out to me. She was part of a ten-person team that went to Tanzania as part of IBM's [Corporate Service Corps]. Other teams went to Brazil, China, Ghana, Romania, South Africa, the Philippines, Turkey and Vietnam. (I've been to half these other countries, but the closest I have ever been to Tanzania was a safari I took in Kenya that included the Masai Mara national park, which runs along the border with Tanzania's Serengeti national park.)
Liz was one of the lucky [200 candidates chosen from over 5000 applications] IBM reviews each year for this program. IBM does business in over 170 countries, so learning to work in or with emerging growth markets requires a bit of "cultural intelligence". Liz and three others worked with the University of Dodoma [UDOM] to lead some students in adopting a [Moodle] infrastructure based on a Linux, Apache, PHP and MySQL [LAMP] platform. She noticed that I had experience with both Moodle and LAMP from [my work with OLPC], and reached out to me for help. I was able to provide some insight, things to watch out for, and ways to tackle not just the technical challenges, but a few that many don't consider:
Educational content. Digitizing materials already available in hardcopy, or obtaining digital rights to existing content.
Business Process. Getting the teachers and students to adopt the new processes and procedures enabled by these new capabilities.
Project Management. Fortunately, Liz is already [PMP-certified], and knows well the importance of managing even a small 4-person, 4-week project like this.
How well did her team do? Liz blogged before, during and after her trip. Read all about it on her blog [Liz Goes To Tanzania]!
Jim Stallings, IBM General Manager for Global Markets, will explain why a smarter planet needs a dynamic infrastructure. I used to work for Jim, when he was in charge of the IBM Linux initiative and I was on the Linux for S/390 mainframe team.
Erich Clementi, IBM Vice President, Strategy & General Manager Enterprise Initiatives, will explain how to best leverage opportunities with cloud computing.
Steve Forbes, Chairman and CEO of Forbes Inc. and Editor-in-Chief of Forbes Magazine, will present Global Outlooks and the Challenge of Change.
Rich Lechner, IBM Vice President, Energy & Environment, will explain the importance of Building an Energy-Efficient Dynamic Infrastructure. I also worked for Rich, back when he was the VP of Marketing for IBM System Storage and I was the "Technical Evangelist". See my post [The Art of Evangelism] to better understand why I don't carry that title anymore.
In addition to these presentations, you will be able to "walk" around to different booths, have on-line chats with subject matter experts, and download resources. Don't worry, this is not based on [Second Life], but rather uses "On24", a much simpler visual interface. Of course, you can follow on [Twitter] or join the fan club at [Facebook].
This is a worldwide event, with translated resource materials and on-line subject matter experts in six different languages (English, French, Italian, German, Mandarin and Japanese). Those in North, Central and South America can participate on June 23, and those in Europe, Asia and the rest of the world on June 24. [Register Today] and mark your calendars!
Continuing this week's theme of doing important things without leaving town, I present our results for an exciting project I started earlier this year.
For seven weeks, my coworker Mark Haye and I voluntarily led a class of students here in Tucson, Arizona in an after-school pilot project to teach the ["C" programming language] using [LEGO® Mindstorms® NXT robots]. The ten students, boys and girls ages 9 to 14, were already part of the FIRST [For Inspiration and Recognition of Science and Technology] program, and participated in FIRST Lego League [FLL] robot competitions. The students were already familiar with building robots and programming them with a simple graphical system of connecting blocks that perform actions. However, to compete in the next level of robot competition, the FIRST Tech Challenge [FTC], we needed to leave this simple graphical programming behind and upgrade to more precise "C" programming.
Mark is a software engineer for IBM Tivoli Storage Manager and has participated in FLL competitions over the past nine years. This week, he celebrates his 25th anniversary at IBM, and I celebrate my 23rd. The teacher, Ms. Ackerman, and the students referred to us as "Coach Mark" and "Coach Tony".
This was the first time I had worked with LEGO NXT robots. For those not familiar with these robots, you can purchase a kit at your local toy store. In addition to regular LEGO bricks, beams, and plates, there are motors, wheels, and sensors. A programmable NXT brick has three outputs (marked A, B, and C) to control three motors, and four inputs (marked 1, 2, 3, 4) to receive values from sensors. Programs are written and compiled on laptops, then downloaded to the NXT programmable brick through a USB cable or wirelessly via Bluetooth.
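For readers who have never seen RobotC, here is a minimal sketch of the kind of "first program" we started with. The motor names are an assumption for illustration; which output ports the motors actually plug into depends on how the robot is built.

    task main()
    {
      // drive both wheel motors forward at half power
      motor[motorB] = 50;
      motor[motorC] = 50;

      wait1Msec(2000);    // keep driving for 2 seconds (2000 milliseconds)

      // stop both motors
      motor[motorB] = 0;
      motor[motorC] = 0;
    }

Even this tiny program covers several of the early lessons: statements and syntax, setting motor speed and direction, and compiling and downloading to the brick.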
In the picture shown, an image of the Mars planetary surface is divided into a grid with thick black lines. A light sensor between the front two wheels of the robot is over the black line.
We used the [RobotC programming firmware] and integrated development environment (IDE) from [Carnegie Mellon University]. The idea of this pilot was to see how well the students could learn "C". With only a few hours after class on each Wednesday, could we teach young students "C" programming in just seven weeks?
My contribution? I have taught both high school and college classes, and spent over 15 years programming for IBM, so Mark asked me to help. We started with a basic lesson plan:
A brief history of the "C" language
Understanding statements and syntax
Setting motor speed and direction
Compiling and downloading your first program
Understanding the "while" loop
Retrieving input sensor values
Understanding the "if-then-else" statement
Defining variables with different data types
Manipulating string variables
Writing a program for the robot to track along a black line on a white background (see the sketch after this list).
Understanding local versus global scope variables
Writing a program for a robot to count black lines as it crosses them.
Performing left turns and right turns, and crossing a specific number of lines on a grid pattern to move the robot to a specific location.
Weeks 6 and 7
Mission Impossible: come up with a challenge to make the robot do something that would be difficult to accomplish using the previous NXT visual programming language.
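To give an idea of what the line-tracking lesson (referenced in the list above) looked like in practice, here is a hedged sketch in RobotC. The sensor port, motor ports, and threshold value are assumptions for illustration; the actual numbers depend on the robot, the lighting, and the mat.

    #pragma config(Sensor, S3, lightSensor, sensorLightActive)

    task main()
    {
      const int threshold = 45;   // assumed midpoint between the dark line and the white background

      while (true)                // loop forever
      {
        if (SensorValue[lightSensor] < threshold)
        {
          // dark reading: we are over the line, so curve one way
          motor[motorB] = 20;
          motor[motorC] = 50;
        }
        else
        {
          // bright reading: we drifted off the line, so curve back
          motor[motorB] = 50;
          motor[motorC] = 20;
        }
      }
    }

The robot wobbles back and forth along the edge of the line, which is exactly the behavior that makes the "while" loop and "if-then-else" lessons click for the students.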
At the completion of these seven weeks, I sat down to interview "Coach Mark" on his thoughts on this pilot project.
Why teach the "C" language?
This is a practical programming skill. The "C" language is used throughout the world to program everything from embedded systems to operating systems, and even storage software. It would also allow the robots to handle more precise movements, more accurate turns, and more complicated missions.
Can kids learn "C" in only seven weeks?
Part of the pilot project was to see how well the students could understand the material. They were already familiar with building the robots, and understood the basics of programming sensors and motors, so we were hoping this was a good foundation to work from. Some kids managed very well, others struggled.
Did everything go according to plan?
The first two weeks went well, turning on motors and having robots move forward and backward were easy enough. We seemed to lose a few students on week 3, and things got worse from there. However, several of the students truly surprised us and managed to implement very complicated missions. We were quite pleased with the results.
What kind of problems did the kids encounter?
The touch sensor required loops that wait for a press. The motors did not necessarily turn as expected until more advanced methods were used. Making accurate 90 degree left and right turns was more difficult than expected.
Any funny surprises?
Yes, we had a Challenge Map representing the Mars planetary surface from a previous FLL competition; it was dark red and divided into squares with thick black lines. An active light sensor returns a value from "0" (complete darkness) to "100" (bright white). However, the Mars surface had craters that were dark enough to be misinterpreted as a black line, causing some unusual results. This required some enhanced programming techniques to resolve.
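One way to handle the craters, sketched here under the same assumptions about ports and threshold values as before, is to count a line only after several consecutive dark readings, so a brief dark blip from a crater is ignored. This is just my illustration of the idea, not the exact code the students wrote.

    #pragma config(Sensor, S3, lightSensor, sensorLightActive)

    task main()
    {
      const int darkThreshold   = 35;  // assumed reading below which we call it "dark"
      const int samplesRequired = 5;   // consecutive dark samples needed to count a real line

      int darkSamples  = 0;
      int linesCrossed = 0;

      motor[motorB] = 40;              // drive forward
      motor[motorC] = 40;

      while (linesCrossed < 3)         // stop after crossing three grid lines
      {
        if (SensorValue[lightSensor] < darkThreshold)
        {
          darkSamples++;               // still over something dark
        }
        else
        {
          if (darkSamples >= samplesRequired)
            linesCrossed++;            // a long dark stretch was a real grid line
          darkSamples = 0;             // a short blip (a crater) is ignored
        }
        wait1Msec(10);
      }

      motor[motorB] = 0;
      motor[motorC] = 0;
    }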
Did robots help or hurt the teaching process?
I think they helped. Rather than writing programs that just display "Hello World!" on a computer screen, the students can actually see robots move, and either do what they expect, or not!
And when the robots didn't do what they were expected to?
The students got into "debug" mode. They were already used to doing this from previous FLL competitions, but with RobotC, you can leave the USB cable connected (or use wireless Bluetooth) and actually gather debugging information while the robot is running, to see the value of sensors and other variables and help determine why things are not working properly.
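For example, a loop along these lines streams the light sensor reading back to the RobotC debugger while the robot runs; the sensor port is again just an assumption for illustration.

    #pragma config(Sensor, S3, lightSensor, sensorLightActive)

    task main()
    {
      while (true)
      {
        // send the current reading to the debug stream window over USB or Bluetooth
        writeDebugStreamLine("light sensor = %d", SensorValue[lightSensor]);
        wait1Msec(250);   // four readings per second is plenty for watching values change
      }
    }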
Any applicability to the real world of storage?
We have robots in the IBM System Storage TS3500 tape library. These robots scan bar code labels, pull tapes out of shelves and mount them into drives. The programming skills are the same ones needed for storage software, such as IBM Tivoli Storage Manager or IBM Tivoli Storage Productivity Center.
The world is becoming smarter, instrumented with sensors, interconnected over a common network, and intelligent enough to react and respond correctly. The lessons of reading sensor values and moving motors can be considered the first step in solutions that help to make a smarter planet.
Spend twenty hours a week running a project for a non-profit.
Teach yourself Java, HTML, Flash, PHP and SQL. Not a little, but mastery. [Clarification: I know you can't become a master programmer of all these in a year. I used the word mastery to distinguish it from 'familiarity' which is what you get from one of those Dummies type books. I would hope you could write code that solves problems, works and is reasonably clear, not that you can program well enough to work for Joel Spolsky. Sorry if I ruffled feathers.]
Volunteer to coach or assistant coach a kids sports team.
Start, run and grow an online community.
Give a speech a week to local organizations.
Write a regular newsletter or blog about an industry you care about.
Learn a foreign language fluently.
Write three detailed business plans for projects in the industry you care about.
Self-publish a book.
Run a marathon.
In 2007, 51 percent of graduating college students could find jobs in their field, and this year it has dropped to only 20 percent. If you find yourself with some time on your hands, either recently graduated or recently unemployed, consider volunteerism. Last year, I chose to donate my time and money to an innovative project called "One Laptop per Child" [OLPC]. It was one of my [New Years Resolutions] for 2008. I was actually "recruited" by folks from the OLPC after they read my [series of blog posts] on things that can be done with their now famous green-and-white XO laptop.
The first half of the year, I spent helping "Open Learning Exchange Nepal" [OLE Nepal], a non-government organization (NGO) working to improve education in that country. XO laptops were provided to second and sixth graders at several schools, and my assignment was to help with the school "XS" server, the server that all the laptops connect to. My blog posts on this included:
Rather than [Move to Nepal], I was able to help by building an identical XS server in Tucson and providing support remotely. This included getting the "Mesh Antennas" to be properly recognized, setting up an internet filter using [DansGuardian] software, and working out backup procedures.
For the second half of the year, I was asked to mentor a college student in Hyderabad, India as part of the ["Google Summer of Code"] to develop an [Educational Blogger System] on the XS server. We called it "EduBlog" and based it on the popular [Moodle] educational software platform. This was going to be tested with kids from Uruguay, but sending a server down to that country proved politically challenging, so instead I [built a server and shipped it] to a co-location facility in Pennsylvania that agreed to donate the cost and expenses needed to run the server there with a full internet connection. I acted as "system admin" for the box and was able to connect remotely via SSH, while Tarun, the college student I was mentoring, developed the EduBlog software. Twice the system was hacked, but I was able to restore it remotely thanks to a multi-boot configuration that allowed me to reboot to a read-only operating system image and restore the operating system and data.
The students and teachers in Uruguay were helped locally by [Proyecto Ceibal]. We were able to translate the system into Spanish, and the project was a big success, enough to convince the local government to provide XO laptops to their students to further the benefits.
I get a lot of suggestions for what to put on my blog. I realize that tweets are limited to 140 characters, so pointing to a video URL without much explanation or warning can be dangerous. An email can at least add appropriate warnings, such as NSFW (Not Safe For Work) or "sorry if this offends you". The only warning I got for a video posted to YouTube by "StorageNetworkDud" was this short email:
"Sorry about the language they have used in some translations, but not sure who put this. It was on twitter."
Fortunately, I have my browser set up not to automatically play YouTube videos. The title helped warn me of the content, which turned out to be a [fan-subbed] scene from a World War II movie with a brown-shirted tyrannical leader of an evil empire talking to his top generals. He dismisses all but three with "Hollis, Burke, and Twomey stay in here", followed by a lengthy recap of EMC's recent troubles in the marketplace. At least in the video, the fuhrer correctly follows Tim Sanders' advice: "if you have to tell someone bad news, say it in person."
While I understand that many people don't like EMC, the #3 storage vendor in the world, this type of "geek humor" hits a new low. The video was posted over a month ago, but in light of the recent [shooting in Washington DC], I felt it was just not appropriate to post it here.
Readers, I appreciate all the suggestions, but give me some better warning next time!
This week, the [Global Language Monitor] announced that "Web 2.0" became the one millionth word of the English language. The average American uses a vocabulary of only about 10,000 words.
One way to improve your vocabulary is to read my blook (blog-based book), Inside System Storage: Volume I, which includes a 900-word glossary of storage-related terms. My blook is now available in hardcover or paperback at [Amazon], as well as direct from my publisher [Lulu]:
I have started working on a new book which I hope to have available for purchase later this year.
This week I am in Minneapolis, MN, and was hoping that the complicated process of moving this blog over to "MyDeveloperWorks" would happen while I was gone, but alas, that does not appear to be the case.
Meanwhile, my partner in crime, Barry Whyte, has moved his blog [Storage Virtualization] successfully over to the new server.
Perhaps next week. If all goes well, the URL links should redirect correctly, but those of you using feed readers might need to re-subscribe to get the right RSS feeds.
Continuing my blog coverage of the [Forrester IT Forum 2009 conference], I finally catch up this morning on some keynote sessions. Here's my recap of the rest of the main tent general session keynote presentations from BP, Microsoft and CFIL.
Dana Deasy, CIO and Group VP, Information Technology and Services (IT&S), BP
Dana presented "The gift we’ve been given – reinventing the IT organization". He is the CIO of BP, an energy company that made over 360 billion dollars selling oil and gas. In fact, it is the fourth largest company in the world, with 92,000 employees in more than 100 countries. Back in 2007, business was good but the senior management team felt that IT needed to be straightened out.Dana was brought in as a "fresh thinking" outsider, managing a group 4000 IT staff composed mostly of contractors, dealing with more than 2000 IT suppliers and more than 60 versions of SAP.
Dana presented the results of their IT makeover. In the first year, he was able to cut out 400 million US dollars from the IT budget, including the reduction of 500 people from the IT staff. He increased the employee/contractor ratio to 40/60, with plans to bring this up to 65/35 over the next year. He was able to get 1800 IT employees to perform a self-assessment to understand their strengths and weaknesses. He was able to centralize the IT leadership team, and deploy a common [ITIL] best practices implementation.
What did he learn from all this? Here were his top four "lessons learned":
No time to dwell but know your facts
Work in parallel to push the pace of change
Listen but in the end take your own counsel
Tell a compelling story to energize your employees and your leadership
Chris Capossela, Senior VP of Information Worker Product Management Group, Microsoft
Chris presented "Uncovering Value in the Cloud and On Your Desktop", onhow Microsoft customers are taking advantage of the software they have already purchased.For example, Jamba Juice was able to use Microsoft SharePoint to cut down locating documents from 15 minutes to just seconds, reducing 10-15 hours per week for more than 500 managers. More importantly, they were more confident that document they found was the right one. This is often referred to as "one version of the truth." In another example, Tyson Foods was able to connect Microsoft Word to their SAP application, and have that then connect to their Microsoft SharePoint.
Chris was amazed that many Microsoft customers don't take advantage of all that is available to them. He gave four examples:
Planning Services: If you buy an enterprise license to Microsoft products, you get planning services, from either Microsoft's own Microsoft Consulting Services or from thousands of Microsoft Business Partners. Only 8 percent of customers take advantage of this.
Home Use Rights: For enterprise license customers, employees can purchase "home use rights" to use the Enterprise level of Microsoft Office software for only 10 US dollars, but only about 3 percent take advantage of this.
Training: Many enterprise licenses come with 2-4 weeks of training vouchers, but only 40 percent take advantage of these vouchers.
E-Learning: Microsoft also offers e-learning, which Microsoft customers can either have delivered from Microsoft's own hosted services, or they can get a copy of the E-learning materials hosted inside their own company firewall. Again, few take advantage of this.
Chris wrapped up his presentation by citing some examples of customers that migrated from in-house, on-premises collaboration software to Microsoft's "Exchange Online" and "SharePoint Online" cloud computing Software-as-a-Service [SaaS] offerings. The cloud versions of this software do not offer all the features of the on-premises versions, but Microsoft is working to close this gap.
(IBM offers similar cloud computing services for email and collaboration called [LotusLive])
Gary presented "Tough Times: Opportunity for Innovation and Corporate Makeover". He had some greatquotes intended to help people become better leaders, like this:
“Leadership failures do not usually result from leaders not knowing what to do; rather these failures result because leaders fail to do what they know full well they should and must do. Most leaders never get fully comfortable with the changes that they wish for their organizations.”
Change the Conversation - employees want to have a compelling reason to change.
Create a compelling description of the future - employees want a vision of where they are headed.
Emotionally enlist employees in the cause - leaders are not remembered for their attributes, as much as the causes they stood for.
Help me understand the business - employees often do not have information in context to act accordingly.
Choose passionate - employees want to see leaders who are passionate and confident in the process and strategic direction.
Create a To-Stop list - we all have "to do" lists, but perhaps you need a "to don't" list. In other words, a list of bad habits and practices you need to discontinue.
Gary indicated that trust must be given before it is earned. If a leader doesn't trust the employees, how do you expect the employees to trust the leader? When asking employees to change their behavior, or self-assess their own skills, a leader must emphasize "I mean you no harm." Otherwise, mistrust will undermine the intended results.
The keynote sessions the past three days have provided clear motivation to the CIOs and IT leaders in the audience to consider making the necessary changes, with impressive results and actionable advice.
Well, I am back from Las Vegas, and had a pleasant [US Memorial Day] holiday yesterday.
Today is Tuesday, and that means more IBM announcements! IBM announced that the DCS9900 now supports an intermix of SAS and SATA drives. The DCS9900 is purpose-built for the High-Performance Computing (HPC) and Video Broadcasting industries.
The system is a combination of 4U controllers and 3U expansion drawers. The controllers handle either FC or InfiniBand attachment to host servers. The expansion drawers hold up to 60 drives each. With the new intermix feature, the following drives are supported:
7200 RPM SATA drives in 500, 750 and 1000 GB capacities
15K RPM SAS drives in 146, 300 and 450 GB capacities
The DCS9900 groups the drives into sets of 10, in RAID-6 ranks of 8+2P. IBM supports 5, 10 or 20 expansion drawers to make a complete system. The maximum configuration would be 1200 of the 1000 GB SATA drives, for a total of 1.2 PB in two frames. Each rank must contain drives of the same type and capacity, but you can mix different types within the entire system.
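For those who like to check the math, here is a quick back-of-the-envelope calculation in C, treating each 1000 GB drive as roughly 1 TB. The 1.2 PB figure is raw capacity; the 8+2P RAID-6 ranks leave roughly 80 percent of that as usable space. This is just my own illustration, not an IBM configuration tool.

    #include <stdio.h>

    int main(void)
    {
        const int drawers           = 20;    /* maximum expansion drawers         */
        const int drives_per_drawer = 60;
        const int rank_size         = 10;    /* RAID-6 ranks of 8 data + 2 parity */
        const int data_per_rank     = 8;
        const double drive_tb       = 1.0;   /* 1000 GB SATA drive, roughly 1 TB  */

        int total_drives = drawers * drives_per_drawer;                  /* 1200 drives */
        double raw_tb    = total_drives * drive_tb;                      /* 1200 TB raw */
        double usable_tb = (total_drives / rank_size) * data_per_rank * drive_tb;

        printf("%d drives, %.0f TB raw (%.1f PB), about %.0f TB usable after RAID-6\n",
               total_drives, raw_tb, raw_tb / 1000.0, usable_tb);
        return 0;
    }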
The DCS9900 supports "Sleep Mode", an implementation of Massive Array of Idle Disks [MAID] technology, whereby each RAID rank can be either awake and spinning, or in energy-efficient standby mode. This makes for a more "green" storage system for data that is not accessed frequently.
The track sessions were aligned by job role. Here are all the tracks:
Track A: CIO
Track B: Enterprise Architecture professional
Track C: Application Development & Program Management professional
Track D: Information & Knowledge Management professional
Track E: Sourcing & Vendor Management professional
Track F: Business Process & Applications professional
Track G: IT Infrastructure & Operations professional
Track H: Security & Risk professional
Track I: Technology Product Management & Marketing professional
As an IBM consultant, I deal with all of these different kinds of professionals, so I thought I would try to attend a variety of sessions this week. Here are my notes from a few of these:
Transforming IT for Lean Times: Organizational Structure
The Forrester analyst presented the concept of "Lean IT". This is not just a process to make IT skinny or marginal through commoditization. Rather, it is to meet business needs through differentiated services. Gone is the "one-size fits all" mentality. Lean IT can be used to streamline IT capabilities to enable employees to get their jobs done. Continuous improvement is done through a series of "Rapid Improvement Events" (RIE), a methodology known as "Kaizen Blitz". The focus is to reduce waste, including rework, firefighting, meetings, unnecessary reports and paperwork, working groups, and task forces. A common mistake is to reorganize departments before understanding the fundamental requirements, or make every employee use the same PC just to simplify the job of IT.
Traditionally, IT departments had three jobs. The first is Utility, keeping the lights on and the systems running. The second is Productivity, enhancing existing systems and applications. The third is Innovation and Business Transformation. The problem has been that many IT leaders have acted as "IT Supply Managers", ensuring there is an adequate supply of these. Instead, the Forrester analyst suggests redefining the role to one of "Demand Broker". Some companies have already done this. The CIO manages the demand for IT from business units, business processes, and information workers, as well as from suppliers, business partners and customers. As a demand broker, the CIO can then use these demands to optimize and prioritize IT resources.
Why Tech will lead Economic Recovery
The current 5.7 percent drop in IT spending in 2009 during this global financial meltdown is actually similar to the drop in IT spending in 2001-2003. However, the Forrester analyst anticipates that IT spending will bounce back in 2010. His reasoning came from looking at past IT spending trends since the 1950s. He found four clear sequences consisting of 6-10 years of growth and investment in IT, followed by 6-10 years of refinement and digestion where business leaders try to make the most use of these investments. The four sequences of investment and growth are:
1976-1985 personal computing, empowering individual productivity
1992-2000 network computing, enabling e-business and the internet boom
2008-2016 smart computing, optimizing business results through flexible and responsive IT that incorporates awareness and analytics to solve new business problems.
He argued that this trend was already starting to show itself. There was an uptick in IT spending in 2008 before the financial melt-down, and he feels this is why the tech industry sector will drive the economic recovery in 2010. The top five industries that will lead the adoption of smart computing will be: Government, Healthcare and Life Sciences, Utilities, Education, and Personal Services. These represent 54 percent of IT spending in the US, and also represent a large portion of the US stimulus package.
Smart Computing can be summarized as the "Four A's":
Awareness - instrumentation like RFID chips, sensors and video surveillance
Analysis - intelligent recognition of patterns and finding anomalies
Alternatives - identifying alternative responses
Actions - dealing with threats or capturing opportunity
Smart computing in these industries reflects the need for more vertically-aligned, industry-specific solutions. IBM is well-positioned in this area, having the hardware, software and services for smart computing, as well as deep industry-specific expertise. Other industry-specific vendors, like General Electric and Siemens, have the vertical alignment, and are working to adopt smart computing. Meanwhile, Oracle/Sun and Microsoft are also investing in smart computing, and have the potential to develop more vertically-aligned, industry-specific solutions. Other IT vendors will have a choice to make: stay horizontal or go vertical.
ERP's Evolving Landscape: Impact for Application Professionals
Enterprise Resource Planning (ERP) software vendors are consolidating, with the average ERP deployment 10 years old, triggering many to re-evaluate how well the promises of ERP match the reality. On the positive side, ERP promised to reduce the total number of applications, provide some financial stability and integration, and for the most part the Forrester analyst felt these promises were mostly met. However, in deploying new capabilities, lowering TCO or establishing a partnership between vendor and client, ERP vendors got low marks.
Customers are now demanding more end-to-end solutions, especially with more industry-specific functionality. Technology advances should be used to boost the business value. For example, ERP lags in SaaS adoption. Frequent upgrades to meet regulatory requirements could drive stronger interest in SaaS deployments of ERP.
Customers would also like the end-user experience with the ERP to be more role-based, with actionable insight and intelligent responses related to the user's job responsibilities. Hybrid ERP solutions that span deployment across on-premises, SaaS and managed hosting services might be needed to ease this transition.
What You Should Know Before Signing a Contract with a Disaster Recovery Services provider
This session was less about RTO or RPO, and more about broader considerations. The leading disaster recovery service providers are IBM Business Continuity and Resiliency Services [BCRS], SunGard, ICM/UK, and HP. The Forrester analyst did not think HP was treating this as strategically as it could, and HP often works behind the scenes through other business partners.
Will this [oligopoly] continue? The analyst thinks there will be an increase in the number of disaster recovery service providers. Contenders include Telcos like Qwest, AT&T, BT, and Verizon Business; SMB-focused firms like i365 and Venyu; and cloud computing IaaS providers like SAVVIS.
So what should you consider when putting out an RFP? Here were a few suggestions:
Make sure there are schedules for all of your platforms (x86, Unix, System i and System z)
Identify all fees, including "declaration fee" and "occupancy fee"
Costs of Disaster Recovery test exercises, including how many, and their duration in days.
How quickly you can access their facility after you declare the disaster
Whether the provider has alternative Data Centers, depending on the scope of the regional disaster
Evolving the 4 P's of Marketing to Grow Revenue in Emerging Markets
I was in IBM marketing for seven years. For those without marketing backgrounds, the 4 P's of marketing are product, promotion, placement and pricing. There is no "global" audience; each country, region or locale has unique characteristics, and requires go-to-market (GTM) strategies tailored to each situation. For example, in Russia, decision makers are more influenced by Web sites and industry magazines; in Europe, they are more influenced by peers and word-of-mouth; and in Latin America, the direct sales force is most influential. In many countries, blogs are more influential than they are in the United States.
Companies of all sizes can do the right things. For example, IBM translates its materials into 31 different languages. Meanwhile, 50-person [LogMeIn] has 17 million users because they have localized their offering, and even allow online purchase in local currencies. Third party consultants that know the local region may be needed to break into new geographies.
Certainly the opportunity is there. Worldwide, there are an estimated 9 million small businesses, and another 630,000 medium size businesses. These SMBs employ 22 percent of the workforce in Russia, 55 percent in Europe, and 80 percent in Japan. To reach them, you may need to explore new channels, such as government agencies, academia, non-government organizations (NGO), and trade associations. The traditional supply chain of vendor, distributor and reseller may need to be redefined as a demand network, with co-marketing programs, peer-to-peer relationships and shared knowledge resources.
IBM collaborated with the International Finance Corporation [IFC] to create the [SME Toolkit], a resource and online community for small and medium enterprises, translated and localized into several languages. IBM also worked with the Chinese government to select Wuxi, China as the location for its Cloud Computing center, as part of the [Wuxi Tai Hu New Town Science and Education Industrial Park].
This was a great week! Lots to digest and think about.
Continuing my blog coverage of the [Forrester IT Forum 2009 conference], I will group a bunch of topics related to Cloud Computing into one post. Cloud Computing was a big topic here at the IT Forum, and probably was in the other two conferences IBM participated in this week in Las Vegas as well.
The CIOs and IT professionals at this Forrester IT Forum seemed to be IT decision makers with a broader view, and there was a lot of interest in Cloud Computing. What is Cloud Computing? Basically, it is renting IT capability on an as-needed basis from a computing service provider. The different levels of cloud computing depend on what the computing service provider actually provides. How do these compare with traditional co-location facilities or your own in-house on-premises computing? Here's my handy-dandy quick-reference guide:
Cloud Software-as-a-Service [SaaS], Examples: SalesForce and Google Apps.
Cloud Infrastructure-as-a-Service [IaaS], such as Amazon EC2, RackSpace.
Traditional Co-Location facility, where you park your own equipment on rented floor space, with power, cooling and bandwidth provided.
Traditional On-Premises, what most people do today, build or buy your own data center, buy the hardware, write or buy the software, then install and manage it.
A main tent session had a moderated Q&A panel of three Forrester Analysts titled "Saving, Making and Risking Cash with Cloud Computing." Here are some key points from this panel:
Is Cloud Computing just another tool in the IT toolbox, or does it represent a revolution? The panel gave arguments for both. As a set of technology, protocols and standards, it is an evolutionary progression of other standards already in place, and an extension of methods used in co-location and time-share facilities. However,from a business model perspective, Cloud Computing represents a revolutionary trend, eliminating in some cases huge up-front capital expenses and/or long-term outsourcing contracts. PaaS and IaaS offerings can be rented by the hour, for example.
An example of using Cloud Computing for a one-time batch job: The New York Times decided to build an archive of 11 million articles, but this meant having to convert them all from TIFF to PDF format. The IT person they put in charge of this rented 100 machines on [Amazon Elastic Compute Cloud (EC2)] for 24 hours and was able to convert all 4TB of data for only 240 US dollars.
Cloud Computing can make it easier for companies to share information with clients, suppliers and business partners, eliminating the need to punch holes through firewalls to provide access.
Since it is relatively cheap for companies to try out different cloud computing offerings with little or no capital investment, the spaghetti model applies: "throw it at the wall, and see what sticks!"
What application areas should you consider running in the cloud? Employee self-service portals-Yes, ERP-Mixed, One-time batch jobs-Mixed, Email-Yes, Access Control-No, Web 2.0-Mixed, Testing/QA-Mixed, Back Office Transactions-No, Disaster Recovery-Mixed.
Different IT roles will see varying benefits and risks with cloud computing. However, by 2011, every new IT project must answer the question "Why not run in the cloud?"
There were a variety of track sessions that explored different aspects of cloud computing:
Software-as-a-Service: When and Why
This session had three Forrester analysts in a Q&A panel format. SaaS can provide much-needed relief from application support, maintenance and upgrade chores, and the choice and depth of offerings from SaaS providers is improving. However, comparing TCO between SaaS and on-premises deployments can yield different results for different use cases. For example, a typical SaaS rate of 100 US dollars per user per month, with discounts, could come to 1,000 dollars per year, or 10,000 dollars over a 10-year period. Compare that to the total 10-year costs of an on-premises deployment, and you have a good ball-park comparison. SaaS can provide faster time-to-value, and you can easily try-before-you-buy several alternative offerings before making a decision.
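To make the arithmetic concrete, here is a small C sketch of that ball-park comparison. The 17 percent discount and the 500-user population are my own assumptions, chosen only so that the 100-dollars-per-user-per-month list price works out to the analyst's round figure of roughly 1,000 dollars per user per year; plug in your own numbers.

    #include <stdio.h>

    int main(void)
    {
        const double list_per_user_month = 100.0;  /* typical SaaS list price      */
        const double discount            = 0.17;   /* assumed negotiated discount  */
        const int    years               = 10;
        const int    users               = 500;    /* hypothetical user population */

        double per_user_year  = list_per_user_month * 12.0 * (1.0 - discount);
        double per_user_total = per_user_year * years;
        double company_total  = per_user_total * users;

        printf("SaaS: about $%.0f per user per year, $%.0f per user over %d years\n",
               per_user_year, per_user_total, years);
        printf("For %d users: $%.0f over %d years\n", users, company_total, years);
        /* Compare company_total against the 10-year cost of an on-premises deployment:
           licenses, hardware, floor space, support staff, and upgrades. */
        return 0;
    }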
The downside to SaaS is that you need to understand their data center, where it is located, and how it is protected for backup and disaster recovery. Some SaaS providers have only a single data center, so it might be disruptive if it experiences a regional disaster.
Cloud IT Services: The Next Big Thing or Just Marketing Vapor?
Economic pressures are forcing companies to explore alternatives, and Cloud IT services are providing additional options over traditional outsourcing. Only 70-80 percent of companies are satisfied with traditional outsourcing, so there is opportunity for Cloud IT services to address those who are not. Scalable, consumption-based billing with Web-based accessibility and flexibility is an attractive proposition. Ten years ago, you could not buy an hour on a mainframe with your credit card; now you can.
Cloud technologies are mature, and there is interest in using these services. About 10 percent of companies are piloting SaaS offerings, 16 percent piloting PaaS offerings, and 13 percent investing in deploying "private clouds" within their data center. This week Aneesh Chopra, who is Barack Obama's pick as the first CTO for the US Federal Government, [stated to congressional leaders]: “The federal government should be exploring greater use of cloud computing where appropriate.”
IBM is betting heavily on their Cloud Computing strategy, has already gone through the reorganizations needed to be positioned well, and claims to have thousands of clients already. HP has some cloud offerings focused on their enterprise customers. Dell is investing and reorganizing for cloud as well.
Network Strategic Planning for Challenging Times
While not limited to Cloud Computing, companies are seeing WAN traffic double every 18 months, but without the corresponding increases in budget to cover it. The Forrester analyst covered WAN optimization management services and hybrid Ethernet-MPLS offerings to help people transition from MPLS VPNs to Carrier-grade Ethernet.
Who should you hire for WAN optimization? Do you trust your own Telco that provides your bandwidth to help you figure out ways to use less of it? Alternatives include System Integrators and Service providers like IBM and EDS. Or, you could try to do it yourself, but this requires capital investment in gear and performance monitoring software.
New workloads like Voice over IP (VoIP) and digital surveillance can help cost-justify upgrading your MPLS VPNs to Carrier-grade Ethernet, and converging this with iSCSI and/or Fibre Channel over Ethernet (FCoE) traffic can help reduce costs as well. Both MPLS and Ethernet will co-exist for a while, and hybrid offerings from Telcos will help ease the transition. In the meantime, switching some workloads to Cloud Computing can provide immediate relief to in-house networks now. Converging voice, video, LAN, WAN and SAN traffic may require IT departments to rethink how the role of "network administrator" is handled.
Navigating the Myriad New Sourcing Models
The landscape of outsourcing has changed with the introduction of new Cloud Computing offerings. However, adapting these new offerings to internal preferences may prove challenging. The Forrester analyst suggested being ready to influence your company to adopt Cloud Computing as a new sourcing option.
Traditional outsourcing just manages your existing hardware and software, often referred to as "Your mess for less!" However, outsourcing contract law is mature and many outsource providers are large, well-established providers. In contrast, some SaaS providers are small, and the few that are large may be fairly new to the outsourcing business. Here are some things to consider:
Where will the data physically be located? There are government regulations, such as the US Patriot Act, that can influence this decision. Many Canadian and European customers are avoiding providers where data is stored in the United States for this reason.
What is the service delivery chain? Some cloud providers in turn use other cloud providers. For example, a SaaS provider might develop the software and then rent the platform it runs on from a PaaS provider, which in turn might be using offshore or co-location facilities to actually house the equipment. Knowing the service delivery chain may prove important in contract negotiations. Clarify "cloud" terminology and avoid mixed metaphors.
What is their contingency plan? What is your contingency plan if the system is slow or inaccessible? What is their plan to protect against data loss during disasters? What if they go out of business? Source Code Escrow has proven impractical in many cases. SLAs should provide for performance, availability and other key metrics. However, service level penalties are not a cure-all for major disruptions, loss of revenue or reputation.
How will they handle security, compliance and audits? Heavy regulatory requirements may favor dedicated resources to be used.
Who has "custodianship" of the data? Will you get the data back if you discontinue the contract? If so, what format will it be in, and will it make any sense if you are not running the same application as the cloud provider?
Will they provide transition assistance? Moving from on-premises to cloud may involve some effort, including re-training of end users.
Are the resources shared or dedicated? For shared resource environments, is the capacity "fenced off" in any way to prevent other clients from impacting your performance or availability?
I am glad to see so much interest in Cloud Computing. To learn more, here is IBM's [Cloud Computing] landing page.
A Forrester analyst drew an analogy between a river and the upcoming onslaught of millennials. Some 100 years ago, smart companies positioned themselves near rivers; the water provided power as well as a means of transporting products. Today, however, being positioned near a river doesn't ensure company success, and there are plenty of examples of long-established companies now filing for bankruptcy.
As we get out of this recession, the war for people will be intense. In the United States, as many as 76 million [Baby Boomers], born between 1946 and 1964, are retiring or approaching retirement, being replaced by 46 million [Gen X] workers, born between 1965 and 1976. By 2010, there will be as many as 31 million [Millennials], born between 1977 and 1998, in the workforce.
To drive the point home, the Forrester analyst cited [Whirlpool] as an example, a company more than 100 years old, with 73,000 employees across 170 countries. Whirlpool manufactures kitchen, laundry and other home appliances. From 1997 to 2002, however, Whirlpool's per-ticket sales were dropping at a rate of 3.4 percent per year. To reverse this trend, they established the Whirlpool Young Professional program, assigned I-mentors, and invested in Web 2.0 collaboration tools. They realized that they needed to harness Gen X and Millennial energy. The result? From 2002 to 2006, they had a complete turnaround, with per-ticket sales growing 5.9 percent per year.
Since I covered IBM's keynote session yesterday, I thought it would only be fair to cover HP's today. IBM and HP are the top two IT vendors in the world, and not surprisingly also the top two IT storage vendors, and are both platinum sponsors for this event.
Phil McKinney, VP and CTO of Hewlett-Packard (HP) Personal Systems Group
Phil presented "Enabling Innovation: A Strength In Any Economy", which covered HP's approach to innovationnot just within HP itself, but also to help their customers. He presented an interesting progression forIT. In the first, IT is very technology-centric, focusing on standardizing platforms and automating tasks.In the second, IT is more process-oriented, standardizing and automating business processes measured for reliable IT outcomes. In the third, IT is business-aligned, standardizing and automating services, measured on business results. He argued that the challenge was for companies to transform their IT through this progression to improve business impact.
To help customers, HP focuses on four aspects of an Innovation Management Framework:
Strategy, Measurement and Metrics
Systems, Collaboration tools and knowledge management
Culture, Education and Training
Ecosystem, business partnerships and customer innovation
He wrapped up his talk reminding us that ideas without execution are just hobbies.
Tom Peck, Senior VP and CIO, Levi Strauss & Co
Levi Strauss & Co. manufactures denim pants and other clothing apparel, and has been doing so for more than 150 years. Tom made a point to actually wear denim jeans and a sports coat on stage for his talk. His presentation, "Dealing with Disruption", was not about disruptive technologies, but rather the disruption the economic downturn has had on the retail industry. To survive this recession, IT leaders need to be bold about their hiring, reorganizing and rethinking of IT, because disruption is everywhere.
IT is not a cost center at Levi Strauss, and represents only 3.5 percent of their total expenses. Instead, they have educated their stakeholders that IT is an investment for competitive advantage. They have focused on simplifying, which is important because their line of pants has grown incredibly complex. When you factor in the different fabrics, colors, styles, sizes, fit and finish, you end up with a large number of different pants. This complexity came from an effort to provide exactly what every customer thought they needed. He cited a great quote:
“If I had asked people what they wanted, they would have said faster horses.” --- Henry Ford
This same complexity occurs in IT. To address the changes needed, Tom combined "Lean IT" principles with "Six Sigma" methodologies. Lean IT helped identify problems with the overall flow of processes and provided the tools to remove steps that did not add value. Six Sigma was applied to the remaining steps that did add value, to improve capability and effectiveness.
Companies that have been around for a while, like IBM, Whirlpool and Levi Strauss & Co., have learned to adapt to the changing business and IT landscape, and to adopt new ideas for new ways of doing things.
Bob Moffat, IBM Senior VP and Group Executive who leads IBM's world-renowned Systems and Technology Group, and Cris Espinosa, System x and BladeCenter platform sales specialist, one of the many employees like me at the bottom of the same Systems and Technology Group.
Here's Bob Moffat holding an IBM BladeCenter HS22 blade server. A lot of the IBM executives were on hand to help show off IBM's strategy and vision, and highlight how to deploy IBM systems that are the right fit for today.
This is Ira Chavis, IBM consulting IT specialist, who covers how to manage and reduce energy usage today with IBM Energy Efficiency offerings.
Here is Casey Bell, our liaison for the conference, and Angela Reese, who presented Tivoli's latest appliances:
IBM Tivoli Foundations Application Manager
IBM Tivoli Foundations Service Manager
These are IBM servers pre-installed with IBM Tivoli software to help monitor applications and provide service desk support. They are part of our IBM Express initiative, which focuses on solutions aimed at medium sized businesses. By offering them as appliances, rather than just software, these can be deployed in only an hour in most situations.
And here I am, Tony Pearson, IBM certified Dynamic Infrastructure solutions expert, covering servers, storage and networking systems that help to improve service, reduce costs and manage risk. IBM offers flexible systems, software and services to match a wide diversity of business needs, designed for today's challenges and tomorrow's opportunities.
This was a long day, and looks to be a long week ahead.
Forrester analysts kicked off the keynote sessions for Day 1 of the Forrester IT Forum 2009 event. The theme for this conference is "Redefining IT's value to the Enterprise." Rather than focusing on blue-sky futures that are decades away, Forrester wants to present a blend of pragmatic information that is actionable in the next 90 days, along with some forward-looking trends.
If you ask CEOs how well their IT operations are doing, 75 percent will say they are doing great. However, if you dig down, and ask how their companies are leveraging IT to help generate revenues, reduce costs, improve employee morale, drive profits, improve customer service, or manage risks, then the percentage drops down to 30 to 35 percent.
What are the root causes of this "perception gap" in value between business and IT? Several ideas come to mind:
Some CEOs still consider IT departments as "cost centers". Rather than exploiting technology to help drive the rest of the business, they are seen as a necessary evil, an extension of the accounting department, for example.
Some CEOs consider IT's role as basically "keeping the lights on". They only notice IT when the lights go out, or other business outages caused by disruptions in IT.
IT departments measure themselves in technology terms, not business terms. CEOs and the rest of the senior management team may not be "tech savvy", and the CIO and IT directors may not be "business savvy", resulting in failure to communicate IT's role and value to the rest of the business.
This conference is focused on CIOs and IT professionals, and how they can bridge the tech/business gap. The first two executive keynote presentations emphasized this point.
Bob Moffat, Senior VP and Group Executive, IBM
Bob Moffat (my fifth-line manager, or if you prefer, my boss's boss's boss's boss's boss) is the Senior VP and Group Executive of IBM's Systems and Technology Group that manufactures storage and other hardware. He presented how IBM is helping our clients deploy smarter solutions. Globalization has changed world business markets, has changed the reach of information technology, and has changed our clients' needs. To support that, IBM is focused on making the world a smarter planet, instrumented with appropriate sensors, interconnected over converging networks, and intelligent enough to provide visibility, control and automation.
It's time to rethink IT in light of these new developments, to think about IT in client terms, with business metrics. Bob gave several internal and customer examples, here's one from the City of Stockholm:
Covering nine square miles of Stockholm, Sweden, IBM led [the largest project of its kind] in Europe to address traffic congestion. To reduce congestion caused by 300,000 vehicles, the City of Stockholm enacted a "congestion fee" with real-time recognition of license plates and a Web infrastructure to collect payments. The analytics, metrics and incentives have paid off. Since August 2007, traffic is down 18 percent, travel time on inner streets is reduced, and use of "green" vehicles has increased 9 percent.
In addition to smarter traffic, IBM has initiatives for smarter water, smarter energy, smarter healthcare, smarter supply chain, and smarter food supply.
Dave Barnes, Senior VP and CIO, United Parcel Service (UPS)
Dave Barnes must act as the "trusted advisor" to the rest of the senior management team. UPS delivers packages worldwide. They put sensors on all of the vehicles, not just to know how fast they were driving, but also how often they drove in reverse gear, and sensors on the engines to determine maintenance schedules. Analytics found that driving in reverse was the most dangerous, and by providing this information to the drivers themselves, the drivers were able to come up with their own innovative ways to minimize accidents. This is one role of IT, to provide employees the information they need to enable them to be better at their own jobs.
Dave also mentioned the importance of collaborating across business units. Their "Information Technology Steering Committee (ITSC)" has 15 members, of which only three are from the IT department. This helped deploy social media initiatives within UPS. For example, Twitter has been adopted so that senior management can get unfiltered customer feedback. This is perhaps another key role of IT, to flatten an organization from cultural hierarchies that prevent top brass up in the ivory tower from hearing what is going wrong down on the street. Too often, a customer or client complains to the nearest employee, and this may or may not get passed up accurately along the chain of command. Twitter allowed executives to see what was going on for themselves.
Dave also covered the "Best Neighbor" approach. If you were going to build a deck in your back yard, you might ask your neighbors that have already done this, and learn from their experience. Sadly, this does not happen enough in IT. To address this, UPS has a "Tech Governance Group" that focused on business process across the organization. For example, they improved "package flow", reducing 100 million miles in the past few years.
Lastly, he mentioned that many technologists are "loners". They have a few like that, but they try to hire techies who look to team across business units instead. Likewise, they try to hire business people who are somewhat tech savvy. For example, they have encouraged business employees to write their own reports, rather than requesting new reports to be developed by the IT department. The end result: the business people get exactly the reports they want, faster than waiting for IT to do it. Another role for IT is to provide end-users the tools to make their own reports.
(Dave didn't mention what tools these were, but it sounded like the Business Intelligence and Reporting Tools [BIRT] that IBM uses.)
These two sessions were a great one-two punch to the audience of 600 CIOs and IT professionals. First, IBM sets the groundwork for what needs to be done. Then, UPS shows how they did exactly that, adopting a dynamic infrastructure and got great results. This is going to be an interesting week!
Well, I have arrived safely in Las Vegas, and the [Forrester IT Forum 2009 conference] started today at noon, with a Welcome Reception at the Technology Showcase. As a platinum sponsor, IBM has a booth manned by several subject matter experts, on the fourth floor of the Sands Expo, between the Venetian and Palazzo hotels.
On the left side, we have Paula Koziol and our [Kaon Interactive] 3D touch-screen.
On the right side, we have Ira Chavis and Cris Espinosa, and a rack of some of IBM's latest offerings.
I'm here all week to blog about this event. When I am not in sessions, I will probably hang out at the Technology Showcase with my colleagues above. If you are in Las Vegas, and want to connect, please contact me.
Rather than hold this event at IBM's Kirkland facility, we chose the [Chateau Ste. Michelle] winery, just a few miles away in Woodinville.
The weather paired perfectly with the information we presented: clear skies and temperatures in the low 70s.
This was a typical roadshow event: serve breakfast, meet everyone, hold four main-tent sessions, answer questions, then finish with a nice lunch. Here was the speaker line-up:
Jerry Mixon presented how IBM Global Services helps customers improve service, reduce costs and manage risk.
Steve Loeschorn presented the latest on IBM System x and BladeCenter offerings. Steve showed how IBM's System x servers were more energy efficient than comparable x86 servers from HP and Dell.
Michael Middleton presented the latest from IBM POWER® systems. Michael had some interesting statistics that showed IBM's AIX operating system to be more reliable than Sun Solaris, HP-UX, and Microsoft Windows. Based on a study of 400 companies, AIX averaged only 36 minutes of downtime per year.
Continuing my ongoing discussion on Solid State Disk (SSD), fellow blogger BarryB (EMC) points out in his [latest post]:
Oh – and for the record TonyP, I don't think I ever said EMC was using a newer or different EFDs than IBM. I just asserted that EMC knows more than IBM about these EFDs and how they actually work a storage array under real-world workloads.
(Here "EFD" is refers to "Enterprise Flash Drive", EMC's marketing term for Single Layer Cell (SLC) NAND Flash non-volatile solid-state storage devices. Both IBM and EMC have been selling solid-state storage for quite some time now, but EMC felt that a new term was required to distinguish the SLC NAND Flash devices sold in their disk systems from solid-state devices sold in laptops or blade servers. The rest of the industry, including IBM, continues to use the term SSD to refer to these same SLC NAND Flash devices that EMC is referring to.)
Although STEC asserts that IBM is using the latest ZeusIOPS drives, IBM is only offering the 73GB and 146GB STEC drives (EMC is shipping the latest ZeusIOPS drives in 200GB and 400GB capacities for DMX4 and V-Max, affording customers a lower $/GB, higher density and lower power/footprint per usable GB.)
Here is where I enjoy the subtleties between marketing and engineering. Does the above seem like he is saying EMC is using newer or different drives? What are typical readers expected to infer from the statement above?
That there are four different drives from STEC, in four different capacities. In the HDD world, drives of different capacities are often different, and larger capacities are often newer than those of smaller capacities.
That the 200GB and 400GB are the latest drives, and that 73GB and 146GB drives are not the latest.
That the STEC press release is making false or misleading claims.
Uncontested, some readers might infer the above and come to the wrong conclusions. I made an effort to set the record straight. I'll summarize with a simple table:
Raw drive capacity   Usable (conservative format)   Usable (aggressive format)
128GB                73GB                           n/a
256GB                146GB                          200GB
512GB                n/a                            400GB
So, we all agree now that the 256GB drives that are formatted as 146GB or 200GB are in fact the same drives, that IBM and EMC both sell the latest drives offered by STEC, and that the STEC press release was in fact correct in its claims.
I also wanted to emphasize that IBM chose the more conservative format on purpose. BarryB [did the math himself] and proved my key points:
Under some write-intensive workloads, an aggressive format may not last the full five years. (But don't worry, BarryB assures us that EMC monitors these drives and replaces them when they fail within the five years under their warranty program.)
Conservative formats with double the spare capacity happen to have roughly double the life expectancy.
I agree with BarryB that an aggressive format can offer a lower $/GB than the conservative format. Cost-conscious consumers often look for less-expensive alternatives, and are often willing to accept lower reliability or shorter life expectancy as a trade-off. However, "cost-conscious" is not the typical EMC target customer, who often pays a premium for the EMC label. To compensate, EMC offers RAID-6 and RAID-10 configurations to provide added protection. With a conservative format, RAID-5 provides sufficient protection.
(Just so BarryB won't accuse me of not doing my own math, a 7+P RAID-5 using conservative format 146GB drives would provide 1022GB of capacity, versus 4+4 RAID-10 configuration using aggressive format 200GB drives only 800GB total.)
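For anyone who wants to double-check that arithmetic, here is a quick back-of-the-envelope sketch in Python. It is illustrative only: it ignores array-level formatting overhead, and the function names are mine, not anything from IBM or EMC documentation.

    def raid5_usable(drives, drive_gb):
        # RAID-5 keeps one drive's worth of parity, e.g. 7+P
        return (drives - 1) * drive_gb

    def raid10_usable(drives, drive_gb):
        # RAID-10 mirrors everything, so half the raw capacity is usable
        return (drives // 2) * drive_gb

    print(raid5_usable(8, 146))   # 7+P with conservative 146GB drives -> 1022 GB
    print(raid10_usable(8, 200))  # 4+4 with aggressive 200GB drives   ->  800 GB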
In an ideal world, you the consumer would know exactly how many IOPS your application will generate over the next five years, exactly how much capacity you will require, be offered all three drives in either format to choose from, and make a smart business decision. Nothing, however, is ever this simple in IT.
Recently, IBM and the University of Texas Medical Branch (UTMB) [launched an effort] using IBM's World Community Grid "virtual supercomputer" to enable laboratory testing of drug candidates for drug-resistant and newly emerging influenza strains, such as H1N1 (aka "swine flu"), in less than a month.
Researchers at the University of Texas Medical Branch will use [World Community Grid] to identify the chemical compounds most likely to stop the spread of the influenza viruses and begin testing these under laboratory conditions. The computational work adds up to thousands of years of computer time which will be compressed into just months using World Community Grid. As many as 10 percent of the drug candidates identified by calculations on World Community Grid are likely to show antiviral activity in the laboratory and move to further testing.
According to the researchers, without access to World Community Grid's virtual super computing power, the search for drug candidates would take a prohibitive amount of time and laboratory testing.
This reminded me of an 18-minute video of Larry Brilliant at the 2006 Technology, Entertainment and Design [TED] conference. Back in 2006, Larry predicted a pandemic in the next three years, and here it is 2009 and we have the H1N1 virus.
His argument was to have "early detection" and "early response" to contain worldwide diseases like this.
A few months after Larry's "call to action" in 2006, IBM and over twenty major worldwide public health institutions, including the World Health Organization [WHO] and the Centers for Disease Control and Prevention [CDC], [announced the Global Pandemic Initiative], a collaborative effort to help stem the spread of infectious diseases.
One might think that, given our proximity to Mexico, the first cases would have appeared in border states such as Arizona, but instead there were cases as far away as New York and Florida. The NYT explains in the article [Predicting Flu With the Aid of (George) Washington] that two rival universities, Northwestern University and Indiana University, both predicted that there would be about 2,500 cases in the United States, based on air traffic patterns and tracking data from a Web site called ["Where's George"], which follows the movement of US dollar bills stamped with the Web site URL.
The estimates were fairly close. According to the Centers for Disease Control and Prevention [H1N1 Flu virus tracking page], there are 3,009 confirmed cases of H1N1 in 45 states as of this writing.
This is just another example of how an information infrastructure, used properly to provide insight, make predictions, and analyze potential cures, can help make the world a smarter planet. Fortunately, IBM is leading the way.
My post last week [Solid State Disk on DS8000 Disk Systems] kicked up some dust in the comment section. Fellow blogger BarryB (a member of the elite [Anti-Social Media gang from EMC]) tried to imply that the 200GB solid state disk (SSD) drives were different or better than the 146GB drives used in IBM System Storage DS8000 disk systems. I pointed out that they are actually the same physical drive, just formatted differently.
To explain the difference, I first have to go back to regular spinning Hard Disk Drives (HDD). There are variances in manufacturing, so how do you make sure that a spinning disk has AT LEAST the capacity you are selling it as? The solution is to include extra. This is the same way that rice, flour, and a variety of other commodities are sold. Legally, if the label says you are buying a pound or kilo of flour, then the package must contain AT LEAST that much. Including some extra is a safe way to comply with the law. In the case of disk capacity, having some spare capacity, and the means to use it, follows the same general concept.
(Disk capacity is measured in multiples of 1000, in this case a Gigabyte (GB) = 1,000,000,000 bytes, not to be confused with [Gibibyte (GiB)] = 1,073,741,824 bytes, based on multiples of 1024.)
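A quick calculation shows how far apart the two units are. This is just a rough illustration in Python, with variable names of my own choosing:

    GB  = 1000**3   # decimal gigabyte: 1,000,000,000 bytes
    GiB = 1024**3   # binary gibibyte:  1,073,741,824 bytes

    print(146 * GB / GiB)   # about 135.97, so a "146GB" drive holds roughly 136 GiB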
Let's say a manufacturer plans to sell a 146GB HDD. We know that in some cases there might be bad sectors on the disk that won't accept written data on day 1, and there are other marginally-bad sectors that might fail to accept written data a few years later, after wear and tear. A manufacturer might design a 156GB drive with 10GB of spare capacity, and format it with a defective-sector table that redirects reads and writes of known bad sectors to good ones. When a bad sector is discovered, it is added to the table, and a new sector is assigned out of the spare capacity. Over time, the amount of data a drive can store diminishes year after year, and once it drops below its rated capacity, it no longer meets its legal requirements. Based on averages across manufacturing runs and material variances, these could then be sold as 146GB drives, with a life expectancy of 3-5 years.
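Here is a toy sketch of how such a defective-sector table might behave, purely for illustration. The class and method names are my own invention, not any vendor's actual firmware:

    class SectorRemapper:
        """Toy model of a defective-sector table (names are my own invention)."""

        def __init__(self, rated_sectors, spare_sectors):
            self.remap = {}   # known-bad sector -> replacement sector
            self.free_spares = list(range(rated_sectors, rated_sectors + spare_sectors))

        def mark_bad(self, sector):
            # Assign a replacement out of the spare capacity for a failed sector.
            if not self.free_spares:
                raise RuntimeError("spares exhausted: drive falls below rated capacity")
            self.remap[sector] = self.free_spares.pop()

        def resolve(self, sector):
            # Reads and writes to a known-bad sector are redirected to its spare.
            return self.remap.get(sector, sector)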
With Solid State Disk, the technology requires a lot of tricks and techniques to stay above the rated capacity. For example, you can format a 256GB drive as a conservative 146GB usable, with an additional 110GB (75 percent) spare capacity to handle all of the wear-leveling. You could lose up to 22GB of cells per year, and still have the rated capacity for the full five-year life expectancy.
Alternatively, you could take a more aggressive format, say 200GB usable, with only 56GB (28 percent) of spare capacity. If you lost 22GB of cells per year, then sometime during the third year, hopefully under warranty, your vendor could replace the drive with a fresh one that should last the rest of the five-year time frame. The failed drive, still having 190GB or so of usable capacity, could then legally be re-issued as a refurbished 146GB drive to someone else.
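Here is a minimal sketch of the spare-capacity math above, assuming the same 256GB raw drive and my simplified assumption of a steady 22GB of cells lost per year (real drives wear per erase-write cycle, not by the calendar):

    RAW_GB      = 256   # raw NAND capacity of the drive
    LOSS_PER_YR = 22    # simplified assumption: cells lost per year

    def years_of_spare(usable_gb):
        # Years until the spare capacity is used up and the drive
        # drops below its rated (formatted) capacity.
        return (RAW_GB - usable_gb) / LOSS_PER_YR

    print(years_of_spare(146))  # conservative format: about 5.0 years
    print(years_of_spare(200))  # aggressive format:   about 2.5 years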
The wear and tear on SSD happens mostly during erase-write cycles, so for read-intensive workloads, such as boot disks for operating system images, the aggressive 200GB format might be fine, and might last the full five years. For traditional business applications (70 percent read, 30 percent write) or more write-intensive workloads, IBM feels the more conservative 146GB format is a safer bet.
This should be of no surprise to anyone. When it comes to the safety, security and integrity of our clients' data, IBM has always emphasized the conservative approach.
(Note: IBM [Guidelines] prevent me from picking blogfights, so this post is only to set the record straight on some misunderstandings, point to some positive press about IBM's leadership in this area, and for me to provide a different point of view.)
First, let's set the record straight on a few things. The [RedPaper is still in draft form] under review, and so some information has not yet been updated to reflect the current situation.
You can have 16 or 32 SSD per DA pair. However, you can only have a maximum of 128 SSD drives total in any DS8100 or DS8300. In the case of the IBM DS8300 with 8 DA pairs, it makes more sense to spread the SSD out across all 8 pairs, and perhaps this is what confused BarryB.
Yes, you can order an all-SSD model of the IBM DS8000 disk system. I don't see anywhere in the RedPaper that suggests otherwise, and I have confirmed with our offering manager that this is the case.
The 73GB and 146GB drives are freshly manufactured by STEC. The 146GB and 200GB drives are actually the same drive, just formatted differently. The 200GB format does not offer as much spare capacity for wear-leveling, and is therefore intended only for read-intensive workloads. (Perhaps EMC wants you to find this out the hard way so that you replace them more often?) These reduced-spare-capacity formats may not be appropriate for some write-intensive workloads. Don't let anyone from EMC try to misrepresent the 73GB or 146GB drives from STEC as older, obsolete, collecting dust in a warehouse, or otherwise no longer manufactured by STEC.
You can relocate data from HDD to SSD using "Data Set FlashCopy", a feature that does not involve host-based copy services, does not consume any MIPS on your System z mainframe, and is performed inside the DS8000 disk system. You can also use host-based copy services as well, but it is not the only way.
You can use any supported level of z/OS with SSD in the IBM DS8000. There is ENHANCED support, mentioned in the RedPaper, that you get only with z/OS 1.8 and above, allowing you to create automation policies that place data sets onto SSD or non-SSD storage pools. This synergy makes SSD with the IBM DS8000 superior to EMC's initial offerings, which lacked this OS support.
I find it amusing that BarryB's basic argument is that IBM's initial release of SSD on the DS8000 supports less than what the architecture could eventually be extended to support. Actually, if you look at EMC's November release of Atmos, as well as its most recent announcement of V-Max, they basically say the same thing: "Stay tuned, this is just our initial release, with various restrictions and limitations, but more will follow." Architecturally, the IBM DS8000 could support a mix of SSD and non-SSD on the same DA pairs, could support RAID-6 and RAID-10 as well, and could support larger-capacity drives or higher-capacity read-intensive formats. These could all be done via RPQ if needed, or in a follow-on release.
BarryB's second argument is that IBM is somehow "throwing cold water" on SSD technology, that somehow IBM is trying to discourage people from using SSD even while offering disk systems with this technology. IBM offered SSD storage on BladeCenter servers LONG BEFORE any EMC disk system offering, and IBM continues to innovate in ways that deliver the best business value from this new technology. Take for example this 24-page IBM Technical Brief: [IBM System z® and System Storage DS8000: Accelerating the SAP® Deposits Management Workload With Solid State Drives]. It is full of example configurations that show that SSD on the IBM DS8000 can help in practical business applications. IBM takes a solution view, and worked with DB2, DFSMS, z/OS, High Performance FICON (zHPF), and down the stack to optimize performance and provide real business value. Thanks to this synergy, IBM can provide 90 percent of the performance improvement with only 10 percent of the SSD capacity required by EMC offerings. Now that's innovative!
The price and performance differences between FC and SATA (what EMC is mostly used to) are only 30-50 percent. But the price and performance differences between SSD and HDD are more than an order of magnitude, in some cases 10-30x, similar to the differences between HDD and tape. Of course, if you want hybrid solutions that take best advantage of SSD+HDD, it makes more sense to go to IBM, the leading storage vendor that has been doing HDD+Tape hybrid solutions for the past 30 years. IBM understands this better, and has more experience dealing with these orders of magnitude, than EMC.
But don't just take my word for it. Here is an excerpt from Jim Handy, from [Objective Analysis] market research firm, in a recent Weekly Review from [Pund-IT] (Volume 5, Issue 23--May 6, 2009):
"What about IBM? One thing that we are finding is that IBM really “Gets It” in the area offlash in the data center. Readers of the Pund-IT Review will not only recall that IBM Researchpushed its SSD-based “Quicksilver” storage system to one million IOPS using Fusion-ioflash-based storage, but they also may have noticed that the recent MySQL and mem-cachedappliances recently introduced by Schooner Information Technology are both flash-enableddevices introduced in partnership with IBM. Ironically, while other OEMs are takingthe cautious approach of introducing a standard SSD option to their systems first, IBM appearsto have been working on several approaches simultaneously to bring flash to thedata center not only in SSDs, but in innovative ways as well."
As for why STEC put out a press release on their own this week without a corresponding IBM press release, I can only say that IBM already announced all of this support back in February, and I blogged about it in my post [Dynamic Infrastructure - Disk Announcements 1Q09]. This is not the first time one of IBM's suppliers has tried to drum up business in this manner. Intel often funds promotions for IBM System x servers (the leading Intel-based servers in the industry) to help drive more business for their Xeon processor.
So, BarryB, perhaps it's time for you to take out your green pen and work up another one of your all-too-common retractions and corrections.
Wrapping up this week's theme on Cloud Computing, I finish with an IBM announcement for two new products to help clients build private cloud environments from their existing Service Oriented Architecture (SOA) deployments.
IBM WebSphere CloudBurst Appliance -- a new hardware appliance that provides access to software virtual images and patterns that can be used as is or easily customized, and then securely deployed, managed and maintained in a private cloud.
IBM WebSphere Application Server Hypervisor Edition -- a version of IBM WebSphere Application Server software optimized to run in virtualized server environments such as VMware, and preloaded in the WebSphere CloudBurst Appliance.
With more than 7,000 customer implementations worldwide, IBM is the SOA market leader. Of course, both of these products above can be used with IBM System Storage solutions, including Cloud-Optimized Storage offerings like Grid Medical Archive Solution (GMAS), Grid Access Manager software, Scale-Out File Services (SoFS), and the IBM XIV disk system.
IBM is part of the "Cloud Computing 5" major vendors pushing the envelope (the other four are Google, Microsoft, Amazon and Yahoo). In fact, IBM has a number of initiatives that allow customers to leverage IBM software in a cloud. IBM is working in collaboration with Amazon Web Services (AWS), a subsidiary of Amazon.com, Inc. to make IBM software available in the Amazon Elastic Compute Cloud (Amazon EC2). WebSphere sMash, Informix Dynamic Server, DB2, and WebSphere Portal with Lotus Web Content Management Standard Edition are available today through a "pay as you go" model for both development and production instances. In addition to those products, IBM is also announcing the availability of IBM Mashup Center and Lotus Forms Turbo for development and test use in Amazon EC2, and intends to add WebSphere Application Server and WebSphere eXtreme Scale to these offerings.
For more about IBM's leadership in Cloud Computing, see the IBM [Press Release].
Continuing this week's theme on Cloud Computing, Dynamic Infrastructure and Data Center Networking, IBM unveiled details of an advanced computing system that will be able to compete with humans on Jeopardy!, America’s favorite quiz television show. Additionally, officials from Jeopardy! announced plans to produce a human vs. machine competition on the renowned show.
For nearly two years, IBM scientists have been working on a highly advanced Question Answering (QA) system, codenamed "Watson" after IBM's first president, [Thomas J. Watson]. The scientists believe that the computing system will be able to understand complex questions and answer with enough precision and speed to compete on Jeopardy! Produced by Sony Pictures Television, Jeopardy! poses trivia questions covering a broad range of topics, such as history, literature, politics, film, pop culture, and science. It presents a grand challenge for a computing system because of the variety of subject matter, the speed at which contestants must provide accurate responses, and the fact that the clues given to contestants involve subtle meaning, irony, riddles, and other complexities at which humans excel and computers traditionally do not. Watson will incorporate massively parallel analytical capabilities and, just like its human competitors, will not be connected to the Internet or have any other outside assistance.
If this all sounds familiar, you might remember some of the events that have led up to this:
In 1984, the movie ["The Terminator"] introduced the concept of [Skynet], a fictional computer system developed by the military that becomes self-aware through its advanced artificial intelligence.
In 1997, an IBM computer called Deep Blue defeated World Chess Champion [Garry Kasparov] in a famous battle of human versus machine. To compete at chess, IBM built an extremely fast computer that could calculate 200 million chess moves per second based on a fixed problem. IBM’s Watson system, on the other hand, is seeking to solve an open-ended problem that requires an entirely new approach – mainly through dynamic, intelligent software – to even come close to competing with the human mind. Despite their massive computational capabilities, today’s computers cannot consistently analyze and comprehend sentences, much less understand cryptic clues and find answers in the same way the human brain can.
In 2005, Ray Kurzweil wrote [The Singularity is Near] referring to the wonders that artificial intelligence will bring to humanity.
The research underlying Watson is expected to elevate computer intelligence and human-to-computer communication to unprecedented levels. IBM intends to apply the unique technological capabilities being developed for Watson to help clients across a wide variety of industries answer business questions quickly and accurately.
It's Tuesday, which means IBM announcements, and today IBM made some major announcements that support a [Dynamic Infrastructure]! I hinted at this yesterday, choosing this week's theme to be all about Cloud Computing and Alternative Sourcing. I will briefly highlight today's storage-related announcements here, and try to go into more detail over the next few weeks.
Ethernet switches and routers
In support of Cloud Computing and Cloud Storage, IBM is now back in the Ethernet networking business. This matters for storage, as protocols like iSCSI, CIFS and NFS are gaining prominence. Extending IBM's existing OEM relationship with Brocade, there are four series:
[c-series] - "c" for Compact, these are 1U high fixed port switches
[g-series] - "g" for Pay-as-you-Grow using IronStack stacking technology to allow up to 8 switches to be glommed, glued, er.. "gathered" together as a single virtual chassis.
[m-series] - "m" for Multiprotocol Label Switching [MPLS] which supports routing between LAN and WAN networks over OC12 and OC48 lines.
[s-series] - "s" for slots, the B08S has eight slots, and the B16S has sixteen slots, supporting up to 384 ports. These models support Power-over-Ethernet [PoE] that simplifies attaching Voice-over-IP (VoIP) telephones and IP-based surveillance cameras.
IBM announced it will strengthen its partnership with Juniper Networks, and continues to consider Cisco a strategic partner as well. IBM also launched new services to help customers position themselves for Cloud Computing and Cloud Storage.
The IBM [DS5000] now supports self-encrypting disk drives, known also as "full-disk encryption" or FDE, for added security, and 8Gbps Fibre Channel (FC) ports for added performance. The DS5300 model in particular now supports up to 448 disk drives for added scalability.
Comprehensive Data Protection Solution
IBM's [Data Protection Solution] shows off IBM's awesome synergy between servers, storage and software, combining System x servers, Tivoli Storage Manager FastBack software, and DS5000, DS4000 or DS3000 series disk systems. The solution is designed to protect Windows-based servers and their applications, offering bare metal restores and application-level protection for Oracle, SQL, Exchange and SAP.
Tivoli Storage Productivity Center
Last February, IBM previewed the renaming of TotalStorage Productivity Center to its new name, Tivoli Storage Productivity Center. Today, IBM announces [Tivoli Storage Productivity Center v4.1]. Some key changes include:
Productivity Center for Fabric has been merged into Productivity Center for Disk
Productivity Center for Replication is now integrated, but remains separately licensed
Productivity Center can now feed input to IBM's Novus Storage Enterprise Resource Planner [SERP]
TS7650 ProtecTIER Data Deduplication IP-based replication
IBM previews IP-based replication, which allows the TS7650 appliance or TS7650G gateway to send virtual tape data over to a remote location, instead of having the underlying disk systems perform the replication on its behalf. Having the TS7650 do the replication is preferred, as it can maintain virtual cartridge integrity: when a virtual tape is unmounted, replication can begin at that point.
This week's theme is alternative sourcing through Cloud Computing.
I thought I would start off the week interviewing an owner of a Small or Medium-sized Business [SMB] that recently adopted this approach.
Meet Fred, one of the new co-owners of my singles activities club, Tucson Fun and Adventures, known affectionately as [TFA]. TFA recently adopted a new "Software-as-a-Service" [SaaS] offering for the company's Web site.
While the experience is still fresh in his mind, I thought this would be a good opportunity to illustrate some of the concepts of alternative sourcing through Cloud Computing using a local example.
Give me some background on the company. How long has it been around? How many employees?
TFA has been in business since 1997, and has six employees, including an office manager, event planners and event coordinators.
How critical is "Web presence" to the business?
It's very important in several ways. First, the TFA staff plans 25 events per month, and our hundreds of members register for these events mostly through the Web site. Second, we have it connected to our bank accounts, so that it can process credit cards to collect the funds for renewals and event registrations. Third, it serves as a way to communicate upcoming events to our members, especially trips, so they can save the date on their own calendars. And fourth, the Web site serves as a "landing page" for all of our radio spots, newspaper ads, and other marketing efforts.
TFA had a Web site before, and now you have helped launch this new Web site. What motivated this change?
Our members were complaining about our 1999-era Web site. The pages were written in HTML, ASP (Active Server Pages) and SQL (Structured Query Language), connected to a Microsoft SQL Server 2005 database. It was mostly text-based, with the only animation being text scrolling horizontally across the screen. The Web hosting provider offered reliable access, but was located in New York state on East Coast time. If a member signed up for an event after 9pm or 10pm here in Tucson, the registration was stamped with the next day's date, which could change the price of the event or indicate that the deadline was missed. If there were any changes needed to the pages or logic, or new columns required in the database, it got expensive. The TFA employees don't know how to program in ASP or SQL, so we had to hire outside professionals each time.
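(Fred's late-night registration problem is a classic time-zone pitfall. Here is a minimal, hypothetical sketch of what was likely going on, not the actual code of either Web site: if the server stamps each registration with its own East Coast date, an evening signup in Tucson rolls over to the next day, while converting to the club's own time zone first gives the date the member intended.)

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # 10:30pm on June 1 in Tucson, expressed in UTC
    signup_utc = datetime(2009, 6, 2, 5, 30, tzinfo=ZoneInfo("UTC"))

    server_date = signup_utc.astimezone(ZoneInfo("America/New_York")).date()
    club_date   = signup_utc.astimezone(ZoneInfo("America/Phoenix")).date()

    print(server_date)  # 2009-06-02, the old site's view: deadline missed
    print(club_date)    # 2009-06-01, what the member actually intended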
Does this new Software-as-a-Service (SaaS) Web site address these problems you were trying to solve?
Yes. The new Web site is hosted by [Memberize], which provides a hosted membership management application. The TFA staff can now manage its membership, plan events, and communicate them with graphics, videos, and links to maps. They don't need to know ASP or SQL programming, because a built-in [WYSIWYG] editor is simple enough for anyone with standard word-processing skills. The database gave us the option to add customized fields for each member we have in our club.
Was it difficult to switch over?
Not at all. Memberize gave us a 60-day free trial, and we needed all that time to transfer over our membership records, customize the style of the overall template for all pages, and then copy over the content from our old Web site. We had to transfer our e-commerce service, and contact GoDaddy to transfer the domain. The employee training required was fairly minimal. Cost-wise, it was only a few hundred dollars for a one-time setup fee, and then we pay a monthly fee on a tiered pricing structure based on the count of our active members.
How has the reaction been from your membership?
I've gotten a lot of positive feedback. The learning curve was minimal. Our members found the new Web site intuitive and interactive. For example, the calendar of events can be shown in a single month-at-a-glance format, with green dots showing the events you are signed up for.
And from your perspective, Fred, is the new Web site easy to administer?
Yes, I can now easily generate standard reports, and create my own ad-hoc reports as needed. This wasn't possible with the old system unless I hired an ASP programmer.
Hopefully, this provides some insight on how even the smallest SMB enterprises can adopt a Dynamic Infrastructure through alternative sourcing. Cloud Computing takes many forms, including Software-as-a-Service managed offerings.