Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Tony Pearson is a Master Inventor and Senior Software Engineer for the IBM Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
Continuing my week in Washington DC for the annual [2010 System Storage Technical University], here is my quick recap of the keynote sessions presented Monday morning. Marlin Maddy, Worldwide Technical Events Executive for IBM Systems Lab Services and Training, served as emcee.
Roland Hagan, IBM Vice President for the IBM System x server platform, presented on how IBM is redefining the x86 computing experience. More than 50 percent of all servers are x86-based. These x86 servers are easy to acquire, enjoy a large application base, and can take advantage of a readily available skilled workforce for administration. The problem is that 85 percent of x86 processing power remains idle, energy costs are 8 times what they were 12 years ago, and management costs are now 70 percent of the IT budget.
IBM has the number one market share for scalable x86 servers. Roland covered the newly announced eX5 architecture that has been deployed in both rack-optimized models as well as IBM BladeCenter blade servers. These can offer 2x the memory capacity as competitive offerings, which is important for today's server virtualization, database and analytics workloads. This includes 40 and 80 DIMM models of blades, and 64 to 96 DIMM models of rack-optimized systems. IBM also announced eXFlash, internal Solid State Drives accessible at bus speeds. FlexNode allows a 4-node system to dynamically change to 2 separate 2-node systems.
By 2013, analysts estimate that 69 percent of x86 workloads will be virtualized, and that 22 percent of servers will be running some form of hypervisor software. By 2015, this grows to 78 percent of x86 workloads virtualized and 29 percent of servers running hypervisors.
Doug Balog, IBM Vice President and Disk Storage Business Line Executive, presented how the growth of information results in a "perfect storm" for the storage industry. Storage Admins are focused on managing storage growth and the related costs and complexity, proper forecasting and capacity planning, and backup administration. IBM's strategy is to help clients in the following areas:
Storage Efficiency - getting the most use out of the resources you invest
Service Delivery - ensuring that information gets to the right people at the right time, simplify reporting and provisioning
Data Protection - protecting data against unethical tampering, unauthorized access, and unexpected loss and corruption
He wrapped up his talk covering the success of DS8700 and XIV. In fact, 60 percent of XIV sales are to EMC customers. The TCO of an XIV is less than half the TCO of a comparable EMC VMAX disk system.
Dave McQueeney, IBM Vice President for Strategy and CTO for US Federal, covered how IBM's Smarter Planet vision for smarter cities, smarter healthcare, smarter energy grid and smarter traffic are being adopted by the public sector. Almost every data center in US Federal government is out of power, floor space and/or cooling capability. An estimated 80 percent of US Federal government IT budgets are spent on maintenance and ongoing operations, leaving very little left over for the big transformational projects that President Barack Obama wants to accomplish.
Who has the most active Online Transaction Processing (OLTP)? You might guess a big bank, but it is the US Department of Homeland Security (DHS), with a system processing 600 million transactions per day. Another government agency is #2, and the top Banking application is finally #3. The IBM mainframe has solved problems 10 to 15 years ago that the distributed systems are just now encountering today. Worldwide, more than 80 percent of banks use mainframes to handle their financial transactions.
IBM's recent POWER7 set of servers are proving successful in the field. For example, Allianz was able to consolidate 60 servers to 1. Running DB2 on a POWER7 server is 38 percent less expensive than running Oracle on x86 Nehalem processors. For Java, a JVM on POWER7 performs 73 percent better than a JVM on x86 Nehalem.
The US federal government ingests a large amount of data. It has huge 10-20 PB data warehouses. In fact, the number of gigabytes the US federal government alone receives every year exceeds the capacity of all the disk drives produced by every drive manufacturer. This means that all data must be processed through "data reduction" or it is gone forever.
The last keynote for Monday was given by Clod Barrera, IBM Distinguished Engineer and Chief Technical Strategist for System Storage. He started out shocking the audience with his view that the "disk drive industry is a train wreck". While R&D in disk drives enjoyed a healthy improvement curve up to about 2004, it has now slowed down, getting more difficult and more expensive to improve performance and capacity of disk drives. The rest of his presentation was organized around three themes:
Integrated Stacks - while newcomers like Oracle/Sun and the VCE coalition are promoting the benefits of integrated stacks, IBM has been doing this for the past five decades. New advancements in server and storage virtualization provide exciting new opportunities.
Integrated Systems - solutions like IBM Information Archive and SONAS, and new features like Easy Tier that help adopt SSD transparently. As it gets harder and harder to scale-up, IBM has moved to innovative scale-out architectures.
Integrated Data Center management - companies are now realizing that management and governance are critical factors of success, and that this needs to be integrated between traditional IT, private, public and hybrid cloud computing.
This was a great inspiring start for what looks like an awesome week!
This week, July 26-30, 2010, I am in Washington DC for the annual [2010 System Storage Technical University]. As with last year, we have joined forces with the System x team. Since we are in Washington DC this time, IBM added a "Federal Track" to focus on government challenges and solutions. So, basically, attendees get the option to attend three conferences for one low price.
This conference was previously called the "Symposium", but IBM changed the name to "Technical University" to emphasize the technical nature of the conference. No marketing puffery like "Journey to the Private Cloud" here! Instead, this is bona fide technical training, qualifying attendees to count this towards their Continuing Professional Education (CPE).
(Note to my readers: The blogosphere is like a playground. In the center are four-year-olds throwing sand into each other's faces, while mature adults sit on benches watching the action, and only jumping in as needed. For example, fellow blogger Chuck Hollis (EMC) got sand in his face for promising to resign if EMC ever offered a tacky storage guarantee, and then [failed to follow through on his promise] when it happened.
Several of my readers asked me to respond to another EMC blogger's latest [fistful of sand].
A few months ago, fellow blogger Barry Burke (EMC) committed to [stick to facts] in posts on his Storage Anarchist blog. That didn't last long! BarryB apparently has fallen in line with EMC's over-promise-then-under-deliver approach. Unfortunately, I will be busy covering the conference and IBM's robust portfolio of offerings, so won't have time to address BarryB's stinking pile of rumor and hearsay until next week or later. I am sorry to disappoint.)
This conference is designed to help IT professionals make their business and IT infrastructure more dynamic and, in the process, help reduce costs, mitigate risks, and improve service. This technical conference event is geared to IT and Business Managers, Data Center Managers, Project Managers, System Programmers, Server and Storage Administrators, Database Administrators, Business Continuity and Capacity Planners, IBM Business Partners and other IT Professionals. This week will offer over 300 different sessions and hands-on labs, certification exams, and a Solutions Center.
For those who want a quick stroll through memory lane, here are my posts from past events:
In keeping with IBM's leadership in Social Media, the IBM Systems Lab Services and Training team running this event has its own [Facebook Fan Page] and
[blog]. IBM Technical University has a Twitter account [@ibmtechconfs], and hashtag #ibmtechu. You can also follow me on Twitter [@az990tony].
Continuing this week's coverage of IBM's 3Q announcements, today it's all about storage for our mainframe clients.
IBM System Storage DS8700
IBM is the leader in high-end disk attached to mainframes, with the IBM DS8700 being our latest model in a long series of successful products in this space. Here are some key features:
Full Disk Encryption (FDE), which I mentioned in my post [Different Meanings of the word "Protect"]. FDE are special 15K RPM Fibre Channel drives that include their own encryption chip, so that IBM DS8700 can encrypt the data at rest without impacting performance of reads or writes. The encryption keys are managed by IBM Tivoli Key Lifecycle Manager (TKLM).
Easy Tier, which I covered in my post [DS8700 Easy Tier Sub Lun Automatic Migration], offers what EMC promised but has yet to deliver: the ability for CKD volumes and FBA LUNs to straddle the fence between Solid State Drives (SSD) and spinning disk. For example, a 54GB CKD volume could have 4GB on SSD and the remaining 50GB on spinning drives. The hottest extents are moved automatically up to SSD, and the coldest moved down to spinning disk. To learn more about Easy Tier, watch my [7-minute video] on the IBM [Virtual Briefing Center].
z/OS Distributed Data Backup (zDDB), announced this week, provides the ability for a program running on z/OS to back up data written by distributed operating systems like Windows or UNIX and stored in FBA format. In the past, backing up FBA LUNs involved a program such as the IBM Tivoli Storage Manager client reading the data natively and sending it over an Ethernet LAN to a TSM server, which could run on the mainframe and consume mainframe resources. This feature eliminates the Ethernet traffic by allowing a z/OS program to read the FBA blocks through standard FICON channels, then write them to z/OS disk or tape resources. Here is the [Announcement Letter] for more details.
One program that already takes advantage of this new zDDB feature is Innovation's [FDRSOS], which I pronounce "fudder sauce". If you are an existing FDRSOS customer, now is a good time to get rid of any EMC or HDS disk and replace it with the new IBM DS8700 system.
IBM System Storage TS7680 ProtecTIER Deduplication Gateway for System z
When it comes to virtual tape libraries that attach to mainframes, the two main players are IBM TS7700 series and Oracle StorageTek Virtual Storage Manager (VSM). However, mainframe clients with StorageTek equipment are growing frustrated over Oracle's lack of commitment for mainframe-attachable storage. To make matters worse, Oracle recently missed a key delivery date for their latest enterprise tape drive.
What's new this week is that IBM now supports native IP-based asynchronous replication of virtual tapes at a distance, from one TS7680 to another TS7680. This replaces the previous method of replicating with the back-end disk features. The problem with disk replication is that all the virtual tapes get copied over. Instead, the ProtecTIER administrator can decide which subset of virtual tapes should be replicated to the remote site, which can reduce both storage requirements and bandwidth costs. See the [Announcement Letter] for more details.
By combining multiple components into a single "integrated system", IBM can offer a blended disk-and-tape storage solution. This provides the best of both worlds: high-speed access using disk, with lower costs and better energy efficiency from tape. According to a study by the Clipper Group, tape can be 23 times less expensive than disk over a 5-year total cost of ownership (TCO).
I've also covered Hierarchical Storage Management, such as my post [Seven Tiers of Storage at ABN Amro], and my role as lead architect for DFSMS on z/OS in general, and DFSMShsm in particular.
However, some explanation might be warranted in the use of these two terms in regards to SONAS. In this case, ILM refers to policy-based file placement, movement and expiration on internal disk pools. This is actually a GPFS feature that has existed for some time, and was tested to work in this new configuration. Files can be individually placed on either SAS (15K RPM) or SATA (7200 RPM) drives. Policies can be written to move them from SAS to SATA based on size, age and days non-referenced.
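To make this concrete, here is a rough sketch of what such rules look like in the GPFS policy language. The pool names and thresholds here are hypothetical examples for illustration, not SONAS defaults; SONAS generates and applies its policies through its own management interface:

```
/* Place small or new files on the fast SAS pool, everything else on SATA */
RULE 'small_files' SET POOL 'sas' WHERE FILE_SIZE < 1073741824
RULE 'default'     SET POOL 'sata'

/* Move files not referenced in 30 days from SAS down to SATA */
RULE 'age_out' MIGRATE FROM POOL 'sas' TO POOL 'sata'
  WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30
```

The same SQL-like WHERE clauses can select on size, age, days non-referenced, file name patterns, and other attributes.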
HSM is also a form of ILM, in that it moves data from SONAS disk to external storage pools managed by IBM Tivoli Storage Manager. A small stub is left behind in the GPFS file system indicating the file has been "migrated". Any reference to read or update this file will cause the file to be "recalled" back from TSM to SONAS for processing. The external storage pools can be disk, tape or any other media supported by TSM. Some estimate that as much as 60 to 80 percent of files on NAS have low reference and should be stored on tape instead of disk, and now SONAS with HSM makes that possible.
This distinction allows the ILM movement to be done internally, within GPFS, and the HSM movement to be done externally, via TSM. Both take advantage of the GPFS high-speed policy engine, which can process 10 million files per node, running in parallel across all interface nodes. Note that TSM is not required for ILM movement. In effect, SONAS brings the policy-based management features of DFSMS on the z/OS mainframe to all the other operating systems that access SONAS.
HTTP and NIS support
In addition to NFS v2, NFS v3, and CIFS, the SONAS v1.1.1 adds the HTTP protocol. Over time, IBM plans to add more protocols in subsequent releases. Let me know which protocols you are interested in, so I can pass that along to the architects designing future releases!
SONAS v1.1.1 also adds support for Network Information Service (NIS), a client/server based model for user administration. In SONAS, NIS is used for netgroup and ID mapping only. Authentication is done via Active Directory, LDAP or Samba PDC.
SONAS already had synchronous replication, which was limited in distance. Now, SONAS v1.1.1 provides asynchronous replication, using rsync, at the file level. This is done over Wide Area Network (WAN) across to any other SONAS at any distance.
Interface modules can now be configured with either 64GB or 128GB of cache. Storage now supports both 450GB and 600GB SAS (15K RPM) and both 1TB and 2TB SATA (7200 RPM) drives. However, at this time, an entire 60-drive drawer must be either all one type of SAS or all one type of SATA. I have been pushing the architects to allow each 10-pack RAID rank to be independently selectable. For now, a storage pod can have 240 drives, 60 drives of each type of disk, to provide four different tiers of storage. You can have up to 30 storage pods per SONAS, for a total of 7200 drives.
An alternative to internal drawers of disk is a new "Gateway" iRPQ that allows the two storage nodes of a SONAS storage pod to connect via Fibre Channel to one or two XIV disk systems. You cannot mix and match: a storage pod is either all internal disk, or all external XIV. A SONAS gateway combined with external XIV is referred to as a "Smart Business Storage Cloud" (SBSC), which can be configured off premises and managed by third-party personnel so your IT staff can focus on other things.
See the Announcement Letters for the SONAS [hardware] and [software] for more details.
For those who are wondering how this positions against IBM's other NAS solution, the IBM System Storage N series, the rule of thumb is simple. If your capacity needs can be satisfied with a single N series box per location, use that. If not, consider SONAS instead. For those with non-IBM NAS filers who now realize that SONAS is a better approach, IBM offers migration services.
Both the Information Archive and the SONAS can be accessed from z/OS or Linux on System z mainframe, from "IBM i", AIX and Linux on POWER systems, all x86-based operating systems that run on System x servers, as well as any non-IBM server that has a supported NAS client.
Well, today's Tuesday, and you know what that means... IBM Announcements!
This week, IBM has their huge 3Q Launch. This on top of the [2Q results] IBM released yesterday. You can read how the rest of the company did, but it is good to see that IBM grew in both revenue and market share for storage!
As with any IBM launch of this magnitude, there are so many enhancements, I will spread them across several posts.
IBM System Storage TS7610 ProtecTIER® Deduplication Appliance Express
The TS7610 is a smaller appliance than the TS7650 we introduced last year, taking up only 3U of rack space (2U for the disk itself, and a 1U slide rail to simplify maintenance). This is designed for smaller deployments, such as midsized businesses between 100 and 1000 employees that back up 3TB of data per week or less. The unit relies on RAID-protected SATA drives. Thanks to the same ProtecTIER data deduplication we have on the TS7650, the TS7610 can hold up to 135TB of backup data on just 5.4TB of disk capacity, with in-line data ingest at 80 MB/sec performance. This little Virtual Tape Library (VTL) emulates up to four TS3500 libraries, with 64 LTO-3 drives and over 8000 virtual tapes. See the [Announcement letter] for details.
The [ProtecTIER Entry Edition] offers a hub-and-spoke approach to replication. You can have up to twelve (12) TS7610 boxes (the "spokes") replicate to a central VTL (the "hub"). This can be ideal for protecting remote office or branch office deployments.
IBM doubles the storage capacity by using 2TB hard disk drives for the N3300 and N3400 series models, maximizes customer satisfaction through Partner Select Bundles (software bundles) for all of the N3000 series (N3300, N3400, N3600), and offers Application and Server Packs (software bundles) for N3400 models.
For the high-end, IBM introduces an enhanced Performance Acceleration Module (PAM II) bundle for N7900 Gateway. This bundle includes two 512GB Solid State Drive PAM II adapters, two dual-port 10GbE TOE network interface cards (NIC), and various software features.
The DS5020 and EXP520 joins their bigger siblings DS5100 and DS5300 in supporting Solid State Drives (SSD), available in 73GB and 300GB capacities. A new air filter bezel is also available for these when used in dusty environments. See the [Announcement letter] for details.
For my friends down in Brazil, a new 2.8 meter length power cord that supports 220-250 volts is now available for all DS4000 and DS5000 series disk systems. Obrigado para o seu negócio!
IBM Tivoli Storage FlashCopy Manager v2.2
I covered this latest release in my post [FlashCopy Manager v2.2] but the marketing team felt we should include it with this launch to get added exposure and visibility.
I'll try to get to the rest in separate posts over the rest of this week.
Of course, EMC isn't the first, and won't be the last, vendor to [hear the sirens] of Cloud Computing and crash their ships on rocky shores. Just because you manufacture hardware or write software does not guarantee your success as a Cloud service provider.
(FTC disclaimer: I work for IBM. IBM is a successful public cloud service provider, as well as offering products that can be used to deploy a private, hybrid or community cloud, and provides technology to other cloud service providers.)
An amusing excerpt from Steve Duplessie's post:
"Side Note: There is no such thing as a private cloud. A private cloud is called IT. We don’t need more terms for the same stuff."
I have to agree that when vendors like EMC say "Journey to the Private Cloud", skeptics hear "How to keep your IT administrator job by sticking with a traditional IT approach". Butchers, bakers, candlestick makers and the specialty shop "arms dealers" of Cloud Computing IT equipment may not want to see their market shrink down to a dozen or so service providers, and drum up the fear that "Public Cloud" deployments will "disintermediate" the IT staff.
But does that mean the use of term "Private Cloud" should be discontinued? The US National Institute of Standards and Technology [NIST] offers their cloud model composed of five essential characteristics, three service models, and four deployment models. Here's an excerpt:
Broad network access
Cloud Software as a Service (SaaS)
Cloud Platform as a Service (PaaS)
Cloud Infrastructure as a Service (IaaS)
Like traditional IT, a private cloud infrastructure is operated solely for an organization, so I can see how many might consider the term unnecessary. However, unlike traditional IT, a private cloud may be managed by the organization or a third party and may exist on premise or off premise.
How many traditional IT departments meet the five essential characteristics above? Instead of "on-demand self-service", many IT departments have complicated and lengthy procurement and change control procedures. A few might have "measured service" with a charge-back scheme, and a few others prefer a "show-back" approach instead, showing end users or managers how much IT resource is being consumed without assigning a monetary figure or other penalty. Rapid elasticity? Giving back a resource you asked for can be just as painful, because re-purposing that equipment follows the same complicated and lengthy change control procedures.
Just like the term "intranet" refers to a private network that employs Internet standards and technologies, I feel the term "private cloud" is useful, representing an infrastructure that meets the above criteria, employing Public Cloud standards and technologies, that can distinguish itself from traditional IT in key ways that provide business value.
What I do hope "vaporizes" is all the hype, and all the misuse of the Cloud terminology out there.
Well, I'm back from my adventure. For those who did not follow my tweets, here is a quick recap. Not counting the day we flew from Tucson to Minneapolis, or the day we flew from Memphis back to Tucson, Mo and I spent nine days on the road, covering 1549 miles, or roughly two thirds of the Mississippi River.
Celebrated Fourth of July at the [Wide River Winery] just north of Clinton, IA. Saw "The Last Airbender" at the local cinema.
The Buffalo Bill Cody museum was closed on Monday, ate my first loose-meat sandwich lunch at Maid-Rite in Moline, IL, the button museum, aka [Muscatine History and Industry Center], was also closed on Monday, took pictures in the corn fields at Oquawka, IL, ate smoked carp from [Quality Fisheries, in Niota, IA], ate raisin pie at the Maid-Rite in Quincy, IL. Stayed in a hotel in Hannibal, MO - home of Mark Twain.
Took the Mark Twain paddleboat tour up and down Mississippi river to see Jackson island, almost drove car into the river at Winfield, MO where the Ferry was supposed to be, ate one of everything on the menu at [Fast Eddy's Bon-Air], rode up to the top of the [Gateway Arch] in St. Louis. We stayed in a hotel in downtown St. Louis, MO.
Ate donuts at World's Fair Donuts and frozen custard called "concrete" at [Ted Drewes'] in St. Louis. Popeye museum in Chester, IL, ate dinner at Dixie BBQ in Jonesboro, and took pictures of the huge Superman statue in Metropolis, IL. Stayed in a hotel in Paducah, KY.
Read the murals on the flood walls and toured the [National Quilt Museum] in Paducah, KY. Lunch at Nicky's BBQ just north of Clinton, KY, stopped for photos at Reelfoot Lake in Tennessee. Stayed in a hotel in Memphis, TN.
Tour of [Graceland Mansion], home of Elvis Presley, and [Mud Island], ate dinner at Gus' World Famous Hot & Spicy Chicken, all in Memphis, TN.
Well, I am off on a much-needed vacation. For my American readers, this weekend represents our "4th of July" Independence Day holiday. What better way to celebrate than to drive hundreds of miles from one side of the country to the other? In this case, from the North side down to the South side.
I am armed with two books on this subject. The first is part of a series on American road trips, which details the roadside attractions to be found along the Great River Road. We will start up in Minnesota and work our way southward, covering a total of eight states in eight days along the Mississippi River.
The second book is Alton Brown's "Feasting on Asphalt, the River Run". This book describes Alton's ride Northward up the Mississippi river, detailing the restaurants and foods he enjoyed, so I will have to read the chapters in reverse.
Special thanks to Roy Buol, mayor of Dubuque, Iowa, whom I [met in Scottsdale earlier this year], for the idea to come visit his fine city, considered one of the Smarter Cities in the USA thanks to IBM technology.
I don't know if I will have internet access along the way, or have the time and/or energy to blog, tweet (@az990tony) or upload photos during the trip. We'll see.
Congratulations to my colleague and close friend, Harley Puckett, who celebrated his 25th anniversary of service here at IBM. This is known internally as joining the "Quarter Century Club" or QCC. This is not just a figure of speech: the members of this club hold get-togethers and barbeques throughout the year.
Here is Harley welcoming Ken Hannigan and others he worked with back in Tivoli Storage Manager (TSM) software development.
Our manager, Bill Terry, presenting Harley with a plaque.
Continuing my saga for my [New Laptop], I have gotten all my programs operational, transferred and organized all my data, and now ready for testing. You can read my previous posts on this series: [Day 1], [Day 2], [Day 3], [Day 4].
At this point, you might be thinking, "Testing? Just use your laptop already, deal with problems as you find them!" In my case, I need to sign off that the new laptop meets my needs, and then send back my previous laptop, wiped clean of all passwords and data. I have until the end of June to do this.
The value of testing is to avoid problems later, perhaps an inconvenient time such as a business trip or client briefing. It is better to work out any issues while I am still in the office, connected to the internal IBM intranet on a high-speed wired connection. Also, I plan to do a Physical-to-Virtual (P-to-V) conversion of my Windows XP C: drive to run as a virtual guest OS on Linux, so I want to make sure the image is in working order before the conversion. That said, here is what my testing encountered.
Of the 134 applications I had identified as being installed on my old laptop, I determined that I only needed about 70 of them. The others I did not bother to install on the new one.
I had not thought about "addons" and "plugins" that I have that attach themselves inside browsers or other applications. I made sure that Flash, Shockwave and Java worked correctly on all three browsers: IE6, Firefox and Opera.
One of my "plugins" is an application called [iSpring Pro], which plugs into Microsoft PowerPoint. I thought I had Microsoft Office installed, but found out the standard IBM build had only the viewers. I installed Microsoft Office 2003 Standard Edition with PowerPoint, Excel and Word. I then realized that I did not have the original v4.3 installation file for iSpring Pro, so I downloaded the latest v5 from their website. However, my license key is only for version 4, so a quick email got this resolved, and the nice folks at iSpring Solutions sent me the v4.3 installation file.
Shameless Plug: We use iSpring Pro to record our voices with PowerPoint slides to generate web videos for the [IBM Virtual Briefing Center] which we use to complement face-to-face briefings. This allows attendees to review introductory materials to prepare for their visit to Tucson, or to stay up-to-date on products and features in between annual visits. If you have not checked out the IBM Virtual Briefing Center, now is a good time to see what videos and other resources we have out there. You can even request to schedule a briefing in Tucson!
Testing out iSpring Pro, I realized that there are no jacks for my headset. On my old ThinkPad T60, I had two jacks, one green for headphone and one pink for microphone. My headset has two cables, one for each, which I use for the recordings. I also use this for online webinars and training sessions. Apparently, the ThinkPad T410 went with a single 3.5mm "combo" audio jack that handles both roles. Fortunately, there is a [Headset Buddy] adapter that merges the two cables from my headset into the combo jack on my new laptop. I ordered one, which will arrive sometime next week.
My new laptop doesn't fit my old docking station either. I had set the docking station aside while I had the two laptops latched together for the file transfers, but now that I am done with the old laptop, I discovered that my new T410 doesn't fit. I ordered a new one.
Using find, grep, awk, sort and uniq, I was able to generate a list of all the file extensions in my Documents folder. I found old Lotus 1-2-3, Freelance Graphics, and WordPro files. I thought Lotus Symphony would handle these, but it does not. I was able to install an old version of Lotus SmartSuite that includes these programs so that I can process these files.
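For anyone who wants to do the same inventory, a pipeline along these lines produces a tally of file extensions (the directory argument is whatever your documents folder happens to be; this is my reconstruction, not the exact command I ran):

```shell
#!/bin/sh
# Count files by extension under a directory: strip the path, take the
# last dot-separated field of each filename, lowercase it, and tally.
DIR="${1:-.}"
find "$DIR" -type f \
  | awk -F/ '{print $NF}' \
  | awk -F. 'NF>1 {print tolower($NF)}' \
  | sort | uniq -c | sort -rn
```

The output is a count-per-extension list, sorted with the most common extensions first, which makes the forgotten legacy formats jump right out.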
I also found pptx, docx and xlsx files in the extensions list, which represent the new Microsoft Office 2007 formats. I installed the "Office Compatibility Pack" that allows Office 2003 to read these files.
Lastly, I installed a few programs that support a wide variety of file formats. VideoLAN's [VLC] plays a variety of audio and video files. [7-Zip] packs and unpacks a variety of archive files. (Note: Another program, BitZipper, also supports a variety of archive formats, but the install will corrupt your Firefox and IE browsers with new tool bars, change your search engine default, and install a lot of other unwanted software. Cleaning up the mess can be time-consuming. You have been warned!) I also installed [MadEdit], a binary/hex/text editor that will open any file to see what kind of format it has inside. From this, I was able to determine that some of my extension-less files were GIF, RTF or PDF format, and rename them accordingly.
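The detective work I did by eye in MadEdit can also be approximated with the standard `file` utility, which inspects a file's magic bytes. Here is a sketch (the sample directory and file are invented for the example, and it only prints suggested renames rather than performing them, so you can review before touching real data):

```shell
#!/bin/sh
# Identify extension-less files by magic bytes and suggest a rename.
# 'file -b --mime-type' reports e.g. image/gif or application/pdf.
mkdir -p /tmp/mystery
printf 'GIF89a' > /tmp/mystery/holiday   # a GIF missing its extension

for f in /tmp/mystery/*; do
  case "$(file -b --mime-type "$f")" in
    image/gif)       echo "mv $f $f.gif" ;;
    application/pdf) echo "mv $f $f.pdf" ;;
    text/rtf)        echo "mv $f $f.rtf" ;;
  esac
done
```

Piping the suggested `mv` commands through visual inspection first, then into `sh`, is a safe two-step way to fix up a whole directory of mystery files.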
With the testing done, I am ready to go wipe my old system of all passwords and data!
Continuing my saga for my [New Laptop], let's recap my progress so far:
[Day 1 afternoon], I received the laptop from shipping on Wednesday, took a backup of the factory install image to an external USB drive, and re-partitioned to run both Windows and Linux operating systems.
[Day 2], I spent Thursday using the "Migration Assistant" tool, and completed the operation sending the rest of my data over to the /dev/sda6 NTFS partition.
So now, Friday (day 3), I get to install any applications that were not part of the pre-installed image. Thankfully, I had planned ahead and figured out the 134 different applications that I had on my old system. I printed out a copy of my spreadsheet, and used it as a checklist to systematically go through the list. For each one, I determined one of the following:
If I could find the application already installed, either the same version or newer, or functionally equivalent, then I would mark it down as being part of the factory build. Of the programs pre-installed, I am quite pleased that the settings were carried over during yesterday's file transfer. For example, my bookmarks and bookmarklets in Firefox are all intact. However, the transfer did not carry forward all of my Firefox add-ons, so those I had to install separately.
IBM Standard Software Installer is our internal website for IBM and select third-party software for the different operating systems supported. Many of the ISSI programs were already included in the factory build, such as Lotus Notes, Lotus Symphony and the Firefox browser, so I had very few left to install manually from ISSI.
INSTALL from D:\Install-Files
As I mentioned in my previous post, I saved the ZIP or EXE files of installation, as well as any license keys, URLs and other useful information to re-install each application.
COPY over from D:\Prog-Files
Many programs don't have installation files, because they don't need to update the registry or create Desktop icons or Taskbar management buttons. For these I can just copy the directory over to C:\Program Files.
In some cases, the Install-File was fairly downlevel, so I downloaded a fresh copy from the Web. In other cases, I forgot to save the ZIP or EXE, so this was the backup plan.
DEFER for later install
I worked down the list alphabetically, but some programs needed other programs to be installed first, or I needed to find the license registry key, or whatever. This allowed me to focus on the most important programs first. Others I might defer indefinitely until I need them, such as programs to access Second Life, or to build software for Lego Mindstorms robots.
SKIP those applications no longer required
Some programs just don't need to be on my new system. This includes software to manage printers I no longer have, drivers for gadgets and devices I no longer own, and software that might have been specific to the old ThinkPad T60. This was also a good time to "de-duplicate" similar applications. For example, I have decided to limit myself to just three browsers: Firefox, Opera, and Internet Explorer (IE6).
The planning paid off. I was able to confirm or install all of my applications today and have a fully working Windows XP system partition. I celebrated by taking another backup.
Continuing my rant from Monday's post [Time for a New Laptop], I got my new laptop Wednesday afternoon. I was hoping the transition would be quick, but that was not the case. Here were my initial steps prior to connecting my two laptops together for the big file transfer:
Document what my old workstation has
Back in 2007, I wrote a blog post on how to [Separate Programs from Data]. I have since added a Linux partition for dual-boot on my ThinkPad T60.
Windows XP SP3 operating system and programs
Red Hat Enterprise Linux 5.4
My Documents and other data
I also created a spreadsheet of all my tools, utilities and applications. I combined and deduplicated the list from the following sources:
Control Panel -> Add/Remove programs
Start -> Programs panels
Program taskbar at bottom of screen
The last one was critical. Over the years, I have gotten into the habit of saving the ZIP or EXE files that self-install programs into a separate directory, D:\Install-Files, so that if I had to uninstall an application, due to conflicts or compatibility issues, I could re-install it without having to download it again.
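Combining and de-duplicating inventories from several sources is a natural fit for the same command-line tools. A sketch, with hypothetical file names and illustrative contents for the three sources:

```shell
# The three inventories (contents illustrative); in practice these came
# from Add/Remove Programs, the Start menu, and the taskbar.
printf 'Firefox\nOpera\n'        > addremove.txt
printf 'firefox\nLotus Notes\n'  > startmenu.txt
printf 'Opera\n7-Zip\n'          > taskbar.txt

# Merge, sort case-insensitively, and drop duplicate entries.
sort -f addremove.txt startmenu.txt taskbar.txt | uniq -i > applications.txt
wc -l < applications.txt    # → 4 distinct applications
```

The `-f`/`-i` flags fold case so that "Firefox" and "firefox" count as one program.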
So, I have a total of 134 applications, which I have put into the following rough categories:
AV - editing and manipulating audio, video or graphics
Files - backup, copy or manipulate disks, files and file systems
Browser - Internet Explorer, Firefox, Opera and Google Chrome
Communications - Lotus Notes and Lotus Sametime
Connect - programs to connect to different Web and Wi-Fi services
Demo - programs I demonstrate to clients at briefings
Drivers - attach or sync to external devices, cell phones, PDAs
Games - not much here, just the basic Solitaire, Minesweeper and Pinball
Help Desk - programs to diagnose, test and gather system information
Projects - special projects like Second Life or Lego Mindstorms
Lookup - programs to lookup information, like American Airlines TravelDesk
Meeting - I have FIVE different webinar conferencing tools
Office - presentations, spreadsheets and documents
Platform - Java, Adobe Air and other application runtime environments
Player - do I really need SIXTEEN different audio/video players?
Printer - print drivers and printer management software
Scanners - programs that scan for viruses, malware and adware
Tools - calculators, configurators, sizing tools, and estimators
Uploaders - programs to upload photos or files to various Web services
Backup my new workstation
My new ThinkPad T410 has a dual-core i5 64-bit Intel processor, so I burned a 64-bit version of [Clonezilla LiveCD] and booted the new system with that. The new system has the following configuration:
Windows XP SP3 operating system, programs and data
There were only 14.4GB of data, so it took just 10 minutes to back up to an external USB disk. I ran it twice: first using the option to dump the entire disk, and second to dump just the selected partition. The results were roughly the same.
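As a quick sanity check on those numbers (decimal units and integer math assumed):

```shell
# 14.4GB in 10 minutes, expressed as MB/s
mb=14400                  # 14.4GB ≈ 14400MB
seconds=$((10 * 60))
echo "$((mb / seconds)) MB/s"    # → 24 MB/s
```

About 24MB/s is roughly what an external USB 2.0 disk sustains, so the 10-minute figure is plausible.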
Run Workstation Setup Wizard
The Workstation Setup Wizard asks for all the pertinent location information, time zone, userid/password, needed to complete the installation.
I made two small changes to redirect data from the C: drive to the D: drive.
Changed "My Documents" to point to D:\Documents, which moves the files over from C: to D: to accommodate the new target location. See [Microsoft procedure] for details.
Edited C:\notes\notes.ini to point to D:\notes\data to store all the local replicas of my email and databases.
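For reference, the notes.ini setting involved is the data directory pointer; a minimal sketch, following the path given above (Notes picks it up at the next startup):

```ini
; C:\notes\notes.ini — point the Notes data directory at the D: drive
Directory=D:\notes\data
```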
Install Ubuntu Desktop 10.04 LTS
My plan is to run Windows and Linux guests through virtualization. I decided to try out Ubuntu Desktop 10.04 LTS, affectionately known as Lucid Lynx, which can support a variety of different virtualization tools, including KVM, VirtualBox-OSE and Xen. I have two identical 15GB partitions (sda2 and sda3) that I can use to hold two different systems, or one can be a subdirectory of the other. For now, I'll leave sda3 empty.
Take another backup of my new workstation
I took a fresh backup of the partitions (sda1, sda2, sda6) with Clonezilla.
The next step involved a cross-over Ethernet cable, which I don't have. So that will have to wait until Thursday morning.
Well, it feels like Tuesday and you know what that means... "IBM Announcement Day!" Actually, today is Wednesday, but since Monday was Memorial Day holiday here in the USA, my week is day-shifted. Yesterday, IBM announced its latest IBM FlashCopy Manager v2.2 release. Fellow blogger Del Hoobler (IBM) has also posted something on this over at the [Tivoli Storage Blog].
IBM FlashCopy Manager replaces two previous products. One was called Tivoli Storage Manager for Copy Services, the other was called Tivoli Storage Manager for Advanced Copy Services. To say people were confused between these two was an understatement: the first was for Windows, and the second was for UNIX and Linux operating systems. The solution? A new product that replaces both of these former products to support Windows, UNIX and Linux! Thus, IBM FlashCopy Manager was born. I introduced this product back in 2009 in my post [New DS8700 and other announcements].
IBM Tivoli Storage FlashCopy Manager provides what most people with "N series SnapManager envy" are looking for: application-aware point-in-time copies. This product takes advantage of the underlying point-in-time interfaces available on various disk storage systems:
FlashCopy on the DS8000 and SAN Volume Controller (SVC)
Snapshot on the XIV storage system
Volume Shadow Copy Services (VSS) interface on the DS3000, DS4000, DS5000 and non-IBM gear that supports this Microsoft Windows protocol
For Windows, IBM FlashCopy Manager can coordinate the backup of Microsoft Exchange and SQL Server. The new version 2.2 adds support for Exchange 2010 and SQL Server 2008 R2. This includes the ability to recover an individual mailbox or mail item from an Exchange backup. The data can be recovered directly to an Exchange server, or to a PST file.
For UNIX and Linux, IBM FlashCopy Manager can coordinate the backup of DB2, SAP and Oracle databases. Version 2.2 adds support for specific Linux and Solaris operating systems, and provides a new database cloning capability. Basically, database cloning restores a database under a new name with all the appropriate changes to allow its use for other purposes, like development, test or training. A new "fcmcli" command line interface allows IBM FlashCopy Manager to be used with custom applications or file systems.
A common misperception is that IBM FlashCopy Manager requires IBM Tivoli Storage Manager backup software to function. That is not true. You have two options:
In Stand-alone mode, it's just you, the application, IBM FlashCopy Manager and your disk system. IBM FlashCopy Manager coordinates the point-in-time copies, maintains the correct number of versions, and allows you to back up and restore directly disk-to-disk.
Unified Recovery Management with Tivoli Storage Manager
Of course, the risk with relying only on point-in-time copies is that in most cases, they are on the same disk system as the original data. The exception being virtual disks from the SAN Volume Controller. IBM FlashCopy Manager can be combined with IBM Tivoli Storage Manager so that the point-in-time copies can be copied off to a local or remote TSM server, so that if the disk system that contains both the source and the point-in-time copies fails, you have a backup copy from TSM. In this approach, you can still restore from the point-in-time copies, but you can also restore from the TSM backups as well.
IBM FlashCopy Manager is an excellent platform to connect application-aware functionality with hardware-based copy services.
Well, I'm back safely from my tour of Asia. I am glad to report that Tokyo, Beijing and Kuala Lumpur are pretty much how I remember them from the last time I was there in each city. I have since been fighting jet lag by watching the last thirteen episodes of LOST season 6 and the series finale.
Recently, I have started seeing a lot of buzz on the term "Storage Federation". The concept is not new, but rather based on the work in database federation, first introduced in 1985 by [A federated architecture for information management] by Heimbigner and McLeod. For those not familiar with database federation, you can take several independent autonomous databases, and treat them as one big federated system. For example, this would allow you to issue a single query and get results across all the databases in the federated system. The advantage is that it is often easier to federate several disparate heterogeneous databases than to merge them into a single database. [IBM Infosphere Federation Server] is a market leader in this space, with the capability to federate DB2, Oracle and SQL Server databases.
Storage expansion: You want to increase the storage capacity of an existing storage system that cannot accommodate the total amount of capacity desired. Storage Federation allows you to add additional storage capacity by adding a whole new system.
Storage migration: You want to migrate from an aging storage system to a new one. Storage Federation allows you to join the two systems, evacuate the data from the first onto the second, and then remove the first system.
Safe system upgrades: System upgrades can be problematic for a number of reasons. Storage Federation allows a system to be removed from the federation and be re-inserted again after the successful completion of the upgrade.
Load balancing: Similar to storage expansion, but on the performance axis, you might want to add additional storage systems to a Storage Federation in order to spread the workload across multiple systems.
Storage tiering: In a similar light, storage systems in a Storage Federation could have different capacity/performance ratios that you could use for tiering data. This is similar to the idea of dynamically re-striping data across the disk drives within a single storage system, such as with 3PAR's Dynamic Optimization software, but extends the concept to cross storage system boundaries.
To some extent, IBM SAN Volume Controller (SVC), XIV, Scale-Out NAS (SONAS), and Information Archive (IA) offer most, if not all, of these capabilities. EMC claims its VPLEX will be able to offer storage federation, but only with other VPLEX clusters, which brings up a good question. What about heterogeneous storage federation? Before anyone accuses me of throwing stones at glass houses, let's take a look at each IBM solution:
IBM SAN Volume Controller
The IBM SAN Volume Controller has been doing storage federation since 2003. Not only can the SVC bring together storage from a variety of heterogeneous storage systems, the SVC cluster itself can be a mix of different hardware models. You can have a 2145-8A4 node pair, a 2145-8G4 node pair, and the new 2145-CF8 node pair, all combined into a single SVC cluster. Upgrading SVC hardware nodes in an SVC cluster is always non-disruptive.
IBM XIV storage system
The IBM XIV has two kinds of independent modules. Data modules have processor, cache and 12 disks. Interface modules are data modules with additional processors, FC and Ethernet (iSCSI) adapters. Because these two module types play different roles in the XIV "colony", the number of each type is predetermined. Entry-level six-module systems have 2 interface and 4 data modules. Full 15-module systems have 6 interface and 9 data modules. Individual modules can be added or removed non-disruptively in an XIV.
IBM Scale-Out NAS
SONAS comprises three kinds of nodes that work together in concert: a management node, one or more interface nodes, and two or more storage nodes. The storage nodes are paired to manage up to 240 disk drives in a storage pod. Individual interface or storage nodes can be added or removed non-disruptively in the SONAS. The underlying technology, the General Parallel File System, has been doing storage federation since 1996 for some of the largest Top 500 supercomputers in the world.
IBM Information Archive (IA)
For the IA, there are 1, 2 or 3 nodes, which manage a set of collections. A collection can be either file-based, using industry-standard NAS protocols, or object-based, using the popular System Storage™ Archive Manager (SSAM) interface. Normally, you have as many collections as you have nodes, but the nodes are powerful enough to manage two collections each to provide N-1 availability. This allows a node to be removed, and a new node added into the IA "colony", in a non-disruptive manner.
Even in an ant colony, there are only a few types of ants, with typically one queen, several males, and lots of workers. But all the ants are red. You don't see colonies that mix between different species of ants. For databases, federation was a way to avoid the much harder task of merging databases from different platforms. For storage, I am surprised people have latched on to the term "federation", given our mixed results in the other "federations" we have formed, which I have conveniently (IMHO) ranked from least effective to most effective:
The Union of Soviet Socialist Republics (USSR)
My father used to say, "If the Soviet Union were in charge of the Sahara desert, they would run out of sand in 50 years." The [Soviet Union] actually lasted 68 years, from 1922 to 1991.
The United Nations (UN)
After the previous League of Nations failed, the UN was formed in 1945 to facilitate cooperation in international law, international security, economic development, social progress and human rights; to achieve world peace by stopping wars between countries; and to provide a platform for dialogue.
The European Union (EU)
With the collapse of the Greek economy, and the [rapid growth of debt] in the UK, Spain and France, there are concerns that the EU might not last past 2020.
The United States of America (USA)
My own country is a federation of states, each with its own government. California's financial crisis was compared to the one in Greece. My own state of Arizona is under boycott from other states because of its recent [immigration law]. However, I think the US has managed better than the EU because it has evolved over the past 200 years.
The Organization of the Petroleum Exporting Countries [OPEC]
Technically, OPEC is not a federation of cooperating countries, but rather a cartel of competing countries that have agreed on total industry output of oil to increase individual members' profits. Note that it was a non-OPEC company, BP, that could not "control their output" in what has now become the worst oil spill in US history. OPEC was formed in 1960, and is expected to collapse sometime around 2030 when the world's oil reserves run out. Matt Savinar has a nice article on [Life After the Oil Crash].
United Federation of Planets
The [Federation] fictitiously described in the Star Trek series appears to work well, an optimistic view of what federations could become if you let them evolve long enough.
Given the mixed results with "federation", I think I will avoid using the term for storage, and stick to the original term "scale-out architecture".
Here I am, day 11 of a 17-day business trip, on my last leg of the trip this week, in Kuala Lumpur in Malaysia. I have been flooded with requests to give my take on EMC's latest re-interpretation of storage virtualization, VPLEX.
I'll leave it to my fellow IBM Master Inventor Barry Whyte to cover the detailed technical side-by-side comparison. Instead, I will focus on the business side of things, using Simon Sinek's Why-How-What sequence. Here is a [TED video] from Garr Reynolds' post [The importance of starting from Why].
Let's start with the problem we are trying to solve.
Problem: migration from old gear to new gear, old technology to new technology, from one vendor to another vendor, is disruptive, time-consuming and painful.
Given that IT storage is typically replaced every 3-5 years, then pretty much every company with an internal IT department has this problem, the exception being those companies that don't last that long, and those that use public cloud solutions. IT storage can be expensive, so companies would like their new purchases to be fully utilized on day 1, and be completely empty on day 1500 when the lease expires. I have spoken to clients who have spent 6-9 months planning for the replacement or removal of a storage array.
A solution to make the data migration non-disruptive would benefit the clients (make it easier for their IT staff to keep their data center modern and current) as well as the vendors (reduce the obstacle of selling and deploying new features and functions). Storage virtualization can be employed to help solve this problem. I define virtualization as "technology that makes one set of resources look and feel like a different set of resources, preferably with more desirable characteristics." By making different storage resources, old and new, look and feel like a single type of resource, migration can be performed without disrupting applications.
Before VPLEX, here is a breakdown of each solution:
SAN Volume Controller - a new in-band storage virtualization device. Non-disruptive tech refresh, and a unified platform to provide management and functionality across heterogeneous storage.
HDS USP-V and USP-VM - add in-band storage virtualization to an existing storage array. Non-disruptive tech refresh, and a unified platform to provide management and functionality between internal tier-1 HDS storage and external tier-2 heterogeneous storage.
EMC Invista - a new out-of-band storage virtualization device with new "smart" SAN switches. Non-disruptive tech refresh, with a unified multi-pathing driver that allows host attachment of heterogeneous storage.
For IBM, the motivation was clear: protect customers' existing investments in older storage arrays and introduce new IBM storage with a solution that allows both to be managed with a single set of interfaces and provides a common set of functionality, improving capacity utilization and availability. IBM SAN Volume Controller eliminated vendor lock-in, giving clients a choice of multi-pathing drivers and allowing any-to-any migration and copy services. For example, IBM SVC can be used to help migrate data from an old HDS USP-V to a new HDS USP-V.
With EMC, however, the motivation appeared to be protecting software revenues from its PowerPath multi-pathing driver and its TimeFinder and SRDF copy services. Back in 2005, when EMC Invista was first announced, these three products represented 60 percent of EMC's bottom-line profit. (Ok, I made that last part up, but you get my point! EMC charges a lot for these.)
Back in 2006, fellow blogger Chuck Hollis (EMC) suggested that SVC was just a [bump in the wire] which could not possibly improve performance of existing disk arrays. IBM showed clients that putting cache (SVC) in front of other cache (back-end devices) does indeed improve performance, in the same way that multi-core processors successfully use L1/L2/L3 cache. Now, EMC is claiming their cache-based VPLEX improves performance of back-end disk. My how EMC's story has changed!
So now, EMC announces VPLEX, which sports a blend of SVC-like and Invista-like characteristics. Based on blogs, tweets and publicly available materials I found on EMC's website, I have been able to determine the following comparison table. (Of course, VPLEX is not yet generally available, so what is eventually delivered may differ.)
Scalability
SVC - Scalable, 1 to 4 node-pairs
Invista - One size fits all, single pair of CPCs
VPLEX - SVC-like, 1 to 4 director-pairs
SAN fabric support
SVC - Works with any SAN switches or directors
Invista - Required special "smart" switches (vendor lock-in)
VPLEX - SVC-like, works with any SAN switches or directors
Multi-pathing drivers
SVC - Broad selection: IBM Subsystem Device Driver (SDD) offered at no additional charge, as well as OS-native drivers Windows MPIO, AIX MPIO, Solaris MPxIO, HP-UX PV-Links, VMware MPP, Linux DM-MP, and the commercial third-party driver Symantec DMP
Invista - Limited selection, with focus on the priced PowerPath driver
VPLEX - Invista-like, PowerPath and Windows MPIO
Cache
SVC - Read cache, and choice of fast-write or write-through cache, offering the ability to improve performance
Invista - No cache; its Split-Path architecture cracked open Fibre Channel packets in flight, delayed every IO by 20 nanoseconds, and redirected modified packets to the appropriate physical device
VPLEX - SVC-like, read and write-through cache, offering the ability to improve performance
Space-Efficient Point-in-Time copies
SVC - FlashCopy supports up to 256 space-efficient targets, copies of copies, read-only or writeable, and incremental persistent pairs
VPLEX - Like Invista, no
Remote distance mirror
SVC - Choice of Metro Mirror (synchronous up to 300km) and Global Mirror (asynchronous), or use the functionality of the back-end storage arrays
Invista - No native support; use the functionality of the back-end storage arrays, or purchase a separate product, EMC RecoverPoint, to cover this lack of functionality
VPLEX - Limited synchronous remote-distance mirror (up to 100km only), no native asynchronous support; use the functionality of the back-end storage arrays
Thin provisioning
SVC - Provides thin provisioning to devices that don't offer this natively
VPLEX - Like Invista, no
Stretched-cluster access
SVC - Split-Cluster allows concurrent read/write access to data from hosts at two different locations several miles apart
Invista - I don't think so
VPLEX - PLEX-Metro, similar in concept but implemented differently
Non-disruptive tech refresh
SVC - Can upgrade or replace storage arrays, SAN switches, and even the SVC nodes' software AND hardware themselves, non-disruptively
Invista - Tech refresh for storage arrays, but not for the Invista CPCs
VPLEX - Tech refresh of back-end devices, and upgrade of VPLEX software, non-disruptively; not clear if the VPLEX engines themselves can be upgraded non-disruptively like the SVC
Heterogeneous Storage Support
SVC - Broad support of over 140 different storage models from all major vendors, including all CLARiiON, Symmetrix and VMAX from EMC, and storage from many smaller startups you may not have heard of
VPLEX - Invista-like; VPLEX claims to support a variety of arrays from a variety of vendors, but as far as I can find, only the DS8000 is supported from the list of IBM devices. Fellow blogger Barry Burke (EMC) suggests [putting SVC between VPLEX and third party storage devices] to get the heterogeneous coverage most companies demand
Back-end storage requirement
SVC - Must define quorum disks on any IBM or non-IBM back-end storage array; SVC can run entirely on non-IBM storage arrays
VPLEX - HP SVSP-like, requires at least one EMC storage array to hold metadata
Solid-state drives
SVC - The 2145-CF8 model supports up to four solid-state drives (SSD) per node that can be treated as managed disk to store end-user data
VPLEX - Invista-like; VPLEX has an internal 30GB SSD, but this is used only for the operating system and logs, not for end-user data
In-band virtualization solutions from IBM and HDS dominate the market. Being able to migrate data from old devices to new ones non-disruptively turned out to be only the [tip of the iceberg] of benefits from storage virtualization. In today's highly virtualized server environment, being able to non-disruptively migrate data comes in handy all the time. SVC is one of the best storage solutions for VMware, Hyper-V, Xen and PowerVM environments. EMC watched and learned in the shadows, taking notes of what people like about the SVC, and decided to follow IBM's time-tested leadership to provide a similar offering.
EMC re-invented the wheel, and it is round. On a scale from Invista (zero) to SVC (ten), I give EMC's new VPLEX a six.
Wrapping up my coverage of the IBM Dynamic Infrastructure Executive Summit at the Fairmont Resort in Scottsdale, Arizona, we had a final morning of main-tent sessions. Here is a quick recap of the sessions presented Thursday morning. This left the afternoon for people to catch their flights or hit the links.
Data Center Actions your CFO will Love
Steve Sams, IBM Vice President of Global Site and Facilities, presented simple actions that can yield significant operational and capital cost savings. The first focus area was to extend the life of your existing data center. Some 70 percent of data centers are 10-15 years old or older, and therefore not designed for today's computational densities. IBM did this for its Lexington data center, making changes that resulted in 8x capability without increasing footprint.
The second focus area was to rationalize the infrastructure across the organization. The process of "rationalizing" involves determining the business value of specific IT components and deciding whether that value justifies the existing cost and complexity. It allows you to prioritize which consolidations should be done first to reduce costs and optimize value. IBM's own transformation reduced 128 CIOs down to a single CIO, consolidated 155 scattered host data centers down to seven and 80 web hosting data centers down to five, and merged 31 intranets into a single global intranet.
The third focus area was to design your new infrastructure to be more responsive to change. IBM offers four solutions to help those looking to build or upgrade their data center:
Scalable Modular Data Center - save up to 20 percent compared to traditional deployments, with turn-key configurations from 500 to 2500 square feet that can be deployed in as little as 8-12 weeks onto existing floor space.
Enterprise Modular Data Center - save 40 to 50 percent with a 5000 square foot standardized design for larger data centers. This modular approach provides "pay as you grow" expansion that can be more responsive to future unforeseen needs.
Portable Modular Data Center - this is the PMDC shipping container that was sitting outside in the parking lot. This can be deployed anywhere in 12-14 weeks and is ideal for dealing with disaster recoveries or situations where traditional data center floor plans cannot be built fast enough.
High Density Zone - this can help increase capacity in an existing data center without a full site retrofit.
Here is a quick [video] that provides more insight.
Neil Jarvis, CIO of American Automobile Association (AAA) for Northern California, Nevada and Utah (NCNU), provided the customer testimonial. Last September, the [AAA NCNU selected IBM] to build them an energy-efficient green data center. Neil provided us an update now six months later, managing the needs of 4 million drivers.
Virtualization - Managing the World's Infrastructure
Helene Armitage, IBM General Manager of the newly formed IBM System Software product line, presented on virtualization and management. Virtualization is becoming much more than a way of meeting the demand for performance, capability, and flexibility in the data center. It helps create a smarter, more agile data center. Her presentation focused on four areas: consolidate resources, manage workloads, automate processes, and optimize the delivery of IT services.
Charlie Weston, Group Vice President of Information Technology at Winn-Dixie, one of the largest food retailers in the United States with over 500 stores and supermarkets, provided the customer testimonial. The grocery business is highly competitive with tight profit margins. Winn-Dixie wanted to deploy business continuity/disaster recovery (BC/DR) while managing IT equipment scattered across these 500 locations. They were able to consolidate 600 stand-alone servers into a single corporate data center. Using IBM AIX with PowerVM virtualization on BladeCenter, each JS22 blade server could manage 16 stores. These were mirrored to a nearby facility, as well as a remote disaster recovery center. They were also able to add new Linux application workloads to their existing System z9 EC mainframe. The result was to free up $5 million US dollars in capital that could be used to remodel their stores, and improve application performance 5-10 times. They were able to deploy a new customer portal on Linux for System z in days instead of months, and have reduced their disaster recovery time objective (RTO) against hurricanes from days to hours. Their next step involves looking at desktop virtualization.
Redefining x86 Computing
Roland Hagan, IBM Vice President for IBM System x server platform, presented on how IBM is redefining the x86 computing experience. More than 50 percent of all servers are x86 based. These x86 servers are easy to acquire, enjoy a large application base, and can take advantage of a readily available skilled workforce for administration. The problem is that 85 percent of x86 processing power remains idle, energy costs are 8 times what they were 12 years ago, and management costs are now 70 percent of the IT budget.
IBM has the number one market share for scalable x86 servers. Roland covered the newly announced eX5 architecture that has been deployed in both rack-optimized models as well as IBM BladeCenter blade servers. These can offer twice the memory capacity of competitive offerings, which is important for today's server virtualization, database and analytics workloads. This includes 40- and 80-DIMM blade models, and 64- to 96-DIMM rack-optimized models. IBM also announced eXFlash, internal solid-state drives accessible at bus speeds.
The results can be significant. For example, just two IBM System x3850 4-socket, 8-core systems can replace 50 (yes, FIFTY) HP DL585 4-socket, 4-core Opteron rack servers, reducing costs 80 percent with a 3-month ROI payback period. Compared to IBM's previous X4 architecture, the eX5 provides 3.5 times better SAP performance, 3.8 times faster server virtualization performance, and 2.8 times faster database performance.
The CIO of Acxiom provided the customer testimonial. They achieved a 35-to-1 consolidation ratio by switching over to IBM x86 servers, resulting in huge savings.
Top ROI Projects to Get Started
Mark Shearer, IBM Vice President of Growth Solutions, and formerly my fourth-line manager as the Vice President of Marketing and Communications, presented a list of projects to help clients get started. There are over 500 client references that have successfully implemented Smarter Planet projects. Mark's list was grouped into five categories:
Enabling Massive Scale
Increase Business Agility
Manage Risk, Compliance and Security
Organize Vast Amounts of Information
Turn Information into Insight
The attendees were all offered a free "Infrastructure Study" to evaluate their current data center environments. A team of IBM experts will come on-site, gather data, interview key personnel and make recommendations. Alternatively, these can be done at one of IBM's many briefing centers, such as the IBM Executive Briefing Center in Tucson, Arizona that I work at.
This wraps up the week for me. I have to pack the XIV back into the crate, and drive back to Tucson. IBM plans to host another Executive Summit in the September/October time frame on the East coast.
Continuing my coverage of the IBM Dynamic Infrastructure Executive Summit at the Fairmont Resort in Scottsdale, Arizona, we had a day full of main-tent sessions. Here is a quick recap of the sessions presented in the afternoon.
Taming the Information Explosion
Doug Balog, IBM Vice President and Disk Storage Business Line Executive, presented on the information explosion. Storage Admins are focused on managing storage growth and the related costs and complexity, proper forecasting and capacity planning, and backup administration. IBM's strategy is to help clients in the following areas:
Storage Efficiency - getting the most use out of the resources you invest
Service Delivery - ensuring that information gets to the right people at the right time
Data Protection - protecting data against unethical tampering, unauthorized access, and unexpected loss and corruption
Cory Vokey, Senior Manager of IT Systems Operations at Research in Motion, Ltd., the people who bring you BlackBerry phone service, provided a client testimonial for the XIV storage system. Before the XIV, RIM suffered high storage costs and per-volume software licensing. Over the past 15 months, RIM deployed XIV as a corporate standard. With the XIV, they have had 100 percent up-time, and enjoyed 50 percent cost savings compared to their previous storage systems. They have increased capacity 300 percent, without any increase to their storage admin staff. XIV has greatly improved their procurement process, as they no longer need to "true up" their software licenses to the volume of data managed, a sore point with their previous storage vendor.
Mainframe Innovations and Integration
Tom Rosamillia, IBM General Manager of the System z mainframe platform, presented on mainframe servers. After 40 years, IBM's mainframe remains the gold standard, able to handle hundreds of workloads on a single server, facilitating immediate growth with scalability. The key values of the System z mainframe are:
Industry leading virtualization, management and qualities of service
A comprehensive portfolio for business intelligence and data warehousing
The premier platform for modernizing the enterprise
A large and growing portfolio of leading applications and ISV support
Steve Phillips, CIO of Avnet, presented the client testimonial for their use of a System z10 mainframe. Last year, Avnet was ranked Fortune's Number One "Most Admired" company for technology distribution. Avnet distributes technology from 300 suppliers to over 100,000 resellers, ISVs and end users. They have modernized their system running SAP on System z with DB2 as the database management system, using HiperSockets virtual LAN inside the server to communicate between logical partitions (LPARs). The folks at Avnet especially like the ability for on-the-fly re-assignment of capacity. This is used for end-of-quarter peak processing, and to adjust between test and development workloads. They also like the various special-purpose engines available:
z Integrated Information Processor (zIIP) for DB2 workloads
z Application Assist Processor (zAAP) for Java processing under WebSphere
Integrated Facility for Linux (IFL) for Linux applications
Cloud Computing: Real Capabilities, Real Stories
Mike Hill, IBM Vice President of Enterprise Initiatives, presented on IBM's leadership in cloud computing. He covered three trends that are driving IT today. First, there is a consumerization and industrialization of IT interfaces. Second, a convergence of the infrastructure that is driving a new focus on standards. Third, delivering IT as a service has brought about new delivery choices. The result is cloud computing, with on-demand self-service, ubiquitous network access, location-independent resource pooling, rapid elasticity, and flexible pricing models. Government agencies and businesses in Retail, Manufacturing and Utilities are leading the charge to cloud computing.
Mike covered IBM's five cloud computing deployment models, and shared his views on which workloads might be ready for cloud, and which may not be there yet. Organizations are certainly seeing significant results: reduced labor costs, improved capital utilization, reduced provisioning cycle times, improved quality through reduced software defects, and reduced end user IT support costs.
Mitch Daniels, Director of Technology at ManTech International Corporation, presented the customer testimonial for an IBM private cloud for development and test. ManTech chose a private cloud as they work with US Federal agencies like the Department of Defense, Homeland Security and the Intelligence community. The private cloud was built from:
IBM Cloudburst virtualized server environment
Tivoli Unified Process to document process and workflow
Tivoli Service Automation Manager to request, deliver and manage IT services
Tivoli Self-Service Portal and Service Catalog to allow developers and testers to request resources as needed
The result: ManTech saved 50 percent in labor costs, and can now provision development and test resources in minutes instead of weeks.
The IBM Transformation Story
Leslie Gordon, IBM Vice President of Application and Infrastructure Services Management, presented IBM's own transformation story, becoming the premier "Globally Integrated Enterprise". Based on IBM's 2009 CIO study, CIOs must balance three roles with seemingly contradictory demands:
Make innovations real: be both an insightful visionary and an able pragmatist
Raise the return on investment (ROI) of IT: find savvy ways to create value but also be ruthless at cutting costs
Expand the business impact of IT: be a collaborative business leader with the other C-level executives, but also an inspiring manager for the IT staff
In this case, IBM drinks its own champagne, using its own solutions to help run its internal operations. In 1997, IBM used over 15,000 applications, but this has been simplified down to 4,500 applications today. Thousands of servers were consolidated to Linux on System z mainframes. The application workloads were categorized as Blue, Bronze, Silver, and Gold to help prioritize the consolidation. IBM's key lessons from all this were:
Gather data at the business unit level, but build the business case from an enterprise view.
Start small and monitor progress continually; run operations concurrently with transformational projects
Address cultural and organizational changes by deploying transformation in waves
I found the client testimonials insightful. It is always good to hear that IBM's solutions work "as advertised" right out of the box.
Continuing my coverage of the IBM Dynamic Infrastructure Executive Summit at the Fairmont Resort in Scottsdale, Arizona, we had a day full of main-tent sessions. Here is a quick recap of the sessions presented in the morning.
Leadership and Innovation on a Smarter Planet
Todd Kirtley, IBM General Manager of the western United States, kicked off the day. He explained that we are now entering the Decade of Smart: smarter healthcare, smarter energy, smarter traffic systems, and smarter cities, to name a few. One of those smarter cities is Dubuque, Iowa, nicknamed the Masterpiece of the Mississippi river. Mayor Roy Boul of Dubuque spoke next, giving his testimonial on working with IBM. I have never been to Dubuque, but it looks and sounds like a fun place to visit. Here is the [press release] and a two-minute [video].
Smarter Systems for a Smarter Planet
Tom Rosamillia, IBM General Manager of the System z mainframe platform, presented on smarter systems. IBM is intentionally designing integrated systems to redefine performance and deliver the highest possible value for the least amount of resource. The five key focus areas were:
Enabling massive scale
Organizing vast amounts of data
Turning information into insight
Increasing business agility
Managing risk, security and compliance
The Future of Systems
Ambuj Goyal, IBM General Manager of Development and Manufacturing, presented the future of systems. For example, reading 10 million electricity meters monthly is only 120 million transactions per year, but reading them daily is 3.65 billion, and reading them every 15 minutes will result in over 350 billion transactions per year. What would it take to handle this? Beyond just faster speeds and feeds, beyond consolidation through virtualization and multi-core systems, beyond pre-configured fit-for-purpose appliances, there will be a new level for integrated systems. Imagine a highly dense integration with over 3000 processors per frame, over 400 Petabytes (PB) of storage, and 1.3 PB/sec bandwidth. Integrating software, servers and storage will make this big jump in value possible.
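The arithmetic behind those transaction figures is easy to verify. Here is a quick sketch (assuming a 365-day year and 96 fifteen-minute intervals per day):

```python
# Back-of-the-envelope smart-meter transaction volumes from the talk.
# Assumption: 10 million meters, 365-day year, 96 fifteen-minute
# intervals per day (24 hours x 4 readings per hour).
METERS = 10_000_000

monthly_reads = METERS * 12          # one reading per meter per month
daily_reads = METERS * 365           # one reading per meter per day
quarter_hour_reads = METERS * 365 * 96  # one reading every 15 minutes

print(f"Monthly:       {monthly_reads:,} transactions/year")
print(f"Daily:         {daily_reads:,} transactions/year")
print(f"Every 15 min:  {quarter_hour_reads:,} transactions/year")
```

Running this confirms the jump from 120 million to 3.65 billion to just over 350 billion transactions per year, a roughly 2,900-fold increase from the monthly baseline.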
POWERing your Planet
Ross Mauri, IBM General Manager of Power Systems, presented the latest POWER7 processor server product line. The IBM POWER-based servers can run any mix of AIX, Linux and IBM i (formerly i5/OS) operating system images. Compared to the previous POWER6 generation, POWER7 servers are four times more energy efficient and deliver twice the performance, at about the same price. For example, an 8-socket p780 with 64 cores (eight per socket) and 256 threads (four threads per core) set a record-breaking result of 37,000 SAP users in a standard SD 2-tier benchmark, beating out 32-socket and 64-socket M9000 SPARC systems from Oracle/Sun and 8-socket Nehalem-EX Fujitsu 1800E systems. See the [SAP benchmark results] for full details. With more TPC-C performance per core, the POWER7 is 4.6 times faster than HP Itanium and 7.5 times faster than the Oracle Sun T5440.
This performance can be combined with incredible scalability. IBM's PowerVM outperforms VMware by 65 percent and provides features like "Live Partition Mobility", which is similar to VMware's VMotion capability. IBM's PureScale allows DB2 to scale out across 128 POWER servers, beating out Oracle RAC clusters.
The final speaker in the morning was Greg Lotko, IBM Vice President of Information Management Warehouse solutions. Analytics are required to gain greater insight from information, and this can result in better business outcomes. The [IBM Global CFO Study 2010] shows that companies that invest in business insight consistently outperform all other enterprises, with 33 percent more revenue growth, 32 percent more return on invested capital, and 12 times more earnings (EBITDA). Business analytics is more than just traditional business intelligence (BI). It tries to answer three critical questions for decision makers:
What is happening?
Why is it happening?
What is likely to happen in the future?
The IBM Smart Analytics System is a pre-configured integrated system appliance that combines text analytics, data mining and OLAP cubing software on a powerful data warehouse platform. It comes in three flavors: Model 5600 is based on System x servers, Model 7600 based on POWER7 servers, and Model 9600 on System z mainframe servers.
IBM has over 6000 business analytics and optimization consultants to help clients with their deployments.
While this might appear as "Death by PowerPoint", I think the panel of presenters did a good job providing real examples to emphasize their key points.
While clients and IBM executives were in meetings today, in and around the Fairmont resort here in Scottsdale, Arizona, I helped to set up the "Solutions Showcase". There were three stations:
David Ayd and I manned the first station, covering storage and server systems. From left to right: a fully-populated 15-module XIV storage system; my laptop running the XIV GUI; a two-socket 16-core POWER p770 server, a solid-state drive, a PS702 POWER blade, my book Inside System Storage: Volume I, an HX5 x86 blade, and a four-socket 16-core x3850 M3 server with MAX5 memory extension; David's laptop with various POWER and System x presentations; and our Kaon V-Osk interactive plasma screen display.
Eric Kern manned the Smarter Clouds station. He had live guest images on the IBM Developer and Test cloud, which won the "Best of Interop" award up in Las Vegas this week. I covered IBM's cloud offering in my post [Three Things To Do on the IBM Cloud].
Smarter Data Centers
Ken Schneebeli manned the "Smarter Data Centers" station. He directed people out to the parking lot to see Brian Canney and the Portable Modular Data Center (PMDC). The one here is 8.5 feet by 8.5 feet by 40 feet in size and can be configured and deployed in 12-14 weeks to any location. It can hold any mix of IBM and non-IBM equipment, provided it fits within the physical dimensions. Want a DS8700 disk system? The PMDC can hold up to 3-frame configurations of the DS8700. Want an eclectic mix of Sun, HP and Dell servers with HDS and EMC disk in your PMDC? IBM can do that too.
After we finished setup, we joined the clients at the "Welcome Reception" on the Lagoon Lawn. The weather was quite pleasant.
Special thanks to Jasdeep Purdhani, Lisa Gates, and Kelly Olson for their help organizing this event.