"The postings on this site solely reflect the personal views of each author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management."
(c) Copyright Tony Pearson and IBM Corporation.
All postings are written by Tony Pearson unless noted otherwise.
Tony Pearson is employed by IBM. Mentions of IBM Products, solutions or services might be deemed as "paid
endorsements" or "celebrity endorsements" by the US Federal Trade Commission.
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is a an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Continuing my saga for my [New Laptop], I have gotten all my programs operational, transferred and organized all my data, and am now ready for testing. You can read my previous posts in this series: [Day 1], [Day 2], [Day 3], [Day 4].
At this point, you might be thinking, "Testing? Just use your laptop already, deal with problems as you find them!" In my case, I need to sign off that the new laptop meets my needs, and then send back my previous laptop, wiped clean of all passwords and data. I have until the end of June to do this.
The value of testing is to avoid problems later, perhaps at an inconvenient time such as a business trip or client briefing. It is better to work out any issues while I am still in the office, connected to the internal IBM intranet on a high-speed wired connection. Also, I plan to do a Physical-to-Virtual (P-to-V) conversion of my Windows XP C: drive to run as a virtual guest OS on Linux, so I want to make sure the image is in working order before the conversion. That said, here is what my testing encountered.
Of the 134 applications I had identified as being installed on my old laptop, I determined that I only needed about 70 of them. The others I did not bother to install on the new machine.
I had not thought about the "addons" and "plugins" that attach themselves inside browsers or other applications. I made sure that Flash, Shockwave and Java worked correctly in all three browsers: IE6, Firefox and Opera.
One of my "plugins" is an application called [iSpring Pro, which plugs into Microsoft PowerPoint. I thought I had Microsoft Office installed, but found out the standard IBM build had only the viewers. I installed Microsoft Office 2003 Standard Edition with PowerPoint, Excel and Word. I then realized that I did not have the original V4.3 installation file for iSpring Pro, so I downloaded the latest v5 from their website. However, my license key is only for version 4, so a quick email got this resolved, and the nice folks at iSpring Solutions sent me the v4.3 installation file.
Shameless Plug: We use iSpring Pro to record our voices with PowerPoint slides to generate web videos for the [IBM Virtual Briefing Center] which we use to complement face-to-face briefings. This allows attendees to review introductory materials to prepare for their visit to Tucson, or to stay up-to-date on products and features in between annual visits. If you have not checked out the IBM Virtual Briefing Center, now is a good time to see what videos and other resources we have out there. You can even request to schedule a briefing in Tucson!
Testing out iSpring Pro, I realized that there are no jacks for my headset. On my old ThinkPad T60, I had two jacks, one green for headphone and one pink for microphone. My headset has two cables, one for each, which I then use for the recordings. I also use this for online webinars and training sessions. Apparently, the ThinkPad T410 went with a single 3.5mm "combo" audio jack that handles both roles. Fortunately, there is a [Headset Buddy] adapter that merges the two cables from my headset into the combo jack on my new laptop. I ordered one, which will arrive sometime next week.
My new laptop doesn't fit my old docking station either. I had set the docking station aside while I had the two laptops latched together for the file transfers, but now that I am done with the old laptop, I discovered that the new T410 doesn't fit it. I ordered a new docking station.
Using find, grep, awk, sort and uniq, I was able to generate a list of all the file extensions in my Documents folder. I found old Lotus 123, Freelance Graphics, and Word Pro files. I thought Lotus Symphony would handle these, but it does not. I installed an old version of Lotus SmartSuite that includes these programs so that I can process these files.
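The extension survey itself was a one-liner. Here is a rough sketch of the idea, assuming a Linux or Cygwin shell with GNU find, and that the Documents folder is reachable at /d/Documents (your path will differ):

    # list every extension under Documents, with a count of each, most common first
    find /d/Documents -type f -printf '%f\n' \
      | grep '\.' \
      | awk -F. '{print tolower($NF)}' \
      | sort | uniq -c | sort -rn

Each filename is reduced to its last dot-separated piece, lower-cased, and tallied, so the output is a count of files per extension.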
I also found pptx, docx and xlsx files in the extensions list, which represent the new Microsoft Office 2007 formats. I installed the "Compatibility Pack" that allows Office 2003 to read these files.
Lastly, I installed a few programs that support a wide variety of file formats. VideoLAN's [VLC] plays a variety of audio and video files. [7-Zip] packs and unpacks a variety of archive files. (Note: Another program, BitZipper, also supports a variety of archive formats, but the install will corrupt your Firefox and IE browsers with new tool bars, change your search engine default, and install a lot of other unwanted software. Cleaning up the mess can be time-consuming. You have been warned!) I also installed [MadEdit], a binary/hex/text editor that will open any file to see what kind of format it has inside. From this, I was able to determine that some of my extension-less files were GIF, RTF or PDF format, and rename them accordingly.
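If you have more than a handful of extension-less files, the same idea can be scripted by peeking at each file's first few bytes. A rough sketch, assuming a Linux or Cygwin shell and the standard file utility (MadEdit did this job by hand in my case):

    # rename extension-less files whose first bytes give away the format
    find . -type f ! -name '*.*' | while read -r f; do
      case "$(head -c 4 "$f")" in
        GIF8)   mv "$f" "$f.gif" ;;
        %PDF)   mv "$f" "$f.pdf" ;;
        '{\rt') mv "$f" "$f.rtf" ;;
        *)      file "$f" ;;   # let the file utility report anything else
      esac
    done

GIF, PDF and RTF files announce themselves in their opening bytes (GIF8, %PDF and {\rtf respectively), which is exactly what you see when opening them in a hex editor.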
With the testing done, I am ready to go wipe my old system of all passwords and data!
Continuing my saga for my [New Laptop], I have gotten all my programs operational, and now it is a good time to re-evaluate how I organize my data. You can read my previous posts on this series: [Day 1], [Day 2], [Day 3].
I started my career at IBM developing mainframe software. The naming convention was simple: you had 44-character dataset names (DSNs), which could be divided into qualifiers separated by periods. Each qualifier could be up to 8 characters long. The first qualifier was called the "high level qualifier" (HLQ) and the last one was the "low level qualifier" (LLQ). Standard naming conventions helped with ownership and security (RACF), catalog management, policy-based management (DFSMS), and data format identification. For example:
PROD.PAYROLL.JCL
TEST.PAYROLL.JCL
U.PEARSON.TEST.JCL
In the first case, we see that the HLQ is "PROD" for production, the application is PAYROLL, and this file holds job control language (JCL). The LLQ often identified the file type. The second could be a copy used for testing a newer version of this application. The third represents user data; in this case my userid PEARSON holds my own TEST JCL. I have seen successful naming conventions with 3, 4, 5 and even 6 qualifiers. The full dataset name remains the same, even if the file is moved from one disk to another, or migrated to tape.
(We had to help one client who had all their files with single-qualifier names, no more than 8 characters long, all in the Master Catalog (root directory). They wanted to implement RACF and DFSMS, and needed help converting all of their file names and related JCL to a 4-qualifier naming convention. It took seven months to make this transformation, but the client was quite pleased with the end result.)
While the mainframe has a restrictive approach to naming files, the operating systems on personal computers provide practically unlimited choices. File systems like NTFS or EXT3 support filenames as long as 255 characters, and pathnames up to roughly 32,000 characters. The problem is that when you move a file from one disk to another, or even from one directory structure to another, the pathname changes. If you rely on the pathname to provide critical information about the meaning or purpose of a file, that information can get lost when moving the files around.
I found several websites that offered organization advice. On The Happiness Project blog, Gretchen Rubin [busts 11 myths] about organization. On Zenhabits blog, Leo Babauta offers [18 De-cluttering tips].
Peter Walsh's [Tip No. 185] suggests using nouns to describe each folder. Granted these are about physical objects in your home or office, but some of the concepts can apply to digital objects on your disk drive.
"Use the computer’s sorting function. Put “AAA” (or a space) in front of the names of the most-used folders and “ZZZ” (or a bullet) in front of the least-used ones, so the former float to the top of an alphabetical list and the latter go to the bottom."
Personally, I hate spaces anywhere in directory and file names, and the thought of putting a space at the front of one to make it float to the top is even worse. Rather than resorting to naming folders with AAA or ZZZ, why not just limit the total number of files or directories so they are all visible on the screen? I often sort by date to access my most frequently-accessed or most-recently-updated files.
Of all the suggestions I found, Peter Walsh's "Use Nouns" seemed to be the most useful. Wikipedia has a fascinating article on [Biological Classification]. Certainly, if all living things can be put into classifications with only seven levels, we should not need more than seven levels of file system directory structure either! So, this is how I decided to organize my files on my new ThinkPad T410:
C: Drive
Windows XP operating system programs and applications. I have structured this so that if I had to replace my hard disk entirely while traveling, I could get a new drive and restore just the operating system on this drive, and a few critical data files needed for the trip. I could then do a full recovery when I was back in the office. If I was hit with a virus that prevented Windows from booting up, I could re-install the Windows (or Linux) operating system without affecting any of my data.
D: Drive
This will be for my most active data, files and databases. I have the Windows "My Documents" folder pointing to the D:\Documents directory. Under Archives, I will keep files for events that have completed, projects that have finished, and presentations I used that year. If I ever run out of space on my disk drive, I would delete or move off these archives first. I have a single folder for all Downloads, which I can then move to a more appropriate folder after I decide where to put them. My Office folder holds administrative items, like org charts, procedures, and so on.
As a consultant, I have many files that relate to Events; these could be Briefings, Conferences, Meetings or Workshops. These are usually one to five days in duration, so here I can hold background materials for the clients involved, agendas, my notes on what transpired, and so on. I keep my Presentations separately, organized by topic. I am also involved with Projects that might span several months, as well as ongoing tasks and assignments. I also keep my Resources separately; these could be templates, training materials, marketing research, whitepapers, and analyst reports.
A few folders I keep outside of this structure on the D: drive. [Evernote] is an application that provides "folksonomy" tagging. This is great in that I can access it from my phone, my laptop, or my desktop at home. Install-Files holds all those ZIP and EXE files needed to install applications after a fresh Windows install. If I ever had to wipe my C: drive clean and re-install Windows, I would then have this folder on the D: drive to rebuild my system. Finally, I keep my Lotus Notes database directory on my D: drive. Since these are database (NSF) files accessed directly by Lotus Notes, I saw no reason to put them under the D:\Documents directory structure.
Documents
  Archives
    2006
    2007
    2008
    2009
  Downloads
  Events
    Briefings
    Conferences
    Meetings
    Workshops
  Office
  Presentations
  Projects
  Resources
Evernote
Install-Files
Notes
  Data
E: Drive
This will be for my multimedia files. These don't change often, are mostly read-only, and could be restored quickly as needed.
Audio
Images
Video
I'll give this new re-organization a try. Since I have to take a fresh backup to Tivoli Storage Manager anyway, now is the best time to re-organize the directory structure and update my dsm.opt options file.
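For readers unfamiliar with the dsm.opt file, the relevant change is simply listing which drives the Backup-Archive client should protect. A minimal sketch (the node and server names here are made up, and a real option file will contain more than this):

    NODENAME         TPEARSON-T410
    TCPSERVERADDRESS tsm.example.com
    PASSWORDACCESS   GENERATE
    * back up all three data partitions
    DOMAIN           C: D: E:
    * illustrative exclude: skip temporary files everywhere
    EXCLUDE          ?:\...\*.tmp

With the C:, D: and E: drives all named in the DOMAIN statement, the next incremental backup picks up the new directory structure automatically.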
Continuing my saga for my [New Laptop], let's recap my progress so far:
[Day 1 afternoon], I received the laptop from shipping on Wednesday, took a backup of the factory install image to an external USB drive, and re-partitioned to run both Windows and Linux operating systems.
[Day 2], I spent Thursday using the "Migration Assistant" tool, and completed the operation sending the rest of my data over to the /dev/sda6 NTFS partition.
So now, Friday (day 3), I get to install any applications that were not part of the pre-installed image. Thankfully, I had planned ahead and figured out the 134 different applications that I had on my old system. I printed out a copy of my spreadsheet, and used it as a checklist to systematically go through the list. For each one, I determined one of the following:
BUILD
If I could find the application already installed, either the same version or newer, or a functional equivalent, then I would mark it down as being part of the factory build. Of those programs pre-installed, I am quite pleased that the settings were carried over during yesterday's file transfer. For example, my bookmarks and bookmarklets in Firefox are all intact. However, it did not carry forward all of my Firefox addons, so these I had to install separately.
ISSI Download
IBM Standard Software Installer is our internal website for IBM and select third-party software for the different operating systems supported. Many of the ISSI programs were already included in the factory build, such as Lotus Notes, Lotus Symphony and the Firefox browser, so I had very few left to install manually from ISSI.
INSTALL from D:\Install-Files
As I mentioned in my previous post, I saved the ZIP or EXE installation files, as well as any license keys, URLs and other useful information needed to re-install each application.
COPY over from D:\Prog-Files
Many programs don't have installation files, because they don't need to update the registry or create Desktop icons or Taskbar management buttons. For these I can just copy the directory over to C:\Program Files.
WEB Download
In some cases, the Install-File was fairly downlevel, so I downloaded a fresh copy from the Web. In other cases, I forgot to save the ZIP or EXE, so this was the backup plan.
DEFER for later install
I worked down the list alphabetically, but some programs needed other programs to be installed first, or I needed to find the license registry key, or whatever. This allowed me to focus on the most important programs first. Others I might defer indefinitely until I need them, such as programs to access Second Life, or to build software for Lego Mindstorms robots.
SKIP those applications no longer required
Some programs just don't need to be on my new system. This includes software to manage printers I no longer have, drivers to attach to gadgets and devices I no longer own, and software that might have been specific to the old ThinkPad T60. This was also a good time to "de-duplicate" similar applications. For example, I have decided to limit myself to just three browsers: Firefox, Opera, and Internet Explorer IE6.
The planning paid off. I was able to confirm or install all of my applications today and have a fully working Windows XP system partition. I celebrated by taking another backup.
Continuing my saga regarding my [New Laptop], I managed on
[Wednesday afternoon] to prepare my machine with separate partitions for programs and data. I was hoping to wrap things up on day 2 (Thursday), but nothing went smoothly.
Just before leaving late Wednesday evening, I thought I would try running the "Migration Assistant" overnight by connecting the two laptops with a REGULAR Ethernet cable. The instructions indicated that in "most" cases, two laptops can be connected using a regular "patch cord" cable. These are the kind everyone has: the cable that connects their laptop to the wall socket for a wired connection to the corporate intranet, or their personal computers to their LAN hubs at home. Unfortunately, the connection was not recognized, so I suspected that this was one of the exceptions not covered.
(There are two types of Ethernet cables. The ["patch cord"] connects computers to switches. The ["crossover" cable] connects like devices, such as computers to computers, or switches to switches. Four years ago, I used a crossover cable to transfer my files over, and assumed that I would need one this time as well.)
Thursday morning, I borrowed a crossover cable from a coworker. It was bright pink and only about 18 inches long, just enough to have the two laptops side by side. If the pink crossover cable were any shorter, the two laptops would be back to back. I kept the old workstation in the docking station, which allowed it to remain connected to my big flat screen, mouse and keyboard, and to use the docking station's RJ45 to connect to the corporate intranet. That left the RJ45 on the left side of the old system to connect via crossover cable to the new system. But that didn't work, of course, because the docking station overrides the side port, so we had to completely "undock" and go native laptop to laptop.
Restarting the Migration Assistant, I unplugged the corporate intranet cable from the old laptop and put one end of the pink cable into the Ethernet port of each laptop. On the new system, Migration Assistant asked me to set up a password and provided an IP address like 169.254.aa.bb with a netmask of 255.255.0.0, and I was supposed to type this IP address on the old system for it to reach out and connect. It still didn't connect.
We tried a different pink crossover cable, no luck. My colleague Harley brought over his favorite "red" crossover cable, which he has used successfully many times, but it still didn't work. The helpful diagnostic advice was to disable all firewall programs on one or both systems.
I disabled Symantec Client Firewall on both systems. Still not working. I even tried booting both systems up in "safe" mode, using MSCONFIG to set the reboot mode as "safe with networking" as the key option. Still not working. At this point, I was afraid that I would have to use the alternate approach, which was to connect both systems to our corporate 100 Mbps system, which would be painfully slow. I only have one active LAN cable in my office, so the second computer would have to sit outside in the lobby.
Looking at the IP address on the old system, it was 9.11.xx.yy, assigned by our corporate DHCP, so not even in the same subnet as the new computer. So, I created profiles in ThinkVantage Access Connections on both systems, with 192.168.0.yy netmask 255.255.255.0 on the old system, and 192.168.0.bb on the new system. This worked, and a connection between the two systems was finally recognized.
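Access Connections is essentially a front end for the same settings you could apply from a command prompt. A hypothetical equivalent, using Windows XP netsh syntax and assuming the wired adapter is named "Local Area Connection" (the host addresses here are illustrative):

    REM on the old laptop
    netsh interface ip set address "Local Area Connection" static 192.168.0.1 255.255.255.0
    REM on the new laptop
    netsh interface ip set address "Local Area Connection" static 192.168.0.2 255.255.255.0

Putting both machines on the same private subnet, with no gateway, was all the Migration Assistant needed.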
Since I had 23GB of system files and programs on my old C: drive, and 80GB of data on my old D: drive, I didn't think I would run out of space on my new 40GB C: drive and 245GB D: drive, but the Migration Assistant disagreed! It wanted to put my D:\Documents onto my new C: drive and refused to continue. I had to remove D:\Documents from the list so that it could continue, processing only the programs and system settings on the C: drive. It took 61 minutes to scan 23GB on my C: drive and identify 12,900 files to move, representing 794MB of data. Seriously? Less than 1GB of data moved!
It then scanned all of the programs I had on my old system, and decided that there were none that needed to be moved or installed on the new system. The closing instructions explained there might be a few programs that need to be manually installed, and some data that needed to be transferred manually.
Given the performance of Migration Assistant, I decided to just set up a direct network mapping of the new D: drive as Y: on my old system, and drag and drop my entire folder over. Even at 1000 Mbps, this still took the rest of the day. I also backed up C:\Program Files using [System Rescue CD] to my external USB drive, and restored it as D:\prog-files, just in case. In retrospect, I realize it would have been faster just to have dumped my D: drive to my USB drive, and restored it on the new system.
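For the record, the drive mapping itself comes down to two commands on the old system. A hypothetical sketch, assuming the new laptop is named NEW-T410 and its D: drive is shared as "D":

    REM map the new laptop's D: drive as Y: on the old system
    net use Y: \\NEW-T410\D
    REM copy Documents, including empty subdirectories, hidden files and attributes
    xcopy D:\Documents Y:\Documents /E /H /K /I /Y

Dragging and dropping in Explorer accomplishes the same thing, just with a progress bar.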
I'll leave the process of re-installing missing programs for Friday.
My how time flies. This week marks my 24th anniversary working here at IBM. This would have escaped me completely, had I not gotten an email reminding me that it was time to get a new laptop. IBM manages these on a four-year depreciation schedule, and I received my current laptop back in June 2006, on my 20th anniversary.
When I first started at IBM, I was a developer on DFHSM for the MVS operating system, now called DFSMShsm on the z/OS operating system. We all had 3270 [dumb terminals], large cathode ray tubes affectionately known as "green screens", and all of our files were stored centrally on the mainframe. When Personal Computers (PC) were first deployed, I was assigned the job of deciding who got them, and when. We were getting 120 machines, in five batches of 24 systems each, spaced out over the next two years, and I had to recommend who should get a PC in the first batch, the second batch, and so on. I was concerned that everyone would want to be part of the first batch, so I put out a survey, asking questions on how familiar they were with personal computers, whether they owned one at home, whether they were familiar with DOS or OS/2, and so on.
It was actually my last question that helped make the decision process easy:
How soon do you want a Personal Computer to replace your existing 3270 terminal?
1-60 days
61-120 days
121-180 days
As late as possible
Never
I had five options, and roughly 24 respondents checked each one, making my job extremely easy. Ironically, once the early adopters of the first batch discovered that these PCs could be used for more than just 3270 terminal emulation, many of the others wanted theirs sooner.
Back then, IBM employees resented any form of change. Many took their new PC, configured it to be a full-screen 3270 emulation screen, and continued to work much as they had before. My mentor, Jerry Pence, would print out his emails and file the printouts into hanging file folders in his desk credenza. He did not trust saving them on the mainframe, so he was certainly not going to trust storing them on his new PC. One employee used his PC as a door stop, claiming he would continue to use his 3270 terminal until they took it away from him.
Moving forward to 2006, I was one of the first in my building to get a ThinkPad T60. It was so new that many of the accessories were not yet available. It had Windows XP on a single-core 32-bit processor, 1GB of RAM, and a huge 80GB disk drive. The built-in 1GbE Ethernet went unused for a while, as we still had a 16 Mbps Token Ring network.
I was the marketing strategist for IBM System Storage back then, and needed all this excess power and capacity to handle all my graphic-intense applications, like GIMP and Second Life.
Over the past four years, I made a few slight improvements. I partitioned the hard drive to dual-boot between Windows and Linux, and created a separate partition for my data that could be accessed from either OS. I increased the memory to 2GB and replaced the disk with a drive holding 120GB capacity.
A few years ago, IBM surprised us by deciding to support Windows, Linux and Mac OS computers. But actually it made a lot of sense. IBM's world-renowned global services organization manages the help-desk support of over 500 other companies in addition to the 400,000 employees within IBM, so it already had to know how to handle these other operating systems. Now we can choose whichever we feel makes us more productive. Happy employees are more productive, of course. IBM's vision is that almost everything you need to do would be supported on all three OS platforms:
Lotus Notes
Access your email, calendar, to-do list and corporate databases via Lotus Notes on either Windows, Linux or Mac OS. Corporate databases store our confidential data centrally, so we don't have to have them on our local systems. We can make local replicas of specific databases for offline access, and these are encrypted on our local hard drive for added protection. Emails can link directly to specific entries in a database, so we don't have huge attachments slowing down email traffic. IBM also offers LotusLive, a public cloud offering for companies to get out of managing their own email Lotus Domino repositories.
Lotus Symphony
Create presentations, documents and spreadsheets on either Windows, Linux or Mac OS. Lotus Symphony is based on open source OpenOffice and is compatible with Microsoft Office. This allows us to open and update directly in Microsoft's PPT, DOC and XLS formats.
Firefox Browser
Many of the corporate applications have now been converted to be browser-accessible. The Firefox browser is available on Windows, Linux and Mac OS. This is a huge step forward, in my opinion, as we often had to download applications just to do the simplest things like submit our time-sheet or travel expense reimbursement. I manage my blog, Facebook and Twitter all from online web-based applications.
The irony here is that the world is switching back to thin clients, with data stored centrally. The popularity of Web 2.0 helped this along. People are using Google Docs or Microsoft OfficeOnline to eliminate having to store anything locally on their machines. This vision positions IBM employees well for emerging cloud-based offerings.
Sadly, we are not quite completely off Windows. Some of our Lotus Notes databases use Windows-only APIs to access our Siebel databases. I have encountered PowerPoint presentations and Excel spreadsheets that just don't render correctly in Lotus Symphony. And finally, some of our web-based applications work only in Internet Explorer! We use the outdated IE6 corporate-wide, which is enough reason to switch over to Firefox, Chrome or Opera browsers. I have to put special tags on my blog posts to suppress YouTube and other embedded objects that aren't supported on IE6.
So, this leaves me with two options: get a Mac and run Windows on the side as a guest operating system, or get a ThinkPad to run Windows or Windows/Linux. I've opted for the latter, and put in my order for a ThinkPad T410 with a dual-core 64-bit Intel i5 processor, VT-capable to provide hardware assistance for virtualization, 4GB of RAM, and a huge 320GB drive. It will come installed with Windows XP as one big C: drive, so it will be up to me to re-partition it into a Windows/Linux dual-boot and/or Windows and Linux running as guest OS machines.
(Full disclosure to make the FTC happy: This is not an endorsement for Microsoft or against Apple products. I have an Apple Mac Mini at home, as well as Windows and Linux machines. IBM and Apple have a business relationship, and IBM manufactures technology inside some of Apple's products. I own shares of Apple stock, I have friends and family that work for Microsoft that occasionally send me Microsoft-logo items, and I work for IBM.)
I have until the end of June to receive my new laptop, re-partition, re-install all my programs, reconfigure all my settings, and transfer over my data so that I can send my old ThinkPad T60 back. IBM will probably refurbish it and send it off to a deserving child in Africa.
If you have an old PC or laptop, please consider donating it to a child, school or charity in your area. To help out a deserving child in Africa or elsewhere, consider contributing to the [One Laptop Per Child] organization.
The BP oil spill in the Gulf of Mexico is a good reminder that all organizations should revisit the practice and execution of their contingency plans. In this most recent case, the [Deepwater Horizon] oil platform had an explosion on April 20, resulting in oil spewing out at an estimated 19,000 barrels per day. While some bloggers have argued that BP failed to plan, and therefore planned to fail, I find that hard to believe. How can a billion-dollar multinational company not have contingency plans?
The truth is, BP did have plans. Karen Dalton Beninato of New Orleans' City Voices discusses BP's Gulf of Mexico Regional Oil Spill Response Plan (OSRP) in her article [BP's Spill Plan: What they knew and when they knew it]. A
[redacted 90-page version of the OSRP] is available on their website.
The plan indicates that it may take 30 days for oil from a deep offshore leak to reach the shoreline, giving OSRP participants plenty of time to take action.
(Having former politicians [blame environmentalists] for this crisis does not help much either. At least the deep offshore rigs give you 30 days to react to a leak before the oil gets to the shoreline; having oil rigs closer to shore would just shorten the time to react. Allowing onshore oil rigs does not mean oil companies would discontinue their deep offshore operations. There are thousands of oil rigs in the Gulf of Mexico. Extracting oil in the beautiful Arctic National Wildlife Refuge [ANWR] might be safer, but it does not eliminate the threat entirely, and any leak there would damage the local plants and animals in the same manner.)
So perhaps the current crisis was not the result of a lack of planning, but inadequate practice and execution. The same is true for IT Business Continuity / Disaster Recovery (BC/DR) plans. In all cases, there are four critical parts:
Plan
The planning team needs to anticipate every possible incident, determine the risks involved and the likelihood of impact, and either accept them, or decide to mitigate them. This can include natural disasters (hurricanes, fires, floods) and technical issues (computer viruses, power outages, network disruption).
Prepare
Mitigation can involve taking backups, having replicated copies at a remote location, creating bootable media, training all of the appropriate employees, and having written documented procedures. IBM's Unified Recovery Management approach can protect your entire IT operations, from laptops of mobile employees, to remote office/branch office (ROBO) locations, to regional and central data centers.
Practice
When was the last time you practiced your Business Continuity / Disaster Recovery plan? I have seen this done at a variety of levels. At the lowest level, it is all done on paper, in a conference room, with all participants talking through their respective actions. These are often called "walk-throughs". At the highest level, you turn off power to your data center --on a holiday weekend to minimize impact to operating revenues-- and have the team bring up applications at the alternate site.
As many as 80 percent of these BC/DR exercises are considered failures, in that, had a real disaster occurred, the participants are convinced they would not have achieved their Recovery Time Objective (RTO) targets. However, they are not complete failures if they help improve the plans, identify new incidents that were not previously considered, and train the participants in recovery procedures.
Execute
The last part is execution. In my career, I have been onsite for many Disaster Recovery exercises, as well as after real disasters have occurred. I am not surprised by how many people assume that if they have plans in place, have made preparations, and run one to three practice drills per year, the actual "execution" will follow naturally. While the book [Execution] by Bossidy and Charan is not focused on IT BC/DR plans per se, it is a great read on how to manage the actual execution of any kind of business plan. I have read this book and recommend it.
If you have not tested your IT department's BC/DR plans lately, perhaps it's time to dust off your copy, review it, and schedule some time for practice.
Well, it feels like Tuesday and you know what that means... "IBM Announcement Day!" Actually, today is Wednesday, but since Monday was the Memorial Day holiday here in the USA, my week is day-shifted. Yesterday, IBM announced its latest IBM FlashCopy Manager v2.2 release. Fellow blogger Del Hoobler (IBM) has also posted something on this over at the [Tivoli Storage Blog].
IBM FlashCopy Manager replaces two previous products. One was called Tivoli Storage Manager for Copy Services; the other was called Tivoli Storage Manager for Advanced Copy Services. To say people were confused between these two is an understatement: the first was for Windows, and the second was for UNIX and Linux operating systems. The solution? A new product that replaces both of these former products and supports Windows, UNIX and Linux! Thus, IBM FlashCopy Manager was born. I introduced this product back in 2009 in my post [New DS8700 and other announcements].
IBM Tivoli Storage FlashCopy Manager provides what most people with "N series SnapManager envy" are looking for: application-aware point-in-time copies. This product takes advantage of the underlying point-in-time interfaces available on various disk storage systems:
FlashCopy on the DS8000 and SAN Volume Controller (SVC)
Snapshot on the XIV storage system
Volume Shadow Copy Services (VSS) interface on the DS3000, DS4000, DS5000 and non-IBM gear that supports this Microsoft Windows protocol
For Windows, IBM FlashCopy Manager can coordinate the backup of Microsoft Exchange and SQL Server. The new version 2.2 adds support for Exchange 2010 and SQL Server 2008 R2. This includes the ability to recover an individual mailbox or mail item from an Exchange backup. The data can be recovered directly to an Exchange server, or to a PST file.
For UNIX and Linux, IBM FlashCopy Manager can coordinate the backup of DB2, SAP and Oracle databases. Version 2.2 adds support for specific Linux and Solaris operating systems, and provides a new capability for database cloning. Basically, database cloning restores a database under a new name with all the appropriate changes to allow its use for other purposes, like development, test or training. A new "fcmcli" command line interface allows IBM FlashCopy Manager to be used for custom applications or file systems.
A common misperception is that IBM FlashCopy Manager requires IBM Tivoli Storage Manager backup software to function. That is not true. You have two options:
Stand-alone Mode
In stand-alone mode, it's just you, the application, IBM FlashCopy Manager and your disk system. IBM FlashCopy Manager coordinates the point-in-time copies, maintains the correct number of versions, and allows you to back up and restore directly disk-to-disk.
Unified Recovery Management with Tivoli Storage Manager
Of course, the risk with relying only on point-in-time copies is that, in most cases, they are on the same disk system as the original data, the exception being virtual disks from the SAN Volume Controller. IBM FlashCopy Manager can be combined with IBM Tivoli Storage Manager so that the point-in-time copies can be copied off to a local or remote TSM server; if the disk system that contains both the source and the point-in-time copies fails, you still have a backup copy from TSM. In this approach, you can restore from the point-in-time copies, but you can also restore from the TSM backups as well.
IBM FlashCopy Manager is an excellent platform to connect application-aware functionality with hardware-based copy services.
Here I am, day 11 of a 17-day business trip, on my last leg of the trip this week, in Kuala Lumpur in Malaysia. I have been flooded with requests to give my take on EMC's latest re-interpretation of storage virtualization, VPLEX.
I'll leave it to my fellow IBM master inventor Barry Whyte to cover the detailed technical side-by-side comparison. Instead, I will focus on the business side of things, using Simon Sinek's Why-How-What sequence. Here is a [TED video] from Garr Reynolds' post
[The importance of starting from Why].
Let's start with the problem we are trying to solve.
Problem: migration from old gear to new gear, old technology to new technology, from one vendor to another vendor, is disruptive, time-consuming and painful.
Given that IT storage is typically replaced every 3-5 years, then pretty much every company with an internal IT department has this problem, the exception being those companies that don't last that long, and those that use public cloud solutions. IT storage can be expensive, so companies would like their new purchases to be fully utilized on day 1, and be completely empty on day 1500 when the lease expires. I have spoken to clients who have spent 6-9 months planning for the replacement or removal of a storage array.
A solution that makes data migration non-disruptive benefits the clients (making it easier for their IT staff to keep their data center modern and current) as well as the vendors (reducing the obstacles to selling and deploying new features and functions). Storage virtualization can be employed to help solve this problem. I define virtualization as "technology that makes one set of resources look and feel like a different set of resources, preferably with more desirable characteristics." By making different storage resources, old and new, look and feel like a single type of resource, migration can be performed without disrupting applications.
Before VPLEX, here is a breakdown of each solution:
Why?
IBM: Non-disruptive tech refresh, and a unified platform to provide management and functionality across heterogeneous storage.
HDS: Non-disruptive tech refresh, and a unified platform to provide management and functionality between internal tier-1 HDS storage and external tier-2 heterogeneous storage.
EMC: Non-disruptive tech refresh, with unified multi-pathing driver that allows host attachment of heterogeneous storage.

How?
IBM: New in-band storage virtualization device
HDS: Add in-band storage virtualization to existing storage array
EMC: New out-of-band storage virtualization device with new "smart" SAN switches

What?
IBM: SAN Volume Controller
HDS: HDS USP-V and USP-VM
EMC: Invista
For IBM, the motivation was clear: protect customers' existing investment in older storage arrays and introduce new IBM storage with a solution that allows both to be managed with a single set of interfaces and a common set of functionality, improving capacity utilization and availability. IBM SAN Volume Controller eliminated vendor lock-in, providing clients a choice of multi-pathing driver, and allowing any-to-any migration and copy services. For example, IBM SVC can be used to help migrate data from an old HDS USP-V to a new HDS USP-V.
With EMC, however, the motivation appeared to be protecting software revenues from their PowerPath multi-pathing driver and TimeFinder and SRDF copy services. Back in 2005, when EMC Invista was first announced, these three software products represented 60 percent of EMC's bottom-line profit. (Ok, I made that last part up, but you get my point! EMC charges a lot for these.)
Back in 2006, fellow blogger Chuck Hollis (EMC) suggested that SVC was just a [bump in the wire] which could not possibly improve performance of existing disk arrays. IBM showed clients that putting cache (SVC) in front of other cache (back-end devices) does indeed improve performance, in the same way that multi-core processors successfully use L1/L2/L3 cache. Now, EMC is claiming their cache-based VPLEX improves performance of back-end disk. My how EMC's story has changed!
So now, EMC announces VPLEX, which sports a blend of SVC-like and Invista-like characteristics. Based on blogs, tweets and publicly available materials I found on EMC's website, I have been able to determine the following comparison table. (Of course, VPLEX is not yet generally available, so what is eventually delivered may differ.)
Hardware
IBM SVC: Scalable, 1 to 4 node-pairs
EMC Invista: One size fits all, single pair of CPCs
EMC VPLEX: SVC-like, 1 to 4 director-pairs

SAN Fabric
IBM SVC: Works with any SAN switches or directors
EMC Invista: Required special "smart" switches (vendor lock-in)
EMC VPLEX: SVC-like, works with any SAN switches or directors

Multi-pathing driver
IBM SVC: Broad selection: IBM Subsystem Device Driver (SDD) offered at no additional charge, as well as OS-native drivers Windows MPIO, AIX MPIO, Solaris MPxIO, HP-UX PV-Links, VMware MPP, Linux DM-MP, and the commercial third-party driver Symantec DMP.
EMC Invista: Limited selection, with focus on the priced PowerPath driver
EMC VPLEX: Invista-like, PowerPath and Windows MPIO

Cache
IBM SVC: Read cache, and choice of fast-write or write-through cache, offering the ability to improve performance.
EMC Invista: No cache; the Split-Path architecture cracked open Fibre Channel packets in flight, delayed every IO by 20 nanoseconds, and redirected modified packets to the appropriate physical device.
EMC VPLEX: SVC-like, read and write-through cache, offering the ability to improve performance.

Space-Efficient Point-in-Time copies
IBM SVC: SVC FlashCopy supports up to 256 space-efficient targets, copies of copies, read-only or writeable, and incremental persistent pairs.
EMC Invista: No
EMC VPLEX: Like Invista, no

Remote distance mirror
IBM SVC: Choice of SVC Metro Mirror (synchronous up to 300km) and Global Mirror (asynchronous), or use the functionality of the back-end storage arrays
EMC Invista: No native support; use the functionality of the back-end storage arrays, or purchase a separate product called EMC RecoverPoint to cover this lack of functionality
EMC VPLEX: Limited synchronous remote-distance mirror within VPLEX (up to 100km only), no native asynchronous support; use the functionality of the back-end storage arrays

Thin Provisioning
IBM SVC: Provides thin provisioning to devices that don't offer this natively
EMC Invista: No
EMC VPLEX: Like Invista, no

Campus-wide access
IBM SVC: SVC Split-Cluster allows concurrent read/write access to data from hosts at two different locations several miles apart
EMC Invista: I don't think so
EMC VPLEX: VPLEX Metro, similar in concept but implemented differently

Non-disruptive tech refresh
IBM SVC: Can upgrade or replace storage arrays, SAN switches, and even the SVC nodes' software AND hardware themselves, non-disruptively
EMC Invista: Tech refresh for storage arrays, but not for the Invista CPCs
EMC VPLEX: Tech refresh of back-end devices, and upgrade of VPLEX software, non-disruptively. Not clear if the VPLEX engines themselves can be upgraded non-disruptively like the SVC.

Heterogeneous Storage Support
IBM SVC: Broad support of over 140 different storage models from all major vendors, including all CLARiiON, Symmetrix and VMAX from EMC, and storage from many smaller startups you may not have heard of
EMC Invista: Limited support
EMC VPLEX: Invista-like. VPLEX claims to support a variety of arrays from a variety of vendors, but as far as I can find, only the DS8000 is supported from the list of IBM devices. Fellow blogger Barry Burke (EMC) suggests [putting SVC between VPLEX and third party storage devices] to get the heterogeneous coverage most companies demand.

Back-end storage requirement
IBM SVC: Must define quorum disks on any IBM or non-IBM back-end storage array; SVC can run entirely on non-IBM storage arrays
EMC Invista: None
EMC VPLEX: HP SVSP-like, requires at least one EMC storage array to hold metadata

Internal storage
IBM SVC: The SVC 2145-CF8 model supports up to four solid-state drives (SSD) per node that can be treated as managed disks to store end-user data
EMC Invista: None
EMC VPLEX: Invista-like. VPLEX has an internal 30GB SSD, but it is used only for the operating system and logs, not for end-user data.
In-band virtualization solutions from IBM and HDS dominate the market. Being able to migrate data from old devices to new ones non-disruptively turned out to be only the [tip of the iceberg] of benefits from storage virtualization. In today's highly virtualized server environment, being able to non-disruptively migrate data comes in handy all the time. SVC is one of the best storage solutions for VMware, Hyper-V, XEN and PowerVM environments. EMC watched and learned in the shadows, taking notes of what people like about the SVC, and decided to follow IBM's time-tested leadership to provide a similar offering.
EMC re-invented the wheel, and it is round. On a scale from Invista (zero) to SVC (ten), I give EMC's new VPLEX a six.
Wrapping up my coverage of the IBM Dynamic Infrastructure Executive Summit at the Fairmont Resort in Scottsdale, Arizona, we had a final morning of main-tent sessions. Here is a quick recap of the sessions presented Thursday morning. This left the afternoon for people to catch their flights or hit the links.
Data Center Actions your CFO will Love
Steve Sams, IBM Vice President of Global Site and Facilities, presented simple actions that can yield significant operational and capital cost savings. The first focus area was to extend the life of your existing data center. Some 70 percent of data centers are 10-15 years old or older, and therefore not designed for today's computational densities. IBM did this for its Lexington data center, making changes that resulted in 8x the capability without increasing the footprint.
The second focus area was to rationalize the infrastructure across the organization. The process of "rationalizing" involves determining the business value of specific IT components and deciding whether that business value justifies the existing cost and complexity. It allows you to prioritize which consolidations should be done first to reduce costs and optimize value. IBM's own transformation reduced 128 CIOs down to a single CIO, consolidated 155 scattered host data centers down to seven, and took 80 web hosting data centers down to five. This also included consolidating 31 intranets down to a single global intranet.
The third focus area was to design your new infrastructure to be more responsive to change. IBM offers four solutions to help those looking to build or upgrade their data center:
Scalable Modular Data Center - save up to 20 percent compared to traditional deployments with turn-key configurations from 500 to 2500 square feet that can be deployed in as little as 8-12 weeks in existing floorspace.
Enterprise Modular Data Center - save 40 to 50 percent with a 5000 square foot standardized design for larger data centers. This modular approach provides a "pay as you grow" model that can be more responsive to future unforeseen needs.
Portable Modular Data Center - this is the PMDC shipping container that was sitting outside in the parking lot. This can be deployed anywhere in 12-14 weeks and is ideal for dealing with disaster recoveries or situations where traditional data center floor plans cannot be built fast enough.
High Density Zone - this can help increase capacity in an existing data center without a full site retrofit.
Here is a quick [video] that provides more insight.
Neil Jarvis, CIO of American Automobile Association (AAA) for Northern California, Nevada and Utah (NCNU), provided the customer testimonial. Last September, the [AAA NCNU selected IBM] to build them an energy-efficient green data center. Neil provided us an update now six months later, managing the needs of 4 million drivers.
Virtualization - Managing the World's Infrastructure
Helene Armitage, IBM General Manager of the newly formed IBM System Software product line, presented on virtualization and management. Virtualization is becoming much more than a way of meeting the demand for performance, capability, and flexibility in the data center. It helps create a smarter, more agile data center. Her presentation focused on four areas: consolidate resources, manage workloads, automate processes, and optimize the delivery of IT services.
Charlie Weston, Group Vice President of Information Technology at Winn-Dixie, one of the largest food retailers in the United States with over 500 stores and supermarkets, provided the client testimonial. The grocery business is highly competitive with tight profit margins. Winn-Dixie wanted to deploy business continuity/disaster recovery (BC/DR) while managing IT equipment scattered across these 500 locations. They were able to consolidate 600 stand-alone servers into a single corporate data center. Using IBM AIX with PowerVM virtualization on BladeCenter, each JS22 blade server could manage 16 stores. These were mirrored to a nearby facility, as well as a remote disaster recovery center. They were also able to add new Linux application workloads to their existing System z9 EC mainframe. The result was to free up $5 million US dollars in capital that could be used to remodel their stores, and to improve application performance 5-10 times. They were able to deploy a new customer portal on Linux for System z in days instead of months, and have reduced their disaster recovery time objective (RTO) against hurricanes from days to hours. Their next step involves looking at desktop virtualization.
Redefining x86 Computing
Roland Hagan, IBM Vice President for the IBM System x server platform, presented on how IBM is redefining the x86 computing experience. More than 50 percent of all servers are x86 based. These x86 servers are easy to acquire, enjoy a large application base, and can take advantage of a readily available skilled workforce for administration. The problem is that 85 percent of x86 processing power remains idle, energy costs are 8 times what they were 12 years ago, and management costs are now 70 percent of the IT budget.
IBM has the number one market share for scalable x86 servers. Roland covered the newly announced eX5 architecture that has been deployed in both rack-optimized models as well as IBM BladeCenter blade servers. These can offer 2x the memory capacity of competitive offerings, which is important for today's server virtualization, database and analytics workloads. This includes 40 and 80 DIMM models of blades, and 64 to 96 DIMM models of rack-optimized systems. IBM also announced eXFlash, internal Solid State Drives accessible at bus speeds.
The results can be significant. For example, just two IBM System x3850 4-socket, 8-core systems can replace 50 (yes, FIFTY) HP DL585 4-socket, 4-core Opteron rack servers, reducing costs 80 percent with a 3-month ROI payback period. Compared to IBM's previous X4 architecture, the eX5 provides 3.5 times better SAP performance, 3.8 times faster server virtualization performance, and 2.8 times faster database performance.
The CIO of Acxiom provided the customer testimonial. They were able to get a 35-to-1 consolidation switching over to IBM x86 servers, resulting in huge savings.
Top ROI projects to Get Started
Mark Shearer, IBM Vice President of Growth Solutions, and formerly my fourth-line manager as the Vice President of Marketing and Communications, presented a list of projects to help clients get started. There are over 500 client references that have successfully implemented Smarter Planet projects. Mark's projects were grouped into five categories:
Enabling Massive Scale
Increase Business Agility
Manage Risk, Compliance and Security
Organize Vast Amounts of Information
Turn Information into Insight
The attendees were all offered a free "Infrastructure Study" to evaluate their current data center environments. A team of IBM experts will come on-site, gather data, interview key personnel and make recommendations. Alternatively, these can be done at one of IBM's many briefing centers, such as the IBM Executive Briefing Center in Tucson, Arizona, where I work.
This wraps up the week for me. I have to pack the XIV back into the crate, and drive back to Tucson. IBM plans to host another Executive Summit in the September/October time frame on the East coast.
Continuing my coverage of the IBM Dynamic Infrastructure Executive Summit at the Fairmont Resort in Scottsdale, Arizona, we had a full day of main-tent sessions. Here is a quick recap of the sessions presented in the afternoon.
Taming the Information Explosion
Doug Balog, IBM Vice President and Disk Storage Business Line Executive, presented on the information explosion. Storage Admins are focused on managing storage growth and the related costs and complexity, proper forecasting and capacity planning, and backup administration. IBM's strategy is to help clients in the following areas:
Storage Efficiency - getting the most use out of the resources you invest
Service Delivery - ensuring that information gets to the right people at the right time
Data Protection - protecting data against unethical tampering, unauthorized access, and unexpected loss and corruption
Cory Vokey, Senior Manager of IT Systems Operations at Research in Motion, Ltd., the people who bring you BlackBerry phone service, provided a client testimonial for the XIV storage system. Before the XIV, RIM suffered high storage costs and per-volume software licensing. Over the past 15 months, RIM deployed XIV as a corporate standard. With the XIV, they have had 100 percent up-time, and enjoyed 50 percent costs savings compared to their previous storage systems. They have increased capacity 300 percent, without any increase to their storage admin staff. XIV has greatly improved their procurement process, as they no longer need to "true up" their software licenses to the volume of data managed, a sore point with their previous storage vendor.
Mainframe Innovations and Integration
Tom Rosamilia, IBM General Manager of the System z mainframe platform, presented on mainframe servers. After 40 years, IBM's mainframe remains the gold standard, able to handle hundreds of workloads on a single server and facilitating immediate growth with scalability. The key values of the System z mainframe are:
Industry leading virtualization, management and qualities of service
A comprehensive portfolio for business intelligence and data warehousing
The premier platform for modernizing the enterprise
A large and growing portfolio of leading applications and ISV support
Steve Phillips, CIO of Avnet, presented the client testimonial for their use of a System z10 mainframe. Last year, Avnet was ranked number one in technology distribution on Fortune's "Most Admired" list. Avnet distributes technology from 300 suppliers to over 100,000 resellers, ISVs and end users. They have modernized their system, running SAP on System z with DB2 as the database management system, and using the HiperSockets virtual LAN inside the server to communicate between logical partitions (LPARs). The folks at Avnet especially like the ability to re-assign capacity on the fly. This is used for end-of-quarter peak processing, and to adjust between test and development workloads. They also like the various special-purpose engines available:
z Integrated Information Processor (zIIP) for DB2 workloads
z Application Assist Processor (zAAP) for Java processing under WebSphere
Integrated Facility for Linux (IFL) for Linux applications
Cloud Computing: Real Capabilities, Real Stories
Mike Hill, IBM Vice President of Enterprise Initiatives, presented on IBM's leadership in cloud computing. He covered three trends that are driving IT today. First, there is a consumerization and industrialization of IT interfaces. Second, a convergence of the infrastructure that is driving a new focus on standards. Third, delivering IT as a service has brought about new delivery choices. The result is cloud computing, with on-demand self-service, ubiquitous network access, location-independent resource pooling, rapid elasticity, and flexible pricing models. Government agencies and businesses in Retail, Manufacturing and Utilities are leading the charge to cloud computing.
Mike covered IBM's five cloud computing deployment models, and shared his views on which workloads might be ready for cloud, and which may not be there yet. Organizations are certainly seeing significant results: reduced labor costs, improved capital utilization, reduced provisioning cycle times, improved quality through reduced software defects, and reduced end user IT support costs.
Mitch Daniels, Director of Technology at ManTech International Corporation, presented the customer testimonial for an IBM private cloud for Development and Test. ManTech chose a private cloud as they work with US Federal agencies like the Department of Defense, Homeland Security and the Intelligence community. The private cloud was built from:
IBM Cloudburst virtualized server environment
Tivoli Unified Process to document process and workflow
Tivoli Service Automation Manager to request, deliver and manage IT services
Tivoli Self-Service Portal and Service Catalog to allow developers and testers to request resources as needed
The result: ManTech saved 50 percent in labor costs, and can now provision development and test resources in minutes instead of weeks.
The IBM Transformation Story
Leslie Gordon, IBM Vice President of Application and Infrastructure Services Management, presented IBM's own transformation story, becoming the premier "Globally Integrated Enterprise". Based on IBM's 2009 CIO study, CIOs must balance three roles with seemingly contradictory demands:
Make innovations real: be an insightful visionary but also an able pragmatist
Raise the Return on Investment (ROI) of IT: determine savvy ways to create value but also be ruthless at cutting costs
Expand the business impact of IT: be a collaborative business leader with the other C-level executives, but also an inspiring manager for the IT staff.
In this case, IBM drinks its own champagne, using its own solutions to help run its internal operations. In 1997, IBM used over 15,000 applications, but this has been simplified down to 4,500 applications today. Thousands of servers were consolidated to Linux on System z mainframes. The application workloads were categorized as Blue, Bronze, Silver, and Gold to help prioritize the consolidation. IBM's key lessons from all this were:
Gather data at the business unit level, but build the business case from an enterprise view.
Start small and monitor progress continually, run operations concurrently with transformational projects
Address cultural and organizational changes by deploying transformation in waves
I found the client testimonials insightful. It is always good to hear that IBM's solutions work "as advertised" right out of the box.
Continuing my coverage of the IBM Dynamic Infrastructure Executive Summit at the Fairmont Resort in Scottsdale, Arizona, we had a day full of main-tent sessions. Here is a quick recap of the sessions presented in the morning.
Leadership and Innovation on a Smarter Planet
Todd Kirtley, IBM General Manager of the western United States, kicked off the day. He explained that we are now entering the Decade of Smart: smarter healthcare, smarter energy, smarter traffic systems, and smarter cities, to name a few. One of those smarter cities is Dubuque, Iowa, nicknamed the Masterpiece on the Mississippi. Mayor Roy Buol of Dubuque spoke next, giving his testimonial on working with IBM. I have never been to Dubuque, but it looks and sounds like a fun place to visit. Here is the [press release] and a two-minute [video].
Smarter Systems for a Smarter Planet
Tom Rosamillia, IBM General Manager of the System z mainframe platform, presented on smarter systems. IBM is intentionally designing integrated systems to redefine performance and deliver the highest possible value for the least amount of resource. The five key focus areas were:
Enabling massive scale
Organizing vast amounts of data
Turning information into insight
Increasing business agility
Managing risk, security and compliance
The Future of Systems
Ambuj Goyal, IBM General Manager of Development and Manufacturing, presented the future of systems. For example, reading 10 million electricity meters monthly is only 120 million transactions per year, but reading them daily is 3.65 billion, and reading them every 15 minutes will result in over 350 billion transactions per year. What would it take to handle this? Beyond just faster speeds and feeds, beyond consolidation through virtualization and multi-core systems, beyond pre-configured fit-for-purpose appliances, there will be a new level for integrated systems. Imagine a highly dense integration with over 3000 processors per frame, over 400 Petabytes (PB) of storage, and 1.3 PB/sec bandwidth. Integrating software, servers and storage will make this big jump in value possible.
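The transaction-volume arithmetic above is easy to verify. Here is a quick back-of-the-envelope calculation in Python (my own sketch of the numbers quoted, assuming a 365-day year):

# Back-of-the-envelope math for the smart-meter example above,
# assuming 10 million meters and a 365-day year.
METERS = 10_000_000

monthly_reads  = METERS * 12                     # one reading per meter per month
daily_reads    = METERS * 365                    # one reading per meter per day
fifteen_minute = METERS * 365 * (24 * 60 // 15)  # 96 readings per meter per day

print(f"Monthly:      {monthly_reads:>16,}")    # 120,000,000
print(f"Daily:        {daily_reads:>16,}")      # 3,650,000,000
print(f"Every 15 min: {fifteen_minute:>16,}")   # 350,400,000,000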
POWERing your Planet
Ross Mauri, IBM General Manager of Power Systems, presented the latest POWER7 processor server product line. The IBM POWER-based servers can run any mix of AIX, Linux and IBM i (formerly i5/OS) operating system images. Compared to the previous POWER6 generation, POWER7 servers are four times more energy efficient and deliver twice the performance at about the same price. For example, an 8-socket p780 with 64 cores (eight per socket) and 256 threads (4 threads per core) had a record-breaking 37,000 SAP users in a standard SD 2-tier benchmark, beating out 32-socket and 64-socket M9000 SPARC systems from Oracle/Sun and 8-socket Nehalem-EX Fujitsu 1800E systems. See the [SAP benchmark results] for full details. With more TPC-C performance per core, the POWER7 is 4.6 times faster than HP Itanium and 7.5 times faster than Oracle Sun T5440.
This performance can be combined with incredible scalability. IBM's PowerVM outperforms VMware by 65 percent and provides features like "Live Partition Mobility", which is similar to VMware's VMotion capability. IBM's PureScale allows DB2 to scale out across 128 POWER servers, beating out Oracle RAC clusters.
IBM AIX on POWER systems is also the most reliable UNIX operating system: 2.3 times more reliable than Oracle Sun Solaris on SPARC, HP-UX or Apple MacOS, and 10 times more reliable than Windows 2008 Server on x86 platforms. See the [ITIC 2009 Global Server Hardware and Server OS Reliability Survey].
Analytics and Information
The final speaker in the morning was Greg Lotko, IBM Vice President of Information Management Warehouse solutions. Analytics are required to gain greater insight from information, and this can result in better business outcomes. The [IBM Global CFO Study 2010] shows that companies that invest in business insight consistently outperform all other enterprises, with 33 percent more revenue growth, 32 percent more return on invested capital, and 12 times more earnings (EBITDA). Business Analytics is more than just traditional business intelligence (BI). It tries to answer three critical questions for decision makers:
What is happening?
Why is it happening?
What is likely to happen in the future?
The IBM Smart Analytics System is a pre-configured integrated system appliance that combines text analytics, data mining and OLAP cubing software on a powerful data warehouse platform. It comes in three flavors: the Model 5600 is based on System x servers, the Model 7600 on POWER7 servers, and the Model 9600 on System z mainframe servers.
IBM has over 6000 business analytics and optimization consultants to help clients with their deployments.
While this might appear as "Death by PowerPoint", I think the panel of presenters did a good job providing real examples to emphasize their key points.
While clients and IBM executives were in meetings today, in and around the Fairmont resort here in Scottsdale, Arizona, I helped to set up the "Solutions Showcase". There were three stations:
Smarter Systems
David Ayd and I manned this one, covering storage and server systems. From left to right: a fully-populated 15-module XIV storage system, my laptop running the XIV GUI; two-socket 16-core POWER p770 server, a solid-state drive, PS702 POWER blade, my book Inside System Storage: Volume I, HX5 x86 blade, and four-socket 16-core x3850 M3 server with MAX5 memory extension; David's laptop with various POWER and System x presentations, and our Kaon V-Osk interactive plasma screen display.
Smarter Clouds
Eric Kern manned the Smarter Clouds station. He had live guest images on the IBM Developer and Test cloud, which won the "Best of Interop" award up in Las Vegas this week. I covered IBM's cloud offering in my post [Three Things To Do on the IBM Cloud].
Smarter Data Centers
Ken Schneebeli manned the "Smarter Data Centers" station. He directed people out to the parking lot to see Brian Canney and the Portable Modular Data Center (PMDC). The one here is 8.5 feet by 8.5 feet by 40 feet in size and can be configured and deployed in 12-14 weeks to any location. It can hold any mix of IBM and non-IBM equipment, provided it fits within the physical dimensions. Want a DS8700 disk system? The PMDC can hold up to 3-frame configurations of the DS8700. Want an eclectic mix of Sun, HP and Dell servers with HDS and EMC disk in your PMDC? IBM can do that too.
After we finished setup, we joined the clients at the "Welcome Reception" on the Lagoon Lawn. The weather was quite pleasant.
Special thanks to Jasdeep Purdhani, Lisa Gates, and Kelly Olson for their help organizing this event.
This week, Tuesday, Wednesday and Thursday, I am at the IBM Dynamic Infrastructure Executive Summit at the beautiful Fairmont Resort in Scottsdale, Arizona. This is a mix of indoor and outdoor meetings, one-on-ones with IBM executives, and main-tent sessions.
The Solutions Showcase will cover the following:
Smarter Systems
As the bar for performance gets higher and the need to manage, store and analyze massive amounts of information escalates, systems must scale to meet the needs of the business. The latest server and storage technology innovations will be on display, including POWER7, eX5, XIV, ProtecTIER, SONAS, and System z Solution Editions.
Smarter Data Centers
Today’s data centers are under extreme power and cooling pressures and space constraints. How can you get more out of your existing facility, while planning for future requirements? IBM energy efficiency consultants will tell you how you can reduce both CAPEX and OPEX costs and plan for future growth with consolidation and virtualization, energy efficient (energy star) equipment and modular data center solutions. Be sure to check out the IBM Portable Modular Data Center (PMDC) that fits in a standard shipping crate!
Smarter Clouds
IBM’s Cloud Computing solutions provide you with flexible, dynamic, secure and cost-efficient delivery choices, from pay-per-use (by the hour, week or year) at IBM cloud centers around the world, to positioning your infrastructure to build your own private cloud, to out-of-the-box cloud solutions that are quick and easy to deploy. Which workloads are the best fit for cloud computing? How do you decide which cloud computing model is right for your organization? Cloud experts will talk about the options, give you recommendations based on your business objectives and help you get started.
It seems everyone is talking about stacks, appliances and clouds.
On StorageBod, fellow blogger Martin Glassborow has a post titled [Pancakes!] He feels that everyone from Hitachi to Oracle is turning into the IT equivalent of the International House of Pancakes [IHOP], offering integrated stacks of software, servers and storage.
Cisco introduced its "Unified Computing System" about a year ago, [reinventing the datacenter with an all-Ethernet approach]. Cisco offers neither its own hypervisor software nor its own storage, so there are two choices. First, Cisco has entered a joint venture, called Acadia, with VMware and EMC, to form the Virtual Computing Environment (VCE) coalition. The resulting stack was named Vblock, which one blogger hyphenated as Vb-lock to raise awareness of the proprietary vendor lock-in nature of this stack. Second, Cisco, VMware and NetApp had a similar set of [Barney press releases] to announce a viable storage alternative for those not married to EMC.
"Only when it makes sense. Oracle/Sun has the better argument: when you know exactly what you want from your database, we’ll sell you an integrated appliance that will do exactly that. And it’s fine if you roll your own.
But those are industry-wide issues. There are UCS/VCE-specific issues as well:
Cost. All the integration work among 3 different companies costs money. They aren’t replacing existing costs – they are adding costs. Without, in theory, charging more.
Lock-in. UCS/Vblock is, effectively, a mainframe with a network backplane.
Barriers to entry. Are there any? Cisco flagged hypervisor bypass and large memory support as unique value-add – and neither seems any more than a medium-term advantage.
BOT? Build, Operate, Transfer. In theory Vblocks are easier and faster to install and manage. But customers are asking that Acadia BOT their new Vblocks. The customer benefit over current integrator practice? Lower BOT costs? Or?
Price. The 3 most expensive IT vendors banding together?
Longevity. Industry “partnerships” don’t have a good record of long-term success. Each of these companies has its own competitive stresses and financial imperatives, and while the stars may be aligned today, where will they be in 3 years? Unless Cisco is piloting an eventual takeover."
Fellow blogger Bob Sutor (IBM) has an excellent post titled [Appliances and Linux]. Here is an excerpt:
"In your kitchen you have special appliances that, presumably, do individual things well. Your refrigerator keeps things cold, your oven makes them hot, and your blender purees and liquifies them. There is room in a kitchen for each of these. They work individually but when you are making a meal they each have a role to play in creating the whole.
"You could go out and buy the metal, glass, wires, electrical gadgets, and so on that you would need to make each appliance but it is faster, cheaper, and undoubtedly safer to buy them already manufactured. For each device you have a choice of providers and you can pay more for additional features and quality.
In the IT world it is far more common to buy the bits and pieces that make up a final solution. That is, you might separately order the hardware components, the operating system, and the applications, and then have someone put them all together for you. If you have an existing configuration you might add more blades or more storage devices.
You don’t have to do this, however, in every situation. Just from a hardware perspective, you can buy a ready-made machine just waiting for the on switch to be flicked and the software installed. Conversely, you might get a pre-made software image with operating system and applications in place, ready to be provisioned to your choice of hardware. We can get even fancier in that the software image might be deployable onto a virtual machine and so be a ready made solution runnable on a cloud.
Thus in the IT world we can talk about hardware-only appliances, software-only appliances (often called virtual software appliances), and complete hardware and software combinations. The last is most comparable to that refrigerator or oven in your kitchen."
If your company was a restaurant, how many employees would you have on hand to produce your own electricity from gas generators, pump your own water from a well, and assemble your own toasters and blenders from wires and motors? I think this is why companies are re-thinking the way they do their own IT.
Rather than business-as-usual, perhaps a mix of pre-configured appliances, consisting of software, server and storage stacked to meet a specific workload, connected to public cloud utility companies, might be the better approach. By 2013, some analysts feel that as many as 20 percent of companies might not even have a traditional IT datacenter anymore.
The private cloud:
“By employing techniques like virtualization, automated management, and utility-billing models, IT managers can evolve the internal datacenter into a ‘private cloud’ that offers many of the performance, scalability, and cost-saving benefits associated with public clouds. Microsoft provides the foundation for private clouds with infrastructure solutions to match a range of customer sizes, needs and geographies.”
The public cloud:
“Cloud computing is expanding the traditional web-hosting model to a point where enterprises are able to off-load commodity applications to third-party service providers (hosters) and, in the near future, the Microsoft Azure Services Platform. Using Microsoft infrastructure software and Web-based applications, the public cloud allows companies to move applications between private and public clouds.”
Finally, I saw this from fellow blogger Barry Burke (EMC), aka the Storage Anarchist, titled [a walk through the clouds], which is really a two-part post.
The first part describes a possible future for EMC customers written by EMC employee David Meiri, envisioning a wonderful world with "No more Metas, Hypers, BIN Files...."
The vision is a pleasant one, and not far from reality. While EMC prefers to use the term "private cloud" to refer to both on-premises and off-premises-but-only-your-employees-can-VPN-to-it-and-your-IT-staff-still-manages-it flavors, the overall vision is available today from a variety of Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) providers.
A good analogy for "private cloud" might be a corporate "intranet" that is accessible only within the company's firewall. Intranets allowed companies to post information for employees on internal websites, using the standard HTML and web browsers already deployed on most PCs and workstations. Web pages running on an intranet can easily be moved to an external-facing website without much rework or trouble.
The second part has Barry claiming that EMC has made progress towards a "Virtual Storage Server" that might be announced at next month's EMC World conference.
Seriously?
When people hear "Storage Virtualization" most immediately think of the two market leaders, IBM SAN Volume Controller and Hitachi Data Systems (HDS) Universal Storage Platform (USP) products. Those with a tape bent might throw in IBM's TS7000 virtual tape libraries or Oracle/Sun's Virtual Storage Manager (VSM). And those focused on software-only solutions might recall Symantec's Veritas Volume Manager (VxVM), DataCore's SANsymphony, or FalconStor's IPStor products.
But what about EMC's failed attempt at storage virtualization, the Invista? After five years of failing to deliver value, EMC has so far only publicized ONE customer reference account, and I estimate that perhaps only a few dozen actual customers are still running on this platform. Compare that to IBM selling tens of thousands of SAN Volume Controllers, and HDS selling thousands of their various USP-V and USP-VM products, and you quickly realize that EMC has a lot of catching up to do. EMC first delivered Invista about 18 months after IBM SAN Volume Controller, similar to their introduction of Atmos being 18 months after our Scale-Out File Services (SoFS) and their latest CLARiiON-based V-Max coming out 18 months after IBM's XIV storage system.
So what will EMC's Invista follow-on "Virtual Storage Server" product look like? No idea. It might be another five years before you actually hear about a customer using it. But why wait for EMC to get their act together?
IBM offers solutions TODAY that can make life as easy as envisioned here. IBM offers integrated systems sold as ready-to-use appliances, customized "stacks" that can be built to handle particular workloads, residing on-premises or hosted at an IBM facility, and public cloud "as-a-service" offerings on the IBM Cloud.
My colleagues Harley Puckett (left) and Jack Arnold (right) were highlighted in today's Arizona Daily Star, our local newspaper, as part of an article on IBM's success and leadership in the IT storage industry. With 1,400 employees here in Tucson, IBM is Southern Arizona's 36th largest employer.
Highlighted in the article:
DS8700 with the new Easy Tier feature
TS7650 ProtecTIER virtual tape library with data deduplication capability
LTO-5 tape and the new Long Term File System (LTFS)
XIV with the new 2TB drive, for a maximum per-rack usable capacity of 161 TB.
Perhaps E.A.R.T.H. could stand for IBM's "Energy-efficient Archive, Retention, Tape and Hybrid" storage offerings, which, combined, had double-digit percent growth in Petabytes shipped (1Q10 versus 1Q09). This helped IBM gain market share. Last week's LTO-5 announcement was made at [NAB Show 2010], hosted by the National Association of Broadcasters. Why? Because many digital media and entertainment people at this conference are interested in getting off "analog video". LTO-5 is 20 times cheaper than the professional versions of Betamax or VHS tape currently used. So while many are trying to go "tape-less" by switching to disk, like the IBM DCS9900, they are finding that perhaps LTO-5 tape might be the better alternative. A key advantage of LTO-5 is that the cartridges can now be used like DVD-RW or USB thumb drives, with drag-and-drop file capability using the new Long Term File System (LTFS) on the LTO-5 cartridges. This earned a "Pick Hit" at the conference.
Overall, IBM storage revenues grew double digits, which leads me to believe that the worst of the financial melt-down is over, at least from an IT industry perspective. To learn more, see [IBM 1Q10 Financial Results].
Greg and 3PAR's Marc Farley did an "ambush" interview with the folks at the IBM booth at SNW, including Paula Koziol about Twitter, and [Rich Swain] about IBM's latest SONAS product. Here is their post: [Storage Monkey business with IBM].
You can learn more about SONAS from my post [More Details about IBM Clustered NAS]. SONAS is based on software that has been available since 1996 and runs on commodity off-the-shelf server and storage systems, but building a complete system was left as an exercise for the end user, which many of the top 500 supercomputers have done.
Back in November 2007, IBM announced Scale-Out File Services (SoFS) which was a set of IBM Global Technical Services to build a customized solution from the software and a set of servers, disk and tape storage. Customized configurations were done for a variety of workloads from Digital Media to Scientific Research High Performance Computing (HPC). Last year, SoFS was renamed to IBM Smart Business Storage Cloud (SBSC).
This year, IBM was able to package all of the software and hardware into an easy-to-order machine-type model that has everything cabled and ready to use. This is what SONAS is today.
Continuing my discussion of this week's announcements of IBM storage products, I will cover the announcements that double storage capacity per footprint.
Linear Tape Open - Generation 5
IBM announced [LTO-5 drives], the TS2250 half-height and the TS2350 full-height drives, as well as support for LTO-5 drives in its various tape libraries: TS3100, TS3200, and TS3500. The native 1.5TB capacity of the LTO-5 cartridge is nearly double the 800GB capacity of the LTO-4 predecessor. With 2:1 compression, that's 3TB of data per cartridge! Performance-wise, the data transfer rate is 140 MB/sec, about 17 percent improvement over the 120MB/sec of the LTO-4 technology. The TS2250, TS2350, TS3100 and TS3200 now all offer dual-SAS ports for higher availability.
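The capacity and throughput claims are simple ratios of the published native figures. A quick sketch of the arithmetic in Python:

# Ratios behind the LTO-5 versus LTO-4 figures quoted above (native numbers).
lto4_tb, lto5_tb = 0.8, 1.5       # native capacity per cartridge, in TB
lto4_mbs, lto5_mbs = 120, 140     # native data transfer rate, in MB/sec

print(f"Capacity vs LTO-4:      {lto5_tb / lto4_tb:.2f}x (nearly double)")
print(f"With 2:1 compression:   {lto5_tb * 2:.1f} TB per cartridge")
print(f"Throughput improvement: {(lto5_mbs - lto4_mbs) / lto4_mbs:.0%}")  # about 17 percent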
LTO-5 carries forward many of the advancements of past generations. For example, LTO-5 continues the G-2/G-1 "backward compatibility" architecture, which means that the LTO-5 drive can read LTO-3 and LTO-4 cartridges, and can write LTO-4 cartridges. Like the LTO-3 and LTO-4, the same LTO-5 drive can read and write WORM or regular rewriteable cartridges. Like the LTO-4, the LTO-5 offers drive-level data-at-rest encryption. These use a symmetric 256-bit AES key, managed by IBM Tivoli Key Lifecycle Manager (TKLM).
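For readers curious what 256-bit symmetric AES encryption looks like, here is a small software illustration in Python (purely illustrative: the LTO-5 drive performs the equivalent in hardware, and in practice the key would be served by the key manager rather than generated locally).

# Illustrative only: 256-bit symmetric AES encryption in software, using the
# third-party "cryptography" package. The LTO-5 drive does the equivalent in
# hardware, with the key supplied by a key manager such as TKLM.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # stand-in for a key from the key manager
nonce = os.urandom(12)                      # must be unique per encryption operation
aesgcm = AESGCM(key)

block = b"data block headed for tape"
ciphertext = aesgcm.encrypt(nonce, block, None)           # encrypt on the way to media
assert aesgcm.decrypt(nonce, ciphertext, None) == block   # decrypt on read-back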
One thing that is new in LTO-5 is the Long Term File System [LTFS] available on the TS2250 and TS2350, which allows you to treat the tape as a hierarchical file system, with files and folders, that you can drag and drop like any other file system.
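Because LTFS makes the cartridge look like any other file system, no special API is needed once it is mounted. A minimal sketch in Python, assuming a hypothetical mount point of /ltfs for the cartridge:

# Minimal sketch: with an LTFS cartridge mounted (the /ltfs mount point is a
# hypothetical example), ordinary file operations are all you need.
import shutil
from pathlib import Path

cartridge = Path("/ltfs")            # hypothetical LTFS mount point
archive = cartridge / "archive"
archive.mkdir(exist_ok=True)

# "Drag and drop" is just a copy into the mounted file system...
shutil.copy("project_footage.mov", archive / "project_footage.mov")

# ...and reading back is just listing files and folders like any other disk.
for item in archive.iterdir():
    print(item.name, item.stat().st_size)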
XIV storage system
IBM [doubles the capacity of the XIV storage system] by supporting 2TB SATA drives. A full 15-module frame can hold up to 161TB of usable capacity. The smallest 6-module system with 2TB drives can hold up to 55TB of usable capacity. At this time, all of the drives in an XIV must be the same type, so we do not yet allow intermixing 1TB and 2TB drives in the same frame. The 2TB drives are more energy efficient, with a full 15-module frame consuming on average 6.7 kVA, compared to 7.8 kVA for the 1TB drives. The performance is roughly the same, so if, for example, your application workload got 3700 IOPS per module with 1TB drives, it will get about the same 3700 IOPS per module with 2TB drives.
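As a rough sanity check on those usable-capacity figures, here is a back-of-the-envelope sketch. It assumes 12 drives per module and that XIV keeps two mirrored copies of every data partition; the gap between this simple estimate and the quoted numbers is the spare and metadata reserve.

# Rough sanity check on XIV usable capacity. Assumptions: 12 SATA drives per
# module and two mirrored copies of all data; spare capacity and metadata
# reserves account for the difference from the official usable figures.
def xiv_rough_usable_tb(modules: int, drive_tb: int) -> float:
    raw_tb = modules * 12 * drive_tb   # 12 drives per module (assumption)
    return raw_tb / 2                  # halve for the two mirrored copies

print(xiv_rough_usable_tb(15, 2))  # 180.0 -> IBM quotes 161 TB usable after reserves
print(xiv_rough_usable_tb(6, 2))   # 72.0  -> IBM quotes 55 TB usable after reserves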
TS7650 ProtecTIER Data Deduplication
IBM now supports [many-to-one virtual tape volume mirroring] on the ProtecTIER. In other words, you can have two or more locations sending data to a single ProtecTIER disaster recovery site.
N series disk system
The EXN1000 and EXN3000 can now double in capacity with 2TB SATA drives. These can be attached to the N3000 entry-level models, such as the N3400.
DS3000 disk system
The DS3200, DS3300 and DS3400, as well as their related expansion drawers, now support 2TB SATA drives. This means that a single control unit with three expansion drawers can hold up to 96TB of raw capacity (48 drives).
DS8700 disk system
The DS8700 also now supports 2TB SATA drives, for a maximum raw capacity over 2PB, as well as new 600GB Fibre Channel drives. Now that IBM offers [Easy Tier] functionality, pairing Solid State Drives with slower, energy-efficient SATA disk makes a lot of financial sense.
That's a lot of announcements! As always, feel free to dig into each of the links to learn more about each product.
Well, it's Tuesday, and that means IBM announcements!
IBM kicks EMC in the teeth with the announcement of System Storage Easy Tier, a new feature available at no additional charge on the DS8700 with the R5.1 level microcode. Barry Whyte introduces the concept in his [post this morning]. I will use SLAM (sub-LUN automatic movement) to refer generically to IBM Easy Tier and EMC FAST v2. EMC has yet to deliver FAST v2, and given that they just recently got full-LUN FAST v1 working a few months ago, it might be next year before you see EMC sub-LUN FAST v2.
Here are the key features of Easy Tier on the DS8700:
Sub-LUN Automatic Movement
IBM made it really easy to implement this on the DS8700. Today, you have "extent pools" that can be either SSD-only or HDD-only. With this new announcement, we introduce "mixed" SSD+HDD extent pools. The hottest extents are moved to SSD, and cooler extents are moved down to HDD. The support applies to both Fixed block architecture (FBA) LUNs as well as Count-Key-Data (CKD) volumes. In other words, an individual LUN or CKD volume can have some of its 1GB extents on SSD and other extents on FC or SATA disk.
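Conceptually, sub-LUN automatic movement boils down to tracking I/O "heat" per 1GB extent and promoting the hottest extents onto the SSD portion of the mixed pool. Here is a simplified sketch of that idea in Python (my own illustration of the concept, not IBM's actual Easy Tier algorithm):

# Simplified illustration of the sub-LUN tiering idea: rank 1GB extents by
# recent I/O activity and promote the hottest ones to SSD until the SSD
# portion of the mixed pool is full. This is a sketch of the concept only,
# not IBM's actual Easy Tier algorithm.
from dataclasses import dataclass

@dataclass
class Extent:
    volume: str          # owning LUN or CKD volume
    index: int           # position of this 1GB extent within the volume
    io_count: int        # I/O operations observed over the monitoring window
    tier: str = "HDD"    # current tier: "SSD" or "HDD"

def rebalance(extents, ssd_capacity_extents):
    """Promote the hottest extents to SSD; leave the rest on HDD."""
    ranked = sorted(extents, key=lambda e: e.io_count, reverse=True)
    for i, ext in enumerate(ranked):
        ext.tier = "SSD" if i < ssd_capacity_extents else "HDD"

# Example: a mixed pool with room for two 1GB extents on SSD.
pool = [Extent("LUN1", i, io) for i, io in enumerate([50, 9000, 120, 47000, 3])]
rebalance(pool, ssd_capacity_extents=2)
for ext in pool:
    print(ext.volume, ext.index, ext.io_count, "->", ext.tier)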
Entire-LUN Manual Relocation
Entire-LUN Manual Relocation (ELMR, pronounced "Elmer"?) is similar to what EMC offers now with FAST v1. With this feature, you can now relocate an entire LUN non-disruptively from any extent pool to any other extent pool. You can relocate LUNs from an SSD-only or HDD-only pool over to a new Easy Tier-managed "mixed" pool, or take a LUN out of Easy Tier management by moving it to an SSD-only or HDD-only pool. Of course, this support also applies to both Fixed block architecture (FBA) LUNs as well as Count-Key-Data (CKD) volumes.
This feature also can be used to relocate LUNs and CKD volumes from FC to SATA pools, from RAID-10 to RAID-5 pools, and so on.
Pool Mergers
What if you already have SSD-only and HDD-only pools and want to use Easy Tier? You can now merge pools to create a "mixed" pool.
SSD Mini-Packs
Before this announcement, you had to buy 16 solid-state drives at a time, called Mega-packs. Now, you can choose to buy just 8 SSD at a time, called Mini-packs. It turns out that just moving as little as 10 percent of your data from Fibre Channel disk over to Solid-State with Easy Tier can result in up to 300 to 400 percent performance improvement. IBM plans to publish formal SPC-1 benchmark results using Easy Tier-managed mixed extent pool in a few weeks.
Storage Tier Advisor Tool (STAT)
Don't have SSD yet, or not sure how awesome Easy Tier will be for your data center? The IBM Storage Tier Advisor Tool will analyze your extents and estimate how much benefit you will derive if you implement Easy Tier with various amounts of SSD. Clients with R5.1 microcode on their DS8700 can download it from the [DS8700 FTP site].
To learn more, see the [Easy Tier landing page] and the 10-page Easy Tier chapter in [DS8000 Introduction and Planning Guide]. IBM also had announcements regarding LTO-5 tape, N series and XIV storage systems, which I will get to in later posts.
They say "Great Minds think alike" and that imitation is "the sincerest form of flattery." Both of these quotes came to mind when I read fellow blogger Chuck Hollis' (EMC) excellent April 7th blog post [The 10 Big Ideas That Are Shaping IT Infrastructure Today]. Not surprisingly, some of his thoughts are similar to those I had presented two weeks ago in my March 22nd post [Cloud Computing for Accountants]. Here are two charts that caught my eye:
Telephone Operators
On page 13 of my deck, I had an old black and white photo of telephone operators, as part of a section on the history of selecting "cloud" as the iconic graphic to represent all networks. Chuck has this same graphic on his chart titled "#1 The Industrialization of IT Infrastructure".
Looks like Chuck and I use the same "stock photo" search facility!
Arms Dealers
On page 45 of my deck, I had a list of major "arms dealers" that deliver the hardware and software components needed to build Cloud Computing. Chuck has a similar chart, titled "#2 The Consolidation of the IT Industry", but with some interesting differences.
Let's look at some of the key differences:
The left-to-right order is slightly different. I chose a 1-2-4-2-1 symmetrical pattern purely for aesthetic reasons. My presentation was to a bunch of accountants, and so I was trying not to make it sound like an "Infomercial" for IBM products and offerings. My sequence is roughly chronological, in that Oracle announced its intention to acquire Sun, then Cisco, VMware and EMC announced their VCE coalition, followed closely by Cisco, VMware and NetApp announcing that they also work well together, followed by [HP extended alliance with Microsoft] on Jan 13, 2010. As the IT marketplace matures, more and more customers are looking for an IBM-like one-stop shopping experience, and certainly various "mini-mall" alliances have formed to try to compete in this space.
I had HP and Microsoft in the same column, referring only to the above-mentioned January announcement. HP is all about private cloud hardware infrastructures, but Microsoft is all about "three screens and the public cloud", so I am not sure how well this alliance will work out from a Cloud Computing perspective. This was not to imply that the other stacks don't work well with Microsoft software. They all do. Perhaps to avoid that controversy, Chuck chose to highlight HP's acquisition of EDS services instead.
I used the vendor logos in their actual colors. Notice that the colors black, blue and red occur most often. These happen to be the three most popular ballpoint pen ink colors found on the very same paper documents these computer companies are trying to eliminate. Paper-less office, anyone? Chuck chose instead to colorize each stack with his own color scheme. While blue for IBM and orange for Sun Microsystems make some sense, it is not clear if he chose green for Cisco/VMware/EMC for any particular reason. Perhaps he was trying to subtly imply that the VCE stack is more energy efficient? Or maybe the green refers to money to indicate that the VCE stack is the most expensive? Either way, I would pit IBM's server/storage/software stack up against anything of comparable price from these other stacks in any energy efficiency bake-off.
What about the Cisco/VMware/NetApp combination? All three got together to assure customers this was a viable combination. IBM is the number one reseller of VMware, and VMware runs great with IBM's N series NAS storage, so I do not dispute Cisco's motivation here. It makes sense for Cisco to two-time EMC in this manner. Why should Cisco limit itself to a single storage supplier? Et tu, VMware? Having VMware choose NetApp over its parent company EMC was a bit of a shock. No surprise that Chuck left NetApp out of his chart.
No love for Dell? I give Dell credit for their work with Virtual Desktop Images (VDI), and for embracing Ubuntu Linux for their servers. Dell's acquisitions of EqualLogic iSCSI-based disk systems and Perot Systems for services are also worth noting. Dell used to resell some of EMC's gear, but perhaps that relationship continues to fade away, as I [predicted back in 2007]. Chuck's decision to leave Dell off his chart speaks volumes to where this relationship stands, and where it is going.
Perhaps we are all in just one big ["echo chamber"], as we are all coming up with similar observations, talking to similar customers, and reviewing similar market analyst reports. I am glad, at least this time, that Chuck and I for the most part agree where the marketplace is going. We live in interesting times!