This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson)
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for the [IBM Systems Technical University] events. With over 30 years with IBM Systems, Tony is a frequent traveler, speaking to clients at events throughout the world.
Tony is author of the Inside System Storage series of books, available on Lulu.com! Order your copies today!
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
"The postings on this site solely reflect the personal views of each author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management."
(c) Copyright Tony Pearson and IBM Corporation.
All postings are written by Tony Pearson unless noted otherwise.
Tony Pearson is employed by IBM. Mentions of IBM products, solutions or services might be deemed "paid endorsements" or "celebrity endorsements" by the US Federal Trade Commission.
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Well, it's Tuesday, and you know what that means! IBM Announcements!
This week, IBM announces its latest versions of IBM [Tivoli Storage Productivity Center v4.2] and [Tivoli Storage Productivity Center for Replication on System z v4.2]. I was the original lead architect for Productivity Center back in the version 1 and version 2 days, and am proud to see my little baby has grown up to be a fine young citizen. Analysts recognize IBM Tivoli Storage Productivity Center as one of the top three best SRM products currently in the marketplace. Here are the key highlights:
Storage Resource Agent
The "Storage Resource Agent" introduced for Linux, AIX and Windows in v4.1 is a lightweight agent, written in native "C" language instead of Java, to avoid all the resources that Java consumes. In this release, it is now supported for HP-UX and Solaris, and adds file level and database level storage resource management (SRM) reporting for all five platforms.
For new customer deployments, this will eliminate all the pain of setting up a "Common Agent Manager". The Productivity Center server sends out the agent, the agent collects the data, and it can then optionally uninstall itself. In this manner, you always have the latest version of the code collecting the data. For those with Common Agent Manager already installed, you can continue running as is, or slowly transition over to the new lightweight agent methodology.
Full support for IBM XIV Storage System
IBM XIV® Storage System support has been updated to include provisioning, data path explorer and performance management reporting. Before this release, Productivity Center could only discover and provide rudimentary capacity information for XIV systems. Now you can carve LUNs and monitor XIV disk performance just like you can with most other disk systems.
Storage Area Network (SAN) configuration planning
For those who have both Productivity Center Standard Edition (SE) and Productivity Center for Replication, the SAN Config Planner is now "replication-aware" and will add LUNs to existing copy sessions, or create new copy sessions, and ensure that the devices chosen meet the appropriate criteria.
HyperSwap™ for the IBM AIX® environment
On z/OS mainframes, if you experience an outage on a storage system, Productivity Center for Replication (TPC-R) can automatically swap to the synchronous mirror copy without disruption to the operating system or application. Now, IBM has extended this awesome feature to the AIX platform for high availability in POWER-based server environments.
Detailed Session Reporting for Global Mirror
Before, TPC-R enforced the notion of only one Global Mirror master per storage system. Now, TPC-R v4.2 is capable of supporting multiple Global Mirror sessions, and provides more detailed session reporting for these environments. This can be useful if for some unknown reason the bits are not being shoveled from point A to point B, and you need to do some "problem determination".
SVC Incremental FlashCopy
Productivity Center for Replication now adds support for the "Incremental" feature of SVC FlashCopy. While FlashCopy requests are processed instantaneously, there is background processing required that can consume cycles. Incremental processing keeps track of what changed since the last FlashCopy, and minimizes this behind-the-scenes overhead.
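To illustrate the general idea of incremental copying (this is a toy sketch, not how SVC implements FlashCopy internally; the block size and class names are invented), here is a small Python example that tracks changed blocks in a bitmap and copies only those on the next request:

```python
# Illustrative sketch of incremental copy with a dirty-block bitmap.
# This is NOT the SVC implementation; names and block size are invented.

BLOCK_SIZE = 4  # bytes per block, tiny on purpose for the example

class IncrementalCopy:
    def __init__(self, source: bytearray):
        self.source = source
        self.target = bytearray(len(source))
        self.nblocks = len(source) // BLOCK_SIZE
        self.dirty = [True] * self.nblocks   # the first flash copies everything

    def write(self, block: int, data: bytes):
        """A host write to the source volume marks that block dirty."""
        off = block * BLOCK_SIZE
        self.source[off:off + BLOCK_SIZE] = data
        self.dirty[block] = True

    def flash(self) -> int:
        """Copy only blocks changed since the last flash; return how many."""
        copied = 0
        for b, is_dirty in enumerate(self.dirty):
            if is_dirty:
                off = b * BLOCK_SIZE
                self.target[off:off + BLOCK_SIZE] = self.source[off:off + BLOCK_SIZE]
                self.dirty[b] = False
                copied += 1
        return copied

vol = IncrementalCopy(bytearray(b"AAAABBBBCCCCDDDD"))
print(vol.flash())          # 4 blocks copied on the first flash
vol.write(2, b"XXXX")
print(vol.flash())          # only 1 block copied the second time
```

The savings come from the second call: only the changed block is moved, which is the behind-the-scenes overhead the Incremental feature minimizes.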
Integrated Distributed Disaster Recovery manager
IBM Tivoli System Automation Application Manager [TSA-AM] can now integrate with TPC-R to provide application-aware disaster recovery capability. This can coordinate between IBM Tivoli System Automation for Multiplatforms [TSA-for-MP], IBM HACMP/PowerHA, as well as other clustering products like Microsoft Cluster Services (MSCS) and Veritas Cluster Services on Solaris. When TSA-AM detects an outage, it can notify Geographically Dispersed Parallel Sysplex Distributed Cluster Management (GDPS-DCM) to take action. This integration was actually completed with TPC v4.1 back in April, but got buried deep inside our big storage launch, so I bring it up again as a gentle reminder that IBM offers the best end-to-end management on the planet.
At last month's Storage University, I presented an overview of [Tivoli Storage Productivity Center v4.1]. Many of the questions were along the lines of "When will TPC do xyz?" and all I could answer was "Soon" since I knew they would be delivered with this TPC v4.2 release, but I couldn't provide any more details than that at the time.
Mark your calendars! If you live or work anywhere near Australia or New Zealand, I will be presenting in a 7-city series in both countries. Here is my schedule:
I am just one of the speakers. At each location, we will have the local IBM team and IBM clients giving testimonials. All the speakers will be available afterward for Q&A. It's shaping up to be an exciting series of events!
Well, it's that Back-To-School time again! Mo's thirteen-year-old reluctantly enters the eighth grade, still upset the summer ended so abruptly. Richard's nephew returns to the University of Arizona for another year. Natalie has chosen to move to Phoenix and pursue a post-grad degree at Arizona State University. They all have two things in common: they all want a new computer, and they are all on a budget.
Fellow blogger Bob Sutor (IBM) pointed me to an excellent article on [How to Build Your Own $200 PC], which reminded me of the [XS server I built] for my 2008 Google Summer of Code project with the One Laptop per Child organization. Now that the project is over, I have upgraded it to Ubuntu Desktop 10.04 LTS, known as Lucid Lynx. Building your own PC with your student is a great learning experience in itself. Of course, this is just the computer itself; you still need to buy the keyboard, mouse and video monitor separately if you don't already have them.
If you are not interested in building a PC from scratch, consider taking an old Windows-based PC and installing Linux to give it new life. Many older PCs don't have enough processor or memory to run Windows Vista or the latest Windows 7, but they will all run Linux.
(If you think your old system has resale value, try checking out the ["trade-in estimator"] at the BestBuy website to straighten out your misperception. However, if you do decide to sell your system, consider replacing the disk drive with a fresh empty one, or wipe the old drive clean with one of the many free Linux utilities. Jason Striegel on Engadget has a nice [HOWTO Erase your old hard disk drive] article. If you don't have your original manufacturer's Windows installation discs, installing Linux instead may help keep you out of legal hot water.)
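For those comfortable doing it themselves, a single pass of zeros is often enough for a consumer drive being resold. Here is a minimal Python sketch of that idea, assuming a Linux system, root access, and that /dev/sdX is a placeholder for the disk you actually intend to erase (this destroys all data on it; standard tools like dd or shred do the same job):

```python
# Minimal single-pass zero wipe sketch. Assumes Linux, root access, and that
# /dev/sdX is a placeholder for the disk you really intend to erase -- this
# destroys all data on that device. dd or shred accomplish the same thing.

import os
import sys

CHUNK = 1024 * 1024  # write 1 MiB of zeros at a time

def zero_wipe(device: str) -> None:
    with open(device, "rb+") as disk:
        size = disk.seek(0, os.SEEK_END)   # block devices report their size here
        disk.seek(0)
        remaining = size
        while remaining > 0:
            n = min(CHUNK, remaining)
            disk.write(bytes(n))           # write a chunk of zero bytes
            remaining -= n
    print(f"Wrote {size} bytes of zeros to {device}")

if __name__ == "__main__":
    zero_wipe(sys.argv[1])                 # e.g. python wipe.py /dev/sdX
```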
Depending on what your school projects require, you want to make sure that you can use a printer or scanner with your Linux system. Don't buy a printer unless it is supported by Linux. The Linux Foundation maintains a [Printer Compatibility database]. Printing was one of the first things I got working for my Linux-based OLPC laptop, which I documented in my December 2007 post [Printing on XO Laptop with CUPS and LPR] and which got a surprising following over at [OLPC News].
To reduce paper, many schools are having students email their assignments, or use Cloud Computing services like Google Docs. Both the University of Arizona and Arizona State University use Google Docs, and the students I have talked with love the idea. Whether they use a Mac, Linux or Windows PC, all students can access Google Docs through their browser. An alternative to Google Docs is Windows Live Skydrive, which has the option to upload and edit the latest Office format documents from the Firefox browser on Linux. Both offer you the option to upload GBs of files, which could be helpful for transferring data from an old PC to a new one.
Lastly, there are many free video games for Linux, for when you need to take a break from all that studying. Ever since IBM's [36-page Global Innovation Outlook 2.0] study showed that playing video games made you a better business leader, I have been encouraging all students that I tutor or mentor that playing games is a more valuable use of their time than watching television. IBM considers video games the [future of learning]. Some even argue that [Violent Video Games are Good for Kids]. It is no wonder that IBM provides the technology that runs all the major game platforms, including Microsoft Xbox 360, Nintendo Wii and Sony PlayStation.
(FTC disclosure: I work for IBM. IBM has working relationships with Apple, Google, Microsoft, Nintendo and Sony. I use both Google Docs and Microsoft Live Skydrive for personal use, and base my recommendations purely on my own experience. I own stock in IBM, Google and Apple. I have friends and family that work at Microsoft. I own an Apple Mac Mini and Sony PlayStation. I was a Linux developer earlier in my IBM career. IBM considers Linux a strategic operating system for both personal and professional use. IBM has selected Firefox as its standard browser internally for all employees. I run Linux both at home and at the office. I graduated from the University of Arizona, and have friends who either work or take classes there, as well as at Arizona State University.)
Linux skills are marketable and increasingly in demand. Linux is used in everything from cellphones to mainframes, as well as many IBM storage devices such as the IBM SAN Volume Controller, XIV and ProtecTIER data deduplication solution. In addition to writing term papers, spreadsheets and presentations with OpenOffice, your Linux PC can help you learn programming skills, web design, and database administration.
To all the students in my life, I wish you all good things in the upcoming school year!
Continuing my coverage of the annual [2010 System Storage Technical University], I attended some sessions from the System x and Federal track side of this conference.
Grid, SOA and Cloud Computing
Bill Bauman, IBM System x Field Technical Support Specialist and System x University celebrity, presented the differences between Grid, SOA and Cloud Computing. I thought this was an odd combination to compare and contrast, but his presentation was well attended.
Grid - this is when two or more independently owned and managed computers are brought together to solve a problem. Some research facilities do this. IBM helped four hospitals connect their computers together into a grid to help analyze breast cancer. IBM also supports the [World Community Grid] which allows your personal computer to be connected to the grid and help process calculations.
SOA - SOA, which stands for Service Oriented Architecture, is an approach to building business applications as a combination of loosely-coupled black-box components orchestrated to deliver a well-defined level of service by linking together business processes. I often explain SOA as the business version of Web 2.0. You can download a free copy of the eBook "SOA for Dummies" at the [IBM Smart SOA] landing page.
Cloud - A Cloud is a dynamic, scalable, expandable, and completely contractible architecture. It may consist of multiple, disparate, on-premise and off-premise hardware and virtualized platforms hosting legacy, fully installed, stateless, or virtualized instances of operating systems and application workloads.
Tom Vezina, IBM Advanced Technical Sales Specialist, presented "Chaos to Cloud Computing". Survey results show that roughly 70 percent of cloud spend will be for private clouds, and 30 percent for public, hybrid or community clouds. Of the key motivations for public cloud, 77 percent of respondents cited reducing costs, 72 percent cited time to value, and 50 percent cited improving reliability.
Tom ran over 500 "server utilization" studies for x86 deployments during the past eight years. Of these, the worst was 0.52 percent CPU utilization, the best was 13.4 percent, and the average was 6.8 percent. When IBM mentions that 85 percent of server capacity is idle, it is mostly due to x86 servers. At this rate, it seems easy to put five to 20 guest images onto a machine. However, many companies encounter "VM stall" where they get stuck after only 25 percent of their operating system images virtualized.
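As a rough sanity check on those numbers, here is a quick back-of-the-envelope estimate in Python, using the 6.8 percent average above and an assumed 60 percent target utilization on the consolidated host (illustrative only, not a sizing methodology):

```python
# Back-of-the-envelope consolidation estimate using the 6.8 percent average
# CPU utilization above and an assumed 60 percent target on the consolidated
# host. Illustrative only -- memory and I/O usually become the limit first.

avg_utilization = 0.068     # measured average across the 500+ studies
target_utilization = 0.60   # leave CPU headroom on the virtualization host

guests_per_host = int(target_utilization / avg_utilization)
print(f"Roughly {guests_per_host} guests per host before CPU becomes the limit")
# -> Roughly 8 guests per host, consistent with the 5-to-20 range above
```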
He feels the problem is that most Physical-to-Virtual (P2V) migrations are manual efforts. There are tools available, like Novell [PlateSpin Recon], to help automate and reduce the total number of hours spent per migration.
System x KVM Solutions
Boy, I walked into this one. Many of IBM's cloud offerings are based on the Linux hypervisor called Kernel-based Virtual Machine [KVM] instead of VMware or Microsoft Hyper-V. However, this session was about the "other KVM": keyboard, video and mouse switches, which thankfully, IBM has renamed to Console Managers to avoid confusion. Presenters Ben Hilmus (IBM) and Steve Hahn (Avocent) presented IBM's line of Local Console Managers (LCM) and Global Console Managers (GCM) products.
LCMs are the traditional KVM switches that people are familiar with. A single keyboard, video and mouse can select among hundreds of servers to perform maintenance or check on status. A GCM adds KVM-over-IP capabilities, which means that now you can access selected systems over Ethernet from a laptop or personal computer. Both LCM and GCM allow for two-level tiering, which means that you can have an LCM in each rack, and an LCM or GCM that points to each rack, greatly increasing the number of servers that can be managed from a single pane of glass.
Many servers have a "service processor" to manage the rest of the machine. IBM RSA II, HP iLO, and Dell DRAC4 are some examples. These allow you to turn selected servers on and off. IBM BladeCenter offers a Management Module that allows the chassis to be connected to a Console Manager and select a specific blade server inside. These can also be used with VMware viewer, Virtual Network Computing (VNC), or Remote Desktop Protocol (RDP).
IBM's offerings are unique in that you can have an optical CD/DVD drive or USB external storage attached at the LCM or GCM, and make it look like the storage is attached to the selected server. This can be used to install or upgrade software, transfer log files, and so on. Another great use, and apparently the motivation for having this session in the "Federal Track", is that the USB port can be used to attach a reader for a smart card, known as a Common Access Card [CAC], used by various government agencies. This provides two-factor authentication [TFA]. For example, to log into the system, you enter your password (something you know) and swipe your employee badge smart card (something you have). The combination is validated at the selected server to provide access.
I find it amusing that server people limit themselves to server sessions, and storage people to storage sessions. Sometimes, you have to step "outside your comfort zone" and learn something new, something different. Open your eyes and look around a bit. You might just be surprised what you find.
(FTC note: I work for IBM. IBM considers Novell a strategic Linux partner. Novell did not provide me a copy of PlateSpin Recon, I have no experience using it, and I mention it only in the context of the presentation made. IBM resells Avocent solutions, and we use LCM gear in the Tucson Executive Briefing Center.)
Continuing my coverage of the annual [2010 System Storage Technical University], I participated in the storage free-for-all, a long-time tradition that started at the SHARE user group conference and carried forward to other IT conferences. The free-for-all is a Q&A panel of experts that allows anyone to ask any question. These are sometimes called "Birds of a Feather" (BOF). Last year, they were called "Meet the Experts", one for mainframe storage, and the other for storage attached to distributed systems. This year, we had two: one focused on Tivoli Storage software, and the second to cover storage hardware. This post provides a recap of the Storage Hardware free-for-all.
The emcee for the event was Scott Drummond. The other experts on the panel included Dan Thompson, Carlos Pratt, Jack Arnold, Jim Blue, Scott Schroder, Ed Baker, Mike Wood, Steve Branch, Randy Arseneau, Tony Abete, Jim Fisher, Scott Wein, Rob Wilson, Jason Auvenshine, Dave Canan, Al Watson, and myself, yours truly, Tony Pearson.
What can I do to improve performance on my DS8100 disk system? It is running a mix of sequential batch processing and my medical application (EPIC). I have 16GB of cache and everything is formatted as RAID-5.
We are familiar with EPIC. It does not "play well with others", so IBM recommends you consider dedicating resources for just the EPIC data. Also consider RAID-10 instead for the EPIC data.
How do I evaluate IBM storage solutions with regard to [PCI-DSS] requirements?
Well, we are not lawyers, and some aspects of the PCI-DSS requirements are outside the storage realm. In March 2010, IBM was named ["Best Security Company"] by SC Magazine, and we have secure storage solutions for both disk and tape systems. IBM DS8000 and DS5000 series offer Full Disk Encryption (FDE) disk drives. IBM LTO-4/LTO-5 and TS1120/TS1130 tape drives meet FIPS requirements for encryption. We will provide you contact information on an encryption expert to address the other parts of your PCI-DSS specific concerns.
My telco will only offer FCIP routing for long-distance disk replication, but my CIO wants to use Fibre Channel routing over CWDM, what do I do?
IBM XIV, DS8000 and DS5000 all support FC-based long distance replication across CWDM. However, if you don't have dark fiber, and your telco won't provide this option, you may need to re-negotiate your options.
My DS4800 sometimes reboots repeatedly; what should I do?
This was a known problem with microcode level 760.28, triggered when it detects a failed drive. You need to replace the drive and upgrade to the latest microcode.
Should I use VMware snapshots or DS5000 FlashCopy?
VMware snapshots are not free; you need to upgrade to the appropriate level of VMware to get this function, and it would be limited to your VMware data only. The advantage of DS5000 FlashCopy is that it applies to all of your operating systems and hypervisors in use, and eliminates the consumption of VMware overhead. It provides crash-consistent copies of your data. If your DS5000 disk system is dedicated to VMware, then you may want to compare the costs versus the trade-offs.
Any truth to the rumor that Fibre Channel protocol will be replaced by SAS?
SAS has some definite cost advantages, but is limited to 8 meters in length. Therefore, you will see more and more usage of SAS within storage devices, but outside the box, there will continue to be Fibre Channel, including FCP, FICON and FCoE. The Fibre Channel Industry Alliance [FCIA] has a healthy roadmap for 16 Gbps support and 20 Gbps interswitch link (ISL) connections.
What about Fibre Channel drives, are these going away?
We need to differentiate the connector from the drive itself. Manufacturers are able to produce 10K and 15K RPM drives with SAS instead of FC connectors. While many have suggested that a "Flash-and-Stash" approach of SSD+SATA would eliminate the need for high-speed drives, IBM predicts that there just won't be enough SSD produced to meet the performance needs of our clients over the next five years, so 15K RPM drives, more likely with SAS instead of FC connectors, will continue to be deployed for the next five years.
We'd like more advanced hands-on labs, and to have the certification exams be more product-specific rather than exams for midrange disk or enterprise disk that are too wide-ranging.
Ok, we will take that feedback to the conference organizers.
IBM Tivoli Storage Manager is focused on disaster recovery from tape; how do I incorporate remote disk replication?
This is IBM's Unified Recovery Management, based on the seven tiers of disaster recovery established in 1983 at the GUIDE conference. You can combine local recovery with FastBack, data center server recovery with TSM and FlashCopy Manager, and combine that with IBM Tivoli Storage Productivity Center for Replication (TPC-R), GDOC and GDPS to manage disk replication across business continuity/disaster recovery (BC/DR) locations.
IBM Tivoli Storage Productivity Center for Replication only manages the LUNs, what about server failover and mapping the new servers to the replicated LUNs?
There are seven tiers of disaster recovery. The sixth tier is to manage the storage replication only, as TPC-R does. The seventh tier adds full server and network failover. For that you need something like IBM GDPS or GDOC that adds this capability.
All of my other vendor kit has bold advertising, prominent lettering, neon lights, bright colors, but our IBM kit is just black, often not even identifying the specific make or model, just "IBM" or "IBM System Storage".
IBM has opted for simplified packaging and our sleek, signature "raven black" color, and passes these savings on to you.
Bring back the SHARK fins!
We will bring that feedback to our development team. ("Shark" was the codename for IBM's ESS 800 disk model. Fiberglass "fins" were made as promotional items and placed on top of ESS 800 disk systems to help "identify them" on the data center floor. Unfortunately, professional golfer [Greg Norman] complained, so IBM discontinued the use of the codename back in 2005.)
Where is Infiniband?
Like SAS, Infiniband had limited distance, about 10 to 15 meters, which proved unusable for server-to-storage network connections across data center floorspace. However, there are now 150 meter optical cables available, and you will find Infiniband used in server-to-server communications and inside storage systems. IBM SONAS uses Infiniband today internally. IBM DCS9900 offers Infiniband host-attachment for HPC customers.
We need midrange storage for our mainframe please?
In addition to the IBM System Storage DS8000 series, the IBM SAN Volume Controller and IBM XIV are able to connect to Linux on System z mainframes.
We need "Do's and Don'ts" on which software to run with which hardware.
IBM [Redbooks] are a good source for that, and we prioritize our efforts based on all those cards and letters you send the IBM Redbooks team.
The new TPC v4 reporting tool requires a bit of a learning curve.
The new reporting tool, based on Eclipse's Business Intelligence Reporting Tool [BIRT], is now standardized across most of the Tivoli portfolio. Check out the [Tivoli Common Reporting] community page for assistance.
An unfortunate side-effect of using server virtualization like VMware is that it worsens management and backup issues. We now have many guests on each blade server.
IBM is the leading reseller of VMware, and understands that VMware adds an extra layer of complexity. Thankfully, IBM Tivoli Storage Manager backups use a lightweight agent. IBM [System Director VMcontrol] can help you manage a variety of hypervisor environments.
This was a great interactive session. I am glad everyone stayed late Thursday evening to participate in this discussion.
Continuing coverage of my week in Washington DC for the annual [2010 System Storage Technical University], I attended several XIV sessions throughout the week. There were many XIV sessions. I could not attend all of them. Jack Arnold, one of my colleagues at the IBM Tucson Executive Briefing Center, often presents XIV to clients and Business Partners. He covered all the basics of XIV architecture, configuration, and features like snapshots and migration. Carlos Lizarralde presented "Solving VMware Challenges with XIV". Ola Mayer presented "XIV Active Data Migration and Disaster Recovery".
Here is my quick recap of two in particular that I attended:
XIV Client Success Stories - Randy Arseneau
Randy reported that IBM had its best quarter ever for the XIV, reflecting an unexpected surge shortly after my blog post debunking the DDF myth last April. He presented successful case studies of client deployments. Many followed a familiar pattern. First, the client would purchase only one or two XIV units. Second, the client would beat the crap out of them, putting them under all kinds of stress from different workloads. Third, the client would discover that the XIV is really as amazing as IBM and IBM Business Partners have told them. Finally, in the fourth phase, the client would deploy the XIV for mission-critical production applications.
A large US bank holding company managed to get 5.3 GB/sec from a pair of XIV boxes for their analytics environment. They now have 14 XIV boxes deployed in mission-critical applications.
A large equipment manufacturer compared the offerings among seven different storage vendors, and IBM XIV came out the winner. They now have 11 XIV boxes in production and another four boxes for development/test. They have moved their entire VMware infrastructure to IBM XIV, running over 12,000 guest instances.
A financial services company bought their first XIV in early 2009 and now has 34 XIV units in production attached to a variety of Windows, Solaris, AIX and Linux servers and VMware hosts. Their entire Microsoft Exchange environment was moved from HP and EMC disk to IBM XIV, and experienced a noticeable performance improvement.
When a University health system replaced two competitive disk systems with XIV, their data center temperature dropped from 74 to 68 degrees Fahrenheit. In general, XIV systems are 20 to 30 percent more energy efficient per usable TB than traditional disk systems.
A service provider that had used EMC disk systems for over 10 years evaluated the IBM XIV versus upgrading to EMC V-Max. The three year total cost of ownership (TCO) of EMC's V-Max was $7 Million US dollars higher, so EMC counter-proposed CLARiiON CX4 instead. But, in the end, IBM XIV proved to be the better fit, and now the customer is happy having made the switch.
The manager of an information communications technology service provider was impressed that the XIV was up and running in just a couple of days. They now have over two dozen XIV systems.
Another XIV client had lost all of their Computer Room Air Conditioning (CRAC) units for several hours. The data center heated up to 126 degrees Fahrenheit, but the customer did not lose any data on either of their two XIV boxes, which continued to run in these extreme conditions.
Optimizing XIV Performance - Brian Cormody
This session was an update from the [one presented last year] by Izhar Sharon. Brian presented various best practices for optimizing the performance when using specific application workloads with IBM XIV disk systems.
Oracle ASM: Many people allocate lots of small LUNs, because this made sense a long time ago when all you had was just a bunch of disks (JBOD). In fact, many of the practices that DBAs use to configure databases across disks become unnecessary with XIV. With XIV, you are better off allocating a small number of very large LUNs. The best option was a 1-volume ASM pool with an 8MB AU stripe. A single LUN can contain multiple Oracle databases. A single LUN can be used to store all of the logs.
VMware: Over 70 percent of XIV customers use it with VMware. For VMFS, IBM recommends allocating a small number of large LUNs. You can specify the maximum LUN size of 2181 GB. Do not use VMware's internal LUN extension capability; IBM XIV already has thin provisioning, and it works better to let the XIV do this for you. XIV snapshots provide crash-consistent copies without all the overhead of VMware snapshots.
SAP: For planning purposes, the "SAPS" unit equates roughly to 0.4 IOPS for ERP OLTP workloads, and 0.6 IOPS for BW/BI OLAP workloads. In general, an XIV can deliver 25,000 to 30,000 IOPS at 10-15 msec response time, and 60,000 IOPS at 30 msec response time. With SAP, our clients have managed to get 60,000 IOPS at less than 15 msec.
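To show how those rules of thumb get used in planning, here is a small Python sketch; the 0.4 and 0.6 factors come from the session, while the SAPS figures in the example are made-up inputs:

```python
# Rough I/O planning sketch using the rules of thumb from the session:
# about 0.4 IOPS per SAPS for ERP/OLTP and 0.6 IOPS per SAPS for BW/BI OLAP.
# The SAPS figures below are made-up inputs, purely for illustration.

def saps_to_iops(saps: float, workload: str = "oltp") -> float:
    factor = {"oltp": 0.4, "olap": 0.6}[workload]
    return saps * factor

erp_iops = saps_to_iops(40_000, "oltp")   # hypothetical 40,000 SAPS ERP system
bw_iops = saps_to_iops(20_000, "olap")    # hypothetical 20,000 SAPS BW system

print(f"ERP estimate: {erp_iops:,.0f} IOPS")   # 16,000 IOPS
print(f"BW/BI estimate: {bw_iops:,.0f} IOPS")  # 12,000 IOPS
# Compare the combined total against the 25,000-60,000 IOPS envelope above.
```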
Microsoft Exchange: Even my friends in Redmond could not believe how awesome XIV was during ESRP testing. Five Exchange 2010 servers connected to a pair of XIV boxes using the new 2TB drawers managed 40,000 mailboxes at the high profile (0.15 IOPS per mailbox). Another client found that four XIV boxes (720 drives) were able to handle 60,000 mailboxes (5GB max), which would have taken over 4000 drives if internal disk drives were used instead. Who said SANs are obsolete for MS Exchange?
Asynchronous Replication: IBM now has an "Async Calculator" to model and help design an XIV async replication solution. In general, dark fiber works best, and MPLS clouds had the worst results. The latest 10.2.2 microcode for the IBM XIV can now handle 10 Mbps at less than 250 msec roundtrip. During the initial sync between locations, IBM recommends setting the "schedule=never" to consume as much bandwidth as possible. If you don't trust the bandwidth measurements your telco provider is reporting, consider testing the bandwidth yourself with [iPerf] open source tool.
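If you are wondering what a given WAN link means for that initial sync, here is a rough Python estimate; the 10 Mbps figure comes from the session, while the 2 TB capacity and the 80 percent usable-bandwidth assumption are mine:

```python
# Rough estimate of the initial async sync time for a given amount of data
# and WAN bandwidth. The 10 Mbps link speed comes from the session; the 2 TB
# capacity and the 80 percent usable-bandwidth assumption are invented.

def initial_sync_hours(capacity_tb: float, bandwidth_mbps: float,
                       efficiency: float = 0.8) -> float:
    """Hours needed to push capacity_tb over the link at the given efficiency."""
    bits = capacity_tb * 1024**4 * 8                       # TB -> bits
    seconds = bits / (bandwidth_mbps * 1_000_000 * efficiency)
    return seconds / 3600

print(f"{initial_sync_hours(2, 10):.0f} hours at 10 Mbps")    # about 611 hours
print(f"{initial_sync_hours(2, 100):.0f} hours at 100 Mbps")  # about 61 hours
```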
Several members of the XIV team thanked me for my April 5th post [Double Drive Failure Debunked: XIV Two Years Later]. Since April 5th, IBM has sold more XIV units this quarter than any prior quarters. I am glad to have helped!
IBM Tivoli Storage Productivity Center version 4.1 Overview
In conferences like these, there are two types of product-level presentations. An "Overview" explains how products work today to those who are not familiar with it. An "Update" explains what's new in this version of the product for those who are already familiar with previous releases. This session was an Overview of [Tivoli Storage Productivity Center], plus some information of IBM's Storage Enterprise Resource Planner [SERP] from IBM's acquisition of NovusCG.
I was one of the original lead architects of Productivity Center many years ago, and was able to share many personal experiences about its evolution in development and in the field at client facilities. Analysts have repeatedly rated IBM Productivity Center as one of the top Storage Resource Management (SRM) tools available in the marketplace.
I would like to thank my colleague Harley Puckett for his assistance in putting the finishing touches on this presentation. This was my best attended session of the week, indicating there is a lot of interest in this product in particular, and managing a heterogeneous mix of storage devices in general. To hear a quick video introduction, see Harley Puckett's presentation at the [IBM Virtual Briefing Center].
Information Lifecycle Management (ILM) Overview
Can you believe I have been doing ILM since 1986? I was the lead architect for DFSMS, which provides ILM support for z/OS mainframes. In 2003-2005, I spent 18 months in the field performing ILM assessments for clients, and now there are dozens of IBM practitioners in Global Services and Lab Services that do this full time. This is a topic I cover frequently at the IBM Executive Briefing Center [EBC], because it addresses several top business challenges:
Reducing costs and simplifying management
Improving efficiency of personnel and application workloads
Managing risks and regulatory compliance
IBM has a solution based on five "entry points". The advantage of this approach is that it allows our consultants to craft the right solution to meet the specific requirements of each client situation. These entry points are:
Tiered Information Infrastructure - we don't limit ourselves to just "Tiered Storage", as storage is only part of a complete [information infrastructure] of servers, networks and storage
Storage Optimization and Virtualization - including virtual disk, virtual tape and virtual file solutions
Process Enhancement and Automation - an important part of ILM are the policies and procedures, such as IT Infrastructure Library [ITIL] best practices
Archive and Retention - space management and data retention solutions for email, database and file systems
When I presented ILM last year, I did not get many attendees. This time I had more; perhaps the recent announcement of ILM and HSM support in IBM SONAS and our April announcement of IBM DS8700 Easy Tier have renewed interest in this area.
I have safely returned to Tucson, but I still have a lot of notes from the other sessions I attended, so I will cover them this week.
Continuing my week in Washington DC for the annual [2010 System Storage Technical University], I presented a session on Storage for the Green Data Center, and attended a System x session on Greening the Data Center. Since they were related, I thought I would cover both in this post.
Storage for the Green Data Center
I presented this topic in four general categories:
Drivers and Metrics - I explained the three key drivers for consuming less energy, and the two key metrics: Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCiE); see the short sketch after this list.
Storage Technologies - I compared the four key storage media types: Solid State Drives (SSD), high-speed (15K RPM) FC and SAS hard disk, slower (7200 RPM) SATA disk, and tape. I had comparison slides that showed how IBM disk is more energy efficient than the competition; for example, the DS8700 consumes less energy than EMC Symmetrix when compared with the exact same number and type of physical drives. Likewise, IBM LTO-5 and TS1130 tape drives consume less energy than comparable HP or Oracle/Sun tape drives.
Integrated Systems - IBM combines multiple storage tiers in a set of integrated systems managed by smart software. For example, the IBM DS8700 offers [Easy Tier] to provide smart data placement and movement across Solid-State Drives and spinning disk. I also covered several blended disk-and-tape solutions, such as the Information Archive and SONAS.
Actions and Next Steps - I wrapped up the talk with actions that data center managers can take to be more energy efficient, from deploying the IBM Rear Door Heat Exchanger to improving the management of their data.
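For readers who have not seen the two metrics before, here is a minimal Python sketch of how they are calculated; the power readings are invented for illustration:

```python
# Minimal sketch of the two metrics. The sample power readings are invented;
# plug in your own facility and IT equipment figures.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center Infrastructure Efficiency: the inverse of PUE, as a percent."""
    return 100.0 * it_equipment_kw / total_facility_kw

total_kw, it_kw = 1300.0, 1000.0                 # hypothetical readings
print(f"PUE  = {pue(total_kw, it_kw):.2f}")      # 1.30
print(f"DCiE = {dcie(total_kw, it_kw):.0f}%")    # 77%
```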
Greening of the Data Center
Janet Beaver, IBM Senior Manager of Americas Group facilities for Infrastructure and Facilities, presented on IBM's success in becoming more energy efficient. The price of electricity has gone up 10 percent per year, and in some locations, 30 percent. For every 1 Watt used by IT equipment, there are an additional 27 Watts for power, cooling and other uses to keep the IT equipment comfortable. At IBM, data centers represent only 6 percent of total floor space, but 45 percent of all energy consumption. Janet covered two specific data centers, Boulder and Raleigh.
At Boulder, IBM keeps 48 hours of reserve gasoline (to generate electricity in case of an outage from the power company) and 48 hours of chilled water. Many power outages are less than 10 minutes, which can easily be handled by the UPS systems. At least 25 percent of the Computer Room Air Conditioners (CRAC) are also on UPS, so that there is some cooling during those minutes, within the ASHRAE guidelines of 72-80 degrees Fahrenheit. Since gasoline gets stale, IBM runs the generators once a month, which serves as a monthly test of the system, and clears out the lines to make room for fresh fuel.
The IBM Boulder data center is the largest in the company: 300,000 square feet (the equivalent of five football fields)! Because of its location in Colorado, IBM enjoys "free cooling" using outside air 63 percent of the year, resulting in a PUE rating of 1.3. Electricity is only 4.5 US cents per kWh. The center also uses 1 million kWh per year of wind energy.
The Raleigh data center is only 100,000 square feet, with a PUE rating of 1.4. The Raleigh area enjoys 44 percent "free cooling" and electricity costs of 5.7 US cents per kWh. The Leadership in Energy and Environmental Design [LEED] program has been updated to certify data centers. The IBM Boulder data center has achieved LEED Silver certification, and the IBM Raleigh data center has LEED Gold certification.
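To put those PUE ratings and utility rates in perspective, here is a rough Python calculation of the annual electricity cost per kW of IT load at each site, assuming 24x7 operation and ignoring everything else:

```python
# Rough annual electricity cost per kW of IT load at each site, using the PUE
# ratings and utility rates quoted above; assumes 24x7 operation, nothing else.

HOURS_PER_YEAR = 8760

def annual_cost_per_it_kw(pue: float, price_per_kwh: float) -> float:
    return pue * HOURS_PER_YEAR * price_per_kwh

print(f"Boulder: ${annual_cost_per_it_kw(1.3, 0.045):,.0f} per IT kW per year")  # ~$512
print(f"Raleigh: ${annual_cost_per_it_kw(1.4, 0.057):,.0f} per IT kW per year")  # ~$699
```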
Free cooling, electricity costs, and disaster susceptibility are just three of the 25 criteria IBM uses to locate its data centers. In addition to the 7 data centers it manages for its own operations, and 5 data centers for web hosting, IBM manages over 400 data centers for other clients.
It seems that Green IT initiatives are more important to the storage-oriented attendees than the x86-oriented folks. I suspect that is because many System x servers are deployed in small and medium businesses that do not have data centers, per se.
Continuing my week in Washington DC for the annual [2010 System Storage Technical University], here is my quick recap of the keynote sessions presented Monday morning. Marlin Maddy, Worldwide Technical Events Executive for IBM Systems Lab Services and Training, served as emcee.
Jim Northington
Jim Northington, IBM System x Business Line Executive, covered the IT industry's "love/hate relationship" with the x86 platform. Many of the physical limitations that were previously a pain on this platform are now addressed through a combination of IBM's new, innovative eX5 architecture and virtualization technologies.
Jim also presented the [IBM CloudBurst] solution. IBM CloudBurst is one of the many "Integrated Systems" designed to help simplify deployment. Based on IBM BladeCenter, the IBM CloudBurst is basically a Private Cloud rack for those that are ready to deploy in their own data center.
Jim feels that server virtualization on x86 platforms is still in its infancy. IBM calls it the 70/30 rule: 70 percent of x86 workloads are running virtualized on 30 percent of the physical servers.
Maria Azua
Maria Azua, IBM Vice President of Cloud Computing Enablement, presented on Cloud Computing. Technology is being adopted at faster rates. It took 40 years for radio to get 60 million listeners, 20 years for 60 million television viewers, 3 years to get 60 million surfers on the Internet, but it only took 4 months to get 60 million players on Farmville!
Maria covered various aspects of Cloud Computing: virtualization images, service catalog, provisioning elasticity, management and billing services, and virtual networks. With Cloud Computing, the combination of virtualization technologies, standardization, and automation can reduce costs and improve flexibility.
We've seen this happen before. Telcos transitioned from human operators to automated digital switches. Manufacturers went from having small teams of craftsmen to assembly lines of robots. Banks went from long lines of bank tellers to short lines at the ATM.
Maria said that companies are faced with three practical choices:
Do-it-yourself: buy the servers, storage and switches, and connect everything together.
Purchase pre-installed "integrated systems" to simplify deployment.
Subscribe to Cloud Computing, allowing a service provider to do all this for you.
In countries where network access is not ubiquitous, IBM has developed tools for the cloud that work in "offline" mode. IBM has also developed or modified tools to run better in the cloud. Launching a compute instance from the service catalog is so easy that your 5-year-old child could do it!
Want to see Cloud Computing in action? Check out [Innovation.ed.gov], which is run in the IBM cloud, for the US Department of Education's website to foster innovation.
Whether you adopt public, private or a hybrid cloud computing approach, Maria suggests you take time to plan, test your applications for standardization, examine all risks, and explore new workloads that might be good candidates. Otherwise, moving to the cloud might just mean "More mess for less". Maria provided a list of applications that IBM considers good fit for Cloud Computing today.
I heard several audience members indicate that this is the first time someone finally explained Cloud Computing to them in a way that made sense!
Continuing my week in Washington DC for the annual [2010 System Storage Technical University], here is my quick recap of the keynote sessions presented Monday morning. Marlin Maddy, Worldwide Technical Events Executive for IBM Systems Lab Services and Training, served as emcee.
Roland Hagan
Roland Hagan, IBM Vice President for the IBM System x server platform, presented on how IBM is redefining the x86 computing experience. More than 50 percent of all servers are x86 based. These x86 servers are easy to acquire, enjoy a large application base, and can take advantage of a readily available skilled workforce for administration. The problem is that 85 percent of x86 processing power remains idle, energy costs are 8 times what they were 12 years ago, and management costs are now 70 percent of the IT budget.
IBM has the number one market share for scalable x86 servers. Roland covered the newly announced eX5 architecture that has been deployed in both rack-optimized models and IBM BladeCenter blade servers. These can offer twice the memory capacity of competitive offerings, which is important for today's server virtualization, database and analytics workloads. This includes 40 and 80 DIMM models of blades, and 64 to 96 DIMM models of rack-optimized systems. IBM also announced eXFlash, internal Solid State Drives accessible at bus speeds. FlexNode allows a 4-node system to dynamically change into 2 separate 2-node systems.
By 2013, analysts estimate that 69 percent of x86 workloads will be virtualized, and that 22 percent of servers will be running some form of hypervisor software. By 2015, this grows to 78 percent of x86 workloads being virtualized, and 29 percent of servers running hypervisor.
Doug Balog
Doug Balog, IBM Vice President and Disk Storage Business Line Executive, presented how the growth of information results in a "perfect storm" for the storage industry. Storage admins are focused on managing storage growth and the related costs and complexity, proper forecasting and capacity planning, and backup administration. IBM's strategy is to help clients in the following areas:
Storage Efficiency - getting the most use out of the resources you invest
Service Delivery - ensuring that information gets to the right people at the right time, simplify reporting and provisioning
Data Protection - protecting data against unethical tampering, unauthorized access, and unexpected loss and corruption
He wrapped up his talk covering the success of DS8700 and XIV. In fact, 60 percent of XIV sales are to EMC customers. The TCO of an XIV is less than half the TCO of a comparable EMC VMAX disk system.
Dave McQueeney
Dave McQueeney, IBM Vice President for Strategy and CTO for US Federal, covered how IBM's Smarter Planet vision for smarter cities, smarter healthcare, a smarter energy grid and smarter traffic is being adopted by the public sector. Almost every data center in the US Federal government is out of power, floor space and/or cooling capability. An estimated 80 percent of US Federal government IT budgets are spent on maintenance and ongoing operations, leaving very little left over for the big transformational projects that President Barack Obama wants to accomplish.
Who has the most active Online Transaction Processing (OLTP)? You might guess a big bank, but it is the US Department of Homeland Security (DHS), with a system processing 600 million transactions per day. Another government agency is #2, and the top banking application is finally #3. The IBM mainframe solved problems 10 to 15 years ago that distributed systems are just now encountering today. Worldwide, more than 80 percent of banks use mainframes to handle their financial transactions.
IBM's recent POWER7 set of servers is proving successful in the field. For example, Allianz was able to consolidate 60 servers to 1. Running DB2 on a POWER7 server is 38 percent less expensive than Oracle on x86 Nehalem processors. For Java, running a JVM on POWER7 is 73 percent better than a JVM on x86 Nehalem.
The US federal government ingests a large amount of data. It has huge 10-20 PB data warehouses. In fact, the amount of data received every year by the US federal government alone exceeds the output of all disk drives produced by all drive manufacturers. This means that all data must be processed through "data reduction" or it is gone forever.
Clod Barrera
The last keynote for Monday was given by Clod Barrera, IBM Distinguished Engineer and Chief Technical Strategist for System Storage. He started out shocking the audience with his view that the "disk drive industry is a train wreck". While R&D in disk drives enjoyed a healthy improvement curve up to about 2004, it has now slowed down, getting more difficult and more expensive to improve performance and capacity of disk drives. The rest of his presentation was organized around three themes:
Integrated Stacks - while newcomers like Oracle/Sun and the VCE coalition are promoting the benefits of integrated stacks, IBM has been doing this for the past five decades. New advancements in server and storage virtualization provide exciting new opportunities.
Integrated Systems - solutions like IBM Information Archive and SONAS, and new features like Easy Tier that help adopt SSD transparently. As it gets harder and harder to scale-up, IBM has moved to innovative scale-out architectures.
Integrated Data Center management - companies are now realizing that management and governance are critical factors of success, and that this needs to be integrated between traditional IT, private, public and hybrid cloud computing.
This was a great inspiring start for what looks like an awesome week!
By combining multiple components into a single "integrated system", IBM can offer blended disk-and-tape storage solutions. This provides the best of both worlds: high-speed access using disk, with lower costs and better energy efficiency using tape. According to a study by the Clipper Group, tape can be 23 times less expensive than disk over a 5-year total cost of ownership (TCO).
I've also covered Hierarchical Storage Management, such as my post [Seven Tiers of Storage at ABN Amro], and my role as lead architect for DFSMS on z/OS in general, and DFSMShsm in particular.
However, some explanation might be warranted on the use of these two terms in regard to SONAS. In this case, ILM refers to policy-based file placement, movement and expiration on internal disk pools. This is actually a GPFS feature that has existed for some time, and was tested to work in this new configuration. Files can be individually placed on either SAS (15K RPM) or SATA (7200 RPM) drives. Policies can be written to move them from SAS to SATA based on size, age and days non-referenced.
HSM is also a form of ILM, in that it moves data from SONAS disk to external storage pools managed by IBM Tivoli Storage Manager. A small stub is left behind in the GPFS file system indicating the file has been "migrated". Any reference to read or update this file will cause the file to be "recalled" back from TSM to SONAS for processing. The external storage pools can be disk, tape or any other media supported by TSM. Some estimate that as much as 60 to 80 percent of files on NAS have low reference and should be stored on tape instead of disk, and now SONAS with HSM makes that possible.
This distinction allows the ILM movement to be done internally, within GPFS, and the HSM movement to be done externally, via TSM. Both ILM and HSM movement take advantage of the GPFS high-speed policy engine, which can process 10 million files per node, running in parallel across all interface nodes. Note that TSM is not required for ILM movement. In effect, SONAS brings the policy-based management features of DFSMS for z/OS mainframe to all the rest of the operating systems that access SONAS.
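GPFS expresses these rules in its own SQL-like policy language; purely to illustrate the kind of decision the policy engine makes, here is a small Python sketch in which the pool names, thresholds and stub convention are all invented:

```python
# Illustrative sketch of ILM/HSM placement decisions -- NOT GPFS policy syntax.
# Pool names, thresholds and the "stub" convention are invented for the example.

from dataclasses import dataclass

@dataclass
class FileInfo:
    name: str
    size_mb: float
    days_since_access: int

def choose_pool(f: FileInfo) -> str:
    """Decide where a file should live, mimicking size/age-based tiering."""
    if f.days_since_access > 180:
        return "external (TSM)"     # HSM: migrate out, leave a stub behind
    if f.days_since_access > 30 or f.size_mb > 500:
        return "sata"               # ILM: cold or bulky data to 7200 RPM disk
    return "sas"                    # hot data stays on 15K RPM disk

for f in [FileInfo("report.pdf", 2, 5),
          FileInfo("scan.tif", 800, 10),
          FileInfo("archive.zip", 120, 400)]:
    print(f.name, "->", choose_pool(f))
```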
HTTP and NIS support
In addition to NFS v2, NFS v3, and CIFS, the SONAS v1.1.1 adds the HTTP protocol. Over time, IBM plans to add more protocols in subsequent releases. Let me know which protocols you are interested in, so I can pass that along to the architects designing future releases!
SONAS v1.1.1 also adds support for Network Information Service (NIS), a client/server based model for user administration. In SONAS, NIS is used for netgroup and ID mapping only. Authentication is done via Active Directory, LDAP or Samba PDC.
Asynchronous Replication
SONAS already had synchronous replication, which was limited in distance. Now, SONAS v1.1.1 provides asynchronous replication, using rsync, at the file level. This is done over a Wide Area Network (WAN) to any other SONAS at any distance.
Hardware enhancements
Interface modules can now be configured with either 64GB or 128GB of cache. Storage now supports both 450GB and 600GB SAS (15K RPM) and both 1TB and 2TB SATA (7200 RPM) drives. However, at this time, an entire 60-drive drawer must be either all one type of SAS or all one type of SATA. I have been pushing the architects to allow each 10-pack RAID rank to be independently selectable. For now, a storage pod can have 240 drives, 60 drives of each type of disk, to provide four different tiers of storage. You can have up to 30 storage pods per SONAS, for a total of 7200 drives.
An alternative to internal drawers of disk is a new "Gateway" iRPQ that allows the two storage nodes of a SONAS storage pod to connect via Fibre Channel to one or two XIV disk systems. You cannot mix and match; a storage pod is either all internal disk or all external XIV. A SONAS gateway combined with external XIV is referred to as a "Smart Business Storage Cloud" (SBSC), which can be configured off premises and managed by third-party personnel so your IT staff can focus on other things.
See the Announcement Letters for the SONAS [hardware] and [software] for more details.
For those who are wondering how this positions against IBM's other NAS solution, the IBM System Storage N series, the rule of thumb is simple. If your capacity needs can be satisfied with a single N series box per location, use that. If not, consider SONAS instead. For those with non-IBM NAS filers who now realize that SONAS is a better approach, IBM offers migration services.
Both the Information Archive and the SONAS can be accessed from z/OS or Linux on System z mainframe, from "IBM i", AIX and Linux on POWER systems, all x86-based operating systems that run on System x servers, as well as any non-IBM server that has a supported NAS client.
Of course, EMC isn't the first, and won't be the last, vendor to [hear the sirens] of Cloud Computing and crash their ships on rocky shores. Just because you manufacture hardware or write software does not guarantee your success as a Cloud service provider.
(FTC disclaimer: I work for IBM. IBM is a successful public cloud service provider, as well as offering products that can be used to deploy a private, hybrid or community cloud, and provides technology to other cloud service providers.)
An amusing excerpt from Steve Duplessie's post:
"Side Note: There is no such thing as a private cloud. A private cloud is called IT. We don’t need more terms for the same stuff."
I have to agree that when vendors like EMC say "Journey to the Private Cloud", skeptics hear "How to keep your IT administrator job by sticking with a traditional IT approach". Butchers, bakers, candlestick makers and the specialty shop "arms dealers" of Cloud Computing IT equipment may not want to see their market shrink down to a dozen or so service providers, and drum up the fear that "Public Cloud" deployments will "disintermediate" the IT staff.
But does that mean the use of term "Private Cloud" should be discontinued? The US National Institute of Standards and Technology [NIST] offers their cloud model composed of five essential characteristics, three service models, and four deployment models. Here's an excerpt:
"Essential Characteristics:
On-demand self-service
Broad network access
Resource pooling
Rapid elasticity
Measured Service
Service Models:
Cloud Software as a Service (SaaS)
Cloud Platform as a Service (PaaS)
Cloud Infrastructure as a Service (IaaS)
Deployment Models:
Private cloud.
Community cloud.
Public cloud.
Hybrid cloud"
Like traditional IT, a private cloud infrastructure is operated solely for an organization, so I can see how many might consider the term unnecessary. However, unlike traditional IT, a private cloud may be managed by the organization or a third party and may exist on premise or off premise.
How many traditional IT departments meet the five essential characteristics above? Instead of "on-demand self-service", many IT departments have complicated and lengthy procurement and change control procedures. A few might have "measured service" with a charge-back scheme, and a few others prefer a "show-back" approach instead, showing end users or managers how much IT resource is being consumed without assigning a monetary figure or other penalty. Rapid elasticity? Giving back any resource you asked for can be just as painful, because re-purposing that equipment follows the same complicated and lengthy change control procedures.
Last December, I wrote a post covering a conference session by the US Defense Information Systems Agency (DISA) on their [Rapid Access Computing Environment].
Just like the term "intranet" refers to a private network that employs Internet standards and technologies, I feel the term "private cloud" is useful, representing an infrastructure that meets the above criteria, employing Public Cloud standards and technologies, that can distinguish itself from traditional IT in key ways that provide business value.
What I do hope "vaporizes" is all the hype, and all the misuse of the Cloud terminology out there.
Well, I am off on a much-needed vacation. For my American readers, this weekend represents our "4th of July" Independence Day holiday. What better way to celebrate than to drive hundreds of miles from one side of the country to the other? In this case, from the North side down to the South side.
I am armed with two books on this subject. The first is part of a series on American Road Trips, which details the roadside attractions to be found along the Great River Road. We will start up in Minnesota, and work our way Southward, covering a total of eight states in eight days along the Mississippi River.
The second book is Alton Brown's "Feasting on Asphalt, the River Run". This book describes Alton's ride Northward up the Mississippi river, detailing the restaurants and foods he enjoyed, so I will have to read the chapters in reverse.
Special thanks to Roy Buol, mayor of Dubuque, Iowa, whom I [met in Scottsdale earlier this year], for the idea to come visit his fine city, considered one of the Smarter Cities in the USA, thanks to IBM technology.
I don't know if I will have internet access along the way, or have the time and/or energy to blog, tweet (@az990tony) or upload photos during the trip. We'll see.
Congratulations to my colleague and close friend, Harley Puckett, who celebrated his 25th anniversary of service here at IBM. This is known internally as joining the "Quarter Century Club" or QCC. This is not just a figure of speech; the members of this club hold get-togethers and barbeques throughout the year.
Here is Harley welcoming Ken Hannigan and others he worked with back in Tivoli Storage Manager (TSM) software development.
Our manager, Bill Terry, presenting Harley with a plaque.
Confused about what storage solutions you need? IBM now has a [Storage Evaluation Tool] that you can use to find out about IBM's latest products, solutions and offerings.
The tool is customized for different industries, job roles, and challenge areas. Give it a try!
Continuing my saga for my [New Laptop], I have gotten all my programs operational, transferred and organized all my data, and am now ready for testing. You can read my previous posts on this series: [Day 1], [Day 2], [Day 3], [Day 4].
At this point, you might be thinking, "Testing? Just use your laptop already, deal with problems as you find them!" In my case, I need to sign off that the new laptop meets my needs, and then send back my previous laptop, wiped clean of all passwords and data. I have until the end of June to do this.
The value of testing is to avoid problems later, perhaps at an inconvenient time such as a business trip or client briefing. It is better to work out any issues while I am still in the office, connected to the internal IBM intranet on a high-speed wired connection. Also, I plan to do a Physical-to-Virtual (P-to-V) conversion of my Windows XP C: drive to run as a virtual guest OS on Linux, so I want to make sure the image is in working order before the conversion. That said, here is what my testing encountered.
Of the 134 applications I had identified as being installed on my old laptop, I determined that I only needed about 70 of them. The others I did not bother to install on the new machine.
I had not thought about the "addons" and "plugins" that attach themselves to browsers and other applications. I made sure that Flash, Shockwave and Java worked correctly on all three browsers: IE6, Firefox and Opera.
One of my "plugins" is an application called [iSpring Pro], which plugs into Microsoft PowerPoint. I thought I had Microsoft Office installed, but found out the standard IBM build had only the viewers. I installed Microsoft Office 2003 Standard Edition with PowerPoint, Excel and Word. I then realized that I did not have the original v4.3 installation file for iSpring Pro, so I downloaded the latest v5 from their website. However, my license key is only for version 4, so a quick email got this resolved, and the nice folks at iSpring Solutions sent me the v4.3 installation file.
Shameless Plug: We use iSpring Pro to record our voices with PowerPoint slides to generate web videos for the [IBM Virtual Briefing Center] which we use to complement face-to-face briefings. This allows attendees to review introductory materials to prepare for their visit to Tucson, or to stay up-to-date on products and features in between annual visits. If you have not checked out the IBM Virtual Briefing Center, now is a good time to see what videos and other resources we have out there. You can even request to schedule a briefing in Tucson!
Testing out iSpring Pro, I realized that there are no jacks for my headset. On my old ThinkPad T60, I had two jacks, one green for headphone and one pink for microphone. My headset has two cables, one for each, which I use for the recordings. I also use this headset for online webinars and training sessions. Apparently, the ThinkPad T410 went for a single 3.5mm "Combo" audio jack that handles both roles. Fortunately, there is a [Headset Buddy] adapter that merges the two cables from my headset into the combo jack on my new laptop. I ordered one, which will arrive sometime next week.
My new laptop doesn't fit my old docking station either. I had set the docking station aside while I had the two laptops latched together for the file transfers, but now that I am done with the old laptop, I discovered that my new T410 doesn't fit. I ordered a new one.
Using find, grep, awk, sort and uniq, I generated a list of all the file extensions in my Documents folder. I found old Lotus 123, Freelance Graphics, and Wordpro files. I thought Lotus Symphony would handle these, but it does not, so I installed an old version of Lotus Smartsuite that includes these programs so that I can process these files.
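(For anyone who prefers a script to the Unix pipeline, here is a rough Python sketch of the same extension inventory; the D:\Documents path is only a placeholder for wherever your files live, not part of my actual setup.)

    from collections import Counter
    from pathlib import Path

    root = Path(r"D:\Documents")   # placeholder; point this at your own documents folder

    # Tally every file extension under the root; extension-less files show up as "(none)"
    counts = Counter((p.suffix.lower() or "(none)")
                     for p in root.rglob("*") if p.is_file())

    for ext, count in counts.most_common():
        print(f"{count:6d}  {ext}")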
I also found pptx, docx and xlsx files in the extensions list, which represent the newer Microsoft Office 2007 formats. I installed the "Format Compatibility Pack" that allows Office 2003 to read these files.
Lastly, I installed a few programs that support a wide variety of file formats. VideoLAN's [VLC] plays a variety of audio and video files. [7-Zip] packs and unpacks a variety of archive files. (Note: Another program, BitZipper, also supports a variety of archive formats, but the install will corrupt your Firefox and IE browsers with new tool bars, change your search engine default, and install a lot of other unwanted software. Cleaning up the mess can be time-consuming. You have been warned!) I also installed [MadEdit], a binary/hex/text editor that will open any file to see what kind of format it has inside. From this, I was able to determine that some of my extension-less files were GIF, RTF or PDF format, and rename them accordingly.
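(And here is a small, hypothetical Python sketch of the trick MadEdit let me do by eye: peek at the leading bytes of each extension-less file and guess whether it is a GIF, PDF or RTF. Again, the path is just a placeholder.)

    from pathlib import Path

    # Leading bytes ("magic numbers") of the formats I kept finding without extensions
    SIGNATURES = {
        b"GIF87a": ".gif",
        b"GIF89a": ".gif",
        b"%PDF-": ".pdf",
        b"{\\rtf": ".rtf",
    }

    root = Path(r"D:\Documents")   # placeholder path

    for p in root.rglob("*"):
        if p.is_file() and not p.suffix:
            with p.open("rb") as f:
                head = f.read(8)
            for magic, ext in SIGNATURES.items():
                if head.startswith(magic):
                    print(f"{p} looks like {ext}")
                    break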
With the testing done, I am ready to go wipe my old system of all passwords and data!
Continuing my saga for my [New Laptop], I have gotten all my programs operational, and now it is a good time to re-evaluate how I organize my data. You can read my previous posts on this series: [Day 1], [Day 2], [Day 3].
I started my career at IBM developing mainframe software. The naming convention was simple: you had 44-character dataset names (DSN), which could be divided into qualifiers separated by periods. Each qualifier could be up to 8 characters long. The first qualifier was called the "high level qualifier" (HLQ) and the last one was the "low level qualifier" (LLQ). Standard naming conventions helped with ownership and security (RACF), catalog management, policy-based management (DFSMS), and data format identification. For example:
PROD.PAYROLL.JCL
TEST.PAYROLL.JCL
U.PEARSON.TEST.JCL
In the first case, we see that the HLQ is "PROD" for production, the application is PAYROLL, and this file holds job control language (JCL). The LLQ often identified the file type. The second could be a copy used for testing a newer version of this application. The third represents user data, in which my userid PEARSON holds my own TEST JCL. I have seen successful naming conventions with 3, 4, 5 and even 6 qualifiers. The full dataset name remains the same, even if the file is moved from one disk to another, or migrated to tape.
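(A convention like this is easy to act on programmatically. Here is a tiny, illustrative Python sketch that splits a dataset name into its qualifiers and reads meaning back out of them; the qualifier meanings shown are only the ones from my example above, not a general standard.)

    # Hypothetical sketch: read meaning back out of a dataset name's qualifiers
    MEANINGS = {"PROD": "production", "TEST": "test", "U": "user data"}

    def parse_dsn(dsn):
        qualifiers = dsn.upper().split(".")
        hlq, llq = qualifiers[0], qualifiers[-1]
        return hlq, llq, MEANINGS.get(hlq, "unknown")

    for name in ("PROD.PAYROLL.JCL", "TEST.PAYROLL.JCL", "U.PEARSON.TEST.JCL"):
        hlq, llq, env = parse_dsn(name)
        print(f"{name}: {env}, file type {llq}")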
(We had to help one client who had all their files with single-qualifier names, no more than 8 characters long, all in the Master Catalog (root directory). They wanted to implement RACF and DFSMS, and needed help converting all of their file names and related JCL to a 4-qualifier naming convention. It took seven months to make this transformation, but the client was quite pleased with the end result.)
While the mainframe has a restrictive approach to naming files, the operating systems on personal computers provide practically unlimited choices. File systems like NTFS or EXT3 support filenames up to 255 characters, and pathnames up to roughly 32,000 characters. The problem is that when you move a file from one disk to another, or even from one directory structure to another, the pathname changes. If you rely on the pathname to provide critical information about the meaning or purpose of a file, that information can get lost when moving the files around.
I found several websites that offered organization advice. On The Happiness Project blog, Gretchen Rubin [busts 11 myths] about organization. On Zenhabits blog, Leo Babauta offers [18 De-cluttering tips].
Peter Walsh's [Tip No. 185] suggests using nouns to describe each folder. Granted these are about physical objects in your home or office, but some of the concepts can apply to digital objects on your disk drive.
"Use the computer’s sorting function. Put “AAA” (or a space) in front of the names of the most-used folders and “ZZZ” (or a bullet) in front of the least-used ones, so the former float to the top of an alphabetical list and the latter go to the bottom."
Personally, I hate spaces anywhere in directory and file names, and the thought of putting a space at the front of one to make it float to the top is even worse. Rather than resorting to naming folders with AAA or ZZZ, why not just limit the total number of files or directories so they are all visible on the screen? I often sort by date to access my most frequently accessed or most recently updated files.
Of all the suggestions I found, Peter Walsh's "Use Nouns" seemed to be the most useful. Wikipedia has a fascinating article on [Biological Classification]. Certainly, if all living things can be put into classifications with only seven levels, we should not need more than seven levels of file system directory structure either! So, this is how I decided to organize my files on my new ThinkPad T410 (with a small depth-check sketch after the listing below):
C: Drive
Windows XP operating system programs and applications. I have structured this so that if I had to replace my hard disk entirely while traveling, I could get a new drive and restore just the operating system on this drive, and a few critical data files needed for the trip. I could then do a full recovery when I was back in the office. If I was hit with a virus that prevented Windows from booting up, I could re-install the Windows (or Linux) operating system without affecting any of my data.
D: Drive
This will be for my most active data, files and databases. I have Windows "My Documents" pointing to the D:\Documents directory. Under Archives, I will keep files for events that have completed, projects that have finished, and presentations I used that year. If I ever run out of space on my disk drive, I would delete or move off these archives first. I have a single folder for all Downloads; I move each download to a more appropriate folder after I decide where it belongs. My Office folder holds administrative items, like org charts, procedures, and so on.
As a consultant, I have many files that relate to Events; these could be Briefings, Conferences, Meetings or Workshops. Events are usually one to five days in duration, so here I keep background materials for the clients involved, agendas, my notes on what transpired, and so on. I keep my Presentations separately, organized by topic. I am also involved with Projects, which might span several months, as well as ongoing tasks and assignments. I also keep my Resources separately; these could be templates, training materials, marketing research, whitepapers, and analyst reports.
A few folders I keep outside of this structure on the D: drive. [Evernote] is an application that provides "folksonomy" tagging. This is great in that I can access it from my phone, my laptop, or my desktop at home. Install-files are all those ZIP and EXE files to install applications after a fresh Windows install. If I ever had to wipe clean my C: drive and re-install Windows, I would then have this folder on D: drive to upgrade my system. Finally, I keep my Lotus Notes database directory on my D: drive. Since these are databases (NSF) files accessed directly by Lotus Notes, I saw no reason to put them under the D:\Documents directory structure.
Documents
Archives
2006
2007
2008
2009
Downloads
Events
Briefings
Conferences
Meetings
Workshops
Office
Presentations
Projects
Resources
Evernote
Install-Files
Notes
Data
E: Drive
This will be for my multimedia files. These don't change often, are mostly read-only, and could be restored quickly as needed.
Audio
Images
Video
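(To keep myself honest about that seven-level limit, a quick sketch like the one below can flag any folder that gets too deep; the path is a placeholder.)

    from pathlib import Path

    root = Path(r"D:\Documents")   # placeholder; use whatever root you are organizing
    MAX_DEPTH = 7                  # my self-imposed seven-level limit

    for p in root.rglob("*"):
        if p.is_dir():
            depth = len(p.relative_to(root).parts)
            if depth > MAX_DEPTH:
                print(f"{depth} levels deep: {p}")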
I'll give this new re-organization a try. Since I have to take a fresh backup to Tivoli Storage Manager anyway, now is the best time to re-organize the directory structure and update my dsm.opt options file.
Continuing my saga for my [New Laptop], let's recap my progress so far:
[Day 1 afternoon], I received the laptop from shipping on Wednesday, took a backup of the factory install image to an external USB drive, and re-partitioned to run both Windows and Linux operating systems.
[Day 2], I spent Thursday using the "Migration Assistant" tool, and completed the operation sending the rest of my data over to the /dev/sda6 NTFS partition.
So now, Friday (day 3), I get to install any applications that were not part of the pre-installed image. Thankfully, I had planned ahead and figured out the 134 different applications that I had on my old system. I printed out a copy of my spreadsheet, and used it as a checklist to systematically go through the list. For each one, I determined one of the following:
BUILD
If I could find the application already installed, either the same version or newer, or something functionally equivalent, then I would mark it down as being part of the factory build. Of those programs pre-installed, I am quite pleased that the settings were carried over during yesterday's file transfer. For example, my bookmarks and bookmarklets in Firefox are all intact. However, the transfer did not carry forward all of my Firefox addons, so those I had to install separately.
ISSI Download
IBM Standard Software Installer is our internal website for IBM and select third-party software for the different operating systems supported. Many of the ISSI programs were already included in the factory build, such as Lotus Notes, Lotus Symphony, and the Firefox browser, so I had very few left to install manually from ISSI.
INSTALL from D:\Install-Files
As I mentioned in my previous post, I saved the ZIP or EXE files of installation, as well as any license keys, URLs and other useful information to re-install each application.
COPY over from D:\Prog-Files
Many programs don't have installation files, because they don't need to update the registry or create Desktop icons or Taskbar management buttons. For these I can just copy the directory over to C:\Program Files.
WEB Download
In some cases, the Install-File was fairly downlevel, so I downloaded a fresh copy from the Web. In other cases, I forgot to save the ZIP or EXE, so this was the backup plan.
DEFER for later install
I worked down the list alphabetically, but some programs needed other programs to be installed first, or I needed to find the license registry key, or whatever. This allowed me to focus on the most important programs first. Others I might defer indefinitely until I need them, such as programs to access Second Life, or to build software for Lego Mindstorms robots.
SKIP those applications no longer required
Some programs just don't need to be on my new system. This includes software to manage printers I no longer have, drivers to attach to gadgets and devices I no longer own, and software that might have been specific to the old ThinkPad T60. This was also a good time to "de-duplicate" similar applications. For example, I have decided to limit myself to just three browsers: Firefox, Opera, and Internet Explorer IE6.
The planning paid off. I was able to confirm or install all of my applications today and have a fully working Windows XP system partition. I celebrated by taking another backup.
Continuing my saga regarding my [New Laptop], I managed on
[Wednesday afternoon] to prepare my machine with separate partitions for programs and data. I was hoping to wrap things up on day 2 (Thursday), but nothing went smoothly.
Just before leaving late Wednesday evening, I thought I would try running the "Migration Assistant" overnight by connecting the two laptops with a REGULAR Ethernet cable. The instructions indicated that in "most" cases, two laptops can be connected using a regular "patch cord" cable. These are the kind everyone has, the cable that connects their laptop to the wall socket for a wired connection to the corporate intranet, or their personal computers to their LAN hubs at home. Unfortunately, the connection was not recognized, so I suspected that this was one of the exceptions not covered.
(There are two types of Ethernet cables. The ["patch cord"] connects computers to switches. The ["crossover" cable] connects like devices, such as computers to computers, or switches to switches. Four years ago, I used a crossover cable to transfer my files over, and assumed that I would need one this time as well.)
Thursday morning, I borrowed a crossover cable from a coworker. It was bright pink and only about 18 inches long, just enough to have the two laptops side by side. If the pink crossover cable were any shorter, the two laptops would be back to back. I kept the old workstation in the docking station, which allowed it to remain connected to my big flat screen, mouse and keyboard, and to use the docking station's RJ45 to connect to the corporate intranet. That left the RJ45 on the left side of the old system to connect via crossover cable to the new system. But that didn't work, of course, because the docking station overrides the side port, so we had to completely "undock" and go native laptop to laptop.
Restarting the Migration Assistant, I unplugged the corporate intranet cable from the old laptop and put one end of the pink cable into the Ethernet port of each laptop. On the new system, Migration Assistant asked me to set up a password and provided an IP address like 169.254.aa.bb with a netmask of 255.255.0.0, and I was supposed to type this IP address on the old system for it to reach out and connect. It still didn't connect.
We tried a different pink crossover cable; no luck. My colleague Harley brought over his favorite "red" crossover cable, which he has used successfully many times, but that didn't work either. The helpful diagnostic advice was to disable all firewall programs on one or both systems.
I disabled Symantec Client Firewall on both systems. Still not working. I even tried booting both systems up in "safe" mode, using MSCONFIG to set the reboot mode as "safe with networking" as the key option. Still not working. At this point, I was afraid that I would have to use the alternate approach, which was to connect both systems to our corporate 100 Mbps system, which would be painfully slow. I only have one active LAN cable in my office, so the second computer would have to sit outside in the lobby.
Looking at the IP address on the old system, it was 9.11.xx.yy, assigned by our corporate DHCP, so it was not even in the same subnet as the new computer. So, I created profiles in ThinkVantage Access Connections on both systems, with 192.168.0.yy netmask 255.255.255.0 on the old system, and 192.168.0.bb on the new system. This worked, and a connection between the two systems was finally recognized.
Since I had 23GB of system files and programs on my old C: drive, and 80GB of data on my old D: drive, I didn't think I would run out of space on my new 40GB C: drive and 245GB D: drive, but I did! The Migration Assistant wanted to put my D:\Documents on my new C: drive and refused to continue. I had to remove D:\Documents from the list so that it could continue, processing only the programs and system settings on the C: drive. It took 61 minutes to scan 23GB on my C: drive and identify 12,900 files to move, representing 794MB of data. Seriously? Less than 1GB of data moved!
It then scanned all of the programs I had on my old system, and decided that there were none that needed to be moved or installed on the new system. The closing instructions explained there might be a few programs that need to be manually installed, and some data that needed to be transferred manually.
Given the performance of Migration Assistant, I decided to just set up a direct network mapping of the new D: drive as Y: on my old system, and drag and drop my entire folder over. Even at 1000 Mbps, this still took the rest of the day. I also backed up C:\Program Files using [System Rescue CD] to my external USB drive, and restored it as D:\prog-files, just in case. In retrospect, I realize it would have been faster to have dumped my D: drive to my USB drive and restored it on the new system.
I'll leave the process of re-installing missing programs for Friday.
My how time flies. This week marks my 24th anniversary working here at IBM. This would have escaped me completely, had I not gotten an email reminding me that it was time to get a new laptop. IBM manages these on a four-year depreciation schedule, and I received my current laptop back in June 2006, on my 20th anniversary.
When I first started at IBM, I was a developer on DFHSM for the MVS operating system, now called DFSMShsm on the z/OS operating system. We all had 3270 [dumb terminals], large cathode ray tubes affectionately known as "green screens", and all of our files were stored centrally on the mainframe. When Personal Computers (PC) were first deployed, we were getting 120 machines, in five batches of 24 systems each, spaced out over the next two years, and I was assigned the job of recommending who should get a PC during the first batch, the second batch, and so on. I was concerned that everyone would want to be part of the first batch, so I put out a survey, asking questions on how familiar they were with personal computers, whether they owned one at home, were familiar with DOS or OS/2, and so on.
It was actually my last question that helped make the decision process easy:
How soon do you want a Personal Computer to replace your existing 3270 terminal?
1-60 days
61-120 days
121-180 days
As late as possible
Never
I had five options, and roughly 24 respondents checked each one, making my job extremely easy. Ironically, once the early adopters of the first batch discovered that these PC could be used for more than just 3270 terminal emulation, many of the others wanted theirs sooner.
Back then, IBM employees resented any form of change. Many took their new PC, configured it to be a full-screen 3270 emulation screen, and continued to work much as they had before. My mentor, Jerry Pence, would print out his emails and file the printouts into hanging file folders in his desk credenza. He did not trust saving them on the mainframe, so he was certainly not going to trust storing them on his new PC. One employee used his PC as a door stop, claiming he would continue to use his 3270 terminal until they took it away from him.
Moving forward to 2006, I was one of the first in my building to get a ThinkPad T60. It was so new that many of the accessories were not yet available. It had Windows XP on a single-core 32-bit processor, 1GB RAM, and a huge 80GB disk drive. The built-in 1GbE Ethernet went unused for a while, as we had 16 Mbps Token Ring network.
I was the marketing strategist for IBM System Storage back then, and needed all this excess power and capacity to handle all my graphic-intense applications, like GIMP and Second Life.
Over the past four years, I made a few slight improvements. I partitioned the hard drive to dual-boot between Windows and Linux, and created a separate partition for my data that could be accessed from either OS. I increased the memory to 2GB and replaced the disk with a drive holding 120GB capacity.
A few years ago, IBM surprised us by deciding to support Windows, Linux and Mac OS computers. But actually it made a lot of sense. IBM's world-renowned global services organization manages the help-desk support of over 500 other companies in addition to the 400,000 employees within IBM, so they already had to know how to handle these other operating systems. Now we can choose whichever we feel makes us more productive. Happy employees are more productive, of course. IBM's vision is that almost everything you need to do would be supported on all three OS platforms:
Lotus Notes
Access your email, calendar, to-do list and corporate databases via Lotus Notes on either Windows, Linux or Mac OS. Corporate databases store our confidential data centrally, so we don't have to have them on our local systems. We can make local replicas of specific databases for offline access, and these are encrypted on our local hard drive for added protection. Emails can link directly to specific entries in a database, so we don't have huge attachments slowing down email traffic. IBM also offers LotusLive, a public cloud offering for companies to get out of managing their own email Lotus Domino repositories.
Lotus Symphony
Create presentations, documents and spreadsheets on either Windows, Linux or Mac OS. Lotus Symphony is based on open source OpenOffice and is compatible with Microsoft Office. This allows us to open and update directly in Microsoft's PPT, DOC and XLS formats.
Firefox Browser
Many of the corporate applications have now been converted to be browser-accessible. The Firefox browser is available on Windows, Linux and Mac OS. This is a huge step forward, in my opinion, as we often had to download applications just to do the simplest things like submit our time-sheet or travel expense reimbursement. I manage my blog, Facebook and Twitter all from online web-based applications.
The irony here is that the world is switching back to thin clients, with data stored centrally. The popularity of Web 2.0 helped this along. People are using Google Docs or Microsoft OfficeOnline to eliminate having to store anything locally on their machines. This vision positions IBM employees well for emerging cloud-based offerings.
Sadly, we are not quite completely off Windows. Some of our Lotus Notes databases use Windows-only APIs to access our Siebel databases. I have encountered PowerPoint presentations and Excel spreadsheets that just don't render correctly in Lotus Symphony. And finally, some of our web-based applications work only in Internet Explorer! We use the outdated IE6 corporate-wide, which is enough reason to switch over to Firefox, Chrome or Opera browsers. I have to put special tags on my blog posts to suppress YouTube and other embedded objects that aren't supported on IE6.
So, this leaves me with two options: get a Mac and run Windows on the side as a guest operating system, or get a ThinkPad to run Windows or Windows/Linux. I've opted for the latter, and put in my order for a ThinkPad T410 with a dual-core 64-bit Intel Core i5 processor, VT-capable to provide hardware assistance for virtualization, 4GB of RAM, and a huge 320GB drive. It will come installed with Windows XP as one big C: drive, so it will be up to me to re-partition it into a Windows/Linux dual-boot and/or Windows and Linux running as guest OS machines.
(Full disclosure to make the FTC happy: This is not an endorsement for Microsoft or against Apple products. I have an Apple Mac Mini at home, as well as Windows and Linux machines. IBM and Apple have a business relationship, and IBM manufactures technology inside some of Apple's products. I own shares of Apple stock, I have friends and family that work for Microsoft that occasionally send me Microsoft-logo items, and I work for IBM.)
I have until the end of June to receive my new laptop, re-partition, re-install all my programs, reconfigure all my settings, and transfer over my data so that I can send my old ThinkPad T60 back. IBM will probably refurbish it and send it off to a deserving child in Africa.
If you have an old PC or laptop, please consider donating it to a child, school or charity in your area. To help out a deserving child in Africa or elsewhere, consider contributing to the [One Laptop Per Child] organization.
The BP oil spill in the Gulf of Mexico is a good reminder that all organizations should consider practice and execution of their contingency plans. In this most recent case, the [Deepwater Horizon] oil platform had an explosion on April 20, resulting in oil spewing out at an estimated 19,000 barrels per day. While some bloggers have argued that BP failed to plan, and therefore planned to fail, I found that hard to believe. How can a billion-dollar multinational company not have contingency plans?
The truth is, BP did have plans. Karen Dalton Beninato of New Orleans' City Voices discusses BP's Gulf of Mexico Regional Oil Spill Response Plan (OSRP) in her article [BP's Spill Plan: What they knew and when they knew it]. A
[redacted 90-page version of the OSRP] is available on their website.
The plan indicates that it may take 30 days from the time a leak starts at a deep offshore well until the oil reaches the shoreline, giving OSRP participants plenty of time to take action.
(Having former politicians [blame environmentalists] for this crisis does not help much either. At least the deep offshore rigs give you 30 days to react to a leak before the oil gets to the shoreline. Having oil rigs closer to shore will just shorten this time to react. Allowing onshore oil rigs does not mean oil companies would discontinue their deep offshore operations. There are thousands of oil rigs in the Gulf of Mexico. Extracting oil in the beautiful Arctic National Wildlife Refuge [ANWR] might be safer, but it does not eliminate the threat entirely, and any leak there would be just as damaging to the local plants and animals.)
So perhaps the current crisis was not the result of a lack of planning, but inadequate practice and execution. The same is true for IT Business Continuity / Disaster Recovery (BC/DR) plans. In all cases, there are four critical parts:
Plan
The planning team needs to anticipate every possible incident, determine the risks involved and the likelihood of impact, and either accept them, or decide to mitigate them. This can include natural disasters (hurricanes, fires, floods) and technical issues (computer viruses, power outages, network disruption).
Prepare
Mitigation can involve taking backups, having replicated copies at a remote location, creating bootable media, training all of the appropriate employees, and having written documented procedures. IBM's Unified Recovery Management approach can protect your entire IT operations, from laptops of mobile employees, to remote office/branch office (ROBO) locations, to regional and central data centers.
Practice
When was the last time you practiced your Business Continuity / Disaster Recovery plan? I have seen this done at a variety of levels. At the lowest level, it is all done on paper, in a conference room, with all participants talking through their respective actions. These are often called "walk-throughs". At the highest level, you turn off power to your data center --on a holiday weekend to minimize impact to operating revenues-- and have the team bring up applications at the alternate site.
As many as 80 percent of these BC/DR exercises are considered failures, in that, had a real disaster occurred, the participants are convinced they would not have achieved their Recovery Time Objective (RTO) targets. However, they are not complete failures if they help improve the plans, identify new incidents that were not previously considered, and train the participants in recovery procedures.
Execute
The last part is execution. In my career, I have been onsite for many Disaster Recovery exercises, as well as after real disasters have occurred. I am not surprised by how many people assume that if they have plans in place, have made preparations, and run one to three practice drills per year, the actual "execution" will follow directly. While the book [Execution] by Bossidy and Charan is not focused on IT BC/DR plans per se, it is a great read on how to manage the actual execution of any kind of business plan. I have read this book and recommend it.
If you have not tested your IT department's BC/DR plans lately, perhaps it's time to dust off your copy, review it, and schedule some time for practice.
Well, it feels like Tuesday and you know what that means... "IBM Announcement Day!" Actually, today is Wednesday, but since Monday was the Memorial Day holiday here in the USA, my week is day-shifted. Yesterday, IBM announced its latest IBM FlashCopy Manager v2.2 release. Fellow blogger Del Hoobler (IBM) has also posted something on this over at the [Tivoli Storage Blog].
IBM FlashCopy Manager replaces two previous products. One was called Tivoli Storage Manager for Copy Services, the other was called Tivoli Storage Manager for Advanced Copy Services. To say people were confused between these two is an understatement: the first was for Windows, and the second was for UNIX and Linux operating systems. The solution? A new product that replaces both of these former products and supports Windows, UNIX and Linux! Thus, IBM FlashCopy Manager was born. I introduced this product back in 2009 in my post [New DS8700 and other announcements].
IBM Tivoli Storage FlashCopy Manager provides what most people with "N series SnapManager envy" are looking for: application-aware point-in-time copies. This product takes advantage of the underlying point-in-time interfaces available on various disk storage systems:
FlashCopy on the DS8000 and SAN Volume Controller (SVC)
Snapshot on the XIV storage system
Volume Shadow Copy Services (VSS) interface on the DS3000, DS4000, DS5000 and non-IBM gear that supports this Microsoft Windows protocol
For Windows, IBM FlashCopy Manager can coordinate the backup of Microsoft Exchange and SQL Server. The new version 2.2 adds support for Exchange 2010 and SQL Server 2008 R2. This includes the ability to recover an individual mailbox or mail item from an Exchange backup. The data can be recovered directly to an Exchange server, or to a PST file.
For UNIX and Linux, IBM FlashCopy Manager can coordinate the backup of DB2, SAP and Oracle databases. Version 2.2 adds support for specific Linux and Solaris operating systems, and provides a new capability for database cloning. Basically, database cloning restores a database under a new name, with all the appropriate changes to allow its use for other purposes, like development, test or education training. A new "fcmcli" command line interface allows IBM FlashCopy Manager to be used with custom applications or file systems.
A common misperception is that IBM FlashCopy Manager requires IBM Tivoli Storage Manager backup software to function. That is not true. You have two options:
Stand-alone Mode
In stand-alone mode, it's just you, the application, IBM FlashCopy Manager and your disk system. IBM FlashCopy Manager coordinates the point-in-time copies, maintains the correct number of versions, and allows you to back up and restore directly disk-to-disk.
Unified Recovery Management with Tivoli Storage Manager
Of course, the risk with relying only on point-in-time copies is that in most cases, they are on the same disk system as the original data. The exception being virtual disks from the SAN Volume Controller. IBM FlashCopy Manager can be combined with IBM Tivoli Storage Manager so that the point-in-time copies can be copied off to a local or remote TSM server, so that if the disk system that contains both the source and the point-in-time copies fails, you have a backup copy from TSM. In this approach, you can still restore from the point-in-time copies, but you can also restore from the TSM backups as well.
IBM FlashCopy Manager is an excellent platform to connect application-aware functionality with hardware-based copy services.
Here I am, day 11 of a 17-day business trip, on my last leg of the trip this week, in Kuala Lumpur in Malaysia. I have been flooded with requests to give my take on EMC's latest re-interpretation of storage virtualization, VPLEX.
I'll leave it to my fellow IBM master inventor Barry Whyte to cover the detailed technical side-by-side comparison. Instead, I will focus on the business side of things, using Simon Sinek's Why-How-What sequence. Here is a [TED video] from Garr Reynolds' post
[The importance of starting from Why].
Let's start with the problem we are trying to solve.
Problem: migration from old gear to new gear, old technology to new technology, from one vendor to another vendor, is disruptive, time-consuming and painful.
Given that IT storage is typically replaced every 3-5 years, then pretty much every company with an internal IT department has this problem, the exception being those companies that don't last that long, and those that use public cloud solutions. IT storage can be expensive, so companies would like their new purchases to be fully utilized on day 1, and be completely empty on day 1500 when the lease expires. I have spoken to clients who have spent 6-9 months planning for the replacement or removal of a storage array.
A solution to make the data migration non-disruptive would benefit the clients (make it easier for their IT staff to keep their data center modern and current) as well as the vendors (reduce the obstacle of selling and deploying new features and functions). Storage virtualization can be employed to help solve this problem. I define virtualization as "technology that makes one set of resources look and feel like a different set of resources, preferably with more desirable characteristics." By making different storage resources, old and new, look and feel like a single type of resource, migration can be performed without disrupting applications.
Before VPLEX, here is a breakdown of each solution:
Why?
IBM: Non-disruptive tech refresh, and a unified platform to provide management and functionality across heterogeneous storage.
HDS: Non-disruptive tech refresh, and a unified platform to provide management and functionality between internal tier-1 HDS storage, and external tier-2 heterogeneous storage.
EMC: Non-disruptive tech refresh, with unified multi-pathing driver that allows host attachment of heterogeneous storage.
How?
IBM: New in-band storage virtualization device
HDS: Add in-band storage virtualization to existing storage array
EMC: New out-of-band storage virtualization device with new "smart" SAN switches
What?
IBM: SAN Volume Controller
HDS: HDS USP-V and USP-VM
EMC: Invista
For IBM, the motivation was clear: protect customers' existing investment in older storage arrays and introduce new IBM storage with a solution that allows both to be managed with a single set of interfaces and a common set of functionality, improving capacity utilization and availability. IBM SAN Volume Controller eliminated vendor lock-in, providing clients choice in multi-pathing driver, and allowing any-to-any migration and copy services. For example, IBM SVC can be used to help migrate data from an old HDS USP-V to a new HDS USP-V.
With EMC, however, the motivation appeared to be protecting software revenues from their PowerPath multi-pathing driver, and the TimeFinder and SRDF copy services. Back in 2005, when EMC Invista was first announced, these three software products represented 60 percent of EMC's bottom-line profit. (Ok, I made that last part up, but you get my point! EMC charges a lot for these.)
Back in 2006, fellow blogger Chuck Hollis (EMC) suggested that SVC was just a [bump in the wire] which could not possibly improve performance of existing disk arrays. IBM showed clients that putting cache (SVC) in front of other cache (back-end devices) does indeed improve performance, in the same way that multi-core processors successfully use L1/L2/L3 cache. Now, EMC is claiming their cache-based VPLEX improves performance of back-end disk. My how EMC's story has changed!
So now, EMC announces VPLEX, which sports a blend of SVC-like and Invista-like characteristics. Based on blogs, tweets and publicly available materials I found on EMC's website, I have been able to determine the following comparison table. (Of course, VPLEX is not yet generally available, so what is eventually delivered may differ.)
Hardware
IBM SVC: Scalable, 1 to 4 node-pairs
EMC Invista: One size fits all, single pair of CPCs
EMC VPLEX: SVC-like, 1 to 4 director-pairs
SAN Fabric
IBM SVC: Works with any SAN switches or directors
EMC Invista: Required special "smart" switches (vendor lock-in)
EMC VPLEX: SVC-like, works with any SAN switches or directors
Multi-pathing driver
IBM SVC: Broad selection of IBM Subsystem Device Driver (SDD) offered at no additional charge, as well as OS-native drivers Windows MPIO, AIX MPIO, Solaris MPxIO, HP-UX PV-Links, VMware MPP, Linux DM-MP, and the commercial third-party driver Symantec DMP.
EMC Invista: Limited selection, with focus on the priced PowerPath driver
EMC VPLEX: Invista-like, PowerPath and Windows MPIO
Cache
IBM SVC: Read cache, and choice of fast-write or write-through cache, offering the ability to improve performance.
EMC Invista: No cache; the Split-Path architecture cracked open Fibre Channel packets in flight, delayed every IO by 20 nanoseconds, and redirected modified packets to the appropriate physical device.
EMC VPLEX: SVC-like, read and write-through cache, offering the ability to improve performance.
Space-Efficient Point-in-Time copies
IBM SVC: SVC FlashCopy supports up to 256 space-efficient targets, copies of copies, read-only or writeable, and incremental persistent pairs.
EMC Invista: No
EMC VPLEX: Like Invista, no
Remote distance mirror
IBM SVC: Choice of SVC Metro Mirror (synchronous up to 300km) and Global Mirror (asynchronous), or use the functionality of the back-end storage arrays
EMC Invista: No native support; use functionality of back-end storage arrays, or purchase a separate product called EMC RecoverPoint to cover this lack of functionality
EMC VPLEX: Limited synchronous remote-distance mirror within VPLEX (up to 100km only), no native asynchronous support; use functionality of back-end storage arrays
Thin Provisioning
IBM SVC: Provides thin provisioning to devices that don't offer this natively
EMC Invista: No
EMC VPLEX: Like Invista, no
Campus-wide access
IBM SVC: SVC Split-Cluster allows concurrent read/write access to data from hosts at two different locations several miles apart
EMC Invista: I don't think so
EMC VPLEX: VPLEX Metro, similar in concept but implemented differently
Non-disruptive tech refresh
IBM SVC: Can upgrade or replace storage arrays, SAN switches, and even the SVC nodes' software AND hardware themselves, non-disruptively
EMC Invista: Tech refresh for storage arrays, but not for Invista CPCs
EMC VPLEX: Tech refresh of back-end devices, and upgrade of VPLEX software, non-disruptively. Not clear if VPLEX engines themselves can be upgraded non-disruptively like the SVC.
Heterogeneous Storage Support
IBM SVC: Broad support of over 140 different storage models from all major vendors, including all CLARiiON, Symmetrix and VMAX from EMC, and storage from many smaller startups you may not have heard of
EMC Invista: Limited support
EMC VPLEX: Invista-like. VPLEX claims to support a variety of arrays from a variety of vendors, but as far as I can find, only the DS8000 is supported from the list of IBM devices. Fellow blogger Barry Burke (EMC) suggests [putting SVC between VPLEX and third party storage devices] to get the heterogeneous coverage most companies demand.
Back-end storage requirement
IBM SVC: Must define quorum disks on any IBM or non-IBM back-end storage array. SVC can run entirely on non-IBM storage arrays
EMC Invista: None
EMC VPLEX: HP SVSP-like, requires at least one EMC storage array to hold metadata
Internal storage
IBM SVC: The SVC 2145-CF8 model supports up to four solid-state drives (SSD) per node that can be treated as managed disk to store end-user data
EMC Invista: None
EMC VPLEX: Invista-like. VPLEX has an internal 30GB SSD, but this is used only for the operating system and logs, not for end-user data.
In-band virtualization solutions from IBM and HDS dominate the market. Being able to migrate data from old devices to new ones non-disruptively turned out to be only the [tip of the iceberg] of benefits from storage virtualization. In today's highly virtualized server environment, being able to non-disruptively migrate data comes in handy all the time. SVC is one of the best storage solutions for VMware, Hyper-V, XEN and PowerVM environments. EMC watched and learned in the shadows, taking notes of what people like about the SVC, and decided to follow IBM's time-tested leadership to provide a similar offering.
EMC re-invented the wheel, and it is round. On a scale from Invista (zero) to SVC (ten), I give EMC's new VPLEX a six.
Wrapping up my coverage of the IBM Dynamic Infrastructure Executive Summit at the Fairmont Resort in Scottsdale, Arizona, we had a final morning of main-tent sessions. Here is a quick recap of the sessions presented Thursday morning. This left the afternoon for people to catch their flights or hit the links.
Data Center Actions your CFO will Love
Steve Sams, IBM Vice President of Global Site and Facilities, presented simple actions that can yield significant operational and capital cost savings. The first focus area was to extend the life of your existing data center. Some 70 percent of data centers are 10-15 years old or older, and therefore not designed for today's computational densities. IBM did this for its Lexington data center, making changes that resulted in 8x capability without increasing its footprint.
The second focus area was to rationalize the infrastructure across the organization. The process of "rationalizing" involves determining the business value of specific IT components and deciding whether the business value justifies the existing cost and complexity. It allows you to prioritize which consolidations should be done first to reduce costs and optimize value. IBM's own transformation reduced 128 CIOs down to a single CIO, consolidated 155 scattered host data centers down to seven, and 80 web hosting data centers down to five. This also included consolidating 31 intranets down to a single global intranet.
The third focus area was to design your new infrastructure to be more responsive to change. IBM offers four solutions to help those looking to build or upgrade their data center:
Scalable Modular Data Center - save up to 20 percent compared to traditional deployments, with turn-key configurations from 500 to 2500 square feet that can be deployed in as little as 8-12 weeks on existing floorspace.
Enterprise Modular Data Center - save 40 to 50 percent with a 5000 square foot standardized design for larger data centers. This modular approach provides a "pay as you grow" model that can be more responsive to future unforeseen needs.
Portable Modular Data Center - this is the PMDC shipping container that was sitting outside in the parking lot. This can be deployed anywhere in 12-14 weeks and is ideal for dealing with disaster recoveries or situations where traditional data center floor plans cannot be built fast enough.
High Density Zone - this can help increase capacity in an existing data center without a full site retrofit.
Here is a quick [video] that provides more insight.
Neil Jarvis, CIO of American Automobile Association (AAA) for Northern California, Nevada and Utah (NCNU), provided the customer testimonial. Last September, the [AAA NCNU selected IBM] to build them an energy-efficient green data center. Neil provided us an update now six months later, managing the needs of 4 million drivers.
Virtualization - Managing the World's Infrastructure
Helene Armitage, IBM General Manager of the newly formed IBM System Software product line, presented on virtualization and management. Virtualization is becoming much more than a way of meeting the demand for performance, capability, and flexibility in the data center. It helps create a smarter, more agile data center. Her presentation focused on four areas: consolidate resources, manage workloads, automate processes, and optimize the delivery of IT services.
Charlie Weston, Group Vice President of Information Technology at Winn-Dixie, one of the largest food retailers in the United States with over 500 stores and supermarkets, presented the customer testimonial. The grocery business is highly competitive with tight profit margins. Winn-Dixie wanted to deploy business continuity/disaster recovery (BC/DR) while managing IT equipment scattered across these 500 locations. They were able to consolidate 600 stand-alone servers into a single corporate data center. Using IBM AIX with PowerVM virtualization on BladeCenter, each JS22 blade server could manage 16 stores. These were mirrored to a nearby facility, as well as a remote disaster recovery center. They were also able to add new Linux application workloads to their existing System z9 EC mainframe. The result was to free up $5 million US dollars in capital that could be used to remodel their stores, and to improve application performance 5-10 times. They were able to deploy a new customer portal on Linux for System z in days instead of months, and have reduced their disaster recovery time objective (RTO) against hurricanes from days to hours. Their next step involves looking at desktop virtualization.
Redefining x86 Computing
Roland Hagan, IBM Vice President for the IBM System x server platform, presented on how IBM is redefining the x86 computing experience. More than 50 percent of all servers are x86 based. These x86 servers are easy to acquire, enjoy a large application base, and can take advantage of a readily available skilled workforce for administration. The problem is that 85 percent of x86 processing power remains idle, energy costs are 8 times what they were 12 years ago, and management costs are now 70 percent of the IT budget.
IBM has the number one market share for scalable x86 servers. Roland covered the newly announced eX5 architecture that has been deployed in both rack-optimized models as well as IBM BladeCenter blade servers. These can offer 2x the memory capacity of competitive offerings, which is important for today's server virtualization, database and analytics workloads. This includes 40 and 80 DIMM models of blades, and 64 to 96 DIMM models of rack-optimized systems. IBM also announced eXFlash, internal Solid State Drives accessible at bus speeds.
The results can be significant. For example, just two IBM System x3850 4-socket, 8-core systems can replace 50 (yes, FIFTY) HP DL585 4-socket, 4-core Opteron rack servers, reducing costs 80 percent with a 3-month ROI payback period. Compared to IBM's previous X4 architecture, the eX5 provides 3.5 times better SAP performance, 3.8 times faster server virtualization performance, and 2.8 times faster database performance.
The CIO of Acxiom provided the customer testimonial. They were able to get a 35-to-1 consolidation switching over to IBM x86 servers, resulting in huge savings.
Top ROI projects to Get Started
Mark Shearer, IBM Vice President of Growth Solutions, and formerly my fourth-line manager as the Vice President of Marketing and Communications, presented a list of projects to help clients get started. There are over 500 client references that have successfully implemented Smarter Planet projects. Mark's list was grouped into five categories:
Enabling Massive Scale
Increase Business Agility
Manage Risk, Compliance and Security
Organize Vast Amounts of Information
Turn Information into Insight
The attendees were all offered a free "Infrastructure Study" to evaluate their current data center environments. A team of IBM experts will come on-site, gather data, interview key personnel and make recommendations. Alternatively, these can be done at one of IBM's many briefing centers, such as the IBM Executive Briefing Center in Tucson, Arizona, where I work.
This wraps up the week for me. I have to pack the XIV back into the crate, and drive back to Tucson. IBM plans to host another Executive Summit in the September/October time frame on the East coast.
Continuing my coverage of the IBM Dynamic Infrastructure Executive Summit at the Fairmont Resort in Scottsdale, Arizona, we had a day full of main-tent sessions. Here is a quick recap of the sessions presented in the afternoon.
Taming the Information Explosion
Doug Balog, IBM Vice President and Disk Storage Business Line Executive, presented on the information explosion. Storage Admins are focused on managing storage growth and the related costs and complexity, proper forecasting and capacity planning, and backup administration. IBM's strategy is to help clients in the following areas:
Storage Efficiency - getting the most use out of the resources you invest
Service Delivery - ensuring that information gets to the right people at the right time
Data Protection - protecting data against unethical tampering, unauthorized access, and unexpected loss and corruption
Cory Vokey, Senior Manager of IT Systems Operations at Research in Motion, Ltd., the people who bring you BlackBerry phone service, provided a client testimonial for the XIV storage system. Before the XIV, RIM suffered high storage costs and per-volume software licensing. Over the past 15 months, RIM deployed XIV as a corporate standard. With the XIV, they have had 100 percent up-time, and enjoyed 50 percent costs savings compared to their previous storage systems. They have increased capacity 300 percent, without any increase to their storage admin staff. XIV has greatly improved their procurement process, as they no longer need to "true up" their software licenses to the volume of data managed, a sore point with their previous storage vendor.
Mainframe Innovations and Integration
Tom Rosamillia, IBM General Manager of the System z mainframe platform, presented on mainframe servers. After 40 years, IBM's mainframe remains the gold standard, able to handle hundreds of workloads on a single server, facilitating immediate growth with scalability. The key values of the System z mainframe are:
Industry leading virtualization, management and qualities of service
A comprehensive portfolio for business intelligence and data warehousing
The premier platform for modernizing the enterprise
A large and growing portfolio of leading applications and ISV support
Steve Phillips, CIO of Avnet, presented the client testimonial for their use of a System z10 mainframe. Last year, Avnet was ranked Fortune's Number One "Most admired" for Technology distribution. Avnet distributes technology from 300 suppliers to over 100,000 resellers, ISVs and end users. They have modernized their system running SAP on System z with DB2 as the database management system, using Hypersockets virtual LAN inside the server to communicate between logical partitions (LPARs). The folks at Avnet especially like the ability for on-the-fly re-assignment of capacity. This is used for end-of-quarter peak processing, and to adjust between test and development workloads. They also like the various special purpose engines available:
z Integrated Information Processor (zIIP) for DB2 workloads
z Application Assist Processor (zAAP) for Java processing under WebSphere
Integrated Facility for Linux (IFL) for Linux applications
Cloud Computing: Real Capabilities, Real Stories
Mike Hill, IBM Vice President of Enterprise Initiatives, presented on IBM's leadership in cloud computing. He covered three trends that are driving IT today. First, there is a consumerization and industrialization of IT interfaces. Second, a convergence of the infrastructure that is driving a new focus on standards. Third, delivering IT as a service has brought about new delivery choices. The result is cloud computing, with on-demand self-service, ubiquitous network access, location-independent resource pooling, rapid elasticity, and flexible pricing models. Government agencies and businesses in Retail, Manufacturing and Utilities are leading the charge to cloud computing.
Mike covered IBM's five cloud computing deployment models, and shared his views on which workloads might be ready for cloud, and which may not be there yet. Organizations are certainly seeing significant results: reduced labor costs, improved capital utilization, reduced provisioning cycle times, improved quality through reduced software defects, and reduced end user IT support costs.
Mitch Daniels, Director of Technology at ManTech International Corporation, presented the customer testimonial for an IBM private cloud for Development and Test. ManTech chose a private cloud because they work with US Federal agencies such as the Department of Defense, Homeland Security and the Intelligence community. The private cloud was built from:
IBM Cloudburst virtualized server environment
Tivoli Unified Process to document process and workflow
Tivoli Service Automation Manager to request, deliver and manage IT services
Tivoli Self-Service Portal and Service Catalog to allow developers and testers to request resources as needed
The result: ManTech saved 50 percent in labor costs, and can now provision development and test resources in minutes instead of weeks.
The IBM Transformation Story
Leslie Gordon, IBM Vice President of Application and Infrastructure Services Management, presented IBM's own transformation story, becoming the premier "Globally Integrated Enterprise". Based on IBM's 2009 CIO study, CIOs must balance three roles with seemingly contradictory demands:
Make innovations real: be both an insightful visionary and an able pragmatist
Raise the Return on Investment (ROI) of IT: find savvy ways to create value while also being ruthless at cutting costs
Expand the business impact of IT: be a collaborative business leader with the other C-level executives, but also an inspiring manager for the IT staff
In this case, IBM drinks its own champagne, using its own solutions to help run its internal operations. In 1997, IBM ran over 15,000 applications; this has been simplified down to 4,500 applications today. Thousands of servers were consolidated to Linux on System z mainframes. The application workloads were categorized as Blue, Bronze, Silver, and Gold to help prioritize the consolidation. IBM's key lessons from all this were:
Gather data at the business unit level, but build the business case from an enterprise view.
Start small and monitor progress continually; run operations concurrently with transformational projects.
Address cultural and organizational changes by deploying transformation in waves.
I found the client testimonials insightful. It is always good to hear that IBM's solutions work "as advertised" right out of the box.
Continuing my coverage of the IBM Dynamic Infrastructure Executive Summit at the Fairmont Resort in Scottsdale, Arizona, we had a day full of main-tent sessions. Here is a quick recap of the sessions presented in the morning.
Leadership and Innovation on a Smarter Planet
Todd Kirtley, IBM General Manager of the western United States, kicked off the day. He explained that we are now entering the Decade of Smart: smarter healthcare, smarter energy, smarter traffic systems, and smarter cities, to name a few. One of those smarter cities is Dubuque, Iowa, nicknamed the Masterpiece on the Mississippi. Mayor Roy Buol of Dubuque spoke next, giving his testimonial on working with IBM. I have never been to Dubuque, but it looks and sounds like a fun place to visit. Here is the [press release] and a two-minute [video].
Smarter Systems for a Smarter Planet
Tom Rosamillia, IBM General Manager of the System z mainframe platform, presented on smarter systems. IBM is intentionally designing integrated systems to redefine performance and deliver the highest possible value for the least amount of resource. The five key focus areas were:
Enabling massive scale
Organizing vast amounts of data
Turning information into insight
Increasing business agility
Managing risk, security and compliance
The Future of Systems
Ambuj Goyal, IBM General Manager of Development and Manufacturing, presented on the future of systems. For example, reading 10 million electricity meters monthly is only 120 million transactions per year, but reading them daily is 3.65 billion, and reading them every 15 minutes results in over 350 billion transactions per year. What would it take to handle this? Beyond just faster speeds and feeds, beyond consolidation through virtualization and multi-core systems, beyond pre-configured fit-for-purpose appliances, there will be a new level of integrated systems. Imagine a highly dense integration with over 3,000 processors per frame, over 400 Petabytes (PB) of storage, and 1.3 PB/sec of bandwidth. Integrating software, servers and storage will make this big jump in value possible.
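The arithmetic behind those transaction counts is simple enough to check. Here is a minimal back-of-the-envelope sketch, assuming 10 million meters, a 365-day year, and 96 fifteen-minute intervals per day:

```python
# Back-of-the-envelope math for the smart-meter example above.
# Assumptions: 10 million meters, 12 monthly reads, a 365-day year,
# and 96 fifteen-minute intervals per day (4 per hour x 24 hours).
METERS = 10_000_000

monthly_reads = METERS * 12          # 120,000,000 per year
daily_reads = METERS * 365           # 3,650,000,000 per year
interval_reads = METERS * 365 * 96   # 350,400,000,000 per year

print(f"Monthly:   {monthly_reads:,} reads per year")
print(f"Daily:     {daily_reads:,} reads per year")
print(f"15-minute: {interval_reads:,} reads per year")
```

At a quarter-hour cadence the workload grows nearly 3,000-fold over monthly reads, which is the point Ambuj was making about needing more than just faster speeds and feeds.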
POWERing your Planet
Ross Mauri, IBM General Manager of Power Systems, presented the latest POWER7 processor server product line. The IBM POWER-based servers can run any mix of AIX, Linux and IBM i (formerly i5/OS) operating system images. Compared to the previous POWER6 generation, POWER7 servers are four times more energy efficient and deliver twice the performance at about the same price. For example, an 8-socket p780 with 64 cores (eight per socket) and 256 threads (four threads per core) achieved a record-breaking 37,000 SAP users in the standard SD 2-tier benchmark, beating out 32-socket and 64-socket M9000 SPARC systems from Oracle/Sun and 8-socket Nehalem-EX Fujitsu 1800E systems. See the [SAP benchmark results] for full details. With more TPC-C performance per core, the POWER7 is 4.6 times faster than HP Itanium and 7.5 times faster than the Oracle Sun T5440.
This performance can be combined with incredible scalability. IBM's PowerVM outperforms VMware by 65 percent and provides features like "Live Partition Mobility," which is similar to VMware's VMotion capability. IBM's PureScale allows DB2 to scale out across 128 POWER servers, beating out Oracle RAC clusters.
IBM AIX on POWER systems is also the most reliable UNIX operating system: 2.3 times more reliable than Oracle Sun Solaris on SPARC, HP-UX or Apple MacOS, and 10 times more reliable than Windows 2008 Server on x86 platforms. See the [ITIC 2009 Global Server Hardware and Server OS Reliability Survey].
Analytics and Information
The final speaker of the morning was Greg Lotko, IBM Vice President of Information Management Warehouse solutions. Analytics are required to gain greater insight from information, and this can result in better business outcomes. The [IBM Global CFO Study 2010] shows that companies that invest in business insight consistently outperform all other enterprises, with 33 percent more revenue growth, 32 percent more return on invested capital, and 12 times more earnings (EBITDA). Business Analytics is more than just traditional business intelligence (BI). It tries to answer three critical questions for decision makers:
What is happening?
Why is it happening?
What is likely to happen in the future?
The IBM Smart Analytics System is a pre-configured integrated system appliance that combines text analytics, data mining and OLAP cubing software on a powerful data warehouse platform. It comes in three flavors: the Model 5600 is based on System x servers, the Model 7600 on POWER7 servers, and the Model 9600 on System z mainframe servers.
IBM has over 6000 business analytics and optimization consultants to help clients with their deployments.
While this might appear to be "Death by PowerPoint", I think the panel of presenters did a good job providing real examples to emphasize their key points.
While clients and IBM executives were in meetings today, in and around the Fairmont resort here in Scottsdale, Arizona, I helped to set up the "Solutions Showcase". There were three stations:
Smarter Systems
David Ayd and I manned this one, covering storage and server systems. From left to right: a fully populated 15-module XIV storage system; my laptop running the XIV GUI; a two-socket 16-core POWER p770 server; a solid-state drive; a PS702 POWER blade; my book Inside System Storage: Volume I; an HX5 x86 blade; a four-socket 16-core x3850 M3 server with MAX5 memory extension; David's laptop with various POWER and System x presentations; and our Kaon V-Osk interactive plasma screen display.
Smarter Clouds
Eric Kern manned the Smarter Clouds station. He had live guest images on the IBM Developer and Test cloud, which won the "Best of Interop" award up in Las Vegas this week. I covered IBM's cloud offering in my post [Three Things To Do on the IBM Cloud].
Smarter Data Centers
Ken Schneebeli manned the "Smarter Data Centers" station. He directed people out to the parking lot to see Brian Canney and the Portable Modular Data Center (PMDC). The one here is 8.5 feet by 8.5 feet by 40 feet in size and can be configured and deployed to any location in 12-14 weeks. It can hold any mix of IBM and non-IBM equipment, provided it fits within the physical dimensions. Want a DS8700 disk system? The PMDC can hold up to a 3-frame configuration of the DS8700. Want an eclectic mix of Sun, HP and Dell servers with HDS and EMC disk in your PMDC? IBM can do that too.
After we finished setup, we joined the clients at the "Welcome Reception" on the Lagoon Lawn. The weather was quite pleasant.
Special thanks to Jasdeep Purdhani, Lisa Gates, and Kelly Olson for their help organizing this event.
This week, Tuesday, Wednesday and Thursday, I am at the IBM Dynamic Infrastructure Executive Summit at the beautiful Fairmont Resort in Scottsdale, Arizona. This is a mix of indoor and outdoor meetings, one-on-ones with IBM executives, and main-tent sessions.
The Solutions Showcase will cover the following:
Smarter Systems
As the bar for performance gets higher and the need to manage, store and analyze massive amounts of information escalates, systems must scale to meet the needs of the business. This station features the latest server and storage technology innovations, including POWER7, eX5, XIV, ProtecTIER, SONAS, and System z Solution Editions.
Smarter Data Centers
Today’s data centers are under extreme power and cooling pressures and space constraints. How can you get more out of your existing facility, while planning for future requirements? IBM energy efficiency consultants will tell you how you can reduce both CAPEX and OPEX and plan for future growth with consolidation and virtualization, energy-efficient (ENERGY STAR) equipment and modular data center solutions. Be sure to check out the IBM Portable Modular Data Center (PMDC) that fits in a standard shipping crate!
Smarter Clouds
IBM’s Cloud Computing solutions provide you with flexible, dynamic, secure and cost-efficient delivery choices: from pay-per-use (by the hour, week or year) at IBM cloud centers around the world, to conditioning your infrastructure to build your own private cloud, to out-of-the-box cloud solutions that are quick and easy to deploy. Which workloads are the best fit for cloud computing? How do you decide which cloud computing model is right for your organization? Cloud experts will talk about the options, give you recommendations based on your business objectives and help you get started.
It seems everyone is talking about stacks, appliances and clouds.
On StorageBod, fellow blogger Martin Glassborow has a post titled [Pancakes!] He feels that everyone from Hitachi to Oracle is turning into the IT equivalent of the International House of Pancakes [IHOP], offering integrated stacks of software, servers and storage.
Cisco introduced its "Unified Computing System" about a year ago, [reinventing the datacenter with an all-Ethernet approach]. Cisco does not offer its own hypervisor software or storage, so there are two choices. First, Cisco has entered a joint venture, called Acadia, with VMware and EMC to form the Virtual Computing Environment (VCE) coalition. The resulting stack was named Vblock, which one blogger hyphenated as Vb-lock to raise awareness of the proprietary vendor lock-in nature of this stack. Second, Cisco, VMware and NetApp issued a similar set of [Barney press releases] to announce a viable storage alternative for those not married to EMC.
"Only when it makes sense. Oracle/Sun has the better argument: when you know exactly what you want from your database, we’ll sell you an integrated appliance that will do exactly that. And it’s fine if you roll your own.
But those are industry-wide issues. There are UCS/VCE-specific issues as well:
Cost. All the integration work among 3 different companies costs money. They aren’t replacing existing costs – they are adding costs. Without, in theory, charging more.
Lock-in. UCS/Vblock is, effectively, a mainframe with a network backplane.
Barriers to entry. Are there any? Cisco flagged hypervisor bypass and large memory support as unique value-add – and neither seems any more than a medium-term advantage.
BOT? Build, Operate, Transfer. In theory Vblocks are easier and faster to install and manage. But customers are asking that Acadia BOT their new Vblocks. The customer benefit over current integrator practice? Lower BOT costs? Or?
Price. The 3 most expensive IT vendors banding together?
Longevity. Industry “partnerships” don’t have a good record of long-term success. Each of these companies has its own competitive stresses and financial imperatives, and while the stars may be aligned today, where will they be in 3 years? Unless Cisco is piloting an eventual takeover."
Fellow blogger Bob Sutor (IBM) has an excellent post titled [Appliances and Linux]. Here is an excerpt:
"In your kitchen you have special appliances that, presumably, do individual things well. Your refrigerator keeps things cold, your oven makes them hot, and your blender purees and liquifies them. There is room in a kitchen for each of these. They work individually but when you are making a meal they each have a role to play in creating the whole.
You could go out and buy the metal, glass, wires, electrical gadgets, and so on that you would need to make each appliance, but it is faster, cheaper, and undoubtedly safer to buy them already manufactured. For each device you have a choice of providers and you can pay more for additional features and quality.
In the IT world it is far more common to buy the bits and pieces that make up a final solution. That is, you might separately order the hardware components, the operating system, and the applications, and then have someone put them all together for you. If you have an existing configuration you might add more blades or more storage devices.
You don’t have to do this, however, in every situation. Just from a hardware perspective, you can buy a ready-made machine just waiting for the on switch to be flicked and the software installed. Conversely, you might get a pre-made software image with operating system and applications in place, ready to be provisioned to your choice of hardware. We can get even fancier in that the software image might be deployable onto a virtual machine and so be a ready made solution runnable on a cloud.
Thus in the IT world we can talk about hardware-only appliances, software-only appliances (often called virtual software appliances), and complete hardware and software combinations. The last is most comparable to that refrigerator or oven in your kitchen."
If your company was a restaurant, how many employees would you have on hand to produce your own electricity from gas generators, pump your own water from a well, and assemble your own toasters and blenders from wires and motors? I think this is why companies are re-thinking the way they do their own IT.
Rather than business-as-usual, perhaps a mix of pre-configured appliances, consisting of software, server and storage stacked to meet a specific workload, connected to public cloud utility companies, might be the better approach. By 2013, some analysts feel that as many as 20 percent of companies might not even have a traditional IT datacenter anymore.
The private cloud:
“By employing techniques like virtualization, automated management, and utility-billing models, IT managers can evolve the internal datacenter into a ‘private cloud’ that offers many of the performance, scalability, and cost-saving benefits associated with public clouds. Microsoft provides the foundation for private clouds with infrastructure solutions to match a range of customer sizes, needs and geographies.”
The public cloud:
“Cloud computing is expanding the traditional web-hosting model to a point where enterprises are able to off-load commodity applications to third-party service providers (hosters) and, in the near future, the Microsoft Azure Services Platform. Using Microsoft infrastructure software and Web-based applications, the public cloud allows companies to move applications between private and public clouds.”
Finally, I saw this from fellow blogger Barry Burke (EMC), aka the Storage Anarchist, titled [a walk through the clouds], which is really a two-part post.
The first part describes a possible future for EMC customers written by EMC employee David Meiri, envisioning a wonderful world with "No more Metas, Hypers, BIN Files...."
The vision is a pleasant one, and not far from reality. While EMC prefers to use the term "private cloud" to refer to both on-premises and off-premises-but-only-your-employees-can-VPN-to-it-and-your-IT-staff-still-manages-it flavors, the overall vision is available today from a variety of Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) providers.
A good analogy for "private cloud" might be a corporate "intranet" that is accessible only within the company's firewall. Intranets allowed companies to post information for employees on internal websites, using standard HTML and the standard web browsers already deployed on most PCs and workstations. Web pages running on an intranet can easily be moved to an external-facing website without much rework or trouble.
The second part has Barry claiming that EMC has made progress towards a "Virtual Storage Server" that might be announced at next month's EMC World conference.
Seriously?
When people hear "Storage Virtualization" most immediately think of the two market leaders, IBM SAN Volume Controller and Hitachi Data Systems (HDS) Universal Storage Platform (USP) products. Those with a tape bent might throw in IBM's TS7000 virtual tape libraries or Oracle/Sun's Virtual Storage Manager (VSM). And those focused on software-only solutions might recall Symantec's Veritas Volume Manager (VxVM), DataCore's SANsymphony, or FalconStor's IPStor products.
But what about EMC's failed attempt at storage virtualization, the Invista? After five years of failing to deliver value, EMC has so far publicized only ONE customer reference account, and I estimate that perhaps only a few dozen actual customers are still running on this platform. Compare that to IBM selling tens of thousands of SAN Volume Controllers, and HDS selling thousands of their various USP-V and USP-VM products, and you quickly realize that EMC has a lot of catching up to do. EMC first delivered Invista about 18 months after the IBM SAN Volume Controller, similar to their introduction of Atmos 18 months after our Scale-Out File Services (SoFS), and their latest CLARiiON-based V-Max coming out 18 months after IBM's XIV storage system.
So what will EMC's Invista follow-on "Virtual Storage Server" product look like? No idea. It might be another five years before you actually hear about a customer using it. But why wait for EMC to get their act together?
IBM offers solutions TODAY that can make life as easy as envisioned here. IBM offers integrated systems sold as ready-to-use appliances, customized "stacks" that can be built to handle particular workloads, residing on-premises or hosted at an IBM facility, and public cloud "as-a-service" offerings on the IBM Cloud.
My colleagues Harley Puckett (left) and Jack Arnold (right) were highlighted in today's Arizona Daily Star, our local newspaper, as part of an article on IBM's success and leadership in the IT storage industry. With 1,400 employees here in Tucson, IBM is Southern Arizona's 36th-largest employer.
Highlighted in the article:
DS8700 with the new Easy Tier feature
TS7650 ProtecTIER virtual tape library with data deduplication capability
LTO-5 tape and the new Long Term File System (LTFS)
XIV with the new 2TB drive, for a maximum per-rack usable capacity of 161 TB (a rough sketch of that capacity arithmetic follows below)
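For those curious where a number like 161 TB comes from, here is a minimal, hedged sketch. The module and drive counts and the roughly 10 percent reserve for spares and metadata are my assumptions for illustration, not figures from the article:

```python
# Rough estimate of XIV usable capacity with 2 TB drives.
# Assumptions (mine, for illustration): a full rack has 15 modules
# with 12 drives each, all data is mirrored (two copies kept), and
# about 10% of mirrored space is held back for spare rebuild
# capacity and system metadata.
MODULES = 15
DRIVES_PER_MODULE = 12
DRIVE_TB = 2

raw_tb = MODULES * DRIVES_PER_MODULE * DRIVE_TB   # 360 TB raw
mirrored_tb = raw_tb / 2                          # 180 TB after mirroring
usable_tb = mirrored_tb * 0.90                    # ~162 TB after reserves

print(f"Raw: {raw_tb} TB, mirrored: {mirrored_tb:.0f} TB, usable: ~{usable_tb:.0f} TB")
```

That lands close to the 161 TB figure quoted above; the exact usable number depends on how much space the system actually reserves.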