This week, I will be in Auckland, New Zealand for the [IBM System x and System Storage Technical Symposium]. This is a three-day event, with 35 unique sessions and labs. The agenda is organized with a keynote session in the beginning, followed by 12 time slots over three days, each slot offering five different break-out session topics to choose from. Here is a recap of Day 1:
The keynote was led by Phil Tasker, IBM Business Unit Executive (BUE) for STG Education Programs in Growth Markets, and then Matt Paterson, General Manager for Sales in New Zealand, said a few words. IBM is in the Top 10 Training Hall of Fame, and conducts over 40,000 classes worldwide, resulting in over 1.3 million student days of instruction. IBM Systems Lab and Training hosts over three dozen technical conferences like this one every year. This is the first time that the System x and Storage Symposium has been run in New Zealand, and based on the incredibly good turn-out, it will probably become a regular event.
Matt Ziegler - HPC
Matt Ziegler, IBM Senior HPC Solutions Architect for the iDataPlex marketing team, gave an introduction to HPC during the keynote, then provided more details in a break-out session.
In the High Performance Computing (HPC) market, IBM POWER used to be the dominant chipset, with over 200 of the top 500 supercomputers back in June 2001. Today, only about 50 use POWER. Instead, over 350 of the top 500 supercomputers use x86. HPC represents a 6.3 percent growth opportunity for compute, 9.3 percent growth for storage, and 8.6 percent growth for services.
IBM's leadership in energy efficiency applies to HPC as well. In the "Green 500", a ranking based on MFLOPS/Watt, 19 of the top 25 are from IBM. IBM's iDataPlex is the most energy efficient x86 platform, at 401 MFLOPS per Watt.
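In case the Green 500 metric is unfamiliar, it is simply sustained performance divided by power draw. A quick illustrative calculation (my own round numbers, not figures from the session):

MFLOPS per Watt = sustained MFLOPS / power draw in Watts

For example, a cluster sustaining 100 TFLOPS (100,000,000 MFLOPS) while drawing 250 kW (250,000 Watts) works out to 100,000,000 / 250,000 = 400 MFLOPS per Watt, right around the iDataPlex figure above.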
Overall, x86 is growing. In 2005, x86 had 48 percent of the market, RISC/Itanium had 39 percent, and mainframe had 12 percent. In 2009, x86 grew to 56 percent, RISC/Itanium dropped to 33 percent, and mainframe to 11 percent. By 2014, Matt projects that x86 will be 63 percent, RISC/Itanium will drop to 30 percent, and mainframe to 7 percent.
The most popular form factor for x86 is blades, growing from 8 percent in 2005 to 20 percent in 2009, and projected to reach 33 percent by 2014.
IBM's Storage Strategy in the Era of Smarter Computing
I gave this presentation twice today. It has evolved quite a bit from the version I presented in Orlando last July. Attendees appreciated that my colorful analogies and stories helped them better understand the concepts of Big Data analytics, Workload-Optimized systems, and Cloud Storage offerings.
SONAS Product Review and Demo
Rich Swain presented IBM's Scale-Out Network Attached Storage (SONAS) and provided a live demo connecting to a box here in New Zealand. This is a topic I often present at the Tucson Executive Briefing Center, but it is always good to hear someone else's spin.
Phil Tasker invited everyone to the Welcome Reception after the last sessions. There was food and drink, and prizes! One person won an Xbox-360 game console, and two people won iPads.
Special thanks to Anthony Vandewerdt, who sent me his version of this presentation, which he planned to present in Australia next week. I "smartened it up" (or whatever phrase is the opposite of "dumbed it down") for the technical audience.
Recovery procedures for single and double drive failures. A double drive failure on an XIV typically involves less recovery effort than on traditional RAID-5 disk systems, and in many cases results in no data loss whatsoever. I provided details on this in my blog post [Double Drive Failure Debunked: XIV Two Years Later], so no need to repeat myself here.
Replacing the Automatic Transfer Switch (ATS) non-disruptively. To support either single-phase or three-phase power sources, the XIV uses an ATS to take two independent power feeds and distribute them to the three Uninterruptible Power Supplies (UPS).
Built-in Migration capability to copy data off other disk systems over to the XIV.
Configuring Synchronous and Asynchronous mirroring using either the Fibre Channel or Internet Protocol ports.
Optimizing the use of XIV for VMware, AIX and other operating systems.
The IBM XIV Storage System is quite popular in New Zealand, with four times more boxes sold per capita than the other countries in the Asia Pacific region. I covered both the A14 model as well as the new Gen3 model.
Business Continuity/Disaster Recovery (BC/DR) Update: Lessons, Planning, Solutions
My colleague Vic Peltz from IBM Almaden presented on lessons learned from Hurricane Katrina and various other natural disasters. Unlike traditional presentations that focus on technology, Vic took a different approach, focusing on people and procedures. I was here last year when the earthquake hit Christchurch on the south island, so I was well aware that BC/DR was top of mind for many of the attendees. Throughout this week, I have felt tremors, and many of the locals told me that these happen all the time.
Introduction to IBM Storwize V7000
I knew I was in trouble when the request for me to present Storwize sounded like something from [Mission Impossible]:
"Good morning, Mr. Pearson. Your mission, should you choose to accept it, involves presenting Storwize V7000 in Auckland, New Zealand. You may also present the Storwize V7000 Unified, but it is essential that you not cover the SAN Volume Controller or SONAS products from which they are based upon, as you will not have enough time. The audience is very technical, so be careful. As always, should any questions come up that you cannot answer, the conference coordinators will disavow all knowledge of your actions, nor reimburse your laundry charges. This message will self-destruct in five seconds."
Well, I accomplished my mission in 75 minutes. I was able to cover the block-only version of the IBM Storwize V7000, with support for clustering the control enclosures, expansion drawers and external storage virtualization. I then spent a few minutes on the block-and-file Storwize V7000 Unified, which adds support for CIFS, NFS, HTTPS, FTP and SCP protocols through two new "file modules", with integrated support for backup and anti-virus checking. I covered both IBM Easy Tier for sub-LUN automated tiering between Solid-State Drives (SSD) and spinning disk, as well as Active Cloud Engine for file-based movement between disk and tape.
Continuing my coverage of the [IBM System x and System Storage Technical Symposium], I thought I would start with some photos. I took these with my cell phone and, without realizing how much it would cost, uploaded them to Flickr at international data roaming rates. Oops!
Here are some of the banners used at the conference. Each break-out session room was outfitted with a "Presentation Briefcase" that had everything a speaker might need, including power plug adapters and dry-erase markers for the whiteboard. What a clever idea!
Here is a recap of the third and final day:
Understanding IBM's Storage Encryption Options
Special thanks to Jack Arnold for providing me his deck for this presentation. I presented IBM's leadership in encryption standards, including the [OASIS Key Management Interoperability Protocol] that allows many software and hardware vendors to interoperate. IBM offers the IBM Tivoli Key Lifecycle Manager (TKLM v2) for Windows, Linux, AIX and Solaris operating systems, and the IBM Security Key Lifecycle Manager (v1.1) for z/OS.
Encrypting data at rest can be done in several ways: by the application at the host server, in a SAN-based switch, or at the storage system itself. I presented how IBM Tivoli Storage Manager, the IBM SAN32B-E4 SAN switch, and various disk and tape devices accomplish this level of protection.
NAS @ IBM
Rich Swain, IBM Field Technical Sales Specialist for NAS solutions, provided an overview of IBM's NAS strategy and the three products: Scale-Out Network Attached Storage (SONAS), Storwize V7000 Unified, and N series.
IBM System Networking Convergence CEE/DCB/FCoE
Mike Easterly, IBM Global Field Marketing Manager for IBM System Networking, presented on network convergence. He wanted to emphasize that "Convergence is not just FCoE!" Rather, it is about bringing together FCoE with iSCSI, CIFS, NFS and other Ethernet-based protocols. In his view, "All roads lead to Ethernet!"
There are a lot of new standards that didn't exist a few years ago, such as PCI-SIG's Single Root I/O Virtualization [SR-IOV], Virtual Ethernet Port Aggregator [VEPA], [VN-Tag], Data Center Bridging [DCB], Layer-2 Multipath [L2MP], and my favorite: Transparent Interconnect of Lots of Links [TRILL].
Last year, IBM acquired Blade Network Technologies (BNT), which was the company that made IBM BladeCenter's Advanced Management Module (AMM) and BladeCenter Open Fabric Manager (BOFM). BNT also makes Ethernet switches, so it has been merged with IBM's System Storage team, forming the IBM System Storage and Networking team. Most of today's 10GbE is either fiber optic, Direct Attach Copper (DAC) that supports cable lengths up to 8.5 meters, or 10GBASE-T, which provides longer distances over twisted pair. IBM's DS3500 uses 10GBASE-T for its 10GbE iSCSI support.
Last month, IBM announced 40GbE! I missed that one. The IT industry also expects to deliver 100GbE by 2013. For now, these will be used as up-links between other switches, as most servers don't have the capacity to pump this much data through their buses. With 40GbE and 100GbE, it would be hard to ignore Ethernet as the common network standard to drive convergence.
Fibre Channel, in the form of FCP and FICON, is still the dominant storage networking technology, but it is expected to peak around 2013 and start declining thereafter in favor of iSCSI, NAS and FCoE technologies. Already, enhancements like "Priority-based Flow Control", made to Ethernet to support FCoE, have helped iSCSI and NAS deployments as well.
The iSCSI protocol is being used with Microsoft Exchange, PXE Boot, Server virtualization hypervisors like VMware and Hyper-V, as well as large Database and OLTP. IBM's SVC, Storwize V7000, XIV, DS5000, DS3500 and N series all support iSCSI.
IBM's [RackSwitch] family of products can help offload traffic at $500 per port, compared to traditional $2000 per port for IBM SAN32B or Cisco Nexus5000 converged top-of-rack switches.
IBM's System Networking strategy has two parts. For Ethernet, offer its own IBM System Networking product line as well as continue its partnership with Juniper Networks. For Fibre Channel and FCoE, continue strategic partnerships with Brocade and Cisco. IBM will lead the industry, help drive open standards to adopt Converged Enhanced Ethernet (CEE), provide flexibility and validate data center networking solutions that work end-to-end.
The keynote was led by Phil Tasker, IBM Business Unit Executive (BUE) for STG Education Programs in Growth Markets, followed by Joe Screnci, head of IBM Storage Sales for Australia, who said a few words. IBM is in the Top 10 Training Hall of Fame, and conducts over 40,000 classes worldwide, resulting in over 1.3 million student days of instruction. IBM Systems Lab and Training hosts over three dozen technical conferences like this one every year.
Next was Clod Barrera, Distinguished Engineer and Chief Technical Strategist for the IBM System Storage product line. He covered future trends in storage as they relate to IBM's Smarter Computing initiative.
Storage for the Clouds
Clod Barrera presented this break-out session on Cloud Storage. He covered why clouds matter, the various types and purposes of cloud, technology and architectures, and where IBM is headed to support this trend.
Storage for Cloud computing was a $1 billion USD business in 2010, and is expected to grow at a 32 percent CAGR, compared to 3.8 percent for non-cloud storage. Clod estimates that 10 to 15 percent of all storage will be in cloud deployments by 2015. Of this storage, analysts expect 50 percent in private clouds, and the other 50 percent in public clouds. For private clouds, clients are looking to "Cloudify" their existing IT infrastructures. For public clouds, the projects are mostly green field.
IBM is also looking to be the "arms dealer" of choice for Telcos and other companies looking to launch their own Cloud Services. IBM has a Cloud Services Provider Platform (CSP2) specifically to provide all the tools and technologies needed to make this possible.
Last month, IBM launched several new solutions for Cloud. The IBM Starter Kit for Cloud will help existing IT environments adopt cloud technologies. The IBM Service Agility Accelerator for Cloud is available for more advanced deployments. IBM Service Delivery Manager (ISDM) integrates a collection of software to provide complete integrated service management. IBM CloudBurst provides an integrated hardware-and-software stack for both x86 and POWER chipsets.
Multi-tenancy is also a big issue, and this varies depending on deployment model: IaaS, PaaS, or SaaS. Multi-tenancy is needed to help divide up management tasks, and to ensure that shared resources are paid for and meet SLA requirements accordingly.
Clod feels there are good reasons to use high performance, transactional SAN storage for VMware environments, versus NAS which many people consider simpler to deploy. IBM is also active in open standards, including SNIA's Cloud Data Management Interface [CDMI].
Journey to the Private Cloud
Gary Luke from Brocade provided this session on IBM's SAN384B-2 and SAN768B-2 SAN directors. Brocade is one of IBM's suppliers for SAN switches, and thanks to TRILL being adopted last August by IETF, supports multi-hop FCoE configurations! However, Gary did not talk about FCoE, but rather native FCP and FICON support in these new directors.
According to VMware, only 30 percent of x86 workloads are virtualized by any hypervisor. Gary feels that server virtualization and the use of Solid-State Drives (SSD) in disk arrays are driving existing 8 Gbps SANs to upgrade to 16 Gbps. He also feels that Fibre Channel-based SANs are best positioned to handle unpredictable peaks in a 24-by-7 world.
The SAN384B-2 can house up to 256 ports (8 Gbps) or 192 ports (16 Gbps) in a four-slot, 9U chassis. The SAN768B-2 can handle twice as many, in a 12U chassis. The nice thing about the 16 Gbps ports is that they can auto-negotiate down to 10, 8, 4 and 2 Gbps. This is far better than typical N-2 support, often expressed as the list of speeds supported, such as 4/2/1 or 8/4/2. An upcoming FOS release will allow people with previous generation SAN384B-1/SAN768B-1 directors to move their 8 Gbps blades over to the new SAN384B-2/SAN768B-2 generation models.
Since most CWDM and DWDM equipment only supports a maximum of 10 Gbps FC and 10GbE, Brocade's 16 Gbps ports can automatically drop down to 10 Gbps for direct attachment to CWDM/DWDM, eliminating the step-down box normally required.
A major advancement is the change from copper to optical "Inter-Chassis Links" (ICL). Unlike Inter-Switch Links (ISL) that use up SAN ports on each box, the ICL is faster, more efficient and does not consume ports. Normally, clients would connect two directors together, but now you can connect up to six chassis together! For example, you can have four SAN384B-2 connected to your host servers, ICL-attached to two SAN768B-2, which are then connected to your disk and tape storage devices. The fiber optic ICLs allow for distances up to 50 meters. Combining six chassis together would allow the complex to support over 3,000 ports (8 Gbps) or 2,300 ports (16 Gbps).
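As a quick sanity check on those numbers (my own arithmetic, assuming six fully populated SAN768B-2 chassis, each holding twice the SAN384B-2 port counts mentioned above, so 512 ports at 8 Gbps or 384 ports at 16 Gbps):

6 chassis x 512 ports = 3,072 ports (8 Gbps)
6 chassis x 384 ports = 2,304 ports (16 Gbps)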
The SAN384B-2 and SAN768B-2 support "virtual SAN" logical switches, traffic isolation (TI) zones, fabric-assigned WWNNs, and fabric-based QoS.
Lastly, Brocade offers a free utility called [SANhealth] that will gather data from your b-type, m-type and even Cisco MDS-based SAN. The data can then be sent to Brocade for analysis, and Brocade will then email back some nice Visio graphs, spreadsheets and other analysis results on the health of your SAN.
Since Clod Barrera introduced IBM's Smarter Computing initiative during yesterday's keynote session, I took it to the next lower level, with a presentation on how IBM's Storage Strategy aligns with the Smarter Computing approach.
Deduplication -- It's Not Magic, It's Math!
Local IBMer Paul Rizio presented this high-level session on the concepts of data deduplication, and how it is implemented in IBM's N series, TSM and ProtecTIER virtual tape libraries. I first met Paul earlier this year when we were both instructors at Top Gun classes we held in Auckland, New Zealand and Sydney, Australia.
IBM Information Archive for files, email and eDiscovery
This was a reprise of the presentation I gave last July in Orlando, Florida (see my blog post [IBM Storage University - Day 1]). I explained the differences between backup and archive, the differences between Tivoli Storage Manager and System Storage Archive Manager, and the Information Archive (IA) appliance. The Information Archive for files, email and eDiscovery bundle combines IA hardware with content collectors for files and email, plus eDiscovery Analyzer and eDiscovery Manager software.
What are Industry Consultants saying about IBM Storage?
Vic Peltz, from our IBM Almaden Research Center, gave this lively presentation on how IT industry analysts gather their information and structure their findings into various models. For many in the audience, this would be their first exposure to concepts like a "Magic Quadrant", "MarketScope" and the various stages of the "Hype Cycle".
IBM SONAS and the Smart Business Storage Cloud
The title of this session just rolls off my tongue, similar to "James and the Giant Peach" or "Harold and the Purple Crayon". I had presented this back in July (see my blog post [IBM Storage University - Cloud Storage]). This time, I had updated the materials to reflect the new SONAS R1.3 release, and the new IBM SmartCloud offerings announced last month.
Of course the big news is that U.S. President Barack Obama is here in Australia, with a stop in Canberra (not far from Melbourne), followed by a stop in Darwin on the north side of this country. This is his first official visit to Australia as president.
IBM Tivoli Storage Productivity Center v4.2.2 Overview and Update
This was an updated version of the presentation I gave last July in Orlando, Florida (see my post [IBM Storage University - Day 1]). Since it might have been a while since the Australian audience had heard about the latest and greatest for Tivoli Storage Productivity Center, I decided to cover the enhancements of 4.2.0, 4.2.1 and 4.2.2 combined.
IBM Tivoli Storage Productivity Center is an important part of IBM's "Storage Hypervisor" solution, combining a single pane of glass for management with non-disruptive storage virtualization from SVC and Storwize V7000.
IBM Storwize V7000 and SVC integration with VMware
Alexi Giral from IBM Sydney presented this session on how Storwize V7000 and SVC serve as the "Storage Hypervisor" for VMware server virtualization environments. The focus was on the FCP and iSCSI block-only access modes of these devices, although one could use IBM Storwize V7000 Unified to provide NFS file-level access to VMware. Alexi covered both VMware vSphere v4 and v5, as there are a few differences.
IBM Storwize V7000 and SVC support thin provisioning, VMware's VAAI interface and VMware's Site Recovery Manager, and provide a storage management plug-in for VMware's vCenter. The SVC has extended the distance for split-cluster configurations that support VMware's vMotion live partition mobility and High Availability (HA) up to 300 km using active DWDM.
Tape Storage Reinvented: What's New and Exciting in the Tape World?
Special thanks to Jim Fisher and Jim Karp for providing the presentation, videos and supporting materials for this session. I gave this as the first break-out session on Tuesday, and then repeated it as the last break-out session on Thursday. Several of the attendees in the audience mocked my title, with taunts like "What could be NEW or EXCITING about tape?" I covered four key areas:
The new TS1140 tape drive, including the corresponding model-JC tape that holds 4TB native (12 TB compressed!).
The enhanced TS3500 with the Tape Library Connector Shuttle. I had a video that shows how tapes can be sent from one TS3500 tape library string to another.
The new Linear Tape File System (LTFS), both the single drive edition and the library edition
The new 3592-C07 FICON controller for our mainframe clients
By the end of the session, the folks that taunted me were honestly impressed that they learned a few things, and had not realized so much has been developed recently in the world of tape.
Well, it's Tuesday again, and you know what that means!
This Thursday is the Thanksgiving holiday here in the United States, so instead of announcing IBM products, I wanted to announce the general availability of my latest book, [Inside System Storage: Volume III].
This book includes blog posts from May 2008 to March 2009, along with the ever popular behind-the-scenes commentary on what was going on during IBM's launch of the Information Infrastructure initiative.
Do you know someone who celebrates Chanukah, Christmas, Kwanzaa, or the Winter Solstice, and has a hard time finding the right gift?
Do you know a client or IBM Business Partner that would appreciate a nominally-priced gift to thank them for their business?
Do you know someone newly hired into IBM or another IT company that could benefit from behind-the-scenes insight and commentary?
As with the other two volumes, Inside System Storage: Volume III is available in your choice of paperback, hardcover, and eBook (Adobe PDF) format.
In the spirit of Thanksgiving, I would like to thank my editor, Susan Pollard, who put in the extra effort, working evenings and weekends, to get this book done in time for the upcoming holiday season. For those outside the United States, there is an American tradition to shop in brick-and-mortar stores on Black Friday (the day after Thanksgiving) and to shop on-line for books like mine on Cyber Monday (the Monday after Thanksgiving).
I would also like to thank my publisher, Lulu.com, for upgrading me to "Spotlight" level, so now I have a spotlight page titled [Books Written by Tony Pearson], making it easy for you to order any of my books in various formats.
And last, but not least, I would like to thank all my friends and family that were supportive these past few difficult months while I was putting this book together.
Next month, I will be in Las Vegas, Dec 4-8, speaking at Gartner's [Data Center Conference]. If you order a book today, and bring it with you to the IBM booth at the Solution Expo, I can sign it for you!
Back in October, Daryl Pereira asked me for an interview about my blog. I get a lot of these requests, but this one was different. Daryl is on the IBM developerWorks team, and he was going to interview me for the "Great Mind Challenge". This is a fun competition for a group of about 100 college students from San Jose State University to get them to learn blogging best practices and techniques.
This was the one post that put me into the #1 position, with over 70,000 hits so far and counting, and that does not include all the people who read my blog through feed readers or the various cross-postings on IBM Storage Community and IBM Virtual Briefing Center.
This blog post was part of a series on IBM Watson, the computer that beat two humans on the "Jeopardy!" television game show. Having worked closely with the IBM Research scientists to understand how IBM Watson worked so that I could blog about it, I thought a good way for readers to appreciate how it was put together was to explain how to assemble a scaled-down version. My inspiration was an article by John Pultorak that explained [how to build your own Apollo Guidance Computer (AGC) in your basement].
The blog post series proved to be a big hit. IBM Watson helps to demonstrate many modern computer techniques, including business analytics of Big Data, Cloud Computing, and parallel programming techniques such as Hadoop. Showing that a "Watson Jr." could be built in your basement helped to emphasize that IBM Watson was made from hardware and software that are generally available today.
I am very proud of this blog post. I worked with Moshe Yanai and the rest of the XIV team to be completely accurate and correct, to set the right level of expectations. So many false statements and so much FUD had been thrown around about what would happen if a double drive failure occurred during the short 30-minute window of opportunity, and it turns out that in most cases, no data is lost, and in all other cases, the lost data can be easily identified and restored. In most cases, this requires less recovery effort than a double drive failure on a traditional RAID-5 disk array.
It was also an opportunity to try out Animoto to create a short and simple video. Normally, when marketing needs a video made, it will cost $25,000 USD or more, and take weeks to produce. I was able to get this video done in just a few hours with no out-of-pocket expenses.
After this post, nearly all FUD in the blogosphere about double drive failures disappeared. More importantly, XIV sales that quarter (2Q2010) were substantially better than the prior quarter. Many XIV sales reps credit this blog post for that huge bump in XIV sales! I guess this could be the Tony Pearson equivalent of the [Colbert Bump].
In 2009 and 2010, I was the third most influential blogger on IBM's Developerworks, and now in 2011, I have risen to number one position! Internally, we call this "Winning the Devy" (like an Emmy, but for DeveloperWorks bloggers). I would like to thank all my readers for continuing to share in the conversation!
Next week, I will be in Las Vegas for the 30th annual [Data Center Conference]. This is my fourth year attending. For a bit of nostalgia, check out my blog posts from the [2008 event] and the [2009 event].
This week, I will be in Las Vegas for the 30th annual [Data Center Conference]. For those on Twitter, follow the conference on hashtag #GartnerDC, and follow me at [@az990tony].
Once again, I will be working the IBM Exhibition Booth of the Solution Showcase, attending keynote and break-out sessions, and meeting with clients and analysts. Today is mostly setting up the booth, getting my registration badge and materials, attending an orientation meeting for first-timers, and finishing off the evening with a networking event to get the party started!
Traffic to and from the hotel was a mess today because of the [Las Vegas Strip at Night Rock-n-Roll Marathon]. The entire Las Vegas Boulevard was blocked off from 2pm to 11pm, causing taxis some headaches getting to and from each hotel. This marathon included a "Stiletto Dash" where women had to run in shoes that had at least three inch heels! (Only in Las Vegas!)
The conference is organized into 8 tracks:
Navigating the Journey to Cloud-Delivered Services
Achieving and Maintaining IT Operational Excellence
Modernizing Your Storage Strategy to Keep Pace with Burgeoning Demand
Ensuring Your Business Continuity Management Plan Reflects Today's Realities and Tomorrow's Challenges
Virtualization: Moving at Light Speed While Leveraging Your Existing Investments
The Future of Servers and Operating Systems
Data Center Modernization: Staying Agile in Chaotic Times
Pervasive Mobility: What Infrastructure and Operations Needs to Know Now
I am glad to see that storage got its own track this year! If you are attending the conference, here are the sessions that IBM is featuring for Monday:
IBM: Watson and Your Data Center
This is a lunch-time talk. Steve Sams, IBM VP of Sites and Facilities, will explain how to leverage Watson-like analytic approaches to provide flexible, cost-effective data center solutions. Analytics can be used to better align IT to the business needs, optimize server, storage and network utilization and improve data center design.
IBM: University of Rochester Medical Center cracks the code on data growth
Rick Haverty, Director of Infrastructure for University of Rochester Medical Center (URMC), will discuss how his team built a storage strategy that transformed their environment to bring savings right to their bottom line without sacrificing the speed, criticality and performance requirements of their imaging and EMR systems. I will be there to introduce Rick at the beginning, and then moderate the Q&A after the talk.
Solution Showcase Reception
The Solution Showcase opens up Monday night with a reception, serving food and drinks. Look for the IBM Portable Mobile Data Center (PMDC), the big trailer on the show floor. We also have an exhibit booth, across from the PMDC, to ask questions and talk with various IBM experts. You can look for me and the other experts wearing white lab coats!
This week, I will be in Las Vegas for the 30th annual [Data Center Conference]. For those on Twitter, follow the conference on hashtag #GartnerDC, and follow me at [@az990tony]. IBM is a Global Partner and Platinum Sponsor for this event. Here is a recap of some of the Monday morning keynote sessions:
Welcome and Introduction
Monday morning kicked off with a welcome introduction from the conference coordinators. This is the highest attendance for this conference in its 30-year history, with 60 percent of the attendees here for their first time, and 18 percent having attended only once before. This is the fourth time I am attending. Half of the attendees represent corporations with 20,000 employees or more, and the other half are from smaller companies and government agencies. The top five industries represented are financial services, public sector, healthcare, manufacturing, and energy.
This conference uses a clever "interactive polling" where hand-held devices can be used to select choices, and results of over 800 voters are presented immediately on the big screen.
For IT budgets, 42 percent plan to increase next year, 32 percent flat, and 26 percent lower, which are similar to the numbers last year. Of nine different IT challenges, the top three were managing storage growth, power/cooling issues, and adopting a Cloud strategy.
Top 10 Trends and how they will impact Data Center IT
The analyst presented top 10 business, technology and societal trends that will impact IT. He added a last-minute eleventh issue that he felt will impact everyone in 2012:
Consumerization and the Tablet. Back in 1997, a GB of flash memory cost $7,992 US dollars, and today that same GB costs only 25 cents. Employees are bringing their own devices to the workplace, and expecting IT support.
Infinite Data Center. You may never have to expand your floorspace again. Improvements in server and storage density can allow you to continually upgrade in place.
Energy Management. Data centers consume 100x more energy than the offices they support. The cost of energy is on par with the cost of the IT equipment itself. Energy management is becoming an enterprise-wide discipline. A key performance indicator (KPI) can be "compute per kW" or "compute per square foot".
Context Awareness. There are hundreds of thousands of apps for Android-based smart phones and iPhones. Context awareness allows an app to help business travelers in airports know what restaurants are nearby, their flight status, and alternate flights available, based entirely on their location.
Hybrid Clouds. By 2013, over 60 percent of cloud adoption will be to redeploy existing apps like email. Some 80 percent of cloud initiatives will be private or hybrid configurations. Customers want "good enough" technology, and thus Cloud will be mostly an augmentation strategy.
Fabric Computing. The opposite of fully-integrated stacks is the notion of having compute, memory and storage joined together via an interconnect fabric with software to manage the entire environment.
IT Complexity. Robert Glass's Law states that for every 25 percent increase in functionality, there is a 100 percent increase in complexity. See Roger Sessions' whitepaper [The IT Complexity Crisis: Danger and Opportunity] for more on this.
Patterns and Analytics. Big data and business analytics is a key platform. This is expected to grow 60 percent CAGR.
Impact of Virtualization. Virtualizing your environment should be considered a continuous process, not a one-time project. Many companies are running x86 servers at less than 55 percent utilization, which the speaker considers under-utilized. Virtual Desktop Infrastructure (VDI) is a trade-off; it may cost more but can have other business benefits to consider. The problem is that many IT shops are organized vertically (a server team, storage team, network team) but problems surface horizontally, and there is no "ownership" for the resolution. Some use "tiger teams" to address this. Companies should reward lateral thinking.
Social Media. Of the communications on cell phones by college students, 98.4 percent are text messages, and only 1.6 percent are voice phone calls. People search Google for "what was", but they search Twitter for "what is". Most of the growth on Twitter is in the 39-52 year-old demographic. The analyst felt that if your company is blocking or restricting access to Facebook, Twitter, YouTube or other social networking sites, then shame on you. I agree!
Flooding in Thailand. Over two million square feet of HDD production space were flooded, and this will impact HDD prices for 2012. Already, a 2TB drive that was selling for $79 at a local store is now selling for $190.
How To Get Your CFO's Support For Strategy and Funding
In the first of a series of "mastermind interviews", the analyst interviewed their own CFO, Chris Lafond. Ultimately, it is about business results. The company has grown 15-20 percent annually, from $250 million in annual revenue in 2003 to $1.3 billion in 2011, with 4,600 employees doing business in 85 countries. The company is focused on three business areas: Research, Consulting, and Events like this one. Chris does not approve 3-5 year projects, and instead requests that projects be broken up into year-long phases. ROI can be very misleading, and he asks instead for benefits and contributions to initiatives.
It is important to keep the horse in front of the cart. Accounting departments should not drive business decisions. For example, companies should not move to the public cloud just so that the accounting department can shift from CAPex to OPex. Try to depreciate as soon as possible. Likewise, green technologies and social responsibility are factors, but not drivers of business decisions. Acquisitions are a natural evolution of the market, so risk mitigation strategies should be in place in case your vendor of choice is acquired by someone you don't like.
For BC/DR planning, the firm has had a single-data-center approach, but Chris indicated that IT is looking to expand this. The single data center for one part of their business is in Florida, and the one for the other part is in Massachusetts, and both were recently impacted by hurricanes or earthquakes.
The "lightning round" asked Chris his thoughts, either thumbs up, thumbs down, or neutral, on single ideas or concepts. I liked this part of the interview!
Chargeback? Thumbs down. He doesn't feel you should have internal fighting over charge rates. He prefers showback instead.
BYO Device with stipend? Thumbs down, but inevitable. Giving people a chunk of money to buy their own laptop, smart phone or tablet of choice may wreak havoc on the IT department for support and service.
Telepresence? Thumbs down. Cool, but very expensive. I don't think people are prepared to exploit the benefits of this.
Corporate apps on public "app stores"? Thumbs down. Concerns over security and integration are the main issue.
Access to Social Networks? Thumbs up. This is how employees communicate and collaborate. Don't stifle them doing the right things just because you are afraid they might waste 20 minutes on Facebook per day.
Your IT budget? It's up slightly 1-5 percent for 2012.
Cloud? Promising, some challenges related to integration and security.
Chris finished up with a story about an application team that indicated that they would need to make 100 customizations to an off-the-shelf general ledger financial application. Chris and the other executives asked to be presented each and every customization, and he was able to eliminate most of them.
Positive comments I heard from the audience were that these keynotes had real "meat" to them, and were not just full of the cliches and platitudes common for keynote sessions. I would have to agree.
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of the other Monday morning keynote sessions:
Driving Innovation to Achieve Dramatic Improvements
What is Innovation? It is a process that starts with one or more ideas, that results in change, that creates value. Easier said than done!
Innovation drives business growth. The analyst indicated that the IT infrastructure can either impede business growth, be neutral and simply enable it, or actively contribute to it. Companies often find downtime to be an inhibitor to business growth, and the analyst gave typical figures for unplanned and planned downtime, in hours per year.
A big inhibitor to change is "cultural inertia", the idea that the way things are prevents what they could be. Change requires both rewards and measures. Employees are often uncomfortable with change. Motivation should be with carrots, not sticks.
(I often joke that the only people who are comfortable with change are babies with soiled diapers and prisoners on death row!)
Resistance to change is further amplified by leadership, because what got leaders into their positions was their history of success, and they often perpetuate what worked for them in the past.
"There is nothing so useless as doing efficiently that which should not be done at all."
--- Peter Drucker
Nothing lasts forever, and companies should not try to avoid the inevitable. Innovators need to see themselves as change agents. The analyst feels that less than 10 percent of IT organizations will adopt innovation to enact dramatic change. The analyst took a poll of the audience asking: Why isn't your IT Infrastructure and Operations more innovative? Over 800 attendees responded. Here were the results:
The analyst suggests treating innovation like a team sport, with small 2-5 person teams. Search for breakthrough opportunities by setting audacious goals to inspire innovative thinking. What approach are most people taking today? Here are some polling results:
The analyst suggests it is more important to establish a culture of innovation first, and process second. Skunkworks projects are back in favor. IT folks should avoid the worship of so-called "best practices" as a reason to avoid trying something different. To think "outside the box", you need to get outside the box, or office, or cubicle, or wherever you work that prevents you from interacting with your internal or external customers. Customers can bring great insights on new approaches to take.
One new approach, born in the Cloud and now coming to the Enterprise, is the concept of [DevOps], which promotes collaboration between the "Application Development" half of IT and the "Operations" half. If you have never heard of DevOps before, you are not alone; most of the attendees at this conference hadn't either. Here are the poll results:
Some companies have instituted a "Fresh Eyes" program, asking new-hires and early-tenure employees questions like: What surprised you the most when you joined the company? Was there anything that didn't make sense to you? Do you have any ideas to improve the way we do things?
"In a time of crisis we all have the potential to morph up to a new level and do things we never thought possible"
– Stuart Wilde
Why wait for a crisis?
Facebook: Efficient Infrastructure at a Massive Scale
Frank Frankovsky, Director of Hardware Design and Supply Chain at Facebook, was sitting right next to me in the audience. I didn't know this until it was his turn to speak, and he jumped up and walked to the stage! For those who live under a rock and/or are over 40 years old, Facebook is a social media site that allows people to maintain personal profiles, share photos, news and messages, play games, and create groups to organize events. They now support over 800 million accounts, a healthy percentage of the 1.9 billion people on the internet today.
Started in 2004, Facebook was originally hosted on standard server and storage hardware in colocation facilities. Facebook saved 38 percent costs by bringing their operations in-house, building their own servers from parts, and using no third-party software. Facebook has the advantage of owning their entire software stack, leveraging open source as much as possible. They even re-wrote their own PHP compiler, which they pronounce "Hip-Hop", short for high-performance-PHP.
Facebook can stand up a new data center in less than 10 months, from breaking ground to serving users. Most of Facebook's data centers sport a PUE less than 1.5, but their newest one in Prineville, Oregon is down to an amazing 1.07 level for a 7.5 Megawatt facility! How did they do it? Here are a few of their tricks:
Use Scale-Out architecture. Having lots of small servers, scattered in various data centers, allows them to survive a server failure, as well as having the luxury to shut down a datacenter when needed for maintenance reasons.
Free Cooling. Instead of air-conditioning, they pump in cold air from the outside, and send the heated exhaust back outdoors. Frank does not believe servers should be treated like humans, so their data centers run uncomfortably hot. The 50-year climate data is used to determine data center locations that have the optimal "free cooling" opportunities.
Eliminate UPS and PDU energy losses. Rather than running 480 VAC power through a UPS, which represents a 6 to 12 percent loss, and then a PDU, which introduces another 3 percent loss stepping down to 208 or 120 VAC, Frank's team builds servers that feed directly off the 480 VAC from the power company. For backup power, they use 48VDC batteries. One set of batteries can back up six racks of servers. (See my back-of-the-envelope arithmetic on these losses after this list.)
Target 6 to 8 KW per rack. Low-density racks are easier to keep cool.
Build their own IT equipment. Rather than buying commercially-available servers, Frank's team builds 1.5U servers based on the Intel "Westmere" chipset. The 1.5U height allows for a larger fan radius than the standard 1U pizza-box format. (IBM's iDataPlex uses 2U fans for the same reason!) Facebook has a "vanity free" design philosophy, so no fancy plastic bezels; in most cases, the covers are left off. Most (65 percent) of their servers are web front-ends. They plan new IT equipment based on Intel's "Sandy Bridge" chipset.
Use SATA drives. They buy the largest SATA drives available, directly from manufacturers, in direct-attach storage (DAS) in their servers. Data is organized in a Hadoop cluster, and they have developed their own internal "Haystack" for photo storage. Despite the floods in Thailand, Facebook has secured all the SATA disk they plan to buy for 2012 from their suppliers.
Use Solid-State drives. Their Database tier uses 100 percent Solid-State drives.
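To put the UPS/PDU item above in perspective, here is my own back-of-the-envelope arithmetic, using the midpoint of the loss figures Frank quoted:

Traditional chain: 480 VAC --> UPS (~9 percent loss) --> PDU (~3 percent loss) --> server
Power actually reaching the server: 0.91 x 0.97 = 0.88, so roughly 12 percent is lost before the server ever sees it

Feeding 480 VAC straight to the server power supplies avoids most of that double-conversion loss, which is exactly what this design does.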
Frank is also a founder for the [Open Compute Project], which takes an "Open source" approach to IT hardware.
Facebook does not bother with hypervisors. Instead, they have adapted their own software to make full use of the CPU natively. This eliminates the "I/O Tax" penalty associated with VMware and other hypervisors.
Of course, not everyone owns their entire software stack, and can build their own servers! It was nice to hear how a company without such limitations can innovate to their advantage.
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of the Monday afternoon sessions:
IBM Watson and your Data Center
Steve Sams, IBM VP of Site and Facilities Services, cleverly used IBM Watson as a way to explain how analytics can be used to help manage your data center. Sadly, most of the people at my table missed the connection between IBM Watson and analytics. How does answering a single trivia question in under three seconds relate to the ongoing operations of a data center? If you were similarly confused, take a peek at my series of IBM Watson blog posts:
The analyst who presented this topic was probably the fastest-speaking Texan I have met. He covered various aspects of Cloud Computing that people need to consider. Why hasn't Cloud taken off sooner? The analyst feels that Cloud Computing wasn't ready for us, and we weren't ready for Cloud Computing. The fundamentals of Cloud Computing have not changed, but we as a society have. Now that many end users are comfortable consuming public cloud resources, from Facebook to Twitter to Gmail, they are beginning to ask for similar from their corporate IT.
Legal issues - see this hour-long video, [Cloud Law & Order], which discusses legal issues related to Cloud Computing.
Employee staffing - need to re-tool and re-train IT employees to start thinking of IT as an internal service provider.
Hybrid Cloud - rather than struggle choosing between private and public cloud methodologies, consider a combination of both.
University of Rochester Medical Center (URMC) Cracks Code on Data Growth
Oftentimes, the hour is split: 30 minutes of the sponsor talking about various products, followed by 30 minutes of the client giving a user experience. Instead, I decided to let the client speak for 45 minutes, and then I moderated the Q&A for the remaining 15 minutes. This revised format seemed to be well-received!
University of Rochester is in New York, about 60 miles east of Buffalo, and 90 miles from Toronto across Lake Ontario. Six years ago, Rick Haverty joined URMC as the Director of Infrastructure services, managing 130 of the 300 IT personnel at the Medical Center. I met Rick back in May, when he presented at the IBM [Storage Innovation Executive Summit] in New York City.
URMC has DS8000, DS5000, XIV, SONAS, Storwize V7000 and is in the process of deploying Storwize V7000 Unified. He presented how he has used these for continuous operations and high availability, while controlling storage growth and costs.
The Q&A was lively, focusing on how his team manages 1PB of disk storage with just four storage administrators, his choice of a "Vendor Neutral Archive" (VNA), and his experiences with integration.
This was a great afternoon, and I was glad to get all my speaking gigs done early in the week. I would like to thank Rick Haverty of URMC for doing a great job presenting this afternoon!
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of the Tuesday morning sessions:
Wells Fargo: Data Center Lessons Learned from the Wachovia Acquisition
This was the next in their "Mastermind Interview" series. The analyst interviewed Scott Dillon, EVP and Head of Technology Infrastructure Services for Wells Fargo bank. Some 13 years ago, Wells Fargo merged with Norwest, and three years ago, Wells Fargo merged again, this time with Wachovia bank. Today, the new merged Wells Fargo manages 1.2 Trillion USD in assets, some 12,000 ATMs, and 9,000 branch offices within two miles of 50 percent of the US population.
On the technical side, Scott's team has to deal with 10,000 IT changes per month, spanning 85 discrete businesses that Wells Fargo is involved in. To help drive the consolidation, they formed a culture group called "One Wells Fargo".
Often, Wells Fargo and Wachovia used different applications for the same function. The consolidation team took the A-or-B-but-not-C approach, which means they would either choose the existing application that Wells Fargo was already using (A), or the one that Wachovia was already using (B), but not look for a replacement (C). They also wanted to avoid re-platforming any apps during the merger. This simplified the process of developing target operating models (TOMs).
Before each application cut-over, the consolidation team did dry runs, dress rehearsals and walkthroughs over the phone to ensure smooth success. They wanted a Wachovia account holder to be able to walk into the bank on one day, and then come back the next day as a Wells Fargo account holder, into the same branch office but now with Wells Fargo signage, with minimal disruption.
Wells Fargo also adopted a test-to-learn approach of choosing small test markets to see how well the transition would work before tackling larger, more complicated markets. For example, they started in Colorado, where Wells Fargo has a huge presence, but Wachovia had a small presence.
This was first and foremost a business merger, not just an IT merger. Each decision took 6-18 months to act on, and the IT team spent the last three years working every weekend to make this a reality.
A Satirical Look at Business and Technology
Comedian Bob Hirschfeld presented a light-hearted look at the IT industry. Bob actually attended sessions on Monday at this conference so his satire was exceptionally hard-hitting. He took jabs at the latest IT job requirements, padding on light poles, IBM Watson, social media's impact on dictators, various industry acronyms, virtualization, the various reasons why printer ink is so expensive, and the evil masterminds behind Powerpoint.
Storing Big Data takes a Village
Two analysts co-presented this session on the 12 dimensions of information management that revolve around the volume, variety and velocity of "Big Data".
In the past, it took a while to gather data, and a while to process the data, so annual, quarterly and monthly reports were common. Today, with high-velocity streams like Twitter, especially during cultural events or natural disasters, data is produced and analyzed quickly. It is important to sort the steady-state from the anomalies.
Myth 1: All data fits nicely into relational databases. The analysts feel the concept of putting everything into one big database is dead. Some data sets are so complicated that traditional database joins would cause smoke to come out of the sides of the servers. Instead, new technologies have emerged, including NoSQL, Cassandra, Hadoop, columnar databases, and in-memory databases. XML has helped to bring together disparate data formats.
Companies need to adapt to this new reality of Business Analytics. Here is a poll of the audience on how many are in what stage of adaptation:
Myth 2: Everyone will do Big Data with commodity hardware. Businesses want commercial offerings that don't fail every day. (For example, instead of using open-source Hadoop, consider IBM's [InfoSphere BigInsights] commercial product based on Hadoop designed for the Enterprise.)
Myth 3: Big Data is too big for backup. Certainly, traditional full-plus-incremental approaches fail to scale, but that is not the only option you have. Consider disk replication, snapshots, and integrated disk-and-tape blended solutions that adopt a more progressive backup methodology.
Capacity forecasting can be difficult with Big Data. Scale-out NAS systems, including IBM SONAS and the various me-too competitive offerings, which were originally focused on High Performance Computing (HPC) and the Media & Entertainment (M&E) industries, are now ready for prime time and appropriate for other use cases.
It's like the game of Clue, but instead of Professor Plum with the candlestick in the library, it was Chuck with the Cluster in the Closet. To avoid shadow IT creating huge Hadoop Clusters in your closets, encourage the use of Cloud Computing for "sandbox" projects. IBM, Amazon and others offer hosted MapReduce engines for this purpose.
What type of storage do you plan to use for Big Data? The top five, weighted from a list during a poll of the audience were: (78) traditional disk arrays, (71) Scale-out NAS, (46) pre-configured appliances, (30) Hadoop clusters, and (23) Cloud Storage.
Big Data is about doing things differently. Do your employees understand analytical techniques? Your company may need to start thinking about policies for capturing Big Data, storing it correctly, and analyzing it for insights and patterns needed to stay competitive.
It was good to mix reality with a bit of humor. Some of these conference attendees take themselves too seriously, and it is good to be reminded that IT is just part of the overall business operation.
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of some of the Tuesday afternoon sessions:
Brocade: Maximizing Your Cloud: How Data Centers Must Evolve
This was a session sponsored by Brocade to promote their concept of the "Ethernet Fabric". The first speaker, John McHugh, was from Brocade, and the second speaker was a client testimonial, Jamie Shepard, EVP for International Computerware, Inc.
John had an interesting take on today's network challenges. He feels that most LANs are organized for "North-South" traffic, referring to upload/downloads between clients and servers. However, the networks of tomorrow will need to focus on "East-West" traffic, referring to servers talking to other servers.
John was also opposed to integrated stacks that combine servers, storage and networking into a single appliance, as this prevents independent scaling of resources.
The Future of Backup is Not Backup
Primary data is growing at a 40 to 60 percent compound annual growth rate (CAGR), but backup data is growing even faster. Why? Because data that was not backed up before is now being backed up, including test data, development data, and mobile application data.
Backup costs are 19x more expensive than production software costs. There is an enormous gap in data protection because companies fail to factor this into their budgets. It is not uncommon for IT departments to use multiple backup tools: for example, one tool for VMs, another for physical servers, and a third for desktops.
Part of the problem is identifying who "buys" the backup software. The server team might focus on the operating systems supported. The storage team focuses on the disk and tape media supported. The application owners focus on the features and capabilities for backup that minimize impact to their application.
The analyst organized these issues into three "C's" of backup concerns: Cost, Capability and Complexity. Cost is not just the software license fee for the backup software, but the cost of backup media, courier fees, and transmission bandwidth. Capability refers to the features and functions, and IT folks are tired of having to augment their backup solution with additional tools and scripts to compensate for lack of capability. Complexity refers to the challenges of trying to get existing backup software to tackle new sources like Virtual Machines, mobile apps, and so on.
Has everyone moved to a tape-less backup system? Polling results found that people are shifting back to tape, either in a tape-only environment, or to supplement their disk or disk-based virtual tape library (VTL). Here are the polling results:
The poll also showed the top three backup software vendors were Symantec, IBM and Commvault, which is consistent with market share. However, the analyst feels that by 2014, an estimated 30 percent of companies will change their backup software vendor out of frustration over cost, capability and/or complexity.
There are a lot of new backup software products specific to dealing with Virtual Machines. Some are focused exclusively on VMware. When asked what tool people used to back up their VMs, the polling results showed the following. Note that the 20 percent for Other includes products from major vendors, like IBM Tivoli Storage Manager for Virtual Environments, as the analyst was more interested in the uptake of backup software from startups.
Some companies are considering Cloud Computing for backup. This is one area where having the cloud service provider at a distance is an actual advantage for added protection. A poll asking whether some or most data is backed up to the Cloud, either already today, or plans for the near future within the next 12 or 24 months, showed the following:
In addition to backup service providers, there are now several startups that offer file sharing, and some are adding "versioning" to this that can serve as an alternative to backup. These include DropBox, SugarSync, iCloud, SpiderOak and ShareFile.
The final topic was Snapshot and Disk Replication. These tend to be hardware-based, so they may not have options for versioning, scheduling, or application-aware capabilities normally associated with backup software. Space-efficient snapshots, which point unchanged data back to the original source, may not provide the full data protection that separate backup copies would provide. Here were polling results on whether snapshot/replication was used to augment or replace some or most of their backups:
Some of his observations and recommendations:
Maintenance is more expensive than acquisition cost. Don't focus on the tip of the iceberg. Some backup software is more efficient for bandwidth and media which will save tons of money in the long run.
Try to optimize what you have. He calls this the "Starbuck's effect". If you just need one coffee, then paying $4.50 for a cup makes sense. But if you need 100 coffees, you might be better off buying the beans.
Design backups to meet service level agreements (SLAs). In the past, backup was treated as one-size-fits-all, but today you can now focus on a workload by workload basis.
Be conservative in adopting new technologies until you have your backup procedures in place to handle data protection.
Backup is for operational recovery, not long-term retention of data. A poll showed two-thirds of the audience kept backup versions for longer than 60 days! Re-evaluate how long you keep backups, and how many versions you keep. If you need long-term retention, use an archive process instead.
Recovery testing is a dying art. Practice recovery procedures so that you can do it safely and correctly when it matters most.
The analyst had a series of awesome pictures of large structures, the pyramids of Giza, the Chrysler building, and so on, and how they would look without their foundations in place. Backup is a foundation and should be treated as such in all IT planning purposes.
IT is evolving, but some basic needs like networking and backup procedures don't change. As companies re-evaluate their IT operations for Big Data, Cloud Computing and other new technologies, it is best to remember that some basic needs must be met as part of those evaluations.
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of more of the Tuesday afternoon sessions:
IBM CIOs and Storage
Barry Becker, IBM Manager of Global Strategic Outsourcing Enablement for Data Center Services, presented this session on Storage Infrastructure Optimization (SIO).
A bit of context might help. I started my career in DFHSM, which moved data from disk to tape to reduce storage costs. Over the years, I would visit clients, analyze their disk and tape environment, and provide a set of recommendations on how to run their operations better. In 2004, this was formalized into week-long "Information Lifecycle Management (ILM) Assessments", and I spent 18 months in the field training a group of folks on how to perform them. The IBM Global Technology Services team has taken a cross-brand approach, expanding this ILM approach to include evaluations of the application workloads and data types. These SIO studies take 3-4 weeks to complete.
Over the next decade, there will only be 50 percent more IT professionals than we have today, so new approaches will be needed for governance and automation to deal with the explosive growth of information.
SIO deals with both the demand and supply of data growth in five specific areas:
Data reclamation, rationalization and planning
Virtualization and tiering
Backup, business continuity and disaster recovery
Storage process and governance
Archive, Retention and Compliance
The process involves gathering data and interviewing business, financial and technical stakeholders such as storage administrators and application owners. The interviews take less than one hour per person.
Over the past two years, the SIO team has uncovered disturbing trends. A big part of the problem is that 70 percent of data stored on disk has not been accessed in the past 90 days, and is unlikely to be accessed at all in the near future, so would probably be better to store on lower cost storage tiers.
Storage Resource Management (SRM) is also a mess, with over 85 percent of clients having serious reporting issues. Even rudimentary "showback" systems that report what each individual, group or department is using resulted in significant improvement.
Archive is not universally implemented mostly because retention requirements are often misunderstood. Barry attributed this to lack of collaboration between storage IT personnel, compliance officers, and application owners. A "service catalog" that identifies specific storage and data types can help address many of these concerns.
The results were impressive. Clients that follow SIO recommendations save on average 20 to 25 percent after one year, and 50 percent after three to five years. Implementing storage virtualization averaged 22 percent lower CAPEX costs. Those that implemented a "service catalog" saved on average $1.9 million US dollars. Internally, IBM's own operations have saved $13 million dollars implementing these recommendations over the past three years.
Reshaping Storage for Virtualization and Big Data
The two analysts presenting this topic acknowledged there is no downturn on the demand for storage. To address this, they recommend companies identify storage inefficiencies, develop better forecasting methodologies, implement ILM, and follow vendor management best practices during acquisition and outsourcing.
To deal with new challenges like virtualization and Big Data, companies must decide to keep, replace or supplement their SRM tools, and build a scalable infrastructure.
One suggestion to get upper management to accept new technologies like data deduplication, thin provisioning, and compression is to refer to them as "Green" technologies, as they help reduce energy costs as well. Thin provisioning can help drive up storage utilization to rates as high as you dare; typically, 60 to 70 percent is what most people are comfortable with.
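As a back-of-the-envelope illustration of what those utilization figures mean (all numbers here are made up), thin provisioning presents more capacity to hosts than is physically installed, and utilization is measured against what has actually been written:

    # Hypothetical thin-provisioned pool; all numbers are illustrative only.
    physical_tb    = 100    # capacity actually installed
    provisioned_tb = 160    # capacity presented to hosts (over-provisioned)
    written_tb     = 65     # capacity the hosts have actually written

    utilization = written_tb / physical_tb       # 0.65 -> 65 percent, in the comfort zone
    overcommit  = provisioned_tb / physical_tb   # 1.6x over-provisioning ratio
    print(f"utilization={utilization:.0%}, overcommit={overcommit:.1f}x")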
A poll of the audience found that top three initiatives for 2012 are to implement data deduplication, 10Gb Ethernet, and Solid-State drives (SSD).
The analysts explained that there are two different types of cloud storage. The first kind is storage "for" the cloud, used for cloud compute instances (aka Virtual Machines), such as Amazon EBS for EC2. The second kind is storage "as" the cloud, storage as a data service, such as Amazon S3, Azure Blob and AT&T Synaptic.
The analysts feel that cloud storage deployments will be mostly private clouds, bursting as needed to public cloud storage. This creates the need for a concept called "Cloud Storage Gateways" that manage this hybrid of some local storage and some remote storage. IBM's SONAS Active Cloud Engine provides long-distance caching in this manner. Other smaller startups include cTera, Nasuni, Panzura, Riverbed, StorSimple, and TwinStrata.
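Conceptually, a cloud storage gateway is little more than a read-through cache sitting in front of a slower, remote object store. Here is a deliberately simplified sketch; the remote_store object and its get/put methods are hypothetical stand-ins for a public cloud API:

    class CloudStorageGateway:
        """Keep recently read objects on local disk; fall back to the remote cloud on a miss."""

        def __init__(self, remote_store, capacity=1000):
            self.remote = remote_store   # hypothetical object exposing get()/put()
            self.cache = {}              # object name -> data held locally
            self.capacity = capacity

        def read(self, name):
            if name in self.cache:                 # cache hit: served at LAN speed
                return self.cache[name]
            data = self.remote.get(name)           # cache miss: fetch over the WAN
            if len(self.cache) >= self.capacity:   # make room by evicting the oldest entry
                self.cache.pop(next(iter(self.cache)))
            self.cache[name] = data
            return data

        def write(self, name, data):
            self.cache[name] = data                # keep a local copy...
            self.remote.put(name, data)            # ...and push it to the cloud (write-through)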
A variation of this is the "storage gateway" for backup and archive providers, serving as a staging area for data to be subsequently sent on to the remote location.
New projects like virtualization, Cloud computing and Big Data are giving companies a new opportunity to re-evaluate their strategies for storage, process and governance.
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of Wednesday morning sessions.
A Data Center Perspective on MegaVendors
The morning started with a keynote session. The analyst felt that the eight most strategic or disruptive companies in the past few decades were: IBM, HP, Cisco, SAP, Oracle, Apple and Google. Of these, he focused on the first three, which he termed the "Megavendors", presented in alphabetical order.
Cisco enjoys high-margins and a loyal customer base with Ethernet switch gear. Their new strategy to sell UP and ACROSS the stack moves them into lower-margin business like servers. Their strong agenda with NetApp is not in sync with their partnership with EMC. They recently had senior management turn-over.
HP enjoys a large customer base and is recognized for good design and manufacturing capabilities. Their challenges are mostly organizational, distracted by changes at the top and an untested and ever-changing vision, shifting gears and messages too often. Concerns over the Itanium have not helped them lately.
IBM defies simple description. One can easily recognize Cisco as an "Ethernet Switch" company, HP as a "Printer Company", Oracle as a "Database Company", but you can't say that IBM is an "XYZ" company, as it has re-invented itself successfully over its past 100 years, with a strong focus on client relationships. IBM enjoys high margins, a sustainable cost structure, huge resources, a proficient sales team, and is recognized for its innovation with a strong IBM Research division. Their "Smarter Planet" vision has been effective in supporting their individual brands and unlocking new opportunities. IBM's focus on growth markets takes advantage of their global reach.
His final advice was to look for "good enough" solutions that are "built for change" rather than "built to last".
Chris works in the Data Center Management and Optimization Services team. IBM owns and/or manages over 425 data centers, representing over 8 million square feet of floorspace. This includes managing 13 million desktops, and 325,000 x86 and UNIX server images, and 1,235 mainframes. IBM is able to pool resources and segment the complexity for flexible resource balancing.
Chris gave an example of a company that selected a Cloud Compute service provider on the East coast and a Cloud Storage provider on the West coast, both for offering low rates, but was disappointed in the latency between the two.
Chris asked "How did 5 percent utilization on x86 servers ever become acceptable?" When IBM is brought in to manage a data center, it takes a "No Server Left Behind" approach to reduce risk and allow for a strong focus on end-user transition. Each server is evaluated for its current utilization and triaged as follows (see the sketch after this list):
0 percent (unused): Amazingly, many servers are unused. These are recycled properly.
1 to 19 percent: The workload is virtualized and moved to a new server.
20 to 39 percent: Use IBM's Active Energy Manager to monitor the server.
40 to 59 percent: Add more VMs to this virtualized server.
Over 60 percent: Manage the workload balance on this server.
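To make the triage concrete, here is a rough sketch of that logic in Python. The band boundaries and actions come from the list above as presented in the session; this is an illustration, not an IBM tool:

    def triage_server(utilization_pct):
        """Map a server's average utilization to the recommended action."""
        if utilization_pct == 0:
            return "Unused: recycle the server properly"
        if utilization_pct < 20:
            return "Virtualize the workload and move it to a new server"
        if utilization_pct < 40:
            return "Monitor the server with IBM Active Energy Manager"
        if utilization_pct < 60:
            return "Add more VMs to this virtualized server"
        return "Manage the workload balance on this server"

    for pct in (0, 5, 35, 50, 75):
        print(pct, "->", triage_server(pct))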
This approach allows IBM to achieve a 60 to 70 percent utilization average on x86 machines, with an ROI payback period of 6 to 18 months, and 2x-3x increase of servers-managed-per-FTE.
Storage is classified using Information Lifecycle Management (ILM) best practices, using automation with pre-defined data placement and movement policies. This allows only 5 percent of data to be on Tier-1, 15 percent on Tier-2, 15 percent on Tier-3, and 65 percent on Tier-4 storage.
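The pre-defined placement policies behind this kind of tiering are usually simple rules, often based on how recently data was accessed. A hedged sketch with entirely hypothetical age thresholds (real ILM policies also weigh data type, size and service levels):

    def assign_tier(days_since_last_access):
        """Hypothetical age-based placement rule; real ILM policies also weigh data type and SLAs."""
        if days_since_last_access <= 30:
            return "Tier-1"   # fastest, most expensive storage
        if days_since_last_access <= 90:
            return "Tier-2"
        if days_since_last_access <= 365:
            return "Tier-3"
        return "Tier-4"       # lowest cost, for example tape or archive

    print(assign_tier(10), assign_tier(200), assign_tier(1000))   # Tier-1 Tier-3 Tier-4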
Chris recommends adopting IT Service Management, and to shift away from one-off builds, stand-alone apps, and siloed cost management structures, and over to standardization and shared resources.
You may have heard of "Follow-the-sun" but have you heard of "Follow-the-moon"? Global companies often establish "follow-the-sun" for customer service, re-directing phone calls to be handled by people in countries during their respective daytime hours. In the same manner, server and storage virtualization allows workloads to be moved to data centers during night-time hours, following the moon, to take advantage of "free cooling" using outside air instead of computer room air conditioning (CRAC).
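The scheduling idea behind follow-the-moon boils down to preferring the data center where it is currently night, so outside air can handle the cooling. A toy sketch, with hypothetical site names and UTC offsets:

    from datetime import datetime, timedelta, timezone

    # Hypothetical data centers and their UTC offsets in hours.
    SITES = {"Phoenix": -7, "Dublin": 0, "Singapore": 8}

    def night_sites(now_utc=None):
        """Return the sites where it is currently night (10pm-6am local), the 'free cooling' candidates."""
        now_utc = now_utc or datetime.now(timezone.utc)
        candidates = []
        for site, offset in SITES.items():
            local_hour = (now_utc + timedelta(hours=offset)).hour
            if local_hour >= 22 or local_hour < 6:
                candidates.append(site)
        return candidates

    print(night_sites())   # for example ['Singapore'], depending on when you run it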
Since 2007, IBM has been able to double computer processing capability without increasing energy consumption or carbon gas emissions.
It's Wednesday, Day 3, and I can tell already that the attendees are suffering from "information overload".
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of Wednesday breakout sessions.
Aging Data: The Challenges of Long-Term Data Retention
The analyst defined "aging data" to be any data that is older than 90 days. A quick poll of the audience showed what type of data was the biggest challenge:
In addition to aging data, the analyst used the term "vintage" to refer to aging data that you might actually need in the future, and "digital waste" being data you have no use for. She also defined "orphaned" data as data that has been archived but not actively owned or managed by anyone.
You need policies for retention, deletion, legal hold, and access. Most people forget to include access policies. How are people dealing with data and retention policies? Here were the poll results:
The analyst predicts that half of all applications running today will be retired by 2020. Tools like "IBM InfoSphere Optim" can help with application retirement by preserving both the data and metadata needed to make sense of the information after the application is no longer available. App retirement has a strong ROI.
Another problem is that there is data growth in unstructured data, but nobody is given the responsibility of "archivist" for this data, so it goes un-managed and becomes a "dumping ground". Long-term retention involves hardware, software and process working together. The reason that purpose-built archive hardware (such as IBM's Information Archive or EMC's Centera) has not been more successful is that companies failed to get the appropriate software and process in place to complete the solution.
Cloud computing will help. The analyst estimates that 40 percent of new email deployments will be done in the cloud, such as IBM LotusLive, Google Apps, and Microsoft Online365. This offloads the archive requirement to the public cloud provider.
A case study is the University of Minnesota Supercomputing Institute, which has three tiers for their storage: 136TB of fast storage for scratch space, 600TB of slower disk for project space, and 640TB of tape for long-term retention.
What are people using today to hold their long-term retention data? Here were the poll results:
The bottom line is that retention of aging data is a business problem, a technology problem, an economic problem, and a 100-year problem.
A Case Study for Deploying a Unified 10G Ethernet Network
Brian Johnson from Intel presented the latest developments on 10Gb Ethernet. Case studies from Yahoo and NASA, both members of the [Open Data Center Alliance] found that upgrading from 1Gb to 10Gb Ethernet was more than just an improvement in speed. Other benefits include:
45 percent reduction in energy costs for Ethernet switching gear
80 percent fewer cables
15 percent lower costs
doubled bandwidth per server
Ruiping Sun, from Yahoo, found that 10Gb FCoE achieved 920 MB/sec, which was 15 percent faster than the 8Gb FCP they were using before.
IBM, Dell and other Intel-based servers support Single Root I/O Virtualization, or SR-IOV for short. NASA found that cloud-based HPC is feasible with SR-IOV. Using IBM General Parallel File System (GPFS) and 10Gb Ethernet, NASA was able to replace a previous environment based on 20 Gbps DDR InfiniBand.
While some companies are still arguing over whether to implement a private cloud, an archive retention policy, or 10Gb Ethernet, other companies have shown great success moving forward!
This is my final post on my coverage of the 30th annual [Data Center Conference]. IBM was a Platinum sponsor, and there were over 2,600 attendees, of which 27 percent were IT Directors or higher. Two thirds of the companies have 5000 employees or more. Here is a recap of the last few sessions I attended.
Best Practices for Data Center Consolidation
As if the conference co-chairs aren't already super-busy, here they are presenting one of the breakout sessions. In the 1990s, consolidation was done purely to reduce total cost of ownership (TCO). Today, there are a variety of other reasons, including issues with power and cooling, service level agreements, and security.
Of the attendees polled, 25 percent plan to have more data centers in three years, and 47 percent plan to consolidate to fewer. The benefits of consolidation include economies of scale, staff reduction, reduced hardware and facilities costs, and application retirement. Challenges include dealing with politics, building new facilities to replace the old ones, and bandwidth. Here were some of the primary reasons why data center consolidation projects fail:
Human Resources (HR) issues
Resources not freed up or made available
Lack of Project Management skills
No rationalization at consolidated site
Interactive Polling Results
The last keynote session was Thursday morning. The conference co-chairs present the highlights of the interactive polling that was done during the week at this conference.
The first topic was social media. There was a lot of Twitter activity with hashtag #GartnerDC that I followed throughout the week. Most of the tweets seem to be from people who were not actually at the conference.
Some 45 percent of the attendees have implemented social media initiatives at their companies. What tooling are they using to accomplish this? There are some provided by the major ITSM vendors, tools specific for corporate social media such as Yammer, collaboration tools like Microsoft SharePoint and IBM's Lotus Connections, and public sites like Facebook and Twitter. Here were the poll results:
The next topic was focused on Mobile devices and Cloud Computing. For example, do companies store data in public cloud, or plan to in the future, for mobile devices?
One third of the attendees allow employees to bring their own tablet to work with full IT support. Only 18 percent allow employees to bring their own PC or laptop. Over 40 percent felt that their IT department was not yet ready to support smartphones.
What are the main drivers to adopt private cloud? Some are deploying private clouds as a way to defend their IT jobs from going to the public cloud. Here were the poll results:
What problems are companies trying to solve with cloud computing? Here were the poll results:
A majority of attendees that use VMware are exploring alternatives such as Linux KVM, including Red Hat Enterprise Virtualization (RHEV), or Microsoft Hyper-V. What storage protocol are attendees using for their server virtualization? Here were the poll results:
The next topic was the process for IT service management. The top three were ITIL, CMMI and DevOps, with the majority using ITIL or ITIL in combination with something else. These are needed for release management, change management, performance management, capacity management and incident management. How collaborative is the relationship between IT operations and application development? Here were the poll results:
How well does IT operations contribute to business innovation? This year, 38 percent were satisfied and 33 percent unsatisfied. This was a big improvement over last year, which found 19 percent satisfied and 64 percent unsatisfied.
Building a Private Storage Cloud: Is It a Science Experiment?
While everyone understands the benefits of private and public cloud computing, there seems to be hesitation about hosted cloud storage. Some people have already adopted some form of cloud storage, and others plan to within 12 months. Here were the poll results:
The top three reasons for considering public cloud storage were to adopt a lower-cost storage tier, to benefit from off-site storage, and staff constraints. The top concerns were security and performance.
The IT department will need to start thinking like a cloud provider, and perhaps adopt a hybrid cloud approach. What IT equipment can be re-used? What will the new IT operations look like in a Cloud environment? What were the primary use cases for cloud storage? Here were the poll results:
In addition to the major cloud providers (IBM, Amazon, etc.) there are a variety of new cloud storage startups to address these business needs.
So that wraps up my coverage of this conference. In addition to attending great keynote and breakout sessions, I was able to have great one-on-one discussions with clients at the Solution Showcase booth, during breaks and at meals. IBM's focus on Big Data, Workload-optimized Systems, and Cloud seems to resonate well with the analysts and attendees. I want to give special thanks to Lynda, Dana, Peggy, Hugo, David, Rick, Cris, Richard, Denise, Chloe, and all my colleagues, friends and family from Arizona for their support!
I hope everyone had a nice Winter break. For my birthday last month, my good friends at [StarTech.com] sent me a nice [double-headed USB combo cable] that has both Micro-USB and Mini-USB connectors. I am always looking to reduce the number of cables I take with me on trips, and this one is perfect, as I have a Samsung 4G smart phone that uses the Micro-USB connector, and a Canon PowerShot digital camera that uses the Mini-USB connector.
(FTC Disclosure: The U.S. Federal Trade Commission may consider this a "celebrity endorsement" for StarTech's product. I have used the cable and it works as expected. My review is based on my own experience using the cable, and information publicly available. IBM and StarTech are independent companies. Aside from giving me this nice cable at no cost, I have not received any payment from StarTech or any other third party to mention them or their product on this blog, I am not affiliated with StarTech in any way, nor do I have any financial interest in their company.)
When the [Universal Serial Bus] standard first came out in the mid-1990s, my colleagues and I were all excited that it would finally put an end to all the proprietary plugs and cables, with each manufacturer wasting time re-inventing the wheel with yet another cable connector. For the most part, USB has simplified this, and the USB cable can be used for both data transfer and for power charging.
Today, there are many alternatives to using a cable for data transfer, such as Wi-Fi and Bluetooth, but people are finding that their smart phones and other devices run out of juice way too often. At various conferences, I have seen several people panic looking for an electrical outlet to charge their device, and a few brazen enough to ask other attendees, "Can I plug my phone into your laptop?"
(Caution: Be careful allowing strangers to plug their device into your USB port, as this can provide data transfer in addition to power charging, spreading viruses or other malicious intent. On my Lenovo Thinkpad T410, one of the USB ports is colored yellow and is always powered on, even when my laptop is in suspend or hibernation mode. This would be a safe way to allow someone to charge off your power without concern for data transfer in either direction.)
Recently, I have flown on airplanes where each seat had a USB charging port, ideal if you want to listen to music or watch a video on your device. I have also driven a rental car that had USB charging ports in addition to the traditional cigarette lighter option, especially useful if you need to make an emergency phone call at the side of the road, or if you are using the GPS navigation feature to find your way. These are both a good step in the right direction!
Carrying one cable instead of two might not seem like much of a big deal, but if you think about it, complexity in the IT industry is all about the number of cables admins have to deal with. The push from 1GbE to 10GbE can help reduce the number of cables. Converged Enhanced Ethernet (CEE) takes it one step further, allowing NFS, CIFS, iSCSI and FCoE to all flow over a single cable. This can greatly reduce complexity in your IT environment.
If you are interested in reducing the complexity in your IT environment, contact your local IBM Business Partner or sales representative.
This week I was aboard the Queen Mary in Long Beach, California! This was a business event organized by [Key Info Systems], a valued IBM Business Partner. Key Info resells IBM servers, storage and switches.
The Queen Mary retired in 1967, and has been converted into a hotel and events venue. The locals just parked their car and walked on board, but I got to stay Tuesday through Thursday in one of the cabins. It was long and narrow, with round windows! There were four dials for the bathtub: Cold Salt, Hot Fresh, Cold Fresh, and Hot Salt.
Stepping on the boat was like walking back in time through history! If you decide to go see it, check out the [Art Deco bar] at the front of the Promenade deck. The ship is still in the water, but is permanently docked. It is sectioned off to prevent the ocean waves from affecting it, so we did not have the nauseating back-and-forth motion normally associated with cruise ships.
(It is with a bit of irony that we are on the Queen Mary just days after the tragedy of the [Costa Concordia], the largest Italian cruise ship that ran aground near Isola de Giglio. The captain will have to explain how he [fell into a lifeboat] before he had a chance to wait for everyone else to get safely off the shipwreck. He was certainly no [Captain Sulley]! I am thankful that most of the 4,200 people survived the incident.)
Lief Morin, Founder and Chief Executive for Key Info Systems, kicked off the meeting with highlights of 2011 successes. I have known Lief for years, as Key Info comes to the Tucson EBC on a frequent basis. This event was designed to give his sellers an update of what is the latest for each product line, and what to look forward to in the next 12-18 months.
The next speaker was from Vision Solutions that provides High Availability solutions for IBM i on Power Systems. In 2010, their company nearly doubled in size with the acquisition of Double-Take, which provides data replication for x86 servers running Windows, Linux, VMware, Hyper-V and other hypervisors. The capabilities of Double-Take sounded similar to what IBM offers with [Tivoli Storage Manager FastBack] and [Tivoli Storage Manager for Virtual Environments].
Dinner at Sir Winston's
Rather than take the "Ghosts and Legends" tour, I opted for dinner at the Queen Mary's signature restaurant, Sir Winston's. This is a fancy place, so dress accordingly. If you want the Raspberry soufflé, order it early as it takes 30 minutes to prepare!
[Storwize V7000], including the new Storwize V7000 Unified configuration
Storage is an important part of the Key Info Systems revenue stream, so I was glad to have lots of questions and interactions from the audience.
Murder Mystery Dinner
The acting troupe from [Dinner Detective] put on quite the show for us! With all that is going on in the world, it is good to laugh out loud every now and then.
In other murder mystery dinners I have participated in, each person is assigned a "character" and given a script of what to say and when to say it. This was different, we got to pick our own characters. I chose "Doctor Watson", from the Sherlock Holmes series. Several attendees thought it was a double meaning with [IBM Watson], the computer that figured out the clues on Jeopardy! television game show, and has since been [put to work at Wellpoint] to help out the Healthcare industry.
After the "murder" happened, two actors portraying policemen selected members of the audience to answer questions. We didn't get a script of what to say, so everyone had to "ad lib". I was singled out as a suspect, and had fun playing along in character. One of the attendees afterwards said he was impressed that I was able to fabricate such amusing and elaborate responses to their personal and embarrassing questions. As a public speaker for IBM, I have had a lot of practice thinking quickly on my feet.
Fibre Channel and Ethernet Switches
The next two speakers gave us an update on Fibre Channel and Ethernet switches, and their thoughts on the inevitability of Fibre Channel over Ethernet (FCoE). One of the exciting new developments is the [Brocade Network Subscription] which creates a flexible pay-per-use Ethernet port rental model for customers. This is especially timely given the Financial Accounting Standards Board proposed [FASB Change 13] that affects operating leases in the balance sheet.
With the Brocade Network Subscription, you pay monthly for the ports you are using. Need more ports? Brocade will install the added gear. Using fewer ports? Brocade will take the equipment back. There is no term endpoint or residual value like traditional leasing, so when you are done using the equipment, you can give it back at any time. This is ideal for companies that may need a lot of Ethernet ports for the next 2-3 years, but then plan to taper down, and don't want to get stuck with a long-term commitment or capital depreciation.
The last speaker was from VMware. IBM is the #1 reseller of VMware, and VMware commands an impressive 81 percent marketshare in the x86 virtualization space. The speaker presented VMware's strategy going forward, which aligns well with IBM's own strategy, to help companies Cloud-enable their existing IT infrastructures, in preparation for eventual moves to Hybrid or Public cloud deployments.
Special thanks to Lief Morin for sponsoring this event, Raquel Hernandez from IBM for coordinating my travel, and Pete, Christina and Kendrell from Key Info Systems for organizing the activities!
Some job titles can be vague. Have you ever given your title to a person at a cocktail party, only to have to explain exactly what you do? With a title like "IBM Master Inventor and Senior Managing Consultant", this happens to me all the time. To help explain what we do at the Tucson Executive Briefing Center (EBC), I use the following analogy.
People who want to see or interact with animals have several options. One option is to go visit the animals in their natural habitat. A more convenient option, however, is to visit the animals in a zoo. Zoos bring together a wide variety of animals, making it convenient to visit all of them at one time.
I did not fully appreciate the advantage of zoos until I took a safari in Kenya, Africa a few years ago. The word safari means "long journey" in Swahili. For two weeks, we drove around in a Land Rover on bumpy roads across the country. The best time to see the animals was early in the morning and late in the afternoon. We would drive around for hours looking for a type of animal we had not seen already. Most came to see the so-called "Big Five": Buffalo, Elephant, Leopard, Lion and Rhinoceros. After two weeks and hundreds of miles, we had seen the "Big Nine", which extends the Big Five to include the Cheetah, Zebra, Giraffe and Hippo, as well as a variety of other, lesser-known animals.
When it comes to zoos, there are two kinds.
Self-guided -- offering the basic zoo experience where you are handed a map to visit the animals on your own.
Docent-guided -- offering a richer zoo experience where the docent provides added value, leading visitors around the zoo, answering questions, providing education, and comparing the differences between the animals.
Over the past 15 years, IBM has been consolidating storage development in Tucson, Arizona, moving storage-related projects from San Jose, CA, from Rochester, MN, and from Raleigh, NC. Tucson has the largest collection of IBM storage hardware and software development in North America. I am one of the three local "docents", guiding the clients that come to Tucson to visit the developers.
Here are some of the types of developers that our clients ask to interact with:
One colleague was hired into IBM back in 1986 as a Research Scientist. When clients want to hear about IBM's future direction over the next 10-15 years, we bring in someone from IBM Research.
While disk systems may seem no more complicated than arranging books on a shelf, clients often want to talk to hardware engineers related to IBM's tape libraries, especially the IBM System Storage TS3500 library and the High-Density frame that can store multiple cartridges per slot in a spring-loaded manner.
I have a Bachelor's degree in Computer Engineering and Master's degree in Electrical Engineering, so I am able to speak both sides of the hardware/software divide. Software engineers here in Tucson develop the microcode that runs on disk and tape hardware, the various GUI, CLI and SMI-S API interfaces, as well as Tivoli Storage software, especially Tivoli Storage Manager (TSM) and Tivoli Storage Productivity Center.
IBM Tucson has a huge test lab, and our testers are very familiar with all of the subtle nuances of interoperability between servers, HBAs, switches and storage devices. We have system and function testers for the individual products, ISV testers to validate software compatibility, performance testers, and environment testers to verify the storage devices can handle extremes in temperature, humidity, vibration and noise.
IBM has architects for each product line to help decide which features and functions are developed for each product release. While many software engineers have expertise narrowly focused on an individual component, the system architects need to have a broad awareness of the entire environment. Earlier in my career, I was the chief architect for DFSMS, the storage management element of the z/OS mainframe operating system, and chief architect for what we now call Tivoli Storage Productivity Center.
Product and Portfolio Managers
Product and Portfolio managers are helpful to explain to clients why IBM invested more in some products than others. I had served as the Portfolio Manager for IBM tape systems. When clients want to talk about the business side of our products, such as pricing, licensing and leasing issues, we bring the product and portfolio managers in.
For some clients, high level executives want to speak to their counterparts at IBM, vice president to vice president, executive to executive. Our local IBM executives often help kick off the briefing in the morning, or provide the executive summary and discuss next steps at the end of the day. Golfing, dinners and drinks, of course, are always a popular scheduling option.
On behalf of the rest of the Tucson EBC, I would like to thank all the developers who have helped us last year with client briefings. There are too many to mention, and most are too humble to let me put their names in this blog. Team, your assistance is very appreciated!
Many IBMers consider Tucson to be the headquarters for storage, and I have heard IBM executives refer to Tucson as the center of the universe for storage products. However, IBM is a global company. Just as zoos do not pretend to be complete collections of animals, IBM storage development is not entirely contained in Tucson. IBM Research for storage is also done in Almaden CA, Yorktown Heights NY, and Haifa, Israel. Hardware development is also done in Japan, Europe and Israel. Tivoli Storage has locations in Beaverton, Oregon, and Austin, Texas, to name a few. IBM is a big company, so if I left your favorite location off the list, let me know in the comments below.
Some clients, sales reps and business partners have complained that Tucson is not the most convenient location to get to. I get that. One rep asked why we don't have briefing centers somewhere more accessible, such as Chicago or Atlanta, both of which offer a major airline hub. As much as I personally enjoy cities like Chicago and Atlanta, people don't visit zoos just to see the docents, they come to see the animals. Having docents located in Chicago or Atlanta, standing sadly in front of empty cages with no animals to interact with, makes no sense at all.
With over 350 days of sunshine per year, Tucson is actually a well-kept secret. Clients who have never been to Tucson discover the wonders of the Sonoran desert. Coyotes chase roadrunners across our parking lot. Several clients who have come to visit us have ended up buying retirement homes here. If you haven't been to Tucson, or it has been a while since your last trip, I encourage you to [schedule a briefing]. The weather right now is ideal!
My how time flies! The month is almost over, and people are asking if I plan to discuss my [New Years' Resolutions]. For those readers new to my blog, you can review the [resolutions I made in prior years]. I started blogging about my New Year's resolutions back in 2007 because IBM has a "black-out" period before it announces its year-end financial results, and I can't talk about IBM itself during that time.
"Tests done since 1933 show that people who talk about their intentions are less likely to make them happen.
Announcing your plans to others satisfies your self-identity just enough that you're less motivated to do the hard work needed.
In 1933, W. Mahler found that if a person announced the solution to a problem, and was acknowledged by others, it was now in the brain as a 'social reality', even if the solution hadn't actually been achieved."
The solution for this? Spread out your resolutions throughout the year. That is the advice from Jonah Lehrer in his Wall Street Journal article [Blame it on the Brain]. Here is an excerpt:
"Willpower, like a bicep, can only exert itself so long before it gives out; it's an extremely limited mental resource.
Given its limitations, New Year's resolutions are exactly the wrong way to change our behavior. It makes no sense to try to quit smoking and lose weight at the same time, or to clean the apartment and give up wine in the same month. Instead, we should respect the feebleness of self-control, and spread our resolutions out over the entire year. Human routines are stubborn things, which helps explain why 88% of all resolutions end in failure, according to a 2007 survey of over 3,000 people conducted by the British psychologist Richard Wiseman. Bad habits are hard to break—and they're impossible to break if we try to break them all at once."
Based on those two articles, I focused last year on a single resolution: to lose weight. It worked. I lost some weight, though not as much as I wanted, and certainly not for the usual eat-less/exercise-more reasons.
First, I tried Tim Ferriss' [Four Hour Body] diet, and I had every intention to post about my progress throughout the year, but that didn't happen. The diet involved eating a restricted diet for six days--including beans, green vegetables, and lean meats--then having one cheat day where you eat a whole bunch of the bad foods you weren't allowed the prior week. The problem I had was that I got so used to eating the same way six days a week, that I forgot to cheat! On this diet, cheating is not optional, it is mandatory. Mo, on the other hand, had no problem with the cheat days, and even extended this to cheat afternoons and cheat evenings!
Mid-year, I saw the movie [Forks Over Knives]. I consulted with my doctor, and switched over to a plant-based, whole-foods diet with his approval. This is basically [dietary veganism]: no eggs, no dairy, no meat, no fish, no poultry. What's left? Lots of slow carbs like beans, spinach and quinoa, that I had already learned to cook and eat earlier on Tim Ferriss' diet, without the stress of remembering to cheat on the weekend.
The nice thing about this diet is that you can eat a lot more than usual, so you are never hungry. The bad news is that I developed a vitamin deficiency, and so my doctor asked me to switch to a relaxed mostly-vegetarian diet, with some eggs, some fish, some meat, and lots of vitamin supplements.
I thought I would start 2012 with a bunch of funny resolutions, like the ones in [Chuck & Beans], but I decided to keep things on a serious level. If you've made resolutions, do not tell anyone what they are, and try focusing on a single one at a time.
For all of you who had a bad year in 2011, I hope you have a much better one in 2012!
Mark your calendars! If you work in IT and have an interest in storage, then there are two upcoming conferences you might be interested in attending!
Join a network of your peers at
[IBM Pulse2012] who are fundamentally and cost-effectively changing the economics of IT and speeding the delivery of innovative products and services. With four days of top-notch education, Pulse 2012 will help you react with agility in changing competitive landscapes, reduce vulnerability throughout the service lifecycle, and continuously improve the business impact of the technology.
I presented at the very first IBM Pulse back in May 2008, which was a combination event to cover Tivoli Storage, Maximo and Netcool. For a bit of nostalgia, read my 2008 blog posts:
The IBM Pulse conference has certainly evolved over the past few years! The agenda is not yet finalized, so I don't know if I will be there again this year.
The second event has a new name. [IBM Edge2012] is the premier storage event that brings together innovative IBM technologies, world class training, leading industry experts, and compelling client success stories and best practices. Edge2012 is dedicated to helping you design, build and implement efficient storage infrastructure solutions.
We started doing these back in the mid-90s under the title "IBM Storage Symposium", later renamed the "IBM System Storage and Storage Networking Symposium". In 2007, I was there in Las Vegas presenting on a variety of topics. See my blog post [Storage Symposium 2007 recap].
In 2008, we had a version of the Storage Symposium down in Cuernavaca, Mexico. Not only did I present, but it was also a "book signing" event for my first book [Inside System Storage: Volume I]. Here were my blog posts: [Introduction], and [Conclusion]. We also had events in the United States and in Montpellier, France, but since I already went to the one in Mexico, I let my colleagues go to those other ones instead.
In 2009, IBM experimented with combining two conferences under one roof in Chicago, IL. The IBM System Storage and Storage Networking Conference was combined with the IBM System x and BladeCenter Technical Conference. The idea was that server people would probably also be interested in storage, and storage admins might also be interested in x86-based servers. See my blog post
[Storage Symposium 2009 recap].
In 2010, System Storage and System x were once again combined, held in Washington DC, but the conferences were renamed to IBM System Storage Technical University and the IBM System x Technical University to give them a common look and feel. See my blog post [Storage University 2010 review].
In 2011, not satisfied that two data points was inconclusive, IBM continued the experiment, hosting both System Storage and System x conferences in Orlando, Florida. Here were my blog posts:
The results are now in. While running multiple conferences at the same time in the same place can help reduce costs and consolidate administration, it can also have its drawbacks. In the case of System Storage and System x, we learned a few things:
Having System x and Storage in the same conference gave the appearance that the conference was not focused on either. At smaller companies, there might be people who manage both x86 servers and storage, but at larger companies, servers and storage are managed by separate people, often in separate departments with different travel budgets.
Nearly all of IBM's storage attaches to IBM System x servers. However, there are some clients that run AIX, IBM i or System z mainframes that might not have considered attending this conference, thinking that it was focused on storage for System x servers.
Both conferences were considered technical education, and might not have appealed to upper IT executives and directors as something to help make purchase decisions from a business perspective, or to network with peers and other decision makers.
The solution - IBM Edge. This conference is focused 100 percent on storage. There will be "Executive Edge" for decision makers to network with their peers, and "Technical Edge" for the storage admins to get the technical education they are looking for on IBM System Storage and Networking products and solutions. Please note that this conference was held in July or August in previous years, but will be held in June this year.
I am very excited about this new direction, and plan to be there June 4-8 for this event!
Last week, on January 31, two of my colleagues retired from IBM. At IBM, retirements always happen on the last day of the month. Here are my memories of each, listed alphabetically by last name.
Mark Doumas retires after working 32 years with IBM. Mark was my manager for a few months in 2003. Back then, IBM was working on launching a variety of new products, including the IBM SAN File System (SFS), the IBM SAN Volume Controller (SVC), a new release of Tivoli Storage Manager (TSM), and TotalStorage Productivity Center (TPC), which was later renamed to IBM Tivoli Storage Productivity Center.
Mark was manager of the portfolio management team, and I was asked to manage the tape systems portfolio. I am no stranger to tape, as one of my 19 patents is for the pre-migration feature of the IBM 3494 Virtual Tape Server (VTS). The portfolio included LTO and Enterprise tape drives, tape libraries and virtual tape systems. My job was to help decide how much of IBM's money we should invest in each product area. This was less of a technical role, and more of a business-oriented project management position.
Portfolio management is actually part of a chain of project management roles. At the lowest level are team leads that manage individual features, referred to as line items of a release. Release managers are responsible for all the line items of a particular release. Product managers determine which line items will be shipped in which release, and often have to balance across three or more releases. Architects help determine which products in a portfolio should have certain features. Since I was chief architect for DFSMS and Productivity Center, stepping up to portfolio manager was naturally the next rung on the career ladder.
(Side note: If you were wondering why I was only a few months on the job, it was because I was offered an even better position as Technical Evangelist for SVC. See my 2007 blog post [The Art of Evangelism] for a humorous glimpse of the kind of trouble I got into with that title on my business card!)
While my stint in this role was brief, I am still considered an honorary member of the tape development team. Nearly every week I present an overview of our tape systems portfolio at the Tucson Executive Briefing Center, or on the road at conferences and marketing events.
This year, 2012, marks the 60th anniversary of IBM Tape, but I will save that for a future post!
Jim is an IBM Fellow for IBM Systems and Technology Group. There are only 73 IBM Fellows currently working for IBM, and this is the highest honor IBM can bestow on an employee. He has been working with IBM since 1968 and now retires after 44 years! Jim was tasked with predicting the future of IT and helping drive strategic direction for IBM. Cost pressures, requirements for growth, accelerating innovation and changing business needs all influence this direction.
Many consider Jim one of the fathers of server virtualization. For those who think VMware invented the concept of running multiple operating systems on a single host machine, guess again! IBM developed the first server hypervisor in 1967, and introduced the industry's first [official VM product on August 2, 1972] for the mainframe.
When I joined IBM in 1986, my first job was to work on what was then called DFHSM software for the MVS operating system. Each software engineer had unlimited access to his or her own VM instance of a mainframe for development and testing. This was way better than what we had in college, having to share time on systems for only a few minutes or hours per day. Today, DFHSM is now called the DFSMShsm component of DFSMS, an element of the z/OS operating system.
At various conferences like [SHARE] and [WAVV], we celebrated VM's 25th anniversary in 1997, and its 30th anniversary in 2002. Today, it is called z/VM and IBM continues to invest in its future. Last October, IBM announced the [z/VM 6.2] release, which provides Live Guest Relocation (LGR) to seamlessly move VM guest images from one mainframe to another, similar to PowerVM's Live Partition Mobility or VMware's VMotion.
Lately, it seems employees at other companies jump from job to job, and from employer to employer, on average every 4.1 years. According to [National Longitudinal Surveys] conducted by the [U.S. Government's Bureau of Labor Statistics], the average baby boomer holds 11 jobs. In contrast, it is quite common to see IBMers work the majority of their career at IBM.
The next time you have a tasty beverage in your hand, raise your glass! To Mark and Jim, you have earned our respect, and you both have certainly earned your retirement!
Well, it's Tuesday again, and you know what that means! IBM Announcements! Typically, IBM System Storage has three to five major product launches per year. Making announcements every Tuesday would be too frequent, and having one big announcement every two or three years would be too far apart. Worldwide combined revenues for storage hardware and software grew double digits last year, comparing full-year 2011 to the prior 2010 year, and I am sure that 2012 will be a good year for IBM as well! This week we have announcements for both disk and tape, but since 2012 is the 60th Diamond Anniversary for tape, I will start with tape systems first.
TS1140 support for JA/JJ tape cartridges
The TS1140 enterprise tape drive was announced at the [Storage Innovation Executive Summit] last May. It supported a new E07 format on three different new tape cartridges. The "JC" model was a 4.0TB standard re-writeable tape, the "JY" was a 4.0TB WORM tape, and the "JK" was a 500GB economy tape that was less expensive but offered faster random access.
Generally, IBM has adopted an N-2 read, N-1 write [backward compatibility] policy. This means that the TS1140 can read E05 and E06 formatted tapes on JB and JX media, and can write the E06 format on JB and JX media. However, there is a lot of older JA and JJ media out there, especially as part of TS7740 environments, so IBM now supports TS1140 drives reading J1A-formatted JA and JJ media. This is not just for TS7740 environments; any TS1140 in a stand-alone or tape library configuration will support this as well.
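The general N-2 read, N-1 write rule is easy to express in code; the new JA/JJ read support announced here is an exception that reaches back even further. A small sketch using the format generations mentioned above, offered as an illustration of the rule rather than an official compatibility matrix:

    # Tape drive format generations in order; E07 is the TS1140's native format.
    GENERATIONS = ["E05", "E06", "E07"]

    def can_read(drive_format, media_format):
        """N-2 read: a drive can read its own generation and the two before it."""
        gap = GENERATIONS.index(drive_format) - GENERATIONS.index(media_format)
        return 0 <= gap <= 2

    def can_write(drive_format, media_format):
        """N-1 write: a drive can write its own generation and the one before it."""
        gap = GENERATIONS.index(drive_format) - GENERATIONS.index(media_format)
        return 0 <= gap <= 1

    print(can_read("E07", "E05"), can_write("E07", "E06"), can_write("E07", "E05"))   # True True False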
TS7700 R2.1 enhancements
IBM is a leader in tape virtualization with or without physical tape as back-end media. There are two hardware models of the [IBM Virtualization Engine TS7700 family] for the IBM System z mainframe. These virtual libraries are referred to as "clusters" in IBM literature.
The TS7740 Virtual Tape Library supports putting virtual tape images on disk first, then moving less-active data to physical tape, which I covered in my blog post [IBM Announcements - July 2007].
A unique feature of the TS7700 series is support for a Grid configuration, which allows up to six different TS7700 clusters to be grouped into a single instance image. These clusters can be in local or remote locations, connected via WAN or LAN connections.
R2.1 is the latest software release of IBM's successful TS7700 series.
True Sync Mode Copy. Before R2.1, the TS7700 offered "immediate mode copy". An application would write to a virtual tape, and when it was done with the tape and performed an unmount, the TS7700 would then replicate the tape contents to a secondary cluster on the grid. With True Sync Mode, data contents are replicated per implicit or explicit SYNC points. This is another IBM first in the IT tape industry.
Remote Mount Fail-over. When you have two or more TS7700 clusters in a grid configuration, you can do remote mounts. We've added fail-over multi-pathing across up to four paths, so that if a link to a remote cluster is down, it will try one of the others instead (see the sketch after this list of enhancements).
Parallel Copies and Pre-Migration. One of my 19 patents is for the pre-migration feature of the IBM 3494 Virtual Tape Server (VTS) that carries forward into the TS7700, and is also used in the SONAS and Information Archive products. However, when the grid architecture was introduced, the engineers decided not to allow pre-migration and copies to secondary clusters to occur concurrently. Now these two operations can be done in parallel.
Merge two grids into one grid. Now that we can support up to six clusters into a single grid, we have people with 2-cluster and 3-cluster grids looking to merge them into one. Of course, all the logical and physical volume serials (VOLSER) must be unique!
Accelerate off JA/JJ Media. There are a lot of older JA and JJ media still in TS7700 libraries. This feature allows customers to speed up the transition to newer physical tape media.
Copy Export to E06 format on JB media. This one is clever, and I have to say I would have never thought about it. Let's say you have a TS7740 with TS1140 drives, but you want to export some virtual tapes to physical media to be sent to someone who only has a TS7740 connected with older TS1130 drives. These older drives can't read new JC media nor make sense of the E07 format. This feature will let you export to older JB media in E06 format so that it will be fully readable at the new location on the TS1130 drives.
Copy Export Merge service offering. Thanks to mergers and acquisitions, it is sometimes necessary to split off a portion of data from a TS7700 grid. In the past, IBM supported sending this export to a completely empty TS7700 library, but this new service offering allows the export to be merged into an existing TS7700 that already contains data.
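Returning to the Remote Mount Fail-over item above, the behavior amounts to trying alternate grid links in turn until one succeeds. A simplified sketch; the try_mount callable and the path list are hypothetical stand-ins for the real grid links:

    class AllPathsFailed(Exception):
        pass

    def remote_mount(volume, paths, try_mount):
        """Attempt a remote mount over each available grid link (up to four) until one succeeds.

        try_mount(volume, path) is a hypothetical callable that raises OSError when a link is down.
        """
        errors = []
        for path in paths[:4]:        # fail-over multi-pathing across up to four paths
            try:
                return try_mount(volume, path)
            except OSError as err:    # that link to the remote cluster is down; try the next one
                errors.append((path, err))
        raise AllPathsFailed(f"could not mount {volume}: {errors}")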
LTFS-SDE support for Mac OS X 10.7 Lion
How do people still not know about the Linear Tape File System [LTFS]? I mentioned this in my blogs back in 2010 in [April], [September], and [November]. Last year, LTFS won the [NAB Show Pick Hits Award] and an [Emmy] for revolutionizing the use of digital tape in Television broadcasting.
In layman's terms, the Single Drive Edition [LTFS-SDE] allows a tape cartridge to be treated like a USB memory stick. It is supported on LTO5 tape drives for systems running various levels of Windows, Linux and Mac OS X. Prior to this announcement, IBM supported Leopard (10.5.6) and Snow Leopard (10.6), and now supports the Mac OS X 10.7 "Lion" release.
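Once the cartridge is mounted by the LTFS-SDE software, there is no special API to learn: the tape simply appears as a directory. A small illustration, assuming a hypothetical mount point where the volume is already mounted:

    import shutil
    from pathlib import Path

    # Hypothetical mount point where the LTFS-SDE software has already mounted the cartridge.
    LTFS_MOUNT = Path("/mnt/ltfs")

    # Copy a video file onto tape and list what is on the cartridge -- ordinary file
    # operations, exactly as you would do with a USB memory stick.
    shutil.copy("broadcast_master.mov", LTFS_MOUNT / "broadcast_master.mov")
    for entry in sorted(LTFS_MOUNT.iterdir()):
        print(entry.name)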
IBM first introduced Solid-State Drives (SSD) back in 2007 where they made the most sense, in [drive-for-drive replacements on blade servers in the IBM BladeCenter]. Blade servers typically have only a single drive, and SSDs are both faster and use less energy on a drive-for-drive comparison, so this provided immediate benefit. Today, SSDs are available on a variety of System x and POWER system servers.
In 2008, IBM rocked the world by being the first to reach [1 Million IOPS with Project Quicksilver]. This was an all-SSD configuration which many considered unrealistic (at the time), but it showed the potential for solid state drives.
When the [XIV Gen3 was Announced - July 2011], each module included a 1.8-inch "SSD-Ready" slot in the back. IBM made a Statement of Direction that it would someday offer SSD drives to put in these slots. Today's announcement is that IBM has finalized the qualification process, so XIV Gen3 clients can now have 400GB of usable non-volatile SSD read cache added to each module. This SSD can be added to existing XIV Gen3 boxes in the field, or it can be factory-installed in new shipments. If you have a 15-module XIV, that's 6TB of additional read cache! This SSD is entirely managed by the XIV Gen3, so you won't have to spend weeks reading manuals or specifying configuration parameters.
When you carve volumes on the XIV, you now have an option to enable or disable use of the SSD cache for each volume. Since XIV is used in private and public cloud deployments, this offers the ability to provide premium performance at premium prices. The use of SSD is complementary to IBM XIV Quality of Service (QoS) performance levels, which are determined per host instead.
Well, that's the first major IBM System Storage launch of 2012. Let me know what you think in the comment section below.
Have you ever noticed that sometimes two movies come out that seem eerily similar to each other, released by different studios within months or weeks of each other? My sister used to review film scripts for a living; she would read ten of them and have to pick her top three favorites, and she tells me that scripts with nearly identical concepts came in all the time. Here are a few of my favorite examples:
1994: [Wyatt Earp] and [Tombstone] were Westerns recounting the famed gunfight at the O.K. Corral. Tombstone, Arizona is near Tucson, and the gunfight is recreated fairly often for tourists.
1998: [Armageddon] and [Deep Impact] were a pair of disaster movies dealing with a large rock heading to destroy all life on earth. I was in Mazatlan, Mexico to see the latter, dubbed in Spanish as "Impacto Profundo".
1998: [A Bug's Life] and [Antz] were computer-animated tales of the struggle of one individual ant in an ant colony.
2000: [Mission to Mars] and [Red Planet] were sci-fi pics exploring what a manned mission to our neighboring planet might entail.
This is different from copy-cat movies that are re-made or re-imagined many years later based on the previous successes of an original. Ever since my 2010 blog post [VPLEX: EMC's Latest Wheel is Round] comparing EMC's copy-cat product that came out seven years after IBM's SAN Volume Controller (SVC), I've noticed EMC doesn't talk about VPLEX that much anymore.
This week, IBM announced [XIV Gen3 Solid-State Drive support] and our friends over at EMC announced [VFCache SSD-based PCIe cards]. Neither of these should be a surprise to anyone who follows the IT industry, as IBM had announced its XIV Gen3 as "SSD-Ready" last year specifically for this purpose, and EMC has been touting its "Project Lightning" since last May.
Fellow blogger Chuck Hollis from EMC has a blog post [VFCache means Very Fast Cache indeed] that provides additional detail. Chuck claims the VFCache is faster than the popular [Fusion-IO PCIe cards] available for IBM servers. I haven't seen the performance spec sheets, but SSD is typically four to five times slower than the DRAM cache used in the XIV Gen3. The VFCache's SSD is probably similar in performance to the SSD supported in the IBM XIV Gen3, DS8000, DS5000, SVC, N series, and Storwize V7000 disk systems.
Nonetheless, I've been asked my opinions on the comparison between these two announcements, as they both deal with improving application performance through the use of Solid-State Drives as an added layer of read cache.
(FTC Disclosure: I am both a full-time employee and stockholder of the IBM Corporation. The U.S. Federal Trade Commission may consider this blog post as a paid celebrity endorsement of IBM servers and storage systems. This blog post is based on my interpretation and opinions of publicly-available information, as I have no hands-on access to any of these third-party PCIe cards. I have no financial interest in EMC, Fusion-IO, Texas Memory Systems, or any other third party vendor of PCIe cards designed to fit inside IBM servers, and I have not been paid by anyone to mention their name, brands or products on this blog post.)
The solutions are different in that the IBM XIV Gen3 SSD is "storage-side", inside the external storage device, while the EMC VFCache is "server-side", a PCI Express [PCIe] card. Aside from that, both implement SSD as an additional read cache layer in front of spinning disk to boost performance. Neither is an industry first, as IBM has offered server-side SSD since 2007, and both IBM and EMC have offered storage-side SSD in many of their other external storage devices. The use of SSD as read cache has already been available in the IBM N series using [Performance Accelerator Module (PAM)] cards.
IBM has offered cooperative caching synergy between its servers and its storage arrays for some time now. The predecessors to today's POWER7-based systems were the iSeries i5 servers that used PCI-X IOP cards with cache to connect i5/OS applications to IBM's external disk and tape systems. To compete in this space, EMC created their own PCI-X cards to attach their own disk systems. In 2006, IBM did the right thing for our clients and fostered competition by entering into a [Landmark agreement] with EMC to [license the i5 interfaces]. Today, VIOS on IBM POWER systems allows a much broader choice of disk options for IBM i clients, including the IBM SVC, Storwize V7000 and XIV storage systems.
Can a little SSD really help performance? Yes! An IBM client running a [DB2 Universal Database] cluster across eight System x servers was able to replace an 800-drive EMC Symmetrix by putting eight SSD Fusion-IO cards in each server, for a total of 64 solid-state drives, saving money and improving performance. DB2 has a Data Partitioning Feature that spreads multi-system DB2 configurations across a grid-like architecture similar to how XIV is designed. Most IBM System x and BladeCenter servers support internal SSD storage options, and many offer PCIe slots for third-party SSD cards. Sadly, you can't do this with a VFCache card: only one VFCache card is supported per server, the data on it is unprotected, and it is intended only for ephemeral data like transaction logs or other temporary data. With multiple Fusion-IO cards in an IBM server, you can configure a RAID rank across the SSD and use it for persistent storage like DB2 databases.
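Since both announcements boil down to an SSD read cache in front of spinning disk, here is a toy Python sketch of the concept. It is my own illustration under simplifying assumptions, not how VFCache or XIV Gen3 is actually implemented: reads are served from a fixed-size cache when possible, misses are filled from the slower backing store, and a crude sequential-access check keeps long scans from polluting the cache (a behavior the comparison below attributes to XIV).

    from collections import OrderedDict

    class ReadCache:
        """Toy read-through cache with a simple sequential-bypass heuristic."""

        def __init__(self, backing_store, capacity_blocks=1024):
            self.backing = backing_store       # dict-like: block number -> data (the "spinning disk")
            self.capacity = capacity_blocks    # how many blocks fit in the "SSD"
            self.cache = OrderedDict()         # LRU order, oldest entries first
            self.last_block = None             # used to spot sequential runs

        def read(self, block):
            sequential = (self.last_block is not None and block == self.last_block + 1)
            self.last_block = block

            if block in self.cache:            # cache hit: refresh LRU position
                self.cache.move_to_end(block)
                return self.cache[block]

            data = self.backing[block]         # cache miss: fetch from slow disk
            if not sequential:                 # skip caching for sequential scans
                self.cache[block] = data
                if len(self.cache) > self.capacity:
                    self.cache.popitem(last=False)
            return data

    # Example: random reads get cached, a long sequential scan mostly bypasses the cache.
    disk = {n: f"block-{n}" for n in range(10000)}
    cache = ReadCache(disk, capacity_blocks=256)
    cache.read(42)                 # miss, filled from disk
    cache.read(42)                 # hit, served from cache
    for n in range(1000, 1100):    # sequential scan
        cache.read(n)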
Here then is my side-by-side comparison of EMC VFCache and IBM XIV Gen3 SSD Caching:

Server support
EMC VFCache: Selected x86-based models of Cisco UCS, Dell PowerEdge, HP ProLiant DL, and IBM xSeries and System x servers.
IBM XIV Gen3: All of these, plus any other blade or rack-optimized server currently supported by XIV Gen3, including Oracle SPARC, HP Itanium, IBM POWER systems, and even IBM System z mainframes running Linux.

Operating system support
EMC VFCache: Linux RHEL 5.6 and 5.7, VMware vSphere 4.1 and 5.0, and Windows 2008 x64 and R2.
IBM XIV Gen3: All of these, plus all the other operating systems supported by XIV Gen3, including AIX, IBM i, Solaris, HP-UX, and Mac OS X.

Protocols supported
IBM XIV Gen3: FCP and iSCSI.

Vendor-supplied driver required on the server
EMC VFCache: Yes, the VFCache driver must be installed to use this feature.
IBM XIV Gen3: No, IBM XIV Gen3 uses native OS-based multi-pathing drivers.

External disk storage systems required
EMC VFCache: None; it appears the VFCache has no direct interaction with the back-end disk array, so in theory the benefits are the same whether you use this VFCache card in front of EMC storage or IBM storage.
IBM XIV Gen3: XIV Gen3 is required, as the SSD slots are not available on older models of IBM XIV.

Shared disk support
EMC VFCache: No, VFCache has to be disabled and removed for vMotion to take place.
IBM XIV Gen3: Yes! XIV Gen3 SSD caching supports shared disks, including VMware vMotion and Live Partition Mobility.

Support for multiple servers and active/active server clusters
IBM XIV Gen3: An advantage of the XIV Gen3 SSD caching approach is that the cache can be dynamically allocated to the busiest data from any server or servers.

Aware of changes made to back-end disk
EMC VFCache: No; it appears the VFCache has no direct interaction with the back-end disk array, so any changes to the data on the box itself are not communicated back to the VFCache card to invalidate the cache contents.
IBM XIV Gen3: Yes, the SSD read cache is managed entirely inside the XIV Gen3, so it sees every update to the back-end disk.

Sequential detection
EMC VFCache: None identified. However, VFCache only caches blocks 64KB or smaller, so any sequential processing with larger blocks will bypass the VFCache.
IBM XIV Gen3: Yes! XIV algorithms detect sequential access and avoid polluting the SSD with these blocks of data.

Number of SSD supported
EMC VFCache: One, which seems odd as IBM supports multiple Fusion-IO cards for its servers. However, this is not really a single point of failure (SPOF): an application experiencing a VFCache failure merely drops down to external disk array speed, and no data is lost since it is only read cache.
IBM XIV Gen3: 6 to 15 (one per XIV module) for high availability.

Pin data in SSD cache
EMC VFCache: Yes; using split-card mode, you can designate a portion of the 300GB to serve as direct-attached storage (DAS). All data written to the DAS portion will be kept in SSD. However, since only one card is supported per server and the data is unprotected, this should only be used for ephemeral data like logs and temp files.
IBM XIV Gen3: No, there is no option to designate an XIV Gen3 volume to be SSD-only. Consider using a Fusion-IO PCIe card as a DAS alternative, or another IBM storage system for that requirement.

Pre-sales estimating tools
IBM XIV Gen3: Yes! CDF and Disk Magic tools are available to help cost-justify the purchase of SSD based on workload performance analysis.
IBM has the advantage that it designs and manufactures both servers and storage, and can design optimal solutions for our clients in that regard.
It takes me 20-30 minutes to complete a crossword or Sudoku puzzle. I am in no hurry, and I find the process relaxing. But what if you were paid to complete a puzzle? In that case, finishing the puzzle sooner, in fewer minutes, means more money in your paycheck per hour worked! However, getting paid would mean that doing these puzzles may no longer be fun or relaxing.
The idea of converting a hobby into a revenue-generating activity is not new. Who wouldn't want to earn money doing something you were planning to do already? Television is full of commercial advertisements for credit cards where you can earn Double Miles or Cash Rewards just for spending money on things you were going to buy anyway.
But is "earn" the right word? The merchants pay a percentage fee every time a patron uses a credit card, and the bank is just providing a marketing incentive in the form of a portion of those fees back to the consumer, to encourage more usage of their card versus other forms of payment. Sort of like "profit sharing".
(FTC Disclosure: I am a full-time employee and shareholder of the IBM Corporation. This blog post should not be considered an endorsement for anything. My opinions and writings are based on publicly available information and my own experiences doing freelance work prior to my employment at IBM. I have no hands-on experience with Amazon Mechanical Turk, neither as a worker nor requester, have not participated in TopCoder contests, nor have I used the Viggle app. I do not have any financial interest in Amazon, TopCoder, Viggle or any other third-party company mentioned on this blog post, nor has anyone paid me to mention their company names, brands or offerings.)
Here's how Viggle works. You get the app on your phone, and register each television show as you watch it. You can watch the show live, or much later recorded on your TiVo. You watch the shows you were going to watch anyway, and just provide your demographics, all in the name of market research. You get two points per minute of watching, and after 7,500 points (that's 3,750 minutes, or roughly 62 hours of viewing) you get a $5 gift card from retailers such as Burger King, Starbucks, Best Buy, Sephora, Fandango, and CVS drugstores. For the typical American, it would take about three weeks to watch that much television!
Of course, this is not the only way to earn money working from home. A reader asked me for my opinion of [Amazon Mechanical Turk]. While the other examples above are done for marketing purposes, Mechanical Turk can be used for a variety of other things. Up to now, the IT industry has regarded the Cloud as the delivery of computing as a service, with the infrastructure, hardware and software existing on internationally networked servers, effectively invisible to the end user. This model is now being applied broadly to people.
Basically, Mechanical Turk acts as a marketplace, where employers post Human Intelligent Tasks (HITs) that workers can do. Most can be completed in minutes and you are paid pennies to do so. Some examples might help illustrate what a HIT looks like:
Call a business and get the email address of the manager in charge.
Review a photograph and describe its style or content in three words or less
Select among multiple choices to categorize a job listing or company position
As a Mechanical Turk worker, you only work on the HITs you choose to work on, presumably those that interest you, and that you can do well and quickly. Workers can do this anytime, anywhere: at 2:00am when you can't sleep, or at home while taking care of children. You can choose to work as much or as little as you like.
The employers--referred to as Mechanical Turk requesters--put money into their payroll accounts, load up their tasks, and hit publish. This gives them immediate access to a global, on-demand 24-by-7 workforce that can help complete thousands of HITs in minutes. These employers won't have to put an advertisement in the want ads and interview potential candidates, just to let them go later when the project is over.
Just like any other job, Mechanical Turk wages are reported to the IRS, and each person's work is evaluated for quality. In doing these tasks, you build up your "digital reputation" that will either prevent you or allow you to work on certain HITs. You can also take tests to reach Qualification levels to be eligible to work on HITs not available to everyone else.
Software engineers would have a hard time writing an Artificial Intelligence [AI] program to do these simple tasks, so being able to generate a HIT for something in the middle of a computer program might be the easiest way to get past a difficult part of an algorithm. Amusingly, Amazon describes this form of [crowdsourcing] as an artificial form of Artificial Intelligence!
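To show what "a HIT in the middle of a computer program" might look like, here is a small sketch using Amazon's current boto3 MTurk client (which did not exist when this was written). The title, question XML, reward and sandbox endpoint are all illustrative values, not anything from a real project.

    import boto3

    # Use the requester sandbox so no real money is spent while experimenting.
    mturk = boto3.client(
        "mturk",
        region_name="us-east-1",
        endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
    )

    question_xml = """<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
      <Question>
        <QuestionIdentifier>photo_style</QuestionIdentifier>
        <QuestionContent><Text>Describe this photograph's style in three words or less.</Text></QuestionContent>
        <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
      </Question>
    </QuestionForm>"""

    hit = mturk.create_hit(
        Title="Describe a photograph in three words",
        Description="Look at one image and give a short description of its style.",
        Reward="0.05",                     # pennies per task, as described above
        MaxAssignments=3,                  # ask three different workers
        LifetimeInSeconds=24 * 60 * 60,    # keep the HIT posted for one day
        AssignmentDurationInSeconds=300,   # each worker gets five minutes
        Question=question_xml,
    )
    print("HIT created:", hit["HIT"]["HITId"])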
While this approach may work for small, easily defined tasks, what about work that requires a large amount of human intelligence, like storage software or hardware development?
When I was working for IBM as a software engineer in the 1980s and 1990s, it took us years to get a project done, using the traditional [Waterfall Model]. My job as a software architect was to estimate the thousands of lines of code (KLOC) a project would require, estimate the number of Person-Years (PY) it would take, and recommend the appropriate sized team. Back then, each engineer averaged only about 1,000 lines of software code per year, so KLOC and PY were often used interchangeably. Fellow IBM author Fred Brooks wrote an excellent book on the process called [The Mythical Man-Month].
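As a back-of-the-envelope illustration of why KLOC and person-years were treated as interchangeable (the project size and team size below are made-up numbers, not from any real IBM project):

    # At roughly 1,000 lines of shipped code per engineer per year,
    # KLOC and person-years come out nearly one-for-one.
    estimated_kloc = 50              # hypothetical project: 50,000 lines of code
    loc_per_engineer_year = 1000     # the rough average cited above

    person_years = estimated_kloc * 1000 / loc_per_engineer_year    # 50 PY
    team_size = 10                                                  # engineers assigned
    schedule_years = person_years / team_size                       # about 5 calendar years

    print(f"{estimated_kloc} KLOC is about {person_years:.0f} person-years; "
          f"a team of {team_size} would need roughly {schedule_years:.0f} years")

Of course, as The Mythical Man-Month famously argues, people and months do not actually trade off that cleanly, which is part of why estimates like this so often went wrong.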
The Waterfall model had the advantage that people only had to work a portion of the cycle on the project. In between, there was plenty of downtime to attend training, improve your skills, or take vacation. As our director Lynn Yates would often complain, "if they are only writing two lines of code in the morning, and two in the afternoon, why do they need time to rest?"
The Waterfall model was not perfect, and had its share of critics. One downside was that the clients didn't see anything until General Availability (GA), with a few getting a glimpse a few months earlier during our Early Support Program (ESP). By the time clients could tell us it was not what they wanted or expected, it was too late to change until the next release.
To address this concern, 17 software engineers wrote the now famous [Agile Manifesto]. The authors felt that collaboration, between the developers and with the clients, is critical to success. Business people and developers must work together daily throughout the project. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation. The best architectures, requirements, and designs emerge from self-organizing teams. The result is an iterative approach that allows the client to see working prototypes early in the process, allowing last-minute changes to requirements to influence the final product.
Combining the Mechanical Turk concept with Agile programming methodology gives you what IBM calls an "Outcomes Model" approach. In the IBM research paper [Software Economies] (PDF, 5 pages), the authors argue that there are four fundamental principles needed for an "Outcomes Model" approach:
Autonomy. All of the actions necessary to bring jobs to completion should be driven by market forces; the process is never gated by an entity outside of the market.
Inclusiveness. Everyone who provides information or performs work that leads to improvements should share in the rewards.
Transparency. The system should be transparent with respect to both the flow of money in the market and the tasks performed by workers in the market.
Reliability. The system should be immune to manipulation, robust against attack (e.g., via insertion of untrusted code), and prevent "shallow" work which would have to be re-done later.
I was surprised to see that [the TopCoder Community is 390,593 strong], nearly the size of the entire IBM company. TopCoder is focused on computer programming and digital creation using the Outcomes Model approach. Rather than paying everyone for their work, however, the platform is designed around challenges and competitions, and the top players or contributors are rewarded with cash prizes.
As an innovative company, IBM constantly explores a variety of means and approaches to offer value to its clients and customers. These new approaches may have some distinct advantages not just for IBM and its shareholders, but also for its clients and the freelancers hired to work on these projects. The global marketplace is getting flatter, smaller and smarter. It will be interesting to see how this plays out. If the discussion above encourages you to hone your technical skills, perhaps that is motivation enough to get off the couch and stop watching so much television!
Raj hails from Toronto, Canada and will be able to provide the Canadian perspective on all things storage. I had the pleasure of meeting Raj in person here in Tucson when he and dozens of his cohorts came down for a multi-customer briefing at the [IBM Executive Briefing Center] where I work.