This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years with IBM Systems, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist during his time in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
On his The Storage Architect blog, Chris Evans wrote [Two for the Price of One]. He asks: why use RAID-1 compared to, say, a 14+2 RAID-6 configuration, which would be much cheaper in terms of disk cost? Perhaps without realizing it, he answers it with his post today [XIV part II]:
So, as a drive fails, all drives could be copying to all drives in an attempt to ensure the recreated lost mirrors are well distributed across the subsystem. If this is true, all drives would become busy for read/writes for the rebuild time, rather than rebuild overhead being isolated to just one RAID group.
Let me try to explain. (Note: This is an oversimplification of the actual algorithm, in an effort to make it more accessible to most readers, based on written materials I have been provided as part of the acquisition.)
In a typical RAID environment, say 7+P RAID-5, you might have to read 7 drives to rebuild one drive, and in the case of a 14+2 RAID-6, read 15 drives to rebuild one drive. It turns out the performance bottleneck is the one drive to write: today's systems can rebuild faster Fibre Channel (FC) drives at about 50-55 MB/sec, and slower ATA disk at around 40-42 MB/sec. At these rates, a 750GB SATA rebuild would take at least 5 hours.
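To see where the 5-hour figure comes from, here is a quick back-of-envelope calculation in Python; the capacity and write rate are simply the numbers quoted above.

```python
# Rough rebuild-time estimate, assuming the single destination drive is the
# bottleneck and sustains roughly 42 MB/sec (the slower ATA/SATA figure above).
drive_capacity_gb = 750
rebuild_rate_mb_per_sec = 42

seconds = (drive_capacity_gb * 1000) / rebuild_rate_mb_per_sec
print(f"Rebuild time: {seconds / 3600:.1f} hours")   # prints about 5.0 hours
```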
In the IBM XIV Nextra architecture, let's say we have 100 drives. We lose drive 13, and we need to re-replicate any at-risk 1MB objects. An object is at-risk if it is the last and only remaining copy on the system. A 750GB drive that is 90 percent full would have 700,000 or so at-risk object re-replications to manage. These can be sorted by drive. Drive 1 might have about 7000 objects that need re-replication, drive 2 might have slightly more or slightly less, and so on, up to drive 100. The re-replication of objects on these other 99 drives goes through three waves.
Select 49 drives as "source volumes", and pair each randomly with a "destination volume". For example, drive 1 mapped to drive 87, drive 2 to drive 59, and so on. Initiate 49 tasks in parallel, each re-replicating the blocks that need to be copied from its source volume to its destination volume.
50 volumes left. Select another 49 drives as "source volumes", and pair each with a "destination volume". For example, drive 87 mapped to drive 15, drive 59 to drive 42, and so on. Initiate 49 tasks in parallel, each re-replicating the blocks that need to be copied from its source volume to its destination volume.
Only one drive left. We select the last volume as the source volume, pair it off with a random destination volume, and complete the process.
Each wave can take as little as 3-5 minutes. The actual algorithm is more complicated than this; as tasks complete early, the source and destination drives become available for re-assignment to another task, but you get the idea. XIV has demonstrated that the entire process (identifying all at-risk objects, sorting them by drive location, randomly selecting drive pairs, and then performing most of these tasks in parallel) can be done in 15-20 minutes. Over 40 customers have been using this architecture over the past 2 years, and by now all have probably experienced at least a drive failure to validate this methodology.
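For readers who think better in code, here is a minimal sketch of the wave-style pairing just described. The drive numbers, batch size of 49, and pairing logic are illustrative assumptions based on the simplified description above, not the actual XIV/Nextra algorithm (which, as noted, reassigns drives as soon as tasks finish early).

```python
import random

def rereplicate(failed_drive, total_drives=100, tasks_per_wave=49):
    # Every surviving drive holds some at-risk 1MB objects that lost their mirror.
    survivors = [d for d in range(1, total_drives + 1) if d != failed_drive]
    pending = list(survivors)          # drives whose at-risk objects still need copying
    wave = 1
    while pending:
        sources = pending[:tasks_per_wave]
        pending = pending[tasks_per_wave:]
        # Pair each source with a randomly chosen, distinct destination drive.
        candidates = [d for d in survivors if d not in sources]
        destinations = random.sample(candidates, len(sources))
        print(f"Wave {wave}: {len(sources)} copy tasks running in parallel")
        for src, dst in zip(sources, destinations):
            pass                       # re-replicate src's at-risk objects onto dst
        wave += 1

rereplicate(failed_drive=13)           # three waves: 49, 49, and 1 tasks
```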
In the unlikely event that a second drive fails during this short time, only one of the 99 tasks fails. The other 98 tasks continue to help protect the data. By comparison, in a RAID-5 rebuild, no data is protected until all the blocks are copied.
As for requiring spare capacity on each drive to handle this case, even the fullest disks in production environments are typically only 85-90 percent full, leaving plenty of spare capacity to handle the re-replication process. On average, Linux, UNIX and Windows systems tend to fill disks only 30 to 50 percent full, so the fear that there is not enough spare capacity should not be an issue.
The difference in cost between RAID-1 and RAID-5 becomes minimal as hardware gets cheaper and cheaper. For every $1 you spend on storage hardware, you spend $5 to $8 managing the environment. As hardware gets cheaper still, it might even be worth making three copies of every 1MB object; the parallel process to perform re-replications would be the same. This could be done using policy-based management: some data gets triple-copied, and other data gets only double-copied, based on whether the user selected "premium" or "basic" service.
The beauty of this approach is that it works with 100 drives, 1000 drives, or even a million drives. Parallel processing is how supercomputers are able to perform amazing feats of mathematical computation so quickly, and how Web 2.0 services like Google and Yahoo can perform web searches so quickly. Spreading the re-replication process across many drives in parallel, rather than performing it serially onto a single drive, is just one of the many unique features of this new architecture.
Has it been a week already? I am here in Europe checking out various options for mobile, social media and cloud on my "Digital IBMer" tour. Here's where we have been so far...
We landed at the Frankfurt airport, which will serve as our starting and ending point. It is close to Mainz, where my IBM colleagues at the Executive Briefing Center for Germany are located. I looked throughout the airport for a SIM chip for my smartphone that would work in all European countries, but nobody had one for sale. We had lunch while we waited for the train to Brussels.
Our next stop was Brussels, capital of Belgium. The Belgians speak Flemish, which is like a Belgian version of Dutch, and French. I don't speak Flemish or Dutch, so I have been able to get by on French here. The Hotel Opera was near the central station, but we got off at Bruxelles-Midi, thinking that Midi meant the middle or center of the city; it is Flemish (er.. make that French) for south instead, so we had a bit of walking to do!
Bruges is only an hour train ride from Brussels and is worth seeing. Our Eurail pass makes it easy just to go from city to city by train. Our particular one allows us first class travel through 23 countries for 15 contiguous days. We had lunch at the central square, and for dessert... Belgian Waffle-on-a-stick! Mine was covered in powdered sugar, and soon the rest of me was also.
Through tweets on Twitter, I was able to meet up with Stef, a local storage administrator and fan of my blog, and go out for beers. Stef was kind enough to lend me a pre-paid SIM chip for my phone that provides a data plan while I am in Belgium! Thank you Stef!
Amsterdam, The Netherlands
Not surprisingly, Amsterdam is one of my favorite cities. It's like Las Vegas without casinos. Our hotel, The Bulldog, is conveniently located in the center of town.
I met up with Joanne, a professional cellist (yes, she plays the Cello musical instrument for a living) who took us on a tour of the MuzikGebouw, which is where they hold concerts and events. Using the "Amsterdam City Guide" app from Travel Advisor on my smartphone, we followed one of their suggested self-guided walking tours. We also went to the Rijksmuseum, which is under construction, so only a subset of the art is on display.
From Amsterdam, we took a night train to Copenhagen. This is a 15-hour train ride, no dinner, but they give you breakfast. Men and women are in separate sleeping cars, and I was paired up with a businessman, Danny, from Taiwan trying to sell clothing for firefighters.
Until now, I have managed with German, French and English, but I wasn't sure about Danish, so I brought this "European Phrase Book" that has 14 languages. We stayed at the DanHostel, conveniently located near "Tivoli Park".
We have safely arrived in Berlin. Our train from Copenhagen to Hamburg went on a ferry boat to cross over the water! We are staying at Plus Berlin Hostel, which has a nice indoor swimming pool and dry sauna.
Until now, we have had beautiful sunny weather, but today is cold and dreary. We started out taking photos of all the graffiti in East Berlin we could find, but it started raining, so we changed plans and went to the world famous Pergamonmuseum.
Well, that's my first week of adventure. Tomorrow, we leave for Prague in the Czech Republic!
Wrapping up my coverage of the IBM Dynamic Infrastructure Executive Summit at the Fairmont Resort in Scottsdale, Arizona, we had a final morning of main-tent sessions. Here is a quick recap of the sessions presented Thursday morning. This left the afternoon for people to catch their flights or hit the links.
Data Center Actions your CFO will Love
Steve Sams, IBM Vice President of Global Site and Facilities, presented simple actions that can yield significant operational and capital cost savings. The first focus area was to extend the life of your existing data center. Some 70 percent of data centers are 10-15 years old or older, and therefore not designed for today's computational densities. IBM did this for its Lexington data center, making changes that resulted in 8x the capability without increasing the footprint.
The second focus area was to rationalize the infrastructure across the organization. The process of "rationalizing" involves determining the business value of specific IT components and deciding whether that business value justifies the existing cost and complexity. It allows you to prioritize which consolidations should be done first to reduce costs and optimize value. IBM's own transformation reduced 128 CIOs down to a single CIO, consolidated 155 scattered host data centers down to seven, and reduced 80 web hosting data centers down to five. This also included consolidating 31 intranets down to a single global intranet.
The third focus area was to design your new infrastructure to be more responsive to change. IBM offers four solutions to help those looking to build or upgrade their data center:
Scalable Modular Data Center - save up to 20 percent compared to traditional deployments, with turn-key configurations from 500 to 2500 square feet that can be deployed into existing floorspace in as little as 8-12 weeks.
Enterprise Modular Data Center - save 40 to 50 percent with a 5000 square foot standardized design for larger data centers. This modular approach provides a "pay as you grow" model that can be more responsive to future unforeseen needs.
Portable Modular Data Center - this is the PMDC shipping container that was sitting outside in the parking lot. This can be deployed anywhere in 12-14 weeks and is ideal for dealing with disaster recoveries or situations where traditional data center floor plans cannot be built fast enough.
High Density Zone - this can help increase capacity in an existing data center without a full site retrofit.
Here is a quick [video] that provides more insight.
Neil Jarvis, CIO of American Automobile Association (AAA) for Northern California, Nevada and Utah (NCNU), provided the customer testimonial. Last September, the [AAA NCNU selected IBM] to build them an energy-efficient green data center. Neil provided us an update now six months later, managing the needs of 4 million drivers.
Virtualization - Managing the World's Infrastructure
Helene Armitage, IBM General Manager of the newly formed IBM System Software product line, presented on virtualization and management. Virtualization is becoming much more than a way of meeting the demand for performance, capability, and flexibility in the data center. It helps create a smarter, more agile data center. Her presentation focused on four areas: consolidate resources, manage workloads, automate processes, and optimize the delivery of IT services.
Charlie Weston, Group Vice President of Information Technology at Winn Dixie, one of the largest food retailers in the United States with over 500 stores and supermarkets, provided the customer testimonial. The grocery business is highly competitive with tight profit margins. Winn Dixie wanted to deploy business continuity/disaster recovery (BC/DR) while managing IT equipment scattered across these 500 locations. They were able to consolidate 600 stand-alone servers into a single corporate data center. Using IBM AIX with PowerVM virtualization on BladeCenter, each JS22 blade server could manage 16 stores. These were mirrored to a nearby facility, as well as a remote disaster recovery center. They were also able to add new Linux application workloads to their existing System z9 EC mainframe. The result was to free up $5 million US dollars in capital that could be used to remodel their stores, and to improve application performance 5-10 times. They were able to deploy a new customer portal on Linux for System z in days instead of months, and have reduced their recovery time objective (RTO) against hurricanes from days to hours. Their next step involves looking at desktop virtualization.
Redefining x86 Computing
Roland Hagan, IBM Vice President for the IBM System x server platform, presented on how IBM is redefining the x86 computing experience. More than 50 percent of all servers are x86 based. These x86 servers are easy to acquire, enjoy a large application base, and can take advantage of a readily available skilled workforce for administration. The problem is that 85 percent of x86 processing power remains idle, energy costs are 8 times what they were 12 years ago, and management costs are now 70 percent of the IT budget.
IBM has the number one market share for scalable x86 servers. Roland covered the newly announced eX5 architecture that has been deployed in both rack-optimized models as well as IBM BladeCenter blade servers. These can offer 2x the memory capacity as competitive offerings, which is important for today's server virtualization, database and analytics workloads. This includes 40 and 80 DIMM models of blades, and 64 to 96 DIMM models of rack-optimized systems. IBM also announced eXFlash, internal Solid State Drives accessible at bus speeds.
The results can be significant. For example, just two IBM System x3850 4-socket, 8-core systems can replace 50 (yes, FIFTY) HP DL585 4-socket, 4-core Opteron rack servers, reducing costs 80 percent with a 3-month ROI payback period. Compared to IBM's previous X4 architecture, the eX5 provides 3.5 times better SAP performance, 3.8 times faster server virtualization performance, and 2.8 times faster database performance.
The CIO of Acxiom provided the customer testimonial. They were able to get a 35-to-1 consolidation switching over to IBM x86 servers, resulting in huge savings.
Top ROI projects to Get Started
Mark Shearer, IBM Vice President of Growth Solutions, and formerly my fourth-line manager as Vice President of Marketing and Communications, presented a list of projects to help clients get started. There are over 500 client references that have successfully implemented Smarter Planet projects. Mark's list was grouped into five categories:
Enabling Massive Scale
Increase Business Agility
Manage Risk, Compliance and Security
Organize Vast Amounts of Information
Turn Information into Insight
The attendees were all offered a free "Infrastructure Study" to evaluate their current data center environments. A team of IBM experts will come on-site, gather data, interview key personnel and make recommendations. Alternatively, these can be done at one of IBM's many briefing centers, such as the IBM Executive Briefing Center in Tucson, Arizona, where I work.
This wraps up the week for me. I have to pack the XIV back into the crate, and drive back to Tucson. IBM plans to host another Executive Summit in the September/October time frame on the East coast.
Well it's Tuesday again, and you know what that means? IBM Announcements!
(Update: I thought it was quite clever to announce the new z13 mainframe on January the 13th. A few [triskaidekaphobic] employees pointed out that certain [Greek and Spanish-speaking cultures] consider Tuesday the 13th to be an unlucky day. However, superstitious people should probably not work in IT, as it would be difficult for a worldwide company like IBM to avoid all the numbers that different cultures consider unlucky.)
You are cordially invited to join IBM on January 14 from 2:00pm to 4:30pm Eastern Standard Time (US) when IBM will share a whole new generation of IBM z Systems™ built to meet the needs of your digital business. Join us and learn how IBM z Systems are designed to:
Support the transaction growth and needs of the mobile generation and the Internet of things
Integrate data, transactions and analytics, for in-transaction insights and right-time actions
Provide secure, trusted and efficient cloud services with new economic models
Exploit new modern, open development environments, tooling and skills for greater returns
At this live streaming event, you will hear from a remarkable group of business and technology leaders who will share success stories, best practices and the exciting technology innovations and capabilities of the new generation of IBM z Systems. Go to the [Registration page] to participate.
But what does this really mean? Are you thinking BFD?
(Update: For those not familiar with IT acronyms, BFD refers to "Bigger, Faster, Denser" -- the trend in IT to announce new generations that are merely bigger, faster, and/or denser versions of the previous generations. Fortunately, the z13 takes up the same amount of data center floor space -- 2 floor tiles = 2 square meters = 20 square feet -- and weighs approximately the same as the z196 and zEC12, so raised floor struts do not have to be strengthened or reinforced to take in this new system.)
You may have noticed that we are now talking about "z System" instead of "System z". This change was made to line up with IBM's change to "POWER Systems" from "System p". Leadership felt that dropping the stodgy old zEnterprise and giving the mainframe a "hip" new name would attract new emerging digital workloads like Cloud, Analytics, Mobile and Social.
This is not the first time IBM has renamed products in a series. While the IBM mainframe just celebrated its 50th anniversary last year, the "13" refers to the 13th generation of CMOS-based mainframe technology introduced in 1994. Here is a quick table to show you the names that have evolved over the years:
Generations 1 to 6 - S/390 G1 to G6
Generation 7 - zSeries z800 and z900
Generation 8 - zSeries z890 and z990
Generations 9 and 10 - System z9 and z10
Generation 11 - System z196 and z114
Generation 12 - zEnterprise zEC12 and zBC12
(Note: This change also corresponds to a complete restructuring of IBM into business units, eliminating its former hardware and software groups. The design and development of all mainframe-related hardware, software and middleware will be consolidated under the IBM Systems business unit. I will wait for IBM's 4Q financial results announcement on or after January 20 before I cover this in any more detail.)
The z13 machine itself has some unique differences from previous generations. Instead of a "Multi-chip Module" (MCM) that contained multiple processor and storage controllers on a single slab, the z13 uses Single-Chip Modules (SCM) that are either a single 8-core processor, or a single system controller, allowing them to be field replaceable units (FRU).
Previous generations organized the processors into 1 to 4 vertical "books". The problem was that if you had a single-book system, you bought a lot of hardware infrastructure designed to support a full four books. In the new design, processors are organized into horizontal Central Processor Complex (CPC) drawers, with additional hardware infrastructure provided per drawer. This makes the lower-end models more affordable. Each drawer has six processor SCMs and two system controller SCMs, providing 39 to 42 usable cores per drawer. Models range from 30 to 141 usable cores, with the option to upgrade from one model to another as your needs grow.
The z13 provides N-2 generation compatibility. This means you can have the z196, zEC12 and z13 all participate in the same Parallel Sysplex. You will also be able to upgrade your z196 or zEC12 to the new z13 system.
The new z13 can have up to 10TB of memory, and this can be assigned entirely to a single Logical Partition, or LPAR. The system can be subdivided into up to 85 LPARs, versus 60 on the previous generation. Currently, z/OS v1 can only have up to 1TB per LPAR, and z/OS v2 can only go up to 4TB, so I suspect this 10TB is planning for future OS releases.
The new z13 now offers Simultaneous Multithreading [SMT]. Initially, this will double the number of threads for IFL engines (supporting Linux and z/VM), and zIIP engines supporting DB2, Java, XML and IPsec workloads. IBM is eliminating the zAAP engines, since zIIP engines can do all of that. The new [Preview z/OS v2.2] will take advantage of this SMT capability.
To assist with database, analytics and multimedia workloads, the z13 offers Single Instruction, Multiple Data [SIMD] capability. This allows a single instruction to perform the same update or action across many data fields.
Clients with zBX models 2 and 3, which allow you to run POWER-based AIX and x86-based Microsoft Windows and Linux operating systems on your mainframe, will be able to upgrade to the [zBX model 4 for the z13 System]. Now that IBM has sold off its x86 server business to Lenovo, I suspect it will also phase out the zBX offerings as well.
To handle emerging workloads of Cloud, Mobile and other Web applications, IBM will offer a new, stronger and faster Crypto Express5S cryptographic adapter. The z13 will enhance public key support for constrained digital environments using Elliptic Curve Cryptography (ECC), used by applications such as Chrome, Firefox, and Apple's iMessage. The z13 will also minimize reformatting of databases with new exploitation of VISA format preserving encryption (FPE) for credit card numbers.
The z13 also made some enhancements for Linux clients. The zAware analytics that analyzes internal traces and logs for z/OS has been extended to support Linux on System z. For those who want to use GDPS Business Continuity and Disaster Recovery services, but don't want to develop z/OS skills for the "K" system, there will now be a Virtual GDPS appliance that will run self-contained z/OS. Lastly, IBM has made a statement of direction that it will support open source Linux KVM as a Linux-only alternative to z/VM hypervisor. OpenStack will support both this new Linux KVM as well as z/VM 6.3 release.
The PCIe bus has been upgraded to Gen3 at 16Gbps, from the Gen2 used in the zEC12. These can be used for Coupling Facility links, which are faster than the legacy 6 Gbps InfiniBand links; the InfiniBand links are still supported for legacy migration. People with a z196 or zEC12 can either carry forward the I/O drawers they have previously purchased, or move the PCIe Gen2 cards into the new Gen3 drawers.
The new z13 will also support 16Gb FICON, using the new FICON Express5S cards. Here is my segue into storage, as you are probably now wondering when I was going to get to the storage part of the announcement!
IBM is also announcing corresponding changes to the DS8870 firmware and accessories to go with the z13 System. This includes:
FICON Dynamic Routing
Reduce cost with improved and persistent performance for supporting I/O devices. This feature will allow SAN directors to have both FICON and FCP share the same Inter-switch Links (ISL). This is especially useful for clients who use FCP with their Linux, z/VM, AIX or Windows workloads.
16Gb Host Adapters
Improve network performance with FC and FICON adapters that are twice as fast, and minimize latency for database log writes with zHyperWrite and Metro Mirror. These 16Gb adapters can auto-negotiate down to 4Gb and 8Gb, so the DS8870 can connect to both the new z13 mainframe and older models.
Forward Error Correction
Preserve data integrity with more redundancy on the information transmitted via 16Gb adapters.
zHPF Extended Distances improvements
Increase remote data speed with 50 percent better I/O performance when accessing remote disk, typically after a HyperSwap. The "zHPF" acronym is short for z System High Performance FICON.
Improved resiliency capabilities while enhancing the value of FICON Dynamic Routing mentioned above. IBM is extending Workload Manager (WLM) Quality of Service (QoS) optimization that exists now for compute and storage to the SAN Fabric, allowing WLM policies to influence FICON traffic.
IBM zHyperWrite™ capability
Helps you achieve better DB2 log write performance when using Metro Mirror (PPRC) in a HyperSwap-managed environment. Log writes are sent directly to both the primary and secondary DASD, freeing up Metro Mirror resources.
But don't just take my word for it, here are reviews of the new system from various journalists:
"They seem a computing odd couple: the mainframe, the old workhorse, and the smartphone, the cool-kid computer of today. But IBM has designed the latest version of the mainframe, which is being introduced on Wednesday, with the smartphone in mind. The new mainframe, the z13, has been engineered to cope with the huge volume of data and transactions generated by people using smartphones and tablets."
-- New York Times
"One customer enthusiastic about such features is Citigroup Inc., a longtime IBM user that favors mainframes for both reliability and security. 'Security is in the DNA of the mainframe,' said Martin Kennedy, Citi's managing director for platforms and storage. Another factor shaping the bank's needs, Mr. Kennedy said, is the rising volume of transactions carried out using smartphones and other mobile devices. Mainframes are particularly good at combining data from a variety of systems and presenting them to a user's mobile app, he said."
-- The Wall Street Journal
"IBM is introducing a new mainframe in a bet that clients will need its souped-up speed and security to handle a surge in consumers using smartphones for everything from banking to checking health-care records. The z13 system can encrypt and analyze data in real time and process 30,000 transactions a second, International Business Machines Corp. (IBM) announced today. That means faster and safer transactions for consumers on mobile phones."
"With the unveiling of the z13, IBM has taken its MobileFirst Platform to deliver even better performance and security than before, as it incorporates the fastest microprocessor in the world, server processors that are twice as fast as existing products, 300 percent additional memory and 100 percent more bandwidth analytics speed. Last year, IBM formed a partnership with Apple to help bring Apple's iDevices to business customers to boost sales, with IBM providing cloud and mobile analytics support."
-- The Street
"'We're driving toward a world where more and more people are using mobile devices, or embedded devices, to interact with systems,' John Birtles, director of IBM z Systems, tells WIRED. 'We need to make sure that those devices are secure, that the transaction's secure, and that our clients get the level of analytics that gives them opportunities to improve their businesses.'"
IBM mainframes, which process the majority of financial transactions around the world, are well positioned to handle Cloud, Analytics, Mobile and Social workloads. The IBM DS8000 series disk is the #1 market leader for disk storage in mainframe environments.
This week, I will be in Auckland, New Zealand for the [IBM System x and System Storage Technical Symposium]. This is a three-day event, with 35 unique sessions and labs. The agenda is organized with a keynote session in the beginning, followed by 12 time slots over three days, each slot offering five different break-out session topics to choose from. Here is a recap of Day 1:
The keynote was led by Phil Tasker, IBM Business Unit Executive (BUE) for STG Education Programs in Growth Markets; then Matt Paterson, General Manager for Sales in New Zealand, said a few words. IBM is in the Top 10 Training Hall of Fame, and conducts over 40,000 classes worldwide, resulting in over 1.3 million student days of instruction. IBM Systems Lab and Training hosts over three dozen technical conferences like this one every year. This is the first time the System x and Storage Symposium has been run in New Zealand, and based on the incredibly good turn-out, it will probably become a regular event.
Matt Ziegler - HPC
Matt Ziegler, IBM Senior HPC Solutions Architect on the iDataPlex marketing team, gave an introduction to HPC during the keynote, then provided more details in a break-out session.
In the High Performance Computing (HPC) market, IBM POWER used to be the dominant chipset, with over 200 of the top 500 supercomputers back in June 2001. Today, only about 50 use POWER; instead, over 350 of the top 500 supercomputers use x86. HPC represents a 6.3 percent growth opportunity for compute, 9.3 percent growth for storage, and 8.6 percent growth for services.
IBM's leadership in energy efficiency applies to HPC as well. In the "Green 500", a ranking based on MFLOPS/Watt, 19 of the top 25 are from IBM. IBM's iDataPlex is the most energy efficient x86 platform, at 401 MFLOPS per Watt.
Overall, x86 is growing. In 2005, x86 had 48 percent of the market, RISC/Itanium had 39 percent, and mainframe had 12 percent. In 2009, x86 grew to 56 percent, RISC/Itanium dropped to 33 percent, and mainframe to 11 percent. By 2014, Matt projects that x86 will be 63 percent, RISC/Itanium will drop to 30 percent, and mainframe to 7 percent.
The most popular form factor for x86 is blades, growing from 8 percent in 2005 to 20 percent in 2009, and projected to reach 33 percent by 2014.
IBM's Storage Strategy in the Era of Smarter Computing
I gave this presentation twice today. It has evolved quite a bit from the version I presented in Orlando last July. Attendees appreciated that my colorful analogies and stories helped them better understand the concepts of Big Data analytics, Workload-Optimized systems, and Cloud Storage offerings.
SONAS Product Review and Demo
Rich Swain presented IBM's Scale-Out Network Attached Storage (SONAS) and provided a live demo connecting to a box here in New Zealand. This is a topic I often present at the Tucson Executive Briefing Center, but it is always good to hear someone else's spin.
Phil Tasker invited everyone to the Welcome Reception after the last sessions. There was food and drink, and prizes! One person won an Xbox-360 game console, and two people won iPads.
Well, it's Tuesday again, and that means IBM announcements! Today we had a major launch, with so many products, services and offerings
that I can't fit them all into a single post, so I will split them up into several posts to give the attention they deserve. So, in this
post, I will focus on just the networking gear.
IBM Converged Switch B32
The "Converged" part of this switch refers to Converged Enhanced Ethernet (CEE), which is just a lossless Ethernet that meets certain standards to allow Fibre Channel over Ethernet (FCoE) that are still being discussed between Brocade and Cisco. Thankfully, IBM demanded both Brocade and Cisco stick to open agreed-upon standards, and the rest of the world gets to benefit from IBM's leadership in keeping everything as open and non-proprietary as possible.
The B32 ("B" because it was made by Brocade) starts with 24 10Gb Converged Enhanced Ethernet (CEE) ports, and then you can add eight Fibre Channel ports, for a total of 32 ports, hence the name B32. These are designed to be Top-of-Rack (TOR) switches. Basically, instead of having expensive optical cables for Ethernet and/or Fibre Channel out of each server, you have cheap twinax copper cables connecting the server's Converged Network Adapters (CNA) to this TOR switch, and then you can have the 10Gb Ethernet go to your regular Ethernet LAN, and your 8Gbps FC traffic go to your regular FC SAN. In other words, the CNA serves both the role of an Ethernet Network Interface Card (NIC) as well as a Fibre Channel Host Bus Adapter (HBA) card.
(You might see 8Gbps Fibre Channel represented as 8/4/2 or 2/4/8; this is just to remind you that these 8Gb FC ports can auto-negotiate down to 2Gbps and 4Gbps for legacy hardware, but not 1Gbps. If you are still using 1Gbps FC, you need 4Gbps SFP transceivers instead, often shown as 1/2/4 or 4/2/1.)
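As a hypothetical illustration of that auto-negotiation rule, here is a tiny helper in Python; the speed sets simply restate the 8/4/2 and 4/2/1 figures above, and the function name is my own invention.

```python
# Hypothetical helper restating the rule above: 8Gb FC ports negotiate down to
# 4 or 2 Gbps but not 1 Gbps; 4Gb SFP transceivers cover 4, 2 and 1 Gbps.
SUPPORTED_SPEEDS_GBPS = {"8Gb SFP+": {8, 4, 2}, "4Gb SFP": {4, 2, 1}}

def negotiated_speed(transceiver, remote_speed_gbps):
    """Return the link speed if the two ends can agree, otherwise None."""
    if remote_speed_gbps in SUPPORTED_SPEEDS_GBPS[transceiver]:
        return remote_speed_gbps
    return None

print(negotiated_speed("8Gb SFP+", 1))   # None: 1Gbps gear needs a 4Gb SFP instead
print(negotiated_speed("4Gb SFP", 1))    # 1
```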
New SSN-16 module for Cisco directors and switches
When I present SAN gear to sales reps, I often get the question, "What is the difference between a switch and a director?" My quick and simple answer is that switches have fixed ports, but directors have slots that you can slide in different blades or expansion modules. The Cisco MDS9500 series are directors with slots, the three models provide a hint to their capacity. The last two digits represent the number of total slots, but the first two slots are already taken. In other words, model 9513 has 11 slots, model 9509 has seven slots, and model 9506 has four slots. You can have a 48-port blade in a slot, so in theory, you can have a maximum of 528 ports on the biggest model 9513.
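The slot-and-port arithmetic is easy to check; here is a quick sketch using only the figures quoted above (the model digits, the two slots already taken, and 48-port blades).

```python
# Port-capacity math for the MDS9500 directors described above: the last two
# digits of the model hint at the slot count, the first two slots are already
# taken, and a 48-port blade can go in each remaining slot.
for model, slots in (("MDS 9513", 13), ("MDS 9509", 9), ("MDS 9506", 6)):
    payload_slots = slots - 2
    print(f"{model}: {payload_slots} usable slots, up to {payload_slots * 48} FC ports")
# MDS 9513: 11 usable slots, up to 528 FC ports
```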
However, if you want FCIP for disaster recovery, or I/O Acceleration (IOA) for remote e-vaulting tape libraries, you need a special 18/4 blade. This has 18 FC ports, four 1GbE ports and a special service processor that speaks FCIP or IOA. If you wanted two service processors for FCIP and two for IOA, you would need four of these blades, and that takes up slots that could have been used for 48-port blades instead. The solution? The new SSN-16 has sixteen 1GbE ports and four service processors, so with one slot, you can handle the FCIP and IOA processing for which you previously used four cards, giving you three slots back to use for higher port-density cards.
Even better, you can put this new SSN-16 in the Cisco 9222i. The model 9222i is a "hybrid" switch with 22 fixed ports (18 FC ports, four fixed 1GbE ports, and a service processor, so basically the fixed port version of the 18/4 blade above), but it also has one slot! That one slot can be used for the SSN-16 to give you added FCIP or IOA capability.
For our mainframe clients, the FICON package includes four 24-port FICON blades and 96 SFP 4Gbps transceivers to fully populate them. Here is the IBM [Press Release].
Cisco Nexus 5000 series for IBM System Storage
The Cisco Nexus 5000 series is Cisco's entry into the Converged Enhanced Ethernet world. Although Cisco sometimes refers to this as Data Center Ethernet (DCE), IBM will continue to use CEE when referring to either Brocade or Cisco gear. These are also Top-of-Rack aggregators that support CNA connections over cheaper twinax copper wires. Model 5010 has 10 ports that can be configured for either 1GbE or 10Gb CEE, 10 ports that are 10Gb CEE, and a slot for an expansion module. The Model 5020 has basically twice as much of everything, including two slots instead of one. Since 10Gb Ethernet does not auto-negotiate down to 1GbE, half the ports can be configured to run 1GbE instead. Frankly, that can be seen as wasting your precious Nexus ports on 1GbE connections, so you might use a 1GbE-to-10GbE aggregator that combines a dozen or more 1GbE links into a few 10GbE links instead.
Today's announcement is that in addition to 10GbE and 4Gbps FC expansion modules, there is now an expansion module that supports 8Gbps Fibre Channel. Here is the IBM [Press Release].
Whether you choose Brocade or Cisco, nearly all of IBM System Storage disk and tape products can work today with Converged Enhanced Ethernet environments, either directly using iSCSI, NFS or CIFS, or using the FCoE methodology.
As you can see, it took me a whole post just to cover just our networking gear announcements, and I haven't even covered our disk, tape and cloud storage offerings. I'll get to these in later posts.
Continuing my romp through Australia and New Zealand, the last Storage Optimisation Breakfast of the week was Brisbane, which the locals here refer to as [Brisvegas], probably for all of the nightlife and casinos here.
The IBM office building is conveniently across the street from my hotel, the [Sofitel Brisbane]. The hotel also sits above central station, which allows quick transportation to the airport.
This time, we had a tag team of two people from James Cook University (JCU) to present their success story. First up was Kent Adams, the Director of Information Technology and Resources. JCU is recognized as one of the top 5 percent of universities worldwide, and as a result, their data storage requirements are growing at 400 percent per year! Their latest purchase, put out for RFP, was for at least 40TB that could handle at least 20,000 IOPS. The winning solution was an IBM XIV disk system.
Behind the scenes at all the events this week here in Australia were, from left to right, Natalie from GPJ Australia, the local subsidiary of the George P. Johnson events management we use in the states; Sonia Phillips, IBM Advisory Marketing Lead for Dynamic Infrastructure Optimisation and Cloud Computing, Demand Programs, for Australia and New Zealand; and Monika Lovgren, IBM Marketing and Execution Lead for Workload Optimised Systems for Australia.
The second speaker was Lee Askew, one of the Storage Administrators. Overall, the JCU team have been amazed at how well this box works. When they started it up, they expected to spend the next 24-36 hours formatting RAID ranks, but not with the XIV. It was ready in 2 minutes and they started provisioning storage right away. Their own tests to fail a drive found they can do a full rebuild to redundancy in 9 minutes. It took 8-36 hours on their previous disk array. Failing a full data module took only 75 minutes to bring back to redundancy.
After a long and tiring week, I was able to relax by walking through this beautiful King Edward park near the IBM building. This had a nice variety of plants and flowers, and with the surprise visit of a lizard about the length of my arm that crossed my path.
JCU also uses Asynchronous Mirror to replicate data to another XIV at distance. Again, as with all aspects of IBM XIV, the solution works as advertised. They are well positioned to grow from the 18,000 students they have today, to their target goal of 25,000 students they want to have by 2015.
Worldwide, IBM has done well with colleges and universities, and this was a great example of how partnering with IBM for your IT infrastructure can make a huge difference!
This week I'm in Argentina, teaching IBM Business Partners and sales reps about the latest System Storage products. Encouraged by my success on my Digital IBMer tour last month in Europe, I decided to get a SIM chip for my smartphone here in Buenos Aires.
I did my homework. There are three major mobile service providers that offer pre-paid GSM-based SIM chips: Claro, Movistar, and Personal. I arrived on Sunday morning, but thanks to the local [blue laws], none of them were open. I was able to walk around and find retail outlets for each within blocks of my hotel.
All three offer voice and SMS text messaging, but online reviews indicated that Movistar offered the best data plan. I was there at 9:30am sharp, the moment the Movistar store opened Monday morning. The lovely young lady behind the counter was quite helpful. She put the SIM chip in my phone, but then told me it might be an hour or two before it was activated. I would receive an SMS text message welcoming me to the Movistar network. She provided my new 12-digit phone number, along with instructions on how to check my balance (*444) or call for technical assistance (*611).
(FTC Disclosure: even though I am not in the United States as I write this, the U.S. Federal Trade Commission rules require that I mention that this blog post is not intended as a paid or celebrity endorsement for any of the cellphone service providers mentioned. I work for IBM, and this post is based entirely on my personal experience.)
Why not just use the international roaming available on my US plan? International roaming is quite expensive! I made the mistake of uploading three hi-res photos to Flickr last year in New Zealand, only to discover this the hard way. Here is a comparison chart:
Voice calls (per minute): $2.80 pesos (about $0.64 USD)
SMS text (per message sent): $0.90 pesos (about $0.20 USD)
Data (1GB across 2 days): $10.00 pesos (about $2.27 USD)
(If your spouse or significant other threatens to leave you if you don't call her every day while out of the country, remind her that divorce attorneys are less expensive than these international roaming rates! Fortunately, all of my friends and family know this and are quite understanding if they don't hear from me as often as they would like.)
The SIM chip cost only 30 pesos (about seven bucks). Normally, SIM chips come without credit, but their current promotion included 20 pesos credit for voice calls (enough for 7 minutes of talking), and 200 free SMS text messages.
Six hours later, my phone still was not yet activated. I returned to the store Monday afternoon to ask what was going on. She decided the chip must be bad, gave me a second one, and assigned me a new phone number. I would then have to wait again another hour or two for the welcome message.
Monday evening, a grey window pops up, "Bienvenidos a Movistar" so I thought it was activated, but it wasn't exactly the SMS text message the young lady told me would happen. Sure enough, neither *444 nor *611 worked, giving me voice responses that my phone is not yet activated, and please wait another hour.
Tuesday morning, I am back at the Movistar outlet. The young lady was not happy to see me. She confirmed my second chip was not yet activated, but felt she did nothing wrong. She insisted the problem was either with my phone, or with the Movistar main office, but that she did everything correctly by the book.
(I realize that the sales clerks at these outlet stores don't have a Ph.D. in digital telephony or electrical engineering. I was not angry, nor trying to blame her individually for all of the problems we encountered. Getting a smartphone manufactured in South Korea for the US market to work in Argentina is challenging enough. Given all the difficulties I had last month in Europe, I know it is not limited to Latin America.)
Either way, I told her, if we can't get my phone working, I would like my 30 pesos refunded and promised she would never see me again.
Her response was classic. She would rather not see me again because I was delighted with the Movistar service, than not see me again because we were unable to get it working. She offered to contact the main office to figure out what was going on, and said that I should come back in an hour or two. She did not want to lose my business, nor have me go to one of her two main competitors. Now that's customer service!
Tuesday afternoon, I return. She had now been instructed on how to do some basic problem determination. We put my new SIM chip into a test phone, and confirmed it was not my phone having problems; the chip did not work in the test phone either. She called the main office, and they were able to activate the chip in the test phone, and then she transferred the chip back to my phone. I asked her to please call my new phone number to confirm it was now working, and I was able to send a quick text message to confirm that was also working. The *444 indicated that my balance was now down to 19.29 pesos. Apparently, it cost me 71 centavos to receive her phone call.
(Just as we were wrapping up, a young man walks in with his phone wanting a SIM chip. None of the Movistar staff spoke English, he did not speak Spanish, but luckily I speak both fluently and was able to translate.
First, we confirmed his phone was still locked, and that he would need to contact his AT&T provider to get an unlock code. He should then come back with the unlock code and his passport to then buy the chip. He didn't understand why Movistar needed his passport for a pre-paid plan, so I had to explain to him at length Argentinian law, the Denied Parties List, the ongoing war against terror and drug trafficking, and how he would have to agree to their Terms and Conditions to use their service, even if there is no ongoing monthly service contract.
He thanked me, promised to return with both his unlock code and passport, and told me my English was "quite good"!)
The next step was to activate my data plan. For this, I would need to buy additional credit. Scratch cards to add credit to your pre-paid phone, referred to locally as "Tarjeta de Recarga", come in 20 and 30-peso denominations, but are not sold at the Movistar outlet. Instead, the young lady told me to get one at any kiosk or corner convenience store.
As it turns out, not every convenience store offers these cards for Movistar, but after a few blocks, I was able to find one that did. The process is simple: call *444, follow the Spanish-language prompts, scratch off the back of the card, and enter the 16-digit code. I bought a 20-peso card (about $4.50 USD), followed the procedure, and got my confirmation text, indicating that I qualified for 10 extra pesos as a gift for being a new customer, so my new balance was now $49.29 pesos. Woo-hoo!
Now that my phone was armed with enough credit, all I had to do was send an SMS text message containing the word "Datos" to the Movistar phone number 2345. A text message response indicated my data plan was now active. I will have to do this every other day, as the plan is 1GB per 2-day period, but I have enough credit to last me the rest of the week here. To get my phone to detect the new status, I had to turn on data packet traffic, configure and validate the Access Point Name (APN) information, then reboot the phone.
The data plan service is based on the General Packet Radio Service [GPRS] protocol. GPRS is a best-effort service, resulting in variable throughput and latency that depends on the number of other users sharing the service concurrently. Speeds are comparable to dial-up rates, 56 to 114 Kbps.
For those of us spoiled on T-Mobile's 4G speeds in the USA, GPRS is terribly slow. But that's OK. I doubt I will go over the 1GB limit. Overall, I am quite pleased with my success. My phone is fully functional for the week, and all for less than the cost of a single glass of Malbec in the Hilton lobby bar!
This week, I will be in Las Vegas for the 30th annual [Data Center Conference]. For those on Twitter, follow the conference on hashtag #GartnerDC, and follow me at [@az990tony]. IBM is a Global Partner and Platinum Sponsor for this event. Here is a recap of some of the Monday morning keynote sessions:
Welcome and Introduction
Monday morning kicked off with a welcome introduction from the conference coordinators. This is the highest attendance for this conference in its 30-year history, with 60 percent of attendees here for their first time, and 18 percent having attended only once before. This is the fourth time I am attending. Half of the attendees represent corporations with 20,000 employees or more, the other half smaller companies and government agencies. The top five industries represented are financial services, public sector, healthcare, manufacturing, and energy.
This conference uses a clever "interactive polling" where hand-held devices can be used to select choices, and results of over 800 voters are presented immediately on the big screen.
For IT budgets, 42 percent plan to increase next year, 32 percent flat, and 26 percent lower, which are similar to the numbers last year. Of nine different IT challenges, the top three were managing storage growth, power/cooling issues, and adopting a Cloud strategy.
Top 10 Trends and how they will impact Data Center IT
The analyst presented top 10 business, technology and societal trends that will impact IT. He added a last-minute eleventh issue that he felt will impact everyone in 2012:
Consumerization and the Tablet. Back in 1997, a GB of flash memory cost $7,992 US dollars, and today that same GB costs only 25 cents. Employees are bringing their own devices to the workplace, and expecting IT support.
Infinite Data Center. You may never have to expand your floorspace again. Improvements in server and storage density can allow you to continually upgrade in place.
Energy Management. Data centers consume 100x more energy than the offices they support. The cost of energy is on par with IT equipment. Energy management is becoming an enterprise-wide discipline. A key performance indicator (KPI) can be "compute per kW" or "compute per square foot".
Context Awareness. There are hundreds of thousands of apps for Android-based smart phones and iPhones. Context awareness allows an app to help business travelers in airports know what restaurants are nearby, their flight status, and alternate flights available, based entirely on their location.
Hybrid Clouds. By 2013, over 60 percent of cloud adoption will be to redeploy existing apps like email. Some 80 percent of cloud initiatives will be private or hybrid configurations. Customers want "good enough" technology, and thus Cloud will be mostly an augmentation strategy.
Fabric Computing. The opposite of fully-integrated stacks is the notion of having compute, memory and storage joined together via an interconnect fabric with software to manage the entire environment.
IT Complexity. Robert Glass's Law states that for every 25 percent increase in functionality, there is a 100 percent increase in complexity; a back-of-envelope sketch of what that compounding implies appears after this list. See Roger Sessions' whitepaper [The IT Complexity Crisis: Danger and Opportunity] for more on this.
Patterns and Analytics. Big data and business analytics is a key platform. This is expected to grow 60 percent CAGR.
Impact of Virtualization. Virtualizing your environment should be considered a continuous process, not a one-time project. Many companies are running x86 servers at less than 55 percent utilization, which the speaker considers under-utilized. Virtual Desktop Infrastructure (VDI) is a trade-off; it may cost more but have other business benefits to consider. The problem is that many IT shops are organized vertically (a server team, storage team, network team) but problems surface horizontally, and there is no "ownership" for the resolution. Some use "tiger teams" to address this. Companies should reward lateral thinking.
Social Media. Of the communications on cell phones by college students, 98.4 percent are text messages, and only 1.6 percent voice phone calls. People search Google for "what was", but they search Twitter for "what is". Most of the growth on Twitter is in the 39-52 year-old demographic. The analyst felt that if your company is blocking or restricting access to Facebook, Twitter, YouTube or other social networking sites, then shame on you. I agree!
Flooding in Thailand. Over two million square feet of HDD production space were flooded, and this will impact HDD prices for 2012. Already, a 2TB drive that was selling for $79 at local store is now selling for $190.
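On the IT Complexity item above, here is the promised back-of-envelope sketch. Treating Glass's Law as "complexity doubles for every 25 percent gain in functionality" and compounding it is my own simplifying assumption, not something the analyst presented.

```python
import math

# If complexity doubles (x2) for every 25 percent (x1.25) gain in functionality,
# complexity grows roughly as functionality ** (ln 2 / ln 1.25), about the 3.1 power.
exponent = math.log(2) / math.log(1.25)
for growth in (1.25, 1.5, 2.0):                    # functionality multipliers
    print(f"{growth:.2f}x functionality -> about {growth ** exponent:.1f}x complexity")
```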
How To Get Your CFO's Support For Strategy and Funding
In the first of a series of "mastermind interviews", the analyst interviewed their own CFO, Chris Lafond. Ultimately, it is about business results. They have grown 15-20 percent annually, from $250 million in annual revenue in 2003 to $1.3 billion US dollars in 2011, with 4600 employees doing business in 85 countries. The company is focused on three business areas: Research, Consulting, and Events like this one. Chris does not approve 3-5 year projects, and instead requests that projects be broken up into year-long phases. ROI can be very misleading, and he asks instead for benefits and contributions to initiatives.
It is important to keep the horse in front of the cart. Accounting departments should not drive business decisions. For example, companies should not move to the public cloud just so that the accounting department can shift from CAPex to OPex. Try to depreciate as soon as possible. Likewise, green technologies and social responsibility are factors, but not drivers of business decisions. Acquisitions are a natural evolution of the market, so risk mitigation strategies should be in place in case your vendor of choice is acquired by someone you don't like.
For BC/DR planning, the analyst firm has a single data center approach, but Chris indicated that IT is looking to expand this. Their single data center for one part of their business was in Florida, and the other in Massachusetts, and both were impacted by hurricanes or earthquakes recently.
The "lightning round" asked Chris his thoughts, either thumbs up, thumbs down, or neutral, on single ideas or concepts. I liked this part of the interview!
Chargeback? Thumbs down. He doesn't feel you should have internal fighting over charge rates. He prefers showback instead.
BYO Device with stipend? Thumbs down, but inevitable. Giving people a chunk of money to buy their own laptop, smart phone or tablet of choice may wreak havoc on the IT department for support and service.
Telepresence? Thumbs down. Cool, but very expensive. I don't think people are prepared to exploit the benefits of this.
Corporate apps on public "app stores"? Thumbs down. Concerns over security and integration are the main issue.
Access to Social Networks? Thumbs up. This is how employees communicate and collaborate. Don't stifle them doing the right things just because you are afraid they might waste 20 minutes on Facebook per day.
Your IT budget? It's up slightly 1-5 percent for 2012.
Cloud? Promising, some challenges related to integration and security.
Chris finished up with a story about an application team that indicated that they would need to make 100 customizations to an off-the-shelf general ledger financial application. Chris and the other executives asked to be presented each and every customization, and he was able to eliminate most of them.
Positive comments I heard from the audience were that these keynotes had real "meat" to them, and were not just full of the cliches and platitudes common to keynote sessions. I would have to agree.
Continuing my post-week coverage of the [Data Center 2010 conference], Thursday morning had some interesting sessions for those that did not leave town last night.
Interactive Session Results
In addition to the [Profile of Data Center 2010] that identifies the demographics of this year's registrants, the morning started with highlights of the interactive polls during the week.
External or Heterogeneous Storage Virtualization
The analyst presented his views on the overall External/Heterogeneous Storage Virtualization marketplace. He started with the key selling points.
Avoid vendor lock-in. Unlike the IBM SAN Volume Controller, many of the other storage virtualization products result in vendor lock-in.
Leverage existing back-end capacity. Limited to what back-end storage devices are supported.
Simplify and unify management of storage. Yes, mostly.
Lower storage costs. Unlike the IBM SAN Volume Controller, many using other storage virtualization discover an increase in total storage costs.
Migration tools. Yes, as advertised.
Consolidation/Transition. Yes, over time.
Better functionality. Potentially.
Shortly after several vendors started selling external/heterogeneous storage virtualization solutions, either as software or as pre-installed appliances, the major storage vendors that were caught with their pants down immediately started calling their internal features "storage virtualization" as well, to buy some time and increase confusion.
While the analyst agreed that storage virtualization simplifies the view of storage from the host server side, it can complicate the management of storage on the storage end. This often comes up at the Tucson Briefing Center. I explain this as the difference between manual and automatic transmission cars. My father was a car mechanic, and since he is the sole driver and sole mechanic, he prefers manual transmission cars, which are easier to work on. However, rental car companies, such as Hertz or Avis, prefer automatic transmission cars. This might require more skills on the part of their mechanics, but greatly simplifies the experience for those driving.
The analyst offered his views on specific use cases:
Data Migration. The analyst feels that external virtualization serves as one of the best tools for data migration. But what about tech refresh of the storage virtualization devices themselves? Unlike IBM SAN Volume Controller, which allows non-disruptive upgrades of the nodes themselves, some of the other solutions might make such upgrades difficult.
Consolidation/Transition. External virtualization can also be helpful, depending on how aggressively the consolidation/transition schedule is pursued.
Improved Functionality/Usability. IBM SAN Volume Controller is a good example, an unexpected benefit. Features like thin provisioning, automated storage tiering, and so on, can be added to existing storage equipment.
The analyst mentioned that there were different types of solutions. The first category were those that support both internal storage and external storage virtualization, like the HDS USP-V or IBM Storwize V7000. He indicated that roughly 40 percent of HDS USP-V are licensed for virtualization. The second category were those that support external virtualization only, such as IBM SAN Volume Controller, HP Lefthand and SVSP, and so on. The third category were software-only Virtual Guest images that could provide storage virtualization capabilities.
The analyst mentioned EMC's failed product Invista, which sold fewer than 500 units over the past five years. The low penetration for external virtualization, estimated between 2-5 percent, could be explained by the bad taste that product left with everyone considering their options. However, the analyst predicts that by 2015, external virtualization will reach double-digit marketshare.
Having a feel for the demographics of the registrants, and specific interactive polling in each meeting, provides a great view on who is interested in what topic, and some insight into their fears and motivations.
Last week's earthquake in Haiti reminds us all how fragile systems can be. Part of a complete Information Infrastructure is Information Security. Back in 2006, IBM [acquired Internet Security Systems]. This week, IBM announces two sets of ISS Data Security Services. These services can include assessments of your current environment, running workshops to help gather requirements, helping design security policies, and even following through with implementation.
Endpoint Data Protection
Here "endpoint" refers to laptops, desktops, PDAs and smart phones. Not surprisingly, more and more mobile employees are relying on data stored on these endpoint devices, and they need to be protected and secure. [Endpoint Data Protection services] includings software, consulting and implementation of a solution that fits your environment.
Enterprise Content Protection
Here "enterprise content" refers to data that is stored centrally, such as a data center, and accessed over one or more networks. [Enterprise Content Protection services] will evaluate the data that is most sensitive, determine the various formats, identify risks, and provide guidance on how best to protect. Software is available to identify network exits and leakage points.
Both of these services include implementation of help desk support as well. To learn more, check out the ISS [Virtual Briefing Center].
This week, Tuesday, Wednesday and Thursday, I am at the IBM Dynamic Infrastructure Executive Summit at the beautiful Fairmont Resort in Scottsdale, Arizona. This is a mix of indoor and outdoor meetings, one-on-ones with IBM executives, and main-tent sessions.
The Solutions Showcase will cover the following:
As the bar for performance gets higher and the need to manage, store and analyze massive amounts of information escalates, systems must scale to meet the needs of the business. See the latest server and storage technology innovations, including POWER7, eX5, XIV, ProtecTIER, SONAS, and System z Solution Editions.
Smarter Data Centers
Today’s data centers are under extreme power and cooling pressures and space constraints. How can you get more out of your existing facility, while planning for future requirements? IBM energy efficiency consultants will tell you how you can reduce both CAPEX and OPEX costs and plan for future growth with consolidation and virtualization, energy efficient (energy star) equipment and modular data center solutions. Be sure to check out the IBM Portable Modular Data Center (PMDC) that fits in a standard shipping crate!
IBM’s Cloud Computing solutions provide you with flexible, dynamic, secure and cost-efficient delivery choices: from pay-per-use (by the hour, week or year) at IBM cloud centers around the world, to conditioning your infrastructure to build your own private cloud, to out-of-the-box cloud solutions that are quick and easy to deploy. Which workloads are the best fit for cloud computing? How do you decide which cloud computing is right for your organization? Cloud experts will talk about the options, give you recommendations based on your business objectives and help you get started.
Many people have asked me if there was any logic with the IBM naming convention of IBM Systems branded servers. Here's your quick and easy cheat sheet:
System x -- "x" for cross-platform architecture. Technologies from our mainframe and UNIX servers were brought into chips that sit next to the Intel or AMD processors to provide a more reliable x86 server experience. For example, some models have a POWER processor-based Remote Supervisor Adapter (RSA).
System p -- "p" for POWER architecture.
System z -- "z" for Zero-downtime, zero-exposures. Our lawyers prefer "near-zero", but this is about as close as you get to ["six-nines" availability] in our industry, with the highest level of security and encryption, no other vendor comes close, so you get the idea.
But what about the "i" for System i? Officially, it stands for "Integrated" in that it could integrate different applications running on different operating systems onto a [COMMON] platform. Options were available to insert Intel-based processor cards that ran Windows, or attach special cables that allowed separate System x servers running Windows to attach to a System i. Both allowed Windows applications to share the internal LAN and SAN inside the System i machine. Later, IBM allowed [AIX on System i] and [Linux on Power] operating systems to run as well.
From a storage perspective, we often joked that the "i" stood for "island", as most System i machines used internal disk, or attached externally to only a few selected models of disk from IBM and EMC that had special support for i5/OS using a special, non-standard 520-byte disk block size. This meant only our popular IBM System Storage DS6000 and DS8000 series disk systems were available. This block size requirement only applies to disk. For tape, i5/OS supports both IBM TS1120 and LTO tape systems. For the most part, System i machines stood separate from the mainframe and the rest of the Linux, UNIX and Windows distributed servers on the data center floor.
Often, when I am talking to customers, they ask when will product xyz be supported on System z or System i? I explained that IBM's strategy is not to make all storage devices connect via ESCON/FICON or support non-standard block sizes, but rather to get the servers to use standard 512-byte block size, Fibre Channel and other standard protocols. (The old adage applies: If you can't get Mohamed to move to the mountain, get the mountain to move to Mohamed).
On the System z mainframe, we are 60 percent there, allowing three of the five operating systems (z/VM, z/VSE and Linux) to access FCP-based disk and tape devices. (Four out of six if you include [OpenSolaris for the mainframe].) But what about System i? As the characters on the popular television show [LOST] would say: It's time to get off the island!
Last week, IBM announced the new [i5/OS V6R1 operating system] with features that will greatly improve the use of external storage on this platform. Check this out:
POWER6-based System i 570 model server
Our latest, most powerful POWER processor brought to the System i platform. The 570 model will be the first in the System i family of servers to make use of new processing technology, using up to 16 (sixteen!) POWER6 processors (running at 4.7GHz) in each machine. The advantage of the new processors is the increased commercial processing workload (CPW) rating, 31 percent greater than the POWER5+ version and 72 percent greater than the POWER5 version. CPW is the "MIPS" or "TeraFlops" rating for comparing System i servers. Here is the [Announcement Letter].
Fibre Channel Adapter for System i hardware
That's right, these are [Smart IOAs], so an I/O Processor (IOP) is no longer required! You can even boot the Initial Program Load (IPL) directly from SAN-attached tape. This brings System i to the 21st century for Business Continuity options.
Virtual I/O Server (VIOS)
[Virtual I/O Server] has been around for System p machines, but is now available on System i as well. This allows multiple logical partitions (LPARs) to access resources like Ethernet cards and FCP host bus adapters. In the case of storage, the VIOS handles the 520-byte to 512-byte conversion, so that i5/OS systems can now read and write to standard FCP devices like the IBM System Storage DS4800 and DS4700 disk systems.
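To make the 520-to-512 conversion concrete, here is a simplified, hypothetical Python sketch: each 520-byte i5/OS-style sector is treated as 8 metadata bytes plus 512 data bytes, with the metadata packed separately so the payload fits standard 512-byte sectors. This is only an illustration of the idea, not the actual VIOS algorithm.

```python
# Illustrative only: split 520-byte i5/OS-style sectors into 512 data bytes
# plus 8 metadata bytes packed separately. The real VIOS conversion is more
# sophisticated; this only shows why standard 512-byte devices can be used.
SECTOR_520 = 520
DATA_512 = 512
HEADER = 8   # assumption for this sketch: 8 metadata bytes lead each sector

def split_sectors(raw: bytes):
    """Yield (metadata, data) pairs from a stream of 520-byte sectors."""
    assert len(raw) % SECTOR_520 == 0
    for off in range(0, len(raw), SECTOR_520):
        sector = raw[off:off + SECTOR_520]
        yield sector[:HEADER], sector[HEADER:]

def to_512_layout(raw: bytes):
    """Pack payloads into 512-byte sectors; collect metadata separately."""
    metadata, data = bytearray(), bytearray()
    for meta, payload in split_sectors(raw):
        metadata += meta
        data += payload
    return bytes(data), bytes(metadata)

one_sector = bytes(HEADER) + bytes(DATA_512)   # one fake 520-byte sector
data, meta = to_512_layout(one_sector)
print(len(data), len(meta))                    # 512 8
```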
IBM System Storage DS4000 series
Initially, we have certified DS4700 and DS4800 disk systems to work with i5/OS, but more devices are in plan. This means that you can now share your DS4700 between i5/OS and your other Linux, UNIX and Windows servers, take advantage of a mix of FC and SATA disk capacities, RAID6 protection, and so on.
To call [IBM PowerVM] the "VMware for the POWER architecture" would not do it quite justice. In combination with VIOS, IBM PowerVM is able to run a variety of AIX, Linux and i5/OS guest images. The "Live Partition Mobility" feature allows you to easily move guest images from one system to another, while they are running, just like VMotion for x86 machines.
And while we are on the topic of x86, PowerVM is also able to present a Linux-x86 emulation base to run x86-compiled applications. While many Linux applications could be re-compiled from source code for the POWER architecture "as is", others required perhaps 1-2 percent modification to port them over, and that was too much for some software development houses. Now, we can run most x86-compiled Linux application binaries in their original form on POWER architecture servers.
BladeCenter JS22 Express
The POWER6-based [JS22 Express blade] can run i5/OS, taking advantage of PowerVM and VIOS to access all of the BladeCenter resources. The BladeCenter lets you mix and match POWER and x86-based blades in the same chassis, providing the ultimate in flexibility.
I can't believe we got snow this week on Valentine's Day! It didn't last long on the ground here in Tucson, but there are still some white caps in our mountains. For those of you "trapped" by snow, or too much work, here are two upcoming events you can attend from your desk and computer!
IBM Oracle Virtual University 2012
Please join us for the fourth annual IBM Oracle Virtual University that runs "live" for 24 hours, then continues 'on-demand' replay through the remainder of 2012.
From: Tuesday, February 21, 6:00 am US Eastern Time EST (6:00 pm China Time)
To: Wednesday, February 22, 6:00 am EST
This is a great educational event for IBM and Business Partner sales & technical teams who sell IBM Oracle solutions or have Oracle solutions installed in their account. It is for anyone who is new to or interested in the IBM Oracle Alliance as well as experienced sales & technical people who need all the latest on the IBM/Oracle co-opetition relationship for 2012 and beyond.
This VIRTUAL on-line event will cover key topics around the IBM Oracle Alliance. I am one of the speakers and will cover IBM System Storage offerings as they relate to Oracle software.
This is a chance for sellers to hear an update on what's new, unique and available to sell in 2012. The goal of this session is to help enable you to sell more IBM products and services with Oracle solutions in 2012! Learn where to go for help to better understand these solutions, close more deals and reach your targets.
Even through economic challenges, storage requirements have continued to grow along with the information explosion.
Join us for this informative webcast and hear from Jon Toigo, CEO and Managing Principal of Toigo Partners, as he discusses six cutting-edge storage technologies that are ready for prime time and can help transform your data center.
Date: Tuesday, February 28
Time: 1:00 pm EST, 12:00 pm CST, 10:00 am PST
The featured speaker is fellow blogger Jon Toigo, CEO and Managing Principal, Toigo Partners, an outspoken technology consumer advocate and vendor watchdog whose articles, columns, and blog posts on [DrunkenData.com] are enjoyed by over a million readers per month.
It's Tuesday, and you know what that means... IBM Announcements!
IBM System Storage ProtecTIER
Today, IBM refreshed its IBM System Storage ProtecTIER data deduplication family with new hardware and software. On the hardware side, the [TS7650G gateway] now has 32 cores and 64GB RAM. The [TS7650 Appliance] now has 24 cores and 64GB of RAM, and the [TS7610 Appliance Express] has 4 cores and up to 16GB of RAM.
On the software side, all of these now support Symantec's proprietary "OpenStorage" OST API. This applies across the board, from the [Enterprise Edition], [Appliance Edition], and the [Entry Edition]. For those using Symantec NetBackup as their backup software, the OST API can provide advantages over the standard VTL interface.
IBM Systems Director Storage Control
The second announcement has an interesting twist. I could file this in my "I Told You So" folder. Officially, it's called the [Cassandra Complex], where you accurately predict how something will turn out, but are unable to convince anyone else of what the future holds.
About ten years ago, I was asked to be lead architect of a new product to be called IBM TotalStorage Productivity Center, which was later renamed to IBM Tivoli Storage Productivity Center. This would combine three projects:
Tivoli Storage Resource Manager (TSRM)
Tivoli SAN Manager (TSANM)
Multiple Device Manager (MDM)
The first two were based on Tivoli's internal GUI platform, and the MDM was a plug-in for IBM Systems Director. I argued that administrators would want everything on a single pane of glass, and that we should bring all the components under a common GUI platform, such as IBM Systems Director. Unfortunately, management did not agree with me on that, and preferred instead to leave each interface alone to minimize development effort. The only "unification" was to give them all similar sounding names, four components packaged as a single product:
Productivity Center for Data (formerly TSRM)
Productivity Center for Fabric (formerly TSANM)
Productivity Center for Disk (formerly MDM)
Productivity Center for Replication (formerly MDM)
While this management decision certainly allowed version 1 to hit the market sooner, this was not a good "first impression" of the product for many of our clients.
In 2002, IBM acquired Trellisoft, Inc., which replaced the internally-developed TSRM with a much better interface, but again, this was a different GUI than the other components. A "launcher" was created that would launch the various disparate interfaces for each component for Version 2. At this point, we had different development teams scattered in five locations, with the first two components being developed by the Tivoli software team, and the other two components being developed by the System Storage hardware team.
Often times, when a technical lead architect and management do not agree, things do not end well. The lead architect has to leave the product, and management is forced to take alternative actions to keep the product going. In my case, management considered the idea of a common GUI as an expensive "nice-to-have" luxury we could not afford, but I considered this a "must-have". I moved on to a new job within IBM, and management, unable to continue without my leadership, gave up and handed the entire project over to the Tivoli Software team.
The Tivoli Software team took a whiff at the pile of code and agreed that it stunk. Dusting off my original design documents, they pretty much discarded most of the code and re-wrote much from scratch, with a common database, common app server, and common GUI platform. Unfortunately, Productivity Center for Replication was held up waiting for some hardware prerequisites, but the other three components would be packaged together as "Productivity Center v3 - Standard Edition" and was a big improvement over the prior versions.
In Version 4, TotalStorage Productivity Center was renamed to Tivoli Storage Productivity Center, and the Replication component was brought into the mix. A scaled-down version packaged as Productivity Center "Basic Edition" was made available as a hardware appliance named "System Storage Productivity Center" or SSPC. The idea was to provide a pre-installed 1U-high hardware console that had the basic functions of Productivity Center, with the option to upgrade to the full Tivoli Storage Productivity Center with just license keys.
So, now, years later, management recognizes that a common GUI platform is more than just a "nice-to-have". IBM now support three very specific use cases:
1. Administration for a single product
For small clients who might have only a single IBM product, IBM is now focused on making the GUI browser-based, specifically to work with the Mozilla Firefox browser, but any similar browser should work as well. The new IBM Storwize V7000 GUI is a good example of this. In this case, the browser serves as the common GUI platform.
2. Administration for both servers and storage devices
For mid-sized companies that have administrators managing both servers and storage, IBM announced this month the new [IBM Systems Director Storage Control v4.2.1] plug-in, which provides Tivoli Storage Productivity Center "Basic Edition" support. This allows admins already familiar with IBM Systems Director for managing their servers to also manage basic storage functions. This is the "I Told You So" moment: connecting server and storage administration under the IBM Systems Director management platform makes a lot of sense now, just as it did when I came up with the idea 10 years ago! Hmmmm?
3. Administration for just the storage environment
For larger companies big enough to have separate server and storage admin teams, IBM continues to offer the full Tivoli Storage Productivity Center product for the storage admins. The most recent release enhanced the support for IBM DS8000, SVC, Storwize V7000 and XIV storage systems.
Today, analysts consider IBM's [Tivoli Storage Productivity Center] one of the leading products in its category. I am glad my original vision has finally come to life, even though it took a while longer than I expected.
To learn more about IBM storage hardware, software or services, see the updated [IBM System Storage] landing page.
For those who missed it, IBM announced last Tuesday encryption capability for the TS1120 drive, our enterprise tape drive that reads and writes 3592 cartridges. Do you need special cartridges for this? No! Use the same ones you have already been using!
In his blog post, [The Lure of Kit-Cars], fellow blogger Chuck Hollis (EMC) uses an excellent analogy delineating the differences between kit-cars you build from parts, versus fully-integrated systems that you can drive off the car dealership showroom lot. The analogy holds relatively well, as IT departments can also build their infrastructure from parts, or you can get fully-integrated systems from a variety of vendors.
Is this what your data center looks like?
Certainly, this debate is not new. In my now infamous 2007 post [Supermarkets and Specialty Shops], I explained that there were clients that preferred to get their infrastructure from a single IT supermarket, like IBM or HP, while others were lured into thinking that buying separate parts from butchers, bakers and candlestick makers and other specialty shops was somehow a better idea.
Chuck correctly explains that in the early years of the automobile industry, before major car manufacturers had mass-production assembly lines, putting a car together from parts was the only way cars were made. Today, only the few most avid enthusiasts build cars this way. The majority get cars from a single seller and drive away. In my post [Resolving the Identity Crisis], I postulated that EMC appeared to be trying to shed its "disk-only specialty shop" image and move toward being more like IBM. Not quite a full IT Supermarket, but perhaps more like a [Trader Joe's] premium-priced retailer.
(If you find that EMC's focus on integrated systems appears to be a 180-degree about-face from their historical focus on selling individual best-of-breed products, see my previous discussion of Chuck's contradictions in my blog post: [Is Storage the Next Confusopoly].)
While companies like EMC might be making this transition, there is a lot of resistance and inertia from the customer marketplace. I agree with Chuck, companies should not be building kit-cars or IT infrastructures from parts, certainly not from parts sold from different vendors. In my post [Talking about Solutions not Products], I explained how difficult it was to change behavior. CIOs, IT directors and managers need to think differently about their infrastructure. Let's take a quick look at some choices:
Following Chuck's argument, it makes no sense to build a "kit-car" combining Oracle/Sun servers with EMC storage. Oracle would argue it makes more sense to run on integrated systems, business logic on their "Exalogic" system, and database processing on their "Exadata". Benchmark after benchmark, however, IBM is able to demonstrate that Oracle applications and databases run faster on IBM systems. Customers that want to run Oracle applications can run either on a full Oracle stack, or a full IBM stack, and both do better than a kit-car including EMC parts.
HP has been working hard to keep up with IBM in this area. With their partnership with Microsoft, and acquisitions of EDS, 3Com and 3PAR, they can certainly make a case for getting a full HP stack rather than a kit-car mixing HP servers with EMC disk storage. The problem is that HP is focused on a converged infrastructure for private cloud computing, but Microsoft is focused on Azure and public cloud computing. It will be interesting when these two big companies sort this out. Definitely watch this space.
If you squint your eyes and focus on the part of the world that only has x86 machines, then Dell can be seen as an IT supermarket. In my post about [Entry-Level iSCSI Offerings], I discuss how Dell's acquisition of EqualLogic was a signal that it was trying to get away from selling EMC specialty shop products, and building up its own set of offerings internally.
Cisco is new on the server scene, but has already made quite a splash. Here, I have to agree with Chuck's logic: the only time it makes sense to buy EMC disk storage at all is when it is part of an integrated "V-block". This is not really an IT supermarket situation, instead you park your car at the "Acadia Mini-Mall" and get what you need from Trader Joe's, Cisco UCS, and VMware stores.
But wait, if what you want is running VMware on Cisco servers, you might be better off with IBM System Storage N series or NetApp storage. In his blog post about [Enhanced Secure Multi-Tenancy], fellow Blogger Val Bercovici (NetApp) provides a convincing argument of why Cisco and VMware run better on an "N-block" rather than a "V-block". IBM N series provides A-SIS deduplication, and IBM Real-time Compression can provide additional capacity and performance improvements. That might be true, but whether you get your storage from EMC, NetApp or IBM, to me, you are still working with three different vendors in any case.
Of course, following Chuck's logic, it makes more sense for people with IBM servers, whether they be mainframes, POWER systems or x86 machines, to integrate these with IBM storage, IBM software and IBM services. IBM is the leading reseller of VMware, but also has a lot of business with Microsoft Hyper-V, Citrix Xen, Linux KVM, PowerVM, PR/SM and z/VM. While IBM has market leading servers, disk and tape systems, to compete for those RFP bids that just ask for one component or another, it prefers to sell fully-integrated systems, which IBM has been doing successfully since the 1950s.
Back in 2007, I mentioned how IBM's fully-integrated InfoSphere Balanced Warehouse [Trounced HP and Sun]. For business analytics, IBM offers the fully-integrated [IBM Smart Analytics Systems]. Today, IBM expanded its line of fully-integrated private cloud service delivery platforms with the announcement of the [IBM CloudBurst on Power Systems], which does for POWER7 what the IBM CloudBurst for System x, Oracle Exalogic, or Acadia's V-block do for x86.
IBM estimates that private clouds built on Power systems can be up to 70 percent less expensive than stand-alone x86 servers.
Before he earned his PhD in Mechanical Engineering, my father was a car mechanic. I spent much of my teenage years covered in grease, helping my father assemble cars, lift engines, and rebuild carburetors. This was certainly good father-son time, and I did learn something in the process. Like the automobile industry, the IT industry has matured, and it makes no financial sense to build your own IT infrastructure from parts from different vendors.
For a test drive of the industry's leading integrated IT systems, see your IBM sales rep or IBM Business Partner.
Well, I am off on a much-needed vacation. For my American readers, this weekend represents our "4th of July" Independence Day holiday. What better way to celebrate than to drive hundreds of miles from one side of the country to the other? In this case, from the North side down to the South side.
I am armed with two books on this subject. The first, is part of a series on American Road Trips, which details the roadside attractions to be found along the Great River Road. We will start up in Minnesota, and work our way Southward, covering a total of eight states in eight days along the Mississippi River.
The second book is Alton Brown's "Feasting on Asphalt, the River Run". This book describes Alton's ride Northward up the Mississippi river, detailing the restaurants and foods he enjoyed, so I will have to read the chapters in reverse.
Special thanks to Roy Buol, mayor of Dubuque, Iowa, whom I [met in Scottsdale earlier this year], for the idea to come visit his fine city, considered one of the Smarter Cities in the USA, thanks to IBM technology.
I don't know if I will have internet access along the way, or have the time and/or energy to blog, tweet (@az990tony) or upload photos during the trip. We'll see.
Well it's Tuesday, which means it's time to look at recent announcements. While I was on vacation last week, IBM made a lot of storage announcements on October 23. Josh Krischer gives his summary on WikiBon [October 2007 Review]. Austin Modine of The Register went so far as to say that [IBM goes crazy with storage system updates].
IBM System Storage DS8000 series
This is "Release 3" software/microcode upgrades on our existing "Turbo" hardware.
IBM FlashCopy SE -- Here "SE" stands for Space Efficient. Rather than allocating a full 100% of the space for the FlashCopy destination, you can set aside just a fraction, and this will hold all the changed blocks, similar to what IBM already offers on the DS4000 series. (See the sketch after this list for the general copy-on-write idea.)
Dynamic Volume Expansion -- In the past, if you needed more space for a LUN, you had to carve out a newer one elsewhere, and then copy the data over from the old to the new, leaving the old LUN around to be re-used or left stranded. With this enhancement, you can just upgrade the LUN in place, making it bigger as needed, similar to what IBM already offers on the DS4000 series and SAN Volume Controller. This applies to CKD volumes for the System z mainframe users out there as well.
Storage Pool Striping -- striping volumes across RAID ranks to eliminate or reduce hot-spots, and provide better load balancing. Many used SAN Volume Controller in front of the DS8000 to do this, but now you can do it natively in the DS8000 itself.
z/OS Global Mirror Multiple Reader -- for System z customers, "z/OS Global Mirror" is the new name for XRC. This enhancement improves the throughput of sending updates to the remote disaster recovery location.
DS Storage Manager enhancements -- the element manager software has been enhanced, and is pre-installed on the new IBM System Storage Productivity Center, which I will talk about below.
Intermix of DS8000 machine types -- this is especially useful to allow new frames to have co-terminating warranties with the base units. In other words, as you expand your system, you can ensure that the entire chunk of iron runs out of warranty all at the same time, to simplify your decision making process to upgrade or contract for extended service.
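As promised above, here is a minimal Python sketch of the copy-on-write idea behind space-efficient point-in-time copies: only blocks overwritten after the copy is taken consume repository space. This is purely conceptual and is not the DS8000 FlashCopy SE implementation.

```python
# Toy copy-on-write snapshot: the snapshot repository only stores blocks
# that were overwritten on the source after the point-in-time copy was taken.
class SpaceEfficientSnapshot:
    def __init__(self, source: dict):
        self.source = source          # block number -> data (the live volume)
        self.repository = {}          # only changed blocks end up here

    def write(self, block: int, data: bytes):
        # Preserve the original block contents the first time it changes.
        if block not in self.repository:
            self.repository[block] = self.source.get(block)
        self.source[block] = data

    def read_snapshot(self, block: int):
        # Snapshot view: original data if the block changed, else live data.
        return self.repository.get(block, self.source.get(block))

vol = {i: b"old" for i in range(1000)}
snap = SpaceEfficientSnapshot(vol)
snap.write(7, b"new")
print(len(snap.repository))      # 1 -- only the changed block consumes space
print(snap.read_snapshot(7))     # b'old'
```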
One of the biggest complaints about IBM TotalStorage Productivity Center is that it is software that needs to be installed on its own server, and that this installation process can take a day or two. Why wait? Now you can have a hardware console that has the DS8000 Storage Manager software, SVC Admin Console software, and IBM TotalStorage Productivity Center "Basic Edition" pre-installed. Here are the key features.
Pre-installed and tested console
DS8000 R3 GUI integration
Cohabitation of SVC 4.2.1 GUI and CIMOM
Automated device discovery
Asset and capacity reporting, including tape library support
Our "Release 9" applies across the board, from N3000 to N5000 to N7000 series models, includingnew host bus adapters, and the new Data OnTAP 7.2.4 release level.
The Virtual File Manager (VFM) was announced as one of our latest [Storage Virtualization Solutions]. VFM provides a global namespace that aggregates the file systems from Linux, UNIX, and Windows file servers, as well as N series storage, into a consolidated environment.
IBM's virtual tape library (VTL) for the distributed systems platform, has been enhanced to provide:
Up to 12TB of disk cache, using 750GB SATA disk.
F05 Tape Frames installed as TS7520 base units through a 32 port fibre channel switch
Support for LTO generation 4 tape drives, both as virtual tape drives and as physical tape drives within IBM automated tape libraries attached to the TS7520. This allows you to use the encryption capabilities of LTO4.
DS3000 series now supports SATA disk, and can be attached to AIX and Linux on System p servers. This applies to the DS3200, DS3300 and DS3400 models. See the [DS3000 Announcement Letter] for more details.
Next Monday, September 1, 2008, marks my two year "blogoversary" for this blog!
I won't be blogging on Monday, of course, because that is [Labor Day] holiday here in the United States.
(From a Canadian colleague: US is not the only country who celebrates Labor Day on the first weekend in September. Canada also celebrates Labour Day on the first weekend in September. It's the only holiday (other than Christmas/New Years) where we are in sync with US. Our Thanksgiving Days are different as is your July 4 vs our July 1. But for Labour Day we are one with the Borg...)
(From an Australian colleague: each province of Australia has its own day to celebrate Labor Day, see [Australia Public Holidays])
The rest of the world celebrates Labor Day on May 1, but the USA celebrates this on the first Monday of September, which this year lands on September 1. Originally, the day was intended to be a "day off for working citizens", and IBM is kind enough to let managers and marketing personnel have the day off also. (Not that anyone is going to notice no press releases next Monday, right?)
I started this blog on September 1, 2006 as part of IBM's big ["50 Years of Disk Systems Innovation"] campaign. IBM introduced the first commercial disk system on September 13, 1956 and so the 50th anniversary was in 2006. Last year, IBM celebrated the 55th anniversary of tape systems.
Several readers have asked me why I haven't talked about recent current events, such as the Olympic Games in Beijing, or the U.S. National Conventions for the race for U.S. President. I have to remind them of one of the key precepts of IBM blogging guidelines:
8. Respect your audience. Don’t use ethnic slurs, personal insults, obscenity, or engage in any conduct that would not be acceptable in IBM’s workplace. You should also show proper consideration for others’ privacy and for topics that may be considered objectionable or inflammatory - such as politics and religion.
I made subtle references to my senator from Arizona, John McCain, in my post [ILM for my iPod], and to Barack Obama in my post [Searching for matching information]. I don't think anyone would mind that I send a "Happy Birthday!" wish to both of them. Senator McCain turns 72 years old today, and Senator Obama turned 47 years old earlier this month.
And lastly, Tucson itself [celebrates this entire month] its 233rd birthday. That's right, Tucson, the 32nd largest city of the USA, and headquarters for IBM System Storage, is older than the USA itself. While the Tucson area has been continuously inhabited by humans for over 3500 years, it officially became Tucson on August 20, 1775.
Fellow blogger Justin Thorp has opined that [blogging is like jogging]. Some days, you are just too busy to do it, and other days, you make time for it, because you know it is important. For the record, it is not my job to blog for IBM; that ended in September 2007. I continue to blog anyway because I have benefited from it, both personally and professionally. I want to thank all of you readers out there for making this blog a great success! Being named one of the top 10 blogs of the IT storage industry by Network World, earning two back-to-back Brand Impact awards from Liquid Agency, and recently reaching a "31" Technorati ranking have really helped keep me going.
So, I look forward to next month, and beginning my third year on this blog. I am sure there will be lots of surprises and announcements in the coming weeks and months that I will have plenty to write about.
Wednesday morning at the [Oracle OpenWorld 2011] conference started with another keynote session. This time, Safra Catz, CFO and President of Oracle, introduced John Chambers, CEO of Cisco.
John says Cisco is helping to "empower the customer through market transitions." This includes helping customers decide how to deploy new technology, choosing between integrated stacks and interoperable components, scaling the business with a flat IT budget, and how/when to decide on moving to the cloud.
(FTC Disclosure: IBM resells Cisco switches and directors and are considered a partner in this sense. If you are going to buy Cisco switches and directors, please consider buying them through IBM.)
The information economy is transitioning to a networked one. Access to information is not as important as access to expertise. Process and Procedures are not as important as Communities and Relationships. The old style Command-and-Control management is giving way to Collaboration. He showed a chart depicting the evolution from routed/bridged networks to packet/mobile and video. He also had a chart showing the evolution from Mainframe/Mini-computers, to Client/Server and Web, to Virtualization in the Cloud. He also indicated that Google's acquisition of Motorola was indicative of the "Death of the PC".
High Tech companies must re-invent themselves to stay relevant. Here were Cisco's five "Foundational Priorities":
Leadership in the Core. This refers to his core business of high-end Ethernet and Fibre Channel directors.
Collaboration. The original promise of networking computers together was to bring people together also. He feels that "Collaboration" will take off in the 2010's.
Data Center/Virtualization/Cloud. Cisco is now in the business of selling computers. They are now #2 in North America for x86 server sales, and #3 globally. In this regard, they are a direct competitor to both IBM and Oracle at this conference. John wants to create "borderless" networks between Private and Public clouds. He claims that they now have 8,228 Cisco UCS customers over the past 18 months. This was a slam at Oracle, which hasn't sold half that many new systems in the same time period.
Video. John indicated that every product in the Cisco family is video-enabled, from the Cius tablet, to WebEx, to TelePresence, to all of his switches and directors. In theory, the "Flip" video cam that Cisco dropped in their latest round of layoffs would have also been counted in that category. John indicates that he envisions video taking over as the predominant communication mechanism. Back in 2006, at Oracle OpenWorld, John showed a chart that indicated that people would transition from passive TV-watchers to active video producers. Here we are five years later, and while 24 hours' worth of video are uploaded to YouTube every minute, most people are still TV-watchers.
Architectures for Business Transformation. He elaborated on this to refer to issues like reliability, security, and products that are designed to work together. Business and Government leaders are focused on their business, not technology.
He gave a demo of Cisco UCS. This is a 4U collection of server blades, with up to 384GB of DRAM using 8GB DIMMs, or 192GB using much-cheaper 4GB DIMMs. There are 2 switches, each with 8 ports of 10GbE, for a total of 160 Gbps, that can carry both Ethernet and FCoE traffic. The UCS System Manager is similar to IBM's Unified Resource Manager in that it manages the entire box. A "service profile" has 40 to 50 BIOS settings that can be applied to give each x86 blade a specific personality. You can re-provision these by changing their service profile as needed.
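To make the "service profile" idea concrete, here is a toy Python sketch, my own simplification rather than the Cisco UCS Manager data model, showing how a named bundle of settings can be applied to any blade to change its personality.

```python
# Toy illustration of the "service profile" idea: a named bundle of settings
# applied to a blade gives it a specific personality. Profile names and
# setting keys below are invented for illustration only.
service_profiles = {
    "web-tier": {"boot_order": ["san", "lan"], "vlan": 110, "hyperthreading": True},
    "db-tier":  {"boot_order": ["local"],      "vlan": 220, "hyperthreading": False},
}

blades = {"blade-1": {}, "blade-2": {}}

def apply_profile(blade: str, profile: str) -> None:
    """Re-provision a blade by copying the profile's settings onto it."""
    blades[blade] = dict(service_profiles[profile])
    print(f"{blade} now runs with profile '{profile}': {blades[blade]}")

apply_profile("blade-1", "web-tier")
apply_profile("blade-1", "db-tier")   # re-provision the same blade later
```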
The next demo was really cool. They took video that involved people talking, and had it "machine transcribed" so that you can read the words being said in the video. Type in a word like "tolerances" in the search engine, and the video advances exactly to the spot where that word is uttered.
The next demo after that involved a special camera for monitoring High-Occupancy Vehicle (HOV) lanes in traffic. In an example used in London, UK, the camera can see inside the car and confirm there are enough people to justify HOV usage, and if not, scan the license plate and charge the owner of the vehicle a fine. (In a sense, "Big Data" analytics combined with Cisco's vision of ubiquitous video equals [Big Brother])
In another slam against Oracle, John actually backed up his claims with published benchmarks. He wrapped up his talk with: "If I have done my job well, then you will all leave this room a bit uncomfortable." Not surprisingly, John didn't mention either the vBlock relationship with EMC, or the FlexPod relationship with NetApp.
As time progresses, things change, sometimes for the better in the right direction, sometimes a step backwards, and sometimes just different enough to be annoying. I wrote my blog post about [A Box Full of Floppies] a week ago, and posted it on Monday. Let's take a look at how time and change impacted that one post.
The weather has warmed up here in Tucson so I started my Spring Cleaning early this year...
If there is ever a good time to brag about how beautiful the weather is here in Tucson, it would be when everyone else in the country is digging themselves out of piles of snow. When my friends on Twitter were complaining how cold it was in Scotland, Ireland, Canada, or the East Coast of the United States, I would remind them that I am wearing a tee-shirt and shorts. I played golf for a week last December!
Sadly, a few days after my post, Tucson had the coldest days of February, breaking records set back in the year 1899. Water pipes were frozen, outdoor plants have suffered, and over 14,000 homes and businesses were shut off from natural gas. The 1,400-plus employees at the IBM Tucson facility have been asked to telecommute until restroom facilities can be restored to working order.
While we should all pay more attention to [climate change], this latest chill is probably just a seasonal fluctuation thanks to [La Niña] that happens every 10-15 years.
Here is a YouTube video of an astronaut ejecting a floppy disk...
Back in 2009, YouTube decided to [stop supporting Internet Explorer 6 (IE6)] to view its videos. However, that is what most IBMers were on, and this posed a problem when I embedded a video on my blog. To get around that, my friends at Microsoft provided special "conditional HTML tags" that allowed me to suppress YouTube videos when viewed from Internet Explorer. The video shows up for those using Chrome, Opera, Firefox or other browsers, but is suppressed for IE users, and that allowed IBM employees to at least read the text.
Fortunately, last July, IBM decided to switch from IE6 over to Mozilla Firefox as the standard browser, so I thought this would no longer be an issue.
Unfortunately, my friends at YouTube have done it again. They changed the generated embed code from using "object" tags to "iframe", which messes up blogs written in various blogging systems, including Lotus Connections that I have here on developerWorks, as well as WordPress. The new method is intended to either promote the new HTML5 standard, or to piss off [iPhone users]. In any case, several readers found they could not read my entire post about floppies because the "iframe" prevented the rest of the post from being shown. I have since reverted back to the old "object" tags and re-posted for everyone's benefit.
I may have to stand up an OS/2 machine just to check out what is actually on those floppies...
For any data that you keep for long term retention, it is important that you be able to access the data in a meaningful way when you need it. IBM has identified five ways that this can be done:
Museum approach -- keep old servers, storage and applications around. In my case, I have computers that can handle 3.5-inch floppy diskettes, but no hardware to read my Zip cartridges or 5.25-inch floppies.
Emulation approach -- emulating old systems with new systems. I remember the first CD players had "tape cassette" attachments so they could be used in car stereos.
Migration approach -- migrating data and applications to new technology. This is what most businesses do. For example, if you keep archives through IBM Tivoli Storage Manager or DFSMShsm, the software will migrate data from old tapes to new tapes as part of its tape reclamation process.
Descriptive approach -- including sufficiently descriptive metadata, such as with HTML or XML tags, that would enable future rendering. (See the small sketch after this list.)
Encapsulation approach -- encapsulate the data, metadata and related application logic for future processing. While the "descriptive" approach might help display the contents of proprietary formats, the encapsulation approach would include application logic, perhaps written in Java, that could be used to actually operate built-in macros, pivot tables, or other active features of a document or database.
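As referenced above, here is a tiny, hypothetical Python sketch of the descriptive approach: the archived payload is wrapped in self-describing XML metadata so a future system knows how to render it. The element names are my own invention, not an IBM or OASIS schema.

```python
# Hypothetical example of the "descriptive approach": store enough metadata
# alongside the payload that a future system can make sense of it.
import base64
import xml.etree.ElementTree as ET

def wrap_with_metadata(payload: bytes, mime_type: str, encoding: str, created: str) -> bytes:
    record = ET.Element("archived-record")          # invented element names
    meta = ET.SubElement(record, "metadata")
    ET.SubElement(meta, "mime-type").text = mime_type
    ET.SubElement(meta, "character-encoding").text = encoding
    ET.SubElement(meta, "created").text = created
    ET.SubElement(record, "content").text = base64.b64encode(payload).decode("ascii")
    return ET.tostring(record, encoding="utf-8")

print(wrap_with_metadata(b"Hello, 2040!", "text/plain", "UTF-8", "2011-02-04").decode())
```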
IBM Research is working closely with industry standards groups, like the Organization for the Advancement of Structured Information Standards [OASIS], to help promote the use of open standards for long-term retention.
For my readers who follow American Football, enjoy the [SuperBowl]!
It's that time again. Every year, IBM hosts the "System Storage Technical University". I have been going to these since they first started in the 1990s. This time we are at the lovely [Hilton Orlando] in Orlando, Florida.
For those who want to relive past events, here are my blog posts from this event in 2010:
As was the case last year, IBM once again will run this conference alongside the [IBM System x Technical University] the same week, in the same hotel. This allows attendees to cross over to the other side to see a few sessions of the other conference. I took advantage of this last year, and plan to do so again this year as well!
For those on Twitter, you can follow my tweets at [@az990tony] or search on the hash tag #ibmtechu.
For those of us in the northern hemisphere, yesterday was this year's Winter Solstice, representing the shortest amount of daylight between sunrise and sunset. So today, I thought I would blog my thoughts on managing scarcity.
Earlier in my career, I had the pleasure to serve as "administrative assistant" to Nora Denzel for the week at a storage conference. My job was to make her look good at the conference, which if you know Nora, doesn't take much. Later, she left IBM to work at HP, and I got to hear her speak at a conference, and the one thing that I remember most was her statement that the whole point of "management" was to manage scarcity, as in not enough money in the budget, not enough people to implement change, or not enough resources to accomplish a task. (Nora, I have no idea where you are today, so if you are reading this, send me a note).
Of course, the flip-side to this is that resources that are in abundance are generally taken for granted. Priorities are focused on what is most scarce. Let's examine some of the resources involved in an IT storage environment:
Capacity - while everyone complains that they are "running out of space", the truth is that most external disk attached to Linux, UNIX, or Windows systems contains only 20-40% data. Many years ago, I visited an insurance company to talk about a new product called IBM Tivoli Storage Manager. This company had 7TB of disk on their mainframe, and another 7TB of disk scattered on various UNIX and Windows machines. In the room were TWO storage admins for the mainframe, and 45 storage admins for the distributed systems. My first question was "why so many people for the mainframe, certainly one of you could manage all of it yourself, perhaps on Wednesday afternoons?" Their response was that they acted as each other's backup, in case one goes on vacation for two weeks. My follow-up question to the rest of the audience was: "When was the last time you took two weeks vacation?" Mainframes fill their disk and tape storage comfortably at over 80-90% full of data, primarily because they have a more mature, robust set of management software, like DFSMS.
Labor - by this I mean skilled labor able to manage storage for a corporation. Some companies I have visited keep their new-hires off production systems for the first two years, working only on test or development systems until then. Of course, labor is more expensive in some countries than others. Last year, I was doing a whiteboard session on-site for a client in China, and the last dry-erase pen ran out of ink. I asked for another pen, and they instead sent someone to go re-fill it. I asked wouldn't it be cheaper just to buy another pen, and they said "No, labor is cheap, but ink is expensive." Despite this, China does complain that there is a shortage of a skilled IT labor force, so if you are looking for a job, start learning Mandarin.
Power and Cooling - Most data centers are located on raised floors, with large trunks of electrical power and huge air conditioning systems to deal with all the heat generated from each machine. I have visited the data centers of clients that are now forced to make decisions on storage based on power and cooling consumption, because the costs to upgrade their aging buildings are too high. Leading the charge is IBM, with technology advancements in chips, cards, and complete systems that use less power and generate less heat. While energy is still fairly cheap in the grand scheme of things, fears of global warming and declining oil supplies have put the costs of power and cooling in the news lately. In 1956, Hubbert predicted the US would reach peak oil supplies by 1965-1970 (it happened in 1971), and this year Simmons estimated that world-wide oil production began its decline already in 2005. Smart companies like Google have moved their server farms to places like Oregon in the Pacific Northwest for cheaper hydroelectric power.
Bandwidth - Last year IBM introduced 4Gbps Fibre Channel and FICON SAN networking gear, along with the servers and storage needed to complete the solution. 4Gbps equates to about 400 MB/sec in data throughput. By comparison, iSCSI is typically run on 1Gbps Ethernet, but has so much overhead that you only get about 80 MB/sec. Next year, we may see both 8 Gbps SAN and 10 GbE iSCSI, to provide 800 MB/sec throughputs. My experience is that the SAN is not the bottleneck; instead, people run out of bandwidth at the server or storage end first. They may not have a million dollars to buy the fastest IBM System p5 servers, or may not have enough host adapters at the storage system end.
Floorspace - I end with floorspace because it reminds me that many "shortages" are temporary or artificially created. Floorspace is only in short supply because you don't want to knock down a wall, or build a new building, to handle your additional storage requirements. In 1997, Tihamer Toth-Fejel wrote an article for the National Space Society newsletter that estimated that "... Everybody on Earth could live comfortably in the USA on only 15% of our land area, with a population density between that of Chicago and San Francisco. Using agricultural yields attained widely now, the rest of the U.S. would be sufficient to grow enough food for everyone. The rest of the planet, 93.7% of it, would be completely empty." Of course, back in 1997 the world population was only 5.9 billion, and this year it is over 6.5 billion.
This last point brings me back to the concept of food, and I am not talking about doughnuts in the conference room, or pizza while making year-end storage upgrades. I'm talking about the food you work so hard to provide for yourself and your family. The folks at Oxfam came up with a simple analogy. If 20 people sit down at your table, representing the world’s population:
3 would be served a gourmet, multi-course meal, while sitting at decorated table and a cushioned chair.
5 would eat rice and beans with a fork and sit on a simple cushion
12 would wait in line to receive a small portion of rice that they would eat with their hands while sitting on the floor.
So for those of you planning a special meal next Monday, be thankful you are one of the lucky three, and hopeful that IBM will continue to lead the IT industry to help out the other seventeen.
Well it's Tuesday again, and you know what that means? IBM Announcements!
(FTC Disclosure: This official launch also includes October 6 announcements. In any case, the usual disclaimer applies: I currently work for IBM, and this blog post can be considered a "paid celebrity endorsement" of the IBM products mentioned below.)
IBM announced various updates to its Spectrum Storage product line. Here is a quick recap.
IBM Spectrum Virtualize 7.6
Spectrum Virtualize is the new name of the "storage hypervisor" code that resides in IBM SAN Volume Controller (SVC) and Storwize family products. When you buy an SVC, you will license Spectrum Virtualize software on it. It is NOT available separately as software-only that you can install on any other hardware. There are three major improvements:
Software-based Data-at-Rest Encryption
Earlier this year, IBM delivered data-at-rest encryption for the Storwize V7000 and V7000 Unified. This week, IBM extends this support to other products that run this storage hypervisor.
Since this feature is based on the Intel processor that supports the Advanced Encryption Standard New Instructions (AES-NI), it applies only to the newer hardware: SAN Volume Controller 2145-DH8, the Storwize V7000 Gen2, FlashSystem V9000, and VersaStack converged systems that contain these. You can run Spectrum Virtualize v7.6 on older hardware models, but the encryption feature will be disabled.
Basically, by taking advantage of AES-NI commands, IBM can now offer data-at-rest encryption on any virtualized flash or disk arrays, eliminating the need for special "Self-Encrypting Drives", or SED.
The encryption keys are kept on USB memory sticks that you can either leave in the machine, or stash away in a vault or safe somewhere.
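Purely to illustrate the concept of software data-at-rest encryption with a key kept on removable media (and not IBM's Spectrum Virtualize implementation), here is a minimal Python sketch using the widely available cryptography package; the USB mount point path is hypothetical.

```python
# Conceptual sketch of data-at-rest encryption with a key kept on removable
# media. NOT the Spectrum Virtualize implementation -- just the general idea.
from pathlib import Path
from cryptography.fernet import Fernet   # AES-based symmetric encryption

KEY_FILE = Path("/media/usb-stick/encryption.key")   # hypothetical USB mount point

def load_or_create_key() -> bytes:
    if KEY_FILE.exists():
        return KEY_FILE.read_bytes()
    key = Fernet.generate_key()
    KEY_FILE.write_bytes(key)     # stash the key on the removable stick
    return key

def encrypt_block(plaintext: bytes, key: bytes) -> bytes:
    return Fernet(key).encrypt(plaintext)

def decrypt_block(ciphertext: bytes, key: bytes) -> bytes:
    return Fernet(key).decrypt(ciphertext)

if __name__ == "__main__":
    key = load_or_create_key()
    stored = encrypt_block(b"customer data at rest", key)
    print(decrypt_block(stored, key))
```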
The other improvement is distributed RAID. Distributed RAID has been hugely popular on IBM XIV products, and has since found its way into the DCS3700, DCS3860 and Elastic Storage Server models.
With this new enhancement, storage admins can select "Distributed RAID-5" or "Distributed RAID-6" as alternate choices to traditional RAID ranks.
Why use it? All the drives are now active, eliminating idle spare drives that do nothing but collect dust and cobwebs waiting for an opportunity to spin up, and that become a terrible bottleneck when they finally are used for a rebuild. Since all drives are reading and writing, the rebuild rate is an order of magnitude (5 to 10x) faster!
For those clients nervous about large 8TB drives and the number of days it would take to perform a traditional RAID rebuild, this should calm all of your fears.
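Here is a rough, back-of-the-envelope Python sketch of why spreading a rebuild across many drives helps so much; the drive capacity and per-drive rebuild rate below are illustrative assumptions, not IBM performance claims.

```python
# Simplified rebuild-time comparison: in a traditional array the single spare
# drive is the write bottleneck; in a distributed layout the rebuild writes
# are spread over many drives. All numbers here are illustrative assumptions.
DRIVE_TB = 8                    # capacity of the failed drive
REBUILD_MBPS_PER_DRIVE = 100    # assumed sustained rebuild rate per drive

def rebuild_hours(target_drives: int) -> float:
    total_mb = DRIVE_TB * 1_000_000
    return total_mb / (REBUILD_MBPS_PER_DRIVE * target_drives) / 3600

print(f"Traditional RAID (1 spare):   {rebuild_hours(1):.1f} hours")
print(f"Distributed RAID (20 drives): {rebuild_hours(20):.1f} hours")
```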
The third improvement is support for an IP-based quorum. This is one of those line-items that we have told clients was "just around the corner" and "coming soon, watch this space", and finally it is available. For clients using Stretched Cluster or HyperSwap across two buildings, best practices suggest keeping the quorum disk in a third building. This often meant having to dedicate a single 2U disk system in a closet somewhere, with expensive Fibre Channel cables connecting to the other two buildings.
To address this, IBM now allows the quorum disk to be based on Internet Protocol (the IP portion of TCP/IP), which can be any bare-metal or virtual machine that is LAN or WAN attached. The "quorum disk" is just a little Java program. This can run on any cloud service provider as well, such as IBM SoftLayer, to which both buildings have connectivity.
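To make the tie-breaker idea concrete, here is a deliberately naive Python sketch of a quorum service that lets whichever site reaches it first keep running when the two sites are partitioned. The real IBM quorum application is a Java program with far more robust semantics; this only illustrates the concept.

```python
# Naive illustration of an IP-based tie-breaker: when the two sites are
# partitioned, whichever site reaches the quorum service first "wins" and
# keeps serving I/O. Not the actual IBM quorum application.
import threading

class TieBreaker:
    def __init__(self):
        self._lock = threading.Lock()
        self._winner = None

    def request_quorum(self, site: str) -> bool:
        with self._lock:
            if self._winner is None:
                self._winner = site      # first site to ask wins the tie
            return self._winner == site

breaker = TieBreaker()
print(breaker.request_quorum("site-A"))   # True  -> keeps running
print(breaker.request_quorum("site-B"))   # False -> must stop to avoid split-brain
```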
A minor improvement worth mentioning is that the IBM "Comprestimator" tool that estimates the capacity savings of Real-time Compression is now integrated into Spectrum Virtualize v7.6 command line interface (CLI), allowing you to run the tool on demand, as needed, on any virtual volume.
IBM Spectrum Scale v4.2
IBM plans to offer all of its solutions in any of three flavors: software-only that you can deploy on your own server hardware, pre-built system appliances, and cloud services on IBM SoftLayer, IBM Cloud Managed Services or third-party cloud providers. Spectrum Scale is the software-only flavor, and Elastic Storage Server and Storwize V7000 Unified are pre-built systems based on that software.
File and Object access
IBM published a "Redbook" on how to implement OpenStack Swift and Amazon S3 interfaces to an existing Spectrum Scale deployment. IBM supported it, but it was basically Do-it-Yourself DIY implementation. This has now been resolved, with full integration of OpenStack Swift and Amazon S3 object-protocol interfaces.
(For those unfamiliar with "Object storage", think of it like valet parking for your data. Before working for IBM, I was previously employed as a valet attendant, so I feel qualified to make this analogy.
If you park your car in a 10-story high parking structure, you have to remember where you parked to go find the car again. With valet parking, you hand over the keys to the valet attendant, the car gets parked, and you get a claim stub that you then use to get your car back. In the meantime, you don't know where your car is parked, and you don't care either!
Storing files in volume-level or file-level storage is like that 10-story parking structure: you have to remember where you put it, which LUN or which sub-directory. With object storage, the system provides a "claim stub" in the form of a Uniform Resource Identifier, or URI, and simple HTTP commands like GET and POST can be used to upload and download the content.)
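To make the claim-stub idea concrete, here is a minimal Python sketch against a hypothetical S3-style HTTP endpoint; the URL and file names are placeholders, and real interfaces add authentication, but the shape of the interaction is just simple HTTP verbs against a URI:

    import requests  # simple HTTP client library

    # The URI is the "claim stub" -- you never learn which disk the data lands on.
    uri = "http://objects.example.com/mybucket/quarterly-report.pdf"

    # Hand the object over to the valet (S3-style interfaces use PUT or POST).
    with open("quarterly-report.pdf", "rb") as f:
        requests.put(uri, data=f).raise_for_status()

    # Later, present the same claim stub to get the object back.
    response = requests.get(uri)
    with open("retrieved-report.pdf", "wb") as f:
        f.write(response.content)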
Policy-driven Compression and Quality of Service (QoS)
If you want to differentiate the levels of service provided for files and objects stored in your infrastructure, look no further. A simple SQL-like language is used to set up policies that are invoked when needed.
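As an illustration, the policy rules look roughly like this; the pool and fileset names are hypothetical, and the exact clauses available for compression and QoS vary by release, so treat this as a sketch of the style rather than a working policy:

    /* place new files from the hypothetical 'analytics' fileset on flash */
    RULE 'hot-data' SET POOL 'flashpool' FOR FILESET ('analytics')

    /* migrate files not accessed in 30 days to a cheaper pool */
    RULE 'cool-down' MIGRATE FROM POOL 'flashpool' TO POOL 'nearline'
        WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30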
Hadoop Connector for File and Objects
The IBM Hadoop Connector allows Hadoop and Spark analytics applications to treat Spectrum Scale as a 100 percent compatible alternative to the Hadoop Distributed File System (HDFS). Previously, this was only available for files, but now it has been extended to include objects as well.
Advanced Graphical User Interface (GUI)
Based on the award-winning GUI that has been used for IBM XIV, SVC, Storwize and various other members of the IBM System Storage family, IBM announces an HTML5-based web-browser GUI for configuring and managing Spectrum Scale and Elastic Storage Server (ESS).
Storwize V7000 Unified
The "file modules" that run IBM Spectrum Scale will get updated to R1.6 level, which supports SMB 3.0 and NFS 4.0 protocols. SMB support will now include both internal and externally-virtualized storage. You will also be able to use Active File Management to migrate to other Spectrum Scale implementations.
IBM Spectrum Control
As the former chief architect of IBM Tivoli Storage Productivity Center v1, I have been a big fan of the advancements and evolution of Spectrum Control. IBM offers three levels. The first level is "Basic Edition", entitled at no additional charge for IBM storage hardware clients. The second level is "Standard Edition" which offers configuration, provisioning and performance monitoring. The third level is "Advanced Edition", which includes advanced storage analytics, file-level reporting, storage tiering and data placement optimization.
You can imagine my skepticism when I was told that Spectrum Control was going to be enhanced to support Spectrum Scale. What could it offer? IBM Spectrum Scale already has built-in storage tiering and data placement optimization!
It turns out that having effective "management tools" was the #1 requirement clients said they needed before implementing and deploying Spectrum Scale. Since 1998, back when it was called General Parallel File System, or GPFS, the target market was High Performance Computing (HPC) shops familiar with Command Line Interfaces (CLI).
But IBM wants to broaden the reach of IBM Spectrum Scale to financial services, health care and life sciences, government and education, and a variety of other industries. Those clients won't tolerate being limited to CLI interfaces.
For clients with multiple Spectrum Scale clusters, Spectrum Control can offer the following:
Visibility across the capacity utilization (file systems, pools, file sets, quotas) and cluster health across all Spectrum Scale clusters in the data center
Ability to specify alerts which are applied across all Spectrum Scale clusters, for things like relative or absolute free space in a file system, or inodes used, nodes going down, etc.
Understand the cross-cluster relationships established by remote cluster mounts, and seamlessly navigate between them
If external SAN storage is used, Spectrum Control shows the correlation between Spectrum Scale Network Shared Disks (NSD) and their corresponding SAN volumes, again with the ability to navigate between them; also it can provide performance monitoring for the volumes backing the NSD
Ability to monitor file capacity usage in the context of applications, by adding Spectrum Scale "file set containers" to application groups defined in Spectrum Control
Compare file system activity across Spectrum Scale clusters, with the ability to drill into file system and node performance charts
Support for object storage on Spectrum Scale, including the ability to determine which object-enabled clusters are closest to running out of free space
While the basic built-in GUI is great for smaller deployments, if you have a dozen or more Spectrum Scale clusters, or have Spectrum Scale clusters intermixed with traditional block-level and NAS storage devices, then Spectrum Control is for you!
It used to take weeks to deploy the original versions of Tivoli Storage Productivity Center, but Spectrum Control is now offered in the cloud, and you can deploy it in as little as 30 minutes.
Want to check it out? You can explore the Spectrum Control Storage Insights cloud service as a [Live Demo], or [Start your free trial]! The Spectrum Scale reporting capabilities are identical between the on-premises version of Spectrum Control and this cloud service offering.
Here's a great quote from a leading IT industry analyst:
"In multi-petabyte, multivendor installations, overall storage costs of ownership for use of IBM Spectrum Storage solutions averaged 73 percent less than EMC, and 61 percent less than Hitachi equivalents" -- Brian Jeffery, Managing Director, International Technology Group, Naples, FL
As IBM continues its transition from a hardware-oriented company founded over a century ago, manufacturing meat scales and cheese slicers, to one more focused on higher value-add software and services, the Spectrum Storage software family will play a critical role in this transformation!
I saw this as an opportunity to promote the new IBM Tivoli Storage Manager v6.1 which offers a variety of new scalability features, and continues to provide excellent economies of scale for large deployments, in my post [IBM has scalable backup solutions].
"So does TSM scale? Sure! Just add more servers. But this is not an economy of scale. Nothing gets less expensive as the capacity grows. You get a more or less linear growth of costs that is directly correlated to the growth of primary storage capacity. (Technically, it costs will jump at regular and predictable intervals, by regular and predictable and equal amounts, as you add TSM servers to the infrastructure--but on average it is a direct linear growth. Assuming you are right sized right now, if you were to double your primary storage capacity, you would double the size of the TSM infrastructure, and double your associated costs.)"
I talked about inaccurate vendor FUD in my post [The murals in restaurants], and recently, I saw StorageBod's piece, [FUDdy Waters]. So what would "economies of scale" look like? Using Scott's own words:
Without Economies of Scale
"If it costs you $5 to backup a given amount of data, it probably costs you $50 to back up 10 times that amount of data, and $500 to back up 100 times that amount of data."
With Economies of Scale
"If anybody can figure out how to get costs down to $40 for 10 times the amount of data, and $300 for 100 times the amount of data, they will have an irrefutable advantage over anybody that has not been able to leverage economies of scale."
So, let's do some simple examples. I'll focus on a backup solution just for employee workstations; each employee has 100GB of personal data to back up on their laptop or PC. We'll look at a one-person company, a ten-person company, and a hundred-person company.
Case 1: The one-person company
Here, the sole owner needs a backup solution. These are the steps she might perform:
Spend hours of time evaluating different backup products available, and make sure her operating system, file system and applications are supported
Spend hours shopping for external media, whether an external USB disk drive, optical DVD drive, or tape drive, and confirm it is supported by the selected backup software.
Purchase the backup software, external drive, and if optical or tape, blank media cartridges.
Spend time learning the product, purchase "Backup for Dummies" or similar book, and/or taking a training class.
Install and configure the software
Operate the software, or set it up to run automatically, and take the media offsite at the end of the day and bring it back each morning
Case 2: The ten-person company
I guess if each of the ten employees went off and performed all of the same steps as above, there would be no economies of scale.
Fortunately, co-workers are amazingly efficient in avoiding unnecessary work.
Rather than have all ten people evaluate backup solutions, have one person do it. If everyone runs the same or similar operating system, file systems and applications, this can be done in about the same time as the one-person case.
Ditto on the storage media. Why should 10 people go off and evaluate their own storage media? One person can do it for all ten in about the same time as it takes for one.
Purchasing the software and hardware. OK, here is where some costs may be linear, depending on your choices. Some software vendors give bulk discounts, so purchasing 10 seats of the same software could cost less than 10 times the price of one license. As for storage hardware, it might be possible to share drives and even media. Perhaps one or two storage systems can be shared by the entire team.
For a lot of backup software, most of the work is in the initial setup; then it runs automatically afterwards. That is the case for TSM. You create a "dsm.opt" file, which can list all of the include/exclude rules and other policies. Once the first person sets this up, they can share it with their co-workers.
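A hedged sketch of what such a shared options file might contain; the option keywords are standard TSM backup-archive client options, but the node name, drive letters and patterns here are purely illustrative:

    * dsm.opt -- shared client options (illustrative values only)
    * Each co-worker changes NODENAME to their own workstation's node.
    NODENAME         WORKSTATION01
    PASSWORDACCESS   GENERATE
    DOMAIN           C:
    EXCLUDE.DIR      C:\Temp
    EXCLUDE          C:\...\*.tmp
    INCLUDE          C:\Users\...\*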
Hopefully, if storage hardware is consolidated so that you have fewer drives than people, you can also have fewer people responsible for operations. For example, the first five employees share one drive managed by Joe, and the second five share a second drive managed by Sally. Only two people need to spend time taking media offsite and bringing it back.
Case 3: The hundred-person company
Again, it is possible that a hundred-person company consists of 10 departments of 10 people each, and they all follow the above approach independently, resulting in no economies of scale. But again, that is not likely.
Here, one or a few people can invest the time to evaluate backup solutions, which is certainly far less than 100 times the effort of the one-person company.
Same with storage media. With 100 employees, you can now invest in a tape library with robotic automation.
Purchase of software and hardware. Again, discounts will probably apply for large deployments. Purchasing 1 tape library for all one hundred people is less than 10 times the cost and effort of 10 departments all making independent purchases.
With a hundred employees, you may have some differences in operating system, file systems and applications. Still, this might mean two to five versions of dsm.opt, and not 10 or 100 independent configurations.
Operations is where the big savings happen. TSM uses "progressive incremental backup", so it only backs up changed data. Other backup schemes involve taking periodic full backups, which tie up the network and consume a lot of back-end resources. In head-to-head comparisons between IBM Tivoli Storage Manager and Symantec's NetBackup, IBM TSM was shown to use significantly less LAN bandwidth, less disk storage capacity, and fewer tape cartridges than NetBackup.
The savings are even greater with data deduplication. Whether using hardware, like the IBM TS7650 ProtecTIER data deduplication solution, or software, like the data deduplication capability built into IBM TSM v6.1, you can take advantage of the fact that 100 employees might have a lot of common data between them.
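The core idea behind deduplication can be sketched in a few lines of Python; fixed-size chunking and SHA-256 hashing are used here purely for illustration (real products use more sophisticated chunking, indexing and verification), and the backup file names are hypothetical:

    import hashlib

    CHUNK_SIZE = 64 * 1024   # illustrative 64 KB chunks

    def dedup_store(filenames):
        """Keep one copy of each unique chunk; return (unique, total) chunk counts."""
        store = {}           # chunk hash -> chunk data
        total = 0
        for name in filenames:
            with open(name, "rb") as f:
                while chunk := f.read(CHUNK_SIZE):
                    total += 1
                    store[hashlib.sha256(chunk).hexdigest()] = chunk
        return len(store), total

    # Many employees back up largely identical operating system and application
    # files, so the number of unique chunks is far smaller than the number scanned.
    unique, total = dedup_store(["alice.img", "bob.img", "carol.img"])
    print(f"{unique} unique chunks stored out of {total} scanned")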
So, I have demonstrated how savings through economies of scale are achieved using IBM Tivoli Storage Manager. Adding one more person in each case is cheaper than the first person. The situation is not linear as Scott suggests. But what about larger deployments? IBM TS3500 Tape Library can hold one PB of data in only 10 square feet of data center floorspace. The IBM TS7650G gateway can manage up to 1 PB of disk, holding as much as 25 PB of backup copies. IT Analysts Tony Palmer, Brian Garrett and Lauren Whitehouse from Enterprise Strategy Group tried IBM TSM v6.1 out for themselves and wrote up a ["Lab Validation"] report. Here is an excerpt:
"Backup/recovery software that embeds data reduction technology can address all three of these factors handily. IBM TSM 6.1 now has native deduplication capabilities built into its Extended Edition (EE) as a no-cost option. After data is written to the primary disk pool, a deduplication operation can be scheduled to eliminate redundancy at the sub-file level. Data deduplication, as its name implies, identifies and eliminates redundant data.
TSM 6.1 also includes features that optimize TSM scalability and manageability to meet increasingly demanding service levels resulting from relentless data growth. The move from a proprietary back-end database to IBM DB2 improves scalability, availability, and performance without adding complexity; the DB2 database is automatically maintained and managed by TSM. IBM upgraded the monitoring and reporting capabilities to near real-time and completely redesigned the dashboard that provides visibility into the system. TSM and TSM EE include these enhanced monitoring and reporting capabilities at no cost."
The majority of Fortune 1000 customers use IBM Tivoli Storage Manager, and it is the backup software that IBM uses itself in its own huge data centers, including the cloud computing facilities. In combination with IBM Tivoli FastBack for remote office/branch office (ROBO) situations, and complemented with point-in-time and disk mirroring hardware capabilities such as IBM FlashCopy, Metro Mirror, and Global Mirror, IBM Tivoli Storage Manager can be an effective, scalable part of a complete Unified Recovery Management solution.
Doug Balog, IBM VP and Business Line Executive for Storage, presented Smart Archiving. Citing research by Jon Toigo, Doug indicated that 40 percent of data on disk should be archived. Sadly, a vast majority of companies continue to use their backups as archives. There is a better way to do archives, to address the needs of four use cases:
The IBM Information Archive for email, files and eDiscovery offers full text indexing. A well-deployed archive strategy can save up to 60 percent in backup costs, and reduce backup times by 80 percent. IBM offers advanced analytics and visualization for archive data.
An analysis of a global insurance company found that it kept, on average, 120 copies of every email sent. This was the combination of an average of 12 copies of each email, multiplied by 10 backups of the email repository.
Banjercito, a bank in Mexico, has a 10-year retention requirement from government regulations.
The new LTFS Library Edition allows Library-based access to files stored on tape cartridges. The new TS3500 Library Connector means that a single system of connected tape libraries can hold up to 2.7 Exabytes (EB) of data.
Archive Industry Perspectives
Steve Duplessie from Enterprise Strategy Group [ESG] gave his views on the challenges of volume, access and cost. His definition of archive: the long-term retention of information in a separate environment for compliance, eDiscovery and business reference purposes. Steve advocates a purpose-built solution for archive. There are three major challenges to implementing an archive solution:
Getting Participation -- Steve feels that key stakeholders have inappropriate expectations of what archive is, or can be.
Define Tasks -- Steve argues that archive is very much a process-oriented approach, and tasks must fit business processes and procedures
Prepare for Future Content Types -- the frequent change of standard and proprietary data types poses a real challenge for long term retention of data
For example, the Financial Industry Regulatory Authority [FINRA] oversees 4,000 brokerage firms and 600,000 brokers/dealers. FINRA has mandated the storing of digital data related to stock trades, which can include text messages, voice messages, and emails. It continues to expand this definition, so soon this could include tweets on Twitter, for example.
Steve feels there are four key requirements for archive:
Support for email, such as an email application plug-in
Off-line access to archived data
Support for mobile devices, such as smartphones
Basic search capabilities
Companies are starting to take archive seriously. About 35 percent of firms surveyed have adopted archive, and another 36 percent plan to in the next 12-24 months. Enterprise archive has grown over 200 percent from 2007 to 2009. Steve agrees that not everything needs to be stored on disk; retention periods greater than six years dictate the need for tape.
Current systems may not meet today's requirements, and data loss and downtime costs have skyrocketed. Data protection and retention projects can represent a gold mine of savings: new capabilities can greatly lower costs, allowing companies to shift resources over to revenue generation.
Big Data, New Physics and Geospatial Super-Food
I would vote this the best session of the day! Jeff Jonas is an IBM Distinguished Engineer and the Chief Scientist of Entity Analytics. For all those confused about what the heck "Big Data" means, Jeff has the best explanation. He had just finished his 17th marathon on Saturday, and his fingers were bandaged.
Jeff founded the Systems Research & Development (SRD) company, known for creating NORA (Non-Obvious Relationship Awareness), used by Las Vegas casinos to identify fraud. SRD was acquired by IBM back in 2005. Jeff is focused on sensemaking over streams of data. He feels many companies suffer from "Enterprise Amnesia".
"The data must find the data .. and the relevance must find the user."
-- Jeff Jonas
Jeff's metaphor for Big Data is a jigsaw puzzle without the picture on the outside of the box. To demonstrate his point, he presented a pile of jigsaw puzzle pieces and asked four teenagers to put the puzzle together without the advantage of the picture on the box. What he had not told them was that he had mixed four different puzzles together, removing 10 to 20 percent of the pieces from each. He also added some duplicate pieces from a second identical puzzle, and, just to mess with their heads, included a dozen pieces from a sixth puzzle. Within a few hours, the kids had managed to figure out that there were four puzzles, that there were duplicate pieces, and that there were some pieces that did not fit any of the four puzzles.
"You can't squeeze knowledge from a pixel."
-- Jeff Jonas
This approach favors false negatives. New observations reverse out old conceptions, and as the picture emerges, it provides added focus on new information. More data can provide better predictions. "Bad" data, including misspelled words and mis-coded categories, was often discarded or corrected on the basis of "Garbage In, Garbage Out", but can now be useful from a Big Data perspective.
Take, for example, the 600 billion recordings of "location data" captured on cell phones every day. With regular triangulation of cell phone towers, the information can pinpoint you to within 60 meters; add GPS and this improves to within 20 meters; add Wi-Fi and it improves further to 10 meters. While this data is "de-identified" so as not to identify individual users, the process of re-identification is relatively trivial. Jeff's system is able to predict where a person will be next Thursday at 5:35pm with 87 percent accuracy.
Thus, Big Data represents an asset: an accumulation of context. Real-time analytics can be a competitive advantage. These streams of data will need persistent storage and massive I/O capabilities. In one example, Jeff processed 4,200 separate sources of information and was able to identify "dead votes", votes cast in the names of people who had died in prior years, indicating voter fraud.
Jeff's latest project, codenamed G2, will tackle not just people, but everything from proteins to asteroids.
Normally, the worst time slot is the hour after lunch, but these presentations kept people's attention.
Continuing my coverage of last week's Data Center Conference 2009, my last breakout session of the week was an analyst presentation on Solid State Drive (SSD) technology. There are two different classes of SSD, consumer grade multi-level cell (MLC) running currently at $2 US dollars per GB, and Enterprise grade single-level cell (SLC) running at $4.50 US dollars per GB. Roughly 80 to 90 percent of the SSD is used in consumer use cases, such as digital cameras, cell phones, mobile devices, USB sticks, camcorders, media players, gaming devices and automotive.
While the two classes are different, the large R&D budgets spent on consumer-grade MLC carry forward to help out enterprise-grade SLC as well. SLC means there is a single level of charge in each cell, so each cell can only hold a single bit of data, a one or a zero. MLC means the cell can hold multiple levels of charge, each representing a different value; 2 bits per cell is most common today, with 3- and 4-bit cells on the way.
Back in 1997, SLC enterprise-grade SSD cost roughly $7,870 per GB. By 2013, consumer-grade 4-bit MLC is expected to cost only 24 cents per GB. Engineers are working on trade-offs between endurance cycles and retention periods. Flash management software, such as clever wear-leveling algorithms, is the key differentiator.
SSD is 10-15 times more expensive than spinning hard disk drives (HDD), and this price difference is expected to continue for a while because of production volumes. In 4Q09, manufacturers will produce 50 Exabytes of HDD, but only 2 Exabytes of SSD. The analyst thinks that SSD will be only roughly 2 percent of the total SAN storage deployed over the next few years.
How well did the audience know about SSD technology?
4 percent not at all
30 percent some awareness
30 percent enough to make purchase decision
21 percent able to quantify benefits and trade-offs
15 percent experts
SSD does not change the design objectives of disk systems. We want disk systems that are more scalable and have higher performance. We want to fully utilize our investment. We want intelligent self-management similar to caching algorithms. We want an extensible architecture.
What will happen to fast Fibre Channel drives? Take out your Mayan calendar. Already, 84mm 10K RPM drives are end of life (EOL) in 2009. The analyst expects 67mm and 70mm 10K drives to EOL in 2010, and 15K drives to EOL by 2012. A lot of this is because HDD performance has not kept up with CPU advancements, resulting in an I/O bottleneck. SSD is roughly 10x slower than DRAM, and some architectures use SSD as a cache extension; the IBM N series PAM II card and Sun 7000 series are two examples.
Let's take a look at a disk system with 120 drives, comparing 73GB HDDs versus 32GB SSDs.
[Comparison table: per HDD drive vs. per SSD drive]
There are various use cases for SSD. These include internal DAS, stand-alone Tier 0 storage, replacing or complementing HDD in disk arrays, and acting as an extension of read cache or write cache. The analyst believes there will be mixed MLC/SLC devices that will allow for mixed workloads. His recommendations:
Use SSD to eliminate performance and throughput bottlenecks
Consolidate workloads to maximize value
Use SLAs to identify workload candidates
Evaluate emerging technologies along with established vendors
Do not expect SSD to drastically reduce power/cooling
SSD will continue to complement HDD, primarily SATA disk
Trust but verify, check out customer references offered by storage vendors
Yesterday, I started this week's topic discussing the various areas of exploration to help understand our recent press release of the IBM System Storage SAN Volume Controller and its impressive SPC-1 and SPC-2 benchmark results that rank it the fastest disk system in the industry.
Some have suggested that since the SVC has a unique design, it should be placed in its own category, and not compared to other disk systems. To address this, I would like to define what IBM means by "disk system" and how it is comparable to other disk systems.
When I say "disk system", I am going to focus specifically on block-oriented direct-access storage systems, which I will define as:
One or more IT components, connected together, that function as a whole, to serve as a target for read and write requests for specific blocks of data.
Clarification: One could argue, and several do in various comments below, that there are other types of storage systems that contain disks, some that emulate sequential-access tape libraries, some that emulate file systems through CIFS or NFS protocols, and some that support the storage of archive objects and other fixed content. At the risk of looking like I may be including or excluding such systems to fit my purposes, I wanted to avoid apples-to-oranges comparisons between very different access methods. I will limit this exploration to block-oriented, direct-access devices. We can explore these other types of storage systems in later posts.
People who have been working a long time in the storage industry might be satisfied by this definition, thinking of all the disk systems that it would include, and recognizing that other types of storage, like tape systems, are appropriately excluded.
Others might be scratching their heads, thinking to themselves "Huh?" So, I will provide some background, history, and additional explanation. Let's break up the definition into different phrases, and handle each separately.
read and write requests
Let's start with "read and write requests", which we often lump together generically as input/output request, or just I/O request. Typically an I/O request is initiated by a host, over a cable or network, to a target. The target responds with acknowledgment, data, or failure indication. A host can be a server, workstation, personal computer, laptop or other IT device that is capable of initiating such requests, and a target is a device or system designed to receive and respond to such requests.
(An analogy might help. A woman calls the local public library. She picks up the phone and dials the number of the library down the street. A man working at the library hears the phone ring and answers it with "Welcome to the Public Library! How can I help you?" She asks "What is the capital city of Ethiopia?" and he replies "Addis Ababa." Satisfied with this response, she hangs up. In this example, the query for information was the I/O request, initiated by the lady, to the public library target.)
Today, there are three popular ways I/O requests are made:
CCW commands over OEMI, ESCON or FICON cables
SCSI commands over SCSI, Fibre Channel or SAS cables
SCSI commands over Ethernet cables, wireless or other IP communication methods
specific blocks of data
In 1956, IBM was the first to deliver a disk system. It was different from tape because it was a "direct access storage device" (the acronym DASD is still used today by some mainframe programmers). Tape was a sequential medium, so it could handle commands like "read the next block" or "write the next block", but it could not read a specific block directly without reading past the blocks before it, nor could it write over an existing block without risking the contents of blocks beyond it.
The nature of a "block" of data varies. It is represented by a sequence of bytes of specific length. The length is determined in a variety of ways.
CCW commands assume a Count-Key-Data (CKD) format for disk, meaning that tracks are fixed in size, but a track can consist of one or more blocks, which can be fixed or variable in length. Some blocks can span off the end of one track and onto another track. Typical block sizes in this case are 8000 to 22000 bytes.
SCSI commands assume a Fixed-Block-Architecture (FBA) format for disk, where all blocks are the same size, almost always a power of two, such as 512 or 4096 bytes. A few operating systems, however, such as i5/OS on IBM System i machines, use a block size that doesn't follow this power-of-two rule.
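A toy sketch in Python helps make the fixed-block idea concrete; it is purely illustrative and bears no resemblance to how a real array is implemented:

    BLOCK_SIZE = 512   # fixed-block architecture: every block is the same size

    class BlockTarget:
        """A trivial target that serves reads and writes of specific blocks."""
        def __init__(self, num_blocks):
            self.blocks = [bytes(BLOCK_SIZE) for _ in range(num_blocks)]

        def write(self, block_number, data):
            if len(data) != BLOCK_SIZE:
                raise ValueError("data must be exactly one block")
            self.blocks[block_number] = data   # direct access by block number

        def read(self, block_number):
            return self.blocks[block_number]   # no need to pass over other blocks

    target = BlockTarget(num_blocks=1024)      # a 512 KB toy "disk"
    target.write(7, b"A" * BLOCK_SIZE)
    assert target.read(7)[0:1] == b"A"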
one or more IT components
You may find one or more of the following IT components in a disk system:
motorized platter(s) covered in magnetic coating, with a read/write head that moves over the surface. These are often referred to as Hard Disk Drives (HDD) or Disk Drive Modules (DDM), and are manufactured by companies like Seagate or Hitachi Global Storage Technologies.
A set of HDD can be accessed individually, affectionately known as JBOD for Just-a-bunch-of-disk, or collectively in a RAID configuration.
Memory can act as the high-speed cache in front of slower storage, or as the storage itself. For example, the solid state disk that IBM announced last week is entirely memory storage, using Flash technology.
Lately, there are two popular packaging methods for disk systems:
Monolithic -- all the components you need connected together inside a big refrigerator-sized unit, with options to attach additional frames. The IBM System Storage DS8000, EMC Symmetrix DMX-4 and HDS TagmaStore USP-V all fit this category.
Modular -- components that fit into standard 19-inch racks, often the size of the vegetable drawer inside a refrigerator, that can be connected externally with other components, if necessary, to make a complete disk system. The IBM System Storage DS6000, DS4000, and DS3000 series, as well as our SVC and N series, fall into this category.
Regardless of packaging, the general design is that a "controller" receives a request from its host attachment port, and uses its processors and cache storage to either satisfy the request, or pass the request to the appropriate HDD, and the results are sent back through the host attachment port.
In all of the monolithic systems, as well as some of the modular ones, the controller and HDD storage are contained in the same unit. On other modular systems, the controller is one system, and the HDD storage is in a separate system, and they are cabled together.
serve as a target
The last part is that a disk system must be able to satisfy some or all requests that come to it.
(Using the same analogy used above, when the lady asked her question, the guy at the public library knew the answer from memory, and replied immediately. However, for other questions, he might need to look up the answer in a book, do a search on the internet, or call another library on her behalf.)
Some disk systems are cache-only controllers. For these, either the I/O request is satisfied as a read-hit or write-hit in cache, or it is not, and has to go to the HDD. The IBM DS4800 and N series gateways are examples of this type of controller.
Other systems may have controller and disk, but support additional disk attachment. In this case, either the I/O request is handled by the cache or internal disk, or it has to go out to external HDD to satisfy the request. IBM DS3000 series, DS4100, DS4700, and our N series appliance models, all fall into this category.
So, the SAN Volume Controller is a disk system comprising one to four node-pairs. Each node is a piece of IT equipment that has processors and cache. These node-pairs are connected to a pair of UPS power supplies to protect the cache memory holding writes that have not yet been de-staged. The combination of node-pairs and UPS, acting as a whole, is able to serve as a target for SCSI commands sent over Fibre Channel cables on a Storage Area Network (SAN). To read some blocks of data, it uses its internal cache storage to satisfy the request; for others, it goes out to the external disk systems that contain the required data. All writes are satisfied immediately in cache on the SVC, and later de-staged to external disk when appropriate.
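A minimal Python sketch of that write-back behavior, writes acknowledged from cache and de-staged to the back end later, may help; it is a conceptual illustration only, not a description of the SVC's actual internals:

    class Backend:
        """Stand-in for an external, virtualized disk system."""
        def __init__(self):
            self.blocks = {}
        def read(self, block):
            return self.blocks.get(block, bytes(512))
        def write(self, block, data):
            self.blocks[block] = data

    class WriteBackCache:
        """Acknowledge writes from cache immediately; de-stage to the backend later."""
        def __init__(self, backend):
            self.backend = backend
            self.dirty = {}        # blocks written but not yet de-staged

        def write(self, block, data):
            self.dirty[block] = data            # write completes once it is in cache

        def read(self, block):
            if block in self.dirty:             # read-hit served from cache
                return self.dirty[block]
            return self.backend.read(block)     # read-miss goes to external disk

        def destage(self):
            for block, data in self.dirty.items():
                self.backend.write(block, data)
            self.dirty.clear()

    cache = WriteBackCache(Backend())
    cache.write(42, b"hello")
    assert cache.read(42) == b"hello"   # served from cache before de-stage
    cache.destage()                     # later written out to the external system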
As of the end of 2Q07, having reached the four-year anniversary for this product, IBM has sold over 9000 SVC nodes, which are part of more than 3100 SVC disk systems. These things are flying off the shelves, clocking in 100% year-to-year growth over what we sold twelve months ago. Congratulations go to the SVC development team for their impressive feat of engineering that is starting to catch the attention of many customers and return astounding results!
So, now that I have explained why the SVC is considered a disk system, tomorrow I'll discuss metrics to measure performance.