This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years with IBM Systems, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently he has been leading efforts across the Communication/CSI Market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years Lloyd supported the industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist during his time in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Today I spoke at the IBM Think Green Roadshow in Phoenix, Arizona. This is just one stop on a 15-city tour to help make people aware of Green data center issues. Here is the schedule for the remaining cities. Contact your local IBM rep for details.
Victor Ferreira was our moderator and host. He is the site-level executive for the 2,000 IBM employees in the Phoenix area, and manages the Public Sector for our Western region.
The first speaker was Dave McCoy, IBM principal in our Data Center services group. He explained IBM's Project Big Green and the Energy Efficiency Initiative, and went into detail on how IBM can act as general contractor to design, plan and build the ideal Green Data Center for you. IBM can also retrofit existing buildings with new technologies like stored cooling, optimized airflow assessments, and modular data center floorspace. While not related to energy, IBM Asset Recovery Services is still important to our environment: IBM can take all those old PC monitors, keyboards and other outdated equipment and refurbish them, or melt them down to recapture useful metals and plastics, disposing of the rest in an environmentally friendly, non-toxic manner.
I was the second speaker, covering "How to get it done". While Dave covered the issues and technologies available, I explained how to put it all into practice. This includes IT systems assessments, health audits, and thermal profiling. Using server and storage virtualization, you can increase resource utilization and reduce energy waste. IBM's CoolBlue product line includes the IBM PowerExecutive software to monitor your IT environment, and the "Rear Door Heat Exchanger," which uses chilled water to remove as much as 60 percent of the heat coming out of the back of a server rack, greatly reducing hot spots on the data center floor and allowing you to run the entire room at warmer, less expensive temperatures.
On the server side, I covered IBM's System z mainframe and the BladeCenter as examples of how innovative technologies can be used to run more applications with less energy. The new System p570, based on the energy-intelligent POWER6 processor, has twice the performance for the same amount of power as its POWER5 predecessor. On the storage side, I explained how Information Lifecycle Management (ILM), storage virtualization, and the use of a blended disk and tape environment can greatly reduce energy costs.
Reps from many of our technology partners (Eaton, APC, Schneider Electric, Liebert, and Anixter) were there to support this event.
The session ended with a Q&A panel with Dave McCoy, myself, and Greg Briner from IBM Global Financing. IBM is able to offer creative "project financing" that can often match the actual monthly savings, resulting in net zero cost to your operational budget, with payback periods as short as 2.5 years.
To learn more about IBM's efforts to help clients create "Green" data centers, click Green Data Center.
Well, tomorrow is the Winter solstice, at least for those of us in the Northern hemisphere of the planet. As often happens, I have more vacation days left than I can physically take before they evaporate at the end of the year, so next week I will be off, going to see movies like the new ["Golden Compass"] or perhaps read the latest book from [Richard Dawkins].
Next week, I suspect some of the kids on my block will be playing with radio-controlled cars or planes. If you are not familiar with these, here's a [video on BoingBoing] that shows Carl Rankin's flying machines that he made out of household materials.
Which brings me to the thought of scalability. For the most part, the physics involved with cars, planes, trains or sailboats apply at the toy-size level as well as the real-world level. One human operator can drive/manage/sail one vehicle. While I have seen a chess master play seven opponents on seven chess boards concurrently, it would be difficult for a single person to fly seven radio-controlled airplanes at the same time.
How can this concept be extended to IT administrators in the data center? They have to deal with hundreds of applications running on thousands of distributed servers. In a whitepaper titled [Single System Image (SSI)], the three authors write:
A single system image (SSI) is the property of a system that hides the heterogeneous and distributed nature of the available resources and presents them to users and applications as a single unified computing resource.
IBM has some offerings that can help towards this goal.
Even in the case where your vehicle is being pulled by eight horses (or eight reindeer?), a single operator can manage it, holding the reins in both hands. In the same manner, IBM has invested heavily in research on supercomputers, where hundreds of individual servers all work together towards a common task. The operator submits a math problem, for example, and the single system image takes care of the rest, dividing the work up into smaller chunks that are executed on each machine.
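The divide-and-combine pattern just described can be sketched in a few lines of Python. This is purely a hypothetical illustration (worker threads stand in for the member machines, and `work` and `solve` are invented names), not IBM supercomputer code:

```python
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    # Each member machine (a worker thread stands in for one here)
    # solves its small piece of the math problem.
    return sum(x * x for x in chunk)

def solve(numbers, machines=4):
    # The "single system image" splits the problem into smaller chunks,
    # farms them out to the members, and combines the partial results.
    size = (len(numbers) + machines - 1) // machines
    chunks = [numbers[i:i + size] for i in range(0, len(numbers), size)]
    with ThreadPoolExecutor(max_workers=machines) as pool:
        return sum(pool.map(work, chunks))

print(solve(list(range(1000))))  # same answer a single machine would compute
```

The caller sees one `solve()` call and one answer; the splitting, distribution, and recombining stay hidden, which is exactly the property the SSI definition above describes.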
When done with IBM mainframes, it is called a Parallel Sysplex. The world's largest business workloads are processed by mainframes, and connecting several together to work in concert makes this possible. In this case, the tasks are typically single transactions, with no need to divide them up further; the workload is simply balanced across the various machines, with shared access to a common database and storage infrastructure so they can all do the work equally.
Last August, in my post [Fundamental Changes for Green Data Centers], I mentioned that IBM consolidated 3900 Intel-based servers onto 33 mainframes. This not only saves lots of electricity, but makes it much easier for the IT administrators to manage the environment.
Parallel Sysplex configurations often require thousands of disk volumes, which would be quite a headache to deal with individually. With DFSMS, IBM was able to create "storage groups", where a few groups hold all the data. You might have reasons to separate some data from the rest, placing it in separate groups. An IT administrator can handle a handful of storage groups much more easily than thousands of disk volumes. As businesses grow, there will be more data in each storage group, but the number of storage groups remains flat, so an IT administrator can manage the growth easily.
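The idea can be sketched as a simple data structure. This hypothetical Python illustration (group and volume names are invented; this is not DFSMS syntax) shows why groups scale better than volumes:

```python
# Hypothetical sketch: thousands of volumes, only a handful of groups to manage.
storage_groups = {
    "PRODDB":  [f"VOL{i:04d}" for i in range(0, 1200)],     # production databases
    "BATCH":   [f"VOL{i:04d}" for i in range(1200, 2600)],  # batch work files
    "ARCHIVE": [f"VOL{i:04d}" for i in range(2600, 4000)],  # long-term retention
}

def add_capacity(group, new_volumes):
    # Growth lands inside an existing group; the number of groups stays flat.
    storage_groups[group].extend(new_volumes)

# The administrator reasons about 3 groups, not 4,000 individual volumes.
print(len(storage_groups), sum(len(v) for v in storage_groups.values()))
```

Adding capacity only ever grows a list inside an existing group, so the administrator's mental model stays the same size no matter how big the business gets.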
IBM System Storage SAN Volume Controller (SVC) is able to accomplish this for other distributed systems. All of the physical disk space assigned to an SVC cluster is placed into a handful of "managed disk groups". As the system grows in capacity, more space is added to each managed disk group, but the same few IT administrators can continue to manage it easily.
The new IBM System Storage Virtual File Manager (VFM) is able to aggregate file systems into one global name space, again simplifying heterogeneous resources into a single system image. End users have a single drive letter or mount point to deal with, rather than many to connect to all the disparate systems.
Lastly we get to the actual management aspect of it all. Wouldn't it be nice if your entire data center could be managed by a hand-held device with two joysticks and a couple of buttons? We're not quite there yet, but last October we announced the [IBM System Storage Productivity Center (SSPC)]. This is a master console that has a variety of software pre-installed to manage your IBM and non-IBM storage hardware, including SAN fabric gear, disk arrays and even tape libraries. It lets the storage admin see the entire data center as a single system image, displaying the topology in a graphical view that can be drilled down using semantic zooming to look at or manage a particular device or component.
Customers are growing their storage capacity on average 60 percent per year. They could do this by having more and more things to deal with, griping about the complexity, or they can try to grow their single system image bigger, with interfaces and technologies that allow the existing IT staff to manage the growth.
In case you missed it, IBM unveiled a new digital video surveillance service yesterday. This "marks an important shift in the industry's approach to security, applying advanced analytics to video data and signaling the ability to converge physical and information technology (IT) security."
The IBM Smart Surveillance Solution is designed to provide the unique capability to carry out efficient data analysis of video sequences either in real time or from recordings. These recordings can be on disk or tape storage.
The problem with today's "analog" surveillance is that analog cameras record onto traditional VHS tapes, which are rotated through and re-written after a few hours or days. Reviewing tapes often involves human intervention, and must be done before the VHS tapes are re-used. Many shoplifters, thieves, and other law-breakers take a chance that their actions will not be caught on tape, or that they will be long gone by the time the video is analyzed.
The IBM Smart Surveillance Solution can provide a number of advantages over traditional video solutions, including:
Real-time alerts that can help anticipate incidents by identifying suspicious behaviors.
Forensic capabilities are enhanced by utilizing unique indexing and attribute-based search of video events to classify objects into categories such as people and cars.
Situational awareness of the location, identity and activity of objects in a monitored space including license plate recognition and face capture.
With real-time analytics capabilities, the new DVS service can open up a wide array of new applications that go far beyond the traditional security aspects of surveillance systems. Early adopter industries in this rapidly evolving market include retail, public sector and financial services. The retail industry estimates nearly $50 billion is lost annually to fraud, theft and administrative errors.
Once in digital format, video surveillance can be sent farther, processed more quickly, and stored for longer periods than traditional analog media make practical today.
Well, I have left Japan, and while everyone else is enjoying the Super Bowl, I am now in Australia, at another conference. Today I had the pleasure to hear filmmakers talk about their successes, and how IBM helps the movie industry.
At one extreme was Khoa Do, independent filmmaker. After acting in movies alongside Michael Caine and Billy Zane, he decided to become his own director. He started a project to help seven disadvantaged youths from a poor, drug-ridden section of Sydney by having them act in his first full-length film. Armed with only an IBM laptop and a small budget, he made the film "The Finished People", which earned critical acclaim.
The film was a success, and many of the disadvantaged youths have gone on to act in other movies. In 2005, Khoa Do was named "Young Australian of the Year".
Thanks to IBM technology, filmmaking is now accessible to a wider range of aspiring directors. It is no longer necessary to be part of a large film studio with a multi-million dollar budget to tell your story.
At the other extreme was Xavier Desdoigts, director of technical operations at Animal Logic, the Computer Graphics (CG) arthouse that produced special effects for movies like "The Matrix", "House of Flying Daggers" and "World Trade Center". They started by producing digital effects for TV commercials, like this one for Carlton Draught beer.
With the support of a large film studio and multi-million dollar budget, Animal Logic now boasts the 86th most powerful "Supercomputer" based on IBM BladeCenter technology, with over 4000 servers connected into a cluster, for making the movie "Happy Feet". The movie took four years to make, with over 500 people, of 27 different nationalities. It was the first CG movie made in Australia, and has been well-received by audiences worldwide.
Mr. Desdoigts gave out some interesting facts and figures about the movie:
While visually stunning on the big screen, each frame is only 1.4 Megapixel, about the same resolution as most camera phones.
In one scene, there are 427,086 penguins all appearing on frame.
Mumble, the lovable lead character, is made up of over 6 million feathers.
As many as 17 dancers were "motion-captured" to choreograph the tap-dancing and character interaction segments.
Only one system admin was needed to manage this entire server farm. (IBM Systems Director technology makes this possible)
The movie consumed 103 TB of disk space, backed up to 595 LTO tape cartridges.
An estimated 17 million CPU-hours were needed for all the processing and rendering.
Rather than talking about technology for technology's sake, these filmmakers showed how technology could be put to use, in a practical sense, to provide the world something of value.
This week, I am attending the [InterConnect Conference] in Las Vegas, Feb 21-25, 2016. This is IBM's premier Cloud & Mobile conference for the year.
The last day of the conference had fewer people; many stayed for the Elton John concert and then left. I am glad to be one of the few who squeezed every last bit of learning out of the money it cost my employer to send me here.
2419A Enhance the Agility of Your Cloud with IBM FlashSystem
Kristy Ortega and Shaluka Perera, IBM FlashSystem Solutions team, presented. Cloud Service Providers (CSP) and Managed Service Providers (MSP) are leveraging flash technology for a variety of reasons:
To meet Service Level Agreements (SLAs)
To handle unpredictable workloads
To minimize noisy neighbor interference
To offer premium performance as an up-sell feature
To be able to scale faster to meet incoming requests
To reduce server count
To keep customers delighted and reduce customer churn
To offer data-rich features without sacrificing performance
Kristy gave three practical client use cases:
IP-Only -- an MSP in the Nordic countries, employed IBM FlashSystem and Storwize V5000. They achieved five times VMware density on their servers and 300 percent improved application performance. Nearly all of the cost of the new storage hardware was offset by the savings in VMware license costs!
Cageka -- an MSP in Europe, employed IBM FlashSystem and SAN Volume Controller. They achieved 66 percent reduced SAP ERP response time, 97 percent reduction in floorspace, and 95 percent reduced power and cooling costs.
COCC -- formerly the Connecticut On-Line Computer Center, a CSP for bank and credit unions, employed IBM FlashSystem with IBM POWER servers. They achieved 10x faster OLTP transaction processing times, 80 percent reduction in power and cooling costs. The payback period for this was less than 3 months!
IBM sells SAN switches featuring Brocade Gen5 "Fabric Vision" technology, and resells Cisco MDS switches like the 9396S model. Both of these have been enhanced to handle the lower latency and higher throughput that IBM FlashSystem provides.
IBM Data Engine for NoSQL employs Redis with the Coherent Accelerator Processor Interface (CAPI), which allows POWER8 servers to connect directly to IBM FlashSystem as an extension of memory rather than as bus-attached external storage. This reduces the code path length to read/write IBM FlashSystem by 97 percent, resulting in solutions that use one-sixth the rack space at one-third the cost. This solution reduces CPU core requirements by 20-30 cores for every 1M IOPS of workload!
Spectrum Scale supports IBM FlashSystem in a variety of configurations. First, IBM FlashSystem can serve as a high-speed cache when Spectrum Scale virtualizes other NFS storage devices. Second, IBM FlashSystem can serve as a low-latency storage pool to direct new or hot data to. Third, Spectrum Scale can separate its metadata from the content of files and objects, putting the metadata on IBM FlashSystem. This greatly improves searching through directory structures or for specific object attributes.
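As a sketch of how the second and third configurations are expressed, Spectrum Scale file-placement policy rules look roughly like the following. The rule and pool names here are invented for illustration; consult the Spectrum Scale ILM documentation for the exact policy syntax in your release, and note that metadata placement itself is configured on the system pool rather than by placement rules:

```
/* Hypothetical placement rules: route latency-sensitive new files to a
   FlashSystem-backed pool, and everything else to spinning disk. */
RULE 'hot-to-flash' SET POOL 'flashpool' WHERE UPPER(NAME) LIKE '%.LOG'
RULE 'default-placement' SET POOL 'diskpool'
```

Because placement is decided at file-creation time, new hot data lands on flash immediately, without waiting for a later migration pass.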
Last year, IBM, Hewlett-Packard, and VMware launched Project Capstone to "leave no application behind". They made a concerted effort to make sure that all relevant applications that run on bare metal can also run on VMware hypervisor. IBM FlashSystem has support for VMware features, including VAAI, VASA, and VVols.
IBM has partnered with Atlantis ILIO to offer in-line data deduplication for Virtual Desktop Infrastructure (VDI). A single 2U IBM FlashSystem can support 5,000 users and 10,000 virtual desktops, running at 382 IOPS per desktop.
Lastly, Healthcare provider Trizetto has used IBM FlashSystem to reduce OPEX by 90 percent, shrinking from a 20U disk system array to a 2U IBM FlashSystem device.
4331A Leverage z/OS and Cloud Storage for Backup/Archive Efficiency and Cost Reduction
Eddie Lin, IBM Senior Technical Staff Member on the DS8000 development team, presented this technology preview. Taking advantage of cloud storage is not limited to the distributed storage world alone. The ability to connect existing archive and backup solutions in z/OS to on-premises object storage platforms provides huge efficiency gains, enabling clients to do more during their critical batch windows.
IBM is integrating cloud gateway software into its DS8870 and DS8880 Enterprise Disk Systems, in conjunction with DFSMShsm and DFSMSdss, for a complete end-to-end solution to optimize this space. A live demonstration of this capability was shown during the session.
This solution uses the Storage-as-the-Storage-Cloud methodology I mentioned in my session yesterday. The DS8000 is the #1 storage platform for mainframe environments. Eddie explained the current inefficient process of moving cold data to tape, using 37-year-old DFSMShsm functionality.
A new approach involves moving data directly from DS8870 storage systems to object storage, either on-premises or off-premises. This eliminates MIPS used for data movement, and reduces the record-keeping normally done by DFSMShsm. z/OS data sets migrated to the Cloud will continue to be designated as MIGRAT in the ICF Catalog, and recall times from the Cloud are similar to those from tape.
There will also be options for DFSMSdss to invoke the function. However, you will need to provide in the DFSMSdss command parameters all of the information needed to connect to the Cloud that would normally be handled by DFSMShsm.
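Conceptually, each migrated data set becomes an object in a bucket, while the catalog entry stays behind as a MIGRAT stub. This hypothetical Python sketch (the key layout and record format are invented for illustration, not the actual DS8000/DFSMS implementation) shows the kind of mapping involved:

```python
import json

def object_key(hlq, dataset, generation=1):
    # Invented layout: one object per migrated copy of a data set.
    return f"dfsmshsm/{hlq}/{dataset}/G{generation:04d}"

def migration_record(dataset, bucket, key):
    # The ICF Catalog still shows MIGRAT; a record like this
    # says which bucket and object hold the actual bits.
    return json.dumps({"dataset": dataset, "status": "MIGRAT",
                       "bucket": bucket, "key": key})

key = object_key("PROD", "PAYROLL.HISTORY", generation=3)
print(key)
print(migration_record("PROD.PAYROLL.HISTORY", "zos-archive", key))
```

On recall, the same mapping is walked in reverse: look up the record, fetch the object, and clear the MIGRAT designation.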
To make this all happen, you will need a certain level of DFSMS, and a certain level of DS8000 firmware. No new hardware is required, as it uses 1GbE Ethernet ports that already exist in DS8870 and DS8880 models. If you still have DS8100, DS8300, DS8700 or DS8800 models, now is a good time to start upgrade!
Internal tests on a 5GB data set were done to compare MIPS consumption. The DFSMShsm path consumed 0.127 CPU, versus only 0.068 CPU for the new "Transparent Cloud Storage Tiering" method, a 46 percent reduction in MIPS. DFSMShsm is often the #2 biggest consumer of MIPS (DB2 is #1), so any reduction here is a big deal.
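The quoted figure checks out; a quick back-of-the-envelope check in Python:

```python
hsm_cpu = 0.127    # CPU consumed by the classic DFSMShsm path
tct_cpu = 0.068    # CPU consumed by Transparent Cloud Storage Tiering
reduction = (hsm_cpu - tct_cpu) / hsm_cpu * 100
print(f"{reduction:.0f} percent reduction")  # 46 percent reduction
```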
IBM plans to support Spectrum Scale, Cleversafe, IBM SoftLayer, Amazon S3, Rackspace, Microsoft Azure. Full encryption data-in-flight is included, with keys managed using IBM SKLM. This capability will be fully supported by z/OS Security products (RACF, Top Secret, etc.) and z/OS audit logging.
Eddie wrapped up with a live demo.
7341A IBM Storage and Catalogic: Software Defined Solutions for Hybrid Cloud and DevOps
Third party Catalogic ECX software supports IBM, NetApp and EMC storage devices. I was hoping to hear how it works specifically with IBM storage models, but instead the speaker explained why Copy Data Management (CDM) was helpful for Bi-Modal environments.
Basically, copies of data taken to protect production data sit idle until needed. With Copy Data Management, the copies are available to development and test personnel. While traditional production IT operations are like Marathon runners, the new DevOps is like short-distance sprinters, needing to be agile in developing and testing new applications. Having ready access to copies of production data can speed this process.
4921A Radical Storage Simplicity for Your Cloud and How it Can Impact Your Customers
Diane Benjuya and Yafit Sami, both from IBM, presented IBM Spectrum Accelerate, the software "de-coupled" from traditional XIV hardware.
The XIV grid architecture automatically distributes data, eliminates hot spots, and provides enterprise-class features like thin provisioning, VMware support, snapshots and remote mirroring. Its "Distributed RAID-10" capability can rebuild after the failure of a 6TB disk drive in less than an hour.
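A rough calculation shows why only a distributed rebuild can hit that number. The drive count of 180 below is a hypothetical figure for illustration, and a real rebuild copies only the used portion of the data, so this is the worst case:

```python
capacity_bytes = 6e12        # worst case: rebuild the full 6 TB
rebuild_seconds = 3600       # "less than an hour"

aggregate_gbps = capacity_bytes / rebuild_seconds / 1e9
print(f"aggregate rebuild rate: {aggregate_gbps:.2f} GB/s")

# Spread across, say, 180 drives, each drive only sustains a modest rate:
per_drive_mbps = aggregate_gbps * 1000 / 180
print(f"per-drive rate: {per_drive_mbps:.1f} MB/s")
```

A single spare drive could never absorb roughly 1.7 GB/s on its own, but 180 drives each writing under 10 MB/s can, which is the point of spreading the rebuild across the whole grid.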
Spectrum Accelerate has nearly the same set of features, minus Microsoft Hyper-V integration, FCP host access support, VMware vSphere v6 VVol support, Real-time Compression, and Encryption. Spectrum Accelerate adds a feature not available to XIV called Hyperconvergence. This allows application Virtual Machines to run on the same servers used for Spectrum Accelerate. Spectrum Accelerate can run on-premises on customer-choice hardware, or in the Cloud, such as IBM SoftLayer.
In response to complaints that IBM XIV was a single-frame storage array, IBM introduced Hyper-Scale, a series of features that allow up to 144 XIV Gen3 frames as a single system. With the introduction of Spectrum Accelerate, Hyper-Scale Manager can now manage any combination of XIV Gen3 and Spectrum Accelerate clusters, on-premises or off-premises, up to 144 total.
Hyper-Scale Mobility can migrate volumes from one XIV to another without the need for external virtualization such as IBM SAN Volume Controller. For iSCSI volumes, Hyper-Scale Mobility can migrate data between XIV and Spectrum Accelerate, or from one Spectrum Accelerate cluster to another, on-premises or off-premises.
Hyper-Scale Consistency allows snapshots to be taken of a group of volumes across multiple XIV frames. Now, snapshots can be taken of a group of volumes across both XIV and Spectrum Accelerate clusters.
Remote Mirroring is fully supported. You can replicate data from XIV to Spectrum Accelerate, Spectrum Accelerate to XIV, or from one Spectrum Accelerate cluster to another.
The IBM XIV Mobile Dashboard for Apple and Android phones can support any mix of XIV and Spectrum Accelerate clusters. This includes monitoring your environment, as well as push notifications.
IBM has also introduced flexible licensing options. With newly purchased XIV boxes and Spectrum Accelerate, you can choose to buy the software license as "perpetual", allowing you to move it to new hardware when your old hardware kicks the bucket. This license can be moved to new XIV hardware, or to a Spectrum Accelerate cluster deployment.
For Spectrum Accelerate, an additional license option is "monthly", allowing you to elastically add or reduce the amount of storage you manage, either on-premises or off-premises.
Like the idea of Spectrum Accelerate but don't want to build it yourself? Third party SuperMicro offers hardware pre-certified and pre-installed with Spectrum Accelerate. You license Spectrum Accelerate directly from IBM, and SuperMicro will take care of the rest.
Spectrum Accelerate is a component of the Spectrum Storage suite, which offers a single flat per-TB price for all six Spectrum Storage products.
Want to try IBM Spectrum Accelerate yourself? Here are three options:
Free 90-day trial with self-destruct. After 90 days, the code stops working. You can download this and try it out.
90-day evaluation copy. Your authorized IBM Seller works with you to install, and if you like it, you buy it after 90 days to continue to use it.
Special promotion before June 30, 2016 -- Purchase IBM Spectrum Accelerate for production, and your first 20TB are free. No strings attached.
IBM's Silverpop uses IBM Spectrum Accelerate to deploy their market analytics solution. They can spin up a new customer with 250TB of capacity in 24-48 hours on IBM SoftLayer, and found they need half as many storage admins to manage storage with IBM Spectrum Accelerate as with their previous method.
Well, that's the end of the conference. I have to go back and submit all of my survey responses, which I should have done every day all along, but was too busy writing blog posts!
The presentations are also now available for download for those who attended the conference. (Go to Session Preview on the IBM InterConnect attendee website and hit the Download Presentation button)
Well, it's Tuesday, which means IBM Announcements!
We have both disk and tape related announcements today.
2 TB Drives
Yes, they are finally here. IBM now offers [2 TB SATA drives for its IBM System Storage DCS9900 series] disk systems. These are 5400 RPM, slower than traditional 7200 RPM SATA drives. This increases the maximum capacity of a single DCS9900 from 1200 TB to 2400 TB. The DCS9900 is IBM's MAID system (Massive Array of Idle Disks), which allows for drive spin-down to reduce energy costs, and is ideal for long-term retention of archive data that must remain on disk, for High Performance Computing, or for video streaming.
TS3000 System Console
The TS3000 System Console [provides improved features for service and support] of up to 24 tape library frames or 43 unique tape systems. Tape frames include those of the TS7740, TS7720 and TS7650. Tape systems include TS3500, TS3400 or 3494 libraries as well as stand-alone TS1120 and TS1130 drives. Having the TS3000 System Console in place is a benefit to both IBM and the customer, as it improves IBM's ability to provide service in a more timely manner.
Both announcements are part of IBM's strategy to provide cost-effective, energy-efficient, long-term retention storage for archive data.
Continuing this week's coverage of the 27th annual [Data Center Conference], I attended some break-out sessions on the "storage" track.
Effectively Deploying Disruptive Storage Architectures and Technologies
Two analysts co-presented this session. In this case, the speakers are using the term "disruptive" in the [positive sense] of the word, as originally used by Clayton Christensen in his book [The Innovator's Dilemma], and not in the negative sense of IT system outages. By a show of hands, they asked if anyone had more storage than they needed. No hands went up.
The session focused on the benefits versus risks of new storage architectures, and which vendors they felt would succeed in this new marketplace around the years 2012-2013.
By electronic survey, here were the number of storage vendors deployed by members of the audience:
14 percent - one vendor
33 percent - two vendors, often called a "dual vendor" strategy
24 percent - three vendors
29 percent - four or more storage vendors
For those who have deployed a storage area network (SAN), 84 percent also have NAS, 61 percent also have some form or archive storage such as IBM System Storage DR550, and 18 percent also have a virtual tape library (VTL).
The speaker credited IBM's leadership in the now popular "storage server" movement to the IBM Versatile Storage Server [VSS] from the 1990s, the predecessor to IBM's popular Enterprise Storage Server (ESS). A "storage server" is merely a disk or tape system built using off-the-shelf server technology, rather than customized [ASIC] chips, lowering the barriers of entry to a slew of small start-up firms entering the IT storage market, and leading to new innovation.
How can a system designed for no single point of failure (SPOF) actually fail? The speaker conveniently ignored the two most obvious answers (multiple failures, microcode error) and focused instead on mis-configuration. She felt part of the blame falls on IT staff not having adequate skills to deal with the complexities of today's storage devices, and the other part falls on storage vendors for making such complicated devices in the first place.
Scale-out architectures, such as IBM XIV and EMC Atmos, represent a departure from traditional "Scale-up" monolithic equipment. Whereas scale-up machines are traditionally limited in scalability from their packaging, scale-out are limited only by the software architecture and back-end interconnect.
To go with cloud computing, the analyst categorized storage into four groups: Outsourced, Hosted, Cloud, and Sky Drive. The difference depended on where servers, storage and support personnel were located.
How long are you willing to wait for your preferred storage vendor to provide a new feature before switching to another vendor? A shocking 51 percent said at most 12 months! 34 percent would be willing to wait up to 24 months, and only 7 percent were unwilling to change vendors. The results indicate more confidence in being able to change vendors, rather than pressures from upper management to meet budget or functional requirements.
Beyond the seven major storage vendors, there are now dozens of smaller emerging or privately-held start-ups now offering new storage devices. How willing were the members of the audience to do business with these? 21 percent already have devices installed from them, 16 percent plan to in the next 12-24 months, and 63 percent have no plans at all.
The key value propositions of the new storage architectures were ease-of-use and lower total cost of ownership. The speaker recommended developing a strategy or "road map" for deploying new storage architectures, with focus on quantifying the benefits and savings. Ask the new vendor for references, local support, and an acceptance test or "proof-of-concept" to try out the new system. Also, consider the impact this new storage architecture may have on existing Disaster Recovery or other IT processes.
Tame the Information Explosion with IBM Information Infrastructure
Susan Blocher, IBM VP of marketing for System Storage, presented this vendor-sponsored session, covering the IBM Information Infrastructure part of IBM's New Enterprise Data Center vision. This was followed by Brad Heaton, Senior Systems Admin from ProQuest, who gave his "User Experience" of the IBM TS7650G ProtecTIER virtual tape library and its state-of-the-art inline data deduplication capability.
Best Practices for Managing Data Growth and Reducing Storage Costs
The analyst explained why everyone should be looking at deploying a formal "data archiving" scheme. Not just for "mandatory preservation" resulting from government or industry regulations, but also the benefits of "optional preservation" to help corporations and individual employees be more productive and effective.
Before, there were only two tiers of storage: expensive disk and inexpensive tape. Now, with the advent of slower, less-expensive SATA disks--including storage systems that emulate virtual tape libraries, and others that offer Non-Erasable, Non-Rewriteable (NENR) protection--IT administrators have a middle ground for their archive data.
New software innovation supports better data management. The speaker recalled when "storage management" was equated to "backup" only, and now includes all aspects of management, including HSM migration, compliance archive, and long term data preservation. I had a smile on my face--IBM has used "storage management" to refer to these other aspects of storage since the 1980s!
The analyst felt the best tool to control growth is to "Delete" data that is no longer needed, but felt that nobody uses the Storage Resource Management (SRM) tools needed to make this viable. Until then, people will choose instead to archive emails and user files to less expensive media. The speaker also recommended looking into highly-scalable NAS offerings--such as IBM's Scale-Out File Services (SoFS), Exanet, Permabit, IBRIX, Isilon, and others--when fast access to files is worth the premium price over tape media. The speaker also made the distinction between "stub-based" archiving--such as IBM TSM Space Manager, Sun's SAM-FS, and EMC DiskXtender--and "stub-less" archiving accomplished through file virtualization that employs a global namespace--such as IBM Virtual File Manager (VFM), EMC RAINfinity or F5's ARX.
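The "stub-based" idea is simple enough to sketch in a few lines. The following is a toy model only--an assumption for illustration, not how TSM Space Manager, SAM-FS, or DiskXtender actually implement stubs: the file on primary disk is replaced by a tiny stub recording where the data went, and an access triggers a recall from the archive tier.

```python
# Toy sketch of "stub-based" archiving (simplified model for illustration;
# real HSM products use filesystem hooks, not plain text stubs).
import os

def archive(path: str, archive_dir: str) -> None:
    """Move file contents to the archive tier, leaving a stub behind."""
    target = os.path.join(archive_dir, os.path.basename(path))
    os.replace(path, target)               # data moves to the cheap tier
    with open(path, "w") as stub:
        stub.write(f"STUB -> {target}\n")  # stub points at the archive copy

def recall(path: str) -> bytes:
    """Return file contents, transparently recalling if it is only a stub."""
    with open(path, "rb") as f:
        head = f.read()
    if head.startswith(b"STUB -> "):
        target = head[len(b"STUB -> "):].strip().decode()
        with open(target, "rb") as f:
            return f.read()                # recall from the archive tier
    return head
```

The "stub-less" alternative avoids the per-file stub entirely: a virtualization layer owns the namespace and redirects each lookup to whichever tier currently holds the data.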
She made the distinction between archives and backups. If you are keeping backups longer than four weeks, they are not really backups, are they? These are really archives, but not as effective. Recent legal precedent no longer considers long-term backup tapes as valid archive tapes.
To deploy a new archive strategy, create a formal position of "e-archivist", choose the applications that will be archived, and focus on requirements first, rather than going out and buying compliance storage devices. Try to get users to pool their project data into one location, to make archiving easier. Try to have the storage admins offer a "menu" of options to Line-of-Business/Legal/Compliance teams that may not be familiar with subtle differences in storage technologies.
While I am familiar with many of these best practices already, I found it useful to see which competitive products line up with those we have already within IBM, and which new storage architectures others find most promising.
IBM introduces the eighth generation of Linear Tape Open (LTO) tape drive technology, with corresponding support in all of the IBM tape libraries.
Fellow blogger Jon Toigo, of Drunkendata.com fame, came to Tucson to interview Lee Jesionowski, Ed Childers, Calline Sanchez, and me about this. Check out the various segments on YouTube or his website.
The LTO-8 cartridges are not yet available, but when they are, they will hold 12 TB raw capacity, or 30 TB effective capacity at 2.5-to-1 compression ratio. The new drives are N-1 compatible to read/write LTO-7 cartridge media.
Previous generations also supported reading N-2 generation tapes; LTO-8 breaks from that tradition and will not support LTO-6 cartridges at all.
LTO-8 comes in both "Full Height" (FH) and Half-Height (HH) models. The FH models can transfer data at 360 MB/sec (or 900 MB/sec effective at 2.5-to-1 compression), and the HH models at 300 MB/sec (or 750 MB/sec effective at 2.5-to-1).
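The "effective" figures above are just the raw figures multiplied by the assumed 2.5-to-1 compression ratio, which you would only achieve on compressible data. A quick sanity check:

```python
# Effective LTO-8 capacity and throughput = raw figure x compression ratio.
# The 2.5:1 ratio is an assumption; actual ratios depend on the data.
COMPRESSION = 2.5

raw_capacity_tb = 12   # LTO-8 raw cartridge capacity
raw_fh_mbps = 360      # Full-Height drive, raw transfer rate
raw_hh_mbps = 300      # Half-Height drive, raw transfer rate

print(raw_capacity_tb * COMPRESSION)  # 30.0 TB effective
print(raw_fh_mbps * COMPRESSION)      # 900.0 MB/sec effective
print(raw_hh_mbps * COMPRESSION)      # 750.0 MB/sec effective
```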
LTO-8 supports IBM Spectrum Archive and the "Linear Tape File System" (LTFS) tape format for self-describing long-term retention of data.
Compliance storage has come under many names. For tape and optical media, we had "WORM" for Write-Once, Read-Many. For disk-based storage, we had "Fixed-Content" or "Content-Addressable Storage". For file systems, we had "Immutable Storage".
Fortunately, the clever folks who crafted the SEC Rule 17a-4 regulation came up with an umbrella term: "Non-Erasable, Non-Rewriteable" (NENR), which covers all storage media, from WORM tape and optical, to tamperproof flash, disk and cloud-based solutions.
The other major change is "Concentrated Dispersal" mode, or "CD mode" for short. Erasure Coding works best when data is dispersed across three or more sites. When this happens, you can lose all of the data at one site, and still have 100 percent access to all data from the other locations.
IBM's "Information Dispersal Algorithm", or IDA for short, scatters slices of data across many servers. This is great for high availability and performance, but it often meant that the minimum deployment was 500 TB or greater.
Not every organization is ready for such a large purchase. Some want to just [dip their toe in the water] with something smaller and less expensive. Well, IBM delivered!
The new CD mode means that instead of one slice per Slicestor node, you can pack lots of slices on each node. Each slice will be on distinct disk drives, for high availability.
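The core idea--losing one slice (or one site) without losing data--can be illustrated with the simplest possible erasure code. This toy uses plain XOR parity and is emphatically not IBM's actual IDA, which uses far more sophisticated coding and many more slices; it only shows why any single slice is expendable.

```python
# Toy 2-data + 1-parity erasure code: any ONE of the three slices can be
# lost and the original data is still fully recoverable. (Illustration
# only -- IBM's IDA uses stronger codes across many more slices/sites.)
def make_slices(data: bytes) -> list:
    half = len(data) // 2
    a, b = data[:half], data[half:half * 2]
    parity = bytes(x ^ y for x, y in zip(a, b))  # XOR parity slice
    return [a, b, parity]

def reconstruct(slices: list) -> bytes:
    a, b, parity = slices
    if a is None:                                # lost "a": rebuild as b ^ parity
        a = bytes(x ^ y for x, y in zip(b, parity))
    elif b is None:                              # lost "b": rebuild as a ^ parity
        b = bytes(x ^ y for x, y in zip(a, parity))
    return a + b

slices = make_slices(b"archive!")  # two 4-byte data slices + parity
slices[0] = None                   # simulate losing one node (or one site)
assert reconstruct(slices) == b"archive!"
```

In CD mode the same arithmetic applies; the slices are simply packed many-per-node (on distinct drives) instead of one-per-node, which is what brings the entry point down.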
Entry-level configurations now can be as little as 72-104 TB, across 1, 2 or 3 sites.
Twenty years ago, I flew to Atlanta for the semi-annual SHARE conference. I was a lead architect for DFSMS, the storage management software for mainframe servers. When I got to the hotel, I realized that I had forgotten to pack my saline solution for my contact lenses. I went to the hotel gift shop, and picked up the first bottle I found. I put my contacts in the solution and went to bed.
The next morning, I put on my contacts, got dressed, and participated in meetings. One of my colleagues noticed my eyes were quite red, and suggested I switch from contact lenses to glasses. I went back to my hotel room, saw to my horror that what I thought was saline solution was actually hydrogen peroxide intended for hard lenses. When I removed the lenses, all I could see was white light.
I managed to find my way to the elevator, and feel for the button with the star that indicated the lobby on the ground floor. I asked a hotel staffer to call me an ambulance, but instead, they put me in a cab, and sent me to Emory Hospital. On arrival, all I could do was hand over my wallet to my cabbie, and let him take out what he felt was fair, since I could not see him, the meter, or his license number.
After bumping my knees into dozens of cars in the parking lot, I finally made it to the ER, only to have the receptionist give me a form to fill out and a pen. At this point, I lost it. I gave her my wallet and said that any information she might need should be in there.
Thankfully, a doctor noticed this exchange, and took care of me right away. I had chemically burned off both corneas. He injected some green fluid into both eyeballs, and sent me off in a cab to the pharmacy. At least both eyes were bandaged in gauze, so people were kind enough to help me get to the counter to pick up my pain killers, Percocet.
The pharmacist provided me the pills, and warned me NOT to operate any heavy machinery under the influence of this medication. Seriously? I can't see, both eyes covered, and he tells me that?
I got back to the hotel, got ready for bed, took the pills and brushed my teeth. I woke up the next morning on the bathroom floor, still clutching the toothbrush, with vertical and horizontal lines across my right cheek made by the one-inch tiles of the bathroom floor. These pills really knocked me out.
That day, I had to present a full hour in front of hundreds of people. I had a colleague flip my transparencies for me, while I spoke to each one, my eyes still covered in gauze. That evening, I was one of the experts on the panel for a "Birds of a Feather", or BOF session, answering a variety of questions. People could see that I was blind, but I could still hear the questions, and I could still answer them as well.
If you are going to Edge 2013 in Las Vegas, please consider attending my BOF session on Security for PureSystems, System x and Storage products, scheduled for Thursday afternoon, June 13. I will be moderating a distinguished panel of experts to answer your questions! I have listed them here alphabetically:
Jack Arnold, US Federal. Jack has worked decades in the storage industry, and will provide insight into security issues related to the government.
Tom Benjamin, Development Manager for Key Lifecycle Management and Java Cryptography. Tom will bring his expertise in both TKLM and ISKLM for managing encryption keys, and how to communicate these between security and storage administrators.
Paul Bradshaw, Chief Storage Architect for Clouds. A research scientist from IBM's Almaden Research Lab, Paul will provide insight in how to deal with security issues related to private, hybrid and public cloud deployments.
Ajay Dholakia, Solution Center of Excellence. Ajay will cover server-side considerations for security deployments, including System x and PureSystems.
Jim Fisher, Advanced Technical Skills. Jim brings expertise related to deploying data-at-rest encryption.
Not sure what kind of questions to ask? Here is a series of Questions and Answers we had at a Storage event in 2011 that might give you a good idea: [2011 Storage Free-for-All].
The question is whether this is unique to these particular models, or whether it affects all blade servers because of their very nature and architecture. Stephen indicates that they also have HP C-class enclosures, but since those are still in test mode, he cannot comment on them.
I have no experience with any of HP's blade servers, but I have worked closely with our IBM BladeCenter team to help make sure that our storage, and our SAN equipment, work well together with the BladeCenter, and more importantly, that problems can be diagnosed effectively.
When I asked why people feel they need to know the inner workings of storage, the overwhelming response was to help diagnose problems. This could include problems in placing related data on a potential single point of failure, problems with performance, and problems communicating with 1-800-IBM-SERV.
So, if you have encountered problems diagnosing SAN problems with BladeCenter, or find setting up an IBM SAN with blade servers difficult in general, I would be interested in hearing what IBM can do to make the situation better.