This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years at IBM, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI Market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
International Technology Group [ITG] has just published a series of papers about IBM SmartCloud Virtual Storage Center (VSC) and SAN Volume Controller/Storwize storage hypervisor virtualization technology detailing the cost benefit advantages over EMC and VMware.
IBM delivers up to 72% lower storage TCO than EMC storage virtualization and management solutions in large enterprises ... and up to 35% lower storage TCO than VMware tools in mid-sized environments
Also, you can watch an interview with the study's author, International Technology Group Managing Director, Brian Jeffery, live from next week's IBM Edge Conference in Las Vegas. Brian will be interviewed on [TheCUBE by Wikibon] on Monday afternoon. Watch it live on May 19!
I will be at Edge next week. If you plan to be there, I would be glad to discuss these ITG findings with you and your clients in person.
I am not in Las Vegas this week for this year's event, but the sessions will be streamed live through [IBM GO].
IBM Systems Technical University - May 22-26, 2017 - Orlando, FL
IBM Systems Technical University is the evolution of a variety of other conferences related to servers, storage and software. It started out as the "IBM Storage Symposium", then added "System x" servers and was renamed "Storage and System x University", then dropped "System x" when IBM sold that business to Lenovo.
A few years ago, it was renamed "Edge", initially focused just on storage, but two years ago it was combined with System z mainframe servers and POWER Systems for IBM i and AIX platforms. It also covers software products that previously had their own conferences, like IBM Pulse or MaximoWorld.
Last year, the IBM Marketing team tried a daring experiment. Let's change "Edge" to be a "Cognitive Solutions and Cloud Platform" conference, with emphasis on IT Infrastructure.
The experiment failed. Not because IBM Systems don't support these new initiatives, but because the audience was more interested in hearing how IBM Systems help their current day-to-day business. As many attendees told me, "If we wanted to hear about Cognitive or Cloud, we have plenty of other conferences that cover that already!"
While 40 percent of IBM revenues are generated from Cognitive Solutions and Cloud Platform, the other 60 percent are traditional, on-premise, systems-of-record application workloads, the kind that business, non-profit groups, and government agencies have been using for the past few decades!
To address this need, IBM offered three-day "IBM Systems Technical University" events at various locations. Last year, I presented storage topics at events in Atlanta, Austin, Bogota, Boston, Chicago, Dubai, Nairobi, and São Paulo.
We will have several of those this year as well. The main one will be a full 5-day event, May 22-26, in Orlando Florida. I will be there presenting various sessions on storage!
IBM World of Watson - October 29-November 2, 2017 - Las Vegas, NV
This is a Cognitive Solutions and Cloud Platform conference, with an emphasis on Analytics and Database technologies.
I did not attend World of Watson, or WoW for short, last year, but it was an evolution of the conference previously called "IBM Insight". I am sure everything from DB2 and Open Source databases to Hadoop and Spark will be covered this year as well.
In writing this post, I realize that this year will be like a "Conference Sandwich". Cognitive-and-Cloud at the top and bottom, with all the meat, veggies and garnish in the middle!
Today, IBM announced a software/server/storage combo that out-performed both HP and Sun. Here is an excerpt from the [IBM Press Release]:
IBM today announced that its recently introduced E7100 Balanced Warehouse(TM), consisting of the IBM POWER6(TM) processor-based System p(TM) 570 server, the IBM System Storage(TM) DS4800 and DB2(R) Warehouse 9.5, is already lapping the field in performance. The new data warehousing solution is now ranked number one in both performance and in price/performance in the TPC-H benchmark:
2 x speed-up over HP system with Oracle 10g and equal number of cores;
3.17 x speed up over Sun with Oracle 10g and 38 percent price advantage;
A new world record by loading 10 terabytes (TB) data at six TB per hour (TB/hr).
"These latest benchmark results further prove IBM's strength and leadership in the business intelligence arena," said Scott Handy, vice president of marketing and strategy, IBM Power Systems. "The E7100 Balanced Warehouse is a complete data warehousing solution comprised of pre-tested, scalable and fully integrated system and storage components, designed to get customers up and running quickly to get to the real benefit of unprecedented business insight and intellect."
For those not familiar with the [IBM Balanced Warehouse], it is the productized version of DB2's ["Balanced Configuration Unit" or BCU] reference configuration. The IBM Balanced Warehouse presents a pre-tested, pre-configured solution for Business Intelligence (BI) applications. These come in the form of "building blocks" that can be combined to reach the size you need, with incremental growth as your business expands. Each building block expertly matches the CPU processor and RAM memory of the server with the appropriate I/O bus, cabling, and capacity of the disk system, resulting in optimal performance.
IBM DB2 software is designed to allow you to combine multiple building blocks into a single system image. This greatly simplifies your data warehouse deployment, and can help ensure success. For example, for a 50TB deployment, you can take a base 2TB building block, add 24 more, each with 2TB of disk capacity, and have a completely balanced environment. IBM clients have built systems over 300TB in this manner with these building blocks.
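The building-block sizing described above is simple arithmetic. Here is a minimal sketch of that calculation; the `building_blocks_needed` helper is a hypothetical illustration, not an IBM sizing tool, and real deployments would also account for growth headroom and RAID overhead:

```python
import math

def building_blocks_needed(target_tb: float, block_tb: float = 2.0) -> int:
    """Number of identical building blocks needed to reach a target
    usable capacity. Hypothetical helper for illustration only."""
    if target_tb <= 0 or block_tb <= 0:
        raise ValueError("capacities must be positive")
    # Round up: a partial block still requires a whole building block
    return math.ceil(target_tb / block_tb)

# The 50TB example from the text: one base 2TB block plus 24 more = 25 blocks
print(building_blocks_needed(50))
```

Running this prints `25`, matching the article's example of a base block plus 24 additional blocks.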
The IBM Balanced Warehouse is offered in several configurations:
The [C-class models] are designed for SMB customers, employing an IBM System x server with internal or direct attached EXP3000 disk.
The [D-class models] are the next step up, offering department-level data marts and data warehouse for larger deployments, employing an IBM System x server with EXP3000 or System Storage DS3400 entry level disk.
The [E-class models] represent our top-of-the-line configurations for our largest enterprise deployments. The [E6000] runs Linux on an IBM System x server with System Storage DS4800 disk. The [E7000] runs AIX on an IBM System p575 server with DS4800 disk. The new [E7100] mentioned above runs AIX on a POWER6-based IBM System p570 with DS4800 disk.
As I have mentioned before, in my post [Supermarkets and Specialty Shops], companies are looking for complete solutions, preferably from a single vendor like IBM, HP or Sun, rather than buying piecemeal components from different vendors and hoping the combined ["Frankenstein"] configuration meets business requirements.
The DS4800 is an obvious choice for this solution, providing an excellent balance of cost and performance, in a modular packaging that is ideal for the incremental growth design inherent in the IBM Balanced Warehouse philosophy. To learn more about this disk system, see the official [DS4800 website] for details, descriptions and specifications.
Wow! That can seem overwhelming. While the conference spans multiple hotels on the strip, I personally will be focusing my time at the [Mandalay Bay resort]. My session will be held at the Solutions Expo on Wednesday at 1:45pm. Here are the details:
YSS-1841 IBM Cloud Storage Options
This session will cover private and public cloud storage options, including flash, disk and tape, to address the different types of cloud storage requirements. It will also explain the use of Active File Management for local space management and global access to files, and support for file-and-sync.
Program: Core Curriculum Topic: Systems Hardware Sub-topic: Storage Systems & Software
To help attendees plan their week, InterConnect has a [Session Preview Tool]. I have already found over 40 sessions related to Storage that I am interested in attending!
Well it's Tuesday again, and you know what that means? IBM announcements! Many of the announcements were made by IBM Executives at the [IBM Pulse 2014 conference].
IBM BlueMix is the newest cloud offering from IBM, a Platform-as-a-Service (PaaS) offering based on the Cloud Foundry open source project that promises to deliver enterprise-level features and services that are easy to integrate into cloud applications.
This week, my fifth-line manager Tom Rosamilia, IBM Senior Vice President IBM Systems & Technology Group and Integrated Supply Chain, made two announcements at Pulse. First, in addition to x86-based servers, SoftLayer will also offer POWER-based servers to run AIX, IBM i and [Linux on POWER] applications.
Second, SoftLayer will support PureApplication Patterns of Expertise. What is a pattern of expertise? It can be as simple as a virtual machine encapsulated in [Open Virtual Format], to more dynamic architectures, packaged with required platform services, that are deployed and managed by the system according to a set of policies.
Patterns simplify and automate tasks across the lifecycle of the application. Customers and partners alike are [seeing significant reductions in cost and time] across the application lifecycle with the deployment of a PureApplication System.
Also, this week at Pulse, Robert LaBlanc, IBM Senior Vice President of Software and Cloud Solutions, announced [IBM plans to Acquire Cloudant] which offers an open, cloud Database-as-a-Service (DBaaS) that helps organizations simplify mobile, web app and big data development efforts.
When I introduced [SmartCloud Virtual Storage Center] back in October 2012, I mentioned that it was a great solution for large enterprise that have all of their disk behind SAN Volume Controller (SVC).
To reach smaller accounts, IBM has announced two new offerings:
IBM SmartCloud Virtual Storage Entry for customers that have less than 250TB of disk behind two or four SVC nodes. It is priced per terabyte, by the amount of capacity that is virtualized.
IBM SmartCloud Virtual Storage for Storwize Family for customers that have other Storwize family products (Storwize V7000 or V5000, for example). It is priced per the number of storage enclosures that are managed by the Storwize family hardware.
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of more of the Tuesday afternoon sessions:
IBM CIOs and Storage
Barry Becker, IBM Manager of Global Strategic Outsourcing Enablement for Data Center Services, presented this session on Storage Infrastructure Optimization (SIO).
A bit of context might help. I started my career in DFHSM, which moved data from disk to tape to reduce storage costs. Over the years, I would visit clients, analyze their disk and tape environment, and provide a set of recommendations on how to run their operations better. In 2004, this was formalized into week-long "Information Lifecycle Management (ILM) Assessments", and I spent 18 months in the field training a group of folks on how to perform them. The IBM Global Technology Services team has taken a cross-brand approach, expanding this ILM approach to include evaluations of the application workloads and data types. These SIO studies take 3-4 weeks to complete.
Over the next decade, there will only be 50 percent more IT professionals than we have today, so new approaches will be needed for governance and automation to deal with the explosive growth of information.
SIO deals with both the demand and supply of data growth in five specific areas:
Data reclamation, rationalization and planning
Virtualization and tiering
Backup, business continuity and disaster recovery
Storage process and governance
Archive, Retention and Compliance
The process involves gathering data and interviewing business, financial and technical stakeholders, such as storage administrators and application owners. The interviews take less than one hour per person.
Over the past two years, the SIO team has uncovered disturbing trends. A big part of the problem is that 70 percent of data stored on disk has not been accessed in the past 90 days, and is unlikely to be accessed at all in the near future, so it would probably be better stored on lower-cost storage tiers.
Storage Resource Management (SRM) is also a mess, with over 85 percent of clients having serious reporting issues. Even rudimentary "showback" systems that report what each individual, group or department is using have resulted in significant improvement.
Archive is not universally implemented mostly because retention requirements are often misunderstood. Barry attributed this to lack of collaboration between storage IT personnel, compliance officers, and application owners. A "service catalog" that identifies specific storage and data types can help address many of these concerns.
The results were impressive. Clients that follow SIO recommendations save on average 20 to 25 percent after one year, and 50 percent after three to five years. Implementing storage virtualization averaged 22 percent lower CAPEX costs. Those that implemented a "service catalog" saved on average $1.9 million US dollars. Internally, IBM's own operations have saved $13 million dollars implementing these recommendations over the past three years.
Reshaping Storage for Virtualization and Big Data
The two analysts presenting this topic acknowledged there is no downturn on the demand for storage. To address this, they recommend companies identify storage inefficiencies, develop better forecasting methodologies, implement ILM, and follow vendor management best practices during acquisition and outsourcing.
To deal with new challenges like virtualization and Big Data, companies must decide to keep, replace or supplement their SRM tools, and build a scalable infrastructure.
One suggestion for getting upper management to accept new technologies like data deduplication, thin provisioning, and compression is to refer to them as "Green" technologies, as they help reduce energy costs as well. Thin provisioning can help drive storage utilization as high as you dare; most people are comfortable with 60 to 70 percent.
A poll of the audience found that top three initiatives for 2012 are to implement data deduplication, 10Gb Ethernet, and Solid-State drives (SSD).
The analysts explained that there are two different types of cloud storage. The first kind is storage "for" the cloud, used for cloud compute instances (aka Virtual Machines), such as Amazon EBS for EC2. The second kind is storage "as" the cloud, storage as a data service, such as Amazon S3, Azure Blob and AT&T Synaptic.
The analysts feel that cloud storage deployments will be mostly private clouds, bursting as needed to public cloud storage. This creates the need for a concept called "Cloud Storage Gateways" that manage this hybrid of some local storage and some remote storage. IBM's SONAS Active Cloud Engine provides long-distance caching in this manner. Other smaller startups include cTera, Nasuni, Panzura, Riverbed, StorSimple, and TwinStrata.
A variation of this is "storage gateways" for backup and archive providers, used as a staging area for data to be subsequently sent on to the remote location.
New projects like virtualization, Cloud computing and Big Data are giving companies a new opportunity to re-evaluate their strategies for storage, process and governance.
Marking the occasion, here is an important letter from our Vice President, Laura Guio:
May 6, 2014
To whom it may concern:
Subject: ProtecTIER Development Update:
This year marks the sixth anniversary of IBM's acquisition of Diligent Technology. Over the past six years IBM has emerged as a leader in enterprise class data deduplication. Our highly scalable, dual node hardware redundancy and gateway design are unique characteristics in the industry. IBM fundamentally believes in the importance of cost saving data deduplication technology and continues to enhance our solution, improve value and increase investment protection for our installed base.
First, it is important to note what IBM has done most recently. IBM is among the first to integrate flash technology along with deduplication to boost performance and lower cost. Integration of the IBM FlashSystem 840 for metadata was completed the day the system was publicly announced. The speed of technology integration is a result of our flexible gateway design, which simplifies technology adoption. It also is enabled by our global development team providing a 24x7 system design, product test and integration environment.
Secondly, IBM has recently released ProtecTIER Mainframe Edition which enables the same enterprise class deduplication capability now for IBM System z. Another distinctive feature of ProtecTIER is its ability to sustain high throughput for both read and write operations. Most deduplication methodologies have an inherent read performance penalty. Since mainframe tape operations are much more read intensive than distributed systems, we were one of the first to market with a practical deduplication offering for all mainframe tape applications.
That's just what we've done getting out of the starting blocks in 2014. Our development team continues to enhance ProtecTIER. We're also working on refreshing the entire ProtecTIER product line with new model enhancements. A new gateway design is underway which will improve performance of the existing DD5. We expect this to be available as an upgrade, providing investment protection for existing ProtecTIER clients. The SM2 product family is also being redesigned to extend its capacity range. Along with hardware changes, we will widen the disk support matrix offering enhanced flexibility and new levels of price performance.*1*
We expect 2014 to be a busy year for IBM deduplication. We have development facilities around the world in Europe, North America, Central America and Asia, working on ProtecTIER. IBM continues to market, sell, and support ProtecTIER as our strategic offering for cost-reducing deduplication technology. Any suggestion that ProtecTIER is fading away is wishful thinking by our competitors. We are working to expand our markets as we have demonstrated by our recent introduction of ProtecTIER into the mainframe. Furthermore, we are looking to expand the use cases for ProtecTIER, which can now be attached as a NAS file system, to other areas besides pure backup. We're excited about what we are delivering today and where we can provide leadership by leveraging deduplication for customer storage environments.
Vice President, Business Line Executive Storage Systems
IBM Systems and Technology Group
*1*: IBM's statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM's sole discretion. The development, release, and timing of any future features or functionality described for our products remains at our sole discretion.
To learn more about IBM ProtecTIER, consider attending the [IBM Edge conference], May 19-23, 2014 at the Venetian Hotel in Las Vegas. I'll be there to explain data deduplication technology as part of my "Data Footprint Reduction" presentation!
IBM Master Inventor, Senior IT Architect, and Event Content Manager
Last week marked the 50th anniversary of landing a human on the moon. Over 4,000 IBM employees were involved. So much has been written about this, that I thought it would be better to point you to some articles and interviews I found of interest.
(While most people focus on the single day, July 20, when Neil Armstrong and Edwin "Buzz" Aldrin stepped foot on the moon, the entire journey lasted a week, from takeoff on July 16 to splashdown on July 24.)
"The Real-Time Computer Complex (RTCC) in Houston, Texas, was an IBM computing and data processing system at NASA’s Manned Spacecraft Center—now called the Lyndon B. Johnson Space Center—that collected, processed and sent to Mission Control information to direct every phase of an Apollo mission. The RTCC was so fast, there was virtually no time between receiving and solving a computing problem. Initially, IBM 7094-11 computers were used in the RTCC. Later, IBM System/360 Model 75J mainframes, and peripheral storage and processing equipment were used."
That peripheral storage consisted of IBM tape and disk systems, of course. IBM developed tape systems in 1952 and disk systems in 1956, in time for them to be used for the Apollo missions.
These were the modern forerunners of today's zSeries systems as far as their very basic systems architecture is concerned. (The IBM Z mainframe is still backward compatible with software written for the S/360, S/370, and the S/390 through machine virtualization and software emulation.)
IBM's legacy in the Space Program lives on in not just its continued involvement in NASA's current and future efforts with its computer systems and support services, but also in some of the software that was built for the Apollo program.
The IMS suite of hierarchical database management applications, which is still an important part of IBM's mainframe software portfolio, was originally designed by IBM in conjunction with Rockwell and Caterpillar so that the huge Bill of Materials (BOM) for the Saturn V, composed of hundreds of thousands of parts, could be inventoried and managed.
Chuck Yeager, test pilot who broke the sound barrier.
Alan Shepard, the first American to reach space. (Of course, Russia takes the prize of first man to orbit the earth, with Yuri Gagarin's flight back in 1961.) Later, Alan would be the first man to hit a golf ball on the moon.
Michael Collins, the third astronaut of the Apollo 11 mission. While Neil and Buzz were down on the surface of the moon, Michael kept the command module in orbit, effectively "driving around the block" to pick them up when they were ready to head home.
Chris Hadfield, a modern-day astronaut, famous for his cover of the [Space Oddity] song.
Real-time images from the moon were sent in 10-frames-per-second format to three places on earth, two in Australia and one in California. Television cameras pointed at those monitors were then used for the live feed to the rest of the world. The live feed was also recorded in Houston, Texas, to capture the best parts from each of the three sources in case there were problems with the live feed. However, since there were no problems, these video tapes were never used.
Years later, Gary George, a NASA intern, purchased a batch of surplus video tapes for just $218, which included three of the video tapes from Houston of the Apollo 11 landing. Today, they happen to be the only remaining recordings of the event, and [were sold last week for $1.82 million at Sotheby's auction]!
This whole episode exposes the [Digital Dark Age]. Created on perishable plastic, film decays within years if not properly stored. According to the [National Film Preservation Foundation], the losses are high: the Library of Congress has documented that only 20 percent of U.S. feature films from the 1910s and 1920s survive in complete form in American archives, and of the American features produced before 1950, about half still exist.
To learn more about IBM's impressive capabilities to pull off projects like this, or just how to store data for long-term retention, attend one of the [IBM Systems Technical University] events we have coming up in Bangkok, Sao Paulo, Johannesburg, Las Vegas, Sydney, and Prague.
This year marks the 10th anniversary of IBM's introduction of LTO tape technology. IBM is a member of the Linear Tape Open consortium, which consists of IBM, HP and Quantum, referred to as "Technology Provider Companies" or TPCs. In an earlier job role, I was the "portfolio manager" for both LTO and Enterprise tape product lines.
Today, we held a celebration in Tucson, with cake and refreshments.
IBM Executives Doug Balog, IBM VP of Storage Platform, and Sanjay Tripathi, the new IBM Director and Business Line Executive for Tape, VTL and Archive systems, presented the successes of LTO tape over the past 10 years.
To date, over 3.5 million LTO tape drives and over 150 million LTO tape media cartridges have been shipped, which is a testament to the remarkable marketplace acceptance of the technology.
In honor of this event, I decided to interview Bruce Master, IBM Senior Program Manager for Data Protection Systems, about this 10th anniversary.
10 years of LTO technology is a great milestone. How is this especially significant to IBM and its clients?
According to IDC data, IBM has held the #1 position in market share for total worldwide branded tape revenue for over 7 years, and IBM is still #1 in branded midrange tape revenue, which includes the LTO tape technologies. IBM was the first drive manufacturer to deliver LTO-1 drives, back in September 2000, the first to deliver tape drive encryption to the marketplace on LTO-4 drives, and is now shipping LTO generation 5 drives and libraries. IBM is the author of the new Linear Tape File System (LTFS) specification that has been adopted by the TPCs. This file system revolutionizes how tape can be used, as if it were a giant 1.5 terabyte removable USB memory stick, with the capability to be accessed with directory tree structures and drag-and-drop functionality. With LTO's built-in real-time compression, a single tape cartridge can hold up to 3TB of data.
The Linear Tape File System has been getting a lot of attention. Where can we learn more about it?
Why is tape still a critical part of a storage infrastructure?
Tape is low cost and provides critical off-line portable storage to help protect data from attacks that can occur with on-line data. For instance, on-line data is at risk of attack from a virus, hacker, system error, disgruntled employee, and more. Since tape is off-line, not accessible by the system, it protects against these forms of corruption. LTO technology also provides write-once read-many (WORM) tape media to help address compliance issues that specify non-erasable, non-rewriteable (NENR) storage, hardware encryption to secure data, as well as a low cost long term archive media. When data cools off, or becomes infrequently accessed, why keep it on spinning disk? Move it to tape where it is much greener and lower cost. A tape in a slot on a shelf consumes minimal energy.
So tape is not dead?
Ha! Far from it. Seems like disk-only "specialty shop" storage vendors that don’t have tape in their sales portfolio are the ones that propagate that myth. In reality, storage managers are tasked with meeting complex objectives for performance, compliance, security, data protection, archive and total cost of ownership. Optimally, a blend of disk and tape in a tiered infrastructure can best address these objectives. You can’t build a house with just a hammer. IBM has a rich tool kit of storage offerings including disk, tape, software, services and deduplication technologies to help clients address their needs.
Do you have an example of a client who was saved by tape?
Yes indeed. Estes Express, a large trucking firm, was hit by a hurricane that flooded their data center and destroyed all systems. Fortunately the company survived because the night before they had backed up all data on to IBM tape and moved the cartridges offsite! The company survived and has since implemented a best practices data protection strategy with a combination of disk-to-disk-to-tape (D2D2T) using LTO tape at the primary site, and a remote global mirrored site that is also backed up to LTO tape.
So tape saved the day. What is the outlook for tape innovation in the future?
The future is bright for tape. Earlier this year, IBM and Fujifilm were able to [demonstrate a tape density achievement] that could enable a native 35TB tape cartridge capacity! This shows a long roadmap ahead for tape and a continued good night’s sleep for storage managers knowing that their precious data will be safe.
Of course, LTO tape is just one of the many reasons IBM is a successful and profitable leader in the IT storage industry. Doug Balog talked about his experiences in London for the [October 7th launch] of IBM DS8800, Storwize V7000 and SAN Volume Controller 6.1. Sanjay Tripathi showed recent successes with IBM's ProtecTIER Data Deduplication Solution and Information Archive products.
I would like to thank Bruce Master for his time in completing this interview. To learn more about IBM tape and storage offerings, visit [ibm.com/storage].