Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Tony Pearson is a Master Inventor and Senior Software Engineer for the IBM Storage product line at the
IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is
the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
The blog team is working on re-directs for those who don't see this in time. Depending on which RSS feed reader you use, you may need to unsubscribe/re-subscribe to re-activate. You can update the URL for the feed to one of these:
Continuing this week in Las Vegas, we had a great set of sessions today.
Fibre Channel Overview
I like the manner in which Jim Robinson presented this "basics" session on how Fibre Channel works, why it is spelled "Fibre" not "Fiber", and how all the different layers work in the protocol.
IBM Virtualization Engine TS7700 series
Jim Fisher from the IBM Tucson lab presented the TS7700 series, which replaces our Virtual Tape Server (VTS). He had performance numbers showing that it is faster across various measurements than the B20 model of the VTS. It is supported on the z/OS, z/VM, z/VSE, TPF and z/TPF operating systems.
IBM E-mail Archiving and Storage solution
Ron Henkhaus provided an overview of IBM's E-mail Archive and Storage appliance. The solution combines an IBM BladeCenter server blade, DS4200 series with SATA disk, and pre-installed software: IBM Content Manager, IBM Records Manager, IBM CommonStore for Lotus Domino and Microsoft Exchange, and IBM System Storage Archive Manager. Services are included to get it connected to your e-mail environment.
Lee La Frese from our Tucson performance lab presented various performance features of the IBM System Storage DS8000 series, and how they compare to the competition.
First, some interesting statistics.
Back in 2002, the average high-end Enterprise Storage Server (ESS) model F20 was configured for only 4 Terabytes (TB). In 2004, the average ESS was up to 12 TB. Today, the average DS8100 is 17.4 TB and the average DS8300 is 41.5 TB.
51 percent of DS8000 series systems are configured for FCP only (Linux, UNIX, Windows, i5/OS), 35 percent for FICON only (System z mainframe), and 14 percent are mixed with both.
Average I/O density has stabilized at about 0.6 IOPS per GB. This means that for every TB of business data, you can expect most applications to issue about 600 Input/Output requests per second.
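This I/O density rule of thumb is simple multiplication, and you can use it to size a workload before ever benchmarking it. A minimal sketch below, where the function name is my own for illustration and the 0.6 IOPS/GB figure is the average cited in the session, not a guarantee for any particular workload:

```python
IOPS_PER_GB = 0.6  # average I/O density cited in the session

def expected_iops(capacity_gb: float, density: float = IOPS_PER_GB) -> float:
    """Estimate sustained I/O requests per second for a given capacity
    of business data, using a constant I/O density rule of thumb."""
    return capacity_gb * density

# 1 TB (1000 GB) of business data -> about 600 IOPS
print(expected_iops(1000))
# The average 17.4 TB DS8100 -> about 10,440 IOPS
print(expected_iops(17400))
```

Your own applications may run hotter or colder than 0.6 IOPS/GB, so measure before committing to a configuration.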
While IBM SAN Volume Controller has the fastest SPC-1 and SPC-2 benchmarks, the DS8000 also has good results. Looking at just the monolithic "scale-up" systems, the DS8000 has the fastest SPC-1, and second place for SPC-2.
Compared against the EMC DMX-3, the IBM DS8000 series has superior performance. For example, comparing 2Gbps port performance on each, the DMX-3 is able to do 20 IOPS per port, compared to the DS8000 with 38 IOPS per port. Compared against the HDS USP, the response time at 60,000 IOPS averaged 10.5 milliseconds (msec) for HDS, versus less than 6.5 msec for the IBM DS8000.
There are some unique features of the DS8000 to optimize performance. Two are Adaptive Multi-stream Prefetching (AMP), which helps improve processing of database queries, and HyperPAV, which helps on mainframe workloads.
For FATA disks, performance of sequential reads and writes is only 20 percent less than 15K RPM FC disks, but a whopping 50 percent less for random access. Consider using FATA for audio/video streaming, surveillance data, seismic recordings, and medical imaging.
Comparing 146GB 10K versus 300GB 15K drives from a capacity perspective was interesting. 37TB of 300GB 15K drives had 20 percent better response time, but 25 percent less maximum throughput, than 37TB of 146GB drives. Depending on your workload, this can help decide which you choose.
Lee also covered RAID rebuild performance. When an individual HDD fails that is part of a RAID group, the DS8000 performs a rebuild onto a spare drive. A RAID-5 rebuild is processed at 52 MB/sec, compared to RAID-10 at 56 MB/sec. Rebuild processing is low priority, so any other workload will take higher priority to avoid impacting application performance. Compared to EMC, the IBM DS8000 can rebuild a RAID-5 73GB 15K RPM drive in only 24 minutes, but it takes 37 minutes to do this on a DMX-3. That is 13 minutes of additional exposure where a second drive failure might cause you to lose all your data in that RAID group!
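The rebuild times above follow directly from drive size and rebuild rate. Here is a back-of-the-envelope sketch (my own helper, not an IBM tool) that assumes a steady rebuild rate and 1 GB = 1000 MB; real rebuilds run at low priority, so production I/O will stretch these times out:

```python
def rebuild_minutes(drive_gb: float, rate_mb_per_s: float) -> float:
    """Minutes to copy an entire drive at a constant rebuild rate."""
    return (drive_gb * 1000.0) / rate_mb_per_s / 60.0

# A 73 GB drive at the quoted RAID-5 rate of 52 MB/sec comes out
# close to the 24-minute DS8000 figure cited above.
print(round(rebuild_minutes(73, 52), 1))
```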
N series ILM and Business Continuity
James Goodwin from our Advanced Technical Support team presented IBM System Storage N series features that relate to ILM and Business Continuity. He covered features like SnapShot, SnapLock, SnapVault and LockVault.
I have arrived safely in Las Vegas for the IBM System Storage and Storage Networking Symposium. This event is held once every year. The gold sponsors were: Brocade, Cisco, Finisar, Servergraph, and VMware. Our silver sponsor was QLogic.
I presented IBM's System Storage strategy and an overview of our product line. For those who missed it, our strategy is focused on helping customers in four key areas:
Optimize IT - to simplify and automate your IT operations and optimize performance and functionality, through server/storage synergies, storage virtualization, and integrated storage infrastructure management.
Leverage Information - to enable a single view of trusted business information through data sharing, and to get the most value from information through Information Lifecycle Management (ILM).
Mitigate Risk - to comply with security and regulatory requirements, and keep your business running with a complete set of business continuity solutions. IBM offers a range of non-erasable, non-rewriteable storage, encryption on disk and tape, and support for IT Infrastructure Library (ITIL) service management disciplines.
Enable Business Flexibility - to provide scalable solutions and protect your IT investment through the use of open industry standards like Storage Networking Industry Association (SNIA) Storage Management Initiative Specification (SMI-S). IBM offers scalability in three dimensions: Scale-up, Scale-out, and Scale-within.
IBM has a broad storage portfolio, in seven offering categories:
Disk Systems, including our SAN Volume Controller, DS family, and N series.
Tape Systems, including tape drives, libraries and virtualization.
Storage Networking, a complete set of switches, directors and routers
Infrastructure Management, featuring the IBM TotalStorage Productivity Center software
Business Continuity, advanced copy services and the software to manage them
Lifecycle and Retention, our non-erasable, non-rewriteable storage including the DR550, N series with SnapLock, WORM tape support, Grid Archive Manager, and our Grid Medical Archive Solution (GMAS)
Storage Services, everything from consulting, design and deployment to outsourcing and hosting.
I could talk all day on this, but given that the room was packed, every seat taken and the rest of the audience standing along the walls, I had to keep it down to one hour.
SAN Volume Controller Overview
I presented an overview of the IBM System Storage SAN Volume Controller (SVC), IBM's flagship disk virtualization product. Rather than giving a long laundry list of features and benefits, I focused on the five that matter most:
Reduces the cost and complexity of managing storage, especially for mixed storage environments
Simplifies Business Continuity through non-disruptive data migration and advanced copy services
Improves storage utilization, getting more value from the storage hardware you already have
Enhances personnel productivity, empowering storage administrators to get their job done
Delivers high availability and performance
SAN Volume Controller - Customer Success Stories
A good part of this conference is presented by non-IBMers, including Business Partners and clients sharing their experiences. In this session, we had two speakers share their experiences with SVC.
David Snyder keeps over 80 web sites online and available. His digital media technologies team uses SVC to make their storage administration easier, and ensure high availability for web site content creation and publishing.
Mark Prybylski manages storage at his company, a financial bank. His storage management team uses SVC Global Mirror, which provides asynchronous disk mirroring between different types of disk, as part of their Business Continuity/Disaster Recovery plan.
The last session I attended was "Storage ... to Optimize your ECM Deployments" by Jerry Bower, now working for IBM as part of our recent acquisition of FileNet. ECM stands for Enterprise Content Management, and IBM is the market leader in this space. Jerry gave a great overview of the IBM Content Manager software suite, our newly acquired FileNet portfolio, and the storage supported.
After the sessions, there was a reception at the Solution Center with dozens of exhibitor booths. For example, Optica Technologies had their PRIZM products, which are able to connect FICON servers to ESCON storage devices.
I am back at "the Office" for a single day today. This happens often enough that I need a name for it. Air Force pilots who practice landings and take-offs call them "Touch and Go", but I think I need something better. If you can think of a better phrase, let me know.
This week, I was in Hartford, CT, Somers, NY and our Corporate Headquarters in Armonk, in a variety of meetings, some with editors of magazines, others with IBMers I have only spoken to over the phone and finally got a chance to meet face to face.
I got back to Tucson last night, had meetings this morning in Second Life, then presented "Information Lifecycle Management" in Spanish to a group of customers from Mexico, Chile, and Brazil. We have a great Tucson Executive Briefing Center, and plenty of foreign-language speakers to draw on from among our local employees here at the lab site.
Sunday, I leave for Las Vegas for our upcoming IBM Storage and Storage Networking Symposium. We will cover the latest in our disk, tape, storage networking and related software. Do you have your tickets? If you plan to attend, and want to meet up with me, let me know.
Last week, a writer for a magazine contacted us at IBM to confirm a quote that writing a Terabyte (TB) on disk saves 50,000 trees. I explained that this was cited from UC Berkeley's famous How Much Information? 2003 study.
To be fair, the USA Today article explains that AT&T also offers "summary billing" as well as "on-line billing", but apparently neither of these is the default choice. I can understand that phone companies send out bills on paper because not everyone who has a phone has internet access, but in the case of its iPhone customers, internet access is in the palm of your hands! Since all iPhone customers have internet access, and AT&T knows which customers are using an iPhone, it would make sense for either on-line billing or summary billing to be the default choice, and let only those that hate trees explicitly request the full billing option.
Sending a box of 300 pages of printed paper is expensive, both for the sender and the recipient. This information could have been shipped less expensively on computer media, a single floppy diskette or CD-ROM for example. For those who prefer getting this level of detail, a searchable digitized version might be more useful to the consumer.
Which brings me to the concept of Information Lifecycle Management (ILM). You can read my recent posts on ILM by clicking the Lifecycle tab on the right panel, or my now infamous post from last year about ILM for my iPod.
His recollection of the history and evolution of ILM fairly matches mine:
The phrase "Information Lifecycle Management" was originally coined by StorageTek in the early 1990s as a way to sell its tape systems into mainframe environments. Automated tape libraries eliminated most if not all of the concerns that disk-only vendors tout as the problem with manual tape. I began my IBM career on a product now called DFSMShsm, which specifically moved data from disk to tape when it no longer needed the service level of disk. IBM had been delivering ILM offerings since the 1970s, so while StorageTek can't claim to have invented the concept, we give them credit for giving it a catchy phrase.
EMC then started using the phrase four years ago in its marketing to sell its disk systems, including slower less-expensive SATA disk. The ILM concept helped EMC provide context for the many acquisitions of smaller companies that filled gaps in the EMC portfolio. Question: Why did EMC acquire company X? Answer: To be more like IBM and broaden its ILM solution portfolio.
Information Lifecycle Management is comprised of the policies, processes, practices, and tools used to align the business value of information with the most appropriate and cost effective IT infrastructure from the time information is conceived through its final disposition. Information is aligned with business requirements through management policies and service levels associated with applications, metadata, and data.
Whitepapers and other materials you might read from IBM, EMC, Sun/StorageTek, HP and others will all pretty much tell you what ILM is, consistent with this SNIA definition, why it is good for most companies, and how it is not just about buying disk and tape hardware. Software, services, and some discipline are needed to complete the implementation.
While the SNIA definition provides a vendor-independent platform to start the conversation, it can be intimidating to some, and is difficult to memorize word for word. When I am briefing clients, especially high-level executives, they often ask for ILM to be explained in simpler terms. My simplified version is:
Information starts its life captured or entered as an "asset" ...
This asset can sometimes provide competitive advantage, or is just something needed for daily operations. Digital assets vary in business value in much the same way that other physical assets for a company might. Some assets might be declared a "necessary evil" like laptops, but are tracked to the n'th degree to ensure they are not lost, stolen or taken out of the building. Other assets are declared "strategically important" but are readily discarded, or at least allowed to walk out the door each evening.
... then transitions into becoming just an "expense" ...
After 30-60 days, many of the pieces of information are kept around for a variety of reasons. However, if it isn't needed for daily operations, you might save some money moving it to less expensive storage media, through less expensive SAN or LAN network gear, via less expensive host application servers. If you don't need instant access, then perhaps the 30 seconds or so to fetch it from much-less-expensive tape in an automated tape library could be a reasonable business trade-off.
... and ends up as a "liability".
Keeping data around too long can be a problem. In some cases it is incriminating; in other cases, just having too much data clogs up your datacenter arteries. If not handled properly within privacy guidelines, data potentially exposes sensitive personal or financial information of your employees and clients. Most regulations require certain data to be kept, in a manner protected against unexpected loss, unethical tampering, and unauthorized access, for a specific amount of time, after which it can be destroyed, deleted or shredded.
So ILM is not just a good idea to save a company money, it can keep them out of the court room, as well as help save the environment and not kill so many trees. Now that 100 percent of iPhone customers have internet access, and a good number of non-iPhone customers have internet access at home, work, school or public library, it makes sense for companies to ask people to "opt-in" to getting their statements on paper, rather than forcing them to "opt-out".
Despite this, or perhaps because of this, over 30 percent of IBM's Linux server revenue is on non-x86 platforms, avoiding the XenSource vs. VMware decision altogether. Both System z (traditional mainframe servers) and System p (traditional UNIX servers) are able to run many Linux images in a fully virtualized manner, without VMware or XenSource.
Philip Rosedale, chief executive of Linden Lab, which produced the Second Life virtual reality environment, said Second Life and Facebook are popular because they give people a new environment to interact in that they are comfortable with.
Of course I have blogged for months now on my involvement in Second Life, and how IBM is investing in this platform for business purposes. Recently, IBM made news for publishing its Code of Conduct, a set of guidelines on how to run your avatar in virtual worlds, including Second Life. IBM recognizes the business potential of virtual worlds, and has formed the "3D Internet" group exploring the possibilities. Over 5000 IBM employees now use Second Life on a regular basis.
I was surprised to learn that there were over 23,000 IBMers already on Facebook. I used to be on LinkedIn, but found Facebook to have more IBMers and have made the switch. Recently, we were told that these 23,000 IBMers spend 19 minutes, on average, per day visiting Facebook pages. Nobody asked me how much time I spend every day on Facebook, but with over 350,000 employees in the company, I am sure some have ways to track the lives of others.
Both of these count as adding more "FUN" into the workplace, which everyone should strive for. It is also good to know that the skills you develop using Second Life or Facebook can carry over to your next job role or your next employer. The number-one question I get from new colleagues when I mention either of these exciting new ways to communicate and collaborate is: "But how is this related to business?"
Second Life is obvious: a new, innovative way to hold meetings with colleagues, Business Partners and clients is going to have business value. Meetings in Second Life help you focus on what is being discussed, versus a plain telephone call where your eyes may wander to other things in your view. Of course nothing beats the effectiveness of face-to-face meetings, but Second Life offers a more energy-efficient alternative than traveling to other cities or countries.
Stephen over at RupturedMonkey discusses the challenges of recruiting storage administrators:
There has been a Storage Admin job advertised for many months but no one wants it. Why? It's offering VERY good money but the word has got around the company has poor management practices and most people don't last for more than 6 months. So, with the shortage of good SAN people, good money and conditions, what can that company do to recruit someone? ...
This leads me to the thought that has anyone ever thought about the standards that storage administrators should follow? Can an employer look up a web site to find questions to ask prospective employees? More often than not, they are recruiting because the previous one left so how can companies know what they are getting.
There is actually a great standard called Information Technology Infrastructure Library (ITIL) that applies not just to storage administrators, but other IT personnel such as network administrators and server administrators. Here's a quick web-site about ITIL History:
ITIL History can be traced back to the late 1980’s when the British government determined that the level of IT service quality provided to them was not sufficient enough. The Central Computer and Telecommunications Agency (CCTA), now called the Office of Government Commerce (OGC), was tasked with developing a framework for efficient and financially responsible use of IT resources within the British government and the private sector.
The goal was to develop an approach that would be vendor-independent and applicable to organizations with differing technical and business needs. This resulted in the creation of the ITIL.
This standard spread from the UK to other governments in Europe, and is now being adopted worldwide by government agencies, non-profit organizations and commercial enterprises. IBM, of course, has been involved along the way, encouraging this set of best practices to take hold.
ITIL provides a common vocabulary that puts everyone in the IT industry on the same page, with the ultimate goal of helping companies run their IT organizations more efficiently.
ITIL provides recommendations, or best practices, for managing the way IT provides services to the rest of the organization, in the same way you would the rest of your business, with a defined set of processes.
While ITIL does a great job of describing what needs to be done, it doesn’t describe how to get it done. It doesn’t tell you how to take those best practices and implement them with real-life tools and technology. It’s not prescriptive.
The general process is now referred to as "IT Service Management", and the seven ITIL books are managed by the IT Service Management Forum (itSMF).
ITIL is vendor-independent. You can learn ITIL disciplines at one IT shop, and carry those skills with you when you go to another IT shop that has completely different gear. A common vocabulary would allow employers to post jobs in a consistent manner, and ask questions to those interviewing for the job. You can be ITIL-trained, and even ITIL-certified. IBM offers this training.
Of course, specific skills on how to use specific software to configure storage devices, request change control approvals, or define SAN zones, are useful, but often can be picked up on the job, reading the vendor manuals on the specifics. Of course, you can use IBM TotalStorage Productivity Center, which would allow someone to manage a variety of disk, tape and SAN fabric gear from one interface, greatly reducing the learning curve.
Some people find it surprising that it is often more cost-effective, and power-efficient, to run workloads on mainframe logical partitions (LPARs) than a stack of x86 servers running VMware.
Perhaps they won't be surprised any more. Here is an article in eWeek that explains how IBM is reducing energy costs 80% by consolidating 3,900 rack-optimized servers to 33 IBM System z mainframe servers, running Linux, in its own data centers. Since 1997, IBM has consolidated its 155 strategic worldwide data center locations down to just seven.
I am very pleased that IBM has invested heavily into Linux, with support across servers, storage, software and services. Linux is allowing IBM to deliver clever, innovative solutions that may not be possible with other operating systems. If you are in storage, you should consider becoming more knowledgeable in Linux.
The older systems won't just end up in a landfill somewhere. Instead, the details are spelled out in the IBM Press Release:
As part of the effort to protect the environment, IBM Global Asset Recovery Services, the refurbishment and recycling unit of IBM, will process and properly dispose of the 3,900 reclaimed systems. Newer units will be refurbished and resold through IBM's sales force and partner network, while older systems will be harvested for parts or sold for scrap. Prior to disposition, the machines will be scrubbed of all sensitive data. Any unusable e-waste will be properly disposed following environmentally compliant processes perfected over 20 years of leading environmental skill and experience in the area of IT asset disposition.
Whereas other vendors might think that some operational improvements will be enough, such as switching to higher-capacity SATA drives, or virtualizing x86 servers, IBM recognizes that sometimes more fundamental changes are required to effect real changes and real results.
I would like to welcome IBMer Barry Whyte to the blogosphere!
From his bio:
Barry Whyte is a 'Master Inventor' working in the Systems & Technology Group based in IBM Hursley, UK. Barry primarily works on the IBM SAN Volume Controller virtualization appliance. Barry graduated from The University of Glasgow in 1996 with a B.Sc. (Hons) in Computing Science. In his 10 years at IBM he has worked on the successful Serial Storage Architecture (SSA) range of products and the follow-on Fibre Channel products used in the IBM DS8000 series. Barry joined the SVC development team soon after its inception and has held many positions before taking on his current role as SVC performance architect. Outside of work, Barry enjoys playing golf and all things to do with rotary engines.
To avoid confusion in future posts, I will refer to Barry Whyte as BarryW, and fellow EMC blogger Barry Burke (aka the Storage Anarchist) as BarryB.
I'm in Chicago this week, but it is actually HOTTER here than in my home town of Tucson, Arizona.