Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is a Master Inventor and Senior IT Specialist for the IBM System Storage product line at the
IBM Executive Briefing Center in Tucson Arizona, and featured contributor
to IBM's developerWorks. In 2011, Tony celebrated his 25th year anniversary with IBM Storage on the same day as the IBM's Centennial. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
The smart people at the University of Pittsburgh manage five campuses and over 33,000 students, and needed to create an enterprise storage solution that would deliver three key benefits. Of course, they turned to IBM, the number one overall storage hardware vendor, to deliver:
A new storage infrastructure with the capacity to grow with the University of Pittsburgh as needed
Improved system reliability with reduced downtime, and availability 24/7/365
A significantly more manageable storage solution that could lower costs and provide better system efficiency through virtualization
As a result, IBM shipped its 25,000th high-end disk storage system, in this case two IBM System Storage DS8300 models, along with storage virtualization, and other related hardware, software and services, to provide a complete end-to-end solution.
Here is what Jinx Walton, Director of Computing Services and Systems Development at the University of Pittsburgh, had to say about it...
"The University of Pittsburgh supports large enterprise systems, and the number and complexity of new systems continue to grow. To effectively manage these systems it was necessary to identify an enterprise storage solution that would leverage our existing investments in storage, make allocation of storage flexible and responsive to project needs, provide centralized management, and offer the reliability and stability we require. The integrated IBM storage solution met these requirements."
I have arrived safely in Las Vegas for the IBM System Storage and Storage Networking Symposium. This event is held once every year. The gold sponsors were Brocade, Cisco, Finisar, Servergraph, and VMware. Our silver sponsor was QLogic.
I presented IBM's System Storage strategy and an overview of our product line. For those who missed it, our strategy is focused on helping customers in four key areas:
Optimize IT - to simplify and automate your IT operations and optimize performance and functionality, through server/storage synergies, storage virtualization, and integrated storage infrastructure management.
Leverage Information - to enable a single view of trusted business information through data sharing, and to get the most value from information through Information Lifecycle Management (ILM).
Mitigate Risk - to comply with security and regulatory requirements, and keep your business running with a complete set of business continuity solutions. IBM offers a range of non-erasable, non-rewriteable storage, encryption on disk and tape, and support for IT Infrastructure Library (ITIL) service management disciplines.
Enable Business Flexibility - to provide scalable solutions and protect your IT investment through the use of open industry standards like Storage Networking Industry Association (SNIA) Storage Management Initiative Specification (SMI-S). IBM offers scalability in three dimensions: Scale-up, Scale-out, and Scale-within.
IBM has a broad storage portfolio, in seven offering categories:
Disk Systems, including our SAN Volume Controller, DS family, and N series.
Tape Systems, including tape drives, libraries and virtualization.
Storage Networking, a complete set of switches, directors and routers
Infrastructure Management, featuring the IBM TotalStorage Productivity Center software
Business Continuity, advanced copy services and the software to manage them
Lifecycle and Retention, our non-erasable, non-rewriteable storage including DR550, N series with SnapLock, and WORM tape support, Grid Archive Manager and our Grid Medical Archive Solution (GMAS)
Storage Services, everything from consulting, design and deployment to outsourcing and hosting.
I could talk all day on this, but given that the room was packed, every seat taken and the rest of the audience standing along the walls, I had to keep it down to one hour.
SAN Volume Controller Overview
I presented an overview of the IBM System Storage SAN Volume Controller (SVC), IBM's flagship disk virtualization product. Rather than giving a long laundry list of features and benefits, I focused on the five that matter most:
Reduces the cost and complexity of managing storage, especially for mixed storage environments
Simplifies Business Continuity through non-disruptive data migration and advanced copy services
Improves storage utilization, getting more value from the storage hardware you already have
Enhances personnel productivity, empowering storage administrators to get their job done
Delivers high availability and performance
SAN Volume Controller - Customer Success Stories
A good part of this conference is presented by non-IBMers, including Business Partners and clients sharing their experiences. In this session, we had two speakers share their experiences with SVC.
David Snyder keeps over 80 web sites online and available. His digital media technologies team uses SVC to make their storage administration easier, and to ensure high availability for web site content creation and publishing.
Mark Prybylski manages storage at his company, a financial institution. His storage management team uses SVC Global Mirror, which provides asynchronous disk mirroring between different types of disk, as part of their Business Continuity/Disaster Recovery plan.
The last session I attended was "Storage ... to Optimize your ECM Deployments" by Jerry Bower, now working for IBM as part of our recent acquisition of the FileNet company. ECM stands for Enterprise Content Management, and IBM is the market leader in this space. Jerry gave a great overview of the IBM Content Manager software suite, our newly acquired FileNet portfolio, and the storage supported.
After the sessions was a reception at the Solution Center with dozens of exhibitor booths. For example, Optica Technologies had their PRIZM products, which are able to connect FICON servers to ESCON storage devices.
I am back at "the Office" for a single day today. This happens often enough that I need a name for it. Air Force pilots that practice landings and take-offs call them "Touch and Go", but I think I need something better. If you can think of a better phrase, let me know.
This week, I was in Hartford, CT, Somers, NY and our Corporate Headquarters in Armonk, in a variety of meetings, some with editors of magazines, others with IBMers I had only spoken to over the phone and finally got a chance to meet face to face.
I got back to Tucson last night, had meetings this morning in Second Life, then presented "Information Lifecycle Management" in Spanish to a group of customers from Mexico, Chile, and Brazil. We have a great Tucson Executive Briefing Center, and plenty of foreign-language speakers to draw from among our local employees here at the lab site.
Sunday, I leave for Las Vegas for our upcoming IBM Storage and Storage Networking Symposium. We will cover the latest in our disk, tape, storage networking and related software. Do you have your tickets? If you plan to attend, and want to meet up with me, let me know.
Stephen over at RupturedMonkey discusses the challenges of recruiting storage administrators:
There has been a Storage Admin job advertised for many months but no one wants it. Why? It's offering VERY good money but the word has got around the company has poor management practices and most people don't last for more than 6 months. So, with the shortage of good SAN people, good money and conditions, what can that company do to recruit someone? ...
This leads me to the thought that has anyone ever thought about the standards that storage administrators should follow? Can an employer look up a web site to find questions to ask prospective employees? More often than not, they are recruiting because the previous one left so how can companies know what they are getting.
There is actually a great standard called Information Technology Infrastructure Library (ITIL) that applies not just to storage administrators, but other IT personnel such as network administrators and server administrators. Here's a quick web-site about ITIL History:
ITIL History can be traced back to the late 1980’s when the British government determined that the level of IT service quality provided to them was not sufficient enough. The Central Computer and Telecommunications Agency (CCTA), now called the Office of Government Commerce (OGC), was tasked with developing a framework for efficient and financially responsible use of IT resources within the British government and the private sector.
The goal was to develop an approach that would be vendor-independent and applicable to organizations with differing technical and business needs. This resulted in the creation of the ITIL.
This standard spread from the UK to other governments in Europe, and is now being adopted worldwide by government agencies, non-profit organizations and commercial enterprises. IBM, of course, has been involved along the way, encouraging this set of best practices to take hold.
ITIL provides a common vocabulary that puts everyone in the IT industry on the same page, with the ultimate goal of helping companies run their IT organizations more efficiently.
ITIL provides recommendations, or best practices, for managing the way IT provides services to the rest of the organization, in the same way you would the rest of your business, with a defined set of processes.
While ITIL does a great job of describing what needs to be done, it doesn’t describe how to get it done. It doesn’t tell you how to take those best practices and implement them with real-life tools and technology. It’s not prescriptive.
The general process is now referred to as "IT Service Management", and the seven ITIL books are managed by the IT Service Management Forum (itSMF).
ITIL is vendor-independent. You can learn ITIL disciplines at one IT shop, and carry those skills with you when you go to another IT shop that has completely different gear. A common vocabulary would allow employers to post jobs in a consistent manner, and ask questions to those interviewing for the job. You can be ITIL-trained, and even ITIL-certified. IBM offers this training.
Of course, specific skills on how to use specific software to configure storage devices, request change control approvals, or define SAN zones, are useful, but often can be picked up on the job, reading the vendor manuals on the specifics. Of course, you can use IBM TotalStorage Productivity Center, which would allow someone to manage a variety of disk, tape and SAN fabric gear from one interface, greatly reducing the learning curve.
Some people find it surprising that it is often more cost-effective, and power-efficient, to run workloads on mainframe logical partitions (LPARs) than a stack of x86 servers running VMware.
Perhaps they won't be surprised any more. Here is an article in eWeek that explains how IBM is reducing energy costs 80% by consolidating 3,900 rack-optimized servers to 33 IBM System z mainframe servers, running Linux, in its own data centers. Since 1997, IBM has consolidated its 155 strategic worldwide data center locations down to just seven.
I am very pleased that IBM has invested heavily into Linux, with support across servers, storage, software and services. Linux is allowing IBM to deliver clever, innovative solutions that may not be possible with other operating systems. If you are in storage, you should consider becoming more knowledgeable in Linux.
The older systems won't just end up in a landfill somewhere. Instead, the details are spelled out in the IBM Press Release:
As part of the effort to protect the environment, IBM Global Asset Recovery Services, the refurbishment and recycling unit of IBM, will process and properly dispose of the 3,900 reclaimed systems. Newer units will be refurbished and resold through IBM's sales force and partner network, while older systems will be harvested for parts or sold for scrap. Prior to disposition, the machines will be scrubbed of all sensitive data. Any unusable e-waste will be properly disposed following environmentally compliant processes perfected over 20 years of leading environmental skill and experience in the area of IT asset disposition.
Whereas other vendors might think that some operational improvements will be enough, such as switching to higher-capacity SATA drives, or virtualizing x86 servers, IBM recognizes that sometimes more fundamental changes are required to effect real changes and real results.
Seth Godin has an interesting post titled Times a Million. He recounts how many people determine the fuel savings of higher-mileage cars to be only $300-$900 per year, and that this is not enough to motivate the purchase of a more efficient vehicle, such as a hybrid or electric car. Of course, if everyone drove more efficient vehicles, the savings "times a million" would benefit everyone and the world's ecology.
When I discuss storage-related concepts, many executives mistakenly relate them to the one area of information technology they know best: their laptop. Let's take a look at some examples:
Information Lifecycle Management
Information Lifecycle Management (ILM) includes classifying data by business value, and then using this to determine placement, movement or deletion. If you think about the amount of time and effort to review the files on your individual laptop, and to manually select and move or delete data, versus the benefits for the individual laptop owner, you would dismiss the concept. Most administrative tasks are done manually on laptops, because automated software is either unavailable or too expensive to justify for a single owner.
In medium and large size enterprises, automated software to help classify, move and delete data makes a lot of sense. Executives who decide that ILM is not for their data center, based on their experiences with their laptop, are losing out on the "times a million" effect.
Laptops have various controls to minimize the use of battery power, and these controls are equally available when plugged in. Many users don't bother turning off the features and functions they don't need when plugged in, because they feel the cost savings would only amount to pennies per day.
Times a million, energy savings do add up, and options to reduce the amount used per server, per TB of data stored, not only save millions of dollars per year, but can also postpone the need to build a new data center, or upgrade the electrical systems in your existing data center.
Backup and Disaster Recovery planning
I am not surprised how many laptops do not have adequate backup and disaster recovery plans. When executives think in terms of the time and effort to back up their data, often crudely copying key files to CD-ROM or USB key, and worrying about the management of those copies, which copies are the latest, and when those copies can be destroyed, they might reject deploying appropriate backup policies for others.
Times a million, the collected data stored on laptops could easily be half of your company's emails and intellectual property. Products like IBM Tivoli Storage Manager can manage a large number of clients with a few administrators, keeping track of how many copies to keep, and how long to keep them.
So, next time you are looking at technology or solutions for your data center, don't suffer from "Laptop Mentality". Focus instead on the data center as a whole.
Chris Evans over at Storage Architect posts about Hardware Replacement Lifecycle Update, on how storage virtualization can help with storage hardware replacement. He makes two points that I would like to comment on.
... indeed products such as USP, SVC and Invista can help in this regard. However at some stage even the virtualisation tools need replacing and the problem remains, although in a different place.
Knowing that replacement of technologies at all levels is inevitable, the IBM System Storage SAN Volume Controller is actually designed to allow non-disruptive cluster upgrade, which we announced in May 2006.
The process is quite elegant. The SVC consists of one or more node-pairs, and can be upgraded while the system is up and running by replacing nodes one at a time in a sequence of suspend and resume. All of the mapping tables are loaded onto the new nodes from the rest of the still-active nodes.
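In general terms, this suspend-and-resume sequence is a rolling upgrade. Here is a minimal, hypothetical sketch of the idea in Python; the node and version names are illustrative only, and this is not the actual SVC implementation:

```python
# Hypothetical sketch of a rolling node-pair upgrade: replace one node at a
# time so the cluster as a whole never goes offline.

class Node:
    def __init__(self, name, version):
        self.name = name
        self.version = version
        self.active = True

def rolling_upgrade(nodes, new_version):
    """Upgrade nodes one at a time so the cluster stays online throughout."""
    for node in nodes:
        node.active = False          # suspend: the partner node carries the I/O
        node.version = new_version   # swap in / reflash the node
        # mapping tables would be reloaded here from the still-active nodes
        node.active = True           # resume: node rejoins the cluster
        assert any(n.active for n in nodes), "cluster must never be fully offline"
    return nodes

cluster = [Node("node1", "4.1"), Node("node2", "4.1")]
rolling_upgrade(cluster, "4.2")
print(all(n.version == "4.2" for n in cluster))  # True
```

The key design point is that only one member of a redundant pair is ever suspended at a time, so the surviving member keeps serving I/O for the duration of the upgrade.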
I was hoping as part of the USP-V announcement HDS would indicate how they intend to help customers migrate from an existing USP which is virtualising storage, but alas it didn't happen.
Unlike the SVC, one cannot just upgrade the USP in place and make it into a USP-V. While it might be possible to unplug external disk from the old USP, and re-plug it into the new USP-V, what do you do about the internal disk data? I doubt you can just move drawers and trays of disk from the old to the new. The data has to be moved some other way.
Some have asked why not just put an SVC in front of both the old USP and the new USP-V and transfer the data that way. While SVC does support virtualizing the old USP device, IBM is still testing the new USP-V as a managed device, so this solution is not yet available, and it would only apply to the LUNs in the USP-V, not the volumes specifically formatted for System i or System z.
An alternative is to take advantage of IBM's Data Mobility Services, the result of our recent acquisition of Softek. IBM can help you move both mainframe and distributed systems data from any device to any device.
In a typical four-year lifecycle of storage arrays, it might take six months or so to fill up the box, and might take as much as a year at the end to move the data out to other equipment. SVC can greatly reduce both of these, so that you can take immediate advantage of new equipment as soon as possible, and keep using it for close to the full four years, migrating weeks or days before your lease expires.
Use more efficient disk media, such as high-capacity SATA disk drives
Both are great recommendations, but why limit yourself to what EMC offers? Your x86-based machines are only a subset of your servers,and disk is only a subset of your storage. IBM takes a more holistic approach, looking at the entire data center.
VMware is a great product, and IBM is its top reseller. But in addition to VMware, there are other solutions for the x86-based servers, like Xen and Microsoft Virtual Server. IBM's System p, System i, and System z product lines all support logical partitioning.
To compare the energy effectiveness of server virtualization, consider a metric that can apply across platforms. For example, for an e-mail server, consider watts per mailbox. If you have, say, 15,000 users, you can calculate how many watts you are consuming to manage their mailboxes on your current environment, and compare that with running them on VMware, or logical partitions on other servers. Some people find it surprising that it is often more cost-effective, and power-efficient, to run workloads on mainframe logical partitions (LPARs) than a stack of x86 servers running VMware.
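The watts-per-mailbox calculation above can be sketched in a few lines. All of the server counts and wattages below are made-up placeholders; substitute your own measured numbers:

```python
# Back-of-the-envelope watts-per-mailbox comparison across platforms.
# Every figure here is an illustrative assumption, not a measurement.

USERS = 15_000  # the mailbox count from the example above

def watts_per_mailbox(servers, watts_per_server, users=USERS):
    """Total platform power draw divided across all mailboxes served."""
    return servers * watts_per_server / users

# e.g. 50 dedicated x86 boxes at 400W each, vs. the same workload
# consolidated onto far fewer virtualized or partitioned servers
current = watts_per_mailbox(servers=50, watts_per_server=400)
consolidated = watts_per_mailbox(servers=10, watts_per_server=500)

print(f"current:      {current:.2f} W per mailbox")       # 1.33
print(f"consolidated: {consolidated:.2f} W per mailbox")  # 0.33
```

Because the metric is normalized per mailbox, the same comparison works whether the consolidation target is VMware on x86 or logical partitions on another platform.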
More efficient Media
SATA and FATA disks support higher capacities, and run at slower RPM speeds, thus using fewer watts per terabyte. A terabyte stored on 73GB high-speed 15K RPM drives consumes more watts than the same terabyte stored using 500GB SATA drives. Chuck correctly identifies that tape is more power-efficient than disk, but then argues that paper is more power-efficient than tape. Paper, however, is not necessarily more efficient than tape.
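The watts-per-terabyte gap comes mostly from drive count: small fast drives need many more spindles per terabyte. A quick sketch, with per-drive wattages as illustrative assumptions rather than datasheet values:

```python
# Rough watts-per-terabyte comparison of small high-speed drives vs.
# high-capacity SATA. Per-drive wattages are assumed, not measured.

def watts_per_tb(capacity_gb, watts_per_drive):
    """Power needed to keep one terabyte spinning on drives of this size."""
    drives_per_tb = 1000 / capacity_gb
    return drives_per_tb * watts_per_drive

fc_15k = watts_per_tb(capacity_gb=73, watts_per_drive=15)   # ~14 drives/TB
sata = watts_per_tb(capacity_gb=500, watts_per_drive=10)    # 2 drives/TB

print(f"73GB 15K RPM: {fc_15k:.0f} W per TB")
print(f"500GB SATA:   {sata:.0f} W per TB")
```

Even if the assumed wattages are off by a factor of two, the roughly seven-fold difference in drives per terabyte dominates the result.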
ESG analyst Steve Duplessie divides up data between Dynamic vs. Persistent. The best place to put dynamic data is on disk, and here is where the evaluation of FC/SAS versus SATA/FATA comes into play. Persistent data, on the other hand, can be stored on paper, microfiche, optical or tape media. All of these shelf-resident media consume no electricity, nor generate any heat that would require additional cooling.
A study by scientists at the Lawrence Berkeley National Laboratory titled High-Tech Means High-Efficiency: The Business Case for Energy Management in High-Tech Industries indicates that data centers consume 15 to 100 times more energy per square foot than traditional office space. Storing persistent data in traditional office space can save a huge amount of energy. Steve Duplessie feels the ratio of dynamic to persistent data is 1:10 today, but is likely to grow to 1:100 in the near future, making energy-efficient storage of persistent data ever more important to our environment.
Data centers consume nearly 5,000 Megawatts in the USA alone, and 14,000 Megawatts worldwide. To put that in perspective, Hungary, where I was last week, can generate up to 8,000 Megawatts for the entire country (and they were using 7,400 Megawatts last week as a result of their current heat wave, causing them grave concern).
Back in the 1990's, one of the insurance companies IBM worked with kept data on paper in manila folders, and armies of young adults on roller skates were dispatched throughout the large warehouses of shelves to get the appropriate folder in response to customer service inquiries. Digitizing this paper into electronic format greatly reduced the need for this amount of warehouse space, as well as improved the time to retrieve the data.
A typical file storage box (12 inch x 12 inch x 18 inch) containing typed pages, single-spaced, double-sided, in 12-point font, could hold perhaps 100MB. The same box could hold a hundred or more LTO or 3592 tape cartridges, each storing hundreds of GB of information. That's roughly a million-to-one improvement in space-efficiency, and on a watts-per-TB basis it translates to a substantial improvement, since the tapes sit happily under standard office air conditioning and lighting conditions.
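Working through that arithmetic, with cartridge count and capacity taken as the rough figures from the text:

```python
# Paper vs. tape capacity for one file storage box. The cartridge count and
# per-cartridge capacity are rough mid-generation assumptions.

PAPER_MB_PER_BOX = 100        # typed pages filling one 12x12x18 box
CARTRIDGES_PER_BOX = 100      # "a hundred or more" LTO/3592 cartridges
GB_PER_CARTRIDGE = 500        # "hundreds of GB" each

tape_mb_per_box = CARTRIDGES_PER_BOX * GB_PER_CARTRIDGE * 1000  # GB -> MB
ratio = tape_mb_per_box / PAPER_MB_PER_BOX
print(f"{ratio:,.0f}-to-1")   # 500,000-to-1 with these assumptions
```

With 2:1 tape compression, or a few more cartridges per box, the ratio reaches the million-to-one order of magnitude cited above.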
To learn more about IBM's Project Big Green, watch this introductory video, which used Second Life for the animation.
Back in the late 1980's and early 1990's, I was one of the architects for DFSMS on z/OS, and customers always asked, "What is the clip level?", in other words, how big does a customer have to be to take advantage of DFSMS. We worked it out that if you had more than 100GB of disk data, DFSMS is worthwhile. DFSMS is now just standard by default, as everyone now easily has more than 100GB of data.
Later, in the late 1990's, I worked on Linux for System z. Again, customers asked how many Linux guest images would justify deploying applications on a mainframe. We worked it out to about 10 images: 10 Linux logical partitions, or Linux guests under z/VM, were enough to cost-justify the entire investment.
So what is the "clip level" for SANs? How many servers does an SMB need to have to justify deploying a SAN? IBM announced the new BladeCenter S, designed specifically for mid-sized companies, 100 to 1000 employees, typically running 25 to 45 servers. However, I suspect companies with as few as 7-10 servers would probably benefit from deploying an FC or IP SAN.
What do you think? Send me a comment on how many servers should be the clip level.
Last week, I opined that Monday's IDC announcement "IBM #1 in combined disk and tape storage hardware sales for 2006" was in part because of a resurgence of interest in tape, with four specific examples. There was a lot of reaction and reflection from both sides.
On the one side...
EMC blogger Mark Twomey at Storagezilla admits that perhaps Tape Isn't Dead after all, and that tape is perhaps the best place to put long-term archive data, but not for backup. EMC's "creative marketing types" put out this Fun With Tape video that I found amusing. (It asks for a first name, last name, and e-mail address, which are then embedded into the resulting video itself, and perhaps forwarded to your nearest EMC sales rep, so answer according to your wishes for privacy.)
The "mummy wrapped in tape media" seems to be a common theme, and shows up again in LiveVault's video with John Cleese, which makes the same argument as the EMC video above, namely: switch your backups from tape to disk because we are a disk-only vendor.
... and on the other side
JWT over at DrunkenData asks Which is greener, disk or tape? Tape is, of course, by a long shot, and an essential part of IBM's Big Green initiative, a project to invest US$1 billion per year to make data centers more efficient in power and cooling.
Sun/StorageTek blogger Randy Chalfant questions the Death of Tape, and argues that disk-only solutions suffer from atrophy. The results he posts from a survey of 200 customers are similar to those we've seen with customers using IBM TotalStorage Productivity Center, our software to help evaluate data usage, and identify misuse, in your data center.
To my readers in the USA, United Kingdom, Ireland, South Africa, China and Japan, and a few other countries, Happy Father's Day!
This week I was in Palm Springs in meetings with clients, prospects, business partners and IBM sales reps.
Tuesday consisted of "outdoor meetings", but the high winds caused some people to arrive late, and others to land in the various sand traps and water hazards. A "welcome reception" event allowed everyone to socialize and get to know the IBM experts and executives. Two of my colleagues, Mike Stanek and Dave Wyatt, were also with me in Australia last week, so the three of us were discussing recovery from jet lag.
Wednesday was organized as a main tent event, where everyone met in one large room to hear our strategy, latest set of offerings, and customer testimonials. This was done indoors, of course, which was a good thing as the winds were now gusting up to 50 miles per hour, knocking over windmills and making the local news.
Here's a quick sample from the testimonials:
An insurance company virtualized their IBM DS8000, DS4000, ESS 800 and EMC DMX3 high-end disk with the IBM System Storage SAN Volume Controller and got higher availability and performance. Data migration efforts that used to take six (6) hours of admin time now take less than one hour, with no system downtime. They have a total of 350TB virtualized under SVC now, but plan to extend this for a variety of other projects.
A bank presented their success using "Global Mirror" (IBM's asynchronous two-site replication disk mirroring capability). Their previous "business continuity" plan was called 2-20-24: 2 sites, 20 miles apart, with a recovery time objective (RTO) of 24 hours. With the events of Hurricane Katrina, this was considered inadequate, and a new 2-200-6 plan was requested, across 200 miles with a recovery time objective of only 6 hours. They chose to deploy this one application at a time, to learn and grow by experience in each phase. They started with the Microsoft Exchange e-mail application running under VMware on BladeCenter servers, and were able to recover remotely within 1 hour. They are now looking to refine and automate the recovery process, perhaps with IBM TotalStorage Productivity Center for Replication and Geographically Dispersed Open Clusters (GDOC).
A healthcare provider presented their success with tiered storage, managing a 475TB mix of IBM DS8000, DS6000, DS4000 and HP EVA disk arrays. The key was having centralized storage management from IBM, which allowed them to shrink provisioning time from a 3-week average to the point where 96% of their storage provisioning requests are now completed in less than 1 week. Moving data between storage tiers was non-disruptive, and the significant cost savings greatly justified the change in "mindset" that required some training on the new environment.
Thursday we offered a series of "workshops" on specific topics. These were interactive sessions to discuss installation, design and deployment of various solutions. The event ended early enough so that people could return home, or go to the practice range, which reminded me of this inspiring video on How to play golf as well as Tiger Woods.
The event got great reviews, and I look forward to the next one. Until then, enjoy the weekend!
Yesterday, IBM announced a variety of new storage offerings. Our theme this time around was "Policies and Performance". Here's a quick recap.
IBM offers new appliance and gateway models of its popular "unified storage" IBM System Storage N series disk systems. The N5300 appliance has two models: A10 for the single-controller model, and A20 for the dual-controller model. The N5600 gateway also has two models: G10 for the single-controller model, and G20 for the dual-controller model. A new EXN4000 disk expansion drawer is 3U high, and can hold up to 14 disks. It can support 1Gbps, 2Gbps and 4Gbps speeds. In addition to all this new "performance", we offer a new "policy" called the Advanced Single Instance Storage feature for the N5000 and N7000 series, which provides de-duplication at the block level. This can be particularly useful if you are using your N series for e-mail, document publishing, databases, backups or archives.
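The general idea behind block-level single-instance storage can be sketched simply: identical blocks are stored once and shared by reference. This is a minimal illustration of the technique in general, not the actual N series implementation:

```python
# Minimal sketch of block-level de-duplication: each unique block is stored
# once, keyed by its hash; files hold only lists of block references.

import hashlib

class DedupStore:
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}   # digest -> block bytes (one copy per unique block)
        self.files = {}    # filename -> ordered list of block digests

    def write(self, name, data):
        digests = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            d = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(d, block)   # store unique blocks only
            digests.append(d)
        self.files[name] = digests

    def read(self, name):
        return b"".join(self.blocks[d] for d in self.files[name])

store = DedupStore()
payload = b"A" * 8192                    # two identical 4KB blocks
store.write("mail1", payload)
store.write("mail2", payload)            # duplicate file adds no new blocks
print(len(store.blocks))                 # 1 unique block stored
print(store.read("mail2") == payload)    # True
```

This is why de-duplication pays off most for workloads full of repeated content, such as the e-mail attachments, backups and archives mentioned above.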
SAN Volume Controller
A technology refresh with the new 8G4 model. Like its predecessor, the 8F4, this new model has 8GB of cache per node, and is fitted with 4Gbps SAN attachment ports. The difference is that the 8G4 is based on our successful IBM System x3550 server. This baby screams, so I look forward to seeing the updated SPC-1 and SPC-2 performance benchmark ratings. The new SVC 4.2 software provides additional authentication policies for more granular administration support, and multi-destination FlashCopy (one source copied to up to 16 destination copies at the same time).
The DS8000 series now supports third and fourth expansion frames. This was actually already available via RPQ, but now it can be directly ordered. This means that you can now hold up to half a Petabyte in a single disk system.
IBM TotalStorage Productivity Center v3.3 offers policy and performance-based guidance in configuring disk system volumes, specification of paths between hosts and disk systems during storage provisioning, policy-based specification of zone membership, configuration analysis capabilities, configuration change management, extended tape management, and both content-sensitive and scalable enterprise-wide reports. There is also a version specifically designed to manage disk replication on System z platforms.
Deep Computing Storage
The IBM System Storage DCS9550 Storage System comes in a 4U controller and 3U disk expansion drawers. It is designed for High Performance Computing (HPC) such as genome medical research, government research and rich media applications.
Our clients tell us they need performance to meet their dynamic business demands, and policies to help them manage the ever growing size of their storage infrastructure. We listened!
The results are finally in. IBMer Wolfgang Singer was awarded the "Top Speaker" award for his NAS and iSCSI tutorial at last year's Orlando 2006 conference. Here he is receiving the award from SNIA Executive Director Leo Leger.
Of course, NAS and iSCSI technologies have been around for a while, but they are still new for many customers, which is why tutorials like this are so important.
"Information is moving—you know, nightly news is one way, of course, but it's also moving through the blogosphere and through the Internets." --- George W. Bush
As multinational companies transition to become globally integrated enterprises, information is going to move across national boundaries. Laws that pertain to how data is stored and accessed need to be addressed.
Jon W. Toigo over at DrunkenData.com discusses an interesting proposal on Google censorship. The New York Sun reports that NYC comptroller William Thompson Jr. is targeting both Google and Yahoo over their policies of abiding by the local laws in each country they do business in. The proposal includes asking Google to fight local laws, publicize when Google complies with local laws, and publicize when local governments ask Google to comply with their laws. While Toigo focuses on Google, this issue applies to Yahoo, Microsoft, and many other companies that do business in multiple countries.
I admire when government officials use diplomacy to influence the policy of other governments, and when individuals act to influence the policies of those who govern them, but Thompson is doing neither. In this matter, Thompson is trying to influence the policies of another government outside his jurisdiction, as a manager of investments in companies that do business there. Investors have two choices when trying to influence how companies do business:
Stop investing in those companies
Purchase shares, and vote your portion of the shares.
It appears Thompson is exercising the latter, proposing that this issue be brought to a shareholder vote via proxy. There can only be two results from such a vote, either:
Shareholders vote for it, and Google changes the way it does business in this and other countries, possibly ceasing business in countries that don't appreciate hegemony.
Shareholders vote against it, and Google continues its great balancing act, complying with both local laws and its own corporate culture.
Did we forget that we have censorship in the USA as well? Would Thompson's proposals apply to the rules and regulations that our own government requires?
IBM does business in most, but not all, countries on this planet. In the countries we don't do business in, we have good reason not to. In the countries we do, we comply with all the laws that apply in each case. When I travel to these countries, including some of the countries specifically targeted by this proposal, I must abide by their laws. No exceptions.
The world is shrinking, and technologies now allow companies to become globally integrated. Before writing "The World Is Flat", Thomas Friedman wrote a book titled "The Lexus and the Olive Tree", which covers the various issues related to conflicts between global companies and the countries and cultures they do business in.
This reminds me of the wisdom of the Prime Directive introduced in the late 1960s on the popular TV show "Star Trek". The concept was simple: honor the sovereignty of other cultures, on other worlds, and play by their rules when you are on their planet. I say "wisdom" in that it took me years to truly appreciate this idea. Initially, I considered it just a plot device to introduce conflict each time the captain and crew of the starship "Enterprise" visited a new location and discovered a culture different from their own. But over the years, as I have traveled to many countries, I began to see and understand the wisdom of the "Prime Directive", and it applies as much now, in real life, as it did back then in the futuristic 1960s TV show.
Who are we to say that our way of doing things is the one and only way to do them?
Yesterday morning, the entire country of Colombia suffered its worst black-out (power outage) in 22 years. 98% of the country was out for 4 1/2 hours. This comes just five months after an outage that hit 25% of the country on December 7, 2006. Ironically, it happened the week I am here explaining the need for Business Continuity plans to IBM Business Partners from Argentina, Peru, Venezuela, Ecuador and Colombia. As is often the case, people need a real example to recognize how important planning is.
It reminded me of the Northeast Black-out of 2003 that impacted the USA and Canada. I was speaking to a crowd of 800 people at the SHARE conference in Washington D.C. when it happened, and hundreds of pagers and cell-phones went off all at the same time. Although we were outside the affected area and had plenty of lighting, we ended up canceling the rest of my talk, and many people left immediately to help execute their business continuity plans. Of course, terrorism was immediately assumed, but a final report showed that the outage was initiated in Ohio due to overgrown trees, and then propagated to hundreds of other plants due to a software bug.
According to this morning's Bogota newspaper, "El Tiempo", nobody knows the root cause of yesterday's outage. Immediately, the country's leftist rebels were blamed, but now the leading theory is that it was initiated by operator error (a technician touching something he shouldn't have), and then propagated by a faulty distribution system.
Another example of the need for a robust and resilient infrastructure, and appropriate business continuity plans.
SNW wrapped up Thursday. As is often the case, a lot of people have left already.
I saw two presentations worth discussing here in this blog.
Angus MacDonald, CEO of Mathon Systems, presented "Litigation Readiness: How prepared are you for the demands of eDiscovery?"
The process of eDiscovery is to take a large volume of data and extract the small bits of relevance, as they relate to a case, investigation or litigation. In 2004, there were 64 billion e-mails per day, and this is expected to reach 103 billion by 2008. There are growing concerns about the "spoliation" of evidence, which I thought was a typo until I looked it up. He encouraged everyone to check out the Electronic Discovery Reference Model, which is trying to standardize the way IT and legal communicate with each other.
The problem is often miscommunication over semantics and terminology. For example, in eDiscovery, the term "production" describes the delivery of relevant documents to a judge or opposing party. This may involve printing them out on paper, delivering them electronically in their original format, or converting them to a more standard electronic format like Adobe PDF. The judge or opposing party reserves the right to request how they want the documents produced. Of course, in any format other than the original, authenticity needs to be affirmed.
He gave two example lawsuits related to this.
In Zubulake v. UBS Warburg, Zubulake was awarded $29 million because UBS stored old emails on backup tapes, rather than an archiving system, and could not locate seven of these backup tapes. This is not the first time I have seen some IT department, or some legal department, think that keeping backups of email repositories for many years is the same as keeping an "archive".
In Coleman Holdings v. Morgan Stanley, Coleman was awarded $1.45 billion because the judge felt that Morgan Stanley failed to do proper eDiscovery. This was after they tried to reconstruct their email system from 5000 old backup tapes.
Angus suggests identifying the types of documents most often requested, and planning from there. In an interesting twist, the CEO/CFO/CIO might go to jail if the IT department doesn't do something correctly, so perhaps IT managers will now get the respect/funding/technology they need to get the job done.
Bruce Kornfeld, Compellent Technologies, presented "Building Systems that Scale: Imagining the one Petabyte per Admin management ratio."
Bruce did a good job staying generic, and not mentioning his company's products too much. Specifically, Compellent makes a frame similar to what IBM used to call the "SAN Integration Server". Back in 2003, IBM introduced the SAN Volume Controller, which had no disk, and the "SAN Integration Server", which had controller + disk. What IBM learned was that customers prefer the diskless model, minimizing the amount of disk that has to be purchased from the original vendor, and instead opting to have the freedom to choose any vendor they like for the managed capacity.
An interesting feature of the Compellent solution is that they chop up the virtual disk into 2MB pieces, and allow these pieces to be moved automatically from high-speed (FC) to low-speed (SATA) disk, based on their reference frequency. This is similar to HSM, but at the block level, rather than the file level.
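The idea of block-level HSM can be sketched in a few lines. The 2MB extent size comes from the talk; the access-count threshold, rebalance period, and names below are my own assumptions, not Compellent's actual algorithm:

```python
# Toy sketch of block-level tiering: frequently referenced extents are
# promoted to fast (FC) disk, idle extents are demoted to cheap (SATA) disk.
EXTENT_MB = 2  # fixed extent size mentioned in the talk

class TieredVolume:
    def __init__(self, n_extents):
        self.access_count = [0] * n_extents
        self.tier = ["SATA"] * n_extents   # everything starts on cheap disk

    def read(self, extent):
        self.access_count[extent] += 1     # track reference frequency

    def rebalance(self, hot_threshold=3):
        """Periodic background task: promote busy extents, demote idle ones."""
        for i, count in enumerate(self.access_count):
            self.tier[i] = "FC" if count >= hot_threshold else "SATA"
            self.access_count[i] = 0       # reset counters for the next period

vol = TieredVolume(n_extents=4)
for _ in range(5):
    vol.read(0)            # extent 0 is referenced frequently
vol.read(3)                # extent 3 is touched only once
vol.rebalance()
print(vol.tier)            # ['FC', 'SATA', 'SATA', 'SATA']
```

Because the decision is made per 2MB extent rather than per file, a single hot database file can have its busy index pages on FC while its cold history pages sit on SATA.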
Every advantage Bruce listed for his box already exists from IBM: improved capacity planning, improved performance, ease of data migration, flexible volumes, and a single pane of glass GUI administration tool.
Perhaps more interesting were the questions from the audience:
Q1. Do you have any customers that have 1PB of your solution? No, we have several in the 200-500TB range.
Q2. You only have a single two-node cluster, can we have more clusters? No, that is all we support, but if you need that you would have to go to one of the major storage vendors (like IBM).
Q3. Do we have to buy Compellent storage to go with the Compellent controllers? Yes, it is designed so it is an integrated solution. If you need to virtualize your existing storage, you have to go to one of the major storage vendors (like IBM).
Q4. Doesn't having data migrate automatically from FC to SATA behind the scenes lower performance and raise the risk of disk failure? Our box is designed for inactive data, so performance is not an issue.
Q5. How do you protect against double-disk failures? We don't, and these would be even more detrimental to our solution than traditional solutions. Other vendors offer RAID6, but we don't have that yet.
It was a fun week, and good to see people I have communicated with, but never met in person.
Continuing my coverage of SNW Spring 2007, Ron and Vincent kicked off Wednesday main tent sessions with more survey questions:
Q1. How secure is your storage network?
27% Redundant, 100% able to withstand physical failures
28% Able to withstand hackers, but not physical failures
37% Weak on both fronts
Q2. What was the cause of most downtime in last 12 months?
1% Natural disasters
13% Network outages
14% Server failures
9% Telecom provider outage
22% IT resource upgrades
33% Human error
Thornton May, futurist and columnist for ComputerWorld, presented "Storage 3.0: What Comes After, What Comes Next." I have seen several "futurists" present at conferences like this. They all feel the need to explain what their job is, and what it takes to be one. This time, Thornton indicated he was "ridiculously well-travelled, amazingly well-connected, pathologically observant, and brutally honest." His insights:
At current rates, in 15 years every molecule on earth will have its own IP address.
"What's NOT good enough changes." -- Clayton Christensen
Gabriel Broner, General Manager of the newly created "Storage Solutions" division of Microsoft, presented "The Drive to Unified Storage". The people sitting around me asked "What does Microsoft have to do with storage?" He defined "Unified Storage" the way we use it for the IBM System Storage N series: "a storage unit that provides both file and block level protocol support." Microsoft is using "e-mail" as the model for data access, identifying the need to have "off-line" copies on your PC or laptop that are synced up with "on-line" sources. Features that were typically only available for high-end applications are now being made available to the masses, like "Volume Snapshot" capability in Windows Vista. On the home front, Microsoft recognizes that typically one person acts as the "IT manager" for the family.
He shared survey results on the storage spend of Fortune 1000 companies. It was not clear if this was for Windows environments, or how the data was collected. These numbers don't match what we hear from our UNIX or mainframe customers.
Microsoft is implementing application changes, such as Office 2007, to simplify storage issues. Storage virtualization is the key for the future, he says, stating that Microsoft's "iSCSI target" software support makes files look like block-oriented volumes. Virtualization is now mainstream, and deploying software on standard hardware is the new storage business model. The end goal is to simplify provisioning, device and resource management, without reducing functionality, narrowing the gap between general IT tasks and specific storage tasks.
Craig Lau, NBC Olympic coverage, presented their success story. Look at the number of "hours" of TV Olympic coverage over the years:
1996 Atlanta -- 175 hours
2000 Sydney -- 441 hours
2004 Athens -- 1210 hours
NBC is now able to deliver 70 hours of TV programming per day, shown across their seven channels (NBC, CNBC, MSNBC, Bravo, USA Network, Telemundo, and HD-TV). The Olympics in Torino, Italy generated 25,000 tapes in 17 days. Their 100,000-tape Olympic repository is starting to deteriorate, and they need to consider conversion to digital format. Their challenge was that footage was difficult to find, and producers needed immediate access to time-sensitive, critical content.
Their solution was Digital Asset Management, automating indexing and logging, using an IP-based workflow that reduces the number of people at the Olympics location and allows content to be sent back to the USA for remote editing. The facilities at Torino involved:
2850 people, most hired just the week prior to the Olympic event
250TB of disk storage
135 High-Definition cameras
212 Video Tape Recorders
4000 hours of content on 1700 tapes
NBC is frustrated by the lack of compatibility and interoperability in the video format industry. They have been testing MPEG-1 (1.5 Mbps) formats, and plan to deploy a new system using 1080i for the upcoming 2008 Olympics in Beijing. With the new system, they can index footage by athlete, by event, and by human emotional reaction. They can review and edit footage within 30-45 seconds of live coverage, allowing rough edits to be documented as "Edit Decision Lists" that can be e-mailed or put on a USB key for others to review.
Although I missed Anil Gupta's "Blogger Event" on Monday, several bloggers did stop by to visit me at the IBM booth.
I survived my first day at SNW Spring 2007. This is my first time at SNW, but it is very much like many of the other conferences I have been to. It officially started Monday morning with pre-conference tutorials and primer break-out sessions that covered storage fundamentals, but I didn't arrive until late Monday night due to high wind conditions at the Phoenix airport that delayed my travel.
Tuesday started out with main tent sessions. Ron Milton, VP of ComputerWorld, which puts on this conference, and Vincent Franceschini, Chairman of the Board for SNIA, kicked off the event. It didn't take them long to get into the alphabet soup: ILM, ITIL, SMI-S, XAM, IMA, MMA, DDF, MF, DMF, IPSF, SSIF, and SRM. Several hundred people had "voting devices" so that they could participate in "informal" surveys.
Q1. What was the greatest need?
37% Storage Resource Management (SRM) tools
19% Storage Virtualization
19% Information Lifecycle Management (ILM)
14% Integration with other management tools
11% Compliance storage for regulations
Q2. What are people doing to address storage infrastructure complexity?
33% Deploying new SRM and SAN management tools
26% Adopting "Storage as a Service" methodology
22% Deploying new storage virtualization technologies
8% Hiring more staff
9% (complexity was not an issue)
The first keynote speaker was Cora Carmody, CIO of SAIC. In the late 1980s and early 1990s, I did a lot of work with SAIC here in San Diego, and so IBM sent me to San Diego quite frequently for face-to-face meetings with them. Her talk was cryptically titled "Jumbo Shrimp, Information Management, and the Mark of the Beast." Coming up with good titles is important. Some of her key points:
"Information management" was as much an oxymoron as "jumbo shrimp" or "military intelligence". (SAIC is a general contractor for the US Military, so this was especially funny.)
Computer data needs both "ownership" and "stewardship".
A Gartner analyst reports that 50% of digital information for a business resides in personal files on individual PCs.
The Pan-STARRS project is ingesting 10TB per week of astronomical data.
TeraTEXT(R) project is a non-relational database that supports a large mix of structured and unstructured content.
The next "Y2K" crisis for the USA is changing from 3-digit to 4-digit area codes for our telephone numbers.
Battery size and life have not advanced as fast as we need
There has been little progress in "User Interface" ease of use
Formats and standards are picked for the most part by the winning vendors, and it is the silence of the marketplace that lets them get away with this.
We are overly reliant on an inherently insecure medium.
The "mark of the beast" refers to exciting new technologies based on "presence awareness". For example, some hotels are now able to check you into the hotel as you drive up, based on your car's license plate. Some 24-hour gyms use your fingerprint as your entry credentials, eliminating the need to staff people at the front desk.
IBM's own Barry Rudolph presented "Storage in an Age of Inconvenient Truths", dressed up like Oscar-winner and former USA Vice President Al Gore. Barry's focus was on the growing concern over environmental power and cooling issues in the data center. According to IDC, the cost of powering and cooling an individual server, over its lifetime, now exceeds its acquisition cost. Storage devices are not as bad as servers in this regard. Data centers now consume 1.2% of the world's energy.
Over lunch, I heard Tony Asaro from ESG present "The Need for Highly Virtualized Storage Systems within a Virtualized Data Center." His concern is that there is still a "heavy touch" required to manage storage. Without virtualization, your data center is less than the sum of its parts. Although IBM has been doing storage virtualization since 1974, Tony mentioned that most storage vendors were "late to the party". He argues that "internal virtualization" inside storage arrays is not enough; you need "external virtualization" (like the IBM System Storage SAN Volume Controller) to virtualize your entire infrastructure. What storage administrators would like is for storage to have consumer levels of "ease of use", and today's non-virtualized storage environments are nowhere near that.
"The great advantage [the telephone] possesses over every other form of electrical apparatus consists in the fact that it requires no skill to operate the instrument." - Alexander Graham Bell, 1878
I attended a few break-out sessions in the afternoon.
Ralph presented "Crisis of Capacity", which covered the drastic actions he had to take to handle power and cooling in their expanding data center during the summer months, when temperatures peak at 105 degrees. This included creating "hot" and "cold" aisles on his raised floor by re-organizing the perforated floor tiles, and doing a better job standardizing how cables are connected to the back of racks and up through the ceiling to maximize airflow. An amp-meter on each power strip was used to measure the power used at each rack, which allowed them to better prioritize their efforts. Their air conditioning unit was only 12 inches from the concrete floor, and raising it to 18 inches greatly reduced noise and vibration. Adding a second AC unit made a world of difference. Finally, they eliminated KVMs, because people who use KVMs break other parts of the data center. His rule of thumb: the cooling requirements will be 50% of the rated power requirements for equipment.
Terry Yoshi, Intel internal IT department, and a member of SNIA's End User Council
Terry presented "Taming the SAN Complexity". The problem with "complexity" as a concept is that it is very subjective, difficult to quantify, and therefore difficult to manage. He presented complexity in four areas: organizational structure of the company as a whole; skill sets required of the IT staff; business processes and procedures; and technology. Dealing with complexity is a battle between Old School (because we've always done it this way) and New School (because it is new and different technology). Storage Area Networks are inherently a "shared resource", and the increased complexity is a direct result of the low reliability of the components and devices they are composed of. People should focus on the "Total Cost of Ownership" (TCO) for a SAN, and not just the initial acquisition price of SAN gear. He was not a fan of the "dual/multiple" vendor strategy that many companies employ to reduce costs. His suggestion that things should be tried out first on your "test SAN" caused some chuckles, as few have such a thing. Finally, he suggested not only documenting "Best Practices" and "Best Known Methods" but also things that have been found not to work, his do-not-try-this-at-home list.
Tony Antony, Cisco marketing manager for Optical products
This was an overview of the technologies available for long distance connections for disaster recovery,business continuity, and resilience. He covered three levels.
IP - Fibre Channel over IP (FCIP) offers the greatest "global" distance but forces people into asynchronous mirroring.
SONET/SDH - SONET is what we call it in the USA, and SDH is what it is called in other countries. This provides state-to-state or "out-of-region" distances, which is ideal for meeting certain government regulations for homeland defense. He suggests this option when dark fiber or DWDM is not available.
DWDM/CWDM - this uses a prism to run multiple colors of light through a single fiber optic cable. CWDM is cheaper, but only handles 8 signals per cable. DWDM can handle 32 to 160 signals per cable, but is more expensive.
His rule of thumb: one buffer credit for every kilometer at 2Gbps speed (for every 2km at 1Gbps).
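His rule of thumb can be turned into a quick back-of-the-envelope calculator. Note that extending the rule linearly to other link speeds is my own assumption, and real sizing also depends on frame size and switch firmware:

```python
def buffer_credits(distance_km, speed_gbps):
    """Rough buffer-credit estimate from the rule of thumb above:
    ~1 credit per km at 2 Gbps, scaled linearly with link speed.
    (Linear scaling to other speeds is an assumption, not part of the talk.)"""
    return int(round(distance_km * speed_gbps / 2.0))

print(buffer_credits(100, 2))   # 100 credits for a 100 km link at 2 Gbps
print(buffer_credits(100, 1))   # 50 credits: at 1 Gbps, one credit covers 2 km
```

The intuition: a frame "in flight" occupies roughly a kilometer of fiber at 2 Gbps, so without enough credits the link sits idle waiting for acknowledgments and throughput collapses over distance.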
The day ended at the "Expo". I hung out at the IBM booth to help answer questions and network with others.
Last year in Beijing, China, one of my colleagues told me "When it rains here, cabs dry up". Normally, there are enough taxi cabs to handle normal conditions, but when it rains, people who normally walk now want to take a cab instead, and the demand goes up, resulting in being more difficult to find one when you need one.
I'm wrapping up my week here in Chicago, and it snowed yesterday. Cabs were scarce. I walked. Many others walked too, about half with umbrellas to protect themselves against the snowflakes.
Most systems are designed to handle typical average conditions. Taxi cabs in a city, for example, handle typicalamounts of traffic.
IT is different. In many cases, IT infrastructures are designed for the peaks, not the averages. Peaks can be where you need performance the most, and failure to design for them can be disastrous. As with any business decision, this represents a trade-off: design for the average and suffer through the peaks, or design for the peak and be over-allocated and under-utilized the rest of the time.
The concept that there should be a linear "storage administrators per TB" rule of thumb has been around for a while. Back in 1992, I visited a customer in Germany who had FIVE storage admins for a 90 GB (yes, GB, not TB) disk array. I told them they only needed three admins, but they cited German laws that prohibited "overtime" work on evenings and weekends.
Later, in 1996, I visited an insurance company in Ohio to talk about IBM Tivoli Storage Manager. They had TWO admins to manage 7TB on their mainframe, and another 45 people managing the 7TB across their distributed systems running Linux, UNIX, and Windows. My first question: why TWO? Only one would be needed for the mainframe, but they responded that they back each other up when one takes a two-week vacation. My second question, to the rest of the audience, was... "When was the last time you guys took a two-week vacation?"
Today, admins manage many TBs of storage. But TBs are turning out not to be a fair ruler for estimating the number of admins you need. It's a moving target, and other factors have more influence than sheer quantity of data. Let's take a look at some of those factors, which we call "the three V's":
Variety of information types
In the beginning, there were just flat text files. In today's world, we have structured databases, semi-structured e-mail systems, hypertext documents, composite applications, audio and video formats that require streaming, and so on. Variety adds to the complexity of the environment. Different data requires different treatment, different handling, and perhaps even different storage technologies.
Volume of data
Data on disk and tape is growing 60% year over year. It's growing on paper also. It's growing on film, like photos and X-rays. The problem is not the amount, but the rate of growth. Imagine if population and traffic in your city or town increased 60% in one year; most likely people would suffer, because most governments just aren't prepared for that level of growth.
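To see why 60% year-on-year growth is so painful, here is a quick compound-growth projection (the 10TB starting point is an arbitrary assumption for illustration):

```python
def capacity_after(years, start_tb=10.0, annual_growth=0.60):
    """Project storage capacity under compound year-on-year growth."""
    return start_tb * (1 + annual_growth) ** years

# Capacity grows roughly tenfold in five years at 60% per year
for y in (0, 1, 3, 5):
    print(y, round(capacity_after(y), 1))
```

In other words, a shop with 10TB today would be planning for over 100TB five years out, which is why the rate of growth, not the absolute amount, dominates the planning problem.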
Velocity of change
Back in the 1950s and 1960s, people only had to make updates once a year, scheduling time during holidays. Now, people are making changes every month, sometimes every weekend. One customer we spoke with recently said they make about 8000 changes PER WEEKEND!
So, the key is that there is no simple rule of thumb. Fewer admins are needed per TB for mainframe data than for distributed systems data. Fewer admins per TB are needed when you deploy productivity software, like IBM TotalStorage Productivity Center. Fewer admins per TB are needed when you deploy storage virtualization, like IBM SAN Volume Controller or IBM virtual tape libraries.
Today, Apple and EMI announced that EMI's entire music and video catalog will be available in May without any digital rights management (DRM) protection. Not only will the music be higher quality, but it can be played on any player, presumably using MP3 format instead of Apple's proprietary AAC format. Being locked into any single-vendor solution is undesirable. Similar issues abound for Microsoft Office 2007 file formats.
For my iPod, I ripped all my CDs into MP3 format, not AAC. I love my iPod, but if I ever decided to choose a different MP3 player, I did not want to go through the time-consuming process of re-ripping them all again.
Seth Godin's blog argues that this Apple-EMI announcement means that DRM is dead.
Back when music labels added value by producing and distributing music in physical form, it made sense for them to take a cut. Mass-producing CDs and distributing them to music stores across the country costs lots of money. For online music, however, music labels don't have these same overhead costs, but continue the practice of paying artists only a few pennies per dollar. Some artists have filed lawsuits to get their fair share.
This applies to any published work. For example, you can purchase Kevin Kelly's book in various formats, at different prices, from different distributors:
In PDF for $2, directly from the author via PayPal
black-and-white hardcover, for $20, from Amazon
color softcover, for $30, from Lulu
Each nets the author $1.50 in royalties per copy. You can decide how much in production and distribution costs you want to pay.
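The split can be checked with quick arithmetic (prices from the list above; the residual is production, distribution, and distributor margin):

```python
# Author royalty is flat at $1.50 per copy regardless of format;
# everything above that is production, distribution, and distributor margin.
ROYALTY = 1.50
for fmt, price in [("PDF", 2.00), ("hardcover", 20.00), ("softcover", 30.00)]:
    overhead = price - ROYALTY
    print(f"{fmt}: ${overhead:.2f} overhead, royalty is {ROYALTY/price:.1%} of price")
```

The author's take is the same $1.50 whether royalty works out to 75% of a $2 PDF or 5% of a $30 color edition; the buyer alone decides how much overhead to fund.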
Michael Scott, one of my "Second Life" builder/scripters, for demonstrating client-focused dedication to IBM's corporate values.
Our site manager, Terri Mitchell, did a recap of all our recent awards and accomplishments. Of the nine Design Innovation awards won by IBM this year at the CeBIT conference, eight were for IBM System Storage products!
The IBM System Storage EXP3000: an entry-level data storage server that is optimized for cost-sensitive and space-limited environments and employs a user-centered design that enables ease of use and simple tool-less installation and removal of all components.
The IBM System Storage N7000 Series: a modular disk storage system that delivers high-end enterprise storage and data management value ideal for large-scale applications, while helping to anticipate growth, maintain data availability and reduce costs.
The IBM System Storage N5000 Series: a modular disk storage system designed to address the entire spectrum of data availability challenges while offering value in price and scalability. Built-in enterprise serviceability and manageability features support efforts to increase reliability and simplify storage infrastructure and maintenance.
The IBM System Storage N3700: a filer that integrates storage and storage processing into a single unit, facilitating affordable network deployments.
The IBM System Storage DS4700: a NEBS-compliant disk storage server designed to address requirements for companies in the telecommunications industry, as well as other segments, such as oil and gas, meeting standards for electromagnetic compatibility, thermal robustness, and earthquake and office vibration resistance, and providing protection for the product components from airborne contaminants.
The IBM System Storage EXP810: a data storage expansion unit capable of 4.8 Terabytes of physical storage, with a user-centered and tool-less design featuring redundant power, cooling, and disk modules for ease of use and simple serviceability.
The IBM System Storage TS3400: an affordable, space-friendly tape library for users in remote locations that supports enterprise-class technology and encryption capabilities.
A representative from Tucson's Brewster Center presented Terri an award, thanking IBM for its strong support for the community through various charity initiatives.
The final speaker was a new IBM client, Tony Casella, the IT Director of the town of Marana. The town of Marana's recent selection of IBM products made big news. Arizona is the fastest growing state in the USA, and the town of Marana, just north of Tucson, is one of the fastest growing communities in Arizona. The town is growing so fast that it will soon spill over from Pima into Pinal county, and will be the first town in Arizona authorized to span county boundaries.
The Magic Quadrant is a concept copyrighted by Gartner, representing a two-by-two grid that ranks various offerings from different vendors. Ideally, vendors want their products in the upper-right "Leaders" quadrant. Yahoo Finance reports:
According to Gartner, Inc., "Leaders have the highest combined measures of an ability to execute and a completeness of vision. They have the most comprehensive and scalable products. They have a proven track record of financial performance and an established market presence. In terms of vision, they are perceived as thought leaders, having well-articulated plans for ease of use, how to address scalability and product breadth. For vendors to have long-term success, they must plan to address the expanded market requirements for change management and root-cause and performance analysis. Leaders must not only deliver to the current market requirements, which continue to change, but they also need to anticipate and deliver on future requirements. A cornerstone for leaders is the ability to articulate how these requirements will be addressed as part of their vision for resource management. As a group, leaders can be considered a part of most new purchase proposals, and they have high success rates in winning new business."
IBM TotalStorage Productivity Center is a strategic part of IBM Service Management, and a foundational component of the IBM Systems Director family. IBM is making a concerted effort across servers, networks, software and storage to help manage the IT infrastructure in a coordinated way.
An article in InformationWeek reports that "40,000 ASU Students Leap to Google Apps; University Pays Zero." The ASU president, Michael Crow, wants to make IT the primary driver in his ambitious "New American University" project. Last October, ASU became the first large institution to deploy Google Apps, a comprehensive suite of productivity applications that includes e-mail, search, calendars, instant messaging, and even word processing and spreadsheets. I've tried them out, and they work: nothing fancy, but certainly good enough for college homework assignments.
Already 40,000 students and faculty have switched their e-mail to Google, while keeping their asu.edu designation. That is out of a student population of 65,000, which Mr. Crow is trying to raise to 90,000!
E-mail is a thorn in the side of storage administrators. Because e-mail repositories are "semi-structured", administrators cannot just delete or move files around; there is context between notes and their attachments that shouldn't be broken. E-mail systems are often the fastest growing consumers of storage in many organizations.
Switching from maintaining their own mail servers to Google is saving ASU $500,000 alone, not including the administrator labor savings. Again, some corporations might feel their e-mail is too "secret" to be outsourced like this, but for college students who spend all their creative talent posting things on MySpace and YouTube, and faculty who spend their careers TRYING to get published, they have nothing to hide from the rest of the world. It makes perfect sense.
Best of all, Google isn't charging ASU anything for this service. Google is able to cover the costs from advertising revenue instead. I can think of a lot of companies that might want to advertise to a demographic of "40,000 students who are mostly 18-25 years old and all live in or near Tempe, AZ".
The movie industry is slowly making the conversion to digital.
For about 25 years, movies were silent, actors acted, text was shown on the screen, and an organ or piano player added the musical score. My mother was a concert pianist, so I grew up listening to all kinds of piano music. Last weekend, while I was in Chicago for St. Patrick's Day, we watched and listened to the dueling pianos at a bar called "Howl at the Moon". Those not familiar with this art form can watch this 1-minute video of Star Wars re-imagined as a Silent Movie.
About 80 years ago, "talkies" appeared. The sound was converted to a series of colors that were recorded as a separate strip on the film media itself, hence the name "soundtrack". When the movie ran, the colors would then be converted back to voice and music. While the live piano players were out of jobs, the move to sound created a whole new industry for foley artists, orchestras and composers. InformationWeek's Mitch Wagner explains in Something Will Be Lost that great artists like Charlie Chaplin and Mary Pickford never completely made the transition to talkies.
Now the movie industry is changing again, this time from film to digital format. Thanks to digital, we can now see videos on the internet, such as this set of Impressive Palindromes parody of a Bob Dylan song.
While movies are digital when you rent them from the DVD store, download them on iTunes, or play them on YouTube, they are still mostly in analog format on 35mm or 70mm film stock when you see them on the big screen.
My first "digital projection" experience was the movie "Ice Age" shown in Denver, Colorado. The theatre owner came out to show us what film stock looks like, and then how small the DVD was that held the digital version. The theatre also showed previews of other movies first on film, then in digital, so that we could see the difference in quality. My second experience was "Star Wars: Attack of the Clones (episode II)", which I saw opening night at the Ziegfeld theatre in New York City. This was a huge theatre, and we had front row seats in the upper balcony.
Of course, the transition of film stock to digital projection is just one of the many trends resulting in the fast growth of computer IT storage. Documents transitioned from paper, to being scanned into digital format, to being created digitally using word processing software. Likewise, photographs went from film, to being scanned, to being captured with digital cameras.
As with talkies, history repeats itself; the transition to digital projection is not going smoothly. NPR's Laura Sydell reports that Digital Projection in Theaters Slowed by Dispute. The dispute is between movie production companies and theatre owners. Currently, it is quite expensive to send out film stock to all the theatres, so the transition to digital will save the movie production companies lots of money. On the other hand, installing digital projection equipment will be costly for theatre owners. How the two groups will share the burdensome costs to convert this infrastructure is still under negotiation.
As a fan of going to the movies, I hope they resolve this dispute soon.
Yesterday, most of the USA moved its clocks forward an hour. Arizona and Hawaii don't bother, as there is plenty of daylight in both states. While it may seem that Arizonans are not "affected" by Daylight Saving Time (DST), we are, because we have to deal with the time zone offsets with those we talk to in other states. (Note: it is SAVING not SAVINGS, many people mistakenly say "Daylight Savings Time", which is incorrect).
Year round, Arizona is on Mountain Standard Time (MST), which is GMT-7. What time it is in Arizona can be remembered with a simple mnemonic:
In the winter time, Utah, Colorado, New Mexico, and Arizona are all on MST, so the best American ski resorts are all in the same time zone. People who hop from one ski resort to another by helicopter don't have to reset their watches as they move into or out of Arizona.
In the summer time, Arizonans head to San Diego, Los Angeles or other parts of California, where it is not so hot. California is on PDT, which is the same as MST. People who hop from Arizona wineries and vineyards to those in California and Oregon can easily cross the Arizona-California border without having to reset their watches.
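The mnemonic can even be checked in code. Here is a small Python sketch using the standard zoneinfo module (assuming the tzdata database is available on your system), confirming that Arizona matches Mountain time in winter and Pacific time in summer:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# America/Phoenix carries no DST rule, so its UTC offset is -7 year round
phoenix = ZoneInfo("America/Phoenix")
denver = ZoneInfo("America/Denver")           # Mountain, observes DST
los_angeles = ZoneInfo("America/Los_Angeles") # Pacific, observes DST

winter = datetime(2007, 1, 15, 12, 0)
summer = datetime(2007, 7, 15, 12, 0)

for label, when in (("winter", winter), ("summer", summer)):
    print(label,
          "AZ", when.replace(tzinfo=phoenix).utcoffset(),
          "CO", when.replace(tzinfo=denver).utcoffset(),
          "CA", when.replace(tzinfo=los_angeles).utcoffset())
```

In winter the Phoenix and Denver offsets agree; in summer the Phoenix and Los Angeles offsets agree, exactly as the ski-resort and winery mnemonics say.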
Those in Second Life may have noticed that "Second Life time" (SL time) shifted from PST to PDT. That is because their servers reside in San Francisco, California.
Well, this week I am in Maryland, just outside of Washington DC. It's a bit cold here.
Robin Harris over at StorageMojo put out this Open Letter to Seagate, Hitachi GST, EMC, HP, NetApp, IBM and Sun about the results of two academic papers, one from Google, and another from Carnegie Mellon University (CMU). The papers imply that the disk drive module (DDM) manufacturers have perhaps misrepresented their reliability estimates, and asks major vendors to respond. So far, NetApp and EMC have responded.
I will not bother to repeat what others have said already, but will make just a few points. Robin, you are free to consider this "my" official response if you like to post it on your blog, or point to mine, whatever is easier for you. Given that IBM no longer manufactures the DDMs we use inside our disk systems, there may not be any reason for a more formal response.
Coke and Pepsi buy sugar, Nutrasweet and Splenda from the same sources
Somehow, this doesn't surprise anyone. Coke and Pepsi don't own their own sugar cane fields, and even their bottlers are separate companies. Their job is to assemble the components using super-secret recipes to make something that tastes good.
IBM, EMC and NetApp don't make the DDMs mentioned in either academic study. Different IBM storage systems use one or more of the following DDM suppliers:
Seagate (including Maxtor, which it acquired)
Hitachi Global Storage Technologies, HGST (former IBM division sold off to Hitachi)
In the past, corporations like IBM were very "vertically-integrated", making every component of every system delivered. IBM was the first to bring disk systems to market, and led the major enhancements that exist in nearly all disk drives manufactured today. Today, however, our value-add is to take standard components, and use our super-secret recipe to make something that provides unique value to the marketplace. Not surprisingly, EMC, HP, Sun and NetApp also don't make their own DDMs. Hitachi is perhaps the last major disk systems vendor that also has a DDM manufacturing division.
So, my point is that disk systems are the next layer up. Everyone knows that individual components fail. Unlike CPUs or memory, disks have moving parts, so you would expect them to fail more often than just "chips".
If you don't feel the MTBF or AFR estimates posted by these suppliers are valid, go after them, not the disk systems vendors that use their supplies. While IBM does qualify DDM suppliers for each purpose, we are basically purchasing them from the same major vendors as all of our competitors. I suspect you won't get much more than the responses you posted from Seagate and HGST.
American car owners replace their cars every 59 months
According to a frequently cited auto market research firm, the average time before the original owner transfers their vehicle -- purchased or leased -- is currently 59 months. Both studies mention that customers have a different "definition" of failure than manufacturers, and often replace the drives before they are completely kaput. The same is true for cars. Americans give various reasons why they trade in their less-than-five-year-old cars for newer models. Disk technologies advance at a faster pace, so it makes sense to change drives for other business reasons: speed and capacity improvements, lower power consumption, and so on.
The CMU study indicated that 43 percent of drives were replaced before they were completely dead. So, if General Motors estimated their cars lasted 9 years, and Toyota estimated 11 years, people still replace them sooner, for other reasons.
At IBM, we remind people that "data outlives the media". True for disk, and true for tape. Neither is "permanent storage", but rather a temporary resting point until the data is transferred to the next media. For this reason, IBM is focused on solutions and disk systems that plan for this inevitable migration process. IBM System Storage SAN Volume Controller is able to move active data from one disk system to another; IBM Tivoli Storage Manager is able to move backup copies from one tape to another; and IBM System Storage DR550 is able to move archive copies from disk and tape to newer disk and tape.
If you had only one car, then having that one and only vehicle die could be quite disruptive. However, companies that have fleet cars, like Hertz Car Rentals, don't wait for their cars to completely stop running either; they replace them well before that happens. For a large company with a large fleet of cars, regularly scheduled replacement is just part of doing business.
This brings us to the subject of RAID. No question that RAID 5 provides better reliability than having just a bunch of disks (JBOD). Certainly, three copies of data across separate disks, a variation of RAID 1, will provide even more protection, but for a price.
Robin mentions the "auto-correlation" effect: disk failures bunch up, so one recent failure might mean another DDM, somewhere in the environment, will probably fail soon also. For a second failure to cause data loss, though, it would (a) have to be a DDM in the same RAID 5 rank, and (b) have to occur while the first drive is being rebuilt to a spare.
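To put rough numbers on that window, here is a back-of-envelope Python sketch. The 8-drive rank, 3 percent AFR, and 12-hour rebuild are made-up illustrative values, and the model assumes independent failures, which understates the real risk precisely because of the auto-correlation effect:

```python
import math

def second_failure_probability(drives_in_rank, afr, rebuild_hours):
    """Probability that at least one of the remaining drives in a RAID 5
    rank fails during the rebuild window, assuming independent exponential
    failure times (auto-correlation means real risk is higher)."""
    hours_per_year = 8766
    failure_rate = afr / hours_per_year    # per-drive failures per hour
    survivors = drives_in_rank - 1         # drives left after the first failure
    return 1 - math.exp(-survivors * failure_rate * rebuild_hours)

# Illustrative values only: 8-drive rank, 3% AFR, 12-hour rebuild
p = second_failure_probability(8, 0.03, 12)
print(f"{p:.5%}")
```

Note how the exposure grows with both rank size and rebuild time, which is exactly why large, slow-rebuilding ATA ranks are the worrisome case.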
The human body replaces skin cells every day
So there are individual DDMs, manufactured by the suppliers above; disk systems, manufactured by IBM and others, and then your entire IT infrastructure. Beyond the disk system, you probably have redundant fabrics, clustered servers and multiple data paths, because eventually hardware fails.
Most people realize that the human body replaces skin cells every day. Other cells are replaced frequently, within seven days, and others less frequently, taking a year or so to be replaced. I'm over 40 years old, but most of my cells are less than 9 years old. This is possible because information, data in the form of DNA, is moved from old cells to new cells, keeping the infrastructure (my body) alive.
Our clients should approach this with a more holistic view. You will replace disks in less than 3-5 years. While tape cartridges can retain their data for 20 years, most people change their tape drives every 7-9 years, and so tape data needs to be moved from old to new cartridges. Focus on your information, not individual DDMs.
What does this mean for DDM failures? When one happens, the disk system re-routes requests to a spare disk, rebuilding the data from RAID 5 parity, giving storage admins time to replace the failed unit. During the few hours this process takes, you are either taking a backup, or crossing your fingers. Note: for RAID 5, the time to rebuild is proportional to the number of disks in the rank, so smaller ranks can be rebuilt faster than larger ranks. To make matters worse, the slower RPM speeds and higher capacities of ATA disks mean that the rebuild process could take longer than for smaller capacity, higher speed FC/SCSI disk.
According to the Google study, a large portion of the DDM replacements had no SMART errors to warn that they were going to happen. To protect your infrastructure, you need to make sure you have current backups of all your data. IBM TotalStorage Productivity Center can help identify all the data that is "at risk": those files that have no backup at all, or no current backup since the file was most recently changed. A well-run shop keeps its "at risk" files below 3 percent.
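The "at risk" bookkeeping boils down to comparing modification times against a backup catalog. This hypothetical Python sketch (the catalog format is my own invention for illustration, not Productivity Center's actual design) shows the idea:

```python
def at_risk_percentage(files, backup_catalog):
    """Files are 'at risk' when they have no backup at all, or were
    modified after their most recent backup. Both arguments map a
    file path to a timestamp (hypothetical simplified format)."""
    at_risk = 0
    for path, mtime in files.items():
        backed_up = backup_catalog.get(path)
        if backed_up is None or mtime > backed_up:
            at_risk += 1
    return 100.0 * at_risk / len(files)

# Illustrative data: one safe file, one stale backup, one never backed up
files = {"/etc/passwd": 100, "/home/report.doc": 250, "/tmp/new.log": 300}
catalog = {"/etc/passwd": 150, "/home/report.doc": 200}
print(at_risk_percentage(files, catalog))
```

In this toy example, two of the three files are at risk, far above the 3 percent a well-run shop aims for.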
So, where does that leave us?
ATA drives are probably as reliable as FC/SCSI disk. Customers should choose which to use based on performance and workload characteristics. FC/SCSI drives are more expensive because they are designed to run at faster speeds, required by some enterprises for some workloads. IBM offers both, and has tools to help estimate which products are the best match to your requirements.
RAID 5 is just one of the many choices of trade-offs between cost and protection of data. For some data, JBOD might be enough. For other data that is more mission critical, you might choose keeping two or three copies. Data protection is more than just using RAID, you need to also consider point-in-time copies, synchronous or asynchronous disk mirroring, continuous data protection (CDP), and backup to tape media. IBM can help show you how.
Disk systems, and IT environments in general, are higher-level concepts designed to transcend the failures of individual components. DDM components will fail. Cache memory will fail. CPUs will fail. Choose a disk systems vendor that combines technologies in unique and innovative ways that take these possibilities into account, designed for no single point of failure, and no single point of repair.
So, Robin, from IBM's perspective, our hands are clean. Thank you for bringing this to our attention and for giving me the opportunity to highlight IBM's superiority at the systems level.
While most of the post is accurate and well-stated, two opinions in particular caught my eye. I'll be nice and call them opinions, since these are blogs, and always subject to interpretation. I'll put quotes around them so that people will correctly relate these to Hu, and not me.
"Storage virtualization can only be done in a storage controller. Currently Hitachi is the only vendor to provide this." -- Hu Yoshida
Hu, I enjoy all of your blog entries, but you should know better. HDS is a fairly recent newcomer to the storage virtualization arena, and since IBM has been doing this for decades, I will bring you and the rest of the readers up to speed. I am not starting a blog-fight; I just want to provide some additional information for clients to consider when making choices in the marketplace.
First, let's clarify the terminology. I will use 'storage' in the broad sense, including anything that can hold 1's and 0's, including memory, spinning disk media, and plastic tape media. These all have different mechanisms and access methods, based on their physical geometry and characteristics. The concept of 'virtualization' is any technology that makes one set of resources look like another set of resources with more preferable characteristics, and this applies to storage as well as servers and networks. Finally, 'storage controller' is any device with the intelligence to talk to a server and handle its read and write requests.
Second, let's take a look at all the different flavors of storage virtualization that IBM has developed over the past 30 years.
IBM introduces the S/370 with the OS/VS1 operating system. "VS" here refers to virtual storage, and in this case internal server memory was swapped out to physical disk. Using a table mapping, disk was made to look like an extension of main memory.
IBM introduces the IBM 3850 Mass Storage System (MSS). Until this time, programs that ran on mainframes had to be acutely aware of the device types being written, as each device type had different block, track and cylinder sizes, so a program written for one device type would have to be modified to work with a different device type. The MSS was able to take four 3350 disks, and a lot of tapes, and make them look like older 3330 disks, since most programs were still written for the 3330 format. The MSS was a way to deliver new 3350 disk to a 3330-oriented ecosystem, and greatly reduce the cost by handling tape on the back end. The table mapping was one virtual 3330 disk (100 MB) to two physical tapes (50 MB each). Back then, all of the mainframe disk systems had separate controllers. The 3850 used a 3831 controller that talked to the servers.
IBM invents Redundant Array of Independent Disk (RAID) technology. The table mapping is one or more virtual "Logical Units" (or "LUNs") to two or more physical disks. Data is striped, mirrored, or protected with parity across the physical drives, making the LUNs look and feel like disks, but with faster performance and higher reliability than the physical drives they were mapped to. RAID could be implemented in the server as software, on top of or embedded into the operating system, in the host bus adapter, or on the controller itself. The vendor that provided the RAID software or HBA did not have to be the same as the vendor that provided the disk, so in a sense, this avoided "vendor lock-in". Today, RAID is almost always done in the external storage controller.
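The parity trick at the heart of RAID 5 is just exclusive-OR. A minimal Python sketch of the idea, not how any particular controller implements it:

```python
def parity(blocks):
    """RAID 5 style parity: byte-wise XOR of the data blocks in a stripe."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

def rebuild(surviving_blocks, parity_block):
    """Reconstruct a lost block: XOR the survivors with the parity."""
    return parity(surviving_blocks + [parity_block])

stripe = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(stripe)
# Lose the second block; rebuild it from the other two plus parity
rebuilt = rebuild([stripe[0], stripe[2]], p)
print(rebuilt)  # -> b'BBBB'
```

Because XOR is its own inverse, any single lost block in the stripe can be recovered this way, which is exactly what a controller does while rebuilding to a spare.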
IBM introduces the Personal Computer. One of the features of DOS is the ability to make a "RAM drive". This is technology that runs in the operating system to make internal memory look and feel like an external drive letter. Applications that already knew how to read and write to drive letters could work unmodified with these new RAM drives. This had the advantage that the files would be erased when the system was turned off, so it was perfect for temporary files. Of course, other operating systems today have this feature, UNIX has a /tmp directory in memory, and z/OS uses VIO storage pools.
This is important, as memory would be made to look like disk externally, as "cache", in the 1990s.
IBM AIX v3 introduces Logical Volume Manager (LVM). LVM maps the LUNs from external RAID controllers into virtual disks inside the UNIX server. The mapping can combine the capacity of multiple physical LUNs into a large internal volume. This was all done by software within the server, completely independent of the storage vendor, so again no lock-in.
IBM introduces the Virtual Tape Server (VTS). This was a disk array that emulated a tape library. A mapping of virtual tapes to physical tapes was done to allow full utilization of larger and larger tape cartridges. While many people today mistakenly equate "storage virtualization" with "disk virtualization", in reality it can be implemented on other forms of storage. The disk array was referred to as the "Tape Volume Cache". By using disk, the VTS could mount an empty "scratch" tape instantaneously, since no physical tape had to be mounted for this purpose.
Contradicting its "tape is dead" mantra, EMC later developed its CLARiiON disk library that emulates a virtual tape library (VTL).
IBM introduces the SAN Volume Controller. It involves mapping virtual disks onto managed disks that can come from different frames from different vendors. Like other controllers, the SVC has multiple processors and cache memory, with the intelligence to talk to servers, and is similar in functionality to the controller components you might find inside monolithic "controller+disk" configurations like the IBM DS8300, EMC Symmetrix, or HDS TagmaStore USP. SVC can map each virtual disk to a physical disk one-for-one in "image mode", as HDS does, or can spread virtual disks across multiple physical managed disks, using a similar mapping table, to provide advantages like performance improvement through striping. You can take any virtual disk out of the SVC system simply by migrating it back to "image mode" and disconnecting the LUN from management. Again, no vendor lock-in.
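Conceptually, the difference between image mode and striped mapping is just two different lookup functions from virtual extents to (managed disk, offset) pairs. A toy Python sketch, greatly simplified from what SVC actually does:

```python
def image_mode_map(vdisk_extent):
    """Image mode: virtual extent N maps one-for-one to extent N on a
    single managed disk (hypothetical simplification)."""
    return ("mdisk0", vdisk_extent)

def striped_map(vdisk_extent, mdisks):
    """Striped mode: consecutive virtual extents round-robin across the
    managed disks, spreading I/O across spindles for performance."""
    mdisk = mdisks[vdisk_extent % len(mdisks)]
    offset = vdisk_extent // len(mdisks)
    return (mdisk, offset)

mdisks = ["mdisk0", "mdisk1", "mdisk2"]
print([striped_map(e, mdisks) for e in range(6)])
# extents 0..5 land round-robin on mdisk0/1/2, at offset 0 then offset 1
```

Migrating a virtual disk back to image mode is then just rewriting its entries in the mapping table until the one-for-one layout holds, at which point the LUN can be disconnected from management.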
The HDS USP and NSC can run as regular disk systems without virtualization, or the virtualization can be enabled to allow external disks from other vendors. HDS usually counts all USP and NSC units sold, but never mentions what percentage of these have external disks attached in virtualization mode. Either they don't track this, or are too embarrassed to publish the number. (My guess: single digit percentage.)
Few people remember that IBM also introduced virtualization in both controller+disk and SAN switch form factors. The controller+disk version was called "SAN Integration Server", but people didn't like the "vendor lock-in" of having to buy the internal disk from IBM. They preferred having it all external disk, with plenty of vendor choices. This is perhaps why Hitachi now offers a disk-less version of the NSC 55, in an attempt to be more like IBM's SVC.
IBM also introduced the IBM SVC for Cisco 9000 blade. Our clients didn't want to upgrade their SAN switch networking gear just to get the benefits of disk virtualization. Perhaps this is the same reason EMC has done so poorly with its "Invista" offering.
So, bottom line, storage virtualization can be, and has been, delivered in operating system software, in the server's host bus adapter, inside SAN switches, and in storage controllers. It can be delivered anywhere in the path between application and physical media. Today, the two major vendors that provide disk virtualization "in the storage controller" are IBM and HDS, and the three major vendors that provide tape virtualization "in the storage controller" are IBM, Sun/STK, and EMC. All of these involve a mapping of logical to physical resources. Hitachi uses a one-for-one mapping, whereas IBM additionally offers more sophisticated mappings as well.
Wrapping up my week in China, I read an article by Li Xing in the local "China Daily" about energy efficiency in buildings. She argues that it is not enough for a building to be energy-efficient on its own; you have to consider its impact on the other buildings around it. Does it reflect the sun so harshly into neighboring windows that people are forced to put up blinds and use artificial light? Does it block the sun, so that rooms that previously could be used with natural sunlight must now be artificially lit?
A similar effect happens with power and cooling in the data center. Servers and storage systems generate heat, and that heat affects all the other equipment in the data center. IBM has the most power-efficient and heat-efficient servers and storage, but that is not enough. You have to consider the heat generated by all systems that might raise overall temperature.
Research has indicated that water can remove far more heat per unit volume than air. For example, in order to disperse 1,000 watts with a 10-degree temperature difference, only 24 gallons of water per hour are needed, while the same job would require nearly 11,475 cubic feet of air. IBM's Rear Door Heat eXchanger helps keep growing datacenters at safe temperatures, without adding AC units. The unobtrusive solution brings more cooling capacity to areas where heat is the greatest -- around racks of servers with more powerful and multiple processors.
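Those figures can be sanity-checked with basic specific-heat arithmetic. A Python sketch, assuming round-number physical constants for water and air, so the results are ballpark rather than exact:

```python
def water_gallons_per_hour(watts, delta_t_c):
    """Water flow needed to carry away `watts` with a temperature rise
    of delta_t_c. Specific heat of water ~4.186 J/(g*C); 1 US gallon of
    water ~3785 g."""
    joules_per_hour = watts * 3600
    grams = joules_per_hour / (4.186 * delta_t_c)
    return grams / 3785

def air_cubic_feet_per_hour(watts, delta_t_c):
    """Equivalent air volume. Specific heat of air ~1.005 J/(g*C),
    density ~1.2 kg/m^3; 1 m^3 ~35.31 cubic feet."""
    joules_per_hour = watts * 3600
    grams = joules_per_hour / (1.005 * delta_t_c)
    cubic_meters = grams / 1200
    return cubic_meters * 35.31

print(round(water_gallons_per_hour(1000, 10)))   # roughly 23 gallons
print(round(air_cubic_feet_per_hour(1000, 10)))  # roughly 10,500 cubic feet
```

The arithmetic lands in the same ballpark as the article's 24 gallons versus 11,475 cubic feet; the exact numbers depend on which constants (and Celsius versus Fahrenheit degrees) you assume.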
The CoolBlue portfolio of IBM innovations includes comprehensive hardware and systems-management tools for computing environments, enabling clients to better optimize the power consumption, management and cooling of infrastructure at the system, rack and datacenter levels. The CoolBlue portfolio includes IBM PowerConfigurator, PowerExecutive, and Rear Door Heat eXchanger.
The eXchanger works on standard 42U racks, and can help clients deal with the rapid growth of rack-mounted servers and storage on their raised floor. How cool is that!
Federal Rules of Civil Procedure (FRCP) will increase adoption of unstructured data classification, email archive systems and CAS.
CAS continues to flounder, but the rest I can agree with. Regulations are being adopted worldwide. Japan has its own Sarbanes-Oxley (SOX) style legislation going into effect in 2008. IBM TotalStorage Productivity Center for Data is a great tool to help classify unstructured file systems. IBM CommonStore for email supports both Microsoft Exchange and Lotus Domino, and can be connected to IBM System Storage DR550 for compliance storage.
Unified storage systems (combined file and block storage target systems) will become increasingly attractive in 2007, because of their ease of use and simplicity.
I agree with this one also. Our sales of IBM N series in 2006 were great, and it looks to continue its strong growth in 2007. The IBM N series brings together FCP, iSCSI and NAS protocols into one disk system. With the SnapLock(tm) feature, N series can store both re-writable data, as well as non-erasable, non-rewriteable data, on the same box. Combine the N series gateway on the front-end with SAN Volume Controller on the back-end, and you have an even more powerful combination.
Distributed ROBO backup to disk will emerge as the fastest growing data protection solution in 2007.
IDC had a similar prediction for 2006. ROBO refers to "Remote Office/Branch Office", and so ROBO backup deals with how to back up data that is out in the various remote locations. Do you back it up locally, or send it to a central location? Fortunately, IBM Tivoli Storage Manager (TSM) supports both ways, and IBM has introduced small disk and tape drives and auto-loaders that can be used in smaller environments like this. I don't know whether "backup to disk" will be the fastest growing, but I certainly agree that a variety of ROBO-related issues will be of interest this year.
2007 will be remembered as the year iSCSI SAN took off because of the much reduced pricing for 10 Gbit iSCSI and the continued deployment of 10 Gbit iSCSI targets.
While I agree that iSCSI is important, I can't say 2007 will be remembered for anything. We have terrible memory for these things. Ask someone what year Personal Computers (PCs) took off, and they will tell you about Apple's famous 1984 commercial. Ask someone when the Internet took off, cell phones took off, etc., and I suspect most will provide widely different answers, most likely based on their own experience.
For the longest time, I resisted getting a cell phone. I had a roll of quarters in my car, and when I needed to make a call, I stopped at the nearby pay phone, and made the call. In 1998, pay phones disappeared. You can't find them anymore. That was the year cell phones took off, at least for me.
Back to iSCSI, now that you can intermix iSCSI and SAN on the same infrastructure, either through intelligent multi-protocol switches available from your local IBM rep, or through an N series gateway, you can bring iSCSI technology in slowly and gradually. Low-cost copper wiring for 10 Gbps Ethernet makes all this very practical.
Another up-and-coming technology is AoE, or ATA-over-Ethernet. Same idea as iSCSI, but taken down to the ATA level.
CDP will emerge as an important feature on comprehensive data protection products instead of a separate managed product.
Here, CDP stands for Continuous Data Protection. While normal backups work like a point-and-shoot camera, taking a picture of the data once every midnight for example, CDP records all the little changes like a video camera, with the option to rewind or fast-forward to a specific point in the day. IBM Tivoli CDP for Files, for example, is an excellent complement to IBM Tivoli Storage Manager.
The technology is not really new, as it has been implemented as "logs" or "journals" on databases like DB2 and Oracle, as well as business applications like SAP R/3.
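The idea is the same as a database redo log: keep every change with a timestamp, and replay up to the moment you want. A toy Python sketch of the concept, not any product's actual design:

```python
import bisect

class ChangeJournal:
    """Minimal continuous-data-protection sketch: every write is logged
    with a timestamp, so contents can be reconstructed at any point in
    time (rewinding a video, versus a nightly snapshot)."""

    def __init__(self):
        self._log = []   # list of (timestamp, content), in time order

    def record(self, timestamp, content):
        self._log.append((timestamp, content))

    def restore(self, timestamp):
        """Return the content as of `timestamp` (latest write at or
        before it), or None if nothing had been written yet."""
        times = [t for t, _ in self._log]
        i = bisect.bisect_right(times, timestamp)
        return self._log[i - 1][1] if i else None

journal = ChangeJournal()
journal.record(900, "draft v1")
journal.record(1030, "draft v2")
journal.record(1415, "final")
print(journal.restore(1200))  # -> draft v2
```

A midnight-only backup of this file would have missed both intermediate drafts; the journal can hand back any of them.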
The prediction here, however, relates to packaging. Will vendors "package" CDP into existing backup products, possibly as a separately priced feature, or will they leave it as a separate product that, as in IBM's case, is already well integrated?
The VTL market growth will continue at a much reduced rate as backup products provide equivalent features directly to disk. Deduplication will extend the VTL market temporarily in 2007.
VTL here refers to Virtual Tape Library, such as IBM TS7700 or TS7510 Virtualization Engine. IBM introduced the first one in 1997, the IBM 3494 Virtual Tape Server, and we have remained number one in marketshare for virtual tape ever since. I find it amusing that people are now just looking at VTL technology to help with their Disk-to-Disk-to-Tape (D2D2T) efforts, when IBM Tivoli Storage Manager has had the capability to back up to disk, then move to tape, since 1993.
As for deduplication, if you need the end-target box to deduplicate your backups, then perhaps you should investigate why you are doing this in the first place. People take full-volume backups and keep too many copies, when more sophisticated backup software like Tivoli Storage Manager can implement backup policies to avoid this with a progressive backup scheme. Or maybe you need to investigate why you store multiple copies of the same data on disk; perhaps NAS or a clustered file system like IBM General Parallel File System (GPFS) could provide a single copy accessible to many servers instead.
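For readers unfamiliar with what deduplication does under the covers, here is a minimal content-addressed store in Python, a sketch of the general technique rather than any product's implementation:

```python
import hashlib

def dedup_store(backups):
    """Content-addressed store sketch: identical chunks across backups
    are stored once, keyed by their SHA-256 digest. Each backup keeps
    only a manifest of digests pointing into the shared store."""
    store = {}       # digest -> chunk bytes, stored once
    manifests = []   # per-backup list of digests
    for chunks in backups:
        manifest = []
        for chunk in chunks:
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)
            manifest.append(digest)
        manifests.append(manifest)
    return store, manifests

# Two nightly full backups that differ in only one chunk
night1 = [b"boot", b"apps", b"data-v1"]
night2 = [b"boot", b"apps", b"data-v2"]
store, manifests = dedup_store([night1, night2])
print(len(store))  # -> 4 unique chunks stored instead of 6
```

This is exactly the redundancy a progressive backup scheme avoids creating in the first place: if you never take repeated full-volume backups, there is far less duplicate data for the target box to squeeze out.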
The reason you don't see deduplication on the mainframe is that DFSMS for z/OS already allows multiple servers to share a single instance of data, and has been doing so since the early 1980s. I often joke with clients at the Tucson Executive Briefing Center that you can run a business with a million data sets on a mainframe, yet there are probably a million files on just the laptops in the room, and few would attempt to run their business that way.
Optical storage that looks, feels and acts like NAS and puts archive data online, will make dramatic inroads in 2007.
Marc says he's going out on a limb here, and it's good to make at least one risky prediction. IBM used to have an optical library emulate disk, called the IBM 3995. Lack of interest and advancement in technology encouraged IBM to withdraw it. A small backlash ensued, so IBM now offers the IBM 3996 for the System p and System i clients that really, really want optical.
As for optical making data available "online", it takes about 20 seconds to load an optical cartridge, so I would consider this more "nearline" than online. Tape is still in the 40-60 second range to load and position to data, so optical is still at an advantage.
Optical eliminates the "hassles of tape"? Tape data is good for 20 years, and optical for 100 years, but nobody keeps drives around that long anyway. In general, our clients change drives every 6-8 years, and migrate the data from old to new. This is only a hassle if you didn't plan for this inevitable movement. IBM Tivoli Storage Manager, IBM System Storage Archive Manager, and the IBM System Storage DR550 all make this migration very simple and easy, and can do it with either optical or tape.
The Blu-ray vs. DVD debate will continue through 2007 in the consumer world. I don't see this being a major player in more conservative data centers, where a big investment in the wrong choice could be costly, even if the price-per-TB is temporarily in line with current tape technologies. IBM and others are investing a lot of Research and Development funding to continue the downward price curve for tape, and I'm not sure that optical can keep up that pace.
Well, that's my take. It is a sunny day here in China, and I have more meetings to attend.
Well, I have left Japan, and while everyone else is enjoying the Super Bowl, I am now in Australia at another conference. Today I had the pleasure of hearing filmmakers talk about their successes, and about how IBM helps the movie industry.
At one extreme was Khoa Do, independent filmmaker. After acting in movies alongside Michael Caine and Billy Zane, he decided to become his own director. He started a project to help seven disadvantaged youths from a poor, drug-ridden section of Sydney by having them act in his first full-length film. Armed with only an IBM laptop and a small budget, he made the film "The Finished People", which earned critical acclaim.
The film was a success, and many of the disadvantaged youths have gone on to act in other movies. In 2005, Khoa Do was named "Young Australian of the Year".
Thanks to IBM technology, filmmaking is now accessible to a far wider number of aspiring directors. It is no longer necessary to be part of a large film studio with a multi-million dollar budget to tell your story.
At the other extreme was Xavier Desdoigts, director of technical operations at Animal Logic, the computer graphics (CG) arthouse that produced special effects for movies like "The Matrix", "House of Flying Daggers" and "World Trade Center". They started by producing digital effects for TV commercials, like this one for Carlton Draught Beer.
With the support of a large film studio and multi-million dollar budget, Animal Logic now boasts the 86th most powerful "Supercomputer" based on IBM BladeCenter technology, with over 4000 servers connected into a cluster, for making the movie "Happy Feet". The movie took four years to make, with over 500 people, of 27 different nationalities. It was the first CG movie made in Australia, and has been well-received by audiences worldwide.
Mr. Desdoigts gave out some interesting facts and figures about the movie:
While visually stunning on the big screen, each frame is only 1.4 Megapixel, about the same resolution as most camera phones.
In one scene, there are 427,086 penguins all appearing on frame.
Mumble, the lovable lead character, is made up of over 6 million feathers.
As many as 17 dancers were "motion-captured" to choreograph the tap-dancing and character interaction segments.
Only one system admin was needed to manage this entire server farm. (IBM Systems Director technology makes this possible)
The movie consumed 103 TB of disk space, backed up to 595 LTO tape cartridges.
An estimated 17 million CPU-hours were needed for all the processing and rendering.
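Out of curiosity, the quoted figures invite a little back-of-the-envelope arithmetic. This quick sketch is my own calculation, not from the talk, and it assumes one CPU per server and perfect 24x7 parallelism, neither of which Mr. Desdoigts confirmed:

```python
# Rough arithmetic on the "Happy Feet" render-farm figures quoted above.
# The inputs come from the talk; the derived numbers are hypothetical
# estimates assuming one CPU per server and nonstop, perfectly
# parallel rendering.
cpu_hours = 17_000_000      # estimated total processing and rendering
servers = 4000              # IBM BladeCenter nodes in the cluster

wall_clock_hours = cpu_hours / servers
wall_clock_days = wall_clock_hours / 24
print(f"{wall_clock_hours:.0f} cluster-hours, about {wall_clock_days:.0f} days of rendering")

# Storage side: 103 TB of disk backed up to 595 LTO cartridges
tb_total = 103
cartridges = 595
gb_per_cartridge = tb_total * 1000 / cartridges
print(f"about {gb_per_cartridge:.0f} GB per cartridge")
```

Even under those generous assumptions, the rendering alone would occupy the whole farm for roughly half a year, which helps explain the four-year production schedule.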
Rather than talking about technology for technology's sake, these filmmakers showed how technology could be put to use, in a practical sense, to provide the world something of value.
Stephen Colbert, of The Colbert Report, explains the name changes in recent mergers of the Telecommunications industry. A discussion on "changing names" and how that impacts storage seems like a good way to wrap up the week's theme on naming conventions.
Name changes are sometimes painful, but often done for a purpose, such as to promote a family. In the US, when a man and woman marry, the woman often changes her family name to match her husband's, and the kids adopt the father's family name. I say "often" because there are times when the woman keeps her name, or adds to it in a hyphenated way. ABC News reported that a Man Fights to Take Wife's Name in Marriage. KipEsquire, a lawyer, writes about it in his blog A Stitch in Haste.
The IT industry sometimes changes the names of products that people knew as something else. Other times, it re-uses an existing name when the product really is, or should be, different from the original. Last year, I took on the job of helping transition our brand from "TotalStorage" to the "System Storage" product line under the new "IBM Systems" brand. I help decide what keeps the same name and what changes, when it should change, and how to announce that change.
On the disk side, IBM renamed Fibre Array Storage Technology, or FAStT, which was pronounced exactly like "fast", to DS4000 series. This was a big improvement, as people couldn't seem to spell it properly, with variations like "FastT". Nor could people pronounce it properly, saying "fast-tee" instead. The advantage of "DS" is that it is both easy to spell, and easy to pronounce. The DS4000 series continues to be "fast", providing excellent performance for its midrange price category.
IBM's Enterprise Storage Server (ESS) line went from model E10, to F20, to 750 and 800. When IBM came out with its replacement, the IBM TotalStorage DS8000, some people asked why it wasn't named the ESS 900, for example. The DS8000 is quite different internally, new hardware design and implementation, but is highly compatible with the ESS line, and shares much of the same functionality from microcode. Last year, it was replaced by the IBM System Storage DS8000 Turbo. Again, newer hardware, so it was easy to justify the new name change from "TotalStorage" to "System Storage".
Renaming a product risks losing its certifications and awards. For example, IBM spent a lot of time and money getting the OS/390 operating system certified as a "UNIX" platform. When it was renamed to z/OS, IBM had to do it all over again. Learning from this experience, IBM decided not to rename the SAN Volume Controller to a new designation like "DS5750", as it enjoys the "number one" spot on both the SPC-1 and SPC-2 performance benchmarks, and is recognized as the leader in the disk storage virtualization marketplace. Renaming this product would mean losing that collateral.
IBM's "other disk systems" the N series posed another set of challenges. The current DS line already has entry-level (DS3000), midrange (DS4000) and enterprise-class (DS6000 and DS8000) products. The OEM agreement that IBM has with Network Appliance (NetApp) resulted in a new set of entry-level, midrange, and enterprise-class products. But these didn't fit nicely into the DS3000-to-DS8000 continuum. Instead, IBM decided to go with N series, using N3000 for entry-level, N5000 for midrange, and N7000 for enterprise-class. These are different than the numbers used by NetApp for their comparable, but not identical, offerings.
On the tape side, IBM decided to name the tape drives in the TS1000 and TS2000 range, tape libraries and automation in the TS3000 range, and tape virtualization in the TS7000 range. A lot of tape products already had 3000-range numbering that had to change to fit this new scheme. This is why IBM's popular 3592 tape drive was renamed the TS1120, and the replacement for the 3494 Virtual Tape Server was named the TS7700 Virtualization Engine.
Obviously, you can't change the names of products that are currently in the field, but what about existing software with minor updates? IBM decided to leave "TotalStorage Productivity Center" under the "TotalStorage" brand until it has a significant version upgrade. Many people say "TPC" as a convenient acronym when referring to this product, but TPC is a registered trademark of the Professional Golfers Association (PGA) to refer to its "Tournament Players Club".
How can anyone confuse "managing storage" with "playing golf"? One activity is full of frustration that takes years or decades to master, involving the need to understand a variety of equipment and techniques to use each properly to accomplish your goals; and the other is an enjoyable activity, immediately productive in front of a single pane of glass managing all of your DAS, SAN and NAS storage, from reporting on your files and databases to managing storage networks and tape libraries.
Continuing this week's theme of New Year's Resolutions for the data center, today we'll talk about one that people don't always think about on a personal level, that is to hone your tools and skills.
A long time ago, I used to be a regular speaker at the SHARE user group conference. One of the most attended sessions was Sam Golob presenting the latest CBT Tape set of tools. Over time, this large collection of "mainframe shareware" was handed out on 3480 tape cartridges, then on CDs, and finally made downloadable off the web. Sam's main point, which I remember to this day, was that everyone who has a job should figure out what tools they use, keep those tools functioning properly, and learn to use them well.
Later, I took some cooking classes at a culinary school. Among other things, we learned:
A sharp knife is safer and easier to use than a dull one, resulting in fewer accidents
Knowing what you are doing is the difference between food that is "simply awful" and food that is "awfully simple" to prepare.
A well trained chef can prepare most meals with just a sharp knife and wooden spoon.
The same could be said about software tools. What tools do you use in your job? Do you feel you know how to take full advantage of their power and capabilities? If you develop software, do you know all the features of your debugging tools? If you develop advertising or marketing materials, do you know all the features of your photo or video editing software? If you manage storage in a data center, do you know all the tools for managing your storage area network (SAN), disk systems, tape libraries, and the reporting tools to identify all of your files and databases across your entire IT environment? I would not be surprised if you could replace a whole mess of tools with just one, such as the IBM TotalStorage Productivity Center.
This year I resolve to be more consistent in my blogging. My goal is to give you one to five entries per week, every week, based on advice from Glenn Wolsey, Jennette Banks, and others. On some weeks I will have a running theme, so rather than writing super-long entries to cover everything I can think of on a topic, I will keep the entries short and readable. This week is a good time to review last year's "New Year's Resolutions" and to make new ones for 2007. I will discuss actions that companies can adopt for their data centers.
A common resolution is to lose weight, as in this Dilbert comic. Last year, I resolved to lose weight in 2006, and am delighted with myself that I lost eight pounds. When people ask for the secret of my success, I whisper in their ear "Eat less, exercise more." In general, people (and companies) know what to do, but just don't do it, which Pfeffer and Sutton document in their book The Knowing-Doing Gap. In my case, it involved lifestyle change: I exercised at a gym three times per week in Tucson, with a personal trainer, and revamped my diet.
Not everyone subscribes to the "eat less exercise more" philosophy. For example, Ric Watson argues in his blog that you can eat fewer calories, but eat more in actual volume, by choosing the right foods. This brings up the issues of "metrics" that most data centers are familiar with. Last year, I read the book "You: On a Diet" which explains that it is better to focus on "waist reduction" as measured in inches around your mid-section at the belly button, than "weight reduction" as measured in pounds. This year, I resolve to get down to 35 inches by the end of 2007.
The problem with measuring "weight" is that you are weighing bones, muscle and fat. A person can gain ten pounds of muscle, lose ten pounds of fat, and the scale would indicate no progress. The same problem occurs in data centers. How many TB of data do you have? Storage admins can easily tell you, but can they tell how much of this is bone (data needed for operating infrastructure), muscle (data used in daily operations that generates revenue) or fat (obsolete or orphaned data)?
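To make the analogy concrete, here is a minimal sketch of such a "bone/muscle/fat" report. The category totals are invented sample numbers, and a real tool would classify the data automatically rather than from a hand-built table; the point is the metric itself, reporting composition rather than a single raw TB figure:

```python
# Illustrative "bone/muscle/fat" breakdown of a storage pool.
# The TB figures below are made-up sample inputs for demonstration.
inventory_tb = {
    "bone":   12.0,   # infrastructure data (OS images, logs, indexes)
    "muscle": 30.0,   # active business data that generates revenue
    "fat":    18.0,   # obsolete or orphaned data, deletion candidates
}

total = sum(inventory_tb.values())
for category, tb in inventory_tb.items():
    print(f"{category:>6}: {tb:5.1f} TB ({tb / total:.0%})")
print(f" total: {total:5.1f} TB")
```

A storage admin who can produce this breakdown can show real progress (fat shrinking, muscle growing) even when the total on the "scale" barely moves.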
We at IBM often state that "Information Lifecycle Management (ILM)" is more a lifestyle change than a "fad diet". Figuring out what data you should capture in the first place, where to place it, when to move it, and when to get rid of it is more important than just buying different tiers of storage hardware. So, for those looking to make new data center resolutions, I suggest the following actions:
Re-evaluate the metrics you now use, and determine if they are helpful in making decisions and taking action.
Come up with new ones that are more focused to solve the issues you face.
Consider storage infrastructure software, such as IBM TotalStorage Productivity Center, to help you gather the information about your SAN, disk and tape systems, calculate the metrics, and automate the appropriate actions.
For those of us in the northern hemisphere, yesterday was this year's Winter Solstice, the day with the least daylight between sunrise and sunset. So today, I thought I would blog my thoughts on managing scarcity.
Earlier in my career, I had the pleasure of serving as "administrative assistant" to Nora Denzel for a week at a storage conference. My job was to make her look good at the conference, which, if you know Nora, doesn't take much. Later, she left IBM to work at HP, and I got to hear her speak at a conference. The one thing I remember most was her statement that the whole point of "management" is to manage scarcity, as in not enough money in the budget, not enough people to implement change, or not enough resources to accomplish a task. (Nora, I have no idea where you are today, so if you are reading this, send me a note.)
Of course, the flip side is that resources in abundance are generally taken for granted. Priorities focus on what is most scarce. Let's examine some of the resources involved in an IT storage environment:
Capacity - while everyone complains that they are "running out of space", the truth is that most external disk attached to Linux, UNIX, or Windows systems contains only 20-40% data. Many years ago, I visited an insurance company to talk about a new product called IBM Tivoli Storage Manager. This company had 7TB of disk on their mainframe, and another 7TB of disk scattered across various UNIX and Windows machines. In the room were TWO storage admins for the mainframe, and 45 storage admins for the distributed systems. My first question was "why so many people for the mainframe? Certainly one of you could manage all of it yourself, perhaps on Wednesday afternoons?" Their response was that they acted as each other's backup, in case one goes on vacation for two weeks. My follow-up question to the rest of the audience was: "When was the last time you took two weeks of vacation?" Mainframes comfortably fill their disk and tape storage to over 80-90% full of data, primarily because they have a more mature, robust set of management software, like DFSMS.
Labor - by this I mean skilled labor able to manage storage for a corporation. Some companies I have visited keep their new hires off production systems for the first two years, working only on test or development systems until then. Of course, labor is more expensive in some countries than in others. Last year, I was doing a whiteboard session on-site for a client in China, and the last dry-erase pen ran out of ink. I asked for another pen, and they instead sent someone to go re-fill it. I asked wouldn't it be cheaper just to buy another pen, and they said "No, labor is cheap, but ink is expensive." Despite this, China does complain of a shortage of skilled IT labor, so if you are looking for a job, start learning Mandarin.
Power and Cooling - Most data centers are located on raised floors, with large trunks of electrical power and huge air conditioning systems to deal with all the heat generated by each machine. I have visited the data centers of clients that are now forced to make storage decisions based on power and cooling consumption, because the costs to upgrade their aging buildings are too high. Leading the charge is IBM, with technology advancements in chips, cards, and complete systems that use less power and generate less heat. While energy is still fairly cheap in the grand scheme of things, fears of Global Warming and declining oil supplies have put the costs of power and cooling in the news lately. In 1956, Hubbert predicted the US would reach peak oil production by 1965-1970 (it happened in 1971), and this year Simmons estimated that worldwide oil production already began its decline in 2005. Smart companies like Google have moved their server farms to places like Oregon in the Pacific Northwest for cheaper hydroelectric power.
Bandwidth - Last year IBM introduced 4Gbps Fibre Channel and FICON SAN networking gear, along with the servers and storage needed to complete the solution. 4Gbps equates to about 400 MB/sec in data throughput. By comparison, iSCSI is typically run on 1Gbps Ethernet, but has so much overhead that you only get about 80 MB/sec. Next year, we may see both 8 Gbps SAN and 10 GbE iSCSI, providing 800 MB/sec or more in throughput. My experience is that the SAN is not the bottleneck; instead, people run out of bandwidth at the server or storage end first. They may not have a million dollars to buy the fastest IBM System p5 servers, or may not have enough host adapters at the storage system end.
Floorspace - I end with floorspace because it reminds me that many "shortages" are temporary or artificially created. Floorspace is only in short supply because you don't want to knock down a wall, or build a new building, to handle your additional storage requirements. In 1997, Tihamer Toth-Fejel wrote an article for the National Space Society newsletter estimating that "Everybody on Earth could live comfortably in the USA on only 15% of our land area, with a population density between that of Chicago and San Francisco. Using agricultural yields attained widely now, the rest of the U.S. would be sufficient to grow enough food for everyone. The rest of the planet, 93.7% of it, would be completely empty." Of course, back in 1997 the world population was only 5.9 billion, and this year it is over 6.5 billion.
This last point brings me back to the concept of food, and I am not talking about doughnuts in the conference room, or pizza during year-end storage upgrades. I'm talking about the food you work so hard to provide for yourself and your family. The folks at Oxfam came up with a simple analogy. If 20 people sat down at your table, representing the world's population:
3 would be served a gourmet, multi-course meal, while sitting at a decorated table in a cushioned chair.
5 would eat rice and beans with a fork and sit on a simple cushion
12 would wait in line to receive a small portion of rice that they would eat with their hands while sitting on the floor.
So for those of you planning a special meal next Monday, be thankful you are one of the lucky three, and hopeful that IBM will continue to lead the IT industry in helping out the other seventeen.
You may not be the right person to ask but I am asking everyone so "How do you see hybrid disk drives?"
(For the record, I am not immediately related to Robert. At one point, "Pearson" was the 12th most common surname in the USA, but now doesn't even make the Top 100.)
Robert, I would like to encourage you and everyone else to ask questions; don't worry if I am the wrong person to ask, as I probably know the right person within IBM. Some people have called me the "Kevin Bacon" of storage, as I am often less than six degrees away from the right person, having worked in IBM Storage for over 20 years.
For those not familiar with hybrid drives, there is a good write-up in Wikipedia.
Unfortunately, most of the people I would consult on this question, such as those from Market Intelligence or Research, are on vacation for the holidays, so, Robert, I will have to rely on my trusted 78-card Tarot deck and answer you with a five-card throw.
Your first card, Robert, is the Hermit. This card represents "introspection". The best I/O is no I/O, which means that if applications can keep the information they need inside server memory, you can avoid the bus bandwidth limitations of going to external storage devices. External storage makes sense when data is shared between servers, or when a single server is limited to a set amount of internal memory. So consider maxing out the memory in your server first (IBM would be glad to sell you more internal memory!), then consider outside solid-state or hybrid devices. 32-bit Windows, for example, has an architectural limit of 4GB.
Your second card, Robert, is the Four of Cups, representing "apathy". On the card, you see three cups together, with a fourth cup being delivered from a cloud. This reminds me that we have three storage tiers already (memory, disk, tape), and introducing a fourth tier into the mix may not garner much excitement. For the mainframe, IBM introduced a solid-state device, called the Coupling Facility, which can be accessed from multiple System z servers. It is used heavily by DFSMS and DB2 to hold shared information. However, given some customers' apathy towards Information Lifecycle Management, which includes "tiered storage", introducing yet another tier that forces people to decide what data goes where may be another challenge.
Your third card, Robert, is the Chariot, which represents "Speed, Determination, and Will". In some cases, solid-state disks are faster for reading, but can be slower for writing. In the case of a hybrid drive, where the memory acts as a front-end cache, read hits would be faster, but read misses might be slower. While the idea of stopping the drives during inactivity will reduce power consumption, spinning the disk up and down may incur additional performance penalties. At the time of this post, the fastest disk system remains the IBM SAN Volume Controller, based on SPC-1 and SPC-2 benchmarks in excess of those published for other devices.
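The read-hit versus read-miss trade-off can be illustrated with a toy expected-latency model. All the latency numbers here are hypothetical placeholders of my own choosing, not measurements of any actual hybrid drive:

```python
# Toy average-latency model for a hybrid drive's flash front-end cache.
# Every latency constant is an illustrative assumption, not a vendor spec.
FLASH_READ_MS = 0.1       # read hit served from the flash cache
DISK_READ_MS = 8.0        # plain spinning-disk read
MISS_PENALTY_MS = 1.0     # extra cost of checking the cache first
SPINUP_MS = 3000.0        # penalty if the platters were spun down

def avg_read_ms(hit_ratio, spun_down_ratio=0.0):
    """Expected read latency given a cache hit ratio and the fraction
    of misses that also catch the drive spun down to save power."""
    miss = DISK_READ_MS + MISS_PENALTY_MS + spun_down_ratio * SPINUP_MS
    return hit_ratio * FLASH_READ_MS + (1 - hit_ratio) * miss

print(avg_read_ms(0.9))        # high hit ratio: the cache clearly wins
print(avg_read_ms(0.05))       # very low hit ratio: slower than plain disk
print(avg_read_ms(0.9, 0.05))  # occasional spin-ups dominate the average
```

The model makes the qualitative point from the reading above: the cache only pays off when the hit ratio is high, and power-saving spin-downs can wipe out the gains entirely.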
Your fourth card, Robert, is the Eight of Pentacles, which represents "Diligence, Hard Work". The pentacles are coins with five-pointed stars on them, and this often represents money. Our research team has projected that spinning disk will continue to be a viable and profitable storage medium for at least another eight years.
Your fifth and last card, Robert, is the World, which normally represents "Accomplishment", but since it is turned upside down, the meaning is reversed to "Limitation". Some hybrid disks, and some types of solid-state memory in general, do have limitations in the number of write cycles they can handle. Those unhappy with the frequency and slowness of rebuilds on SATA disk may find similar problems with hybrid drives. For that reason, businesses may not trust hybrid drives for their busiest, mission-critical applications, but certainly might use them for archive data with lower write-cycle requirements.
The tarot cards are never wrong, but certainly interpretations of the cards can be.
It has always been the case in fast-paced technology areas that you can't tell the players without a program card, and this is especially true for storage.
When analyzing each acquisition move, you need to think about what is driving it. What are the motives? Having been in the storage business 20 years now, and having seen my share of acquisitions, both from within IBM and from the competition, I have come up with the following list of motives.
Although slavery was abolished in the US back in the 1800s, and centuries earlier everywhere else, many acquisitions seem to be focused on acquiring the people themselves, rather than the products or client list. I have seen statistics such as "We retained 98% of the people!" In reality, these retentions usually involve costly incentives, signing bonuses, stock options, and the like. Despite this, people leave after a few years, often because of personality or "corporate culture" clashes. For example, many former STK employees seem to be leaving after their company was acquired by Sun Microsystems.
If you can't beat them, join them. Acquisitions can often be used by one company to raise its market-share ranking, eliminating smaller competitors. And now that you have acquired their client list, perhaps you can sell them more of your original set of products!
Symantec acquired Veritas, which in turn had acquired a variety of other smaller players, and the end result is that they are now the #1 backup software provider, even though none of their products holds a candle to IBM's Tivoli Storage Manager. Meanwhile, EMC acquired Avamar to try to get more into the backup/recovery game, but most analysts still put EMC down in the #4 or #5 place in this category.
Next month, Brocade's acquisition of McData should take effect, furthering its market share in SAN switch equipment.
Prior to my current role as "brand market strategist" for System Storage, I was a "portfolio manager", where we tried to make sure that our storage product line investments were balanced. This was a tough job, as we had to balance the right development investments across different technologies, including patent portfolios. Despite IBM's huge research budget, I am not surprised that some clever inventions of new technologies come from smaller companies, which then get acquired once their results appear viable.
The last motive is value shift. This is where companies try to re-invent themselves, or find that they are stuck in a commodity market rut, and wish to expand into more profitable areas.
LSI Logic's acquisition of StoreAge is a good example of this. Most of the major storage vendors have already shifted to software and services to provide customer value, as predicted in the 1990s by Clayton Christensen in his book "The Innovator's Dilemma". The rest are still struggling to develop the right strategy, but are leaning in this general direction.
I wasn't at the event, but thought it would be good to explain some basic concepts of Information Lifecycle Management (ILM), using the files on my iPod as an example. (Disclosure: IBM makes the technology inside many of Apple's computers, and so IBMers get to buy Apple products at employee prices. I own a Mac Mini based on IBM's POWER4 processor, and an iPod Photo 60GB model.)
I have 20,000 MP3 music files, representing 106GB of data. This fits nicely on the 250GB external disk system attached to my Mac Mini, but won't all fit on my little 60GB iPod. I needed a way to decide which music I keep on both my iPod and Mac Mini, and which I keep only on my Mac Mini. When I am traveling, I can listen only to the music in the first group, but when I am at home, I can listen to all my music in both groups. (Another disclosure: I use my TiVo, connected to my LAN, to play all my MP3 music through my home stereo system. I had my entire house wired with Cat5 to make this possible.)
Apple's iTunes software lets me decide which MP3 files are copied to my iPod using "playlists". A playlist is a list of songs. Fixed playlists are created manually, each song added to its list in a specific order. Smart playlists are created automatically, via policy: I give it the criteria, and it finds the songs for me. If I import a new music CD, none of the songs will be added to any fixed playlists, but they could be added to my smart playlists if I set the policies correctly. Apple iTunes supports both "include" and "exclude" methodologies.
I use primarily smart playlists, based on genre and rating. I have tried to keep the number of genres down to a small, manageable list:
Rhythm & Blues
Of course, what I have for genre may not match what's in the Gracenote database, so I sometimes have to make updates to match my convention. I've picked these based on my different "applications" for my music. For example, I listen to Ambient music to help me fall asleep on airplanes, but Rock when I exercise at the gym.
Next, I use ratings from one to five stars. The advantage of the rating is that I can change it on-the-fly directly on my iPod. All other "metadata" has to be entered from the keyboard of my Mac Mini.
Files for Mac Mini only, not copied to my iPod
Non-mix, copied to my iPod, but typically spoken words, such as language lessons
Mix, music to include in my music mixes
Keep on my iPod, but re-evaluate
So, I have five smart playlists, "One Star", "Two Stars", etc., one for each rating, and have decided to keep only the 2-, 3-, 4- and 5-star songs on my iPod, simply by putting check marks on those playlists to copy them over. I have about 50 songs with 5 stars, and 8,000 with 3 stars, with the rest in the other categories, leaving me a few GB to spare.
I also have playlists for each genre, "Rock Mix", "Pop Mix", "Ambient Mix", etc., where I have selected those that match the genre AND have 3, 4 or 5 stars. In this manner, I can listen to a mix. If I find a song mis-classified for that genre, I change it to four stars, which serves as my reminder to re-evaluate it when I am back at home on my Mac Mini. If I don't want a song in my mix, I just lower it to 2 stars. If I want it off my iPod altogether, I lower it to one star.
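For the programmers in the audience, the smart-playlist policies described above boil down to a simple filter. This sketch invents its own song data and rules purely for illustration; it is not how iTunes is actually implemented:

```python
# Minimal "smart playlist" engine: a policy selects songs automatically,
# the way the genre-plus-rating rules described above do.
# The sample library below is invented for demonstration.
songs = [
    {"title": "Song A", "genre": "Rock",    "rating": 5},
    {"title": "Song B", "genre": "Rock",    "rating": 2},
    {"title": "Song C", "genre": "Ambient", "rating": 4},
    {"title": "Song D", "genre": "Pop",     "rating": 1},
]

def smart_playlist(library, genre=None, min_rating=1):
    """Return the songs matching a policy, like an iTunes smart playlist."""
    return [s for s in library
            if (genre is None or s["genre"] == genre)
            and s["rating"] >= min_rating]

# "Rock Mix": Rock songs rated 3 stars or higher
rock_mix = smart_playlist(songs, genre="Rock", min_rating=3)
# Everything rated 2 stars or higher gets copied to the iPod
ipod = smart_playlist(songs, min_rating=2)

print([s["title"] for s in rock_mix])   # -> ['Song A']
print([s["title"] for s in ipod])       # -> ['Song A', 'Song B', 'Song C']
```

Lowering a song's rating to one star removes it from every policy automatically, which is exactly why rating-driven playlists beat hand-maintained fixed lists for this kind of housekeeping.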
This method is simple enough, and allows me to enjoy my music right away, without having to wait until the classification process is completely finished.
Next week, I'm traveling to Africa (purely vacation, not related to my job, my senator, or my involvement in any charitable organizations). My Canon camera has only a 1GB IBM Microdrive, but I am able to offload my pictures to my iPod, connected via USB cable, and review the pictures on the little 2-inch screen. By simply unchecking my 2-star and 3-star playlists, and checking only those mixes I plan to take with me, I was able to clear 17GB of space: plenty of room for all my photos of elephants and giraffes, but still plenty of music to listen to. Thanks to my simple methodology, I was able to do this with minimal effort, and will have no problem putting all my music back when I return.
When evaluating an ILM process, many people are overwhelmed by their fear of the classification process, when in reality it doesn't have to be so complicated.
Is there an "iTunes" for the storage in your data center? Yes! It's called IBM TotalStorage Productivity Center. It can help you list and classify all the files in your IT environment, including files on internal disks inside the servers and on your NAS and SAN external disk systems, across both IBM and non-IBM hardware. It's a good thing to consider as part of your overall ILM strategy.
But ITSM is more than just a better way to manage operational tasks; it is focused on the best practices of the IT Infrastructure Library (ITIL), which has been adopted by the European Union and is now being adopted worldwide by both government agencies and private enterprises as a smart way to run your IT environment.
Of course, we've designed our solutions to apply to your entire IT environment, supporting both IBM and non-IBM equipment, so even if not all of your servers and storage come from IBM, at least your software can be. [Read More]
BladeCenter servers come in many flavors, including blades with Intel, AMD and POWER chipsets, and can be configured in Grid and supercomputer configurations. Up to 14 blade servers can fit into a single 7U-high chassis, making this twice as dense as standard 1U-high rack-mounted servers.
System x, the new "IBM Systems" name for our popular xSeries product line, supports Intel and AMD chipsets. These come in both rack-mounted and tower configurations, and are also ideal for clustered and supercomputer configurations. [Read More]
Yesterday (September 7, 2006) the Eclipse Foundation announced that it has approved the creation of the Aperi Storage Management Framework Project.
There's been a lot of confusion out there about Aperi, so I thought I would post some facts and opinions about this exciting new project. A few years ago, I was the lead architect for IBM TotalStorage Productivity Center, IBM's infrastructure management product that helped launch the creation of Aperi.
Named from the Latin word for "open", Aperi is an open source project that aims to simplify the management of storage environments, using the Storage Management Initiative - Specification (SMI-S) open standard to promote interoperability and eliminate complexity in today's storage environments.
Aperi should provide immediate value upon installation, with basic storage management capabilities, rather than being simply a collection of components that require costly integration. We've discussed requirements for functions such as:
Resource discovery, monitoring, and reporting
Fabric Topology mapping
Disk / Tape management
Device configuration & LUN assignment
SAN fabric management
Basic asset management
The big confusion most people have is Aperi's relation to SMI-S and the Storage Networking Industry Association (SNIA) open standards group. The best way to explain this is to go back to your high school SAT college-entrance exams. Remember questions like this?
CRUMB : BREAD :: SPLINTER : ____
(The answer: a crumb is to bread as a splinter is to wood.)
Aperi is an implementation of the SMI-S standard, just as MySQL and PostgreSQL are open source relational database implementations of Structured Query Language (SQL). These compete with proprietary database implementations such as IBM DB2 Universal Database, Oracle Database, Microsoft SQL Server, and Sybase.
Aperi : SMI-S :: PostgreSQL : Structured Query Language (SQL)
It is often the case that the folks writing the code are different from the folks defining the standards. This is the case between the members of Aperi writing code and the members of the SNIA writing standards. IBM happens to have employees writing Aperi code, and other employees helping define SMI-S standards. What can I say, IBM is a big company and a leader in many areas.
A good analogy is how the Apache community has developed an awesome web server, and the Mozilla community has developed an awesome web browser, Firefox, both of which are implementations of the HTTP/HTML standards adopted by the World Wide Web Consortium. Apache and Firefox compete with proprietary implementations, such as the Microsoft Internet Information Services (IIS) web server and the Internet Explorer web browser.
Aperi : SNIA :: Apache : World Wide Web Consortium (W3C)
With this arrangement, Aperi and the SNIA will have very complementary roles in defining and driving standards across the entire storage market. To that end, Aperi will make extensive use of the SNIA’s Technology Center and SNIA’s “plugfests” to test the interoperability of the Aperi framework with the variety of 3rd-party storage offerings available. By providing a tested implementation of SMI-S, Aperi will drive broader industry availability of SMI-S, as well as offer the many benefits of an industry-backed open source community.
Check out this vote of confidence:
"Eclipse's Aperi Project will further advance the adoption of SNIA's SMI-S, benefiting the entire storage industry and IT community. Furthermore, the SNIA and Aperi will define plans to collaborate on new storage standards, standards testing programs, and storage interoperability programs." --- Wayne M. Adams, chair, SNIA Board of Directors
So, both proprietary and open source implementations have their place in the world. Proprietary products are needed for advanced, unique value-add, while open source projects provide basic support focused on interoperability and flexibility. The two can be combined, for example, as proprietary "plug-ins" built on an open source base. The more choices the client has, the better.
Storage vendors benefit too. Vendors are tired of being in the "Y.A.C." business, building "Yet Another Configurator" for each new device developed, with basic functions to carve LUNs, read performance stats, and so on. By shipping Aperi instead, storage vendors like IBM can invest their development dollars in real innovations, things that matter for the customer.
A lot of people ask me about IBM branding, as we have recently changed brands. In the past we had two separate brands, one for servers (eServer) and one for storage (TotalStorage). These would be fine if we wanted to promote their independence, but customers today want synergy between servers and storage; they want systems that work well together.
Last year, in response to market feedback, we created a new brand, "IBM Systems", and put all the server and storage product lines under one roof. Over time, we will transition from TotalStorage to System Storage naming. This will occur with new products and major versions of existing products.
Two other phrases you will hear in the names of our offerings are "Virtualization Engine" and "Express". These are portfolio identifiers. The Virtualization Engine identifier was created to emphasize our leadership in system virtualization, and we have products that span product lines with this identifier.
The Express identifier was created to emphasize our focus on Small and Medium sized business (SMB). It spans not just servers and storage, but across other offerings from other IBM divisions.
Of course, just renaming products and services isn't enough. Systems don't work together just because they have similar names, are covered in similar "Apple white" plastic, or have similar black bezels. Obviously, thoughtful and collaborative design are needed, with the appropriate amounts of engineering and testing. IBM is aligning its server and storage development so that the IBM Systems brand keeps its promise.
In last week's System Storage Portfolio Top Gun class in Dallas, some of the students were not familiar with Really Simple Syndication (RSS). For the uninitiated, this can be intimidating. I thought a quick overview of what I've done might help:
Choose a "feed reader". I chose Bloglines, but there are many others.
Use Technorati to search other blogs for keywords or phrases I am looking for.
When I find a blog that I would like to continue tracking, I "add" it to my subscription list on Bloglines. Just hit "add" and paste in the URL of the blog you want to track; Bloglines will figure out the RSS feed required. I track eight blogs at the moment, but some people with lots of time on their hands track 20 or more. It is easy to unsubscribe, so don't be afraid to try some out for a few days.
Since I was actually going to run a blog of my own, I read a few books on the topic. One I recommend is "Naked Conversations" by Robert Scoble and Shel Israel, both experienced bloggers.
Finally, I am not big on spell checking, but most places have the option to preview your post or comment before it actually gets posted, which is not a bad idea if you use any HTML tags.
For a quick taste of blogging, consider using Data Storage Blogger Feed Reader. This has a lot of blogs on the topic of storage, already added and categorized for your convenience, ready for your perusal.
I am sure there are many other ways to enjoy the Blogosphere, but this works for me.
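If you're curious what a feed reader like Bloglines actually does with the URL you paste in, here's a minimal sketch in Python using only the standard library. The feed below is a made-up example, not a real blog's feed; the point is just to show that an RSS document is plain XML and pulling out the entry titles takes only a few lines.

```python
import xml.etree.ElementTree as ET

# A tiny hand-made RSS 2.0 document, standing in for what a blog serves.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Inside System Storage</title>
    <item><title>First post</title><link>http://example.com/1</link></item>
    <item><title>Second post</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def feed_titles(rss_xml):
    """Return (channel title, list of item titles) from an RSS 2.0
    string -- the core of what a feed reader does on each poll."""
    root = ET.fromstring(rss_xml)
    channel = root.find("channel")
    blog = channel.findtext("title")
    items = [item.findtext("title") for item in channel.findall("item")]
    return blog, items
```

A real reader fetches the feed over HTTP on a schedule, remembers which items you've already seen, and handles Atom as well as RSS, but the parsing at the heart of it is this simple.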
I have created blog categories, based on our System Storage offering matrix, which you can track individually:
Disk systems, including the IBM System Storage DS Family of products, SAN Volume Controller, N series, as well as features unique to these products, such as FlashCopy, MetroMirror, or SnapLock.
Tape systems, including the IBM System Storage TS Family of products, tape-related products in the Virtualization Engine portfolio, drives, libraries and even tape media.
Storage Networking offerings, from Brocade, McData, Cisco and others, such as switches, routers and directors.
Infrastructure management, including IBM TotalStorage Productivity Center software, IBM Tivoli Provisioning Manager, IBM Tivoli Intelligent Orchestrator, and IBM Tivoli Storage Process Manager.
Business Continuity, including IBM Tivoli Storage Manager, Tivoli CDP for Files, Productivity Center for Replication software component, Continuous Availability for Windows (CAW), Continuous Availability for AIX (CAA).
Lifecycle and Retention offerings, including our IBM System Storage DR550, DR550 Express, GPFS, Tivoli Storage Manager Space Management for UNIX, Tivoli Storage Manager HSM for Windows, and DFSMS.
Storage services, including consulting, assessments, design, deployment, management and outsourcing.