Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is a Master Inventor and Senior IT Specialist for the IBM System Storage product line at the
IBM Executive Briefing Center in Tucson Arizona, and featured contributor
to IBM's developerWorks. In 2011, Tony celebrated his 25th anniversary with IBM Storage on the same day as IBM's Centennial. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
The "corporate bloggers" from the various storage vendors often mention their opinions about IBM products. Sometimes, they say something nice, and other times they poke fun. It's good to read the various opinions. Most are well-thought and well-written.
EMC blogger Chuck Hollis has a post about the various categories that industry analyst IDC used for external controller-based disk in their most recent Q4 Storage Scorecard. I agree with Chuck that it is good to have independent analysts that take an objective look across all storage vendors to provide the facts on various makes and models. Both IBM and EMC took marketshare in 4Q, so we can congratulate ourselves and each other for the efforts needed to make this happen.
Chuck mentions that while EMC and HDS high-end boxes are similar, perhaps IBM's "DS" series is different enough to question putting it in the same "high-end" category. It's not clear if Chuck is poking fun at the fact that the IBM DS family spans multiple categories, or making an admission that the IBM DS8300 Turbo is faster than the EMC DMX-3 and HDS USP offerings. Perhaps we need a new category called "super high-end"?
IDC doesn't publish their data by price band, but we can infer from the products in each category how they decided which products were grouped into which categories. Let's examine the entire IBM DS family in the various categories.
Our newest offering is the IBM System Storage DS3000 series. Some analysts call this category "low end", but IBM prefers "entry level". These have an attractive low acquisition price, are very easy to set up, and are intended for Intel and AMD servers, such as IBM BladeCenter and System x, as well as servers from HP and Dell. Disk arrays in this category typically have list prices below $50,000 USD.
Our midrange offering is the IBM System Storage DS4000 series. These are designed for Linux, UNIX and Windows based workloads. Some call these server platforms "open systems", or sometimes "distributed systems". The DS4000 systems are rack-optimized modular units, providing plenty of options and trade-offs between price and performance for price-sensitive customers. The "high end" model of the DS4000 series is the DS4800, which has very impressive performance characteristics. Disk arrays in this category typically have list prices in the $50,000 to $299,000 USD range.
The IBM System Storage DS6000 series is one of our enterprise class offerings. The DS6000 offers mainframe attachment comparable to what EMC DMX or HDS USP offer for their "enterprise class" or "high end" models, but uses substantially less power in much more compact, modular, rack-optimized packaging. Disk arrays in this category typically have list prices of $300,000 USD and above.
Super High End
Perhaps IBM and EMC can work together to petition IDC to adopt this as a new category, based on performance rather than list price. Is the storage marketplace ready for a fourth category? As Chuck mentioned on his blog, IBM is #1 for mainframe disk storage, and perhaps it is because the IBM System Storage DS8000 Turbo series does so well on most mainframe workloads. No offering from EMC or HDS meets or beats the SPC benchmark results of the DS8000 Turbo. You can see the results in the Executive Summary or read the Full Report.
Thanks to IBM's innovative Adaptive Replacement Cache algorithm, IBM DS8000 performance shines best when handling the read-intensive, random-access workloads that mainframes run most often. These types of workloads are modeled by the SPC-1 benchmark. For write-intensive, sequential processing, the differences are less substantial, as disk arrays from all manufacturers drop down to the native performance capabilities of the 10K and 15K RPM drives.
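The core idea of a frequency-aware cache can be loosely sketched in a few lines: keep recently-seen blocks and repeatedly-hit blocks in separate lists, so a burst of one-time reads cannot flush the hot working set. The toy Python class below illustrates that two-list spirit only; it is my own simplification, not IBM's actual ARC algorithm, which additionally tracks "ghost" entries of recently evicted blocks and adapts the split between the two lists.

```python
from collections import OrderedDict

class TwoListCache:
    """Toy cache, loosely inspired by the two-list idea behind ARC.

    Blocks seen once live in `recent`; blocks hit again are promoted to
    `frequent`, so a scan of one-time reads cannot evict the hot set.
    (Real ARC also keeps ghost lists and adapts the split; omitted here.)
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.recent = OrderedDict()    # seen exactly once
        self.frequent = OrderedDict()  # seen more than once

    def access(self, key):
        """Return True on a cache hit, False on a miss."""
        if key in self.frequent:
            self.frequent.move_to_end(key)   # refresh its position
            return True
        if key in self.recent:               # second touch: promote
            del self.recent[key]
            self.frequent[key] = True
            return True
        self.recent[key] = True              # miss: admit as "seen once"
        self._evict()
        return False

    def _evict(self):
        while len(self.recent) + len(self.frequent) > self.capacity:
            # evict one-time entries first, then the coldest frequent entry
            victim = self.recent if self.recent else self.frequent
            victim.popitem(last=False)

cache = TwoListCache(capacity=3)
trace = ["a", "b", "a", "c", "d", "e", "a"]
hits = sum(cache.access(k) for k in trace)   # "a" survives the c, d, e scan
```

Notice that even though the one-time reads c, d, e churn through the cache, the twice-touched block "a" stays resident in the frequent list, which is exactly the property that helps with random-access read workloads.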
I'll give you a real example. Not long ago, I was part of a team helping to resolve a performance bottleneck on-site at a customer location. The customer had an interesting "composite application" where data was processed on an AIX platform (IBM System p), which passed the data to a Linux partition running on an IBM System z mainframe, which in turn used Java SQL to post updates to a DB2 database on a z/OS partition, which then wrote out through FICON adapters to an HDS USP device. IBM and HDS worked together to help the customer figure out why they were getting disappointing throughput and response times. IBM brought in experts on AIX, TCP/IP, Java, Linux, z/OS and FICON. HDS had their experts too, and tried to improve performance by quadrupling the storage capacity and spreading the data out across more spindles. That didn't work. As it turns out, the HDS disk just couldn't deliver the performance required. The software and mainframe were all well tuned. They replaced the HDS with an IBM DS8000 array, and it met all the service level requirements. Problem solved.
The problem with having this new "super high end" category, of course, is that only IBM plays in it, so it wouldn't offer the marketplace much of a comparison. For now, we'll just have to settle for being the fastest in the same category as EMC DMX and HDS USP.
Storage is a competitive marketplace. Both EMC and HDS are reputable companies that make quality products that attach to IBM System z mainframe servers. Not all workloads are mission-critical or performance-sensitive. For less critical workloads, you may find EMC or HDS performance is "good enough".
But if performance is important to you, you should include IBM on your list of vendors for your next purchase decision. Let IBM help you prove it to yourself, running your specific workloads side by side with your existing equipment.
This week I was in Palm Springs in meetings with clients, prospects, business partners and IBM sales reps.
Tuesday consisted of "outdoor meetings", but the high winds caused some people to arrive late, and others to land in the various sand traps and water hazards. A "welcome reception" event allowed everyone to socialize and get to know the IBM experts and executives. Two of my colleagues, Mike Stanek and Dave Wyatt, were also with me in Australia last week, so the three of us were discussing recovery from jet lag.
Wednesday was organized as a main tent event, where everyone met in one large room to hear our strategy, latest set of offerings, and customer testimonials. This was done indoors, of course, which was a good thing as the winds were now gusting up to 50 miles per hour, knocking over windmills and making the local news.
Here's a quick sample from the testimonials:
An insurance company virtualized their IBM DS8000, DS4000, ESS 800 and EMC DMX3 high-end disk with the IBM System Storage SAN Volume Controller and got higher availability and performance. Data migration efforts that used to take six (6) hours of admin time now took less than one hour, and with no system downtime. They have a total of 350TB virtualized under SVC now, but plan to extend this for a variety of other projects.
A bank presented their success using "Global Mirror" (IBM's asynchronous two-site replication disk mirroring capability). Their previous "business continuity" plan was called 2-20-24: 2 sites that were 20 miles apart, with a recovery time objective (RTO) of 24 hours. With the events of Hurricane Katrina, this was considered inadequate, and a new 2-200-6 plan was requested, across 200 miles with a recovery time objective of only 6 hours. They chose to deploy this one application at a time, to learn and grow by experience in each phase. They started with the Microsoft Exchange e-mail application running under VMware on BladeCenter servers, and were able to recover remotely within 1 hour. They are now looking to refine and automate the recovery process, perhaps with IBM TotalStorage Productivity Center for Replication and Geographically Dispersed Open Clusters (GDOC).
A healthcare provider presented their success with tiered storage, managing a 475TB mix of IBM DS8000, DS6000, DS4000 and HP EVA disk arrays. The key was having centralized storage management from IBM: where provisioning used to take 3 weeks on average, 96% of their storage provisioning requests are now completed in less than 1 week. Moving data between storage tiers was non-disruptive, and the significant cost savings greatly justified the change in "mindset" that required some training on the new environment.
Thursday we offered a series of "workshops" on specific topics. These were interactive sessions to discuss installation, design and deployment of various solutions. The event ended early enough so that people could return home, or go to the practice range, which reminded me of this inspiring video on How to play golf as well as Tiger Woods.
The event got great reviews, and I look forward to the next one. Until then, enjoy the weekend!
Last week, Paul Weinberg of eChannelLine.com asked Is this the year of the SAN (again)? So, I thought this week I would cover my thoughts and opinions on storage networking. We often focus on servers or storage devices, and forget that the network in between is an entire world unto itself.
I believe Mr. Weinberg is basing this on the idea that in 2007, over 50 percent of disk will be attached over SAN, edging out the alternative: Direct Attached Storage (DAS). But perhaps 50 percent is the wrong number to focus on. The United Nations estimates that in 2007, cities will surpass rural areas, holding just over 50 percent of the world's population. Does that make this the "Year of the City"? Of course not.
Instead, I prefer the methodology that Malcolm Gladwell uses in his book, The Tipping Point. (I have read this book and highly recommend it!) Gladwell indicates that the tipping point happens at the start of the epidemic, not when it is half over. Isn't it better to celebrate the sweet 16 debutante ball when young ladies have completed their years of training and preparation, and are ready to be introduced to the rest of the world, rather than after they are thirty-something, married with children?
Let's explore some of the history. Stuart Kendric has a nice 7-page summary on the History & Plumbing of SANs.
IBM announced the first SAN technology, called Enterprise Systems Connection (ESCON), way back in September 1990. This allowed multiple mainframe servers to connect to multiple storage systems over equipment called "ESCON Directors" that directed traffic from point A to point B. Before this, mainframes sent "Channel Command Words", or CCWs, across parallel "bus and tag" copper cables. ESCON was serial over fiber optic wiring. SANs solved two problems: first, they reduced the "rat's nest" of cables between many servers and many storage systems, and second, they extended the distance between server and storage device.
For distributed systems running UNIX or Windows, the CCW-equivalent over parallel cables was called Small Computer System Interface (SCSI). The SCSI command set had over 1000 command words, so for its Advanced Technology (AT) personal computers (PC AT), IBM introduced a subset of SCSI commands called ATA (Advanced Technology Attachment). ATA drives supported fewer commands, ran at slower speeds, and were manufactured with a less rigorous process. Today ATA drives are about 55 percent of the cost per MB of comparable SCSI drives.
Anyone who has ever opened their PC and found flat ribbon cable with eight or sixteen wires in parallel can understand that the same issues applied externally. Parallel technologies are limited in distance and speed, as all the bits have to arrive at the end of the wire at approximately the same time. Direct attach schemes, where every server attaches directly to every storage device, were also problematic. Imagine 100 servers connected to 100 storage devices: that would be 10,000 wires!
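The arithmetic behind that "rat's nest" is worth spelling out: fully-meshed point-to-point wiring grows as the product of server and storage counts, while a switched fabric needs only one link per device, growing as their sum. A quick sketch (hypothetical helper names, just to show the math):

```python
def direct_attach_links(servers: int, storage: int) -> int:
    """Every server wired directly to every storage device."""
    return servers * storage

def switched_links(servers: int, storage: int) -> int:
    """Each device needs just one link into the switched fabric
    (ignoring redundant paths and inter-switch links)."""
    return servers + storage

# The example from the text: 100 servers and 100 storage devices
print(direct_attach_links(100, 100))  # 10000 cables direct-attached
print(switched_links(100, 100))       # 200 cables through a SAN
```

Doubling both sides of a direct-attach configuration quadruples the cabling, which is why the switched approach won out as data centers grew.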
So, a new technology standard was developed, called Fibre Channel, ratified in 1994. The spelling of "Fibre" was intentionally made different from "Fiber". "Fibre" is a protocol that can travel over copper or glass wires. "Fiber" refers to the glass wiring itself.
Fibre Channel is amazingly versatile. For today's Linux, UNIX and Windows servers, it can carry SCSI commands, and the combination of SCSI over FC is called Fibre Channel Protocol (FCP). For the mainframe servers, it can carry CCW commands. Running CCW over Fibre Channel is called FICON. This convergence allows mainframes and distributed systems to share a common Fibre Channel network, using the same set of switches and directors.
We saw the use of SANs explode in the marketplace over the past 10 years, and then cool down with a series of mergers and acquisitions. Last year, Brocade announced it was acquiring rival McData, so we will be down to two major players, Cisco and Brocade.
So, IMHO, we are well past the "Year of the SAN".
Tuesday is always good for announcements. Today, Gartner, Inc. announced that IBM has overtaken HP in its climb to the top. I'll quote directly from today's press release:
STAMFORD, Conn., March 6, 2007 — Worldwide external controller-based (ECB) disk storage revenue totaled $15.2 billion in 2006, a 4.1 percent increase over 2005 revenue of $14.6 billion, according to Gartner, Inc. IBM overtook Hewlett-Packard for the No. 2 position in 2006 (see Table 1). IBM’s worldwide ECB market share increased to 15.8 percent, while HP’s market share dropped to 13.1 percent.
IBM beat HP both in 4Q06 and for the 2006 full year. You can read more about it from the Gartner Dataquest report “Market Share: Disk Array Storage, All Regions, All Countries, 1Q05-4Q06" on their website. (Note: non-IBMers might need an account with Gartner to access this, not sure)
The focus was on external controller-based disk: not external controller-less SCSI/SAS disk, not disk arrays posing as virtual tape libraries, nor any disk sold inside HP, Sun, IBM or Dell servers. This makes for a fair comparison with disk-only vendors such as EMC and HDS. The revenues reflect hardware only, including hardware-related parts of financial leases and managed services. Revenues from optionally priced software features such as multi-pathing drivers, management software, or advanced copy services were excluded. I discussed these types of analyst reports in a blog post last September: Space Race Heats Up.
These marketshare numbers are based on revenues, not units or terabytes. When a box gets sold, the revenue was counted toward the vendor that sold it, not the manufacturer that built it. In this last report:
When Dell sells an EMC box, it gets counted as Dell. When Fujitsu Siemens sells an EMC box, it gets counted as "Other".
When HP sells an HDS box, it gets counted as HP. When Sun sells the HDS box, it gets counted as Sun.
When IBM sells its System Storage N series (from the OEM agreement with NetApp), it gets counted as IBM. Both IBM and NetApp experienced growth in the NAS/unified storage arena.
It's still cold here in the Washington DC area, but at least good news like this helps warm me up!
This week and next I am touring Asia, meeting with IBM Business Partners and sales reps about our July 10 announcements.
Clark Hodge might want to figure out where I am, given the nuclear reactor shutdowns from an earthquake in Japan. His theory is that you can follow my whereabouts just by following the news of major power outages throughout the world.
So I thought this would be a good week to cover the topic of Business Continuity, which includes disaster recovery planning. When making Business Continuity plans, I find it best to work backwards. Think of the scenarios that would require such recovery actions to take place, then figure out what you need to have at hand to perform the recovery, and then work out the tasks and processes to make sure those things are created and available when and where needed.
I will use my IBM Thinkpad T60 as an example of how this works. Last week, I was among several speakers making presentations to an audience in Denver, and this involved carrying my laptop from the back of the room, up to the front of the room, several times. When I got my new T60 laptop a year ago, the instructions specifically stated NOT to carry the laptop while the disk drive was spinning, to avoid vibrations and gyroscopic effects. They suggested always putting the laptop in standby, hibernate or shutdown mode prior to transportation, but I haven't yet gotten in the habit of doing this. After enough trips back and forth, I had somehow corrupted my C: drive. It wasn't a complete corruption; I could still use Microsoft PowerPoint to show my slides, but other things failed, sometimes with the fatal BSOD and other times less drastically. Perhaps the biggest annoyance was that I lost a few critical DLL files needed for my VPN software to connect to IBM networks, so I was unable to download or access e-mail or files inside IBM's firewall.
Fortunately, I had planned for this scenario, and was able to recover my laptop myself, which is important when you are on the road and your help desk is thousands of miles away. (In theory, I am now thousands of miles closer to our help desk folks in India and China, but perhaps further away from those in Brazil.) Not being able to respond to e-mail for two days was one thing, but no access for two weeks would have been a disaster! The good news: My system was up and running before leaving for the trip I am on now to Asia.
Following my three-step process, here's how this looks:
Step 1: Identify the scenario
In this case, my scenario is that the file system that runs my operating system is corrupted, but my drive does not have hardware problems. Running PC-Doctor confirmed the hardware was operating correctly. This can happen in a variety of ways, from errant application software upgrades, to malicious viruses, or in my case, picking up your laptop and carrying it across the room while the disk drive is spinning.
Step 2: Figure out what you need at hand
All I needed to do was repair or reload my file system. "Easier said than done!" you are probably thinking. Many people use IBM Tivoli Storage Manager (TSM) to back up their application settings and data. Corporate include/exclude lists avoid backing up the same Windows files from everyone's machines. This is great for those who sit at the same desk, in the same building, and would be given a new machine with Windows pre-installed as the start of their recovery process. If on the other hand you are traveling, and can't access your VPN to reach your TSM server, you have to do something else. This is often called "Bare Metal Restore" or "Bare Machine Recovery", BMR for short in both cases.
I carry with me on business trips bootable rescue compact discs, DVDs of full system backup of my Windows operating system, and my most critical files needed for each specific trip on a separate USB key. So, while I am on the road, I can re-install Windows, recover my applications, and copy over just the files I need to continue on my trip, and then I can do a more thorough recovery back in the office upon return.
Step 3: Determine the tasks and processes
In addition to backing up with IBM TSM, I also use IBM Thinkvantage Rescue and Recovery to make local backups. IBM Rescue and Recovery is provided with IBM Thinkpad systems, and allows me to backup my entire system to an external 320GB USB drive that I can leave behind in Tucson, as well as create bootable recovery CD and DVDs that I can carry with me while traveling.
The problem most people have with a full system backup is that their data changes so frequently, they would have to take backups too often, or recover "very old" data. Most Windows systems are pre-formatted as one huge C: drive that mixes programs and data together. However, I follow best practice, separating programs from data. My C: drive contains the Windows operating system, along with key applications and the essential settings needed to make them run. My D: drive contains all my data. This has the advantage that I only have to back up my C: drive image infrequently, and it fits nicely on two DVDs. Since I don't change my operating system or programs that often, a monthly or quarterly backup is frequent enough.
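To sketch how the programs/data split pays off in practice, here is a minimal Python routine (illustrative only; the function and directory names are my own, not part of any IBM tool) that refreshes a backup copy of a data drive, copying only new or changed files, while the full system image is left to the infrequent DVD backup:

```python
import filecmp
import shutil
from pathlib import Path

def sync_changed(src: Path, dst: Path) -> int:
    """Copy only new or modified files from the data tree `src` to the
    backup tree `dst`, returning how many files were copied.
    The data tree changes daily and is cheap to refresh; the OS/program
    image (the C: drive in the text) is only re-imaged monthly or quarterly.
    """
    copied = 0
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        # skip files whose content is already identical in the backup
        if target.exists() and filecmp.cmp(f, target, shallow=False):
            continue
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)   # copy2 preserves timestamps
        copied += 1
    return copied
```

Running this against a USB key before a trip gives the "just the files I need" copy described above; a second run right after copies nothing, since every file already matches.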
In my situation in Denver, only my C: drive was corrupted, so all of my data on D: drive was safe and unaffected.
When it comes to Business Continuity, it is important to prioritize what will allow you to continue doing business, and what resources you need to make that happen. The above concepts apply from laptops to mainframes. If you need help creating or updating your Business Continuity plan, give IBM a call.
The blogosphere has quieted down a bit over the two papers on MTBF estimates for Disk Drive Modules (DDM). One article on SearchStorage.com by Arun Taneja asks Is RAID passé? Disk capacity is growing at a faster rate than DDM reliability. During the hours it takes to rebuild a DDM, companies are at risk of additional failures that could require recovery from a copy, or result in data loss, depending on how well your Business Continuity (BC) plan is written and followed.
... The problem with that is that it's the DISK ARRAY that determines when a drive has failed and starts the rebuild process. That IS under the control of IBM, specifically the controller. But more importantly, it affects my risk of data loss.
As I see it, my risk of data loss with RAID-5 is influenced by two main factors: 1 - the drive replacement rate, and 2 - the rebuild time (which to a great extent is a function of the drive size), both of which IBM has some control over.
So, I think that the question in my mind is, what's the tipping point? Where does the risk of using RAID-5 protection exceed what I'm willing to accept, and I need to move to some other protection mechanism like RAID-6? Is it when the rebuild times exceed 12 hours? 24 hours? 48 hours?
Also, I wonder why IBM isn't publishing some information to help me make these kinds of decisions?
Oh, dear - while Tony doesn’t seem to be parrying vigorously (as Seagate, Hitachi, and Chuck were doing), his contribution sounds more like IBM marketing than the kind of detailed, technical response one might have hoped for
... well, he *is* a manager, and a marketing one at that, so perhaps we shouldn’t expect more).
Both are fair comments. Disk arrays do run microcode to assist or perform the RAID function, detect failures and start the rebuild process, and so clever designs to support spare disks, process the rebuild quickly, and so on, can differentiate one vendor's offering from another.
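One way to put rough numbers on the commenter's tipping-point question is a back-of-the-envelope model of the exposure window. This is my own simplification, assuming independent drive failures at a constant rate derived from the quoted MTBF; real arrays also see correlated failures and unrecoverable read errors during rebuild, which this ignores, so treat it as illustrative, not vendor guidance.

```python
import math

def raid5_loss_risk(drives: int, mtbf_hours: float, rebuild_hours: float) -> float:
    """Rough probability that a second drive in the array fails while
    the first is rebuilding, assuming independent exponential failures:
    P = 1 - exp(-(N-1) * T_rebuild / MTBF).
    """
    surviving = drives - 1
    return 1.0 - math.exp(-surviving * rebuild_hours / mtbf_hours)

# Hypothetical 8-drive array with a 1,000,000-hour quoted MTBF
for hours in (12, 24, 48):
    risk = raid5_loss_risk(8, 1_000_000, hours)
    print(f"{hours:>2}h rebuild: {risk:.4%} chance of a second failure")
```

Under these (optimistic) assumptions, the per-rebuild risk scales linearly with rebuild time, so doubling drive size roughly doubles the exposure. That is the arithmetic driving the RAID-5 versus RAID-6 debate, even before factoring in the higher field replacement rates the two papers reported.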
On the issue of what IBM provides to help its clients make the right decisions for their environments, Jon William Toigo at DrunkenData points his readers to IBM's Business Continuity Self-Assessment tool. In normal data center conditions, DDMs will fail, and a Business Continuity plan should be written and developed to handle this fact. Using 2-site and 3-site mirroring, complemented with versions of tape backups, can help address some of these concerns and mitigate some of the risks involved with using disk systems.
For those who want a more technical answer, IBM has just published a series of IBM Redbooks.
Each client's situation is different, so no simple answer is possible. However, IBM does have a lot of experience in this area, and would be glad to help you write or update your existing Business Continuity plan.
The smart people at the University of Pittsburgh manage five campuses and over 33,000 students, and needed to create an enterprise storage solution that would give them three key benefits. Of course, they turned to IBM, the number one overall storage hardware vendor, to deliver.
A new storage infrastructure with the capacity to grow with the University of Pittsburgh as needed
Improved system reliability with reduced downtime, and availability 24/7/365
A significantly more manageable storage solution that could lower costs and provide better system efficiency through virtualization
As a result, IBM shipped its 25,000th high-end disk storage system, in this case two IBM System Storage DS8300 models, along with storage virtualization, and other related hardware, software and services, to provide a complete end-to-end solution.
Here is what Jinx Walton, Director of Computing Services and Systems Development at the University of Pittsburgh, had to say about it...
"The University of Pittsburgh supports large enterprise systems, and the number and complexity of new systems continue to grow. To effectively manage these systems it was necessary to identify an enterprise storage solution that would leverage our existing investments in storage, make allocation of storage flexible and responsive to project needs, provide centralized management, and offer the reliability and stability we require. The integrated IBM storage solution met these requirements."
I am back from China, and glad to be back in the good old USA. Last week, someone asked me what it would take to add a specific feature to the IBM System Storage DS8300. The what-would-it-take question is known in development circles informally as a "sizing" effort, or more formally as a "Development Expense" estimate.
For software engineering projects, the process was simple: an architect would estimate the number of "Lines of Code" (LOC), typically represented in thousands of lines of code (KLOC). This single number would convert to another single number, "person-months", which would then translate to another single number, "dollars". Once you had KLOC, the rest followed directly from a formula, average, or rule-of-thumb.
More amazing is that this single number could then determine a variety of other numbers: the total number of months for the schedule, the number of developers, testers, publication writers and quality assurance team members needed, and so on. Again, these were derived using formulas developed from past experience on similar projects.
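The best-known published instance of this kind of formula is Boehm's basic COCOMO model. A sketch of how one number fans out into effort, schedule, and team size (the coefficients below are COCOMO's published "organic mode" values, an illustrative assumption, not any IBM-internal rule):

```python
def basic_cocomo(kloc: float, a: float = 2.4, b: float = 1.05,
                 c: float = 2.5, d: float = 0.38):
    """Boehm's basic COCOMO, "organic mode" coefficients.
    Returns (effort in person-months, schedule in months, implied team size)."""
    effort = a * kloc ** b          # person-months from KLOC alone
    schedule = c * effort ** d      # calendar months from effort alone
    return effort, schedule, effort / schedule

# A hypothetical 50 KLOC project
effort, months, team = basic_cocomo(50)
print(f"{effort:.0f} person-months over {months:.1f} months, team of ~{team:.0f}")
```

For 50 KLOC this yields roughly 146 person-months spread over about 17 months with a team of around 9; multiply person-months by a loaded labor rate and the single "dollars" number falls out, just as described above.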
Hardware design introduces a different set of challenges. When I was getting my Masters Degree in Electrical Engineering, it took me and four other grad students a full semester just to design a six-layer, 900-transistor silicon chip, which could only perform a single function: multiply two numbers together. At IBM, another book I was given to read was Soul of a New Machine, documenting six hardware engineers and six software engineers working long hours on a tight schedule to produce a new computer for Data General.
So why do I bring this up now? IBM architects William Goddard and John Lynott are being inducted posthumously this year into the prestigious National Inventors Hall of Fame for their disk system innovation.
Under the leadership of Reynold Johnson, the team developed an air-bearing head to “float” above the disk without crashing into it. Imagine a fighter airplane flying full speed across the countryside at 50 feet off the ground. If you ever heard the term "my disk crashed", it originally referred to the read/write head touching the disk surface, causing terrible damage.
A uniformly flat disk surface was created by spinning the coating onto the rapidly rotating disk, leaving many a lab coat covered with the coating liquid at waist level. Developing disk-to-disk and track-to-track access mechanisms proved more challenging, and nearly halted the project. The team, however, was adamant that this problem could be solved, and customers were increasingly asking for random access technology. The result was the "350 Disk Storage Unit" designed for the "305 RAMAC computer", which I talked about a lot last year as part of our "50 years of disk systems innovation" celebration.
Neither Goddard nor Lynott had computing experience prior to joining IBM. Goddard was a former science teacher who briefly worked in aerospace. Lynott had been a mechanic in the Navy and later a mechanical engineer. They didn't have a nice formula based on past experience, they didn't have the benefit of Fred Brooks' advice, or the rules-of-thumb or averages now used to estimate the size of projects. They had to break new ground.
Continuing my coverage of SNW Spring 2007, Ron and Vincent kicked off Wednesday main tent sessions with more survey questions:
Q1. How secure is your storage network?
27% Redundant, 100% able to withstand physical failures
28% Able to withstand hackers, but not physical failures
37% Weak on both fronts
Q2. What was the cause of most downtime in last 12 months?
1% Natural disasters
13% Network outages
14% Server failures
9% Telecom provider outage
22% IT resource upgrades
33% Human error
Thornton May, futurist and columnist for ComputerWorld, presented "Storage 3.0: What Comes After, What Comes Next." I have seen several "futurists" present at conferences like this. They all feel the need to explain what their job is, and what it takes to be one. This time, Thornton indicated he was "ridiculously well-travelled, amazingly well-connected, pathologically observant, and brutally honest." His insights:
At current rates, in 15 years every molecule on earth will have its own IP address.
"What's NOT good enough changes." -- Clayton Christensen
Gabriel Broner, General Manager of the newly created "Storage Solutions" division of Microsoft, presented "The Drive to Unified Storage". The people sitting around me asked "What does Microsoft have to do with storage?" He defined "Unified Storage" the way we use it for the IBM System Storage N series: "a storage unit that provides both file and block level protocol support." Microsoft is using e-mail as the model for data access, identifying the need to have "off-line" copies on your PC or laptop that are synced up with "on-line" sources. Features that were typically only available for high-end applications are now being made available to the masses, like the "Volume Snapshot" capability in Windows Vista. On the home front, Microsoft recognizes that typically one person acts as the "IT manager" for the family.
He presented their survey of storage spending at Fortune 1000 companies. It was not clear if this was for Windows environments only, or how the data was collected. These numbers don't match what we hear from our UNIX or mainframe customers.
Microsoft is implementing application changes, such as Office 2007, to simplify storage issues. Storage virtualization is the key for the future, he says, stating that Microsoft's "iSCSI target" software support makes files look like block-oriented volumes. Virtualization is now mainstream, and deploying software on standard hardware is the new storage business model. The end goal is to simplify provisioning, device and resource management, without reducing functionality, narrowing the gap between general IT tasks and specific storage tasks.
Craig Lau, NBC Olympic coverage, presented their success story. Look at the number of "hours" of TV Olympic coverage over the years:
1996 Atlanta -- 175 hours
2000 Sydney -- 441 hours
2004 Athens -- 1210 hours
NBC is now able to deliver 70 hours of TV programming per day, shown across their seven channels (NBC, CNBC, MSNBC, Bravo, USA Network, Telemundo, and HD-TV). The Olympics in Torino, Italy generated 25,000 tapes in 17 days. Their 100,000-tape Olympic repository is starting to deteriorate, and they need to consider conversion to digital format. Their challenge was that footage was difficult to find, and producers needed immediate access to time-sensitive, critical content.
Their solution was Digital Asset Management: automating indexing and logging, using IP-based workflows that reduce the number of people at the Olympics location, and allowing content to be sent back to the USA for remote editing. The facilities at Torino involved:
2850 people, most hired just the week prior to the Olympic event
250TB of disk storage
135 High-Definition cameras
212 Video Tape Recorders
4000 hours of content on 1700 tapes
NBC is frustrated by the lack of compatibility and interoperability in the video format industry. They have been testing MPEG-1 (1.5 Mbps) formats, and plan to deploy a new system using 1080i for the upcoming 2008 Olympics in Beijing. With the new system, they can index footage by athlete, by event, and by human emotional reaction. They can review and edit footage within 30-45 seconds of live coverage, allowing rough edits to be documented as "Edit Decision Lists" that can be e-mailed or put on a USB key for others to review.
Although I missed Anil Gupta's "Blogger Event" on Monday, several bloggers did stop by to visit me at the IBM booth.
I would like to welcome IBMer Barry Whyte to the blogosphere!
From his bio:
Barry Whyte is a 'Master Inventor' working in the Systems & Technology Group based in IBM Hursley, UK. Barry primarily works on the IBM SAN Volume Controller virtualization appliance. Barry graduated from The University of Glasgow in 1996 with a B.Sc (Hons) in Computing Science. In his 10 years at IBM he has worked on the successful Serial Storage Architecture (SSA) range of products and the follow-on Fibre Channel products used in the IBM DS8000 series. Barry joined the SVC development team soon after its inception and has held many positions before taking on his current role as SVC performance architect. Outside of work, Barry enjoys playing golf and all things to do with Rotary Engines.
To avoid confusion in future posts, I will refer to Barry Whyte as BarryW, and fellow EMC blogger Barry Burke (aka the Storage Anarchist) as BarryB.
I'm in Chicago this week, but it is actually HOTTER here than in my home town of Tucson, Arizona.
SNW wrapped up Thursday. As is often the case, a lot of people have left already.
I saw two presentations worth discussing here in this blog.
Angus MacDonald, CEO of Mathon Systems, presented "Litigation Readiness: How prepared are you for the demands of eDiscovery?"
The process of eDiscovery is to take a large volume of data and extract the small bits of relevance, as it relates to a case, investigation, or litigation. In 2004, there were 64 billion emails per day, and this is expected to reach 103 billion by 2008. There are growing concerns about the "spoliation" of evidence, which I thought was a typo until I looked it up. He encouraged everyone to check out the Electronic Discovery Reference Model, which is trying to standardize the way the IT and legal communities communicate with each other.
The problem is often miscommunication over semantics and terminology. For example, in eDiscovery, the term "production" describes the delivery of relevant documents to a judge or opposing party. This may involve printing them out on paper, delivering them electronically in their original format, or converting them to a more standard electronic format like Adobe PDF. The judge or opposing party reserves the right to request how they want the documents produced. Of course, in any format other than the original, authenticity needs to be affirmed.
He gave two example lawsuits related to this.
In Zubulake v. UBS Warburg, Zubulake was awarded $29 million because UBS stored old emails on backup tapes, rather than an archiving system, and could not locate seven of these backup tapes. This is not the first time I have seen some IT department, or some legal department, think that keeping backups of email repositories for many years is the same as keeping an "archive".
In Coleman Holdings v. Morgan Stanley, Coleman was awarded $1.45 billion because the judge felt that Morgan Stanley failed to do proper eDiscovery. This was after they tried to reconstruct their email system from 5000 old backup tapes.
Angus suggests identifying the types of documents most often requested, and starting your planning from there. In an interesting twist, the CEO/CFO/CIO might go to jail if the IT department doesn't do something correctly, so perhaps IT managers will now get the respect, funding, and technology they need to get the job done.
Bruce Kornfeld, Compellent Technologies, presented "Building Systems that Scale: Imagining the one Petabyte per Admin management ratio."
Bruce did a good job staying generic, and not mentioning his company's products too much. Specifically, Compellent makes a frame similar to what IBM used to call the "SAN Integration Server". Back in 2003, IBM introduced the SAN Volume Controller, which had no disk, and the "SAN Integration Server", which had controller plus disk. What IBM learned was that customers prefer the diskless model, minimizing the amount of disk that has to be purchased from the original vendor, and giving them the freedom to choose any vendor they like for the managed capacity.
An interesting feature of the Compellent solution is that it chops up the virtual disk into 2MB pieces and allows these pieces to be moved automatically from high-speed (FC) to low-speed (SATA) disk, based on their reference frequency. This is similar to HSM, but at the block level rather than the file level.
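The block-level tiering described above can be sketched in a few lines of Python. This is a toy model of the general technique (track access counts per fixed-size extent, then promote hot extents to fast disk and demote cold ones): the threshold, tier names, and class are my own illustrative assumptions, not Compellent's actual algorithm.

```python
from collections import defaultdict

EXTENT_SIZE = 2 * 1024 * 1024   # 2 MB extents, as described above
HOT_THRESHOLD = 100             # hypothetical hits-per-interval cutoff

class TieringMap:
    """Toy sketch of block-level tiering: a virtual disk is carved
    into fixed-size extents, access counts are tracked per extent,
    and a background policy moves extents between a fast tier (FC)
    and a slow tier (SATA). Illustrative only."""

    def __init__(self, num_extents):
        self.tier = ["SATA"] * num_extents   # all extents start on slow disk
        self.hits = defaultdict(int)

    def record_io(self, byte_offset):
        # Map a byte offset on the virtual disk to its extent number.
        extent = byte_offset // EXTENT_SIZE
        self.hits[extent] += 1

    def rebalance(self):
        # Promote frequently referenced extents, demote cold ones.
        for extent in range(len(self.tier)):
            if self.hits[extent] >= HOT_THRESHOLD:
                self.tier[extent] = "FC"
            else:
                self.tier[extent] = "SATA"
        self.hits.clear()                    # start a fresh interval

tmap = TieringMap(num_extents=4)
for _ in range(150):
    tmap.record_io(0)             # hammer extent 0
tmap.record_io(3 * EXTENT_SIZE)   # touch extent 3 once
tmap.rebalance()
print(tmap.tier)                  # extent 0 promoted to FC
```

The file-level HSM analogy holds: the policy loop is the same, but the unit being migrated is a fixed-size extent of the volume rather than a whole file, so a single large file can straddle both tiers.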
Every advantage Bruce listed for his box already exists from IBM: improved capacity planning, improved performance, ease of data migration, flexible volumes, and a single pane of glass GUI administration tool.
Perhaps more interesting were the questions from the audience:
Q1. Do you have any customers that have 1PB of your solution? No, we have several in the 200-500TB range.
Q2. You only have a single two-node cluster; can we have more clusters? No, that is all we support; if you need more, you would have to go to one of the major storage vendors (like IBM).
Q3. Do we have to buy Compellent storage to go with the Compellent controllers? Yes, it is designed so it is an integrated solution. If you need to virtualize your existing storage, you have to go to one of the major storage vendors (like IBM).
Q4. Doesn't having data migrate automatically from FC to SATA behind the scenes lower performance and raise the risk of disk failure? Our box is designed for inactive data, so performance is not an issue.
Q5. How do you protect against double-disk failures? We don't, and these would be even more detrimental to our solution than traditional solutions. Other vendors offer RAID6, but we don't have that yet.
It was a fun week, and good to see people I have communicated with, but never met in person.