Michael Scott, one of my "Second Life" builder/scripters, received an award for demonstrating client-focused dedication to IBM's corporate values.
Our site manager, Terri Mitchell, did a recap of all our recent awards and accomplishments. Of the nine Design Innovation awards won by IBM this year at the CeBIT conference, eight were for IBM System Storage products!
The IBM System Storage EXP3000: an entry-level data storage server that is optimized for cost-sensitive and space-limited environments and employs a user-centered design that enables ease of use and simple tool-less installation and removal of all components.
The IBM System Storage N7000 Series: a modular disk storage system that delivers high-end enterprise storage and data management value ideal for large-scale applications, while helping to anticipate growth, maintain data availability and reduce costs.
The IBM System Storage N5000 Series: a modular disk storage system designed to address the entire spectrum of data availability challenges while offering value in price and scalability. Built-in enterprise serviceability and manageability features support efforts to increase reliability and simplify storage infrastructure and maintenance.
The IBM System Storage N3700: a filer that integrates storage and storage processing into a single unit, facilitating affordable network deployments.
The IBM System Storage DS4700: a NEBS-compliant disk storage server designed to address requirements for companies in the telecommunications industry, as well as other segments such as oil and gas, meeting standards for electromagnetic compatibility, thermal robustness, and earthquake and office vibration resistance, and providing protection for the product components from airborne contaminants.
The IBM System Storage EXP810: a data storage expansion unit capable of 4.8 Terabytes of physical storage, with a user-centered and tool-less design featuring redundant power, cooling, and disk modules for ease of use and simple serviceability.
The IBM System Storage TS3400: an affordable, space-friendly tape library for users in remote locations that supports enterprise-class technology and encryption capabilities.
A representative from Tucson's Brewster Center presented Terri an award, thanking IBM for its strong support for the community through various charity initiatives.
The final speaker was a new IBM client, Tony Casella, the IT Director of the town of Marana. The town of Marana's recent selection of IBM products made big news. Arizona is the fastest growing state in the USA, and the town of Marana, just north of Tucson, is one of the fastest growing communities in Arizona. The town is growing so large that it will soon spill over from Pima into Pinal county, and will be the first town in Arizona authorized to span county boundaries.
The "Storage Symposium Mexico - 2008" conference was a great success this week!
Day 1 - The plan was for me to arrive for the Wednesday night reception. Each attendee was given a copy of my latest book [Inside System Storage: Volume I] and I was planning to sign them. I thought perhaps we should have a "book signing" table like all of the other published authors have.
Things didn't go according to plan. Thunderstorms at the Mexico City airport forced our pilot to find an alternate airport. Nearby Acapulco airport was the logical choice, but was full from all the other flights, so the plane ended up in a tiny town called McAllen, Texas. I did not arrive until the morning of Day 2, so I ended up signing the books throughout Thursday and Friday, during breaks and meals, wherever they could find me!
Special thanks to fellow IBMer Ian Henderson, who picked me up from the airport at such an awkward hour and drove me all the way to Cuernavaca!
All of us, IBMers, Business Partners and clients alike, donned black tee-shirts with a white eightbar logo for a group photo with one of those "wide lens" cameras. While we were being assembled onto the bleachers, I took this quick snapshot of myself and some of the guys behind me.
I was originally scheduled to be first to speak, but with my flight delays, was moved to a time slot after lunch. After a big Mexican lunch, the conference coordinators were afraid the attendees might fall asleep, a Mexican tradition called [siesta], so I was instructed to WAKE THEM UP! Fortunately, my topic was Information Lifecycle Management, a topic I have been passionate about since my days working on DFSMS on the mainframe. With 30 percent reductions in hardware capital expenditures, 30 percent reductions in operational costs, and typical payback periods of 15 to 24 months, the presentation got everyone's attention.
Of course, a lot happens outside of the formal meetings. We had a Japanese theme dinner, where we wore Japanese Hachimaki [headbands] with the eightbar logo. For those not familiar with Japanese culture, hachimaki are worn today not so much for the practical purpose of catching perspiration, but rather for mental stimulation, to express one's determination. Some students wear hachimaki when they study to put themselves in the right spirit and frame of mind.
Shown here are presenters Mike Griese (Infrastructure Management with IBM TotalStorage Productivity Center), Dave Larimer (Backup and Storage Management with IBM Tivoli Storage Manager), myself, and John Hamano (Unified Storage with IBM System Storage N series).
Day 3 - Wrapping up the week, I presented two more times.
First, I covered IBM Disk Virtualization with IBM SAN Volume Controller. One interesting question was whether the SAN Volume Controller could be made to look like a Virtual Tape Library. I explained that this was never part of the original design, but that if you want to combine SVC with a VTL into a combined disk-and-tape blended solution, consider using the IBM product called Scale-Out File Services [SoFS], which I covered in my post [More details about IBM clustered scalable NAS].
During one of the breaks, I took a picture of the behind-the-scenes staff that put this together. They had created these huge blocks representing puzzle pieces, emphasizing how IBM is one of the few IT vendors that can bring all the pieces together for a complete solution.
Shown here are Mike Griese (presenter), Cyntia Martinez, Claudia Aviles, Cesar Campos (IBM Business Unit Executive for System Storage in Mexico), and Claudia Lopez. Each day the staff wore matching shirts so that it was easy to find them.
Later, I covered Archive and Compliance Solutions to highlight our complete end-to-end set of solutions. When asked to compare and contrast the architectures of the IBM System Storage DR550 with EMC Centera, I explained that the DR550 optimizes the use of online disk access for the most recent data. For example, if you are going to keep data for 10 years, maybe you keep the most recent 12 months on disk, and the rest is moved, using policy-based automation, to a tape library for the remaining nine years. This means that the disk inside the DR550 is always being used to read and write the most recent data, the data you are most likely to retrieve from an archive system. Data older than a year is still accessible, but might take a minute or two for the tape library robot to fetch. The EMC Centera, on the other hand, is a disk-only solution. It offers no option to move older data to tape, nor the option to spin down the drives to conserve power. It fills up after the same 12 months or so, and then you get to watch it for the remaining nine years, consuming electricity and heating your data center.
I don't know about you, but I have never seen anyone purposely put "space heaters" into their data center, yet a full EMC Centera does little else. Both devices use SATA drives and support disk mirroring between locations, but the IBM DR550 offers dual-parity RAID-6, and supports encryption of the data on both the disk and the tape in the DR550. EMC Centera still uses only RAID-5, and has not yet, as far as I know, offered any level of encryption. The IBM System Storage DR550 was clocked at about three times faster than Centera at ingesting new archive objects over a 1Gb Ethernet connection.
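To make the policy idea above concrete, here is a minimal sketch of age-based placement. The threshold and names are hypothetical illustrations of the general technique, not DR550 code:

```python
from datetime import datetime, timedelta

# Hypothetical policy: the most recent 12 months stay on disk,
# everything older is a candidate for migration to tape.
DISK_RETENTION = timedelta(days=365)

def placement_tier(last_written, now=None):
    """Return the tier an archive object should occupy."""
    now = now or datetime.utcnow()
    return "disk" if (now - last_written) <= DISK_RETENTION else "tape"

# An object written 14 months ago belongs on tape; a fresh one stays on disk.
print(placement_tier(datetime(2007, 1, 15), now=datetime(2008, 3, 15)))  # tape
print(placement_tier(datetime(2008, 3, 1), now=datetime(2008, 3, 15)))   # disk
```

In a real archive system, of course, a policy engine evaluates rules like this continuously and drives the migration automatically.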
This last photo is me and fellow IBMer Adriana Mondragón. She was one of my students in the [System Storage Portfolio Top Gun class] last February in Guadalajara, Mexico. She graduated in the top 10 percent of her group, earning her the prestigious title of "Top Gun" storage sales specialist.
The conference wrapped up with a Mexican lunch with a traditional Mariachi band. I took pictures, but figured you all already know what [Mariachi players] look like, and I didn't want to detract from the otherwise serious tone of this blog post! This was the first System Storage Symposium in Mexico, but based on its success, we might continue these annually.
A lot of people ask me about IBM branding, as we have recently changed brands. In the past we had two separate brands, one for servers (eServer) and one for storage (TotalStorage). Those were fine when we wanted to promote their independence, but customers today want synergy between servers and storage; they want systems that work well together.
Last year, in response to market feedback, we created a new brand, "IBM Systems", and put all the server and storage product lines under one roof. Over time, we will transition from TotalStorage to System Storage naming. This will occur with new products and major versions of existing products.
Two other phrases you will hear in the names of our offerings are "Virtualization Engine" and "Express". These are portfolio identifiers. The Virtualization Engine identifier was created to emphasize our leadership in system virtualization, and we have products that span product lines with this identifier.
The Express identifier was created to emphasize our focus on Small and Medium sized business (SMB). It spans not just servers and storage, but across other offerings from other IBM divisions.
Of course, just renaming products and services isn't enough. Systems don't work together just because they have similar names, are covered in similar "Apple white" plastic, or have similar black bezels. Obviously, thoughtful and collaborative design is needed, with the appropriate amounts of engineering and testing. IBM is aligning its server and storage development so that the IBM Systems brand keeps its promise.
IBM's emphasis on "Information Infrastructure" is to help organizations get the right information, to the right people at the right time. This helps them to have the right insights, make the right decisions, and develop the right innovations needed for the challenges at hand.
As the planet got smaller and flatter, IBM led the way. Now, as the planet needs to get smarter--with more efficient health care, energy distribution, financial institutions, and IT infrastructures--IBM will once again take the lead.
It has always been the case in fast-paced technology areas that you can't tell the players without a program card, and this is especially true for storage.
When analyzing each acquisition move, you need to think about what is driving it. What are the motives? Having been in the storage business 20 years now, and having seen my share of acquisitions, both within IBM and among the competition, I have come up with the following list of motives.
Although slavery was abolished in the US back in the 1800s, and centuries earlier everywhere else, many acquisitions seem to be focused on acquiring the people themselves, rather than the products or client list. I have seen statistics such as "We retained 98% of the people!" In reality, these retentions usually involve costly incentives, signing bonuses, stock options, and the like. Despite this, people leave after a few years, often because of personality or "corporate culture" clashes. For example, many former STK employees seem to be leaving after their company was acquired by Sun Microsystems.
If you can't beat them, join them. Acquisitions can often be used by one company to raise its ranking in market share, eliminating smaller competitors. And now that you have acquired their client list, perhaps you can sell them more of your original set of products!
Symantec had acquired Veritas, which in turn had acquired a variety of other smaller players, and the end result is that they are now the #1 backup software provider, even though none of their products holds a candle to IBM's Tivoli Storage Manager. Meanwhile, EMC acquired Avamar to try to get more into the backup/recovery game, but most analysts still find EMC down in the #4 or #5 place in this category.
Next month, Brocade's acquisition of McData should take effect, furthering its market share in SAN switch equipment.
Prior to my current role as "brand market strategist" for System Storage, I was a "portfolio manager", where we tried to make sure that our storage product line investments were balanced. This was a tough job, as we had to balance development investments across different technologies, including patent portfolios. Despite IBM's huge research budget, I am not surprised that some clever inventions of new technologies come from smaller companies, which then get acquired once their results appear viable.
The last motive is value shift. This is where companies try to re-invent themselves, or find that they are stuck in acommodity market rut, and wish to expand into more profitable areas.
LSI Logic's acquisition of StoreAge is a good example of this. Most of the major storage vendors have already shifted to software and services to provide customer value, as predicted in the 1990s by Clayton Christensen in his book "The Innovator's Dilemma". The rest are still struggling to develop the right strategy, but leaning in this general direction.
As financial firms focus on costs, their IT departments will have an opportunity to consolidate their servers, networks and storage equipment. Consolidating disk and tape resources, implementing storage virtualization, and reducing energy costs might get a boost from this crisis. Consolidating disparate storage resources onto a big SoFS, XIV, or DS8000 disk system, or a TS3500 tape library, might greatly help reduce costs.
Mixed vendor environments that result from such mergers and acquisitions can be complicated to manage. Thankfully, IBM TotalStorage Productivity Center manages both IBM and non-IBM equipment, based on open industry standards like SMI-S and WBEM. Merged companies might let go IT staff whose knowledge is limited to a single vendor, but keep the ones with cross-vendor infrastructure management skills and ITIL certification.
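As a rough illustration of what managing "based on open industry standards" means in practice, here is a minimal sketch using the open source pywbem library to query a device's SMI-S (CIM/WBEM) agent. The host, port, credentials, and namespace are hypothetical (namespaces vary by vendor); CIM_StorageVolume is a standard CIM class:

```python
import pywbem

# Connect to a storage device's SMI-S (CIM/WBEM) agent.
# Host, port, and credentials here are made up for illustration.
conn = pywbem.WBEMConnection(
    "https://smis-agent.example.com:5989",
    ("admin", "password"),
    default_namespace="root/cimv2",  # actual namespace varies by vendor
)

# Enumerate the storage volumes the agent exposes. Because the class
# and its properties are defined by the standard, the same loop works
# against any SMI-S-compliant array, regardless of vendor.
for vol in conn.EnumerateInstances("CIM_StorageVolume"):
    size_bytes = vol["BlockSize"] * vol["NumberOfBlocks"]
    print(vol["ElementName"], size_bytes)
```

That vendor-neutrality is exactly why cross-vendor management skills survive a merger better than single-vendor ones.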
Comparing different vendor equipment
It seems that often, when there is a merger or acquisition, the two companies were using storage gear from different vendors. IBM has made some incredible improvements over the past three years, in both performance and energy efficiency, but many companies with non-IBM equipment may not be aware of them. If there was ever a time to perform a side-by-side comparison between IBM and non-IBM equipment, here is your chance.
For more on the impact of the financial meltdown on IT, see this InfoWorld [Special Report].
Jon Toigo over at DrunkenData writes in his post [A Wink and a Nod] about the benefits of the new IBM System z10 Enterprise Class mainframe. Here's an excerpt about storage:
"The other key point worth making about this scenario is that storage behind a z10 must conform to IBM DASD rules. That means no more BS standards wars between knuckle-draggers in the storage world who continue to mitigate the heterogeneous interoperability and manageability of distributed systems storage using proprietary lock in technologies designed as much to lock in the consumer and lock out the competition as to deliver any real value. That has got to be worth something."
For z/OS and TPF operating systems, disk must support CCW commands over ESCON or FICON connections, or NFS commands over the Local Area Network. However, most of the workloads being ported over from x86 platforms will probably be running Linux on System z images, and Linux supports both CCW and SCSI protocols, the latter over native FCP connections through a Storage Area Network (SAN) or via iSCSI over the Local Area Network. Many SAN directors support both FCP and FICON, and the z10 also supports both 1Gbps and 10Gbps Ethernet, so you may not have to invest in any new networking gear.
The best part is that you may not have to migrate your data. The IBM System Storage SAN Volume Controller is supported for Linux on System z, and with "image mode" you can leave the data in its original format on its original disk array. Many file systems are now supported by Linux, including Windows NTFS with the latest NTFS-3G driver.
If your data is already on NAS storage, such as the IBM System Storage N series disk systems, then the IBM z10 can access it directly, from z/OS, z/VM or Linux.
Have lots of LTO tape data? Linux on System z supports LTO as well.
Jon continues his rant with a question about porting Microsoft Windows applications. Here's another excerpt:
"For one, what do we do with all the Microsoft servers. There is no Redmond-sanctioned approach to my knowledge for virtualizing Microsoft SQL Server or Exchange Server in a mainframe partition."
Yes, it is possible to run Windows on a mainframe through emulation, but I feel that's the wrong approach. Instead, the focus should be on running "functionally equivalent" programs on the native mainframe operating systems, and again Linux is often the best choice for this. Switching from Windows to Linux may not be "Redmond-sanctioned", but it gets the job done.
Instead of SQL Server, consider something functionally equivalent like IBM's DB2 Universal Database, or perhaps an open source database like MySQL, PostgreSQL or Apache Derby. Well-written applications use standard SQL calls, so if the application does not try to use unique, proprietary features of MS SQL Server, you are in good shape.
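As a sketch of that portability point (the table name and connection details are hypothetical): with Python's DB-API, the vendor-specific part is largely confined to the driver import and the connect() call, while standard SQL stays the same. One caveat: parameter-marker styles do differ between drivers, which is why this example uses a literal predicate.

```python
# The driver import and connect() call are the vendor-specific parts;
# swap one line to move between databases.
import psycopg2 as db          # PostgreSQL
# import ibm_db_dbi as db     # IBM DB2
# import MySQLdb as db        # MySQL

conn = db.connect("dbname=shop user=app")   # hypothetical connection details
cur = conn.cursor()

# Standard SQL with no vendor extensions (e.g. no T-SQL "TOP" clause),
# so it runs unchanged on any of the databases above.
cur.execute("SELECT order_id, total FROM orders WHERE region = 'WEST'")
for order_id, total in cur.fetchall():
    print(order_id, total)
conn.close()
```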
In my discussion last November on [Microsoft Exchange email server], I mentioned that Bynari makes a functionally equivalent email server on Linux that works with your existing Microsoft Outlook clients. Your end-users wouldn't know you migrated to a mainframe! (well, they might notice their email runs faster)
So if your data center has three or more racks of Sun, Dell or HP "pizza box" or "blade" x86 servers, chances are you can migrate the processing over to a shiny new IBM z10 EC mainframe, and save some money in the process, without too much impact to your existing Ethernet, SAN or storage system infrastructure. IBM can even help you dispose of the old x86 machines so that their toxic chemicals don't end up in a landfill.
Well, it's Tuesday, which means it's time to look at recent announcements. While I was on vacation last week, IBM made a lot of storage announcements on October 23. Josh Krischer gives his summary on WikiBon [October 2007 Review]. Austin Modine of The Register went so far as to say that [IBM goes crazy with storage system updates].
IBM System Storage DS8000 series
This is "Release 3" software/microcode upgrades on our existing "Turbo" hardware.
IBM FlashCopy SE -- Here "SE" stands for Space Efficient. Rather than allocating a full 100% of the space for the FlashCopy destination, you can set aside just a fraction, and this will hold all the changed blocks, similar to what IBM already offers on the DS4000 series. (A toy sketch of the changed-block idea follows this list.)
Dynamic Volume Expansion -- In the past, if you needed more space for a LUN, you had to carve out a new one elsewhere, and then copy the data over from the old to the new, leaving the old LUN around to be re-used or left stranded. With this enhancement, you can just upgrade the LUN in place, making it bigger as needed, similar to what IBM already offers on the DS4000 series and SAN Volume Controller. This applies to CKD volumes for the System z mainframe users out there as well.
Storage Pool Striping -- striping volumes across RAID ranks to eliminate or reduce hot spots, and provide better load balancing. Many used SAN Volume Controller in front of the DS8000 to do this, but now you can do it natively in the DS8000 itself.
z/OS Global Mirror Multiple Reader -- for System z customers, "z/OS Global Mirror" is the new name for XRC. This enhancement improves the throughput of sending updates to the remote disaster recovery location.
DS Storage Manager enhancements -- the element manager software has been improved, and is pre-installed on the new IBM System Storage Productivity Center, which I will talk about below.
Intermix of DS8000 machine types -- this is especially useful to allow new frames to have co-terminating warranties with the base units. In other words, as you expand your system, you can ensure that the entire chunk of iron runs out of warranty at the same time, to simplify your decision making process to upgrade or contract for extended service.
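To make the FlashCopy SE item above more concrete, here is a toy copy-on-write sketch of the changed-block idea. It illustrates the general technique only; it is not DS8000 microcode, and all names are made up:

```python
class SpaceEfficientSnapshot:
    """Toy copy-on-write snapshot: only blocks written after the
    snapshot was taken consume space in the repository."""

    def __init__(self, source_blocks):
        self.source = source_blocks        # live volume (list of blocks)
        self.repository = {}               # preserved pre-change blocks only

    def write(self, index, data):
        # First write to a block since the snapshot? Preserve the old
        # contents in the (small) repository, then overwrite the live copy.
        if index not in self.repository:
            self.repository[index] = self.source[index]
        self.source[index] = data

    def read_snapshot(self, index):
        # Snapshot view: preserved copy if the block changed, else live data.
        return self.repository.get(index, self.source[index])

vol = ["A", "B", "C", "D"]
snap = SpaceEfficientSnapshot(vol)
snap.write(1, "B2")
print(snap.read_snapshot(1), vol[1])                 # B B2
print(len(snap.repository), "of", len(vol), "blocks used")  # 1 of 4
```

The repository only grows with the write rate, which is why a fraction of the source capacity is usually enough for short-lived copies such as backup sources.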
One of the biggest complaints about IBM TotalStorage Productivity Center is that it is software that needs to be installed on its own server, and that this installation process can take a day or two. Why wait? Now you can have a hardware console that has the DS8000 Storage Manager software, SVC Admin Console software, and IBM TotalStorage Productivity Center "Basic Edition" pre-installed. Here are the key features:
Pre-installed and tested console
DS8000 R3 GUI integration
Cohabitation of SVC 4.2.1 GUI and CIMOM
Automated device discovery
Asset and capacity reporting, including tape library support
Our "Release 9" applies across the board, from N3000 to N5000 to N7000 series models, includingnew host bus adapters, and the new Data OnTAP 7.2.4 release level.
The Virtual File Manager (VFM) was announced as one of our latest [Storage Virtualization Solutions]. VFM provides a global namespace that aggregates the file systems from Linux, UNIX, and Windows file servers, as well as N series storage, into a consolidated environment.
IBM's virtual tape library (VTL) for the distributed systems platform, has been enhanced to provide:
Up to 12TB of disk cache, using 750GB SATA disk.
F05 Tape Frames installed as TS7520 base units through a 32-port Fibre Channel switch
Support for LTO generation 4 tape drives, both as virtual tape drives and as physical tape drives within IBM automated tape libraries attached to the TS7520. This allows you to use the encryption capabilities of LTO4.
DS3000 series now supports SATA disk, and can be attached to AIX and Linux on System p servers. This applies to the DS3200, DS3300 and DS3400 models. See the [DS3000 Announcement Letter] for more details.
Yesterday (September 7, 2006) the Eclipse Foundation announced that it has approved the creation of the Aperi Storage Management Framework Project.
There's been a lot of confusion out there about Aperi, so I thought I would post some facts and opinions about this exciting new project. A few years ago, I was the lead architect for IBM TotalStorage Productivity Center, IBM's infrastructure management product that helped launch the creation of Aperi.
Named from the Latin word for "open", Aperi is an open source project that aims to simplify the management of storage environments, using the Storage Management Initiative Specification (SMI-S) open standard to promote interoperability and eliminate complexity in today's storage environments.
Aperi should provide immediate value upon installation with basic storage management capabilities, rather than simply a collection of components that require costly integration. We've discussed requirements for functions such as:
Resource discovery, monitoring, and reporting
Fabric Topology mapping
Disk / Tape management
Device configuration & LUN assignment
SAN fabric management
Basic asset management
The big confusion most people have is Aperi's relation to SMI-S and the Storage Networking Industry Association (SNIA) open standards group. The best way to explain this is to go back to your High School SAT college-entrance exams. Remember questions like this?

crumb : bread :: splinter : ?
(The answer: a crumb is to bread like a splinter is to wood.)
Aperi is an implementation of the SMI-S standard, in the same way that MySQL and PostgreSQL are open source relational database implementations of the Structured Query Language (SQL) standard. These compete with proprietary database implementations such as IBM DB2 Universal Database, Oracle Database, Microsoft SQL Server, or Sybase.
Aperi : SMI-S :: PostgreSQL : SQL
It is often the case that the folks writing the code are different from the folks defining the standards. This is the case between the members of Aperi writing code and the members of the SNIA writing standards. IBM happens to have employees writing Aperi code, and other employees helping define SMI-S standards. What can I say, IBM is a big company and a leader in many areas.
A good analogy is how the Apache community has developed an awesome web server, and the Mozilla community has developed an awesome web browser (Firefox), both of which are implementations of the HTTP/HTML standards adopted by the World Wide Web Consortium. Apache and Firefox compete with proprietary implementations, such as the Microsoft Internet Information Services (IIS) web server and the Internet Explorer web browser.
Aperi : SNIA :: Apache : World Wide Web Consortium (W3C)
With this arrangement, Aperi and the SNIA will have very complementary roles in defining and driving standards across the entire storage market. To that end, Aperi will make extensive use of the SNIA’s Technology Center and SNIA’s “plugfests” to test the interoperability of the Aperi framework with the variety of 3rd-party storage offerings available. By providing a tested implementation of SMI-S, Aperi will drive broader industry availability of SMI-S, as well as offer the many benefits of an industry-backed open source community.
Check out this vote of confidence:
"Eclipse's Aperi Project will further advance the adoption of SNIA's SMI-S, benefiting the entire storage industry and IT community. Furthermore, the SNIA and Aperi will define plans to collaborate on new storage standards, standards testing programs, and storage interoperability programs." --- Wayne M. Adams, chair, SNIA Board of directors
So, both proprietary and open source implementations have their place in the world. Proprietary products are needed for advanced, unique value-add, and open source projects are for basic support focused on interoperability and flexibility. These can be combined; for example, proprietary "plug-ins" built on an open source base. The more choices the client has, the better.
Storage vendors benefit too. Vendors are tired of being in the "Y.A.C." business, building "Yet Another Configurator" for each new device developed, with basic functions to carve LUNs, read performance stats, and so on. By shipping Aperi instead, storage vendors like IBM can invest their development dollars in real innovations, things that matter for the customer.
Today, Apple and EMI announced that EMI's entire music and video catalog will be available in May without any digital rights management (DRM) protection. Not only will the music be higher quality, it can be played on any player, presumably using MP3 format instead of Apple's proprietary AAC format. Being locked into any single-vendor solution is undesirable. Similar issues abound for Microsoft Office 2007 file formats.
On my iPod, I ripped all my CDs into MP3 format, not AAC. I love my iPod, but if I ever decided to choose a different MP3 player, I did not want to go through the time-consuming process of re-ripping them all.
A blog post by Seth Godin argues that this Apple-EMI announcement means that DRM is dead.
Back when music labels added value by producing and distributing music in physical form, it made sense for them to take a cut. Mass-producing CDs and distributing them to music stores across the country costs lots of money. For online music, however, music labels don't have these same overhead costs, but continue paying the artists only a few pennies per dollar. Some artists have filed lawsuits to get their fair share.
This process applies to any published work. For example, you can purchase Kevin Kelly's book in various formats, at different prices, from different distributors. For example:
In PDF for $2, directly from the author via PayPal
black-and-white hardcover, for $20, from Amazon
color softcover, for $30, from Lulu
Each nets the author $1.50 in royalties per copy. You can decide how much in production and distribution costs you want to pay.
An article in InformationWeek reports that 40,000 ASU Students Leap to Google Apps; University Pays Zero. The ASU president, Michael Crow, wants to make IT the primary driver in his ambitious "New American University" project. Last October, ASU became the first large institution to deploy Google Apps, a comprehensive suite of productivity applications that includes e-mail, search, calendars, instant messaging, and even word processing and spreadsheets. I've tried them out; they work. Nothing fancy, but certainly good enough for college homework assignments.
Already 40,000 students and faculty have switched their e-mail to Google, while keeping their asu.edu designation. (That is out of a student population of 65,000, which Mr. Crow is trying to raise to 90,000!)
E-mail is a thorn in the side of storage administrators. Because e-mail systems are "semi-structured" repositories, administrators cannot just delete or move files around; there is context between notes and their attachments that shouldn't be broken. E-mail systems are often the fastest growing consumers of storage for many organizations.
Switching from maintaining their own mail servers to Google is saving ASU $500,000 alone, not including the administrator labor savings. Again, some corporations might feel their e-mail is too "secret" to be outsourced like this, but for college students who spend all their creative talent posting things on MySpace and YouTube, and faculty who spend their careers TRYING to get published, they have nothing to hide from the rest of the world. It makes perfect sense.
Best of all, Google isn't charging ASU anything for this service. Google is able to cover the costs from advertising revenue instead. I can think of a lot of companies that might want to advertise to a demographic of "40,000 students who are mostly 18-25 years old and all live in or near Tempe, AZ".
This week I was in Palm Springs in meetings with clients, prospects, business partners and IBM sales reps.
Tuesday consisted of "outdoor meetings", but the high winds caused some people to arrive late, and others to land in the various sand traps and water hazards. A "welcome reception" event allowed everyone to socialize and get to know the IBM experts and executives. Two of my colleagues, Mike Stanek and Dave Wyatt, had also been with me in Australia last week, so the three of us were discussing recovery from jet lag.
Wednesday was organized as a main tent event, where everyone met in one large room to hear our strategy, latest set of offerings, and customer testimonials. This was done indoors, of course, which was a good thing as the winds were now gusting up to 50 miles per hour, knocking over windmills and making the local news.
Here's a quick sample from the testimonials:
An insurance company virtualized their IBM DS8000, DS4000, ESS 800 and EMC DMX3 high-end disk with the IBM System Storage SAN Volume Controller and got higher availability and performance. Data migration efforts that used to take six hours of admin time now took less than one hour, with no system downtime. They have a total of 350TB virtualized under SVC now, but plan to extend this for a variety of other projects.
A bank presented their success using "Global Mirror" (IBM's asynchronous two-site replication disk mirroring capability). Their previous "business continuity" plan was called 2-20-24: 2 sites, 20 miles apart, with a recovery time objective (RTO) of 24 hours. After the events of Hurricane Katrina, this was considered inadequate, and a new 2-200-6 plan was requested: sites 200 miles apart, with a recovery time objective of only 6 hours. They chose to deploy this one application at a time, to learn and grow by experience in each phase. They started with the Microsoft Exchange e-mail application running under VMware on BladeCenter servers, and were able to recover remotely within 1 hour. They are now looking to refine and automate the recovery process, perhaps with IBM TotalStorage Productivity Center for Replication and Geographically Dispersed Open Clusters (GDOC).
A healthcare provider presented their success with tiered storage, managing a 475TB mix of IBM DS8000, DS6000, DS4000 and HP EVA disk arrays. The key was having centralized storage management from IBM, which allowed them to shrink provisioning time: where requests used to take 3 weeks on average, 96% of their storage provisioning requests are now completed in less than 1 week. Moving data between storage tiers was non-disruptive, and the significant cost savings greatly justified the change in "mindset" that required some training on the new environment.
Thursday we offered a series of "workshops" on specific topics. These were interactive sessions to discuss installation, design and deployment of various solutions. The event ended early enough so that people could return home, or go to the practice range, which reminded me of this inspiring video on How to play golf as well as Tiger Woods.
The event got great reviews, and I look forward to the next one. Until then, enjoy the weekend!
Ideally, every airline would use the most seasoned professional airline pilots money could buy, but some airlines, in an effort to compete on ticket price, may elect instead to have less experienced pilots. Here's a great excerpt:
Airline history lesson 101: It used to be, up until the mid 1980’s, that a young pilot would be hired on at a major carrier, become a flight engineer (FE), and then spend a few years managing the systems of the older-generation airplanes. But he or she was learning all the while. These new “pilots” sat in the FE seat and did their job, all the while observing the “pilots” doing the flying, day in and day out.
The FE’s learned from the seasoned pilots about the real world of flying into the Chicago O’Hares and New York LaGuardias. They learned decision making, delegation, and the reality of “captain’s final authority” as confirmed in the law. When they got the chance to upgrade, they became a copilot. The copilot’s duty was to assist the captain in flying; but even during their time as the new copilot, they had the luxury of the FE looking over their shoulders — i.e., more learning. This three-man-crew concept, now a fond memory in the domestic markets but used predominately in international flying, was considered one more layer of protection. But it’s gone.
To make me the public speaker I am today, IBM put me through a variety of speaking classes. I taught high school and college classes to practice in front of groups. But most importantly, I traveled with seasoned colleagues and watched them in action from the front row. I learned how to handle tough questions, how to react to hecklers causing trouble, and how to deal with the unexpected before, during and after each presentation. In addition to speaking skills, I ended up having to learn travel skills, foreign language skills, and a variety of cultural social skills. All part of the job in my line of work.
Likewise, being a storage administrator is an important job, and for some data centers, not something to give lightly to a fresh college graduate. Unless they have had formal IT Infrastructure Library [ITIL] certification coursework, I doubt they would understand the processes and disciplines demanded by the typical data center. I have been to accounts where new hires are not allowed to touch production systems for the first two years. Instead, they watch the seasoned professionals do their jobs, and are given access only to "sand box" systems used for application testing or Quality Assurance (QA). Sadly, I have also been to other accounts where people with no storage experience whatsoever were tossed into the admin pool and let loose with superuser passwords, all in an effort to save money during times of exponential data growth, only to pay the price later with outages or lost data.
The parallels between the airline industry and the IT industry are eerie.
I have created blog categories, based on our System Storage offering matrix, which you can track individually:
Disk systems, including the IBM System Storage DS Family of products, SAN Volume Controller, N series, as well as features unique to these products, such as FlashCopy, MetroMirror, or SnapLock.
Tape systems, including the IBM System Storage TS Family of products, tape-related products in the Virtualization Engine portfolio, drives, libraries and even tape media.
Storage Networking offerings, from Brocade, McData, Cisco and others, such as switches, routers and directors.
Infrastructure management, including IBM TotalStorage Productivity Center software, IBM Tivoli Provisioning Manager, IBM Tivoli Intelligent Orchestrator, and IBM Tivoli Storage Process Manager.
Business Continuity, including IBM Tivoli Storage Manager, Tivoli CDP for Files, Productivity Center for Replication software component, Continuous Availability for Windows (CAW), Continuous Availability for AIX (CAA).
Lifecycle and Retention offerings, including our IBM System Storage DR550, DR550 Express, GPFS, Tivoli Storage Manager Space Management for UNIX, Tivoli Storage Manager HSM for Windows, and DFSMS.
Storage services, including consulting, assessments, design, deployment, management and outsourcing.
The results are finally in. IBMer Wolfgang Singer was awarded the "Top Speaker" award for his NAS and iSCSI tutorial at last year's Orlando 2006 conference. Here he is receiving the award from SNIA Executive Director Leo Leger.
Of course, NAS and iSCSI technologies have been around for a while, but they are still new for many customers, which is why tutorials like this are so important.
Wrapping up my week in China, I read an article by Li Xing in the local "China Daily" about energy efficiency in buildings. She argues that it is not enough for a building to be energy-efficient on its own; you also have to consider its impact on the buildings around it. Does it reflect the sun so harshly into neighboring windows that people are forced to put up blinds and use artificial light? Does it block the sun, so that rooms that previously could be used with natural sunlight must now be artificially lit?
A similar effect happens with power and cooling in the data center. Servers and storage systems generate heat, and that heat affects all the other equipment in the data center. IBM has the most power-efficient and heat-efficient servers and storage, but that is not enough. You have to consider the heat generated by all systems that might raise overall temperature.
Research has indicated that water can remove far more heat per unit volume than air. For example, to disperse 1,000 watts with a 10 degree temperature difference, only 24 gallons of water per hour are needed, while the same heat load would require nearly 11,475 cubic feet of air per hour. IBM's Rear Door Heat eXchanger helps keep growing datacenters at safe temperatures, without adding AC units. The unobtrusive solution brings more cooling capacity to areas where heat is the greatest -- around racks of servers with more powerful and multiple processors.
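Those figures check out on the back of an envelope. Here is a quick computation using textbook property values for water and air (the constants are standard physics approximations, not IBM data):

```python
# Volume of water vs. air needed to carry away 1,000 W for one hour
# with a 10 degree C temperature rise. Constants are textbook values.
P_WATTS, HOURS, DELTA_T = 1000, 1, 10

joules = P_WATTS * HOURS * 3600                # 3.6 MJ of heat to remove

# Water: c ~ 4186 J/(kg*K); 1 kg ~ 1 liter; 3.785 liters per US gallon
kg_water = joules / (4186 * DELTA_T)           # ~86 kg
gallons = kg_water / 3.785                     # ~23 gallons

# Air: c ~ 1005 J/(kg*K); density ~ 1.1 kg/m^3; 35.3 ft^3 per m^3
kg_air = joules / (1005 * DELTA_T)             # ~358 kg
cubic_feet = kg_air / 1.1 * 35.3               # ~11,500 cubic feet

print(round(gallons), round(cubic_feet))       # roughly 23 and 11500
```

With slightly different assumptions for air density and specific heat, you land right at the quoted 24 gallons and 11,475 cubic feet: water carries roughly 500 times more heat per unit volume.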
The CoolBlue portfolio of IBM innovations includes comprehensive hardware and systems-management tools for computing environments, enabling clients to better optimize the power consumption, management and cooling of infrastructure at the system, rack and datacenter levels. The CoolBlue portfolio includes IBM PowerConfigurator, PowerExecutive, and Rear Door Heat eXchanger.
The eXchanger works on standard 42U racks, and can help clients deal with the rapid growth of rack-mounted servers and storage on their raised floor. How cool is that!
I am in Toronto, Canada. It is cold and rainy here, worse than last week in Seoul, Korea. This looks like a slow news week, so slow that the only news here in Canada is the possibility of a new 5-dollar coin. I thought I would make this week's theme enterprise applications.
IBM doesn't make these applications anymore; we have decided to focus on our core strength, to be the best IT platform to run other people's applications. This means being the best IT systems, software and services company. However, many of the companies that make enterprise applications both cooperate and compete with parts of IBM, what we call "coopetition".
Let's take a look at some acronyms in this space:
"Enterprise Resource Planning" represents all the basic applications that business need to run theirbusiness, including: finance, accounting, human resources, and manufacturing. The focus here is to streamline operations and make the workforce more productive. Before IBM, I ran my ownsoftware development company, Pearson Kurath Systems, and we developed ERP applications for clients oneby one, customized to their industry requirements.
"Customer Relationship Management" or sometimes "Client Relationship Management" help companies identifyand retain their customer base. Focus here is to increase customer satisfaction and loyalty.
"Supply Chain Management" help track supply and just-in-time inventory demand, sharing the information withkey suppliers and distributors. The focus is to manage inventories down to nothing, and improve speed to get products out to market.
"Business to Business" refer to procurement, purchase orders, and collecting payments over the internet.One of my pet peeves are acronyms that use "2" to mean "to" and "4" to mean "for".
"Human Capital Management" deals with managing costs of Human Resources (HR) and coordinating servicesfrom outside organizations.
"Knowledge Management" refers to sharing and collaborating information. This is not just email and instant messaging, but also online calendaring, experience repositories, client case studies, and anecdotes.
This week I will cover applications that address these, and how they relate to storage.
For those in the US, last Friday, the day after Thanksgiving, marked the official start of the Holiday shopping season. This has been called [Black Friday], as some stores open as early as 4 a.m., when it is still dark outside, to offer special discount prices. Some shoppers camp out in sleeping bags and lawn chairs in front of stores overnight to be the first to get in.
Not surprisingly, some folks don't care for this approach to shopping, and prefer instead shopping online. Since 2005, the Monday after Thanksgiving (yesterday) has been called [Cyber Monday]. USA Today newspaper reports [Cyber Monday really clicks with customers]. Many of the major online shopping websites indicated a 37 percent increase in sales yesterday over last year's Cyber Monday.
On Deadline dispels the hype on both counts in [Cyber Monday: Don't Believe the Hype?], indicating that Black Friday is not the peak shopping day for bricks-and-mortar shops, and that Cyber Monday is not the busiest online shopping day of the year, either.
A flood of new video and other Web content could overwhelm the Internet by 2010 unless backbone providers invest up to US$137 billion in new capacity, more than double what service providers plan to invest, according to a study by Nemertes Research Group, an independent analysis firm. In North America alone, backbone investments of $42 billion to $55 billion will be needed in the next three to five years to keep up with demand, Nemertes said.
Internet users will create 161 exabytes of new data this year, and this exaflood is a positive development for Internet users and businesses, IIA says.
If the "161 Exabytes" figure sounds familiar, it is probably from the IDC Whitepaper [The Expanding Digital Universe] that estimated the 161 Exabytes created, captured or replicated in 2006 will increase six-fold to 988 Exabytes by the year 2010. This is not just video captured for YouTube by internet users, but also corporate data captured by employees, and all of the many replicated copies. The IDC whitepaper was based on an earlier University of California Berkeley's often-cited 2003[How Much Info?] study, which not only looked at magnetic storage (disk and tape), but also optical, film, print, and transmissions over the air like TV and Radio.
A key difference was that while UC Berkeley focused on newly created information, the IDC study focused on digitized versions of this information, and included the added impact of replication. It is not unusual for a large corporate database to be replicated many times over. This is done for business continuity, disaster recovery, decision support systems, data mining, application testing, and IT administrator training. Companies often also make two or three copies of backups or archives on tape or optical media, to store them in separate locations.
Likewise, it should be no surprise that internet companies maintain multiple copies of data to improve performance. How fast a search engine can deliver a list of matches can be a competitive advantage. Content providers may offer the same information translated into several languages. Many people replicate their personal and corporate email onto their local hard drives, to improve access performance, as well as to work offline.
The big question is whether we can assume that an increased amount of information created, captured and replicated will have a direct linear relation to the growth of what is transmitted over the internet. Three-fourths of U.S. internet users watched an average of 158 minutes of online video in May 2007; is this also expected to grow six-fold by 2010? That would be nearly sixteen hours a month at current video densities, or, more likely, the same 158 minutes of much higher quality video.
On the other hand, much of what is transmitted is never stored, or is stored for only very short periods of time. Some of these transmissions are live broadcasts: you are either there to watch and listen to them when they happen, or you are not. Online video games are a good example. The internet can be used to allow multiple players to participate in real time, but much of this is never stored long-term. An interesting feature of the Xbox 360 is that it allows you to replay "highlight" videos of the game just played, but I do not know if these can be stored away or transferred to longer term storage.
Of course, there will always be people who will save whatever they can get their hands on. Wired Magazine has an article [Downloading Is a Packrat's Dream], explaining that many [traditional packrats] are now also "digital packrats", and this might account for some of this growth. If you think you might be a digital packrat, Zen Habits offers a [3-step Cure].
In any case, the trends for both increased storage demand, and increased transmission bandwidth requirements, are definitely being felt. Hopefully, the infrastructure required will be there when needed.
This year I resolve to be more consistent in my blogging; my goal is to give you one to five entries per week, every week, based on the advice of Glenn Wolsey, Jennette Banks, and others. On some weeks I will have a running theme, so rather than writing super-long entries to cover everything I can think of on a topic, I will keep the entries short and readable. This week is a good time to review last year's "New Year's Resolutions" and to make new ones for 2007. I will discuss actions that companies can adopt for their data centers.
A common resolution is to lose weight, as in this Dilbert comic. Last year, I resolved to lose weight in 2006, and am delighted with myself that I lost eight pounds. When people ask for the secret of my success, I whisper in their ear "Eat less, exercise more." In general, people (and companies) know what to do, but just don't do it, which Pfeffer and Sutton document in their book The Knowing-Doing Gap. In my case, it involved lifestyle change: I exercised at a gym three times per week in Tucson, with a personal trainer, and revamped my diet.
Not everyone subscribes to the "eat less exercise more" philosophy. For example, Ric Watson argues in his blog that you can eat fewer calories, but eat more in actual volume, by choosing the right foods. This brings up the issues of "metrics" that most data centers are familiar with. Last year, I read the book "You: On a Diet" which explains that it is better to focus on "waist reduction" as measured in inches around your mid-section at the belly button, than "weight reduction" as measured in pounds. This year, I resolve to get down to 35 inches by the end of 2007.
The problem with measuring "weight" is that you are weighing bones, muscle and fat. A person can gain ten pounds of muscle, lose ten pounds of fat, and the scale would indicate no progress. The same problem occurs in data centers. How many TB of data do you have? Storage admins can easily tell you, but can they tell how much of this is bone (data needed for operating infrastructure), muscle (data used in daily operations that generates revenue) or fat (obsolete or orphaned data)?
We at IBM often state that "Information Lifecycle Management (ILM)" is more lifestyle change than "fad diet". Figuring out what data you should capture in the first place, where to place it, when to move it, and when to get rid of it, is more important than just buying different tiers of storage hardware. So, for those looking to make new data center resolutions, I suggest the following actions (a toy metric sketch follows the list):
Re-evaluate the metrics you now use, and determine if they are helpful in making decisions and taking action.
Come up with new ones that are more focused to solve the issues you face.
Consider storage infrastructure software, such as IBM TotalStorage Productivity Center, to help you gather the information about your SAN, disk and tape systems, calculate the metrics, and automate the appropriate actions.
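As a toy illustration of the "bone / muscle / fat" metric from the earlier paragraph, here is a sketch that buckets file capacity by last-access age. Age is only a crude proxy (real "bone", infrastructure data, cannot be detected by age alone, and some filesystems disable access-time tracking), and the mount point and thresholds are made up:

```python
import os
import time

DAY = 86400
buckets = {
    "muscle (active, <30 days)": 0,
    "bone (reference, 30-365 days)": 0,
    "fat (archive candidate, >365 days)": 0,
}

now = time.time()
for root, _dirs, files in os.walk("/data"):    # hypothetical mount point
    for name in files:
        st = os.stat(os.path.join(root, name))
        age = (now - st.st_atime) / DAY        # days since last access
        if age < 30:
            buckets["muscle (active, <30 days)"] += st.st_size
        elif age <= 365:
            buckets["bone (reference, 30-365 days)"] += st.st_size
        else:
            buckets["fat (archive candidate, >365 days)"] += st.st_size

for label, size in buckets.items():
    print(f"{label}: {size / 1e9:.1f} GB")
```

A real tool like Productivity Center gathers this kind of data across the whole environment, rather than one file server at a time.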
Continuing this week's theme of New Year's Resolutions for the data center, today we'll talk about one that people don't always think about on a personal level: honing your tools and skills.
A long time ago, I used to be a regular speaker at the SHARE user group conference. One of the most attended sessions was Sam Golob presenting the latest CBT Tape set of tools. Over time, this large collection of "mainframe shareware" was handed out on 3480 tape cartridges, then on CDs, and finally made downloadable off the web. Sam's main point, which I remember to this day, was that everyone who has a job should figure out what tools they use, keep those tools functioning properly, and learn to use them well.
Later, I took some cooking classes at a culinary school. Among other things, we learned:
A sharp knife is safer and easier to use than a dull one, resulting in fewer accidents
Knowing what you are doing is the difference between food that is "simply awful" and food that is "awfully simple" to prepare.
A well trained chef can prepare most meals with just a sharp knife and wooden spoon.
The same could be said about software tools. What tools do you use in your job? Do you feel you know how to take full advantage of their power and capabilities? If you develop software, do you know all the features of your debugging tools? If you develop advertising or marketing materials, do you know all the features of your photo or video editing software? If you manage storage in a data center, do you know all the tools for managing your storage area network (SAN), disk systems, and tape libraries, and the reporting tools to identify all of your files and databases across your entire IT environment? I would not be surprised if you could replace a whole mess of tools with just one, such as the IBM TotalStorage Productivity Center.
I have arrived safely in Las Vegas for the IBM System Storage and Storage Networking Symposium. This event is held once a year. The gold sponsors were Brocade, Cisco, Finisar, Servergraph, and VMware. Our silver sponsor was QLogic.
I presented IBM's System Storage strategy and an overview of our product line. For those who missed it, our strategy is focused on helping customers in four key areas:
Optimize IT - to simplify and automate your IT operations and optimize performance and functionality, through server/storage synergies, storage virtualization, and integrated storage infrastructure management.
Leverage Information - to enable a single view of trusted business information through data sharing, and to get the most value from information through Information Lifecycle Management (ILM).
Mitigate Risk - to comply with security and regulatory requirements, and keep your business running with a complete set of business continuity solutions. IBM offers a range of non-erasable, non-rewriteable storage, encryption on disk and tape, and support for IT Infrastructure Library (ITIL) service management disciplines.
Enable Business Flexibility - to provide scalable solutions and protect your IT investment through the use of open industry standards like Storage Networking Industry Association (SNIA) Storage Management Initiative Specification (SMI-S). IBM offers scalability in three dimensions: Scale-up, Scale-out, and Scale-within.
IBM has a broad storage portfolio, in seven offering categories:
Disk Systems, including our SAN Volume Controller, DS family, and N series.
Tape Systems, including tape drives, libraries and virtualization.
Storage Networking, a complete set of switches, directors and routers
Infrastructure Management, featuring the IBM TotalStorage Productivity Center software
Business Continuity, advanced copy services and the software to manage them
Lifecycle and Retention, our non-erasable, non-rewriteable storage including DR550, N series with SnapLock, and WORM tape support, Grid Archive Manager and our Grid Medical Archive Solution (GMAS)
Storage Services, everything from consulting, design and deployment to outsourcing and hosting.
I could talk all day on this, but given that the room was packed, every seat taken and the rest of the audience standing along the walls, I had to keep it down to one hour.
SAN Volume Controller Overview
I presented an overview of the IBM System Storage SAN Volume Controller (SVC), IBM's flagship disk virtualization product. Rather than giving a long laundry list of features and benefits, I focused on the five that matter most:
Reduces the cost and complexity of managing storage, especially for mixed storage environments
Simplifies Business Continuity through non-disruptive data migration and advanced copy services
Improves storage utilization, getting more value from the storage hardware you already have
Enhances personnel productivity, empowering storage administrators to get their job done
Delivers high availability and performance
SAN Volume Controller - Customer Success Stories
A good part of this conference is presented by non-IBMers, including Business Partners and clients sharing their experiences. In this session, we had two speakers share their experiences with SVC.
David Snyder keeps over 80 web sites online and available. His digital media technologies team uses SVC to make their storage administration easier, and ensure high availability for web site content creation and publishing.
Mark Prybylski manages storage at his company, a bank. His storage management team uses SVC Global Mirror, which provides asynchronous disk mirroring between different types of disk, as part of their Business Continuity/Disaster Recovery plan.
The last session I attended was "Storage ... to Optimize your ECM Deployments" by Jerry Bower, now working for IBM as part of our recent acquisition of FileNet. ECM stands for Enterprise Content Management, and IBM is the market leader in this space. Jerry gave a great overview of the IBM Content Manager software suite, our newly acquired FileNet portfolio, and the storage supported.
After the sessions was a reception at the Solution Center with dozens of exhibitor booths. For example, Optica Technologies had their PRIZM products, which are able to connect FICON servers to ESCON storage devices.
I did not register soon enough to get into the MGM Grand itself, so I am staying at a Hilton at the other end of the Las Vegas strip, but am able to hop on the "Monorail" to get to the MGM, just in time for the breakfast and first welcome session.
This conference has a familiar setup: six keynote sessions, 62 break-out sessions, and four town hall meetings. Thanks to electronic survey devices on the seats, speakers were able to gather real-time demographics. A large portion of attendees, including myself, are attending this conference for the first time. Here's my recap of the first three keynote sessions:
The Future of Infrastructure and Operations: The Engine of Cloud Computing
How much do companies spend just to keep current? As much as 70 percent! The speaker noted that the best companies can get this down to 10 to 30 percent, leaving the rest of the IT budget to facilitate transformation. He predicts that companies are transforming their data centers from sprawled servers to virtualization, towards a fully automated, service-oriented, real-time infrastructure.
Whereas the original motivation for IT virtualization was to reduce costs, companies now recognize that it greatly improves agility, the ability to rapidly provision resources for new workloads, and that this will then lead to opportunities for alternative sourcing, such as cloud computing.
The operating system is becoming commoditized, shifting attention instead to a new concept: the "Meta OS". VMware's Virtual Data Center and Microsoft's Azure Fabric Controller are just two examples. Currently, analysts estimate only about 12 percent of x86 workloads are running virtualized, but this could be over 50 percent by 2012. In this same time frame, storage capacity is expected to increase 6.5-fold, and WAN bandwidth to grow 35 percent per year.
Virtualization is not just for business applications. There are opportunities to eliminate the most costly part of any business: the Personal Computer, poster child of the skyrocketing costs of the client/server movement. Remote hosting of applications, streaming of applications, software as a service (SaaS) and virtual machines for the desktop can greatly reduce the costs of customized PC images and help desk support.
Cloud computing not only reduces costs per use, but provides a lower barrier to entry and some much needed elasticity. Draw a line anywhere along the application-to-hardware stack, and you can define a cloud computing platform or service. About 65 percent of the attendees surveyed indicated that they were already doing something with Cloud Computing, or were planning to in the next four years.
To help get there, the speaker felt that Value-added Resellers (VAR) and System Integrators (SI) would evolve into "service brokers", providing Small and Medium sized Businesses (SMB) "one throat to choke" in mixed multisourced operations. The term "multisource" caught me a bit off-guard, referring to having some workloads run internally (insourced) while other workloads run out on the Cloud (outsourced). Larger enterprises might have a "Dynamic Sourcing Team", a set of key employees serving as decision makers, employing both business and IT skills to determine the best sourcing for each application workload.
What are the biggest obstacles to getting there? The speaker felt it was the IT staff. People and culture are the most difficult to change. The second obstacle is a lack of appropriate metrics. Here were the survey results of the attendees:
41 percent had metrics for infrastructure economic attributes
49 percent had metrics for qualities of service (QoS)
12 percent had metrics to measure agility, speed of resource provisioning
The Data Center Scenario: Planning for the Future
This second keynote had two analyst "co-presenters". The focus was on the importance of having a documented Data Center strategy and architecture. Unfortunately, most Data Centers "happen on their own", with a major overhaul every 5 to 10 years. The speakers presented some "best practices" for driving this effort.
The first issue was to identify tiers of criticality, similar to those defined by the [Uptime Institute]. In their example, the most critical workloads would have recovery point objectives (RPO) of zero, and recovery time objectives (RTO) of less than 15 minutes. This is achievable using synchronous mirroring with full automation to handle the failover.
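To make the tiering idea concrete, here is a minimal sketch in Python of what such a criticality catalog might look like. The tier names and targets are my own illustrations, loosely following the example above, not anything the speakers presented:

    # Illustrative criticality tiers -- names and targets are invented examples.
    tiers = {
        "tier-1": {"rpo_min": 0,    "rto_min": 15},    # sync mirror, automated failover
        "tier-2": {"rpo_min": 60,   "rto_min": 240},   # asynchronous mirror
        "tier-3": {"rpo_min": 1440, "rto_min": 2880},  # nightly backup to tape
    }

    def cheapest_tier(required_rpo_min, required_rto_min):
        """Pick the least expensive tier that still meets the workload's targets."""
        for name in ("tier-3", "tier-2", "tier-1"):   # cheapest first
            t = tiers[name]
            if t["rpo_min"] <= required_rpo_min and t["rto_min"] <= required_rto_min:
                return name
        return "tier-1"

    print(cheapest_tier(0, 15))      # tier-1 -- the most critical workloads
    print(cheapest_tier(120, 480))   # tier-2 is good enough here

The point of cataloging tiers this way is to avoid paying tier-1 prices for workloads that only need tier-3 protection.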
The second issue was to recognize that many applications were designed for local area networks (LAN), but many companies have distributed processing over a wide area network (WAN). Latency over these longer distances can kill the performance of these distributed applications.
The third issue was that different countries offer different levels of security, privacy and law enforcement. Canada and Ireland, for example, had the lowest risk, countries like India had medium risk, and countries like China and Russia had the highest risk, based on these factors.
The speakers suggested the following best practices:
Get a better understanding of the costs involved in providing IT services
Centralize applications that are not affected by latency, but regionalize those that are affected, moving them to remote locations to minimize distance delays.
Work towards a "lights out" data center facility, with operations personnel physically separated from data center facilities.
For the unfortunate few who are trying to stretch more life out of their existing aging data centers, the speakers offered this advice:
Build only what you need
Decommission orphaned servers and storage, which can be 1 to 12 percent of your operations
Target for replacement any hardware over five years old, not just to reduce maintenance costs, but also to get more energy-efficient equipment.
Consider moving test workloads, and as much as half of your web servers, off UPS and onto the native electricity grid. In the event of an outage, this reduces the load the UPS must carry.
Implement power-capping and load-shedding, especially during peak times.
Enacting these changes can significantly improve the bottom line. Archaic data centers, typically those over 10 years old with power usage effectiveness (PUE) over 3.0, can cost over twice as much to run as a more efficient data center. To learn more about PUE as a metric, see the Green Grid whitepaper [Data Center power efficiency metrics: PUE and DCiE].
While virtualization can help with these issues, it also introduces new problems, such as VM sprawl and dealing with the antiquated licensing schemes of software companies.
The Four Traits of the World's Best-Performing Business Leaders
Best-selling author Jason Jennings presented his findings in researching his various books:
It's Not the Big That Eat the Small... It's the Fast That Eat the Slow : How to Use Speed as a Competitive Tool in Business
Less Is More : How Great Companies Use Productivity As a Competitive Tool in Business
Think Big, Act Small
Hit the Ground Running : A Manual for New Leaders
Jason identified the best companies and interviewed their leaders, at such companies as Koch Industries, Nucor Steel, and IKEA furniture. The leaders he interviewed felt a calling to serve as stewards of their companies, not just to write mission and vision statements, and were willing to let go of projects or people that aren't working out.
Jason indicated that a 2007 Gallup poll on the American workplace found that 70 percent of employees do not feel engaged in their jobs. The focus of these leaders is to hire people with the right attitudes, rather than the right aptitudes, and give those people the knowledge and the authority to make business decisions. If done well, employees will think and act as owners, and hold themselves accountable for their economic results. Jason found cases where 25-year-olds were given responsibility to make billion-dollar decisions!
I found his talk inspiring! The audience felt motivated to do their jobs better, and be more engagedin the success of their companies.
These keynote sessions set the mood for the rest of the week. I can tell already that the speakers will toss out a large salad of buzzwords and IT industry acronyms. I saw several people in the audience confused by some of the terminology, and hopefully they will come over to IBM booth 20 at the Solutions Expo for straight talk and explanation.
This week I am at the Data Center Conference 2009 in Las Vegas. There are some 1700 people registered this year for this conference, representing a variety of industries like Public sector, Services, Finance, Healthcare and Manufacturing. A survey of the attendees found:
55 percent are at this conference for the first time.
18 percent once before, like me
15 percent two or three times before
12 percent four or more times before
Plans for 2010 IT budgets were split evenly, one third planning to spend more, one third planning to spend about the same, and the final third looking to cut their IT budgets even further than in 2009. The biggest challenges were Power/Cooling/Floorspace issues, aligning IT with Business goals, and modernizing applications. The top three areas of IT spend will be for Data Center facilities, modernizing infrastructure, and storage.
There are six keynote sessions scheduled, and 66 breakout sessions for the week. A "Hot Topic" was added on "Why the marketplace prefers one-stop shopping" which plays to the strengths of IT supermarkets like IBM, encourages HP to acquire EDS and 3Com, and forces specialty shops like Cisco and EMC to form alliances.
Day 2 began with a series of keynote sessions. Normally when I see "IO" or "I/O", I immediately think of input/output, but here "I&O" refers to Infrastructure and Operations.
Business Sensitivity Analysis leads to better I&O Solutions
The analyst gave examples from Alan Greenspan's biography to emphasize his point that this financial meltdown has caused a decline in trust. Nobody trusts anyone else. This is true between people, companies, and entire countries. While worldwide GDP declined 2 percent in 2009, it is expected to grow 2 percent in 2010, with some emerging markets expected to grow faster, such as India (7 percent) and China (10 percent). Industries like Healthcare, Utilities and the Public sector are expected to lead IT spend by 2011.
While IT spend is expected to grow only 1 to 5 percent in 2010, there is a significant shift from Capital Expenditures (CapEx) to Operational Expenses (OpEx). OpEx represented only 64 percent of the IT budget in 2004, but today represents 76 percent and growing. Many companies are keeping their aging IT hardware in service longer, beyond traditional depreciation schedules. The analyst estimated over 1 million servers were kept longer than planned in 2009, and another 2 million will be kept longer in 2010.
An example of hardware kept too long was the November 17 delay of some 2,000 flights in the United States, caused by a failed router card in Utah that was part of the air traffic control system. Modernizing this system is estimated to cost $40 billion US dollars.
Top 10 priorities for the CIO were Virtualization, Cloud Computing, Business Intelligence (BI), Networking, Web 2.0, ERP applications, Security, Data Management, Mobile, and Collaboration. There is a growth in context-aware computing, connecting operational technologies with sensors and monitors to feed back into IT, with an opportunity for pattern-based strategy. Borrowing a concept from the military, "OpTempo" allows a CIO to speed up or slow down various projects as needed. By seeking out patterns, developing models to understand those patterns, and then adapting the business to fit those patterns, a strategy can be developed to address new opportunities.
Infrastructure and Operations: Charting the course for the coming decade
This analyst felt that strategies should not just be focused on looking forward, but should also look left and right, at what IBM calls "adjacent spaces". He covered a variety of hot topics:
65 percent of the energy used to run x86 servers accomplishes nothing, with the average x86 server running at only 7 to 12 percent CPU utilization.
Virtualization of servers, networks and storage is transforming IT into one big logical system image, which plays well with Green IT initiatives. He joked that this is what IBM offered 20 years ago with mainframe "Single System Image" sysplexes, and that we have come full circle.
One area of virtualization is desktop images (VDI). This goes back to the benefits of green-screen 3270 terminals of the mainframe era, eliminating the headaches of managing thousands of PCs, and instead having thin clients rely heavily on centralized services.
The deluge in data continues, as more convenient access drives demand for more data. The analyst estimates storage capacity will increase 650 percent over the next five years, with over 80 percent of this being unstructured data. Automated storage tiering, a la Hierarchical Storage Manager (HSM) from the mainframe era, is once again popular, along with new technologies like thin provisioning and data deduplication.
IT is also being asked to do complex resource tracking, such as power consumption. In the past IT and Facilities were separate budgets, but that is beginning to change.
The fastest growing social network was Twitter, with 1382 percent growth in 2009; 69 percent of the new users who joined this year were 39 to 51 years old. By comparison, Facebook only grew by 249 percent. Social media is a big factor both inside and outside a company, and management should be aware of what Tweets, Blogs, and others in the collective are saying about you and your company.
The average 18 to 25 year old sends out 4000 text messages per month. In 24 hours, more text messages are sent out than there are people on the planet (6.7 billion). Unified Communications is also getting attention. This is the idea that all forms of communication, from email to texts to voice over IP (VoIP), can be managed centrally.
Smart phones and other mobile devices are changing the way people view laptops. Many business tasks can be handled by these smaller devices.
It costs more in energy to run an x86 server for three years than it costs to buy it. The idea of blade servers and componentization can help address that.
Mashups and Portals are an unrecognized opportunity. An example of a Mashup is mapping a list of real estate listings to Google Maps so that you can see all the listings arranged geographically.
Lastly, Cloud Computing will change the way people deliver IT services. Amusingly, the conference was playing "Both Sides Now" by Joni Mitchell, with its [lyrics about clouds].
Unlike other conferences that clump all the keynotes at the beginning, this one spreads the "Keynote" sessions out across several days, so I will cover the rest over separate posts.
Continuing this week's coverage of the 27th annual [Data Center Conference] I attended some break-out sessions on the "storage" track.
Effectively Deploying Disruptive Storage Architectures and Technologies
Two analysts co-presented this session. In this case, the speakers are using the term "disruptive" in the [positive sense] of the word, as originally used by Clayton Christensen in his book [The Innovator's Dilemma], and not in the negative sense of IT system outages. By a show of hands, they asked if anyone had more storage than they needed. No hands went up.
The session focused on the benefits versus risks of new storage architectures, and which vendors they felt would succeed in this new marketplace around the years 2012-2013.
By electronic survey, here were the number of storage vendors deployed by members of the audience:
14 percent - one vendor
33 percent - two vendors, often called a "dual vendor" strategy
24 percent - three vendors
29 percent - four or more storage vendors
For those who have deployed a storage area network (SAN), 84 percent also have NAS, 61 percent also have some form of archive storage such as the IBM System Storage DR550, and 18 percent also have a virtual tape library (VTL).
The speaker credited IBM's leadership in the now popular "storage server" movement to the IBM Versatile Storage Server [VSS] from the 1990s, the predecessor to IBM's popular Enterprise Storage Server (ESS). A "storage server" is merely a disk or tape system built using off-the-shelf server technology, rather than customized [ASIC] chips, lowering the barriers of entry for a slew of small start-up firms entering the IT storage market, and leading to new innovation.
How can a system designed for no single point of failure (SPOF) actually fail? The speaker conveniently ignored the two most obvious answers (multiple failures, microcode error) and focused instead on mis-configuration. She felt part of the blame falls on IT staff not having adequate skills to deal with the complexities of today's storage devices, and the other part of the blame falls on storage vendors for making such complicated devices in the first place.
Scale-out architectures, such as IBM XIV and EMC Atmos, represent a departure from traditional "scale-up" monolithic equipment. Whereas scale-up machines are traditionally limited in scalability by their packaging, scale-out systems are limited only by the software architecture and back-end interconnect.
To go with cloud computing, the analyst categorized storage into four groups: Outsourced, Hosted, Cloud, and Sky Drive. The difference depended on where servers, storage and support personnel were located.
How long are you willing to wait for your preferred storage vendor to provide a new feature before switching to another vendor? A shocking 51 percent said at most 12 months! 34 percent would be willing to wait up to 24 months, and only 7 percent were unwilling to change vendors. The results indicate more confidence in being able to change vendors, rather than pressures from upper management to meet budget or functional requirements.
Beyond the seven major storage vendors, there are now dozens of smaller emerging or privately-held start-ups offering new storage devices. How willing were the members of the audience to do business with them? 21 percent already have devices installed from them, 16 percent plan to in the next 12-24 months, and 63 percent have no plans at all.
The key value propositions of the new storage architectures were ease-of-use and lower total cost of ownership. The speaker recommended developing a strategy or "road map" for deploying new storage architectures, with focus on quantifying the benefits and savings. Ask the new vendor for references, local support, and an acceptance test or "proof-of-concept" to try out the new system. Also, consider the impact this new storage architecture may have on existing Disaster Recovery and other IT processes.
Tame the Information Explosion with IBM Information Infrastructure
Susan Blocher, IBM VP of marketing for System Storage, presented this vendor-sponsored session, covering the IBM Information Infrastructure part of IBM's New Enterprise Data Center vision. This was followed by Brad Heaton, Senior Systems Admin from ProQuest, who gave his "User Experience" of the IBM TS7650G ProtecTIER virtual tape library and its state-of-the-art inline data deduplication capability.
Best Practices for Managing Data Growth and Reducing Storage Costs
The analyst explained why everyone should be looking at deploying a formal "data archiving" scheme. Not just for "mandatory preservation" resulting from government or industry regulations, but also the benefits of "optional preservation" to help corporations and individual employees be more productive and effective.
Before, there were only two tiers of storage: expensive disk and inexpensive tape. Now, with the advent of slower, less-expensive SATA disk, including storage systems that emulate virtual tape libraries and others that offer Non-Erasable, Non-Rewriteable (NENR) protection, IT administrators have a middle ground for their archive data.
New software innovation supports better data management. The speaker recalled when "storage management" was equated to "backup" only, and now includes all aspects of management, including HSM migration, compliance archive, and long term data preservation. I had a smile on my face--IBM has used "storage management" to refer to these other aspects of storage since the 1980s!
The analyst felt the best tool to control growth is to "Delete" the data no longer needed, but that nobody uses the Storage Resource Management (SRM) tools needed to make this viable. Until then, people will choose instead to archive emails and user files to less expensive media. The speaker also recommended looking into highly-scalable NAS offerings--such as IBM's Scale-Out File Services (SoFS), Exanet, Permabit, IBRIX, Isilon, and others--when fast access to files is worth the premium price over tape media. The speaker also made the distinction between "stub-based" archiving--such as IBM TSM Space Manager, Sun's SAM-FS, and EMC DiskXtender--and "stub-less" archiving accomplished through file virtualization that employs a global namespace--such as IBM Virtual File Manager (VFM), EMC RAINfinity or F5's ARX.
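For readers unfamiliar with the "stub-based" approach, here is a toy Python sketch of the general idea; this is my own illustration, not how TSM Space Manager, SAM-FS or DiskXtender actually work. The file's data moves to the archive, and a tiny stub left in its place records where to fetch it from:

    # Toy illustration of stub-based archiving -- not any product's real format.
    import os
    import shutil

    def archive_with_stub(path, archive_dir):
        """Move a file's data to the archive, leaving a small stub behind."""
        target = os.path.join(archive_dir, os.path.basename(path))
        shutil.move(path, target)            # the data now lives in the archive
        with open(path, "w") as stub:        # the stub holds only a pointer
            stub.write("ARCHIVED-AT: " + target + "\n")

    def recall(path):
        """Follow the stub back to where the archived data lives."""
        with open(path) as stub:
            return stub.read().split("ARCHIVED-AT: ", 1)[1].strip()

A stub-less, global-namespace approach would instead leave the directory tree untouched and redirect requests at the file-virtualization layer, so no stub files appear in the file system at all.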
She made the distinction between archives and backups. If you are keeping backups longer than four weeks, they are not really backups, are they? These are really archives, but not as effective. Recent legal precedent no longer considers long-term backup tapes as valid archive tapes.
To deploy a new archive strategy, create a formal position of "e-archivist", choose the applications that will be archived, and focus on requirements first rather than going out and buying compliance storage devices. Try to get users to pool their project data into one location, to make archiving easier. Try to have the storage admins offer a "menu" of options to Line-of-Business/Legal/Compliance teams that may not be familiar with subtle differences in storage technologies.
While I am familiar with many of these best practices already, I found it useful to see which competitive products line up with those we have already within IBM, and which new storage architectures others find most promising.
Continuing my coverage of the 27th annual [Data Center Conference], the weather here in Las Vegas has been partly cloudy, which leads me to discuss some of the "Cloud Computing" sessions that I attended on Wednesday.
The x86 Server Virtualization Storm 2008-2012
Along with IBM, Microsoft is recognized as one of the "Big 5" of Cloud Computing. With their recent announcements of Hyper-V and Azure, the speaker presented the pros and cons of these new technologies versus established offerings from VMware. For example, Microsoft's Hyper-V is about three times cheaper than VMware and offers better management tools. That could be enough to justify some pilot projects. By contrast, VMware is more lightweight, only 32MB, versus Microsoft Hyper-V that takes up to 1.5GB. VMware has a 2-3 year lead on Microsoft, and offers some features that Microsoft does not yet offer.
Electronic surveys of the audience offered some insight. Today, 69 percent were using VMware only, and 8 percent had VMware plus others, including Xen-based offerings from Citrix, Virtual Iron and others. However, by 2010, the audience estimated that 39 percent would be VMware plus Microsoft and another 23 percent VMware plus Xen, showing a shift away from VMware's current dominance. Today, there are 11 VMware implementations for every Microsoft Hyper-V implementation, and this ratio is expected to drop to 3-to-1 by 2010.
Of the Xen-based offerings, Citrix was the most popular supplier. Others included Novell/PlateSpin, Red Hat, Oracle, Sun and Virtual Iron. Red Hat is also experimenting with kernel-based KVM. However, the analyst estimated that Xen-based virtualization schemes would never get past 8 percent marketshare. The analyst felt that VMware and Microsoft would be the two dominant players with the bulk of the marketshare.
For cloud computing deployments, the speaker suggested separating "static" VMs from "dynamic" ones. Centralize your external storage first, and implement data deduplication for the OS load images. Which x86 workloads are best for server virtualization? The speaker offered this guidance:
The "good" are CPU-bound workloads, small/peaky in nature.
The "bad" are IO-intensive workloads, and those that exploit the features of native hardware.
The "ugly" refers to workloads based on software with restrictive licenses and those not fully supported on VMs. If you have problems, the software vendor may not help resolve them.
Moving to the Cloud: Transforming the Traditional Data Center
IBM VP Willie Chiu presented the various levels of cloud computing.
Software-as-a-Service (SaaS) provides the software application, operating system and hardware infrastructure, such as SalesForce.com or Google Apps. Either the software meets your needs or it doesn't, but it has the advantage that the SaaS provider takes care of all the maintenance.
Platform-as-a-Service (PaaS) provides operating system, perhaps some middleware like database or web application server, and the hardware infrastructure to run it on. The PaaS provider maintains the operating system patches, but you as the client must maintain your own applications. IBM has cloud computing centers deployed in nine different countries across the globe offering PaaS today.
Infrastructure-as-a-Service (IaaS) provides the hardware infrastructure only. The client must maintain and patch the operating system, middleware and software applications. This can be very useful if you have unique requirements.
In one case study, Willie indicated that moving a workload from a traditional data center to the cloud lowered the costs from $3.9 million to $0.6 million, an 84 percent savings!
We've Got a New World in Our View
Robert Rosier, CEO of iTricity, presented their "IaaS" offering. "iTricity" was coined from the concept of "IT as electricity". iTricity is the largest Cloud Computing company in continental Europe, hosting 2500 servers with 500TB of disk storage across three locations in the Netherlands and Germany.
Those attendees I talked to who were at this conference before commented that this year's focus on virtualization and cloud computing is noticeably greater than in previous years. For more on this, read this 12-page whitepaper: [IBM Perspective on Cloud Computing].
It's Thursday at the [Data Center Conference] here in Las Vegas. Trying to keep up with all the sessions and activities has been quite challenging. As is often the case, there are more sessions I want to attend than I am physically able to, so I have to pick and choose.
Making the Green Data Center a Reality
The sixth and final keynote was an expert panel session, with Mark Bramfitt from Pacific Gas and Electric [PG&E], and Mark Thiele from VMware.
Mark explained PG&E's incentive program to help data centers be more energy efficient. They have spent $7 million US dollars so far on this, and he has requested another $50 million US dollars over the next three years. One idea was to put "shells" around each pod of 28 or so cabinets to funnel the hot air up to the ceiling, rather than having the hot air warm up the rest of the cold air supply.
The fundamental disconnect for a "green" data center is that the Facilities team pays for the electricity, but it is the IT department that makes the decisions that impact its use. The PG&E rebates reward IT departments for making better decisions. The best metric available is "Power Usage Effectiveness" or [PUE], calculated by dividing the total energy consumed by the data center by the energy consumed by the IT equipment itself. A typical PUE runs around 3.0, which means for every Watt used for servers, storage or network switches, another 2 Watts are used for power, cooling, and facilities. Companies are trying to reduce their PUE down to 1.6 or so. The lower the better, and 1.0 is the ideal. The problem is that changing the data center infrastructure is as difficult as replacing the phone system or your primary ERP application.
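The arithmetic is simple enough to show in a few lines of Python; the kW figures below are made up for illustration:

    # PUE = total facility energy / IT equipment energy. Figures are invented.
    def pue(total_facility_kw, it_equipment_kw):
        return total_facility_kw / it_equipment_kw

    it_load = 500.0      # kW drawn by servers, storage and network switches
    overhead = 1000.0    # kW for power distribution, cooling and facilities
    print(pue(it_load + overhead, it_load))   # 3.0 -- the typical case above
    print(pue(it_load + 300.0, it_load))      # 1.6 -- a common improvement target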
While California has [Title 24], stating energy efficiency standards for both residential and commercial buildings, it does not apply to data centers. PG&E is working to add data center standards into this legislation.
The two speakers also covered Data Center [bogeymen], unsubstantiated myths that prevent IT departments from doing the right thing. Here are a few examples:
Power cycles - some people believe that x86 servers can typically only handle up to 3000 shutdowns, and so equipment is often left running 24 hours a day to minimize these. Most equipment is kept less than 5 years (1826 days), so turning off non-essential equipment at night and powering it back on the next morning stays well below this 3000 limit and can greatly reduce kWh (the arithmetic is worked out in the sketch after this list).
Dust - many are so concerned about dust that they run extra air-filters, which impacts the efficiency of the cooling system's air flow. New IT equipment tolerates dust much better than older equipment.
Humidity - Mark had a great story on this one. He said their "de-humidifier" broke, they never got around to fixing it, and after going years without it they realized they didn't need to de-humidify.
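Here is the power-cycle arithmetic from the first myth, worked out in Python:

    # Daily power cycling over a 5-year service life stays far below the
    # believed 3000-shutdown tolerance cited in the myth.
    service_days = 5 * 365 + 1      # 1826 days, counting one leap day
    cycles_per_day = 1              # off at night, back on in the morning
    total_cycles = service_days * cycles_per_day
    print(total_cycles, total_cycles < 3000)   # 1826 True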
The session wrapped up with some "low hanging fruit", items that can provide immediate benefit with little effort:
Cold-aisle containment--Why are so few data centers doing this?
Colocation providers need to meter individual clients' energy usage -- IBM offers the instrumentation and software to make this possible
Air flow management--Simply organizing cables under the floor tiles could help this.
Virtualization and Consolidation.
High-efficiency power supplies
Managing IT from a Business Service Perspective
The "other" future of the data center is to manage it as a set of integrated IT services, rather than a collection of servers, storage and switches. IT Infrastructure Library (ITIL) is widely accepted as a set of best practices to accomplish this "service management" approach. The presenter from ASG Software Solutions presented their Configuration Management Data Base (CMDB) and application dependency dashboard. They have some customers with as many as 200,000 configuration items (CIs) in their CMDB.
The solution looked similar to the IBM Tivoli software stack presented earlier this year at the [Pulse conference]. Both ASG and IBM "eat their own dog food", or perhaps more accurately "drink their own champagne", using these software products to run their own internal IT operations.
For many, the future of a "green" data center managed as a set of integrated services is years away, but the technologies and products are available today, and there is no reason to postpone these projects any longer than necessary. For more about IBM's approach to the green data center, see [Energy Efficiency Solutions]. You can also take IBM's [IT Service Management self-assessment] to help determine which IBM tools you need for your situation.
I am back at "the Office" for a single day today. This happens often enough that I need a name for it. Air Force pilots who practice landings and take-offs call them "Touch and Go", but I think I need something better. If you can think of a better phrase, let me know.
This week, I was in Hartford, CT, Somers, NY and our Corporate Headquarters in Armonk, in a variety of meetings, some with editors of magazines, others with IBMers I have only spoken to over the phone and finally got a chance to meet face to face.
I got back to Tucson last night, had meetings this morning in Second Life, then presented "Information Lifecycle Management" in Spanish to a group of customers from Mexico, Chile, and Brazil. We have a great Tucson Executive Briefing Center, and plenty of foreign-language speakers to draw from among our local employees here at the lab site.
Sunday, I leave for Las Vegas for our upcoming IBM Storage and Storage Networking Symposium. We will cover the latest in our disk, tape, storage networking and related software. Do you have your tickets? If you plan to attend, and want to meet up with me, let me know.
For a while now, IBM has been trying to explain to clients that focusing on just storage hardware acquisition costs is not enough. You need to consider the "Total Cost of Ownership" or TCO of a purchase decision. For active data, a 3-5 year TCO assessment can give you a better comparison of costs between IBM and competitive choices. For long-term archive retention, a 7-10 year TCO assessment may be necessary.
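As a rough illustration of why the assessment horizon matters, here is a toy TCO model in Python; every figure and cost category below is invented for the example, not IBM or competitive pricing:

    # Toy TCO model -- all dollar figures are invented for illustration.
    def tco(acquisition, annual_maintenance, annual_energy, annual_admin, years):
        return acquisition + years * (annual_maintenance + annual_energy + annual_admin)

    # A system that is cheaper to buy can cost more to own over 5 years.
    system_a = tco(100_000, annual_maintenance=8_000,
                   annual_energy=12_000, annual_admin=20_000, years=5)
    system_b = tco(130_000, annual_maintenance=5_000,
                   annual_energy=6_000, annual_admin=10_000, years=5)
    print(system_a, system_b)   # 300000 235000 -- the pricier box wins on TCO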
Now, IBM has a cute [2-minute video] that brings an appropriate analogy to help IT and non-IT executives understand.
Wrapping up this week's theme on IBM's Dynamic Infrastructure® strategic initiative, we have a few more goodies in the goody bag.
First item: Dave Bricker shows off the XIV cloud-optimized storage at Pulse 2009
Second item: Rodney Dukes discusses the latest features of the DS8000 disk system at Pulse 2009
Third item: IBM launches the [Dynamic Infrastructure Journal]. You can read the February 2009 edition online, and if you find it useful and interesting, subscribe to learn from IBM's transformation experts how to reduce cost, manage risk and improve service.
Whether or not you attended the IBM Pulse 2009 conference, you might enjoy looking at the rest of the series of videos on [YouTube] and photographs on [Flickr].
Jim Stallings, IBM General Manager for Global Markets, will explain why a smarter planet needs a dynamic infrastructure. I used to work for Jim, when he was in charge of the IBM Linux initiative and I was on the Linux for S/390 mainframe team.
Erich Clementi, IBM Vice President, Strategy & General Manager Enterprise Initiatives, will explain how to best leverage opportunities with cloud computing.
Steve Forbes, Chairman and CEO of Forbes Inc. and Editor-in-Chief of Forbes Magazine, will present Global Outlooks and the Challenge of Change.
Rich Lechner, IBM Vice President, Energy & Environment, will explain the importance of Building an Energy-Efficient Dynamic Infrastructure. I also worked for Rich, back when he was the VP of Marketing for IBM System Storage and I was the "Technical Evangelist". See my post [The Art of Evangelism] to better understand why I don't carry that title anymore.
In addition to these presentations, you will be able to "walk" around to different booths, have on-line chats with subject matter experts, and download resources. Don't worry, this is not based on [Second Life], but rather uses On24's much simpler visual interface. Of course, you can follow on [Twitter] or join the fan club at [Facebook].
This is a worldwide event, with translated resource materials and on-line subject matter experts in six different languages (English, French, Italian, German, Mandarin and Japanese). Those in North, Central and South Americas can participate June 23, and those in Europe, Asia and the rest of the world on June 24. [Register Today] and mark your calendars!
Recently, IBM and the University of Texas Medical Branch (UTMB) [launched an effort] using IBM's World Community Grid "virtual supercomputer" to allow laboratory tests on drug candidates for drug-resistant influenza strains and new strains, such as H1N1 (aka "swine flu"), in less than a month.
Researchers at the University of Texas Medical Branch will use [World Community Grid] to identify the chemical compounds most likely to stop the spread of the influenza viruses and begin testing these under laboratory conditions. The computational work adds up to thousands of years of computer time which will be compressed into just months using World Community Grid. As many as 10 percent of the drug candidates identified by calculations on World Community Grid are likely to show antiviral activity in the laboratory and move to further testing.
According to the researchers, without access to World Community Grid's virtual super computing power, the search for drug candidates would take a prohibitive amount of time and laboratory testing.
This reminded me of an 18-minute video of Larry Brilliant at the 2006 Technology, Entertainment and Design [TED] conference. Back in 2006, Larry predicted a pandemic in the next three years, and here it is 2009 and we have the H1N1 virus.
His argument was to have "early detection" and "early response" to contain worldwide diseases like this.
A few months after Larry's "call to action" in 2006, IBM and over twenty major worldwide public health institutions, including the World Health Organization [WHO] and the Centers for Disease Control and Prevention [CDC], [announced the Global Pandemic Initiative], a collaborative effort to help stem the spread of infectious diseases.
One might think that, with our proximity to Mexico, the first cases would have been in the border states, such as Arizona, but instead there were cases as far away as New York and Florida. The NYT explains in an article [Predicting Flu With the Aid of (George) Washington] that two rival universities, Northwestern University and Indiana University, both predicted that there would be about 2500 cases in the United States, based on air traffic patterns and the tracking data from a Web site called ["Where's George"], which tracks the movement of US dollar bills stamped with the Web site URL.
The estimates were fairly close. According to the Centers for Disease Control and Prevention [H1N1 Flu virus tracking page], there are currently 3009 cases of H1N1 in 45 states, as of this writing.
This is just another example on how an information infrastructure, used properly to provide insight, make predictions, and analyze potential cures, can help the world be a smarter planet. Fortunately, IBM is leading the way.
Wrapping up my week's theme on IBM's acquisition of XIV: we have gotten hundreds of positive articles and reviews in the press, but it has caused quite a stir with the [Not-Invented-Here] folks at EMC. We've heard already from EMC bloggers [Chuck Hollis] and [Mark Twomey]. The latest is fellow EMC blogger BarryB's missive [Obligatory "IBM buys XIV" Post], which piles on the "Fear, Uncertainty and Doubt" [FUD], including this excerpt:
In a block storage device, only the host file system or database engine "knows" what's actually stored in there. So in the Nextra case that Tony has described, if even only 7,500-15,000 of the 750,000 total 1MB blobs stored on a single 750GB drive (that's "only" 1 to 2%) suddenly become inaccessible because the drive that held the backup copy also failed, the impact on a file system could be devastating. That 1MB might be in the middle of a 13MB photograph (rendering the entire photo unusable). Or it might contain dozens of little files, now vanished without a trace. Or worst yet, it could actually contain the file system metadata, which describes the names and locations of all the rest of the files in the file system. Each 1MB lost to a double drive failure could mean the loss of an enormous percentage of the files in a file system.
And in fact, with Nextra, the impact will be across not just one, but more likely several dozens or even hundreds of file systems.
Worse still, the Nextra can't do anything to help recover the lost files.
Nothing could be further from the truth. If any disk drive module failed, the system would know exactly which one it was, what blobs (binary large objects) were on it, and where the replicated copies of those blobs are located. In the event of a rare double-drive failure, the system would know exactly which unfortunate blobs were lost, and could identify them by host LUN and block address numbers, so that appropriate repair actions could be taken from remote mirrored copies or tape file backups.
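To illustrate the kind of bookkeeping being described, here is a toy Python sketch of replica placement; this is my own simplification, not XIV/Nextra code. Each blob has two copies on two different drives, so after any combination of failures the system can enumerate exactly which blobs, if any, lost both copies:

    # Toy replica bookkeeping -- my own illustration, not XIV/Nextra code.
    placement = {
        "blob-001": {"drive-A", "drive-C"},   # each blob lives on two drives
        "blob-002": {"drive-A", "drive-D"},
        "blob-003": {"drive-B", "drive-C"},
    }

    def lost_blobs(failed_drives):
        """Blobs whose copies all sit on failed drives are truly lost."""
        failed = set(failed_drives)
        return [blob for blob, drives in placement.items() if drives <= failed]

    print(lost_blobs(["drive-A"]))             # [] -- one failure loses nothing
    print(lost_blobs(["drive-A", "drive-C"]))  # ['blob-001'] -- repair just this one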
Second, nobody is suggesting we are going to put in a delicate FAT32-like, circa-1980 file system that breaks with the loss of a single block and requires tools like "fsck" to piece back together. Today's modern file systems--including Windows NTFS, Linux ext3, and AIX JFS2--are journaled and have sophisticated algorithms to handle the loss of individual structure inode blocks. IBM has its own General Parallel File System [GPFS] and corresponding Scale out File Services [SOFS], and thus brings a lot of expertise to the table. Advanced distributed clustered file systems, like the [Google File System] and Yahoo's [Hadoop project], take this one step further, recognizing that individual node and drive failures at Petabyte scale are inevitable.
In other words, the XIV Nextra architecture is designed to eliminate or reduce recovery actions after disk failures, not make them worse. Back in 2003, when IBM introduced the new and innovative SAN Volume Controller (SVC), EMC claimed this in-band architecture would slow down applications and "brain-damage" their EMC Symmetrix hardware. Reality has proved the opposite: SVC can improve application performance and help reduce wear-and-tear on the managed devices. Since then, EMC acquired Kashya to offer its own in-band architecture in a product called EMC RecoverPoint, which offers some of the features that SVC offers.
If you thought fear mongering like this was unique to the IT industry, consider that 105 years ago, [Edison electrocuted an elephant]. To understand this horrific event, you have to understand what was going on at the time. Thomas Edison, inventor of the light bulb, wanted to power the entire city of New York with Direct Current (DC). Nikola Tesla proposed a different, but more appropriate, architecture called Alternating Current (AC), which had lower losses over the distances required for a city as large and spread out as New York. But Thomas Edison was heavily invested in DC technology, and would lose out on royalties if AC was adopted. In an effort to show that AC was too dangerous to have in homes and businesses, Thomas Edison held a press conference in front of 1500 witnesses, electrocuting an elephant named Topsy with 6600 volts, and filmed the event so that it could be shown later to other audiences (Edison invented the movie camera as well).
Today's nationwide electric grid would not exist without Alternating Current. We enjoy both AC for what it is best used for, and DC for what it is best used for. Both are dangerous at high voltage levels if not handled properly. The same is the case for storage architectures. Traditional high-performance disk arrays, like the IBM System Storage DS8000, will continue to be used for large mainframe applications, online transaction processing and databases. New architectures, like IBM XIV Nextra, will be used for new Web 2.0 applications, where scalability, self-tuning, self-repair, and management simplicity are the key requirements.
(Update: Dear readers, this was meant as a metaphor only, relating the concerns expressed above, that the use of new innovative technology may result in the loss or corruption of "several dozens or even hundreds of file systems" and is thus too dangerous to use, to the claim that AC electricity was too dangerous to use in homes. To clarify, EMC did not re-enact Thomas Edison's event, no animals were hurt by EMC, and I was not trying to make political commentary about the current controversy over electrocution as a method of capital punishment. The opinions of individual bloggers do not necessarily reflect the official positions of EMC, and I am not implying that anyone at EMC enjoys torturing animals of any size, or taking positions on capital punishment in general. This is not an attack on any of the above-mentioned EMC bloggers, but rather an attempt to point out faulty logic. Children should not put foil gum wrappers in electrical sockets. BarryB and I have apologized to each other over these posts for any feelings hurt, and discussion should focus instead on the technologies and architectures.)
While EMC might try to tell people today that nobody needs unique storage architectures for Web 2.0 applications, digital media and archive data, because their existing products support SATA disk and can be used instead for these workloads, they are probably working hard behind the scenes on their own "me, too" version. And with a bit of irony, Edison's film of the elephant is available on YouTube, one of the many Web 2.0 websites we are talking about. (Out of a sense of decency, I decided not to link to it here, so don't ask.)
There is a difference between improving "energy efficiency" versus reducing "power consumption".
Let's consider the average 100 watt light bulb, of which 5 watts generate the desired feature (light), and 95 watts are generated as undesired waste (heat). In this case, it would be 5 percent efficient. If you delivered a new light bulb that generated 3 watts of light from only 30 watts of energy, then you would have an offering that was more energy efficient (10 percent instead of 5 percent) and used 70 percent less power (30 watts instead of 100 watts). This new "dim bulb" would not be as bright as the original, but it has other desirable energy qualities.
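The same arithmetic, in a few lines of Python, using the numbers from the example above:

    # Efficiency = useful output watts / total input watts.
    def efficiency(useful_watts, total_watts):
        return useful_watts / total_watts

    print(efficiency(5, 100))    # 0.05 -> the old bulb is 5 percent efficient
    print(efficiency(3, 30))     # 0.10 -> the new bulb is 10 percent efficient
    print(1 - 30 / 100)          # 0.70 -> and it consumes 70 percent less power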
Nearly all of the output of data center equipment results in heat. In The Raised Floor blog post [It's Too Darn Hot!], Will Runyon explains how IBM researcher Bruno Michel in Zurich has developed new ways to cool chips with water shot through thousands of nozzles, much like the capillaries in the human body. This is just one of many developments that are part of IBM's [Project Big Green].
But what if the desired feature is heat, and the undesired feature is light? In the case of Hasbro's toy [Easy-Bake Oven], a 100W incandescent light bulb is used to bake small cakes. This generates 95W of desired heat, wasting only 5 percent as light (unused inside the oven). That makes this little toy 95 percent energy efficient, but it consumes as much energy as any other 100W light bulb lamp or fixture in your house. With manufacturing switching from incandescent to compact fluorescent bulbs, this toy oven may not be around much longer.
While we all joke that it is just a matter of time before our employers make us ride stationary bicycles attached to generators to power our monstrous data centers, 23-year-old student Daniel Sheridan designed a see-saw for kids in Africa to play on that generates electricity for nearby schools. [Dan won the "most innovative product" award at the Enterprise Festival].
Another approach is to improve efficiency by converting previously undesirable outcomes to desirable. Brian Bergstein has a piece in Forbes titled["Heat From Data Center to Warm a Pool"].Here's an excerpt:
"In a few cases, the heat produced by the computers is used to warm nearby offices. In what appears to be a first, the town pool in Uitikon, Switzerland, outside Zurich, will be the beneficiary of the waste heat from a data center recently built by IBM Corp. (nyse: IBM) for GIB-Services AG.
As in all data centers, air conditioners will blast the computers with chilly air - to keep the machines from exceeding their optimum temperature of around 70 degrees - and pump hot air out.
Usually, the hot air is vented outdoors and wasted. In the Uitikon center, it will flow through heat exchangers to warm water that will be pumped into the nearby pool. The town covered the cost of some of the connecting equipment but will get to use the heat for free."
I see a business opportunity here. Next to every data center lamenting its power and cooling, build a state-of-the-art fitness center for the employees and nearby townspeople. Exercise on a stationary bicycle generating electricity, while your kids play on the see-saw generating electricity, and afterwards the whole family can take a dip in the heated swimming pool. And if the company subscribes to the notion of a Results-Oriented Work Environment [ROWE], it could encourage its employees to take "fitness" breaks throughout the day, rather than having everyone there in the early morning or late evening hours, leveling out the energy generated.
Yesterday's announcement that IBM had acquired XIV to offer storage for Web 2.0 applicationsprompted a lot of discussion in both the media and the blogosphere. Several indicated thatit was about time that one of the major vendors stepped forward to provide this, and it madesense that IBM, the leader in storage hardware marketshare, would be the first. Others were perhaps confused on what is unique with Web 2.0 applications. What has changed?
I'll use this graphic to help explain how we have transitioned through three eras of storage.
The first era: Server-centric
In the 1950s, IBM introduced both tape and disk systems into a very server-centric environment. Dumb terminals and dumb storage devices were managed entirely by the brains inside the server. These machines were designed for Online Transaction Processing (OLTP), everything from booking flights on airlines to handling financial transfers.
The second era: Network-centric
In the 1980s and 1990s, dumb terminals were replaced with smarter workstations and personal computers, and dumb storage was replaced with smarter storage controllers. Local Area Networks (LANs) and Storage Area Networks (SANs) allowed more cooperative processing between users, servers and storage. However, servers maintained their role as gatekeepers. Users had to go through a specific server or server cluster to reach the storage they had access to. These servers continued their role in OLTP, but also managed informational databases, file sharing and web serving.
The third era: Information-centric
Today, we are entering a third era. Servers are no longer the gatekeepers. Smart workstations and personal computers are now supplemented with even more intelligent handheld devices, BlackBerry and iPhone for example. Storage is more intelligent too, with some systems able to offer file sharing and web serving directly, without the need for an intervening server. The role of servers has changed, from gatekeepers to systems that focus on crunching the numbers and making information presentable and useful.
Here is where Web 2.0 applications, digital media and archives fit in. These are focused on unstructured data that does not require relational database management systems. So long as the user is authorized, subscribed and/or has made the appropriate payment, she can access the information. With the appropriate schemes in place, information can now be mashed up in a variety of ways, combined with other information that can render insights and help drive new innovations.
Of course, we will still have databases and online transaction processing to book our flights and transfer our funds, but this new era brings new requirements for information storage, and new architectures that help optimize this new approach.
While some might be familiar with mashups that combine public Web 2.0 sources of information, enterprise mashups go one step further, integrating with the "information infrastructure" of your data center. It's not enough to deliver the right information to the right person at the right time; it has to be in the right format, in a manner that can be readily understood and acted upon. Enterprise mashups can help.
Well, this week I am in Maryland, just outside of Washington DC. It's a bit cold here.
Robin Harris over at StorageMojo put out this Open Letter to Seagate, Hitachi GST, EMC, HP, NetApp, IBM and Sun about the results of two academic papers, one from Google, and another from Carnegie Mellon University (CMU). The papers imply that the disk drive module (DDM) manufacturers have perhaps misrepresented their reliability estimates, and ask the major vendors to respond. So far, NetApp and EMC have responded.
I will not bother to re-iterate or repeat what others have said already, but will make just a few points. Robin, you are free to consider this "my" official response if you would like to post it on your blog, or point to mine, whichever is easier for you. Given that IBM no longer manufactures the DDMs we use inside our disk systems, there may not be any reason for a more formal response.
Coke and Pepsi buy sugar, Nutrasweet and Splenda from the same sources
Somehow, this doesn't surprise anyone. Coke and Pepsi don't own their own sugar cane fields, and even their bottlers are separate companies. Their job is to assemble the components using super-secret recipes to make something that tastes good.
IBM, EMC and NetApp don't make the DDMs mentioned in either academic study. Different IBM storage systems use one or more of the following DDM suppliers:
Seagate (including Maxtor, which they acquired)
Hitachi Global Storage Technologies, HGST (former IBM division sold off to Hitachi)
In the past, corporations like IBM were very "vertically-integrated", making every component of every system delivered. IBM was the first to bring disk systems to market, and led the major enhancements that exist in nearly all disk drives manufactured today. Today, however, our value-add is to take standard components and use our super-secret recipe to make something that provides unique value to the marketplace. Not surprisingly, EMC, HP, Sun and NetApp also don't make their own DDMs. Hitachi is perhaps the last major disk systems vendor that also has a DDM manufacturing division.
So, my point is that disk systems are the next layer up. Everyone knows that individual components fail. Unlike CPUs or Memory, disks actually have moving parts, so you would expect them to fail more often compared to just "chips".
If you don't feel the MTBF or AFR estimates posted by these suppliers are valid, go after them, not the disk systems vendors that use their supplies. While IBM does qualify DDM suppliers for each purpose, we are basically purchasing them from the same major vendors as all of our competitors. I suspect you won't get much more than the responses you posted from Seagate and HGST.
American car owners replace their cars every 59 months
According to a frequently cited auto market research firm, the average time before the original owner transfers their vehicle -- purchased or leased -- is currently 59 months.Both studies mention that customers have a different "definition" of failure than manufacturers, and often replace the drives before they are completely kaput. The same is true for cars. Americans give various reasons why they trade in their less-than-five-year cars for newer models. Disk technologies advance at a faster pace, so it makes sense to change drives for other business reasons, for speed and capacity improvements, lower power consumption, and so on.
The CMU study indicated that 43 percent of drives were replaced before they were completely dead.So, if General Motors estimated their cars lasted 9 years, and Toyota estimated 11 years, people still replace them sooner, for other reasons.
At IBM, we remind people that "data outlives the media". True for disk, and true for tape. Neither is "permanent storage", but rather a temporary resting point until the data is transferred to the next media. For this reason, IBM is focused on solutions and disk systems that plan for this inevitable migration process. IBM System Storage SAN Volume Controller is able to move active data from one disk system to another; IBM Tivoli Storage Manager is able to move backup copies from one tape to another; and IBM System Storage DR550 is able to move archive copies from disk and tape to newer disk and tape.
If you had only one car, then having that one and only vehicle die could be quite disrupting. However, companies that have fleet cars, like Hertz Car Rentals, don't wait for their cars to completely stop running either, they replace them well before that happens. For a large company with a large fleet of cars, regularly scheduled replacement is just part of doing business.
This brings us to the subject of RAID. No question that RAID 5 provides better reliability than having just a bunch of disks (JBOD). Certainly, three copies of data across separate disks, a variation of RAID 1, will provide even more protection, but for a price.
Robin mentions the "Auto-correlation" effect. Disk failures bunch up, so one recent failure might mean another DDM, somewhere in the environment, will probably fail soon also. For it to make a difference, it would (a) have to be a DDM in the same RAID 5 rank, and (b) have to occur during the time the first drive is being rebuilt to a spare volume.
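To put rough numbers on that window, here is a back-of-the-envelope "C" sketch of my own (the failure rate, rank size, and rebuild time are illustrative assumptions, not figures from either study). It treats drive failures as independent, which is precisely what auto-correlation says you should not fully trust, so take its answer as a lower bound:

    #include <stdio.h>

    /* Probability that at least one of the surviving drives in a
       RAID 5 rank fails during the rebuild window. Assumes a constant,
       independent annualized failure rate (AFR); auto-correlation
       would make the real number worse. All figures are assumptions. */
    int main(void)
    {
        double afr = 0.03;          /* assumed 3 percent AFR      */
        int drives_left = 7;        /* 8-drive rank, one failed   */
        double rebuild_hours = 8.0; /* assumed rebuild window     */

        double per_drive = afr * rebuild_hours / (365.0 * 24.0);
        double p_none = 1.0;
        for (int i = 0; i < drives_left; i++)
            p_none *= (1.0 - per_drive);

        printf("P(second failure during rebuild) = %.6f\n", 1.0 - p_none);
        return 0;
    }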
The human body replaces skin cells every day
So there are individual DDMs, manufactured by the suppliers above; disk systems, manufactured by IBM and others, and then your entire IT infrastructure. Beyond the disk system, you probably have redundant fabrics, clustered servers and multiple data paths, because eventually hardware fails.
Most people realize that the human body replaces skin cells every day. Other cells are replaced frequently, within seven days, and others less frequently, taking a year or so to be replaced. I'm over 40 years old, but most of my cells are less than 9 years old. This is possible because information, data in the form of DNA, is passed from old cells to new cells, keeping the infrastructure (my body) alive.
Our clients should take a more holistic view. You will replace disks in less than 3-5 years. While tape cartridges can retain their data for 20 years, most people change their tape drives every 7-9 years, so tape data needs to be moved from old to new cartridges. Focus on your information, not individual DDMs.
What does this mean for DDM failures? When one happens, the disk system re-routes requests to a spare disk, rebuilding the data from RAID 5 parity, giving storage admins time to replace the failed unit. During the few hours this process takes, you are either taking a backup, or crossing your fingers. Note: for RAID 5, the time to rebuild is proportional to the number of disks in the rank, so smaller ranks can be rebuilt faster than larger ranks. To make matters worse, the slower RPM speeds and higher capacities of ATA disks mean that the rebuild process could take longer than for smaller-capacity, higher-speed FC/SCSI disks.
According to the Google study, a large portion of the DDM replacements had no SMART errors to warn that it was going to happen. To protect your infrastructure, you need to make sure you have current backups of all your data. IBM TotalStorage Productivity Center can help identify all the data that is "at risk", those files that have no backup, no copy, and no current backup since the file was most recently changed. A well-run shop keeps their "at risk" files below 3 percent.
So, where does that leave us?
ATA drives are probably as reliable as FC/SCSI disk. Customers should choose which to use based on performance and workload characteristics. FC/SCSI drives are more expensive because they are designed to run at faster speeds, required by some enterprises for some workloads. IBM offers both, and has tools to help estimate which products are the best match to your requirements.
RAID 5 is just one of the many choices of trade-offs between cost and protection of data. For some data, JBOD might be enough. For other data that is more mission critical, you might choose keeping two or three copies. Data protection is more than just using RAID, you need to also consider point-in-time copies, synchronous or asynchronous disk mirroring, continuous data protection (CDP), and backup to tape media. IBM can help show you how.
Disk systems, and IT environments in general, are higher-level concepts to transcend the failures of individual components. DDM components will fail. Cache memory will fail. CPUs will fail. Choose a disk systems vendor that combines technologies in unique and innovative ways that take these possibilities into account, designed for no single point of failure, and no single point of repair.
So, Robin, from IBM's perspective, our hands are clean. Thank you for bringing this to our attention and for giving me the opportunity to highlight IBM's superiority at the systems level.
Forrester analysts kicked off the keynote sessions for Day 1 of the Forrester IT Forum 2009 event. The theme for this conference is "Redefining IT's value to the Enterprise." Rather than focusing on blue-sky futures that are decades away, Forrester wants to present a blend of pragmatic information that is actionable now, in the next 90 days, along with some forward-looking trends.
If you ask CEOs how well their IT operations are doing, 75 percent will say they are doing great. However, if you dig down, and ask how their companies are leveraging IT to help generate revenues, reduce costs, improve employee morale, drive profits, improve customer service, or manage risks, then the percentage drops down to 30 to 35 percent.
What are the root causes of this "perception gap" in value between business and IT? Several ideas come to mind:
Some CEOs still consider IT departments as "cost centers". Rather than exploiting technology to help drive the rest of the business, they are seen as a necessary evil, an extension of the accounting department, for example.
Some CEOs consider IT's role as basically "keeping the lights on". They only notice IT when the lights go out, or other business outages caused by disruptions in IT.
IT departments measure themselves in technology terms, not business terms. CEOs and the rest of the senior management team may not be "tech savvy", and the CIO and IT directors may not be "business savvy", resulting in failure to communicate IT's role and value to the rest of the business.
This conference is focused on CIOs and IT professionals, and how they can bridge the tech/business gap. The first two executive keynote presentations emphasized this point.
Bob Moffat, Senior VP and Group Executive, IBM
Bob Moffat (my fifth-line manager, or if you prefer, my boss's boss's boss's boss's boss) is the Senior VP and Group Executive of IBM's Systems and Technology Group that manufactures storage and other hardware. He presented how IBM is helping our clients deploy smarter solutions. Globalization has changed world business markets, the reach of information technology, and our clients' needs. To support that, IBM is focused on making the world a smarter planet: instrumented with appropriate sensors, interconnected over converging networks, and intelligent enough to provide visibility, control and automation.
It's time to rethink IT in light of these new developments, to think about IT in client terms, with business metrics. Bob gave several internal and customer examples, here's one from the City of Stockholm:
Covering nine square miles of Stockholm, Sweden, IBM led [the largest project of its kind] for traffic congestion in Europe. To reduce congestion caused by 300,000 vehicles, the City of Stockholm enacted a "congestion fee" with real-time recognition of license plates and a Web infrastructure to collect payments. The analytics, metrics and incentives have paid off. Since August 2007, traffic is down 18 percent, travel time on inner streets is reduced, and there has been a 9 percent increase in "green" vehicles.
In addition to smarter traffic, IBM has initiatives for smarter water, smarter energy, smarter healthcare, smarter supply chain, and smarter food supply.
Dave Barnes, Senior VP and CIO, United Parcel Service (UPS)
Dave Barnes must act as the "trusted advisor" to the rest of the senior management team. UPS delivers packages worldwide. They put sensors on all of their vehicles, not just to know how fast they were driving, but also how often they drove in reverse gear, and sensors on the engines to determine maintenance schedules. Analytics found that driving in reverse was the most dangerous, and by providing this information to the drivers themselves, the drivers were able to come up with their own innovative ways to minimize accidents. This is one role of IT: to provide employees the information they need to enable them to be better at their own jobs.
Dave also mentioned the importance of collaborating across business units. Their "Information Technology Steering Committee (ITSC)" has 15 members, of which only three are from the IT department. This helped deploy social media initiatives within UPS. For example, Twitter has been adopted so that senior management can get unfiltered customer feedback. This is perhaps another key role of IT, to flatten an organization from cultural hierarchies that prevent top brass up in the ivory tower from hearing what is going wrong down on the street. Too often, a customer or client complains to the nearest employee, and this may or may not get passed up accurately along the chain of command. Twitter allowed executives to see what was going on for themselves.
Dave also covered the "Best Neighbor" approach. If you were going to build a deck in your back yard, you might ask your neighbors who have already done this, and learn from their experience. Sadly, this does not happen enough in IT. To address this, UPS has a "Tech Governance Group" that focuses on business processes across the organization. For example, they improved "package flow", cutting 100 million miles of driving in the past few years.
Lastly, he mentioned that many technologists are "loners". They have a few like that, but try to hire techies who look to team across business units instead. Likewise, they try to hire business people who are somewhat tech savvy. For example, they have encouraged business employees to write their own reports, rather than requesting new reports to be developed by the IT department. The end result: the business people get exactly the reports they want, faster than waiting for IT to do it. Another role for IT is to provide end-users the tools to make their own reports.
(Dave didn't mention what tools these were, but it sounded like the Business Intelligence and Reporting Tools [BIRT] that IBM uses.)
These two sessions were a great one-two punch for the audience of 600 CIOs and IT professionals. First, IBM set the groundwork for what needs to be done. Then, UPS showed how they did exactly that, adopting a dynamic infrastructure and getting great results. This is going to be an interesting week!
Continuing my blog coverage of the [Forrester IT Forum 2009 conference], I will group a bunch of topics related to Cloud Computing into one post. Cloud Computing was a big topic here at the IT Forum, and probably was also at the other two conferences IBM participated in this week in Las Vegas.
The CIOs and IT professionals at this Forrester IT Forum seemed to be IT decision makers with a broader view. There was a lot of interest in Cloud Computing. What is Cloud Computing? Basically, it is renting IT capability on an as-needed basis from a computing service provider. The different levels of cloud computing depend on what the computing service provider actually provides. How do these compare with traditional co-location facilities or your own in-house on-premises computing? Here's my handy-dandy quick-reference guide:
Cloud Software-as-a-Service [SaaS], Examples: SalesForce and Google Apps.
Cloud Platform-as-a-Service [PaaS], where the provider hosts the platform your code runs on. Example: Google App Engine.
Cloud Infrastructure-as-a-Service [IaaS], such as Amazon EC2, RackSpace.
Traditional Co-Location facility: you park your equipment on rented floor space with power, cooling and bandwidth provided.
Traditional On-Premises: what most people do today. Build or buy your own data center, buy the hardware, write or buy the software, then install and manage it yourself.
A main tent session had a moderated Q&A panel of three Forrester Analysts titled "Saving, Making and Risking Cash with Cloud Computing." Here are some key points from this panel:
Is Cloud Computing just another tool in the IT toolbox, or does it represent a revolution? The panel gave arguments for both. As a set of technologies, protocols and standards, it is an evolutionary progression of standards already in place, and an extension of methods used in co-location and time-share facilities. However, from a business model perspective, Cloud Computing represents a revolutionary trend, eliminating in some cases huge up-front capital expenses and/or long-term outsourcing contracts. PaaS and IaaS offerings can be rented by the hour, for example.
An example of using Cloud Computing for a one-time batch job: The New York Times decided to build an archive of 11 million articles, but this meant having to convert them all from TIFF to PDF format. The IT person they put in charge of this rented 100 machines on [Amazon Elastic Compute Cloud (EC2)] for 24 hours and was able to convert all 4TB of data for only $240 US dollars -- which works out to 10 cents per machine-hour (100 machines x 24 hours x $0.10 = $240).
Cloud Computing can make it easier for companies to share information with clients, suppliers and business partners, eliminating the need to punch holes through firewalls to provide access.
Since it is relatively cheap for companies to try out different cloud computing offerings with little or no capital investment, the spaghetti model applies--"throw it on the wall, and see what sticks!"
What application areas should you consider running in the cloud? The panel's verdicts:
Employee self-service portals -- Yes
ERP -- Mixed
One-time batch jobs -- Mixed
Email -- Yes
Access Control -- No
Web 2.0 -- Mixed
Testing/QA -- Mixed
Back Office Transactions -- No
Disaster Recovery -- Mixed
Different IT roles will see varying benefits and risks with cloud computing. However, by 2011, every new IT project must answer the question "Why not run in the cloud?"
There were a variety of track sessions that explored different aspects of cloud computing:
Software-as-a-Service: When and Why
This session had three Forrester analysts in a Q&A panel format. SaaS can provide much-needed relief from application support, maintenance and upgrade chores. The choice and depth of offerings from SaaS providers is improving. However, comparing TCO between SaaS and on-premises deployments can yield different results for different use cases. For example, a typical SaaS rate of $100 US dollars per user per month could, with discounts, come to $1000 per year, or $10,000 over a 10-year period. Compare that to the total 10-year costs of an on-premises deployment, and you have a good ball-park comparison. SaaS can provide faster time-to-value, and you can easily try-before-you-buy several alternative offerings before making a decision.
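If you want to run that ball-park comparison yourself, here is a trivial "C" sketch. The SaaS rate comes from the panel; the user count and on-premises figures are hypothetical placeholders you would replace with your own numbers:

    #include <stdio.h>

    /* 10-year TCO ball-park: SaaS subscription vs. on-premises.
       On-premises license and annual run-rate are assumptions. */
    int main(void)
    {
        int users = 500;                    /* assumed user count       */
        double saas_per_user_year = 1000.0; /* $100/month, discounted   */
        double saas_10yr = users * saas_per_user_year * 10.0;

        double onprem_license = 250000.0;   /* assumption               */
        double onprem_annual  = 150000.0;   /* staff, power: assumption */
        double onprem_10yr = onprem_license + onprem_annual * 10.0;

        printf("SaaS 10-year total:        $%.0f\n", saas_10yr);
        printf("On-premises 10-year total: $%.0f\n", onprem_10yr);
        return 0;
    }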
The downside to SaaS is that you need to understand their data center: where it is located, and how it is protected for backup and disaster recovery. Some SaaS providers have only a single data center, so it might be disruptive if it experiences a regional disaster.
Cloud IT Services: The Next Big Thing or Just Marketing Vapor?
Economic pressures are forcing companies to explore alternatives, and Cloud IT services are providing additional options over traditional outsourcing. Only 70-80 percent of companies are satisfied with traditional outsourcing, so there is opportunity for Cloud IT services to address those who are not satisfied. Scalable, consumption-based billing with Web-based accessibility and flexibility is an attractive proposition. Ten years ago, you could not buy an hour on a mainframe with your credit card; now you can.
Cloud technologies are mature, and there is interest in using these services. About 10 percent of companies are piloting SaaS offerings, 16 percent piloting PaaS offerings, and 13 percent investing in deploying "private clouds" within their data center. This week Aneesh Chopra, who is Barack Obama's pick as the first CTO for the US Federal Government, [stated to congressional leaders]: “The federal government should be exploring greater use of cloud computing where appropriate.”
IBM is betting heavily on their Cloud Computing strategy, has already gone through the reorganizations needed to be positioned well, and claims to have thousands of clients already. HP has some cloud offerings focused on their enterprise customers. Dell is investing and reorganizing for cloud as well.
Network Strategic Planning for Challenging Times
While not limited to Cloud Computing, companies are seeing WAN traffic double every 18 months, but without the corresponding increases in budget to cover it. The Forrester analyst covered WAN optimization management services and hybrid Ethernet-MPLS offerings that help people transition from MPLS VPNs to Carrier-grade Ethernet.
Who should you hire for WAN optimization? Do you trust the Telco that provides your bandwidth to help you figure out ways to use less of it? Alternatives include System Integrators and Service providers like IBM and EDS. Or, you could try to do it yourself, but this requires capital investment in gear and performance monitoring software.
New workloads like Voice over IP (VoIP) and digital surveillance can help cost-justify upgrading your MPLS VPNs to Carrier-grade Ethernet. Converging this with iSCSI and/or Fibre Channel over Ethernet (FCoE) can help reduce costs as well. Both MPLS and Ethernet will co-exist for a while, and hybrid offerings from Telcos will help ease the transition. In the meantime, switching some workloads to Cloud Computing can provide immediate relief to in-house networks now. Converging voice, video, LAN, WAN and SAN traffic may require IT departments to rethink how the role of "network administrator" is handled.
Navigating the Myriad New Sourcing Models
The landscape of outsourcing has changed with the introduction of new Cloud Computing offerings. However, adapting these new offerings to internal preferences may prove challenging. The Forrester analyst suggested being ready to influence your company to adopt Cloud Computing as a new sourcing option.
Traditional outsourcing just manages your existing hardware and software, often referred to as "Your mess for less!" However, outsourcing contract law is mature, and many outsource providers are large, well-established companies. In contrast, some SaaS providers are small, and the few that are large may be fairly new to the outsourcing business. Here are some things to consider:
Where will the data physically be located? There are government regulations, such as the US Patriot Act, that can influence this decision. Many Canadian and European customers are avoiding providers where data is stored in the United States for this reason.
What is the service delivery chain? Some cloud providers in turn use other cloud providers. For example, a SaaS provider might develop the software and then rent the platform it runs on from a PaaS provider, which in turn might be using offshore or co-location facilities to actually house its equipment. Knowing the service delivery chain may prove important in contract negotiations. Clarify "cloud" terminology and avoid mixed metaphors.
What is their contingency plan? What is your contingency plan if the system is slow or inaccessible? What is their plan to protect against data loss during disasters? What if they go out of business? Source code escrow has proven impractical in many cases. SLAs should provide for performance, availability and other key metrics. However, service level penalties are not a cure-all for major disruptions or loss of revenues or reputation.
How will they handle security, compliance and audits? Heavy regulatory requirements may favor dedicated resources to be used.
Who has "custodianship" of the data? Will you get the data back if you discontinue the contract? If so, what format will it be in, and will it make any sense if you are not running the same application as the cloud provider?
Will they provide transition assistance? Moving from on-premises to cloud may involve some effort, including re-training of end users.
Are the resources shared or dedicated? For shared resource environments, is the capacity "fenced off" in any way to prevent other clients from impacting your performance or availability?
I am glad to see so much interest in Cloud Computing. To learn more, here is IBM's [Cloud Computing] landing page.
Continuing this week's theme of dealing with the global economic meltdown, recession and financial crisis, I found a great video that recaps IBM CEO Sam Palmisano's recommendations for being more competitive in this environment.
In a recent speech to business leaders, Sam outlined what he sees as the four most important steps to thriving in the global economy. The highlights can be seen here in this [2-minute video] on IBM's "Forward View" eMagazine.
Some people find it surprising that it is often more cost-effective, and power-efficient, to run workloads on mainframe logical partitions (LPARs) than a stack of x86 servers running VMware.
Perhaps they won't be surprised any more. Here is an article in eWeek that explains how IBM is reducing energy costs 80% by consolidating 3,900 rack-optimized servers onto 33 IBM System z mainframe servers, running Linux, in its own data centers. Since 1997, IBM has consolidated its 155 strategic worldwide data center locations down to just seven.
I am very pleased that IBM has invested heavily in Linux, with support across servers, storage, software and services. Linux is allowing IBM to deliver clever, innovative solutions that may not be possible with other operating systems. If you are in storage, you should consider becoming more knowledgeable in Linux.
The older systems won't just end up in a landfill somewhere. Instead, the details are spelled out in the IBM Press Release:
As part of the effort to protect the environment, IBM Global Asset Recovery Services, the refurbishment and recycling unit of IBM, will process and properly dispose of the 3,900 reclaimed systems. Newer units will be refurbished and resold through IBM's sales force and partner network, while older systems will be harvested for parts or sold for scrap. Prior to disposition, the machines will be scrubbed of all sensitive data. Any unusable e-waste will be properly disposed following environmentally compliant processes perfected over 20 years of leading environmental skill and experience in the area of IT asset disposition.
Whereas other vendors might think that some operational improvements will be enough, such as switching to higher-capacity SATA drives, or virtualizing x86 servers, IBM recognizes that sometimes more fundamental changes are required to effect real changes and real results.
Many people have asked me if there was any logic with the IBM naming convention of IBM Systems branded servers. Here's your quick and easy cheat sheet:
System x -- "x" for cross-platform architecture. Technologies from our mainframe and UNIX servers were brought into chips that sit next to the Intel or AMD processors to provide a more reliable x86 server experience. For example, some models have a POWER processor-based Remote Supervisor Adapter (RSA).
System p -- "p" for POWER architecture.
System z -- "z" for Zero-downtime, zero-exposures. Our lawyers prefer "near-zero", but this is about as close as you get to ["six-nines" availability] in our industry. With the highest levels of security and encryption, no other vendor comes close, so you get the idea.
But what about the "i" for System i? Officially, it stands for "Integrated" in that it could integrate different applications running on different operating systems onto a [COMMON] platform. Options were available to insert Intel-based processor cards that ran Windows, or attach special cables that allowed separate System x servers running Windows to attach to a System i. Both allowed Windows applications to share the internal LAN and SAN inside the System i machine. Later, IBM allowed [AIX on System i] and [Linux on Power] operating systems to run as well.
From a storage perspective, we often joked that the "i" stood for "island", as most System i machines used internal disk, or attached externally to only a few selected models of disk from IBM and EMC that had special support for i5/OS using a special, non-standard 520-byte disk block size. This meant only our popular IBM System Storage DS6000 and DS8000 series disk systems were available. This block size requirement only applies to disk; for tape, i5/OS supports both IBM TS1120 and LTO tape systems. For the most part, System i machines stood separate from the mainframe, and from the rest of the Linux, UNIX and Windows distributed servers on the data center floor.
Often, when I am talking to customers, they ask when product xyz will be supported on System z or System i. I explain that IBM's strategy is not to make all storage devices connect via ESCON/FICON or support non-standard block sizes, but rather to get the servers to use the standard 512-byte block size, Fibre Channel, and other standard protocols. (The old adage applies: if you can't get Muhammad to move to the mountain, get the mountain to move to Muhammad.)
On the System z mainframe, we are 60 percent there, allowing three of the five operating systems (z/VM, z/VSE and Linux) to access FCP-based disk and tape devices. (Four out of six if you include [OpenSolaris for the mainframe].) But what about System i? As the characters on the popular television show [LOST] would say: it's time to get off the island!
Last week, IBM announced the new [i5/OS V6R1 operating system] with features that will greatly improve the use of external storage on this platform. Check this out:
POWER6-based System i 570 model server
Our latest, most powerful POWER processor brought to the System i platform. The 570 model will be the first in the System i family of servers to make use of the new processing technology, using up to 16 (sixteen!) POWER6 processors (running at 4.7 GHz) in each machine. The advantage of the new processors is the increased Commercial Processing Workload (CPW) rating, 31 percent greater than the POWER5+ version and 72 percent greater than the POWER5 version. CPW is the "MIPS" or "TeraFlops" rating for comparing System i servers. Here is the [Announcement Letter].
Fibre Channel Adapter for System i hardware
That's right, these are [Smart IOAs], so an I/O Processor (IOP) is no longer required! You can even boot the Initial Program Load (IPL) directly from SAN-attached tape. This brings System i into the 21st century for Business Continuity options.
Virtual I/O Server (VIOS)
[Virtual I/O Server] has been around for System p machines, but is now available on System i as well. This allows multiple logical partitions (LPARs) to share resources like Ethernet cards and FCP host bus adapters. In the case of storage, the VIOS handles the 520-byte to 512-byte conversion, so that i5/OS systems can now read and write to standard FCP devices like the IBM System Storage DS4800 and DS4700 disk systems.
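To visualize why that conversion is needed, here is a conceptual "C" sketch. This is my own illustration of the general idea, not IBM's actual on-disk format: i5/OS expects 520-byte sectors that carry extra metadata alongside each 512 bytes of data, while standard FCP devices store plain 512-byte sectors, so something in the path has to repackage those extra bytes:

    #include <stdio.h>

    /* Conceptual layout only -- my illustration, not IBM's actual
       on-disk format. The point: every i5/OS sector carries 8 extra
       bytes of metadata alongside 512 bytes of data, so something in
       the path must repackage it for plain 512-byte FCP devices. */
    struct sector_520 {
        unsigned char header[8]; /* i5/OS per-sector metadata (assumed position) */
        unsigned char data[512]; /* payload a standard device stores */
    };

    int main(void)
    {
        printf("i5/OS sector: %zu bytes\n", sizeof(struct sector_520));
        printf("standard FCP sector: 512 bytes\n");
        return 0;
    }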
IBM System Storage DS4000 series
Initially, we have certified the DS4700 and DS4800 disk systems to work with i5/OS, but more devices are planned. This means that you can now share your DS4700 between i5/OS and your other Linux, UNIX and Windows servers, and take advantage of a mix of FC and SATA disk capacities, RAID 6 protection, and so on.
To call [IBM PowerVM] the "VMware for the POWER architecture" would not do it quite justice. In combination with VIOS, IBM PowerVM is able to run a variety of AIX, Linux and i5/OS guest images. The "Live Partition Mobility" feature allows you to easily move guest images from one system to another, while they are running, just like VMotion for x86 machines.
And while we are on the topic of x86, PowerVM is also able to present a Linux-x86 emulation base to run x86-compiled applications. While many Linux applications could be re-compiled from source code for the POWER architecture "as is", others required perhaps 1-2 percent modification to port them over, and that was too much for some software development houses. Now, we can run most x86-compiled Linux application binaries in their original form on POWER architecture servers.
BladeCenter JS22 Express
The POWER6-based [JS22 Express blade] can run i5/OS, taking advantage of PowerVM and VIOS to access all of the BladeCenter resources. The BladeCenter lets you mix and match POWER and x86-based blades in the same chassis, providing the ultimate in flexibility.
"Information is moving—you know, nightly news is one way, of course, but it's also moving through the blogosphere and through the Internets." --- George W. Bush
As multinational companies transition to become globally integrated enterprises, information is going to move across national boundaries. Laws that pertain to how data is stored and accessed need to be addressed.
Jon W. Toigo over at DrunkenData.com discusses an interesting proposal on Google censorship. The New York Sun reports that NYC comptroller William Thompson Jr. is targeting both Google and Yahoo over their policies of abiding by the local laws in each country they do business in. The proposal includes asking Google to fight local laws, publicize when Google complies with local laws, and publicize when local governments ask Google to comply with their laws. While Toigo focuses on Google, this issue applies to Yahoo, Microsoft, and many other companies that do business in multiple countries.
I admire when government officials use diplomacy to influence the policy of other governments, and when individuals act to influence the policies of those who govern them, but Thompson is doing neither. In this matter, Thompson is trying to influence the policies of another government outside his jurisdiction, as a manager of investments in companies that do business there. Investors have two choices when trying to influence how companies do business:
Stop investing in those companies
Purchase shares, and vote your portion of the shares.
It appears Thompson is exercising the latter, proposing that this issue be brought to shareholder vote via proxy.There can only be two results from such a vote, either:
Shareholders vote for it, and Google changes the way it does business in this and other countries, possibly stops doing business in countries that don't appreciate hegemony.
Shareholders vote against it, and Google continues its great balancing act, complying with local laws and its own corporate culture.
Did we forget that we have censorship in the USA as well? Would Thompson's proposals apply to the rules and regulations that our own government requires?
IBM does business in most, but not all, countries on this planet. In the countries we don't do business in, we have good reason not to. For the countries we do, we comply with all the laws that apply in each case. When I travel to these countries, including some of the countries specifically targeted by this proposal, I must abide by their laws. No exceptions.
The world is shrinking, and technologies now allow companies to become globally integrated. Before writing "The World Is Flat", Thomas Friedman wrote a book titled The Lexus and The Olive Tree, which covers the various issues related to conflicts between global companies and the countries and cultures they do business in.
This reminds me of the wisdom of the Prime Directive introduced in the late 1960s on the popular TV show "Star Trek". The concept was simple: honor the sovereignty of other cultures, on other worlds, and play by their rules when you are on their planet. I say "wisdom" in that it took me years to truly appreciate this idea. Initially, I considered it just a plot device to introduce conflict each time the captain and crew of the starship "Enterprise" visited a new location and discovered a culture different from their own. But over the years, as I have traveled to many countries, I began to see and understand the wisdom of the "Prime Directive", and it applies as much now, in real life, as it did back then in the futuristic 1960s TV show.
Who are we to say that our way of doing things is the one and only way to do them?
Spend twenty hours a week running a project for a non-profit.
Teach yourself Java, HTML, Flash, PHP and SQL. Not a little, but mastery. [Clarification: I know you can't become a master programmer of all these in a year. I used the word mastery to distinguish it from 'familiarity' which is what you get from one of those Dummies type books. I would hope you could write code that solves problems, works and is reasonably clear, not that you can program well enough to work for Joel Spolsky. Sorry if I ruffled feathers.]
Volunteer to coach or assistant coach a kids' sports team.
Start, run and grow an online community.
Give a speech a week to local organizations.
Write a regular newsletter or blog about an industry you care about.
Learn a foreign language fluently.
Write three detailed business plans for projects in the industry you care about.
Self-publish a book.
Run a marathon.
In 2007, 51 percent of graduating college students could find jobs in their field; this year it has dropped to only 20 percent. If you find yourself with some time on your hands, either recently graduated or recently unemployed, consider volunteerism. Last year, I chose to donate my time and money to an innovative project called "One Laptop per Child" [OLPC]. It was one of my [New Years Resolutions] for 2008. I was actually "recruited" by folks from OLPC after they read my [series of blog posts] on things that can be done with their now-famous green-and-white XO laptop.
The first half of the year, I spent helping "Open Learning Exchange Nepal" [OLE Nepal], a non-government organization (NGO) that helps education in that country. XO laptops were provided to second and sixth graders at several schools, and my assignment was to help with the school "XS" server. This would be the server that all the laptops connect to. My blog posts on this included:
Rather than [Move to Nepal], I was able to help by building an identical XS server in Tucson and providing support remotely. This included getting the "Mesh Antennas" to be properly recognized, setting up an internet filter using [DansGuardian] software, and working out backup procedures.
For the second half of the year, I was asked to mentor a college student in Hyderabad, India as part of the ["Google Summer of Code"] to develop an [Educational Blogger System] on the XS server. We called it "EduBlog" and based it on the popular [Moodle] educational software platform. This was going to be tested with kids from Uruguay, but sending a server down to that country proved politically challenging, so instead I [built a server and shipped it] to a co-location facility in Pennsylvania that agreed to donate the cost and expenses needed to run the server there with a full internet connection. I acted as "system admin" for the box and was able to connect remotely via SSH, while Tarun, the college student I was mentoring, developed the EduBlog software. Twice the system was hacked, but I was able to restore it remotely thanks to a multi-boot configuration that allowed me to reboot to a read-only operating system image and restore the operating system and data.
The students and teachers in Uruguay were helped locally by [Proyecto Ceibal]. We were able to translate the system into Spanish, and the project was a big success, enough to convince the local government to provide XO laptops to their students to further the benefits.
Continuing this week's theme of doing important things without leaving town, I present our results for an exciting project I started earlier this year.
For seven weeks, my coworker Mark Haye and I voluntarily led a class of students here in Tucson, Arizona in an after-school pilot project to teach the ["C" programming language] using [LEGO® Mindstorms® NXT robots]. The ten students, boys and girls ages 9 to 14 years old, were already part of the FIRST [For Inspiration and Recognition of Science and Technology] program, and participated in FIRST Lego League [FLL] robot competitions. The students were already familiar with building robots and programming them with a simple graphical system of connecting blocks that perform actions. However, to compete in the next level of robot competitions, FIRST Tech Challenge [FTC], we needed to leave this simple graphical programming behind and upgrade to more precise "C" programming.
Mark is a software engineer for IBM Tivoli Storage Manager and has participated in FLL competitions over the past nine years. This week, he celebrates his 25th anniversary at IBM, and I celebrate my 23rd. The teacher, Ms. Ackerman, and the students referred to us as "Coach Mark" and "Coach Tony".
This was the first time I had worked with LEGO NXT robots. For those not familiar with these robots, you can purchase a kit at your local toy store. In addition to regular LEGO bricks, beams, and plates, there are motors, wheels, and sensors. A programmable NXT brick has three outputs (marked A, B, and C) to control three motors, and four inputs (marked 1, 2, 3, 4) to receive values from sensors. Programs are written and compiled on laptops and then downloaded to the NXT programmable brick through a USB cable, or wirelessly via Bluetooth.
In the picture shown, an image of the Mars planetary surface is divided into a grid with thick black lines.A light sensor between the front two wheels of the robot is over the black line.
We used the [RobotC programming firmware] and integrated development environment (IDE) from [Carnegie Mellon University].The idea of this pilot was to see how well the students could learn "C". With only a few hours after class on each Wednesday, could we teach young students "C" programming in just seven weeks?
My contribution? I have taught both high school and college classes, and spent over 15 years programming for IBM, so Mark asked me to help. We started with a basic lesson plan:
A brief history of the "C" language
Understanding statements and syntax
Setting motor speed and direction
Compiling and downloading your first program
Understanding the "while" loop
Retrieving input sensor values
Understanding the "if-then-else" statement
Defining variables with different data types
Manipulating string variables
Writing a program for the robot to track along a black line on a white background (see the sketch following this lesson plan).
Understanding local versus global scope variables
Writing a program for a robot to count black lines as it crosses them.
Performing left turns and right turns, and crossing a specific number of lines on a grid pattern to move the robot to a specific location.
Weeks 6 and 7
Mission Impossible: come up with a challenge to make the robot do something that would be difficult to accomplish using the previous NXT visual programming language.
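To give a flavor of the programs involved, here is a minimal RobotC-style line-tracking sketch for the week 5 lesson, reconstructed from memory rather than taken from the actual class materials. It assumes a light sensor configured on port S3 and drive motors on outputs B and C:

    #pragma config(Sensor, S3, lightSensor, sensorLightActive)

    // Follow the edge of a black line on a white background by
    // steering one way when we see dark, and the other way when
    // we see light. The threshold of 45 is an assumed midpoint
    // between dark (~25) and light (~65) readings.
    task main()
    {
      const int threshold = 45;

      while (true)
      {
        if (SensorValue[lightSensor] < threshold)
        {
          motor[motorB] = 20;   // on the line: curve one way
          motor[motorC] = 50;
        }
        else
        {
          motor[motorB] = 50;   // off the line: curve back
          motor[motorC] = 20;
        }
      }
    }

The real student programs were more elaborate, but this captures the basic while-loop, sensor, and motor concepts from the lesson plan.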
At the completion of these seven weeks, I sat down to interview "Coach Mark" on his thoughts on this pilot project.
Why teach the "C" language?
This is a practical programming skill. The "C" language is used throughout the world to program everything from embedded systems to operating systems, and even storage software. It would also allow the robots to handle more precise movements, more accurate turns, and more complicated missions.
Can kids learn "C" in only seven weeks?
Part of the pilot project was to see how well the students could understand the material. They were already familiar with building the robots, and understood the basics of programming sensors and motors, so we were hoping this was a good foundation to work from. Some kids managed very well, others struggled.
Did everything go according to plan?
The first two weeks went well, turning on motors and having robots move forward and backward were easy enough. We seemed to lose a few students on week 3, and things got worse from there. However, several of the students truly surprised us and managed to implement very complicated missions. We were quite pleased with the results.
What kind of problems did the kids encounter?
The touch sensor required loops that wait for a press. The motors did not necessarily turn as expected until more advanced methods were used. Making accurate 90-degree left and right turns was more difficult than expected.
Any funny surprises?
Yes, we had a Challenge Map representing the Mars planetary surface from a previous FLL competition that was dark red and divided into squares with thick black lines. An active light sensor returns a value from "0" (complete darkness) to "100" (bright white). However, the Mars surface had craters that were dark enough to be misinterpreted as a black line, causing some unusual results. This required some enhanced programming techniques to resolve.
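One plausible fix, sketched below in the same RobotC style, is to count a black line only after several consecutive dark readings, since the craters are smaller than the thick grid lines. This is my own guess at the technique, not necessarily the one the students actually used:

    #pragma config(Sensor, S3, lightSensor, sensorLightActive)

    // Count black lines while ignoring small dark craters: only a
    // sustained run of dark readings counts as a line. The threshold
    // and run length are assumed values.
    task main()
    {
      int darkCount = 0;
      int linesCrossed = 0;

      while (true)
      {
        if (SensorValue[lightSensor] < 30)
          darkCount++;
        else
          darkCount = 0;

        if (darkCount == 5)      // sustained darkness = a real line
          linesCrossed++;

        wait1Msec(10);           // sample every 10 milliseconds
      }
    }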
Did robots help or hurt the teaching process?
I think they helped. Rather than writing programs that just display "Hello World!" on a computer screen, the students can actually see robots move, and either do what they expect, or not!
And when the robots didn't do what they were expected to?
The students got into "debug" mode. They were already used to doing this from previous FLL competitions, but with RobotC, you can leave the USB cable connected (or use wireless Bluetooth) and actually gather debugging information while the robot is running, to see the value of sensors and other variables and help determine why things are not working properly.
Any applicability to the real world of storage?
We have robots in the IBM System Storage TS3500 tape library. These robots scan bar code labels, pull tapes out of shelves, and mount them into drives. The programming skills are the same as those needed for storage software, such as IBM Tivoli Storage Manager or IBM Tivoli Storage Productivity Center.
The world is becoming smarter, instrumented with sensors, interconnected over a common network, and intelligent enough to react and respond correctly. The lessons of reading sensor values and moving motors can be considered the first step in solutions that help to make a smarter planet.
The concept that there should be a linear "storage administrators per TB" rule-of-thumb has been around for a while. Back in 1992, I went to visit a customer in Germany who had FIVE storage admins for a 90 GB (yes, GB, not TB) disk array. I told them they only needed 3 admins, but they cited German laws that prohibited "overtime" work on evenings and weekends.
Later, in 1996, I visited an insurance company in Ohio to talk about IBM Tivoli Storage Manager. They had TWO admins to manage 7TB on their mainframe, and another 45 people managing the 7TB across their distributed systems running Linux, UNIX, and Windows. My first question, why TWO? Only one would be needed for the mainframe, but they responded that they back each other up when one takes a 2-week vacation. My second question to the rest of the audience was... "When was the last time you guys took a 2-week vacation?"
Today, admins manage many TBs of storage. But TBs are turning out not to be a fair ruler for estimating the number of admins you need. It's a moving target, and other factors have more influence than the sheer quantity of data. Let's take a look at some of those factors, which we call "the three V's":
Variety of information types
In the beginning, there were just flat text files. In today's world, we have structured databases, semi-structured e-mail systems, hypertext documents, composite applications, audio and video formats that require streaming, and so on. Variety adds to the complexity of the environment. Different data requires different treatment, different handling, and perhaps even different storage technologies.
Volume of data
Data on disk and tape is growing 60% year over year. It's growing on paper also. It's growing on film, like photos and X-rays. The problem is not the amount, but the rate of growth. Imagine if the population and traffic in your city or town increased 60% in one year; most likely people would suffer, because most governments just aren't prepared for that level of growth.
Velocity of change
Back in the 1950's and 1960's, people only had to make updates once a year, scheduling time during holidays. Now, people are making changes every month, sometimes every weekend. One customer we spoke with recently said they do about 8000 changes PER WEEKEND!
So, the key is that there is no simple rule-of-thumb. Fewer admins are needed per TB for mainframe data than for distributed systems data. Fewer admins per TB are needed when you deploy productivity software, like IBM TotalStorage Productivity Center. Fewer admins per TB are needed when you deploy storage virtualization, like IBM SAN Volume Controller or IBM virtual tape libraries.
Virtualize your x86 servers, using VMware
Use more efficient disk media, such as high-capacity SATA disk drives
Both are great recommendations, but why limit yourself to what EMC offers? Your x86-based machines are only a subset of your servers, and disk is only a subset of your storage. IBM takes a more holistic approach, looking at the entire data center.
VMware is a great product, and IBM is its top reseller. But in addition to VMware, there are other solutions for the x86-based servers, like Xen and Microsoft Virtual Server. IBM's System p, System i, and System z product lines all support logical partitioning.
To compare the energy effectiveness of server virtualization, consider a metric that can apply across platforms. For example, for an e-mail server, consider watts per mailbox. If you have, say, 15,000 users, you can calculate how many watts you are consuming to manage their mailboxes in your current environment, and compare that with running them on VMware, or on logical partitions on other servers. Some people find it surprising that it is often more cost-effective, and power-efficient, to run workloads on mainframe logical partitions (LPARs) than a stack of x86 servers running VMware.
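As a sketch of the arithmetic (with made-up wattage figures; plug in your own measurements), the comparison looks like this in "C":

    #include <stdio.h>

    /* Watts-per-mailbox comparison. All wattage figures here are
       hypothetical placeholders, not measurements of any product. */
    int main(void)
    {
        int mailboxes = 15000;

        double x86_watts  = 40 * 350.0; /* assumed 40 servers @ 350W    */
        double lpar_watts = 4000.0;     /* assumed mainframe LPAR share */

        printf("x86 farm: %.2f watts per mailbox\n", x86_watts / mailboxes);
        printf("LPARs:    %.2f watts per mailbox\n", lpar_watts / mailboxes);
        return 0;
    }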
More efficient Media
SATA and FATA disks support higher capacities and run at slower RPM speeds, thus using fewer watts per terabyte. A terabyte stored on 73GB high-speed 15K RPM drives consumes more watts than the same terabyte stored using 500GB SATA drives. Chuck correctly identifies that tape is more power-efficient than disk, but then argues that paper is more power-efficient than tape. Paper is not necessarily more efficient than tape.
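The watts-per-terabyte arithmetic is easy to sketch. The per-drive wattages below are assumed round numbers, not published specs; check the spec sheets for the actual drives you are comparing:

    #include <stdio.h>

    /* Watts per terabyte for two drive classes. Per-drive wattages
       are assumed round numbers, not published specifications. */
    int main(void)
    {
        double fc_watts   = 15.0, fc_tb   = 0.073; /* 73GB 15K RPM drive */
        double sata_watts = 10.0, sata_tb = 0.5;   /* 500GB SATA drive   */

        printf("15K FC drive: %4.0f watts/TB\n", fc_watts / fc_tb);
        printf("SATA drive:   %4.0f watts/TB\n", sata_watts / sata_tb);
        return 0;
    }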
ESG analyst Steve Duplessie divides up data between Dynamic and Persistent. The best place to put dynamic data is on disk, and here is where the evaluation of FC/SAS versus SATA/FATA comes into play. Persistent data, on the other hand, can be stored on paper, microfiche, optical or tape media. All of these shelf-resident media consume no electricity, nor generate any heat that would require additional cooling.
A study by scientists at the Lawrence Berkeley National Laboratory, titled High-Tech Means High-Efficiency: The Business Case for Energy Management in High-Tech Industries, indicates that data centers consume 15 to 100 times more energy per square foot than traditional office space. Storing persistent data in traditional office space can therefore save a huge amount of energy. Steve Duplessie feels the ratio of dynamic to persistent data is 1:10 today, but is likely to grow to 1:100 in the near future, making energy-efficient storage of persistent data ever more important to our environment.
Data centers consume nearly 5000 Megawatts in the USA alone, 14000 Megawatts worldwide. To put that in perspective, the country of Hungary I was in last week can generate up to 8000 Megawatts for the entire country (and they were using 7400 Megawatts last week as a result of their current heat wave, causing them grave concern).
Back in the 1990's, one of the insurance companies IBM worked with kept data on paper in manila folders, and armies of young adults on roller skates were dispatched throughout large warehouses of shelves to get the appropriate folder in response to customer service inquiries. Digitizing this paper into electronic format greatly reduced the need for this amount of warehouse space, as well as improved the time to retrieve the data.
A typical file storage box (12 inch x 12 inch x 18 inch) containing typed pages, single-spaced, double-sided, in 12-point font, could hold perhaps 100MB. The same box could hold a hundred or more LTO or 3592 tape cartridges, each storing hundreds of GB of information. That's roughly a million-to-one improvement in space efficiency, and on a watts-per-TB basis it translates to a substantial improvement, since the box needs nothing more than standard office air conditioning and lighting.
To learn more about IBM's Project Big Green, watch this introductory video, which used Second Life for the animation.
Last month, HP and Oracle jointly announced their new "Exadata Storage Server". This solution pairs HP server and storage hardware with Oracle software, designed for Data Warehouse and Business Intelligence (DW/BI) workloads.
I immediately recognized the Exadata Storage Server as a "me too" product, copying the idea from IBM's [InfoSphere Balanced Warehouse], which combines IBM servers, IBM storage and IBM's DB2 database software to accomplish this, but from a single vendor, rather than a collaboration of two vendors. The Balanced Warehouse has been around for a while. I even blogged about this last year, in my post [IBM Combo trounces HP and Sun], when IBM announced its latest E7100 model. IBM offers three different sizes: C-class for smaller SMB workloads, D-class for moderate-size workloads, and E-class for large enterprise workloads.
One would think that since IBM and Oracle are the top two database software vendors, and IBM and HP are the top two storage hardware vendors, IBM would be upset or nervous about this announcement. We're not. I would gladly recommend comparing IBM offerings with anything HP and Oracle have to offer. And with IBM's acquisition of Cognos, IBM has made a bold statement that it is serious about competing in the DW/BI market space.
But apparently, it struck a nerve over at EMC.
Fellow blogger Chuck Hollis from EMC went on the attack, and Oracle blogger Kevin Closson went on the defensive.For those readers who do not follow either, here is the latest chain of events:
When it comes to blog fights like these, there are no clear winners or losers, but hopefully, if done respectfully, they can benefit everyone involved, giving readers insight into the products as well as the company cultures that produce them. Let's see how each side fared:
Chuck implies that HP doesn't understand databases and Oracle doesn't understand server and storage hardware, so cobbling together a solution based on this two-vendor collaboration doesn't make sense to him. The few people I know who work at HP and Oracle are smart, so I suspect this is more a claim against each company's "core strengths". Few would associate HP with database knowledge, or Oracle with hardware expertise, so I give Chuck a point on this one.
Of course, Chuck doesn't have deep, inside knowledge of this new offering (nor do I, for that matter), and Kevin is patient enough to correct all of Chuck's mistaken assumptions and assertions. Kevin understands that EMC's "core strengths" aren't in servers or databases, so he explains things in simple enough terms that EMC employees can understand, so I give Kevin a point on this one.
If two is bad, then three is worse! How much bubble gum and baling wire do you need in your data center? The better option is to go to the one company that offers it all and brings it together into a single solution: IBM InfoSphere Balanced Warehouse.
While most of the post is accurate and well-stated, two opinions in particular caught my eye. I'll be nice and call them opinions, since these are blogs, and always subject to interpretation. I'll put quotes around them so that people will correctly relate these to Hu, and not me.
"Storage virtualization can only be done in a storage controller. Currently Hitachi is the only vendor to provide this." -- Hu Yoshida
Hu, I enjoy all of your blog entries, but you should know better. HDS is a fairly recent newcomer to the storage virtualization arena, and since IBM has been doing this for decades, I will bring you and the rest of the readers up to speed. I am not starting a blog-fight; I just want to provide some additional information for clients to consider when making choices in the marketplace.
First, let's clarify the terminology. I will use 'storage' in the broad sense, including anything that can hold 1's and 0's, including memory, spinning disk media, and plastic tape media. These all have different mechanisms and access methods, based on their physical geometry and characteristics. The concept of 'virtualization' is any technology that makes one set of resources look like another set of resources with more preferable characteristics, and this applies to storage as well as servers and networks. Finally, 'storage controller' is any device with the intelligence to talk to a server and handle its read and write requests.
Second, let's take a look at all the different flavors of storage virtualization that IBM has developed over the past 30 years.
IBM introduces the S/370 with the OS/VS1 operating system. "VS" here refers to virtual storage, and in this case internal server memory was swapped out to physical disk. Using a table mapping, disk was made to look like an extension of main memory.
IBM introduces the IBM 3850 Mass Storage System (MSS). Until this time, programs that ran on mainframes had to be acutely aware of the device types being written, as each device type had different block, track and cylinder sizes, so a program written for one device type would have to be modified to work with a different device type. The MSS was able to take four 3350 disks, and a lot of tapes, and make them look like older 3330 disks, since most programs were still written for the 3330 format. The MSS was a way to deliver new 3350 disk to a 3330-oriented ecosystem, and greatly reduce the cost by handling tape on the back end. The table mapping was one virtual 3330 disk (100 MB) to two physical tapes (50 MB each). Back then, all of the mainframe disk systems had separate controllers. The 3850 used a 3831 controller that talked to the servers.
IBM invents Redundant Array of Independent Disks (RAID) technology. The table mapping is one or more virtual "Logical Units" (or "LUNs") to two or more physical disks. Data is striped, mirrored, and protected with parity across the physical drives, making the LUNs look and feel like disks, but with faster performance and higher reliability than the physical drives they are mapped to. RAID could be implemented in the server as software, on top of or embedded into the operating system, in the host bus adapter, or on the controller itself. The vendor that provided the RAID software or HBA did not have to be the same as the vendor that provided the disk, so in a sense this avoided "vendor lock-in". Today, RAID is almost always done in the external storage controller.
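The parity idea is simple enough to show in a few lines of "C". This is a sketch of the general RAID 5 technique, not any product's implementation: parity is the XOR of the data strips, and any one lost strip can be rebuilt by XOR-ing the survivors with the parity:

    #include <stdio.h>

    /* RAID 5-style parity in miniature: XOR the data strips to get
       parity, then rebuild a "lost" strip from the survivors. */
    int main(void)
    {
        unsigned char d0 = 0x5A, d1 = 0xC3, d2 = 0x0F; /* data strips */
        unsigned char parity = d0 ^ d1 ^ d2;

        /* Simulate losing d1 and rebuilding it from the rest. */
        unsigned char rebuilt = d0 ^ d2 ^ parity;

        printf("original d1 = 0x%02X, rebuilt d1 = 0x%02X\n", d1, rebuilt);
        return 0;
    }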
IBM introduces the Personal Computer. One of the features of DOS is the ability to make a "RAM drive". This is technology that runs in the operating system to make internal memory look and feel like an external drive letter. Applications that already knew how to read and write to drive letters could work unmodified with these new RAM drives. This had the advantage that the files would be erased when the system was turned off, so it was perfect for temporary files. Of course, other operating systems today have this feature, UNIX has a /tmp directory in memory, and z/OS uses VIO storage pools.
This is important, as memory would be made to look like disk externally, as "cache", in the 1990s.
IBM AIX v3 introduces Logical Volume Manager (LVM). LVM maps the LUNs from external RAID controllers into virtual disks inside the UNIX server. The mapping can combine the capacity of multiple physical LUNs into a large internal volume. This was all done by software within the server, completely independent of the storage vendor, so again no lock-in.
IBM introduces the Virtual Tape Server (VTS). This was a disk array that emulated a tape library. A mapping of virtual tapes to physical tapes was done to allow full utilization of larger and larger tape cartridges. While many people today mistakenly equate "storage virtualization" with "disk virtualization", in reality it can be implemented on other forms of storage. The disk array was referred to as the "Tape Volume Cache". By using disk, the VTS could mount an empty "scratch" tape instantaneously, since no physical tape had to be mounted for this purpose.
Contradicting its "tape is dead" mantra, EMC later developed its CLARiiON disk library that emulates a virtual tape library (VTL).
IBM introduces the SAN Volume Controller (SVC). It involves mapping virtual disks to managed disks that can come from different frames from different vendors. Like other controllers, the SVC has multiple processors and cache memory, with the intelligence to talk to servers, and is similar in functionality to the controller components you might find inside monolithic "controller+disk" configurations like the IBM DS8300, EMC Symmetrix, or HDS TagmaStore USP. SVC can map a virtual disk to a physical disk one-for-one in "image mode", as HDS does, or can map virtual disks across physical managed disks, using a similar mapping table, to provide advantages like performance improvement through striping. You can take any virtual disk out of the SVC system simply by migrating it back to "image mode" and disconnecting the LUN from management. Again, no vendor lock-in.
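To make the mapping-table idea concrete, here is a toy "C" sketch of extent-based virtualization. It is my illustration of the general concept, not SVC's internal data structures; striping a virtual disk's extents round-robin across managed disks is what gives the performance benefit mentioned above, and "image mode" is the degenerate one-for-one case:

    #include <stdio.h>

    /* Toy extent map: each extent of a virtual disk is placed
       round-robin across managed disks. Illustration only. */
    int main(void)
    {
        int mdisks = 4;  /* managed disks in the group (assumed) */

        for (int vextent = 0; vextent < 8; vextent++) {
            int mdisk  = vextent % mdisks;  /* striped placement   */
            int offset = vextent / mdisks;  /* extent within mdisk */
            printf("vdisk extent %d -> mdisk %d, extent %d\n",
                   vextent, mdisk, offset);
        }
        return 0;
    }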
The HDS USP and NSC can run as regular disk systems without virtualization, or the virtualization can be enabled to allow external disks from other vendors. HDS usually counts all USP and NSC systems sold, but never mentions what percentage of these have external disks attached in virtualization mode. Either they don't track this, or they are too embarrassed to publish the number. (My guess: a single-digit percentage.)
Few people remember that IBM also introduced virtualization in both controller+disk and SAN switch form factors. The controller+disk version was called the "SAN Integration Server", but people didn't like the "vendor lock-in" of having to buy the internal disk from IBM. They preferred an all-external-disk approach, with plenty of vendor choices. This is perhaps why Hitachi now offers a disk-less version of the NSC 55, in an attempt to be more like IBM's SVC.
IBM had also introduced the SVC as a blade for the Cisco MDS 9000 switch. Our clients didn't want to upgrade their SAN switch networking gear just to get the benefits of disk virtualization. Perhaps this is the same reason EMC has done so poorly with its "Invista" offering.
So, bottom line: storage virtualization can be, and has been, delivered in operating system software, in the server's host bus adapter, inside SAN switches, and in storage controllers. It can be delivered anywhere in the path between application and physical media. Today, the two major vendors that provide disk virtualization "in the storage controller" are IBM and HDS, and the three major vendors that provide tape virtualization "in the storage controller" are IBM, Sun/STK, and EMC. All of these involve a mapping of logical to physical resources. Hitachi uses a one-for-one mapping, whereas IBM offers more sophisticated mappings as well.
You may not be the right person to ask, but I am asking everyone, so: "How do you see hybrid disk drives?"
(For the record, I am not immediately related to Robert. At one point, "Pearson" was the 12th most common surname in the USA, but now it doesn't even make the Top 100.)
Robert, I would like to encourage you and everyone else to ask questions. Don't worry if I am the wrong person to ask, as I probably know the right person within IBM. Some people have called me the "Kevin Bacon" of Storage, as I am often less than six degrees away from the right person, having worked in IBM Storage for over 20 years.
For those not familiar with hybrid drives, there is a good write-up in Wikipedia.
Unfortunately, most of the people I would consult on this question, such as those from Market Intelligence or Research, are on vacation for the holidays, so, Robert, I will have to rely on my trusted 78-card Tarot deck and answer you with a five-card throw.
Your first card, Robert, is the Hermit. This card represents "introspection". The best I/O is no I/O, which means that if applications can keep the information they need inside server memory, you can avoid the bus bandwidth limitations of going to external storage devices. External storage makes sense when data is shared between servers, or when a single server is limited to a set amount of internal memory; 32-bit Windows, for example, has an architectural limit of 4GB of addressable memory. So, consider maxing out the memory in your server first (IBM would be glad to sell you more internal memory!!!), then consider outside solid-state or hybrid devices.
Your second card, Robert, is the Four of Cups, representing "apathy". On the card, you see three cups together, with a fourth cup being delivered from a cloud. This reminds me that we have three storage tiers already (memory, disk, tape), and introducing a fourth tier into the mix may not garner much excitement. For the mainframe, IBM introduced a solid-state device, called the Coupling Facility, which can be accessed from multiple System z servers. It is used heavily by DFSMS and DB2 to hold shared information. However, given some customers' apathy towards Information Lifecycle Management, which includes "tiered storage", introducing yet another tier that forces people to decide what data goes where may be another challenge.
Your third card, Robert, is the Chariot, which represents "Speed, Determination, and Will". In some cases, solid-state disks are faster for reading, but can be slower for writing. In the case of a hybrid drive, where the memory acts as a front-end cache, read hits would be faster, but read misses might be slower. While stopping the drives during inactivity will reduce power consumption, spinning the disk up and down may incur additional performance penalties. At the time of this post, the fastest disk system remains the IBM SAN Volume Controller, based on SPC-1 and SPC-2 benchmark results that exceed those published for other devices.
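To make the read-hit/read-miss tradeoff concrete, here is a back-of-the-envelope model. The latency numbers are assumptions for illustration, not measurements of any product:

```python
# Average read latency for a hybrid drive whose flash acts as a
# front-end cache. All latency figures are assumed, not measured.
FLASH_READ_MS = 0.1   # assumed flash-cache read latency
DISK_READ_MS = 8.0    # assumed spinning-disk read latency
LOOKUP_MS = 0.05      # assumed cost of checking the cache on a miss

def avg_read_latency_ms(hit_ratio):
    miss_ratio = 1.0 - hit_ratio
    return (hit_ratio * FLASH_READ_MS
            + miss_ratio * (LOOKUP_MS + DISK_READ_MS))

for h in (0.9, 0.5, 0.1):
    print(f"hit ratio {h:.0%}: {avg_read_latency_ms(h):.2f} ms average read")
```

The point the model makes: the benefit hinges entirely on the hit ratio, and every miss pays the cache-lookup cost on top of the full disk read.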
Your fourth card, Robert, is the Eight of Pentacles, which represents "Diligence, Hard Work". The pentacles are coins with five-pointed stars on them, which often represent money. Our research team has projected that spinning disk will continue to be a viable and profitable storage medium for at least another eight years.
Your fifth and last card, Robert, is the World, which normally represents "Accomplishment", but since it is turned upside down, the meaning is reversed to "Limitation". Some hybrid disks, and some types of solid-state memory in general, have limits on the number of write cycles they can handle. Those unhappy with the frequency and slowness of rebuilds on SATA disk may find similar problems with hybrid drives. For that reason, businesses may not trust hybrid drives for their busiest, mission-critical applications, but might certainly use them for archive data with lower write-cycle requirements.
The tarot cards are never wrong, but certainly interpretations of the cards can be.