Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Tony Pearson is a Master Inventor and Senior Software Engineer for the IBM Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
I am in Toronto, Canada. It is quite cold and rainy here, worse than last week in Seoul, Korea. This looks like a slow news week, so slow that the only news here in Canada is the possibility of a new 5-dollar coin. I thought I would make this week's theme about enterprise applications.
IBM doesn't make these applications anymore; we have decided to focus on our core strength, being the best IT platform to run other people's applications. This means being the best IT systems, software and services company. However, many of the companies that make enterprise applications both cooperate and compete with parts of IBM, what we call "coopetition".
Let's take a look at some acronyms in this space:
"Enterprise Resource Planning" represents all the basic applications that business need to run theirbusiness, including: finance, accounting, human resources, and manufacturing. The focus here is to streamline operations and make the workforce more productive. Before IBM, I ran my ownsoftware development company, Pearson Kurath Systems, and we developed ERP applications for clients oneby one, customized to their industry requirements.
"Customer Relationship Management" or sometimes "Client Relationship Management" help companies identifyand retain their customer base. Focus here is to increase customer satisfaction and loyalty.
"Supply Chain Management" help track supply and just-in-time inventory demand, sharing the information withkey suppliers and distributors. The focus is to manage inventories down to nothing, and improve speed to get products out to market.
"Business to Business" refer to procurement, purchase orders, and collecting payments over the internet.One of my pet peeves are acronyms that use "2" to mean "to" and "4" to mean "for".
"Human Capital Management" deals with managing costs of Human Resources (HR) and coordinating servicesfrom outside organizations.
"Knowledge Management" refers to sharing and collaborating information. This is not just email and instant messaging, but also online calendaring, experience repositories, client case studies, and anecdotes.
This week I will cover applications that address these, and how they relate to storage.
IBM announced the industry's first corporate-led initiative to enable clients to earn energy efficiency certificates for reducing the energy needed to run their data centers. For the first time, this provides a way for businesses to attain a certified measurement of their energy use reduction, a key, emerging business metric. The certificates can be traded for cash on the growing energy efficiency certificate market or otherwise retained to demonstrate reductions in energy use and associated CO2 emissions. The Efficiency Certificates initiative engages Neuwing Energy Ventures, a leading verifier of energy efficiency projects and marketer of energy efficiency certificates.
How it works:
The Neuwing Energy assessments are a two-part evaluation: 1) an initial measurement of the energy draw from the data center or IT equipment identified for consolidation, based on industry-accepted energy estimates for the servers in use and the power and cooling profiles of the data center, and 2) a second review of the energy draw after steps designed to reduce energy consumption have been taken.
Neuwing Energy will issue customers an Efficiency Certificate for the total megawatt-hours of energy no longer needed to power and cool their data center or operate IT equipment. Neuwing Energy will keep a portion of each customer's earned certificates, or charge a per-MWh-saved fee, in exchange for the assessment.
Customers can trade earned Efficiency Certificates on the energy efficiency certificate market or they can retain their certificates, using them to demonstrate reductions in energy use and associated CO2 emissions.
IBM intends to make the Efficiency Certificates program available across its entire line of server and storage offerings.
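To put rough numbers on the program described above, here is a minimal sketch of the certificate arithmetic in Python. Every figure below (baseline draw, verifier share, certificate price) is invented for illustration; the actual Neuwing fee structure and market prices are set per engagement.

```python
# Hypothetical illustration of the two-part assessment arithmetic.
# All numbers are made up; actual fees and certificate prices vary.

baseline_mwh = 4200.0   # part 1: measured annual draw before the project
followup_mwh = 3150.0   # part 2: measured annual draw after the project

saved_mwh = baseline_mwh - followup_mwh            # MWh no longer needed
verifier_share = 0.10                              # assumed per-MWh fee
certificates_mwh = saved_mwh * (1 - verifier_share)

price_per_mwh = 12.50   # assumed market price for efficiency certificates
print(f"Energy saved:          {saved_mwh:,.0f} MWh/year")
print(f"Tradable certificates: {certificates_mwh:,.0f} MWh")
print(f"Potential proceeds:    ${certificates_mwh * price_per_mwh:,.2f}")
```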
In this case, it is not chess pieces, but FUD being slung around like mud between vendors. EMC blogger Chuck Hollis' post [Products vs. Features] correctly points out that IBM has invented nearly everything useful in IT, and sadly a few things we wish we hadn't. Gene Amdahl, who left IBM to start his own company, is credited with coining the phrase describing IBM's innovative sales techniques. Wikipedia has a nice write-up on the history of [Fear, Uncertainty and Doubt (FUD)].
Nowadays, when you hear "FUD" most storage administrators immediately think of EMC, who have taken this method to a new level of art form. Take for example two EMC entries from fellow blogger BarryB, on his Storage Anarchist blog: [Not Dead Yet] and [Pushing Daisies]. The first is a reference to a funny scene from a Monty Python movie, and the second refers to a terrible new television program called "Pushing Daisies". (In this show, the main character can bring a dead person back to life for sixty seconds, just long enough to ask a few questions on behalf of his detective friend. He must touch the person again within 60 seconds, or someone else randomly dies instead. I am not a fan of this concept, and found it a bit morbid and creepy. But I digress.)
It is true I was on vacation the past two weeks, but this was group travel I booked over six months ago before we had the exact dates lined up for our various announcements, and not a last-minute celebration of my recent new job assignment. I got all my assignments for this announcement turned in before leaving for my trip. I never thought of checking with fellow IBM blogger BarryW to make sure that we don't have overlapping vacation schedules, leaving the "blogosphere" unmanned, so to speak, but it is not a bad idea. Fortunately, our IBM PR team was able to make their rebuttal through other means. You can read the recap on Techworld [Marketing Wars by Proxy].
Several astute readers on my blog, however, requested that I add my two cents. Let's take a look at some of BarryB's comments:
...most DS8300's are to this day most frequently bundled as "free" storage with IBM mainframe and server sales.
We just shipped our 15,000th box, so for this absurd statement to be true, more than half would have to be given away as part of a server-and-storage deal. Actually, about a third of our DS8000 sales are sold with servers in the same bundle, and while we do provide discounts from the official list price, that is not the same as "free". The other two-thirds are sold into accounts to be used with the existing servers already deployed. So BarryB, your math doesn't work out. (Perhaps you've been taking Hitachi math lessons???)
It is interesting, however, that when we do a 4-year TCO comparison between a normally-discounted DS8000 and free EMC DMX4 hardware, IBM still has the lower cost, given that most of the price-gouging from EMC happens after the initial sale, through software features, annual PowerPath renewals and MES upgrades. If you are an EMC customer planning to add more capacity to your DMX, ask EMC to charge you no more than what you originally paid on a dollar-per-GB basis for the initial capacity. That's only fair, right?
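To illustrate the reasoning, here is a minimal sketch of that kind of 4-year TCO comparison. Every dollar figure below is hypothetical, not from any actual IBM or EMC price list; the point is only that "free" hardware with heavy ongoing charges can cost more over four years than discounted hardware with modest ones.

```python
# Hypothetical 4-year TCO: discounted hardware vs. "free" hardware with
# higher ongoing charges. All figures are invented for illustration only.

def tco(hardware, annual_software, annual_maintenance, upgrades, years=4):
    """Total cost of ownership over the given number of years."""
    return hardware + years * (annual_software + annual_maintenance) + upgrades

discounted_box = tco(hardware=500_000, annual_software=40_000,
                     annual_maintenance=30_000, upgrades=100_000)
free_box = tco(hardware=0, annual_software=150_000,
               annual_maintenance=90_000, upgrades=400_000)

print(f"Discounted hardware, 4-year TCO: ${discounted_box:,}")
print(f"'Free' hardware, 4-year TCO:     ${free_box:,}")
```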
...No thin provisioning, or even a commitment to thin provisioning. Just crickets. (Celerra support since Jan 2006...
EMC DMX does not have thin provisioning available today either, so BarryB brings up Celerra, their NAS box? The IBM System Storage N series NAS box also has thin provisioning, so if you want thin provisioning you can buy a NAS box from either EMC or IBM. Thin provisioning makes sense with NAS protocols, as there are actual commands to "delete a file" that can then free up the related blocks in a thin-provisioned environment. The only way to do this with block-oriented protocols is to have the OS notify the storage device that blocks can be freed up. As it turns out, IBM's z/OS has such support, which we developed specifically for the thin provisioning in our IBM RAMAC Virtual Array disk systems back in the 1990s. For block-oriented devices on most other operating systems, thin provisioning may not be all that it is cracked up to be.
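A minimal sketch of why that "free the block" notification matters, using a toy thin-provisioned volume rather than any product's actual implementation:

```python
# Toy thin-provisioned volume: physical blocks are allocated only on first
# write, and reclaimed only if the host tells the array a block is no longer
# in use. Without that notification, deleted-file blocks stay allocated
# forever. Illustrative only, not any vendor's implementation.

class ThinVolume:
    def __init__(self, virtual_blocks):
        self.virtual_blocks = virtual_blocks
        self.allocated = {}           # virtual block -> data

    def write(self, block, data):
        self.allocated[block] = data  # allocate on first write

    def free(self, block):
        # Only happens if the OS/file system issues this notification
        self.allocated.pop(block, None)

    @property
    def physical_used(self):
        return len(self.allocated)

vol = ThinVolume(virtual_blocks=1_000_000)
for b in range(100):
    vol.write(b, b"x")
vol.free(50)                          # host signals block 50 was deleted
print(vol.physical_used)              # 99, not 100
```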
No SATA drives (only DMX-4 supports native SATA-II drives, since Aug’07)
A few people are confused about this. The IBM DS8000 has supported FATA drives for quite some time now; they have the same slower speeds and higher capacities as SATA, but are technically NOT the same as SATA. FATA drives are designed to provide better protection against vibrational shock, to improve drive reliability. IBM felt that if the data was important enough to put on a high-end system, it should get better-than-SATA treatment. If you really want SATA, try our IBM System Storage N series, DS4000 or DS3000 models.
No RAID 6 (DMX-3 has supported multi-dimensional RAID since Q1’07, DMX-4 since Aug'07, ...
The IBM N series supports RAID6, but we call it RAID-DP, and that has confused some people. It is the same thing: DP stands for Dual Parity, protecting against a double-disk failure. We also just announced RAID6 on our DS4000 series, by the way.
No 4Gb back-end (USP-V since May '07, DMX-4 since Aug’07)
I found this one odd, since BarryB himself explained why a 4Gbps back-end made no difference to DMX4 performance in an earlier post, [DMX-4 and Oh So Much More], which I will put into a different color so you can tell it is from a different post:
You may have noticed that there weren't any specific performance claims attributed to the new 4Gb FC back-end. This wasn't an oversight, it is in fact intentional. The reality is that when it comes to massive-cache storage architectures, there really isn't that much of a difference between 2Gb/s transfer speeds and 4Gb/s. Transmit times are really only a tiny portion of I/O overhead, and just don't make that much difference when a massively-cached system is pre-fetching reads, buffering/delaying writes and reordering I/O requests to minimize seek times. Not that 4Gb/s won't help some applications, but most people just won't see any noticeable difference.
In this case, BarryB is right. The IBM DS8000's 2Gbps back-end is not a performance bottleneck. The DS8000 with a 2Gbps back-end is faster than a DMX4 with a 4Gbps back-end for business application workloads. EMC doesn't publish SPC benchmarks that could refute this, so you will just have to take our word for it.
Still only 1024 maximum disk drives (DMX-3 & 4 support up to 2400 drives, USP-V supports 1152)
I would be curious to see how many customers have more than 1024 drives on any high-end disk array. As we learned back in [Day 2 Storage Symposium], the average DS8100 has 17.4 TB, and the average DS8300 has 41.5 TB of capacity. Using 500GB drives, that 41.5 TB is only 83 spindles. Even with 73GB drives, that's 568 spindles. Plenty of room for growth, so I am not convinced that higher theoretical upper architectural limits are worth discussing here.
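For anyone who wants to check the spindle arithmetic, here it is, matching the rounding used above (and taking 1 TB as 1000 GB for simplicity):

```python
# The spindle arithmetic behind the paragraph above, using the average
# DS8300 capacity of 41.5 TB.

avg_capacity_gb = 41.5 * 1000
for drive_gb in (500, 73):
    spindles = round(avg_capacity_gb / drive_gb)
    print(f"{drive_gb:>3} GB drives: ~{spindles} spindles")
# 500 GB drives: ~83 spindles
#  73 GB drives: ~568 spindles
```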
Still only two HARD LPARs (partitions) ..., and even IBM’s mid-tier products support more than 2 storage partitions (in this same announcement)
IBM's two LPARs are TWICE what the EMC DMX offers. I don't even know why anyone from EMC would bring this up. While EMC is enjoying its success with VMware, it lacks the experience to carry this over to its storage lines. Until EMC offers MORE THAN TWO of any kind of partition on its high-end offerings, there just is no credibility here. As for the "storage partitions" on our DS4000 line, that is an unfortunate misunderstanding of the press release. On the DS4000, the term "storage partition" really means "LUN masking", dividing up only which disks can be accessed by which hosts, and not dividing up any processor or cache capacity. So this is not the same as any LPAR concept on any other system. For example, a DS4000 with 64 partitions can be attached to 64 hosts, or 64 host-clusters like a Windows MSCS environment or AIX HACMP.
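A toy illustration of the distinction: LUN masking is just an access table of which hosts may see which LUNs, with no processor or cache being partitioned. The host and LUN names below are hypothetical.

```python
# LUN masking as an access table: which hosts may see which LUNs.
# Nothing about processor or cache is divided up. Hypothetical names.

storage_partitions = {
    "host-finance":   ["lun0", "lun1"],
    "host-webfarm":   ["lun2"],
    "aix-hacmp-pair": ["lun3", "lun4"],  # a host-cluster counts as one partition
}

def visible_luns(host):
    return storage_partitions.get(host, [])

print(visible_luns("host-webfarm"))   # ['lun2']
print(visible_luns("host-rogue"))     # [] -- masked from everything else
```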
No native Ethernet replication or iSCSI support (Symmetrix has had since 2002)
Again, I found this one odd. On another EMC post, [Vigorous Debates], Chad Sakac mentions that only 2% of Symmetrix are sold with IP ports; I am not sure if this is for Ethernet replication, iSCSI attachment, or both (again, I will use a different color):
On the Symm business (a huge part of EMC’s business – the IP ports are included on 2% of deals. That’s a fact.
Just because an engineer can put a feature or function on a box doesn't mean there is business sense in doing so. I would hate for IBM to invest millions of dollars in native iSCSI support, only to have 2% of our DS8000 boxes sold with that feature. Customers who already have DS8000 deployed on FC SANs can easily add iSCSI support either through their SAN switches, or by fronting the DS8000 with an N series gateway. Most customers looking for native iSCSI are the smaller no-SAN-deployed SMB customers, and for them, we have both the DS3300 and the various N series models to choose from.
Well, that's my two cents. The DS8000 series remains a strategic part of the IBM System Storage offering matrix, with continued investment in development, as well as ongoing research that we can leverage throughout the IBM company. I would like to read your thoughts on this, so post a comment below.
Well, it is Halloween back in the USA. I am in Seoul, Korea this week, so it is already Thursday, November 1st here, but I thought I would comment on Colin Barker's piece in ZDNet titled [SNW offers the frights]. The article starts out with an oversimplification:
The storage industry is enjoying a boom currently thanks to the requirement for IT managers to keep everything. With the possibility of being sued any time by any company for no good reason at all, everyone is keeping everything, or at least all their data. Result? Loads and loads more kit being bought to the benefit of EMC, IBM, HP and every other supplier with any kind of storage product.
While it's true that IBM System Storage grew yet again in 3Q07, exceeding our own internal business model, I would not call this an overall "boom" for the storage industry. While companies are growing in "TB capacity" by 30-50%, this translates to only single-digit growth in "dollar revenues". This is because we continue to make storage at a declining dollar-per-GB.
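A quick worked example of that math: revenue growth is capacity growth compounded with the price decline. The 25% annual dollar-per-GB decline used below is an assumed figure for illustration, not an IBM statistic.

```python
# Why 30-50% capacity growth can mean only single-digit revenue growth:
# revenue compounds the capacity growth with the price-per-GB decline.
# The 25% annual decline is an assumed figure.

price_decline = 0.25
for capacity_growth in (0.30, 0.40, 0.50):
    revenue_growth = (1 + capacity_growth) * (1 - price_decline) - 1
    print(f"capacity +{capacity_growth:.0%} -> revenue {revenue_growth:+.1%}")
# capacity +30% -> revenue -2.5%
# capacity +40% -> revenue +5.0%
# capacity +50% -> revenue +12.5%
```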
One should not confuse what people do with what people are required to do. I am not a lawyer, but most regulations pertaining to storage of information state that certain records need to be kept for a set amount of time, either a fixed period of years, or based on some event. For example, broker/dealers need to keep emails of their clients for six years after the client closes their brokerage account. After those six years, the records can be destroyed.
Unfortunately, many IT managers look at the laws and come up with the simplest solution: keep everything forever. While this might meet the regulators' audit requirements, it exposes their employer to subpoenas for data that should have been deleted, and may not be very cost-effective.
The alternative for many IT managers involves leaving their comfort zone to talk to their legal counsel and the lines of business, trying to classify their data, determine a set of policies, and enact some form of enforcement. This is perhaps the "scary" part of the storage of information: it has grown outside the walls of IT, forcing IT managers to interact with the rest of the business to get their jobs done.
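A toy sketch of what such a policy might look like in practice, using the six-year broker/dealer rule mentioned above. The record class name and dates are invented, and real retention engines are far more involved than this:

```python
# Toy event-based retention: a record becomes eligible for destruction
# N years after a triggering event (e.g. account closure). The six-year
# broker/dealer rule is from the post; everything else is invented.

from datetime import date

RETENTION_YEARS = {"brokerage_email": 6}   # policy table, per record class

def destroy_after(record_class, event_date):
    years = RETENTION_YEARS[record_class]
    # Ignores the Feb-29 edge case; fine for a toy example
    return event_date.replace(year=event_date.year + years)

closed = date(2007, 10, 31)                       # client closes the account
print(destroy_after("brokerage_email", closed))   # 2013-10-31
```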
Compliance is the only game in town and that is most certainly where the money is.
Anytime an analyst tells you that something is the "only game in town", they are usually wrong. In this case, IBM has had great success in other areas that are not compliance-related. For example, digital video surveillance (DVS) is being used not only to help reduce shoplifting, but also to help identify patterns in customers browsing the aisles and window-shopping. Identifying what people are interested in has proven effective in moving product displays around to better attract buyers and motivate them to make purchases.
Take, the keynote from Andy Monshaw, general manager of IBM storage, and thus a man who is very much in a position to know. He spent his allotted 30 minutes, or whatever, listing all the security, compliance, threats and related issues that are currently making the jobs of most IT manager a cause for concern. Now, there is an argument that suggests that it is absolutely the right thing to do to frighten IT managers into sorting out their issues. They need shaking up say some. Especially analysts.
I helped develop the content of Andy's SNW presentation, working with his speech writers and graphic artists to make a consistent and coherent message fit in the 25 minutes he was given. The challenge with SNW is that we needed to make this presentation applicable across the entire storage industry, without sounding like an infomercial for IBM offerings.
Some people have compared storage to the "insurance industry", claiming that backups, remote disk mirroring, continuous data protection and other storage-related features are costs comparable to the insurance you pay to protect your home, business, and other assets. You hope you never have to use it, and complain about how much it costs, but when bad things happen, you hope it is the best money can buy.
Unlike Y2K, which was a one-time event that had a specific date of occurrence, the threats and risks mentioned by Andy in his presentation may never happen at all, or in other cases, may happen more than once, without knowing when or where. For the sake of your shareholders, and your stakeholders, it is best to be prepared for these possibilities.
The counter argument says that IT companies just smell the money.
Is this a counter-argument? Can IBM not both help customers mitigate their risks and, at the same time, turn a profit? Trust me, you do not want to do business with any storage vendor that is not interested in making a profit. The better ones have incorporated addressing clients' most pressing challenges into their strategy. I gave a quick summary of IBM's strategy last August in [Day 1 Storage Symposium].
Helping our clients mitigate risks is just one of IBM's core strengths. If you want to learn more, contact your local IBM Business Partner or storage rep.
Well, it's Tuesday, which means it's time to look at recent announcements. While I was on vacation last week, IBM made a lot of storage announcements on October 23. Josh Krischer gives his summary on WikiBon [October 2007 Review]. Austin Modine of The Register went so far as to say that [IBM goes crazy with storage system updates].
IBM System Storage DS8000 series
This is "Release 3" software/microcode upgrades on our existing "Turbo" hardware.
IBM FlashCopy SE -- Here "SE" stands for Space Efficient. Rather than allocating a full 100% of the space for the FlashCopy destination, you can set aside just a fraction, and this will hold all the changed blocks, similar to what IBM already offers on the DS4000 series. (See the copy-on-write sketch after this list.)
Dynamic Volume Expansion -- In the past, if you needed more space for a LUN, you had to carve out a new one elsewhere, and then copy the data over from the old to the new, leaving the old LUN around to be re-used or left stranded. With this enhancement, you can just upgrade the LUN in place, making it bigger as needed, similar to what IBM already offers on the DS4000 series and SAN Volume Controller. This applies to CKD volumes for the System z mainframe users out there as well.
Storage Pool Striping -- striping volumes across RAID ranks to eliminate or reduce hot-spots, and provide better load balancing. Many used SAN Volume Controller in front of the DS8000 to do this, but now you can do it natively in the DS8000 itself.
z/OS Global Mirror Multiple Reader -- for System z customers, "z/OS Global Mirror" is the new name for XRC. This enhancement improves the throughput of sending updates to the remote disaster recovery location.
DS Storage Manager enhancements -- the element manager software has been enhanced, and is pre-installed on the new IBM System Storage Productivity Center, which I will talk about below.
Intermix of DS8000 machine types -- this is especially useful to allow new frames to have co-terminating warranties with the base units. In other words, as you expand your system, you can ensure that the entire chunk of iron runs out of warranty all at the same time, to simplify your decision making on whether to upgrade or contract for extended service.
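As promised above, here is a minimal sketch of the copy-on-write idea behind space-efficient point-in-time copies. This is the general technique, not the DS8000 microcode: the target holds only blocks that changed on the source after the copy was taken.

```python
# Minimal copy-on-write sketch of a space-efficient point-in-time copy.
# The repository holds only original copies of blocks changed since the
# copy was taken. Conceptual only, not the DS8000 implementation.

class SpaceEfficientCopy:
    def __init__(self, source):
        self.source = source          # live volume: block -> data
        self.repository = {}          # saved original blocks only

    def write_source(self, block, data):
        if block not in self.repository:          # first change since copy
            self.repository[block] = self.source.get(block)
        self.source[block] = data

    def read_target(self, block):
        # Point-in-time view: repository first, then unchanged source
        if block in self.repository:
            return self.repository[block]
        return self.source.get(block)

vol = {0: "A", 1: "B"}
snap = SpaceEfficientCopy(vol)
snap.write_source(0, "A2")            # host updates block 0
print(snap.read_target(0), vol[0])    # A A2 -- the copy still sees "A"
print(len(snap.repository))           # 1 -- only changed blocks consume space
```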
One of the biggest complaints about IBM TotalStorage Productivity Center is that it is software that needs to be installed on its own server, and that this installation process can take a day or two. Why wait? Now you can have a hardware console that has the DS8000 Storage Manager software, SVC Admin Console software, and IBM TotalStorage Productivity Center "Basic Edition" pre-installed. Here are the key features:
Pre-installed and tested console
DS8000 R3 GUI integration
Cohabitation of SVC 4.2.1 GUI and CIMOM
Automated device discovery
Asset and capacity reporting, including tape library support
Our "Release 9" applies across the board, from N3000 to N5000 to N7000 series models, includingnew host bus adapters, and the new Data OnTAP 7.2.4 release level.
The Virtual File Manager (VFM) was announced as one of our latest [Storage Virtualization Solutions]. VFM provides a global namespace that aggregates the file systems from Linux, UNIX, and Windows file servers, as well as N series storage, into a consolidated environment.
IBM's virtual tape library (VTL) for the distributed systems platform has been enhanced to provide:
Up to 12TB of disk cache, using 750GB SATA disks.
F05 Tape Frames installed as TS7520 base units through a 32-port Fibre Channel switch
Support for LTO generation 4 tape drives, both as virtual tape drives and as physical tape drives within IBM automated tape libraries attached to the TS7520. This allows you to use the encryption capabilities of LTO4.
The DS3000 series now supports SATA disk, and can be attached to AIX and Linux on System p servers. This applies to the DS3200, DS3300 and DS3400 models. See the [DS3000 Announcement Letter] for more details.
Well, I'm back from my relaxing vacation in New Zealand and Fiji. Yes, we had a few earthquakes near Milford Sound, several avalanches that blocked some roads, a few power outages, and we walked in a rain storm for three hours after our bus broke down. But overall, it was fun.
This week I am in Seoul, South Korea for various business meetings. The best part is that I am almost acclimated to the time zone already; since New Zealand was GMT+11 and Fiji was GMT+12, I am hoping that I will adjust fast this week.
South Korea is part of our "BRICK" countries--those emerging markets made up of Brazil, Russia, India, China and Korea that represent IBM's spearhead into the SMB marketplace. [Read More]
Two European scientists, Albert Fert (France) and Peter Grunberg (Germany) have won the 2007 Nobel Prize for physics for their research into Giant Magnetoresistance, or GMR. GMR read/write heads are used in IBM disk systems.
New high-density dual-coated particulate magnetic tape: Developed by Fuji Photo Film Co., Ltd., in Japan in collaboration with IBM Almaden researchers, this next-generation version of its NANOCUBIC™ tape uses a new barium-ferrite magnetic media that enables high-density data recording without using expensive metal sputtering or evaporation coating methods.
More sensitive read-write head: For the first time, magnetic tape technology employs the sensitive giant-magnetoresistive (GMR) head materials and structures used to sense very small magnetic fields in hard disk drives.
GMR servo reader: New GMR servo-reading elements, software and fast-and-precise positioning devices provide an active feedback system with unprecedented 0.35-micron accuracy in monitoring and positioning the read-write head over the 1.5-micron-wide residual data track.
Improved tape-handling features: Flangeless, grooved rollers permit smoother high-speed passage of the tape, which also enhances the ability of the head to write and read high-density data.
Innovative signal processing algorithms for the read data channel: An advanced read channel uses new "noise-predictive, maximum-likelihood" (NPML) software developed at IBM's Zurich Research Laboratory to process the captured data faster and more accurately than would have been possible with existing methods.
IBM often leverages the research done in one part of its business over to other parts of its business. In this manner, advances in disk translate into advances in tape, keeping tape a viable medium for at least the next 8-10 years.
This is great news for everyone. I have said before that VMware is perhaps the best product EMC offers, and some EMC bloggers have returned the favor, saying that SVC might just be the best disk system that IBM offers. While IBM and EMC are heavily competitive in other aspects of the IT storage industry, when it comes to delivering what is right for the customer, we can set aside those differences. IBM is the number one reseller of VMware, and it is a great pairing with SAN Volume Controller.
Of course, it is not a free-for-all. VMware has a few restrictions at this time:
The VMware certification is limited to Windows operating systems, specifically those listed on its [Guest OS Guide]. The list looks fairly extensive, so if you are running Windows guests on VMware to SVC today via RPQ, you are probably covered. However, if you are running NetWare or Linux, then the VMware certification does not yet apply.
Host bus adapters
Only QLogic host bus adapters are supported at this time. This is because VMware communicates directly with the host bus adapters as part of its "I/O virtualization" capabilities, and needs to work with and test against all the other HBA manufacturers.
No RDM support
Raw Device Mapping (RDM) mode is not yet supported. This will probably affect only a small percentage of customers, as I don't know of any major applications that require it.
Of course, most of these issues can probably be addressed with additional testing, or minor software changes, and IBM will work with VMware to prioritize what added testing or software changes are needed to expand this support.
I am Tony Pearson, storage consultant at the IBM Executive Briefing Center, located in Tucson, Arizona. I have degrees in Computer Engineering and Electrical Engineering from the University of Arizona. Over the past 20 years, I have worked in a variety of storage roles, including development projects, product and portfolio management, testing, field support, marketing, and now am doing storage consulting.
There are a lot of things to discuss related to storage, and I am never short of opinions. As such, the standard IBM disclaimer applies: “The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.”
I have invited other IBMers to post their opinions, and when they do, their opinions may not necessarily match mine either.
This is an open two-way conversation between IBM, Business Partners, Independent Software Vendors, and prospective and existing clients. I encourage everyone to post comments about our products, services, and marketing efforts.
I welcome HDS into the "Super High-End" club. Those who follow my blog might remember that I suggested that analysts like IDC that use "Entry Level", "Midrange" and "Enterprise" as categories may need a new category: Super High End.
I was not surprised to see EMC, who now drops further down in perception, dispute HDS's recent SPC-1 benchmarks. Fellow blogger BarryB of EMC posted [IBM vs. Hitachi] on his Storage Anarchist blog, pointing out that IBM's SAN Volume Controller (SVC) is still much faster, and less expensive, than the USP-V.
So, just in case you haven't seen all the press releases, here is a quick recap on the results: IBM SVC 4.2 is still in first place, then HDS USP-V, then IBM System Storage DS8300. Just for comparison, I include our IBM System Storage DS4800 midrange disk results, so you can appreciate the difference between midrange and high-end. There are other products from other vendors; I just point out a few from IBM and HDS here in this graph.
******************************************************************** 272,505 IOPS - IBM SVC 4.2
************************************************** 200,245 IOPS - HDS USP-V
******************************* 123,033 IOPS - IBM DS8300
*********** 45,014 IOPS - IBM DS4800
HDS tried to come up with a phrase, "Enterprise Storage System", for a comparison that would leave the SVC 4.2 out. Given that the SVC has five-nines (99.999%) availability, has non-disruptive upgrade and firmware update capability, has more than the two processors typical of midrange products, and can connect to mainframes via z/VM, z/VSE and Linux on System z operating systems, there is no reason to pretend SVC isn't Enterprise-class.
The irony is that EMC now looks very lonely as one of the last remaining major storage vendors not to participate in the standardized benchmarks that help customers make purchase decisions, as mentioned both by IBM's BarryW [I guess that only leaves EMC] and HDS's Claus Mikkelsen [Olympics of Storage].
Earlier this year, EMC's Chuck Hollis opined in [Storage Scorecard] that the EMC DMX and HDS TagmaStore USP were high-end boxes; I would speculate that both of these would fall somewhere between the DS4800 and DS8300 on the graph above. If that is the case, it is impressive that HDS was able to re-engineer the USP-V to be 2-3x faster than its predecessor, the USP.
Not all workloads are the same, and your mileage may vary. While I can't speak for HDS, the folks over at EMC have assured me, in writing, in comments on this blog, that there is nothing preventing their customers from publishing their own performance comparisons between EMC and non-EMC equipment. I would encourage every customer to do this, between IBM and HDS, HDS and EMC, and between IBM and EMC, to help shed even more light on this area. In fact, you can even run your own SPC benchmarks to see how your own environment compares to the ones published.
Of course, performance is just one attribute on which to choose a storage vendor, and to choose specific products, models or features. For more information about the Storage Performance Council and the SPC-1 and SPC-2 benchmarks, see my week-long series on SPC benchmarks, which are listed in reverse chronological order.
Go to the official Storage Performance Council website to read the details of the SPC-1 results.
Today, 13.5% of EMC's sales force is female, the company says, compared with 40% at International Business Machines Corp. and 29% at CA Inc., a big software vendor, those companies say. According to the 2000 U.S. census, about 25% of high-tech employees nationally were women.
IBM recognizes that diversity provides unique advantages in dealing with a global marketplace. Not only are women well represented on our IT sales force, they are also well represented on our board of directors, our Worldwide Management Committee, and our executive team overall, as well as in technical positions such as IBM Fellows, Distinguished Engineers, and members of the IBM Academy of Technology. Working Mother magazine has rated IBM one of the top 10 "Best Companies" for women to work for in each of the 18 years that it has published this list.
In 2006, 51 camps called EXITE (Exploring Interests in Technology and Engineering) were held worldwide in 33 countries. The hope is to get young girls to pursue college degrees in computer science, math and engineering, so that they can then help fill the shortage of technical resources in IT.
So, if you are a woman discouraged at your current place of employment, and are looking for exciting new opportunities in IT, come check out working for IBM! [Read More]
Over the past year and a half, I have been focused on explaining WHAT IBM System Storage was, and WHY IBM should be considered when making a storage purchase decision. Let's recap some of IBM's accomplishments during this time:
Today, October 1, I switch over to HOW to get it done. In my new job role, I will be leading a series of projects and workshops on how to make your data center more green, how to get more value from the information you have, how to better protect your information from unauthorized access or unethical tampering, how to develop and deploy a site-wide business continuity plan, and how to centralize your management using open industry standards.
I will still be in Tucson, but am moving from building 9032 over to 9070 to be closer to the rest of my team.
I was in Raleigh this week, in business meetings, and had dinner last night at a Japanese teppanyaki restaurant. The man next to me was dining alone, and said he worked for Cisco, a big company: "Have you heard of it?" he asked. Of course, I told him, I work for IBM, and IBM and Cisco have a strong working relationship, using each other's products in both directions. He said he understood why they would use IBM, but why would IBM buy anything from them? And then he said, "Oh yes, your cafeteria".
At this point we realized he was talking about SYSCO, the food company, not Cisco, the storage networking technology partner. We both had a good laugh.
Which brings me to think of other "mis-heard" or "mis-interpreted" items that might have caught people off guard because they sounded similar.
zFS versus ZFS
Some things are case-sensitive. Lower-case zFS is the hierarchical file system for the z/OS mainframe environment, originally based on the "Episode" file system that IBM acquired from Transarc. z/OS supports two file systems, HFS and zFS. Meanwhile, ZFS is one of the file systems available for Sun Solaris. Apple Mac OS is switching from its own HFS, different from the z/OS version, over to Sun's ZFS.
packs versus PACS
Older mainframers call disk volumes "packs". This started in the days when disks were "removable" and you could pack and unpack them into the drive unit.
PACS, on the other hand, refers to the "Picture Archive and Communication System" application environment used by hospitals and medical facilities to store and share X-ray, Cardiology and Radiology images. Today, modern medical equipment is called a "modality" and connects directly to NAS storage via NFS or CIFS protocols. The images are immediately digitized and sent to disk, then tape, for long-term archive storage. IBM's Grid Medical Archive Solution (GMAS) is designed specifically for this environment.
rack versus RAC
Perhaps my favorite was when someone asked a high-level executive at a conference if their storage product supported Oracle RAC, and the response was that it supported anyone's rack, so long as it met the 19-inch standard. Everyone burst out laughing, and someone probably had to explain to him afterward what was going on.
Oracle RAC refers to Real Application Clusters, allowing multiple Oracle servers to work together as a system. A "rack" is just the powered shelf, typically 19 inches wide, and typically 25U or 42U tall, that allows modular servers, storage or network gear to be placed together in a data center. A "U" is 1.75 inches, roughly the thickness of a "two-by-four" piece of lumber. If you have ever used a 3.5-inch or 5.25-inch floppy diskette, then you already know the 2U and 3U sizes.
I am sure there are many other examples of similar sounding terms and phrases. If you have any to contribute, post a comment below!
Guy Kawasaki is hosting a Web Conference next week on The Art of Evangelism. By this he is referring to promoting products and services, rather than the traditional definition: the preaching or promulgation of the gospel.
A few years ago, I myself had the official title of "Technical Evangelist" for the IBM System Storage product line. I never liked the title, and asked to use something else, but since I was part of a team of "Technical Evangelists," I had to keep it. A lot of companies were using this as a title, I was told, and everyone knew that it was not a religious reference, but a marketing one.
Sometimes, words do not translate well into other countries or cultures. Four years ago, during the week of September 11, 2003, I traveled to Kuwait, Qatar and the UAE on a business trip to present the latest on our storage products. On arrival in Kuwait, I had to fill out my "visa application" to enter the country, and it asked for my "occupation/title", but there were not enough spaces to write "Technical Evangelist", so I just entered "Evangelist".
The two Kuwaitis behind the desk looked it up in their Arabic/English dictionary, discussed it, and weren't sure if they should shoot me, or take me to the back room to videotape my proper beheading. Our official host came over to ask what the delay was, and they showed her the dictionary translation. She asked me, "Why would you put Evangelist as your title?" So I gave her my business card, and told her that my full title of Technical Evangelist did not fit in the space provided.
She explained to the two behind the desk that I had misunderstood the question, and that the actual word intended, "Engineer", had been misspelled. She showed them the agenda of the IBM Technical Conference I was speaking at, and the list of oil and construction companies that were attending. They looked up the new title "Engineer", agreed the translation was suitable for entry, and conceded that these two words, Evangelist and Engineer, used enough similar letters that they could understand how one might be misspelled for the other.
Our limo took a small detour to the middle of the desert so that we could burn and bury the ashes of the remainder of my business cards, before arriving at the hotel. All of my PowerPoint slides that listed my title were changed to "Technical Engineer". The events themselves went very well, as IT people are the same all over the world, and had no problem setting aside religious or political differences in an effort to learn more about technology.
When I got back to the United States, I shared my experience with my fellow team-mates, most of whom never leave the country and would never have thought this might happen. Management agreed to let us change our titles. That was good for me, as I had to order a new box of business cards anyway.
Last year, I became "Manager of Brand Marketing Strategy" for the IBM System Storage product line. Now on business trips I just write "Manager" on the Occupation/Title line. It fits in every form I have ever had to fill out, and translates properly into every language.
Today I spoke at the IBM Think Green Roadshow in Phoenix, Arizona. This is just one stop on a 15-city tour to help make people aware of Green data center issues. Here is the schedule for the remaining cities. Contact your local IBM rep for details.
Victor Ferreira was our moderator and host. He is the site-level executive for the 2000 IBM employees in the Phoenix area, and manages the Public Sector for our Western region.
The first speaker was Dave McCoy, IBM principal in our Data Center services group. He explained IBM's Project Big Green and the Energy Efficiency Initiative, and went into detail on how IBM can act as general contractor to design, plan and build the ideal Green Data Center for you. IBM can also retrofit existing buildings with new technologies like stored cooling, optimized airflow assessments, and modular data center floorspace. Not related to energy, but still important to our environment, is IBM Asset Recovery Services, where IBM can take all those old PC monitors, keyboards and other outdated equipment and refurbish them, or melt them down to recapture useful metals and plastics, disposing of the rest in an environmentally-friendly, non-toxic manner.
I was the second speaker, covering "How to get it done". While Dave covered the issues and technologies available, I explained how to put it all into practice. This includes IT systems assessments, health audits, and thermal profiling. Using server and storage virtualization, you can increase resource utilization and reduce energy waste. I also covered IBM's CoolBlue product line, which includes the IBM PowerExecutive software to monitor your IT environment, and the "Rear Door Heat Exchanger" that uses chilled water to remove as much as 60% of the heat coming out of the back of a server rack, greatly reducing hot-spots on the data center floor and allowing you to run the entire room at warmer, less-expensive temperatures.
On the server side, I covered IBM's System z mainframe and the BladeCenter as examples of how innovative technologies can be used to run more applications with less energy. The new System p570, based on the energy-intelligent POWER6 processor, has twice the performance for the same amount of power as its POWER5 predecessor. On the storage side, I explained how Information Lifecycle Management (ILM), storage virtualization, and the use of a blended disk and tape environment can greatly reduce energy costs.
Reps from our many technology partners Eaton, APC, Schneider Electric, Liebert, and Anixter were there to support this event.
The session ended with a Q&A panel, with Dave McCoy, myself, and Greg Briner from IBM Global Financing. IBM is able to offer creative "project financing" that can often match the actual monthly savings, resulting in net zero cost to your operational budget, with payback periods as short as 2.5 years.
To learn more about IBM's efforts to help clients create "Green" data centers, click Green Data Center.
To get beyond the simple statistics of vendor popularity, we looked at the number and combinations of vendors with which enterprises work. Many were customers of one or two storage providers, but the rest were customers of up to six storage providers. More than one-third were customers of systems vendors only, bypassing storage specialists.
Comparisons between solutions vendors and storage component vendors are not new. One could argue that this can be compared to supermarkets and specialty shops.
Supermarkets offer everything you need to prepare a meal. You can buy your meat, bread, cheese, and extras all with one-stop shopping. In a sense, IBM, HP, Sun and Dell offer this to clients who prefer this approach. Not surprisingly, the two leaders in overall storage hardware, IBM and HP, are also the two best positioned to offer a complete set of software, services, servers and storage.
IBM and HP are also the leaders in tape. While Forrester reports that many large enterprises in North America prefer to buy disk from storage specialists, others have found that customers prefer to buy their tape from solution providers. Recently, Byte and Switch reported that LTO Hits New Milestones, where the LTO consortium (IBM, HP, and Quantum) has collectively shipped over 2 million LTO tape drives, and over 80 million LTO tape cartridges. Perhaps this is because tape is part of an overall backup, archive or space management solution, and customers trust a solution vendor over a storage specialist.
Where possible, IBM brings synergy between its servers and storage. For example, we just announced the IBM BladeCenter Boot Disk System, a 2U-high unit that supports up to 28 blade servers, ideal for applications running under Windows or Linux, and helping to reduce energy consumption for those interested in a "Green" data center.
Some people prefer buying their meat at the slaughterhouse, bread at the French pastry shop, and so on. Storage specialists focus on just storage, leaving the rest of the solution, like servers, to be purchased separately from someone else. Storage vendors like NetApp, EMC, HDS and others offer storage components to customers that like to do their own "system integration", or to those that are large enough to hire their own "systems integrator".
Storage specialists recognize that not everybody is a "specialty shop" shopper. HDS has done well selling its disk through solution vendors like HP and Sun. EMC sells its gear through solution vendor Dell.
Interestingly, I have met clients who prefer to buy the IBM System Storage N series from IBM, because IBM is a solution vendor, and others that prefer to buy comparable NetApp equipment directly from NetApp, because NetApp is a storage component vendor.
I mostly buy my groceries at a supermarket, but have, on occasion, bought something from the local butcher, baker or candlestick maker. And if you are ever in Tucson, you might be able to find Mexican tamales sold by a complete stranger standing outside a Walgreens pharmacy, the ultimate extreme of specialization. You can get a dozen tamales for ten bucks, and in my experience they are usually quite good. Theoretically, if you get sick, or they don't taste right, you have no recourse, and will probably never see that stranger again to complain to. (And no, before I get flamed, I am not implying any major vendor mentioned above is like this tamale vendor.)
Of course, nothing is starkly black and white, and comparisons like this are just to help provide context and perspective. But if you are looking for a complete IT solution that works, from software and servers to storage and financing, come to the vendor you can trust: IBM.
A few weeks ago, my TiVo(R) digital video recorder (DVR) died. All of the digital clocks in my house were flashing 12:00, so I suspect it was a power surge while I was at the office. The only other item to die was the surge protector, and so it did what it was supposed to do: give up its own life to protect the rest of my equipment. Although somehow, it did not protect my TiVo.
I opened a problem ticket with Sony, and they sent me instructions on how to send it to another state to get it repaired. Amusingly, the instructions included "Please make a backup of the drive contents before sending the unit in for repair." Excuse me? How am I supposed to do that, exactly?
My model has only a single 80GB drive, so my friend and I removed the drive and attached it to one of our other systems to see if anything was salvageable. It failed every diagnostic test. There was just not enough to read for it to be usable elsewhere.
This is typical of many home systems. They are not designed for robust usage, high availability, or any form of backup/recovery process. Some of the newer models have two drives in a RAID-1 configuration, but most have many single points of failure.
And certainly, it is not mission-critical data. Life goes on without the last few episodes of Jack Bauer on "24", or the various Food Network shows that I recorded for items I plan to bake some day. For the past few weeks, I have spent more time listening to the radio and reading books. Somehow, even though my television runs fine without my TiVo, watching TV in "real time" just isn't the same.
I suspect that if you gave someone a method to do the backup, most would not bother to use it. People are relying more and more heavily on their home-based information storage systems: digital music, video and cherished photographs. Perhaps experiencing a "loss" will help them appreciate backup/recovery systems much more than they do today.
The smart people at the University of Pittsburgh manage five campuses and over 33,000 students, and needed to create an enterprise storage solution that would give them three key benefits. Of course, they turned to IBM, the number one overall storage hardware vendor, to deliver:
A new storage infrastructure with the capacity to grow with the University of Pittsburgh as needed
Improved system reliability with reduced downtime, and availability 24/7/365
A significantly more manageable storage solution that could lower costs and provide better system efficiency through virtualization
As a result, IBM shipped its 25,000th high-end disk storage system, in this case two IBM System Storage DS8300 models, along with storage virtualization, and other related hardware, software and services, to provide a complete end-to-end solution.
Here is what Jinx Walton, Director of Computing Services and Systems Development at the University of Pittsburgh, had to say about it...
"The University of Pittsburgh supports large enterprise systems, and the number and complexity of new systems continue to grow. To effectively manage these systems it was necessary to identify an enterprise storage solution that would leverage our existing investments in storage, make allocation of storage flexible and responsive to project needs, provide centralized management, and offer the reliability and stability we require. The integrated IBM storage solution met these requirements"
ESG analyst Tony Asaro talks about the many small storage startups having a Billion Dollar Impact on the storage system industry. Tony has counted over 50 storage system vendors now in the marketplace. Is it really that many? Most of the time, the media focus only on the top seven major players, but I agree that big players like IBM should take trends like this among small startups seriously.
EMC blogger Chuck Hollis suggests that this trend might be the start of a squeeze play, where top players and new upstarts squeeze out the middle players like Sun and HDS, in his post Desperate Times In Storage Land?
(His statement that IDC and Gartner have listed EMC as number one in "almost all" market segments is perhaps a bit misleading. IBM is number one in overall storage hardware, as well as leading in tape drives, tape libraries, tape virtualization, and for that matter, disk virtualization. I don't know if IDC or Gartner count the EMC Disk Library in the "tape virtualization" category, or if either analyst distinguishes between "cache-based" and "switch-based" disk virtualization as separate categories. Perhaps Chuck should have qualified this to say "almost all of the market segments that EMC does business in," which of course is better than the other vendors in the middle.)
Often, when looking at disk storage it is easy to focus on comparisons to other disk storage, but disruptive technologies cross boundaries. Already we have seen Flash Memory drives on the IBM BladeCenter, replacing traditional disk drives internal to each blade server. They are smaller than regular disk drives, but big enough to hold the operating system to boot from.
The New York Times has an article by John Markoff, Redefining the Architecture of Memory, that talks about IBM's research on "Racetrack Memory". The article is a good read, but here are some interesting excerpts:
Now, if an idea that Stuart S. P. Parkin is kicking around in an I.B.M. lab here is on the money, electronic devices could hold 10 to 100 times the data in the same amount of space.
Currently the flash storage chip business is exploding. Used as storage in digital cameras, cellphones and PCs, the commercially available flash drives with multiple memory chips store up to 64 gigabytes of data.
However, flash memory has an Achilles’ heel. Although it can read data quickly, it is very slow at storing it. That has led the industry on a frantic hunt for alternative storage technologies that might unseat flash.
Mr. Parkin’s new approach, referred to as “racetrack memory,” could outpace both solid-state flash memory chips as well as computer hard disks, making it a technology that could transform not only the storage business but the entire computing industry.
But ultimately, the technology may have even more dramatic implications than just smaller music players or wristwatch TVs, said Mark Dean, vice president for systems at I.B.M. Research.“Something along these lines will be very disruptive,” he said. “It will not only change the way we look at storage, but it could change the way we look at processing information. We’re moving into a world that is more data-centric than computing-centric.”
This technology has the potential to break some of the physical limitations that are currently worrying disk drive designers. I look forward to seeing how this plays out.
Registration for the "Meet the Storage Experts" event in Second Life will close this week for next week's September 20 event. All IBMers, clients and IBM Business Partners are welcome to attend. We will focus this time on DS3000 and N series disk systems, tape systems, and IBM storage networking gear.
If you miss this one, we plan to have another one in November!
Array-based replication does have drawbacks; all externalised storage becomes dependent on the virtualising array. This makes replacement potentially complex. To date, HDS have not provided tools to seamlessly migrate away from one USP to another (as far as I am aware). In addition, there's the problem of "all your eggs in one basket"; any issue with the array (e.g. physical intervention like fire, loss of power, microcode bug etc) could result in loss of access to all of your data. Consider the upgrade scenario of moving to a higher level of code; if all data was virtualised through one array, you would want to be darn sure that both the upgrade process and the new code are going to work seamlessly...
The final option is to use fabric-based virtualisation and at the moment this means Invista and SVC. SVC is an interesting one as it isn't an array and it isn't a fabric switch, but it does effectively provide switching capabilities. Although I think SVC is a good product, there are inevitably going to be some drawbacks, most notably those similar issues to array-based virtualisation (Barry/Tony, feel free to correct me if SVC has a non-disruptive replacement path).
I would argue that the IBM System Storage SAN Volume Controller (SVC) is more like the HDS USP, and less like the Invista. Both SVC and USP provide a common look and feel to the application server, both provide additional cache for external disk, and both are able to provide a consistent set of copy services.
IBM designed the SVC so that upgrades can occur non-disruptively. You can replace the hardware nodes, one node at a time, while the SVC system is up and running, without disruption to reading and writing data on virtual disk. You can upgrade the software, one node at a time, while the SVC system is up and running, without disruption to reading and writing data on virtual disk. You can upgrade the firmware on the managed disk arrays behind the SVC, again, without disruption to reading and writing data on virtual disk.
More importantly, SVC has the ultimate "un-do" feature. It is called "image mode". If for any reason you want to take a virtual disk out of SVC management, you migrate it over to an "image mode" LUN, and then disconnect it from the SVC. The "image mode" LUN can then be used directly, with all the file system data intact.
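A conceptual sketch of that un-do path, assuming a toy block-for-block migration rather than actual SVC internals: the key property is that the target LUN ends up with an identical layout, so the file system on it remains usable outside the virtualizer.

```python
# Sketch of the "image mode" un-do: migrate a striped virtual disk onto a
# single backend LUN, 1:1 and in order, so the LUN can then be detached
# and used directly. Conceptual only, not SVC internals.

def migrate_to_image_mode(vdisk_blocks, backend_lun):
    """Copy every virtual block, in order, to the same offset on one LUN."""
    for offset, data in enumerate(vdisk_blocks):
        backend_lun[offset] = data    # identical layout = intact file system
    return backend_lun

vdisk = ["boot", "fs-metadata", "file-a", "file-b"]   # striped across mdisks
lun = [None] * len(vdisk)
migrate_to_image_mode(vdisk, lun)
print(lun == vdisk)    # True -- the LUN is now usable outside the SVC
```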
I define "virtualization" as technology that makes one set of resources look and feel like a different set of resources with more desirable characteristics. For SVC, the more desirable characteristics include choice of multi-pathing driver, consistent copy services, improved performance, etc. For EMC Invista, the question is "more desirable for whom?" EMC Invista seems more designed to meet EMC's needs, not its customers. EMC profits greatly from its EMC PowerPath multi-pathing driver, and from its SRDF copy services, so it appears to have designed a virtualization offering that:
Continues the use of EMC PowerPath as the multi-pathing driver. SVC supports drivers that are provided at no charge to the customer, as well as those built into each operating system, like MPIO.
and, continues the use of array-based copy services like SRDF on the underlying disk. SVC provides consistent copy services regardless of the storage vendor being managed.
A post from Dan over at Architectures of Control explains the anti-social nature of public benches. City planners, in an effort to discourage homeless people from sleeping on benches in parks or sidewalks, design benches that are so uncomfortable that nobody uses them. These include benches made of metal that are too hot or too cold during certain months, benches slanted at an angle that dumps you on the ground if you lie down, and benches with dividers so that you must sit upright to use them.
This is not a disparagement of split-path switch-based designs. Rather, EMC's specific implementation appears designed to continue vendor lock-in for its multi-pathing driver, continue vendor lock-in for its copy services when used with EMC disk, and provide only slightly improved data migration capability for heterogeneous storage environments. Other switch-based solutions, such as those from Incipient or StoreAge, had different goals in mind.
Sadly, my IBM colleague BarryW and I have probably spent more words discussing Invista this year than all eleven EMC bloggers combined. While everyone in the industry is impressed how often EMC can sell "me, too" products with an incredibly large marketing budget, EMC appears not to have set aside such funds for Invista.
If a customer could design the ideal "storage virtualization" solution that would provide them the characteristics they desire the most from storage resources, it would not be anything like an Invista. While there are pros and cons between IBM's SVC and HDS's TagmaStore offerings, the reason both IBM and HDS are the market leaders in storage virtualization is because both companies are trying to provide value to the customer, just in different ways, and with different implementations.
When new technologies are introduced to the marketplace, it is normal for customers to be skeptical.
My sister is a mechanical engineer, so when she needs to configure a part or component, she can design it on the computer, and then use a "Rapid Prototyping Machine" that acts like a 3D printer to generate a plastic part that matches the specifications. Some machines do this by taking a hunk of plastic and cutting it down to the appropriate shape, and others use glue and powder to assemble the piece.
But not everything is that simple. Harry Beckwith deals with the issue of selling services and software features in his book "Selling the Invisible". How do you sell a service before it is performed? How do you sell a software feature based on new technology that the customer is not familiar with?
Our good friends over at NetApp, our technology partners for the IBM System Storage N series, developed a "storage savings estimator" tool that can provide good insight into the benefits of the Advanced Single Instance Storage (A-SIS) deduplication feature.
I decided to run the tool to analyze my own IBM Thinkpad C: drive (Windows operating system and programs) and D: drive ("My Documents" folder containing all my data files) to see how much storage savings the tool would estimate. Here are my results:
WINXP-C-07G (C: drive)
Total Number of Directories: 1272
Total Number of Files: 56265
Total Number of Symbolic Links: 0
Total Number of Hard Links: 41996
Total Number of 4k Blocks: 2395884
Total Number of 512b Blocks: 18944730
Total Number of Blocks: 2395884
Total Number of Hole Blocks: 290258
Total Number of Unique Blocks: 1611792
Percentage of Space Savings: 20.61
Scan Start Time: Wed Sep 5 14:37:06 2007
Scan End Time: Wed Sep 5 14:53:51 2007

WINXP-D-07H (D: drive)
Total Number of Directories: 507
Total Number of Files: 7242
Total Number of Symbolic Links: 0
Total Number of Hard Links: 11744
Total Number of 4k Blocks: 3954712
Total Number of 512b Blocks: 31610595
Total Number of Blocks: 3954712
Total Number of Hole Blocks: 3204
Total Number of Unique Blocks: 3524605
Percentage of Space Savings: 10.79
Scan Start Time: Wed Sep 5 14:21:16 2007
Scan End Time: Wed Sep 5 14:34:30 2007
I am impressed with the results, and have a better understanding of the way A-SIS works. A-SIS looks at every 4KB block of data, and creates a "fingerprint", a type of hash code of the contents. If two blocks have different fingerprints, the contents are known to be different. If two blocks have the same fingerprint, it is still mathematically possible for the contents to differ, so A-SIS schedules a byte-for-byte comparison to be sure they are indeed the same. This might happen hours after the block is initially written to disk, but it is a much safer implementation, and does not slow down the applications writing data.
(In an effort to provide support in "real time" as data was being written, earlier versions of deduplication had to either assume that a hash collision was a match, or take time to perform the byte-for-byte comparison during the write process. Doing this byte-for-byte comparison when the device is busiest doing write activity causes excessive, undesirable load on the CPU.)
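For the programmers out there, here is a small Python sketch of this fingerprint-then-verify idea. SHA-1 is a stand-in fingerprint; NetApp's actual fingerprint scheme and data structures are their own:

```python
import hashlib
from collections import defaultdict

BLOCK_SIZE = 4096  # A-SIS examines data in 4KB blocks

def fingerprint(block):
    # SHA-1 is a stand-in here; the real fingerprint scheme is NetApp's own
    return hashlib.sha1(block).digest()

def count_shared_blocks(blocks):
    """Fingerprint every block, then verify byte-for-byte before sharing."""
    stored = defaultdict(list)   # fingerprint -> distinct blocks already kept
    shared = 0
    for block in blocks:
        fp = fingerprint(block)
        # A matching fingerprint is only a hint; confirm the bytes really match
        if any(candidate == block for candidate in stored[fp]):
            shared += 1          # safe to point at the existing copy
        else:
            stored[fp].append(block)
    return shared

# Four blocks, two of them identical: one block's worth of space is saved
blocks = [b"A" * BLOCK_SIZE, b"B" * BLOCK_SIZE, b"A" * BLOCK_SIZE, b"C" * BLOCK_SIZE]
print(count_shared_blocks(blocks))  # 1
```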
The estimator tool runs on any x86-based laptop, personal computer or server, and can scan direct-attached, SAN-attached, or NAS-attached file systems. If you are a customer shopping around for deduplication, ask your IBM pre-sales technical support, storage sales rep, or IBM Business Partner to analyze your data. Tools like this help make a simple cost-benefit analysis: the cost of licensing the A-SIS software feature versus the value of the storage savings.
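The cost-benefit arithmetic itself is simple. Here is a toy Python calculation, with made-up prices that you would replace with your actual capacity, estimated savings, and quotes:

```python
# Toy numbers only -- plug in your actual capacity, savings estimate, and quotes
def dedup_payback(total_gb, savings_pct, cost_per_gb, license_cost):
    saved_gb = total_gb * savings_pct / 100
    return saved_gb * cost_per_gb - license_cost  # positive means A-SIS pays off

# e.g. 10 TB scanned, 20.61% estimated savings, $5/GB disk, $8000 license
print(dedup_payback(10_000, 20.61, 5.00, 8_000))  # 2305.0
```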
I can't believe I have been blogging for a year now!
I have Jennifer Jones from IBM to thank for getting this started. She was my predecessor in the job I have now, and as she was moving on to bigger and better things, she suggested during the transition that we start a blog, podcast, or similar. While there are many blogs and podcasts inside the IBM firewall, I wanted something accessible to all of our IBM sales team, IBM Business Partners, and existing and prospective clients, with comments enabled for two-way communication. Podcasts are very one-way, so we chose a blog instead. Getting it set up took a while: convincing our own management that this was worthwhile, and dealing with our legal department on the IBM blogging guidelines of what we can and cannot write about. We finally got it going last year, launching September 1, just in time for our 50 years of disk systems innovation campaign.
It has been a wild ride, a great learning experience, and has proven quite fulfilling for job satisfaction. Here are some observations and lessons I have learned along the way.
Roller is the open source blog server that drives Sun Microsystems' blogs.sun.com employee blogging site, the IBM developerWorks blogs that this blog exists on, thousands of internal blogs at IBM Blog Central, the JRoller Java community site, and hundreds of others worldwide. Whereas there might be fancier blog systems elsewhere that I could have chosen, hosting my blog with IBM developerWorks seemed like a good choice. I can access it from any web-browser-capable machine, and enter my blog posts in native HTML, which I develop in the tool itself, or offline with a standard basic text editor like Microsoft Notepad and then cut-and-paste back in.
One lesson I learned the hard way was that Roller generates the Permalink URL for each blog post based on the first five words of the title. For that reason, it is important to choose an appropriate and unique title, avoiding the use of punctuation, quotation marks, or pharmaceutical "enhancement products" that might get rejected by SPAM filters. Once chosen, you can't change the title afterwards, as it won't match the Permalink anymore. My blog post "Aperi is (enhancement product) for SMI-S" caused no end of grief to our Press Release team.
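Roughly speaking, the title-to-Permalink mapping works like the little Python sketch below; the exact rules are Roller's, and this is just to illustrate why only the first five words matter:

```python
import re

# A rough sketch of how a title becomes a permalink slug -- Roller's exact
# rules may differ; this just shows why the first five words are what count.
def permalink_slug(title, words=5):
    clean = re.sub(r"[^A-Za-z0-9 ]", "", title)   # punctuation is dropped
    return "_".join(clean.lower().split()[:words])

print(permalink_slug("Welcome to my Inside System Storage blog"))
# -> 'welcome_to_my_inside_system'  (changing the title later breaks this link)
```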
Writing blog posts in native HTML is not as hard as it sounds. I am limited to hosting a maximum of 24MB of files, and they can only be jpg, jpeg, gif, png, mp3, pdf or ppt format. So, wherever possible, I point to other websites for content. For those new to blogging, I recommend The Barebones Guide to HTML.
Roller also generates for me a spreadsheet of all my page views for the week. Tracking blog traffic closely is as crazy as checking your company's stock price every day. These "web-stat" e-mails get filed directly into my Bacn folder on Lotus Notes.
In my early advice to bloggers, I mentioned my choice of Bloglines as my RSS feed reader. When I subscribe to a new blog, I specify Full entries, not Partial, which allows me to scan it quickly, but filters out much of the non-text content like videos. It also allows me to see what my own blog posts look like from within a reader, so that I can write them appropriately.
I find it valuable to read other blogs, including those written by employees of our toughest competitors. Even if you don't blog yourself, following blogs can be extremely valuable. Be careful what you leave as comments on other blogs; they may come back to haunt you later.
Currently, I track 55 blogs, some about storage, marketing, Web 2.0 issues, Second Life, Linux, or other areas of interest. I prefer blogs that make only 1-5 posts per week, so blogs like LifeHacker and LifeRemix are off my Bloglines list, but are excellent resources when I am searching for something specific. If you think 55 is a lot of blogs, consider Timothy Ferriss' post on How Robert Scoble reads 622 RSS feeds each morning.
I have quite an international readership, so I have to be careful using American idioms and pop culture references. For example, in my blog post IBM acquires Softek, I mentioned "shotgun weddings" and had various responses asking what exactly that meant, all from readers outside the USA. I've learned that sometimes you need to link to an American slang dictionary, or a Wikipedia encyclopedia entry, to explain these terms and phrases.
Technorati currently tracks over 100 million blogs and over 250 million pieces of tagged social media. Getting my blog tracked had some issues. You have to join, then post a "claim" on your own blog. My mistake was having a case-sensitive URL with a mix of upper and lower case letters, when Technorati prefers all lower case. IBM worked with Technorati to get this resolved.
Del.icio.us is a social bookmarking website -- the primary use is to store your bookmarks online, which allows you to access the same bookmarks from any computer and add bookmarks from anywhere, too. On del.icio.us, you can use tags to organize and remember your bookmarks, which is a much more flexible system than folders.
I use the Firefox, Safari, Dillo and Internet Explorer web browsers, so it is nice that I have access to all my bookmarks in the same consistent manner. When I see content on a website that I might like to reference later in a blog, I tag it with del.icio.us so that I can get to it later.
Fellow GTD-ers will quickly recognize this acronym, but for the rest of you, it refers to David Allen's book "Getting Things Done®". This is a great book! I learned about it reading other people's blogs, and found it incredibly useful in helping me organize my time. There are various online tools available to help employ this method. I use Lotus Connections Activities for group projects with co-workers at IBM, and BackPack for projects with my friends outside of work.
The success of YouTube encouraged IBM to launch IBM TV, a portal for IBM's video and multimedia assets, making it easier for IBM employees, customers, partners and prospects to access and view IBM multimedia. The plan is to have eight anchor episodes per year, professionally hosted by TV personality Joe Washington, pointing to related offers and other resources for viewers to learn more.
Blogging also introduced me to Second Life. I asked around if anyone else within IBM was using Second Life, and discovered quite a few. I got invited to join our internal Eightbar group, and participated in various events, including an IBM Holiday party that I discussed in my blog post "Building a Snowman in Second Life".
In April, we had a launch of our newest products in Second Life, and we plan to have two more Second Life events,September 20 and another in November, staged as "Meet the Experts" question and answer panels.
I wrap up with Facebook. Actually, whereas most of my Web 2.0 efforts have been work-related, I have quite a few friends and family who follow my blog. Several were inspired to start their own blogs, such as Passages from Pam and Barry Whyte on Storage Virtualization. Bridging the gap is Facebook, something I can use to keep tabs on my friends, as well as my storage industry-related contacts.
Wow, that's quite a lot in one year. Well, I am done with my meetings down here in Sao Paulo, Brazil. My colleagues and I are returning tonight to enjoy the long Labor Day weekend.
August 31 is my good friend Jim Cosentino's retirement day as a full-time employee at IBM. After over 30 years at IBM, in various marketing, sales and consulting roles, he is going to be thinking about happy things instead of working. His last seven years have been at the IBM Poughkeepsie Customer Executive Briefing Center as the lead System Storage presenter.
The past few years, I've traveled with him around the world on various business trips, teaching our IBM sales force and IBM Business Partners about our System Storage offerings, and presenting to clients. He is a class act, always positive, laughing, seeing the bright side of things.
While "spend more time with his family" has become a business cliche, I know Jim will actually enjoy his retirement years, spend more time with his family, take on other pursuits and hobbies, and perhaps do some more traveling.
Jim, if you are reading this, I have one suggestion. I know you have lots of friends within IBM, and count myself as one of them, but may I suggest your first goal be to make at least three new friends, to help you in your transition to retirement.
Congratulations Jim! Enjoy your well-deserved retirement!
If you are ever down in Sao Paulo, Brazil, may I suggest not drinking "American amounts" of their "Brazilian Coffee". The coffee here is "robust", to say the least.
Yesterday, my blog focused on IBM iSCSI offerings that were announced in August. Also announced earlier this month, the Integrated Removable Media Manager (IRMM) on System z has been years in the making. IRMM is a new robust systems management product for Linux® on IBM System z™ that manages open-system media in heterogeneous distributed environments and virtualizes physical tape libraries. IRMM combines the capacity of multiple heterogeneous libraries into a single reservoir of tape storage that can be managed from a central point. By providing an integrated solution with the opportunity for both mainframe z/OS DFSMSrmm and distributed Tivoli® Storage Manager™ environments to be managed by IRMM, System z can now be a hub for the management of removable media.
The people who thought the "Mainframe is obsolete", and those that thought "Tape is dead", are both proven wrong again with this announcement. People are looking to deploy robust tape automation for backup and archive, and this convergence with mainframe makes perfect sense by providing business value that extends to other distributed systems.
The proof-of-concept that the IBM Haifa research center developed back in 1998 became what we now call the iSCSI protocol. The book iSCSI: The Universal Storage Connection introduces the history as follows:
In the fall of 1999 IBM and Cisco met to discuss the possibility of combining their SCSI-over-TCP/IP efforts. After Cisco saw IBM's demonstration of SCSI over TCP/IP, the two companies agreed to develop a proposal that would be taken to the IETF for standardization.
There are three ways to introduce iSCSI into your data center:
Through a gateway, like the IBM System Storage N series gateway, that allows iSCSI-based servers to connect to FC-based storage devices
Through a SAN switch or director, which lets an FC-based server access iSCSI-based storage, an iSCSI-based server access FC-based storage, or even iSCSI-based servers attach to iSCSI-based storage.
Directly through the storage controller.
IBM has been delivering the first method with its successful IBM System Storage N series gateway products, but today we have announced additional support for the second and third methods. Here's a quick recap.
New SAN director blades
Supporting the second method, the IBM TotalStorage SAN256B Director is enhanced to deliver iSCSI functionality with a new M48 iSCSI Blade, which includes 16 ports (8 Fibre Channel ports and 8 Ethernet ports for iSCSI connectivity). We also announced a new Fibre Channel M48 Blade which provides 10 Gbps Fibre Channel Inter-Switch Link (ISL) connectivity between SAN256B Directors.
With support for Boot-over-iSCSI, diskless rack-optimized and blade servers can boot Windows or Linux over Ethernet, eliminating the management hassles of internal disk.
All of this is part of IBM's overall push into the Small and Medium size Business marketplace, making it easier to shop for and buy from IBM and its many IBM Business Partners, easier to deploy and install storage, and easier to manage the storage once you have it.
In his blog Rough Type, Nick Carr asks Where is my CloudBook? and points to John Markoff's 2-part series in the New York Times on computing in the clouds. (Read it here: Part 1, Part 2)
At first, I thought he meant computing while in an airplane, but instead he is talking about computing on a laptop or other hand-held device that has no internal disk drive, no installed operating system, no internal data storage. Instead, the idea is that you boot from a CD, and access your data, and even some of your programs, over the internet. John used an Ubuntu Linux LiveCD in his example.
This week, I am in Sao Paulo, Brazil, and was "in the clouds" for over 10 hours flying from Dallas to here. The one time I am guaranteed to be "off-line" from the internet is on the plane, and I spend enough time on planes that I am able to get work done despite being "disconnected".
The same reasons people want to get rid of the disk drive in their laptop are the reasons data centers are getting rid of internal disk on their servers:
disks crash, and typically are not protected in any RAID configuration on most laptops
operating systems get infected with viruses and malware
storage on one server is generally inaccessible to every other server
Booting from CD is especially clever. No more worrying about fixing your Windows registry, viruses, corrupted operating system files, or the cruft that accumulates on your C: drive and slows you down. The CD is the same every time, so it is like running your system with a freshly installed operating system every day.
The need for central repositories of data harkens back to the years of the IBM mainframe. Of course, what made sense back then continues to make sense now. The old 3270 terminals stored no data, and instead merely provided keyboard input and displayed text screen output from the vast amount of data stored on the central system. Today, the inputs are different, using your finger or mouse to point to what you want, sliding it across to make things happen, and the output may now include photos, audio and video, but the concept is still the same.
I carry my Ubuntu Linux LiveCD with me on every business trip. Combined with externally rewriteable media, such as a USB key, you can get work done even when you are in an airplane, and upload it when you are back on the net.
The IBM Storage and Storage Networking Symposium concludes today. As is typical for many such conferences, it ended at noon, so that people can catch airline flights.
TS1120 Tape Encryption - Customer Experiences
Jonathan Barney has implemented many deployments of tape encryption, and shared his experiences at two customer locations.
The first company had decided to implement their EKM servers on dedicated 64-bit Windows servers. They had three sites, one each in Chicago, Alpharetta, and New York City, each with two EKM servers. Each site had a single TS3500 tape library, which pointed to four EKM servers, two local and two remote.
The clever trick was managing the keystore. They decided that EKM-1 was their trusted source, made all changes to that, and then copied it to the other five EKM servers. His team deployed one site at a time, which turned out to be OK, but he would not recommend it. Better to design your complete solution, and make sure that all libraries can access all EKM servers.
This company decided to have a single key-label/key-pair for all three locations, but to change it every 6 months. You have to keep the old keys for as long as you have tapes encrypted with those keys, perhaps 10-20 years. The customer found the IBM encryption implementation "elegant", and it can be easily replicated to a fourth site if needed.
The second company had both z/OS and Sun Solaris. Initially they planned to have both a hardware-based keystore on System z and a software-based keystore on Sun, but they realized that the System z version was so much more secure and reliable that it made no sense to have anything on the Sun Solaris platform.
On System z, they had two EKM images, and used VIPA to ensure load balancing from the library. Tapes written from z/OS used DFSMS Data Class to determine which tapes are encrypted and which aren't. All tapes written from Sun Solaris were encrypted, written to a separate logical library partition of the TS3500, which in turn contacted the EKM on System z for the keys to use for the encryption.
The "gotcha" for this case was that when they tested Disaster Recovery, they had torecover the two EKM servers first, before any other restores could take place, and thistook way too long. Instead, they developed a scaled-down 10-volume "rescue recovery" z/OS image that would contain the RACF database and all EKM related software to actas the keystore during a disaster recovery. Anytime they make updates, they only haveto dump 10 volumes to tape. Restore time is down to only 2 hours.
He gave this advice for deploying tape encryption:
Some third-party z/OS security products, like Computer Associates Top Secret or ACF2, require some PTFs to work with the EKM. The latest IBM RACF is good to go.
Getting IP support from IOS to OMVS requires an IPL.
At one customer, an OMVS monitor software program killed the EKM because it wasn't in their list of "acceptable Java programs". They updated the list and EKM ran fine.
Do not update the EKM properties file while EKM is running. EKM keeps a lot of stuff in memory, and when it is recycled, copies this back to the EKM properties file, reversing any changes you may have made. It is best to shut down EKM, update the properties file, then start EKM back up again. This is why you should always have at least two EKM servers for redundancy.
TSM for Linux on System z
Randy Larson from our Tivoli group presented this session. There is a lot of interest in deploying IBM Tivoli Storage Manager backup and archive software on Linux for System z. Many customers are already invested in a mainframe infrastructure, may have TSM for z/OS or z/VM, and want the newer features and functions that are available for TSM on Linux.
TSM has special support for Lotus Domino, Oracle, DB2 and WebSphere Application Servers. TSM clients can send backup data to a TSM server internally via HiperSockets, a virtual LAN feature on the System z platform that uses shared memory to emulate a TCP/IP stack.
One of the big questions is whether to run Linux as guests under z/VM, or natively in an LPAR. The general deployment is to carve out an LPAR and run Linux natively until your server and storage administration staff have taken z/VM training classes. Once trained, they can easily move native LPAR images to z/VM guests. Unlike VMware, which takes a hefty 40% overhead on x86 platforms to manage guests, z/VM takes only 5-10% overhead.
For the TSM database and disk storage pools, Randy recommends FC/SCSI disk with an ext3 file system, combined with LVM2 into logical volumes. ECKD disk and reiserfs work too. Avoid use of z/VM minidisks. Under LVM2, consider 32KB stripes for the TSM database, and 256KB stripes for the disk storage pools. For multipathing, use the failover rather than multibus method. Read IC45459 before you activate "directio".
TSM for Linux on z is very much like TSM on AIX or Windows, and not like TSM for z/OS. For tape, TSM for Linux on z does not support ESCON/FICON-attached tape; you need to use FC/SCSI-attached tape and tape libraries. TSM owns the library and drives it uses, so give it a logical library partition separate from z/OS. For Sun/StorageTek customers, TSM works with or without the Gresham Enterprise DistribuTape (EDT) software. Use the IBM-provided drivers for IBM tape. For non-IBM tape, TSM provides some drivers that you can use instead.
That wraps up my week. This was a great conference! If you missed it, look for the one in Montpellier, France this October. Check out the list of IBM Technical Conferences to find others that might interest you.
The IBM Storage and Storage Networking Symposium continues ...
DS8300 Benchmark for Global Mirror
Phil Allison of Fidelity National Information Services presented his success in switching from a competitor over to IBM DS8300 disk systems for use with Global Mirror. They used Performance Associates' famous PAIO driver to help with the benchmark testing. They ran the benchmarks at 2x and 3x their current workloads to see how well the DS8000 performed, measuring IOPS, MB/sec, and millisecond response time (msec). They were very impressed with their results, staying below their target 0.8 msec for most of their runs.
For Global Mirror, they did a performance "bake-off" between the Ciena CN2000 and the Cisco 9216i. These are implemented differently. Ciena uses a Layer-2 approach, encapsulating the Fibre Channel packets directly for transport over SDH/SONET or Gigabit Ethernet (GigE), which required dedicated circuits between Jacksonville, Florida and Little Rock, Arkansas. By contrast, Cisco uses a Layer-3 approach, encapsulating Fibre Channel packets within IP packets, which can leverage an existing datacenter-to-datacenter backbone.
To add stress to the benchmarks, they used a "network impairment" emulator. These artificially inject errors, drop packets, and simulate other signal-loss conditions. Running both Cisco and Ciena under these tests helped them decide which to purchase, but also reinforced the idea that they made the right choice in choosing IBM for their remote distance mirroring solution.
Comparison of Bare Machine Recovery Techniques
"Bare machine recovery" is the phrase used to restore a machine that has no operating system installed (or thewrong operating system). Dave Canan from IBM Advanced Technical Support did a great job reviewing the variousproducts and techniques available, and the pros and cons of each approach. The ones he covered were:
Tivoli Storage Manager - install fresh Windows Operating System, TSM client, and then follow certain steps
Automated System Recovery (ASR) - a new feature of Windows XP and Windows 2003 that works with the TSM client
Symantec Ghost - formerly called PowerQuest Drive Image, there are now two versions: Ghost Home Edition and Ghost Corporate Solution Suite
Cristie Bare Machine Recovery (CBMR) - This is an IBM partner that provides both Linux and Windows PE versions. Cristie includes a license for Windows PE, so there is no need to use the alternative Bart PE method.
SAN Volume Controller - Customer Experience
Bill Giles of Catholic Medical Center, a hospital in New Hampshire, presented his experiences with the IBM System Storage SAN Volume Controller. They have a mix of IBM System x, System p, and System i servers, as well as machines from HP, Sun, and Dell. For applications, they have a Picture Archiving and Communication System (PACS) for cardiology and radiology, an HL7 interface engine, a Clinical Information System, TSM for backup, and Microsoft Exchange for e-mail.
They deployed SVC with AIX, Solaris, Windows 2000 and 2003 hosts. They were delighted with the results:
Centralized storage provisioning
Consolidation of disparate storage into a universal platform
Non-disruptive data migration
Increased utilization of existing disk resources
Improved disaster recovery with FlashCopy and Metro Mirror
Birds of a Feather (BOF) sessions
We had two BOFs, one for storage attached to System z operating systems, and another for storage attached to Linux, UNIX and Windows systems. This distinction made sense when mainframes could only attach to CKD disk and ESCON/FICON tape, and distributed systems could only do FCP/SCSI, but these days, there is all kinds of convergence going on.
Linux on System z can now attach via FCP to LTO tape and the SAN Volume Controller, now allowing a wide range of storage options for that platform. z/OS, z/VM, z/VSE and Linux on System z can all access IBM System Storage N series via NFS.
The format was a traditional Q&A panel: we had experts at the front of the room handling the questions and discussion topics brought up by the audience. I'll spare you the individual questions and answers.
The IBM Storage and Storage Networking Symposium in Las Vegas continues ...
N series and VMware
Jeff Barnett presented how VMware manages disk image files in its VMFS repository, and how N series offers a better alternative. Virtual machines can access N series volumes directly.
Business Continuity with System i
Allison Pate presented the various Business Continuity options for System i. Many customers use internal storage for System i, but this hampers Business Continuity efforts. Instead, you can have IBM System Storage DS8000 or DS6000 series disk systems provide disk mirroring between clustered systems.
There was a lot of interest in the DR550, one of our many compliance storage solutions. Ron Henkhaus presented an overview of our DR550 and DR550 Express offerings. Unlike competitive disk-only solutions, such as the EMC Centera, the DR550 allows you to attach an automated tape library, managing large amounts of fixed-content data at a much lower cost point. It also has encryption, for both disk and tape data.
Open Systems Disk Management
Siebo Friesenborg presented the various steps needed to troubleshoot performance problems with open systems, including the use of "iostat" on AIX systems as an example, and the steps you can take to make formal Service Level Agreements (SLA) between the IT department and the various lines of business.
IBM Encryption - TS1120 and LTO-4 encryption comparison
Tony Abete presented TS1120 and LTO-4 encryption techniques. Deploying encryption is more than just choosing a tape drive. There are a variety of factors involved, such as whether to manage the keys from the application, the operating system, or the library manager. You need policies to decide when to encrypt tapes and when not to, and plans for generating your keys, storing them, and sharing them with the business partners, suppliers and service providers to which you send tapes.
I can tell that many people are feeling like they are "drinking from a firehose". IBM's success in storage reaches out to so many different aspects of information management, a variety of industries, and disciplines as varied as regulatory compliance and medical imaging.
Registration is now open for our next "Meet the Storage Experts" event in Second Life. All IBMers, clients and IBM Business Partners are welcome to attend. We will focus this time on DS3000 and N series disk systems, tape systems, and IBM storage networking gear.
The blog team is working on re-directs for those who don't see this in time. Depending on which RSS feed reader you use, you may need to unsubscribe/re-subscribe to re-activate. You can update the URL for the feed to one of these:
Continuing this week in Las Vegas, we had a great set of sessions today.
Fibre Channel Overview
I like the manner in which Jim Robinson presented this "basics" session on how Fibre Channel works, why it is spelled "Fibre" not "Fiber", and how all the different layers of the protocol work.
IBM Virtualization Engine TS7700 series
Jim Fisher from the IBM Tucson lab presented the TS7700 series, which replaces our Virtual Tape Server (VTS). He had performance numbers to show that it was faster in various measurements than the B20 model of the VTS. It is supported on the z/OS, z/VM, z/VSE, TPF and z/TPF operating systems.
IBM E-mail Archiving and Storage solution
Ron Henkhaus provided an overview of IBM's E-mail Archive and Storage appliance. The solution combines an IBM BladeCenter server blade, DS4200 series with SATA disk, and pre-installed software: IBM Content Manager, IBM Records Manager, IBM CommonStore for Lotus Domino and Microsoft Exchange, and IBM System Storage Archive Manager. Services are included to get it connected to your e-mail environment.
Lee La Frese from our Tucson performance lab presented various performance features of the IBM System Storage DS8000 series, and how they compare to the competition.
First, some interesting statistics.
Back in 2002, the average high-end Enterprise Storage Server (ESS) model F20 was configured with only 4 Terabytes (TB). In 2004, the average ESS was up to 12 TB. Today, the average DS8100 is 17.4 TB and the average DS8300 is 41.5 TB.
51 percent of DS8000 series systems are configured for FCP only (Linux, UNIX, Windows, i5/OS), 35 percent for FICON only (System z mainframe), and 14 percent have both mixed.
Average I/O density has stabilized at about 0.6 IOPS per GB. This means that for every TB of business data, you can expect most applications to issue 600 input/output requests per second.
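That rule of thumb makes for quick back-of-envelope sizing, as in this Python sketch using the 0.6 IOPS per GB figure above:

```python
# Back-of-envelope sizing from the ~0.6 IOPS per GB density cited above
def expected_iops(capacity_tb, density_iops_per_gb=0.6):
    return capacity_tb * 1000 * density_iops_per_gb

print(expected_iops(1))     # 600.0 IOPS for 1 TB of business data
print(expected_iops(41.5))  # 24900.0 IOPS for an average DS8300 configuration
```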
While the IBM SAN Volume Controller has the fastest SPC-1 and SPC-2 benchmarks, the DS8000 also has good results. Looking at just the monolithic "scale-up" systems, the DS8000 has the fastest SPC-1, and second place for SPC-2.
Compared against the EMC DMX-3, the IBM DS8000 series has superior performance. For example, comparing 2Gbps port performance on each, the DMX-3 is able to do 20 IOPS per port, compared to the DS8000 with 38 IOPS per port. Compared against the HDS USP, the response time at 60,000 IOPS for HDS averaged 10.5 milliseconds (msec), compared to less than 6.5 msec for the IBM DS8000.
There are some unique features of the DS8000 that optimize performance. Two are Adaptive Multi-stream Prefetching (AMP), which helps improve processing of database queries, and HyperPAV, which helps on mainframe workloads.
For FATA disks, performance of sequential reads and writes is only 20 percent less than 15K RPM FC disks, but a whopping 50 percent less for random access. Consider using FATA for audio/video streaming, surveillance data, seismic recordings, and medical imaging.
Comparing 146GB 10K versus 300GB 15K drives from a capacity perspective was interesting. 37TB of 300GB 15K drives had 20 percent better response time, but 25 percent less maximum throughput, than 37TB of 146GB drives. Depending on your workload, this can help decide which you choose.
Lee also covered RAID rebuild performance. When an individual HDD that is part of a RAID group fails, the DS8000 performs a rebuild onto a spare drive. A RAID-5 rebuild is processed at 52 MB/sec, compared to RAID-10 at 56 MB/sec. Rebuild processing is low priority, so any other workload will take higher priority to avoid impacting application performance. Compared to EMC, the IBM DS8000 can rebuild a RAID-5 73GB 15K RPM drive in only 24 minutes, but it takes 37 minutes to do this on a DMX-3. That is 13 minutes of additional exposure during which a second drive failure might cause you to lose all the data in that RAID group!
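If you want to sanity-check those rebuild numbers, the arithmetic is just drive capacity divided by rebuild rate:

```python
# Rebuild time = drive capacity / rebuild rate (using the figures cited above)
drive_gb = 73
raid5_rate_mb_per_s = 52
minutes = drive_gb * 1000 / raid5_rate_mb_per_s / 60
print(round(minutes, 1))  # ~23.4 minutes, consistent with the 24 minutes cited
```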
N series ILM and Business Continuity
James Goodwin from our Advanced Technical Support team presented IBM System Storage N series features that relate to ILM and Business Continuity. He covered features like SnapShot, SnapLock, SnapVault and LockVault.
I have arrived safely in Las Vegas for the IBM System Storage and Storage Networking Symposium. This event is held once every year. The gold sponsors were Brocade, Cisco, Finisar, Servergraph, and VMware. Our silver sponsor was QLogic.
I presented IBM's System Storage strategy and an overview of our product line. For those who missed it, our strategy is focused on helping customers in four key areas:
Optimize IT - to simplify and automate your IT operations and optimize performance and functionality, through server/storage synergies, storage virtualization, and integrated storage infrastructure management.
Leverage Information - to enable a single view of trusted business information through data sharing, and to get the most value from information through Information Lifecycle Management (ILM).
Mitigate Risk - to comply with security and regulatory requirements, and keep your business running with a complete set of business continuity solutions. IBM offers a range of non-erasable, non-rewriteable storage, encryption on disk and tape, and support for IT Infrastructure Library (ITIL) service management disciplines.
Enable Business Flexibility - to provide scalable solutions and protect your IT investment through the use of open industry standards like Storage Networking Industry Association (SNIA) Storage Management Initiative Specification (SMI-S). IBM offers scalability in three dimensions: Scale-up, Scale-out, and Scale-within.
IBM has a broad storage portfolio, in seven offering categories:
Disk Systems, including our SAN Volume Controller, DS family, and N series.
Tape Systems, including tape drives, libraries and virtualization.
Storage Networking, a complete set of switches, directors and routers
Infrastructure Management, featuring the IBM TotalStorage Productivity Center software
Business Continuity, advanced copy services and the software to manage them
Lifecycle and Retention, our non-erasable, non-rewriteable storage including DR550, N series with SnapLock, and WORM tape support, Grid Archive Manager and our Grid Medical Archive Solution (GMAS)
Storage Services, everything from consulting, design and deployment to outsourcing and hosting.
I could talk all day on this, but given that the room was packed, every seat taken and the rest of the audience standing along the walls, I had to keep it down to one hour.
SAN Volume Controller Overview
I presented an overview of the IBM System Storage SAN Volume Controller (SVC), IBM's flagship disk virtualization product. Rather than giving a long laundry list of features and benefits, I focused on the five that matter most:
Reduces the cost and complexity of managing storage, especially for mixed storage environments
Simplifies Business Continuity through non-disruptive data migration and advanced copy services
Improves storage utilization, getting more value from the storage hardware you already have
Enhances personnel productivity, empowering storage administrators to get their job done
Delivers high availability and performance
SAN Volume Controller - Customer Success Stories
A good part of this conference is presented by non-IBMers, including Business Partners and clients sharing their experiences. In this session, we had two speakers share their experiences with SVC.
David Snyder keeps over 80 web sites online and available. His digital media technologies team uses SVC to make their storage administration easier, and to ensure high availability for web site content creation and publishing.
Mark Prybylski manages storage at his company, a financial bank. His storage management team uses SVC Global Mirror, which provides asynchronous disk mirroring between different types of disk, as part of their Business Continuity/Disaster Recovery plan.
The last session I attended was "Storage ... to Optimize your ECM Deployments" by Jerry Bower, now working for IBM as part of our recent acquisition of the FileNet company. ECM stands for Enterprise Content Management, and IBM is the market leader in this space. Jerry gave a great overview of the IBM Content Manager software suite, our newly acquired FileNet portfolio, and the storage supported.
After the sessions was a reception at the Solution Center with dozens of exhibitor booths. For example, Optica Technologies had their PRIZM products, which are able to connect FICON servers to ESCON storage devices.
I am back at "the Office" for a single day today. This happens often enough I need a name for it.Air Force pilots that practice landing and take-offs call them "Touch and Go", but I think I needsomething better. If you can think of a better phrase, let me know.
This week, I was in Hartford, CT, Somers, NY and our Corporate Headquarters in Armonk, in a variety of meetings, some with editors of magazines, others with IBMers I have only spoken to over the phone and finally got a chance to meet face to face.
I got back to Tucson last night, had meetings this morning in Second Life, then presented "Information Lifecycle Management" in Spanish to a group of customers from Mexico, Chile, and Brazil. We have a great Tucson Executive Briefing Center, and plenty of foreign-language speakers to draw from among our local employees here at the lab site.
Sunday, I leave for Las Vegas for our upcoming IBM Storage and Storage Networking Symposium. We will cover the latest in our disk, tape, storage networking and related software. Do you have your tickets? If you plan to attend, and want to meet up with me, let me know.
Last week, a writer for a magazine contacted us at IBM to confirm a quote that writing a Terabyte (TB) on disk saves 50,000 trees. I explained that this was cited from UC Berkeley's famous How Much Information? 2003 study.
To be fair, the USA Today article explains that AT&T also offers "summary billing" as well as "on-line billing", but apparently neither of these are the default choice. I can understand that phone companies send out bills on paper because not everyone who has a phone has internet access, but in the case of its iPhone customers, internet access is in the palm of your hands! Since all iPhone customers have internet access, and AT&T knows which customers are using an iPhone, it would make sense for either on-line billing or summary billing to be the default choice, and let only those that hate trees explicitly request the full billing option.
Sending a box of 300 pages of printed paper is expensive, both for the sender and the recipient. This information could have been shipped less expensively on computer media, a single floppy diskette or CD-ROM for example. For those who prefer getting this level of detail, a searchable digitized version might be more useful to the consumer.
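A quick back-of-envelope check, assuming roughly 2KB of plain text per printed page, shows why one diskette would have sufficed:

```python
# Rough estimate: does 300 pages of plain text fit on a 1.44 MB diskette?
pages = 300
bytes_per_page = 2_000            # ~2 KB of text per printed page (assumption)
diskette_bytes = 1_440_000
print(pages * bytes_per_page / diskette_bytes)  # ~0.42 of one diskette
```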
Which brings me to the concept of Information Lifecycle Management (ILM). You can read my recent posts on ILM by clicking the Lifecycle tab on the right panel, or my now infamous post from last year about ILM for my iPod.
His recollection of the history and evolution of ILM fairly matches mine:
The phrase "Information Lifecycle Management" was originally coined by StorageTek in early 1990s as a way to sell its tape systems into mainframe environments. Automated tape libraries eliminated most if not all of the concerns that disk-only vendors tout as the problem with manual tape. I began my IBM career in a product now called DFSMShsm which specifically moved data from disk to tape when it no longer needed the service level of disk. IBM had been delivering ILM offerings since the 1970s, so while StorageTek can't claim inventing the concept, we give them credit for giving it a catchy phrase.
EMC then started using the phrase four years ago in its marketing to sell its disk systems, including slower less-expensive SATA disk. The ILM concept helped EMC provide context for the many acquisitions of smaller companies that filled gaps in the EMC portfolio. Question: Why did EMC acquire company X? Answer: To be more like IBM and broaden its ILM solution portfolio.
Information Lifecycle Management is comprised of the policies, processes, practices, and tools used to align the business value of information with the most appropriate and cost-effective IT infrastructure from the time information is conceived through its final disposition. Information is aligned with business requirements through management policies and service levels associated with applications, metadata, and data.
Whitepapers and other materials you might read from IBM, EMC, Sun/StorageTek, HP and others will all pretty much tell you what ILM is, consistent with this SNIA definition, why it is good for most companies, and how it is not just about buying disk and tape hardware. Software, services, and some discipline are needed to complete the implementation.
While the SNIA definition provides a vendor-independent platform to start the conversation, it can be intimidating to some, and is difficult to memorize word for word. When I am briefing clients, especially high-level executives, they often ask for ILM to be explained in simpler terms. My simplified version is:
Information starts its life captured or entered as an "asset" ...
This asset can sometimes provide competitive advantage, or is just something needed for daily operations. Digital assets vary in business value in much the same way that other physical assets for a company might. Some assets might be declared a "necessary evil", like laptops, but are tracked to the n'th degree to ensure they are not lost, stolen or taken out of the building. Other assets are declared "strategically important" but are readily discarded, or at least allowed to walk out the door each evening.
... then transitions into becoming just an "expense" ...
After 30-60 days, many of the pieces of information are kept around for a variety of reasons. However, if it isn't needed for daily operations, you might save some money moving it to less expensive storage media, through less expensive SAN or LAN network gear, via less expensive host application servers. If you don't need instant access, then perhaps the 30 seconds or so to fetch it from much-less-expensive tape in an automated tape library could be a reasonable business trade-off.
... and ends up as a "liability".
Keeping data around too long can be a problem. In some cases it is incriminating, and in other cases, just having too much data clogs up your datacenter arteries. If not handled properly within privacy guidelines, data potentially exposes sensitive personal or financial information of your employees and clients. Most regulations require certain data to be kept, in a manner protected against unexpected loss, unethical tampering, and unauthorized access, for a specific amount of time, after which it can be destroyed, deleted or shredded.
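To make the three stages concrete, here is a toy age-based policy in Python; the thresholds are illustrative, not a recommendation:

```python
from datetime import date, timedelta

# Toy thresholds: 60 days of "asset" life, 7 years of retention (illustrative)
def ilm_stage(created, today):
    age = today - created
    if age <= timedelta(days=60):
        return "asset: keep on fast disk for daily operations"
    if age <= timedelta(days=365 * 7):
        return "expense: migrate to cheaper disk or automated tape"
    return "liability: destroy, delete or shred per retention policy"

print(ilm_stage(date(2007, 8, 1), date(2007, 9, 7)))   # still an asset
print(ilm_stage(date(2000, 9, 1), date(2007, 9, 7)))   # now a liability
```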
So ILM is not just a good idea to save a company money, it can keep them out of the court room, as well as help save the environment and not kill so many trees. Now that 100 percent of iPhone customers have internet access, and a good number of non-iPhone customers have internet access at home, work, school or the public library, it makes sense for companies to ask people to "opt-in" to getting their statements on paper, rather than forcing them to "opt-out".
Despite this, or perhaps because of this, over 30 percent of IBM's Linux server revenue is on non-x86 platforms, avoiding the XenSource vs. VMware decision altogether. Both System z (traditional mainframe servers) and System p (traditional UNIX servers) are able to run many Linux images in a fully virtualized manner, without VMware or XenSource.
Philip Rosedale, chief executive of Linden Lab, which produces the Second Life virtual reality environment, said Second Life and Facebook are popular because they give people a new environment to interact in that they are comfortable with.
Of course, I have blogged for months now on my involvement in Second Life, and how IBM is investing in this platform for business purposes. Recently, IBM made news for publishing its Code of Conduct, a set of guidelines on how to run your avatar in virtual worlds, including Second Life. IBM recognizes the business potential of virtual worlds, and has formed the "3D Internet" group to explore the possibilities. Over 5000 IBM employees now use Second Life on a regular basis.
I was surprised to learn that there were over 23,000 IBMers already on Facebook. I used to be on LinkedIn, but found Facebook to have more IBMers, and have made the switch. Recently, we were told that these 23,000 IBMers spend 19 minutes per day, on average, visiting Facebook pages. Nobody asked me how much time I spend every day on Facebook, but with over 350,000 employees in the company, I am sure some have ways to track the lives of others.
Both of these count as adding more "FUN" into the workplace, which everyone should strive for. It is also good to know that the skills you develop using Second Life or Facebook can carry over to your next job role or your next employer. The number-one question I get from new colleagues when I mention either of these exciting new ways to communicate and collaborate is: "But how is this related to business?"
Second Life is obvious: a new, innovative way to hold meetings with colleagues, Business Partners and clients is going to have business value. Meetings in Second Life help you focus on what is being discussed, versus a plain telephone call where your eyes may wander to other things in your view. Of course, nothing beats the effectiveness of face-to-face meetings, but Second Life offers a more energy-efficient alternative to traveling to other cities or countries.
Stephen over at RupturedMonkey discusses the challenges of recruiting storage administrators:
There has been a Storage Admin job advertised for many months but no one wants it. Why? It's offering VERY good money, but word has got around that the company has poor management practices and most people don't last more than 6 months. So, with the shortage of good SAN people, good money and conditions, what can that company do to recruit someone? ...
This leads me to the thought: has anyone ever thought about the standards that storage administrators should follow? Can an employer look up a web site to find questions to ask prospective employees? More often than not, they are recruiting because the previous one left, so how can companies know what they are getting?
There is actually a great standard called the Information Technology Infrastructure Library (ITIL) that applies not just to storage administrators, but to other IT personnel such as network administrators and server administrators. Here's a quick web site about ITIL history:
ITIL history can be traced back to the late 1980s, when the British government determined that the level of IT service quality provided to them was not sufficient. The Central Computer and Telecommunications Agency (CCTA), now called the Office of Government Commerce (OGC), was tasked with developing a framework for efficient and financially responsible use of IT resources within the British government and the private sector.
The goal was to develop an approach that would be vendor-independent and applicable to organizations with differing technical and business needs. This resulted in the creation of the ITIL.
This standard spread from the UK to other governments in Europe, and is now being adopted worldwide by government agencies, non-profit organizations and commercial enterprises. IBM, of course, has been involved along the way, encouraging this set of best practices to take hold.
ITIL provides a common vocabulary that puts everyone in the IT industry on the same page, with the ultimate goal of helping companies run their IT organizations more efficiently.
ITIL provides recommendations, or best practices, for managing the way IT provides services to the rest of the organization, in the same way you would the rest of your business, with a defined set of processes.
While ITIL does a great job of describing what needs to be done, it doesn’t describe how to get it done. It doesn’t tell you how to take those best practices and implement them with real-life tools and technology. It’s not prescriptive.
The general process is now referred to as "IT Service Management", and the seven ITIL books are managed by the IT Service Management Forum (itSMF).
ITIL is vendor-independent. You can learn ITIL disciplines at one IT shop, and carry those skills with you when you go to another IT shop that has completely different gear. A common vocabulary would allow employers to post jobs in a consistent manner, and ask questions to those interviewing for the job. You can be ITIL-trained, and even ITIL-certified. IBM offers this training.
Of course, specific skills on how to use specific software to configure storage devices, request change control approvals, or define SAN zones are useful, but often can be picked up on the job by reading the vendor manuals on the specifics. Better yet, you can use IBM TotalStorage Productivity Center, which allows someone to manage a variety of disk, tape and SAN fabric gear from one interface, greatly reducing the learning curve.
Some people find it surprising that it is often more cost-effective, and power-efficient, to run workloads on mainframe logical partitions (LPARs) than a stack of x86 servers running VMware.
Perhaps they won't be surprised any more. Here is an article in eWeek that explains how IBM is reducing energy costs 80% by consolidating 3,900 rack-optimized servers to 33 IBM System z mainframe servers, running Linux, in its own data centers. Since 1997, IBM has consolidated its 155 strategic worldwide data center locations down to just seven.
I am very pleased that IBM has invested heavily in Linux, with support across servers, storage, software and services. Linux is allowing IBM to deliver clever, innovative solutions that may not be possible with other operating systems. If you are in storage, you should consider becoming more knowledgeable in Linux.
The older systems won't just end up in a landfill somewhere. Instead, the details are spelled out in the IBM Press Release:
As part of the effort to protect the environment, IBM Global Asset Recovery Services, the refurbishment and recycling unit of IBM, will process and properly dispose of the 3,900 reclaimed systems. Newer units will be refurbished and resold through IBM's sales force and partner network, while older systems will be harvested for parts or sold for scrap. Prior to disposition, the machines will be scrubbed of all sensitive data. Any unusable e-waste will be properly disposed following environmentally compliant processes perfected over 20 years of leading environmental skill and experience in the area of IT asset disposition.
Whereas other vendors might think that some operational improvements will be enough, such as switching to higher-capacity SATA drives, or virtualizing x86 servers, IBM recognizes that sometimes more fundamental changes are required to effect real changes and real results.
I would like to welcome IBMer Barry Whyte to the blogosphere!
From his bio:
Barry Whyte is a 'Master Inventor' working in the Systems & Technology Group based in IBM Hursley, UK. Barry primarily works on the IBM SAN Volume Controller virtualization appliance. Barry graduated from The University of Glasgow in 1996 with a B.Sc (Hons) in Computing Science. In his 10 years at IBM he has worked on the successful Serial Storage Architecture (SSA) range of products and the follow-on Fibre Channel products used in the IBM DS8000 series. Barry joined the SVC development team soon after its inception and has held many positions before taking on his current role as SVC performance architect. Outside of work, Barry enjoys playing golf and all things to do with Rotary Engines.
To avoid confusion in future posts, I will refer to Barry Whyte as BarryW, and fellow EMC blogger Barry Burke (aka the Storage Anarchist) as BarryB.
I'm in Chicago this week, but it is actually HOTTER here than in my home town of Tucson, Arizona.
There are a lot of exciting conferences and events coming up soon.
SHARE will be in San Diego, August 12-17. SHARE is held twice a year; I attended it for 10 years back when I was lead architect for DFSMS, and then later as the focal point for storage support on the Linux for System z platform. I won't be there this time around, but am glad to see that it is still thriving.
IBM Storage and Storage Networking Symposium
IBM Storage and Storage Networking Symposium will be in Las Vegas, August 19-24. This is a great conference that is focused entirely on the products and solutions I deal with the most. I have attended nearly every one since they started back in the 1990s, and am glad that I will be there this year, making several presentations. If you plan to attend this and want to meet up, drop me a note.
VMworld will be held in San Francisco, September 11-13. IBM is a top reseller of VMware software, and is proud to be a Platinum Sponsor for this event. Look for the panel discussion on "Storage Virtualization", which I am sure will include the SAN Volume Controller.
Meet the Storage Experts
Based on our successful product launch in Second Life back in April, we are now holding meetings every quarter to discuss various IBM System Storage topics. The next one will be September 20 on one of the IBM islands in Second Life. For those without travel budgets to go anywhere, the advantage of our "Second Life" events is that no travel is required; they can be attended from the comfort of your work or home office.
I will post updates on how to register for this event as soon as I know them.
Virtual Worlds Fall 2007 will be held October 10-11, 2007 at the San Jose Convention Center. Sandy Kearney, IBM Global Director of IBM 3D Internet and Virtual Business, will be the keynote speaker. This will include discussion of Second Life.
I am sure there are others, but these are the ones where I am aware of IBM's involvement. I'll be in Chicago next week, meeting with Sales Reps and Business Partners.
The question is whether this is unique or specific to these particular models, or whether it affects all kinds of blade servers because of their very nature and architecture. Stephen indicates that they also have HP C-class enclosures, but since those are still in test mode, he cannot comment on them.
I have no experience with any of HP's blade servers, but I have worked closely with our IBM BladeCenter team to help make sure that our storage, and our SAN equipment, work well together with the BladeCenter, and more importantly, that problems can be diagnosed effectively.
When I asked why people feel they need to know the inner workings of storage, the overwhelming response was to help diagnose problems. This could include problems in placing related data on a potential single point of failure, problems with performance, and problems communicating with 1-800-IBM-SERV.
So, if you have encountered problems diagnosing SAN issues with BladeCenter, or have found it difficult to set up an IBM SAN with blade servers in general, I would be interested in hearing what IBM can do to make the situation better.
Miles per Gallon measures an efficiency ratio (amount of work done with a fixed amount of energy), not a speed ratio (distance traveled in a unit of time).
Given that IOPs and MB/s are the unit of "work" a storage array does, wouldn't the MPG equivalent for storage be more like IOPs per Watt or MB/s per Watt? Or maybe just simply Megabytes Stored per Watt (a typical "green" measurement)?
You appear to be intentionally avoiding the comparison of I/Os per Second and Megabytes per Second to Miles Per Hour?
May I ask why?
This is a fair question, Barry, so I will try to address it here.
It was not a typo; I did mean MPG (miles per gallon) and not MPH (miles per hour). It is always challenging to find an analogy that everyone can relate to when explaining Information Technology concepts that might be harder to grasp. I chose MPG because it is closely related to IOPS and MB/s in four ways:
MPG applies to all instances of a particular make and model. Before Henry Ford and the assembly line, cars were made one at a time, by a small team of craftsmen, and so there could be variety from one instance to another. Today, vehicles and storage systems are mass-produced in a manner that provides consistent quality. You can test one vehicle, and safely assume that all other instances of the same make and model will have similar mileage. The same is true for disk systems: test one disk system and you can assume that all others of the same make and model will have similar performance.
MPG has a standardized measurement benchmark that is publicly available. The US Environmental Protection Agency (EPA) is an easy analogy for the Storage Performance Council, providing the results of various offerings to choose from.
MPG has usage-specific benchmarks to reflect real-world conditions. The EPA offers City MPG for the type of driving you do to get to work, and Highway MPG to reflect the type of driving on a cross-country trip. These serve as a direct analogy to SPC having SPC-1 for Online Transaction Processing (OLTP) and SPC-2 for large file transfers, database queries and video streaming.
MPG can be used for cost/benefit analysis. For example, one could estimate the amount of business value (miles travelled) for the amount of dollar investment (cost to purchase gallons of gasoline, at an assumed gas price). The EPA does this as part of their analysis. This is similar to the way IOPS and MB/s can be divided by the cost of the storage system being tested on SPC benchmark results. The business value of IOPS or MB/s depends on the application, but could relate to the number of transactions processed per hour, the number of music downloads per hour, or the number of customer queries handled per hour, all of which can be assigned a specific dollar amount for analysis. A quick sketch of this arithmetic follows this list.
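To make that cost/benefit idea concrete, here is a minimal sketch in Python. The prices and benchmark numbers are invented for illustration, not actual SPC results; the point is only the arithmetic of dividing throughput by cost, just as you might divide miles travelled by dollars spent on gasoline.

```python
# Hypothetical numbers for illustration only, not actual SPC results.
systems = {
    # name: (SPC-1 IOPS, total price of the tested configuration in USD)
    "Disk System A": (100000, 1000000),
    "Disk System B": (60000, 450000),
}

for name, (iops, price) in systems.items():
    # Dollars per IOPS is analogous to dollars spent per mile travelled.
    print(f"{name}: ${price / iops:.2f} per IOPS")
```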
It seemed that if I was going to explain why standardized benchmarks were relevant, I should find an analogy that has similar features to compare to. I thought about MPH, since it is based on time units like IOPS and MB/s, but decided against it based on an earlier comment you made, Barry, about NASCAR:
Let's imagine that a Dodge Charger wins the overwhelming majority of NASCAR races. Would that prove that a stock Charger is the best car for driving to work, or for a cross-country trip?
Your comparison, Barry, to car-racing brings up three reasons why I felt MPH is a bad metric to use for an analogy:
Increasing MPH, and driving anywhere near the maximum rated MPH for a vehicle, can be reckless and dangerous, risking loss of human life and property damage. Even professional race car drivers will agree there are dangers involved. By contrast, processing I/O requests at maximum speed poses no additional risk to the data, nor possible damage to any of the IT equipment involved.
While most vehicles have top speeds in excess of 100 miles per hour, most Federal, State and Local speed limits prevent anyone from taking advantage of those maximums. Race-car drivers in NASCAR may be able to take advantage of a vehicle's maximum MPH, but the rest of us can't. The government limits the speed of vehicles precisely because of the dangers mentioned in the previous bullet. In contrast, processing I/O requests at faster speeds poses no such dangers, so the government imposes no limits.
Neither IOPS nor MB/s match MPH exactly. Earlier this week, I related IOPS to "Questions handled per hour" at the local public library, and MB/s to "Spoken words per minute" in those replies. If I tried to find a metric based on unit type to match the "per second" in IOPS and MB/s, then I would need to find a unit that equated to "I/O requests" or "MB transferred" rather than something related to "distance travelled".
In terms of time-based units, the closest I could come up with for IOPS was the acceleration rate of zero-to-sixty MPH in a certain number of seconds. Speeding up to 60MPH, then slamming the brakes, and then back up to 60MPH, start-stop, start-stop, and so on, would reflect what IOPS is doing on a request-by-request basis, but nobody drives like this (except maybe the taxi cab drivers here in Malaysia!)
Since vehicles are limited to speed limits in normal road conditions, the closest I could come up with for MB/s would be "passenger-miles per hour", such that high-occupancy vehicles like school buses could deliver more passengers than low-occupancy vehicles with only a few passengers.
Neither start-stops nor passenger-miles per hour have standardized benchmarks, so they don't work well for comparison between vehicles. If you or anyone can come up with a metric that will help explain the relevance of standardized benchmarks better than the MPG that I already used, I would be interested in it.
You also mention, Barry, the term "efficiency", but mileage is about "fuel economy". Wikipedia is quick to point out that while the fuel efficiency of petroleum engines has improved markedly in recent decades, this does not necessarily translate into the fuel economy of cars. The same can be said about disk systems: better internal bandwidth on the backplane between controllers, and faster HDD, do not necessarily translate to better external performance of the disk system as a whole. You correctly point this out in your blog about the DMX-4:
Complementing the 4Gb FC and FICON front-end support added to the DMX-3 at the end of 2006, the new 4Gb back-end allows the DMX-4 to support the latest in 4Gb FC disk drives.
You may have noticed that there weren't any specific performance claims attributed to the new 4Gb FC back-end. This wasn't an oversight, it is in fact intentional. The reality is that when it comes to massive-cache storage architectures, there really isn't that much of a difference between 2Gb/s transfer speeds and 4Gb/s.
Oh, and yes, it's true - the DMX-4 is not the first high-end storage array to ship a 4Gb/s FC back-end. The USP-V, announced way back in May, has that honor (but only if it meets the promised first shipments in July 2007). DMX-4 will be in August '07, so I guess that leaves the DS8000 a distant 3rd.
This also explains why the IBM DS8000, with its clever "Adaptive Replacement Cache" algorithm, has such high SPC-1 benchmarks despite the fact that it still uses 2Gbps drives inside. Given that the difference between 2Gbps and 4Gbps on the back-end doesn't matter, why would it matter which vendor came first, second or third, and why call it a "distant 3rd" for IBM? How soon would IBM need to announce similar back-end support for it to be a "close 3rd" in your mind?
I'll wrap up with your excellent comment that Watts per GB is a typical "green" metric. I strongly support the whole "green initiative", and I used "Watts per GB" last month to explain how tape is less energy-consumptive than paper. I see on your blog you have used it yourself here:
The DMX-3 requires less Watts/GB in an apples-to-apples comparison of capacity and ports against both the USP and the DS8000, using the same exact disk drives
It is not clear if "requires less" means "slightly less" or "substantially less" in this context, and I have no facts from my own folks within IBM to confirm or deny it. Given that tape is orders of magnitude less energy-consumptive than anything EMC manufactures today, the point is probably moot.
I find it refreshing, nonetheless, to have agreed-upon "energy consumption" metrics to make such apples-to-apples comparisons between products from different storage vendors. This is exactly what customers want to do with performance as well, without necessarily having to run their own benchmarks or work with specific storage vendors. Of course, Watts/GB consumption varies by workload, so to make such comparisons truly apples-to-apples, you would need to run the same workload against both systems. Why not use the SPC-1 or SPC-2 benchmarks to measure the Watts/GB consumption? That way, EMC can publish the DMX performance numbers at the same time as the energy consumption numbers, and then HDS can follow suit for its USP-V.
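As a rough sketch of what such a combined measurement might look like, here is some Python. The power draws and capacities are invented figures, not measurements from any real product; the idea is simply to record average power while each system runs the same SPC workload, then normalize by usable capacity.

```python
def watts_per_gb(avg_watts, usable_gb):
    # Average power draw during the benchmark run, divided by capacity.
    return avg_watts / usable_gb

# Hypothetical readings taken while each system runs the same SPC-1 workload:
print(watts_per_gb(avg_watts=6500.0, usable_gb=50000.0))  # System A: 0.13 W/GB
print(watts_per_gb(avg_watts=5200.0, usable_gb=50000.0))  # System B: 0.104 W/GB
```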
I'm on my way back to the USA soon, but wanted to post this now so I can relax on the plane.
Wrapping up this week's exploration of disk system performance, today I will cover the Storage Performance Council (SPC) benchmarks, and why I feel they are relevant to help customers make purchase decisions. This all started to address a comment from EMC blogger Chuck Hollis, who expressed his disappointment in IBM as follows:
You've made representations that SPC testing is somehow relevant to customers' environments, but offered nothing more than platitudes in support of that statement.
Apparently, while everyone else in the blogosphere merely states their opinions and moves on, IBM is held to a higher standard. Fair enough, we're used to that. Let's recap what we covered so far this week:
Monday, I explained how seemingly simple questions like "Which is the tallest building?" or "Which is the fastest disk system?" can be steeped in controversy.
Tuesday, I explored what constitutes a disk system. While there are special storage systems that include HDD that offer tape-emulation, file-oriented access, or non-erasable non-rewriteable protection, it is difficult to get apples-to-apples comparisons with storage systems that don't offer these special features. I focused on the majority of general-purpose disk systems, those that are block-oriented and direct-access.
Today, I will explore ways to apply these metrics to measure and compare storageperformance.
Let's take, for example, an IBM System Storage DS8000 disk system. This has a controller that supports various RAID configurations, cache memory, and HDD inside one or more frames. Engineers who are testing individual components of this system might run specific types of I/O requests to test out performance or validate certain processing:
100% read-hit, this means that all the I/O requests are to read data expected to be in the cache.
100% read-miss, this means that all the I/O requests are to read data expected NOT to be in the cache, and must go fetch the data from HDD.
100% write-hit, this means that all the I/O requests are to write data into cache.
100% write-miss, this means that all the I/O requests are to bypass the cache, and are immediately de-staged to HDD. Depending on the RAID configuration, this can result in actually reading or writing several blocks of data on HDD to satisfy this I/O request.
This is known affectionately in the industry as the "four corners" test, because you can show the four workloads on a box, with writes on the left, reads on the right, hits on the top, and misses on the bottom. Engineers are proud of these results, but these workloads do not reflect any practical production workload. At best, since all I/O requests are one of these four types, the four corners provide an expectation range, from the worst performance (most often write-miss in the lower left corner) to the best performance (most often read-hit in the upper right corner), that you might get with a real workload.
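Here is a small sketch of that idea, with invented IOPS figures (no real product is being described): the four component-test results bound the range within which a real, mixed workload should fall.

```python
# Hypothetical four-corners results for a disk system (numbers invented):
four_corners_iops = {
    "read-hit": 250000,   # best case: served entirely from cache
    "write-hit": 180000,
    "read-miss": 40000,
    "write-miss": 25000,  # worst case: de-staged straight to HDD
}

# A real workload is a mix of all four request types, so expect it to
# land somewhere between the worst and best corners:
print(min(four_corners_iops.values()), "to",
      max(four_corners_iops.values()), "IOPS")
```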
To understand what is needed to design a test that is more reflective of real business conditions, let's go back to yesterday's discussion of fuel economy of vehicles, with mileage measured in miles per gallon. The How Stuff Works website offers the following description of the two measurements taken by the EPA:
The "city" program is designed to replicate an urban rush-hour driving experience in which the vehicle is started with the engine cold and is driven in stop-and-go traffic with frequent idling. The car or truck is driven for 11 miles and makes 23 stops over the course of 31 minutes, with an average speed of 20 mph and a top speed of 56 mph.
The "highway" program, on the other hand, is created to emulate rural and interstate freeway driving with a warmed-up engine, making no stops (both of which ensure maximum fuel economy). The vehicle is driven for 10 miles over a period of 12.5 minutes with an average speed of 48 mph and a top speed of 60 mph.
Why two different measurements? Not everyone drives in a city in stop-and-go traffic. Having only one measurement may not reflect the reality that you may travel long distances on the highway. Offering both city and highway measurements allows consumers to decide which metric relates closer to their actual usage.
Should you expect your actual mileage to be the exact same as the standardized test? Of course not. Nobody drives exactly 11 miles in the city every morning with 23 stops along the way, or 10 miles on the highway at the exact speeds listed. The EPA's famous phrase "your mileage may vary" has been quickly adopted into popular culture's lexicon. All kinds of factors, like weather, distance, and driving style, can cause people to get better or worse mileage than the standardized tests would estimate.
Want more accurate results that reflect your driving pattern, in the specific conditions that you are most likely to drive in? You could rent different vehicles for a week and drive them around yourself, keeping track of where you go, how fast you drove, and how many gallons of gas you purchased, so that you can then repeat the process with another rental, and so on, and then use your own findings to base your comparisons. Perhaps you find that your results are always 20% worse than EPA estimates when you drive in the city, and 10% worse when you drive on the highway. Perhaps you drive among mountains and hills, drive too fast, or run the air conditioner too cold.
If you did this with five or more vehicles, and ranked them best to worst from your own findings, and also ranked them best to worst based on the standardized results from the EPA, you likely will find the order to be the same. The vehicle with the best standardized result will likely also have the best result from your own experience with the rental cars. The vehicle with the worst standardized result will likely match the worst result from your rental cars.
(This will be one of my main points: standardized estimates don't have to be accurate to be useful in making comparisons. The comparisons and decisions you would make with estimates are the same as you would have made with actual results, or with customized estimates based on current workloads. Because the rankings are in the same order, they are relevant and useful for making decisions based on those comparisons.)
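A tiny sketch of that ranking argument, with invented EPA figures: scale every estimate by your personal "always 20% worse" factor, and the best-to-worst ordering is unchanged, so the purchase decision is unchanged too.

```python
# Invented EPA city estimates for five hypothetical vehicles:
epa_city_mpg = {"Sedan": 28, "Hatchback": 32, "SUV": 18, "Pickup": 15, "Coupe": 25}
personal_factor = 0.80  # suppose you always get 20% worse mileage than the EPA estimate

your_mpg = {car: mpg * personal_factor for car, mpg in epa_city_mpg.items()}

def ranked(d):
    return sorted(d, key=d.get, reverse=True)

# Multiplying every estimate by the same factor never reorders the list:
assert ranked(epa_city_mpg) == ranked(your_mpg)
print(ranked(your_mpg))  # ['Hatchback', 'Sedan', 'Coupe', 'SUV', 'Pickup']
```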
Most people shopping around for a new vehicle do not have the time or patience to do this with rental cars. They can use the EPA-certified standardized results to make a "ball-park" estimate of how much they will spend on gasoline per year, decide only on cars that might go a certain distance between two cities on a single tank of gas, or merely to rank the vehicles being considered. While mileage may not be the only metric used in making a purchase decision, it can certainly be used to help reduce your consideration set and factor in with other attributes, like the number of cup-holders, or leather seats.
In this regard, the Storage Performance Council has developed two benchmarks that attempt to reflect normal business usage, similar to "City" and "Highway" driving measurements.
SPC-1 consists of a single workload designed to demonstrate the performance of a storage subsystem while performing the typical functions of business critical applications. Those applications are characterized by predominately random I/O operations and require both queries as well as update operations. Examples of those types of applications include OLTP, database operations, and mail server implementations.
SPC-2 consists of three distinct workloads designed to demonstrate the performance of a storage subsystem during the execution of business critical applications that require the large-scale, sequential movement of data. Those applications are characterized predominately by large I/Os organized into one or more concurrent sequential patterns. A description of each of the three SPC-2 workloads is listed below as well as examples of applications characterized by each workload.
Large File Processing: Applications in a wide range of fields, which require simple sequential processing of one or more large files, such as scientific computing and large-scale financial processing.
Large Database Queries: Applications that involve scans or joins of large relational tables, such as those performed for data mining or business intelligence.
Video on Demand: Applications that provide individualized video entertainment to a community of subscribers by drawing from a digital film library.
The SPC-2 benchmark was added when people suggested that not everyone runs OLTP and database transactional update workloads, just as the "Highway" measurement was added to address the fact that not everyone drives in the City.
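As a quick way to summarize the descriptions above, here is a toy lookup in Python. The mapping is my own reading of the SPC workload descriptions, not anything published by the SPC itself.

```python
# Which benchmark better resembles a given application profile:
benchmark_for = {
    # Predominately random, small-block I/O with queries and updates:
    "OLTP": "SPC-1",
    "database updates": "SPC-1",
    "mail server": "SPC-1",
    # Large-scale, sequential movement of data:
    "large file processing": "SPC-2",
    "large database queries": "SPC-2",
    "video on demand": "SPC-2",
}

print(benchmark_for["mail server"])      # SPC-1
print(benchmark_for["video on demand"])  # SPC-2
```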
If you are one of the customers out there willing to spend the time and resources to do your own performance benchmarking, either at your own data center or with the assistance of a storage provider, I suspect most, if not all, of the major vendors (including IBM, EMC and others), and perhaps even some of the smaller start-ups, would be glad to work with you.
If you want to gather performance data of your actual workloads, and use this to estimate how your performance might be with a new or different storage configuration, IBM has tools to make these estimates, and I suspect (again) that most, if not all, of the other storage vendors have developed similar tools.
For the rest of you who are just looking to decide which storage vendors to invite on your next RFP, and which products you might like to investigate that match the level of performance you need for your next project or application deployment, the SPC benchmarks might help you with this decision. If performance is important to you, factor these benchmark comparisons in with the rest of the attributes you are looking for in a storage vendor and a storage system.
In my opinion, for some people the SPC benchmarks provide real value in this decision-making process. They are proportionally correct, in that even if your workload achieves only a portion of the SPC estimate, storage systems with faster benchmarks will provide you better performance than storage systems with lower benchmark results. That is why I feel they can be relevant in making valid comparisons for purchase decisions.
Hopefully, I have provided enough "food for thought" on this subject to support why IBM participates in the Storage Performance Council, why the performance of the SAN Volume Controller can be compared to the performance of other disk systems, and why we at IBM are proud of the benchmark results in our recent press release.
Continuing our exploration this week into the performance of disk systems, today I will cover the metrics to measure performance. Why do people have metrics?
Help provide guidance in decision making prior to purchase
Help manage your current environment
Help drive changes
Several bloggers suggested that perhaps an analogy to vehicles would be reasonable, given that cars and trucks are expensive pieces of engineering equipment, and people make purchase decisions between different makes and models.
In the United States, the Environmental Protection Agency (EPA) is the government entity responsible for measuring the fuel economy of vehicles, using the metric Miles Per Gallon (mpg). Specifically, these are U.S. miles (not nautical miles) and U.S. gallons, not imperial gallons. It is important when defining metrics that you are precise on the units involved.
Since nearly all vehicles burn gallons of gasoline and travel miles of distance, this is a great metric for comparing all kinds of vehicles, including motorcycles, cars, trucks and airplanes. The EPA has a fuel economy website to help people make these comparisons. Manufacturers are required by law to post their vehicles' fuel-economy ratings, as certified by the EPA, on the window stickers of nearly every new vehicle sold in the U.S.; vehicles that have gross-vehicle-weight ratings over 8,500 pounds are the exception.
What about storage performance? What could we use as the "MPG"-like metric that would allow you to compare different makes and models of storage?
The two most commonly used are I/O requests per second (IOPS) and Megabytes transferred per second (MB/s). To understand the difference in each one, let's go back to our analogy from yesterday's post.
(A woman wants to ask the local public library a question. She picks up the phone and dials the number of the one down the street. A man working at the library hears the phone ring and answers it with "Welcome to the Public Library! How can I help you?" She asks "What is the capital city of Ethiopia?" He replies "Addis Ababa" and hangs up. Satisfied with this response, she hangs up too. In this example, the query for information was the I/O request, initiated by the lady, to the public library target.)
In this example, it might have only taken 1 second to actually provide the answer, but it might have taken 10-30 seconds to pick up the phone, hear the request, respond, and then hang up the phone. If one person is able to do this in 10 seconds, on average, then he can handle 360 questions per hour. If another person takes 30 seconds, then only 120 questions per hour. Many business applications read or write less than 4KB of information per I/O request, and as such the dominant factor is not the amount of time to transfer the data, but how quickly the disk system can respond to each request. IOPS is very much like counting "Questions handled per hour" at the public library. To be more specific on units, we may specify the specific block size of the request, say 512 bytes or 4096 bytes, to make comparisons consistent.
Now suppose that instead of asking for something with a short answer, you ask the public library to read you an article from a magazine, identify all the movies and show times at a local theatre, or recite a work from Shakespeare. In this case, the time it took to pick up the phone and respond is very small compared to the time it takes to deliver the information, which could be measured instead in words per minute. Some employees of the library may be faster talkers, having perhaps worked in auction houses in a prior job, and can deliver more words per minute than other employees. MB/s is very much like counting "Spoken words per minute" at the public library. To be more specific on units, we may request a specific amount of information, say the words contained in "Romeo and Juliet", to make comparisons consistent.
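To put rough numbers on the library arithmetic above, here is a back-of-the-envelope sketch in Python. The 5 ms per-request overhead and 100 MB/s transfer rate are assumptions chosen for illustration, not figures for any real system: for small requests the per-request overhead dominates, so IOPS is the telling metric; for large transfers the data movement dominates, so MB/s is.

```python
# Questions handled per hour, given seconds spent per question:
for seconds in (10, 30):
    print(3600 // seconds, "questions per hour")  # 360, then 120

# Compare per-request overhead to data-transfer time (assumed figures):
overhead_ms = 5.0           # time to accept, process, and complete a request
transfer_rate_mb_s = 100.0  # assumed sustained transfer rate
for size_mb, label in ((0.004, "4 KB request"), (1024.0, "1 GB transfer")):
    transfer_ms = size_mb / transfer_rate_mb_s * 1000
    print(f"{label}: {overhead_ms} ms overhead vs {transfer_ms:.2f} ms transfer")
```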
Now that we understand the metrics involved, tomorrow we can discuss how to use these in the measurement process.