Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Tony Pearson is a Master Inventor and Senior Software Engineer for the IBM Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
I welcome HDS into the "Super High-End" club. Those who follow my blog might remember that I suggested that analysts like IDC, who use "Entry Level", "Midrange" and "Enterprise" as categories, may need a new category: Super High-End.
I was not surprised to see EMC, who now drops further down in perception, dispute HDS's recent SPC-1 benchmarks. Fellow blogger BarryB of EMC posted [IBM vs. Hitachi] on his Storage Anarchist blog, pointing out that IBM's SAN Volume Controller (SVC) is still much faster, and less expensive, than the USP-V.
So, just in case you haven't seen all the press releases, here is a quick recap of the results: IBM SVC 4.2 is still in first place, then HDS USP-V, then IBM System Storage DS8300. Just for comparison, I include our IBM System Storage DS4800 midrange disk results, so you can appreciate the difference between midrange and high-end. There are other products from other vendors; I just point out a few from IBM and HDS in this graph.
******************************************************************** 272,505 IOPS - IBM SVC 4.2
************************************************** 200,245 IOPS - HDS USP-V
******************************* 123,033 IOPS - IBM DS8300
*********** 45,014 IOPS - IBM DS4800
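As a side note for anyone who wants to redraw this kind of chart from published SPC-1 figures, here is a minimal Python sketch; the IOPS numbers are taken straight from the results above, and the 56-character maximum bar width is an arbitrary choice of mine.

```python
# Render the SPC-1 results above as a proportional ASCII bar chart.
# Bars are scaled so the largest result fills a fixed character budget.

results = [
    ("IBM SVC 4.2", 272_505),
    ("HDS USP-V", 200_245),
    ("IBM DS8300", 123_033),
    ("IBM DS4800", 45_014),
]

def bar_chart(rows, width=56):
    top = max(iops for _, iops in rows)
    lines = []
    for name, iops in rows:
        bar = "*" * max(1, round(iops / top * width))
        lines.append(f"{bar} {iops:,} IOPS - {name}")
    return "\n".join(lines)

print(bar_chart(results))
```

Scaling every bar against the largest result keeps the visual proportions honest, which a hand-drawn chart can easily get wrong.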
HDS tried to coin the phrase "Enterprise Storage System" for a comparison that would leave the SVC 4.2 out. Given that the SVC has five-nines (99.999%) availability, has non-disruptive upgrade and firmware update capability, has more than the two processors typical of midrange products, and can connect to mainframes via the z/VM, z/VSE and Linux on System z operating systems, there is no reason to pretend SVC isn't Enterprise-class.
The irony is that EMC now looks very lonely as one of the last remaining major storage vendors not to participate in the standardized benchmarks that help customers make purchase decisions, as mentioned both by IBM's BarryW in "I guess that only leaves EMC" and by HDS's Claus Mikkelsen in "Olympics of Storage".
Earlier this year, EMC's Chuck Hollis opined [Storage Scorecard] that the EMC DMX and HDS TagmaStore USP were high-end boxes; I would speculate that both of these would fall somewhere between the DS4800 and DS8300 on the graph above. If that is the case, it is impressive that HDS was able to re-engineer their USP-V to be 2-3x faster than its predecessor, the USP.
Not all workloads are the same, and your mileage may vary. While I can't speak for HDS, the folks over at EMC have assured me, in written comments on this blog, that there is nothing preventing their customers from publishing their own performance comparisons between EMC and non-EMC equipment. I would encourage every customer to do this, between IBM and HDS, HDS and EMC, and between IBM and EMC, to help shed even more light on this area. In fact, you can even run your own SPC benchmarks to see how your own environment compares to the published results.
Of course, performance is just one attribute on which to choose a storage vendor, and to choose specific products, models or features. For more information about the Storage Performance Council and the SPC-1 and SPC-2 benchmarks, see my week-long series on SPC benchmarks, listed in reverse chronological order.
Go to the official Storage Performance Council website to read the details of the SPC-1 results.
Today, 13.5% of EMC's sales force is female, the company says, compared with 40% at International Business Machines Corp. and 29% at CA Inc., a big software vendor, those companies say. According to the 2000 U.S. census, about 25% of high-tech employees nationally were women.
IBM recognizes that diversity provides unique advantages in dealing with a global marketplace. Not only are women well represented on our IT sales force, they are also well represented on our board of directors, our Worldwide Management Committee, and our executive team overall, as well as in technical positions such as IBM Fellows, Distinguished Engineers, and members of the IBM Academy of Technology. Working Mother magazine has rated IBM one of the top 10 "Best Companies" for women to work for in each of the 18 years that it has published this list.
In 2006, 51 camps called EXITE (Exploring Interests in Technology and Engineering) were held worldwide in 33 countries. The hope is to get young girls to pursue college degrees in computer science, math and engineering, so that they can then help fill the shortage of technical resources in IT.
So, if you are a woman discouraged at your current place of employment, and are looking for exciting new opportunities in IT, come check out working for IBM! [Read More]
Over the past year and a half, I have been focused on explaining WHAT IBM System Storage was, and WHY IBM should be considered when making a storage purchase decision. Let's recap some of IBM's accomplishments during this time:
Today, October 1, I switch over to HOW to get it done. In my new job role, I will be leading a series of projects and workshops on how to make your data center more green, how to get more value from the information you have, how to better protect your information from unauthorized access or unethical tampering, how to develop and deploy a site-wide business continuity plan, and how to centralize your management using open industry standards.
I will still be in Tucson, but am moving from building 9032 over to 9070 to be closer to the rest of my team.
I was in Raleigh this week for business meetings, and had dinner last night at a Japanese teppanyaki restaurant. The man next to me was dining alone, and said he worked for Cisco, a big company. "Have you heard of it?" he asked. Of course, I told him, I work for IBM, and IBM and Cisco have a strong working relationship, using each other's products in both directions. He said he understood why they would use IBM, but why would IBM buy anything from them? And then he said, "Oh yes, your cafeteria".
At this point we realized he was talking about SYSCO, the food company, not Cisco, the storage networking technology partner. We both had a good laugh.
Which brings me to think of other "mis-heard" or "mis-interpreted" items that might have caught people off guard because they sounded similar.
zFS versus ZFS
Some things are case-sensitive. Lower-case zFS is the hierarchical file system for the z/OS mainframe environment, originally called the "Episode" file system, which IBM acquired from Transarc. z/OS supports two file systems, HFS and zFS. Meanwhile, upper-case ZFS is one of the file systems available for Sun Solaris. Apple Mac OS is switching from its own HFS, different from the z/OS version, over to Sun's ZFS.
packs versus PACS
Older mainframers call disk volumes "packs". This started in the days when disks were removable, and you could pack and unpack them into the drive unit.
PACS, on the other hand, refers to the "Picture Archive and Communication System" application environment used by hospitals and medical facilities to store and share X-ray, Cardiology and Radiology images. Today, modern pieces of medical equipment are called "modalities" and connect directly to NAS storage via NFS or CIFS protocols. The images are immediately digitized and sent to disk, then tape, for long-term archive storage. IBM's Grid Medical Archive Solution (GMAS) is designed specifically for this environment.
rack versus RAC
Perhaps my favorite was when someone asked a high-level executive at a conference whether their storage product supported Oracle RAC, and the response was that it supported anyone's rack, so long as it met the 19-inch standard. Everyone burst out laughing, and someone probably had to explain to him afterward what was going on.
Oracle RAC refers to Real Application Clusters, allowing multiple Oracle servers to work together as a system. A "rack" is just the powered shelf, typically 19 inches wide and 25U or 42U tall, that allows modular servers, storage or network gear to be placed together in a data center. A "U" is 1.75 inches. If you have ever used a 3.5-inch or 5.25-inch floppy diskette, then you already know the 2U and 3U sizes.
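The sizes just mentioned are all simple multiples of the 1.75-inch rack unit; a quick arithmetic check:

```python
# A "U" (rack unit) is 1.75 inches; the common form-factor sizes mentioned
# above are just multiples of it.
U_INCHES = 1.75

def u_to_inches(u):
    return u * U_INCHES

assert u_to_inches(2) == 3.5    # roughly the 3.5-inch floppy diskette size
assert u_to_inches(3) == 5.25   # roughly the 5.25-inch floppy diskette size
print(f"A full 42U rack offers {u_to_inches(42)} inches of vertical space")
```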
I am sure there are many other examples of similar sounding terms and phrases. If you have any to contribute, post a comment below!
Guy Kawasaki is hosting a Web Conference next week on The Art of Evangelism. By this he is referring to promoting products and services, rather than the traditional definition: the preaching or promulgation of the gospel.
A few years ago, I myself had the official title of "Technical Evangelist" for the IBM System Storage product line. I never liked the title, and asked to use something else, but since I was part of a team of "Technical Evangelists," I had to keep it. A lot of companies were using this as a title, I was told, and everyone knew that it was not a religious reference, but a marketing one.
Sometimes, words do not translate well into other countries or cultures. Four years ago, during the week of September 11, 2003, I traveled to Kuwait, Qatar and UAE on a business trip to present the latest on our storage products. On arrival in Kuwait, I had to fill out my visa application to enter the country, and it asked for my "occupation/title", but there were not enough spaces to write "Technical Evangelist", so I just entered "Evangelist".
The two Kuwaitis behind the desk looked it up in their Arabic/English dictionary, discussed it, and weren't sure whether they should shoot me, or take me to the back room to videotape my proper beheading. Our official host came over to ask what the delay was, and they showed her the dictionary translation. She asked me, "Why would you put Evangelist as your title?" So I gave her my business card, and told her that my full title of Technical Evangelist did not fit in the space provided.
She explained to the two behind the desk that I had misunderstood the question, and that the actual word intended was "Engineer", which I had misspelled. She showed them the agenda of the IBM Technical Conference I was speaking at, and the list of oil and construction companies that were attending. They looked up the new title "Engineer", agreed the translation was suitable for entry, and conceded that the two words, Evangelist and Engineer, used enough similar letters that they could understand how one might be misspelled for the other.
Our limo took a small detour to the middle of the desert so that we could burn and bury the ashes of the remainder of my business cards before arriving at the hotel. All of my PowerPoint slides that listed my title were changed to "Technical Engineer". The events themselves went very well, as IT people are the same all over the world, and had no problem setting aside religious or political differences in an effort to learn more about technology.
When I got back to the United States, I shared my experience with my fellow team-mates, most of whom never leave the country and would never have thought this might happen. Management agreed to let us change our titles. That was good for me, as I had to order a new box of business cards anyway.
Last year, I became "Manager of Brand Marketing Strategy" for the IBM System Storage product line. Now on business trips I just write "Manager" on the occupation/title line. It fits on every form I have ever had to fill out, and translates properly into every language.
Today I spoke at the IBM Think Green Roadshow in Phoenix, Arizona. This is just one stop on a 15-city tour to help make people aware of Green data center issues. Here is the schedule for the remaining cities. Contact your local IBM rep for details.
Victor Ferreira was our moderator and host. He is the site-level executive for the 2,000 IBM employees in the Phoenix area, and manages the Public Sector for our Western region.
The first speaker was Dave McCoy, IBM principal in our Data Center services group. He explained IBM's Project Big Green and the Energy Efficiency Initiative, and went into details on how IBM can act as general contractor to design, plan and build the ideal Green Data Center for you. IBM can also retrofit existing buildings with new technologies like stored cooling, optimized airflow assessments, and modular data center floorspace. Not related to energy, but still important to our environment, is IBM Asset Recovery Services, where IBM can take all those old PC monitors, keyboards and other outdated equipment and refurbish them, or melt them down to recapture useful metals and plastics, disposing of the rest in an environmentally-friendly, non-toxic manner.
I was the second speaker, covering "How to get it done". While Dave covered the issues and technologies available, I explained how to put it all into practice. This includes IT systems assessments, health audits, and thermal profiling. Using server and storage virtualization, you can increase resource utilization and reduce energy waste. I also covered IBM's Cool Blue product line, which includes the IBM PowerExecutive software to monitor your IT environment, and the "Rear Door Heat Exchanger" that uses chilled water to remove as much as 60% of the heat coming out of the back of a server rack, greatly reducing hot-spots on the data center floor and allowing you to run the entire room at warmer, less expensive temperatures.
On the server side, I covered IBM's System z mainframe and the BladeCenter as examples of how innovative technologies can be used to run more applications with less energy. The new System p570, based on the energy-intelligent POWER6 processor, has twice the performance for the same amount of power as its POWER5 predecessor. On the storage side, I explained how Information Lifecycle Management (ILM), storage virtualization, and the use of a blended disk and tape environment can greatly reduce energy costs.
Reps from our many technology partners Eaton, APC, Schneider Electric, Liebert, and Anixter were there to support this event.
The session ended with a Q&A panel with Dave McCoy, myself, and Greg Briner from IBM Global Financing. IBM is able to offer creative "project financing" that can often match the actual monthly savings, resulting in net zero cost to your operational budget, with payback periods as little as 2.5 years.
To learn more about IBM's efforts to help clients create "Green" data centers, click Green Data Center.
To get beyond the simple statistics of vendor popularity, we looked at the number and combinations of vendors with which enterprises work. Many were customers of one or two storage providers, but the rest were customers of up to six storage providers. More than one-third were customers of systems vendors only, bypassing storage specialists.
Comparisons between solutions vendors and storage component vendors are not new. One could argue that this can be compared to supermarkets and specialty shops.
Supermarkets offer everything you need to prepare a meal. You can buy your meat, bread, cheese, and extras all with one-stop shopping. In a sense, IBM, HP, Sun and Dell offer this to clients who prefer this approach. Not surprisingly, the two leaders in overall storage hardware, IBM and HP, are also the two best positioned to offer a complete set of software, services, servers and storage.
IBM and HP are also the leaders in tape. While Forrester reports that many large enterprises in North America prefer to buy disk from storage specialists, others have found that customers prefer to buy their tape from solution providers. Recently, Byte and Switch reported that LTO Hits New Milestones: the LTO consortium (IBM, HP, and Quantum) has collectively shipped over 2 million LTO tape drives and over 80 million LTO tape cartridges. Perhaps this is because tape is part of an overall backup, archive or space management solution, and customers trust a solution vendor over a storage specialist.
Where possible, IBM brings synergy between its servers and storage. For example, we just announced the IBM BladeCenter Boot Disk System, a 2U-high unit that supports up to 28 blade servers, ideal for applications running under Windows or Linux, and helpful in reducing energy consumption for those interested in a "Green" data center.
Some people prefer buying their meat at the slaughterhouse, bread at the French pastry shop, and so on. Storage specialists focus on just storage, leaving the rest of the solution, like servers, to be purchased separately from someone else. Storage vendors like NetApp, EMC, HDS and others offer storage components to customers that like to do their own "system integration", or to those that are large enough to hire their own "systems integrator".
Storage specialists recognize that not everybody is a "specialty shop" shopper. HDS has done well selling their disk through solution vendors like HP and Sun. EMC sells its gear through solution vendor Dell.
Interestingly, I have met clients who prefer to buy IBM System Storage N series from IBM, because IBM is a solution vendor, and others that prefer to buy comparable NetApp equipment directly from NetApp, because they are a storage component vendor.
I mostly buy my groceries at a supermarket, but have, on occasion, bought something from the local butcher, baker or candlestick maker. And if you are ever in Tucson, you might be able to find Mexican tamales sold by a complete stranger standing outside of a Walgreens pharmacy, the ultimate extreme of specialization. You can get a dozen tamales for ten bucks, and in my experience they are usually quite good. Theoretically, if you get sick, or they don't taste right, you have no recourse, and will probably never see that stranger again to complain to. (And no, before I get flamed, I am not implying any major vendor mentioned above is like this tamale vendor.)
Of course, nothing is starkly black and white, and comparisons like this are just to help provide context and perspective. But if you are looking for a complete IT solution that works, from software and servers to storage and financing, come to the vendor you can trust, IBM.
A few weeks ago, my Tivo(R) digital video recorder (DVR) died. All of the digital clocks in my house were flashing 12:00, so I suspect there was a power surge while I was at the office. The only other item to die was the surge protector, which did what it was supposed to do: give up its own life to protect the rest of my equipment. Somehow, though, it did not protect my Tivo.
I opened a problem ticket with Sony, and they sent me instructions on how to send it to another state to get it repaired. Amusingly, the instructions included "Please make a backup of the drive contents before sending the unit in for repair." Excuse me? How am I supposed to do that, exactly?
My model has only a single 80GB drive, so my friend and I removed the drive and attached it to one of our other systems to see if anything was salvageable. It failed every diagnostic test. There was just not enough to read for it to be usable elsewhere.
This is typical of many home systems. They are not designed for robust usage, high availability, nor any form of backup/recovery process. Some of the newer models havetwo drives in a RAID-1 mode configuration, but most have many single points of failure.
And certainly, it is not mission-critical data. Life goes on without the last few episodes of Jack Bauer on "24", or the various Food Network shows that I recorded for items I plan to bake some day. For the past few weeks, I have spent more time listening to the radio and reading books. Somehow, even though my television runs fine without my Tivo, watching TV in "real time" just isn't the same.
I suspect that if you gave someone a method to do the backup, most would not bother to use it. People are relying more and more heavily on their home-based information storage systems: digital music, video and cherished photographs. Perhaps experiencing a "loss" will help them appreciate backup/recovery systems much more than they do today.
The smart people at the University of Pittsburgh manage five campuses and over 33,000 students, and needed to create an enterprise storage solution that would give them three key benefits. Of course, they turned to IBM, the number one overall storage hardware vendor, to deliver:
A new storage infrastructure with the capacity to grow with the University of Pittsburgh as needed
Improved system reliability with reduced downtime, and availability 24/7/365
A significantly more manageable storage solution that could lower costs and provide better system efficiency through virtualization
As a result, IBM shipped its 25,000th high-end disk storage system, in this case two IBM System Storage DS8300 models, along with storage virtualization, and other related hardware, software and services, to provide a complete end-to-end solution.
Here is what Jinx Walton, Director of Computing Services and Systems Development at the University of Pittsburgh, had to say about it...
"The University of Pittsburgh supports large enterprise systems, and the number and complexity of new systems continue to grow. To effectively manage these systems it was necessary to identify an enterprise storage solution that would leverage our existing investments in storage, make allocation of storage flexible and responsive to project needs, provide centralized management, and offer the reliability and stability we require. The integrated IBM storage solution met these requirements."
ESG analyst Tony Asaro talks about the many small storage startups having a Billion Dollar Impact on the storage system industry. Tony has counted over 50 storage system vendors now in the marketplace. Is it really that many? Most of the time, the media only focus on the top seven major players, but I agree that big players like IBM should take trends among small startups like this seriously.
EMC blogger Chuck Hollis suggests that this trend might be the start of a squeeze play, where top players and new upstarts squeeze out the middle players like Sun and HDS, in his post Desperate Times In Storage Land?
(His statement that IDC and Gartner have listed EMC as number one in "almost all" market segments is perhaps a bit misleading. IBM is number one in overall storage hardware, as well as leading in tape drives, tape libraries, tape virtualization, and, for that matter, disk virtualization. I don't know if IDC or Gartner count EMC Disk Library in the "tape virtualization" category, or if either analyst distinguishes between "cache-based" and "switch-based" disk virtualization as separate categories. Perhaps Chuck should have qualified this to say "almost all of the market segments that EMC does business in," which of course is better than the other vendors in the middle.)
Often, when looking at disk storage it is easy to focus on comparisons to other disk storage, but disruptive technologies cross boundaries. Already we have seen Flash Memory drives on the IBM BladeCenter, replacing traditional disk drives internal to each blade server. They are smaller than regular disk drives, but big enough to hold the operating system to boot from.
The New York Times has an article by John Markoff, Redefining the Architecture of Memory that talks about IBM's research on "Racetrack Memory".The article is a good read, but here are some interesting excerpts:
Now, if an idea that Stuart S. P. Parkin is kicking around in an I.B.M. lab here is on the money, electronic devices could hold 10 to 100 times the data in the same amount of space.
Currently the flash storage chip business is exploding. Used as storage in digital cameras, cellphones and PCs, the commercially available flash drives with multiple memory chips store up to 64 gigabytes of data.
However, flash memory has an Achilles’ heel. Although it can read data quickly, it is very slow at storing it. That has led the industry on a frantic hunt for alternative storage technologies that might unseat flash.
Mr. Parkin’s new approach, referred to as “racetrack memory,” could outpace both solid-state flash memory chips as well as computer hard disks, making it a technology that could transform not only the storage business but the entire computing industry.
But ultimately, the technology may have even more dramatic implications than just smaller music players or wristwatch TVs, said Mark Dean, vice president for systems at I.B.M. Research.“Something along these lines will be very disruptive,” he said. “It will not only change the way we look at storage, but it could change the way we look at processing information. We’re moving into a world that is more data-centric than computing-centric.”
This technology has the potential to break some of the physical limitations that are currently worrying disk drive designers. I look forward to seeing how this plays out.
Registration for the "Meet the Storage Experts" event in Second Life will close this week for next week's September 20 event. All IBMers, clients and IBM Business Partners are welcome to attend. We will focus this time on DS3000 and N series disk systems, tape systems, and IBM storage networking gear.
If you miss this one, we plan to have another one in November!
Array-based replication does have drawbacks; all externalised storage becomes dependent on the virtualising array. This makes replacement potentially complex. To date, HDS have not provided tools to seamlessly migrate away from one USP to another (as far as I am aware). In addition, there's the problem of "all your eggs in one basket"; any issue with the array (e.g. physical intervention like fire, loss of power, microcode bug etc) could result in loss of access to all of your data. Consider the upgrade scenario of moving to a higher level of code; if all data was virtualised through one array, you would want to be darn sure that both the upgrade process and the new code are going to work seamlessly...
The final option is to use fabric-based virtualisation and at the moment this means Invista and SVC. SVC is an interesting one as it isn't an array and it isn't a fabric switch, but it does effectively provide switching capabilities. Although I think SVC is a good product, there are inevitably going to be some drawbacks, most notably those similar issues to array-based virtualisation (Barry/Tony, feel free to correct me if SVC has a non-disruptive replacement path).
I would argue that the IBM System Storage SAN Volume Controller (SVC) is more like the HDS USP, and less like the Invista. Both SVC and USP provide a common look and feel to the application server, both provide additional cache to external disk, both are able to provide a consistent set of copy services.
IBM designed the SVC so that upgrades can occur non-disruptively. You can replace the hardware nodes, one node at a time, while the SVC system is up and running, without disruption to reading and writing data on virtual disk. You can upgrade the software, one node at a time, while the SVC system is up and running, without disruption to reading and writing data on virtual disk. You can upgrade the firmware on the managed disk arrays behind the SVC, again, without disruption to reading and writing data on virtual disk.
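The node-at-a-time process described above can be sketched abstractly. This is purely illustrative Python, not an actual SVC interface; every class and method name here is hypothetical.

```python
# Illustrative sketch (hypothetical names, not a real SVC API): a rolling,
# node-at-a-time upgrade of a two-node virtualization cluster. I/O keeps
# flowing because the partner node serves requests while each node is
# taken down, updated, and brought back.

class Node:
    def __init__(self, name, version):
        self.name = name
        self.version = version
        self.online = True

class Cluster:
    def __init__(self, nodes):
        self.nodes = nodes

    def io_available(self):
        # Virtual disks stay accessible while at least one node is online.
        return any(n.online for n in self.nodes)

    def rolling_upgrade(self, new_version):
        for node in self.nodes:
            node.online = False          # take one node out of service
            assert self.io_available()   # the partner still serves I/O
            node.version = new_version   # apply the software/firmware update
            node.online = True           # rejoin the cluster
        return all(n.version == new_version for n in self.nodes)

cluster = Cluster([Node("node1", "4.1"), Node("node2", "4.1")])
assert cluster.rolling_upgrade("4.2")    # no point where I/O was unavailable
```

The same loop applies whether you are replacing node hardware, upgrading node software, or updating firmware on the managed arrays behind the cluster: only one member is ever out of service at a time.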
More importantly, SVC has the ultimate "un-do" feature, called "image mode". If for any reason you want to take a virtual disk out of SVC management, you migrate it over to an "image mode" LUN, and then disconnect it from SVC. The "image mode" LUN can then be used directly, with all the file system data intact.
I define "virtualization" as technology that makes one set of resources look and feel like a different set of resources with more desirable characteristics. For SVC, the more desirable characteristics include choice of multi-pathing driver, consistent copy services, improved performance, etc. For EMC Invista, the question is "more desirable for whom?" EMC Invista seems designed more to meet EMC's needs than its customers'. EMC profits greatly from its EMC PowerPath multi-pathing driver and from its SRDF copy services, so it appears to have designed a virtualization offering that:
Continues the use of EMC PowerPath as the multi-pathing driver. SVC, by contrast, supports drivers that are provided at no charge to the customer, as well as those built into each operating system, like MPIO.
Continues the use of array-based copy services, like SRDF, on the underlying disk. SVC provides consistent copy services regardless of the storage vendor being managed.
A post from Dan over at Architectures of Control explains the anti-social nature of public benches. City planners, in an effort to discourage homeless people from sleeping on benches in parks or on sidewalks, design benches that are so uncomfortable to use that nobody uses them. These include benches made of metal that are too hot or too cold during certain months, benches slanted at an angle that dumps you on the ground if you lie down, or benches with dividers so that you must be in an upright seated position to use them.
This is not a disparagement of split-path switch-based designs. Rather, EMC's specific implementation appears to be designed to continue vendor lock-in for its multi-pathing driver, continue vendor lock-in for its copy services when used with EMC disk, and provide only slightly improved data migration capability for heterogeneous storage environments. Other switch-based solutions, such as those from Incipient or StoreAge, had different goals in mind.
Sadly, my IBM colleague BarryW and I have probably spent more words discussing Invista than all eleven EMC bloggers combined this year. While everyone in the industry is impressed by how often EMC can sell "me, too" products with an incredibly large marketing budget, EMC appears not to have set aside such funds for Invista.
If a customer could design the ideal "storage virtualization" solution that would provide them the characteristics they desire the most from storage resources, it would not be anything like an Invista. While there are pros and cons between IBM's SVC and HDS's TagmaStore offerings, the reason both IBM and HDS are the market leaders in storage virtualization is because both companies are trying to provide value to the customer, just in different ways, and with different implementations.
When new technologies are introduced to the marketplace, it is normal for customers to be skeptical.
My sister is a mechanical engineer, so when she needs to create a part or component, she can design it on the computer and then use a "Rapid Prototyping Machine", which acts like a 3D printer, to generate a plastic part that matches the specifications. Some machines do this by taking a hunk of plastic and cutting it down to the appropriate shape, and others use glue and powder to assemble the piece.
But not everything is that simple. Harry Beckwith deals with the issue of selling services and software features in his book "Selling the Invisible". How do you sell a service before it is performed? How do you sell a software feature based on new technology that the customer is not familiar with?
Our good friends over at NetApp, our technology partners for the IBM System Storage N series, developed a "storage savings estimator" tool that can provide good insight into the benefits of the Advanced Single Instance Storage (A-SIS) deduplication feature.
I decided to run the tool to analyze my own IBM Thinkpad C: drive (Windows operating system and programs) and D: drive ("My Documents" folder containing all my data files) to see how much storage savings the tool would estimate. Here are my results:
WINXP-C-07G (C: drive)
Total Number of Directories: 1272
Total Number of Files: 56265
Total Number of Symbolic Links: 0
Total Number of Hard Links: 41996
Total Number of 4k Blocks: 2395884
Total Number of 512b Blocks: 18944730
Total Number of Blocks: 2395884
Total Number of Hole Blocks: 290258
Total Number of Unique Blocks: 1611792
Percentage of Space Savings: 20.61
Scan Start Time: Wed Sep 5 14:37:06 2007
Scan End Time: Wed Sep 5 14:53:51 2007
WINXP-D-07H (D: drive)
Total Number of Directories: 507
Total Number of Files: 7242
Total Number of Symbolic Links: 0
Total Number of Hard Links: 11744
Total Number of 4k Blocks: 3954712
Total Number of 512b Blocks: 31610595
Total Number of Blocks: 3954712
Total Number of Hole Blocks: 3204
Total Number of Unique Blocks: 3524605
Percentage of Space Savings: 10.79
Scan Start Time: Wed Sep 5 14:21:16 2007
Scan End Time: Wed Sep 5 14:34:30 2007
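As a side note, the reported savings percentages appear to follow directly from the counters in the scan output: blocks that are neither holes nor unique are duplicates that deduplication could reclaim. This is my own inference from the numbers above, not NetApp's documented formula, but a quick Python check reproduces both figures:

```python
# Hypothetical reconstruction of the estimator's savings arithmetic,
# inferred from the counters it reports (not NetApp's published formula).
def savings_percent(total_blocks, hole_blocks, unique_blocks):
    # Blocks that are neither holes nor unique are duplicates.
    duplicates = total_blocks - hole_blocks - unique_blocks
    return round(100.0 * duplicates / total_blocks, 2)

print(savings_percent(2395884, 290258, 1611792))  # C: drive -> 20.61
print(savings_percent(3954712, 3204, 3524605))    # D: drive -> 10.79
```

Both results match the "Percentage of Space Savings" lines in the scan output above.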
I am impressed with the results, and have a better understanding of the way A-SIS works. A-SIS looks at every 4KB block of data, and creates a "fingerprint", a type of hash code of the contents. If two blocks have different fingerprints, their contents are known to be different. If two blocks have the same fingerprint, it is still mathematically possible for their contents to differ (a hash collision), so A-SIS schedules a byte-for-byte comparison to verify they are indeed the same. This might happen hours after the block is initially written to disk, but it is a much safer implementation, and does not slow down the applications writing data.
(In an effort to support deduplication in "real time" as data was being written, earlier versions of deduplication had to either assume that a hash collision was a match, or take the time to perform the byte-for-byte comparison during the write process. Doing this byte-for-byte comparison when the device is busiest with write activity puts excessive, undesirable load on the CPU.)
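The fingerprint-then-verify scheme described above can be sketched in a few lines of Python. This is a minimal illustration of the general technique, not NetApp's actual A-SIS code; the hash choice (SHA-256) and the data structures are my own assumptions:

```python
# Minimal sketch of fingerprint-then-verify deduplication (illustrative
# only, not NetApp's implementation): hash each 4 KB block, and only when
# two fingerprints match do a byte-for-byte comparison before sharing.
import hashlib

BLOCK_SIZE = 4096

def deduplicate(blocks):
    """Return (unique, refs): unique holds the distinct blocks, and
    refs[i] is the index in unique that input block i maps to."""
    by_fingerprint = {}          # fingerprint -> indexes into unique
    unique, refs = [], []
    for block in blocks:
        fp = hashlib.sha256(block).digest()
        match = None
        for idx in by_fingerprint.get(fp, []):
            if unique[idx] == block:   # byte-for-byte verification
                match = idx
                break
        if match is None:              # genuinely new content
            unique.append(block)
            match = len(unique) - 1
            by_fingerprint.setdefault(fp, []).append(match)
        refs.append(match)
    return unique, refs

data = [b"a" * BLOCK_SIZE, b"b" * BLOCK_SIZE, b"a" * BLOCK_SIZE]
unique, refs = deduplicate(data)
print(len(unique), refs)   # 2 [0, 1, 0]
```

In a real product the verification pass would be scheduled later, as the post describes, rather than done inline with the write.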
The estimator tool runs on any x86-based Laptop, personal computer or server, and can scan direct-attached, SAN-attached, or NAS-attached file systems. If you are a customer shopping around for deduplication, ask your IBM pre-sales technical support, storage sales rep, or IBM Business Partner to analyze your data. Tools like this can help make a simple cost-benefit analysis: the cost of licensing the A-SIS software feature versus the amount of storage savings.
I can't believe I have been blogging for a year now!
I have Jennifer Jones from IBM to thank for getting this started. She was my predecessor in the job I have now, and as she moved on to bigger and better things, she suggested during the transition that we start a blog, podcast, or something similar. While there are many blogs and podcasts inside the IBM firewall, I wanted something accessible to all of our IBM sales team, IBM Business Partners, and existing and prospective clients, with comments enabled to allow two-way communication. Podcasts are very one-way, so we chose a blog instead. Getting it set up took a while: convincing our own management that this was worthwhile, and working with our legal department on the IBM blogging guidelines of what we can and cannot write about. We finally got it going last year, launching September 1, just in time for our 50 years of disk systems innovation campaign.
It has been a wild ride, a great learning experience, and has proven quite fulfilling for job satisfaction. Here are some observations and lessons I have learned along the way.
Roller is the open source blog server that drives Sun Microsystems' blogs.sun.com employee blogging site, the IBM developerWorks blogs that this blog exists on, thousands of internal blogs at IBM Blog Central, the JRoller Java community site, and hundreds of others worldwide. While there might be fancier blog systems elsewhere that I could have chosen, hosting my blog with IBM developerWorks seemed like a good choice. I can access it from any web-browser-capable machine, and enter my blog posts in native HTML, which I compose in the tool itself, or offline in a standard basic text editor like Microsoft Notepad and then cut-and-paste back in.
One lesson I learned the hard way was that Roller generates the Permalink URL for each blog post based on the first five words of the title. For that reason, it is important to choose an appropriate and unique title, avoiding punctuation, quotation marks, or pharmaceutical "enhancement products" that might get rejected by SPAM filters. Once chosen, you can't change the title afterwards, as it won't match the Permalink anymore. My blog post "Aperi is (enhancement product) for SMI-S" caused no end of grief to our Press Release team.
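To illustrate why a title becomes frozen once published, here is a rough sketch of deriving a permalink anchor from the first five words of a title. This is only an illustration; Roller's actual rules for separators, casing, and punctuation may differ:

```python
# Illustrative sketch (not Roller's actual code) of a permalink anchor
# built from the first five words of a post title. Change the title later
# and the regenerated anchor no longer matches the published URL.
import re

def permalink_anchor(title, max_words=5):
    # Keep only alphanumeric "words", lowercased, joined by underscores.
    words = re.findall(r"[A-Za-z0-9]+", title.lower())
    return "_".join(words[:max_words])

print(permalink_anchor("Welcome to my Storage Blog Post"))
# -> welcome_to_my_storage_blog
```

Note that any words past the fifth never appear in the anchor, which is also why the first five words need to be distinctive.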
Writing blog posts in native HTML is not as hard as it sounds. I am limited to hosting a maximum of 24MB of files, and they can only be in jpg, jpeg, gif, png, mp3, pdf or ppt format. So, wherever possible, I point to other websites for content. For those new to blogging, I recommend The Barebones Guide to HTML.
Roller also generates for me a spreadsheet of all my page views for the week. Tracking blog traffic closely is as crazy as checking your company's stock price every day. These "web-stat" e-mails get filed directly into my Bacn folder in Lotus Notes.
In my early advice to bloggers, I mentioned my choice of Bloglines as my RSS feed reader. When I subscribe to a new blog, I specify Full entries, not Partial, which lets me scan entries quickly, though it filters out much of the non-text content like videos. It also allows me to see what my own blog posts look like from within a reader, so that I can write them appropriately.
I find it valuable to read other blogs, including those written by employees of our toughest competitors. Even if you don't blog yourself, following blogs can be extremely valuable. Be careful what you leave as comments on other blogs; they may come back to haunt you later.
Currently, I track 55 blogs, some about storage, marketing, Web 2.0 issues, Second Life, Linux, or other areas of interest. I prefer blogs that make only 1-5 posts per week, so blogs like LifeHacker and LifeRemix are off my Bloglines list, but are excellent resources when I am searching for something specific. If you think 55 is a lot of blogs, consider Timothy Ferriss' post on how Robert Scoble reads 622 RSS feeds each morning.
I have quite an international readership, so I have to be careful using American idioms and pop culture references. For example, in my blog post IBM acquires Softek, I mentioned "shotgun weddings" and had various responses, all from readers outside the USA, asking what exactly that meant. I've learned that sometimes you need to link to an American slang dictionary, or a Wikipedia entry, to explain these terms and phrases.
Technorati currently tracks over 100 million blogs and over 250 million pieces of tagged social media. Getting my blog tracked took some effort. You have to join, then post a "claim" on your own blog. My mistake was having a case-sensitive URL with a mix of upper and lower case letters, when Technorati prefers all lower case. IBM worked with Technorati to get this resolved.
Del.icio.us is a social bookmarking website -- the primary use is to store your bookmarks online, which allows you to access the same bookmarks from any computer and add bookmarks from anywhere, too. On del.icio.us, you can use tags to organize and remember your bookmarks, which is a much more flexible system than folders.
I use the Firefox, Safari, Dillo and Internet Explorer web browsers, so it is nice that I have access to all my bookmarks in the same consistent manner. When I see content on a website that I might like to reference later in a blog, I tag it with del.icio.us so that I can get to it later.
Fellow GTD-ers will quickly recognize this acronym, but for the rest of you, it refers to David Allen's book "Getting Things Done®". This is a great book! I learned about it reading other people's blogs, and found it incredibly useful in helping me organize my time. There are various online tools available to help employ this method. I use Lotus Connections Activities for group projects with co-workers at IBM, and BackPack for projects with my friends outside of work.
The success of YouTube encouraged IBM to launch IBM TV, a portal for IBM's video and multimedia assets that makes it easier for IBM employees, customers, partners and prospects to access and view IBM multimedia. The plan is to have eight anchor episodes per year, professionally hosted by TV personality Joe Washington, pointing to related offers and other resources for viewers to learn more.
Blogging also introduced me to Second Life. I asked around if anyone else within IBM was using Second Life, and discovered quite a few. I got invited to join our internal Eightbar group, and participated in various events, including an IBM Holiday party that I discussed in my blog post "Building a Snowman in Second Life".
In April, we had a launch of our newest products in Second Life, and we plan to have two more Second Life events, September 20 and another in November, staged as "Meet the Experts" question-and-answer panels.
I wrap up with Facebook. Actually, whereas most of my Web 2.0 efforts have been work-related, I have quite a few friends and family who follow my blog. Several were inspired to start their own blogs, such as Passages from Pam and Barry Whyte on Storage Virtualization. Bridging the gap is Facebook, something I can use to keep tabs on my friends, as well as my storage industry-related contacts.
Wow, that's quite a lot in one year. Well, I am done with my meetings down here in Sao Paulo, Brazil. My colleagues and I are returning tonight to enjoy the long Labor Day weekend.
August 31 is my good friend Jim Cosentino's retirement day as a full-time employee at IBM. After over 30 years at IBM, in various marketing, sales and consulting roles, he is going to be thinking about happy things instead of working. His last seven years have been at the IBM Poughkeepsie Customer Executive Briefing Center as the lead System Storage presenter.
The past few years, I've traveled with him around the world on various business trips, teaching our IBM sales force and IBM Business Partners about our System Storage offerings, and presenting to clients. He is a class act, always positive, laughing, seeing the bright side of things.
While "spend more time with his family" has become a business cliche, I know Jim will actually enjoy his retirement years, spend more time with his family, take on other pursuits and hobbies, and perhaps do some more traveling.
Jim, if you are reading this, I have one suggestion. I know you have lots of friends within IBM, and count myself as one of them, but your first goal should be to make at least three new friends, to help you in your transition to retirement.
Congratulations Jim! Enjoy your well-deserved retirement!
If you are ever down in Sao Paulo, Brazil, may I suggest not drinking "American amounts" of their "Brazilian Coffee". The coffee here is "robust", to say the least.
Yesterday, my blog focused on IBM iSCSI offerings that were announced in August. Also announced earlier this month, the Integrated Removable Media Manager (IRMM) on System z has been years in the making. IRMM is a new robust systems management product for Linux® on IBM System z™ that manages open system media in heterogeneous distributed environments and virtualizes physical tape libraries. IRMM combines the capacity of multiple heterogeneous libraries into a single reservoir of tape storage that can be managed from a central point. By providing an integrated solution with the opportunity for both mainframe z/OS DFSMSrmm and distributed Tivoli® Storage Manager™ environments to be managed by IRMM, System z can now be a hub for the management of removable media.
The people who thought the "Mainframe is obsolete", and those that thought "Tape is dead", are both proven wrong again with this announcement. People are looking to deploy robust tape automation for backup and archive, and this convergence with mainframe makes perfect sense by providing business value that extends to other distributed systems.
The proof-of-concept that the IBM Haifa research center developed back in 1998 became what we now call the iSCSI protocol. The book iSCSI: The Universal Storage Connection introduces the history as follows:
In the fall of 1999 IBM and Cisco met to discuss the possibility of combining their SCSI-over-TCP/IP efforts. After Cisco saw IBM's demonstration of SCSI over TCP/IP, the two companies agreed to develop a proposal that would be taken to the IETF for standardization.
There are three ways to introduce iSCSI into your data center:
Through a gateway, like the IBM System Storage N series gateway, that allows iSCSI-based servers to connect to FC-based storage devices
Through a SAN switch or director, which allows an FC-based server to access iSCSI-based storage, an iSCSI-based server to access FC-based storage, or even iSCSI-based servers to attach to iSCSI-based storage
Directly through the storage controller.
IBM has been delivering the first method with its successful IBM System Storage N series gateway products, but today we have announced additional support for the second and third methods. Here's a quick recap.
New SAN director blades
Supporting the second method, the IBM TotalStorage SAN256B Director is enhanced to deliver iSCSI functionality with a new M48 iSCSI Blade, which includes 16 ports (8 Fibre Channel ports and 8 Ethernet ports for iSCSI connectivity). We also announced a new Fibre Channel M48 Blade which provides 10 Gbps Fibre Channel Inter-Switch Link (ISL) connectivity between SAN256B Directors.
With support for Boot-over-iSCSI, diskless rack-optimized and blade servers can boot Windows or Linux over Ethernet, eliminating the management hassles of internal disk.
All of this is part of IBM's overall push into the Small and Medium size Business marketplace, making it easier to shop for and buy from IBM and its many IBM Business Partners, easier to deploy and install storage, and easier to manage the storage once you have it.
In his blog Rough Type, Nick Carr asks "Where is my CloudBook?" and points to John Markoff's 2-part series in the New York Times on computing in the clouds. (Read it here: Part 1, Part 2)
At first, I thought he meant computing while in an airplane, but instead, he is talking about computing on a laptop or other hand-held device that has no internal disk drive, no installed operating system, no internal data storage. Instead, the idea is that you boot from a CD and access your data, and even some of your programs, over the internet. John used an Ubuntu Linux LiveCD in his example.
This week, I am in Sao Paulo, Brazil, and was "in the clouds" for over 10 hours flying from Dallas to here. The one time I am guaranteed "off-line" from the internet is on the plane, and I spend enough time on planes that I am able to get work done despite being "disconnected".
The same reasons people want to get out of having a disk drive on their laptops are the reasons data centers are getting out of having internal disk on their servers:
disks crash, and typically are not protected in any RAID configuration on most laptops
operating systems get infected with viruses and malware
storage on one server is generally inaccessible to every other server
Booting from CD is especially clever. No more worrying about fixing your Windows registry, viruses, corrupted operating system files, or the cruft that accumulates on your C: drive and slows you down. The CD is the same every time, so it is like running your system with a freshly installed operating system every day.
The need for central repositories of data harkens back to the years of the IBM mainframe. Of course, what made sense back then continues to make sense now. The old 3270 terminals stored no data; they merely provided keyboard input and text screen output for the vast amount of data stored on the central system. Today, the inputs are different, using your finger or mouse to point at what you want and slide it across to make things happen, and the output may now include photos, audio and video, but the concept is still the same.
I carry my Ubuntu Linux LiveCD with me on every business trip. Combined with externally rewriteable media, such as a USB key, you can get work done even when you are in an airplane, and upload it when you are back on the net.
The IBM Storage and Storage Networking Symposium concludes today. As is typical for many such conferences, it ended at noon, so that people can catch airline flights.
TS1120 Tape Encryption - Customer Experiences
Jonathan Barney has implemented many deployments of tape encryption, and shared his experiences at two customer locations.
The first company had decided to implement their EKM servers on dedicated 64-bit Windows servers. They had three sites, in Chicago, Alpharetta, and New York City, each with two EKM servers. Each site had a single TS3500 tape library, which pointed to four EKM servers: two local, and two remote.
The clever trick was managing the keystore. They decided that EKM-1 was their trusted source, made all changes to that, and then copied it to the other five EKM servers. His team deployed one site at a time, which turned out to be OK, but he would not recommend it. Better to design your complete solution, and make sure that all libraries can access all EKM servers.
This company decided to have a single key-label/key-pair for all three locations, but to change it every 6 months. You have to keep the old keys for as long as you have tapes encrypted with those keys, perhaps 10-20 years. The customer found the IBM encryption implementation "elegant", and it can be easily replicated to a fourth site if needed.
The second company had both z/OS and Sun Solaris. Initially they planned to have both a hardware-based keystore on System z and a software-based keystore on Sun, but they realized that the System z version was so much more secure and reliable that it made no sense to have anything on the Sun Solaris platform.
On System z, they had two EKM images, and used VIPA to ensure load balancing from the library. Tapes written from z/OS used the DFSMS Data Class to determine which tapes are encrypted and which aren't. All tapes written from Sun Solaris were encrypted, written to a separate logical library partition of the TS3500, which in turn contacted System z, where the EKM provided the keys to use for the encryption.
The "gotcha" for this case was that when they tested Disaster Recovery, they had to recover the two EKM servers first, before any other restores could take place, and this took way too long. Instead, they developed a scaled-down 10-volume "rescue recovery" z/OS image containing the RACF database and all EKM-related software to act as the keystore during a disaster recovery. Anytime they make updates, they only have to dump 10 volumes to tape. Restore time is down to only 2 hours.
He gave this advice for deploying tape encryption:
Some third-party z/OS security products, like Computer Associates Top Secret or ACF2, require some PTFs to work with the EKM. The latest IBM RACF is good to go.
Getting IP support from IOS to OMVS requires an IPL.
At one customer, an OMVS monitor software program killed the EKM because it wasn't in their list of "acceptable Java programs". They updated the list and EKM ran fine.
Do not update the EKM properties file while EKM is running. EKM keeps a lot of information in memory, and when it is recycled, it copies this back to the EKM properties file, reversing any changes you may have made. It is best to shut down EKM, update the properties file, then start EKM back up again. This is why you should always have at least two EKM servers for redundancy.
TSM for Linux on System z
Randy Larson from our Tivoli group presented this session. There is a lot of interest in deploying IBM Tivoli Storage Manager backup and archive software on Linux for System z. Many customers are already invested in a mainframe infrastructure, may have TSM for z/OS or z/VM, and want the newer features and functions that are available for TSM on Linux.
TSM has special support for Lotus Domino, Oracle, DB2 and WebSphere Application Server. TSM clients can send backup data to a TSM server internally via HiperSockets, a virtual LAN feature on the System z platform that uses shared memory to emulate a TCP/IP network.
One of the big questions is whether to run Linux as a guest under z/VM, or natively in an LPAR. The general deployment approach is to carve out an LPAR and run Linux natively until your server and storage administration staff have taken z/VM training classes. Once trained, they can easily move native LPAR images to z/VM guests. Unlike VMware, which takes a hefty 40% overhead on x86 platforms to manage guests, z/VM takes only 5-10% overhead.
For the TSM database and disk storage pools, Randy recommends FC/SCSI disk with the ext3 file system, combined with LVM2 into logical volumes. ECKD disk and reiserfs work too. Avoid the use of z/VM minidisks. Under LVM2, consider 32KB stripes for the TSM database, and 256KB stripes for the disk storage pools. For multipathing, use the failover rather than the multibus method. Read IC45459 before you activate "directio".
TSM for Linux on z is very much like TSM on AIX or Windows, and not like TSM for z/OS. For tape, TSM for Linux on z does not support ESCON/FICON-attached tape; you need to use FC/SCSI-attached tape and tape libraries. TSM owns the library and drives it uses, so give it a logical library partition separate from z/OS. For Sun/StorageTek customers, TSM works with or without the Gresham Enterprise DistribuTape (EDT) software. Use the IBM-provided drivers for IBM tape. For non-IBM tape, TSM provides some drivers that you can use instead.
That wraps up my week. This was a great conference! If you missed it, look for the one in Montpellier, France this October. Check out the list of IBM Technical Conferences to find others that might interest you.