I hope everyone enjoyed the French Open in Second Life! Here are some upcoming events:
Comments (3) Visits (9494)
A recent blog by Chris Mellor advances the outlandish conspiracy theory that IBM and HDS copied virtualization technology from DataCore, a small start-up company.
(Chris doesn't actually name the source making such a claim, say whether that someone was employed by any of the parties involved at the time the events occurred, or say whether the source is currently employed by a competitor like EMC, bitterly jealous of the success IBM and HDS currently enjoy with their offerings.)
As I have already posted about IBM's long history of storage virtualization, SAN Volume Controller was really part of a sequence of major products in this area, following the successful 3850 MSS and 3494 VTS block virtualization products.
In the late 1990's, our research teams in Almaden, California and Hursley, UK were exploring storage technologies that could take advantage of commodity hardware parts.
As is often the case, while IBM was working on "the perfect product", small start-ups announced "not-yet-perfect" products into the marketplace. A tactical move like partnering with DataCore was smart, for the following reasons:
The partnership proved worthwhile, not just in proving to IBM that this was a market worth entering, but also in showing how "NOT" to package a solution. Specifically, DataCore SANsymphony was software that you had to install on your own Windows-based server. The client was left with the task of ordering a suitable Intel-based server, with the right amount of CPU cycles, RAM and host bus adapter ports, and configuring the Windows operating system and DataCore software.
It didn't go well. Basically, customers were expected to be their own "hardware engineers", having to know way too much about storage hardware and software to design a combination that worked for their workloads. Most clients were disappointed with the amount of effort involved, and with the resulting poor performance.
To fix this, IBM delivered the SAN Volume Controller as a complete appliance, with an optimized Linux operating system and integrated, pre-configured hardware.
I can't speak for HDS, but I suspect they came to similar conclusions that resulted in a similar decision to build their product in-house. I welcome Hu Yoshida to correct me if I am wrong on this.
One of the differences between IBM and the other storage vendors is that IBM is also in the business of middleware, application-aware backup software, and advanced copy services. This allows IBM to put together solutions that work to address specific challenges for our clients.
IBM has written a whitepaper on a clever VSS Snapshot Backup for Exchange using IBM Tivoli Storage Manager and the point-in-time copy capabilities of IBM System Storage disk systems.
A problem in the past was that each vendor's point-in-time copy method had its own unique proprietary interface. Microsoft developed the Volume Shadow Copy Service (VSS) as a common front-end interface to resolve this concern. IBM Tivoli Storage Manager for Mail can invoke standard VSS interfaces, which in turn can invoke FlashCopy on the IBM System Storage SAN Volume Controller, DS8000 series, or DS6000 series disk systems.
You might be thinking: Wouldn't it have been less effort to just have TSM for Mail invoke IBM proprietary interfaces, rather than putting full VSS support into TSM for Mail, and then full VSS support into IBM's various disk systems? Perhaps, but IBM doesn't decide to do things because it is the cheapest way; we focus on what is the right way. In this case, customers now have more choices: they can use TSM for Mail with IBM or non-IBM disk systems that support the VSS interface, and IBM disk systems can be employed for other VSS snapshot uses.
Of course, we would like our clients to consider both TSM and IBM System Storage disk systems for a combined solution, not because they are required to make the solution work, but because both are best-of-breed, and whitepapers like this show how they can provide synergy working together.
Last week, I opined that Monday's IDC announcement "IBM #1 in combined disk and tape storage hardware sales for 2006" was in part because of a resurgence of interest in tape, with four specific examples. There was a lot of reaction and reflection from both sides.
technorati tags: IBM, IDC, combined, disk, tape, storage, announcement, EMC, dead, LiveVault, video, John Cleese, JWT, DrunkenData, Sun, StorageTek, STK, Randy Chalfant, TotalStorage, Productivity Center, Fathers Day, Big, Green, initiative
This week I am off to Budapest, Hungary, for business meetings. It is the closest major city to IBM's manufacturing plant in a small town called Vac (rhymes with "knots"), where the IBM System Storage DS8000 series and SAN Volume Controller are assembled.
A client complained that their tape drives were not compressing data as well as they used to. Investigating further reminded me of a scene from the 1970's television show "All in the Family", summarized well in American Scientist:
... in one episode of All in the Family, Archie Bunker's son-in-law, Mike, watches Archie put on his shoes and socks. Mike goes into a conniption when Archie puts the sock and shoe completely on one foot first, tying a bow to complete the action, while the other foot remains bare. To Mike, if I remember correctly, the right way to put on shoes and socks is first to put a sock on each foot and only then put the shoes on over them, and only in the same order as the socks. In an ironic development in his character, the politically liberal Mike shows himself to be intolerant of differences in how people do common little things, unaccepting of the fact that there is more than one way to skin a cat or put on one's shoes.
Both agreed that socks go first, then shoes, but the actual deployment was different.
In the case of this customer, a recent change was the use of "encryption" before the data reached the tape drive. In regards to compression and encryption, you should always compress first, then encrypt. Compression algorithms rely on frequency of data, for example the letter "E" appears more often in the English language than the letter "Z". However, once you encrypt data, those data patterns are randomized, and any attempt to compress the data afterwards is wasted effort.
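The effect is easy to demonstrate with ordinary command-line tools. This is just an illustrative sketch: gzip stands in for the tape drive's compression and openssl for its encryption (the `-pbkdf2` option assumes OpenSSL 1.1.1 or later), but the point carries over to any compression and encryption pair.

```shell
# Illustrative only: gzip stands in for tape-drive compression,
# openssl for tape-drive encryption. Requires OpenSSL 1.1.1+ for -pbkdf2.
tmp=$(mktemp -d)

# 1 MB of highly repetitive "English-like" text -- very compressible.
yes "the quick brown fox jumps over the lazy dog" | head -c 1048576 > "$tmp/plain.txt"

# Right order: compress first, then encrypt the (small) result.
gzip -c "$tmp/plain.txt" > "$tmp/step1.gz"
openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo -in "$tmp/step1.gz" -out "$tmp/right.bin"

# Wrong order: encrypt first, then try to compress the ciphertext.
openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo -in "$tmp/plain.txt" -out "$tmp/cipher.bin"
gzip -c "$tmp/cipher.bin" > "$tmp/wrong.gz"

# Compare the resulting sizes: ciphertext looks random, so the
# "wrong order" file ends up slightly LARGER than the 1 MB input.
wc -c "$tmp/right.bin" "$tmp/wrong.gz"
```

Run it once and the numbers speak for themselves: compress-then-encrypt yields a file of a few kilobytes, while encrypt-then-compress leaves you with the full megabyte plus gzip overhead.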
With IBM tape encryption on either the TS1120 or LTO4 tape drives, we compress, then encrypt, the data when it arrives at the tape drive, so that the compression has some chance of achieving up to a 3:1 reduction.
So, just as in the case of Archie Bunker and his son-in-law, there are many ways to deploy compression and encryption; just make sure you do them in the right order to get the most benefit.
I'm in the Malev lounge at the Budapest Airport, waiting for my flight to return back to Tucson.
Back in the late 1980's and early 1990's, I was one of the architects for DFSMS on z/OS, and customers always asked, "What is the clip level?", in other words, how big does a customer have to be to take advantage of DFSMS. We worked it out that if you had more than 100GB of disk data, DFSMS is worthwhile. DFSMS is now just standard by default, as everyone now easily has more than 100GB of data.
Later, in the late 1990's, I worked on Linux for System z. Again, customers asked how many Linux guest images would justify deploying applications on a mainframe. We worked it out to about 10 images. 10 Linux logical partitions, or Linux guests under z/VM was enough to cost justify the entire investment.
So what is the "clip level" for SANs? How many servers does an SMB need to have to justify deploying a SAN? IBM announced the new BladeCenter S designed specifically for mid-sized companies, 100 to 1000 employees, typically running 25 to 45 servers. However, I suspect companies as small as 7-10 servers would probably benefit from deploying an FC or IP SAN.
What do you think? Send me a comment on how many servers should be the clip level.
Chuck Hollis makes some excellent points about Green Data Center Goes Marketing Mainstream. He does a great job summarizing EMC's strategy in this area:
Both are great recommendations, but why limit yourself to what EMC offers? Your x86-based machines are only a subset of your servers, and disk is only a subset of your storage. IBM takes a more holistic approach, looking at the entire data center.
technorati tags: IBM, EMC, Chuck Hollis, VMware, FC, SAS, SATA, FATA, disk, storage, logical partition, energy, power, cooling, Steve Duplessie, dynamic, persistent, data, Lawrence Berkeley National Laboratory, megawatt, paper, optical, microfiche, LTO, 3592, Project Big Green, Secondlife
Ian Hughes talks about Web 2.0 in his post Explaining Web 2.0 State of Mind.
Alan Lepofsky posts about The Value Of Social Networking, which points to this same presentation about Web 2.0 concepts and ideas. He also points to an article in the Wall Street Journal titled Playing Well With Others about IBM and its leadership in Web 2.0 technologies, such as those from our Lotus group.
Some quotes from the WSJ article I found interesting:
Some 26,000 IBM workers have registered blogs on the company's internal computer network where they opine on technology and their work.
Interested in learning more about Web 2.0? The last page of the deck above has a good set of links and resources; for example, here are 23 Things to know about Web 2.0 to get you started.
NetworkWorld has compiled an interlude of storage videos, a follow-up to last year's Yikes! Exploding Servers.
I've blogged about some of these videos already, but since there are probably a few out there buying the brand new Apple iPhone and looking for YouTube videos to play on it, these links might provide some examples.
Next week has the "Fourth of July" Independence Day holiday in the USA smack in the middle of the week, so I suspect the blogosphere will quiet down a bit. So whether you are working next week or not, in the USA or elsewhere, take some time to enjoy your friends and family.
Chris Evans over at Storage Architect posts about Hardware Replacement Lifecycle Update, on how storage virtualization can help with storage hardware replacement. He makes two points that I would like to comment on.
In a typical four year lifecycle of storage arrays, it might take six months or so to fill up the box, and might take as much as a year at the end to move the data out to other equipment. SVC can greatly reduce both of these, so that you can take immediate advantage of new equipment as soon as possible, and keep using it for close to the full four years, migrating weeks or days before your lease expires.
Seth Godin has an interesting post titled Times a Million. He recounts how many people determine the fuel savings of higher-mileage cars to be only $300-$900 per year, and that this is not enough to motivate the purchase of a more-efficient vehicle, such as a hybrid or electric car. Of course, if everyone drove more efficient vehicles, the savings "times a million" would benefit everyone and the world's ecology.
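The "times a million" arithmetic is trivial, but worth writing down, because the aggregate numbers are striking:

```shell
# Per-driver annual fuel savings that seem too small to matter...
low_per_driver=300
high_per_driver=900
drivers=1000000

# ...add up quickly "times a million".
echo "Aggregate: \$$((low_per_driver * drivers)) to \$$((high_per_driver * drivers)) per year"
# Aggregate: $300000000 to $900000000 per year
```

In other words, savings that look like pocket change to any one driver become $300 million to $900 million per year across a million drivers.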
When I discuss storage-related concepts, many executives mistakenly relate them to the one area of information technology they know best: their laptop. Let's take a look at some examples.
Information Lifecycle Management (ILM) includes classifying data by business value, and then using this to determine placement, movement or deletion. If you think about the amount of time and effort to review the files on your individual laptop, and to manually select and move or delete data, versus the benefits for the individual laptop owner, you would dismiss the concept. Most administrative tasks are done manually on laptops, because automated software is either unavailable or too expensive to justify for a single owner.
In medium and large size enterprises, automated software to help classify, move and delete data makes a lot of sense. Executives who decide that ILM is not for their data center, based on their experiences with their laptop, are losing out on the "times a million" effect.
Laptops have various controls to minimize the use of battery, and these controls are equally available when plugged in. Many users don't bother turning off the features and functions they don't need when plugged in, because they feel the cost savings would only amount to pennies per day.
Times a million, energy savings do add up, and options to reduce the amount of energy used per server, or per TB of data stored, not only save millions of dollars per year, but can also postpone the need to build a new data center, or to upgrade the electrical systems in your existing data center.
I am not surprised at how many laptops do not have adequate backup and disaster recovery plans. When executives think in terms of the time and effort to back up their own data, often crudely copying key files to CD-ROM or USB key, and worrying about the management of those copies, which copies are the latest, and when those copies can be destroyed, they might reject deploying appropriate backup policies for others.
Times a million, the collected data stored on laptops could easily hold half of your company's emails and intellectual property. Products like IBM Tivoli Storage Manager can manage a large number of clients with only a few administrators.
So, next time you are looking at technology or solutions for your data center, don't suffer from "Laptop Mentality". Focus instead on the data center as a whole.
Avi Bar-Zeeb of RealityPrime has an interesting post about How Google Earth [really] Works. Normally, people who are very knowledgeable in a topic have a hard time describing concepts in basic terms. Avi was one of the co-founders of Keyhole, the company that built the predecessor of Google Earth, and also worked with Linden Lab on the 3D rendering in its virtual world, so he certainly knows what he is talking about. While he sometimes drops down into techno-talk about patents, the post overall is a good read.
It is perhaps human nature to be curious about how things are put together and how they function, leading to the popularity of web sites like www.
Many things can be used without understanding their inner workings. You can put on a pair of blue jeans without knowing how the cotton was made into denim fabric; lace up your favorite pair of running shoes without understanding the chemical make-up of the plastic that cushions your feet; or drink a glass of beer after your five-mile run without knowing how alcohol is processed by your liver.
For technology, however, some people insist they need to know how it works in order for them to get the most use of it. When shopping for a car, for example, a guy might look under the hood, and ask questions about how the engine works, while his wife sits inside the vehicle, counting cup holders and making sure the radio has all the right buttons.
Not all technology suffers from need-to-know-itis. For example, the Apple iPod music player and the Canon PowerShot digital camera, are both just disk systems that read and write data, with knobs and dials on one end, and ports for connectivity on the other. Everyone just asks how to use their controls, and might read the manual to understand how to connect the cables. Few people who use these devices ask how they work before they buy them.
Other disk systems, the kind designed for data centers for the medium and large enterprise, apparently aren't there yet. Storage admins who might happily own both an iPod player and a PowerShot camera, insist they need to know how the technologies inside various storage offerings work. Is this just curiosity talking? Or are there some tasks like configuration, tuning, and support that just can't be done without this knowledge? Does knowing the inner workings somehow make the job more enjoyable, easier, or performed with less stress?
I'm curious what you think, send me a comment on this.
technorati tags: Avi Bar-Zeeb, Google, Earth, cotton, denim, plastic, shoes, beer, alcohol, liver, IBM, disk, system, storage, technology, Apple, iPod, music, player, Canon, PowerShot, digital, camera
It's Tuesday, which means IBM makes its announcements. We had several for the IBM System Storage product line. Here's a quick recap.
I'm off to Denver, Colorado this week. I hope it is cooler there than it is down here in Tucson, Arizona.
technorati tags: IBM, disk, system, storage, SAS, FC, DS3000, DS3200, DS3400, EXP3000, NAS, EXN1000, tape, virtualization, library, TS7740, grid, Copy Export, throughput, TS3400, TS3200, mainframe, LTO, Ultrium, Cisco, MDS, 9124, Express, Advantage, DS4000, DS4700, TS3200, GAM, Grid Archive Manager, 3996, optical, WORM, Denver, Colorado, Tucson, Arizona, announcements
This week and next I am touring Asia, meeting with IBM Business Partners and sales reps about our July 10 announcements.
Clark Hodge might want to figure out where I am, given the nuclear reactor shutdowns from an earthquake in Japan. His theory is that you can follow my whereabouts just by following the news of major power outages throughout the world.
So I thought this would be a good week to cover the topic of Business Continuity, which includes disaster recovery planning. When making Business Continuity plans, I find it best to work backwards: think of the scenarios that would require such recovery actions to take place, then figure out what you need to have at hand to perform the recovery, and then work out the tasks and processes to make sure those things are created and available when and where needed.
I will use my IBM Thinkpad T60 as an example of how this works. Last week, I was among several speakers making presentations to an audience in Denver, which involved carrying my laptop from the back of the room up to the front, several times. When I got my new T60 laptop a year ago, the documentation specifically warned NOT to carry the laptop while the disk drive was spinning, to avoid vibration and gyroscopic effects. It suggested always putting the laptop in standby, hibernate or shutdown mode prior to transportation, but I haven't yet gotten into the habit of doing this. After enough trips back and forth, I had somehow corrupted my C: drive. It wasn't a complete corruption: I could still use Microsoft PowerPoint to show my slides, but other things failed, sometimes with the fatal BSOD and other times less drastically. Perhaps the biggest annoyance was that I lost a few critical DLL files needed for my VPN software to connect to IBM networks, so I was unable to download or access e-mail or files inside IBM's firewall.
Fortunately, I had planned for this scenario, and was able to recover my laptop myself, which is important when you are on the road and your help desk is thousands of miles away. (In theory, I am now thousands of miles closer to our help desk folks in India and China, but perhaps further away from those in Brazil.) Not being able to respond to e-mail for two days was one thing, but no access for two weeks would have been a disaster! The good news: My system was up and running before leaving for the trip I am on now to Asia.
Following my three-step process, here's how this looks:
technorati tags: IBM, July, announcements, earthquake, Japan, nuclear reactor, power, outage, business, continuity, disaster, recovery, plan, plans, planning, IBM, Thinkpad, T60, laptop, Windows, Denver, BSOD, VPN, India, China, Brazil, help desk, Asia, Tivoli, Storage, Manager, TSM, BMR, external, USB, bootable, CD, DVD, separating, programs, data, Clark Hodge
Continuing this week's theme on Business Continuity, I thought I would explore more on the identification of scenarios to help drive appropriate planning. As I mentioned in my last post, this should be done first.
A recent post in Anecdote talks about the long list of cognitive biases which affect business decision making. This list is a good explanation of why so many people have a difficult time identifying appropriate recovery scenarios as the basis for Business Continuity planning. Their "cognitive biases" get in the way.
Again, using my IBM Thinkpad T60 laptop as an example, here are a variety of different scenarios:
technorati tags: IBM, Business, Continuity, plan, plans, planning, Thinkpad, T60, laptop, NTFS, CHKDSK, hard disk crash, USB, key, Live, CD, LiveCD, DVD, Ubuntu, Linux, SUSE, RedHat, Fedora, shell, failure
Continuing this week's theme on Business Continuity, I will use this post to discuss this week's IBM solid state disk announcement. This new offering provides a new way to separate programs from data, to help minimize downtime and outages normally associated with disk drive failures.
Until now, the method most people used to minimize the amount of data on internal storage was disk-less servers with Boot-over-SAN; however, not all operating systems, and not all disk systems, supported this.
Windows, however, is not supported, because of the small 4GB size and USB protocol limitations. For Windows, you would add a SAS drive, boot from that hard drive, and use the 4GB Flash drive for data only.
So what's new this time? Here's a quick recap of the July 17 announcement. For the IBM BladeCenter HS21 XM blade servers, there are new models of internal "disk" storage:
Until recently, solid state storage was available at a price premium only. Flash prices have dropped 50% annually while capacities have doubled. This trend is expected to continue through 2009.
Flash drives use non-volatile memory instead of moving parts, so they are less likely to break down under high external environmental stress, like vibration and shock, or the extreme temperature ranges (0°C to +70°C) that would make traditional hard disks prone to failure. This is especially important for our telecommunications clients, who are always looking for solutions that are NEBS Level 3 compliant.
Last year, I mentioned that flash drives could provide only a limited number of write and erase cycles, but today's new advances in wear-leveling algorithms have nearly eliminated this limitation.
As with any SATA drive, performance depends on workload. Solid state drives perform best as OS boot devices, taking only a few seconds longer to boot an OS than a traditional 73GB SAS drive. Flash drives also excel in applications featuring random read workloads, such as web servers. For random and sequential write workloads, use SAS drives instead for higher levels of performance.
So, even though this is not part of the System Storage product line, I am very excited for IBM. To find out if this will work in your environment, go to the IBM ServerProven website that lists compatibility with hardware, applications and middleware, or review the latest Configuration and Options Guide (COG).
technorati tags: IBM, Business, Continuity, solid, state, flash, disk, drive, announcement, blade, server, BladeCenter, H21, XM, 4GB, Flash, Memory, Device, USB2.0, Linux, RedHat, RHEL, Novell, SUSE, SLES, Windows, Project, Big Green, SATA, SAS, energy, efficient, efficiency, performance, NEBS, telecommunications, boot-over-SAN, Google, Carnegie Mellon, study, Vmware
Wrapping up my week's discussion on Business Continuity, I've had lots of interest in my opinion stated earlier this week that it is good to separate programs from data, that this simplifies the recovery process, and that the Windows operating system can fit in a partition as small as the 15.8GB solid state drive we just announced for BladeCenter. It worked for me, and I will use this post to show you how to get it done.
Disclaimer: This is based entirely on what I know and have experienced with my IBM Thinkpad T60 running Windows XP, and is meant as a guide. If you are running with different hardware or different operating system software, some steps may vary.
For this project, I have a DVD/CD burner in my Ultra-Bay, a stack of blank CDs and DVDs, and a USB-attached 320GB external disk drive.
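The heart of the recipe, booted from a rescue LiveCD, is a dd | gzip | split pipeline. Here is a safe-to-run sketch: a scratch file stands in for the raw partition device (on the real laptop you would read something like /dev/sda1, which is an assumption about your device naming), and the chunk size is shrunk from the 4GB limit that FAT32 imposes on the external USB drive down to 512 KB for the demo.

```shell
# Sketch of the dd | gzip | split image backup. A scratch file stands in
# for the raw partition device so this is safe to run anywhere; on the
# real laptop you would boot a LiveCD and read e.g. /dev/sda1 instead.
work=$(mktemp -d)
dd if=/dev/zero of="$work/partition.img" bs=1024 count=2048 2>/dev/null  # 2 MB stand-in "partition"

# Back up: raw-copy the partition, compress the stream, and split it
# into chunks (4GB chunks on a real FAT32 USB drive; 512 KB here).
dd if="$work/partition.img" bs=1024 2>/dev/null | gzip -c | split -b 512k - "$work/c-image.gz."

# Restore: concatenate the chunks in order and reverse the pipeline.
cat "$work"/c-image.gz.* | gunzip -c | dd of="$work/restored.img" bs=1024 2>/dev/null

# Verify the round trip byte-for-byte.
cmp "$work/partition.img" "$work/restored.img" && echo "restore verified"
```

Splitting matters because the external drive is typically formatted FAT32, which cannot hold a single file larger than 4GB; `split` names the chunks with sequential suffixes (.aa, .ab, ...) so `cat` reassembles them in the right order.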
As with Business Continuity in the data center, planning in this manner can help you get back "up and running" quickly in the event of a disaster.
technorati tags: IBM, Business Continuity, Windows, XP, BladeCenter, solid, state, disk, backup, Linux, sysresccd, LiveCD, dd, gzip, split, Tivoli, Storage Manager, USB, Lotus Notes, NTFS, NTFS-3G, FAT32, primary, extended, logical, partition, magic, gparted
For those in the US, a comedian named Carlos Mencia has a great TV show, Mind of Mencia, and one of my favorite segments is "Why the @#$% is this news!", where he goes about showing blatantly obvious things that were reported in various news channels.
So, when I saw that IBM once again, for the third year in a row, has the fastest disk system, the IBM System Storage SAN Volume Controller (SVC), based on widely-accepted industry benchmarks, my first reaction was much the same: why is this news?
(Last year, I received comments from Woody Hutsell, VP of Texas Memory Systems, because I pointed out that their "World's Fastest Storage"® cache-only system was not as fast as IBM's SVC. You can read my opinions, and the various comments that ensued, here and here.)
That all changed when EMC uber-blogger Chuck Hollis forgot his own Lessons in Marketing when he posted his rant Does Anyone Take The SPC Seriously? That's like asking "Does anyone take book and movie reviews seriously?" Of course they do! In fact, if a movie doesn't make a big deal of its "Two thumbs up!" rating, you know it did not sit well with the reviewers. It's even more critical for books. I guess this latest news from the SPC really got under EMC's skin.
For medium and large size businesses, storage is expensive, and customers want to do as much research as possible ahead of time to make informed decisions. A lot of money is at stake, and often, once you choose a product, you are stuck with that vendor for many years to come, sometimes paying software renewals after only 90 days, and hardware maintenance renewals after only a year, when the warranty runs out.
Customers shopping for storage like the idea of a standardized, representative test, so they can compare one vendor's claims with another's. The Storage Performance Council (SPC), much like the Transaction Processing Performance Council (TPC) for servers, requires full disclosure of the test environment, so people can see what was measured and make their own judgement on whether or not it reflects their workloads. Chuck pours scorn on the SPC, but I would point to TPC-C as a great success story and ask why he thinks the same can't happen for storage. Server performance is also a complicated subject, but people compare TPC-C and TPC-H benchmarks all the time.
Note: This blog post has been updated. I am retracting comments that were unfair generalizations. The next two paragraphs are different than originally posted.
Chuck states that "Anyone is free, however, to download the SPC code, lash it up to their CLARiiON, and have at it." I encourage every customer to do this with whatever disk systems they already have installed. Judge for yourself how each benchmark compares to your experience with your application workload, and consider publishing the results for the benefit of others, or at least send me the results, so that I can better understand all of these "use cases" that Chuck talks about so often. I agree that real-world performance measurements using real applications and real data are always going to be more accurate and more relevant to that particular customer. Unfortunately, few or no such results are made public. They are noticeably absent. With thousands of customers running storage from all the major storage vendors, as well as from smaller start-up companies, I would expect more performance comparison data to be readily available.
In my opinion, customers would benefit by seeing the performance results obtained by others. SPC benchmarks help fill this void, providing guidance to customers who have not yet purchased equipment on which vendors to work with, and which products to put into their consideration set.
Truth is, benchmarks are just one of the many ways to evaluate storage vendors and their products. There are also customer references, industry awards, and corporate statements of a company's financial health, strategy and vision. Like anything, it is information to weigh against other factors when making expensive decisions. And I am sure the SPC would be glad to hear of any suggestions for a third SPC-3 benchmark, if the first two don't provide you enough guidance.
So, if you are not delighted with the performance you are getting from your storage now, or would benefit from having even faster I/O, consider improving its performance by adding SAN Volume Controller. SVC is like salt or soy sauce: it makes everything taste better. IBM would be glad to help you with a try-and-buy or proof-of-concept approach, and even help you compare the performance, before and after, with whatever gear you have now. You might just be surprised how much better life is with SVC. And if, for some reason, the performance boost you experience for your unique workload is only 10-30% with SVC, you are free to tell the world about your disappointment.
technorati tags: Carlos Mencia, Mind of Mencia, IBM, system, storage, SVC, SAN Volume Controller, Storage Performance Council,SPC, benchmarks, Texas Memory Systems, Woody Hutsell, EMC, Chuck Hollis, movie, book, reviews, awards, salt, soy sauce
Continuing my business trip through Asia, I have left Chengdu, China, and am now in Kuala Lumpur, Malaysia.
On Sunday, a colleague and I went to the famous Petronas Twin Towers, which a few years ago were officially the tallest buildings in the world. If you get there early enough in the day, and wait in line for a few hours, you can get a ticket permitting you to go up to the "Skybridge" on the 41st floor that connects the two buildings. The views are stunning, and I am glad to have done this. (If you are afraid of heights, get cured by facing your fears with skydiving.)
You would think that a question as simple as "Which is the tallest building in the world?" could easily be answered, given that buildings remain fixed in one place and do not drastically shrink or grow taller over time or with weather conditions, and that the unit of height, the "meter", is an officially accepted standard in all countries, defined as the distance traveled by light in an absolute vacuum in 1/299,792,458 of a second.
The controversy stems around two key areas of dispute:
To bring some sanity to these comparisons, the Council on Tall Buildings and Urban Habitat has tried to standardize the terms and definitions to make comparisons between buildings fair. Why does it matter whose building is tallest? It matters in two ways:
What does any of this have to do with storage? Two weeks ago, IBM and the Storage Performance Council answered the question "Which is the fastest disk system?" with a press release. Customers that care about the performance of their most mission-critical applications are often willing to pay a premium to run their applications on the fastest disk system, and the IBM System Storage SAN Volume Controller, built through a global collaboration of architects and engineers across several countries, is (in my opinion at least) an impressive feat of storage engineering.
EMC blogger Chuck Hollis was the first to question the relevance of these results, and I failed to "turn the other cheek" and responded accordingly. The blogosphere erupted, with more opinions piled on by others, many from EMC and IBM, found in comments on these posts or other blogs; some have since been retracted or deleted, while others remain for historical purposes.
At the heart of all this opinionated debate, lies a few areas of exploration:
I will try to address some of these issues in a series of posts this week.
technorati tags: IBM, KL, Kuala Lumpur, Malaysia, Petronas, Twin Towers, SkyBridge, tallest, building, structure, tower, fastest, disk, system, SVC, SAN Volume Controller, EMC, Chuck Hollis, SPC, Storage Performance Council
Comments (8) Visits (12506)
Yesterday, I started this week's topic discussing the various areas of exploration to help understand our recent press release of the IBM System Storage SAN Volume Controller and its impressive SPC-1 and SPC-2 benchmark results that rank it the fastest disk system in the industry.
Some have suggested that since the SVC has a unique design, it should be placed in its own category,and not compared to other disk systems. To address this, I would like to define what IBM meansby "disk system" and how it is comparable to other disk systems.
When I say "disk system", I am going to focus specifically on block-oriented direct-access storage systems, which I will define as:
One or more IT components, connected together, that function as a whole, to serve as a target forread and write requests for specific blocks of data.
Clarification: One could argue, and several do in various comments below, that there are other types of storage systems that contain disks: some that emulate sequential-access tape libraries, some that emulate file systems through CIFS or NFS protocols, and some that support the storage of archive objects and other fixed content. At the risk of looking like I may be including or excluding such systems to fit my purposes, I wanted to avoid applying the term "disk system" to those here.
People who have been working a long time in the storage industry might be satisfied by this definition, thinking of all the disk systems that would be included by it, and recognize that other types of storage, like tape systems, are appropriately excluded.
Others might be scratching their heads, thinking to themselves "Huh?" So, I will provide some background, history, and additional explanation. Let's break up the definition into different phrases, and handle each separately.
So, the SAN Volume Controller is a disk system comprising one to four node-pairs. Each node is a piece of IT equipment that has processors and cache. These node-pairs are connected to a pair of UPS power supplies to protect the cache memory holding writes that have not yet been de-staged. The combination of node-pairs and UPS, acting as a whole, is able to serve as a target for SCSI commands sent over Fibre Channel cables on a Storage Area Network (SAN). For some read requests, it uses its internal cache storage to satisfy the request; for others, it goes out to the external disk systems that contain the data required. All writes are satisfied immediately in cache on the SVC, and later de-staged to external disk when appropriate.
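As an illustration of the write path just described (acknowledge the write in cache, de-stage to the external disk later), here is a toy sketch in Python. This is my own simplified model with invented names, not SVC's actual implementation:

```python
# Toy model of write-back caching: writes are acknowledged as soon as
# they land in cache, and de-staged to the backing disk system later.
# An illustrative sketch only, not IBM's actual SVC code.

class WriteBackNode:
    def __init__(self, backend):
        self.cache = {}          # block number -> data held in cache
        self.dirty = set()       # blocks written but not yet de-staged
        self.backend = backend   # external disk system (a plain dict here)

    def write(self, block, data):
        self.cache[block] = data  # satisfied immediately in cache
        self.dirty.add(block)     # remember to de-stage later

    def read(self, block):
        if block in self.cache:           # cache hit: no disk access needed
            return self.cache[block]
        data = self.backend[block]        # cache miss: go to external disk
        self.cache[block] = data
        return data

    def destage(self):
        for block in self.dirty:          # flush dirty blocks to external disk
            self.backend[block] = self.cache[block]
        self.dirty.clear()

backend = {0: b"old"}
node = WriteBackNode(backend)
node.write(0, b"new")
print(backend[0])   # still b"old": the write was acknowledged from cache only
node.destage()
print(backend[0])   # now b"new" after de-staging
```

In the real product, of course, the UPS protection exists precisely because dirty data lives only in cache between the write and the de-stage.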
As of the end of 2Q07, having reached the four-year anniversary for this product, IBM has sold over 9000 SVC nodes, which are part of more than 3100 SVC disk systems. These things are flying off the shelves, clocking in 100% year-over-year growth. Congratulations go to the SVC development team for their impressive feat of engineering that is starting to catch the attention of many customers and return astounding results!
So, now that I have explained why the SVC is considered a disk system, tomorrow I'll discuss metrics to measure performance.
Comments (2) Visits (8952)
Continuing our exploration this week into the performance of disk systems, today I will cover the metrics to measure performance. Why do people have metrics?
Several bloggers suggested that perhaps an analogy to vehicles would be reasonable, given that cars and trucks are expensive pieces of engineering equipment, and people make purchase decisions between different makes and models.
In the United States, the Environmental Protection Agency (EPA) is the government entity responsible for measuring the fuel economy of vehicles using the metric Miles Per Gallon (mpg). Specifically, these are U.S. miles (not nautical miles) and U.S. gallons, not imperial gallons. It is important when defining metrics that you are precise about the units involved.
Since nearly all vehicles are driven by gallons of gasoline, and travel miles of distance, this is a great metric to use for comparing all kinds of vehicles, including motorcycles, cars, trucks and airplanes. The EPA has a fuel economy website to help people make these comparisons.
What about storage performance? What could we use as the "MPG"-like metric that would allow you to compare different makes and models of storage?
The two most commonly used are I/O requests per second (IOPS) and Megabytes transferred per second (MB/s). To understand the difference in each one, let's go back to our analogy from yesterday's post.
In this example, it might have only taken 1 second to actually provide the answer, but it might have taken 10-30 seconds to pick up the phone, hear the request, respond, and then hang up the phone. If one person is able to do this in 10 seconds, on average, then he can handle 360 questions per hour. If another person takes 30 seconds, then only 120 questions per hour. Many business applications read or write less than 4KB of information per I/O request, and as such the dominant factor is not the amount of time to transfer the data, but how quickly the disk system can respond to each request. IOPS is very much like counting "Questions handled per hour" at the public library. To be more specific on units, we may specify the specific block size of the request, say 512 bytes or 4096 bytes, to make comparisons consistent.
Now suppose that instead of asking for something with a short answer, you ask the public library to read you the article from a magazine, identify all the movies and show times of a local theatre, or recite a work from Shakespeare. In this case, the time it took to pick up the phone and respond is very small compared to the time it takes to deliver the information, and could be measured instead in words per minute. Some employees of the library may be faster talkers, having perhaps worked in auction houses in a prior job, and can deliver more words per minute than other employees. MB/s is very much like counting "Spoken words per minute" at the public library. To be more specific on units, we may request a specific amount of information, say the words contained in "Romeo and Juliet", to make comparisons consistent.
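To make the distinction concrete, here is a rough back-of-the-envelope sketch (my own illustration with invented numbers, not an SPC formula) of how the two metrics relate through the request size:

```python
# Rough relationship between the two metrics discussed above:
# throughput (MB/s) is approximately IOPS times the transfer size
# per request. All numbers below are illustrative only.

def iops(avg_service_time_s):
    """Requests per second a device can complete, one at a time."""
    return 1.0 / avg_service_time_s

def throughput_mb_s(io_per_s, bytes_per_request):
    """Data delivered per second, in megabytes."""
    return io_per_s * bytes_per_request / 1_000_000

# Small 4 KB requests: response time dominates, so IOPS tells the story.
small = iops(0.005)                              # 5 ms per request
print(small, throughput_mb_s(small, 4096))       # high IOPS, under 1 MB/s

# Large 1 MB transfers: delivery time dominates, so MB/s tells the story.
large = iops(0.020)                              # 20 ms per large transfer
print(throughput_mb_s(large, 1_000_000))         # modest IOPS, high MB/s
```

This is why the same disk system can look "fast" on one metric and ordinary on the other, depending on the workload's request size.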
Now that we understand the metrics involved, tomorrow we can discuss how to use these in the measurement process.
Comments (5) Visits (12764)
Wrapping up this week's exploration on disk system performance, today I will cover the Storage Performance Council (SPC) benchmarks, and why I feel they are relevant in helping customers make purchase decisions. This all started to address a comment from EMC blogger Chuck Hollis, who expressed his disappointment in IBM as follows:
You've made representations that SPC testing is somehow relevant to customers' environments, but offered nothing more than platitudes in support of that statement.
Apparently, while everyone else in the blogosphere merely states their opinions and moves on, IBM is held to a higher standard. Fair enough, we're used to that. Let's recap what we covered so far this week:
Today, I will explore ways to apply these metrics to measure and compare storage performance.
Let's take, for example, an IBM System Storage DS8000 disk system. This has a controller that supports various RAID configurations, cache memory, and HDDs inside one or more frames. Engineers who are testing individual components of this system might run specific types of I/O requests to test performance or validate certain processing.
This is known affectionately in the industry as the "four corners" test, because you can show the results on a box diagram, with writes on the left, reads on the right, hits on the top, and misses on the bottom. Engineers are proud of these results, but these workloads do not reflect any practical production workload. At best, since all I/O requests are one of these four types, the four corners provide an expectation range, from the worst performance (most often write-miss, in the lower-left corner) to the best performance (most often read-hit, in the upper-right corner) you might get with a real workload.
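As a rough sketch of this idea (my own illustration, with made-up numbers), every I/O falls into one of the four corners, and the pure-corner rates bound the range a real, mixed workload can land in:

```python
# Sketch of the "four corners" idea: every I/O is a read or a write,
# and either a cache hit or a miss. The four pure-workload rates bound
# what any mixed workload can achieve. All numbers are invented.

corners_iops = {
    ("read", "hit"): 200_000,    # best case: served entirely from cache
    ("read", "miss"): 15_000,
    ("write", "hit"): 120_000,
    ("write", "miss"): 10_000,   # worst case: straight to the disk drives
}

def expected_range(corners):
    """A real mixed workload lands between the worst and best corner."""
    return min(corners.values()), max(corners.values())

worst, best = expected_range(corners_iops)
print(f"a real workload falls between {worst} and {best} IOPS")
```

The point of the paragraph above is exactly this: the corners give you bounds, not a prediction of any production workload.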
To understand what is needed to design a test that is more reflective of real business conditions, let's go back to yesterday's discussion of fuel economy of vehicles, with mileage measured in miles per gallon. The How Stuff Works website offers the following description for the two measurements taken by the EPA:
Why two different measurements? Not everyone drives in a city in stop-and-go traffic. Having only one measurement may not reflect the reality that you may travel long distances on the highway. Offering both city and highway measurements allows the consumers to decide which metric relates closer to their actual usage.
Should you expect your actual mileage to be the exact same as the standardized test? Of course not. Nobody drives exactly 11 miles in the city every morning with 23 stops along the way, or 10 miles on the highway at the exact speeds listed. The EPA's famous phrase "your mileage may vary" has been quickly adopted into popular culture's lexicon. All kinds of factors, like weather, distance, and driving style, can cause people to get better or worse mileage than the standardized tests would estimate.
Want more accurate results that reflect your driving pattern, in the specific conditions you are most likely to drive in? You could rent different vehicles for a week and drive them around yourself, keeping track of where you go, how fast you drove, and how many gallons of gas you purchased, so that you can then repeat the process with another rental, and so on, and then use your own findings as the basis for your comparisons. Perhaps you find that your results are always 20% worse than EPA estimates when you drive in the city, and 10% worse when you drive on the highway. Perhaps there are many mountains and hills where you drive, you drive too fast, or you run the air conditioner too cold.
If you did this with five or more vehicles, and ranked them best to worst from your own findings, and also ranked them best to worst based on the standardized results from the EPA, you likely will find the order to be the same. The vehicle with the best standardized result will likely also have the best result from your own experience with the rental cars. The vehicle with the worst standardized result will likely match the worst result from your rental cars.
(This will be one of my main points: standardized estimates don't have to be accurate to be useful in making comparisons. The comparisons and decisions you would make with estimates are the same as you would have made with actual results, or with customized estimates based on current workloads. Because the rankings are in the same order, they are relevant and useful for making decisions based on those comparisons.)
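That rank-preservation point is easy to check numerically. The sketch below uses invented mileage figures to show that scaling every estimate by the same workload-specific factor leaves the ordering, and therefore the decision, unchanged:

```python
# The claim above: estimates need not be accurate to be useful for
# comparison. If your real results are consistently, say, 20% worse
# than the standardized estimate, the ranking is identical.
# All mileage figures below are invented for illustration.

epa_city_mpg = {"sedan": 28, "suv": 18, "hybrid": 45, "truck": 14}

your_factor = 0.80   # suppose you always see 20% worse mileage in the city
your_mpg = {car: mpg * your_factor for car, mpg in epa_city_mpg.items()}

# Rank vehicles best-to-worst by each set of numbers.
rank_by_estimate = sorted(epa_city_mpg, key=epa_city_mpg.get, reverse=True)
rank_by_experience = sorted(your_mpg, key=your_mpg.get, reverse=True)

print(rank_by_estimate == rank_by_experience)   # True: same ordering
```

Any consistent scaling (any positive factor) preserves the order, which is exactly why a standardized benchmark can guide a decision even when its absolute numbers don't match your environment.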
Most people shopping around for a new vehicle do not have the time or patience to do this with rental cars. They can use the EPA-certified standardized results to make a "ball-park" estimate of how much they will spend on gasoline per year, decide only on cars that can go a certain distance between two cities on a single tank of gas, or merely to rank the vehicles being considered. While mileage may not be the only metric used in making a purchase decision, it can certainly be used to help reduce your consideration set and factor in with other attributes, like the number of cup-holders, or leather seats.
In this regard, the Storage Performance Council has developed two benchmarks that attempt to reflect normal business usage, similar to "City" and "Highway" driving measurements.
The SPC-2 benchmark was added when people suggested that not everyone runs OLTP and database transactional update workloads, just as the "Highway" measurement was added to address the fact that not everyone drives in the City.
If you are one of the customers out there willing to spend the time and resources to do your own performance benchmarking, either at your own data center, or with the assistance of a storage provider, I suspect most, if not all, of the major vendors (including IBM, EMC and others), and perhaps even some of the smaller start-ups, would be glad to work with you.
If you want to gather performance data of your actual workloads, and use this to estimate how your performance might be with a new or different storage configuration, IBM has tools to make these estimates, and I suspect (again) that most, if not all, of the other storage vendors have developed similar tools.
For the rest of you, who are just looking to decide which storage vendors to invite to your next RFP, and which products to investigate that match the level of performance you need for your next project or application deployment, the SPC benchmarks might help with this decision. If performance is important to you, factor these benchmark comparisons in with the rest of the attributes you are looking for in a storage vendor and a storage system.
In my opinion, for some people the SPC benchmarks provide real value in this decision-making process. They are proportionally correct: even if your workload achieves only a fraction of the SPC result, storage systems with faster benchmarks will provide you better performance than storage systems with lower benchmark results. That is why I feel they can be relevant in making valid comparisons for purchase decisions.
Hopefully, I have provided enough "food for thought" on this subject to support why IBM participates in the Storage Performance Council, why the performance of the SAN Volume Controller can be compared to the performance of other disk systems, and why we at IBM are proud of the benchmark results in our recent press release.
Enjoy the weekend!
technorati tags: IBM, SPC, EMC, Chuck Hollis, fastest, disk, system, SVC, HDD, storage, four corners, read-hit, read-miss, write-hit, write-miss, City, Highway, MPG, OLTP, SPC-1, SPC-2, benchmarks, file, database, video
Comments (5) Visits (16509)
Perhaps I wrapped up my exploration of disk system performance one day too early. (While it is Friday here in Malaysia, it is still only Thursday back home)
Barry Burke, EMC blogger (aka The Storage Anarchist) writes:
Aren't you mixing metrics here?
This is a fair question, Barry, so I will try to address it here.
It was not a typo; I did mean MPG (miles per gallon) and not MPH (miles per hour). It is always challenging to find an analogy everyone can relate to when explaining Information Technology concepts that might be harder to grasp. I chose MPG because it is closely related to IOPS and MB/s in four ways:
It seemed that if I was going to explain why standardized benchmarks were relevant, I should find an analogy that has similar features to compare to. I thought about MPH, since it is based on time units like IOPS and MB/s, but decided against it based on an earlier comment you made, Barry, about NASCAR:
Let's imagine that a Dodge Charger wins the overwhelming majority of NASCAR races. Would that prove that a stock Charger is the best car for driving to work, or for a cross-country trip?
Your comparison, Barry, to car-racing brings up three reasons why I felt MPH is a bad metric to use for an analogy:
You also mention, Barry, the term "efficiency", but mileage is about "fuel economy". Wikipedia is quick to point out that while the fuel efficiency of petroleum engines has improved markedly in recent decades, this does not necessarily translate into the fuel economy of cars. The same can be said of disk systems: the internal bandwidth of the backplane between controllers, and faster HDDs, do not necessarily translate into the external performance of the disk system as a whole. You correctly point this out in your blog about the DMX-4:
Complementing the 4Gb FC and FICON front-end support added to the DMX-3 at the end of 2006, the new 4Gb back-end allows the DMX-4 to support the latest in 4Gb FC disk drives.
This also explains why the IBM DS8000, with its clever "Adaptive Replacement Cache" algorithm, has such high SPC-1 benchmarks despite the fact that it still uses 2Gbps drives inside. Given that it doesn't matter between 2Gbps and 4Gbps on the back-end, why would it matter which vendor came first, second or third, and why call it a "distant 3rd" for IBM? How soon would IBM need to announce similar back-end support for it to be a "close 3rd" in your mind?
I'll wrap up with your excellent comment that Watts per GB is a typical "green" metric. I strongly support the whole "green initiative", and I used "Watts per GB" last month to explain how tape is less energy-consumptive than paper. I see on your blog that you have used it yourself here:
The DMX-3 requires less Watts/GB in an apples-to-apples comparison of capacity and ports against both the USP and the DS8000, using the same exact disk drives
It is not clear whether "requires less" means "slightly less" or "substantially less" in this context, and I have no facts from my own folks within IBM to confirm or deny it. Given that tape is orders of magnitude less energy-consumptive than anything EMC manufactures today, the point is probably moot.
I find it refreshing, nonetheless, to have agreed-upon "energy consumption" metrics to make such apples-to-apples comparisons between products from different storage vendors. This is exactly what customers want to do with performance as well, without necessarily having to run their own benchmarks or work with specific storage vendors. Of course, Watts/GB consumption varies by workload, so to make such comparisons truly apples-to-apples, you would need to run the same workload against both systems. Why not use the SPC-1 or SPC-2 benchmarks to measure the Watts/GB consumption? That way, EMC can publish the DMX performance numbers at the same time as the energy consumption numbers, and then HDS can follow suit for its USP-V.
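As a sketch of the comparison suggested above (all figures invented for illustration), the metric is simply the average power drawn while running a common benchmark workload, divided by usable capacity:

```python
# Sketch of the Watts/GB comparison proposed above: measure average
# power drawn while each system runs the SAME workload, then divide
# by usable capacity. Both systems below are hypothetical, and all
# numbers are invented for illustration.

def watts_per_gb(avg_watts_under_load, usable_capacity_gb):
    """Energy-consumption metric: watts drawn per GB of usable capacity."""
    return avg_watts_under_load / usable_capacity_gb

system_a = watts_per_gb(avg_watts_under_load=6400, usable_capacity_gb=50_000)
system_b = watts_per_gb(avg_watts_under_load=5200, usable_capacity_gb=50_000)

print(round(system_a, 3), "vs", round(system_b, 3), "W/GB")
```

The key design point, as argued above, is holding the workload constant: without a common benchmark driving both systems, the two power readings are not apples-to-apples.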
I'm on my way back to the USA soon, but wanted to post this now so I can relax on the plane.
technorati tags: IBM, EMC, Storage Anarchist, MPG, MPH, IOPS, NASCAR, Malaysia, Watts, GB, green, back-end, DMX-3, DMX-4, HDS, USP, USP-V, SPC, SPC-1, SPC-2, standardized, benchmarks, workload, DS8000, disk, storage, tape
Stephen2615 over at RupturedMonkey asks "Do more SAN related issues happen with blade enclosures?" and shares some of his bad experiences related to HP Blades in B class enclosures. Others comment that they had similar experiences with their B class equipment.
The question is whether this is unique to these particular models, or whether it affects all kinds of blade servers because of their very nature and architecture. Stephen indicates that they also have HP C class enclosures but, since those are still in test mode, he cannot comment on them.
I have no experience with any of HP's blade servers, but I have worked closely with our IBM BladeCenter team to help make sure that our storage, and our SAN equipment, work well together with the BladeCenter, and more importantly, that problems can be diagnosed effectively.
When I asked why people feel they need to know the inner workings of storage, the overwhelming response was to help diagnose problems. This could include problems in placing related data on a potential single point of failure, problems with performance, and problems communicating with 1-800-IBM-SERV.
So, if you have encountered problems diagnosing SAN problems with BladeCenter, or have had trouble setting up an IBM SAN with blade servers in general, I would be interested in hearing what IBM can do to make the situation better.
Comment (1) Visits (7213)
There are a lot of exciting conferences and events coming up soon.
I am sure there are others, but these are the ones in which I am aware of IBM's involvement. I'll be in Chicago next week, meeting with Sales Reps and Business Partners.
Enjoy the weekend!
I would like to welcome IBMer Barry Whyte to the blogosphere!
From his bio:
Barry Whyte is a 'Master Inventor' working in the Systems & Technology Group based in IBM Hursley, UK. Barry primarily works on the IBM SAN Volume Controller virtualization appliance. Barry graduated from The University of Glasgow in 1996 with a B.Sc (Hons) in Computing Science. In his 10 years at IBM he has worked on the successful Serial Storage Architecture (SSA) range of products and the follow-on Fibre Channel products used in the IBM DS8000 series. Barry joined the SVC development team soon after its inception and has held many positions before taking on his current role as SVC performance architect. Outside of work, Barry enjoys playing golf and all things to do with Rotary Engines.
To avoid confusion in future posts, I will refer to Barry Whyte as BarryW, and fellow EMC blogger Barry Burke (aka the Storage Anarchist) as BarryB.
I'm in Chicago this week, but it is actually HOTTER here than in my home town of Tucson, Arizona.
Comments (4) Visits (9633)
Jon W Toigo over at Drunkendata has had a great set of posts on his skepticism of storage vendors touting their "green storage" solutions. My apologies for my "unnecessary" use of quotation marks.
The ones I liked specifically were:
The last of which refers to this ComputerWorld article "EPA: U.S. needs more power plants to support data centers", which claims "from a technology perspective, the systems most responsible for gobbling up power are the relatively low-cost x86 servers ..." The article is based on the recent EPA report that was just released.
Last month, in my post How many Watts per Terabyte, I mentioned:
Some people find it surprising that it is often more cost-effective, and power-efficient, to run workloads on mainframe logical partitions (LPARs) than a stack of x86 servers running VMware.
Perhaps they won't be surprised any more. Here is an article in eWeek that explains how IBM isreducing energy costs 80% by consolidating 3,900 rack-optimized servers to 33 IBM System z mainframe servers, running Linux, in its own data centers. Since 1997, IBM has consolidated its 155 strategic worldwide data center locations down to just seven.
I am very pleased that IBM has invested heavily into Linux, with support across servers, storage, software andservices. Linux is allowing IBM to deliver clever, innovative solutions that may not be possible with other operating systems. If you are in storage, you should consider becoming more knowledgeable in Linux.
The older systems won't just end up in a landfill somewhere. Instead, the details are spelled out inthe IBM Press Release:
As part of the effort to protect the environment, IBM Global Asset Recovery Services, the refurbishment and recycling unit of IBM, will process and properly dispose of the 3,900 reclaimed systems. Newer units will be refurbished and resold through IBM's sales force and partner network, while older systems will be harvested for parts or sold for scrap. Prior to disposition, the machines will be scrubbed of all sensitive data. Any unusable e-waste will be properly disposed following environmentally compliant processes perfected over 20 years of leading environmental skill and experience in the area of IT asset disposition.
Whereas other vendors might think that some operational improvements will be enough, such as switching to higher-capacity SATA drives, or virtualizing x86 servers, IBM recognizes that sometimes more fundamental changes are required to effect real changes and real results.
Comment (1) Visits (8109)
Stephen over at RupturedMonkey discusses the challenges of recruiting storage administrators.
There is actually a great standard called the Information Technology Infrastructure Library (ITIL) that applies not just to storage administrators, but to other IT personnel such as network administrators and server administrators. Here's a quick excerpt about ITIL history:
ITIL History can be traced back to the late 1980’s when the British government determined that the level of IT service quality provided to them was not sufficient enough. The Central Computer and Telecommunications Agency (CCTA), now called the Office of Government Commerce (OGC), was tasked with developing a framework for efficient and financially responsible use of IT resources within the British government and the private sector.
This standard spread from the UK to other governments in Europe, and is now being adopted worldwide by government agencies, non-profit organizations and commercial enterprises. IBM, of course, has been involved along the way, encouraging this set of best practices to take hold.
IBMer John Long, in an ITSM Watch article, points out some key points:
The general process is now referred to as "IT Service Management", and the seven ITIL books are managed by the IT Service Management Forum (itSMF).
ITIL is vendor-independent. You can learn ITIL disciplines at one IT shop, and carry those skills with you when you go to another IT shop that has completely different gear. A common vocabulary would allow employers to post jobs in a consistent manner, and ask questions to those interviewing for the job. You can be ITIL-trained, and even ITIL-certified. IBM offers this training.
Of course, specific skills on how to use specific software to configure storage devices, request change control approvals, or define SAN zones are useful, but often can be picked up on the job by reading the vendor manuals. Alternatively, you can use IBM TotalStorage Productivity Center, which allows someone to manage a variety of disk, tape and SAN fabric gear from one interface, greatly reducing the learning curve.
technorati tags: IBM, ITIL, IT, Service, Management, standards, storage, administrators, admins, skills, recruitment, vocabulary, TotalStorage, Prod
CNET staff writer Elinor Mills writes how some things in Web 2.0 have morphed, going from killer app to major Web platform. Among the examples are Salesforce.com, Google, Second Life, and Facebook.
Philip Rosedale, chief executive of Linden Labs, which produced the Second Life virtual reality environment, said Second Life and Facebook are popular because they give people a new environment to interact in that they are comfortable with.
Of course, I have blogged for months now on my involvement in Second Life, and how IBM is investing in this platform for business purposes. Recently, IBM made news for publishing its Code of Conduct, a set of guidelines on how to run your avatar in virtual worlds, including Second Life. IBM recognizes the business potential of virtual worlds, and has formed the "3D Internet" group exploring the possibilities. Over 5,000 IBM employees now use Second Life on a regular basis.
I was surprised to learn that there were over 23,000 IBMers already on Facebook. I used to be on LinkedIn, but found Facebook to have more IBMers and have made the switch. Recently, we were told that these 23,000 IBMers spend 19 minutes per day, on average, visiting Facebook pages. Nobody asked me how much time I spend every day on Facebook, but with over 350,000 employees in the company, I am sure some have ways to track the lives of others.
Both of these count as adding more "FUN" into the workplace, which everyone should strive for. It is also good to know that the skills you develop using Second Life or Facebook can carry over to your next job role or your next employer. The number-one question I get from new colleagues when I mention either of these exciting new ways to communicate and collaborate is: "But how is this related to business?"
For Second Life, the answer is obvious: an innovative new way to hold meetings with colleagues, Business Partners and clients is going to have business value. Meetings in Second Life help you focus on what is being discussed, versus a plain telephone call where your eyes may wander to other things in your view. Of course, nothing beats the effectiveness of face-to-face meetings, but Second Life offers a more energy-efficient alternative to traveling to other cities or countries.
I am still fairly new to Facebook, installing and trying out new apps. I found an article that explains 12 Ways to Use Facebook Professionally. So far it serves me well as a replacement for LinkedIn, and provides my friends and family a quick answer to "Where in the world is Tony Pearson?"
What else can these and other Web platforms do? I am still in the exploratory stages.