Chris Evans over at Storage Architect posts about Hardware Replacement Lifecycle Update, on how storage virtualization can help with storage hardware replacement. He makes two points that I would like to comment on.
In a typical four year lifecycle of storage arrays, it might take six months or so to fill up the box, and might take as much as a year at the end to move the data out to other equipment. SVC can greatly reduce both of these, so that you can take immediate advantage of new equipment as soon as possible, and keep using it for close to the full four years, migrating weeks or days before your lease expires.
Seth Godin has an interesting post titled Times a Million. He recounts how many people determine the fuel savings of higher-mileage cars to be only $300-$900 per year, and that this is not enough to motivate the purchase of a more-efficient vehicle, such as a hybrid or electric car. Of course, if everyone drove more efficient vehicles, the savings, "times a million", would benefit everyone and the world's ecology.
When I discuss storage-related concepts, many executives mistakenly relate them to the one area of information technology they know best: their laptop. Let's take a look at some examples.
Information Lifecycle Management (ILM) includes classifying data by business value, and then using this to determine placement, movement or deletion. If you think about the amount of time and effort to review the files on your individual laptop, and to manually select and move or delete data, versus the benefits for the individual laptop owner, you would dismiss the concept. Most administrative tasks are done manually on laptops, because automated software is either unavailable or too expensive to justify for a single owner.
In medium and large size enterprises, automated software to help classify, move and delete data makes a lot of sense. Executives who decide that ILM is not for their data center, based on their experiences with their laptop, are losing out on the "times a million" effect.
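To make that concrete, here is a toy sketch in Python of what policy-driven placement might look like. The paths and the 180-day threshold are made up, and this is only an illustration of the idea, not how Tivoli or any ILM product actually works: files not accessed recently move to a cheaper archive tier.

```python
import os
import shutil
import time

# Toy ILM tiering policy (illustration only, not any IBM product's logic):
# files not accessed for more than 180 days move to a cheaper "archive" tier.
# Note: last-access times are unreliable on filesystems mounted with noatime.
SOURCE_TIER = "/data/tier1"     # hypothetical fast, expensive storage
ARCHIVE_TIER = "/data/archive"  # hypothetical cheap, slower storage
AGE_LIMIT = 180 * 24 * 3600     # 180 days, in seconds

cutoff = time.time() - AGE_LIMIT
for dirpath, _, filenames in os.walk(SOURCE_TIER):
    for name in filenames:
        path = os.path.join(dirpath, name)
        if os.stat(path).st_atime < cutoff:  # classify by last access time
            dest = os.path.join(ARCHIVE_TIER, os.path.relpath(path, SOURCE_TIER))
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.move(path, dest)          # move to the archive tier
```

For one laptop, writing and babysitting even this little script is more trouble than it is worth; across millions of files on shared enterprise storage, a policy engine doing essentially this pays for itself.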
Laptops have various controls to minimize the use of battery, and these controls are equally available when plugged in. Many users don't bother turning off the features and functions they don't need when plugged in, because they feel the cost savings would only amount to pennies per day.
Times a million, energy savings do add up. Options to reduce the energy used per server, or per TB of data stored, not only save millions of dollars per year, but can also postpone the need to build a new data center, or to upgrade the electrical systems in your existing data center.
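To put rough numbers on it (mine, purely illustrative): a server drawing 10 fewer watts runs about 8,760 hours a year, saving roughly 88 kWh, or about $9 at $0.10 per kWh. That is pocket change for one machine, but across 100,000 servers it is nearly $900,000 every year, before counting the matching cooling load that the data center no longer has to carry.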
I am not surprised how many laptops do not have adequate backup and disaster recovery plans. When executives think in terms of the time and effort to back up their own data, often crudely copying key files to CD-ROM or USB key, and worrying about the management of those copies, which copies are the latest, and when those copies can be destroyed, they might reject deploying appropriate backup policies for others.
Times a million, the collected data stored on laptops could easily be half of your company's emails and intellectual property. Products like IBM Tivoli Storage Manager can manage a large number of clients with a few administrators.
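The core policy behind such products is simple enough to sketch. Below is a toy incremental backup in Python with hypothetical paths; it bears no resemblance to how TSM actually works, but it shows the kind of rule (copy only what changed since last time) that is tedious by hand and trivial to automate:

```python
import os
import shutil

# Toy incremental backup (not TSM, just the core idea): copy only files
# modified since the last run, whose time is recorded in a marker file.
SOURCE = os.path.expanduser("~/Documents")  # hypothetical data to protect
TARGET = "/mnt/backup/Documents"            # hypothetical backup location
MARKER = "/mnt/backup/.last_run"

last_run = os.path.getmtime(MARKER) if os.path.exists(MARKER) else 0.0
for dirpath, _, filenames in os.walk(SOURCE):
    for name in filenames:
        src = os.path.join(dirpath, name)
        if os.path.getmtime(src) > last_run:  # changed since last backup?
            dst = os.path.join(TARGET, os.path.relpath(src, SOURCE))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)            # copy, preserving timestamps
open(MARKER, "w").close()                     # record this run's time
```

Run something like this on a schedule and the "which copy is the latest" worry disappears: the newest versions are always at the target.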
So, next time you are looking at technology or solutions for your data center, don't suffer from "Laptop Mentality". Focus instead on the data center as a whole.
Avi Bar-Zeev of RealityPrime has an interesting post about How Google Earth [really] Works. Normally, people who are very knowledgeable in a topic have a hard time describing concepts in basic terms. Avi was one of the co-founders of Keyhole, the company that built the predecessor to Google Earth, and also worked with Linden Lab on the 3D rendering in its virtual world, so he certainly knows what he is talking about. While he sometimes drops down into techno-talk about patents, the post overall is a good read.
It is perhaps human nature to be curious about how things are put together and how they function, leading to the popularity of web sites devoted to explaining how things work.
Many things can be used without understanding their inner workings. You can put on a pair of blue jeans without knowing how the cotton was made into denim fabric; lace up your favorite pair of running shoes without understanding the chemical make-up of the plastic that cushions your feet; or drink a glass of beer after your five mile run without knowing how alcohol is processed by your liver.
For technology, however, some people insist they need to know how it works in order for them to get the most use of it. When shopping for a car, for example, a guy might look under the hood, and ask questions about how the engine works, while his wife sits inside the vehicle, counting cup holders and making sure the radio has all the right buttons.
Not all technology suffers from need-to-know-itis. For example, the Apple iPod music player and the Canon PowerShot digital camera are both just disk systems that read and write data, with knobs and dials on one end, and ports for connectivity on the other. Everyone just asks how to use their controls, and might read the manual to understand how to connect the cables. Few people who use these devices ask how they work before they buy them.
Other disk systems, the kind designed for data centers of medium and large enterprises, apparently aren't there yet. Storage admins who might happily own both an iPod player and a PowerShot camera insist they need to know how the technologies inside various storage offerings work. Is this just curiosity talking? Or are there some tasks like configuration, tuning, and support that just can't be done without this knowledge? Does knowing the inner workings somehow make the job more enjoyable, easier, or performed with less stress?
I'm curious what you think; send me a comment on this.
technorati tags: Avi Bar-Zeev, Google, Earth, cotton, denim, plastic, shoes, beer, alcohol, liver, IBM, disk, system, storage, technology, Apple, iPod, music, player, Canon, PowerShot, digital, camera
It's Tuesday, which means IBM makes its announcements. We had several for the IBM System Storage product line. Here's a quick recap.
I'm off to Denver, Colorado this week. I hope it is cooler there than it is down here in Tucson, Arizona.
technorati tags: IBM, disk, system, storage, SAS, FC, DS3000, DS3200, DS3400, EXP3000, NAS, EXN1000, tape, virtualization, library, TS7740, grid, Copy Export, throughput, TS3400, TS3200, mainframe, LTO, Ultrium, Cisco, MDS, 9124, Express, Advantage, DS4000, DS4700, TS3200, GAM, Grid Archive Manager, 3996, optical, WORM, Denver, Colorado, Tucson, Arizona, announcements
This week and next I am touring Asia, meeting with IBM Business Partners and sales reps about our July 10 announcements.
Clark Hodge might want to figure out where I am, given the nuclear reactor shutdowns from an earthquake in Japan. His theory is that you can follow my whereabouts just by following the news of major power outages throughout the world.
So I thought this would be a good week to cover the topic of Business Continuity, which includes disaster recovery planning. When making Business Continuity plans, I find it best to work backwards. Think of the scenarios that would require such recovery actions to take place, then figure out what you need to have at hand to perform the recovery, and then work out the tasks and processes to make sure those things are created and available when and where needed.
I will use my IBM Thinkpad T60 as an example of how this works. Last week, I was among several speakers making presentations to an audience in Denver, and this involved carrying my laptop from the back of the room, up to the front of the room, several times. When I got my new T60 laptop a year ago, the documentation specifically said NOT to carry the laptop while the disk drive was spinning, to avoid vibrations and gyroscopic effects. It suggested always putting the laptop in standby, hibernate or shutdown mode prior to transportation, but I haven't yet gotten into the habit of doing this. After enough trips back and forth, I had somehow corrupted my C: drive. It wasn't a complete corruption; I could still use Microsoft PowerPoint to show my slides, but other things failed, sometimes with the fatal BSOD and other times less drastically. Perhaps the biggest annoyance was that I lost a few critical DLL files needed for my VPN software to connect to IBM networks, so I was unable to download or access e-mail or files inside IBM's firewall.
Fortunately, I had planned for this scenario, and was able to recover my laptop myself, which is important when you are on the road and your help desk is thousands of miles away. (In theory, I am now thousands of miles closer to our help desk folks in India and China, but perhaps further away from those in Brazil.) Not being able to respond to e-mail for two days was one thing, but no access for two weeks would have been a disaster! The good news: My system was up and running before leaving for the trip I am on now to Asia.
Following my three-step process, here's how this looks:
technorati tags: IBM, July, announcements, earthquake, Japan, nuclear reactor, power, outage, business, continuity, disaster, recovery, plan, plans, planning, IBM, Thinkpad, T60, laptop, Windows, Denver, BSOD, VPN, India, China, Brazil, help desk, Asia, Tivoli, Storage, Manager, TSM, BMR, external, USB, bootable, CD, DVD, separating, programs, data, Clark Hodge
Continuing this week's theme on Business Continuity, I thought I would explore more on the identification of scenarios to help drive appropriate planning. As I mentioned in my last post, this should be done first.
A recent post in Anecdote talks about the long list of cognitive biases which affect business decision making. This list is a good explanation of why so many people have a difficult time identifying appropriate recovery scenarios as the basis for Business Continuity planning. Their "cognitive biases" get in the way.
Again, using my IBM Thinkpad T60 laptop as an example, here are a variety of scenarios:
technorati tags: IBM, Business, Continuity, plan, plans, planning, Thinkpad, T60, laptop, NTFS, CHKDSK, hard disk crash, USB, key, Live, CD, LiveCD, DVD, Ubuntu, Linux, SUSE, RedHat, Fedora, shell, failure
Continuing this week's theme on Business Continuity, I will use this post to discuss this week's IBM solid state disk announcement. This new offering provides a new way to separate programs from data, to help minimize downtime and outages normally associated with disk drive failures.
Until now, the method most people used to minimize the amount of data on internal storage was to use disk-less servers with Boot-over-SAN; however, not all operating systems, and not all disk systems, supported this.
Windows, however, is not supported, because of the small 4GB size and USB protocol limitations. For Windows, you would add a SAS drive, boot from this hard drive, and use the 4GB flash drive for data only.
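To give one concrete flavor of that program/data separation on Windows XP: redirecting My Documents to the data drive. Here is a minimal sketch in Python; the drive letter and path are assumptions on my part, and the registry value shown is the XP-era "User Shell Folders" location:

```python
import winreg  # Windows-only standard library module

# Sketch: point Windows XP's "My Documents" at the data-only drive,
# assumed here to appear as D:. Log off and back on for it to take effect.
KEY = r"Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY, 0, winreg.KEY_SET_VALUE) as k:
    winreg.SetValueEx(k, "Personal", 0, winreg.REG_EXPAND_SZ, r"D:\My Documents")
```

With programs on the boot drive and documents on D:, rebuilding the operating system no longer puts your data at risk.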
So what's new this time? Here's a quick recap of the July 17 announcement. For the IBM BladeCenter HS21 XM blade servers, there are new models of internal "disk" storage:
Until recently, solid state storage was available at a price premium only. Flash prices have dropped 50% annually while capacities have doubled. This trend is expected to continue through 2009.
Flash drives use non-volatile memory instead of moving parts, so they are less likely to break down under high external environmental stress, like vibration and shock, or extreme temperature ranges (0°C to +70°C) that would make traditional hard disks prone to failure. This is especially important for our telecommunications clients, who are always looking for solutions that are NEBS Level 3 compliant.
Last year, I mentioned that flash drives could provide only a limited number of write and erase cycles, but today's new advances in wear-leveling algorithms have nearly eliminated this limitation.
As with any SATA drive, performance depends on workload. Solid state drives perform best as OS boot devices, taking only a few seconds longer to boot an OS than from a traditional 73GB SAS drive. Flash drives also excel in applications featuring random read workloads, such as web servers. For random and sequential write workloads, use SAS drives instead for higher levels of performance.
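If you want to see the random-read behavior for yourself, a crude probe is easy to write. This Python sketch (the file name, read count and block size are all placeholders, and it is nothing like a formal SPC benchmark) times small reads at random offsets:

```python
import os
import random
import time

# Crude random-read probe. Unix-only (os.pread). Use a file much larger
# than RAM, or the OS page cache will make any device look fast.
PATH = "testfile.bin"   # hypothetical large file on the device under test
READS, BLOCK = 2000, 4096

size = os.path.getsize(PATH)       # assumes the file is far larger than BLOCK
fd = os.open(PATH, os.O_RDONLY)
start = time.perf_counter()
for _ in range(READS):
    offset = random.randrange(0, size - BLOCK)
    os.pread(fd, BLOCK, offset)    # one small read at a random offset
elapsed = time.perf_counter() - start
os.close(fd)
print(f"{READS / elapsed:.0f} random reads/sec")
```

On a flash device the reads/sec figure should hold up as the file grows; on a spinning 73GB SAS drive it will drop sharply once seek time dominates.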
So, even though this is not part of the System Storage product line, I am very excited for IBM. To find out if this will work in your environment, go to the IBM Server Proven website that lists compatibility with hardware, applications and middleware, or review the latest Configuration and Options Guide (COG).
technorati tags: IBM, Business, Continuity, solid, state, flash, disk, drive, announcement, blade, server, BladeCenter, H21, XM, 4GB, Flash, Memory, Device, USB2.0, Linux, RedHat, RHEL, Novell, SUSE, SLES, Windows, Project, Big Green, SATA, SAS, energy, efficient, efficiency, performance, NEBS, telecommunications, boot-over-SAN, Google, Carnegie Mellon, study, Vmware
Wrapping up my week's discussion on Business Continuity, I've had lots of interest in my opinion stated earlier this week that it is good to separate programs from data, that this simplifies the recovery process, and that the Windows operating system can fit in a partition as small as the 15.8GB solid state drive we just announced for BladeCenter. It worked for me, and I will use this post to show you how to get it done.
Disclaimer: This is based entirely on what I know and have experienced with my IBM Thinkpad T60 running Windows XP, and is meant as a guide. If you are running with different hardware or different operating system software, some steps may vary.
For this project, I have a DVD/CD burner in my Ultra-Bay, a stack of blank CDs and DVDs, and a USB-attached 320GB external disk drive.
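The heart of the procedure is imaging the Windows partition from the Linux rescue CD onto the external drive, in pieces small enough to manage easily; the classic way is dd piped through gzip and split. Here is the same idea as a minimal Python sketch, where the device name, output path and chunk size are examples only; run it from the LiveCD so the partition is not mounted:

```python
import gzip

# Sketch: image a partition to compressed chunks on an external USB disk.
# Run from a Linux LiveCD (partition unmounted); needs root to read /dev.
# /dev/sda1 and /mnt/usb are examples -- verify your own device names first!
DEVICE = "/dev/sda1"
CHUNK = 1024**3        # ~1 GB of raw data per output piece (FAT32-friendly)
BLOCK = 4 * 1024**2    # read 4 MB at a time

with open(DEVICE, "rb") as disk:
    part, data = 0, b"x"       # sentinel so the outer loop starts
    while data:
        with gzip.open(f"/mnt/usb/sda1.img.gz.{part:03d}", "wb") as out:
            written = 0
            while written < CHUNK:
                data = disk.read(BLOCK)
                if not data:   # end of partition reached
                    break
                out.write(data)
                written += len(data)
        part += 1
```

Restoring is the reverse: concatenate the pieces, gunzip, and write the stream back onto the partition.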
As with Business Continuity in the data center, planning in this manner can help you get back "up and running"quickly in the event of a disaster.
technorati tags: IBM, Business Continuity, Windows, XP, BladeCenter, solid, state, disk, backup, Linux, sysresccd, LiveCD, dd, gzip, split, Tivoli, Storage Manager, USB, Lotus Notes, NTFS, NTFS-3G, FAT32, primary, extended, logical, partition, magic, gparted
For those in the US, a comedian named Carlos Mencia has a great TV show, Mind of Mencia, and one of my favorite segments is "Why the @#$% is this news!", where he shows blatantly obvious things that were reported in various channels.
So, when I saw that IBM once again, for the third year in a row, has the fastest disk system, the IBM System Storage SAN Volume Controller (SVC), based on widely-accepted industry benchmarks, I was tempted to ask the same question: why is this news?
(Last year, I received comments from Woody Hutsell, VP of Texas Memory Systems, because I pointed out that their "World's Fastest Storage"® cache-only system was not as fast as IBM's SVC. You can read my opinions, and the various comments that ensued, here and here.)
That all changed when EMC uber-blogger Chuck Hollis forgot his own Lessons in Marketing when he posted his rant Does Anyone Take The SPC Seriously? That's like asking "Does anyone take book and movie reviews seriously?" Of course they do! In fact, if a movie doesn't make a big deal of its "Two thumbs up!" rating, you know it did not sit well with the reviewers. It's even more critical for books. I guess this latest news from the SPC really got under EMC's skin.
For medium and large size businesses, storage is expensive, and customers want to do as much research as possible ahead of time to make informed decisions. A lot of money is at stake, and often, once you choose a product, you are stuck with that vendor for many years to come, sometimes paying software renewals after only 90 days, and hardware maintenance renewals after only a year when the warranty runs out.
Customers shopping for storage like the idea of a standardized test that is representative, so they can compare one vendor's claims with another's. The Storage Performance Council (SPC), much like the Transaction Processing Performance Council (TPC) for servers, requires full disclosure of the test environment, so people can see what was measured and judge for themselves whether or not it reflects their workloads. Chuck pours scorn on the SPC, but I would point to TPC-C as a great success story and ask why he thinks the same can't happen for storage. Server performance is also a complicated subject, but people compare TPC-C and TPC-H benchmarks all the time.
Note: This blog post has been updated. I am retracting comments that were unfair generalizations. The next two paragraphs are different than originally posted.
Chuck states that "Anyone is free, however, to download the SPC code, lash it up to their CLARiiON, and have at it." I encourage every customer to do this with whatever disk systems they already have installed. Judge for yourself how each benchmark compares to your experience with your application workload, and consider publishing the results for the benefit of others, or at least send me the results, so that I can better understand all of these "use cases" that Chuck talks about so often. I agree that real-world performance measurements using real applications and real data are always going to be more accurate and more relevant to that particular customer. Unfortunately, few such results are made public; they are noticeably absent. With thousands of customers running storage from all the major storage vendors, as well as from smaller start-up companies, I would expect more performance comparison data to be readily available.
In my opinion, customers would benefit by seeing the performance results obtained by others. SPC benchmarks help fill this void, providing guidance for customers who have not yet purchased the equipment and are deciding which vendors to work with and which products to put into their consideration set.
Truth is, benchmarks are just one of many ways to evaluate storage vendors and their products. There are also customer references, industry awards, and corporate statements of a company's financial health, strategy and vision. Like anything, it is information to weigh against other factors when making expensive decisions. And I am sure the SPC would be glad to hear suggestions for a third benchmark, SPC-3, if the first two don't provide you enough guidance.
So, if you are not delighted with the performance you are getting from your storage now, or would benefit by having even faster I/O, consider improving its performance by adding SAN Volume Controller. SVC is like salt or soy sauce: it makes everything taste better. IBM would be glad to help you with a try-and-buy or proof-of-concept approach, and even help you compare the performance, before and after, with whatever gear you have now. You might just be surprised how much better life is with SVC. And if, for some reason, the performance boost you experience for your unique workload is only 10-30% better with SVC, you are free to tell the world about your disappointment.
technorati tags: Carlos Mencia, Mind of Mencia, IBM, system, storage, SVC, SAN Volume Controller, Storage Performance Council,SPC, benchmarks, Texas Memory Systems, Woody Hutsell, EMC, Chuck Hollis, movie, book, reviews, awards, salt, soy sauce
Continuing my business trip through Asia, I have left Chengdu, China, and am now in Kuala Lumpur, Malaysia.
On Sunday, a colleague and I went to the famous Petronas Twin Towers, which a few years ago were officially the tallest buildings in the world. If you get there early enough in the day, and wait in line for a few hours, you can get a ticket permitting you to go up to the "Skybridge" on the 41st floor that connects the two buildings. The views are stunning, and I am glad to have done this. (If you are afraid of heights, get cured by facing your fears with skydiving.)
You would think that a question as simple as "Which is the tallest building in the world?" could easily be answered, given that buildings remain fixed in one place and do not drastically shrink or grow over time or with the weather, and the unit of height, the "meter", is an officially accepted standard in all countries, defined as the distance traveled by light in an absolute vacuum in 1/299,792,458 of a second.
The controversy centers on two key areas of dispute:
To bring some sanity to these comparisons, the Council on Tall Buildings and Urban Habitat has tried to standardize the terms and definitions to make comparisons between buildings fair. Why does it matter whose building is tallest? It matters in two ways:
What does any of this have to do with storage? Two weeks ago, IBM and the Storage Performance Council answered the question "Which is the fastest disk system?" with a press release. Customers that care about performance of their most mission critical applications are often willing to pay a premium to run their applications on the fastest disk system, and the IBM System Storage SAN Volume Controller, built through a global collaboration of architects and engineers across several countries, is (in my opinion at least) an impressive feat of storage engineering.
EMC blogger Chuck Hollis was the first to question the relevance of these results, and I failed to "turn the other cheek" and responded accordingly. The blogosphere erupted, with more opinions piled on by others, many from EMC and IBM, found in comments on these posts or other blogs; some have since been retracted or deleted, while others remain for historical purposes.
At the heart of all this opinionated debate lie a few areas of exploration:
I will try to address some of these issues in a series of posts this week.
technorati tags: IBM, KL, Kuala Lumpur, Malaysia, Petronas, Twin Towers, SkyBridge, tallest, building, structure, tower, fastest, disk, system, SVC, SAN Volume Controller, EMC, Chuck Hollis, SPC, Storage Performance Council