Yesterday, I started this week's topic discussing the various areas of exploration to help understand our recent press release of the IBM System Storage SAN Volume Controller and its impressive SPC-1 and SPC-2 benchmark results, which rank it the fastest disk system in the industry.
Some have suggested that since the SVC has a unique design, it should be placed in its own category, and not compared to other disk systems. To address this, I would like to define what IBM means by "disk system" and how it is comparable to other disk systems.
When I say "disk system", I am going to focus specifically on block-oriented direct-access storage systems, which I will define as:
One or more IT components, connected together, that function as a whole, to serve as a target for read and write requests for specific blocks of data.
Clarification: One could argue, and several do in various comments below, that there are other types of storage systems that contain disks, some that emulate sequential access tape libraries, some that emulate file-systems through CIFS or NFS protocols, and some that support the storage of archive objects and other fixed content. At the risk of looking like I may be including or excluding such to fit my purposes, I wanted to avoid apples-to-oranges comparisons between very different access methods. I will limit this exploration to block-oriented, direct-access devices. We can explore these other types of storage systems in later posts.
People who have been working a long time in the storage industry might be satisfied by this definition, thinking of all the disk systems it would include, and recognizing that other types of storage, like tape systems, are appropriately excluded.
Others might be scratching their heads, thinking to themselves "Huh?" So, I will provide some background, history, and additional explanation. Let's break up the definition into different phrases, and handle each separately.
- read and write requests
Let's start with "read and write requests", which we often lump together generically as an input/output request, or just an I/O request. Typically an I/O request is initiated by a host, over a cable or network, to a target. The target responds with an acknowledgment, data, or a failure indication. A host can be a server, workstation, personal computer, laptop or other IT device that is capable of initiating such requests, and a target is a device or system designed to receive and respond to such requests.
(An analogy might help. A woman calls the local public library. She picks up the phone, and dials the phone number of the one down the street. A man working at the library hears the phone ring, and answers it with "Welcome to the Public Library! How can I help you?" She asks "What is the capital city of Ethiopia?" He replies "Addis Ababa." Satisfied with this response, she hangs up. In this example, the query for information was the I/O request, initiated by the lady, to the public library target.)
Today, there are three popular ways I/O requests are made:
- CCW commands over OEMI, ESCON or FICON cables
- SCSI commands over SCSI, Fibre Channel or SAS cables
- SCSI commands over Ethernet cables, wireless or other IP communication methods
- specific blocks of data
In 1956, IBM was the first to deliver a disk system. It was different from tape because it was a "direct access storage device" (the acronym DASD is still used today by some mainframe programmers). Tape was a sequential medium: it could handle commands like "read the next block" or "write the next block", but it could not read a specific block directly without reading past the blocks before it, nor could it write over an existing block without risking overwriting the contents of the blocks after it.
The nature of a "block" of data varies. It is represented by a sequence of bytes of specific length. The length is determined in a variety of ways.
- CCW commands assume a Count-Key-Data (CKD) format for disk, meaning that tracks are fixed in size, but a track can consist of one or more blocks, which can be fixed or variable in length. Some blocks can span off the end of one track, and over to another track. Typical block sizes in this case are 8000 to 22000 bytes.
- SCSI commands assume a Fixed-Block-Architecture (FBA) format for disk, where all blocks are the same size, almost always a power of two, such as 512 or 4096 bytes. A few operating systems, however, such as i5/OS on IBM System i machines, use a block size that doesn't follow this power-of-two rule.
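To make "direct access" concrete, here is a minimal sketch using the Linux dd command (the device name /dev/sda and the block address are my own illustrative assumptions, not from any announcement):

# ask the disk for exactly one 512-byte block, by logical block address,
# without touching any of the blocks before it -- the essence of direct access
dd if=/dev/sda of=block.bin bs=512 skip=123456 count=1

A tape, by contrast, would have to stream past the 123,456 blocks in front of it to honor the same request.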
- one or more IT components
You may find one or more of the following IT components in a disk system:
- customized or general-purpose processing chips
- memory, such as RAM, Flash, or similar
- batteries and/or other power supply
- Host attachment cards or ports
- motorized platter(s) covered in magnetic coating with a read/write head to move over its surface. These are often referred to as Hard Disk Drives (HDD) or Disk Drive Modules (DDM), and are manufactured by companies like Seagate or Hitachi Global Storage Technologies.
A set of HDDs can be accessed individually, affectionately known as JBOD for Just a Bunch Of Disks, or collectively in a RAID configuration.
Memory can act as the high-speed cache in front of slower storage, or as the storage itself. For example, the solid state disk that IBM announced last week is entirely memory storage, using Flash technology.
Lately, there are two popular packaging methods for disk systems:
- Monolithic -- all the components you need connected together inside a big refrigerator-sized unit, with options to attach additional frames. The IBM System Storage DS8000, EMC Symmetrix DMX-4 and HDS TagmaStore USP-V all fit this category.
- Modular -- components that fit into standard 19-inch racks, often the size of the vegetable drawer inside a refrigerator, that can be connected externally with other components, if necessary, to make a complete disk system. The IBM System Storage DS6000, DS4000, and DS3000 series, as well as our SVC and N series, fall into this category.
Regardless of packaging, the general design is that a "controller" receives a request from its host attachment port, and uses its processors and cache storage to either satisfy the request, or pass the request to the appropriate HDD, and the results are sent back through the host attachment port.
In all of the monolithic systems, as well as some of the modular ones, the controller and HDD storage are contained in the same unit. On other modular systems, the controller is one system, and the HDD storage is in a separate system, and they are cabled together.
- serve as a target
The last part is that a disk system must be able to satisfy some or all requests that come to it.
(Using the same analogy used above, when the lady asked her question, the guy at the public library knew the answer from memory, and replied immediately. However, for other questions, he might need to look up the answer in a book, do a search on the internet, or call another library on her behalf.)
Some disk systems are cache-only controllers. For these, either the I/O request is satisfied as a read-hit or write-hit in cache, or it is not, and has to go to the HDD. The IBM DS4800 and N series gateways are examples of this type of controller.
Other systems may have controller and disk, but support additional disk attachment. In this case, either the I/O request is handled by the cache or internal disk, or it has to go out to external HDD to satisfy the request. IBM DS3000 series, DS4100, DS4700, and our N series appliance models, all fall into this category.
So, the SAN Volume Controller is a disk system comprising one to four node-pairs. Each node is a piece of IT equipment that has processors and cache. These node-pairs are connected to a pair of UPS power supplies to protect the cache memory holding writes that have not yet been de-staged. The combination of node-pairs and UPS, acting as a whole, is able to serve as a target for SCSI commands sent over Fibre Channel cables on a Storage Area Network (SAN). For some blocks of data, it satisfies read requests from its internal cache storage; for others, it goes out to the external disk systems that contain the required data. All writes are satisfied immediately in cache on the SVC, and later de-staged to external disk when appropriate.
As of the end of 2Q07, having reached our four-year anniversary for this product, IBM has sold over 9000 SVC nodes, which are part of more than 3100 SVC disk systems. These things are flying off the shelves, clocking in at 100% year-to-year growth over the amount we sold twelve months ago. Congratulations go to the SVC development team for their impressive feat of engineering that is starting to catch the attention of many customers and return astounding results!
So, now that I have explained why the SVC is considered a disk system, tomorrow I'll discuss metrics to measure performance.
technorati tags: IBM, SVC, SAN Volume Controller, HDD, DDM, DS4800, EMC, HDS, HP, DS4100, DS4700, Flash, RAM, solid-state, disk, system, controller, array, RAID, I/O, IO, request, read, write
Continuing my business trip through Asia, I have left Chengdu, China, and am now in Kuala Lumpur, Malaysia.
On Sunday, a colleague and I went to the famous Petronas Twin Towers, which a few years ago were officially the tallest buildings in the world. If you get there early enough in the day, and wait in line for a few hours, you can get a ticket permitting you to go up to the "Skybridge" on the 41st floor that connects the two buildings. The views are stunning, and I am glad to have done this. (If you are afraid of heights, get cured by facing your fears with skydiving.)
You would think that a question as simple as "Which is the tallest building in the world?" could easily be answered, given that buildings remain fixed in one place and do not drastically shrink or get taller over time or weather conditions, and the unit of height, the "meter", is an officially accepted standard in all countries, defined as the distance traveled by light in absolute vacuum in 1/299,792,458 of a second.
The controversy stems from two key areas of dispute:
- What constitutes a building?
A building is a structure intended for continuous human occupancy, as opposed to the dozens of radio and television broadcasting towers which measure over 600 meters in height. The Petronas Twin Towers is occupied by a variety of business tenants and would qualify as a building. Radio and television towers are not intended for occupation, and should not be considered.
- Where do you start measuring, and where do you stop?
Since 1969, height has generally been measured from the sidewalk level of the main entrance to the architectural top of the building. The "architectural top" included towers, spires (but not antennas), masts or flagpoles. Or should the measurement be only to the top of the highest inhabitable floor?
What if the building has many more floors below ground level? What if the building exists in a body of water, should sidewalk level equate to water level, and at low tide or high tide? (Laugh now, but this might happen sooner than you think!)
To bring some sanity to these comparisons, the Council on Tall Buildings and Urban Habitat has tried to standardize the terms and definitions to make comparisons between buildings fair. Why does it matter whose building is tallest? It matters in two ways:
- People and companies are willing to pay more to be a tenant in tall towers, affording a luxurious bird's-eye view to impress friends, partners and clients, and so the rankings can influence purchase or leasing prices of floor space in these buildings.
- Architects and engineers involved in building these structures want to list them on their resumes. These buildings are an impressive feat of engineering, and the teams involved collaborate in a global manner to accomplish them. If an architecture or engineering company can build the world's tallest building, you can trust them to build one for you. The rankings can help drive revenues in generating demand for services and offerings.
What does any of this have to do with storage? Two weeks ago, IBM and the Storage Performance Council answered the question "Which is the fastest disk system?" with a press release. Customers that care about the performance of their most mission-critical applications are often willing to pay a premium to run their applications on the fastest disk system, and the IBM System Storage SAN Volume Controller, built through a global collaboration of architects and engineers across several countries, is (in my opinion at least) an impressive feat of storage engineering.
EMC blogger Chuck Hollis was the first to question the relevance of these results, and I failed to "turn the other cheek" and responded accordingly. The blogosphere erupted, with more opinions piled on by others, many from EMC and IBM, found in comments on these posts or other blogs; some have since been retracted or deleted, while others remain for historical purposes.
At the heart of all this opinionated debate lie a few areas of exploration:
- What constitutes a "disk system"? What should or should not be considered for comparison?
- What metrics should be used to measure performance? What is a version of the "meter" everyone can use?
- How should the measurements occur? Who should perform them?
- Do the measurements provide sufficient value for the purpose of aiding the purchase decision making process?
I will try to address some of these issues in a series of posts this week.
technorati tags: IBM, KL, Kuala Lumpur, Malaysia, Petronas, Twin Towers, SkyBridge, tallest, building, structure, tower, fastest, disk, system, SVC, SAN Volume Controller, EMC, Chuck Hollis, SPC, Storage Performance Council
For those in the US, a comedian named Carlos Mencia has a great TV show, Mind of Mencia, and one of my favorite segments is "Why the @#$% is this news!" where he goes about showing blatantly obvious things that were reported in various channels.
So, when I saw that IBM once again, for the third year in a row, has the fastest disk system, the IBM System Storage SAN Volume Controller (SVC), based on widely-accepted industry benchmarks representing typical business workloads, I thought, "Do I really want to blog about this, and sound like a broken record, repeating my various statements of the past about how great SVC is?" It's like reminding people that IBM has earned more US patents than any other company, every year, for the past 14 years.
(Last year, I received comments from Woody Hutsell, VP of Texas Memory Systems, because I pointed out that their "World's Fastest Storage"® cache-only system was not as fast as IBM's SVC. You can read my opinions, and the various comments that ensued, here and here.)
That all changed when EMC uber-blogger Chuck Hollis forgot his own Lessons in Marketing when he posted his rant Does Anyone Take The SPC Seriously? That's like asking "Does anyone take book and movie reviews seriously?" Of course they do! In fact, if a movie doesn't make a big deal of its "Two thumbs up!" rating, you know it did not sit well with the reviewers. It's even more critical for books. I guess this latest news from SPC really got under EMC's skin.
For medium and large size businesses, storage is expensive, and customers want to do as much research as possible ahead of time to make informed decisions. A lot of money is at stake, and often, once you choose a product, you are stuck with that vendor for many years to come, sometimes paying software renewals after only 90 days, and hardware maintenance renewals after only a year when the warranty runs out.
Customers shopping for storage like the idea of a standardized test that is representative, so they can compare one vendor's claims with another. The Storage Performance Council (SPC), much like the Transaction Processing Performance Council (TPC) for servers, requires full disclosure of the test environment so people can see what was measured and make their own judgement on whether or not it reflects their workloads. Chuck pours scorn on the SPC, but I would point to the TPC-C benchmark as a great success story and ask why he thinks the same can't happen for storage. Server performance is also a complicated subject, but people compare TPC-C and TPC-H benchmarks all the time.
Note: This blog post has been updated. I am retracting comments that were unfair generalizations. The next two paragraphs are different than originally posted.
Chuck states that "Anyone is free, however, to download the SPC code, lash it up to their CLARiiON, and have at it." I encourage every customer to do this with whatever disk systems they already have installed. Judge for yourself how each benchmark compares to your experience with your application workload, and consider publishing the results for the benefit of others, or at least send me the results, so that I can better understand all of these "use cases" that Chuck talks about so often. I agree that real-world performance measurements using real applications and real data are always going to be more accurate and more relevant to that particular customer. Unfortunately, few or no such results are made public. They are noticeably absent. With thousands of customers running storage from all the major storage vendors, as well as storage from smaller start-up companies, I would expect more performance comparison data to be readily available.
In my opinion, customers would benefit by seeing the performance results obtained by others. SPC benchmarks help to fill this void, providing guidance to customers who have not yet purchased the equipment and are deciding which vendors to work with, and which products to put into their consideration set.
Truth is, benchmarks are just one of the many ways to evaluate storage vendors and their products. There are also customer references, industry awards, and corporate statements of a company's financial health, strategy and vision. Like anything, it is information to weigh against other factors when making expensive decisions. And I am sure the SPC would be glad to hear of any suggestions for a third SPC-3 benchmark, if the first two don't provide you enough guidance.
So, if you are not delighted with the performance you are getting from your storage now, or would benefit by having even faster I/O, consider improving its performance by adding SAN Volume Controller. SVC is like salt or soy sauce: it makes everything taste better. IBM would be glad to help you with a try-and-buy or proof-of-concept approach, and even help you compare the performance, before and after, with whatever gear you have now. You might just be surprised how much better life is with SVC. And if, for some reason, the performance boost you experience for your unique workload is only 10-30% better with SVC, you are free to tell the world about your disappointment.
technorati tags: Carlos Mencia, Mind of Mencia, IBM, system, storage, SVC, SAN Volume Controller, Storage Performance Council,SPC, benchmarks, Texas Memory Systems, Woody Hutsell, EMC, Chuck Hollis, movie, book, reviews, awards, salt, soy sauce
Wrapping up my week's discussion on Business Continuity, I've had lots of interest in my opinion stated earlier this week that it is good to separate programs from data, that this simplifies the recovery process, and that the Windows operating system can fit in a partition as small as the 15.8GB solid state drive we just announced for BladeCenter. It worked for me, and I will use this post to show you how to get it done.
Disclaimer: This is based entirely on what I know and have experienced with my IBM Thinkpad T60 running Windows XP, and is meant as a guide. If you are running with different hardware or different operating system software, some steps may vary.
(Warning: Windows Vista apparently handles data, Dual Boot, and Partitions differently. These steps may not work for Vista.)
For this project, I have a DVD/CD burner in my Ultra-Bay, a stack of blank CDs and DVDs, and a USB-attached 320GB external disk drive.
- Step 0 - Backup your system
I find it amusing that this is ALWAYS the first step, but nobody provides any instruction. I will assume we start with a single C: drive with an operational Windows operating system and intermixed programs and data. If you have a Thinkpad, you should have the "IBM Rescue and Recovery" program already installed, but it is probably down-level. Mine was version 2.0 -- Yikes! Download IBM Rescue and Recovery Version 4.0 for Windows XP and Windows 2000, and reboot to complete the installation.
Make TWO backups. First, make a bootable rescue CD and back up to several DVDs. Second, back up to a large external 320GB USB-attached disk drive. IBM Rescue and Recovery does compression, so a 60GB drive that is mostly full might take about 8-10 DVDs; have plenty on hand. If you have to recover, boot from the CD, and restore from the USB-attached drive. If that doesn't work, you have the DVDs just in case.
If you are suitably happy with your backups, you are ready for step 1. For added protection, you can use a Linux LiveCD to backup your entire drive. I suggest SysRescCD, which is designed to be a rescue CD and can do backups and restores.
First, figure out if your drive is "hda" or "sda". The "dmesg" command below shows that mine is "sda", with output like this:
tpearson@tpearson:~$ dmesg | grep [hs]d
[ 7.968000] SCSI device sda: 117210240 512-byte hdwr sectors (60012 MB)
[ 7.968000] sda: Write Protect is off
[ 7.968000] sda: Mode Sense: 00 3a 00 00
[ 7.968000] SCSI device sda: write cache: enabled, read cache: enabled

I like to backup the master boot record to one file, and then the rest of the C: drive to a series of 690MB compressed chunks. These can be directed to the USB-attached drive, and then later burned onto CDrom, or packed 6 files per DVD. Most USB-attached drives are formatted with the FAT32 file system, which doesn't support files of 4GB or larger, so splitting these up into 690MB chunks keeps each one well below that limit.

dd if=/dev/sda of=/media/USBdrive/master.MBR bs=512 count=1
dd if=/dev/sda1 conv=sync,noerror | gzip -c | split -b 690m - /media/USBdrive/master.gz.

To recover your system, just reverse the process:

cat /media/USBdrive/master.gz.* | gzip -dc | dd of=/dev/sda1
dd if=/media/USBdrive/master.MBR of=/dev/sda bs=512 count=1

You can learn more about these commands here and here.
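One extra step I would suggest, as a hedge: record checksums of the backup pieces before burning them, so you can verify the copies later (assuming your LiveCD includes md5sum, as most do):

md5sum /media/USBdrive/master.MBR /media/USBdrive/master.gz.* > /media/USBdrive/backup.md5
md5sum -c /media/USBdrive/backup.md5    # re-run this later to verify the copies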
- Step 1 - Defrag your C: drive
From Windows, right-click on your Recycle Bin and select "Empty Recycle Bin".
Click Start->Programs->Accessories->System Tools->Disk Defragmenter. Select C: drive and push the Analyze button. You will see a bunch of red, blue and white vertical bars. If there are any green bars, we need to fix that. The following worked for me:
- Right-click "My Computer" and select Properties. Select Advanced, then press "Settings" buttonunder Performance. Select Advanced tab and press the "Change" button under Virtual Memory.Select "No Paging File" and press the "Set" button. Virtual memory lets you have many programs open, moving memory back and forth between your RAM and hard disk.
- Click Start->Control Panel->Performance and Maintenance->Power Options. On the Hibernate tab, make sure the "Enable Hibernation" box is un-checked. I don't use Hibernate, as it seems like it takes just as long to come back from Hibernation as it does to just boot Windows normally.
- Reboot your system to Windows.
If all went well, Windows will have deleted both pagefile.sys and hiberfil.sys, the two most common unmovable files, freeing up 2GB of space. You can run just fine without either of these features, but if you want them back, we will restore them in Step 6 below.
Go back to Disk Defragmenter, verify there are no green bars, and proceed by pressing the "Defragment" button. If there are still some green bars, you can proceed cautiously (you can always restore from your backup, right?), or seek professional help.
- Step 2 - Resize your C: drive
When the defrag is done, we are ready to re-size your file system. This can be done with commercial software like Partition Magic. If you don't have this, you can use open source software. Burn yourself the Gparted LiveCD. This is another Linux LiveCD, and is similar to Partition Magic.
Either way, re-size the C: drive smaller. In theory, you can shrink it down to 15GB if this is a fresh install of Windows, and there is no data on it. If you have lots of data, and the drive was nearly full, only resize the C: drive smaller by 2GB. That is how much we freed up from the unmovable files, so that should be safe.
You could do steps 2 and 3 while you are here, but I don't recommend it. Just re-size C:, press the "Apply" button, reboot into Windows, and verify everything starts correctly before going to the next step.
- Step 3 - Create Extended Partition and Logical D: drive
You can only have FOUR partitions on a drive, which can be Primary (like our C: for programs) or Extended (for data), and at most one can be Extended. However, the Extended partition can act as a container for one or more logical partitions.
Get back into the Partition Magic or Gparted program, and in the unused space freed up from re-sizing in the last step, create a new extended/logical partition. For now, just have one logical inside the extended, but I have co-workers who have two logical partitions, D: for data, and E: for their e-mail from Lotus Notes. You can always add more logical partitions later.
I selected "NTFS" type for the D: drive. In years past, people chose the older FAT32 type, but this has some limitations, but allowed read/write capability from DOS, OS/2, and Linux.Windows XP can only format up to 32GB partitions of FAT32, and each file cannot be bigger than 2GB.I have files bigger than that. Linux can now read/write NTFS file systems directly, using the new NTFS-3Gdriver, so that is no longer an issue.
- Step 4 - Format drive D: as NTFS
Just because you have told your partitioning program that D: was NTFS type, you still have to put a file system on it.
Click Start->Control Panel->Performance and Maintenance->Computer Management. Under Storage, select Disk Management. Right-click your D: drive and choose Format. Make sure the "Perform Quick Format" box is un-checked, so that it performs a full (slower) format that also checks the disk surface.
- Step 5 - Move data from C: to D: drive
Create two directories, "D:\documents" and "D:\notes\data", either through explorer, or in a command-line window with the "MKDIR documents notes\data" command.
Move files from c:\notes\data to d:\notes\data, and any folder in your "My Documents" over to d:\documents.
(If you have more data than the size of the D: drive, copy over what you can, run another defrag, resize your C: drive even smaller with Partition Magic or Gparted, reboot, verify Windows is still working, resize your D: bigger, and repeat the process until you have all of your data moved over.)
To inform Lotus Notes that all of your data is now on the D: drive, use NOTEPAD to edit notes.ini and change the Directory line to "Directory=D:\notes\data". If you have a special signature file, leave it in C:\notes directory.
Once all of your data is moved over to D:\documents, right-click on "My Documents" and select Properties. Change the target to "D:\documents" and press the "Move" button. Now, whenever you select "My Documents", you will be on your D: drive instead.
- Step 6 - Take A Fresh Backup
If you use IBM Tivoli Storage Manager, now would be a good time to re-evaluate your "dsm.opt" file that lists what drives and sub-directories to backup. Take a backup, and verify your data is being backed up correctly.
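As a rough, hypothetical sketch only (option names and syntax vary by TSM client level, and your corporate include/exclude list takes precedence, so treat this as illustration rather than a recommended configuration), the relevant fragment of a dsm.opt might look something like:

* back up the data drive by default; skip the OS drive that my
* rescue CD and DVDs already cover ("..." is TSM's any-directory wildcard)
DOMAIN d:
EXCLUDE c:\...\*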
With the USB-attached drive, back up both C: and D: drives. I leave my USB drive back in Tucson. For a backup copy while traveling, go to IBM Rescue and Recovery and take a C:-only backup to DVD. Make sure the D: drive box is un-checked. Now, if I ever need to reinstall Windows, because of file system corruption or virus, I can do this from my one bootable CD plus 2 DVDs, which I can easily carry with me in my laptop bag, leaving all my data on the D: drive intact.
In the worst case, if I had to re-format the whole drive or get a replacement disk, I can restore C: and then restore the few individual data files I need from IBM Tivoli Storage Manager, or a small USB key/thumbdrive, delaying a full recovery until I return to Tucson.
Lastly, if you want, reactivate "Virtual Memory" and "Hibernation" features that we disabled in Step 1.
As with Business Continuity in the data center, planning in this manner can help you get back "up and running" quickly in the event of a disaster.
technorati tags: IBM, Business Continuity, Windows, XP, BladeCenter, solid, state, disk, backup, Linux, sysresccd, LiveCD, dd, gzip, split, Tivoli, Storage Manager, USB, Lotus Notes, NTFS, NTFS-3G, FAT32, primary, extended, logical, partition, magic, gparted
Continuing this week's theme on Business Continuity, I will use this post to discuss this week's IBM solid state disk announcement. This new offering provides a new way to separate programs from data, to help minimize downtime and outages normally associated with disk drive failures.
Until now, the method most people used to minimize the amount of data on internal storage was to use disk-less servers with Boot-over-SAN; however, not all operating systems, and not all disk systems, supported this.
In April, the BladeCenter HS21 XM blade server introduced the option to have one IBM 4GB Flash Memory Device that used the USB 2.0 protocol. The 4GB drive can be used to boot 32-bit and 64-bit versions of Linux, such as Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES), but not Windows. Linux is an incredibly small operating system. You can boot versions from a USB key/thumbdrive (64MB) or CD (700MB) image, so it makes sense that a 4GB flash drive based on the USB protocol was a good fit for Linux.
Windows, however, is not supported, because of the small 4GB size and USB protocol limitations. For Windows, you would add a SAS drive, boot from that hard drive, and use the 4GB Flash drive for data only.
So what's new this time? Here's a quick recap of the July 17 announcement. For the IBM BladeCenter HS21 XM blade servers, there are new models of internal "disk" storage:
- Single drive model
A single 15.8GB solid-state disk drive, based on the SATA protocol. In addition to the Linux operating systems mentioned above, the capacity and SATA protocol allow you to boot 32-bit and 64-bit versions of Windows 2003 Server R2, with plans in place to support other platforms, such as VMware, in the future. I am able to run my laptop Windows with only a 15GB C: drive, separating my data to a separate D: partition, so this appears to be a reasonable size.
- Dual drive model
The dual drive model fits in the space of a single 2.5-inch HDD drive bay. You can combine these in either RAID 0 or RAID 1 mode.
- RAID 0 gives you a total of 31.6GB, but is riskier. If you lose either drive, you lose all your data. Michael Horowitz of Cnet covers the risks of RAID zero here and here. However, if you are just storing your operating system and application, easily re-loadable from CD or DVD in the case of loss, then perhaps that is a reasonable risk/benefit trade-off.
- RAID 1 keeps the capacity at 15.8GB, but provides added protection. If you lose either drive, the server keeps running on the surviving drive, allowing you to schedule repair actions when convenient and appropriate. This would be the configuration I would recommend for most applications.
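On the blade, the RAID function is handled for you, but if you want to see the same trade-off in action yourself, here is a minimal Linux software RAID sketch using mdadm (device names are my assumptions; create one array or the other, not both):

# RAID 0 (striping): full two-drive capacity, but one failure loses everything
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# RAID 1 (mirroring): capacity of one drive, survives a single drive failure
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc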
Until recently, solid state storage was available at a price premium only. Flash prices have dropped 50% annually while capacities have doubled. This trend is expected to continue through 2009.
According to recent studies from Google and Carnegie Mellon, hard drives fail more often than expected. By one account, conventional hard disk drives internal to the server account for as much as 20-50% of component replacements. IBM analysis indicates that the replacement rate of a solid state drive on a typical blade server configuration is only about 1% per year, vs. 3% or more mentioned in these studies for traditional disk drives.
Flash drives use non-volatile memory instead of moving parts, so they are less likely to break down under high external environmental stress conditions, like vibration and shock, or extreme temperature ranges (0°C to +70°C) that would make traditional hard disks prone to failure. This is especially important for our telecommunications clients, who are always looking for solutions that are NEBS Level 3 compliant.
Last year, I mentioned that flash drives could provide only a limited number of write and erase cycles, but today's new advances in wear-leveling algorithms have nearly eliminated this limitation.
As with any SATA drive, performance depends on workload. Solid state drives perform best as OS boot devices, taking only a few seconds longer to boot an OS than from a traditional 73GB SAS drive. Flash drives also excel in applications featuring random read workloads, such as web servers. For random and sequential write workloads, use SAS drives instead for higher levels of performance.
Part of IBM's Project Big Green, these flash drives are very energy efficient. Thanks to sophisticated power management software, the power requirement of the solid state drive can be 95 percent better than that of a traditional 73GB hard disk drive. These 15.8GB drives use only 2W per drive versus as much as 10W per 2.5” hard drive and 16W per 3.5” hard drive. The resulting power savings can be up to 1,512 watts per server rack, with 50% heat reduction.
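As a back-of-the-envelope check (my own arithmetic and an assumed rack layout; the announced 1,512 watt figure presumably reflects a denser configuration):

# hypothetical rack: 6 BladeCenter chassis x 14 blades x 2 drive bays each
drives=168
hdd_watts=10    # per 2.5-inch hard drive, from the figures above
ssd_watts=2     # per 15.8GB solid state drive
echo "saves $(( drives * (hdd_watts - ssd_watts) )) watts per rack"    # prints: saves 1344 watts per rack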
So, even though this is not part of the System Storage product line, I am very excited for IBM. To find out if this will work in your environment, go to the IBM ServerProven website that lists compatibility with hardware, applications and middleware, or review the latest Configuration and Options Guide (COG).
technorati tags: IBM, Business, Continuity, solid, state, flash, disk, drive, announcement, blade, server, BladeCenter, H21, XM, 4GB, Flash, Memory, Device, USB2.0, Linux, RedHat, RHEL, Novell, SUSE, SLES, Windows, Project, Big Green, SATA, SAS, energy, efficient, efficiency, performance, NEBS, telecommunications, boot-over-SAN, Google, Carnegie Mellon, study, Vmware
Continuing this week's theme on Business Continuity, I thought I would explore more on the identification of scenarios to help drive appropriate planning. As I mentioned in my last post, this should be done first.
A recent post in Anecdote talks about the long list of cognitive biases which affect business decision making. This list is a good explanation of why so many people have a difficult time identifying appropriate recovery scenarios as the basis for Business Continuity planning. Their "cognitive biases" get in the way.
Again, using my IBM Thinkpad T60 laptop as an example, here are a variety of different scenarios:
- Corrupted File System
Some file systems are more fragile than others. If your NTFS file system gets corrupted, you might be able to run CHKDSK C: /F, but this just puts damaged blocks into dummy files; it doesn't really repair your files back to their pre-damage level. All kinds of things can damage the file system, including viruses, software defects, and user error.
I keep my programs and data in separate file systems. C: has my Windows operating system and applications, and D: holds my pure data. If one file system is corrupted, the other one might be intact, mitigating the risk.
- Hard Disk Crash
Hopefully, you will have temporary read/write errors to provide warning prior to a complete failure. In theory, if I kept a spare hard disk in my laptop bag, I could swap out the bad drive with the good drive. I don't have that. The three times that I have had a disk failure all occurred while I was in Tucson.
Instead, I keep the few files I need for my trip on a separate USB key, and carry a bootable Live CD, which allows me to boot entirely from the CDrom drive, either to run applications or to perform rescue operations.
The latest one that I am trying out is Ubuntu Linux, which has OpenOffice 2.2 that can read/write PowerPoint, Word, and Excel files; the Firefox web browser; Gimp graphics software; and a variety of other applications, all in a 700MB CDrom image. I have even been able to get Wireless (Wi-Fi) working with it, and the process to create your own customized Live CD with your own application packages is fairly straightforward. Combined with a writeable USB key, you can actually get work done this way (see the sketch below). Special thanks to IBM blogger Bob Sutor for pointing me to this.
(If you have a DVD-RAM drive, there are bigger Live CDs from SUSE and RedHat Fedora that provide even more applications)
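If the Live CD does not mount your USB key automatically, here is a minimal sketch from a terminal (the device name /dev/sdb1 is an assumption; run "dmesg" right after plugging the key in to see what yours is called):

mkdir -p /mnt/usbkey
mount /dev/sdb1 /mnt/usbkey    # mount the key read/write
# ... work on your files under /mnt/usbkey ...
umount /mnt/usbkey             # always unmount before unplugging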
- Laptop Shell Failure
This might catch some people by surprise. I have had the keyboard, LCD screen, or some essential port/plug fail on my laptop. The disk drive and CDrom drive work fine, but unless you have another "laptop" to stick them into, they don't help you recover. This can also happen if the motherboard fails, or the battery is unable to hold a charge.
IBM provides a 24-hour turnaround fix. Basically, IBM sends me a laptop shell, no disk drive, no CDrom, with instructions to move the disk drive and CDrom drive from the broken shell to the new shell, then send the bad shell back in the same shipping box.
Here, again, I am thankful that I keep my key files on a USB key. Often I travel with other IBMers, and can borrow their laptop to make presentations, check my e-mail, or do other work, until I can get my replacement shell. If you are travelling outside the US, you might be able to move your disk drive into a colleague's laptop, access the data, and copy it to your USB key or burn a copy on CD or DVD.
In a data center, many outages are really "failures to access data", but the data is safe. For example, power outages, network outages, and so on, can prevent people from using their IT systems, but the data is safe when these are re-established.
- Temporary Separation
At times, I have been temporarily separated from my laptop. Three examples:
- A higher level executive had technical difficulties with his laptop, and usurped mine instead.
- A colleague forgot his power supply for his laptop, and borrowed my laptop instead. (I wish there were a standard for laptop power plug connectors)
- Customs agents confiscated my laptop, gave me a receipt, and eventually I got it back.
In all cases, I was glad that no "recovery" was required, and that the few files I needed were on my USB key. A few times, I was able to get by on the machines available at the nearest Internet Cafe, in the meantime.
With some imagination, you can recognize that this scenario is similar to the previous one for laptop shell failure. This is a good example of how you can identify different scenarios, and then later discover they have similar properties in terms of recovery, and can be treated as one.
- Permanent Separation
Laptops are stolen every day. Luckily, I've only had this happen twice to me in my career at IBM, and I managed to get a replacement soon enough. The key lesson here is to keep your USB key and recovery media in separate luggage. I know it is more convenient to keep all computer-related stuff in one place, but a thief is going to take your whole laptop bag, to make sure that all cables and power supplies are included, and is not going to leave anything behind. That would just slow them down.
In each case, some brainstorming, or personal experience, can help identify scenarios, identify what makes them unique from a recovery perspective, and plan accordingly. If you are looking to create or upgrade your Business Continuity plan, give IBM a call; we can help!
technorati tags: IBM, Business, Continuity, plan, plans, planning, Thinkpad, T60, laptop, NTFS, CHKDSK, hard disk crash, USB, key, Live, CD, LiveCD, DVD, Ubuntu, Linux, SUSE, RedHat, Fedora, shell, failure
This week and next I am touring Asia, meeting with IBM Business Partners and sales reps about our July 10 announcements.
Clark Hodge might want to figure out where I am, given the nuclear reactor shutdowns from an earthquake in Japan. His theory is that you can follow my whereabouts just by following the news of major power outages throughout the world.
So I thought this would be a good week to cover the topic of Business Continuity, which includes disaster recovery planning. When making Business Continuity plans, I find it best to work backwards. Think of the scenarios that would require such recovery actions to take place, then figure out what you need to have at hand to perform the recovery, and then work out the tasks and processes to make sure those things are created and available when and where needed.
I will use my IBM Thinkpad T60 as an example of how this works. Last week, I was among several speakers making presentations to an audience in Denver, and this involved carrying my laptop from the back of the room, up to the front of the room, several times. When I got my new T60 laptop a year ago, the documentation specifically stated NOT to carry the laptop while the disk drive was spinning, to avoid vibrations and gyroscopic effects. It suggested always putting the laptop in standby, hibernate or shutdown mode prior to transportation, but I haven't yet gotten in the habit of doing this. After enough trips back and forth, I had somehow corrupted my C: drive. It wasn't a complete corruption; I could still use Microsoft PowerPoint to show my slides, but other things failed, sometimes with the fatal BSOD and other times less drastically. Perhaps the biggest annoyance was that I lost a few critical DLL files needed for my VPN software to connect to IBM networks, so I was unable to download or access e-mail or files inside IBM's firewall.
Fortunately, I had planned for this scenario, and was able to recover my laptop myself, which is important when you are on the road and your help desk is thousands of miles away. (In theory, I am now thousands of miles closer to our help desk folks in India and China, but perhaps further away from those in Brazil.) Not being able to respond to e-mail for two days was one thing, but no access for two weeks would have been a disaster! The good news: My system was up and running before leaving for the trip I am on now to Asia.
Following my three-step process, here's how this looks:
- Step 1: Identify the scenario
In this case, my scenario is that the file system that runs my operating system is corrupted, but my drive does not have hardware problems. Running PC-Doctor confirmed the hardware was operating correctly. This can happen in a variety of ways, from errant application software upgrades, malicious viruses, or in my case, picking up your laptop and carrying it across the room while the disk drive is spinning.
- Step 2: Figure out what you need at hand
All I needed to do was repair or reload my file system. "Easier said than done!" you are probably thinking. Many people use IBM Tivoli Storage Manager (TSM) to back up their application settings and data. Corporate include/exclude lists avoid backing up the same Windows files from everyone's machines. This is great for those who sit at the same desk, in the same building, and would be given a new machine with Windows pre-installed as the start of their recovery process. If, on the other hand, you are traveling, and can't access your VPN to reach your TSM server, you have to do something else. This is often called "Bare Metal Restore" or "Bare Machine Recovery", BMR for short in both cases.
I carry bootable rescue compact discs with me on business trips, along with DVDs containing a full system backup of my Windows operating system, and the most critical files needed for each specific trip on a separate USB key. So, while I am on the road, I can re-install Windows, recover my applications, and copy over just the files I need to continue on my trip, and then do a more thorough recovery back in the office upon return.
- Step 3: Determine the tasks and processes
In addition to backing up with IBM TSM, I also use IBM Thinkvantage Rescue and Recovery to make local backups. IBM Rescue and Recovery is provided with IBM Thinkpad systems, and allows me to backup my entire system to an external 320GB USB drive that I can leave behind in Tucson, as well as create bootable recovery CD and DVDs that I can carry with me while traveling.
The problem most people have with a full system backup is that their data changes so frequently, they would have to take backups too often, or recover "very old" data. Most Windows systems are pre-formatted as one huge C: drive that mixes programs and data together. However, I follow best practice, separating programs from data. My C: drive contains the Windows operating system, along with key applications, and the essential settings needed to make them run. My D: drive contains all my data. This has the advantage that I only have to back up my C: drive, and this fits nicely on two DVDs. Since I don't change my operating system or programs that often, a monthly or quarterly backup is frequent enough.
In my situation in Denver, only my C: drive was corrupted, so all of my data on D: drive was safe and unaffected.
When it comes to Business Continuity, it is important to prioritize what will allow you to continue doing business, and what resources you need to make that happen. The above concepts apply from laptops to mainframes. If you need help creating or updating your Business Continuity plan, give IBM a call.
technorati tags: IBM, July, announcements, earthquake, Japan, nuclear reactor, power, outage, business, continuity, disaster, recovery, plan, plans, planning, IBM, Thinkpad, T60, laptop, Windows, Denver, BSOD, VPN, India, China, Brazil, help desk, Asia, Tivoli, Storage, Manager, TSM, BMR, external, USB, bootable, CD, DVD, separating, programs, data, Clark Hodge
It's Tuesday, which means IBM makes its announcements. We had several for the IBM System Storage product line. Here's a quick recap.
- Disk Systems
The IBM System Storage DS3000 now offers DC power models. New DC-powered models of the DS3200, DS3400, and EXP3000 are well suited for Telco industry environments, as these are NEBS and ETSI compliant and are powered by an industry-standard 48 volt DC power source.
Also, the IBM System Storage N series now supports 750GB SATA drives available for the EXN1000 drawer.
- Tape Systems
IBM Virtualization Engine TS7740 now supports 3-cluster grids. Unlike 3-way replication on disk mirroring, such as IBM Metro/Global Mirror for the DS8000 that enforces a primary, secondary and tertiary copy, the grid implementation of TS7740 tape virtualization allows for any-to-any mirroring. Existing standalone TS7740 clusters can be converted to grid-enabled. A "Copy Export" feature allows virtual tapes to be exported onto physical tape. And in keeping with our theme of "enabling business flexibility", performance throughput can now be purchased in 100 MB/sec increments, up to 600 MB/sec, to match your workload bandwidth requirements.
The IBM System Storage TS1120 drives installed in the IBM System Storage™ TS3400 Tape Library can now be attached to System z platforms using the IBM System Storage™ TS1120 Tape Controller. Before this, the TS3400 could only be attached to UNIX, Windows and Linux systems.
The IBM System Storage TS2230 Express is offered as an external stand-alone or rack-mountable unit. This model incorporates the new LTO IBM Ultrium 3 Serial Attached SCSI (SAS) Half-High Tape Drive, and a 3 Gbps single port SAS interface for a connection to a wide spectrum of distributed system servers that support Microsoft Windows and Linux systems.
- Storage Networking
IBM has added the Cisco MDS 9124 for IBM System Storage entry-level fabric switch as an Express offering and part of the IBM Express Advantage Program. Express offerings are specifically created for mid-market companies and are well suited for workgroup storage applications like e-mail serving, collaborative databases and web serving. They bring enterprise-class performance, scalability and features to small and medium-sized companies and are easy to use, highly scalable, and cost-effective. This will make it easier for IBM Business Partners to provide fabric switch connectivity for:
- Storage consolidation solutions with IBM System Storage™ DS4000 Express disk arrays, especially the DS4700 Express.
- Backup / restore solutions with IBM System Storage™ TS3000 Tape Libraries, such as the TS3200.
- Archive and Retention
Ordering large configurations of the IBM System Storage Grid Access Manager just got a lot easier. New features enable configurations greater than 500 TB to be submitted as a single order. No change in the actual product, just an improvement in the ordering process.
For System p and System i servers, the IBM 3996 Optical library now supports Gen 2 60GB optical cartridges. These can be read/write or WORM cartridges.
I'm off to Denver, Colorado this week. I hope it is cooler there than it is down here in Tucson, Arizona.
technorati tags: IBM, disk, system, storage, SAS, FC, DS3000, DS3200, DS3400, EXP3000, NAS, EXN1000, tape, virtualization, library, TS7740, grid, Copy Export, throughput, TS3400, TS3200, mainframe, LTO, Ultrium, Cisco, MDS, 9124, Express, Advantage, DS4000, DS4700, TS3200, GAM, Grid Archive Manager, 3996, optical, WORM, Denver, Colorado, Tucson, Arizona, announcements
Avi Bar-Zeeb of RealityPrime has an interesting post about How Google Earth [really] Works. Normally, people who are very knowledgeable in a topic have a hard time describing concepts in basic terms. Avi was one of the co-founders of Keyhole, the company that built the predecessor for Google Earth, and also worked with Linden Lab on the 3D rendering in its virtual world, so he certainly knows what he is talking about. While he sometimes drops down into techno-talk about patents, the post overall is a good read.
It is perhaps human nature to be curious about how things are put together and how they function, leading to the popularity of web sites like www.howstuffworks.com that cover a wide range of topics.
Many things can be used without understanding their inner workings. You can put on a pair of blue jeans without knowing how the cotton was made into denim fabric; lace up your favorite pair of running shoes without understanding the chemical make-up of the plastic that cushions your feet; or drink a glass of beer after your five mile run without knowing how alcohol is processed by your liver.
For technology, however, some people insist they need to know how it works in order for them to get the most use of it. When shopping for a car, for example, a guy might look under the hood, and ask questions about how the engine works, while his wife sits inside the vehicle, counting cup holders and making sure the radio has all the right buttons.
Not all technology suffers from need-to-know-itis. For example, the Apple iPod music player and the Canon PowerShot digital camera, are both just disk systems that read and write data, with knobs and dials on one end, and ports for connectivity on the other. Everyone just asks how to use their controls, and might read the manual to understand how to connect the cables. Few people who use these devices ask how they work before they buy them.
Other disk systems, the kind designed for data centers for the medium and large enterprise, apparently aren't there yet. Storage admins who might happily own both an iPod player and a PowerShot camera, insist they need to know how the technologies inside various storage offerings work. Is this just curiosity talking? Or are there some tasks like configuration, tuning, and support that just can't be done without this knowledge? Does knowing the inner workings somehow make the job more enjoyable, easier, or performed with less stress?
I'm curious what you think; send me a comment on this.
technorati tags: Avi Bar-Zeeb, Google, Earth, cotton, denim, plastic, shoes, beer, alcohol, liver, IBM, disk, system, storage, technology, Apple, iPod, music, player, Canon, PowerShot, digital, camera
Seth Godin has an interesting post titled Times a Million. He recounts how many people determine the fuel savings of higher-mileage cars to be only $300-$900 per year, and that this is not enough to motivate the purchase of a more efficient vehicle, such as a hybrid or electric car. Of course, if everyone drove more efficient vehicles, the benefits "times a million" would benefit everyone and the world's ecology.
When I discuss storage-related concepts, many executives mistakenly relate them to the one area of information technology they know best: their laptop. Let's take a look at some examples:
- Information Lifecycle Management
Information Lifecycle Management (ILM) includes classifying data by business value, and then using this to determine placement, movement or deletion. If you think about the amount of time and effort to review the files on your individual laptop, and to manually select and move or delete data, versus the benefits for the individual laptop owner, you would dismiss the concept. Most administrative tasks are done manually on laptops, because automated software is either unavailable or too expensive to justify for a single owner.
In medium and large size enterprises, automated software to help classify, move and delete data makes a lot of sense. Executives who decide that ILM is not for their data center, based on their experiences with their laptop, are losing out on the "times a million" effect.
- Energy Efficiency
Laptops have various controls to minimize the use of battery, and these controls are equally available when plugged in. Many users don't bother turning off the features and functions they don't need when plugged in, because they feel the cost savings would only amount to pennies per day.
Times a million, energy savings do add up, and options to reduce the amount used per server, per TB of data stored, not only save millions of dollars per year, but can also postpone the need to build a new data center, or upgrade the electrical systems in your existing data center.
- Backup and Disaster Recovery planning
I am not surprised how many laptops do not have adequate backup and disaster recovery plans. When executives think in terms of the time and effort to backup their data, often crudely copying key files to CDrom or USB key, and worrying about the management of those copies, which copies are the latest, and when those copies can be destroyed, they might reject deploying appropriate backup policies for others.
Times a million, the collected data stored on laptops could easily be half of your company's emails and intellectual property. Products like IBM Tivoli Storage Manager can manage a large number of clients with a few administrators, keeping track of how many copies to keep, and how long to keep them.
So, next time you are looking at technology or solutions for your data center, don't suffer from "Laptop Mentality". Focus instead on the data center as a whole.
technorati tags: Seth Godin, IBM, storage, ILM, energy, efficiency, backup, disaster recovery, Tivoli Storage Manager, laptop
Chris Evans over at Storage Architect posts about Hardware Replacement Lifecycle Update, on how storage virtualization can help with storage hardware replacement. He makes two points that I would like to comment on.
... indeed products such as USP, SVC and Invista can help in this regard. However at some stage even the virtualisation tools need replacing and the problem remains, although in a different place.
Knowing that replacement of technologies at all levels is inevitable, the IBM System Storage SAN Volume Controller is actually designed to allow non-disruptive cluster upgrades, which we announced in May 2006.
The process is quite elegant. The SVC consists of one or more node-pairs, and can be upgraded while the system is up and running, by replacing nodes one at a time in a sequence of suspend and resume. All of the mapping tables are loaded onto the new nodes from the rest of the still-active nodes.
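In generic shell-style pseudocode (my own illustration of the rolling-upgrade concept; the helper names are hypothetical and are not the actual SVC command-line interface):

# illustrative only -- not real SVC commands
for node in node1 node2 node3 node4; do
    suspend_node "$node"    # its partner in the node-pair keeps serving I/O
    install_code "$node"    # load the new software level onto the node
    resume_node  "$node"    # rejoin the cluster and reload the mapping tables
done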
I was hoping as part of the USP-V announcement HDS would indicate how they intend to help customers migrate from an existing USP which is virtualising storage, but alas it didn't happen.
Unlike the SVC, one cannot just upgrade the USP in place and make it into a USP-V. While it might be possible to unplug external disk from the old USP, and re-plug into the new USP-V, what do you do about the internal disk data? I doubt you can just move drawers and trays of disk from the old to the new. The data has to be moved some other way.
Some have asked why not just put an SVC in front of both the old USP and the new USP-V and transfer the data that way. While SVC does support virtualizing the old USP device, IBM is still testing the new USP-V as a managed device, so this solution is not yet available, and it would only apply to the LUNs in the USP-V, not the volumes specifically formatted for System i or System z.
An alternative is to take advantage of IBM's Data Mobility Services, the result of our recent acquisition of SofTek. IBM can help you move both mainframe and distributed systems data from any device to any device.
In a typical four-year lifecycle of storage arrays, it might take six months or so to fill up the box, and as much as a year at the end to move the data out to other equipment. SVC can greatly reduce both of these periods, so that you can take immediate advantage of new equipment as soon as possible, keep using it for close to the full four years, and migrate off weeks or days before your lease expires.
technorati tags: IBM, SVC, Chris Evans, storage, architect, disk, virtualization, Invista, USP, USP-V, data mobility services, SofTek
NetworkWorld has compiled an interlude with storage videos, a follow-up to last year's Yikes! Exploding Servers.
I've blogged about some of these videos already, but since there are probably a few out there buying the brand new Apple iPhone and looking for YouTube videos to play on it, these links might provide some example entertainment on your new handheld device.
Next week has the "Fourth of July" Independence Day holiday in the USA smack in the middle of the week, so I suspect the blogosphere will quiet down a bit. So whether you are working next week or not, in the USA or elsewhere, take some time to enjoy your friends and family.
technorati tags: NetworkWorld, storage, videos, HP, IBM, EMC, HDS, Sun,exploding, servers, Apple, iPhone, YouTube
Ian Hughes talks about Web 2.0 in his post Explaining Web 2.0 State of Mind.
Alan Lepofsky posts about The Value Of Social Networking, which points to this same presentation about Web 2.0 concepts and ideas. He also points to an article in the Wall Street Journal titled Playing Well With Others about IBM and its leadership in Web 2.0 technologies, such as those from our Lotus group.
Some quotes from the WSJ article I found interesting:
Some 26,000 IBM workers have registered blogs on the company's internal computer network where they opine on technology and their work.
Social networking is especially important for the 42% of IBM employees who regularly work from their homes or client locations rather than IBM facilities.
At most companies, public-relations managers and the human-resources department tightly control all electronic communications except for email and instant messaging. ... Not at IBM.
"Any employee can have a blog, a wiki or a podcast,..."
IBM owns more than 50 "islands" in Second Life and often uses them for lectures and group discussions.
Two years ago, IBM started Wiki Central to manage wikis for IBM groups. It now has more than 20,000 wikis online with more than 100,000 users.
Interested in learning more about Web 2.0? The last page of the deck above has a good set of links and resources; for example, here are 23 Things to know about Web 2.0 to get you started.
technorati tags: Ian Hughes, eightbar, secondlife, Alan Lepofsky, Lotus, Connections, Quickr, collaboration, social networking, wiki, blog, podcast, islands, work from home
Chuck Hollis makes some excellent points about Green Data Center Goes Marketing Mainstream. He does a great job summarizing EMC's strategy in this area:
- Use VMware to virtualize your x86-based servers
- Use more efficient disk media, such as high-capacity SATA disk drives
Both are great recommendations, but why limit yourself to what EMC offers? Your x86-based machines are only a subset of your servers, and disk is only a subset of your storage. IBM takes a more holistic approach, looking at the entire data center.
- VMware is a great product, and IBM is its top reseller. But in addition to VMware, there are other solutions for the x86-based servers, like Xen and Microsoft Virtual Server. IBM's System p, System i, and System z product lines all support logical partitioning.
To compare the energy effectiveness of server virtualization, consider a metric that can apply across platforms. For example, for an e-mail server, consider watts per mailbox. If you have, say, 15,000 users, you can calculate how many watts you are consuming to manage their mailboxes in your current environment, and compare that with running them on VMware, or on logical partitions on other servers. Some people find it surprising that it is often more cost-effective, and power-efficient, to run workloads on mainframe logical partitions (LPARs) than on a stack of x86 servers running VMware.
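Here is a hypothetical sketch of that comparison; the server counts and per-server wattages are invented for illustration, not measured results for any real platform, so plug in your own numbers:

```python
# Hypothetical watts-per-mailbox comparison; all figures are assumptions.

MAILBOXES = 15_000

platforms = {
    # platform name: (servers needed, watts per server) -- assumed values
    "dedicated x86 servers": (30, 400),
    "x86 + virtualization":  (8, 450),
    "mainframe LPARs":       (1, 3500),
}

for name, (servers, watts_each) in platforms.items():
    total = servers * watts_each
    print(f"{name:>22}: {total:6,}W total, {total / MAILBOXES:.2f} W/mailbox")
```

The point is not the specific numbers, but that a per-mailbox (or per-transaction, per-user) metric lets you compare very different platforms fairly.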
- More efficient Media
- SATA and FATA disks support higher capacities and run at slower RPM speeds, thus using fewer watts per terabyte. A terabyte stored on 73GB high-speed 15K RPM drives consumes more watts than the same terabyte stored on 500GB SATA drives. Chuck correctly identifies that tape is more power-efficient than disk, but then argues that paper is more power-efficient than tape. Paper is not necessarily more efficient than tape.
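A quick sketch of the watts-per-terabyte arithmetic, using assumed per-drive wattages purely for illustration:

```python
# Illustrative watts-per-terabyte comparison; wattages are assumptions,
# not vendor specifications.

drives = {
    "73GB 15K RPM FC": (73, 18),    # (capacity in GB, assumed watts per drive)
    "500GB SATA":      (500, 12),
}

for name, (gb, watts) in drives.items():
    drives_per_tb = 1000 / gb
    print(f"{name:>16}: ~{drives_per_tb * watts:5.1f} W per TB")
```

Roughly fourteen small fast drives versus two large slow ones per terabyte; the order-of-magnitude gap is the point.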
ESG analyst Steve Duplessie divides up data between Dynamic vs. Persistent. The best place to put dynamic data is on disk, and here is where the evaluation of FC/SAS versus SATA/FATA comes into play. Persistent data, on the other hand, can be stored on paper, microfiche, optical or tape media. All of these shelf-resident media consume no electricity, nor generate any heat that would require additional cooling.
A study by scientists at the Lawrence Berkeley National Laboratory titled High-Tech Means High-Efficiency: The Business Case for Energy Management in High-Tech Industries indicates that data centers consume 15 to 100 times more energy per square foot than traditional office space. Storing persistent data in traditional office space can save a huge amount of energy. Steve Duplessie feels the ratio of dynamic to persistent data is 1:10 today, but is likely to grow to 1:100 in the near future, making energy-efficient storage of persistent data ever more important to our environment.
Data centers consume nearly 5000 Megawatts in the USA alone, and 14000 Megawatts worldwide. To put that in perspective, Hungary, where I was last week, can generate up to 8000 Megawatts for the entire country (and was using 7400 Megawatts last week as a result of its current heat wave, causing grave concern).
Back in the 1990's, one of the insurance companies IBM worked with kept data on paper in manila folders, and armies of young adults on roller skates were dispatched throughout large warehouses of shelves to fetch the appropriate folder in response to customer service inquiries. Digitizing this paper into electronic format greatly reduced the need for warehouse space, and improved the time to retrieve the data.
A typical file storage box (12 inch x 12 inch x 18 inch) containing typed pages, single-spaced, double-sided, in 12 point font, could hold perhaps 100MB. The same box could hold a hundred or more LTO or 3592 tape cartridges, each storing hundreds of GB of information. That's roughly a million-to-one improvement in space efficiency, and on a watts-per-TB basis a substantial improvement as well, since the cartridges sit on shelves under standard office air conditioning and lighting.
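The space-efficiency claim is easy to check with rough numbers; the capacities below are assumptions in the right ballpark for that era's media:

```python
# Rough space-efficiency arithmetic for the file-box comparison above.
# Capacities are illustrative: ~100MB of paper vs. 100 cartridges of
# several hundred GB each (e.g. LTO- or 3592-class media).

PAPER_MB_PER_BOX = 100
CARTRIDGES_PER_BOX = 100
GB_PER_CARTRIDGE = 700            # assumed native capacity per cartridge

tape_mb_per_box = CARTRIDGES_PER_BOX * GB_PER_CARTRIDGE * 1000
print(f"tape holds ~{tape_mb_per_box / PAPER_MB_PER_BOX:,.0f}x more per box")
# -> roughly 700,000x, i.e. on the order of a million to one
```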
To learn more about IBM's Project Big Green, watch this introductory video, which used Second Life for the animation.
technorati tags: IBM, EMC, Chuck Hollis, VMware, FC, SAS, SATA, FATA, disk, storage, logical partition, energy, power, cooling, Steve Duplessie, dynamic, persistent, data, Lawrence Berkeley National Laboratory, megawatt, paper, optical, microfiche, LTO, 3592, Project Big Green, Secondlife
I'm in the Malev lounge at the Budapest Airport, waiting for my flight back to Tucson.
My buddy Marc Farley from EqualLogic points to a great InfoStor article by Ann Silverthorn titled The benefits of SANs for SMBs.
Back in the late 1980's and early 1990's, I was one of the architects for DFSMS on z/OS, and customers always asked, "What is the clip level?" In other words, how big does a customer have to be to take advantage of DFSMS? We worked it out that if you had more than 100GB of disk data, DFSMS was worthwhile. DFSMS is now just standard by default, as everyone easily has more than 100GB of data.
Later, in the late 1990's, I worked on Linux for System z. Again, customers asked how many Linux guest images would justify deploying applications on a mainframe. We worked it out to about 10 images: 10 Linux logical partitions, or Linux guests under z/VM, were enough to cost-justify the entire investment.
So what is the "clip level" for SANs? How many servers does an SMB need to have to justify deploying a SAN? IBM announced the new BladeCenter S designed specifically for mid-sized companies, 100 to 1000 employees, typically running 25 to 45 servers. However, I suspect companies as small as 7-10 servers would probably benefit from deploying an FC or IP SAN.
What do you think? Send me a comment on how many servers should be the clip level.
technorati tags: IBM, Marc Farley, EqualLogic, Ann Silverthorn, SMB, SAN, IP, iSCSI, FC, Linux, DFSMS, z/OS, BladeCenter, Budapest
A client complained that their tape drives were not compressing data as well as they used to. Investigating further reminded me of a scene from the 1970's television show "All in the Family", summarized well in American Scientist:
... in one episode of All in the Family, Archie Bunker's son-in-law, Mike, watches Archie put on his shoes and socks. Mike goes into a conniption when Archie puts the sock and shoe completely on one foot first, tying a bow to complete the action, while the other foot remains bare. To Mike, if I remember correctly, the right way to put on shoes and socks is first to put a sock on each foot and only then put the shoes on over them, and only in the same order as the socks. In an ironic development in his character, the politically liberal Mike shows himself to be intolerant of differences in how people do common little things, unaccepting of the fact that there is more than one way to skin a cat or put on one's shoes.
Both agreed that socks go first, then shoes, but the actual deployment was different.
In the case of this client, a recent change was the use of encryption before the data reached the tape drive. When combining compression and encryption, you should always compress first, then encrypt. Compression algorithms rely on the frequency of data patterns; for example, the letter "E" appears more often in English text than the letter "Z". However, once you encrypt data, those patterns are randomized, and any attempt to compress the data afterwards is wasted effort.
With IBM tape encryption on either the TS1120 or LTO4 tape drives, we compress, then encrypt, the data when it arrives at the tape drive, so that the compression has some chance of achieving up to 3:1 reduction. This compress-then-encrypt process can be done at the host as well, either from the application software or a feature of the operating system.
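You can see the effect yourself with a toy experiment. The "encryption" below is a throwaway SHA-256 keystream standing in for a real cipher like AES, purely to show that ciphertext looks random and no longer compresses:

```python
# Demonstration of why the order matters: compress-then-encrypt vs.
# encrypt-then-compress. The toy_encrypt() cipher is illustrative only.

import hashlib, os, zlib

def toy_encrypt(data, key):
    """XOR data with a SHA-256 keystream; a stand-in for a real cipher."""
    stream, counter = bytearray(), 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

text = b"the quick brown fox jumps over the lazy dog " * 500
key = os.urandom(32)

compress_then_encrypt = toy_encrypt(zlib.compress(text), key)
encrypt_then_compress = zlib.compress(toy_encrypt(text, key))

print(f"original:              {len(text):6} bytes")
print(f"compress-then-encrypt: {len(compress_then_encrypt):6} bytes")
print(f"encrypt-then-compress: {len(encrypt_then_compress):6} bytes")
```

The first copy shrinks to a few hundred bytes, because the repetitive text compresses well; the second ends up essentially the same size as the original, because the ciphertext has no patterns left to exploit.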
So, just as with Archie Bunker and his son-in-law, there are many ways to deploy compression and encryption; just make sure you do them in the right order to get the most benefit.
technorati tags: IBM, tape, storage, encryption, TS1120, LTO4, Archie Bunker, meathead, socks, shoes
This week I am off to Budapest, Hungary, for business meetings. It is the closest major city to IBM's manufacturing plant in a small town called Vac (rhymes with "knots"), where the IBM System Storage DS8000 series and SAN Volume Controller are assembled.
technorati tags: IBM, Vac, Hungary, DS8000, SVC, disk, storage, manufacturing, plant
Last week, I opined that Monday's IDC announcement "IBM #1 in combined disk and tape storage hardware sales for 2006" was in part because of a resurgence of interest in tape, with four specific examples. There was a lot of reaction and reflection from both sides.
- On the one side...
EMC blogger Mark Twomey at Storagezilla admits that perhaps Tape Isn't Dead after all, and that tape is perhaps the best place to put long-term archive data, though not for backup. EMC's "creative marketing types" put out this Fun With Tape video that I found amusing. (It asks for a first name, last name, and e-mail address, which are then embedded into the resulting video itself, and perhaps forwarded to your nearest EMC sales rep, so answer according to your wishes for privacy.)
The "mummy wrapped in tape media" seems to be a common theme, and shows up again in LiveVault'svideo with John Cleese, which makes the same argument asthe EMC video above, namely: switch your backups from tape to disk because we are a disk-only vendor.
- ... and on the other side
JWT over at DrunkenData asks Which is greener, disk or tape? Tape is, of course, by a long shot, and it is an essential part of IBM's Big Green initiative, a project to invest US$1 billion per year to make data centers more efficient in power and cooling.
Sun/StorageTek blogger Randy Chalfant questions the Death of Tape, and argues that disk-only solutions suffer from atrophy. The results he posts from a survey of 200 customers are similar to those we've seen with customers using IBM TotalStorage Productivity Center, our software to help evaluate data usage, and identify misuse, in your data center.
To my readers in the USA, United Kingdom, Ireland, South Africa, China and Japan, and a few other countries, Happy Father's Day!
technorati tags: IBM, IDC, combined, disk, tape, storage, announcement, EMC, dead, LiveVault, video, John Cleese, JWT, DrunkenData, Sun, StorageTek, STK, Randy Chalfant, TotalStorage, Productivity Center, Fathers Day, Big, Green, initiative
One of the differences between IBM and the other storage vendors is that IBM is also in the business of middleware, application-aware backup software, and advanced copy services. This allows IBM to put together solutions that address specific challenges for our clients.
IBM has written a whitepaper on a clever VSS Snapshot Backup for Exchange using IBM Tivoli Storage Manager and the point-in-time copy capabilities of IBM System Storage disk systems.
A problem in the past was that each vendor's point-in-time copy method had its own unique proprietary interface. Microsoft developed Volume Shadow Copy Service (VSS) as a common interface front-end to resolve this concern. IBM Tivoli Storage Manager for Mail can invoke standard VSS interfaces, which in turn can invoke FlashCopy on the IBM System Storage SAN Volume Controller, DS8000 series, or DS6000 series disk system.
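Conceptually, this is the classic adapter pattern: the backup application codes to one common snapshot interface, and each vendor plugs in its own provider underneath. A minimal sketch of the idea (the class and method names here are mine, invented for illustration; they are not the real VSS or TSM APIs):

```python
# Conceptual sketch of a common snapshot interface; all names are
# invented for illustration and are not the actual VSS or TSM APIs.

from abc import ABC, abstractmethod

class SnapshotProvider(ABC):
    """What the common interface expects every hardware provider to implement."""
    @abstractmethod
    def create_snapshot(self, volume: str) -> str: ...

class FlashCopyProvider(SnapshotProvider):
    def create_snapshot(self, volume: str) -> str:
        return f"FlashCopy of {volume}"        # e.g. SVC, DS8000, DS6000

class OtherVendorProvider(SnapshotProvider):
    def create_snapshot(self, volume: str) -> str:
        return f"vendor snapshot of {volume}"

def backup_exchange(provider: SnapshotProvider, volume: str) -> None:
    # The backup application only knows the common interface,
    # so any compliant disk system works underneath it.
    print("backing up", provider.create_snapshot(volume))

backup_exchange(FlashCopyProvider(), "E:")
backup_exchange(OtherVendorProvider(), "E:")
```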
You might be thinking: wouldn't it have been less effort to have TSM for Mail invoke IBM proprietary interfaces, rather than putting full VSS support into TSM for Mail, and then full VSS support into IBM's various disk systems? Perhaps, but IBM doesn't decide to do things because it is the cheapest way; we focus on what is the right way. In this case, customers now have more choices: they can use TSM for Mail with IBM or non-IBM disk systems that support the VSS interface, and IBM disk systems can be employed for other VSS snapshot uses.
Of course, we would like our clients to consider both TSM and IBM System Storage disk systems for a combined solution, not because they are required to make the solution work, but because both are best-of-breed, and whitepapers like this show how they provide synergy working together.
technorati tags: IBM, Tivoli, Storage, Manager, for, Mail, TSM, Microsoft, Windows, VSS, Exchange, SVC, DS8000, DS6000, FlashCopy, snapshot, whitepaper
A recent blog by Chris Mellor advances the outlandish conspiracy theory that IBM and HDS copied virtualisation technology from small start-up company DataCore.
(Chris doesn't actually name the source making such a claim, nor whether that someone was employed by any of the parties involved at the time the events occurred, or is currently employed by a competitor like EMC, bitterly jealous of the success IBM and HDS currently enjoy with their offerings.)
As I posted before about IBM's long history of storage virtualization, the SAN Volume Controller was really part of a sequence of major products in this area, after the successful 3850 MSS and 3494 VTS block virtualization products.
In the late 1990's, our research teams in Almaden, California and Hursley, UK were exploring storage technologies that could take advantage of commodity hardware parts and the industry-leading Linux operating system.
As is often the case, while IBM was working on "the perfect product", small start-ups announced "not-yet-perfect" products into the marketplace. Partnering with DataCore was a smart tactical move, for the following reasons:
- Helps identify market segments. Identify which subset of customers would most benefit from disk virtualization. While our 3850 MSS and 3494 VTS were focused on mainframe customers, this new technology was focused on distributed Unix, Windows and Linux servers.
- Helps prioritize market requirements. What are the most appealing features? What drives clients to buy disk virtualization for distributed systems platforms?
- Helps evaluate packaging options. Should we deliver pure software and expect customers to purchase their own servers? Should we offer this as a "service offering" with installation and deployment services included? Should we offer this as hardware with software pre-installed?
The partnership proved worthwhile, not just in convincing IBM that this was a market worth entering, but also in showing how NOT to package a solution. Specifically, DataCore SANsymphony was software that you had to install on your own Windows-based server. The client was left with the task of ordering a suitable Intel-based server, with the right amount of CPU cycles, RAM and host bus adapter ports, and configuring the Windows operating system and DataCore software.
It didn't go well. Basically, customers were expected to be their own "hardware engineers", having to know way too much about storage hardware and software to design a combination that worked for their workloads. Most clients were disappointed with the amount of effort involved, and the resulting poor performance.
To fix this, IBM delivered the SAN Volume Controller, with an optimized Linux operating system and internally written software, running on IBM System x(tm) server hardware tuned for performance.
I can't speak for HDS, but I suspect they came to similar conclusions that resulted in a similar decision to build their product in-house. I welcome Hu Yoshida to correct me if I am wrong on this.
technorati tags: Chris Mellor, DataCore, SANsymphony, IBM, SVC, HDS, EMC, Invista, disk, storage, virtualization, Hu Yoshida, Windows, Linux