Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Tony Pearson is a Master Inventor and Senior IT Specialist for the IBM System Storage product line at the
IBM Executive Briefing Center in Tucson, Arizona, and featured contributor
to IBM's developerWorks. In 2011, Tony celebrated his 25th anniversary with IBM Storage on the same day as IBM's Centennial. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
My post last week [Solid State Disk on DS8000 Disk Systems] kicked up some dust in the comment section. Fellow blogger BarryB (a member of the elite [Anti-Social Media gang from EMC]) tried to imply that 200GB solid state disk (SSD) drives were different or better than the 146GB drives used in IBM System Storage DS8000 disk systems. I pointed out that they are actually the same physical drive, just formatted differently.
To explain the difference, I will first have to go back to regular spinning Hard Disk Drives (HDD). There are variances in manufacturing, so how do you make sure that a spinning disk has AT LEAST the amount of space you are selling it as? The solution is to include extra. This is the same way that rice, flour, and a variety of other commodities are sold. Legally, if it says you are buying a pound or kilo of flour, then it must be AT LEAST that much to be legal labeling. Including some extra is a safe way to comply with the law. In the case of disk capacity, having some spare capacity and the means to use it follows the same general concept.
(Disk capacity is measured in multiples of 1000, in this case a Gigabyte (GB) = 1,000,000,000 bytes, not to be confused with [Gibibyte (GiB)] = 1,073,741,824 bytes, based on multiples of 1024.)
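The conversion is easy to check for yourself; a quick sketch of the arithmetic:

```python
# Decimal (SI) units, used for labeling disk capacity,
# versus binary (IEC) units, often used by operating systems.
GB = 1000**3    # gigabyte = 1,000,000,000 bytes
GiB = 1024**3   # gibibyte = 1,073,741,824 bytes

# A "146 GB" drive holds only about 136 GiB when reported in binary
# units, which is why a new drive can look "smaller" than its label.
capacity_bytes = 146 * GB
print(round(capacity_bytes / GiB, 2))   # 135.97
```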
Let's say a manufacturer plans to sell a 146GB HDD. We know that in some cases there might be bad sectors on the disk that won't accept written data on day 1, and there are other marginally-bad sectors that might fail to accept written data a few years later, after wear and tear. A manufacturer might design a 156GB drive with 10GB of spare capacity and format this with a defective-sector table that redirects reads/writes of known bad sectors to good ones. When a bad sector is discovered, it is added to the table, and a new sector is assigned out of the spare capacity. Over time, the amount of space that a drive can store diminishes year after year, and once it drops below its rated capacity, it fails to meet its legal requirements. Based on averages of manufacturing runs and material variances, these could then be sold as 146GB drives, with a life expectancy of 3-5 years.
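A defective-sector table is essentially a lookup that redirects known bad sectors into the spare region. This toy model (the sector counts are made up purely for illustration, not taken from any real drive firmware) shows the general idea:

```python
class RemappingDisk:
    """Toy model of a drive's defective-sector table: reads and writes
    to a sector known to be bad are redirected into spare capacity."""

    def __init__(self, rated_sectors, spare_sectors):
        # Spare sectors live past the end of the rated address range.
        self.spare_pool = list(range(rated_sectors, rated_sectors + spare_sectors))
        self.remap = {}  # bad sector -> replacement sector

    def resolve(self, sector):
        # Healthy sectors map to themselves; bad ones follow the table.
        return self.remap.get(sector, sector)

    def mark_bad(self, sector):
        # Assign the next spare sector; once the pool is empty, the drive
        # can no longer deliver its rated capacity.
        if not self.spare_pool:
            raise IOError("spare capacity exhausted: drive below rated capacity")
        self.remap[sector] = self.spare_pool.pop(0)

disk = RemappingDisk(rated_sectors=1000, spare_sectors=50)
disk.mark_bad(42)
print(disk.resolve(42))   # 1000 -- redirected into the spare region
print(disk.resolve(43))   # 43   -- healthy sectors are untouched
```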
With Solid State Disk, the technology requires a lot of tricks and techniques to stay above the rated capacity. For example, you can format a 256GB drive as a conservative 146GB usable, with an additional 110GB (75 percent) spare capacity to handle all of the wear-leveling. You could lose up to 22GB of cells per year, and still have the rated capacity for the full five-year life expectancy.
Alternatively, you could take a more aggressive format, say 200GB usable, with only 56GB (28 percent) of spare capacity. If you lost 22GB of cells per year, then sometime during the third year, hopefully under warranty, your vendor could replace the drive with a fresh new one, and it should last the rest of the five year time frame. The failed drive, having 190GB or so usable capacity, could then be re-issued legally as a refurbished 146GB drive to someone else.
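The arithmetic behind the two formats is easy to sketch. The 22GB-per-year wear rate below is the illustrative figure from the example above, not a vendor specification:

```python
def years_above_rated(raw_gb, usable_gb, wear_gb_per_year=22):
    """Years until cumulative cell loss eats through the spare capacity,
    dropping the drive below its rated (usable) capacity."""
    spare_gb = raw_gb - usable_gb
    return spare_gb / wear_gb_per_year

# Conservative 146GB format on a 256GB drive: 110GB spare capacity.
print(years_above_rated(256, 146))   # 5.0 -> lasts the full life expectancy

# Aggressive 200GB format: only 56GB spare capacity.
print(years_above_rated(256, 200))   # ~2.5 -> fails sometime in year three
```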
The wear and tear on SSD happens mostly during erase-write cycles, so for read-intensive workloads, such as boot disks for operating system images, the aggressive 200GB format might be fine, and might last the full five years. For traditional business applications (70 percent read, 30 percent write) or more write-intensive workloads, IBM feels the more conservative 146GB format is a safer bet.
This should be no surprise to anyone. When it comes to the safety, security and integrity of our clients' data, IBM has always emphasized the conservative approach.[Read More]
(Note: IBM [Guidelines] prevent me from picking blogfights, so this post is only to set the record straight on some misunderstandings, point to some positive press about IBM's leadership in this area, and for me to provide a different point of view.)
First, let's set the record straight on a few things. The [RedPaper is still in draft form] under review, and so some information has not yet been updated to reflect the current situation.
You can have 16 or 32 SSD per DA pair. However, you can only have a maximum of 128 SSD drives total in any DS8100 or DS8300. In the case of the IBM DS8300 with 8 DA pairs, it makes more sense to spread the SSD out across all 8 pairs, and perhaps this is what confused BarryB.
Yes, you can order an all-SSD model of the IBM DS8000 disk system. I don't see anywhere in the RedPaper that suggests otherwise, and I have confirmed with our offering manager that this is the case.
The 73GB and 146GB drives are freshly manufactured from STEC. The 146GB and 200GB drives are actually the same drive, just formatted differently. The 200GB format does not offer as much spare capacity for wear-leveling, and is therefore intended only for read-intensive workloads. (Perhaps EMC wants you to find this out the hard way so that you replace them more often???) These reduced-spare-capacity formats may not be appropriate for some write-intensive workloads. Don't let anyone from EMC try to misrepresent the 73GB or 146GB drives from STEC as older, obsolete, collecting dust in a warehouse, or otherwise no longer manufactured by STEC.
You can relocate data from HDD to SSD using "Data Set FlashCopy", a feature that does not involve host-based copy services, does not consume any MIPS on your System z mainframe, and is performed inside the DS8000 disk system. You can also use host-based copy services as well, but it is not the only way.
You can use any supported level of z/OS with SSD in the IBM DS8000. There is ENHANCED support mentioned in the RedPaper that you get only with z/OS 1.8 and above, allowing you to create automation policies that place data sets onto SSD or non-SSD storage pools. This synergy makes SSD with IBM DS8000 superior to the initial offerings that EMC had offered without this OS support.
I find it amusing that BarryB's basic argument is that IBM's initial release of SSD on the DS8000 offers less than what the architecture could eventually be extended to support. Actually, if you look at EMC's November release of Atmos, as well as their most recent announcement of V-Max, they basically say the same thing: "Stay tuned, this is just our initial release, with various restrictions and limitations, but more will follow." Architecturally, the IBM DS8000 could support a mix of SSD and non-SSD on the same DA pairs, could support RAID6 and RAID10 as well, and could support larger capacity drives or higher-capacity read-intensive formats. These could all be done via RPQ if needed, or in a follow-on release.
BarryB's second argument is that IBM is somehow "throwing cold water" on SSD technology. That somehow IBM is trying to discourage people from using SSD by offering disk systems with this technology. IBM offered SSD storage on BladeCenter servers LONG BEFORE any EMC disk system offering, and IBM continues to innovate in ways that allow the best business value of this new technology. Take for example this 24-page IBM Technical Brief: [IBM System z® and System Storage DS8000: Accelerating the SAP® Deposits Management Workload With Solid State Drives]. It is full of example configurations that show that SSD on IBM DS8000 can help in practical business applications. IBM takes a solution view, and worked with DB2, DFSMS, z/OS, High Performance FICON (zHPF), and down the stack to optimize performance and provide real business value innovation. Thanks to this synergy, IBM can provide 90 percent of the performance improvement with only 10 percent of the SSD disk capacity of EMC offerings. Now that's innovative!
The price and performance differences between FC and SATA (what EMC was mostly used to) are only 30-50 percent. But the price and performance differences between SSD and HDD are more than an order of magnitude, in some cases 10-30x, similar to the differences between HDD and tape. Of course, if you want hybrid solutions that take best advantage of SSD+HDD, it makes more sense to go to IBM, the leading storage vendor that has been doing HDD+Tape hybrid solutions for the past 30 years. IBM understands this better, and has more experience dealing with these orders of magnitude than EMC.
But don't just take my word for it. Here is an excerpt from Jim Handy, from [Objective Analysis] market research firm, in a recent Weekly Review from [Pund-IT] (Volume 5, Issue 23--May 6, 2009):
"What about IBM? One thing that we are finding is that IBM really “Gets It” in the area of flash in the data center. Readers of the Pund-IT Review will not only recall that IBM Research pushed its SSD-based “Quicksilver” storage system to one million IOPS using Fusion-io flash-based storage, but they also may have noticed that the recent MySQL and memcached appliances recently introduced by Schooner Information Technology are both flash-enabled devices introduced in partnership with IBM. Ironically, while other OEMs are taking the cautious approach of introducing a standard SSD option to their systems first, IBM appears to have been working on several approaches simultaneously to bring flash to the data center not only in SSDs, but in innovative ways as well."
As for why STEC put out a press release on their own this week without a corresponding IBM press release, I can only say that IBM already announced all of this support back in February, and I blogged about it in my post [Dynamic Infrastructure - Disk Announcements 1Q09]. This is not the first time one of IBM's suppliers has tried to drum up business in this manner. Intel often funds promotions for IBM System x servers (the leading Intel-based servers in the industry) to help drive more business for their Xeon processor.
So, BarryB, perhaps it's time for you to take out your green pen and work up another one of your all-too-common retractions and corrections.[Read More]
It's Tuesday, which means IBM announcements, and today IBM made some major announcements that support a [Dynamic Infrastructure]! I hinted at this yesterday, choosing the week's theme to be all about Cloud Computing and Alternative Sourcing. I will briefly highlight today's announcements related to storage here, and try to go into more detail over the next few weeks.
Ethernet switches and routers
In support of Cloud Computing and Cloud Storage, IBM is now back in the Ethernet networking business. This is part of the storage story, as protocols like iSCSI, CIFS and NFS are gaining prominence. Extending IBM's existing OEM relationship with Brocade, there are four series:
[c-series] - "c" for Compact, these are 1U high fixed port switches
[g-series] - "g" for Pay-as-you-Grow using IronStack stacking technology to allow up to 8 switches to be glommed, glued, er.. "gathered" together as a single virtual chassis.
[m-series] - "m" for Multiprotocol Label Switching [MPLS] which supports routing between LAN and WAN networks over OC12 and OC48 lines.
[s-series] - "s" for slots, the B08S has eight slots, and the B16S has sixteen slots, supporting up to 384 ports. These models support Power-over-Ethernet [PoE] that simplifies attaching Voice-over-IP (VoIP) telephones and IP-based surveillance cameras.
IBM announced it will strengthen its partnership with Juniper Networks, and continues to consider Cisco a strategic partner as well. To help customers position themselves for Cloud Computing and Cloud Storage, IBM also launches some new services:
The IBM [DS5000] now supports self-encrypting disk drives, known also as "full-disk encryption" or FDE, for added security, and 8Gbps Fibre Channel (FC) ports for added performance. The DS5300 model in particular now supports up to 448 disk drives for added scalability.
Comprehensive Data Protection Solution
IBM's [Data Protection Solution] shows off IBM's awesome synergy between servers, storage and software, combining System x servers, Tivoli Storage Manager FastBack software, and DS5000, DS4000 or DS3000 series disk systems. The solution is designed to protect both Windows-based servers and their applications, offering bare-metal restores and application-level protection for Oracle, SQL, Exchange and SAP.
Tivoli Storage Productivity Center
Last February, IBM previewed the renaming of TotalStorage Productivity Center to its new name,Tivoli Storage Productivity Center. Today, IBM announces [Tivoli Storage Productivity Center v4.1]. Some key changes include:
Productivity Center for Fabric has been merged into Productivity Center for Disk
Productivity Center for Replication is now integrated, but remains separately licensed
Productivity Center can now feed input to IBM's Novus Storage Enterprise Resource Planner [SERP]
TS7650 ProtecTIER Data Deduplication IP-based replication
IBM previews IP-based replication, which allows the TS7650 appliance or TS7650G gateway to send virtual tape data over to a remote location, instead of having the underlying disk systems perform the replication on its behalf. Having the TS7650 do the replication is preferred, as it can maintain virtual cartridge integrity: when a virtual tape is unmounted, the replication can begin at that point.
Earlier this week, EMC announced its Symmetrix V-Max, following two trends in the industry:
Using Roman numerals. The "V" here is for FIVE, as this is the successor to the DMX-3 and DMX-4. EMC might have gotten the idea from IBM's success with the XIV (which does refer to the number 14, specifically the 14th class of a Talpiot program in Israel that the founders of XIV graduated from).
Adding "-Max", "-Monkey" or "2.0" at the end of things to make them sound more cool and to appeal to a younger, hipper audience. EMC might have gotten this idea from Pepsi-Max (... a taste of storage for the next generation?)
I took a cue from President Obama and waited a few days to collect my thoughts and do my homework before responding. Special thanks to fellow blogger ChuckH for giving me a [handy list of reactions] to pick and choose from. It appears that the EMC marketing machine feels it is acceptable for their own folks to claim that EMC is doing something first, or that others are catching up to EMC, but when other vendors do likewise, then that is just pathetic or incoherent. Here are a few reactions already from fellow bloggers:
This was a major announcement for EMC, addressing many of the problems, flaws and weaknesses of the earlier DMX-3 and DMX-4 deliverables. Here's my read on this:
Now you can have as many FCP ports (128) as an IBM System Storage DS8300, although the maximum number of FICON ports is still short, and there is no mention of ESCON support. The Ethernet ports appear to be 1Gb, not the new 10GbE you might expect.
Support for System z mainframe
V-Max adds some new support to catch up with the DS8000, like Extended Address Volumes (EAV). EMC is still not quite there yet. IBM DS8000 continues to be the best, most feature-rich storage option if you have System z mainframe servers.
Both the IBM DS8000 and HDS USP-V beat the DMX-4 in performance, and in some cases the DMX-4 even lost to the IBM XIV, so EMC had to do something about it. EMC chooses not to participate in industry-standard performance benchmarks like those from the [Storage Performance Council], which limits them to vague comparisons against older EMC gear. I'll give EMC engineers the benefit of the doubt and say that V-Max is now "comparably as fast as HDS and IBM offerings".
Getting "V" in the name
The "V" appears to be for the Roman numeral five, not to be confused with the external heterogeneous storage virtualization that HDS USP-V and IBM SVC provide. There is no mention of synergy with EMC's failed "Invista" product, and I see no support for attaching other vendors' disk to the back of this thing.
Switch to Intel processor
Apple switched its computers from PowerPC to Intel-based, and now EMC follows in the same path. There are some custom ASICs still in V-Max, so it is not as pure as IBM's offerings.
Modular, XIV-like Scale-out Architecture
Actually, the packaging appears to follow the familiar system bays and storage bays of the DMX-4 and DMX-4 950 models, but architecturally offers XIV-like attachment across a common switch network between "engines", EMC's term for interface modules.
Non-disruptive data migration
IBM's SoFS, DR550 and GMAS have this already, as does anything connected behind an IBM SAN Volume Controller.
A long time ago, IBM used to have midrange disk storage systems called "FAStT" which stood for Fibre Array Storage Technology, so this might have given EMC the idea for their "Fully Automated Storage Tiering" acronym. The concept appears similar to what IBM introduced back in 2007 for the Scale-Out File Services [SoFS], which not only provides policy-based placement, movement and expiration on different disk tiers, but also includes tape tiers for a complete solution. I don't see anything in the V-Max announcement to suggest it will support tape anytime soon.
And whatever happened to EMC's Atmos? Wasn't that supposed to be EMC's new direction in storage?
Zero-data loss Three-site replication
IBM already calls this Metro/Global Mirror for its IBM DS8000 series, but EMC chose to call it SRDF/EDP for Extended Distance Protection.
Ease of Use
The most significant part of the announcement is that EMC is finally focusing on ease-of-use. In addition to reducing the requirement for "Bin File" modifications, this box has a redesigned user interface to focus on usability issues. For past DMX models, EMC customers had to either hire EMC to do tasks that were just too difficult otherwise, or buy expensive software like EMC Control Center to manage them. EMC will continue to sell DMX-4 boxes for a while, as they are probably supply-constrained on the V-Max side, but I doubt they will retrofit these new features back to the DMX-3 and DMX-4.
When IBM announced its acquisition of XIV over a year ago now, customers were knocking down our doors to get one. This caught two particular groups looking like a [deer in headlights]:
EMC Symmetrix sales force: Some of the smarter ones left EMC to go sell IBM XIV, leaving EMC short-handed and having to announce they [were hiring during their layoffs]. Obviously, a few of the smart ones stayed behind, to convince their management to build something like the V-Max.
IBM DS8000 sales force: If clients are not happy with their existing EMC Symmetrix, why don't they just buy an IBM DS8000 instead? What does XIV have that DS8000 doesn't?
Let me contrast this with the situation Microsoft Windows is currently facing.
I am often asked by friends to help them pick out laptops and personal computers. I use Linux, Windows and MacOS, so have personal experience with all three operating systems.
Linux is cheaper, offers the power-user the most options for supporting older, less-powerful equipment, but I wouldn't have my Mom use it. While distributions like Ubuntu are making great strides, it is just too difficult for some people.
MacOS is nice; I like it. It works out of the box with little or no customization and an intuitive interface. However, some of my friends don't make IBM-level salaries, and have to watch their budget.
In their "I'm a PC" campaign, Microsoft is fighting both fronts. Let's examine two commercials:
In the first commercial, a young eight-year-old puts together a video from pictures of toy animals and some background music. The message: "Windows is easier to use than Linux!" If they really wanted to send this message, they should have shown senior citizens instead.
In the second commercial, a young college student is asked to find a laptop with a 17-inch screen, and a variety of other qualifications, for under $1000 US dollars. The only model at the Apple store below this price had a 13-inch screen, but she finds a Windows-based system that had this size screen and met all the other qualifications. The message: "Windows-based hardware from a variety of competitors is less expensive than hardware from Apple!"
Both Microsoft and Apple charge a premium for ease-of-use. In the storage world, things are completely opposite. Vendors don't charge a premium for ease-of-use. In fact, some of the easiest to use are also the least expensive.
If you just have Windows and Linux, you can get an entry-level system like the IBM DS3000 series, with only a few features, that can be set up in six simple steps.
Next, if you have a more interesting mix of operating systems, Linux, Windows and some flavors of UNIX like IBM AIX, HP-UX or Sun Solaris, then you might want the features and functions of pricier midrange offerings. More options mean that configuration and deployment are more difficult, however.
Finally, if you are a serious Fortune 500 company, running your mission-critical applications on System z or System i centralized systems in a big data center, then you might be willing to pay top dollar for the most feature-rich offerings of an Enterprise-class machine. Thankfully, you have an army of highly-trained staff to handle the highest levels of complexity.
IBM's DS8000, HDS USP-V and EMC's Symmetrix are the key players in the Enterprise-class space. They tried to be ["all things to all people"], er.. perhaps all things to all platforms. All of the features and functions came at a price, not just in dollars, but in complexity and difficulty. You needed highly skilled storage admins using expensive storage management software, or had to be willing to hire the storage vendor's premium services to get the job done.
IBM recognized this trend early. IBM's SVC, N series and now XIV all offer ease-of-use with enterprise-class features and functions, at lower total cost of ownership than traditional enterprise-class systems. IBM is not the only one, of course, as smaller storage start-ups like 3PAR, Pillar Data Systems, Compellent, and to some extent Dell's EqualLogic all recognized this and developed clever offerings as well.
While IBM's XIV may not have been the first to introduce a modular, scale-out architecture using commodity parts managed by sophisticated ease-of-use interfaces, its success might have been the kick-in-the-butt EMC needed to follow the rest of the industry in this direction.
Continuing this week's series on Pulse 2009 video, we have a double header. Bob Dalton discusses our entry-level IBM System Storage [DS3000] and midrange IBM System Storage [DS4000] disk systems, followed by Dan Thompson discussing [IBM Tivoli Storage Manager FastBack] software.
IBM Tivoli Storage Manager FastBack is the result of IBM's [acquisition of FilesX], a company in Israel that developed software to backup servers at remote branch offices running Microsoft Windows operating system.
Well, it's Tuesday, which means IBM makes its announcements!
This week, IBM announces that it now supports 50GB Solid State Disk (SSD) in its [IBM System Storage EXP3000] disk systems. IBM has already made announcements about SSD enablement in the DS8000 and SAN Volume Controller (SVC), but now the EXP3000 brings SSD technology down to smaller System x server deployments.
Adoption of this new exciting technology is still in the early stages, despite the fact that IBM and other vendors have been touting this technology for a while. (For a quick blast to the past, here was my first post on the subject back from December 20, 2006: [Hybrid, Solid State and the future of RAID].) Recently, fellow blogger BarryB admitted that EMC has only sold SSD to [hundreds of their customers], and to be fair, I suspect IBM's sales of SSD in its BladeCenter servers [available since July 2007] have been in similar single-digit percentage territory as well.
The advantage of today's announcement is that you can mix and match SSD drives with SAS and SATA drives in the EXP3000. You won't have to buy an entire drawer of SSD; you can start with just a few, depending on your business needs. At the other extreme, you can have up to two drawers, with 12 SSD drives each, for a total of 24 drives directly attached to System x servers via the ServeRAID MR10M SAS/SATA controller adapter.
People are confused over various orders of magnitude. News of the economic meltdown often blurs the distinction between millions (10^6), billions (10^9), and trillions (10^12). To show how different these three numbers are, consider the following:
A million seconds ago - you might have received your last paycheck (12 days)
A billion seconds ago - you were born or just hired on your current job (31 years)
A trillion seconds ago - cavemen were walking around in Asia (31,000 years)
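The three figures above are easy to verify:

```python
SECONDS_PER_DAY = 24 * 60 * 60            # 86,400
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

print(10**6 / SECONDS_PER_DAY)    # ~11.6 days  (about 12)
print(10**9 / SECONDS_PER_YEAR)   # ~31.7 years
print(10**12 / SECONDS_PER_YEAR)  # ~31,700 years
```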
That these numbers confuse the average person is no surprise, but that they confuse marketing people in the storage industry is even more hilarious. I am often correcting people who misunderstand MB (million bytes), GB (billion bytes) and TB (trillion bytes) of information. Take this graph as an example from a recent presentation.
At first, it looks reasonable: back in 2004, black-and-white 2D X-ray images were only 1MB in size when digitized, but by 2010 there will be fancy 4D images that now take 1TB, which the chart labels a 1000x increase. What? When I pointed out this discrepancy, the person who put this chart together didn't know what to fix. Were 4D images only 1GB in size, or was it really a 1,000,000x increase?
If a 2D image was 1000 by 1000 pixels, and each pixel was a byte of information, then a 3D image might either be 1000 by 1000 by 1000 [voxels], or 1000 by 1000 at 1000 frames per second (fps). The first is 3D volumetric space; the latter is called 2D+time in the medical field, though the rest of us just say "video". 4D images are 3D+time, volumetric scans over time, so conceivably these could be quite large in size.
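The back-of-the-envelope sizes work out like this, assuming one byte per pixel or voxel as above:

```python
MB, GB, TB = 1000**2, 1000**3, 1000**4   # decimal units, as disk vendors count

pixels_2d = 1000 * 1000            # one 2D X-ray image
print(pixels_2d // MB)             # 1 MB

voxels_3d = pixels_2d * 1000       # 3D volume, or 2D at 1000 frames
print(voxels_3d // GB)             # 1 GB -> a 1,000x increase over 2D

samples_4d = voxels_3d * 1000      # 4D: the 3D volume sampled 1000 times
print(samples_4d // TB)            # 1 TB -> a 1,000,000x increase over 2D
```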
The key point is that advances in medical equipment result in capturing more data, which can help provide better healthcare. This would be the place I normally plug an IBM product, like the Grid Medical Archive Solution [GMAS], a blended disk and tape storage solution designed specifically for this purpose.
So, as government agencies look to spend billions of dollars to provide millions of people with proper healthcare, choosing to spend some of this money on a smarter infrastructure can result in creating thousands of jobs and save everyone a lot of money, but more importantly, save lives.
A short 2-minute [video] argues the case for Smarter Healthcare.
For more on this, check out Adam Christensen's blog post on[Smarter Planet], which points to a podcast byDr. Russ Robertson, chairman of the Counsel of Medical Education at Northwestern University’s Feinberg School of Medicine, and Dan Pelino, general manager of IBM's Healthcare and Life Sciences Industry.
An avid reader of this blog pointed me to a blog post [A Small Tec DIGG on IBM XIV], by Gowri Ananthan, a System Engineer in Singapore. Basically, she covers past battles, er.. discussions between me and fellow blogger BarryB from EMC, and [blegs] for answers to three questions.
Gowri, here are your answers:
Q1. Does IBM offer a Pay-as-you-Go [PAYGO] upgrade path for its IBM XIV disk storage system?
The concern was expressed as:
PAYGO also requires the customer to purchase the remaining capacity within 12 months of installation. So it is more of a 12-month installment plan than pay-as-you-grow.
A1. Actually, IBM offers several methods for your convenience:
With IBM's Capacity on Demand (CoD) plan, you get the full frame with 15 modules installed on your data center floor, but only pay for the first four modules (21 TB), then pay for 5.3TB module increments as you need them over the next 12 months. This is ideal for companies that don't know how fast they will grow, but do not want to wait for new modules to be delivered and installed when needed.
With IBM's Partial Rack offering, you can get a system with as little as six modules (27TB), and then over time, add more modules as you need them. This does not have to be done within 12 months; you can stay at six modules for as long as you like, and take as long as you want to add more modules. When you are ready for more capacity, the drawer or drawers can be delivered and installed non-disruptively.
Neither of these is a "payment installment plan", but certainly if you want to spread your costs into regularly-scheduled monthly payments across multiple years, IBM Global Financing can probably work something out.
Q2. Does IBM consider the XIV as green storage?
The concern was expressed as:
You are powering (8.4KW) and cooling all 180 drives for the whole duration, whether you're using the capacity or not. is it what you called Greener power usage..?
A2. Yes. IBM considers the IBM XIV to be green storage. The 8.4KW per frame is less than the 10-plus KW that a comparable 2-frame EMC DMX-950 system would consume. The energy savings in the IBM XIV come from delivering FC-like speeds using SATA disks that rotate more slowly, and therefore take less energy to spin.
In the fully-populated or Capacity on Demand configuration, you would spin all 180 disks. However, using the partial rack configuration, the 6-module system has only 40 percent of the disks, and therefore consumes only 40 percent of the energy. If you don't plan to store at least 20-30 TB, you might consider the DS3000, DS4000, DS5000, or DS8000 disk system instead.
Q3. How do you connect more than 24 host ports to an IBM XIV?
The concern was expressed as:
And finally do not forget my question on 24-FC Ports… Up to 24 Fiber Channel ports offering 4 Gbps, 2Gbps or 1 Gbps multi-mode and single-mode support. Stop.. stop.. how you gonna squeeze existing bunch of FC cables in 24 ports?
A3. Best practices suggest that if you have ten or more physical servers, each with two separate FC ports, then you should use a SAN switch or director in between. If you require four ports per server, then you would need a SAN switch beyond six servers to connect to the IBM XIV. If you consider that 24 FC ports, at 4Gbps, represents nearly 10 GB/sec of bandwidth, you will recognize that this is not a performance bottleneck for the system.
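The bandwidth figure checks out once you account for Fibre Channel's 8b/10b encoding, which delivers roughly 400 MB/sec of payload per nominal 4Gbps port. This is a rough sketch; real-world throughput also depends on protocol overhead:

```python
ports = 24
gbps_per_port = 4.0          # nominal "4Gbps" Fibre Channel
payload_fraction = 8 / 10    # 8b/10b encoding: 8 data bits per 10 line bits

mb_per_sec_per_port = gbps_per_port * payload_fraction * 1000 / 8   # 400 MB/s
total_gb_per_sec = ports * mb_per_sec_per_port / 1000
print(total_gb_per_sec)      # 9.6 -> "nearly 10 GB/sec"
```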
Wrapping up this week's theme on IBM's Dynamic Infrastructure® strategic initiative, we have a few more goodies in the goody bag.
First item: Dave Bricker shows off the XIV cloud-optimized storage at Pulse 2009
Second item: Rodney Dukes discusses the latest features of the DS8000 disk system at Pulse 2009
Third item: IBM launches the [Dynamic Infrastructure Journal]. You can read the February 2009 edition online, and if you find it useful and interesting, subscribe to learn from IBM's transformation experts how to reduce cost, manage risk and improve service.
Whether or not you attended the IBM Pulse 2009 conference, you might enjoy looking at the rest of the series of videos on [YouTube] and photographs on [Flickr].