Wrapping up this week's theme on the XO laptop, I decided to take on the challenge of printing. I managed to print from my XO laptop to my laserjet printer. I checked the One Laptop Per Child [OLPC] website, and found there is no built-in support for printers, but several people have been asking how to print from the XO, so here are the steps I took to make it happen. (Note: I did all of these steps successfully on my Qemu-emulated system first, and then performed them on my XO laptop.)
- Step 1: Determine if you have an acceptable printer
The XO laptop can only connect to a printer via USB cable or over the network. Check your printer to see if it supports either of these two options. In my case, my printer is connected to my Linksys hub that offers Wi-Fi in my home. The XO runs a modified version of Red Hat's Fedora 7, so we also need to determine if the printer is supported on Linux. Check the [Open Printing Database] for the level of support. This database uses the following ranking system, with printers categorized according to how well they work under Linux and Unix. The ratings do not pertain to whether or not the printer will be auto-recognized or auto-configured, but merely to the highest level of functionality achieved.
- Perfectly - everything the printer can do also works under Linux
- Mostly - works almost perfectly; funny enhanced resolution modes may be missing, or the color is a bit off, but nothing that would make the printouts not useful
- Partially - mostly doesn't work; you may be able to print only in black and white on a color printer, or the printouts may look horrible
- Paperweight - These printers don't work at all. They may work in the future, but don't count on it
If your printer only supports a parallel cable connection, or does not have a high enough ranking above, go buy another printer. The [Linux Foundation] website offers a list of suggested printers and tutorials. In my case, I have a Brother HL5250-DN black-and-white laserjet printer connected over a network to Windows XP, OS X and my other Linux systems. It is rated as supporting Linux perfectly, so I decided to use this for my XO laptop.
- Step 2: Install Common UNIX Printing System (CUPS)
Technically, Linux is not UNIX, but for our purposes, close enough. Start the Terminal activity, use "su" to change to root, and then use "yum" to install CUPS. Yum will automatically determine what other packages are needed, in this case paps and tmpwatch. Once installed, use "/usr/sbin/cupsd" to get the CUPS daemon started, and add this to the end of rc.local so that it gets started every time you reboot.

    [olpc@xo-10-CC-6F ~]$ su
    bash-3.2# yum install cups
    ...
    Total download size = 3.0 M
    Is this OK [y/N]? y
    bash-3.2# /usr/sbin/cupsd
    bash-3.2# echo "/usr/sbin/cupsd" >> /etc/rc.d/rc.local
    bash-3.2# exit
    [olpc@xo-10-CC-6F ~]$
- Step 3: Install Opera or Firefox browser
To download the appropriate drivers, you may need a browser that can handle file downloads. I have tried to do this with the built-in Browse activity (aka Gecko) but encountered problems. I have both Opera and Firefox installed, but I will focus on Opera for this effort. I also installed the older 9.0.48.0 version of the Flash player (which worked better than the latest 9.0.115.0 version) and the Java JRE. Follow the OLPC Wiki instructions for [Opera, Adobe Flash, and Sun Java] installation, then verify with the following [Java and Flash] testers.
- Step 4: Download drivers and packages unique for your printer
In my case, I used Opera to get to the [Brother Linux Driver Homepage], and downloaded the RPMs for LPR and the CUPS wrapper. These are the ones listed under "Drivers for Red Hat, Mandrake (Mandriva), SuSE". I saved these in the /home/olpc directory.

    [olpc@xo-10-CC-6F ~]$ su
    bash-3.2# cd /home/olpc
    bash-3.2# rpm -vi brhl5250dnlpr-2.0.1-1.i386.rpm
    bash-3.2# rpm -vi cupswrapperHL5250DN-2.0.1-1.i386.rpm
    bash-3.2# exit
    [olpc@xo-10-CC-6F ~]$
- Step 5: Create a "root" password
By default, the root user has no password. However, you will need it set to something for later steps, so here is the process to create a root password. I set mine to "tony", which normally would be considered too simple a password, but ignore those messages and continue. We will remove it in Step 8 (below) to put things back to normal.

    [olpc@xo-10-CC-6F ~]$ su
    bash-3.2# passwd
    Changing password for user root.
    New UNIX password: tony
    BAD PASSWORD: it is too short
    Retype new UNIX password: tony
    passwd: all authentication tokens updated successfully
    bash-3.2# exit
    [olpc@xo-10-CC-6F ~]$
- Step 6: Launch CUPS administration
Here I followed the instructions in Robert Spotswood's [Printing In Linux with CUPS] tutorial. Launch the Opera browser, and enter "http://localhost:631/admin" as the URL. The localhost refers to the laptop itself, and 631 is the special port that CUPS listens on for browsers. The address 127.0.0.1 can be used interchangeably with "localhost". In my case, it detected both of my networked printers, so I selected the HL5250DN and entered the location of my PPD file, "/usr/share/cups/model/HL5250DN.ppd", which was created in Step 4. I set the URI to "lpd://192.168.0.75/binary_p1" per the instructions [Network Setting in CUPS based Linux system] on the Brother FAQ page. I changed the page size from "A4" to "Letter", and set this printer as the default printer. When it asks for userid and password, enter "root" for the user, and "tony" or whatever you decided to set your root password to. Select "Print a Test Page" to verify that everything is working.
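Although I used the web interface, the same queue could in principle be defined from the Terminal activity with CUPS's lpadmin tool. This is only a rough sketch of the equivalent commands, assuming the queue name, PPD path and URI from above (I have not tested this on the XO): the -E flag enables the queue, -d makes it the default, and lpoptions sets Letter paper.

    bash-3.2# /usr/sbin/lpadmin -p HL5250DN -E -v lpd://192.168.0.75/binary_p1 -P /usr/share/cups/model/HL5250DN.ppd
    bash-3.2# /usr/sbin/lpadmin -d HL5250DN
    bash-3.2# lpoptions -p HL5250DN -o PageSize=Letter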
- Step 7: Printing actual files
Sadly, I don't know Opera well enough to know how to print from there. So, I went over to my trusted Firefox browser. Select File->Page Setup to specify the settings, File->Print Preview to see what it will look like, and then File->Print to send it to the printer. To print the file "out.txt" that is in your /home/olpc directory, for example, enter "file:///home/olpc/out.txt" as the URL in the Firefox browser. This will show the file, which you can then print to your printer. I had to specify 200% scaling; otherwise the fonts were too small to read.
- Step 8: Remove the "root" password
If you want to remove the root password, here are the steps.

    [olpc@xo-10-CC-6F ~]$ su
    Password: tony
    bash-3.2# passwd -d root
    Removing password for user root.
    passwd: Success
    bash-3.2# exit
    [olpc@xo-10-CC-6F ~]$
Now the problem is that there is no way to print from any of the Sugar activities. The best place to put in print support would be the Journal activity. Along the bottom, where the mounted USB keys are located, could be an icon for a printer, and dragging a file down to the printer object could cause it to be sent to the printer. The alternative is to write some scripts invocable from the Terminal activity to determine what is in the Journal, and send them to LPR with the appropriate parameters (see the sketch below). I did not have time to do either of these, but perhaps someone out there can take on that as a project.
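As a rough starting point for such a script, here is a minimal, untested sketch. The Journal datastore path and layout are my assumptions and may differ between OLPC builds; the queue name comes from Step 6, and paps was installed as a dependency in Step 2.

    #!/bin/sh
    # Print every entry found in the Journal datastore (path is an assumption)
    DATASTORE=/home/olpc/.sugar/default/datastore
    for f in "$DATASTORE"/*/data; do
        # paps converts plain text to PostScript; lpr hands the job to CUPS
        paps "$f" | lpr -P HL5250DN
    done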
technorati tags: OLPC, XO, printing, printer, linux, Opera, Firefox, Java, Flash

Tags: olpc, linux
As we wrap up the year, people's thoughts turn to archive and data retention. The [Robert Frances Group] has put out a research paper titled Optimizing Data Retention and Archiving - November 2007 that helps IT executives understand the cost differences between a disk-only archive approach and a disk/tape archive approach, and how an [IBM System Storage DR550] offering can help address long-term storage archive requirements with a world-class storage strategy that reduces cost, improves efficiency and supports compliance. Here is an excerpt:

    Ongoing legal, audit, and regulatory requirements will continue to drive IT groups to improve archive policies, processes, strategy, and efficiency. The choice of which technologies to use will have a profound impact on the success of such efforts, since technologies like the DR550 embody many aspects of the strategy, processes, and policies that must be decided upon. When it comes to tape, IBM's DR550 is unique in providing that support. Competitors tout disk-only solutions as the wave of the future, but research indicates otherwise. The most basic benefits are cost and mobility, and despite the various vendor proclamations to the contrary, tape is still only a fraction of the cost of disk and will remain so in the foreseeable future.

This paper is yet another nail in the coffin of EMC Centera. In his post [Anyone Naughty on Your List…], Jon W Toigo points to an eBay fire sale of an EMC Centera Gen 4. There has never been a better time to switch from EMC Centera to the IBM System Storage DR550.

technorati tags: Robert Frances Group, IBM, DR550, archive, data retention, storage, solution, disk, tape, drunkendata, Jon Toigo, EMC, Centera
Tags: lifecycle
Over at StorageMojo, Robin Harris writes in his post [The High-End Storage Melt-Down]:

    Expect to hear a lot more about the SMB segment over the next 6 months. Because the high-end market is sucking wind. NetApp and EMC are both reporting problems in the high-end. HP and IBM don't break out as much detail but I'm sure they are feeling the chill as well. With Hulk/Maui coming in Q2CY08, you should hold off on any 2nd tier storage purchases you can. I estimate that H/M will be about 30% per GB less than the current gear.

Robin blames the U.S. subprime mortgage mess, but I disagree with the term melt-down. IBM doesn't publicly report subset numbers on individual product lines, but we are growing, albeit single-digit growth, on the high-end with our IBM System Storage DS8000 and DS6000 series products. Single-digit growth is not "booming", but it is what we expected in this space, so it is not like we are "feeling the chill" as Robin stated. Obviously, if the U.S. market overall is doing poorly, our growth must be coming from somewhere else. IBM's success appears to be from organic growth in our Asia and Europe markets, and from taking market share away from the top two contenders, EMC and HDS. Here are my thoughts why:
- EMC is remodeling its kitchen
Not happy with its status as the #1 disk hardware specialty shop, EMC is admirably trying to redefine itself as an ["information infrastructure"] company, buying up software companies and introducing new storage services. [Byte and Switch] reports on EMC's recent acquisitions:

    EMC is the latest vendor to pin its colors to the SaaS mast, revealing its plan to offer SaaS-based archiving services during its recent Innovation Day in Boston. EMC gave another clear indication of its SaaS intentions last month, when it spent $76 million to acquire online backup specialist Mozy.

IBM has offered [Managed Storage Services] for years through our Global Technology Services (GTS) division. Gartner recognized IBM as the #1 leader in storage services, with three times more revenue than EMC in this space. As with a restaurant that is remodeling its kitchen, EMC can expect a temporary drop in revenue. If it is done right, customers will come back to a bigger, brighter restaurant. If not, the restaurant re-opens as a much smaller, lesser version of itself. Recent events this year might incent EMC to get that kitchen done quickly:
- A recent [class-action lawsuit] might result in EMC's "86 percent male" sales force going to sexual harassment sensitivity training, taking time away from selling high-end storage arrays in the field. Analysts consider "high-end" boxes as those costing over $300,000 US dollars. Because of the money involved, there is a lot of competition for high-end storage, so face-to-face time with prospective customers is crucial to making the sale. Anytime any vendor is mentioned in a lawsuit (and certainly IBM has had its share in the past, as Chuck Hollis correctly points out in the comment below), priorities get shifted, and there is a potential dip in revenues.
- Dell acquires EMC's rival EqualLogic. Dell resold EMC midrange storage, like CLARiiON, so this should not impact their high-end storage sales. While Dell will be allowed to sell EMC products until 2011, this new acquisition might mean Dell leads with the EqualLogic offerings, and that could potentially reduce EMC revenues in the midrange space.
IBM went through a similar phase in the 1990s, redefining itself from an "IT Technology" company into a "Systems, Software and Services" company. These transitions can't be done in a quarter, or even a year; they take several years. IBM lost business to EMC in the 1990s, but came back with a stronger portfolio in the 2000s, and so IBM's kitchen remodeling effort appears to be paying off. We will see what happens with EMC in a few years.
- HDS puts on the white lab coats
Meanwhile, HDS appears interested in taking over as the #1 disk hardware specialty shop. For years, Hitachi was the stereotypical JCM (Japanese IBM-compatible manufacturer) that made well-engineered "me, too" storage arrays. They would see what innovators like IBM and EMC were doing, and copy them. Recently, however, they seem to have changed strategy, introducing new features and functions on their high-end USP-V device, like [Dynamic Provisioning]. The problem is that customers don't want to feel like [Guinea pigs] in an experimental lab, especially with mission-critical data that they trust to their most-available, most-reliable high-end disk storage systems. Like IBM and EMC and the rest of the major storage vendors, Hitachi has top-notch engineers making quality products, but new features scare people, and so there is a lag in the adoption of new technologies. In our youth, we might have preferred beer with recent born-on dates, and tequila aged less than 90 days. But as we get older, we switch to drinks like wine and whiskey, aged years, not weeks. The same is true for the marketplace. New start-ups and other "early adopters" might be willing to try fresh new features and functions on their storage systems, but more established enterprises prefer storage with more mature and stable microcode. Storage admins want to leave at the end of the day knowing that the data will still be there the next morning. In tough financial times, many established companies want the technological equivalent of ["comfort food"]: nothing spicy or exotic, but simple, hearty fare that fills the belly and keeps you satisfied. Recognizing this, IBM often introduces new features and functions on its midrange lines first, and positions them accordingly. Once customers are comfortable with the concepts, IBM can then consider moving them into the high-end lines. For example, dynamic volume expansion was introduced on the DS4000 and SAN Volume Controller first, and once proven safe and effective, brought over to the DS8000 series. This strategy has served us well.
Well, those are my theories. If you have a different explanation of why storage vendors are not doing well in the high-end, drop me a comment!

technorati tags: SMB, EMC, NetApp, DS8000, DS6000, HDS, Dell, EqualLogic, subprime, mortgage, USP, USP-V, Dynamic Provisioning, DS4000, SAN Volume Controller, SVC
Tags: disk
It's official! My "blook" Inside System Storage - Volume I is now available.  | This blog-based book, or “blook”, comprises the first twelve months of posts from this Inside System Storage blog,165 posts in all, from September 1, 2006 to August 31, 2007. Foreword by Jennifer Jones. 404 pages.- IT storage and storage networking concepts
- IBM strategy, hardware, software and services
- Disk systems, Tape systems, and storage networking
- Storage and infrastructure management software
- Second Life, Facebook, and other Web 2.0 platforms
- IBM’s many alliances, partners and competitors
- How IT storage impacts society and industry
You can choose between hardcover (with dust jacket) or paperback versions. This is not the first time I've been published. I have authored articles for storage industry magazines, written large sections of IBM publications and manuals, submitted presentations and whitepapers to conference proceedings, and even had a short story published with illustrations by the famous cartoonist [Ted Rall]. But I can say this is my first blook, and as far as I can tell, the first blook from IBM's many bloggers on DeveloperWorks, and the first blook about the IT storage industry. I got the idea when I saw [Lulu Publishing] run a "blook" contest. The Lulu Blooker Prize is the world's first literary prize devoted to "blooks"--books based on blogs or other websites, including webcomics. The [Lulu Blooker Blog] lists past years' winners. Lulu is one of the new innovative "print-on-demand" publishers. Rather than printing hundreds or thousands of books in advance, as other publishers require, Lulu doesn't print them until you order them. I considered cute titles like A Year of Living Dangerously, or An Engineer in Marketing La-La Land, or Around the World in 165 Posts, but settled on a title that closely matched the name of the blog. In addition to my blog posts, I provide additional insights and behind-the-scenes commentary. If you go to the Lulu website above, you can preview an entire chapter before purchase. I have added a hefty 56-page Glossary of Acronyms and Terms (GOAT) with over 900 storage-related terms defined, which also doubles as an index back to the post (or posts) that use or further explain each term. So who might be interested in this blook?
- Business Partners and Sales Reps looking to give a nice gift to their best clients and colleagues
- Managers looking to reward early-tenure employees and retain the best talent
- IT specialists and technicians wanting a marketing perspective of the storage industry
- Mentors interested in providing motivation and encouragement to their proteges
- Educators looking to provide books for their classroom or library collection
- Authors looking to write a blook themselves, to see how to format and structure a finished product
- Marketing personnel that want to better understand Web 2.0, Second Life and social networking
- Analysts and journalists looking to understand how storage impacts the IT industry, and society overall
- College graduates and others interested in a career as a storage administrator
And yes, according to Lulu, if you order soon, you can have it by December 25.

technorati tags: IBM, blook, Volume I, Jennifer Jones, system, storage, strategy, hardware, software, services, disk, tape, networking, SAN, secondlife, Web2.0, facebook, Lulu, publishing, Blooker Prize, articles, magazines, proceedings, Ted Rall, insights, glossary, early-tenure, mentors, library, classroom, administrator, print, publish, on demand
Tags: blook, lifecycle, announcements, disk, infrastructure, secondlife, san, bc, green, tape
In North America, today marks the start of the "Give 1 Get 1" program. [Image: Children using the XO laptop]
I first learned of this when I was reading about Timothy Ferriss' [LitLiberation project] on his [Four Hour Work Week] blog, was surfing around for related ideas, and chanced upon this. I registered for a reminder, and it came today (the reminder, not the laptop itself). Here's how the program works. You give $399 US dollars to the "One Laptop per Child" (OLPC) [laptop.org] organization for two laptops: one goes to a deserving child in a developing country, the second goes to you, for your own child, or to donate to a local charity that helps children. This counts as a $199 purchase plus a $200 tax-deductible donation. For Americans, this is a [US 501(c)(3)] donation, and for Canadians and Mexicans, take advantage of the low value of the US dollar! If your employer matches donations, like IBM does, get them to match the $200 donation for a third laptop, which goes to another child in a developing country. As for shipping, you pay only for the shipping of the one to you; each receiving country covers its own shipping. In my case, the shipping was another $24 US dollars to Arizona. No guarantees that it will arrive in time for the holidays this December, but it might. To sweeten the deal, T-mobile throws in a year's worth of "Wi-Fi Hot Spot" access that you can use for yourself, either with the XO laptop itself, or with your regular laptop, iPhone, or other Wi-Fi enabled handheld device. National Public Radio did a story last week on this: [The $100 Laptop Heads for Uganda], where they interview actor [Masi Oka], best known from the TV show ["Heroes"], who has agreed to be their spokesman. At the risk of sounding like their other spokesman, I thought I would cover the technology itself, inside the XO, and how this laptop represents IBM's concept of "Innovation that matters"! The project was started by [Nicholas Negroponte] from [MIT] as the "$100 laptop project". Once the final design was worked out, it turned out to cost $188 US dollars to make, so they rounded it up to $200. This is still an impressive price, and requires that hundreds of thousands of them be manufactured to justify ramping up the assembly line. Two of IBM's technology partners are behind this project. First is Advanced Micro Devices (AMD), which provides the 433MHz x86 processor, roughly 75 percent slower than my ThinkPad T60's. Second is Red Hat, as the XO runs a lean Fedora 6 version of Linux. Obviously, you couldn't have Microsoft Windows or Apple OS X, as both require significantly more resources. The laptop is "child size", and would be considered in the [subnotebook] category. At 10" x 9" x 1.25", it is about the size of a class textbook, can be carried easily in a child's backpack, or carried by itself with the integrated handle. When closed, it is sealed enough to be protected when carried in rain or dust storms. It weighs about 3.5 pounds, less than the 5.2 pounds of my ThinkPad T60. The XO is "green", not just in color, but also in energy consumption. This laptop can be powered by AC, or by a human-powered hand-crank, with work in place to get options for car-battery or solar power charging. Compared to the 20W normally consumed by traditional laptops, the XO consumes 90 percent less, running at 2W or less. To accomplish this, there is no spinning disk inside. Instead, a 1GB FLASH drive holds 700MB of Linux, and gives you 300MB to hold your files. There is a slot for an MMC/SD flash card, and three USB 2.0 ports to connect to USB keys, printers or other remote I/O peripherals. The XO flips around into three positions:
- Standard
Standard laptop position has screen and keyboard. The water-tight keyboard comes in ten languages: International/English, Thai, Arabic, Spanish, Portuguese, West African, Urdu, Mongolian, Cyrillic, and Amharic. (I learned some Amharic, having lived five years with Ethiopians.) There does not appear to be a VGA port, so don't be thinking this could be used as an alternative to project Powerpoint presentations onto a big screen. A built-in 640x480 webcam, microphone and speakers allow the XO to be used as a communication device. Voice-over-IP (VOIP) client software, similar to Skype or [IBM Lotus Sametime], is pre-installed for this purpose. The basic built-in communications are 802.11g (54Mbps), which you can use to surf the web using the Wi-Fi at your local Starbucks; and 802.11s, which forms a "mesh network" with other XO laptops, and can surf the web by finding the one laptop nearby that is connected to the internet to share bandwidth. This eliminates the need to build a separate Wi-Fi hub at the school. There are USB-to-Ethernet and USB-to-Cellular converters, so those might be an alternative option. For kids who want to learn how to program, the Python, Javascript, Logo and Squeak languages are included. Since the machine runs Linux, it can be modified as needed to fit the individual needs of each country.
- E-book
Flipped vertically, the device can be read like a book. The screen can be switched between full-color and black-and-white, 200 dpi, with decent 1200x900 pixel resolution. The full-color mode is back-lit, and can be read in low lighting. The black-and-white mode is not back-lit, consumes much less power, and can be read in bright sunlight. In that regard, it is comparable to other [e-book devices], like a Cybook or Sony Reader. Software includes a web browser, document reader, word processor and RSS feed reader to read blogs. The OLPC identifies all of the software, libraries and interfaces they use, so that anyone who wants to develop children's software for this platform can do so.
- Game mode
With the keyboard flipped back, the 6" x 4.5" screen has directional controls and X/Y/A/B buttons to run games. This would make it comparable to a Nintendo DS or Playstation Portable (PSP). Again, the choice between back-lit color or sunlight-readable black-and-white screen modes applies. Some games are pre-installed.
So for $399, you could buy a Wi-Fi enabled [16GB iPod Touch] for yourself, which does much the same thing, or you can make a difference in the world. I made my donation this morning, and suggest you--my dear readers in the US, Canada and Mexico--consider doing the same. Go to [www.laptopgiving.org] for details.

technorati tags: North America, Give-1-Get-1, program, XO, laptop, Timothy Ferriss, Four Hour Work Week, LitLiberation, One Laptop Per Child, OLPC, $100 laptop, IBM, AMD, Red Hat, Fedora, T-mobile, NPR, Nicholas Negroponte, MIT, Thinkpad, T60, notebook, subnotebook, Linux, Windows, OS X, green, AC, hand-crank, human-power, International, English, Thai, Arabic, Spanish, Portuguese, West+African, Urdu, Mongolian, Cyrillic, Amharic, Ethiopia, FLASH, MMC/SD, USB, VGA, webcam, VOIP, Lotus, Sametime, Skype, Starbucks, mesh, network, Nintendo DS, Playstation, portable, PSP, 802.11g, 802.11s, Wi-Fi, Ethernet, Powerpoint, Python, Javascript, Logo, Squeak, back-lit, sunlight, Cybook, Sony, Reader, RSS
Tags: disk, infrastructure, olpc, green
Perhaps I wrapped up my exploration of disk system performance one day too early. (While it is Friday here in Malaysia, it is still only Thursday back home.) Barry Burke, EMC blogger (aka The Storage Anarchist), writes:

    Aren't you mixing metrics here? Miles per Gallon measures an efficiency ratio (amount of work done with a fixed amount of energy), not a speed ratio (distance traveled in a unit of time). Given that IOPs and MB/s are the unit of "work" a storage array does, wouldn't the MPG equivalent for storage be more like IOPs per Watt or MB/s per Watt? Or maybe just simply Megabytes Stored per Watt (a typical "green" measurement)? You appear to be intentionally avoiding the comparison of I/Os per Second and Megabytes per Second to Miles Per Hour? May I ask why?

This is a fair question, Barry, so I will try to address it here. It was not a typo; I did mean MPG (miles per gallon) and not MPH (miles per hour). It is always challenging to find an analogy that everyone can relate to when explaining concepts in Information Technology that might be harder to grasp. I chose MPG because it is closely related to IOPS and MB/s in four ways:
- MPG applies to all instances of a particular make and model. Before Henry Ford and the assembly line, cars were made one at a time, by a small team of craftsmen, and so there could be variety from one instance to another. Today, vehicles and storage systems are mass-produced in a manner that provides consistent quality. You can test one vehicle, and safely assume that all similar instances of the same make and model will have similar mileage. The same is true for disk systems: test one disk system and you can assume that all others of the same make and model will have similar performance.
- MPG has a standardized measurement benchmark that is publicly available. The US Environmental Protection Agency (EPA) is an easy analogy for the Storage Performance Council, providing the results of various offerings to choose from.
- MPG has usage-specific benchmarks to reflect real-world conditions. The EPA offers City MPG for the type of driving you do to get to work, and Highway MPG to reflect the type of driving on a cross-country trip. These serve as a direct analogy to SPC having SPC-1 for Online transaction processing (OLTP) and SPC-2 for large file transfers, database queries and video streaming.
- MPG can be used for cost/benefit analysis. For example, one could estimate the amount of business value (miles travelled) for the amount of dollar investment (cost to purchase gallons of gasoline, at an assumed gas price). The EPA does this as part of their analysis. This is similar to the way IOPS and MB/s can be divided by the cost of the storage system being tested in SPC benchmark results. The business value of IOPS or MB/s depends on the application, but could relate to the number of transactions processed per hour, the number of music downloads per hour, or the number of customer queries handled per hour, all of which can be assigned a specific dollar amount for analysis.
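To make that last point concrete, here is the shape of the price-performance calculation, using invented numbers rather than actual SPC results:

    # Hypothetical figures for illustration only -- not actual SPC results
    SPC1_IOPS=200000      # benchmark result: I/O requests per second
    PRICE_USD=1500000     # total price of the tested storage configuration
    # Price-performance, in dollars per SPC-1 IOPS:
    echo "scale=2; $PRICE_USD / $SPC1_IOPS" | bc    # => 7.50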
It seemed that if I was going to explain why standardized benchmarks were relevant, I should find an analogy that has similar features to compare to. I thought about MPH, since it is based on time units like IOPS and MB/s, but decided against it based on an earlier comment you made, Barry, about NASCAR:

    Let's imagine that a Dodge Charger wins the overwhelming majority of NASCAR races. Would that prove that a stock Charger is the best car for driving to work, or for a cross-country trip?

Your comparison, Barry, to car-racing brings up three reasons why I felt MPH is a bad metric to use for an analogy:
- Increasing MPH, and driving anywhere near the maximum rated MPH for a vehicle, can be reckless and dangerous, risking loss of human life and property damage. Even professional race car drivers will agree there are dangers involved. By contrast, processing I/O requests at maximum speed poses no additional risk to the data, nor possible damage to any of the IT equipment involved.
- While most vehicles have top speeds in excess of 100 miles per hour, most Federal, State and Local speed limits prevent anyone from taking advantage of those maximums. Race-car drivers in NASCAR may be able to take advantage of a vehicle's maximum MPH, but the rest of us can't. The government limits the speed of vehicles precisely because of the dangers mentioned in the previous bullet. In contrast, processing I/O requests at faster speeds poses no such dangers, so the government imposes no limits.
- Neither IOPS nor MB/s match MPH exactly. Earlier this week, I related IOPS to "questions handled per hour" at the local public library, and MB/s to "spoken words per minute" in those replies. If I tried to find a metric based on unit type to match the "per second" in IOPS and MB/s, then I would need to find a unit that equated to "I/O requests" or "MB transferred" rather than something related to "distance travelled".
In terms of time-based units, the closest I could come up with for IOPS was the acceleration rate of zero-to-sixty MPH in a certain number of seconds. Speeding up to 60MPH, then slamming the brakes, and then back up to 60MPH, start-stop, start-stop, and so on, would reflect what IOPS is doing on a request-by-request basis, but nobody drives like this (except maybe the taxi cab drivers here in Malaysia!). Since vehicles are limited to speed limits in normal road conditions, the closest I could come up with for MB/s would be "passenger-miles per hour", such that high-occupancy vehicles like school buses could deliver more passengers than low-occupancy vehicles with only a few passengers. Neither start-stops nor passenger-miles per hour have standardized benchmarks, so they don't work well for comparison between vehicles. If you or anyone can come up with a metric that will help explain the relevance of standardized benchmarks better than the MPG that I already used, I would be interested in it.
You also mention, Barry, the term "efficiency", but mileage is about "fuel economy". Wikipedia is quick to point out that while the fuel efficiency of petroleum engines has improved markedly in recent decades, this does not necessarily translate into fuel economy of cars. The same can be said for disk systems: faster internal bandwidth on the backplane between controllers, and faster HDDs, do not necessarily translate to better external performance of the disk system as a whole. You correctly point this out in your blog about the DMX-4:

    Complementing the 4Gb FC and FICON front-end support added to the DMX-3 at the end of 2006, the new 4Gb back-end allows the DMX-4 to support the latest in 4Gb FC disk drives. You may have noticed that there weren't any specific performance claims attributed to the new 4Gb FC back-end. This wasn't an oversight, it is in fact intentional. The reality is that when it comes to massive-cache storage architectures, there really isn't that much of a difference between 2Gb/s transfer speeds and 4Gb/s. Oh, and yes, it's true - the DMX-4 is not the first high-end storage array to ship a 4Gb/s FC back-end. The USP-V, announced way back in May, has that honor (but only if it meets the promised first shipments in July 2007). DMX-4 will be in August '07, so I guess that leaves the DS8000 a distant 3rd.

This also explains why the IBM DS8000, with its clever "Adaptive Replacement Cache" algorithm, has such high SPC-1 benchmarks despite the fact that it still uses 2Gbps drives inside. Given that it doesn't matter between 2Gbps and 4Gbps on the back-end, why would it matter which vendor came first, second or third, and why call it a "distant 3rd" for IBM? How soon would IBM need to announce similar back-end support for it to be a "close 3rd" in your mind? I'll wrap up with your excellent comment that Watts per GB is a typical "green" metric. I strongly support the whole "green initiative", and I used "Watts per GB" last month to explain how tape is less energy-consumptive than paper. I see on your blog you have used it yourself here:

    The DMX-3 requires less Watts/GB in an apples-to-apples comparison of capacity and ports against both the USP and the DS8000, using the same exact disk drives

It is not clear if "requires less" means "slightly less" or "substantially less" in this context, and I have no facts from my own folks within IBM to confirm or deny it. Given that tape is orders of magnitude less energy-consumptive than anything EMC manufactures today, the point is probably moot. I find it refreshing, nonetheless, to have agreed-upon "energy consumption" metrics to make such apples-to-apples comparisons between products from different storage vendors. This is exactly what customers want to do with performance as well, without necessarily having to run their own benchmarks or work with specific storage vendors. Of course, Watts/GB consumption varies by workload, so to make such comparisons truly apples-to-apples, you would need to run the same workload against both systems. Why not use the SPC-1 or SPC-2 benchmarks to measure the Watts/GB consumption? That way, EMC can publish the DMX performance numbers at the same time as the energy consumption numbers, and then HDS can follow suit for its USP-V. I'm on my way back to the USA soon, but wanted to post this now so I can relax on the plane.
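For what it's worth, here is the shape of a Watts/GB calculation under a given benchmark workload, again with invented numbers (no measured vendor data implied):

    # Hypothetical figures for illustration only -- not measured vendor data
    WATTS=6500        # power draw of the configuration while running SPC-1
    USABLE_GB=50000   # usable capacity of the tested configuration
    echo "scale=3; $WATTS / $USABLE_GB" | bc    # => .130 Watts/GB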
technorati tags: IBM, EMC, Storage Anarchist, MPG, MPH, IOPS, NASCAR, Malaysia, Watts, GB, green, back-end, DMX-3, DMX-4, HDS, USP, USP-V, SPC, SPC-1, SPC-2, standardized, benchmarks, workload, DS8000, disk, storage, tape
Tags: tape, disk
Wrapping up this week's exploration on disk system performance, today I will cover the Storage Performance Council (SPC) benchmarks, and why I feel they are relevant in helping customers make purchase decisions. This all started to address a comment from EMC blogger Chuck Hollis, who expressed his disappointment in IBM as follows:

    You've made representations that SPC testing is somehow relevant to customers' environments, but offered nothing more than platitudes in support of that statement. Not good.

Apparently, while everyone else in the blogosphere merely states their opinions and moves on, IBM is held to a higher standard. Fair enough, we're used to that. Let's recap what we covered so far this week:
- Monday, I explained how seemingly simple questions like "Which is the tallest building?" or "Which is the fastest disk system?" can be steeped in controversy.
- Tuesday, I explored what constitutes a disk system. While there are special storage systems that include HDDs and offer tape-emulation, file-oriented access, or non-erasable, non-rewriteable protection, it is difficult to get apples-to-apples comparisons with storage systems that don't offer these special features. I focused on the majority of general-purpose disk systems, those that are block-oriented and direct-access.
- Wednesday, I explored two metrics to measure storage performance: I/O requests per second (IOPS) and Megabytes transferred per second (MB/s).
Today, I will explore ways to apply these metrics to measure and compare storage performance. Let's take, for example, an IBM System Storage DS8000 disk system. This has a controller that supports various RAID configurations, cache memory, and HDD inside one or more frames. Engineers who are testing individual components of this system might run specific types of I/O requests to test out the performance or validate certain processing.
- 100% read-hit, this means that all the I/O requests are to read data expected to be in the cache.
- 100% read-miss, this means that all the I/O requests are to read data expected NOT to be in the cache, and must go fetch the data from HDD.
- 100% write-hit, this means that all the I/O requests are to write data into cache.
- 100% write-miss, this means that all the I/O requests are to bypass the cache, and are immediately de-staged to HDD. Depending on the RAID configuration, this can result in actually reading or writing several blocks of data on HDD to satisfy this I/O request.
This is known affectionately in the industry as the "four corners" test, because you can show the results in a box, with writes on the left, reads on the right, hits on the top, and misses on the bottom. Engineers are proud of these results, but these workloads do not reflect any practical production workload. At best, since all I/O requests are one of these four types, the four corners provide an expectation range, from the worst performance (most often write-miss, in the lower left corner) to the best performance (most often read-hit, in the upper right corner) you might get with a real workload.
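As a back-of-envelope illustration of how a real workload lands between the corners, here is a blended service-time estimate; all four corner values and the read/write and hit ratios are invented:

    # Hypothetical four-corner service times, in milliseconds:
    READ_HIT=1
    READ_MISS=8
    WRITE_HIT=2
    WRITE_MISS=12
    # A 70% read / 30% write workload with a 50% cache-hit ratio:
    echo "scale=2; 0.7*(0.5*$READ_HIT + 0.5*$READ_MISS) + 0.3*(0.5*$WRITE_HIT + 0.5*$WRITE_MISS)" | bc
    # => 5.25 ms, between the read-hit and write-miss corners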
The "city" program is designed to replicate an urban rush-hour driving experience in which the vehicle is started with the engine cold and is driven in stop-and-go traffic with frequent idling. The car or truck is driven for 11 miles and makes 23 stops over the course of 31 minutes, with an average speed of 20 mph and a top speed of 56 mph. - Highway MPG
The "highway" program, on the other hand, is created to emulate rural and interstate freeway driving with a warmed-up engine, making no stops (both of which ensure maximum fuel economy). The vehicle is driven for 10 miles over a period of 12.5 minutes with an average speed of 48 mph and a top speed of 60 mph.
Why two different measurements? Not everyone drives in a city in stop-and-go traffic. Having only one measurement may not reflect the reality that you may travel long distances on the highway. Offering both city and highway measurements allows consumers to decide which metric relates closer to their actual usage. Should you expect your actual mileage to be exactly the same as the standardized test? Of course not. Nobody drives exactly 11 miles in the city every morning with 23 stops along the way, or 10 miles on the highway at the exact speeds listed. The EPA's famous phrase "your mileage may vary" has been quickly adopted into popular culture's lexicon. All kinds of factors, like weather, distance, and driving style, can cause people to get better or worse mileage than the standardized tests would estimate. Want more accurate results that reflect your driving pattern, in the specific conditions you are most likely to drive in? You could rent different vehicles for a week and drive them around yourself, keeping track of where you go, how fast you drove, and how many gallons of gas you purchased, so that you can then repeat the process with another rental, and so on, and then use your own findings as the basis of your comparisons. Perhaps you find that your results are always 20% worse than EPA estimates when you drive in the city, and 10% worse when you drive on the highway. Perhaps you have many mountains and hills where you drive, you drive too fast, you run the air conditioner too cold, or whatever. If you did this with five or more vehicles, and ranked them best to worst from your own findings, and also ranked them best to worst based on the standardized results from the EPA, you will likely find the order to be the same. The vehicle with the best standardized result will likely also have the best result from your own experience with the rental cars. The vehicle with the worst standardized result will likely match the worst result from your rental cars. (This is one of my main points: standardized estimates don't have to be accurate to be useful in making comparisons. The comparisons and decisions you would make with estimates are the same as you would have made with actual results, or customized estimates based on current workloads. Because the rankings are in the same order, they are relevant and useful for making decisions based on those comparisons.) Most people shopping around for a new vehicle do not have the time or patience to do this with rental cars. They can use the EPA-certified standardized results to make a "ball-park" estimate of how much they will spend on gasoline per year, decide only on cars that might go a certain distance between two cities on a single tank of gas, or merely to rank the vehicles being considered. While mileage may not be the only metric used in making a purchase decision, it can certainly be used to help reduce your consideration set and factor in with other attributes, like the number of cup-holders, or leather seats. In this regard, the Storage Performance Council has developed two benchmarks that attempt to reflect normal business usage, similar to the "City" and "Highway" driving measurements.
- SPC-1
SPC-1 consists of a single workload designed to demonstrate the performance of a storage subsystem while performing the typical functions of business critical applications. Those applications are characterized by predominately random I/O operations and require both queries as well as update operations. Examples of those types of applications include OLTP, database operations, and mail server implementations.
- SPC-2
SPC-2 consists of three distinct workloads designed to demonstrate the performance of a storage subsystem during the execution of business critical applications that require the large-scale, sequential movement of data. Those applications are characterized predominately by large I/Os organized into one or more concurrent sequential patterns. A description of each of the three SPC-2 workloads is listed below, as well as examples of applications characterized by each workload.
- Large File Processing: Applications in a wide range of fields, which require simple sequential processing of one or more large files, such as scientific computing and large-scale financial processing.
- Large Database Queries: Applications that involve scans or joins of large relational tables, such as those performed for data mining or business intelligence.
- Video on Demand: Applications that provide individualized video entertainment to a community of subscribers by drawing from a digital film library.
The SPC-2 benchmark was added when people suggested that not everyone runs OLTP and database transactional update workloads, just as the "Highway" measurement was added to address the fact that not everyone drives in the city. If you are one of the customers out there willing to spend the time and resources to do your own performance benchmarking, either at your own data center or with the assistance of a storage provider, I suspect most, if not all, of the major vendors (including IBM, EMC and others), and perhaps even some of the smaller start-ups, would be glad to work with you. If you want to gather performance data on your actual workloads, and use this to estimate what your performance might be with a new or different storage configuration, IBM has tools to make these estimates, and I suspect (again) that most, if not all, of the other storage vendors have developed similar tools. For the rest of you who are just looking to decide which storage vendors to invite to your next RFP, and which products you might like to investigate that match the level of performance you need for your next project or application deployment, the SPC benchmarks might help you with this decision. If performance is important to you, factor these benchmark comparisons in with the rest of the attributes you are looking for in a storage vendor and a storage system. In my opinion, for some people, the SPC benchmarks provide real value in this decision-making process. They are proportionally correct: even if your workload gets only a portion of the SPC estimate, storage systems with faster benchmarks will provide you better performance than storage systems with lower benchmark results. That is why I feel they can be relevant in making valid comparisons for purchase decisions. Hopefully, I have provided enough "food for thought" on this subject to support why IBM participates in the Storage Performance Council, why the performance of the SAN Volume Controller can be compared to the performance of other disk systems, and why we at IBM are proud of the benchmark results in our recent press release. Enjoy the weekend!

technorati tags: IBM, SPC, EMC, Chuck Hollis, fastest, disk, system, SVC, HDD, storage, four corners, read-hit, read-miss, write-hit, write-miss, City, Highway, MPG, OLTP, SPC-1, SPC-2, benchmarks, file, database, video
Tags: disk
Well, this week I am in Maryland, just outside of Washington DC. It's a bit cold here. Robin Harris over at StorageMojo put out this Open Letter to Seagate, Hitachi GST, EMC, HP, NetApp, IBM and Sun about the results of two academic papers, one from Google, and another from Carnegie Mellon University (CMU). The papers imply that the disk drive module (DDM) manufacturers have perhaps misrepresented their reliability estimates, and Robin asks the major vendors to respond. So far, NetApp and EMC have responded. I will not bother to re-iterate or repeat what others have said already, but will make just a few points. Robin, you are free to consider this "my" official response if you like, to post on your blog or point to from mine, whatever is easier for you. Given that IBM no longer manufactures the DDMs we use inside our disk systems, there may not be any reason for a more formal response.
- Coke and Pepsi buy sugar, Nutrasweet and Splenda from the same sources
Somehow, this doesn't surprise anyone. Coke and Pepsi don't own their own sugar cane fields, and even their bottlers are separate companies. Their job is to assemble the components using super-secret recipes to make something that tastes good. Likewise, IBM, EMC and NetApp don't make the DDMs mentioned in either academic study. Different IBM storage systems use one or more of the following DDM suppliers:
- Seagate (including Maxtor, which they acquired)
- Hitachi Global Storage Technologies, HGST (former IBM division sold off to Hitachi)
- Fujitsu
In the past, corporations like IBM were very "vertically-integrated", making every component of every system delivered. IBM was the first to bring disk systems to market, and led the major enhancements that exist in nearly all disk drives manufactured today. Today, however, our value-add is to take standard components, and use our super-secret recipe to make something that provides unique value to the marketplace. Not surprisingly, EMC, HP, Sun and NetApp also don't make their own DDMs. Hitachi is perhaps the last major disk systems vendor that also has a DDM manufacturing division. So, my point is that disk systems are the next layer up. Everyone knows that individual components fail. Unlike CPUs or memory, disks actually have moving parts, so you would expect them to fail more often compared to just "chips". If you don't feel the MTBF or AFR estimates posted by these suppliers are valid, go after them, not the disk systems vendors that use their supplies. While IBM does qualify DDM suppliers for each purpose, we are basically purchasing them from the same major vendors as all of our competitors. I suspect you won't get much more than the responses you posted from Seagate and HGST.
- American car owners replace their cars every 59 months
According to a frequently cited auto market research firm, the average time before the original owner transfers their vehicle -- purchased or leased -- is currently 59 months. Both studies mention that customers have a different "definition" of failure than manufacturers, and often replace the drives before they are completely kaput. The same is true for cars. Americans give various reasons why they trade in their less-than-five-year-old cars for newer models. Disk technologies advance at a faster pace, so it makes sense to change drives for other business reasons, for speed and capacity improvements, lower power consumption, and so on. The CMU study indicated that 43 percent of drives were replaced before they were completely dead. So, even if General Motors estimated their cars last 9 years, and Toyota estimated 11 years, people still replace them sooner, for other reasons. At IBM, we remind people that "data outlives the media". True for disk, and true for tape. Neither is "permanent storage", but rather a temporary resting point until the data is transferred to the next media. For this reason, IBM is focused on solutions and disk systems that plan for this inevitable migration process. IBM System Storage SAN Volume Controller is able to move active data from one disk system to another; IBM Tivoli Storage Manager is able to move backup copies from one tape to another; and IBM System Storage DR550 is able to move archive copies from disk and tape to newer disk and tape. If you had only one car, then having that one and only vehicle die could be quite disrupting. However, companies that have fleet cars, like Hertz Car Rentals, don't wait for their cars to completely stop running either; they replace them well before that happens. For a large company with a large fleet of cars, regularly scheduled replacement is just part of doing business. This brings us to the subject of RAID. No question that RAID 5 provides better reliability than having just a bunch of disks (JBOD). Certainly, three copies of data across separate disks, a variation of RAID 1, will provide even more protection, but for a price. Robin mentions the "Auto-correlation" effect. Disk failures bunch up, so one recent failure might mean another DDM, somewhere in the environment, will probably fail soon also. For it to make a difference, it would (a) have to be a DDM in the same RAID 5 rank, and (b) have to occur during the time the first drive is being rebuilt to a spare volume (see the back-of-envelope sketch after this list).
- The human body replaces skin cells every day
So there are individual DDMs, manufactured by the suppliers above; disk systems, manufactured by IBM and others; and then your entire IT infrastructure. Beyond the disk system, you probably have redundant fabrics, clustered servers and multiple data paths, because eventually hardware fails. People might realize that the human body replaces skin cells every day. Other cells are replaced frequently, within seven days, and others less frequently, taking a year or so to be replaced. I'm over 40 years old, but most of my cells are less than 9 years old. This is possible because information, data in the form of DNA, is moved from old cells to new cells, keeping the infrastructure (my body) alive. Our clients should approach this with a more holistic view. You will replace disks in less than 3-5 years. While tape cartridges can retain their data for 20 years, most people change their tape drives every 7-9 years, and so tape data needs to be moved from old to new cartridges. Focus on your information, not individual DDMs. What does this mean for DDM failures? When one happens, the disk system re-routes requests to a spare disk, rebuilding the data from RAID 5 parity, giving storage admins time to replace the failed unit. During the few hours this process takes place, you are either taking a backup, or crossing your fingers. Note: for RAID 5 the time to rebuild is proportional to the number of disks in the rank, so smaller ranks can be rebuilt faster than larger ranks. To make matters worse, the slower RPM speeds and higher capacities of ATA disks mean that the rebuild process could take longer than for smaller capacity, higher speed FC/SCSI disk. According to the Google study, a large portion of the DDM replacements had no SMART errors to warn that failure was going to happen. To protect your infrastructure, you need to make sure you have current backups of all your data. IBM TotalStorage Productivity Center can help identify all the data that is "at risk", those files that have no backup, no copy, and no current backup since the file was most recently changed. A well-run shop keeps their "at risk" files below 3 percent.
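To put that double-failure exposure in perspective, here is a back-of-envelope sketch; the failure rate, rank size and rebuild window are invented numbers, and the math ignores the auto-correlation effect Robin describes (which would raise the odds):

    # Odds that a second drive in the same RAID-5 rank fails during the rebuild
    AFR=0.03            # assumed annualized failure rate per drive (3%)
    DRIVES=8            # assumed number of drives in the rank
    REBUILD_HOURS=6     # assumed rebuild window; longer for big ATA drives
    echo "scale=6; ($DRIVES - 1) * $AFR * $REBUILD_HOURS / 8760" | bc
    # => .000143, roughly one rebuild in 7,000 hitting a second failure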
So, where does that leave us?
- ATA drives are probably as reliable as FC/SCSI disk. Customers should choose which to use based on performance and workload characteristics. FC/SCSI drives are more expensive because they are designed to run at faster speeds, required by some enterprises for some workloads. IBM offers both, and has tools to help estimate which products are the best match for your requirements.
- RAID 5 is just one of the many choices of trade-offs between cost and protection of data. For some data, JBOD might be enough. For other data that is more mission critical, you might choose to keep two or three copies. Data protection is more than just using RAID; you need to also consider point-in-time copies, synchronous or asynchronous disk mirroring, continuous data protection (CDP), and backup to tape media. IBM can help show you how.
- Disk systems, and IT environments in general, are higher-level concepts to transcend the failures of individual components. DDM components will fail. Cache memory will fail. CPUs will fail. Choose a disk systems vendor that combines technologies in unique and innovative ways that take these possibilities into account, designed for no single point of failure, and no single point of repair.
So, Robin, from IBM's perspective, our hands are clean. Thank you for bringing this to our attention and for giving me the opportunity to highlight IBM's superiority at the systems level.

technorati tags: IBM, Seagate, Hitachi, HGST, EMC, NetApp, HP, HDS, Sun, Google, CMU, DDM, Fujitsu, MTBF, MTTF, AFR, ARR, JBOD, RAID, Tivoli, SVC, DR550, CDP, FC, SCSI, disk, tape, SAN
Tags: bc, infrastructure, disk, lifecycle, tape, san
Well, it's Tuesday again, and you know what that means? IBM Announcements! After a much-needed vacation in Cancun, Mexico, and Lake Havasu and Sedona, Arizona, I am glad to be back at work! This week, I was visiting clients in the Los Angeles area.
- IBM FlashSystem 9100
IBM's latest addition to its lineup of All-Flash Arrays is the FlashSystem 9100.
There are actually two models: the 9110 (model AF7) has 8-core processors, and the 9150 (model AF8) has 14-core processors. Both models are 2U 19-inch shelves with 24 drives on the front, with two control node canisters in the back. The term "FlashSystem 9100" applies to both 9110 and 9150 models.
Each canister has two processors, 64GB to 768GB of cache memory, an on-board 1GbE port for management, four 10GbE ports for Ethernet, and three HIC slots for I/O adapters, which can be any mix of quad-port FC cards, dual-port 25GbE Ethernet cards, or 12Gb SAS cards for expansion drawers.
For drives, you can have any mix of FlashCore Modules (FCM) or Industry-Standard NVMe (ISN) drives. The FlashCore modules are similar to the FlashCore boards in the FlashSystem 900, including Variable-Striped RAID, advanced flash management, heat binning, health separation, hardware-embedded encryption and compression.
These FCM are packaged into standard NVMe SSD form-factor, with 4.8, 9.6 and 19.2 TB capacities. The Industry-Standard NVMe drives come in 1.92, 3.84, 7.68 and 15.36 TB capacities to offer additional price/capacity options to clients.
A fully maxed-out system with twenty-four 19.2 TB FCM modules provides approximately 400 TB of usable capacity; combined with 5:1 data footprint reduction from deduplication and compression, that can provide up to an effective 2 PB in as little as 2U of rack space!
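The arithmetic behind that claim is easy to check. A quick back-of-envelope calculation in Python, using only the figures quoted above:

    # Back-of-envelope check of the "2PB in 2U" claim, using figures from above
    modules = 24
    raw_tb = modules * 19.2        # 460.8 TB raw across 24 FCM modules
    usable_tb = 400                # approximate, after RAID and formatting overhead
    effective_tb = usable_tb * 5   # 5:1 data footprint reduction
    print("Raw: %.1f TB, usable: %d TB, effective: %d TB (~%d PB)"
          % (raw_tb, usable_tb, effective_tb, effective_tb // 1000))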
The NVMe and FlashCore technology truly accelerates performance. Latencies as low as 100 microseconds are 2.5x lower than competitive offerings. Each control enclosure can deliver up to 2.5 Million IOPS, and a four-way cluster up to 10 million IOPS in just 8U!
You can mix and match FCM and ISN drives in the same controller, but FCM and ISN drives must be in their own separate RAID groups. To use Distributed RAID6 (DRAID6), you need at least six drives.
IBM has made a "Statement of Direction" that these models are NVMe-OF hardware ready and will support both FC-NVMe and NVMe-OF over Ethernet by year end. Part of this involves changes to server-side software, including various operating systems, device drivers, and multi-pathing drivers.
The FlashSystem 9100 supports up to 40U of expansion drawers, over 12Gb SAS, in two sizes: a 2U drawer for 24 SFF drives, and a 5U drawer for 92 SFF/LFF drives. Each FlashSystem 9100 can support up to 760 drives. These expansion drawers are not NVMe; the Solid-State Drives (SSD) inside them use standard SAS. Consider using Easy Tier sub-LUN automated tiering to move hot data up to the FCM/ISN drives, and colder data down to these SAS-based SSDs.
Even though it doesn't have a "V" in its name, the FlashSystem 9100 runs Spectrum Virtualize, so you can also virtualize other storage behind it. Over 400 different storage devices from leading storage vendors are supported. The FlashSystem 9100 can be virtualized behind SVC or FlashSystem V9000.
FlashSystem 9100 can also cluster with Gen2 and Gen2+ models of the Storwize V7000 and V7000F controllers. You can connect up to four of any of these into a single cluster, supporting up to 3,040 drives.
The FlashSystem 9100 offers all of the features you have come to love from the rest of the Spectrum Virtualize products: data deduplication and compression, encryption, high-availability guarantee, data footprint reduction guarantee, hardware refresh option after three years, storage utility pricing, and IBM Storage Insights support.
IBM has no plans to withdraw either the existing FlashSystem V9000 or the Storwize V7000/F models anytime soon. Both continue to be available for purchase.
To learn more, see [IBM FlashSystem 9100] announcement letter, and fellow blogger Barry Whyte's post [Introducing the FlashSystem 9100 NVMe with FCM].
- IBM FlashSystem 9100 Multi-Cloud solutions
To complement the hardware features of the FlashSystem 9100, IBM has come up with three Multi-cloud solutions.
- Multi-Cloud Solution for Data Reuse, Protection and Efficiency - this combines Spectrum CDM with Spectrum Protect Plus to take snapshots of volumes on the FlashSystem 9100. These snapshots are not just for data protection; they can also be "reused" for other purposes, like dev/test, DevOps, or analytics.
- Multi-Cloud Solution for Business Continuity and Data Reuse - combines Spectrum CDM with Spectrum Virtualize in the Public Cloud, allowing you to take snapshots to the IBM Cloud for disaster recovery. The snapshots can be used in the cloud, or copied back to the same or a different data center.
- Multi-Cloud Solution for Private Cloud Flexibility and Data Protection - combines IBM Cloud Private, Spectrum CDM, and Spectrum Connect to support clients' efforts to re-factor their applications with Docker containers and Kubernetes. The IBM FlashSystem 9100 can be used as persistent storage for containerized applications.
To learn more, see [IBM Multi-Cloud solutions] announcement letter.
- IBM Spectrum Virtualize 8.2 release
This release applies only to the Storwize V7000/F and the new FlashSystem 9100 models, and provides support for iSCSI Extensions for RDMA (iSER) on the 25GbE NIC cards. If you want to cluster existing Storwize V7000/F models with the new FlashSystem 9100 models, all of them need to be running at least the v8.2.0 release.
Lower latency and higher bandwidth requirements can be addressed by using RDMA to carry iSCSI. iSER is an interconnect protocol that allows iSCSI to run on top of RDMA. RDMA itself can be implemented using RoCE (RDMA over Converged Ethernet) or iWARP (Internet Wide-Area RDMA Protocol); iSER works regardless of which of these technologies is used underneath.
To learn more, see [ IBM Spectrum Virtualize Software V8.2] announcement letter.
- IBM Storage Utility Pricing
The "Storage Utility" pricing available for many of IBM's other products has been extended to include the IBM FlashSystem 9100 and IBM Cloud Object Storage.
Basically, this is a variable-priced, usage-based lease. Say you lease 500TB of capacity but only use 150TB: for the first few months, you pay only for 150TB. Later, as your usage grows to, say, 200TB, your monthly payment grows with it. The price can go up or down. At the end of the lease, typically 36 or 60 months, you have a choice: give the equipment back, or pay the difference.
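As a sketch of the billing arithmetic (the rate per terabyte here is hypothetical; actual terms are defined in the offering contract):

    # Illustrative monthly billing under a usage-based lease. The $10/TB rate
    # is hypothetical; actual pricing is set in the offering contract.
    def monthly_charge(used_tb, leased_tb=500, rate_per_tb=10.0):
        """Pay monthly for actual usage, up to the leased capacity."""
        return min(used_tb, leased_tb) * rate_per_tb

    for used_tb in (150, 200, 180):   # usage can go up or down month to month
        print("Used %d TB -> $%.2f this month" % (used_tb, monthly_charge(used_tb)))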
To learn more, see [IBM Storage Utility offerings for IBM Cloud Object Storage] announcement letter.
IBM is pleased to be on the leading edge of NVMe technology!
technorati tags: FlashSystem 9100, Multi-Cloud, Spectrum Virtualize
|
Well, it's Tuesday again, and you know what that means? IBM announcements!
Today's announcements are all about the Storwize family, IBM's market-leading Software Defined Storage offerings. Having sold over 55,000 systems, and managing over 1.6 Exabytes of data, IBM continues to be the #1 leader in storage virtualization solutions. The Storwize family consists of the SAN Volume Controller (SVC), Storwize V7000, Storwize V7000 Unified, Flex System V7000, Storwize V5000, Storwize V3700 and V3500.
- SAN Volume Controller 2145-DH8
The new 2145-DH8 model is a complete repackaging of this popular storage system. The previous model, the 2145-CG8, was a 1U-high x86 server per node, and each node required a separate 1U-high UPS to provide battery protection for its cache. Nobody liked this. The new 2145-DH8 is instead a 2U-high node with two hot-swappable batteries, eliminating the need for a UPS altogether. Thus, an SVC node-pair using 2145-DH8 models takes up the same 4U of space, but with fewer cables. The SVC can now also run from standard office 110/240-volt power sources.
The new model sports an 8-core processor with 32GB RAM. Since these are 2-socket servers, IBM offers the option to add a second 8-core processor and an additional 32GB of RAM to help boost Real-time Compression. Each node can optionally have one or two hardware-assisted compression cards, which use the Intel QuickAssist chip to boost compression performance.
Real-time Compression was always, in fact, real-time: performed in-line with the read/write I/O process, at latency comparable to uncompressed data for applications. But on older models the compression process was entirely software-based, consuming CPU resources and lowering the maximum IOPS of the solution. With the added cores, added RAM, and hardware-assisted compression chips, IBM resolves that concern. In fact, the new 2145-DH8 with compression can deliver more IOPS than an older 2145-CG8 without compression.
The previous model 2145-CG8 allowed you to put up to four small SSD drives in the node itself, which were treated the same as external Flash drives for purposes of having a high-speed storage pool for select volumes, or automated sub-LUN tiering with Easy Tier. The new model 2145-DH8 allows you to attach up to 48 Solid State Drives (SSD) via 12Gb SAS cables. These are housed in the new 2U-high 24F enclosures, which can offer up to 38.4 TB of Flash per SVC I/O group.

IBM also re-designed the host/device ports to use Hardware Interface Card (HIC) slots. In the 2145-CG8, you had four FCP ports and two 1GbE Ethernet ports, with options to add two 10GbE Ethernet ports or four additional FCP ports. If you had a mostly FCoE or iSCSI environment, you didn't need the FCP ports; if you were mostly an FCP Storage Area Network (SAN) environment, most of the Ethernet ports went unused. To solve this, the 2145-DH8 allows up to six HIC cards that can each be FCP, Ethernet, or SAS. There are also three fixed 1GbE Ethernet ports, which can be used for iSCSI and administration.
If you have SVC today, you can upgrade non-disruptively by either swapping out your current SVC engines with the new 2145-DH8 engines, or you can add the new 2145-DH8 engines to your existing SVC cluster. Either way, there is no outage to your applications!
To learn more, see the [Announcement letter: SAN Volume Controller Storage Engine DH8].
- New Storwize V7000 hardware
This is the next generation of the popular Storwize V7000. The previous generation had a 4-core processor and 8GB RAM per canister. The new model has an 8-core processor with 32GB of RAM per canister, with the option to double these to boost Real-time compression. There are two canisters per control enclosure, which gives you 64GB to 128GB of RAM per Storwize V7000 I/O group.
The new Storwize V7000 comes with one hardware-assisted compression chip on the mother board of each canister, with the option to add a second chip per canister.
Each canister offers three HIC slots, which can be used for the additional hardware-assisted compression chip, or for FCP or Ethernet ports.
To accommodate these HIC slots, new canisters were needed. Instead of the flat, wide canisters stacked top and bottom, we now have taller, thinner canisters that sit side by side. This side-by-side design is similar to our existing Storwize V5000 and V3700 models.
The previous model could support up to 9 expansion enclosures per control enclosure. The new Storwize V7000 can have up to 24 drives in its control enclosure and attach up to 20 expansion enclosures, allowing up to 504 drives per control enclosure (24 plus 20 x 24), and up to a maximum of 1,056 drives per Storwize cluster.
If you have previous models of Storwize V7000, you can add the new Storwize V7000 into the same cluster, or virtualize the previous storage for migration purposes.
To learn more, see the [Announcement letter: New Storwize V7000].
- IBM Storwize Family Software V7.3.0
The new software brings new capabilities to both the new-generation hardware and the older models, so people with existing gear can benefit as well.
In prior releases, the sub-LUN automated tiering was limited to two levels: Flash and HDD. This lumped all 15K, 10K and 7200 RPM drives into a common HDD category. In the new v7.3.0 code, you can now have three levels: Flash, Enterprise HDD, and Nearline HDD, or two HDD levels: Enterprise and Nearline. The Enterprise level combines 15K and 10K RPM drives, similar to what is done on the IBM System Storage DS8000 disk systems.
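As a minimal sketch of the classification just described (the attribute names are illustrative, not Spectrum Virtualize internals):

    # Sketch of the three-level Easy Tier classification described above.
    # Attribute names are illustrative, not product internals.
    def easy_tier_level(drive_type, rpm=None):
        if drive_type == "flash":
            return "Flash tier"
        if rpm in (15000, 10000):
            return "Enterprise HDD tier"   # 15K and 10K RPM drives combined
        return "Nearline HDD tier"         # 7200 RPM drives

    print(easy_tier_level("flash"))        # Flash tier
    print(easy_tier_level("hdd", 10000))   # Enterprise HDD tier
    print(easy_tier_level("hdd", 7200))    # Nearline HDD tier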
The new code is also able to balance your storage pools, and can be used with uniform or mixed storage pools to eliminate performance hot spots.
The new code has also been enhanced to detect the hardware-assisted compression chips on the new SVC and Storwize V7000 models, and use them if available.
For the Storwize V3700 and V5000 models, the new code allows up to nine expansion enclosures per control enclosure. Previously, the V3700 allowed only four expansions, and the V5000 only six per control enclosure. The V3700 can now support up to 240 drives, and the V5000 up to 480 drives.
To learn more, see the [Announcement letter: Storwize Family Software v7.3.0].
- IBM Storwize V7000 Unified File Module software v1.5
For Storwize V7000 Unified clients, there is new software for the File Modules that provide NFS, CIFS, FTP, HTTPS and SCP protocol capability. The new v1.5 code adds support for NFS v4 and SMB 2.1. Most NFS users are still on NFS v3, but about 20 percent are using NFS v4, which offers stateful access. SMB 2.1 for CIFS was introduced by Microsoft in Windows 7 and Windows Server 2008 R2.
Deterministic ID mapping allows you to map Windows userids to UNIX/Linux owner and group ID numbers. In the past, the problem was that this mapping was different on each machine, so people often had to stand up a Windows Services for UNIX (SFU) server to provide consistent ID mapping. With the v1.5 code, you no longer have to do this: the deterministic ID mapping can now replicate the mapping to each machine without an SFU server.
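To illustrate what "deterministic" means here: every node derives the same UNIX id from the Windows identity by formula, rather than handing out ids first-come, first-served. The RID-plus-offset scheme below is a common approach to this problem (Samba's idmap_rid works this way) and is shown only as an illustration; it is not IBM's published algorithm:

    # Sketch of deterministic ID mapping: derive the UNIX uid from the
    # Windows SID by formula, so every node computes the same answer without
    # consulting a central SFU server. The base offset and RID-based scheme
    # are illustrative, not IBM's published algorithm.
    def sid_to_uid(sid, base=10000000):
        rid = int(sid.rsplit("-", 1)[1])   # RID: last component of the SID
        return base + rid

    # Any machine mapping the same user computes the same uid:
    print(sid_to_uid("S-1-5-21-3623811015-3361044348-30300820-1013"))  # 10001013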
Active Cloud Engine allows up to ten Storwize V7000 Unified systems to be connected across distance to form a single global namespace. Previously, WAN caching was restricted to a single site having write capability, while the others were read-only. In the v1.5 release, IBM now supports multiple independent writers at different locations on the same fileset.
Security enhancements include multi-tenancy, configurable password policies, session policies, and hardened boot and SSH configurations. With NFS v3/v4, you can now use [Kerberos] for security.
Finally, I am pleased to see that we now have Cinder support for files on the Storwize V7000 Unified in the OpenStack Havana release that came out last month. The OpenStack Cinder interface can assign LUNs to virtual machines, and the new Havana release allows NAS systems to dole out files that act as LUNs, such as OVA or VMDK files. The advantage is that these files can be managed by Active Cloud Engine, cached locally across the global namespace, and placed on appropriate storage tiers by policy; inactive virtual machine images can even be migrated to less expensive disk or tape.
To learn more, see the [Announcement letter: Storwize Family Software v7.3.0].
You can learn more about the Storwize family at the [IBM Edge Conference], May 19-23, in Las Vegas. I'll be there!
technorati tags: IBM, Announcements, SAN Volume Controller, SVC, Storwize, Storwize V7000, Flex System V7000, Storwize V5000, Storwize V3700, 2145-DH8, hardware-assisted compression, Real-time Compression, Intel QuickAssist, New Storwize, HIC, Easy Tier, Storwize V7000 Unified, File Modules, OpenStack, OpenStack Havana, OpenStack Cinder, multiple-writer, independent-writer, Active Cloud Engine, Windows SFU, Kerberos, Storwize family, #ibmEdge, Las Vegas
|