This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years at IBM, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI Market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist during his time in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private; he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value solution, a part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
The comic combines the recent popularity of cookbooks that help parents get their children to eat more vegetables, such as Jessica Seinfeld's [Deceptively Delicious: Simple Secrets to Get Your Kids Eating Good Food], with the popularity of the latest Batman movie, [The Dark Knight]. To be fair, I have not reviewed the recipe book, but certainly being the wife of comedian Jerry Seinfeld and mother of his children sufficiently qualifies her to write such a book. I did have the pleasure of seeing this movie at an IMAX movie theater in Hartford, CT a few weeks ago. I highly recommend it. (See also my friend Pam's awesome [review of this movie].) Some have argued the movie franchise has "gone dark" from the previous Batman movies and may not be appropriate for children. Hiding vegetables in meals may not be the right thing for children either.
Unlike IBM, which repeatedly delivers unique and innovative new products to the marketplace, Microsoft pulls the old ["bait and switch"] routine. In a series of hidden-camera interviews, Microsoft asks skeptical people who have never used the Microsoft Vista operating system for their opinions. As expected, all express concerns about problems they have heard about Microsoft's new OS, from friends, colleagues or Apple television advertisements. On a scale of 0 (won't touch it) to 10 (can't wait to have it), the average skeptic rated Vista with a paltry 4.4 score.
The Microsoft interviewers then show them the new "Microsoft Mojave" operating system, and ask these same skeptics for their opinions, of which many (35 out of 140 by one account) express that they like it, finding this new OS useful and intuitive. The interviewers then explain that this Mojave OS was nothing more than the existing Vista OS already in the marketplace. The average rating for Mojave OS was a significantly higher 8.5 score. Just like hiding spinach in a meal to get your kids to eat it. They tricked you, and you said you liked it!
Perhaps the key take-away is whom prospective customers should listen to when evaluating a new product. Microsoft is reasonable in feeling that customers should not base their opinions about Vista solely on lopsided Apple television commercials. Apple, Inc. is one of Microsoft's primary competitors. I feel, however, that if you have friends or colleagues who have shared their hands-on experiences with you, that indeed should carry much higher weighting.
Nothing, of course, beats personal experience. If you want to try out one of IBM's latest products for yourself, please contact your local IBM Business Partner or IBM sales representative.
No post today. I will be joining the majority of IBMers in Tucson for "Days of Caring" held annually by the [United Way of Tucson and Southern Arizona]. IBM has been doing this for years, and we are joined by volunteers from other local businesses, including HealthNet, Wells Fargo bank, Texas Instruments, KVOA local NBC affiliate, 94.9 MixFM radio, and others.
The "days" involve a kick-off last week (Sep 19) and two days of helping local charities (Sep 24 and 27). We are split into teams and assigned out to help fix up old buildings, clean out gutters, and re-paint walls. My team will be sorting canned goods at the local [Community Food Bank], and assembling boxes of items to be given out to needy families.
This week, I am attending the [InterConnect Conference] in Las Vegas, Feb 21-25, 2016. This is IBM's premier Cloud & Mobile conference for the year.
Here is my recap of the lunch-time sessions Wednesday afternoon.
5663A Beyond Hyperconvergence to a Hyperscale Converged Infrastructure
Bernard "Bernie" Spang, IBM, presented. Organizations continue to face challenges with efficiently managing unprecedented volumes and varieties of data. Meanwhile, new frameworks such as Spark and Hadoop are emerging to efficiently exploit that data. These offerings have the potential to deliver significant benefits, but they can also increase data center complexity and cluster sprawl.
Bernie covered the evolution of Hyperconvergence to a Hyperscale converged technology. By extending software-defined infrastructure concepts to a converged application- and data-optimized fabric, IBM is enabling organizations to reduce costs and accelerate time to insight by efficiently storing, analyzing and protecting their data.
Hyperconvergence is the concept of running hypervisor software on storage-rich servers. Software-only versions include IBM Spectrum Accelerate and VMware VSAN, whereas pre-built systems are available from Nutanix, Simplivity and others.
But not everything is x86 or Hypervisor based. Some applications are better served on bare metal, while others might be better served on containers like Docker or LXC. IBM Spectrum Scale provides for all of these additional platforms, works on both x86 and POWER systems, and can handle storage tiering from flash to disk to tape. It can work across locations, representing any mix of on-premises and off-premises facilities.
1841A IBM Cloud Storage Options
I was pleased to have a standing-room only crowd attend my session!
The term "Cloud Storage" can be misleading. I spelled out four unique types of storage:
Ephemeral Storage - storage that exists only as long as the Virtual Machine using it is running. This is ideal for boot volumes and temporary work space.
Persistent Storage - typically block/transactional/high-speed storage that continues to live beyond the life of the Virtual Machine.
Hosted Storage - files, documents and backup copies that are read/written in the Cloud
Reference Storage - files and objects that are written once, and never modified thereafter, such as archives, financial records, and photographs. Since the term Write-Once-Read-Many (WORM) applies only to tape and optical media, the IT Industry now uses Non-Erasable-Non-Rewriteable (NENR) to include flash and disk media protected in some manner through software to avoid tampering.
The first two I refer to as "Storage for the Computer Cloud" and the latter two I refer to as "Storage as the Storage Cloud".
I also discussed the differences between block, file and object access, and why different Cloud storage types use different access methods.
I wrapped up the session covering the various IBM storage solutions that we offer for all four Cloud Storage types.
Technology Review has a great 6-minute video showing how the PowerTune system works in the ['self-tuning' guitar].
As with any self-tuning equipment, there are three essential parts.
Measurement. In the case of the guitar, small sensors identify the current note based on string tension.
Response. Based on the measurement, the self-tuning system either decides that there is no more to do, or to take specific action. In the case of this guitar, the action would be to loosen or tighten the string.
Action. The action taken that is expected to get closer to the desired result. In this case, tiny motors inside the handle turn the thumbscrews to loosen or tighten the strings accordingly.
These are part of a "closed-loop design", as it is called in [Control Theory]. After the action in step 3 is taken, the system goes back to step 1, takes a new measurement, and determines a new response. This could mean that the string is tightened and loosened by ever smaller amounts until it is close enough to the desired accuracy, in this case an impressive two [cents].
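For the curious, the three-step closed loop can be sketched in a few lines of Python. This is just an illustrative toy, not how the PowerTune system is actually implemented: the measure() and adjust() callbacks stand in for the guitar's real sensors and motors, and all the numbers are made up.

```python
import math

def cents_off(measured_hz, target_hz):
    """Pitch error in cents (one cent is 1/100 of a semitone)."""
    return 1200 * math.log2(measured_hz / target_hz)

def tune_string(measure, adjust, target_hz, tolerance_cents=2.0, max_steps=50):
    """Closed loop: measure (step 1), respond (step 2), act (step 3), repeat."""
    for _ in range(max_steps):
        error = cents_off(measure(), target_hz)   # step 1: measurement
        if abs(error) <= tolerance_cents:         # step 2: response (close enough?)
            break
        adjust(-error)                            # step 3: action (tighten/loosen)
    return cents_off(measure(), target_hz)

# Toy simulation: the string starts flat of A440, and each adjustment only
# corrects about 60% of the requested amount (motors are imperfect), so the
# loop converges by ever smaller corrections, just as described above.
state = {"hz": 425.0}
def measure():
    return state["hz"]
def adjust(correction_cents):
    state["hz"] *= 2 ** (0.6 * correction_cents / 1200)

residual = tune_string(measure, adjust, target_hz=440.0)
print(f"final error: {residual:+.2f} cents")
```

Notice that even though each motor action is imperfect, the loop still converges, because each new measurement catches whatever error remains. That is the whole point of a closed-loop design.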
On the server side, IBM has offered this for years. For example, for z/OS applications on System z mainframes, the [Workload Manager (WLM) offers a "goal mode"] that allows you to set desired results for your business applications, for example, how quickly they respond in processing transactions. WLM measures the response time of the transactions, determines an appropriate response if any, and takes action to shift processor cycles (MIPS) or RAM to help out the workloads with the highest priority, in some cases stealing cycles and RAM away from lower-priority tasks.
For storage, we have IBM TotalStorage Productivity Center. It can scan for file systems over 90 percent full, for example, determine an appropriate response based on policies, and take action to expand the file system to a larger size. This may involve dynamically expanding the LUN that the file system sits on, a feature available on IBM SAN Volume Controller, DS8000 series, DS4000 series and N series disk systems. This is the kind of closed-loop design that can help eliminate those pesky phone calls at 3 AM.
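In rough pseudocode terms, such a policy loop might look like the sketch below. To be clear, the threshold, growth factor and expand_lun() call are my own invented stand-ins for illustration, not actual Productivity Center interfaces.

```python
THRESHOLD = 0.90      # policy: act when a file system is over 90 percent full
GROWTH_FACTOR = 1.25  # policy: expand the underlying LUN by 25 percent

def scan_and_expand(filesystems, expand_lun):
    """Measure utilization, decide per policy, act by expanding the LUN."""
    actions = []
    for fs in filesystems:
        utilization = fs["used_gb"] / fs["size_gb"]        # measurement
        if utilization > THRESHOLD:                        # response
            new_size = round(fs["size_gb"] * GROWTH_FACTOR)
            expand_lun(fs["name"], new_size)               # action
            actions.append((fs["name"], new_size))
    return actions

# Example: only /db is over the 90 percent threshold, so only it grows.
filesystems = [
    {"name": "/db",   "size_gb": 100, "used_gb": 95},
    {"name": "/logs", "size_gb": 100, "used_gb": 50},
]
expanded = scan_and_expand(filesystems, lambda name, gb: None)
print(expanded)  # [('/db', 125)]
```

The same measure/respond/act shape from the guitar example applies here; only the sensor (capacity scan) and the actuator (LUN expansion) differ.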
But why focus on just storage alone? Combining servers and storage into a higher-level closed-loop design is accomplished with [IBM Tivoli Intelligent Orchestrator] and [IBM Tivoli Provisioning Manager]. In this combo, Orchestrator measures and responds, and can invoke Provisioning Manager workflows to take action. Workflows are like scripts on steroids. Unlike normal scripts, which run on a single machine, workflows can communicate with multiple servers, storage and even networking gear to take the appropriate actions on each of those machines, like installing updated software, carving a new LUN, or defining a new SAN zone.
The products are well integrated with TotalStorage Productivity Center for the storage aspects.
Before acquisition, Diligent offered only software. The task of putting this software on an appropriate x86 server with sufficient memory and processor capability was left as an exercise for the storage admin. With the TS7650G, IBM installs the ProtecTIER software on the fastest servers in the industry, the IBM System x3850 M2 and x3950 M2. This eliminates having the storage admins pretend that they have hardware engineering degrees.
Before acquisition, the software worked only on a single system. IBM was able to offer multiple configurations of the TS7650G, including a single-controller model as well as a clustered dual-controller model. The clustered dual-controller model can ingest data at an impressive 900 MB/sec, which is up to nine times faster than some of the competitive deduplication offerings.
Before acquisition, ProtecTIER emulated DLT tape technology. This limited its viability, as the market share for DLT has dropped dramatically, and continues to dwindle. Most of the major backup software packages support DLT as an option, but going forward this may not be true much longer for new tape applications. IBM was able to extend support by adding LTO emulation on the TS7650G gateway, future-proofing this into the 21st Century.
At last week's launch, covering so many products with so few slides, this announcement was shrunken down to a single line, "Store 25 TB of backups onto 1 TB of disk, in 8 hours", and perhaps a few people missed that this was actually covering two key features.
With deduplication, the TS7650G might get up to 25 times reduction on disk. If you back up a 1 TB database that changes only slightly from one day to the next, once a day for 25 days, it might take only 1 TB, or so, of disk to hold all the unique versions, as most of the blocks would be identical, rather than 25 TB on traditional disk or tape storage systems. The TS7650G can manage up to 1 PB of disk, which could represent in theory up to 25 PB of backup data.
With an ingest rate of 900 MB/sec, the TS7650G could ingest 25 TB of backups during a typical 8 hour backup window.
The 25 TB of the first may not necessarily be the 25 TB of the second, but the wording was convenient for marketing purposes, and a comma was used to ensure no misunderstandings. Of course, depending on the type of application, the frequency of daily change, and the backup software employed, your mileage may vary.
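A quick back-of-the-envelope check shows how the two claims line up, using decimal units (1 TB = 1,000,000 MB) the way storage vendors typically do:

```python
INGEST_MB_PER_SEC = 900     # clustered dual-controller ingest rate
BACKUP_WINDOW_HOURS = 8     # typical nightly backup window
DEDUP_RATIO = 25            # best-case 25:1 deduplication

# 900 MB/sec for 8 hours is just under 26 TB ingested in the window.
ingested_tb = INGEST_MB_PER_SEC * BACKUP_WINDOW_HOURS * 3600 / 1_000_000

# At a 25:1 reduction, that lands on roughly 1 TB of physical disk.
disk_needed_tb = ingested_tb / DEDUP_RATIO

print(f"Ingested in window: {ingested_tb:.1f} TB")    # about 25.9 TB
print(f"Disk after dedup:   {disk_needed_tb:.2f} TB") # about 1.04 TB
```

So the "25 TB in 8 hours" and "25 TB onto 1 TB of disk" figures are each individually consistent with the published specs, even if they describe two different 25 TBs.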
My IBM colleague Marissa Benekos brought her hand-held video camera to the [Storage Networking World] conference in Orlando, Florida. I am not there, as I had a conflict with another conference going on here in Tucson, so am relying on Marissa to feed me information to blog about.
In this segment, she interviews "booth babe" David Bricker. I've known David a long time, and if you are there at the conference, tell him I sent you to visit him at the IBM booth.
David Bricker shows off some of the IBM System Storage product line at SNW in this YouTube video (2 minutes).
Sadly, I can't be in two places at once. SNW is a great conference to attend!
On Wednesday, I walked through the gardens of [The Grotto] on Sandy Blvd, ate a German lunch at [The Rheinelander], then visited the [Crown Point Vista House] along the [Columbia River Gorge]. There were several fabulous waterfalls that could be seen from the parking area without hiking into the wilderness. We wouldn't want to encounter a bear in the woods, or a cow in the field!
Afterwards, I drove to the [Timberline lodge] at the peak of Mt. Hood to watch the snow fall and have dinner and drinks. This is the lodge featured in the movie ["The Shining"].
Thursday was a Spa day, which I spent relaxing at the pool and sauna. In the evening, I had dinner at [Henry's Tavern], and then shopped at [Powell's Books].
In the afternoon, Rafael, Mo and I explored Portland's waterfront and various bridges via [Segway tour]. The cherry blossoms along our path were in full bloom. If you have not ridden on one of these Segway scooters, they are a lot of fun!
On Saturday, Portland held their [Saturday Market] with arts and crafts for sale. This is similar to Tucson's 4th Avenue Street Fair. The difference is that the "Saturday Market" occurs every Saturday of the year, while Tucson's 4th Avenue Street Fair occurs only twice per year. The weather was very nice, so many of the locals were in t-shirts and shorts. A live concert by [Grupo Condor] was playing on the main stage.
I walked past the [Voodoo Donuts store]. There was a long line to get in. A woman leaving the store carrying a pink donut box complained she waited 2 hours just to spend $28 for a dozen donuts. The magic is in the hole!
Getting out of the hustle and bustle of the Saturday Market, I had some green tea at the [Lan Su Chinese Garden]. A sister city to Portland is Suzhou, China, and this garden was very peaceful to walk through.
I went back to Powell's Books, did some shopping for shoes at [Dr. Martens], and had some pizza and salad at [Sizzle Pie] next door.
Nearly everything was closed on Easter Sunday, so I went down to the [TulipFest at the Wooden Shoe Tulip Farm] in Woodburn, OR. This was the opening weekend, with over 40 acres of flowers to walk through, various food carts, wine tasting, and rides for the kids.
Getting back to Tucson proved to be a bit challenging. The flight from Portland to San Francisco was delayed due to fog, so we got re-routed to Seattle, then back to Los Angeles, and finally to Tucson.
Today I spoke at the IBM Think Green Roadshow in Phoenix, Arizona. This is just one stop of a 15-city tour to help make people aware of Green data center issues. Here is the schedule for the remaining cities. Contact your local IBM rep for details.
Victor Ferreira was our moderator and host. He is the site-level executive for the 2000 IBM employees in the Phoenix area, and manages the Public Sector for our Western region.
The first speaker was Dave McCoy, IBM principal in our Data Center services group. He explained IBM's Project Big Green and the Energy Efficiency Initiative, and went into details on how IBM can act as general contractor to design, plan and build the ideal Green Data Center for you. IBM can also retrofit existing buildings with new technologies like stored cooling, optimized airflow assessments, and modular data center floorspace. Not related to energy, but still important to our environment, is IBM Asset Recovery Services, through which IBM can take all those old PC monitors, keyboards and other outdated equipment, refurbish them or melt them down to recapture useful metals and plastics, and dispose of the rest in an environmentally friendly, non-toxic manner.
I was the second speaker, covering "How to get it done". While Dave covered the issues and technologies available, I explained how to put it all into practice. This includes IT systems assessments, health audits, and thermal profiling. Using server and storage virtualization, you can increase resource utilization and reduce energy waste. I also covered IBM's CoolBlue product line, which includes the IBM PowerExecutive software to monitor your IT environment, and the "Rear Door Heat Exchanger" that uses chilled water to remove as much as 60% of the heat coming out of the back of a server rack, greatly reducing hot spots on the data center floor and allowing you to run the entire room at warmer, less expensive temperatures.
On the server side, I covered IBM's System z mainframe and the BladeCenter as examples of how innovative technologies can be used to run more applications with less energy. The new System p570, based on the energy-intelligent POWER6 processor, has twice the performance for the same amount of power as its POWER5 predecessor. On the storage side, I explained how Information Lifecycle Management (ILM), storage virtualization, and the use of a blended disk and tape environment can greatly reduce energy costs.
Reps from our many technology partners Eaton, APC, Schneider Electric, Liebert, and Anixter were there to support this event.
The session ended with a Q&A panel, with Dave McCoy, myself, and Greg Briner from IBM Global Financing. IBM is able to offer creative "project financing" that can often match the actual monthly savings, resulting in net zero cost to your operational budget, with payback periods as little as 2.5 years.
To learn more about IBM's efforts to help clients create "Green" data centers, click Green Data Center.
Well, I have left Japan, and while everyone else is enjoying the Super Bowl, I am now in Australia, at another conference. Today I had the pleasure to hear filmmakers talk about their successes, and how IBM helps the movie industry.
At one extreme was Khoa Do, independent filmmaker. After acting in movies alongside Michael Caine and Billy Zane, he decided to become his own director. He started a project to help seven disadvantaged youths from a poor, drug-ridden section of Sydney by having them act in his first full-length film. Armed with only an IBM laptop and a small budget, he made the film "The Finished People", which earned critical acclaim.
The film was a success, and many of the disadvantaged youths have gone on to act in other movies. In 2005, Khoa Do was named "Young Australian of the Year".
Thanks to IBM technology, filmmaking is now accessible to a wider number of aspiring directors. It is no longer necessary to be part of a large film studio with a multi-million-dollar budget to tell your story.
At the other extreme was Xavier Desdoigts, director of technical operations at Animal Logic, the Computer Graphics (CG) arthouse that produced special effects for movies like "The Matrix", "House of Flying Daggers" and "World Trade Center". They started by producing digital effects for TV commercials, like this one for Carlton Draught Beer.
With the support of a large film studio and multi-million dollar budget, Animal Logic now boasts the 86th most powerful "Supercomputer" based on IBM BladeCenter technology, with over 4000 servers connected into a cluster, for making the movie "Happy Feet". The movie took four years to make, with over 500 people, of 27 different nationalities. It was the first CG movie made in Australia, and has been well-received by audiences worldwide.
Mr. Desdoigts gave out some interesting facts and figures about the movie:
While visually stunning on the big screen, each frame is only 1.4 Megapixel, about the same resolution as most camera phones.
In one scene, there are 427,086 penguins all appearing on frame.
Mumble, the lovable lead character, is made up of over 6 million feathers.
As many as 17 dancers were "motion-captured" to choreograph the tap-dancing and character interaction segments.
Only one system admin was needed to manage this entire server farm. (IBM Systems Director technology makes this possible)
The movie consumed 103 TB of disk space, backed up to 595 LTO tape cartridges.
An estimated 17 million CPU-hours were needed for all the processing and rendering.
Rather than talking about technology for technology's sake, these filmmakers showed how technology could be put to use, in a practical sense, to provide the world something of value.
Oh my, it is Tuesday again, and you know what that means? IBM Announcements!
This week, IBM announced its latest storage arrays in its IBM System Storage DS8000 series: the DS8880 models. Similar to the "Business Class" vs. "Enterprise Class" distinctions of the DS8870, IBM announced two new models, the DS8884 and the DS8886.
All of the new DS8880 models are based on the latest IBM POWER8 processors, and are noticeably thinner! The frames are now a standard 19 inches wide, fitting nicely into standard IBM racks alongside most other standard 19-inch rack equipment.
The DC-UPS that used to be on the side are now at the bottom of each frame, taking up 8U of space. The High Performance Flash Enclosures (HPFE) that formerly were stored vertically above the DC-UPS will be stored horizontally with the rest of the HDD and SSD drives.
The DS8884 will have 6-core controllers, up to 256 GB Cache, 64 ports that can negotiate between 16Gbps and 8Gbps, up to 240 drives in a single-rack configuration or 768 drives in a three-frame configuration, and up to 120 flash cards in HPFEs. The performance of this one is equal to or better than existing DS8870 systems.
The DS8886 will have 8-core, 16-core and 24-core controllers, offering up to three times the performance of the previous DS8870 models, with up to 2 TB of Cache, 128 ports, up to 1,536 drives across five frames, and up to 240 flash cards in HPFEs.
Field model conversion from DS8870 to DS8886 is available for existing clients with DS8870 Enterprise Configurations. This will let clients move their existing HDD, SSD, HPFE and Host Adapters over to the new DS8880 models.
In previous DS8000 models, clients would have one Hardware Management Console (HMC) inside the array, and an optional second HMC workstation somewhere else for high availability. While the second one was optional, it was always considered best practice to have it for redundancy's sake. In the new DS8880 models, you can have both HMCs in the array, and the Keyboard/Video/Mouse (KVM) switch can select between the two.
The new I/O enclosure pairs are four times faster, supporting six Device Adapters and two HPFE connections over PCIe Gen 3 network, the fastest available in the industry.
Lastly, IBM simplified the licensing of software features into three bundles, based on TB total capacity of Fixed Block (FB) LUNs and Count-Key-Data (CKD) volumes:
Base function License: Logical Configuration support for FB, Operating Environment License, Thin Provisioning, Easy Tier® automated sub-volume tiering, and I/O Priority Manager.
Copy Services License: FlashCopy®, Metro Mirror, Global Mirror, Metro/Global Mirror, z/Global Mirror (XRC), z/Global Mirror Resync, and Multi-Target PPRC.
z-Synergy Service License: Parallel Access Volumes (PAV), HyperPAV, FICON® attachment, High performance FICON (zHPF), and IBM z/OS® Distributed Data Backup (zDDB).
IBM also provided a "Product preview", announcing plans for a third member of the DS8880 family in 2016 that will be flash-optimized to provide an all-flash, higher performance storage system model.
Over time, I have gotten many emails, comments and tweets related to this post. The instructions have been downloaded over 130,000 times!
The letter below was so inspiring that I felt I need to share it. (Published here with permission from the author, who goes by the screen name DaveAlex)
Thought you would like to know that I am working toward an AI Agent hopefully more advanced than "Watson Jr." although I will probably include the software behind it.
The hardware I have on hand is a System x3650 M2, which I bought for $250 on eBay. It has four 2.66 GHz Xeons with 6 cores each, and 16 GB RAM. I have another 16 GB to install when I need it. I will shortly have 4 TB of HDD space online, plus an additional 3 TB USB3 drive.
Ultimately, I hope to have some of the available knowledge bases online (Freebase, CYC, etc.) which will handle specific information perhaps better than the Watson software by itself.
The target (goal) that I am aiming for is a stationary version of Commander Data from Star Trek: The Next Generation.
I envision it having some form of self-knowledge, being capable of processing graphical data (i.e., facial recognition, gesture interpretation), voice input/output, mathematical processing with graphical output (display & hardcopy), and several additional features.
As I have studied this project, I am amazed at how much of the required software is already available. The biggest stumbling block is integrating the separate parts.
Back to Hardware. I just bought 2 Dell 2850 servers, each with dual Intel Xeons which can handle some of the tasks. If I need more processing power, I just happen to have about 10 other towers with Pentium IV or dual core processors sitting around, which can be pressed into service as needed. So far, my total cost is less than $1000 US Dollars, and my wife has not thrown me out yet. I continue to watch eBay for additional older used equipment for fractions of the original cost. My friends who follow my project keep telling me that I need to get on with the software, and add hardware as needed; they are absolutely correct, but I can't resist a bargain.
The power consumption is a potential problem, but I have a 4500-watt solar array to use. The cooling could be a problem too, but my house sits into the side of a hill, and I can readily duct the air supply past the sub-surface wall, perhaps with old processor cooling fins glued to the wall.
I hope to get some hobby programmers involved in the project, as it is a bit beyond my programming capabilities. I hope that I can live long enough to see it come to fruition; I am 78 now, and mentally in very good condition.
Wow! He is 78 years old! While others his age are playing shuffleboard at the nursing home, he is out there learning new things about the latest technology. I wish him the best of luck on this! If you would like to reach out to DaveAlex, send me a note or comment below, and I will forward them on to him.
Well, tomorrow is the Winter solstice, at least for those of us in the Northern hemisphere of the planet. As often happens, I have more vacation days left than I can physically take before they evaporate at the end of the year, so next week I will be off, going to see movies like the new ["Golden Compass"] or perhaps read the latest book from [Richard Dawkins].
Next week, I suspect some of the kids on my block will be playing with radio-controlled cars or planes. If you are not familiar with these, here's a [video on BoingBoing] that shows Carl Rankin's flying machines that he made out of household materials.
Which brings me to the thought of scalability. For the most part, the physics involved with cars, planes, trains or sailboats apply at the toy-size level as well as the real-world level. One human operator can drive/manage/sail one vehicle. While I have seen a chess master play seven opponents on seven chess boards concurrently, it would be difficult for a single person to fly seven radio-controlled airplanes at the same time.
How can this concept be extended to IT administrators in the data center? They have to deal with hundreds of applications running on thousands of distributed servers. In a whitepaper titled [Single System Image (SSI)], the three authors write:
A single system image (SSI) is the property of a system that hides the heterogeneous and distributed nature of the available resources and presents them to users and applications as a single unified computing resource.
IBM has some offerings that can help towards this goal.
Even in the case where your vehicle is being pulled by eight horses (or eight reindeer?), a single operator can manage it, holding the reins in both hands. In the same manner, IBM has spent a lot of investment and research on supercomputers, where hundreds of individual servers all work together towards a common task. The operator submits a math problem, for example, and the single system image takes care of the rest, dividing the work up into smaller chunks that are executed on each machine.
When done with IBM mainframes, it is called a Parallel Sysplex. The world's largest business workloads are processed by mainframes, and connecting several together and working in concert makes this possible. In this case, the tasks are typically just single transactions, with no need to divide them up further, just balance the workload across the various machines, with shared access to a common database and storage infrastructure so they can all do the work equally.
Last August, in my post [Fundamental Changes for Green Data Centers], I mentioned that IBM consolidated 3900 Intel-based servers onto 33 mainframes. This not only saves lots of electricity, but makes it much easier for the IT administrators to manage the environment.
Parallel Sysplex configurations often require thousands of disk volumes, which would have been quite a headache to deal with individually. With DFSMS, IBM was able to create "storage groups" where a few groups held the data. You might have reasons to separate some data from others, so you put them in separate groups. An IT administrator could handle a handful of storage groups much more easily than thousands of disk volumes. As businesses grow, there would be more data in each storage group, but the number of storage groups remains flat, so an IT administrator could manage the growth easily.
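The grouping idea is easy to see in miniature. Here is a toy sketch of the concept: thousands of volume serials collapse into a handful of named groups that the administrator actually reasons about. The group names and volume counts are invented for illustration, not actual DFSMS constructs.

```python
# Toy illustration of the "storage group" idea: the admin manages a
# handful of groups, not thousands of individual volumes.
from collections import defaultdict

def assign_volume(groups, group_name, volume_serial):
    """Place a disk volume into a named storage group."""
    groups[group_name].append(volume_serial)

groups = defaultdict(list)

# 4000 volumes collapse into just three groups (names are hypothetical).
for i in range(4000):
    if i % 10 == 0:
        assign_volume(groups, "DB2PROD", f"VOL{i:04d}")
    elif i % 10 in (1, 2):
        assign_volume(groups, "TESTDEV", f"VOL{i:04d}")
    else:
        assign_volume(groups, "GENERAL", f"VOL{i:04d}")

print(len(groups))                           # 3 groups, not 4000 volumes
print(sum(len(v) for v in groups.values())) # still 4000 volumes underneath
```

As the business grows, volumes are added inside each group, but the number of groups the admin sees stays flat.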
IBM System Storage SAN Volume Controller (SVC) is able to accomplish this for other distributed systems. All of the physical disk space assigned to an SVC cluster is placed into a handful of "managed disk groups". As the system grows in capacity, more space is added to each managed disk group, but the number of groups stays small, so IT administrators can continue to manage them easily.
The new IBM System Storage Virtual File Manager (VFM) is able to aggregate file systems into one global namespace, again simplifying heterogeneous resources into a single system image. End users have a single drive letter or mount point to deal with, rather than many connections to all the disparate systems.
Lastly we get to the actual management aspect of it all. Wouldn't it be nice if your entire data center could be managed by a hand-held device with two joysticks and a couple of buttons? We're not quite there yet, but last October we announced the [IBM System Storage Productivity Center (SSPC)]. This is a master console that has a variety of software pre-installed to manage your IBM and non-IBM storage hardware, including SAN fabric gear, disk arrays and even tape libraries. It lets the storage admin see the entire data center as a single system image, displaying the topology in a graphical view that can be drilled down using semantic zooming to look at or manage a particular device or component.
Customers are growing their storage capacity on average 60 percent per year. They could do this by having more and more things to deal with, and gripe about the complexity, or they can try to grow their single system image bigger, with interfaces and technologies that allow the existing IT staff to manage the growth.
In case you missed it, IBM unveiled a new digital video surveillance service yesterday. This "marks an important shift in the industry's approach to security, applying advanced analytics to video data and signaling the ability to converge physical and information technology (IT) security."
The IBM Smart Surveillance Solution is designed to provide the unique capability to carry out efficient data analysis of video sequences either in real time or from recordings. These recordings can be on disk or tape storage.
The problem with today's existing "analog" surveillance is that the analog cameras record onto traditional VHS tapes, and these are rotated through, re-written after a few hours or days. To review tapes often involves human intervention, and must be done before the VHS tapes are re-used. Many shoplifters, thieves, and other law-breakers take a chance that their actions will not be caught on tape, or that they will be long gone by the time the video is analyzed.
The IBM Smart Surveillance Solution can provide a number of advantages over traditional video solutions, including:
Real-time alerts that can help anticipate incidents by identifying suspicious behaviors.
Forensic capabilities are enhanced by utilizing unique indexing and attribute-based search of video events to classify objects into categories such as people and cars.
Situational awareness of the location, identity and activity of objects in a monitored space including license plate recognition and face capture.
With real-time analytics capabilities, the new DVS service can open up a wide array of new applications that go far beyond the traditional security aspects of surveillance systems. Early adopter industries in this rapidly evolving market include retail, public sector and financial services. The retail industry estimates nearly $50 billion is lost annually to fraud, theft and administrative errors.
Once in digital format, video surveillance can be sent further, processed quicker, and stored for longer periods of time than traditional media makes practical today.
IBM introduces the eighth generation of Linear Tape Open (LTO) tape drive technology, with corresponding support in all of the IBM tape libraries.
Fellow blogger Jon Toigo, of Drunkendata.com fame, came to Tucson to interview Lee Jesionowski, Ed Childers, Calline Sanchez, and me about this. Check out the various segments on YouTube or his website.
The LTO-8 cartridges are not yet available, but when they are, they will hold 12 TB raw capacity, or 30 TB effective capacity at a 2.5-to-1 compression ratio. The new drives are N-1 compatible, able to read/write LTO-7 cartridge media.
Previous generations also supported reading N-2 generation tapes; LTO-8 breaks from that tradition and will not support LTO-6 cartridges at all.
LTO-8 comes in both "Full Height" (FH) and Half-Height (HH) models. The FH models can transfer data at 360 MB/sec (or 900 MB/sec effective at 2.5-to-1 compression), and the HH models at 300 MB/sec (or 750 MB/sec effective at 2.5-to-1).
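The announced numbers are easy to verify as back-of-envelope arithmetic, assuming the stated 2.5-to-1 compression ratio applies uniformly:

```python
# Back-of-envelope LTO-8 figures from the announcement.
RAW_TB = 12        # raw cartridge capacity
COMPRESSION = 2.5  # stated compression ratio
FH_MBPS = 360      # Full-Height native transfer rate, MB/sec
HH_MBPS = 300      # Half-Height native transfer rate, MB/sec

print(RAW_TB * COMPRESSION)       # 30.0 TB effective capacity
print(FH_MBPS * COMPRESSION)      # 900.0 MB/sec effective (FH)
print(HH_MBPS * COMPRESSION)      # 750.0 MB/sec effective (HH)

# Time to stream a full raw cartridge at FH native speed (no compression):
seconds = RAW_TB * 1_000_000 / FH_MBPS   # 12 TB = 12,000,000 MB
print(round(seconds / 3600, 1))          # ~9.3 hours
```

So even at full native speed, filling a cartridge end-to-end takes the better part of a working day, which is worth keeping in mind when sizing drive counts for a backup window.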
LTO-8 supports IBM Spectrum Archive and the "Linear Tape File System" (LTFS) tape format for self-describing long-term retention of data.
Compliance storage has come under many names. For tape and optical media, we had "WORM" for Write-Once, Read-Many. For disk-based storage, we had "Fixed-Content" or "Content-Addressable Storage". For file systems, we had "Immutable Storage".
Fortunately, the clever folks who crafted SEC Rule 17a-4 came up with an umbrella term: "Non-Erasable, Non-Rewriteable" (NENR) that covers all storage media, from WORM tape and optical, to tamperproof flash, disk and cloud-based solutions.
The other major change is "Concentrated Dispersal" mode, or "CD mode" for short. Erasure Coding works best when data is dispersed across three or more sites. When this happens, you can lose all of the data at one site, and still have 100 percent access to all data from the other locations.
IBM's "Information Dispersal Algorithm", or IDA for short, scatters slices of data across many servers. This is great for high availability and performance, but often meant that the minimum deployment was 500TB or greater.
Not every organization is ready for such a large purchase. Some want to just [dip their toe in the water] with something smaller and less expensive. Well, IBM delivered!
The new CD mode means that instead of one slice per Slicestor node, you can pack lots of slices on each node. Each slice will be on distinct disk drives, for high availability.
Entry-level configurations now can be as little as 72-104 TB, across 1, 2 or 3 sites.
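The site-loss property described above falls out of the erasure-coding math. Here is a minimal sketch of the availability check, where the width (total slices) and threshold (slices needed to read) are hypothetical values chosen for illustration, not actual Cleversafe defaults:

```python
# Illustrative erasure-coding availability check. WIDTH and THRESHOLD
# are invented example values, not product defaults.
def readable(width, threshold, slices_lost):
    """Data stays readable while surviving slices >= threshold."""
    return (width - slices_lost) >= threshold

WIDTH, THRESHOLD, SITES = 12, 8, 3
slices_per_site = WIDTH // SITES   # 4 slices stored at each of 3 sites

# Losing an entire site still leaves 8 slices, enough to reconstruct:
print(readable(WIDTH, THRESHOLD, slices_per_site))       # True
# Losing two sites leaves only 4 slices, below the threshold:
print(readable(WIDTH, THRESHOLD, 2 * slices_per_site))   # False
```

Concentrated Dispersal keeps the same math but packs multiple slices per Slicestor node (on distinct drives), which is what lets the entry configuration shrink so dramatically.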
This week, I am attending the [InterConnect Conference] in Las Vegas, Feb 21-25, 2016. This is IBM's premier Cloud & Mobile conference for the year.
The last day of the conference had fewer people; many stayed for the Elton John concert, then left. I am glad to be one of the few that squeezed out every last bit of learning from the money it cost my employer to send me here.
2419A Enhance the Agility of Your Cloud with IBM FlashSystem
Kristy Ortega and Shaluka Perera, IBM FlashSystem Solutions team, presented. Cloud Service Providers (CSP) and Managed Service Providers (MSP) are leveraging flash technology for a variety of reasons:
To meet Service Level Agreements (SLAs)
To handle unpredictable workloads
To minimize noisy neighbor interference
To offer premium performance as an up-sell feature
To be able to scale faster to meet incoming requests
To reduce server count
To keep customers delighted and reduce customer churn
To offer data-rich features without sacrificing performance
Kristy gave three practical client use cases:
IP-Only -- an MSP in the Nordic countries, employed IBM FlashSystem and Storwize V5000. They achieved five times VMware density on their servers and 300 percent improved application performance. Nearly all of the cost of the new storage hardware was offset by the savings in VMware license costs!
Cageka -- an MSP in Europe, employed IBM FlashSystem and SAN Volume Controller. They achieved 66 percent reduced SAP ERP response time, 97 percent reduction in floorspace, and 95 percent reduced power and cooling costs.
COCC -- formerly the Connecticut On-Line Computer Center, a CSP for bank and credit unions, employed IBM FlashSystem with IBM POWER servers. They achieved 10x faster OLTP transaction processing times, 80 percent reduction in power and cooling costs. The payback period for this was less than 3 months!
IBM sells SAN switches featuring Brocade Gen5 "Fabric Vision" technology, and resells Cisco MDS switches like the 9396S model. Both of these have been enhanced to handle the lower latency and higher throughput that IBM FlashSystem provides.
IBM Data Engine for NoSQL employs Redis with Coherent Accelerator Processor Interface (CAPI) that allows POWER8 servers to connect directly to IBM FlashSystem as an extension of memory rather than bus-attached external storage. This reduces the code path length to read/write to IBM FlashSystem by 97 percent, resulting in solutions that use six times less rack space, and three times less costs. This solution reduces CPU core requirements by 20-30 cores for every 1M IOPS of workload!
Spectrum Scale supports IBM FlashSystem in a variety of configurations. First, IBM FlashSystem can serve as a high-speed cache when Spectrum Scale virtualizes other NFS storage devices. Second, IBM FlashSystem can serve as a low-latency storage pool to direct new or hot data to. Third, Spectrum Scale can separate its metadata from the content of files and objects, putting the metadata on IBM FlashSystem. This greatly improves searching through directory structures or for specific object attributes.
Last year, IBM, Hewlett-Packard, and VMware launched Project Capstone to "leave no application behind". They made a concerted effort to make sure that all relevant applications that run on bare metal can also run on VMware hypervisor. IBM FlashSystem has support for VMware features, including VAAI, VASA, and VVols.
IBM has partnered with Atlantis ILIO to offer in-line data deduplication for Virtual Desktop Infrastructure (VDI). A single 2U IBM FlashSystem can support 5,000 users and 10,000 virtual desktops, running at 382 IOPS per desktop.
Lastly, healthcare provider TriZetto has used IBM FlashSystem to reduce OPEX by 90 percent, shrinking from a 20U disk array to a 2U IBM FlashSystem device.
4331A Leverage z/OS and Cloud Storage for Backup/Archive Efficiency and Cost Reduction
Eddie Lin, IBM Senior Technical Staff Member for DS8000 development team, presented this technology preview. Taking advantage of cloud storage is not limited to the distributed storage world alone. The ability to connect existing archive and backup solutions in z/OS to on-premise object storage platforms provides huge efficiency gains, enabling clients to do more during their critical batch windows.
IBM is integrating cloud gateway software into its DS8870 and DS8880 Enterprise Disk Systems in conjunction with DFSMShsm and DFSMSdss for a complete end-to-end solution to optimize this space. We will show a live demonstration of this capability during this session.
This solution uses the Storage-as-the-Storage-Cloud methodology I mentioned in my session yesterday. DS8000 is the #1 storage provider for mainframe environments. Eddie explained the current inefficient process of moving cold data to tape, using 37-year-old DFSMShsm functionality.
A new approach involves moving data directly from DS8870 storage systems to object storage, either on-premises or off-premises. This eliminates MIPS used for data movement, and reduces the record-keeping normally done by DFSMShsm. z/OS data sets migrated to the Cloud will continue to be designated as MIGRAT in the ICF Catalog. Recall times from tape or Cloud are similar.
There will also be options for DFSMSdss to invoke the function. However, you will need to provide in the DFSMSdss command parameters all of the information needed to connect to the Cloud that would normally be handled by DFSMShsm.
To make this all happen, you will need a certain level of DFSMS, and a certain level of DS8000 firmware. No new hardware is required, as it uses 1GbE Ethernet ports that already exist in DS8870 and DS8880 models. If you still have DS8100, DS8300, DS8700 or DS8800 models, now is a good time to start an upgrade!
Internal tests on a 5GB data set were done to compare MIPS consumption. DFSMShsm consumed 0.127 CPU, versus only 0.068 CPU for the new "Transparent Cloud Storage Tiering" method, a 46 percent reduction in MIPS. DFSMShsm is often the #2 biggest consumer of MIPS (DB2 is #1), so any reduction here is a big deal.
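The quoted percentage checks out against the raw figures from the test:

```python
# Verifying the quoted 46 percent MIPS reduction from the test figures.
hsm_cpu = 0.127   # CPU consumed by DFSMShsm migrating the 5GB data set
tct_cpu = 0.068   # CPU consumed by the new cloud-tiering method

reduction = (hsm_cpu - tct_cpu) / hsm_cpu
print(round(reduction * 100))   # 46
```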
IBM plans to support Spectrum Scale, Cleversafe, IBM SoftLayer, Amazon S3, Rackspace, Microsoft Azure. Full encryption data-in-flight is included, with keys managed using IBM SKLM. This capability will be fully supported by z/OS Security products (RACF, Top Secret, etc.) and z/OS audit logging.
Eddie wrapped up with a live demo.
7341A IBM Storage and Catalogic: Software Defined Solutions for Hybrid Cloud and DevOps
Third party Catalogic ECX software supports IBM, NetApp and EMC storage devices. I was hoping to hear how it works specifically with IBM storage models, but instead the speaker explained why Copy Data Management (CDM) was helpful for Bi-Modal environments.
Basically, copies of data taken to protect production data sit idle until needed. With Copy Data Management, the copies are available to development and test personnel. While traditional production IT operations are like Marathon runners, the new DevOps is like short-distance sprinters, needing to be agile in developing and testing new applications. Having ready access to copies of production data can speed this process.
4921A Radical Storage Simplicity for Your Cloud and How it Can Impact Your Customers
Diane Benjuya and Yafit Sami, both from IBM, presented IBM Spectrum Accelerate, the software "de-coupled" from traditional XIV hardware.
The XIV grid architecture automatically distributes data, eliminates hot-spots, and provides enterprise-class features like thin provisioning, VMware support, snapshots and remote mirroring. Its "Distributed RAID-10" capability can rebuild after the loss of a 6TB disk drive in less than an hour.
Spectrum Accelerate has nearly the same set of features, minus Microsoft Hyper-V integration, FCP host access support, VMware vSphere v6 VVol support, Real-time Compression, and Encryption. Spectrum Accelerate adds a feature not available to XIV called Hyperconvergence. This allows application Virtual Machines to run on the same servers used for Spectrum Accelerate. Spectrum Accelerate can run on-premises on customer-choice hardware, or in the Cloud, such as IBM SoftLayer.
In response to complaints that IBM XIV was a single-frame storage array, IBM introduced Hyper-Scale, a series of features that allow up to 144 XIV Gen3 frames as a single system. With the introduction of Spectrum Accelerate, Hyper-Scale Manager can now manage any combination of XIV Gen3 and Spectrum Accelerate clusters, on-premises or off-premises, up to 144 total.
Hyper-Scale Mobility can migrate volumes from one XIV to another without the need for external virtualization such as IBM SAN Volume Controller. For iSCSI volumes, Hyper-Scale Mobility can migrate data between XIV and Spectrum Accelerate, or from one Spectrum Accelerate cluster to another, on-premises or off-premises.
Hyper-Scale Consistency allows snapshots to be taken of a group of volumes across multiple XIV frames. Now, snapshots can be taken of a group of volumes across both XIV and Spectrum Accelerate clusters.
Remote Mirroring is fully supported. You can replicate data from XIV to Spectrum Accelerate, Spectrum Accelerate to XIV, or from one Spectrum Accelerate cluster to another.
The IBM XIV Mobile Dashboard for Apple and Android phones can support any mix of XIV and Spectrum Accelerate clusters. This includes monitoring your environment, as well as push notifications.
IBM has also introduced flexible licensing options. With newly purchased XIV boxes and Spectrum Accelerate, you can choose to buy the software license as "perpetual", allowing you to move it to new hardware when your old hardware kicks the bucket. This license can be moved to new XIV hardware, or to a Spectrum Accelerate cluster deployment.
For Spectrum Accelerate, an additional license option is "monthly", allowing you to elastically add or reduce the amount of storage you manage, either on-premises or off-premises.
Like the idea of Spectrum Accelerate but don't want to build it yourself? Third party SuperMicro offers hardware pre-certified and pre-installed with Spectrum Accelerate. You license Spectrum Accelerate directly from IBM, and SuperMicro will take care of the rest.
Spectrum Accelerate is a component of the Spectrum Storage suite, which offers a single flat per-TB price for all six Spectrum Storage products.
Want to try IBM Spectrum Accelerate yourself? Here are three options:
Free 90-day trial with self-destruct. After 90 days, the code stops working. You can download this and try it out.
90-day evaluation copy. Your authorized IBM Seller works with you to install, and if you like it, you buy it after 90 days to continue to use it.
Special promotion before June 30, 2016 -- Purchase IBM Spectrum Accelerate for production, and your first 20TB are free. No strings attached.
IBM's Silverpop uses IBM Spectrum Accelerate to deploy their market analytics solution. They can spin up a new customer with 250TB of capacity in 24-48 hours on IBM SoftLayer. They found they use half as many storage admins managing storage with IBM Spectrum Accelerate as with their previous method.
Well, that's the end of the conference. I have to go back and submit all of my survey responses, which I should have done every day all along, but was too busy writing blog posts!
The presentations are also now available for download for those who attended the conference. (Go to Session Preview on the IBM InterConnect attendee website and hit the Download Presentation button)
The question is if this is unique or specific to these particular models, or if this affects all kinds of blade servers because of their very nature and architecture. Stephen indicates that they also have HP C class enclosures, but since they are still in test mode, cannot comment on them.
I have no experience with any of HP's blade servers, but I have worked closely with our IBM BladeCenter team to help make sure that our storage, and our SAN equipment, work well together with the BladeCenter, and more importantly, that problems can be diagnosed effectively.
When I asked why people feel they need to know the inner workings of storage, the overwhelming response was to help diagnose problems. This could include problems in placing related data on a potential single point of failure, problems with performance, and problems communicating with 1-800-IBM-SERV.
So, if you have encountered problems diagnosing SAN problems with BladeCenter, or find setting up an IBM SAN with blade servers difficult in general, I would be interested in hearing what IBM can do to make the situation better.
Continuing this week's coverage of the 27th annual [Data Center Conference] I attended some break-out sessions on the "storage" track.
Effectively Deploying Disruptive Storage Architectures and Technologies
Two analysts co-presented this session. In this case, the speakers are using the term "disruptive" in the [positive sense] of the word, as originally used by Clayton Christensen in his book [The Innovator's Dilemma], and not in the negative sense of IT system outages. By a show of hands, they asked if anyone had more storage than they needed. No hands went up.
The session focused on the benefits versus risks of new storage architectures, and which vendors they felt would succeed in this new marketplace around the years 2012-2013.
By electronic survey, here were the number of storage vendors deployed by members of the audience:
14 percent - one vendor
33 percent - two vendors, often called a "dual vendor" strategy
24 percent - three vendors
29 percent - four or more storage vendors
For those who have deployed a storage area network (SAN), 84 percent also have NAS, 61 percent also have some form of archive storage such as IBM System Storage DR550, and 18 percent also have a virtual tape library (VTL).
The speaker credited IBM's leadership in the now popular "storage server" movement to the IBM Versatile Storage Server [VSS] from the 1990s, the predecessor to IBM's popular Enterprise Storage Server (ESS). A "storage server" is merely a disk or tape system built using off-the-shelf server technology, rather than customized [ASIC] chips, lowering the barriers of entry to a slew of small start-up firms entering the IT storage market, and leading to new innovation.
How can a system designed for no single point of failure (SPOF) actually then fail? The speaker conveniently ignored the two most obvious answers (multiple failures, microcode error) and focused instead on mis-configuration. She felt part of the blame falls on IT staff not having adequate skills to deal with the complexities of today's storage devices, and the other part of the blame falls on storage vendors for making such complicated devices in the first place.
Scale-out architectures, such as IBM XIV and EMC Atmos, represent a departure from traditional "Scale-up" monolithic equipment. Whereas scale-up machines are traditionally limited in scalability from their packaging, scale-out are limited only by the software architecture and back-end interconnect.
To go with cloud computing, the analyst categorized storage into four groups: Outsourced, Hosted, Cloud, and Sky Drive. The difference depended on where servers, storage and support personnel were located.
How long are you willing to wait for your preferred storage vendor to provide a new feature before switching to another vendor? A shocking 51 percent said at most 12 months! 34 percent would be willing to wait up to 24 months, and only 7 percent were unwilling to change vendors. The results indicate more confidence in being able to change vendors, rather than pressures from upper management to meet budget or functional requirements.
Beyond the seven major storage vendors, there are now dozens of smaller emerging or privately-held start-ups now offering new storage devices. How willing were the members of the audience to do business with these? 21 percent already have devices installed from them, 16 percent plan to in the next 12-24 months, and 63 percent have no plans at all.
The key value proposition from the new storage architectures were ease-of-use and lower total cost of ownership. The speaker recommended developing a strategy or "road map" for deploying new storage architectures, with focus on quantifying the benefits and savings. Ask the new vendor for references, local support, and an acceptance test or "proof-of-concept" to try out the new system. Also, consider the impact to existing Disaster Recovery or other IT processes that this new storage architecture may have.
Tame the Information Explosion with IBM Information Infrastructure
Susan Blocher, IBM VP of marketing for System Storage, presented this vendor-sponsored session, covering the IBM Information Infrastructure part of IBM's New Enterprise Data Center vision. This was followed by Brad Heaton, Senior Systems Admin from ProQuest, who gave his "User Experience" of the IBM TS7650G ProtecTIER virtual tape library and its state-of-the-art inline data deduplication capability.
Best Practices for Managing Data Growth and Reducing Storage Costs
The analyst explained why everyone should be looking at deploying a formal "data archiving" scheme. Not just for "mandatory preservation" resulting from government or industry regulations, but also the benefits of "optional preservation" to help corporations and individual employees be more productive and effective.
Before, there were only two tiers of storage: expensive disk and inexpensive tape. Now, with the advent of slower, less-expensive SATA disks, including storage systems that emulate virtual tape libraries, and others that offer Non-Erasable, Non-Rewriteable (NENR) protection, IT administrators have a middle ground to keep their archive data.
New software innovation supports better data management. The speaker recalled when "storage management" was equated to "backup" only, and now includes all aspects of management, including HSM migration, compliance archive, and long term data preservation. I had a smile on my face--IBM has used "storage management" to refer to these other aspects of storage since the 1980s!
The analyst felt the best tool to control growth is to "Delete" the data no longer needed, but felt that nobody uses the Storage Resource Management (SRM) tools needed to make this viable. Until then, people will choose instead to archive emails and user files to less expensive media. The speaker also recommended looking into highly-scalable NAS offerings--such as IBM's Scale-Out File Services (SoFS), Exanet, Permabit, IBRIX, Isilon, and others--when fast access to files is worth the premium price over tape media. The speaker also made the distinction between "stub-based" archiving--such as IBM TSM Space Manager, Sun's SAM-FS, and EMC DiskXtender--and "stub-less" archiving accomplished through file virtualization that employs a global namespace--such as IBM Virtual File Manager (VFM), EMC RAINfinity or F5's ARX.
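To make the "stub-based" idea concrete, here is a toy sketch: the file's content is replaced by a small stub that records where the full copy was migrated. The stub format, marker name, and archive URL are all invented for illustration; real products like TSM Space Manager use their own on-disk formats and transparent recall.

```python
# Toy illustration of stub-based archiving (format is invented, not
# any product's actual stub layout).
import json
import os
import tempfile

STUB_MARKER = "X-ARCHIVE-STUB"

def archive_to_stub(path, archive_location):
    """Replace a file's content with a tiny stub pointing at the archive copy."""
    stub = {STUB_MARKER: True,
            "location": archive_location,
            "original_size": os.path.getsize(path)}
    with open(path, "w") as f:
        json.dump(stub, f)

def is_stub(path):
    """True if the file has been replaced by an archive stub."""
    try:
        with open(path) as f:
            return json.load(f).get(STUB_MARKER, False)
    except (ValueError, UnicodeDecodeError):
        return False   # not JSON, so it is a regular file

# Demo: a 10 KB file shrinks to a small stub after migration.
with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "report.dat")
    with open(p, "w") as f:
        f.write("x" * 10_000)
    before = is_stub(p)                       # False: still a plain file
    archive_to_stub(p, "tape://POOL1/VOL0042")
    stubbed, small = is_stub(p), os.path.getsize(p) < 200
    print(before, stubbed, small)             # False True True
```

Stub-less archiving, by contrast, hides the migration behind a global namespace, so no marker file is left behind at all.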
She made the distinction between archives and backups. If you are keeping backups longer than four weeks, they are not really backups, are they? These are really archives, but not as effective. Recent legal precedent no longer considers long-term backup tapes as valid archive tapes.
To deploy a new archive strategy, create a formal position of "e-archivist", choose the applications that will be archived, and focus on requirements first, rather than going out and buying compliance storage devices. Try to get users to pool their project data into one location, to make archiving easier. Try to have the storage admins offer a "menu" of options to Line-of-Business/Legal/Compliance teams that may not be familiar with subtle differences in storage technologies.
While I am familiar with many of these best practices already, I found it useful to see which competitive products line up with those we have already within IBM, and which new storage architectures others find most promising.
Twenty years ago, I flew to Atlanta for the semi-annual SHARE conference. I was a lead architect for DFSMS, the storage management software for mainframe servers. When I got to the hotel, I realized that I had forgotten to pack my saline solution for my contact lenses. I went to the hotel gift shop, and picked the first one I found. I put my contacts in the solution and went to bed.
The next morning, I put on my contacts, got dressed, and participated in meetings. One of my colleagues noticed my eyes were quite red, and suggested I switch from contact lenses to glasses. I went back to my hotel room, saw to my horror that what I thought was saline solution was actually hydrogen peroxide intended for hard lenses. When I removed the lenses, all I could see was white light.
I managed to find my way to the elevator, and felt for the button with the star that indicated the lobby on the ground floor. I asked a hotel staffer to call me an ambulance, but instead, they put me in a cab, and sent me to Emory Hospital. On arrival, all I could do was hand over my wallet to my cabbie, and let him take out what he felt was fair, since I could not see him, the meter, or his license number.
After bumping my knees into dozens of cars in the parking lot, I finally made it to the ER, only to have the receptionist give me a form to fill out and a pen. At this point, I lost it. I gave her my wallet and said that any information she may need should be in there.
Thankfully, a doctor noticed this exchange, and took care of me right away. I had chemically burned off both corneas. He injected some green fluid into both eyeballs, and sent me off in a cab to the pharmacy. At least both eyes were bandaged in gauze, so people were kind enough to help me get to the counter for my pain killers, Percocet.
The pharmacist provided me the pills, and warned me NOT to operate any heavy machinery under the influence of this medication. Seriously? I can't see, both eyes covered, and he tells me that?
I got back to the hotel, got ready for bed, took the pills and brushed my teeth. I woke up the next morning on the bathroom floor, still clutching the toothbrush, with vertical and horizontal lines across my right cheek made by the one-inch tiles of the bathroom floor. These pills really knocked me out.
That day, I had to present a full hour in front of hundreds of people. I had a colleague flip my transparencies for me, while I spoke to each one, my eyes still covered in gauze. That evening, I was one of the experts on the panel for a "Birds of a Feather", or BOF session, answering a variety of questions. People could see that I was blind, but I could still hear the questions, and I could still answer them as well.
If you are going to Edge 2013 in Las Vegas, please consider attending my BOF session on Security for PureSystems, System x and Storage products, scheduled for Thursday afternoon, June 13. I will be moderating a distinguished panel of experts to answer your questions! I have listed them here alphabetically:
Jack Arnold, US Federal. Jack has worked decades in the storage industry, and will provide insight into security issues related to the government.
Tom Benjamin, Development Manager for Key Lifecycle Management and Java Cryptography. Tom will bring his expertise in both TKLM and ISKLM for managing encryption keys, and how to communicate these between security and storage administrators.
Paul Bradshaw, Chief Storage Architect for Clouds. A research scientist from IBM's Almaden Research Lab, Paul will provide insight in how to deal with security issues related to private, hybrid and public cloud deployments.
Ajay Dholakia, Solution Center of Excellence. Ajay will cover server-side considerations for security deployments, including System x and PureSystems.
Jim Fisher, Advanced Technical Skills. Jim brings expertise related to deploying data-at-rest encryption.
Not sure what kind of questions to ask? Here is a series of Questions and Answers we had at a Storage event in 2011 that might give you a good idea: [2011 Storage Free-for-All].
Well, it's Tuesday, which means IBM Announcements!
We have both disk and tape related announcements today.
2 TB Drives
Yes, they are finally here. IBM now offers [2 TB SATA drives for its IBM System Storage DCS9900 series] disk systems. These are 5400 RPM, slower than traditional 7200 RPM SATA drives. This increases the maximum capacity of a single DCS9900 from 1200 TB to 2400 TB. The DCS9900 is IBM's MAID system (Massive Array of Idle Disk) which allows for drive spin-down to reduce energy costs and is ideal for long term retention of archive data that must remain on disk for High Performance Computing or video streaming.
TS3000 System Console
The TS3000 System Console [provides improved features for service and support] of up to 24 tape library frames or 43 unique tape systems. Tape frames include those of the TS7740, TS7720 and TS7650. Tape systems include TS3500, TS3400 or 3494 libraries as well as stand-alone TS1120 and TS1130 drives. Having the TS3000 System Console in place is a benefit to both IBM and the customer, as it improves IBM's ability to provide service in a more timely manner.
Both announcements are part of IBM's strategy to provide cost-effective, energy-efficient, long-term retention storage for archive data.
This week, I am attending the [InterConnect Conference] in Las Vegas, Feb 21-25, 2016. This is IBM's premier Cloud & Mobile conference for the year.
4955A IBM and Box: Delivering Hybrid Solutions for Enterprise Content Management
Rich Howarth, IBM VP of Enterprise Content Management, and Rand Wacker, Vice President at Box, co-presented this session on the [IBM-and-Box partnership]. The partnership integrates IBM content management, social and analytics products with the Box cloud content management offering, enabling enterprise customers to deploy hybrid solutions that leverage the best of their existing on-premises technologies along with new cloud technologies.
IBM and Box are partnering to re-imagine content management, case management and governance in the cloud. For example, IBM StoredIQ, which scans various data sources to find documents and evidence needed to defend yourself against lawsuits, can now be run against files uploaded to Box.
On a personal note, the IBM Tucson Executive Briefing Center where I work now uses Box to upload presentation files that are then sent to the client attendees.
6524A The Role of Tape in a Cloud-Based World for Economical and Secure Data Retention
This was a 50/50 session. The first half was presented by Shawn Brume, IBM, who covered Linear Tape File System (LTFS) and IBM Spectrum Archive.
Like the cloud, tape has made great strides -- evolving independently in capacity, durability and data access capability while maintaining its economic benefits. As a result, today's tape is just as well suited to cloud service providers as it is to the enterprises and midsize organizations that rely on it to support their production and data protection strategies.
If a cloud service provider does not use tape, the provider and its customers are almost guaranteed to experience higher long-term costs than necessary, and a disk-only MSP model puts their oldest and most compliance-sensitive data at risk. See how incorporating tape into your storage strategy can reduce costs and improve MSP margins.
How does tape compare to disk for Cloud providers? A [Zettabyte] of data would cost $41 billion per year on disk, but only $8 billion per year on tape. Powering a Zettabyte of data requires 1.2 gigawatts for disk, but only 300 megawatts for tape.
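These figures scale linearly, so the quoted zettabyte totals reduce to a simple per-terabyte comparison. A minimal sketch using only the numbers cited above, with 1 ZB taken as one billion TB in round decimal units:

```python
ZB_IN_TB = 1_000_000_000  # 1 zettabyte expressed in terabytes (decimal units)

def per_tb(total_for_one_zb):
    """Scale a figure quoted per zettabyte down to per terabyte."""
    return total_for_one_zb / ZB_IN_TB

# Figures quoted in the session: $41B/yr on disk vs $8B/yr on tape,
# and 1.2 GW vs 300 MW of electricity to hold a zettabyte.
disk_cost_tb = per_tb(41_000_000_000)   # dollars per TB per year on disk
tape_cost_tb = per_tb(8_000_000_000)    # dollars per TB per year on tape
disk_watts_tb = per_tb(1_200_000_000)   # watts per TB on disk
tape_watts_tb = per_tb(300_000_000)     # watts per TB on tape

print(f"Tape is ~{disk_cost_tb / tape_cost_tb:.1f}x cheaper per TB-year")
print(f"Tape draws ~{disk_watts_tb / tape_watts_tb:.0f}x less power per TB")
```

At roughly $8 versus $41 per TB-year, tape comes out about five times cheaper and about four times more power-efficient than disk for this workload.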
For access to files that require a tape mount, time to first byte averages 45 seconds, with a worst case around 75 seconds. After that, tape can stream data as fast as the Internet can deliver it, so beyond first-byte access, performance is not an issue.
The second half was presented by Michael Piltoff, from value-added reseller Champion Solutions Group, covering their latest product, EchoLeaf. EchoLeaf runs on Windows or Linux, attaches to any IBM tape library, and exports the files on the tape cartridges over NFS or CIFS/SMB.
In other words, the entire library appears as a single mount point or drive letter, and each tape cartridge appears as a sub-directory. This uses IBM Spectrum Archive Library Edition under the covers.
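The resulting view can be pictured as an ordinary directory tree. A hypothetical sketch of how a client might enumerate cartridges once such a library is mounted (the barcode-style names and layout below are made up for illustration, not EchoLeaf's actual format):

```python
import os
import tempfile

def list_cartridges(mount_point):
    """Each subdirectory of the mount point represents one tape cartridge."""
    return sorted(
        name for name in os.listdir(mount_point)
        if os.path.isdir(os.path.join(mount_point, name))
    )

# Simulate a mounted library with three cartridges; in real life this
# would be the NFS or CIFS/SMB mount point exported by the gateway.
mount = tempfile.mkdtemp()
for barcode in ("JD0001L5", "JD0002L5", "JD0003L5"):
    os.makedirs(os.path.join(mount, barcode, "projects"))

print(list_cartridges(mount))  # → ['JD0001L5', 'JD0002L5', 'JD0003L5']
```

The appeal of this design is that any application that can read a network drive can read tape, with no tape-specific APIs involved.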
4759A Cloud Storage Success: MSPs and Enterprises Reveal their Secrets
How do you distinguish fact from fiction when it comes to claims made by vendors about storage for cloud? Eric Herzog, IBM Vice President Marketing for IBM Storage Systems, served as emcee for a panel of experts using IBM Storage solutions across different industries for their Hybrid Cloud deployments.
The panel shared their experiences using various technologies to get the most out of their private and hybrid clouds, discussed how they are building out their next-gen data centers to cope with today's business needs, and talked about how they are using flash and software-defined storage to position themselves to succeed in the future.
On the panel were:
Richard Spurlock, Cobalt Iron, using PB of storage on Spectrum Scale and Cleversafe
Paul Rafferty, IBM Silverpop, using Spectrum Accelerate with different Cloud providers
Johnny Oldenburg, Tieto Sweden AB, using SVC, Storwize V7000 and FlashSystem
Keith Dobbins, Time Warner Cable/Navisite, over 30 fully-populated XIV storage systems
Here were some of the nuggets of wisdom:
Eliminate the debate between private or public cloud. Consider everything to be a unique shade of Hybrid Cloud.
Get the network right; in the Cloud, all data and management control flows through the network.
Take an "Outside-In" approach, focusing on the business problems being solved, rather than trying to exploit specific technologies.
Workloads are unpredictable in the Cloud, and clouds can sometimes be unreliable in their response to workload changes. Partner with vendors like IBM to provide the support and scalability to handle the unexpected.
Ensure that you comply with government and industry regulations. For example, Payment Card Industry Data Security Standard [PCI-DSS] for credit card transactions.
Use VMware Storage vMotion and VVols to migrate data from one Cloud to another.
Software-defined networking (SDN) and software-defined storage (SDS) greatly automate the provisioning process, pushing many storage admin tasks down to NOC personnel.
Use tools like Spectrum Control to provide a single-pane-of-glass management of your entire environment.
Build abstraction layers at touch points to avoid being impacted by external changes, and use documented reference architectures to ensure success.
Educate your clients and end-users on what is possible, and what is probable, in the Cloud.
Use "Flash Cache" technologies, such as IBM XIV, Oracle, Spectrum Scale, and VMware.
Analytics can help with "data rationalization", which identifies the business value of the data.
Object Store is a first-class citizen and should seriously be considered for new projects.
5467A My Data is Out of Control! Managing the Lifecycle of Your Data with "Big Storage" Cloud Archive
Jeff Karmiol and Quaid Nasir, both from IBM, presented a technology preview of a deep archive to be launched later this year.
A staggering 80 percent of data is never touched after 90 days of capture or creation. However, the data may need to be kept for business, compliance or regulatory reasons.
"Big Storage" offers cloud storage for customers who need to store large amounts of data and retrieve it on-demand at the lowest cost possible. This easy-to-use cloud service provides fast retrieval times with affordable, transparent pricing and retrieval rates.
This service uses standard OpenStack Swift and POSIX interfaces so you don't need to learn any new APIs. Files and objects remain visible while archived, making it easy and affordable to continue to extract business value from your archived data.
This deep archive is located in a secure, IBM-managed data center. How deep? The facility is 350 feet under a mountain, which allows the tape cartridges to be kept at constant humidity and a steady 40 degrees Fahrenheit.
Multiple resiliency and data protection options will be available. The data can be part of a global namespace, with some data on premises, connected to data migrated to the archive. Data movement can be either manually-initiated or policy-managed.
7256: Blogging 301: The Art of Opinion
"Turbo" Todd Watson and I started blogging 10 years ago, and we have both been ranked in the top-10 bloggers for IBM. He presented a series covering the basics of blogging. This session was a deeper dive into best writing practices and structures for being confident, engaging, and convincing in their writing.
Here are some of his bits of wisdom:
Base your opinions on facts and well-researched information.
Educate your readers, without being "preachy"
Generate interest and enthusiasm, and encourage readers to participate
Don't equivocate, pick a position or side of a debate and stick with it
Leave your reader with the next logical step, a call to action, or pointer to additional information
Forrester analysts kicked off the keynote sessions for Day 1 of the Forrester IT Forum 2009 event. The theme for this conference is "Redefining IT's value to the Enterprise." Rather than focusing on blue-sky futures that are decades away, Forrester wants instead to present a blend of pragmatic information that is actionable in the next 90 days, along with some forward-looking trends.
If you ask CEOs how well their IT operations are doing, 75 percent will say they are doing great. However, if you dig down, and ask how their companies are leveraging IT to help generate revenues, reduce costs, improve employee morale, drive profits, improve customer service, or manage risks, then the percentage drops down to 30 to 35 percent.
What are the root causes of this "perception gap" in value between business and IT? Several ideas come to mind:
Some CEOs still consider IT departments as "cost centers". Rather than exploiting technology to help drive the rest of the business, they are seen as a necessary evil, an extension of the accounting department, for example.
Some CEOs consider IT's role as basically "keeping the lights on". They only notice IT when the lights go out, or other business outages caused by disruptions in IT.
IT departments measure themselves in technology terms, not business terms. CEOs and the rest of the senior management team may not be "tech savvy", and the CIO and IT directors may not be "business savvy", resulting in failure to communicate IT's role and value to the rest of the business.
This conference is focused on CIOs and IT professionals, and how they can bridge the tech/business gap. The first two executive keynote presentations emphasized this point.
Bob Moffat, Senior VP and Group Executive, IBM
Bob Moffat (my fifth-line manager, or if you prefer, my boss's boss's boss's boss's boss) is the Senior VP and Group Executive of IBM's Systems and Technology Group, which manufactures storage and other hardware. He presented how IBM is helping our clients deploy smarter solutions. Globalization has changed world business markets, the reach of information technology, and our clients' needs. To support that, IBM is focused on making the world a smarter planet: instrumented with appropriate sensors, interconnected over converging networks, and intelligent enough to provide visibility, control and automation.
It's time to rethink IT in light of these new developments, to think about IT in client terms, with business metrics. Bob gave several internal and customer examples, here's one from the City of Stockholm:
Covering nine square miles of Stockholm, Sweden, IBM led [the largest project of its kind] in Europe to address traffic congestion. To reduce congestion caused by 300,000 vehicles, the City of Stockholm enacted a "congestion fee" with real-time recognition of license plates and a Web infrastructure to collect payments. The analytics, metrics and incentives have paid off: since August 2007, traffic is down 18 percent, travel time on inner streets has been reduced, and "green" vehicles are up 9 percent.
In addition to smarter traffic, IBM has initiatives for smarter water, smarter energy, smarter healthcare, smarter supply chain, and smarter food supply.
Dave Barnes, Senior VP and CIO, United Parcel Service (UPS)
Dave Barnes must act as the "trusted advisor" to the rest of the senior management team. UPS delivers packages worldwide. They put sensors on all of the vehicles, not just to know how fast they were driving, but also how often they drove in reverse gear, and sensors on the engines to determine maintenance schedules. Analytics found that driving in reverse was the most dangerous, and by providing this information to the drivers themselves, the drivers were able to come up with their own innovative ways to minimize accidents. This is one role of IT: to provide employees the information they need to enable them to be better at their own jobs.
Dave also mentioned the importance of collaborating across business units. Their "Information Technology Steering Committee (ITSC)" has 15 members, of which only three are from the IT department. This helped deploy social media initiatives within UPS. For example, Twitter has been adopted so that senior management can get unfiltered customer feedback. This is perhaps another key role of IT, to flatten an organization from cultural hierarchies that prevent top brass up in the ivory tower from hearing what is going wrong down on the street. Too often, a customer or client complains to the nearest employee, and this may or may not get passed up accurately along the chain of command. Twitter allowed executives to see what was going on for themselves.
Dave also covered the "Best Neighbor" approach. If you were going to build a deck in your back yard, you might ask your neighbors that have already done this, and learn from their experience. Sadly, this does not happen enough in IT. To address this, UPS has a "Tech Governance Group" that focused on business process across the organization. For example, they improved "package flow", reducing 100 million miles in the past few years.
Lastly, he mentioned that many technologists are "loners". They have a few like that, but they try to hire techies who look to team across business units instead. Likewise, they try to hire business people who are somewhat tech savvy. For example, they have encouraged business employees to write their own reports, rather than requesting new reports to be developed by the IT department. The end result: the business people get exactly the reports they want, faster than waiting for IT to do it. Another role for IT is to provide end-users the tools to make their own reports.
(Dave didn't mention what tools these were, but it sounded like the Business Intelligence and Reporting Tools [BIRT] that IBM uses.)
These two sessions were a great one-two punch to the audience of 600 CIOs and IT professionals. First, IBM sets the groundwork for what needs to be done. Then, UPS shows how they did exactly that, adopting a dynamic infrastructure and got great results. This is going to be an interesting week!
Today we watched Barack Obama get inaugurated as the 44th President of the United States, and he reminded all Americans that the power and strength of this country comes through its diversity. To some extent, this is also what gives IBM its power and strength. While not quite the orator that President Obama is, IBM's own CFO, Mark Loughridge, gave a rousing speech about IBM's 4Q08 and year-end financial results.
In 2008, IBM was not just successful because it had a wide diversity of servers and storage hardware products, but also a diversity of software, and a diversity of service offerings. And lastly, IBM sells to a diversity of clients in different industries, throughout a diversity of markets. While the current economic meltdown might have affected businesses focused on the US and other major markets, IBM did particularly well last year in growth markets, including the so-called BRIC countries (Brazil, Russia, India and China).
IBM's approach to invest in R&D and its nearly 400,000 employees for long-term success continues to pay off. Where "Cash is King", IBM can also afford all those acquisitions and strategic initiatives, positioning the company for a brighter future.
Where there are challenges, IBM finds opportunity.
Well, this is completely off-topic, but now that I have a Bluetooth-enabled ThinkPad T60, I have been interested in this new wireless technology. I have a Bluetooth cell phone, a Bluetooth wireless headset, and my ThinkPad, and they all work together seamlessly. I am able to speak on my cell phone through my headset, listen to music and videos on my laptop through my headset, and even dial in to the IBM network through my cell phone, all without any cables!
A variation of the Wi-Fi soup-can antenna has emerged for intercepting Bluetooth signals. Check out this cool BlueSniper Rifle.
I am saddened to learn that one of my favorite comedians, [George Carlin], passed away yesterday. He was famous for a skit about "seven words" you could not say on television. A few of those came to mind in the responses I got to my post [Yes, Jon, There is a mainframe that can help replace 1500 x86 servers], which attempted to provide an answer to a simple question about the IBM System z10 Enterprise Class (EC) mainframe.
Jon: So, where is the 1500 number coming from? Tony: I’ll investigate and get back to you.
My post tried to explain how IBM estimated that number. However, my fellow blogger from Sun, Jeff Savit, posted on his blog [No, there isn't a Santa Claus] in response. (If Sun's shareholders are expecting anything other than a [lump of coal] under the tree this year, they should probably read Sun's press release about their last [financial results].) A few others contacted me about this also, from a bunch of rather different angles, from reverse-engineering emulation of other companies' chipsets to my use of internal codenames. (There are now MORE than seven words I can't type in this blog!) Jon is just trying to gather information, but his [head hurts] from all of this debate.
This week I will try to clarify some of the confusion.
The IBM Storage and Storage Networking Symposium continues ...
DS8300 Benchmark for Global Mirror
Phil Allison of Fidelity National Information Services presented his success switching from a competitor over to IBM DS8300 disk systems for use with Global Mirror. They used Performance Associates' famous PAIO driver to help with the benchmark testing. They ran the benchmarks at 2x and 3x their current workloads to see how well the DS8000 performed, measuring IOPS, MB/sec, and millisecond response time (msec). They were very impressed with their results, staying below their target 0.8 msec for most of their runs.
For Global Mirror, they did a performance "bake-off" between the Ciena CN2000 and the Cisco 9216i. These are implemented differently. Ciena uses a Layer-2 approach, encapsulating the Fibre Channel packets directly for transport over SDH/SONET or Gigabit Ethernet (GigE), which required dedicated circuits between Jacksonville, Florida and Little Rock, Arkansas. By contrast, Cisco uses a Layer-3 approach, encapsulating Fibre Channel packets within IP packets, which can leverage an existing datacenter-to-datacenter backbone.
To add stress to the benchmarks, they used a "network impairment" emulator. These artificially inject errors, drop packets, and simulate other signal loss conditions. Running both Cisco and Ciena under these tests helped them decide which to purchase, and also reinforced the idea that they made the right choice in choosing IBM for their remote distance mirroring solution.
Comparison of Bare Machine Recovery Techniques
"Bare machine recovery" is the phrase used to restore a machine that has no operating system installed (or thewrong operating system). Dave Canan from IBM Advanced Technical Support did a great job reviewing the variousproducts and techniques available, and the pros and cons of each approach. The ones he covered were:
Tivoli Storage Manager - install a fresh Windows operating system and the TSM client, then follow certain steps
Automated System Recovery (ASR) - a new feature of Windows XP and Windows 2003 that works with the TSM client
Symantec Ghost - formerly called PowerQuest Drive Image, there are now two versions: Ghost Home Edition and Ghost Corporate Solution Suite
Cristie Bare Machine Recovery (CBMR) - this IBM partner provides both Linux and Windows PE versions. Cristie includes a license for Windows PE, so there is no need to use the alternative Bart PE method.
SAN Volume Controller - Customer Experience
Bill Giles of Catholic Medical Center, a hospital in New Hampshire, presented his experiences with IBM System Storage SAN Volume Controller. They have a mix of IBM System x, System p, and System i servers, as well as machines from HP, Sun, and Dell. For applications, they have a Picture Archiving and Communication System (PACS) for cardiology and radiology, an HL7 interface engine, a Clinical Information System, TSM for backup, and Microsoft Exchange for e-mail.
They deployed SVC with AIX, Solaris, and Windows 2000 and 2003 servers. They were delighted with the results:
Centralized storage provisioning
Consolidation of disparate storage onto a universal platform
Non-disruptive data migration
Increased utilization of existing disk resources
Improved disaster recovery with FlashCopy and Metro Mirror
Birds of a Feather (BOF) sessions
We had two BOFs, one for storage attached to System z operating systems, and another for storage attached to Linux, UNIX and Windows systems. This distinction made sense when mainframes could only attach to CKD disks and ESCON/FICON tape, and distributed systems could only do FCP/SCSI, but these days, there are all kinds of convergence going on.
Linux on System z can now attach via FCP to LTO tape and SAN Volume Controller, allowing a wide range of storage options for that platform. z/OS, z/VM, z/VSE and Linux on System z can all access IBM System Storage N series via NFS.
The format was a traditional Q&A panel: we had experts at the front of the room handling the questions and discussion topics brought up by the audience. I'll spare you the individual questions and answers.
It seems like [only yesterday] I was talking about IBM's strategic initiatives for the New Enterprise Data Center, including the launch of asset and service management at [Pulse 2008] in Orlando, Florida.
This week, my colleagues are at [Pulse 2009] in Las Vegas, Nevada. (I'm not there this time, so stop asking all my colleagues where I am!) Obviously, a lot has changed in the last 12 months: the world's financial economy has collapsed, our delicate environment continues to unravel, and a new US President was elected to fix all that was broken by the former occupant. As a result, IBM's strategy has evolved beyond just data centers for large enterprises.
I can't think of a better time to emphasize the need for a more dynamic infrastructure. And this is not just focused on IT operations, but on smarter business infrastructure as well, as the two are now very much intertwined: everything from smarter healthcare, smarter telecom, smarter retail, smarter distribution, and smarter transportation, to smarter financial services. IBM's [Dynamic Infrastructure®] is one of four strategic initiatives to help build a smarter planet.
Let's take a quick look at the key benefits:
Do you remember back to the days that the IT department was like the accounting department in the back office, merely recording what happened in a series of transactions? Not anymore! Today, IT is front and center of most businesses, helping to generate revenue, drive innovation, and provide better customer service. We are finding a convergence between the physical world of running business with the digital world of IT. Intelligence is everywhere, embedded in systems and operations throughout, not just in a data center.
Imagine: only 10-15 years ago, the primary concern for IT operations was the cost of hardware. Now, thanks to [Moore's law], hardware is cheaper, but other IT budget costs like labor, management software, power and cooling are growing faster and becoming more predominant factors. IBM recognizes that you must consider the total cost of ownership, not just the acquisition cost of new hardware. But again, this isn't just about reducing the costs of IT, but about making more effective use of IT resources to reduce costs everywhere else, in scheduling transportation, in managing manufacturing assets, and so on.
While the world feels much safer now that Barack Obama has taken over, there are still risks and threats out there, and businesses large and small have to manage them. Economic swings like we have experienced lately help weed out those companies that had fixed costs and static infrastructures, in favor of those with more variable costs and dynamic infrastructures. When the marketplace slows down, can your business "dial down" its operations to match? And when the recession is over and business is booming again, can your business "ramp up" fast enough to take on new opportunity? With IBM's Cloud Computing, companies can minimize their fixed investments and use a variable amount of computing as business needs change dynamically.
To learn more about Dynamic Infrastructure, read the IBM [Press Release].
IBM announced the industry's first corporate-led initiative to enable clients to earn energy efficiency certificates for reducing the energy needed to run their data centers. For the first time, this provides a way for businesses to attain a certified measurement of their energy use reduction, a key, emerging business metric. The certificates can be traded for cash on the growing energy efficiency certificate market or otherwise retained to demonstrate reductions in energy use and associated CO2 emissions. The Efficiency Certificates initiative engages Neuwing Energy Ventures, a leading verifier of energy efficiency projects and marketer of energy efficiency certificates.
How it works:
The Neuwing Energy assessment is a two-part evaluation: 1) determine the initial energy draw of the data center or IT equipment identified for consolidation, based on industry-accepted energy estimates for the servers in use and the power and cooling profiles of the data center, and 2) review the energy draw again after steps designed to reduce energy consumption have been taken.
Neuwing Energy will issue customers an Efficiency Certificate for the total megawatt-hours of energy no longer needed to power and cool their data center or operate IT equipment. In exchange for the assessment, Neuwing Energy will keep a portion of each customer's earned certificates or charge a fee per MWh saved.
Customers can trade earned Efficiency Certificates on the energy efficiency certificate market or they can retain their certificates, using them to demonstrate reductions in energy use and associated CO2 emissions.
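The two-part assessment boils down to straightforward arithmetic on the before-and-after energy draw. A sketch under assumed numbers (the draw figures and the fee rate below are hypothetical illustrations, not Neuwing's actual terms):

```python
def certificate_mwh(baseline_kw, post_kw, hours=8760):
    """Megawatt-hours per year no longer needed after the efficiency project."""
    return (baseline_kw - post_kw) * hours / 1000.0

# Hypothetical data center: 500 kW draw before consolidation, 350 kW after,
# running around the clock (8760 hours/year).
saved_mwh = certificate_mwh(500, 350)
fee = saved_mwh * 2.00  # assumed $2-per-MWh assessment fee, purely illustrative

print(f"{saved_mwh:.0f} MWh of certificates earned; ${fee:,.0f} assessment fee")
```

The certificate value then depends on whether the customer trades the MWh on the efficiency market or retains them to document CO2 reductions.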
IBM intends to make the Efficiency Certificates program available across its entire line of server and storage offerings.
Continuing my week in Tokyo, Japan, I was going to title this post "Chunks, Extents and Grains", but decided instead to use the fairy tale above.
Fellow blogger BarryB from EMC, on his The Storage Anarchist blog, once again shows off his [PhotoShop talents], in his post [the laurel and hardy of thin provisioning]. This time, BarryB depicts fellow blogger and IBM master inventor, Barry Whyte, as Stan Laurel and fellow blogger Hu Yoshida from HDS as Oliver Hardy.
At stake is the comparison of the various implementations of thin provisioning among the major storage vendors. On the "thick end", Hu presents his case for 42MB chunks in his post [When is Thin Provisioning Too Thin]. On the "thin end", IBMer BarryW presents the "fine-grained" details of Space-Efficient Volumes (SEV), made available with the IBM System Storage SAN Volume Controller (SVC) v4.3, in his series of posts:
BarryB paints both implementations as "extremes" in inefficiency. Some excerpts from his post:
"... Hitachi's "chubby" provisioning is probably more performance efficient with external storage than is the SVC's "thin" approach. But it is still horribly inefficient in context of capacity utilization.
... the "thin extent" size used by Symmetrix Virtual Provisioning is both larger than the largest that SVC uses, and (significantly) smaller than what Hitachi uses."
"free" may be the most expensive solution you can buy...
Before you rush off to put a bunch of SVCs running (free) SEV in front of your storage arrays, you might want to consider the performance implications of that choice. Likewise, for Hitachi's DP, you probably want to understand the impact on capacity utilization that DP will have. DP isn't free, and it isn't very space efficient, either."
BarryB would like you to think that since EMC has chosen an "extent" size between 257KB and 41MB, it must therefore be the optimal setting: not too hot, and not too cold. As I mentioned last January in my post [Does Size Really Matter for Performance?], EMC engineers had not yet decided what that extent size should be, and BarryB is noticeably vague on the current value. According to this [VMware whitepaper], the thin extent size is currently 768 KB. Future versions of the EMC Enginuity operating environment may change the thin extent size. (I am sure the EMC engineers are smarter and more decisive than BarryB would lead us to believe!)
BarryB is correct that any thin provisioning implementation is not "free", even though IBM's implementation is offered at no additional charge. Some writes may be slowed down waiting for additional storage to be allocated to satisfy the request, and some amount of storage must be set aside to hold the metadata directory that points to all these chunks, extents or grains. For the convenience of not having to manually expand LUNs as more space is needed, you will pay both a performance and capacity "price".
However, as they say, the [proof of the pudding is in the eating], or perhaps I should say porridge in this case. Given that the DMX4 is slower than both the HDS USP-V and the IBM SVC, you won't see EMC publishing industry-standard [SPC benchmarks] using their "thin extent" implementation anytime soon. IBM allows a choice of grain size, from 32KB to 256KB, in an elegant design that keeps the metadata directory overhead between 0.1 and 0.5 percent. I would be surprised if EMC can make a case to be more efficient than that! The performance tests are still being run, but from what I have seen so far, people will be very pleased with the minimal impact from IBM SEV, an acceptable trade-off for improved utilization and reduced out-of-space conditions.
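That overhead range is easy to sanity-check with a one-entry-per-grain model. A rough sketch, assuming a fixed 160-byte directory entry per grain (the entry size is my assumption for illustration only, not the actual SVC metadata layout):

```python
def metadata_overhead(grain_bytes, entry_bytes=160):
    """Fraction of capacity consumed if each grain needs one directory entry."""
    return entry_bytes / grain_bytes

# Sweep the grain sizes SVC offers, from 32KB up to 256KB.
for grain_kb in (32, 64, 128, 256):
    pct = metadata_overhead(grain_kb * 1024) * 100
    print(f"{grain_kb:>3} KB grain -> {pct:.2f}% metadata overhead")
```

With that assumed entry size, a 32KB grain lands near the 0.5 percent end of the range and a 256KB grain around 0.06 percent, which is broadly consistent with the figures quoted above: smaller grains buy finer-grained allocation at the cost of a bigger directory.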
So if you are a client waiting for your EMC equipment to be fully depreciated so you can replace it with faster equipment from IBM or HDS, you can at least improve its performance and capacity utilization today by virtualizing it with IBM SAN Volume Controller.
Did you miss your chance to attend Storage Networking World last week? IBM has some upcoming conferences that might be of interest to you.
IBM Systems Conference 2009
In this inaugural event, IBM executives, developers and industry experts will reveal the latest innovations, trends and directions. Over three full days, you will see demonstrations of the technologies needed to transform and respond effectively in these economic times.
There will be three tracks:
IBM Systems -- Including storage, mainframe, POWER and x86 systems
Solutions for a Dynamic Infrastructure
Professional Development -- including negotiation skills, project management and TCO analysis
IBM System Storage and Storage Networking Symposium
If the above conference is too broad, we have a more storage-specific conference. The [IBM System Storage and Storage Networking Symposium] brings IBM storage developers, architects, technical experts, solution providers and customer speakers together in one place to show you how to address the growing challenge of managing and securing retention-managed data. You'll also learn about the latest IBM System Storage™ portfolio product announcements.
I have spoken at these conferences perhaps 12 of the last 14 years. The list of presenters has not yet been finalized, so I do not yet know if I will be there this year.
Two exciting things are new this year. First, instead of San Diego or Las Vegas, it will be held in Chicago, Illinois! Second, you get a two-for-one with the [IBM System x and BladeCenter Technical Conference]. That's right, they are co-located in Chicago so that you can attend sessions from both! Perhaps you spend 80 percent of your time on storage and 20 percent on x86 servers, or 80 percent on servers and 20 percent on storage; now you can register for one price and decide when you get there.
If you act soon, you can save money with the early-registration discount by May 31.
Hopefully, this will give you enough time to plan and make travel arrangements!