This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years with IBM Systems, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist during his time in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this solution, a key part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Yes, it's Tuesday, and that means more IBM Announcements! A lot was announced today, so I have selected an eclectic mix for your enjoyment.
Microsoft Windows support on IBM Mainframes
Last year's announcement of the new IBM zEnterprise included the zEnterprise BladeCenter Extension (zBX), which could run POWER7 and x86 operating systems managed by the mainframe's overall Unified Resource Manager. Initially, this was intended for AIX and Linux-x86, but today IBM [announced a statement of general direction to support Microsoft Windows] on the zBX extension by the end of this year. Of course, the standard disclaimer applies: All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party's sole risk and will not create liability or obligation for IBM.
New 15K RPM drives for IBM Storwize V7000
Last October, when IBM introduced the Storwize V7000, it offered both large form factor (3.5 inch) and small form factor (2.5 inch) drives. Unfortunately, a few people were upset that there were no 15K RPM drives for the small form factor models. There were SSD and 10K RPM drives, but nothing in between. Today, IBM [announced that new 146GB 15K RPM drives] have been qualified for both the controller and expansion drawers.
New RVU licensing for IBM Tivoli products
IBM [announced it is changing to a new RVU licensing model] for several Tivoli products, replacing the previous PVU license based on processor value units. What is an RVU? A resource value unit (RVU) is a unit of measure by which the program can be licensed. RVU Proofs of Entitlement (PoE) are based on the number of units of a specific resource used or managed by the program. This makes sense: resource management software should be charged by the amount of resources you manage, not the size of the server the software runs on. This change also better accommodates server virtualization and the live movement of VM guest images from one type of host machine to another.
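The difference between the two models can be sketched in a few lines. This is an illustrative sketch only; the prices, PVU ratings and resource counts below are invented for the example and are not IBM's actual terms.

```python
# Illustrative sketch of PVU vs. RVU charging; every number here is invented
# for the example and is not IBM's actual pricing.

def pvu_charge(processor_value_units: int, price_per_pvu: float) -> float:
    """Old model: the charge scales with the server the software runs on."""
    return processor_value_units * price_per_pvu

def rvu_charge(managed_units: int, price_per_rvu: float) -> float:
    """New model: the charge scales with the resources the software manages."""
    return managed_units * price_per_rvu

# Managing the same 50 TB from a small host vs. a large host:
print(pvu_charge(240, 10.0))   # small 4-core box: 2400.0
print(pvu_charge(960, 10.0))   # same workload moved to a bigger box: 9600.0
print(rvu_charge(50, 40.0))    # RVU: 2000.0 on either host
```

Under RVU terms, moving the VM guest to a larger host changes nothing, which is exactly what makes the model friendly to live migration.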
If you are contemplating a visit to an IBM [Executive Briefing Center], then April and May is a great time to come to Tucson. The weather is ideal here. The cold snap appears to be over, and spring is in the air!
This week, I will be in Las Vegas for the 30th annual [Data Center Conference]. For those on Twitter, follow the conference on hashtag #GartnerDC, and follow me at [@az990tony].
Once again, I will be working the IBM Exhibition Booth of the Solution Showcase, attending keynote and break-out sessions, and meeting with clients and analysts. Today is mostly setting up the booth, getting my registration badge and materials, an orientation meeting for first-timers, and finish off the evening with a networking event to get the party started!
Traffic to and from the hotel was a mess today because of the [Las Vegas Strip at Night Rock-n-Roll Marathon]. The entire Las Vegas Boulevard was blocked off from 2pm to 11pm, causing taxis some headaches getting to and from each hotel. This marathon included a "Stiletto Dash" where women had to run in shoes that had at least three inch heels! (Only in Las Vegas!)
The conference is organized into 8 tracks:
Navigating the Journey to Cloud-Delivered Services
Achieving and Maintaining IT Operational Excellence
Modernizing Your Storage Strategy to Keep Pace with Burgeoning Demand
Ensuring Your Business Continuity Management Plan Reflects Today's Realities and Tomorrow's Challenges
Virtualization: Moving at Light Speed While Leveraging Your Existing Investments
The Future of Servers and Operating Systems
Data Center Modernization: Staying Agile in Chaotic Times
Pervasive Mobility: What Infrastructure and Operations Needs to Know Now
I am glad to see that storage got its own track this year! If you are attending the conference, here are the sessions that IBM is featuring for Monday:
IBM: Watson and Your Data Center
This is a lunch-time talk. Steve Sams, IBM VP of Sites and Facilities, will explain how to leverage Watson-like analytic approaches to provide flexible, cost-effective data center solutions. Analytics can be used to better align IT to the business needs, optimize server, storage and network utilization and improve data center design.
IBM: University of Rochester Medical Center cracks the code on data growth
Rick Haverty, Director of Infrastructure for University of Rochester Medical Center (URMC), will discuss how his team built a storage strategy that transformed their environment to bring savings right to their bottom line without sacrificing the speed, criticality and performance requirements of their imaging and EMR systems. I will be there to introduce Rick at the beginning, and then moderate the Q&A after the talk.
Solution Showcase Reception
The Solution Showcase opens up Monday night with a reception, serving food and drinks. Look for the IBM Portable Mobile Data Center (PMDC), the big trailer on the show floor. We also have an exhibit booth, across from the PMDC, to ask questions and talk with various IBM experts. You can look for me and the other experts wearing white lab coats!
Continuing my week in Las Vegas for the Data Center Conference 2009, I attended a keynote session on Service Management. There were two analysts that co-presented this session.
One analyst was the wife of a real CEO, and the other was the wife of a real CIO, so the two analysts explained that there is a language gap between IT and business. They used the analogy of a clock: the business cares that the time shown on the front face is correct and ticking along properly, while behind the scenes the gears of the clock represent IT, finance, supply chain and other operations.
Based on recent surveys, there is a 45 percent "alignment" between the goals of a CEO and those of a CIO. CEOs are concerned about decision making, workforce productivity, and customer satisfaction. CIOs, on the other hand, are worried about costs, operations and change initiatives. Both CEOs and CIOs are focused on innovations that can improve business process. Service management strives to close the language gap between business and IT by helping to drive operational excellence that benefits both CEO and CIO goals. Recent surveys found the key drivers for this are controlling costs, improving customer satisfaction, availability, agility and making better business decisions.
Unfortunately, in this economy, the idea of "transformation" is out, and "restructuring" is in. In much the same way that employees have abandoned career development in favor of simple job preservation, companies are focused on tactical solutions to get through this financial meltdown, rather than launching transformation projects like deploying Service Management tools.
How much influence does the CIO have on running the rest of the business? Various surveys have found the following, ranked from most influential to least:
5-9 percent, Enterprise Leader
15-18 percent, Trusted Ally
25-32 percent, Partner
27-35 percent, Transactional
7-20 percent, At Risk
Those in the bottom rank not only have little or no influence, but are at risk of losing their jobs. Evaluations based on a maturity model find many I&O operations in trouble, with 11 percent taking some proactive measures, 59 percent committed to improvement, and 30 percent merely aware of the problems.
IT Service Management tries to bring a similar discipline as Portfolio Management and Application Lifecycle Management. Why can't IT be treated like any other part of the business portfolio? What is the business value of IT? IT can help a business run, grow and even transform. IT can help consolidate and centralize shared services to help leverage resources and offer cost optimizations not just for itself, but for the business as a whole.
CIOs that can adopt IT Service Management can have a "Jacks or Better" chance for a seat at the executive table to help drive the business forward.
I have created blog categories, based on our System Storage offering matrix, which you can track individually:
Disk systems, including the IBM System Storage DS Family of products, SAN Volume Controller, N series, as well as features unique to these products, such as FlashCopy, MetroMirror, or SnapLock.
Tape systems, including the IBM System Storage TS Family of products, tape-related products in the Virtualization Engine portfolio, drives, libraries and even tape media.
Storage Networking offerings, from Brocade, McData, Cisco and others, such as switches, routers and directors.
Infrastructure management, including IBM TotalStorage Productivity Center software, IBM Tivoli Provisioning Manager, IBM Tivoli Intelligent Orchestrator, and IBM Tivoli Storage Process Manager.
Business Continuity, including IBM Tivoli Storage Manager, Tivoli CDP for Files, Productivity Center for Replication software component, Continuous Availability for Windows (CAW), Continuous Availability for AIX (CAA).
Lifecycle and Retention offerings, including our IBM System Storage DR550, DR550 Express, GPFS, Tivoli Storage Manager Space Management for UNIX, Tivoli Storage Manager HSM for Windows, and DFSMS.
Storage services, including consulting, assessments, design, deployment, management and outsourcing.
This week, Allyson Klein, Director of Technical Leadership Marketing from Intel, interviewed me for the Intel® [Chip Chat podcast] to promote the upcoming [IBM Edge conference] to be held June 4-8 in Orlando, Florida. Intel is a big sponsor of the conference. The podcast is only about 8 minutes long. Enjoy!
A faithful reader of this blog, Tom, sent me a link to Orson Scott Card's article titled [PROGRAMMERS AS BEES (or, how to kill a software company)]. "Is there any truth in this?" Tom asked. Having worked both sides of this fence as I approach my 22-year anniversary at IBM, I guess I can venture some opinions on this piece. Let's start with this excerpt:
"The environment that nurtures creative programmers kills management and marketing types - and vice versa."
By this, he means "kills" in the UNIX sense, I imagine, and not the "Grand Theft Auto IV" sense. Different people solve problems differently. Some programmers have the luxury that they can often focus on a single platform, single chipset, single OS, and so on, but marketing types are trying to come up with messaging that appeals to a broad audience, from people with business backgrounds to others with more technical backgrounds, and that can be more challenging. For programmers, "creative" is an adjective; for marketers, it's a noun.
"Programming is the Great Game. It consumes you, body and soul. When you're caught up in it, nothing else matters."
True. As a storage consultant, I find myself writing code a lot, from small programs and scripts to the HTML code for this blog. When you are in the zone, working on something, it is easy to lose track of time.
"Here's the secret that every successful software company is based on: You can domesticate programmers the way beekeepers tame bees. You can't exactly communicate with them, but you can get them to swarm in one place and when they're not looking, you can carry off the honey. You keep these bees from stinging by paying them money. More money than they know what to do with. But that's less than you might think."
I have never tamed bees, but many of my friends who are still programmers are motivated by factors other than maximizing their income, such as friendly co-workers, job security, casual attire, and interesting challenges. A few make more than they know what to do with; the rest have "significant others" who solve that problem for them.
"One way or another, marketers get control. But...control of what? Instead of finding assembly lines of productive workers, they quickly discover that their product is produced by utterly unpredictable, uncooperative, disobedient, and worst of all, unattractive people who resist all attempts at management."
False. Either marketing had control in the first place (a la Apple, Inc.) or it never did. "Control of what?" is the key phrase here.
"The shock is greater for the coder, though. He suddenly finds that alien creatures control his life. Meetings, Schedules, Reports. And now someone demands that he PLAN all his programming and then stick to the plan, never improving, never tweaking, and never, never touching some other team's code."
True. But if you don't like surprises, perhaps software engineering is not the right career path for you.
"The hive has been ruined. The best coders leave. And the marketers, comfortable now because they're surrounded by power neckties and they have things under control, are baffled that each new iteration of their software loses market share as the code bloats and the bugs proliferate. Got to get some better packaging. Yeah, that's it."
This one depends. I've seen teams survive and manage, with junior programmers stepping up to backfill leadership roles, and other times, projects are scrapped, or started anew elsewhere. As for marketers, it doesn't take much to get one baffled, does it?
In North America, today marks the start of the "Give 1 Get 1" program.
Children using the XO laptop
I first learned of this when I was reading about Timothy Ferriss' [LitLiberation project] on his [Four Hour Work Week] blog, was surfing around for related ideas, and chanced upon this. I registered for a reminder, and it came today (the reminder, not the laptop itself).
Here's how the program works. You give $399 US dollars to the "One Laptop per Child" (OLPC) [laptop.org] organization for two laptops: one goes to a deserving child in a developing country, and the second goes to you, for your own child, or to donate to a local charity that helps children. This counts as a $199 purchase plus a $200 tax-deductible donation. For Americans, this is a [US 501(c)(3)] donation; for Canadians and Mexicans, take advantage of the low value of the US dollar!
If your employer matches donations, as IBM does, get them to match the $200 donation for a third laptop, which goes to another child in a developing country. As for shipping, you pay only for the laptop shipped to you; each receiving country covers its own shipping. In my case, the shipping was another $24 US dollars for Arizona. No guarantees that it will arrive in time for the holidays this December, but it might.
To sweeten the deal, T-Mobile throws in a year's worth of "Wi-Fi Hot Spot" access that you can use for yourself, either with the XO laptop itself, or with your regular laptop, iPhone, or other Wi-Fi enabled handheld device.
National Public Radio did a story last week on this: [The $100 Laptop Heads for Uganda], in which they interview actor [Masi Oka], best known from the TV show ["Heroes"], who has agreed to be their spokesman. At the risk of sounding like their other spokesman, I thought I would cover the technology itself, inside the XO, and how this laptop represents IBM's concept of "Innovation that matters"!
The project was started by [Nicholas Negroponte] of [MIT] as the "$100 laptop project". Once the final design was worked out, it turned out to cost $188 US dollars to make, so they rounded it up to $200. This is still an impressive price, and requires that hundreds of thousands of them be manufactured to justify ramping up the assembly line.
Two of IBM's technology partners are behind this project. First is Advanced Micro Devices (AMD), which provides the 433MHz x86 processor, roughly 75 percent slower than my ThinkPad T60's. Second is Red Hat, as the XO runs a lean Fedora 6 version of Linux. Obviously, you couldn't run Microsoft Windows or Apple OS X, as both require significantly more resources.
The laptop is "child size", and would be considered in the [subnotebook] category. At 10" x 9" x 1.25", it is about the size of a class textbook, can be carried easily in a child's backpack, or carried by itself with the integrated handle. When closed, it is sealed well enough to be protected when carried in rain or dust storms. It weighs about 3.5 pounds, less than the 5.2 pounds of my ThinkPad T60.
The XO is "green", not just in color, but also in energy consumption. This laptop can be powered by AC or a human-powered hand crank, with work in place to add options for car-battery or solar charging. Compared to the 20W normally consumed by traditional laptops, the XO consumes 90 percent less, running at 2W or less. To accomplish this, there is no spinning disk inside. Instead, a 1GB FLASH drive holds 700MB of Linux and gives you 300MB to hold your files. There is a slot for an MMC/SD flash card, and three USB 2.0 ports to connect USB keys, printers or other remote I/O peripherals.
The XO flips around into three positions:
Standard laptop position has screen and keyboard. The water-tight keyboard comes in ten languages: International/English, Thai, Arabic, Spanish, Portuguese, West African, Urdu, Mongolian, Cyrillic, and Amharic. (I learned some Amharic, having lived five years with Ethiopians.) There does not appear to be a VGA port, so don't be thinking this could be used as an alternative for projecting PowerPoint presentations onto a big screen.
Built-in 640x480 webcam, microphone and speakers allow the XO to be used as a communication device. Voice-over-IP (VOIP) client software, similar to Skype or [IBM Lotus Sametime], is pre-installed for this purpose.
The basic built-in communications are 802.11g (54 Mbps), which you can use to surf the web over the Wi-Fi at your local Starbucks, and 802.11s, which forms a "mesh network" with other XO laptops, so the laptop can reach the web through whichever nearby laptop is connected to the internet, sharing its bandwidth. This eliminates the need to build a separate Wi-Fi hub at the school. There are USB-to-Ethernet and USB-to-cellular converters, so those might be an alternative option.
Flipped vertically, the device can be read like a book. The screen can be switched between full-color and black-and-white modes, at 200 dpi with a decent 1200x900 pixel resolution. The full-color mode is back-lit and can be read in low lighting. The black-and-white mode is not back-lit, consumes much less power, and can be read in bright sunlight. In that regard, it is comparable to other [e-book devices], like the Cybook or Sony Reader.
Software includes a web browser, document reader, word processor and RSS feed reader for reading blogs. The OLPC project identifies all of the software, libraries and interfaces it uses, so that anyone who wants to develop children's software for this platform can do so.
With the keyboard flipped back, the 6" x 4.5" screen has directional controls and X/Y/A/B buttons to run games, making it comparable to a Nintendo DS or PlayStation Portable (PSP). Again, the choice between back-lit color and sunlight-readable black-and-white screen modes applies. Some games are pre-installed.
So for $399, you could buy a Wi-Fi enabled [16GB iPod Touch] for yourself, which does much the same thing, or you can make a difference in the world. I made my donation this morning, and suggest you--my dear readers in the US, Canada and Mexico--consider doing the same. Go to [www.laptopgiving.org] for details.
Use more efficient disk media, such as high-capacity SATA disk drives
Both are great recommendations, but why limit yourself to what EMC offers? Your x86-based machines are only a subset of your servers,and disk is only a subset of your storage. IBM takes a more holistic approach, looking at the entire data center.
VMware is a great product, and IBM is its top reseller. But in addition to VMware, there are other solutions for the x86-based servers, like Xen and Microsoft Virtual Server. IBM's System p, System i, and System z product lines all support logical partitioning.
To compare the energy effectiveness of server virtualization, consider a metric that can apply across platforms. For example, for an e-mail server, consider watts per mailbox. If you have, say, 15,000 users, you can calculate how many watts you are consuming to manage their mailboxes on your current environment, and compare that with running them on VMware, or logical partitions on other servers. Some people find it surprising that it is often more cost-effective, and power-efficient, to run workloads on mainframe logical partitions (LPARs) than a stack of x86 servers running VMware.
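As a sketch of how such a cross-platform comparison might look in practice (all wattages below are invented for illustration, and `watts_per_mailbox` is a hypothetical helper, not an IBM tool):

```python
# Hypothetical sketch: compare "watts per mailbox" across platforms.
# All wattage figures below are made-up illustrations, not measured values.

def watts_per_mailbox(total_watts: float, mailboxes: int) -> float:
    """Energy-efficiency metric: watts consumed per mailbox served."""
    return total_watts / mailboxes

# 15,000 users served three different ways (illustrative wattages only)
current_x86 = watts_per_mailbox(30 * 400, 15_000)   # 30 standalone x86 servers @ 400 W
vmware_x86  = watts_per_mailbox(6 * 450, 15_000)    # 6 consolidated VMware hosts @ 450 W
mainframe   = watts_per_mailbox(1_500, 15_000)      # share of one LPAR's power draw

print(f"standalone x86: {current_x86:.2f} W/mailbox")   # 0.80 W/mailbox
print(f"VMware hosts:   {vmware_x86:.2f} W/mailbox")    # 0.18 W/mailbox
print(f"mainframe LPAR: {mainframe:.2f} W/mailbox")     # 0.10 W/mailbox
```

The point of the metric is that it is platform-neutral: once you measure actual wattage per environment, the same division applies whether the mailboxes live on x86, VMware guests, or an LPAR.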
More efficient Media
SATA and FATA disks support higher capacities and run at slower RPM speeds, thus using fewer watts per terabyte. A terabyte stored on 73GB high-speed 15K RPM drives consumes more watts than the same terabyte stored on 500GB SATA drives. Chuck correctly identifies that tape is more power-efficient than disk, but then argues that paper is more power-efficient than tape. Paper, however, is not necessarily more efficient than tape.
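The watts-per-terabyte arithmetic can be sketched as follows; the per-drive wattages are assumed round numbers for illustration, not vendor specifications:

```python
# Illustrative watts-per-terabyte comparison of fast vs. high-capacity drives.
# Per-drive wattages are assumptions for illustration, not vendor specs.

def watts_per_tb(drive_capacity_gb: float, drive_watts: float) -> float:
    drives_per_tb = 1000 / drive_capacity_gb   # drives needed to hold 1 TB
    return drives_per_tb * drive_watts

fc_15k = watts_per_tb(73, 15)     # 73 GB 15K RPM FC drive, ~15 W (assumed)
sata   = watts_per_tb(500, 8)     # 500 GB 7.2K RPM SATA drive, ~8 W (assumed)

print(f"15K FC: {fc_15k:.0f} W/TB")   # ~205 W/TB
print(f"SATA:   {sata:.0f} W/TB")     # ~16 W/TB
```

Even with generous assumptions for the fast drives, the capacity difference dominates: you need nearly fourteen 73GB spindles to hold what two 500GB SATA drives hold.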
ESG analyst Steve Duplessie divides data between dynamic and persistent. The best place to put dynamic data is on disk, and here is where the evaluation of FC/SAS versus SATA/FATA comes into play. Persistent data, on the other hand, can be stored on paper, microfiche, optical or tape media. All of these shelf-resident media consume no electricity, nor generate any heat that would require additional cooling.
A study by scientists at the Lawrence Berkeley National Laboratory, titled "High-Tech Means High-Efficiency: The Business Case for Energy Management in High-Tech Industries," indicates that data centers consume 15 to 100 times more energy per square foot than traditional office space. Storing persistent data in traditional office space can therefore save a huge amount of energy. Steve Duplessie feels the ratio of dynamic to persistent data is 1:10 today, but is likely to grow to 1:100 in the near future, making energy-efficient storage of persistent data ever more important to our environment.
Data centers consume nearly 5000 Megawatts in the USA alone, and 14000 Megawatts worldwide. To put that in perspective, Hungary, where I was last week, can generate up to 8000 Megawatts for the entire country (and was using 7400 Megawatts last week as a result of its current heat wave, causing grave concern).
Back in the 1990's, one of the insurance companies IBM worked with kept data on paper in manila folders, and armies of young adults on roller skates were dispatched throughout large warehouses of shelves to fetch the appropriate folder in response to customer service inquiries. Digitizing this paper into electronic format greatly reduced the need for warehouse space, and improved the time to retrieve the data.
A typical file storage box (12 inch x 12 inch x 18 inch) containing typed pages, single-spaced, double-sided, in 12 point font, could hold perhaps 100MB. The same box could hold a hundred or more LTO or 3592 tape cartridges, each storing hundreds of GB of information. That's roughly a million-to-one improvement in space efficiency, and since shelved cartridges need only standard office air conditioning and lighting, a substantial improvement on a watts-per-TB basis as well.
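A quick back-of-envelope check of that ratio, using assumed figures in the spirit of the text (100 cartridges per box at 700GB each):

```python
# Back-of-envelope check on the space-efficiency claim; cartridge count and
# capacity are assumptions in the spirit of the text, not measured figures.

paper_mb_per_box = 100            # typed pages in one 12x12x18 file box
cartridges_per_box = 100          # "a hundred or more" LTO/3592 cartridges
gb_per_cartridge = 700            # "hundreds of GB" each (3592-era figure)

tape_mb_per_box = cartridges_per_box * gb_per_cartridge * 1000
ratio = tape_mb_per_box / paper_mb_per_box
print(f"space-efficiency improvement: {ratio:,.0f} to 1")
# 700,000 to 1 with these assumptions -- the same order of magnitude
# as the million-to-one figure quoted above.
```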
To learn more about IBM's Project Big Green, watch this introductory video, which used Second Life for the animation.
Last week, fellow IBMer Ron Riffe started his three-part series on the Storage Hypervisor. I discussed Part I already in my previous post [Storage Hypervisor Integration with VMware]. We wrapped up the week with a Live Chat with over 30 IT managers, industry analysts, independent bloggers, and IBM storage experts.
"The idea of shopping from a catalog isn’t new and the cost efficiency it offers to the supplier isn’t new either. Public storage cloud service providers seized on the catalog idea quickly as both a means of providing a clear description of available services to their clients, and of controlling costs. Here’s the idea… I can go to a public cloud storage provider like Amazon S3, Nirvanix, Google Storage for Developers, or any of a host of other providers, give them my credit card, and get some storage capacity. Now, the “kind” of storage capacity I get depends on the service level I choose from their catalog.
Most of today’s private IT environments represent the complete other end of the pendulum swing – total customization. Every application owner, every business unit, every department wants to have complete flexibility to customize their storage services in any way they want. This expectation is one of the reasons so many private IT environments have such a heavy mix of tier-1 storage. Since there is no structure around the kind of requests that are coming in, the only way to be prepared is to have a disk array that could service anything that shows up. Not very efficient… There has to be a middle ground.
Private storage clouds are a little different. Administrators we talk to aren’t generally ready to let all their application owners and departments have the freedom to provision new storage on their own without any control. In most cases, new capacity requests still need to stop off at the IT administration group. But once the request gets there, life for the IT administrator is sweet!
Here comes the request from an application owner for 500GB of new “Database” capacity (one of the options available in the storage service catalog) to be attached to some server. After appropriate approvals, the administrator can simply enter the three important pieces of information (type of storage = “Database”, quantity = 500GB, name of the system authorized to access the storage) and click the “Go” button (in TPC SE it’s actually a “Run now” button) to automatically provision and attach the storage. No more complicated checklists or time consuming manual procedures.
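A minimal sketch of what such a catalog-driven request might look like in code. The `ServiceCatalog` class, its method names, and the pool names are all invented for illustration; they are not the actual TPC SE interface:

```python
# Hypothetical sketch of a catalog-driven provisioning request, built around
# the three inputs described above: service type, quantity, and host name.
# The ServiceCatalog class and its method names are invented for illustration.

class ServiceCatalog:
    def __init__(self, offerings: dict):
        self.offerings = offerings   # service name -> placement policy

    def provision(self, service: str, quantity_gb: int, host: str) -> dict:
        if service not in self.offerings:
            raise ValueError(f"'{service}' is not in the service catalog")
        policy = self.offerings[service]
        # In a real tool, this step would carve volumes from the matching
        # pool and zone/attach them to the requesting host automatically.
        return {"service": service, "size_gb": quantity_gb,
                "host": host, "pool": policy["pool"]}

catalog = ServiceCatalog({"Database": {"pool": "tier1-raid10"},
                          "Archive":  {"pool": "sata-raid6"}})
request = catalog.provision("Database", 500, "appserver01")
print(request)
```

The value of the catalog is visible in the signature: the requester supplies three fields, and everything else (tier, RAID level, placement) is decided once, up front, by whoever defined the service.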
A storage hypervisor increases the utilization of storage resources, and optimizes what is most scarce in your environment. For Linux, UNIX and Windows servers, you typically see utilization rates of 20 to 35 percent, and this can be raised to 55 to 80 percent with a storage hypervisor. But what is most scarce in your environment? Time! In a competitive world, it is not big animals eating smaller ones as much as fast ones eating the slow.
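To make those utilization figures concrete, here is a small sketch; the 100TB workload is an assumed example, and the midpoint utilizations are taken from the ranges quoted above:

```python
# Rough sketch of what higher utilization means in purchased capacity, using
# midpoints of the utilization ranges quoted above; 100 TB is an assumption.

def physical_tb_needed(used_tb: float, utilization: float) -> float:
    """Physical capacity you must buy to hold used_tb at a given utilization."""
    return used_tb / utilization

used = 100                                     # TB of actual data
before = physical_tb_needed(used, 0.30)        # ~30% typical utilization
after  = physical_tb_needed(used, 0.70)        # ~70% with a storage hypervisor

print(f"without hypervisor: {before:.0f} TB purchased")  # ~333 TB
print(f"with hypervisor:    {after:.0f} TB purchased")   # ~143 TB
```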
Want faster time-to-market? A storage hypervisor can help reduce the time it takes to provision storage, from weeks down to minutes. If your business needs to react quickly to changes in the marketplace, you certainly don't want your IT infrastructure to slow you down like a boat anchor.
Want more time with your friends and family? A storage hypervisor can migrate the data non-disruptively, during the week, during the day, during normal operating hours, instead of scheduling down-time on an evenings and weekends. As companies adopt a 24-by-7 approach to operations, there are fewer and fewer opportunities in the year for scheduled outages. Some companies get stuck paying maintenance after their warranty expires, because they were not able to move the data off in time.
Want to take advantage of the new solid-state drives? Most admins don't have time to figure out which applications, workloads or indexes would benefit most from this new technology. Let your storage hypervisor's automated tiering do this for you! In fact, a storage hypervisor can gather enough performance and usage statistics to characterize your workload in advance, so you can predict whether solid-state drives are right for you, and how much benefit you would get from them.
Want more time spent on strategic projects? A storage hypervisor allows any server to connect to any storage. This eliminates the time wasted determining when and how, and lets you focus on the what and why of your more strategic transformation projects.
If this sounds all too familiar, it is similar to the benefits that one gets from a server hypervisor -- better utilization of CPU resources, optimizing the management and administration time, with the agility and flexibility to deploy new technologies in and decommission older ones out.
"Server virtualization is a fairly easy concept to understand: Add a layer of software that allows processing capability to work across multiple operating environments. It drives both efficiency and performance because it puts to good use resources that would otherwise sit idle.
Storage virtualization is a different animal. It doesn't free up capacity that you didn't know you had. Rather, it allows existing storage resources to be combined and reconfigured to more closely match shifting data requirements. It's a subtle distinction, but one that makes a lot of difference between what many enterprises expect to gain from the technology and what it actually delivers."
Jon Toigo on his DrunkenData blog brings back the sanity with his post [Once More Into the Fray]. Here is an excerpt:
"What enables me to turn off certain value-add functionality is that it is smarter and more efficient to do these functions at a storage hypervisor layer, where services can be deployed and made available to all disk, not to just one stand bearing a vendor’s three letter acronym on its bezel. Doesn’t that make sense?
I think of an abstraction layer. We abstract away software components from commodity hardware components so that we can be more flexible in the delivery of services provided by software rather than isolating their functionality on specific hardware boxes. The latter creates islands of functionality, increasing the number of widgets that must be managed and requiring the constant inflation of the labor force required to manage an ever expanding kit. This is true for servers, for networks and for storage.
Can we please get past the BS discussion of what qualifies as a hypervisor in some guy’s opinion and instead focus on how we are going to deal with the reality of cutting budgets by 20% while increasing service levels by 10%. That, my friends, is the real challenge of our times."
Did you miss out on last Friday's Live Chat? We are doing it again this Friday, covering parts I and II of Ron's posts, so please join the conversation! The virtual dialogue on this topic will continue in another [Live Chat] on September 30, 2011 from 12 noon to 1pm Eastern Time.
We've been quite busy here at the Tucson Executive Briefing Center. I am often asked to explain the relationship between IBM's various storage products. While automakers don't have to explain why they sell sports coupes, pickup trucks and minivans, this analogy does not adequately cover IT storage products. So, I have come up with a new analogy that seems to be a better fit: foundations and flavorings.
All over the world, meals are often comprised of a foundation, perhaps rice, potatoes or pasta, covered with some form of flavoring, sauces, pieces of meat or fish, grated cheese and spices. In Puerto Rico, I had dishes where the foundation was mashed bananas called [plantains]. Sandwich shops often let you pick your choice of bread, the foundation, and then your meats and cheeses, the flavorings. At our local steakhouse, [McMahon's], the menu lists a set of steaks, the foundation, such as Rib Eye, Filet Mignon, Prime Rib or New York Strip, and various flavorings, such as sauces and rubs to cover the steak. Last night, I had the Delmonico steak with the Cristiani sauce consisting of Portobello mushrooms, garlic and aged Romano cheese.
This serves as a useful analogy for IBM's storage strategy. Allowing the foundations and flavorings to be separately orderable greatly simplifies the selection menu and provides a nearly any-to-any approach to meeting a variety of client needs. Let's take a look at both.
IBM's foundation products are the DS family [DS3000, DS4000, DS5000, DS6000 and DS8000 series], [DS9900 series], and [XIV] for disk, and the TS family [TS1000, TS2000, TS3000] series for tape drives and libraries. In much the same way you might prefer brown rice instead of white rice, or linguine instead of penne pasta, you might find the attributes of one storage foundation more attractive based on its performance, scalability and availability features for your particular application workloads.
Fellow IBM blogger Barry Whyte discusses SVC at great length on his [Storage Virtualization] blog. Flavoring disk foundation storage with SAN Volume Controller can provide you additional features and functions, and help improve the scalability, performance or availability characteristics. For example, if you have DS4000, DS8000 and XIV, you might use SVC to provide a consistent methodology for asynchronous replication, a form of consistent "flavoring" if you will.
N series Gateways
The [N series gateways] offer flavoring to disk foundation, including unified NAS, iSCSI and FCP protocol host attachment, and application-aware capabilities. (As for our IBM N series appliances or "filers", these could be foundational storage behind an SVC, but that's perhaps a topic for another post.)
SoFS provides a global namespace with clustered NAS access to files. This is a blended disk-and-tape solution with built-in backup and Information Lifecycle Management [ILM]. Policies can be used to place different files onto different tiers of storage, automate the movement from tier to tier, including migration to tape, and even expiration when the data is no longer needed.
The [IBM System Storage DR550] provides non-erasable, non-rewriteable (NENR) flavoring to storage. While the DR550 comes with internal disk storage, it can front end a tape library filled with WORM cartridges. The DR550 has been paired up with small libraries (TS3200 or TS3310) as well as larger libraries like the TS3500.
The IBM Grid Medical Archive Solution [GMAS] provides a variety of capabilities for storing and accessing medical images, using a blended disk-and-tape approach. This allows hospital and clinic networks to provide access for doctors and radiologists from multiple locations.
Many of the flavorings are called "gateways". The IBM TS7650G flavors disk foundation storage, providing a virtual tape library [VTL] with inline data deduplication capability. Recent performance tests pairing the TS7650G flavoring with XIV foundation storage found this combination to be an excellent match.
Let me know what you think. Does this help you understand IBM's storage strategy and acquisitions? Enter your comments below.
This week is Thanksgiving holiday in the USA, so I thought a good theme would be things I am thankful for.
I'll start by saying that I am thankful EMC finally announced Atmos last week. This was the "Maui" part of the Hulk/Maui rumors we heard over a year ago. To quickly recap, Atmos is EMC's latest storage offering for global-scale storage intended for Web 2.0 and Digital Archive workloads. Atmos can be sold as just software, or combined with Infiniflex, EMC's bulk, high-density commodity disk storage systems. Atmos supports traditional NFS/CIFS file-level access, as well as SOAP/REST object protocols.
I'm thankful for various reasons, here's a quick list:
It's hard to compete against "vaporware"
Back in the 1990s, IBM was trying to sell its actual disk systems against StorageTek's rumored "Iceberg" project. It took StorageTek some four years to get this project out, but in the meantime, we were comparing actual product against possibility. The main feature is what we now call "Thin Provisioning". Ironically, StorageTek's offering was not commercially successful until IBM agreed to resell it as the IBM RAMAC Virtual Array (RVA).
Until last week, nobody knew the full extent of what EMC was going to deliver on the many Hulk/Maui theories. Several hinted at what it could have been, and I am glad to see that Atmos falls short of those rumored possibilities. This is not to say that Atmos can't reach its potential, and certainly some of the design is clever, such as offering native SOAP/REST access.
Instead, IBM now can compare Atmos/Infiniflex directly to the features and capabilities of IBM's Scale Out File Services [SoFS], which offers a global-scale multi-site namespace with policy-based data movement; IBM System Storage Multilevel Grid Access Manager [GAM], which manages geographically distributed information; and the IBM [XIV Storage System], which offers high-density bulk storage.
Web 2.0 and Digital Archive workloads justify new storage architectures
When I presented SoFS and XIV earlier this year, I mentioned they were designed for the fast-growing Web 2.0 and Digital Archive workloads that were unique enough to justify their own storage architectures. One criticism was that SoFS appeared to duplicate what could be achieved with dozens of IBM N series NAS boxes connected with Virtual File Manager (VFM). Why invent a new offering with a new architecture?
With the Atmos announcement, EMC now agrees with IBM that the Web 2.0 and Digital Archive workloads represent a unique enough "use case" to justify a new approach.
New offerings for new workloads will not impact existing offerings for existing workloads
I find it amusing that EMC is quickly defending that Atmos will not eat into its DMX business, which is exactly the FUD they threw out about IBM XIV versus DS8000 earlier this year. In reality, neither the DS8000 nor the DMX were used much for Web 2.0 and Digital Archive workloads in the past. Companies like Google, Amazon and others had to either build their own from piece parts, or use low-cost midrange disk systems.
Rather, the DS8000 and DMX can now focus on the workloads they were designed for, such as database applications on mainframe servers.
Cloud-Oriented Storage (COS)
Just when you thought we had enough terminology already, EMC introduces yet another three-letter acronym [TLA]. Kudos to EMC for coining phrases to help move new concepts forward.
Now, when an RFP asks for Cloud-oriented storage, I am thankful this phrase will help serve as a trigger for IBM to lead with SoFS and XIV storage offerings.
Digital archives are different than Compliance Archives
EMC was also quick to point out that object-storage Atmos is different from their object-storage EMC Centera, the former being for "digital archives" and the latter for "compliance archives". Different workloads, different use cases, different offerings.
Ever since IBM introduced its [IBM System Storage DR550] several years ago, EMC Centera has been playing catch-up to match IBM's many features and capabilities. I am thankful the Centera team was probably too busy to incorporate Atmos capabilities, so it was easier to make Atmos a separate offering altogether. This allows the IBM DR550 to continue to compete against Centera's existing feature set.
Micro-RAID arrays, logical file and object-level replication
I am thankful that one of the Atmos policy-based features is replicating individual objects, rather than LUN-based replication and protection. SoFS supports this for logical files regardless of their LUN placement, GAM supports replication of files and medical images across geographical sites in the grid, and the XIV supports this for 1MB chunks regardless of their hard disk drive placement. The 1MB chunk size was based on the average object size from established Web 2.0 and Digital Archive workloads.
I tried to explain the RAID-X capability of the XIV back in January, under much criticism that replication should only be done at the LUN level. I am thankful that Marc Farley on StorageRap coined the phrase [Micro-RAID array] to help move this new concept further. Now, file-level, object-level and chunk-level replication can be considered mainstream.
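To illustrate the chunk-level idea, here is a minimal sketch of mirroring each 1MB chunk on two different drives, pseudo-randomly. This is an assumption-laden toy model, not XIV's actual placement algorithm; the drive names and chunk count are made up for the example.

```python
import random

CHUNK_MB = 1  # illustrative: data is carved into 1MB chunks

def distribute_chunks(lun_size_mb, drives, seed=42):
    """Place two copies of each 1MB chunk on two distinct drives,
    pseudo-randomly, so no single drive holds both copies of a chunk.
    (Toy model only -- real RAID-X placement is more sophisticated.)"""
    rng = random.Random(seed)
    placement = {}
    for chunk in range(lun_size_mb // CHUNK_MB):
        primary, mirror = rng.sample(drives, 2)  # two distinct drives
        placement[chunk] = (primary, mirror)
    return placement

# a 16MB "LUN" spread over 12 hypothetical drives
placement = distribute_chunks(16, [f"disk{i}" for i in range(12)])
```

When a drive fails, only the chunks it held need to be re-mirrored, and their surviving copies are scattered across all the other drives, which is what makes rebuilds fast.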
Much larger minimum capacity increments
The original XIV in January was 51TB capacity per rack, and this went up to 79TB per rack for the most recent IBM XIV Release 2 model. Several complained that nobody would purchase disk systems at such increments. Certainly, small and medium size businesses may not consider XIV for that reason.
I am thankful Atmos offers 120TB, 240TB and 360TB sizes. The companies that purchase disk for Web 2.0 and Digital Archive workloads do purchase disk capacity in these large sizes. Service providers add capacity to the "Cloud" to support many of their end-clients, and so purchasing disk capacity to rent back out represents a revenue-generating opportunity.
Renewed attention on SOAP and REST protocols
IBM and Microsoft have been pushing SOA and Web Services for quite some time now. REST, which stands for [Representational State Transfer], allows static and dynamic HTML message passing over standard HTTP. SOAP, which was originally [Simple Object Access Protocol] and was later renamed "Service Oriented Architecture Protocol", takes this one step further, allowing different applications to send "envelopes" containing messages and data between applications using HTTP, RPC, SMTP and a variety of other underlying protocols. Typically, these messages are simple text surrounded by XML tags, easily stored as files, or rows in databases, and served up by SOAP nodes as needed.
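The "envelope" idea is easy to see in code. Here is a sketch that builds a minimal SOAP envelope around a text payload; the operation name "StoreObject" is a hypothetical example, not any vendor's actual API, and a real SOAP service would add headers, a target namespace, and typed parameters.

```python
import xml.etree.ElementTree as ET

# standard SOAP 1.1 envelope namespace
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def make_soap_envelope(operation, payload):
    """Wrap a simple text payload in a minimal SOAP envelope.
    `operation` is a hypothetical operation name for illustration."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    msg = ET.SubElement(body, operation)
    msg.text = payload
    return ET.tostring(env, encoding="unicode")

envelope = make_soap_envelope("StoreObject", "hello, cloud")
# The REST equivalent would be a plain HTTP verb plus URL,
# e.g. PUT /objects/hello -- no envelope required.
```

The contrast in the final comment is the practical difference: SOAP carries its routing and typing inside the XML envelope, while REST leans on the HTTP verb and URL themselves.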
It's hard to show leadership until there are followers
IBM's leadership sometimes goes unnoticed until followers create "me, too!" offerings or establish similar business strategies. IBM's leadership in Cloud and Grid computing is no exception. Atmos is the latest me-too product offering in this space, trying pretty much to address the same challenges that SoFS and XIV were designed for.
So, perhaps EMC is thankful that IBM has already paved the way, breaking through the ice on their behalf. I am thankful that perhaps I won't have to deal with as much FUD about SoFS, GAM and XIV anymore.
Guy Kawasaki is hosting a Web Conference next week on The Art of Evangelism. By this he is referring to promoting products and services, rather than the traditional definition: the preaching or promulgation of the gospel.
A few years ago, I myself had the official title of "Technical Evangelist" for the IBM System Storage product line. I never liked the title, and asked to use something else, but since I was part of a team of "Technical Evangelists," I had to keep it. A lot of companies were using this as a title, I was told, and everyone knew that it was not a religious reference, but a marketing one.
Sometimes, words do not translate well into other countries or cultures. Four years ago, on the week of September 11, 2003, I traveled to Kuwait, Qatar and UAE for a business trip to present the latest on our storage products. On arrival in Kuwait, I had to fill out my "visa application" to enter the country, and it asked for my "occupation/title" but there were not enough spaces to write "Technical Evangelist" so I just entered "Evangelist".
The two Kuwaitis behind the desk looked it up in their Arabic/English dictionary, discussed it, and weren't sure if they should shoot me, or take me to the back room to videotape my proper beheading. Our official host came over to ask what was the delay, and they showed her the dictionary translation. She asked me, "Why would you put Evangelist as your title?" So, I gave her my business card, and told her that my full title of Technical Evangelist did not fit in the space provided.
She explained to the two behind the desk that I had misunderstood the question, and had misspelled the actual word intended, which was "Engineer". She showed them the agenda of the IBM Technical Conference I was speaking at, and the list of Oil and Construction companies that were attending. They looked up the new title "Engineer", agreed the translation was suitable for entry, and that these two words, Evangelist and Engineer, used enough similar letters that they could understand how one might misspell one for the other.
Our limo took a small detour to the middle of the desert so that we could burn and bury the ashes of the remainder of my business cards, before arriving at the hotel. All of my PowerPoint slides that listed my title were changed to "Technical Engineer". The events themselves went very well, as IT people are the same all over the world, and had no problem setting aside religious or political differences in an effort to learn more about technology.
When I got back to the United States, I shared my experience with my fellow team-mates, most of whom never leave the country, and would never have thought this might happen. Management agreed to let us change our titles. That was good for me, as I had to order a new box of business cards anyway.
Last year, I became "Manager of Brand Marketing Strategy" of the IBM System Storage product line. Now on business trips I just write "Manager" on the Occupation/Title line. It fits in every form I have ever had to fill out, and translates properly into every language.
This week I am in Orlando, Florida for the IBM Edge conference. Here is a recap of Day 4 afternoon sessions which related to Cloud computing.
IBM SmartCloud Enterprise -- Object Storage
George Contino, IBM GTS Consultant for Cloud Storage Service Enablement, presented IBM's latest Object Storage offering, based on an alliance IBM formed with Nirvanix in October 2011 and launched January 31, 2012. It is part of the IBM SmartCloud Enterprise system.
IBM currently has two datacenters for this service, Secaucus NJ and Frankfurt Germany, but will have five by the end of 2012, and hopefully seven datacenters by mid-year 2013.
The storage is then divided into several layers:
Customer master account, assigned a 128-bit encryption key
Name spaces by department or LOB
User file objects
The objects are given random names, with the real customer-assigned file names stored elsewhere, to provide additional privacy through obfuscation. For added security, it uses Two-Factor Authentication, requiring the users to provide both the 128-bit encryption key and the password.
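The privacy-through-obfuscation scheme described above can be sketched in a few lines. This is a hypothetical illustration of the concept, not the actual SmartCloud/Nirvanix implementation; the class and method names are my own.

```python
import secrets

class ObjectStore:
    """Toy model of privacy through obfuscation: each object gets a
    random name, and the real customer-assigned file name is kept in
    a separate mapping. (Hypothetical illustration only.)"""

    def __init__(self):
        self._name_map = {}  # real file name -> random object name
        self._objects = {}   # random object name -> data

    def put(self, real_name, data):
        obj_name = secrets.token_hex(16)  # random 128-bit name, 32 hex chars
        self._name_map[real_name] = obj_name
        self._objects[obj_name] = data
        return obj_name

    def get(self, real_name):
        # look up the random name first, then fetch the data
        return self._objects[self._name_map[real_name]]

store = ObjectStore()
obj_name = store.put("q3-report.pdf", b"quarterly numbers")
```

An attacker who sees only the object layer sees random 128-bit names, never the customer's file names, which live in a separately secured mapping.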
There are three ways to access data:
Proprietary API - An API is available on Windows and Linux. Symantec NetBackup, BackupExec and Commvault Simpana have already coded to the Nirvanix API to allow backups to be stored in the Nirvanix storage cloud. IBM InfoSphere Optim can archive data to the Nirvanix storage cloud.
CloudNAS - Nirvanix provides software that presents CIFS and NFS interfaces and converts them to the Nirvanix API. IBM Tivoli Storage Manager can send backups and archives to the Nirvanix storage cloud using this approach.
Cloud Storage Gateway - Third parties have developed hardware that runs the CloudNAS software, or directly codes to the API, to provide standard interfaces to the local clients, and provides access to the Nirvanix storage cloud. Two examples were Panzura File System Controller and Twinstrata Cloud Array Gateway.
One of Nirvanix's partners is OxygenCloud, which allows mobile/laptop access to work files. This includes security checks on Active Directory or LDAP, AES-256 bit encryption and HTTPS protocol support. For example, if you had to give a bunch of PDF files to your clients outside your company, you could create a folder, and send out a URL link to the clients, and this link would be valid for the next 14 days for them to download the files.
How University of Wisconsin-Milwaukee (UWM) moved SAP to the Cloud
Maik Gasterstaedt, IBM Technical Enablement for SAP, Storage and Cloud solutions, presented this session on the deployment of an SAP cloud at UWM. Worldwide, SAP has established five University Competency Centers (UCC) to provide SAP cloud services to other universities, and UWM is one of these five UCC.
Basically, the UWM manages SAP instances that are then "rented out" to 107 other universities. An SAP instance represents a "sample company" that could be used in a course curriculum, for example, "Global Bikes, Inc.", "Fitter Snacker", or IDES. An SAP Client represents a fresh copy of the data for this sample company.
UWM charges each University per "SAP client" per semester. Suppose a professor will teach three classes on SAP. He can arrange the SAP clients depending on how much he is willing to spend.
Get one SAP Client to be shared across all three classes. All three classes would be using the same sample company.
Get an SAP Client for each class. Each class could be based on the same or different sample companies.
Get one or more SAP Clients for each class. In this case, for example, a class could get two or more sample companies.
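The three arrangements above are really just a per-client cost trade-off. Here is a sketch with a hypothetical price per client; UWM's actual rates are not stated in the session, so the number is purely illustrative.

```python
# Hypothetical per-client, per-semester charge (UWM's real rates not public here)
PRICE_PER_CLIENT = 1000

def semester_cost(total_clients):
    """Total charge for one semester, given how many SAP clients are rented."""
    return total_clients * PRICE_PER_CLIENT

shared    = semester_cost(1)      # option 1: one client shared by all three classes
per_class = semester_cost(3)      # option 2: one client per class
multiple  = semester_cost(3 * 2)  # option 3: e.g. two clients for each class
```

The professor trades isolation (each class or exercise gets fresh data) against budget, which is exactly the flexibility the per-client billing model is meant to offer.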
The problem was that they were running on Sun servers approaching end-of-life. They decided to switch to IBM, running 43 SAP Instances on AIX with two Power750 servers, 7 SAP instances on Windows guests of VMware across two BladeCenter chassis using HS22 blades, XIV storage, backed up by Tivoli Storage Manager and Tivoli Storage FlashCopy Manager. They can run 50 SAP clients on each SAP instance. Each client could be rented out to different professors at different universities.
They started installation April 1, and the entire system was running in production by August 15, less than five months end-to-end.
The results were stunning. SAP instance provisioning used to take 5 days, and now takes 12 hours. Backups that used to take an hour now complete in about 30 seconds.
The conference is almost over folks! Just a few sessions tomorrow and then it is all done.
IBM had over a dozen storage-related announcements this week. This is my third and final part in my series to provide a quick overview of the announcements.
IBM Tivoli® Storage Manager v6.3
IBM Tivoli Storage Manager is market-leading software that provides not just backup, but also HSM and archive capabilities across a wide variety of operating systems. Originally developed in the IBM Almaden Research Center, it then moved about 15 years ago to Tucson to become a commercial product.
The new TSM v6.3 introduces a site-to-site hot-standby disaster recovery feature that replicates the TSM metadata and data for fast recovery. The maximum number of objects supported has doubled to four billion. Reporting has been enhanced using technologies borrowed from IBM Cognos. Lastly, a feature of Tivoli Storage Productivity Center has been carried forward to deploy and update agents on the various clients.
IBM Tivoli Storage FlashCopy Manager coordinates application-aware backups through the use of point-in-time copy services such as FlashCopy or Snapshot on various IBM and non-IBM disk systems. The versions can remain on disk, or optionally processed by Tivoli Storage Manager to move them to external storage such as tape for added protection.
There will always be a spot in my heart for this product, as the method to use FlashCopy for application-aware backups on the mainframe was my 19th patent, and subsequently delivered as a series of enhancements to DFSMS over the past decade on the z/OS operating system. It is good to see this innovation has "jumped over" to distributed systems.
The new FlashCopy Manager v3.1 adds support for HP-UX and VMware, expands support for IBM DB2 and Oracle databases, and introduces an interface for custom business applications.
IBM Tivoli Storage Manager for Virtual Environments v6.3
TSM for VE is a new addition to the TSM family, focused on being able to coordinate hypervisor-aware data protection. Initially it supports VMware, but IBM has plans to support a variety of other server virtualization hypervisors as well, as over 40 percent of companies run two or more hypervisors in their data center.
The new TSM for VE v6.3 adds a VMware vCenter plug-in, and support for hardware-based disk snapshots.
IBM Tivoli Storage Productivity Center v4.2.2
A long time ago, I was the chief architect of IBM Tivoli Storage Productivity Center v1; now we are already up to the v4.2.2 release!
IBM has added enhanced reporting based on IBM Cognos technology, including storage tiering analysis reports (STAR). Few companies keep all of their storage tiers in a single disk system. Rather, they have different boxes, and often from different vendors. IBM's Productivity Center can report on both IBM and non-IBM disk systems. New this release is support for the internal disks of the Storwize V7000 midrange disk system.
Productivity Center's "SAN Planner" has been enhanced to consider XIV replication criteria. This SAN Planner helps clients decide where to carve LUNs, and to make sure they pick the right place given all of the criteria such as remote copy replications.
Last year, we introduced Productivity Center for Disk Midrange Edition (MRE), which offers a lower price when you are only managing midrange disk systems such as the DS5000, DS3000, Storwize V7000 and SVC. This was so successful that we now have TPC Select, which is basically Productivity Center Standard Edition (SE) for these midrange disk systems.
Whew! I have already heard from some of my readers to slow down, that this is too much information to deal with all at once. IBM has tried everything from having just a few announcements nearly every Tuesday, to having huge launches every two to three years, and settled in the middle with announcements about four to five times per year.
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of the Tuesday morning sessions:
Wells Fargo: Data Center Lessons Learned from the Wachovia Acquisition
This was the next in their "Mastermind Interview" series. The analyst interviewed Scott Dillon, EVP and Head of Technology Infrastructure Services for Wells Fargo bank. Some 13 years ago, Wells Fargo merged with Norwest, and three years ago, Wells Fargo merged again, this time with Wachovia bank. Today, the new merged Wells Fargo manages 1.2 Trillion USD in assets, some 12,000 ATMs, and 9,000 branch offices within two miles of 50 percent of the US population.
On the technical side, Scott's team has to deal with 10,000 IT changes per month, spanning 85 discrete businesses that Wells Fargo is involved in. To help drive the consolidation, they formed a culture group called "One Wells Fargo".
Often, Wells Fargo and Wachovia used different applications for the same function. The consolidation team took the A-or-B-but-not-C approach, which means they would either choose the existing application that Wells Fargo was already using (A), or the one that Wachovia was already using (B), but not look for a replacement (C). They also wanted to avoid re-platforming any apps during the merger. This simplified the process of developing target operating models (TOMs).
Before each application cut-over, the consolidation team did dry-run, dress rehearsals and walkthroughs over the phone to ensure smooth success. They wanted a Wachovia account holder to be able to walk into the bank on one day, and then come back the next day as a Wells Fargo account holder, into the same branch office but now with Wells Fargo signage, with minimal disruption.
Wells Fargo also adopted a test-to-learn approach of choosing small test markets to see how well the transition would work before tackling larger, more complicated markets. For example, they started in Colorado, where Wells Fargo has a huge presence, but Wachovia had a small presence.
This was first and foremost a business merger, not just an IT merger. Each decision took 6-18 months to act on, and the IT team spent the last three years working every weekend to make this a reality.
A Satirical Look at Business and Technology
Comedian Bob Hirschfeld presented a light-hearted look at the IT industry. Bob actually attended sessions on Monday at this conference so his satire was exceptionally hard-hitting. He took jabs at the latest IT job requirements, padding on light poles, IBM Watson, social media's impact on dictators, various industry acronyms, virtualization, the various reasons why printer ink is so expensive, and the evil masterminds behind Powerpoint.
Storing Big Data takes a Village
Two analysts co-presented this session on the 12 dimensions of information management that revolve around the volume, variety and velocity of "Big Data".
In the past, it took a while to gather data, and a while to process the data, so annual, quarterly and monthly reports were common. Today, with high-velocity streams like Twitter, especially during cultural events or natural disasters, data is produced and analyzed quickly. It is important to sort the steady-state from the anomalies.
Myth 1: All data fits nicely into relational databases. The analysts feel the concept of putting everything into one big database is dead. Some data sets are so complicated that traditional database joins would cause smoke to come out of the sides of the servers. Instead, new technologies have emerged, including NoSQL, Cassandra, Hadoop, columnar databases, and in-memory databases. XML has helped to bring together disparate data formats.
Companies need to adapt to this new reality of Business Analytics. Here is a poll of the audience on how many are in what stage of adaptation:
Myth 2: Everyone will do Big Data with commodity hardware. Businesses want commercial offerings that don't fail every day. (For example, instead of using open-source Hadoop, consider IBM's [InfoSphere BigInsights] commercial product based on Hadoop designed for the Enterprise).
Myth 3: Big Data is too big for backup. Certainly, traditional full-plus-incremental approaches fail to scale, but that is not the only option you have. Consider disk replication, snapshots, and integrated disk-and-tape blended solutions that adopt a more progressive backup methodology.
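The "progressive" (incremental-forever) approach mentioned above can be sketched simply: after the first full pass, only files that changed since the last run are selected again. This is a toy model keyed on modification times; a real product such as TSM tracks far more state than this.

```python
import os
import pathlib
import tempfile

def incremental_backup(root, catalog):
    """Progressive-backup sketch: return the files that are new or have
    changed since the last run, updating `catalog` (path -> mtime).
    (Simplified illustration -- real backup software tracks much more.)"""
    changed = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            mtime = os.path.getmtime(path)
            if catalog.get(path) != mtime:  # new or modified file
                changed.append(path)        # would be sent to the backup server
                catalog[path] = mtime
    return changed

# demo: two files, then an unchanged second pass
root = tempfile.mkdtemp()
pathlib.Path(root, "a.txt").write_text("alpha")
pathlib.Path(root, "b.txt").write_text("beta")
catalog = {}
first = incremental_backup(root, catalog)   # first pass picks up everything
second = incremental_backup(root, catalog)  # nothing changed: empty list
```

Because each pass only touches the delta, the cost scales with the rate of change rather than the total capacity, which is why this approach survives at Big Data sizes where full-plus-incremental does not.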
Capacity forecasting can be difficult with Big Data. Scale-out NAS systems, including IBM SONAS and the various me-too competitive offerings, which were originally focused on High Performance Computing (HPC) and the Media & Entertainment (M&E) industries, are now ready for prime time and appropriate for other use cases.
It's like the game of Clue, but instead of Professor Plum with the candlestick in the library, it was Chuck with the Cluster in the Closet. To avoid shadow IT creating huge Hadoop Clusters in your closets, encourage the use of Cloud Computing for "sandbox" projects. IBM, Amazon and others offer hosted MapReduce engines for this purpose.
What type of storage do you plan to use for Big Data? The top five, weighted from a list during a poll of the audience were: (78) traditional disk arrays, (71) Scale-out NAS, (46) pre-configured appliances, (30) Hadoop clusters, and (23) Cloud Storage.
Big Data is about doing things differently. Do your employees understand analytical techniques? Your company may need to start thinking about policies for capturing Big Data, storing it correctly, and analyzing it for insights and patterns needed to stay competitive.
It was good to mix reality with a bit of humor. Some of these conference attendees take themselves too seriously, and it is good to be reminded that IT is just part of the overall business operation.
Continuing my post-week coverage of the [Data Center 2010 conference], Wednesday morning started with another keynote session, followed by some break-out sessions.
Realities of IT Investment
Tighter budgets mean more business decisions. Future investments will come from cost savings. The analysts report that 77 percent of IT decisions are made by CFOs. Most organizations are spending less now than back in 2008 before the recession.
How we innovate through IT is changing. In bad times, risk trumps return, but only 21 percent of the audience have a formal "risk calculation" as part of their purchase plans.
Divestment matters as much as investment. Reductions in complexity have the greatest long-term cost savings. Try to retire at least 20 percent of your applications next year. With the advent of Cloud Computing, companies might just retire an application and go entirely with public cloud offerings. Note that in this graph the years are different from the ones above, grouped in half-decade increments.
It is important to identify functional dependencies and link your IT risks to business outcomes. Focus on making costs visible, and re-think how you communicate IT performance measurements and their impact to business. Try to change the culture and mind-set so that projects are not referred to as "IT projects" focused on technology, but rather they are "business projects" focused on business results.
Moving to the Cloud
Richard Whitehead from Novell presented challenges in moving to Cloud Computing. There are risks and challenges managing multiple OS environments. Users should have full access to all IT resources they need to do their jobs. Computing should be secure, compliant, and portable. Here is the shift he sees from physical servers to virtual and cloud deployments, years 2010 to 2015:
Richard considers a "workload" as being the combination of the operating system, middleware, and application. He then defines "Business Service" as an appropriate combination of these workloads. For example, a business service that provides a particular report might involve a front-end application, talking through business logic workload server, talking to a back-end database workload server.
To address this challenge, Novell introduces "Intelligent Workload Management", called WorkloadIQ. This manages the lifecycle to build, secure, deploy, manage and measure each workload. Their motto was to take the mix of physical, virtual and cloud workloads and "make it work as one". IBM is a business partner with Novell, and I am a big fan of Novell's open-source solutions including SUSE Linux.
A Funny Thing Happened on the Way to the Cloud....
Bud Albers, CTO of Disney, shared their success in deploying their hybrid cloud infrastructure. Everyone recognizes the Disney brand for movies and theme parks, but may not be aware that they also own ABC News and ESPN television, travel cruises, virtual worlds, and mobile sites, and deploy applications like Fantasy Football and Fantasy Fishing.
Two years ago, each Line of Business (LOB) owned their own servers; they were continually out of space, and power and HVAC issues forced tactical build-outs of their datacenters. But in 2008, the answer to all questions was Cloud Computing; it slices and dices like something invented by [Ron Popeil], with no investment or IT staff required. However, continuing to ask the CFO for CAPEX to purchase assets that were only 1/7th used was not working out either. That's right, over 75 percent of their servers were running at less than 15 percent CPU utilization.
The compromise was named "D*Cloud". Internal IT infrastructure would be positioned for Cloud Computing, by adopting server virtualization, implementing REST/SOAP interfaces, and replicating the success across their various Content Distribution Networks (CDN). Disney is no stranger to Open Source software, using Linux and PHP. Their [Open Source] web page shows tools available from Disney Animation studios.
At the half-way point, they had half their applications running virtualized on just 4 percent of their servers. Today, they run over 20 VMs per host and have 65 percent of their apps virtualized. Their target is 80 percent of their apps virtualized by 2014.
Bud used the analogy that public clouds will be the "gas stations" of the IT industry. People will choose the cheapest gas among nearby gas stations. By focusing on "Application management" rather than "VM instance management", Disney is able to seamlessly move applications as needed from private to public cloud platforms.
Their results? Disney is now averaging 40 percent CPU utilization across all servers. Bud feels they have achieved better scalability, better quality of service, and increased speed, all while saving money: Disney is spending less on IT now than it did in 2008.
UPMC Maximizes Storage Efficiency with IBM
Kevin Muha, UPMC Enterprise Architect & Technology Manager for Storage and Data Protection Services, was unable to present this in person, so Norm Protsman (IBM) presented Kevin's charts on the success at the University of Pittsburgh Medical Center [UPMC]. UPMC is Western Pennsylvania's largest employer, with roughly 50,000 employees across 20 hospitals, 400 doctors' offices and outpatient sites. They have frequently been rated one of the best hospitals in the US.
Their challenge was storage growth. Their storage environment had grown 328 percent over the past three years, to 1.6PB of disk and nearly 7 PB of physical tape. To address this, UPMC deployed four IBM TS7650G ProtecTIER gateways (2 clusters) and three XIV storage systems for their existing IBM Tivoli Storage Manager (TSM) environment. Since they were already using TSM over a Fibre Channel SAN, the implementation took only three days.
UPMC was backing up nearly 60TB per day in a 15-hour backup window. Their primary data is roughly 60 percent Oracle, with the rest a mix of Microsoft Exchange, SQL Server, and unstructured data such as files and images.
Their results? TSM reclamation is 30 percent faster. Hardware footprint reduced from 9 tiles to 5. Over 50 percent reduction in recovery time for Oracle DB, and 20 percent reduction in recovery of SQL Server, Microsoft Exchange, and Epic Cache. They average 24:1 deduplication overall, which can be broken down by data category as follows:
29:1 Cerner Oracle
18:1 EPIC Cache
10:1 Microsoft SQL Server
8:1 Unstructured files
6:1 Microsoft Exchange
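Note that the overall ratio is not a simple average of the per-category ratios: it is total logical data divided by total physical data, so the blend depends on how much data falls in each category. Here is a minimal sketch using the ratios above with invented category sizes (UPMC's actual data mix is not given in the post):

```python
# Per-category (logical TB, dedup ratio). The ratios come from the
# figures above; the logical sizes are hypothetical, for illustration only.
categories = {
    "Cerner Oracle":        (600.0, 29.0),
    "EPIC Cache":           (150.0, 18.0),
    "Microsoft SQL Server": (120.0, 10.0),
    "Unstructured files":   (100.0,  8.0),
    "Microsoft Exchange":   ( 80.0,  6.0),
}

logical_tb  = sum(size for size, _ in categories.values())
physical_tb = sum(size / ratio for size, ratio in categories.values())
overall = logical_tb / physical_tb  # total logical / total physical

print(f"{logical_tb:.0f} TB logical stored in {physical_tb:.1f} TB physical")
print(f"Overall deduplication ratio: {overall:.1f}:1")
```

With these made-up sizes the blend works out to roughly 16:1, below UPMC's reported 24:1, which suggests their actual mix is weighted even more heavily toward the high-ratio Oracle data.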
UPMC still has lots of LTO-4 tapes onsite and offsite from before the change-over, so the next phase planned is to implement "IP-based remote replication" between ProtecTIER gateways to a third data center at extended distance. The plan is to only replicate the backups of production data, and not replicate the backups of test/dev data.
Next week, April 6, IBM will host the [Smarter Computing Virtual Event] to cover IBM's Smarter Computing initiative, with key themes of Big Data, Optimized Systems, and Cloud. Smarter Computing is a new and innovative approach to computing, based on the evolving role of IT in your business and an intrinsic understanding of the economics of IT.
(I found it amusing that EMC has chosen two of IBM's themes, "Big Data" and "Cloud", for their upcoming EMC World 2011 conference. I was tempted to include their graphic, but people might have accused me of using Photoshop or GIMP to make EMC look bad. Instead, you can look at the graphic on this blog post titled [When Cloud Meets Big Data: Information Logistics Revisited] by fellow blogger Chuck Hollis from EMC. IBM has been a leader in IT for decades, so we are used to having other companies follow in our footsteps. As an [IBM wannabee], EMC is no different.)
For many on tight travel budgets, this event REQUIRES NO TRAVEL! This is a virtual event; you can participate from your desk. You will hear from key IBM executives, all of whom I have heard speak myself, so I can vouch that this should be a good event.
Steve Mills - IBM Senior Vice President and Group Executive, Software and Systems (my seventh-line manager)
Tom Rosamilia - IBM General Manager, Power and Mainframe Systems, IBM Systems and Technology Group.
Robert LaBlanc - IBM Senior Vice President, Middleware Software
Helene Armitage - IBM General Manager, Systems Software, Systems and Technology Group.
This event is targeted to CIOs, IT Directors and Managers, Business Analysts, Systems and Storage Administrators, and DBAs. However, we don't check what your actual title is, so feel free to attend even if you have different job responsibilities.
I am giving you one week's notice for this event. If this is the first time you have heard of this event, then I hope that is enough time to plan for this event in your busy schedule. If you had heard of it already, perhaps this serves as a useful reminder to [Register Now!] Is a week ahead the right amount of time? For virtual events, do we need more or less advance notice? What about for events that involve travel? Feel free to enter your thoughts on this in the comments section below.
I hope all of my American readers had a wonderful Thanksgiving holiday! The day after Thanksgiving is "Black Friday", the unofficial starting date for shopping for upcoming holiday presents and decorations. The Monday after that is now often referred to as "Cyber Monday", when many people purchase items on-line.
I thought this would be a good time to promote my book series, Inside System Storage, Volumes I through V. These are available direct from my publisher, [Lulu], or from other on-line retailers.
The old adage "Never judge a book by its cover" often leads technical authors to select bland cover designs. I designed the cover art for the series to have a consistent look, but be unique enough to know each book is different. They all have a beige background with black text, three or four graphics representing the various storage themes du jour, and a color stripe spread diagonally across the spine.
Several readers have asked if there was any rhyme or reason for the color of each spine. One guessed it was based on the [electronic color code] used on resistors to mark their value. When I was getting my college degree in Electrical Engineering, the mnemonic "Better Be Right Or Your Great Big Venture Goes West" helped us remember the sequence: Black, Brown, Red, Orange, Yellow, Green, Blue, Violet, Grey and White.
I can assure everyone I was not that clever. Here, instead, is the story behind each color chosen:
Volume I: Green
I received a flyer from Barnes and Noble advertising various books on sale. One caught my eye, so I went to buy it, but forgot to bring the flyer with me. A young woman offered to help me find it, but I could not remember the title, nor the editor, but it had a green cover, and was a collection of the world's shortest stories, all exactly 55 words in length, all winners in some high school contest. She found the flyer, looked up the book, and directed me to the shelf. After several minutes of her scanning the shelf by author, I reached for it, saying, "Here it is, the green one. This shade of green will fit perfectly in my collection of green books!" As I stood in line, the young woman told her boss, "That guy buys green books!" The rest of the folks in line overheard her, and all started laughing at her gullibility.
Volume II: Orange
In late 2007, I was under NDA to review the acquisition of a company called XIV. I was disclosed on the innovative design of the storage system, so that I could blog about it when the announcement was formal. This box would have a distinctive orange stripe across the disks. The announcement launch was a big success. Since then, every time the storage sales team needed a boost in sales for the [IBM XIV Storage System], I would write another blog about the clever features and capabilities.
Volume III: Purple
In 1996, I joined a social club called "Mile High Adventures and Entertainment", headquartered in Denver, Colorado, with locations in Phoenix, Tucson, San Diego, Los Angeles and Portland, Oregon. It was a group for singles to meet each other through social activities and events. A year later, it collapsed under the weight of heavy radio advertising debt. The local staff bought out the membership list and launched a new club under the name Tucson Fun and Adventures. It was a big part of my social life.
However, as the owners dropped out, one to start a family, another to take care of her father after her mother passed away, I started 2009 as the majority owner. The economic recession took its toll. Members were not spending as much of their disposable income on fun and entertainment. We restructured the company, revamped the website, and adopted purple as our official color. Our event coordinators all wore purple shirts and carried purple clipboards. Despite this major transformation, I just did not have time to run this company while still working full-time at IBM, so I sold it at year end.
Volume IV: Blue
As I mentioned in my blog post [IBM Introduces a New Era of Computing], IBM launched [PureSystems], a new family of expert-integrated systems. Since Volume IV was going to publish shortly after this announcement, I decided on the color blue to match the new door covers on the racks they came in. In less than a year, IBM has already sold over 1,000 of these systems in over 40 different countries.
Volume V: Grey
Choosing a color to represent the IBM Watson computer proved quite a challenge. I finally decided on grey, to represent "grey matter", a phrase often used to refer to the human brain. I picked a shade of grey that complements the three graphics that represent last year's strategic storage marketing themes. My blog post [How to Build Your Own Watson Jr. in your Basement] continues to be one of my highest-read posts.
If you were having trouble getting ideas for gifts this holiday season, hopefully, this post gave you five new ideas for your friends, family, coworkers and clients! They are all available in hardcover, paperback, and eBook (PDF) for viewing on desktops, laptops, tablets or smartphones.
The proof-of-concept that the IBM Haifa research center developed back in 1998 became what we now call the iSCSI protocol. The book iSCSI: The Universal Storage Connection introduces the history as follows:
In the fall of 1999 IBM and Cisco met to discuss the possibility of combining their SCSI-over-TCP/IP efforts. After Cisco saw IBM's demonstration of SCSI over TCP/IP, the two companies agreed to develop a proposal that would be taken to the IETF for standardization.
There are three ways to introduce iSCSI into your data center:
Through a gateway, like the IBM System Storage N series gateway, that allows iSCSI-based servers to connect to FC-based storage devices
Through a SAN switch or director, where an FC-based server can access iSCSI-based storage, an iSCSI-based server can access FC-based storage, or iSCSI-based servers can attach to iSCSI-based storage.
Directly through the storage controller.
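As a rough illustration of the third method, here is how a Linux host running the standard open-iscsi initiator might discover and attach to an iSCSI-capable controller. This is a sketch only: the portal address and target IQN below are placeholders, not actual IBM values.

```shell
# Discover the targets exported by the storage controller
# (192.0.2.10 is a documentation/placeholder address)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

# Log in to one of the discovered targets (the IQN is hypothetical)
iscsiadm -m node \
    -T iqn.1992-01.com.example:storage.lun1 \
    -p 192.0.2.10:3260 --login

# The LUN now appears as a local SCSI block device, e.g. /dev/sdb
lsblk
```

From the server's point of view, the attached LUN behaves just like locally installed disk, which is what makes the Boot-over-iSCSI scenario described below possible.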
IBM has been delivering the first method with its successful IBM System Storage N series gateway products, but today we have announced additional support for the second and third methods. Here's a quick recap.
New SAN director blades
Supporting the second method, the IBM TotalStorage SAN256B Director is enhanced to deliver iSCSI functionality with a new M48 iSCSI Blade, which includes 16 ports (8 Fibre Channel ports and 8 Ethernet ports for iSCSI connectivity). We also announced a new Fibre Channel M48 Blade, which provides 10 Gbps Fibre Channel Inter-Switch Link (ISL) connectivity between SAN256B Directors.
With support for Boot-over-iSCSI, diskless rack-optimized and blade servers can boot Windows or Linux over Ethernet, eliminating the management hassles of internal disk.
All of this is part of IBM's overall push into the Small and Medium size Business marketplace, making it easier to shop for and buy from IBM and its many IBM Business Partners, easier to deploy and install storage, and easier to manage the storage once you have it.
Rather than a target weight, I chose a target waist measurement, but did not quite make this one. I did keep up with my weekly exercise regime, but we recently installed an "ice cream freezer" here at work, and I have failed to resist temptation.
Reduce, Reuse and Recycle
In my post [Staying on Budget], I resolved to "reduce, reuse and recycle". I have taken measures to de-clutter and simplify my life, and already things are paying off. So I am happy about this one.
Learn to Better use Lotus Notes and Office 2007 software
In my post [Hone your Tools and Skills], I resolved to learn how to better use Lotus Notes and Office 2007. We never got Office 2007. In a surprise move, IBM put out Lotus Symphony, an Office 2007 replacement. Lotus Symphony works on IBM's three approved desktop platforms (Windows XP, Linux and Mac OS X). Here's a collection of [IBM Press Releases about Lotus Symphony].
I did learn how to better use Lotus Notes, thanks to Alan Lepofsky's blog [IBM Lotus Notes Hints, Tips, and Tricks]. Ironically, the best help for dealing with Lotus Notes was not the software itself, but skills in handling email in general. This includes:
Resist the urge to copy the world, and use "bcc" to spare upper management from "reply all" responses.
Avoid attaching large documents; use URLs to NAS file shares, websites, or [YouSendIt.com] instead. Obviously, the recipient has to have access to whatever you point to, but this greatly reduces total email volume and improves transmission over wireless.
Delegate. A lot of the time I was the "middleman" between someone asking a question and someone else I knew had the answer. Now, I just introduce them to each other and step out of the way.
Check email only a few times a day. I used to check my email every 5-10 minutes; now I check only 2-4 times per day.
In my post [Lighten Up], I resolved to laugh more, stretch more, get enough sleep, and listen to music more. I participated in monthly [Tucson Laughter Club] events, incorporated stretching into my weekly exercise program, have gotten more sleep, and rediscovered some of my older music that I hadn't listened to in a while. Overall, I feel happy I met this one.
My New Year's Resolutions for 2008:
Improve my writing skills
Going back through my past blog postings, some of my sentences and paragraphs were frightful. I resolve to improve my sentence and paragraph structure, and to make better use of HTML tags to improve the layout and formatting.
Improve my HTML and Web design skills
Contribute to the OLPC Foundation
Last year, as a "Day 1 Donor", I donated to this important charitable organization to help educate the children of third-world nations. This year, I plan to learn Python and other programming languages used on the XO laptop, and see how I can contribute my skills and expertise on the OLPC forums.
Eat Healthier and Drink more
I think my downfall with last year's resolution was that it was merely a goal, a 35-inch waist, rather than a "call for action". This year, I plan to eat more fish, salads, whole grains and other heart-healthy foods.
While many people resolve to "Quit Drinking", I need to drink more. My doctor, my personal trainer, and even my interpreter teams have asked me to do so. We live in Tucson, Arizona, during a century of global warming, and dehydration can cause stress on the body.
Attend more movies and film-making events
Last year, I joined the Tucson Film Society and produced [my first film], part of which was filmed in Bogota, Colombia. I got invited to see a lot of independent films, premieres, and film-maker events, but did not attend many. I resolve to attend more in 2008.
Get better Organized
Moving offices from one building to another brought to light that I wasn't well organized. While I have made some efforts to de-clutter my home, I need to step this up at work as well.
I decided to start with something very non-tech, a [Hipster PDA]. I have now met or heard from several people who use this approach successfully, and have decided to give it a try.
Hopefully, this list might inspire you to come up with your own resolutions. Not surprisingly, writing them in a public forum helped me keep most of them, and stick to my resolutions throughout the year.