Well, it's Thursday, and today IBM is having a major launch for storage. We have lots of exciting announcements, so here are the major highlights:
- IBM Storwize V7000 midrange disk system
Fellow blogger Rolf Potts just completed his [No Baggage Challenge], travelling around the world, twelve countries in six weeks with no luggage. I first learned of this trip from fellow published author and blogger Tim Ferriss in his post [How to Travel 12 Countries with No Baggage Whatsoever]. This trip was sponsored by a travel agency [BootsnAll.com] and travel clothing manufacturer [ScotteVest].
From New York, Rolf went to London, Paris, Madrid, Morocco, Cairo, South Africa, Bangkok Thailand, Malaysia, Singapore, New Zealand, Australia, and then back to United States. I was hoping to run into him while I was in Australia and New Zealand last month, but our schedules did not line up.
Traveling without baggage is more than just a convenience; it is a metaphor for the philosophy that we should keep only what we need and leave behind what we don't. This was the approach IBM took in the design of the IBM Storwize V7000 midrange disk system.
The IBM Storwize V7000 disk system consists of 2U enclosures. Controller enclosures have dual controllers and drives; expansion enclosures have just drives. Enclosures can hold either 24 small form factor (SFF) 2.5-inch drives or twelve larger 3.5-inch drives. A controller enclosure can be connected to up to nine expansion enclosures.
The drives are all connected via 6 Gbps SAS, and come in a variety of speeds and sizes: 300GB Solid-State Drive (SSD); 300GB/450GB/600GB high-speed 10K RPM; and 2TB low-speed 7200 RPM drives. The 12-bay enclosures can be intermixed with 24-bay enclosures on the same system, and within an enclosure different speeds and sizes can be intermixed. A half-rack system (20U) could hold as much as 480TB of raw disk capacity.
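To see how the enclosure rules translate into capacity, here is a minimal Python sketch; the enclosure bay counts follow the description above, while the particular drive mix in the example is hypothetical.

```python
# Rough raw-capacity sketch for one Storwize V7000 system: a controller enclosure
# plus up to nine expansion enclosures, each holding either 24 SFF 2.5-inch drives
# or 12 larger 3.5-inch drives. The drive mix chosen below is a made-up example.

SFF_BAYS = 24   # 2.5-inch small form factor enclosure
LFF_BAYS = 12   # 3.5-inch large form factor enclosure

def raw_capacity_tb(enclosures):
    """enclosures: list of (bays, drive_size_tb) tuples, one entry per enclosure."""
    return sum(bays * size_tb for bays, size_tb in enclosures)

# Hypothetical mix: one SFF controller enclosure of 600GB drives plus
# nine 3.5-inch expansion enclosures full of 2TB drives.
config = [(SFF_BAYS, 0.6)] + [(LFF_BAYS, 2.0)] * 9
print(f"Raw capacity: {raw_capacity_tb(config):.1f} TB")  # 230.4 TB for this particular mix
```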
This new system, freshly designed entirely within IBM, competes directly against systems that carry a lot of baggage, including the HDS AMS, HP EVA, and EMC CLARiiON CX4 systems. Instead, we decided to keep what we wanted from our other successful IBM products.
- Inspired by our successful XIV storage system, IBM has developed a web-based GUI that focuses on ease-of-use. This GUI uses the latest HTML5 and dojo widgets to provide an incredible user experience.
- Borrowed from our IBM DS8000 high-end disk systems, state-of-the-art device adapters provide 6 Gbps SAS connectivity with a variety of RAID levels: 0, 1, 5, 6, and 10.
- From our SAN Volume Controller, the embedded [SVC 6.1 firmware] provides all of the features and functions normally associated with enterprise-class systems, including Easy Tier sub-LUN automated tiering between Solid-State Drives and spinning disk, thin provisioning, external disk virtualization, point-in-time FlashCopy, disk mirroring, built-in migration capability, and long-distance synchronous and asynchronous replication.
To learn more, read the [announcement letter], [landing page], [product page], and [services page], as well as the blog posts from fellow master inventor and blogger Barry Whyte (IBM) at his [Storage Virtualization] blog.
- My New Book is Now Available!
Finally, the various "internal NDA" restrictions that kept me from publishing this sooner have expired, so now I can offer the long-awaited [Inside System Storage: Volume II], documenting IBM's transformation of its storage strategy, including behind-the-scenes commentary on IBM's acquisitions of XIV and Diligent. It is available initially in paperback form; I am still working on the hard cover and eBook editions.
For those who have not yet read my first book, Inside System Storage: Volume I, it is still available from my publisher Lulu, in [hard cover], [paperback] and [eBook] editions.
- IBM System Storage DS8800
A lesson IBM learned long ago was not to make radical changes to high-end disk systems, as clients who run mission-critical applications are more concerned about reliability, availability and serviceability than they are about performance or functionality. Shipping a product before it is ready just means painfully fixing the problems in the field instead.
(EMC apparently is learning this same lesson now with their VMAX disk system. Their Enginuity code from the Symmetrix DMX-4 was ported over to new CLARiiON-based hardware. With several hundred boxes in the field, they have already racked up over 150 severity 1 problems, roughly half of which resulted in data loss or unavailability. For the sake of our mutual clients that have both IBM servers and EMC disk, I hope they get their act together soon.)
To avoid this, IBM made incremental changes to the successful design and architecture of its predecessors. The new DS8800 shares 85 percent of the stable microcode from the DS8700 system. Functions like Metro Mirror, Global Mirror, and Metro/Global Mirror are compatible with all of the previous models of the DS8000 series, as well as previous models of the IBM Enterprise Storage Server (ESS) line.
The previous models of the DS8000 series were designed to take in cold air from both front and back and route the hot air out the top, known as a chimney design. However, many companies are re-arranging their data centers into separate cold aisles and hot aisles. The new DS8800 has front-to-back cooling to help accommodate this design.
My colleague Curtis Neal would call the rest of this a "BFD" announcement, which of course stands for "Bigger, Faster and Denser". The new DS8800 scales up to more drives than its DS8700 predecessor, and can scale out from a single-frame 2-way system to a multi-frame 4-way system. IBM has upgraded to faster 5GHz POWER6+ processors, with dual-core 8 Gbps FC and FICON host adapters, 8 Gbps device adapters, and 6 Gbps SAS connectivity to smaller form factor (SFF) 2.5-inch SAS drives. IBM Easy Tier will provide sub-LUN automated tiering between Solid-State Drives and spinning disk. The denser packaging with SFF drives means that we can pack over 1000 drives in only three frames, compared to the five frames required for the DS8700.
To learn more, read the [landing page] or the announcement letters for the machine types [2421], [2422], [2423], [2424].
- IBM System Storage SAN Volume Controller v6.1
The [IBM System Storage SAN Volume Controller] software release v6.1 brings Easy Tier sub-LUN automated tiering to the rest of the world. IBM Easy Tier moves the hottest, most active extents up to Solid-State Drives (SSD) and moves the coldest, least active down to spinning disk. This works whether the SSD is inside the SVC 2145-CF8 nodes, or in the managed disk pool.
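To make the tiering idea concrete, here is a minimal Python sketch of the general approach: rank extents by recent I/O activity and keep the hottest ones on SSD up to its capacity. This is only an illustration of the concept, not IBM's actual Easy Tier algorithm.

```python
# Illustrative sketch of sub-LUN automated tiering (not the real Easy Tier code).
# Each extent has a "heat" value, e.g. I/O operations observed over the last interval.

def place_extents(extent_heat, ssd_capacity_extents):
    """Return (ssd_extents, hdd_extents) given {extent_id: heat} and SSD capacity."""
    ranked = sorted(extent_heat, key=extent_heat.get, reverse=True)  # hottest first
    ssd = set(ranked[:ssd_capacity_extents])   # hottest extents go to SSD
    hdd = set(ranked[ssd_capacity_extents:])   # everything else stays on spinning disk
    return ssd, hdd

# Example: 6 extents, room for 2 on SSD.
heat = {"e1": 900, "e2": 15, "e3": 480, "e4": 3, "e5": 120, "e6": 700}
ssd, hdd = place_extents(heat, ssd_capacity_extents=2)
print(sorted(ssd))  # ['e1', 'e6'] -- the two most active extents
```

Re-running the placement periodically with fresh heat statistics is what makes the tiering "automated": as access patterns shift, extents migrate between tiers without administrator involvement.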
Tired of waiting for EMC to finally deliver FAST v2 for your VMAX? It has been 18 months since they first announced that someday they would have sub-LUN automatic tiering. What is taking them so long? Why not virtualize your VMAX with SVC, and you can have it sooner!
SVC 6.1 also upgrades to a sexy new web-based GUI which, like the one for the IBM Storwize V7000, is based on the latest HTML5 and Dojo widget standards. Inspired by the popular GUI from the IBM XIV Storage System, this GUI has greatly improved ease-of-use.
To learn more, read the [announcement letter] and [SVC product page].
These are just a subset of today's announcements. To see the rest, read [What's New].
technorati tags: IBM, announcements, #IBMstorage, Storwize V7000, DS8800, Lulu, SVC, Easy Tier, SAS
Tags: 
ibm
svc
easy+tier
ds8800
announcements
lulu
storwize+v7000
#ibmstorage
sas
|
Wrapping up my coverage of the annual [2010 System Storage Technical University], I attended what was perhaps the best session of the conference. Jim Nolting, IBM Semiconductor Manufacturing Engineer, presented the new IBM zEnterprise mainframe, "A New Dimension in Computing", under the Federal track.
The zEnterprise debunks the "one processor fits all" myth. For some I/O-intensive workloads, the mainframe continues to be the most cost-effective platform. However, there are other workloads where a memory-rich Intel or AMD x86 instance might be the best fit, and yet other workloads where the high number of parallel threads of a reduced instruction set computing [RISC] processor such as IBM's POWER7 is more cost-effective. The IBM zEnterprise combines all three processor types into a single system, so that you can now run each workload on the processor that is optimized for that workload.
- IBM zEnterprise z196 Central Processing Complex (CPC)
Let's start with the new mainframe z196 central processing complex (CPC). Many thought this would be called the z11, but that didn't happen. Basically, the z196 machine has a maximum of 96 cores versus the z10's maximum of 64, and each core runs at 5.2GHz instead of the z10's 4.7GHz. It is available in air-cooled and water-cooled models. The primary operating system that runs on this is called "z/OS", which, when used with its integrated UNIX System Services subsystem, is fully UNIX-certified. The z196 server can also run z/VM, z/VSE, z/TPF and Linux on z, which is just Linux recompiled for the z/Architecture chip set. In my June 2008 post [Yes, Jon, there is a mainframe that can help replace 1500 servers], I mentioned the z10 mainframe had a top speed of nearly 30,000 MIPS (Million Instructions per Second). The new z196 machine can do 50,000 MIPS, a 60 percent increase!
(Update: Back in 2007, IBM and Sun mutually supported [OpenSolaris on an IBM System z mainframe]. Unfortunately, after Oracle acquired Sun, the OpenSolaris Governing Board has [grown uneasy over Oracle's silence] about the future of OpenSolaris on any platform. The OpenSolaris [download site] identifies 2009.06 as the latest release, but only for x86 and SPARC chip sets. Apparently, the 2010.03 release expected five months ago in March has slipped. Now it looks official that [OpenSolaris is Dead].)
The z196 runs a hypervisor called PR/SM that allows the box to be divided into dozens of logical partitions (LPAR), and the z/VM operating system can also act as a hypervisor running hundreds or thousands of guest OS images. Each core can be assigned a specialty engine "personality": GP for general processor, IFL for z/VM and Linux, zAAP for Java and XML processing, and zIIP for database, communications and remote disk mirroring. Like the z9 and z10, the z196 can attach to external disk and tape storage via ESCON, FICON or FCP protocols, and through NFS via 1GbE and 10GbE Ethernet.
- IBM zEnterprise BladeCenter Extension (zBX)
There is a new frame called the zBX that basically holds two IBM BladeCenter chassis, each capable of holding 14 blades, for a total of 28 blades per zBX frame. For now, only select blade servers are supported inside, but IBM plans to expand this list as testing continues. The POWER-based blades can run native AIX, IBM's other UNIX operating system, and the x86-based blades can run Linux-x86 workloads, for example. Each of these blade servers can run a single OS natively, or run a hypervisor to host multiple guest OS images. IBM plans to look into running other POWER and x86-based operating systems in the future.
If you are already familiar with IBM's BladeCenter, then you can skip this paragraph. Basically, you have a chassis that holds 14 blades connected to a "mid-plane". On the back of the chassis, you have hot-swappable modules that snap into the other side of the mid-plane. There are modules for FCP, FCoE and Ethernet connectivity, which allows blades to talk to each other, as well as external storage. BladeCenter Management modules serve as both the service processor as well as the keyboard, video and mouse Local Console Manager (LCM). All of the IBM storage options available to IBM BladeCenter apply to zBX as well.
Besides general purpose blades, IBM will offer "accelerator" blades that offload work from the z196. For example, let's say an OLAP-style query is issued via SQL to DB2 on z/OS. In the process of parsing the complicated query, DB2 creates a Materialized Query Table (MQT) to temporarily hold some data. This MQT contains just the columnar data required, which can then be transferred to a set of blade servers known as the Smart Analytics Optimizer (SAO), which processes the request and sends the results back. The Smart Analytics Optimizer comes in various sizes, from small (7 blades) to extra large (56 blades, 28 in each of two zBX frames). A 14-blade configuration can hold about 1TB of compressed DB2 data in memory for processing.
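As a conceptual illustration of that offload flow (extract just the columns the query needs, then let the accelerator aggregate them in memory), here is a minimal Python sketch; it is not how DB2 or the Smart Analytics Optimizer is actually implemented, and the sample table data is made up.

```python
# Conceptual sketch of MQT-style offload: keep only the needed columns, then let
# an in-memory "accelerator" do the aggregation. Purely illustrative.
from collections import defaultdict

rows = [  # pretend DB2 table on z/OS (hypothetical data)
    {"region": "EAST", "product": "A", "revenue": 120.0, "notes": "..."},
    {"region": "WEST", "product": "A", "revenue": 80.0,  "notes": "..."},
    {"region": "EAST", "product": "B", "revenue": 200.0, "notes": "..."},
]

def build_mqt(rows, columns):
    """Keep only the columnar data the query needs, like a Materialized Query Table."""
    return [{c: r[c] for c in columns} for r in rows]

def accelerator_sum(mqt, group_by, measure):
    """The 'blade side': aggregate the shipped columns in memory."""
    totals = defaultdict(float)
    for r in mqt:
        totals[r[group_by]] += r[measure]
    return dict(totals)

mqt = build_mqt(rows, ["region", "revenue"])      # only the columns required by the query
print(accelerator_sum(mqt, "region", "revenue"))  # {'EAST': 320.0, 'WEST': 80.0}
```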
- IBM zEnterprise Unified Resource Manager
You can have up to eight z196 machines and up to four zBX frames connected together into a monstrously large system. There are two internal networks. The intraensemble data network (IEDN) is a 10GbE network that connects all the OS images together, and can be further subdivided into separate virtual LANs (VLAN). The intranode management network (INMN) is a 1000BASE-T Ethernet network that connects all the host servers together so they can be managed under a single pane of glass known as the Unified Resource Manager, which is based on IBM Systems Director.
By integrating service management, the Unified Resource Manager can handle Operations, Energy Management, Hypervisor Management, Virtual Server Lifecycle Management, Platform Performance Management, and Network Management, all from one place.
- IBM Rational Developer for System z Unit Test (RDz)
But what about developers and testers, such as those at Independent Software Vendors (ISVs) that produce mainframe software? How can IBM make their lives easier?
Phil Smith on z/Journal provides a history of [IBM Mainframe Emulation]. Back in 2007, three emulation options were in use in various shops:
- Open Mainframe, from Platform Solutions, Inc. (PSI)
- FLEX-ES, from Fundamental Software, Inc.
- Hercules, which is an open source package
None of these are viable options today; nobody wanted to pay IBM for its intellectual property on the z/Architecture or license the use of the z/OS operating system. To fill the void, IBM put out an officially supported emulation environment called the IBM System z Personal Development Tool (zPDT), available to IBM employees, IBM Business Partners and ISVs that register through IBM PartnerWorld. To help out developers and testers who work at clients that run mainframes, IBM now offers IBM Rational Developer for System z Unit Test, which is a modified version of zPDT that can run on an x86-based laptop or a shared IBM System x server. Based on the open source [Eclipse IDE], RDz emulates GP, IFL, zAAP and zIIP engines on a Linux-x86 base. A four-core x86 server can emulate a 3-engine mainframe.
With RDz, a developer can write code, compile and unit test all without consuming any mainframe MIPS. The interface is similar to Rational Application Developer (RAD), and so similar skills, tools and interfaces used to write Java, C/C++ and Fortran code can also be used for JCL, CICS, IMS, COBOL and PL/I on the mainframe. An IBM study ["Benchmarking IDE Efficiency"] found that developers using RDz were 30 percent more productive than using native z/OS ISPF. (I mention the use of RAD in my post [Three Things to do on the IBM Cloud]).
What does this all mean for the IT industry? First, the zEnterprise is perfectly positioned for [three-tier architecture] applications. A typical example could be a client-facing web-server on x86, talking to business logic running on POWER7, which in turn talks to database on z/OS in the z196 mainframe. Second, the zEnterprise is well-positioned for government agencies looking to modernize their operations and significantly reduce costs, corporations looking to consolidate data centers, and service providers looking to deploy public cloud offerings. Third, IBM storage is a great fit for the zEnterprise, with the IBM DS8000 series, XIV, SONAS and Information Archive accessible from both z196 and zBX servers.
To learn more, see the [12-page brochure] or review the collection of [IBM Redbooks]. Check out the [IBM Conferences schedule] for an event near you. Next year, the IBM Storage University will be held July 18-22, 2011 in Orlando, Florida.
technorati tags: IBM, Technical University, zEnterprise, x86, POWER7, RISC, z/OS, Linux, AIX, OpenSolaris, Oracle, FICON, NFS, z196, zBX, DB2, SAO, IEDN, INMN, RDz, ISV, Eclipse, Cloud Computing
Tags: 
z196
#ibmstorage
db2
cloud+computing
nfs
x86
zenterprise
opensolaris
risc
rdz
z/os
ibm
#ibmtechu
power7
inmn
#storage
sao
aix
isv
linux
iedn
ficon
eclipse
technical+university
zbx
oracle
|
This week, July 26-30, 2010, I am in Washington DC for the annual [2010 System Storage Technical University]. As with last year, we have joined forces with the System x team. Since we are in Washington DC this time, IBM added a "Federal Track" to focus on government challenges and solutions. So, basically, attendees get the option to attend three conferences for one low price.
This conference was previously called the "Symposium", but IBM changed the name to "Technical University" to emphasize the technical nature of the conference. No marketing puffery like "Journey to the Private Cloud" here! Instead, this is bona fide technical training, qualifying attendees to count this towards their Continuing Professional Education (CPE).
(Note to my readers: The blogosphere is like a playground. In the center are four-year-olds throwing sand into each other's faces, while mature adults sit on benches watching the action, jumping in only as needed. For example, fellow blogger Chuck Hollis (EMC) got sand in his face for promising to resign if EMC ever offered a tacky storage guarantee, and then [failed to follow through on his promise] when it happened.
Several of my readers asked me to respond to another EMC blogger's latest [fistful of sand].
A few months ago, fellow blogger Barry Burke (EMC) committed to [stick to facts] in posts on his Storage Anarchist blog. That didn't last long! BarryB apparently has fallen in line with EMC's over-promise-then-under-deliver approach. Unfortunately, I will be busy covering the conference and IBM's robust portfolio of offerings, so won't have time to address BarryB's stinking pile of rumor and hearsay until next week or later. I am sorry to disappoint.)
This conference is designed to help IT professionals make their business and IT infrastructure more dynamic and, in the process, help reduce costs, mitigate risks, and improve service. This technical conference event is geared to IT and Business Managers, Data Center Managers, Project Managers, System Programmers, Server and Storage Administrators, Database Administrators, Business Continuity and Capacity Planners, IBM Business Partners and other IT Professionals. This week will offer over 300 different sessions and hands-on labs, certification exams, and a Solutions Center.
For those who want a quick stroll through memory lane, here are my posts from past events:
- 2007 Storage Symposium: [Day 1, Day 2, Day 3, Day 4, Day 5]
- 2009 Storage Symposium: [Day 1-Keynote, Day 2-Breakout, Day 2-Server Virtualization, Day 3-Extraordinary Networks, Day 3-XIV, Day 4-SVC, Day 4-Linux, Day 5-Meet the Experts]
In keeping with IBM's leadership in social media, the IBM Systems Lab Services and Training team running this event has its own [Facebook Fan Page] and [blog]. IBM Technical University has a Twitter account [@ibmtechconfs] and the hashtag #ibmtechu. You can also follow me on Twitter [@az990tony].
technorati tags: IBM, Technical University, Federal, System Storage, System x, Washington DC, CPE, EMC, Facebook, Twitter
Tags: 
#ibmtechu
washington+dc
system+storage
ibm
twitter
cpe
emc
#storage
facebook
system+x
federal
technical+university
#ibmstorage
|
Continuing coverage of my week in Washington DC for the annual [2010 System Storage Technical University], I attended several XIV sessions throughout the week; there were many, and I could not attend them all. Jack Arnold, one of my colleagues at the IBM Tucson Executive Briefing Center, often presents XIV to clients and Business Partners. He covered all the basics of XIV architecture, configuration, and features like snapshots and migration. Carlos Lizarralde presented "Solving VMware Challenges with XIV". Ola Mayer presented "XIV Active Data Migration and Disaster Recovery".
Here is my quick recap of two in particular that I attended:
- XIV Client Success Stories - Randy Arseneau
Randy reported that IBM had its best quarter ever for the XIV, reflecting an unexpected surge shortly after my blog post debunking the DDF myth last April. He presented successful case studies of client deployments, many of which followed a familiar pattern. First, the client would purchase only one or two XIV units. Second, the client would beat the crap out of them, putting them under all kinds of stress from different workloads. Third, the client would discover that the XIV really is as amazing as IBM and IBM Business Partners had told them. Finally, in the fourth phase, the client would deploy the XIV for mission-critical production applications.
- A large US bank holding company managed to get 5.3 GB/sec from a pair of XIV boxes for their analytics environment. They now have 14 XIV boxes deployed in mission-critical applications.
- A large equipment manufacturer compared the offerings among seven different storage vendors, and IBM XIV came out the winner. They now have 11 XIV boxes in production and another four boxes for development/test. They have moved their entire VMware infrastructure to IBM XIV, running over 12,000 guest instances.
- A financial services company bought their first XIV in early 2009 and now has 34 XIV units in production attached to a variety of Windows, Solaris, AIX and Linux servers and VMware hosts. Their entire Microsoft Exchange environment was moved from HP and EMC disk to IBM XIV, and they experienced a noticeable performance improvement.
- When a University health system replaced two competitive disk systems with XIV, their data center temperature dropped from 74 to 68 degrees Fahrenheit. In general, XIV systems are 20 to 30 percent more energy efficient per usable TB than traditional disk systems.
- A service provider that had used EMC disk systems for over 10 years evaluated the IBM XIV versus upgrading to EMC V-Max. The three year total cost of ownership (TCO) of EMC's V-Max was $7 Million US dollars higher, so EMC counter-proposed CLARiiON CX4 instead. But, in the end, IBM XIV proved to be the better fit, and now the customer is happy having made the switch.
- The manager of an information communications technology service provider was impressed that the XIV was up and running in just a couple of days. They now have over two dozen XIV systems.
- Another XIV client had lost all of their Computer Room Air Conditioning (CRAC) units for several hours. The data center heated up to 126 degrees Fahrenheit, but the customer did not lose any data on either of their two XIV boxes, which continued to run in these extreme conditions.
- Optimizing XIV Performance - Brian Cormody
This session was an update from the [one presented last year] by Izhar Sharon. Brian presented various best practices for optimizing the performance when using specific application workloads with IBM XIV disk systems.
- Oracle ASM: Many people allocate lots of small LUNs, because this made sense a long time ago when all you had was just a bunch of disks (JBOD). In fact, many of the practices that DBAs use to configure databases across disks become unnecessary with XIV. With XIV, you are better off allocating a small number of very large LUNs. The best option was a 1-volume ASM pool with an 8MB AU stripe. A single LUN can contain multiple Oracle databases, and a single LUN can be used to store all of the logs.
- VMware: Over 70 percent of XIV customers use it with VMware. For VMFS, IBM recommends allocating a small number of large LUNs; you can specify the maximum LUN size of 2181 GB. Do not use VMware's internal LUN extension capability, as IBM XIV already provides thin provisioning, and it works better to let the XIV handle this for you. XIV snapshots provide crash-consistent copies without all the overhead of VMware snapshots.
- SAP: For planning purposes, the "SAPS" unit equates roughly to 0.4 IOPS for ERP OLTP workloads, and 0.6 IOPS for BW/BI OLAP workloads (a quick sizing sketch follows this list). In general, an XIV can deliver 25,000-30,000 IOPS at 10-15 msec response time, and 60,000 IOPS at 30 msec response time. With SAP, our clients have managed to get 60,000 IOPS at less than 15 msec.
- Microsoft Exchange: Even my friends in Redmond could not believe how awesome XIV was during ESRP testing. Five Exchange 2010 servers connected to a pair of XIV boxes using the new 2TB drives managed 40,000 mailboxes at the high profile (0.15 IOPS per mailbox). Another client found that four XIV boxes (720 drives) were able to handle 60,000 mailboxes (5GB max), which would have taken over 4000 drives if internal disk drives were used instead. Who said SANs are obsolete for MS Exchange?
- Asynchronous Replication: IBM now has an "Async Calculator" to model and help design an XIV async replication solution. In general, dark fiber works best, and MPLS clouds had the worst results. The latest 10.2.2 microcode for the IBM XIV can now handle 10 Mbps at less than 250 msec roundtrip. During the initial sync between locations, IBM recommends setting "schedule=never" to consume as much bandwidth as possible. If you don't trust the bandwidth measurements your telco provider is reporting, consider testing the bandwidth yourself with the open source [iPerf] tool.
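As a rough illustration of the sizing arithmetic quoted above, here is a minimal Python sketch; the SAPS-to-IOPS factors and the 0.15 IOPS-per-mailbox profile come from the session, while the workload sizes in the example are hypothetical.

```python
# Back-of-the-envelope sizing sketch using the rules of thumb from this session.
# The SAPS-to-IOPS factors and the 0.15 IOPS/mailbox profile are quoted above;
# the example workload sizes themselves are made up.

SAPS_TO_IOPS = {"erp_oltp": 0.4, "bw_olap": 0.6}   # IOPS per SAPS
EXCHANGE_PROFILE = 0.15                             # IOPS per mailbox (high profile)

def sap_iops(saps, workload="erp_oltp"):
    return saps * SAPS_TO_IOPS[workload]

def exchange_iops(mailboxes, iops_per_mailbox=EXCHANGE_PROFILE):
    return mailboxes * iops_per_mailbox

print(sap_iops(50_000, "erp_oltp"))    # 20000.0 IOPS for a hypothetical 50,000 SAPS system
print(exchange_iops(40_000))           # 6000.0 IOPS for 40,000 mailboxes at 0.15 IOPS each
```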
Several members of the XIV team thanked me for my April 5th post [Double Drive Failure Debunked: XIV Two Years Later]. Since April 5th, IBM has sold more XIV units this quarter than any prior quarters. I am glad to have helped!
technorati tags: IBM, Technical University, XIV, HP, EMC, CLARiiON, VMAX, TCO, CRAC, JBOD, SAP, Oracle, ASM, Microsoft Exchange, ESRP
Tags: 
jbod
clariion
#ibmstorage
#storage
#techu
asm
hp
ibm
sap
tco
crac
emc
technical+university
vmax
esrp
oracle
xiv
microsoft+exchange
|
Continuing my coverage of the annual [2010 System Storage Technical University], I attended some sessions from the System x and Federal track side of this conference.
- Grid, SOA and Cloud Computing
Bill Bauman, IBM System x Field Technical Support Specialist and System x University celebrity, presented the differences between Grid, SOA and Cloud Computing. I thought this was an odd combination to compare and contrast, but his presentation was well attended.
- Grid - this is when two or more independently owned and managed computers are brought together to solve a problem. Some research facilities do this. IBM helped four hospitals connect their computers together into a grid to help analyze breast cancer. IBM also supports the [World Community Grid] which allows your personal computer to be connected to the grid and help process calculations.
- SOA - SOA, which stands for Service Oriented Architecture, is an approach to building business applications as a combination of loosely-coupled black-box components orchestrated to deliver a well-defined level of service by linking together business processes. I often explain SOA as the business version of Web 2.0. You can download a free copy of the eBook "SOA for Dummies" at the [IBM Smart SOA] landing page.
- Cloud - A Cloud is a dynamic, scalable, expandable, and completely contractible architecture. It may consist of multiple, disparate, on-premise and off-premise hardware and virtualized platforms hosting legacy, fully installed, stateless, or virtualized instances of operating systems and application workloads.
Bill has his own blog, and has an interesting post [Cloud Computing, What it Is, and What it is Not] that appears to be the basis of this presentation.
- Chaos to Cloud
Tom Vezina, IBM Advanced Technical Sales Specialist, presented "Chaos to Cloud Computing". Survey results show that roughly 70 percent of cloud spend will be for private clouds, and 30 percent for public, hybrid or community clouds. Of the key motivations for public cloud, 77 percent of respondents cited reducing costs, 72 percent cited time to value, and 50 percent cited improving reliability.
Tom ran over 500 "server utilization" studies for x86 deployments during the past eight years. Of these, the worst was 0.52 percent CPU utilization, the best was 13.4 percent, and the average was 6.8 percent. When IBM mentions that 85 percent of server capacity is idle, it is mostly due to x86 servers. At these rates, it seems easy to put five to 20 guest images onto a single machine. However, many companies encounter "VM stall", where they get stuck after virtualizing only 25 percent of their operating system images.
He feels the problem is that most Physical-to-Virtual (P2V) migrations are manual efforts. There are tools available like Novell [PlateSpin Recon] to help automate and reduce the total number of hours spent per migration.
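Looping back to the utilization numbers above, here is a minimal Python sketch of the back-of-the-envelope consolidation estimate; the 70 percent target host utilization is my own assumption, not a figure from the session.

```python
# Rough consolidation estimate: how many mostly-idle guests fit on one host
# if we aim for a given target utilization. The 70% target is an assumption,
# and the estimate ignores memory, I/O, and peak overlap.

def guests_per_host(avg_guest_util_pct, target_host_util_pct=70.0):
    """Naive guests-per-host estimate based on average CPU utilization alone."""
    return int(target_host_util_pct // avg_guest_util_pct)

print(guests_per_host(6.8))    # ~10 guests at the 6.8% average utilization cited
print(guests_per_host(13.4))   # ~5 guests for the busiest servers in the study
print(guests_per_host(0.52))   # well over 100 in theory for the most idle case
```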
- System x KVM Solutions
Boy, I walked into this one. Many of IBM's cloud offerings are based on the Linux hypervisor called Kernel-based Virtual Machine [<a href="http://www.linux-kvm.org/page/Main_Page">KVM</a>] instead of VMware or Microsoft Hyper-V. However, this session was about the "other KVM": keyboard, video and mouse switches, which, thankfully, IBM has renamed to Console Managers to avoid confusion. Presenters Ben Hilmus (IBM) and Steve Hahn (Avocent) presented IBM's line of Local Console Managers (LCM) and Global Console Managers (GCM) products.
LCMs are the traditional KVM switches that people are familiar with: a single keyboard, video and mouse can select among hundreds of servers to perform maintenance or check on status. GCMs add KVM-over-IP capabilities, which means that you can now access selected systems over Ethernet from a laptop or personal computer. Both LCM and GCM allow for two-level tiering, which means that you can have an LCM in each rack, and an LCM or GCM that points to each rack, greatly increasing the number of servers that can be managed from a single pane of glass.
Many servers have a "service processor" to manage the rest of the machine; IBM RSA II, HP iLO, and Dell DRAC4 are some examples. These allow you to turn selected servers on and off. IBM BladeCenter offers a Management Module that allows the chassis to be connected to a Console Manager and a specific blade server inside to be selected. These can also be used with the VMware viewer, Virtual Network Computing (VNC), or Remote Desktop Protocol (RDP).
IBM's offerings are unique in that you can have an optical CD/DVD drive or USB external storage attached at the LCM or GCM and make it look like the storage is attached to the selected server. This can be used to install or upgrade software, transfer log files, and so on. Another great use, and apparently the motivation for having this session in the "Federal Track", is that the USB port can be used to attach a reader for a smart card, known as a Common Access Card [CAC], used by various government agencies. This provides two-factor authentication [TFA]. For example, to log into the system, you enter your password (something you know) and swipe your employee badge smart card (something you have). The combination is validated at the selected server to provide access.
I find it amusing that server people limit themselves to server sessions, and storage people to storage sessions. Sometimes, you have to step "outside your comfort zone" and learn something new, something different. Open your eyes and look around a bit. You might just be surprised what you find.
(FTC note: I work for IBM. IBM considers Novell a strategic Linux partner. Novell did not provide me a copy of Platespin Recon, I have no experience using it, and I mention it only in context of the presentation made. IBM resells Avocent solutions, and we use LCM gear in the Tucson Executive Briefing Center.)
technorati tags: IBM, Technical University, Grid, SOA, Cloud Computing, P2V, VMware, Novell, Platespin, x86, KVM, LCM, GCM, Avocent, CAC, TFA
Tags: 
lcm
#ibmstorage
cloud+computing
novell
soa
p2v
kvm
cac
gcm
technical+university
tfa
vmware
ibm
grid
x86
platespin
avocent
#ibmtechu
|
Continuing my week in Washington DC for the annual [2010 System Storage Technical University], I presented a session on Storage for the Green Data Center, and attended a System x session on Greening the Data Center. Since they were related, I thought I would cover both in this post.
- Storage for the Green Data Center
I presented this topic in four general categories:
- Drivers and Metrics - I explained the three key drivers for consuming less energy and the two key metrics: Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCiE). A quick calculation of both follows this list.
- Storage Technologies - I compared the four key storage media types: Solid State Drives (SSD), high-speed (15K RPM) FC and SAS hard disk, slower (7200 RPM) SATA disk, and tape. I had comparison slides that showed how IBM disk is more energy efficient than the competition; for example, the DS8700 consumes less energy than an EMC Symmetrix configured with the exact same number and type of physical drives. Likewise, IBM LTO-5 and TS1130 tape drives consume less energy than comparable HP or Oracle/Sun tape drives.
- Integrated Systems - IBM combines multiple storage tiers in a set of integrated systems managed by smart software. For example, the IBM DS8700 offers [Easy Tier] to provide smart data placement and movement across Solid-State Drives and spinning disk. I also covered several blended disk-and-tape solutions, such as the Information Archive and SONAS.
- Actions and Next Steps - I wrapped up the talk with actions that data center managers can take to become more energy efficient, from deploying the IBM Rear Door Heat Exchanger to improving the management of their data.
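For reference, here is a minimal Python sketch of the two metrics from the first bullet; the kilowatt figures in the example are made up.

```python
# PUE and DCiE, the two data center efficiency metrics mentioned above.
# PUE = total facility power / IT equipment power; DCiE is its reciprocal as a percentage.
# The kilowatt values below are hypothetical examples.

def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw, it_equipment_kw):
    return 100.0 * it_equipment_kw / total_facility_kw

print(pue(1300, 1000))   # 1.3  -- comparable to the Boulder data center figure below
print(dcie(1300, 1000))  # ~76.9 percent of the power actually reaches IT equipment
```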
- Greening of the Data Center
Janet Beaver, IBM Senior Manager of Americas Group facilities for Infrastructure and Facilities, presented on IBM's success in becoming more energy efficient. The price of electricity has gone up 10 percent per year, and in some locations, 30 percent. For every 1 Watt used by IT equipment, there are an additional 27 Watts for power, cooling and other uses to keep the IT equipment comfortable. At IBM, data centers represent only 6 percent of total floor space, but 45 percent of all energy consumption. Janet covered two specific data centers, Boulder and Raleigh.
At Boulder, IBM keeps a 48-hour reserve of gasoline (to generate electricity in case of an outage from the power company) and 48 hours of chilled water. Many power outages are less than 10 minutes, which can easily be handled by the UPS systems. At least 25 percent of the Computer Room Air Conditioners (CRAC) are also on UPS, so that there is some cooling during those minutes, keeping within the ASHRAE guidelines of 72-80 degrees Fahrenheit. Since gasoline gets stale, IBM runs the generators once a month, which serves as a monthly test of the system and clears out the lines to make room for fresh fuel.
The IBM Boulder data center is the largest in the company: 300,000 square feet (the equivalent of five football fields)! Because of its location in Colorado, IBM enjoys "free cooling" using outside air 63 percent of the year, resulting in a PUE rating of 1.3. Electricity is only 4.5 US cents per kWh. The center also uses 1 million kWh per year of wind energy.
The Raleigh data center is only 100,000 square feet, with a PUE rating of 1.4. The Raleigh area enjoys 44 percent "free cooling" and electricity costs of 5.7 US cents per kWh. The Leadership in Energy and Environmental Design [LEED] certification has been updated to cover data centers. The IBM Boulder data center has achieved LEED Silver certification, and the IBM Raleigh data center has LEED Gold certification.
Free cooling, electricity costs, and disaster susceptibility are just three of the 25 criteria IBM uses to locate its data centers. In addition to the 7 data centers it manages for its own operations and 5 data centers for web hosting, IBM manages over 400 data centers for other clients.
It seems that Green IT initiatives are more important to the storage-oriented attendees than the x86-oriented folks. I suspect that is because many System x servers are deployed in small and medium businesses that do not have data centers, per se.
technorati tags: IBM, Technical University, Green Data Center, PUE, DCiE, Free Cooling, ASHRAE, LEED, SSD, Disk, Tape, SONAS, Archive
Tags: 
free+cooling
tape
#ibmtechu
#storage
#ibmstorage
sonas
disk
ashrae
ibm
leed
technical+university
pue
green+data+center
ssd
dcie
archive
|
Continuing my coverage of the annual [2010 System Storage Technical University], I participated in the storage free-for-all, a long-time tradition started at the SHARE user group conference and carried forward to other IT conferences. The free-for-all is a Q&A panel of experts where anyone can ask any question. These are sometimes called "Birds of a Feather" (BOF) sessions. Last year, they were called "Meet the Experts", one for mainframe storage and the other for storage attached to distributed systems. This year we had two: one focused on Tivoli Storage software, and the second covering storage hardware. This post provides a recap of the storage hardware free-for-all.
The emcee for the event was Scott Drummond. The other experts on the panel included Dan Thompson, Carlos Pratt, Jack Arnold, Jim Blue, Scott Schroder, Ed Baker, Mike Wood, Steve Branch, Randy Arseneau, Tony Abete, Jim Fisher, Scott Wein, Rob Wilson, Jason Auvenshine, Dave Canan, Al Watson, and myself, yours truly, Tony Pearson.
 What can I do to improve performance on my DS8100 disk system? It is running a mix of sequential batch processing and my medical application (EPIC). I have 16GB of cache and everything is formatted as RAID-5.
We are familiar with EPIC. It does not "play well with others", so IBM recommends you consider dedicating resources for just the EPIC data. Also consider RAID-10 instead for the EPIC data.
 How do I evaluate IBM storage solutions in regard to [PCI-DSS] requirements?
Well, we are not lawyers, and some aspects of the PCI-DSS requirements are outside the storage realm. In March 2010, IBM was named ["Best Security Company"] by SC Magazine, and we have secure storage solutions for both disk and tape systems. IBM DS8000 and DS5000 series offer Full Disk Encryption (FDE) disk drives. IBM LTO-4/LTO-5 and TS1120/TS1130 tape drives meet FIPS requirements for encryption. We will provide you with contact information for an encryption expert who can address the other parts of your PCI-DSS concerns.
 My telco will only offer FCIP routing for long-distance disk replication, but my CIO wants to use Fibre Channel routing over CWDM, what do I do?
IBM XIV, DS8000 and DS5000 all support FC-based long distance replication across CWDM. However, if you don't have dark fiber, and your telco won't provide this option, you may need to re-negotiate your options.
 My DS4800 sometimes reboots repeatedly, what should I do?
This was a known problem with microcode level 760.28 when detecting a failed drive. You need to replace the drive and upgrade to the latest microcode.
 Should I use VMware snapshots or DS5000 FlashCopy?
VMware snapshots are not free; you need to upgrade to the appropriate level of VMware to get this function, and it is limited to your VMware data only. The advantage of DS5000 FlashCopy is that it applies to all of your operating systems and hypervisors in use, and eliminates the consumption of VMware overhead. It provides crash-consistent copies of your data. If your DS5000 disk system is dedicated to VMware, then you may want to compare costs versus trade-offs.
 Any truth to the rumor that Fibre Channel protocol will be replaced by SAS?
SAS has some definite cost advantages, but is limited to 8 meters in length. Therefore, you will see more and more usage of SAS within storage devices, but outside the box, there will continue to be Fibre Channel, including FCP, FICON and FCoE. The Fibre Channel Industry Alliance [FCIA] has a healthy roadmap for 16 Gbps support and 20 Gbps interswitch link (ISL) connections.
 What about Fibre Channel drives, are these going away?
We need to differentiate the connector from the drive itself. Manufacturers are able to produce 10K and 15K RPM drives with SAS instead of FC connectors. While many have suggested that a "Flash-and-Stash" approach of SSD+SATA would eliminate the need for high-speed drives, IBM predicts that there just won't be enough SSD produced to meet the performance needs of our clients over the next five years, so 15K RPM drives, most likely with SAS rather than FC connectors, will continue to be deployed during that time.
 We'd like more advanced hands-on labs, and to have the certification exams be more product-specific rather than exams for midrange disk or enterprise disk that are too wide-ranging.
Ok, we will take that feedback to the conference organizers.
 IBM Tivoli Storage Manager is focused on disaster recovery from tape; how do I incorporate remote disk replication?
This is IBM's Unified Recovery Management, based on the seven tiers of disaster recovery established in 1983 at the GUIDE conference. You can combine local recovery with FastBack, data center server recovery with TSM and FlashCopy Manager, and combine that with IBM Tivoli Storage Productivity Center for Replication (TPC-R), GDOC and GDPS to manage disk replication across business continuity/disaster recovery (BC/DR) locations.
 IBM Tivoli Storage Productivity Center for Replication only manages the LUNs, what about server failover and mapping the new servers to the replicated LUNs?
There are seven tiers of disaster recovery. The sixth tier is to manage the storage replication only, as TPC-R does. The seventh tier adds full server and network failover. For that you need something like IBM GDPS or GDOC that adds this capability.
 All of my other vendor kit has bold advertising, prominent lettering, neon lights, bright colors, but our IBM kit is just black, often not even identifying the specific make or model, just "IBM" or "IBM System Storage".
IBM has opted for simplified packaging and our sleek, signature "raven black" color, and passes these savings on to you.
 Bring back the SHARK fins!
We will bring that feedback to our development team. ("Shark" was the codename for IBM's ESS 800 disk model. Fiberglass "fins" were made as promotional items and placed on top of ESS 800 disk systems to help "identify them" on the data center floor. Unfortunately, professional golfer [<a href="http://www.shark.com/">Greg Norman</a>] complained, so IBM discontinued the use of the codename back in 2005.)
 Where is Infiniband?
Like SAS, Infiniband had limited distance, about 10 to 15 meters, which proved unusable for server-to-storage network connections across data center floorspace. However, there are now 150 meter optical cables available, and you will find Infiniband used in server-to-server communications and inside storage systems. IBM SONAS uses Infiniband today internally. IBM DCS9900 offers Infiniband host-attachment for HPC customers.
 We need midrange storage for our mainframe, please!
In addition to the IBM System Storage DS8000 series, the IBM SAN Volume Controller and IBM XIV are able to connect to Linux on System z mainframes.
 We need "Do's and Don'ts" on which software to run with which hardware.
IBM [Redbooks] are a good source for that, and we prioritize our efforts based on all those cards and letters you send the IBM Redbooks team.
 The new TPC v4 reporting tool requires a bit of a learning curve.
The new reporting tool, based on Eclipse's Business Intelligence Reporting Tool [BIRT], is now standardized across the most of the Tivoli portfolio. Check out the [Tivoli Common Reporting] community page for assistance.
 An unfortunate side-effect of using server virtualization like VMware is that it worsens management and backup issues. We now have many guests on each blade server.
IBM is the leading reseller of VMware, and understands that VMware adds an extra layer of complexity. Thankfully, IBM Tivoli Storage Manager backups use a lightweight agent. IBM [Systems Director VMcontrol] can help you manage a variety of hypervisor environments.
This was a great interactive session. I am glad everyone stayed late Thursday evening to participate in this discussion.
technorati tags: IBM, Technical University, DS8100, EPIC, PCI-DSS, FDE, Encryption, XIV, CWDM, DS5000, SAS, InfiniBand, FCIA, FCoE, FICON, GUIDE, Tivoli, Productivity Center, TPC-R, GDPS, SONAS, SVC, BIRT, Systems Director, VMcontrol
Tags: 
svc
sas
infiniband
pci-dss
birt
ds8100
tpc-r
ficon
fcoe
sonas
vmcontrol
gdps
ibm
fde
ds5000
productivity+center
#ibmtechu
tivoli
systems+director
encryption
epic
xiv
fcia
guide
#ibmstorage
cwdm
#storage
technical+university
|
Continuing my week in Washington DC for the annual [2010 System Storage Technical University], here is my quick recap of the keynote sessions presented Monday morning. Marlin Maddy, Worldwide Technical Events Executive for IBM Systems Lab Services and Training, served as emcee.
- Jim Northington
Jim Northington, IBM System x Business Line Executive, covered the IT industry's "Love/Hate Relationship" with the x86 platform. Many of the physical limitations that were previously a pain on this platform have now been addressed through a combination of IBM's innovative new eX5 architecture and virtualization technologies.
Jim also presented the [IBM CloudBurst] solution. IBM CloudBurst is one of the many "Integrated Systems" designed to help simplify deployment. Based on IBM BladeCenter, the IBM CloudBurst is basically a Private Cloud rack for those that are ready to deploy in their own data center.
Jim feels that server virtualization on x86 platforms is still in its infancy. IBM calls it the 70/30 rule: 70 percent of x86 workloads are running virtualized on 30 percent of the physical servers.
- Maria Azua
Maria Azua, IBM Vice President of Cloud Computing Enablement, presented on Cloud Computing. Technology is being adopted at faster rates. It took 40 years for radio to get 60 million listeners, 20 years for 60 million television viewers, 3 years to get 60 million surfers on the Internet, but it only took 4 months to get 60 million players on Farmville!
Maria covered various aspects of Cloud Computing: virtualization images, service catalog, provisioning elasticity, management and billing services, and virtual networks. With Cloud Computing, the combination of virtualization technologies, standardization, and automation can reduce costs and improve flexibility.
We've seen this happen before. Telcos transitioned from human operators to automated digital switches. Manufacturers went from having small teams of craftsmen to assembly lines of robots. Banks went from long lines of bank tellers to short lines at the ATM.
Maria said that companies are faced with three practical choices:
- Do-it-Yourself: buy the servers, storage and switches, and connect everything together.
- Purchase pre-installed "integrated systems" to simplify deployment.
- Subscribe to Cloud computing, allowing a service provider to do all this for you.
In countries where network access is not ubiquitous, IBM has developed tools for the cloud that work in "offline" mode. IBM has also developed or modified tools to run better in the cloud. Launching a compute instance from the service catalog is so easy that your 5-year-old child could do it!
Want to see Cloud Computing in action? Check out [Innovation.ed.gov], which is run in the IBM cloud, for the US Department of Education's website to foster innovation.
Whether you adopt public, private or a hybrid cloud computing approach, Maria suggests you take time to plan, test your applications for standardization, examine all risks, and explore new workloads that might be good candidates. Otherwise, moving to the cloud might just mean "More mess for less". Maria provided a list of applications that IBM considers good fit for Cloud Computing today.
I heard several audience members indicate that this is the first time someone finally explained Cloud Computing to them in a way that made sense!
technorati tags: IBM, Technical University, eX5, CloudBurst, x86, Maria Azua, cloud computing, Department of Education, private cloud, public cloud, hybrid cloud
Tags: 
ibm
hybrid+cloud
cloud+computing
#ibmtechu
x86
#ibmstorage
private+cloud
cloudburst
department+of+education
ex5
public+cloud
#systemx
#storage
maria+azua
#cloud
technical+university
|
Continuing my week in Washington DC for the annual [2010 System Storage Technical University], here is my quick recap of the keynote sessions presented Monday morning. Marlin Maddy, Worldwide Technical Events Executive for IBM Systems Lab Services and Training, served as emcee.
- Roland Hagen
Roland Hagen, IBM Vice President for the IBM System x server platform, presented on how IBM is redefining the x86 computing experience. More than 50 percent of all servers are x86 based. These x86 servers are easy to acquire, enjoy a large application base, and can take advantage of a readily available skilled workforce for administration. The problem is that 85 percent of x86 processing power remains idle, energy costs are 8 times what they were 12 years ago, and management costs are now 70 percent of the IT budget.
IBM has the number one market share for scalable x86 servers. Roland covered the newly announced eX5 architecture that has been deployed in both rack-optimized models as well as IBM BladeCenter blade servers. These can offer 2x the memory capacity as competitive offerings, which is important for today's server virtualization, database and analytics workloads. This includes 40 and 80 DIMM models of blades, and 64 to 96 DIMM models of rack-optimized systems. IBM also announced eXFlash, internal Solid State Drives accessible at bus speeds. FlexNode allows a 4-node system to dynamically change to 2 separate 2-node systems.
By 2013, analysts estimate that 69 percent of x86 workloads will be virtualized, and that 22 percent of servers will be running some form of hypervisor software. By 2015, this grows to 78 percent of x86 workloads being virtualized, and 29 percent of servers running hypervisor.
- Doug Balog
Doug Balog, IBM Vice President and Disk Storage Business Line Executive, presented on how the growth of information results in a "perfect storm" for the storage industry. Storage admins are focused on managing storage growth and the related costs and complexity, proper forecasting and capacity planning, and backup administration. IBM's strategy is to help clients in the following areas:
- Storage Efficiency - getting the most use out of the resources you invest
- Service Delivery - ensuring that information gets to the right people at the right time, simplify reporting and provisioning
- Data Protection - protecting data against unethical tampering, unauthorized access, and unexpected loss and corruption
He wrapped up his talk covering the success of DS8700 and XIV. In fact, 60 percent of XIV sales are to EMC customers. The TCO of an XIV is less than half the TCO of a comparable EMC VMAX disk system.
- Dave McQueeney
Dave McQueeney, IBM Vice President for Strategy and CTO for US Federal, covered how IBM's Smarter Planet vision for smarter cities, smarter healthcare, a smarter energy grid and smarter traffic is being adopted by the public sector. Almost every data center in the US Federal government is out of power, floor space and/or cooling capability. An estimated 80 percent of US Federal government IT budgets are spent on maintenance and ongoing operations, leaving very little left over for the big transformational projects that President Barack Obama wants to accomplish.
Who has the most active Online Transaction Processing (OLTP) system? You might guess a big bank, but it is the US Department of Homeland Security (DHS), with a system processing 600 million transactions per day. Another government agency is #2, and the top banking application comes in at #3. The IBM mainframe solved problems 10 to 15 years ago that distributed systems are only now encountering. Worldwide, more than 80 percent of banks use mainframes to handle their financial transactions.
IBM's recent POWER7 servers are proving successful in the field. For example, Allianz was able to consolidate 60 servers to 1. Running DB2 on a POWER7 server is 38 percent less expensive than Oracle on x86 Nehalem processors. For Java, a JVM on POWER7 performs 73 percent better than a JVM on x86 Nehalem.
The US federal government ingests a large amount of data, with huge 10-20 PB data warehouses. In fact, the amount of data received every year by the US federal government alone exceeds the capacity of all the disk drives produced by every drive manufacturer. This means that all data must be processed through "data reduction" or it is gone forever.
- Clod Barrera
The last keynote for Monday was given by Clod Barrera, IBM Distinguished Engineer and Chief Technical Strategist for System Storage. He started out shocking the audience with his view that the "disk drive industry is a train wreck". While R&D in disk drives enjoyed a healthy improvement curve up to about 2004, it has now slowed down, getting more difficult and more expensive to improve performance and capacity of disk drives. The rest of his presentation was organized around three themes:
- Integrated Stacks - while newcomers like Oracle/Sun and the VCE coalition are promoting the benefits of integrated stacks, IBM has been doing this for the past five decades. New advancements in server and storage virtualization provide exciting new opportunities.
- Integrated Systems - solutions like IBM Information Archive and SONAS, and new features like Easy Tier that help adopt SSD transparently. As it gets harder and harder to scale-up, IBM has moved to innovative scale-out architectures.
- Integrated Data Center management - companies are now realizing that management and governance are critical factors of success, and that this needs to be integrated between traditional IT, private, public and hybrid cloud computing.
This was a great inspiring start for what looks like an awesome week!
technorati tags: IBM, Technical University, Marlin Maddy, Roland Hagen, Doug Balog, Dave McQueeney, Clod Barrera, x86, eX5, FlexNode, Barack Obama, DHS, OLTP, DB2, POWER7, Oracle, JVM, Intel, Nehalem
Tags: 
dhs
ex5
oracle
#ibmtechu
roland+hagen
x86
barack+obama
clod+barrera
dave+mcqueeney
oltp
technical+university
#ibmstorage
marlin+maddy
nehalem
intel
flexnode
ibm
#storage
doug+balog
jvm
db2
power7
|
Continuing my coverage of the annual [2010 System Storage Technical University], I gave three sessions, some twice to accommodate the size of the rooms. The first was the ["Storage for a Green Data Center"] I covered in my previous post. This post covers the other two.
- IBM Tivoli Storage Productivity Center version 4.1 Overview
In conferences like these, there are two types of product-level presentations. An "Overview" explains how a product works today for those who are not familiar with it. An "Update" explains what's new in the current version of the product for those who are already familiar with previous releases. This session was an Overview of [Tivoli Storage Productivity Center], plus some information on IBM's Storage Enterprise Resource Planner [SERP], which came from IBM's acquisition of NovusCG.
I was one of the original lead architects of Productivity Center many years ago, and was able to share many personal experiences about its evolution in development and in the field at client facilities. Analysts have repeatedly rated IBM Productivity Center as one of the top Storage Resource Management (SRM) tools available in the marketplace.
I would like to thank my colleague Harley Puckett for his assistance in putting the finishing touches on this presentation. This was my best attended session of the week, indicating there is a lot of interest in this product in particular, and managing a heterogeneous mix of storage devices in general. To hear a quick video introduction, see Harley Puckett's presentation at the [IBM Virtual Briefing Center].
- Information Lifecycle Management (ILM) Overview
Can you believe I have been doing ILM since 1986? I was the lead architect for DFSMS which provides ILM support for z/OS mainframes. In 2003-2005, I spent 18 months in the field performing ILM assessments for clients, and now there are dozens of IBM practitioners in Global Services and Lab Services that do this full time. This is a topic I cover frequently at the IBM Executive Briefing Center [EBC], because it addresses several top business challenges:
- Reducing costs and simplifying management
- Improving efficiency of personnel and application workloads
- Managing risks and regulatory compliance
IBM has a solution based on five "entry points". The advantage of this approach is that it allows our consultants to craft the right solution to meet the specific requirements of each client situation. These entry points are:
- Enterprise Content Management [ECM]
- Tiered Information Infrastructure - we don't limit ourselves to just "Tiered Storage" as storage is only part of a complete [information infrastructure] of servers, networks and storage
- Storage Optimization and Virtualization - including virtual disk, virtual tape and virtual file solutions
- Process Enhancement and Automation - an important part of ILM are the policies and procedures, such as IT Infrastructure Library [ITIL] best practices
- Archive and Retention - space management and data retention solutions for email, database and file systems
When I presented ILM last year, I did not get many attendees. This time I had more, perhaps because the recent announcement of ILM and HSM support in IBM SONAS and our April announcement of IBM DS8700 Easy Tier have renewed interest in this area.
I have safely returned to Tucson, but I still have a lot of notes from the other sessions I attended, so I will cover them this week.
Tags: 
ibm
technical+university
tpc
#ibmstorage
srm
#storage
#techu
|