Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Systems Client Experience Center in Tucson Arizona, and featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson)
My books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
For the longest time, people thought that humans could not run a mile in less than four minutes. Then, in 1954, [Sir Roger Bannister] beat that perception, and shortly thereafter, once he showed it was possible, many other runners were able to achieve this also. The same is being said now about the IBM Watson computer which appeared this week against two human contestants on Jeopardy!
(2014 Update: A lot has happened since I originally wrote this blog post! I intended this as a fun project for college students to work on during their summer break. However, IBM is concerned that some businesses might be led to believe they could simply stand up their own systems based entirely on open source and internally developed code for business use. IBM recommends instead the [IBM InfoSphere BigInsights] product, which packages much of the software described below. IBM has also launched a new "Watson Group" that has [Watson-as-a-Service] capabilities in the Cloud. To raise awareness of these developments, IBM has asked me to rename this post from IBM Watson - How to build your own "Watson Jr." in your basement to the new title IBM Watson -- How to replicate Watson hardware and systems design for your own use in your basement. I also took this opportunity to improve the formatting layout.)
Often, when a company demonstrates new technology, the demonstration is a prototype, not ready for commercial deployment until several years later. IBM Watson, however, was made mostly from commercially available hardware, software and information resources. As several have noted, the 1TB of data used to search for answers could fit on a single USB drive that you can buy at your local computer store.
Take a look at the [IBM Research Team] to see how the project was organized. Let's decide what we need, and what we don't, in our version for personal use:
Do we need it for personal use?
Yes. That's you. Assuming this is a one-person project, you will act as Team Lead.
Yes, I hope you know computer programming!
No, since this version for personal use won't be appearing on Jeopardy, we won't need strategy on wager amounts for the Daily Double, or what clues to pick next. Let's focus merely on a computer that can accept a question in text, and provide an answer back, in text.
Yes, this team focused on how to wire all the hardware together. We need to do that, although this version for personal use will have fewer components.
Optional. For now, let's have this version for personal use just return its answer in plain text. Consider this Extra Credit after you get the rest of the system working. Consider using [eSpeak], [FreeTTS], or the Modular Architecture for Research on speech sYnthesis [MARY] Text-to-Speech synthesizers.
Yes, I will explain what this is, and why you need it.
Yes, we will need to get information sources for this version for personal use to process.
Yes, this team developed a system for parsing the question being asked, and to attach meaning to the different words involved.
No, this team focused on making IBM Watson optimized to answer in 3 seconds or less. We can accept a slower response, so we can skip this.
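If you do attempt the text-to-speech Extra Credit later on, eSpeak is the simplest of the three synthesizers to script. A minimal sketch, assuming the espeak package is installed via your distribution's package manager; the answer text here is just a placeholder:

```shell
# Speak an answer aloud, and also save it as a WAV file.
answer="Sir Roger Bannister"
espeak -v en -s 140 "$answer"      # speak at 140 words per minute
espeak -w answer.wav "$answer"     # write the same speech to answer.wav
```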
(Disclaimer: As with any Do-It-Yourself (DIY) project, I am not responsible if you are not happy with your version for personal use. I am basing the approach on what I read from publicly available sources, and my work in Linux, supercomputers, XIV, and SONAS. For our purposes, this version for personal use is based entirely on commodity hardware, open source software, and publicly available sources of information. Your implementation will certainly not be as fast or as clever as the IBM Watson you saw on television.)
Step 1: Buy the Hardware
Supercomputers are built as a cluster of identical compute servers lashed together by a network. You will be installing Linux on them, so if you can avoid paying extra for Microsoft Windows, that would save you some money. Here is your shopping list:
Three x86 hosts, with the following:
64-bit quad-core processor, either Intel-VT or AMD-V capable,
8GB of DRAM, or larger
300GB of hard disk, or larger
CD or DVD Read/Write drive
Computer Monitor, mouse and keyboard
Ethernet 1GbE 4-port switch, and appropriate RJ45 cables
Surge protector and Power strip
Local Console Monitor (LCM) 4-port switch (formerly known as a KVM switch) and appropriate cables. This is optional, but will make it easier during the development. Once your implementation is operational, you will only need the monitor and keyboard attached to one machine. The other two machines can remain "headless" servers.
Step 2: Establish Networking
IBM Watson used Juniper switches running at 10Gbps Ethernet (10GbE) speeds, but was not connected to the Internet while playing Jeopardy! Instead, these Ethernet links were for the POWER7 servers to talk to each other, and to access files over the Network File System (NFS) protocol to the internal customized SONAS storage I/O nodes.
This implementation will be able to run "disconnected from the Internet" as well. However, you will need Internet access to download the code and information sources. For our purposes, 1GbE should be sufficient. Connect your Ethernet switch to your DSL or cable modem. Connect all three hosts to the Ethernet switch. Connect your keyboard, video monitor and mouse to the LCM, and connect the LCM to the three hosts.
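To keep the three hosts easy to find on your network, give each one a fixed name and address. A sketch using /etc/hosts entries; the addresses and hostnames are my own examples, not requirements:

```shell
# Append name/address entries for the three hosts to /etc/hosts on each
# machine, so they can reach one another by name. The addresses are
# examples; pick ones outside your router's DHCP range.
sudo tee -a /etc/hosts <<'EOF'
192.168.1.101   host1    # Presentation Server
192.168.1.102   host2    # Business Logic Server
192.168.1.103   host3    # File and Database Server
EOF
```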
Step 3: Install Linux and Middleware
To say I use Linux on a daily basis is an understatement. Linux runs on my Android-based cell phone, my laptop at work, my personal computers at home, most of our IBM storage devices from SAN Volume Controller to XIV to SONAS, and even on my Tivo at home which recorded my televised episodes of Jeopardy!
For this project, you can use any modern Linux distribution that supports KVM. IBM Watson used Novell SUSE Linux Enterprise Server [SLES 11]. Alternatively, I can recommend either Red Hat Enterprise Linux [RHEL 6] or Canonical [Ubuntu v10]. Each distribution of Linux comes in different orientations. Download the 64-bit "ISO" files for each version, and burn them to CDs.
Graphical User Interface (GUI) oriented, often referred to as "Desktop" or "HPC-Head"
Command Line Interface (CLI) oriented, often referred to as "Server" or "HPC-Compute"
Guest OS oriented, to run in a Hypervisor such as KVM, Xen, or VMware. Novell calls theirs "Just Enough Operating System" [JeOS].
For this version for personal use, I have chosen a [multitier architecture], sometimes referred to as an "n-tier" or "client/server" architecture.
Host 1 - Presentation Server
For the Human-Computer Interface [HCI], IBM Watson received categories and clues as text files via TCP/IP, had a [beautiful avatar] representing a planet with 42 circles streaking across in orbit, and used a text-to-speech synthesizer to respond in a computerized voice. Your implementation will not be this sophisticated. Instead, we will have a simple text-based Query Panel web interface, accessible from a browser like Mozilla Firefox.
Host 1 will be your Presentation Server, the connection to your keyboard, video monitor and mouse. Install the "Desktop" or "HPC Head Node" version of Linux. Install [Apache Web Server and Tomcat] to run the Query Panel. Host 1 will also be your "programming" host. Install the [Java SDK] and the [Eclipse IDE for Java Developers]. If you always wanted to learn Java, now is your chance. There are plenty of books on Java if that is not the language you normally write code in.
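On Ubuntu, for example, the Host 1 stack can be pulled from the standard repositories. The package names below are from Ubuntu 10.04-era repositories and may differ on your release, so treat them as a starting point:

```shell
# Host 1: web server, servlet container, Java compiler, and IDE.
sudo apt-get update
sudo apt-get install apache2 tomcat6 openjdk-6-jdk eclipse
```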
While three little systems don't constitute an "Extreme Cloud" environment, you might like to try out the "Extreme Cloud Administration Tool", called [xCAT], which was used to manage the many servers in IBM Watson.
Host 2 - Business Logic Server
Host 2 will be driving most of the "thinking". Install the "Server" or "HPC Compute Node" version of Linux. This will be running a server virtualization Hypervisor. I recommend KVM, but you can probably run Xen or VMware instead if you like.
Host 3 - File and Database Server
Host 3 will hold your information sources, indices, and databases. Install the "Server" or "HPC Compute Node" version of Linux. This will be your NFS server, which might come up as a question during the installation process.
Technically, you could run different Linux distributions on different machines. For example, you could run "Ubuntu Desktop" for host 1, "RHEL 6 Server" for host 2, and "SLES 11" for host 3. In general, Red Hat tries to be the best "Server" platform, and Novell tries to make SLES be the best "Guest OS".
My advice is to pick a single distribution and use it for everything, Desktop, Server, and Guest OS. If you are new to Linux, choose Ubuntu. There are plenty of books on Linux in general, and Ubuntu in particular, and Ubuntu has a helpful community of volunteers to answer your questions.
Step 4: Download Information Sources
You will need some documents for your implementation to process.
IBM Watson used a modified SONAS to provide a highly-available clustered NFS server. For this version, we won't need that level of sophistication. Configure Host 3 as the NFS server, and Hosts 1 and 2 as NFS clients. See the [Linux-NFS-HOWTO] for details. To optimize performance, host 3 will hold the "official master copy", but we will use a Linux utility called rsync to copy the information sources over to hosts 1 and 2. This allows the task engines on those hosts to access local disk resources during question-answer processing.
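The export and the rsync step might look like this; the directory names are placeholders, and the Linux-NFS-HOWTO covers the full set of export options:

```shell
# On host3 (NFS server): export the master copy of the information sources.
# Add this line to /etc/exports, then re-export:
#   /srv/sources  host1(ro,sync)  host2(ro,sync)
sudo exportfs -ra

# On host1 and host2 (NFS clients): mount the master copy...
sudo mount -t nfs host3:/srv/sources /mnt/sources

# ...then mirror it to local disk, so task engines read locally at run time.
# On repeat runs, rsync only copies the files that changed.
rsync -av --delete /mnt/sources/ /srv/local-sources/
```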
We will also need a relational database. You won't need a high-powered IBM DB2; your implementation can do fine with something like [Apache Derby], the open source version of IBM Cloudscape, which IBM picked up with its Informix acquisition. Set up Host 3 as the Derby Network Server, and Hosts 1 and 2 as Derby Network Clients. For more about structured content in relational databases, see my post [IBM Watson - Business Intelligence, Data Retrieval and Text Mining].
Linux includes a utility called wget which allows you to download content from the Internet to your system. What documents you decide to download is up to you, based on what types of questions you want answered. For example, if you like Literature, check out the vast resources at [FullBooks.com]. You can automate the download by writing a shell script or program that invokes wget for all the places you want to fetch data from. Rename the downloaded files to something unique, as often they are just "index.html". For more on the wget utility, see [IBM developerWorks].
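A small script along these lines automates the fetching and renaming. The "url-list.txt" file name is my own assumption: any plain-text list with one URL per line will do.

```shell
#!/bin/sh
# Derive a unique local file name from a URL, so downloads don't all land
# as "index.html" and overwrite each other.
unique_name() {
    echo "$1" | sed -e 's|^[a-z]*://||' -e 's|/|_|g'
}

# Fetch every URL listed (one per line) in the given file, saving each
# under its derived name.
fetch_all() {
    while read -r url; do
        wget -q -O "$(unique_name "$url")" "$url"
    done < "$1"
}

# Usage: fetch_all url-list.txt
```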
Step 5: The Query Panel - Parsing the Question
Next, we need to parse the question and get some sense of what is being asked. For this we will use [OpenNLP] for Natural Language Processing, and [OpenCyc] for conceptual logic reasoning. See Doug Lenat present the 75-minute video [Computers versus Common Sense]. To learn more, see the [CYC 101 Tutorial].
Unlike Jeopardy! where Alex Trebek provides the answer and contestants must respond with the correct question, we will do normal Question-and-Answer processing. To keep things simple, we will limit questions to the following formats:
Who is ...?
Where is ...?
When did ... happen?
What is ...?
Host 1 will have a simple Query Panel web interface: at the top, a place to enter your question and a "submit" button; at the bottom, a place for the answer to be shown. When "submit" is pressed, the question is passed to "main.jsp", the Java servlet program that starts the Question-answering analysis. Limiting the types of questions that can be posed simplifies hypothesis generation and reduces the candidate set and evidence evaluation, allowing the analytics processing to complete in reasonable time.
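The gatekeeping step itself is simple. Here is a sketch of the routing logic in shell form; the real work would happen inside main.jsp, so this only illustrates how the four supported formats filter incoming questions:

```shell
#!/bin/sh
# Route a question by its leading words, mirroring the four supported
# formats. Anything else is rejected before analysis begins.
classify() {
    case "$1" in
        "Who is "*)       echo PERSON ;;
        "Where is "*)     echo PLACE ;;
        "When did "*)     echo TIME ;;
        "What is "*)      echo THING ;;
        *)                echo UNSUPPORTED ;;
    esac
}
```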
Step 6: Unstructured Information Management Architecture
The "heart and soul" of IBM Watson is Unstructured Information Management Architecture [UIMA]. IBM developed this, then made it available to the world as open source. It is maintained by the [Apache Software Foundation], and overseen by the Organization for the Advancement of Structured Information Standards [OASIS].
Basically, UIMA lets you scan unstructured documents, glean the important points, and put them into a database for later retrieval. In the graph above, DBs means 'databases' and KBs means 'knowledge bases'. See the 4-minute YouTube video of [IBM Content Analytics], the commercial version of UIMA.
Starting from the left, the Collection Reader selects each document to process, and creates an empty Common Analysis Structure (CAS), which serves as a standardized container for information. This CAS is passed to Analysis Engines, composed of one or more Annotators, which analyze the text and fill the CAS with the information found. Each CAS is then passed to CAS Consumers, which do something with the information found, such as entering a row into a database, updating an index, or updating a vote count.
(Note: This part requires what we in the industry call a "small matter of programming", or [SMOP]. If you've always wanted to learn Java programming, XML, and JDBC, you will get to do plenty here.)
If you are not familiar with UIMA, consider this [UIMA Tutorial].
Step 7: Parallel Processing
People have asked me why IBM Watson is so big. Did we really need 2,880 cores of processing power? As a supercomputer, the 80 TeraFLOPs of IBM Watson would place it only 94th on the [Top 500 Supercomputers] list. While IBM Watson may be the [Smartest Machine on Earth], the most powerful supercomputer at this time is the Tianhe-1A, with more than 186,000 cores, capable of 2,566 TeraFLOPs.
To determine how big IBM Watson needed to be, the IBM Research team ran the DeepQA algorithm on a single core. It took 2 hours to answer a single Jeopardy question! Let's look at the performance data:
Number of cores    Configuration                   Time to answer one Jeopardy question
32                 Single IBM Power 750 server     < 4 minutes
320                Single rack (10 servers)        < 30 seconds
2,880              IBM Watson (90 servers)         < 3 seconds
The old adage applies: [many hands make light work]. The idea is divide-and-conquer. For example, if you wanted to find a particular street address in the Manhattan phone book, you could give fifty pages to each friend, and they could all scan their pages at the same time. This is known as "Parallel Processing" and is how supercomputers are able to work so well. However, not all algorithms lend themselves to parallel processing, and the phrase [nine women can't have a baby in one month] is often used to remind us of this.
Fortunately, UIMA is designed for parallel processing. You need to install UIMA-AS for Asynchronous Scale-out processing, an add-on to the base UIMA Java framework that supports a very flexible scale-out capability based on JMS (Java Messaging Services) and ActiveMQ. We will also need Apache Hadoop, an open source implementation of MapReduce used by the Yahoo! search engine. Hadoop's "MapReduce" engine allows you to divide the work, dispatch pieces to different "task engines", and then combine the results afterwards.
Host 2 will run Hadoop and drive the MapReduce process. Plan to have three KVM guests on Host 1, four on Host 2, and three on Host 3. That gives you 10 task engines to work with. These task engines can be deployed as Collection Readers, Analysis Engines, and CAS Consumers. When all processing is done, the resulting votes will be tabulated and the top answer displayed on the Query Panel on Host 1.
Step 8: Testing
To simplify testing, use a batch processing approach. Rather than entering questions by hand in the Query Panel, generate a long list of questions in a file, and submit for processing. This will allow you to fine-tune the environment, optimize for performance, and validate the answers returned.
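One way to batch the tests, assuming the Query Panel accepts a question as an HTTP parameter; the URL and the "q" parameter name here are placeholders for whatever interface you actually build on top of main.jsp:

```shell
#!/bin/sh
# Submit every question in questions.txt to the Query Panel and log
# question/answer pairs, one per line, for later validation.
while read -r question; do
    answer=$(curl -s --data-urlencode "q=$question" http://host1:8080/query/main.jsp)
    printf '%s\t%s\n' "$question" "$answer" >> answers.log
done < questions.txt
```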
There you have it. By the time you get your implementation fully operational, you will have learned a lot of useful skills, including Linux administration, Ethernet networking, NFS file system configuration, Java programming, UIMA text mining analysis, and MapReduce parallel processing. Hopefully, you will also gain an appreciation for how difficult it was for the IBM Research team to accomplish what they did for the Grand Challenge on Jeopardy! Not surprisingly, IBM Watson is making IBM [as sexy to work for as Apple, Google or Facebook], all of which started their business in a garage or a basement with a system as small as this version for personal use.
Are you tired of hearing about Cloud Computing without having any hands-on experience? Here's your chance. IBM has recently launched its IBM Development and Test Cloud beta. This gives you a "sandbox" to play in. Here are a few steps to get started:
Generate a "key pair". There are two keys. A "public" key that will reside in the cloud, and a "private" key that you download to your personal computer. Don't lose this key.
Request an IP address. This step is optional, but I went ahead and got a static IP, so I don't have to type in long hostnames like "vm353.developer.ihost.com".
Request storage space. Again, this step is optional, but you can request a 50GB, 100GB, or 200GB LUN. I picked a 200GB LUN. Note that each instance comes with some 10 to 30GB of storage already. The advantage of a storage LUN is that it is persistent, and you can mount it to different instances.
Start an "instance". An "instance" is a virtual machine, pre-installed with whatever software you choose from the "asset catalog". These are Linux images running under Red Hat Enterprise Virtualization (RHEV), which is based on Linux's kernel virtual machine (KVM). When you start an instance, you get to decide its size (small, medium, or large), whether to use your static IP address, and where to mount your storage LUN. In the examples below, I gave each instance the static IP and mounted the storage LUN at the /media/storage subdirectory. The process takes a few minutes.
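For the key pair step above, Linux and Mac users can generate one locally with ssh-keygen; the file name "cloudkey" is my own choice, not anything the cloud requires:

```shell
# Create a 2048-bit RSA key pair with no passphrase. "cloudkey" is the
# private key you keep; "cloudkey.pub" is the public half that gets
# uploaded to the cloud.
ssh-keygen -t rsa -b 2048 -N "" -f cloudkey
```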
So, now that you are ready to go, what instance should you pick from the catalog? Here are three examples to get you started:
IBM WebSphere sMASH Application Builder
Base OS server to run LAMP stack
Next, I decided to try out one of the base OS images. There are a lot of books on Linux, Apache, MySQL and PHP (LAMP), a stack that powers nearly 70 percent of the web sites on the internet. This instance lets you install all the software from scratch. Between the Red Hat and Novell SUSE distributions of Linux, Red Hat is focused on being the Hypervisor of choice, and SUSE on being the Guest OS of choice. Most of the images in the "asset catalog" are based on SLES 10 SP2. However, there was a base OS image of Red Hat Enterprise Linux (RHEL) 5.4, so I chose that.
To install software, you either have to find the appropriate RPM package, or download a tarball and compile from source. To try both methods out, I downloaded tarballs of Apache Web Server and PHP, and got the RPM packages for MySQL. If you just want to learn SQL, there are instances on the asset catalog with DB2 and DB2 Express-C already pre-installed. However, if you are already an expert in MySQL, or are following a tutorial or examples based on MySQL from a classroom textbook, or just want a development and test environment that matches what your company uses in production, then by all means install MySQL.
This is where my SSH client comes in handy. I am able to login to my instance and use "wget" to fetch the appropriate files. An alternative is to use "SCP" (also part of PuTTY) to do a secure copy from your personal computer up to the instance. You will need to do everything via command line interface, including editing files, so I found this [VI cheat sheet] useful. I copied all of the tarballs and RPMs on my storage LUN ( /media/storage ) so as not to have to download them again.
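The two transfer styles look something like this; the tarball version, hostname, and key file are examples from my own setup, so substitute yours:

```shell
# Pull a tarball straight down onto the instance from an archive site...
wget http://archive.apache.org/dist/httpd/httpd-2.2.14.tar.gz

# ...or push a file up from your personal computer with secure copy,
# authenticating with the private key from your generated key pair.
scp -i cloudkey httpd-2.2.14.tar.gz idcuser@vm353.developer.ihost.com:/media/storage/
```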
Compiling and configuring them is a different matter. By default, you login as an end user, "idcuser" (which stands for IBM Developer Cloud user). However, sometimes you need "root" level access. Use "sudo bash" to get into root-level mode, which allows you to put the files where they need to be. If you haven't done a configure/make/make install in a while, here's your chance to relive those "glory days".
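For anyone reliving those glory days, the classic source-install dance goes like this; the version number is just the one I happened to grab, and the install prefix is a common convention rather than a requirement:

```shell
# Unpack, configure, build, and install a tarball from source.
# "make install" writes to system directories, hence the sudo.
tar -xzf httpd-2.2.14.tar.gz
cd httpd-2.2.14
./configure --prefix=/usr/local/apache2
make
sudo make install
```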
In the end, I was able to confirm that Apache, MySQL and PHP were all running correctly. I wrote a simple index.php that invoked phpinfo() to show all the settings were set correctly. I rebooted the instance to ensure that all of the services started at boot time.
Rational Application Developer over VDI
For this last example, I started an instance pre-installed with Rational Application Developer (RAD), which is a full Integrated Development Environment (IDE) for Java and J2EE applications. I used the "NX Client" to launch a virtual desktop image (VDI), which in this case was Gnome on SLES 10 SP2. You might want to increase the screen resolution on your personal computer so that the VDI does not take up the entire screen.
From this VDI, you can launch any of the programs, just as if it were your own personal computer. Launch RAD, and you get the familiar environment. I created a short Java program and launched it on the internal WebSphere Application Server test image to confirm it was working correctly.
If you are thinking, "This is too good to be true!" there is a small catch. The instances are only up and running for 7 days. After that, they go away, and you have to start up another one. This includes any files you had on the local disk drive. You have a few options to save your work:
Copy the files you want to save to your storage LUN. This storage LUN appears persistent, and continues to exist after the instance goes away.
Take an "image" of your "instance", a function provided in the IBM Development and Test Cloud. If you start a project Monday morning and work on it all week, then on Friday afternoon you can take an "image". This will shut down your instance and back up all of its files to your own personal "asset catalog", so that the next time you request an instance, you can choose that "image" as the starting point.
Another option is to request an "extension" which gives you another 7 days for that instance. You can request up to five unique instances running at the same time, so if you wanted to develop and test a multi-host application, perhaps one host that acts as the front-end web server, another host that does some kind of processing, and a third host that manages the database, this is all possible. As far as I can tell, you can do all the above from either a Windows, Mac or Linux personal computer.
Getting hands-on access to Cloud Computing really helps to understand this technology!
Well, it's Tuesday again, and you know what that means... IBM announcements! Today, IBM announces that next Monday marks the 60th anniversary of the first commercial digital tape storage system! I am on the East coast this week visiting clients, but plan to be back in Tucson in time for the cake and fireworks next Monday.
1925 - masking tape (which 3M sold under its newly announced Scotch® brand)
1930 - clear cellulose-based tape (today, when people say Scotch tape, they usually are referring to the cellulose version)
1935 - Allgemeine Elektrizitatsgesellschaft (AEG) presents the Magnetophon K1, recording audio on analog tape
1942 - Duct tape
1947 - Bing Crosby adopts audio recording for his radio program. This eliminated him doing the same program live twice per day, perhaps the first example of using technology for "deduplication".
According to the IBM Archives, the [IBM 726 tape drive was formally announced May 21, 1952]. It was the size of a refrigerator, and the tape reel was the size of a large pizza. The next time you pull a frozen pizza from your freezer, you can remember this month's celebration!
When I first joined IBM in 1986, there were three kinds of IBM tape: the round reel called the 3420, the square cartridge called the 3480, and the tubes that contained a wide swath of tape stored in honeycomb shelves, called the [IBM 3850 Mass Storage System].
My first job at IBM was to work on DFHSM, which was specifically started in 1977 to manage the IBM 3850, and later renamed to the DFSMShsm component of the DFSMS element of the z/OS operating system. This software was instrumental in keeping disk and tape at high 80-95 percent utilization rates on mainframe servers.
While visiting a client in Detroit, I learned that the client loved their StorageTek tape automation silo, but didn't care for the StorageTek drives inside, which were incompatible with IBM formats. They wanted to put IBM drives into the StorageTek silos. I agreed it was a good idea, and brought this back to the attention of development. In a contentious meeting with management and engineers, I presented this feedback from the client.
Everyone in the room said IBM couldn't do that. I asked "Why not?" The software engineers I spoke to already said they could support it. With StorageTek at the brink of Chapter 11 bankruptcy, I argued that IBM drives in their tape automation would ease the transition of our mainframe customers to an all-IBM environment.
Was the reason related to business/legal concerns, or was there a hardware issue? It turned out to be a little of both. On the business side, IBM had to agree to work with StorageTek on service and support to its mutual clients in mixed environments. On the technical side, the drive had to be tilted 12 degrees to line up with the robotic hand. A few years later, the IBM silo-compatible 3592 drive was commercially available.
Rather than putting StorageTek completely out of business, the change had the opposite effect. Now that IBM drives could be put in StorageTek libraries, everyone wanted one, basically bringing StorageTek back to life. This forced IBM to offer its own tape automation libraries.
In 1993, I filed my first patent. It was for the RECYCLE function in DFHSM to consolidate valid data from partial tapes to fresh new tapes. Before my patent, the RECYCLE function selected tapes alphabetically, by volume serial (VOLSER). My patent evaluated all tapes based on how full they were, and sorted them least-full to most-full, to maximize the return of cartridges.
Different tape cartridges can hold different amounts of data, especially with different formats on the same media type, with or without compression, so calculating the percentage full turned out to be a tricky algorithm that continues to be used in mainframe environments today.
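As a toy illustration of the selection order (not the patented DFHSM code), here is the idea of sorting volumes least-full first, so each RECYCLE pass frees a whole cartridge for the least data moved; the VOLSERs and percentages are made up:

```shell
#!/bin/sh
# Given VOLSER,percent-full pairs, list the least-full tapes first,
# so they are recycled ahead of fuller ones.
sort -t, -k2 -n <<'EOF'
VOL003,85
VOL001,12
VOL002,47
EOF
```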
The patent was popular for cross-licensing, and IBM has since filed additional patents for this invention in other countries to further increase its license revenue for intellectual property.
In 1997, IBM launched the IBM 3494 Virtual Tape Server (VTS), the first virtual tape storage device, blending disk and tape to optimal effect. This was based on the IBM 3850 Mass Storage System, the first virtual disk system, which used 3380 disk and tape to emulate the older 3350 disk systems.
In the VTS, tape volume images would be emulated as files on a disk system, then later moved to physical tape. We would call the disk the "Tape Volume Cache", and use caching algorithms to decide how long to keep data in cache, versus destage to tape. However, there were only a few tape drives, and sometimes when the VTS was busy, there were no tape drives available to destage the older images, and the cache would fill up.
I had already solved this problem in DFHSM, with a function called pre-migration. The idea was to pre-emptively copy data to tape, but leave it also on disk, so that when it needed to be destaged, all we had to do was delete the disk copy and activate the tape copy. We patented using this idea for the VTS, and it is still used in the successor models of the IBM System Storage TS7740 virtual tape libraries today.
Today, tape continues to be the least expensive storage medium, about 15 to 25 times less expensive, dollar-per-GB, than disk technologies. A dollar of today's LTO-5 tape can hold 22 days worth of MP3 music at 192 Kbps recording. A full TS1140 tape cartridge can hold 2 million copies of the book "War and Peace".
(If you have not read the book, Woody Allen took a speed reading course and read the entire novel in just 20 minutes. He summed up the novel in three words: "It involves Russia." By comparison, in the same 20 minutes, at 650MB/sec, the TS1140 drive can read this novel over and over 390,000 times.)
If you have your own "war stories" about tape, I would love to hear them, please consider posting a comment below.
Tonight PBS plans to air Season 38, Episode 6 of NOVA, titled [Smartest Machine On Earth]. Here is an excerpt from the station listing:
"What's so special about human intelligence and will scientists ever build a computer that rivals the flexibility and power of a human brain? In "Artificial Intelligence," NOVA takes viewers inside an IBM lab where a crack team has been working for nearly three years to perfect a machine that can answer any question. The scientists hope their machine will be able to beat expert contestants in one of the USA's most challenging TV quiz shows -- Jeopardy, which has entertained viewers for over four decades. "Artificial Intelligence" presents the exclusive inside story of how the IBM team developed the world's smartest computer from scratch. Now they're racing to finish it for a special Jeopardy airdate in February 2011. They've built an exact replica of the studio at its research lab near New York and invited past champions to compete against the machine, a big black box code-named Watson after IBM's founder, Thomas J. Watson. But will Watson be able to beat out its human competition?"
Like most supercomputers, Watson runs the Linux operating system. The system runs 2,880 cores (90 IBM Power 750 servers, four sockets each, eight cores per socket) to achieve 80 [TeraFlops]. TeraFlops is the unit of measure for supercomputers, representing a trillion floating point operations per second. By comparison, Hans Moravec, principal research scientist at the Robotics Institute of Carnegie Mellon University (CMU), estimates that the [human brain is about 100 TeraFlops]. So, in the three seconds that Watson gets to calculate its response, it would have processed 240 trillion operations.
Several readers of my blog have asked for details on the storage aspects of Watson. Basically, it is a modified version of IBM Scale-Out NAS [SONAS] that IBM offers commercially, but running Linux on POWER instead of Linux-x86. System p expansion drawers of SAS 15K RPM 450GB drives, 12 drives each, are dual-connected to two storage nodes, for a total of 21.6TB of raw disk capacity. The storage nodes use IBM's General Parallel File System (GPFS) to provide clustered NFS access to the rest of the system. Each Power 750 has minimal internal storage mostly to hold the Linux operating system and programs.
When Watson is booted up, the 15TB of total RAM is loaded, and thereafter the DeepQA processing is all done from memory. According to IBM Research, "The actual size of the data (analyzed and indexed text, knowledge bases, etc.) used for candidate answer generation and evidence evaluation is under 1TB." For performance reasons, various subsets of the data are replicated in RAM on different functional groups of cluster nodes. The entire system is self-contained; Watson is NOT going to the internet searching for answers.
“In times of universal deceit, telling the truth will be a revolutionary act.”
-- George Orwell
Well, it has been over two years since I first covered IBM's acquisition of the XIV company. Amazingly, I still see a lot of misperceptions out in the blogosphere, especially those regarding double drive failures for the XIV storage system. Despite various attempts to [explain XIV resiliency] and to [dispel the rumors], there are still competitors making stuff up, putting fear, uncertainty and doubt into the minds of prospective XIV clients.
Clients love the IBM XIV storage system! In this economy, companies are not stupid. Before buying any enterprise-class disk system, they ask the tough questions, run evaluation tests, and all the other due diligence often referred to as "kicking the tires". Here is what some IBM clients have said about their XIV systems:
“3-5 minutes vs. 8-10 hours rebuild time...”
-- satisfied XIV client
“...we tested an entire module failure - all data is re-distributed in under 6 hours...only 3-5% performance degradation during rebuild...”
-- excited XIV client
“Not only did XIV meet our expectations, it greatly exceeded them...”
In this blog post, I hope to set the record straight. It is not my intent to embarrass anyone in particular, so I will instead take a fact-based approach.
Fact: IBM has sold THOUSANDS of XIV systems
XIV is "proven" technology with thousands of XIV systems in company data centers. And by systems, I mean full disk systems with 6 to 15 modules in a single rack, twelve drives per module. That equates to hundreds of thousands of disk drives in production TODAY, comparable to the number of disk drives studied by [Google], and [Carnegie Mellon University] that I discussed in my blog post [Fleet Cars and Skin Cells].
Fact: To date, no customer has lost data as a result of a Double Drive Failure on an XIV storage system
This has always been true, both when XIV was a stand-alone company and since the IBM acquisition two years ago. When examining the resilience of an array to any single or multiple component failures, it's important to understand the architecture and the design of the system, and not assume all systems are alike. At its core, XIV is a grid-based storage system. IBM XIV does not use traditional RAID-5 or RAID-10 methods; instead, data is distributed across loosely connected data modules which act as independent building blocks. XIV divides each LUN into 1MB "chunks", and stores two copies of each chunk on separate drives in separate modules. We call this "RAID-X".
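The chunk-distribution idea can be sketched in a few lines of Python. This is a toy illustration of the RAID-X placement constraint described above, not XIV's actual distribution algorithm; the module and drive counts simply mirror a full rack:

```python
import random

def place_chunks(num_chunks, num_modules=15, drives_per_module=12):
    """Toy RAID-X placement: each 1MB chunk gets two copies, placed
    pseudo-randomly on drives in two *different* modules."""
    placement = []
    for _chunk in range(num_chunks):
        m1, m2 = random.sample(range(num_modules), 2)  # distinct modules
        d1 = random.randrange(drives_per_module)
        d2 = random.randrange(drives_per_module)
        placement.append(((m1, d1), (m2, d2)))
    return placement

# The invariant that makes a single module failure survivable:
layout = place_chunks(10000)
assert all(primary[0] != mirror[0] for primary, mirror in layout)
```

Because every chunk's mirror lives in a different module, losing one drive (or even a whole module) leaves a surviving copy of every chunk spread across the rest of the grid, which is why the rebuild load is shared by all remaining drives.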
Spreading all the data across many drives is not unique to XIV. Many disk systems, including the EMC Symmetrix-based V-Max, HP EVA, and Hitachi Data Systems (HDS) USP-V, allow customers to get XIV-like performance by spreading LUNs across multiple RAID ranks. This is known in the industry as "wide-striping". Some vendors use the terms "metavolumes" or "extent pools" to refer to their implementations of wide-striping. Clients have coined their own phrases, such as "stripes across stripes", "plaid stripes", or "RAID 500". It is highly unlikely that an XIV will experience a double drive failure that ultimately requires recovery of files or LUNs, and an XIV is substantially less vulnerable to data loss than an EVA, USP-V or V-Max configured with RAID-5. Fellow blogger Keith Stevenson (IBM) compared XIV's RAID-X design to other forms of RAID in his post [RAID in the 21st Century].
Fact: IBM XIV is designed to minimize the likelihood and impact of a double drive failure
The independent failure of two drives is a rare occurrence. More data has been lost from hash collisions on EMC Centera than from double drive failures on XIV, and hash collisions are also very rare. While the published worst-case time to re-protect after a 1TB drive failure on a fully-configured XIV is 30 minutes, field experience shows XIV regaining full redundancy in 12 minutes on average. That is roughly 40 times faster than the typical 8-10 hour rebuild window for a RAID-5 configuration.
A lot of bad things can happen in those 8-10 hours of traditional RAID rebuild. Performance can be seriously degraded. Other components may be affected if they share cache, are connected to the same backplane or bus, or are co-dependent in some other manner. An engineer supporting the customer onsite during a RAID-5 rebuild might pull the wrong drive, thereby causing the very double drive failure they were hoping to avoid. Having IBM XIV rebuild in only a few minutes addresses this "human factor".
In his post [XIV drive management], fellow blogger Jim Kelly (IBM) covers a variety of reasons why storage admins feel double drive failures are more than just random chance. XIV avoids the load stress normally associated with traditional RAID rebuild by evenly spreading out the workload across all drives. This is known in the industry as "wear-leveling". When the first drive fails, the recovery is spread across the remaining 179 drives, so that each drive only processes about 1 percent of the data. The [Ultrastar A7K1000] 1TB SATA disk drives that IBM uses from HGST have a specified mean time between failures [MTBF] of 1.2 million hours, which would average out to about one drive failure every nine months in a 180-drive XIV system. However, field experience shows that an XIV system will experience, on average, one drive failure per 13 months, comparable to what companies experience with more robust Fibre Channel drives. That's innovative XIV wear-leveling at work!
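The "one drive every nine months" figure falls out of the MTBF arithmetic, assuming independent failures at the specified rate (a simplification, as MTBF specs always are):

```python
mtbf_hours = 1.2e6   # specified MTBF of a single Ultrastar A7K1000 drive
drives = 180         # fully configured 15-module XIV (12 drives/module)

# With independent failures, the fleet sees one failure roughly every
# MTBF / N hours.
fleet_interval_hours = mtbf_hours / drives
hours_per_month = 24 * 365 / 12
months = fleet_interval_hours / hours_per_month
print(round(months, 1))   # about 9 months between drive failures
```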
Fact: In the highly unlikely event that a DDF were to occur, you will have full read/write access to nearly all of your data on the XIV, all but a few GB.
Even though it has NEVER happened in the field, some clients and prospects are curious what a double drive failure on an XIV would look like. First, a critical alert message would be sent to both the client and IBM, and a "union list" is generated, identifying all the chunks the two failed drives have in common. The worst case on a 15-module XIV fully loaded with 79TB of data is approximately 9,000 chunks, or about 9GB of data. The remaining 78.991TB of unaffected data is fully accessible for reads and writes. Any I/O request for a chunk on the "union list" will not be serviced, so there is no way for host applications to access outdated information or cause any corruption.
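To put that worst case in proportion (my own arithmetic, using the figures from the post):

```python
chunks = 9000                          # worst-case "union list" of 1MB chunks
affected_gb = chunks * 1e6 / 1e9       # 9.0 GB potentially affected
total_gb = 79 * 1000                   # 79TB fully loaded system

pct = 100 * affected_gb / total_gb
print(affected_gb)        # 9.0 GB
print(round(pct, 3))      # about 0.011 percent of the data
```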
(One blogger compared losing data on the XIV to drilling a hole through the phone book. Mathematically, the drill bit would be only 1/16th of an inch, or 1.6 millimeters for you folks outside the USA. Enough to knock out perhaps one character from a name or phone number on each page. If you have ever seen an actor in the movies look up a phone number in a telephone booth then yank out a page from the phone book, the XIV equivalent would be cutting out 1/8th of a page from an 1100-page phone book. In both cases, all of the rest of the unaffected information is fully accessible, and it is easy to identify which information is missing.)
If the second drive fails several minutes after the first, the process of restoring full redundancy is already well under way. This means the union list is considerably shorter, or completely empty, and substantially fewer chunks are impacted. Contrast this with RAID-5, where being 99 percent complete on the rebuild when the second drive fails is just as catastrophic as having both drives fail simultaneously.
Fact: After a DDF event, the files on these few GB can be identified for recovery.
Once IBM receives notification of a critical event, an IBM engineer immediately connects to the XIV using the remote service support method. There is no need to send someone physically onsite; the repair actions can be done remotely. The IBM engineer has tools from HGST to recover, in most cases, all of the data.
Any "union" chunk that the HGST tools are unable to recover will be set to "media error" mode. The IBM engineer can provide the client a list of the XIV LUNs and LBAs that are on the "media error" list. From this list, the client can determine which hosts these LUNs are attached to, and run file scan utility to the file systems that these LUNs represent. Files that get a media error during this scan will be listed as needing recovery. A chunk could contain several small files, or the chunk could be just part of a large file. To minimize time, the scans and recoveries can all be prioritized and performed in parallel across host systems zoned to these LUNs.
As with any file or volume recovery, keep in mind that these might be part of a larger consistency group, and that your recovery procedures should make sense for the applications involved. In any case, you are probably going to be up-and-running in less time with XIV than recovery from a RAID-5 double failure would take, and certainly nowhere near "beyond repair" that other vendors might have you believe.
Fact: This does not mean you can eliminate all Disaster Recovery planning!
To put this in perspective, you are more likely to lose XIV data from an earthquake, hurricane, fire or flood than from a double drive failure. As with any unlikely disaster, it is better to have a disaster recovery plan than to hope it never happens. All disk systems that sit on a single datacenter floor are vulnerable to such disasters.
For mission-critical applications, IBM recommends using disk mirroring. The IBM XIV storage system offers synchronous and asynchronous mirroring natively, both included at no additional charge.
If you store your VMware bits on external SAN or NAS-based disk storage systems, this post is for you. The subject of the post, VM Volumes, is a potential storage management game changer!
Fellow blogger Stephen Foskett mentioned VM Volumes in his [Introducing VMware vSphere Storage Features] presentation at the IBM Edge 2012 conference. His session on VMware's storage features included the VMware APIs for Array Integration (VAAI), the VMware APIs for Storage Awareness (VASA), vCenter plug-ins, and a new concept he called "vVol", now more formally known as VM Volumes. This post provides a follow-up, describing the VM Volumes concepts, architecture, and value proposition.
"VM Volumes" is a future architecture that VMware is developing in collaboration with IBM and other major storage system vendors. So far, very little information about VM Volumes has been released. At VMworld 2012 Barcelona, VMware highlights VM Volumes for the first time and IBM demonstrates VM Volumes with the IBM XIV Storage System (more about this demo below). VM Volumes is worth your attention -- when it becomes generally available, everyone using storage arrays will have to reconsider their storage management practices in a VMware environment -- no exaggeration!
But enough drama. What is this all about?
(Note: for the sake of clarity, this post refers to block storage only. However, the VM Volumes feature applies to NAS systems as well. Special thanks to Yossi Siles and the XIV development team for their help on this post!)
The VM Volumes concept is simple: VM disks are mapped directly to special volumes on a storage array system, as opposed to storing VMDK files on a vSphere datastore.
The following images illustrate the differences between the two storage management paradigms.
You may still be asking yourself: bottom line, how will I benefit from VM Volumes?
Well, take a VM snapshot for example. With VM Volumes, vSphere can simply offload the operation by invoking a hardware snapshot of the hardware volume. This has significant implications:
VM-Granularity: Only the right VMs are copied (with datastores, backing up or cloning individual VMs from a hardware snapshot of a datastore would require more complex configuration, tools and work)
Hardware Offload: No ESXi server resources are consumed
XIV advantage: With XIV, snapshots consume no space upfront and are completed instantly.
Here's the first takeaway: With VM Volumes, advanced storage services (which cost a lot when you buy a storage array), will become available at an individual VM level. In a cloud world, this means that applications can be provisioned easily with advanced storage services, such as snapshots and mirroring.
Now, let's take a closer look at another relevant scenario where VM Volumes will make a lot of difference - provisioning an application with special mirroring requirements:
VM Volumes case: The application is ordered via the private cloud portal. The requestor checks a box requesting an asynchronous mirror and adjusts the default RPO to his needs. When the request is submitted, the process wraps up automatically: volumes are created on one of the storage arrays, configured with a mirror and RPO exactly as specified. A few minutes later, the requestor receives an automatic email pointing to the application's virtual machine.
Datastores case #1: As may be expected, a datastore mirrored with the special RPO does not exist. As a result, the automated workflow sets a pending status on the request, creates an urgent ticket for a VMware administrator, and aborts. When the VMware admin handles that ticket, she re-assigns it to the storage administrator, asking for a new volume mirrored with the special RPO and mapped to the right ESXi cluster. The next day, the volume is created, and the ticket is re-assigned back to the VMware administrator, pointing to the new LUN. The VMware administrator then creates the datastore on top of it. Since the automated workflow was aborted, the admin re-assigns the ticket to the cloud administrator, who sometime later completes the application provisioning manually.
Datastores case #2: Luckily for the requestor, a datastore that is mirrored with the special RPO does exist. However, that particular datastore is consuming space from a high performance XIV Gen3 system with SSD caching, while the application does not require that level of performance, so the workflow requires a storage administrator approval. The approval is given to save time, but the storage administrator opens a ticket for himself to create a new volume on another array, as well as a follow-up ticket for the VMware admin to create a new datastore using the new volume and migrate the application to the other datastore. In this case, provisioning was relatively rapid, but required manual follow up, involving the two administrators.
Here's the second takeaway: With VM Volumes, management is simplified, and end-to-end automation is much more applicable. The reason is that there are no datastores. Datastores physically group VMs that may otherwise be totally unrelated, and require close coordination between storage and VMware administrators.
Now, the above mainly focuses on the VMware or cloud administrator perspective. How does VM Volumes impact storage management?
VMs are the new hosts: Today, storage administrators have visibility of physical hosts in their management environment. In a non-virtualized environment, this visibility is very helpful: the storage administrator knows exactly which applications in the data center are provisioned with storage, or affected by storage management operations, because the applications run on well-known hosts. In virtualized environments, however, the association of an application with a physical host is temporary. To keep at least the same level of visibility as in physical environments, VMs should become part of the storage management environment, just like hosts. Hosts are still interesting, for example to manage physical storage mapping, but without VM visibility, storage administrators will know less about their operation than they are used to, or need to. VM Volumes enables such visibility, because volumes are provided to individual VMs. The XIV VM Volumes demonstration at VMworld Barcelona, although experimental, shows a view of VM Volumes in XIV's management GUI.
Here's a screenshot:
That's not all!
Storage Profiles and Storage Containers: A Storage Profile is a vSphere specification of a set of storage services. A storage profile can include properties like thin or thick provisioning, mirroring definition, snapshot policy, minimum IOPS, etc.
Storage administrators define a portfolio of supported storage services, maintained as a set of storage profiles, and published (via VASA integration) to vSphere.
VMware or cloud administrators define the required storage profiles for specific applications.
VMware and storage administrators need to coordinate the typical storage requirements and the automatically-available storage services. When a request to provision an application is made, the associated storage profiles are matched against the published set of available storage profiles. The matching published profiles will be used to create volumes, which will be bound to the application VMs. All that will happen automatically.
Note that when a VM is created today, a datastore must be specified. With VM Volumes, a new management entity called Storage Container (also known as Capacity Pool) replaces the use of datastore as a management object. Each Storage Container exposes a subset of the available storage profiles, as appropriate. The storage container also has a capacity quota.
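The profile-matching step of that provisioning flow can be sketched in a few lines. To be clear, the real VM Volumes/VASA schema is defined by VMware and had not been published at the time of writing; the fields and matching rule below are my own illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StorageProfile:
    """Hypothetical storage profile -- fields are assumptions."""
    thin: bool            # thin vs thick provisioning
    mirrored: bool        # mirroring required?
    rpo_seconds: int      # 0 = synchronous mirror

# What the storage administrator publishes (via VASA) for one container:
published = [
    StorageProfile(thin=True, mirrored=False, rpo_seconds=0),
    StorageProfile(thin=True, mirrored=True,  rpo_seconds=300),
]

def match(requested, container_profiles):
    """Provisioning: find a published profile satisfying the request."""
    return next((p for p in container_profiles if p == requested), None)

wanted = StorageProfile(thin=True, mirrored=True, rpo_seconds=300)
assert match(wanted, published) is not None
```

When a match is found, volumes are created from the matching profile and bound to the application's VMs; when none is found, you are back in something like "Datastores case #1", except that the mismatch is detected automatically and immediately.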
Here are some more takeaways:
New way to interface vSphere and storage management: Storage administrators structure and publish storage services to vSphere via storage profiles and storage containers.
Automated provisioning, out of the box: The provisioning process automatically matches application-required storage profiles against storage profiles available from the specified storage containers. There is no need to build custom scripts and custom processes to automate storage provisioning to applications.
The XIV advantage:
XIV services are very simple to define and publish. The typical number of available storage profiles would be low. It would also be easy to define application storage profiles.
XIV provides consistent high performance, up to very high capacity utilization levels, without any maintenance. As a result, automated provisioning (which inherently implies less human attention) will not create an elevated risk of reduced performance.
Note: A storage vendor VASA provider is required to support VM Volumes, storage profiles, storage containers and automated provisioning. The IBM Storage VASA provider runs as a standalone service that needs to be deployed on a server.
To summarize the VM Volumes value proposition:
Streamline cloud operation by providing storage services at VM and application level, enabling end-to-end provisioning automation, and unifying VMware and storage administration around volumes and VMs.
Increase storage array ROI, improve vSphere scalability and response time, and reduce cloud provisioning lag, by offloading VM-level provisioning, failover, backup, storage migration, storage space recycling, monitoring, and more, to the storage array, using advanced storage operations such as mirroring and snapshots.
Simplify the adoption of VM Volumes using XIV, with smaller and simpler sets of storage profiles. Apply XIV's supreme fast cloning to individual VMs, and keep automation risks at bay with XIV's consistent high performance.
Until you can get your hands on a VM Volumes-capable environment, the VMware and IBM developer groups will be collaborating and working hard to realize this game-changing feature. The information above is sure to trigger questions and comments, and our development teams are eager to learn from them and respond. Enter your comments below, and I will try to answer them and help shape the next post on this subject. There's much more to be told.
This month, I am pleased to announce the new [IBM STG Executive Briefing Center] website, representing a huge improvement over the previous website we had been using over the past two years. STG refers to IBM's Systems and Technology Group, the division that focuses on servers, storage, switches and the system software that makes them run. This new website is for the dozen STG EBCs that span the globe. The new website reminds me of this famous quote:
"Perfection is achieved, not when there is nothing left to add, but when there is nothing left to take away"
-- Antoine de Saint-Exupery
Let's take a quick look at what makes it so much better.
The previous website required registration. At every briefing, those of us who work in the EBCs had to pass around a sign-up sheet for email addresses from each attendee so that we could send them an invitation to register for the site. We would have a hard time reading people's handwriting, resulting in some emails coming back rejected.
Inspired by self-service gas stations, automated teller machines, and the many self-service portals of Cloud Computing, the new website has everything up-front, without registration. IBM Business Partners and sales representatives can easily request a briefing at any of the dozen briefing centers represented!
IBM-managed and IBM-hosted
We had a difficult time explaining to our attendees why our previous website was hosted on a lone machine and maintained by a third party. Think about it: IBM manages the data centers of over 400 clients. IBM has provided web hosting for the most mission-critical workloads, with high levels of availability and reliability, and is recognized as one of the "Big 5" Cloud companies. I have done web design myself in my career, and we were terribly disappointed with the third party chosen to create and maintain our previous website, constantly having to point out errors in their HTML and CSS.
For the new website, IBM took back control. Staff from each EBC, myself included, came up with a simple page to bring the essence of each location to life. Special thanks to my colleague Hal Jennings, from the Austin EBC, for bringing this all together!
Despite two years of manually registering attendees to use the previous website, Google Analytics showed that few people visited, and the few that did spent little time exploring the vast repository of content.
The new website is vastly simpler. The front page points to all twelve EBCs, and a single mouse click gets you to the location you are interested in, with all the details you need to make a decision to book a briefing, and the contact information to make it happen.
Elimination of Wasted and Duplicate Effort
In the previous website, we spent as much as 15 hours just to create, voice over, edit and produce a single 15-minute recorded presentation. Less than six percent of the previous website visitors watched more than five minutes of these videos, making us feel that most of our effort was wasted.
The EBC staff kept wasting their time, month after month, under all-stick, no-carrot mandates that set minimum contribution quotas for more and more content that nobody was looking at. Even more disappointing, much of our work duplicated the formal responsibilities of our IBM Marketing team. They weren't happy about this either, as it caused confusion between the roles of our two teams.
Finally, we said enough was enough! The new STG EBC website is a marvel in minimalism. If you want to see presentations, videos, expert profiles, or partake in on-going conversations, I welcome you to visit the [IBM Expert Network], the [IBM Storage YouTube Channel], and the [Storage Community] where they belong.
Can Structured Query Language [SQL] be considered a storage protocol?
Several months ago, I was asked to review a book on SQL, titled appropriately enough "The Complete Idiot's Guide to SQL", by Steven Holzner, Ph.D. As a published author myself, I get a lot of these requests, and I agreed in this case, given that SQL was invented by IBM, and is a good fundamental skill to have for Business Analytics and Database Management.
(FTC Disclosure: I work for IBM but was not part of the SQL development team. I was provided a copy of this book for free to review it. I was not paid to mention this book, nor told what to write. I do not know the author personally nor anyone that works for his publicist. All of my opinions of the book in this blog post are my own.)
Despite an agreed-upon standard for SQL, each relational database management system (RDBMS) has decided to customize it for its own purposes. First, SQL can be quite wordy, so some RDBMS have made certain keywords optional. Second, RDBMS offer extra features by adding keywords or programming language extensions, options or parameters above and beyond what the SQL standard calls for. Third, the SQL standard has changed over the years, and some RDBMS have opted to keep some backward compatibility with their prior releases. Fourth, some RDBMS want to discourage people from easily porting code from one RDBMS to another, known in the industry as vendor lock-in.
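To make the dialect point concrete, consider something as mundane as "give me the first three rows." Each RDBMS spells it differently. The sketch below runs the SQLite spelling (via Python's built-in sqlite3 module); the other variants are shown as comments, and exact syntax may vary by product version:

```python
import sqlite3

# One task, several dialects -- limiting a result set to three rows:
#   DB2 / SQL standard:  SELECT name FROM t ORDER BY name FETCH FIRST 3 ROWS ONLY
#   MySQL / SQLite:      SELECT name FROM t ORDER BY name LIMIT 3
#   SQL Server:          SELECT TOP 3 name FROM t ORDER BY name
#   Oracle (classic):    ... WHERE ROWNUM <= 3
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("Informix",), ("DB2",), ("MySQL",), ("SQL Server",)])
rows = conn.execute("SELECT name FROM t ORDER BY name LIMIT 3").fetchall()
print([r[0] for r in rows])   # ['DB2', 'Informix', 'MySQL']
```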
Throughout my career, I have managed various databases, including Informix, DB2, MySQL, and Microsoft SQL Server, so I am quite familiar with the differences in SQL and the problems and implications that arise.
Most authors who want to write about SQL typically make a choice between (a) sticking to the SQL standard, expecting the reader to adapt the examples to their particular RDBMS; or (b) sticking to a single RDBMS implementation, offering examples that may not work on other RDBMS.
I found the book "The Complete Idiot's Guide to SQL" covered the basics quite well, but with an odd twist. The basics include creating databases and tables, defining columns, inserting and deleting rows, updating fields, and performing queries or joins. The odd twist is that Steven does not make the typical choice above, but rather shows how the various DBMS are different than standard SQL syntax, with actual working examples for different RDBMS.
You might be thinking to yourself that only an idiot would work in a place that required knowledge of multiple RDBMS. The sad truth is that most of the medium and large companies I speak to have two or more in production, either through acquisitions or through individual business units or departments implementing their own, a practice known as [Shadow IT].
(For those who want to learn SQL and try out the examples in this book, IBM offers a free version of DB2 called [DB2 Express-C] that runs on Windows, Linux, Mac OS, and Solaris.)
Last week, while I was in Russia for the [Edge Comes to You] event, I was interviewed by a journalist from [Storage News] on various topics. One question struck me as strange. He asked why I did not mention IBM's acquisition of Netezza in my keynote session about storage. I had to explain that Netezza is not in the IBM System Storage product line; it is in a different group, under Business Analytics, where it belongs.
While it is true that Netezza can store data, because it has storage components inside, the same could also be said about nearly every other piece of IT equipment, from servers with internal disk, to digital cameras, smart phones and portable music players. They can all be considered storage devices, but doing so would undermine what differentiates them from one another.
Which brings me back to my original question: Should we consider SQL to be a storage protocol? For the longest time, IT folks only considered block-based interfaces as storage protocols, then we added file-based interfaces like CIFS and NFS, and we also have object-based interfaces, such as IBM's Object Access Method (OAM) and the System Storage Archive Manager (SSAM) API. Could SQL interfaces be the next storage protocol?
Let me know what you think on this. Leave a comment below.
This week, I am in beautiful Sao Paulo, Brazil, teaching a Top Gun class to IBM Business Partners and sales reps. Traditionally, we have "Tape Thursday" where we focus on our tape systems, from tape drives to physical and virtual tape libraries. IBM is the #1 tape vendor, and has been for the past eight years.
(The alliteration doesn't translate well here in Brazil. The Portuguese word for tape is "fita", and Thursday here is "quinta-feira", but "fita-quinta-feira" just doesn't have the same ring to it.)
In the class, we discussed how to handle common misperceptions and myths about tape. Here are a few examples:
Myth 1: Tape processing is manually intensive
In my July 2007 blog post [Times a Million], I coined the phrase "Laptop Mentality" to describe the problem most people have with data center decisions. Many folks linearly extend their experiences with PCs, workstations or laptops to the data center, unable to comprehend large numbers or solutions that take advantage of economies of scale.
For many, the only experience dealing with tape was manual. In the 1980s, we made "mix tapes" on little cassettes, and in the 1990s we recorded our favorite television shows on VHS tapes in the VCR. Today, we have playlists on flash or disk-based music players, and record TV shows on disk-based video recorders like Tivo. The conclusion is that tapes are manual, and disks are not.
Manual processing of tapes ended in 1987, with the introduction of a silo-like tape library from StorageTek. IBM responded with its own IBM 3495 Tape Library Data Server in 1992. Today, clients have many tape automation choices, from the smallest IBM TS2900 Tape Autoloader that has one drive and nine cartridges, all the way to the largest IBM TS3500 multiple-library shuttle complex that can hold exabytes of data. These tape automation systems eliminate most of the manual handling of cartridges in day-to-day operations.
Myth 2: Tape media is less reliable than disk media
A storage medium is unreliable if it returns information different from what was originally stored. There are only two ways for this to happen: you write a "zero" but read back a "one", or write a "one" and read back a "zero". This is called a bit error. Every storage medium has a "bit error rate", the average likelihood of a bit error over some large amount of data written.
According to the latest [LTO Bit Error rates, 2012 March], today's tape expects only 1 bit error per 10^17 bits written (about 12 Petabytes). This is 10 times more reliable than Enterprise SAS disk (1 bit per 10^16 bits), and 100 times more reliable than Enterprise-class SATA disk (1 bit per 10^15 bits).
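Turning those rates into something tangible: here is how many bit errors you would expect, on average, per petabyte written to each medium (my own arithmetic from the rates above):

```python
PB_BITS = 1e15 * 8   # one petabyte expressed in bits

for media, ber in [("LTO tape",             1e-17),
                   ("Enterprise SAS disk",  1e-16),
                   ("Enterprise SATA disk", 1e-15)]:
    # Expected bit errors = bits written x bit error rate
    print(media, PB_BITS * ber)
```

Writing a full petabyte to tape, you would expect fewer than one bit error on average (0.08); the same petabyte on enterprise SATA disk yields about 8.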
Tape is the media used in "black boxes" for airplanes. When an airplane crashes, the black box is retrieved and used to investigate the causes of the crash. In 1986, the Space Shuttle Challenger exploded 73 seconds after take-off. The tapes in the black box sat on the ocean floor for six weeks before being recovered. Amazingly, IBM was able to successfully restore [90 percent of the block data, and 100 percent of voice data].
Myth 3: Gartner says that 71 percent of tape restores fail
Analysts are quite upset when they are quoted out of context, but in this case, Gartner never said anything remotely like this. Nor did the other analysts that Curtis investigated for similar claims. What Gartner did say was that disk provides an attractive alternative storage medium for backup, which can improve the performance of the recovery process.
Back in the 1990s, Savur Rao and I developed a patent to help back up DB2 for z/OS by using the FlashCopy feature of IBM's high-end disk system. The software method to coordinate the FlashCopy snapshots with the database application and maintain multiple versions was implemented in the DFSMShsm component of DFSMS. A few years later, this was part of a set of patents IBM cross-licensed to Microsoft so they could implement similar software for Windows, called Data Protection Manager (DPM). IBM has since introduced its own version for distributed systems, IBM Tivoli FlashCopy Manager, which runs not just on Windows, but also on AIX, Linux, HP-UX and Solaris operating systems.
Curtis suspects the "71 percent" citation may have been propagated by an ambitious product manager of Microsoft's Data Protection Manager, back in 2006, perhaps to help drive up business for their new disk-based backup product. Certainly, Microsoft was not the only vendor to disparage tape in this manner.
A few years ago, an [EMC failure brought down the State of Virginia], due not just to a component failure in its production disk system, but made worse by a failure to recover from the disk-based remote mirror copy. Fortunately, the data could be restored from tape over the next four days. If you wonder why nobody at EMC says "Tape is Dead" anymore, perhaps it is because tape saved their butts that week.
(FTC Disclosure: I work for IBM and this post can be considered a paid, celebrity endorsement for all of the IBM tape and software products mentioned on this post. I own shares of stock in both IBM and Google, and use Google's Gmail for my personal email, as well as many other Google services. While IBM, Google and Microsoft can be considered competitors to each other in some areas, IBM has working relationships with both companies on various projects. References in this post to other companies like EMC are merely to provide illustrative examples only, based on publicly available information. IBM is part of the Linear Tape Open (LTO) consortium.)
Myth 4: Vendors and Manufacturers are no longer investing in tape technology
IBM and others are still investing Research and Development (R&D) dollars to improve tape technology. What people don't realize is that much of the R&D spent on magnetic media can be applied across both disk and tape, such as IBM's development of the Giant Magnetoresistance read/write head, or [GMR] for short.
Most recently, IBM made another major advancement in tape with the introduction of the Linear Tape File System (LTFS). This allows greater portability to share data between users, and between companies, by treating tape cartridges much like USB memory sticks or pen drives. You can read more in my post [IBM and Fox win an Emmy for LTFS technology]!
Next month, IBM celebrates the 60th anniversary of tape. It is good to see that tape continues to be a vibrant part of the IT industry, and of IBM's storage business!
Well, it's Tuesday again, and you know what that means!
This Thursday is the Thanksgiving holiday here in the United States, so instead of announcing IBM products, I wanted to announce the general availability of my latest book, [Inside System Storage: Volume III].
This book includes blog posts from May 2008 to March 2009, along with the ever popular behind-the-scenes commentary on what was going on during IBM's launch of the Information Infrastructure initiative.
Do you know someone who celebrates Chanukah, Christmas, Kwanzaa, or the Winter Solstice, and has a hard time finding the right gift?
Do you know a client or IBM Business Partner that would appreciate a nominally-priced gift to thank them for their business?
Do you know someone newly hired into IBM or another IT company that could benefit from behind-the-scenes insight and commentary?
As with the other two volumes, Inside System Storage: Volume III is available in your choice of paperback, hardcover, and eBook (Adobe PDF) formats.
In the spirit of Thanksgiving, I would like to thank my editor, Susan Pollard, who put in the extra effort, working evenings and weekends, to get this book done in time for the upcoming holiday season. For those outside the United States, there is an American tradition to shop in brick-and-mortar stores on Black Friday (the day after Thanksgiving) and to shop on-line for books like mine on Cyber Monday (the Monday after Thanksgiving).
I would also like to thank my publisher, Lulu.com, for upgrading me to "Spotlight" level, so now I have a spotlight page titled [Books Written by Tony Pearson], making it easy for you to order any of my books in various formats.
And last, but not least, I would like to thank all my friends and family that were supportive these past few difficult months while I was putting this book together.
Next month, I will be in Las Vegas, Dec 4-8, speaking at Gartner's [Data Center Conference]. If you order a book today, and bring it with you to the IBM booth at the Solution Expo, I can sign it for you!
This week, IBM made over a dozen announcements related to IBM storage products. Here is part 2 of my overview:
IBM System Storage® DS8000 series microcode
One of the advantages of acquiring XIV as IBM's other high-end disk system is that it allows the DS8000 team to focus on the IBM i and z/OS operating systems. As a result, IBM DS8000 holds over half the mainframe-attach market share.
For both the DS8700 and DS8800 models, IBM Easy Tier now supports sub-LUN automated tiering across three storage tiers: Solid-State Drives, high-performance spinning disk drives (15K and 10K RPM), and high-capacity disk drives (7200 RPM).
For System z customers, the latest DS8000 microcode has synergy with z/OS and GDPS, now supporting 4x larger EAV volumes, faster high-performance FICON (zHPF), and Workload Manager (WLM) integration with the I/O Priority Manager. IBM has a world record SAP performance of 59 million account postings per hour. DB2 v10 for z/OS queries were measured at 11x faster using the new zHPF feature.
IBM System Storage® DS8800 systems
On the hardware side, the DS8800 now supports a fourth frame, to hold a total of over 1,500 disk drives. Yes, we have customers for whom three frames weren't enough, and they wanted more.
IBM is also offering new drive options. Small form factor (2.5 inch) drives now include 300GB 15K RPM and 900GB 10K RPM drives. But wait! There's more! The DS8800 is no longer an SFF-only box; it now allows mixing in large form factor (3.5 inch) drives, starting with the 3TB NL-SAS 7200 RPM drive.
IBM XIV® Storage System Gen3
We announced the XIV Gen3 already, but we have two enhancements.
First, we now offer a model based entirely on 3TB NL-SAS drives. If you are thinking, "What, IBM is going to put 3TB drives into everything?" Yup. Once we go through all the pain and suffering of qualifying a drive, we make sure we get our money's worth!
Second, we now have an iPad application to manage the XIV. This has nothing to do with Apple CEO Steve Jobs passing away last week; it was merely coincidence.
IBM Real-time Compression Appliances™ STN6500 and STN6800 V3.8
The latest software for the RtCA now supports Microsoft SMB v2, and enhanced reporting so that storage admins know exactly what compression ratios different file extensions achieve.
IBM System Storage EXP2500 Express®
The EXP2500 is for direct-attach situations, like the IBM BladeCenter. IBM adds LFF 3.5-inch 3TB NL-SAS drives, SFF 2.5-inch 300GB 15K RPM SAS drives, and 900GB 7200 RPM NL-SAS drives.
My colleague Curtis Neal refers to these as "B.F.D" announcements, which of course stands for Bigger, Faster, Denser!
Last week, fellow IBMer Ron Riffe started his three-part series on the Storage Hypervisor. I discussed Part I already in my previous post [Storage Hypervisor Integration with VMware]. We wrapped up the week with a Live Chat with over 30 IT managers, industry analysts, independent bloggers, and IBM storage experts.
"The idea of shopping from a catalog isn’t new and the cost efficiency it offers to the supplier isn’t new either. Public storage cloud service providers seized on the catalog idea quickly as both a means of providing a clear description of available services to their clients, and of controlling costs. Here’s the idea… I can go to a public cloud storage provider like Amazon S3, Nirvanix, Google Storage for Developers, or any of a host of other providers, give them my credit card, and get some storage capacity. Now, the “kind” of storage capacity I get depends on the service level I choose from their catalog.
Most of today’s private IT environments represent the complete other end of the pendulum swing – total customization. Every application owner, every business unit, every department wants to have complete flexibility to customize their storage services in any way they want. This expectation is one of the reasons so many private IT environments have such a heavy mix of tier-1 storage. Since there is no structure around the kind of requests that are coming in, the only way to be prepared is to have a disk array that could service anything that shows up. Not very efficient… There has to be a middle ground.
Private storage clouds are a little different. Administrators we talk to aren’t generally ready to let all their application owners and departments have the freedom to provision new storage on their own without any control. In most cases, new capacity requests still need to stop off at the IT administration group. But once the request gets there, life for the IT administrator is sweet!
Here comes the request from an application owner for 500GB of new “Database” capacity (one of the options available in the storage service catalog) to be attached to some server. After appropriate approvals, the administrator can simply enter the three important pieces of information (type of storage = “Database”, quantity = 500GB, name of the system authorized to access the storage) and click the “Go” button (in TPC SE it’s actually a “Run now” button) to automatically provision and attach the storage. No more complicated checklists or time consuming manual procedures.
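The catalog-driven workflow above can be sketched in a few lines. This is a hypothetical model, not the actual TPC SE interface: the `CATALOG` entries, the `provision()` function, and the host name are all made up for illustration.

```python
# A minimal sketch of catalog-driven provisioning: the administrator supplies
# only the three inputs described above (service type, quantity, host), and
# the catalog entry fills in everything else.
CATALOG = {
    "Database": {"raid": "RAID 10", "tier": "15K RPM SAS"},
    "Archive":  {"raid": "RAID 6",  "tier": "7200 RPM NL-SAS"},
}

def provision(service: str, size_gb: int, host: str) -> dict:
    """Expand a catalog request into a concrete provisioning specification."""
    if service not in CATALOG:
        raise ValueError(f"'{service}' is not in the service catalog")
    spec = dict(CATALOG[service], size_gb=size_gb, host=host)
    # A real implementation would now carve the volume and map it to the host.
    return spec

request = provision("Database", 500, "appserver01")
print(request)
```

The point of the catalog is visible in the code: the requester never specifies RAID levels or drive tiers, so the back end stays standardized.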
A storage hypervisor increases the utilization of storage resources, and optimizes what is most scarce in your environment. For Linux, UNIX and Windows servers, you typically see utilization rates of 20 to 35 percent, and this can be raised to 55 to 80 percent with a storage hypervisor. But what is most scarce in your environment? Time! In a competitive world, it is not big animals eating smaller ones as much as fast ones eating the slow.
Want faster time-to-market? A storage hypervisor can help reduce the time it takes to provision storage, from weeks down to minutes. If your business needs to react quickly to changes in the marketplace, you certainly don't want your IT infrastructure to slow you down like a boat anchor.
Want more time with your friends and family? A storage hypervisor can migrate data non-disruptively, during the week, during the day, during normal operating hours, instead of scheduling down-time on evenings and weekends. As companies adopt a 24-by-7 approach to operations, there are fewer and fewer opportunities in the year for scheduled outages. Some companies get stuck paying maintenance after their warranty expires, because they were not able to move the data off in time.
Want to take advantage of new Solid-State Drives? Most admins don't have time to figure out which applications, workloads or indexes would most benefit from this new technology. Let your storage hypervisor's automated tiering do this for you! In fact, a storage hypervisor can gather enough performance and usage statistics to characterize your workload in advance, so that you can predict whether solid-state drives are right for you, and how much benefit you would get from them.
Want more time spent on strategic projects? A storage hypervisor allows any server to connect to any storage. This eliminates the time wasted determining when and how, and lets you focus on the what and why of your more strategic transformational projects.
If this all sounds familiar, it is similar to the benefits one gets from a server hypervisor -- better utilization of CPU resources, optimized management and administration time, and the agility and flexibility to deploy new technologies and decommission older ones.
"Server virtualization is a fairly easy concept to understand: Add a layer of software that allows processing capability to work across multiple operating environments. It drives both efficiency and performance because it puts to good use resources that would otherwise sit idle.
Storage virtualization is a different animal. It doesn't free up capacity that you didn't know you had. Rather, it allows existing storage resources to be combined and reconfigured to more closely match shifting data requirements. It's a subtle distinction, but one that makes a lot of difference between what many enterprises expect to gain from the technology and what it actually delivers."
Jon Toigo on his DrunkenData blog brings back the sanity with his post [Once More Into the Fray]. Here is an excerpt:
"What enables me to turn off certain value-add functionality is that it is smarter and more efficient to do these functions at a storage hypervisor layer, where services can be deployed and made available to all disk, not to just one stand bearing a vendor’s three letter acronym on its bezel. Doesn’t that make sense?
I think of an abstraction layer. We abstract away software components from commodity hardware components so that we can be more flexible in the delivery of services provided by software rather than isolating their functionality on specific hardware boxes. The latter creates islands of functionality, increasing the number of widgets that must be managed and requiring the constant inflation of the labor force required to manage an ever expanding kit. This is true for servers, for networks and for storage.
Can we please get past the BS discussion of what qualifies as a hypervisor in some guy’s opinion and instead focus on how we are going to deal with the reality of cutting budgets by 20% while increasing service levels by 10%. That, my friends, is the real challenge of our times."
Did you miss out on last Friday's Live Chat? We are doing it again this Friday, covering parts I and II of Ron's posts, so please join the conversation! The virtual dialogue on this topic will continue in another [Live Chat] on September 30, 2011 from 12 noon to 1pm Eastern Time.
Can you believe it has been five years since I started blogging?
(If you absolutely abhor the navel-gazing associated with blogging-about-blogging posts, then by all means stop reading now!)
Back in July 2005, IBM decided to merge two brands, IBM eServer and IBM TotalStorage, into a single all-encompassing "IBM Systems" brand. Thus the TotalStorage brand became the "IBM System Storage" product line of the "IBM Systems" brand. The next six months were spent renaming some (not all) of the products. The following January, I was named the Marketing Strategist for this new product line, with the mission to help promote the new naming convention.
We looked at possibly doing a regularly-scheduled podcast, but nobody back then, including myself, was familiar with audio editing tools. Instead, we chose a blog. Most blogs at IBM are internal, safely hidden behind the firewall, accessible only to IBM employees. I wanted mine to be different, accessible to the public: clients, prospects, IBM Business Partners, and yes, even those working for IBM's various competitors. One thing I like about blogs is that if you have a typo, or make a mistake, you can go back and correct it after it has posted.
Marketing through social media is quite different from traditional marketing techniques. Management was supportive, but legal wanted to review and approve everything I wrote before I posted it onto my blog. Official IBM Press Releases, for example, go through a dozen reviews before they are finally made public. I refused. That kind of review and approval would ruin the blogging process.
Fortunately, this blog was not my first attempt at technical writing. Our legal counsel reviewed my past trip reports from various conferences, and decided to let me blog without review. Occasionally, someone will review a post after it has been published, and ask me to make some corrections. It reminds me of my favorite saying used heavily within IBM:
Despite these delays, we managed to launch this blog in September 2006, just in time to celebrate the 50th anniversary of disk systems. IBM introduced the industry's first commercial disk system on September 13, 1956.
Over the years, this blog has helped sales reps and IBM Business Partners close deals, and address the FUD their prospects heard from competition. I have helped my readers get in touch with the right people within IBM. And, I have "sent the elevator back down", helping other IBMers launch their own blogs, including [Barry Whyte], [Elisabeth Stahl], and [Anthony Vandewerdt].
Today, bloggers have a profound impact on the world. Not everyone has a positive view on this. Bloggers and other users of social media have been seen as whistle-blowers for fraudulent corporations, as activists against corrupt governments and dictators, and as subject matter experts and fact checkers referenced during television and radio newscasts. In a recent movie, one of the major characters was a trouble-making blogger, and another character describes his blogging as nothing more than "graffiti with punctuation."
I want to thank all of my readers for making this the #1 most influential blog on IBM DeveloperWorks in 2011! This blog has been [published in a series of books], Inside System Storage Volume I and Volume II. And yes, before you all ask in the comments below, I am actively working on Volume III.
For a bit of nostalgia, I invite you to read my first 21 blog posts that I posted back in [September 2006].
After the amount of flak Jon Toigo had to endure for not giving advance notice of his upcoming Webcast, I thought I had better remind people about my own Webinar that is happening next Tuesday, August 23.
So here's the scoop, next Tuesday I will be presenting [The Future of Storage], August 23, 1pm to 2pm EDT. You can register to attend at the [Infoboom Registration Page]. Infoboom is a social community for business and IT leaders of small and midsize businesses brought to you by IBM.
But that's not all! After the webinar, I will then travel to various cities for face-to-face lectures. Here are the first two:
September 7 - Indianapolis
September 8 - Boston area
If you are near either of these two locations, contact your local IBM storage specialist or IBM business partner to participate.
The IBM Storwize V7000 was introduced last October, and has proven to be wildly successful. I saw two awesome reviews recently of the IBM Storwize V7000 disk system that I thought I would bring to your attention.
The first review is [IBM Storwize V7000] from Roger Howorth of ZDNet UK. Here are some quotes:
"Under the hood, the Storwize V7000 is built from technologies originally developed for IBM's enterprise-class storage systems, so the V7000 benefits from a comprehensive set of high-end features that have been scaled down for mid-range buyers."
"Initial configuration couldn't be simpler."
"We really liked the layout and functionality of the GUI."
"Storwize V7000 is virtual storage that offers efficiency and flexibility through built-in SSD optimization and "thin provisioning" technologies while enabling users to virtualize and re-use existing disk systems..."
"Storwize V7000 advanced functionality also enables non-disruptive migration of data from existing storage, simplifying implementation and minimizing disruption to users."
"The Storwize V7000 graphical user interface is a browser-based, easy to navigate intuitive GUI."
"ESG Lab found that getting started with the Storwize V7000 disk system was intuitive and straightforward."
"Easy Tier increases the efficiency and simplicity of deploying SSD drives."
Full VMware vStorage API for Array Integration (VAAI). Back in 2008, VMware announced new vStorage APIs for its vSphere ESX hypervisor: vStorage API for Site Recovery Manager, vStorage API for Data Protection, and vStorage API for Multipathing. Last July, VMware added a new API called vStorage API for Array Integration [VAAI], which offers three primitives:
Hardware-assisted block zeroing. Sometimes referred to as "Write Same", this SCSI command zeroes out a large range of blocks, presumably as part of a VMDK file. This can then be used to reclaim space on thin-provisioned LUNs on the XIV.
Hardware-assisted Copy. Make an XIV snapshot of data without any I/O on the server hardware.
Hardware-assisted locking. On mainframes, this is called Parallel Access Volumes (PAV). Instead of locking an entire LUN using standard SCSI reserve commands, this primitive allows an ESX host to lock just an individual block, so as not to interfere with other hosts accessing other blocks on that same LUN.
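The difference between a whole-LUN reserve and per-block locking can be shown with a toy model. This is purely illustrative: a real array implements the primitive in firmware, not with host-side Python locks, and the `Lun` class here is hypothetical.

```python
import threading

# Toy contrast: a LUN-wide SCSI reserve vs. per-block atomic locking.
class Lun:
    def __init__(self, num_blocks: int):
        self.reserve = threading.Lock()  # old style: one lock for the whole LUN
        self.blocks = [threading.Lock() for _ in range(num_blocks)]  # per block

    def legacy_update(self, block: int) -> bool:
        # SCSI reserve: a second host is blocked, whatever block it wants.
        if not self.reserve.acquire(blocking=False):
            return False
        try:
            return True  # ... update metadata for `block` ...
        finally:
            self.reserve.release()

    def ats_update(self, block: int) -> bool:
        # Per-block lock: hosts touching different blocks do not collide.
        if not self.blocks[block].acquire(blocking=False):
            return False
        try:
            return True  # ... update metadata for `block` ...
        finally:
            self.blocks[block].release()

lun = Lun(16)
lun.reserve.acquire()        # host A holds a LUN-wide reserve
print(lun.legacy_update(5))  # host B is locked out of block 5: False
lun.reserve.release()
lun.blocks[0].acquire()      # host A holds only block 0
print(lun.ats_update(5))     # host B proceeds on block 5: True
```

With many ESX hosts sharing one datastore LUN, this finer granularity is what keeps metadata updates from serializing the whole cluster.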
Quality of Service (QoS) Performance Classes.
When XIV was first released, it treated all hosts and all data the same, even when deployed for a variety of different applications. This worked for some clients, such as [Medicare y Mucho Más]. They migrated their databases, file servers and email system from EMC CLARiiON to an IBM XIV Storage System. In conjunction with VMware, the XIV provides a highly flexible and scalable virtualized architecture, which enhances the company's business agility.
However, other clients were skeptical, and felt they needed additional "knobs" to prioritize different workloads. The new 10.2.4 microcode allows you to define four different "performance classes". This is like the door of a nightclub: all the regular people are waiting in a long line, but when a celebrity in a limo arrives, the bouncer unclips the cord and lets the celebrity in. For each class, you provide IOPS and/or MB/sec targets, and the XIV manages to those goals. Performance classes are assigned to each host based on its value to the business.
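One common way to enforce an IOPS target like this is a token bucket. The sketch below is illustrative only: the class names and limits are made up, and this is not XIV's actual implementation.

```python
import time

# Token-bucket sketch of a per-class IOPS cap.
class PerformanceClass:
    def __init__(self, name: str, iops_limit: float):
        self.name = name
        self.iops_limit = iops_limit
        self.tokens = iops_limit          # start with a full bucket
        self.last_refill = time.monotonic()

    def admit(self) -> bool:
        """Admit one I/O if the class is under its IOPS target."""
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the limit.
        self.tokens = min(self.iops_limit,
                          self.tokens + (now - self.last_refill) * self.iops_limit)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # this I/O waits in line, like the crowd at the nightclub

gold = PerformanceClass("gold", iops_limit=50000)     # the celebrity in the limo
bronze = PerformanceClass("bronze", iops_limit=5000)  # everyone else
```

Each host's I/O is admitted through its assigned class, so a noisy low-priority workload cannot starve the high-priority one.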
Offline Initialization for Asynchronous Mirror.
Internally, we called this Truck Mode. Normally, when a customer decides to start using Asynchronous Mirror, they already have a lot of data at the primary location, and so there is a lot of data to send over to the new XIV box at the secondary location. This new feature allows the data to be dumped to tape at the primary location. Those tapes are shipped to the secondary location and restored on the empty XIV. The two XIV boxes are then connected for Asynchronous Mirroring, and checksums of each 64KB block are compared to determine what has changed at the primary during this "tape delivery time". This greatly reduces the time it takes for the two boxes to get past the initial synchronization phase.
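The resynchronization step above can be sketched as a per-block checksum comparison. This is a simplified model under assumptions: SHA-1 stands in for whatever checksum the XIV actually uses, and the byte strings stand in for on-disk extents.

```python
import hashlib

BLOCK = 64 * 1024  # 64KB blocks, as described above

def block_checksums(data: bytes) -> list:
    """Checksum each 64KB block of a volume image."""
    return [hashlib.sha1(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(primary_now: bytes, tape_restore: bytes) -> list:
    """Blocks that changed at the primary while the tapes were in transit."""
    return [i for i, (p, s) in enumerate(zip(block_checksums(primary_now),
                                             block_checksums(tape_restore)))
            if p != s]
```

Only the blocks whose checksums differ need to cross the wire, which is why the initial synchronization shrinks from the full volume to just the "tape delivery time" changes.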
IP-based Replication. When IBM first launched the Storwize V7000 last October, people commented that the one feature they felt was missing was IP-based replication. Sure, we offered FCP-based replication, as most other Enterprise-class disk systems do today, but many midrange systems also offer IP-based replication to reduce the need for expensive FCIP routers. [IBM Tivoli Storage FastBack for Storwize V7000] provides IP-based replication for Storwize V7000 systems.
Network Attached Storage
IBM announced two new models of the IBM System Storage N series. The midrange N6240 supports up to 600 drives, replacing the N6040 system. The entry-level N6210 supports up to 240 drives, and replaces the N3600 system. Details for both are available on the latest [data sheet].
IBM Real-Time Compression appliances work with all N series models to provide additional storage efficiency. Last October, I provided the [Product Name Decoder Ring] for the STN6500 and STN6800 models. The STN6500 supports 1 GbE ports, and the STN6800 supports 10GbE ports (or a mix of 10GbE and 1GbE, if you prefer). The IBM versions of these models were announced last December, but some people were on vacation and might have missed it. For more details of this, read the [Resources page], the [landing page], or [watch this video].
IBM System Storage DS3000 series
IBM System Storage [DS3524 Express DC and EXP3524 Express DC] models are powered with direct current (DC) rather than alternating current (AC). The DS3524 packs dual controllers and two dozen small-form factor (2.5 inch) drives in a compact 2U-high rack-optimized module. The EXP3524 provides additional disk capacity that can be attached to the DS3524 for expansion.
Large data centers, especially those in the Telecommunications Industry, receive AC from their power company, convert it to DC, and store it in large batteries as part of an Uninterruptible Power Supply (UPS). DC-powered equipment can run directly off this battery source, but for AC-powered equipment, the DC has to be converted back to AC, and some energy is lost in the conversion. Thus, DC-powered equipment is more energy efficient, or "green", for the IT data center.
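A back-of-envelope calculation shows where the savings come from. The efficiency figures below are assumptions for illustration; real rectifier and inverter efficiencies vary by product and load.

```python
# Compare upstream power draw for AC-fed vs. DC-fed equipment behind a UPS.
def watts_drawn(load_w: float, conversion_efficiencies: list) -> float:
    """Power drawn upstream of the load, after each conversion stage's losses."""
    draw = load_w
    for eff in conversion_efficiencies:
        draw /= eff
    return draw

load = 1000.0  # a 1 kW storage enclosure

# AC equipment behind a UPS: AC->DC (rectifier) then DC->AC (inverter),
# each assumed 95% efficient.
ac_path = watts_drawn(load, [0.95, 0.95])
# DC equipment: only the AC->DC rectification stage.
dc_path = watts_drawn(load, [0.95])

print(f"AC path draws {ac_path:.0f} W, DC path draws {dc_path:.0f} W")
```

Skipping the extra DC-to-AC inversion saves roughly 5 percent per kilowatt under these assumed efficiencies, which adds up quickly across a telecom central office.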
Whether you get the DC-powered or AC-powered models, both are NEBS-compliant and ETSI-compliant.
New Tape Drive Options for Autoloaders and Libraries
IBM System Storage [TS2900 Autoloader] is a compact 1U-high tape system that supports one LTO drive and up to 9 tape cartridges. The TS2900 can support either an LTO-3, LTO-4 or LTO-5 half-height drive.
IBM System Storage [TS3100 and TS3200 Tape Libraries] were also enhanced. The TS3100 can accommodate one full-height LTO drive, or two half-height drives, and holds up to 24 cartridges. The TS3200 offers twice as many drives and twice the space for cartridges.
From New York, Rolf went to London, Paris, Madrid, Morocco, Cairo, South Africa, Bangkok Thailand, Malaysia, Singapore, New Zealand, Australia, and then back to the United States. I was hoping to run into him while I was in Australia and New Zealand last month, but our schedules did not line up.
Traveling without baggage is more than just a convenience; it is a metaphor for the philosophy that we should keep only what we need, and leave behind what we don't. This was the approach taken by IBM in the design of the IBM Storwize V7000 midrange disk system.
The IBM Storwize V7000 disk system consists of 2U enclosures. Controller enclosures have dual controllers and drives. Expansion enclosures have just drives. Enclosures can have either 24 small form factor (SFF) 2.5-inch drives, or twelve larger 3.5-inch drives. A controller enclosure can be connected to up to nine expansion enclosures.
The drives are all connected via 6 Gbps SAS, and come in a variety of speeds and sizes: 300GB Solid-State Drive (SSD); 300GB/450GB/600GB high-speed 10K RPM; and 2TB low-speed 7200 RPM drives. The 12-bay enclosures can be intermixed with 24-bay enclosures on the same system, and within an enclosure different speeds and sizes can be intermixed. A half-rack system (20U) could hold as much as 480TB of raw disk capacity.
This new system, freshly designed entirely within IBM, competes directly against systems that carry a lot of baggage, including the HDS AMS, HP EVA, and EMC CLARiiON CX4 systems. Instead, we decided to keep only what we wanted from our other successful IBM products.
Inspired by our successful XIV storage system, IBM has developed a web-based GUI that focuses on ease-of-use. This GUI uses the latest HTML5 and dojo widgets to provide an incredible user experience.
Borrowed from our IBM DS8000 high-end disk systems, state-of-the-art device adapters provide 6 Gbps SAS connectivity with a variety of RAID levels: 0, 1, 5, 6, and 10.
From our SAN Volume Controller, the embedded [ SVC 6.1 firmware] provides all of the features and functions normally associated with enterprise-class systems, including Easy Tier sub-LUN automated tiering between Solid-State Drives and Spinning disk, thin provisioning, external disk virtualization, point-in-time FlashCopy, disk mirroring, built-in migration capability, and long-distance synchronous and asynchronous replication.
Finally, the various "internal NDAs" that kept me from publishing this sooner have expired, so now I have the long-awaited [Inside System Storage: Volume II], documenting IBM's transformation in its storage strategy, including behind-the-scenes commentary about IBM's acquisitions of XIV and Diligent. It is available initially in paperback form; I am still working on the hardcover and eBook editions.
For those who have not yet read my first book, Inside System Storage: Volume I, it is still available from my publisher Lulu, in [hard cover], [paperback] and [eBook] editions.
IBM System Storage DS8800
A lesson IBM learned long ago was not to make radical changes to high-end disk systems, as clients who run mission-critical applications are more concerned about reliability, availability and serviceability than they are performance or functionality. Shipping any product before it was ready meant painfully having to fix the problems in the field instead.
(EMC apparently is learning this same lesson now with their VMAX disk system. Their Enginuity code from the Symmetrix DMX4 was ported over to new CLARiiON-based hardware. With several hundred boxes in the field, they have already racked up over 150 severity 1 problems, roughly half of which resulted in data loss or unavailability. For the sake of our mutual clients that have both IBM servers and EMC disk, I hope they get their act together soon.)
To avoid this, IBM made incremental changes to the successful design and architecture of its predecessors. The new DS8800 shares 85 percent of the stable microcode from the DS8700 system. Functions like Metro Mirror, Global Mirror, and Metro/Global Mirror, are compatible with all of the previous models of the DS8000 series, as well as previous models of the IBM Enterprise Storage Server (ESS) line.
The previous models of the DS8000 series were designed to take in cold air from both front and back, and route the hot air out the top, known as a chimney design. However, many companies are re-arranging their data centers into separate cold aisles and hot aisles. The new DS8800 has front-to-back cooling to accommodate this design.
My colleague Curtis Neal would call the rest of this a "BFD" announcement, which of course stands for "Bigger, Faster and Denser". The new DS8800 scales-up to more drives than its DS8700 predecessor, and can scale-out from a single-frame 2-way system to a multi-frame 4-way system. IBM has upgraded to faster 5GHz POWER6+ processors, with dual-core 8 Gbps FC and FICON host adapters, 8 Gbps device adapters, and 6 Gbps SAS connectivity to smaller form factor (SFF) 2.5-inch SAS drives. IBM Easy Tier will provide sub-LUN automated tiering between Solid-State Drives and spinning disk. The denser packaging with SFF drives means that we can pack over 1000 drives in only three frames, compared to five frames required for the DS8700.
The [IBM System Storage SAN Volume Controller] software release v6.1 brings Easy Tier sub-LUN automated tiering to the rest of the world. IBM Easy Tier moves the hottest, most active extents up to Solid-State Drives (SSD) and moves the coldest, least active down to spinning disk. This works whether the SSD is inside the SVC 2145-CF8 nodes, or in the managed disk pool.
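The hot/cold extent movement can be sketched as a simple heat-based plan. This only illustrates the idea: Easy Tier's actual algorithm, thresholds, and extent bookkeeping are far more sophisticated, and the extent names and counts below are invented.

```python
from collections import Counter

# Simplified sub-LUN tiering: count recent I/Os per extent and keep the
# hottest extents on SSD, demoting extents that have cooled off.
def plan_migrations(io_counts: Counter, ssd_capacity_extents: int,
                    on_ssd: set) -> tuple:
    """Return (promote, demote) extent sets given recent I/O activity."""
    hottest = {ext for ext, _ in io_counts.most_common(ssd_capacity_extents)}
    promote = hottest - on_ssd   # hot extents currently on spinning disk
    demote = on_ssd - hottest    # cooled-off extents to move back down
    return promote, demote

io_counts = Counter({"ext1": 900, "ext2": 850, "ext3": 12, "ext4": 5})
promote, demote = plan_migrations(io_counts, ssd_capacity_extents=2,
                                  on_ssd={"ext3"})
print(promote, demote)  # ext1 and ext2 move up; ext3 moves down
```

Because the plan works at extent granularity rather than whole LUNs, a small amount of SSD can absorb the hottest fraction of a much larger volume.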
Tired of waiting for EMC to finally deliver FAST v2 for your VMAX? It has been 18 months since they first announced that someday they would have sub-LUN automatic tiering. What is taking them so long? Why not virtualize your VMAX with SVC, and you can have it sooner!
SVC 6.1 also upgrades to a sexy new web-based GUI, which like the one for the IBM Storwize V7000, is based on the latest HTML5 and dojo widget standards. Inspired by the popular GUI from the IBM XIV Storage System, this GUI has greatly improved ease-of-use.
A client asked me to explain "Nearline storage" to them. This was easy, I thought, as I started my IBM career on DFHSM, now known as DFSMShsm for z/OS, which was created in 1977 to support the IBM 3850 Mass Storage System (MSS), a virtual storage system that blended disk drives and tape cartridges with robotic automation. Here is a quick recap:
Online storage is immediately available for I/O. This includes DRAM memory, solid-state drives (SSD), and always-on spinning disk, regardless of rotational speed.
Nearline storage is not immediately available, but can be made online quickly without human intervention. This includes optical jukeboxes, automated tape libraries, as well as spin-down massive array of idle disk (MAID) technologies.
Offline storage is not immediately available, and requires some human intervention to bring online. This can include USB memory sticks, CD/DVD optical media, shelf-resident tape cartridges, or other removable media.
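The three definitions above boil down to two questions: is the data immediately available for I/O, and if not, is a human needed to bring it online? A tiny sketch (my own illustration, not any product's logic):

```python
# The two defining questions behind the online/nearline/offline taxonomy,
# as described above. This is just an illustration of the definitions.

def storage_class(immediately_available, needs_human_intervention):
    if immediately_available:
        return "online"
    return "offline" if needs_human_intervention else "nearline"

assert storage_class(True, False) == "online"      # SSD, spinning disk
assert storage_class(False, False) == "nearline"   # tape library, MAID
assert storage_class(False, True) == "offline"     # shelf tape, USB stick
```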
Sadly, it appears a few storage manufacturers and vendors have been misusing the term "Nearline" to refer to "slower online" spinning disk drives. I found this [June 2005 technology paper from Seagate] and this [2002 NetApp Press Release], the latter of which includes this contradiction regarding their "NearStore" disk array. Here is the excerpt:
"Providing online access to reference information—NetApp nearline storage solutions quickly retrieve and replicate reference and archive information maintained on cost-effective storage—medical images, financial models, energy exploration charts and graphs, and other data-intensive records can be stored economically and accessed in multiple locations more quickly than ever"
Which is it, "online access" or "nearline storage"?
If a client asked why slower drives consume less energy or generate less heat, I could explain that, but if they ask why slower drives must have SATA connections, that is a different discussion. The speed of a drive and its connection technology are for the most part independent. A 10K RPM drive can be made with FC, SAS or SATA connection.
I am opposed to using "Nearlne" just to distinguish between four-digit speeds (such as 5400 or 7200 RPM) versus "online" for five-digit speeds (10,000 and 15,000 RPM). The difference in performance between 10K RPM and 7200 RPM spinning disks is miniscule compared to the differences between solid-state drives and any spinning disk, or the difference between spinning disk and tape.
I am also opposed to using the term "Nearline" for online storage systems just because they are targeted for the typical use cases like backup, archive or other reference information that were previously directed to nearline devices like automated tape libraries.
Can we all just agree to refer to drives as "fast" or "slow", or give them RPM rotational speed designations, rather than try to incorrectly imply that FC and SAS drives are always fast, and SATA drives are always slow? Certainly we don't need new terms like "NL-SAS" just to represent a slower SAS connected drive.
Well, it feels like Tuesday and you know what that means... "IBM Announcement Day!" Actually, today is Wednesday, but since Monday was the Memorial Day holiday here in the USA, my week is day-shifted. Yesterday, IBM announced its latest IBM FlashCopy Manager v2.2 release. Fellow blogger Del Hoobler (IBM) has also posted something on this over at the [Tivoli Storage Blog].
IBM FlashCopy Manager replaces two previous products. One was called Tivoli Storage Manager for Copy Services, the other Tivoli Storage Manager for Advanced Copy Services. To say people were confused between these two was an understatement: the first was for Windows, and the second was for the UNIX and Linux operating systems. The solution? A new product that replaces both and supports Windows, UNIX and Linux! Thus, IBM FlashCopy Manager was born. I introduced this product back in 2009 in my post [New DS8700 and other announcements].
IBM Tivoli Storage FlashCopy Manager provides what most people with "N series SnapManager envy" are looking for: application-aware point-in-time copies. This product takes advantage of the underlying point-in-time interfaces available on various disk storage systems:
FlashCopy on the DS8000 and SAN Volume Controller (SVC)
Snapshot on the XIV storage system
Volume Shadow Copy Services (VSS) interface on the DS3000, DS4000, DS5000 and non-IBM gear that supports this Microsoft Windows protocol
For Windows, IBM FlashCopy Manager can coordinate the backup of Microsoft Exchange and SQL Server. The new version 2.2 adds support for Exchange 2010 and SQL Server 2008 R2. This includes the ability to recover an individual mailbox or mail item from an Exchange backup. The data can be recovered directly to an Exchange server, or to a PST file.
For UNIX and Linux, IBM FlashCopy Manager can coordinate the backup of DB2, SAP and Oracle databases. Version 2.2 adds support for specific Linux and Solaris operating systems, and provides a new capability for database cloning. Basically, database cloning restores a database under a new name with all the appropriate changes to allow its use for other purposes, such as development, test or education. A new "fcmcli" command line interface allows IBM FlashCopy Manager to be used for custom applications or file systems.
A common misperception is that IBM FlashCopy Manager requires IBM Tivoli Storage Manager backup software to function. That is not true. You have two options:
In Stand-alone mode, it's just you, the application, IBM FlashCopy Manager and your disk system. IBM FlashCopy Manager coordinates the point-in-time copies, maintains the correct number of versions, and allows you to back up and restore directly disk-to-disk.
In Unified Recovery Management mode, you combine it with Tivoli Storage Manager.
Of course, the risk of relying only on point-in-time copies is that, in most cases, they are on the same disk system as the original data (the exception being virtual disks from the SAN Volume Controller). IBM FlashCopy Manager can be combined with IBM Tivoli Storage Manager so that the point-in-time copies are also copied off to a local or remote TSM server; if the disk system that contains both the source and the point-in-time copies fails, you still have a backup copy from TSM. In this approach, you can restore from the point-in-time copies, but you can also restore from the TSM backups.
IBM FlashCopy Manager is an excellent platform to connect application-aware functionality with hardware-based copy services.
Well, I'm back safely from my tour of Asia. I am glad to report that Tokyo, Beijing and Kuala Lumpur are pretty much how I remember them from the last time I was there in each city. I have since been fighting jet lag by watching the last thirteen episodes of LOST season 6 and the series finale.
Recently, I have started seeing a lot of buzz around the term "Storage Federation". The concept is not new; it is based on the work in database federation, first introduced in the 1985 paper [A federated architecture for information management] by Heimbigner and McLeod. For those not familiar with database federation: you can take several independent, autonomous databases and treat them as one big federated system. For example, this allows you to issue a single query and get results across all the databases in the federated system. The advantage is that it is often easier to federate several disparate, heterogeneous databases than to merge them into a single database. [IBM Infosphere Federation Server] is a market leader in this space, with the capability to federate DB2, Oracle and SQL Server databases.
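For readers who want to see the idea in action, here is a small sketch of federation using two independent SQLite databases. SQLite's ATTACH makes the demonstration trivial; IBM InfoSphere Federation Server does the equivalent, at enterprise scale, across DB2, Oracle and SQL Server. All table and file names here are invented for illustration.

```python
# Sketch of the federated-query idea: two autonomous databases are treated
# as one, and a single query joins data across both.
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
hr_path = os.path.join(tmp, "hr.db")
sales_path = os.path.join(tmp, "sales.db")

# Two independent, autonomous databases
with sqlite3.connect(hr_path) as db:
    db.execute("CREATE TABLE employees (id INTEGER, name TEXT)")
    db.execute("INSERT INTO employees VALUES (1, 'Ada'), (2, 'Grace')")
with sqlite3.connect(sales_path) as db:
    db.execute("CREATE TABLE orders (emp_id INTEGER, amount REAL)")
    db.execute("INSERT INTO orders VALUES (1, 250.0), (1, 100.0)")

# The "federation layer": attach both and issue one cross-database query
fed = sqlite3.connect(":memory:")
fed.execute(f"ATTACH DATABASE '{hr_path}' AS hr")
fed.execute(f"ATTACH DATABASE '{sales_path}' AS sales")
rows = fed.execute(
    "SELECT e.name, SUM(o.amount) FROM hr.employees e "
    "JOIN sales.orders o ON o.emp_id = e.id GROUP BY e.name"
).fetchall()
print(rows)  # [('Ada', 350.0)]
```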
Storage Federation applies the same idea to storage systems. Here are some of the use cases:
Storage expansion: You want to increase the capacity of an existing storage system that cannot accommodate the total amount of capacity desired. Storage Federation allows you to add capacity by adding a whole new system.
Storage migration: You want to migrate from an aging storage system to a new one. Storage Federation allows you to join the two systems, evacuate the data from the first onto the second, and then remove the first system.
Safe system upgrades: System upgrades can be problematic for a number of reasons. Storage Federation allows a system to be removed from the federation and be re-inserted again after the successful completion of the upgrade.
Load balancing: Similar to storage expansion, but on the performance axis, you might want to add additional storage systems to a Storage Federation in order to spread the workload across multiple systems.
Storage tiering: In a similar light, storage systems in a Storage Federation could have different capacity/performance ratios that you could use for tiering data. This is similar to the idea of dynamically re-striping data across the disk drives within a single storage system, such as with 3PAR's Dynamic Optimization software, but extends the concept to cross storage system boundaries.
To some extent, IBM SAN Volume Controller (SVC), XIV, Scale-Out NAS (SONAS), and Information Archive (IA) offer most, if not all, of these capabilities. EMC claims its VPLEX will be able to offer storage federation, but only with other VPLEX clusters, which brings up a good question: what about heterogeneous storage federation? Before anyone accuses me of throwing stones at glass houses, let's take a look at each IBM solution:
IBM SAN Volume Controller
The IBM SAN Volume Controller has been doing storage federation since 2003. Not only can the SAN Volume Controller bring together a variety of heterogeneous storage, the SVC cluster itself can be a mix of different hardware models. You can have a 2145-8A4 node pair, a 2145-8G4 node pair, and the new 2145-CF8 node pair, all combined into a single SVC cluster. Upgrading SVC hardware nodes in an SVC cluster is always non-disruptive.
IBM XIV storage system
The IBM XIV has two kinds of independent modules. Data modules have processor, cache and 12 disks. Interface modules are data modules with additional processor, FC and Ethernet (iSCSI) adapters. Because these two module types play different roles in an XIV "colony", the number of each type is predetermined. Entry-level six-module systems have 2 interface and 4 data modules. Full 15-module systems have 6 interface and 9 data modules. Individual modules can be added or removed non-disruptively in an XIV.
IBM Scale-Out NAS
SONAS is composed of three kinds of nodes that work together in concert: a management node, one or more interface nodes, and two or more storage nodes. The storage nodes are paired to manage up to 240 disk drives in a storage pod. Individual interface or storage nodes can be added or removed non-disruptively in the SONAS. The underlying technology, the General Parallel File System (GPFS), has been doing storage federation since 1996 for some of the largest top-500 supercomputers in the world.
IBM Information Archive (IA)
For the IA, there are 1, 2 or 3 nodes, which manage a set of collections. A collection can be either file-based, using industry-standard NAS protocols, or object-based, using the popular System Storage™ Archive Manager (SSAM) interface. Normally, you have as many collections as you have nodes, but the nodes are powerful enough to manage two collections each, providing N-1 availability. This allows a node to be removed, and a new node added to the IA "colony", in a non-disruptive manner.
Even in an ant colony, there are only a few types of ants, with typically one queen, several males, and lots of workers. But all the ants are red. You don't see colonies that mix between different species of ants. For databases, federation was a way to avoid the much harder task of merging databases from different platforms. For storage, I am surprised people have latched on to the term "federation", given our mixed results in the other "federations" we have formed, which I have conveniently (IMHO) ranked from least effective to most effective:
The Union of Soviet Socialist Republics (USSR)
My father used to say, "If the Soviet Union were in charge of the Sahara desert, they would run out of sand in 50 years." The [Soviet Union] actually lasted 68 years, from 1922 to 1991.
The United Nations (UN)
After the previous League of Nations failed, the UN was formed in 1945 to facilitate cooperation in international law, international security, economic development, social progress and human rights, to achieve world peace by stopping wars between countries, and to provide a platform for dialogue.
The European Union (EU)
With the collapse of the Greek economy, and the [rapid growth of debt] in the UK, Spain and France, there are concerns that the EU might not last past 2020.
The United States of America (USA)
My own country is a federation of states, each with its own government. California's financial crisis was compared to the one in Greece. My own state of Arizona is under boycott from other states because of its recent [immigration law]. However, I think the US has managed better than the EU because it has evolved over the past 200 years.
The Organization of the Petroleum Exporting Countries [OPEC]
Technically, OPEC is not a federation of cooperating countries, but rather a cartel of competing countries that have agreed on total industry output of oil to increase individual members' profits. Note that it was a non-OPEC company, BP, that could not "control their output" in what has now become the worst oil spill in US history. OPEC was formed in 1960, and is expected to collapse sometime around 2030 when the world's oil reserves run out. Matt Savinar has a nice article on [Life After the Oil Crash].
United Federation of Planets
The [Federation] fictitiously described in the Star Trek series appears to work well, an optimistic view of what federations could become if you let them evolve long enough.
Given the mixed results with "federation", I think I will avoid using the term for storage, and stick to the original term "scale-out architecture".
Here I am, day 11 of a 17-day business trip, on the last leg of the trip this week in Kuala Lumpur, Malaysia. I have been flooded with requests to give my take on EMC's latest re-interpretation of storage virtualization: VPLEX.
I'll leave it to my fellow IBM Master Inventor Barry Whyte to cover the detailed technical side-by-side comparison. Instead, I will focus on the business side of things, using Simon Sinek's Why-How-What sequence. Here is a [TED video] from Garr Reynolds' post [The importance of starting from Why].
Let's start with the problem we are trying to solve.
Problem: migration from old gear to new gear, old technology to new technology, from one vendor to another vendor, is disruptive, time-consuming and painful.
Given that IT storage is typically replaced every 3 to 5 years, pretty much every company with an internal IT department has this problem; the exceptions are companies that don't last that long and those that use public cloud solutions. IT storage can be expensive, so companies would like their new purchases to be fully utilized on day 1, and completely empty on day 1500 when the lease expires. I have spoken to clients who have spent 6 to 9 months planning for the replacement or removal of a storage array.
A solution that makes data migration non-disruptive would benefit clients (making it easier for their IT staff to keep the data center modern and current) as well as vendors (reducing the obstacles to selling and deploying new features and functions). Storage virtualization can be employed to help solve this problem. I define virtualization as "technology that makes one set of resources look and feel like a different set of resources, preferably with more desirable characteristics." By making different storage resources, old and new, look and feel like a single type of resource, migration can be performed without disrupting applications.
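Conceptually, in-band block virtualization boils down to a mapping layer: hosts address a virtual volume, the virtualization device forwards each extent to whichever physical array currently holds it, and migration is just a copy followed by a map update. Here is a toy sketch of that concept (my own illustration, not SVC's actual design):

```python
# Toy model of in-band block virtualization: hosts read through a virtual
# volume, and a mapping layer lets data move between physical arrays
# without the host ever noticing. Illustration only, not a real design.

class VirtualVolume:
    def __init__(self, extent_map):
        # extent_map: virtual extent number -> (array name, physical extent)
        self.extent_map = extent_map

    def read(self, extent, arrays):
        array, phys = self.extent_map[extent]
        return arrays[array][phys]

    def migrate(self, extent, dest_array, dest_extent, arrays):
        """Copy one extent to a new array, then switch the map over."""
        src_array, src_extent = self.extent_map[extent]
        arrays[dest_array][dest_extent] = arrays[src_array][src_extent]
        self.extent_map[extent] = (dest_array, dest_extent)

# Old and new arrays, modeled as dicts of physical extent -> data
arrays = {"old_array": {0: b"payroll"}, "new_array": {}}
vol = VirtualVolume({0: ("old_array", 0)})

before = vol.read(0, arrays)
vol.migrate(0, "new_array", 0, arrays)     # non-disruptive to the "host"
after = vol.read(0, arrays)
assert before == after == b"payroll"       # host-visible data never changed
```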
Before VPLEX, here is a breakdown of each solution:
IBM SAN Volume Controller: a new in-band storage virtualization device. Non-disruptive tech refresh, and a unified platform to provide management and functionality across heterogeneous storage.
HDS USP-V and USP-VM: add in-band storage virtualization to an existing storage array. Non-disruptive tech refresh, and a unified platform to provide management and functionality between internal tier-1 HDS storage and external tier-2 heterogeneous storage.
EMC Invista: a new out-of-band storage virtualization device with new "smart" SAN switches. Non-disruptive tech refresh, with a unified multi-pathing driver that allows host attachment of heterogeneous storage.
For IBM, the motivation was clear: protect customers' existing investment in older storage arrays and introduce new IBM storage, with a solution that allows both to be managed with a single set of interfaces and provides a common set of functionality, improving capacity utilization and availability. IBM SAN Volume Controller eliminated vendor lock-in, giving clients choice in multi-pathing driver, and allowing any-to-any migration and copy services. For example, IBM SVC can be used to help migrate data from an old HDS USP-V to a new HDS USP-V.
With EMC, however, the motivation appeared to be protecting software revenues from its PowerPath multi-pathing driver and its TimeFinder and SRDF copy services. Back in 2005, when EMC Invista was first announced, these three products represented 60 percent of EMC's bottom-line profit. (OK, I made that last part up, but you get my point! EMC charges a lot for these.)
Back in 2006, fellow blogger Chuck Hollis (EMC) suggested that SVC was just a [bump in the wire] which could not possibly improve performance of existing disk arrays. IBM showed clients that putting cache (SVC) in front of other cache (back-end devices) does indeed improve performance, in the same way that multi-core processors successfully use L1/L2/L3 cache. Now, EMC is claiming their cache-based VPLEX improves performance of back-end disk. My, how EMC's story has changed!
So now, EMC announces VPLEX, which sports a blend of SVC-like and Invista-like characteristics. Based on blogs, tweets and publicly available materials I found on EMC's website, I have been able to determine the following comparison table. (Of course, VPLEX is not yet generally available, so what is eventually delivered may differ.)
Scalability
SVC: Scalable, 1 to 4 node-pairs.
Invista: One size fits all, single pair of CPCs.
VPLEX: SVC-like, 1 to 4 director-pairs.
SAN fabric support
SVC: Works with any SAN switches or directors.
Invista: Required special "smart" switches (vendor lock-in).
VPLEX: SVC-like, works with any SAN switches or directors.
Multi-pathing drivers
SVC: Broad selection: IBM Subsystem Device Driver (SDD) offered at no additional charge, as well as OS-native drivers Windows MPIO, AIX MPIO, Solaris MPxIO, HP-UX PV-Links, VMware MPP, Linux DM-MP, and the commercial third-party driver Symantec DMP.
Invista: Limited selection, with focus on the priced PowerPath driver.
VPLEX: Invista-like, PowerPath and Windows MPIO.
Cache
SVC: Read cache, and choice of fast-write or write-through cache, offering the ability to improve performance.
Invista: No cache; its Split-Path architecture cracked open Fibre Channel packets in flight, delayed every I/O by 20 nanoseconds, and redirected modified packets to the appropriate physical device.
VPLEX: SVC-like, read and write-through cache, offering the ability to improve performance.
Space-Efficient Point-in-Time copies
SVC: FlashCopy supports up to 256 space-efficient targets, copies of copies, read-only or writeable, and incremental persistent pairs.
VPLEX: Like Invista, no.
Remote distance mirror
SVC: Choice of SVC Metro Mirror (synchronous, up to 300 km) and Global Mirror (asynchronous), or use the functionality of the back-end storage arrays.
Invista: No native support; use the functionality of the back-end storage arrays, or purchase a separate product called EMC RecoverPoint to cover this lack of functionality.
VPLEX: Limited synchronous remote-distance mirror within VPLEX (up to 100 km only), no native asynchronous support; use the functionality of the back-end storage arrays.
Thin provisioning
SVC: Provides thin provisioning to devices that don't offer this natively.
VPLEX: Like Invista, no.
Stretched-cluster access
SVC: SVC Split-Cluster allows concurrent read/write access to data from hosts at two different locations several miles apart.
Invista: I don't think so.
VPLEX: VPLEX Metro, similar in concept but implemented differently.
Non-disruptive tech refresh
SVC: Can upgrade or replace storage arrays, SAN switches, and even the SVC node software AND hardware themselves, non-disruptively.
Invista: Tech refresh for storage arrays, but not for the Invista CPCs.
VPLEX: Tech refresh of back-end devices, and upgrade of VPLEX software, non-disruptively. Not clear whether the VPLEX engines themselves can be upgraded non-disruptively like the SVC.
Heterogeneous Storage Support
SVC: Broad support of over 140 different storage models from all major vendors, including all CLARiiON, Symmetrix and VMAX from EMC, and storage from many smaller startups you may not have heard of.
VPLEX: Invista-like. VPLEX claims to support a variety of arrays from a variety of vendors, but as far as I can find, only the DS8000 is supported from the list of IBM devices. Fellow blogger Barry Burke (EMC) suggests [putting SVC between VPLEX and third party storage devices] to get the heterogeneous coverage most companies demand.
Back-end storage requirement
SVC: Must define quorum disks on any IBM or non-IBM back-end storage array; SVC can run entirely on non-IBM storage arrays.
VPLEX: HP SVSP-like, requires at least one EMC storage array to hold metadata.
Use of solid-state drives
SVC: The SVC 2145-CF8 model supports up to four solid-state drives (SSD) per node that can be treated as managed disks to store end-user data.
VPLEX: Invista-like. VPLEX has an internal 30GB SSD, but it is used only for the operating system and logs, not for end-user data.
In-band virtualization solutions from IBM and HDS dominate the market. Being able to migrate data from old devices to new ones non-disruptively turned out to be only the [tip of the iceberg] of the benefits of storage virtualization. In today's highly virtualized server environment, being able to non-disruptively migrate data comes in handy all the time. SVC is one of the best storage solutions for VMware, Hyper-V, Xen and PowerVM environments. EMC watched and learned in the shadows, taking notes on what people like about the SVC, and decided to follow IBM's time-tested leadership with a similar offering.
EMC re-invented the wheel, and it is round. On a scale from Invista (zero) to SVC (ten), I give EMC's new VPLEX a six.
Last week, I presented "An Introduction to Cloud Computing" for two hours to the local Institute of Management Accountants [IMA] for their Continuing Professional Education [CPE]. Since I present IBM's leadership in Cloud Storage offerings, I have had to become an expert in Cloud Computing overall. The audience was a mix of bookkeepers, accountants, auditors, comptrollers, CPAs, and accounting teachers.
Here is a sample of the questions I took during and after my presentation:
If I need to shut down a host machine, do I lose all my virtual machines as well?
No, it is possible to seamlessly move virtual machines from one host to another. If you need to shut down a host machine, move all the VMs to other hosts; then you can shut down the empty host without impacting business.
Does the SaaS provider have to build their own app, or can they buy an app and then rent it out?
They can, but they won't have competitive differentiation, and the software developer they buy from will want a big cut of the action. SaaS providers that build their own applications can keep more of the profits for themselves.
How do backups work in cloud computing? Do I have to contact someone at the cloud computing company to find the backup tape?
Large datacenters often keep the most recent backups on disk, and older versions on tape in automated tape libraries that can fetch your backup in less than 2 minutes. Because of this, there is no need to talk to anyone, you can schedule or invoke your own backups, and often perform the recovery yourself using self-service tools.
Last month, my sister tried to rent a car during the week of the Tucson Gem Show, but they were out of the cars she wanted to drive. Could this happen with Cloud Computing?
Not likely. With rental cars, the cars have to be physically in Tucson to rent them; the rental companies could have brought cars down from Phoenix to satisfy demand. With Cloud Computing, everything is accessible over the global network, so you are not limited to the cloud providers nearest you.
Is there a reason why Amazon Web Services (AWS) charges more for a Windows image than a Linux image?
Yes, Amazon and Microsoft have a patent cross-licensing agreement where Amazon pays Microsoft for the privilege of offering Windows-based images on their EC2 cloud infrastructure. It just makes business sense to pass those costs on to the consumer. Linux is a free, open source operating system, and is often the better choice.
So if we rent a machine from Amazon, they send it to my accounting office? What exactly am I getting for 12 cents per hour?
No. The computer remains in their datacenter. You get a virtual machine with a 1.2 GHz Intel processor, 1700MB of RAM, and 160GB of hard disk space, with a Windows operating system running on it, comparable to a machine you can get at the local Best Buy. But instead of running in the next room, it is running in a datacenter somewhere else in the United States, with electricity and air conditioning.
You access it remotely from your desktop or laptop PC.
Why would I ever rent more than one computer?
It depends on your workload. For example, Derek Gottfrid at the New York Times needed to convert 11 million articles from TIFF format to PDF format so that he could put them up on the web. This would have taken him months using a single computer, so he rented 100 computers and got the entire stack converted in 24 hours, for a cost of about $240. See the articles [Self-Service, Prorated, Super Computing] and [TimesMachine] for details.
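The arithmetic behind that example is simple, assuming the roughly 10-cents-per-hour instance rate of the time (the exact rate is my assumption, inferred from the $240 total):

```python
# Back-of-envelope check on the New York Times example above:
# 100 rented machines for 24 hours at an assumed 10 cents per hour.
machines, hours, rate_cents_per_hour = 100, 24, 10
cost_dollars = machines * hours * rate_cents_per_hour / 100
print(cost_dollars)  # 240.0
```

The same job on one machine would take 100 times as long for the same total compute cost, which is exactly why renting many machines for a short burst is so attractive.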
What about throughput? Won't I need to run cables from my accounting office to this cloud computing data center?
You will need connectivity, most likely from connections provided by your local telephone or cable company, or through the Internet. Certainly, there can be cases where direct privately-owned fiber optic cables, known as "dark fiber", can directly connect consumers to local Cloud service providers, for added security.
What about medical records? Will Cloud Computing help the Healthcare industry?
Yes, hospitals are finding that digitizing their records greatly reduces costs. IBM offers the Grid Medical Archive Solution [GMAS] as a private cloud storage solution to store X-ray images and other electronic medical records on disk and tape, and these records can be accessed from multiple hospitals and clinics, wherever the doctor or patient happens to be.
The advantage of personal computers was individualization, I could put on my own choices of software, and customize my own settings, won't we lose this with Cloud Computing?
Yes, customized software and settings cost companies millions of dollars with help desk calls. Cloud Computing attempts to provide some standardization, reducing the amount of effort to support IT operations.
Won't putting all the computers into a big datacenter make them more vulnerable to hackers?
Security is a well-known concern, but this is being addressed with encryption, access control lists, multi-tenancy isolation, and VPN connections.
My daughter has a BlackBerry or iPod or something, and when we mentioned that someone in Phoenix wore a monkey suit to avoid photo-radar speed cameras, she was able to pull up a picture on her little hand-held thing, is this the future?
Yes, mobile phones and other hand-held devices now have internet access to take advantage of Cloud Computing services. People will be able to access the information they need from wherever they happen to be. (You can see the picture here: [Man Dons Mask for Speed-Camera Photos])
IBM offers a variety of Cloud Computing services, as well as customized solutions and integrated systems that can be deployed on-premises behind your corporate firewall. To learn more, go to [ibm.com/cloud].
The second speaker was local celebrity Dan Ryan presenting the financials for the upcoming [Rosemont Copper] mining operations. Copper is needed for emerging markets, such as hybrid vehicles and wind turbines. Copper is a major industry in Arizona.
This week I got a comment on my blog post [IBM Announces another SSD Disk offering!]. The exchange involved Solid State Disk storage inside the BladeCenter and System x server lines. Sandeep offered his amazing performance results, but we have no way to get in contact with him. So, for those interested, I have posted on SlideShare.net a quick five-chart presentation on recent tests with various SSD offerings on the eX5 product line.
It's Tuesday, and that means more IBM announcements!
I haven't even finished blogging about all the other stuff that got announced last week, and here we are with more announcements. Since IBM's big [Pulse 2010 Conference] is next week, I thought I would cover this week's announcement on Tivoli Storage Manager (TSM) v6.2 release. Here are the highlights:
Client-Side Data Deduplication
This is sometimes referred to as "source-side" deduplication, as storage admins can get confused about which servers are the clients in a TSM client-server deployment. The idea is to identify duplicates at the TSM client node, before sending data to the TSM server. This is done at the block level, so even files that are similar but not identical, such as slight variations from a master copy, can benefit. The dedupe process is based on an index shared across all clients and the TSM server, so if you have a file similar to a file on a different node, the blocks that are identical in both are deduplicated.
This feature is available for both backup and archive data, and can also be useful for archives using the IBM System Storage Archive Manager (SSAM) v6.2 interface.
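The mechanics can be sketched in a few lines: carve the data into blocks, hash each block, and transmit only the blocks the shared index has never seen. This is the concept only; TSM's actual deduplication is far more sophisticated, and the tiny block size here is purely for illustration.

```python
# Sketch of client-side (source-side) deduplication: only blocks the
# server's index has never seen are sent over the wire. Illustration
# only; block size and structures are invented for the example.
import hashlib

BLOCK_SIZE = 4  # unrealistically small, just for the demo

def backup(data, server_index, wire):
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in server_index:   # new block: send it
            server_index[digest] = block
            wire.append(block)
        # duplicate block: send nothing, the server already has it

index, wire = {}, []
backup(b"AAAABBBBAAAA", index, wire)     # "AAAA" appears twice
print(len(wire))  # 2 -- only the unique blocks crossed the wire
```

Because the index is shared, a second client backing up the same blocks later would transmit nothing at all for them.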
Simplified management of Server virtualization
TSM 6.2 improves its support of VMware guests by adding auto-discovery. Now, when you spontaneously create a new virtual machine OS guest image, you won't have to tell TSM; it will discover this automatically! TSM's legendary support of VMware Consolidated Backup (VCB) now eliminates the manual process of keeping track of guest images. TSM also adds support for the vStorage API for file-level backup and recovery.
While IBM is the #1 reseller of VMware, we also support other forms of server virtualization. In this release, IBM adds support for Microsoft Hyper-V, including support using Microsoft's Volume Shadow Copy Services (VSS).
Automated Client Deployment
Do you have clients at all different levels of TSM backup-archive client code deployed all over the place? TSM v6.2 can upgrade these clients to the latest client level automatically, using push technology, from any client running v5.4 and above. This can be scheduled so that only certain clients are upgraded at a time.
Simultaneous Background Tasks
The TSM server has many background administrative tasks:
Migration of data from one storage pool to another, based on policies, such as moving backups and archives on a disk pool over to a tape pool to make room for new incoming data.
Storage pool backup, typically data on a disk pool is copied to a tape pool to be kept off-site.
Copy active data. In TSM terminology, if you have multiple backup versions, the most recent version is called the active version, and the older versions are called inactive. TSM can copy just the active versions to a separate, smaller disk pool.
In previous releases, these were done one at a time, so it could make for a long service window. With TSM v6.2, these three tasks are now run simultaneously, in parallel, so that they all get done in less time, greatly reducing the server maintenance window, and freeing up tape drives for incoming backup and archive data. Often, the same file on a disk pool is going to be processed by two or more of these scheduled tasks, so it makes sense to read it once and do all the copies and migrations at one time while the data is in buffer memory.
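The "read once, copy many" idea can be sketched like this (the task names mirror the list above, but the scheduling details are invented for illustration):

```python
# Sketch of running the three maintenance tasks in one pass: each file in
# the disk pool is read once, and every scheduled task consumes it while
# the data is in memory. Illustration only, not TSM's actual scheduler.

def run_maintenance(disk_pool, tasks):
    """disk_pool: name -> (data, is_active); tasks: callables (sinks)."""
    for name, (data, is_active) in disk_pool.items():
        for task in tasks:                 # one read feeds all tasks
            task(name, data, is_active)

tape_pool, offsite_pool, active_pool = {}, {}, {}
tasks = [
    lambda n, d, a: tape_pool.__setitem__(n, d),       # migration
    lambda n, d, a: offsite_pool.__setitem__(n, d),    # storage pool backup
    lambda n, d, a: active_pool.__setitem__(n, d) if a else None,  # copy active
]
pool = {"f1": (b"v2", True), "f1.old": (b"v1", False)}
run_maintenance(pool, tasks)
print(sorted(active_pool))  # ['f1'] -- only the active version was copied
```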
Enhanced Security during Data Transmission
Previous releases of TSM offered secure in-flight transmission of data for Windows and AIX clients. This security uses Secure Socket Layer (SSL) with 256-bit AES encryption. With TSM v6.2, this feature is expanded to support Linux, HP-UX and Solaris.
Improved support for Enterprise Resource Planning (ERP) applications
I remember back when we used to call these TDPs (Tivoli Data Protection). TSM for ERP allows backup of ERP applications, seamlessly integrating with database-specific tools like IBM DB2, Oracle RMAN, and SAP BR*Tools. This allows one-to-many and many-to-one configurations between SAP servers and TSM servers. In other words, you can have one SAP server back up to several TSM servers, or several SAP servers back up to a single TSM server. This is done by splitting databases into "sub-database objects" and processing each object separately, which can be extremely helpful if you have databases over 1TB in size. In the event that backing up an object fails and has to be restarted, it does not impact the backup of the other objects.
Continuing on the [IBM Storage Launch of February 9], John Sing has offered to write the following guest post about the [announcement] of IBM Scale Out Network Attached Storage [IBM SONAS]. John and I have known each other for a while, and have traveled the world together to work with clients and speak at conferences. He is an Executive IT Consultant on the SONAS team.
Guest Post written by John Sing, IBM San Jose, California
What is IBM SONAS? It’s many things, so let’s start with this list:
It’s IBM’s delivery of a productized, pre-packaged Scale Out NAS global virtual file server, delivered in an easy-to-use appliance
IBM’s solution for large enterprise file-based storage requirements, where massive scale in capacity and extreme performance are required, especially for today’s modern analytics-based Competitive Advantage IT applications
Scales to many petabytes of usable storage and billions of files in a single global namespace
Provides integrated central management, central deployment of petabyte levels of storage
Modular commercial-off-the-shelf [COTS] building blocks. I/O, storage, network capacity scale independently of each other. Up to 30 interface nodes and 60 storage nodes, in an IBM General Parallel File System [GPFS]-based cluster. Each 10Gb CEE interface node port is capable of streaming at 900 MB/sec
Files are written in block-sized chunks, striped across many disk drives in parallel – aggregating throughput on a massive scale (for both reads and writes), as well as providing auto-tuning and auto-balancing
Functionality delivered via one program product, IBM SONAS Software, which provides all of above functions, along with clustered CIFS, NFS v2/v3 with session auto-failover, FTP, high availability, and more
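As a toy illustration of the striping described above (my own sketch, not GPFS's actual allocation logic), a file can be cut into block-sized chunks and dealt round-robin across the drives, so that reads and writes aggregate the throughput of every disk:

```python
# Toy sketch of block-level striping: chunks are dealt round-robin
# across the disks, so throughput aggregates across all drives.
# Real GPFS blocks are much larger than this example's.
BLOCK_SIZE = 4  # bytes, tiny on purpose for the illustration

def stripe(data, num_disks):
    """Return a list of per-disk chunk lists, filled round-robin."""
    disks = [[] for _ in range(num_disks)]
    for i in range(0, len(data), BLOCK_SIZE):
        disks[(i // BLOCK_SIZE) % num_disks].append(data[i:i + BLOCK_SIZE])
    return disks

def reassemble(disks):
    """Drain the stripes back in the same round-robin order."""
    chunks, i = [], 0
    while any(disks):
        d = disks[i % len(disks)]
        if d:
            chunks.append(d.pop(0))
        i += 1
    return b"".join(chunks)
```

Because every drive holds an interleaved share of every large file, a sequential read keeps all spindles busy at once, which is where the aggregate streaming numbers come from.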
IBM SONAS makes automated tiered storage achievable and realistic at petabyte levels:
Integrated high performance parallel scan engine capable of identifying files at over 10 million files per minute per node
Integrated parallel data movement engine to physically relocate the data within tiered storage
And we’re just scratching the surface. IBM has plans to deploy additional protocols, storage hardware options, and software features.
However, the real question of interest should be, “who really needs that much storage capacity and throughput horsepower?”
The answer may surprise you. IMHO, the answer is: almost any modern enterprise that intends to stay competitive. Hmmm…… Consider this: the reason that IT exists today is no longer to simply save cost (that may have been true 10 years ago). Everyone is reducing cost… but how much competitive advantage is purchased through “let’s cut our IT budget by 10% this year”?
Notice that in today’s world, there are many bright people out there, changing our world every day through New Intelligence Competitive Advantage analytics-based IT applications such as real-time GPS traffic data, real-time energy monitoring and redirection, real-time video feeds with analytics, text analytics, entity analytics, real-time stream computing, image recognition applications, HDTV video on demand, and so on. Think of how the GPS industry, cell phone / Twitter / Facebook, and iPhone and iPad applications, as examples, are creating whole new industries and markets almost overnight.
Then start asking yourself, “What’s behind these Competitive Advantage IT applications, since they are the ones driving all my storage growth? Why do they need so much storage? What do those applications mean for my storage requirements?”
To be “real-time”, today’s applications are breaking long-held IT paradigms every day. Take “data proximity”: we can no longer extract terabytes of data from production databases and load them into a data warehouse – where’s the “real-time” in that? Instead, today’s modern analytics-based applications demand:
Multiple processes and servers (sometimes numbering in the 100s) simultaneously ….
Running against hundreds of terabytes of live production data, streaming in from an expanding number of smarter sensors, input devices and users
Producing digital image-intensive results that must be programmatically sent to an ever increasing number of mobile devices in geographically dispersed locations
Requiring parallel performance levels, that used to be the domain only of High Performance Computing (HPC)
This is a major paradigm shift in storage – and that is the solution and storage capabilities that IBM SONAS is designed to address. And of course, you should be able to save significant cost through the SONAS global virtual file server consolidation and virtualization as well.
Certainly, this topic warrants more discussion. If you found it interesting, contact me, your local IBM Business Partner or IBM Storage rep to discuss Competitive Advantage IT applications and SONAS further.
Last Tuesday, we had our official "Grand Opening" for the new Tucson Executive Briefing Center!
We sent out fancy invitations to all the IBM executives who supported this center, local dignitaries from the Tucson and State of Arizona level, and all of the IBM employees on the Tucson campus.
Since our new center is significantly cozier (5700 square feet versus our previous 15,000 square feet), we split the day into two separate events. The first for the IBM executives and local VIPs, and the second for the rest of the IBM employees on campus.
Of course, there is no free lunch. The day started out with a series of speeches. My manager, Doug Davies, was the master of ceremonies to introduce each speaker.
Alistair Symon, IBM Vice President of Enterprise Storage, explained how storage affects everyone's lives. If you use an ATM to withdraw money, for example, you are most probably using IBM System Storage behind the scenes. Nearly all of the IBM disk and tape storage products are designed here in Tucson.
Bruce Wright (shown here) directs the University of Arizona's Office of University Research Parks, serves as CEO of the UA Tech Park, and is the founder and president of the Arizona Center for Innovation. Bruce said a few words on how pleased he was that IBM decided to reverse its July 2011 decision to leave Tucson. The UofA owns all the property, renting four of the eleven buildings back to IBM, so it is effectively our landlord. Next year will mark the 20th anniversary of IBM's sale of the technology park to the University.
Tucson Councilwoman Shirley Scott talked about the importance of high-paying jobs to the local economy. While IBMers in Tucson are paid less than our counterparts in San Jose, Austin, Raleigh or Poughkeepsie, we are certainly [paid more than the average Tucsonan], thus helping to raise the standard of living here.
Dr. Michael Varney, president and CEO of the local Tucson Metropolitan Chamber of Commerce, praised IBM for its strong reputation in ethics and diversity.
My new second-line manager, Karl Duvalsaint, and my new third-line manager, Doug Dreyer, emphasized the importance of co-locating Briefing Centers in sites that have Research and Development activity. It is important for clients to interact directly with developers, and it is also good for developers to understand directly from clients their needs, preferences and requirements. Worldwide, the IBM Systems and Technology Group has only twelve Executive Briefing Centers, and the Tucson EBC is one of them.
This is not to say that IBM does not have centers in other locations. Our newest client center in Singapore is a shining example. Of course, if they want experts to speak to clients there, they need to be flown in. Doug Dreyer mentioned that IBM plans to launch six such centers in Africa as well.
Next was the ribbon cutting. From left to right: Lee Olguin (our Gunnery Sergeant), Tucson Councilwoman Shirley Scott, UofA's Bruce Wright, IBM VP of Program Management Calline Sanchez, my second-line manager Karl Duvalsaint, IBM VP Alistair Symon, my first-line manager Doug Davies, Tucson Chamber of Commerce President Dr. Michael Varney, and my third-line manager Doug Dreyer. We had a member of the local high school band do the drum roll.
Once the ribbon was cut, the IBM executives and local VIPs were brought in to see the new facility, which has two large rooms, one common dining area, an 800-square-foot green data center to showcase our products, our own set of restrooms, a galley to stage the food and beverage service, and two smaller rooms for private conversations or conference calls. A local high school band provided live music throughout the day.
Wrapping up my coverage of the 2013 IT Security and Storage Expo in Belgium, I noticed some interesting things in the other booths.
The EMC booth had a whiteboard so that clients could do some one-on-one collaboration. All of their cocktail waitresses were wearing sharp pin-stripe coats with matching mini-skirts.
Another booth had a "virtual graffiti wall". Using a "digital spraycan", you could write on the wall. I am not sure what connection this had with anything the company had to offer, but perhaps they also wanted to collaborate with attendees on solutions. In either case, it was very cool, and brought a lot of traffic.
(FTC Disclosure: I work for IBM. I was not paid to mention any of the other companies, their products or people on this blog post. Mentioning other companies is not to be considered an endorsement of any kind.)
There were some interesting costumes. Leila from [Aerohive] wore a "bee costume" complete with black wings. Hans from STS was in a bright orange business suit. (Orange is the national color of Belgium). Sophie from Fortinet handed out champagne. The plastic glasses were cones that snapped onto her tray; with no flat bottom to rest your glass on, you had to hold it the entire time until you finished drinking. The Homer Simpson sticker eating the Apple logo shows the Belgians have a sense of humor!
The NetApp booth had a huge banner claiming that "Data ONTAP" was the #1 storage OS. Obviously Windows, AIX, Solaris and Linux aren't considered "storage operating systems" per se. Is NetApp claiming they outsell FreeNAS, the only other storage OS that I can think of?
While IBM and I.R.I.S-ICT easily won the "Best Looking Big Booth" award, I have to give the "Best Looking Small Booth" award to my friends at Hitachi Data Systems. Like EMC, the Hitachi team did not have any equipment on the floor, but they made use of their tiny space by having a Japanese theme, with cocktail waitresses in kimonos.
Continuing my coverage of the IT Security and Storage Expo in Brussels, Belgium, we had a nice reception Wednesday evening.
Clara handed out Caesar chicken salads. Joelle handed out small rolled-up pieces of duck.
Ilsa is an IBM expert in System x, VMware and the PureSystems family, on hand to help with the demos and any client questions. I.R.I.S.-ICT employee Ans is only in her 20s, but is recognized as one of Belgium's leading experts on the System z mainframe. I used to be the lead architect for DFSMS on z/OS, so we had plenty to talk about.
Of course, the best time for the press to ask for interviews is during the reception, where everyone is relaxed and ready to speak. I am "media-trained" which allows me to speak to the press about IBM matters. I do a lot of these interviews either over the phone, or on camera.
I took a picture to capture the typical setup. Mandy on the left is asking me questions, while camera operator Lisa focuses on my body language. The trick is to spend 80 percent of the time focused on your interviewer, and then 20 percent looking into the camera for strategic pauses. If Mandy decides to use any of the footage, she will be sending me the YouTube video link!
Hans and Sophie from Veeam stopped by the IBM booth to say hello. (See 2010 Aug 27 blog post comparing Veeam to Tivoli Storage Manager). These two DJ's kept the IBM and I.R.I.S-ICT booth hopping.
Belgium is a small country, and many of the IT storage people know each other. This made for quite the party! Our group closed up the booth around 8:30pm and went over to join our friends at Arrow and Huawei. Here is Maiva from Huawei.
Continuing my coverage of the IT Security and Storage Expo in Brussels, Belgium, we had some great storage solutions on display at the IBM and I.R.I.S-ICT booth.
Here my IBM colleague Tom Provost is showing the front of the "Smarter Office" solution. The second photo gives the view from behind. While I always explained the solution from the front of the box, many of the more technical attendees at this conference wanted to inspect the ports in the back.
This sound-isolated 11U solution combines the following:
The [IBM Storwize V3700] with 300GB small-form-factor (SFF) drives provides shared storage for the servers.
Two [IBM System x3550 M4 servers] that can run VMware, Hyper-V or Linux KVM server hypervisor software for your Windows and/or Linux applications. These are two socket servers that can have up to 16 x86 cores each.
A Juniper EX2200 switch to network the servers and storage together.
A Local Console Manager (LCM) with rackable keyboard, video, and mouse.
In this next example, the IBM team combined a BladeCenter S chassis that can hold six blade servers, with a Storwize V7000 Unified which offers FCP, iSCSI, FCoE, NFS, CIFS, HTTPS, SCP and FTP block and file protocols.
If those configurations are too small for your needs, consider the Flex System chassis or full PureFlex system frame. The rack-mountable 10U chassis can hold the Flex System V7000 and 10 compute nodes. The PureFlex frame can hold up to four of these chassis.
IBM and I.R.I.S-ICT also had an IBM XIV Gen3 and a TS3500 Tape library on display.
Continuing my coverage of the IT Security and Storage Expo in Brussels, Belgium, here is my post on the presentations I gave during the week.
There were four presentations each day. Of the five rooms, I was assigned room 3 for all of my presentations. My room was quite large, with sixty seats.
It is a good idea for public speakers to understand Dutch, French, German and English in Belgium. In recognition of the fact that Belgians are multi-lingual, I started each session with "Goede Middag, Bon Jour and Good Afternoon!" and ended each with "Dank U, Merci and Thank you for attending!"
12:00 to 12:30pm
What is big data? Architectures and Practical Use Cases
What is big data? Architectures and Practical Use Cases (repeat)
12:45 to 1:15pm
An IBM Storage solution for small and mid-size business? The Storwize V3700!
An IBM Storage solution for small and mid-size business? The Storwize V3700! (repeat)
1:30 to 2:00pm
A New Generation of Storage Tiering
A New Generation of Storage Tiering (repeat)
2:15 to 2:45pm
Replication for High Availability, Business Continuity and Disaster Recovery
Storage, Server and Network in one Flexible and Integrated solution! The PureSystems family
The sessions were all half-hour slots. The only presentation I had a challenge getting down to 30 minutes was my session on "New Generation of Storage Tiering," in which I was asked to cover Easy Tier sub-LUN automated tiering, server-to-storage cooperative caching, Texas Memory Systems, Hierarchical Storage Management (HSM), Active Cloud Engine, and SmartCloud Storage!
Helping me out were three local IBM interns. From left to right: Joelle, Clara and Bryan. I hadn't noticed that there were only short breaks between sessions, all of which were consumed with one-on-one discussions with clients, so the interns were kind enough to fetch me snacks and drinks.
Joelle and Bryan speak Dutch, which is similar to the local Flemish language. Clara speaks French, which came in handy for translations.
I would like to thank my room monitors: Jolijn, Ella and Chloe. All three are local college students hired by the conference for the two days to scan name badges and count bodies in seats.
(I had to ask Jolijn to write her name on a piece of paper because it is Dutch and I had no clue how to spell it for this blog post.)
While it might appear that room 3 was "The Tony Pearson Show -- all Tony, all the time!" there were actually worthwhile sessions in the other rooms. Fellow blogger Jon Toigo [known for his DrunkenData blog] presented "Storage Infrastruggle 2013 -- Containing Storage Costs without Sacrificing Access, Protection or Management". My IBM colleague Ron Riffe presented a vendor-neutral look at Storage Hypervisors.
If the attendees wanted copies of my presentations, they were directed to get their name badge scanned at the IBM and I.R.I.S-ICT booth, all the way at the other end of the hall, and my presentations would be emailed to them.
(For those who have missed it, you can find all five of my presentations uploaded to the [IBM Expert Network] on Slideshare.)
Finally, I would like to thank my IBM colleagues who helped me develop and review my presentations: Brigitte Van Den Eynde, Joe Hayward, Jeff Jonas, Tom Deutsch, Chris Saul, Marisol Diaz, Iliana Garcia, Harley Puckett, Jack Arnold, and Steve McKinney.
The Belgium IT Security and Storage Expo was a great success!
(I am back to the USA in Portland, Oregon this week, so these posts relate to last week.)
That is not to say I didn't encounter a few challenges during my week in Belgium. The first was getting to the venue. The Belgium Expo is a large complex of buildings to the north of the city. The local IBM team suggested I go to the facility a day in advance so that I would know where it was and how to get there.
I was staying in the center of town, in the Place Rogier section. I had many transportation options:
Take a taxi. It was raining this week, so finding a taxi was difficult.
Take the bus. Bus #260 goes directly from my hotel to the Belgium Expo, but runs only once an hour.
Take the metro. The metro runs frequently, and the Heysel stop is right in front of the Belgium Expo complex.
Upon arrival at the building complex, I was unsure which building I needed to be in. Standing in front of the beautiful Building 5, I found this legend that provided the answer: Building 8. In front of Building 12 was a map that showed where Building 8 was located on the campus.
For this event, IBM joined forces with IBM Business Partner I.R.I.S-ICT to have a fabulous booth, with plenty of experts and equipment demos. As is often the case, the team had to work late into the night to get all the equipment set up, all the podiums and counters constructed, and the demos fully operational.
Apparently, I was not the only one to have troubles finding the place, so I did not feel alone. Some with cars drove around the complex several times before figuring out which parking lot to park in. Others parked at the first spot they found, and still ended up walking as much as I did.
For future reference, if you plan to attend any event at the Belgium Expo, (a) ask for explicit directions, and (b) plan to do lots of walking!
Well, I am back safely from my trip last week to Chicago, and now I am writing this in Madrid, Spain, on my way to Brussels, Belgium for the IT Storage Expo.
For those who have asked how the construction on the new Tucson EBC is going, here are a few pictures I took on Friday. As you can see, it is coming along nicely. The official grand opening will be April 2.
Did you miss IBM Pulse 2013 this week? I wasn't there either, having scheduled visits with clients in Washington DC, only to have those meetings cancelled due to the [U.S. sequestration cuts].
Fortunately, there are plenty of videos and materials to review from the event. Here's a [12-minute video] interview between Laura DuBois, Program VP of Storage for industry analyst firm [IDC], and fellow IBM executive Steve "Woj" Wojtowecz, VP of Tivoli Storage and Networking Software.
(Update: Apparently, IBM had not secured re-distribution rights from IDC to post this video prior to my blog post. IBM now has full permission to distribute. My apologies for any inconvenience last week.)
The two discuss client opportunities and requirements for storage clouds and compute clouds. Client cloud storage requirements include backup and archive clouds, file storage clouds, and storage that supports compute cloud environments.
Here are some upcoming events related to IBM Storage!
If you sell IBM and/or Oracle solutions, please join me for IBM Oracle Virtual University 2013!
A few weeks ago, I recorded a session on IBM Storage: Overview, Positioning and How to Sell that will be available on demand starting tomorrow, February 26th, at the IBM Oracle Virtual University 2013.
It's one of 65 new sessions that will help IBM to surround Oracle applications with IBM infrastructure, services and industry solutions. Oracle software, after all, runs best on IBM hardware. Other highlights of Oracle Virtual University include a live executive State of the Alliance session with Q&A, Oracle keynote, updates by Oracle product managers, sessions on PureSystems, Selling IBM into an Oracle environment, Cloud, and much more.
There will be live technical teams on hand throughout launch day to answer your questions in real time, so I hope you can carve out 30 minutes or more on February 26th to take advantage of these available resources.
After helping launch the first Pulse back in 2008, I have sadly not been back since. Last year, I was invited to attend as a last-minute replacement for another speaker, but I was busy [having emergency surgery].
This year's [Pulse 2013] conference looks amazing. It will be held in Las Vegas, Nevada. Guest speakers Peyton Manning, 4-time NFL MVP football player, and Carrie Underwood, 6-time Grammy award winner, join IBM's Software Group executives and experts on how IBM Tivoli can help optimize your IT infrastructure.
Sadly, once again, I will not be there at Pulse. This time, I will be on the East Coast visiting clients instead, but my on-premise correspondent, Tom Rauchut, has informed me that he will be there. Hopefully, he will provide me something to write about.
Later in March, I will be in Brussels, Belgium for the Storage Expo, held March 20-21 at the Brussels Expo venue. I will be presenting several topics each day, as well as visiting clients in the area. This event comes on behalf of IBM Belgium in association with IBM Business Partner I.R.I.S-ICT.
If you plan to participate in any of these events, let me know!
Sadly, only 70 percent of doctors in the United States use Electronic Medical Record [EMR] systems. My own Primary Care Physician has made the switch, and told me how much he loves having ready access to the information he needs. EMR systems reduce costs, help manage risk, and improve healthcare outcomes. It is no surprise that the U.S. government has taken a [stick-and-carrot approach] to encourage doctors to use them.
A frequent topic at the Tucson Executive Briefing Center where I work is how to make the most use of IT for healthcare and life sciences. For much of 2011 and 2012, I was also one of the technical advocates assigned to Wellpoint Insurance, in support of their adoption of IBM Watson technology for healthcare.
Recently, I spoke with Jarrett Potts, my long-time friend and former IBM colleague, who now works as Director of Strategic Marketing over at STORServer. If you have never heard of STORServer, it is a company that makes purpose-built backup appliances.
What is a Backup Appliance? It is an integrated solution of hardware and software that serves a single purpose: backup and recovery. STORServer Enterprise Backup Appliance (EBA) combines IBM's high-end x86 M4 server, IBM disk and tape storage, and IBM Tivoli Storage Manager (TSM) backup software.
(Fun Fact: The 2012 IBM year-end financial results were announced last month. IBM not only continues its #1 lead in servers overall, but has the #1 marketshare for high-end x86 servers, market-leading disk and tape storage hardware, and market leading backup software.)
To determine the appropriate size of your backup appliance, the folks at STORServer help you every step of the way. They figure out the number of TB you will back up every day, and even help configure all of the TSM server parameters to achieve the policies that make the most sense for your organization.
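As a rough illustration of the arithmetic behind that sizing step (my own simplification; STORServer's actual process considers far more than this), a first-pass capacity estimate might look like:

```python
# Hypothetical back-of-the-envelope appliance sizing, for illustration only.
# Daily backup volume times retained versions, reduced by deduplication,
# plus headroom for growth.

def required_capacity_tb(daily_backup_tb, versions_kept,
                         dedup_ratio=1.0, growth_headroom=1.25):
    """Estimate backend capacity from daily backup volume and retention."""
    return daily_backup_tb * versions_kept / dedup_ratio * growth_headroom

# e.g. 2 TB/day, 14 retained versions, 2:1 data reduction
# -> 17.5 TB of backend capacity including 25% growth headroom
```

The real value of the vendor-assisted sizing is that retention policies, incremental-forever behavior, and tape versus disk placement all change these numbers substantially.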
The appliance can back up every type of data, from databases and Virtual Machines (VMs) to documents, spreadsheets, and other unstructured data.
Are you then left with a solution too complicated to run yourself? No. The STORServer Console is an easy-to-use GUI for ongoing monitoring and maintenance. Plus, your friends at STORServer are only a phone call away in case you have any questions.
(FTC Disclosure: I work for IBM, and STORServer is an approved IBM Business Partner that uses IBM hardware and software to build their solution. I have no financial interest in STORServer, and was not paid by STORServer to mention their company or products on my blog. This post may be considered a celebrity endorsement of STORServer and its Enterprise Backup Appliances.)
Perhaps my readers feel that I am a bit biased in describing a TSM-based solution, and you want a second opinion. No worries, I understand. In the latest 165-page [2012 DCIG Backup Appliance Buyer's Guide], the STORServer models ranked very high. Here is an excerpt:
"Nowhere is this demand for purpose built appliances more evident than in the rise of purpose
built backup appliances (PBBAs) over the last few years and their anticipated growth rate
going forward. A recent market analysis performed by IDC found that worldwide PBBA revenue totaled $2.4 billion in 2011 which was a 42.4 percent increase over the prior year.
This scoring came into play in preparing this Buyer's Guide
as the STORServer EBA 3100 model scored so highly
overall that it fell outside of the two (2) standard deviations
that DCIG generally uses as a guideline for inclusion and
exclusion of products.
The reason DCIG included this model in this Buyer's Guide
whereas in other situations it might not is that DCIG is
unaware of any other backup appliance(s) from any other
providers that come close to matching the EBA 3100's
software and hardware attributes. As such, DCIG felt it
would be doing STORServer specifically and the market
generally a disservice by not highlighting in this Buyer's
Guide that such a backup appliance existed and was
generally available for purchase."
Backup Appliance Models
STORServer EBA 3100
Symantec NetBackup 5220 Backup Appliance
STORServer EBA 2100
STORServer EBA 1100
STORServer EBA 800
Symantec Backup Exec 3600 Appliance
The STORServer is ideal for small and medium-sized businesses (SMB), but can scale quite large to handle business growth. If you are unhappy with your current backup environment, and feel now is the time to look for a better way of taking backups, you won't go wrong choosing a solution based on IBM's market-leading server and storage hardware with Tivoli Storage Manager software.
Well, it was Tuesday again, and we had quite a lot of announcements here at IBM this week!
Over 1,800 clients attended the [Live February 5 webcast]! The announcements were all part of IBM's SmartCloud Storage portfolio. Here are the highlights:
STN7800 Real-time Compression Appliance
Back in October 2010, IBM announced the acquisition of Storwize, Inc., renaming its NAS-compression units to the IBM Real-time Compression appliances. Some folks were confused, so I had a blog post [IBM Storwize Product Name Decoder Ring].
IBM initially offered two models:
The [STN6500 model] had 16 Ethernet ports 1GbE (16x1GbE) and a pair of four-core processors.
The [STN6800 model] had either eight 10GbE ports (8x10GbE), or four 10GbE plus eight 1GbE ports (4x10GbE+8x1GbE). It has a pair of six-core processors.
Now, IBM offers the [STN7800 model], which can replace either of the models above, offering 16x1GbE, 8x10GbE, and 4x10GbE+8x1GbE port configurations. It has a pair of eight-core processors to handle more robust Cloud Storage environments. See [Announcement Letter 113-012] for more details.
New XIV Gen3 model 214
With its awesome support for VMware, the XIV is often chosen for Cloud storage. The new XIV model 214 now offers up to a dozen 10GbE ports, or you can stay with the 22 1GbE ports available on previous models. These can be used for iSCSI host attachment and/or IP-based replication.
IBM strives to make each new model of every storage device more energy efficient than the last.
The new XIV model is no exception. The original XIV, introduced in 2008, consumed 8.4 kVA fully loaded. The XIV Gen 3 model 114 consumed 7.0 kVA. This new model 214 consumes only 5.9 kVA!
It has been almost three years since my now infamous post [Double Drive Failure Debunked: XIV Two Years Later]. Back then, the XIV offered only 1TB and 2TB drives, with rebuild time for 1TB drive of less than 30 minutes, and for 2TB less than 60 minutes.
The new XIV Gen3 software 11.2 release, available for both the 114 and 214 models, can now rebuild a 2TB drive in less than 26 minutes, and a 3TB drive in less than 39 minutes. There is also support specific to Windows Server 2012 including thin provisioning, MSCS, VSS, and Hyper-V. See [Announcement Letter 113-013] for more details.
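A quick back-of-the-envelope check on those numbers (my own arithmetic, not an IBM benchmark) shows why distributed rebuild matters: both figures imply roughly the same aggregate rebuild bandwidth, far more than any single spare drive could absorb, which is only possible because the rebuild is spread across every drive in the grid.

```python
# Illustrative arithmetic only: the aggregate rebuild throughput implied
# by a drive size and rebuild time, using 1 TB = 1,000,000 MB.

def rebuild_rate_mb_per_s(drive_tb, minutes):
    """Average rebuild throughput implied by a rebuild completing in time."""
    return drive_tb * 1_000_000 / (minutes * 60)

# 2 TB in 26 minutes and 3 TB in 39 minutes both work out to about
# 1,282 MB/s, spread across all the drives in the grid.
```

A single drive cannot write anywhere near that fast, so the consistent rate across drive sizes is a signature of the XIV's many-to-many rebuild design.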
SmartCloud Storage Access
IBM is the first major storage vendor to offer a product of this kind, so it may take a moment to explain.
The concept is simple. Rather than having end-users ask IT every time they need some storage space, IBM created a self-service portal that frees up the IT department to work on more important transformational projects.
This is essentially what people can do with "Public Cloud" storage service providers; IBM now gives you the same capability in your "Private Cloud" storage deployment.
Here is the sequence of events. End users point their favorite web browser to the self-service portal, and login using their credentials stored in your Active Directory or LDAP server database.
Once validated, the end-user can request new storage space, expand their existing space, or return space to the IT department. For new storage requests, users have a choice of storage classes -- such as Gold, Silver and Bronze -- defined in the Tivoli Storage Productivity Center (TPC), either stand-alone or in the SmartCloud Virtual Storage Center.
But wait! Do you want to give every end-user a blank check to provision their own storage? Most IT staff are horrified at the thought.
Knowing this, IBM has included an option to put in an approval process, based on the end-user and the amount of capacity requested. The approver can be the cloud administrator, or someone delegated for approvals, known as an environment owner.
For some users, policies may restrict the storage classes as well. For example, Fred can only have Silver or Bronze, but not Gold.
Once the approval is obtained, TPC then issues the appropriate commands to the appropriate SONAS or Storwize V7000 Unified device. SmartCloud Storage Access can do this for thousands of storage devices across dozens of geographically dispersed locations.
Previously, the Cloud Admin had to configure storage pools of managed disks, define file systems, dole out file sets to hundreds or thousands of users with hard quotas, and then configure shares based on the protocols required, like CIFS, NFS, HTTPS, etc.
With SmartCloud Storage Access, the Cloud admin still defines the pools and file systems, but then lets the self-service capability of the software create the file sets, set the quotas, and configure shares with the appropriate protocols. This greatly reduces the work on the IT staff, and greatly improves the turn-around time for end-users to get exactly what they want, when they need it.
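The request-and-approval flow described above can be sketched in a few lines. Everything here (the class names, the auto-approve threshold, the policy table) is my own invention for illustration; it is not the SmartCloud Storage Access API.

```python
# Hypothetical sketch of a self-service storage request flow:
# per-user class restrictions, plus an approval step for large requests.
GOLD, SILVER, BRONZE = "Gold", "Silver", "Bronze"

class StorageRequest:
    """An end-user request arriving through the self-service portal."""
    def __init__(self, user, storage_class, capacity_gb):
        self.user = user
        self.storage_class = storage_class
        self.capacity_gb = capacity_gb

def allowed_classes(user, policy):
    """Per-user policy may restrict storage classes (e.g. Fred: no Gold)."""
    return policy.get(user, [GOLD, SILVER, BRONZE])

def needs_approval(request, auto_approve_limit_gb=100):
    """Small requests go straight to provisioning; large ones wait for the
    cloud administrator or a delegated environment owner to approve."""
    return request.capacity_gb > auto_approve_limit_gb
```

Once a request clears the policy check and, if needed, the approval step, the provisioning commands go to the target device, as described above.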
The next time you withdraw money from an ATM, fill up your gas tank at a self-service gas station, or serve your own salad at the salad bar and fill your own soft drink at a fast food restaurant, you will appreciate why SmartCloud Storage Access is a brilliant move for the IT staff.
Cloud administrators, environment owners, and end-users can all use SmartCloud Storage Access to monitor and report on storage usage.
(What does this have to do with Storage? When IBM got back into networking in a big way, they had to decide whether to combine it with one of the existing groups, or form its own group. IBM decided to merge networking with storage, which makes sense since the primary purpose of most networks is to access or transmit information stored somewhere else.)
Last April, the Wharton School and the Institute for the Future convened a one-day [After Broadband] workshop in San Francisco, California, that brought together a group of leading technologists, entrepreneurs, academics and policymakers to explore the future of broadband over the next decade.
Today is the last day of 2012, so it is only fitting to end the year looking forward to the future!
While I have been accused of being a historian, I consider myself a bit of a futurist. Since 2006, I have been blogging about the future of technology, including Cloud, Big Data, and the explosion of information. As a consultant for the IBM Executive Briefing Center, I present to clients IBM's future plans, strategies, and product roadmaps.
(Fellow blogger Mark Twomey on his Storagezilla blog has a humorous post titled [Stuff your Predictions], expressing his disdain for articles this time of year that predict what the next 12 months will bring. Don't worry, this is not one of those posts!)
What exactly is a futurist? Biologists study biology. Technologists study technology. But a person can't simply time-travel to the future, read the newspaper, make observations, take notes, and then go back in time to share his findings.
Here are the key differences between historians and futurists:
Historians: There is only one past. Only about six percent of all the humans who have ever lived are alive today, so historians must study the past through the writings, tools, and remains of those who have passed on. They search for insight, building a framework to explain what happened and why.
Futurists: There are many possible futures. Futurists study the past and the present, looking for patterns and trends. They search for foresight, building a framework to express what is possible, probable, and perhaps even preferable.
A common framework for both is the concept of the various "Ages" that humanity has been through:
Around 200,000 years ago, in the middle of what archaeologists refer to as the [Paleolithic Era], man walked upright and used tools made of stone to hunt and gather food. Humans were nomadic and travelled in tribes to follow the herds of animals as they migrated season to season. The History Channel had a great eight-hour series called [Mankind: The Story of All of Us] that started here, and worked all the way up to modern times.
About 10,000 years ago, humans got tired of chasing after their meals, and settled down, growing their food instead. Grains like wheat, rice, and corn became staples of most diets around the world. Civilization evolved, and people traded what they grew or made in exchange for items they needed or wanted.
About 300 years ago, humans developed machines to help do things, and even to help build other machines. While farmers harnessed oxen to plow fields, and horses to speed up travel and communication, these were all based on muscle power.
Machines like the steam engine were powered by coal, petroleum, or natural gas. Today, one gallon of gasoline can do the work of 600 man-hours of human muscle power, or [move a ton of freight 400 miles].
Cities grew up with skyscrapers of steel, connected by trains, planes and automobiles. Communications with the telegraph, telephone, radio and television replaced sending message on horseback.
The forces that drove humanity to the Industrial age clashed with the culture and identity established during the Agricultural age. I highly recommend Thomas Friedman's book [The Lexus and the Olive Tree], which covers these conflicts.
When exactly did the Information age begin? Did it start with Gutenberg's muscle-powered [Printing Press] in the year 1450, or the first punched card in 1725?
Futurist [Alvin Toffler] published his book The Third Wave in 1980. He coined the phrase "Third Wave" to describe the transition from the Industrial age to the Information age.
While IBM mainframes were processing information in the 1950's, many people associate the Information Age with the IBM Personal Computer (1981) or the World Wide Web (1991). Over 100 years ago, IBM started out in the Industrial age, with business machines like meat scales and cheese slicers. IBM led the charge into the Information Age, and continues that leadership today.
In any case, value went from atoms to bits. Computers and mobile devices transfer bits of data, information and ideas, from nearly anyplace on the planet to another, in seconds.
Ideas and content are now king, rather than the land, buildings, machines and raw materials of the Industrial age. In 1975, less than 20 percent of a business's assets were intangible. By 2005, over 80 percent were.
While the Industrial age was dominated by left-brain thinking, the Information Age requires the creativity of right-brain thinking. I highly recommend Daniel Pink's book, [A Whole New Mind] that covers this in detail.
"The future is already here -- it's just not very evenly distributed!" -- William Gibson (1993)
The problem with looking back through history as a series of "Ages" is that they really didn't start and end on specific days. The Agricultural age didn't end on a particular Sunday evening, with the Industrial age starting up the following Monday morning.
There are still people on the planet today in the Stone age. On my last visit to Kenya, I met a nomadic tribe that still lives this way. Huts were temporarily constructed from sticks and mud, and abandoned when it was time to move on.
A short-sighted charity built a one-room school house for them, hoping to convince the tribe that staying in one place for education was more important than hunting and gathering food in a nomadic lifestyle. Some stayed and starved.
In the United States, about 2 percent of Americans grow food for the rest of us, with enough left over to make ethanol and give food aid to other countries.
Sadly, the Standard American Diet continues to be foods mostly processed from wheat, rice and corn, even though our human genetic make-up has not yet evolved from a "Paleolithic" mix of [meats, nuts and berries].
There are still people on the planet today in the Industrial age. American schools are still geared to teach children for Industrial-age jobs, yet still take "summer vacation" as if children were needed to work the fields of the Agricultural age. Seth Godin's book [Stop Stealing Dreams] is a great read on what we should do about this.
Wrapping up my series on a [Laptop for Grandma], I finally have something that I think meets all of my requirements! Special thanks to Guidomar and the rest of my readers who sent in suggestions!
I could have called this series "The Good, the Bad, and the Ugly". The [Cloud-oriented choices] weren't bad per se, but expected persistent Internet connection. The [Low-RAM choices] were not ugly per se, but had limited application options. The ones below were good, in that they helped me decide what would be just right for grandma.
Linux Mint 9
One of my readers, Guidomar, suggested Linux Mint Xfce. At LinuxFest Northwest 2012, Bryan Lunduke indicated that [Linux Mint] is the fastest growing Linux in popularity. You can watch his 43-minute presentation of [Why Linux Sucks!] on YouTube.
The latest version is Mint 14, but that has grown so big it has to be installed on a DVD, as it will no longer fit on a 700MB CD-ROM. Since I don't have a DVD drive on this Thinkpad R31, I dropped down to the latest Gnome edition that did fit on a LiveCD, which was Mint 9.
(In retrospect, I could have used the [PLoP Boot Manager CD], and installed the latest Linux Mint 14 from USB memory stick! My concern was that if a distribution didn't fit on a CD-ROM, it was expecting a more modern computer overall, and thus would probably require more than 384MB of RAM as well.)
Linux Mint is actually a variant of Ubuntu, which means that it can tap into the thousands of applications already available. Mint 9 is based on Ubuntu 10.04 LTS.
One of the nice features of Linux Mint is that there are versions with full [Codecs] installed. A codec is a coder/decoder software routine that can convert a digital data stream or signal, such as for audio or video data. Many formats are proprietary, so codecs are generally not open source, and often not included in most Linux distros. They can be installed manually by the Linux administrator. Windows and Mac OS are commercially sold and don't have this problem, as Microsoft and Apple take care of all the licensing issues behind the scenes.
The installation went smoothly. It would have gladly set up a dual-boot with Windows for me, but instead I opted to wipe the disk clean and install fresh for each Linux distribution I tried.
Running it was a different matter. The screen would go black and crash. There just wasn't enough memory.
Since [Peppermint OS] was partially based on Lubuntu, I thought I would give [Lubuntu 12.04] a try. The difference is that Peppermint OS is based on Xfce (as is Xubuntu), but Lubuntu claims to have a smaller memory footprint using Lightweight X11 Desktop Environment (LXDE). This version claims to run in 384MB, which is what I have on grandma's Thinkpad R31.
There are two installers. The main installer requires more than 512MB to run, so I used the alternate text-based Installer-only CD, which needs only 192MB.
The LXDE GUI is simple and straightforward. As with Peppermint OS, I did have to install the Codec plugins. However, the time-to-first-note was less than two minutes, so we can count this as a success!
Linux Mint 12 LXDE edition
Circling back to Linux Mint, I realized that my problem above was choosing the wrong edition. Linux Mint comes in several editions; the main edition I had selected was based on Gnome, which requires at least 512MB of RAM.
Other editions are based on KDE, Xfce and LXDE. Linux Mint 9 LXDE requires only 192MB of RAM, and the newer Linux Mint 12 LXDE requires only 256MB. I chose the latter, and the install went pretty much the same as Mint and Lubuntu above.
The music player that comes pre-installed is called [Exaile], which supports playlists, audio CDs, and a variety of other modern features, so no reason to install Rhythmbox or anything else. Grandma can even rip her existing audio CDs to import her music into MP3 format. Time-to-first-note was about two minutes.
The best part: the OS only takes up about 4GB of disk, leaving about 15GB for MP3 music files!
Lubuntu and Linux Mint LXDE were similar, but I decided to go with the latter because I like that they do not force version upgrades. This is a philosophical difference: Ubuntu likes to keep everyone on the latest supported releases, so it will often remind you it's time to upgrade. Linux Mint prefers an if-it-ain't-broke-don't-fix-it approach, which means less ongoing maintenance for me.
A few finishing touches to make the system complete:
A nice wallpaper from [InterfaceLift]. This website has high-res photographs that are just stunning.
Power management with screen-saver settings to a nice pink background with white snowflakes falling.
A small collection of her MP3 music pre-loaded so that she would have something to listen to while she learns how to rip CDs and copy over the rest of her music.
Icons on the main desktop for Exaile, My Computer, Home Directory, and the Welcome Screen.
Larger Font size, to make it easier to read.
Update settings that only look for levels "1" and "2". There are five levels, but "1" and "2" are considered the safest, tested versions. Also, an update is only done if it does not involve installing or removing other packages. This should offer some added stability.
I considered installing [ClamAV] for anti-virus protection, but since this laptop will not be connected to the Internet, I decided not to burn up CPU cycles. I also considered installing [Team Viewer], which would allow me remote access to her system if anything should ever fail. However, since she does not have Wi-Fi at home, and lives only a few minutes across town, I decided to leave this off.
Once again, I want to thank all of my readers for their suggestions! I learned quite a lot on this journey, and am glad that I have something that I am proud to present to grandma: boots quickly enough, simple to use, and does not require on-going maintenance!
Continuing my series on a [Laptop for Grandma], I thought I would pursue some of the "low-RAM" operating system choices. Grandma's Thinkpad R31 has only 384MB of RAM.
All of the ones below are based on Linux. For those who aren't familiar with installing or running the Linux operating system, here are some helpful tips:
Most Linux distributors allow you to download an ISO file for free. This can be (a) burned to a CD, (b) burned to a DVD, or (c) written to a USB memory stick.
The ISO can be either a "LiveCD/LiveDVD" version, an installation program, or a combination of the two. The "Live" version allows you to boot up and try out the operating system without modifying the contents of your hard drive. Windows and Mac OS users can try out Linux without impact to their existing environment. Some Linux distributions offer both a full LiveCD+Installer version, as well as an alternate text-based Installer-only version. The latter often requires less RAM to use.
When installing, it is best to have the laptop plugged in to an electrical outlet, and hard-wired to the internet in case it needs to download the latest drivers for your particular hardware.
A CD can hold only 700MB. Many of the newer Linux distributions exceed that, requiring a DVD or USB stick instead. If your laptop has an older optical drive, it may not be able to read DVD media. Some older optical drives can only read CD's, not burn them. In my case, I burned the CDs on another machine, and then used them on grandma's Thinkpad R31.
To avoid burning "a set of coasters" when trying out multiple choices, consider using rewriteable optical media, or the USB option. If you don't like it, you can re-use for something else.
The program [Unetbootin] can take most ISO files and write them to a bootable USB stick. On my Red Hat Enterprise Linux 6 laptop, I had to also install p7zip and p7zip-plugins first.
The BIOS on some older machines, like my grandma's Thinkpad R31, cannot boot from USB. The [PLoP Boot Manager] allows you to first boot from floppy or CD-ROM, and then allows you to boot from the USB. This worked great on my grandma's system. The PLoP Boot Manager is also available on the [Ultimate Boot CD].
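Before burning anything, it can save a "coaster" to check whether an ISO will actually fit on a CD. A "700MB" CD-R actually holds 737,280,000 bytes (703 MiB) of data, so a quick shell check is enough; this is my own minimal sketch, and the example file name is just a placeholder:

```shell
# fits_on_cd: report whether an ISO image will fit on a "700MB" CD-R,
# which actually holds 737,280,000 bytes (703 MiB) of data.
fits_on_cd() {
    size=$(stat -c %s "$1")
    if [ "$size" -le 737280000 ]; then
        echo "fits on CD"
    else
        echo "too big for CD - use a DVD or USB stick"
    fi
}

# Example (file name is a placeholder for whatever ISO you downloaded):
# fits_on_cd linuxmint-9-lxde.iso
```

This is also a quick way to tell, before you even boot it, whether a distribution is aiming at more modern hardware than an old laptop like the R31.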
While I am a big fan of SUSE, Red Hat, and Ubuntu, these all require more RAM than available on grandma's laptop. Here are some Low-RAM alternatives I tried:
Damn Small Linux 4.11 RC2
The Damn Small Linux [DSL] project was dormant since 2008, but has a fresh new release for 2012. This baby can run in as little as 16MB of RAM! If you have 128MB of RAM or more, the OS can run entirely from RAM, providing much faster performance.
Of course, there are always trade-offs, and in this case, apps were chosen for their size and memory footprint, not necessarily for their user-friendliness and eye candy. For example, XMMS plays MP3 music, but I did not find it as friendly as iTunes or Rhythmbox.
Boot time is fast. From hitting the power-on button to playing the first note of MP3 music was about 1 minute.
Installing DSL Linux on the hard drive converts it into a Debian distribution, which then allows more options for applications.
Next up was [MacPup]. The latest version is 529, based on Puppy Linux 5.2.60 Precise, compatible with Ubuntu 12.04 Precise Pangolin. While traditional Puppy Linux clutters the screen with apps, MacPup tries to have the look-and-feel of Mac OS with a launcher tray at the bottom center of the screen.
Both MacPup and Puppy Linux can run in very small amounts of RAM and disk space. Like DSL above, you can opt to run MacPup entirely in 128MB of RAM. Unfortunately, the trade-off is a lack of application choices.
Installation to the hard drive was quite involved, certainly not for the beginner. First, you have to use Gparted to partition the disk. I created a 19GB partition (sda1) for my files, and 700MB (sda5) for swap. I had trouble with the "ext4" file system, so re-formatted to "ext3". Second, you have to copy the files over from the LiveCD using the "Puppy Universal Installer". Third, you have to set up the bootloader. Grub didn't work, so I installed Grub4Dos instead.
The music app is called "Alsa Player", and I was able to drag the icon into the startup tray. Time-to-first-note was just over 1 minute. Fast, but not as simple-to-use as I would like.
SliTaz 4.0 claims to be able to run in as little as 48MB of RAM and 100MB of disk space. Time-to-first-note was similar to MacPup, but I didn't care for the TazPanel for setup, and the TazPkg for installing a limited set of software packages. I could not get Wi-Fi working at all on SliTaz, and just gave up trying.
All three of these ran on grandma's Thinkpad R31, and all three could play MP3 music. However, they were not as simple to use as grandma would like, and I was concerned about the amount of time and effort I might have to spend if things went wrong.
I've gotten suggestions to upgrade the memory and disk storage, and how to fine-tune the Microsoft Windows XP operating system. Others suggested replacing the OS with Linux, and to use the Cloud to avoid some of the storage space limitations.
But first, I have to mention the latest in our series of "Enterprise Systems" videos. The first was being [Data Ready]. The second was being [Security Ready]. And now the third in the series: the 3-minute [Cloud Ready] video.
So I decided to try different Cloud-oriented Operating Systems, to see if any would be a good fit. Here is what I found:
(FTC Disclosure: I work for IBM and own IBM stock. This blog post is not meant to endorse one OS over another. I have financial interests in, and/or have friends and family who work at some of the various companies mentioned in this post. Some of these companies also have business relationships with IBM.)
Jolicloud and Joli OS 1.2
I gave this OS a try. It is based on Linux, but with an interesting approach: it expects you to be online all the time, and is designed for 15-25 year-olds who live on social media websites like Facebook. With a Jolicloud account, you can access your desktop from any browser on any system, run the Joli OS operating system locally, or buy the Jolibook netbook computer with it pre-installed.
The Joli OS 1.2 LiveCD ran fine on my T410 with 4GB of RAM, giving me a chance to check it out, but sadly did not run on grandma's Thinkpad R31 with 384MB of RAM. According to the [Jolicloud specifications], Joli OS should run in as little as 384MB of RAM and 2GB of disk storage space, but it didn't for me.
Google Chrome and Chromium OS Vanilla
Like the Jolibook, Google has come out with a $249 Chromebook laptop that runs its "Chrome OS". Chrome OS is only available pre-installed by OEMs on designated hardware, but an open source version, called Chromium OS, is available. Both are based on Linux.
Rather than compiling from source, Hexxeh has made nightly builds available. You can download [Chromium OS Vanilla] zip file, unzip the image file, and copy it to a 4GB USB memory stick. The compressed image is about 300MB, but uncompressed about 2.5GB, so too big to fit on a CD. The image on the USB stick is actually two partitions, and cannot be run from DVD either.
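Writing the uncompressed image to a USB stick is a raw, sector-by-sector copy, which is why a plain file copy won't work. A minimal sketch with dd (the device name and file names are placeholders; double-check the device before running anything like this for real):

```shell
# write_image SRC DEV: write a raw OS image to a device with dd.
# WARNING: when DEV is a real USB stick (e.g. /dev/sdb -- verify with
# lsblk first!), everything on it is destroyed.
write_image() {
    dd if="$1" of="$2" bs=4M conv=fsync 2>/dev/null
}

# Example (file names are placeholders for the Hexxeh nightly build):
# unzip ChromeOS-Vanilla.zip        # yields a ~2.5GB .img file
# write_image ChromeOS-Vanilla.img /dev/sdb
```

Because dd copies the image byte-for-byte, both partitions inside the Chromium OS image land on the stick in one step.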
If you don't have a 4GB USB stick handy, and want to see what all the fuss is about, just install the Google Chrome browser on your Windows or Linux system, and then maximize the browser window. That's it. That is basically what Chromium OS is all about.
Files can be stored locally, or out on your Google Drive. Documents can be edited using "Google Docs" in the Cloud. You can run in "off-line" mode, for example, read your Gmail notes when not connected to the Internet. Music and video files can be played using the "Files" app.
If you really need to get out of the browser, you can hit the right combination of keys to get to the "crosh" command line shell.
Like Joli OS, I was able to run this from my Thinkpad T410 with 4GB of RAM, but not on grandma's Thinkpad R31. It appears that Chromium requires at least 1GB of RAM to run properly.
Android for x86
While researching the Chromium OS, I found that there is an open source community porting [Android to the x86] platform. Android is based on Linux, and would allow your laptop or netbook to run very much like a smartphone or tablet. Most of the apps available to Android should work here as well.
Unfortunately, the project has focused only on selected hardware:
ASUS Eee PCs/Laptops
Viewsonic Viewpad 10
Dell Inspiron Mini Duo
Lenovo ThinkPad x61 Tablet
I tried running the Thinkpad x61 version on both my Thinkpad T410 and grandma's Thinkpad R31, but with no success.
Peppermint OS Three
Next up was Peppermint OS, which claims to be a blend of Linux Mint, Lubuntu, and Xfce, but with a "twist" of aspiring to be a Cloud-oriented OS.
Rather than traditional apps to write documents or maintain a calendar, this OS offers a "Single-Site Browser" (SSB) experience, where you can configure "apps" by pointing to their respective URL. For documents, launch GWoffice, the client for Google Docs. For calendar, launch Google Calendar.
Most Linux distros have both a number and a project name associated with them. For example, Ubuntu 10.04 LTS is known as "Lucid Lynx". The Peppermint OS team avoided this by just calling their latest version "Three" which serves as both its number and its name.
The browser is Chromium, similar to Google Chrome OS above, and uses the "DuckDuckGo" search engine. This is how the Peppermint OS folks make their money to defray the costs of this effort.
Peppermint OS claims to run in systems as little as 192MB or RAM, and only 4GB of disk space. The LiveCD ran well on both my Thinkpad T410, as well as grandma's Thinkpad R31. More importantly, when I installed on the hard drive, it ran well.
The music app "Guayadeque" that came pre-installed was awful. It couldn't play MP3 music out-of-the-box. I had to install the Codec plugins from various "ubuntu-restricted-extras" libraries. I also installed the music app "Rhythmbox", and that worked great. Time from power-on to first-note was less than 2 minutes! However, the problems with the Guayadeque gave me the impression this OS might not be ready for primetime.
I contacted grandma to ask if she has Wi-Fi in her home, and sure enough, she doesn't. Her PC upstairs is direct attached to the cable modem. So, while the Cloud suggestion was worthy of investigation, I will continue to pursue other options that do not require being connected. I certainly do not want to spend any time and effort getting Wi-Fi installed there.
Happy Winter Solstice everyone! The Mayan calendar flipped over yesterday, and everything continued as normal.
The next date to watch out for is ... drumroll please ... April 8, 2014. This is the date Microsoft has decided to [drop support for Windows XP].
While many large corporations are actively planning to get off Windows XP, there are still many homes and individuals that are running on this platform.
When [Windows XP] was introduced in 2001, it could support systems with as little as 64MB of RAM. Nowadays, the latest versions of Windows require a minimum of 1GB of RAM for 32-bit systems, with 2GB or 3GB recommended.
That leaves Windows XP users on older hardware few choices:
Continue to run Windows XP, but without support (and hope for the best)
Upgrade their hardware with more RAM (and possibly more disk space) needed to run a newer level of Windows
Install a different operating system like Linux
Put the hardware in the recycle bin, and buy a new computer
Here is a personal example. A long time ago, I gave my sister a Thinkpad R31 laptop so that she could work from home. When she got a newer one, she passed this down to her daughter for doing homework. When my niece got a newer one, she passed this old laptop to her grandma.
Grandma is fairly happy with her modern PC running Windows XP. She plays all kinds of games, scans photographs, sends emails, listens to music on iTunes, and even uses Skype to talk to relatives. Her problem is that this PC is located upstairs in her bedroom, and she wanted something portable that she could use to play music downstairs when she is playing cards with her friends.
"Why not use the laptop you have?" I asked. Her response: "It runs very slow. Perhaps it has a virus. Can you fix that?" I was up for the challenge, so I agreed.
(The Challenge: Update the Thinkpad R31 so that grandma can simply turn it on, launch iTunes or similar application, and just press a "play" button to listen to her music. It will be plugged in to an electrical outlet wherever she takes it, and she already has her collection of MP3 music files. My hope is to have something that is (a) simple to use, (b) starts up quickly, and (c) will not require a lot of on-going maintenance issues.)
Here are the relevant specifications of the Thinkpad R31 laptop:
The system was pre-installed with Windows XP, but was terribly down-level. I updated to Windows XP SP3 level, downloaded the latest anti-virus signatures, and installed iTunes. A full scan found no viruses. All this software takes up 14GB, leaving less than 6GB for MP3 music files.
The time it took from hitting the "Power-on" button to hearing the first note of music was over 14 minutes! Unacceptable!
If you can suggest what my next steps should be, please comment below or send me an email!
Tomorrow, according to the [Mayan calendar], the end of the 5,125 year cycle rolls over, so it only makes sense to party like it's 1999!
Of course, if you were in the IT industry 13 years ago, you may remember similar hoopla around [Year 2000] when the Gregorian calendar rolled over from "99" to "00". Some of us were asked to work right up to the last day of 1999, and be on-call the first week of 2000, just in case! Tomorrow may prove to be more or less a repeat of that.
Fortunately, there were plenty of other reasons to celebrate these past few weeks.
Birthdays in December Party
The IBM Tucson employees and contractors of building 9070 got together for a combination party, celebrating both the end of 2012 and three birthdays in December: my former manager Bill, my colleague Kris, and myself. Here is our birthday cake! Afterwards, we all watched the Vacation movie.
(Note: This was sponsored by my third-line manager, David Gelardi, who one way or another, is responsible for all the IBMers in this building. Thank you David! )
This will be the last year for us to do this, as we are planning to move over to join the employees of building 9032 next year!
IBM Club Event
The IBM Club had its final event at [Golf N' Stuff] family fun park. Over 700 IBM employees and their family members came to eat breakfast burritos and play miniature golf and other games. It had rained earlier in the morning, so the go-kart track was wet, and the staff were trying to dry it with leaf blowers. The rest of the park was fully operational, and the weather cleared up nicely. Mo, Rafael and I played golf, but the turf was still wet in a few spots. There were also video games, bumper boats, and batting cages.
IBM volunteers dressed up as fictional characters for the kids to take pictures with.
I was proud to be a member of the seven-person IBM Club board for 2012. When I was nominated, I didn't think I stood a chance to be elected, as I was running against five or six other well-qualified candidates, but somehow it happened. I am glad to have been part of the IBM Club's 19-year history.
(Note: I didn't campaign for this position, but many IBMers in Tucson knew that I had previously owned and managed Tucson Fun & Adventures that organized 15-25 events every month for hundreds of single adults in the Tucson area. This might have helped my chances for election a bit!)
Next year, the IBM Club transitions to the more-efficient "Club Central" model, which is both board-less and cash-less. Instead of a seven-person board organizing events that are fully-funded or partially-subsidized by IBM, events will now be organized by IBM volunteers who post the details on Facebook. All participants simply pay for the events they attend directly to the venue or facility involved.
While the National Aeronautics and Space Administration [NASA] has put out videos and press releases these past 10 days to assure us [there will be a 2013], this shouldn't stop anyone from having a good time! If you did anything special to celebrate the end of the Mayan Calendar, please comment below!
My last blog post, [Full Disk Encryption for Your Laptop], explained my decisions relating to Full-Disk Encryption (FDE) for my laptop. Wrapping up this week's theme of Full-Disk Encryption, I thought I would explain the steps involved to make it happen.
Last April, I switched from running Windows and Linux dual-boot, to one with Linux running as the primary operating system, and Windows running as a Linux KVM guest. I have Full Disk Encryption (FDE) implemented using Linux Unified Key Setup (LUKS).
Here were the steps involved for encrypting my Thinkpad T410:
Step 0: Backup my System
Long-time readers know how I feel about taking backups. In my blog post [Separating Programs from Data], I emphasized this by calling it "Step 0". I backed up my system three ways:
Backed up all of my documents and home user directory with IBM Tivoli Storage Manager.
Backed up all of my files, including programs, bookmarks and operating settings, to an external disk drive (I used rsync for this). If you have a lot of bookmarks in your browser, there are ways to dump them to a file and load them back in a later step.
Backed up the entire hard drive using [Clonezilla].
Clonezilla allows me to do a "Bare Machine Recovery" of my laptop back to its original dual-boot state in less than an hour, in case I need to start all over again.
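The external-drive copy from the list above can be a single rsync invocation. Here is a minimal sketch; the mount point and the .cache exclusion are my own placeholders, not necessarily what I actually used:

```shell
# backup_home SRC DEST: mirror a home directory onto an external
# drive with rsync. -a preserves permissions, times and symlinks;
# --delete makes DEST an exact mirror of SRC (files removed at the
# source are removed at the destination too).
backup_home() {
    rsync -a --delete --exclude='.cache/' "$1"/ "$2"/
}

# Example (mount point is a placeholder):
# backup_home /home/tony /media/external-disk/laptop-backup
```

Because -a preserves metadata, restoring later in Step 6 is just the same command with the arguments reversed.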
Step 1: Re-Partition the Drive
"Full Disk Encryption" is a slight misnomer. For external drives, like the Maxtor BlackArmor from Seagate (Thank you Allen!), there is a small unencrypted portion that contains the encryption/decryption software to access the rest of the drive. Internal boot drives for laptops work the same way. I created two partitions:
A small unencrypted partition (2GB) to hold the Master Boot Record [MBR], GRand Unified Bootloader [GRUB], and the /boot directory. Even though there is no sensitive information on this partition, it is still protected the "old way" with the hard-drive password in the BIOS.
The rest of the drive (318GB) will be one big encrypted Logical Volume Manager [LVM] container, often referred to as a "Physical Volume" in LVM terminology.
Having one big encrypted partition means I only have to enter my ridiculously-long encryption password once during boot-up.
Step 2: Create Logical Volumes in the LVM container
I created three logical volumes on the encrypted physical container: swap, the slash (/) directory, and home (/home). Some might question the logic behind putting swap space on an encrypted container: in theory, swap could contain sensitive information after a system [hibernation]. I separated /home from slash (/) so that in the event I completely fill up my home directory, I can still boot up my system.
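Putting Steps 1 and 2 together, the resulting disk layout looks roughly like this (the device names and tree are my own sketch of the scheme described, not copied from the actual system):

```
/dev/sda1   2GB    unencrypted    MBR, GRUB, /boot  (BIOS hard-drive password only)
/dev/sda2   318GB  LUKS-encrypted LVM physical volume
              |- swap    (encrypted, so hibernation images are protected)
              |- /       (operating system and programs)
              |- /home   (my data, kept separate so a full /home cannot block boot-up)
```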
Step 3: Install Linux
Ideally, I would have lifted my Linux partition "as is" for the primary OS, and a Physical-to-Virtual [P2V] conversion of my Windows image for the guest VM. Ha! To get the encryption, it was a lot simpler to just install Linux from scratch, so I did that.
Step 4: Install Windows guest KVM image
The folks in our "Open Client for Linux" team made this step super-easy. Select Windows XP or Windows 7, and press the "Install" button. This is a fresh install of the Windows operating system onto a 30GB "raw" image file.
(Note: Since my Thinkpad T410 is Intel-based, I had to turn on the 'Intel (R) Virtualization Technology' option in the BIOS!)
There are only a few programs that I need to run on Windows, so I installed them here in this step.
Step 5: Set up File Sharing between Linux and Windows
In my dual-boot setup, I had a separate "D:" drive that I could access from either Windows or Linux, so that I only had to store each file once. For this new configuration, all of my files live in my home directory on Linux, and are shared to the Windows guest via the CIFS protocol using [samba].
In theory, I can share any of my Linux directories using this approach, but I decided to only share my home directory. This way, any Windows viruses will not be able to touch my Linux operating system kernels, programs or settings. This makes for a more secure platform.
Step 6: Transfer all of my files back
Here I used the external drive from "Step 0" to bring my data back to my home directory. This was a good time to re-organize my directory folders and do some [Spring cleaning].
Step 7: Re-establish my backup routine
Previously in my dual-boot configuration, I was using the TSM backup/archive client on the Windows partition to backup my C: and D: drives. Occasionally I would tar a few of my Linux directories and store the tarball on D: so that it got included in the backup process. With my new Linux-based system, I switched over to the Linux version of the TSM client. I had to re-work the include/exclude list, as the files are different on Linux than on Windows.
One of my problems with the dual-boot configuration was that I had to manually boot up in Windows to do the TSM backup, which was disruptive if I was using Linux. With this new scheme, I am always running Linux, and so can run the TSM client any time, 24x7. I made this even better by automatically scheduling the backup every Monday and Thursday at lunch time.
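The lunch-time schedule can be as simple as a cron entry (the TSM scheduler daemon, dsmcad, is the more typical route; cron is shown here as a simple illustration, and the log path is an example):

```shell
# crontab entry: TSM incremental backup Mondays and Thursdays at noon
0 12 * * 1,4  /usr/bin/dsmc incremental >> /var/log/dsmc-sched.log 2>&1
```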
There is no Linux support for my Maxtor BlackArmor external USB drive, but it is simple enough to LUKS-encrypt any regular external USB drive, and rsync files over. In fact, I have a fully running (and encrypted) version of my Linux system that I can boot directly from a 32GB USB memory stick. It has everything I need except Windows (the "raw" image file didn't fit).
I can still use Clonezilla to make a "Bare Machine Recovery" version to restore from. However, encrypted data looks random and does not compress, so with the LVM container encrypted, Clonezilla's compression capability is rendered worthless; the backup takes a lot longer and consumes over 300GB of space on my external disk drive.
Backing up my Windows guest VM is just a matter of copying the "raw" image file to another file for safe keeping. I do this monthly, and keep two previous generations in case I get hit with viruses or "Patch Tuesday" destroys my working Windows image. Each is 30GB in size, so it was a trade-off between the number of versions and the amount of space on my hard drive. TSM backup puts these onto a system far away, for added protection.
Step 8: Protect your Encryption setup
In addition to backing up your data, there are a few extra things to do for added protection:
Add a second passphrase. The first one is the ridiculously-long one you memorize faithfully to boot the system every morning. The second one is a ridiculously-longer one that you give to your boss or admin assistant in case you get hit by a bus. In the event that your boss or admin assistant leaves the company, you can easily disable this second passphrase without affecting your original.
Backup the crypt-header. This is the small section at the front of the drive that holds the key slots your passphrases unlock; if it gets corrupted, you would not be able to access the rest of your data. Create a backup image file and store it on an encrypted USB memory stick or external drive.
If you are one of the lucky 70,000 IBM employees switching from Windows to Linux this year, Welcome!
Earlier this year, IBM mandated that every employee provided with a laptop had to implement Full-Disk Encryption for their primary hard drive, and any other drive, internal or external, that contained sensitive information. An exception was granted to anyone who NEVER took their laptop out of the IBM building. At IBM Tucson, we have five buildings, so if you are in the habit of taking your laptop from one building to another, then encryption is required!
The need to secure the information on your laptop has existed ever since laptops were given to employees. In my blog post [Biggest Mistakes of 2006], I wrote the following:
"Laptops made the news this year in a variety of ways. #1 was exploding batteries, and #6 were the stolen laptops that exposed private personal information. Someone I know was listed in one of these stolen databases, so this last one hits close to home. Security is becoming a bigger issue now, and IBM was the first to deliver device-based encryption with the TS1120 enterprise tape drive."
Not surprisingly, IBM laptops are tracked and monitored. In my blog post [Using ILM to Save Trees], I wrote the following:
"Some assets might be declared a 'necessary evil' like laptops, but are tracked to the n'th degree to ensure they are not lost, stolen or taken out of the building. Other assets are declared "strategically important" but are readily discarded, or at least allowed to [walk out the door each evening]."
Unfortunately, dual-boot environments won't cut it for Full-Disk Encryption. For Windows users, IBM has chosen Pretty Good Privacy [PGP]. For Linux users, IBM has chosen Linux Unified Key Setup [LUKS]. PGP doesn't work with Linux, and LUKS doesn't work with Windows.
For those of us who may need access to both operating systems, we have to choose: select one as the primary OS, and run the other as a guest virtual machine. I opted for Red Hat Enterprise Linux 6 as my primary, with LUKS encryption, and Linux KVM to run Windows as the guest.
I am not alone. While I chose the Linux method voluntarily, IBM has decided that 70,000 employees must also set up their systems this way, switching them from Windows to Linux by year end, but allowing them to run Windows as a KVM guest image if needed.
Let's take a look at the pros and cons:
LUKS allows for up to 8 passphrases, so you can give one to your boss, one to your admin assistant, and in the event they leave the company, you can disable their passphrase without impacting anyone else or having to memorize a new one. PGP on Windows supports only a single passphrase.
Linux is a rock-solid operating system. I found that Windows as a KVM guest runs better than running it natively in a dual-boot configuration.
Linux is more secure against viruses. Most viruses run only on Windows operating systems. The Windows guest is well isolated from the Linux operating system files. Recovering from an infected or corrupted Windows guest is merely a matter of re-cloning a new "raw" image file.
Linux has a vibrant community of support. I am very impressed that anytime I need help, I can find answers or assistance quickly from other Linux users. Linux is also supported by our help desk, although in my experience, not as well as the community does.
Employees that work with multiple clients can have a separate Windows guest for each one, preventing any cross-contamination between systems.
Linux is different from Windows, and some learning curve may be required. Not everyone is happy with this change.
(I often joke that the only people who are comfortable with change are babies with soiled diapers and prisoners on death row!)
Implementation is a full re-install of Linux, followed by a fresh install of Windows.
Not all software required for our jobs at IBM runs on Linux, so a Windows guest VM is a necessity. If you thought Windows ran slowly on a fully-encrypted disk, imagine how much slower it runs as a VM guest with limited memory resources.
In theory, I could have tried the Windows/PGP method for a few weeks, then gone through the entire process to switch over to Linux/LUKS, and then draw my comparisons that way. Instead, I just chose the Linux/LUKS method, and am happy with my decision.
For the past three decades, IBM has offered security solutions to protect against unauthorized access. Let's take a look at three different approaches available today for the encryption of data.
Approach 1: Server-based
Server-based encryption has been around for a while. This can be implemented in the operating system itself, such as z/OS on the System z mainframe platform, or with an application, such as IBM Tivoli Storage Manager for backup and archive.
While this has the advantage that you can selectively encrypt individual files, data sets, or columns in databases, it has several drawbacks. First, you consume server resources to perform the encryption. Second, as I mention in the video above, if you only encrypt selected data, the data you forget to, or choose not to, encrypt may result in data exposure. Third, you have to manage your encryption keys on a server-by-server basis. Fourth, you need encryption capability in the operating system or application. And fifth, encrypting the data first will undermine any storage or network compression capability downstream.
Approach 2: Network-based
Network-based solutions perform the encryption between the server and the storage device. Last year, when I was in Auckland, New Zealand, I covered the IBM SAN32B-E4 switch in my presentation [Understanding IBM's Storage Encryption Options]. This switch receives data from the server, encrypts it, and sends it on down to the storage device.
This has several advantages over the server-based approach. First, we offload the encryption work from the server to the switch. Second, you can encrypt all the files on the volume. You can select which volumes get encrypted, so there is still the risk that you encrypt only some volumes, and not others, and accidentally expose your data. Third, the SAN32B-E4 can centralize the encryption key management to the IBM Tivoli Key Lifecycle Manager (TKLM). This is also operating system and application agnostic. However, network-based encryption has the same problem of undermining any storage device compression capability, and often has a limit on the amount of data bandwidth it can process. The SAN32B-E4 can handle 48 GB/sec, with a turbo-mode option to double this to 96 GB/sec.
Approach 3: Device-based
Device-based solutions perform the encryption at the storage device itself. Back in 2006, IBM was the first to introduce this method on its [TS1120 tape drive]. Later, it was offered on Linear Tape Open (LTO-4) drives. IBM was also first to introduce Full Disk Encryption (FDE) on its IBM System Storage DS8000. See my blog post [1Q09 Disk Announcements] for details.
As with the network-based approach, the device-based method offloads server resources, allows you to encrypt all the files on each volume, can centrally manage all of your keys with TKLM, and is agnostic to the operating system and application used. The device can compress the data first, then encrypt, resulting in fewer tape cartridges or less disk capacity consumed. IBM's device-based approach scales nicely: IBM places an encryption chip in each tape drive or disk drive, so no matter how many drives you have, you will have all the encryption horsepower you need to scale up.
Not all device-based solutions use an encryption chip per drive. Some of our competitors encrypt in the controller instead, which operates much like the network-based approach. As more and more disk drives are added to your storage system, the controller may become overwhelmed performing the encryption.
The need for security grows every year. Enterprise systems are security-ready to protect your most mission-critical application data.
Mark your calendars! The dates are now official for IBM storage-related events in 2013. I know many of you plan your travel budgets early in the year, so I hope this will help you plan accordingly.
[IBM Pulse 2013] will be held March 3-6, 2013, at the MGM Grand in Las Vegas, Nevada. Back in 2008, I helped launch the inaugural event, combining previous events that focused on Tivoli and Maximo software solutions.
On a smarter planet, organizations must implement bold strategies to optimize business services, processes, and relationships. Cloud and mobility offer unlimited potential to create smarter infrastructures that fundamentally change the way we do business.
However, to deliver on this potential, you must manage your infrastructure through rapid change while changing the economics of IT: unleashing innovation, reinventing relationships and uncovering new markets.
Attend Pulse 2013 for the opportunity to share your expertise with thousands of your business and IT peers as you explore these strategies and more. With three days of top-notch keynotes, over 300 breakout sessions, labs, certification and our best Solution Expo ever, Pulse will provide the tools, insights and networking you need to turn opportunities into outcomes.
[IBM Edge 2013] will be held June 10-14, 2013, at the Mandalay Bay in Las Vegas, Nevada. Last year, I helped launch the inaugural event, combining previous storage events for storage admins, executives, and IBM Business Partners. Next year, Edge2013 will offer:
Over 400 technical sessions and hands on labs geared for novices to experts, with the ability to test drive the latest technology.
Exciting general sessions focused on Smarter Computing innovations and real-world success stories.
World class certification available on-site to validate your skills and demonstrate your proficiency in the latest IBM technology and solutions.
A comprehensive and expanded Solution Center giving you access to the latest storage, System x and PureSystems solutions from IBM and our sponsors.
The list of speakers has not yet been finalized, but I hope to participate in one or both of these events!
I hope all of my American readers had a wonderful Thanksgiving holiday! The day after Thanksgiving is "Black Friday", the unofficial starting date for shopping for upcoming holiday presents and decorations. The Monday after that is now often referred to as "Cyber Monday", when many people purchase items on-line.
I thought this would be good time to promote my book series, Inside System Storage, Volumes I through V. These are available direct from my publisher, [Lulu], or from other on-line retailers.
The old adage "Never judge a book by its cover" often leads technical authors to select bland cover designs. I designed the cover art for the series to have a consistent look, but be unique enough to know each book is different. They all have a beige background with black text, three or four graphics representing the various storage themes du jour, and a color stripe spread diagonally across the spine.
Several readers have asked if there was any rhyme or reason for the color of each spine. One guessed it was based on the [electronic color code] used on resistors to mark their value. When I was getting my college degree in Electrical Engineering, the mnemonic "Better Be Right Or Your Great Big Venture Goes West" helped us remember the sequence: Black, Brown, Red, Orange, Yellow, Green, Blue, Violet, Grey and White.
I can assure everyone I was not that clever. Here, instead, is the story behind each color chosen:
Volume I: Green
I received a flyer from Barnes and Noble advertising various books on sale. One caught my eye, so I went to buy it, but forgot to bring the flyer with me. A young woman offered to help me find it, but I could not remember the title or the editor; I only knew it had a green cover and was a collection of the world's shortest stories, all exactly 55 words in length, all winners in some high school contest. She found the flyer, looked up the book, and directed me to the shelf. After several minutes of her scanning the shelf by author, I reached for it, saying, "Here it is, the green one. This shade of green will fit perfectly in my collection of green books!" As I stood in line, the young woman told her boss, "That guy buys green books!" The rest of the folks in line overheard her, and all started laughing at her gullibility.
Volume II: Orange
In late 2007, I was under NDA to review the acquisition of a company called XIV. I was disclosed on the innovative design of the storage system, so that I could blog about it when the announcement was formal. This box would have a distinctive orange stripe across the disks. The announcement launch was a big success. Since then, every time the storage sales team needed a boost in sales for the [IBM XIV Storage System], I would write another blog about the clever features and capabilities.
Volume III: Purple
In 1996, I joined a social club called "Mile High Adventures and Entertainment", headquartered in Denver, Colorado, with locations in Phoenix, Tucson, San Diego, Los Angeles and Portland, Oregon. It was a group for singles to meet each other through social activities and events. A year later, it collapsed under the weight of heavy radio advertising debt. The local staff bought out the membership list, and launched a new club, under the name Tucson Fun and Adventures. It was a big part of my social life.
However, as the owners dropped out, one to start a family, another to take care of her father after her mother passed away, I started 2009 as the majority owner. The economic recession took its toll. Members were not spending as much of their disposable income on fun and entertainment. We restructured the company, revamped the website, and adopted purple as our official color. Our event coordinators all wore purple shirts, and carried purple clipboards. Despite this major transformation, I just did not have time to run this company while still working full-time at IBM, so I sold it at year end.
Volume IV: Blue
As I mentioned in my blog post [IBM Introduces a New Era of Computing], IBM launched [PureSystems], a new family of expert-integrated systems. Since Volume IV was going to publish shortly after this announcement, I decided on the color blue to match the new door covers on the racks they came in. In less than a year, IBM has already sold over 1,000 of these systems in over 40 different countries.
Volume V: Grey
Chosing a color to represent the IBM Watson computer proved quite a challenge. I finally decided on grey, to represent "grey matter", a phrase often used to refer to the human brain. I picked a shade of grey that complements the three graphics that represent last year's strategic storage marketing themes. My blog post [How to Build Your Own Watson Jr. in your Basement] continues to be one of my highest read posts.
If you were having trouble getting ideas for gifts this holiday season, hopefully, this post gave you five new ideas for your friends, family, coworkers and clients! They are all available in hardcover, paperback, and eBook (PDF) for viewing on desktops, laptops, tablets or smartphones.
Well, it's Tuesday again, and you know what that means! IBM announcements!
Today, I am in New York visiting clients. The weather is a lot nicer than I expected. Here is a picture of the Hudson River through some trees with leaves turning color. Something we don't see in Tucson! Our cactus and pine trees stay green year-round!
The announcements today center around the IBM PureSystems family of expert integrated systems. The PureFlex is based on Flex System components. The Flex System chassis is 10U high and holds 14 bays, arranged in 7 rows by 2 columns. Compute and Storage nodes fit in the front, and switches, fans and power supplies in the back. Here is a quick recap:
IBM Flex System Compute Nodes
The x220 Compute Node is a single-bay, low-power, 2-socket x86 server. The x440 Compute Node is a powerful double-bay (1 row, 2 columns) x86 server. The p260 Compute Node is a single-bay server based on the latest POWER7+ processor.
IBM Flex System Expansion Nodes
Do you remember those old movies where a motorcycle would have a sidecar that could hold another passenger, or extra cargo? IBM introduces "Expansion Nodes" for the x200 series single-bay Compute nodes. The idea here is that you have one bay for the Compute node, and then on the side, in the adjacent bay, you have an Expansion node. There are two choices:
Storage Expansion Node allows you to have eight additional drives
PCIe Expansion Node allows you to have four PCIe cards, which could include the SSD-based PCIe cards from IBM's recent acquisition, Texas Memory Systems.
There are times when one or two internal drives are just not enough storage for a single server, and these expansion nodes could be the perfect solution for some use cases.
IBM Flex System V7000 Storage Node
I saved the best for last! The Flex System V7000 Storage Node is basically the IBM Storwize V7000 repackaged to fit into the Flex System chassis. This means that in the front of the chassis, the Flex System V7000 takes up four bays (2 rows by 2 columns). In the back of the chassis are the power supplies, fans and switches.
The new Flex System V7000 supports everything the Storwize V7000 does except the upgrade to "Unified" through file modules. For those who want to have Storwize V7000 Unified in their PureFlex systems, IBM will continue to offer the outside-the-chassis original Storwize V7000 that can have two file modules added for NFS, CIFS, HTTPS, FTP and SCP protocol support.
IBM Flex System Converged Network Switch
The Converged Network Switch provides Fibre Channel over Ethernet (FCoE) directly from the chassis. This eliminates the need for a separate "Top-of-Rack" switch, and allows the new Flex System V7000 Storage Node to externally virtualize FCoE-based disk arrays.
Patterns of Expertise for Infrastructure
The original patterns of expertise focused on the PureApplication Systems. Now IBM has added some for the Infrastructure on PureFlex systems.
IBM has sold over 1,000 Flex System and PureFlex systems, across 40 different countries around the world, since their introduction a few months ago in April! These latest enhancements will help solidify IBM's industry leadership.
Well, it's Tuesday again, and you know what that means! IBM Announcements!
Today also happens to be [Election Day] in the United States, and some have questioned IBM's logic of making major storage announcements on Election Day. During the campaigns, a major theme was to help Small and Medium size businesses, because these are the engines of economic growth and improved employment.
Hopefully, you all saw today's Launch Webcast on these announcements. In case you missed it, perhaps because you were waiting in line at the polling station to cast your vote, or caught without electricity or Internet access from [Superstorm Sandy], it is now available [On-Demand].
The 2U control enclosure can have up to four additional 2U expansion enclosures, for a maximum of 120 drives, or 180TB of raw disk capacity. Like the Storwize V7000, the Storwize V3700 supports a [large number of servers and operating systems.]
Many of the features you already know from the Storwize V7000 are carried forward:
| Feature | Storwize V7000 | Storwize V3700 |
|---|---|---|
| Host interfaces | 8GbFC, 10GbE iSCSI/FCoE, Statement of Direction for 6Gb SAS | 1GbE iSCSI + 8GbFC |
| Cache | 8GB per canister | 4GB per canister, upgradeable to 8GB |
| Scalability | Up to 4 control enclosures in a clustered system, each with up to 9 expansion enclosures | Up to 4 expansion enclosures |
| Maximum number of drives/TB | | Up to 120 drives/180TB |
| RAID levels supported | | |
| Management interfaces | GUI, CLI, SMI-S API | GUI, CLI, SMI-S API |
| Storage virtualization | Internal (included), external (optional) | Internal only (included) |
| Non-disruptive data migration | Statement of direction | One-directional (migrate to Storwize V3700, included) |
| FlashCopy | Up to 256 targets (included) | Up to 64 targets (included), Statement of Direction for optional 2,040 targets |
| Remote mirroring | Metro Mirror and Global Mirror (optional) | Statement of Direction (optional) |
The IBM Storwize V3700 is offered at attractive leasing options through IBM Global Financing.
IBM LTO-6 drives and midrange tape libraries
Last month, IBM's [Tape and Storage Hypervisor Announcements] included LTO-6 for the enterprise-class TS3500 tape library. Today, the LTO-6 support is complete with support for midrange tape drives and libraries.
There are two tape drive models. The TS2260 is based on the half-height drive, intended for occasional 9-to-5 usage. The TS2360 is based on the full-height drive, intended for 24x7 access. These drives can read LTO-4 and LTO-5 tape cartridge media, and can write LTO-5 cartridge media. The new LTO-6 tape cartridge media is expected to be available next month.
In addition to the IBM TS3500 Enterprise Tape Library, LTO-6 is now supported on all of the midrange tape libraries: TS2900, TS3100, TS3200 and TS3310.
IBM Linear Tape File System Library Edition V2.1.2
There are two levels of [Linear Tape File System], or LTFS for short. The first is the Single Drive Edition (LTFS-SDE), which allows you to attach an LTO-5, LTO-6 or TS1140 tape drive to a single workstation and mount tape cartridges as easily as you mount USB memory sticks. This presents a full file system view that allows you to read, edit, create, and even drag-and-drop files to other file systems. The LTFS-SDE driver is available for Windows, Linux, and Mac OS.
The second is the Library Edition (LTFS-LE), which allows you to mount the entire tape library as a file system. Each tape cartridge in the library is presented as a subdirectory folder that you can access like any file system on disk. This was only available for Linux systems, which could then export the files through NFS, FTP or HTTP protocols to other clients. Now, with release v2.1.2, LTFS-LE supports Windows servers, so that you can share the files with other clients through CIFS as well.
Wow! Since my last blog post on this, we have over 600 registrants!
Smarter Storage for Midsize Businesses
Businesses of all sizes are getting buried in the avalanche of data. Data is coming in at faster rates and in greater volumes. The value of data is increasing. Old processes and technologies aren't working. Midsize businesses have the same issues managing the rapid growth of data as large enterprises, but they don't have the same size budget or staff. They need advanced capabilities at an affordable price that are easy to implement.
Speakers for this webcast include Brian Truskowski, General Manager, IBM System Storage and Networking; Ed Walsh, Vice President of Market and Strategy, IBM System Storage; and Tommy Rickard, IBM Director, UK Storage Development.
Date: Tuesday, November 6, 2012 Time: 8:00 AM PST / 9:00AM Arizona / 11:00 AM EST Duration: 60 Minutes
[Register now!] Learn how new IBM Smarter Storage solutions can help midsize businesses tame the explosion of information and their IT budgets.
Joining the IBM executive speakers are the following:
Clay Hales, President & CEO InfoSystems Inc.
Lief Morin, President, Key Information Systems, Inc.
Vincent Louvel, Storage Mgr, Agence France-Presse (AKA AFP)
Laurent Cervera, IT Manager, Agence France-Presse (AKA AFP)
I worked with the IBM Redbooks residency team to review this paper and ensure it had the right focus. I did not want a Redpaper that just listed all of the IBM technologies available, but rather one that spends some effort on the business benefits, and realistic use cases with actual client examples, to help illustrate not just what a Smart Storage Cloud is, but why your business may benefit from having one, and how others have already benefited from their deployments.
To help promote this new Redpaper, my colleagues Larry Coyne and Karen Orlando filmed me talking about the book. This has been posted as a [4-minute YouTube video]. This is the first time we have promoted a Redpaper using a video, so let me know what you think in the comment section below.
We have some exciting webcasts in the upcoming weeks!
Smarter Enterprises Need Smarter Storage
In this [InformationWeek webcast], my IBM colleague Allen Marin will present a brief overview of IBM Smarter Storage for the enterprise with a focus on new high-end disk and Virtual Tape solutions.
Allen will take you through the recent enhancements [announced earlier this month], highlighting how the new capabilities can address the requirements of your mission-critical applications, as well as your evolving business analytics, and cloud initiatives.
Date: Wednesday, October 24, 2012 Time: 10:00 AM PDT / 10:00AM Arizona / 1:00 PM EDT Duration: 60 Minutes
[Register now!] All registrants will get the independent Clipper Group Report - "When Infrastructure Really Matters - A Focus on High-End Storage" - free!
Smarter Storage for Midsize Businesses
Businesses of all sizes are getting buried in the avalanche of data. Data is coming in at faster rates and in greater volumes. The value of data is increasing. Old processes and technologies aren't working. Midsize businesses have the same issues managing the rapid growth of data as large enterprises, but they don't have the same size budget or staff. They need advanced capabilities at an affordable price that are easy to implement.
Speakers for this webcast include Brian Truskowski, General Manager, IBM System Storage and Networking; Ed Walsh, Vice President of Market and Strategy, IBM System Storage; and Tommy Rickard, IBM Director, UK Storage Development.
Date: Tuesday, November 6, 2012 Time: 8:00 AM PST / 9:00AM Arizona / 11:00 AM EST Duration: 60 Minutes
[Register now!] Learn how new IBM Smarter Storage solutions can help midsize businesses tame the explosion of information and their IT budgets.
I hope you can find time in your busy schedule to participate in one or both of these webcasts.
New IBM PureData Systems help clients harness data for critical insights
Well it's Tuesday, and you know what that means! IBM Announcements! Actually, it is Wednesday, but I started writing this post yesterday, and had to do some additional research to finish.
This week, IBM introduced the newest member of the PureSystems family of expert integrated systems - IBM PureData System. The new systems are designed to help clients effectively harness the massive volume, variety and velocity of information being created every day. The result? They deliver critical insights to improve business results.
The new systems are available in three different models, each optimized specifically for different workloads.
PureData System for Transactions. Optimized for transactional processing workloads such as e-commerce and built to handle large volumes of transactions with flexibility, availability, scalability and integrity. Basically, this is IBM DB2 pureScale and InfoSphere Optim features running on Linux-x86 nodes. The system comes in small, medium and large tee-shirt sizes, and can support over 100 databases. If you have DB2 applications, these can work with PureData unchanged. If your applications are based on Oracle databases, these can work with minimal changes to use PureData systems.
PureData System for Analytics. Powered by Netezza technology, this data warehouse system features built-in database analytics to quickly explore and analyze large amounts of structured information. This is the beefed-up version of the Netezza TwinFin 1000. IBM DB2® Analytics Accelerator for z/OS® V3.1 (IDAA) supports both the new IBM PureData System for Analytics N1001 and existing IBM Netezza 1000 systems as accelerators.
PureData System for Operational Analytics. Capable of delivering actionable insights concurrently to more than 1,000 business operations, supporting real-time decision making for businesses. This is the follow-on product to the IBM Smart Analytics System 7700 based on POWER7 nodes. This uses IBM Storwize V7000 disk systems inside.
PureData System joins the PureSystems family, which also includes the PureFlex System and PureApplication System, [both announced last April]. PureSystems provide built-in expertise, integration by design and simplification through the system lifecycle, helping businesses reduce complexity, accelerate value and improve IT economics.
In a related announcement, Andy Monshaw was recently named IBM General Manager, PureFlex. Some of you readers may remember that Andy Monshaw was previously the General Manager for IBM System Storage several years ago, and was my second line manager, and I am glad to welcome him back!
A lot was announced this week, so I decided to break it up into several separate posts. This is part 3 in my 3-part series, focusing on our Tivoli Storage products.
To read the rest of the series, see:
The latest release of FlashCopy Manager now supports NetApp and IBM N series storage devices. This provides application-aware snapshots, coordinated with applications like SAP, DB2 and Oracle.
FlashCopy Manager now integrates with Metro and Global Mirror capabilities, so that application-consistent copies are available at remote sites for disaster recovery, or to off-load the FlashCopy destination copy from disk to Tivoli Storage Manager storage pools.
Tivoli Storage Manager v6.4
IBM Tivoli Storage Manager is part of IBM's Unified Recovery Management. Here are some highlights:
Enhanced Reporting. Cognos reporting to monitor backup and archive environments.
TSM for ERP. I remember when these were called "Tivoli Data Protection" modules. We still refer to them as "TDPs". The TSM for ERP provides backup capability for SAP environments, and this latest release adds support for in-memory SAP HANA databases.
TSM for Virtualization Environments. IBM TSM is famous for its patented "Progressive Incremental Backup", which is far more efficient than full+incrementals or full+differentials. IBM now extends this method to VM images. With people consolidating more and more VMs onto fewer host servers, TSM-VE now offers multiple backup streams in parallel. TSM-VE can now take application-aware backups of Microsoft Exchange, SQL Server, and Active Directory running in VMs. TSM-VE will also support vApp and VM templates. If it takes you [a day and a half to build a VMware template], you would want to make sure all that work was backed up, right?
Enhanced Security. Complex password support and improved user authentication and management through integration with Lightweight Directory Access Protocol (LDAP).
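The progressive-incremental point above is easy to see with a toy model (all the numbers below are made up purely for illustration):

```python
# A toy model (made-up numbers) contrasting weekly full + daily
# incrementals with TSM's incremental-forever approach, for a server
# holding 1000 GB where 2 percent of the data changes each day.
full_gb = 1000.0
daily_change_gb = 0.02 * full_gb

# Traditional scheme: one full backup plus six daily incrementals per week
full_plus_incrementals = full_gb + 6 * daily_change_gb

# Progressive incremental: after the one initial full, only changed data
# ever moves again
progressive = 7 * daily_change_gb

print(full_plus_incrementals, progressive)  # 1120.0 vs 140.0 GB per week
```

The gap only widens as the server grows, which is why the approach matters most for large VM farms.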
A lot was announced yesterday, so I decided to break it up into several separate posts. This is part 2 in my 3-part series, focusing on: Storwize V7000 Unified, LTO-6 tape, and the SmartCloud Virtual Storage Center.
The Storwize V7000 Unified is a product that consists of a 2U-high Storwize V7000 control enclosure that provides block-based access, combined with two 2U-high File Modules that provide file-based NAS protocols: CIFS, NFS, HTTPS, SCP and FTP. The problem was that when it was introduced, it was based on Storwize V7000 v6.3, so when the Storwize V7000 v6.4 features were announced last June, they did not apply to the Storwize V7000 Unified.
That is all fixed now, so the Storwize V7000 Unified now supports the full v6.4 features, including Real-time Compression for both file and block-based access to primary data, and Fibre Channel over Ethernet (FCoE) for block access.
The two File Modules are no longer limited to a single Storwize V7000 control enclosure; you can now connect up to four control enclosures clustered together. Combined with up to nine expansion enclosures per control enclosure for additional disk, this raises the total maximum to 960 drives.
If you don't already have an Active Directory or LDAP server, the Storwize V7000 Unified now offers an embedded LDAP server, for smaller deployments that want to reduce the number of servers they need to purchase for a complete solution.
Like the [IBM XIV Gen3 storage system], both the Storwize V7000 and V7000 Unified now also support the OpenStack Nova-volume interface.
Lastly, if you have a Storwize V7000 v6.4, you can upgrade it to a Storwize V7000 Unified by simply adding the two File Modules. This can be done in the field.
IBM LTO-6 for tape libraries and drives
IBM introduces the sixth generation of Linear Tape Open (LTO-6) drives, which can be used as stand-alone IBM TS1060 drives, or in IBM tape libraries. As with previous models of LTO, the LTO-6 drive can read the two prior generations (LTO-4 and LTO-5) of tape media, and can write to the previous generation (LTO-5) of tape media. You can buy the LTO-6 drives now, and use the older media until LTO-6 tape cartridges are available (hopefully later this year!).
My friend, Brad Johns, from Brad Johns Consulting, has a great post on this [LTO-6 Announcement]. While you expect the new drives to be faster with a denser tape media format, the key advantage to the LTO-6 is that it improves the compression algorithm, from the previous 2:1 to the new 2.5:1 compression ratio:
Thus, with the improved compression, the LTO-6 is 40 percent faster, with double the tape cartridge density. This can reduce backup times by 30 percent, increase the amount of data that sits in your automated tape libraries, and reduce the courier costs sending tapes off-site.
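To see where the "40 percent faster" and "double the density" figures come from, here is a quick check. The native figures below are the commonly cited LTO specifications; treat them as approximations:

```python
# Quick check of the LTO-5 vs LTO-6 math, using commonly cited LTO specs.
lto5_capacity_tb = 1.5 * 2.0    # native TB x 2:1 compression = 3.0
lto6_capacity_tb = 2.5 * 2.5    # native TB x 2.5:1 compression = 6.25
lto5_speed_mb = 140 * 2.0       # native MB/s x compression = 280
lto6_speed_mb = 160 * 2.5       # native MB/s x compression = 400

print(lto6_capacity_tb / lto5_capacity_tb)  # ~2.08x cartridge capacity
print(lto6_speed_mb / lto5_speed_mb)        # ~1.43x throughput
```

Note that the compression ratio multiplies both capacity and throughput, which is why a modest bump in native speed turns into a 40 percent gain in effective speed.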
IBM SmartCloud Virtual Storage Center v5.1
Last year, IBM coined the phrase "Storage Hypervisor" to refer to the underlying technology in the IBM SAN Volume Controller (SVC) and Storwize V7000 disk systems.
At the IBM Edge conference last June, my colleague Mike Griese presented [SmartCloud Virtual Storage Center]. Back then, it was a pilot program (beta test), and this week, IBM announces that it will be formally available as a product.
The idea was simple: take the basic storage hypervisor, and add the necessary software to make it a complete solution.
If all of your disk is currently virtualized behind IBM SAN Volume Controller (SVC), or you want to put all of your data behind SVC, then SmartCloud Virtual Storage Center is for you. Basically, for one per-TB price, you get all of the following:
The software features of SAN Volume Controller v6.4, including FlashCopy, Metro Mirror and Global Mirror.
The full advanced features of IBM Tivoli Storage Productivity Center v5.1, including the Storage Analytics Engine that does "Right-Tiering", recommending which LUNs should be moved entirely from one disk system to another, based on policies and access patterns.
IBM Tivoli Storage FlashCopy Manager v3.2 which manages FlashCopy with full coordination with applications, including Microsoft Exchange, SQL Server, DB2, Oracle, SAP, and VMware. This ensures that the FlashCopy destination copies are clean, eliminating the need to run backout or redo logs to correct any incomplete units of work.
If this combination sounds familiar, it was based on IBM's previous attempt called [Rapid Application Storage] which combined the Storwize V7000 with Tivoli Storage Productivity Center Midrange Edition and FlashCopy Manager.
The key difference is that SmartCloud VSC does not include the SVC hardware itself; you buy that separately. If you want Real-time Compression, that is charged separately for the subset of TB of the volumes that you select for compression.
Well it's Wednesday, and you know what that means... IBM Announcements.
(Normally, announcements are on Tuesdays, but we moved this one over to Wednesday to line up with our big launch event in Pinehurst, NC.)
A lot was announced today, so I decided to break it up into several separate posts. I will start with our Enterprise Systems: DS8870, TS7700 Release 3, and XIV Gen3.
Enterprise systems are the servers, storage and software at the core of an enterprise IT infrastructure. Enterprise systems enable a private cloud infrastructure at enterprise scale, with flexible service delivery models that provide dynamic efficiency for resource and workload management. They make sure critical data is always available across the enterprise, making it accessible in new ways so that actionable insights can be derived from advanced and operational analytics. They also provide ultimate security, ensuring the integrity of critical data while mitigating risk and providing assured compliance.
IBM System Storage DS8870® disk system
This new storage system is the next generation in IBM's DS8000 series, based on IBM's POWER7 chipset. Each CEC can have 2, 4, 8 or 16 cores. Like the DS8800, you can have a mix of 2.5-inch and 3.5-inch disk drives of different speeds and capacities, up to 1,536 drives in a four-frame configuration. The maximum cache is now 1TB usable. The combination of faster chipset and more cache can triple performance for some workloads!
All DS8870s ship standard with all Full Disk Encryption (FDE-capable) drives. The problem in the past was that people would buy DS8000 with non-FDE drives, and then later want to activate encryption, and discovered that they have to swap out their drives with those with the encryption chip built in. Now, all drives on the DS8870 will have the encryption chip. This also allows Easy Tier sub-volume automated tiering to move encrypted data between all media types.
Flash optimization with DS8000 Easy Tier can improve performance up to 3 times with 3% of data on solid-state storage. Easy Tier is easy to deploy and runs automatically.
Support of the American National Standards Institute's (ANSI) T10 Data Integrity Field (DIF) standard. This is a feature that the mainframe has had for years, and is now being extended to distributed operating systems. The concept is simple. When sending data between server and storage, generate a checksum at the source, and then validate the checksum at the target. When you write a block of data, the server generates the checksum, and the DS8870 validates the checksum on arrival. When you read the data back, the DS8870 generates the checksum, and the server validates it on arrival. This ensures that data was not corrupted in between. There is a great write-up on IBM developerWorks: [End-to-end data protection using T10 standard data integrity field].
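As a sketch of the guard-tag idea (not the DS8870's actual implementation; the real 8-byte DIF tuple also carries application and reference tags), here is an unoptimized bitwise CRC-16 using the T10 polynomial 0x8BB7:

```python
# A sketch of the end-to-end guard-tag idea behind T10 DIF. The guard tag
# is a CRC-16 computed with the T10 polynomial 0x8BB7; this is a slow
# bitwise version for illustration only.
def crc16_t10_dif(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

block = bytes(range(256)) * 2              # one 512-byte "sector"
guard = crc16_t10_dif(block)               # source computes guard on write
assert crc16_t10_dif(block) == guard       # target recomputes and validates
corrupt = b"\xff" + block[1:]              # flip the first byte in transit
assert crc16_t10_dif(corrupt) != guard     # corruption is detected
```

The same check runs in the opposite direction on reads, so silent corruption anywhere along the path is caught.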
Energy Efficient. The DS8870 consumes less energy than its predecessor, the DS8800. For example, a fully-configured four-frame DS8870 with 1,536 disk drives consumes only 23.2kW, compared to 26.3kW for the same number of drives in a DS8800. By comparison, the DS8700 with five frames and 1,024 drives consumed 29.2kW.
Support for new System z load balancing algorithm. System z Workload Manager now interacts with the DS8870 I/O Priority Manager to optimize designated Quality of Service (QoS) levels. We also have the fastest operational analytics solution, with DB2 List Prefetch cache optimization and DS8870 High Performance FICON (zHPF) integration. This solution increases DB2 query performance up to 11 times with disk, and up to 60 times with solid-state drives (SSD). File scans are up to 30 percent faster using DS8870 zHPF support for sequential access methods (QSAM, BPAM, and BSAM).
VMware vStorage APIs for Array Integration (VAAI) support. Why should the IBM DS8800 series support VMware when IBM already offers great VMware support with SAN Volume Controller (SVC), Storwize V7000 and XIV storage systems? Good question. This was hotly debated between development and marketing. Several DS8000 customers have already added SVC to provide full VMware VAAI support. As a consultant, I am neither development nor marketing, but felt it necessary to weigh in with my opinion on this. The DS8000 is a consolidation platform. According to one analyst survey, 22 percent of companies run on a single disk platform, so for DS8000 to be the one, it needs to support VMware and exploit these special APIs.
Six Nines Availability. Critical enterprise systems need to deliver continuous data availability, or very close to it. IBM solutions can help deliver up to six “nines” of availability, or 99.9999 percent, when combining DS8000 Metro Mirror and GDPS HyperSwap. That's roughly 30 seconds of downtime per year.
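The arithmetic behind the "nines" is easy to verify:

```python
# The arithmetic behind the "nines" of availability: downtime per year
# implied by an availability percentage.
def downtime_seconds_per_year(availability_pct: float) -> float:
    return 365.25 * 24 * 3600 * (1 - availability_pct / 100)

print(round(downtime_seconds_per_year(99.999), 0))   # five nines: 316.0 s
print(round(downtime_seconds_per_year(99.9999), 1))  # six nines: 31.6 s
```

Each additional "nine" cuts the allowed downtime by a factor of ten, which is why getting from five nines to six is so hard.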
The TS7700 Release 3 represents a refresh to our existing virtual tape libraries. These are mainframe-only, offered in two models: TS7720 is a disk-only device, and the TS7740 is a blended disk-and-tape solution.
Industry standard hardware encryption. This applies to user data stored on the TS7700 system cache (disk), and for data transferred between TS7700 systems. This is especially important for regulations, like Payment Card Industry Data Security Standard (PCI-DSS). In previous models, the data would not be encrypted until it was moved off disk and written to tape. Now, it is encrypted the minute it lands on the disk cache, and stays encrypted as it is replicated from one TS7700 to another in the grid.
Up to 4 Million logical volume capacity. This is twice the previous support.
More physical capacity for TS7720 systems. The maximum capacity for the disk-only model is raised from 440TB to 620TB, representing a 40 percent increase.
My latest book "Inside System Storage: Volume V" is now available!
I have published my fifth volume in my "Inside System Storage" series! Currently, it is only available in Paperback. My editor, Susan Pollard, is hoping to have the eBook and Hardcover versions ready for Cyber Monday. The foreword was written by Dr. Sondra Ashmore.
You can order this, and all my other books, in all formats, directly from my [Author Spotlight] page. The paperback will also be available soon from other online booksellers, search for ISBN 978-1-300-26223-7.
Improved Scalability. A new Multi-system Manager (MSM) server reduces the operational complexity for large and multi-site XIV deployments. Previously, admins connected directly to XIV boxes. If you had 10 admins logged in, then every XIV box was managing 10 admin conversations. The new MSM acts as a go-between. The admins connect to the MSM, and the MSM connects to the XIV boxes. The MSM polls and caches the status of each XIV, greatly increasing the number of XIV boxes that an admin can manage.
Enhanced User Interface. We also added support for IPsec and U.S. Government (USGv6) certification for administering the XIV over IPv6 networks. The XIV Mobile Dashboard app for iPhone and iPad is spiffed up. Finally, the GUI has been internationalized and translated into Japanese.
Enhanced Integration for Cloud. For OpenStack, XIV now offers a Nova-volume driver which provides persistent storage to OpenStack compute nodes. The Nova task force is now looking to move storage into its own project called Cinder. For VMware, XIV has full support for Site Recovery Manager (SRM) v4.1 and v5.0 releases. XIV now also supports the Microsoft System Center Virtual Machine Manager, which can manage Hyper-V, VMware and Citrix XenServer hypervisors.
Smaller entry point. The original XIV supported 1TB and 2TB drives, with the smallest offering being 27TB usable. When IBM introduced the XIV Gen3, the two choices were 2TB and 3TB disk drives. Unfortunately, this meant that the initial entry model was now 55TB in size, and each additional module would be more expensive as well. IBM is now going to offer 1TB support for XIV Gen3 at a lower price point; these are actually 2TB drives with half the capacity turned off.
The job is located in Tucson, Arizona, which is a great place to live! Tucson is the headquarters for IBM storage design and development, with the largest collection of engineers, software developers and testers. The IBM Tucson Executive Briefing Center is located on the [University of Arizona Science and Technology Park] campus that houses over 7,000 employees from 50 different companies.
What does the job entail?
Primarily, you will be developing, customizing and presenting Powerpoint presentations and live product demos. For some briefings, you will work with sales reps, IBM Business Partners, and clients to develop an agenda of topics to discuss. At times, the presentation may involve working to solve the client's problems, drawing on the whiteboard or flip charts to help capture the requirements and architect a solution.
Which products are we talking about?
The [IBM System Storage product line] includes solid-state drives (SSD), block and file-based disk systems, tape drives and libraries, storage virtualization, and storage management software.
Is there any opportunity for travel?
Most of the presentations will be performed in Tucson, either in person, by webcast or video conference call. Sometimes, this includes discussions over drinks, dinner or golfing. Occasionally, there will be travel to present at client locations, IBM branch offices, events or conferences. My manager estimates approximately 10 percent travel.
Is the pay based on a commission?
Absolutely not! We are consultants, not salespeople. To maintain our "trusted advisor" status, it is a flat salary, with possibility for year-end bonus based on how well our division does overall. This allows us to present and position all of the products fairly to the clients at briefings without bias. Our clients appreciate that! The job is considered pre-sales technical support.
Is training included?
Yes. Assuming you already have a strong background in storage hardware and software, and how these connect to SAN and LAN networks for a variety of operating systems like z/OS, AIX, Windows and Linux, there will be training for the latest updates and features of the IBM products throughout the year. Also, there will be professional training to build up your public speaking and meeting facilitation skills.
How do I apply?
If you are an American citizen, fluent in the English language, and have at least a Bachelor's Degree, go to the [IBM Employment website], look for "Storage Support Specialist" position using job code "STG-0524037" or "STG-0525309". IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Last year, the Austin Executive Briefing Center had a room full of experts to help customers learn about IBM hardware to run Oracle applications. This year, IBM is back in San Francisco, with subject matter experts representing Power Systems, System x servers, PureSystems, Storage and System z mainframes. If you are in San Francisco, consider taking 1-2 hours out of your schedule to speak to IBM experts. These are intended to answer the question: Why choose IBM for your Oracle (and other) workloads?
Event: IBM Mini-Briefings
Location: San Francisco Marriott Marquis, 55 Fourth Street, very close to the Moscone Center
Dates: Monday through Wednesday, October 1-3, 2012
Subject Matter Experts:
Pat O'Rourke, Austin Briefing Center, Power Systems
Dennis Wunder, Poughkeepsie Briefing Center, System z mainframes
Steve Loeschorn, Raleigh Briefing Center, System x servers
Curtis Neal, Tucson Briefing Center, Storage
IBM will also have a booth presence on the main Oracle OpenWorld showroom floor. Please stop by and visit my colleagues! To sign up for a Mini-Briefing at Oracle OpenWorld, for any or all of the topics above, visit the new [IBM STG Austin EBC] website.
Many thanks to the 186 people who registered for yesterday's webcast "Solving the Storage Capacity Crisis -- Tools and Practices for Effective Management!" We had some excellent questions posed during the live Q&A:
Do you recommend moving to a SAN before implementing the management techniques you described, or will these tactics work just as well on direct-attached storage?
How does data center tiering differ from hierarchical storage management?
How do you recommend decisions about data priority be made when there are multiple stakeholders competing for attention?
You didn't mention deduplication. Does that have much impact on capacity management?
When outsourcing to a storage service provider, do you have any recommendations of the merits of wholesale outsourcing vs. partial outsourcing?
What are the dangers of giving end-users the ability to manage their own storage? What kind of education should be put in place?
The webcast was recorded, so in case you missed it, or just want to hear it again, the recording is now available in the [On24 archives].
Now an avid reader of my blog has brought this to my attention. Apparently, EMC has been showing customers a presentation, [Accelerating Storage Transformation with VMAX and VPLEX], with false and misleading comparison claims between IBM DS8000, HDS VSP and EMC VMAX 40K disk system performance.
(FTC Disclosure: This would be a good time to remind my readers that I work for IBM and own IBM stock. I do not endorse any of the EMC or HDS products mentioned in this post, and have no financial affiliation or investments directly with either EMC nor HDS. I am basing my information solely on the presentation posted on the internet and other sources publicly available, and not on any misrepresentations from EMC speakers at the various conferences where these charts might have been shown.)
The problem with misinformation is that it is not always obvious. The EMC presentation is quite pretty and professional-looking. It is the typical slick, attention-getting, low-content, over-simplified marketing puffery you have come to expect from EMC. There are two slides in particular that I take issue with.
This first graphic implies that IBM and HDS are nearly tied in performance, but that EMC VMAX 40K has nearly triple that bandwidth. Overall the slide has very little detail. That makes it difficult to determine what exactly is being claimed and whether a fair comparison is being made.
The title claims that VMAX 40K is "#1 in High Bandwidth Apps". Only three disk systems are shown so the claim appears to be relative to only the three systems. The wording "High Bandwidth Apps" is confusing considering the cited numbers are for disk systems and no application is identified. By comparison, IBM SONAS can drive up to 105 GB/sec sequential bandwidth, nearly double what EMC claims for its VMAX 40K, so EMC is certainly not even close to #1.
Is the workload random or sequential? That is not easy to determine. The use of "GB/s" along with the large block size of 128KB implies the I/O workload is sequential, which is great for some workloads like high performance computing, technical computing and video broadcasts. Random workloads, on the other hand, are usually measured in I/Os per second (IOPS) with a block size ranging 4KB to 64KB. (I am assuming the 128K blocks refers to 128KB block size, and not reading the same block of cache 128,000 times.)
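Converting between bandwidth and IOPS for a given transfer size shows why big-block sequential numbers look nothing like small-block random numbers. The 52 GB/s input below is simply the number EMC's slide claims:

```python
# Converting a bandwidth figure into the equivalent IOPS for a given
# block size. The 52 GB/s input is EMC's claimed number, used only to
# illustrate how much the block size matters.
def iops(gb_per_s: float, block_kb: float) -> float:
    return gb_per_s * 1024 * 1024 / block_kb

print(iops(52, 128))  # 425984.0 -- ~426K IOPS if these were 128KB I/Os
print(iops(52, 4))    # 13631488.0 -- an implausible ~13.6M IOPS at 4KB
```

This is exactly why vendors quote GB/s for sequential work and IOPS for random work; quoting one when the workload is the other is meaningless.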
The slide states "Maximum Sustainable RRH Bandwidth 128K Blocks". The acronym "RRH" is not defined; but I suspect this refers to "random read hits". For random workloads, 100 percent random read hits from cache represents one corner of the infamous "four corners" test. Real-world workloads have a mix of reads and writes, and a mix of cache hits and cache misses. It is also unclear whether the hits are from standard data cache or from internal buffers in adapters (perhaps accessing the same blocks repeatedly) or something else. So is this really for a random workload, or a sequential workload?
(The term "Hitachi Math" was coined by an EMC blogger precisely to slam Hitachi Data Systems for their blatant use of four-corners results, arguing that spouting ridiculously large but unrealistic 100 percent random-read-hit numbers provides no useful information. I agree. There are much better industry-standard benchmarks available, such as SPC-1 for random workloads, SPC-2 for sequential workloads, and even benchmarks for specific applications, that represent real-world IT environments. To shame HDS for their use of four-corners results, only for EMC to use similar figures in their own presentation, is truly hypocritical!)
The IBM system is identified as "DS8000". DS8000 is a generic family name that applies to multiple generations of systems first introduced in 2004. The specific model is not identified, but that is critical information. Is this a first generation DS8100, or the latest DS8800, or something in between?
The slide says "Full System Configs", but that is not defined, and configuration details, critical information in assessing system performance capabilities, are not specified. If the EMC box costs seven times more than IBM or HDS, would you really buy it to get 3x more performance? Is the EMC packed with the maximum amount of SSD? Were there any SSDs in the IBM or HDS boxes to match?
The source of the claimed IBM DS8000 performance numbers is not identified. Did they run their own tests? While I cannot tell, the VMAX may have been configured with 64 Fibre Channel 8Gbps host connections. In that case each channel is theoretically capable of supporting about 800 MB/s at 100% channel utilization. Multiplying 64 x 800MB/s = 51.2GB/s, so did EMC just do the performance comparison on the back of a napkin, assuming there are no other bottlenecks in the system? Even then, I would not round up 51.2 to 52!
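Here is that back-of-napkin arithmetic spelled out. The port count and per-port rate are my assumptions, not anything EMC has documented:

```python
# Back-of-napkin arithmetic for a theoretical bandwidth ceiling.
# The port count and per-port rate are assumptions for illustration.
ports = 64              # hypothetical 8Gbps Fibre Channel host ports
mb_per_port = 800       # ~800 MB/s per port at 100 percent utilization
total_gb_per_s = ports * mb_per_port / 1000

print(total_gb_per_s)   # 51.2 -- suspiciously close to the claimed 52
```

A ceiling like this assumes every port saturated simultaneously with no internal bottlenecks, which no real workload achieves.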
Response times were not identified. For random I/Os, response time is a very important metric. It is possible that the Symmetrix was operating with some resources at 100% utilization to get the highest GB/s result, but that would likely make I/O response times unacceptable for real-world random I/O workloads.
IBM and HDS have both published Storage Performance Council [SPC] industry-standard performance benchmarks. EMC has not published any SPC benchmarks for VMAX systems. If EMC is interested in providing customers with audited, detailed performance information along with detailed configuration information, all based on benchmarks designed to represent real-world workloads, EMC can always publish SPC benchmark results as IBM and other vendors have done. In past blog fights, EMC resorts to the excuse that SPC isn't perfect, but can they really argue that vague and unrealistic claims cited in its presentation are better?
The second graphic is so absurd, you would think it came directly from Larry Ellison at an Oracle OpenWorld keynote session. EMC is comparing a VMAX 40K configured with an EMC VFCache host-side flash memory cache card against IBM and HDS disk systems configured without any host-side flash cache. The comparison is clearly apples-to-oranges. Other disk system configuration details are also omitted.
FAST VP is EMC's name for its sub-volume drive tiering feature, comparable to IBM Easy Tier and Hitachi's Dynamic Tiering. The graph implies that IBM and HDS can only achieve a modest incremental improvement from their sub-volume tiering. I beg to differ. I have seen various cases where a small amount of SSD on the IBM DS8000 series can drastically improve performance 200 to 400 percent.
The "DBClassify" shown on the graph is a tool, run as part of an EMC professional services offering called Database Performance Tiering Assessment, that makes recommendations for storing various database objects on different drive tiers based on object usage and importance. Do you really need to pay for professional services? With IBM Easy Tier, you just turn it on, and it works. No analysis required, no tools, no professional services, and no additional charge!
VFCache is an optional product from EMC that currently has no integration whatsoever with VMAX. A fair comparison would have included a host-side flash memory cache (from any vendor) when the IBM or HDS storage system was configured. Or leave it out altogether and just focus on the sub-volume tiering comparison.
Keep in mind that EMC's VFCache supports only selected x86-based hosts. IBM has published a [Statement of Direction] indicating that it will also offer this for Power systems running AIX and Linux host-side flash memory cache integrated with DS8000 Easy Tier.
I feel EMC's claims about IBM DS8000 performance are vague and misleading. EMC appears to lack the kind of technical marketing integrity that IBM strives to attain.
Since EMC is not able or willing to publish fair and meaningful performance comparisons, it is up to me to set the record straight and point out EMC's failings in this matter.
Reminder: It's not too late to register for my Webcast "Solving the Storage Capacity Crisis" on Tuesday, September 25. See my blog post [Upcoming events in September] to register!
Can you believe it is September already? We have a number of upcoming events that you might be interested in.
IBM Smarter Analytics by Design
Join the first of our 'Smarter Analytics by Design' virtual events to learn more from leading industry analyst IDC on how analytics can help you solve business challenges, and the capabilities you'll need to be successful in this ever-changing landscape. You'll also hear real case examples from AXTEL and Miami-Dade County and the results of their analytics approaches.
Webcast: IBM Smarter Analytics by Design
Date: Thursday, September 13, 2012
Time: 1:00 pm ET / 12:00 pm CT / 10:00 am PT
Dan Vessett and Jean Bozman, International Data Corporation (IDC)
Gaspar Rivera Del Valle, AXTEL, Monterrey, Mexico
Adrienne DiPrima, Rosario Fiallos, Jaci Newmark, Miami-Dade County, South Florida
The problems that used to keep storage managers awake at night -- power, cooling and physical footprint -- are being successfully addressed by technology, but a more vexing issue still remains: How to get more out of the limited supply of skilled storage management professionals.
Webcast: Solving the Storage Capacity Crisis
Date: Tuesday, September 25, 2012
Time: 12:00 pm ET / 10:00 am CT / 09:00 am PT
Demand for storage capacity continues to grow far faster than the pool of people to manage it. With no end in sight to data growth, businesses need to apply technology and practices that distribute management responsibility to the people who need storage, and multiply the volumes of storage that skilled professionals can handle.
In this session, I will cover best practices and new tools that are enabling leaps in productivity, in three main areas:
IBM is bringing back and expanding its Mini Briefing program to Oracle OpenWorld.
What is a Mini-Briefing you might ask? It is a small, customized briefing by the Executive Briefing Centers, held nearby a related conference, allowing conference attendees to take 1-2 hours out of their schedule to speak to IBM experts. These are intended to answer the question: Why choose IBM for your Oracle (and other) workloads?
Event: IBM Mini-Briefings
Location: San Francisco Marriott Marquis, 55 Fourth Street, very close to the Moscone Center
Dates: Monday through Wednesday, October 1-3, 2012
Last year, the Austin Executive Briefing Center had a room full of experts to help customers learn about IBM hardware to run Oracle applications. This year, IBM is back in San Francisco, with subject matter experts representing Power Systems, System x servers, PureSystems, Storage and System z mainframes.
Subject Matter Experts:
Pat O'Rourke, Austin Briefing Center, Power Systems
Dennis Wunder, Poughkeepsie Briefing Center, System z mainframes
Steve Loeschorn, Raleigh Briefing Center, System x servers
Curtis Neal, Tucson Briefing Center, Storage
Of course, IBM will also have a booth presence on the main Oracle OpenWorld showroom floor. Sadly, I will not be there myself this year. Please stop by and visit my colleagues!
To sign up for a Mini-Briefing at Oracle OpenWorld, for any or all of the topics above, visit the new [IBM STG Austin EBC] website.
I hope you can participate in one or more of these events!
With all of the distractions this week, from the Republican National Convention in Florida, to the Tropical Storm Isaac that hit New Orleans on the 7th anniversary of Hurricane Katrina, I thought I would continue this week's theme on the IBM zEnterprise EC12.
Processing an insurance claim: $56 U.S. dollars (USD) with mainframes, versus $92 USD with distributed servers.
Processing a mobile subscriber: $18.26 USD with mainframes, versus $26.12 USD with distributed servers.
IT cost for an ATM: $572 USD with mainframes, versus $1,021 USD with distributed servers.
In the whitepaper [Total Economic Impact of IBM System z], Forrester Research interviews executives at five existing mainframe clients and, through in-depth analysis of their deployments, presents a "composite" company with an IT staff of 4,500 employees. The result is impressive: deploying an IBM System z had an ROI of 199 percent, with a payback period of less than five months!
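The headline numbers are straightforward to reproduce. Below is a minimal Python sketch of TEI-style ROI and payback arithmetic; the dollar figures are hypothetical placeholders of my own, not Forrester's actual composite-model inputs.

```python
# TEI-style ROI and payback arithmetic. The dollar figures below are
# hypothetical, chosen only to illustrate how a 199 percent ROI and a
# five-month payback might arise; they are not Forrester's inputs.
def roi_percent(total_benefits: float, total_costs: float) -> float:
    """ROI = (benefits - costs) / costs, expressed as a percentage."""
    return (total_benefits - total_costs) / total_costs * 100

def payback_months(total_costs: float, monthly_benefit: float) -> float:
    """Months until cumulative benefits cover the total cost."""
    return total_costs / monthly_benefit

print(f"{roi_percent(29_900_000, 10_000_000):.0f}%")   # 199%
print(payback_months(10_000_000, 2_000_000))           # 5.0
```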
I finish this post with a quick [6-minute YouTube video], featuring my colleague, Nick Sardino. Nick and I have worked together in the past at various conferences and conventions.
Well it's Tuesday again, and you know what that means! IBM Announcements!
For nearly 50 years, IBM has been leading the IT industry with its mainframe servers. Today, IBM announced its 12th generation mainframe in its [System z product family], the IBM zEnterprise EC12, or zEC12 for short. I joined IBM in 1986, and my first job was to work on DFHSM for the MVS operating system. The product is now known as DFSMShsm as part of the Data Facility Storage Management System, and the operating systems went through several name changes: MVS/ESA, OS/390, and lately z/OS. I was the lead architect for DFSMS up until 2001. I then switched to be part of the team that brought Linux to the mainframe. Both of these experiences come in handy as I deal with mainframe storage clients at the Tucson Executive Briefing Center.
Let's take a look at some recent developments over the past few years.
In the 9th and 10th generations (IBM System z9 and z10, respectively), IBM introduced the concept of a large "Enterprise Class", and a small "Business Class" to offer customer choice. These were referred to as the EC and BC models.
For the 12th generation, IBM kept the name "zEnterprise", but went back to "EC" to refer to Enterprise Class. Rather than offer a separate "small" Business Class version, the zEC12 comes in 60 different sub-capacity levels. Many software vendors charge per core, or per [MIPS], so sub-capacity means that some portion of the processors are turned off, lowering the software license charges. The top rating for the zEC12 is 78,000 MIPS. (I would have thought we would have switched over to BIPS by now!)
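To see why sub-capacity levels matter, consider a per-MIPS software charge. This is a simplified sketch with a made-up $50/MIPS rate; real mainframe software pricing uses tiered, negotiated curves, not a flat per-MIPS fee.

```python
# Simplified sub-capacity licensing model: software charged per active
# MIPS, so disabling processors lowers the bill. The $50/MIPS rate is
# made up for illustration only.
FULL_CAPACITY_MIPS = 78_000   # top zEC12 rating quoted above

def monthly_license(active_mips: int, dollars_per_mips: int) -> int:
    """Charge applies only to the MIPS actually enabled."""
    return active_mips * dollars_per_mips

print(monthly_license(FULL_CAPACITY_MIPS, 50))  # full capacity: 3900000
print(monthly_license(20_000, 50))              # sub-capacity: 1000000
```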
If you currently have a z10 or z196, then it can be upgraded to zEC12. The zEC12 can attach to up to four zBX model 003 frames that can run AIX, Microsoft Windows and Linux-x86. If you currently have zBX model 002 frames, these can be upgraded to model 003.
The key enhancements reflect the three key initiatives:
Operational Analytics - Most analytics are done after-the-fact, but IBM zEnterprise can enable operational analytics in real-time, such as fraud detection while the person is using the credit card at a retail outlet, or online websites providing real-time suggestions for related products while the person is still adding items to their shopping cart. Operational analytics provides not just the insight, but delivers it in a timely manner that makes it actionable. There is even work in place to [certify Hadoop on the mainframe].
Security and Resiliency - IBM is famous for having the most secure solutions. With industry-leading EAL5+ security rating, it beats out competitive offerings that are typically only EAL4 or lower. IBM has a Crypto Express4S card to provide tamper-proof co-processing for the system. IBM introduces the new "zAware" feature, which is like "Operational Analytics" pointed inward, evaluating all of the internal processes, error logs and traces, to determine if something needs to be fixed or optimized.
Cloud Agile - When people hear the phrase "Cloud Agile" they immediately think of IBM System Storage, but servers can be Cloud Agile also, and the mainframe can run Linux and Java better, faster, and at a lower cost than many competitive alternatives.
Earlier this week, Jon Erickson from Forrester Research, and Chris Saul from IBM, co-presented a webcast on the economic impact of using SAN Volume Controller for storage virtualization. The event was co-sponsored by IBM, InformationWeek, and UBM TechWeb. Jon spoke first, covering the cost savings and financial benefits of using SAN Volume Controller in your environment. His analysis shows a payback period of only 18 months!
Chris Saul (IBM) then covered the latest features introduced last June for SAN Volume Controller v6.4 release. Many of these features are available on older hardware models of SAN Volume Controller. One of the most exciting features is Real-time Compression.
If you missed the webcast, you can listen to the [Replay]. There is also a [whitepaper] if you prefer that format.
The Real-time Compression benefits can vary by the type of data compressed. Some data yields only 20 percent savings; other data compresses 80 percent or more. The best way to find out how much compression would benefit your environment is to run the [IBM Comprestimator Tool] against your own data!
If you are constantly battling out-of-space conditions, and would like to make extra room on your existing storage devices, your dreams have come true!
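If you just want a quick feel for why savings vary so much by data type, here is a rough do-it-yourself check using Python's zlib. This is not the IBM Comprestimator, which samples live volumes; it only demonstrates the spread between highly compressible and essentially incompressible data.

```python
# Rough compressibility check with zlib. This is NOT the IBM
# Comprestimator tool; it only shows why savings range from ~20%
# to 80%+ depending on the data being compressed.
import os
import zlib

def savings_percent(data: bytes) -> float:
    """Percentage of space saved by zlib at compression level 6."""
    return (1 - len(zlib.compress(data, 6)) / len(data)) * 100

log_like = b"2012-08-21 10:00:00 INFO request served in 12ms\n" * 2000
random_like = os.urandom(100_000)  # encrypted/pre-compressed data looks like this

print(f"Log-style data: {savings_percent(log_like):.1f}% savings")
print(f"Random data:    {savings_percent(random_like):.1f}% savings")
```

Repetitive text like logs compresses dramatically, while random (or already-compressed) data shows essentially zero savings.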
IBM has announced it has entered into a definitive agreement to acquire Texas Memory Systems, Inc. (TMS), a privately held Houston, Texas-based company with about 100 employees, that focuses on solid-state flash optimized systems and solutions, including the RamSan family of external rack-mounted storage, as well as PCIe cards for internal storage that fit inside servers.
I've mentioned Solid-State Drive storage quite a few times over the past few years in this blog, which included some great interactions with my friends over at Texas Memory Systems. Here's a quick look:
In my now infamous blog post [Hybrid, Solid State and the future of RAID], I resort to a deck of [Tarot cards] in an effort to fight [writer's block] while responding to a query about combining solid-state with spinning disk. In the original post, I poked fun at Texas Memory Systems having the slogan "World's Fastest Storage". Woody Hutsell, then VP of marketing for Texas Memory Systems, explained that the reason TMS did not have faster benchmark results was that it did not have a million dollars to buy the fastest IBM UNIX server.
In my post [Good News and Bad News], I mentioned that Texas Memory Systems has an impressive SPC benchmark result. The Storage Performance Council [SPC] publishes the benchmarking industry standard by which all block-based storage devices are measured. It looks like the TMS performance test department finally got the million-dollar IBM server they needed for this.
My colleagues in marketing were not amused, afraid that mentioning small companies like TMS would give them a huge boost in marketing awareness, above and beyond what TMS could do on their own modest marketing budget, similar to the [Colbert Bump]. I could call it the Pearson Bump. If you first heard of Texas Memory Systems from my blog, or bought TMS products based on my discussion, please post a comment below!
IBM made history as the first major storage vendor to [break the 1 million IOPS barrier with Solid State Disk]. The project was known as "Quicksilver", and was able to demonstrate that a product like SAN Volume Controller with Solid-State Drives (SSD) can indeed provide a significant boost in performance to external disk arrays. The IBM 2145-CF8 and 2145-CG8 models allow up to four SSD in each node. I was asked not to blog the entire month of August, so that our upcoming September announcements would get more notice, but I couldn't resist covering Quicksilver. The original post had mentioned Texas Memory Systems, but those references were later removed to avoid the "Pearson Bump".
In my post [Day 2 IBM Storage University - Solutions Expo - TMS After-party], I mentioned that I attended the TMS after-party. Texas Memory Systems had just been qualified as Solid-State Drive (SSD) storage behind the IBM SAN Volume Controller, and the two products work extremely well together for IBM Easy Tier, the sub-volume automated tiering capability to optimize storage performance. I was able to catch up with my friend Erik Eyberg, and meet CEO and Founder Holly Frost.
Nearly half (43 percent) of IT decision makers say they have plans to use SSD technology in the future or are already using it in their datacenter. Solid-state can refer to both volatile Random Access Memory (RAM) and non-volatile Flash, and Texas Memory Systems has built solutions around both types. The survey question referred to non-volatile Flash Solid-State Drives (SSD) that do not require a battery to keep the data from fading away after the power goes out. Nearly all storage in the datacenter has volatile Random Access Memory (RAM).
Speeding delivery of data was the motivation behind 75 percent of respondents who plan to use or already use SSD technology. I would have thought this would have been 100 percent, but the other options included reduced energy consumption, and improved drive reliability, which are both also true with Solid-State Drives.
However, for those who were not using SSD today, the major factor was cost, according to 71 percent of respondents. On a dollar-per-GB basis, Solid-State Drives continue to be anywhere from 10 to 25 times more expensive than spinning disk. Last year's tsunami in Japan, and the floods in Thailand, caused spinning disk prices to rise to cover component shortages, thereby shrinking the price gap between SSD and spinning disk.
Nearly half (48 percent) say they plan on increasing storage investments in virtualization, followed by cloud (26 percent), flash memory/solid state (24 percent), and analytics (22 percent).
I am back from lovely Taipei. The IBM Top Gun class went well. Here are a few pictures of things I found interesting while I was there.
On the first day of class, I asked for some coffee. Our lovely class assistant, Ashley, brought me a cup with an interesting paper filter hanging on the edge. I have since learned that there are two drinks never to order in Taiwan: coffee and wine. If you enjoy either, you won't enjoy what is served here. Instead, I drank the local "Taiwan Beer" and various types of tea.
Our class was on the 14th floor of the building, and there was this warning sign posted in the elevator. I have no idea what the Chinese characters say, but we found the cartoon depictions of elevator dangers amusing. We interpreted the lower left corner to mean "Don't let your evil twin sister push you out of a moving elevator!"
I have to say that the variety of food was excellent. One night, we had dinner at a [Spanish Tapas] restaurant. The Spanish had a settlement on Taiwan island, known as Formosa back then, until driven out by the Dutch in 1642. We also had a traditional Chinese lunch, with dumplings, pickled cabbage, and "Lion's Head" soup.
From the classroom floor, we could see the Taipei 101 building, considered the third [tallest skyscraper in the world]. This wasn't here the last time I was in Taiwan.
On the last day, we were treated to some [Bubble tea], a specialty drink that originated in Taiwan in the 1980's. The straw was unusually thick, about twice as thick as a normal straw. We quickly figured out why. It was so that we could slurp up the brown floating things at the bottom. We didn't realize this until after the first sip. These floaties were actually Boba Tapioca pearls. The tea itself was delicious and sweet.
Special thanks to Joe Ebidia for managing the class, his assistant Ashley, and our local support team Justin and Stewart. I would also like to thank the staff at the Sherwood Hotel.
This week, I am in Taipei, teaching Top Gun class. There was concern that another typhoon would hit the island of Taiwan later this week, but it looks like it is now headed for Hong Kong instead.
Elsewhere in the world, there are several events going on next week, so I thought I would bring them to your attention.
ECTY - South Africa
Next week, Jerry Kluck, IBM Global Sales Executive for Storage Optimization and Integration Services, will be the keynote speaker at "Edge Comes to You" (ECTY) conference in South Africa. This is a one-day event, similar to the [ECTY event in Moscow, Russia] that I spoke at last June.
Here is the schedule for South Africa next week:
Monday, August 20, 2012 - Johannesburg
Wednesday, August 22, 2012 - Cape Town
(I have been to both Jo'burg and Cape Town back in 1994. A month after Apartheid ended, I was part of a small group of IBMers sent to re-establish IBM's business operations there. I would have liked to attend the events next week, not just to hear Jerry speak, but also to see how much the country has changed over the past 18 years, but I could not get a work permit in time.)
If you are interested in attending either of these next week, contact your local IBM Business Partner or sales rep to attend.
Forrester's Total Economic Impact Study of Virtualized Storage
Virtualized storage can help organizations stretch their storage investment dollar and their storage administration and management resources. Jon Erickson from Forrester Research will review the latest findings from IBM SAN Volume Controller (SVC) users studied as part of the recently completed Forrester Total Economic Impact Study of IBM System Storage SAN Volume Controller.
Date: Tuesday, August 21, 2012
Time: 10:00 AM PDT / 1:00 PM EDT
Duration: 60 minutes
Among the findings, users were able to:
Avoid the capital cost of additional storage
Increase IT productivity
Provide greater end user data availability
The second presenter is Chris Saul, IBM Storage Virtualization Manager, who will explain how SVC can manage heterogeneous disk from a single point of control, autonomously manage tiered disk storage and can store up to five times as much data on your existing disk using IBM Real-time Compression.
Not all virtualization solutions are created equal! That's true for storage virtualization, like the SAN Volume Controller mentioned above, and it's true for server virtualization as well.
This webcast discusses the real-world impact on businesses that deploy IBM's PowerVM® virtualization technology as compared to those using Oracle® VM for SPARC (OVM SPARC), Microsoft® Hyper-V, VMware® vSphere or other competing products.
Date: Wednesday, August 22, 2012
Time: 10:00 AM PDT / 1:00 PM EDT
Duration: 60 minutes
This webcast will include findings from a [Solitaire Interglobal] study of over 61,000 customer sites on the value of virtualization from a business perspective and how IBM's PowerVM provides real business value.
Other key discussion points that will be covered during this webcast include:
Behavioral characteristics of server virtualization technologies that were examined and analyzed from survey participant's environments
How IT colleagues were able to obtain a faster time-to-market for business initiatives when using IBM PowerVM
Why teams climb the PowerVM learning curve as much as 2.58 times faster than with other offerings
Why VM reboot comparisons showed PowerVM downtime to be 5.5 times lower than with competitive platforms
A TCO reduction of up to 71.4% for PowerVM compared to alternative options
This webcast will also feature an in-depth discussion on the IBM PowerVM solution from an IBM product expert who will share the unique virtualization features available when PowerVM is utilized within the IBM Power Systems™ environment.
Every year, I teach hundreds of sellers how to sell IBM storage products. I have been doing this since the late 1990s, and it is one task that has carried forward from one job to another as I transitioned through various roles from development, to marketing, to consulting.
This week, I am in the city of [Taipei] to teach Top Gun sales class, part of IBM's [Sales Training] curriculum. This is only my second time on the island of Taiwan.
As you can see from this photo, Taipei is a large city with just row after row of buildings. The metropolitan area has about seven million people, and I saw lots of construction for more on my ride in from the airport.
The student body consists of IBM Business Partners and field sales reps eager to learn how to become better sellers. Typically, some students have just been hired or have just finished IBM Sales School, a few have transferred from selling other product lines, while others are established storage sellers looking for a refresher on the latest solutions and technologies.
I am part of a teaching team of seven instructors from different countries. Here is what the week entails for me:
Monday - I will present "Selling Scale-Out NAS Solutions" that covers the IBM SONAS appliance and gateway configurations, and be part of a panel discussion on Disk with several other experts.
Tuesday - I have two topics, "Selling Disk Virtualization Solutions" and "Selling Unified Storage Solutions", which cover the IBM SAN Volume Controller (SVC), Storwize V7000 and Storwize V7000 Unified products.
Wednesday - I will explain how to position and sell IBM products against the competition.
Thursday - I will present "Selling Infrastructure Management Solutions" and "Selling Unified Recovery Management Solutions", which focus on the IBM Tivoli Storage portfolio, including Tivoli Storage Productivity Center, Tivoli Storage Manager (TSM), and Tivoli Storage FlashCopy Manager (FCM). The day ends with the dreaded "Final Exam".
Friday - The students will present their "Team Value Workshop" presentations, and the class concludes with a formal graduation ceremony for the subset of students who pass. A few outstanding students will be honored with "Top Gun" status.
These are the solution areas I present most often as a consultant at the IBM Executive Briefing Center in Tucson, so I can provide real-life stories of different client situations to help illustrate my examples.
The weather here in Taipei calls for rain every day! I was able to take this photo on Sunday morning while it was still nice and clear, but later in the afternoon, we had quite the downpour. I am glad I brought my raincoat!
With all the announcements we had in June, it is easy for some of the more subtle enhancements to get overlooked. While I was in Orlando for the IBM Edge conference, I was able to blog about some of the key featured announcements. Later, when I got back from Orlando to Tucson, I blogged about [More IBM Storage Announcements]. For IBM's Scale-Out Network Attached Storage (SONAS), I had simply:
"SONAS v1.3.2 adds support for management by the newly announced IBM Tivoli Storage Productivity Center v5.1 release. Also, IBM now officially supports Gateway configurations that have the storage nodes connected to XIV or Storwize V7000 disk systems. These gateway configurations offer new flexible choices and options for our ever-expanding set of clients."
In my defense, IBM numbers its software releases with version.release.modification, so 1.3.2 is Version 1, Release 3, Modification 2. Generally, modification announcements don't get much attention. The big announcement for v1.3.0 of SONAS happened last October; see my blog post [October 2011 Announcements - Part I] or
the nice summary post [IBM Scale-out Network Attached Storage 1.3.0] from fellow blogger Roger Luethy.
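For readers unfamiliar with the scheme, version.release.modification strings compare naturally as tuples. A small sketch (the `VRM` class is my own illustration, not an IBM-provided API):

```python
# IBM version.release.modification (VRM) numbering, e.g. SONAS "1.3.2"
# = Version 1, Release 3, Modification 2. The VRM class here is
# illustrative, not an IBM API.
from typing import NamedTuple

class VRM(NamedTuple):
    version: int
    release: int
    modification: int

def parse_vrm(text: str) -> VRM:
    v, r, m = (int(part) for part in text.split("."))
    return VRM(v, r, m)

print(parse_vrm("1.3.2"))                        # VRM(version=1, release=3, modification=2)
print(parse_vrm("1.3.0") < parse_vrm("1.3.2"))   # tuple ordering: True
```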
Here is a diagram showing the three configurations of SONAS.
I have covered the SONAS Appliance model in depth in previous blogs, with options for fast and slow disk speeds, choice of RAID protection levels, a collection of enterprise-class software features provided at no additional charge, and interfaces to support a variety of third party backup and anti-virus checking software.
The basics haven't changed. The SONAS appliance consists of 2 to 32 interface nodes, 2 to 60 storage nodes, and up to 7,200 disk drives. The maximum configuration takes up 17 frames and holds 21.6PB of raw disk capacity, which is about 17PB usable space when RAID6 is configured. Each interface node has one or two hex-core processors with up to 144GB of RAM, offering up to 3.5GB/sec performance each. This makes IBM SONAS the fastest performing and most scalable disk system in IBM's System Storage product line.
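Those maximums check out with simple arithmetic. Here is the back-of-the-envelope calculation; the 8+2 RAID6 stripe geometry is my assumption for illustration, since actual usable capacity depends on the configured array widths and spare drives.

```python
# Back-of-the-envelope check of the SONAS maximums. The 8+2 RAID6
# stripe is an assumed geometry; real usable capacity depends on the
# configured array widths and spare drives.
DRIVES = 7_200
TB_PER_DRIVE = 3
DATA, PARITY = 8, 2   # assumed RAID6 stripe: 8 data + 2 parity disks

raw_pb = DRIVES * TB_PER_DRIVE / 1000
usable_pb = raw_pb * DATA / (DATA + PARITY)

print(f"Raw:    {raw_pb:.1f} PB")     # Raw:    21.6 PB
print(f"Usable: {usable_pb:.1f} PB")  # Usable: 17.3 PB
```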
I thought I would go a bit deeper on the gateway models. These models support up to ten storage nodes, organized in pairs. The key difference is that instead of internal disk controllers, the storage nodes connect to external disk systems. There is enough space in the base SONAS rack to hold up to six interface nodes, or you can add a second rack if you need more interface nodes for increased performance.
SONAS with XIV gateway
XIV offers a clever approach to storage that allows for incredibly fast access to data on relatively slow 7200 RPM drives. By scattering data across all drives and taking advantage of parallel processing, rebuild times for a failed 3TB drive are less than 75 minutes. Compare that to typical rebuild times for 3TB drives that could take as much as 9-10 hours under active I/O loads!
In this configuration, each pair of storage nodes connects to external SAN fabric switches that in turn connect to one or two XIV storage systems. How simple is that? These can be the original XIV systems that support 1TB and 2TB drives, or the new XIV Gen3 systems that support 400GB Solid-State Drives (SSD) and 3TB spinning disk drives. In both cases, you can acquire additional storage capacity in increments as small as 12 drives (one XIV module holds 12 drives).
The maximum configuration of ten XIV boxes could hold 1,800 drives. At 3TB per drive, that would be 2.4PB usable capacity.
The SONAS with XIV gateway does not require the XIV devices to be dedicated for SONAS purposes. Rather, you can assign some XIV storage space for the SONAS, and the rest is available for other servers. In this manner, SONAS just looks like another set of Linux-based servers to the XIV storage system. This in effect gives you "Unified Storage", with a full complement of NAS protocols from the SONAS side (NFS, CIFS, FTP, HTTPS, SCP) as well as block-based protocols directly from the XIV (FCP, iSCSI).
SONAS with Storwize V7000 gateway
The other gateway offering is the SONAS with Storwize V7000. Like the SONAS with XIV gateway model, you connect a pair of SONAS storage nodes to 1 or 2 Storwize V7000 disk systems. However, you do not need a SAN Fabric switch in between. You can instead connect the SONAS storage nodes directly to the Storwize V7000 control enclosures.
To acquire additional storage capacity, you can purchase a single drive at a time. That's right. Not 12 drives, or 60 drives, at a time, but one at a time. The Storwize V7000 supports a wide range of SSD, SAS and NL-SAS drives at different sizes, speeds and capacities. The drives can be configured into various RAID protection levels: RAID 0, 1, 3, 5, 6 and 10.
Each Storwize V7000 control enclosure can have up to nine expansion drawers. If you choose the 2.5-inch 24-bay models, you can have up to 480 drives per storage node pair, for a total of 2,400 drives. If you choose the 3.5-inch 12-bay models, you can have up to 240 drives per node pair, 1,200 drives total. At 3TB per drive, this could be 3.6PB of raw capacity. The usable PB would depend on which RAID level you selected. Of course, you don't have to limit yourself to one size or the other. Feel free to mix 2.5-inch and 3.5-inch drawers to provide different storage pool capabilities.
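The drive counts above follow directly from the enclosure limits. A quick sanity check, assuming each storage node pair attaches to two fully expanded V7000 systems and the gateway scales to five pairs:

```python
# Sanity check of the Storwize V7000 gateway drive counts. Assumes each
# SONAS storage node pair attaches to two fully expanded V7000 systems
# (1 control enclosure + 9 expansion drawers each), five pairs maximum.
def drives_per_pair(bays_per_enclosure: int,
                    v7000_per_pair: int = 2,
                    expansions: int = 9) -> int:
    enclosures = 1 + expansions        # control enclosure plus expansions
    return v7000_per_pair * enclosures * bays_per_enclosure

PAIRS = 5
for bays in (24, 12):                  # 2.5-inch vs 3.5-inch models
    per_pair = drives_per_pair(bays)
    print(f"{bays}-bay: {per_pair} drives per pair, {per_pair * PAIRS} total")
# 24-bay: 480 drives per pair, 2400 total
# 12-bay: 240 drives per pair, 1200 total
```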
All three SONAS configurations support Active Cloud Engine. This is a collection of features that differentiates SONAS from the other scale-out NAS wannabes in the marketplace:
Policy-driven Data Placement -- Different files can be directed to different storage pools. You no longer have to associate certain file systems to certain storage technologies.
High-speed Scan Engine -- SONAS can scan 10 million files per minute, per node. These scans can be used to drive data migration, backups, expirations, or replications, for example. It is over 100 times faster than traditional walk-the-directory-tree approaches employed by other NAS solutions.
Policy-driven Migration -- You can migrate files from one storage pool to another, based on age, days since last reference, size, and other criteria. The files can be moved from disk to disk, or moved out of SONAS and stored on external media, such as tape or a virtual tape library. A lot of data stored on NAS systems is dormant, with little or no likelihood of being looked at again. Why waste money keeping that kind of data on expensive disk? With SONAS, moving those files to tape can save lots of money. The files are stubbed in the SONAS file system, so that an access request for a file automatically triggers a recall to fetch the data from tape back to the SONAS system.
Policy-driven Expiration -- SONAS can help you keep your system clean, by helping you decide what files should be deleted. This is especially useful for things like logs and traces that tend to just hang around until someone deletes them manually.
WAN Caching -- This allows one SONAS to act as a "Cloud Storage Gateway" for another SONAS at a remote location connected by Wide Area Network (WAN). Let's say your main data center has a large SONAS repository of files, and a small branch office has a smaller SONAS. This allows all locations to have a "Global" view of all the interconnected SONAS systems, with a high-speed user experience for local LAN-based access to the most recent and frequently used files.
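To make the policy idea concrete, here is a toy age-based selection in Python. Real SONAS policies are evaluated by the high-speed scan engine, not by walking the directory tree as this sketch does, and the path in the usage example is hypothetical.

```python
# Toy age-based migration policy: select files whose last access is
# older than a cutoff as candidates for a cheaper tier. SONAS evaluates
# policies with its scan engine, not an os.walk like this sketch.
import os
import time

def migration_candidates(root: str, days_unused: int = 180):
    cutoff = time.time() - days_unused * 86_400
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_atime < cutoff:
                yield path

# Hypothetical usage:
# for path in migration_candidates("/export/projects", days_unused=365):
#     print("migrate to tape:", path)
```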
If you want to learn more, see the [IBM SONAS landing page]. Next week, I will be across the Pacific Ocean in [Taipei], to teach IBM Top Gun class to sales reps and IBM Business Partners. "Selling SONAS" will be one of the topics I will be covering!
Next week we have two events related to Infrastructure for midsize businesses!
On Monday, August 6th, 1pm EDT, we have a TweetChat to cover "IT Infrastructure Improvements for Midsize Businesses." You can join at [http://tweetchat.com/room/expertsyschat] or simply tweet with hashtag: #ExpertSysChat
On Tuesday, August 7th, 12pm EDT, IBM's Midsize Insider is hosting me as a speaker for a Webcast: [Storage Management with IBM]. Midsize Insider is a valuable repository of expert content tailored for small-to-midsized business owners and IT decision makers.
Mark your calendars! Next month, IBM's Midsize Insider is hosting me as a speaker for a Webcast: [Storage Management with IBM], on August 7th, 12pm EDT. Midsize Insider is a valuable repository of expert content tailored for small-to-midsized business owners and IT decision makers.
The problems that used to keep storage managers awake at night -- power, cooling and physical footprint -- are being successfully addressed by technology, but a more vexing issue still remains: How to get more out of the limited supply of skilled storage management professionals.
Demand for storage capacity continues to grow far faster than the pool of people to manage it. With no end in sight to data growth, businesses need to apply technology and practices that distribute management responsibility to the people who need storage, and multiply the volumes of storage that skilled professionals can handle.
In this session, I will cover best practices and new tools that are enabling leaps in productivity, in three main areas:
Abandon the Craftsman Approach. Storage administrators need to discard some long-held myths about storage management and adopt new ways of thinking that enable them to handle significantly greater capacity.
Adopt software tools. Computers can now provide unprecedented guidance on storage optimization so that people don’t have to. Policy-based management, smart provisioning and automated tiering are among the innovations that are powering leaps in productivity.
Consider self-service portals. Companies are now exploring the self-service capabilities of private and public clouds. However, organizations need to put policies and limits in place to create an atmosphere of trust that enables efficient self-provisioning of storage.
Robert LeBlanc, IBM Senior Vice President for Middleware, gave a keynote presentation at the Red Hat Summit. Here is the [26-minute YouTube video]:
I am running Red Hat Enterprise Linux (RHEL) 6.2 as my primary laptop operating system. Most of IBM's products, like Lotus Notes for email, run natively on Linux for the desktop. I have Windows XP running as a Linux KVM guest to run a few third-party applications that we are still using.
Happy Fourth of July everyone! For my readers outside the U.S.A, this Wednesday marks America's [Independence Day]. Celebrations include parades during the day, and fireworks at night.
A long time ago, the IBM Tucson lab decided to close down the entire week, forcing everyone to take a week of their allotted vacation, so as to perform maintenance on the air conditioners and other equipment. Since then, many IBMers in Tucson have adopted this week as a good time to get out of town.
Most years, I head over to San Diego, California. This year, however, I will be taking a cruise on the Caribbean.
This week I am in Moscow, Russia for today's "Edge Comes to You" event. Although we had over 20 countries represented at the Edge2012 conference in Orlando, Florida earlier this month, IBM realizes that not everyone can travel to the United States. So, IBM has created the "Edge Comes to You" events where a condensed subset of the agenda is presented. Over the next four months, these events are planned in about two dozen other countries.
This is my first time in Russia, and the weather was very nice. With over 11 million people, Moscow is the 6th largest city in the world, and boasts having the largest community of billionaires. With this trip, I have now been to all five of the so-called BRICK countries (Brazil, Russia, India, China and Korea) in the past five years!
The venue was the [Info Space Transtvo Conference Center], not far from the Kremlin. While Barack Obama was making friends with Vladimir Putin this week at the G20 Summit in Mexico, I was making friends with the lovely ladies at the check-in counter.
If it looks like some of the letters are backwards, that is not an illusion. The Russian language uses the [Cyrillic alphabet]. The backwards N ("И"), backwards R ("Я"), the number 3 ("З"), and what looks like the big blue staple logo from NetApp ("П"), are actually all characters in this alphabet.
Having spent eight years in a fraternity during college, I found these not much different from the Greek alphabet. Once you learn how to pronounce each of the 33 characters, you can get by quite nicely in Moscow. I successfully navigated my way through Moscow's famous subway system, and ordered food on restaurant menus.
The conference coordinators were Tatiana Eltekova (left) and Natalia Grebenshchikova (right). Business is booming in Russia, and IBM just opened ten new branch offices throughout the country this month. So these two ladies in the marketing department have been quite busy lately.
I especially liked all the attention to detail. For example, the signage was crisp and clean, and the graphics all matched the Powerpoint charts of each presentation.
Moscow is close to the North Pole, similar in latitude to Juneau, Alaska; Edinburgh, Scotland; Copenhagen, Denmark; and Stockholm, Sweden.
As a result, it is daylight for nearly 18 hours a day. The first part of the day, from 8:00am to 4:30pm, was "Technical Edge", a condensed version of the 4.5 day event in Orlando, Florida. I gave three of the five keynote presentations:
Game Change on a Smarter Planet: A New Era in IT, discussing Smarter Computing and Expert-Integrated systems, based on what Rod Adkins presented in Orlando.
A New Approach to Storage, explaining IBM Smarter Storage for Smarter Computing, IBM's new approach to the way storage is designed and deployed for our clients
IBM Watson: How it Works and What it Means for Society Beyond Winning Jeopardy! explaining how IBM Watson technologies are being used in Healthcare and Financial Services, based on what I presented in Orlando.
(Note: I do not speak Russian fluently enough to give a technical presentation, so I gave the entire presentation in English, and real-time translators converted it to Russian for me. The audience wore headphones. However, I was able to sprinkle in a few Russian phrases, such as "доброе утро" (good morning), "Я не понимаю по-русски" (I don't understand Russian), and "спасибо" (thank you).)
After the keynote sessions, I was interviewed by a journalist for [Storage News] magazine. The questions covered a variety of topics, from the implications of [Big Data analytics] to the future of storage devices that employ [Phase Change Memory]. I look forward to reading the article when it gets published!
The afternoon had break-out sessions in three separate rooms. Each room hosted seven topics, giving the attendees plenty to choose from for each time slot. I presented one of these break-out sessions, Big Data Cloud Storage Technology Comparison. The title was already printed in all the agendas, so we went with it, but I would have rather called it "Big Data Storage Options". In this session, I explained Hadoop, InfoSphere BigInsights, internal and external storage options.
I spent some time comparing the Hadoop Distributed File System (HDFS) with IBM's own General Parallel File System (GPFS), which now offers Hadoop interfaces in a Shared-Nothing Cluster (SNC) configuration. IBM GPFS is about twice as fast as HDFS for typical workloads.
At the end of the Technical Edge event, there was a prize draw. Business cards were drawn at random, and three lucky attendees won a complete four-volume set of my book series "Inside System Storage"! Sadly, the books got held up in customs, so we provided a "certificate" to redeem for the books when they arrive at the IBM office.
The second part of the day, from 5:00pm to 8:00pm, was "Executive Edge", a condensed version of the 2-day event in Orlando, designed for CIOs and IT leaders. Holding this event in the evening allowed busy executives to come over after spending the day in the office. I presented IBM Storage Strategy in the Smarter Computing Era, similar to my presentation in Orlando.
Both events were well-attended. Despite fighting jet lag across 11 time zones, I managed to hang in there for the entire day. I got great feedback and comments from the attendees. I look forward to hearing how the other "Edge Comes to You" events fare in the other countries. I would like to thank Tatiana and Natalia for their excellent work organizing and running this event!
Well, it's Tuesday again, and you know what that means... IBM announcements!
Last week, IBM had a big storage launch of various products, with the June 4 announcements at the IBM Edge 2012 conference. I provided highlights in my post [IBM Edge Announcements]. As promised, here are the rest of the announcements.
SONAS v1.3.2 adds support for management by the newly announced IBM Tivoli Storage Productivity Center v5.1 release. Also, IBM now officially supports "Gateway configurations" that have the storage nodes connected to XIV or Storwize V7000 disk systems. These gateway configurations offer new flexible choices and options for our ever-expanding set of clients.
ProtecTIER appliances and gateways
The IBM ProtecTIER line of data deduplication appliances and gateways adds CIFS file system support. Rather than using OST or a VTL interface, you now have CIFS as a new option for host attachment. Also, IBM introduces the new TS7620 Express model, with options for 5.4TB and 11TB capacities, replacing the previous TS7610 entry-level model.
LTFS Storage Manager
The Linear Tape File System (LTFS) allows files to be stored on tape cartridges in a manner that allows them to be mounted as file systems, much like a USB memory stick. The new LTFS Storage Manager software allows you to manage a collection of files across a set of cartridges, moving files from one cartridge to another, consolidating valid data onto fewer cartridges, and removing files no longer needed. This is sometimes referred to as "lifecycle management".
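The cartridge consolidation described above can be modeled as a packing problem: gather the files still worth keeping and fit them onto as few cartridges as possible. Here is a minimal sketch of that idea in Python; the capacity figure and file sizes are assumptions for illustration, not LTFS Storage Manager's actual algorithm.

```python
# Illustrative model of cartridge consolidation ("lifecycle management"):
# pack the files still considered valid onto as few cartridges as possible.
CARTRIDGE_CAPACITY = 1500  # GB, an assumed LTO cartridge capacity

def consolidate(files):
    """Greedy first-fit-decreasing packing of (name, size_gb) files."""
    cartridges = []  # each entry: [remaining_capacity, [file names]]
    for name, size in sorted(files, key=lambda f: f[1], reverse=True):
        for cart in cartridges:
            if cart[0] >= size:       # file fits on an existing cartridge
                cart[0] -= size
                cart[1].append(name)
                break
        else:                         # no room anywhere: start a new cartridge
            cartridges.append([CARTRIDGE_CAPACITY - size, [name]])
    return [names for _, names in cartridges]

valid_files = [("a.bin", 900), ("b.bin", 700), ("c.bin", 600), ("d.bin", 500)]
print(len(consolidate(valid_files)))  # 2 -- data once scattered now fits on two cartridges
```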
Tape System Library Manager
When IBM first introduced the "shuttle" that allowed up to fifteen TS3500 tape libraries to be connected together into a single system, only HPSS customers could take advantage of this. Software was required to coordinate the movement of cartridges from one library to another. The new IBM Tape System Library Manager now offers an alternative to HPSS for coordinating this activity.
DS8000 v6.3 microcode
IBM now offers 400GB solid-state drives. IBM's market-leading support for Full Disk Encryption (FDE) is now extended to cover all drive speeds, from the slowest 7200 RPM NL-SAS drives up to the fastest solid-state drives. IBM Easy Tier extends its super-easy implementation to work across all three of these drive tiers, including encrypted drives.
IBM now offers implementation services for IBM XIV Gen3 storage system, and the N series models 3220 and 3240.
This week I am on the road visiting various clients. Next week, Moscow Russia for the "Edge Comes to You" event!
This week I am in Orlando, Florida for the IBM Edge conference. This is the last day, so it ends early for people who want to get home to their datacenters (er.. families) for the weekend.
How Real-Time Compression Can Maximize Storage Efficiency for Production Applications
This was a split session with two speakers. First, Ian Rimmer, Senior IT Engineer and Architect at iBurst, presented their experience with the IBM Real-Time Compression Appliance in front of NetApp NAS storage arrays. Second, Jerry Haigh, IBM offering manager for IBM System Storage, presented the new Real-Time compression feature announced this week on IBM SAN Volume Controller (SVC) and Storwize V7000.
iBurst is the #1 wireless telecom in South Africa. They also offer cable broadband and VOIP. They have 200 employees servicing 120,000 subscriber households. They need to keep five years' worth of text files, and have chosen real-time compression for their NAS storage. They adopted the technology before IBM acquired the Storwize company, and have been using it for the past six years.
The monetary savings from compression was used to purchase Performance Accelerator Modules (PAM) cards for their NetApp NAS gear, which benefit from the compression (more data stored in SSD to improve performance).
For backup, they use NDMP with Symantec NetBackup that keeps data in its compressed form as it is written to tape. They have an IBM TS3100 library with LTO tape as the backup repository.
Jerry Haigh presented Real-Time Compression for primary disk data. Unlike the competition, this is designed to be used with primary data, including databases, and compresses in real time, not post-process. In some performance tests, DB2 data compressed on 48 drives out-performed the same data uncompressed on 96 drives. In another test, focused on the VMware VMmark benchmark, the compressed data delivered the same or better performance as uncompressed. In a third test, with SVC virtualizing XIV running the Oracle ORION test, the Oracle databases compressed 50 to 64 percent, and had better performance.
For those who already have SVC or Storwize V7000, consider a 45-day trial to check out compression for yourself.
NAS File Systems: Access and Authentication
Mark Taylor, IBM Technical Specialist for SONAS, N series and Storwize V7000 Unified, presented the nuances of authentication and authorization for NAS file systems. The differences between these two are:
Authentication - verifying that you are who you claim to be.
Authorization - determining whether you are permitted to do what you are trying to do.
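The distinction between the two checks can be sketched in a few lines of Python. The user names, password, and permission sets below are made up for illustration.

```python
# Toy illustration of the two separate checks a NAS system performs.
USERS = {"alice": "s3cret"}                 # authentication database
PERMISSIONS = {"alice": {"read", "write"}}  # authorization database

def authenticate(user, password):
    """Authentication: are you who you claim to be?"""
    return USERS.get(user) == password

def authorize(user, action):
    """Authorization: are you permitted to do what you are trying to do?"""
    return action in PERMISSIONS.get(user, set())

print(authenticate("alice", "s3cret"))  # True
print(authorize("alice", "delete"))     # False: known user, but not permitted
```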
(Prior to working with SONAS, my only experience with access and authentication in NAS was setting up my LAN at home, which connects my Mac, Linux and Windows machines. I have both N series and SONAS at the IBM Executive Briefing Center in Tucson, Arizona, so I know first-hand how complicated NAS access and authentication systems can be.
A few months ago, I taught "Intro to NAS" as one of my topics at the Top Gun class in Argentina and Brazil. Several of the students had mentioned they thought they knew NAS solutions but had not realized all the technical issues with access and authentication that I discussed in my presentation.)
Mark explained the differences between Windows NTFS-style Security Identifiers (SIDs) and UNIX-style user and group identifiers (UIDs, GIDs). For NAS solutions that support both CIFS and NFS, there are four options:
Microsoft Active Directory (AD) extended with Identity Management for UNIX, formerly known as Services for UNIX (SFU). AD servers normally store SID information, but the extensions add extra attributes to hold UID/GID mappings.
AD with Network Information Service (NIS) server. The problem with this approach is that AD and NIS are separate databases, and you need to coordinate updates to them, and their backups.
Lightweight Directory Access Protocol (LDAP) with SAMBA extensions. LDAP holds UID/GID information, and the SAMBA extensions add extra attributes to hold SID mappings.
Local mapping. The dangerous part of local mapping is that the storage admin is also the security admin, and you may want different people doing these roles.
Of these four methods, Mark recommends the first and third as best practices for multi-protocol authentication.
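Whichever directory holds the data, the mapping itself boils down to a two-way lookup between SIDs and UID/GID pairs. Here is a minimal sketch; the SID string and numeric IDs are made up, and a real directory service would store these as attributes rather than a Python dict.

```python
# Hypothetical mapping table of Windows SIDs to UNIX UID/GID pairs, as an
# AD-with-SFU or LDAP-with-SAMBA directory might effectively store them.
SID_TO_UNIX = {
    "S-1-5-21-1004-513-1105": (5001, 100),  # made-up SID -> (UID, GID)
}
UNIX_TO_SID = {uid: sid for sid, (uid, _gid) in SID_TO_UNIX.items()}

def to_unix(sid):
    """A CIFS user arrives with a SID; NFS-side checks need a UID/GID."""
    return SID_TO_UNIX[sid]

def to_sid(uid):
    """An NFS user arrives with a UID; CIFS-side ACLs need a SID."""
    return UNIX_TO_SID[uid]

print(to_unix("S-1-5-21-1004-513-1105"))  # (5001, 100)
print(to_sid(5001))                       # S-1-5-21-1004-513-1105
```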
N series: SID-to-UID mapping, UID-to-SID mapping
SONAS and Storwize V7000 Unified: SID-to-UID/GID mapping, NFS v4 ACLs
NFS v4 ACLs
Mark then explained how NFS v4 ACLs work, basically an ordered collection of "Access Control Elements" or ACEs. Each ACE on the ACL may "allow" or "deny" the request. You want to avoid "Inheritance" as that can cause problems and unexpected results.
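The ordered, first-match nature of NFSv4 ACLs can be sketched in Python. This is a simplified model (real NFSv4 evaluates each requested access bit separately and matches groups as well as users), with made-up principals and permissions.

```python
# Each ACE is (type, principal, permissions); type "A" allows, "D" denies.
# ACEs are evaluated in order, and the first matching entry decides.
def check(acl, principal, requested):
    for ace_type, who, perms in acl:
        if who == principal and requested in perms:
            return ace_type == "A"
    return False  # no matching ACE: denied by default

acl = [
    ("D", "bob",      {"write"}),          # explicit deny comes first
    ("A", "bob",      {"read", "write"}),  # never reached for bob's writes
    ("A", "EVERYONE", {"read"}),
]
print(check(acl, "bob", "write"))  # False: the deny ACE matched first
print(check(acl, "bob", "read"))   # True: the allow ACE matched
```

Ordering is why careless inheritance causes surprises: an inherited ACE inserted ahead of an explicit one can silently change the outcome.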
That's it folks. Next week, I am spending time with my research buddies at the Almaden Research Center near San Jose, California, and then it is off to Moscow, Russia to kick off a series of IBM events called "Edge Comes to You" (ECTY).
The ECTY conferences will be a smaller subset of the Edge conference here in Orlando, but offered in other countries for those who were unable to travel to the United States.
This week I am in Orlando, Florida for the IBM Edge conference. Thursday evening after all the other sessions, we had a Free-for-All, a Q&A panel across all storage topics, moderated by Scott Drummond. The conference officially ends at noon tomorrow, but for many, this is the last session, as people fly out Friday morning. Here are the questions and the panel responses during the session.
When will IBM unify their storage management between Mainframe z/OS and the distributed systems platforms?
IBM offers a Change and Configuration Management Data Base (CCMDB) for this purpose, with appropriate collectors for z/OS and distributed systems, but it has not sold well.
When will IBM devices have RESTful interfaces?
Both IBM Systems Director and IBM Tivoli Storage Productivity Center (TPC) offer RESTful APIs. IBM Systems Director can manage z/VM and Linux on System z, as well as Power Systems and x86 based distributed systems. Since October 2008, IBM's Project Zero introduced RESTful interfaces to PHP and Groovy software running on WebSphere sMash environments. We have not heard much about this since 2008.
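A RESTful API means a management resource is just an HTTP URL you can query with standard tooling. The sketch below builds such a request with the Python standard library; the host name, port, resource path, and query parameter are all hypothetical, so check the product's API reference for the real endpoints.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical endpoint -- the real resource paths are product-specific.
BASE = "https://tpc.example.com:9569/srm/REST/api/v1"

def storage_systems_request(props):
    """Build a GET request asking for selected properties of storage systems."""
    query = urlencode({"properties": ",".join(props)})
    return Request(f"{BASE}/StorageSystems?{query}",
                   headers={"Accept": "application/json"})

req = storage_systems_request(["Name", "Status"])
print(req.full_url)
```

With a live server, passing `req` to `urllib.request.urlopen` would return JSON that any script or tool can consume, which is the point of REST over proprietary interfaces.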
Will IBM TPC support NPIV on Power Systems?
TPC 5.1 has toleration support for this, showing the first port connection discovered, but not all connections, and we expect to retrofit this toleration to TPC 4.2.2 Fixpack 2. Hopefully, we will have full support in a future release.
We would like TPC for Replication to run on Linux for System z. We do not run z/OS at the disaster recovery site location.
Submit an IBM Request for Enhancement [RFE] for this. We have TPC for Replication on z/OS, as well as the distributed systems version that runs on Windows, Linux and AIX.
We have enhancements we would like to see for XIV and SONAS also, can we use the RFE process for this also?
Yes, submit the requirements for our review.
We heard the Statement of Direction that there would be storage integrated into the PureSystems. What exactly does that mean?
The PureSystems family of expert-integrated systems is based on a new chassis that has a front part, a midplane, and a back part. All IBM System Storage products that support x86 and Power Systems can work with PureSystems. However, IBM does not yet offer storage that fits in the front part of the PureFlex chassis, but the Statement of Direction indicates that we intend to offer that option. Until then, the IBM Storwize V7000 is the storage of choice that can be put into the PureSystems rack, but outside the individual chassis.
We see some features like Real-Time Compression being put into the SAN Volume Controller (SVC), and other features put into the back-end devices. How are we supposed to make sense of this?
IBM's new pilot program, the SmartCloud Virtual Storage Center, brings these all together. In general, we have design teams of system architects that determine which features go in which products, and prioritize accordingly.
We heard the IBM Executives during the opening session indicate that IBM's strategy involves supporting Big Data, but I haven't seen any storage that supports native Hadoop interfaces. Did I miss something?
First, I want to emphasize that Big Data is more than just MapReduce workloads. IBM offers Streams and BigInsights software to handle text, as well as Business Intelligence and Data Warehouse solutions for structured data. IBM's General Parallel File System (GPFS) has a Shared-Nothing-Cluster (SNC) mode with Hadoop interfaces that runs twice as fast as Hadoop's native HDFS file system. The storage products we recommend for Big Data are the SONAS and the DCS3700 disk systems, as both are optimized for the sequential workloads Big Data represents.
Every time we upgrade our SVC, we review the list for SDDPCM multi-pathing and see that we need to upgrade our back-end DS8000 microcode to recommended levels. Can we get a list of combinations that work from other customers?
The advantage of storage hypervisors like SVC is that we can separate the multi-pathing driver from the back-end managed disk systems. You only need the SDDPCM to support the SVC, not the back-end devices. For the most part, SVC has not dropped support for any level of previously supported OS or multi-pathing software.
On SVC, when we migrate volumes (vDisks) from one storage pool to another, we would like to throttle this process during FlashCopy.
Yes, we had several requests like this, which is why we now recommend using Volume Mirroring to perform migrations. In fact, the GUI wizard uses Volume Mirroring by default when migrations are performed. As for throttling, IBM has implemented "I/O Priority Manager" that offers Quality of Service classes for DS8000 and XIV Gen3, and might consider porting this to other products in our portfolio.
Sizing systems is an art. I just need to know if the DS8000 is running hot. Can we have the equivalent of "red lines" for our disk systems similar to automobile engines?
Storage Optimizer was added to TPC 4.2 to help in this area, identifying heat-maps for IBM DS8000, DS6000, DS5000, DS4000, SVC and Storwize V7000. We recommend you look at the performance violation reports.
How can we evaluate the characteristics of our workloads?
Yes, TPC can do this.
When we are replacing non-IBM storage with IBM, we don't have good tools to evaluate the non-IBM equipment. What is IBM doing for this?
IBM's Disk Magic modeling tool can take inputs from a variety of sources, including iostat from the servers themselves. You can also install a 90-day trial of TPC to help with this.
We really like EMC's "Grab" program, does IBM have one also?
Updating the Host Attachment Kit (HAK) for AIX is quite painful for the SVC. We prefer the method employed for the XIV.
Thanks for the feedback.
For SVC, we need to correlate disk with VMware and VIOS. Can we get vSCSI information on VIOS?
TPC 5.1 has this support, and we believe it has been retrofitted to TPC 4.2.2 Fixpack 2, coming out this month.
Currently, with SVC, when volumes are part of a Global Mirror (GM) session, we need to cancel GM, expand the source volume, expand the target volume, then restart GM. We would like this to be fully automated and non-disruptive.
Sounds like a great requirement to submit for the RFE process.
Can we get an RSS Feed for the RFE community?
Yes, you can subscribe to it. You can also set up "Watch Lists".
Thanks to all of the IBM experts on the panel for their participation at this event!
This week I am in Orlando, Florida for the IBM Edge conference. Here is a recap of Day 4 afternoon sessions which related to Cloud computing.
IBM SmartCloud Enterprise -- Object Storage
George Contino, IBM GTS Consultant for Cloud Storage Service Enablement, presented IBM's latest Object Storage offering, based on an alliance IBM formed with Nirvanix in October 2011 and launched January 31, 2012. It is part of the IBM SmartCloud Enterprise system.
IBM currently has two datacenters for this now, Secaucus NJ and Frankfurt Germany, but will have five by end of 2012, and hopefully seven datacenters by mid-year 2013.
The storage is then divided in several layers:
Customer master account, assigned a 128-bit encryption key
Name spaces by department or LOB
User file objects
The objects are given random names, with the real customer-assigned file names stored elsewhere, to provide additional privacy through obfuscation. For added security, it uses Two-Factor Authentication, requiring the users to provide both the 128-bit encryption key and the password.
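The obfuscation scheme amounts to storing each object under a random name and keeping the customer's real file name only in a separate catalog. Here is a toy sketch of that idea; the catalog dict stands in for whatever metadata store the service actually uses.

```python
import secrets

# Toy model of name obfuscation: the object lands in the cloud under a
# random name, and the customer-assigned file name lives only in a catalog.
catalog = {}  # customer file name -> random stored object name

def store(filename):
    stored = secrets.token_hex(16)  # random 128-bit object name
    catalog[filename] = stored
    return stored

stored = store("payroll-2012.xls")
print(len(stored))  # 32 hex characters that reveal nothing about the content
```

Someone browsing the raw object store sees only opaque names; without the catalog (protected by the account's credentials), the objects cannot be tied back to their owners.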
There are three ways to access data:
Proprietary API - An API is available on Windows and Linux. Symantec NetBackup, BackupExec and Commvault Simpana have already coded to the Nirvanix API to allow backups to be stored in the Nirvanix storage cloud. IBM InfoSphere Optim can archive data to the Nirvanix storage cloud.
CloudNAS - Nirvanix provides software that offers CIFS and NFS interfaces and converts them to the Nirvanix API. IBM Tivoli Storage Manager can send backups and archives to the Nirvanix storage cloud using this approach.
Cloud Storage Gateway - Third parties have developed hardware that runs the CloudNAS software, or codes directly to the API, to provide standard interfaces to the local clients, and provides access to the Nirvanix storage cloud. Two examples were the Panzura File System Controller and the TwinStrata Cloud Array Gateway.
One of Nirvanix's partners is OxygenCloud, which allows mobile/laptop access to work files. This includes security checks on Active Directory or LDAP, AES-256 bit encryption and HTTPS protocol support. For example, if you had to give a bunch of PDF files to your clients outside your company, you could create a folder, and send out a URL link to the clients, and this link would be valid for the next 14 days for them to download the files.
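Time-limited share links like the 14-day URL described above are commonly implemented by signing the path and an expiry timestamp, so the server can validate a link without a database lookup. The sketch below is a generic illustration with an assumed signing secret and host name, not OxygenCloud's actual mechanism.

```python
import hashlib, hmac

SECRET = b"hypothetical-signing-key"  # assumption: server-side shared secret

def sign(path, expires):
    return hmac.new(SECRET, f"{path}|{expires}".encode(),
                    hashlib.sha256).hexdigest()

def make_link(path, now, days=14):
    """Build a share URL valid for `days` days from `now` (epoch seconds)."""
    expires = now + days * 86400
    return f"https://share.example.com{path}?expires={expires}&sig={sign(path, expires)}"

def check_link(path, expires, sig, now):
    """Reject expired or tampered links."""
    return now < expires and hmac.compare_digest(sig, sign(path, expires))

now = 1_340_000_000  # fixed "current" time for a deterministic example
url = make_link("/folder/report.pdf", now)
expires = now + 14 * 86400
print(check_link("/folder/report.pdf", expires, sign("/folder/report.pdf", expires), now))
# the same link checked 15 days later would fail the expiry test
```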
How University of Wisconsin-Milwaukee (UWM) moved SAP to the Cloud
Maik Gasterstaedt, IBM Technical Enablement for SAP, Storage and Cloud solutions, presented this session on the deployment of an SAP cloud at UWM. Worldwide, SAP has established five University Competency Centers (UCCs) to provide SAP cloud services to other universities, and UWM is one of these five.
Basically, the UWM manages SAP instances that are then "rented out" to 107 other universities. An SAP instance represents a "sample company" that could be used in a course curriculum, for example, "Global Bikes, Inc.", "Fitter Snacker", or IDES. An SAP Client represents a fresh copy of the data for this sample company.
UWM charges each University per "SAP client" per semester. Suppose a professor will teach three classes on SAP. He can arrange the SAP clients depending on how much he is willing to spend.
Get one SAP Client to be shared across all three classes. All three classes would be using the same sample company.
Get an SAP Client for each class. Each class could be based on the same or different sample companies.
Get one or more SAP Clients for each class. In this case, for example, a class could get two or more sample companies.
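Since UWM charges per SAP client per semester, the professor's three arrangements differ only in how many clients are rented. A quick sketch makes the trade-off concrete; the price is an assumption for illustration, as the talk did not disclose UWM's actual rates.

```python
PRICE_PER_CLIENT = 1000  # assumed price per SAP client, per semester

def semester_cost(num_clients):
    return num_clients * PRICE_PER_CLIENT

# The three arrangements for a professor teaching three classes:
options = {
    "one shared client for all three classes": 1,
    "one client per class": 3,
    "two clients per class": 6,
}
for name, clients in options.items():
    print(f"{name}: {semester_cost(clients)}")
```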
The problem was that they were running on Sun servers approaching end-of-life. They decided to switch to IBM, running 43 SAP Instances on AIX with two Power750 servers, 7 SAP instances on Windows guests of VMware across two BladeCenter chassis using HS22 blades, XIV storage, backed up by Tivoli Storage Manager and Tivoli Storage FlashCopy Manager. They can run 50 SAP clients on each SAP instance. Each client could be rented out to different professors at different universities.
They started installation April 1, and the entire system was running in production by August 15, less than five months end-to-end.
The results were stunning. SAP instance provisioning used to take 5 days, and now takes 12 hours. Backups that used to take an hour now complete in about 30 seconds.
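Those figures work out to roughly a 10x improvement in provisioning and a 120x improvement in backup time:

```python
# Speedups implied by the reported before/after figures.
provisioning_before_h, provisioning_after_h = 5 * 24, 12   # 5 days -> 12 hours
backup_before_s, backup_after_s = 60 * 60, 30              # 1 hour -> 30 seconds

print(provisioning_before_h / provisioning_after_h)  # 10.0x faster provisioning
print(backup_before_s / backup_after_s)              # 120.0x faster backups
```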
The conference is almost over folks! Just a few sessions tomorrow and then it is all done.
With my colleague, Mike Griese, presenting TPC 5.1 and the IBM SmartCloud Virtual Storage Center earlier this week, you might wonder what is left to say. Mike's session was intended more for clients who already have TPC deployed, but my session is more of an introductory session.
I was the original architect of the product back in 2000-2003, so have some insight into the history, motivations and design principles applied to each version of the product. It has evolved nicely over the years, and while I am no longer working full-time on the product, I am still very much involved, and am consulted by the current architects and product managers for direction and opinion going forward.
I presented an overview of the overall product as it stands today in its current v4.2.2 version, and gave a few highlights of what to expect in the upcoming TPC 5.1 announced this week.
Encryption and Key Management in the Cloud: The Top 6 Concerns to Ensure a Secure and Reliable Solution
This was a split session with two speakers. The first speaker was Richard Moulds, VP of Strategy and Marketing from Thales, and the second speaker was Gordon Arnold, IBM Senior Technical Staff Member (STSM) and Software Architect for Tivoli Security Management.
Richard presented security issues in the cloud. He is an author of several books, including "Key Management for Dummies" and "Data Protection and PCI Compliance for Dummies". Thales is a large French company of 70,000 people that few in the USA have heard of, but it is a major player in the area of IT security. He presented survey results about people's perceptions and attitudes towards encryption and security issues in the cloud.
The security threats in the Cloud were presented as the "Seven Deadly Sins":
Data loss and leakage, including data that is not deleted when resources are re-used for other purposes
Shared technologies, especially in Cloud environments that do not have robust multi-tenancy
Malicious insiders, such as administrators being bribed to provide access to sensitive data
Account or service hijacking, including those that pretend to be someone else, asking for password resets
Insecure APIs for applications and services, many of which were developed quickly and recently, perhaps without robust review from a security perspective
Abuse of the Cloud, such as using the Cloud itself to crack passwords or break decryption passwords through parallel processing
Unknown risk profile, as few Cloud providers have certified security capabilities
Gordon Arnold (IBM) presented IBM's encryption and key management. IBM has two products: IBM Tivoli Key Lifecycle Manager (TKLM) and IBM Security Key Lifecycle Manager (SKLM). These are KMIP v1.0 compliant today. The OASIS group is currently reviewing KMIP v1.1 enhancements that include some suggestions from IBM.
IBM's use of Key Encrypting Keys on disk and tape has proven to be quite useful. The only copy of the encryption key is on the media, and it is itself encrypted by an authorization key. If you need to defensibly delete the data for compliance reasons, you can simply destroy the authorization key, leaving the encrypted copy of the key on the media unusable.
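The wrap-and-destroy idea can be illustrated with a toy cipher. This is strictly a model of the key hierarchy (a data key encrypting the data, wrapped by an authorization key); real drives use AES and hardware key wrap, not the throwaway keystream below.

```python
import hashlib, secrets

def keystream(key, n):
    """Derive n pseudo-random bytes from a key (illustrative, NOT secure crypto)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(data, key):
    """Symmetric toy cipher: applying it twice with the same key round-trips."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

data_key = secrets.token_bytes(32)     # encrypts the data; stored on media...
auth_key = secrets.token_bytes(32)     # ...only in wrapped form, under this key
ciphertext = xor(b"sensitive records", data_key)
wrapped_key = xor(data_key, auth_key)  # the only copy of data_key on the media

# Normal read path: unwrap the data key, then decrypt.
print(xor(ciphertext, xor(wrapped_key, auth_key)))  # b'sensitive records'

# Defensible delete: destroy auth_key. wrapped_key can no longer be unwrapped,
# so the ciphertext sitting on the media is unrecoverable.
```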
At lunch, I spoke with Scott Laningham who was doing video interviews. For years, Scott was the #1 blogger on IBM developerWorks until I took over the title last year. We discussed working on a video in the future on this.
Earlier this year, I wrote a Web article titled [Data Footprint Reduction] which covered data deduplication and compression, and was asked to present this at IBM Edge. I have expanded it to include:
Space-Efficient Point-in-Time copies
After I presented the basic concepts, Sanjay Bhikot, a Unix and Storage admin at RICOH, presented his real-world experiences with data deduplication using the IBM ProtecTIER, and his Beta experience with real-time compression on the SAN Volume Controller (SVC).
IBM Active Cloud Engine Implementation on IBM SONAS 1.3 and IBM Storwize V7000 Unified
John Sing (IBM) presented the latest enhancements in the v1.3.2 release of SONAS and Storwize V7000 Unified.
Introducing VMware vSphere Storage Features
Fellow blogger Stephen Foskett presented this session on VMware's storage features. This included VMware APIs for Array Integration (VAAI), VMware vSphere APIs for Storage Awareness (VASA), vCenter plug-ins, and a new concept he called "vVol" which de-multiplexes the "I/O Blender" that server hypervisors create, by tagging individual requests to individual OS guests to provide added benefit. IBM is a leading reseller of VMware, so it makes sense that most of our storage meets all of Steve's requirements for recommendation.
IBM's Storage Strategy in the Smarter Computing Era
Last year, I presented this on the fourth day of the conference, and feedback we received from attendees was that this should have been presented sooner in the week, as it provides great context for the more detailed product presentations.
To address this concern, the IBM executives presented IBM strategy on Monday's keynote session, but allowed me to present this on Wednesday for several reasons:
You may have missed the keynote session. For example, you may not have arrived in time to hear the executives speak due to weather or mechanical problems causing travel delays.
You may have attended the keynote session, but want to hear it again. Maybe you were a bit hung-over, or just overwhelmed with the size and scope of this event. I have read that for strategic topics, audiences may need to hear the message five to seven times before they truly appreciate and understand it.
You may want to ask questions, and explore the implications in more detail. While keynote sessions can reach a broader audience, the communication is very much uni-directional. With break-out sessions of a few hundred people, the venue is more intimate and affords opportunities for information exchange.
The title of this session rolls off the tongue nicely, much like "James and the Giant Peach", "Harold and the Purple Crayon", or "Charlie and the Chocolate Factory".
When people say they are interested in "Cloud Storage", what exactly do they mean? After discussions with hundreds of clients, IBM has worked out a "taxonomy" that identifies four distinct types of storage:
In this session, I presented how IBM SONAS addresses all four of these categories, as well as other IBM storage products that can address specific categories in the taxonomy.
In the evening, the attendees at IBM Edge joined the attendees from Innovate2012 (focused on IBM Rational products) at SeaWorld, with BBQ dinner, rides, Shamu the whale show, and a concert featuring Foreigner!