Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Tony Pearson is a Master Inventor and Senior Software Engineer for the IBM Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
Last Tuesday, we had our official "Grand Opening" for the new Tucson Executive Briefing Center!
We sent out fancy invitations to all the IBM executives who supported this center, local dignitaries from the Tucson and State of Arizona level, and all of the IBM employees on the Tucson campus.
Since our new center is significantly cozier (5,700 square feet versus our previous 15,000 square feet), we split the day into two separate events: the first for the IBM executives and local VIPs, and the second for the rest of the IBM employees on campus.
Of course, there is no free lunch. The day started out with a series of speeches. My manager, Doug Davies, was the master of ceremonies to introduce each speaker.
Alistair Symon, IBM Vice President of Enterprise Storage, explained how storage affects everyone's lives. If you use an ATM to withdraw money, for example, you are most probably using IBM System Storage behind the scenes. Nearly all of the IBM disk and tape storage products are designed here in Tucson.
Bruce Wright (shown here) directs the University of Arizona's Office of University Research Parks, serves as CEO of the UA Tech Park, and is the founder and president of the Arizona Center for Innovation. Bruce said a few words on how pleased he was that IBM decided to reverse its July 2011 decision to leave Tucson. The UofA owns all the property, renting four of the eleven buildings back to IBM, so it is effectively our landlord. Next year will mark the 20th anniversary of IBM's sale of the technology park to the University.
Tucson Councilwoman Shirley Scott talked about the importance of high-paying jobs to the local economy. While IBMers in Tucson are paid less than our counterparts in San Jose, Austin, Raleigh or Poughkeepsie, we are certainly [paid more than the average Tucsonan], thus helping to raise the standard of living here.
Dr. Michael Varney, president and CEO of the local Tucson Metropolitan Chamber of Commerce, praised IBM for its strong reputation in ethics and diversity.
My new second-line manager, Karl Duvalsaint, and my new third-line manager, Doug Dreyer, emphasized the importance of co-locating Briefing Centers in sites that have Research and Development activity. It is important for clients to interact directly with developers, and it is also good for developers to understand directly from clients their needs, preferences and requirements. Worldwide, the IBM Systems and Technology Group has only twelve Executive Briefing Centers, and the Tucson EBC is one of them.
This is not to say that IBM does not have centers in other locations. Our newest client center in Singapore is a shining example. Of course, when clients visit there, the experts who speak with them need to be flown in. Doug Dreyer mentioned that IBM plans to launch six such centers in Africa as well.
Next was the ribbon cutting. From left to right: Lee Olguin (our Gunnery Sergeant), Tucson Councilwoman Shirley Scott, UofA's Bruce Wright, IBM VP of Program Management Calline Sanchez, my second-line manager Karl Duvalsaint, IBM VP Alistair Symon, my first-line manager Doug Davies, Tucson Chamber of Commerce President Dr. Michael Varney, and my third-line manager Doug Dreyer. A member of the local high school band provided the drum roll.
Once the ribbon was cut, the IBM executives and local VIPs were brought in to see the new facility, which has two large rooms, one common dining area, an 800-square-foot green data center to showcase our products, our own set of restrooms, a galley to stage the food and beverage service, and two smaller rooms for private conversations or conference calls. A local high school band provided live music throughout the day.
Wrapping up my coverage of the 2013 IT Security and Storage Expo in Belgium, I noticed some interesting things in the other booths.
The EMC booth had a whiteboard so that clients could do some one-on-one collaboration. All of their cocktail waitresses were wearing sharp pin-stripe coats with matching mini-skirts.
Another booth had a "virtual graffiti wall". Using a "digital spraycan", you could write on the wall. I am not sure what connection this had with anything the company had to offer, but perhaps they also wanted to collaborate with attendees on solutions. In either case, it was very cool, and brought a lot of traffic.
(FTC Disclosure: I work for IBM. I was not paid to mention any of the other companies, their products or people on this blog post. Mentioning other companies is not to be considered an endorsement of any kind.)
There were some interesting costumes. Leila from [Aerohive] wore a "bee costume" complete with black wings. Hans from STS wore a bright orange business suit. (Orange is the national color of Belgium). Sophie from Fortinet handed out champagne. The plastic glasses were cones that snapped onto her tray, but they had no flat bottom to rest your glass on, so you had to hold it the entire time until you finished drinking. A sticker of Homer Simpson eating the Apple logo shows the Belgians have a sense of humor!
The NetApp booth had a huge banner claiming that "Data ONTAP" was the #1 storage OS. Obviously Windows, AIX, Solaris and Linux aren't considered "storage operating systems" per se. Is NetApp claiming they outsell FreeNAS, the only other storage OS that I can think of?
While IBM and I.R.I.S-ICT easily won the "Best Looking Big Booth" award, I have to give the "Best Looking Small Booth" award to my friends at Hitachi Data Systems. Like EMC, the Hitachi team did not have any equipment on the floor, but they made use of their tiny space by having a Japanese theme, with cocktail waitresses in kimonos.
Continuing my coverage of the IT Security and Storage Expo in Brussels, Belgium, we had a nice reception Wednesday evening.
Clara handed out Caesar chicken salads. Joelle handed out small rolled-up pieces of duck.
Ilsa, an IBM expert in System x, VMware and the PureSystems family, was on hand to help with the demos and any client questions. I.R.I.S.-ICT employee Ans is only in her 20's, but is recognized as one of Belgium's leading experts on the System z mainframe. I used to be the lead architect for DFSMS on z/OS, so we had plenty to talk about.
Of course, the best time for the press to ask for interviews is during the reception, where everyone is relaxed and ready to speak. I am "media-trained" which allows me to speak to the press about IBM matters. I do a lot of these interviews either over the phone, or on camera.
I took a picture to capture the typical setup. Mandy on the left is asking me questions, while camera operator Lisa focuses on my body language. The trick is to spend 80 percent of the time focused on your interviewer, and then 20 percent looking into the camera for strategic pauses. If Mandy decides to use any of the footage, she will be sending me the YouTube video link!
Hans and Sophie from Veeam stopped by the IBM booth to say hello. (See 2010 Aug 27 blog post comparing Veeam to Tivoli Storage Manager). These two DJ's kept the IBM and I.R.I.S-ICT booth hopping.
Belgium is a small country, and many of the IT storage people know each other. This made for quite the party! Our group closed up the booth around 8:30pm and went over to join our friends at Arrow and Huawei. Here is Maiva from Huawei.
Continuing my coverage of the IT Security and Storage Expo in Brussels, Belgium, we had some great storage solutions on display at the IBM and I.R.I.S-ICT booth.
Here my IBM colleague Tom Provost is showing the front of the "Smarter Office" solution. The second photo gives the view from behind. While I always explained the solution from the front of the box, many of the more technical attendees at this conference wanted to inspect the ports in the back.
This sound-isolated 11U solution combines the following:
The [IBM Storwize V3700] with 300GB small-form-factor (SFF) drives provides shared storage for the servers.
Two [IBM System x3550 M4 servers] that can run VMware, Hyper-V or Linux KVM server hypervisor software for your Windows and/or Linux applications. These are two socket servers that can have up to 16 x86 cores each.
A Juniper EX2200 switch to network the servers and storage together.
A Local Console Manager (LCM) with rackable keyboard, video, and mouse.
In this next example, the IBM team combined a BladeCenter S chassis that can hold six blade servers, with a Storwize V7000 Unified which offers FCP, iSCSI, FCoE, NFS, CIFS, HTTPS, SCP and FTP block and file protocols.
If those configurations are too small for your needs, consider the Flex System chassis or full PureFlex system frame. The rack-mountable 10U chassis can hold the Flex System V7000 and 10 compute nodes. The PureFlex frame can hold up to four of these chassis.
IBM and I.R.I.S-ICT also had an IBM XIV Gen3 and a TS3500 Tape library on display.
Continuing my coverage of the IT Security and Storage Expo in Brussels, Belgium, here is my post on the presentations I gave during the week.
There were four presentation slots each day. Of the five rooms, I was assigned one room, room 3, in which to give all of my presentations. My room was quite large, with sixty seats.
It is a good idea for public speakers in Belgium to understand Dutch, French, German and English. In recognition of the fact that Belgians are multi-lingual, I started each session with "Goede Middag, Bonjour and Good Afternoon!" and ended each with "Dank U, Merci and Thank you for attending!"
12:00 to 12:30pm: "What is big data? Architectures and Practical Use Cases" (presented both days)
12:45 to 1:15pm: "An IBM Storage solution for small and mid-size business? The Storwize V3700!" (presented both days)
1:30 to 2:00pm: "A New Generation of Storage Tiering" (presented both days)
2:15 to 2:45pm: "Replication for High Availability, Business Continuity and Disaster Recovery" (day one) and "Storage, Server and Network in one Flexible and Integrated solution! The PureSystems family" (day two)
The sessions were all half-hour slots. The only presentation that I had a challenge getting down to 30 minutes was "A New Generation of Storage Tiering", in which I was asked to cover Easy Tier sub-LUN automated tiering, server-to-storage cooperative caching, Texas Memory Systems, Hierarchical Storage Management (HSM), Active Cloud Engine, and SmartCloud Storage!
Helping me out were three local IBM interns. From left to right: Joelle, Clara and Bryan. There were only short breaks between sessions, and all of that time was consumed by one-on-one discussions with clients, so the interns were kind enough to fetch me snacks and drinks.
Joelle and Bryan speak Dutch, which is similar to the local Flemish language. Clara speaks French, which came in handy for translations.
I would like to thank my room monitors: Jolijn, Ella and Chloe. All three are local college students hired by the conference for the two days to scan name badges and count bodies in seats.
(I had to ask Jolijn to write her name on a piece of paper because it is Dutch and I had no clue how to spell it for this blog post.)
While it might appear that room 3 was "The Tony Pearson Show -- all Tony, all the time!" there were actually worthwhile sessions in the other rooms. Fellow blogger Jon Toigo [known for his DrunkenData blog] presented "Storage Infrastruggle 2013 -- Containing Storage Costs without Sacrificing Access, Protection or Management". My IBM colleague Ron Riffe presented a vendor-neutral look at Storage Hypervisors.
If the attendees wanted copies of my presentations, they were directed to get their name badge scanned at the IBM and I.R.I.S-ICT booth, all the way at the other end of the hall, and my presentations would be emailed to them.
(For those who have missed it, you can find all five of my presentations uploaded to the [IBM Expert Network] on Slideshare.)
Finally, I would like to thank my IBM colleagues who helped me develop and review my presentations: Brigitte Van Den Eynde, Joe Hayward, Jeff Jonas, Tom Deutsch, Chris Saul, Marisol Diaz, Iliana Garcia, Harley Puckett, Jack Arnold, and Steve McKinney.
The Belgium IT Security and Storage Expo was a great success!
(I am back in the USA, in Portland, Oregon, this week, so these posts relate to last week.)
That is not to say I didn't encounter a few challenges during my week in Belgium. The first was getting to the venue. The Belgium Expo is a large complex of buildings to the north of the city. The local IBM team suggested I go to the facility a day in advance so that I would know where it was and how to get there.
I was staying in the center of town, in the Place Rogier section. I had many transportation options:
Take a taxi. It was raining this week, so finding a taxi was difficult.
Take the bus. Bus #260 goes directly from my hotel to the Belgium Expo, but it runs only once an hour.
Take the metro. The metro operates frequently, and the Heysel stop is right in front of the Belgium Expo complex.
Upon arrival at the building complex, I was unsure which building I needed to be in. Standing in front of the beautiful Building 5, I found a legend that provided the answer: Building 8. In front of Building 12 was a map that showed where Building 8 was located on the campus.
For this event, IBM joined forces with IBM Business Partner I.R.I.S-ICT to have a fabulous booth, with plenty of experts and equipment demos. As is often the case, the team had to work late into the night to get all the equipment set up, all the podiums and counters constructed, and the demos fully operational.
Apparently, I was not the only one to have troubles finding the place, so I did not feel alone. Some with cars drove around the complex several times before figuring out which parking lot to park in. Others parked at the first spot they found, and still ended up walking as much as I did.
For future reference, if you plan to attend any event at the Belgium Expo, (a) ask for more explicit directions, and (b) plan to do lots of walking!
Well, I am back safely from my trip last week to Chicago, and now I am writing this in Madrid, Spain, on my way to Brussels, Belgium for the IT Storage Expo.
For those who have asked how the construction on the new Tucson EBC is going, here are a few pictures I took on Friday. As you can see, it is coming along nicely. The official grand opening will be April 2.
Did you miss IBM Pulse 2013 this week? I wasn't there either, having scheduled visits with clients in Washington DC this week, only to have those meetings cancelled due to the [U.S. sequestration cuts].
Fortunately, there are plenty of videos and materials to review from the event. Here's a [12-minute video] interview between Laura DuBois, Program VP of Storage for industry analyst firm [IDC], and fellow IBM executive Steve "Woj" Wojtowecz, VP of Tivoli Storage and Networking Software.
(Update: Apparently, IBM had not secured re-distribution rights from IDC to post this video prior to my blog post. IBM now has full permission to distribute. My apologies for any inconvenience last week.)
The two discuss client opportunities and requirements for storage clouds and compute clouds. Client cloud storage requirements include backup and archive clouds, file storage clouds, and storage that supports compute cloud environments.
Here are some upcoming events related to IBM Storage!
If you sell IBM and/or Oracle solutions, please join me for IBM Oracle Virtual University 2013!
A few weeks ago, I recorded a session on IBM Storage: Overview, Positioning and How to Sell that will be available on demand starting tomorrow, February 26th, at the IBM Oracle Virtual University 2013.
It's one of 65 new sessions that will help IBM to surround Oracle applications with IBM infrastructure, services and industry solutions. Oracle software, after all, runs best on IBM hardware. Other highlights of Oracle Virtual University include a live executive State of the Alliance session with Q&A, Oracle keynote, updates by Oracle product managers, sessions on PureSystems, Selling IBM into an Oracle environment, Cloud, and much more.
There will be live technical teams on hand throughout launch day to answer your questions in real time, so I hope you can carve out 30 minutes or more on February 26th to take advantage of these available resources.
After helping launch the first Pulse back in 2008, I have sadly not been back since. Last year, I was invited to attend as a last-minute replacement for another speaker, but I was busy [having emergency surgery].
This year's [Pulse 2013] conference looks amazing. It will be held in Las Vegas, Nevada. Guest speakers Peyton Manning, 4-time NFL MVP football player, and Carrie Underwood, 6-time Grammy award winner, join IBM Software Group executives and experts to discuss how IBM Tivoli can help optimize your IT infrastructure.
Sadly, once again, I will not be there at Pulse. This time, I will be on the East Coast visiting clients instead, but my on-premise correspondent, Tom Rauchut, has informed me that he will be there. Hopefully, he will provide me something to write about.
Later in March, I will be in Brussels, Belgium for the Storage Expo, held March 20-21 at the Brussels Expo venue. I will be presenting several topics each day, as well as visiting clients in the area. This event is presented by IBM Belgium in association with IBM Business Partner IRIS-ICT.
If you plan to participate in any of these events, let me know!
Sadly, only 70 percent of doctors in the United States use Electronic Medical Record [EMR] systems. My own Primary Care Physician has made the switch, and told me how much he loves having ready access to the information he needs. EMR systems reduce costs, help manage risk, and improve healthcare outcomes. It is no surprise that the U.S. government has taken a [stick-and-carrot approach] to encourage doctors to use them.
A frequent topic at the Tucson Executive Briefing Center where I work is how to make the most use of IT for healthcare and life sciences. For much of 2011 and 2012, I was also one of the technical advocates assigned to Wellpoint Insurance, in support of their adoption of IBM Watson technology for healthcare.
Recently, I spoke with Jarrett Potts, my long-time friend and former IBM colleague, who now works as Director of Strategic Marketing over at STORServer. If you have never heard of STORServer, it is a company that makes purpose-built backup appliances.
What is a Backup Appliance? It is an integrated solution of hardware and software that serves a single purpose: backup and recovery. STORServer Enterprise Backup Appliance (EBA) combines IBM's high-end x86 M4 server, IBM disk and tape storage, and IBM Tivoli Storage Manager (TSM) backup software.
(Fun Fact: The 2012 IBM year-end financial results were announced last month. IBM not only continues its #1 lead in servers overall, but has the #1 market share for high-end x86 servers, market-leading disk and tape storage hardware, and market-leading backup software.)
To determine the appropriate size of your backup appliance, the folks at STORServer help you every step of the way. They figure out the number of TB you will back up every day, and even help configure all of the TSM server parameters to achieve the policies that make the most sense for your organization.
The appliance can back up every type of data, from databases and Virtual Machines (VMs) to documents, spreadsheets, and other unstructured data.
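To give a rough sense of the arithmetic behind such sizing, here is a back-of-envelope sketch. This is purely illustrative, with made-up numbers, and is not STORServer's actual sizing methodology:

    #!/bin/bash
    # Back-of-envelope backup capacity estimate (illustrative numbers only,
    # not STORServer's sizing method).
    FULL_GB=2000          # one full backup of 2 TB
    DAILY_CHANGE_PCT=5    # assume 5 percent of the data changes each day
    RETENTION_DAYS=30     # keep 30 days of incremental backups
    DEDUP_RATIO=2         # assume 2:1 data reduction

    INCREMENTAL_GB=$(( FULL_GB * DAILY_CHANGE_PCT / 100 * RETENTION_DAYS ))
    BACKEND_GB=$(( (FULL_GB + INCREMENTAL_GB) / DEDUP_RATIO ))

    echo "Daily incremental: $(( FULL_GB * DAILY_CHANGE_PCT / 100 )) GB"
    echo "30 days of incrementals: ${INCREMENTAL_GB} GB"
    echo "Estimated back-end capacity: ${BACKEND_GB} GB"

The real sizing exercise also has to account for backup windows, network bandwidth and tape copies, which is exactly why having the vendor walk through it with you is helpful.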
Are you then left with a solution too complicated to run yourself? No. The STORServer Console is an easy-to-use GUI for ongoing monitoring and maintenance. Plus, your friends at STORServer are only a phone call away in case you have any questions.
(FTC Disclosure: I work for IBM, and STORServer is an approved IBM Business Partner that uses IBM hardware and software to build their solution. I have no financial interest in STORServer, and was not paid by STORServer to mention their company or products on my blog. This post may be considered a celebrity endorsement of STORServer and its Enterprise Backup Appliances.)
Perhaps my readers feel that I am a bit biased in describing a TSM-based solution, and you want a second opinion. No worries, I understand. In the latest 165-page [2012 DCIG Backup Appliance Buyer's Guide], the STORServer models ranked very high. Here is an excerpt:
"Nowhere is this demand for purpose built appliances more evident than in the rise of purpose
built backup appliances (PBBAs) over the last few years and their anticipated growth rate
going forward. A recent market analysis performed by IDC found that worldwide PBBA revenue totaled $2.4 billion in 2011 which was a 42.4 percent increase over the prior year.
This scoring came into play in preparing this Buyer's Guide
as the STORServer EBA 3100 model scored so highly
overall that it fell outside of the two (2) standard deviations
that DCIG generally uses as a guideline for inclusion and
exclusion of products.
The reason DCIG included this model in this Buyer's Guide
whereas in other situations it might not is that DCIG is
unaware of any other backup appliance(s) from any other
providers that come close to matching the EBA 3100's
software and hardware attributes. As such, DCIG felt it
would be doing STORServer specifically and the market
generally a disservice by not highlighting in this Buyer's
Guide that such a backup appliance existed and was
generally available for purchase."
Backup Appliance Models
STORServer EBA 3100
Symantec NetBackup 5220 Backup Appliance
STORServer EBA 2100
STORServer EBA 1100
STORServer EBA 800
Symantec Backup Exec 3600 Appliance
The STORServer is ideal for small and medium-sized business (SMB), but can scale quite large to handle business growth. If you are unhappy with your current backup environment, and feel now is the time to look for a better way of taking backups, you won't go wrong choosing a solution based on IBM's market-leading server and storage hardware with Tivoli Storage Manager software.
Well, it was Tuesday again, and we had quite a lot of announcements here at IBM this week!
Over 1,800 clients attended the [Live February 5 webcast]! The announcements were all part of IBM's SmartCloud Storage portfolio. Here are the highlights:
STN7800 Real-time Compression Appliance
Back in October 2010, IBM announced the acquisition of Storwize, Inc., renaming its NAS-compression units to the IBM Real-time Compression appliances. Some folks were confused, so I had a blog post [IBM Storwize Product Name Decoder Ring].
IBM initially offered two models:
The [STN6500 model] had 16 1GbE Ethernet ports (16x1GbE) and a pair of four-core processors.
The [STN6800 model] had either eight 10GbE ports (8x10GbE), or four 10GbE plus eight 1GbE ports (4x10GbE+8x1GbE), and a pair of six-core processors.
Now, IBM offers the [STN7800 model], which can replace either of the ones above, offering 16x1GbE, 8x10GbE, and 4x10GbE+8x1GbE port configurations. It has a pair of eight-core processors to handle more robust Cloud Storage environments. See [Announcement Letter 113-012] for more details.
New XIV Gen3 model 214
With its awesome support for VMware, the XIV is often chosen for Cloud storage. The new XIV model 214 now offers up to a dozen 10GbE ports, or you can stay with the 22 1GbE ports available on previous models. These can be used for iSCSI host attachment and/or IP-based replication.
IBM strives to make each new model of every storage device more energy efficient than the last.
The new XIV model is no exception. The original XIV, introduced in 2008, consumed 8.4 kVA fully loaded. The XIV Gen 3 model 114 consumed 7.0 kVA. This new model 214 consumes only 5.9 kVA!
It has been almost three years since my now infamous post [Double Drive Failure Debunked: XIV Two Years Later]. Back then, the XIV offered only 1TB and 2TB drives, with rebuild time for 1TB drive of less than 30 minutes, and for 2TB less than 60 minutes.
The new XIV Gen3 software 11.2 release, available for both the 114 and 214 models, can now rebuild a 2TB drive in less than 26 minutes, and a 3TB drive in less than 39 minutes. There is also support specific to Windows Server 2012 including thin provisioning, MSCS, VSS, and Hyper-V. See [Announcement Letter 113-013] for more details.
SmartCloud Storage Access
IBM is the first major storage vendor to offer a product of this kind, so it may take a moment to explain.
The concept is simple. Rather than end-users having to ask IT every time they need some storage space, IBM created a self-service portal that frees up the IT department to work on more important transformational projects.
This is basically what people can already do with "Public Cloud" storage service providers, and IBM is now giving you the same capability with your "Private Cloud" storage deployment.
Here is the sequence of events. End users point their favorite web browser to the self-service portal, and log in using their credentials stored in your Active Directory or LDAP server database.
Once validated, the end-user can request new storage space, expand their existing space, or return space to the IT department. For new storage requests, users can have a choice of storage classes (such as Gold, Silver and Bronze) defined in the Tivoli Storage Productivity Center (TPC), either stand-alone or in the SmartCloud Virtual Storage Center.
But wait! Do you want to give every end-user a blank check to provision their own storage? Most IT staff are horrified at the thought.
Knowing this, IBM has included an option to put in an approval process, based on the end-user and the amount of capacity requested. The approver can be the cloud administrator, or someone delegated for approvals, known as an environment owner.
For some users, policies may restrict the storage classes as well. For example, Fred can only have Silver or Bronze, but not Gold.
Once the approval is obtained, TPC then issues the appropriate commands to the appropriate SONAS or Storwize V7000 Unified device. SmartCloud Storage Access can do this for thousands of storage devices across dozens of geographically dispersed locations.
Before, the Cloud Admin had to configure storage pools of managed disks, define file systems, dole out file sets to hundreds or thousands of users with hard quotas, and then configure shares based on the protocols required, like CIFS, NFS, HTTPS, etc.
With SmartCloud Storage Access, the Cloud admin still defines the pools and file systems, but then lets the self-service capability of the software create the file sets, set the quotas, and configure shares with the appropriate protocols. This greatly reduces the workload on the IT staff, and greatly improves the turn-around time for end-users to get exactly what they want, when they need it.
The next time you withdraw money from an ATM machine, fill up your gas tank at the self-service gas station, then serve your own salad at the salad bar and fill up your own soft drink at the fast food restaurant, you will realize and appreciate that SmartCloud Storage Access is a brilliant move for the IT staff.
Cloud administrators, environment owners, and end-users can all use SmartCloud Storage Access to monitor and report on storage usage.
(What does this have to do with Storage? When IBM got back into networking in a big way, they had to decide whether to combine it with one of the existing groups, or form its own group. IBM decided to merge networking with storage, which makes sense since the primary purpose of most networks is to access or transmit information stored somewhere else.)
Last April, the Wharton School and the Institute for the Future convened a one-day [After Broadband] workshop in San Francisco, California, that brought together a group of leading technologists, entrepreneurs, academics and policymakers to explore the future of broadband over the next decade.
Today is the last day of 2012, so it is only fitting to end the year looking forward to the future!
While I have been accused of being a historian, I consider myself a bit of a futurist. Since 2006, I have been blogging about the future of technology, including Cloud, Big Data, and the explosion of information. As a consultant for the IBM Executive Briefing Center, I present to clients IBM's future plans, strategies, and product roadmaps.
(Fellow blogger Mark Twomey on his Storagezilla blog has a humorous post titled [Stuff your Predictions], expressing his disdain for articles this time of year that predict what the next 12 months will bring. Don't worry, this is not one of those posts!)
What exactly is a futurist? Biologists study biology. Technologists study technology. But a person can't simply time-travel to the future, read the newspaper, make observations, take notes, and then go back in time to share his findings.
Here are the key differences between historians and futurists:
Historians: There is only one past. Futurists: There are many possible futures.
Historians: Only about six percent of all the humans who have ever lived are alive today, so historians must study history through the writings, tools, and remains of those who have passed on. Futurists: They study the past and the present, looking for patterns and trends.
Historians: They search for insight. Futurists: They search for foresight.
Historians: They build a framework to explain what happened and why. Futurists: They build a framework to express what is possible, probable, and perhaps even preferable.
A common framework for both is the concept of the various "Ages" that humanity has been through:
Around 200,000 years ago, in the middle of what archaeologists refer to as the [Paleolithic Era], man walked upright and used tools made of stone to hunt and gather food. Humans were nomadic and travelled in tribes to follow the herds of animals as they migrated season to season. The History Channel had a great eight-hour series called [Mankind: The Story of All of Us] that started here, and worked all the way up to modern times.
About 10,000 years ago, humans got tired of chasing after their meals, and settled down, growing their food instead. Grains like wheat, rice, and corn became staples of most diets around the world. Civilization evolved, and people traded what they grew or made in exchange for items they needed or wanted.
About 300 years ago, humans developed machines to help do things, and even to help build other machines. Farmers had harnessed oxen to plow fields and horses to speed up travel and communication, but these were all based on muscle power.
Machines like the steam engine were powered by coal, petroleum, or natural gas. Today, one gallon of gasoline can do the work of 600 man-hours of human muscle power, or [move a ton of freight 400 miles].
Cities grew up with skyscrapers of steel, connected by trains, planes and automobiles. Communications with the telegraph, telephone, radio and television replaced sending message on horseback.
The forces that drove humanity to the Industrial age clashed with the culture and identity established during the Agricultural age. I highly recommend futurist Thomas Friedman's book [The Lexus and the Olive Tree] that covers these conflicts.
When exactly did the Information age begin? Did it start with Gutenberg's muscle-powered [Printing Press] in the year 1450, or the first punched card in 1725?
Futurist [Alvin Toffler] published his book The Third Wave in 1980. He coined the phrase "Third Wave" to describe the transition from the Industrial age to the Information age.
While IBM mainframes were processing information in the 1950's, many people associate the Information Age with the IBM Personal Computer (1981) or the World Wide Web (1991). Over 100 years ago, IBM started out in the Industrial age, with business machines like meat scales and cheese slicers. IBM led the charge into the Information Age, and continues that leadership today.
In any case, value went from atoms to bits. Computers and mobile devices transfer bits of data, information and ideas, from nearly anyplace on the planet to another, in seconds.
Ideas and content are now king, rather than the land, buildings, machines and raw materials of the Industrial age. In 1975, less than 20 percent of a typical business's assets were intangible. By 2005, over 80 percent were.
While the Industrial age was dominated by left-brain thinking, the Information Age requires the creativity of right-brain thinking. I highly recommend Daniel Pink's book, [A Whole New Mind] that covers this in detail.
"The future is already here -- it's just not very evenly distributed!" -- William Gibson (1993)
The problem with looking back through history as a series of "Ages" is that they really didn't start and end on specific days. The Agricultural age didn't end on a particular Sunday evening, with the Industrial age starting up the following Monday morning.
There are still people on the planet today in the Stone age. On my last visit to Kenya, I met a nomadic tribe that still lives this way. Huts were temporarily constructed from sticks and mud, and abandoned when it was time to move on.
A short-sighted charity built a one-room school house for them, hoping to convince the tribe that staying in one place for education was more important than hunting and gathering food in a nomadic lifestyle. Some stayed and starved.
In the United States, about 2 percent of Americans grow food for the rest of us, with enough left over to make ethanol and give food aid to other countries.
Sadly, the Standard American Diet continues to be foods mostly processed from wheat, rice and corn, even though our human genetic make-up has not yet evolved from a "Paleolithic" mix of [meats, nuts and berries].
There are still people on the planet today in the Industrial age. American schools are still geared to prepare children for Industrial age jobs, yet they still take "summer vacation", a holdover from when children worked the fields of the Agricultural age. Seth Godin's book [Stop Stealing Dreams] is a great read on what we should do about this.
Wrapping up my series on a [Laptop for Grandma], I finally have something that I think meets all of my requirements! Special thanks to Guidomar and the rest of my readers who sent in suggestions!
I could have called this series "The Good, the Bad, and the Ugly". The [Cloud-oriented choices] weren't bad per se, but they expected a persistent Internet connection. The [Low-RAM choices] were not ugly per se, but had limited application options. The ones below were good, in that they helped me decide what would be just right for grandma.
Linux Mint 9
One of my readers, Guidomar, suggested Linux Mint Xfce. At LinuxFest Northwest 2012, Bryan Lunduke indicated that [Linux Mint] is the fastest growing Linux in popularity. You can watch his 43-minute presentation of [Why Linux Sucks!] on YouTube.
The latest version is Mint 14, but that has grown so big it has to be installed on a DVD, as it will no longer fit on a 700MB CD-ROM. Since I don't have a DVD drive on this Thinkpad R31, I dropped down to the latest Gnome edition that did fit on a LiveCD, which was Mint 9.
(In retrospect, I could have used the [PLoP Boot Manager CD], and installed the latest Linux Mint 14 from a USB memory stick! My concern was that if a distribution didn't fit on a CD-ROM, it was expecting a more modern computer overall, and thus would probably require more than 384MB of RAM as well.)
Linux Mint is actually a variant of Ubuntu, which means that it can tap into the thousands of applications already available. Mint 9 is based on Ubuntu 10.04 LTS.
One of the nice features of Linux Mint is that there are versions with full [Codecs] installed. A codec is a coder/decoder software routine that encodes or decodes a digital data stream or signal, such as audio or video data. Many formats are proprietary, so codecs are generally not open source, and are often not included in most Linux distros. They can be installed manually by the Linux administrator. Windows and Mac OS are commercially sold and don't have this problem, as Microsoft and Apple take care of all the licensing issues behind the scenes.
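For example, on an Ubuntu-based distribution such as Mint or Lubuntu, the missing codecs can usually be pulled in with one package from the repositories. A minimal sketch, assuming the ubuntu-restricted-extras package is available for your release:

    # Install the commonly needed audio/video codecs on an Ubuntu-based distro;
    # the exact package contents vary by release.
    sudo apt-get update
    sudo apt-get install ubuntu-restricted-extras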
The installation went smoothly. It would have gladly set up a dual-boot with Windows for me, but instead I opted to wipe the disk clean and install fresh for each Linux distribution I tried.
Running it was a different matter. The screen would go black and crash. There just wasn't enough memory.
Since [Peppermint OS] was partially based on Lubuntu, I thought I would give [Lubuntu 12.04] a try. The difference is that Peppermint OS is based on Xfce (as is Xubuntu), but Lubuntu claims to have a smaller memory footprint using Lightweight X11 Desktop Environment (LXDE). This version claims to run in 384MB, which is what I have on grandma's Thinkpad R31.
There are two installers. The main installer requires more than 512MB to run, so I used the alternate text-based Installer-only CD, which needs only 192MB.
The LXDE GUI is simple and straightforward. As with Peppermint OS, I did have to install the Codec plugins. However, the time-to-first-note was less than two minutes, so we can count this as a success!
Linux Mint 12 LXDE edition
Circling back to Linux Mint, I realized that my problem above was choosing the wrong edition. Linux Mint comes in various editions, and the main edition I had selected was based on Gnome, which requires at least 512MB of RAM.
Other editions are based on KDE, Xfce and LXDE. Linux Mint 9 LXDE requires only 192MB of RAM, and the newer Linux Mint 12 LXDE requires only 256MB. I chose the latter, and the install went pretty much the same as Mint and Lubuntu above.
The music player that comes pre-installed is called [Exaile], which supports playlists, audio CDs, and a variety of other modern features, so no reason to install Rhythmbox or anything else. Grandma can even rip her existing audio CDs to import her music into MP3 format. Time-to-first-note was about two minutes.
The best part: the OS only takes up about 4GB of disk, leaving about 15GB for MP3 music files!
Lubuntu and Linux Mint LXDE were similar, but I decided to go with the latter because I like that they do not force version upgrades. This is a philosophical difference. Ubuntu likes to keep everyone on the latest supported releases, so it will often remind you it's time to upgrade. Linux Mint prefers to take an if-it-ain't-broke-don't-fix-it approach, which means less ongoing maintenance for me.
A few finishing touches to make the system complete:
A nice wallpaper from [InterfaceLift]. This website has high-res photographs that are just stunning.
Power management with screen-saver settings to a nice pink background with white snowflakes falling.
A small collection of her MP3 music pre-loaded so that she would have something to listen to while she learns how to rip CDs and copy over the rest of her music.
Icons on the main desktop for Exaile, My Computer, Home Directory, and the Welcome Screen.
Larger Font size, to make it easier to read.
Update settings that only look for levels "1" and "2". There are five levels, but "1" and "2" are considered the safest, tested versions. Also, an update is only done if it does not involve installing or removing other packages. This should offer some added stability.
I considered installing [ClamAV] for anti-virus protection, but since this laptop will not be connected to the Internet, I decided not to burn up CPU cycles. I also considered installing [Team Viewer], which would allow me remote access to her system if anything should ever fail. However, since she does not have Wi-Fi at home, and lives only a few minutes across town, I decided to leave this off.
Once again, I want to thank all of my readers for their suggestions! I learned quite a lot on this journey, and am glad that I have something that I am proud to present to grandma: it boots quickly enough, is simple to use, and does not require on-going maintenance!
Continuing my series on a [Laptop for Grandma], I thought I would pursue some of the "low-RAM" operating system choices. Grandma's Thinkpad R31 has only 384MB of RAM.
All of the ones below are based on Linux. For those who aren't familiar with installing or running the Linux operating system, here are some helpful tips:
Most Linux distributors allow you to download an ISO file for free. These can be either (a) burned to a CD, (b) burned to a DVD, or (c) written to a USB memory stick.
The ISO can be either a "LiveCD/LiveDVD" version, an installation program, or a combination of the two. The "Live" version allows you to boot up and try out the operating system without modifying the contents of your hard drive. Windows and Mac OS users can try out Linux without impact to their existing environment. Some Linux distributions offer both a full LiveCD+Installer version, as well as an alternate text-based Installer-only version. The latter often requires less RAM to use.
When installing, it is best to have the laptop plugged in to an electrical outlet, and hard-wired to the internet in case it needs to download the latest drivers for your particular hardware.
A CD can hold only 700MB. Many of the newer Linux distributions exceed that, requiring a DVD or USB stick instead. If your laptop has an older optical drive, it may not be able to read DVD media. Some older optical drives can only read CDs, not burn them. In my case, I burned the CDs on another machine, and then used them on grandma's Thinkpad R31.
To avoid burning "a set of coasters" when trying out multiple choices, consider using rewriteable optical media, or the USB option. If you don't like it, you can re-use it for something else.
The program [Unetbootin] can take most ISO files and write them to a bootable USB stick. On my Red Hat Enterprise Linux 6 laptop, I had to also install p7zip and p7zip-plugins first (see the sketch after this list).
The BIOS on some older machines, like my grandma's Thinkpad R31, cannot boot from USB. The [PLoP Boot Manager] allows you to first boot from floppy or CD-ROM, and then allows you to boot from the USB. This worked great on my grandma's system. The PLoP Boot Manager is also available on the [Ultimate Boot CD].
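Here is a rough sketch of the prerequisite steps mentioned in the Unetbootin tip above, assuming a Red Hat Enterprise Linux 6 system with the EPEL repository enabled (file and device names are placeholders):

    # Packages Unetbootin needed on my RHEL 6 laptop (from EPEL)
    sudo yum install p7zip p7zip-plugins

    # Alternative for "hybrid" ISO images that support direct copying:
    # write the ISO straight to the USB stick. Double-check the device name;
    # /dev/sdX is a placeholder, and this overwrites the entire stick!
    sudo dd if=linuxmint.iso of=/dev/sdX bs=4M
    sync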
While I am a big fan of SUSE, Red Hat, and Ubuntu, these all require more RAM than available on grandma's laptop. Here are some Low-RAM alternatives I tried:
Damn Small Linux 4.11 RC2
The Damn Small Linux [DSL] project was dormant since 2008, but has a fresh new release for 2012. This baby can run in as little as 16MB of RAM! If you have 128MB of RAM or more, the OS can run entirely from RAM, providing much faster performance.
Of course, there are always trade-offs, and in this case, apps were chosen for their size and memory footprint, not necessarily for their user-friendliness and eye candy. For example, XMMS plays MP3 music, but I did not find it as friendly as iTunes or Rhythmbox.
Boot time is fast. From hitting the power-on button to playing the first note of MP3 music was about 1 minute.
Installing DSL Linux on the hard drive converts it into a Debian distribution, which then allows more options for applications.
Next up was [MacPup]. The latest version is 529, based on Puppy Linux 5.2.60 Precise, compatible with Ubuntu 12.04 Precise Pangolin. While traditional Puppy Linux clutters the screen with apps, MacPup tries to have the look-and-feel of Mac OS by having a launcher tray at the bottom center of the screen.
Both MacPup and Puppy Linux can run in very small amounts of RAM and disk space. Like DSL above, you can opt to run MacPup entirely in 128MB of RAM. Unfortunately, the trade-off is a lack of application choices.
Installation to the hard drive was quite involved, certainly not for the beginner. First, you have to use Gparted to partition the disk. I created a 19GB partition (sda1) for my files, and 700MB (sda5) for swap. I had trouble with the "ext4" file system, so I re-formatted to "ext3". Second, you have to copy the files over from the LiveCD using the "Puppy Universal Installer". Third, you have to set up the bootloader. Grub didn't work, so I installed Grub4Dos instead.
The music app is called "Alsa Player", and I was able to drag the icon into the startup tray. Time-to-first-note was just over 1 minute. Fast, but not as "simple-to-use" as I would like.
SliTaz 4.0 claims to be able to run in as little as 48MB of RAM and 100MB of disk space. Time-to-first-note was similar to MacPup, but I didn't care for the TazPanel for setup, and the TazPkg for installing a limited set of software packages. I could not get Wi-Fi working at all on SliTaz, and just gave up trying.
All three of these ran on grandma's Thinkpad R31, and all three could play MP3 music. However, I was concerned that they were not as simple to use as grandma would like, and I would be concerned the amount of time and effort I might have to spend if things go wrong.
I've gotten suggestions to upgrade the memory and disk storage, and how to fine-tune the Microsoft Windows XP operating system. Others suggested replacing the OS with Linux, and to use the Cloud to avoid some of the storage space limitations.
But first, I have to mention the latest in our series of "Enterprise Systems" videos. The first was being [Data Ready]. The second was being [Security Ready]. And now the third in the series: the 3-minute [Cloud Ready] video.
So I decided to try different Cloud-oriented Operating Systems, to see if any would be a good fit. Here is what I found:
(FTC Disclosure: I work for IBM and own IBM stock. This blog post is not meant to endorse one OS over another. I have financial interests in, and/or have friends and family who work at some of the various companies mentioned in this post. Some of these companies also have business relationships with IBM.)
Jolicloud and Joli OS 1.2
I gave this OS a try. This is based on Linux, but with an interesting approach. First, you have to be on-line all the time, as this OS is designed for 15-25 year-olds who are on social media websites like Facebook. By having a Jolicloud account, you can access it from any browser on any system, run the Joli OS operating system, or buy the already pre-installed Jolibook netbook computer.
The Joli OS 1.2 LiveCD ran fine on my T410 with 4GB of RAM, giving me a chance to check it out, but sadly did not run on grandma's Thinkpad R31 with 384MB of RAM. According to the [Jolicloud specifications], Joli OS should run in as little as 384MB of RAM and 2GB of disk storage space, but it didn't for me.
Google Chrome and Chromium OS Vanilla
Like the Jolibook, Google has come out with a $249 Chromebook laptop that runs their "Chrome OS". This is only available via OEM install on designated hardware, but an open source version called Chromium OS is available. These are also based on Linux.
Rather than compiling from source, Hexxeh has made nightly builds available. You can download the [Chromium OS Vanilla] zip file, unzip the image file, and copy it to a 4GB USB memory stick. The compressed image is about 300MB, but uncompressed it is about 2.5GB, so it is too big to fit on a CD. The image on the USB stick actually contains two partitions, and cannot be run from DVD either.
If you don't have a 4GB USB stick handy, and want to see what all the fuss is about, just install the Google Chrome browser on your Windows or Linux system, and then maximize the browser window. That's it. That is basically what Chromium OS is all about.
Files can be stored locally, or out on your Google Drive. Documents can be edited using "Google Docs" in the Cloud. You can run in "off-line" mode, for example, read your Gmail notes when not connected to the Internet. Music and video files can be played using the "Files" app.
If you really need to get out of the browser, you can hit the right combination of keys to get to the "crosh" command line shell.
Like Joli OS, I was able to run this from my Thinkpad T410 with 4GB of RAM, but not on grandma's Thinkpad R31. It appears that Chromium requires at least 1GB of RAM to run properly.
Android for x86
While researching the Chromium OS, I found that there is an open source community porting [Android to the x86] platform. Android is based on Linux, and would allow your laptop or netbook to run very much like a smartphone or tablet. Most of the apps available to Android should work here as well.
Unfortunately, the project has focused only on selected hardware:
ASUS Eee PCs/Laptops
Viewsonic Viewpad 10
Dell Inspiron Mini Duo
Lenovo ThinkPad x61 Tablet
I tried running the Thinkpad x61 version on both my Thinkpad T410 and grandma's Thinkpad R31, but with no success.
Peppermint OS Three
Next up was Peppermint OS, which claims to be a blend of Linux Mint, Lubuntu, and Xfce, but with a "twist" of aspiring to be a Cloud-oriented OS.
Rather than traditional apps to write documents or maintain a calendar, this OS offers a "Single-Site Browser" (SSB) experience, where you can configure "apps" by pointing to their respective URL. For documents, launch GWoffice, the client for Google Docs. For calendar, launch Google Calendar.
Most Linux distros have both a number and a project name associated with them. For example, Ubuntu 10.04 LTS is known as "Lucid Lynx". The Peppermint OS team avoided this by just calling their latest version "Three" which serves as both its number and its name.
The browser is Chromium, similar to Google Chrome OS above, and uses the "DuckDuckGo" search engine. This is how the Peppermint OS folks make their money to defray the costs of this effort.
Peppermint OS claims to run on systems with as little as 192MB of RAM, and only 4GB of disk space. The LiveCD ran well on both my Thinkpad T410 and grandma's Thinkpad R31. More importantly, when I installed it on the hard drive, it ran well.
The music app "Guayadeque" that came pre-installed was awful. It couldn't play MP3 music out-of-the-box. I had to install the Codec plugins from various "ubuntu-restricted-extras" libraries. I also installed the music app "Rhythmbox", and that worked great. Time from power-on to first-note was less than 2 minutes! However, the problems with the Guayadeque gave me the impression this OS might not be ready for primetime.
I contacted grandma to ask if she has Wi-Fi in her home, and sure enough, she doesn't. Her PC upstairs is direct attached to the cable modem. So, while the Cloud suggestion was worthy of investigation, I will continue to pursue other options that do not require being connected. I certainly do not want to spend any time and effort getting Wi-Fi installed there.
Happy Winter Solstice everyone! The Mayan calendar flipped over yesterday, and everything continued as normal.
The next date to watch out for is ... drumroll please ... April 8, 2014. This is the date Microsoft has decided to [drop support for Windows XP].
While many large corporations are actively planning to get off Windows XP, there are still many homes and individuals that are running on this platform.
When [Windows XP] was introduced in 2001, it could support systems with as little as 64MB of RAM. Nowadays, the latest versions of Windows require a minimum of 1GB of RAM for 32-bit systems, with 2GB or 3GB recommended.
That leaves Windows XP users on older hardware few choices:
Continue to run Windows XP, but without support (and hope for the best)
Upgrade their hardware with more RAM (and possibly more disk space) needed to run a newer level of Windows
Install a different operating system like Linux
Put the hardware in the recycle bin, and buy a new computer
Here is a personal example. A long time ago, I gave my sister a Thinkpad R31 laptop so that she could work from home. When she got a newer one, she passed this down to her daughter for doing homework. When my niece got a newer one, she passed this old laptop to her grandma.
Grandma is fairly happy with her modern PC running Windows XP. She plays all kinds of games, scans photographs, sends emails, listens to music on iTunes, and even uses Skype to talk to relatives. Her problem is that this PC is located upstairs, in her bedroom, and she wanted something portable on which she could play music downstairs when she is playing cards with her friends.
"Why not use the laptop you have?" I asked. Her response: "It runs very slow. Perhaps it has a virus. Can you fix that?" I was up for the challenge, so I agreed.
(The Challenge: Update the Thinkpad R31 so that grandma can simply turn it on, launch iTunes or a similar application, and just press a "play" button to listen to her music. It will be plugged in to an electrical outlet wherever she takes it, and she already has her collection of MP3 music files. My hope is to have something that is (a) simple to use, (b) starts up quickly, and (c) does not require a lot of on-going maintenance.)
Here are the relevant specifications of the Thinkpad R31 laptop:
The system was pre-installed with Windows XP, but was terribly down-level. I updated to Windows XP SP3 level, downloaded the latest anti-virus signatures, and installed iTunes. A full scan found no viruses. All this software takes up 14GB, leaving less than 6GB for MP3 music files.
The time it took from hitting the "Power-on" button to hearing the first note of music was over 14 minutes! Unacceptable!
If you can suggest what my next steps should be, please comment below or send me an email!
Tomorrow, according to the [Mayan calendar], the end of the 5,125 year cycle rolls over, so it only makes sense to party like it's 1999!
Of course, if you were in the IT industry 13 years ago, you may remember similar hoopla around [Year 2000] when the Gregorian calendar rolled over from "99" to "00". Some of us were asked to work right up to the last day of 1999, and be on-call the first week of 2000, just in case! Tomorrow may prove to be more or less a repeat of that.
Fortunately, there were plenty of other reasons to celebrate these past few weeks.
Birthdays in December Party
The IBM Tucson employees and contractors of building 9070 got together for a combination party, celebrating both the end of 2012 and three people with birthdays in December: my former manager Bill, my colleague Kris, and myself. Here is our birthday cake! Afterwards, we all watched a "Vacation" movie.
(Note: This was sponsored by my third-line manager, David Gelardi, who, one way or another, is responsible for all the IBMers in this building. Thank you, David!)
This will be the last year for us to do this, as we are planning to move over to join the employees of building 9032 next year!
IBM Club Event
The IBM Club had its final event at the [Golf N' Stuff] family fun park. Over 700 IBM employees and their family members came to eat breakfast burritos and play miniature golf and other games. It had rained earlier in the morning, so the go-kart track was wet, and the staff were trying to dry it with leaf blowers. The rest of the park was fully operational, and the weather cleared up nicely. Mo, Rafael and I played golf, but the turf was still wet in a few spots. There were also video games, bumper boats, and batting cages.
IBM volunteers dressed up as fictional characters for the kids to take pictures with.
I was proud to be a member of the seven-person IBM Club board for 2012. When I was nominated, I didn't think I stood a chance of being elected, as I was running against five or six other well-qualified candidates, but somehow it happened. I am glad to have been part of the IBM Club's 19-year tradition.
(Note: I didn't campaign for this position, but many IBMers in Tucson knew that I had previously owned and managed Tucson Fun & Adventures that organized 15-25 events every month for hundreds of single adults in the Tucson area. This might have helped my chances for election a bit!)
Next year, the IBM Club transitions to the more-efficient "Club Central" model, which is both board-less and cash-less. Instead of a seven-person board organizing events that are fully-funded or partially-subsidized by IBM, events will now be organized by IBM volunteers who post the details on Facebook. All participants simply pay for the events they attend directly to the venue or facility involved.
While the National Aeronautics and Space Administration [NASA] has put out videos and press releases these past 10 days to assure us [there will be a 2013], this shouldn't stop anyone from having a good time! If you did anything special to celebrate the end of the Mayan Calendar, please comment below!
My last blog post, [Full Disk Encryption for Your Laptop], explained my decisions relating to Full-Disk Encryption (FDE) for my laptop. Wrapping up this week's theme of Full-Disk Encryption, I thought I would explain the steps involved to make it happen.
Last April, I switched from running Windows and Linux dual-boot, to one with Linux running as the primary operating system, and Windows running as a Linux KVM guest. I have Full Disk Encryption (FDE) implemented using Linux Unified Key Setup (LUKS).
Here were the steps involved for encrypting my Thinkpad T410:
Step 0: Backup my System
Long-time readers know how I feel about taking backups. In my blog post [Separating Programs from Data], I emphasized this by calling it "Step 0". I backed up my system three ways:
Backed up all of my documents and home user directory with IBM Tivoli Storage Manager.
Backed up all of my files, including programs, bookmarks and operating settings, to an external disk drive (I used rsync for this; a sample command is shown after this list). If you have a lot of bookmarks in your browser, there are ways to dump these out to a file so you can load them back in a later step.
Backed up the entire hard drive using [Clonezilla].
Clonezilla allows me to do a "Bare Machine Recovery" of my laptop back to its original dual-boot state in less than an hour, in case I need to start all over again.
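For those curious what the rsync step looks like, here is a minimal sketch, assuming the external drive is mounted at /media/backup and my home directory is /home/tony (both paths are hypothetical):

    # Mirror my home directory to the external drive; -a preserves permissions
    # and timestamps, -v is verbose, --delete removes files no longer on the source
    rsync -av --delete /home/tony/ /media/backup/tony/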
Step 1: Re-Partition the Drive
"Full Disk Encryption" is a slight misnomer. For external drives, like the Maxtor BlackArmor from Seagate (Thank you Allen!), there is a small unencrypted portion that contains the encryption/decryption software to access the rest of the drive. Internal boot drives for laptops work the same way. I created two partitions:
A small unencrypted partition (2 GB) to hold the Master Boot Record [MBR], GRand Unified Bootloader [GRUB], and the /boot directory. Even though there is no sensitive information on this partition, it is still protected the "old way" with the hard-drive password in the BIOS.
The rest of the drive (318GB) will be one big encrypted Logical Volume Manager [LVM] container, often referred to as a "Physical Volume" in LVM terminology.
Having one big encrypted partition means I only have to enter my ridiculously-long encryption password once during boot-up.
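For reference, the LUKS and LVM setup boils down to a few commands. This is just a sketch with hypothetical device names (here /dev/sda1 is the small /boot partition and /dev/sda2 is the rest of the drive); your installer may do all of this for you:

    cryptsetup luksFormat /dev/sda2          # initialize LUKS on the big partition (prompts for the passphrase)
    cryptsetup luksOpen /dev/sda2 cryptlvm   # unlock it as /dev/mapper/cryptlvm
    pvcreate /dev/mapper/cryptlvm            # turn the unlocked device into an LVM physical volume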
Step 2: Create Logical Volumes in the LVM container
I created three logical volumes in the encrypted LVM container: swap, slash (/), and home (/home). Some might question the logic behind putting swap space on an encrypted container. In theory, swap could contain sensitive information after a system [hibernation]. I separated /home from slash (/) so that in the event I completely fill up my home directory, I can still boot up my system.
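Continuing the sketch from Step 1 (the volume group name and sizes here are illustrative, not necessarily what I actually used):

    vgcreate vg_laptop /dev/mapper/cryptlvm     # volume group on top of the encrypted physical volume
    lvcreate -L 4G  -n swap vg_laptop           # swap
    lvcreate -L 40G -n root vg_laptop           # slash (/)
    lvcreate -l 100%FREE -n home vg_laptop      # /home gets whatever is left
    mkswap /dev/vg_laptop/swap
    mkfs.ext4 /dev/vg_laptop/root
    mkfs.ext4 /dev/vg_laptop/home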
Step 3: Install Linux
Ideally, I would have lifted my Linux partition "as is" for the primary OS, and done a Physical-to-Virtual [P2V] conversion of my Windows image for the guest VM. Ha! To get the encryption, it was a lot simpler to just install Linux from scratch, so I did that.
Step 4: Install Windows guest KVM image
The folks in our "Open Client for Linux" team made this step super-easy. Select Windows XP or Windows 7, and press the "Install" button. This is a fresh install of the Windows operating system onto a 30GB "raw" image file.
(Note: Since my Thinkpad T410 is Intel-based, I had to turn on the 'Intel (R) Virtualization Technology' option in the BIOS!)
There are only a few programs that I need to run on Windows, so I installed them here in this step.
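If you don't have the Open Client's one-button installer, a plain KVM setup looks roughly like this. This is only a sketch with hypothetical paths and a Windows XP guest; the Open Client tool handles these details for you:

    # Create the 30GB "raw" image file and kick off the Windows install under KVM
    qemu-img create -f raw /var/lib/libvirt/images/winxp.raw 30G
    virt-install --name winxp --ram 2048 \
      --disk path=/var/lib/libvirt/images/winxp.raw,format=raw \
      --cdrom /path/to/windows-install.iso --os-variant winxp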
Step 5: Set up File Sharing between Linux and Windows
In my dual-boot set up, I had a separate "D:" drive that I could access from either Windows or Linux, so that I would only have to store each file once. For this new configuration, all of my files will be in my home directory on Linux, and then shared to the Windows guest via CIFS protocol using [samba].
In theory, I can share any of my Linux directories using this approach, but I decided to share only my home directory. This way, any Windows viruses will not be able to touch my Linux operating system kernels, programs or settings. This makes for a more secure platform.
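For the curious, a minimal Samba share of a home directory is just a short stanza in /etc/samba/smb.conf. This is a sketch only; the share name, path and user are assumptions, and the Open Client team provides its own tooling for this:

    [homeshare]
       path = /home/tony
       valid users = tony
       read only = no

Then give the user a Samba password and restart the service so the Windows guest can map the share:

    smbpasswd -a tony
    service smb restart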
Step 6: Transfer all of my files back
Here I used the external drive from "Step 0" to bring my data back to my home directory. This was a good time to re-organize my directory folders and do some [Spring cleaning].
Step 7: Re-establish my backup routine
Previously in my dual-boot configuration, I was using the TSM backup/archive client on the Windows partition to backup my C: and D: drives. Occasionally I would tar a few of my Linux directories and store the tarball on D: so that it got included in the backup process. With my new Linux-based system, I switched over to the Linux version of the TSM client. I had to re-work the include/exclude list, as the files are different on Linux than Windows.
One of my problems with the dual-boot configuration was that I had to manually boot up in Windows to do the TSM backup, which was disruptive if I was using Linux. With this new scheme, I am always running Linux, and so can run the TSM client any time, 24x7. I made this even better by automatically scheduling the backup every Monday and Thursday at lunch time.
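If you prefer cron over the TSM client's built-in scheduler, a crontab entry along these lines does the trick (the dsmc path is hypothetical; adjust to wherever your TSM client is installed):

    # Run an incremental TSM backup at noon every Monday and Thursday
    0 12 * * 1,4  /usr/bin/dsmc incremental > /tmp/tsm-backup.log 2>&1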
There is no Linux support for my Maxtor BlackArmor external USB drive, but it is simple enough to LUKS-encrypt any regular external USB drive, and rsync files over. In fact, I have a fully running (and encrypted) version of my Linux system that I can boot directly from a 32GB USB memory stick. It has everything I need except Windows (the "raw" image file didn't fit.)
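Encrypting that external USB drive follows the same LUKS pattern as Step 1. A sketch, assuming the drive shows up as /dev/sdb1 (a hypothetical device name -- double-check yours before formatting!):

    cryptsetup luksFormat /dev/sdb1              # encrypt the USB partition
    cryptsetup luksOpen /dev/sdb1 usbbackup      # unlock it as /dev/mapper/usbbackup
    mkfs.ext4 /dev/mapper/usbbackup              # put a file system on it
    mount /dev/mapper/usbbackup /media/backup
    rsync -av /home/tony/ /media/backup/tony/    # copy the files over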
I can still use Clonezilla to make a "Bare Machine Recovery" version to restore from. However, with the LVM container encrypted, the image data no longer compresses, so the clone takes a lot longer and consumes over 300GB of space on my external disk drive.
Backing up my Windows guest VM is just a matter of copying the "raw" image file to another file for safe keeping. I do this monthly, and keep two previous generations in case I get hit with viruses or "Patch Tuesday" destroys my working Windows image. Each is 30GB in size, so it was a trade-off between the number of versions and the amount of space on my hard drive. TSM backup puts these onto a system far away, for added protection.
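The monthly copy itself is nothing fancy; something like this (hypothetical paths), with the date in the file name to keep the generations straight:

    # Keep a dated copy of the Windows guest image
    cp /var/lib/libvirt/images/winxp.raw /backup/winxp-$(date +%Y%m).raw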
Step 8: Protect your Encryption setup
In addition to backing up your data, there are a few extra things to do for added protection:
Add a second passphrase. The first one is the ridiculously-long one you memorize faithfully to boot the system every morning. The second one is a ridiculously-longer one that you give to your boss or admin assistant in case you get hit by a bus. In the event that your boss or admin assistant leaves the company, you can easily disable this second passphrase without affecting your original.
Backup the crypt-header. This is the small section at the front of the partition that holds the key slots for your passphrases; if it gets corrupted, you will not be able to access the rest of your data. Create a backup image file and store it on an encrypted USB memory stick or external drive.
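Both of these are single cryptsetup commands. A sketch, again assuming the encrypted partition is /dev/sda2:

    # Add a second passphrase (you are prompted for an existing passphrase first)
    cryptsetup luksAddKey /dev/sda2
    # Later, disable that key slot without touching your original passphrase
    cryptsetup luksKillSlot /dev/sda2 1
    # Back up the LUKS header; keep the file on an encrypted USB stick or external drive
    cryptsetup luksHeaderBackup /dev/sda2 --header-backup-file luks-header-sda2.img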
If you are one of the lucky 70,000 IBM employees switching from Windows to Linux this year, Welcome!
Earlier this year, IBM mandated that every employee provided with a laptop had to implement Full-Disk Encryption for their primary hard drive, and any other drive, internal or external, that contained sensitive information. An exception was granted to anyone who NEVER took their laptop out of the IBM building. At IBM Tucson, we have five buildings, so if you are in the habit of taking your laptop from one building to another, then encryption is required!
The need to secure the information on your laptop has existed ever since laptops were given to employees. In my blog post [Biggest Mistakes of 2006], I wrote the following:
"Laptops made the news this year in a variety of ways. #1 was exploding batteries, and #6 were the stolen laptops that exposed private personal information. Someone I know was listed in one of these stolen databases, so this last one hits close to home. Security is becoming a bigger issue now, and IBM was the first to deliver device-based encryption with the TS1120 enterprise tape drive."
Not surprisingly, IBM laptops are tracked and monitored. In my blog post [Using ILM to Save Trees], I wrote the following:
"Some assets might be declared a 'necessary evil' like laptops, but are tracked to the n'th degree to ensure they are not lost, stolen or taken out of the building. Other assets are declared "strategically important" but are readily discarded, or at least allowed to [walk out the door each evening]."
Unfortunately, dual-boot environments won't cut it for Full-Disk Encryption. For Windows users, IBM has chosen Pretty Good Privacy [PGP]. For Linux users, IBM has chosen Linux Unified Key Setup [LUKS]. PGP doesn't work with Linux, and LUKS doesn't work with Windows.
For those of us who may need access to both Operating Systems, we have to choose. Select one as the primary OS, and run the other as a guest virtual machine. I opted for Red Hat Enterprise Linux 6 as my primary, with LUKS encryption, and Linux KVM to run Windows as the guest.
I am not alone. While I chose the Linux method voluntarily, IBM has decided that 70,000 employees must also set up their systems this way, switching them from Windows to Linux by year end, but allowing them to run Windows as a KVM guest image if needed.
Let's take a look at the pros and cons:
LUKS allows for up to 8 passphrases, so you can give one to your boss, one to your admin assistant, and in the event they leave the company, you can disable their passphrase without impacting anyone else or having to memorize a new one. PGP on Windows supports only a single passphrase.
Linux is a rock-solid operating system. I found that Windows as a KVM guest runs better than running it natively in a dual-boot configuration.
Linux is more secure against viruses. Most viruses run only on Windows operating systems. The Windows guest is well isolated from the Linux operating system files. Recovering from an infected or corrupted Windows guest is merely re-cloning a new "raw" image file.
Linux has a vibrant community of support. I am very impressed that anytime I need help, I can find answers or assistance quickly from other Linux users. Linux is also supported by our help desk, although in my experience, not as well as the community offers.
Employees that work with multiple clients can have a separate Windows guest for each one, preventing any cross-contamination between systems.
Linux is different from Windows, and some learning curve may be required. Not everyone is happy with this change.
(I often joke that the only people who are comfortable with change are babies with soiled diapers and prisoners on death row!)
Implementation is a full re-install of Linux, followed by a fresh install of Windows.
Not all software required for our jobs at IBM runs on Linux, so a Windows guest VM is a necessity. If you thought Windows ran slowly on a fully-encrypted disk, imagine how much slower it runs as a VM guest with limited memory resources.
In theory, I could have tried the Windows/PGP method for a few weeks, then gone through the entire process to switch over to Linux/LUKS, and then draw my comparisons that way. Instead, I just chose the Linux/LUKS method, and am happy with my decision.
For the past three decades, IBM has offered security solutions to protect against unauthorized access. Let's take a look at three different approaches available today for the encryption of data.
Approach 1: Server-based
Server-based encryption has been around for a while. This can be implemented in the operating system itself, such as z/OS on the System z mainframe platform, or with an application, such as IBM Tivoli Storage Manager for backup and archive.
While this has the advantage that you can selectively encrypt individual files, data sets, or columns in databases, it has several drawbacks. First, you consume server resources to perform the encryption. Second, as I mention in the video above, if you only encrypt selected data, the data you forget to, or choose not to, encrypt may result in data exposure. Third, you have to manage your encryption keys on a server-by-server basis. Fourth, you need encryption capability in the operating system or application. And fifth, encrypting the data first undermines any storage or network compression capability further down the line.
Approach 2: Network-based
Network-based solutions perform the encryption between the server and the storage device. Last year, when I was in Auckland, New Zealand, I covered the IBM SAN32B-E4 switch in my presentation [Understanding IBM's Storage Encryption Options]. This switch receives data from the server, encrypts it, and sends it on down to the storage device.
This has several advantages over the server-based approach. First, we offload the encryption work from the server to the switch. Second, you can encrypt all the files on a volume. You still select which volumes get encrypted, so there is some risk that you encrypt only some volumes, and not others, and accidentally expose your data. Third, the SAN32B-E4 can centralize encryption key management with IBM Tivoli Key Lifecycle Manager (TKLM). This approach is also operating system and application agnostic. However, network-based encryption has the same problem of undermining any storage device compression capability, and often has a limit on the amount of data bandwidth it can process. The SAN32B-E4 can handle 48 Gbps, with a turbo-mode option to double this to 96 Gbps.
Approach 3: Device-based
Device-based solutions perform the encryption at the storage device itself. Back in 2006, IBM was the first to introduce this method on its [TS1120 tape drive]. Later, it was offered on Linear Tape Open (LTO-4) drives. IBM was also first to introduce Full Disk Encryption (FDE) on its IBM System Storage DS8000. See my blog post [1Q09 Disk Announcements] for details.
As with the network-based approach, the device-based method offloads server resources, allows you to encrypt all the files on each volume, can centrally manage all of your keys with TKLM, and is agnostic to the operating system and application used. The device can compress the data first, then encrypt, resulting in fewer tape cartridges or less disk capacity consumed. IBM's device-based approach also scales nicely: IBM places an encryption chip in each tape drive or disk drive, so no matter how many drives you have, you will have all the encryption horsepower you need to scale up.
Not all device-based solutions use an encryption chip per drive. Some of our competitors encrypt in the controller instead, which operates much like the network-based approach. As more and more disk drives are added to your storage system, the controller may get overwhelmed performing the encryption.
The need for security grows every year. Enterprise Systems are Security-ready to protect your most mission critical application data.
Mark your calendars! The dates are now official for IBM storage-related events in 2013. I know many of you plan your travel budgets early in the year, so I hope this will help you plan accordingly.
[IBM Pulse 2013] will be held March 3-6, 2013, at the MGM Grand in Las Vegas, Nevada. Back in 2008, I helped launch the inaugural event, combining previous events that focused on Tivoli and Maximo software solutions.
On a smarter planet, organizations must implement bold strategies to optimize business services, processes, and relationships. Cloud and mobility offer unlimited potential to create smarter infrastructures that fundamentally change the way we do business.
However, to deliver on this potential, you must manage your infrastructure through rapid change while changing the economics of IT: unleashing innovation, reinventing relationships and uncovering new markets.
Attend Pulse 2013 for the opportunity to share your expertise with thousands of your business and IT peers as you explore these strategies and more. With three days of top-notch keynotes, over 300 breakout sessions, labs, certification and our best Solution Expo ever, Pulse will provide the tools, insights and networking you need to turn opportunities into outcomes.
[IBM Edge 2013] will be held June 10-14, 2013, at the Mandalay Bay in Las Vegas, Nevada. Last year, I helped launch the inaugural event, combining previous storage events for storage admins, executives, and IBM Business Partners. Next year, Edge2013 will offer:
Over 400 technical sessions and hands on labs geared for novices to experts, with the ability to test drive the latest technology.
Exciting general sessions focused on Smarter Computing innovations and real-world success stories.
World class certification available on-site to validate your skills and demonstrate your proficiency in the latest IBM technology and solutions.
A comprehensive and expanded Solution Center giving you access to the latest storage, System x and PureSystems solutions from IBM and our sponsors.
The list of speakers has not yet been finalized, but I hope to participate in one or both of these events!
I hope all of my American readers had a wonderful Thanksgiving holiday! The day after Thanksgiving is "Black Friday", the unofficial starting date for shopping for upcoming holiday presents and decorations. The Monday after that is now often referred to as "Cyber Monday", when many people purchase items on-line.
I thought this would be good time to promote my book series, Inside System Storage, Volumes I through V. These are available direct from my publisher, [Lulu], or from other on-line retailers.
The old adage "Never judge a book by its cover" often leads technical authors to select bland cover designs. I designed the cover art for the series to have a consistent look, but be unique enough to know each book is different. They all have a beige background with black text, three or four graphics representing the various storage themes du jour, and a color stripe spread diagonally across the spine.
Several readers have asked if there was any rhyme or reason for the color of each spine. One guessed it was based on the [electronic color code] used on resistors to mark their value. When I was getting my college degree in Electrical Engineering, the mnemonic "Better Be Right Or Your Great Big Venture Goes West" helped us remember the sequence: Black, Brown, Red, Orange, Yellow, Green, Blue, Violet, Grey and White.
I can assure everyone I was not that clever. Here, instead, is the story behind each color chosen:
Volume I: Green
I received a flyer from Barnes and Noble advertising various books on sale. One caught my eye, so I went to buy it, but forgot to bring the flyer with me. A young woman offered to help me find it, but I could not remember the title, nor the editor, but it had a green cover, and was a collection of the world's shortest stories, all exactly 55 words in length, all winners in some high school contest. She found the flyer, looked up the book, and directed me to the shelf. After several minutes of her scanning the shelf by author, I reached for it, saying, "Here it is, the green one. This shade of green will fit perfectly in my collection of green books!" As I stood in line, the young woman told her boss, "That guy buys green books!" The rest of the folks in line overheard her, and all started laughing at her gullibility.
Volume II: Orange
In late 2007, I was under NDA to review the acquisition of a company called XIV. I was disclosed on the innovative design of the storage system, so that I could blog about it when the announcement was formal. This box would have a distinctive orange stripe across the disks. The announcement launch was a big success. Since then, every time the storage sales team needed a boost in sales for the [IBM XIV Storage System], I would write another blog about the clever features and capabilities.
Volume III: Purple
In 1996, I joined a social club called "Mile High Adventures and Entertainment", headquartered in Denver, Colorado, with locations in Phoenix, Tucson, San Diego, Los Angeles and Portland, Oregon. It was a group for singles to meet each other through social activities and events. A year later, it collapsed under the weight of heavy radio advertising debt. The local staff bought out the membership list, and launched a new club under the name Tucson Fun and Adventures. It was a big part of my social life.
However, as the owners dropped out, one to start a family, another to take care of her father after her mother passed away, I started 2009 as the majority owner. The economic recession took its toll. Members were not spending as much of their disposable income on fun and entertainment. We restructured the company, revamped the website, and adopted purple as our official color. Our event coordinators all wore purple shirts and carried purple clipboards. Despite this major transformation, I just did not have time to run this company while still working full-time at IBM, so I sold it at year end.
Volume IV: Blue
As I mentioned in my blog post [IBM Introduces a New Era of Computing], IBM launched [PureSystems], a new family of expert-integrated systems. Since Volume IV was going to publish shortly after this announcement, I decided on the color blue to match the new door covers on the racks they came in. In less than a year, IBM has already sold over 1,000 of these systems in over 40 different countries.
Volume V: Grey
Choosing a color to represent the IBM Watson computer proved quite a challenge. I finally decided on grey, to represent "grey matter", a phrase often used to refer to the human brain. I picked a shade of grey that complements the three graphics that represent last year's strategic storage marketing themes. My blog post [How to Build Your Own Watson Jr. in your Basement] continues to be one of my highest read posts.
If you were having trouble getting ideas for gifts this holiday season, hopefully, this post gave you five new ideas for your friends, family, coworkers and clients! They are all available in hardcover, paperback, and eBook (PDF) for viewing on desktops, laptops, tablets or smartphones.
Well, it's Tuesday again, and you know what that means! IBM announcements!
Today, I am in New York visiting clients. The weather is a lot nicer than I expected. Here is a picture of the Hudson River through some trees with leaves turning color. Something we don't see in Tucson! Our cactus and pine trees stay green year-round!
The announcements today center around the IBM PureSystems family of expert integrated systems. The PureFlex is based on Flex System components. The Flex System chassis is 10U high and holds 14 bays, arranged in 7 rows by 2 columns. Compute and Storage nodes fit in the front, and switches, fans and power supplies go in the back. Here is a quick recap:
IBM Flex System Compute Nodes
The x220 Compute Node is a single-bay, low-power, 2-socket x86 server. The x440 Compute Node is a powerful double-bay (1 row, 2 columns) x86 server. The p260 Compute Node is a single-bay server based on the latest POWER7+ processor.
IBM Flex System Expansion Nodes
Do you remember those old movies where a motorcycle would have a sidecar that could hold another passenger, or extra cargo? IBM introduces "Expansion Nodes" for the x200 series single-bay Compute Nodes. The idea here is that in a single column, you have one bay for the Compute Node, and then in the next bay (same column) you have an Expansion Node. There are two choices:
Storage Expansion Node allows you to have eight additional drives
PCIe Expansion Node allows you to have four PCIe cards, which could include the SSD-based PCIe cards from IBM's recent acquisition, Texas Memory Systems.
There are times where one or two internal drives are just not enough storage for a single server, and these expansion nodes could be just the perfect solution for some use cases.
IBM Flex System V7000 Storage Node
I saved the best for last! The Flex System V7000 Storage Node is basically the IBM Storwize V7000 repackaged to fit into the Flex System chassis. This means that in the front of the chassis, the Flex System V7000 takes up four bays (2 rows by 2 columns). In the back of the chassis are the power supplies, fans and switches.
The new Flex System V7000 supports everything the Storwize V7000 does except the upgrade to "Unified" through file modules. For those who want to have Storwize V7000 Unified in their PureFlex systems, IBM will continue to offer the outside-the-chassis original Storwize V7000 that can have two file modules added for NFS, CIFS, HTTPS, FTP and SCP protocol support.
IBM Flex System Converged Network Switch
The Converged Network Switch provides Fibre Channel over Ethernet (FCoE) directly from the chassis. This eliminates the need for a separate "Top-of-Rack" switch, and allows the new Flex System V7000 Storage Node to externally virtualize FCoE-based disk arrays.
Patterns of Expertise for Infrastructure
The original patterns of expertise focused on the PureApplication System. Now IBM has added patterns of expertise for the infrastructure on PureFlex systems.
IBM has sold over 1,000 Flex System and PureFlex systems, across 40 different countries around the world, since their introduction a few months ago in April! These latest enhancements will help solidify IBM's industry leadership.
Well, it's Tuesday again, and you know what that means! IBM Announcements!
Today also happens to be [Election Day] in the United States, and some have questioned IBM's logic of making major storage announcements on Election Day. During the campaigns, a major theme was to help Small and Medium size businesses, because these are the engines of economic growth and improved employment.
Hopefully, you all saw today's Launch Webcast on these announcements. In case you missed it because you were waiting in line at the polling station to cast your vote, or were caught without electricity or Internet access after [Superstorm Sandy], it is now available [On-Demand].
The Storwize V3700's 2U control enclosure can have up to four additional 2U expansion enclosures, for a maximum of 120 drives, or 180TB of raw disk capacity. Like the Storwize V7000, the Storwize V3700 supports a [large number of servers and operating systems.]
Many of the features you already know from the Storwize V7000 are carried forward. Here is how the two systems compare, with the Storwize V7000 value listed first and the Storwize V3700 value second:
Host interfaces: 1GbE iSCSI + 8Gb FC, versus 8Gb FC or 10GbE iSCSI/FCoE, with a Statement of Direction for 6Gb SAS
Cache: 8GB per canister, versus 4GB per canister, upgradeable to 8GB
Scalability: up to 4 control enclosures in a clustered system, each with up to 9 expansion enclosures, versus up to 4 expansion enclosures
Maximum number of drives: up to 960 drives in a clustered system, versus up to 120 drives/180TB
RAID levels and management: both systems support the same RAID levels and are managed through the GUI, CLI and SMI-S API
Storage virtualization: internal (included) and external (optional), versus internal only (included)
Non-disruptive data migration: one-directional (migrate to the Storwize V3700) is included, with a Statement of Direction for further capability
FlashCopy: up to 256 targets (included), versus up to 64 targets (included) with a Statement of Direction for optional 2,040 targets
Remote mirroring: Metro Mirror and Global Mirror (optional), versus a Statement of Direction (optional)
The IBM Storwize V3700 is offered at attractive leasing options through IBM Global Financing.
IBM LTO-6 drives and midrange tape libraries
Last month, IBM's [Tape and Storage Hypervisor Announcements] included LTO-6 for the enterprise-class TS3500 tape library. Today, the LTO-6 support is complete with support for midrange tape drives and libraries.
There are two tape drive models. The TS2260 is based on the half-height drive, intended for occasional 9-to-5 usage. The TS2360 is based on the full-height drive, intended for 24x7 access. These drives can read LTO-4 and LTO-5 tape cartridge media, and can write LTO-5 cartridge media. The new LTO-6 tape cartridge media is expected to be available next month.
In addition to the IBM TS3500 Enterprise Tape Library, LTO-6 is now supported on all of the midrange tape libraries: TS2900, TS3100, TS3200 and TS3310.
IBM Linear Tape File System Library Edition V2.1.2
There are two levels of [Linear Tape File System], or LTFS for short. The first is the Single Drive Edition (LTFS-SDE), which allows you to attach an LTO-5, LTO-6 or TS1140 tape drive to a single workstation, and mount tape cartridges as easily as mounting USB memory sticks. This presents a full file system view that allows you to read, edit, create, and even drag-and-drop files to other file systems. The LTFS-SDE driver is available for Windows, Linux, and Mac OS.
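Mounting a cartridge with the Single Drive Edition is about as simple as it sounds. A sketch for Linux, with a hypothetical tape device name:

    mkdir -p /mnt/ltfs
    ltfs /mnt/ltfs -o devname=/dev/IBMtape0     # mount the cartridge as a file system
    # ...read, edit, copy and drag-and-drop files under /mnt/ltfs...
    umount /mnt/ltfs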
The second is the Library Edition (LTFS-LE), which allows you to mount the entire tape library as a file system. Each tape cartridge in the library is presented as a subdirectory folder, that you can access like any file system on disk. This was only available for Linux systems, which could then export the files through NFS, FTP or HTTP protocols to other clients. Now, with release v2.1.2, LTFS-LE supports Windows servers, so that you can share the files with other clients through CIFS as well.
Wow! Since my last blog post on this, we have over 600 registrants!
Smarter Storage for Midsize Businesses
Businesses of all sizes are getting buried in the avalanche of data. Data is coming in at faster rates and in greater volumes. The value of data is increasing. Old processes and technologies aren't working. Midsize businesses have the same issues managing the rapid growth of data as large enterprises, but they don't have the same size budget or staff. They need advanced capabilities at an affordable price that are easy to implement.
Speakers for this webcast include Brian Truskowski, General Manager, IBM System Storage and Networking; Ed Walsh, Vice President of Market and Strategy, IBM System Storage; and Tommy Rickard, IBM Director, UK Storage Development.
Date: Tuesday, November 6, 2012 Time: 8:00 AM PST / 9:00 AM Arizona / 11:00 AM EST Duration: 60 Minutes
[Register now!] Learn how new IBM Smarter Storage solutions can help midsize businesses tame the explosion of information and their IT budgets.
Joining the IBM executive speakers are the following:
Clay Hales, President & CEO InfoSystems Inc.
Lief Morin, President, Key Information Systems, Inc.
Vincent Louvel, Storage Mgr, Agence France-Presse (AKA AFP)
Laurent Cervera, IT Manager, Agence France-Presse (AKA AFP)
I worked with the IBM Redbooks residency team to review this paper and ensure it had the right focus. I did not want a Redpaper that just listed all of the IBM technologies available, but rather one that spends some effort on the business benefits and realistic use cases, with actual client examples, to help illustrate not just what a Smart Storage Cloud is, but why your business may benefit from having one, and how others have already benefited from their deployments.
To help promote this new Redpaper, my colleagues Larry Coyne and Karen Orlando filmed me talking about the book. This has been posted as a [4-minute YouTube video]. This is the first time we have promoted a Redpaper using a video, so let me know what you think in the comment section below.
We have some exciting webcasts in the upcoming weeks!
Smarter Enterprises Need Smarter Storage
In this [InformationWeek webcast], my IBM colleague Allen Marin will present a brief overview of IBM Smarter Storage for the enterprise with a focus on new high-end disk and Virtual Tape solutions.
Allen will take you through the recent enhancements [announced earlier this month], highlighting how the new capabilities can address the requirements of your mission-critical applications, as well as your evolving business analytics, and cloud initiatives.
Date: Wednesday, October 24, 2012 Time: 10:00 AM PDT / 10:00AM Arizona / 1:00 PM EDT Duration: 60 Minutes
[Register now!] All registrants will get the independent Clipper Group Report - "When Infrastructure Really Matters - A Focus on High-End Storage" - free!
Smarter Storage for Midsize Businesses
Businesses of all sizes are getting buried in the avalanche of data. Data is coming in at faster rates and in greater volumes. The value of data is increasing. Old processes and technologies aren't working. Midsize businesses have the same issues managing the rapid growth of data as large enterprises, but they don't have the same size budget or staff. They need advanced capabilities at an affordable price that are easy to implement.
Speakers for this webcast include Brian Truskowski, General Manager, IBM System Storage and Networking; Ed Walsh, Vice President of Market and Strategy, IBM System Storage; and Tommy Rickard, IBM Director, UK Storage Development.
Date: Tuesday, November 6, 2012 Time: 8:00 AM PST / 9:00AM Arizona / 11:00 AM EST Duration: 60 Minutes
[Register now!] Learn how new IBM Smarter Storage solutions can help midsize businesses tame the explosion of information and their IT budgets.
I hope you can find time in your busy schedule to participate in one or both of these webcasts.
New IBM PureData Systems help clients harness data for critical insights
Well it's Tuesday, and you know what that means! IBM Announcements! Actually, it is Wednesday, but I started writing this post yesterday, and had to do some additional research to finish.
This week, IBM introduced the newest member of the PureSystems family of expert integrated systems - IBM PureData System. The new systems are designed to help clients effectively harness the massive volume, variety and velocity of information being created every day. The result? They deliver critical insights to improve business results.
The new systems are available in three different models, each optimized specifically for different workloads.
PureData System for Transactions. Optimized for transactional processing workloads such as e-commerce and built to handle large volumes of transactions with flexibility, availability, scalability and integrity. Basically, this is IBM DB2 pureScale and InfoSphere Optim features running on Linux-x86 nodes. The system comes in small, medium and large tee-shirt sizes, and can support over 100 databases. If you have DB2 applications, these can work with PureData unchanged. If your applications are based on Oracle databases, these can work with minimal changes to use PureData systems.
PureData System for Analytics. Powered by Netezza technology, this data warehouse system features built-in database analytics to quickly explore and analyze large amounts of structured information. This is the beefed-up version of the Netezza TwinFin 1000. IBM DB2® Analytics Accelerator for z/OS® V3.1 (IDAA) supports both the new IBM PureData System for Analytics N1001 and existing IBM Netezza 1000 systems as accelerators.
PureData System for Operational Analytics. Capable of delivering actionable insights concurrently to more than 1,000 business operations, supporting real-time decision making for businesses. This is the follow-on product to the IBM Smart Analytics System 7700 based on POWER7 nodes. This uses IBM Storwize V7000 disk systems inside.
PureData System joins the PureSystems family, which also includes the PureFlex System and PureApplication System, [both announced last April]. PureSystems provide built-in expertise, integration by design and simplification through the system lifecycle, helping businesses reduce complexity, accelerate value and improve IT economics.
In a related announcement, Andy Monshaw was recently named IBM General Manager, PureFlex. Some of you readers may remember that Andy Monshaw was previously the General Manager for IBM System Storage several years ago, and was my second line manager, and I am glad to welcome him back!
If you store your VMware bits on external SAN or NAS-based disk storage systems, this post is for you. The subject of the post, VM Volumes, is a potential storage management game changer!
Fellow blogger Stephen Foskett mentioned VM Volumes in his [Introducing VMware vSphere Storage Features] presentation at IBM Edge 2012 conference. His session on VMware's storage features included VMware APIs for Array Integration (VAAI), VMware Array Storage Awareness (VASA), vCenter plug-ins, and a new concept he called "vVol", now more formally known as VM Volumes. This post provides a follow-up to this, describing the VM Volumes concepts, architecture, and value proposition.
"VM Volumes" is a future architecture that VMware is developing in collaboration with IBM and other major storage system vendors. So far, very little information about VM Volumes has been released. At VMworld 2012 Barcelona, VMware highlights VM Volumes for the first time and IBM demonstrates VM Volumes with the IBM XIV Storage System (more about this demo below). VM Volumes is worth your attention -- when it becomes generally available, everyone using storage arrays will have to reconsider their storage management practices in a VMware environment -- no exaggeration!
But enough drama. What is this all about?
(Note: for the sake of clarity, this post refers to block storage only. However, the VM Volumes feature applies to NAS systems as well. Special thanks to Yossi Siles and the XIV development team for their help on this post!)
The VM Volumes concept is simple: VM disks are mapped directly to special volumes on a storage array system, as opposed to storing VMDK files on a vSphere datastore.
The following images illustrate the differences between the two storage management paradigms.
You may still be asking yourself: bottom line, how will I benefit from VM Volumes?
Well, take a VM snapshot for example. With VM Volumes, vSphere can simply offload the operation by invoking a hardware snapshot of the hardware volume. This has significant implications:
VM-Granularity: Only the right VMs are copied (with datastores, backing up or cloning individual-VM portions of hardware snapshot of a datastore would require more complex configuration, tools and work)
Hardware Offload: No ESXi server resources are consumed
XIV advantage: With XIV, snapshots consume no space upfront and are completed instantly.
Here's the first takeaway: With VM Volumes, advanced storage services (which cost a lot when you buy a storage array), will become available at an individual VM level. In a cloud world, this means that applications can be provisioned easily with advanced storage services, such as snapshots and mirroring.
Now, let's take a closer look at another relevant scenario where VM Volumes will make a lot of difference - provisioning an application with special mirroring requirements:
VM Volumes case: The application is ordered via the private cloud portal. The requestor checks a box requesting an asynchronous mirror. He changes the default RPO for his needs. When the request is submitted, the process wraps up automatically: Volumes are created on one of the storage arrays, configured with a mirror and RPO exactly as specified. A few minutes later, the requestor receives an automatic mail pointing to the application virtual machine.
Datastores case #1: As may be expected, a datastore that is mirrored with the special RPO does not exist. As a result, the automated workflow sets a pending status on the request, creates an urgent ticket for a VMware administrator, and aborts. When the VMware admin handles that ticket, she re-assigns it to the storage administrator, asking for a new volume mirrored with the special RPO and mapped to the right ESXi cluster. The next day, the volume is created and the ticket is re-assigned back to the VMware admin, pointing to the new LUN. The VMware administrator follows up and creates the datastore on top of it. Since the automated workflow was aborted, the admin re-assigns the ticket to the cloud administrator, who sometime later completes the application provisioning manually.
Datastores case #2: Luckily for the requestor, a datastore that is mirrored with the special RPO does exist. However, that particular datastore is consuming space from a high performance XIV Gen3 system with SSD caching, while the application does not require that level of performance, so the workflow requires a storage administrator approval. The approval is given to save time, but the storage administrator opens a ticket for himself to create a new volume on another array, as well as a follow-up ticket for the VMware admin to create a new datastore using the new volume and migrate the application to the other datastore. In this case, provisioning was relatively rapid, but required manual follow up, involving the two administrators.
Here's the second takeaway: With VM Volumes, management is simplified, and end-to-end automation is much more applicable. The reason is that there are no datastores. Datastores physically group VMs that may otherwise be totally unrelated, and require close coordination between storage and VMware administrators.
Now, the above mainly focuses on the VMware or cloud administrator perspective. How does VM Volumes impact storage management?
VMs are the new hosts: Today, storage administrators have visibility of physical hosts in their management environment. In a non-virtualized environment, this visibility is very helpful. The storage administrator knows exactly which applications in a data center are storage-provisioned or affected by storage management operations because the applications are running on well-known hosts. However, in virtualized environments the association of an application to a physical host is temporary. To keep at least the same level of visibility as in physical environments, VMs should become part of the storage management environment, like hosts. Hosts are still interesting, for example to manage physical storage mapping, but without VM visibility, storage administrators will know less about their operation than they are used to, or need to. VM Volumes enables such visibility, because volumes are provided to individual VMs. The XIV VM Volumes demonstration at VMworld Barcelona, although experimental, shows a view of VM Volumes in XIV's management GUI.
Here's a screenshot:
That's not all!
Storage Profiles and Storage Containers: A Storage Profile is a vSphere specification of a set of storage services. A storage profile can include properties like thin or thick provisioning, mirroring definition, snapshot policy, minimum IOPS, etc.
Storage administrators define a portfolio of supported storage services, maintained as a set of storage profiles, and published (via VASA integration) to vSphere.
VMware or cloud administrators define the required storage profiles for specific applications
VMware and storage administrators need to coordinate the typical storage requirements and the automatically-available storage services. When a request to provision an application is made, the associated storage profiles are matched against the published set of available storage profiles. The matching published profiles will be used to create volumes, which will be bound to the application VMs. All that will happen automatically.
Note that when a VM is created today, a datastore must be specified. With VM Volumes, a new management entity called Storage Container (also known as Capacity Pool) replaces the use of datastore as a management object. Each Storage Container exposes a subset of the available storage profiles, as appropriate. The storage container also has a capacity quota.
Here are some more takeaways:
New way to interface vSphere and storage management: Storage administrators structure and publish storage services to vSphere via storage profiles and storage containers.
Automated provisioning, out of the box: The provisioning process automatically matches application-required storage profiles against storage profiles available from the specified storage containers. There is no need to build custom scripts and custom processes to automate storage provisioning to applications
The XIV advantage:
XIV services are very simple to define and publish. The typical number of available storage profiles would be low. It would also be easy to define application storage profiles.
XIV provides consistent high performance, up to very high capacity utilization levels, without any maintenance. As a result, automated provisioning (which inherently implies less human attention) will not create an elevated risk of reduced performance.
Note: A storage vendor VASA provider is required to support VM Volumes, storage profiles, storage containers and automated provisioning. The IBM Storage VASA provider runs as a standalone service that needs to be deployed on a server.
To summarize the VM Volumes value proposition:
Streamline cloud operation by providing storage services at VM and application level, enabling end-to-end provisioning automation, and unifying VMware and storage administration around volumes and VMs.
Increase storage array ROI, improve vSphere scalability and response time, and reduce cloud provisioning lag, by offloading VM-level provisioning, failover, backup, storage migration, storage space recycling, monitoring, and more, to the storage array, using advanced storage operations such as mirroring and snapshots.
Simplify the adoption of VM Volumes using XIV, with smaller and simpler sets of storage profiles. Apply XIV's supreme fast cloning to individual VMs, and keep automation risks at bay with XIV's consistent high performance.
Until you can get your hands on a VM Volumes-capable environment, the VMware and IBM developer groups will be collaborating and working hard to realize this game-changing feature. The above information is definitely expected to trigger your questions or comments, and our development teams are eager to learn from them and respond. Enter your comments below, and I will try to answer them, and help shape the next post on this subject. There's much more to be told.
A lot was announced this week, so I decided to break it up into several separate posts. This is part 3 in my 3-part series, focusing on our Tivoli Storage products.
To read the rest of the series, see:
The latest release of FlashCopy Manager now supports NetApp and IBM N series storage devices. This provides application-aware snapshots, coordinated with applications like SAP, DB2 and Oracle.
FlashCopy Manager now integrates with Metro and Global Mirror capabilities, so that application-consistent copies are available at remote sites for disaster recovery, or to off-load the FlashCopy destination copy from disk to Tivoli Storage Manager storage pools.
Tivoli Storage Manager v6.4
IBM Tivoli Storage Manager is part of IBM's Unified Recovery Management. Here are some highlights:
Enhanced Reporting. Cognos reporting to monitor backup and archive environments.
TSM for ERP. I remember when these were called "Tivoli Data Protection" modules. We still refer to them as "TDPs". The TSM for ERP provides backup capability for SAP environments, and this latest release adds support for in-memory SAP HANA databases.
TSM for Virtual Environments. IBM TSM is famous for its patented "Progressive Incremental Backup", which is far more efficient than full+incremental or full+differential schemes. IBM now extends this method to VM images. With people consolidating more and more VMs onto fewer host servers, TSM-VE now offers multiple backup streams in parallel. TSM-VE can now take application-aware backups of Microsoft Exchange, SQL Server, and Active Directory running in VMs. TSM-VE will also support vApp and VM templates. If it takes you [a day and a half to build a VMware template], you would want to make sure all that work was backed up, right?
Enhanced Security. Complex password support and improved user authentication and management by integration with Lightweight Directory Access Protocol (LDAP)
A lot was announced yesterday, so I decided to break it up into several separate posts. This is part 2 in my 3-part series, focusing on: Storwize V7000 Unified, LTO-6 tape, and the SmartCloud Virtual Storage Center.
The Storwize V7000 Unified is a product that consists of a 2U-high Storwize V7000 control enclosure that provides block-based access, combined with two 2U-high File Modules that provide file-based NAS protocols: CIFS, NFS, HTTPS, SCP and FTP. The problem was that when it was introduced, it was based on Storwize V7000 v6.3, so when the Storwize V7000 v6.4 features were announced last June, they did not apply to the Storwize V7000 Unified.
That is all fixed now, so the Storwize V7000 Unified now supports the full v6.4 features, including Real-time Compression for both file and block-based access to primary data, and Fibre Channel over Ethernet (FCoE) for block access.
The two File Modules are no longer limited to a single Storwize V7000 control enclosure; you can now connect them to up to four control enclosures clustered together. Combined with up to nine expansion enclosures per control enclosure for additional disk, this raises the total maximum to 960 drives.
If you don't already have an Active Directory or LDAP server, the Storwize V7000 Unified now offers an embedded LDAP server, for smaller deployments that want to reduce the number of servers they need to purchase for a complete solution.
Like the [IBM XIV Gen3 storage system], both the Storwize V7000 and V7000 Unified now also support the OpenStack Nova-volume interface.
Lastly, if you have a Storwize V7000 v6.4, you can upgrade it to a Storwize V7000 Unified by simply adding the two File Modules. This can be done in the field.
IBM LTO-6 for tape libraries and drives
IBM introduces the sixth generation of Linear Tape Open (LTO-6) drives, which can be used as stand-alone IBM TS1060 drives, or in IBM tape libraries. As with previous models of LTO, the LTO-6 can read two older generations (LTO-4 and LTO-5) tape media, and can write to previous generation (LTO-5) tape media. You can buy the LTO-6 drives now, and use the older media until LTO-6 tape cartridges are available (hopefully later this year!)
My friend, Brad Johns, from Brad Johns Consulting, has a great post on this [LTO-6 Announcement]. While you expect the new drives to be faster with a denser tape media format, the key advantage to the LTO-6 is that it improves the compression algorithm, from the previous 2:1 to the new 2.5:1 compression ratio:
Thus, with the improved compression, the LTO-6 is 40 percent faster, with double the tape cartridge density. This can reduce backup times by 30 percent, increase the amount of data that sits in your automated tape libraries, and reduce the courier costs sending tapes off-site.
IBM SmartCloud Virtual Storage Center v5.1
Last year, IBM coined the phrase "Storage Hypervisor" to refer to the underlying technology in the IBM SAN Volume Controller (SVC) and Storwize V7000 disk systems.
At the IBM Edge conference last June, my colleague Mike Griese presented [SmartCloud Virtual Storage Center]. Back then, it was a pilot program (beta test), and this week, IBM announces that it will be formally available as a product.
The idea was simple: take the basic storage hypervisor, and add the necessary software to make it a complete solution.
If all of your disk is currently virtualized behind IBM SAN Volume Controller (SVC), or you want to put all of your data behind SVC, then SmartCloud Virtual Storage Center is for you. Basically, for one per-TB price, you get all of the following:
The software features of SAN Volume Controller v6.4, including FlashCopy, Metro Mirror and Global Mirror.
The full advanced features of IBM Tivoli Storage Productivity Center v5.1, including the Storage Analytics Engine that does "Right-Tiering", recommending which LUNs should be moved entirely from one disk system to another, based on policies and access patterns.
IBM Tivoli Storage FlashCopy Manager v3.2 which manages FlashCopy with full coordination with applications, including Microsoft Exchange, SQL Server, DB2, Oracle, SAP, and VMware. This ensures that the FlashCopy destination copies are clean, eliminating the need to run backout or redo logs to correct any incomplete units of work.
If this combination sounds familiar, it was based on IBM's previous attempt called [Rapid Application Storage] which combined the Storwize V7000 with Tivoli Storage Productivity Center Midrange Edition and FlashCopy Manager.
The key difference is that SmartCloud VSC does not include the SVC hardware itself, you buy this separately. If you want Real-time Compression, that is charged separately for the subset of TB of the volumes that you select for compression.
Well it's Wednesday, and you know what that means... IBM Announcements.
(Normally, announcements are on Tuesdays, but we moved this one over to Wednesday to line up with our big launch event in Pinehurst, NC. )
A lot was announced today, so I decided to break it up into several separate posts. I will start with our Enterprise Systems: DS8870, TS7700 Release 3, and XIV Gen3.
Enterprise systems are the servers, storage and software at the core of an enterprise IT infrastructure. Enterprise systems enable a private cloud infrastructure at enterprise scale, with flexible service delivery models that provide dynamic efficiency for resource and workload management. They make sure critical data is always available across the enterprise, making it accessible in new ways so that actionable insights can be derived from advanced and operational analytics. They also provide ultimate security, ensuring the integrity of critical data while mitigating risk and providing assured compliance.
IBM System Storage DS8870® disk system
This new storage system is the next generation in IBM's DS8000 series, based on IBM's POWER7 chipset. Each CEC can have 2, 4, 8 or 16 cores. Like the DS8800, you can have a mix of 2.5-inch and 3.5-inch disk drives of different speeds and capacities, up to 1,536 drives in a four-frame configuration. The maximum cache is now 1TB usable. The combination of faster chipset and more cache can triple performance for some workloads!
All DS8870s ship standard with Full Disk Encryption (FDE-capable) drives. The problem in the past was that people would buy a DS8000 with non-FDE drives, later want to activate encryption, and then discover that they had to swap out their drives for ones with the encryption chip built in. Now, all drives on the DS8870 have the encryption chip. This also allows Easy Tier sub-volume automated tiering to move encrypted data between all media types.
Flash optimization with DS8000 Easy Tier can improve performance up to 3 times with 3% of data on solid-state storage. Easy Tier is easy to deploy and runs automatically.
Support of the American National Standards Institute's (ANSI) T10 Data Integrity Field (DIF) standard. This is a feature that the mainframe has had for years, and is now being extended to distributed operating systems. The concept is simple. When sending data between server and storage, generate a checksum at the source, and then validate the checksum at the target. When you write a block of data, the server generates the checksum, and the DS8870 validates the checksum on arrival. When you read the data back, the DS8870 generates the checksum, and the server validates it on arrival. This ensures that data was not corrupted in between. There is a great write-up on IBM developerWorks: [End-to-end data protection using T10 standard data integrity field].
Energy Efficient. The DS8870 consumes less energy than its predecessor, the DS8800. For example, a fully-configured four-frame DS8870 with 1,536 disk drives consumes only 23.2kW, compared to 26.3kW for the same number of drives in a DS8800. By comparison, the DS8700 with five frames and 1,024 drives consumed 29.2kW.
Support for new System z load balancing algorithm. System z Workload Manager now interacts with the DS8870 I/O Priority Manager to optimize designated Quality of Service (QoS) levels. We also have the fastest operational analytics solution, with DB2 List Prefetch cache optimization and DS8870 High Performance FICON (zHPF) integration. This solution increases DB2 query performance up to 11 times with disk, and up to 60 times with solid-state drives (SSD). File scans are up to 30 percent faster using DS8870 zHPF support for sequential access methods (QSAM, BPAM, and BSAM).
VMware vStorage APIs for Array Integration (VAAI) support. Why should the IBM DS8000 series support VMware when IBM already offers great VMware support with SAN Volume Controller (SVC), Storwize V7000 and XIV storage systems? Good question. This was hotly debated between development and marketing. Several DS8000 customers have already added SVC to provide full VMware VAAI support. As a consultant, I am neither development nor marketing, but felt it necessary to weigh in with my opinion. The DS8000 is a consolidation platform. According to one analyst survey, 22 percent of companies run on a single disk platform, so for DS8000 to be the one, it needs to support VMware and exploit these special APIs.
Six Nines Availability. Critical enterprise systems need to deliver continuous data availability, or very close to it. IBM solutions can help deliver up to six “nines” of availability, or 99.9999 percent, when combining DS8000 Metro Mirror with GDPS HyperSwap. That works out to roughly half a minute of downtime per year.
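For those who like to check the math, here is the back-of-the-envelope calculation:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

for nines in (3, 4, 5, 6):
    availability = 1 - 10 ** -nines
    downtime = (1 - availability) * SECONDS_PER_YEAR
    print(f"{availability:.4%} available -> {downtime:,.1f} seconds of downtime per year")

# Six nines (99.9999 percent) works out to about 31.6 seconds per year.
```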
The TS7700 Release 3 represents a refresh to our existing virtual tape libraries. These are mainframe-only, offered in two models: TS7720 is a disk-only device, and the TS7740 is a blended disk-and-tape solution.
Industry standard hardware encryption. This applies to user data stored on the TS7700 system cache (disk), and to data transferred between TS7700 systems. This is especially important for regulations like the Payment Card Industry Data Security Standard (PCI-DSS). In previous models, the data was not encrypted until it was moved off disk and written to tape. Now, it is encrypted the minute it lands on the disk cache, and stays encrypted as it is replicated from one TS7700 to another in the grid.
Up to 4 million logical volumes. This is twice the previous limit.
More physical capacity for TS7720 systems. The maximum capacity for the disk-only model is raised from 440TB to 620TB, representing a 40 percent increase.
My latest book "Inside System Storage: Volume V" is now available!
I have published my fifth volume in my "Inside System Storage" series! Currently, it is only available in paperback. My editor, Susan Pollard, is hoping to have the eBook and hardcover versions ready for Cyber Monday. The foreword was written by Dr. Sondra Ashmore.
You can order this, and all my other books, in all formats, directly from my [Author Spotlight] page. The paperback will also be available soon from other online booksellers, search for ISBN 978-1-300-26223-7.
Improved Scalability. A new Multi-system Manager (MSM) server reduces the operational complexity for large and multi-site XIV deployments. Previously, admins connected directly to XIV boxes. If you had 10 admins logged in, then every XIV box was managing 10 admin conversations. The new MSM acts as a go-between. The admins connect to the MSM, and the MSM connects to the XIV boxes. The MSM polls and caches the status of each XIV, greatly increasing the number of XIV boxes that an admin can manage.
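Conceptually, the MSM behaves like a caching proxy. The little Python sketch below is only meant to illustrate the idea; the class and method names are mine, not the actual MSM interfaces.

```python
import time

class MultiSystemManager:
    """Illustrative only: one poller caches status so that N admin sessions
    do not each open a conversation with every XIV box."""

    def __init__(self, xiv_systems, poll_interval=60):
        self.xiv_systems = xiv_systems   # the only connections the boxes ever see
        self.poll_interval = poll_interval
        self._cache = {}
        self._last_poll = 0.0

    def _poll(self):
        # One conversation per XIV, no matter how many admins are logged in.
        self._cache = {xiv.name: xiv.get_status() for xiv in self.xiv_systems}
        self._last_poll = time.time()

    def status(self, system_name):
        # Admins read from the cache; stale entries trigger a fresh poll.
        if time.time() - self._last_poll > self.poll_interval:
            self._poll()
        return self._cache.get(system_name)
```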
Enhanced User Interface. We also added support for IPsec and U.S. Government (USGv6) certification for administering the XIV over IPv6 networks. The XIV Mobile Dashboard app for iPhone and iPad has been spiffed up. Finally, the GUI has been internationalized and translated into Japanese.
Enhanced Integration for Cloud. For OpenStack, XIV now offers a Nova-volume driver which provides persistent storage to OpenStack compute nodes. The Nova task force is now looking to move storage into its own project called Cinder. For VMware, XIV has full support for Site Recovery Manager (SRM) v4.1 and v5.0 releases. XIV now also supports the Microsoft System Center Virtual Machine Manager, which can manage Hyper-V, VMware and Citrix XenServer hypervisors.
Smaller entry point. The original XIV supported 1TB and 2TB drives, with the smallest offering being 27TB usable. When IBM introduced the XIV Gen3, the two choices were 2TB and 3TB disk drives. Unfortunately, this meant that the initial entry model was now 55TB in size, and each additional module was more expensive as well. IBM will now offer 1TB support for XIV Gen3 at a lower price point; these are actually 2TB drives with half the capacity turned off.
The job is located in Tucson, Arizona, which is a great place to live! Tucson is the headquarters for IBM storage design and development, with the largest collection of engineers, software developers and testers. The IBM Tucson Executive Briefing Center is located on the [University of Arizona Science and Technology Park] campus that houses over 7,000 employees from 50 different companies.
What does the job entail?
Primarily, you will be developing, customizing and presenting PowerPoint presentations and live product demos. For some briefings, you will work with sales reps, IBM Business Partners, and clients to develop an agenda of topics to discuss. At times, the presentation may involve working to solve the client's problems, drawing on the whiteboard or flip charts to help capture the requirements and architect a solution.
Which products are we talking about?
The [IBM System Storage product line] includes solid-state drives (SSD), block and file-based disk systems, tape drives and libraries, storage virtualization, and storage management software.
Is there any opportunity for travel?
Most of the presentations will be performed in Tucson, either in person, by webcast or video conference call. Sometimes, this includes discussions over drinks, dinner or golfing. Occasionally, there will be travel to present at client locations, IBM branch offices, events or conferences. My manager estimates approximately 10 percent travel.
Is the pay based on a commission?
Absolutely not! We are consultants, not salespeople. To maintain our "trusted advisor" status, it is a flat salary, with possibility for year-end bonus based on how well our division does overall. This allows us to present and position all of the products fairly to the clients at briefings without bias. Our clients appreciate that! The job is considered pre-sales technical support.
Is training included?
Yes. Assuming you already have a strong background in storage hardware and software, and how these connect to SAN and LAN networks for a variety of operating systems like z/OS, AIX, Windows and Linux, there will be training for the latest updates and features of the IBM products throughout the year. Also, there will be professional training to build up your public speaking and meeting facilitation skills.
How do I apply?
If you are an American citizen, fluent in the English language, and have at least a Bachelor's Degree, go to the [IBM Employment website], look for "Storage Support Specialist" position using job code "STG-0524037" or "STG-0525309". IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Last year, the Austin Executive Briefing Center had a room full of experts to help customers learn about IBM hardware to run Oracle applications. This year, IBM is back in San Francisco, with subject matter experts representing Power Systems, System x servers, PureSystems, Storage and System z mainframes. If you are in San Francisco, consider taking 1-2 hours out of your schedule to speak to IBM experts. These are intended to answer the question: Why choose IBM for your Oracle (and other) workloads?
Event: IBM Mini-Briefings
Location: San Francisco Marriott Marquis, 55 Fourth Street, very close to the Moscone Center
Dates: Monday through Wednesday, October 1-3, 2012
Subject Matter Experts:
Pat O'Rourke, Austin Briefing Center, Power Systems
Dennis Wunder, Poughkeepsie Briefing Center, System z mainframes
Steve Loeschorn, Raleigh Briefing Center, System x servers
Curtis Neal, Tucson Briefing Center, Storage
IBM will also have a booth presence on the main Oracle OpenWorld showroom floor. Please stop by and visit my colleagues! To sign up for a Mini-Briefing at Oracle OpenWorld, for any or all of the topics above, visit the new [IBM STG Austin EBC] website.
Many thanks to the 186 people who registered for yesterday's webcast "Solving the Storage Capacity Crisis -- Tools and Practices for Effective Management!" We had some excellent questions posed during the live Q&A:
Do you recommend moving to a SAN before implementing the management techniques you described, or will these tactics work just as well on direct-attached storage?
How does data center tiering differ from hierarchical storage management?
How do you recommend decisions about data priority be made when there are multiple stakeholders competing for attention?
You didn't mention deduplication. Does that have much impact on capacity management?
When outsourcing to a storage service provider, do you have any recommendations of the merits of wholesale outsourcing vs. partial outsourcing?
What are the dangers of giving end-users the ability to manage their own storage? What kind of education should be put in place?
The webcast was recorded, so in case you missed it, or just want to hear it again, the recording is now available in the [On24 archives].
Now an avid reader of my blog has brought this to my attention. Apparently, EMC has been showing customers a presentation, [Accelerating Storage Transformation with VMAX and VPLEX], with false and misleading comparison claims between IBM DS8000, HDS VSP and EMC VMAX 40K disk system performance.
(FTC Disclosure: This would be a good time to remind my readers that I work for IBM and own IBM stock. I do not endorse any of the EMC or HDS products mentioned in this post, and have no financial affiliation or investments directly with either EMC or HDS. I am basing my information solely on the presentation posted on the internet and other sources publicly available, and not on any misrepresentations from EMC speakers at the various conferences where these charts might have been shown.)
The problem with misinformation is that it is not always obvious. The EMC presentation is quite pretty and professional-looking. It is the typical slick, attention-getting, low-content, over-simplified marketing puffery you have come to expect from EMC. There are two slides in particular that I have issue with.
This first graphic implies that IBM and HDS are nearly tied in performance, but that EMC VMAX 40K has nearly triple that bandwidth. Overall the slide has very little detail. That makes it difficult to determine what exactly is being claimed and whether a fair comparison is being made.
The title claims that VMAX 40K is "#1 in High Bandwidth Apps". Only three disk systems are shown so the claim appears to be relative to only the three systems. The wording "High Bandwidth Apps" is confusing considering the cited numbers are for disk systems and no application is identified. By comparison, IBM SONAS can drive up to 105 GB/sec sequential bandwidth, nearly double what EMC claims for its VMAX 40K, so EMC is certainly not even close to #1.
Is the workload random or sequential? That is not easy to determine. The use of "GB/s" along with the large block size of 128KB implies the I/O workload is sequential, which is great for some workloads like high performance computing, technical computing and video broadcasts. Random workloads, on the other hand, are usually measured in I/Os per second (IOPS) with block sizes ranging from 4KB to 64KB. (I am assuming "128K blocks" refers to a 128KB block size, and not reading the same block of cache 128,000 times.)
The slide states "Maximum Sustainable RRH Bandwidth 128K Blocks". The acronym "RRH" is not defined; but I suspect this refers to "random read hits". For random workloads, 100 percent random read hits from cache represents one corner of the infamous "four corners" test. Real-world workloads have a mix of reads and writes, and a mix of cache hits and cache misses. It is also unclear whether the hits are from standard data cache or from internal buffers in adapters (perhaps accessing the same blocks repeatedly) or something else. So is this really for a random workload, or a sequential workload?
(The term "Hitachi Math" was coined by an EMC blogger precisely to slam Hitachi Data Systems for their blatant use of four-corners results, claiming that spouting ridiculously large, but equally unrealistic, 100 percent random read hit results don't provide any useful information. I agree. There are much better industry-standard benchmarks available, such as SPC-1 for random workloads, SPC-2 for sequential workloads, and even benchmarks for specific applications, that represent real-world IT environments. To shame HDS for their use of four-corners results, only for EMC themselves to use similar figures in their own presentation is truly hypocritical of them!)
The IBM system is identified as "DS8000". DS8000 is a generic family name that applies to multiple generations of systems first introduced in 2004. The specific model is not identified, but that is critical information. Is this a first generation DS8100, or the latest DS8800, or something in between?
The slide says "Full System Configs", but that is not defined and configuration details are not identified. Configuration details, also critical information in assessing system performance capabilities, are not specified. If the EMC box costs seven times more than IBM or HDS, would you really buy it to get 3x more performance? Is the EMC packed with the maximum amount of SSD? Were there any SSD in the IBM or HDS boxes to match?
The source of the claimed IBM DS8000 performance numbers is not identified. Did they run their own tests? While I cannot tell, the VMAX may have been configured with 64 Fibre Channel 8Gbps host connections. In that case each channel is theoretically capable of supporting about 800 MB/s at 100% channel utilization. Multiplying 64 x 800MB/s = 51.2GB/s, so did EMC just do the performance comparison on the back of a napkin, assuming there are no other bottlenecks in the system? Even then, I would not round up 51.2 to 52!
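Here is that napkin math, spelled out (the port count and per-port throughput are my assumptions about EMC's configuration, not published figures):

```python
ports = 64            # assumed number of 8 Gbps Fibre Channel host ports
mb_per_port = 800     # roughly the payload limit of one 8 Gbps link at 100% utilization

aggregate_gb_s = ports * mb_per_port / 1000
print(f"{ports} ports x {mb_per_port} MB/s = {aggregate_gb_s} GB/s")   # 51.2 GB/s
```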
Response times were not identified. For random I/Os, response time is a very important metric. It is possible that the Symmetrix was operating with some resources at 100% utilization to get the highest GB/s result, but that would likely make I/O response times unacceptable for real-world random I/O workloads.
IBM and HDS have both published Storage Performance Council [SPC] industry-standard performance benchmarks. EMC has not published any SPC benchmarks for VMAX systems. If EMC is interested in providing customers with audited, detailed performance information along with detailed configuration information, all based on benchmarks designed to represent real-world workloads, EMC can always publish SPC benchmark results as IBM and other vendors have done. In past blog fights, EMC resorts to the excuse that SPC isn't perfect, but can they really argue that vague and unrealistic claims cited in its presentation are better?
The second graphic is so absurd, you would think it came directly from Larry Ellison at an Oracle OpenWorld keynote session. EMC is comparing a configuration of VMAX 40K plus an EMC VFCache host-side flash memory cache card against IBM and HDS disk systems with no host-side flash memory cache configured at all. The comparison is clearly apples-to-oranges. Other disk system configuration details are also omitted.
FAST VP is EMC's name for its sub-volume drive tiering feature, comparable to IBM Easy Tier and Hitachi Dynamic Tiering. The graph implies that IBM and HDS can only achieve a modest incremental improvement from their sub-volume tiering. I beg to differ. I have seen various cases where a small amount of SSD on the IBM DS8000 series can improve performance by 200 to 400 percent.
The "DBClassify" shown on the graph is a tool run as part of an EMC professional services offering called Database Performance Tiering Assessment, makes recommendations for storing various database objects on different drive tiers based on object usage and importance. Do you really need to pay for professional services? With IBM Easy Tier, you just turn it on, and it works. No analysis required, no tools, no professional services, and no additional charge!
VFCache is an optional product from EMC that currently has no integration whatsoever with VMAX. A fair comparison would have included a host-side flash memory cache (from any vendor) when the IBM or HDS storage system was configured. Or leave it out altogether and just focus on the sub-volume tiering comparison.
Keep in mind that EMC's VFCache supports only selected x86-based hosts. IBM has published a [Statement of Direction] indicating that it will offer host-side flash memory cache integrated with DS8000 Easy Tier for Power Systems hosts running AIX and Linux.
I feel EMC's claims about IBM DS8000 performance are vague and misleading. EMC appears to lack the kind of technical marketing integrity that IBM strives to attain.
Since EMC is not able or willing to publish fair and meaningful performance comparisons, it is up to me to set the record straight and point out EMC's failings in this matter.
Reminder: It's not too late to register for my webcast "Solving the Storage Capacity Crisis" on Tuesday, September 25. See my blog post [Upcoming events in September] to register!
Can you believe it is September already? We have a number of upcoming events that you might be interested in.
IBM Smarter Analytics by Design
Join the first of our 'Smarter Analytics by Design' virtual events to learn more from leading industry analyst IDC on how analytics can help you solve business challenges, and the capabilities you'll need to be successful in this ever-changing landscape. You'll also hear real case examples from AXTEL and Miami-Dade County and the results of their analytics approaches.
Webcast: IBM Smarter Analytics by Design
Date: Thursday, September 13, 2012
Time: 1:00 pm ET / 12:00 pm CT / 10:00 am PT
Dan Vessett and Jean Bozman, International Data Corporation (IDC)
Gaspar Rivera Del Valle, AXTEL, Monterrey, Mexico
Adrienne DiPrima, Rosario Fiallos, Jaci Newmark, Miami-Dade County, South Florida
The problems that used to keep storage managers awake at night -- power, cooling and physical footprint -- are being successfully addressed by technology, but a more vexing issue still remains: How to get more out of the limited supply of skilled storage management professionals.
Webcast: Solving the Storage Capacity Crisis
Date: Tuesday, September 25, 2012
Time: 12:00 pm ET / 10:00 am CT / 09:00 am PT
Demand for storage capacity continues to grow far faster than the pool of people to manage it. With no end in sight to data growth, businesses need to apply technology and practices that distribute management responsibility to the people who need storage, and multiply the volumes of storage that skilled professionals can handle.
In this session, I will cover best practices and new tools that are enabling leaps in productivity, in three main areas:
IBM is bringing back and expanding its Mini Briefing program to Oracle OpenWorld.
What is a Mini-Briefing you might ask? It is a small, customized briefing by the Executive Briefing Centers, held nearby a related conference, allowing conference attendees to take 1-2 hours out of their schedule to speak to IBM experts. These are intended to answer the question: Why choose IBM for your Oracle (and other) workloads?
Event: IBM Mini-Briefings
Location: San Francisco Marriott Marquis, 55 Fourth Street, very close to the Moscone Center
Dates: Monday through Wednesday, October 1-3, 2012
Last year, the Austin Executive Briefing Center had a room full of experts to help customers learn about IBM hardware to run Oracle applications. This year, IBM is back in San Francisco, with subject matter experts representing Power Systems, System x servers, PureSystems, Storage and System z mainframes.
Subject Matter Experts:
Pat O'Rourke, Austin Briefing Center, Power Systems
Dennis Wunder, Poughkeepsie Briefing Center, System z mainframes
Steve Loeschorn, Raleigh Briefing Center, System x servers
Curtis Neal, Tucson Briefing Center, Storage
Of course, IBM will also have a booth presence on the main Oracle OpenWorld showroom floor. Sadly, I will not be there myself this year. Please stop by and visit my colleagues!
To sign up for a Mini-Briefing at Oracle OpenWorld, for any or all of the topics above, visit the new [IBM STG Austin EBC] website.
I hope you can participate in one or more of these events!
This month, I am pleased to announce the new [IBM STG Executive Briefing Center] website, a huge improvement over the previous website we had used for the past two years. STG refers to IBM's Systems and Technology Group, the division that focuses on servers, storage, switches and the system software that makes them run. This new website serves the dozen STG EBCs that span the globe. The new website reminds me of this famous quote:
"Perfection is achieved, not when there is nothing left to add, but when there is nothing left to take away"
-- Antoine de Saint-Exupery
Let's take a quick look at what makes it so much better.
The previous website required registration. At every briefing, those of us who work in the EBCs had to pass around a sign-up sheet for email addresses from each attendee so that we could send them an invitation to register for the site. We would have a hard time reading people's handwriting, resulting in some emails coming back rejected.
Inspired by self-service gas stations, automated teller machines, and the many self-service portals of Cloud Computing, the new website has everything up-front, without registration. IBM Business Partners and sales representatives can easily request a briefing at any of the dozen briefing centers represented!
IBM-managed and IBM-hosted
We had a difficult time explaining to our attendees why our previous website was hosted on a lone machine and maintained by a third party. Think about it: IBM manages the data centers of over 400 clients. IBM has provided web hosting for the most mission-critical workloads, with high levels of availability and reliability, and is recognized as one of the "Big 5" Cloud companies. I have done web design myself in my career, and we were terribly disappointed with the third party chosen to create and maintain our previous website, constantly having to point out errors in their HTML and CSS.
For the new website, IBM took back control. Staff from each EBC, myself included, came up with a simple page to bring the essence of each location to life. Special thanks to my colleague Hal Jennings, from the Austin EBC, for bringing this all together!
Despite two years of manually registering attendees to use the previous website, Google Analytics showed that few people visited, and the few that did spent little time exploring the vast repository of content.
The new website is vastly simpler. The front page points to all twelve EBCs, and a single mouse click gets you to the location you are interested in, with all the details you need to make a decision to book a briefing, and the contact information to make it happen.
Elimination of Wasted and Duplicate Effort
In the previous website, we spent as much as 15 hours just to create, voice over, edit and produce a single 15-minute recorded presentation. Less than six percent of the previous website visitors watched more than five minutes of these videos, making us feel that most of our effort was wasted.
The EBC staff kept wasting their time, month after month, thanks to all-stick, no-carrot tactics that mandated minimum contributions of more and more content that nobody was ever looking at. Even more disappointing was that much of our work duplicated the formal responsibilities of our IBM Marketing team. They weren't happy about this either, and it caused confusion between the roles of our two teams.
Finally, we said enough was enough! The new STG EBC website is a marvel in minimalism. If you want to see presentations, videos, expert profiles, or partake in on-going conversations, I welcome you to visit the [IBM Expert Network], the [IBM Storage YouTube Channel], and the [Storage Community] where they belong.
With all of the distractions this week, from the Republican National Convention in Florida, to the Tropical Storm Isaac that hit New Orleans on the 7th anniversary of Hurricane Katrina, I thought I would continue this week's theme on the IBM zEnterprise EC12.
Processing an insurance claim: $56 U.S. dollars (USD) with mainframes, versus $92 USD with distributed servers.
Processing a mobile subscriber: $18.26 USD with mainframes, versus $26.12 USD with distributed servers.
IT cost for an ATM machine: $572 USD with mainframes, versus $1021 USD with distributed servers.
In the whitepaper [Total Economic Impact of IBM System z], Forrester Research interviews the executives of five existing mainframe clients and, through in-depth analysis of their deployments, presents a "composite" company with an IT staff of 4,500 employees. The result is impressive: deploying an IBM System z had an ROI of 199 percent, with a payback period of less than five months!
I finish this post with a quick [6-minute YouTube video], featuring my colleague, Nick Sardino. Nick and I have worked together in the past at various conferences and conventions.
Well it's Tuesday again, and you know what that means! IBM Announcements!
For nearly 50 years, IBM has been leading the IT industry with its mainframe servers. Today, IBM announced its 12th generation mainframe in its [System z product family], the IBM zEnterprise EC12, or zEC12 for short. I joined IBM in 1986, and my first job was to work on DFHSM for the MVS operating system. The product is now known as DFSMShsm as part of the Data Facility Storage Management System, and the operating systems went through several name changes: MVS/ESA, OS/390, and lately z/OS. I was the lead architect for DFSMS up until 2001. I then switched to be part of the team that brought Linux to the mainframe. Both of these experiences come in handy as I deal with mainframe storage clients at the Tucson Executive Briefing Center.
Let's take a look at some recent developments over the past few years.
In the 9th and 10th generations (IBM System z9 and z10, respectively), IBM introduced the concept of a large "Enterprise Class", and a small "Business Class" to offer customer choice. These were referred to as the EC and BC models.
For the 12th generation, IBM kept the name "zEnterprise", but went back to "EC" to refer to Enterprise Class. Rather than offer a separate "small" Business Class version, the zEC12 comes in 60 different sub-capacity levels. Many software vendors charge per core, or per [MIPS], so offering sub-capacity means that some portion of the processors are turned off, and the software license is lower. The top rating for the zEC12 is 78,000 MIPS. (I would have thought we would have switched over to BIPS by now!)
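To illustrate why sub-capacity levels matter, here is a hypothetical example; the per-MIPS price is invented purely for the arithmetic.

```python
FULL_CAPACITY_MIPS = 78_000   # top zEC12 rating mentioned above
PRICE_PER_MIPS = 100.0        # hypothetical monthly charge per MIPS, for illustration only

def monthly_license_cost(active_fraction):
    """Sub-capacity pricing: you pay only for the MIPS left turned on."""
    return FULL_CAPACITY_MIPS * active_fraction * PRICE_PER_MIPS

print(monthly_license_cost(1.00))   # full capacity
print(monthly_license_cost(0.40))   # a 40 percent sub-capacity setting cuts the bill to match
```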
If you currently have a z10 or z196, then it can be upgraded to zEC12. The zEC12 can attach to up to four zBX model 003 frames that can run AIX, Microsoft Windows and Linux-x86. If you currently have zBX model 002 frames, these can be upgraded to model 003.
The enhancements reflect three key initiatives:
Operational Analytics - Most analytics are done after-the-fact, but IBM zEnterprise can enable operational analytics in real-time, such as fraud detection while the person is using the credit card at a retail outlet, or online websites providing real-time suggestions for related products while the person is still adding items to their shopping cart. Operational analytics provides not just the insight, but does so in a timely manner that makes it actionable. There is even work in place to [certify Hadoop on the mainframe].
Security and Resiliency - IBM is famous for having the most secure solutions. With industry-leading EAL5+ security rating, it beats out competitive offerings that are typically only EAL4 or lower. IBM has a Crypto Express4S card to provide tamper-proof co-processing for the system. IBM introduces the new "zAware" feature, which is like "Operational Analytics" pointed inward, evaluating all of the internal processes, error logs and traces, to determine if something needs to be fixed or optimized.
Cloud Agile - When people hear the phrase "Cloud Agile" they immediately think of IBM System Storage, but servers can be Cloud Agile also, and the mainframe can run Linux and Java better, faster, and at a lower cost than many competitive alternatives.
Earlier this week, Jon Erickson from Forrester Research, and Chris Saul from IBM, co-presented a webcast on the economic impact of using SAN Volume Controller for storage virtualization. The event was co-sponsored by IBM, InformationWeek, and UBM TechWeb, The Global Leader in Business Technology Media, a Division of UBM LLC. Jon spoke first, covering the cost savings and financial benefits of using SAN Volume Controller in your environment. His analysis shows a payback period of only 18 months!
Chris Saul (IBM) then covered the latest features introduced last June for SAN Volume Controller v6.4 release. Many of these features are available on older hardware models of SAN Volume Controller. One of the most exciting features is Real-time Compression.
If you missed the webcast, you can listen to the [Replay]. There is also a [whitepaper] if you prefer that format.
The Real-time Compression benefits can vary by the type of data compressed. Some data yields only 20 percent savings; other data compresses by 80 percent or more. The best way to find out how much compression would benefit your environment is to run the [IBM Comprestimator Tool] against your own data!
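The arithmetic behind those savings figures is straightforward; the 100TB starting point below is just an example:

```python
def physical_tb_needed(logical_tb, savings_percent):
    """Physical capacity still occupied after Real-time Compression."""
    return logical_tb * (1 - savings_percent / 100)

for savings in (20, 50, 80):   # the range of savings mentioned above
    print(f"{savings}% savings: 100 TB of data occupies "
          f"{physical_tb_needed(100, savings):.0f} TB of physical disk")
```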
If you are constantly battling out-of-space conditions, and would like to make extra room on your existing storage devices, your dreams have come true!
IBM has announced it has entered into a definitive agreement to acquire Texas Memory Systems, Inc. (TMS), a privately held Houston, Texas-based company with about 100 employees, that focuses on solid-state flash optimized systems and solutions, including the RamSan family of external rack-mounted storage, as well as PCIe cards for internal storage that fit inside servers.
I've mentioned Solid-State Drive storage quite a few times over the past few years in this blog, which included some great interactions with my friends over at Texas Memory Systems. Here's a quick look:
In my now infamous blog post [Hybrid, Solid State and the future of RAID], I resorted to a deck of [Tarot cards] in an effort to fight [writer's block] while responding to a query about combining solid-state with spinning disk. In the original post, I poked fun at Texas Memory Systems for having the slogan "World's Fastest Storage". Woody Hutsell, then VP of marketing for Texas Memory Systems, explained that the reason TMS did not have faster benchmark results was that it did not have a million dollars to buy the fastest IBM UNIX server.
In my post [Good News and Bad News], I mentioned that Texas Memory Systems has an impressive SPC benchmark result. The Storage Performance Council [SPC] publishes the benchmarking industry standard by which all block-based storage devices are measured. It looks like the TMS performance test department finally got the million-dollar IBM server they needed for this.
My colleagues in marketing were not amused, afraid that mentioning small companies like TMS would give them a huge boost in marketing awareness, above and beyond what TMS could do on their own modest marketing budget, similar to the [Colbert Bump]. I could call it the Pearson Bump. If you first heard of Texas Memory Systems from my blog, or bought TMS products based on my discussion, please post a comment below!
IBM made history as the first major storage vendor to [break the 1 million IOPS barrier with Solid State Disk]. The project was known as "Quicksilver", and demonstrated that a product like SAN Volume Controller with Solid-State Drives (SSD) can indeed provide a significant boost in performance to external disk arrays. The IBM 2145-CF8 and 2145-CG8 models allow up to four SSD in each node. I was asked not to blog the entire month of August, so that our upcoming September announcements would get more notice, but I couldn't resist covering Quicksilver. The original post had mentioned Texas Memory Systems, but those references were later removed to avoid the "Pearson Bump".
In my post [Day 2 IBM Storage University - Solutions Expo - TMS After-party], I mentioned that I attended the TMS after-party. Texas Memory Systems had just been qualified as Solid-State Drive (SSD) storage behind the IBM SAN Volume Controller, and the two products work extremely well together for IBM Easy Tier, the sub-volume automated tiering capability to optimize storage performance. I was able to catch up with my friend Erik Eyberg, and meet CEO and Founder Holly Frost.
Nearly half (43 percent) of IT decision makers say they have plans to use SSD technology in the future or are already using it in their datacenter. Solid-state can refer to both volatile Random Access Memory (RAM) and non-volatile Flash, and Texas Memory Systems has built solutions around both types. The survey question referred to non-volatile Flash Solid-State Drives (SSD) that do not require a battery to keep the data from fading away after the power goes out. Nearly all storage in the datacenter has volatile Random Access Memory (RAM).
Speeding delivery of data was the motivation behind 75 percent of respondents who plan to use or already use SSD technology. I would have thought this would have been 100 percent, but the other options included reduced energy consumption, and improved drive reliability, which are both also true with Solid-State Drives.
However, for those who were not using SSD today, the major factor was cost, according to 71 percent of respondents. On a dollar-per-GB basis, Solid-State Drives continue to be anywhere from 10 to 25 times more expensive than spinning disk. Last year's tsunami in Japan, and the floods in Thailand, caused spinning disk prices to rise to cover component shortages, thereby shrinking the price gap between SSD and spinning disk.
Nearly half (48 percent) say they plan to increase storage investments in virtualization, followed by cloud (26 percent), flash memory/solid state (24 percent), and analytics (22 percent).
I am back from lovely Taipei. The IBM Top Gun class went well. Here are a few pictures of things I found interesting while I was there.
On the first day of class, I asked for some coffee. Our lovely class assistant, Ashley, brought me a cup with an interesting paper filter hanging on the edge. I have since learned that there are two drinks never to order in Taiwan: coffee and wine. If you enjoy either, you won't enjoy them here. Instead, I drank the local "Taiwan Beer" and various types of tea.
Our class was on the 14th floor of the building, and there was this warning sign posted in the elevator. I have no idea what the Chinese characters say, but we found the cartoon depictions of elevator dangers amusing. We interpreted the lower left corner to mean "Don't let your evil twin sister push you out of a moving elevator!"
I have to say that the variety of food was excellent. One night, we had dinner at a [Spanish Tapas] restaurant. The Spanish had a settlement on Taiwan island, known as Formosa back then, until driven out by the Dutch in 1642. We also had a traditional Chinese lunch, with dumplings, pickled cabbage, and "Lion's Head" soup.
From the classroom floor, we could see the Taipei 101 building, considered the third [tallest skyscraper in the world]. This wasn't here the last time I was in Taiwan.
On the last day, we were treated to some [Bubble tea], a specialty drink that originated in Taiwan in the 1980's. The straw was unusually thick, about twice as thick as a normal straw. We quickly figured out why. It was so that we could slurp up the brown floating things at the bottom. We didn't realize this until after the first sip. These floaties were actually Boba Tapioca pearls. The tea itself was delicious and sweet.
Special thanks to Joe Ebidia for managing the class, his assistant Ashley, and our local support team Justin and Stewart. I would also like to thank the staff at the Sherwood Hotel.
This week, I am in Taipei, teaching Top Gun class. There was concern that another typhoon would hit the island of Taiwan later this week, but it looks like it is now headed for Hong Kong instead.
Elsewhere in the world, there are several events going on next week, so I thought I would bring them to your attention.
ECTY - South Africa
Next week, Jerry Kluck, IBM Global Sales Executive for Storage Optimization and Integration Services, will be the keynote speaker at "Edge Comes to You" (ECTY) conference in South Africa. This is a one-day event, similar to the [ECTY event in Moscow, Russia] that I spoke at last June.
Here is the schedule for South Africa next week:
Monday, August 20, 2012 - Johannesburg
Wednesday, August 22, 2012 - Cape Town
(I have been to both Jo'burg and Cape Town back in 1994. A month after Apartheid had just ended, I was part of a small group of IBMers sent to re-establish IBM's business operations there. I would have liked to have attended the events next week, not just to hear Jerry speak, but also to see how much the country has changed over the past 18 years, but I could not get a work permit in time.)
If you are interested in attending either of these next week, contact your local IBM Business Partner or sales rep to attend.
Forrester's Total Economic Impact Study of Virtualized Storage
Virtualized storage can help organizations stretch their storage investment dollar and their storage administration and management resources. Jon Erickson from Forrester Research will review the latest findings from IBM SAN Volume Controller (SVC) users studied as part of the recently completed Forrester Total Economic Impact Study of IBM System Storage SAN Volume Controller.
Date: Tuesday, August 21, 2012
Time: 10:00 AM PDT / 1:00 PM EDT
Duration: 60 minutes
Among the findings, users were able to:
Avoid the capital cost of additional storage
Increase IT productivity
Provide greater end user data availability
The second presenter is Chris Saul, IBM Storage Virtualization Manager, who will explain how SVC can manage heterogeneous disk from a single point of control, autonomously manage tiered disk storage and can store up to five times as much data on your existing disk using IBM Real-time Compression.
Not all virtualization solutions are created equal! That's true for storage virtualization, like the SAN Volume Controller mentioned above, and it's true for server virtualization as well.
This webcast discusses the real-world impact on businesses that deploy IBM's PowerVM® virtualization technology, as compared to those using Oracle® VM for SPARC (OVM SPARC), Microsoft® Hyper-V, VMware® vSphere or other competing products.
Date: Wednesday, August 22, 2012
Time: 10:00 AM PDT / 1:00 PM EDT
Duration: 60 minutes
This webcast will include findings from a [Solitaire Interglobal] study of over 61,000 customer sites on the value of virtualization from a business perspective and how IBM's PowerVM provides real business value.
Other key discussion points that will be covered during this webcast include:
Behavioral characteristics of server virtualization technologies that were examined and analyzed from survey participants' environments
How IT colleagues were able to obtain a faster time-to-market for business initiatives when using IBM PowerVM
Why the learning curve time for PowerVM is as much as 2.58 times faster than for other offerings
Why VM reboot comparisons show PowerVM downtime as much as 5.5 times lower than with competing platforms
A TCO reduction of up to 71.4% for PowerVM compared to alternative options
This webcast will also feature an in-depth discussion on the IBM PowerVM solution from an IBM product expert who will share the unique virtualization features available when PowerVM is utilized within the IBM Power Systems™ environment.
Every year, I teach hundreds of sellers how to sell IBM storage products. I have been doing this since the late 1990s, and it is one task that has carried forward from one job to another as I transitioned through various roles from development, to marketing, to consulting.
This week, I am in the city of [Taipei] to teach Top Gun sales class, part of IBM's [Sales Training] curriculum. This is only my second time here on the island of Taiwan.
As you can see from this photo, Taipei is a large city with just row after row of buildings. The metropolitan area has about seven million people, and I saw lots of construction for more on my ride in from the airport.
The student body consists of IBM Business Partners and field sales reps eager to learn how to become better sellers. Typically, some of the students might have just been hired and finished IBM Sales School, a few might have transferred from selling other product lines, while others are established storage sellers looking for a refresher on the latest solutions and technologies.
I am part of the teaching team, made up of seven instructors from different countries. Here is what the week entails for me:
Monday - I will present "Selling Scale-Out NAS Solutions" that covers the IBM SONAS appliance and gateway configurations, and be part of a panel discussion on Disk with several other experts.
Tuesday - I have two topics, "Selling Disk Virtualization Solutions" and "Selling Unified Storage Solutions", which cover the IBM SAN Volume Controller (SVC), Storwize V7000 and Storwize V7000 Unified products.
Wednesday - I will explain how to position and sell IBM products against the competition.
Thursday - I will present "Selling Infrastructure Management Solutions" and "Selling Unified Recovery Management Solutions", which focus on the IBM Tivoli Storage portfolio, including Tivoli Storage Productivity Center, Tivoli Storage Manager (TSM), and Tivoli Storage FlashCopy Manager (FCM). The day ends with the dreaded "Final Exam".
Friday - The students will present their "Team Value Workshop" presentations, and the class concludes with a formal graduation ceremony for the subset of students who pass. A few outstanding students will be honored with "Top Gun" status.
These are the solution areas I present most often as a consultant at the IBM Executive Briefing Center in Tucson, so I can provide real-life stories of different client situations to help illustrate my examples.
The weather here in Taipei calls for rain every day! I was able to take this photo on Sunday morning while it was still nice and clear, but later in the afternoon, we had quite the downpour. I am glad I brought my raincoat!
With all the announcements we had in June, it is easy for some of the more subtle enhancements to get overlooked. While I was at Orlando for the IBM Edge conference, I was able to blog about some of the key featured announcements. Later, when I got back from Orlando to Tucson, I blogged about [More IBM Storage Announcements]. For IBM's Scale-Out Network Attached Storage (SONAS), I had simply:
"SONAS v1.3.2 adds support for management by the newly announced IBM Tivoli Storage Productivity Center v5.1 release. Also, IBM now officially supports Gateway configurations that have the storage nodes connected to XIV or Storwize V7000 disk systems. These gateway configurations offer new flexible choices and options for our ever-expanding set of clients."
In my defense, IBM numbers its software releases with version.release.modification, so 1.3.2 is Version 1, Release 3, Modification 2. Generally, modification announcements don't get much attention. The big announcement for v1.3.0 of SONAS happened last October; see my blog post [October 2011 Announcements - Part I] or the nice summary post [IBM Scale-out Network Attached Storage 1.3.0] from fellow blogger Roger Luethy.
Here is a diagram showing the three configurations of SONAS.
I have covered the SONAS Appliance model in depth in previous blogs, with options for fast and slow disk speeds, choice of RAID protection levels, a collection of enterprise-class software features provided at no additional charge, and interfaces to support a variety of third party backup and anti-virus checking software.
The basics haven't changed. The SONAS appliance consists of 2 to 32 interface nodes, 2 to 60 storage nodes, and up to 7,200 disk drives. The maximum configuration takes up 17 frames and holds 21.6PB of raw disk capacity, which is about 17PB of usable space when RAID6 is configured. An interface node has one or two hex-core processors with up to 144GB of RAM, offering up to 3.5GB/sec performance each. This makes IBM SONAS the fastest performing and most scalable disk system in IBM's System Storage product line.
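The raw and usable figures line up with simple arithmetic, assuming 3TB drives and a rough 20 percent RAID6 overhead:

```python
drives = 7200
tb_per_drive = 3                     # 3TB NL-SAS drives in the largest configuration

raw_pb = drives * tb_per_drive / 1000
usable_pb = raw_pb * 0.8             # assumes an 8+2 RAID6 layout, ignoring spares

print(f"raw: {raw_pb:.1f} PB, usable with RAID6: ~{usable_pb:.1f} PB")
# raw: 21.6 PB, usable: ~17.3 PB -- matching the figures above
```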
I thought I would go a bit deeper on the gateway models. These models support up to ten storage nodes, organized in pairs. The key difference is that instead of internal disk controllers, the storage nodes connect to external disk systems. There is enough space in the base SONAS rack to hold up to six interface nodes, or you can add a second rack if you need more interface nodes for increased performance.
SONAS with XIV gateway
XIV offers a clever approach to storage that allows for incredibly fast access to data on relatively slow 7200 RPM drives. By scattering data across all drives and taking advantage of parallel processing, rebuild times for a failed 3TB drive are less than 75 minutes. Compare that to typical rebuild times for 3TB drives that could take as much as 9-10 hours under active I/O loads!
In this configuration, each pair of storage nodes connects to external SAN fabric switches that then connect to one or two XIV storage systems. How simple is that? These can be the original XIV systems that support 1TB and 2TB drives, or the new XIV Gen3 systems that support 400GB Solid-state drives (SSD) and 3TB spinning disk drives. In both cases, you can acquire additional storage capacity in increments as small as 12 drives at a time (one XIV module holds 12 drives).
The maximum configuration of ten XIV boxes could hold 1,800 drives. At 3TB per drive, that would be 2.4PB of usable capacity.
The SONAS with XIV gateway does not require the XIV devices to be dedicated for SONAS purposes. Rather, you can assign some XIV storage space for the SONAS, and the rest is available for other servers. In this manner, SONAS just looks like another set of Linux-based servers to the XIV storage system. This in effect gives you "Unified Storage", with a full complement of NAS protocols from the SONAS side (NFS, CIFS, FTP, HTTPS, SCP) as well as block-based protocols directly from the XIV (FCP, iSCSI).
SONAS with Storwize V7000 gateway
The other gateway offering is the SONAS with Storwize V7000. Like the SONAS with XIV gateway model, you connect a pair of SONAS storage nodes to 1 or 2 Storwize V7000 disk systems. However, you do not need a SAN Fabric switch in between. You can instead connect the SONAS storage nodes directly to the Storwize V7000 control enclosures.
To acquire additional storage capacity, you can purchase a single drive at a time. That's right. Not 12 drives, or 60 drives, at a time, but one at a time. The Storwize V7000 supports a wide range of SSD, SAS and NL-SAS drives at different sizes, speeds and capacities. The drives can be configured into various RAID protection levels: RAID 0, 1, 3, 5, 6 and 10.
Each Storwize V7000 control enclosure can have up to nine expansion drawers. If you choose the 2.5-inch 24-bay models, you can have up to 480 drives per storage node pair, for a total of 2,400 drives. If you choose the 3.5-inch 12-bay models, you can have up to 240 drives per node pair, 1,200 drives total. At 3TB per drive, this could be 3.6PB of raw capacity. The usable PB would depend on which RAID level you selected. Of course, you don't have to limit yourself all to one size or the other. Feel free to mix 2.5-inch and 3.5-inch drawers to provide different storage pool capabilities.
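Here is how those drive counts add up (a sketch of the arithmetic, assuming the maximum of two Storwize V7000 systems behind each of the five storage node pairs):

```python
node_pairs = 5               # up to ten storage nodes, organized in pairs
v7000_per_pair = 2           # each pair can connect to one or two Storwize V7000 systems
enclosures_per_v7000 = 10    # one control enclosure plus nine expansion drawers

for bays, form_factor in ((24, "2.5-inch"), (12, "3.5-inch")):
    per_pair = v7000_per_pair * enclosures_per_v7000 * bays
    total = node_pairs * per_pair
    print(f"{form_factor}: {per_pair} drives per node pair, {total} drives total")
# 2.5-inch: 480 per pair, 2,400 total; 3.5-inch: 240 per pair, 1,200 total
```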
All three SONAS configurations support Active Cloud Engine. This is a collection of features that differentiate SONAS from the other scale-out NAS wannabes in the marketplace:
Policy-driven Data Placement -- Different files can be directed to different storage pools. You no longer have to associate certain file systems to certain storage technologies.
High-speed Scan Engine -- SONAS can scan 10 million files per minute, per node. These scans can be used to drive data migration, backups, expirations, or replications, for example. It is over 100 times faster than traditional walk-the-directory-tree approaches employed by other NAS solutions.
Policy-driven Migration -- You can migrate files from one storage pool to another, based on age, days since last reference, size, and other criteria. The files can be moved from disk to disk, or moved out of SONAS and stored on external media, such as tape or a virtual tape library. A lot of data stored on NAS systems is dormant, with little or no likelihood of being looked at again. Why waste money keeping that kind of data on expensive disk? With SONAS, you can move those files to tape and save lots of money. The files are stubbed in the SONAS file system, so that an access request to a file will automatically trigger a recall to fetch the data from tape back to the SONAS system. (See the conceptual sketch after this list.)
Policy-driven Expiration -- SONAS can help you keep your system clean, by helping you decide which files should be deleted. This is especially useful for things like logs and traces that tend to just hang around until someone deletes them manually.
WAN Caching -- This allows one SONAS to act as a "Cloud Storage Gateway" for another SONAS at a remote location connected by Wide Area Network (WAN). Let's say your main data center has a large SONAS repository of files, and a small branch office has a smaller SONAS. This allows all locations to have a "Global" view of all the interconnected SONAS systems, with a high-speed user experience for local LAN-based access to the most recent and frequently used files.
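As promised above, here is a conceptual Python sketch of what a policy-driven migration pass decides. It only expresses the selection criteria; the real SONAS engine uses declarative policy rules and its high-speed scan engine rather than walking the directory tree, and the path and thresholds below are made up for illustration.

```python
import os
import time

DAY = 24 * 3600

def migration_candidates(root, min_age_days=90, min_size_mb=10):
    """Yield files untouched for 90+ days and larger than 10 MB --
    the kind of dormant data worth moving to tape and stubbing."""
    now = time.time()
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            old_enough = (now - st.st_atime) > min_age_days * DAY
            big_enough = st.st_size > min_size_mb * 2**20
            if old_enough and big_enough:
                yield path

for path in migration_candidates("/sonas/projects"):   # hypothetical file system path
    print("migrate to tape, leave stub:", path)
```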
If you want to learn more, see the [IBM SONAS landing page]. Next week, I will be across the Pacific Ocean in [Taipei], to teach IBM Top Gun class to sales reps and IBM Business Partners. "Selling SONAS" will be one of the topics I will be covering!