Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Systems Client Experience Center in Tucson Arizona, and featured contributor
to IBM's developerWorks. In 2018, Tony celebrates his 32nd anniversary with IBM Storage. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
My books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
This week and next, I am down under in Australia and New Zealand for a seven-city Storage Optimisation Breakfast series of presentations to clients and prospects. My first city for this seven-city tour was Sydney, Australia.
Here is the view from my room at the [Shangri-La hotel], including the famous [Sydney Opera House] and Circular Quay, from which to take a water taxi or ride the Manly Ferry. [Sydney harbour] is the deepest harbour in the Southern Hemisphere, allowing boats of all sizes to enter. This section of the city is known as "The Rocks".
Sydney is a very modern metropolis. The last time I was in Sydney was in May 2007 to teach an IBM Top Gun class. My post back then on [Dealing with Jet Lag] is as relevant now as it was back then. In addition to being 9 hours off-shifted from last week in Dallas, Texas, I also have to deal with the colder climate, about 40 degrees F cooler down here. The weather is crisp and clear; it is Winter going into Spring down here, as the seasons are flipped below the equator.
Many of the buildings are recognizable from the movie ["The Matrix"], which was filmed here. We joked that this seven-city trip was also similar to [The Adventures of Priscilla, Queen of the Desert], in that both journeys started in Sydney. If you haven't seen the latter, I highly recommend it to learn more about Australia as a country.
(Completely useless trivia: Actor Hugo Weaving appeared in both movies. While most people associate him with Australia, where he has lived since 1976, he actually was born in Nigeria, and traveled extensively because his father worked in the computer industry.)
Here I am standing next to our banner.
The line-up for each event is simple. After all the attendees sit down for breakfast, we have the following three sessions:
First, Anna Wells, local IBM Executive for Storage Sales in Australia and New Zealand presents IBM's strategy for storage, and how IBM plans to address Storage Efficiency, Data Protection and Service Delivery. She then highlights various products that are currently available to help meet customer needs, including XIV and the SAN Volume Controller (SVC).
Second, we have a client or two share their success story. We will have different speakers at the different locations.
Third, I present on future trends that will impact the storage marketplace. With only 40 minutes for my section, I decided to focus on just three specific trends, with a mix of some colorful analogies to help emphasize my key points.
We had a great turn-out for our first event in Sydney; lots of clients and prospects came out for this. There is a lot of enthusiasm for IBM's vision, thought leadership, and broad portfolio of storage solutions.
This week I am down under, starting my 7-city Storage Optimisation Breakfast roadshow on Tuesday in Sydney, Australia. I can't be in two places at once, and it seems whenever I am one place, lots of my coworkers are somewhere else at another conference or event. For those at the [VMworld 2010] conference in San Francisco this week, IBM is a Platinum Sponsor and hosting a variety of presentations and activities. Here are some things to look forward to:
Session ID SP9638 - Getting the MAX from your Virtualization Investment
Monday 1:30pm, Moscone South Room 309
Speaker: Bob Zuber, IBM System x Program Director
Speaker: Clod Barrera, Distinguished Engineer and Chief Technical Strategist
Clod and I just finished Solutions University 2010 in Dallas, and here he is going to VMworld! You already know that virtualization is beneficial. Exploit virtualization to its MAXimum and move beyond virtualization 101, where you have virtualized web, file/print, and DHCP type workloads. Now it is time to take virtualization to the next step and virtualize business infrastructure applications such as ERP, Messaging, CRM, and Database. With IBM solutions you can take the virtualization journey to build a smarter data center through: 1) Consolidation, 2) Management, 3) Automation and 4) Optimization. Attend this session and learn the key considerations for virtualizing mission-critical workloads and the best practices for a virtual data center that delivers a REAL return on your investment.
Session ID TA8065 - Storage Best Practices, Performance Tuning and Troubleshooting
Speaker: Duane Fafard, Senior XIV Storage Architect, IBM
Monday 10:30 AM Moscone South Room 301
Wednesday 03:00 PM Moscone West Room 2005
The industry has solved many of the challenges of virtualizing applications by delivering innovative server solutions that automatically migrate load to available resources, but the complete environment requires both the network and the storage to be part of the equation. Designing, managing, and troubleshooting intricate storage environments has become increasingly complex. This session will discuss storage best practices, performance challenges, and resolving issues in the storage area network using native tools within the environment. With the techniques learned in this session, the storage administrator will be able to use these best practices to design proper storage solutions and pinpoint troubled areas quickly and accurately.
Session ID SS1012 - Expert Panel: How Smarter Systems can Address your Business Challenges
Wednesday, 12-1pm, Room 135
This is IBM's "Super Session". At IBM, we know that all business challenges such as sprawling IT infrastructure, poor performance and rising management costs are solvable on a smarter planet. With Smarter Systems, IBM can help you increase utilization and flexibility, reduce complexity and cost, respond to business changes swiftly and effectively, and enable end-to-end resiliency and security. Alex Yost, Vice President and Business Line Executive for IBM System x and BladeCenter hosts a panel of Virtualization experts:
James Northington, Vice President and Business Line Executive, IBM System x
Donn Bullock, Vice President of Sales, Mainline Information Systems, Inc.
Dylan Larson, Director of Advanced Software and Server Technologies, Intel Data Center Group
Richard McAniff, Chief Development Officer and Member of the Office of the President, VMware
Siddhartha (Sid) Chatterjee, Ph.D, Vice President, Strategy & Partnerships, IBM Systems Software
David Guzman, Chief Information Officer and Senior Vice President, Global Technology Solution, Acxiom
Next week is [VMworld 2010], so I thought today would be a good day to write a blog post about reporting and managing virtual guest images.
As the original lead architect for IBM Tivoli Storage Productivity Center, I am no stranger to reporting and management tools. Needless to say, if you have lots of virtual guest images, it makes sense to deploy reporting and management software. I had never heard of Veeam before, but I decided to check out Veeam Reporter 4.0, an enterprise-level reporting solution specifically designed for large Virtual Infrastructure (VI3) and vSphere virtual environments that allows you to automatically discover and collect information about your VMware virtual environment.
Their 90-page User Guide offered these helpful "First Steps" on page 9 which I used as the master plan for my evaluation.
Install Veeam Reporter 4.0
The instructions appeared fairly straightforward: Download [the latest version] of the application. Unpack the downloaded archive and run the VeeamReporter.exe file. Then follow the installation wizard steps. What could go wrong?
I should have known better. Like IBM Tivoli Storage Productivity Center, Veeam Reporter is designed to be installed on its own server-class machine with its own application web server and database. I wasn't going to stand up a new server in our lab just for this contest, so I decided to just install it on my Windows XP SP3 machine, which Veeam had listed as a supported operating system level. I ran into a series of installation issues, including installing IIS, installing SQL Server, and installing the SSRS component. I am more familiar with IBM's WebSphere Application Server and DB2 combination used in IBM's own products, and have experience with Apache and MySQL on a standard LAMP stack, so my lack of experience with IIS and SQL Server made the installation more difficult. Many thanks to all the support personnel at Veeam, Microsoft, and my internal IT department to finally get all of this working.
It appears you can set this up as a client/server environment, where the Veeam Reporter server runs IIS and SQL Server, and then you have a browser on your client machine point to that server. In my case, I had the client browser and server all on one machine.
Create and run a collection job
This step also seemed fairly standard for reporting tools. Once you launch Veeam Reporter 4.0 for the first time, you need to retrieve data from your virtual infrastructure to be able to generate reports. To start the created collection job, select it and click the Start button on the toolbar. If you have a vCenter server in your VI environment, the guide recommends that you create a job for it to immediately collect data for all objects in its hierarchy. After that, you will be able to use the Workspace to select the VI objects that were engaged in the completed job, and generate reports for them.
I signed up for this contest August 7, but step 1 above took me two weeks to resolve all the installation issues. I wanted to get my blog post entry for the contest in BEFORE the start of VMworld. Since I am in Dallas, Texas this week for the IBM Storage Solutions University, I had to go through several firewalls for my laptop to tunnel through and get to my VMware vCenter back in Tucson.
I was able to create and run a collection job. I have a VMware ESX 3.5 host running five guest images and 14 datastores. This seemed to be enough to evaluate the basic features of this reporting tool. Veeam Reporter lets you run the collection process manually, or set a "periodic" schedule to collect data every hour.
Generate reports manually or create a reporting job
Finally, I get to the fun part: To generate a report manually, click the Workspace tab, select the desired VI object from the tree view, pick a date and collection job session, choose reports and click the Create Report button.
At this point, I am reminded of a famous poem:
To see a world in a grain of sand
And a heaven in a wild flower,
Hold infinity in the palm of your hand
And eternity in an hour.
- William Blake
When evaluating products, try to imagine what the reports would look like with hundreds of virtual guest images. Certainly, I can see some potential, even though I had rather limited data to work with. In theory, the tool can create Visio output files, but you need to have Microsoft Visio installed. I have only "Visio Viewer", so I was unable to create any Visio files with this product.
The reports can be exported to PDF, Word or Excel formats. Here is an example of an Excel spreadsheet export. While it has 14 bars for the 14 datastores, there are no labels, and the misleading details link in the lower right corner is non-functional. The only way for me to figure out what each referred to was to go back to my vCenter client, which kind of defeats the purpose of having a separate reporting tool.
This same report exported to PDF spanned across four pages, leaving the re-assembly to be done with a pair of scissors and cellophane tape.
When you create reports, you can use SSRS or Veeam's internal proprietary format. Only SSRS reports can be put on the dashboard, so I recommend using SSRS.
Customize your dashboard
The fourth and final step is to configure your own dashboard: To add reports to the Dashboard, you should first create and save them using Workspace of Veeam Reporter 4.0. Keep in mind that you can add to the Dashboard only saved SSRS-based reports. To customize the Dashboard, click the Dashboard tab and then click the Edit Dashboard button. Customize the layout by dragging blue borders from the right and the bottom of the screen. Then, drag reports from the Reports list and drop them onto the created cells.
The "Free Edition" only allows you to put a single report on the dashboard, so as in step 3, you have to imagine what the full license would look like with multiple reports on a single pane of glass.
(FTC Disclosure: I work for IBM, the leader in server virtualization worldwide, and the #1 reseller of VMware. In this post, I review [Veeam Reporter 4.0] as my official entry for their blogging contest. IBM and Veeam do not have any business relationship that I know of, other than both being VMware business partners, so I am treating them here as an Independent Software Vendor (ISV). Veeam has not compensated me in any manner for this review, this review is not to be taken as an endorsement of Veeam or its products, and I was not provided any full or evaluator license keys. The review is based entirely on my experience using the "Free Edition" available to all for download. None of this blog post was pre-reviewed by anyone from Veeam. IBM, of course, also offers similar software, which I mention below for comparison purposes.)
At this point, you might be thinking, "Doesn't IBM offer something like this?" Of course it does! IBM is the leader in infrastructure reporting, monitoring and management software. Last October, [IBM unveiled IBM Systems Director VMcontrol] software. Not only does IBM Systems Director VMcontrol provide similar support for your VMware environment, it also manages Microsoft Hyper-V and Xen deployments, PowerVM on POWER-based servers, and even z/VM guest images on the System z mainframes. Combined with the rest of IBM Systems Director, you can manage all of your physical and virtual servers with a single tool from a single pane of glass. How cool is that?
I would like to thank Doug Hazelman, Senior Director of Product Strategy at Veeam, for organizing this awesome blogging contest. If you liked this blog post, click here to [vote for me] to get counted for this contest.
Well, it's Tuesday again, and you know what that means! IBM Announcements!
Today, IBM announced its latest IBM Tivoli Key Lifecycle Manager (TKLM) 2.0 version. Here's a quick recap:
Centralized Key Management
Centralized and simplified encryption key management through Tivoli Key Lifecycle Manager's lifecycle of creation, storage, rotation, and protection of encryption keys and key serving through industry standards. TKLM is available to manage the encryption keys for LTO-4, LTO-5, TS1120 and TS1130 tape drives enabled for encryption, as well as DS8000 and DS5000 disk systems using Full Disk Encryption (FDE) disk drives.
Partitioning of Access Control for Multitenancy
Access control and partitioning of the key serving functions, including end-to-end authentication of encryption clients and security of exchange of encryption keys, such that groups of devices have different sets of encryption keys with different administrators. This enables [multitenancy] or multilayer security of a shared infrastructure using encryption as an enforcement mechanism for access control. As Information Technology shifts from on-premises to the cloud, multitenancy will become increasingly important.
Support for KMIP 1.0 Standard
Support for the new key management standard, Key Management Interoperability Protocol (KMIP), released through the Organization for the Advancement of Structured Information Standards [OASIS]. This new standard enables encryption key management for a wide variety of devices and endpoints. See the [22-page KMIP whitepaper] for more information.
As much as I like to poke fun at Oracle, with hundreds of their Sun/StorageTek clients switching over to IBM tape solutions every quarter, I have to give them kudos for working cooperatively with IBM to come up with this KMIP standard that we can both support.
Support for non-IBM devices from Emulex, Brocade and LSI
Support for IBM self-encrypting storage offerings as well as suppliers of IT components which support KMIP, including a number of supported non-IBM devices announced by business partners such as Emulex, Brocade, and LSI. KMIP support permits you to deploy Tivoli Key Lifecycle Manager without having to worry about being locked into a proprietary key management solution. If you are a client with multiple "Encryption Key Management" software packages, now is a good time to consolidate onto IBM TKLM.
Role-based Access Control
Role-based access control for administrators that allows multiple administrators with different roles and permissions to be defined, helping increase the security of sensitive key management operations and better separation of duties. For example, that new-hire college kid might get a read-only authorization level, so that he can generate reports, and pack the right tapes into cardboard boxes. Meanwhile, for that storage admin who has been running the tape operations for the past ten years, she might get full access. The advantage of role-based authorization is that for large organizations, you can assign people to their appropriate roles, and you can designate primary and secondary roles in case one has to provide backup while the other is out of town, for example.
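The concept can be sketched in a few lines of Python (a toy illustration with made-up role names and permissions, not TKLM's actual interface):

```python
# Toy role-based access control: each role maps to a set of permissions.
# Role and permission names here are invented for illustration.
ROLES = {
    "operator": {"generate_reports", "view_keys"},              # the new hire
    "admin":    {"generate_reports", "view_keys",
                 "create_keys", "rotate_keys", "delete_keys"},  # the veteran
}

def authorized(role: str, action: str) -> bool:
    """Check whether a role grants a given permission."""
    return action in ROLES.get(role, set())

print(authorized("operator", "generate_reports"))  # True
print(authorized("operator", "rotate_keys"))       # False
```

Separation of duties then reduces to assigning each administrator the smallest role that covers their job.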
Wrapping up my week's theme of storage optimization, I thought I would help clarify the confusion between data reduction and storage efficiency. I have seen many articles and blog posts that either use these two terms interchangeably, as if they were synonyms for each other, or as if one is merely a subset of the other.
Data Reduction is LOSSY
By "Lossy", I mean that reducing data is an irreversible process. Details are lost, but insight is gained. In his paper [Data Reduction Techniques], Rajana Agarwal defines this simply:
"Data reduction techniques are applied where the goal is to aggregate or amalgamate the information contained in large data sets into manageable (smaller) information nuggets."
Data reduction has been around since the 18th century.
Take for example this histogram from [SearchSoftwareQuality.com]. We have taken ninety individual student scores and reduced them down to just five numbers: the counts in each range. This can provide for easier comprehension and comparison with other distributions.
The process is lossy. I cannot determine or re-create an individual student's score from these five histogram values.
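The same reduction is easy to sketch in a few lines of Python (the ninety scores here are randomly generated stand-ins, and the five bins are my own choice, not the ones in the original chart):

```python
import random

random.seed(42)
scores = [random.randint(40, 100) for _ in range(90)]  # hypothetical student scores

# Reduce 90 individual scores to 5 bin counts -- a lossy operation:
# the counts cannot be turned back into the original scores.
bins = [(40, 51), (52, 63), (64, 75), (76, 87), (88, 100)]
counts = [sum(lo <= s <= hi for s in scores) for lo, hi in bins]

print(counts)       # five numbers now summarize ninety scores
print(sum(counts))  # every score falls into exactly one bin
```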
This next example, courtesy of [Michael Hardy], represents another form of data reduction known as ["linear regression analysis"]. The idea is to take a large set of data points between two variables, the x axis along the horizontal and the y axis along the vertical, and find the best line that fits. Thus the data is reduced from many points to just two, slope (a) and intercept (b), resulting in an equation of y=ax+b.
The process is lossy. I cannot determine or re-create any original data point from this slope and intercept equation.
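A minimal least-squares fit, using made-up data points, shows how many observations collapse to just the two numbers a and b:

```python
# Least-squares fit: reduce many (x, y) points to slope a and intercept b.
# The data points below are invented for illustration.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope a = cov(x, y) / var(x); intercept b = mean_y - a * mean_x
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(f"y = {a:.2f}x + {b:.2f}")  # five points reduced to two numbers
```

The individual points are gone; only the trend line survives.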
This last example, from [Yahoo Finance], reduces millions of stock trades to a single point per day, typically closing price, to show the overall growth trend over the course of the past year.
The process is lossy. Even if I knew the low, high and closing price of a particular stock on a particular day, I would not be able to determine or re-create the actual price paid for individual trades that occurred.
Storage Efficiency is LOSSLESS
By contrast, there are many IT methods that can be used to store data in ways that are more efficient, without losing any of the fine detail. Here are some examples:
Thin Provisioning: Instead of storing 30GB of data on 100GB of disk capacity, you store it on 30GB of capacity. All of the data is still there, just none of the wasteful empty space.
Space-efficient Copy: Instead of copying every block of data from source to destination, you copy over only those blocks that have changed since the copy began. The blocks not copied are still available on the source volume, so there is no need to duplicate this data.
Archiving and Space Management: Data can be moved out of production databases and stored elsewhere on disk or tape. Enough XML metadata is carried along so that there is no loss in the fine detail of what each row and column represent.
Data Deduplication: The idea is simple. Find large chunks of data that contain the same exact information as an existing chunk already stored, and merely set a pointer to avoid storing the duplicate copy. This can be done in-line as data is written, or as a post-process task when things are otherwise slow and idle.
When data deduplication first came out, some lawyers were concerned that this was a "lossy" approach, that somehow documents were coming back without some of their original contents. How else can you explain storing 25PB of data on only 1PB of disk?
(In some countries, companies must retain data in their original file formats, as there is concern that converting business documents to PDF or HTML would lose some critical "metadata" information such as modification dates, authorship information, underlying formulae, and so on.)
Well, the concern applies only to those data deduplication methods that calculate a hash code or fingerprint, such as EMC Centera or EMC Data Domain. If the hash code of new incoming data matches the hash code of existing data, the new data is discarded and assumed to be identical. Such collisions are rare; I have only read of a few occurrences of unique data being discarded in the past five years. To ensure full integrity, the IBM ProtecTIER data deduplication solution and IBM N series disk systems chose instead to do full byte-for-byte comparisons.
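Here is a toy sketch of the two approaches in Python: a chunk store keyed by fingerprint, with an optional byte-for-byte verification on a hash match. This is my own illustration, not any vendor's actual implementation.

```python
import hashlib

# Toy chunk store: fingerprint -> stored chunk bytes.
store = {}

def dedup_write(chunk: bytes, verify: bool = True) -> str:
    """Store a chunk and return its reference, skipping duplicates.

    With verify=True, a fingerprint match is confirmed byte-for-byte
    (the cautious approach) instead of trusting the hash alone.
    """
    key = hashlib.sha256(chunk).hexdigest()
    if key in store:
        if verify and store[key] != chunk:
            # A true hash collision: astronomically rare, but without
            # verification this chunk would be silently discarded.
            raise RuntimeError("fingerprint collision detected")
        return key          # duplicate: just a pointer, no new capacity used
    store[key] = chunk      # unique data: store it
    return key

refs = [dedup_write(b"block-A"), dedup_write(b"block-B"), dedup_write(b"block-A")]
print(len(refs), "writes,", len(store), "chunks actually stored")  # 3 writes, 2 chunks
```

The process is lossless: every reference points back to the exact bytes that were written.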
Compression: There are both lossy and lossless compression techniques. The lossless Lempel-Ziv algorithm is the basis for LTO-DC algorithm used in IBM's Linear Tape Open [LTO] tape drives, the Streaming Lossless Data Compression (SLDC) algorithm used in IBM's [Enterprise-class TS1130] tape drives, and the Adaptive Lossless Data Compression (ALDC) used by the IBM Information Archive for its disk pool collections.
Last month, IBM announced that it was [acquiring Storwize]. Its Random Access Compression Engine (RACE) is also a lossless compression algorithm based on Lempel-Ziv. As servers write files, Storwize compresses those files and passes them on to the destination NAS device. When files are read back, Storwize retrieves and decompresses the data back to its original form.
As with tape, the savings from compression can vary, typically from 20 to 80 percent. In other words, 10TB of primary data could take up from 2TB to 8TB of physical space. To estimate what savings you might achieve for your mix of data types, try out the free [Storwize Predictive Modeling Tool].
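A quick illustration with Python's zlib (a Lempel-Ziv-family codec, standing in here for the drive-level algorithms mentioned above) shows both the losslessness and how much the savings depend on the data:

```python
import os
import zlib

# Lossless compression: savings vary widely by data type.
text = b"the quick brown fox jumps over the lazy dog " * 500   # highly repetitive
noise = os.urandom(2048)                                        # random: barely compressible

for label, data in [("repetitive text", text), ("random bytes", noise)]:
    packed = zlib.compress(data)
    assert zlib.decompress(packed) == data   # lossless: the original is fully recoverable
    saved = 100 * (1 - len(packed) / len(data))
    print(f"{label}: {len(data)} -> {len(packed)} bytes ({saved:.0f}% saved)")
```

The repetitive text shrinks dramatically, while the random bytes save essentially nothing, which is why real-world estimates span such a wide range.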
So why am I making a distinction on terminology here?
Data reduction is already a well-known concept among specific industries, like High-Performance Computing (HPC) and Business Analytics. IBM has the largest market share in supercomputers that do data reduction for all kinds of use cases, for scientific research, weather prediction, financial projections, and decision support systems. IBM has also recently acquired a number of companies related to Business Analytics, such as Cognos, SPSS, CoreMetrics and Unica Corp. These use data reduction on large amounts of business and marketing data to help drive new sources of revenue, provide insight for new products and services, create more focused advertising campaigns, and help understand the marketplace better.
There are certainly enough methods of reducing the quantity of storage capacity consumed, like thin provisioning, data deduplication and compression, to warrant an "umbrella term" that refers to all of them generically. I would prefer we do not "overload" the existing phrase "data reduction" but rather come up with a new phrase, such as "storage efficiency" or "capacity optimization" to refer to this category of features.
IBM is certainly quite involved in both data reduction as well as storage efficiency. If any of my readers can suggest a better phrase, please comment below.