The first day had various breakout sessions in the afternoon.
- Understanding Your Options for Storing Archive Data to Meet Compliance Challenges
I presented IBM's Smart Archive strategy and the storage products IBM offers to archive data and meet compliance regulations:
- The differences between backup and archive, including a few of my own horror stories from helping companies that had foolishly thought keeping backup copies for years would adequately serve as their archive strategy
- The differences between Write-Once Read-Many (WORM) media and Non-Erasable, Non-Rewriteable (NENR) storage options.
- How disk-only archive solutions become "space heaters" for your data center.
- An overview of the various storage hardware options from IBM.
- How LTFS can be incorporated into an archive solution, such as [Crossroads Systems' StrongBox® solution].
- An explanation of the different IBM software offerings to help complement the storage hardware choices.
- IBM TotalStorage Productivity Center (TPC): New Features and Functions
Mike Griese, IBM program manager for TPC, presented the latest in TPC 5.1 version announced this week. His session was organized into four key sections:
- Insights - TPC 5.1 integrates COGNOS reporting, which allows customization of reports and ad-hoc exploration and analysis. Since the reports are not binary-compiled into the product, IBM can ship new COGNOS reports as templates outside the normal TPC release schedule. Also, TPC 5.1 got smarter about reporting on server virtualization hypervisor environments to avoid double-counting.
- Recommendations - TPC 5.1 can analyze your usage patterns across the entire data center and make recommendations to move data from one storage tier to another. You can then act on these recommendations by moving data from one tier to another, either "up-tier" to faster storage, or "down-tier" to less expensive storage, using a storage hypervisor like IBM SAN Volume Controller. This is complementary to features like Easy Tier, which optimize within a single disk system. (See the sketch after this list.)
- Performance - TPC 5.1 uses a new web-based GUI, based on AJAX, HTML5 and Dojo widgets, inspired by the IBM XIV GUI, and similar to the web-based GUI of SAN Volume Controller, Storwize V7000 and SONAS.
- Optimization - TPC 5.1 allows you to optimize for Cloud by introducing a new RESTful API for storage provisioning and support for SONAS environments. This will allow upward-integration to products like [IBM Service Delivery Manager] and [Tivoli Storage Automation Manager].
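To make the "Recommendations" idea concrete, here is a minimal sketch in Python of what tier-recommendation logic might look like. The thresholds, volume names and numbers are all invented for illustration; this is not how TPC actually implements its analytics.

```python
# Hypothetical sketch of up-tier/down-tier recommendation logic.
# The I/O density thresholds and volumes are made-up illustration values.

HOT = 1.0    # IOPS per GB above which a volume is a candidate to up-tier to SSD
COLD = 0.05  # IOPS per GB below which a volume is a candidate to down-tier to SATA

volumes = [
    {"name": "vol_erp_db",   "iops": 4200, "size_gb": 800},
    {"name": "vol_archive1", "iops": 12,   "size_gb": 2000},
    {"name": "vol_homedirs", "iops": 350,  "size_gb": 1500},
]

for vol in volumes:
    density = vol["iops"] / vol["size_gb"]   # how "hot" the volume runs
    if density > HOT:
        action = "up-tier to SSD"
    elif density < COLD:
        action = "down-tier to SATA"
    else:
        action = "leave on current tier"
    print(f"{vol['name']}: {action} ({density:.2f} IOPS/GB)")
```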
Mike also explained the new TPC 5.1 packaging. Instead of having a variety of components like "TPC for Disk", "TPC for Data", and "TPC for Replication", the new packaging simplifies this down to two levels of functionality. The basic level supports block-level devices, including disk performance, replication and SAN fabric management. The advanced level adds support for files and databases, including support for Cloud management such as SONAS environments.
Dan Zehnpfennig, Solution Architect, talked about his experiences installing TPC 5.1 and how this was much improved over previous TPC versions.
- IBM Watson: How it Works and What it Means for Society Beyond Winning Jeopardy!
I presented how IBM Watson works, how it played the Jeopardy! game show last year, and how IBM has helped clients use the technology to solve real-world problems.
- Understanding the IBM Grand Challenge, and how it compares to the IBM Deep Blue chess-playing computer
- How IBM Watson works, the hardware, the software, and the algorithms involved
- How to build your own "Watson Jr." in your own basement, based on my [popular instructions I published last year].
- Examples of how the technology is being used in Healthcare and Financial Services
If you missed it, I will be repeating this session on IBM Watson on Thursday.
Tonight we have the grand opening reception of the Solution Center and a concert featuring Grace Potter & the Nocturnals!
technorati tags: IBM, Archive, Compliance, WORM, NENR, Mike Griese, Dan Zehnpfennig, Tivoli Storage, Productivity Center, TPC, Watson, Healthcare, Financial Services, Wellpoint, Seton, CitiGroup
There is still time to enroll for [IBM Edge], a conference focused on storage, to be held June 4-8 in Orlando, Florida. There is an early-bird discount until May 6!
I will be there all week! Here are the seven sessions I will be presenting at the Technical Edge side of the event:
- Understanding Your Options for Storing Archive Data to Meet Compliance Challenges
This session will cover the IBM software and hardware solutions that your organization can use to store archive data, including features like immutability, Write-Once-Read-Many (WORM) technology and Non-Erasable, Non-Rewriteable (NENR) enforcement. The discussion will include high-level concepts like chronological and event-based retention, litigation hold and release, as well as an overview of the products and solutions from IBM that you can deploy today.
- IBM Watson: How it Works and What it Means for Society Beyond Winning Jeopardy!
In 2011, the IBM Watson computer was able to beat the top-earning human winners on the trivia game-show “Jeopardy!” As the author of [How to Build Your Own Watson Junior in Your Basement], I have been asked to explain how the IBM Watson system was put together, how it works, and what text mining and big data analytics mean for society as we apply technology to meet tomorrow's challenges.
- Using Social Media for IBM System Storage - Birds of a Feather
I will be moderating this Birds of a Feather, or BOF, session that will bring together a Q&A panel of experts on how social media can be leveraged to help you do your job, get the information you need to solve problems, and share your knowledge with others.
- Data Footprint Reduction: Understanding IBM Storage Efficiency Options
Data Footprint Reduction is the catch-all term for a variety of technologies designed to help reduce storage costs. In this session, I will cover thin provisioning, space-efficient copies, deduplication and compression technologies, and describe the IBM storage products that provide these capabilities.
- IBM's Storage Strategy in the Smarter Computing Era
Confused about IBM's new initiatives for Big Data analytics, Workload Optimized Systems, and Cloud Computing? This session will explain it all, and show how IBM's strategy for its various storage products and solutions fits into these overall themes.
- IBM SONAS and the IBM Cloud Storage Taxonomy
Confused over the different types of cloud storage? IBM's scale-out Network Attached Storage (SONAS) can be used in a variety of use cases. This session will provide an overview of IBM's SONAS solution, provide an update on the latest features and functions recently announced, and explain how it can be deployed in various private, public and hybrid cloud environments.
- IBM Tivoli Storage Productivity Center Overview and Update
IBM has enhanced its premier storage infrastructure management tool: IBM Tivoli Storage Productivity Center. This session will provide both an overview of the product and an explanation of the latest features and functions recently announced.
I hope to see you all there!
technorati tags: IBM, Edge, Archive, Social Media, BOF, Data Footprint Reduction, Strategy, Smarter Planet, Smarter Computing, SONAS, Cloud, Taxonomy, Tivoli Storage, Productivity Center, TPC, IBM Watson
Continuing my coverage of the 30th annual [Data Center Conference], here is a recap of Wednesday's breakout sessions.
- Private Cloud Computing at Bank of America – One Year Later
Prentice Dees, Senior VP for Systems Automation Engineering at Bank of America, did the happy dance celebrating their success implementing a private cloud. Bank of America, which merged with Merrill Lynch, has 29 million users residing in over 100 countries, and 5,900 retail offices in 40 countries. They manage $1 billion in deposits, and $2.2 trillion in assets.
Rather than IaaS or PaaS, his team focused on Application-as-a-Service (AaaS). Their goal is to transform and move IT out of the way of the business. In his view, if a human has to touch a keyboard, then his team has failed.
He divides the work up into three layers:
- Bones: These are the physical components, such as servers, storage, switches that provide capacity and interconnect.
- Muscle: This is the translation layer, providing actions and reporting.
- Brains: This is the layer for intelligent automation.
Provisioning new servers with storage involves three sets of steps. The first set of steps involves requesting approval. The second set of steps deploys the server. The third involves installing the application, loading the data and using it until End-of-Life. The second set of steps took 14 to 60 days before, and has been automated down to one to three hours.
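As a rough illustration of what automating that middle set of steps might look like, here is a hypothetical sketch in Python. Every function is an invented stand-in for a real hypervisor, storage or CMDB call; none of this is Bank of America's actual tooling.

```python
# Hypothetical "muscle" layer: the formerly 14-to-60-day deployment phase
# run end-to-end with no human touching a keyboard. All functions are
# invented stand-ins for real hypervisor/storage/CMDB APIs.

def clone_vm(image):
    print(f"cloning VM from golden image {image}")
    return "vm-0001"

def provision_lun(gb):
    print(f"thin-provisioning a {gb} GB LUN")
    return "lun-0042"

def deploy_server(req):
    vm = clone_vm(req["os_image"])            # carve a VM from a template
    lun = provision_lun(req["storage_gb"])    # storage comes from the shared pool
    print(f"mapping {lun} to {vm} on VLAN {req['vlan']}")
    print(f"registering {vm} in the CMDB for {req['owner']}")  # for chargeback
    return vm

deploy_server({"os_image": "rhel-golden", "storage_gb": 100,
               "vlan": 220, "owner": "trading-desk"})
```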
The result is that he has improved server utilization 10x, over-provisioned storage 4x, and is now hosting over 11,000 server images, saving $20 million. Not only does this lower the cost per application deployed, but the process allows for lower-skilled personnel. He has over 500TB of virtual storage deployed, using thin provisioning, with only 128TB of physical disk. But they have only scratched the surface: only 15 to 20 percent of applications are virtualized in this manner, and they want to get to 80 percent within the next three years.
What makes an application not "Cloud-ready"? Prentice is a big fan of Linux and Open Source solutions. Some applications consume the entire server. In other cases, code changes are required. If possible, try to split up large applications into smaller Cloud-ready chunks.
How many people on his team? There are currently 16 to 20 people on the team, but at its peak there were 30 people.
Rather than wasting time on capacity planning, his team focuses on a cost recovery model instead. Seed capital in combination with rock-solid recovery is the way to go. "All models are wrong," the saying goes, "but some are useful!"
A nice side benefit to this new approach is maintenance is greatly improved. Rather than rushing to fix problems, you roll the application over to another host machine, and then take your time fixing the failed hardware.
How does the team deal with requests for dedicated resources? Give them the keys to their own miniature private cloud. Let them provision from their dedicated resources using the same methods you use to provision everyone else. This allows them to get comfortable with the process, and eventually join the rest of the shared pool. Analytics can be used to find "rogue VMs" that don't play well with others.
Their automation is a mix of commercial and open source software, with home-grown scripts. They have one "Orchestration Management Data Base" (OMDB) to manage multiple disparate Configuration Management Data bases (CMDBs). The chargeback is not quite per individual pay-per-use, but more at the departmental level.
- Aging Data: The Challenges of Long-Term Data Retention
The analyst defined "aging data" to be any data that is older than 90 days. A quick poll of the audience showed what type of data was the biggest challenge:
In addition to aging data, the analyst used the term "vintage" to refer to aging data that you might actually need in the future, and "digital waste" for data you have no use for. She also defined "orphaned" data as data that has been archived but is not actively owned or managed by anyone.
You need policies for retention, deletion, legal hold, and access. Most people forget to include access policies. How are people dealing with data and retention policies? Here were the poll results:
The analyst predicts that half of all applications running today will be retired by 2020. Tools like "IBM InfoSphere Optim" can help with application retirement by preserving both the data and metadata needed to make sense of the information after the application is no longer available. App retirement has a strong ROI.
Another problem is that there is data growth in unstructured data, but nobody is given the responsibility of "archivist" for this data, so it goes un-managed and becomes a "dumping ground". Long-term retention involves hardware, software and process working together. The reason that purpose-built archive hardware (such as IBM's Information Archive or EMC's Centera) fell short was that companies failed to get the appropriate software and process in place to complete the solution.
Cloud computing will help. The analyst estimates that 40 percent of new email deployments will be done in the cloud, such as IBM LotusLive, Google Apps, and Microsoft Online365. This offloads the archive requirement to the public cloud provider.
One case study is the University of Minnesota Supercomputing Institute, which has three tiers for its storage: 136TB of fast storage for scratch space, 600TB of slower disk for project space, and 640TB of tape for long-term retention.
What are people using today to hold their long-term retention data? Here were the poll results:
The bottom line is that retention of aging data is a business problem, a technology problem, an economic problem, and a 100-year problem.
- A Case Study for Deploying a Unified 10G Ethernet Network
Brian Johnson from Intel presented the latest developments on 10Gb Ethernet. Case studies from Yahoo and NASA, both members of the [Open Data Center Alliance], found that upgrading from 1Gb to 10Gb Ethernet was more than just an improvement in speed. Other benefits include:
- 45 percent reduction in energy costs for Ethernet switching gear
- 80 percent fewer cables
- 15 percent lower costs
- doubled bandwidth per server
Ruiping Sun, from Yahoo, found that 10Gb FCoE achieved 920 MB/sec, which was 15 percent faster than the 8Gb FCP they were using before.
IBM, Dell and other Intel-based servers support Single Root I/O Virtualization, or SR-IOV for short. NASA found that cloud-based HPC is feasible with SR-IOV. Using IBM General Parallel File System (GPFS) and 10Gb Ethernet, they were able to replace a previous environment based on 20 Gbps DDR InfiniBand.
While some companies are still arguing over whether to implement a private cloud, an archive retention policy, or 10Gb Ethernet, other companies have shown great success moving forward.
technorati tags: IBM, BofA, Prentice+Dees, AaaS, Linux, Open Source, OMDB, CMDB, Aging data, Archive, Retention, InfoSphere, Optim, LotusLive, University Minnesota, 10GbE, SR-IOV, GPFS, private cloud
IBM had over a dozen storage-related announcements this week. This is the third and final part in my series providing a quick overview of the announcements.
- IBM Tivoli® Storage Manager v6.3
IBM Tivoli Storage Manager is market-leading software that provides not just backup, but also HSM and archive capabilities across a wide variety of operating systems. Originally developed in the IBM Almaden Research Center, it then moved about 15 years ago to Tucson to become a commercial product.
The new TSM v6.3 introduces a site-to-site hot-standby disaster recovery feature that replicates the TSM metadata and data for fast recovery. The maximum number of objects supported has doubled to four billion. Reporting has been enhanced using technologies borrowed from IBM Cognos. Lastly, a feature from Tivoli Storage Productivity Center has been carried over to deploy and update agents on the various clients.
For more details, see fellow IBM blogger Richard Vining's post on [TSM v6.3 Announcements].
- IBM Tivoli Storage FlashCopy® Manager v3.1
IBM Tivoli Storage FlashCopy Manager coordinates application-aware backups through the use of point-in-time copy services such as FlashCopy or Snapshot on various IBM and non-IBM disk systems. The copies can remain on disk, or optionally be processed by Tivoli Storage Manager to move them to external storage such as tape for added protection.
There will always be a spot in my heart for this product, as the method to use FlashCopy for application-aware backups on the mainframe was my 19th patent, and subsequently delivered as a series of enhancements to DFSMS over the past decade on the z/OS operating system. It is good to see this innovation has "jumped over" to distributed systems.
The new FlashCopy Manager v3.1 adds support for HP-UX and VMware, expands support for IBM DB2 and Oracle databases, and introduces an interface for custom business applications.
For more details, see fellow IBM blogger Del Hoobler's post on [TSM FlashCopy Manager v3.1 Announcements].
- IBM Tivoli Storage Manager for Virtual Environments v6.3
TSM for VE is a new addition to the TSM family, focused on being able to coordinate hypervisor-aware data protection. Initially it supports VMware, but IBM has plans to support a variety of other server virtualization hypervisors as well, as over 40 percent of companies run two or more hypervisors in their data center.
The new TSM for VE v6.3 adds a VMware vCenter plug-in, and support for hardware-based disk snapshots.
- IBM Tivoli Storage Productivity Center v4.2.2
A long time ago, I was the chief architect of IBM Tivoli Storage Productivity Center v1; now we are already up to the v4.2.2 release!
IBM has added enhanced reporting based on IBM Cognos technology, including storage tiering analysis reports (STAR). Few companies keep all of their storage tiers in a single disk system. Rather, they have different boxes, often from different vendors. IBM's Productivity Center can report on both IBM and non-IBM disk systems. New in this release is support for the internal disks of the Storwize V7000 midrange disk system.
Productivity Center's "SAN Planner" has been enhanced to consider XIV replication criteria. The SAN Planner helps clients decide where to carve LUNs, making sure they pick the right place given all of the criteria, such as remote copy replication.
Last year, we introduced Productivity Center for Disk Midrange Edition (MRE), which offers a lower price when you are only managing midrange disk systems such as the DS5000, DS3000, Storwize V7000 and SVC. This was so successful that we now have TPC Select, which is basically Productivity Center Standard Edition (SE) for these midrange disk systems.
Whew! I have already heard from some of my readers asking me to slow down, saying that this is too much information to deal with all at once. IBM has tried everything from having just a few announcements nearly every Tuesday, to having huge launches every two to three years, and has settled in the middle with announcements about four to five times per year.
technorati tags: IBM, Tivoli, Storage, TSM, backup, HSM, archive, FlashCopy, FlashCopy Manager, VE, VMware, vCenter, Cognos, TPC, MRE, TPC Select
Webcast: How to Diagnose and Cure What Ails Your Storage Infrastructure
Wednesday, March 23, 2011 at 11:00 AM PDT / 11:00 AM Arizona MST / 2:00 PM EDT
Storage is the most poorly utilized infrastructure element -- and the most costly part of hardware budgets -- in most IT shops today. And it’s getting worse. Storage management typically involves a nightmarish mash-up of tools for capacity management, performance management and data protection management unique to each array deployed in heterogeneous fabrics. Server and desktop virtualization seem to have made management issues worse, and coming on the heels of changing workloads and data proliferation is the requirement to add data management to the set of responsibilities shouldered by fewer and fewer storage professionals. Forecast for Storage in 2012: more pain as the long-delayed storage infrastructure refresh becomes mandatory.
In this webcast, fellow blogger Jon Toigo, CEO of Toigo Partners International, of [DrunkenData] fame, and I will take turns assessing the challenges and suggesting real-world solutions to the many issues that confound storage efficiency in contemporary IT. Integrating real-world case studies and technology insights, we will deliver a must-see webcast that sets down a strategy for fixing storage...before it fixes you.
Don't miss this event, unless you like the stress of knowing that your next disaster may be a data disaster.
Register for this webcast to come hear me and Jon Toigo talk!
technorati tags: IBM, Webcast, Jon Toigo, Storage Efficiency, Data Protection, Retention, Archive, Smarter Computing, Big Data, Optimized Systems, Cloud Computing
As time progresses, things change: sometimes for the better, sometimes a step backwards, and sometimes just different enough to be annoying. I wrote my blog post about [A Box Full of Floppies] a week ago, and posted it on Monday. Let's take a look at how time and change impacted that one post.
- The weather has warmed up here in Tucson so I started my Spring Cleaning early this year...
If there is ever a good time to brag about how beautiful the weather is here in Tucson, it would be when everyone else in the country is digging themselves out of piles of snow. When my friends on Twitter were complaining how cold it was in Scotland, Ireland, Canada, or the East Coast of the United States, I would remind them that I was wearing a tee-shirt and shorts. I played golf for a week last December!
Sadly, a few days after my post, Tucson had the coldest days of February, breaking records set back in 1899. Water pipes froze, outdoor plants suffered, and over 14,000 homes and businesses were cut off from natural gas. The 1,400-plus employees at the IBM Tucson facility have been asked to telecommute until restroom facilities can be restored to working order.
While we should all pay more attention to [climate change], this latest chill is probably just a seasonal fluctuation thanks to [La Niña] that happens every 10-15 years.
- Here is a YouTube video of an astronaut ejecting a floppy disk...
Back in 2009, YouTube decided to [stop supporting Internet Explorer 6 (IE6)] to view its videos. However, that is what most IBMers were on, and this posed a problem when I embedded a video on my blog. To get around that, my friends at Microsoft provided special "conditional HTML tags" that allow me to suppress YouTube videos when viewed from Internet Explorer. The video shows up for those using Chrome, Opera, Firefox or other browsers, but is suppressed for IE users, allowing IBM employees to at least read the text.
Fortunately, last July, IBM decided to switch from IE6 over to Mozilla Firefox as the standard browser, so I thought this would no longer be an issue.
Unfortunately, my friends at YouTube have done it again. They changed the generated embed code from "object" tags to "iframe" tags, which messes up blogs written in various blogging systems, including the Lotus Connections system I have here on DeveloperWorks, as well as WordPress. The new method is intended either to promote the new HTML5 standard, or to piss off [iPhone users]. In any case, several readers found they could not read my entire post about floppies because the "iframe" prevented the rest of the post from being shown. I have since reverted back to the old "object" tags and re-posted for everyone's benefit.
- I may have to stand up an OS/2 machine just to check out what is actually on those floppies...
For any data that you keep for long term retention, it is important that you be able to access the data in a meaningful way when you need it. IBM has identified five ways that this can be done:
- Museum approach -- keep old servers, storage and applications around. In my case, I have computers that can handle 3.5-inch floppy diskettes, but no hardware to read my Zip cartridges or 5.25-inch floppies.
- Emulation approach -- emulating old systems with new systems. I remember the first CD players had "tape cassette" attachments so they could be used in car stereos.
- Migration approach -- migrating data and applications to new technology. This is what most businesses do. For example, if you keep archives through IBM Tivoli Storage Manager or DFSMShsm, the software will migrate data from old tapes to new tapes as part of its tape reclamation process.
- Descriptive approach -- including sufficiently descriptive metadata, such as with HTML or XML tags, that would enable future rendering (see the sketch after this list).
- Encapsulation approach -- encapsulate the data, metadata and related application logic for future processing. While the "descriptive" approach might help display the contents of proprietary formats, the encapsulation approach would include application logic, perhaps written in Java, that could be used to actually operate built-in macros, pivot tables, or other active features of a document or database.
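Here is a minimal sketch of the descriptive approach referenced above: wrapping an archived file in self-describing XML metadata so a future system can make sense of it. The schema and file names are invented for illustration and do not correspond to any OASIS standard.

```python
# Write a self-describing XML "sidecar" next to an archived file, so the
# bits can be interpreted decades later. The schema here is invented.
import xml.etree.ElementTree as ET
from datetime import date

meta = ET.Element("archived-object")
ET.SubElement(meta, "filename").text = "budget-1994.wk4"
ET.SubElement(meta, "format").text = "Lotus 1-2-3 worksheet, release 4"
ET.SubElement(meta, "created").text = "1994-03-17"
ET.SubElement(meta, "archived").text = date.today().isoformat()
ET.SubElement(meta, "source-system").text = "OS/2 desktop, 3.5-inch diskette"

ET.ElementTree(meta).write("budget-1994.wk4.meta.xml",
                           encoding="utf-8", xml_declaration=True)
```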
IBM Research is working closely with industry standards groups, like the Organization for the Advancement of Structured Information Standards [OASIS], to help promote the use of open standards for long-term retention.
For my readers who follow American Football, enjoy the [SuperBowl]!
technorati tags: IBM, La+Nina, floppy, diskette, astronaut, YouTube, archive, OASIS, climate change, global warming, inconvenient truth
In keeping with the spirit of a kinder, gentler 2011, I decided last week to refrain from raining on someone else's parade immediately before, during or after a competitor's announcement or annual conference, and let EMC have their few moments in the spotlight last week. This of course allows me more time to learn about the announcements and reflect on marketplace reactions. Here's a quick look at the [EMC Press Release]:
- A new VNXe disk system
Of the 41 new storage technologies and products EMC announced last week, the VNXe is EMC's "me-too" product to compete against other low-end disk systems like the IBM System Storage DS3524 and N3000 series. It looks truly new, developed organically from the ground up, with a new architecture and a new OS. It comes in either the 2U-high VNXe3100 or the 3U-high VNXe3300. These employ 3.5-inch SAS drives to provide Ethernet-based NFS, CIFS and iSCSI host attachment. The $10K USD price tag appears to be for the hardware only. As is typical for EMC, they charge for software features in bundles or "suites", so the actual TCO will be much higher. I have not seen any announcements on whether Dell plans to resell either the VNXe or the VNX models, now that they have acquired Compellent.
- A new VNX disk system
Despite having a similar name to the VNXe, the VNX appears to be a re-hash of the Celerra/CLARiiON mess that EMC has been selling already, based on the old FLARE and DART operating systems of those older disk systems. This scales from 75 to 1000 SAS drives. While EMC calls the VNX "unified", it is currently only available in block-only and file-only models, with a promise from EMC that they will offer a combined block-and-file version sometime in the future. EMC claims that the VNX will be faster than its predecessors, so hopefully that means EMC has joined the rest of the planet and will publish SPC-1 and SPC-2 benchmarks to back up that claim. They can compare against the SPC-1 benchmarks that our friends at NetApp ran against the EMC CLARiiON.
- New software for the VMAX
A long time ago, EMC announced they would provide non-disruptive automated tiering. Their first delivery, "FAST V1", handled entire LUNs at a time. EMC has now finally delivered "FAST VP", which we expected was going to be called "FAST V2"; it provides sub-LUN automated tiering between solid-state and spinning disk drives. Meanwhile, IBM has been delivering "Easy Tier" on the IBM System Storage DS8000 series, SAN Volume Controller, and Storwize V7000 disk systems.
- Data Domain Archiver
Competing against IBM, HP and Oracle in the tape arena, EMC's latest addition to the Data Domain family is designed for the long-term retention of backups? Archives of backups? Backups are short-lived, protecting against unexpected loss from hardware failure or data corruption. Keeping backups as "archives" is generally a bad mistake, as it makes it hard to e-Discover the data you need when you need it, and you may not have the appropriate hardware to restore these old backups when you do find them.
I will have to dig deeper into all of these different technologies in separate posts in the future.
Continuing my week in Washington DC for the annual [2010 System Storage Technical University], I presented a session on Storage for the Green Data Center, and attended a System x session on Greening the Data Center. Since they were related, I thought I would cover both in this post.
- Storage for the Green Data Center
I presented this topic in four general categories:
- Drivers and Metrics - I explained the three key drivers for consuming less energy, and the two key metrics: Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCiE). (See the sketch after this list.)
- Storage Technologies - I compared the four key storage media types: Solid State Drives (SSD), high-speed (15K RPM) FC and SAS hard disk, slower (7200 RPM) SATA disk, and tape. I had comparison slides that showed how IBM disk is more energy efficient than the competition; for example, the DS8700 consumes less energy than an EMC Symmetrix when compared with the exact same number and type of physical drives. Likewise, IBM LTO-5 and TS1130 tape drives consume less energy than comparable HP or Oracle/Sun tape drives.
- Integrated Systems - IBM combines multiple storage tiers in a set of integrated systems managed by smart software. For example, the IBM DS8700 offers [Easy Tier] to provide smart data placement and movement across Solid-State drives and spinning disk. I also covered several blended disk-and-tape solutions, such as the Information Archive and SONAS.
- Actions and Next Steps - I wrapped up the talk with actions that data center managers can take to help them be more energy efficient, from deploying the IBM Rear Door Heat Exchanger to improving the management of their data.
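For those who like to see the arithmetic behind the two metrics mentioned in the first bullet, here is a quick sketch; the power numbers are made up, chosen to produce the PUE of 1.3 reported for the Boulder site below.

```python
# PUE = total facility power / IT equipment power; DCiE = 1 / PUE.
# Illustrative numbers only.
it_load_kw = 1000.0    # servers, storage, switches
facility_kw = 1300.0   # IT load plus cooling, UPS losses, lighting

pue = facility_kw / it_load_kw
dcie = 100.0 / pue     # DCiE expressed as a percentage

print(f"PUE  = {pue:.2f}")    # 1.30
print(f"DCiE = {dcie:.1f}%")  # 76.9%
```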
- Greening of the Data Center
Janet Beaver, IBM Senior Manager of Americas Group facilities for Infrastructure and Facilities, presented on IBM's success in becoming more energy efficient. The price of electricity has gone up 10 percent per year, and in some locations, 30 percent. For every 1 Watt used by IT equipment, there are an additional 27 Watts for power, cooling and other uses to keep the IT equipment comfortable. At IBM, data centers represent only 6 percent of total floor space, but 45 percent of all energy consumption. Janet covered two specific data centers, Boulder and Raleigh.
At Boulder, IBM keeps 48 hours reserve of gasoline (to generate electricity in case of outage from the power company) and 48 hours of chilled water. Many power outages are less than 10 minutes, which can easily be handled by the UPS systems. At least 25 percent of the Computer Room Air Conditioners (CRAC) are also on UPS as well, so that there is some cooling during those minutes, within the ASHRAE guidelines of 72-80 degrees Fahrenheit. Since gasoline gets stale, IBM runs the generators once a month, which serves as a monthly test of the system, and clears out the lines to make room for fresh fuel.
The IBM Boulder data center is the largest in the company: 300,000 square feet (the equivalent of five football fields)! Because of its location in Colorado, IBM enjoys "free cooling" using outside air temperature 63 percent of the year, resulting in a PUE rating of 1.3. Electricity is only 4.5 US cents per kWh. The center also uses 1 million kWh of wind energy per year.
The Raleigh data center is only 100,000 square feet, with a PUE rating of 1.4. The Raleigh area enjoys 44 percent "free cooling" and electricity costs of 5.7 US cents per kWh. The Leadership in Energy and Environmental Design [LEED] program has been updated to certify data centers. The IBM Boulder data center has achieved LEED Silver certification, and the IBM Raleigh data center has LEED Gold certification.
Free cooling, electricity costs, and disaster susceptibility are just three of the 25 criteria IBM uses to locate its data centers. In addition to the 7 data centers it manages for its own operations, and 5 data centers for web hosting, IBM manages over 400 data centers for other clients.
It seems that Green IT initiatives are more important to the storage-oriented attendees than the x86-oriented folks. I suspect that is because many System x servers are deployed in small and medium businesses that do not have data centers, per se.
technorati tags: IBM, Technical University, Green Data Center, PUE, DCiE, Free Cooling, ASHRAE, LEED, SSD, Disk, Tape, SONAS, Archive
A long time ago, perhaps in the early 1990s, I was an architect on the component known today as DFSMShsm on the z/OS mainframe operating system. One of my job responsibilities was to attend the biannual [SHARE] conference to listen to the requirements of the attendees on what they would like added or changed in DFSMS, and to ask enough questions so that I could accurately present the reasoning to the rest of the architects and software designers on my team. One person requested that the DFSMShsm RELEASE HARDCOPY command should release "all" the hardcopy. This command sends all the activity logs to the designated SYSOUT printer. I asked what he meant by "all", and the entire audience of 120-some attendees nearly fell on the floor laughing. He complained that some clever programmer wrote code to test if an activity log contained only "Starting" and "Ending" messages, but no error messages, and skipped sending those to SYSOUT. I explained that this was done to save paper, good for the environment, and so on. Again, howls of laughter. Most customers reroute the SYSOUT from DFSMS from a physical printer to a logical one that saves the logs as data sets, with date and time stamps, so any "skipped" logs leave gaps in the sequence. The client wanted a complete set of data sets for his records. Fair enough.
When I returned to Tucson, I presented the list of requests, and the immediate reaction when I presented the one above was, "What did he mean by ALL? Doesn't it release ALL of the logs already?" I then had to recap our entire dialogue, and then it all made sense to the rest of the team. At the following SHARE conference six months later, I was presented with my own official "All" tee-shirt that listed, and I am not kidding, some 33 definitions for the word "all", in small font covering the front of the shirt.
I am reminded of this story because of the challenges of explaining complicated IT concepts using the English language, which is so full of overloaded words that have multiple meanings. Take for example the word "protect". What does it mean when a client asks for a solution or system to "protect my data" or "protect my information"? Let's take a look at three different meanings:
- Unethical Tampering
The first meaning is to protect the integrity of the data from within, especially from executives or accountants that might want to "fudge the numbers" to make quarterly results look better than they are, or to "change the terms of the contract" after agreements have been signed. Clients need to make sure that the people authorized to read/write data can be trusted to do so, and to store data in Non-Erasable, Non-Rewriteable (NENR) protected storage for added confidence. NENR storage includes Write-Once, Read-Many (WORM) tape and optical media, disk and disk-and-tape blended solutions such as the IBM Grid Medical Archive Solution (GMAS) and IBM Information Archive integrated system.
- Unauthorized Access
The second meaning is to protect access from without, especially from hackers or other criminals that might want to gather personally-identifiable information (PII) such as social security numbers, health records, or credit card numbers and use these for identity theft. This is why it is so important to encrypt your data. As I mentioned in my post [Eliminating Technology Trade-Offs], IBM supports hardware-based encryption with FDE drives in its IBM System Storage DS8000 and DS5000 series. These FDE drives have AES-128 encryption built in to perform the encryption in real time. Neither HDS nor EMC supports these drives (yet). Fellow blogger Hu Yoshida (HDS) indicates that their USP-V has implemented data-at-rest encryption in their array differently, using backend directors instead. I am told EMC relies on the consumption of CPU cycles on the host servers to perform software-based encryption, either as MIPS consumed on the mainframe, or using their PowerPath multi-pathing driver on distributed systems.
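To illustrate what encryption of data at rest accomplishes, here is a small software sketch using the Python cryptography package (whose Fernet recipe happens to use AES-128). FDE drives do this job in hardware, below the operating system; this sketch is purely conceptual and does not mirror any vendor's implementation.

```python
# Conceptual sketch: what lands on disk is ciphertext, readable only
# with the key. Requires the "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in practice, keep this in a key manager,
cipher = Fernet(key)            # never alongside the data it protects

record = b"SSN=123-45-6789, card=4111-1111-1111-1111"
stored = cipher.encrypt(record)           # the only thing written to disk

print(stored[:24])                        # unreadable without the key
print(cipher.decrypt(stored))             # authorized access recovers the record
```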
There is also concern about whether internal employees have the right "need-to-know" for various research projects or upcoming acquisitions. On SANs, this is normally handled with zoning, and on NAS with appropriate group/owner bits and access control lists. That's fine for LUNs and files, but what about databases? IBM's DB2 offers Label-Based Access Control [LBAC], which provides a finer level of granularity, down to the row or column level. For example, if a hospital database contained patient information, the doctors and nurses would not see the columns containing credit card details, the accountants would not see the columns containing healthcare details, and the individual patients, if they had any access at all, would only be able to access the rows related to their own records, and possibly the records of their children or other family members.
- Unexpected Loss
The third meaning is to protect against the unexpected. There are lots of ways to lose data: physical failure, theft or even incorrect application logic. Whatever the cause, you can protect against this by having multiple copies of the data. You can either have multiple copies of the data in its entirety, or use RAID or a similar encoding scheme to store parts of the data in multiple separate locations. For example, with a RAID-5 rank in a 6+P+S configuration, you would have six parts of data and one part parity code scattered across seven drives. If you lose one of the disk drives, the data can be rebuilt from the remaining portions and written to the spare disk set aside for this purpose.
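Here is a tiny worked example of the XOR arithmetic behind that kind of parity protection: compute parity across six data chunks, "lose" one, and rebuild it from the survivors, much as a 6+P+S rank rebuilds onto its spare drive.

```python
# XOR parity in miniature: parity of six chunks lets any one lost chunk
# be rebuilt from the other five plus parity.
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data = [b"chunk%d" % i for i in range(6)]   # six equal-sized data strips
parity = reduce(xor, data)                  # the "P" in 6+P+S

lost = 3                                    # pretend drive 3 just failed
survivors = [d for i, d in enumerate(data) if i != lost]
rebuilt = reduce(xor, survivors + [parity]) # rebuild onto the spare

assert rebuilt == data[lost]
print("rebuilt:", rebuilt)                  # b'chunk3'
```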
But what if the drive is stolen? Someone can walk up to a disk system, snap out the hot-swappable drive, and walk off with it. Since it contains only part of the data, the thief would not have the entire copy of the data, so no reason to encrypt it, right? Wrong! Even with part of the data, people can get enough information to cause your company or customers harm, lose business, or otherwise get you in hot water. Encryption of the data at rest can help protect against unauthorized access to the data, even in the case when the data is scattered in this manner across multiple drives.
To protect against site-wide loss, such as from a natural disaster, fire, flood, earthquake and so on, you might consider having data replicated to remote locations. For example, IBM's DS8000 offers two-site and three-site mirroring. Two-site options include Metro Mirror (synchronous) and Global Mirror (asynchronous). The three-site is cascaded Metro/Global Mirror with the second site nearby (within 300km) and the third site far away. For example, you can have two copies of your data at site 1, a third copy at nearby site 2, and two more copies at site 3. Five copies of data in three locations. IBM DS8000 can send this data over from one box to another with only a single round trip (sending the data out, and getting an acknowledgment back). By comparison, EMC SRDF/S (synchronous) takes one or two trips depending on blocksize, for example blocks larger than 32KB require two trips, and EMC SRDF/A (asynchronous) always takes two trips. This is important because for many companies, disk is cheap but long-distance bandwidth is quite expensive. Having five copies in three locations could be less expensive than four copies in four locations.
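A quick back-of-the-envelope calculation shows why the number of round trips matters for synchronous mirroring. The speed-of-light figure is approximate, and the schemes are labeled generically rather than by product:

```python
# Light in fiber covers roughly 200 km per millisecond, so each round
# trip over a 300 km link adds about 3 ms that the write must wait for.
distance_km = 300
rtt_ms = 2 * distance_km / 200.0   # out and back: ~3 ms

for trips, scheme in [(1, "single round-trip mirroring"),
                      (2, "two round-trip mirroring")]:
    print(f"{scheme}: ~{trips * rtt_ms:.0f} ms added to every synchronous write")
```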
Fellow blogger BarryB (EMC Storage Anarchist) felt I was unfair pointing out that their EMC Atmos GeoProtect feature only protects against "unexpected loss" and does not eliminate the need for encryption or appropriate access control lists to protect against "unauthorized access" or "unethical tampering".
(It appears I stepped too far on to ChuckH's lawn, as his Rottweiler BarryB came out barking, both in the [comments on my own blog post], as well as his latest titled [IBM dumbs down IBM marketing (again)]. Before I get another rash of comments, I want to emphasize this is a metaphor only, and that I am not accusing BarryB of having any canine DNA running through his veins, nor that Chuck Hollis has a lawn.)
As far as I know, the EMC Atmos does not support FDE disks that do this encryption for you, so you might need to find another way to encrypt the data and set up the appropriate access control lists. I agree with BarryB that "erasure codes" have been around for a while and that there is nothing unsafe about using them in this manner. All forms of RAID-5, RAID-6 and even RAID-X on the IBM XIV storage system can be considered a form of such encoding as well. As for the amount of long-distance bandwidth that Atmos GeoProtect would consume to provide this protection against loss, you might question any cost savings from this space-efficient solution. As always, you should consider both space and bandwidth costs in your total cost of ownership calculations.
Of course, if saving money is your main concern, you should consider tape, which can be ten to twenty times cheaper than disk, allowing you to keep a dozen or more copies, in as many time zones, at substantially lower cost. These can be encrypted and written to WORM media for even more thorough protection.
If these three methods of protection sound familiar, I mentioned them in my post about [Pulse conference, Data Protection Strategies] back in May 2008.
It's Tuesday, and that means more IBM announcements!
I haven't even finished blogging about all the other stuff that got announced last week, and here we are with more announcements. Since IBM's big [Pulse 2010 Conference] is next week, I thought I would cover this week's announcement on Tivoli Storage Manager (TSM) v6.2 release. Here are the highlights:
- Client-Side Data Deduplication
This is sometimes referred to as "source-side" deduplication, as storage admins can get confused about which servers are clients in a TSM client-server deployment. The idea is to identify duplicates at the TSM client node, before sending data to the TSM server. This is done at the block level, so even files that are similar but not identical, such as slight variations from a master copy, can benefit. The dedupe process is based on an index shared across all clients and the TSM server, so if you have a file that is similar to a file on a different node, the blocks that are identical in both will be deduplicated.
This feature is available for both backup and archive data, and can also be useful for archives using the IBM System Storage Archive Manager (SSAM) v6.2 interface.
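For the curious, here is a stripped-down sketch of the source-side idea: hash fixed-size blocks, consult a shared index, and send only blocks the server has never seen. Actual TSM deduplication is more sophisticated than this; the sketch just shows the flow.

```python
# Toy client-side dedupe: only blocks absent from the shared index
# travel over the wire.
import hashlib, os

CHUNK = 4096
shared_index = set()   # stands in for the index shared with the TSM server

def backup(data):
    """Return how many bytes actually had to be sent to the server."""
    sent = 0
    for off in range(0, len(data), CHUNK):
        block = data[off:off + CHUNK]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in shared_index:      # unknown anywhere, from any client
            shared_index.add(digest)
            sent += len(block)
    return sent

master = os.urandom(10 * CHUNK)                   # a 40 KB file of unique blocks
variant = master[:9 * CHUNK] + os.urandom(CHUNK)  # same file, last block edited

print("master sent :", backup(master))    # 40960 bytes: everything is new
print("variant sent:", backup(variant))   #  4096 bytes: only the changed block
```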
- Simplified Management of Server Virtualization
TSM 6.2 improves its support of VMware guests by adding auto-discovery. Now, when you spontaneously create a new virtual machine OS guest image, you won't have to tell TSM; it will discover this automatically! TSM's legendary support of VMware Consolidated Backup (VCB) now eliminates the manual process of keeping track of guest images. TSM also added support for the vStorage API for file-level backup and recovery.
While IBM is the #1 reseller of VMware, we also support other forms of server virtualization. In this release, IBM adds support for Microsoft Hyper-V, including support using Microsoft's Volume Shadow Copy Services (VSS).
- Automated Client Deployment
Do you have clients at all different levels of TSM backup-archive client code deployed all over the place? TSM v6.2 can upgrade these clients to the latest level automatically, using push technology, from any client running v5.4 and above. This can be scheduled so that only certain clients are upgraded at a time.
- Simultaneous Background Tasks
The TSM server has many background administrative tasks:
- Migration of data from one storage pool to another, based on policies, such as moving backups and archives on a disk pool over to tape pools to make room for new incoming data.
- Storage pool backup, typically data on a disk pool is copied to a tape pool to be kept off-site.
- Copy active data. In TSM terminology, if you have multiple backup versions, the most recent version is called the active version, and the older versions are called inactive. TSM can copy just the active versions to a separate, smaller disk pool.
In previous releases, these were done one at a time, so it could make for a long service window. With TSM v6.2, these three tasks are now run simultaneously, in parallel, so that they all get done in less time, greatly reducing the server maintenance window, and freeing up tape drives for incoming backup and archive data. Often, the same file on a disk pool is going to be processed by two or more of these scheduled tasks, so it makes sense to read it once and do all the copies and migrations at one time while the data is in buffer memory.
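A toy sketch of why one pass beats three, with invented pool and object names: each object is read from the disk pool once, and all three destinations are written while the data sits in memory.

```python
# One read per object fans out to migration, off-site copy, and the
# active-data pool, instead of three separate reads in three windows.
disk_pool = {"obj1": b"backup data", "obj2": b"older backup version"}
tape_pool, copy_pool, active_pool = {}, {}, {}
active_versions = {"obj1"}                # only obj1 is the active version

for name, data in disk_pool.items():      # single read per object...
    tape_pool[name] = data                # ...migration to tape
    copy_pool[name] = data                # ...storage pool backup for off-site
    if name in active_versions:
        active_pool[name] = data          # ...copy of active data only

print(sorted(tape_pool), sorted(copy_pool), sorted(active_pool))
```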
- Enhanced Security during Data Transmission
Previous releases of TSM offered secure in-flight transmission of data for Windows and AIX clients. This security uses Secure Socket Layer (SSL) with 256-bit AES encryption. With TSM v6.2, this feature is expanded to support Linux, HP-UX and Solaris.
- Improved support for Enterprise Resource Planning (ERP) applications
I remember back when we used to call these TDPs (Tivoli Data Protection). TSM for ERP allows backup of ERP applications, seamlessly integrating with database-specific tools like IBM DB2, Oracle RMAN, and SAP BR*Tools. This allows one-to-many and many-to-one configurations between SAP servers and TSM servers. In other words, you can have one SAP server back up to several TSM servers, or several SAP servers back up to a single TSM server. This is done by splitting up databases into "sub-database objects", and then processing each object separately. This can be extremely helpful if you have databases over 1TB in size. In the event that backing up an object fails and has to be re-started, it does not impact the backup of the other objects.
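Here is a rough sketch of the sub-object idea, with invented object names and a simulated failure: when one chunk's transfer fails, only that chunk is re-driven, not the whole multi-terabyte backup.

```python
# Split a 1.2 TB database into 100 GB objects; retry failures per object.
import random

random.seed(7)
OBJECT_GB, DB_GB = 100, 1200

def send(name):
    if random.random() < 0.2:                      # simulate a flaky transfer
        raise IOError(f"{name} transfer failed")

for i in range(DB_GB // OBJECT_GB):
    obj = f"saptrans.part{i:02d}"
    for attempt in (1, 2, 3):
        try:
            send(obj)
            break                                  # this object is done
        except IOError:
            print(f"retrying {obj} (attempt {attempt} failed)")
    else:
        print(f"{obj} still failing after 3 attempts")
```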
technorati tags: announcements, IBM, Pulse, conference, TSM, Tivoli, SSAM, backup, archive, VMware, VCB, Hyper-V, Microsoft, SSL, AES, encryption, in-flight, Linux, HP-UX, Solaris, ERP, DB2, Oracle, RMAN, SAP, BR*Tools, ibm-pulse, pulse2010
Hey everyone, it's Tuesday, and that means IBM announcements!
Last October, IBM [unveiled their plans to take over the world] of archive solutions, and today, we make good on that threat with the announcement of the [Information Archive (IA) v1.1].
(Insert evil villain laugh here)
To avoid overwhelming people with too many features and functions, IBM decided to keep things simple for the first release. Let's take a look:
The base frame (2231-IA3) supports a single collection, from as small as 3.6 TB to as large as 72 TB of usable capacity. You can attach one expansion frame (2231-IS3) that holds two additional collections, with 63 TB usable capacity for each collection. Disk capacity is increased in eight-drive (half-drawer) increments of 3.6 TB usable capacity each. A fully configured IA system (304 drives, 1 TB raw capacity per drive) provides 198 TB usable capacity.
Of course, that is just the disk side of the solution. Like its predecessor, the IBM System Storage DR550, the IA v1.1 can also attach to external tape storage to store and protect petabytes (PB) of archive data. Hundreds of different IBM and non-IBM tape drives and libraries are supported, so that this can be easily incorporated into existing tape environments.
- Protection Levels
Each collection can be configured to one of three protection levels: basic, intermediate, and maximum.
- Basic protection provides RAID protection of data using standard NFS group/user controls for access to read and write data. This can be useful for databases that need full read/write access. Users can assign expiration dates, but in Basic mode they can delete the data before the expiration date is reached.
- Intermediate adds Non-Erasable Non-Rewriteable (NENR) protection against user actions to delete or modify protected data. However, similar to IBM N series "Enterprise SnapLock", intermediate mode allows authorized storage admins to clean up the mess, increase or reduce retention periods, and delete data if it is inadvertently protected. I often refer to this as "training wheels" for those who are trying to work out their workflow procedures before moving on to Maximum mode.
- Maximum provides the strictest NENR protection for business, legal, government and industry requirements, comparable to IBM N series "Compliance SnapLock" mode, for data that traditionally were written to WORM optical media. Data cannot be deleted until the retention period ends. Retention periods of individual files and objects can be increased, but not decreased. Retention Hold (often referred to as Litigation Hold) can be used to keep a set of related data even longer in specific circumstances.
You can decide to upgrade your protection after data is written to a collection. Basic mode can be upgraded to Intermediate mode, for example, or Intermediate mode upgraded to Maximum.
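As a conceptual sketch (not Information Archive code), the following shows the Maximum-mode rules in miniature: deletion is blocked before expiry, retention can be lengthened but never shortened, and a retention hold overrides expiration.

```python
# Toy NENR enforcement: the rules of Maximum protection as code.
from datetime import date

class ProtectedObject:
    def __init__(self, name, expires):
        self.name, self.expires, self.hold = name, expires, False

    def extend_retention(self, new_date):
        if new_date < self.expires:
            raise PermissionError("retention can be increased, never decreased")
        self.expires = new_date

    def delete(self, today):
        if self.hold:
            raise PermissionError("object is under retention hold")
        if today < self.expires:
            raise PermissionError(f"retained until {self.expires}")
        print(f"{self.name} deleted")

doc = ProtectedObject("contract.pdf", date(2016, 12, 31))
doc.extend_retention(date(2019, 12, 31))   # allowed: longer retention
try:
    doc.delete(date(2012, 5, 1))           # denied: still within retention
except PermissionError as err:
    print("denied:", err)
```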
To keep things simple, v1.1 of the Information Archive supports only two industry standard protocols: NFS and SSAM API. The NFS option allows standard file commands to read/write data. The System Storage Archive Manager (SSAM) API allows smooth transition from earlier IBM System Storage DR550 deployments. With this announcement, IBM will [discontinue selling the DR550 DR2 models].
As we say here at IBM, "Today is the best day to stop using EMC Centera." For more information, see the IBM [Announcement Letter].
technorati tags: IBM, IA, archive, 2231-IA3, 2231-IS3, protection, protocols, NFS, SSAM, API, EMC, Centera, DR550