The first day had various breakout sessions in the afternoon.
- Understanding Your Options for Storing Archive Data to Meet Compliance Challenges
I presented IBM's Smart Archive strategy and the storage products IBM offers to archive data and meet compliance regulations:
- The differences between backup and archive, including a few of my own personal horror stories from helping companies that had foolishly assumed keeping backup copies for years would adequately serve as their archive strategy
- The differences between Write-Once Read-Many (WORM) media and Non-Erasable, Non-Rewriteable (NENR) storage options.
- How disk-only archive solutions become "space heaters" for your data center.
- An overview of the various storage hardware options from IBM.
- How LTFS can be incorporated into an archive solution, such as [Crossroads Systems' StrongBox® solution].
- An explanation of the different IBM software offerings to help complement the storage hardware choices.
- IBM TotalStorage Productivity Center (TPC): New Features and Functions
Mike Griese, IBM program manager for TPC, presented the latest on TPC 5.1, announced this week. His session was organized into four key sections:
- Insights - TPC 5.1 integrates COGNOS reporting, which allows customization of reports and ad-hoc exploration and analysis. Since the reports are not binary-compiled into the product, IBM can ship new COGNOS reports as templates outside the normal TPC release schedule. TPC 5.1 also got smarter about reporting on server virtualization hypervisor environments, to avoid double-counting.
- Recommendations - TPC 5.1 can analyze your usage patterns across the entire data center and recommend moving data from one storage tier to another. You can then act on these recommendations, either "up-tier" to faster storage or "down-tier" to less expensive storage, using a storage hypervisor like IBM SAN Volume Controller. This is complementary to features like Easy Tier, which optimize within a single disk system.
- Performance - TPC 5.1 uses a new web-based GUI, based on AJAX, HTML5 and Dojo widgets, inspired by the IBM XIV GUI, and similar to the web-based GUI of SAN Volume Controller, Storwize V7000 and SONAS.
- Optimization - TPC 5.1 allows you to optimize for Cloud by introducing a new RESTful API for storage provisioning and support for SONAS environments. This will allow upward integration to products like [IBM Service Delivery Manager] and [Tivoli Storage Automation Manager]. (A hedged sketch of what a call to such an API might look like follows this list.)
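To make the idea of a RESTful provisioning API concrete, here is a minimal Python sketch. The host name, endpoint path, payload fields and credentials are all hypothetical placeholders of my own, not the documented TPC 5.1 interface, which was not detailed in the session.

```python
# Minimal sketch of driving a RESTful storage-provisioning API, in the spirit
# of what TPC 5.1 introduces. The host, path and payload fields below are
# hypothetical placeholders, NOT the documented TPC API.
import requests

TPC_HOST = "https://tpc.example.com:9569"  # hypothetical TPC server

def provision_volume(name: str, size_gb: int, tier: str) -> dict:
    """Request a new volume from the (assumed) provisioning endpoint."""
    payload = {"name": name, "capacityGB": size_gb, "tier": tier}
    resp = requests.post(
        f"{TPC_HOST}/api/v1/volumes",   # placeholder path
        json=payload,
        auth=("admin", "password"),     # placeholder credentials
        verify=False,                   # appliances often use self-signed certs
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    vol = provision_volume("gold-vol-01", 100, "gold")
    print("Provisioned:", vol)
```

The appeal for Cloud integration is exactly this: a higher-level orchestrator can provision storage with a plain HTTP call, no product-specific client libraries required.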
Mike also explained the new TPC 5.1 packaging. Instead of having a variety of components like "TPC for Disk", "TPC for Data", and "TPC for Replication", the new packaging simplifies this down to two levels of functionality. The basic level supports block-level devices, including disk performance, replication and SAN fabric management. The advanced level adds support for files and databases, including support for Cloud management such as SONAS environments.
Dan Zehnpfennig, Solution Architect, talked about his experiences installing TPC 5.1 and how this was much improved over previous TPC versions.
- IBM Watson: How it Works and What it Means for Society Beyond Winning Jeopardy!
I presented how IBM Watson works, how it played the Jeopardy! game show last year, and how IBM has helped clients use the technology to solve real-world problems.
- Understanding the IBM Grand Challenge and how it compares to the IBM Deep Blue chess-playing computer
- How IBM Watson works, the hardware, the software, and the algorithms involved
- How to build your own "Watson Jr." in your own basement, based on my [popular instructions I published last year].
- Examples of how the technology is being used in Healthcare and Financial Services
If you missed it, I will be repeating this session on IBM Watson on Thursday.
Tonight we have the grand opening reception of the Solution Center and a concert featuring Grace Potter & the Nocturnals!
technorati tags: IBM, Archive, Compliance, WORM, NENR, Mike Griese, Dan Zehnpfennig, Tivoli Storage, Productivity Center, TPC, Watson, Healthcare, Financial Services, Wellpoint, Seton, CitiGroup
Tags: 
nenr
watson
ibm
seton
citigroup
tpc
archive
productivity+center
healthcare
mike+griese
financial+services
worm
compliance
wellpoint
tivoli+storage
|
I finished off the first day of the [IBM System Storage Technical University 2011] by presenting two topics. Both were repeated on day 3 for those who missed them today.
- IBM Information Archive for email, files and eDiscovery
Not too many people have heard of IBM's Smart Archive strategy and the storage products IBM offers to meet compliance regulations. This session covered the following:
- The differences between backup and archive, including a few of my own personal horror stories from helping companies that had foolishly assumed keeping backup copies for years would adequately serve as their archive strategy
- The differences between optical media, Write-Once Read-Many (WORM) media, and Non-Erasable, Non-Rewriteable (NENR) storage options.
- Why putting a [space heater] on your data center floor is a bad idea, driving up power and cooling costs for little business value to the enterprise once the unit is full of rarely accessed read-only data.
- An overview of the [IBM Information Archive], an integrated stack of servers, storage and software that replaces previous offerings such as the IBM System Storage DR550 and the IBM Grid Medical Archive Solution (GMAS).
- The marketing bundle known as the [Information Archive for Email, Files and eDiscovery] that combines the Information Archive storage appliance with Content Collectors for email and file systems, as well as eDiscovery tools, and implementation services for a solution that can support a small or medium size business, up to 1400 employees.
- IBM Tivoli Storage Productivity Center v4.2 Overview and Update
Many of the concerns raised when I [presented v4.1 at this conference last year] were addressed this year in v4.2, including full performance statistics for the IBM XIV storage system, Storage Resource Agent support for HP-UX and Solaris, and a variety of other issues.
I presented this overview in stages:
- "Productivity Center Basic Edition" that comes pre-installed on the IBM System Storage Productivity Center hardware console, that provides discover of devices, basic configuration, and a clever topology viewer of what is connected to what.
- "Productivity Center for Disk" and "Productivity Center for Disk Midrange Edition (MRE)" that provides real-time and historical performance monitoring, asset and capacity reporting.
- "Productivity Center for Replication" which supports monitoring, failover and failback for FlashCopy, Metro Mirror and Global Mirror on the SVC, Storwize V7000, DS8000, DS6000 and ESS 800.
- "Productivity Center for Data" which supports reporting on files, file systems and databases on DAS, SAN and NAS attached storage from a Operating System viewpoint.
- "Productivity Center Standard Edition" which includes all of the above except "Replication", and adds performance monitoring of SAN Fabric gear, and some very clever analytics to improver performance and problem determination.
One of the questions that came up was "How big does my company have to be to consider using Productivity Center?" which I answered as follows:
"If you are a small company, and the "IT Person" has responsibilities outside the IT, and managing the few pieces of kit is just part of his job, then consider just using the web-based GUI through a Firefox or similar browser. If you are a medium sized company with dedicated IT personnel, but mostly run by system admins or database admins that manage storage and networks on the side, you might want to consider the "Storage Control" plug-in for IBM Systems Director. But if you are larger shop, and there are employees with the title "Storage Administrator" and/or "SAN Administrator", then Tivoli Storage Productivity Center is for you."
Tivoli Storage Productivity Center has matured into a fine piece of software that truly can help medium and large sized data centers manage their storage and storage networking infrastructure.
I like speaking the first day of these events. Often people come in just to hear the keynote speakers, and stay the afternoon to hear a few break-out sessions before they leave Tuesday or Wednesday for other meetings.
technorati tags: IBM, IA, DR550, Information Archive, email, files, eDiscovery, WORM, NENR, compliance, CAS, EMC Centera, TPC, Tivoli Storage, Productivity Center, Storage Administrator, Management Tools
Tags: 
dr550
tivoli+storage
ia
productivity+center
cas
emc+centera
compliance
worm
nenr
management+tools
storage+administrator
ediscovery
email
ibm
information+archive
files
tpc
|
This year marks the 10-year anniversary of IBM's introduction of LTO tape technology. IBM is a member of the Linear Tape Open consortium, which consists of IBM, HP and Quantum, referred to as the "Technology Provider Companies" or TPCs. In an earlier job role, I was the "portfolio manager" for both the LTO and Enterprise tape product lines.
|
Today, we held a celebration in Tucson, with cake and refreshments.
IBM Executives Doug Balog, IBM VP of Storage Platform, and Sanjay Tripathi, the new IBM Director and Business Line Executive for Tape, VTL and Archive systems, presented the successes of LTO tape over the past 10 years.
To date, over 3.5 million LTO tape drives and over 150 million LTO tape media cartridges have been shipped, a testament to the remarkable marketplace acceptance of the technology.
|
In honor of this event, I decided to interview Bruce Master, IBM Senior Program Manager for Data Protection Systems, about this 10 year anniversary.
10 years of LTO technology is a great milestone. How is this especially significant to IBM and its clients?
According to IDC data, IBM has held the #1 position in market share for total worldwide branded tape revenue for over 7 years, and is still #1 in branded midrange tape revenue, which includes the LTO tape technologies. IBM was the first drive manufacturer to deliver LTO-1 drives, back in September 2000, the first to deliver tape drive encryption to the marketplace on LTO-4 drives, and is shipping LTO generation 5 drives and libraries. IBM is the author of the new Linear Tape File System (LTFS) specification that has been adopted by the TPCs. This file system revolutionizes how tape can be used, treating a cartridge as if it were a giant 1.5 terabyte removable USB memory stick that can be accessed with directory tree structures and drag-and-drop functionality. With LTO's built-in real-time compression, a single tape cartridge can hold up to 3TB of data.
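One way to appreciate LTFS is that, once a cartridge is mounted, ordinary file tools just work. Below is a minimal Python sketch under the assumption that the LTFS driver is installed and a cartridge is already mounted at /mnt/ltfs; the mount point and file names are mine, for illustration only.

```python
# Minimal sketch: treating an LTFS-mounted tape like any other file system.
# Assumes the LTFS driver has already mounted a cartridge at /mnt/ltfs;
# that mount point is an assumption for illustration.
import shutil
from pathlib import Path

TAPE = Path("/mnt/ltfs")  # assumed LTFS mount point

def archive_file(src: str) -> None:
    """Copy a file onto the tape cartridge, then list the cartridge contents."""
    shutil.copy2(src, TAPE / Path(src).name)  # a plain copy; no tape-specific API
    for entry in sorted(TAPE.iterdir()):      # browse it like a USB stick
        print(entry.name, entry.stat().st_size)

archive_file("/data/projects/q3-results.tar")  # hypothetical file
```

No tape-aware application, backup agent or proprietary API is involved, which is precisely the "USB memory stick" point Bruce makes above.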
The Linear Tape File System has been getting a lot of attention. Where can we learn more about it?
Researchers at IBM's Almaden Research Center invented the [Linear Tape File System], released it as Open Source under the name [IBM Long Term File System], and contributed the specification to the LTO consortium. On the [Ultrium.com] website, you can read articles written about the file system, the specification [60-page PDF] document and a [video demo] of the file system in action. There is also an article out on [Wikipedia].
Why is tape still a critical part of a storage infrastructure?
Tape is low cost and provides critical off-line portable storage to help protect data from attacks that can occur with on-line data. For instance, on-line data is at risk of attack from a virus, hacker, system error, disgruntled employee, and more. Since tape is off-line, not accessible by the system, it protects against these forms of corruption. LTO technology also provides write-once read-many (WORM) tape media to help address compliance issues that specify non-erasable, non-rewriteable (NENR) storage, hardware encryption to secure data, as well as a low cost long term archive media. When data cools off, or becomes infrequently accessed, why keep it on spinning disk? Move it to tape where it is much greener and lower cost. A tape in a slot on a shelf consumes minimal energy.
So tape is not dead?
Ha! Far from it. Seems like disk-only "specialty shop" storage vendors that don’t have tape in their sales portfolio are the ones that propagate that myth. In reality, storage managers are tasked with meeting complex objectives for performance, compliance, security, data protection, archive and total cost of ownership. Optimally, a blend of disk and tape in a tiered infrastructure can best address these objectives. You can’t build a house with just a hammer. IBM has a rich tool kit of storage offerings including disk, tape, software, services and deduplication technologies to help clients address their needs.
Do you have an example of a client who was saved by tape?
Yes indeed. Estes Express, a large trucking firm, was hit by a hurricane that flooded their data center and destroyed all systems. Fortunately the company survived because the night before they had backed up all data on to IBM tape and moved the cartridges offsite! The company survived and has since implemented a best practices data protection strategy with a combination of disk-to-disk-to-tape (D2D2T) using LTO tape at the primary site, and a remote global mirrored site that is also backed up to LTO tape.
So tape saved the day. What is the outlook for tape innovation in the future?
The future is bright for tape. Earlier this year, IBM and Fujifilm were able to [demonstrate a tape density achievement] that could enable a native 35TB tape cartridge capacity! This shows a long roadmap ahead for tape and a continued good night’s sleep for storage managers knowing that their precious data will be safe.
Of course, LTO tape is just one of the many reasons IBM is a successful and profitable leader in the IT storage industry. Doug Balog talked about his experiences in London for the [October 7th launch] of IBM DS8800, Storwize V7000 and SAN Volume Controller 6.1. Sanjay Tripathi showed recent successes with IBM's ProtecTIER Data Deduplication Solution and Information Archive products.
I would like to thank Bruce Master for his time in completing this interview. To learn more about IBM tape and storage offerings, visit [ibm.com/storage].
technorati tags: IBM, Linear Tape Open, LTO, LTO-1, LTO-2, LTO-3, LTO-4, LTO-5, Doug Balog, Sanjay Tripathi, Bruce Master
Tags: 
bruce+master
nenr
lto-4
ibm
worm
linear+tape+open
doug+balog
lto-5
lto-3
sanjay+tripathi
lto-2
lto-1
lto
|
A long time ago, perhaps in the early 1990s, I was an architect on the component known today as DFSMShsm on the z/OS mainframe operating system. One of my job responsibilities was to attend the biannual [SHARE] conference, listen to attendees' requirements for what they would like added or changed in DFSMS, and ask enough questions that I could accurately present the reasoning to the rest of the architects and software designers on my team. One person requested that the DFSMShsm RELEASE HARDCOPY command should release "all" the hardcopy. This command sends all the activity logs to the designated SYSOUT printer. I asked what he meant by "all", and the entire audience of 120-some attendees nearly fell on the floor laughing. He complained that some clever programmer had written code to test whether an activity log contained only "Starting" and "Ending" messages, but no error messages, and to skip those logs from being sent to SYSOUT. I explained that this was done to save paper, good for the environment, and so on. Again, howls of laughter. Most customers reroute the SYSOUT from DFSMS from a physical printer to a logical one that saves the logs as data sets with date and time stamps, so having any logs "skipped" leaves gaps in the sequence. The client wanted a complete set of data sets for his records. Fair enough.
When I returned to Tucson, I presented the list of requests, and the immediate reaction when I presented the one above was, "What did he mean by ALL? Doesn't it release ALL of the logs already?" I then had to recap our entire dialogue, and it all made sense to the rest of the team. At the following SHARE conference six months later, I was presented with my own official "All" tee-shirt that listed, and I am not kidding, some 33 definitions of the word "all", in small font covering the front of the shirt.
I am reminded of this story because of the challenges of explaining complicated IT concepts in the English language, which is so full of overloaded words with multiple meanings. Take, for example, the word "protect". What does it mean when a client asks for a solution or system to "protect my data" or "protect my information"? Let's take a look at three different meanings:
- Unethical Tampering
The first meaning is to protect the integrity of the data from within, especially from executives or accountants who might want to "fudge the numbers" to make quarterly results look better than they are, or to "change the terms of the contract" after agreements have been signed. Clients need to make sure that the people authorized to read and write data can be trusted to do so, and should store data in Non-Erasable, Non-Rewriteable (NENR) protected storage for added confidence. NENR storage includes Write-Once, Read-Many (WORM) tape and optical media, disk, and disk-and-tape blended solutions such as the IBM Grid Medical Archive Solution (GMAS) and the IBM Information Archive integrated system.
- Unauthorized Access
The second meaning is to protect access from without, especially from hackers or other criminals who might want to gather personally identifiable information (PII) such as social security numbers, health records, or credit card numbers and use it for identity theft. This is why it is so important to encrypt your data. As I mentioned in my post [Eliminating Technology Trade-Offs], IBM supports hardware-based Full Disk Encryption (FDE) drives in its IBM System Storage DS8000 and DS5000 series. These FDE drives have AES 128-bit encryption built in to perform the encryption in real time. Neither HDS nor EMC supports these drives (yet). Fellow blogger Hu Yoshida (HDS) indicates that the USP-V implements data-at-rest encryption differently, using back-end directors instead. I am told EMC relies on consuming CPU cycles on the host servers to perform software-based encryption, either as MIPS consumed on the mainframe, or using their PowerPath multi-pathing driver on distributed systems.
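For readers curious what data-at-rest encryption amounts to, here is a minimal software-based sketch using the third-party Python cryptography package with AES-128; an FDE drive performs the equivalent inside the drive hardware, at no host CPU cost. Key management, the hard part in practice, is glossed over here.

```python
# Minimal sketch of software-based data-at-rest encryption with AES,
# the kind of work an FDE drive performs in hardware instead.
# Requires the third-party "cryptography" package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)  # AES-128, matching the FDE drives above
aesgcm = AESGCM(key)

def encrypt_block(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                 # unique per block, stored alongside it
    return nonce + aesgcm.encrypt(nonce, plaintext, None)

def decrypt_block(blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, None)

blob = encrypt_block(b"patient record 4711")
assert decrypt_block(blob) == b"patient record 4711"
```

The CPU cycles this consumes on every read and write are exactly what the host-based software approach costs you, and what FDE drives and back-end directors are designed to offload.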
There is also concern about internal employees having the right "need-to-know" for various research projects or upcoming acquisitions. On SANs, this is normally handled with zoning; on NAS, with appropriate group/owner bits and access control lists. That's fine for LUNs and files, but what about databases? IBM's DB2 offers Label-Based Access Control [LBAC], which provides a finer level of granularity, down to the row or column level. For example, if a hospital database contained patient information, the doctors and nurses would not see the columns containing credit card details, the accountants would not see the columns containing healthcare details, and the individual patients, if they had any access at all, would only be able to access the rows related to their own records, and possibly those of their children or other family members.
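DB2 LBAC itself is configured in SQL (security labels granted to users and attached to rows or columns). To keep the examples in one language, here is an illustrative Python sketch of the underlying idea; the labels, roles and rules are my own simplification, not DB2 syntax.

```python
# Illustrative sketch of label-based access control: each column carries a
# security label, and a user sees only columns whose label is in their
# clearance set. This mimics the idea behind DB2 LBAC, not its SQL syntax.
ROW = {"name": "J. Doe", "diagnosis": "...", "credit_card": "...", "balance": "..."}
COLUMN_LABELS = {"name": "public", "diagnosis": "clinical",
                 "credit_card": "billing", "balance": "billing"}

CLEARANCES = {                      # hypothetical role-to-label mapping
    "doctor":     {"public", "clinical"},
    "accountant": {"public", "billing"},
}

def visible_columns(row: dict, role: str) -> dict:
    """Return only the columns this role's clearance permits it to see."""
    allowed = CLEARANCES.get(role, set())
    return {col: val for col, val in row.items()
            if COLUMN_LABELS[col] in allowed}

print(visible_columns(ROW, "doctor"))      # no credit card or balance columns
print(visible_columns(ROW, "accountant"))  # no diagnosis column
```

The real product enforces this inside the database engine, so no application can bypass it by issuing its own SQL.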
- Unexpected Loss
The third meaning is to protect against the unexpected. There are lots of ways to lose data: physical failure, theft, or even incorrect application logic. Whatever the cause, you can protect against it by keeping multiple copies of the data. You can either keep multiple copies of the data in its entirety, or use RAID or a similar encoding scheme to store parts of the data in multiple separate locations. For example, with a RAID-5 rank in a 6+P+S configuration, you would have six parts of data and one part parity code scattered across seven drives. If you lost one of the disk drives, the data could be rebuilt from the remaining portions and written to the spare disk set aside for this purpose.
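To make the 6+P arithmetic concrete, here is a tiny sketch of XOR parity: compute a parity strip across six data strips, lose one, and rebuild it from the survivors.

```python
# Tiny sketch of RAID-5 style XOR parity for the 6+P example: six data
# strips plus one parity strip; any single lost strip can be rebuilt by
# XORing the six surviving strips together.
from functools import reduce

def xor_strips(strips):
    """XOR a list of equal-length byte strips, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*strips))

data = [bytes([i] * 8) for i in range(1, 7)]   # six 8-byte data strips
parity = xor_strips(data)                      # the "P" strip

lost = 3                                       # pretend drive 3 failed
survivors = [s for i, s in enumerate(data) if i != lost] + [parity]
rebuilt = xor_strips(survivors)                # what gets written to the spare
assert rebuilt == data[lost]
```

Real arrays rotate the parity strip across drives and handle much larger strips, but the rebuild math is this simple.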
But what if the drive is stolen? Someone can walk up to a disk system, snap out the hot-swappable drive, and walk off with it. Since it contains only part of the data, the thief would not have the entire copy of the data, so no reason to encrypt it, right? Wrong! Even with part of the data, people can get enough information to cause your company or customers harm, lose business, or otherwise get you in hot water. Encryption of the data at rest can help protect against unauthorized access to the data, even in the case when the data is scattered in this manner across multiple drives.
To protect against site-wide loss, such as from a natural disaster, fire, flood, earthquake and so on, you might consider having data replicated to remote locations. For example, IBM's DS8000 offers two-site and three-site mirroring. Two-site options include Metro Mirror (synchronous) and Global Mirror (asynchronous). The three-site option is cascaded Metro/Global Mirror, with the second site nearby (within 300 km) and the third site far away. For example, you can have two copies of your data at site 1, a third copy at nearby site 2, and two more copies at site 3. Five copies of data in three locations. IBM DS8000 can send this data from one box to another with only a single round trip (sending the data out, and getting an acknowledgment back). By comparison, EMC SRDF/S (synchronous) takes one or two trips depending on block size; for example, blocks larger than 32KB require two trips, and EMC SRDF/A (asynchronous) always takes two trips. This is important because for many companies, disk is cheap but long-distance bandwidth is quite expensive. Having five copies in three locations could be less expensive than four copies in four locations.
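The number of round trips matters because each trip adds propagation delay to every synchronous write. A back-of-envelope sketch, assuming a rule-of-thumb 200 km per millisecond one-way in fiber (my round number, not a vendor figure):

```python
# Back-of-envelope latency cost of synchronous replication round trips.
# Assumes ~200 km per millisecond one-way in fiber, a common rule of thumb.
KM_PER_MS = 200.0

def replication_delay_ms(distance_km: float, trips: int) -> float:
    """Propagation delay added to each write: out and back, per trip."""
    return trips * 2 * distance_km / KM_PER_MS

for trips, label in [(1, "single round trip"), (2, "two round trips")]:
    print(f"300 km, {label}: {replication_delay_ms(300, trips):.1f} ms per write")
# 300 km, single round trip: 3.0 ms per write
# 300 km, two round trips: 6.0 ms per write
```

Doubling the trips doubles the added write latency, which is why the single-round-trip design is worth calling out.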
Fellow blogger BarryB (EMC Storage Anarchist) felt I was unfair pointing out that their EMC Atmos GeoProtect feature only protects against "unexpected loss" and does not eliminate the need for encryption or appropriate access control lists to protect against "unauthorized access" or "unethical tampering".
(It appears I stepped too far on to ChuckH's lawn, as his Rottweiler BarryB came out barking, both in the [comments on my own blog post], as well as his latest titled [IBM dumbs down IBM marketing (again)]. Before I get another rash of comments, I want to emphasize this is a metaphor only, and that I am not accusing BarryB of having any canine DNA running through his veins, nor that Chuck Hollis has a lawn.)
As far as I know, the EMC Atmos does not support FDE disks that do this encryption for you, so you might need to find another way to encrypt the data and set up the appropriate access control lists. I agree with BarryB that "erasure codes" have been around for a while and that there is nothing unsafe about using them in this manner. All forms of RAID-5, RAID-6 and even RAID-X on the IBM XIV storage system can be considered a form of such encoding as well. As for the amount of long-distance bandwidth that Atmos GeoProtect would consume to provide this protection against loss, you might question any cost savings from this space-efficient solution. As always, you should consider both space and bandwidth costs in your total cost of ownership calculations.
Of course, if saving money is your main concern, you should consider tape, which can be ten to twenty times cheaper than disk, letting you keep a dozen or more copies, in as many time zones, at substantially lower cost. These can be encrypted and written to WORM media for even more thorough protection.
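To put "ten to twenty times cheaper" into rough numbers, a quick sketch; the per-terabyte prices are illustrative placeholders of my own, not actual quotes:

```python
# Quick arithmetic behind "ten to twenty times cheaper": even keeping a
# dozen tape copies can undercut a handful of disk copies. The per-TB
# prices below are illustrative placeholders, not actual quotes.
DISK_PER_TB = 1000.0            # assumed $/TB for enterprise disk
TAPE_PER_TB = DISK_PER_TB / 15  # mid-point of the 10x-20x claim

def total_cost(copies: int, tb: float, per_tb: float) -> float:
    return copies * tb * per_tb

data_tb = 100
print("4 disk copies:  $%.0f" % total_cost(4, data_tb, DISK_PER_TB))   # $400,000
print("12 tape copies: $%.0f" % total_cost(12, data_tb, TAPE_PER_TB))  # $80,000
```

Three times the copies for a fifth of the cost, before you even count the power and cooling that idle disk burns.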
If these three methods of protection sound familiar, I mentioned them in my post about [Pulse conference, Data Protection Strategies] back in May 2008.
Tags: 
encryption
geoprotect
raid
srdf
dfsms
emc
share
nas
protect
lbac
dfsmshsm
fde
chuckh
hds
nenr
worm
usp-v
raid-5
gmas
atmos
mips
barryb
archive
db2
z/os
information
|