Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Tony Pearson is a Master Inventor and Senior Software Engineer for the IBM Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
When I was a kid, we didn't have online access to anything. Either your parents were rich and generous and bought you the latest set of encyclopedias, or they were poor or cheap, and you hoofed it to the nearest library.
Now, I rely heavily on Wikipedia, and other wikis, to find information I need. The key here is the ability to find stuff. With the old 27-volume set of encyclopedias, you had to know what word something would be filed under, and how to spell it, so that you could find it. Today's search facilities are much more forgiving. If you guess wrong, you are only a few clicks away from what you were really looking for, in a Kevin Bacon six-degrees-of-separation kind of way.
Wikipedia is now looked at more often than CNN.com or the New York Times website. Why? It is amazingly good at summarizing a situation in succinct terms, even for news "as it happens". The recent episode at Heathrow airport a few weeks ago serves as a good example. I was in Washington DC that week, on my way to Miami and Sao Paulo, Brazil, so it was good to have the news I needed, when I needed it. [Read More]
My father's favorite question is "What's the worst that could happen?" He is retired now, but worked at the famous [Kitt Peak National Observatory] designing some of the largest telescopes. Designing telescopes followed well-established mechanical engineering best practices, but each design was unique, so there was always a chance that the end result would not deliver the expected results. What's the worst that can happen? For telescopes, a few billion dollars are wasted and a few years are added to the schedule. Scrap it and start over. Nothing unrecoverable for the US government with unlimited resources and patience.
... the rest of the grimness on the front page today will matter a bit, though, if two men pursuing a lawsuit in federal court in Hawaii turn out to be right. They think a giant particle accelerator that will begin smashing protons together outside Geneva this summer might produce a black hole or something else that will spell the end of the Earth — and maybe the universe.
Scientists say that is very unlikely — though they have done some checking just to make sure.
The world’s physicists have spent 14 years and $8 billion (US dollars) building the Large Hadron Collider, in which the colliding protons will recreate energies and conditions last seen a trillionth of a second after the Big Bang. Researchers will sift the debris from these primordial recreations for clues to the nature of mass and new forces and symmetries of nature.
But Walter L. Wagner and Luis Sancho contend that scientists at the European Center for Nuclear Research, or CERN, have played down the chances that the collider could produce, among other horrors, a tiny black hole, which, they say, could eat the Earth. Or it could spit out something called a “strangelet” that would convert our planet to a shrunken dense dead lump of something called “strange matter.” Their suit also says CERN has failed to provide an environmental impact statement as required under the National Environmental Policy Act.
Although it sounds bizarre, the case touches on a serious issue that has bothered scholars and scientists in recent years — namely how to estimate the risk of new groundbreaking experiments and who gets to decide whether or not to go ahead.
What's the worst that can happen? Scientists now agree that it is sometimes difficult to predict, and some effects may be unrecoverable.
Unfortunately, this is not the only example of people attempting things they may not understand well enough. The web comic below has someone complaining they are out of disk space, and the sales rep suggests solving this with a few commands which will result in deleting all her files. Hopefully, most people reading will recognize this is meant as humor, and not actually attempt the code fragments to "see what they do".
This is a webcomic called "Geek and Poke". If you dare to read the punchline, click here: Funny Geeks - Part 5.
Warning: Do not try the code fragments unless you know what to expect!
Sadly, I often encounter clients who have a "keep forever" approach to their production data. When they are seriously out of space, they feel forced to either buy more disk storage, or start "the big Purge": deleting rows from their database tables, emails older than 90 days, or some other drastic measures. With a focus on keeping down IT budgets, I fear that these drastic measures are growing more common. What's the worst that could happen? You might need that data for defending yourself against a lawsuit, or need it to continue to provide service to a loyal client, or just to continue normal business operations. I have visited companies where a junior administrator chose the "big Purge" option, without a full understanding of what they were doing, resulting in business disruption until the data could be recovered or re-entered.
IBM offers a better way. Data that may not be needed on disk forever could be moved to lower-cost tape, using up less energy and less floorspace in your data center. Solutions can automatically delete the data systematically based on chronological or event-based retention policies, with the option to keep some data longer in response to a "legal hold" request.
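The retention behavior described above, chronological and event-based periods plus a legal-hold override, can be sketched in a few lines. This is a minimal illustration in Python; the class and field names are purely illustrative and do not correspond to any IBM API:

```python
from datetime import date, timedelta
from dataclasses import dataclass

@dataclass
class Record:
    name: str
    created: date
    retention_days: int        # chronological retention period
    event_date: date = None    # for event-based retention, the clock starts here
    legal_hold: bool = False   # overrides expiration while active

def is_expired(rec: Record, today: date) -> bool:
    """A record may be deleted only when its retention clock has run out
    and no legal hold is in place."""
    if rec.legal_hold:
        return False
    start = rec.event_date or rec.created  # event-based vs chronological
    return today >= start + timedelta(days=rec.retention_days)

records = [
    Record("invoice-001", date(2000, 1, 15), retention_days=2555),  # ~7 years
    Record("policy-042", date(1999, 6, 1), retention_days=365,
           event_date=date(2007, 3, 1)),   # clock starts when the policy closes
    Record("memo-7", date(2001, 2, 2), retention_days=90, legal_hold=True),
]
today = date(2008, 1, 1)
for rec in records:
    print(rec.name, "expired" if is_expired(rec, today) else "retained")
```

The point of the sketch: the same evaluation runs systematically over millions of records, which is exactly what is impractical to do by hand during a "big Purge".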
That's certainly better than to risk shrinking your business into a "dense dead lump"!
Our industry is full of acronyms, and sometimes spelling out what words an acronym stands for is not enough to explain it fully.
It reminds me of an old story within IBM. A customer engineer (or "CE" for short) was repairing an air-cooled server, and found that the failing part was a "FAN". Not knowing what this stood for, he looked up the acronym in the official "IBM list of acronyms" and found that it stood for "Forced Air Network". Apparently, so many people did not realize that a FAN was just a "fan" that an entry had to be added to remind people what this little motorized propeller was for.
This brings me to Tony Asaro's Fun with FAN blog entry, which mentions yet another definition for FAN, that of "File Area Network". The concept is not new, but some developments this year help make it more of a reality.
IBM's General Parallel File System (GPFS) was enhanced earlier this year with cool ILM-like functionality borrowed from SAN File System, such as policy-based data placement, movement and automatic expiration. This can include policies to place data on the fastest Fibre Channel drives at first, then move it to slower, less costly SATA disks after a few months when fewer access requests are expected.
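GPFS expresses such rules in its own SQL-like policy language; purely as an illustration of the idea, here is the placement decision sketched in Python. The pool names and threshold are hypothetical, not actual GPFS configuration:

```python
import time

# Hypothetical pool names and age threshold. Real GPFS policies are written
# as SQL-like rules (e.g. RULE ... MIGRATE FROM POOL ... TO POOL ... WHERE ...).
FAST_POOL, SLOW_POOL = "fc_disk", "sata_disk"
AGE_THRESHOLD_DAYS = 90

def choose_pool(last_access_epoch: float, now: float) -> str:
    """New and recently used files stay on fast Fibre Channel disk;
    files idle past the threshold migrate to cheaper SATA."""
    age_days = (now - last_access_epoch) / 86400
    return SLOW_POOL if age_days > AGE_THRESHOLD_DAYS else FAST_POOL

now = time.time()
print(choose_pool(now - 5 * 86400, now))    # accessed 5 days ago
print(choose_pool(now - 200 * 86400, now))  # idle for months
```

The appeal of doing this in the file system, rather than in each application, is that the migration is transparent: files keep their names and paths while the blocks move between pools.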
IBM has paired up N series with SAN Volume Controller (SVC), so that an N series gateway can now provide iSCSI, CIFS and NFS access to virtual disks presented from SVC. The problem with NAS appliances in the past is that once they fill up, moving files to newer technologies is awkward and difficult. With SVC, file systems can now be moved from one physical disk system to another, all while applications are reading and writing data.
To better understand the importance of this, consider the first "FAN": the mainframe z/OS operating system using DFSMS. The mainframe uses the concept of "data sets": a data set can be a stream of fixed 80-character records representing the original punched cards, a library of related documents, or a random-access database. All mainframes in a system complex, or "sysplex" for short, could look up the location of any data set and access it directly. Data sets could be moved from one disk system to another, migrated off to tape, and brought back to disk, all without rewriting any applications.
To join the rest of the world, new types of data sets were created for the z/OS operating system, known as HFS and zFS. These held file systems in the sense we know them today, comparable to the hierarchical organization of files on Windows, Linux and UNIX platforms. These could be linked and mounted together in larger hierarchical structures across the sysplex.
Files and file systems are a fairly recent concept. Before they existed, applications read and wrote directly in terms of blocks, typically fixed-length multiples of 512 bytes. For a while, database management systems offered a choice: direct block access or file-level access. The former may have offered slightly better performance, but the latter was easier to administer. Without a file system, specialized tools were often required to diagnose and fix problems on block-oriented "raw logical" volumes.
This launched a "my file system is better than yours" war which continues today. The official standard is POSIX, but every file system tries to gain some proprietary advantage by offering unique features. Sun's file system offers support for "sparse" files, which is ideal for certain mathematical processing of tables. Microsoft's NTFS offers built-in compression, designed for the laptop user. IBM's JFS2 and Linux's EXT3 file systems support journaling, which tracks updates to file system structures in a separate journal to minimize data corruption in the event of a power outage, and thus speeds up the re-boot process. Anyone who has ever waited for a "Scan Disk" or "fsck" process to finish knows what I'm talking about. Of course, if an application deviates from POSIX standards and exploits some unique feature of a file system, it then limits its portability and market appeal.
The two competing NAS file systems are also different. Common Internet File System (CIFS) was developed initially by IBM and Microsoft to provide interoperability between DOS, Windows and OS/2. Meanwhile, Network File System (NFS) was the darling of nearly every UNIX and Linux distribution, and even has clients on operating platforms as diverse as MacOS, i5/OS, and z/OS. Today, nearly every platform supports one or both of these standards.
Bottom line: file systems are here to stay. Any slight advantage of using raw logical volumes for databases and applications is losing out to the robust set of file system utilities that can be used across a broad set of platforms and applications.
Last year, I covered Chris Anderson's book [The Long Tail]. This year, Chris Anderson, editor-in-chief of Wired magazine, has an upcoming book titled Free, the past and future of a radical price. Chris talked about his book here at the Nokia World 2007 conference, and the [46-minute video] is worth watching. He asks the big question "What if certain resources were free?" This could be electricity, bandwidth, or storage capacity. He explores how this changes the world, and creates opportunities for new business models. However, many people are stuck in a "scarcity" model, treat nearly-free resources as expensive, and find themselves doing traditional things that don't work anymore. Chris mentions [Second Life] as an economy where many resources are free, and looks at how people respond to that. Rather than focusing on making money, new businesses are focused on gaining attention and building their reputation. Here are some example business models:
Cross-subsidy: give away the razors, sell the razor blades; or give away cell phones and sell minutes
Ad-Supported: magazines and newspapers sell for less than production costs
Freemium: 99% use the free version, but a handful pay extra for something more
Digital economics: give away digital music to promote concert tours
Free-sample marketing: give away samples to get word-of-mouth advertising
Gift economy: give people an opportunity and platform to contribute like Wikipedia
Nick Carr writes a post [Dominating the Cloud], indicating that IBM, Google, Microsoft, Yahoo and Amazon are the five computing giants to watch, as they are more efficient at converting electricity into computing than anyone else. Last month, I mentioned the IBM and Google partnership on cloud computing in my post [Innovation that matters: cell phones and cloud computing]. Nick's upcoming book, titled [The Big Switch], looks into "Utility Computing", comparing companies' shift from generating their own electricity to using an electric grid with the recent developments of cloud computing and software as a service (SaaS). Amazon's latest "SimpleDB" online database is cited as an example.
Last, but not least, Seth Godin writes in his post [Meatballs and Permeability] about the bits-vs-atoms issue, what Chris Anderson above refers to as the new digital economy. The idea here is that value carried electronically as bits (digital documents, for example) has completely different economics than value carried as atoms (physical objects), and requires new marketing techniques. Methods from traditional marketing will not be effective in this new age. Here is a [review] of Seth's new book Meatball Sundae: Is Your Marketing Out of Sync?
All three of these books seem to be covering the same phenomenon, just from different viewpoints. I look forward to reading them.
Many often associate CAS with EMC's Centera offering, but with IBM's comprehensive set of compliance storage offerings, EMC doesn't talk about CAS or Centera much anymore. I covered the confusion around CAS in a previous post. When clients ask for "CAS", what they really are looking for is storage designed for fixed content: unstructured data that doesn't change once written. A lot of data falls under this category, such as scanned documents, audio and video recordings, medical images, and so on. Some laws and regulations further require enforcement that the data is not deleted or tampered with until some time after an event or expiration date is met.
In the past, clients used write-once read-many (WORM) optical media, but today we have disk and tape offerings instead. Since the term "WORM" is inappropriate for disk-based solutions, IBM has standardized on the term "non-erasable, non-rewriteable" (NENR) to discuss today's solutions and offerings.
Let's recap what IBM has to offer:
IBM System Storage DR550
This comes in both a large version (DR550) and a small version (DR550 Express). Both offerings provide NENR protection of fixed content data with your choice of a disk-only or disk-and-tape configuration. IBM also announced a DR550 file system gateway, extending the number of applications that can take advantage of this offering.
IBM System Storage N series with SnapLock(tm)
IBM has seen great success with the N series disk systems. A specific feature called SnapLock allows some of the data stored to be NENR-protected until an expiration date is met. As part of IBM's emphasis on "unified storage", a single N series appliance or gateway can manage both regular (erasable/modifiable) data and NENR data. Combine this with our recently announced Advanced Single Instance Storage (A-SIS) de-duplication feature, and you get a very cost-effective offering!
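De-duplication of this kind boils down to hashing content and storing each unique piece only once. A-SIS itself operates at the block level inside the N series controller; the following file-level Python sketch, with invented names, is only meant to illustrate the principle:

```python
import hashlib

class SingleInstanceStore:
    """Toy single-instance store: identical content is hashed and kept once;
    duplicate writes just add a reference. Illustrative only."""
    def __init__(self):
        self.blocks = {}  # digest -> content, stored once
        self.refs = {}    # filename -> digest

    def write(self, name: str, content: bytes):
        digest = hashlib.sha256(content).hexdigest()
        self.blocks.setdefault(digest, content)  # store only if new
        self.refs[name] = digest

    def unique_bytes(self) -> int:
        return sum(len(c) for c in self.blocks.values())

store = SingleInstanceStore()
attachment = b"quarterly-report-contents" * 1000
for i in range(50):  # the same attachment mailed to 50 people
    store.write(f"user{i}/report.doc", attachment)
print(store.unique_bytes())  # one copy's worth of bytes, not fifty
```

The classic win is exactly this case: one large attachment fanned out to many mailboxes or home directories consumes the space of a single copy.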
IBM System Storage Multilevel Grid Access Manager Software
A fourth option for NENR data is WORM tape. IBM supports WORM cartridge media in the enterprise TS1120 drive as well as in LTO-3 and LTO-4 drives. The advantage is that you don't need unique tape drives for WORM support. IBM drives can read and write both regular and WORM cartridges, and provide a cost-effective alternative to optical media.
As you can see, IBM doesn't limit itself to disk-only offerings. Our leadership in tape allows us to innovate tape and disk-and-tape offerings that can provide more cost-effective solutions to store fixed content, retention-managed data. The next time you have a conversation with a storage vendor, don't ask for CAS; ask instead for archive and compliance storage. Broaden your mind, and broaden the set of options and choices that might provide a better fit for your requirements.
Continuing my week's theme on how bad things can get following the "Do-it-yourself" plan, I start with James Rogers' piece in Byte and Switch, titled [Washington Gets E-Discovery Wakeup Call]. Here's an excerpt:
"A court filing today reveals there may be gaps in the backup tapes the White House IT shop used to store email. It appears that messages from the crucial early stages of the Iraq War, between March 1 and May 22, 2003, can't be found on tape. So, far from exonerating the White House staffers, the latest turn of events casts an even harsher light on their email policies.
Things are not exactly perfect elsewhere in the federal government, either. A recent [report from the Government Accountability Office (GAO)] identified glaring holes in agencies’ antiquated email preservation techniques. Case in point: printing out emails and storing them in physical files."
You might think that laws requiring email archives are fairly recent. For corporations, they began with laws like Sarbanes-Oxley, which the second President Bush signed into law back in 2002. However, it appears that laws requiring US Presidents to keep their emails have been in force since 1993, back when the first President Clinton was in office. (We might as well all get used to saying this, in case we have a "second" President Clinton next January!)
"The Federal Record Act requires the head of each federal agency to ensure that documents related to that agency's official business be preserved for federal archives. The Watergate-era Presidential Records Act augmented the FRA framework by specifically requiring the president to preserve documents related to the performance of his official duties. A [1993 court decision] held that these laws applied to electronic records, including e-mails, which means that the president has an obligation to ensure that the e-mails of senior executive branch officials are preserved.
In 1994, the Clinton administration reacted to the previous year's court decision by rolling out an automated e-mail-archiving system to work with the Lotus-Notes-based e-mail software that was in use at the time. The system automatically categorized e-mails based on the requirements of the FRA and PRA, and it included safeguards to ensure that e-mails were not deliberately or unintentionally altered or deleted.
When the Bush administration took office, it decided to replace the Lotus Notes-based e-mail system used under the Clinton Administration with Microsoft Outlook and Exchange. The transition broke compatibility with the old archiving system, and the White House IT shop did not immediately have a new one to put in its place.
Instead, the White House has instituted a comically primitive system called "journaling," in which (to quote from a [recent Congressional report]) "a White House staffer or contractor would collect from a 'journal' e-mail folder in the Microsoft Exchange system copies of e-mails sent and received by White House employees." These would be manually named and saved as ".pst" files on White House servers.
One of the more vocal critics of the White House's e-mail-retention policies is Steven McDevitt, who was a senior official in the White House IT shop from September 2002 until he left in disgust in October 2006. He points out what would be obvious to anyone with IT experience: the system wasn't especially reliable or tamper-proof."
So we have White House staffers manually creating PST files, and other government agencies printing out their emails and storing them in file cabinets. When I first started at IBM in 1986, before Notes or Exchange existed, we used PROFS on VM on the mainframe, and some of my colleagues printed out their emails and filed them in cabinets. I can understand how government employees, who might have grown up using mainframe systems like PROFS, might have just continued the practice when they switched to Personal Computers.
Perhaps the new incoming White House staff hired by George W. Bush were more familiar with Outlook and Exchange, and rather than learning to use IBM Lotus Notes and Domino, found it easier just to switch over. I am not going to debate the pros and cons of "Lotus Notes/Domino" versus "Microsoft Outlook/Exchange", as IBM has automated email archiving systems that work great for both of these, as well as for Novell Groupwise. So, giving the benefit of the doubt: when President Bush took over, he tossed out the previous administration's staff, brought in his own people, and let them choose the office productivity tools they were most comfortable with. Fair enough; it happens every time a new President takes office. No big surprise there.
However, doing this without a clear plan on how to continue to comply with the email archive laws already on the books, and letting the situation remain broken several years later, is appalling. I can understand why businesses are upset about deploying mandated archiving solutions when their own government doesn't have similar automation in place.
IBM also has a vision for the future, and like Martin Luther King's speech, is starting to enable change. In February 2006, IBM launched "Information on Demand", a vision that involves bringing together our hardware, software, and services.
The impact has not gone unnoticed. Barron's featured IBM in an article titled "The New IBM".
I suspect bloggers helped get the word out. Here's a graph from Yahoo! Finance showing the IBM stock price over the past six months. This blog started in September, when the stock was in the low 80's, and now it is in the high 90's. I can't take all the credit, of course, as there are now over 3000 IBMers blogging, either inside the company, or externally to the rest of the world.
Last week, a writer for a magazine contacted us at IBM to confirm a quote that writing a Terabyte (TB) on disk saves 50,000 trees. I explained that this was cited from UC Berkeley's famous How Much Information? 2003 study.
To be fair, the USA Today article explains that AT&T also offers "summary billing" as well as "on-line billing", but apparently neither of these is the default choice. I can understand that phone companies send out bills on paper because not everyone who has a phone has internet access, but in the case of its iPhone customers, internet access is in the palm of your hands! Since all iPhone customers have internet access, and AT&T knows which customers are using an iPhone, it would make sense for either on-line billing or summary billing to be the default choice, and let only those that hate trees explicitly request the full billing option.
Sending a box of 300 pages of printed paper is expensive, both for the sender and the recipient. This information could have been shipped less expensively on computer media, a single floppy diskette or CD-ROM for example. For those who prefer getting this level of detail, a searchable digitized version might be more useful to the consumer.
Which brings me to the concept of Information Lifecycle Management (ILM). You can read my recent posts on ILM by clicking the Lifecycle tab on the right panel, or my now infamous post from last year about ILM for my iPod.
His recollection of the history and evolution of ILM largely matches mine:
The phrase "Information Lifecycle Management" was originally coined by StorageTek in the early 1990s as a way to sell its tape systems into mainframe environments. Automated tape libraries eliminated most if not all of the concerns that disk-only vendors tout as the problem with manual tape. I began my IBM career on a product now called DFSMShsm, which specifically moved data from disk to tape when it no longer needed the service level of disk. IBM had been delivering ILM offerings since the 1970s, so while StorageTek can't claim to have invented the concept, we give them credit for giving it a catchy phrase.
EMC then started using the phrase four years ago in its marketing to sell its disk systems, including slower less-expensive SATA disk. The ILM concept helped EMC provide context for the many acquisitions of smaller companies that filled gaps in the EMC portfolio. Question: Why did EMC acquire company X? Answer: To be more like IBM and broaden its ILM solution portfolio.
Information Lifecycle Management is comprised of the policies, processes, practices, and tools used to align the business value of information with the most appropriate and cost-effective IT infrastructure from the time information is conceived through its final disposition. Information is aligned with business requirements through management policies and service levels associated with applications, metadata, and data.
Whitepapers and other materials you might read from IBM, EMC, Sun/StorageTek, HP and others will all pretty much tell you what ILM is, consistent with this SNIA definition, why it is good for most companies, and how it is not just about buying disk and tape hardware. Software, services, and some discipline are needed to complete the implementation.
While the SNIA definition provides a vendor-independent platform to start the conversation, it can be intimidating to some, and is difficult to memorize word for word. When I am briefing clients, especially high-level executives, they often ask for ILM to be explained in simpler terms. My simplified version is:
Information starts its life captured or entered as an "asset" ...
This asset can sometimes provide competitive advantage, or is just something needed for daily operations. Digital assets vary in business value in much the same way that other physical assets for a company might. Some assets might be declared a "necessary evil", like laptops, but are tracked to the nth degree to ensure they are not lost, stolen or taken out of the building. Other assets are declared "strategically important" but are readily discarded, or at least allowed to walk out the door each evening.
... then transitions into becoming just an "expense" ...
After 30-60 days, many pieces of information are kept around for a variety of reasons. However, if it isn't needed for daily operations, you might save some money moving it to less expensive storage media, through less expensive SAN or LAN network gear, via less expensive host application servers. If you don't need instant access, then perhaps the 30 seconds or so to fetch it from much-less-expensive tape in an automated tape library could be a reasonable business trade-off.
... and ends up as a "liability".
Keeping data around too long can be a problem. In some cases the data is incriminating; in other cases, just having too much data clogs up your datacenter arteries. If not handled properly within privacy guidelines, data potentially exposes sensitive personal or financial information of your employees and clients. Most regulations require certain data to be kept, in a manner protected against unexpected loss, unethical tampering, and unauthorized access, for a specific amount of time, after which it can be destroyed, deleted or shredded.
So ILM is not just a good idea to save a company money; it can keep them out of the court room, as well as help save the environment and not kill so many trees. Now that 100 percent of iPhone customers have internet access, and a good number of non-iPhone customers have internet access at home, work, school or the public library, it makes sense for companies to ask people to "opt-in" to getting their statements on paper, rather than forcing them to "opt-out".
Continuing this week's theme about new products that were mentioned in last week's launch, today I will cover the new [S24 and S54 frames].
Before these new frames, customers had two choices for their tape cartridges: keep them in an automated tape library, or on an external shelf. Most of the critics of tape focus almost entirely on the problems related to the latter. When tapes are placed outside of automation, you need human intervention to find and fetch the tapes; tapes can be misplaced or misfiled, tapes can be dropped, tapes can get liquids spilled on them, and so on. These problems just don't happen when tapes are stored in automated tape libraries.
Until now, the number of cartridges was limited by the surface area of the wall accessible to the robotic picker. Whether the robot rotates in a circle picking from dodecagon walls, or moves back and forth along long rectangular walls, the problem was the same.
But what about tapes that may not need to be readily accessible, but should still be automated? With the new high-density frames, you can now stack tapes several cartridges deep, in spring-loaded shelves that push the cartridges up to the front one at a time. The high-density frame design might have been inspired by the famous [Pez] candy dispenser, but at 70.9 inches, does not beat the [World's Tallest Pez Dispenser].
(Note: PEZ® is a registered trademark of Pez Candy, Inc.)
In a regular cartridge-only frame, like the D23, you have slots for 200 cartridges on the left, and 200 cartridges on the right, and the robotic picker can pull out and push back cartridges in any of these slot positions. In the new S24, there are still 200 slots on the left, now referred to as "tier 0", but up to 800 cartridges on the right. Each slot there holds up to four 3592 cartridges; the position immediately reachable by the picker is referred to as "tier 1", and the ones tucked behind are "tier 2", "tier 3" and "tier 4".
[Diagram: S24 frame]
We have fun slow-motion videos we show customers on how these work. For example, in the diagram above, let's suppose you want to fetch Tape E in the "tier 4" position. The following sequence happens:
Robotic picker pulls "tier 1" tape cartridge B, and pushes it into another shelf slot. Tapes C, D and E get pushed up to be Tiers 1, 2 and 3 now.
Robotic picker pulls "tier 1" tape cartridge C, and puts it in another shelf slot. Tapes D and E get pushed up to be Tiers 1 and 2 now.
Robotic picker pulls "tier 1" tape cartridge D, and puts it in another shelf slot. Tape E gets pushed up to be Tier 1 now.
Robotic picker pulls "tier 1" tape cartridge E, this is the tape we wanted, and can move it to the drive.
The other three cartridges (B, C and D) are then pulled out of the temporary slot, and pushed back into their original order.
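The cartridge-shuffle steps above can be sketched as a small simulation. This is a toy model of the mechanism, not TS3500 firmware; the function and list names are my own:

```python
def fetch(slot, tape):
    """Simulate fetching a cartridge from a deep slot in an S24-style frame.
    `slot` is ordered front (tier 1) to back (deeper tiers). Cartridges in
    front of the target are set aside, the spring pushes the rest forward,
    and after the target is removed the others go back in original order."""
    temp = []
    while slot and slot[0] != tape:
        temp.append(slot.pop(0))  # picker moves the front cartridge aside;
                                  # the spring pushes the rest up one tier
    target = slot.pop(0)          # desired cartridge is now at tier 1
    for cart in reversed(temp):   # return the set-aside cartridges
        slot.insert(0, cart)      # to their original order
    return target, slot

tape, remaining = fetch(["B", "C", "D", "E"], "E")
print(tape, remaining)  # E ['B', 'C', 'D']
```

Note the access-time trade-off this implies: a tier 1 fetch is one picker move, while a tier 4 fetch costs several, which is why rarely used cartridges are the ones that belong in the deep tiers.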
In this manner, the most recently referenced tape cartridges will be immediately accessible, and the ones least referenced will eventually migrate to the deeper tiers. The 3592 cartridges can be used with either TS1120 or TS1130 drives. Each cartridge can hold up to 3TB of data (1TB raw, at 3:1 compression), so the entire frame could hold 3PB in just 10 square feet of floor space. Five D23 frames could be consolidated down to two S24 frames. The S24 frame comes with "Capacity on Demand" pricing options. The base model of the S24 has just tiers 0, 1 and 2, for a total capacity of 600 cartridges. You can then later license tiers 3 and 4 when needed.
The S54 is similar in operation, but for LTO cartridges, and works with any mix of LTO-1, LTO-2, LTO-3 and LTO-4 cartridges. The left side holds tier 0 as before, but the slots on the right side are up to five LTO cartridges deep. For Capacity on Demand pricing, the base model supports 660 cartridges (tiers 0, 1 and 2), with an option to upgrade later for the additional 660 cartridges. The full 1320 cartridges could hold up to 2.1PB of data (at 2:1 compression). One S54 frame could replace three traditional S53 frames that held only 440 LTO cartridges each.
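The capacity claims for both frames are straightforward multiplication; here is a quick back-of-the-envelope check using only the figures quoted above (the compression ratios are "up to" values, so these are best-case numbers):

```python
# Best-case capacity math for the S24 and S54 frames, from the figures above.
s24_cartridges = 200 + 800               # tier 0 (left) + tiers 1-4 (right)
s24_tb_per_cartridge = 1 * 3             # 1 TB native at 3:1 compression
print(s24_cartridges * s24_tb_per_cartridge / 1000, "PB")            # 3.0 PB

s54_cartridges = 1320                    # base 660 plus the 660-cartridge upgrade
s54_tb_per_cartridge = 0.8 * 2           # LTO-4: 800 GB native at 2:1 compression
print(round(s54_cartridges * s54_tb_per_cartridge / 1000, 1), "PB")  # 2.1 PB
```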
If you have both TS1100 series and LTO drives in your TS3500 tape library, then you can have both S24 and S54 frames side by side.
Seth Godin has an interesting post titled Times a Million. He recounts how many people calculate the fuel savings of higher-mileage cars to be only $300-$900 per year, and decide this is not enough to motivate the purchase of a more efficient vehicle, such as a hybrid or electric car. Of course, if everyone drove more efficient vehicles, the savings "times a million" would benefit everyone and the world's ecology.
When I discuss storage-related concepts, many executives mistakenly relate them to the one area of information technology they know best: their laptop. Let's take a look at some examples:
Information Lifecycle Management
Information Lifecycle Management (ILM) includes classifying data by business value, and then using that classification to determine placement, movement or deletion. If you think about the amount of time and effort it takes to review the files on your individual laptop, and to manually select and move or delete data, versus the benefits to the individual laptop owner, you would dismiss the concept. Most administrative tasks are done manually on laptops, because automated software is either unavailable or too expensive to justify for a single owner.
In medium and large size enterprises, automated software to help classify, move and delete data makes a lot of sense. Executives who decide that ILM is not for their data center, based on their experiences with their laptop, are losing out on the "times a million" effect.
Laptops have various controls to minimize battery use, and these controls are equally available when plugged in. Many users don't bother turning off the features and functions they don't need when plugged in, because they feel the cost savings would only amount to pennies per day.
Times a million, energy savings do add up, and options to reduce the amount used per server, per TB of data stored, not only save millions of dollars per year, but can also postpone the need to build a new data center, or upgrade the electrical systems in your existing data center.
Backup and Disaster Recovery planning
I am not surprised how many laptops lack adequate backup and disaster recovery plans. When executives think in terms of the time and effort to back up their own data, often crudely copying key files to CD-ROM or USB key, and worrying about the management of those copies, which copies are the latest, and when those copies can be destroyed, they might reject deploying appropriate backup policies for others.
Times a million, the collected data stored on laptops could easily represent half of your company's emails and intellectual property. Products like IBM Tivoli Storage Manager can manage a large number of clients with a few administrators, keeping track of how many copies to keep and how long to keep them.
So, next time you are looking at technology or solutions for your data center, don't suffer from "Laptop Mentality". Focus instead on the data center as a whole.
This week I'm in beautiful Guadalajara, Mexico, teaching at our [System Storage Portfolio Top Gun class]. We have all of our various routes-to-market represented here, including our direct sales force, our technical teams, our online IBM.COM website sales, as well as IBM Business Partners. Everyone is excited over last week's IBM announcement of [4Q07 and full year 2007 results], which includes double-digit growth in our IBM System Storage business, led by sales of our DS8000, SAN Volume Controller and Tape systems. Obviously, as an IBM employee and stockholder, I am biased, so instead I thought I would provide some excerpts from other bloggers and journalists.
But what was striking in the company’s conference call on Thursday afternoon was the unhedged optimism in its outlook for 2008, given the strong whiff of recession fear elsewhere.
The questions from Wall Street analysts in the conference call had a common theme. Why are you so comfortable about the 2008 outlook? Now, that might just be professional churlishness, since so many of them have been so wrong recently about I.B.M. Wall Street had understandably thought, for example, that I.B.M.’s sales to financial services companies — the technology giant’s largest single customer category — would suffer in the fourth quarter, given the way banks have been battered by the mortgage credit crunch.
But Mr. Loughridge said that revenue from financial services customers rose 11 percent in the fourth quarter, to $8 billion. The United States, he noted, accounts for only 25 percent of I.B.M.’s financial services business.
The other thing that seems apparent is how much I.B.M.’s long-term strategy of moving up to higher-profit businesses and increasingly relying on services and software is working. Its huge services business grew 17 percent to $14.9 billion in the quarter. After the currency benefit, the gain was 10 percent, but still impressive. Software sales rose 12 percent to $6.3 billion.
Looking at IBM's business segments, it can be seen that they offer far more coverage of the technology space than those of the typical tech company:
IBM is just so big and diversified that there is little comparison between it and most other tech companies. IBM is a member of an elite group of companies like Cisco Systems (CSCO), Microsoft (MSFT), Oracle (ORCL) or Hewlett-Packard (HPQ).
IBM's wide international coverage and deep technological capabilities dwarf those of most tech companies. Not only do they have sales organizations worldwide but they have developers, consultants, R&D workers and supply chain workers in each geographic region. Their product mix runs from custom software to packaged enterprise software, hardware (mainframes and servers), semiconductors, databases, middleware technology, etc., etc. There are few tech companies that even attempt to support that many kinds and variations of products.
As color on the fourth quarter earnings announcement, there are a couple of observations that I would like to make. The first one speaks to IBM's international prowess. The company indicated that growth in the Americas was only 5%. International sales were a primary driver of IBM's good results. As an insight on the difference between IBM and most other tech companies, it is clear that nowadays, a tech company that isn't adept at selling internationally is going to be in trouble.
Terrific performance in a terrific year - no doubt a result of its strong global model. IBM operates in 170 countries, with about 65% of its employees outside the US and about 30% in Asia Pacific. For fiscal 2007, revenues from the Americas grew 4% to $41.1 billion (42% of total revenue), [EMEA] grew 14% to $34.7 billion (35% of total revenue), and Asia-Pacific grew by 11% to $19.5 billion (19.7% of total revenue). IBM sees growth prospects not just in [BRIC] but also countries like Malaysia, Poland, South Africa, Peru, and Singapore.
Thus far, 2008 (all two weeks of it) hasn't been pretty for the tech industry. Worries about the economy prevail. And even companies that had relatively good things to say, like Intel, get clobbered. It's ugly out there, unless you're IBM.
I am sure there will be more write-ups and analyses on this in the coming weeks, and others will probably wait until more tech companies announce their results for comparison.
Yesterday marked the first day of Spring here in the Northern hemisphere, and often this means it is time for some "Spring cleaning". This is a great time to re-evaluate all of your stuff and clean house.
In the bits-vs-atoms discussion, Annie Leonard has a quick [20-minute video] about the atoms side of stuff, from extraction of natural resources, through production, distribution and consumption, to final disposal.
On the bits side of things, the picture is much different.
We don't really extract information; rather, we capture it, and lately that process happens directly in digital formats, from digital photography to digital recording of music, and so on. A lot of medical equipment now takes X-rays and other medical images directly in digital format. By 2011, it is estimated that as much as 30 percent of all storage will be used to hold medical images.
Production refers to the process of combining raw materials and making them into something useful. The same applies to information: there are a variety of ways to make information more presentable. In the Web 2.0 world, these are called mashups, combining raw information in a manner that is more usable. Fellow IBM blogger Bob Sutor discusses IBM's latest contribution, SMash, in his post [Secure Mashups via SMash].
According to Tim Sanders, 90 percent of business information is distributed by email, but less than 10 percent of employees are formally trained to distribute information correctly. Here's a quick 3-minute trailer for his "Dirty Dozen" rules on how to do email properly.
I have not watched the DVD that this trailer is promoting, but I certainly agree with the overall concept.
This week I also had the pleasure of hearing [Art Mortell], author of the book The Courage to Fail: Art Mortell's Secrets to Business Success. He gave an inspirational talk about how to deal with our stressful lives. One key point was that stress often comes from our own expectations. This is certainly true of how we consume information. Often our expectations determine how well we read, watch or listen to the information being presented. Sometimes information is factually correct, but presented in such a boring manner that it is just too difficult to consume.
John Windsor on YouBlog takes this one step further, asking [Are you predictable?] He makes a strong case for why presenting in a predictable manner can actually hurt your chances of communicating.
And finally, there is disposal. We are all a bunch of digital pack-rats. With atoms, you eventually run out of closet space; with bits, the problem is not as obvious, and often can be resolved by spending your way out of it. On average, companies are expanding their storage capacity by 57 percent every year. That worked well when dollar-per-GB prices of disk dropped to match, but now technology advancements are slowing down. Disk will not be dropping in price as fast as you need, and now might be a good time to re-evaluate your "keep everything forever" strategy.
Consider "Spring cleaning" to be an excellent excuse to evaluate the data you have on your disk systems.Should it be on disk? Will it be accessed often enough to justify that cost? Does it need immediateonline access times, or can waiting a minute or two for a tape mount from an automated library be sufficient?Does it represent business value?
I have been to customers that have discovered a lot of "orphan data" on their disk systems. This is data that does not belong to anyone currently working at the company. Maybe the owners of the data retired, were laid off, or were fired, but nobody bothered to clean up their files after they left the company.
I've also seen a lot of "stale data" on disk, data that has not been read or written in the past 90 days. Are you spending 13-18 watts of energy to spin each disk drive just to hold data nobody ever looks at?
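To see what that stale data costs, here is a rough sketch. The 13-18 watt range comes from the text above; the drive count and the $0.10/kWh electricity rate are assumptions purely for illustration:

```python
# Rough annual electricity cost of spinning disks that hold stale data.
watts_per_drive = 15                 # midpoint of the 13-18 W range above
drives = 1000                        # hypothetical array size (assumption)
rate_per_kwh = 0.10                  # assumed electricity rate, USD
hours_per_year = 24 * 365

kwh_per_year = watts_per_drive * drives * hours_per_year / 1000
print(round(kwh_per_year * rate_per_kwh))   # 13140 USD, before cooling overhead
```

Cooling typically adds a comparable amount again, so the real bill is roughly double.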
In some cases, orphan or stale data represents business value and needs to be kept around for business or legal reasons. Perhaps some government regulation requires you to retain the information for several years. In that case, rather than deleting it, move it to tape, perhaps using the IBM System Storage DR550 to protect it for the time required and handle its eventual disposal.
Certainly something to think about while you snap the ears off those chocolate bunnies and watch your kids run around looking for eggs. Enjoy your weekend!
An astute reader brought this to my attention. The newest addition to our "IBM Express Portfolio" set of SMB-oriented offerings is the new TS3100 tape library. It has one LTO Gen 3 drive and up to 22 cartridges, which can be a mix of WORM and rewriteable cartridges, beautifully packaged in a small 2U-high (3.5 inch) rack-mountable chassis. Each LTO-3 cartridge can hold up to 400GB uncompressed, or 800GB with typical 2-to-1 compression.
This tape library would be a great complement to TSM Express for backup, and to the DR550 Express for archive and compliance storage.
And now, for a limited time, there is a $1500 rebate; check the website for details.