Last week, I covered backup issues in [Deduplication versus Best Practice for Backups]. This week, I thought I would cover issues with email.
At IBM, our standard is a limit of 200MB per user mailbox. A few of us get exceptions and have up to a 500MB limit because of the work we do. By comparison, my personal Gmail account is now up to 6500MB. When this limit is exceeded, you are unable to send out any mail until your mailbox is brought back below the limit, and a request to be "re-enabled for send" is approved, a situation we call "mail jail".
The biggest culprits are attachments. Only 10 percent of emails have attachments, but those that do take up 90 percent of the total space! People attach a 15MB presentation or document, and copy the world on a distribution list. Everyone saves their notes with these attachments, and soon, the limits are blown. Not surprisingly, deduplication has been cited as a "killer app" to address email storage, exactly for this reason. If all the users' mailboxes are stored on the same deduplication storage device, it might find these duplicate blocks, and manage to reduce the space consumed.
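To illustrate how a deduplicating store finds those duplicates, here is a minimal sketch of block-level deduplication by content hashing. The 4KB block size, the in-memory dictionary acting as the "store", and the sample data are all assumptions for illustration, not how any particular product is implemented:

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical 4 KB block size

def dedup_store(data: bytes, store: dict) -> list:
    """Split data into fixed-size blocks, storing each unique block once.
    Returns the list of block hashes needed to rebuild the data."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # duplicate blocks cost nothing extra
        recipe.append(digest)
    return recipe

# Simulate a ~100 KB attachment saved by two different users:
attachment = b"".join(hashlib.sha256(str(i).encode()).digest()
                      for i in range(3200))  # 102400 bytes of varied data
store = {}
recipe_user1 = dedup_store(attachment, store)
recipe_user2 = dedup_store(attachment, store)

logical = 2 * len(attachment)                   # what the two users "see"
physical = sum(len(b) for b in store.values())  # what is actually stored
print(logical, physical)
```

The second user's copy adds nothing to physical storage: every block hash is already in the store, so only the cheap "recipe" of hashes is kept.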
A better practice would be to avoid this in the first place. Here are the techniques I use instead:
- Point to the document in a database
We are heavy users of Lotus Notes databases. These can be encrypted and controlled with Access Control Lists (ACL) that determine who can create or read documents in each database. Annually, all the database ACLs are validated so that people can confirm that they continue to have a need-to-know for the documents in each database. Sending a confidential document as a "document link" to a database entry takes only a few bytes, and all the recipients that are already on the ACL have access to that document.
- Point to the document on a web page
If the document is available on an internal or external website, just send the URL instead of attaching the file. Again, this takes only a few bytes. We have websites accessible to all internal employees, websites that can be accessed only by a subset of employees with special permissions and credentials based on their job role, and websites that are accessible to our IBM Business Partners.
In my case, if I happen to have a blog posting that answers a question or helps illustrate an idea, I will send the "permalink" URL of that blog post in my email.
- Point to the document on shared NAS file system
Internally, IBM uses a "Global Storage Architecture" (GSA) based on IBM's Scale-Out File Services [SoFS], with everyone initially getting 10GB of disk space to store files, with the option to request more if needed. The system has policy-based support for placing and migrating older data to tape to reduce actual disk usage, and combines a clustered file system with a global name space.
My SoFS space is now up to 25GB, and I store a lot of presentations and whitepapers that are useful to others. A URL with "ftp://" or "http://" is all you need to point to a file in this manner, and greatly reduces the need for attachments. I can map my space as "Drive X:" on my Windows system, or as an NFS mount point on my Linux system, which allows me to easily drag files back and forth.
Departments that don't need to offer "worldwide access" use NAS boxes instead, such as the IBM System Storage N series.
Pointing to files in a shared space, rather than as attachments in email, may take some getting used to. I've had a few recipients send me requests such as "can you send that as an attachment (not a URL)?" because they plan to read it on the airplane or train, where they won't have online connectivity.
This all relates to new ways for employees to collaborate. Shawn from Anecdote writes in the post [Fostering a Collaboration Culture]:
"Have you invested in the latest and greatest in collaboration technology but still feel people are still not collaborating? How many Microsoft Sharepoint servers and IBM Quickplaces remain relatively untouched or only used by the organization's technorati? I think it's a big problem because this narrow view of collaboration starts to get the concept a bad name: "yeah, we did collaboration but no one used it." And then there's the issue of the vast amount of money wasted and opportunities lost. We can't afford to lose faith in collaboration because the external environment is moving in a direction that mandates we collaborate. The problems we face now and into the future will only increase in complexity and it will require teams of people within and across organizations to solve them."
Well, sending pointers instead of attachments works for me, and has kept me out of "mail jail" for quite some time now.
technorati tags: IBM, deduplication, email, mailbox, Gmail, attachment, Lotus, Notes, database, URL, Permalink, GSA, NAS, SoFS, disk, Anecdote
IDC, an independent industry analyst firm, put out their 4Q07 "Worldwide Disk Storage Systems Quarterly Tracker" report. Here is an excerpt from their [press release]:
"Worldwide external disk storage systems factory revenues posted 9.8 percent year-over-year growth in the fourth quarter of 2007 (4Q07), totaling $5.3 billion (USD), according to the IDC Worldwide Disk Storage Systems Quarterly Tracker. For the quarter, the total disk storage systems market grew to $7.5 billion (USD), up 7.6 percent from the prior year's fourth quarter. Total disk storage systems capacity shipped reached 1,645 petabytes, growing 56.3 percent."
For those wondering how an industry could grow 56.3 percent in capacity, but only 7.6 percent in revenue, it is because the average dollar-per-GB dropped in 2007 from $6.63 down to $4.56 (USD), representing a 31 percent decline. In the past, disk prices dropped 40 to 60 percent each year, so single-digit growth was the best major vendors could hope for. However, lately this decline has slowed to 25 to 35 percent per year, while client demand for capacity continues at the 60 percent pace, which means that vendors could achieve double-digit revenue growth soon.
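A quick back-of-the-envelope check shows the arithmetic holds together, using only the figures above: revenue growth is roughly capacity growth multiplied by the change in price per GB.

```python
# Sanity-check: revenue growth ~= capacity growth x price change.
capacity_growth = 1.563    # capacity shipped grew 56.3% year-over-year
price_ratio = 4.56 / 6.63  # $/GB fell from $6.63 to $4.56 (a ~31% decline)

implied_revenue_growth = capacity_growth * price_ratio - 1
print(f"{implied_revenue_growth:.1%}")  # 7.5%, close to the reported 7.6%
```

The small gap between the implied 7.5 percent and the reported 7.6 percent is just rounding in the published figures.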
Once again, IBM was ranked number 1 in total disk storage. No surprise there. Here are the details:
"Total Disk Storage Systems Market
In the total worldwide disk storage systems market, IBM led the market with 22.9 percent, followed by HP with 18.1 percent revenue share. EMC maintained the third position with 16.0 percent revenue share.
For the full year, the total disk storage systems market posted 6.6 percent growth to $26.3 billion (USD). In the total worldwide disk storage systems market, IBM and HP led the market in a statistical tie with 20.1 percent and 19.4 percent revenue share, respectively. EMC maintained the third position with 15.2 percent revenue share."
But why focus just on disk? IDC also released their "Worldwide Combined Disk and Tape Storage 3Q07 Market Share Update", and IBM was number one for that as well, taking in 21.9 percent share. Here's a quote from IBM VP Barry Rudolph in [CNN Money]:
"IBM's continued leadership in the storage hardware market reaffirms our strategy to provide the most comprehensive tiered portfolio of storage offerings, ranging from software and services to disk and tape storage solutions," said Barry Rudolph, Vice President, Storage Stack Solutions, IBM. "IBM is the clear choice for providing information infrastructure solutions that offer the most cost-efficient, streamlined approach to help our customers increase overall productivity and maximize performance."
It is looking like 2008 is going to be a good year for IBM!
technorati tags: IBM, IDC, 4Q07, 3Q07, marketshare, market share, EMC, HDS, HP, Sun, NetApp, Dell, disk, tape, systems
No, this is not an announcement about myself moving to Nepal.
My friends over at OLE Nepal are [looking for a Super SysAdmin] willing to live in Nepal for five months and help out with their project to help the students in the local schools there. I think this might be a great opportunity for someone to help change the world. Those of you who have read my past blog posts about the One Laptop per Child [OLPC], such as [Understanding the LAMP platform] and [Supporting OLPC Schools with LAMP stacks], may understand the type of work involved.
- You dream in Bash
- IPv4, IPv6, Wireless Mesh networking? No problem! You know Linux networking inside and out
- Extensive knowledge of BIND, DHCPD, Squid, Apache, security, etc.
- Experience working with [Moodle] would be most excellent (it is basically a PHP web application that maintains MySQL databases for lesson plans, homework assignments, and other school-related information)
- Adept with Python scripting or could learn it quickly. OLPC has standardized on Python for scripting (although knowledge in Perl and PHP won't hurt either)
- You look to implement a practical solution that less-skilled sysadmins can easily maintain, over a cooler but more complicated solution.
- You play well with others. You don’t alienate collaborators with rude e-mails that assert your technical superiority (even though you are)
- Your primary concern is meeting the educational needs of kids and teachers. You rate technical awesomeness a distant second to meeting those critical needs.
I've been working with Dev, Bryan and Sulochan for the past three months (remotely, here from Tucson, AZ), but we've come to a point where we need on-site expertise. I will continue to provide remote support.
Given the number of readers who have contacted me over the past year looking for an IT job (or a different job because they are not happy where they are), this could be an amazing experience.
technorati tags: OLE Nepal, OLPC, Bash, Linux, IPv6, Mesh, networking, Squid, Apache, security, Moodle, LAMP, PHP, Perl, Python
It's been a while since I've talked about [Second Life].
The latest post on eightbar, [Spimes, Motes and Data centers], discusses IBM's use of virtual world technology to analyze data centers in three dimensions. New World Notes asks [What's The Point Of 3D Data Centers?] One would think that a simple monitoring tool based on a two-dimensional floor plan would be enough to evaluate a data center.
Enter Michael Osias, IBM (a.k.a. Illuminous Beltran in Second Life). Some of the leading news sites have begun to notice some 3D data centers that he has helped pioneer. UgoTrade writes up an article about Michael and the media attention in [The Wizard of IBM's 3D Data Centers].
Of course, in presenting these "Real Life/Second Life" (RL/SL) interactive technologies, IBM is sometimes the target of ridicule. Why? Because IBM is 10 years ahead of everyone else. So, are there aspects of a data center where 3D interfaces make sense? I think there are.
- Topology Viewer
IBM TotalStorage Productivity Center has an awesome "topology viewer" that shows which servers are connected to which switches, disk systems and tape libraries. This is all done in a 2D diagram, generated dynamically with data discovered through open standard interfaces, similar to what you might draw manually with tools like Visio. Imagine, however, how much more powerful a 3D viewer would be, with virtual equipment mapped to the physical location of each piece of hardware, including its position in the rack and the rack's location on the data center floor.
- Temperature Flow
Designing computer room air conditioning (CRAC) systems is actually a three-dimensional problem. Cold air is fed underneath the raised floor, comes up through strategically placed "vent" tiles, and is taken in at the front of each rack. Hot air comes out the back of each rack, and hopefully finds a ceiling duct intake to get cooled again. The temperature six inches off the floor is different from the temperature six feet off the floor, and 3D monitoring tools could be helpful in identifying "hot spots" that need attention. In this case "spimes" represent sensors in the 3D virtual world, able to report back information to help diagnose problems or monitor events.
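As a toy illustration of the kind of analysis such monitoring would perform, here is a sketch of hot-spot detection over a 3D grid of temperature sensors. The sensor positions, readings, and the 85°F alert threshold are all made-up assumptions, not values from any real monitoring product:

```python
# Hypothetical sensor readings, keyed by 3D position in the machine room:
readings = {
    # (row, column, height_in_feet): temperature in degrees F
    (0, 0, 0.5): 68, (0, 0, 6.0): 75,
    (0, 1, 0.5): 70, (0, 1, 6.0): 92,  # hot exhaust pooling near the ceiling
    (1, 0, 0.5): 67, (1, 0, 6.0): 74,
    (1, 1, 0.5): 69, (1, 1, 6.0): 77,
}

HOT_THRESHOLD = 85  # assumed alert level, degrees F

hot_spots = sorted(pos for pos, temp in readings.items()
                   if temp >= HOT_THRESHOLD)
print(hot_spots)  # [(0, 1, 6.0)]
```

Note how the problem spot shows up only at the six-foot height; a 2D floor plan that averaged the column would miss it, while a 3D view could render the alert at exactly the matching rack position.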
- Server consolidation
After many people left the mainframe in favor of running a single application per distributed server, the pendulum has finally swung back. Companies are discovering the many benefits of changing this behavior. "Re-centralization" is the task at hand. Thanks to virtualization of servers, networks and storage, sharing common resources can once again claim the benefits of economies of scale. In many cases, servers work together in collective units for specific applications that might benefit from being consolidated onto the same equipment.
IBM's "New Enterprise Data Center" vision recognizes that people will need to focus on the management aspects of their IT infrastructure, and 3D virtual world technologies might be an effective way to get the job done.
technorati tags: secondlife, eightbar, spimes, motes, 3D, data center, virtual world, IBM, TotalStorage, Productivity Center, CRAC, re-centralization, New Enterprise Data Center
I am always amused by the manner in which the IT industry tries to solve problems. Take, for example, the process of backups. The simplest approach is to back up everything, and keep "n" versions of that. Simple enough for a small customer who has only a handful of machines, but it does not scale well. In my post [Times a Million], I coined the phrase "laptop mentality", referring to people's inability to think through solutions at large scale.
Apparently, I am not alone. Steve Duplessie (ESG) wrote in his post [Random Thoughts]:
"I may even get to stop yelling at people to stop doing full backups every week on non-changing data (which is 80 %+) just because that's how they used to do it. They won't have a choice. You can't back up 5X your current data the way you do (or don't) today."
Hu Yoshida (HDS) does a great job explaining that there are three ways to perform deduplication for backups:
- Pre-processing. Have the backup software not backup unchanged data.
- Inline processing. Have an index to filter the output of the backup as it sends data to storage.
- Post-processing. Have the receiving storage detect duplicates and handle them accordingly.
Here's an excerpt from his post [Deduplication Ratios]:
"A full backup of a 1TB database tablespace is taken on day one. The next day another full backup is taken and only 2GB of that backup has any changes.
Using traditional full backup approaches after 2 nights, the backup capacity required is 2 x 1TB = 2TB
One method of calculating de-duplication ratios could yield a low ratio:
- Total de-duplicated backup capacity used = 1TB + 2GB = 1.002TB
- If the de-duplication ratio compares the amount of total physical storage used to the total amount that would have been used by traditional backup methods, the ratio = 2TB / 1.002TB = approximately 2:1
Another method of calculating de-duplication ratios could yield a high ratio:
- Total de-duplicated backup capacity used still = 1.002TB
- If the de-duplication ratio compares the amount of data stored in the most recent (second) backup to the amount that would have been used by traditional backup methods, the ratio = 1TB / 2GB = 1000GB / 2GB = 500:1"
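Hu's two calculations are easy to reproduce, and make the point that the same scenario yields wildly different "deduplication ratios" depending on what you compare (units below are in GB):

```python
# Reproducing the two ratio calculations from Hu Yoshida's example:
full_backup = 1000  # the 1TB tablespace
changed = 2         # 2GB changed between day one and day two

traditional = 2 * full_backup         # two full backups: 2TB
deduplicated = full_backup + changed  # 1TB + 2GB = 1.002TB stored

low_ratio = traditional / deduplicated  # total physical vs. traditional
high_ratio = full_backup / changed      # latest backup vs. its stored delta

print(round(low_ratio, 2), round(high_ratio))  # 2.0 500
```

Both numbers are "correct"; they just answer different questions, which is why vendor-quoted ratios are worth scrutinizing.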
While IBM also offers deduplication in the IBM System Storage N series disk systems, I find that for backup, it is often more effective to apply best practices via IBM Tivoli Storage Manager (TSM). Let's take a look at some:
- Exclude Operating System files
Why take full backups of your operating system every day? Yes, deduplication will find a lot to reduce from this, but best practices would exclude these files in the first place. TSM has an include/exclude list, and the default version excludes all the operating system files that would instead be recovered from "bare machine recovery" or "new system install" procedures. Often, if the replacement machine has different gear inside, your OS backups aren't what you need, and a fresh OS install may detect this and install different drivers or different settings.
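The idea behind an include/exclude list can be sketched as follows. The patterns and the simple "excludes always win" precedence rule are simplified assumptions for illustration, not TSM's actual option-file syntax, default list, or processing order:

```python
import fnmatch

# Illustrative patterns only -- not TSM's real defaults or syntax:
exclude_patterns = ["/usr/lib/*", "/boot/*", "C:/Windows/*"]
include_patterns = ["/home/*", "C:/Users/*"]

def should_back_up(path: str) -> bool:
    """Simplified rule: any matching exclude wins over any include."""
    if any(fnmatch.fnmatch(path, pat) for pat in exclude_patterns):
        return False
    return any(fnmatch.fnmatch(path, pat) for pat in include_patterns)

print(should_back_up("/home/alice/report.odp"))  # True
print(should_back_up("/usr/lib/libc.so"))        # False
```

The payoff is that OS and application files never even enter the backup stream, so there is nothing there for a downstream deduplicator to reduce.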
- Exclude Application programs
Again, yes, if there are several machines running the same application, you probably have an opportunity for deduplication. However, unless you match these up with the appropriate registry entries or settings buried down in the operating system, recovering just the application program files may render an unusable system. Applications are best installed from a common source, either "pushed" through software distribution or "pulled" from an application installation space.
If you have TB-sized databases, and are only doing daily full backups to protect them, have I got a solution for you. IBM and others have software that is "application-aware" and "database-aware" enough to determine what has changed since the last backup and copy only that delta. Taking advantage of the TSM Application Programming Interface (API) allows both IBM and third-party tools to take these delta backups correctly.
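A delta backup of this kind can be sketched by comparing block hashes against the previous backup and copying only the blocks that differ. The block size and the tiny stand-in tablespace below are made up for illustration; real application-aware tools typically work through database logs or APIs rather than raw block scans:

```python
import hashlib

BLOCK = 4096  # hypothetical block size; real products choose their own

def block_hashes(data: bytes) -> list:
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def delta_backup(current: bytes, previous_hashes: list) -> dict:
    """Copy only the blocks whose hash differs from the last backup."""
    delta = {}
    for idx, h in enumerate(block_hashes(current)):
        if idx >= len(previous_hashes) or previous_hashes[idx] != h:
            delta[idx] = current[idx * BLOCK:(idx + 1) * BLOCK]
    return delta

tablespace = bytearray(1024 * BLOCK)  # a 4 MB stand-in for the 1TB tablespace
baseline = block_hashes(bytes(tablespace))  # day one: full backup hashes

tablespace[5 * BLOCK] = 0xFF  # one block changes overnight
delta = delta_backup(bytes(tablespace), baseline)
print(len(delta))  # 1 -- only the changed block is copied
```

Scaled up, this is exactly the 1TB-versus-2GB scenario from Hu Yoshida's example: the nightly transfer shrinks to just the changed blocks, no deduplication hardware required.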
- User Files
Which leaves us with user files, which are often unique enough, compared with the files of other users, that they would not benefit from file-level deduplication. Backing up only changed data, as TSM does with its patented ["progressive incremental backup"] method, generally gets most of the benefits described by deduplication, without having to purchase storage hardware features.
Of course, if two or more users have identical files, the question might be why these are not stored on a common file share. NAS file share repositories can greatly reduce each user keeping their own set of duplicates. It is interesting that some block-oriented deduplication, such as that found in the IBM System Storage N series, can get some benefit because some user files are often derivatives of other files, and there might be some 4KB blocks of data in common.
Last November, I visited a customer in Canada. All of their problems were a direct result of taking full backups every weekend. It put a strain on their network; it used up too many disk and tape resources; and it took too long to complete. They asked about virtual tape libraries, deduplication, and anything else that could help them. The answer was simple: switch to IBM Tivoli Storage Manager and apply best practices.
technorati tags: Steve Duplessie, ESG, Hu Yoshida, HDS, deduplication, N series, application-aware, database-aware, database, tablespace, best practice, Tivoli, Storage Manager, TSM, progressive, incremental, backup