Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
Since the [IBM System Storage Technical University 2011] runs concurrently with the System x Technical University, attendees are allowed to mix-and-match. I attended several presentations regarding server virtualization and hypervisors.
Matt Archibald is an IT Management Consultant on IBM's Systems Agenda Delivery team. He started with a history of hypervisors, from IBM's early CP/CMS in 1967, through the just-announced VMware vSphere 5.
He explained that there are three types of Hypervisor architectures today:
Type 1 - often referred to as "Bare Metal" runs directly on the server host hardware, and allows different operating system virtual machines to run as guests. IBM's System z [PR/SM] and [PowerVM] as well as the popular VMware ESXi are examples of this type.
Type 2 - often referred to as "Hosted" runs above an existing operating system, and allows different operating system virtual machines to run as guests. The popular [Oracle/Sun VirtualBox] is an example of this type.
OS Containers - runs above an existing operating system base, and allows multiple "guests" that all run the same operating system as the base. This affords some isolation between applications. [Parallels Virtuozzo Containers] is an example of this type.
The dominant architecture is Type 1. For x86, IBM is the number one reseller of VMware. VMware recently announced [vSphere 5], which changes its licensing model from CPU-based to memory-based. For example, a virtual machine with 32 virtual CPUs and 1TB of virtual RAM (vRAM) would cost over $73,000 per year to license the VMware "Enterprise Plus" software. The only plus-side to this new licensing is that the "memory" entitlement transfers during Disaster Recovery to the remote location.
"Xen is dead." was the way Matt introduced the section discussing Hybrid Type-1 hypervisors like Xen and Hyper-V. These run bare-metal, but require networking and storage I/O to be processed by a single bottleneck partition referred to as "Dom 0". As such, this hybrid approach does not scale well on larger multi-sock host servers. So, his Xen-is-dead message was referring to all Hybrid-based Hypervisors including Hyper-V, not just those based on Xen itself.
The new up-and-comer is "Linux KVM". Last year, in my blog post about [System x KVM solutions], I mentioned the confusion over the KVM acronym, which is used with two different meanings. Many people use KVM to refer to Keyboard-Video-Mouse switches that allow access to multiple machines. IBM has renamed these switches Local Console Managers (LCM) and Global Console Managers (GCM). This year, the System x team has adopted the term "Linux KVM" for the second meaning, the [Kernel-based Virtual Machine] hypervisor.
Linux KVM is not a product, but an open-source project. As such, it is built into every Linux kernel. Red Hat has created two specific deliverables under the name Red Hat Enterprise Virtualization (RHEV):
RHEV-H, a tiny ESXi-like bare-metal hypervisor that fits in 78MB, making it small enough to fit on a USB stick, CD-ROM or memory chip.
RHEV-M, a vCenter-like management software to manage multiple virtual machines across multiple hosts.
Personally, I run RHEL 6.1 with KVM on my IBM laptop as my primary operating system, with a Windows XP guest image to run a few Windows-specific applications.
A complaint about the current RHEV 2.2 release from Linux fanboys is that RHEV-M requires a Windows server, and uses Windows PowerShell for scripting. The next release of RHEV is likely to provide a Linux-based option for the management server.
Of the various hypervisors evaluated, KVM appears poised to offer the best scalability for multi-socket host machines. The next release is expected to support up to 4096 threads, 64TB of RAM, and over 2000 virtual machines. Compare that to VMware vSphere 5, which supports only 160 threads, 2TB of RAM and up to 512 virtual machines.
Linux KVM Overview
Matt also presented a session focused on Linux KVM. While IBM is the leading reseller of VMware for the x86 server platform, it has chosen Linux KVM to run all of its internal x86 Cloud Computing facilities, as it can offer 40 to 80 percent savings, based on Total Cost of Ownership (TCO).
Linux KVM can run unmodified Windows and Linux guest operating systems as guest images with less than 5 percent overhead. Since KVM is built into the Linux kernel, any certification testing automatically benefits KVM as well. KVM takes advantage of modern CPU extensions like Intel's VT and AMD's AMD-V.
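If you are curious whether a given Linux host is ready to run KVM guests, the check is simple; here is a minimal Python sketch, assuming a standard Linux host where the CPU flags appear in /proc/cpuinfo and the loaded kernel module exposes /dev/kvm:

```python
# Check whether this Linux host can run KVM guests: look for the Intel VT (vmx)
# or AMD-V (svm) CPU flags, and for the /dev/kvm device that the kvm kernel
# module exposes once loaded.
import os
import re

def kvm_ready():
    with open("/proc/cpuinfo") as f:
        cpuinfo = f.read()
    has_extensions = re.search(r"\b(vmx|svm)\b", cpuinfo) is not None
    has_kvm_device = os.path.exists("/dev/kvm")  # present when kvm_intel or kvm_amd is loaded
    return has_extensions, has_kvm_device

if __name__ == "__main__":
    ext, dev = kvm_ready()
    print("CPU virtualization extensions:", "yes" if ext else "no")
    print("/dev/kvm device present:", "yes" if dev else "no")
```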
For high availability, in the event that a host fails, KVM can restart the guest images on other KVM hosts. RHEV offers a "prioritized restart order" which allows mission-critical images to be started before less important ones.
RHEV also provides "Virtual Desktop Infrastructure", known as VDI. This allows a lightweight client with a browser to access an OS image running on a KVM host. Matt was able to demonstrate this with Firefox browser running on his Android-based Nexus One smartphone.
RHEV also adds features that make it ideal for cloud deployments, including hot-pluggable CPU, network and storage; Service Level Agreement monitoring for CPU, memory and I/O resources; storage live migration to move the raw image files while guests are running; and a self-service user portal.
IBM has been doing server virtualization for decades. When I first started at IBM in 1986, I was doing z/OS development and testing on z/VM guest images. Later, around 1999, I started working with the "Linux on z" team, running multiple Linux images under PR/SM and z/VM. While the server virtualization solutions most people are familiar with (VMware, Hyper-V, Xen) are relatively recent arrivals, IBM has a much deeper understanding and a longer heritage. This helps set IBM apart from the competition when helping clients.
Wrapping up my coverage of the annual [2010 System Storage Technical University], I attended what was perhaps the best session of the conference. Jim Nolting, IBM Semiconductor Manufacturing Engineer, presented the new IBM zEnterprise mainframe, "A New Dimension in Computing", under the Federal track.
The zEnterprise debunks the "one processor fits all" myth. For some I/O-intensive workloads, the mainframe continues to be the most cost-effective platform. However, there are other workloads where a memory-rich Intel or AMD x86 instance might be the best fit, and yet other workloads where the high number of parallel threads of reduced instruction set computing [RISC] such as IBM's POWER7 processor is more cost-effective. The IBM zEnterprise combines all three processor types into a single system, so that you can now run each workload on the processor that is optimized for that workload.
IBM zEnterprise z196 Central Processing Complex (CPC)
Let's start with the new mainframe z196 central processing complex (CPC). Many thought this would be called the z11, but that didn't happen. Basically, the z196 machine has a maximum of 96 cores versus the z10's 64-core maximum, and each core runs at 5.2GHz instead of the z10's 4.7GHz. It is available in air-cooled and water-cooled models. The primary operating system that runs on this is called "z/OS", which, when used with its integrated UNIX System Services subsystem, is fully UNIX-certified. The z196 server can also run z/VM, z/VSE, z/TPF and Linux on z, which is just Linux recompiled for the z/Architecture chipset. In my June 2008 post [Yes, Jon, there is a mainframe that can help replace 1500 servers], I mentioned the z10 mainframe had a top speed of nearly 30,000 MIPS (Million Instructions Per Second). The new z196 machine can do 50,000 MIPS, a 60 percent increase!
The z196 runs a hypervisor called PR/SM that allows the box to be divided into dozens of logical partitions (LPAR), and the z/VM operating system can also act as a hypervisor running hundreds or thousands of guest OS images. Each core can be assigned a specialty engine "personality": GP for general processor, IFL for z/VM and Linux, zAAP for Java and XML processing, and zIIP for database, communications and remote disk mirroring. Like the z9 and z10, the z196 can attach to external disk and tape storage via ESCON, FICON or FCP protocols, and through NFS via 1GbE and 10GbE Ethernet.
IBM zEnterprise BladeCenter Extension (zBX)
There is a new frame called the zBX that basically holds two IBM BladeCenter chassis, each capable of holding 14 blades, for a total of 28 blades per zBX frame. For now, only select blade servers are supported inside, but IBM plans to expand this to include more as testing continues. The POWER-based blades can run native AIX, IBM's other UNIX operating system, and the x86-based blades can run Linux-x86 workloads, for example. Each of these blade servers can run a single OS natively, or run a hypervisor to host multiple guest OS images. IBM plans to look into running other POWER and x86-based operating systems in the future.
If you are already familiar with IBM's BladeCenter, then you can skip this paragraph. Basically, you have a chassis that holds 14 blades connected to a "mid-plane". On the back of the chassis, you have hot-swappable modules that snap into the other side of the mid-plane. There are modules for FCP, FCoE and Ethernet connectivity, which allows blades to talk to each other, as well as external storage. BladeCenter Management modules serve as both the service processor as well as the keyboard, video and mouse Local Console Manager (LCM). All of the IBM storage options available to IBM BladeCenter apply to zBX as well.
Besides general purpose blades, IBM will offer "accelerator" blades that will offload work from the z196. For example, let's say an OLAP-style query is issued via SQL to DB2 on z/OS. In the process of parsing the complicated query, DB2 creates a Materialized Query Table (MQT) to temporarily hold some data. This MQT contains just the columnar data required, which is then transferred to a set of blade servers known as the Smart Analytics Optimizer (SAO), which processes the request and sends the results back. The Smart Analytics Optimizer comes in various sizes, from small (7 blades) to extra large (56 blades, 28 in each of two zBX frames). A 14-blade configuration can hold about 1TB of compressed DB2 data in memory for processing.
IBM zEnterprise Unified Resource Manager
You can have up to eight z196 machines and up to four zBX frames connected together into a monstrously large system. There are two internal networks. The Inter-ensemble data network (IEDN) is a 10GbE network that connects all the OS images together, and can be further subdivided into separate virtual LANs (VLAN). The Inter-node management network (INMN) is a 1000BASE-T Ethernet network that connects all the host servers together to be managed under a single pane of glass known as the Unified Resource Manager, which is based on IBM Systems Director.
By integrating service management, the Unified Resource Manager can handle Operations, Energy Management, Hypervisor Management, Virtual Server Lifecycle Management, Platform Performance Management, and Network Management, all from one place.
IBM Rational Developer for System z Unit Test (RDz)
But what about developers and testers, such as those at Independent Software Vendors (ISVs) that produce mainframe software? How can IBM make their lives easier?
Phil Smith on z/Journal provides a history of [IBM Mainframe Emulation]. Back in 2007, three emulation options were in use in various shops:
Open Mainframe, from Platform Solutions, Inc. (PSI)
FLEX-ES, from Fundamental Software, Inc.
Hercules, which is an open source package
None of these are viable options today; nobody wanted to pay IBM for its Intellectual Property on the z/Architecture or license the use of the z/OS operating system. To fill the void, IBM put out an officially-supported emulation environment called IBM System z Professional Development Tool (zPDT), available to IBM employees, IBM Business Partners and ISVs that register through IBM PartnerWorld. To help out developers and testers who work at clients that run mainframes, IBM now offers IBM Rational Developer for System z Unit Test, a modified version of zPDT that can run on an x86-based laptop or shared IBM System x server. Based on the open source [Eclipse IDE], RDz emulates GP, IFL, zAAP and zIIP engines on a Linux-x86 base. A four-core x86 server can emulate a 3-engine mainframe.
With RDz, a developer can write code, compile and unit test all without consuming any mainframe MIPS. The interface is similar to Rational Application Developer (RAD), and so similar skills, tools and interfaces used to write Java, C/C++ and Fortran code can also be used for JCL, CICS, IMS, COBOL and PL/I on the mainframe. An IBM study ["Benchmarking IDE Efficiency"] found that developers using RDz were 30 percent more productive than using native z/OS ISPF. (I mention the use of RAD in my post [Three Things to do on the IBM Cloud]).
What does this all mean for the IT industry? First, the zEnterprise is perfectly positioned for [three-tier architecture] applications. A typical example could be a client-facing web-server on x86, talking to business logic running on POWER7, which in turn talks to database on z/OS in the z196 mainframe. Second, the zEnterprise is well-positioned for government agencies looking to modernize their operations and significantly reduce costs, corporations looking to consolidate data centers, and service providers looking to deploy public cloud offerings. Third, IBM storage is a great fit for the zEnterprise, with the IBM DS8000 series, XIV, SONAS and Information Archive accessible from both z196 and zBX servers.
This week, IBM celebrates its Centennial, 100 years since its incorporation on June 16, 1911.
A few months ago, the Tucson Executive Briefing Center ordered its latest IBM System Storage [DS8800] to be on display for demos. This was manufactured in Vác, Hungary (about an hour north of Budapest), and was going to be shipped over to the United States.
However, Sam Palmisano, IBM Chairman and CEO, was in Hannover, Germany for the [CeBIT conference] and wanted this DS8800 to be re-directed to Germany first for the event. He was kind enough to sign it for us. Brian Truskowski, IBM General Manager for Storage, and Rod Adkins, IBM Senior Vice President for IBM Systems Technology Group (and my fifth-line manager), signed it as well!
I am pleased to say this "signed" DS8800 has arrived in Tucson. It is the latest model in a family of market-leading high-end enterprise-class disk systems designed to attach to all computers, including System z mainframes, POWER systems running AIX and IBM i, as well as servers running HP-UX, Solaris, Linux or Windows.
For more on IBM's other innovations over the past 100 years, check out the [Icons of Progress], which includes several storage innovations.
Jon Toigo over at DrunkenData writes in his post [A Wink and a Nod] about the benefits of the new IBM System z10 Enterprise Class mainframe. Here's an excerpt about storage:
"The other key point worth making about this scenario is that storage behind a z10 must conform to IBM DASD rules. That means no more BS standards wars between knuckle-draggers in the storage world who continue to mitigate the heterogeneous interoperability and manageability of distributed systems storage using proprietary lock in technologies designed as much to lock in the consumer and lock out the competition as to deliver any real value. That has got to be worth something."
For z/OS and TPF operating systems, disk must support CCW commands over ESCON or FICON connections, or NFS commands over the Local Area Network. However, most of the workloads being ported over from x86 platforms will probably be running Linux on System z images, and Linux supports both CCW and SCSI protocols, the latter over native FCP connections through a Storage Area Network (SAN) or via iSCSI over the Local Area Network. Many SAN directors support both FCP and FICON, and the z10 also supports both 1Gbps and 10Gbps Ethernet, so you may not have to invest in any new networking gear.
The best part is that you may not have to migrate your data. The IBM System Storage SAN Volume Controller is supported for Linux on System z, and with "image mode" you can leave the data in its original format on its original disk array. Many file systems are now supported by Linux, including Windows NTFS with the latest NTFS-3G driver.
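As an aside on that last point, mounting an NTFS volume on Linux with the NTFS-3G driver is a one-liner; here is a minimal sketch, where the device and mount-point paths are placeholders for your own:

```python
# Sketch: mount an NTFS volume read/write on Linux using the ntfs-3g driver,
# the same as running "mount -t ntfs-3g /dev/sdb1 /mnt/windows" as root.
# The device path and mount point below are placeholders.
import os
import subprocess

os.makedirs("/mnt/windows")  # create the mount point
subprocess.check_call(["mount", "-t", "ntfs-3g", "/dev/sdb1", "/mnt/windows"])
print("NTFS volume mounted at /mnt/windows")
```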
If your data is already on NAS storage, such as the IBM System Storage N series disk systems, then the IBM z10 can access it directly, from z/OS, z/VM or Linux.
Have lots of LTO tape data? Linux on System z supports LTO as well.
Jon continues his rant with a question about porting Microsoft Windows applications. Here's another excerpt:
"For one, what do we do with all the Microsoft servers. There is no Redmond-sanctioned approach to my knowledge for virtualizing Microsoft SQL Server or Exchange Server in a mainframe partition."
Yes, it is possible to run Windows on a mainframe through emulation, but I feel that's the wrong approach. Instead, the focus should be on running "functionally equivalent" programs on the native mainframe operating systems, and again Linux is often the best choice for this. Switching from Windows to Linux may not be "Redmond-sanctioned", but it gets the job done.
Instead of SQL Server, consider something functionally equivalent like IBM's DB2 Universal Database, or perhaps an open source database like MySQL, PostgreSQL or Apache Derby. Well-written applications use standard SQL calls, so if the application does not try to use unique, proprietary features of MS SQL Server, you are in good shape.
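To illustrate that portability point, here is a minimal Python sketch. The "orders" table and column names are hypothetical; the point is that any driver implementing the Python DB-API (psycopg2 for PostgreSQL, ibm_db_dbi for DB2) exposes the same cursor interface, so sticking to standard SQL keeps your database options open:

```python
# Sketch: keep application code on standard SQL so the database underneath can
# change. Any Python DB-API driver (psycopg2 for PostgreSQL, ibm_db_dbi for
# DB2, MySQLdb for MySQL) exposes the same cursor interface.
# The "orders" table and its columns are hypothetical.

def totals_by_customer(conn):
    """Return (customer_id, total) rows using only standard SQL."""
    cur = conn.cursor()
    cur.execute(
        "SELECT customer_id, SUM(amount) "
        "FROM orders "
        "GROUP BY customer_id "
        "ORDER BY customer_id"
    )
    return cur.fetchall()

# Usage with PostgreSQL:  import psycopg2;    conn = psycopg2.connect(dsn)
# Usage with DB2:         import ibm_db_dbi;  conn = ibm_db_dbi.connect(dsn)
```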
In my discussion last November on [Microsoft Exchange email server], I mentioned that Bynari makes a functionally equivalent email server on Linux that works with your existing Microsoft Outlook clients. Your end-users wouldn't know you migrated to a mainframe! (well, they might notice their email runs faster)
So if your data center has three or more racks of Sun, Dell or HP "pizza box" or "blade" x86 servers, chances are you can migrate the processing over to a shiny new IBM z10 EC mainframe and save some money in the process, without too much impact to your existing Ethernet, SAN or storage system infrastructure. IBM can even help you dispose of the old x86 machines so that their toxic chemicals don't end up in a landfill.
In his blog Rough Type, Nick Carr asks "Where is my CloudBook?" and points to John Markoff's two-part series in the New York Times on computing in the clouds. (Read it here: Part 1, Part 2)
At first, I thought he meant computing while in an airplane, but instead he is talking about computing on a laptop or other hand-held device that has no internal disk drive, no installed operating system, and no internal data storage. Instead, the idea is that you boot from a CD and access your data, and even some of your programs, over the internet. John used an Ubuntu Linux LiveCD in his example.
This week, I am in Sao Paulo, Brazil, and was "in the clouds" for over 10 hours flying from Dallas to here. The one time I am guaranteed to be "off-line" from the internet is on the plane, and I spend enough time on planes that I am able to get work done despite being "disconnected".
The same reasons people want to get rid of the disk drive on their laptop are the reasons data centers are getting rid of internal disk on their servers:
disks crash, and typically are not protected in any RAID configuration on most laptops
operating systems get infected with viruses and malware
storage on one server is generally inaccessible to every other server
Booting from CD is especially clever. No more worrying about fixing your Windows registry, viruses, corrupted operating system files, or the cruft that accumulates on your C: drive and slows you down. The CD is the same every time, so it is like running your system with a freshly installed operating system every day.
The need for central repositories of data harkens back to the years of the IBM mainframe. Of course, what made sense back then continues to make sense now. The old 3270 terminals stored no data; they merely provided keyboard input and displayed text output from the vast amount of data stored on the central system. Today, the inputs are different, using your finger or mouse to point at what you want and slide it across to make things happen, and the output may now include photos, audio and video, but the concept is still the same.
I carry my Ubuntu Linux LiveCD with me on every business trip. Combined with externally rewriteable media, such as a USB key, you can get work done even when you are in an airplane, and upload it when you are back on the net.
Well, it's that Back-To-School time again! Mo's thirteen-year-old reluctantly enters the eighth grade, still upset the summer ended so abruptly. Richard's nephew returns to the University of Arizona for another year. Natalie has chosen to move to Phoenix and pursue a post-grad degree at Arizona State University. They all have two things in common: they all want a new computer, and they are all on a budget.
Fellow blogger Bob Sutor (IBM) pointed me to an excellent article on [How to Build Your Own $200 PC], which reminded me of the [XS server I built] for my 2008 Google Summer of Code project with the One Laptop per Child organization. Now that the project is over, I have upgraded it to Ubuntu Desktop 10.04 LTS, known as Lucid Lynx. Building your own PC with your student is a great learning experience in itself. Of course, this is just the computer itself, you still need to buy the keyboard, mouse and video monitor separately, if you don't already have these.
If you are not interested in building a PC from scratch, consider taking an old Windows-based PC and installing Linux to bring it new life. Many of the older PCs don't have enough processor or memory to run Windows Vista or the latest Windows 7, but they will all run Linux.
(If you think your old system has resale value, try checking out the ["trade-in estimator"] at the BestBuy website to straighten out your misperception. However, if you do decide to sell your system, consider replacing the disk drive with a fresh empty one, or wipe the old drive clean with one of the many free Linux utilities. Jason Striegel on Engadget has a nice [HOWTO Erase your old hard disk drive] article. If you don't have your original manufacturer's Windows installation discs, installing Linux instead may help keep you out of legal hot water.)
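For the curious, a zero-fill wipe is conceptually simple; here is a minimal Python sketch of the same idea as "dd if=/dev/zero of=/dev/sdX". The device path is a placeholder, and this is destructive, so double-check the target before running anything like it:

```python
# Sketch of a zero-fill disk wipe, equivalent to "dd if=/dev/zero of=/dev/sdX".
# DESTRUCTIVE: overwrites every block on the target device. The device path is
# a placeholder; never point this at your boot drive.
import sys

def zero_fill(device, block_size=1024 * 1024):
    written = 0
    block = b"\x00" * block_size
    with open(device, "wb") as disk:
        try:
            while True:
                disk.write(block)
                written += block_size
        except (IOError, OSError):  # raised when the end of the device is reached
            pass
    return written

if __name__ == "__main__":
    total = zero_fill(sys.argv[1])  # e.g. python wipe.py /dev/sdX
    print("wrote roughly %d MB of zeros" % (total // (1024 * 1024)))
```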
Depending on what your school projects require, you want to make sure that you can use a printer or scanner with your Linux system. Don't buy a printer unless it is supported by Linux. The Linux Foundation maintains a [Printer Compatibility database]. Printing was one of the first things I got working on my Linux-based OLPC laptop, which I documented in my December 2007 post [Printing on XO Laptop with CUPS and LPR] and which got a surprising following over at [OLPC News].
To reduce paper, many schools are having students email their assignments, or use Cloud Computing services like Google Docs. Both the University of Arizona and Arizona State University use Google Docs, and the students I have talked with love the idea. Whether they use a Mac, Linux or Windows PC, all students can access Google Docs through their browser. An alternative to Google Docs is Windows Live SkyDrive, which has the option to upload and edit the latest Office format documents from the Firefox browser on Linux. Both offer you the option to upload GBs of files, which could be helpful transferring data from an old PC to a new one.
Lastly, there are many free video games for Linux, for when you need to take a break from all that studying. Ever since IBM's [36-page Global Innovation Outlook 2.0] study showed that playing video games made you a better business leader, I have been encouraging all students that I tutor or mentor that playing games is a more valuable use of your time than watching television. IBM considers video games the [future of learning]. Some argue that [Violent Video Games are Good for Kids]. It is no wonder that IBM provides the technology that runs all the major game platforms, including the Microsoft Xbox 360, Nintendo Wii and Sony PlayStation.
(FTC disclosure: I work for IBM. IBM has working relationships with Apple, Google, Microsoft, Nintendo and Sony. I use both Google Docs and Windows Live SkyDrive for personal use, and base my recommendations purely on my own experience. I own stock in IBM, Google and Apple. I have friends and family that work at Microsoft. I own an Apple Mac Mini and Sony PlayStation. I was a Linux developer earlier in my IBM career. IBM considers Linux a strategic operating system for both personal and professional use. IBM has selected Firefox as its standard browser internally for all employees. I run Linux both at home and at the office. I graduated from the University of Arizona, and have friends who either work or take classes there, as well as at Arizona State University.)
Linux skills are marketable and growing more in demand. Linux is used in everything from cellphones to mainframes, as well as many IBM storage devices such as the IBM SAN Volume Controller, XIV and ProtecTIER data deduplication solution. In addition to writing term papers, spreadsheets and presentations with OpenOffice, your Linux PC can help you learn programming skills, web design, and database administration.
To all the students in my life, I wish you all good things in the upcoming school year!
Last week, I presented "An Introduction to Cloud Computing" for two hours to the local Institute of Management Accountants [IMA] for their Continuing Professional Education [CPE]. Since I present IBM's leadership in Cloud Storage offerings, I have had to become an expert in Cloud Computing overall. The audience was a mix of bookkeepers, accountants, auditors, comptrollers, CPAs, and accounting teachers.
Here is a sample of the questions I took during and after my presentation:
If I need to shut down a host machine, do I lose all my virtual machines as well?
No, it is possible to seamlessly move virtual machines from one host to another. If you need to shut down a host machine, move all the VMs to other hosts; then you can shut down the empty host without impacting business.
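For those curious what that looks like in practice, here is a minimal sketch using the libvirt Python bindings, which underlie many Linux KVM management tools. The host names and guest name are hypothetical, and live migration assumes both hosts can reach the guest's disk image on shared storage:

```python
# Sketch: live-migrate a running KVM guest between hosts using the libvirt
# Python bindings. Host names and the guest name are placeholders; both hosts
# must share storage for the guest's disk image.
import libvirt

src = libvirt.open("qemu+ssh://host1/system")   # source KVM host
dst = libvirt.open("qemu+ssh://host2/system")   # destination KVM host

dom = src.lookupByName("guest01")               # the running virtual machine
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)  # move it without shutting it down

print("guest01 now runs on:", dst.getHostname())
```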
Does the SaaS provider have to build its own app, or can it buy an app and then rent it out?
Yes, but they won't have competitive differentiation, and the software developer they buy from will want a big cut of the action. SaaS providers that build their own applications can keep more of the profits for themselves.
How do backups work in cloud computing? Do I have to contact someone at the cloud computing company to find the backup tape?
Large datacenters often keep the most recent backups on disk, and older versions on tape in automated tape libraries that can fetch your backup in less than 2 minutes. Because of this, there is no need to talk to anyone, you can schedule or invoke your own backups, and often perform the recovery yourself using self-service tools.
Last month, my sister tried to rent a car during the week of the Tucson Gem Show, but they were out of the cars she wanted to drive. Could this happen with Cloud Computing?
Not likely. With rental cars, the cars have to be physically in Tucson to rent them. Rental companies could have brought cars down from Phoenix to satisfy demand. With Cloud Computing, it is all accessible over the global network; you are not limited to the cloud providers nearest you.
Is there a reason why Amazon Web Services (AWS) charges more for a Windows image than a Linux image?
Yes, Amazon and Microsoft have a patent cross-licensing agreement where Amazon pays Microsoft for the privilege of offering Windows-based images on their EC2 cloud infrastructure. It just makes business sense to pass those costs on to the consumer. Linux is a free open source operating system, and is often the better choice.
So if we rent a machine from Amazon, they send it to my accounting office? What exactly am I getting for 12 cents per hour?
No. The computer remains in their datacenter. You get a virtual machine with a 1.2GHz Intel processor, 1700MB of RAM, and 160GB of hard disk space, with a Windows operating system running on it, comparable to a machine you can get at the local BestBuy; but instead of running in the next room, it runs in a datacenter somewhere else in the United States with electricity and air conditioning.
You access it remotely from your desktop or laptop PC.
Why would I ever rent more than one computer?
It depends on your workload. For example, Derek Gottfrid at the New York Times needed to convert 11 million articles from TIFF format to PDF format so that he could put them up on the web. This would have taken him months using a single computer, so he rented 100 computers and got the entire stack converted in 24 hours, for a cost of about $240. See the articles [Self-Service, Prorated, Super Computing] and [TimesMachine] for details.
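The back-of-envelope math behind that answer is worth spelling out; a quick sketch, assuming the roughly 10-cents-per-instance-hour EC2 rate of the time:

```python
# Back-of-envelope math for the TimesMachine example. The per-hour rate is an
# assumption based on the roughly $0.10/instance-hour EC2 pricing of the time.
instances = 100
hours = 24
rate = 0.10                          # USD per instance-hour (assumed)

print(instances * hours * rate)      # 240.0 -> about $240 total
print(instances * hours / 24.0)      # 100.0 -> ~100 days on a single machine
```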
What about throughput? Won't I need to run cables from my accounting office to this cloud computing data center?
You will need connectivity, most likely from connections provided by your local telephone or cable company, or through the Internet. Certainly, there can be cases where direct privately-owned fiber optic cables, known as "dark fiber", can directly connect consumers to local Cloud service providers, for added security.
What about medical records? Will Cloud Computing help the Healthcare industry?
Yes, hospitals are finding that digitizing their records greatly reduces costs. IBM offers the Grid Medical Archive Solution [GMAS] as a private cloud storage solution to store X-ray images and other electronic medical records on disk and tape, and these records can be accessed from multiple hospitals and clinics, wherever the doctor or patient happens to be.
The advantage of personal computers was individualization, I could put on my own choices of software, and customize my own settings, won't we lose this with Cloud Computing?
Yes, customized software and settings cost companies millions of dollars in help desk calls. Cloud Computing attempts to provide some standardization, reducing the amount of effort to support IT operations.
Won't putting all the computers into a big datacenter make them more vulnerable to hackers?
Security is a well-known concern, but this is being addressed with encryption, access control lists, multi-tenancy isolation, and VPN connections.
My daughter has a BlackBerry or iPod or something, and when we mentioned that someone in Phoenix wore a monkey suit to avoid photo-radar speed cameras, she was able to pull up a picture on her little hand-held thing, is this the future?
Yes, mobile phones and other hand-held devices now have internet access to take advantage of Cloud Computing services. People will be able to access the information they need from wherever they happen to be. (You can see the picture here: [Man Dons Mask for Speed-Camera Photos])
IBM offers a variety of Cloud Computing services, as well as customized solutions and integrated systems that can be deployed on-premises behind your corporate firewall. To learn more, go to [ibm.com/cloud].
The second speaker was local celebrity Dan Ryan presenting the financials for the upcoming [Rosemont Copper] mining operations. Copper is needed for emerging markets, such as hybrid vehicles and wind turbines. Copper is a major industry in Arizona.
Continuing my coverage of the [Data Center 2010 conference], Wednesday afternoon included a mix of sessions that covered storage and servers.
Enabling 5x Storage Efficiency
Steve Kenniston, who joined IBM through its recent acquisition of Storwize, presented IBM's new Real-Time Compression appliance. There are two appliances: one handles 1GbE networks, and the other supports mixed 1GbE/10GbE connectivity. Files are compressed in real time with no impact to performance, and in some cases performance can improve because less data is written to the back-end NAS devices. The appliance is vendor-agnostic, not limited to IBM's N series and NetApp, and IBM is qualifying the solution with other NAS devices in the market. Compression can reduce data by up to 80 percent, providing a 5x storage efficiency.
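The arithmetic that turns a compression percentage into a storage-efficiency multiple is simple enough to spell out:

```python
# How an 80 percent compression rate becomes "5x storage efficiency":
# if compressed data occupies (1 - rate) of its original size, the same
# disk holds 1 / (1 - rate) times as much data.
def efficiency(compression_rate):
    return 1.0 / (1.0 - compression_rate)

print(efficiency(0.80))   # 5.0 -> 5x storage efficiency
print(efficiency(0.50))   # 2.0 -> 2x storage efficiency
```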
Townhall - Storage
The townhall was a Q&A session to ask the analysts their thoughts on Storage. Here I will present the answer from the analyst, and then my own commentary.
Are there any gotchas deploying Automated Storage Tiering?
Analyst: You need to fully understand your workload before investing any money into expensive Solid-State Drives (SSD).
Commentary: IBM offers Easy Tier for the IBM DS8000, SAN Volume Controller, and Storwize V7000 disk systems. Before buying any SSD, these systems will measure the workload activity and IBM offers the Storage Tier Advisory Tool (STAT) that can help identify how much SSD will benefit each workload. If you don't have these specific storage devices, IBM Tivoli Storage Productivity Center for Disk can help identify disk performance to determine if SSD is cost-justified.
Wouldn't it be simpler to just have separate storage arrays for different performance levels?
Analyst: No, because that would complicate BC/DR planning, as many storage devices do not coordinate consistency group processing from one array to another.
Commentary: IBM DS8000, SAN Volume Controller and Storwize V7000 disk systems support consistency groups across storage arrays, for those customers that want to take advantage of lower cost disk tiers on separate lower cost storage devices.
Can storage virtualization play a role in private cloud deployments?
Analyst: Yes, by definition, but today's storage virtualization products don't work with public cloud storage providers. None of the major public cloud providers use storage virtualization.
Commentary: IBM uses storage virtualization for its public cloud offerings, but the question was about private cloud deployments. IBM CloudBurst integrated private cloud stack supports the IBM SAN Volume Controller which makes it easy for storage to be provisioned in the self-service catalog.
Can you suggest one thing we can do Monday when we get back to the office?
Analyst: Create a team to develop a storage strategy and plan, based on input from your end-users.
Commentary: Put IBM on your short list for your next disk, tape or storage software purchase decision. Visit [ibm.com/storage] to re-discover all of IBM's storage offerings.
What is the future of Fibre Channel?
Analyst 1: Fibre Channel is still growing and will go from 8Gbps to 16Gbps; the transition to Ethernet is slow, so FC will remain the dominant protocol through 2014.
Analyst 2: Fibre Channel will still be around, but NAS, iSCSI and FCoE are all growing at a faster pace. Fibre Channel will only be dominant in the largest of data centers.
Commentary: Ask a vague question, get a vague answer. Fibre Channel will still be around for the next five years.
However, SAN administrators might want to investigate Ethernet-based approaches like NAS, iSCSI and FCoE where appropriate, and start beefing up their Ethernet skills.
Will Linux become the Next UNIX?
Linux in your datacenter is inevitable. In the past, Linux was limited to x86 architectures, and UNIX operating systems ran on specialized CPU architectures: IBM AIX on POWER7, Solaris on SPARC, HP-UX on PA-RISC and Itanium, and IBM z/OS on the System z architecture, to name a few. Today, Linux runs on many of these other CPU chipsets as well.
Two common workloads, Web/App serving and DBMS, are shifting from UNIX to Linux. Linux Reliability, Availability and Serviceability (RAS) is approaching the levels of UNIX. Linux has been a mixed blessing for UNIX vendors: x86 server margins are shrinking, and the high-margin UNIX market has shrunk 25 percent in the past three years.
UNIX vendors must make the "mainframe argument" that their flavor of UNIX is more resilient than any OS that runs on Intel or AMD x86 chipsets. In 2008, Sun Solaris was the number one UNIX; today, it is IBM AIX with 40 percent marketshare. Meanwhile, HP has focused on extending its Windows/x86 lead with a partnership with Microsoft.
The analyst asks "Are the three UNIX vendors in it for the long haul, or are they planning graceful exits?" The four options for each vendor are:
Milk it as it declines
Accelerate the decline by focusing elsewhere
Impede the market to protect margins
Re-energize UNIX base through added value
Here is the analyst's view on each UNIX vendor.
IBM AIX now owns 40 percent marketshare of the UNIX market. While the POWER7 chipset supports multiple operating systems, IBM has not been able to get an ecosystem to adopt Linux-on-POWER. The "Other" includes z/OS, IBM i, and other x86-based OS.
HP has the multi-OS Itanium from Intel, but is moving to multi-OS blades instead. Their "x86 plus HP-UX" strategy is a two-pronged attack against IBM AIX and z/OS. Intel's Nehalem chipset is approaching the RAS of Itanium, making the "mainframe argument" more difficult for HP-UX.
Before Oracle acquired Sun Microsystems, Oracle was focused on Linux as a UNIX replacement. After the acquisition, they now claim to support Linux and Solaris equally. They are now focused on trying to protect their rapidly declining install base by keeping IBM and HP out. They will work hard to differentiate Solaris as having "secret sauce" that is not in Linux. They will continue to compete head-on against Red Hat Linux.
An interactive poll of the audience indicated that the most strategic Linux/UNIX platform over the next five years was Red Hat Linux. This beat out AIX, Solaris and HP-UX, as well as all of the other distributions of Linux.
The rooms emptied quickly after the last session, as everyone wanted to get to the "Hospitality Suites".
The IBM Storage and Storage Networking Symposium concludes today. As is typical for many such conferences, it ended at noon, so that people can catch airline flights.
TS1120 Tape Encryption - Customer Experiences
Jonathan Barney had implemented many deployments of tape encryption, and shared his experiences at two customer locations.
The first company decided to implement their EKM servers on dedicated 64-bit Windows servers. They had three sites, in Chicago, Alpharetta, and New York City, each with two EKM servers. Each site had a single TS3500 tape library, which pointed to four EKM servers: two local and two remote.
The clever trick was managing the keystore. They decided that EKM-1 was their trusted source, made all changes to that, and then copied it to the other five EKM servers. His team deployed one site at a time, which turned out to be OK, but he would not recommend it. Better to design your complete solution, and make sure that all libraries can access all EKM servers.
This company decided to have a single key-label/key-pair for all three locations, but change it every 6 months. You have to keep the old keys for as long as you have tapes encrypted with those keys, perhaps 10-20 years. The customer found the IBM encryption implementation "elegant", and it can be easily replicated to a fourth site if needed.
The second company had both z/OS and Sun Solaris. Initially they planned to have both a hardware-based keystore on System z and a software-based keystore on Sun, but they realized that the System z version was so much more secure and reliable that it made no sense to have anything on the Sun Solaris platform.
On System z, they had two EKM images, and used VIPA to ensure load balancing from the library. Tapes written from z/OS used DFSMS Data Class to determine which tapes are encrypted and which aren't. All tapes written from Sun Solaris were encrypted, written to a separate logical library partition of the TS3500, which in turn contacted the System z EKM to provide the keys to use for the encryption.
The "gotcha" for this case was that when they tested Disaster Recovery, they had torecover the two EKM servers first, before any other restores could take place, and thistook way too long. Instead, they developed a scaled-down 10-volume "rescue recovery" z/OS image that would contain the RACF database and all EKM related software to actas the keystore during a disaster recovery. Anytime they make updates, they only haveto dump 10 volumes to tape. Restore time is down to only 2 hours.
He gave this advice to deploy tape encryption:
Some third-party z/OS security products, like Computer Associates Top Secret or ACF2, require some PTFs to work with the EKM. The latest IBM RACF is good to go.
Getting IP support from IOS to OMVS requires an IPL.
At one customer, an OMVS monitoring program killed the EKM because it wasn't in their list of "acceptable Java programs". They updated the list and EKM ran fine.
Do not update the EKM properties file while EKM is running. EKM keeps a lot of stuff in memory, and when it is recycled, copies this back to the EKM properties file, reversing any changes you may have made. It is best to shut down EKM, update the properties file, then start EKM back up again. This is why you should always have at least two EKM servers for redundancy.
TSM for Linux on System z
Randy Larson from our Tivoli group presented this session. There is a lot of interest in deploying IBM Tivoli Storage Manager backup and archive software on Linux for System z. Many customers are already invested in a mainframe infrastructure, may have TSM for z/OS or z/VM, and want the newer features and functions that are available for TSM on Linux.
TSM has special support for Lotus Domino, Oracle, DB2 and WebSphere Application Servers. TSM clients can send backup data to a TSM server internally via HiperSockets, a virtual LAN feature on the System z platform that uses shared memory to emulate a TCP/IP stack.
One of the big questions is whether to run Linux as guests under z/VM, or natively in an LPAR. The general deployment is to carve out an LPAR and run Linux natively until your server and storage administration staff have taken z/VM training classes. Once trained, they can easily move native LPAR images to z/VM guests. Unlike VMware, which takes a hefty 40% overhead on x86 platforms to manage guests, z/VM only takes 5-10% overhead.
For the TSM database and disk storage pools, Randy recommends FC/SCSI disk with the ext3 file system, combined with LVM2 into logical volumes. ECKD disk and ReiserFS work too. Avoid use of z/VM minidisks. Under LVM2, consider 32KB stripes for the TSM database, and 256KB stripes for the disk storage pools. For multipathing, use the failover method rather than multibus. Read IC45459 before you activate "directio".
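For readers who want to see what those striping recommendations look like in practice, here is a minimal sketch of the lvcreate calls, driven from Python. The volume group name, stripe count and sizes are placeholders, not part of Randy's session:

```python
# Sketch: carve striped LVM2 logical volumes per the recommendations above --
# 32KB stripes for the TSM database, 256KB stripes for the disk storage pools.
# The volume group name, stripe count, and sizes are placeholders.
import subprocess

def make_striped_lv(name, size, stripes, stripe_kb, vg="tsmvg"):
    subprocess.check_call([
        "lvcreate",
        "-i", str(stripes),    # number of physical volumes to stripe across
        "-I", str(stripe_kb),  # stripe size in KB
        "-L", size,            # logical volume size
        "-n", name,            # logical volume name
        vg,
    ])

make_striped_lv("tsmdb",   "50G",  stripes=4, stripe_kb=32)    # TSM database
make_striped_lv("tsmpool", "500G", stripes=4, stripe_kb=256)   # disk storage pool
```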
TSM for Linux on z is very much like TSM on AIX or Windows, and not like TSM for z/OS. For tape, TSM for Linux on z does not support ESCON/FICON-attached tape; you need to use FC/SCSI-attached tape and tape libraries. TSM owns the library and drives it uses, so give it a logical library partition separate from z/OS. For Sun/StorageTek customers, TSM works with or without the Gresham Enterprise DistribuTape (EDT) software. Use the IBM-provided drivers for IBM tape. For non-IBM tape, TSM provides some drivers that you can use instead.
That wraps up my week. This was a great conference! If you missed it, look for the one in Montpellier, France this October. Check out the list of IBM Technical Conferences to find others that might interest you.
Continuing my coverage of the 27th annual [Data Center Conference], the weather here in Las Vegas has been partly cloudy, which leads me to discuss some of the "Cloud Computing" sessions that I attended on Wednesday.
The x86 Server Virtualization Storm 2008-2012
Along with IBM, Microsoft is recognized as one of the "Big 5" of Cloud Computing. With their recent announcements of Hyper-V and Azure, the speaker presented the pros and cons of these new technologies versus established offerings from VMware. For example, Microsoft's Hyper-V is about three times cheaper than VMware and offers better management tools. That could be enough to justify some pilot projects. By contrast, VMware is more lightweight, only 32MB, versus Microsoft Hyper-V that takes up to 1.5GB. VMware has a 2-3 year lead ahead of Microsoft, and offers some features that Microsoft does not yet offer.
Electronic surveys of the audience offered some insight. Today, 69 percent were using VMware only, and 8 percent had VMware plus others, including Xen-based offerings from Citrix, Virtual Iron and others. By 2010, however, the audience estimated that 39 percent would be VMware+Microsoft and another 23 percent VMware plus Xen, showing a shift away from VMware's current dominance. Today, there are 11 VMware implementations for every Microsoft Hyper-V implementation, and this is expected to drop to 3-to-1 by 2010.
Of the Xen-based offerings, Citrix was the most popular supplier. Others included Novell/PlateSpin, Red Hat, Oracle, Sun and Virtual Iron. Red Hat is also experimenting with kernel-based KVM. However, the analyst estimated that Xen-based virtualization schemes would never get past 8 percent marketshare. The analyst felt that VMware and Microsoft would be the two dominant players with the bulk of the marketshare.
For cloud computing deployments, the speaker suggested separating "static" VMs from "dynamic" ones. Centralize your external storage first, and implement data deduplication for the OS load images. Which x86 workloads are best for server virtualization? The speaker offered this guidance:
The "good" are CPU-bound workloads, small/peaky in nature.
The "bad" are IO-intensive, those that exploit the features of native hardware
The "ugly" refers to workloads based on software with restrictive licenses and those not fully supported on VMs. If you have problems, the software vendor may not help resolve them.
Moving to the Cloud: Transforming the Traditional Data Center
IBM VP Willie Chiu presented the various levels of cloud computing.
Software-as-a-Service (SaaS) provides the software application, operating system and hardware infrastructure, such as SalesForce.com or Google Apps. Either the software meets your needs or it doesn't, but has the advantage that the SaaS provider takes care of all the maintenance.
Platform-as-a-Service (PaaS) provides operating system, perhaps some middleware like database or web application server, and the hardware infrastructure to run it on. The PaaS provider maintains the operating system patches, but you as the client must maintain your own applications. IBM has cloud computing centers deployed in nine different countries across the globe offering PaaS today.
Infrastructure-as-a-Service (IaaS) provides the hardware infrastructure only. The client must maintain and patch the operating system, middleware and software applications. This can be very useful if you have unique requirements.
In one case study, Willie indicated that moving a workload from a traditional data center to the cloud lowered the costs from $3.9 million to $0.6 million, an 84 percent savings!
We've Got a New World in Our View
Robert Rosier, CEO of iTricity, presented their "IaaS" offering. "iTricity" was coined from the concept of "IT as electricity". iTricity is the largest Cloud Computing company in continental Europe, hosting 2500 servers with 500TB of disk storage across three locations in the Netherlands and Germany.
Those attendees I talked to who had been at this conference before commented that this year's focus on virtualization and cloud computing is noticeably greater than in previous years. For more on this, read this 12-page whitepaper: [IBM Perspective on Cloud Computing]
Continuing my coverage of the Data Center Conference 2009, held Dec 1-4 in Las Vegas, the title of this session refers to the mess of "management standards" for Cloud Computing.
The analyst quickly reviewed the concepts of IaaS (Amazon EC2, for example), PaaS (Microsoft Azure, for example), and SaaS (IBM LotusLive, for example). The problem is that each provider has developed their own set of APIs.
(One exception was [Eucalyptus], which adopts the Amazon EC2, S3 and EBS style of interfaces. Eucalyptus is an open-source infrastructure whose name stands for "Elastic Utility Computing Architecture Linking Your Programs To Useful Systems". You can build your own private cloud using the new Cloud APIs included in Ubuntu Linux 9.10 Karmic Koala, termed Ubuntu Enterprise Cloud (UEC). See the instructions in the InformationWeek article [Roll Your Own Ubuntu Private Cloud].)
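Because UEC/Eucalyptus speaks the Amazon EC2 API, the same client libraries work against it. Here is a minimal sketch using the boto Python library; the endpoint, port, path and credentials are placeholders for your own private cloud front-end:

```python
# Sketch: point an EC2 API client (boto) at a Eucalyptus/UEC private cloud
# instead of Amazon. Endpoint, port, path, and credentials are placeholders.
import boto
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="eucalyptus", endpoint="uec-front-end.example.com")
conn = boto.connect_ec2(
    aws_access_key_id="YOUR-ACCESS-KEY",
    aws_secret_access_key="YOUR-SECRET-KEY",
    is_secure=False,
    region=region,
    port=8773,                     # Eucalyptus' default API port
    path="/services/Eucalyptus",   # Eucalyptus' EC2-compatible API path
)

# Same calls you would make against Amazon EC2:
for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        print(instance.id, instance.state)
```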
The analyst went into specific Virtual Infrastructure (VI) and public cloud providers.
Private Clouds can be managed by VMware tools. For remote management of public IaaS clouds, there is [vCloud Express], and for SaaS, a new service called [VMware Go].
Citrix is the Open Service Champion. For private clouds based on Xen Server, they have launched the [Xen Cloud Project] to help manage them. For public clouds, they have [Citrix Cloud Center, C3], including an Amazon-based "Citrix C3 Labs" for developing and testing applications. For SaaS, they have [GoToMyPC] and [GoToAssist].
Amazon offers a set of Cloud computing capabilities called Amazon Web Services [AWS]. For virtual private clouds, use the AWS Management Console. For IaaS (Amazon EC2), use [CloudWatch] which includes Elastic Load Balancing.
If you prefer a common management system independent of cloud provider, or perhaps across multiple cloud providers, you may want to consider one of the "Big 4" instead. These are the top four system management software vendors: IBM, HP, BMC Software, and Computer Associates (CA).
A survey of the audience found the number one challenge was "integration": how to integrate new cloud services into an existing traditional data center. Whom do you trust to deliver tools for remote management of external cloud services? The survey showed:
28 percent: VI Providers (VMware, Citrix, Microsoft)
19 percent: Big 4 System Management software vendors (IBM, HP, BMC, CA)
13 percent: Public cloud providers (Amazon, Google)
40 percent: Other/Don't Know
For internal private on-premises clouds, the results were different:
40 percent: VI Providers (VMware, Citrix, Microsoft)
21 percent: Big 4 System Management software vendors (IBM, HP, BMC, CA)
13 percent: Emerging players (Eucalyptus)
26 percent: Other/Don't Know
Some final thoughts offered by the analyst: first, nearly a third of all IT vendors disappear after two years, and cloud providers will probably have a similar, if not worse, track record. Traditional server, storage and network administrators should not consider Cloud technologies a death knell for in-house on-premises IT. Companies should probably explore a mix of private and public cloud options.
Continuing my coverage of the 30th annual [Data Center Conference], here is a recap of Wednesday's breakout sessions.
Aging Data: The Challenges of Long-Term Data Retention
The analyst defined "aging data" to be any data that is older than 90 days. A quick poll of the audience showed the what type of data was the biggest challenge:
In addition to aging data, the analyst used the term "vintage" to refer to aging data that you might actually need in the future, and "digital waste" being data you have no use for. She also defined "orphaned" data as data that has been archived but not actively owned or managed by anyone.
You need policies for retention, deletion, legal hold, and access. Most people forget to include access policies. The audience was also polled on how people are dealing with data and retention policies.
The analyst predicts that half of all applications running today will be retired by 2020. Tools like "IBM InfoSphere Optim" can help with application retirement by preserving both the data and metadata needed to make sense of the information after the application is no longer available. App retirement has a strong ROI.
Another problem is that unstructured data keeps growing, but nobody is given the responsibility of "archivist" for this data, so it goes un-managed and becomes a "dumping ground". Long-term retention involves hardware, software and process working together. The reason that purpose-built archive hardware (such as IBM's Information Archive or EMC's Centera) has not taken off is that companies failed to pair it with the appropriate software and process to complete the solution.
Cloud computing will help. The analyst estimates that 40 percent of new email deployments will be done in the cloud, such as IBM LotusLive, Google Apps, and Microsoft Office 365. This offloads the archive requirement to the public cloud provider.
A case study was the University of Minnesota Supercomputing Institute, which has three tiers for its storage: 136TB of fast storage for scratch space, 600TB of slower disk for project space, and 640TB of tape for long-term retention.
The audience was also polled on what they are using today to hold their long-term retention data.
The bottom line is that retention of aging data is a business problem, a technology problem, an economic problem, and a 100-year problem.
A Case Study for Deploying a Unified 10G Ethernet Network
Brian Johnson from Intel presented the latest developments in 10Gb Ethernet. Case studies from Yahoo and NASA, both members of the [Open Data Center Alliance], found that upgrading from 1Gb to 10Gb Ethernet was more than just an improvement in speed. Other benefits include:
45 percent reduction in energy costs for Ethernet switching gear
80 percent fewer cables
15 percent lower costs
doubled bandwidth per server
Ruiping Sun, from Yahoo, found that 10Gb FCoE achieved 920 MB/sec, which was 15 percent faster than the 8Gb FCP they were using before.
IBM, Dell and other Intel-based servers support Single Root I/O Virtualization, or SR-IOV for short. NASA found that cloud-based HPC is feasible with SR-IOV: using IBM General Parallel File System (GPFS) and 10Gb Ethernet, it was able to replace a previous environment based on 20Gbps DDR InfiniBand.
While some companies are still arguing over whether to implement a private cloud, an archive retention policy, or 10Gb Ethernet, other companies have shown great success moving forward!
Continuing my coverage of last week's Data Center Conference 2009, held Dec 1-4 in Las Vegas, I attended an interesting session on the battles between Linux, UNIX, Windows and other operating systems. Of course, the battle is no longer just between general-purpose operating systems; there are also thin appliances and "Meta OS" platforms such as cloud or Real Time Infrastructure (RTI).
One big development is "context awareness". For the most part, Operating Systems assume they are one-to-one with the hardware they are running on, and Hypervisors like PowerVM, VMware, Xen and Hyper-V have worked by giving OS guests the appearance that this is the case. However, there is growing technology for OS guests to be "aware" they are running as guests, and to be aware of other guests running on the same Hypervisor.
The analyst divided up Operating Systems into three categories:
Web/Infrastructure
Operating systems that are typically used to support other OS by offering Web support or other infrastructure. Linux on POWER was an example given.
DBMS/Industry Vertical Applications
Operating systems that are strong for Data Base Management Systems (DBMS) and vertical industry applications. z/OS, AIX, HP-UX, HP NonStop, HP OpenVMS were given as examples.
General Purpose for a variety of applications
Operating systems that can run a range of applications, from Web/Infrastructure, DBMS/Vertical Apps, to others. Windows, Linux x86 and Solaris were offered as examples.
The analyst indicated that what really drove the acceptance or decline of Operating Systems were the applications available. When Software Development firms must choose which OS to support, they typically have to evaluate the different categories of marketplace acceptance:
For developing new applications: Windows-x86 and Linux-x86 are must-haves now
Declining but still valid are UNIX-RISC and UNIX-Itanium platforms
Viable niche are Non-x86 Windows (such as Windows-Itanium) and non-x86 Linux (Linux on POWER, Linux on System z)
Entrenched Legacy including z/OS and IBM i (formerly known as i5/OS or OS/400)
For the UNIX world, there is a three-legged stool. If any leg breaks, the entire system falls apart.
The CPU architecture: Itanium, SPARC and POWER based chipsets
Operating System: AIX, HP-UX and Solaris
Software stacks: SAP, Oracle, etc.
Of these, the analyst considered IBM POWER running AIX to be the safest investment. For those who prefer HP Integrity, consider waiting for "Tukwila", the codename for the new Itanium chipset due in 2Q2010. For Sun SPARC, the European Union (EU) delay in approving Oracle's acquisition of Sun could impact user confidence in this platform. The future of SPARC now remains in the hands of Fujitsu and Oracle.
What platform will the audience invest in most over the next 5 years?
45 percent Windows
14 percent UNIX
37 percent Linux
4 percent z/OS
A survey of the audience about current comfort level of Solaris:
10 percent: still consider Solaris to be Strategic for their data center operations and will continue to use it
25 percent: will continue to use Solaris, but in more of a tactical way on a case-by-case basis
30 percent: have already begun migrating away
35 percent: Do not run Solaris
The analyst mentioned Microsoft's upcoming Windows Server 2008 R2, which will run only on 64-bit hardware but support both 32-bit and 64-bit applications. It will provide scalability up to 256 processor cores. Microsoft wants Windows to get into the High Performance Computing (HPC) marketplace, but this is currently dominated by Linux and AIX. The analyst's advice to Microsoft: System Center should manage both Windows and Linux.
Has Linux lost its popularity? The analyst indicated that companies are still running mission-critical applications on non-Linux platforms, primarily z/OS, Solaris and Windows. What does help Linux are old UNIX legacy applications, the existence of OpenSolaris x86, Oracle's Enterprise Linux, VMware and Hyper-V support for Linux, Linux on the System z mainframe, and other legacy operating systems that are growing obsolete. One issue cited with Linux is scalability: performance on systems with more than 32 processor cores is unpredictable. More mature operating systems like z/OS and AIX have stronger support for high-core-count environments.
A survey of the audience of which Linux or UNIX OS were most strategic to their operations resulted in the following weighted scores:
140 points: Red Hat Linux
80 points: Solaris
71 points: AIX
41 points: Novell SUSE Linux
40 points: HP-UX
29 points: Other
19 points: Oracle Enterprise Linux
The analyst wrapped up with an incredibly useful chart that summarizes the key reasons companies migrate from one OS platform to another:
Reduce Costs, Adopt HPC
DBMS, Complex projects
Availability of Admin Skills
Performance, Mission Critical Applications
Availability of Apps, leave incumbent UNIX server vendor
Consolidation, Reduce Costs
Certainly, all three types of operating system have a place, but there are definite trends and shifts in this marketspace.
These disk capacities can deliver up to 25x effective capacity with IBM's HyperFactor in-line deduplication capability, so the smallest 7TB model could be as effective as 175TB of traditional disk storage.
IBM Tivoli Storage Manager (TSM) v6
After years and years in development, IBM announces [TSM v6]. Here's a quick summary of the key features:
DB2 instead of an internal database
For years, people have complained that IBM used its own internal relational database. This was because when TSM was first launched back in 1993, DB2 did not have all the features TSM needed on all of the various server platforms. Today, DB2 is the leading relational database on all the key platforms that the TSM server runs on, and is therefore a good fit for use within Tivoli Storage Manager. If you don't already have DB2, it is included for use with TSM v6.1 at no additional charge. Do you have to become a DB2 expert to use TSM? No! The TSM administration commands have been updated to hide all the complexity of DB2 behind the scenes. You just use TSM commands to administer the database, as you did before. IBM will provide conversion utilities to help existing TSM customers migrate to this new database environment.
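For example, checking on the database remains a plain TSM administrative command from the dsmadmc command line. A quick sketch (the administrator ID, password and device class name here are placeholders):

    # Query the TSM database -- no DB2 tools or SQL knowledge required
    dsmadmc -id=admin -password=secret "query db format=detailed"

    # Database backups are likewise ordinary TSM commands
    dsmadmc -id=admin -password=secret "backup db devclass=LTOCLASS type=full"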
Better Operational Reporting
Another big complaint was that TSM had fixed reporting, and administrators who wanted customized reports often had to resort to purchasing third-party products. With the changeover to DB2, TSM now enables you to create your own reports using Eclipse's Business Intelligence and Reporting Tools [BIRT]! If you haven't used BIRT, you can download a free open source copy and start playing around with its capabilities. This is combined with a revamped GUI that provides a customizable dashboard using IBM's Integrated Solutions Console (ISC) infrastructure.
Lastly, IBM has incorporated deduplication capability within the TSM v6.1 software for its own disk storage pools. This is done in a post-process manner so as to dedupe all of your legacy backup data as well, not just the new stuff, without impacting current TSM server performance.
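As a sketch of how this looks from the administrative command line (the storage pool name and duration are placeholders), you enable deduplication on a FILE-type disk storage pool and then kick off the post-process identification:

    # Enable deduplication on an existing FILE-type disk storage pool
    dsmadmc -id=admin -password=secret "update stgpool FILEPOOL deduplicate=yes"

    # Run the post-process duplicate identification for up to 60 minutes
    dsmadmc -id=admin -password=secret "identify duplicates FILEPOOL duration=60"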
At this point, you might be thinking "Wait, what about IBM TS7650 ProtecTIER deduplication?" which is really two questions.
Can I use TSM v6.1 with IBM TS7650 ProtecTIER?
Yes. However, since the TSM progressive incremental method is vastly more efficient than other backup products like Veritas NetBackup or EMC Legato NetWorker, the TS7650 may only get 10x reduction of TSM backups, versus up to 25x with full-backups-every-night schemes. TSM only dedupes its disk storage pools, so it won't dedupe data directed at tape systems like the TS7650 or other tape libraries. This avoids the "double dedupe" concern.
When should I use TSM's software version versus TS7650's hardware deduplication?
This is a positioning question. For now, the cut-over point is about 10TB of backup processing per night. If you back up more than 10TB per night, TS7650 hardware may be the better approach. If you are a smaller customer nowhere near that volume of data, then TSM v6.1 software deduplication may be a more cost-effective solution. If you start small and grow beyond 10TB per night, it is easy to bring a TS7650 into an existing TSM environment and migrate the data over.
If you run the TSM server on a logical partition (LPAR) or virtual guest OS under VMware ESX, Xen or Microsoft's Hyper-V environment, why should you have to license it for the whole box? With TSM v6.1, you can now pay for only the amount of processors you use, down to a single core. If you currently run TSM v5 on z/OS, you can migrate over to TSM v6.1 server for Linux on System z to take advantage of cost savings using IFL engines.
IBM Tivoli Key Lifecycle Manager (TKLM) v1.0
Don't let the "v1.0" scare you, this is the successor to IBM's Encryption Key Manager (EKM) that hasthousands of clients using today with IBM encrypting tape drives. The new TKLM adds support for full disk encryption (FDE) drives--like those for the DS8000 I mentioned in [yesterday's post]--as well as new features to support key rotation for compliance and business controls.
IBM Tivoli Storage Productivity Center
Last, but not least, we have IBM Tivoli Storage Productivity Center [TSPC]. No, that is not a typo. IBM is renaming IBM TotalStorage Productivity Center to Tivoli Storage Productivity Center to avoid trademark conflicts with the [Professional Golfer's Association].
This is not just a renaming of the existing product. Here are some key improvements:
TSPC brings back together Productivity Center Standard Edition (Disk, Tape, SAN and Data) with Productivity Center for Replication, which were separated at birth a few years ago.
TSPC adds support for IBM's Storage Enterprise Resource Planner [SERP] from the NovusCG acquisition.
End-to-end view for EMC storage devices connected to supported servers via EMC Powerpath multipathing driver. As customers switch away from EMC Control Center over to IBM's Productivity Center, IBM can continue to provide support for existing EMC gear.
Of course, IBM will still offer the IBM System Storage Productivity Center [SSPC], which is a piece of hardware pre-installed with Productivity Center software.
Hopefully, you can now see why I had to split up all these announcements into separate posts across multiple days!
Continuing my week's theme on the XO laptop from the One Laptop Per Child [OLPC] foundation, I successfully managed to emulate my XO on another system.
Part of what is attractive about the XO laptop is the hardware: the high-resolution 200dpi screen, the clever screen that rotates and folds flat into an eBook reader, and the water-tight, dust-proof keyboard. The other part is the software: how they managed to pack an entire operating system, with useful applications, into a 1GB NAND flash drive.
The drawback for developers like me is the risk of changing something that breaks the system. For example, my first attempt to create my own activity resulted in a blank space in my action bar, and my journal went into some infinite loop, blinking as if it were still loading for minutes on end. I fixed it by deleting the activity I created and rebooting.
To get around this, I successfully ran the disk image under the virtual machine software for Linux called Qemu. This is an open source offering, with a proprietary add-on accelerator called Kqemu. Here were the steps involved:
Base Operating System
Qemu is now available to run on Linux, Windows and OS X-Intel. I have an Ubuntu 7.04 "Feisty Fawn" version of Linux installed on my system from a project I did last year, so I decided to use that.
Normally, "apt-get install qemu" would be enough, but I wanted to get the latest release, so I downloaded the [0.9.0 version]tarball of compiled binaries. Note that trying to compile Qemu from source requiresa downlevel gcc-3.x compiler, and my attempts to do this failed. The compiled binariesworked fine.
The Kqemu author hasn't packaged this for distribution, so I downloaded the source code and did my own compile. The "configure-make-install" sequence works with the regular gcc 4.1 compiler, and it went smoothly.
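In case you want to try it yourself, the sequence was the classic one (the tarball version number below is just the one current at the time; yours may differ):

    tar xzf kqemu-1.3.0pre11.tar.gz
    cd kqemu-1.3.0pre11
    ./configure
    make
    sudo make install
    sudo modprobe kqemu    # load the accelerator module when done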
Getting Kqemu active was a bit of a challenge. I had to make sense of Nando Florestan's [Installing Kqemu in Ubuntu] article, and the subsequent comments that followed.
There is a tiny [8MB Linux image] that can be used to verify that Kqemu is activated correctly.
The Disk Image
As with other development efforts, there are the older stable versions, and the bleeding-edge development versions. I chose the 650 build from the [Ship.2 stable versions], which matches the version on my XO laptop. The image comes as a *.bz2, a highly-compressed file. Using "bunzip2", the 221MB file expands to something like 932MB.
I renamed the resulting file to "build650.img"
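In command form, assuming the downloaded file name, that was simply:

    bunzip2 os-650.img.bz2        # the 221MB download expands to ~932MB
    mv os-650.img build650.img    # rename for the launch script below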
Once I got all this done, I then made a simple script "launch" in my /home/tpearson/bin directory, along these lines (the exact options in my original script may have differed slightly):
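    #!/bin/sh
    # Launch the XO disk image under Qemu; the option names are per Qemu 0.9.0,
    # but the memory size and network model are my assumptions
    qemu -hda "$1" -m 256 -kernel-kqemu -full-screen -net nic -net user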
Then "launch build650.img" was all I needed to run the emulation. The full-screen mode helpsemulate the view on XO laptop. I was able to change the jabber server to "xochat.org" and see otherXO laptops online on my neighborhood view.
When running under Qemu, you can't just press Ctrl-Alt-something. For example, Ctrl-Alt-Erase on the XO reboots the Sugar interface; do this on a Linux system, however, and it reboots your native X interface, blowing away everything. Instead, you press Ctrl-Alt-2 to get to the Qemu console, designated by the (qemu) prompt, and then type the equivalent sendkey command; the key name below is my best guess at the mapping for the XO's Erase key:
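    sendkey ctrl-alt-backspace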
Press "Ctrl-Alt-1" followed by "Ctrl-Alt" to get back to the emulated XO screen.
With this emulation, I am more likely to try new things, change files around, edit system files, and so on, without worrying about rendering my actual XO laptop unusable. Once debugged, I can then work on moving changes over to my XO, one at a time.
Well, it's Tuesday again, and that means more IBM announcements!
Today, IBM announced the enhanced IBM System Storage DS3200 disk system. It is part of our DS3000 series: the DS3200 is SAS-attach, the DS3300 is iSCSI-attach, and the DS3400 is FC-attach. All of them support up to 48 drives, which can be a mix of SAS and SATA drives.
The DS3200 supports the following operating environments (see IBM's [Interop Matrix] for details):
Linux (both Linux-x86 and Linux on POWER)
With today's announcements, the DS3200 can be used to boot from, as well as contain data. This is ideal to combine with IBM BladeCenter. With the IBM BladeCenter you can have 14 blades, either x86 or POWER based processors, attached to a DS3200 via SAS switch modules in the back of the chassis.
Let's take an example of how this can be used for a Scale-Out File Services [SoFS] deployment.
First, we start with servers. We could use three [IBM System x3650] servers, but this would use up all six of the DS3200's direct-attach ports. Instead, we'll choose the [BladeCenter H chassis], with three HS21 blades for SoFS, which leaves us eleven empty blade slots for a management node or other blades to run applications.
SAS connectivity modules
The IBM BladeCenter [SAS Connectivity Module] allows the blade servers to connect to a DS3200. Two of them fit right in the back of the BladeCenter chassis, providing full redundancy without consuming additional rack space.
DS3200 and EXP3000 expansion drawers
We'll have one DS3200 controller with twelve internal drives, and three EXP3000 expansion drawers with twelve drives each, for a total of 48 drives. Using 1TB SATA drives, this would be 48TB raw capacity.
The end result? You get a 48TB NAS scalable storage solution, supporting up to 7500 concurrent CIFS and NFS users, with up to 700 MB/sec on large block transfers. By using BladeCenter, you can expand performance by adding more blades to the chassis, or dedicate some blades to run SAP or Oracle RAC with direct read/write access to the SoFS data.
Just another example of how IBM can bring together all the components of a solution to provide customer value!
Fellow blogger Chuck Hollis from EMC has a post titled [Whither Frankenstorage] causing quite a stir in the [Stor-o-Sphere]. He is not the first EMC blogger to use this phrase; I credit [BarryB] with coining the term back in September 2008. Frankenstein serves as the ideal icon for EMC's FUD machine. In the novel, Dr. Frankenstein was attempting to do something nobody had ever attempted: to create human life from various dead body parts, a process full of uncertainty and doubt, with frightful results.
Perhaps it was a coincidence that I discussed IBM's storage strategy in my post [Foundations and Flavorings] on January 28, shortly followed by NetApp announcing V-series gateway [support of Texas Memory Systems' RamSan-500] on February 3. These two events might have been the trigger that pushed ChuckH over the edge to put pen to paper... er, finger to keyboard.
Flinging FUD in all directions was ChuckH's not-so-subtle way to remind the world that EMC is the only major storage vendor not to offer a successful storage virtualization product. Without first-hand experience with well-designed storage virtualization, ChuckH conjectures that a configuration matching intelligent front-ends to reliable back-ends might be more expensive, might be more difficult to manage, or might be harder to support.
(Note: Rest assured, IBM can demonstrate that a modular approach, combining intelligent front-ends with reliable back-ends, can help reduce costs, be easier to manage, and be fully supported. Contact your local IBM Business Partner or storage sales rep for details.)
My favorite was from Nigel Poulton's post on [Ruptured Monkey]. Here's an excerpt:
In fact, I'm fairly certain that EMC don't back away from customers who run HP or IBM servers and say "sorry we cant help you here, an end to end HP or IBM solution would be much better for you when it comes to troubleshooting……. putting our storage in would only add extra layers of complexity and make things messy….."
On most other days, ChuckH has well-written, insightful blog posts that show that EMC brings some value to the industry. I could have made a snarky reference to [Dr Jekyll and Mr Hyde], or suggested this post proves that nobody at EMC is editing or reviewing Chuck's thoughts before they get posted. But it's too late; Chuck already got the message, and added the following to bring the discussion back to civility:
When considering the broad range of storage media service levels available today (flash, FC, SATA, spin-down, etc.) what's the best way to offer these media choices in an array? Is the answer (a) combine smaller arrays from different vendors together behind a virtualization head, or (b) invest the time and effort to build arrays that can directly support all of these media types?
Would anyone like to try a cogent response to the question posed, please?
To address ChuckH's question, Nigel's post gave me the idea to use today's celebration of the 200th anniversary of [Charles Darwin]'s birth.
Over millions of years, Charles Darwin argued, evolution results in changes in the inherited traits of a population of organisms from one generation to the next. A key component of this is a biological process called [mitosis] that allows a single cell to split and become two cells. In some cases, these individual daughter cells can then specialize for specific functions, such as nerve cells, muscle cells or bone cells. Over time, adaptations that work well carry forward, and those that don't get left behind.
I find it interesting that before [On the Origin of Species] was published in 1859, works of fiction like Mary Shelley's[Frankenstein] had monsters being"created", and afterward, monsters were the result of mutation or selective adaptation.
Nigel compares EMC's monolithic approach to placing an intelligent front-end with a reliable back-end as a "one man band, where one guy is trying to play all the instruments himself" versus the "Philharmonic Orchestra". I would take it one step further, comparing single-cell organisms to multi-cell life forms.
Innovative companies like Google and Amazon can't wait for a completely integrated solution from a major IT vendor to meet their needs. Why should they? There are open standards, and ways to interconnect the best intelligence into a [dynamic infrastructure®]. You don't need to wait another million years to see which way the IT marketplace considers the better approach. Just look at the last 60 years. Back then, computer systems were all integrated: server, storage, and the wires that connected them were all inside one huge container. Then, mitosis happened, and IBM created external tape storage in 1952, and external disk storage in 1956. Open standards for interfaces allowed third-party manufacturers like HDS, StorageTek and EMC to offer plug-compatible storage devices.
On the server side, it didn't take long for functionality in mainframes to split off. Mitosis happened again, with front-end UNIX systems processing incoming data, and mainframes handling the back-end databases and printing. The client-server era replaced dumb terminals with more intelligent desktops and workstations, which could handle the front-end processing to display information, with the back-end storage and number-crunching handled by the UNIX and mainframe systems they connected to. Connections between desktops and servers, and from servers to storage, have also evolved, from thousands of direct-attach cables to networks of switches and directors.
Charles Darwin was particularly interested in cases where evolution happened faster or slower than usual. While IBM and Microsoft encouraged third-party innovations on the PC side, Apple resisted mitosis, trying to keep its machines pure single-cell, integrated solutions. For the same reasons that you can't fight the laws of nature, Apple ended up having to support I/O ports to external devices. Thanks to open standards like USB and FireWire, you can connect third-party storage to Apple computers. My little Mac Mini at home has more devices hanging off it than any of my Windows or Linux boxes! And Apple's iPod is successful because its iTunes software runs on both Windows and Mac OS operating systems.
Every time mitosis happens in the IT industry, it opens up opportunities to specialize, to innovate, to adapt to a dynamically changing world. When mitosis is suppressed, you get limiting products and frustrated engineers leaving to form their own start-up companies. But when mitosis is encouraged, you get successful products, solutions and partnerships positioned for a smarter planet.
Earlier this year, IBM mandated that every employee provided with a laptop had to implement Full-Disk Encryption for their primary hard drive, and for any other drive, internal or external, that contained sensitive information. An exception was granted to anyone who NEVER took their laptop out of the IBM building. At IBM Tucson, we have five buildings, so if you are in the habit of taking your laptop from one building to another, encryption is required!
The need to secure the information on your laptop has existed ever since laptops were given to employees. In my blog post [Biggest Mistakes of 2006], I wrote the following:
"Laptops made the news this year in a variety of ways. #1 was exploding batteries, and #6 were the stolen laptops that exposed private personal information. Someone I know was listed in one of these stolen databases, so this last one hits close to home. Security is becoming a bigger issue now, and IBM was the first to deliver device-based encryption with the TS1120 enterprise tape drive."
Not surprisingly, IBM laptops are tracked and monitored. In my blog post [Using ILM to Save Trees], I wrote the following:
"Some assets might be declared a 'necessary evil' like laptops, but are tracked to the n'th degree to ensure they are not lost, stolen or taken out of the building. Other assets are declared "strategically important" but are readily discarded, or at least allowed to [walk out the door each evening]."
Unfortunately, dual-boot environments won't cut it for Full-Disk Encryption. For Windows users, IBM has chosen Pretty Good Privacy [PGP]. For Linux users, IBM has chosen Linux Unified Key Setup [LUKS]. PGP doesn't work with Linux, and LUKS doesn't work with Windows.
For those of us who may need access to both Operating Systems, we have to choose. Select one as the primary OS, and run the other as a guest virtual machine. I opted for Red Hat Enterprise Linux 6 as my primary, with LUKS encryption, and Linux KVM to run Windows as the guest.
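Setting up the Windows guest under KVM boils down to a single virt-install command. A sketch (the guest name, memory size, disk path and ISO path are purely illustrative):

    sudo virt-install --name windows-guest --ram 2048 \
        --disk path=/vm/windows-guest.img,size=40 \
        --cdrom /iso/windows-install.iso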
I am not alone. While I chose the Linux method voluntarily, IBM has decided that 70,000 employees must also set up their systems this way, switching them from Windows to Linux by year end, but allowing them to run Windows as a KVM guest image if needed.
Let's take a look at the pros and cons:
LUKS allows for up to 8 passphrases, so you can give one to your boss, one to your admin assistant, and in the event they leave the company, you can disable their passphrase without impacting anyone else or having to memorize a new one. PGP on Windows supports only a single passphrase.
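Passphrase management is done with cryptsetup against the encrypted partition (the device name below is an assumption; check what yours is first):

    # Add a passphrase to a free key slot (prompts for an existing passphrase first)
    sudo cryptsetup luksAddKey /dev/sda2

    # Revoke just that passphrase later, leaving the others intact
    sudo cryptsetup luksRemoveKey /dev/sda2

    # See which of the eight key slots are in use
    sudo cryptsetup luksDump /dev/sda2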
Linux is a rock-solid operating system. I found that Windows as a KVM guest runs better than running it natively in a dual-boot configuration.
Linux is more secure against viruses. Most viruses run only on Windows operating systems. The Windows guest is well isolated from the Linux operating system files. Recovering from an infected or corrupted Windows guest is merely re-cloning a new "raw" image file.
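As a sketch, with hypothetical image paths and guest name, recovery is little more than:

    # Stop the infected guest, replace its disk with the pristine master copy,
    # and start it up again
    virsh destroy windows-guest
    cp /vm/golden/windows-guest.img /vm/active/windows-guest.img
    virsh start windows-guest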
Linux has a vibrant community of support. I am very impressed that anytime I need help, I can find answers or assistance quickly from other Linux users. Linux is also supported by our help desk, although in my experience, not as well as the community supports it.
Employees that work with multiple clients can have a separate Windows guest for each one, preventing any cross-contamination between systems.
On the downside, Linux is different from Windows, and some learning curve is required. Not everyone is happy with this change.
(I often joke that the only people who are comfortable with change are babies with soiled diapers and prisoners on death row!)
Implementation is a full re-install of Linux, followed by a fresh install of Windows.
Not all software required for our jobs at IBM runs on Linux, so a Windows guest VM is a necessity. If you thought Windows ran slowly on a fully-encrypted disk, imagine how much slower it runs as a VM guest with limited memory resources.
In theory, I could have tried the Windows/PGP method for a few weeks, then gone through the entire process to switch over to Linux/LUKS, and then draw my comparisons that way. Instead, I just chose the Linux/LUKS method, and am happy with my decision.
Many people have asked me if there was any logic with the IBM naming convention of IBM Systems branded servers. Here's your quick and easy cheat sheet:
System x -- "x" for cross-platform architecture. Technologies from our mainframe and UNIX servers were brought into chips that sit next to the Intel or AMD processors to provide a more reliable x86 server experience. For example, some models have a POWER processor-based Remote Supervisor Adapter (RSA).
System p -- "p" for POWER architecture.
System z -- "z" for Zero-downtime, zero-exposures. Our lawyers prefer "near-zero", but this is about as close as you get to ["six-nines" availability] in our industry, with the highest level of security and encryption, no other vendor comes close, so you get the idea.
But what about the "i" for System i? Officially, it stands for "Integrated" in that it could integrate different applications running on different operating systems onto a [COMMON] platform. Options were available to insert Intel-based processor cards that ran Windows, or attach special cables that allowed separate System x servers running Windows to attach to a System i. Both allowed Windows applications to share the internal LAN and SAN inside the System i machine. Later, IBM allowed [AIX on System i] and [Linux on Power] operating systems to run as well.
From a storage perspective, we often joked that the "i" stood for "island", as most System i machines used internal disk, or attached externally to only a few selected models of disk from IBM and EMC that had special support for i5/OS using a special, non-standard 520-byte disk block size. This meant only our popular IBM System Storage DS6000 and DS8000 series disk systems were available. This block size requirement only applies to disk; for tape, i5/OS supports both IBM TS1120 and LTO tape systems. For the most part, System i machines stood separate from the mainframe and the rest of the Linux, UNIX and Windows distributed servers on the data center floor.
Often, when I am talking to customers, they ask when product xyz will be supported on System z or System i. I explain that IBM's strategy is not to make all storage devices connect via ESCON/FICON or support non-standard block sizes, but rather to get the servers to use the standard 512-byte block size, Fibre Channel, and other standard protocols. (The old adage applies: if you can't get Mohamed to move to the mountain, get the mountain to move to Mohamed.)
On the System z mainframe, we are 60 percent there, allowing three of the five operating systems (z/VM, z/VSE and Linux) to access FCP-based disk and tape devices. (Four out of six if you include [OpenSolaris for the mainframe].) But what about System i? As the characters on the popular television show [LOST] would say: it's time to get off the island!
Last week, IBM announced the new [i5/OS V6R1 operating system] with features that will greatly improve the use of external storage on this platform. Check this out:
POWER6-based System i 570 model server
Our latest, most powerful POWER processor, brought to the System i platform. The 570 model will be the first in the System i family of servers to make use of the new processing technology, using up to 16 (sixteen!) POWER6 processors (running at 4.7GHz) in each machine. The advantage of the new processors is the increased Commercial Processing Workload (CPW) rating, 31 percent greater than the POWER5+ version and 72 percent greater than the POWER5 version. CPW is the "MIPS" or "TeraFlops" rating for comparing System i servers. Here is the [Announcement Letter].
Fibre Channel Adapter for System i hardware
That's right, these are [Smart IOAs], so an I/O Processor (IOP) is no longer required! You can even boot the Initial Program Load (IPL) directly from SAN-attached tape. This brings System i into the 21st century for Business Continuity options.
Virtual I/O Server (VIOS)
[Virtual I/O Server] has been around for System p machines, but is now available on System i as well. This allows multiple logical partitions (LPARs) to share resources like Ethernet cards and FCP host bus adapters. In the case of storage, the VIOS handles the 520-byte to 512-byte conversion, so that i5/OS systems can now read and write to standard FCP devices like the IBM System Storage DS4800 and DS4700 disk systems.
IBM System Storage DS4000 series
Initially, we have certified the DS4700 and DS4800 disk systems to work with i5/OS, but more devices are planned. This means that you can now share your DS4700 between i5/OS and your other Linux, UNIX and Windows servers, and take advantage of a mix of FC and SATA disk capacities, RAID6 protection, and so on.
To call [IBM PowerVM] the "VMware for the POWER architecture" would not do it quite justice. In combination with VIOS, IBM PowerVM is able to run a variety of AIX, Linux and i5/OS guest images.The "Live Partition Mobility" feature allows you to easily move guest images from one system to another, while they are running, just like VMotion for x86 machines.
And while we are on the topic of x86, PowerVM is also able to present a Linux-x86 emulation base to run x86-compiled applications. While many Linux applications could be re-compiled from source code for the POWER architecture "as is", others required perhaps 1-2 percent modification to port them over, and that was too much for some software development houses. Now, we can run most x86-compiled Linux application binaries in their original form on POWER architecture servers.
BladeCenter JS22 Express
The POWER6-based [JS22 Express blade] can run i5/OS, taking advantage of PowerVM and VIOS to access all of the BladeCenter resources. The BladeCenter lets you mix and match POWER and x86-based blades in the same chassis, providing the ultimate in flexibility.
I can't believe I have been blogging for a year now!
I have Jennifer Jones from IBM to thank for getting this started. She was my predecessor in the job I have now, and as she was moving on to bigger and better things, she suggested during the transition that we start a blog, podcast, or something similar. While there are many blogs and podcasts inside the firewall of IBM, I wanted something accessible to all of our IBM sales team, IBM Business Partners, and existing and prospective clients, with comments enabled to allow two-way communication. Podcasts are very one-way, so we chose a blog instead. Getting it set up took a while: convincing our own management that this was worthwhile, and working with our legal department on the IBM blogging guidelines of what we can and cannot write about. We finally got it going last year, launching September 1, just in time for our 50 years of disk systems innovation campaign.
It has been a wild ride, a great learning experience, and has proven quite fulfilling for job satisfaction. Here are some observations and lessons I have learned along the way.
Roller is the open source blog server that drives Sun Microsystems' blogs.sun.com employee blogging site, the IBM developerWorks blogs that this blog exists on, thousands of internal blogs at IBM Blog Central, the JRoller Java community site, and hundreds of others world-wide. While there might be fancier blog systems elsewhere that I could have chosen, hosting my blog with IBM developerWorks seemed like a good choice. I can access it from any web-browser-capable machine, and enter my blog posts in native HTML, which I develop in the tool itself, or offline with a standard basic text editor like Microsoft Notepad that I can then cut-and-paste back in.
One lesson I learned the hard way was that Roller generates the Permalink URL for each blog post based on the first five words of the title. For that reason, it is important to choose an appropriate and unique title, avoiding punctuation, quotation marks, or pharmaceutical "enhancement products" that might get rejected by SPAM filters. Once chosen, you can't change the title afterwards, as it won't match the Permalink anymore. My blog post "Aperi is (enhancement product) for SMI-S" caused no end of grief to our Press Release team.
Writing blog posts in native HTML is not as hard as it sounds. I am limited to hosting a maximum of 24MB of files, and they can only be jpg, jpeg, gif, png, mp3, pdf or ppt format. So, wherever possible, I point to other websites for content. For those new to blogging, I recommend The Barebones Guide to HTML.
Roller also generates for me a spreadsheet of all my page views for the week. Tracking blog traffic closely is as crazy as checking your company's stock price every day. These "web-stat" e-mails get filed directly into my Bacn folder on Lotus Notes.
In my early advice to bloggers, I mentioned my choice of Bloglines as my RSS feed reader. When I subscribe to a new blog, I specify Full entries, not Partial, which allows me to scan it quickly, but filters out much of the non-text content like videos. It also allows me to see what my own blog posts look like from within a reader, so that I can write them appropriately.
I find it valuable to read other blogs, including those written by employees of our toughest competitors. Even if you don't blog yourself, following blogs can be extremely valuable. Be careful what you leave as comments on other blogs; they may come back to haunt you later.
Currently, I track 55 blogs, some about storage, marketing, Web 2.0 issues, Second Life, Linux, or other areas of interest. I prefer blogs that make only 1-5 posts per week, so blogs like LifeHacker and LifeRemix are off my Bloglines list, but are excellent resources when I am searching for something specific. If you think 55 is a lot of blogs, consider Timothy Ferriss' post on How Robert Scoble reads 622 RSS feeds each morning.
I have quite an international readership, so I have to be careful using American idioms and pop culture references. For example, in my blog post IBM acquires Softek, I mentioned "shotgun weddings" and had various responses asking what exactly that meant, all from readers outside the USA. I've learned that sometimes you need to link to an American slang dictionary or Wikipedia entry to explain these terms and phrases.
Technorati currently tracks over 100 million blogs and over 250 million pieces of tagged social media. Getting my blog tracked had some issues. You have to join, then post a "claim" on your own blog. My mistake was having a case-sensitive URL with a mix of upper and lower case letters, when Technorati prefers all lower case. IBM worked with Technorati to get this resolved.
Del.icio.us is a social bookmarking website -- the primary use is to store your bookmarks online, which allows you to access the same bookmarks from any computer and add bookmarks from anywhere, too. On del.icio.us, you can use tags to organize and remember your bookmarks, which is a much more flexible system than folders.
I use the Firefox, Safari, Dillo and Internet Explorer web browsers, so it is nice that I have access to all my bookmarks in the same consistent manner. When I see content on a website that I might like to reference later in a blog, I tag it with del.icio.us so that I can get to it later.
Fellow GTD-ers will quickly recognize this acronym, but for the rest of you, it refers to David Allen's book "Getting Things Done®". This is a great book! I learned about it reading other people's blogs, and found it incredibly useful in helping me organize my time. There are various online tools available to help employ this method. I use Lotus Connections Activities for group projects with co-workers at IBM, and BackPack for projects with my friends outside of work.
The success of YouTube encouraged IBM to launch IBM TV, a portal for IBM's video and multimedia assets, to make it easier for IBM employees, customers, partners and prospects to access and view IBM multimedia. The plan is to have eight anchor episodes per year, professionally hosted by TV personality Joe Washington, pointing to related offers and other resources for viewers to learn more.
Blogging also introduced me to Second Life. I asked around if anyone else within IBM was using Second Life, and discovered quite a few people were. I got invited to join our internal Eightbar group, and participated in various events, including an IBM Holiday party that I discussed in my blog post "Building a Snowman in Second Life".
In April, we had a launch of our newest products in Second Life, and we plan to have two more Second Life events, September 20 and another in November, staged as "Meet the Experts" question-and-answer panels.
I wrap up with Facebook. Actually, whereas most of my Web 2.0 efforts have been work-related, I have quite a few friends and family who follow my blog. Several were inspired to start their own blogs, such as Passages from Pam, and Barry Whyte on Storage Virtualization. Bridging the gap is Facebook, something I can use to keep tabs on my friends, as well as my storage industry-related contacts.
Wow, that's quite a lot in one year. Well, I am done with my meetings down here in Sao Paulo, Brazil. My colleagues and I are returning tonight to enjoy the long Labor Day weekend.
(FTC Disclosure: I do not work for, nor have any financial investments in, ENC Security Systems. ENC Security Systems did not pay me to mention them on this blog. Their mention in this blog is not an endorsement of either their company or any of their products. Information about EncryptStick was based solely on publicly available information and my own personal experiences. My friends at ENC Security Systems provided me a full-version pre-loaded stick for this review.)
The EncryptStick software comes in two flavors, a free/trial version and a full/paid version. The free trial version has [limits on capacity and time] but provides enough of a glimpse of the product to decide before you buy the full version. You can download the software yourself and put it on your own USB device, or purchase the pre-loaded stick that comes with the full-version license.
Whichever you choose, the EncryptStick offers three nice protection features:
Encryption for data organized in "storage vaults", which can be either on the stick itself, or on any other machine the stick is connected to. That is a nice feature, because you are not limited to the capacity of the USB stick.
Encrypted password list for all your websites and programs.
A secure browser, which protects against any key-logging or other malware that might be on the host Windows machine.
I have tried out all three functions and everything works as advertised. However, there is always room for improvement, so here are my suggestions.
The first problem is that the pre-loaded stick looks like it is worth a million dollars. It is in a shiny bronze color with "EncryptStick" emblazoned on it. This is NOT subtle advertising! This 8GB stick looks worth stealing as a nice piece of jewelry alone, and the added bonus that there might be "valuable secrets" on it makes that possibility even more likely.
If you want to keep your information secure, it would help to have "plausible deniability" that there is nothing of value on a stick. Either have some corporate logo on it, or have the stick look like a cute animal, like these pig or chicken USB sticks.
It reminds me how the first Apple iPods were in bright [Mug-me White]. I use black headphones with my black iPod to avoid this problem.
Of course, you can always install the downloadable version of EncryptStick software onto a less conspicuous stick if you are concerned about theft. The full/paid version of EncryptStick offers an option for "lost key recovery" which would allow you to backup the contents of the stick and be able to retrieve them on a newly purchased stick in the event your first one is lost or stolen.
Imagine how "unlucky" I felt when I notice that I had lost my "rabbits feet" on this cute animal-themed USB stick.
I sense trouble losing the cap on my EncryptStick as well. This might seem trivial, but it is a pet peeve of mine; USB stick designs should plan for this. Not only is there nothing to keep the cap on (it slides on and off quite smoothly), but there is no loop to attach the cap to anything if you wanted to.
Since then, I have gotten smart and look for ways to keep the cap connected. Some designs, like the IBM-logoed stick shown above, just rotate around an axle, giving you access when you need it, and protection when it is folded closed.
Alternatively, get a little chain that allows you to attach the cap to the main stick. In the case of the pig and chicken, the memory section had a hole pre-drilled and a chain to put through it. I drilled an extra hole in the cap section of each USB stick, and connected the chain through both pieces.
(Warning: Kids, be sure to ask for assistance from your parents before using any power tools on small plastic objects.)
The EncryptStick can run on either Microsoft Windows or Mac OS. The instructions indicate that you can install both versions of the downloadable software onto a single stick, so why not do that for the pre-loaded full version? The stick I have had only the Windows version pre-loaded. I don't know if the Windows and Mac OS versions can unlock the same "storage vaults" on the stick.
Certainly, I have been to many companies where either everyone runs Windows or everyone runs Mac OS. If the primary target audience is to use this stick at work in one of those places, then no changes are required. However, at IBM, we have employees using Windows, Mac OS and Linux. In my case, I have all three! Ideally, I would like a version of EncryptStick that I could take on trips with me that would allow me to use it regardless of the Operating System I encountered.
Since there isn't a Linux-version of EncryptStick software, I decided to modify my stick to support booting Linux. I am finding more and more Linux kiosks when I travel, especially at airports and high-traffic locations, so having a stick that works both in Windows or Linux would be useful. Here are some suggestions if you want to try this at home:
Use fdisk to change the FAT32 partition type from "b" to "c". Apparently, Grub2 requires type "c", but the pre-loaded EncryptStick was set to "b". The Windows version of EncryptStick seems to work fine in either mode, so this is a harmless change. (See the sketch at the end of this list.)
Install Grub2 with "grub-install" from a working Linux system.
Once Grub2 is installed, you can boot ISO images of various Linux Rescue CDs, like [PartedMagic] which includes the open-source [TrueCrypt] encryption software that you could use for Linux purposes.
This USB stick could also be used to help repair a damaged or compromised Windows system. Consider installing [Ophcrack] or [Avira].
Certainly, 8GB is big enough to run a full Linux distribution. The latest 32-bit version of [Ubuntu] could run on any 32-bit or 64-bit Intel or AMD x86 machine, and have enough room to store an [encrypted home directory].
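Here is roughly what the partition-type and bootloader steps look like, assuming the stick shows up as /dev/sdb (double-check with "fdisk -l" first; aiming these commands at the wrong disk is unrecoverable):

    sudo fdisk /dev/sdb
    # inside fdisk: "t" changes the type of partition 1;
    # enter "c" (W95 FAT32 LBA) in place of "b", then "w" to write and quit

    sudo mount /dev/sdb1 /mnt/stick
    sudo grub-install --root-directory=/mnt/stick /dev/sdb
    # newer Grub2 builds use --boot-directory=/mnt/stick/boot instead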
Since the stick is formatted FAT32, you should be able to run your original Windows or Mac OS version of EncryptStick with these changes.
Depending on where you are, you may not have the luxury of rebooting a system from the USB memory stick. Certainly, this may require changes to the boot sequence in the BIOS and/or hitting the right keys at the right time during the boot sequence. I have been to some "Internet Cafes" that frown on this, or have blocked this altogether, forcing you to boot only from the hard drive.
Well, those are my suggestions. Whether you go on a trip with or without your laptop, it can't hurt to take this EncryptStick along. If you get a virus on your laptop, or have your laptop stolen, then it could be handy to have around. If you don't bring your laptop, you can use this at Internet cafes, hotel business centers, libraries, or other places where public computers are available.
Well, it feels like Tuesday and you know what that means... "IBM Announcement Day!" Actually, today is Wednesday, but since Monday was the Memorial Day holiday here in the USA, my week is day-shifted. Yesterday, IBM announced its latest IBM FlashCopy Manager v2.2 release. Fellow blogger Del Hoobler (IBM) has also posted something on this out at the [Tivoli Storage Blog].
IBM FlashCopy Manager replaces two previous products: one called Tivoli Storage Manager for Copy Services, the other called Tivoli Storage Manager for Advanced Copy Services. To say people were confused between these two was an understatement; the first was for Windows, the second for UNIX and Linux operating systems. The solution? A new product that replaces both to support Windows, UNIX and Linux! Thus, IBM FlashCopy Manager was born. I introduced this product back in 2009 in my post [New DS8700 and other announcements].
IBM Tivoli Storage FlashCopy Manager provides what most people with "N series SnapManager envy" are looking for: application-aware point-in-time copies. This product takes advantage of the underlying point-in-time interfaces available on various disk storage systems:
FlashCopy on the DS8000 and SAN Volume Controller (SVC)
Snapshot on the XIV storage system
Volume Shadow Copy Services (VSS) interface on the DS3000, DS4000, DS5000 and non-IBM gear that supports this Microsoft Windows protocol
For Windows, IBM FlashCopy Manager can coordinate the backup of Microsoft Exchange and SQL Server. The new version 2.2 adds support for Exchange 2010 and SQL Server 2008 R2. This includes the ability to recover an individual mailbox or mail item from an Exchange backup. The data can be recovered directly to an Exchange server, or to a PST file.
For UNIX and Linux, IBM FlashCopy Manager can coordinate the backup of DB2, SAP and Oracle databases. Version 2.2 adds support for specific Linux and Solaris operating systems, and provides a new capability for database cloning. Basically, database cloning restores a database under a new name with all the appropriate changes to allow its use for other purposes, like development, test or education training. A new "fcmcli" command line interface allows IBM FlashCopy Manager to be used for custom applications or file systems.
A common misperception is that IBM FlashCopy Manager requires IBM Tivoli Storage Manager backup software to function. That is not true. You have two options:
Stand-alone mode
In stand-alone mode, it's just you, the application, IBM FlashCopy Manager and your disk system. IBM FlashCopy Manager coordinates the point-in-time copies, maintains the correct number of versions, and allows you to back up and restore directly disk-to-disk.
Unified Recovery Management with Tivoli Storage Manager
Of course, the risk with relying only on point-in-time copies is that in most cases they are on the same disk system as the original data (the exception being virtual disks from the SAN Volume Controller). IBM FlashCopy Manager can be combined with IBM Tivoli Storage Manager so that the point-in-time copies are copied off to a local or remote TSM server; if the disk system that contains both the source and the point-in-time copies fails, you have a backup copy from TSM. In this approach, you can still restore from the point-in-time copies, but you can also restore from the TSM backups as well.
IBM FlashCopy Manager is an excellent platform to connect application-aware functionality with hardware-based copy services.
Well, it's Wednesday, and you know what that means... IBM Announcements!
(Actually most IBM announcements are on Tuesdays, but IBM gave me extra time to recover from my trip to Europe!)
Today, IBM announced [IBM PureSystems], a new family of expert-integrated systems that combine storage, servers, networking, and software, based on IBM's decades of experience in the IT industry. You can register for the [Launch Event] today (April 11) at 2pm EDT, and download the companion "Integrated Expertise" event app for Apple, Android or Blackberry smartphones.
(If you are thinking, "Hey, wait a minute, hasn't this been done before?" you are not alone. Yes, IBM introduced the System/360 back in 1964, and the AS/400 back in 1988, so today's announcement is right on schedule for this 24-year cycle. Based on IBM's past success in this area, others have followed, most recently Oracle, HP and Cisco.)
Initially, there are two offerings:
IBM PureFlex™ System
IBM PureFlex is like IaaS-in-a-box, allowing you to manage the system as a pool of virtual resources. It can be used for private cloud deployments, hybrid cloud deployments, or by service providers to offer public cloud solutions. IBM drinks its own champagne, and will have no problem integrating these into its [IBM SmartCloud] offerings.
To simplify ordering, the IBM PureFlex comes in three tee-shirt sizes: Express, Standard and Enterprise.
IBM PureFlex is based on a 10U-high, 19-inch wide, standard rack-mountable chassis that holds 14 bays, organized in a 7-by-2 matrix. Unlike BladeCenter, where blades are inserted vertically, the IBM PureFlex nodes are horizontal. Some of the nodes take up a single bay (half-wide), but a few are full-wide, taking up two bays across the full 19-inch width of the chassis. Compute and storage nodes snap in the front, while power supplies, fans, and networking snap in the back. You can fit up to four chassis in a standard 42U rack.
Unlike competitive offerings, IBM does not limit you to x86 architectures. Both x86 and POWER-based compute nodes can be mixed into a single chassis. Out of the box, the IBM PureFlex supports four operating systems (AIX, IBM i, Linux and Windows), four server hypervisors (Hyper-V, Linux KVM, PowerVM, and VMware), and two storage hypervisors (SAN Volume Controller and Storwize V7000).
There are a variety of storage options for this. IBM will offer SSD and HDD inside the compute nodes themselves, direct-attached storage nodes, and an integrated version of the Storwize V7000 disk system. Of course, every IBM System Storage product is supported as external storage. Since Storwize V7000 and SAN Volume Controller support external virtualization, many non-IBM devices will be supported automatically as well.
Networking is also optimized, with options for 10Gb and 40Gb Ethernet/FCoE, 40Gb and 56Gb Infiniband, 8Gbps and 16Gbps Fibre Channel. Much of the networking traffic can be handled within the chassis, to minimize traffic on external switches and directors.
For management, IBM offers the Flex System Manager, which allows you to manage all the resources from a single pane of glass. The goal is to greatly simplify the IT lifecycle experience of procurement, installation, deployment and maintenance.
IBM PureApplication™ System
IBM PureApplication is like PaaS-in-a-box. Based on the IBM PureFlex infrastructure, the IBM PureApplication adds additional software layers focused on transactional web, business logic, and database workloads. Initially, it will offer two platforms: Linux platform based on x86 processors, Linux KVM and Red Hat Enterprise Linux (RHEL); and a UNIX platform based on POWER7 processors, PowerVM and AIX operating system. It will be offered in four tee-shirt sizes (small, medium, large and extra large).
In addition to having IBM's middleware like DB2 and WebSphere optimized for this platform, over 600 companies will announce this week that they will support and participate in the IBM PureSystems ecosystem as well. Already, there are 150 "Patterns of Expertise" ready to deploy from IBM PureSystem Centre, a kind of a "data center app store", borrowing an idea used today with smartphones.
By packaging applications in this manner, workloads can easily shift between private, hybrid and public clouds.
If you are unhappy with the inflexibility of your VCE Vblock, HP Integrity, or Oracle ExaLogic, talk to your local IBM Business Partner or Sales Representative. We might be able to buy your boat anchor off your hands, as part of an IBM PureSystems sale, with an attractive IBM Global Financing plan.
Despite this, or perhaps because of this, over 30 percent of IBM's Linux server revenue is on non-x86 platforms, avoiding the XenSource vs. VMware decision altogether. Both System z (traditional mainframe servers) and System p (traditional UNIX servers) are able to run many Linux images in a fully virtualized manner, without VMware or XenSource.
In addition to dominating the gaming world, producing chips for the Nintendo Wii, Sony PlayStation, and Microsoft Xbox 360, IBM also dominates the world of Linux and UNIX servers. Today, IBM announced its new POWER7 processor, and a line of servers that use this technology. Here is a quick [3-minute video] about the POWER7.
While others might be [Dancing on Sun's grave], IBM instead is focused on providing value to the marketplace. Here is another quick [2-minute video] about why thousands of companies have switched from Sun, HP and Dell over to IBM.
[R&D Magazine] recently conducted a survey that prompted readers to identify the world's most successful Research and Development (R&D) companies. The results are in: IBM was recognized as the best R&D company in the world when several different categories were evaluated, including:
R&D spending as a percentage of revenue
the number of patents
new products in development
The survey considered additional information on more than 130 companies such as data on intellectual property, community service and financial growth trends. Readers were also asked five distinct questions, including the following:
Where would you like to work based on their R&D?
What companies have the most improved R&D in the past five years?
What companies are the leaders in R&D?
Which company's R&D has the strongest influence on society?
Which company's R&D is the most proactive in high tech challenges?
Since it is often 5-15 years between when a scientist in one of our many research labs comes up with a clever idea and when it becomes a market success, it is good to have external recognition for the R&D efforts we are doing right now. Here is a link to a [four-page PDF] of the magazine article.
Take for example IBM's recent breakthrough in silicon photonics. Supercomputers that consist of thousands of individual processing nodes, typically running Linux on dual-core or quad-core processors, connected by miles of copper wires, could one day fit into a laptop PC. And while today's supercomputers can use the equivalent energy required to power hundreds of homes, these future tiny supercomputers-on-a-chip would expend the energy of a light bulb, so this solution is more "green" for the environment. According to the [IBM Press Release]:
The breakthrough -- known in the industry as a silicon Mach-Zehnder electro-optic modulator -- performs the function of converting electrical signals into pulses of light. The IBM modulator is 100 to 1,000 times smaller in size compared to previously demonstrated modulators of its kind, paving the way for many such devices and eventually complete optical routing networks to be integrated onto a single chip. This could significantly reduce cost, energy and heat while increasing communications bandwidth between the cores more than a hundred times over wired chips.
“Work is underway within IBM and in the industry to pack many more computing cores on a single chip, but today’s on-chip communications technology would overheat and be far too slow to handle that increase in workload,” said Dr. T.C. Chen, vice president, Science and Technology, IBM Research. “What we have done is a significant step toward building a vastly smaller and more power-efficient way to connect those cores, in a way that nobody has done before.”
Today, one of the most advanced chips in the world -- IBM's Cell processor which powers the Sony PlayStation 3 -- contains nine cores on a single chip. The new technology aims to enable a power-efficient method to connect hundreds or thousands of cores together on a tiny chip by eliminating the wires required to connect them. Using light instead of wires to send information between the cores can be 100 times faster and use 10 times less power than wires.
Well, it's Tuesday again, which means IBM announcement day. With the [big launches] we have had this year, there might be some confusion about IBM terminology on how announcements are handled. Basically, there are three levels:
Technology demonstrations show IBM's leadership, innovation and investment direction, without having to detail a specific product offering. Last month's [Project Quicksilver], for example, demonstrated the ability to handle over 1 million IOPS with Solid State Disk. IBM is committed to developing solid state storage to create real-world uses across a broad range of applications, middleware, and systems offerings.
A preview announcement does entail a specific product offering, but may not necessarily include pricing, packaging or specific availability dates.
An announcement also entails a specific product offering, and does include pricing, packaging and specific availability dates.
With our September 8 launch of the IBM Information Infrastructure strategic initiative, there was a mix of all three of these. Many of the preview announcements will be followed up with full announcements later this year. Today, IBM Tivoli Advanced Backup and Recovery for z/OS v2.1 was announced.
Note: If you don't use z/OS on a System z mainframe, you can stop reading now.
As many of my loyal readers know, I was lead architect for DFSMS until 2001, and so functions related to DFSMS and z/OS are very near and dear to my heart. For Business Continuity, IBM created Aggregate Backup and Recovery Support (ABARS) as part of the DFSMShsm component. This feature created a self-contained backup image from data that could be either on disk or tape, including migrated data. In the event of a disaster, an ABARS backup image can be used to bring back just the exact programs and data needed for a specific application, speeding up the recovery process, and allowing BC/DR plans to prioritize what is most important.
To help manage ABARS, IBM has partnered with [Mainstar Software Corporation] to offer a product that helps before, during and after the ABARS processing.
ABARS requires the storage admin to have a "selection list" of data sets to process as an aggregate. IBM Tivoli Advanced Backup and Recovery for z/OS includes Mainstar® ASAP™ to help identify the appropriate data sets for specific applications, using information from job schedulers, JCL, and SMF records.
ABARS has two simple commands: ABACKUP to produce the backup image, and ARECOVER to recover it. However, if you have hundreds of aggregates, and each aggregate has several backups, you may need some help identifying which image to recover from. IBM Tivoli Advanced Backup and Recovery for z/OS includes Mainstar® ABARS Manager™ to present a list of this information, making it easy to choose the right image. To help prep the ICF Catalogs, there is a CATSCRUB feature for either "empty" or "full" catalog recovery at the recovery site.
The fact that storage admins may not be intimately familiar with the applications they are backing up is a common source of human error. IBM Tivoli Advanced Backup and Recovery for z/OS includes Mainstar® All/Star™ to help validate that the data sets processed by ABACKUP are complete, to support any regulatory audit or application team verification. This critical data tracking/inventory reporting not only identifies what isn't backed up, so you can ensure that you are not missing critical data, but also can identify which data sets are being backed up multiple times by more than one utility, so you can reduce the occurrence of redundant backups.
With v2.1 of Tivoli Advanced Backup and Recovery for z/OS, IBM has integrated Tivoli Enterprise Portal (TEP) support. This allows you to access these functions through the IBM Tivoli Monitor v6 GUI on a Linux, UNIX or Windows workstation. IBM Tivoli Monitor has full support to integrate Web 2.0, multi-media and frames. This means that any other product that can be rendered in a browser can be embedded and supported with launch-in-context capability.
(If you have not separately purchased a license to IBM Tivoli Monitoring V6.2, don't worry; you can obtain the TEP-based function by acquiring a no-charge, limited use license to IBM Tivoli Monitoring Services on z/OS, V6.2.)
In addition to supporting IBM's many DFSMS backup methods, from ABARS to IDCAMS to IEBGENER, IBM Tivoli Advanced Backup and Recovery v2.1 can also support third-party products from Innovation Data Processing and Computer Associates.
As many people re-discover the mainframe as the cost-effective platform that it has always been, migrating applications back to the mainframe to reduce costs, they need solutions that work across both mainframe and distributed systems during this transition. IBM Tivoli Advanced Backup and Recovery for z/OS can help.
Well it's Tuesday, and ["election day"] here in the USA, and again IBM has more announcements.
IBM announced [IBM Tivoli Key Lifecycle Manager v1.0] (TKLM) to manage encryption keys. This provides a graphical interface to manage encryption keys, including retention criteria when sharing keys with other companies.
TKLM is supported on AIX, Solaris, Windows, Red Hat and SUSE Linux. IBM plans to offer TKLM for z/OS in 2009. TKLM can be used with the Firefox or Internet Explorer web browser. It will include the Encryption Key Manager (EKM) function that IBM initially offered to support encryption keys for the TS1120, TS1130, and LTO-4 drives.
While this is needed today for tape, IBM positions this software to also manage the encryption keys for "Full Drive Encryption" (FDE) disk drive modules (DDM) in IBM disk systems in 2009.
Last week, I got the following comment from Bob Swann:
I am looking for the IBM VM Poster or a picture of the IBM VM "Catch the Wave"
Do you know where I might find it?
Well, Bob, I made some phone calls. The company that published these posters no longer exists, but I found a coworker at the Poughkeepsie Briefing Center who still had the poster on his wall, and he was kind enough to take a picture of it for you.
VM: The Wave of the Future (click thumbnail at left to see larger image)
Some may recognize this as a [mash-up] using as its base the famous Japanese 10-inch by 15-inch block print [The Great Wave off Kanagawa] by artist [Katsushika Hokusai]. I had this as my laptop's wallpaper screen image until last year, when I was presenting in Kuala Lumpur, Malaysia. I was told that it reminded people of the horrible tsunami caused by the [Indian Ocean earthquake] back in 2004. I was actually scheduled to fly the last week of December 2004 to Jakarta, Indonesia, but at the last minute our client team changed plans. I would have been en route over the Pacific Ocean when the tsunami hit, and probably stranded over there for weeks or months until the airports re-opened.
The Wave theme was in part to honor the IBM users group called World Alliance VSE VM and Linux (WAVV), which is having its next meeting [April 18-22, 2008] in Chattanooga, Tennessee. I presented at this conference back in 1996 in Green Bay, Wisconsin, as part of the IBM Linux for S/390 team. It started on the Sunday that Wisconsin switched their clocks for [Daylight Saving Time], and the few of us from Arizona or other places that don't bother with this all showed up for breakfast an hour early.
When I was in Australia last year, I was told that the wave sports fans do, by raising their hands in coordinated sequence, is called the [Mexican Wave] in most other countries. When I was there, Melbourne was trying to outlaw this practice at their cricket matches.
The "wave" represents a powerful metaphor, from z/VM operating system on System z mainframes to VMware and Xenon Intel-based processor machines, as the direction of virtualization that we are heading for future data centers.The Mexican wave represents a glimpse of what humans can accomplish with collaboration on a globalscale. It can also represent the tidal wave of data arising from nearly 60 percent annual growth instorage capacity. (I had to mention storage eventually, to avoid being completely off-topic on this post!)
I hope this is the graphic you were looking for Bob. If anyone else has wave-themed posters they would like to contribute, please post a comment below.
For the longest time, people thought that humans could not run a mile in less than four minutes. Then, in 1954, [Sir Roger Bannister] beat that perception, and shortly thereafter, once he showed it was possible, many other runners were able to achieve this also. The same is being said now about the IBM Watson computer which appeared this week against two human contestants on Jeopardy!
(2014 Update: A lot has happened since I originally wrote this blog post! I intended this as a fun project for college students to work on during their summer break. However, IBM is concerned that some businesses might be led to believe they could simply stand up their own systems based entirely on open source and internally developed code for business use. IBM recommends instead the [IBM InfoSphere BigInsights] which packages much of the software described below. IBM has also launched a new "Watson Group" that has [Watson-as-a-Service] capabilities in the Cloud. To raise awareness to these developments, IBM has asked me to rename this post from IBM Watson - How to build your own "Watson Jr." in your basement to the new title IBM Watson -- How to replicate Watson hardware and systems design for your own use in your basement. I also took this opportunity to improve the formatting layout.)
Often, when a company demonstrates new technology, it is a prototype, not ready for commercial deployment until several years later. IBM Watson, however, was made mostly from commercially available hardware, software and information resources. As several have noted, the 1TB of data used to search for answers could fit on a single USB drive that you buy at your local computer store.
Take a look at the [IBM Research Team] to determine how the project was organized. Let's decide what we need, and what we don't in our version for personal use:
Do we need it for personal use?
Yes. That's you. Assuming this is a one-person project, you will act as Team Lead.
Yes, I hope you know computer programming!
No, since this version for personal use won't be appearing on Jeopardy, we won't need strategy on wager amounts for the Daily Double, or what clues to pick next. Let's focus merely on a computer that can accept a question in text, and provide an answer back, in text.
Yes, this team focused on how to wire all the hardware together. We need to do that, although this version for personal use will have fewer components.
Optional. For now, let's have this version for personal use just return its answer in plain text. Consider this Extra Credit after you get the rest of the system working. Consider using [eSpeak], [FreeTTS], or the Modular Architecture for Research on speech sYnthesis [MARY] Text-to-Speech synthesizers.
Yes, I will explain what this is, and why you need it.
Yes, we will need to gather information sources for our system to process.
Yes, this team developed a system for parsing the question being asked, and to attach meaning to the different words involved.
No, this team focused on making IBM Watson optimized to answer in 3 seconds or less. We can accept a slower response, so we can skip this.
(Disclaimer: As with any Do-It-Yourself (DIY) project, I am not responsible if you are not happy with your version for personal use. I am basing the approach on what I read from publicly available sources, and my work in Linux, supercomputers, XIV, and SONAS. For our purposes, this version for personal use is based entirely on commodity hardware, open source software, and publicly available sources of information. Your implementation will certainly not be as fast or as clever as the IBM Watson you saw on television.)
Step 1: Buy the Hardware
Supercomputers are built as clusters of identical compute servers lashed together by a network. You will be installing Linux on them, so if you can avoid paying extra for Microsoft Windows, that will save you some money. Here is your shopping list:
Three x86 hosts, with the following:
64-bit quad-core processor, either Intel-VT or AMD-V capable,
8GB of DRAM, or larger
300GB of hard disk, or larger
CD or DVD Read/Write drive
Computer Monitor, mouse and keyboard
Ethernet 1GbE 4-port hub, and appropriate RJ45 cables
Surge protector and Power strip
Local Console Monitor (LCM) 4-port switch (formerly known as a KVM switch) and appropriate cables. This is optional, but will make it easier during the development. Once your implementation is operational, you will only need the monitor and keyboard attached to one machine. The other two machines can remain "headless" servers.
Step 2: Establish Networking
IBM Watson used Juniper switches running at 10Gbps Ethernet (10GbE) speeds, but was not connected to the Internet while playing Jeopardy! Instead, these Ethernet links were for the POWER7 servers to talk to each other, and to access files over the Network File System (NFS) protocol to the internal customized SONAS storage I/O nodes.
Your implementation will be able to run "disconnected from the Internet" as well. However, you will need Internet access to download the code and information sources. For our purposes, 1GbE should be sufficient. Connect your Ethernet hub to your DSL or Cable modem. Connect all three hosts to the Ethernet hub. Connect your keyboard, video monitor and mouse to the LCM, and connect the LCM to the three hosts.
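To keep the three machines straight, it helps to give them fixed addresses and names. Here is a minimal sketch; the 192.168.1.x addresses and the host1/host2/host3 names are my own choices for illustration, not anything IBM Watson used:

    # /etc/hosts entries on all three machines (example addresses; match your LAN)
    192.168.1.101   host1    # Presentation Server
    192.168.1.102   host2    # Business Logic Server
    192.168.1.103   host3    # File and Database Server

A quick "ping host3" from host 1 confirms the wiring and addressing are correct.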
Step 3: Install Linux and Middleware
To say I use Linux on a daily basis is an understatement. Linux runs on my Android-based cell phone, my laptop at work, my personal computers at home, most of our IBM storage devices from SAN Volume Controller to XIV to SONAS, and even on my Tivo at home which recorded my televised episodes of Jeopardy!
For this project, you can use any modern Linux distribution that supports KVM. IBM Watson used Novell SUSE Linux Enterprise Server [SLES 11]. Alternatively, I can also recommend either Red Hat Enterprise Linux [RHEL 6] or Canonical [Ubuntu v10]. Each distribution of Linux comes in three orientations. Download the 64-bit "ISO" files for each version, and burn them to CDs.
Graphical User Interface (GUI) oriented, often referred to as "Desktop" or "HPC-Head"
Command Line Interface (CLI) oriented, often referred to as "Server" or "HPC-Compute"
Guest OS oriented, to run in a Hypervisor such as KVM, Xen, or VMware. Novell calls theirs "Just Enough Operating System" [JeOS].
For this version for personal use, I have chosen a [multitier architecture], sometimes referred to as an "n-tier" or "client/server" architecture.
Host 1 - Presentation Server
For the Human-Computer Interface [HCI], the IBM Watson received categories and clues as text files via TCP/IP, had a [beautiful avatar] representing a planet with 42 circles streaking across in orbit, and text-to-speech synthesizer to respond in a computerized voice. Your implementation will not be this sophisticated. Instead, we will have a simple text-based Query Panel web interface accessible from a browser like Mozilla Firefox.
Host 1 will be your Presentation Server, the connection to your keyboard, video monitor and mouse. Install the "Desktop" or "HPC Head Node" version of Linux. Install [Apache Web Server and Tomcat] to run the Query Panel. Host 1 will also be your "programming" host. Install the [Java SDK] and the [Eclipse IDE for Java Developers]. If you always wanted to learn Java, now is your chance. There are plenty of books on Java if that is not the language you normally write code in.
While three little systems don't constitute an "Extreme Cloud" environment, you might like to try out the "Extreme Cloud Administration Tool", called [xCat], which was used to manage the many servers in IBM Watson.
Host 2 - Business Logic Server
Host 2 will be driving most of the "thinking". Install the "Server" or "HPC Compute Node" version of Linux. This will be running a server virtualization Hypervisor. I recommend KVM, but you can probably run Xen or VMware instead if you like.
Host 3 - File and Database Server
Host 3 will hold your information sources, indices, and databases. Install the "Server" or "HPC Compute Node" version of Linux. This will be your NFS server, which might come up as a question during the installation process.
Technically, you could run different Linux distributions on different machines. For example, you could run "Ubuntu Desktop" for host 1, "RHEL 6 Server" for host 2, and "SLES 11" for host 3. In general, Red Hat tries to be the best "Server" platform, and Novell tries to make SLES be the best "Guest OS".
My advice is to pick a single distribution and use it for everything, Desktop, Server, and Guest OS. If you are new to Linux, choose Ubuntu. There are plenty of books on Linux in general, and Ubuntu in particular, and Ubuntu has a helpful community of volunteers to answer your questions.
Step 4: Download Information Sources
You will need some documents for your implementation to process.
IBM Watson used a modified SONAS to provide a highly-available clustered NFS server. For this version, we won't need that level of sophistication. Configure Host 3 as the NFS server, and Hosts 1 and 2 as NFS clients. See the [Linux-NFS-HOWTO] for details. To optimize performance, host 3 will hold the "official master copy", but we will use a Linux utility called rsync to copy the information sources over to hosts 1 and 2. This allows the task engines on those hosts to access local disk resources during question-answer processing.
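Here is a minimal sketch of that arrangement, assuming the documents live under /data/corpus and the host names from Step 2; your paths and export options may differ:

    # On host 3 (NFS server): export the corpus directory read-only
    echo "/data/corpus  host1(ro,sync)  host2(ro,sync)" >> /etc/exports
    exportfs -ra

    # On hosts 1 and 2 (NFS clients): mount the master copy
    mkdir -p /mnt/corpus
    mount -t nfs host3:/data/corpus /mnt/corpus

    # Mirror the master copy to local disk for fast question-answer processing
    rsync -av --delete /mnt/corpus/ /data/corpus-local/

Re-run the rsync whenever you add new documents to host 3.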
We will also need a relational database. You won't need a high-powered IBM DB2. Your implementation can do fine with something like [Apache Derby] which is the open source version of IBM CloudScape from its Informix acquisition. Set up Host 3 as the Derby Network Server, and Hosts 1 and 2 as Derby Network Clients. For more about structured content in relational databases, see my post [IBM Watson - Business Intelligence, Data Retrieval and Text Mining].
Linux includes a utility called wget, which allows you to download content from the Internet to your system. What documents you decide to download is up to you, based on what types of questions you want answered. For example, if you like Literature, check out the vast resources at [FullBooks.com]. You can automate the download by writing a shell script or program that invokes wget for all the places you want to fetch data from. Rename the downloaded files to something unique, as often they are just "index.html". For more on the wget utility, see [IBM Developerworks].
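As a sketch, such a download script might look like the following; the urls.txt file and the doc-N.html naming convention are just illustrations:

    #!/bin/sh
    # fetch.sh -- download every URL listed in urls.txt, one per line,
    # giving each file a unique name instead of the usual "index.html"
    n=0
    while read url; do
        n=$((n+1))
        wget -q -O "/data/corpus/doc-$n.html" "$url" && echo "fetched $url"
    done < urls.txt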
Step 5: The Query Panel - Parsing the Question
Next, we need to parse the question and have some sense of what is being asked for. For this we will use [OpenNLP] for Natural Language Processing, and [OpenCyc] for the conceptual logic reasoning. See Doug Lenat presenting this 75-minute video [Computers versus Common Sense]. To learn more, see the [CYC 101 Tutorial].
Unlike Jeopardy! where Alex Trebek provides the answer and contestants must respond with the correct question, we will do normal Question-and-Answer processing. To keep things simple, we will limit questions to the following formats:
Who is ...?
Where is ...?
When did ... happen?
What is ...?
Host 1 will have a simple Query Panel web interface: at the top, a place to enter your question and a "submit" button, and at the bottom, a place for the answer to be shown. When "submit" is pressed, the question is passed to "main.jsp", the JSP page (compiled into a Java servlet) that starts the question-answering analysis. Limiting the types of questions that can be posed simplifies hypothesis generation and reduces the candidate set and the evidence to evaluate, allowing the analytics processing to complete in a reasonable time.
Step 6: Unstructured Information Management Architecture
The "heart and soul" of IBM Watson is Unstructured Information Management Architecture [UIMA]. IBM developed this, then made it available to the world as open source. It is maintained by the [Apache Software Foundation], and overseen by the Organization for the Advancement of Structured Information Standards [OASIS].
Basically, UIMA lets you scan unstructured documents, glean the important points, and put them into a database for later retrieval. In the UIMA architecture diagram, DBs means 'databases' and KBs means 'knowledge bases'. See the 4-minute YouTube video of [IBM Content Analytics], the commercial version of UIMA.
Starting from the left, the Collection Reader selects each document to process, and creates an empty Common Analysis Structure (CAS), which serves as a standardized container for information. This CAS is passed to Analysis Engines, composed of one or more Annotators, which analyze the text and fill the CAS with the information found. Each CAS is then passed to CAS Consumers, which do something with the information found, such as enter an entry into a database, update an index, or update a vote count.
(Note: This step requires what we in the industry call a small matter of programming, or [SMOP]. If you've always wanted to learn Java programming, XML, and JDBC, you will get to do plenty here.)
If you are not familiar with UIMA, consider this [UIMA Tutorial].
Step 7: Parallel Processing
People have asked me why IBM Watson is so big. Did we really need 2,880 cores of processing power? As a supercomputer, the 80 TeraFLOPs of IBM Watson would place it only in 94th place on the [Top 500 Supercomputers]. While IBM Watson may be the [Smartest Machine on Earth], the most powerful supercomputer at this time is the Tianhe-1A with more than 186,000 cores, capable of 2,566 TeraFLOPs.
To determine how big IBM Watson needed to be, the IBM Research team ran the DeepQA algorithm on a single core. It took 2 hours to answer a single Jeopardy question! Let's look at the performance data:
Number of cores | Time to answer one Jeopardy! question
Single IBM Power750 server (32 cores) | < 4 minutes
Single rack (10 servers, 320 cores) | < 30 seconds
IBM Watson (90 servers, 2,880 cores) | < 3 seconds
The old adage applies: [many hands make for light work]. The idea is to divide-and-conquer. For example, if you wanted to find a particular street address in the Manhattan phone book, you could hand fifty pages to each of your friends, and they could all scan their pages at the same time. This is known as "Parallel Processing" and is how supercomputers are able to work so well. However, not all algorithms lend themselves well to parallel processing, and the phrase [nine women can't have a baby in one month] is often used to remind us of this.
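As a toy illustration of divide-and-conquer on an ordinary Linux box (not Watson's actual code), the xargs utility can fan a search out across parallel processes:

    # Scan the corpus with 10 parallel "task engines", 50 files per batch
    ls /data/corpus-local/*.html | xargs -P 10 -n 50 grep -l "Hokusai"

Ten greps run at once, each scanning its own batch, and the results are merged on standard output, which is the same dispatch-and-combine pattern at miniature scale.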
Fortunately, UIMA is designed for parallel processing. You need to install UIMA-AS for Asynchronous Scale-out processing, an add-on to the base UIMA Java framework that supports a very flexible scale-out capability based on JMS (Java Messaging Services) and ActiveMQ. We will also need Apache Hadoop, an open source implementation used by the Yahoo! search engine. Hadoop has a "MapReduce" engine that allows you to divide the work, dispatch pieces to different "task engines", and then combine the results afterwards.
Host 2 will run Hadoop and drive the MapReduce process. Plan to have three KVM guests on Host 1, four on Host 2, and three on Host 3. That means you have 10 task engines to work with. These task engines can be deployed for Content Readers, Analysis Engines, and CAS Consumers. When all processing is done, the resulting votes will be tabulated and the top answer displayed on the Query Panel on Host 1.
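Creating the guests can be scripted with virt-install from the libvirt tool set; the guest names, sizes, and ISO path below are assumptions for illustration:

    # Create one "task engine" guest; repeat with different names on each host
    virt-install --name taskengine1 --ram 1024 --vcpus 1 \
        --disk path=/var/lib/libvirt/images/taskengine1.img,size=20 \
        --cdrom /isos/linux-server.iso

Plan the memory so that, for example, the four guests on Host 2 fit comfortably within its 8GB of DRAM.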
Step 8: Testing
To simplify testing, use a batch processing approach. Rather than entering questions by hand in the Query Panel, generate a long list of questions in a file, and submit for processing. This will allow you to fine-tune the environment, optimize for performance, and validate the answers returned.
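Here is one way that batch run might look, assuming a hypothetical Query Panel endpoint at /main.jsp on host 1 that accepts a question parameter over HTTP:

    #!/bin/sh
    # batch-test.sh -- submit every question in questions.txt, log the answers
    while read q; do
        answer=$(curl -s --data-urlencode "question=$q" http://host1:8080/main.jsp)
        echo "$q --> $answer" >> answers.log
    done < questions.txt

Comparing answers.log against a file of expected answers gives you a crude accuracy score after each tuning change.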
There you have it. By the time you get your implementation fully operational, you will have learned a lot of useful skills, including Linux administration, Ethernet networking, NFS file system configuration, Java programming, UIMA text mining analysis, and MapReduce parallel processing. Hopefully, you will also gain an appreciation for how difficult it was for the IBM Research team to accomplish what they did for the Grand Challenge on Jeopardy! Not surprisingly, IBM Watson is making IBM [as sexy to work for as Apple, Google or Facebook], all of which started their business in a garage or a basement with a system as small as this version for personal use.
By combining multiple components into a single "integrated system", IBM can offer a blended disk-and-tape storage solution. This provides the best of both worlds: high-speed access using disk, with lower costs and better energy efficiency using tape. According to a study by the Clipper Group, tape can be 23 times less expensive than disk over a 5-year total cost of ownership (TCO).
I've also covered Hierarchical Storage Management, such as my post [Seven Tiers of Storage at ABN Amro], and my role as lead architect for DFSMS on z/OS in general, and DFSMShsm in particular.
However, some explanation might be warranted in the use of these two terms in regards to SONAS. In this case, ILM refers to policy-based file placement, movement and expiration on internal disk pools. This is actually a GPFS feature that has existed for some time, and was tested to work in this new configuration. Files can be individually placed on either SAS (15K RPM) or SATA (7200 RPM) drives. Policies can be written to move them from SAS to SATA based on size, age and days non-referenced.
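To give a flavor of the policy engine, here is a sketch in the GPFS policy language, driven from the shell; the pool names and 30-day threshold are made up, and the exact syntax accepted depends on your GPFS release:

    # Sketch: place new files on SAS, migrate to SATA after 30 days unreferenced
    cat > /tmp/ilm.policy <<'EOF'
    RULE 'placement' SET POOL 'sas'
    RULE 'aging' MIGRATE FROM POOL 'sas' TO POOL 'sata'
         WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30
    EOF
    mmapplypolicy gpfs0 -P /tmp/ilm.policy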
HSM is also a form of ILM, in that it moves data from SONAS disk to external storage pools managed by IBM Tivoli Storage Manager. A small stub is left behind in the GPFS file system indicating the file has been "migrated". Any request to read or update this file will cause the file to be "recalled" back from TSM to SONAS for processing. The external storage pools can be disk, tape or any other media supported by TSM. Some estimate that as much as 60 to 80 percent of files on NAS are rarely referenced and should be stored on tape instead of disk, and now SONAS with HSM makes that possible.
This distinction allows the ILM movement to be done internally, within GPFS, and the HSM movement to be done externally, via TSM. Both ILM and HSM movement take advantage of the GPFS high-speed policy engine, which can process 10 million files per node, running in parallel across all interface nodes. Note that TSM is not required for ILM movement. In effect, SONAS brings the policy-based management features of DFSMS for z/OS mainframe to all the other operating systems that access SONAS.
HTTP and NIS support
In addition to NFS v2, NFS v3, and CIFS, the SONAS v1.1.1 adds the HTTP protocol. Over time, IBM plans to add more protocols in subsequent releases. Let me know which protocols you are interested in, so I can pass that along to the architects designing future releases!
SONAS v1.1.1 also adds support for Network Information Service (NIS), a client/server based model for user administration. In SONAS, NIS is used for netgroup and ID mapping only. Authentication is done via Active Directory, LDAP or Samba PDC.
SONAS already had synchronous replication, which was limited in distance. Now, SONAS v1.1.1 provides asynchronous replication, using rsync, at the file level. This is done over Wide Area Network (WAN) across to any other SONAS at any distance.
Interface modules can now be configured with either 64GB or 128GB of cache. Storage now supports both 450GB and 600GB SAS (15K RPM) and both 1TB and 2TB SATA (7200 RPM) drives. However, at this time, an entire 60-drive drawer must be either all one type of SAS or all one type of SATA. I have been pushing the architects to allow each 10-pack RAID rank to be independently selectable. For now, a storage pod can have 240 drives, 60 drives of each type of disk, to provide four different tiers of storage. You can have up to 30 storage pods per SONAS, for a total of 7200 drives.
An alternative to internal drawers of disk is a new "Gateway" iRPQ that allows the two storage nodes of a SONAS storage pod to connect via Fibre Channel to one or two XIV disk systems. You cannot mix and match: a storage pod is either all internal disk, or all external XIV. A SONAS gateway combined with external XIV is referred to as a "Smart Business Storage Cloud" (SBSC), which can be configured off premises and managed by third-party personnel so your IT staff can focus on other things.
See the Announcement Letters for the SONAS [hardware] and [software] for more details.
For those who are wondering how this positions against IBM's other NAS solution, the IBM System Storage N series, the rule of thumb is simple. If your capacity needs can be satisfied with a single N series box per location, use that. If not, consider SONAS instead. For those with non-IBM NAS filers that realize now that SONAS is a better approach, IBM offers migration services.
Both the Information Archive and the SONAS can be accessed from z/OS or Linux on System z mainframe, from "IBM i", AIX and Linux on POWER systems, all x86-based operating systems that run on System x servers, as well as any non-IBM server that has a supported NAS client.
They say "Great Minds think alike" and that imitation is "the sincerest form of flattery." Both of these quotes came to mind when I read fellow blogger Chuck Hollis' (EMC) excellent April 7th blog post [The 10 Big Ideas That Are Shaping IT Infrastructure Today]. Not surprisingly, some of his thoughts are similar to those I had presented two weeks ago in my March 22nd post [Cloud Computing for Accountants]. Here are two charts that caught my eye:
On page 13 of my deck, I had an old black and white photo of telephone operators, as part of a section on the history of selecting "cloud" as the iconic graphic to represent all networks. Chuck has this same graphic on his chart titled "#1 The Industrialization of IT Infrastructure".
Looks like Chuck and I use the same "stock photo" search facility!
On page 45 on my deck, I had a list of major "arms dealers" that deliver the hardware and software components needed to build Cloud Computing. Chuck has a similar chart, titled "#2 The Consolidation of the IT Industry", but with some interesting differences.
Let's look at some of the key differences:
The left-to-right order is slightly different. I chose a 1-2-4-2-1 symmetrical pattern purely on aesthetic reasons. My presentation was to a bunch of accountants, and so I was trying not to make it sound like an "Infomercial" for IBM products and offerings. My sequence is roughly chronological, in that Oracle announced its intention to acquire Sun, then Cisco, VMware and EMC announced their VCE coalition, followed closely by Cisco, VMware and NetApp announcing they work together well also, followed by [HP extended alliance with Microsoft] on Jan 13, 2010. As the IT marketplace is maturing, more and more customers are looking for an IBM-like one-stop shopping experience, and certainly various "mini-mall" alliances have formed to try to compete in this space.
I had HP and Microsoft in the same column, referring only to the above-mentioned January announcement. HP is all about private cloud hardware infrastructures, but Microsoft is all about "three screens and the public cloud", so I am not sure how well this alliance will work out from a Cloud Computing perspective. This was not to imply that the other stacks don't work well with Microsoft software. They all do. Perhaps to avoid that controversy, Chuck chose to highlight HP's acquisition of EDS services instead.
I used the vendor logos in their actual colors. Notice that the colors black, blue and red occur most often. These happen to be the three most popular ballpoint pen ink colors found on the very same paper documents these computer companies are trying to eliminate. Paper-less office, anyone? Chuck chose instead to colorize each stack with his own color scheme. While blue for IBM and orange for Sun Microsystems make some sense, it is not clear if he chose green for Cisco/VMware/EMC for any particular reason. Perhaps he was trying to subtly imply that the VCE stack is more energy efficient? Or maybe the green refers to money to indicate that the VCE stack is the most expensive? Either way, I would pit IBM's server/storage/software stack up against anything of comparable price from these other stacks in any energy efficiency bake-off.
What about the Cisco/VMware/NetApp combination? All three got together to assure customers this was a viable combination. IBM is the number one reseller of VMware, and VMware runs great with IBM's N series NAS storage, so I do not dispute Cisco's motivation here. It makes sense for Cisco to two-time EMC in this manner. Why should Cisco limit itself to a single storage supplier? Et tu, VMware? Having VMware choose NetApp over its parent company EMC was a bit of a shock. No surprise that Chuck left NetApp out of his chart.
No love for Dell? I give Dell credit for their work with Virtual Desktop Images (VDI), and for embracing Ubuntu Linux for their servers. Dell's acquisitions of EqualLogic iSCSI-based disk systems and Perot Systems for services are also worth noting. Dell used to resell some of EMC's gear, but perhaps that relationship continues to fade away, as I [predicted back in 2007]. Chuck's decision to leave Dell off his chart speaks volumes to where this relationship stands, and where it is going.
Perhaps we are all in just one big ["echo chamber"], as we are all coming up with similar observations, talking to similar customers, and reviewing similar market analyst reports. I am glad, at least this time, that Chuck and I for the most part agree where the marketplace is going. We live in interesting times!
In my last blog post, [Full Disk Encryption for Your Laptop], I explained my decisions relating to Full-Disk Encryption (FDE) for my laptop. Wrapping up my week's theme of Full-Disk Encryption, I thought I would explain the steps involved to make it happen.
Last April, I switched from running Windows and Linux dual-boot, to one with Linux running as the primary operating system, and Windows running as a Linux KVM guest. I have Full Disk Encryption (FDE) implemented using Linux Unified Key Setup (LUKS).
Here were the steps involved for encrypting my Thinkpad T410:
Step 0: Backup my System
Long-time readers know how I feel about taking backups. In my blog post [Separating Programs from Data], I emphasized this by calling it "Step 0". I backed up my system three ways:
Backed up all of my documents and home user directory with IBM Tivoli Storage Manager.
Backed up all of my files, including programs, bookmarks and operating settings, to an external disk drive (I used rsync for this). If you have a lot of bookmarks on your browser, there are ways to dump these out to a file to load them back in the later step.
Backed up the entire hard drive using [Clonezilla].
Clonezilla allows me to do a "Bare Machine Recovery" of my laptop back to its original dual-boot state in less than an hour, in case I need to start all over again.
Step 1: Re-Partition the Drive
"Full Disk Encryption" is a slight misnomer. For external drives, like the Maxtor BlackArmor from Seagate (Thank you Allen!), there is a small unencrypted portion that contains the encryption/decryption software to access the rest of the drive. Internal boot drives for laptops work the same way. I created two partitions:
A small unencrypted partition (2 GB) to hold the Master Boot Record [MBR], the GRand Unified Bootloader [GRUB], and the /boot directory. Even though there is no sensitive information on this partition, it is still protected the "old way" with the hard-drive password in the BIOS.
The rest of the drive (318GB) will be one big encrypted Logical Volume Manager [LVM] container, often referred to as a "Physical Volume" in LVM terminology.
Having one big encrypted partition means I only have to enter my ridiculously-long encryption password once during boot-up.
Step 2: Create Logical Volumes in the LVM container
I created three logical volumes in the encrypted physical container: swap, slash (/) directory, and home (/home). Some might question the logic behind putting swap space on an encrypted container; in theory, swap could contain sensitive information after a system [hibernation]. I separated /home from slash (/) so that in the event I completely fill up my home directory, I can still boot up my system.
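For the curious, the sequence on most Linux distributions looks roughly like this; /dev/sda2 and the sizes are examples, and a slip of the device name here destroys data, so treat it as a sketch rather than a recipe:

    # Step 1: encrypt the large partition and open it as a container
    cryptsetup luksFormat /dev/sda2
    cryptsetup luksOpen /dev/sda2 cryptvol

    # Step 2: build the LVM stack inside the encrypted container
    pvcreate /dev/mapper/cryptvol
    vgcreate vg0 /dev/mapper/cryptvol
    lvcreate -L 4G  -n swap vg0 && mkswap /dev/vg0/swap
    lvcreate -L 40G -n root vg0 && mkfs.ext4 /dev/vg0/root
    lvcreate -l 100%FREE -n home vg0 && mkfs.ext4 /dev/vg0/home

Most graphical installers will do all of this for you if you check the "encrypt" option during partitioning.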
Step 3: Install Linux
Ideally, I would have lifted my Linux partition "as is" for the primary OS, and a Physical-to-Virtual [P2V] conversion of my Windows image for the guest VM. Ha! To get the encryption, it was a lot simpler to just install Linux from scratch, so I did that.
Step 4: Install Windows guest KVM image
The folks in our "Open Client for Linux" team made this step super-easy. Select Windows XP or Windows 7, and press the "Install" button. This is a fresh install of the Windows operating system onto a 30GB "raw" image file.
(Note: Since my Thinkpad T410 is Intel-based, I had to turn on the 'Intel (R) Virtualization Technology' option in the BIOS!)
There are only a few programs that I need to run on Windows, so I installed them here in this step.
Step 5: Set up File Sharing between Linux and Windows
In my dual-boot setup, I had a separate "D:" drive that I could access from either Windows or Linux, so that I would only have to store each file once. In this new configuration, all of my files will be in my home directory on Linux, and then shared to the Windows guest via the CIFS protocol using [samba].
In theory, I can share any of my Linux directories using this approach, but I decided to share only my home directory. This way, any Windows viruses will not be able to touch my Linux operating system kernels, programs or settings. This makes for a more secure platform.
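A minimal share definition for this might look like the following; the user name and path are illustrative:

    # Append a home-directory share to /etc/samba/smb.conf, then restart Samba
    cat >> /etc/samba/smb.conf <<'EOF'
    [homeshare]
        path = /home/tony
        valid users = tony
        read only = no
    EOF
    smbpasswd -a tony        # give the user a Samba password
    service smb restart      # service name varies by distribution

The Windows guest can then map \\<host-IP>\homeshare as a network drive.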
Step 6: Transfer all of my files back
Here I used the external drive from "Step 0" to bring my data back to my home directory. This was a good time to re-organize my directory folders and do some [Spring cleaning].
Step 7: Re-establish my backup routine
Previously, in my dual-boot configuration, I was using the TSM backup/archive client on the Windows partition to back up my C: and D: drives. Occasionally I would tar a few of my Linux directories and store the tarball on D: so that it got included in the backup process. With my new Linux-based system, I switched over to the Linux version of the TSM client. I had to re-work the include/exclude list, as the files are different on Linux than on Windows.
One of my problems with the dual-boot configuration was that I had to manually boot up in Windows to do the TSM backup, which was disruptive if I was using Linux. With this new scheme, I am always running Linux, and so can run the TSM client any time, 24x7. I made this even better by automatically scheduling the backup every Monday and Thursday at lunch time.
There is no Linux support for my Maxtor BlackArmor external USB drive, but it is simple enough to LUKS-encrypt any regular external USB drive and rsync files over. In fact, I have a fully running (and encrypted) version of my Linux system that I can boot directly from a 32GB USB memory stick. It has everything I need except Windows (the "raw" image file didn't fit).
I can still use Clonezilla to make a "Bare Machine Recovery" version to restore from. However, with the LVM container encrypted, Clonezilla's compression is ineffective, so the image takes a lot longer to create and consumes over 300GB of space on my external disk drive.
Backing up my Windows guest VM is just a matter of copying the "raw" image file to another file for safe keeping. I do this monthly, and keep two previous generations in case I get hit with viruses or "Patch Tuesday" destroys my working Windows image. Each is 30GB in size, so it was a trade-off between the number of versions and the amount of space on my hard drive. TSM backup puts these onto a system far away, for added protection.
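A small rotation script handles the generations; the paths are assumptions:

    #!/bin/sh
    # rotate-vm-backup.sh -- keep the current copy plus two older generations
    cd /backups || exit 1
    rm -f windows.img.2
    [ -f windows.img.1 ] && mv windows.img.1 windows.img.2
    [ -f windows.img.0 ] && mv windows.img.0 windows.img.1
    cp /var/lib/libvirt/images/windows.img windows.img.0

Shut the Windows guest down first, so the copy is consistent.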
Step 8: Protect your Encryption setup
In addition to backing up your data, there are a few extra things to do for added protection:
Add a second passphrase. The first one is the ridiculously-long one you memorize faithfully to boot the system every morning. The second one is a ridiculously-longer one that you give to your boss or admin assistant in case you get hit by a bus. In the event that your boss or admin assistant leaves the company, you can easily disable this second passphrase without affecting your original.
Back up the crypt-header. This is the small section at the front of the drive that contains your key slots; if it gets corrupted, you will not be able to access the rest of your data. Create a backup image file and store it on an encrypted USB memory stick or external drive. (A sketch of both tasks follows this list.)
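Both tasks are single cryptsetup commands; /dev/sda2 is again an example device:

    # Add the emergency second passphrase (prompts for an existing one first)
    cryptsetup luksAddKey /dev/sda2

    # If that person leaves the company, remove just that passphrase
    cryptsetup luksRemoveKey /dev/sda2

    # Back up the LUKS header; keep the file on separate encrypted media
    cryptsetup luksHeaderBackup /dev/sda2 --header-backup-file luks-header.img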
If you are one of the lucky 70,000 IBM employees switching from Windows to Linux this year, Welcome!
IBM announced that it will offer [three free months of IBM Smart Business Cloud] computing and storage services to government agencies, charitable non-profit organizations, and other organizations involved with reconstruction resulting from the earthquakes and tsunami in Japan and the northern Pacific region.
With traditional communications down, and many data centers incapacitated, Cloud Computing can be a great way to resume operations. According to the announcement, organizations can submit their requests now until April 30, and the program will run until July 31, 2011. Options include:
Virtual machine images running 32-bit and 64-bit versions of Linux or Windows.
60 GB to 2 TB of disk storage per instance.
Options for various IBM middleware (DB2, Informix, Lotus, and WebSphere)
Rational Application Lifecycle Management and Tivoli Monitoring software
The offer also includes [LotusLive] Software-as-a-Service (SaaS) for email and online collaboration. For more about LotusLive, see this [Red Paper].
Continuing my series on a [Laptop for Grandma], I thought I would pursue some of the "low-RAM" operating system choices. Grandma's Thinkpad R31 has only 384MB of RAM.
All of the ones below are based on Linux. For those who aren't familiar with installing or running the Linux operating system, here are some helpful tips:
Most Linux distributors allow you to download an ISO file for free. These can be either (a) burned to a CD, (b) burned to a DVD, or (c) written to a USB memory stick.
The ISO can be either a "LiveCD/LiveDVD" version, an installation program, or a combination of the two. The "Live" version allows you to boot up and try out the operating system without modifying the contents of your hard drive. Windows and Mac OS users can try out Linux without impact to their existing environment. Some Linux distributions offer both a full LiveCD+Installer version, as well as an alternate text-based Installer-only version. The latter often requires less RAM to use.
When installing, it is best to have the laptop plugged in to an electrical outlet, and hard-wired to the internet in case it needs to download the latest drivers for your particular hardware.
A CD can hold only 700MB. Many of the newer Linux distributions exceed that, requiring a DVD or USB stick instead. If your laptop has an older optical drive, it may not be able to read DVD media. Some older optical drives can only read CD's, not burn them. In my case, I burned the CDs on another machine, and then used them on grandma's Thinkpad R31.
To avoid burning "a set of coasters" when trying out multiple choices, consider using rewriteable optical media, or the USB option. If you don't like a distribution, you can re-use the media for something else.
The program [Unetbootin] can take most ISO files and write them to a bootable USB stick. On my Red Hat Enterprise Linux 6 laptop, I had to install p7zip and p7zip-plugins first. (A command-line alternative appears after this list.)
The BIOS on some older machines, like my grandma's Thinkpad R31, cannot boot from USB. The [PLoP Boot Manager] allows you to first boot from floppy or CD-ROM, and then allows you to boot from the USB. This worked great on my grandma's system. The PLoP Boot Manager is also available on the [Ultimate Boot CD].
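As an alternative to Unetbootin, many recent "hybrid" ISO images can be written straight to a stick with dd; double-check the device name first, because dd will cheerfully overwrite the wrong disk:

    # Write an ISO image to the USB stick at /dev/sdX (verify the device first!)
    dd if=downloaded.iso of=/dev/sdX bs=4M
    sync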
While I am a big fan of SUSE, Red Hat, and Ubuntu, these all require more RAM than available on grandma's laptop. Here are some Low-RAM alternatives I tried:
Damn Small Linux 4.11 RC2
The Damn Small Linux [DSL] project had been dormant since 2008, but has a fresh new release for 2012. This baby can run in as little as 16MB of RAM! If you have 128MB of RAM or more, the OS can run entirely from RAM, providing much faster performance.
Of course, there are always trade-offs, and in this case, apps were chosen for their size and memory footprint, not necessarily for their user-friendliness and eye candy. For example, XMMS plays MP3 music, but I did not find it as friendly as iTunes or Rhythmbox.
Boot time is fast. From pressing the power-on button to hearing the first note of MP3 music took about 1 minute.
Installing DSL Linux on the hard drive converts it into a Debian distribution, which then allows more options for applications.
Next up was [MacPup]. The latest version is 529, based on Puppy Linux 5.2.60 Precise, compatible with Ubuntu 12.04 Precise Pangolin. While traditional Puppy Linux clutters the screen with apps, MacPup tries to have the look-and-feel of MacOS, with a launcher tray at the bottom center of the screen.
Both MacPup and Puppy Linux can run in very small amounts of RAM and disk space. Like DSL above, you can opt to run MacPup entirely in 128MB of RAM. Unfortunately, the trade-off is a lack of application choices.
Installation to the hard drive was quite involved, certainly not for the beginner. First, you have to use Gparted to partition the disk. I created a 19GB (sda1) for my files, and 700MB (sda5) for swap. I had troubles with "ext4" file system, so re-formatted to "ext3". Second, you have to copy the files over from the LiveCD using the "Puppy Universal Installer". Third, you have to set up the Bootloader. Grub didn't work, so I installed Grub4Dos instead.
The music app is called "Alsa Player", and I was able to drag its icon into the startup tray. Time-to-first-note was just over 1 minute. Fast, but not as "simple-to-use" as I would like.
SliTaz 4.0 claims to be able to run in as little as 48MB of RAM and 100MB of disk space. Time-to-first-note was similar to MacPup, but I didn't care for the TazPanel for setup, and the TazPkg for installing a limited set of software packages. I could not get Wi-Fi working at all on SliTaz, and just gave up trying.
All three of these ran on grandma's Thinkpad R31, and all three could play MP3 music. However, I was concerned that they were not as simple to use as grandma would like, and about the amount of time and effort I might have to spend if things go wrong.
It's Tuesday again, and that means one thing.... IBM Announcements! On the heels of [last week's announcements], IBM announced some additional products of interest to storage administrators.
IBM Information Archive
Back in 2008, IBM [unveiled the Information Archive]. This storage solution provides automated policy-based tiering between disk and tape, with non-erasable non-rewriteable enforcement to protect against unethical tampering of data. The initial release supported [both files and object storage], with support for different collections, each with its own set of policies for management. However, it only supported NFS initially for the file protocol. Today, IBM announces the addition of CIFS protocol support, which will be especially helpful in healthcare and life sciences, as much of the medical equipment is designed for CIFS protocol storage.
Also, Information Archive will now provide a full index and search feature capability to help with e-Discovery. Searches and retrievals can be done in the background without disrupting applications or the archiving operations.
IBM Tivoli Storage Manager for Virtual Environments V6.2 extends capabilities that currently exist in IBM Tivoli Storage Manager. TSM backup/archive clients run fine on guest operating systems, but now this new extension improves backup for VMware environments. TSM provides incremental block-level backups utilizing VMware's vStorage APIs for Data Protection and Changed Block Tracking features.
To minimize impact to the VMware host, TSM for VE makes use of non-disruptive snapshots and offloads the backup processing to a vStorage backup server. This supports file-level recovery, volume-level recovery, and full VM recovery. Of course, since it is based on TSM v6, you get advanced storage efficiency features such as compression and deduplication to minimize consumption of disk storage pools.
IBM Tivoli Monitor has been extended to support virtual servers, including VMware, Linux KVM, and Citrix XenServer. This can help with capacity planning, performance monitoring, and availability. Tivoli Monitor will help you understand the relationships between physical and virtual resources, helping to isolate problems to the correct resource and reducing the time it takes to debug issues between servers and storage. See the announcement letter for more details.
Next week is [IBM Pulse2011 Conference] in Las Vegas, February 27 to March 2. Sorry, I don't plan to be there this year. It is looking to be a great conference, with fellow inventor Dean Kamen as the keynote speaker. For a blast from the past, read my blog posts from Pulse2008 [Main Tent sessions] and [Breakout sessions].
My theme this week was to focus on "Do-it-Yourself" solutions, such as the "open storage" concept presented by Sun Microsystems, but it has morphed into a discussion on vendor lock-in. Both deserve a bit of further exploration.
There were several reasons offered on why someone might pursue a "Do-it-Yourself" course of action.
Building up skills
In my post [Simply Dinners and Open Storage], I suggested that building a server-as-storage solution based on Sun's OpenSolaris operating system could serve to learn more about [OpenSolaris], and by extension, the Solaris operating system. Like Linux, OpenSolaris is open source and has distributions that run on a variety of chipsets, from Sun's own SPARC, to commodity x86 and x86-64 hardware. And as I mentioned in my post [Getting off the island], a version of OpenSolaris was even shown to run successfully on the IBM System z mainframe.
"Learning by Doing" is a strong part of the [Constructivism] movement in education. TheOne Laptop Per Child [OLPC] uses this approach. IBM volunteers in Tucson and 40other sites [help young students build robots]constructed from [Lego Mindstorms]building blocks.Edward De Bono uses the term [operacy] to refer to the"skills of doing", preferred over just "knowing" facts and figures.
However, I feel OpenSolaris is late to the game. Linux, Windows and MacOS are all well-established x86-based operating systems that most home office/small office users would be familiar with, and OpenSolaris is positioning itself as "the fourth choice".
In my post [Washington Gets e-Discovery Wakeup Call], I suggested that the primary motivation for the White House to switch from Lotus Notes over to Microsoft Outlook was familiarity with Microsoft's offerings. Unfortunately, that also meant abandoning a fully-operational automated email archive system for a manual do-it-yourself approach copying PST files from journal folders.
Familiarity also explains why other government employees might print out their emails and archive them on paper in filing cabinets. They are familiar with this process; it allows them to treat email in the same manner as they have treated paper documents in the past.
Cost, Control and Unique Requirements
The last category of reasons can often result if what you want is smaller or bigger than what is available commercially. There are minimum entry-points for many vendors. If you want something so small that it is not profitable for vendors to offer, you may end up doing it yourself. On the other end of the scale, both Yahoo and Google ended up building their data centers with a do-it-yourself approach, because no commercial solutions were available at the time. (IBM now offers [iDataPlex], so that has changed!)
While you could hire a vendor to build a customized solution to meet your unique requirements, it might turn out to be less costly to do it yourself. This might also provide some added control over the technologies and components employed. However, as EMC blogger Chuck Hollis correctly pointed out for [Do-it-yourself storage], your solution may not be less costly than existing off-the-shelf solutions from existing storage vendors, when you factor in scalability and support costs.
Of course, this all assumes that the storage admins building the do-it-yourself storage have enough spare time to do so. When was the last time your storage admins had spare time of any kind? Will your storage admins provide the 24x7 support you could get from established storage vendors? Will they be able to fix the problem fast enough to keep your business running?
From this, I would gather that if you have storage admins more familiar with Solaris than Linux, Windows or MacOS,and select commodity x86 servers from IBM, Sun, HP, or Dell, they could build a solution that has less vendor lock-in than something off-the-shelf from Sun. Let's explore the fears of vendor lock-in further.
The storage vendor goes out of business
Sun has not been doing so well, so perhaps "open storage" was a way to warn existing Sun storage customers that building your own may be the next alternative. The title of the New York Times article says it all: ["Sun Microsystems Posts Loss and Plans to Reduce Jobs"]. Sun is a big company, so I don't expect them to close their doors entirely this year, but certainly the fear of being locked in to any storage vendor's solution gets worse if you fear the vendor might go out of business.
The storage vendor will get acquired by a vendor you don't like
We've seen this before. You don't like vendor A, so you buy kit from vendor B, only to have vendor A acquire vendor B after your purchase. Surprise!
The storage vendor will not support new applications, operating systems, or other new equipment
Here the fear is that the decisions you make today might prevent you from choices you want to make in the future. You might want to upgrade to the latest level of your operating system, but your storage vendor doesn't support it yet. Or maybe you want to upgrade your SAN to a faster bandwidth speed, like 8 Gbps, but your storage vendor doesn't support it yet. Or perhaps that change would require re-writing lots of scripts using the existing command line interfaces (CLI). Or perhaps your admins would require new training for the new configuration.
The storage vendor will raise prices or charge you more than you expect on follow-on upgrades
For most monolithic storage arrays, adding additional disk capacity means buying it from the same vendor as the controller. I heard of one company recently that tried to order an entry-level disk expansion drawer, at a lower price, solely to move the individual disk drives into a higher-end disk system. Guess what? It didn't work. Most storage vendors will not support such mixed configurations.
If you are going to purchase additional storage capacity for an existing disk system, it should cost no more than the capacity price rate of your original purchase. IBM offers upgrades at the going market rate, but not all competitors are this nice. Some take advantage of the vendor lock-in, charging more for upgrades and pocketing the difference as profit.
Vendor lock-in represents the obstacles to switching vendors in the event the vendor goes out of business, fails to support new software or hardware in the data center, or charges more than you are comfortable with. These obstacles can make it difficult to switch storage vendors, upgrade your applications, or meet other business obligations. IBM SAN Volume Controller and TotalStorage Productivity Center can help reduce or eliminate many of these concerns. IBM Global Services can help you, as much or as little as you want, in this transformation. Here are the four levels of the do-it-yourself continuum:
Let me figure it out myself
Tell me what to do
Help me do it
Do it for me
This is the self-service approach. Go to our website, download an [IBM Redbook], figure out whatyou need, and order the parts to do-it-yourself.
IBM Global Business Services can help understand your business requirements and tell you what you need to meet them.
IBM Global Technology Services can help design, assemble and deploy a solution, working with your staff to ensure skill and knowledge transfer.
IBM Managed Storage Services can manage your storage, on-site at your location, or at an IBM facility. IBM provides a variety of cloud computing and managed hosting services.
So, if you are currently a Sun server or storage customer concerned about these latest Sun announcements, give IBM a call, we'll help you switch over!
IPv4, IPv6, Wireless Mesh networking? No problem! You know Linux networking inside and out
Extensive knowledge of BIND, DHCPD, Squid, Apache, security, etc.
Experience working with [Moodle] would be most excellent (it is basically a PHP web application that maintains MySQL databases for lesson plans, homework assignments and other school related information)
Adept with Python scripting or could learn it quickly. OLPC has standardized on Python for scripting (although knowledge in Perl and PHP won't hurt either)
You look to implement a practical solution that less-skilled sysadmins can easily maintain, over a cooler but more complicated solution.
You play well with others. You don’t alienate collaborators with rude e-mails that assert your technical superiority (even though you are)
Your primary concern is meeting the educational needs of kids and teachers. You rate technical awesomeness a distant second to meeting those critical needs.
I've been working with Dev, Bryan and Sulochan for the past three months (remotely here from Tucson, AZ) but we've come to a point where we need on-site expertise. I will continue to provide remote support.
Given the number of readers who have contacted me over the past year looking for an IT job (or a different job because they are not happy where they are), this could be an amazing experience.
My XO laptop arrived Friday, December 21. It came from the [Give 1 Get 1 (G1G1)] program from the One Laptop Per Child (OLPC) foundation. The program continues to the end of this month (December 31).
Here are my first impressions.
Setup was Easy
Open the box, put in battery, and plug in the adapter. Enter your name and choose your favorite color for your stick figurine. No passwords, no parameters. Software is pre-installed and ready to use.
The four pages of instructions included how to open the unit (not intuitive), where the various connection ports are located, what the home screen and neighborhood screen look like, safety warnings, and a nice letter from Nicholas Negroponte with an 800 phone number and website in case more help is needed.
Connecting to the internet was the first thing I did. The neighborhood screen shows all the Wi-Fi access points. It recognized mine and three others. I clicked on mine, entered my WEP key, and was connected.
This is a Linux operating system running the Sugar user interface. There are four screens:
Neighborhood - shows all Wi-Fi access points
Friends - shows all other XO laptops nearby, in my case I am all alone
Home - your stick figurine, with all the applications you can choose from represented as icons at the bottom, just like OS X on my Mac Mini, or the launchpad on my Windows XP. The left panel holds clipboard items.
Application - Applications run in full-screen mode
Four buttons across the top allow you to jump to any screen instantly. Everything else is single left-click. No double-clicks or right-clicks.
A circle on the home screen designates which applications are running, and how much of the available 256MB RAM they are consuming. This makes it easy to see if you can run more applications or need to shut something down. You can jump to any application, or shut it down, from this view.
Shutting down the XO is done by clicking your stick figurine, and choosing shutdown.
I fired up the browser. The default 'home page' offers some help offline, as well as links to online resources and a Google search bar. The full-color 1200x900 display is very easy to read. You can hit ctrl+plus to make the fonts bigger. In bright sunlight, the screen automatically turns to greyscale. The built-in browser is easy enough to use, with standard back, forward, re-load, and bookmark buttons. The URL entry field also shows the page's title. It doesn't have tabs to see multiple pages at the same time, but I was able to fire up a second instance of the browser, so that I could alt-tab back and forth between the two web sites.
There are so many applications that they don't all fit on the bottom of the screen. Left and right tab buttons will display the next set. I don't know if it is possible to re-order the icons, but I can certainly see some applications appealing to different ages, and perhaps re-ordering them into age-specific groups might be helpful.
Basic applications include the Abiword word processor, a PDF viewer, a simple paint program, calculator, chat, and news RSS feed reader; TamTam music to play and edit compositions; and some learn-to-program-a-computer software including Pippy, Etoys, and TurtleArt.
The 'record' program lets you take 640x480 pictures with the built-in camera, and up to 45 seconds of video and audio recording. The picture above was taken with my XO, and edited online using [snipshot.com]. Another program can be used to make video calls to another computer, similar to Skype or IBM Lotus Sametime.
The XO has built-in microphone and speakers, but also microphone and speaker ports, as well as three USB ports, and a slot for an SD memory card.
The QWERTY keyboard is designed for small children's hands, so I found myself using my two index fingers in a hunt-and-peck style. People who use BlackBerrys or other hand-held devices might be able to use their two thumbs instead. Also, I am not used to a touchpad as the pointing device. My other laptops have a red knob between the G/H/B keys that acts like a joystick. So, I decided to attach my Apple keyboard/mouse to one USB port, which allows me faster typing and better resolution with my mouse.
I also inserted a 1GB SD card into the slot. Getting to the SD slot was challenging--you have to rotate the screen 90 degrees so that the lower right corner is over the laptop handle. It appears I need to purchase some tweezers to get my SD card back out, so until then, it will remain there as a permanent addition to my XO.
A terminal application provides a command line interface into Linux.
The 'vi' editor is installed, in case I need to make changes to fstab or anything else in my /etc directory.
There is no S-video or VGA port. However, a teacher could probably fold this laptop up in e-book mode and lay it flat on an [overhead projector], since the screen can handle bright sunlight in black-and-white mode.
The Journal and the Clipboard
There are no folders or subdirectories here. The journal acts as your desktop, holding all the files you have referenced, sorted in chronological order with the most recent on top. The journal application is started automatically when you boot up. My SD card is shown as a separate entry at the bottom right corner, but I have access only to files in the top-level directory on the card. The journal allows you to drag and drop between the system and the SD flash card. The list can be filtered by file type and application, so finding things is easy. You can also copy anything in the journal to the clipboard, appearing on the left panel of the home screen. You can then launch or paste this into other applications.
Pressing Alt-1 takes a 1200x900 snapshot of the current screen, and puts it into the journal. On websites that allow you to upload a file, including GMAIL, snipshot.com, etc., the browse button brings up the journal. So, for example, you could take a snapshot of the current webpage or paint creation, and send it as an attachment to someone via GMAIL. Google has an XO-enabled version of GMAIL that you can download from the OLPC activities page.
This entire post, including the picture above, was done with the XO laptop itself. I am impressed with the thought that went into this design, and I see great potential here. The interface adequately hides the Linux operating system for those who just want to use the computer, but makes it readily accessible for those who want to learn more about the Linux operating system and computer programming.
Continuing my rant from Monday's post [Time for a New Laptop], I got my new laptop Wednesday afternoon. I was hoping the transition would be quick, but that was not the case. Here were my initial steps prior to connecting my two laptops together for the big file transfer:
Document what my old workstation has
Back in 2007, I wrote a blog post on how to [Separate Programs from Data]. I have since added a Linux partition for dual-boot on my ThinkPad T60.
Windows XP SP3 operating system and programs
Red Hat Enterprise Linux 5.4
My Documents and other data
I also created a spreadsheet of all my tools, utilities and applications. I combined and deduplicated the list from the following sources:
Control Panel -> Add/Remove programs
Start -> Programs panels
Program taskbar at bottom of screen
The last one was critical. Over the years, I have gotten into the habit of saving those ZIP or EXE files that self-install programs into a separate directory, D:\Install-Files, so that if I had to uninstall an application, due to conflicts or compatibility issues, I could re-install it without having to download it again.
So, I have a total of 134 applications, which I have put into the following rough categories:
AV - editing and manipulating audio, video or graphics
Files - backup, copy or manipulate disks, files and file systems
Browser - Internet Explorer, Firefox, Opera and Google Chrome
Communications - Lotus Notes and Lotus Sametime
Connect - programs to connect to different Web and Wi-Fi services
Demo - programs I demonstrate to clients at briefings
Drivers - attach or sync to external devices, cell phones, PDAs
Games - not much here, the basic Solitaire, Minesweeper and Pinball
Help Desk - programs to diagnose, test and gather system information
Projects - special projects like Second Life or Lego Mindstorms
Lookup - programs to lookup information, like American Airlines TravelDesk
Meeting - I have FIVE different webinar conferencing tools
Office - presentations, spreadsheets and documents
Platform - Java, Adobe Air and other application runtime environments
Player - do I really need SIXTEEN different audio/video players?
Printer - print drivers and printer management software
Scanners - programs that scan for viruses, malware and adware
Tools - calculators, configurators, sizing tools, and estimators
Uploaders - programs to upload photos or files to various Web services
Backup my new workstation
My new ThinkPad T410 has a dual-core i5 64-bit Intel processor, so I burned a 64-bit version of [Clonezilla LiveCD] and booted the new system with that. The new system has the following configuration:
Windows XP SP3 operating system, programs and data
There were only 14.4GB of data, and it took 10 minutes to back up to an external USB disk. I ran it twice: first using the option to dump the entire disk, and second to dump just the selected partition. The results were roughly the same.
Run Workstation Setup Wizard
The Workstation Setup Wizard asks for all the pertinent information (location, time zone, userid/password) needed to complete the installation.
I made two small changes to redirect data from the C: drive to the D: drive.
Changed "My Documents" to point to D:\Documents which will move the files over from C: to D: to accomodate its new target location. See [Microsoft procedure] for details.
Edited C:\notes\notes.ini to point to D:\notes\data to store all the local replicas of my email and databases.
Install Ubuntu Desktop 10.04 LTS
My plan is to run Windows and Linux guests through virtualization. I decided to try out Ubuntu Desktop 10.04 LTS, affectionately known as Lucid Lynx, which can support a variety of different virtualization tools, including KVM, VirtualBox-OSE and Xen. I have two identical 15GB partitions (sda2 and sda3) that I can use to hold two different systems, or one can be a subdirectory of the other. For now, I'll leave sda3 empty.
Take another backup of my new workstation
I took a fresh new backup of partitions (sda1, sda2, sda6) with Clonezilla.
The next step involved a cross-over Ethernet cable, which I don't have. So that will have to wait until Thursday morning.
Continuing my saga for my [New Laptop], I have gotten all my programs operational, and now it is a good time to re-evaluate how I organize my data. You can read my previous posts on this series: [Day 1], [Day 2], [Day 3].
I started my career at IBM developing mainframe software. The naming convention was simple: you had 44-character dataset names (DSN), which could be divided into qualifiers separated by periods. Each qualifier could be up to 8 characters long. The first qualifier was called the "high level qualifier" (HLQ) and the last one was the "low level qualifier" (LLQ). Standard naming conventions helped with ownership and security (RACF), catalog management, policy-based management (DFSMS), and data format identification. For example:
PROD.PAYROLL.JCL
TEST.PAYROLL.JCL
PEARSON.TEST.JCL
In the first case, we see that the HLQ is "PROD" for production, the application is PAYROLL, and this file holds job control language (JCL). The LLQ often identified the file type. The second could be a version for testing a newer release of this application. The third represents user data; in this case, my userid PEARSON would hold my own written test JCL. I have seen successful naming conventions with 3, 4, 5 and even 6 qualifiers. The full dataset name remains the same, even if the file is moved from one disk to another, or migrated to tape.
(We had to help one client who had all their files with single-qualifier names, no more than 8 characters long, all in the Master Catalog (root directory). They wanted to implement RACF and DFSMS, and needed help converting all of their file names and related JCL to a 4-qualifier naming convention. It took seven months to make this transformation, but the client was quite pleased with the end result.)
While the mainframe has a restrictive approach to naming files, the operating systems on personal computers provide practically unlimited choices. File systems like NTFS or EXT3 support filenames as long as 255 characters, and pathnames up to roughly 32,000 characters. The problem is that when you move a file from one disk to another, or even from one directory structure to another, the pathname changes. If you rely on the pathname to provide critical information about the meaning or purpose of a file, that information can get lost when moving the files around.
I found several websites that offered organization advice. On The Happiness Project blog, Gretchen Rubin [busts 11 myths] about organization. On Zenhabits blog, Leo Babauta offers [18 De-cluttering tips].
Peter Walsh's [Tip No. 185] suggests using nouns to describe each folder. Granted these are about physical objects in your home or office, but some of the concepts can apply to digital objects on your disk drive.
"Use the computer’s sorting function. Put “AAA” (or a space) in front of the names of the most-used folders and “ZZZ” (or a bullet) in front of the least-used ones, so the former float to the top of an alphabetical list and the latter go to the bottom."
Personally, I hate spaces anywhere in directory and file names, and the thought of putting a space at the front of one to make it float to the top is even worse. Rather than resorting to naming folders with AAA or ZZZ, why not just limit the total number of files or directories so they are all visible on the screen? I often sort by date to access my most frequently-accessed or most-recently-updated files.
Of all the suggestions I found, Peter Walsh's "Use Nouns" seemed to be the most useful. Wikipedia has a fascinating article on [Biological Classification]. Certainly, if all living things can be put into classifications with only seven levels, we should not need more than seven levels of file system directory structure either! So, this is how I decided to organize my files on my new ThinkPad T410:
The first partition (C:) holds the Windows XP operating system, programs and applications. I have structured this so that if I had to replace my hard disk entirely while traveling, I could get a new drive and restore just the operating system on this drive, plus a few critical data files needed for the trip. I could then do a full recovery when I was back in the office. If I were hit with a virus that prevented Windows from booting up, I could re-install the Windows (or Linux) operating system without affecting any of my data.
The second partition (D:) will be for my most active data, files and databases. I have the Windows "My Documents" pointing to the D:\Documents directory. Under Archives, I will keep files for events that have completed, projects that have finished, and presentations I used that year. If I ever run out of space on my disk drive, I would delete or move off these archives first. I have a single folder for all Downloads, which I can then move to a more appropriate folder after I decide where to put them. My Office folder holds administrative items, like org charts, procedures, and so on.
As a consultant, many of my files relate to Events; these could be Briefings, Conferences, Meetings or Workshops. These are usually one to five days in duration, so here I can hold background materials for the clients involved, agendas, my notes on what transpired, and so on. I keep my Presentations separately, organized by topic. I am also involved with Projects, which might span several months or be ongoing tasks and assignments. I also keep my Resources separately; these could be templates, training materials, marketing research, whitepapers, and analyst reports.
A few folders I keep outside of this structure on the D: drive. [Evernote] is an application that provides "folksonomy" tagging. This is great in that I can access it from my phone, my laptop, or my desktop at home. Install-files are all those ZIP and EXE files to install applications after a fresh Windows install. If I ever had to wipe clean my C: drive and re-install Windows, I would then have this folder on D: drive to upgrade my system. Finally, I keep my Lotus Notes database directory on my D: drive. Since these are databases (NSF) files accessed directly by Lotus Notes, I saw no reason to put them under the D:\Documents directory structure.
A third partition will hold my multimedia files. These don't change often, are mostly read-only, and could be restored quickly as needed.
I'll give this new re-organization a try. Since I have to take a fresh backup to Tivoli Storage Manager anyway, now is the best time to re-organize the directory structure and update my dsm.opt options file.
In my presentations in Australia and New Zealand, I mentioned that people were re-discovering the benefits of removable media. While floppy diskettes were a convenient way of passing information from one person to another, they unfortunately did not have enough capacity. In today's world, you may need Gigabytes or Terabytes of re-writeable storage with a file system interface that can easily be passed from one person to another. In this post, I explore three options.
(FCC Disclaimer: I work for IBM, and IBM has no business relationship with Cirago at the time of this writing. Cirago has not paid me to mention their product, but instead provided me a free loaner that I promised to return to them after my evaluation is completed. This post should not be considered an endorsement for Cirago's products. List prices for Cirago and IBM products were determined from publicly available sources for the United States, and may vary in different countries. The views expressed herein may not necessarily reflect the views and opinions of either IBM or Cirago.)
I took a few photos so you can see what exactly this device looks like. Basically, it is a plastic box that holds a single naked disk drive. It has four little rubber feet so that it does not slip on your desk surface.
The inside is quite simple. The power and SATA connections match those of either a standard 3.5 inch drive, or the smaller form factor (SFF) 2.5 inch drive. However, to my dismay, it does not handle EIDE drives, of which I have a ton. After taking apart six different computer systems, I found only one that had SATA drives for me to try this unit out with.
The unit comes with a USB cable and AC/DC power adapter. In my case, I found the USB 3.0 cable too short for my liking. My tower systems are under my desk, but I like keeping docking stations like this on top of the desk, within easy reach; that wasn't going to happen with a cable this short.
Instead, I ended up putting it half-way in between, behind my desk, sitting on another spare system. Not ideal, but in theory there are USB-extension cables that probably could fix this.
Here it is with the drive inside. I had a 3.5 inch Western Digital [1600AAJS drive] 160 GB, SATA 3 Gbps, 8 MB Cache, 7200 RPM.
To compare the performance, I used a dual-core AMD [Athlon X2] system that I had built for my 2008 [One Laptop Per Child] project. I first ran with the drive externally in the Cirago docking station, then ran the same tests with the same drive internally on the native SATA controller. Although the Cirago documentation indicated that Windows was required, Ubuntu Linux 10.04 LTS worked just fine, using the flexible I/O [fio] benchmarking tool against an ext3 file system. (A sketch of comparable fio invocations follows the workload list below.)
Sequential Write - a common use for an external disk drive is backup.
Random read - randomly read files ranging from 5KB to 10MB in size.
Random mixed - randomly read/write files (50/50 mix) ranging from 5KB to 10MB in size.
[Table: read and write latency (msec) and bandwidth (KB/s) for the Random Mixed (50/50) workload, comparing the drive on the native SATA controller versus the Cirago docking station]
For sequential write, the Cirago performed well, only about 15 percent slower than native SATA. For random workloads, however, it was 30-40 percent slower. If you are wondering why I did not get USB 3.0 speeds, there are several factors involved here. First, with overheads, 5 Gbps USB 3.0 is expected to get only about 400 MB/sec. My SATA 2.0 controller maxes out at 375 MB/sec, and my USB 2.0 ports on my system are rated for 57 MB/sec, but with overheads will only get 20-25 MB/sec. Most spinning drives only get 75 to 110 MB/sec. Even solid-state drives top out at 250 MB/sec for sustained activity. Despite all that, my internal SATA drive only got 16 MB/sec, and externally with the Cirago 14 MB/sec in sustained write activity.
Here is the mess that is inside my system. The slot for drive 2 was blocked by cables, memory chips and the heat sink for my processor. It is possible to damage a system just trying to squeeze between these obstacles.
However, the point of this post is "removable media". Having to open up the case and insert the second drive and wire it up to the correct SATA port was a pain, and certainly a more difficult challenge than the average PC user wishes to tackle.
Price-wise, the Cirago lists for $49 USD, and the 160GB drive I used lists for $69, so the $118 combination is about what you would pay for a fully integrated external USB drive. However, if you have lots of loose drives, then this could be more convenient and start to save you some money.
IBM RDX disk backup system
Another problem with the Cirago approach is that the disk drives are naked, with printed circuit board (PCB) exposed. When not in the docking station, where do you put your drive? Did you keep the [anti-static ESD bag] that it came in when you bought it? And once inside the bag, now what? Do you want to just stack it up in a pile with your other pieces of equipment?
To solve this, IBM offers the RDX backup system. These are fully compatible with other RDX systems from Dell, HP, Imation, NEC, Quantum, and Tandberg Data. The concept is to have a docking station that takes removable, rugged, plastic-coated, disk-enclosed cartridges. The docking station can be part of the PC itself, similar to how CD/DVD drives are installed, or a stand-alone USB 2.0 system, capable of processing data up to 25 MB/sec.
The idea is not new; about 10 years ago we had [Iomega "zip" drives] that offered disk-enclosed cartridges with capacities of 100, 250 and 750MB. Iomega had its fair share of problems with the zip drive, which was ranked in 2006 as the 15th worst technology product of all time, and the company was eventually bought out by EMC two years later (as if EMC has not had enough failures of its own!)
The problem with zip drives was that they did not hold as much as CD or DVD media, and were more expensive. By comparison, IBM RDX cartridges come in capacities from 160GB to 750GB, at list prices starting at $127 USD.
IBM LTO tape with Long-Term File System
Removable media is not just for backup. Disk cartridges, like the IBM RDX above, have the advantage of being random access, but most tapes are accessed sequentially. IBM has solved this also, with the new IBM Long Term File System [LTFS], available for LTO-5 tape cartridges.
With LTFS, the LTO-5 tape cartridge can now act as a super-large USB memory stick for passing information from one person to the next. The LTO-5 cartridge can handle up to 3TB of compressed data at up to SAS speeds of 140 MB/sec. An LTO-5 tape cartridge lists for only $87 USD.
The LTO-5 drives, such as the IBM [TS2250 drive], can read LTO-3, LTO-4 and LTO-5 cartridges, and can write LTO-4 and LTO-5 cartridges, in a manner that is fully compatible with LTO drives from HP or Quantum. LTO-3, LTO-4 and LTO-5 cartridges are available in WORM or rewriteable formats. LTO-4 and LTO-5 cartridges can be encrypted with built-in 256-bit AES encryption. With three drive manufacturers and seven cartridge manufacturers, there is no threat of vendor lock-in with this approach.
These three options offer various trade-offs in price, performance, security and convenience. Not surprisingly, tape continues to be the cheapest option.
It seems everyone is talking about stacks, appliances and clouds.
On StorageBod, fellow blogger Martin Glassborow has a post titled [Pancakes!] He feels that everyone from Hitachi to Oracle is turning into the IT equivalent of the International House of Pancakes [IHOP] offering integrated stacks of software, servers and storage.
Cisco introduced its "Unified Computing System" about a year ago, [reinventing the datacenter with an all-Ethernet approach]. Cisco does not offer its own hypervisor software nor storage, so there are two choices. First, Cisco has entered a joint venture, called Acadia, with VMware and EMC, to form the Virtual Computing Environment (VCE) coalition. The resulting stack was named Vblock, which one blogger hyphenated as Vb-lock to raise awareness of the proprietary vendor lock-in nature of this stack. Second, Cisco, VMware and NetApp had a similar set of [Barney press releases] to announce a viable storage alternative for those not married to EMC.
"Only when it makes sense. Oracle/Sun has the better argument: when you know exactly what you want from your database, we’ll sell you an integrated appliance that will do exactly that. And it’s fine if you roll your own.
But those are industry-wide issues. There are UCS/VCE-specific issues as well:
Cost. All the integration work among 3 different companies costs money. They aren’t replacing existing costs – they are adding costs. Without, in theory, charging more.
Lock-in. UCS/Vblock is, effectively, a mainframe with a network backplane.
Barriers to entry. Are there any? Cisco flagged hypervisor bypass and large memory support as unique value-add – and neither seems any more than a medium-term advantage.
BOT? Build, Operate, Transfer. In theory Vblocks are easier and faster to install and manage. But customers are asking that Acadia BOT their new Vblocks. The customer benefit over current integrator practice? Lower BOT costs? Or?
Price. The 3 most expensive IT vendors banding together?
Longevity. Industry “partnerships” don’t have a good record of long-term success. Each of these companies has its own competitive stresses and financial imperatives, and while the stars may be aligned today, where will they be in 3 years? Unless Cisco is piloting an eventual takeover."
Fellow blogger Bob Sutor (IBM) has an excellent post titled [Appliances and Linux]. Here is an excerpt:
"In your kitchen you have special appliances that, presumably, do individual things well. Your refrigerator keeps things cold, your oven makes them hot, and your blender purees and liquifies them. There is room in a kitchen for each of these. They work individually but when you are making a meal they each have a role to play in creating the whole.
You could go out and buy the metal, glass, wires, electrical gadgets, and so on that you would need to make each appliance, but it is faster, cheaper, and undoubtedly safer to buy them already manufactured. For each device you have a choice of providers and you can pay more for additional features and quality.
In the IT world it is far more common to buy the bits and pieces that make up a final solution. That is, you might separately order the hardware components, the operating system, and the applications, and then have someone put them all together for you. If you have an existing configuration you might add more blades or more storage devices.
You don’t have to do this, however, in every situation. Just from a hardware perspective, you can buy a ready-made machine just waiting for the on switch to be flicked and the software installed. Conversely, you might get a pre-made software image with operating system and applications in place, ready to be provisioned to your choice of hardware. We can get even fancier in that the software image might be deployable onto a virtual machine and so be a ready made solution runnable on a cloud.
Thus in the IT world we can talk about hardware-only appliances, software-only appliances (often called virtual software appliances), and complete hardware and software combinations. The last is most comparable to that refrigerator or oven in your kitchen."
If your company was a restaurant, how many employees would you have on hand to produce your own electricity from gas generators, pump your own water from a well, and assemble your own toasters and blenders from wires and motors? I think this is why companies are re-thinking the way they do their own IT.
Rather than business-as-usual, perhaps a mix of pre-configured appliances, consisting of software, server and storage stacked to meet a specific workload, connected to public cloud utility companies, might be the better approach. By 2013, some analysts feel that as many as 20 percent of companies might not even have a traditional IT datacenter anymore.
The private cloud:
“By employing techniques like virtualization, automated management, and utility-billing models, IT managers can evolve the internal datacenter into a ‘private cloud’ that offers many of the performance, scalability, and cost-saving benefits associated with public clouds. Microsoft provides the foundation for private clouds with infrastructure solutions to match a range of customer sizes, needs and geographies.”
The public cloud:
“Cloud computing is expanding the traditional web-hosting model to a point where enterprises are able to off-load commodity applications to third-party service providers (hosters) and, in the near future, the Microsoft Azure Services Platform. Using Microsoft infrastructure software and Web-based applications, the public cloud allows companies to move applications between private and public clouds.”
Finally, I saw this from fellow blogger Barry Burke (EMC), aka the Storage Anarchist, titled [a walk through the clouds], which is really a two-part post.
The first part describes a possible future for EMC customers written by EMC employee David Meiri, envisioning a wonderful world with "No more Metas, Hypers, BIN Files...."
The vision is a pleasant one, and not far from reality. While EMC prefers to use the term "private cloud" to refer to both on-premises and off-premises-but-only-your-employees-can-VPN-to-it-and-your-IT-staff-still-manages-it flavors, the overall vision is available today from a variety of Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) providers.
A good analogy for "private cloud" might be a corporate "intranet" that is accessible only within the company's firewall. Intranets allowed companies to post information for employees on internal websites, using standard HTML and the standard web browsers already deployed on most PCs and workstations. Web pages running on an intranet can easily be moved to an external-facing website without too much rework or trouble.
The second part has Barry claiming that EMC has made progress towards a "Virtual Storage Server" that might be announced at next month's EMC World conference.
When people hear "Storage Virtualization" most immediately think of the two market leaders, IBM SAN Volume Controller and Hitachi Data Systems (HDS) Universal Storage Platform (USP) products. Those with a tape bent might throw in IBM's TS7000 virtual tape libraries or Oracle/Sun's Virtual Storage Manager (VSM). And those focused on software-only solutions might recall Symantec's Veritas Volume Manager (VxVM), DataCore's SANsymphony, or FalconStor's IPStor products.
But what about EMC's failed attempt at storage virtualization, the Invista? After five years of failing to deliver value, EMC has so far publicized only ONE customer reference account, and I estimate that perhaps only a few dozen actual customers are still running on this platform. Compare that to IBM selling tens of thousands of SAN Volume Controllers, and HDS selling thousands of their various USP-V and USP-VM products, and you quickly realize that EMC has a lot of catching up to do. EMC first delivered Invista about 18 months after IBM SAN Volume Controller, similar to their introduction of Atmos 18 months after our Scale-Out File Services (SoFS), and their latest CLARiiON-based V-Max coming out 18 months after IBM's XIV storage system.
So what will EMC's Invista follow-on "Virtual Storage Server" product look like? No idea. It might be another five years before you actually hear about a customer using it. But why wait for EMC to get their act together?
IBM offers solutions TODAY that can make life as easy as envisioned here. IBM offers integrated systems sold as ready-to-use appliances, customized "stacks" that can be built to handle particular workloads, residing on-premises or hosted at an IBM facility, and public cloud "as-a-service" offerings on the IBM Cloud.
Wrapping up this week's theme on the XO laptop, I decided to take on the challenge of printing. I managed to print from my XO laptop to my laserjet printer. I checked the One Laptop Per Child [OLPC] website, and found there is no built-in support for printers, but there have been several people asking how to print from the XO, so here are the steps I took to make it happen.
(Note: I did all of these steps successfully on my Qemu-emulated system first, and then performed them on my XO laptop)
Step 1: Determine if you have an acceptable printer
The XO laptop can only connect to a printer via USB cable or over the network. Check your printer to see if it supports either of these two options. In my case, my printer is connected to my Linksys hub that offers Wi-Fi in my home.
The XO runs a modified version of Red Hat's Fedora 7, so we need to also determine if the printer is supported on Linux. Check the [Open Printing Database] for the level of support. This database has come up with the following ranking system. Printers are categorized according to how well they work under Linux and Unix. The ratings do not pertain to whether or not the printer will be auto-recognized or auto-configured, but merely to the highest level of functionality achieved.
Perfectly - everything the printer can do is working also under Linux
Mostly - work almost perfectly - funny enhanced resolution modes may be missing, or the color is a bit off, but nothing that would make the printouts not useful
Partially - mostly don't work; you may be able to print only in black and white on a color printer, or the printouts look horrible
Paperweight - These printers don't work at all. They may work in the future, but don't count on it
If your printer only supports a parallel cable connection, or does not have a high enough ranking above, go buy another printer. The [Linux Foundation] website offers a list of suggested printers and tutorials.
In my case, I have a Brother HL5250-DN black-and-white laserjet printer connected over a network to Windows XP, OS X and my other Linux systems. It is rated as supporting Linux perfectly, so I decided to use this for my XO laptop.
Step 2: Install Common UNIX Printing System (CUPS)
Technically, Linux is not UNIX, but for our purposes it is close enough. Start the Terminal activity, use "su" to change to root, and then use "yum" to install CUPS. Yum will automatically determine what other packages are needed, in this case paps and tmpwatch. Once installed, use "/usr/sbin/cupsd" to get the CUPS daemon started, and add this to the end of rc.local so that it gets started every time you reboot.
[olpc@xo-10-CC-6F ~]$ su
bash-3.2# yum install cups
...
Total download size = 3.0 M
Is this OK [y/N]? y
Step 3: Install the Opera browser, Adobe Flash, and Sun Java
To download the appropriate drivers, you may need a browser that can handle file downloads. I have tried to do this with the built-in Browse activity (aka Gecko) but encountered problems. I have both Opera and Firefox installed, but I will focus on Opera for this effort. I also installed an older version of the Flash player (it worked better than the latest version) and the Java JRE. Follow the OLPC Wiki instructions for [Opera, Adobe Flash, and Sun Java] installation, then verify with the following [Java and Flash] testers.
Step 4: Download drivers and packages unique for your printer
In my case, I used Opera to get to the [Brother Linux Driver Homepage], and downloaded the RPMs for the LPR and CUPS wrapper. These are the ones listed under "Drivers for Red Hat, Mandrake (Mandriva), SuSE". I saved these under the /home/olpc directory.
Step 5: Set a root password
By default, the root user has no password. However, you will need it to be something for later steps, so here is the process to create a root password. I set mine to "tony", which normally would be considered too simple a password, but ignore those messages and continue. We will remove it in Step 8 (below) to put things back to normal.
[olpc@xo-10-CC-6F ~]$ su
bash-3.2# passwd
Changing password for user root.
New UNIX password: tony
BAD PASSWORD: it is too short
Retype new UNIX password: tony
passwd: all authentication tokens updated successfully
bash-3.2# exit
[olpc@xo-10-CC-6F ~]$
Step 6: Launch CUPS administration
Here I followed the instructions in Robert Spotswood's [Printing In Linux with CUPS] tutorial. Launch the Opera browser, and enter "http://localhost:631/admin" as the URL. The localhost refers to the laptop itself, and 631 is the special port that CUPS listens to for browsers. You can also use 127.0.0.1; it and "localhost" can be used interchangeably.
In my case, it detected both of my networked printers, so I selected the HL5250DN, and entered the location of my PPD file "/usr/share/cups/model/HL5250DN.ppd" that was created in Step 4. I set the URI to "lpd://192.168.0.75/binary_p1" per the instructions [Network Setting in CUPS based Linux system] on the Brother FAQ page. I changed the page size from "A4" to "Letter", and set this printer as the default printer. When it asks for userid and password, that is where you would enter "root" for the user, and "tony" or whatever you decided to set your root password to.
Select "Print a Test Page" to verify that everything is working.
Step 7: Printing actual files
Sadly, I don't know Opera well enough to know how to print from there. So, I went over to my trusted Firefox browser. Select File->Page Setup to specify the settings, File->Print Preview to see what it will look like, and then File->Print to send it to the printer.
To print the file "out.txt" that is in your /home/olpc directory, for example, enter "file:///home/olpc/out.txt" as the URL in the Firefox browser. This will show the file, which you can then print to your printer. I had to specify 200% scaling, otherwise the fonts were too small to read.
Step 8: Remove the "root" password
If you want to remove the root password, here are the steps.
[olpc@xo-10-CC-6F ~]$ su
Password: tony
bash-3.2# passwd -d root
Removing password for user root.
passwd: Success
bash-3.2# exit
[olpc@xo-10-CC-6F ~]$
Now the problem is that there is no way to print from any of the Sugar activities. The best place to put in print support would be the Journal activity. Along the bottom, where the mounted USB keys are located, could be an icon for a printer, and dragging a file down to the printer object could cause it to be sent to the printer.
The alternative is to write some scripts, invocable from the Terminal activity, to determine what is in the journal and send it to LPR with the appropriate parameters.
I did not have time to do either of these, but perhaps someone out there can take on that as a project.
Based on this success, and perhaps because I am also fluent in Spanish, I was asked to help with Proyecto Ceibal, the team for OLPC Uruguay. Normally the XS school server resides at the school location itself, so that even if the internet connection is disrupted or limited, the school kids can continue to access each other and the web cache content until the internet connection is resumed. However, with a diverse development team with people in the United States, Uruguay, and India, we first looked to Linux hosting providers that would agree to provide free or low-cost monthly access. We spent (make that "wasted") the month of May investigating. Most that I talked to were not interested in having a customized Linux kernel on non-standard hardware on their shop floor, and wanted instead to offer their own standard Linux build on existing standard servers, managed by their own system administrators, or were not interested in providing it for free. Since the XS-163 kernel is customized for the x86 architecture, it is one of those exceptions where we could not host it on an IBM POWER or mainframe as a virtual guest.
This got picked up as an [idea] for Google's [Summer of Code], and we are mentoring Tarun, a 19-year-old student, to act as lead software developer. However, summer was fast approaching, and we wanted this ready for the next semester. In June, our project leader, Greg, came up with a new plan: build a machine and have it connected at an internet service provider that would cover the cost of bandwidth, and be willing to accept this with remote administration. We found a volunteer organization to cover this -- thank you Glen and Vicki!
We found a location, so the request to me sounded simple enough: put together a PC from commodity parts that meets the requirements of the customized Linux kernel, the latest release being called [XS-163]. The server would have two disk drives, three Ethernet ports, and 2GB of memory; and be installed with the customized XS-163 software, SSHD for remote administration, the Apache web server, the PostgreSQL database and the PHP programming language. Of course, the team wanted this for as little cost as possible, and for me to document the process so that it could be repeated elsewhere. Some stretch goals included a dual-boot with Debian 4.0 Etch Linux for development/test purposes, an alternative database such as MySQL for testing, a backup procedure, and a Recover-DVD in case something goes wrong.
Some interesting things happened:
The XS-163 is shipped as an ISO file representing a LiveCD bootable Linux that will wipe your system clean and lay down the exact customized software for a one-drive, three-Ethernet-port server. Since it is based on Red Hat's Fedora 7 Linux base, I found it helpful to install that instead, and experiment with moving sections of code over. This is similar to geneticists extracting the DNA from the cell of a pit bull and putting it into the cell of a poodle. I would not recommend this for anyone not familiar with Linux.
I also experimented with modifying the pre-built XS-163 CD image by cracking open the squashfs, hacking the contents, and then putting it back together and burning a new CD. This provided some interesting insight, but in the end I was able to do it all from the standard XS-163 image.
Once I figured out the appropriate "scaffolding" required, I managed to proceed quickly, with running versions of XS-163, plain vanilla Fedora 7, and Debian 4, in a multi-boot configuration.
The BIOS "raid" capability was really more like BIOS-assisted RAID for Windows operating system drivers. This"fake raid" wasn't supported by Linux, so I used Linux's built-in "software raid" instead, which allowed somepartitions to be raid-mirrored, and other partitions to be un-mirrored. Why not mirror everything? With two160GB SATA drives, you have three choices:
No RAID, for a total space of 320GB
RAID everything, for a total space of 160GB
Tiered information infrastructure: use RAID for some partitions, but not all.
The last approach made sense, as a lot of the data is cached web page content, easily retrievable from the internet. This also allowed some "scratch space" for downloading large files and so on. For example, 90GB mirrored containing the OS images, settings and critical applications, plus 70GB on each drive for scratch and web cache, results in a total of 230GB of disk space, which is nearly a 44 percent improvement over an all-RAID solution.
While [Linux LVM2] provides software-based "storage virtualization" similar to the hardware-based IBM System Storage SAN Volume Controller (SVC), it was a bad idea putting the different "root" directories of my many OS images on there. Linux, like most operating systems, expects things to be in the same place where it last shut down, but in a multi-boot environment you might boot the first OS, move things around, and then when you try to boot the second OS, it doesn't work anymore, or corrupts what it does find, or hangs with a "kernel panic". In the end, I decided to use RAID non-LVM partitions for the root directories, and only use LVM2 for data that is not needed at boot time.
While they are both Linux, Debian and Fedora were different enough to cause me headaches. Settings were different, parameters were different, file directories were different. Not quite as religious as MacOS-versus-Windows, but you get the picture.
During this time, the facility was out getting a domain name, IP address, subnet mask and so on, so I tested with my internal 192.168.x.y addresses and figured I would change them to whatever they should be the day I shipped the unit. (I'll find out next week if that was the right approach!)
Afraid that something might go wrong while I am in Tokyo, Japan next week (July 7-11), or Mumbai, India the following week (July 14-18), I added a Secure Shell [SSH] daemon that runs automatically at boot time. This involves putting the public key on the server, while each remote admin keeps their own private key on their own client machine. I know all about public/private key pairs, as IBM is a leader in encryption technology, and was the first to deliver built-in encryption with the IBM System Storage TS1120 tape drive.
Giving users access to all their files from any OS image required that I either (a) have identical copies everywhere, or (b) have a shared partition. The latter turned out to be the best choice, with an LVM2 logical volume for the "/home" directory that is shared among all of the OS images. As we develop the application, we might find other directories that make sense to share as well.
For developing across platforms, I wanted the Ethernet devices (eth0, eth1, and so on) to match the actual ports they are supposed to be connected to in a static IP configuration. Most people use DHCP so it doesn't matter, but the XS software requires it: for example, "eth0" as the 1 Gbps port to the WAN, and "eth1" and "eth2" as the two 10/100 Mbps PCI NIC cards to other servers. Tying the interface names to specific hardware ports was different on Fedora and Debian, but I got it working.
While it was a stretch goal to develop a backup method, one that could perform Bare Machine Recovery from a bootable DVD, it turned out I needed to do this anyway, just to keep from losing my work in case things went wrong. I used an external USB drive to develop the process, and got everything to fit onto a single 4GB DVD. Using IBM Tivoli Storage Manager (TSM) for this seemed overkill, and [Mondo Rescue] didn't handle LVM2+RAID as well as I wanted, so I chose [partimage] instead, which backs up each primary partition, mirrored partition, or LVM2 logical volume, keeping all the time stamps, ownerships, and symbolic links intact. It has the ability to chop up the output into fixed-size pieces, which is helpful if you are going to burn them on 700MB CDs or 4.7GB DVDs. In my case, my FAT32-formatted external USB disk drive can't handle files bigger than 2GB, so this feature was helpful for that as well. I standardized on 660 MiB [about 692MB] per piece, since that met all criteria.
The folks at [SysRescCD] saved the day. The standard SysRescCD assigned eth0, eth1, and eth2 differently than the three base OS images, but the nice folks in France who write SysRescCD created a customized [kernel parameter that allowed the assignments to be fixed per MAC address] in support of this project. With this in place, I was able to make a live Boot-CD that brings up SSH, with all the users, passwords, and Ethernet devices matching the hardware. I installed this LiveCD as the "Rescue Image" on the hard disk itself, and also made a Recovery-DVD that boots up just like the Boot-CD, but contains the 4GB of backup files.
For testing, I used Linux's built-in Kernel-based Virtual Machine [KVM], which works like VMware, but is open source and included in the 2.6.20 kernels that I am using. IBM is the leading reseller of VMware and has been doing server virtualization for the past 40 years, so I am comfortable with the technology. The XS-163 platform runs Apache and PostgreSQL as a platform for [Moodle], an open source class management system, and the combination is memory-intensive enough that I did not want to incur the overhead of running production this way, but it was great for testing!
With all this in place, the system is designed to not need a Linux system admin or XS-163/Moodle expert at the facility. Instead, all we need is someone to insert the Boot-CD or Recover-DVD and reboot the system if needed.
Just before packing up the unit for shipment, I changed the IP addresses to the values needed at the destination facility, updated the [GRUB boot loader] default, and made a final backup, which burned the Recover-DVD. Hopefully, it works by just turning on the unit, [headless], without any keyboard, monitor or configuration required. Fingers crossed!
So, thanks to the rest of my team: Greg, Glen, Vicki, Tarun, Marcel, Pablo and Said. I am very excited to be part of this, and look forward to seeing this become something remarkable!
I had a great weekend, participating in this year's ["World Laughter Day"] yesterday. Preparing for tonight's festivities found me pulling out the various packages from "Simply Dinners" in my freezer.
A Tucson-based company, [Simply Dinners] offers an alternative to restaurant eating. My sister went there, assembled a set of freezer-proof plastic bags containing all the right ingredients based on specific recipes, and gave them to me for my birthday, and they have been sitting in my freezer ever since... until last weekend.
My sister was careful to choose items that fit the [Paleolithic Diet] my nutritionist has me on. However, I was skeptical that any plastic bag full of frozen groceries would be any better than anything I could assemble on my own. I did, after all, attend "chef school" and know how to cook well. Each package was intended to be a "dinner for two", but since I am single, each was two meals for me.
So, I decided to try them out, which would also give me more room in my freezer for incoming items, and they came out very well. On the outside of each plastic bag was a label that explained all the steps required to heat the food. Partially-cooked vegetables were wrapped in foil, and went in for the last 10 minutes of cooking the meat. The process was straightforward, and the meals were delicious, but nothing I could not have done on my own with a recipe and a trip to the grocery store.
The question is whether someone with little or no skills could achieve similar, or at least acceptable, results. I have friends who are limited to assembling sandwiches from luncheon meats and cheese slices, as anything involving heat other than simply boiling water is beyond their skills.
The key difference between "cooking for yourself" and "building your own storage" is that you aren't building storage for just yourself. Unless you are a one-person SMB company, you are building storage that all of your employees and managers count on to do their jobs, and by extension your customers and stockholders count on.
Of course I had to read responses from others before jumping in with my thoughts. Dave Raffo from Storage Soup writes [Sun going down in storage], feeling this is yet another indication that Sun has lost their mind, recounting previous events that support that theory. EMC blogger Mark Twomey, in his StorageZilla post [When Open Isn't], felt a little bit guilty kicking a competitor when down. EMC blogger Chuck Hollis questions the reasons people might be tempted to even try this in his post [Do-it-Yourself Storage]. Here's an excerpt:
"I really, really struggle with this concept, I do. Here's why:
Anything I use and get comfortable with -- well, I'm "locked in" to a certain degree. If I use a lot of storage software X; well, I'm sorta locked in, aren't I? Or, if I put my servers-as-storage on a three-year lease, I'm kind of locked in, aren't I?"
(For EMC, vendor lock-in is great when customers are using and comfortable with EMC products, and awful when they use and are comfortable with storage from someone else. But nobody who is "comfortable" with what they have ever complains about "vendor lock-in", do they? It's the ones who are growing uncomfortable who feel trapped into changing. How involved a company's use of EMC's proprietary interfaces is can greatly determine the obstacles in switching to a different vendor. Of course, if you count yourself as someone growing uncomfortable with your existing storage vendor, IBM can help you fix that problem, but that is a subject for another post.)
Worried about "vendor lock-in"? Try "admin lock-in" where you must keep a storage admin around because he or shewas the one that put your storage together. I've seen several companies held hostage by their system adminsfor home-grown scripts that serve as "duct tape for the enterprise".The other issue is whether you have storage admins who have the necessary hardware and software engineering skillsto put suitable storage together. There are some very smart storage admins I know who could, and others that wouldhave a difficult time with this.
No doubt this is promising for the home office. I myself have taken several PCs that were running older versions of Windows, but were not powerful enough to upgrade to Windows Vista, wiped them clean, loaded Linux, and configured them as everything from simple browser workstations to full LAMP application server configurations. While this might sound easy, I am a professional hardware and software engineer with Linux skills. I have no doubt that someone with sufficient engineering and Solaris skills could put together a storage system for home use.
One area where Sun definitely benefits from this "Open Storage" approach is in developing Solaris skills. I have no personal experience with OpenSolaris, but assume that if you learn it, you would be able to switch over to full Solaris quite easily. Today, most people come into the workforce with Windows, Linux and/or MacOS skills, and this could be Sun's way of getting fresh new faces who understand Solaris commands to replace retiring "baby boomers". The lack of Solaris-knowledgeable admins is perhaps one reason why companies are switching to IBM AIX, Linux or Windows in their data centers.
Certainly, IBM's strategic choice to support Linux has been a great success. People learn Linux on their home systems and at school, and are able to carry those skills to Linux running on everything from the smallest IBM blade server to IBM's biggest mainframe.
The videos Sun provides with "recipes" on how to put together various "storage configurations in ten minutes" appear simpler than last summer's "How to hack an Apple iPhone to switch away from AT&T" procedures.
This week I'm in Los Angeles for the Systems Technology Conference (STC '08). We have over 1900 IT professionals attending, of which 1200 are IBMers from the North America, Latin America, and Asia Pacific regions, as well as another 350 IBM Business Partners. The rest, including me, are from worldwide teams or other areas.
Last January, IBM reorganized its team to be more client-focused. Instead of being focused on products, we are now client-centric, with teams to cover our large enterprise systems through the direct sales force, business systems for sales through our channel business partners, and industry systems for specific areas like deep computing, digital surveillance and retail systems solutions.
In addition to the 788 sessions to attend over these next four days, we had a few main tent sessions. My third-line manager (my boss' boss' boss), David Gelardi, presented Enterprise Systems. This is the group I am in.
Akemi Watanabe presented for Business Systems. Her native language is Japanese, so to do an entire talk in English was quite impressive. Her focus is on SMB accounts, those customers with fewer than 1000 employees that are looking for easy-to-use solutions. She mentioned IBM's new [Blue Business Platform], which includes Lotus Foundation Start, an Application Integration Toolkit, and the Global Application Marketplace.
Part of this process is the merger of System p and System i into "POWER" systems, and then offering both midrange and enterprise versions of these that run AIX, i5/OS and Linux on POWER. It turns out that only 9 percent of our System i customers are on this platform exclusively. Another 87 percent also have Windows, so it makes sense to offer i5/OS on BladeCenter, to consolidate Windows servers from HP, Dell or Sun over to IBM.
Meanwhile, IBM's strategy to support Linux has proven successful. 25 percent of x86 servers now run Linux. IBM has 600 full-time developers for Linux, over 500 of whom contributed to the latest 2.6 kernel development. Our ["chiphopper"] program has successfully ported over 900 applications. There are now over 6500 applications that run on Linux, through our strategic alliances with the Red Hat (RHEL) and Novell (SUSE) distributions of Linux.
Her recommendation to SMB reps: learn POWER systems, BladeCenter, and Linux. I agree!
Mary Coucher presented Industry Systems. In addition to the game chips for the Sony Playstation, Nintendo Wii, and Microsoft Xbox 360, this segment focuses on Digital Video Surveillance (DVS), Retail Solutions, Healthcare and Life Sciences (HCLS), OEM and embedded solutions, and Deep Computing. She mentioned our recently announced iDataPlex solution.
IBM is focused on "real-world-aware" applications, which include traffic, crime, surveillance, fraud, and RFID enablement. These are streams of data that arrive in real time and need to be dealt with now, not later.
Most people know that IBM has the majority of the top 500 supercomputers, but few realize that IBM has also delivered solutions to the top 100 green companies. IBM's success is explained in more detail in this [Press Release].
The group split up into four different platform meetings: Storage, Modular, Power, and Mainframe. Barry Rudolph presented for the Storage platform. He talked about the explosion in information, business opportunities, risk and cost management. IBM has shifted from being product-focused, to the stack of servers and storage, to our latest focus on solutions across the infrastructure. He mentioned our DARPA win for [PERCS], which stands for Productive, Easy-to-use, Reliable Computing System.
My session was the first in the morning, at 8:30am, but I managed to pack the room full of people. A few looked like they had just rolled in from Brocade's special get-together at Casey's Irish Pub the night before. I presented how IBM's storage strategy for the information infrastructure fits into the greater corporate-wide themes. To liven things up, I gave out copies of my book [Inside System Storage: Volume I] to those who asked or answered the toughest questions.
Data Deduplication and IBM Tivoli Storage Manager (TSM)
IBM's Toby Marek compared and contrasted the various data deduplication technologies and products available, and how to deploy them as the repository for TSM workloads. She is a software engineer for our TSM software product, and gave a fair comparison between IBM System Storage N series Advanced Single Instance Storage (A-SIS), IBM Diligent, and other solutions out in the marketplace. If you are going to combine technologies, then it is best to dedupe first, then compress, and finally encrypt the data. She also explained the many clever ways that TSM does data reduction at the client side, which greatly reduces the bandwidth traffic over the LAN, as well as reducing disk and tape resources for storage. These include progressive "incremental forever" backup for file selection, incremental backups for databases, and adaptive sub-file backup. Because of these data reduction techniques, you may not get as much benefit as deduplication vendors claim.
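The reason for that dedupe-compress-encrypt ordering is worth spelling out: compression removes the redundancy, and encryption deliberately randomizes the bytes, so a deduplicator running after either step has nothing left to match. Here is a minimal Python sketch of the idea; zlib and the Fernet cipher (from the third-party "cryptography" package) are my stand-ins for illustration, not what TSM or the products Toby compared actually use:

    import hashlib
    import zlib
    from cryptography.fernet import Fernet  # pip install cryptography

    cipher = Fernet(Fernet.generate_key())
    block = b"the same backup data, seen twice " * 32
    seen = set()

    # Encrypting first defeats deduplication: identical plaintexts yield
    # different ciphertexts, because Fernet salts each message with a random IV.
    assert cipher.encrypt(block) != cipher.encrypt(block)

    def ingest(chunk):
        """Recommended order: 1) dedupe raw bytes, 2) compress, 3) encrypt."""
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:
            return None                          # duplicate: store a reference only
        seen.add(digest)
        return cipher.encrypt(zlib.compress(chunk))

    assert ingest(block) is not None             # first copy is stored
    assert ingest(block) is None                 # second copy deduplicates away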
The Business Value of Energy-Efficient Data Centers
Scott Barielle did a great job presenting the issues related to the Green IT data center. He is part of IBM's "STG Lab Services" team that does energy efficiency studies for customers. It is not unusual for his team to find potential savings of up to 80 percent of the Watts consumed in a client's data center.
IBM has done a lot to make its products more energy efficient. For example, in the United States, most data centers are supplied three-phase 480V AC current, but this is often stepped down to 208V or 110V with power distribution units (PDUs). IBM's equipment allows for direct connection to this 480V, eliminating the step-down loss. This is available for the IBM System z mainframe, the IBM System Storage DS8000 disk system, and larger full-frame models of our POWER-based servers, and will probably be rolled out to some of our other offerings later this year. The end result saves 8 to 14 percent in energy costs.
Scott had some interesting statistics. Typical US data centers spend only about 9 percent of their IT budget on power and cooling costs. The majority of clients that engage IBM for an energy efficiency study are not trying to reduce their operational expenditures (OPEX); rather, they have run out, or are close to running out, of the total kW rating of their current facility, and have been turned down by their upper management to spend the average $20 million USD needed to build a new one. The cost of electricity in the USA has risen very slowly over the past 35 years, and is tied more to fluctuations in natural gas prices than to oil prices. (A recent article in the Dallas News confirmed this: ["As electricity rates go up, natural gas' high prices, deregulation blamed"])
Cognos v8 - Delivering Operational Business Intelligence (BI) on Mainframe
Mike Biere, author of the book [Business Intelligence for the Enterprise], presented Cognos v8 and how it is being deployed for the IBM System z mainframe. Typically, customers do their BI processing on distributed systems, but 70 percent of the world's business data is on mainframes, so it makes sense to do your BI there as well. Cognos v8 runs on Linux for System z, connecting to z/OS via [Hypersockets].
There are a variety of other BI applications on the mainframe already, including DataQuant, AlphaBlox, IBI WebFocus and SAS Enterprise Business Intelligence. In addition to accessing traditional online transaction processing (OLTP) repositories like DB2, IMS and VSAM through the [IBM WebSphere Classic Federation Server], Cognos v8 can also read Lotus databases.
Business Intelligence has traditionally been query, reporting and online analytical processing (OLAP) for the top 10 to 15 percent of the company, mostly executives and analysts, for activities like business planning, budgeting and forecasting. Cognos PowerPlay stores numerical data in an [OLAP cube] for faster processing. OLAP cubes are typically constructed with a batch cycle, using either "Extract, Transform, Load" [ETL] or "Change Data Capture" [CDC], which plays to the strength of the IBM System z mainframe's batch processing capabilities. If you are not familiar with OLAP, Nigel Pendse has an article [What is OLAP?] for background information.
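For readers who have never touched OLAP, a toy sketch may help. This Python fragment (my own illustration, not Cognos code) shows what the batch "load" step buys you: the ETL job pre-aggregates a small fact table across every combination of dimensions, so an analyst's question becomes a dictionary lookup instead of a table scan:

    from collections import defaultdict
    from itertools import product

    # Toy fact table, as rows might arrive from a nightly ETL batch job.
    facts = [("East", "tape", "Q1", 120), ("East", "disk", "Q1", 300),
             ("West", "tape", "Q1",  80), ("West", "disk", "Q2", 410)]

    # Build the cube: roll each row up into every (region, product, quarter)
    # combination, with "*" standing for "all values of this dimension".
    cube = defaultdict(int)
    for region, prod, quarter, revenue in facts:
        for key in product((region, "*"), (prod, "*"), (quarter, "*")):
            cube[key] += revenue

    print(cube[("East", "*", "Q1")])   # all East revenue in Q1 -> 420
    print(cube[("*", "tape", "*")])    # all tape revenue, ever -> 200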
Over the past five years, BI has been deployed more and more for the rest of the company: knowledge workers tasked with doing day-to-day operations. This phenomenon is being called "Operational" Business Intelligence.
IBM's Glen Corneau, who is on the Advanced Technical Support team for AIX and System p, presented the IBM General Parallel File System (GPFS), which is available for AIX, Linux-x86 and Linux on POWER. Unfortunately, many of the questions were related to Scale Out File Services (SOFS), which my colleague Glenn Hechler was presenting in another room during this same time slot.
GPFS is now in its 11th release since its introduction in 1997. All of the IBM supercomputers on the [Top 500 list] use GPFS. The largest deployment of GPFS is 2241 nodes. A GPFS environment can support up to 256 file systems, and each file system can have up to 2 billion files across 2PB of storage. GPFS supports "Direct I/O", making it a great candidate for Oracle RAC deployments. Oracle 10g automatically detects if it is using GPFS, and sets the appropriate DIO bits in the stream to take advantage of GPFS features.
Glen also covered the many new features of GPFS, such as the ability to place data on different tiers of storage, with policies to move data to lower tiers, or delete it after a certain time period, all concepts we call Information Lifecycle Management. GPFS also supports access across multiple locations and offers a variety of choices for disaster recovery (DR) data replication.
Perhaps the only problem with conferences like this is that they can be an overwhelming ["fire hose"] of information!
Continuing this week in Los Angeles, I went to some interesting sessions today at the Systems Technology Conference (STC '08).
System Storage Productivity Center (SSPC) - Install and Configuration
Dominic Pruitt, an IBM IT specialist on our Advanced Technical Support team, presented SSPC and how to install and configure it. For those confused about the difference between TotalStorage Productivity Center and System Storage Productivity Center: the former is pure software that you install on a Windows or Linux server, and the latter is an IBM server, pre-installed with Windows 2003, TotalStorage Productivity Center software, the TPCTOOL command line interface, DB2 Universal Database, the DS8000 Element Manager, the SVC GUI and CIMOM, and the [PuTTY] rlogin/SSH/Telnet terminal application software.
Of course, the problem with having a server pre-installed with a lot of software is that there is always someone that wants to customize it further. For those who just want to manage their DS8000 disk systems, for example, it is possible to uninstall the SVC GUI, CIMOM and PuTTY, and re-install them later when you change your mind. As a general rule, it is not wise to mix CIMOMs on the same machine, as it might cause conflicts with TCP ports or Java level requirements, so if you want a different CIMOM than SVC's, uninstall the SVC CIMOM first. For those who have SVC, the SSPC replaces the SVC Master Console, so you can safely turn off the SVC CIMOM on your existing SVC Master Consoles.
The base level is TotalStorage Productivity Center "Basic Edition", but you can upgrade to the Productivity Center for Disk, Data and Fabric components with license keys. You can also run Productivity Center for Replication, but IBM recommends adding processor and memory to do this (IBM offers this as an orderable option). Whether you have the TotalStorage software or the SSPC hardware, Productivity Center has a cool role-to-group mapping feature. You can create user groups, either on the Windows server, in Active Directory, or in another LDAP directory, and then map which roles should be assigned to users in each group.
Since Productivity Center manages a variety of different disk systems, it has made an attempt to standardize some terminology. The term "storage pool" refers to an extent pool on the DS8000, or a managed disk group on the SAN Volume Controller. Since the DS8000 can support both mainframe CKD volumes and LUNs for distributed systems, the term "volume" refers to a CKD volume or LUN, and "disk" refers to the hard disk drive (HDD).
To help people learn Productivity Center, IBM offers single-day "remote workshops" that use Windows Remote Desktop to allow participants to install, customize and use the software with no travel required.
IBM Integrated Approach to Archiving
Dan Marshall, IBM global program manager for storage and data services on our Global Technology Services team, presented IBM's corporate-wide integration to support archive across systems, software and services. One attendee asked me why I was there, given that "archive" is one of my areas of subject matter expertise that I present often at the Tucson Executive Briefing Center. I find it useful to watch others present the material, even material that I helped to develop, to see a different slant or spin on each talking point.
Archive is one area that brings all parts of IBM together: systems, software and services. Dan provided a look at archive from the services angle, providing an objective, unbiased view of the different software and systems available to solve specific challenges.
Encryption Key Manager (EKM) Design and Implementation
Jeff Ziehm, IBM tape technical sales specialist, presented IBM's EKM software, how it works in a tape environment, and how to deploy it in various environments. Since IBM is all about being open and non-proprietary, the EKM software runs on Java on a variety of IBM and non-IBM operating systems. IBM offers the "keytool" command line interface (CLI) for the LTO4 and TS1120 tape systems, and the "iKeyMan" graphical user interface (GUI) for the TS1120. Since it runs on Java, IBM Business Partners and technical support personnel often just [download and install EKM] onto their own laptops to learn how to use it.
Virtual Tape Update
We had three presenters at this one. First, Jeff Mulliken, formerly from Diligent and now a full IBM employee, presented the current ProtecTier software with the HyperFactor technology; then Abbe Woodcock, IBM tape systems, compared Diligent with IBM's TS7520 and the just-announced TS7530 virtual tape libraries; and finally Randy Fleenor, IBM tape sales leader, presented IBM's strategy going forward in tape virtualization.
Let's start with Diligent. The ProtecTier software runs on any x86-64 server with at least four cores and the correct Emulex host bus adapter (HBA) cards. Using Red Hat Enterprise Linux (RHEL) as a base, the ProtecTier software performs its deduplication entirely in-line, at an "ingest rate" of 400-450 MB/sec. This is all possible using a 4GB memory-resident "dictionary table" that can map up to 1PB of back-end physical storage, which could represent as much as 25PB of "nominal" storage. The server is then point-to-point or SAN-attached to Fibre Channel disk systems.
As we learned yesterday from Toby Marek's session, there are four ways to perform deduplication (the chunk-based methods are sketched in code after this list):
full-file comparisons. Store only one copy of identical files.
fixed-chunk comparisons. Files are carved up into fixed-size chunks, and each chunk is compared or hashed against existing chunks to eliminate duplicates.
variable-chunk comparisons. Variable-length chunks are hashed or diffed to eliminate duplicate data.
content-aware comparisons. If you knew data was in PowerPoint format, for example, you could compare text, photos or charts against other existing PowerPoint files to eliminate duplicates.
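The second method is the easiest to sketch in code. The following toy Python fragment, with names of my own invention rather than any vendor's implementation, carves data into fixed-size chunks, stores each unique chunk once under its SHA-256 digest, and keeps a "recipe" of digests from which the original can be rebuilt:

    import hashlib

    def dedup_store(data, store, chunk_size=4096):
        """Fixed-chunk dedupe: keep one copy of each unique chunk, keyed
        by its hash, and return the recipe needed to reassemble the data."""
        recipe = []
        for off in range(0, len(data), chunk_size):
            chunk = data[off:off + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)   # stores the payload only if unseen
            recipe.append(digest)
        return recipe

    def dedup_restore(recipe, store):
        return b"".join(store[digest] for digest in recipe)

The weakness of fixed chunks is alignment: insert a single byte at the front of a file and every chunk boundary after it shifts, so nothing matches the previous backup. That is the problem the variable-chunk method addresses.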
IBM System Storage N series Advanced Single Instance Storage (A-SIS) uses the fixed-chunk method, and Diligent uses variable-chunk comparisons. Diligent does this using "data profiling". For example, let's say most of my photographs are pictures of people, buildings, landscapes, flowers and IT equipment. When I back these up, the Diligent server "profiles" each, and determines if any existing data has a similar profile that might have at least 50 percent similar content. Diligent then reads in the data that is most likely similar, does a byte-for-byte ["diff" comparison], and creates variable-length chunks that are either identical or unique to sections of the existing data. The unique data is compressed with LZH and written to disk, and the sequential series of pointer segments representing the ingested file is written in a separate section on disk.
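Diligent's profiling and diffing are proprietary, but the general trick behind variable-length chunks can be sketched with content-defined chunking: rather than cutting every N bytes, you cut wherever a rolling hash over recent bytes matches a bit pattern, so chunk boundaries follow the content itself. A simplified Python sketch of my own, not Diligent's algorithm:

    def cdc_chunks(data, mask=0x1FFF, min_size=2048, max_size=65536):
        """Content-defined chunking: declare a boundary wherever a cheap
        rolling-style hash matches `mask`. Old bytes shift out of the
        32-bit hash after 32 steps, so boundaries depend on local content,
        not on absolute file offsets."""
        chunks, start, h = [], 0, 0
        for i, byte in enumerate(data):
            h = ((h << 1) + byte) & 0xFFFFFFFF
            size = i - start + 1
            if (size >= min_size and (h & mask) == mask) or size >= max_size:
                chunks.append(data[start:i + 1])
                start, h = i + 1, 0
        if start < len(data):
            chunks.append(data[start:])
        return chunks

Insert one byte at the front of a file and rerun this: all but the first chunk or two still match, where the fixed-chunk sketch above would match nothing past the insertion point.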
That Diligent can represent profiles for 1PB of data in as little as a 4GB memory-resident dictionary is incredible. By comparison, indexing 10TB of data would require roughly 10 million entries for a content-aware solution (one per file, assuming a 1MB average file size), and 1.25 billion entries for one based on hash-codes (one per chunk, assuming 8KB chunks).
Abbe Woodcock presented the TS7530 tape system that IBM announced on Tuesday. It has some advantages over the current Diligent offering:
Hardware-based compression (TS7520 and Diligent use software-based compression)
1200 MB/sec (faster ingest rate than Diligent)
1.7PB of SATA disk (more disk capacity than Diligent)
Support for i5/OS (Diligent's emulation of ATL P3000 with DLT7000 tapes not supported on IBM's POWER systems running i5/OS)
Ability to attach a real tape library
NDMP backup to tape
tape "shredding" (virtual equivalent of degaussing a physical tape to erase all previously stored data)
Randy Fleenor wrapped up the session telling us IBM's strategy going forward with all of the virtual tape systems technologies. In the meantime, IBM is working on "recipes" or "bundles", putting Diligent software with specific models of IBM System x servers and IBM System Storage DS4000 disk systems to avoid the "do-it-yourself" problems of its current software-only packaging.
Understanding Web 2.0 and Digital Archive Workloads
I got to present this in the last time slot of the day, just before everyone headed off to the [Westin Bonaventure hotel] for our big fancy barbecue dinner. Like my previous session on IBM Strategy, this session was more oriented toward a sales audience, but both garnered a huge turn-out and were well-received by the technical attendees.
This session was requested because these new applications and workloads are driving IBM to acquire small start-ups like XIV, deploy Scale-Out File Services (SOFS), and develop the innovative iDataPlex server rack.
The session was fun because it was a mix of explanation of the characteristics of Web 2.0 services; my own experience as a blogger and user of Google Docs, FlickR, Second Life and TiVo; and an exploration of how database and digital archives will impact the growth in computing and storage requirements.
I'll expand on some of these topics in later blog posts.
"With Cisco Systems, EMC, and VMware teaming up to sell integrated IT stacks, Oracle buying Sun Microsystems to create its own integrated stacks, and IBM having sold integrated legacy system stacks and rolling in profits from them for decades, it was only a matter of time before other big IT players paired off."
Once again we are reminded that IBM, as an IT "supermarket", is able to deliver integrated software/server/storage solutions, and our competitors are scrambling to form their own alliances to be "more like IBM." This week, IBM announced new ordering options for storage software with System x servers, including BladeCenter blade servers and IntelliStation workstations. Here's a quick recap:
IBM Tivoli Storage Manager FastBack v6.1
FastBack v6.1 supports both Windows and Linux! FastBack is a data protection solution for ROBO (Remote Office, Branch Office) locations. It can protect Microsoft Exchange, Lotus Domino, DB2, and Oracle applications. FastBack can provide full volume-level recovery, as well as individual file recovery, and in some cases Bare Machine Recovery. FastBack v6.1 can be run stand-alone, or integrated with a full IBM Tivoli Storage Manager (TSM) unified recovery management solution.
FlashCopy Manager v2.1
FlashCopy Manager uses point-in-time copy capabilities, such as SnapShot or FlashCopy, to protect application data using an application-aware approach for Microsoft Exchange, Microsoft SQL Server, DB2, Oracle, and SAP. It can be used with IBM SAN Volume Controller (SVC), DS8000 series, DS5000 series, DS4000 series, DS3000 series, and XIV storage systems. When applicable, FlashCopy Manager coordinates its work with Microsoft's Volume Shadow Copy Services (VSS) interface. FlashCopy Manager can provide data protection using just point-in-time disk-resident copies, or can be integrated with a full IBM Tivoli Storage Manager (TSM) unified recovery management solution to move backup images to external storage pools, such as low-cost, energy-efficient tape cartridges.
General Parallel File System (GPFS) v3.3 Multiplatform
GPFS can support AIX, Linux, and Windows! Version 3.3 adds support for Windows 2008 Server on 64-bit chipset architectures from AMD and Intel. Now you can have a common GPFS cluster with AIX, Linux and Windows servers all sharing and accessing the same files. A GPFS cluster can have up to 256 file systems. Each of these file systems can contain up to 1 billion files and up to 1PB of data, and can have up to 256 snapshots. GPFS can be used stand-alone, or integrated with a full IBM Tivoli Storage Manager (TSM) unified recovery management solution with parallel backup streams.
For full details on these new ordering options, see the IBM [Press Release].