Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
My books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
As we wrap up the year, people's thoughts turn to archive and data retention.
The [Robert Frances Group] has put out a research paper titled Optimizing Data Retention and Archiving - November 2007 that helps IT executives understand the cost differences between a disk-only archive approach and a disk/tape archive approach, and how an [IBM System Storage DR550] offering can address long-term archive requirements with a world-class storage strategy that reduces cost, improves efficiency and supports compliance. Here is an excerpt:
Ongoing legal, audit, and regulatory requirements will continue to drive IT groups to improve archive policies, processes, strategy, and efficiency. The choice of which technologies to use will have a profound impact on the success of such efforts, since technologies like the DR550 embody many aspects of the strategy, processes, and policies that must be decided upon. When it comes to tape, IBM's DR550 is unique in providing that support. Competitors tout disk-only solutions as the wave of the future, but research indicates otherwise. The most basic benefits are cost and mobility, and despite the various vendor proclamations to the contrary, tape is still only a fraction of the cost of disk and will remain so in the foreseeable future.
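As a back-of-envelope illustration of the disk-versus-tape economics the paper describes, here is a small sketch. The per-gigabyte prices and the 90/10 tiering split are hypothetical assumptions for illustration only, not figures from the Robert Frances Group paper:

```python
# Back-of-envelope archive cost comparison, disk-only vs disk+tape tiering.
# Prices are illustrative assumptions, not figures from the paper.

DISK_COST_PER_GB = 5.00    # enterprise disk, fully burdened (hypothetical)
TAPE_COST_PER_GB = 0.50    # tape cartridge plus library share (hypothetical)

def archive_cost(total_gb, tape_fraction):
    """Cost of an archive that keeps `tape_fraction` of its data on tape."""
    on_tape = total_gb * tape_fraction
    on_disk = total_gb - on_tape
    return on_disk * DISK_COST_PER_GB + on_tape * TAPE_COST_PER_GB

SIZE_GB = 100_000   # a 100 TB archive
disk_only = archive_cost(SIZE_GB, tape_fraction=0.0)
tiered = archive_cost(SIZE_GB, tape_fraction=0.9)
print(f"disk-only: ${disk_only:,.0f}   90% tiered to tape: ${tiered:,.0f}")
```

With these assumed prices, moving 90 percent of the archive to tape cuts the media cost by more than 80 percent, which is the basic point the paper makes about disk-only archives.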
This paper is yet another nail in the coffin of EMC Centera. In his post [Anyone Naughty on Your List…], Jon W Toigo points to an eBay fire sale of an EMC Centera Gen 4.
There has never been a better time to switch from EMC Centera to the IBM System Storage DR550.
This is a reasonable question. Since Invista 2.0 came out months ago in August, and Invista 2.1 is rumored to be out by the end of this month, why put out a press release now, rather than just wait a few weeks? The significant part of this announcement was that EMC finally has their first customer reference. To be fair, getting a customer to agree to be a reference is difficult for any vendor. Some non-profits and government agencies have rules against it, and some corporations just don't want to be bothered by journalists, or take phone calls from other prospective customers. I suspect EMC wanted to put the good folks from Purdue University in front of the cameras and microphones before they:
In Moore's terminology, Purdue University would be a "technology enthusiast", interested in exploring the technology of the EMC Invista. Universities by their very nature often see themselves as early adopters, willing to take big risks in hopes of reaping big rewards. The chasm happens later, when there are a lot of early adopters, all willing to be reference accounts. The mainstream market--shown here as pragmatists, conservatives, and skeptics--are unwilling to accept reference claims from early adopters, searching instead for moderate gains from minimal risks. They prefer references from customers that are similar in size and industry. Whether a vendor can get a product to cross this chasm is the focus of the book.
Why "SAN" virtualization?
Technically, Invista is "storage" virtualization, not "SAN" virtualization. Virtualization is any technology that makes one set of resources look and feel like a different set of resources, preferably with more desirable characteristics. You can virtualize servers, SANs, and storage resources.
Virtual SAN (VSAN) technology, supported by the Cisco MDS 9500 Series Multilayer Director Switch, partitions a single physical SAN into multiple VSANs, allowing different business functions and requirements to share a common physical infrastructure.
How does Invista advance Cisco's VSAN functionality? It doesn't, but that doesn't make the title a falsehood, or the press release by association full of lies. If you read the entire press release, EMC correctly states that Invista is "storage" virtualization. Some storage virtualization products, like EMC Invista and IBM System Storage SAN Volume Controller (SVC), require a SAN as a platform on which to perform their magic. Marketing people might use the term "SAN" to refer not just to the network gear that provides the plumbing, but also to the storage devices attached to the SAN. In that light, the use of "SAN virtualization" can be understood in the title.
More importantly, it appears that EMC no longer requires that you purchase new SAN equipment from them with Invista. When Invista first came out, it cost over a quarter-million US dollars to cover the cost of the intelligent switches, but with the price drop to $100K, I imagine this means they assume everyone has an appropriately-supported intelligent switch already deployed.
Why this architecture?
In his post [Storage Virtualization and Invista 2.0], EMC blogger ChuckH does a fair job explaining why EMC went in this direction for Invista, and how it differs from other storage virtualization products.
Most storage virtualization products are cache-based. The world's first disk storage virtualization product, the IBM 3850 Mass Storage System, introduced in 1974, and the first tape virtualization product, the IBM 3494 Virtual Tape Server, introduced in 1997, both used disk cache in front of tape storage. Later virtualization products, like the IBM SVC and HDS USP-V, use DRAM memory cache in front of disk storage, but the concept is the same. People are comfortable with cache-based solutions, because the technology is mature and well proven in the marketplace, and are excited and delighted that these can offer the following features in a mixed heterogeneous disk environment:
instantaneous point-in-time copy
None of these features are provided by Invista, as there is no cache in the switch. Instead, Invista is a "packet cracker"; it cracks open each FCP packet, inspects and modifies the contents, then passes the FCP packet along to the appropriate storage device. This process slows down each read and write by some amount, perhaps 20 microseconds. The disadvantage of slowing down every read and write is offset by other benefits, like non-disruptive data migration.
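To put that per-I/O penalty in rough perspective, here is a small sketch. The 20-microsecond figure comes from the estimate above, while the baseline disk and cache-hit latencies are illustrative assumptions of my own, not measured Invista numbers:

```python
# Rough estimate of the relative cost of in-switch packet inspection.
# Baseline latencies are illustrative assumptions for 2007-era hardware.

CRACKING_OVERHEAD_US = 20      # added per-I/O latency in the switch
DISK_READ_LATENCY_US = 5000    # ~5 ms for a random read from spinning disk
CACHE_HIT_LATENCY_US = 500     # ~0.5 ms when the array serves from cache

def overhead_pct(base_us, added_us=CRACKING_OVERHEAD_US):
    """Return the added latency as a percentage of the base I/O time."""
    return 100.0 * added_us / base_us

print(f"Random disk read: +{overhead_pct(DISK_READ_LATENCY_US):.1f}%")
print(f"Array cache hit:  +{overhead_pct(CACHE_HIT_LATENCY_US):.1f}%")
```

Under these assumptions the overhead is well under one percent for a physical disk read, but several percent for a cache hit, which is why the penalty matters more as arrays get faster.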
To compensate for Invista's inability to provide these features, EMC offers a second solution called EMC RecoverPoint, an in-band cache-based appliance similar in design to the SVC, but one that maps all virtual disks one-to-one to physical disks. It offers remote-distance asynchronous mirroring between heterogeneous devices. EMC supports RecoverPoint in front of Invista, but if you are considering buying both to get the combined set of features, you might as well buy an IBM SVC or HDS USP-V instead: one system rather than two, and much less complicated. IBM SVC and HDS USP-V have both "crossed the chasm", having sold thousands of units to every type and size of customer.
Hopefully, this answers the questions you might have about EMC Invista.
Last year, I covered Chris Anderson's book [The Long Tail]. This year, Chris Anderson, editor-in-chief of Wired.com, has an upcoming book titled Free: The Past and Future of a Radical Price. Chris talked about his book at the Nokia World 2007 conference, and the [46-minute video] is worth watching. He asks the big question "What if certain resources were free?" This could be electricity, bandwidth, or storage capacity. He explores how this changes the world, and creates opportunities for new business models. However, many people are stuck in a "scarcity" model, treat nearly-free resources as expensive, and find themselves doing traditional things that don't work anymore. Chris mentions [Second Life] as an economy where many resources are free, and looks at how people respond to that. Rather than focusing on making money, new businesses are focused on gaining attention and building their reputation. Here are some example business models:
Cross-subsidy: give away the razors, sell the razor blades; or give away cell phones and sell minutes
Ad-Supported: magazines and newspapers sell for less than production costs
Freemium: 99% use the free version, but a handful pay extra for something more
Digital economics: give away digital music to promote concert tours
Free-sample marketing: give away samples to get word-of-mouth advertising
Gift economy: give people an opportunity and platform to contribute like Wikipedia
Nick Carr writes a post [Dominating the Cloud], indicating that IBM, Google, Microsoft, Yahoo and Amazon are the five computing giants to watch, as they are more efficient at converting electricity into computing than anyone else. Last month, I mentioned the IBM and Google partnership on cloud computing in my post [Innovation that matters: cell phones and cloud computing]. Nick's upcoming book titled [The Big Switch] looks into "Utility Computing", comparing companies' shift from generating their own electricity to using an electric grid with the recent developments of cloud computing and software as a service (SaaS). Amazon's latest "SimpleDB" online database is cited as an example.
Last, but not least, Seth Godin writes in his post [Meatballs and Permeability] about the bits-vs-atoms issue, what Chris Anderson above refers to as the new digital economy. The idea here is that value carried electronically as bits (digital documents, for example) has completely different economics than value carried as atoms (physical objects), and requires new marketing techniques. Methods from traditional marketing will not be effective in this new age. Here is a [review] of Seth's new book Meatball Sundae: Is Your Marketing Out of Sync?
All three of these books seem to be covering the same phenomenon, just from different viewpoints. I look forward to reading them.
While Bill Gates is personally benefiting from code he wrote 30 years ago, most software engineers don't get royalties for their creative efforts. Robin Harris on StorageMojo has a great piece on [Why are the writers striking?] The writers in this case are those who write scripts for television programs. They get 4 cents for every $19.99 DVD sold today, and want this bumped up to 8 cents. More importantly, they want the same deal for content shown over the internet. Currently, they get nothing when content they wrote is shown on the internet, and they would like that fixed also.
Paying royalties to creative writers encourages them to write good stuff. The best stuff will result in more royalties, and we want to encourage this. What about software engineers? Don't we want them to write the best stuff also? Shouldn't they get royalties too, not just a flat salary and continued employment?
From a technology-oriented to a service-oriented approach to IT management
Companies are challenged with shifting from a technology/resource-oriented to a service-oriented approach to IT management. This involves new processes, a new reporting structure for the IT staff, new tools and technologies, and new data to be captured. A top-down approach is recommended for large organizations, but a bottom-up approach might be easier to implement for small and medium sized businesses.
IT service management architecture and autonomic computing
IBM has been promoting the concept of Autonomic Computing since 2001. A self-managed resource can have an autonomic manager with a sensor and an effector. The sensor is used to monitor status, a knowledge base can analyze the data and plan appropriate modifications, and the manager executes these through the effector. The Autonomic Computing Reference Architecture (ACRA) aligns well with the Information Technology Service Management (ITSM) model, with the CMDB acting as the knowledge base for the autonomic managers. See my earlier post [Self tuning guitars and storage].
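The monitor-analyze-plan-execute loop behind this model (the MAPE loop of autonomic computing) can be sketched in a few lines. The resource, the capacity threshold, and the "expand" action below are hypothetical stand-ins of my own, not part of ACRA itself:

```python
# Minimal sketch of an autonomic manager's MAPE loop
# (Monitor, Analyze, Plan, Execute) for one managed resource.
# The resource, threshold, and action are hypothetical examples.

class ManagedResource:
    """Toy resource exposing a sensor (read status) and an effector (act)."""
    def __init__(self, capacity_used=0.92):
        self.capacity_used = capacity_used

    def sensor(self):
        return {"capacity_used": self.capacity_used}

    def effector(self, action):
        if action == "expand_capacity":
            self.capacity_used /= 2   # pretend capacity was doubled

# Stand-in for the knowledge base (the CMDB in the ITSM mapping)
KNOWLEDGE = {"capacity_threshold": 0.85}

def mape_cycle(resource):
    status = resource.sensor()                                        # Monitor
    over = status["capacity_used"] > KNOWLEDGE["capacity_threshold"]  # Analyze
    action = "expand_capacity" if over else None                      # Plan
    if action:
        resource.effector(action)                                     # Execute
    return action

disk = ManagedResource()
print(mape_cycle(disk))   # over threshold: plans and executes an expansion
print(mape_cycle(disk))   # now under threshold: no action needed
```

The point of the sketch is the division of labor: the sensor and effector belong to the resource, while the analyze and plan steps consult shared knowledge, which is exactly the role the CMDB plays in the ITSM mapping described above.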
Evolving standards for IT service management
Changes to the IT infrastructure must be closely managed to avoid disruptions. IT organizations recognize that standards-based solutions enable interoperability, with less risk, to connect internal and external applications. Standards can be formally developed by standards bodies like ISO, IETF, W3C, OASIS, and DMTF; or be de facto standards that become widely used by companies, which can then later be adopted by standards bodies. SML and SDD are emerging standards that are incompatible with the current set of Web Services-based protocols, like WSDM, but work is underway to try to determine a unifying standard to support all of these under ITSM.
Prospects for simplifying ITSM-based management through self-managing resources
An ideal computing system would take over a great deal of its own management. Today's IT systems are brittle, difficult to understand, and dangerous to change. The savings from automating some tasks are dwarfed by the irreducible costs of human decision making, agreements and approvals built into formal processes. A true self-managing, scalable IT system would consist of a number of nearly-identical boxes, with a web interface to define high-level policies and provide information on utilization and performance. As the system needs to expand, it can automatically place the order. When the new boxes arrive, they are placed and connected into the data center, and the system configures and provisions them appropriately.
IT Autopilot: A flexible IT service management and delivery platform for small and medium business
Using an airplane analogy, the pilot performs manual steps to get the plane safely off the ground, then turns it over to the autopilot for normal operations. The IT Autopilot intends to do this for IT service management in small and medium businesses (SMBs) that may not have a large dedicated IT staff, using an SOA approach that is loosely coupled, stateless, and adhering to Web Services standards. The IT Autopilot employs workflow-based controls, the autonomic computing MAPE model, and customized policies to address SMB requirements. It could be deployed as an appliance, similar to IBM System Storage Productivity Center.
I'm continuing my coverage of IBM Systems Journal's [fifteen articles about IBM Service Management]. As storage hardware cost per GB declines 25 percent per year, the cost of labor has grown to nearly 70 percent of the total IT budget. This brings new focus on how we do things, rather than what things sit on the raised floor. Yesterday, my post summarized [the first five articles]. Here is what I got out of the next five articles:
Integrated change and configuration management
IT Infrastructure Library (ITIL) best practice covers a variety of disciplines, including incident management,problem management, release management, service help desk, change management, and configuration management.IBM has combined the last two into a single database, and this paper provides insights gained fromimplementing these in practice. A special section talks about how service providers can support multipleclients that must be kept separate from each other.
The process of building a Process Manager: Architecture and design patterns
Business processes coordinate and sequence the work done by a collection of people. Most companies define their business processes from scratch, and develop their own applications to support their implementation. Process Managers are "out of the box" applications that help customers integrate and automate more quickly than building from scratch. These Process Managers leverage and update information about configuration items (CIs) in the configuration management database (CMDB). One of the first developed by IBM was the IBM Tivoli Storage Process Manager.
Integration of domain-specific IT processes and tools in IBM Service Management
ITIL tells you what needs to get done, but it doesn't tell you exactly how to do it. Completing a simple change request to the IT environment can have a drastic impact on service level agreements (SLAs), utilization of existing storage capacity, and business operations. Sometimes it is important to use multiple Process Manager applications together. To accomplish this, it is important to launch and land in context at the appropriate points for smooth transition.
Using a model-driven transformational approach and service-oriented architecture for service delivery management
Companies are considering outsourcing as a way to focus on core competencies. However, the trend is toward selective outsourcing, where the customer controls the IT solution architecture and retains their legacy tools. As a result, service providers inherit the business and IT processes of their clients. IBM Research has developed the model-driven business transformation (MDBT) method that choreographs workflow tools with human activities. A "balanced scorecard" allows both client and outsourcer to monitor progress toward strategic goals.
Catalog-based service request management
Service providers (outsourcers) are able to bring the latest IT technology, best practices, and skilled service delivery teams. Unfortunately, the unique business processes of each client limit the ability to leverage these resources effectively. A service delivery management platform (SDMP) catalog serves as a repository of atomic services and the delivery teams that can perform them. This allows outsourcers to leverage resources across multiple clients, while still being able to tailor business compositions of these atomic services to an individual client's requirements.
The latest IBM Systems Journal has [fifteen articles about IBM Service Management], which includes the disciplines for managing storage resources as part of an overall IT data center. As with most journals, these articles are heavy academic efforts, not light summer reading.
However, since I have moved from marketing to consulting, I need to read these kinds of articles to keep up with the industry. I realize many people don't have time to read all of these, so over the next three days, I will give some quick highlights in hopefully more understandable language. Here is what I got out of the first five articles:
An Overview of IBM Service Management
This 10-page article provides a good overview of what the other articles cover in greater detail. The role of information has changed, from supporting back-office tasks like payroll and inventory, to enabling growth in the business itself, providing insight and competitive advantage. The challenges are summarized under "Four C's": Complexity, Change, Cost, and Compliance. The recommended approach is to engage with IBM, which has thousands of practitioners with years of experience in ITIL, eTOM, COBIT, CMMI and SOA.
Adding value to the IT organization with the Component Business Model
Many Service Level Agreements (SLAs) are loaded with technological jargon rather than concentrating on intended business results. CIOs must change this, and learn to run IT as a business with a service delivery focus. The IBM Process Reference Model for IT (PRM-IT) is the foundation for the Component Business Model for the Business of IT (CBMBoIT), which can assist with strategic decision making to transform IT into this new role.
An Integration model for organizing IT service management
There are so many ways to implement Information Technology Service Management (ITSM) that it is hard to tell if there are gaps or overlaps between products and offerings. A seamless solution requires common terminology and approaches. An integration model helps to bring all this together, focused on being consistent with existing practice, with clarity of expression, and practical to implement.
IBM Service Management architecture
Today's systems management tools are fragmented by resource domain--servers are managed here, networks managed there, and storage is another story altogether. IBM Service Management intends to integrate a portal-based User Interface, a process runtime layer, a configuration management database (CMDB), and all the various operational management products (OMPs) for each resource. For example, IBM TotalStorage Productivity Center is an OMP for IBM and non-IBM storage resources.
A configuration management database architecture in support of IBM Service Management
IBM Tivoli Change and Configuration Management Database (CCMDB) holds all the configuration data of IT resources in the data center, including individual "configuration items" (CIs), and tracks changes. The database is populated with data from different sources, including automatic discovery. Relationships between CIs provide a visual representation of application dependencies. The data model uses a clever combination of Unified Modeling Language (UML) with Java persistent objects.
[R&D Magazine] recently conducted a survey that prompted readers to identify the world's most successful Research and Development (R&D) companies. The results are in: IBM was recognized as the best R&D company in the world when several different categories were evaluated, including:
R&D spending as a percentage of revenue
the number of patents
new products in development
The survey considered additional information on more than 130 companies such as data on intellectual property, community service and financial growth trends. Readers were also asked five distinct questions, including the following:
Where would you like to work based on their R&D?
What companies have the most improved R&D in the past five years?
What companies are the leaders in R&D?
Which company's R&D has the strongest influence on society?
Which company's R&D is the most proactive in high tech challenges?
Since it is often 5-15 years from when a scientist in one of our many research labs comes up with a clever idea to when it becomes a market success, it is good to have external recognition for the R&D efforts we are doing right now. Here is a link to a [four-page PDF] of the magazine article.
Take for example IBM's recent breakthrough in silicon photonics. Supercomputers that consist of thousands of individual processing nodes, typically running Linux on dual-core or quad-core processors, connected by miles of copper wires, could one day fit into a laptop PC. And while today's supercomputers can use the equivalent energy required to power hundreds of homes, these future tiny supercomputers-on-a-chip would expend the energy of a light bulb, so this solution is more "green" for the environment. According to the [IBM Press Release]:
The breakthrough -- known in the industry as a silicon Mach-Zehnder electro-optic modulator -- performs the function of converting electrical signals into pulses of light. The IBM modulator is 100 to 1,000 times smaller in size compared to previously demonstrated modulators of its kind, paving the way for many such devices and eventually complete optical routing networks to be integrated onto a single chip. This could significantly reduce cost, energy and heat while increasing communications bandwidth between the cores more than a hundred times over wired chips.
“Work is underway within IBM and in the industry to pack many more computing cores on a single chip, but today’s on-chip communications technology would overheat and be far too slow to handle that increase in workload,” said Dr. T.C. Chen, vice president, Science and Technology, IBM Research. “What we have done is a significant step toward building a vastly smaller and more power-efficient way to connect those cores, in a way that nobody has done before.”
Today, one of the most advanced chips in the world -- IBM’s Cell processor which powers the Sony Playstation 3 -- contains nine cores on a single chip. The new technology aims to enable a power-efficient method to connect hundreds or thousands of cores together on a tiny chip by eliminating the wires required to connect them. Using light instead of wires to send information between the cores can be 100 times faster and use 10 times less power than wires.
Over 4,000 issues of your favorite magazine now sit, ready for you to search and savor, on an 80GB incredibly lightweight and travel-friendly drive. This high-performance, brushed-aluminum hard drive measures only 3x5 inches and can easily fit inside a purse or briefcase, so show it off to your tech-savvy friends and co-workers. Plus, there is plenty of extra room on the drive for future updates. Simply install The Complete New Yorker Program (installation CD provided), then connect the drive to a USB port on your computer and have instant access to every article, poem, short story, and cartoon, including every advertisement, that has appeared in the magazine since 1925.
System Requirements: Windows 2000 or XP, Mac OS X 10.3 or higher, USB 2.0 port, CD-ROM drive, 750 MB of free hard drive space, 1024 x 768 minimum screen resolution
The 750 MB of disk space required on your system probably contains the indexing/metadata search system to find articles by subject, title or author. Linux is not listed, and if 750 MB of disk space is required to run the program, then perhaps this system won't work with Linux at all.
The system claims that there is extra room on the disk to ingest future issues of the magazine. I wonder why they didn't put the indexing/metadata search software on the drive itself, so that it would be self-contained, rather than having a separate installation CD.
I think this is a sign of our times. The New Yorker magazine has taken the archives that they keep anyways, and made them available in bulk, in a handy disk drive delivery system. I know several people who keep boxes and boxes of back issues of all kinds of magazines, and this certainly is an improvement.
Registration for [IBM Pulse 2008] is now open! This is the first ever global conference to cover not just Tivoli Storage software, but also the rest of the Tivoli portfolio, Maximo and Tivoli Netcool products, and disciplined service management and governance practices and procedures.
Join us on May 18-22 in Orlando, Florida. You'll learn how IBM service management solutions can give you the visibility needed to see all aspects of your business and manage it against objectives, the control to secure assets, and the automation to drive business agility for competitive advantage.
Leverage this opportunity to meet with fellow clients, IBM partners, industry analysts, and IBM experts in an environment dedicated to the latest technology, trends, and best practices in service management. Whether you are in network and service operations, IT, the executive office, line of business or services sales, IBM Pulse offers keynote presentations, in-depth seminar sessions, exhibitions and hands-on labs.
But wait, there's more!
One-on-one meetings with IBM executives and industry experts
Presentations by more than 100 customers sharing their real-world experiences and lessons learned
An evening of "Speed Training" (a la [speed dating]) for technology consulting: Ask specific questions of our technical subject matter experts – and get answers instantly
I realize this conference is five months away; however, one of my pet peeves is learning about a conference, especially a first-of-its-kind conference like this one, at the last minute, and not having time to plan accordingly. Travel budgets are tight for lots of people, so as an added incentive there is a $600 US discount per person if you register before February 1, 2008. So don't wait! Sign up today!
Continuing my business trip through Canada, an article by Richard Blackwell titled [The Double Bottom Line] in yesterday's Globe and Mail newspaper caught my attention. Here is an excerpt, citing Tim Brodhead, president of the J.W. McConnell Family Foundation in Montreal:
The bottom line for any business is making a profit, right?
But how about considering a different, or additional bottom line: helping make the world a better place to live in.
That's the radical proposition underlying the concept of "social entrepreneurship," the harnessing of business skills for the benefit of the disadvantaged.
Young investors, in particular, now want their investments to produce both financial and social returns, he noted.
Until recently, "we could either make a donation [to a charity] and get zero financial return, or we could invest and get zero social return." People now want more of both, but rules governing charities and business make that tough to accomplish.
One stumbling block is the imperative - entrenched in corporate law - that managers and directors of for-profit companies have a fiduciary duty to maximize profits. That structure is a brick wall that limits the expansion of social entrepreneurship, Mr. Brodhead said.
Some companies have embraced the new paradigm of a double bottom line, even if they are uncomfortable with the "social entrepreneur" label.
This fiduciary duty to maximize profits is discussed in the 2003 documentary [The Corporation]. However, some organizations are now trying to align their goals, finding ways to benefit their investors as well as society overall. For example, the organization [ONE.org] helped launch [Product (RED)]:
If you buy a (RED) product from GAP, Motorola, Armani, Converse or Apple, they will give up to 50% of their profit to buy AIDS drugs for mothers and children in Africa. (RED) is the consumer battalion gathering in the shopping malls. You buy the jeans, phones, iPods, shoes, sunglasses, and someone - somebody’s mother, father, daughter or son - will live instead of dying in the poorest part of the world. It’s a different kind of fashion statement.
The company, which has operated in Africa for nearly six decades, expects to increase its investment by more than $US120 million (more than R820 million) over the next two years. In the coming year, IBM expects to hire up to 100 students from Sub-Saharan universities to meet the growing demand in services, global delivery and software development.
"The Sub-Saharan African market is poised for double-digit growth flowing from the development and expansion of telecommunications networks, power grids and transport infrastructure," said Mark Harris, Managing Director, IBM South and Central Africa. "Private and public sector investment in the region is transforming the ability of the market to participate in the global economy."
A recent IBM Global Innovation Outlook (GIO) [report on Africa] indicates that the economies of dozens of African nations are growing at healthy rates, the best in the past 30 years, with a 5.5 to 5.8 percent average across the continent. This supports last month's news that [Top IBM thinkers to mentor African students]:
Hundreds of IBM scientists and researchers will mentor college students in Africa. Called Makocha Minds (after the Swahili word for "teacher"), the program will reach hundreds of computer science, engineering and mathematics students.
Makocha Minds is an off-shoot of IBM’s Global Innovation Outlook, an annual symposium of top government, business and academic leaders that uncovers new opportunities for business and societal innovation. "African students need to be trained in entrepreneurship so that they get out there and not just make jobs for themselves but create opportunities to employ others as well,” said Athman Fadhili, a graduate student at the University of Nairobi (Kenya).
Most of the mentoring will be via email and online collaboration.
Mentoring via email and online collaboration is very reasonable. I have mentored both high school and college students through a partnership between IBM Tucson and the Society of Hispanic Professional Engineers [SHPE]. While the kids were all located in Tucson, I rarely was, traveling nearly every week, but I made time for the kids via email and online collaboration wherever I happened to be.
To make this work, we need to get email and online collaboration into the hands of those who need them. I got my email thanking me for being a "first day donor" to the One Laptop Per Child "Give 1 Get 1" (G1G1) project, and have added this "badge" to the right panel of my blog. If you click on the badge, you will be taken to a series of YouTube videos that further describe the project.
According to the email, my donated XO laptop will soon be delivered into the hands of a child in Afghanistan, Cambodia, Haiti, Mongolia or Rwanda.
How do these work? Instead of buying your uncle yet another $25 necktie, consider buying a $25 Kiva certificate. The $25 "micro-loan" goes to someone in the third world to improve their situation, start a business, get a job, and so on, and you give your uncle a Kiva certificate so that he can track the progress. I think that is very clever and innovative.
IBM doesn't publicly report subset numbers on individual product lines, but we are growing, albeit with single-digit growth, on the high-end with our IBM System Storage DS8000 and DS6000 series products. Single-digit growth is not "booming", but it is what we expected in this space, so it is not as if we are "feeling the chill" as Robin stated. Obviously, if the U.S. market overall is doing poorly, then our growth must be coming from somewhere else. IBM's success appears to be from organic growth in our Asia and Europe markets, and from taking market share away from the top two contenders, EMC and HDS. Here are my thoughts on why:
EMC is remodeling its kitchen
Not happy with its status as the #1 disk hardware specialty shop, EMC is admirably trying to redefine itself as an ["information infrastructure"] company, buying up software companies and introducing new storage services. [Byte and Switch] reports on EMC's recent acquisitions:
EMC is the latest vendor to pin its colors to the SaaS mast, revealing its plan to offer SaaS-based archiving services during its recent Innovation Day in Boston.
EMC gave another clear indication of its SaaS intentions last month, when it spent $76 million to acquire online backup specialist Mozy.
IBM has offered [Managed Storage Services] for years through our Global Technology Services (GTS) division. Gartner recognized IBM as the #1 leader in storage services, with three times more revenue than EMC in this space.
As with a restaurant that is remodeling its kitchen, EMC can expect a temporary drop in revenue. If the remodel is done right, customers will come back to a bigger, brighter restaurant. If not, the restaurant re-opens as a much smaller, lesser version of itself. Recent events this year might incent EMC to get that kitchen done quickly:
A recent [class-action lawsuit] might result in EMC's "86 percent male" sales force going to sexual harassment sensitivity training, taking time away from selling high-end storage arrays in the field. Analysts consider "high-end" boxes as those costing over $300,000 US dollars. Because of the money involved, there is a lot of competition for high-end storage, so face-to-face time with prospective customers is crucial to making the sale. Anytime any vendor is mentioned in a lawsuit (and certainly IBM has had its share in the past, as Chuck Hollis correctly points out in the comment below), priorities get shifted, and there is a potential dip in revenues.
Dell acquires EMC's rival EqualLogic. Dell resold EMC midrange storage, like CLARiiON, so this should not impact EMC's high-end storage sales. While Dell will be allowed to sell EMC until 2011, this new acquisition might mean Dell leads with the EqualLogic offerings, and that could potentially reduce EMC revenues in the midrange space.
IBM went through a similar phase in the 1990s, redefining itself from an "IT Technology" company into a "Systems, Software and Services" company. These transitions can't be done in a quarter, or even a year; they take several years. IBM lost business to EMC in the 1990s, but is back with a stronger portfolio in the 2000s, and so IBM's kitchen remodeling effort appears to be paying off. We will see what happens with EMC in a few years.
HDS puts on the white lab coats
Meanwhile, HDS appears interested in taking over as the #1 disk hardware specialty shop. For years, Hitachi was the stereotypical JCM (Japanese IBM-compatible manufacturer) that made well-engineered "me, too" storage arrays. They would see what innovators like IBM and EMC were doing, and copy them. Recently, however, they seem to have changed strategy, introducing new features and functions on their high-end USP-V device, like [Dynamic Provisioning].
The problem is that customers don't want to feel like [Guinea pigs] in an experimental lab, especially with mission-critical data that they trust to their most-available, most-reliable high-end disk storage systems. Like IBM and EMC and the rest of the major storage vendors, Hitachi has top-notch engineers making quality products, but new features scare people, and so there is a lag in the adoption of new technologies.
In our youth, we might have preferred beer with recent born-on dates, and tequila aged less than 90 days. But as we get older, we switch to drinks like wine and whiskey, aged years, not weeks. The same is true for the marketplace. New start-ups and other "early adopters" might be willing to try fresh new features and functions on their storage systems, but more established enterprises prefer storage with more mature and stable microcode. Storage admins want to leave at the end of the day knowing that the data will still be there the next morning. In tough financial times, many established companies want the technological equivalent of ["comfort food"]: nothing spicy or exotic, but simple, hearty fare that fills the belly and keeps you satisfied.
Recognizing this, IBM often introduces new features and functions on its midrange lines first, and positions them accordingly. Once customers are comfortable with the concepts, IBM can then consider moving them into the high-end lines. For example, dynamic volume expansion was introduced on the DS4000 and SAN Volume Controller first, and once proven safe and effective, brought over to the DS8000 series. This strategy has served us well.
Well, those are my theories. If you have a different explanation of why storage vendors are not doing well in the high-end, drop me a comment!
I'd love to hear from you (I post letters from authors!) about how you put the blook together. Many folks have used cut-and-paste from the blog page into a word processor. Others have simply backed up their blogs, then cut and pasted. Some folks had the foresight to compose their posts in a word processor before posting!
Anyway, I'd like to know whatever ins and outs you'd like to share. Thanks.
Well Cheryl, I couldn't find any email address to send you a response, so I decided to post here instead and post a trackback on your blog.
Software: Office 2003 version of Microsoft Word on Windows XP system
Front matter: Title, Copyright, Dedication, Table of Contents, Foreword, Introduction
Back matter: Blog Roll, Blogging Guidelines, Glossary, Reference table, What people have written about me and my blog
According to Lulu, you could use OpenOffice instead with RTF files. I didn't try that. I did try using CutePDF to upload ready-made PDFs, but that didn't work. I also tried saving text in PDF format on my Mac Mini running OS X 10.4 Tiger, but Lulu didn't like that either. IBM now offers a free download of [Lotus Symphony] that might be an alternative for my next book.
For my blook, the "Blog Roll" serves instead of a more formal [Bibliography]. I could have also included online magazines and other web resources.
Decision 2: Chapter Configuration
I reviewed other blooks to see how they were organized. I thought I might organize the blog posts by topic or category, but all the blooks I looked at were strictly chronological, oldest post first. This, of course, is exactly the opposite of how they appear in a web browser. I decided to keep things simple, with just 12 chapters, one for each calendar month.
Each chapter was separated by a section break with unique footers, starting on an odd page number. The footers put the page numbers on the outside edges, so that even pages have numbers on the left and odd pages on the right. I also added the name of the chapter and the book, like so:
--------------------------------|    |----------------------------
40 ................December 2006|    |Inside System Storage.... 41
This was a lot of work, but makes the book look more "professional".
Decision 3: Cut-and-Paste
People have asked me why it took three months to put my blook together, and I explained that the cut-and-paste process was manually intensive. My posts are either HTML entered directly into Roller webLogger, or typed in HTML on Windows Notepad and cut-and-pasted over to Roller later. I have access to the HTML source of each post, as well as how it appears on the webpage, and tried cut-and-paste both ways. Copying the HTML source meant having to edit out all the HTML tags. I hadn't even looked into the idea of "backing up" all the entries through Roller, but they would probably have been HTML source as well.
It turned out that copying the webpage directly from the browser was better; it retains more of the formatting, and automatically eliminates all of the pesky HTML tags. I wanted the printed versions to resemble the web page version.
Microsoft Word renders all hyperlinks as bright blue underlined text, which I didn't like, so I removed all hyperlinks to avoid having to pay extra for "colored pages". This can be done manually, one by one, or by pasting with the "text only" option, but that removes all the other formatting as well. (Specifying a black-and-white interior on Lulu might have converted all of these automatically to greyscale, so I might have been safe to leave them in, which I probably would have done if I wanted an online e-book version with links active ... oh well.)
To indicate where the hyperlinks would have been, I wrapped all the linked text in [square brackets]. I have now gotten into the habit of doing this for future blog posts, so if I ever make another book, it will cut down on the cut-and-paste work.
Some of the items I linked to posed a problem. I had to convert YouTube videos to flat images of the first frame to include them in the book. Older links were broken, and I had to find the original graphics. I also sent a note to Scott Adams regarding the use of one of his Dilbert cartoons.
I decided to also cut-and-paste my Technorati tags and comments. For comments I made myself, I labeled them "Addition" or "Response". A few people did not realize that I was "az990tony" making the comments as the blog author, so I changed all of them to say "az990tony (Tony Pearson)" to make this clearer, and now do this on all future blog posts to minimize the work for my next book.
Because I used a lot of technical terms and acronyms, Microsoft Word actually gave me an error message that there were so many grammatical and spelling errors that it was unable to track them all, and would no longer put wavy green or red lines underneath.
I did all the cut-and-paste work myself, but since the website is publicly accessible, I could have gotten someone else to do this for me. Had I read Timothy Ferriss' book The Four Hour Work Week sooner, I might have taken his advice on [Outsourcing the project to someone in India]. I might consider doing this for my next book.
Decision 4: Numbering the Posts
I decided I wanted to standardize the title of each post. The date was not unique enough, as there were days that I made multiple posts. So, I decided to assign each a unique number, from 001 to 165, like so:
2006 Dec 12 - The Dilemma over future storage formats (033)
Posts that referred back to one of my earlier posts within the book had (#nnn) added, so that readers could jump back to them if they were interested. This eliminated trying to keep track of page numbers.
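A numbering scheme like this can be generated mechanically. Here is a minimal Python sketch (the function name and the second sample title are my own, purely for illustration):

```python
# Sketch: standardized, zero-padded post titles so that multiple posts
# on the same day remain unique. The post data below is hypothetical.
def numbered_title(date, title, seq):
    # Zero-pad to three digits (001-165) so titles read uniformly.
    return f"{date} - {title} ({seq:03d})"

posts = [
    ("2006 Dec 12", "The Dilemma over future storage formats"),
    ("2006 Dec 12", "A second post from the same day"),
]
titles = [numbered_title(d, t, i) for i, (d, t) in enumerate(posts, start=33)]
```

Assigning the number once, in the title itself, is what makes the (#nnn) cross-references stable even as page numbers shift during editing.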
Decision 5: Adding behind-the-scenes commentary
One of the reasons I rent or buy DVDs is for the director's audio commentary and deleted scenes. These extras provide added value over what I saw in the movie theatre. Likewise, 80 percent of a blook is already out in the public for reading, so I felt I needed to provide some added value. At the beginning of each month, I describe what was going on behind the scenes, and then in front of specific posts, I provided additional context. This could be context of what was going on in the blogosphere at the time, announcements or acquisitions that happened, what country I was blogging from, or what unannounced products or projects were being developed that I can now talk about since they are announced and available.
To distinguish these side comments from the rest of the blog posts, I decorated them with graphics. Searching for copyright-free/royalty-free clip-art, graphics, and photos that represented each concept was time-consuming. I shrunk each down to about 1 inch square in size, and changed them from color to greyscale. (Lulu's conversion to PDF probably would have automatically converted the color graphics to greyscale for me, in which case leaving them in full color might have been nice for an e-book edition ... oh well.)
I completed each chapter one at a time. So, for each month, I cut-and-pasted all the blog posts, tags and comments, then fixed up and numbered all the post titles, then added all the behind-the-scenes commentary, and cleaned up all the font styles and sizes. I recommend you do this at least for the first chapter, so you can get a good feel for what the finished version will look like.
Decision 6: Adding a Glossary
I sent early copies of the book to five of my coworkers knowledgeable about storage, and five local friends who know nothing about storage.
Some of my early reviewers suggested having an index, so that people can find a specific post on a particular topic. Others suggested I spell out all the acronyms that appear everywhere and put that into the Reference section, rather than on each and every occurrence in the book itself. Both were good ideas, and my IBM colleague Mike Stanek suggested calling it a GOAT (Glossary of Acronyms and Terms). Acronyms are spelled out, and terms or phrases that need additional explanation have a glossary definition. For each item, I listed the post or posts that use that term. Some terms are covered in dozens of posts, so I tried to pick the five or fewer most pertinent posts.
The glossary was far more time-consuming than I first imagined, with over 50 pages containing over 900 entries. I struggled deciding which terms and acronyms needed explanation, and which were obvious enough. On the good side, it forced me to read and re-read the entire book cover to cover, and I caught a lot of other mistakes, misspellings, and formatting errors that way. Also, I have a large international readership on my blog, so the glossary will help those for whom English is not their native language, as well as those readers who are not necessarily experts in the storage industry.
Decision 7: Designing the Covers
Up to this point, I had been printing early drafts with simple solid-color covers. Lulu has three choices for covers:
Just type in the text, upload an "author's photo", and choose a background color or pattern.
Upload PNG files, one for the front cover and one for the back cover, and choose the text and color of the spine.
Upload a single one-piece PDF file that wraps around the entire book.
I had no software to generate the PDF for the third option, so I decided to try the second option. My first attempt was to format the front title page in Word, capture the screen, convert it to PNG, and upload it as the front cover. I did the same for the back cover, with a small picture of me and some paragraphs about the book.
I chose a simple, straightforward title on purpose. Thousands of IBM and other IT marketing and technical people will be ordering this book and submitting their expenses for reimbursement as work-related, and I didn't want to cause problems with a cute title like "An Engineer in Marketing La-La Land".
The next step was to use [the GIMP] GNU image manipulation program, similar to Photoshop, to add a cream-colored background, a slanted green spine, and some graphics that we had developed professionally for some of our IBM presentations. I learned how to use the GIMP when making tee-shirts and coffee mugs for our [Second Life] events, so I was already familiar with it. For new blook authors, I suggest they learn how to use this for their covers, or find someone who can do this for them.
I did the paperback version first, and once that was done, it was easy to use the same PNG files for the dust jacket of the hardcover edition, adding some extra words for the front and back flaps.
The adage "Don't judge a book by its cover" seems to apply to everything except books themselves. The book cover is the first impression, online and in a bookstore. I have seen people pick books up off the shelf at my local Barnes & Noble, read the front and back covers, peruse the front and back flaps, and make a purchase decision without ever flipping a single page of the contents inside. From an article on Book Catcher, [SELF-PUBLISHING BOOK PRODUCTION & MARKETING MISTAKES TO AVOID]:
According to selfpublishingresources website, three-fourths of 300 booksellers surveyed (half from independent bookstores and half from chains) identified the look and design of the book cover as the most important component of the entire book. All agreed that the jacket is the prime real estate for promoting a book.
While many struggle to find the right title and cover art, I think it is interesting that Lulu lets you post the same book with slightly different titles and covers, each as a separate project, and lets market forces decide which one people like best. This is a common practice among market research firms.
Decision 8: Finding someone to write the Foreword
With the book nearly done, I thought it would be a nice touch to have an IBM executive write a Foreword at the front of the book. Several turned me down, so I am glad I found a prominent worldwide IBM executive to do it. I should have started this process sooner, as she wanted to read my book in its entirety before putting pen to paper. I had not planned for this. I was hoping to be done by the end of October, but waiting for her to finish writing the Foreword added some extra weeks. Next time, I will start this process sooner.
Decision 9: Printing Early Drafts
You need to have Lulu print at least one copy to review before making the book available to the public, and it doesn't hurt to order a few intermediate draft copies to make sure everything looks right. However, from the time I ordered on Lulu to the time the book was in my hands was over two weeks with standard shipping, so I needed a way to print drafts to look at in between.
To avoid wear-and-tear on my color ink-jet printer, I went and bought a large black-and-white [Brother HL-5250DN] laser printer. Rather than buying specialty 6x9 paper, I used standard 8.5x11 paper with the following 2-up duplex method:
Upload the DOC file to Lulu, and get it converted to PDF
Download the resulting PDF from Lulu back to your computer
View the PDF in Adobe Reader, and print it using 2-up "Booklet" mode.
For example, if you print 60 pages in booklet mode, it prints two mini-pages on the front side, and two more mini-pages on the back side, of each sheet of paper, resulting in 15 standard 8.5" x 11" sheets that can be folded, stapled, and read like a mini-booklet. My entire blook could be printed on seven of these mini-booklets, saving paper and giving me a close approximation of what the final book would look like. Each mini-page is 5.5"x8.5", so just slightly smaller than the final 6"x9" form factor. I found that 60 pages/15 sheets was about the maximum before it becomes hard to fold in half.
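The booklet arithmetic above can be sketched in a few lines of Python (a minimal illustration; the function names are my own, and the 60-page limit is just my folding preference):

```python
import math

def booklet_sheets(pages):
    """Sheets of paper needed to print `pages` in 2-up duplex booklet mode.

    Each sheet carries four mini-pages: two on the front and two on the back.
    """
    return math.ceil(pages / 4)

def booklets_needed(total_pages, pages_per_booklet=60):
    """How many mini-booklets, if each folds comfortably at about 60 pages."""
    return math.ceil(total_pages / pages_per_booklet)

# 60 pages fold into 15 sheets; a roughly 400-page blook fits in 7 booklets.
```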
So, if I had to do it all over again, I might have chosen 11pt Garamond (the default), or changed the default to 11pt Book Antiqua up front, so as not to have to spend so much time converting the fonts. I might have left out the glossary. I might have left in all the hyperlinks and graphics in full color for a separate e-book edition. And I definitely would have looked for an author for my Foreword much earlier in the process.
I didn't plan to write a blook when I started blogging. I have started putting [square brackets] around all my links. I have started putting "az990tony (Tony Pearson)" on all my comments. I had assumed that people were jumping to all the links I provided in context, but I learned that each blog post has to stand on its own, so now I make sure that I either paraphrase the important parts, or actually quote the text that I feel is important, so that the blog post makes sense on its own. This is perhaps good advice in general, but even more important if you plan to write a blook later.
Lastly, I decided up front to write blog posts that were 500-700 words long, about the average length of magazine or newspaper articles. In my blook, the average is 639 words per post, so I hit that goal. I have seen some blogs where each post is just a few sentences. Maybe they are posting from their cell phone, or don't have time to think out a full thought, but who wants to read a year's worth of [Twitter] entries?
Well Cheryl, I hope that helps. If you need any more, click on the "email" box on the right panel.
HealthAlliance Hospital has implemented an IBM System Storage Grid Medical Archive Solution (GMAS) to make patient records available to clinicians anytime, anywhere. IBM has a [Case Study] on this implementation. Here is an excerpt from the IBM [Press Release]:
HealthAlliance Hospital, a member of UMass Memorial Health Care, serves the communities of north-central Massachusetts and southern New Hampshire with acute care facilities, a cancer center, outpatient physical therapy facilities and a remote home health agency. As an investment in continued high-quality patient care, the hospital has implemented a picture archiving and communication system (PACS) from Siemens Medical Solutions so that it can move toward digital health records while eliminating traditional paper and film.
HealthAlliance is now able to make all of their data, including PACS images, available instantly, using the IBM GMAS, a cross-IBM offering comprised of storage, software, servers and services. The GMAS solution provides hospitals, clinics, research institutions and pharmaceutical companies with an automated and resilient enterprise storage archive for delivering medical images, patient records and other critical healthcare reference information on demand.
"Fast, easy access to diagnostic images is a priority," said Rick Mohnk, Vice President and Chief Information Officer of HealthAlliance. "Being paperless not only helps our staff improve their productivity and the quality of patient care, but also lowers our costs and improves our competitiveness. The IBM GMAS has helped us stay competitive and offer the leading edge technology that attracts top physicians to our staff and keeps patients feeling comfortable and well cared for."
Normally when you read or hear the term "grid", you might think of supercomputers, but in this case we are talking about information that is accessible from different interconnected locations. I've mentioned GMAS before in my posts [Blocks, Files and Content Addressable Storage and What Happened to CAS?] but I thought I would provide more detail on the elements of the solution.
Medical imaging devices are called "modalities", which is just fancy hospital talk for "method of treatment". These devices have Ethernet connections designed to write to any storage with a CIFS or NFS interface. For example, press the button on the X-ray machine, and the digitized version of the X-ray is stored as a file on whatever NAS storage is at the other end.
[Picture Archiving and Communication System] refers to the application and the computer equipment used to manage these medical images, often stored in DICOM format and indexed with HL7 metadata headers. There are many PACS vendors: GE Medical Systems, Siemens Medical, Agfa, Fuji, Philips, Kodak, Stentor, Emageon, Brit Systems, McKesson, Amicus, Cerner, Medweb and Teramedica, to name a few. Many PACS providers embedded specific storage as part of their solution, but are now starting to realize that they need to be part of a larger storage infrastructure.
IBM System Storage [Multi-Level Grid Access Manager] is software on IBM System x servers that manages access across the grid of interconnected hospitals, clinics and imaging facilities. It provides the NFS and CIFS interfaces to the modalities, and places the data into a GPFS file system on DS4000 series disk.
GPFS and DS4000 series disk
IBM [General Parallel File System] has all the Information Lifecycle Management (ILM) capabilities to move data from one disk storage tier to another, automate deletion based on expiration date, and provide concurrent access from multiple requesters. The IBM System Storage DS4000 series disk products can support both high-speed FC disk and low-cost SATA disk. For large medical images, SATA disk is often a good fit. The advantage of GPFS is that you can have policies to decide which images are placed on FC disk and which on SATA, and then later move these files based on access patterns. Images that are accessed most frequently can be on FC disk, and those that haven't been accessed in a while on SATA disk.
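To illustrate the tiering idea, here is a plain-Python sketch of the placement logic (this is not actual GPFS policy-language syntax, and the 90-day threshold is an arbitrary example, not a GMAS default):

```python
from datetime import datetime, timedelta

def choose_tier(last_access, now, threshold_days=90):
    """Place recently accessed images on fast FC disk, colder images on
    low-cost SATA disk. The threshold is illustrative only."""
    if now - last_access <= timedelta(days=threshold_days):
        return "FC"
    return "SATA"

# A scan read last month stays on FC; one untouched since January moves to SATA.
```

In real GPFS deployments, rules like this are written in GPFS's own SQL-like policy language and evaluated by the file system itself, rather than in application code.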
TSM space management
IBM [Tivoli Storage Manager for Space Management] supports moving files out of the GPFS file system and onto tape, based on policies. For example, keep the most recent 18 months on disk, and anything older than that gets moved to tape. This is similar to the migrate/recall technology used in DFSMShsm on the mainframe.
Tape Library automation
Before GMAS, paper and film images had to be retrieved manually from shelves and filing cabinets. The massive amount of data being stored, and for such long periods of time, makes it impractical to keep all of it on disk. With tape automation, any medical image more than 18 months old can be retrieved in minutes. Patients with an appointment can have all of their medical images retrieved in bulk the night before. Emergency room patients can have previous images retrieved while admission clerks check for insurance coverage and perform triage.
Images archived on the IBM GMAS are accessible in numerous ways. For example, all clinicians can access GMAS through the hospital record system, which provides complete paperless and filmless access to the patient record, including medical images, lab results, radiology reports, and pharmacy records. Medical workers at any location can also access the grid using their Web browsers. This allows each employee to use the display systems they are already familiar with.
Unlike disk-only NAS systems, IBM's blended disk-and-tape approach makes this a much more cost-effective solution. For more details on IBM GMAS, read this 6-page [Frost & Sullivan whitepaper].
Today, IBM announced a software/server/storage combo that out-performed both HP and Sun. Here is an excerpt from the [IBM Press Release]:
IBM today announced that its recently introduced E7100 Balanced Warehouse(TM), consisting of the IBM POWER6(TM) processor-based System p(TM) 570 server, the IBM System Storage(TM) DS4800 and DB2(R) Warehouse 9.5, is already lapping the field in performance. The new data warehousing solution is now ranked number one in both performance and price/performance in the TPC-H benchmark:
2 x speed-up over HP system with Oracle 10g and equal number of cores;
3.17 x speed up over Sun with Oracle 10g and 38 percent price advantage;
A new world record by loading 10 terabytes (TB) of data at six TB per hour (TB/hr).
"These latest benchmark results further prove IBM's strength and leadership in the business intelligence arena," said Scott Handy, vice president of marketing and strategy, IBM Power Systems. "The E7100 Balanced Warehouse is a complete data warehousing solution comprised of pre-tested, scalable and fully integrated system and storage components, designed to get customers up and running quickly to get to the real benefit of unprecedented business insight and intellect."
For those not familiar with the [IBM Balanced Warehouse], it is the productized version of DB2's ["Balanced Configuration Unit" or BCU] reference configuration. The IBM Balanced Warehouse presents a pre-tested, pre-configured solution for Business Intelligence (BI) applications. These come in the form of "building blocks" that can be combined to reach the size you need, with incremental growth as your business expands. Each building block expertly matches the CPU processor and RAM memory of the server with the appropriate I/O bus, cabling, and capacity of the disk system, resulting in optimal performance.
IBM DB2 software is designed to allow you to combine multiple building blocks into a single system image. This greatly simplifies your data warehouse deployment, and can help ensure success. For example, for a 50TB deployment, you can take a base 2TB building block, add 24 more, each with 2TB of disk capacity, and have a completely balanced environment. IBM clients have built systems over 300TB in this manner with these building blocks.
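The building-block sizing above works out as simple arithmetic. Here is a minimal sketch (the function name is my own, for illustration only):

```python
import math

def building_blocks(target_tb, block_tb=2):
    """Number of identical building blocks needed to reach a target capacity."""
    return math.ceil(target_tb / block_tb)

# A 50TB warehouse from 2TB blocks: one base block plus 24 more, 25 in total.
```

Because each block brings matched CPU, memory, I/O and disk along with its capacity, scaling out this way keeps the whole configuration balanced at any size.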
The IBM Balanced Warehouse is offered in several configurations:
The [C-class models] are designed for SMB customers, employing an IBM System x server with internal or direct attached EXP3000 disk.
The [D-class models] are the next step up, offering department-level data marts and data warehouses for larger deployments, employing an IBM System x server with EXP3000 or System Storage DS3400 entry-level disk.
The [E-class models] represent our top-of-the-line configurations for our largest enterprise deployments. The [E6000] runs Linux on an IBM System x server with System Storage DS4800 disk. The [E7000] runs AIX on an IBM System p575 server with DS4800 disk. The new [E7100] mentioned above runs AIX on a POWER6-based IBM System p570 with DS4800 disk.
As I have mentioned before, in my post [Supermarkets and Specialty Shops], companies are looking for complete solutions, preferably from a single vendor like IBM, HP or Sun, rather than buying piece-part components from different vendors and hoping the combined ["Frankenstein"] configuration meets business requirements.
The DS4800 is an obvious choice for this solution, providing an excellent balance of cost and performance, in a modular packaging that is ideal for the incremental growth design inherent in the IBM Balanced Warehouse philosophy. To learn more about this disk system, see the official [DS4800 website] for details, descriptions and specifications.