Wrapping up this week's theme on Cloud Computing, I finish with an IBM announcement for two new products to help clients build private cloud environments from their existing Service Oriented Architecture (SOA) deployments.
With more than 7,000 customer implementations worldwide, IBM is the SOA market leader. Of course, both of these products above can be used with IBM System Storage solutions, including Cloud-Optimized Storage offerings like Grid Medical Archive Solution (GMAS), Grid Access Manager software, Scale-Out File Services (SoFS), and the IBM XIV disk system.
IBM is part of the "Cloud Computing 5" major vendors pushing the envelope (the other four are Google, Microsoft, Amazon and Yahoo). In fact, IBM has a number of initiatives that allow customers to leverage IBM software in a cloud. IBM is working in collaboration with Amazon Web Services (AWS), a subsidiary of Amazon.com, Inc. to make IBM software available in the Amazon Elastic Compute Cloud (Amazon EC2). WebSphere sMash, Informix Dynamic Server, DB2, and WebSphere Portal with Lotus Web Content Management Standard Edition are available today through a "pay as you go" model for both development and production instances. In addition to those products, IBM is also announcing the availability of IBM Mashup Center and Lotus Forms Turbo for development and test use in Amazon EC2, and intends to add WebSphere Application Server and WebSphere eXtreme Scale to these offerings.
For more about IBM's leadership in Cloud Computing, see the IBM [Press Release].
Looks like fellow blogger and arch-nemesis BarryB from EMC is once again stirring up trouble. This time he focuses his attention on IBM's leadership in Solid State Disk (SSD) on the IBM System Storage DS8000 disk systems in his post [IBM's amazing splash dance, part deux], a follow-up to [IBM's amazing splash dance] and the multi-vendor tirade [don't miss the amazing vendor flash dance].
(Note: IBM [Guidelines] prevent me from picking blogfights, so this post is only to set the record straight on some misunderstandings, point to some positive press about IBM's leadership in this area, and for me to provide a different point of view.)
First, let's set the record straight on a few things. The [RedPaper is still in draft form] under review, and so some information has not yet been updated to reflect the current situation.
I find it amusing that BarryB's basic argument is that IBM's initial release of SSD on the DS8000 offers less than what the architecture could potentially be extended to support. Actually, if you look at EMC's November release of Atmos, as well as their most recent announcement of V-Max, they say basically the same thing: "Stay tuned, this is just our initial release, with various restrictions and limitations, but more will follow." Architecturally, the IBM DS8000 could support a mix of SSD and non-SSD on the same DA pairs, could support RAID6 and RAID10 as well, and could support larger-capacity drives or higher-capacity read-intensive formats. These could all be done via RPQ if needed, or in a follow-on release.
BarryB's second argument is that IBM is somehow "throwing cold water" on SSD technology, somehow trying to discourage people from adopting it. In fact, IBM offered SSD storage on BladeCenter servers long before any EMC disk system offering, and IBM continues to innovate in ways that deliver the best business value from this new technology. Take, for example, this 24-page IBM Technical Brief: [IBM System z® and System Storage DS8000: Accelerating the SAP® Deposits Management Workload With Solid State Drives]. It is full of example configurations showing that SSD on the IBM DS8000 can help in practical business applications. IBM takes a solution view, working with DB2, DFSMS, z/OS, High Performance FICON (zHPF), and on down the stack to optimize performance and provide real business value. Thanks to this synergy, IBM can provide 90 percent of the performance improvement with only 10 percent of the SSD capacity of EMC offerings. Now that's innovative!
The price and performance differences between FC and SATA (what EMC is mostly used to) are only 30-50 percent. But the price and performance differences between SSD and HDD are more than an order of magnitude, in some cases 10-30x, similar to the differences between HDD and tape. Of course, if you want hybrid solutions that take best advantage of SSD+HDD, it makes more sense to go to IBM, the leading storage vendor that has been delivering HDD+Tape hybrid solutions for the past 30 years. IBM understands this better, and has more experience dealing with these orders of magnitude, than EMC.
But don't just take my word for it. Here is an excerpt from Jim Handy, from [Objective Analysis] market research firm, in a recent Weekly Review from [Pund-IT] (Volume 5, Issue 23--May 6, 2009):
As for why STEC put out a press release on their own this week without a corresponding IBM press release, I can only say that IBM already announced all of this support back in February, and I blogged about it in my post [Dynamic Infrastructure - Disk Announcements 1Q09]. This is not the first time one of IBM's suppliers has tried to drum up business in this manner. Intel often funds promotions for IBM System x servers (the leading Intel-based servers in the industry) to help drive more business for their Xeon processor.
So, BarryB, perhaps it's time for you to take out your green pen and work up another one of your all-too-common retractions and corrections.
My post last week [Solid State Disk on DS8000 Disk Systems] kicked up some dust in the comment section. Fellow blogger BarryB (a member of the elite [Anti-Social Media gang from EMC]) tried to imply that 200GB solid state disk (SSD) drives were different from, or better than, the 146GB drives used in IBM System Storage DS8000 disk systems. I pointed out that they are the same physical drive, just formatted differently.
To explain the difference, I will first have to go back to regular spinning Hard Disk Drives (HDD). There are variances in manufacturing, so how do you make sure that a spinning disk has AT LEAST the amount of space you are selling it as? The solution is to include extra. This is the same way that rice, flour, and a variety of other commodities are sold. Legally, if it says you are buying a pound or kilo of flour, then it must be AT LEAST that much to be legal labeling. Including some extra is a safe way to comply with the law. In the case of disk capacity, having some spare capacity and the means to use it follows the same general concept.
(Disk capacity is measured in multiples of 1000, in this case a Gigabyte (GB) = 1,000,000,000 bytes, not to be confused with [Gibibyte (GiB)] = 1,073,741,824 bytes, based on multiples of 1024.)
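The GB/GiB distinction above can be sketched in a few lines of Python, using only the two definitions given in the parenthetical:

```python
# Minimal illustration of the decimal-GB vs binary-GiB distinction
# described above. The constants come straight from the definitions
# in the text.

GB = 1000 ** 3   # gigabyte: 1,000,000,000 bytes
GiB = 1024 ** 3  # gibibyte: 1,073,741,824 bytes

def gb_to_gib(gb: float) -> float:
    """Convert a capacity quoted in decimal GB to binary GiB."""
    return gb * GB / GiB

# A drive sold as "146 GB" holds roughly 135.97 GiB
print(round(gb_to_gib(146), 2))
```

This is why a drive's capacity as reported by an operating system (which often counts in binary units) looks smaller than the number on the label.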
Let's say a manufacturer plans to sell a 146GB HDD. We know that in some cases there might be bad sectors on the disk that won't accept written data on day 1, and other marginally-bad sectors that might fail to accept written data a few years later, after wear and tear. A manufacturer might design a 156GB drive with 10GB of spare capacity, formatted with a defective-sector table that redirects reads and writes of known bad sectors to good ones. When a bad sector is discovered, it is added to the table, and a new sector is assigned out of the spare capacity. Over time, the amount of space a drive can store diminishes year after year, and once it drops below its rated capacity, it fails to meet its legal requirements. Based on averages of manufacturing runs and material variances, these drives could then be sold as 146GB drives with a life expectancy of 3-5 years.
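The defective-sector table described above can be modeled as a simple remapping dictionary. This is a toy sketch of the general concept, not any vendor's actual firmware; all the names and numbers are illustrative:

```python
class RemapDrive:
    """Toy model of the defective-sector table described above:
    reads/writes to known-bad sectors are redirected to spares."""

    def __init__(self, rated_sectors: int, spare_sectors: int):
        self.rated = rated_sectors
        # Spare sectors live beyond the rated (advertised) range
        self.spares = list(range(rated_sectors, rated_sectors + spare_sectors))
        self.remap = {}  # bad sector -> assigned spare sector

    def mark_bad(self, sector: int) -> bool:
        """Record a newly discovered bad sector.
        Returns False once the spare pool is exhausted (the drive can
        no longer meet its rated capacity)."""
        if sector in self.remap:
            return True
        if not self.spares:
            return False
        self.remap[sector] = self.spares.pop(0)
        return True

    def resolve(self, sector: int) -> int:
        """Translate a logical sector to its physical location."""
        return self.remap.get(sector, sector)

drive = RemapDrive(rated_sectors=1000, spare_sectors=2)
drive.mark_bad(5)
print(drive.resolve(5))   # redirected to spare sector 1000
print(drive.resolve(6))   # healthy sector, unchanged
```

When `mark_bad` starts returning False, the drive has "dropped below its rated capacity" in the sense the paragraph describes.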
With Solid State Disk, the technology requires a lot of tricks and techniques to stay above the rated capacity. For example, you can format a 256GB drive as a conservative 146GB usable, with an additional 110GB (75 percent) spare capacity to handle all of the wear-leveling. You could lose up to 22GB of cells per year, and still have the rated capacity for the full five-year life expectancy.
Alternatively, you could take a more aggressive format, say 200GB usable, with only 56GB (28 percent) of spare capacity. If you lost 22GB of cells per year, then sometime during the third year, hopefully under warranty, your vendor could replace the drive with a fresh new one, and it should last the rest of the five year time frame. The failed drive, having 190GB or so usable capacity, could then be re-issued legally as a refurbished 146GB drive to someone else.
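The arithmetic behind the two formats above is easy to check. This back-of-the-envelope sketch assumes, as the text does, a 256GB physical device and a worst-case loss of 22GB of cells per year:

```python
# Back-of-the-envelope check of the two SSD formats discussed above:
# the same 256GB physical device formatted conservatively (146GB usable)
# or aggressively (200GB usable), losing an assumed 22GB of cells/year.

PHYSICAL_GB = 256
WEAR_GB_PER_YEAR = 22  # assumed worst-case wear rate from the text

def years_until_below_rated(usable_gb: int) -> float:
    """Years before cumulative cell loss eats through the spare capacity."""
    spare_gb = PHYSICAL_GB - usable_gb
    return spare_gb / WEAR_GB_PER_YEAR

print(round(years_until_below_rated(146), 1))  # conservative format: 5.0 years
print(round(years_until_below_rated(200), 1))  # aggressive format: ~2.5 years
```

The conservative format rides out the full five-year life expectancy, while the aggressive format falls below rated capacity partway through the third year, exactly as described.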
The wear and tear on SSD happens mostly during erase-write cycles, so for read-intensive workloads, such as boot disks for operating system images, the aggressive 200GB format might be fine, and might last the full five years. For traditional business applications (70 percent read, 30 percent write) or more write-intensive workloads, IBM feels the more conservative 146GB format is a safer bet.
This should be of no surprise to anyone. When it comes to the safety, security and integrity of our clients' data, IBM has always emphasized the conservative approach.
Recently, IBM and the University of Texas Medical Branch (UTMB) [launched an effort] using IBM's World Community Grid "virtual supercomputer" to allow laboratory tests on drug candidates for drug-resistant influenza strains and new strains, such as H1N1 (aka "swineflu"), in less than a month.
Researchers at the University of Texas Medical Branch will use [World Community Grid] to identify the chemical compounds most likely to stop the spread of the influenza viruses and begin testing these under laboratory conditions. The computational work adds up to thousands of years of computer time which will be compressed into just months using World Community Grid. As many as 10 percent of the drug candidates identified by calculations on World Community Grid are likely to show antiviral activity in the laboratory and move to further testing.
According to the researchers, without access to World Community Grid's virtual supercomputing power, the search for drug candidates would take a prohibitive amount of time and laboratory testing.
A few months after Larry's "call to action" in 2006, IBM and over twenty major worldwide public health institutions, including the World Health Organization [WHO] and the Centers for Disease Control and Prevention [CDC], [announced the Global Pandemic Initiative], a collaborative effort to help stem the spread of infectious diseases.
One might think that, given our proximity to Mexico, the first cases would have been in the border states, such as Arizona, but instead there were cases as far away as New York and Florida. The NYT explains in an article [Predicting Flu With the Aid of (George) Washington] that two rival universities, Northwestern University and Indiana University, both predicted that there would be about 2,500 cases in the United States, based on air traffic flight patterns and the tracking data from a Web site called ["Where's George"], which tracks the movement of US dollar bills stamped with the Web site URL.
The estimates were fairly close. According to the Centers for Disease Control and Prevention [H1N1 Flu virus tracking page], there are currently 3009 cases of H1N1 in 45 states, as of this writing.
This is just another example of how an information infrastructure, used properly to provide insight, make predictions, and analyze potential cures, can help the world become a smarter planet. Fortunately, IBM is leading the way.
Continuing my ongoing discussion on Solid State Disk (SSD), fellow blogger BarryB (EMC) points out in his [latest post]:
Oh – and for the record TonyP, I don't think I ever said EMC was using a newer or different EFDs than IBM. I just asserted that EMC knows more than IBM about these EFDs and how they actually work a storage array under real-world workloads.
(Here "EFD" refers to "Enterprise Flash Drive", EMC's marketing term for Single-Level Cell (SLC) NAND Flash non-volatile solid-state storage devices. Both IBM and EMC have been selling solid-state storage for quite some time now, but EMC felt that a new term was required to distinguish the SLC NAND Flash devices sold in their disk systems from solid-state devices sold in laptops or blade servers. The rest of the industry, including IBM, continues to use the term SSD for these same SLC NAND Flash devices.)
The disagreement resulted from his earlier statement in his post [IBM's amazing...part deux]:
Although STEC asserts that IBM is using the latest ZeusIOPS drives, IBM is only offering the 73GB and 146GB STEC drives (EMC is shipping the latest ZeusIOPS drives in 200GB and 400GB capacities for DMX4 and V-Max, affording customers a lower $/GB, higher density and lower power/footprint per usable GB.)
Here is where I enjoy the subtleties between marketing and engineering. Does the above seem like he is saying EMC is using newer or different drives? What are typical readers expected to infer from the statement above?
Uncontested, some readers might infer the above and come to the wrong conclusions. I made an effort to set the record straight. I'll summarize with a simple table:
So, we all agree now that the 256GB drives that are formatted as 146GB or 200GB are in fact the same drives, that IBM and EMC both sell the latest drives offered by STEC, and that the STEC press release was in fact correct in its claims.
I also wanted to emphasize that IBM chose the more conservative format on purpose. BarryB [did the math himself] and proved my key points:
I agree with BarryB that an aggressive format can offer a lower $/GB than the conservative format. Cost-conscious consumers often look for less-expensive alternatives and are often willing to accept lower reliability or shorter life expectancy as a trade-off. However, "cost-conscious" does not describe the typical EMC customer, who often pays a premium for the EMC label. To compensate, EMC offers RAID-6 and RAID-10 configurations to provide added protection. With a conservative format, RAID-5 provides sufficient protection.
(Just so BarryB won't accuse me of not doing my own math: a 7+P RAID-5 using conservative-format 146GB drives would provide 1022GB of capacity, versus only 800GB total for a 4+4 RAID-10 configuration using aggressive-format 200GB drives.)
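The parenthetical's arithmetic can be reproduced directly; this sketch just encodes the standard usable-capacity formulas for the two RAID levels mentioned:

```python
# Reproducing the RAID capacity arithmetic from the parenthetical above.

def raid5_capacity(data_drives: int, drive_gb: int) -> int:
    """Usable capacity of a RAID-5 array: N data drives plus one parity
    drive; only the data drives contribute usable space."""
    return data_drives * drive_gb

def raid10_capacity(mirrored_pairs: int, drive_gb: int) -> int:
    """Usable capacity of a RAID-10 array: each mirrored pair
    contributes one drive's worth of usable space."""
    return mirrored_pairs * drive_gb

print(raid5_capacity(7, 146))   # 7+P RAID-5, conservative format: 1022
print(raid10_capacity(4, 200))  # 4+4 RAID-10, aggressive format: 800
```

So even after giving up one drive to parity, the conservative RAID-5 layout delivers more usable capacity than the aggressive mirrored layout.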
In an ideal world, you the consumer would know exactly how many IOPS your application will generate over the next five years, know exactly how much capacity you will require, be offered all three drives in either format to choose from, and make a smart business decision. Nothing, however, is ever this simple in IT.
Wrapping up my week in Seattle, Washington, I presented at a [Dynamic Infrastructure] client event. This is the third one of a six-city tour. I will not be at the remaining cities next week, as I will be at the [Forrester IT Forum 2009 Conference] in Las Vegas, Nevada, instead.
This was a typical roadshow event: serve breakfast, meet everyone, hold four main-tent sessions, answer questions, then finish with a nice lunch. Here was the speaker line-up:
Next week, Las Vegas!
technorati tags: IBM, Dynamic Infrastructure, Seattle, Washington, Kirkland, Woodinville, Global Services, System x, BladeCenter, HP, Dell, POWER systems, AIX, Solaris, HP-UX, improve service, reduce costs, manage risk, XIV, TS7650, TS7650G, ProtecTIER, N series
Well, I have arrived safely in Las Vegas, and the [Forrester IT Forum 2009 conference] started today at noon, with a Welcome Reception at the Technology Showcase. As a platinum sponsor, IBM has a booth manned by several subject matter experts, on the fourth floor of the Sands Expo, between the Venetian and Palazzo hotels.
I'm here all week to blog about this event. When I am not in sessions, I will probably hang out at the Technology Showcase with my colleagues above. If you are in Las Vegas, and want to connect, please contact me.
Forrester analysts kicked off the keynote sessions for Day 1 of the Forrester IT Forum 2009 event. The theme for this conference is "Redefining IT's value to the Enterprise." Rather than focusing on blue-sky futures that are decades away, Forrester wants instead to present a blend of pragmatic information that is actionable in the next 90 days, along with some forward-looking trends.
If you ask CEOs how well their IT operations are doing, 75 percent will say they are doing great. However, if you dig down and ask how their companies are leveraging IT to help generate revenues, reduce costs, improve employee morale, drive profits, improve customer service, or manage risks, then the percentage drops to 30 to 35 percent.
What are the root causes of this "perception gap" in value between business and IT? Several ideas come to mind:
This conference is focused on CIOs and IT professionals, and how they can bridge the tech/business gap. The first two executive keynote presentations emphasized this point.
Continuing my week blogging at the [Forrester IT Forum 2009 conference] in Las Vegas, Nevada, IBM sponsored Tuesday night's evening reception.
This was a long day, and it looks to be a long week ahead.
Continuing my blog coverage of the [Forrester IT Forum 2009 conference], we start with some keynote sessions on Wednesday (Day 2).
A Forrester analyst drew an analogy between a river and the upcoming onslaught of millennials. Some 100 years ago, smart companies positioned themselves near rivers; the water provided power as well as a means of transporting products. Today, however, being positioned near a river doesn't ensure a company's success, and there are plenty of examples of long-established companies now filing for bankruptcy.
As we come out of this recession, the war for people will be intense. In the United States, as many as 76 million [Baby Boomers], born between 1946 and 1964, are retiring or approaching retirement, being replaced by 46 million [Gen X], born between 1965 and 1976. By 2010, there will be as many as 31 million [Millennials], born between 1977 and 1998, in the workforce.
To drive the point home, the Forrester analyst cited [Whirlpool] as an example: a company more than 100 years old, with 73,000 employees across 170 countries. Whirlpool manufactures kitchen, laundry and other home appliances. From 1997 to 2002, however, Whirlpool's per-ticket sales were dropping at a rate of 3.4 percent per year. To reverse this trend, they established the Whirlpool Young Professional program, assigned I-mentors, and invested in Web 2.0 collaboration tools. They realized that they needed to harness Gen X and Millennial energy. The result? From 2002 to 2006, they had a complete turnaround, with per-ticket sales growing 5.9 percent per year.
Since I covered IBM's keynote session yesterday, I thought it would only be fair to cover HP's today. IBM and HP are the top two IT vendors in the world, and not surprisingly also the top two IT storage vendors; both are platinum sponsors of this event.
Companies that have been around for awhile, like IBM, Whirlpool and Levi Strauss & Co., have learned to adapt to the changing business and IT landscape, and adopt new ideas for new ways of doing things.