Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Tony Pearson is a Master Inventor and Senior Software Engineer for the IBM Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
I've gotten some strange emails lately, so I thought I would address them here.
Dear Tony, In your last post about [New Years Resolutions for 2009], you mention spending more time with friends and family, which is typically a phrase used by people leaving a company.
Are you announcing your retirement?
No, I don't plan to retire anytime soon. Like most companies, IBM had [changed its retirement plans]. Those lucky enough to be on the old plan could retire after 30 years of service to IBM, and receive 60 percent of their average salary over their last five years as an annual pension for the rest of their lives. If you averaged $100K per year over the last five years, then you could retire on $60K per year. Many IBMers in Tucson took their pension and moved to Mexico, and lived like kings!
To qualify for the old plan, you had to be a certain age, have a minimum number of years of service working for IBM, or be an executive of Italian-American descent. I missed it by a few months, so I am on the new plan instead. This involves employer contribution matches to a 401(k) plan and reflects the trend from working for a single company all your life to changing careers or companies every 5 to 10 years. Many of my colleagues on the old plan had announced early last year their plans to retire by the end of 2008, but then changed their minds after the economic downturn.
For both personal and professional reasons, I plan to travel less in 2009, so that will give me more time to reconnect with friends and family, especially my friends over at [Tucson Fun and Adventures], the premier singles activities club in Southern Arizona; the [Tucson Laughter Club], recognized as one of the oldest laughter clubs in the United States; and the Tucson Film Society at the [Loft Cinema].
Dear Tony, Why not make a New Year's Resolution for an "exercise regime"?
I made that lifestyle change back in 2003, joining [Performance at McMahon's] fitness training facility, and have been lifting weights there several times per week ever since.
This is my personal trainer, Christine. Our gym held its annual Elite Performer athletic contest from August to November last year, and I came in fifth place. If you are looking for a personal trainer in the Tucson area to jump-start your own fitness goals, call Christine at 907-4510.
Normally, I consider New Year's Resolutions to be for starting new things, changing bad behaviors, or revisiting things I have long forgotten, not really for continuing to do the same as the year before. However, if it makes you happy, I resolve to continue my exercise regime of lifting weights three times a week, and will try to do more [cardio] as well.
Dear Tony, What's up with your fellow blogger Chuck Hollis from EMC and his post [Timely Reading], suggesting we should read Ayn Rand's hefty novel Atlas Shrugged?
What's your take on this?
I don't talk with Chuck personally about his posts, so I can only guess that he is under the same blackout period rules, which typically commence the day following the end of the fiscal quarter and end after the issuance of a news release disclosing the quarterly financial results.
That said, Chuck is an avid reader, and often recommends books he likes. For example, based on his recommendation, I read Tim Harford's [The Undercover Economist] and found it an excellent choice. In the book Atlas Shrugged, Ayn Rand renounces religion, socialism and a variety of other ills facing society in 1950s America. Since [93 percent] of scientists and engineers are Atheist, Agnostic or some other form of non-believer, I suspect most readers of the storage blogosphere are at least somewhat familiar with Ayn Rand's works. Personally, I prefer the works of fellow atheist authors [Douglas Adams], [Sir Isaac Asimov], [Richard Dawkins], and [George Orwell].
Chuck mentions he saw Stephen Moore's article ['Atlas Shrugged': From Fiction to Fact in 52 Years] in the Wall Street Journal, which considers this tome to be the second most influential book, second only to the Bible. No doubt many of the bailout plans proposed today sound similar to the government acts covered in the novel. One warning rings true for me:
When profits and wealth and creativity are denigrated in society, they start to disappear -- leaving everyone the poorer.
However, I suspect his post might also be partly motivated by Josh Bernoff's report [Time To Rethink Your Corporate Blogging Ideas] from Forrester Research. I've read the full report, and it has some interesting results. Only 16 percent of those surveyed who use company blogs say they trust them. The situation improves slightly if you look at people who are active in the blogosphere. Among those who read blogs regularly, only 24 percent trust company blogs. And only 39 percent of bloggers, who actually write their own blogs, trust company blogs. This ranks lower than every other form of content Forrester asked about, including broadcast and print media, direct mail, and email from companies.
This would mean company blogs are just slightly more trustworthy than self-proclaimed UFO alien abductees, tabloids at the grocery store checkout lane, and perhaps politicians like Vice President Dick Cheney or former Secretary of Defense Donald Rumsfeld. Josh insists that this report is not meant as a plea for existing corporate bloggers to give up blogging, but rather to be more thoughtful about how and why they blog.
Perhaps Chuck is suggesting that bloggers are like the creative types in Atlas Shrugged who felt under-appreciated, and that perhaps all IT Storage bloggers should go on strike?
Well, I'm not retiring, not quitting my exercise routine, and not planning to stop blogging. Last year, thanks to you my dear readers, I was ranked the third most influential blog on IBM developerWorks. Congratulations to my fellow IBM bloggers [Bobby Wolf] and ["Turbo" Todd Watson], who ranked first and second!
Next week, October 2-6, I am in San Francisco to support the IBM exhibition booth at the [Oracle OpenWorld 2011] conference. IBM is a Grand Level Sponsor for this event. IBM and Oracle have been partners since 1986, and IBM is a [Diamond Level Partner in the Oracle PartnerNetwork], the highest level available. I will be joined by dozens of other subject matter experts from various parts of IBM. Here is my schedule:
5:30pm - 7:00pm: Keynote session, Moscone North, Hall D
7:15pm - 10:30pm: IBM Team Dinner
8:00am - 9:15am: Keynote session, Moscone North, Hall D
9:45am - 4:30pm: IBM Booth #1111, Moscone South
5:00pm - 7:00pm: JD Edwards Customer Appreciation Event
8:00am - 9:15am: Keynote session, Moscone North, Hall D
9:45am - 6:00pm: IBM Booth #1111, Moscone South
7:00pm - 9:30pm: Titan Award Gala, SF City Hall
8:00am - 9:15am: Keynote session, Moscone North, Hall D
9:45am - 4:00pm: IBM Booth #1111, Moscone South
I won't have my laptop at the IBM booth, so if you need to reach me, send me an SMS text message to my cell phone, or send me a tweet on my Twitter account: [@az990tony]
IBM will also have experts in the following areas throughout the week:
Intel: Booth #711 at Moscone South
Java One: Booth #5608 at the Hilton San Francisco, Continental Ballroom
JD Edwards Pavilion: Booth HSJ-002 at the Westin St. Francis Hotel
Netezza, a newly acquired IBM company: Booth #3723 at Moscone West
I arrive Sunday afternoon. If you arrive Sunday, here are some things IBM is featuring:
Network with Other Quest IBM Customers on PeopleSoft
10/02/11, 10:00 a.m. – 11:00 a.m., OpenWorld session #29020
Presenter: Steve Johnston, IBM
Discuss topics of interest with your peers in this special interest group meeting for IBM customers using Oracle's PeopleSoft Enterprise applications.
Network with Other Quest IBM Customers on JD Edwards
10/02/11, 11:15 a.m. – 12:15 p.m., OpenWorld session #29001
Presenter: Steve Johnston, IBM
Discuss topics of interest with your peers in this special interest group meeting for IBM customers using Oracle's JD Edwards EnterpriseOne or JD Edwards World applications.
IOUG: Oracle Business Intelligence Enterprise Edition/Oracle Business Intelligence Applications (27380)
10/02/11, 12:15 p.m. – 1:15 p.m., OpenWorld session #27380
Presenters: Shyam Nath, IBM; Florian Schouten, Oracle
This session looks at Oracle Business Intelligence Enterprise Edition (OBIEE) and Oracle Business Intelligence Applications solutions. Hear what's new in the latest OBIEE release and how that affects Oracle BI Applications implementations. Learn how mobile BI support in OBIEE adds new meaning to pervasive BI.
IOUG: Oracle Exadata Customer Panel
10/02/11, 1:30 p.m. – 2:30 p.m., OpenWorld session #27261
Presenters: Shyam Nath, IBM; Vinod Haval, Bank of America
This moderated panel discussion includes Oracle Exadata customers, Oracle product managers, and implementers who will share their real-world implementation experiences and how they overcame challenges in the process.
Managing Your Oracle Applications in Today's Economy: Ask the Experts
10/02/11, 1:30 p.m. – 3:30 p.m., OpenWorld session #29280
Presenter: Frances Wells, IBM
Attend a panel discussion of your peers as they discuss how effective data management strategies have helped them reduce costs, streamline test and development projects, and improve Oracle application performance while increasing IT efficiencies.
Download IBM's mobile app for Oracle OpenWorld and receive a Starbucks gift card! (While supplies last!)
Visit [myIBMmobile.com] and get the IBM mobile app, your guide to navigating IBM events at Oracle OpenWorld 2011. Optimized for mobile devices, and tablet friendly.
Uncover the best award-winning restaurants in San Francisco with the free Zagat guide to local restaurants
Easily navigate the show floor and the city with special
Stay on schedule with a helpful list of all IBM sessions
Learn more about the IBM/Oracle relationship
Find Starbucks locations close to Moscone Center
Of course, IBM is going all out on the social media side as well.
Besides, I have been in airplanes and airports nearly every week since March 1, so driving to Las Vegas was a pleasant alternative.
While driving to Las Vegas was pleasant, driving in Las Vegas was not. I would go crazy as a taxi driver here! I think I will leave my car in the free parking garage all week, and limit myself mostly to the Mandalay Hotel where the conference is being held, and only venture out to other hotels that are walking distance, like the Luxor next door.
In the evening, IBM hosted some of the industry's top analysts and press at an invitation-only reception. Several other IBMers were there, including Barry Whyte, Steve Kenniston, Nicki Rich and Ron Riffe. This event was organized by IBM Analyst Relations, including David Rasmussen and Leanna Holmquist.
Ron mentioned my penchant for taking pictures with other people and posting them on my blog, so I am glad that Leanna volunteered to take a picture with me for my first post of the week!
I would also like to mention that Ron Riffe has joined the ranks of storage bloggers. His blog is called [The Line]. Here is Ron's post on his "Day 0" observations here at Edge: [Rainy Days and Sunshine].
Monday marked the first official day of the [IBM Edge 2013] conference. This is actually three conferences in one: Executive Edge for high-level executives, Winning Edge for Business Partners, and Technical Edge for storage administrators and IT managers/directors. I attended the latter.
The General Session was kicked off by an awesome drumbeat-heavy song performed by a band from North Carolina called [Delta Rae]. Their use of drums reminded me of Adam Ant.
Deon Newman, IBM VP of Marketing, Systems and Technology Group, North America, served as today's master of ceremonies. He was pleased to announce there were more than 4,700 attendees at this event -- representing more than 60 countries -- a huge increase over the attendance we had last year. Here are my notes of the opening General Session:
Stephen Leonard, IBM General Manager, Sales, Systems & Technology Group
Consumers expect an always-on technology experience. We, as consumers, are leaving a trail of data that is getting wider and wider every day. Data is the new "natural resource", but unlike most natural resources, it is plentiful and never-ending.
In 1996, about 29 percent of IT spend was for administration and management; today it has grown to 68 percent. Some 34 percent of IT projects deploy late.
Stephen emphasized the themes of Smarter Computing: (a) systems that are designed for the data, (b) software-defined environments, that are (c) open and collaborative.
Stephen cited a customer example from [Jaguar Land Rover], a manufacturer of sporty automobiles and rugged 4x4 vehicles. IBM developed a ["Virtual Dealership"] for them. Rather than trying to maintain additional physical bricks-and-mortar facilities, which can be expensive to staff and fill with vehicles across their wide portfolio, the virtual dealership allows prospective customers to try out vehicles through simulation. This virtual dealership can be taken to where prospective clients are, such as a sporting event or shopping mall.
Ed Walsh, IBM VP of Marketing, System Storage and Networking
Ed presented the "data economics" of all-Flash arrays. IBM recently acquired Texas Memory Systems, renamed the RamSan products to IBM FlashSystem, and committed to invest an additional $1 Billion USD in flash technologies.
On a $-per-IOPS basis, IBM FlashSystem can deliver 30 percent lower total cost of ownership (TCO) than disk-based alternatives. The cost of Flash is offset by 17 percent fewer servers from higher CPU utilization rates, resulting in 38 percent lower software license fees. Flash is also more efficient, with 74 percent lower environmental costs and 35 percent lower operational support costs. For many situations, Flash is the solution for poorly written software applications.
Ed also mentioned IBM's strong support for open source and open standards. Over the past 15 years, IBM has been a major contributor to open source efforts like Linux, Eclipse and Apache. IBM continues that tradition, with contributions to OpenStack and Hadoop.
Without going into any details, Ed also hinted that IBM announced 65 new or refreshed products in Storage, Networking and PureSystems. The details of each announcement would be explained during the break-out sessions during the week.
Charles Long, Founder and CEO of Centerline Digital
[Centerline Digital] does computer-generated animations in support of corporate marketing efforts.
(FTC disclosure: I work for IBM, and have worked closely with Centerline Digital marketing agency when I was the chief marketing strategist for System Storage back in 2006-2007. I was not paid or provided any products or services to mention any of the clients mentioned in this post.)
Charles indicates that internet technologies have converted "Analog dollars to digital pennies". Using IBM PureFlex with Storwize V7000 storage, real-time compression, and Tivoli Endpoint Manager, Centerline was able to drastically improve their business. He feels the old joke of "Better, Faster, Cheaper - Choose Any Two!" no longer applies with IBM solutions!
Ambuj Goyal, IBM General Manager, System Storage and Networking
Formerly my fifth-line manager in charge of Software and Systems, Ambuj switched to be the General Manager of System Storage and Networking group earlier this year.
In his former roles, Ambuj managed software and hardware product lines, but he feels storage is a completely different animal. In the past, clients focused on choosing the best servers, then chose their storage as an afterthought. Today, Ambuj feels that processors are now a commodity, and that storage is becoming the forethought.
Ambuj also highlighted the evolution of IBM's Software-Defined Environment:
In 2003, IBM introduced the SAN Volume Controller, a storage hypervisor. Now, over 10,000 clients enjoy the benefits of a Software-Defined Environment using SAN Volume Controller.
SmartCloud Virtual Storage Center represents the "third generation" for policy-driven management, combining SAN Volume Controller, Tivoli Storage Productivity Center, FlashCopy Manager and the Storage Analytics Engine.
IBM is trying to help people keep their business-critical apps running securely, start quickly, add value and functions at scale, and leverage data-intensive solutions to help drive new business and gain customer insight.
Joseph Balsamo, VP of Platform Engineering at Prudential Insurance
While the IT department of [Prudential Insurance] is focused on the three V's -- Volume, Velocity and Variety -- Joe is more focused on solutions, status and cost. His mission was to strengthen the role of IT as a partner through business aligned services. Prudential has deployed XIV, N series, SAN Volume Controller (SVC) and Storwize V7000 disk systems, with the following results:
Reduced their $-per-IOPS by 75 percent
No additional storage administrators
85 percent utilization through thick-to-thin migrations
Reduced their $-per-MB by 50 percent
Reduced their 72-hour RPO to 15 seconds
These benefits were achieved over the past 24 months of deployment.
Paulo Carvao, IBM Vice President, North America Systems & Technology Group
Paulo is Deon Newman's boss. He presented BlueInsight, IBM's internal "Business Analytics" cloud accessible by over 200,000 users, with over 1 PB of content.
Inside IBM, the deployment of a Smarter Infrastructure has allowed for 25 percent capacity growth at a flat IT budget, with 30,000 fewer megawatt-hours of energy consumed and 103,000 fewer square feet of floor space.
Why is this significant? Today's disks write each bit of information across about 1,200 atoms, and the smallest number of atoms that can retain a bit of information is 12, so sometime in the next 7 to 10 years, the improvements in magnetic bit density for disk will stop.
For silicon chips, the smallest practical feature is 7 nanometers, about 35 atoms wide. We are quickly approaching that limit also.
I can already tell that it's going to be a busy week! Follow me on twitter (@az990tony) and tag your posts and tweets with #IBMedge hashtag.
Tuesday (Day 2) of the [IBM Edge 2013] conference once again started with live music from the rock band [Delta Rae]. I had the pleasure of meeting one of the lead singers, Liz Hopkins, before their set! In the picture on the right, she is the brunette in the middle.
(FTC Disclosure: I work for IBM, and not for Sprint, Wellpoint or any other company mentioned in this blog post. I was not paid by any other company to mention their company, products or services. I have used Sprint in the past for my cellphone service, and I can say they are a great company from an end-user experience standpoint. As part of my job at IBM, I was a technical advocate for Wellpoint from 2009 to 2011 as they deployed their IBM Watson-based solution. I am an extended member of Jeff Jonas' G2 team.)
Here is a quick summary of the general sessions on Day 2:
Tom Rosamilia, IBM Senior VP of Systems and Technology Group, Integrated Supply Chain
I have known Tom for a long time, since the 1990s when we both attended [SHARE user group] conferences, and he recently took over as Senior VP of our group. He started his talk with the innovative uses of "big data" analytics. For example, retailers can tell which shoppers are pregnant six months before the birth of their child, based entirely on changes in shopping patterns, and can then send out "Hey, you're having a baby!" promotions targeted specifically to them.
Instead of the [Spray-and-Pray] approach of traditional direct-mail advertising that targets demographics based on broad categories of gender, race or income brackets, big data analytics allows our clients to get down to a "Demographic of One".
This is all part of IBM's "Smarter Planet" campaign that it launched five years ago. IBM has 3,000 research scientists (full disclosure: I was one, myself, before I switched over to development), investing over $6 billion USD per year, half of which is invested in our Systems and Technology Group that develops servers and storage hardware (or as we like to call it internally, the "M" in IBM). Here are some of the recent investments:
$1 Billion USD in Flash technology, including the acquisition of Texas Memory Systems
$800 Million USD in the development of eX5 for the System x server line
$2 Billion USD for PureSystems, including Flex System, PureFlex, PureApplication and PureData models. IBM has sold more than 4,000 PureSystems in 90 countries
$4 Billion USD for Power7 and Power7+ processors and the Power Systems that are based on them, which has helped IBM complete 3,400 displacements of competitive UNIX servers.
$1 Billion USD for zEC12, the latest System zEnterprise mainframe. Across all server types, IBM is #1 in worldwide server share, but the recent surge in mainframe sales certainly helps. Of the top 100 banks in the world, 96 run their mission critical applications on System z mainframes.
$10 Billion USD in acquisitions since 2010 (20 last year, 160 in the last 12 years), including SoftLayer Cloud, Kenexa Human Capital, Worklight mobile app development, and Netezza analytics
IBM is also getting serious about being a "Social" business, and is already #1 in Enterprise Social Software. (This blog runs on IBM Connections, which is available to our clients as well for their social efforts).
The right infrastructure is required for innovation. Corporate cultural change is also required. Transformation is the new business imperative
Karim Abdullah, Director IT Operations at Sprint
What I like about Edge is that instead of listening to one IBM executive after another, IBM invites key reference clients to provide their testimonials.
Over 71 percent of CIOs at leading companies are trying to figure out how best to take advantage of new technologies to improve their customer experience. [Sprint] is one of them, ranking as the #3 telecom in the United States.
Flash is a game changer. Leveraging IBM Flash technology allowed Sprint to achieve a 45-times improvement in performance for targeted queries from the call centers. Not only has it helped increase performance at Sprint, but also reduced energy, floorspace, and power & cooling costs.
Dr. Samuel R. Nussbaum, M.D., Executive VP Clinical Health Policy, Chief Medical Officer, Wellpoint
[Wellpoint] is the largest health benefits company in the United States, with 36 million patients, and 600,000 physicians and medical specialists in its network.
Dr. Nussbaum spoke about the power of information. Citing a famous quote from Charles Dickens, he feels we are in the best of times, and the worst of times, when it comes to healthcare. On the best of times, we have genomics research that helps cure disease, and a variety of other science and technology breakthroughs.
On the worst of times, the industry is not without its own set of problems. Why are there such huge variations in healthcare expenses and quality? We get the right care only 55 percent of the time. Part of the problem is that our reimbursement systems focus on volume, not outcomes. Wellpoint is working to fix this.
Dr. Nussbaum shared some shocking statistics:
$2.6 Trillion USD is spent on healthcare in the USA; one third of this is wasteful and unnecessary
20 percent of patients are re-hospitalized within 30 days
From 2002 to 2010, annual U.S. household income grew only 7 percent, from $49,000 to $52,000 per year, but medical expenses nearly doubled in the same timeframe, from $9,235 to $18,074 per year.
It's not enough to just spend nearly $100 Billion USD on public and private healthcare research to get innovation; you have to put the results to good use. Why did it take so long to put wheels on airline luggage? It took six thousand years from the invention of the wheel to putting wheels on luggage.
Part of the challenge is that there is too much information and not enough time. Medical information doubles every 5 years. There are more than 21 million articles in [PubMed/MEDLINE], with 1 million added every year. Only 12 percent of physicians' time is spent with patients and examinations, while 80 hours per week are spent dealing with payors and administrators. For pre-authorizations for certain medical procedures or tests, 66 percent of physicians experience delays in pre-certification.
Computer Science has evolved from tabulation on punched cards, to programmatic logic, to new forms of [Cognitive Computing]. The Watson computer thinks like a physician does, and can understand natural language. Wellpoint's Anthem Watson Application can analyze the entire "Longitudinal Patient Record" of payors, labs, hospital EMR, physician office EMR, and imaging. Watson crunches all of this available information to recommend treatment options, provide decision support for oncology, and support evidence-based care through pre-authorization.
Wellpoint is working with [Memorial Sloan-Kettering] to focus Watson-based efforts on cancer, based on analysis leverage 1.5 million patient records. More than 1500 people die of cancer every day. Wellpoint and Memorial Sloan-Kettering are going after 22 different cancers, including lung cancer and breast cancer.
Bernie Meyerson, IBM Fellow and VP of Innovation
Many people felt that Bernie did not get enough time to speak on Monday, so he is back today for a second topic! He started with a quote:
"Cyber Security threat facing America is a pre-9/11 moment. We know foreign cyber actors are probing America's critical infrastructure networks."
--- Leon Panetta, U.S. Secretary of Defense
Bernie gave examples of how cyber-terrorists could easily bring down the US government and its financial system. In a recent analysis, more than 50 percent of software was found to have "back doors". Recent attacks show the extent of the problem.
A perimeter defense is not enough. Thus, the primary weapon to fight these threats is real-time data analytics. IBM has four specific platforms: Cyber Security Platform, Insider Threat Platform, Mobile Security Analytics, and Cloud Security Analytics. These allow security teams to see threats "visually".
Various parts of IBM are focused on security issues. IBM Research, Security Systems, X-Force, and IBM Security Services are constantly innovating because the bad guys are innovating too! IBM Watson's vast cognitive computing capability is being put to work to help address security issues.
Innovation is transforming IT. If your laptop did not benefit from [Moore's Law], the computing capability would weigh a quarter of a million tons! Of course, some people fear the worst. Bernie cited HAL in the movie ["2001: A Space Odyssey"] and SkyNet in ["The Terminator"] series.
IBM recently launched [MobileFirst], to bring together all aspects of mobile computing, including smartphones and tablets. In some countries, your mobile phone is your only connection to your bank, your internet, your friends and families. Unfortunately, there are a few malicious apps readily available for download from respective "app stores" for each device.
Jeff Jonas (IBM Fellow, Chief Scientist, Entity Analytics) and David Baker (Pew Charitable Trust, Director of Election Initiatives at Pew)
David started out with a funny analogy. A government employee suggested that elections should be as simple as getting your oil change at [Jiffy Lube]. Think about it, changing your car's oil used to be quite a hassle, and now you can drive in, and have your oil changed in 15 minutes or so.
David's response was that elections are already like oil changes, if everyone got their oil change only once every four years, and all got them on the same day, at buildings that have never been designed for oil changes, by people who have never seen the underside of a car, being paid less than minimum wage.
Jiffy Lube performs oil changes every day. Elections, on the other hand, are on a 48 month cycle, with little to no activity for 47 months, then for one month they have Black Friday-meets-Day-after-Christmas times ten.
One of the biggest factors in the problems of elections is the voter lists. Here are some astonishing facts about U.S. elections:
12.7 million out-of-date records exist at any given moment, mostly because Americans are quite mobile. One out of eight Americans moved between the 2008 and 2010 elections. One out of four young Americans moves every year.
1.8 million deceased individuals are listed as voters
2.7 million people are registered in multiple states, often because they update their registration in their new location, but fail to notify their previous state's voter registration.
51 million (1 in 4) not registered to vote
One out of three voters think voter registration is updated automatically when they move
More than 50 percent of voters are unaware that the local Department of Motor Vehicles (DMV) can be used to update voter registration when drivers license information is updated.
Most voter registrations happen within 30 days of election, in paper form. Some states like Michigan process 99.7 percent of voter registrations correctly, but other states like Indiana process only 28.3 percent correctly.
When IBM's Jeff Jonas was invited by Pew to work on a task force on elections, he felt like Jim Carrey in the movie ["Yes Man!"].
Jeff Jonas showed, via whiteboarding, how to connect voter records that match on key pieces of information, like birthdate and Social Security number, by cross-referencing voter registration lists with information from each state's DMV.
The technology is "G2", which analyzes the observation space like assembling puzzle pieces: data finds matching data, and relevance finds you.
To address privacy concerns, Jeff added seven key privacy features, including a "data anonymization" feature for date-of-birth, driver's license number and Social Security number, using a one-way hash that cannot be reversed to get the original number. The information from each state is anonymized before it leaves the state, so it is secure from the very beginning.
To explain a one-way hash: imagine putting a pig through a special grinder to create sausage. Even if a malicious party had access to both the grinder and the sausage, they would not be able to recreate the pig in its original form.
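To make the sausage analogy concrete, here is a minimal sketch in Python of how a keyed one-way hash can anonymize sensitive fields while still allowing records to be matched. This is an illustration only, not ERIC's actual implementation; the field values, the secret key, and the use of HMAC-SHA256 are my own assumptions.

```python
import hashlib
import hmac

# Hypothetical shared secret agreed upon by the participating states.
# Keying the hash makes brute-force guessing of the original values
# much harder than a bare hash would be.
SECRET_KEY = b"shared-secret-known-only-to-participants"

def anonymize(value: str) -> str:
    """One-way hash of a sensitive field (date of birth, driver's
    license number, SSN). The digest cannot be reversed to recover
    the original value: the sausage cannot be turned back into the pig."""
    normalized = value.strip().lower().encode("utf-8")
    return hmac.new(SECRET_KEY, normalized, hashlib.sha256).hexdigest()

# Two states hashing the same SSN independently produce the same digest,
# so their records can be cross-referenced without ever exchanging raw SSNs.
state_a_record = anonymize("123-45-6789")
state_b_record = anonymize("123-45-6789")
print(state_a_record == state_b_record)                 # same person matches
print(anonymize("987-65-4321") == state_a_record)       # different person does not
```

Because each state applies the same keyed hash before its data leaves the state, matching happens entirely on digests, and no raw identifiers are ever shared.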
The result is the Electronic Registration Information Center, or [ERIC] for short, a collaboration across seven states. ERIC has already identified 5.7 million eligible voters in these seven states. Over 300,000 people registered months before the deadline using efficient online methods, which are now offered in 13 states and are more cost-effective.
How cost-effective? By comparison, processing a paper voter registration form costs about 83 cents, while online processing costs only 3 cents. This means huge savings for taxpayers and governments.
The [Edge2013 livestream replays] are still available. If you went to Edge2013, and want to see something again, or if you weren't there, and want to see what you missed, check it out!
Gosh, is it October already? Last month marked my Seventh "Blogoversary". I started this blog seven years ago, in September 2006, to celebrate IBM's 50th anniversary of disk systems.
Several readers have expressed concern that I have not been blogging as much lately. For all my readers looking for a lame excuse, I just have two words: Jury Duty. Last month, I was selected for a specific trial. While many people dread the thought of jury duty, I found it a refreshing change of pace. However, I am glad to be back at work where I belong!
(For my readers outside the United States: jury service in the USA is compulsory. Jurors listen to all of the testimony in a criminal or civil trial, ask questions, review evidence, and take notes. Thanks to a power called [jury nullification], members of the jury can disagree with the law the defendant has been charged under, and even reach a verdict contrary to the letter of the law, on the belief that the law should not be applied in that particular case.)
Continuing my belated coverage of the [IBM Edge 2013] conference, I participated in the storage "Meet the Experts" panel, a long-time tradition started at the SHARE User Group conference and carried forward to other IT conferences. The free-for-all is a Q&A panel of experts where anyone can ask any question. These sessions are sometimes called "Birds of a Feather" (BOF).
(Disclaimer: Do not shoot the messenger! We had a dozen or more experts on the panel, representing System Storage hardware, Tivoli Storage software, and Storage services. I took notes, trying to capture the essence of the questions, and the answers given by the various IBM experts. I have spelled out acronyms and provided links to relevant materials. The answers from individual IBMers may not reflect the official position of IBM management. Where appropriate, my own commentary will be in italics.)
How should storage administrators deal with server virtualization?
We recommend you investigate the use of OpenStack. IBM storage systems like XIV, SVC and the rest of the Storwize Family support the OpenStack Cinder interfaces to provision block storage in support of server virtualization.
What are the interactions between SVC and Flash?
Depending on which hardware model you have, SVC nodes can support up to four Solid-State Drives (SSD) each, a maximum of 32 drives in an 8-node cluster. IBM also announced the [ "IBM FlashSystem Solution"] that combines SAN Volume Controller (SVC) with All-Flash arrays, offering features like volume mirroring, thin provisioning, real-time compression and remote site replication.
Can you explain object storage, and how it differs from block-level storage?
Unlike block-level storage, object storage is accessed through HTTP interfaces known as [RESTful APIs]. OpenStack offers [Swift APIs] for this. Many consider such APIs a prerequisite for deploying Software-Defined Storage. Object storage can also be less expensive, as it can employ commodity hardware.
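To make the access model concrete, here is a toy in-memory sketch (my own illustration, not actual Swift code) of how Swift-style object storage addresses data by account, container and object name, using HTTP-verb-like operations:

```python
class ObjectStore:
    """Toy in-memory model of Swift-style object storage.
    Real Swift exposes these operations as HTTP PUT/GET/DELETE
    against URLs of the form /v1/<account>/<container>/<object>."""

    def __init__(self):
        self._objects = {}  # (account, container, object) -> (data, metadata)

    def put(self, account, container, obj, data, **metadata):
        self._objects[(account, container, obj)] = (data, metadata)

    def get(self, account, container, obj):
        return self._objects[(account, container, obj)]

    def delete(self, account, container, obj):
        del self._objects[(account, container, obj)]

store = ObjectStore()
# Rich metadata travels with the object and can be searched later.
store.put("acme", "backups", "db.tar.gz", b"<binary payload>", retention="7y")
data, meta = store.get("acme", "backups", "db.tar.gz")
```

Contrast this with block storage, where the array sees only numbered blocks and knows nothing about what they contain.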
Are five-minute intervals sufficient to determine storage performance problems?
IBM Tivoli Storage Productivity Center uses 5-minute intervals, gathering performance data from a broad variety of devices in your datacenter. Generally, this is sufficient for identifying and troubleshooting performance issues. If you need finer XIV-like granularity, you may need to use device-specific tools.
How can we bring processing closer to the data like with Oracle's Exadata?
When will SVC, Storwize V7000 or DS8000 series products offer the Object Storage interfaces you discussed in the previous question?
IBM already supports OpenStack's Cinder interfaces for block-level access to storage, and, as a Platinum Sponsor of OpenStack, is contributing to the object-based Swift project. Watch this space!
Rather than having to put separate ProtecTIER gateways in front of SVC or Storwize V7000, can we have SVC/V7000 just add the "Virtual Tape Library" protocol to its stack of host-attachment protocols?
Great idea! We will pass this on to IBM development.
With all of this "read acceleration" won't this increase the likelihood of a "write storm"?
Yes, write storms are coming, but can be controlled.
Are there plans to offer SVC behind SONAS gateways?
Yes, an iRPQ is available.
What is the biggest performance bottleneck for Flash?
Data moving through SAN switches adds only 5-8 microseconds of latency. Distributed systems often do not measure in sub-millisecond units, making it difficult to see improvements below 1 millisecond. Many performance issues arise from lazily-written applications. It helps to have Flash-optimized middleware, such as IBM DB2 BLU and WebSphere.
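The measurement-granularity point can be illustrated with a short Python sketch (a hypothetical helper, not an IBM tool): a clock that reports in milliseconds would round a flash-class I/O down to zero, while a nanosecond clock reporting microseconds makes the improvement visible.

```python
import time

def time_call_us(fn, *args):
    """Time a single call and report microseconds.
    Millisecond-granularity reporting would show 0 ms for
    flash-class latencies of a few hundred microseconds."""
    start = time.perf_counter_ns()
    fn(*args)
    return (time.perf_counter_ns() - start) / 1_000  # ns -> us

# A sub-millisecond operation that a coarse timer would miss entirely.
elapsed_us = time_call_us(sorted, list(range(10_000)))
```

Instrumenting applications at this granularity is the first step to seeing what flash actually buys you.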
Since SVC adds 60 to 100 microseconds of added latency in front of IBM FlashSystem, is there a way to optimize the path through the SVC stack?
Existing parameters allow you to disable the SVC cache for particular volumes. We are investigating a more formal solution: a leaner code path for SVC with FlashSystem.
Are there any exciting enhancements to the [ SDDPCM] multi-pathing drivers you can share with us?
No, IBM is focused on MPIO multi-pathing instead.
Thanks to all my readers who expressed concern over my lack of blogging. As you all know, [ blogging is like jogging], so getting back into the full swing of things requires extra effort on my part.
The [IBM Edge2015 conference] is the premier conference covering infrastructure innovations for IBM System Storage, as well as sessions about z Systems and POWER Systems from our IBM Enterprise conference. Check out this short two-minute [YouTube video on IBM Edge2015].
Here is my quick recap of the kickoffs and keynote sessions on the first day, Monday, May 11, 2015.
Storage Systems Technical Kickoff
At the dreadful hour of 8:30am on Monday morning, Clod Barrera and Axel Koester kicked off the Storage portion of Technical Edge.
Clod is IBM Distinguished Engineer and Chief Technical Strategist for IBM's System Storage product line. He discussed IBM's investments in Software Defined Storage, FlashSystem products, and Storage Virtualization.
Axel Koester is IBM Executive IT Specialist and Storage Chief Technologist for the European Storage Center of Competency. Axel discussed IBM's invention of Dynamic Random Access Memory (DRAM) in 1966, and how this led to the development of Programmable Read-Only Memory (PROM), Electronically Erasable PROM (EE-PROM), and NAND Flash systems. Running out of two-dimensional surface for NAND Flash has led to the development of 3D Flash.
Stephen Leonard, Tom Rosamilia and Jen Crozier presented the opening keynote. It is available as a 90-minute YouTube video [IBM Edge 2015 - General Session] compliments of SiliconAngle.
Tom Rosamilia is my fifth-line manager and IBM Senior Vice President of the recently formed "IBM Systems" business unit, comprised of System Storage, z Systems, POWER Systems and Middleware.
By 2016, there will be 26 billion things on the Internet. Connected cars, for example, can serve as Wi-Fi "hot spots" to connect multiple mobile devices in the vehicle. Each mobile transaction triggers up to 100 back-end system transactions. Security is non-negotiable at every stage of these transactions. As an example, client testimonials from TravelPort and Priceline.com indicated that it takes over 90 billion back-end transactions to handle 120 million travel reservations.
Analytics converts raw data into actionable insights. Unfortunately, as much as 90 percent of data never gets analyzed. By combining Systems of Record, Systems of Engagement and Systems of Insight, your IT infrastructure empowers you to engage your customers in the manner they expect. A Hybrid Cloud can help bring these systems together.
Jen Crozier is IBM Vice President, Global Citizenship Initiatives. She asked mayors of cities across the world a simple question: if you had access to six IBM executives and technical experts, what problem would you want them to solve? In partnership with Twitter, IBM donated over $100 million in expertise as "Smarter Cities" grants to address the most challenging problems. The following 16 cities won the grant for 2015 (I have been to eight of them!):
Allahabad, India - Prime Minister Modi of India is interested in having over 100 "Smarter Cities" across the country. IBM will help Allahabad to improve waste management.
Amsterdam, Netherlands - to help the city support new business startups
Athens, Greece - to reduce traffic congestion and offer car-free transportation alternatives
Denver, United States - to coordinate services for the homeless
Detroit, United States - to help with urban recycling, debris and blight to rebuild the city infrastructure
Huizhou, China - to help with tourism management
Melbourne, Australia - to help with disaster preparedness
Memphis, United States - to help coordinate emergency calls across fire, police and medical departments
Rochester, New York, United States - to help with assistance to families with children living in poverty
San Isidro, Peru - to help with traffic congestion and related pollution
Santiago, Chile - to help with disaster preparedness, especially important given the recent earthquakes, landslides, floods and fires
Sekondi-Takoradi, Ghana - to help expand its tax base to reduce corruption
Surat, India - to help integrate urban planning across agencies
Taichung, Taiwan - to help with road safety and traffic congestion
Vizag, India - to help with disaster preparedness in flood and cyclone prone areas
Xuzhou, China - to help optimize transportation as a regional hub
The mayor of Memphis, Tennessee, A C Wharton, gave a quick acceptance speech, introducing his fire and police chiefs, and explaining his focus on better serving his citizens.
Stephen Leonard highlighted some of the key products announced this year, including the z13 System mainframe, the new 4-socket E840 POWER System, and the FlashSystem V9000 storage system. Nobody supports more open standards than IBM, including Linux, OpenStack, Apache, Eclipse, Cloud Foundry, Spark and Hadoop.
The kickoff sessions and keynote presentations are always a great way to set the context for the rest of the week.
The [IBM Edge2015 conference] is the premier conference covering infrastructure innovations for IBM System Storage, as well as sessions about z Systems and POWER Systems from our IBM Enterprise conference. Check out this short two-minute [YouTube video on IBM Edge2015].
Here is my quick recap of my sessions on the first day, Monday, May 11, 2015.
Solution Center Setup and Training
IBM hires [George P. Johnson Experience Marketing], or GPJ for short, to help with its events. I was asked to be on-hand for the Monday morning training in case I was needed to fill in for anyone else during the lunch hours and evening receptions.
There were quite a lot of demos. We had SAN Volume Controller, Spectrum Accelerate and Spectrum Scale at various booths, and plenty for POWER and z Systems as well. I cover a wide variety of these topics, so I am often used as the "universal substitute" in case someone needs to take a break, or gets caught up in a one-on-one discussion with an attendee.
It didn't help that while we were trying to listen to the GPJ ladies explain how to scan barcodes on attendee badges and use the interactive kiosks, large machinery was moving the demo hardware into place.
Here, a forklift operator is putting into place a VersaStack converged system, which combines Cisco UCS, Nexus and MDS hardware with IBM Storwize V7000 Unified storage.
I presented a session explaining why our clients are excited about Software Defined Environments, including an overview of IBM's Software Defined Storage offerings in the IBM Spectrum Storage™ family of products, and how to get started.
According to IDC, an independent IT analyst firm, IBM has over 40 percent marketshare in Software Defined Storage, ranking IBM #1 in this market.
Not surprisingly, this was by far my most attended session for the week, and I presented twice to fully packed rooms.
IBM is ranked #2 in Cloud Storage. This session covered the different types of Cloud storage, including persistent, ephemeral, hosted and reference storage categories. I covered the advantages of block, file and object level access.
Lastly, I covered the various IBM products for each type. For block-level transactional storage, I covered IBM XIV Storage, XIV Cloud Storage for Service Providers, Spectrum Accelerate software and the Storage Hypervisors built with Spectrum Virtualize such as SAN Volume Controller (SVC) and the rest of the Storwize family, and FlashSystem family products.
For file and object level storage, I covered Spectrum Scale software, including Elastic Storage Server and Storwize V7000 Unified pre-built systems. I also included how these fit into a file sync-and-share deployment using IBM partnership with OwnCloud or Funambol.
Finally, I mentioned the Cloud storage offerings from IBM SoftLayer and IBM Cloud Managed Services.
Data Footprint Reduction - Understanding IBM Storage Efficiency Options
I have presented this topic for several years now, but it never fails to draw an audience.
I started with the basics of Thin Provisioning, explaining the difference between coarse-grained and fine-grained designs, and how these are employed in the IBM DS8000 disk system, XIV Storage system, SVC and Storwize family products, and DCS3700 and DCS3860 disk systems.
I then covered Space-efficient Snapshots, including IBM FlashCopy. These can be used with either fully-allocated or thin-provisioned source volumes, and can substantially reduce the amount of storage needed to keep immediate copies.
Next I covered Data Deduplication, including IBM ProtecTIER family of products, Spectrum Protect software, and IBM's partnership with Atlantis ILIO for IBM FlashSystem for Virtual Desktop Infrastructure (VDI) deployments.
Lastly, I covered Compression, explaining the unique advantages of IBM's Real-time Compression compared to the performance-degrading methods used by our competitors. IBM Real-time Compression provides better capacity savings than Data Deduplication for 95 percent of your active data workloads, and is available on FlashSystem V9000, SAN Volume Controller and Storwize V7000, and this week IBM announced that it is available on the XIV Storage system as well!
As you can imagine, I get invited to a lot of client dinners during the week. For Monday evening, I managed to combine two clients into a single dinner! The two clients were from completely different industries, but from the same part of the country. Everyone got along, so it worked out very well.
The [IBM Edge2015 conference] is the premier conference covering infrastructure innovations for IBM System Storage, as well as sessions about z Systems and POWER Systems from our IBM Enterprise conference. Check out this short two-minute [YouTube video on IBM Edge2015].
Doug Brown, IBM Vice President of Marketing, kicked off the second general session. Here is my quick recap of the general session on the second day, Tuesday, May 12, 2015.
IBM Corporate Strategy
Ken Keverian is IBM Senior Vice President of Corporate Strategy. He feels that when the history of the IT industry's first 100 years is written, the key innovations will be the transistor, the Internet, and analytics. (IBM was involved in all three!)
IBM organizes all of its strategies in three segments. Ken presented the three pillars of IBM's corporate strategy. The first pillar is the set of IBM strategic imperatives: Data, Cloud and Engagement. Engagement includes Mobile, Social and Security concerns.
The second pillar is the effort and expertise needed to connect these new strategic imperatives together with existing traditional workloads. Hybrid Cloud is a good example of this, linking together traditional IT or on-premise private Clouds with off-premise offerings. IBM is committed to open standards to make this happen.
The third pillar is moving up the value chain. Some 30 years ago, IBM relied heavily on its hardware business that represented as much as 85 percent of its total revenues. Today, IBM continues with Software, Services and Systems as its core foundation. However, a new portion of IBM will focus on delivering deep industry offerings and expertise, automation of services, and insights-as-a-service.
Client Testimonial from Walmart
Rich Jackson, Senior Technical Expert at Walmart, presented his client testimonial. He started with the following quote:
"There is only one boss. The customer. And he can fire everyone from the chairman on down, simply by spending his money somewhere else."
-- Sam Walton
Walmart is one of the largest retailers in the world, with over 11,000 stores across 27 countries, generating over $480 billion in revenue last year. But Walmart did not become this large just by putting big stores in small towns; it grew by shifting from inventory to information.
As with many retailers, the last two months of the year, November 1 to December 31, represent a huge spike in holiday sales for Walmart. Cyber Monday in 2014 resulted in 1.5 billion page views online. About 70 percent of the online sales are from mobile devices. Walmart has trusted its business to the robust scalability and reliability of IBM z System mainframe servers and storage.
Analytics in Healthcare
Inhi Suh is IBM Vice President of Strategy for IBM Analytics. She presented the use of IBM Streams computing in hospitals to help deal with all the alerts and beeping sounds that nurses and technicians just can't act upon.
Thanks to IBM technology, the data of an incoming patient can arrive at the hospital before the patient does. Doctors can also access the data, letting nurses and technicians get started before the doctor arrives at the hospital.
Dr. Gustavo Stolovitzky is IBM Program Director for Translational Systems Biology and Nanobiotechnology. He explained the challenge of breaking down silos by using the "wisdom of crowds". IBM launched "Dream Challenges" to see if crowd-sourcing could help with medical challenges. The result: two very accurate algorithms to predict the progression of ALS. These two algorithms were more accurate than 12 ALS medical experts!
Scott McGill is President and CEO of Coriell Life Sciences. He explained that drug interactions now cause more deaths than automobile accidents. Their product is called GeneDose Live, which uses genomics and DNA science to help doctors determine if this pill is right for that patient, and whether a cocktail of medicines will work together, or against each other. This tool can help doctors swap out different medicines to reduce risk and increase effectiveness for individual patients.
IBM Research projects
Arvind Krishna is IBM Senior Vice President and Director of Research. When it comes to medical data about a patient, only 10 percent is in the medical records. Another 30 percent is your genetics and family history. The last 60 percent is your lifestyle, what countries you have visited, and what foods you have eaten.
Analytics can also be used in the food supply chain to increase food safety. This can help reduce foodborne illnesses, which affect 1 out of 6 people every year, resulting in over $80 billion in lost productivity. Analytics can also help food growers reduce water usage and increase crop yields.
By the end of this decade, IBM plans to have "Exascale" systems with an exaFLOP of compute capability connected to an exabyte of data. Your brain can do amazing things with just 50 watts of energy, but supercomputers consume 50 megawatts!
IBM has developed "Cognitive Computing" chips that emulate thousands of neurons and millions of synapses. A chip can be "trained" to perform certain functions with just 200 milliwatts of power. By combining these chips into boards and racks, IBM can amass a large cognitive computing environment to give Watson the ability to reason.
Lastly, Arvind covered IBM's advancements in Quantum Computing. IBM was able to successfully combine four quantum bit (qubit) circuits. IBM estimates that just 50 qubits would outperform any combination of supercomputers from the TOP500 list.
IBM's innovations can be applied not just to Retail and Healthcare, but a variety of other industries as well!
The [IBM Edge2015 conference] is the premier conference covering infrastructure innovations for IBM System Storage, as well as sessions about z Systems and POWER Systems from our IBM Enterprise conference. Check out this short two-minute [YouTube video on IBM Edge2015].
Here is my quick recap of the sessions that I either presented myself, or presented by others that I found interesting, on Tuesday, May 12, 2015.
What Is Big Data? Architectures and Practical Use Cases
Not everyone understands the storage implications of Big Data analytics. I started this session explaining the basics of Big Data, and how it changes the entire information pipeline, from storage administrators to data scientists to empowered employees making decisions and taking actions.
I then gave some real-life use cases, from Vestas using Big Data to shorten a 3-week decision process down to 15 minutes, to the University of Ontario using Big Data to save the lives of new-born babies.
I then provided a broad overview of IBM's Analytics platform, including IBM InfoSphere BigInsights, BigSQL and Platform Symphony. IBM is a major backer of the Open Data Platform to help provide standards-based choices in the analytics marketplace.
I wrapped up the session with IBM Spectrum Scale™ which has a Hadoop Connector which allows Map/Reduce programs to run unchanged against Spectrum Scale data. This eliminates the waste of ingesting data from other sources into an HDFS file system, then discarding the data after the analytics processing completes.
At past events, I normally present this on the first day, to provide context for all other presentations later in the week. However, this time, Ken Keverian presented IBM's corporate strategy in the Tuesday keynote general session, so the event coordinators scheduled my session afterward. I was able to explain how IBM's Smarter Storage strategy fits hand-in-glove with IBM's larger corporate strategy.
As with all IBM strategies, there were three parts. First, IBM is helping clients deal with data growth, resulting from everything from the Internet of Things to Big Data analytics. IBM offers the market leading Real-time Compression capability, for example, to help reduce the amount of capacity consumed.
Second, IBM cannot forget its support of traditional "Systems of Record" applications, like ERP, SCM and CRM transactional workloads. IBM is helping clients deal with business pressures to balance performance versus cost across a variety of storage media, from the world's fastest non-volatile flash storage, IBM FlashSystem, to the least expensive options with tape.
Third, IBM strongly feels the IT industry is shifting to Cloud deployments, including private, public and hybrid clouds. IBM is helping clients with this transition, with support for Software Defined Environments from OpenStack, VMware and Microsoft. IBM ranks #1 in Software Defined Storage with over 40 percent marketshare.
Royal Caribbean Cruise Line's Success with IBM FlashSystem
This was a part-IBM, part-client testimonial session. Joe Rendace, IBM Technical Flash Channel Manager, and Barry Whyte, fellow IBM Master Inventor and IBM ATS for Storage Virtualization, provided IBM's point of view on Flash technology. Last year, IBM shipped more Flash capacity than the next two closest competitors combined!
Jorge L. González, Enterprise SAN & Storage Architect for Royal Caribbean Cruise Line (RCCL) presented his company's success using IBM's FlashSystem products.
Archive Strategies in the Software-Defined Data Center
Jon Toigo, fellow author and blogger, Managing Principal of Toigo Partners International, and long-time friend, presented this lively topic. Here is a great quote from his presentation:
"Moving data intelligently across different storage tiers (and into archives) is a lot like using a claw machine to get your crying kid a toy at the Chuck E. Cheese!"
I can always rely on Jon to provide a unique viewpoint on the latest strategies and technologies. He never disappoints.
IBM Edge Special Events
For Tuesday evening, I went to see the world-famous [Penn & Teller] perform their unique form of magic and comedy show.
For the past 40 years, Penn & Teller have performed magic together, and watching them, up close and personal from just a few dozen feet away on stage, was truly amazing!
The [IBM Edge2015 conference] is the premier conference covering infrastructure innovations for IBM System Storage, as well as sessions about z Systems and POWER Systems from our IBM Enterprise conference.
Here is a quick recap of the sessions I presented on the third day, Wednesday, May 13, 2015.
New Generation of Storage Tiering: Less Management, Lower Costs and Increased Performance
I organized this into three sections. In the first section, I talked about single-system optimization by moving extents within single volumes on a single system. For IBM, I focused on Easy Tier on DS8000 as an example of this methodology, and all the enhancements IBM has introduced since its introduction, including Easy Tier Server, the Easy Tier Application API, and the Easy Tier Heat Map Transfer utility.
In the second section, I covered data center optimization using the Spectrum Control Storage Analytics Engine. This involves moving entire volumes/LUNs from one storage system to another. At IBM's Boulder facility, this methodology saved $17 million per year, roughly a 50 percent reduction of its storage budget.
The third section covered global optimization with the Information Lifecycle Management (ILM), Hierarchical Storage Management (HSM) and Active File Management (AFM) features in Spectrum Scale. This provides seamless movement of data from flash to disk to tape media. Spectrum Scale has been around in one form or another since 1998, and over 200 of the TOP500 supercomputers use it today.
At the IBM Edge conference last year, IBM announced "Codename: Elastic Storage." IBM had to rename its General Parallel File System (GPFS) because it is not just a file system, for two very good reasons:
Spectrum Scale can support volumes, files and objects.
Spectrum Scale provides active data management, including Information Lifecycle Management (ILM), Hierarchical Storage Management (HSM) and Active File Management (AFM) features.
This year, IBM has several offerings: Spectrum Scale software, the Elastic Storage Server pre-built system, the Storwize V7000 Unified pre-built system, and Elastic Storage on IBM Managed Cloud services.
IBM Winning Edge - CASE Training
IBM Winning Edge is focused on Business Partner and IBM seller training. Part of this is CASE (Cloud and Analytics Sales Enablement). I co-presented "Systems Infrastructure Offerings for Cloud" with Elan Freedberg.
For Wednesday evening, I had dinner with Dr. Steve Hetzler, IBM Almaden Research, to discuss his paper on "touch rate" that Clod Barrera mentioned at the Monday kickoff.
Later that evening, I was invited to enjoy some champagne and cigars with Eric Herzog, Jamie Thomas and other IBM Executives. I brought along fellow blogger Elisabeth Stahl, who recently was promoted to Distinguished Engineer!
The [IBM Edge2015 conference] is the premier conference covering infrastructure innovations for IBM System Storage, as well as sessions about z Systems and POWER Systems from our IBM Enterprise conference.
Here is my quick recap of various sessions on the fourth day, Thursday, May 14, 2015.
Object Storage and Its Use with OpenStack
Rich Swain, IBM Storage Team Lead Engineer, presented this in three sections. First, he covered the advantages of object storage versus block or file-based storage. This includes the idea that each object comes with rich metadata that can be searched.
Rich then went on to explain the specifics of OpenStack Swift, an open standard for object storage, which uses a simple three-tier hierarchy of account, container, and object.
Rich wrapped up his talk with an overview of Spectrum Scale and Elastic Storage Server offerings from IBM.
Driving Timely Business Insights on the IBM Data Engine for Analytics
Linton Ward, IBM Distinguished Engineer, POWER Systems Big Data and Analytics, presented Big Data analytics from a POWER Systems perspective. He gave an overview of Big Data analytics, and how IBM POWER Systems provide advantages to analyze data quickly.
Linton had helped me with my Big Data presentation, so I decided to participate in his presentation during his section on Spectrum Scale, but no questions came up that he couldn't handle on his own.
The Pendulum Swings Back -- Understanding Converged and Hyper-converged Environments
This presentation has an interesting back-story. At a client briefing, I was asked to explain the difference between "Converged" and "Hyper-converged" systems, which I did with the analogy of a pendulum. I used the whiteboard, and then later made it into a single chart.
At the far left, I start with mainframe systems of the early 1950s that had internal storage. As the pendulum swings to the middle, I discuss the added benefits of external storage, from RAID protection to centralized management.
To the far right, the pendulum swings over to networked storage, from NAS to SAN attached devices for flash, disk and tape. This offers excellent advantages, including greater host connectivity, and greater distances supported to help with things like disaster recovery.
Here is where the pendulum swings back. IBM introduced PureSystems, which combined servers, storage and switches into a single rack configuration. Other vendors had similar offerings, such as VCE vBlock, FlexPod from NetApp and Cisco, and Oracle Exadata.
Lately, the pendulum has swung fully back to internal storage, with storage-rich servers running specialized software. There are two kinds. First there are pre-built systems like Nutanix, Simplivity or EVO:Rail which are x86 based server systems with built-in flash and disk. Second, there is software that can be deployed on your own choice of hardware, such as IBM Spectrum Scale FPO or VMware VSAN.
So, what I presented on a single slide before has been fleshed out into a full-blown, hour-long presentation!
Common Performance Pitfalls and the Value of Latency
Erik Eyberg and Woody Hutsell from IBM FlashSystem team presented the differences between MB/sec, IOPS and latency. IBM FlashSystem is the world's fastest storage with incredibly low latency that is two to five times faster than most major competitors in the all-flash arrays category.
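As a back-of-the-envelope sketch (my own illustration, not from the session), the three metrics are tied together: throughput is IOPS times block size, and Little's Law says the IOPS a workload can sustain at a fixed queue depth is capped by latency, which is why shaving off microseconds matters.

```python
def throughput_mb_per_s(iops: float, block_size_kb: float) -> float:
    """MB/s = IOPS x block size; large blocks favor MB/s, small blocks favor IOPS."""
    return iops * block_size_kb / 1024

def max_iops(latency_s: float, queue_depth: int) -> float:
    """Little's Law: concurrency = arrival rate x latency,
    so sustainable IOPS <= queue_depth / latency."""
    return queue_depth / latency_s

# 100 microseconds of latency at queue depth 1 caps a single
# synchronous stream at about 10,000 IOPS; halving latency doubles it.
assert round(max_iops(100e-6, 1)) == 10_000
assert throughput_mb_per_s(10_000, 4) == 39.0625  # 10,000 IOPS of 4 KB blocks
```

This is why an all-flash array can post huge IOPS numbers yet still disappoint a single-threaded application: at queue depth 1, only latency sets the ceiling.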
Accelerate with OpenStack: Flexible and Rapidly Deployed Orchestration for Your Cloud
Ohad Atia, IBM Systems Development Manager for XIV Cloud Storage Solutions, presented how the OpenStack Cinder interface works, then explained the new IBM Spectrum Accelerate, based on the software from XIV.
Ronen Kat, IBM Research Manager and Cloud Storage Research Scientist, gave a live demo of how easy it is to deploy Spectrum Accelerate in a public cloud. First, you request three or more bare metal servers with specific amounts of RAM and spinning disk. Optionally, Spectrum Accelerate can use a single Solid State Drive (SSD) as read cache in each server. You can either install VMware ESXi 5.5 yourself, or have the Cloud provider do it for you. This step is quick to initiate, but the Cloud provider might take 24 to 72 hours to complete it, depending on how busy they are.
Like cooking shows, where an already-prepared item is taken out of the oven or refrigerator to save time, Ronen started with four IBM SoftLayer servers that had already been configured as described above. The second step was to deploy the Spectrum Accelerate code, supplied as an OVF file that VMware can use to start up each virtual machine.
The third step is to connect all of the IP addresses, since Spectrum Accelerate uses TCP/IP for everything from iSCSI host attachment to inter-node communication.
Steps 2 and 3 took less than 5 minutes! I was impressed how simple and easy it was. Even when you factor in the few days it might take IBM SoftLayer to provide you access to the servers, it is still way faster than ordering your own on-premise storage.
Powerful Virtualization Solutions Delivered by the VMware and IBM Storage Partnership
Impressed with their last presentation, I stayed in the room for this one. Ohad Atia presented IBM Spectrum Control Base edition, and how this provides VMware VVol support through its VASA 2.0 provider code.
All IBM owners of XIV, Spectrum Accelerate, DS8000 and Storwize family products are entitled to IBM Spectrum Control Base edition, which provides a consistent VMware interface experience.
Storage Meet the Experts, hosted by Maurice McCullough
For those not familiar with Edge, Maurice "Mo" McCullough is the lead organizer of the storage portion of Technical Edge, as well as various Systems Technical University events held throughout the year.
On the Thursday evening of Edge every year, Mo hosts this popular session for everyone to ask their questions to the experts at the front of the room. There were similar sessions for z Systems and POWER Systems experts in the adjoining rooms.
Joining me on the storage expert panel were Clod Barrera, Shelly Howrigon, Mike Griese, Barry Whyte, Jim Blue, Sven Oehme, and several others. Generally, there isn't a question we don't have an answer to, but if you stump the panel, we will take you out to dinner. The audience was ready to take that challenge!
After Storage Meet the Experts, I had dinner and went to see [Frank: The Man, The Music] musical show at the Venetian hotel. Bob Anderson impersonates Frank Sinatra, singing a popular selection of Frank's many recordings, intermixed with highlights from his television and film career. Bob was accompanied by a 32-piece orchestra that brought the music of the era back to life.
The [IBM Edge2015 conference] is the premier conference covering Infrastructure Innovations for IBM System Storage, as well as sessions about z Systems and POWER Systems from our IBM Enterprise conference.
Here is my quick recap of my fifth and final day, Friday, May 15, 2015.
At the Systems Technical University in Prague last month, I had submitted "IBM Spectrum Storage overview" while another speaker submitted "Storage Integration with OpenStack". Somehow, perhaps through a cut-and-paste error, the two topics got merged into the single title "IBM Spectrum Storage Integration with OpenStack".
I first had to explain the basics of OpenStack, how OpenStack manages pools of compute, storage and network resources. Then I explained specific details on Cinder, Swift and Manila interfaces. Finally, having laid the groundwork and reviewed the basics, I was able to explain how IBM's various storage offerings support these OpenStack interfaces.
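To make the Cinder piece of that groundwork concrete: enabling an IBM backend comes down to a stanza in `/etc/cinder/cinder.conf`. Below is a minimal sketch; the driver class path, IP address, and credentials are placeholders of my own, and the exact values should be taken from your IBM driver's documentation:

```ini
[DEFAULT]
# Tell Cinder which backend stanzas to load
enabled_backends = ibm_backend

[ibm_backend]
volume_backend_name = ibm_backend
# Placeholder: use the class path documented for your IBM storage driver
volume_driver = cinder.volume.drivers.<ibm_driver_module>.<DriverClass>
# Management address and credentials of the storage system (illustrative)
san_ip = 192.0.2.10
san_login = admin
san_password = secret
```

Once the backend is defined and the `cinder-volume` service restarted, hosts provision volumes through the normal Cinder API without needing to know which array sits behind it.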
The feedback from the audience was that this should have been presented earlier in the week! Attendees mentioned that other presentations earlier in the week merely assumed the audience was already familiar with OpenStack concepts and terminology, which obviously was not the case.
Storwize V7000 Unified with Spectrum Scale (formerly Elastic Storage)
Cameron McAllister, IBM Systems Architect for Spectrum Scale, presented an overview of how Storwize V7000 Unified can interconnect with IBM Spectrum Scale deployments. The secret is a feature in both called Active File Management (AFM).
Shankar Balasubramanian, IBM Senior Technical Staff Member for Active File Management, went into details on how to set up Active File Management for a variety of use cases. For example, you could have Storwize V7000 Unified boxes in Remote Office/Branch Office (ROBO) locations replicating data to a centralized Spectrum Scale datacenter.
This was a great conference! I received great feedback from many attendees about the quality presentations they enjoyed this week.
Next year, Edge will be held October 10-14, 2016. Save the date! Mark your calendars now!
It's that time again. Every year, IBM hosts the "System Storage Technical University". I have been going to these since they first started in the 1990s. This time we are at the lovely [Hilton Orlando] in Orlando, Florida.
For those who want to relive past events, here are my blog posts from this event in 2010:
As was the case last year, IBM once again will run this conference alongside the [IBM System x Technical University] the same week, in the same hotel. This allows attendees to cross over to the other side to see a few sessions of the other conference. I took advantage of this last year, and plan to do so again this year as well!
For those on Twitter, you can follow my tweets at [@az990tony] or search on the hash tag #ibmtechu.
Maria Boonie is the IBM Director for IBM Worldwide Training and Technical Conferences. She indicated that there were 1500 attendees this week crossing both the System Storage and System x conferences at this hotel. There are 35 vendors that have sponsored this event, and they will be at the "Solutions Center" being held Monday through Wednesday this week.
She took this opportunity to plug IBM's latest education offerings, including Guaranteed-to-Run implementation classes, and Instructor-Led Online (ILO) technical classes.
Brian Truskowski is IBM General Manager for System Storage and Networking. I used to report directly to him in a previous role, and a few years ago he was the IBM CIO who helped with IBM's internal IT transformation.
Brian indicated that the previous approach to growth was to "Just Buy More", but this has some unintended consequences. He argued that companies need to adopt one or more of the following approaches to growth:
Stop storing so much - reduce data footprint using storage efficiency capabilities like data deduplication and compression
Store more with what is already on the floor - improve storage utilization with technologies like storage virtualization and thin provisioning
Move data to the right place - implement automated tiering, such as "Flash & Stash" between Solid-state drives and spinning disk, and/or Information Lifecycle Management (ILM) between disk and tape. Studies at some clients have found over 70 percent of data has not been touched in the last 90 days
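That 90-day observation is easy to check in your own environment. Here is a minimal Python sketch; the 90-day threshold and the use of file modification time as a proxy for "touched" are my own assumptions for illustration, not taken from any IBM tool:

```python
import os
import time

def find_cold_files(root, days=90):
    """Walk a directory tree and return files not modified in `days` days."""
    cutoff = time.time() - days * 86400
    cold = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                cold.append(path)
    return cold
```

Running this against a file share and comparing `len(cold)` to the total file count gives a quick sense of how much data is a candidate for a cheaper tier.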
This time of dramatic change is the result of a "perfect storm" of influences, including the rising costs and risks associated with losing data, the increased need to index and search data, the desire for "Business Analytics", and the expectation for 100 percent up-time. This is driving IBM to offer hyper-efficient backup, Continuous Data Availability, and Smart Archive solutions.
The case study of SPRINT is a good example. SPRINT is a Telecommunications provider for cell phone users. They were challenged with 35 percent utilization, 165 storage arrays from six different vendors, and an expected 100 percent increase in their IT maintenance costs. After implementing IBM SAN Volume Controller (SVC) and Tivoli Storage Productivity Center (TPC) to manage 2.9 PB of data, SPRINT increased their utilization to 82 percent, consolidated down to 70 storage arrays from only three vendors, and reduced their maintenance costs by 57 percent. Today, SPRINT manages over 5 PB of data with SVC and TPC, and has reduced power and cooling by 3.5 million kWh, representing $320,000 USD in savings.
Roland Hagan is the IBM Vice President for the System x server platform. He talked about the "IT Conundrum" that represents a vicious cycle of "IT Sprawl", "Untrusted Data" and "Inflexible IT" that seem to feed each other. IBM is trying to change behavior, from thinking and dealing with physical boxes representing servers, storage and network gear, to a more holistic view focused on workloads, shared resource pools, independent scaling, and automated management.
IBM is leading the server marketplace, in part because of clever things IBM is doing, especially in developing the eX5 chipset that surrounds x86 commodity processors, and in part because of actions or decisions the competition has taken:
It doesn't break IBM's heart that Oracle decided to drop software support of its database on Itanium, a move aimed squarely at HP. Oracle runs better on IBM servers than on Oracle/Sun or HP servers today, so the decision does not hurt IBM; if anything, it has prompted a lot of people to leave HP and switch over to IBM.
HP has taken on a new CEO and reduced their R&D budget, causing them to be late-to-market on some of their offerings.
Dell continues to focus on small and medium-sized customers, and has not really broken into the "Enterprise".
Newcomer Cisco has some great technology that only seems to be adoptable in "Green Field" situations, as it does not integrate well with existing data center infrastructures.
The combination of the eX5 chipset architecture, MAX5 memory expansion capabilities and virtual Network Interface Cards (NICs) provides a very VM-aware platform. For those who are not ready to fully adopt an integrated stack like IBM CloudBurst, IBM offers the Tivoli Service Automation software on its own, and a new [IBM BladeCenter Foundation for Cloud] as stepping stones to get there.
There are certainly more attendees here than last year, which reflects both the change in location (Orlando, Florida, rather than Washington, DC) and the economic recovery. I'm looking forward to an excellent week!
Jim is an IBM Fellow for IBM Systems and Technology Group. There are only 73 IBM Fellows currently working for IBM, and this is the highest honor IBM can bestow on an employee. He has been working with IBM since 1968.
He is tasked with predicting the future of IT and helping drive strategic direction for IBM. Cost pressures, requirements for growth, accelerating innovation and changing business needs help influence this direction.
IBM's approach is to integrate four different "IT building blocks":
Scale-up Systems, like the IBM System Storage DS8000 and TS3500 Tape Library
Resource Pools, such as IBM Storage Pools formed from managed disks by IBM SAN Volume Controller (SVC)
Integrated stacks and appliances, integrated software and hardware stacks, from Storwize V7000 to full rack systems like IBM Smart Analytics Server or CloudBurst.
Mobility of workloads and resources requires unified end-to-end service management. Fortunately, IBM is the #1 leader in IT Service Management solutions.
Jim addressed three myths:
Myth 1: IT Infrastructures will be homogenous.
Jim feels that innovations are happening too rapidly for this to ever happen, nor is it a desirable end goal. Instead, focusing on finding the right balance of the IT building blocks might be a better approach.
Myth 2: All of your problems can be solved by replacing everything with product X.
Jim feels that the days of "rip-and-replace" are fading away. As IBM Executive Steve Mills said, "It isn't about the next new thing, but how well new things integrate with established applications and processes."
Myth 3: All IT will move to the Cloud model.
Jim feels a substantial portion of IT will move to the Cloud, but not all of it. There will always be exceptions where the old traditional ways of doing things might be appropriate. Clouds are just one of the many building blocks to choose from.
Jim's focus lately has been finding new ways to take advantage of virtualization concepts. Server, storage and network virtualization are helping address these challenges through four key methods:
Sharing - virtualization that allows a single resource to be used by multiple users. For example, hypervisors allow several guest VM operating systems to share common hardware on a single physical server.
Aggregation - virtualization that allows multiple resources to be managed as a single pool. For example, SAN Volume Controller can virtualize the storage of multiple disk arrays and create a single storage pool.
Emulation - virtualization that allows one set of resources to look and feel like a different set of resources. Some hypervisors can emulate different kinds of CPU processors, for example.
Insulation - virtualization that hides the complexity from the end-user application or other higher levels of infrastructure, making it easier to make changes of the underlying managed resources. For example, both SONAS and SAN Volume Controller allow disk capacity to be removed and replaced without disruption to the application.
In today's economy, IT transformation costs must be low enough to yield near-term benefits. The long-term benefits are real, but near-term benefits are needed for projects to get started.
What sets IBM ahead of the pack? Here was Jim's list:
100 Years of Innovation, including being the U.S. Patent leader for the last 18 years in a row
IBM's huge investment in IBM Research, with labs all over the globe
Leadership products in a broad portfolio
Workload-optimized designs with integration from middleware all the way down to underlying hardware
Comprehensive management software for IBM and non-IBM equipment
Clod is an IBM Distinguished Engineer and Chief Technical Strategist for IBM System Storage. His presentation focused on trends and directions in the IT storage industry. Clod started with five workload categories:
To address these unique workload categories, IBM will offer workload-optimized systems. The four drivers on the design for these are performance, efficiency, scalability, and integration. For example, to address performance, companies can adopt Solid-State Drives (SSD). Unfortunately, these are 20 times more expensive dollar-per-GB than spinning disk, and the complexity involved in deciding what data to place on SSD was daunting. IBM solved this with an elegant solution called IBM System Storage Easy Tier, which provides automated data tiering for IBM DS8000, SAN Volume Controller (SVC) and Storwize V7000.
For scalability, IBM has adopted Scale-Out architectures, as seen in the XIV, SVC, and SONAS. SONAS is based on the highly scalable IBM General Parallel File System (GPFS). File systems are like wine: they get better with age. GPFS was introduced 15 years ago, and is more mature than many of the other "scalable file systems" from our competition.
Areal Density advancements on Hard Disk Drives (HDD) are slowing down. During the 1990s, the IT industry enjoyed 60 to 100 percent annual improvement in areal density (bits per square inch). In the 2000s, this dropped to 25 to 40 percent, as engineers are starting to hit various physical limitations.
Storage Efficiency features like compression have been around for a while, but are being deployed in new ways. For example, IBM invented the WAN compression needed for mainframe HASP, which then became an industry standard. Then IBM introduced compression on tape, and now compression on tape is an industry standard as well. ProtecTIER and Information Archive are able to combine compression with data deduplication to store backups and archive copies. Lastly, IBM now offers compression on primary data, through the IBM Real-Time Compression appliance.
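To illustrate how deduplication differs from compression, here is a toy Python sketch of fixed-size chunk deduplication: identical chunks are stored once, and a "recipe" of hashes lets you rebuild the original stream. Real products like ProtecTIER use far more sophisticated variable-length chunking, so treat this purely as an illustration of the concept:

```python
import hashlib

def dedup_chunks(data, chunk_size=4096):
    """Split data into fixed-size chunks; keep one stored copy per unique
    chunk. Returns (store, recipe): store maps hash -> chunk bytes, and
    recipe lists the chunk hashes in their original order."""
    store = {}
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # store each unique chunk once
        recipe.append(digest)
    return store, recipe

def rebuild(store, recipe):
    """Reassemble the original data from the store and the recipe."""
    return b"".join(store[digest] for digest in recipe)
```

If the same 4 KB chunk appears three times in a backup stream, only one copy lands in the store; the other two occurrences cost just a hash entry in the recipe.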
For the rest of this decade, IBM predicts that tape will continue to enjoy (at least) 10 times lower dollar-per-GB than the least expensive spinning disk. Disk and Tape share common technologies, so all of the R&D investment for these products applies to both types of storage media.
For integration, IBM is leading the effort to help companies converge their SAN and LAN networks. By 2015, Clod predicts that there will be more FCoE purchased than FCP. IBM is also driving integration between hypervisors and storage virtualization. For example, IBM already supports VMware API for Array Integration (VAAI) in various storage products, including XIV, SVC and Storwize V7000.
Lastly, Clod could not finish a presentation without mentioning Cloud Computing. Cloud storage is expected to grow 32 percent CAGR from year 2010 to 2015. Roughly 10 percent of all servers and storage will be in some type of cloud by 2015.
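For those curious what a 32 percent CAGR means in practice, a quick compounding calculation shows it roughly quadruples capacity over those five years:

```python
def project(value, cagr, years):
    """Compound a starting value at a constant annual growth rate."""
    return value * (1 + cagr) ** years

# Cloud storage at 32 percent CAGR from 2010 to 2015
growth = project(1.0, 0.32, 5)  # roughly 4x over five years
```

By comparison, the 3.8 percent growth rate cited elsewhere for non-Cloud storage compounds to only about a 1.2x increase over the same period.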
As is often the case, I am torn between getting short posts out in a timely manner versus spending more time to improve their length and quality but posting much later. To achieve this balance, I will spread the blog posts out in consumable amounts over the next week or two.
IBM Information Archive for email, files and eDiscovery
Not too many people have heard of IBM's Smart Archive strategy and the storage products IBM offers to meet compliance regulations. This session covered the following:
The differences between backup and archive, including a few of my own personal horror stories helping companies who had foolishly thought that keeping backup copies for years would adequately serve as their archive strategy
The differences between optical media, Write-Once Read-Many (WORM) media, and Non-Erasable, Non-Rewriteable (NENR) storage options.
Why putting a [space heater] on your data center floor is a bad idea, driving up power and cooling costs for little business value to the enterprise once the unit is full of rarely accessed read-only data.
An overview of the [IBM Information Archive], an integrated stack of servers, storage and software that replaces previous offerings such as the IBM System Storage DR550 and the IBM Grid Medical Archive Solution (GMAS).
The marketing bundle known as the [Information Archive for Email, Files and eDiscovery] that combines the Information Archive storage appliance with Content Collectors for email and file systems, as well as eDiscovery tools, and implementation services for a solution that can support a small or medium size business, up to 1400 employees.
IBM Tivoli Storage Productivity Center v4.2 Overview and Update
Many of the concerns raised when I [presented v4.1 at this conference last year] were addressed this year in v4.2, including full performance statistics for IBM XIV storage system, storage resource agent support for HP-UX and Solaris, and a variety of other issues.
I presented this overview in stages:
"Productivity Center Basic Edition" that comes pre-installed on the IBM System Storage Productivity Center hardware console, providing discovery of devices, basic configuration, and a clever topology viewer of what is connected to what.
"Productivity Center for Disk" and "Productivity Center for Disk Midrange Edition (MRE)" that provides real-time and historical performance monitoring, asset and capacity reporting.
"Productivity Center for Replication" which supports monitoring, failover and failback for FlashCopy, Metro Mirror and Global Mirror on the SVC, Storwize V7000, DS8000, DS6000 and ESS 800.
"Productivity Center for Data" which supports reporting on files, file systems and databases on DAS, SAN and NAS attached storage from an Operating System viewpoint.
"Productivity Center Standard Edition" which includes all of the above except "Replication", and adds performance monitoring of SAN Fabric gear, and some very clever analytics to improve performance and problem determination.
One of the questions that came up was "How big does my company have to be to consider using Productivity Center?" which I answered as follows:
"If you are a small company, where the "IT Person" has responsibilities outside of IT and managing the few pieces of kit is just part of his job, then consider just using the web-based GUI through a Firefox or similar browser. If you are a medium-sized company with dedicated IT personnel, but storage and networks are managed on the side by system admins or database admins, you might want to consider the "Storage Control" plug-in for IBM Systems Director. But if you are a larger shop, with employees titled "Storage Administrator" and/or "SAN Administrator", then Tivoli Storage Productivity Center is for you."
Tivoli Storage Productivity Center has matured into a fine piece of software that truly can help medium and large sized data centers manage their storage and storage networking infrastructure.
I like speaking the first day of these events. Often people come in just to hear the keynote speakers, and stay the afternoon to hear a few break-out sessions before they leave Tuesday or Wednesday for other meetings.
Clod Barrera is an IBM Distinguished Engineer and Chief Technical Strategist for IBM System Storage. He predicts that by 2015, 10 percent of the servers and storage purchases, as well as 25 percent of the network gear purchases, will be related to Cloud deployments. Cloud Storage is expected to grow at a compound annual growth rate (CAGR) of 32 percent through 2015, compared to only 3.8 percent growth for non-Cloud storage.
Cloud Computing is allowing companies to rethink their IT infrastructure, and reinvent their business. Clod presented an interesting chart on the "Taxonomy" of storage in Cloud environments. On the left he had examples of Storage that was part of a Cloud Compute application. On the right he had storage that was accessed directly through protocols or APIs. Under each he had several examples for transactional data, stream data, backups and archives.
Clod feels the only difference between Private and Public clouds is a matter of ownership. In private clouds, these are owned by the company that uses them via their private Intranet network. Public clouds are owned by Cloud Service providers and are accessed over the public Internet. Clod presented IBM's strategy to deliver Cloud at five levels:
Private Cloud: on-site equipment, behind company firewall, managed by IT staff
Managed Private Cloud: on-site equipment, behind company firewall, managed by IBM or other Cloud Service provider
Hosted Private Cloud: dedicated, off-premises equipment, located and managed by IBM or other Cloud Service Provider, and access through VPN
Shared Cloud Services: shared, off-premises equipment, located at IBM or other Cloud Service Provider, managed by IBM or Cloud Service provider, and access through VPN. The facility is intended for enterprises only, on a contractual basis, and will be auditable for compliance to government regulations, etc.
Public Cloud: shared, off-premises equipment, located and managed by IBM or other Cloud Service provider, targeted to offer cloud compute and storage resources, with standardized platforms of operating systems and middleware, for individuals, small and medium sized businesses.
As with storage in traditional data center deployments, storage in clouds will be tiered, with Tier 0 being the fastest tier, to Tier 4 for "deep and cheap" archive storage. IBM SONAS is an example of Cloud-ready storage that can help make these tiers accessible through standard Ethernet protocols. Cloud Service providers will use metering and Service Level Agreements (SLAs) to offer different rates for different tiers of storage in the cloud.
Clod wrapped up his session explaining IBM's Cloud Computing Reference Architecture (CCRA). This is an all-encompassing diagram that shows how all of IBM's hardware, software and services fit into Cloud deployments.
Since the [IBM System Storage Technical University 2011] runs concurrently with the System x Technical University, attendees are allowed to mix-and-match. I attended several presentations regarding server virtualization and hypervisors.
Matt Archibald is an IT Management Consultant in IBM's Systems Agenda Delivery team. He started with a history of hypervisors, from IBM's early CP/CMS in 1967, through the latest VMware vSphere 5 just announced.
He explained that there are three types of Hypervisor architectures today:
Type 1 - often referred to as "Bare Metal" runs directly on the server host hardware, and allows different operating system virtual machines to run as guests. IBM's System z [PR/SM] and [PowerVM] as well as the popular VMware ESXi are examples of this type.
Type 2 - often referred to as "Hosted" runs above an existing operating system, and allows different operating system virtual machines to run as guests. The popular [Oracle/Sun VirtualBox] is an example of this type.
OS Containers - runs above an existing operating system base, and allows multiple "guests" that all run the same operating system as the base. This affords some isolation between applications. [Parallels Virtuozzo Containers] is an example of this type.
The dominant architecture is Type 1. For x86, IBM is the number one reseller of VMware. VMware recently announced [vSphere 5], which changes its licensing model from CPU-based to memory-based. For example, a virtual machine with 32 virtual CPUs and 1TB of virtual RAM (vRAM) would cost over $73,000 per year to license the VMware "Enterprise Plus" software. The only plus-side to this new licensing is that the "memory" entitlement transfers during Disaster Recovery to the remote location.
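The licensing math behind numbers like that is straightforward to sketch. Under the vSphere 5 model, each per-CPU license carries a vRAM entitlement, and the entitlements pool across hosts; you need enough licenses to cover the configured vRAM of powered-on VMs. The 48 GB entitlement and $3,495 list price below are my own illustrative assumptions, not quoted figures:

```python
import math

def vsphere5_license_cost(vram_gb, entitlement_gb, price_per_license):
    """Licenses needed to cover a pooled vRAM footprint, and their cost.
    Entitlement size and price are caller-supplied assumptions."""
    licenses = math.ceil(vram_gb / entitlement_gb)
    return licenses, licenses * price_per_license

# Illustrative: 1 TB (1024 GB) of vRAM against an assumed 48 GB
# entitlement at an assumed $3,495 list price per license
licenses, cost = vsphere5_license_cost(1024, 48, 3495)
```

Whatever the exact list prices, the shape of the math explains the sticker shock: a single memory-rich VM can consume the vRAM entitlement of a whole stack of licenses.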
"Xen is dead." was the way Matt introduced the section discussing Hybrid Type-1 hypervisors like Xen and Hyper-V. These run bare-metal, but require networking and storage I/O to be processed through a single bottleneck partition referred to as "Dom 0". As such, this hybrid approach does not scale well on larger multi-socket host servers. So, his Xen-is-dead message referred to all Hybrid-based hypervisors including Hyper-V, not just those based on Xen itself.
The new up-and-comer is "Linux KVM". Last year, in my blog post about [System x KVM solutions], I mentioned the confusion over the KVM acronym, which is used with two different meanings. Many people use KVM to refer to Keyboard-Video-Mouse switches that allow access to multiple machines. IBM has renamed these switches to Local Console Managers (LCM) and Global Console Managers (GCM). This year, the System x team has adopted "Linux KVM" to refer to the second meaning, the [Kernel-based Virtual Machine] hypervisor.
Linux KVM is not a product, but an open-source project. As such, it is built into every Linux kernel. Red Hat has created two specific deliverables under the name Red Hat Enterprise Virtualization (RHEV):
RHEV-H, a tiny ESXi-like bare-metal hypervisor that fits in 78MB, making it small enough to run from a USB stick, CD-ROM or memory chip.
RHEV-M, a vCenter-like management software to manage multiple virtual machines across multiple hosts.
Personally, I run RHEL 6.1 with KVM on my IBM laptop as my primary operating system, with a Windows XP guest image to run a few Windows-specific applications.
A complaint about the current RHEV 2.2 release from Linux fanboys is that RHEV-M requires a Windows server, and uses Windows PowerShell for scripting. The next release of RHEV is likely to provide a Linux-based option for the management server.
Of the various hypervisors evaluated, KVM appears poised to offer the best scalability for multi-socket host machines. The next release is expected to support up to 4096 threads, 64TB of RAM, and over 2000 virtual machines. Compare that to VMware vSphere 5, which supports only 160 threads, 2TB of RAM and up to 512 virtual machines.
Linux KVM Overview
Matt also presented a session focused on Linux KVM. While IBM is the leading reseller of VMware for the x86 server platform, it has chosen Linux KVM to run all of its internal x86 Cloud Computing facilities, as it can offer 40 to 80 percent savings, based on Total Cost of Ownership (TCO).
Linux KVM can run unmodified Windows and Linux guest operating systems as guest images with less than 5 percent overhead. Since KVM is built into the Linux kernel, any certification testing automatically benefits KVM as well. KVM takes advantage of modern CPU extensions like Intel's VT and AMD's AMD-V.
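KVM's reliance on those CPU extensions is easy to verify on a given box: Linux exposes them as the vmx (Intel VT) or svm (AMD-V) flag in /proc/cpuinfo. A small Python sketch to parse that file's format:

```python
def has_hw_virt(cpuinfo_text):
    """Return which hardware virtualization extension the CPU flags
    advertise: 'vmx' (Intel VT), 'svm' (AMD-V), or None if neither."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
    return None

# On a live Linux host:
#   has_hw_virt(open("/proc/cpuinfo").read())
```

If this returns None, KVM cannot use hardware-assisted virtualization on that host (or the extension is disabled in the BIOS).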
For high availability, in the event that a host fails, KVM can restart the guest images on other KVM hosts. RHEV offers "prioritized restart order", which allows mission-critical images to be started before less important ones.
RHEV also provides "Virtual Desktop Infrastructure", known as VDI. This allows a lightweight client with a browser to access an OS image running on a KVM host. Matt was able to demonstrate this with Firefox browser running on his Android-based Nexus One smartphone.
RHEV also adds features that make it ideal for cloud deployments, including hot-pluggable CPU, network and storage; service Level Agreement monitoring for CPU, memory and I/O resources; storage live migrations to move the raw image files while guests are running; and a self-service user portal.
IBM has been doing server virtualization for decades. When I first started at IBM in 1986, I was doing z/OS development and testing on z/VM guest images. Later, around 1999, I started working with the "Linux on z" team, running multiple Linux images under PR/SM and z/VM. While the server virtualization solutions most people are familiar with (VMware, Hyper-V, Xen) have only been around the last five years or so, IBM has a much deeper and robust understanding and long heritage. This helps to set IBM apart from the competition when helping clients.
I've gotten several emails expressing worry that I have fallen off the face of the earth. The last two weeks have been educational and eye-opening for me. I can't provide details in my blog, so I will just say that it involved government agencies that IBM refers to as "dark accounts", and that I am now back safely in the USA. Between adjusting to time zone differences, ridiculously long hours, and restricted access to the internet, I was unable to blog lately.
Instead, I will resume my coverage of the [IBM System Storage Technical University 2011]. The "Solutions Expo" runs Monday evening through Wednesday lunch. This is a chance for people to explore all the solutions that are part of IBM's large "eco-system" for IBM System storage and System x products. There were several sponsors for this event.
As is often the case at these conferences, the various booths hand out fun items. The hot items this year were tie-dyed tee-shirts from Qlogic, and propeller beanies from the IBM rack and power systems team. Here is Amanda, one of the bartenders showing off the latter.
After the expo on Tuesday night, my friends at [Texas Memory Systems] held an after-party. Unlike the pens, tee-shirts and keychains at the Expo, these guys had a raffle for real storage products. Here is Erik Eyberg handing out a RamSan PCIe card, valued at $14,000 or so. IBM recently certified the TMS RamSan as External SSD storage for the IBM SAN Volume Controller (SVC). The SVC can optimize performance using this for automated sub-LUN tiering with the IBM System Storage Easy Tier feature.
I always try to catch a session from Jim Blue, who works in our "SAN Central" center of competency team. This session was a long list of useful hints and tips, based on his many years of experience helping clients.
SAN Zoning works by inclusion, limiting the impact of failing devices. The best approach is to zone by individual initiator port. The default policy for your SAN zoning should be "deny".
Ports should be named to identify who, what, where and how.
While many people know not to mix both disk and tape devices on the same HBA, Jim also recommends not mixing dissimilar disks, test and production, FCP and FICON.
The sweet spot is FOUR paths. Too many paths can impact performance.
When making changes to redundant fabrics, make changes to the first fabric, then allow sufficient time before making the same changes to the other fabric.
Use software tools like Tivoli Storage Productivity Center (Standard Edition) to validate all changes to your SAN fabric.
Do not mix 62.5 and 50.0 micron technology.
Use port caps to disable inactive ports. In one amusing anecdote, he mentioned that an uncovered port was hit by sunlight every day, sending error messages that took a while to figure out.
Save your SAN configuration to non-SAN storage for backup
Consider firmware about two months old to be stable
Rule of thumb for estimating IOPS: 75-100 IOPS per 7200 RPM drive, 120-150 IOPS per 10K RPM drive, and 150-200 IOPS per 15K RPM drive.
Decide whether your shop is just-in-time or just-in-case provisioning. Just-in-time gets additional capacity on demand as needed, and just-in-case over-provisions to avoid scrambling last minute.
Avoid oversubscribing your inter-switch links (ISL). Aim for around 7:1 to 10:1 ratio.
Don't go cheap on bandwidth between sites for long-distance replication.
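Several of these rules of thumb lend themselves to a quick planning script. Here is a minimal sketch in Python; the drive classes, port names, and zone-naming scheme are illustrative assumptions on my part, not an IBM or switch-vendor tool.

```python
# Illustrative planning helpers based on the rules of thumb above.
# Drive classes, port names, and zone naming are hypothetical examples.

# Rule of thumb: estimated IOPS per spindle, by drive class.
IOPS_PER_DRIVE = {
    "7200rpm": (75, 100),
    "10k": (120, 150),
    "15k": (150, 200),
}

def estimate_iops(drive_counts):
    """Return (low, high) aggregate IOPS for a dict of {drive_class: count}."""
    low = sum(IOPS_PER_DRIVE[c][0] * n for c, n in drive_counts.items())
    high = sum(IOPS_PER_DRIVE[c][1] * n for c, n in drive_counts.items())
    return low, high

def single_initiator_zones(initiators, targets):
    """Zone by individual initiator port: one zone per initiator,
    listing only that initiator plus the target ports it needs."""
    return {"zone_" + i: [i] + list(targets) for i in initiators}

def isl_ratio_ok(edge_ports, isl_ports, max_ratio=10):
    """Check edge-port-to-ISL oversubscription against the 7:1 to 10:1 guideline."""
    return edge_ports <= max_ratio * isl_ports

low, high = estimate_iops({"15k": 64, "7200rpm": 128})
print("Estimated back-end capability: %d-%d IOPS" % (low, high))
print(single_initiator_zones(["hostA_p0", "hostB_p0"], ["ctrl_p0", "ctrl_p1"]))
print(isl_ratio_ok(edge_ports=96, isl_ports=8))   # 12:1 is oversubscribed
```

A mix of 64 15K RPM and 128 7200 RPM drives, for example, works out to roughly 19,200-25,600 back-end IOPS under this rule of thumb.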
Next Generation Network Fabrics - Strategy and Innovations
Mike Easterly, IBM Director of Global Field Marketing, presented IBM System Networking strategy, in light of IBM's recent acquisition of Blade Network Technologies (BNT). BNT is used in 350 of the Fortune 500 companies, and is ranked #2 behind Cisco in sales of non-core Ethernet switches (based on number of units sold).
Based on a recent survey, companies are upgrading their Ethernet networks for a variety of reasons:
56 percent for Live Partition Mobility and VMware vMotion
45 percent for integrated compute stacks, like IBM CloudBurst
43 percent for private, public and hybrid cloud computing deployments
40 percent for network convergences
Many companies adopt a three-level approach, with core directors, distribution switches, and then access switches at the edge that connect servers and storage devices. IBM's BNT allows you to flatten the network to lower latency by collapsing the access and distribution levels into one.
IBM's strategy is to focus on BNT for the access/distribution level, and to continue its strategic partnerships for the core level.
IBM BNT provides better price/performance and lower energy consumption. To help with hot-aisle/cold-aisle rack deployments, IBM BNT provides both F and R models. F models have ports on the front, and R models have ports in the rear.
IBM BNT supports virtual fabric and HW-offload iSCSI traffic, and is future-enabled for FCoE. Support for TRILL (transparent interconnect of lots of links) and OpenFlow will be implemented through software updates to the switches.
While Cisco Nexus 1000v is focused on VMware Enterprise Plus, IBM BNT's VMready works with VMware, Hyper-V, Linux KVM, Xen, OracleVM, and PowerVM. This allows a single pane of management across VMready and ESX vSwitches.
In preparation for Converged Enhanced Ethernet (CEE), IBM BNT will provide full 40GbE support sometime next year, and offer switches that support 100GbE uplinks. IBM offers extended length cables, including passive SFP+ DAC at 8.5 meters, and 10Gbase-T Cat7 cables up to 100 meters.
Inter-datacenter Workload Mobility with VMware vSphere and SAN Volume Controller (SVC)
This session was co-presented by Bill Wiegand, IBM Advanced Technical Services, and Rawley Burbridge, IBM VMware and midrange storage consultant. IBM is the market leader in storage virtualization with SVC, and is also the leading reseller of VMware.
Like MetroCluster on IBM N series, or EMC's VPLEX Metro, the IBM SAN Volume Controller can support a stretched cluster across distance that allows virtual machines to move seamlessly from one datacenter to another. This is a feature IBM introduced with SVC 5.1 back in 2009. This can be used for PowerVM Live Partition Mobility, VMware vMotion, and Hyper-V Quick Migration.
SVC stretched cluster can help with both Disaster Avoidance and Disaster Recovery. For Disaster Avoidance, in anticipation of an outage, VMs can be moved to the secondary datacenter. For Disaster Recovery, additional automation, such as VMware High Availability (HA), is needed to restart the VMs at the secondary datacenter.
IBM stretched cluster is further improved with a feature called Volume Mirroring (formerly vDisk Mirroring) which creates two physical copies of one logical volume. To the VMware ESX hosts, there is only one volume, regardless of which datacenter it is in. The two physical copies can be on any kind of managed disk, as there is no requirement or dependency of copy services on the back-end storage arrays.
Another recent improvement is the idea of spreading the three quorum disks to three different locations or "failure domains". One in each data center, and a third one in a separate building, somewhere in between the other two, perhaps.
Of course, there are regional disasters that could affect both datacenters. For this reason, SVC stretched cluster volumes can be replicated to a third location up to 8000 km away. This can be done with any back-end disk arrays, as again there is no requirement for copy services from the managed devices. SVC takes care of it all.
Networking is going to be very important for a variety of transformational projects going forward in the next five years.
I have been working on Information Lifecycle Management (ILM) since before they coined the phrase. There were several break-out sessions on the third day at the [IBM System Storage Technical University 2011] related to new twists to ILM.
The Intelligent Storage Service Catalog (ISSC) and Smarter ILM
Hans Ammitzboll, Solution Rep for IBM Global Technology Services (GTS), presented an approach to ILM focused on using different storage products for different tiers. Is this new? Not at all! The phrase "Information Lifecycle Management" was coined in the early 1990s by StorageTek to help sell automated tape libraries.
Unfortunately, disk-only vendors started using the term ILM to refer to disk-to-disk tiering inside the disk array. Hans feels it does not make sense to put the least expensive penny-per-GB 7200 RPM disk inside the most expensive enterprise-class high-end disk arrays.
IBM GTS manages not only IBM's internal operations, but the IT operations of hundreds of other clients. To help manage all this storage, they developed software to supplement reporting, monitoring and movement of data from one tier to another.
The Intelligent Storage Service Catalog (ISSC) can save up to 80 percent of planning time for managing storage. What did people use before? Hans poked fun at chargeback and showback systems that "offer savings" but don't actually "impose savings". He referred to these as "Name-and-Shame" reports, where the top 10 offenders of storage usage are publicly listed.
His storage pyramid involves a variety of devices, with IBM DS8000, SVC and XIV for the high-end, midrange disk like Storwize V7000, and blended disk-and-tape solutions like SONAS and Information Archive (IA) for the lower tiers.
Mark Taylor, IBM Advanced Technical Services, presented the policy-driven automation of IBM's Scale-Out NAS (SONAS). A SONAS system can hold 1 to 256 file systems, and each file system is further divided into fileset containers. Think of fileset containers as "tree branches" of the file system.
SONAS supports policies for file placement, file movement, and file deletion. These are SQL-like statements that are then applied to specific file systems in the SONAS. Input variables include date last modified, date last accessed, file name, file size, fileset container name, user id and group id. You can choose to have the rules be case-sensitive or case-insensitive. The rules support macros. A macro pre-processor can help simplify calculations and other definitions that are used repeatedly.
Each file system in SONAS consists of one or more storage pools. For file systems with multiple pools, file placement policies can determine which pool to place each file. Normally, when a set of files are in a specific sub-directory on other NAS systems, all the files will be on the same type of disk. With SONAS, some files can be placed on 15K RPM drives, and other files on slower 7200 RPM drives. This file virtualization separates the logical grouping of files from the physical placement of them.
Once files are placed, other policies can be written to migrate from one disk pool to another, migrate from disk to tape, or delete the file. Migrating from one disk pool to another is done by relocation. The next time the file is accessed, it will be accessed directly from the new pool. When migrating from disk to tape, a stub is left in the directory structure metadata, so that subsequent access will cause the file to be recalled automatically from tape, back to disk. Policies can determine which storage pool files are recalled to when this happens.
Migrating from disk to tape involves sending the data from SONAS to an external storage pool manager, such as an IBM Tivoli Storage Manager (TSM) server connected to a tape library. SONAS supports pre-migration, which allows the data to be copied to tape but left on disk until space needs to be freed up. For example, a policy with THRESHOLD(90,70,50) will kick in when the file system is 90 percent full; files will be migrated (moved) to tape until usage reaches 70 percent, and then files will be pre-migrated (copied) to tape until it reaches 50 percent.
Policies to delete files can apply to both disk and tape pools. Files deleted on tape remove the stub from the directory structure metadata and notify the external storage pool manager to clean up its records for the tape data.
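The THRESHOLD(90,70,50) behavior described above can be sketched as a small simulation. This is my own illustrative Python model of the migrate and pre-migrate semantics, with made-up file names and sizes; the real SONAS policy engine uses SQL-like rules, not this code.

```python
from datetime import date

# Hypothetical file records for a 100 GB file system: (name, size_gb, last_accessed)
FILES = [
    ("scan001.dat", 40, date(2011, 1, 5)),
    ("scan002.dat", 30, date(2011, 3, 1)),
    ("video.mov",   20, date(2011, 5, 15)),
    ("report.doc",  10, date(2011, 7, 1)),
]

def threshold_plan(files, capacity_gb, high=90, low=70, premig=50):
    """Model THRESHOLD(high, low, premig): once the file system is 'high'
    percent full, migrate (move to tape, leaving a stub) least-recently
    accessed files until usage falls to 'low' percent, then pre-migrate
    (copy to tape, keeping the data on disk) until only 'premig' percent
    of capacity holds disk-only data."""
    used = sum(size for _, size, _ in files)
    if used * 100 / capacity_gb < high:
        return [], []                # below the trigger, do nothing
    migrate, premigrate = [], []
    disk_only = used                 # data that exists only on disk
    for name, size, _ in sorted(files, key=lambda f: f[2]):  # oldest first
        if used * 100 / capacity_gb > low:
            migrate.append(name)     # moving to tape frees disk space
            used -= size
            disk_only -= size
        elif disk_only * 100 / capacity_gb > premig:
            premigrate.append(name)  # copy only; disk usage unchanged
            disk_only -= size
    return migrate, premigrate

mig, pre = threshold_plan(FILES, 100)
print("migrate:", mig)       # oldest files moved until usage <= 70%
print("pre-migrate:", pre)   # next-oldest copied until disk-only data <= 50%
```

With the sample data above (100 percent full), only the oldest file needs to move to bring usage down to 70 percent, and the next-oldest is pre-migrated to bring disk-only data down to 50 percent.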
If this all sounds like a radically new way of managing data, it isn't. Many of these functions are based on IBM's Data Facility Storage Management Subsystem (DFSMS) for the mainframe. In effect, SONAS brings mainframe-class functionality to distributed systems.
Understanding IBM SONAS Use Cases
For many, the concept of a scale-out NAS is new. Stephen Edel, IBM SONAS product offering manager, presented a variety of use cases where SONAS has been successful.
First, let's consider backup. IBM SONAS has built-in support for Tivoli Storage Manager (TSM), as well as support for the NDMP industry-standard protocol, for use with Symantec NetBackup, CommVault Simpana, and EMC Legato NetWorker. While many NAS solutions support NDMP, IBM SONAS can support up to 128 sessions per interface node, and up to 30 interface nodes, for parallel processing. SONAS has a high-speed file scan to identify files to be backed up, and will pre-fetch small files into cache to speed up the backup process. A SONAS system can support up to 256 file systems, and each file system can be backed up on its own unique schedule if you like. Different file systems can be backed up to different backup servers.
SONAS also has anti-virus support, with your choice of Symantec or McAfee. An anti-virus scan can be run on demand, as needed, or as files are individually accessed. When a Windows client reads a file, SONAS will determine if it has already been scanned with the most recent anti-virus signatures, and if not, will scan it before allowing the file to be read. SONAS also scans newly created files.
Successful SONAS deployments addressed the following workloads:
content capture including video capture
high performance computing, research and business analytics
"Cheap and Deep" archive
worldwide information exchange and geographically distant collaboration
SONAS is selling well in Government, Universities, Healthcare, and Media/Entertainment, but is not limited to these industries. It can be used for private cloud deployments and public cloud deployments. Having centralized management for Petabytes of data can be cost-effective either way.
IBM SONAS brings the latest technologies to deliver a Smarter ILM for a variety of workloads and use cases.
IBM Storage Strategy for the Smarter Computing Era
I presented this session on Thursday morning. It is a session I give frequently at the IBM Tucson Executive Briefing Center (EBC). IBM launched [Smarter Computing initiative at IBM Pulse conference]. My presentation covered the role of storage in Business Analytics, Workload Optimized Systems, and Cloud Computing.
Layer 8: Cloud Computing and the new IT Delivery Model
Ed Batewell, IBM Field Technical Support Specialist, presented this overview on Cloud Computing. The "Layer 8" is a subtle reference to the [7-layer OSI Model] for networking protocols. Ed cites insights from the [2011 IBM Global CIO Survey]. Of the 3000 companies surveyed, 60 percent plan to use or deploy clouds. In the USA, 70 percent of CIOs have significant plans for cloud within the next 3-5 years. These numbers are double the statistics gleaned from the 2009 Global CIO survey. Clouds are one of IBM's big four initiatives, expected to generate $7 Billion USD in annual revenues by 2015.
IBM is recognized in the industry as one of the "Big 5" cloud vendors (Google, Yahoo, Microsoft, and Amazon round out the rest). As such, IBM has contributed to the industry a set of best practices known as the [Cloud Computing Reference Architecture (36-page document)]. As is typical for IBM, this architecture is complete end-to-end, covering the three main participants for successful cloud deployments:
Consumers: the people and systems that use cloud computing services
Providers: the people, infrastructure and business operations needed to deliver IT services to consumers
Developers: the people and their development tools that create apps and platforms for cloud computing
IBM is working hard to eliminate all barriers to adoption for Cloud Computing. [Mirage image management] can patch VM images offline to address "Day 0" viruses. [Hybrid Cloud Integrator] can help integrate new Cloud technologies with legacy applications. [IBM Systems Director VMcontrol] can manage VM images from z/VM on the mainframe, to PowerVM on UNIX servers, to VMware, Microsoft, Xen and KVM for x86 servers. IBM's [Cloud Service Provider Platform (CSP2)] is designed for Telecoms to offer Cloud Computing services. IBM CloudBurst is a "Cloud-in-a-Can" optimized stack of servers, storage and switches that can be installed in five days and comes in various "tee-shirt sizes" (Small, Medium, Large and Extra Large), depending on how many VMs you want to run.
Ed mentioned that companies trying to build their own traditional IT applications and environments, in an effort to compete against the cost-effective Clouds, reminded him of Thomas Thwaites' project of building a toaster from scratch. You can watch the [TED video, 11 minutes]:
An interesting project is [Reservoir], in which IBM is working with other industry leaders to develop a way to seamlessly migrate VMs from one location to another, globally, without requiring shared storage, SAN zones or Ethernet subnets. This is similar to how energy companies buy and sell electricity to each other as needed, or the way telecommunications companies allow roaming across each other's networks.
IBM System Networking - Convergence
Jeff Currier, IBM Executive Consultant for the new IBM System Networking group, presented this session on Network Convergence. Storage is expected to grow 44x, from 0.8 [Zettabytes] in 2009 to 35 Zettabytes by the year 2020. The role of the network is growing in importance. IBM refers to this converged lossless Ethernet network as "Converged Enhanced Ethernet" (CEE), while Cisco uses the term "Data Center Ethernet" (DCE), and the rest of the industry uses "Data Center Bridging" (DCB).
To make this happen, we need to replace the Spanning Tree Protocol [STP], which prevents loops in a multi-hop network configuration, with a new Layer 2 Multipathing (L2MP) protocol. The two competing proposals are Shortest Path Bridging (IEEE 802.1aq) and Transparent Interconnect of Lots of Links (IETF TRILL).
All roads lead to Ethernet. While FCoE has not caught on as fast as everyone hoped, iSCSI has benefited from all the enhancements to the Ethernet standard. iSCSI works in both lossy and lossless versions of Ethernet, and seems to be the preferred choice for new greenfield deployments for Small and Medium sized Businesses (SMB). Larger enterprises continue to use Fibre Channel (FCP and FICON), but might use single-hop FCoE from the servers to top-of-rack switches. Both iSCSI and FCoE scale well, but FCoE is considered more efficient.
IBM has a strategy, and is investing heavily in these standards, technologies, and core competencies.
Continuing my coverage of the [IBM System Storage Technical University 2011], I participated in the storage free-for-all, a long-time tradition started at the SHARE user group conference and carried forward to other IT conferences. The free-for-all is a Q&A panel of experts that allows anyone to ask any question. These are sometimes called "Birds of a Feather" (BOF). Last year, we had two: one focused on Tivoli Storage software, and the second to cover storage hardware. This year, we again had two, one for System x called "Ask the eXperts", and one for System Storage called "Storage Free-for-All". This post covers the latter.
(Disclaimer: Do not shoot the messenger! We had a dozen or more experts on the panel, representing System Storage hardware, Tivoli Storage software, and Storage services. I took notes, trying to capture the essence of the questions, and the answers given by the various IBM experts. I have spelled out acronyms and provided links to relevant materials. The answers from individual IBMers may not reflect the official position of IBM management. Where appropriate, my own commentary will be in italics.)
You are in the wrong session! Go to "Ask the eXperts" session next door!
The TSM GUI sucks! Are there any plans to improve it?
Yes, we are aware that products like IBM XIV have raised the bar for what people expect from graphical user interfaces. We have plans to improve the TSM GUI. IBM's new GUI for the SAN Volume Controller and Storwize V7000 has been well-received, and will be used as a template for the GUIs of other storage hardware and software products. The GUI uses the latest HTML5, Dojo widgets and AJAX technologies, eliminating Java dependencies on the client browser.
Can we run the TSM Admin GUI from a non-Windows host?
IBM has plans to offer this. Most likely, this will be browser-based, so that any OS with a modern browser can be used.
As hard disk drives grow larger in capacity, RAID-5 becomes less viable. What is IBM doing to address this?
IBM is aware of this problem. IBM offers RAID-DP on the IBM N series, RAID-X on the IBM XIV, and RAID-6 on its other disk systems.
TPC licensing is outrageous! What is IBM going to do about it?
About 25 percent of DS8000 disk systems have SSD installed. Now that IBM DS8000 Easy Tier supports "any two" tiers, roughly 50 percent of DS8000 now have Easy Tier activated. No idea on how Easy Tier has been adopted on SVC or Storwize V7000.
We have an 8-node SVC cluster, should we put 8 SSD drives into a single node-pair, or spread them out?
We recommend putting a separate Solid-State Drive in each SVC node, with RAID-1 between nodes of a node-pair. By separating the SSD across I/O groups, you can reduce node-to-node traffic.
How well has SVC 6.2 been adopted?
The inventory call-home data is not yet available. The only SVC hardware model that does not support this level of software was the 2145-4F2 introduced in 2003. Every other model since then can be updated to this level.
Will IBM offer 600GB FDE drives for the IBM DS8700?
Currently, IBM offers 300GB and 450GB 15K RPM drives with the Full-Disk Encryption (FDE) capability for the DS8700, and 450GB and 600GB 10K RPM drives with FDE for the IBM DS8800. IBM is working with its disk suppliers to offer FDE on other disk capacities, and on SSD and NL-SAS drives as well, so that all can be used with IBM Easy Tier.
Is there a reason for the feature lag between the Easy Tier capabilities of the DS8000, and that of the SVC/Storwize V7000?
We have one team for Easy Tier, so they implement it first on DS8000, then port it over to SVC/Storwize V7000.
Does it even make sense to have separate storage tiers, especially when you factor in the cost of SVC and TPC to make it manageable?
It depends! We understand this is a trade-off between cost and complexity. Most data centers have three or more storage tiers already, so products like SVC can help simplify interoperability.
Are there best practices for combining SVC with DS8000? Can we share one DS8000 system across two or more SVC clusters?
Yes, you can share one DS8000 across multiple SVC clusters. DS8000 has auto-restripe, so consider having two big extent pools. The queue depth is 3 to 60, so aim to have up to 60 managed disks on your DS8000 assigned to SVC; the more managed disks, the better.
The IBM System Storage Interoperability Center (SSIC) site does not seem to be designed well for SAN Volume Controller.
Yes, we are aware of that. It was designed based on traditional Hardware Compatibility Lists (HCL), but storage virtualization presents unique challenges.
How does the 24-hour learning period work for IBM Easy Tier? We have batch processing that runs from 2am to 8am on Sundays.
You can have Easy Tier monitor across this batch job window, and turn Easy Tier management between tiers on and off as needed.
Now that NetApp has acquired LSI, is the DS3000 still viable?
Yes, IBM has a strong OEM relationship with both NetApp and LSI, and this continues after the acquisition.
If have managed disks from a DS8000 multi-rank extent pool assigned to multiple SVC clusters, won't this affect performance?
Yes, possibly. Keep managed disks on separate extent pools if this is a big concern. A PERL script is available to re-balance SVC striped volumes as needed after these changes.
Is the IBM [TPC Reporter] a replacement for IBM Tivoli Storage Productivity Center?
No, it is software, available at no additional charge, that provides additional reporting to those who have already licensed Tivoli Storage Productivity Center 4.1 and above. It will be updated as needed when new versions of Productivity Center are released.
We are experiencing lots of stability issues with SDD, SDD-PCM and SDD-DSM multipathing drivers. Are these getting the development attention they deserve?
IBM's direction is to shift toward native OS-based multipathing drivers.
Is anyone actually thinking of deploying public cloud storage in the near-term?
A few hands in the audience were raised.
None of the IBM storage devices seem to have a [REST API]. Cloud storage providers are demanding this. What are IBM's plans?
IBM plans to offer REST on SONAS. IBM uses SONAS internally for its own cloud storage offerings.
If you ask a DB2 specialist, an AIX specialist, and a System Storage specialist, on how to configure System p and System Storage for optimal performance, you get three different answers. Are there any IBMers who are cross-functional that can help?
Yes, for example, Earl Jew is an IBM Field Technical Support Specialist (FTSS) for both System p and Storage, and can help you with that.
Both Oracle and Microsoft recommend RAID-10 for their applications.
Don't listen to them. Feel free to use RAID-5, RAID-6 or RAID-X instead.
Resizing SVC source volumes forces ongoing FlashCopy or Metro Mirror relationships to be stopped. Does IBM plan to address this?
Currently, you have to stop, resize both source and target, then start the relationship again. Consider getting IBM Tivoli Storage Productivity Center for Replication (TPC-R).
IBM continues to support this for existing clients. For new deployments, IBM offers SONAS and the Information Archive (IA).
When will I be able to move SVC volumes between I/O groups?
You can today, but it is disruptive to the operating system. IBM is investigating making this less disruptive.
Will XIV ever support the mainframe?
It does already, with support for both Linux and z/VM today. For VSE support, use SVC with XIV. For those with the new zBX extension, XIV storage can be used with all of the POWER and x86-based operating systems supported. IBM has no plans to offer direct FICON attachment for z/OS or z/TPF.
Not a question - Kudos to the TSM and ProtecTIER team in supporting native IP-based replication!
When will IBM offer POWER-based models of the XIV, SVC and other storage devices?
IBM's decision to use industry-standard x86 technology has proven quite successful. However, IBM revisits this decision every few years. Once again, the last review determined that it was not worth doing. A POWER-based model might not beat the price/performance of current x86 models, and maintaining two separate code bases would hinder development of new innovations.
We have both System i and System z, what is IBM doing to address the fact that PowerHA and GDPS are different?
IBM TPC-R has a service offering extension to support "IBM i" environments. GDPS plans to support multi-platform environments as well.
This was a great interactive session. I am glad everyone stayed late Thursday evening to participate in this discussion.
Wrapping up my coverage of the [IBM System Storage Technical University 2011], I attended a few sessions on Friday morning. The last session was Glenn Anderson's "IT Game Changers: the IT Professional's Guide to Becoming a Technology Trailblazer." Glenn used to run the Storage University events, but now is the conference manager for the System z mainframe events.
Glenn organized this talk around lessons from the following books:
Glenn suggested that IT professionals should understand the dissatisfaction with IT that is driving companies to switch over to Cloud Computing. IT professionals should adopt a service-oriented approach, realize the full potential of new disruptive technologies, and know when to "jump the curve" to the next generation of technology. For example, IT professionals should lead the movement to Cloud. If you build your own private cloud, or purchase some time for instances on a public cloud, you will be in a better position to be the "trusted advisor" to IT management.
CIOs should encourage IT to be part of the corporate strategy, but may have to fix the broken IT funding model. The IT department should be a "value center", not a "cost center" as it has traditionally been treated. When treated as a "cost center", IT departments focus only on cost reductions, rather than on ways the IT department can help drive revenues, improve customer service, or enhance employee productivity. A well-organized IT department can be a competitive advantage.
Taking a "service-oriented" approach allows IT and Business Process to come together. Often times, IT and business professionals don't communicate well, and this new service-oriented approach can bridge the gap. Service Oriented Architecture [SOA] can help connect existing legacy applications to the new Cloud Computing environment.
IT budgets should consist of two parts: strategic funding for new IT projects, and an operational budget for keeping current applications running. Roughly 45 percent of capital investment in the USA goes toward IT. Too often, the IT department is focused on itself, on technology and reducing costs, and not enough on aligning IT with business transformation. When IT is used in conjunction with a sound business strategy, there can be a significant payoff.
After 550 years, the printing press and printed materials are being pushed from center stage. While other electronic media like radio and television have been around for a while, the internet and digital publishing are constantly available, and represent a shift from traditional printed materials.
When evaluating new technologies, IT professionals should ask themselves a few questions. Is it easy to use? Does it enable people to connect in new ways? Is it more cost-effective, or tap new sources of revenue? Does it shift power from one player to another? A new intellectual ethic is taking hold. Becoming an IT Game Changer can help stay one step ahead as Cloud Computing and other new IT platforms are adopted.
This week, July 26-30, 2010, I am in Washington DC for the annual [2010 System Storage Technical University]. As with last year, we have joined forces with the System x team. Since we are in Washington DC this time, IBM added a "Federal Track" to focus on government challenges and solutions, basically offering attendees the option to attend three conferences for one low price.
This conference was previously called the "Symposium", but IBM changed the name to "Technical University" to emphasize the technical nature of the conference. No marketing puffery like "Journey to the Private Cloud" here! Instead, this is bona fide technical training, qualifying attendees to count this towards their Continuing Professional Education (CPE).
(Note to my readers: The blogosphere is like a playground. In the center are four-year-olds throwing sand into each other's faces, while mature adults sit on benches watching the action, and only jumping in as needed. For example, fellow blogger Chuck Hollis (EMC) got sand in his face for promising to resign if EMC ever offered a tacky storage guarantee, and then [failed to follow through on his promise] when it happened.
Several of my readers asked me to respond to another EMC blogger's latest [fistful of sand].
A few months ago, fellow blogger Barry Burke (EMC) committed to [stick to facts] in posts on his Storage Anarchist blog. That didn't last long! BarryB apparently has fallen in line with EMC's over-promise-then-under-deliver approach. Unfortunately, I will be busy covering the conference and IBM's robust portfolio of offerings, so won't have time to address BarryB's stinking pile of rumor and hearsay until next week or later. I am sorry to disappoint.)
This conference is designed to help IT professionals make their business and IT infrastructure more dynamic and, in the process, help reduce costs, mitigate risks, and improve service. This technical conference event is geared to IT and Business Managers, Data Center Managers, Project Managers, System Programmers, Server and Storage Administrators, Database Administrators, Business Continuity and Capacity Planners, IBM Business Partners and other IT Professionals. This week will offer over 300 different sessions and hands-on labs, certification exams, and a Solutions Center.
For those who want a quick stroll through memory lane, here are my posts from past events:
In keeping with IBM's leadership in Social Media, the IBM Systems Lab Services and Training team running this event has its own [Facebook Fan Page] and
[blog]. IBM Technical University has a Twitter account [@ibmtechconfs] and hashtag #ibmtechu. You can also follow me on Twitter [@az990tony].
Continuing my week in Washington DC for the annual [2010 System Storage Technical University], here is my quick recap of the keynote sessions presented Monday morning. Marlin Maddy, Worldwide Technical Events Executive for IBM Systems Lab Services and Training, served as emcee.
Roland Hagan, IBM Vice President for IBM System x server platform, presented on how IBM is redefining the x86 computing experience. More than 50 percent of all servers are x86 based. These x86 servers are easy to acquire, enjoy a large application base, and can take advantage of readily available skilled workforce for administration. The problem is that 85 percent of x86 processing power remains idle, energy costs are 8 times what they were 12 years ago, and management costs are now 70 percent of the IT budget.
IBM has the number one market share for scalable x86 servers. Roland covered the newly announced eX5 architecture that has been deployed in both rack-optimized models as well as IBM BladeCenter blade servers. These can offer 2x the memory capacity of competitive offerings, which is important for today's server virtualization, database and analytics workloads. This includes 40- and 80-DIMM models of blades, and 64- to 96-DIMM models of rack-optimized systems. IBM also announced eXFlash, internal Solid State Drives accessible at bus speeds. FlexNode allows a 4-node system to dynamically change into 2 separate 2-node systems.
By 2013, analysts estimate that 69 percent of x86 workloads will be virtualized, and that 22 percent of servers will be running some form of hypervisor software. By 2015, this grows to 78 percent of x86 workloads being virtualized, and 29 percent of servers running hypervisor.
Doug Balog, IBM Vice President and Disk Storage Business Line Executive, presented how the growth of information results in a "perfect storm" for the storage industry. Storage Admins are focused on managing storage growth and the related costs and complexity, proper forecasting and capacity planning, and backup administration. IBM's strategy is to help clients in the following areas:
Storage Efficiency - getting the most use out of the resources you invest
Service Delivery - ensuring that information gets to the right people at the right time, simplify reporting and provisioning
Data Protection - protecting data against unethical tampering, unauthorized access, and unexpected loss and corruption
He wrapped up his talk covering the success of DS8700 and XIV. In fact, 60 percent of XIV sales are to EMC customers. The TCO of an XIV is less than half the TCO of a comparable EMC VMAX disk system.
Dave McQueeney, IBM Vice President for Strategy and CTO for US Federal, covered how IBM's Smarter Planet vision for smarter cities, smarter healthcare, smarter energy grid and smarter traffic are being adopted by the public sector. Almost every data center in US Federal government is out of power, floor space and/or cooling capability. An estimated 80 percent of US Federal government IT budgets are spent on maintenance and ongoing operations, leaving very little left over for the big transformational projects that President Barack Obama wants to accomplish.
Who has the most active Online Transaction Processing (OLTP)? You might guess a big bank, but it is the US Department of Homeland Security (DHS), with a system processing 600 million transactions per day. Another government agency is #2, and the top banking application comes in at #3. The IBM mainframe solved problems 10 to 15 years ago that distributed systems are only now encountering. Worldwide, more than 80 percent of banks use mainframes to handle their financial transactions.
IBM's recent line of POWER7 servers is proving successful in the field. For example, Allianz was able to consolidate 60 servers down to 1. Running DB2 on a POWER7 server is 38 percent less expensive than Oracle on x86 Nehalem processors. For Java, running a JVM on POWER7 is 73 percent better than a JVM on x86 Nehalem.
The US federal government ingests a large amount of data, with huge 10-20 PB data warehouses. In fact, the number of gigabytes received every year by the US federal government alone exceeds the combined capacity of all disk drives produced by all drive manufacturers. This means that all data must be processed through "data reduction" or it is gone forever.
The last keynote for Monday was given by Clod Barrera, IBM Distinguished Engineer and Chief Technical Strategist for System Storage. He started out shocking the audience with his view that the "disk drive industry is a train wreck". While R&D in disk drives enjoyed a healthy improvement curve up to about 2004, it has now slowed down, getting more difficult and more expensive to improve performance and capacity of disk drives. The rest of his presentation was organized around three themes:
Integrated Stacks - while newcomers like Oracle/Sun and the VCE coalition are promoting the benefits of integrated stacks, IBM has been doing this for the past five decades. New advancements in server and storage virtualization provide exciting new opportunities.
Integrated Systems - solutions like IBM Information Archive and SONAS, and new features like Easy Tier that help adopt SSD transparently. As it gets harder and harder to scale-up, IBM has moved to innovative scale-out architectures.
Integrated Data Center management - companies are now realizing that management and governance are critical factors of success, and that this needs to be integrated between traditional IT, private, public and hybrid cloud computing.
This was a great, inspiring start to what looks like an awesome week!
Continuing my week in Washington DC for the annual [2010 System Storage Technical University], I presented a session on Storage for the Green Data Center, and attended a System x session on Greening the Data Center. Since they were related, I thought I would cover both in this post.
Storage for the Green Data Center
I presented this topic in four general categories:
Drivers and Metrics - I explained the three key drivers for consuming less energy, and the two key metrics: Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCiE).
Storage Technologies - I compared the four key storage media types: Solid State Drives (SSD), high-speed (15K RPM) FC and SAS hard disk, slower (7200 RPM) SATA disk, and tape. I had comparison slides that showed how IBM disk is more energy efficient than the competition; for example, the DS8700 consumes less energy than an EMC Symmetrix when compared with the exact same number and type of physical drives. Likewise, IBM LTO-5 and TS1130 tape drives consume less energy than comparable HP or Oracle/Sun tape drives.
Integrated Systems - IBM combines multiple storage tiers in a set of integrated systems managed by smart software. For example, the IBM DS8700 offers [Easy Tier] for smart data placement and movement across Solid-State Drives and spinning disk. I also covered several blended disk-and-tape solutions, such as the Information Archive and SONAS.
Actions and Next Steps - I wrapped up the talk with actions that data center managers can take to help them be more energy efficient, from deploying the IBM Rear Door Heat Exchanger to improving the management of their data.
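As a side note, the two metrics from the Drivers and Metrics bullet are easy to compute. Here is a minimal sketch (the example wattages are invented for illustration):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    A perfect data center would score 1.0; lower is better."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center infrastructure Efficiency: the reciprocal of PUE,
    expressed as a percentage. Higher is better."""
    return 100.0 * it_equipment_kw / total_facility_kw

# Hypothetical example: 1,300 kW total draw, 1,000 kW of it reaching IT gear
print(pue(1300, 1000))   # 1.3
print(round(dcie(1300, 1000), 1))  # 76.9 (percent)
```

In other words, a PUE of 1.3 corresponds to a DCiE of about 77 percent: the closer PUE gets to 1.0, the less energy is spent on cooling and power distribution rather than computing.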
Greening of the Data Center
Janet Beaver, IBM Senior Manager of Americas Group facilities for Infrastructure and Facilities, presented on IBM's success in becoming more energy efficient. The price of electricity has gone up 10 percent per year, and in some locations, 30 percent. For every 1 Watt used by IT equipment, there are an additional 27 Watts for power, cooling and other uses to keep the IT equipment comfortable. At IBM, data centers represent only 6 percent of total floor space, but 45 percent of all energy consumption. Janet covered two specific data centers, Boulder and Raleigh.
At Boulder, IBM keeps 48 hours reserve of gasoline (to generate electricity in case of outage from the power company) and 48 hours of chilled water. Many power outages are less than 10 minutes, which can easily be handled by the UPS systems. At least 25 percent of the Computer Room Air Conditioners (CRAC) are also on UPS as well, so that there is some cooling during those minutes, within the ASHRAE guidelines of 72-80 degrees Fahrenheit. Since gasoline gets stale, IBM runs the generators once a month, which serves as a monthly test of the system, and clears out the lines to make room for fresh fuel.
The IBM Boulder data center is the largest in the company: 300,000 square feet (the equivalent of five football fields)! Because of its location in Colorado, IBM enjoys "free cooling" using outside air temperature 63 percent of the year, resulting in a PUE rating of 1.3. Electricity is only 4.5 US cents per kWh. The center also uses 1 million kWh per year of wind energy.
The Raleigh data center is only 100,000 square feet, with a PUE rating of 1.4. The Raleigh area enjoys 44 percent "free cooling" and electricity costs of 5.7 US cents per kWh. The Leadership in Energy and Environmental Design [LEED] program has been updated to certify data centers. The IBM Boulder data center has achieved LEED Silver certification, and the IBM Raleigh data center has LEED Gold certification.
Free cooling, electricity costs, and disaster susceptibility are just three of the 25 criteria IBM uses to locate its data centers. In addition to the 7 data centers it manages for its own operations, and 5 data centers for web hosting, IBM manages over 400 data centers of other clients.
It seems that Green IT initiatives are more important to the storage-oriented attendees than the x86-oriented folks. I suspect that is because many System x servers are deployed in small and medium businesses that do not have data centers, per se.
Continuing my week in Washington DC for the annual [2010 System Storage Technical University], here is my quick recap of more of the keynote sessions.
Jim Northington, IBM System x Business Line Executive, covered the IT industry's "Love/Hate Relationship" with x86 platform. Many of the physical limitations that were previously a pain on this platform are now addressed, through a combination of IBM's new innovative eX5 architecture and virtualization technologies.
Jim also presented the [IBM CloudBurst] solution. IBM CloudBurst is one of the many "Integrated Systems" designed to help simplify deployment. Based on IBM BladeCenter, the IBM CloudBurst is basically a Private Cloud rack for those that are ready to deploy in their own data center.
Jim feels that server virtualization on x86 platforms is still in its infancy. IBM calls it the 70/30 rule: 70 percent of x86 workloads are running virtualized on 30 percent of the physical servers.
Maria Azua, IBM Vice President of Cloud Computing Enablement, presented on Cloud Computing. Technology is being adopted at faster rates. It took 40 years for radio to get 60 million listeners, 20 years for 60 million television viewers, 3 years to get 60 million surfers on the Internet, but it only took 4 months to get 60 million players on Farmville!
Maria covered various aspects of Cloud Computing: virtualization images, service catalog, provisioning elasticity, management and billing services, and virtual networks. With Cloud Computing, the combination of virtualization technologies, standardization, and automation can reduce costs and improve flexibility.
We've seen this happen before. Telcos transitioned from human operators to automated digital switches. Manufacturers went from having small teams of craftsmen to assembly lines of robots. Banks went from long lines of bank tellers to short lines at the ATM.
Maria said that companies are faced with three practical choices:
Do-it-yourself: buy the servers, storage and switches, and connect everything together.
Purchase pre-installed "integrated systems" to simplify deployment.
Subscribe to Cloud Computing, allowing a service provider to do all this for you.
In countries where network access is not ubiquitous, IBM has developed tools for the cloud that work in "offline" mode. IBM has also developed or modified tools to run better in the cloud. Launching a computer instance from the cloud from the service catalog is so easy to do, your 5-year-old child can do this!
Want to see Cloud Computing in action? Check out [Innovation.ed.gov], which is run in the IBM cloud, for the US Department of Education's website to foster innovation.
Whether you adopt public, private or a hybrid cloud computing approach, Maria suggests you take time to plan, test your applications for standardization, examine all risks, and explore new workloads that might be good candidates. Otherwise, moving to the cloud might just mean "More mess for less". Maria provided a list of applications that IBM considers good fit for Cloud Computing today.
I heard several audience members indicate that this is the first time someone finally explained Cloud Computing to them in a way that made sense!
IBM Tivoli Storage Productivity Center version 4.1 Overview
In conferences like these, there are two types of product-level presentations. An "Overview" explains how a product works today to those who are not familiar with it. An "Update" explains what's new in this version of the product for those who are already familiar with previous releases. This session was an Overview of [Tivoli Storage Productivity Center], plus some information on IBM's Storage Enterprise Resource Planner [SERP], which came from IBM's acquisition of NovusCG.
I was one of the original lead architects of Productivity Center many years ago, and was able to share many personal experiences about its evolution in development and in the field at client facilities. Analysts have repeatedly rated IBM Productivity Center as one of the top Storage Resource Management (SRM) tools available in the marketplace.
I would like to thank my colleague Harley Puckett for his assistance in putting the finishing touches on this presentation. This was my best attended session of the week, indicating there is a lot of interest in this product in particular, and managing a heterogeneous mix of storage devices in general. To hear a quick video introduction, see Harley Puckett's presentation at the [IBM Virtual Briefing Center].
Information Lifecycle Management (ILM) Overview
Can you believe I have been doing ILM since 1986? I was the lead architect for DFSMS which provides ILM support for z/OS mainframes. In 2003-2005, I spent 18 months in the field performing ILM assessments for clients, and now there are dozens of IBM practitioners in Global Services and Lab Services that do this full time. This is a topic I cover frequently at the IBM Executive Briefing Center [EBC], because it addresses several top business challenges:
Reducing costs and simplifying management
Improving efficiency of personnel and application workloads
Managing risks and regulatory compliance
IBM has a solution based on several "entry points". The advantage of this approach is that it allows our consultants to craft the right solution to meet the specific requirements of each client situation. These entry points are:
Tiered Information Infrastructure - we don't limit ourselves to just "Tiered Storage", as storage is only part of a complete [information infrastructure] of servers, networks and storage
Storage Optimization and Virtualization - including virtual disk, virtual tape and virtual file solutions
Process Enhancement and Automation - an important part of ILM is the policies and procedures, such as IT Infrastructure Library [ITIL] best practices
Archive and Retention - space management and data retention solutions for email, database and file systems
When I presented ILM last year, I did not get many attendees. This time I had more, perhaps because the recent announcement of ILM and HSM support in IBM SONAS and our April announcement of IBM DS8700 Easy Tier have renewed interest in this area.
I have safely returned to Tucson, but I still have a lot of notes from the other sessions I attended, so I will cover them this week.
Continuing coverage of my week in Washington DC for the annual [2010 System Storage Technical University], I attended several XIV sessions throughout the week; there were too many to attend them all. Jack Arnold, one of my colleagues at the IBM Tucson Executive Briefing Center, often presents XIV to clients and Business Partners. He covered all the basics of XIV architecture, configuration, and features like snapshots and migration. Carlos Lizarralde presented "Solving VMware Challenges with XIV". Ola Mayer presented "XIV Active Data Migration and Disaster Recovery".
Here is my quick recap of two in particular that I attended:
XIV Client Success Stories - Randy Arseneau
Randy reported that IBM had its best quarter ever for the XIV, reflecting an unexpected surge shortly after my blog post debunking the DDF myth last April. He presented successful case studies of client deployments, many of which followed a familiar pattern. First, the client would purchase only one or two XIV units. Second, the client would beat the crap out of them, putting them under all kinds of stress from different workloads. Third, the client would discover that the XIV is really as amazing as IBM and IBM Business Partners have told them. Finally, in the fourth phase, the client would deploy the XIV for mission-critical production applications.
A large US bank holding company managed to get 5.3 GB/sec from a pair of XIV boxes for their analytics environment. They now have 14 XIV boxes deployed in mission-critical applications.
A large equipment manufacturer compared the offerings among seven different storage vendors, and IBM XIV came out the winner. They now have 11 XIV boxes in production and another four boxes for development/test. They have moved their entire VMware infrastructure to IBM XIV, running over 12,000 guest instances.
A financial services company bought their first XIV in early 2009 and now has 34 XIV units in production attached to a variety of Windows, Solaris, AIX, Linux servers and VMware hosts. Their entire Microsoft Exchange environment was moved from HP and EMC disk to IBM XIV, and they experienced a noticeable performance improvement.
When a University health system replaced two competitive disk systems with XIV, their data center temperature dropped from 74 to 68 degrees Fahrenheit. In general, XIV systems are 20 to 30 percent more energy efficient per usable TB than traditional disk systems.
A service provider that had used EMC disk systems for over 10 years evaluated the IBM XIV versus upgrading to EMC V-Max. The three-year total cost of ownership (TCO) of EMC's V-Max was $7 million US dollars higher, so EMC counter-proposed CLARiiON CX4 instead. But, in the end, IBM XIV proved to be the better fit, and the customer is happy to have made the switch.
The manager of an information communications technology service provider was impressed that the XIV was up and running in just a couple of days. They now have over two dozen XIV systems.
Another XIV client had lost all of their Computer Room Air Conditioning (CRAC) units for several hours. The data center heated up to 126 degrees Fahrenheit, but the customer did not lose any data on either of their two XIV boxes, which continued to run in these extreme conditions.
Optimizing XIV Performance - Brian Cormody
This session was an update from the [one presented last year] by Izhar Sharon. Brian presented various best practices for optimizing the performance when using specific application workloads with IBM XIV disk systems.
Oracle ASM: Many people allocate lots of small LUNs, because this made sense long ago when all you had was just a bunch of disks (JBOD). In fact, many of the practices that DBAs use to configure databases across disks become unnecessary with XIV. With XIV, you are better off allocating a small number of very large LUNs. The best option was a 1-volume ASM pool with an 8MB AU stripe. A single LUN can contain multiple Oracle databases, and a single LUN can be used to store all of the logs.
VMware: Over 70 percent of XIV customers use it with VMware. For VMFS, IBM recommends allocating a small number of large LUNs; you can specify the maximum of 2181 GB. Do not use VMware's internal LUN extension capability, as IBM XIV already offers thin provisioning, and it works better to let the XIV handle this for you. XIV snapshots provide crash-consistent copies without all the overhead of VMware snapshots.
SAP: For planning purposes, the "SAPS" unit equates roughly to 0.4 IOPS for ERP OLTP workloads, and 0.6 IOPS for BW/BI OLAP workloads. In general, an XIV can deliver 25,000-30,000 IOPS at 10-15 msec response time, and 60,000 IOPS at 30 msec response time. With SAP, our clients have managed to get 60,000 IOPS at less than 15 msec.
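Brian's SAPS rules of thumb lend themselves to a quick back-of-the-envelope sizing check. This sketch simply encodes the ratios quoted above; the 50,000 SAPS workload is a made-up example:

```python
# Rough IOPS estimate from SAP's SAPS benchmark unit, per the rules of
# thumb quoted in the session: ~0.4 IOPS/SAPS for ERP (OLTP) workloads,
# ~0.6 IOPS/SAPS for BW/BI (OLAP) workloads.
IOPS_PER_SAPS = {"oltp": 0.4, "olap": 0.6}

def estimated_iops(saps: float, workload: str) -> float:
    """Translate a SAPS rating into an approximate IOPS requirement."""
    return saps * IOPS_PER_SAPS[workload]

# Hypothetical 50,000 SAPS ERP landscape:
print(estimated_iops(50_000, "oltp"))  # 20000.0
```

A 20,000 IOPS requirement would sit comfortably within the 25,000-30,000 IOPS an XIV delivers at 10-15 msec response time.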
Microsoft Exchange: Even my friends in Redmond could not believe how awesome XIV was during ESRP testing. Five Exchange 2010 servers connected to a pair of XIV boxes using the new 2TB drives managed 40,000 mailboxes at the high profile (0.15 IOPS per mailbox). Another client found that four XIV boxes (720 drives) were able to handle 60,000 mailboxes (5GB max), which would have taken over 4000 drives if internal disk drives were used instead. Who said SANs are obsolete for MS Exchange?
Asynchronous Replication: IBM now has an "Async Calculator" to model and help design an XIV async replication solution. In general, dark fiber works best, and MPLS clouds had the worst results. The latest 10.2.2 microcode for the IBM XIV can now handle 10 Mbps at less than 250 msec roundtrip. During the initial sync between locations, IBM recommends setting "schedule=never" to consume as much bandwidth as possible. If you don't trust the bandwidth measurements your telco provider is reporting, consider testing the bandwidth yourself with the [iPerf] open source tool.
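To put those numbers in perspective, here is a rough estimate of how long an initial sync could take over a thin pipe. This is only a sketch: the 10 TB capacity and the 80 percent link-efficiency factor are my assumptions, and it ignores compression and change-rate effects:

```python
def initial_sync_days(capacity_tb: float, link_mbps: float,
                      efficiency: float = 0.8) -> float:
    """Days needed to push capacity_tb across a link_mbps WAN link,
    assuming a sustained efficiency factor for protocol overhead."""
    bits = capacity_tb * 1e12 * 8               # decimal TB -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 86_400                     # seconds -> days

# Hypothetical: 10 TB of allocated data over the 10 Mbps figure quoted above
print(round(initial_sync_days(10, 10), 1))  # 115.7 days
```

At nearly four months for a modest 10 TB, it is easy to see why IBM recommends "schedule=never" during the initial sync and why measuring your real bandwidth with iPerf matters.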
Continuing my coverage of the annual [2010 System Storage Technical University], I participated in the storage free-for-all, a long-time tradition that started at the SHARE user group conference and has carried forward to other IT conferences. The free-for-all is a Q&A panel of experts, where anyone can ask any question. These are sometimes called "Birds of a Feather" (BOF) sessions. Last year, they were called "Meet the Experts": one for mainframe storage, and the other for storage attached to distributed systems. This year, we had two: one focused on Tivoli Storage software, and the second covering storage hardware. This post provides a recap of the storage hardware free-for-all.
The emcee for the event was Scott Drummond. The other experts on the panel included Dan Thompson, Carlos Pratt, Jack Arnold, Jim Blue, Scott Schroder, Ed Baker, Mike Wood, Steve Branch, Randy Arseneau, Tony Abete, Jim Fisher, Scott Wein, Rob Wilson, Jason Auvenshine, Dave Canan, Al Watson, and myself, yours truly, Tony Pearson.
What can I do to improve performance on my DS8100 disk system? It is running a mix of sequential batch processing and my medical application (EPIC). I have 16GB of cache and everything is formatted as RAID-5.
We are familiar with EPIC. It does not "play well with others", so IBM recommends you consider dedicating resources for just the EPIC data. Also consider RAID-10 instead for the EPIC data.
How do I evaluate IBM storage solutions in regards to [PCI-DSS] requirements?
Well, we are not lawyers, and some aspects of the PCI-DSS requirements are outside the storage realm. In March 2010, IBM was named ["Best Security Company"] by SC Magazine, and we have secure storage solutions for both disk and tape systems. IBM DS8000 and DS5000 series offer Full Disk Encryption (FDE) disk drives. IBM LTO-4/LTO-5 and TS1120/TS1130 tape drives meet FIPS requirements for encryption. We will provide you contact information on an encryption expert to address the other parts of your PCI-DSS specific concerns.
My telco will only offer FCIP routing for long-distance disk replication, but my CIO wants to use Fibre Channel routing over CWDM, what do I do?
IBM XIV, DS8000 and DS5000 all support FC-based long distance replication across CWDM. However, if you don't have dark fiber, and your telco won't provide this option, you may need to re-negotiate your options.
My DS4800 sometimes reboots repeatedly, what should I do?
This was a known problem with microcode level 760.28 when it detected a failed drive. You need to replace the failed drive and upgrade to the latest microcode.
Should I use VMware snapshots or DS5000 FlashCopy?
VMware snapshots are not free, you need to upgrade to the appropriate level of VMware to get this function, and it would be limited to your VMware data only. The advantage of DS5000 FlashCopy is that it applies to all of your operating systems and hypervisors in use, and eliminates the consumption of VMware overhead. It provides crash-consistent copies of your data. If your DS5000 disk system is dedicated to VMware, then you may want to compare costs versus trade-offs.
Any truth to the rumor that Fibre Channel protocol will be replaced by SAS?
SAS has some definite cost advantages, but is limited to 8 meters in length. Therefore, you will see more and more usage of SAS within storage devices, but outside the box, there will continue to be Fibre Channel, including FCP, FICON and FCoE. The Fibre Channel Industry Alliance [FCIA] has a healthy roadmap for 16 Gbps support and 20 Gbps interswitch link (ISL) connections.
What about Fibre Channel drives, are these going away?
We need to differentiate the connector from the drive itself. Manufacturers are able to produce 10K and 15K RPM drives with SAS instead of FC connectors. While many have suggested that a "Flash-and-Stash" approach of SSD+SATA would eliminate the need for high-speed drives, IBM predicts that there just won't be enough SSD produced to meet the performance needs of our clients over the next five years, so 15K RPM drives, more likely with SAS instead of FC connectors, will continue to be deployed.
We'd like more advanced hands-on labs, and to have the certification exams be more product-specific rather than exams for midrange disk or enterprise disk that are too wide-ranging.
Ok, we will take that feedback to the conference organizers.
IBM Tivoli Storage Manager is focused on disaster recovery from tape, how do I incorporate remote disk replication?
This is IBM's Unified Recovery Management, based on the seven tiers of disaster recovery established in 1983 at the GUIDE conference. You can combine local recovery with FastBack, data center server recovery with TSM and FlashCopy Manager, and combine that with IBM Tivoli Storage Productivity Center for Replication (TPC-R), GDOC and GDPS to manage disk replication across business continuity/disaster recovery (BC/DR) locations.
IBM Tivoli Storage Productivity Center for Replication only manages the LUNs, what about server failover and mapping the new servers to the replicated LUNs?
There are seven tiers of disaster recovery. The sixth tier is to manage the storage replication only, as TPC-R does. The seventh tier adds full server and network failover. For that you need something like IBM GDPS or GDOC that adds this capability.
All of my other vendor kit has bold advertising, prominent lettering, neon lights, bright colors, but our IBM kit is just black, often not even identifying the specific make or model, just "IBM" or "IBM System Storage".
IBM has opted for simplified packaging and our sleek, signature "raven black" color, and passes these savings on to you.
Bring back the SHARK fins!
We will bring that feedback to our development team. ("Shark" was the codename for IBM's ESS 800 disk model. Fiberglass "fins" were made as promotional items and placed on top of ESS 800 disk systems to help "identify them" on the data center floor. Unfortunately, professional golfer [<a href="http://www.shark.com/">Greg Norman</a>] complained, so IBM discontinued the use of the codename back in 2005.)
Where is Infiniband?
Like SAS, Infiniband had limited distance, about 10 to 15 meters, which proved unusable for server-to-storage network connections across data center floorspace. However, there are now 150 meter optical cables available, and you will find Infiniband used in server-to-server communications and inside storage systems. IBM SONAS uses Infiniband today internally. IBM DCS9900 offers Infiniband host-attachment for HPC customers.
We need midrange storage for our mainframe please?
In addition to the IBM System Storage DS8000 series, the IBM SAN Volume Controller and IBM XIV are able to connect to Linux on System z mainframes.
We need "Do's and Don'ts" on which software to run with which hardware.
IBM [Redbooks] are a good source for that, and we prioritize our efforts based on all those cards and letters you send the IBM Redbooks team.
The new TPC v4 reporting tool requires a bit of a learning curve.
The new reporting tool, based on Eclipse's Business Intelligence Reporting Tool [BIRT], is now standardized across most of the Tivoli portfolio. Check out the [Tivoli Common Reporting] community page for assistance.
An unfortunate side-effect of using server virtualization like VMware is that it worsens management and backup issues. We now have many guests on each blade server.
IBM is the leading reseller of VMware, and understands that VMware adds a layer of complexity. Thankfully, IBM Tivoli Storage Manager backups use a lightweight agent. IBM [System Director VMcontrol] can help you manage a variety of hypervisor environments.
This was a great interactive session. I am glad everyone stayed late Thursday evening to participate in this discussion.
Bill Bauman, IBM System x Field Technical Support Specialist and System x University celebrity, presented the differences between Grid, SOA and Cloud Computing. I thought this was an odd combination to compare and contrast, but his presentation was well attended.
Grid - this is when two or more independently owned and managed computers are brought together to solve a problem. Some research facilities do this. IBM helped four hospitals connect their computers together into a grid to help analyze breast cancer. IBM also supports the [World Community Grid] which allows your personal computer to be connected to the grid and help process calculations.
SOA - SOA, which stands for Service Oriented Architecture, is an approach to building business applications as a combination of loosely-coupled black-box components orchestrated to deliver a well-defined level of service by linking together business processes. I often explain SOA as the business version of Web 2.0. You can download a free copy of the eBook "SOA for Dummies" at the [IBM Smart SOA] landing page.
Cloud - A Cloud is a dynamic, scalable, expandable, and completely contractible architecture. It may consist of multiple, disparate, on-premise and off-premise hardware and virtualized platforms hosting legacy, fully installed, stateless, or virtualized instances of operating systems and application workloads.
Tom Vezina, IBM Advanced Technical Sales Specialist, presented "Chaos to Cloud Computing". Survey results show that roughly 70 percent of cloud spend will be for private clouds, and 30 percent for public, hybrid or community clouds. Of the key motivations for public cloud, 77 percent of respondents cited reducing costs, 72 percent time to value, and 50 percent improving reliability.
Tom ran over 500 "server utilization" studies for x86 deployments during the past eight years. Of these, the worst was 0.52 percent CPU utilization, the best was 13.4 percent, and the average was 6.8 percent. When IBM mentions that 85 percent of server capacity is idle, it is mostly due to x86 servers. At these rates, it seems easy to put five to 20 guest images onto a machine. However, many companies encounter "VM stall", getting stuck after virtualizing only 25 percent of their operating system images.
He feels the problem is that most Physical-to-Virtual (P2V) migrations are manual efforts. There are tools available, like Novell [PlateSpin Recon], to help automate and reduce the total number of hours spent per migration.
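Tom's utilization figures imply the consolidation arithmetic directly. A minimal sketch (the 60 percent target utilization is my assumption, not a number from the talk):

```python
def consolidation_ratio(avg_util_pct: float, target_util_pct: float = 60.0) -> float:
    """How many physical servers' worth of work fits on one virtualized host
    if you drive it to target_util_pct instead of today's average."""
    return target_util_pct / avg_util_pct

# At the 6.8 percent average CPU utilization Tom measured, driving a host
# to a conservative 60 percent yields roughly a 9:1 consolidation:
print(round(consolidation_ratio(6.8), 1))  # 8.8
```

Even at the best-case 13.4 percent utilization Tom observed, the same arithmetic still gives better than 4:1, which is why the five-to-20 guests-per-machine range is plausible.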
System x KVM Solutions
Boy, I walked into this one. Many of IBM's cloud offerings are based on the Linux hypervisor called Kernel-based Virtual Machine [<a href="http://www.linux-kvm.org/page/Main_Page">KVM</a>] instead of VMware or Microsoft Hyper-V. However, this session was about the "other KVM": keyboard, video and mouse switches, which thankfully, IBM has renamed to Console Managers to avoid confusion. Presenters Ben Hilmus (IBM) and Steve Hahn (Avocent) presented IBM's line of Local Console Managers (LCM) and Global Console Managers (GCM) products.
LCM are the traditional KVM switches that people are familiar with. A single keyboard, video and mouse can select among hundreds of servers to perform maintenance or check on status. GCM adds KVM-over-IP capabilities, which means that now you can access selected systems over the Ethernet from a laptop or personal computer. Both LCM and GCM allow for two-level tiering, which means that you can have an LCM in each rack, and an LCM or GCM that points to each rack, greatly increasing the number of servers that can be managed from a single pane of glass.
Many servers have a "service processor" to manage the rest of the machine. IBM RSA II, HP iLO, and Dell DRAC4 are some examples. These allow you to turn selected servers on and off. IBM BladeCenter offers a Management Module that allows the chassis to be connected to a Console Manager to select a specific blade server inside. These can also be used with VMware viewer, Virtual Network Computing (VNC), or Remote Desktop Protocol (RDP).
IBM's offerings are unique in that you can have an optical CD/DVD drive or USB external storage attached at the LCM or GCM, and make it look like the storage is attached to the selected server. This can be used to install or upgrade software, transfer log files, and so on. Another great use, and apparently the motivation for having this session in the "Federal Track", is that the USB port can be used to attach a reader for a smart card, known as a Common Access Card [CAC], used by various government agencies. This provides two-factor authentication [TFA]. For example, to log into the system, you enter your password (something you know) and swipe your employee badge smart card (something you have). The combination is validated at the selected server to provide access.
I find it amusing that server people limit themselves to server sessions, and storage people to storage sessions. Sometimes, you have to step "outside your comfort zone" and learn something new, something different. Open your eyes and look around a bit. You might just be surprised what you find.
(FTC note: I work for IBM. IBM considers Novell a strategic Linux partner. Novell did not provide me a copy of Platespin Recon, I have no experience using it, and I mention it only in context of the presentation made. IBM resells Avocent solutions, and we use LCM gear in the Tucson Executive Briefing Center.)
Wrapping up my coverage of the annual [2010 System Storage Technical University], I attended what was perhaps the best session of the conference. Jim Nolting, IBM Semiconductor Manufacturing Engineer, presented the new IBM zEnterprise mainframe, "A New Dimension in Computing", under the Federal track.
The zEnterprise debunks the "one processor fits all" myth. For some I/O-intensive workloads, the mainframe continues to be the most cost-effective platform. However, there are other workloads where a memory-rich Intel or AMD x86 instance might be the best fit, and yet other workloads where the high number of parallel threads of reduced instruction set computing [RISC], such as IBM's POWER7 processor, is more cost-effective. The IBM zEnterprise combines all three processor types into a single system, so that you can now run each workload on the processor that is optimized for that workload.
IBM zEnterprise z196 Central Processing Complex (CPC)
Let's start with the new mainframe z196 central processing complex (CPC). Many thought this would be called the z11, but that didn't happen. Basically, the z196 machine has a maximum of 96 cores versus the z10's 64-core maximum, and each core runs at 5.2GHz instead of the z10's 4.7GHz. It is available in air-cooled and water-cooled models. The primary operating system that runs on this is called "z/OS", which, when used with its integrated UNIX System Services subsystem, is fully UNIX-certified. The z196 server can also run z/VM, z/VSE, z/TPF and Linux on z, which is just Linux recompiled for the z/Architecture chip set. In my June 2008 post [Yes, Jon, there is a mainframe that can help replace 1500 servers], I mentioned the z10 mainframe had a top speed of nearly 30,000 MIPS (Million Instructions Per Second). The new z196 machine can do 50,000 MIPS, roughly a 60 percent increase!
The z196 runs a hypervisor called PR/SM that allows the box to be divided into dozens of logical partitions (LPAR), and the z/VM operating system can also act as a hypervisor running hundreds or thousands of guest OS images. Each core can be assigned a specialty engine "personality": GP for general processor, IFL for z/VM and Linux, zAAP for Java and XML processing, and zIIP for database, communications and remote disk mirroring. Like the z9 and z10, the z196 can attach to external disk and tape storage via ESCON, FICON or FCP protocols, and through NFS via 1GbE and 10GbE Ethernet.
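A toy illustration of the specialty-engine personalities just listed. The mapping mirrors the text; the dispatch logic itself is a simplification for illustration, not how PR/SM actually assigns work:

```python
# Workload-to-engine mapping as described above; "GP" is the default.
ENGINE_FOR = {
    "general": "GP",      # general processor
    "linux": "IFL",       # z/VM and Linux
    "java": "zAAP",       # Java and XML processing
    "xml": "zAAP",
    "database": "zIIP",   # database, communications, remote disk mirroring
    "mirroring": "zIIP",
}

def assign_engine(workload: str) -> str:
    # Unrecognized workloads fall back to a general processor.
    return ENGINE_FOR.get(workload, "GP")

print(assign_engine("java"))     # -> zAAP
print(assign_engine("payroll"))  # -> GP
```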
IBM zEnterprise BladeCenter Extension (zBX)
There is a new frame called the zBX that basically holds two IBM BladeCenter chassis, each capable of holding 14 blades, for a total of 28 blades per zBX frame. For now, only select blade servers are supported inside, but IBM plans to expand this as testing continues. The POWER-based blades can run native AIX, IBM's other UNIX operating system, and the x86-based blades can run Linux-x86 workloads, for example. Each of these blade servers can run a single OS natively, or run a hypervisor to host multiple guest OS images. IBM plans to look into running other POWER and x86-based operating systems in the future.
If you are already familiar with IBM's BladeCenter, then you can skip this paragraph. Basically, you have a chassis that holds 14 blades connected to a "mid-plane". On the back of the chassis, you have hot-swappable modules that snap into the other side of the mid-plane. There are modules for FCP, FCoE and Ethernet connectivity, which allows blades to talk to each other, as well as external storage. BladeCenter Management modules serve as both the service processor as well as the keyboard, video and mouse Local Console Manager (LCM). All of the IBM storage options available to IBM BladeCenter apply to zBX as well.
Besides general purpose blades, IBM will offer "accelerator" blades that will offload work from the z196. For example, let's say an OLAP-style query is issued via SQL to DB2 on z/OS. In the process of parsing the complicated query, it creates a Materialized Query Table (MQT) to temporarily hold some data. This MQT contains just the columnar data required, which can then be transferred to a set of blade servers known as the Smart Analytics Optimizer (SAO), then processes the request and sends the results back. The Smart Analytics Optimizer comes in various sizes, from small (7 blades) to extra large (56 blades, 28 in each of two zBX frames). A 14-blade configuration can hold about 1TB of compressed DB2 data in memory for processing.
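The MQT offload idea can be sketched in miniature: project out only the columns the query needs, ship them to the accelerator, and aggregate there. The table contents and column names below are invented for illustration; a real Smart Analytics Optimizer works on compressed DB2 data, not Python dictionaries:

```python
# Hypothetical base table rows, including columns the query doesn't need.
rows = [
    {"region": "East", "product": "A", "revenue": 120, "notes": "..."},
    {"region": "West", "product": "A", "revenue": 80,  "notes": "..."},
    {"region": "East", "product": "B", "revenue": 50,  "notes": "..."},
]

# Build a "materialized query table" holding just the columns required.
mqt = [(r["region"], r["revenue"]) for r in rows]

# Accelerator side: aggregate revenue by region and return the result.
totals = {}
for region, revenue in mqt:
    totals[region] = totals.get(region, 0) + revenue

print(totals)  # -> {'East': 170, 'West': 80}
```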
IBM zEnterprise Unified Resource Manager
You can have up to eight z196 machines and up to four zBX frames connected together into a monstrously large system. There are two internal networks. The inter-ensemble data network (IEDN) is a 10GbE network that connects all the OS images together, and can be further subdivided into separate virtual LANs (VLANs). The inter-node management network (INMN) is a 1000BASE-T Ethernet that connects all the host servers together to be managed from a single pane of glass known as the Unified Resource Manager, which is based on IBM Systems Director.
By integrating service management, the Unified Resource Manager can handle Operations, Energy Management, Hypervisor Management, Virtual Server Lifecycle Management, Platform Performance Management, and Network Management, all from one place.
IBM Rational Developer for System z Unit Test (RDz)
But what about developers and testers, such as those Independent Software Vendors (ISVs) that produce mainframe software? How can IBM make their lives easier?
Phil Smith on z/Journal provides a history of [IBM Mainframe Emulation]. Back in 2007, three emulation options were in use in various shops:
Open Mainframe, from Platform Solutions, Inc. (PSI)
FLEX-ES, from Fundamental Software, Inc.
Hercules, which is an open source package
None of these is a viable option today. Nobody wanted to pay IBM for its intellectual property on the z/Architecture or license the use of the z/OS operating system. To fill the void, IBM put out an officially-supported emulation environment called IBM System z Professional Development Tool (zPDT), available to IBM employees, IBM Business Partners and ISVs that register through IBM PartnerWorld. To help out developers and testers who work at clients that run mainframes, IBM now offers IBM Rational Developer for System z Unit Test, a modified version of zPDT that can run on an x86-based laptop or shared IBM System x server. Based on the open source [Eclipse IDE], RDz emulates GP, IFL, zAAP and zIIP engines on a Linux-x86 base. A four-core x86 server can emulate a 3-engine mainframe.
With RDz, a developer can write code, compile and unit test all without consuming any mainframe MIPS. The interface is similar to Rational Application Developer (RAD), and so similar skills, tools and interfaces used to write Java, C/C++ and Fortran code can also be used for JCL, CICS, IMS, COBOL and PL/I on the mainframe. An IBM study ["Benchmarking IDE Efficiency"] found that developers using RDz were 30 percent more productive than using native z/OS ISPF. (I mention the use of RAD in my post [Three Things to do on the IBM Cloud]).
What does this all mean for the IT industry? First, the zEnterprise is perfectly positioned for [three-tier architecture] applications. A typical example could be a client-facing web-server on x86, talking to business logic running on POWER7, which in turn talks to database on z/OS in the z196 mainframe. Second, the zEnterprise is well-positioned for government agencies looking to modernize their operations and significantly reduce costs, corporations looking to consolidate data centers, and service providers looking to deploy public cloud offerings. Third, IBM storage is a great fit for the zEnterprise, with the IBM DS8000 series, XIV, SONAS and Information Archive accessible from both z196 and zBX servers.
I have arrived safely in San Francisco, and was able to check in at the hotel, pick up my registration badge for Oracle OpenWorld 2011, and attend the first keynote session. This is the largest Oracle OpenWorld event to date, with over 45,000 attendees from 117 different countries. There are 520,000 square feet of exhibition floor, and over 2,400 educational sessions. The conference is spread across the different buildings of the Moscone Center, as well as nearby hotels. On average, attendees will walk seven miles during the week.
Larry Ellison was the keynote speaker for this first kick-off session. He focused almost exclusively on server and storage hardware. He feels that business is all about moving data, not doing integer math.
At the beginning of 2011, Oracle had only sold about 1,000 Exadata, but they have a sales target to sell an additional 3,000 Exadata boxes by year end.
The Exadata offers up to 10x columnar compression, and has 10x faster bandwidth (40Gbps Infiniband versus 4Gbps FCP). If you have a 100TB database, it would take up only 10TB of disk with this approach. He claims that the 90TB of disk you don't have to buy can then be used to buy more DRAM and/or Flash SSD.
(Realistically, since SSD is 15x more expensive than spinning disk, you can only purchase about 6TB of Flash for the 90TB you save on disk!)
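Working through the arithmetic behind that parenthetical:

```python
# Larry's numbers: 10x columnar compression on a 100TB database,
# with flash priced at roughly 15x spinning disk.
db_tb = 100
compressed_tb = db_tb / 10          # 10 TB actually stored on disk
saved_tb = db_tb - compressed_tb    # 90 TB of disk you don't have to buy

flash_premium = 15                  # flash ~15x the cost per TB of disk
flash_affordable_tb = saved_tb / flash_premium

print(compressed_tb, saved_tb, flash_affordable_tb)  # -> 10.0 90.0 6.0
```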
Larry claims the design point for Exadata and Exalogic was to offer a system that was more powerful than IBM's fastest P795 computer, but cheaper than commodity x86 hardware. His secret is to "Parallel everything" for faster performance, and no single points of failure (SPOF). Exadata offers up to 10-50x faster query, and 4-10x faster OLTP. To keep costs low, Exadata uses all commodity hardware except the Infiniband. He cited various customer examples:
A company replaced 36 Teradata systems with 3 Exadata systems; the application ran 8x faster
Banco Chile: 9x faster than the previous system
Deutsche Post: 60x faster
Sogeti: 60x faster backups
French bank BNP Paribas: 17x faster, with no change to applications
Procter & Gamble: 18x faster
Merck: 5x faster
Turkcell: 250TB compressed to 25TB, 10x faster
The problem was that in each example, the comparison was against the customer's old system, which varies and could have been an older Sun system, or an old system from HP, IBM or Dell. Perhaps it was a Freudian slip, but Larry mistakenly said to "Paralyze" your applications, when he probably meant "Parallelize".
Of Oracle's 380,000 customers, 70 percent run SPARC/Solaris and/or Linux. Last week, Oracle announced the new SPARC-T4, which Larry claimed was 5x faster than the previous SPARC-T3. Larry feels that for the first time ever, a non-IBM CPU can challenge the long-standing reign of the IBM POWER series processors. Larry admitted that the IBM POWER7 chip actually does some tasks faster than the SPARC-T4, so his work is not yet done, but they plan to offer a new SPARC-T5 next year that will be 2x better than the SPARC-T4.
Larry compared the I/O bandwidth of servers based on the SPARC-T4 to POWER7-based servers, and found that the SPARC-T4 has double the I/O bandwidth, at about 1/4 the cost of a mainframe. IBM offers both: POWER7-based servers for CPU-intensive workloads, and System z (S/390)-based systems for I/O-intensive workloads. Larry feels that even though POWER7 is superior to the SPARC-T4 for mathematical calculations, business applications are focused on I/O bandwidth to move data, not computation.
Larry claims the new SPARC-T4 can do 1.2 million IOPS. He uses 40 Gbps Infiniband instead of traditional SAN-attached FCP solutions.
A new "box" called Exalytics combines their commodity hardware platform with a heuristic adaptive in-memory cache, their latest "me-too" solution comparable to what IBM already offers in [IBM SolidDB]. In fact, this me-too is not even internally developed, but rather the result of acquiring a company called "TimesTen". I thought it was interesting that the only piece of Oracle software mentioned during Larry's 90-minute speech was this piece of acquired technology. The new Exalytics product can start in a small rack and grow, analyzing relational data, non-relational OLAP, as well as unstructured documents. The result is what Larry called "the Speed of Light".
He also mentioned that Bob Shimp would kick off the Cloud discussion later in the week. Larry has gone from dismissing Cloud as a stupid, over-marketed term that nobody had deployed, to a complete believer, claiming that over 20 live demos will be given on Cloud this year.
Perhaps the funniest quote was his motivation to use Infiniband as the interconnect
"Ethernet was invented by Xerox when I was a child."
-- Larry Ellison
Here are some sessions that IBM is featuring on Monday. Note the first two are Solution Spotlight sessions at the IBM Booth #1111 where I will be most of the time.
IBM Cloud Computing Solutions for Oracle
10/03/11, 10:30 a.m. – 11:00 a.m., Solution Spotlight, Booth #1111 Moscone South
Presenter: Chuck Calio, Technical Strategist, IBM Systems & Technology Group
IBM is recognized in the IT industry as one of the "Big 6" cloud providers, along with Amazon, Google, Microsoft, Salesforce and Yahoo. This session will highlight how IBM Cloud offerings apply to Oracle applications.
Lowering Cost and Increasing Efficiency in Your Long-Term Support of Oracle EPM and BI
10/03/11, 3:00 p.m. -- 3:30 p.m., Solution Spotlight, Booth #1111 Moscone South
Presenter: Matthew Angelstad, IBM Global Business Solutions - Oracle EPM (Hyperion) Practice Lead
In 2007, Oracle acquired Hyperion, a leading provider of performance management software. This session will show how IBM helps Oracle clients unify Enterprise Performance Management (EPM) and Business Intelligence (BI) in a cost-effective manner, supporting a broad range of strategic, financial and operational management processes.
Application Strategy: Charting the Course for Maximum Business Value
10/03/11, 3:30 p.m. – 4:30 p.m., OpenWorld session #39061
Presenter: Mike Marchildon, IBM
The industry is undergoing a shift from single Enterprise Resource Planning (ERP) applications to second-generation platforms containing diverse yet interdependent systems. This shift presents opportunities and challenges for both IT and the business.
Monday morning of the [Oracle OpenWorld 2011] conference had Joe Tucci, CEO of EMC, present the keynote. Joe indicated that I.T. stands for "Industry in Transition". He had a chart that showed the history of IT, from the mainframe and mini-computer, to the PC and client/server era, and now to the Cloud era. He called these "waves of disruption". The catalysts for change are a "Budget Dilemma", "Information Deluge" and "Cyber Security". The keynote was very similar to what EMC presented at the [VMworld] conference earlier this summer.
"We have failed our customers. Over the past 10 years, they spend 73% to maintain their existing systems, and only 27% for new."
--- Joe Tucci, EMC
While many people equate "EMC" and "Failure", I believe Joe was referring not just to his own company, but to most other IT vendors as well. Analysts predict that from January 1, 2010 to December 31, 2019, the world of stored data will grow from 0.8 ZB to 35.2 ZB, which represents a 44x increase. During that same time, IT staff is only expected to grow 50 percent. A staggering 90 percent of this data will be unstructured (non-database) content. Meanwhile, the average company gets cyber-attacked 300 times per week.
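For what it's worth, the 44x figure implies a starting point of 0.8 ZB. Worked out, along with what it means per staffer:

```python
# Analyst projection: data grows 44x over the decade while IT staff
# grows only 1.5x, so data managed per staffer rises nearly 30x.
data_growth = 35.2 / 0.8   # ZB at end of decade / ZB at start
staff_growth = 1.5         # 50 percent staff growth
per_staffer = data_growth / staff_growth

print(round(data_growth), round(per_staffer, 1))  # -> 44 29.3
```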
The answer is Cloud Computing. A few years ago, EMC was trying to get people to go the "private cloud" route instead of "public cloud"; they now have a more realistic "hybrid cloud" approach similar to IBM's. Of the clients EMC works with, 35 percent are implementing some form of cloud, and another 30 percent are planning to. The tenets of Hybrid Cloud are "Efficiency", "Control" and "Choice", which together equal "Agility".
Joe also mentioned that there is now a new "layering" for IT. Instead of storage, switches and servers, we have a cloud platform of shared resources, mobile devices like smartphones and tablets, and management.
Joe feels there is a massive opportunity where Cloud meets Big Data. A cute video showed a driver wearing a motorcycle helmet so you can't see his face get into an under-powered car with "VNXe" on the license plate. He punches in "Cloud and Big Data" into the GPS navigation system, and starts out on city streets. Then the car transforms to an under-utilized family sedan "VNX" on a highway in the middle of the desert, then transforms to an over-priced sports car labeled "VMAX" as it climbs into the mountains surrounded by fog. The video borrowed the "CARS" theme from the videos IBM developed for its 2008 launch of "Information Infrastructure" initiative.
EMC's Pat Gelsinger (CTO) and fellow blogger Chad Sakac did some demos of VMware vCenter. They called VMware vSphere "the Datacenter-wide OS", indicating that EMC storage has 75 points of integration with their "partner" (VMware is majority-owned by EMC, so I am not sure if partner is the right term). If you don't count Itanium, SPARC, POWER and IBM System z architectures, VMware enjoys over 80 percent market share for server virtualization.
(Full disclosure: IBM is the leading reseller of VMware.)
Pat claims that 40 percent of Oracle Apps at EMC run on VMware. For the longest time, Oracle refused to support its apps on VMware, but it relaxed this restrictive policy back in 2009. Today, nearly 25 percent of Oracle Apps run virtualized. EMC claims that it can support 5 million VMs on a single VMAX, and can generate 1 million IOPS from a single VMware ESX host.
Chad did a demo of vFabric, which allows a vCenter plug-in to spin up database instances of OracleDB, MySQL, Hadoop, PostgreSQL, and GreenPlum (GreenPlum is EMC's version of open-source PostgreSQL).
Chad showed that VMware vMotion could move workloads from servers without solid-state storage to servers that are flash-enabled. Lightweight workloads can be moved from DAS-enabled servers to compute-enabled storage devices like EMC's Isilon. (EMC acquired Isilon to offer a me-too version of IBM's Scale-Out NAS [SONAS] product.) EMC announced their first "Solid-State on a PCIe card" from their Project Lightning initiative. These are 320 GB capacity, so they sound like a me-too version of the [Fusion-io IOdrive] cards that IBM has had available for quite some time now.
Next, Pat and Chad talked about Big Data. The world is transforming from a manual scale-up model to an automated scale-out architecture. Moving from "islands" to "pools". They used a cute example of Car Insurance. Business Analytics were able to review a safe drivers record, including the driver's Facebook and Twitter activity, and give him a discount, and then review the bad driving habits of another driver, and raise the bad driver's rates.
EMC announced their "GreenPlum Analytics Platform" (GAP?). I often tell people that if you want to predict what EMC will announce next, just look at what IBM announced 18 months ago. This new platform sounds like their me-too version of IBM's [Smart Analytics System].
After EMC, Judith Sim from Oracle introduced Ed Lee, the Mayor of San Francisco, which was just named the "Greenest city in North America". He thanked the audience for contributing an estimated $100 million USD to his local economy. Also, he was happy that by eliminating paper-based handouts and conference materials, the audience saved 1,636 trees.
Mark Hurd, formerly CEO of HP, and now president of Oracle, gave some highlights of 2011, and what Oracle's strategy is going forward. He said that Oracle plans to provide complete stacks, complete choice, and have each component of the stack be best-of-breed. In 2011, Oracle introduced the new MySQL 5.5 database, Java 7 programming language, and the Solaris 11 operating system with ZFS file system. Oracle spent $4 Billion in R&D, and gained 20 percent growth in software licenses, which gave them 33 percent growth fiscally for 2011 year. Oracle acquired Larry Ellison's [Pillar Data] storage company. Oracle also launched a [Database Appliance].
Thomas Kurian, another Oracle executive, finished the keynote session. He started with yet another chart showing the historical transition from Mainframe to Tablet. He indicated that leading-edge OracleDB and their Fusion middleware combined with industry standard hardware provides 5-30x faster queries, 4-10x less disk space, and simplifies the data center footprint. Their Exadata provides what he likes to call "Hierarchical Storage Management" between DRAM, Flash Solid-State, and spinning disk.
(Note: I started my career at IBM in 1986 working on a product called DFHSM, the Data Facility Hierarchical Storage Manager! It is now a vibrant component of DFSMS, part of IBM's z/OS mainframe operating system.)
Finally, Oracle announced their "Exadata Storage Expansion Rack". Many people realized that the Exadata was under-provisioned for storage, which explains why they have only sold a few thousand of them, so perhaps this new announcement is to address that deficiency.
If you are attending Oracle OpenWorld, here are sessions for Tuesday that IBM is featuring. Note the first two are Solution Spotlight sessions at the IBM Booth #1111 where I will be most of the time.
Securing Heterogeneous Database Infrastructures: A Comprehensive Approach
10/04/11, 9:45 a.m. -- 10:15 a.m., Solution Spotlight, Booth #1111 Moscone South
Presenter: Al Cooley, Director, IBM InfoSphere Guardium
IBM Business Analytics for Oracle Solutions
10/04/11, 2:15 p.m. -- 2:45 p.m., Solution Spotlight, Booth #1111 Moscone South
Presenter: John Strazdins, ERP Strategy Executive
Consolidated Global View of Your Customer with One Global Billing System
10/04/11, 3:30 p.m. -- 4:30 p.m., OpenWorld session #23650
Presenter: John Waterman, IBM
Enterprise billing system technologies are emerging to assist with global customer views and other challenges banks struggle with today. In this session, Citi discusses its challenges and successes in implementing a global billing system.
Upgrading Your Siebel CRM with Reduced Risk and Lowered Cost: Customer Successes
10/04/11, 3:30 p.m. -- 4:30 p.m., OpenWorld session #18222
Presenters: Arnaud Wingelaar, IBM; Geetha Sundaram; Agnes Zhang, Oracle
Hear customer success stories about upgrading Siebel CRM. Learn best practices on upgrading with lowered cost, or achieving a high-availability upgrade with zero downtime and reduced risk.
Tuesday morning at the [Oracle OpenWorld 2011] conference started with another keynote session. This time, Michael Dell, founder, chairman and CEO of Dell, Inc., presented. Over the past nine years, he feels "the line between business and IT is going away." Michael claims that "Dell is no longer a PC company", and instead is focusing on data center solutions and services to be more like IBM.
John Fowler, Executive VP for Oracle Hardware, claims that Oracle has a single team for hardware development. The SPARC-T4 is their newest chip, with 8 cores and 64 dynamic threads, running at 3.0 GHz. It has on-chip 10GbE Ethernet, PCIe, DDR3 memory controllers and crypto features. For storage, Oracle now offers three different offerings:
Exadata (as Database storage)
ZFS Storage Array (NAS)
Pillar Axiom (block-level I/O)
Edward Screven, Chief Corporate Architect at Oracle, indicated that the new Oracle Linux kernel allows for zero-downtime patches, meaning that you can update the OS while applications are running, without a reboot. OracleVM (based on open-source Xen) supports both x86 and SPARC-based server hosts. On x86, it can run Linux, Solaris and Windows guests. On SPARC, it can run Linux and Solaris guests.
Juan Loaiza, Oracle Senior VP, explained the Exadata. It has 168 disk drives and 56 PCIe flash cards, connected via 40Gbps Infiniband. The Exadata keeps all data on spinning disk, with "warm data" cached on flash, and "hot data" cached in DRAM. This is similar to IBM's Easy Tier feature on the DS8000, SVC and Storwize V7000.
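A toy model of that temperature-based tiering: all data lives on disk, warm blocks are served from flash, hot blocks from DRAM. The access-count thresholds below are arbitrary illustrations, not Exadata's or Easy Tier's actual placement policies:

```python
from collections import Counter

access_counts = Counter()

def tier_for(block: str) -> str:
    """Return which tier serves this access, promoting blocks as
    they are touched more often (hypothetical thresholds)."""
    access_counts[block] += 1
    hits = access_counts[block]
    if hits >= 5:
        return "DRAM"    # hot data
    if hits >= 2:
        return "flash"   # warm data
    return "disk"        # cold data stays on spinning disk

for _ in range(4):
    tier_for("block-7")
print(tier_for("block-7"))  # 5th access -> DRAM
print(tier_for("block-9"))  # first access -> disk
```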
Brad Cameron, Senior Director, explained Exalogic, which pre-dates Oracle's acquisition of Sun Microsystems. The idea was to build an x86 machine for running Java applications on Oracle WebLogic. The Exalogic can connect via Infiniband to an Exadata to access database information, and to 10GbE ethernet for the rest of the servers and clients. Whether you get the quarter, half or full-rack system, you get 40TB of NAS storage.
Ganesh Ramamurthy, Oracle VP of Hardware Engineering, presented the SPARC Supercluster. This combines the storage cells from Exadata, the compute nodes from Exalogic, shared NAS storage using the ZFS file system, and Solaris 11 with OracleVM. Taking a cue from IBM's zEnterprise Unified Resource Manager, Oracle is offering centralized management for all the layers in their SPARC Supercluster stack. The SPARC Supercluster is intended as a general-purpose machine, and can be used to run non-Oracle applications like SAP. From a storage perspective, he claims that the storage in the SPARC Supercluster is 2.5x better than EMC VMAX, which basically puts it comparable to IBM XIV pricing.
For my readers in San Francisco attending Oracle OpenWorld, here are some sessions that IBM is featuring on Wednesday. Note the first two are Solution Spotlight sessions at the IBM Booth #1111 where I will be most of the time.
Data Management Best Practices for Oracle Applications
Oracle RAC and Cloud: Tips from IBM Global Business Services
10/05/11, 10:00 a.m. -- 11:00 a.m., OpenWorld session #15733
Presenters: David Simpson, IBM; Nalin Sahoo, Oracle
In this session, gain valuable insight into high-availability systems leveraging Oracle Database 11g Release 2 and Oracle Real Application Clusters (Oracle RAC). Hear best practices and lessons learned with these Oracle technologies as well as how IBM utilizes cloud infrastructure with Oracle Clusterware and server pools.
In the Heat of the Oracle Fusion Decision-Making Process: What's Your Next Move?
10/05/11, 10:00 a.m. -- 11:00 a.m., OpenWorld session #9423
Presenter: Esther Parker, IBM
This session discusses how companies can embrace Oracle Fusion so they can meet their business objectives today and in the future.
Wednesday morning at the [Oracle OpenWorld 2011] conference started with another keynote session. This time, Safra Catz, CFO and President of Oracle, introduced John Chambers, CEO of Cisco.
John says Cisco is helping to "empower the customer through market transitions." This includes helping customers decide how to deploy new technology, choosing between integrated stacks and interoperable components, scaling the business with a flat IT budget, and how/when to decide on moving to the cloud.
(FTC Disclosure: IBM resells Cisco switches and directors and are considered a partner in this sense. If you are going to buy Cisco switches and directors, please consider buying them through IBM.)
The information economy is transitioning to a networked one. Access to information is not as important as access to expertise. Processes and procedures are not as important as communities and relationships. The old-style command-and-control management is giving way to collaboration. He showed a chart depicting the evolution from routed/bridged networks to packet/mobile and video, and another showing the evolution from Mainframe/Mini-computers, to Client/Server and Web, to Virtualization in the Cloud. He also indicated that Google's acquisition of Motorola was indicative of the "Death of the PC".
High Tech companies must re-invent themselves to stay relevant. Here were Cisco's five "Foundational Priorities":
Leadership in the Core. This refers to his core business of high-end Ethernet and Fibre Channel directors.
Collaboration. The original promise of networking computers together was to bring people together as well. He feels that "Collaboration" will take off in the 2010s.
Data Center/Virtualization/Cloud. Cisco is now in the business of selling computers. They are now #2 in North America for x86 server sales, and #3 globally. In this regard, they are a direct competitor to both IBM and Oracle at this conference. John wants to create "borderless" networks between private and public clouds. He claims that they now have 8,228 Cisco UCS customers gained over the past 18 months. This was a slam at Oracle, which hasn't sold half that many new systems in the same time period.
Video. John indicated that every product in the Cisco family is video-enabled, from the Cius tablet, to WebEx, to TelePresence, to all of his switches and directors. In theory, the "Flip" video cam that Cisco dropped in their latest round of layoffs would also have counted in that category. John envisions video taking over as the predominant communication mechanism. Back in 2006, at Oracle OpenWorld, John showed a chart indicating that people would transition from passive TV-watchers to active video producers. Here we are five years later, and while 24 hours' worth of video are uploaded to YouTube every minute, most people are still TV-watchers.
Architectures for Business Transformation. He elaborated on this to refer to issues like reliability, security, and products that are designed to work together. Business and Government leaders are focused on their business, not technology.
He gave a demo of Cisco UCS. This is a 4U collection of server blades, with up to 384GB of DRAM using 8GB DIMMs, or 192GB using much-cheaper 4GB DIMMs. There are two switches, each with eight 10GbE ports, for a total of 160 Gbps, that can carry both Ethernet and FCoE traffic. The UCS System Manager is similar to IBM's Unified Resource Manager in that it manages the entire box. A "service profile" has 40 to 50 BIOS settings that can be applied to give each x86 blade a specific personality. You can re-provision these by changing their service profile as needed.
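A back-of-envelope check on the UCS figures quoted: 384GB built from 8GB DIMMs implies 48 DIMM slots, and the switch arithmetic does work out to 160 Gbps:

```python
# Checking the quoted UCS numbers (arithmetic only, not a spec sheet).
dimm_slots = 384 // 8               # 384GB of 8GB DIMMs -> 48 slots
cheaper_config_gb = dimm_slots * 4  # same slots filled with 4GB DIMMs
switch_bandwidth_gbps = 2 * 8 * 10  # 2 switches x 8 ports x 10GbE

print(dimm_slots, cheaper_config_gb, switch_bandwidth_gbps)  # -> 48 192 160
```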
The next demo was really cool. They took video that involved people talking, and had it "machine transcribed" so that you can read the words being said in the video. Type in a word like "tolerances" in the search engine, and the video advances exactly to the spot where that word is uttered.
The next demo after that involved a special camera for monitoring High-Occupancy Vehicle (HOV) lanes in traffic. In an example used in London, UK, the camera can see inside the car and confirm there are enough people to justify HOV usage, and if not, scan the license plate and charge the owner of the vehicle a fine. (In a sense, "Big Data" analytics combined with Cisco's vision of ubiquitous video equals [Big Brother])
In another slam against Oracle, John actually backed up his claims with published benchmarks. He wrapped up his talk with: "If I have done my job well, then you will all leave this room a bit uncomfortable." Not surprisingly, John didn't mention either the vBlock relationship with EMC, or the FlexPod relationship with NetApp.
Wednesday afternoon at the [Oracle OpenWorld 2011] conference started with another keynote session.
In a last-minute substitution, Oracle OpenWorld rescheduled Salesforce.com CEO Marc Benioff's keynote from Wednesday to a Thursday 8:00 a.m. general session, to make room for S. D. Shibulal, CEO of InfoSys Consulting.
(Forbes Magazine considers InfoSys the #15 [most innovative company]. To give this some context, Salesforce.com and Amazon.com are #1 and #2, Google and Apple are in the top 10, and Oracle is #77.)
S.D. started out saying that "Today is October 4, a very important day in history!" This was October 6, so everybody was a bit confused, checking their watches and tablets to confirm what day it was. He was referring to the first trans-Pacific flight, which happened on October 4 exactly 80 years earlier; the pilots were awarded medals and accolades for this tremendous achievement. This year, trans-Pacific flights happen every day, and nobody raises an eyebrow. I looked this up: the first trans-Pacific flight happened [June 9, 1928] from California to Australia, with stops in Hawaii and Fiji, but [Clyde Edward Pangborn] is remembered for the flight that departed Japan on October 4, 1931, the first non-stop trans-Pacific crossing, landing in Wenatchee, Washington. His point, however, was that innovation has a lot of "firsts" that people don't notice until things are commonplace.
If you look at the 1991 list of Fortune 500 companies, only 25 percent are still in operation today (IBM is one of them!) The rest failed to stay relevant, or to reach and scale as needed for market transitions. He gave examples of travel agencies and the Encyclopaedia Britannica that failed to adapt in the face of [disintermediation]. Success in today's marketplace requires three things:
Predicting and sensing tomorrow's demand
Influencing tomorrow's demand
Fulfilling tomorrow's demand
Ming Tsai, Managing Director and Chief Client Officer of InfoSys, asked a series of questions:
"Is Market Research dead?" In 2010, over 4 EB of data were generated. Marketers do not need to conduct surveys to generate more data; they are drowning in the data all around them. 80 percent of profits come from 20 percent of your clients.
"Who controls the message?" This was perhaps a tip of the hat to Salesforce.com CEO Marc Benioff who, after being dropped from the official schedule, was able to organize his own last-minute keynote at the St. Regis hotel using Twitter and other social media. I was not there, but apparently the place was packed, with a line around the building to hear Marc talk.
"Is Product Development backwards?" This refers to the standard waterfall approach of designing a product and then shipping it, with the first interaction with the end customer being the last stage. In new "agile" development models, customers are engaged up front, with highly iterative deployments, to ensure that the final product meets customer requirements.
He showed a product called "Social Edge" from InfoSys that performs "sentiment analysis". Users can co-create their own user experience.
Paul Gottsegen, InfoSys, explained how "Mobility" is challenging existing business models. Their latest product, "mConnect", links web applications to any mobile device. This includes healthcare monitoring, banking services for the 50 percent of the world that is "unbanked", and even cable TV on an iPad. He brought on stage Bill Tucker, VP of IT at Nordstrom, a retailer of fine clothing.
(I have a collection of Nordstrom jackets that I have bought in San Francisco Union Square over the years. Every time I fly from Tucson to San Francisco, especially in the summer, it is freezing cold, and I need to buy a jacket. This time I was prepared, and brought several of my jackets with me.)
Bill explained that people are not comparing their end-user experience at Nordstrom with that of direct competitors like Macy's, but rather with all of their other end-user experiences, like those at Starbucks or the Apple store. This raises the bar in customer expectations. Nordstrom has been forced to make drastic improvements to keep up with these expectations.
Prasad Thirkutam, InfoSys, asked if supply chains are agile and adaptive enough. He mentioned that 40 percent of flash memory and 60 percent of circuit boards are made in Japan, which was recently hit by an earthquake and tsunami. He explained the "Demand-to-Deliver" solution from InfoSys, which provides multi-level inventory management, identifying safety stock levels based on various analytics. This reduces waste by 7 percent, and shortens the cash cycle from 60 days to 40 days.
Prasad introduced Vin Melvin, CIO of Arrow Electronics. His focus is to get data "correct", to increase end-to-end speed in handling order changes and cancellations, and to optimize and re-balance the supply chain as needed.
(FTC Disclosure: Arrow is a distributor of IBM equipment. I have worked with Arrow many years.)
The last keynote session of the [Oracle OpenWorld 2011] conference was Oracle making a few major announcements.
Steve Miranda, Senior VP for Oracle Applications, explained the new "Fusion 11g Apps" which are now generally available. Basically, they took all the scattered applications they have from acquisitions of PeopleSoft, JD Edwards, Siebel and so on, and re-wrote them to industry-standard Java so that they would all run either on-premise or in the Cloud. The Enterprise Apps come in seven categories: Financials like General Ledger and Payroll; Human Capital Management (HCM) formerly known as Human Resources; Supply Chain Management (SCM); Customer Relationship Management (CRM); Governance Risk and Compliance (GRC); Procurement; and Project/Portfolio Management (PPM). Oracle also has "Industry Apps" for specific verticals.
All of these apps have "embedded BI" (business intelligence), such as dashboards, multi-dimensional calculations, decision support, and real-time optimization. This is intended to help the end-user answer four questions:
What do you need to do today?
How do you get it done?
What do you need to know today?
Who can help you?
Larry Ellison, Oracle CEO, said that it took six years to rewrite all the Fusion Apps. They used an "agile" development model with over 200 early adopters to ensure that these applications were successful. They were under a "controlled release program", but now that is over, and the applications are generally available. Larry indicated that these applications were developed under the concepts of Service Oriented Architecture [SOA], which neither Salesforce.com nor SAP R3 has.
(This made me chuckle. SOA was initially developed by IBM and Microsoft, but is now an industry standard. There is no reason today to develop software that isn't SOA-based.)
Following the IBM model, Oracle has built the security into the OS, database and middleware layers, rather than into each application. As IBM has understood for several decades, a secure infrastructure is the way to ensure that all applications are secure.
With all these Fusion Apps now re-written to run on industry-standard Java (J2EE, actually), either on-premise or out on the Cloud, Larry Ellison said "I guess we need a Cloud!" This started his announcement of the "Oracle Public Cloud" [OPC]. OPC has both PaaS and SaaS. The PaaS offers VM instances with support for database and Java services. The SaaS offers all the Fusion Apps rented on the "as-a-service" model. Rather than force everyone to Oracle 11g, you can run any Oracle database on OPC, and any Java or J2EE application as well.
Your data is portable. Larry is pro-choice, and wants people to be able to move from any cloud to any cloud. Since it's based on industry-standard Java, applications can move seamlessly between OPC, Amazon EC2 and IBM SmartCloud. IBM has been a major force behind [Open Cloud Standards], so it is always good when other major vendors follow suit.
He quoted [someone as saying "Beware of False Clouds"]. This was Salesforce.com CEO Marc Benioff's attack against all "Private Cloud" IT vendors. Larry twisted this to say he agrees: "True Clouds" are based on open industry standards, and "False Clouds" are vendor lock-in. OPC is based on Java, J2EE, XML, BPEL and Ruby on Rails, whereas Salesforce.com is based on the proprietary Heroku and APEX. He called Salesforce.com the "Roach Motel of Cloud Computing" ... you can check in, but you can't check out.
OPC plans to offer some "data sources", including a Dun & Bradstreet news feed, Twitter, Facebook and other social networks. It is based on a monthly subscription using a self-service portal. The resources are elastic, with capacity delivered on demand. He claims that Salesforce.com is rate-limited, and cancels long-running jobs if they consume too many resources. Larry said OPC would never do that.
Larry said that there are private-only offerings like SAP R3, and public-only offerings like Salesforce.com, Workday, and Taleo, but Oracle instead has adopted the IBM model of supporting choice between private, public and hybrid clouds.
Larry then attacked "Multi-tenancy", specifically, the idea that SaaS providers often use a single database instance, but then create a column to identify which records belong to which tenants. He said this was state-of-the-art 15 years ago, but is a bad idea now. Too risky. Instead, Larry's OPC has unique database instances for each tenant through virtualization.
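To make the distinction concrete, here is a minimal sketch of the two multi-tenancy styles Larry contrasted, using SQLite purely for illustration (OPC obviously does not run on SQLite, and the table and tenant names are invented):

```python
import sqlite3

# Shared-schema multi-tenancy (the older approach Larry criticized):
# every tenant's rows live in one table, tagged by a tenant_id column.
shared = sqlite3.connect(":memory:")
shared.execute("CREATE TABLE orders (tenant_id TEXT, item TEXT, qty INTEGER)")
shared.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                   [("acme", "widget", 5), ("globex", "gadget", 2)])

# Every query MUST filter on tenant_id; forgetting the WHERE clause
# leaks one tenant's data to another -- the risk Larry alluded to.
rows = shared.execute(
    "SELECT item, qty FROM orders WHERE tenant_id = ?", ("acme",)).fetchall()
print(rows)  # [('widget', 5)]

# Isolated-instance multi-tenancy (the OPC approach): each tenant gets
# its own database instance, so isolation is structural rather than
# depending on query discipline.
tenants = {name: sqlite3.connect(":memory:") for name in ("acme", "globex")}
for name, db in tenants.items():
    db.execute("CREATE TABLE orders (item TEXT, qty INTEGER)")
tenants["acme"].execute("INSERT INTO orders VALUES ('widget', 5)")
print(tenants["acme"].execute("SELECT item, qty FROM orders").fetchall())
```

The trade-off is that the shared-schema approach packs tenants densely onto hardware, while per-instance isolation costs more resources, which virtualization helps recover.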
Larry also announced the Oracle Social Network (OSN). This is a corporate version of Facebook that supports collaboration and file-sharing, similar to IBM [LotusLive], Google Docs, or Microsoft Office 365. All of the Fusion Apps are written to interface directly with the OSN or any of these other social networks through APIs. This includes navigation and integrated social networking. He also indicated that all Fusion Apps run on mobile devices. He showed the SAP R3 GUI, and said it reminded him of "the fins on a 1968 Cadillac!"
Larry said that other CRM SaaS focus on helping sales managers track their employees, but Oracle's CRM helps sellers sell more.
He then gave an example of a mythical sales manager Bob, and his sales employee Julian, selling two Exadata boxes for $4.8 Million USD. A "safe harbor" statement was shown at the beginning of this keynote, to make sure nobody asks to buy Exadata boxes this cheap.
This post concludes my series on the Oracle OpenWorld 2011 conference. Here are some pictures from Wednesday and Thursday.
IBM as the yardstick against which everyone measures themselves
Our friends at Violin Memory mentioned our joint success with IBM GPFS, scanning 10 billion files in less than an hour. (Their booth must have been slow, because members of their team spent a lot of time in our IBM booth!)
In fact, it seemed every company compared themselves to IBM in one fashion or another. Larry said that "IBM is a great company" and mentioned the IBM systems several times in comparisons to Oracle's newly announced hardware offerings.
Larry's Sailing Vessel
When things slowed down, I took a walk to see the other parts of the exhibition area. In the Moscone West building was Larry's trimaran that won [last year's America's Cup].
I used to sail myself, and have been part of crews in sailing races in both Japan and Australia. A few years ago, I watched the America's Cup time trials in New Zealand.
On the Streets of San Francisco
On the streets, IBM had advertised some of its products in a manner that thousands of attendees would see every day. Here we have some factoids related to IBM Netezza and DB2 database on POWER servers. We were very careful not to mention either product in the IBM Booth itself, as we all understand that IBM is a guest in Oracle's house this week. We certainly don't want to do anything to upset Larry in any way to make him treat IBM like he treated HP last year, or Salesforce.com this year.
Rest in Peace, Steve Jobs, 1955-2011
On Wednesday evening at Oracle OpenWorld, we were tearing down the booth when we heard that Apple co-founder Steve Jobs had passed away. This is truly a loss for the entire IT industry. I never met Steve in person, nor have I been to any Apple conferences like MacWorld that he spoke at.
At various keynote sessions, Larry Ellison compared his Oracle products to those of Apple, Inc., suggesting that Oracle is the "Apple for the Enterprise".
On our way back to the Hilton hotel on O'Farrell, there was a candlelight vigil at the Apple Store near Union Square. People left sticky notes on the glass window.
There were a lot of tributes to Steve Jobs, but I liked this 15-minute video of his 2005 Commencement Speech at Stanford University titled [How to Live before you Die].
This will be one of those moments where years later, many people will remember exactly where they were, and what they were doing, when they heard the news. For many, that news came as tweets or text messages on the very iPhones and iPads he helped design.
Rock Concert - Wednesday night
On Wednesday evening, I joined thousands of other attendees on Treasure Island to hear and watch Sting, Tom Petty and the Heartbreakers, and the English Beat in concert. It was cold and dark, but we all had a good time. Needless to say, I didn't make it to Marc Benioff's 8:00am Thursday morning session!
A word of advice: If you go to an evening rock concert at Treasure Island, dress warmly!
Despite the sad news about Steve Jobs, I had a great time at this conference. I learned a lot about what other IT vendors are doing, talked to dozens of IBM clients at the booth, and got to make some new friends that work in other parts of IBM.
(FTC Disclosure: I work for IBM. IBM and Apple are technology partners. I proudly own an Apple iPod, several Mac Mini computers and shares of stock in both IBM and Apple, Inc.)
I took over a hundred pictures at this event. Here are a few of my favorites from Monday and Tuesday.
The IBM Booth #1111 Moscone South
I spent most of my time at the booth in the exhibition area. It was a huge booth, covering various software offerings in the front, and servers and storage systems in the back. Here I am next to the "IBM Watson" simulator, which lets people play a Jeopardy! game against Watson.
In the front was "EoS" which stands for "Exchanging Opinions for Solutions" -- an interactive screen developed by Somnio that allows people to enter questions and opinions and get crowd-sourced answers from people following the Twitter stream. The EoS was connected to the [IBM Mobile App] so people could follow the conversation.
IBM Customer Appreciation Events
On Monday evening we had some customer appreciation events. First was for IBM customers of "JD Edwards", which runs on the "IBM i" operating system on POWER servers. This was an elegant affair at the [Weinstein Gallery], surrounded by works of art by Pablo Picasso and Marc Chagall. One customer expressed concern that Oracle would functionally stabilize JD Edwards "World" software and force everyone to move over to "Enterprise One". I told him that I had seen the roadmap for "World" and there are three healthy releases planned for its future. He should have nothing to worry about. IBM and Oracle will work together to make sure our mutual customers get the solutions they need.
Later, we went to the "Infusion" bar for another "IBM appreciation" event with a live band. Here's a Polaroid photo taken of me in the crowd.
Titan Gala Award Reception
On Tuesday night, Oracle gave out awards in 29 categories. IBM won three this year. I took a photo with the ladies from Beach Blanket Babylon, and a mermaid! Joining me to celebrate the awards were IBMers Carolann Kohler, Boyd Fenton, Sue Haad, and Susan Adomovich.
This was my first time attending Oracle OpenWorld, so I naively asked why there were only 29 categories and not an even 30. The IBMers joked that the 30th might as well have been "Best Server/Storage Platform for Integer Math", which Larry Ellison conceded that IBM's POWER 795 server wins over Oracle's new SPARC T4 Supercluster. As Larry said during his keynote, "We still have some work to do to beat IBM!"
The event was held at San Francisco City Hall; I got to walk the red carpet, and there was lavish food and drink. I was even given a hand-rolled cigar! Thank you, Oracle! We are proud to be your "Diamond Partner", helping our mutual customers get the most out of our solutions.
The "Booth Babes" Controversy
At the EMC booth, these three lovely ladies, Jennifer, Tamara and Manuela, were just a few of the dozen so-called booth babes EMC hired from a local agency. Attendees with technical questions were directed to the EMC guys in the back of the booth, behind the wall.
IBM stopped using "booth babes" a long while ago. At IBM Booth #1111, we had a healthy balance of real men and women executives, technical experts, and support staff at the IBM booth.
A guy from EMC came over to our booth later to explain that EMC is at two other events this same week, and their technical staff is spread thin. EMC is a small company, and skilled technical people are in short supply. We get it. Not every IT vendor has an army of experts in every category like IBM.
I want to thank the IBM-Oracle Alliance team, especially Nancy Spurry and Carolann Kohler for having me involved in these events.
Well, I had a pleasant vacation. I took a trip up to beautiful Lake Powell in Northern Arizona as part of a "Murder Mystery Dinner" weekend. This trip was organized by AAA and Lake Powell in association with the professionals at [Murder Ink Productions] out of Phoenix.
The trip involved two busloads of people from Tucson and Phoenix driving up to Lake Powell, with a series of meals that introduced all the characters and gave out clues to solve a murder. At the end of the dinner on the last evening, we had to guess whodunnit, how, and why. I solved it, and got this lovely tee-shirt.
More importantly, the trip gave me a chance to read
[The Numerati] by Stephen Baker. The author explains all the different ways that "analysts" are able to crunch through large volumes of data to gain insight. He has chapters on how this is done for shoppers in retail sales, voters in upcoming elections, patients in medical care, and even matchmaking services like chemistry.com. Like the Murder Mystery Dinner, there are too many suspects and too many clues, but these number-crunchers, whom Mr. Baker calls The Numerati, are able to figure it all out through advanced business analytics.
FTC Notice: I recommend this book. I did not receive any compensation to mention it on this blog, I did not receive a free review copy, and I do not know the author. Everyone on my staff is reading this book; I borrowed a copy from a co-worker.
If you don't understand how this all works, here is a quick 6-minute [video] on YouTube.
Michael Scott, one of my "Second Life" builder/scripters, for demonstrating client-focused dedication to IBM's corporate values.
Our site manager, Terri Mitchell, did a recap of all our recent awards and accomplishments. Of the nine Design Innovation awards won by IBM this year at the CeBIT conference, eight were for IBM System Storage products!
The IBM System Storage EXP3000: an entry-level data storage server that is optimized for cost-sensitive and space-limited environments and employs a user-centered design that enables ease of use and simple tool-less installation and removal of all components.
The IBM System Storage N7000 Series: a modular disk storage system that delivers high-end enterprise storage and data management value ideal for large-scale applications, while helping to anticipate growth, maintain data availability and reduce costs.
The IBM System Storage N5000 Series: a modular disk storage system designed to address the entire spectrum of data availability challenges while offering value in price and scalability. Built-in enterprise serviceability and manageability features support efforts to increase reliability and simplify storage infrastructure and maintenance.
The IBM System Storage N3700: a filer that integrates storage and storage processing into a single unit, facilitating affordable network deployments.
The IBM System Storage DS4700: a NEBS-compliant disk storage server designed to address requirements for companies in the telecommunications industry, as well as other segments such as oil and gas. It meets standards for electromagnetic compatibility, thermal robustness, and earthquake and office vibration resistance, and protects product components from airborne contaminants.
The IBM System Storage EXP810: a data storage expansion unit capable of 4.8 Terabytes of physical storage, with a user-centered and tool-less design featuring redundant power, cooling, and disk modules for ease of use and simple serviceability.
The IBM System Storage TS3400: an affordable, space-friendly tape library for users in remote locations that supports enterprise-class technology and encryption capabilities.
A representative from Tucson's Brewster Center presented Terri an award, thanking IBM for its strong support for the community through various charity initiatives.
The final speaker was a new IBM client, Tony Casella, the IT Director of the town of Marana. The town of Marana's recent selection of IBM products made big news. Arizona is the fastest growing state in the USA, and the town of Marana, just north of Tucson, is one of the fastest growing communities in Arizona. The town is growing so large that it will soon spill over from Pima into Pinal county, and will be the first town in Arizona authorized to span county boundaries.
Well, it's Tuesday, and so it is "announcement day" again! Actually, for me it is Wednesday morning here in Mumbai, India, but since I was "press embargoed" until 4pm EDT on these enhancements, I had to wait until Wednesday morning here to talk about them.
World's Fastest 1TB tape drive
IBM announced its new enterprise [TS1130 tape drive] and corresponding [TS3500 tape library support]. This one has a funny back-story. Last week, while we were preparing the press release, we debated whether we should compare the 1TB-per-cartridge capacity as double that of Sun's Enterprise T10000 (500GB), or to LTO-4 (800GB). The problem changed when Sun announced on Monday that they too had a 1TB tape drive, so instead of saying we had the "World's First 1TB tape drive", we quickly changed this to the "World's Fastest 1TB tape drive" instead. At 160MB/sec top speed, IBM's TS1130 is 33 percent faster than Sun's latest announcement. Sun was rather vague about when they will actually ship their new units, so IBM may still end up being first to deliver as well.
While EMC and other disk-only vendors have stopped claiming that "tape is dead", these recent announcements from IBM and Sun indicate that tape is indeed alive and well. IBM is able to borrow technologies from disk, such as the Giant Magneto-Resistive (GMR) head, for its tape offerings, which means much of the R&D for disk applies to tape, keeping both forms of storage well invested. Tape continues to be the "greenest" storage option, more energy efficient than disk, optical, film, microfiche and even paper.
On the LTO front, IBM enhanced the reporting capabilities of its [TS3310] midrange tape library. This includes identifying the resource utilization of the drives, reporting on media integrity, and improved diagnostics to support library-managed encryption.
IBM System Storage DR550
As a blended disk-and-tape solution, the [IBM System Storage DR550] easily replaces the EMC Centera to meet compliance storage requirements. IBM announced that it has greatly expanded the DR550's scalability: it now supports 1TB disk drives, and can attach to either IBM or Sun 1TB tape drives.
Massive Array of Idle Disks (MAID)
IBM now offers a "Sleep Mode" in the firmware of the [IBM System Storage DCS9550], a capability often called "Massive Array of Idle Disks" (MAID) or spin-down. This can reduce the amount of power consumed during idle times.
That's a lot of exciting stuff. I'm off to breakfast now.
The "Storage Symposium Mexico - 2008" conference was a great success this week!
Day 1 - The plan was for me to arrive for the Wednesday night reception. Each attendee was given a copy of my latest book, [Inside System Storage: Volume I], and I was planning to sign them. I thought perhaps we should have a "book signing" table like all of the other published authors have.
Things didn't go according to plan. Thunderstorms at the Mexico City airport forced our pilot to find an alternate airport. Nearby Acapulco airport was the logical choice, but it was full from all the other diverted flights, so the plane ended up in a tiny town called McAllen, Texas. I did not arrive until the morning of Day 2, so I ended up signing the books throughout Thursday and Friday, during breaks and meals, wherever they could find me!
Special thanks to fellow IBMer Ian Henderson, who picked me up from the airport at such an awkward hour and drove me all the way to Cuernavaca!
All of us, IBMers, Business Partners and clients alike, donned black tee-shirts with a white eightbar logo for a group photo with one of those "wide lens" cameras. While we were being assembled onto the bleachers, I took this quick snapshot of myself and some of the guys behind me.
I was originally scheduled to speak first, but with my flight delays, I was moved to a time slot after lunch. After a big Mexican lunch, the conference coordinators were afraid the attendees might fall asleep, a Mexican tradition called [siesta], so I was instructed to WAKE THEM UP! Fortunately, my topic was Information Lifecycle Management, a topic I am very passionate about, since my days working on DFSMS on the mainframe. With 30 percent reduction in hardware capital expenditures, 30 percent reduction in operational costs, and typical payback periods between 15 and 24 months, the presentation got everyone's attention.
Of course, a lot happens outside of the formal meetings. We had a Japanese-theme dinner, where we wore Japanese Hachimaki [headbands] with the eightbar logo. For those not familiar with Japanese culture, hachimaki are worn today not so much for the practical purpose of catching perspiration, but rather for mental stimulation, to express one's determination. Some students wear hachimaki when they study to put themselves in the right spirit and frame of mind.
Shown here are presenters Mike Griese (Infrastructure Management with IBM TotalStorage Productivity Center), Dave Larimer (Backup and Storage Management with IBM Tivoli Storage Manager), myself, and John Hamano (Unified Storage with IBM System Storage N series).
Day 3 - Wrapping up the week, I presented two more times.
First, I covered IBM Disk Virtualization with IBM SAN Volume Controller. One interesting question was whether the SAN Volume Controller could be made to look like a Virtual Tape Library. I explained that this was never part of the original design, but that if you want to combine SVC with a VTL into a blended disk-and-tape solution, consider using the IBM product called Scale-Out File Services [SoFS], which I covered in my post [More details about IBM clustered scalable NAS].
During one of the breaks, I took a picture of the behind-the-scenes staff that put this together. They had created these huge blocks representing puzzle pieces, emphasizing how IBM is one of the few IT vendors that can bring all the pieces together for a complete solution.
Shown here are Mike Griese (presenter), Cyntia Martinez, Claudia Aviles, Cesar Campos (IBM Business Unit Executive for System Storage in Mexico), and Claudia Lopez. Each day the staff wore matching shirts so that it was easy to find them.
Later, I covered Archive and Compliance Solutions to highlight our complete end-to-end set of solutions. When asked to compare and contrast the architectures of the IBM System Storage DR550 with EMC Centera, I explained that the DR550 optimizes the use of online disk access for the most recent data. For example, if you are going to keep data for 10 years, maybe you keep the most recent 12 months on disk, and the rest is moved, using policy-based automation, to a tape library for the remaining nine years. This means that the disk inside the DR550 is always being used to read and write the most recent data, the data you are most likely to retrieve from an archive system. Data older than a year is still accessible, but might take a minute or two for the tape library robot to fetch.

The EMC Centera, on the other hand, is a disk-only solution. It offers no option to move older data to tape, nor the option to spin down the drives to conserve power. It fills up after the same 12 months or so, and then you get to watch it for the remaining nine years, consuming electricity and heating your data center.
I don't know about you, but I have never seen anyone purposely put "space heaters" into their data center, yet that is about all a full EMC Centera does. Both devices use SATA drives and support disk mirroring between locations, but the IBM DR550 offers dual-parity RAID-6, and supports encryption of the data on both the disk and the tape in the DR550. EMC Centera still uses only RAID-5, and has not yet, as far as I know, offered any level of encryption. The IBM System Storage DR550 was clocked at about three times faster than Centera at ingesting new archive objects over a 1GbE Ethernet connection.
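The policy-based tiering described above boils down to a simple placement rule. Here is a minimal sketch; the window sizes, function name, and return values are my own illustration, not the DR550's actual policy engine:

```python
from datetime import date, timedelta

DISK_WINDOW = timedelta(days=365)       # most recent year stays on disk
RETENTION = timedelta(days=365 * 10)    # 10-year compliance retention

def placement(archived_on, today):
    """Decide where an archive object should live under the policy."""
    age = today - archived_on
    if age >= RETENTION:
        return "expired"   # retention met: eligible for deletion
    if age <= DISK_WINDOW:
        return "disk"      # recent data: fast online access
    return "tape"          # older data: robot fetch, a minute or two

today = date(2008, 7, 1)
print(placement(date(2008, 3, 1), today))   # disk
print(placement(date(2005, 3, 1), today))   # tape
print(placement(date(1998, 3, 1), today))   # expired
```

Running such a rule on a schedule is what lets the disk tier stay reserved for the data you are most likely to retrieve.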
This last photo is of me and fellow IBMer Adriana Mondragón. She was one of my students in the [System Storage Portfolio Top Gun class] last February in Guadalajara, Mexico. She graduated in the top 10 percent of her group, earning her the prestigious title of "Top Gun" storage sales specialist.
The conference wrapped up with a Mexican lunch with a traditional Mariachi band. I took pictures, but figured you all already know what [Mariachi players] look like, and I didn't want to detract from the otherwise serious tone of this blog post! This was the first System Storage Symposium in Mexico, but based on its success, we might continue these annually.
Did you miss your chance to attend Storage Networking World last week? IBM has some upcoming conferences that might be of interest to you.
IBM Systems Conference 2009
In this inaugural event, IBM executives, developers and industry experts will reveal the latest innovations, trends and directions. Over three full days, you will see demonstrations of the technologies needed to transform and respond effectively in these economic times.
There will be three tracks:
IBM Systems -- Including storage, mainframe, POWER and x86 systems
Solutions for a Dynamic Infrastructure
Professional Development -- including negotiation skills, project management and TCO analysis
IBM System Storage and Storage Networking Symposium
If the above conference is too broad, we have a more storage-specific conference. The [IBM System Storage and Storage Networking Symposium] brings IBM storage developers, architects, technical experts, solution providers and customer speakers together in one place to show you how to address the growing challenge of managing and securing retention-managed data. You'll also learn about the latest IBM System Storage™ portfolio product announcements.
I have spoken at these conferences in perhaps 12 of the last 14 years. The list of presenters has not yet been finalized, so I do not yet know if I will actually be there this year.
Two exciting things are new this year. First, instead of San Diego or Las Vegas, it will be held in Chicago, Illinois! Secondly, you get a two-for-one with the [IBM System x and BladeCenter Technical Conference]. That's right, they are co-located in Chicago so that you can attend sessions from both! Perhaps you spend 80 percent of your time on storage and 20 percent on x86 servers, or 80 percent on servers and 20 percent on storage; now you can register for one price, and decide when you get there.
If you act soon, you can save money with the early-registration discount, available through May 31.
Hopefully, this will give you enough time to plan and make travel arrangements!
In addition to keynote speakers Curtis Tearte, General Manager for IBM System Storage, and Clod Barrera, Chief Technical Strategist, my colleague Jack Arnold and I from the [IBM Tucson Executive Briefing Center] will each present four topics.
We have a lot to cover, so I will do the quick recap today, and then go in-depth on subsequent posts.
IBM FlashSystem 840 and V840
The FlashSystem now offers a high-voltage 1300W power supply. There are two supplies, providing redundancy; if one fails, or you are doing maintenance on it, the other handles the full workload. With the original power supplies, the system slowed down clock speeds to reduce electrical demand when running on a single supply. The new power supplies can handle full performance.
Also, the Graphical User Interface (GUI) now holds 300 days of performance data with pan-and-zoom capability. Five predefined graphs show key performance metrics, with additional user-defined metrics available for visualization.
The new v7.4 level of microcode combines features from v7.2.7 and v7.3 into a single code base.
In previous 3-site mirroring implementations, you had A-to-B-to-C cascading. Metro Mirror would get the data from A-to-B, then Global Mirror would copy B-to-C. Multiple Target Peer-to-Peer Remote Copy (PPRC) feature number 7025 allows you to have two separate paths of the data: A-to-B and separately A-to-C. Some folks refer to this as a "star" configuration.
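The topology difference can be sketched in a few lines of illustrative (non-IBM) Python: in a cascade, site C only receives what B forwards, while in the star configuration A feeds both targets directly.

```python
# Illustrative model only (not IBM code): cascaded vs. multi-target
# replication topologies for three sites A, B, and C.

def replicate(topology: dict, source: str, payload: str) -> dict:
    """Push a write from the source through each replication link."""
    copies = {source: payload}
    # Walk links breadth-first so cascaded hops (A->B, then B->C) resolve.
    pending = [source]
    while pending:
        site = pending.pop(0)
        for target in topology.get(site, []):
            copies[target] = copies[site]
            pending.append(target)
    return copies

cascaded = {"A": ["B"], "B": ["C"]}   # Metro Mirror A->B, Global Mirror B->C
star     = {"A": ["B", "C"]}          # Multiple Target PPRC: A->B and A->C

# Both topologies ultimately land the data at all three sites; the star
# removes the dependency on site B being available to feed site C.
assert replicate(cascaded, "A", "write-1") == replicate(star, "A", "write-1")
```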
For System z mainframe clients, the new v7.4 introduces new zHyperWrite for DB2 database logs, enhances zGM (XRC) write pacing, and extends Easy Tier automated-tiering API to allow z/OS applications to influence placement on different tiers of storage.
The High Performance Flash Enclosures (HPFE) that IBM introduced last May for the "A" frames are now available for "B" frames. You can have four HPFE in the A frame and another four in the B frame.
The DS8870 now offers 600 GB 15K RPM SAS drives and 1.6 TB 2.5-inch SSDs for additional capacity and cost/performance options to meet data growth demands within the same footprint. Both support data-at-rest encryption.
Lastly, we have upgraded the OpenStack Cinder driver to the latest Juno release, including features like volume replication and volume retype.
The latest SAN switch is a slim 1U-high box that can be configured with 12 or 24 ports. These are 16 Gbps ports that can auto-negotiate down to 8 Gbps, 4 Gbps and 2 Gbps. They are easy to set up, and can be managed with the IBM Network Advisor management software.
GPFS is the core technology for IBM's "Codename: Elastic Storage" initiative.
You have several options. First, you can purchase just the GPFS software itself. It runs natively on AIX, Windows and Linux, and can be extended to support other operating systems through the use of NAS protocols like NFS or CIFS. Today, the Linux support which was previously just x86 and POWER has been extended to include Linux on System z mainframes as well.
GPFS v4.1 offers "Native RAID" support, with de-clustered RAID in 8+2P and 8+3P configurations. Like the IBM XIV Storage System, this scatters the data across many drives, and can tolerate drive failures better than traditional RAID-5 configurations.
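To see why de-clustering helps, here is a toy model (not GPFS code; the 58-drive pool size is my assumption for illustration) that scatters 8+2P stripes across a large pool: any single drive failure touches only a fraction of the stripes, so rebuild work is shared by all surviving drives.

```python
# Conceptual sketch of de-clustered RAID placement (not GPFS code):
# each 8+2P stripe's ten strips land on a random subset of a much
# larger drive pool, instead of a fixed 10-drive group.
import random

def place_stripes(num_stripes: int, num_drives: int, width: int = 10, seed: int = 1):
    rng = random.Random(seed)
    return [rng.sample(range(num_drives), width) for _ in range(num_stripes)]

stripes = place_stripes(num_stripes=1000, num_drives=58)  # 8 data + 2 parity
failed = 0
affected = sum(1 for s in stripes if failed in s)
# Each stripe has only a 10/58 chance of touching the failed drive, so a
# rebuild reads small amounts from all survivors rather than hammering
# the nine partners of a traditional RAID group.
print(f"{affected} of 1000 stripes touch drive {failed}")
```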
Another option is to get a pre-configured "Converged" appliance that combines servers, storage and software. We already offer SONAS and the Storwize V7000 Unified, but IBM now offers the "GPFS Storage Server" running GPFS v4.1 with Native RAID on the new P822L Linux-on-Power servers under RHEL v7, attached to twin-tailed DCS3700 expansion drawers. Since GPFS provides the RAID, there is no need for DCS3700 controllers, saving clients substantial costs.
The IBM Storwize family includes SAN Volume Controller, Storwize V7000, Storwize V7000 Unified, Storwize V5000, Storwize V3700 and Storwize V3500.
The big announcement is that IBM now offers data-at-rest encryption for block data on internal drives in the new generation of Storwize V7000 and V7000 Unified models. There is no performance impact, and no need to purchase new SED-capable drives.
Data-at-rest encryption helps in several ways. First, it protects data if a drive is pulled out and taken away maliciously. Second, it protects data if the drive fails and you want to send it back to the manufacturer for replacement. Third, it allows you to perform a "secure erase" so that the data can be sold or re-purposed without fear of anyone reading previous data.
Initially, the encryption key management is built-in, with the keys stored on a USB memory stick physically attached to the model. In the future, IBM will extend this support to SVC, extend this support to external virtualized drives, and extend this support to IBM Security Key Lifecycle Manager (SKLM).
Other announcements include 16 Gbps adapters for SVC, Storwize V7000 and V7000 Unified. The entire Storwize family will also enjoy both 1.8TB 10K RPM 2.5-inch drives and 6TB 7,200 RPM 3.5-inch drives.
See the Announcement Letter (available later this month) for details.
New TS1150 enterprise tape drives
The anticipation is over! The new TS1150 tape drive has been announced, with 10TB raw (uncompressed) capacity on the "JD" media cartridge and 360 MB/sec throughput. The new drive is read/write compatible with TS1140 JC, JY and JK media cartridges.
For virtual tape libraries on the System z platform, IBM offers two models. Until now, the TS7740 had a small amount of disk cache front-ending a library of physical tape, while the TS7720 had a large amount of disk with no tape library.
But then the person carrying the chocolate bar bumped into the person carrying the jar of peanut butter, and the rest is history. IBM will now allow tape attachment on the TS7720, the best of both worlds: a large disk cache plus a tape library.
Tape-attached TS7720 configurations can have up to eight partitions, with different partitions having different policies. Some might move data from disk cache to tape more aggressively, while other partitions may keep data on disk for longer periods, or indefinitely if needed.
Logical tape volumes can now be up to 25GB in size.
The DCS3700 is IBM's entry-level disk system for sequential-oriented workloads. Today, IBM announced new disk drive options: 400GB 2.5-inch SSD, 800 GB 2.5-inch SSD, and 1.2TB 10K RPM 2.5-inch drive. All of these offer T10 Protection Information (PI) data integrity.
Well, it's the end of the year, so I thought a recap of year 2014 would be in order.
The year started out with some January announcements, including the IBM FlashSystem 840. IBM is proud to be ranked #1 in All-Flash Arrays, and the IBM acquisition of Texas Memory Systems has caused all of the other competitors to scramble to field their own wanna-be offerings. IBM also announced it was going to sell off its System x division to Lenovo.
In February, I wrapped up a project to build a Linux-based PC for a kindergarten class. IBM announced some exciting new things at the Pulse 2014 conference, including the IBM Bluemix Platform-as-a-Service (PaaS), new IBM SmartCloud Virtual Storage Center offerings, and the acquisition of Cloudant. Also, on Valentine's Day, IBM announced the FlashSystem V840, which combines the software-defined storage features of SAN Volume Controller with the MicroLatency of the FlashSystem 840. IBM sold its 10,000th PureSystems converged expert-integrated system.
In March, I completed a six-month film project ["A Tucson Executive Briefing Center: A Quick Visual Tour"]. I was writer/director/actor for this quick 3-minute film posted on YouTube. I wrote the script and had it reviewed by a professional script reviewer, hired a professional cinematographer, paid royalties for background music, located a voice-over expert for narration, and trained the actors (all IBM employees) how to read their lines and stand on their mark for the camera. It was a big success!
In April, I presented at the Systems Technical University in Istanbul, Turkey. I had been to Turkey before, but this was my first time to the city of Istanbul itself. The owner of my local [Savaya Coffee] is from Istanbul, and was able to introduce me to someone who was able to arrange for a full tour my first day! Meanwhile, on the other side of the pond, IBMers in New York were celebrating the 50th anniversary of the IBM mainframe, including a cameo appearance on the TV show "Mad Men".
In May, I was busy presenting at the IBM Edge conference in Las Vegas. IBM celebrated the sixth anniversary of IBM ProtecTIER data deduplication device, announced "Codename: Elastic Storage" and new features on the DS8870 disk system, and presented analyst findings that IBM Software Defined Storage was substantially less expensive than competitive offerings.
In July, I took a nice summer vacation, [a road trip across the state of Tennessee]. IBM made a strategic partnership with Apple to offer mobile apps for the data center enterprise for the iOS operating system on iPhones and iPad tablets.
In August, I completed a summer partnership with University of Toronto and IBM Softlayer to build "Concept IBM Watson", a scaled down version of IBM Watson based on my infamous 2011 blog post [How to replicate Watson hardware and systems design for your own use in your basement]. Rather than using three physical servers, however, we had virtual x86 machines running on IBM Softlayer cloud. The system was only asked the simplest "How many...?" questions against a single text document, but proved to the University that teaching analytics by replicating IBM's historic achievement was effective and possible.
In September, I celebrated my eight-year "Blogoversary". That's right, I have been blogging for the past eight years! With over 800 posts, and five published books, I continue to be ranked the #1 most-read blog on IBM developerWorks. IBM was ranked #1 for Software Defined Storage!
In October, I presented at the Systems Technical University in Dublin, Ireland. This was my first time in Ireland, and I found Dublin to be quite a beautiful city, with friendly people and delicious food.
The rest of October, and much of November and December, I spent on the road, visiting clients to help close deals! (Sorry folks... Due to SEC black-out rules, I am prohibited from telling you how well I did) Since I am not allowed to talk about on-going discussions that I have with clients, my blog has been noticeably silent during these months. I apologize for any stress or anxiety this might have caused any of my readers!
Despite too-much-candy, too-much-turkey and too-many-cookies that the year-end often brings, I managed to lose twenty pounds on a low-carb, gluten-free, Paleo diet and exercise.
This month (September, 2006) marks our 50th anniversary of the disk system. The first disk system was the 350 Disk Storage Unit, designed to attach to the IBM 305 RAMAC mainframe computer, both introduced to the world in September, 1956.
"Do you know what I do?" Mr. Mondavi recalls Mr. Gallo asked him when they first met. "Yes, you run the largest winery in the country," recalls Mr. Mondavi, then in his mid-20s. "No," Ernest corrected him. "I go out and visit customers in stores."
Robert Smith (aka Radio Voom) reports on National Public Radio that Second Life is now being used for campaigning for political candidates. It used to be that political candidates took trains and buses across the country, meeting people, discussing their issues, and getting a feel for what is going on in the hearts and minds of their potential voters. With the development of TV and Radio, candidates traveled less, hoping to get their word out to people who would listen to them. Using Second Life and other social networking tools brings candidates back to having conversations with the people they hope to represent.
Of course, many of these candidates are old, and are learning internet social networking skills for the first time. John McCain, my senator from Arizona, is running for President at 70 years old! It's true that old dogs CAN learn new tricks.
IBM is investing heavily into Second Life, as are many other forward-thinking companies, to explore the age-old human need for connectedness, community and dialog. I've asked my team to all get their avatars up and running in Second Life. Granted there is a bit of a learning curve, but everybody handles change in different ways, some better than others.
"Knowledge is the antidote to fear." -Ralph Waldo Emerson
Why are most of these guys (and girls) with over a billion US dollars in net worth still working? Perhaps because they embrace new ideas, and are on the thrill seeking side of humanity. I guess I am too. I'll be thrill-seeking in Chicago this weekend, celebrating St. Patrick's day.
The weather has warmed up here in Tucson so I started my Spring Cleaning early this year and unearthed from my garage a [Bankers Box] full of floppy diskettes.
IBM invented the floppy disk back in 1971, and continued to make improvements and enhancements through the 1980s and 1990s. It will be one of the many inventions celebrated as part of IBM's Centennial (100-year) anniversary. Here is an example [T-shirt].
IBM needed a way to send out small updates and patches for microcode of devices out in client locations. IBM had drives that could write information, and sent out "read-only" drives to the customer locations to receive these updates. These were flexible plastic circles with a magnetic coating, and placed inside a square paper sleeve. Imagine a floppy disk the size of a piece of standard paper. The 8-inch floppy fit conveniently in a manila envelope, sendable by standard mail, and could hold nearly 80KB of data.
I've been using floppies for the past thirty years. Here are some of my fondest memories:
While still in high school, my friend Franz Kurath and I formed "Pearson Kurath Systems", a software development firm. We wrote computer programs to run on UNIX and Personal Computers for small businesses here in Tucson. Whenever we developed a clever piece of code, a subroutine or procedure, we would save it on a floppy disk and re-use it for our next project. We wrote in the BASIC language, and our databases were simple Comma-Separated-Variable (CSV) flat files.
The 5.25-inch floppies we used could hold 360KB, and were flexible like the 8-inch models. Later versions of these 5.25-inch floppies would be able to hold as much as 1.2MB of data. We would convert single-sided floppies into double-sided ones by cutting out a notch in the outer sleeve. Covering up the notches would mark them as read-only.
The 3.5-inch floppies were introduced with a hard plastic shell, with the selling point that you could slap on a mailing label and postage and send one "as is" without the need for a separate envelope. These new 3.5-inch floppies held 720KB in double-density format, and "HD" high-density versions could hold 1.44MB of data. The term "diskette" was used to associate these new floppies with [hard-shelled tape cassettes]. Sliding a plastic tab would mark a floppy as "read-only". IBM has the patent on this clever invention.
Continuing our computer programming business in college, Franz and I took out a bank loan to buy our first Personal Computer, for over $5,000 USD. Until then, we had to use equipment belonging to each client. The banks we went to didn't understand why we needed a computer, and suggested we just track our expenses on traditional green-and-white ledger paper. Back then, personal computers were for balancing your checkbook, playing games and organizing your collection of cooking recipes. But for us, it was a production machine. A computer with both 5.25-inch and 3.5-inch drives could copy files from one format to another as needed. The boost in productivity paid for itself within months.
Apple launched its Macintosh computer in 1984, with a built-in 3.5-inch disk drive as standard equipment. Here is a YouTube video of an [astronaut ejecting a floppy disk] from an Apple computer in space.
In my senior year at the University of Arizona, my roommate Dave had borrowed my backpack to hold his lunch for a bike ride. He thought he had taken everything out, but forgot to remove my 3.5-inch floppy diskette containing files for my senior project. By the time he got back, the diskette was covered in banana pulp. I was able to rescue my data by cracking open the plastic outer shell, cleaning the flexible magnetic media in soapy water, placing it back into the plastic shell of a second diskette, and then copying the data off to a third diskette.
After graduating from college, Franz and I went our separate ways. I went to work for IBM, and Franz went to work for [Chiat/Day], the advertising agency famous for the 1984 Macintosh commercial. We still keep in touch through Facebook.
At IBM, I was given a 3270 terminal to do my job, and would not be assigned a personal computer until years later. Once I had a personal computer at home and at work, the floppy diskette became my "briefcase". I could download a file or document at work, take it home, work on it until the wee hours of the morning, and then come back the next morning with the updated effort.
To help prepare me for client visits and public speaking at conferences, IBM loaned me out to local schools to teach. This included teaching Computer Science 101 at Pima Community College. When asked by a student whether to use "disc" or "disk", I wrote a big letter "C" on the left side of the chalkboard, and a big letter "K" on the right side. If it is round, I told the students while pointing at the letter "C", like a CD-ROM or DVD, use "disc". If it has corners, pointing to corners of the letter "K", like a floppy diskette or hard disk drive, use "disk".
On one of my business trips to visit a client, we discovered the client had experienced a problem that we had just recently fixed. Normally, this would have meant cutting a Program Temporary Fix (PTF) to a 3480 tape cartridge at an IBM facility and sending it to the client by mail. Unwilling to wait, I offered to download the PTF onto a floppy diskette on my laptop, upload it from a PC connected to their systems, and apply it there. This involved a bit of REXX programming to deal with the differences between ASCII and EBCDIC character sets, but it worked, and a few hours later they were able to confirm the fix worked.
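For anyone curious what that character-set juggling looks like, here is a small Python sketch (standing in for my REXX code) using the cp037 EBCDIC US codec; the PTF name shown is hypothetical.

```python
# ASCII <-> EBCDIC round trip, illustrated with Python's cp037 codec.
# The PTF identifier below is made up for illustration.
ascii_text = "APPLY PTF UQ12345"
ebcdic_bytes = ascii_text.encode("cp037")   # ASCII-side string -> EBCDIC bytes
# The byte values differ completely: 'A' is 0x41 in ASCII but 0xC1 in EBCDIC,
# which is why a naive file transfer between a PC and a mainframe garbles text.
assert ebcdic_bytes[0] == 0xC1
restored = ebcdic_bytes.decode("cp037")     # EBCDIC bytes -> readable string
assert restored == ascii_text
```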
In 1998, Apple would signal the beginning of the end of the floppy disk era, announcing their latest "iMac" would not come with an internal built-in floppy drive. David Adams has a great article on this titled [The iMac and the Floppy Drive: A Conspiracy Theory]. You can get external floppy drives that connect via USB, so not having an internal drive is no longer a big deal.
While teaching a Top Gun class to a mix of software and hardware sales reps, one of the students asked what a "U" was. He had noticed "2U" and "3U" next to various products and wondered what that was referring to. The "U" represents the [standard unit of measure for height of IT equipment in standard racks]. To help them visualize, I explained that a 5.25-inch floppy disk was "3U" in size, and a 3.5-inch floppy diskette was "2U". Thus, a "U" is 1.75 inches, the thinnest dimension on a two-by-four piece of lumber. Servers that were only 1U tall would be referred to as "pizza boxes" for having similar dimensions.
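The arithmetic behind that visualization is easy to check:

```python
# One rack unit "U" is 1.75 inches, so the floppy form factors line up
# neatly with rack heights.
U = 1.75  # inches
assert 5.25 / U == 3   # a 5.25-inch floppy stood on edge is 3U tall
assert 3.5 / U == 2    # a 3.5-inch diskette is 2U
assert 1 * U == 1.75   # a 1U "pizza box" server is 1.75 inches tall
```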
Every year, right around November or so, my friends and family bring me their old computers for me to wipe clean. Either I would re-load them with the latest Ubuntu Linux so that their kids could use them for homework, or I would donate them to charity. Last November, I got a computer that could not boot from a CD-ROM, forcing me to build a bootable floppy. This gave me a chance to check out the various 1-disk and 2-disk versions of Linux and other rescue disks. I also have a 3-disk set of floppies for booting OS/2 in command line mode.
So while this unexpected box of nostalgia derailed my efforts to clean out my garage this weekend, it did inspire me to try to get some of the old files off them and onto my PC hard drive. I have already retrieved some low-res photographs, some emails I sent out, and trip reports I wrote. While floppy diskettes were notorious for being unreliable, and this box of floppies has been in the heat and cold for many Arizonan summers and winters, I am amazed that I was able to read the data off most of them so far, all the way back to data written in 1989. While the data is readable, in most cases I can't render it into useful information. This brings up a few valuable lessons:
Backups are not Archives
Some of the files are in proprietary formats, such as my backups for TurboTax software. I would need a PC running a correct level of Windows operating system, and that particular software, just to restore the data. TurboTax shipped new software every year, and I don't know how forward or backward-compatible each new release was.
Another set of floppies are labeled as being in "FDBACK" format. I have no idea what these are. Each floppy has just two files, "backup.001" and "control.001", for example.
Backups are intended solely to protect against unexpected loss from broken hardware or corrupted data. If you plan to keep data as archives for long-term retention, use archive formats that will last a long time, so that you can make sense of them later.
Operating System Compatibility
Windows 7 and all of my favorite flavors of Linux are able to recognize the standard "FAT" file system that nearly all of my floppies are written in. Sadly, I have some files that were compressed under OS/2 operating system using software called "Stacker". I may have to stand up an OS/2 machine just to check out what is actually on those floppies.
You can't judge a book by its cover
Floppies were a convenient form of data interchange. Sometimes, I reused commercially-labeled floppies to hold personal files. So, just because a floppy says "America On-Line (AOL) version 2.5 Installation", I can't just toss it away. It might actually contain something else entirely. This means I need to mount each floppy to check on its actual contents.
So what will I do with the floppies I can't read, can't write, and can't format? I think I will convert them into a [retro set of coasters], to protect my new living room furniture from hot and cold beverages.
A lot of people ask me about IBM branding, as we have recently changed brands. In the past we had two separate brands, one for servers (eServer) and one for storage (TotalStorage). These would be fine if we wanted to promote their independence, but customers today want synergy between servers and storage, they want systems that work well together.
Last year, in response to market feedback, we created a new brand, "IBM Systems", and put all the server and storage product lines under one roof. Over time, we will transition from TotalStorage to System Storage naming. This will occur with new products, and major versions of existing products.
Two other phrases you will hear in the names of our offerings are "Virtualization Engine" and "Express". These are portfolio identifiers. The Virtualization Engine identifier was created to emphasize our leadership in system virtualization, and we have products that span product lines with this identifier.
The Express identifier was created to emphasize our focus on small and medium-sized businesses (SMB). It spans not just servers and storage, but other offerings from other IBM divisions as well.
Of course, just renaming products and services isn't enough. Systems don't work together just because they have similar names, are covered in similar "Apple white" plastic, or have similar black bezels. Obviously, thoughtful and collaborative design are needed, with the appropriate amounts of engineering and testing. IBM is aligning its server and storage development so that the IBM Systems brand keeps its promise.
IBM's emphasis on "Information Infrastructure" is to help organizations get the right information, to the right people at the right time. This helps them to have the right insights, make the right decisions, and develop the right innovations needed for the challenges at hand.
As the planet got smaller and flatter, IBM led the way. Now, as the planet needs to get smarter--with more efficient health care, energy distribution, financial institutions, and IT infrastructures--IBM will once again take the lead.
In the 2004 comedy ["A Day Without a Mexican"], the director envisions how disruptive life would be in California if all the Mexicans suddenly disappeared. The point is that sometimes you take things in the background for granted.
I was reminded of this when I saw Mark Underwood's blog post [Mainframe: Still Not Crazy After All These Years]. The article reminds us how critical IBM z Systems mainframes (and related storage like the IBM DS8880 disk systems) are in our lives. Here's an excerpt:
"Warren Buffett's Berkshire Hathaway started buying up IBM stock in 2011 and bought still more of IBM later. Despite its disappointing short-term valuation, Berkshire Hathaway is standing by its IBM investment, which is one of Berkshire's top four plays. ... To make this case, some statistics may be needed:
The z13 can withstand a magnitude 8.0 earthquake.
z Systems enjoy the highest standardized security certification (FIPS 140-2, highest level 4 of 4).
23 of the world's top 25 retailers use a mainframe.
92 of the top 100 banks are mainframe users.
All 10 of the top 10 insurers have commitments in mainframe technologies.
Around 80 percent of all corporate data is managed by mainframes.
The z13 can process 2.5 billion transactions daily (that's 100 [Cyber Mondays], as IBM's Mark Anzani, VP of z Systems Strategy, Resilience and Ecosystems, observed)."
"... In fact, and notwithstanding perceptions to the contrary, the mainframe's center-stage position in large corporations around the world has not budged. That's the conclusion of an industry survey sponsored by Syncsort Inc. and conducted in 2015 by Enterprise Systems Media, a publisher of magazines for IT managers and technical professionals. Seven out of 10 respondents (IT planners, architects and managers at global enterprises with $1 billion or more in annual revenues) ranked the use of the mainframe for large-scale transaction processing as very important."
What would a comparable film depicting "A Day without a Mainframe" be like? I would imagine it somewhere between a disaster movie and an end-of-the-world zombie horror movie like [28 Days Later]. I would gladly take a million dollars to write the screenplay!
(FCC Disclosure: I work for IBM and am a filmmaker as well. Earlier in my career, I was chief architect of IBM's Data Facility Storage Management Subsystem (DFSMS) which manages around 80 percent of the world's corporate data. This blog post can be considered a "paid celebrity endorsement" for IBM's z13 System mainframes and DS8880 Disk Systems. I have personal experience with both and highly recommend them. I am neither a Mexican nor resident of California, but work regularly with both in my job responsibilities. Like Warren Buffett, I also own stock in both IBM and Berkshire Hathaway companies. I had no involvement in the making of any of the major motion pictures mentioned in this blog post, have no financial interest in their distribution, and have not been provided any compensation for mentioning them in this blog post. They are all great movies worth watching!)
What do you think the movie would be like? Enter your comments below!
With all the announcements we had in June, it is easy for some of the more subtle enhancements to get overlooked. While I was at Orlando for the IBM Edge conference, I was able to blog about some of the key featured announcements. Then, when I got back from Orlando to Tucson, I was able to blog about [More IBM Storage Announcements]. For IBM's Scale-Out Network Attach Storage (SONAS), I had simply:
"SONAS v1.3.2 adds support for management by the newly announced IBM Tivoli Storage Productivity Center v5.1 release. Also, IBM now officially supports Gateway configurations that have the storage nodes connected to XIV or Storwize V7000 disk systems. These gateway configurations offer new flexible choices and options for our ever-expanding set of clients."
In my defense, IBM numbers its software releases with version.release.modification, so 1.3.2 is Version 1, Release 3, Modification 2. Generally, modification announcements don't get much attention. The big announcement for v1.3.0 of SONAS happened last October; see my blog post [October 2011 Announcements - Part I] or the nice summary post [IBM Scale-out Network Attached Storage 1.3.0] from fellow blogger Roger Luethy.
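For the curious, the version.release.modification scheme is simple to pick apart; this illustrative snippet (the function name is mine) splits v1.3.2 into its parts:

```python
# Parse an IBM-style version.release.modification string like "1.3.2".
def parse_vrm(s: str):
    version, release, modification = (int(part) for part in s.split("."))
    return version, release, modification

v, r, m = parse_vrm("1.3.2")
assert (v, r, m) == (1, 3, 2)
# Modification-level updates (the last digit) rarely make headlines;
# the big features arrive with a new version or release number.
```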
Here is a diagram showing the three configurations of SONAS.
I have covered the SONAS Appliance model in depth in previous blogs, with options for fast and slow disk speeds, choice of RAID protection levels, a collection of enterprise-class software features provided at no additional charge, and interfaces to support a variety of third party backup and anti-virus checking software.
The basics haven't changed. The SONAS appliance consists of 2 to 32 interface nodes, 2 to 60 storage nodes, and up to 7,200 disk drives. The maximum configuration takes up 17 frames and holds 21.6PB of raw disk capacity, which is about 17PB usable space when RAID6 is configured. An interface node has one or two hex-core processors with up to 144GB of RAM, offering up to 3.5GB/sec of throughput each. This makes IBM SONAS the fastest performing and most scalable disk system in IBM's System Storage product line.
I thought I would go a bit deeper on the gateway models. These models support up to ten storage nodes, organized in pairs. The key difference is that instead of internal disk controllers, the storage nodes connect to external disk systems. There is enough space in the base SONAS rack to hold up to six interface nodes, or you can add a second rack if you need more interface nodes for increased performance.
SONAS with XIV gateway
XIV offers a clever approach to storage that allows for incredibly fast access to data on relatively slow 7200 RPM drives. By scattering data across all drives and taking advantage of parallel processing, rebuild times for a failed 3TB drive are less than 75 minutes. Compare that to typical rebuild times for 3TB drives that could take as much as 9-10 hours under active I/O loads!
In this configuration, each pair of storage nodes connects to external SAN fabric switches that then connect to one or two XIV storage systems. How simple is that? These can be the original XIV systems that support 1TB and 2TB drives, or the new XIV Gen3 systems that support 400GB solid-state drives (SSD) and 3TB spinning disk drives. In either case, you can acquire additional storage capacity in increments as small as 12 drives (one XIV module holds 12 drives).
The maximum configuration of ten XIV boxes can hold 1,800 drives. At 3TB per drive, that works out to about 2.4PB of usable capacity.
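As a back-of-the-envelope check (the module count and overhead factor here are my own working assumptions, not official figures): each XIV frame holds 15 modules of 12 drives, and XIV keeps two mirrored copies of every data partition, so usable capacity lands a bit under half of raw.

```python
# Rough capacity math for ten fully populated XIV frames (assumptions mine).
frames, modules_per_frame, drives_per_module = 10, 15, 12
drive_tb = 3

total_drives = frames * modules_per_frame * drives_per_module   # 1,800 drives
raw_pb = total_drives * drive_tb / 1000                         # 5.4 PB raw
# Mirroring halves the raw number; assume ~10% more goes to spare
# capacity and metadata, leaving roughly 2.4 PB usable.
usable_pb = raw_pb / 2 * 0.9

print(total_drives, raw_pb, round(usable_pb, 1))
```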
The SONAS with XIV gateway does not require the XIV devices to be dedicated for SONAS purposes. Rather, you can assign some XIV storage space for the SONAS, and the rest is available for other servers. In this manner, SONAS just looks like another set of Linux-based servers to the XIV storage system. This in effect gives you "Unified Storage", with a full complement of NAS protocols from the SONAS side (NFS, CIFS, FTP, HTTPS, SCP) as well as block-based protocols directly from the XIV (FCP, iSCSI).
SONAS with Storwize V7000 gateway
The other gateway offering is the SONAS with Storwize V7000. Like the SONAS with XIV gateway model, you connect a pair of SONAS storage nodes to 1 or 2 Storwize V7000 disk systems. However, you do not need a SAN Fabric switch in between. You can instead connect the SONAS storage nodes directly to the Storwize V7000 control enclosures.
To acquire additional storage capacity, you can purchase a single drive at a time. That's right. Not 12 drives, or 60 drives, at a time, but one at a time. The Storwize V7000 supports a wide range of SSD, SAS and NL-SAS drives at different sizes, speeds and capacities. The drives can be configured into various RAID protection levels: RAID 0, 1, 3, 5, 6 and 10.
Each Storwize V7000 control enclosure can have up to nine expansion drawers. If you choose the 2.5-inch 24-bay models, you can have up to 480 drives per storage node pair, for a total of 2,400 drives. If you choose the 3.5-inch 12-bay models, you can have up to 240 drives per node pair, 1,200 drives total. At 3TB per drive, this could be 3.6PB of raw capacity. The usable PB would depend on which RAID level you selected. Of course, you don't have to limit yourself all to one size or the other. Feel free to mix 2.5-inch and 3.5-inch drawers to provide different storage pool capabilities.
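The drive-count arithmetic for the 2.5-inch 24-bay case can be verified in a few lines:

```python
# One control enclosure plus nine expansion drawers per Storwize V7000,
# two V7000s per SONAS storage-node pair, five pairs maximum.
bays_per_enclosure = 24
enclosures_per_v7000 = 1 + 9                              # control + expansions
drives_per_v7000 = bays_per_enclosure * enclosures_per_v7000   # 240
drives_per_node_pair = drives_per_v7000 * 2                    # 480
total_drives = drives_per_node_pair * 5                        # 2,400

assert (drives_per_node_pair, total_drives) == (480, 2400)
```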
All three SONAS configurations support Active Cloud Engine. This is a collection of features that differentiates SONAS from the other scale-out NAS wannabes in the marketplace:
Policy-driven Data Placement -- Different files can be directed to different storage pools. You no longer have to associate certain file systems to certain storage technologies.
High-speed Scan Engine -- SONAS can scan 10 million files per minute, per node. These scans can be used to drive data migration, backups, expirations, or replications, for example. It is over 100 times faster than traditional walk-the-directory-tree approaches employed by other NAS solutions.
Policy-driven Migration -- You can migrate files from one storage pool to another, based on age, days since last reference, size, and other criteria. The files can be moved from disk to disk, or moved out of SONAS and stored on external media, such as tape or a virtual tape library. A lot of data stored on NAS systems is dormant, with little or no likelihood of being looked at again. Why waste money keeping that kind of data on expensive disk? With SONAS, moving those files to tape can save lots of money. The files are stubbed in the SONAS file system, so that an access request for a file automatically triggers a recall to fetch the data from tape back to the SONAS system.
Policy-driven Expiration -- SONAS can help you keep your system clean, by helping you decide which files should be deleted. This is especially useful for things like logs and traces that tend to just hang around until someone deletes them manually.
WAN Caching -- This allows one SONAS to act as a "Cloud Storage Gateway" for another SONAS at a remote location connected by Wide Area Network (WAN). Let's say your main data center has a large SONAS repository of files, and a small branch office has a smaller SONAS. This allows all locations to have a "Global" view of all the interconnected SONAS systems, with a high-speed user experience for local LAN-based access to the most recent and frequently used files.
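The placement, migration and expiration features above all boil down to evaluating rules against file attributes. As a rough sketch of the idea only (plain Python, not the actual SONAS policy syntax, with thresholds I made up for illustration), a migration rule might look like:

```python
import time

# Hypothetical migration rule, loosely in the spirit of policy-driven
# migration: files untouched for 90+ days, or larger than 1 GiB, become
# candidates to move from a disk pool to tape. Thresholds are invented.
DAYS_90 = 90 * 24 * 3600
ONE_GIB = 1 << 30

def migration_candidates(files, now=None):
    """files: iterable of (path, size_bytes, last_access_time) tuples."""
    now = time.time() if now is None else now
    for path, size, atime in files:
        if (now - atime) > DAYS_90 or size > ONE_GIB:
            yield path

catalog = [
    ("/fs/projects/old_report.pdf", 4_000_000, 0),          # dormant for years
    ("/fs/projects/todays_notes.txt", 2_000, time.time()),  # just accessed
]
print(list(migration_candidates(catalog)))  # -> ['/fs/projects/old_report.pdf']
```

The real system evaluates such rules during its high-speed scans, rather than walking the directory tree file by file.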
If you want to learn more, see the [IBM SONAS landing page]. Next week, I will be across the Pacific Ocean in [Taipei], to teach IBM Top Gun class to sales reps and IBM Business Partners. "Selling SONAS" will be one of the topics I will be covering!
This week, IBM celebrates its Centennial, 100 years since its incorporation on June 16, 1911.
A few months ago, the Tucson Executive Briefing Center ordered its latest IBM System Storage [DS8800] to be on display for demos. This was manufactured in Vác, Hungary (about an hour north of Budapest), and was going to be shipped over to the United States.
However, Sam Palmisano, IBM Chairman and CEO, was in Hannover, Germany for the [CeBIT conference] and wanted this DS8800 re-directed to Germany first for the event. He was kind enough to sign it for us. Brian Truskowski, IBM General Manager for Storage, and Rod Adkins, IBM Senior Vice President for the IBM Systems Technology Group (and my fifth-line manager), signed it as well!
I am pleased to say this "signed" DS8800 has arrived in Tucson. It is the latest model in a family of market-leading high-end enterprise-class disk systems designed to attach to all computers, including System z mainframes, POWER systems running AIX and IBM i, as well as servers running HP-UX, Solaris, Linux or Windows.
For more on IBM's other innovations over the past 100 years, check out the [Icons of Progress], which includes these storage innovations:
IBM has launched a new blog, focused on making [a smarter planet]. In my post, [The New Year in Six Words], I discussed the part of Sam Palmisano's speech that mentioned a small $30 Billion investment could result in 950,000 new jobs. For those who wondered how IBM arrived at that figure, here are two posts:
In keeping with the spirit of a kinder, gentler 2011, I decided last week to refrain from raining on someone else's parade immediately before, during or after a competitor's announcement or annual conference, and let EMC have their few moments in the spotlight. This of course allows me more time to learn about the announcements and reflect on marketplace reactions. Here's a quick look at the [EMC Press Release]:
A new VNXe disk system
Of the 41 new storage technologies and products EMC announced last week, the VNXe is EMC's "me-too" product to compete against other low-end disk systems like the IBM System Storage DS3524 and N3000 series. It looks truly new, developed organically from the ground up, with a new architecture and a new OS. It comes as either the 2U-high VNXe3100 or the 3U-high VNXe3300. These employ 3.5-inch SAS drives to provide Ethernet-based NFS, CIFS and iSCSI host attachment. The $10K USD price tag appears to be for the hardware only. As is typical for EMC, software features are charged for in bundles or "suites", so the actual TCO will be much higher. I have not seen any announcement on whether Dell plans to resell either the VNXe or the VNX models, now that it has acquired Compellent.
A new VNX disk system
Despite having a similar name to the VNXe, the VNX appears to be a re-hash of the Celerra/CLARiiON mess that EMC has been selling already, based on the old FLARE and DART operating systems of those older disk systems. It scales from 75 to 1,000 SAS drives. While EMC calls the VNX "unified", it is currently only available in block-only and file-only models, with a promise from EMC to offer a combined block-and-file version sometime in the future. EMC claims that the VNX will be faster than its predecessors, so hopefully that means EMC has joined the rest of the planet and will publish SPC-1 and SPC-2 benchmarks to back up that claim. They can compare against the SPC-1 benchmarks that our friends at NetApp ran against the EMC CLARiiON.
New software for the VMAX
A long time ago, EMC announced they would provide non-disruptive automated tiering. Their first delivery, "FAST V1", handled entire LUNs at a time. EMC has now finally delivered "FAST VP", which we expected would be called "FAST V2", providing sub-LUN automated tiering between Solid-state and spinning disk drives. Meanwhile, IBM has been delivering "Easy Tier" on the IBM System Storage DS8000 series, SAN Volume Controller, and Storwize V7000 disk systems.
Data Domain Archiver
Competing against IBM, HP and Oracle in the tape arena, EMC's latest addition to the Data Domain family is designed for the long-term retention of backups. Archives of backups? Backups are short-lived, protecting against the unexpected loss from hardware failure or data corruption. Keeping backups as "archives" is generally a bad mistake, as it makes it hard to e-Discover the data you need when you need it, and you may not have the appropriate hardware to restore these old backups when you do find them.
I will have to dig deeper into all of these different technologies in separate posts in the future.
It has always been the case in fast-paced technology areas that you can't tell the players without a program card, and this is especially true for storage.
When analyzing each acquisition move, you need to think about what is driving it. What are the motives? Having been in the storage business 20 years now, and having seen my share of acquisitions, both from within IBM as well as from competitors, I have come up with the following list of motives.
Although slavery was abolished in the US back in the 1800s, and centuries earlier everywhere else, many acquisitions seem to be focused on acquiring the people themselves, rather than the products or client list. I have seen statistics such as "We retained 98% of the people!" In reality, these retentions usually involve costly incentives, signing bonuses, stock options, and the like. Despite this, people leave after a few years, often because of personality or "corporate culture" clashes. For example, many former STK employees seem to be leaving after their company was acquired by Sun Microsystems.
If you can't beat them, join them. Acquisitions can often be used by one company to raise its ranking in marketshare, eliminating smaller competitors. And now that you have acquired their client list, perhaps you can sell them more of your original set of products!
Symantec had acquired Veritas, which in turn had acquired a variety of other smaller players, and the end result is that they are now the #1 backup software provider, even though none of their products holds a candle to IBM's Tivoli Storage Manager. Meanwhile, EMC acquired Avamar to try to get more into the backup/recovery game, but most analysts still place EMC down in the #4 or #5 spot in this category.
Next month, Brocade's acquisition of McData should take effect, furthering its marketshare in SAN switch equipment.
Prior to my current role as "brand market strategist" for System Storage, I was a "portfolio manager" where we tried to make sure that our storage product line investments were balanced. This was a tough job, as we had to balance the right development investments across different technologies, including patent portfolios. Despite IBM's huge research budget, I am not surprised that some clever inventions of new technologies come from smaller companies, which then get acquired once their results appear viable.
The last motive is value shift. This is where companies try to re-invent themselves, or find that they are stuck in a commodity market rut, and wish to expand into more profitable areas.
LSI Logic's acquisition of StoreAge is a good example of this. Most of the major storage vendors have already shifted to software and services to provide customer value, as predicted in the 1990s by Clayton Christensen in his book "The Innovator's Dilemma". The rest are still struggling to develop the right strategy, but are leaning in this general direction.
Shakespeare wrote "What's in a name? That which we call a rose by any other word would smell as sweet." This week my theme will be on names, naming convention, and how we access information on storage.
Take for example these two sentences:
The Bears beat New Orleans. Chicago clobbered the Saints.
Though they appear very different, football fans who might have watched either or both of the two conference title games yesterday would quickly recognize that they refer to the same two teams and the same end-result.
I'll be traveling to Asia next week. While most people call me "Tony", my legal given name is "Anthony" which is what appears on my passport and other legal documents. Most English-speaking countries handle this fine, but it can be confusing in Japan or China, where "A. Pearson" doesn't match "T. Pearson".
In the US, our given and family names are referred to as our "first name" and our "last name", relating to their positional sequence. In Asia, family names come first, followed by given names last. To help avoid confusion, we have started adopting the practice of putting the family name in ALL CAPITAL LETTERS, so I would be "Tony PEARSON" while my colleague may be "WONG Francis".
In Japanese, "Mr. JONES" would be "Jones-san". However, "Pearson-san" is such a tongue-twister that most just say "Tony-san", which is fine with me. I have been called "Mr. Tony" in a variety of countries, which is perfectly acceptable.
You can call me anything you like, just don't call me late for dinner.
As financial firms focus on costs, their IT departments will have an opportunity to consolidate their servers, networks and storage equipment. Consolidating disk and tape resources, implementing storage virtualization, and reducing energy costs might get a boost from this crisis. Consolidating disparate storage resources onto a big SoFS, XIV, or DS8000 disk system, or a TS3500 tape library, might greatly help reduce costs.
Having the mixed-vendor environments that result from such mergers and acquisitions can be complicated to manage. Thankfully, IBM TotalStorage Productivity Center manages both IBM and non-IBM equipment, based on open industry standards like SMI-S and WBEM. Merged companies might let go of IT people with only vendor-specific knowledge, but keep the ones with cross-vendor infrastructure management skills and ITIL certification.
Comparing different vendor equipment
It seems that oftentimes when there is a merger or acquisition, the two companies were using different storage gear from different vendors. IBM has made some incredible improvements over the past three years, in both performance and energy efficiency, but many companies with non-IBM equipment may not be aware of them. If there was ever a time to perform a side-by-side comparison between IBM and non-IBM equipment, here is your chance.
For more on the impact of the financial meltdown on IT, see this InfoWorld[Special Report].
Jon Toigo over at DrunkenData writes in his post[A Wink and a Nod] about thebenefits of the new IBM System z10 Enterprise Class mainframe. Here's an excerpt about storage:
"The other key point worth making about this scenario is that storage behind a z10 must conform to IBM DASD rules. That means no more BS standards wars between knuckle-draggers in the storage world who continue to mitigate the heterogeneous interoperability and manageability of distributed systems storage using proprietary lock in technologies designed as much to lock in the consumer and lock out the competition as to deliver any real value. That has got to be worth something."
For z/OS and TPF operating systems, disk must support CCW commands over ESCON or FICON connections, or NFS commands over the Local Area Network. However, most of the workloads being ported over from x86 platforms will probably be running as Linux on System z images, and Linux supports both CCW and SCSI protocols, the latter over native FCP connections through a Storage Area Network (SAN) or via iSCSI over the Local Area Network. Many SAN directors support both FCP and FICON, and the z10 also supports both 1Gbps and 10Gbps Ethernet, so you may not have to invest in any new networking gear.
The best part is that you may not have to migrate your data. The IBM System Storage SAN Volume Controller is supported for Linux on System z, and with "image mode" you can leave the data in its original format on its original disk array. Many file systems are now supported by Linux, including Windows NTFS with the latest NTFS-3G driver.
If your data is already on NAS storage, such as the IBM System Storage N series disk systems, then the IBM z10 can access it directly, from z/OS, z/VM or Linux.
Have lots of LTO tape data? Linux on System z supports LTO as well.
Jon continues his rant with a question about porting Microsoft Windows applications. Here's another excerpt:
"For one, what do we do with all the Microsoft servers. There is no Redmond-sanctioned approach to my knowledge for virtualizing Microsoft SQL Server or Exchange Server in a mainframe partition."
Yes, it is possible to run Windows on a mainframe through emulation, but I feel that's the wrong approach. Instead, the focus should be on running "functionally equivalent" programs on the native mainframe operating systems, and again Linux is often the best choice for this. Switching from Windows to Linux may not be "Redmond-sanctioned", but it gets the job done.
Instead of SQL Server, consider something functionally equivalent like IBM's DB2 Universal Database, or perhaps an open source database like MySQL, PostgreSQL or Apache Derby. Well-written applications use standard SQL calls, so if the application does not try to use unique, proprietary features of MS SQL Server, you are in good shape.
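As a toy illustration of what "sticking to standard SQL" buys you (this is my own example, not IBM sample code), an application that limits itself to plain CREATE TABLE, INSERT and SELECT can swap its backend with little change. Here Python's built-in sqlite3 module stands in for whichever database sits behind the connection:

```python
import sqlite3

# Only standard SQL below -- no vendor-specific T-SQL extensions -- so the
# same statements would run largely unchanged on DB2, MySQL or PostgreSQL.
# (One caveat: parameter placeholder style varies by driver; sqlite3 uses "?".)
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER, customer VARCHAR(30), total DECIMAL(10,2))")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, "Acme", 250.00), (2, "Globex", 99.50)])
cur.execute("SELECT customer FROM orders WHERE total > 100 ORDER BY customer")
print([row[0] for row in cur.fetchall()])  # -> ['Acme']
```

The table and customer names are made up; the point is simply that portable SQL keeps a future database migration, mainframe or otherwise, from becoming a rewrite.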
In my discussion last November on [Microsoft Exchange email server], I mentioned that Bynari makes a functionally equivalent email server on Linux that works with your existing Microsoft Outlook clients. Your end-users wouldn't know you migrated to a mainframe! (well, they might notice their email runs faster)
So if your data center has three or more racks of Sun, Dell or HP "pizza box" or "blade" x86 servers, chances are you can migrate the processing over to a shiny new IBM z10 EC mainframe, and save some money in the process, without too much impact to your existing Ethernet, SAN or storage system infrastructure. IBM can even help you dispose of the old x86 machines so that their toxic chemicals don't end up in any landfill.
When new technologies are introduced to the marketplace, it is normal for customers to be skeptical.
My sister is a mechanical engineer, so when she needs to configure a part or component, she can design it on the computer, and then use a "Rapid Prototyping Machine" that acts like a 3D printer to generate a plastic part that matches the specifications. Some machines do this by taking a hunk of plastic and cutting it down to the appropriate shape, and others use glue and powder to assemble the piece.
But not everything is that simple. Harry Beckwith deals with the issue of selling services and software features in his book "Selling the Invisible". How do you sell a service before it is performed? How do you sell a software feature based on new technology that the customer is not familiar with?
Our good friends over at NetApp, our technology partners for the IBM System Storage N series, developed a "storage savings estimator" tool that can provide good insight into the benefits of the Advanced Single Instance Storage (A-SIS) deduplication feature.
I decided to run the tool to analyze my own IBM Thinkpad C: drive (Windows operating system and programs) and D: drive ("My Documents" folder containing all my data files) to see how much storage savings the tool would estimate. Here are my results:
WINXP-C-07G (C: drive)
Total Number of Directories: 1272
Total Number of Files: 56265
Total Number of Symbolic Links: 0
Total Number of Hard Links: 41996
Total Number of 4k Blocks: 2395884
Total Number of 512b Blocks: 18944730
Total Number of Blocks: 2395884
Total Number of Hole Blocks: 290258
Total Number of Unique Blocks: 1611792
Percentage of Space Savings: 20.61
Scan Start Time: Wed Sep 5 14:37:06 2007
Scan End Time: Wed Sep 5 14:53:51 2007
WINXP-D-07H (D: drive)
Total Number of Directories: 507
Total Number of Files: 7242
Total Number of Symbolic Links: 0
Total Number of Hard Links: 11744
Total Number of 4k Blocks: 3954712
Total Number of 512b Blocks: 31610595
Total Number of Blocks: 3954712
Total Number of Hole Blocks: 3204
Total Number of Unique Blocks: 3524605
Percentage of Space Savings: 10.79
Scan Start Time: Wed Sep 5 14:21:16 2007
Scan End Time: Wed Sep 5 14:34:30 2007
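As a sanity check, the reported savings percentages are consistent with counting the duplicate blocks as (total blocks minus hole blocks minus unique blocks) over total blocks. This is my own back-of-envelope reading of the output, not NetApp's documented formula:

```python
# Inferred from the scan output above (not NetApp's documented formula):
# savings % = (total blocks - hole blocks - unique blocks) / total blocks
def savings_pct(total, holes, unique):
    return round(100.0 * (total - holes - unique) / total, 2)

print(savings_pct(2395884, 290258, 1611792))  # C: drive -> 20.61
print(savings_pct(3954712, 3204, 3524605))    # D: drive -> 10.79
```

Both computed values match the tool's reported percentages exactly, which suggests hole blocks are excluded from the duplicate count.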
I am impressed with the results, and have a better understanding of the way A-SIS works. A-SIS looks at every 4kB block of data, and creates a "fingerprint", a type of hash code of the contents. If two blocks have different "fingerprints", then the contents are known to be different. If two blocks have the same fingerprint, it is still mathematically possible for their contents to differ, so A-SIS schedules a byte-for-byte comparison to be sure they are indeed the same. This might happen hours after the block is initially written to disk, but it is a much safer implementation, and does not slow down the applications writing data.
(In an effort to provide support in "real time" as data was being written, earlier deduplication implementations had to either assume that a hash collision was a true match, or take the time to perform the byte-for-byte comparison during the write process itself. Doing this byte-for-byte comparison when the device is busiest doing write activity causes excessive, undesirable load on the CPU.)
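The fingerprint-then-verify idea can be sketched in a few lines of Python. This is an illustration of the concept only, not NetApp's implementation; SHA-256 stands in for whatever fingerprint function A-SIS actually computes:

```python
import hashlib

# Blocks with different fingerprints are definitely different; blocks with
# matching fingerprints are only *candidate* duplicates, which must be
# confirmed by a byte-for-byte comparison before discarding a copy.
BLOCK = 4096

def fingerprint(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()  # stand-in for A-SIS's hash

def deduplicate(blocks):
    seen = {}      # fingerprint -> first block seen with that fingerprint
    unique = []
    for block in blocks:
        fp = fingerprint(block)
        if fp in seen and seen[fp] == block:  # the byte-for-byte check
            continue                           # true duplicate: keep one copy
        seen[fp] = block
        unique.append(block)
    return unique

data = [b"A" * BLOCK, b"B" * BLOCK, b"A" * BLOCK]  # one duplicated block
print(len(deduplicate(data)))  # -> 2 unique blocks stored
```

In the real feature, the comparison is deferred to a scheduled scan rather than done inline on the write path, which is exactly why it avoids the CPU load described above.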
The estimator tool runs on any x86-based Laptop, personal computer or server, and can scan direct-attached, SAN-attached, or NAS-attached file systems. If you are a customer shopping around for deduplication, ask your IBM pre-sales technical support, storage sales rep, or IBM Business Partner to analyze your data. Tools like this can help make a simple cost-benefit analysis: the cost of licensing the A-SIS software feature versus the amount of storage savings.
In his blog Rough Type, Nick Carr asks "Where is my CloudBook?" and points to John Markoff's two-part series in the New York Times on computing in the clouds. (Read it here: Part 1, Part 2)
At first, I thought he meant computing while in an airplane, but instead, he is talking about computing on a laptop or other hand-held device that has no internal disk drive, no installed operating system, and no internal data storage. Instead, the idea is that you boot from a CD, and access your data, and even some of your programs, over the internet. John used an Ubuntu Linux LiveCD in his example.
This week, I am in Sao Paulo, Brazil, and was "in the clouds" for over 10 hours flying from Dallas to here. The one time I am guaranteed to be "off-line" from the internet is on the plane, and I spend enough time on planes that I am able to get work done despite being "disconnected".
The same reasons people want to get rid of the disk drive on their laptop are the reasons data centers are getting rid of internal disk on their servers:
disks crash, and typically are not protected in any RAID configuration on most laptops
operating systems get infected with viruses and malware
storage on one server is generally inaccessible to every other server
Booting from CD is especially clever. No more worrying about fixing your Windows registry, viruses, corrupted operating system files, or the cruft that accumulates on your C: drive and slows you down. The CD is the same every time, so it is like running your system with a freshly installed operating system every day.
The need for central repositories of data harkens back to the years of the IBM mainframe. Of course, what made sense back then continues to make sense now. The old 3270 terminals stored no data, and instead merely provided keyboard input and text-screen output for the vast amount of data stored on the central system. Today, the inputs are different, using your finger or mouse to point to what you want, sliding it across to make things happen, and the output may now include photos, audio and video, but the concept is still the same.
I carry my Ubuntu Linux LiveCD with me on every business trip. Combined with externally rewriteable media, such as a USB key, you can get work done even when you are on an airplane, and upload it when you are back on the net.
Back in February, my blog post [A Box Full of Floppies] mentioned that I uncovered some diskettes compressed with OS/2 Stacker. Jokingly, I suggested that I may have to stand up an OS/2 machine just to check out what is actually on those floppies. Each floppy contains only three files: README.STC, STACKER.EXE and a hidden STACKVOL.DSK file. The README.STC explains that the disk is compressed by Stacker, a program developed by [Stac Electronics, Inc.]. The STACKER.EXE would not run on Windows XP, Vista or Windows 7. The STACKVOL.DSK is just a huge binary file, like a ZIP file, compressed with the [Lempel-Ziv-Stac] algorithm that combines Lempel-Ziv with Huffman coding.
In my follow-up post [Like Sands in an Hourglass], I explained how there are many ways I could have tackled this project. I could either use the Emulation approach and try to build an OS/2 guest image under a hypervisor like VMware, KVM or VirtualBox, or just take the Museum approach and try taking one of my half dozen old machines, wipe it clean and stand up OS/2 on it bare metal. This turned out to be more challenging than I expected. The systems I have that are modern and powerful enough to run hypervisors don't have floppy drives, so I opted for the Museum approach.
(A quick [history of OS/2] might be helpful. IBM and Microsoft jointly developed OS/2 starting back in 1985. By 1990, Microsoft decided its own Windows operating system was more popular with the ladies, and decided to break off with IBM. In 1992, IBM released OS/2 version 2.0, touted as "a better DOS than DOS and a better Windows than Windows!" Both parties maintained ownership rights; Microsoft renamed its version of OS/2 to Windows NT. The "NT" stood for New Technology, the basis for all of the enterprise-class Windows servers used today. IBM branded OS/2 versions 3 and 4 as "Warp", with the last version, 4.52, released in 2001. In its heyday, OS/2 ran the majority of Automated Teller Machines (ATMs), was used for hardware management consoles (HMC), and was used worldwide to run various railway systems. After 2001, IBM encouraged people to transition from Windows or OS/2 over to Java and Linux. For those that can't or won't leave OS/2, IBM partnered with Serenity Systems to continue OS/2 under the brand [eComStation].)
Working with an IBM [ThinkCentre 8195-E2U Pentium 4 machine] with 640MB RAM, an 80GB hard disk, a CD-rom and one 3.5-inch floppy drive, I first discovered that OS/2 is limited to very small hard disk sizes. There are limits on [file systems and partition sizes] as well as the infamous [1024-cylinder limit] for bootable operating systems. Leaving the drive completely empty didn't work, as the size of the disk was too big. Carving out one big partition also failed, as it exceeded the various limits. Each time, OS/2 acted as though the partition table were corrupted, because the values were so huge. Even modern disk-partitioning tools ([SysRescueCD] or [PartedMagic]) didn't work, as these create partitions not recognizable to OS/2.
The next obstacle, I knew, would be device drivers. OS/2 comes as a set of three floppy diskettes and a CD-rom. The bootable installation disk was referred to affectionately as "Disk 0", followed by Disk 1 and Disk 2. Once all drivers have been loaded into memory, the installer can then start reading the CD-rom and continue with the installation. In searching for updated drivers, I came across [Updated OS/2 Warp 4 Installation Diskettes] to address problems with newer display monitors. It also addresses the 8.4GB volume limit.
The updates were in the form of EXE files that only execute in a running DOS or OS/2 environment, expanding their contents onto a floppy diskette. It seemed like a [Catch-22]: I needed a working DOS or OS/2 system to run the update programs that create the diskettes, but I needed the diskettes to build a working system.
To get around this, I decided to take a "scaffolding" approach. Using a DOS 6 bootable floppy, I was able to re-partition the drive with FDISK into two small 1.9GB partitions. Since I have the full five-floppy IBM DOS 6 set, I hid the first partition for OS/2 and installed the DOS 6 GUI on the second partition. I went ahead and added a few new subdirectories: BOOT to hold Grub2, PERSONAL to hold the data I decompress from the floppies, and UTILS to hold additional utilities. This little DOS system worked, and I now had new OS/2 "Disk 1" and "Disk 2" for the installation process.
(If you don't have a full set of DOS installation diskettes, you can make do with "FORMAT C: /S" from a [DOS boot disk], and then just copy over all the files from the boot disk to your C: drive. You won't have a nice DOS GUI, but the command line prompt will be enough to proceed.)
Like DOS, OS/2 expects to be installed on the C: drive. I hid the second partition (DOS), and marked the first partition installable and bootable. The OS/2 installation involves a lot of reboots, and the hard drive is not natively bootable in the intermediate stages. This means having to boot from Disk 0, then putting in Disk 1, then disk 2, before continuing the next phase of the installation. I tried to keep the installation as "Plain Vanilla" as possible.
I had to figure out what to include, and what to exclude, and this involved a lot of trial and error. For example, one of the choices was for "external diskette support". Since I had an "internal diskette drive", I didn't think I needed it. But after a full install, I discovered that it would not read or write floppy diskettes, so it appears that I do indeed need this support.
OS/2 supports two different file systems, FAT16 and the High Performance File System (HPFS). Since my partition was only 1.9GB in size, I chose just to use FAT16. HPFS supported larger disk partitions, longer file names, and faster performance, none of which I need for these purposes.
I thought it would be nice to get TCP/IP networking to work with my Ethernet card. However, after many attempts, I decided against this. I needed to focus on my mission, which was to decompress floppy diskettes. It was amusing to see that OS/2 supported all kinds of networking, including Token Ring, System Management, Remote Access, Mobile Access Services, File and Print.
Once all the options are chosen, the OS/2 installation proceeds to unpack and copy all the programs to the C: drive. During this process, IBM included informational splash screens. Here's one that caught my eye, titled "IBM Means Three Things", that listed three reasons to partner with IBM:
Providing global solutions for a small planet
Creating and applying advanced technologies to improve the ease with which customers run their businesses
Constantly improving customer service with the products and services we provide
You might wonder how these OS/2 splash screens, written over 10 years ago, can appear almost identical to IBM's current [Smarter Planet] campaign. Actually, it is not that odd. IBM has been keeping to these same core principles since 1911, only the words to describe and promote these core values have changed.
To access both OS/2 and DOS partitions, I installed Grand Unified Bootloader [Grub2] on the DOS partition under C:/BOOT/GRUB directory. However, when I boot OS/2, I cannot see the DOS partition. And when I boot DOS, I cannot see the OS/2 partition. Each operating system thinks its C: drive is the only partition on the system.
Now that I had OS/2 running, I was then able to install Stacker from two floppy diskettes. With this installed, I can compress and decompress data on either the hard disk, or on floppy diskettes. Most of the files were flat text documents and digital photos. After copying the data off the compressed disks onto my hard drive, I now can copy them off to a safe place.
To finish this project, I installed Ubuntu Linux on the remaining 76GB of disk space; Linux can access both the OS/2 and DOS FAT16 file systems natively. This allows me to copy files from OS/2 to DOS or vice versa.
Now that I know what data types are on the diskettes, I determined that I could have decompressed the data in just a few steps:
Set up a DOS partition on C: drive
Insert one of the compressed diskettes into the floppy drive
Copy the STACKER.EXE program from the floppy to the C: drive
Run "STACKER A:" to decompress the floppy diskette
However, now that I have a working DOS and OS/2 system, I can possibly review the rest of my floppy diskettes, some of which may require running programs natively on OS/2 or DOS. This brings me to an important lesson. If you are going to keep archive data for long-term retention, you need to choose file formats that can be read by current operating systems and programs. Installing older operating systems and programs to access proprietary formats can be quite time-consuming, and may not always be possible or desirable.
Most businesses in Latin America would be considered "Small and Medium-size" businesses, which we shorten to SMB, but in some places is shortened to SME for "Small and Medium sized Enterprises." The problem with SME is that we often use this to refer to "subject-matter experts," so it can be confusing.
The problem with many acronyms is that in other countries the letters get re-arranged, based on the syntax of the language. ISO, for example, is actually the International Organization for Standardization.
Today, we learned about PYME. In Spanish, this stands for pequeñas y medianas empresas, which is literally "small" and "medium" businesses. Of course, most of my colleagues had not recognized PYME, and most of the people we talked to did not understand SMB. Once we equated one to the other, things went smoothly.
For those not familiar with Latin America, I suggest the movie Romancing The Stone, starring Michael Douglas and Kathleen Turner.
Alan was a leader in blogging about IBM Lotus technologies and was very helpful to me over the past few years in deploying new Lotus technologies at the IBM Tucson Executive Briefing Center. The Lotus team taught me how to use Second Life, using the Lotusphere 2007 build to demonstrate the various possibilities that we used to run IBM System Storage events last year.
Alan, I wish you the best of luck on your exciting new position!
Two European scientists, Albert Fert (France) and Peter Grünberg (Germany), have won the 2007 Nobel Prize in physics for their research into Giant Magnetoresistance, or GMR. GMR read heads are used in IBM disk systems.
New high-density dual-coated particulate magnetic tape: Developed by Fuji Photo Film Co., Ltd., in Japan in collaboration with IBM Almaden researchers, this next-generation version of its NANOCUBIC™ tape uses a new barium-ferrite magnetic media that enables high-density data recording without using expensive metal sputtering or evaporation coating methods.
More sensitive read-write head: For the first time, magnetic tape technology employs the sensitive giant-magnetoresistive (GMR) head materials and structures used to sense very small magnetic fields in hard disk drives.
GMR servo reader: New GMR servo-reading elements, software, and fast-and-precise positioning devices provide an active feedback system with unprecedented 0.35-micron accuracy in monitoring and positioning the read-write head over the 1.5-micron-wide residual data track.
Improved tape-handling features: Flangeless, grooved rollers permit smoother high-speed passage of the tape, which also enhances the ability of the head to write and read high-density data.
Innovative signal processing algorithms for the read data channel: An advanced read channel uses new "noise-predictive, maximum-likelihood" (NPML) software developed at IBM's Zurich Research Laboratory to process the captured data faster and more accurately than would be possible with existing methods.
IBM often leverages the research done in one part of its business over to other parts of its business. In this manner, advances in disk translate into advances in tape, keeping tape a viable medium for at least the next 8-10 years.
Actually, if the title confuses you, it is because it has a double meaning.
Meaning 1: IBM earned almost 100 Billion dollars (USD)
IBM's 2010 [earnings report is now available], for the full year 2010 and the fourth quarter. IBM had $99.9 billion (USD) in revenue, just shy of the $100 billion it had set out as a vision in the 1980s. IBM Storage contributed with 8 percent growth, not bad for a year Dave Barry considers [one of the worst years ever].
IBM President and CEO Sam Palmisano granted me a chunk of IBM stock in appreciation of my efforts towards the 2010 success! Actually, he gave stock to a whole bunch of IBMers, not just me, and they all deserve it also. Woo hoo!
Meaning 2: IBM is almost 100 years old
That's right, this upcoming June 16, 2011, IBM turns 100 years old. This Centennial date also happens to be my 25th anniversary working in IBM Storage, which IBM calls joining the Quarter Century Club, or QCC for short. So, I am looking forward to plenty of cake and fireworks on that day!
I am looking forward to a year-long celebration on both counts!
Am I dreaming? On his Storagezilla blog, fellow blogger Mark Twomey (EMC) brags about EMC's standard benchmark results, in his post titled [Love Life. Love CIFS.]. Here is my take:
A Full 180 degree reversal
For the past several years, EMC bloggers have argued, both in comments on this blog and on their own blogs, that standard benchmarks are useless and should not be used to influence purchase decisions. While we all agree that "your mileage may vary," I find standard benchmarks useful as part of an overall approach to deciding which vendors to work with, which architectures or solution approaches to adopt, and which products or services to deploy. I am glad to see that EMC has finally joined the rest of the planet on this. I find it funny that this reversal sounds a lot like their reversal from "Tape is Dead" to "What? We never said tape was dead!"
Impressive CIFS Results
The Standard Performance Evaluation Corporation (SPEC) has developed a series of NFS benchmarks; the latest, [SPECsfs2008], adds support for CIFS. So, on the CIFS side, EMC's benchmarks compare favorably against previous CIFS tests from other vendors.
On the NFS side, however, EMC is still behind Avere, BlueArc, Exanet, and IBM/NetApp. For example, EMC's combination of Celerra gateways in front of V-Max disk systems resulted in 110,621 OPS with overall response time of 2.32 milliseconds. By comparison, the IBM N series N7900 (tested by NetApp under their own brand, FAS6080) was able to do 120,011 OPS with 1.95 msec response time.
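To put those two SPECsfs2008 NFS results side by side, here is a minimal Python sketch. The figures are the ones quoted above; the dictionary keys and helper function are my own, not anything from SPEC's reporting format:

```python
# Published SPECsfs2008 NFS results quoted in the post above.
results = {
    "EMC Celerra/V-Max": {"ops": 110_621, "ort_ms": 2.32},
    "IBM N7900 (NetApp FAS6080)": {"ops": 120_011, "ort_ms": 1.95},
}

def pct_more_ops(baseline, challenger):
    """Percent more throughput (OPS) the challenger delivers over the baseline."""
    base = results[baseline]["ops"]
    return 100.0 * (results[challenger]["ops"] - base) / base

advantage = pct_more_ops("EMC Celerra/V-Max", "IBM N7900 (NetApp FAS6080)")
print(f"N7900: {advantage:.1f}% more OPS, at a lower overall response time")
```

Run as-is, this works out to roughly an 8.5 percent throughput advantage for the N7900, on top of the faster (lower) overall response time.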
Even though Sun invented the NFS protocol in the early 1980s, they take an EMC-like approach against standard benchmarks to measure it. Last year, fellow blogger Bryan Cantrill (Sun) gave his [Eulogy for a Benchmark]. I was going to make points about this, but fellow blogger Mike Eisler (NetApp) [already took care of it]. We can all learn from this. Companies that don't believe in standard benchmarks can either reverse course (as EMC has done), or continue their downhill decline until they are acquired by someone else.
(My condolences to those at Sun getting laid off. Those of you who hire on with IBM can get re-united with your former StorageTek buddies! Back then, StorageTek people left Sun in droves, knowing that Sun didn't understand the mainframe tape marketplace that StorageTek focused on. Likewise, many question how well Oracle will understand Sun's hardware business in servers and storage.)
What's in a Protocol?
Both CIFS and NFS have been around for decades, and comparisons can sometimes sound like religious debates. Traditionally, CIFS was used to share files between Windows systems, and NFS for Linux and UNIX platforms. However, Windows can also handle NFS, while Linux and UNIX systems can use CIFS. If you are using a recent level of VMware, you can use either NFS or CIFS as an alternative to Fibre Channel SAN to store your external disk VMDK files.
The Bigger Picture
There is a significant shift going on from traditional database repositories to unstructured file content. Today, as much as [80 percent of data is unstructured]. Shipments this year are expected to grow 60 percent for file-based storage, and only 15 percent for block-based storage. With the focus on private and public clouds, NAS solutions will be the battleground for 2010.
So, I am glad to see EMC starting to cite standard benchmarks. Hopefully, SPC-1 and SPC-2 benchmarks are forthcoming?
Last week's focus was on tape libraries, both virtual and real, leading up to our IBM announcement of acquiring Diligent Technologies. I was focused on HDS blogger Hu Yoshida's post about his conversation with Mark, who was on an expert panel about these topics. Mark discovered that of the top energy consumers in his datacenter, his tape library was in the top five, a surprising result. Hu suggested that switching to a VTL with deduplication technology was a potential alternative, and I pointed to a whitepaper from the Clipper Group that suggested otherwise.
My response was that perhaps Highmark's choice of backup software was poorly written, or that they had set it up with the wrong parameters, and just changing hardware might not be the right answer. I went too far, given that I didn't know which software they had, which parameters they were using, or which tape technology was involved. This came across wrong. I meant to poke fun at Hu's response. I did not mean to imply that Mark and his staff had made poor choices, or that they should automatically reject Hu's advice to consider other hardware alternatives.
I have discussed the situation with Mark, and agree that I should know his situation better before offering suggestions of my own.
Well, it's Tuesday again, and we had several announcements this month, so here is a quick recap. We had some things announced May 13, and then some more announcements today, but since I was busy with conferences, I will combine them into one post for the entire month of May 2008.
This time, I thought I would go "audio" with a recording from Charlie Andrews, IBM director of product marketing for IBM System Storage:
Well, it's Tuesday, and that means more IBM announcements!
Let's do a quick recap of what was announced for storage:
We now support 1000GB SATA-II drives in the DS4000 series. This is available for the DS4200 model 7V, DS4700 and DS4800, as well as the expansion drawers EXP420 and EXP810. When I asked our marketing team why we weren't going to say "1TB" like everyone else, they thought 1000GB sounds bigger. I guess I should not have asked that on April Fool's day. For more details, see the IBM press releases for the [DS4200/EXP420 and DS4700/DS4800/EXP810].
IBM announced new machine code Release 1.4a for the IBM Virtualization Engine™ TS7700 virtual tape library for our System z mainframe customers. Various features come with this new level of machine code. See the IBM [Press Release] for more details.
Load balancing across the grid
Host control over the copy of logical volumes on a cluster by cluster basis
Option to gracefully remove an individual cluster from an existing grid
Initial-state reset for TS7700 database for cluster cleanup
Option to upgrade single-cache to dual-cache configuration
Also announced were updates to the 7214 model 1U2. Technically this is not in the IBM System Storage product line, but instead is designed specifically for our System p server line. This is a "media drawer" that allows you to have tape on one side, and optical on the other, in a single enclosure. IBM announced that you can now have DAT160 80GB drives that are read-write compatible with DAT72 and DDS4, and half-high LTO-4 drives that can read LTO-2 media and are read-write compatible with LTO-3 media. Read the IBM [Press Release] for details.
Finally, if you are in the United States, Canada or the Caribbean, there is a special discount promotion for tape libraries purchased before June 20, 2008. This includes IBM TS3100, TS3200, TS3310 and TS3500 libraries. See the [Promotion Details] for eligibility.
IBM has added capability to the IBM TotalStorage Productivity Center for Replication. Here is a quick review of the different options for this component:
Base replication (uni-directional from primary to disaster site)
Two-site replication (bi-directional, including failover and failback)
Three-site replication (site awareness for all the copy sessions between all three sites in all situations)
Productivity Center for Replication supported all these levels for DS8000, DS6000 and ESS 800 disk models, but for SVC it only supported FlashCopy and Metro Mirror for the uni-directional base. IBM announced version 3.4 today, which adds support for SVC Global Mirror (asynchronous disk mirroring) and bi-directional failover/failback. This support lets you have "practice volumes" that allow IT managers to perform "disaster recovery exercises" without disrupting production workloads.
Also, for the DS8000, there is support for the new Space Efficient FlashCopy and Dynamic Volume Expansion features.
The Productivity Center for Replication server can run on either a Windows/Linux-x86 server or a z/OS mainframe server. The Productivity Center for Replication on System z offers all the same new support for SVC and DS8000, as well as incorporated Basic HyperSwap capability that I mentioned in my post last February, [DS8000 Enhancements for the IBM System z10 EC].
Here are the IBM press releases for the TotalStorage Productivity Center for Replication on [Windows/Linux-x86 and System z] servers.
I'm at a Business Partner conference today, discussing these announcements and other topics, so need to go back to those festivities.
Well, it's Tuesday, which means it's time to look at recent announcements. While I was on vacation last week, IBM made a lot of storage announcements October 23. Josh Krischer gives his summary on WikiBon, [October 2007 Review]. Austin Modine of The Register went so far as to say that [IBM goes crazy with storage system updates].
IBM System Storage DS8000 series
This is "Release 3" software/microcode upgrades on our existing "Turbo" hardware.
IBM FlashCopy SE -- Here "SE" stands for Space Efficient. Rather than allocating a full 100% of the space for the FlashCopy destination, you can set aside just a fraction, and this will hold all the changed blocks, similar to what IBM already offers on the DS4000 series.
Dynamic Volume Expansion -- In the past, if you needed more space for a LUN, you had to carve out a new one elsewhere, and then copy the data over from the old to the new, leaving the old LUN around to be re-used or left stranded. With this enhancement, you can just upgrade the LUN in place, making it bigger as needed, similar to what IBM already offers on the DS4000 series and SAN Volume Controller. This applies to CKD volumes for the System z mainframe users out there as well.
Storage Pool Striping -- striping volumes across RAID ranks to eliminate or reduce hot-spots, and provide better load balancing. Many used SAN Volume Controller in front of the DS8000 to do this, but now you can do it natively in the DS8000 itself.
z/OS Global Mirror Multiple Reader -- for System z customers, "z/OS Global Mirror" is the new name for XRC. This enhancement improves the throughput of sending updates to the remote disaster recovery location.
DS Storage Manager enhancements -- the element manager software has been enhanced, and is pre-installed on the new IBM System Storage Productivity Center, which I will talk about below.
Intermix of DS8000 machine types -- this is especially useful to allow new frames to have co-terminating warranties with the base units. In other words, as you expand your system, you can ensure that the entire chunk of iron runs out of warranty all at the same time, to simplify your decision-making process to upgrade or contract for extended service.
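The Storage Pool Striping idea above can be illustrated with a toy round-robin placement of volume extents across RAID ranks. To be clear, this is just a conceptual sketch: the rank names, extent count, and simple modulo rotation are my own illustration, not the actual DS8000 allocation algorithm.

```python
# Toy illustration of striping a volume's extents across RAID ranks
# (rank names and rotation scheme are illustrative, not the DS8000 internals).
RANKS = ["R0", "R1", "R2", "R3"]

def rank_for_extent(extent_index):
    """Round-robin placement: consecutive extents rotate across all ranks."""
    return RANKS[extent_index % len(RANKS)]

placement = [rank_for_extent(i) for i in range(8)]
print(placement)
```

The point of rotating extents this way is that a hot volume's I/O load is spread over every rank in the pool, instead of hammering the single rank it would otherwise live on.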
One of the biggest complaints about IBM TotalStorage Productivity Center is that it is software that needs to be installed on its own server, and that this installation process can take a day or two. Why wait? Now you can have a hardware console that has the DS8000 Storage Manager software, SVC Admin Console software, and IBM TotalStorage Productivity Center "Basic Edition" pre-installed. Here are the key features:
Pre-installed and tested console
DS8000 R3 GUI integration
Cohabitation of SVC 4.2.1 GUI and CIMOM
Automated device discovery
Asset and capacity reporting, including tape library support
Our "Release 9" applies across the board, from N3000 to N5000 to N7000 series models, includingnew host bus adapters, and the new Data OnTAP 7.2.4 release level.
The Virtual File Manager (VFM) was announced as one of our latest [Storage Virtualization Solutions]. VFM provides a global namespace that aggregates the file systems from Linux, UNIX, and Windows file servers, as well as N series storage, into a consolidated environment.
IBM's virtual tape library (VTL) for the distributed systems platform, has been enhanced to provide:
Up to 12TB of disk cache, using 750GB SATA disk.
F05 Tape Frames installed as TS7520 base units through a 32-port Fibre Channel switch
Support for LTO generation 4 tape drives, both as virtual tape drives and as physical tape drives within IBM automated tape libraries attached to the TS7520. This allows you to use the encryption capabilities of LTO-4.
The DS3000 series now supports SATA disk, and can be attached to AIX and Linux on System p servers. This applies to the DS3200, DS3300 and DS3400 models. See the [DS3000 Announcement Letter] for more details.
Well, it is Tuesday, and that means IBM announcements. This week many of my colleagues are attending Storage Networking World [SNW] conference. Normally, the most exciting announcements are reserved for the weeks these conferences are held, but IBM apparently made an exception this week.
New Factory configurations for XIV
The first announcement is for new [factory configurations] for the IBM XIV disk system. In the past, you could only order a partial 6-module or a full 15-module rack. Today, IBM announced that there will also be 9-, 10-, 11-, 12-, 13- and 14-module configurations orderable as well.
Some FUD out in the blogosphere led some to believe that these partial configurations had to be made full 15 modules within 12 months. That is false. You can order any of these partial rack configurations and leave them as is until you need more capacity. There is no obligation to buy more capacity with these partial rack configurations.
IBM N series N6060 configurations
This second announcement indicates that the N6060 supports [672 drives]. The N6060 is the latest midrange model of IBM N series unified storage.
If you are asking "What is a 672 drive?" don't feel stupid. It actually refers to the number of external drives that can be attached to the N6060. Previously, it was mis-reported that the N6060 could support as many as 840 drives, but this was not correct, and this announcement is to fix that typo.
IBM Passport Advantage Sub-Capacity Licensing
This last announcement today relates to IBM Passport Advantage [sub-capacity licensing]. Pricing products is always a challenge. You want to come up with a pricing methodology such that people who get the most use pay the most, and those who get less pay less, in a manner that everyone thinks is fair. With commodities, it is simple to price rice by the pound, or fabric by the yard, but what about IT solutions?
Some of the IBM software is priced based on the number of processors used, so that people who have the software running on multiple machines, or machines with multiple cores, pay more because they are getting more value. This makes sense if this software is the only thing running on that server, but today you can also have server virtualization and be running many guest operating systems, each with different applications. The solution is to use "sub-capacity" licensing. If you have a quad-core processor server, but have four guest operating systems each using 25 percent of it, then each OS should only pay for one processor's worth of licensing. Since different processors have different clock speeds, IBM has standardized the calculations to a mythical "Processor Value Unit" or PVU, with a corresponding IBM License Metric Tool (ILMT).
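The quad-core example above can be sketched as a small calculation. This is illustrative only: the PVU-per-core rating below is an assumed number, not an entry from IBM's actual PVU tables, and the function names are my own.

```python
# Illustrative sub-capacity licensing math (PVU rating is hypothetical;
# real ratings come from IBM's published PVU tables, per processor family).
PVU_PER_CORE = 70   # assumed rating for this processor family
TOTAL_CORES = 4     # quad-core server

def full_capacity_pvus():
    """License the whole physical server, regardless of virtualization."""
    return TOTAL_CORES * PVU_PER_CORE

def sub_capacity_pvus(cores_entitled):
    """Charge a guest OS only for the cores it is entitled to use."""
    return cores_entitled * PVU_PER_CORE

# Four guests, each limited to 25% of the machine (one core's worth):
per_guest = sub_capacity_pvus(1)
print(per_guest, "PVUs per guest, versus", full_capacity_pvus(), "full capacity")
```

Under full-capacity rules each guest would be charged for all four cores; sub-capacity brings each guest's charge down to its one-core entitlement, which is the fairness argument the licensing change is making.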
Initially, this will cover specific versions of Citrix Xen Server, Microsoft Hyper-V and VMware, but IBM has made as a "statement of direction" that it will extend this sub-capacity licensing and ILMT support to IBM PowerVM capability for its POWER systems.
I have often heard clients complain that their third-party software vendor does not support these hypervisors. Sometimes this means the third-party vendor will not fix or provide assistance if a problem occurs in this environment; other times it is that the pricing does not favor this environment: you get charged for all the processors, even if your slice of the processor is much smaller.
If you are at SNW this week, stop by and say "Hi" to my fellow IBM colleagues for me.
The proof-of-concept that IBM Haifa research center developed back in 1998 became what we now call the iSCSI protocol. The book iSCSI: The Universal Storage Connection introduces the history as follows:
In the fall of 1999 IBM and Cisco met to discuss the possibility of combining their SCSI-over-TCP/IP efforts. After Cisco saw IBM's demonstration of SCSI over TCP/IP, the two companies agreed to develop a proposal that would be taken to the IETF for standardization.
There are three ways to introduce iSCSI into your data center:
Through a gateway, like the IBM System Storage N series gateway, that allows iSCSI-based servers to connect to FC-based storage devices
Through a SAN switch or director, which lets an FC-based server access iSCSI-based storage, an iSCSI-based server access FC-based storage, or even iSCSI-based servers attach to iSCSI-based storage.
Directly through the storage controller.
IBM has been delivering the first method with its successful IBM System Storage N series gateway products, but today we have announced additional support for the second and third methods. Here's a quick recap.
New SAN director blades
Supporting the second method, the IBM TotalStorage SAN256B Director is enhanced to deliver iSCSI functionality with a new M48 iSCSI Blade, which includes 16 ports (8 Fibre Channel ports and 8 Ethernet ports for iSCSI connectivity). We also announced a new Fibre Channel M48 Blade which provides 10 Gbps Fibre Channel Inter-Switch Link (ISL) connectivity between SAN256B Directors.
With support for Boot-over-iSCSI, diskless rack-optimized and blade servers can boot Windows or Linux over Ethernet, eliminating the management hassles of internal disk.
All of this is part of IBM's overall push into the Small and Medium size Business marketplace, making it easier to shop for and buy from IBM and its many IBM Business Partners, easier to deploy and install storage, and easier to manage the storage once you have it.
It's Tuesday, which means IBM makes its announcements. We had several for the IBM System Storage product line. Here's a quick recap.
The IBM System Storage DS3000 now offers DC power models. New DC-powered models of the DS3200, DS3400, and EXP3000 are well suited for Telco industry environments, as these are NEBS and ETSI compliant and are powered by an industry-standard 48 volt DC power source.
Also, the IBM System Storage N series now supports 750GB SATA drives, available for the EXN1000 drawer.
IBM Virtualization Engine TS7740now supports 3-cluster grids. Unlike 3-way replication on disk mirroring, such as IBM Metro/Global Mirror for the DS8000 that enforces a primary, secondary and tertiary copy, the grid implementation of TS7740 tape virtualization allows for any-to-any mirroring. Existing standalone TS7740 clusters can be converted to grid-enabled. A "Copy Export" feature allows virtual tapes to be exported onto physical tape. And in keeping with our theme of "enabling business flexibility", performance throughput can now be purchased in 100 MB/sec increments, up to 600 MB/sec, to match your workload bandwidth requirements.
The IBM System Storage TS1120drives installed in the IBM System Storage™ TS3400 Tape Library can now be attached to System z platforms using the IBM System Storage™ TS1120 Tape Controller. Before this, the TS3400 could only be attached to UNIX, Windows and Linux systems.
The IBM System StorageTS2230 Express is offered as an external stand-alone or rack-mountable unit. This model incorporates the new LTO IBM Ultrium 3 Serial Attached SCSI (SAS) Half-High Tape Drive, and a 3 Gbps single port SAS interface for a connection to a wide spectrum of distributed system servers that support Microsoft Windows and Linux systems.
IBM has added the Cisco MDS 9124 for IBM System Storage entry-level fabric switch as an Express offering and part of the IBM Express Advantage Program. Express offerings are specifically created for mid-market companies and are well suited for workgroup storage applications like e-mail serving, collaborative databases and web serving. They bring enterprise-class performance, scalability and features to small and medium-sized companies and are easy to use, highly scalable, and cost-effective. This will make it easier for IBM Business Partners to provide fabric switch connectivity for:
Storage consolidation solutions with IBM System Storage™ DS4000 Express disk arrays, especially the DS4700 Express.
Backup / restore solutions with IBM System Storage™ TS3000 Tape Libraries, such as the TS3200.
Archive and Retention
Ordering large configurations of the IBM System Storage Grid Access Manager just got a lot easier. New features enable configurations greater than 500 TB to be submitted as a single order. No change in the actual product, just an improvement in the ordering process.
For System p and System i servers, the IBM 3996 Optical library now supports Gen 2 60GB optical cartridges. These can be read/write or WORM cartridges.
I'm off to Denver, Colorado this week. I hope it is cooler there than it is down here in Tucson, Arizona.