Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Tony Pearson is a Master Inventor and Senior Software Engineer for the IBM Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
Doug Balog, IBM VP and Business Line Executive for Storage, presented Smart Archiving. Citing research by Jon Toigo, Doug indicated that 40 percent of data on disk should be archived. Sadly, a vast majority of companies continue to use their backups as archives. There is a better way to do archives, one that addresses the needs of four use cases:
The IBM Information Archive for email, files and eDiscovery offers full text indexing. A well-deployed archive strategy can save up to 60 percent in backup costs, and reduce backup times by 80 percent. IBM offers advanced analytics and visualization for archive data.
An analysis of a global insurance company found that they kept, on average, 120 copies of every email sent. This was the combination of an average of 12 copies of the email, multiplied by 10 backups of the email repository.
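The arithmetic behind that 120-copy figure is worth making explicit. Here is a trivial sketch (the helper function is mine, not from the study itself):

```python
def retained_copies(live_copies: int, backup_generations: int) -> int:
    """Estimate total retained copies of one email: every live copy in
    mailboxes appears again in every backup generation of the repository."""
    return live_copies * backup_generations

# The insurance company's numbers: 12 live copies x 10 backups
print(retained_copies(12, 10))  # 120
```

Every extra backup generation multiplies the waste, which is why archiving the originals (and then backing up less) pays off so quickly.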
Banjercito, a bank in Mexico, has a 10-year retention requirement from government regulations.
The new LTFS Library Edition allows Library-based access to files stored on tape cartridges. The new TS3500 Library Connector means that a single system of connected tape libraries can hold up to 2.7 Exabytes (EB) of data.
Archive Industry Perspectives
Steve Duplessie from Enterprise Strategy Group [ESG] gave his views on the challenges of volume, access and cost. His definition for archive: the long term retention of information on a separate environment for compliance, eDiscovery and business reference purposes. Steve advocates a purpose-built solution for archive. There are three major challenges for implementing an archive solution:
Getting Participation -- Steve feels that key stakeholders have inappropriate expectations of what archive is, or can be.
Define Tasks -- Steve argues that archive is very much a process-oriented approach, and tasks must fit business process and procedures
Prepare for Future Content Types -- the frequent change of standard and proprietary data types poses a real challenge for long term retention of data
For example, the Financial Industry Regulatory Authority [FINRA] oversees 4,000 brokerage firms and 600,000 broker/dealers. They have mandated the storing of digital data related to stock trades, and this can include text messages, voice messages, and emails. They continue to expand this definition, so soon this could include tweets on Twitter, for example.
Steve feels there are four key requirements for archive:
Support for email, such as an email application plug-in
Off-line access to archived data
Support for mobile devices, such as smartphones
Basic search capabilities
Companies are starting to take archive seriously. About 35 percent of firms surveyed have adopted archive, and another 36 percent plan to in the next 12-24 months. Enterprise archive has grown over 200 percent from 2007 to 2009. Steve agrees that not everything needs to be stored on disk. Retention periods greater than six years dictate the need for tape.
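Steve's six-year rule of thumb amounts to a one-line tiering decision. Here is a hedged sketch (the function name, threshold parameter, and tier names are illustrative, not from any product):

```python
def archive_tier(retention_years: float, tape_threshold_years: float = 6.0) -> str:
    """Pick a storage tier for archived data based on required retention.
    Retention beyond the threshold favors tape's lower cost per TB."""
    return "tape" if retention_years > tape_threshold_years else "disk"

print(archive_tier(10))  # Banjercito's 10-year regulatory mandate lands on tape
print(archive_tier(3))   # shorter retention can stay on disk
```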
Current systems may not meet today's requirements. Data loss and downtime costs have skyrocketed. Data Protection and Retention projects can represent a gold mine of savings: new capabilities can greatly lower costs, allowing companies to shift resources over to revenue generation.
Big Data, New Physics and Geospatial Super-Food
I would vote this the best session of the day! Jeff Jonas is an IBM Distinguished Engineer and the Chief Scientist of Entity Analytics. For all those confused about what the heck "Big Data" means, Jeff has the best explanation. He had just finished his 17th marathon on Saturday, and his fingers were bandaged.
Jeff founded the Systems Research and Development (SR&D) company, known for creating NORA (Non-Obvious Relationship Awareness), used by Las Vegas casinos to identify fraud. SR&D was acquired by IBM back in 2005. Jeff is focused on sensemaking of streams. He feels many companies are suffering from "Enterprise Amnesia".
"The data must find the data .. and the relevance must find the user."
-- Jeff Jonas
Jeff's metaphor for Big Data is a jigsaw puzzle without the picture on the outside of the box. To demonstrate his point, he presented a pile of jigsaw puzzle pieces and asked four teenagers to put the puzzle together without the advantage of the picture on the box. What he had not told them was that he had mixed four different puzzles together, removing 10 to 20 percent of the pieces from each puzzle. He also added duplicate pieces from a second copy of one of the puzzles, and, just to mess with their heads, included a dozen pieces from yet another, unrelated puzzle. Within a few hours, the kids had managed to figure out that there were four puzzles, that there were duplicate pieces, and that there were some pieces that did not fit any of the four puzzles.
"You can't squeeze knowledge from a pixel."
-- Jeff Jonas
This approach favors false negatives. New observations reverse out old conceptions. As the picture emerges, this provides added focus on new information. More data can provide better predictions. "Bad" data, including misspelled words and mis-coded categories, was often discarded or corrected on the basis of "Garbage In, Garbage Out," but can now be useful from a Big Data perspective.
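To make "the data must find the data" concrete, here is a toy sketch of incremental matching: each new observation is indexed by its attribute values, so later observations that share a value immediately "find" the earlier ones. This illustrates the idea only; it is not IBM's actual entity-analytics engine, and all the names and records are invented.

```python
from collections import defaultdict

index = defaultdict(set)   # attribute value -> entity ids seen with it
entities = {}              # entity id -> accumulated observations

def observe(entity_id, attributes):
    """Ingest one observation; return other entities sharing any attribute."""
    matches = set()
    for value in attributes.values():
        matches |= index[value]        # the data finds the data
        index[value].add(entity_id)    # context accumulates incrementally
    entities.setdefault(entity_id, []).append(attributes)
    matches.discard(entity_id)
    return matches

observe("e1", {"phone": "555-0100", "city": "Tucson"})
observe("e2", {"phone": "555-0199", "city": "Vegas"})
# A new observation sharing e1's phone number "finds" e1 immediately:
print(observe("e3", {"phone": "555-0100"}))  # {'e1'}
```

Note how nothing is thrown away: even a "bad" or partial record stays in the index, where a later observation may give it meaning.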
Take for example the 600 billion "location data" recordings captured from cell phones every day. With regular triangulation of cell phone towers, this information can pinpoint you to within 60 meters; add GPS and this improves to within 20 meters; add Wi-Fi and it improves further to within 10 meters. While this data is "de-identified" so as not to identify individual users, the process of re-identification is relatively trivial. Jeff's system is able to predict where a person will be next Thursday at 5:35pm with 87 percent accuracy.
Thus, Big Data represents an asset: an accumulation of context. Real-time analytics can be a competitive advantage. These streams of data will need persistent storage and massive I/O capabilities. In one example, Jeff processed 4,200 separate sources of information and was able to identify "dead votes". These are votes cast in the names of people who died in prior years, indicating voter fraud.
Jeff's latest project, codenamed G2, will tackle not just people, but everything from proteins to asteroids.
Normally, the worst time slot is the hour after lunch, but these presentations kept people's attention.
During the break, I talked with some of the other bloggers at this event. From left to right: Stephen Foskett [Pack Rat] blog, Devang Panchigar [StorageNerve], and yours truly, Tony Pearson. (Picture courtesy of Stephen Foskett)
Meet the Experts
This next segment was a Q&A panel, with a moderator posing questions to four experts. Originally, I was scheduled to be the moderator, but this was changed to Doug Balog. The experts on the panel were:
Rich Castagna, Editorial Director for Storage Media, TechTarget. TechTarget is the group that runs the [SearchStorage] website.
Stan Zaffos, Gartner VP of Research, who spoke earlier today. I have worked with Stan for years as well, and have attended the last four Gartner Data Center Conferences held every December in Las Vegas.
Steve Duplessie, Founder and Senior Analyst, Enterprise Strategy Group (ESG). Steve's blog is titled [The Bigger Truth].
Jon Toigo, independent analyst and consultant, whose research on disk data was cited earlier in the day.
Jon clarified a statement Doug Balog had made earlier in the day that was attributed to his study. Doug had said that 40 percent of all data should be archived. The study that Jon Toigo had done actually found that, on average, of the data on disk systems, about 30 percent is useful data, 40 percent is not active and could be eligible for archive, and the remaining 30 percent was crap.
The other experts introduced themselves. Rich felt that "Cloud" was still the biggest buzzword in the IT industry. Stan felt that CIOs should ask their storage administrators, "What are you doing to improve my agility and efficiency?" Steve felt that it was better to focus on improving processes and procedures, rather than trying to deploy the best technology.
How can you best reduce backup costs per TB?
Jon- use tape.
Rich- Clean up your environment.
Stan- Don't rehydrate your deduplicated data, adopt archive approach, and revisit your backup schedules.
Steve- Deduplication covers up stupidity. No band-aids! Companies need to address the cause.
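For readers who have not looked under the hood, the deduplication mechanics the panel is debating are simple to sketch: chunk the data, hash each chunk, and store each unique chunk only once. A minimal illustration (the chunk size and in-memory store are invented for the example):

```python
import hashlib

store = {}  # sha256 digest -> chunk bytes; each unique chunk stored once

def dedup_write(data: bytes, chunk_size: int = 4096) -> list:
    """Split data into fixed-size chunks; store each unique chunk once
    and return the list of chunk keys that reconstruct the data."""
    keys = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # only new content consumes space
        keys.append(digest)
    return keys

# Writing the same two-chunk file twice stores only two unique chunks:
dedup_write(b"A" * 4096 + b"B" * 4096)
dedup_write(b"A" * 4096 + b"B" * 4096)
print(len(store))  # 2
```

This is exactly Steve's point: dedup shrinks the redundant copies after the fact, but it does nothing about why so many copies were made in the first place.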
Does Backup as a Public Service make sense for large enterprises?
Rich- Yes, especially for those with Remote Office/Branch Office (ROBO).
Stan- It depends. You should implement client-side dedupe. Get the Cloud Provider to waive telecom bandwidth charges.
Steve- Consider recovery scenarios, and try to maintain control.
Jon- "Clouds" are bulls@#$ marketing. WAN latency will pile up.
What are the top issues IT leaders should be discussing with the Storage Managers?
Stan- Ensure SLAs meet, but do not exceed, design; automate; and evaluate SAN/NAS ratios.
Steve- Server virtualization is putting the spotlight on storage. Failure to implement storage virtualization is becoming the gate that slows down server virtualization adoption.
Jon- Insist on management features from all storage vendors, try to separate feature/function from the underlying hardware layer. See IBM's [Project Zero].
Rich- Efficiency, Archiving, Thin Provisioning, Compression, Data Protection & Retention, Backup Redesign to protect endpoints like laptops and cell phones.
When does Archive eliminate Backup?
The need for protection never goes away. There are two kinds of data: "originals" and "derivatives", and two kinds of disk: "failed" and "not yet failed".
Given SATA and SAS drives, what is the future of 10K/15K RPM drives?
There is no future for these faster drives; they are going away.
What is the biggest challenge for adopting archive?
It is easy to move data out of production systems, but difficult to make these archives accessible for eDiscovery and Search. There is also concern about changing data formats. Adobe has changed the format of PDF a whopping 33 times.
This was by far the most entertaining section of the day! Hand-held devices allowed the audience to vote which answers they liked best.
Dan Galvan, IBM VP of Marketing for Storage, was the next speaker. With 300 billion emails being sent per day, 4.6 billion cell phones in the world, and 26 million MRIs per year, there is going to be a huge demand for file-based storage. In fact, a recent study found that file-based storage will grow at 60 percent per year, compared to 15 percent growth for block-based storage.
Dan positioned IBM's Scale-out Network Attached Storage (SONAS) as the big "C:" drive for a company. SONAS offers a global namespace, a single point of management, with the ability to scale capacity and performance tailored for each environment.
The benefits of SONAS are compelling: consolidate dozens of smaller NAS filers, virtualize files across different storage pools, and increase overall efficiency.
Powering advanced genomic research to cure cancer
The next speaker was supposed to be Bill Pappas, Senior Enterprise Network Storage Architect, Research Informatics at [St. Jude Children’s Research Hospital]. Unfortunately, St. Jude is near the flooding of the Mississippi river, and he had to stay put. An IBM team was able to capture his thoughts on video that was shown on the big screen.
Thanks to the Human Genome Project, St. Jude is able to cure more patients. They see 5,700 patients per year, and have an impressive 70 percent cure rate. The first genome scan took 10 years; now the technology allows a genome to be mapped in about a week. Having this genomic information is driving vast strides in healthcare. It is the difference between fishing in a river and casting a wide net to catch all the fish in the Atlantic Ocean at once.
Recently, St. Jude migrated 250 TB of files from other NAS to an IBM SONAS solution. The SONAS can handle a mixed set of workloads, and allows internal movement of data from fast disk, to slower high-capacity disk, and then to tape. SONAS is one of the few storage systems that supports a blended disk-and-tape approach, which is ideal for the type of data captured by St. Jude.
IBM's own IT transformation
Pat Toole, IBM's CIO, presented the internal transformation of IBM's IT operations. He started in 2002, in the midst of IBM's effort to restructure its processes and procedures. They identified four major data sources: employee data, client data, product data, and financial data. They focused on understanding outcomes and setting priorities.
The result? A 3-to-1 payback on CIO investments. This allowed IBM to go from server sprawl to consolidated pooling of resources with the right levels of integration. In 1997, IBM had 15,000 different applications running across 155 separate datacenters. Today, they have reduced this to 4,500 applications and 7 datacenters. Their goal is to get down to 2,225 applications by 2015. Of these, only 250 are mission critical.
Pat's priorities today: server and storage virtualization, IT service management, cloud computing, and data-centered consolidation. IBM runs its corporate business on the following amount of data:
9 PB of block-based storage (SVC and XIV)
1 PB of file-based storage (SONAS)
15 PB of tape for backup and archive
Pat indicated that this environment is growing 25 percent per year, and that an additional 70-85 PB relates to other parts of the business.
By taking this focused approach, IBM was able to increase storage utilization from 50 to 90 percent, and to cut storage costs by 50 percent. This was done through thin provisioning, storage virtualization and pooling.
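The savings implied by that utilization jump are easy to check with back-of-envelope arithmetic (the helper function is mine, purely illustrative):

```python
def raw_capacity_needed(data_tb: float, utilization: float) -> float:
    """Raw TB of disk required to hold data_tb at a given utilization."""
    return data_tb / utilization

# Holding a fixed 100 TB of data:
before = raw_capacity_needed(100, 0.50)  # 200 TB raw at 50% utilization
after = raw_capacity_needed(100, 0.90)   # ~111 TB raw at 90% utilization
print(round(1 - after / before, 2))      # 0.44 -- roughly 44% less raw disk
```

Roughly 44 percent less raw capacity for the same data, which is in the same ballpark as the 50 percent cost reduction cited above once thin provisioning and pooling are factored in.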
Looking forward to the future, Pat sees the following challenges: (a) that 120,000 IBM employees have smart phones and want to connect them to IBM's internal systems; (b) the increase in social media; and (c) the use of business analytics.
After the last session, people gathered in the "Hall of the Universe" for the evening reception, featuring food, drinks and live music. It was a great day. I got to meet several bloggers in person, and their feedback was that this was a very blogger-friendly event. Bloggers were given the same level of access as corporate executives and industry analysts.
Greg and 3PAR's Marc Farley did an "ambush" interview with the folks at the IBM booth at SNW, including Paula Koziol about Twitter, and [Rich Swain] about IBM's latest SONAS product. Here is their post [Storage Monkey business with IBM]:
You can learn more about SONAS from my post [More Details about IBM Clustered NAS]. SONAS is based on software that has been available since 1996, on commodity off-the-shelf server and storage systems, but building a complete system was left as an exercise to the end-user, which many of the top 500 Supercomputers have done.
Back in November 2007, IBM announced Scale-Out File Services (SoFS) which was a set of IBM Global Technical Services to build a customized solution from the software and a set of servers, disk and tape storage. Customized configurations were done for a variety of workloads from Digital Media to Scientific Research High Performance Computing (HPC). Last year, SoFS was renamed to IBM Smart Business Storage Cloud (SBSC).
This year, IBM was able to package all of the software and hardware into an easy to order machine-type model that has everything cabled and ready to use. This is what SONAS is today.
When I was a kid, I used to love old spy movies where they would hide a small microchip or microfiche behind the stamp on a letter or postcard. "Yeah right," I would think to myself, "how much information could that little thing possibly hold?" On their post [Bringing the "New Intelligence" Down to Earth: Intro to Semantic Web, Internet-of-Thing], my fellow IBM bloggers Jack Mason and Adam Christensen pointed me to a crazy new product called "Mir:ror" that connects to your PC or laptop.
At first, I thought it was another product spoof, like Onion News Network's video of the [Apple MacBook Wheel] that eliminates the need for a keyboard. But no, this product is real, from a company called [Violet]. Violet offers the mir:ror, internet-connected rabbits, and tiny postage stamps called "ztamps" with embedded RFID chips that allow everything to be interconnected. I can see a lot of interesting uses for the ztamps. Squishing CD-ROMs or memory sticks inside presentation folders was always awkward. But these are small, flat and discreet. I don't know how many GBs of storage each ztamp holds, but they look cool, don't they?
Just another example of becoming a smarter planet!
This week and next, I am down under in Australia and New Zealand for a seven-city Storage Optimisation Breakfast series of presentations to clients and prospects. My first city for this seven-city tour was Sydney, Australia.
Here is the view from my room at the [Shangri-La hotel], including the famous [Sydney Opera House] and Circular Quay, from which to take a water taxi or ride the Manly Ferry. [Sydney harbour] is the deepest harbour in the Southern Hemisphere, allowing boats of all sizes to enter. This section of the city is known as "The Rocks".
Sydney is a very modern metropolis. The last time I was in Sydney was in May 2007 to teach an IBM Top Gun class. My post back then on [Dealing with Jet Lag] is as relevant now as it was back then. In addition to being 9 hours off-shifted from last week in Dallas, Texas, I also have to deal with the colder climate, about 40 degrees F cooler down here. The weather is crisp and clear, it is Winter going into Spring down here as the seasons are flipped below the equator.
Many of the buildings are recognizable from the movie ["The Matrix"] which was filmed here. We joked that this seven-city trip was also similar to [The Adventures of Priscilla, Queen of the Desert], in that both journeys started in Sydney. If you haven't seen the latter, I highly recommend it to get to learn more about Australia as a country.
(Completely useless trivia: Actor Hugo Weaving appeared in both movies. While most people associate him with Australia, where he has lived since 1976, he actually was born in Nigeria, and traveled extensively because his father worked in the computer industry.)
Here I am standing next to our banner.
The line-up for each event is simple. After all the attendees sit down for breakfast, we have the following three sessions:
First, Anna Wells, local IBM Executive for Storage Sales in Australia and New Zealand, presents IBM's strategy for storage, and how IBM plans to address Storage Efficiency, Data Protection and Service Delivery. She then highlights various products that are currently available to help meet customer needs, including XIV and the SAN Volume Controller (SVC).
Second, we have a client or two share their success story. We will have different speakers at the different locations.
Third, I present on future trends that will impact the storage marketplace. With only 40 minutes for my section, I decided to focus on just three specific trends, with a mix of some colorful analogies to help emphasize my key points.
We had a great turn-out for our first event in Sydney, lots of clients and prospects came out for this. There is a lot of enthusiasm for IBM's vision, thought leadership, and broad portfolio of storage solutions.
In general, people agree that IBM, HP and EMC are the top three vendors in storage, with HDS, Sun and Dell rounding out the top six.
The fun begins when a respected analyst like IDC Corp. publishes their calculations, and individual vendors re-swizzle the results because they are not happy with the findings.
I thought it would be helpful to illustrate how this all works. First, you need to come up with a definition of what you are going to count. You could count units sold, revenue dollars, capacity in Terabytes, or some other generally accepted metric.
Next, you need to define what's in and what's out. For example, you can say "storage", which would include both disk drives and tape drives, internal or external to servers; or you can choose a narrower definition, say external disk systems, which might suit you better if you aren't in the tape business and don't sell servers.
By some definitions, my Apple iPod, Motorola cell phone, and Canon digital camera could all be counted as external disk systems, as they all connect via USB cable to my IBM laptop and act like a disk drive to my Windows operating system, allowing me to read and write data back and forth. It is necessary to define exactly what you plan to include, and what to exclude, based on the reported numbers available.
The last rule is that nothing gets double-counted. In our complicated industry of manufacturers and vendors, sometimes storage is manufactured by one company, but sold by another, typically under the vendor's brand, not the manufacturer's brand. You can either count manufactured units or vendor units, but you can't mix and match.
IBM is both manufacturer and vendor. However, IDC only counts vendor units, so storage manufactured by someone else but sold by IBM is counted as IBM, and storage manufactured by IBM but branded by someone else goes to that other vendor. Likewise, HP and Sun re-brand Hitachi storage, and Dell re-brands EMC storage.
EMC would like to treat all EMC-manufactured storage re-branded by Dell as EMC vended storage, so that it can move up in the ratings. But Dell wants to count it too, so that it can appear in the top six. You can't have it both ways.
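The no-double-counting rule can be expressed in a few lines: pick one attribution view (vendor or manufacturer) and total revenue under it, never both at once. The sample records below are invented purely for illustration:

```python
# Hypothetical sales records: who built the box vs. whose brand sold it.
sales = [
    {"manufacturer": "EMC", "vendor": "Dell", "revenue": 5},
    {"manufacturer": "EMC", "vendor": "EMC", "revenue": 20},
    {"manufacturer": "Hitachi", "vendor": "HP", "revenue": 8},
]

def market_share(records, view):
    """Total revenue per company under one consistent attribution view;
    each sale is counted exactly once."""
    totals = {}
    for r in records:
        totals[r[view]] = totals.get(r[view], 0) + r["revenue"]
    return totals

print(market_share(sales, "vendor"))        # {'Dell': 5, 'EMC': 20, 'HP': 8}
print(market_share(sales, "manufacturer"))  # {'EMC': 25, 'Hitachi': 8}
```

Under the vendor view, Dell gets credit for the re-branded EMC gear; under the manufacturer view, EMC does. What you cannot do is hand the same 5 units of revenue to both.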
But are these ratings just "bragging rights"? Not always. When big purchases are planned for new projects, or a client decides it's time to throw out the current vendor and shop for a new one, the ratings could influence that decision. In that regard, the IDC 4Q05 Storage Tracker reported IBM as number one overall in storage hardware at the end of 2005, which includes both internal and external disk systems, as well as tape drives sold under the IBM brand, based on dollar revenues. By this method of counting, HP came in at number 2, EMC at number 3, and the rest round out the top six as before.
In the end, this is just one factor when deciding which brand to choose for your storage needs.
"With Cisco Systems, EMC, and VMware teaming up to sell integrated IT stacks, Oracle buying Sun Microsystems to create its own integrated stacks, and IBM having sold integrated legacy system stacks and rolling in profits from them for decades, it was only a matter of time before other big IT players paired off."
Once again we are reminded that IBM, as an IT "supermarket", is able to deliver integrated software/server/storage solutions, and our competitors are scrambling to form their own alliances to be "more like IBM." This week, IBM announced new ordering options for storage software with System x servers, including BladeCenter blade servers and IntelliStation workstations. Here's a quick recap:
IBM Tivoli Storage Manager FastBack v6.1 supports both Windows and Linux! FastBack is a data protection solution for ROBO (Remote Office, Branch Office) locations. It can protect Microsoft Exchange, Lotus Domino, DB2, and Oracle applications. FastBack can provide full volume-level recovery, as well as individual file recovery, and in some cases Bare Machine Recovery. FastBack v6.1 can be run stand-alone, or integrated with a full IBM Tivoli Storage Manager (TSM) unified recovery management solution.
FlashCopy Manager v2.1
FlashCopy Manager uses point-in-time copy capabilities, such as SnapShot or FlashCopy, to protect application data using an application-aware approach for Microsoft Exchange, Microsoft SQL Server, DB2, Oracle, and SAP. It can be used with IBM SAN Volume Controller (SVC), DS8000 series, DS5000 series, DS4000 series, DS3000 series, and XIV storage systems. When applicable, FlashCopy Manager coordinates its work with Microsoft's Volume Shadow Copy Services (VSS) interface. FlashCopy Manager can provide data protection using just point-in-time disk-resident copies, or can be integrated with a full IBM Tivoli Storage Manager (TSM) unified recovery management solution to move backup images to external storage pools, such as low-cost, energy-efficient tape cartridges.
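Conceptually, a point-in-time copy lets a backup read yesterday's blocks while the application keeps writing. Here is a toy copy-on-write model of that idea; it is a sketch of the general technique, with invented class names, not how FlashCopy is actually implemented:

```python
class Volume:
    """A toy volume: block address -> data."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)
        self.snapshots = []

    def snapshot(self):
        snap = Snapshot(self)
        self.snapshots.append(snap)
        return snap

    def write(self, addr, data):
        # Copy-on-write: preserve old contents for any open snapshot
        # before overwriting the source block.
        for snap in self.snapshots:
            if addr not in snap.preserved:
                snap.preserved[addr] = self.blocks.get(addr)
        self.blocks[addr] = data

class Snapshot:
    """Point-in-time view; stores only blocks overwritten since it was taken."""
    def __init__(self, source):
        self.source = source
        self.preserved = {}

    def read(self, addr):
        if addr in self.preserved:
            return self.preserved[addr]
        return self.source.blocks[addr]  # unchanged blocks read through

vol = Volume({0: b"old", 1: b"unchanged"})
snap = vol.snapshot()
vol.write(0, b"new")
print(snap.read(0))  # b'old' -- the point-in-time view is preserved
print(snap.read(1))  # b'unchanged' -- reads through to the source
```

The application-aware part is what the model leaves out: quiescing Exchange or DB2 (for example via VSS) so the blocks captured at snapshot time form a consistent image.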
General Parallel File System (GPFS) v3.3 Multiplatform
GPFS can support AIX, Linux, and Windows! Version 3.3 adds support for Windows Server 2008 on 64-bit chipset architectures from AMD and Intel. Now you can have a common GPFS cluster with AIX, Linux and Windows servers all sharing and accessing the same files. A GPFS cluster can have up to 256 file systems. Each file system can hold up to 1 billion files and up to 1 PB of data, and can have up to 256 snapshots. GPFS can be used stand-alone, or integrated with a full IBM Tivoli Storage Manager (TSM) unified recovery management solution with parallel backup streams.
For full details on these new ordering options, see the IBM [Press Release].
Continuing my coverage of the IT Security and Storage Expo in Brussels, Belgium, we had some great storage solutions on display at the IBM and I.R.I.S-ICT booth.
Here my IBM colleague Tom Provost is showing the front of the "Smarter Office" solution. The second photo gives the view from behind. While I always explained the solution from the front of the box, many of the more technical attendees at this conference wanted to inspect the ports in the back.
This sound-isolated 11U solution combines the following:
The [IBM Storwize V3700] with 300GB small-form-factor (SFF) drives provides shared storage for the servers.
Two [IBM System x3550 M4 servers] that can run VMware, Hyper-V or Linux KVM hypervisor software for your Windows and/or Linux applications. These are two-socket servers that can have up to 16 x86 cores each.
A Juniper EX2200 switch to network the servers and storage together.
A Local Console Manager (LCM) with rackable keyboard, video, and mouse.
In this next example, the IBM team combined a BladeCenter S chassis that can hold six blade servers, with a Storwize V7000 Unified which offers FCP, iSCSI, FCoE, NFS, CIFS, HTTPS, SCP and FTP block and file protocols.
If those configurations are too small for your needs, consider the Flex System chassis or the full PureFlex system frame. The rack-mountable 10U chassis can hold the Flex System V7000 and 10 compute nodes. The PureFlex frame can hold up to four of these chassis.
IBM and I.R.I.S-ICT also had an IBM XIV Gen3 and a TS3500 Tape library on display.
If you missed the [IBM System Storage and Storage Networking Symposium] in San Diego, California last month (like I did, because I was in Japan and India), here is your chance to attend the one next month in Europe, September 8-11, in beautiful [Montpellier, France]. Several of my colleagues from the IBM Tucson Executive Briefing Center are scheduled to speak at this event.
And maybe, perhaps, some IBM executives, will have something important to say next month also! Stay tuned!
For a list of other IBM events this year, see the [2008 schedule].
And, it's not too late to sign up for IBM Tivoli's [Pulse 2008] conference that will be held in Orlando, Florida, May 18-22, 2008. I'll be there Sunday and Monday only, in the Tivoli Storage track, so if you are planning to attend and wish to meet up with me while I am there, please send me a note!
This week I am in Chicago for the IBM Storage and Storage Networking Symposium, which coincides with the System x and BladeCenter Technical Conference. This allows the 800 attendees to attend both storage and server presentations at their convenience. There were hundreds of sessions across over 20 time slots, so for each time slot you had 15 or so topics to choose from. Mike Kuhn kicked off the series of keynote sessions. Here's my quick recap of each one:
Curtis Tearte, General Manager, IBM System Storage
Curtis replaced Andy Monshaw as General Manager for IBM System Storage. His presentation focused on how storage fits into IBM's Dynamic Infrastructure strategy. Some interesting points:
A billion camera-enabled cell phones were sold in 2007, compared to 450 million in 2006.
IBM expects that there will be 2 billion internet users by 2011, as well as trillions of "things".
In the US, there were 2.2 million medical pharmacy dispensing errors resulting from handwritten prescriptions.
Time wasted looking for parking spaces in Los Angeles consumed 47,000 gallons of gasoline, and generated 730 tons of carbon dioxide.
In the US, 4.2 billion hours are lost, and 2.9 billion gallons of gas consumed, due to traffic congestion.
Over the past decade, servers went from 8 watts to 100 watts per $1000 US dollars.
Data growth appears immune to the economic recession. The digital footprint per person is expected to grow from 1TB today to over 15TB by 2020.
10 hours of YouTube videos are uploaded every minute.
Bank of China manages 380 million bank accounts, processing over 10,000 transactions per second.
At the end of the session, Curtis transitioned from demonstrating his knowledge and passion for storage to his knowledge and passion for his favorite sport: baseball. Chicago is home to both the Cubs and the White Sox.
Roland Hagan, Vice President Business Line Executive, System x
IBM sets the infrastructure agenda for the entire industry. The Dynamic Infrastructure initiative is not just IT, but a complete end-to-end view across all of the infrastructures in play, including transportation, manufacturing, services and facilities. Companies spent over $60 billion US dollars on servers last year. Of these, 53 percent was for x86-based servers, 9 percent for Itanium-based, 26 percent for RISC-based (POWER6, SPARC, etc.), and 11 percent for mainframe. The economic downturn has impacted revenues, but the percentages remain about the same.
The dominant deployment model remains one application per server. As a result, power, cooling and management costs have grown tremendously. There are system admins opposed to consolidating server images with VMware, Hyper-V, Xen or other server virtualization technologies. Roland referred to these admins as "server huggers". To help clients adopt cloud computing technologies, IBM introduced [Cloudburst] appliances. IBM plans to offer specialized versions for developers, for service providers, and for enterprises.
IBM's Enterprise X-Architecture is what differentiates IBM's x86-based servers from all the competitors, surrounding Intel and AMD processors with technology that provides distinct advantages. For example, to support server virtualization, IBM's eX4 provides support for more memory, which is often more critical than CPU resources when deploying large numbers of guest OS images. IBM System x servers have an integrated management module (IMM) and were the first to change over from BIOS to the new Unified Extensible Firmware Interface [UEFI] standard.
IBM servers offer double the performance, consume half the power, and cost a third less to manage than comparably priced servers from competitors. Of the top 20 most energy-efficient server deployments, 19 are from IBM. Roland cited customer reference SciNet, a 4000-server supercomputer with 30,000 cores based on IBM [iDataPlex] servers. At 350 TeraFLOPs it is ranked the #16 fastest supercomputer in the world, and #1 in Canada. With a power usage effectiveness (PUE) of less than 1.2, it also is very energy efficient. This means that for every 12 watts of electricity going in to the data center, 10 watts are used for servers, storage and networking gear, and only 2 watts are used for power and cooling. Traditional data centers have a PUE around 2.5, consuming 25 watts total for every 10 watts used by servers, storage and networking gear.
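The PUE arithmetic is simple enough to sketch in a few lines; the function name below is mine, not an IBM tool:

```python
def pue(total_facility_watts, it_watts):
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_watts / it_watts

# SciNet-style data center: 12 W total for every 10 W of IT gear
print(pue(12, 10))   # 1.2
# Traditional data center: 25 W total for every 10 W of IT gear
print(pue(25, 10))   # 2.5
```

The lower the PUE, the closer the facility is to spending its electricity on actual computing rather than on power conversion and cooling; a PUE of 1.0 would be the theoretical ideal.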
Clod Barrera, Distinguished Engineer, Chief Technical Strategist for IBM System Storage
Clod presented trends and directions for disk and tape technology, disk and tape systems, and the direction towards cloud computing.
Continuing my week in Chicago for the IBM Storage and Storage Networking Symposium and System x and BladeCenter Technical Conference, I presented a variety of topics.
Hybrid Storage for a Green Data Center
The cost of power and cooling has risen to be a #1 concern among data centers. I presented the following hybrid storage solutions that combine disk with tape. These provide the best of both worlds, the high performance access time of disk with the lower costs and reduced energy consumption of tape.
IBM [System Storage DR550] - IBM's Non-erasable, Non-rewriteable (NENR) storage for archive and compliance data retention
IBM Grid Medical Archive Solution [GMAS] - IBM's multi-site grid storage for PACS applications and electronic medical records [EMR]
IBM Scale-out File Services [SoFS] - IBM's scalable NAS solution that combines a global name space with a clustered GPFS file system, serving as the ideal basis for IBM's own [Cloud Computing and Storage] offerings
Not only do these help reduce energy costs, they provide an overall lower total cost of ownership (TCO) than traditional WORM optical or disk-only storage configurations.
The Convergence of Networks - Understanding SAN, NAS and iSCSI in the Data Center Network
This turned out to be my most popular session. Many companies are at a crossroads in choosing data and storage networking solutions in light of recent announcements from IBM and others. In the span of 75 minutes, I covered:
Block storage concepts, storage virtualization and RAID levels
File system concepts, how file systems map files to block storage
Network Attach Storage, the history of the NFS and CIFS protocols, Pros and Cons of using NAS
Storage Area Networks, the history of SAN protocols including ESCON, FICON and FCP, Pros and Cons of using SAN
IP SAN technologies, iSCSI and Fibre Channel over Ethernet (FCoE), Pros and Cons of using this approach
Network Convergence with Infiniband and Fibre Channel over Convergence Enhanced Ethernet (FCoCEE), why Infiniband was not adopted historically in the marketplace as a storage protocol, and the features and enhancements of Convergence Enhanced Ethernet (CEE) needed to merge NAS, SAN and iSCSI traffic onto a single converged data center network [DCN]
Yes, it was a lot of information to cover, but I managed to get it done on time.
IBM Tivoli Storage Productivity Center version 4.1 Overview and Update
In conferences like these, there are two types of product-level presentations. An "Overview" explains how a product works today to those who are not familiar with it. An "Update" explains what's new in this version of the product for those who are already familiar with previous releases. I decided to combine these into one session for IBM's new version of [Tivoli Storage Productivity Center]. I was one of the original lead architects of this product many years ago, and was able to share many personal experiences about its evolution in development and in the field at client facilities. Analysts have repeatedly rated IBM Productivity Center as one of the top Storage Resource Management (SRM) tools available in the marketplace.
Information Lifecycle Management (ILM) Overview
Can you believe I have been doing ILM since 1986? I was the lead architect for DFSMS, which provides ILM support for z/OS mainframes. In 2003-2005, I spent 18 months in the field performing ILM assessments for clients, and now there are dozens of IBM practitioners in Global Technology Services and STG Lab Services that do this full time. This is a topic I cover frequently at the IBM Executive Briefing Center [EBC], because it addresses several top business challenges:
Reducing costs and simplifying management
Improving efficiency of personnel and application workloads
Managing risks and regulatory compliance
IBM has a solution based on five "entry points". The advantage of this approach is that it allows our consultants to craft the right solution to meet the specific requirements of each client situation. These entry points are:
Tiered Information Infrastructure - we don't limit ourselves to just "Tiered Storage", as storage is only part of a complete [information infrastructure] of servers, networks and storage
Storage Optimization and Virtualization - including virtual disk, virtual tape and virtual file solutions
Process Enhancement and Automation - an important part of ILM is the policies and procedures, such as IT Infrastructure Library [ITIL] best practices
Archive and Retention - space management and data retention solutions for email, database and file systems
I did not get as many attendees as I had hoped for this last one, as I was competing head-to-head in the same time slot as Lee La Frese covering IBM's DS8000 performance with Solid State Disk (SSD) drives, John Sing covering Cloud Computing and Storage with SoFS, and Eric Kern covering IBM Cloudburst.
I am glad that I was able to make all of my presentations at the beginning of the week, so that I can then sit back and enjoy the rest of the sessions as a pure attendee.
Continuing my week in Chicago, I decided to attend some of the presentations from the System x side. This is the advantage of running both conferences in the same hotel, attendees can choose how many of each they want to participate in.
Wayne Wigley, IBM Advanced Technical Support (ATS), gave a series of presentations on the different server virtualization offerings available for System x and BladeCenter servers. I am very familiar with virtualization implemented on System z mainframes, as well as IBM's POWER systems, and have working knowledge of Linux KVM and Xen, so I was well prepared to hear the latest about Microsoft's Hyper-V and VMware's vSphere version 4.
Microsoft Hyper-V 2008
Hyper-V can run as part of Windows Server 2008, or standalone on its own. Different levels of Windows Server 2008 include licenses for different numbers of Windows virtual machines (VMs). Windows Server 2008 Standard includes 1 Windows VM, Enterprise includes 4 Windows VMs, and the Datacenter edition includes an unlimited number of Windows VMs. If you want to run more Windows VMs than come included, you need to pay extra for each additional one. For example, to run 10 Windows VMs on a 2-socket server would cost about $9000 US dollars on Standard but only $6000 US dollars on the Datacenter edition (list prices from the Microsoft Web site).
Unlike VMware, which takes a monolithic approach to its hypervisor, Hyper-V is more like Xen with a microkernelized approach. This means you need a "parent" guest OS image, and the rest of the guest OS images are then considered "child" images. These child images can be various levels of Windows, from Windows XP Pro to Windows Server 2008, Xen-enabled Linux, or even a non-hypervisor-aware OS. The "parent" guest OS image provides networking and storage I/O services to these "child" images. For the hypervisor-aware versions of Windows and Linux, Hyper-V allows optimized access to the hypervisor, "synthetic devices", and hypercalls. Synthetic devices present themselves as network devices, but only serve to pass data along the VMBus to other networking resources. This process does not require software emulation, and therefore offers higher performance for virtual machines and lower host system overhead. For non-hypervisor-aware OS images, Hyper-V provides device emulation through the "parent" image, which is slower.
Microsoft System Center Virtual Machine Manager (SCVMM) can manage both Hyper-V and VMware VI3 images. Wayne showed various screen shots of the GUI available to manage Hyper-V images. In standalone mode, you lose the nice GUI and management console.
Hyper-V supports external, internal and private virtual LANs (VLAN). External means that VMs can communicate with the outside world over standard ethernet connections. Internal means that VMs can communicate with "parent" and "child" guest images on the same server only. Private means that only "child" guests can communicate with other "child" images.
Hyper-V supports disk attached via IDE, SATA, SCSI, SAS, FC, iSCSI, NFS and CIFS. One mode is "Virtual Hard Disk" (VHD) similar to VMware VMDK files. The other is "pass through" mode, which are actual disk LUNs accessed natively. VHDs can be dynamic (thin provisioned), fixed (fully allocated), or differencing. The concept of differencing is interesting, as you start with a base read-only VHD volume image, and have a separate "delta" file that contains changes from the base image.
Some of the key features of Hyper-V 2008 are:
Being able to run concurrently 32-bit and 64-bit versions of Linux and Windows guest images
Support for 64 GB of memory and 4-way symmetric multiprocessing (SMP) per VM
Clustering for High Availability and Quick Migration of VM images
Live backup with integration with Microsoft's Volume Shadow Copy Services (VSS)
Virtual LAN (VLAN) support, and Virtual and Pass-through physical disk support
A clever VMbus, virtual service parent/client approach to sharing hardware
Optimized performance options for hypervisor-aware versions of Windows and Linux, and emulated support for non-hypervisor-aware OS images.
VMware vSphere v4.0
This was titled as an "Overview" session, but really was an "Update" session on the newest features of this release. The big change appears to be that VMware added "v" in front of everything.
Under vCompute, there are some new features in VMware's Distributed Resource Scheduler (DRS), which now includes recommended VM migrations. Dynamic Power Management (DPM) will move VMs during periods of low usage to consolidate onto fewer physical servers so as to reduce energy consumption.
Under vStorage, vSphere introduces an enhanced Pluggable Storage Architecture (PSA), with support for Storage Array Type Plugins (SATP) and Path Selection Plugins (PSP). This vStorage API allows for third-party plugins for improved fault-tolerance and complex I/O load balancing algorithms. This release also has improved support for iSCSI, including Challenge-Handshake Authentication Protocol (CHAP) support. Similar to Hyper-V's dynamic VHD, VMware supports "thin provisioning" for their virtual disk VMDK files. A feature of "Storage VMotion" allows conversion between "thick" and "thin" provisioning formats.
The vStorage API for Data Protection provides all the features of VMware Consolidated Backup (VCB). The API provides full, incremental and differential file-level backups for Windows and Linux guests, including support for snapshots and Volume Shadow Copy Services (VSS) quiescing.
VMware introduces direct I/O pass-through for both NIC and HBA devices. While this allows direct access to SAN-attached LUNs similar to Hyper-V, you lose a lot of features like VMotion, High Availability and Fault Tolerance. Wayne felt that these restrictions are temporary, and that hopefully VMware will resolve them over the next 12 months.
Under vNetwork, VMware has virtual LAN switches called vSwitches. This includes support for IPv6 and VLAN offloading.
The vSphere server can now run with up to 1TB of RAM and 64 logical CPUs to support up to 320 VM guest images. Each VM can have up to 255GB RAM and up to 8-way SMP. vSphere ESX 4 introduces a new virtual hardware platform called VM Hardware v7. While vSphere 4.0 can run VMs from ESX 2 and ESX 3, the problem is that if you have new VMs based on this newer VM Hardware v7, you cannot run them on older ESX versions.
vSphere comes in four sizes: Standard, Advanced, Enterprise, and Enterprise Plus, ranging in list price from $795 US dollars to $3495 US dollars.
While IBM is the #1 reseller of VMware, we also are proud to support Hyper-V, Xen, KVM and other similar products. Analysts expect most companies will have two or more server virtualization solutions in their data center, and it is good to see that IBM supports them all.
Continuing my week in Chicago, for the IBM Storage Symposium 2009, I attended what in my opinion was the best session of the week. This was by a guy named Chip Copper, who covered IBM's set of Ethernet and Fibre Channel networking gear. Attributes are the four P's:
Power and Cooling (electricity usage)
Equipment comes in two flavors: Top-of-Rack (ToR) thin pizza-box switches, and Middle-of-Row (MoR) much larger directors. The MoR directors are engineered for up to 50Gbps per half-slot, so 10GbE and the future 40GbE can be easily accommodated in a single half-slot, and the future 100GbE can be done with a full slot (two half-slots).
While many companies might have been contemplating the switch from copper wires to optical fiber, there is a new reason for copper cables: Power-over-Ethernet (PoE). Many IP-phones, digital video surveillance cameras, and other equipment can have a single cable that delivers both signal and electricity over copper. If you have already deployed optical fiber throughout the building, there are "last mile" options where the signals are converted to copper wires and electrical energy added for these types of devices.
Two directors can be connected together with Inter-Chassis Link (ICL) cables to make them look like a single director with twice the number of ports. These are different from Inter-Switch Links (ISL) in that they are not counted as an extra "hop" for hop-counting purposes, which is especially important for FICON usage.
Today, we have 1Gbps, 2Gbps, 4Gbps and 8Gbps Fibre Channel. Since these all use 10-for-8 encoding (10 bits represent one 8-bit byte), it is easy to calculate throughput: 8Gbps is 800 MB/sec, for example. Auto-negotiation between speeds is not done at the HBA card, switch or director blade itself, but in the Small Form-factor Pluggable (SFP) optical connector. However, you can only auto-negotiate if the encoding matches. The 4/2/1 SFP can run at 4Gbps or auto-negotiate down to the slower 2Gbps and 1Gbps. The 8/4/2 SFP can run at 8Gbps, or auto-negotiate down to the slower 4Gbps and 2Gbps. Folks who still have legacy 1Gbps equipment, but want to run some things at 8Gbps, can buy 8Gbps-capable switches or director blades, but then put some 4/2/1 SFPs into them. These 4/2/1 SFPs are cheaper, so this might be something to consider if budgets are tight. Some SFPs handle up to 10km distances, but others only 4km, so be careful not to order the wrong ones.
Unfortunately, there are proposals in place for 10Gbps and 40Gbps that would use a different 66-for-64 encoding (66 bits represent 64 bits, or 8 bytes), so 10Gbps would be about 1200 MB/sec. These are used today for ISLs between directors and switches. In theory, the 40Gbps could auto-negotiate down to 10Gbps, but not to any of the 8/4/2/1 Gbps speeds that use the different 10-for-8 encoding.
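The encoding math above can be sketched in a few lines; the function name is mine, and this ignores framing and protocol overhead, so treat the results as the same back-of-the-envelope figures Chip used:

```python
def usable_MBps(line_rate_gbps, data_bits, coded_bits):
    """Approximate usable throughput in MB/sec for a given line rate and encoding.

    10-for-8 (8b/10b): 10 bits on the wire carry one 8-bit byte.
    66-for-64 (64b/66b): 66 bits on the wire carry 64 data bits.
    """
    data_bits_per_sec = line_rate_gbps * 1_000_000_000 * data_bits / coded_bits
    return data_bits_per_sec / 8 / 1_000_000  # bits -> bytes -> MB

print(usable_MBps(8, 8, 10))           # 800.0 MB/sec for 8Gbps Fibre Channel
print(round(usable_MBps(10, 64, 66)))  # 1212 MB/sec for 10Gbps, roughly 1200
```

The efficiency difference is why the two encoding families cannot auto-negotiate with each other: 8b/10b spends 20 percent of the line rate on encoding overhead, while 64b/66b spends only about 3 percent.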
For those who cannot afford a SAN768B, there is a smaller SAN384B that can carry 192 ports (4Gbps/2Gbps), 128 ports (8Gbps) or 24 ports (10Gbps). The SAN384B can be ICL-connected to another SAN384B or even the SAN768B as your needs grow.
On the entry-level side, the SAN24B-4 offers a feature called "Access Gateway". This makes the SAN24B look like a SAN end-point host, rather than a switch, and makes initial deployment of integrated bundled solutions easier. Once connected to everything, you can convert it over to full "switch" mode. The SAN40B-4 and SAN80B-4 provide midrange-level support, including Fibre Channel routing at the 8Gbps level. In fact, all 8Gbps ports include routing capability. IBM offers both single-port and dual-port 8Gbps host bus adapter (HBA) cards to connect to these switches. These HBAs offer 16 virtual channels per port, so that if you have VMware running many guests, or want to connect both disk and tape to the same HBA, you can keep the channel traffic separate for Quality of Service (QoS).
Chip wrapped up his session by discussing Fibre Channel over Ethernet (FCoE), and explained why we need a loss-less Convergence Enhanced Ethernet (CEE) to meet the needs of storage traffic as well as traditional Fibre Channel does today. IBM offers all of the equipment you need to get started today on FCoCEE, with Converged Network Adapter (CNA) cards for your System x servers, and a new SAN32B that has 24 10GbE CEE ports and 8 traditional 8Gbps FC ports. This means that you can put the CNA card in your existing servers, connect to this switch, and then connect to your existing 10GbE LAN and your existing 8Gbps or 4Gbps FC-based SAN to the rest of your storage devices.
Worried that the FCoE or CEE standards could change after you deploy this gear? Aren't most LAN and SAN switches based on Application-Specific Integrated Circuit [ASIC] chips which are created in the factory? Don't worry, IBM's equipment has put all the standards-vulnerable portions of the logic into a separate Field-Programmable Gate Array [FPGA] that can be updated with simply a firmware upgrade. This is future-proofing I can agree with!
Continuing my week in Chicago, for the IBM Storage Symposium 2009, I attended two presentations on XIV.
XIV Storage - Best Practices
Izhar Sharon, IBM Technical Sales Specialist for XIV, presented best practices for using XIV in various environments. He started out explaining the innovative XIV architecture: a SATA-based disk system from IBM can outperform FC-based disk systems from other vendors using massive parallelism. He used a sports analogy:
"The men's world record for running 800 meters was set in 1997 by Wilson Kipketer of Denmark in a time of 1:41.11.
However, if you have eight men running, 100 meters each, they will all cross the finish line in about 10 seconds."
Since XIV is already self-tuning, what kind of best practices are left to present? Izhar presented best practices for software, hosts, switches and storage virtualization products that attach to the XIV. Here are some quick points:
Use as many paths as possible.
IBM does not require you to purchase and install multipathing software as other competitors might. Instead, the XIV relies on multipathing capabilities inherent to each operating system. For multipathing preference, choose Round-Robin, which is now available on AIX and VMware vSphere 4.0, for example. Otherwise, fixed-path is preferred over most-recently-used (MRU).
Encourage parallel I/O requests.
XIV architecture does not subscribe to the outdated notion of a "global cache". Instead, the cache is distributed across the modules to reduce performance bottlenecks. Each HBA on the XIV can handle about 1400 requests. If you have fewer than 1400 hosts attached to the XIV, you can further increase parallel I/O requests by specifying a larger queue depth in the host bus adapter (HBA). An HBA queue depth of 64 is a good start. Additional settings might be required in the BIOS, operating system or application for multiple threads and processes.
For sequential workloads, select a host stripe size less than 1MB. For random workloads, select a host stripe size larger than 1MB. Set rr_min_io between ten (10) and the queue depth (typically 64); setting it to half of the queue depth is a good starting point.
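Izhar's starting point for rr_min_io can be sketched as a one-liner; the helper name is mine, not an XIV or OS utility:

```python
def rr_min_io_start(queue_depth):
    """Start at half the HBA queue depth, clamped to Izhar's 10..queue_depth range."""
    return min(max(queue_depth // 2, 10), queue_depth)

print(rr_min_io_start(64))  # 32
print(rr_min_io_start(16))  # 10 (half would be 8, below the suggested floor of ten)
```

From there, tune up or down based on measured I/O behavior rather than treating the starting point as final.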
If you have long-running batch jobs, consider breaking them up into smaller steps and run in parallel.
Define fewer, larger LUNs
Generally, you no longer need to define many small LUNs, a practice that was often required on traditional disk systems. This means that you can now define just 1 or 2 LUNs per application, and greatly simplify management. If your application must have multiple LUNs in order to do multiple threads or concurrent I/O requests, then, by all means, define multiple LUNs.
Modern Data Base Management Systems (DBMS) like DB2 and Oracle already parallelize their I/O requests, so there is no need for host-based striping across many logical volumes. XIV already stripes the data for you. If you use Oracle Automated Storage Management (ASM), use 8MB to 16MB extent sizes for optimal performance.
For those virtualizing XIV with SAN Volume Controller (SVC), define managed disks as 1632GB LUNs, in multiples of six LUNs per managed disk group (MDG), to balance across the six interface modules. Define the SVC extent size as 1GB.
XIV is ideal for VMware. Create big LUNs for your VMFS that you can access via FCP or iSCSI.
Organize data to simplify Snapshots.
You no longer need to separate logs from databases for performance reasons. However, for some backup products like IBM Tivoli Storage Manager (TSM) for Advanced Copy Services (ACS), you might want to keep them separate for snapshot reasons. Generally, putting all data for an application on one big LUN greatly simplifies administration and snapshot processing, without losing performance. If you define multiple LUNs for an application, simply put them into the same "consistency group" so that they are all snapshot together.
OS boot image disks can be snapshot before applying any patches, updates or application software, so that ifthere are any problems, you can reboot to the previous image.
Employ sizing tools to plan for capacity and performance.
The SAP Quicksizer tool can be used for new SAP deployments, employing either the user-based or throughput-based sizing model approach. The result is in a mythical unit called "SAPS", which represents 0.4 IOPS for ERP/OLTP workloads, and 0.6 IOPS for BI/BW and OLAP workloads.
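The SAPS-to-IOPS conversion Izhar described reduces to a simple multiplication; the function and table names are mine, with the ratios taken from the talk:

```python
SAPS_TO_IOPS = {
    "ERP/OLTP": 0.4,   # 0.4 IOPS per SAPS for ERP/OLTP workloads
    "BI/BW":    0.6,   # 0.6 IOPS per SAPS for BI/BW and OLAP workloads
}

def saps_to_iops(saps, workload):
    """Convert an SAP Quicksizer SAPS figure into an approximate IOPS requirement."""
    return saps * SAPS_TO_IOPS[workload]

print(saps_to_iops(10000, "ERP/OLTP"))  # 4000.0
```

So a 10,000-SAPS ERP deployment would translate to roughly 4,000 IOPS of disk capability to plan for.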
If you already have SAP or other applications running, use actual I/O measurements. IBM Business Partners and field technical sales specialists have an updated version of Disk Magic that can help size XIV configurations fromPERFMON and iostat figures.
Lee La Frese, IBM STSM for Enterprise Storage Performance Engineering, presented internal lab test results for the XIV under various workloads, based on the latest hardware/software levels [announced two weeks ago]. Three workloads were tested:
Web 2.0 (80/20/40) - 80 percent READ, 20 percent WRITE, 40 percent cache hits for READ. YouTube, Flickr, and the growing list at [GoWeb20] are applications with heavy read activity, but because of [long-tail effects], may not be as cache friendly.
Social Networking (50/50/50) - 50 percent READ, 50 percent WRITE, 50 percent cache hits for READ. Lotus Connections, Microsoft SharePoint, and many other [social networking] uses are more write intensive.
Database (70/30/50) - 70 percent READ, 30 percent WRITE, 50 percent cache hits for READ. The traditional workload characteristics for most business applications, especially databases like DB2 and Oracle on Linux, UNIX and Windows servers.
The results were quite impressive. There was more than enough performance for tier 2 application workloads, and most tier 1 applications. The performance was nearly linear from the smallest 6-module to the largest 15-module configuration. Some key points:
A full 15-module XIV overwhelms a single SVC 8F4 node pair. For a full XIV, consider 4 to 8 nodes of the 8F4 model, or 2 to 4 nodes of the 8G4. For read-intensive cache-friendly workloads, an SVC in front of XIV was able to deliver over 300,000 IOPS.
A single-node TS7650G ProtecTIER can handle 6 to 9 XIV modules. Two nodes of TS7650G were needed to drive a full 15-module XIV. A single-node TS7650G in front of XIV was able to ingest 680 MB/sec on the seventh day of a test workload with a 17 percent per-day change rate, using 64 virtual drives. Reading the data back achieved over 950 MB/sec.
For SAP environments where response times of 20-30 msec are acceptable, the 15-module XIV delivered over 60,000 IOPS. Reducing this down to 25,000-30,000 IOPS cut the response time to a faster 10-15 msec.
These were all done as internal lab tests. Your mileage may vary.
Not surprisingly, XIV was quite the popular topic here this week at the Storage Symposium. There were many moresessions, but these were the only two that I attended.
Continuing my week in Chicago, at the IBM System x and BladeCenter Technical Conference, I attended an
awesome session that summarized IBM's Linux directions. Pat Byers presented the global forces that are
forcing customers to re-evaluate the TCO of their operating system choices, the need for rapid integration
in an ever-changing business climate, government stimulus packages, and technology that has enabled much
better solutions than we had during the last economic downturn in 2001-2003.
IBM has been committed to Linux for over 10 years now. I was part of the initial IBM team in the 1990s to work on Linux for the mainframe. In various roles, I helped get Linux attachment tested for disk and tape systems, and helped get Linux selected as an operating system platform of choice for our storage management software.
Today, Linux-based servers generate $7 billion US dollars in revenues. For UNIX customers, Linux provides greater flexibility in hardware platform choice. For Windows customers, Linux provides better security and reliability.
Initially, Linux was used for simple infrastructure applications, edge-of-the-network and Web-based workloads.
This evolved to Application and Data serving, Enterprise applications like ERP, CRM and SCM. Today,
Linux is well positioned to help IBM make our world a smarter planet, able to handle business-critical applications. It is the only operating system to scale to the full capability of the biggest IBM System x3950M2 server.
Pat gave examples of IBM's work with Linux helping clients.
City of Stockholm
The city of Stockholm, Sweden introduced congestion pricing to reduce traffic.
IBM helped them deploy systems to collect tariffs from 300,000 vehicles a day, with real-time scanning and recognition of vehicle license plates, Web-accessible payment processing, and analytics for metrics and reporting. This configuration was able to
[reduce traffic by 25 percent in the first month].
IBM helped [ConAgra Foods] switch their SAP environment from a monolithic Solaris on SPARC deployment, to a more distributed one using Novell SUSE Linux on x86. The result? Six times faster performance at 75 percent lower total cost of ownership!
IBM's strategy has been to focus on working with two of the major Linux distributors: Red Hat and Novell. It also works with [Asianux] which is like the UnitedLinux for Asia, internationalized for Japan, Korea, and China. It handles special requests for other distributions, from CentOS to Ubuntu, as needed on a case by case basis.
IBM's Linux Technology Center of 600 employees helps to enable IBM products for Linux, make Linux a better operating system, expand Linux's reach, and drive collaboration and innovation. In fact, IBM is the #3 corporate contributor to the open source Linux kernel, behind Red Hat (#1) and Novell (#2). For most IBM products, IBM tests with Linux as rigorously as it does with Microsoft Windows. IBM offers complete RTS/ServicePac and SupportLine service and support contracts for Red Hat and Novell Linux.
At the IBM Solutions Center this week, several booths used Linux bootable USB sticks to run their software.
[Novell SUSE Studio] was developed to help
customize Linux to the specific needs for independent vendors.
Both Red Hat and Novell offer distributions in four categories:
Standard - for small entry-level servers, with support for a few virtual guests
Advanced Platform - for bigger servers, and support for many or unlimited number of virtual guests
High Performance Computing - HPC and Analytics for large grid deployments
Real Time - for real time processing, such as with
[IBM WebSphere Real Time], where
sub-second response time is critical.
A key difference between Red Hat and Novell appears to be on their strategy towards server virtualization.
Red Hat wants to position itself as the hypervisor of choice, for both server and desktop virtualization, announcing Kernel-based Virtual Machine
[KVM] on their Red Hat Enterprise Linux (RHEL) 5.4 release, and their new upcoming
RHEV-H, a tight 128MB hypervisor to compete against VMware ESXi. Meanwhile, Novell is focusing SUSE to be
the perfect virtual guest OS, being hypervisor-aware and having consistent terms and licensing when run under any hypervisor, including VMware, Hyper-V, Citrix Xen, KVM or others.
IBM has tons of solutions that are based on Linux, including the IBM Information Server blade, the InfoSphere Balanced Warehouse, SAN Volume Controller (SVC), TS7650 ProtecTIER data deduplication virtual tape library, Grid Medical Archive Solution (GMAS), Scale-out File Services (SoFS), Lotus Foundations, and the IBM Smart Cube.
If you are interested in trying out Linux, IBM offers evaluation copies at no charge for 30 to 90 days. For
more on how to deploy Linux successfully on IBM servers, see the
[IBM Linux Blueprints] landing page.
Continuing my week in Chicago, for the IBM Storage Symposium 2009, we had sessions that focused on individual products. IBM System Storage SAN Volume Controller (SVC) was a popular topic.
SVC - Everything you wanted to know, but were afraid to ask!
Bill Wiegand, IBM ATS, who has been working with SAN Volume Controller since it was first introduced in 2003, answered some frequently asked questions about IBM System Storage SAN Volume Controller.
Do you have to upgrade all of your HBAs, switches and disk arrays to the recommended firmware levels before upgrading SVC? No. These are recommended levels, but not required. If you do plan to update firmware levels, focus on the host end first, switches next, and disk arrays last.
How do we request special support for stuff not yet listed on the Interop Matrix?
Submit an RPQ/SCORE, same as for any other IBM hardware.
How do we sign up for SVC hints and tips? Go to the IBM
[SVC Support Site] and select the "My Notifications" under the "Stay Informed" box on the right panel.
When we call IBM for SVC support, do we select "Hardware" or "Software"?
While the SVC is a piece of hardware, there are very few mechanical parts involved. Unless there are sparks,
smoke, or front bezel buttons dangling from springs, select "Software". Most of the questions are
related to the software components of SVC.
When we have SVC virtualizing non-IBM disk arrays, who should we call first?
IBM has world-renowned service, with some of IT's smartest people working the queues. All of the major storage vendors play nice as part of the [TSAnet Agreement] when a mutual customer is impacted. When in doubt, call IBM first, and if necessary, IBM will contact other vendors on your behalf to resolve the issue.
What is the difference between livedump and a Full System Dump?
Most problems can be resolved with a livedump. While not complete information, it is generally enough,
and is completely non-disruptive. Other times, the full state of the machine is required, so a Full System Dump
is requested. This involves rebooting one of the two nodes, so virtual disks may temporarily run slower on that node.
What does "svc_snap -c" do? The "svc_snap" command on the CLI generates a snap file, which includes the cluster error log and trace files from all nodes. The "-c" parameter includes the configuration and virtual-to-physical mapping that can be useful for
disaster recovery and problem determination.
I just sent IBM a check to upgrade my TB-based license on my SVC, how long should I wait for IBM to send me a software license key?
IBM trusts its clients. No software license key will be sent. Once the check clears, you are good to go.
During migration from old disk arrays to new disk arrays, I will temporarily have 79TB more disk under SVC management, do I need to get a temporary TB-based license upgrade during the brief migration period?
Nope. Again, we trust you. However, if you are concerned about this at all, contact IBM and they will print out
a nice "Conformance Letter" in case you need to show your boss.
How should I maintain my Windows-based SVC Master Console or SSPC server?
Treat this like any other Windows-based server in your shop, install Microsoft-recommended Windows updates,
run Anti-virus scans, and so on.
Where can I find useful "How To" information on SVC?
Specify "SAN Volume Controller" in the search field of the
[IBM Redbooks] vast library of helpful books.
I just added more managed disks to my managed disk group (MDG), can I get help writing a script to redistribute the extents to improve wide-striping performance?
Yes, IBM has scripting tools available for download on
[AlphaWorks]. For example, svctools will take
the output of the "svcinfo" command, and generate the appropriate SVC CLI to re-migrate the disks around to optimize
performance. Of course, if you prefer, you can use IBM Tivoli Storage Productivity Center instead for a more comprehensive approach.
Any rules of thumb for sizing SVC deployments?
IBM's Disk Magic tool includes support for SVC deployments. Plan for 250 IOPS/TB for light workloads,
500 IOPS/TB for average workloads, and 750 IOPS/TB for heavy workloads.
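These rules of thumb are easy to turn into a quick estimate. Here is a minimal sketch (my own illustration, not part of Disk Magic; the function name is made up):

```python
# Back-of-envelope SVC sizing from the rules of thumb above.
# Illustrative only: use Disk Magic for real sizing work.
IOPS_PER_TB = {"light": 250, "average": 500, "heavy": 750}

def required_iops(capacity_tb: float, workload: str = "average") -> float:
    """Estimate the backend IOPS a given usable capacity must sustain."""
    return capacity_tb * IOPS_PER_TB[workload]

print(required_iops(100, "light"))   # 100 TB, light workload -> 25000.0
print(required_iops(100, "heavy"))   # same capacity, heavy workload -> 75000.0
```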
Can I migrate virtual disks from one managed disk group (MDG) to another of different extent size?
Yes, the new Vdisk Mirroring capability can be used to do this. Create the mirror for your Vdisk between the
two MDGs, wait for the copy to complete, and then split the mirror.
Can I add or replace SVC nodes non-disruptively? Absolutely, see the Technotes
[SVC Node Replacement] page.
Can I really order an SVC EE in Flamingo Pink? Yes. While my blog post that started all
this [Pink It and Shrink It] was initially just some Photoshop humor, the IBM product manager for SVC accepted this color choice as an RPQ option.
The default color remains Raven Black.
Continuing my week in Chicago, for the IBM Storage Symposium 2008, I attended several sessions intended to answer the questions of the audience.
In an effort to be cute, the System x team has a "Meet the xPerts" session at their System x and BladeCenter Technical Conference, so the storage side decided to do the same. Traditionally, these have been called "Birds of a Feather", "Q&A Panel", or "Free-for-All". They allow anyone to throw out a question, and have the experts in the room, either
IBM, Business Partner or another client, answer the question from their experience.
Meet the Experts - Storage for z/OS environments
Here were some of the questions answered:
I've seen terms like "z/OS", "zSeries" and "System z" used interchangeably, can you help clarify what this particular session is about?
IBM's current mainframe servers are all named "System z", such as our System z9 or System z10. These replace the older zSeries models of hardware. z/OS is one of the six operating systems that run on this hardware platform. The other five are z/VM, z/VSE, z/TPF, Linux and OpenSolaris. The focus of this session will be storage attached and used for z/OS specifically, including discussions of Omegamon and DFSMS software products.
What can we do to reduce our MIPS-based software licensing costs from our third party vendors?
Consider using the IBM System z Integrated Information Processor (zIIP).
What about 8 Gbps FICON?
IBM has already announced
[FICON Express8] host bus adapter (HBA) cards that will auto-negotiate to 4Gbps and 2Gbps speeds. If you don't need full 8Gbps speed now, you can
still get the Express8 cards, but populate them with 4/2/1 Gbps SFP ports instead. Currently, LongWave (LW) is only supported to 4km at 8Gbps speed.
I want to use Global Mirror for my DS8100 to my remote DS8100, but also make test copies of my production data to
an older ESS 800 I have locally. Any suggestions? Yes, consider using FlashCopy to simplify this process.
I have Global Mirror (GM) running now successfully with DSCLI, and now want to deploy IBM Tivoli Storage Productivity Center for Replication. Is that possible? Yes, Productivity Center for Replication will detect existing GM relationships, and start managing them.
I have already deployed HyperPAV and zHPF, is there any value in getting Solid-State Drives as well?
HyperPAV and zHPF impact CONN time, but SSD impacts DISC time, so they are mutually complementary.
How should I size my FlashCopy SE pool? SE refers to "Space Efficient", which stores only the changes
between the source and destination copies of each LUN or CKD volume involved. General recommendation is to start with 20 percent and adjust accordingly.
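That starting point lends itself to a back-of-envelope calculation. A minimal sketch (my own illustration; the helper name is made up):

```python
# FlashCopy SE repository sizing: start at ~20% of source capacity,
# then adjust to the change rate you actually observe.
def se_repository_gb(source_gb: float, change_rate: float = 0.20) -> float:
    """Estimate space-efficient repository size in GB."""
    return round(source_gb * change_rate, 1)

print(se_repository_gb(10_000))        # 10 TB of source volumes -> 2000.0
print(se_repository_gb(10_000, 0.35))  # heavier change rate -> 3500.0
```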
How many RAID ranks should I configure per DS8000 extent pool? IBM recommends 4 to 8 ranks per pool.
Meet the Experts: Storage for Linux, UNIX and Windows distributed systems
This session was focused on storage systems attached to distributed servers, as well as products from Tivoli used to manage them. Here were some of the questions answered:
When we migrated from Tivoli Storage Manager v5 to v6, we lost our favorite "Operational Reporting" tool. How can we get TOR back? You now get the new Tivoli Common Reporting tool.
How can we identify appropriate port distribution for multiple SVC node pairs for load balancing?
IBM Tivoli Storage Productivity Center v4.1 has hot-spot analysis with recommendations for Vdisk migrations.
We tried TotalStorage Productivity Center way back when, but the frequent upgrades were killing us. How has it been lately? It has been much more stable since v3.3, and completely renamed to Tivoli Storage Productivity Center to avoid association with versions 1 and 2 of the predecessor product. The new "lightweight agents" feature of v4.1 resolves many of the problems you were experiencing.
We have over 1600 SVC virtual disks, how do we handle this in IBM Tivoli Storage Productivity Center? Use the Filter capability in combination with clever naming conventions for your virtual disks.
How can we be clever when we are limited to only 15 characters? Ok. We understand.
We are currently using an SSPC with Windows 2003 and 2GB memory, but we are only using the Productivity Center for Replication feature of it. Can we move the DB2 database over to a Windows 2008 server with 4GB of memory?
Consider using the IBM Tivoli Storage Productivity Center for Replication software instead of SSPC for special
circumstances like this.
We love the XIV GUI, how soon will all other IBM storage products have it also? As with every acquisition,
IBM evaluates if there are technologies from new products that can be carried back to existing products.
We are currently using 12 ports on our existing XIV, and love it so much we plan to buy a second frame, but are concerned about consuming another 12 ports on our SAN switch. Any suggestions? Yes, use only six ports per frame. Just because you have more ports, doesn't mean you are required to use them.
We have heard there are concerns from the legal community about using deduplication technology, any ideas how to address that?
Nobody here in the room is a lawyer, and you should consult legal counsel for any particular situation.
None of the IBM offerings intended for non-erasable, non-rewriteable (NENR) data retention records (DR550, WORM tape, N series SnapLock) support dedupe today, and none of IBM's deduplication offerings (TS7650, N series A-SIS, TSM) make any fit-for-purpose claims for regulatory compliance storage. However, be assured that all of IBM's dedupe technology involves byte-for-byte comparisons so that you never lose any data due to false hash collisions. For all IBM compliance storage, what you write will be read back in the correct sequence of ones and zeros.
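The byte-for-byte comparison is the key safeguard, and is worth sketching: a hash match only nominates a duplicate candidate, and a full comparison confirms it before any data is discarded. This is my own illustration of the general technique, not IBM's implementation:

```python
import hashlib

store = {}  # digest -> list of distinct chunks sharing that digest

def dedupe_chunk(chunk: bytes) -> bytes:
    """Return the already-stored copy of an identical chunk, or store this one.

    A hash match alone is never trusted: a byte-for-byte comparison
    confirms the duplicate, so a hash collision can never lose data.
    """
    digest = hashlib.sha256(chunk).digest()
    for stored in store.setdefault(digest, []):
        if stored == chunk:       # full byte-for-byte comparison
            return stored         # true duplicate: reference the existing copy
    store[digest].append(chunk)   # new data (or a real collision): keep it
    return chunk

first = dedupe_chunk(b"block one")
second = dedupe_chunk(b"block one")
print(first is second)  # True: the second write was deduplicated
```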
Every January, we look back into the past as well as look into the future for trends to watch for the upcoming year. Ray Lucchesi of Silverton Consulting has a great post looking back at the [Top 10 storage technologies over the last decade]. I am glad to see that IBM has been involved with and instrumental in all ten technologies.
Looking into the future, Mark Cox of eChannel has an article [Storage Trends to Watch in 2011], based on his interviews with two fellow IBM executives: Steve Wojtowecz, VP of storage software development, and Clod Barrera, distinguished engineer and CTO for storage. Let's review the four key trends:
Cloud Storage and Cloud Computing
No question: Cloud Computing will be the battleground of the IT industry this decade. I am amused by the latest spate of Microsoft commercials where problems are solved with someone saying "...to the cloud". Riding on the coat tails of this is "Cloud Storage", the ability to store data across an Internet Protocol (IP) network, such as 10GbE Ethernet, in support of Cloud Computing applications. Cloud Storage protocols in the running include NFS, CIFS, iSCSI and FCoE.
Mark writes "..vendors who aren't investing in cloud storage solutions will fall behind the curve."
Economic Downturn forces Innovation
The old British adage applies: "Necessity is the mother of invention." The status quo won't do. In these difficult economic times, IT departments are running on constrained budgets and staff. This forces people to evaluate innovative technologies for storage efficiency like real-time compression and data deduplication to make better use of what they currently have. It also is forcing people to take a "good enough" attitude, instead of paying premium prices for best-of-breed they don't really need and can't really afford.
IT Service Management
Companies are getting away from managing individual pieces of IT kit, and are focusing instead on the delivery of information, from the magnetic surface of disk and tape media, to the eyes and ears of the end users. The deployment mix of private, hybrid and public clouds makes this even more important to measure and manage IT as a set of services that are delivered to the business. IT Service Management software can be the glue, helping companies implement ITIL v3 best practices and management disciplines.
Smarter Data Placement
A recent survey by "The Info Pro" analysts indicates that "managing storage growth" is considered more critical than "managing storage costs" or "managing storage complexity".
This tells me that companies are willing to spend a bit extra to deploy a tiered information infrastructure if it will help them manage storage growth, which typically ranges around 40 to 60 percent per year. While I have discussed the concept of "Information Lifecycle Management" (ILM), for the past four years on this blog, I am glad to see it has gone mainstream, helped in part with automated storage tiering features like IBM System Storage Easy Tier feature on the IBM DS8000, SAN Volume Controller and Storwize V7000 disk systems. Not all data is created equal, so the smart placement of data, based on the business value of the information contained, makes a lot of sense.
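To see why 40 to 60 percent annual growth gets management's attention, compound it over a few years. A quick illustration (my own arithmetic):

```python
# Compound effect of typical storage growth rates over five years.
def projected_tb(start_tb: float, annual_growth: float, years: int) -> float:
    return round(start_tb * (1 + annual_growth) ** years, 1)

print(projected_tb(100, 0.40, 5))  # 100 TB growing 40%/year -> 537.8 TB
print(projected_tb(100, 0.60, 5))  # 100 TB growing 60%/year -> 1048.6 TB
```

In other words, a shop growing at the high end of that range is storing roughly ten times as much data five years from now.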
These trends are influencing what solutions the various different vendors will offer, and will influence what companies purchase and deploy.
Recently, I spoke with Jarrett Potts, my long-time friend and former IBM colleague, who now works as Director of Strategic Marketing over at STORServer. If you have never heard of STORServer, it is a company that makes purpose-built backup appliances.
What is a Backup Appliance? It is an integrated solution of hardware and software that serves a single purpose: backup and recovery. STORServer Enterprise Backup Appliance (EBA) combines IBM's high-end x86 M4 server, IBM disk and tape storage, and IBM Tivoli Storage Manager (TSM) backup software.
(Fun Fact: The 2012 IBM year-end financial results were announced last month. IBM not only continues its #1 lead in servers overall, but has the #1 marketshare for high-end x86 servers, market-leading disk and tape storage hardware, and market leading backup software.)
To determine the appropriate size of your backup appliance, the folks at STORServer help you every step of the way. They figure out the number of TB you will backup every day, and even help configure all of the TSM server parameters to achieve the policies that make the most sense for your organization.
The appliance can backup every type of data, from databases and Virtual Machines (VMs) to documents, spreadsheets, and other unstructured data.
Are you then left with a solution too complicated to run yourself? No. The STORServer Console is an easy-to-use GUI for ongoing monitoring and maintenance. Plus, your friends at STORServer are only a phone call away in case you have any questions.
(FTC Disclosure: I work for IBM, and STORServer is an approved IBM Business Partner that uses IBM hardware and software to build their solution. I have no financial interest in STORServer, and was not paid by STORServer to mention their company or products on my blog. This post may be considered a celebrity endorsement of STORServer and its Enterprise Backup Appliances.)
Perhaps my readers feel that I am a bit biased in describing a TSM-based solution, and you want a second opinion. No worries, I understand. In the latest 165-page [2012 DCIG Backup Appliance Buyer's Guide], the STORServer models ranked very high. Here is an excerpt:
"Nowhere is this demand for purpose built appliances more evident than in the rise of purpose built backup appliances (PBBAs) over the last few years and their anticipated growth rate going forward. A recent market analysis performed by IDC found that worldwide PBBA revenue totaled $2.4 billion in 2011 which was a 42.4 percent increase over the prior year.
This scoring came into play in preparing this Buyer's Guide as the STORServer EBA 3100 model scored so highly overall that it fell outside of the two (2) standard deviations that DCIG generally uses as a guideline for inclusion and exclusion of products.
The reason DCIG included this model in this Buyer's Guide whereas in other situations it might not is that DCIG is unaware of any other backup appliance(s) from any other providers that come close to matching the EBA 3100's software and hardware attributes. As such, DCIG felt it would be doing STORServer specifically and the market generally a disservice by not highlighting in this Buyer's Guide that such a backup appliance existed and was generally available for purchase."
Backup Appliance Models
STORServer EBA 3100
Symantec NetBackup 5220 Backup Appliance
STORServer EBA 2100
STORServer EBA 1100
STORServer EBA 800
Symantec Backup Exec 3600 Appliance
The STORServer is ideal for small and medium-sized businesses (SMB), but can scale quite large to handle business growth. If you are unhappy with your current backup environment, and feel now is the time to look around for a better way of taking backups, you won't go wrong choosing a solution based on IBM's market-leading server and storage hardware with Tivoli Storage Manager software.
Continuing my ongoing discussion on Solid State Disk (SSD), fellow blogger BarryB (EMC) points out in his [latest post]:
Oh – and for the record TonyP, I don't think I ever said EMC was using a newer or different EFDs than IBM. I just asserted that EMC knows more than IBM about these EFDs and how they actually work a storage array under real-world workloads.
(Here "EFD" refers to "Enterprise Flash Drive", EMC's marketing term for Single-Level Cell (SLC) NAND Flash non-volatile solid-state storage devices. Both IBM and EMC have been selling solid-state storage for quite some time now, but EMC felt that a new term was required to distinguish the SLC NAND Flash devices sold in their disk systems from solid-state devices sold in laptops or blade servers. The rest of the industry, including IBM, continues to use the term SSD to refer to the same SLC NAND Flash devices that EMC is referring to.)
Although STEC asserts that IBM is using the latest ZeusIOPS drives, IBM is only offering the 73GB and 146GB STEC drives (EMC is shipping the latest ZeusIOPS drives in 200GB and 400GB capacities for DMX4 and V-Max, affording customers a lower $/GB, higher density and lower power/footprint per usable GB.)
Here is where I enjoy the subtleties between marketing and engineering. Does the above seem like he is saying EMC is using newer or different drives? What are typical readers expected to infer from the statement above?
That there are four different drives from STEC, in four different capacities. In the HDD world, drives of different capacities are often different, and larger capacities are often newer than those of smaller capacities.
That the 200GB and 400GB are the latest drives, and that 73GB and 146GB drives are not the latest.
That STEC press release is making false or misleading claims.
Uncontested, some readers might infer the above and come to the wrong conclusions. I made an effort to set the record straight. I'll summarize with a simple table:
Drive (raw capacity)     Usable (conservative format)     Usable (aggressive format)
STEC ZeusIOPS 256GB      146GB (IBM)                      200GB (EMC)
So, we all agree now that the 256GB drives that are formatted as 146GB or 200GB are in fact the same drives, that IBM and EMC both sell the latest drives offered by STEC, and that the STEC press release was in fact correct in its claims.
I also wanted to emphasize that IBM chose the more conservative format on purpose. BarryB [did the math himself] and proved my key points:
Under some write-intensive workloads, an aggressive format may not last the full five years. (But don't worry, BarryB assures us that EMC monitors these drives and replaces them when they fail within the five years under their warranty program.)
Conservative formats with double the spare capacity happen to have roughly double the life expectancy.
I agree with BarryB that an aggressive format can offer a lower $/GB than the conservative format. Cost-conscious consumers often look for less-expensive alternatives, and are often willing to accept lower reliability or shorter life expectancy as a trade-off. However, "cost-conscious" does not describe the typical EMC customer, who often pays a premium for the EMC label. To compensate, EMC offers RAID-6 and RAID-10 configurations to provide added protection. With a conservative format, RAID-5 provides sufficient protection.
(Just so BarryB won't accuse me of not doing my own math, a 7+P RAID-5 using conservative format 146GB drives would provide 1022GB of capacity, versus only 800GB total for a 4+4 RAID-10 configuration using aggressive format 200GB drives.)
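The arithmetic can be spelled out (my own illustration):

```python
# Usable capacity of the two configurations compared above.
def raid5_usable(data_drives: int, drive_gb: int) -> int:
    return data_drives * drive_gb   # the parity drive holds no user data

def raid10_usable(mirror_pairs: int, drive_gb: int) -> int:
    return mirror_pairs * drive_gb  # each mirrored pair stores one copy

print(raid5_usable(7, 146))   # 7+P RAID-5, conservative 146GB format -> 1022
print(raid10_usable(4, 200))  # 4+4 RAID-10, aggressive 200GB format -> 800
```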
In an ideal world, you the consumer would know exactly how many IOPS your application will generate over the next five years, exactly how much capacity you will require, be offered all three drives in either format to choose from, and make a smart business decision. Nothing, however, is ever this simple in IT.
Yesterday's post [Software Programmers as Bees] was not meant as "career advice", but certainly I got some interesting email as if it was. Orson Scott Card was poking fun at the culture clash between software programmers and management/marketers, and I gave my perspective, having worked both types of jobs.
This is June. Many students are graduating from high school or college and looking for jobs. Some of these might be jobs just for the summer to make some spending money, and others might be jobs like internships to explore different career paths. I found both programming and marketing are rewarding and interesting work, but each person is different.
There are a variety of ways to find out what your personality traits are, and then focus on those jobs or career paths that best fit those strengths. Here is an online [Typology Test] based on the work of psychologists Carl Jung and Isabel Briggs Myers. The result is a four-letter score that represents 16 possible personalities. For example, mine is "ENTP", which stands for "Extroverted, Intuitive, Thinking, Perceiving". You can find out other famous people that match your personality type. As an ENTP, I am lumped together with fellow master inventor Thomas Edison, fellow author Lewis Carroll (Alice in Wonderland), cooking great Julia Child, comedians George Carlin and Rodney Dangerfield (I get no respect!), movie director Alfred Hitchcock, and actor Tom Hanks.
USA Today had an article ["CEOs value lessons from teen jobs"] which offers some career advice from successful business people. Of course, what worked for them may not work for you, all based on different personality types. Here is an excerpt of the advice I thought the most useful:
"If you are committed, you will be successful." (unfortunately, the reverse is also true: if you are successful, you will be asked to move to a different job)
"Tackle offbeat jobs. Challenge conventional wisdom within reason. Come into contact with people from all walks of life."
"Show an interest, demonstrate you want to be on the job."
"Never limit yourself. Look beyond to what needs to be done, or should be done. Then do it. Stretch. Go beyond what others expect."
"Find a job that forces you to work effectively with people. No matter what you end up doing, dealing with others will be critical."
"Bring your best to the table every day. Learn professional responsibility and how to handle difficult situations."
"Listen carefully to what customers want."
Before IBM, I ran my own business. If you are thinking, "Maybe I will start my own business instead?" you might want to see this advice from Venture Capitalist [Guy Kawasaki on Innovation]. While running your own business has advantages, like avoiding issues "working for the man", it has some disadvantages as well. It is certainly not as easy as some people make it seem to be.
Of course, things are a lot different nowadays than they were when these CEOs were teenagers. And the pace of change does not seem to be slowing down any either. Here is a presentation on [SlideShare.net] that helps bring into focus the realities of globalization:
Wrapping up this week's theme on why the System z10 EC mainframe can replace so many older, smaller, underutilized x86 boxes. This all started as a way to help fellow bloggers Jon Toigo of DrunkenData and Jeff Savit from Sun Microsystems understand the IBM press release that we put out last February on this machine, with my post [Yes, Jon, there is a mainframe that can help replace 1500 x86 servers] and my follow-up post [Virtualization, Carpools and Marathons]. The computations were based on running 1500 unique workloads as Linux guests under z/VM, and not running them as z/OS applications.
My colleagues in IBM Poughkeepsie recommended these books to provide more insight and in-depth understanding. Looks like some interesting summer reading. I put in quotes the sections I excerpted from the synopsis I found for each.
"From Microsoft to IBM, Compaq to Sun to DEC, virtually every large computer company now uses clustering as a key strategy for high-availability, high-performance computing. This book tells you why-and how. It cuts through the marketing hype and techno-religious wars surrounding parallel processing, delivering the practical information you need to purchase, market, plan or design servers and other high-performance computing systems.
Microsoft Cluster Services ("Wolfpack")
IBM Parallel Sysplex and SP systems
DEC OpenVMS Cluster and Memory Channel
Tandem ServerNet and Himalaya
Intel Virtual Interface Architecture
Symmetric Multiprocessors (SMPs) and NUMA systems"
Fellow IBM author Gregory Pfister worked in IBM Austin as a Senior Technical Staff Member focused on parallel processing issues, but I never met him in person. He points out that workloads fall into regions called parallel hell, parallel nirvana, and parallel purgatory. Careful examination of machine designs and benchmark definitions will show that the "industry standard benchmarks" fall largely in parallel nirvana and parallel purgatory. Large UNIX machines tend to be designed for these benchmarks and so are particularly well suited to parallel purgatory. Clusters of distributed systems do very well in parallel nirvana. The mainframe resides in parallel hell, as do its primary workloads. The current confusion is where virtualization takes workloads, since there are no good benchmarks for it.
"In these days of shortened fiscal horizons and contracted time-to-market schedules, traditional approaches to capacity planning are often seen by management as tending to inflate their production schedules. Rather than giving up in the face of this kind of relentless pressure to get things done faster, Guerrilla Capacity Planning facilitates rapid forecasting of capacity requirements based on the opportunistic use of whatever performance data and tools are available in such a way that management insight is expanded but their schedules are not."
Neil Gunther points out that vendor claims of near-linear scaling are not to be trusted, and shows a method to "derate" scaling claims. His suggested scaling values for database servers are closer to IBM's LSPR-like scaling model than to TPC-C or SPEC scaling. I had mentioned in my post that "While a 1-way z10 EC can handle 920 MIPS, the 64-way can only handle 30,657 MIPS," but still people felt I was using "linear scaling". Linear scaling would mean that if a 1GHz single-core AMD Opteron can do four (4) MIPS, and a one-way z10 EC can do 920 MIPS, then one might assume that a 1GHz dual-core AMD could do eight (8) MIPS, and the largest 64-way z10 EC could theoretically do 64 x 920 = 58,880 MIPS. The reality is closer to 6.866 and 30,657 MIPS, respectively.
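The derating is easy to check against the MIPS figures above (my own arithmetic):

```python
# Compare the naive linear projection against the published z10 EC figures.
one_way_mips = 920
sixty_four_way_mips = 30657

linear_projection = 64 * one_way_mips        # what "linear scaling" would predict
efficiency = sixty_four_way_mips / linear_projection

print(linear_projection)           # 58880
print(round(efficiency * 100, 1))  # actual scaling efficiency: 52.1 percent
```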
This was never an IBM-vs-Sun debate. One could easily make the same argument that a large Sun or HP system could replace a bunch of small 2-way x86 servers from Dell. Both types of servers have their place and purpose, and IBM sells both to meet the different needs of our clients. The savings are in total cost of ownership: reducing power and cooling costs, floorspace, software licenses, administration costs, and outages.
I hope we covered enough information so that Jeff can go back to talking about Sun products, and I can go back to talking about IBM storage products.
To get beyond the simple statistics of vendor popularity, we looked at the number and combinations of vendors with which enterprises work. Many were customers of one or two storage providers, but the rest were customers of up to six storage providers. More than one-third were customers of systems vendors only, bypassing storage specialists.
Comparisons between solutions vendors and storage component vendors are not new. One could argue that this can be compared to supermarkets and specialty shops.
Supermarkets offer everything you need to prepare a meal. You can buy your meat, bread, cheese, and extras all with one-stop shopping. In a sense, IBM, HP, Sun and Dell are offering this to clients who prefer this approach. Not surprisingly, the two leaders in overall storage hardware, IBM and HP, are also the two best positioned to offer a complete set of software, services, servers and storage.
IBM and HP are also the leaders in tape. While Forrester reports that many large enterprises in North America prefer to buy disk from storage specialists, others have found that customers prefer to buy their tape from solution providers. Recently, Byte and Switch reported that LTO hit new milestones, with the LTO consortium (IBM, HP, and Quantum) having collectively shipped over 2 million LTO tape drives and over 80 million LTO tape cartridges. Perhaps this is because tape is part of an overall backup, archive or space management solution, and customers trust a solution vendor over a storage specialist.
Where possible, IBM brings synergy between its servers and storage. For example, we just announced the IBM BladeCenter Boot Disk System, a 2U high unit that supports up to 28 blade servers, ideal for applications running under Windows or Linux, and helping to reduce the energy consumption for those interested in a "Green" data center.
Some people prefer buying their meat at the slaughterhouse, bread at the French pastry shop, and so on. Storage specialists focus on just storage, leaving the rest of the solution, like servers, to be purchased separately from someone else. Storage vendors like NetApp, EMC, HDS and others offer storage components to customers that like to do their own "system integration", or to those that are large enough to hire their own "systems integrator".
Storage specialists recognize that not everybody is a "specialty shop" shopper. HDS has done well selling their disk through solution vendors like HP and Sun. EMC sells its gear through solution vendor Dell.
Interestingly, I have met clients who prefer to buy IBM System Storage N series from IBM, because IBM is a solution vendor, and others that prefer to buy comparable NetApp equipment directly from NetApp, because they are a storage component vendor.
I mostly buy my groceries at a supermarket, but have, on occasion, bought something from the local butcher, baker or candlestick maker. And if you are ever in Tucson, you might be able to find Mexican tamales sold by a complete stranger standing outside of a Walgreens pharmacy, the ultimate extreme of specialization. You can get a dozen tamales for ten bucks, and in my experience they are usually quite good. Theoretically, if you get sick, or they don't taste right, you have no recourse, and will probably never see that stranger again to complain to. (And no, before I get flamed, I am not implying any major vendor mentioned above is like this tamale vendor)
Of course, nothing is starkly black and white, and comparisons like this are just to help provide context and perspective, but if you are looking to have a complete IT solution that works, from software and servers to storage and financing, come to the vendor you can trust, IBM.
Over the past year and a half, I have been focused on explaining WHAT IBM System Storage was, and WHY IBM should be considered when making a storage purchase decision. Let's recap some of IBM's accomplishments during this time:
Today, October 1, I switch over to HOW to get it done. In my new job role, I will be leading a series of projects and workshops on how to make your data center more green, how to get more value from the information you have, how to better protect your information from unauthorized access or unethical tampering, how to develop and deploy a site-wide business continuity plan, and how to centralize your management using open industry standards.
I will still be in Tucson, but am moving from building 9032 over to 9070 to be closer to the rest of my team.
IBM and the Austin Chamber of Commerce are inviting registered SXSW Interactive attendees to a networking reception hosted by the IBM Innovation Center and the IBM Venture Capital Group. Power Systems and Watson will be featured prominently at this SXSW event, to be held on March 14, 2011.
While I won't be at the SXSW conference personally, I strongly recommend attending this event.
Innovators and Entrepreneurs Networking Reception
Four Seasons Hotel
March 14, 2011
Hosted by IBM Venture Capital Group, Austin Chamber of Commerce, and the IBM Innovation Center.
This reception will provide a rare opportunity to network and collaborate with your professional community of industry leaders, entrepreneurs, developers, academics, venture capitalists, and members of the Austin Chamber of Commerce.
(Note: While Lenovo officially took over the System x business on October 1 in the United States, China, and several other countries in Asia and the Americas, the transition has not yet happened in Europe. That is expected this December, resulting in some awkwardness during this period of transition.)
Day 1 started off with some keynote sessions. Amy Purdy, IBM Director of Training Services, was the emcee.
Gareth Tucker, Director of EMEA for Intel
Gareth focused on the strong partnership between IBM, Lenovo and Intel. For example, a client query that took 4 hours with a traditional DB2 database on Intel Xeon ran in only 90 seconds on DB2 BLU with the new Xeon V2 chip.
10 years ago, some storage vendors warned clients not to use any Intel-based storage devices. Today, over 85 percent of storage is Intel-based, including most of the IBM System Storage portfolio. IBM SoftLayer also uses Intel to offer both bare metal and virtual x86 servers, and was the first cloud provider to use Intel's "Trusted Execution" mode.
Microsoft will drop support for Windows Server 2003 on July 15, 2015. This represents an excellent selling opportunity to get clients to upgrade their x86 server hardware. Intel estimates there are 24 million instances of Windows Server 2003 worldwide. On average, it takes 150 days to migrate to Windows Server 2012, so get clients to start now!
Jeff Howard, Vice President of Lenovo Flex and BladeCenter
Jeff was a last-minute stand-in for Adalio Sanchez who is busy getting thousands of employees and hundreds of trailer trucks full of IT equipment from IBM's Raleigh location to Lenovo's new building in Morrisville.
Lenovo's goal is simple: to be the #1 vendor of x86 enterprise servers. Lenovo sees a $44 Billion USD opportunity in x86 servers, with an additional $14B opportunity selling IBM System Storage attached to these servers. Lenovo is already #1 for Personal Computers in the consumer space, and #1 for customer satisfaction. IBM System x is #1 in reliability and up-time for x86 servers. In a client survey asking how many had experienced an outage lasting four hours or more, fewer than 1 percent of IBM System x clients had, compared to 13 percent for HP servers. That's a big difference!
There is a 40 percent growth in "Converged Systems" such as the Flex System and PureFlex systems. Lenovo will take over the x86-only versions of these, while IBM will retain the POWER-based and Power-and-x86 hybrid models. IBM will also retain the PureApplication and PureData models of the PureSystems line.
Lenovo is also focused on security. Their "Trusted Platform" includes Self-encrypting Drives (SED) managed by IBM Security Key Lifecycle Manager software, and Crypto-assist co-processors.
Jeff also mentioned new reference architectures for VMware's VSAN, Microsoft's Fast-track Data warehouse for SQL Server, SmartCloud Desktop Infrastructure VDI with Atlantis ILIO, and Flex Systems for Hyper-V.
Greg Lotko, VP of IBM Storage Systems Development
Greg is the new VP of Storage Systems Development, about 11 months on the job, and I am glad to hear that he recognizes that IBM System Storage has a huge portfolio of products.
He focused on those areas where IBM is ranked #1:
IBM is #1 for All-Flash arrays.
IBM is #1 for Software Defined Storage (SDS).
IBM is #1 for Tape, including tape drives, tape libraries and virtual tape systems.
The weather here in Dublin is great, although I have not had much time to enjoy the outdoors with all the awesome and interesting sessions inside!
Before dinner, I was able to catch up with my colleagues from across the pond. Here I am pictured with Ola Surowiec, a Power Systems sales specialist from Scotland.
The dinner was set up as a self-service buffet, with choices of European, Asian, and Middle Eastern cuisine. This fusion of flavors from neighboring regions is largely the heritage of the Ottoman empire.
The city of Istanbul straddles the border between Europe and Asia, with one side of the city on the "European" side of the Bosphorus strait and the other on the "Asian" side.
With a population of over 14 million, Istanbul forms one of the largest urban agglomerations in Europe, second largest in the Middle East and the third-largest city in the world by population within its city limits.
The entertainment started with two [belly dancers], one male and one female. (IBM is an equal opportunity employer!) For those not familiar with this particular form of performance art, it is an improvised folk dance based on torso articulation and abdominal movements.
I have seen dancers before in Egypt, the country that most people associate with the origin of belly dancing, but the Turkish version is considered more energetic and athletic. Certainly both of our dancers were quite flexible.
This was followed by a live cover band that played the latest English-language hits. Several Americans at the table asked, "Wait, we came all the way to Turkey and the local band sings in English?"
In the corner, attendees were invited to dress up as their favorite sultan to take photographs. Here, for example, are some members of the STU event team. Mo McCullough, Don Meyer, Marlin Maddy, Glenn Anderson and Alex Abderrazag pose with two lovely local ladies in full costume.
The word "sultan" derives from the Arabic word meaning "strength", "authority" or "power". Sultans ruled the Ottoman empire from 1299 to 1922.
The [Topkapi palace], which I visited earlier in the week, displays clothing of the sultans and princes from the second half of the 15th century to the early 20th century.
The first official day of the [Systems Technical University 2014] conference had keynote sessions in the morning. The conference features experts from IBM Power Systems, IBM System x, IBM PureSystems, and IBM System Storage.
The keynote sessions were kicked off by Amy Purdy, IBM Director of Technical Training Services, the group that runs this conference.
This conference is not focused on System z solutions, as many System z clients were in New York City for the birthday event, but System z came up several times during the keynote sessions.
(FTC Disclosure: I work for IBM, and this blog post may be considered a paid, celebrity endorsement of IBM products and services. IBM has business relationships with both Intel and Amazon, mentioned during the course of the keynote sessions, but I have no financial stake in either company. I was the chief architect for DFSMS, the storage management component of the z/OS mainframe operating system, and was part of the team that ported Linux to the System z mainframe.)
Nicolas Sekkaki, IBM Vice President of Systems and Technology Group in Europe, discussed IBM's commitment to client privacy, the x86 and POWER server platforms, and a variety of mind-boggling announcements. He is focused on three trends: Big Data, Cloud, and Mobile.
IBM is focusing its hardware efforts on high-value, high-margin solutions such as System Storage, POWER Systems and System zEnterprise mainframe environments. Did you know that 65 percent of the world's business transactions are processed by either POWER systems or System zEnterprise mainframe?
IBM is also extending its continued focus on Linux and Open Source initiatives. For the System zEnterprise mainframes, 78 percent of our clients run Linux on System z. Over 290 clients have added the "zBX" option that allows them to run Windows and AIX on the mainframe as well. It is now less expensive to run workloads on System zEnterprise -- about 1 dollar per day per server -- than public cloud offerings from Amazon Web Services. Linux on POWER also has lower Total Cost of Ownership (TCO) than Linux-x86.
Nicolas also mentioned major changes for the POWER Systems, starting with the [OpenPOWER Consortium], formed by IBM, Google, Mellanox, NVIDIA and Tyan.
The move makes POWER hardware and software available to open development for the first time as well as making POWER Intellectual Property licensable to others, greatly expanding the ecosystem of innovators on the platform. The consortium will offer open-source POWER firmware, the software that controls basic chip functions. By doing this, IBM and the consortium can offer unprecedented customization in creating new styles of server hardware for a variety of computing workloads.
IBM POWER has switched from being "Big Endian" to being "Bi-Endian", allowing operating systems to choose between "Big Endian" and "Little Endian" modes. Big Endian mode allows for Linux compatibility with the System zEnterprise mainframe, and Little Endian mode for compatibility with Linux-x86.
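For readers unfamiliar with the distinction: endianness is simply the order in which a multi-byte integer's bytes are laid out in memory. A quick sketch using Python's standard `struct` module shows why the same 32-bit value looks different on the two architectures, and why a binary file written on one cannot naively be read on the other:

```python
import struct

value = 0x0A0B0C0D  # a 32-bit integer

# ">" packs big-endian (most significant byte first, as on traditional POWER);
# "<" packs little-endian (least significant byte first, as on x86).
big    = struct.pack(">I", value)
little = struct.pack("<I", value)

print(big.hex())     # 0a0b0c0d
print(little.hex())  # 0d0c0b0a

# Reinterpreting big-endian bytes as little-endian yields a different number,
# which is the compatibility problem bi-endian support addresses.
print(hex(struct.unpack("<I", big)[0]))  # 0xd0c0b0a
```

A bi-endian processor can run an operating system built for either byte order, which is how Little Endian mode delivers binary-level data compatibility with Linux-x86.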
Thorston Kahrmann, Intel Account Director for EMEA, presented Intel's rich history of collaboration with IBM, from technologies like Bluetooth and PCIe Generation 3, to platforms like BladeCenter and NeXtScale, to industry standards.
IBM had a lot of "firsts" in the x86 server area, including the first 16-processor server, the first to offer hot-swap memory, and over 100 leading performance benchmarks.
The latest Intel Xeon chip is the E7 version 2. For example, changing from DB2 v10.1 on the old E7 to DB2 BLU columnar acceleration on the new E7 version 2 resulted in a 148-fold increase in performance. A query on a 10TB database that previously took four hours was completed in under 90 seconds.
Thorston also wanted to remind the audience that nearly every System Storage product from IBM, from the high-end XIV, SAN Volume Controller, SONAS and FlashSystem V840, to the midrange and entry-level Storwize products, is based on Intel x86 processors.
Louise covered the findings from the latest 2012 CEO study, gathering insight from 1709 CEO interviews. The major focus areas for CEOs are:
Empowering employees through company-wide values
Engaging customers as individuals, rather than via demographics
Amplifying innovation with strategic and tactical partnerships
With smartphones, tablets and ubiquitous Internet access, everyone is now a technologist, so that IT is now becoming a competitive differentiator. IT projects and Business projects are no longer separate. If your IT department is seen as an expense, it will continue to get its budget cut. If, however, your IT department is part of your revenue stream, then it can be viewed as an asset.
Sadly, over 75 percent of IT projects fail: they run way over budget, are delivered late, or both. Business leaders are pushing for IT improvements, but CIOs are often too afraid to take the risks needed to move the business forward. Louise cited three reasons for this, which she called the three C's:
The IT and Business leaders did not fully understand the context of the project.
The content of the project was not properly defined between IT and Business architects.
The collaboration between IT and Business personnel was not properly established.
Louise wrapped up her session by asking a simple question: how much does a light bulb cost? Some might focus on the price of the bulb itself; others would add the cost of maintenance (ladders and personnel to replace bulbs as needed); and still others would include the electricity consumed. Both Business and IT leaders need to focus on Total Cost of Ownership (TCO) in their planning.
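The light-bulb question makes a nice worked example. Here is a minimal sketch of that TCO arithmetic in Python; all of the figures (bulb price, labor cost, wattage, electricity rate) are hypothetical numbers chosen for illustration, not anything Louise quoted:

```python
# Illustrative 5-year TCO of a light bulb (all figures are assumed)
purchase_price   = 2.00    # USD per bulb
bulbs_needed     = 5       # replacements over the period
labor_per_change = 15.00   # ladder + technician time per replacement
watts            = 60
hours_per_day    = 10
years            = 5
price_per_kwh    = 0.12

# Energy consumed over the period, in kilowatt-hours
energy_kwh  = watts / 1000 * hours_per_day * 365 * years
energy_cost = energy_kwh * price_per_kwh

acquisition = bulbs_needed * (purchase_price + labor_per_change)
tco = acquisition + energy_cost

print(f"Purchase + labor: ${acquisition:.2f}")
print(f"Energy:           ${energy_cost:.2f}")
print(f"TCO:              ${tco:.2f}")
```

With these assumed numbers, the electricity and labor together dwarf the purchase price of the bulbs, which is exactly the point of the exercise.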
I presented IBM's Smarter Storage Strategy. This is focused on three key areas:
Data-intensive Solutions. Storage is needed for Big Data analytics. IBM is focused on efficiency in all dimensions: capacity efficiency with data footprint reduction techniques, energy efficiency, administrator efficiency with ease-of-use interfaces, and reduced complexity.
Business-critical workloads. Storage needs to allow business to prioritize which applications and workloads are most critical, and automate Quality of Service (QoS) for each application based on its business importance. The result is a balance between performance and cost across the spectrum of applications.
Start quickly and add value. IBM is committed to support private, hybrid and public cloud deployments. Storage needs to support not just VMware, but also Hyper-V, KVM, PowerVM and z/VM. That is why IBM is a platinum sponsor for the OpenStack foundation.
Eric Aquaronne presented an excellent session on the OpenStack Foundation, an open-source collaboration among various companies to bring a consistent cloud-management standard across compute, storage and network resources.
Replication for Business Continuity and Disaster Recovery
I have been involved with Business Continuity and Disaster Recovery my entire 28-year career at IBM System Storage, so when I was asked to cover BC/DR in 75 minutes, I focused just on aspects related to disk-to-disk replication.
I divided the presentation into three sections:
Business priorities. You need to prioritize which business processes are most important, and prioritize your recovery accordingly.
Technical implementation. Once priorities are set, there are seven "Business Continuity Tiers" to choose from. BC Tier 1 is the least expensive, recovering from physical tapes stored in an off-site vault. The fastest recovery is BC Tier 7, which automates the storage, server and network fail-over to a secondary site in as little as 30 minutes.
Ongoing management. Just setting up a BC/DR implementation is not enough. It needs to be monitored to ensure that it continues to provide the protection you expect. BC/DR exercises should be performed one or more times per year to ensure that everyone has the skills and procedures documented to succeed in the event of a real disaster.
Of these seven BC tiers, BC Tier 6 is focused on storage replication, such as Metro or Global mirror available on our DS8000, XIV Storage System, SONAS and SAN Volume Controller. BC Tier 7 involves system automation, such as Tivoli Distributed Disaster Recovery Manager and GDPS.
What is Big Data? Architectures and Practical Use Cases
This session was an expanded version of the one I gave in Belgium last year. Big Data is a big topic, and there are a variety of "big data" related sessions at this conference. I focused on three key areas:
The change in the role of Storage Administrator. In the past, most of the data was structured and stored in databases, managed by database administrators. However, in today's environment, over 80 percent of the data is unstructured, outside of traditional relational databases, so either the database administrators need to learn new skills, or storage administrators will need to step up and help manage this unstructured data content.
The change in the role of Business Analyst. We are no longer just looking at the financial consequences of patterns and trends. The new role of Data Scientist needs to apply statistical models, show some business acumen, and be able to "tell a story" that is supported by the data when communicating findings to Business and IT leaders.
The change in the role of Decision Maker. In the past, Decision Support Systems were available only to the top-level business executives. Now, empowered employees have access to real-time analytics that can help them make decisions and take immediate actions.
This session packed the house, with standing room only. I would like to offer a special thanks to IBM VP Bob Sutor, Stephen Brodsky, Linton Ward, and Ralph McMullen in helping me finalize my presentation.
Continuing coverage of the [Systems Technical University 2014] conference, we had an early morning awards ceremony to celebrate top sellers that led big wins in Europe for FlashSystems, XIV, Power Systems, and PureSystems.
Afterwards, there were several breakout sessions on day 2.
Storage Technology Futures -- fresh from IBM research labs, tomorrow in your datacenter
Axel Koester presented several projects from IBM Research labs that have contributed to actual products, including the incredible scalability of [PERCS] that was incorporated into IBM General Parallel File System (GPFS).
Cloud Storage and Active Cloud Engine
My presentation started off explaining the taxonomy of cloud storage. There are basically four kinds of cloud storage: persistent storage, ephemeral storage, hosted storage, and reference storage. Each of these has unique access patterns and service level requirements.
IBM has three distinct cloud storage offerings, so I covered IBM XIV Storage Systems, SONAS and Storwize V7000 Unified with Active Cloud Engine, and Linear Tape File System (LTFS) Enterprise Edition (LTFS-EE).
FlashSystem competitive overview
Henrik Wilken provided an excellent presentation comparing IBM FlashSystems to the dozen or more competitors that offer all-flash or hybrid flash-and-disk combinations.
IBM Tivoli Storage Productivity Center
From 2001 to 2003, I was the chief architect for what is now called Tivoli Storage Productivity Center. It continues to be the top most requested topic for briefings at the IBM Tucson Executive Briefing Center.
I presented an overview of Tivoli Storage Productivity Center, with a brief update on what's new in TPC 5.2.1 and the SmartCloud Virtual Storage Center v5.2.1 releases.
IBM Archive Storage Solutions - Data Retention for Government Compliance and Industry Regulations
I can't believe it has been nine years since I was on the Product Development Team for the IBM DR550 Data Retention storage solution!
In this session, I explained the lessons we learned from the DR550, its successor the Information Archive, and how we now position System Storage Archive Manager (SSAM) software as their replacement. SSAM was recently certified by KPMG to meet a variety of US, European and International laws.
Step Right Up! Take your presentation skills to the next level
Glenn Anderson presented this session under the guise of "Professional Development". Whether you are new to public speaking and looking for some guidance, or are an experienced A-list celebrity looking to gain a few pointers, this session covered it all.
Some of my favorites:
Presentations are not Documentation! If a presentation had all the information to stand on its own, nobody would even bother to listen to the speaker. Many new presenters have 3-4 lines for titles, and too many words in small font to ensure they cover all the details to speak on. Don't do it. My rule of thumb is that 50 percent of the information is conveyed verbally, and the other 50 percent visually from the presentation.
Simplicity is the ultimate sophistication. I couldn't agree more. I try to focus on my core message in my presentations. I am a big fan of the [KISS principle], which stands for "Keep it simple, stupid!"
VOICE - Victory over inconsistent conscious energy! There is nothing more painful than hearing a public speaker who talks too softly, too loudly, or in a monotone. Mix it up! If you want to capture someone's attention, whisper! Vary your volume for effect.
Presenting is like Pouring Wine. At cocktail parties, the hosts will walk around with the bottle, and refill the glasses of those who are actively drinking the wine, but leave alone those who haven't sipped a drop. Public speakers need to focus on the needs of those in the audience paying close attention, and ignore people who are asleep, paying attention to their laptops and smartphones, or otherwise distracted.
Don't memorize - Extemporize. Too often, new speakers try to memorize their entire presentation. This doesn't go well, and can end up looking like an actor on live stage forgetting his next line. Instead, focus on getting the general idea across in a more natural conversational tone.
Building Open Clouds on POWER Systems
Mandie Quartly presented the excitement of building a cloud using IBM's new Linux-only line of PowerLinux™ servers, KVM, virsh, virtio and OpenStack interfaces. Jeff Scheel was on hand to interject bits of wisdom throughout her session.
IBM is investing heavily into the Linux side of all of its servers, and the latest investments have been focused on the POWER systems.
Storage Clouds in the Big Blue Sky
Dick Vogelsang presented this session focused mostly on the "Self-service" aspect of Cloud Storage. While this sounded like it would be similar to my session from yesterday, it was actually quite different.
Vogelsang explained SmartCloud Storage Access, and compared this to how competitors are providing (or not providing) self-service provisioning of file spaces and LUNs. He gave examples based on VMware, Hyper-V, and OpenStack Foundation.
It is interesting the angle or spin that each speaker gave to each topic!
Johann Weiss, Jim Blue and I joined several other local experts to answer questions and respond to comments and suggestions attendees had about IBM System Storage products and solutions. Here is a sample:
I would like to add 1TB of Flash to our FlashSystem 810 and have the system automatically re-stripe across this new capacity non-disruptively.
How can I have XIV systems at two datacenters in an active/active configuration that would allow me to vMotion from one location to the other non-disruptively?
Put them behind the SAN Volume Controller in Stretched Cluster mode.
What about a similar active/active but for NAS?
IBM N series.
I would like HyperSwap on the SVC/Storwize family, like the DS8000 offers for AIX.
When will IBM offer a multi-frame XIV?
The "Hyper-Scale" set of features lets you logically connect up to 144 XIV frames together and treat them as a single system. There is no need to physically bolt them together, since the communication is done over standard network switches.
When will IBM devices have native FCoE support?
All IBM System Storage products work within an FCoE framework today, either with native FCoE support, or through top-of-rack switches that split the traffic between IP and traditional FCP networks. IBM Storwize and N series products support FCoE natively, and any disk virtualized behind SAN Volume Controller or Storwize can be accessed by FCoE hosts because of this support.
What is FLAPE?
FLAPE is the combination of Flash and Tape. Both of these technologies are improving over 40 percent year-over-year, while disk has slowed to 20 percent improvement. Flash and tape can be combined in systems such as IBM LTFS-EE or the IBM ProtecTIER TS7600 series.
Only the Storwize V7000 Unified supports file modules to add NAS capabilities. What can IBM offer us that is smaller for NAS deployments, perhaps a Storwize V5000 Unified or Storwize V3700 Unified?
Consider the IBM N3000 series.
Other storage vendors indicate that RAID-5 and RAID-6 are running out of steam and are no longer practical for protecting ever-growing disk capacities. What is IBM planning in this area?
IBM XIV Storage System was one of the first to offer distributed RAID, which addresses many of the RAID-5/RAID-6 drive-rebuild concerns. IBM DCS3700 and DCS3860 also have Dynamic Disk Pooling to reduce drive-rebuild impact. Lastly, IBM GPFS now offers Native RAID support, used in the IBM GPFS Storage Server.
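The intuition behind distributed RAID is easy to show with back-of-envelope arithmetic: a traditional array rebuilds onto a single spare drive, bottlenecked by that one drive's write speed, while a distributed scheme spreads the rebuild work across every drive in the pool. The figures below (drive capacity, sustained write rate, pool size) are assumed for illustration, not specifications of any IBM product:

```python
# Back-of-envelope rebuild-time comparison (all figures are assumed)
drive_capacity_gb  = 4000   # capacity of the failed drive
rebuild_mb_per_sec = 100    # sustained write rate of one drive
drives_in_pool     = 60     # drives sharing the rebuild in distributed RAID

def rebuild_hours(capacity_gb: float, writers: int) -> float:
    """Hours to rewrite capacity_gb when `writers` drives absorb the rebuild."""
    return capacity_gb * 1000 / (rebuild_mb_per_sec * writers) / 3600

print(f"Single spare drive:  {rebuild_hours(drive_capacity_gb, 1):.1f} hours")
print(f"Distributed pool:    {rebuild_hours(drive_capacity_gb, drives_in_pool):.2f} hours")
```

Under these assumptions, a rebuild that would take roughly 11 hours onto one spare finishes in minutes when 60 drives share the work, which is why the exposure window to a second drive failure shrinks so dramatically.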
Is it true that GPFS is NFS only?
Do not confuse GPFS the file system with the various storage offerings that are based on GPFS. IBM SONAS and Storwize V7000 Unified, both based on GPFS, support CIFS, NFS, HTTPS, SCP and FTP. IBM GPFS Storage Server can be configured to access GPFS natively, or you can run NFS v3/v4 server to make those protocols available. With Microsoft [Windows Storage Server], you can provide CIFS access to any GPFS-based storage solution.
LTFS-EE sounds like an exciting alternative to IBM Tivoli Storage Manager HSM space management for moving data from disk to tape. Do you agree?
Yes, we agree. However, TSM HSM space management supports a broader set of file systems. LTFS-EE only provides disk-to-tape movement for IBM GPFS.
Why does the DS8000 implementation of Easy Tier sub-LUN automated tiering support three tiers, but SVC/Storwize only support two tiers?
The same software engineering team works on both, but develops new features for the DS8000 first, gets them working, then ports them over to the Storwize family. At times, there might be gaps between what is supported on the latest DS8000 version and what is available on Storwize family products.
In an SVC Stretched Cluster, I would like to have the third quorum disk connected over the IP network, rather than FCP.
Personally, I enjoy these interchanges. They are sometimes called "Birds-of-a-Feather" or BOF at some conferences, "Free-for-All" at others. At IBM conferences, they are often titled "Meet the Experts". Whatever you call it, the questions and feedback on what clients are thinking are quite useful for product planning and prioritization of future planned features.
New Generation Storage Tiering: Less Management, Lower Investment and Increased Performance
This was not just an update to my session last year in Brussels, Belgium. Rather, I decided to start over and focus on I/O density as the key metric, armed with real data from Intelligent Storage Tiering Analysis (ISTA) studies done at various clients. From that, I was able to talk about storage tiering on three fronts:
Storage tiering between Flash and disk. IBM FlashSystem and IBM Easy Tier on DS8000 and Storwize family for hybrid Flash-and-disk configurations.
Storage tiering between disk and tape. HSM and Information Lifecycle Management (ILM) on SONAS, Storwize V7000 Unified and LTFS-EE.
Storage tiering automation across your entire environment. ISTA studies can help identify a target mix of Tier 0, Tier 1, Tier 2 and Tier 3 storage. SmartCloud Virtual Storage Center can recommend or perform the movement of LUNs to more appropriate tiers, based on age and I/O density measurements.
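The idea of using I/O density to steer tier placement can be sketched in a few lines: density is simply access rate divided by capacity, and hotter data earns a faster tier. The thresholds and tier labels below are hypothetical values chosen for illustration; they are not IBM's ISTA methodology or any product's actual defaults:

```python
# Hypothetical tier assignment by I/O density (IOPS per GB).
# Thresholds are illustrative only, not ISTA's real cutoffs.
TIERS = [
    (1.0,  "Tier 0 (Flash)"),
    (0.1,  "Tier 1 (15K disk)"),
    (0.01, "Tier 2 (NL-SAS)"),
]

def assign_tier(iops: float, capacity_gb: float) -> str:
    """Pick the first tier whose density threshold the workload meets."""
    density = iops / capacity_gb
    for threshold, tier in TIERS:
        if density >= threshold:
            return tier
    return "Tier 3 (Tape/Archive)"

# A hot 2TB LUN at 5000 IOPS vs. a cold 10TB LUN at 50 IOPS:
print(assign_tier(5000, 2000))   # 2.5 IOPS/GB  -> Tier 0 (Flash)
print(assign_tier(50, 10000))    # 0.005 IOPS/GB -> Tier 3 (Tape/Archive)
```

A tool like SmartCloud Virtual Storage Center would of course weigh more than a single snapshot of IOPS (age, skew over time, and so on), but the density-to-tier mapping is the core of the recommendation.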
Next Generation FlashSystem 840 and V840, Architecture Deep Dive
Detlef Helmbrecht, from the IBM Advanced Technical Skills team in Germany, presented this deep dive into our latest IBM FlashSystem offerings. He started with an analogy. Latency is like a single car driving down an empty highway. IOPS, on the other hand, is like a lot of cars stuck in slow traffic, with all lanes of the autobahn filled. While a full highway transports more cars, the individual cars are not driving very fast. A similar comparison applies to Flash versus disk.
Detlef explained the differences between the previous FlashSystem 810/820 and the new 840, and talked about the FlashAdapter 90, now available as a PCIe card.
Finally, we talked about SAN Volume Controller combined with Flash, and the new FlashSystem V840, which combines SVC and the FlashSystem 840 into an incredibly function-rich, robust solution.
Data Footprint Reduction - Understanding IBM Storage Efficiency Options
My last session of the week! This session covered all of the various technologies for data footprint reduction, including Thin Provisioning, Space-efficient FlashCopy and snapshots, Real-time Compression and data deduplication. Frankly, I wasn't expecting many people to attend the last session of the last day, but nearly 50 percent of the seats were filled, so I was quite pleased with the turnout.
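One point worth making about these techniques is that their savings compound, since each one operates on what the previous one left behind. Here is a rough sketch of that arithmetic; the ratios are assumptions for illustration (reduction ratios vary enormously by workload), not measured IBM figures:

```python
# Rough effective-capacity estimate from stacked data reduction techniques.
# All ratios below are assumed for illustration; real results vary by workload.
raw_tb            = 100
thin_prov_savings = 0.30   # fraction of allocated-but-unwritten space reclaimed
compression_ratio = 2.0    # e.g. compression on active data
dedupe_ratio      = 3.0    # e.g. backup streams with heavy redundancy

# Savings multiply: thin provisioning stretches allocation, then compression
# and deduplication each shrink what actually gets written.
effective_tb = raw_tb / (1 - thin_prov_savings) * compression_ratio * dedupe_ratio
print(f"{raw_tb} TB of raw capacity can hold roughly {effective_tb:.0f} TB of logical data")
```

With these assumed ratios, 100 TB of physical capacity serves well over 800 TB of logical data, which is why data footprint reduction gets so much attention in capacity planning.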
Fun Fact: In 2014, TripAdvisor named Istanbul the #1 most popular city to visit in Europe!
Want to hear the latest technical information about IBM Storage, but not willing to wait until the big [IBM Edge Conference] this September? We will have a variety of "Systems Technical University" events in the next few weeks in a variety of locations.
In the United States, I will be presenting several topics at the following:
Atlanta, GA -- April 12-14
San Francisco, CA -- May 10-12
Chicago, IL -- May 18-20
Boston, MA -- June 7-9
Here's my schedule for the one in Atlanta:
Introduction to Object Storage and its Applications with Cleversafe
Software Defined Storage -- Why? What? How?
Integration between Spectrum Scale and Cleversafe
IBM Spectrum Scale for File and Object storage
What Is Big Data? Architectures and Practical Use Cases
New Generation of Storage Tiering: Less Management, Lower Cost and Increased Performance
The Pendulum Swings Back -- Understanding Converged and Hyperconverged Environments
Sometimes, it's difficult to explain the products I manage to people outside the IT storage industry. How do you explain FCP vs. FICON, Giant Magnetoresistive (GMR) heads, the SMI-S interface, and so on, well enough to then explain how your job relates to those technologies? At least my friends and family read this blog, so they can somewhat understand some of the things I am working on. When I visit my folks on Sundays, we sometimes discuss items they read in my blog that week.
In addition to a "take your children to work day", we have discussed within IBM a "take your parents to work day", especially for the young new hires who have a hard time explaining what their new job is to the rest of their family.
The problem is not just your parents, but any of your co-workers old enough to be parents who haven't bothered to keep up with the latest advancements in Web 2.0 technology. Here are some examples:
A project leader working with a technology partner asked me if there was a difference between a "blog" and a "wiki", and which his team should use. This was not a simple yes/no answer; it involved some explanation, conversation and understanding of what he was trying to accomplish.
For one of my meetings, someone instant-messaged me asking where it was: "face-to-face" (F2F) or conference call (CC)? I replied, "A2A w/CC" (avatar-to-avatar with voice over conference call). When you are meeting other avatars in-world in Second Life, it gets quite distracting having everyone typing away, with their hands and fingers moving furiously, so we use a conference call to complement our 3D interaction.
That's why I was very excited to see Linden Lab announce the voice beta in Second Life. It won't be fully ready until later this year, but adding voice to Second Life will greatly reduce the hurdles we now have trying to coordinate conference calls with in-world activity.
I realize not everyone can keep up with all the new and different technologies, but the social networking aspects of some of these new developments are worth looking into.
We have successfully arrived in Mumbai, India. Since this is my first time in India, I decided to check out the town by going to the local McDonald's® restaurant. As a former software engineer for McDonald's, I love the food, and try to visit a McDonald's in every country I visit. Wikipedia calls our transportation an [Auto Rickshaw], but the locals called it a "tuk-tuk". This is not my first time in one; they have them in Thailand and Mexico as well.
We had the hotel identify the address of the closest McDonald's. From past experience, I know that tuk-tuk drivers will suggest alternatives in an effort to earn a larger fare, or to redirect you to a preferred location where the driver might get special kick-backs. Our driver was no different.
The traffic was treacherous, the roads were in roughshod condition, and sad-looking stray dogs digging through piles of rubbish were everywhere. The local "Daily News and Analysis" newspaper this week estimates that there are over 70,000 stray dogs in Mumbai alone. What to do with all of these strays is a matter of controversy. In preparation for the Olympic games, China has asked its restaurants to [take "dog" off their menus]. Having lived in one of the poorest countries, and one of the richest, nothing surprises me anymore.
My IBM colleague, Curtis Neal, decided to join me for this adventure. Finally, after about 20 minutes, our driver parked the tuk-tuk. He told us the restaurant was only about three blocks away on foot, that he would allow us to treat him to lunch, and that he would then take us back to the hotel. While we appreciated his fantastic imagination, we told him we just wanted a one-way trip to the restaurant, to be dropped off at the front door, and we would find another tuk-tuk for the return.
After a bit of argument, we settled on being dropped off just one block away, and walking the rest. While we could not see exactly where the restaurant was when we got out, he at least pointed us in the right direction.
The problem was that we approached the restaurant from behind, came up to its equivalent of a "drive thru" window, ordered our food, and then went to the second window to pick up our order. We ate on the street. It was not until I decided to take this photo of the restaurant that we discovered there was an entire seating area upstairs, and around the corner, the main entrance!
There were plenty of tuk-tuks picking up and dropping off people, so we have no idea why our previous driver was unwilling to take us the entire distance.
Cows are sacred here in India, so there are no beef-based hamburgers to choose from. My choices for sandwiches were:
Since my nutritionist asked me to avoid carbs and fried foods, I chose the McChicken with cheese combo meal with fries and a Coke.
Getting back was also a challenge. While we had no problem hailing a tuk-tuk, we had no idea of the address of our hotel, and our driver had no idea where it was. We ended up driving around the city until we found a different hotel, asked them if they knew where ours was, and eventually made it back to our hotel. This is something I should have planned for in advance: getting a card with the hotel details on it before leaving.
While it might seem like a simple trip, Curtis and I probably learned more about India this way than spending a week inside the comforts of our hotel.
Well, this has been an interesting two weeks. On week 1, I focused on IBM's strategy and four key solution areas: Information Availability, Information Security, Information Retention, and Information Compliance. On week 2, I focused on individual products, their attributes, features and functions. Which week drew more blog traffic? You guessed it--week 1. Apparently, people want to know more about solutions to their challenges and problems, and not just see what piece-part components are available.
While IBM switched over to solution-selling a while ago, some of our competitors are still in product-selling mode, and try to frame all competitive comparisons on a product-by-product basis. In my post [Supermarkets and Specialty Shops], I drew the analogy that the IT supermarkets (IBM, HP, Sun and Dell) are focused on selling solutions, but the IT specialty shops (HDS, EMC, and others) are still focused on products.
Certainly, the transition from product-focused to solution-focused is not an easy one. As the IT industry matures, more and more clients are looking to buy solutions from their vendors. What does it take to change the behaviour of newly acquired employees, recently hired sales reps, and business partners, many of whom come from product-centric cultures, to match this dramatic shift in the marketplace? Let's take a look at change in other areas of the world.
On the [Freakonomics blog], Stephen Dubner discusses how clever people in Israel have figured out a way to get people to clean up after their pets in public places. This is a problem in many countries. Here we see an old idea, the [carrot-and-stick] approach, combined with new information technology. Here's an excerpt:
"In order to keep a city’s streets clean of dog poop, require dog owners to submit DNA samples from their pets when they get licenses; then use that DNA database to trace any left-behind poop and send the dogs’ owners stiff fines.
Well, it took three years but the Israeli city of Petah Tikva has actually put this plan to work:
The city will use the DNA database it is building to match feces to a registered dog and identify its owner.
Owners who scoop up their dogs’ droppings and place them in specially marked bins on Petah Tikva’s streets will be eligible for rewards of pet food coupons and dog toys.
But droppings found underfoot in the street and matched through the DNA database to a registered pet could earn its owner a municipal fine."
Sometimes, if enough people change, then changing the behaviours of the few remaining becomes much easier. Dan Lockton, on his Architectures of Control blog, posts about the [London Design Festival - Greengaged]. This year, the festival focused on behaviour change for a greener environment, ecodesign and sustainability issues in design. Here's an excerpt and corresponding 5-minute YouTube video:
Lea argued three important points relevant to behaviour change:
Behaviour change requires behaviour (i.e. the behaviour of others: social effects are critical, as we respond to others’ behaviour which in turn affects our own; targeting the ‘right’ people allows behaviour to spread)
Behaviour and motivation are two different things: To change behaviour, you need to understand and work with people’s motivations - which may be very different for different people.
Desire is not enough: lots of people desire to behave differently, but it needs to be very easy for them to do it before it actually happens."
Of course, tax and government regulations can heavily influence behaviour and decisions. Since today is [International Talk Like a Pirate Day], I thought I would finish this post off with this interesting piece on Google barges. Some companies, like IBM and Google, seem more adaptable to changing behaviour and trying out fresh new ideas. Will Runyon, over on the Raised Floor blog, has a post about Google's patent for [Data center barges on the sea]: "The idea is to use waves to power the data centers, ocean water to cool them, and a moored distance of seven miles or more to avoid paying taxes."
Arrr! Now that's what I call a new way of looking at things!
"Our survey data shows that over the past 12 months, more firms have bought their storage from a single vendor. While this might not be for everyone, it's worth serious consideration for your environment. Maybe you won't get the best price per gigabyte every time, but you'll probably save money in the long run because of simpler management, increased staff specialization, increased capacity utilization, and better customer service."
A Forrester survey of 170 companies ranging from SMBs to large enterprises in North America and Europe found that more than 80 percent bought their primary storage from one vendor over the last year. That includes 64 percent of the companies with more than 500 TB of raw storage.
The report, written by analyst Andrew Reichman, says using more than one primary storage vendor can make it more complex to manage, provision and support the storage environment. And while using multiple vendors can often bring better pricing, buying from one vendor can result in volume discounts.
“You may have tried to contain costs by forcing multiple incumbent vendors to continuously compete against each other, with price as the primary differentiator,” Reichman writes. “This strategy can reduce prices and limit vendor lock-in, but it can also lead to management complexity and poor capacity utilization.”
The report recommends keeping things simple by using fewer vendors when possible. However, that advice comes with several caveats: buying all storage from one vendor means taking the bad with the good, and some vendors’ product families differ so much “they may as well come from different vendors.”
As if by coincidence, fellow blogger Chuck Hollis from EMC gives his reflections on this same topic. Here's an excerpt:
When it comes to buying storage (or any infrastructure technology, for that matter), there seem to be two camps:
Best-of-breed (i.e. multivendor): buy what's best, get the best price, keep all the vendors on their toes, etc. etc.
Single vendor: primarily use one vendor's offerings, and hold them accountable for the outcome.
If Chuck had said "multivendor" versus "single vendor", then that would have been a true dichotomy, but interestingly he equates best-of-breed with a multivendor approach. Let's consider two examples:
Disk from one vendor, Tape from another
Here is a multivendor strategy, and if you have a clear idea of what best-of-breed means to you, then you could pick the best disk in the market, and the best tape in the market. However, I don't think this keeps either vendor "on their toes", or helps you negotiate lower prices by threatening to switch to the other vendor. In shops like this, the staffing usually matches, so there are disk administration and tape operations teams, with little or no overlap, and little or no interest in retraining to use a new set of gear. It is true that disk-based VTL could be used where real tape libraries are used, but this may not be enough to threaten your existing vendors that you will switch all your disk to tape, or all your tape to disk.
One could argue that the vendor that sells the best tape could be the exact same vendor that sells the best disk. In this case, your multivendor strategy would actually work against you, forcing you away from one of your best-of-breed choices.
Disk and Tape from one vendor for some workloads, Disk and Tape from another vendor for other workloads
Here is a different multivendor strategy. Having disk and tape from the same vendor allows you to take advantage of possible synergies. The IT staff knows how to use the products from both vendors. This strategy does let you keep your vendors "on their toes". You can legitimately threaten to shift your budget from one vendor to another. However, whatever your definition of best-of-breed is, chances are the product from one vendor meets it and the product from the other does not. Both meet some lowest common denominator, some minimum set of requirements, which would allow you to swap out one for the other.
I guess I look at it differently. The equipment in your data center should be thought of as a team. Do your servers, storage and software work well together?
While Americans like to celebrate the accomplishments of individual musicians, athletes or executives, it is actually bands that compete against other bands, sports teams that compete against other sports teams, and companies that compete against other companies. Teamwork in the data center is not just for the people who work there, but also for the IT equipment. Just as a new incoming athlete may not get along well with teammates, shiny new equipment may not get along with your existing gear. Conversely, your existing infrastructure may not let the talents or features of your new equipment shine through.
While putting together the best parts from different teams might serve as a great diversion for those who enjoy ["fantasy football"], it may not be the best approach for the data center. Instead, focus on managing your data center as a team, perhaps with the use of IBM TotalStorage Productivity Center to minimize the heterogeneity of your different equipment. Pick an IT vendor that sells "team players" for your servers, storage and software, with broad support for interoperability and compatibility.
This week, I was in the Phoenix area presenting at TechData's TechSelect University. TechData is one of IBM's IT distributors,
and TechSelect is their community of 440 resellers and 20 vendors. This year marks the 10th anniversary of this event. I covered three particular topics, and I was videotaped for those who were not able to attend my sessions. (There were very few empty seats at my sessions.)
IBM Business Partners now realize that the "killer app" for storage is combining the IBM System Storage SAN Volume Controller with entry-level or midrange disk storage systems for an awesome solution. Solutions based on either the Entry Edition or the standard hardware models can compete well with a variety of robust features, including thin provisioning, vDisk mirroring, FlashCopy, Metro and Global Mirror. This has the advantage that the SVC can extend these functions not just to newly purchased disk capacity, but also existing storage capacity. The newly purchased capacity can be DS3400, DS4700 or the new DS5000 models. This is great "investment protection" for small and medium sized businesses.
LTO-4 drives and automation
The Linear Tape Open (LTO) consortium--consisting of IBM, HP and Quantum--has proven wildly successful, ending the
vendor lock-in of SDLT tape. I presented the latest LTO-4 offerings, including the TS2240, TS2340, TS2900, TS3100
and TS3200. The LTO consortium has already worked out a technology roadmap for LTO-5 and LTO-6. The LTO-4 drives
support WORM cartridges and on-board hardware-based encryption. The encryption keys can be managed with IBM Tivoli Key Lifecycle Manager (TKLM).
SAN and FCoCEE switches
IBM has agreements with Brocade, Cisco and Juniper Networks for various networking gear. I focused on entry-level switches for SAN fabrics, the SAN24B-4 and Cisco 9124, as well as new equipment for Convergence Enhanced Ethernet (CEE),
including IBM's Converged Network Adapter (CNA) for System x servers, and the SAN32B switch that has 24 10GbE CEE ports and 8 FC ports that support 8/4/2 and 4/2/1 SFP transceivers. Clients that want to deploy Fibre Channel over CEE (FCoCEE) today have everything they need to get started.
The venue was the
[Sheraton Wild Horse Pass Resort and Spa] in Chandler, just south of Phoenix. This compound includes [Rawhide], an 1800s-era Western town attraction, a rodeo arena, and a casino still under construction.
Dinners were held nearby at the infamous
[Rustler's Rooste] Steakhouse on South Mountain.
You could buy 10 liters of gasoline in Venezuela with this coin.
I'm back from South America, and am now in Chicago, Illinois. I'm having breakfast at the Starbucks downtown, and thought I would make a post before all of my meetings today.
On this trip, I met with IBM Business Partners and sales reps from Argentina, Colombia, Ecuador and Venezuela. While I have visited the first three countries on past trips, this was my first time in Caracas, Venezuela. I grew up in La Paz, Bolivia, and speak Spanish fluently, so I had no problem getting around and holding discussions with everyone. While my friends in the US are often surprised that I speak multiple languages, it doesn't surprise anyone I visit in other countries. If you are going to have worldwide job responsibilities for a global company that does business in over 180 countries, the least you could do is learn a few additional languages. I suspect the majority of the 350,000 IBM employees speak at least two languages, the exceptions being mostly the 50,000 or so employees that live in the United States.
I flew on American Airlines from Tucson to Dallas to Caracas, and was only slightly delayed as a result of all of the flight cancellations that happened earlier that week. Some companies designate a single "official airline" for their employees to use. That makes sense if all of your employees are located in a single city, and that city is the hub for your designated airline. IBM is too big, too spread out, and sells technology to nearly every airline to make such a designation. Instead, IBM tries to spread its business out to multiple carriers, although all of my colleagues seem to have their own personal favorites. Mine are American Airlines, Singapore Airlines and Cathay Pacific.
While other people were upset over the delays, I found American Airlines did a great job keeping me informed, and all their employees I talked to seemed to be handling the situation fairly well. If you fly on American, I recommend you sign up for "text message" notifications. I did this for every leg of my trip, and was kept up to date on times, gates and status. Very helpful! American Airlines has even started its own corporate blog: [AA Conversation]. (Special thanks to my friend [Paul Gillen] for pointing this out.)
(I read somewhere that if you are going to travel anywhere, you need to remember to bring both your sunscreen and your sense of humor, otherwise you are going to get burned. Good advice! Trust me, you don't even know how bad it can really be until you travel in the third world.)
Anyhoo, last week, IBM Venezuela celebrated its 70th anniversary. That's right, IBM has been doing business in Venezuela for the past 70 years. Also last week, IBM put out its impressive [1Q08 quarterly results], including 10 percent growth for the IBM System Storage product line worldwide, comparing what IBM earned this first quarter to what it earned the first quarter of last year. For just the Latin American countries, the growth for IBM System Storage was 20 percent! There are a lot of oil and gas companies in Venezuela. With a barrel of oil selling at more than $117 US dollars, these companies are looking to spend their newly earned profits on IBM systems, software and services.
As for the picture above, that is a one-thousand Bolivares coin, worth about 47 US cents at this week's official exchange rate. As with many Latin American countries going through [years of high inflation], Venezuela was tired of all those zeros on its money. For example, a cheeseburger, freedom fries and a Coke at McDonald's would set you back 20,000 Bolivares. This year the Venezuelan government created a new currency called "Bolivares Fuertes" (VEF), lopping off the last three zeros. So, the coin above would be replaced by a new coin with a big "1" on it instead, and an old 2,000 Bolivares bill would be replaced by a new 2 Bolivares Fuertes bill. Unfortunately, I had to give all my new Venezuelan money back at the airport upon leaving, but they let me keep the coin above, since it is old money, as a souvenir that I could use as a ball mark for playing golf.
(The term Bolivares is named after Simon Bolivar, who was born in Caracas. He is famous throughout South America, and was, and I am not making this up, the first president of Colombia, the second president of Venezuela, the first president of Bolivia, and the sixth president of Peru. Here is the [Wikipedia article] to learn more.)
Gasoline costs a mere 100 old Bolivares per liter. For those who don't do metric, gasoline therefore costs less than 18 cents per gallon. By comparison, in the USA, the average today was $3.47 US dollars per gallon, of which 18.4 cents is Federal tax. That's right, we pay more just in taxes for gasoline than los venezolanos pay for it all.
The side effect of cheap gas is bad traffic. Everybody in Venezuela drives their own car, and nobody thinks about the price of gasoline, carpooling, or taking public transportation, acting much like Americans used to up until a few years ago. With some of the gridlock we faced, it might have been faster (but not safer) to walk there instead.
Which makes me wonder if American Airlines fills up its airplanes with fuel at these lower prices when it picks up people in Caracas to take them back to the United States. In 2002, fuel represented 10 percent of the average airline's operating expenses, but today it is 25 percent. That is a drastic increase!
The same is happening in data centers. In the past, electricity was so cheap, and such a small percentage of the total IT budget, that nobody gave it much thought. But as the usage of electricity increased, and the cost per kWh went up, this had a multiplying effect: power and cooling costs are now growing four times faster than the average IT hardware budget.
During the Republican primaries, Mitt Romney promised Michigan he would bring back all those jobs to the auto industry, while his opponent, John McCain, told the audience that those jobs are gone forever, and it is time to start learning new skills. Mitt won the state, but lost the nomination, and perhaps this snapped him back to reality. Mitt now has a new prescription for what ails the US auto industry--straight talk that he should have been saying during his campaign, telling people what they should hear, rather than what they wanted to hear.
Gaurav takes this argument one step further, referring to IBM's amazing turn-around back in 1993. Whereas the US auto industry has pushed back against inevitable globalization, IBM has embraced it, re-inventing itself into a Globally Integrated Enterprise [GIE] and helping our clients do the same. I've been working for IBM since 1986, so I remember the pre-1993 IBM and how different it is now in the post-1993 era.
The marketplace has responded positively. Since 2004, more than 5,000 companies worldwide have replaced their HP, Sun, and EMC products with energy-efficient IBM Systems: servers and storage. Companies have invested in IBM's servers and storage to tackle their most challenging business objectives and to help reduce sprawling data center costs for labor, energy and real estate. This announcement was part of IBM's [Press Release] for its Migration Factory offering. The Migration Factory includes competitive server assessments, migration services, and other resources to help customers achieve energy and space savings and lower their cost of ownership.
Earlier this month, IBM's Chairman and CEO Sam Palmisano outlined the possibilities of a smarter planet to the Council on Foreign Relations. Steve Lohr of the New York Times weighs in with his article [I.B.M. Has Tech Answer for Woes of Economy], and Dr. Fern Halper of Hurwitz & Associates gives her take over at [IT-Director.com].
Transcontinental flights and the [Travel Channel] have made the world smaller. Thomas Friedman argued the world has also become "flatter", thanks to advances in computers and global communication, in his 2005 book [The World is Flat]. Now, IBM recognizes that Information Technology (I.T.) can help us solve the financial meltdown, global warming, and other major problems the world now faces.
How? First, our world is becoming instrumented. Sensors, RFID tags and other equipment are now inexpensive and readily available to be placed wherever they are needed. Second, our world is becoming more interconnected. We are closely approaching two billion internet users and four billion mobile subscribers, and these can connect to the trillions of RFID tags, sensors and other instrumentation. Third, our world needs to get more intelligent. Not just US auto workers learning new skills, but all these instruments providing information that can be acted on with intelligent algorithms. Algorithms can help with automobile traffic in large cities, enhance energy exploration, or improve healthcare.
This week is Thanksgiving holiday in the USA, so I thought a good theme would be things I am thankful for.
I'll start by saying that I am thankful EMC finally announced Atmos last week. This was the "Maui" part of the Hulk/Maui rumors we heard over a year ago. To quickly recap, Atmos is EMC's latest storage offering for global-scale storage intended for Web 2.0 and Digital Archive workloads. Atmos can be sold as just software, or combined with Infiniflex, EMC's bulk, high-density commodity disk storage system. Atmos supports traditional NFS/CIFS file-level access, as well as SOAP/REST object protocols.
I'm thankful for various reasons; here's a quick list:
It's hard to compete against "vaporware"
Back in the 1990s, IBM was trying to sell its actual disk systems against StorageTek's rumored "Iceberg" project. It took StorageTek some four years to get this project out, but in the meantime, we were comparing actual product against possibility. The main feature is what we now call "thin provisioning". Ironically, StorageTek's offering was not commercially successful until IBM agreed to resell it as the IBM RAMAC Virtual Array (RVA).
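The thin provisioning idea can be illustrated in a few lines of Python. This is strictly a toy sketch, not how Iceberg or the RVA actually implemented it; the `ThinVolume` class, block size, and method names are all hypothetical illustrations:

```python
class ThinVolume:
    """Toy model of thin provisioning: the volume advertises a large
    virtual size, but consumes physical space only for blocks that
    have actually been written."""

    BLOCK_SIZE = 4096  # bytes per allocation unit (illustrative choice)

    def __init__(self, virtual_blocks):
        self.virtual_blocks = virtual_blocks  # advertised size
        self.allocated = {}  # virtual block number -> data (physical use)

    def write(self, block_no, data):
        if not 0 <= block_no < self.virtual_blocks:
            raise IndexError("write beyond advertised virtual size")
        self.allocated[block_no] = data  # space is consumed only on write

    def read(self, block_no):
        # never-written blocks read back as zeros, at no physical cost
        return self.allocated.get(block_no, b"\x00" * self.BLOCK_SIZE)

    def physical_utilization(self):
        return len(self.allocated) / self.virtual_blocks
```

The host sees a fully provisioned volume, while the array only backs the blocks it has received writes for; that gap between advertised and consumed capacity is the whole appeal of the feature.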
Until last week, nobody knew the full extent of what EMC was going to deliver on the many Hulk/Maui theories. Several hinted at what it could have been, and I am glad to see that Atmos falls short of those rumored possibilities. This is not to say that Atmos can't reach its potential, and certainly some of the design is clever, such as offering native SOAP/REST access.
Instead, IBM can now compare Atmos/Infiniflex directly to the features and capabilities of IBM's Scale Out File Services [SoFS], which offers a global-scale multi-site namespace with policy-based data movement; IBM System Storage Multilevel Grid Access Manager [GAM], which manages geographically distributed information; and the IBM [XIV Storage System], which offers high-density bulk storage.
Web 2.0 and Digital Archive workloads justify new storage architectures
When I presented SoFS and XIV earlier this year, I mentioned they were designed for the fast-growing Web 2.0 and Digital Archive workloads, which were unique enough to justify their own storage architectures. One criticism was that SoFS appeared to duplicate what could be achieved with dozens of IBM N series NAS boxes connected with Virtual File Manager (VFM). Why invent a new offering with a new architecture?
With the Atmos announcement, EMC now agrees with IBM that the Web 2.0 and Digital Archive workloads represent a unique enough "use case" to justify a new approach.
New offerings for new workloads will not impact existing offerings for existing workloads
I find it amusing that EMC is quickly defending that Atmos will not eat into its DMX business, which is exactly the FUD it threw out about IBM XIV versus the DS8000 earlier this year. In reality, neither the DS8000 nor the DMX were used much for Web 2.0 and Digital Archive workloads in the past. Companies like Google, Amazon and others had to either build their own from piece parts, or use low-cost midrange disk systems.
Rather, the DS8000 and DMX can now focus on the workloads they were designed for, such as database applications on mainframe servers.
Cloud-Oriented Storage (COS)
Just when you thought we had enough terminology already, EMC introduces yet another three-letter acronym [TLA]. Kudos to EMC for coining phrases to help move new concepts forward.
Now, when an RFP asks for Cloud-oriented storage, I am thankful this phrase will help serve as a trigger for IBM to lead with SoFS and XIV storage offerings.
Digital archives are different than Compliance Archives
EMC was also quick to point out that the object-storage Atmos was different from its object-storage EMC Centera, the former being for "digital archives" and the latter for "compliance archives". Different workloads, different use cases, different offerings.
Ever since IBM introduced its [IBM System Storage DR550] several years ago, EMC Centera has been playing catch-up to match IBM's many features and capabilities. I am thankful the Centera team was probably too busy to incorporate Atmos capabilities, so it was easier to make Atmos a separate offering altogether. This allows the IBM DR550 to continue to compete against Centera's existing feature set.
Micro-RAID arrays, logical file and object-level replication
I am thankful that one of the Atmos policy-based features is replicating individual objects, rather than LUN-based replication and protection. SoFS supports this for logical files regardless of their LUN placement, GAM supports replication of files and medical images across geographical sites in the grid, and the XIV supports this for 1MB chunks regardless of their hard disk drive placement. The 1MB chunk size was based on the average object size from established Web 2.0 and Digital Archive workloads.
I tried to explain the RAID-X capability of the XIV back in January, under much criticism that replication should only be done at the LUN level. I am thankful that Marc Farley on StorageRap coined the phrase [Micro-RAID array] to help move this new concept further. Now, file-level, object-level and chunk-level replication can be considered mainstream.
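To make chunk-level protection concrete, here is a hedged sketch of splitting an object into 1MB chunks and scattering replicas of each chunk across drives, so that losing any single drive costs at most one copy of any chunk. The function names and the hash-based placement rule are my own inventions for illustration; they are not how XIV's RAID-X actually assigns chunks:

```python
import hashlib

CHUNK_SIZE = 1 << 20  # 1 MB, the chunk size discussed above

def chunk_object(data: bytes):
    """Split an object into fixed 1 MB chunks (the last may be shorter)."""
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

def place_replicas(chunks, num_disks, copies=2):
    """Assign each chunk's copies to distinct disks, spreading the
    chunks of one object across the whole pool of drives."""
    placement = []
    for idx, chunk in enumerate(chunks):
        # deterministic pseudo-random spread derived from chunk content;
        # distinct disks are guaranteed as long as copies <= num_disks
        h = int.from_bytes(hashlib.sha256(chunk).digest()[:8], "big")
        disks = [(h + c) % num_disks for c in range(copies)]
        placement.append((idx, disks))
    return placement
```

Because protection is decided per chunk rather than per LUN, a rebuild after a drive failure only needs to re-copy the chunks that lived on that drive, drawing from many surviving drives in parallel.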
Much larger minimum capacity increments
The original XIV in January had 51TB of capacity per rack, and this went up to 79TB per rack for the most recent IBM XIV Release 2 model. Several complained that nobody would purchase disk systems in such increments. Certainly, small and medium size businesses may not consider XIV for that reason.
I am thankful Atmos offers 120TB, 240TB and 360TB sizes. The companies that purchase disk for Web 2.0 and Digital Archive workloads do purchase disk capacity in these large sizes. Service providers add capacity to the "Cloud" to support many of their end-clients, so purchasing disk capacity to rent back out represents a revenue-generating opportunity.
Renewed attention on SOAP and REST protocols
IBM and Microsoft have been pushing SOA and Web Services for quite some time now. REST, which stands for [Representational State Transfer], allows static and dynamic HTML message passing over standard HTTP. SOAP, which originally stood for [Simple Object Access Protocol], and was later renamed "Service Oriented Architecture Protocol", takes this one step further, allowing different applications to send "envelopes" containing messages and data between applications using HTTP, RPC, SMTP and a variety of other underlying protocols. Typically, these messages are simple text surrounded by XML tags, easily stored as files or as rows in databases, and served up by SOAP nodes as needed.
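The contrast can be sketched in a few lines of Python. In REST, the URL names the resource and the HTTP verb is the action; in SOAP, the action travels inside an XML envelope. The endpoint, operation name (`GetObject`) and namespace below are made up for illustration; only the envelope structure follows the standard SOAP 1.1 layout:

```python
import urllib.request
import xml.etree.ElementTree as ET

# REST: the URL identifies the resource; the HTTP verb (GET) is the action.
def rest_get(url):
    with urllib.request.urlopen(url) as resp:
        return resp.read()

# SOAP: the request is an XML envelope, typically POSTed over HTTP, but
# the same envelope could ride on SMTP, RPC, or other transports.
SOAP_ENVELOPE = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetObject xmlns="http://example.com/storage">
      <ObjectId>12345</ObjectId>
    </GetObject>
  </soap:Body>
</soap:Envelope>"""

# The envelope is plain text wrapped in XML tags, as described above, so
# it is easy to store, log, or queue before a SOAP node processes it.
parsed = ET.fromstring(SOAP_ENVELOPE)
```

Note that the SOAP envelope is self-describing: the body names the operation and carries its parameters, which is why the same message can be stored as a file or a database row and replayed later.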
It's hard to show leadership until there are followers
IBM's leadership sometimes goes unnoticed until followers create "me, too!" offerings or establish similar business strategies. IBM's leadership in Cloud and Grid computing is no exception. Atmos is the latest me-too product offering in this space, trying to address pretty much the same challenges that SoFS and XIV were designed for.
So, perhaps EMC is thankful that IBM has already paved the way, breaking through the ice on their behalf. I am thankful that perhaps I won't have to deal with as much FUD about SoFS, GAM and XIV anymore.
Wrapping up this week's theme of thankfulness, I am thankful for the One Laptop Per Child [OLPC] project and its Give-One-Get-One (G1G1) offer.
Last November, I was one of the first to [sign up for the G1G1], and when mine arrived December 24, I posted initial observations in this [OLPC series]. Over the past year, I have had the pleasure of helping out teams in Nepal and Uruguay, and collaborating with developers in France, India and the United States. Giving back to others has been a richly rewarding experience for me. I made some new friends, built up new professional contacts, and learned some new tricks as well.
Last year's G1G1 offer was limited to the US and Canada, but this year the OLPC has enlisted [Amazon.com] and made the offer available worldwide. You can choose to either give a single laptop for $199 USD or, for $399 USD, get two laptops: one for yourself or your family, and one given to someone like Zimi.
I'm thankful I did. Happy Thanksgiving to all my readers in the USA!
In explaining the word "archive" we came up with two separate Japanese words. One was "katazukeru", and the other was "shimau". If you are clearing the dinner plates from the table after your meal, for example, it could be done for two reasons. Both words mean "to put away", but the motivation that drives this activity changes the word usage. The first reason, katazukeru, is because the table is important: you need the table to be empty or less cluttered to use it for something else, perhaps to play a card game, work on arts and crafts, or pay your bills. The second reason, shimau, is because the plates are important: perhaps they are your best tableware, used only for holidays or special occasions, and you don't want to risk having them broken. As it turns out, IBM supports both senses of the word archive. We offer "space management" when the space on the table (or disk or database) is more important, so older low-access data can be moved off to less expensive disk or tape. We also offer "data retention" where the data itself is valuable, and must be kept on WORM or non-erasable, non-rewriteable storage to meet business or government regulatory compliance.
The process of archiving your data from primary disk to alternate storage media can satisfy both motivations.
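The space-management ("katazukeru") motivation can be sketched as a simple policy that moves files untouched for some period to a cheaper tier. This is a hypothetical illustration of the concept only; the function name, directory layout, and 90-day threshold are my own choices, not how IBM's space management products actually work:

```python
import os
import shutil
import time

AGE_THRESHOLD = 90 * 24 * 3600  # files untouched for 90 days get moved

def space_manage(primary_dir, archive_dir, now=None):
    """Free the primary tier (the "table") by moving low-access files
    to cheaper storage, and report which files were moved."""
    now = time.time() if now is None else now
    moved = []
    for name in os.listdir(primary_dir):
        src = os.path.join(primary_dir, name)
        if not os.path.isfile(src):
            continue
        # last-access time stands in for "low-access data" in this sketch
        if now - os.path.getatime(src) > AGE_THRESHOLD:
            shutil.move(src, os.path.join(archive_dir, name))
            moved.append(name)
    return moved
```

Real space management products go further: they typically leave a small stub behind so the file can be recalled transparently, which is what makes the migration invisible to applications.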
IBM offers software specifically to help with this archival process. For email archive, IBM offers [IBM CommonStore] for Lotus Domino and Microsoft Exchange. For database archive, including support for various ERP and CRM applications, IBM offers [IBM Optim] from the acquisition of Princeton Softech.
The problems occur when companies, under the excuse of simplification or consolidation, feel they can just use their backups as archives. They take daily backups of their email repositories and databases, and keep these for seven to ten years. But what happens when their legal e-discovery team needs to find all emails or database records related to a particular situation, employee, client or account? Good luck! Most backups are not indexed for this purpose, so storage admins are stuck restoring many different backups to temporary storage and combing through the files in hopes of finding the right data.
Backups are intended for operational recovery of data that is lost or corrupted as a result of hardware failures, application defects, or human error. Disk mirroring or remote replication might help with hardware failures, but any logical deletion or corruption of data is immediately duplicated, so it is not a complete solution. FlashCopy or Snapshot point-in-time copies are useful to go back a short time to recover from logical failures, but since they are usually on the same hardware as the original copies, they may not protect against hardware failures. And then there's tape: while many people malign tape as a backup storage choice, 71 percent of customers send backups to tape, according to a 2007 Forrester Research report.
Backups often aren't viable unless restored to the same hardware platform, with the same operating system and application software to make sense of the ones and zeros. For this reason, people typically only keep two to five backup versions, for no more than 30 days, to support operational recovery scenarios. If you make updates to your hardware, OS or application software, remember to take fresh new backups, as the old backups may no longer apply.
Archives are different. Often, these are copies that have been "hardened" or "fossilized" so that they make sense even if the original hardware, OS or application software is unavailable. They might be indexed so that they can be searched, so that you only have to retrieve exactly the data you are looking for. Finally, they are often stored with "rendering tools" that are able to display the data using your standard web browser, eliminating the need to have a fully working application environment.
Take any backup you might have from five years ago and try to retrieve the information. Can you do it? This might be a real eye-opener. You might have inherited this backup-as-also-archive approach from someone else, and are trying to figure out what to do differently that makes more sense. Call IBM, we can help.
Guy Kawasaki is hosting a Web Conference next week on The Art of Evangelism. By this he is referring to promoting products and services, rather than the traditional definition: the preaching or promulgation of the gospel.
A few years ago, I myself had the official title of "Technical Evangelist" for the IBM System Storage product line. I never liked the title, and asked to use something else, but since I was part of a team of "Technical Evangelists," I had to keep it. A lot of companies were using this as a title, I was told, and everyone knew that it was not a religious reference, but a marketing one.
Sometimes, words do not translate well into other countries or cultures. Four years ago, on the week of September 11, 2003, I traveled to Kuwait, Qatar and UAE for a business trip to present the latest on our storage products. On arrival in Kuwait, I had to fill out my "visa application" to enter the country, and it asked for my "occupation/title" but there were not enough spaces to write "Technical Evangelist" so I just entered "Evangelist".
The two Kuwaitis behind the desk looked it up in their Arabic/English dictionary, discussed it, and weren't sure if they should shoot me, or take me to the back room to videotape my proper beheading. Our official host came over to ask what the delay was, and they showed her the dictionary translation. She asked me, "Why would you put Evangelist as your title?" So, I gave her my business card, and told her that my full title of Technical Evangelist did not fit in the space provided.
She explained to the two behind the desk that I had misunderstood the question and misspelled my answer; the actual word intended was "Engineer". She showed them the agenda of the IBM Technical Conference I was speaking at, and the list of Oil and Construction companies that were attending. They looked up the new title "Engineer", agreed the translation was suitable for entry, and conceded that the two words, Evangelist and Engineer, used enough similar letters that they could understand how one might misspell one for the other.
Our limo took a small detour to the middle of the desert so that we could burn and bury the ashes of the remainder of my business cards, before arriving at the hotel. All of my PowerPoint slides that listed my title were changed to "Technical Engineer". The events themselves went very well, as IT people are the same all over the world, and had no problem setting aside religious or political differences in an effort to learn more about technology.
When I got back to the United States, I shared my experience with my fellow team-mates, most of whom never leave the country and would never have thought this might happen. Management agreed to let us change our titles. That was good for me, as I had to order a new box of business cards anyway.
Last year, I became "Manager of Brand Marketing Strategy" of the IBM System Storage product line. Now on business trips I just write "Manager" on the Occupation/Title line. It fits in every form I have ever had to fill out, and translates properly into every language.
Now that the frozen economy is starting to thaw, I have been traveling like crazy this month. So far, I have been to Rochester, MN, Los Angeles and San Diego, CA, and now currently in Austin, TX. On the plus side, I was able to enjoy the [Fourth of July] holiday weekend on the beaches of San Diego.
(If you have not been to California beaches lately, here's a quick [video] reminder)
So the big news this week is that the auction over Data Domain is over, and EMC's bid finally won over NetApp. Both NetApp and EMC have data deduplication capabilities in their existing product lines, but neither could compete against IBM's TS7650G ProtecTIER Data Deduplication gateway and TS7650 ProtecTIER appliances, so both were hell-bent on buying Data Domain at almost any price. The final price agreed upon was over two billion US dollars.
For the most part, Data Domain's products target small and medium sized businesses, whereas IBM's TS7650 and TS7650G products target medium and larger sized enterprises. So now that EMC has a viable data deduplication solution, it looks like it will be yet another IBM-vs-EMC debate going forward.
A client asked me to explain "Nearline storage" to them. This was easy, I thought, as I started my IBM career on DFHSM, now known as DFSMShsm for z/OS, which was created in 1977 to support the IBM 3850 Mass Storage System (MSS), a virtual storage system that blended disk drives and tape cartridges with robotic automation. Here is a quick recap:
Online storage is immediately available for I/O. This includes DRAM memory, solid-state drives (SSD), and always-on spinning disk, regardless of rotational speed.
Nearline storage is not immediately available, but can be made online quickly without human intervention. This includes optical jukeboxes, automated tape libraries, as well as spin-down massive array of idle disk (MAID) technologies.
Offline storage is not immediately available, and requires some human intervention to bring online. This can include USB memory sticks, CD/DVD optical media, shelf-resident tape cartridges, or other removable media.
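The distinction between these three tiers really hinges on just two questions: is the device immediately available for I/O, and does bringing it online require a person? A minimal sketch in Python (the names and structure are mine, for illustration only):

```python
from enum import Enum

class StorageTier(Enum):
    ONLINE = "online"      # immediately available for I/O (DRAM, SSD, spinning disk)
    NEARLINE = "nearline"  # mountable without human intervention (tape library, MAID)
    OFFLINE = "offline"    # needs a person to bring online (shelf tape, USB stick)

def classify(immediately_available: bool, needs_human: bool) -> StorageTier:
    """Classify a device by the two questions that actually matter,
    rather than by its interface (FC, SAS, SATA) or rotational speed."""
    if immediately_available:
        return StorageTier.ONLINE
    return StorageTier.OFFLINE if needs_human else StorageTier.NEARLINE

# A 7200 RPM SATA drive that is always spinning is still *online* storage
print(classify(immediately_available=True, needs_human=False).value)
```

Note that rotational speed and connection type appear nowhere in the classification, which is exactly the point of the paragraphs that follow.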
Sadly, it appears a few storage manufacturers and vendors have been misusing the term "Nearline" to refer to "slower online" spinning disk drives. I found this [June 2005 technology paper from Seagate], and this [2002 NetApp Press Release], the latter of which included this contradiction for their "NearStore" disk array. Here is the excerpt:
"Providing online access to reference information—NetApp nearline storage solutions quickly retrieve and replicate reference and archive information maintained on cost-effective storage—medical images, financial models, energy exploration charts and graphs, and other data-intensive records can be stored economically and accessed in multiple locations more quickly than ever"
Which is it, "online access" or "nearline storage"?
If a client asked why slower drives consume less energy or generate less heat, I could explain that, but if they ask why slower drives must have SATA connections, that is a different discussion. The speed of a drive and its connection technology are, for the most part, independent. A 10K RPM drive can be made with an FC, SAS or SATA connection.
I am opposed to using "Nearline" just to distinguish between four-digit speeds (such as 5400 or 7200 RPM) versus "online" for five-digit speeds (10,000 and 15,000 RPM). The difference in performance between 10K RPM and 7200 RPM spinning disks is minuscule compared to the differences between solid-state drives and any spinning disk, or the difference between spinning disk and tape.
I am also opposed to using the term "Nearline" for online storage systems just because they are targeted for the typical use cases like backup, archive or other reference information that were previously directed to nearline devices like automated tape libraries.
Can we all just agree to refer to drives as "fast" or "slow", or give them RPM rotational speed designations, rather than try to incorrectly imply that FC and SAS drives are always fast, and SATA drives are always slow? Certainly we don't need new terms like "NL-SAS" just to represent a slower SAS connected drive.
It's been a while since I've talked about [Second Life].
The latest post on eightbar, [Spimes, Motes and Data centers], discusses IBM's use of virtual world technology to analyze data centers in three dimensions. New World Notes asks [What's The Point Of 3D Data Centers?] One would think that a simple monitoring tool based on a two-dimensional floor plan would be enough to evaluate a data center.
Enter Michael Osias, IBM (a.k.a Illuminous Beltran in Second Life). Some of the leading news sites have begun to notice some 3D data centers that he has helped pioneer. UgoTrade writes up an article about Michael and the media attention in [The Wizard of IBM's 3D Data Centers].
Of course, in presenting these "Real Life/Second Life" (RL/SL) interactive technologies, IBM is sometimes the target of ridicule. Why? Because IBM is 10 years ahead of everyone else. So, are there aspects of a data center where 3D interfaces make sense? I think there are.
IBM TotalStorage Productivity Center has an awesome "topology viewer" that shows which servers are connected to which switches, disk systems and tape libraries. This is all done in a 2D diagram, generated dynamically with data discovered through open standard interfaces, similar to what you might draw manually with tools like Visio. Imagine, however, how much more powerful it would be as a 3D viewer, with virtual equipment mapped to the physical location of each piece of hardware, including its position in the rack and its location on the data center floor.
Designing computer room air conditioning (CRAC) systems is actually a three-dimensional problem. Cold air is fed underneath the raised floor, comes up through strategically placed "vent" tiles, and is taken in at the front of each rack. Hot air comes out the back of each rack and, hopefully, finds a ceiling duct intake to get cooled again. The temperature six inches off the floor is different from the temperature six feet off the floor, and 3D monitoring tools could be helpful in identifying "hot spots" that need attention. In this case, "spimes" represent sensors in the 3D virtual world, able to report back information to help diagnose problems or monitor events.
After many people left the mainframe in favor of running a single application per distributed server, the pendulum has finally swung back. Companies are discovering the many benefits of changing this behavior; "re-centralization" is the task at hand. Thanks to virtualization of servers, networks and storage, sharing common resources can once again claim the benefits of economies of scale. In many cases, servers that work together in collective units for specific applications might benefit from being consolidated onto the same equipment.
IBM's "New Enterprise Data Center" vision recognizes that people will need to focus on the management aspectsof their IT infrastructure, and 3D virtual world technologies might be an effective way to getthe job done.
A long time ago, perhaps in the early 1990s, I was an architect on the component known today as DFSMShsm in the z/OS mainframe operating system. One of my job responsibilities was to attend the twice-yearly [SHARE] conference to listen to the attendees' requirements on what they would like added or changed in DFSMS, and to ask enough questions that I could accurately present the reasoning to the rest of the architects and software designers on my team. One person requested that the DFSMShsm RELEASE HARDCOPY command should release "all" the hardcopy. This command sends all the activity logs to the designated SYSOUT printer. I asked what he meant by "all", and the entire audience of 120-some attendees nearly fell on the floor laughing. He complained that some clever programmer wrote code to test whether an activity log contained only "Starting" and "Ending" messages, but no error messages, and to skip those from being sent to SYSOUT. I explained that this was done to save paper, good for the environment, and so on. Again, howls of laughter. Most customers reroute the SYSOUT from DFSMS from a physical printer to a logical one that saves the logs as data sets, with date and time stamps, so skipping any leaves gaps in the sequence. The client wanted a complete set of data sets for his records. Fair enough.
When I returned to Tucson, I presented the list of requests, and the immediate reaction when I presented the one above was, "What did he mean by ALL? Doesn't it release ALL of the logs already?" I then had to recap our entire dialogue, and then it all made sense to the rest of the team. At the following SHARE conference six months later, I was presented with my own official "All" tee-shirt that listed, and I am not kidding, some 33 definitions for the word "all", in small font covering the front of the shirt.
I am reminded of this story because of the challenges of explaining complicated IT concepts using the English language, which is so full of overloaded words with multiple meanings. Take, for example, the word "protect". What does it mean when a client asks for a solution or system to "protect my data" or "protect my information"? Let's take a look at three different meanings:
The first meaning is to protect the integrity of the data from within, especially from executives or accountants that might want to "fudge the numbers" to make quarterly results look better than they are, or to "change the terms of the contract" after agreements have been signed. Clients need to make sure that the people authorized to read/write data can be trusted to do so, and to store data in Non-Erasable, Non-Rewriteable (NENR) protected storage for added confidence. NENR storage includes Write-Once, Read-Many (WORM) tape and optical media, disk and disk-and-tape blended solutions such as the IBM Grid Medical Archive Solution (GMAS) and IBM Information Archive integrated system.
The second meaning is to protect access from without, especially from hackers or other criminals that might want to gather personally identifiable information (PII) such as social security numbers, health records, or credit card numbers and use these for identity theft. This is why it is so important to encrypt your data. As I mentioned in my post [Eliminating Technology Trade-Offs], IBM supports hardware-based Full Disk Encryption (FDE) drives in its IBM System Storage DS8000 and DS5000 series. These FDE drives have built-in AES-128 encryption to perform the encryption in real time. Neither HDS nor EMC supports these drives (yet). Fellow blogger Hu Yoshida (HDS) indicates that their USP-V has implemented data-at-rest encryption in their array differently, using back-end directors instead. I am told EMC relies on consuming CPU cycles on the host servers to perform software-based encryption, either as MIPS consumed on the mainframe, or using their PowerPath multi-pathing driver on distributed systems.
There is also concern about internal employees having the right "need-to-know" of various research projects or upcoming acquisitions. On SANs, this is normally handled with zoning, and on NAS with appropriate group/owner bits and access control lists. That's fine for LUNs and files, but what about databases? IBM's DB2 offers Label-Based Access Control [LBAC] that provides a finer level of granularity, down to the row or column level. For example, if a hospital database contained patient information, the doctors and nurses would not see the columns containing credit card details, the accountants would not see the columns containing healthcare details, and the individual patients, if they had any access at all, would only be able to access the rows related to their own records, and possibly the records of their children or other family members.
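DB2's actual LBAC syntax is beyond the scope of this post, but the effect can be sketched as a toy model in Python. The roles, columns and patient records below are all hypothetical; the point is simply that access is filtered by both column and row, depending on who is asking:

```python
# Toy model of label-based access control over a patient table.
PATIENTS = [
    {"id": 1, "name": "Ana", "diagnosis": "flu",    "card": "4111-0000"},
    {"id": 2, "name": "Ben", "diagnosis": "asthma", "card": "5500-0000"},
]

# Which columns each role is labeled to see
VISIBLE_COLUMNS = {
    "doctor":     {"id", "name", "diagnosis"},  # no payment details
    "accountant": {"id", "name", "card"},       # no healthcare details
}

def query(role, patient_id=None):
    """Return rows the role may see, stripped to the columns it may see."""
    cols = VISIBLE_COLUMNS[role]
    rows = [r for r in PATIENTS if patient_id is None or r["id"] == patient_id]
    return [{k: v for k, v in r.items() if k in cols} for r in rows]

print(query("doctor", patient_id=1))      # diagnosis visible, card hidden
print(query("accountant", patient_id=1))  # card visible, diagnosis hidden
```

A patient's own access would just be another role whose row filter is pinned to their own `id`.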
The third meaning is to protect against the unexpected. There are lots of ways to lose data: physical failure, theft or even incorrect application logic. Whatever the cause, you can protect against this by having multiple copies of the data. You can either have multiple copies of the data in its entirety, or use RAID or a similar encoding scheme to store parts of the data in multiple separate locations. For example, with a RAID-5 rank in a 6+P+S configuration, you would have six parts of data and one part parity code scattered across seven drives. If you lost one of the disk drives, the data can be rebuilt from the remaining portions and written to the spare disk set aside for this purpose.
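The math behind that rebuild is just byte-wise XOR: the parity block is the XOR of the six data blocks, so XORing parity with any five surviving blocks regenerates the sixth. A minimal Python sketch (block contents are made up for illustration):

```python
from functools import reduce

def xor_parity(blocks):
    """Compute the byte-wise XOR across a list of equal-sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Six data blocks (the "6" in a 6+P+S rank); parity is the "P"
data = [bytes([i] * 4) for i in range(1, 7)]
parity = xor_parity(data)

# Simulate losing drive 3: rebuild its block from the five
# surviving data blocks plus the parity block
survivors = data[:3] + data[4:]
rebuilt = xor_parity(survivors + [parity])
assert rebuilt == data[3]
print("lost block rebuilt from survivors + parity")
```

RAID-6 extends the same idea with a second, independently computed parity so that two simultaneous drive losses are survivable.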
But what if the drive is stolen? Someone can walk up to a disk system, snap out the hot-swappable drive, and walk off with it. Since it contains only part of the data, the thief would not have the entire copy of the data, so no reason to encrypt it, right? Wrong! Even with part of the data, people can get enough information to cause your company or customers harm, lose business, or otherwise get you in hot water. Encryption of the data at rest can help protect against unauthorized access to the data, even in the case when the data is scattered in this manner across multiple drives.
To protect against site-wide loss, such as from a natural disaster, fire, flood, earthquake and so on, you might consider having data replicated to remote locations. For example, IBM's DS8000 offers two-site and three-site mirroring. Two-site options include Metro Mirror (synchronous) and Global Mirror (asynchronous). The three-site is cascaded Metro/Global Mirror with the second site nearby (within 300km) and the third site far away. For example, you can have two copies of your data at site 1, a third copy at nearby site 2, and two more copies at site 3. Five copies of data in three locations. IBM DS8000 can send this data over from one box to another with only a single round trip (sending the data out, and getting an acknowledgment back). By comparison, EMC SRDF/S (synchronous) takes one or two trips depending on blocksize, for example blocks larger than 32KB require two trips, and EMC SRDF/A (asynchronous) always takes two trips. This is important because for many companies, disk is cheap but long-distance bandwidth is quite expensive. Having five copies in three locations could be less expensive than four copies in four locations.
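The round-trip count matters because every synchronous write waits on the wire before completing. A rough back-of-envelope in Python, assuming light travels about 200 km per millisecond in optical fiber and ignoring switch hops and protocol overhead (the numbers are illustrative, not vendor measurements):

```python
def replication_latency_ms(distance_km: float, round_trips: int) -> float:
    """Approximate added write latency for synchronous replication.
    Light in fiber covers roughly 200 km per millisecond, so each
    round trip costs about 2 * distance / 200 milliseconds."""
    one_way_ms = distance_km / 200.0
    return 2 * one_way_ms * round_trips

# 300 km metro distance: one round trip vs. two
print(replication_latency_ms(300, 1))  # single-trip protocol
print(replication_latency_ms(300, 2))  # two-trip protocol, double the wait
```

At 300 km, the difference between one and two round trips is the difference between roughly 3 ms and 6 ms added to every synchronous write, which adds up quickly for write-intensive workloads.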
Fellow blogger BarryB (EMC Storage Anarchist) felt I was unfair pointing out that their EMC Atmos GeoProtect feature only protects against "unexpected loss" and does not eliminate the need for encryption or appropriate access control lists to protect against "unauthorized access" or "unethical tampering".
(It appears I stepped too far on to ChuckH's lawn, as his Rottweiler BarryB came out barking, both in the [comments on my own blog post], as well as his latest titled [IBM dumbs down IBM marketing (again)]. Before I get another rash of comments, I want to emphasize this is a metaphor only, and that I am not accusing BarryB of having any canine DNA running through his veins, nor that Chuck Hollis has a lawn.)
As far as I know, the EMC Atmos does not support FDE disks that do this encryption for you, so you might need to find another way to encrypt the data and set up the appropriate access control lists. I agree with BarryB that "erasure codes" have been around for a while and that there is nothing unsafe about using them in this manner. All forms of RAID-5, RAID-6 and even RAID-X on the IBM XIV storage system can be considered a form of such encoding as well. As for the amount of long-distance bandwidth that Atmos GeoProtect would consume to provide this protection against loss, you might question any cost savings from this space-efficient solution. As always, you should consider both space and bandwidth costs in your total cost of ownership calculations.
Of course, if saving money is your main concern, you should consider tape, which can be ten to twenty times cheaper than disk, affording you to keep a dozen or more copies, in as many time zones, at substantially lower cost. These can be encrypted and written to WORM media for even more thorough protection.
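The arithmetic is simple enough to check. A sketch with hypothetical prices (the 15x disk-to-tape ratio is my own pick from the ten-to-twenty range above):

```python
def total_media_cost(copies: int, capacity_tb: float, dollars_per_tb: float) -> float:
    """Total media cost: number of copies x capacity x unit price."""
    return copies * capacity_tb * dollars_per_tb

# Hypothetical: 100 TB of data, disk at $1000/TB, tape ~15x cheaper
disk_cost = total_media_cost(copies=4, capacity_tb=100, dollars_per_tb=1000)
tape_cost = total_media_cost(copies=12, capacity_tb=100, dollars_per_tb=1000 / 15)

print(f"4 disk copies: ${disk_cost:,.0f}")
print(f"12 tape copies: ${tape_cost:,.0f}")
```

Even tripling the copy count, tape comes out well ahead, which is the point: more copies in more places for less money.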
Of course, he is focused on the home user, and not the bigger mess found in the corporate world, where Federal Rules like the one passed last week begin to mandate that all U.S. companies archive every e-mail and instant message (IM) generated by their employees.
However, the article does bring up issues that affect the corporate world as well. It's not the "format" as much as the medium/player interface. A friend of mine just bought a vintage 8-track-tape player, but has only one 8-track tape to try it out with. He is now looking on eBay for other 8-track tapes.
The idea of keeping old drives around to read back data is not new. A company called eMag Solutions has all kinds of older tape drives to help companies retrieve data on their older 3420 and 3480 tape cartridges.
The problem is not just accessing the data on the media, but rendering the "ones" and "zeros" into meaningful information. For example, suppose I saved a copy of my Quicken tax file every year, and copied them onto a single DVD for long-term storage. The problem is that to access 2002 tax data, I have to run that version of the Quicken 2002 program, and hope that version will run on my current computer equipment and operating system.
A client I visited earlier this year had to retrieve 4-year-old Oracle data for litigation reasons. However, to make sense of the data, they had to build a server with a down-level version of AIX and down-level version of Oracle to match the level supported by their homegrown application.
One solution might be to find a new format that is application-independent. Flat text files, Adobe PDF format, MP3 audio files, HTML pages, and JPEG photo images are often used to avoid the requirement of special applications to make sense of the data. Unfortunately, in some countries, the laws actually dictate that businesses must keep their data in the original "digital format". So, if it was an MS Word v1 document, it must be kept in v1 format, even though today's Word 2002 can't even make sense of it, and you have to go to IBM or some other third party that has "rendering tools" that understand these older formats.
Luckily for the corporate world, IBM has a lot of experience in this area: it is the leader in Content Management; offers the world's fastest archive/compliance storage, the DR550, clocked at three times faster than the EMC Centera; offers WORM tape on LTO Generation 3 and 3592 tape cartridges; and provides software designed to render older formats into readable form.
For the home user, IBM's recent "Innovation Jam" identified this as one of the top 10 ideas, the idea of "Digital Me", storing not just old tax documents, but photos, music, home videos, and so on. My aunt Nancy passed away, leaving me a box of old VHS tapes, which I will watch this month as I sort through all my paper receipts getting ready to file for 2006 taxes.
Continuing my business trip through Canada, an article by Richard Blackwell titled [The Double Bottom Line] in yesterday's Globe and Mail newspaper caught my attention. Here is an excerpt, citing Tim Brodhead, president of the J.W. McConnell Family Foundation in Montreal:
The bottom line for any business is making a profit, right?
But how about considering a different, or additional bottom line: helping make the world a better place to live in.
That's the radical proposition underlying the concept of "social entrepreneurship," the harnessing of business skills for the benefit of the disadvantaged.
Young investors, in particular, now want their investments to produce both financial and social returns, he noted.
Until recently, "we could either make a donation [to a charity] and get zero financial return, or we could invest and get zero social return." People now want more of both, but rules governing charities and business make that tough to accomplish.
One stumbling block is the imperative - entrenched in corporate law - that managers and directors of for-profit companies have a fiduciary duty to maximize profits. That structure is a brick wall that limits the expansion of social entrepreneurship, Mr. Brodhead said.
Some companies have embraced the new paradigm of a double bottom line, even if they are uncomfortable with the "social entrepreneur" label.
This fiduciary duty to maximize profits is discussed in the 2003 documentary [The Corporation]. However, some organizations are now trying to align their goals, finding ways to benefit their investors as well as society overall. For example, the organization [ONE.org] helped launch [Product (RED)]:
If you buy a (RED) product from GAP, Motorola, Armani, Converse or Apple, they will give up to 50% of their profit to buy AIDS drugs for mothers and children in Africa. (RED) is the consumer battalion gathering in the shopping malls. You buy the jeans, phones, iPods, shoes, sunglasses, and someone - somebody’s mother, father, daughter or son - will live instead of dying in the poorest part of the world. It’s a different kind of fashion statement.
The company, which has operated in Africa for nearly six decades, expects to increase its investment by more than $US120 million (more than R820 million) over the next two years. In the coming year, IBM expects to hire up to 100 students from Sub-Saharan universities to meet the growing demand in services, global delivery and software development.
"The Sub-Saharan African market is poised for double-digit growth flowing from the development and expansion of telecommunications networks, power grids and transport infrastructure," said Mark Harris, Managing Director, IBM South and Central Africa. "Private and public sector investment in the region is transforming the ability of the market to participate in the global economy."
A recent IBM Global Innovation Outlook (GIO) [report on Africa] indicates that the economies of dozens of African nations are growing at healthy rates, the best in the past 30 years, with a 5.5 to 5.8 percent average across the continent. This supports last month's news that [Top IBM thinkers to mentor African students]:
Hundreds of IBM scientists and researchers will mentor college students in Africa. Called Makocha Minds (after the Swahili word for "teacher"), the program will reach hundreds of computer science, engineering and mathematics students.
Makocha Minds is an off-shoot of IBM’s Global Innovation Outlook, an annual symposium of top government, business and academic leaders that uncovers new opportunities for business and societal innovation. "African students need to be trained in entrepreneurship so that they get out there and not just make jobs for themselves but create opportunities to employ others as well,” said Athman Fadhili, a graduate student at the University of Nairobi (Kenya).
Most of the mentoring will be via email and online collaboration.
Mentoring via email and online collaboration is very reasonable. I have mentored both high school and college students through a partnership between IBM Tucson and the Society of Hispanic Professional Engineers [SHPE]. While the kids were all located in Tucson, I rarely am, traveling nearly every week, but I made time for the kids via email and online collaboration wherever I happened to be.
To make this work, we need to get email and online collaboration into the hands of those who need them. I got my email thanking me for being a "first day donor" to the One Laptop Per Child "Give 1 Get 1" (G1G1) project, and have added this "badge" to the right panel of my blog. If you click on the badge, you will be taken to a series of YouTube videos that further describe the project.
According to the email, my donated XO laptop will soon be delivered into the hands of a child in Afghanistan, Cambodia, Haiti, Mongolia or Rwanda.
How do these work? Instead of buying your uncle yet another $25 necktie, consider buying a $25 Kiva certificate. The $25 "micro loan" goes to someone in the third world to improve their situation, start a business, get a job, and so on, and you give your uncle a Kiva certificate so that he can track the progress. I think that is very clever and innovative.
As you can imagine, I get a lot of email from around the world. This one, from a loyal reader from overseas, was particularly interesting. Normally, I would direct them to read the fantastic manual [RTFM], but decided instead to go ahead and tackle it here in my blog.
I have followed your blog for several years; it has served as a reference and a training resource for me in my professional career, and I want to thank you.
I am writing because my company has acquired a new IBM Storwize V7000 Gen2 (16 FC ports, 8 ports per controller node) to replace a Gen1, along with an 8-port FC FlashSystem 900. The idea is to virtualize part of the FlashSystem 900 behind the V7000, and assign the rest directly to the hosts. After much reading of forums and storage Redbooks, I still have no clear idea how the SAN cabling should be done, or how the zoning should be made to carry out this installation. I would appreciate it if you could write on this subject, as SAN zoning and cabling seem to be controversial, and if possible clarify my scenario.
I will tackle this in three steps.
First, let's attach "Server 1" and the FlashSystem 900 to the SAN fabric. IBM Spectrum Virtualize can handle one, two or even four separate fabrics. Let's assume you have a dual-port Host Bus Adapter (HBA) in server 1, and two redundant fabrics. We will connect each server port to each FCP switch. Likewise, we will connect each FCP switch to the FlashSystem 900, carve up "Volume 1", and create SAN "Zone A1" and "Zone A2", which identify "Server 1" as the initiator, and "FlashSystem 900" as the target. This is all basic stuff.
"All Storwize V7000 Gen2 nodes in the Storwize V7000 Gen2 clustered system are connected
to the same SANs, and they present volumes to the hosts. These volumes are created from
storage pools that are composed of mDisks presented by the disk subsystems.
The fabric must have three distinct zones:
Storwize V7000 Gen2 cluster system zones
Create one cluster zone per fabric, and include any port per node that is designated for
intra-cluster traffic. No more than four ports per node should be allocated to intra-cluster
Create a host zone for each server host bus adapter (HBA) port accessing Storwize
Create one Storwize V7000 Gen2 storage zone for each storage system that is
virtualized by the Storwize V7000 Gen2. Some storage control systems need two
separate zones (one per controller) so that they do not 'see' each other."
Second, we connect the Storwize V7000 Gen2 to the FCP switches. You don't need to connect all of the ports, but I recommend connecting each controller node to each FCP switch, requiring four cables. Add more connections for added performance bandwidth.
Carve up "Volume 2" and this will be referred to as a "managed disk", mDisk for short, and create a "storage pool" which were formerly known as a "managed disk group" which is why you often see MDG in the naming conventions and examples. Storage pools can have one or more managed disks, and you can add more dynamically as needed.
The "storage zone" indicates the Storwize V7000 Gen2 as the initiator, and the FlashSystem 900 as target. If you want to increase the performance bandwidth, consider more cables between the FCP switches and the FlashSystem 900. We create "Zone B1" and "Zone B2". I recommend a separate "storage zones" for each additional storage system that you choose to attach to the Storwize V7000 Gen2.
The "cluster zone" that connects all of the Storwize V7000 Gen2 node ports together for node-to-node (intra-cluster) communication. Storwize V7000 Gen2 ports can serve as both initiators and targets dynamically. For example, when you write to one node, the node then copies the cache block over to the second node so there are two copies stored safely on separate nodes. Since we have two fabrics, we create "Zone C1" and "Zone C2".
Third, we connect "Server 2" to the FCP switches, same as we did with "Server 1". We create "Volume 3", which is a "virtual disk" (vDisk for short), from the storage pool containing Volume 2. The "host zone" indicates Server 2 as the initiator, and the Storwize V7000 Gen2 as the target. We create "Zone D1" and "Zone D2". I recommend putting each additional server in its own set of host zones.
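On the Storwize V7000 side, the pool, volume, and host mapping steps above can be sketched with the standard svctask CLI. The pool name, volume name, size, and WWPN here are made up for illustration; adjust them for your environment:

```shell
# Create a storage pool (formerly "managed disk group") from a discovered mDisk
svctask mkmdiskgrp -name MDG_FS900 -ext 256 -mdisk mdisk0

# Create "Volume 3" as a virtual disk (vDisk) in that pool
svctask mkvdisk -mdiskgrp MDG_FS900 -iogrp 0 -size 100 -unit gb -name vdisk3

# Define a host object from Server 2's HBA WWPN, then map the volume to it
svctask mkhost -name server2 -fcwwpn 100000051E000001
svctask mkvdiskhostmap -host server2 vdisk3
```

Once the host zone is active and the mapping is in place, Server 2 should discover vdisk3 after a rescan of its HBAs.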
In theory, you could have a server connected to both Volume 1 and Volume 3. For example, a Windows server would have a "C:" drive connected directly to FlashSystem 900 for high-speed performance, and have a "D:" drive on Storwize V7000 Gen2 to contain data. The Storwize V7000 Gen2 introduces 60 to 100 microseconds of added latency, but provides added value such as FlashCopy, Thin Provisioning, and Real-time compression.
Of course, there are unique situations that might require special configurations, depending on the servers, operating systems, host bus adapters, FCP switches, and storage systems involved.
I am pleased with the turn-out we had attending last week for my Infoboom Webinar on [The Future of Storage]. The 55-minute replay is available on Infoboom, and the slide deck can be downloaded from the [IBM Expert Network].
I mentioned that I was going to Indianapolis and Boston next week to give lectures on this topic. Here are the details:
Indianapolis - September 7, 2011
The Future of Storage with Tony Pearson Luncheon Briefing
Harry & Izzy's
153 South Illinois Street
Indianapolis, IN 46225
Time: 11am to 1:30pm
Boston - September 8, 2011
The Future of Storage with Tony Pearson Briefing and Networking Reception
The Capital Grille
10 Wayside Road
Burlington, MA 01803
Time: 4:30pm to 6:30pm
I will also be in San Francisco for Oracle OpenWorld (Oct 2-6), Auckland New Zealand (Nov 9-11), and Melbourne Australia (Nov 15-17).
Back in October, Daryl Pereira asked me for an interview about my blog. I get a lot of these requests, but this one was different. Daryl is on the IBM developerWorks team, and he was going to interview me for the "Great Mind Challenge". This is a fun competition for a group of about 100 college students from San Jose State University to get them to learn blogging best practices and techniques.
This was the one post that put me into the #1 position, with over 70,000 hits so far and counting, and that does not include all the people who read my blog through feed readers or the various cross-postings on IBM Storage Community and IBM Virtual Briefing Center.
This blog post was part of a series on IBM Watson, the computer that beat two humans on the "Jeopardy!" television game show. Having worked closely with the IBM Research scientists to understand how IBM Watson worked so that I could blog about it, I thought a good way for readers to appreciate how it was put together was to explain how to assemble a scaled-down version. My inspiration was an article by John Pultorak that explained [how to build your own Apollo Guidance Computer (AGC) in your basement].
The blog post series proved to be a big hit. IBM Watson helps to demonstrate many modern computer techniques, including business analytics of Big Data, Cloud Computing, and parallel programming techniques such as Hadoop. Showing that a "Watson Jr." could be built in your basement helped to emphasize that IBM Watson was made from hardware and software that are generally available today.
I am very proud of this blog post. I worked with Moshe Yanai and the rest of the XIV team to be completely accurate and to set the right level of expectations. So many false statements and FUD had been thrown out about what would happen if a double drive failure occurred during the short 30-minute window of opportunity. It turns out that in most cases, no data is lost, and in all other cases, the lost data can be easily identified and restored. In most cases, this requires less recovery than a double drive failure on a traditional RAID-5 disk array.
It was also an opportunity to try out Animoto to create a short and simple video. Normally, when marketing needs a video made, it costs $25,000 USD or more, and takes weeks to produce. I was able to get this video done in just a few hours with no out-of-pocket expenses.
After this post, nearly all FUD in the blogosphere about double drive failures disappeared. More importantly, XIV sales that quarter (2Q2010) were substantially better than the prior quarter. Many XIV sales reps credit this blog post for that huge bump in XIV sales! I guess this could be the Tony Pearson equivalent of the [Colbert Bump].
In 2009 and 2010, I was the third most influential blogger on IBM's developerWorks, and now in 2011, I have risen to the number one position! Internally, we call this "Winning the Devy" (like an Emmy, but for developerWorks bloggers). I would like to thank all my readers for continuing to share in the conversation!
Avi Bar-Zeeb of RealityPrime has an interesting post about How Google Earth [really] Works. Normally, people who are very knowledgeable in a topic have a hard time describing concepts in basic terms. Avi was one of the co-founders of Keyhole, the company that built the predecessor of Google Earth, and also worked with Linden Lab on the 3D rendering in its virtual world, so he certainly knows what he is talking about. While he sometimes drops down into techno-talk about patents, the post overall is a good read.
It is perhaps human nature to be curious on how things are put together and how they function, leading to the popularity of web sites like www.howstuffworks.com that cover a wide range of topics.
Many things can be used without understanding their internal inner workings. You can put on a pair of blue jeans without knowing how the cotton was made into denim fabric; lace up your favorite pair of running shoes without understanding the chemical make-up of the plastic that cushions your feet; or drink a glass of beer after your five mile run without knowing how alcohol is processed by your liver.
For technology, however, some people insist they need to know how it works in order for them to get the most use of it. When shopping for a car, for example, a guy might look under the hood, and ask questions about how the engine works, while his wife sits inside the vehicle, counting cup holders and making sure the radio has all the right buttons.
Not all technology suffers from need-to-know-itis. For example, the Apple iPod music player and the Canon PowerShot digital camera are both just disk systems that read and write data, with knobs and dials on one end, and ports for connectivity on the other. Everyone just asks how to use their controls, and might read the manual to understand how to connect the cables. Few people who use these devices ask how they work before they buy them.
Other disk systems, the kind designed for data centers for the medium and large enterprise, apparently aren't there yet. Storage admins who might happily own both an iPod player and a PowerShot camera, insist they need to know how the technologies inside various storage offerings work. Is this just curiosity talking? Or are there some tasks like configuration, tuning, and support that just can't be done without this knowledge? Does knowing the inner workings somehow make the job more enjoyable, easier, or performed with less stress?
I'm curious what you think; send me a comment on this.
This week, Allyson Klein, Director of Technical Leadership Marketing from Intel, interviewed me for the Intel® [Chip Chat podcast] to promote the upcoming [IBM Edge conference] to be held June 4-8 in Orlando, Florida. Intel is a big sponsor of the conference. The podcast is only about 8 minutes long. Enjoy!
While the rest of Americans were glued to their televisions watching President Obama explain his plan for recovery, my colleagues and I had dinner with clients from Canada.
One in particular claimed her father was known as the kingpin of [Flin Flon]. She lives in Ontario now, but she grew up in this small mining town in Manitoba, made famous for winning a government contract to grow crops for medicinal purposes.
Shown at left is the town's mascot, Flinty. Yes, apparently the town was named after a fictional character in a paperback novel.
Of course, in conversations with clients, it is best to avoid topics like politics or drugs, but the intersection of government health care and its implications for IT can't be disregarded. Since Canada has a more efficient healthcare process, the government enjoys a lower cost per citizen. President Obama has suggested that the United States should adopt reforms to make the American system more efficient, including electronic medical records.
Not surprisingly, [smarter healthcare] is part of IBM's latest set of strategic initiatives. Digitizing medical information has a variety of benefits:
Information isn't stranded on islands
If there is any situation that needs to deliver the right information, to the right people, at the right time, healthcare is certainly one of them. Having the right information can help reduce medical mistakes.
Physicians spend time with their patients, not paperwork
I personally know some doctors here in Tucson, and they are the first to admit that they would prefer to focus on their core strengths, which they spent many years in medical school developing, and leave the administrative details to someone else. Focusing on core strengths is a common theme for successful businesses, and this is no different.
Expertise needs no passport
Medical emergencies do not always happen near the hospital or clinic where your medical records are stored. An exciting feature of digital information is that it is easy to transport to where it is needed, unlike paper records or X-ray film.
To learn more about IBM's strategy and vision, see IBM's [Smarter Planet] Web site.
Chris Anderson, of Wired magazine, wrote a great article called The Long Tail.
This article became a book by the same name, published earlier this year, and I just discovered it on a recent visit to Second Life. A lot of IBMers are now also Second Lifers, and I suspect it is just a matter of time before we are conducting our customer briefings there, and getting our year-end bonuses paid directly in Linden bucks. (Those of you not familiar with Second Life can watch this 3-minute video from the folks at Text100.)
Anyway, the Long Tail describes the new economy of entertainment thanks to digital storage. Here are some of the key insights.
In the past, entertainment was all about hits: hit songs, hit movies, hit novels, and this was primarily because of the economic realities imposed by physical space. Chris writes: "An average movie theater will not show a film unless it can attract at least 1,500 people over a two-week run; that's essentially the rent for a screen. An average record store needs to sell at least two copies of a CD per year to make it worth carrying; that's the rent for a half inch of shelf space."
Things have changed. To drive the point home, Robbie Vann-Adibe (CEO of eCast) poses the trick question: "What percentage of the top 10,000 titles in any online media store (Netflix, iTunes, Amazon, or any other) will rent or sell at least once a month?" The answer will surprise you. Write down your guess first, then go read here. His digital jukeboxes are able to play from a list of 150,000 songs, not the few hundred you'd find at the Tap Room, which is rated as having the best jukebox in Tucson.
The phenomenon is not just limited to music. "Take books," Chris writes, "The average Barnes & Noble carries 130,000 titles. Yet more than half of Amazon's book sales come from outside its top 130,000 titles. Consider the implication: If the Amazon statistics are any guide, the market for books that are not even sold in the average bookstore is larger than the market for those that are..."
This has incredible implications for the storage industry. For one, content providers are going to dig deep into their archives to digitize and deliver "long tail" offerings. If they don't have a deep archive, many will start to build one. Second, the need to search through that large volume of content will become more critical. Classifying and indexing with the appropriate tags and metadata will be an important task.
"The murals in restaurants are on par with the food in museums." --- Peter De Vries
The quote above applies to blogs as well. Posts about competitive products with which the blogger has little or no hands-on experience tend to be terribly misleading or technically inaccurate. We saw this last month as Sun Microsystems' Jeff Savit tried to discuss the IBM System z10 EC mainframe.
This time, it comes from EMC bloggers discussing NetApp equipment, and by association, IBM System Storage N series gear. I was going to comment on the ridiculous posts by fellow bloggers from EMC about the SnapLock compliance feature on NetApp systems, but my buddies at NetApp had already done this for me, saving me the trouble.
The hysterical nature of the writing from EMC, and the calm responses from NetApp, speak volumes about the cultures of both companies.
The key point is that none of the "Non-erasable, Non-Rewriteable" (NENR) storage systems out there is certified as compliant by any government agency on the planet. Governments just aren't in the business of certifying such things. The best you can get is a third-party consultant, such as [Cohasset Associates], to help make decisions that are best for each particular situation.
In addition to SnapLock on the N series, IBM offers the [IBM System Storage DR550], plus WORM tape and optical systems, all of which have been deemed compliant with the U.S. Securities and Exchange Commission [SEC 17a-4] federal regulations by Cohasset Associates. For medical patient records and images like X-rays, IBM offers the Grid Medical Archive Solution [GMAS], designed to meet the requirements of the U.S. Health Insurance Portability and Accountability Act [HIPAA]. For other government or industry regulations, consult with your legal counsel.
An astute reader brought this to my attention. The newest addition to our "IBM Express Portfolio" set of SMB-oriented offerings is the new TS3100 tape library. This has one LTO Gen 3 drive and up to 22 cartridges, which can be a mix of WORM and rewriteable cartridges, beautifully packaged in a small 2U-high (3.5 inch) rack-mountable chassis. Each cartridge can hold up to 800GB uncompressed, or 1.6TB with typical 2-to-1 compression.
This tape library would be a great complement to TSM Express for backup, and to the DR550 Express for archive and compliance storage.
And now, for a limited time, there is a $1500 rebate; check the website for details.
Well, I am back safely from my trip last week to Chicago, and now I am writing this in Madrid, Spain, on my way to Brussels, Belgium for the IT Storage Expo.
For those who have asked how the construction on the new Tucson EBC is going, here are a few pictures I took on Friday. As you can see, it is coming along nicely. The official grand opening will be April 2.
Last Tuesday, we had our official "Grand Opening" for the new Tucson Executive Briefing Center!
We sent out fancy invitations to all the IBM executives who supported this center, local dignitaries from the Tucson and State of Arizona level, and all of the IBM employees on the Tucson campus.
Since our new center is significantly cozier (5700 square feet versus our previous 15,000 square feet), we split the day into two separate events. The first for the IBM executives and local VIPs, and the second for the rest of the IBM employees on campus.
Of course, there is no free lunch. The day started out with a series of speeches. My manager, Doug Davies, was the master of ceremonies to introduce each speaker.
Alistair Symon, IBM Vice President of Enterprise Storage, explained how storage affects everyone's lives. If you use an ATM to withdraw money, for example, you are most probably using IBM System Storage behind the scenes. Nearly all of the IBM disk and tape storage products are designed here in Tucson.
Bruce Wright (shown here) directs the University of Arizona's Office of University Research Parks, serves as CEO of the UA Tech Park, and is the founder and president of the Arizona Center for Innovation. Bruce said a few words on how pleased he was that IBM decided to reverse its July 2011 decision to leave Tucson. The UofA owns all the property, renting four of the eleven buildings back to IBM, so it is effectively our landlord. Next year will mark the 20th anniversary of IBM's sale of the technology park to the University.
Tucson Councilwoman Shirley Scott talked about the importance of high-paying jobs to the local economy. While IBMers in Tucson are paid less than our counterparts in San Jose, Austin, Raleigh or Poughkeepsie, we are certainly [paid more than the average Tucsonan], thus helping to raise the standard of living here.
Dr. Michael Varney, president and CEO of the local Tucson Metropolitan Chamber of Commerce, praised IBM for its strong reputation in ethics and diversity.
My new second-line manager, Karl Duvalsaint, and my new third-line manager, Doug Dreyer, emphasized the importance of co-locating Briefing Centers in sites that have Research and Development activity. It is important for clients to interact directly with developers, and it is also good for developers to understand directly from clients their needs, preferences and requirements. Worldwide, the IBM Systems and Technology Group has only twelve Executive Briefing Centers, and the Tucson EBC is one of them.
This is not to say that IBM does not have centers in other locations. Our newest client center in Singapore is a shining example. Of course, if they want experts to speak to clients there, they need to be flown in. Doug Dreyer mentioned that IBM plans to launch six such centers in Africa as well.
Next was the ribbon cutting. From left to right: Lee Olguin (our Gunnery Sergeant), Tucson Councilwoman Shirley Scott, UofA's Bruce Wright, IBM VP of Program Management Calline Sanchez, my second-line manager Karl Duvalsaint, IBM VP Alistair Symon, my first-line manager Doug Davies, Tucson Chamber of Commerce President Dr. Michael Varney, and my third-line manager Doug Dreyer. We had a member of the local high school band do the drum roll.
Once the ribbon was cut, the IBM executives and local VIPs were brought in to see the new facility, which has two large rooms, one common dining area, an 800-square-foot green data center to showcase our products, our own set of restrooms, a galley to stage the food and beverage service, and two smaller rooms for private conversations or conference calls. A local high school band provided live music throughout the day.
I hope everyone had some time these past few weeks of the Winter Solstice to enjoy some time off with friends and family. I had a great trip to New York City, got to visit my brother and his friends, went to see my friends in Michigan to celebrate New Year's Eve, and saw the world premiere of [LexiBaby], an independent film from fellow filmmaker Jonathan Petro.
The latter, of course, from fellow IBMers, corporate executives receiving bailout money, attorneys that specialize in foreclosures, and the lucky few who will be in Washington DC for the US Presidential Inauguration. In addition to all the bailout money from banks, insurance companies and automakers that will be spent on IBM equipment and services, there might be additional funds from the US Government to improve our country's information infrastructure. In a recent Forbes article titled [The Tech Solution To The Recession], Andy Greenberg writes about US president-elect Barack Obama's ideas about a stimulus to the economy. Here's an excerpt:
"IBM, for starters, believes that a massive infusion of cash should go toward cutting-edge technology. Last month, IBM CEO Sam Palmisano presented a report to Obama's transition team from the Information Technology and Innovation Foundation (ITIF) that argues that a $30 billion investment in universal broadband, health information technology and a smarter power grid could create 950,000 jobs.
"Those disparities, and IBM's argument for focusing a stimulus plan on technology in general, come from what economists have dubbed "network multipliers." The computing giant, and ITIF, argue that technology creates more jobs than other types of infrastructure by enabling new types of businesses.
"If you build more roads, people don't buy more tires or GPS systems, but if you build better networks, you create entirely new business applications," says Rob Atkinson, president of ITIF and an author of the think tank's report. "Something like YouTube could never have existed without broadband."
"Regardless of precisely how tech stimulus money gets spent, IBM will likely sweep up a significant chunk of those taxpayer funds, given the computing giant's diverse hardware, software and services businesses. Other IT infrastructure giants like Microsoft, Hewlett Packard, Oracle and SAP are also likely to vie for pieces of Obama's stimulus package aimed at technology.
"But among those tech companies, IBM has been especially active in driving home the need for national investment in tech systems. In a November speech to the Council on Foreign Relations, Palmisano argued that the U.S. needs to invest in innovation not just as a solution to our current recession but as a competitive measure in an increasingly integrated and technologically advanced world."
Continuing my business trip through Asia, I have left Chengdu, China, and am now in Kuala Lumpur, Malaysia.
On Sunday, a colleague and I went to the famous Petronas Twin Towers, which a few years ago were officially the tallest buildings in the world. If you get there early enough in the day, and wait in line for a few hours, you can get a ticket permitting you to go up to the "Skybridge" on the 41st floor that connects the two buildings. The views are stunning, and I am glad to have done this. (If you are afraid of heights, get cured by facing your fears with skydiving.)
You would think that a question as simple as "Which is the tallest building in the world?" could easily be answered, given that buildings remain fixed in one place and do not drastically shrink or get taller over time or weather conditions, and the unit of height, the "meter", is an officially accepted standard in all countries, defined as the distance traveled by light in absolute vacuum in 1/299,792,458 of a second.
The controversy stems around two key areas of dispute:
What constitutes a building?
A building is a structure intended for continuous human occupancy, as opposed to the dozens of radio and television broadcasting towers which measure over 600 meters in height. The Petronas Twin Towers are occupied by a variety of business tenants and would qualify as buildings. Radio and television towers are not intended for occupation, and should not be considered.
Where do you start measuring, and where do you stop?
Since 1969, the height was generally based on a building's height from the sidewalk level of the main entrance to the architectural top of the building. The "architectural top" included towers, spires (but not antennas), masts or flagpoles. Should the measurements be only to the top of the highest inhabitable floor?
What if the building has many more floors below ground level? What if the building exists in a body of water, should sidewalk level equate to water level, and at low tide or high tide? (Laugh now, but this might happen sooner than you think!)
To bring some sanity to these comparisons, the Council on Tall Buildings and Urban Habitat has tried to standardize the terms and definitions to make comparisons between buildings fair. Why does it matter whose building is tallest? It matters in two ways:
People and companies are willing to pay more to be a tenant in tall towers, affording a luxurious bird's-eye view to impress friends, partners and clients, so the rankings can influence purchase or leasing prices of floor space in these buildings.
Architects and engineers involved in building these structures want to list them on their resumes. These buildings are an impressive feat of engineering, and the teams involved collaborate in a global manner to accomplish them. If an architecture or engineering company can build the world's tallest building, you can trust them to build one for you. The rankings can help drive revenues by generating demand for services and offerings.
What does any of this have to do with storage? Two weeks ago, IBM and the Storage Performance Council answered the question "Which is the fastest disk system?" with a press release. Customers that care about the performance of their most mission-critical applications are often willing to pay a premium to run their applications on the fastest disk system, and the IBM System Storage SAN Volume Controller, built through a global collaboration of architects and engineers across several countries, is (in my opinion at least) an impressive feat of storage engineering.
He feels I was unfair to accuse EMC of "proprietary interfaces" without spelling out what I was referring to. Here are just two, along with the whines we hear from customers that relate to them.
EMC Powerpath multipathing driver
Typical whine: "I just paid a gazillion dollars to renew my annual EMC Powerpath license, so you will have to come back in 12 months with your SVC proposal. I just can't see explaining to my boss that an SVC eliminates the need for EMC Powerpath, throwing away all the good money we just spent on it, or to explain that EMC chooses not to support SVC as one of Powerpath's many supported devices."
EMC SRDF command line interface
Typical whine: "My storage admins have written tons of scripts that all invoke EMC SRDF command line interfaces to manage my disk mirroring environment, and I would hate for them to re-write these to use IBM's (also proprietary) command line interfaces instead."
Certainly BarryB is correct that IBM still has a few remaining "proprietary" items of its own. IBM has been in business over 80 years, but it was only the last 10-15 years that IBM made a strategic shift away from proprietary and over to open standards and interfaces. The transformation to "openness" is not yet complete, but we have made great progress. Take these examples:
The System z mainframe - IBM had opened the interfaces so that both Amdahl and Fujitsu made compatible machines. Unlike Apple, which forbids cloning of this nature, IBM is now the single source for mainframes because the other two competitors could not keep up with IBM's progress and advancements in technology.
Update: Due to legal reasons, the statements referring to Hercules and other S/390 emulators havebeen removed.
The z/OS operating system - While it is possible to run Linux on the mainframe, most people associate the z/OS operating system with the mainframe. This was opened up with UNIX System Services to satisfy requests from various governments. It is now a full-fledged UNIX operating system, recognized by the [Open Group], which certifies it as such.
As BarryB alludes, the unique interface for disk attachment to System z, known as Count-Key-Data (CKD), was published so that both EMC and HDS can offer disk systems to compete with IBM's high-end disk offerings. Linux on System z supports standard Fibre Channel, allowing you to attach an IBM SVC and anyone's storage. Both z/OS and Linux on System z support NAS storage, so IBM N series, NetApp, even EMC Celerra could be used in that case.
The System i itself is still proprietary, but recently IBM announced that it will now support the standard block size (512 bytes) instead of the awkward 528-byte blocks that only IBM and EMC support today. That means that any storage vendor will be able to sell disk to the System i environment.
Advanced copy services, like FlashCopy and Metro Mirror, are as proprietary as the similar offerings from EMC and HDS, with the exception that IBM has licensed them to both EMC and HDS. Thanks to cross-licensing, you can do [FlashCopy on EMC] equipment. Getting all the storage vendors to agree on open standards for these copy services is still work in progress under [SNIA], but at least people who have coded z/OS JCL batch jobs that invoke FlashCopy utilities can work the same way between IBM and EMC equipment.
So for those out there who thought that my comment about EMC's proprietary interfaces in any way implied that IBM did not have any of its own, the proverbial ["pot calling the kettle black"] so to speak, I apologize.
BarryB shows off his [PhotoShop skills] with the graphic below. I take it as a compliment to be compared to an All-American icon of business success.
TonyP and Monopoly's Mr. Pennybags Separated at Birth?
However, BarryB meant it as a reference back to the days when IBM was a monopoly in the IT industry, which according to [IBM's History] ended in 1973. In other words, IBM stopped being a monopoly before EMC ever existed as a company, and long before I started working for IBM myself.
The anti-trust lawsuit that BarryB mentions happened in 1969, which forced IBM to separate some of the software from its hardware offerings, and prevented IBM from making various acquisitions for years to follow, forcing IBM instead into technology partnerships. I'm glad that's all behind us now!
IBM has chosen three particular Software Defined Environments. At one end, IBM is a platinum sponsor of OpenStack which supports x86 servers, POWER systems and z System mainframes. A problem with open source projects like this, however, is that they can be a bit like putting together IKEA furniture from pieces in a box: "Some assembly required."
At the other end, highly proprietary environments from VMware and Microsoft bring enterprise-ready out-of-the-box solutions. However, nobody wants to be limited to just x86-based solutions. IBM offers the best of both worlds, basing its IBM Cloud and SmartCloud software on OpenStack standards, but providing enterprise-ready solutions for x86, POWER Systems and z System mainframes. This includes IBM Cloud Manager with OpenStack, IBM Cloud Orchestrator, and IBM SmartCloud Cost Management software products.
(Analogy: If open source solutions were vanilla ice cream, and proprietary solutions were chocolate ice cream, then IBM Cloud and SmartCloud are vanilla ice cream with chocolate sauce on top! This is the same approach IBM used for WebSphere Application Server, based on the Apache web server, and IBM BigInsights, based on Hadoop analytics.)
For some people, software defined can also refer to how the resources are deployed. Rather than using specialized hardware, solutions based on industry-standard hardware can be delivered either as pre-built appliances, services in the Cloud, or as software-only products.
Back in the 1990s, IBM came up with the [Seascape Storage Enterprise Architecture], deciding to focus the design of its storage systems to be based, where possible and practical, on industry-standard components.
Let's review a few products:
IBM SAN Volume Controller (SVC) and Storwize V7000: IBM storage hypervisors were originally designed to run on industry-standard x86 servers. The IBM scientists at Almaden Research Center referred to this as the "COMmodity PArts Storage System" (COMPASS) architecture.
That is still mostly true 12 years later, but the SVC and Storwize V7000 do have specialized hardware, including host bus adapter cards and the [Intel® QuickAssist] chip for Real-time Compression.
IBM DS8000 disk system: The DS8000 is based on off-the-shelf IBM POWER servers. Originally, you could only purchase POWER-based servers from IBM, but thanks to the [OpenPOWER Foundation], you now have more options.
The DS8000 does use some specialized hardware for its host and device adapters, taking advantage of ASICs and FPGAs to optimize performance.
IBM XIV storage system: IBM acquired XIV back in 2008, but its design is very similar to the Seascape architecture. All of the intellectual property was in the software, installed on industry-standard x86 servers, cache memory, host bus adapters and 7200 RPM nearline disk drives. I joked that the entire hardware bill-of-materials could be ordered directly from the CDW catalog!
IBM FlashSystem: IBM ranks #1 in the All-Flash Array market. Rather than using off-the-shelf commodity solid-state drives (SSD), the IBM FlashSystem employs specialized hardware based on FPGAs to optimize performance.
IBM FlashSystem came from the recent acquisition of Texas Memory Systems, and was not designed under the IBM Seascape architecture.
Combining the way the resources are controlled and managed with the way storage is deployed results in a quadrant of four categories. Let's take a look at this from a storage perspective:
Category I: Traditional storage products based on specialized hardware that do not support Software Defined Environment APIs.
Category II: Storage products based on specialized hardware, but enhanced to support Software Defined Environment APIs. For OpenStack, this refers to the Cinder and Swift interfaces. For VMware, this would include the VAAI, VASA and VADP interfaces and vCenter Console plug-ins.
Category III: Storage products that are basically software, either installed on pre-built hardware appliances, offered as services in the Cloud, or deployed on your own industry-standard hardware. Unfortunately, this category does not support Software Defined Environment APIs, so its proprietary interfaces require administrator-intensive involvement instead.
Category IV: Storage software for industry-standard hardware. You purchase the appropriate server, cache memory, flash and disk drives as needed. This category can also extend to pre-built appliance versions of the software, or to services in the Cloud. APIs for software defined environments are available to deploy this with self-service automation.
IBM Spectrum Storage is a family of Category IV software offerings. Here are the products announced:
IBM Spectrum Control™: Simplified control and optimization of storage and data infrastructure. Based on technology from SmartCloud Virtual Storage Center and Tivoli Storage Productivity Center.
IBM Spectrum Protect™: Single point of administration for data backup and recovery. Based on technology from Tivoli Storage Manager.
IBM Spectrum Accelerate™: Accelerating speed of deployment and access to data for new workloads. Based on technology from the XIV storage system.
IBM Spectrum Virtualize™: Storage virtualization that frees client data from IT boundaries. Based on technology from SAN Volume Controller.
IBM Spectrum Scale™: High-performance, scalable storage that manages exabytes of unstructured data. Based on technology from GPFS (codename: Elastic Storage).
IBM Spectrum Archive™: Enables easy access to long-term storage of low-activity data. Based on technology from the Linear Tape File System (LTFS).
Last year, IDC recognized IBM as #1 in this new emerging software defined storage market. This announcement reinforces IBM's lead in this area. See the [Press Release] for details.
Yesterday marked the first day of Spring here in the Northern hemisphere, and often this means it is time for some "Spring cleaning". This is a great time to re-evaluate all of your stuff and clean house.
In the bits-vs-atoms discussion, Annie Leonard has a quick [20-minute video] about the atoms side of stuff, from extraction of natural resources, production, distribution, consumption, to final disposal.
On the bits side of things, the picture is much different.
We don't really extract information, rather we capture it, and lately that process is done directly into digital formats, from digital photography, digital recording of music, and so on. A lot of medical equipment now takes X-rays and other medical images directly in digital format. By 2011, it is estimated that as much as 30 percent of all storage will be for holding medical images.
Production refers to the process of combining raw materials and making them into something useful. The same applies to information: there are a variety of ways to make information more presentable. In the Web 2.0 world, these are called Mashups, combining raw information in a manner that is more usable. Fellow IBM blogger Bob Sutor discusses IBM's latest contribution, SMash, in his post [Secure Mashups via SMash].
According to Tim Sanders, 90 percent of business information is distributed by email, but less than 10 percent of employees are formally trained to distribute information correctly. Here's a quick 3-minute trailer to his "Dirty Dozen" rules of how to do email properly.
I have not watched the DVD that this trailer is promoting, but I certainly agree with the overall concept.
This week I also had the pleasure to hear [Art Mortell], author of the book The Courage to Fail: Art Mortell's Secrets to Business Success. He gave an inspirational talk about how to deal with our stressful lives. One key point was that stress often comes from our own expectations. This is certainly true of how we consume information. Oftentimes our expectations determine how well we read, watch or listen to information being presented. Sometimes information is factually correct, but presented in such a boring manner that it is just too difficult to consume.
John Windsor on YouBlog takes this one step further, asking [Are you predictable?] He makes a strong case on why presenting in a predictable manner can actually hurt your ability to communicate.
And finally, there is disposal. We are all a bunch of digital pack-rats. With atoms, you eventually run out of closet space; with bits the problem is not as obvious, and often can be resolved by spending your way out of it. On average, companies are expanding their storage capacity by 57 percent every year. That worked well when dollar-per-GB prices of disk dropped to match, but now technology advancements are slowing down. Disk will not be dropping in price as fast as you need, and now might be a good time to re-evaluate your "Keep everything forever" strategy.
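The arithmetic behind that warning is easy to sketch. Here is a minimal calculation; the 57 percent growth rate is from this post, but the starting capacity, starting price, and 30 percent annual price decline are hypothetical assumptions of mine:

```python
def projected_spend(capacity_tb, annual_growth, price_per_tb, annual_price_decline, years):
    """Project yearly disk spend when capacity growth outpaces price decline."""
    spend = []
    for _ in range(years):
        capacity_tb *= 1 + annual_growth          # capacity keeps compounding up
        price_per_tb *= 1 - annual_price_decline  # $/TB no longer falls as fast
        spend.append(capacity_tb * price_per_tb)
    return spend

# 100 TB today at a hypothetical $1,000/TB, growing 57%/year while
# price falls only 30%/year: spend rises every single year.
yearly = projected_spend(100, 0.57, 1000.0, 0.30, 3)
```

With these placeholder numbers, year-one spend is already about 10 percent above today's, and it keeps climbing; only when the price decline keeps pace with capacity growth does spend stay flat.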
Consider "Spring cleaning" to be an excellent excuse to evaluate the data you have on your disk systems. Should it be on disk? Will it be accessed often enough to justify that cost? Does it need immediate online access times, or can waiting a minute or two for a tape mount from an automated library be sufficient? Does it represent business value?
I have been to customers that have discovered a lot of "orphan data" on their disk systems. This is data that does not belong to anyone currently working at the company. Maybe the owners of the data retired, were laid off, or were even fired, but nobody bothered to clean up their files after they left the company.
I've also seen a lot of "stale data" on disk, data that has not been read or written in the past 90 days. Are you spending 13-18 watts of energy to spin each disk drive just to contain data nobody ever looks at?
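A first pass at finding stale data can be automated. Below is a minimal sketch in Python that walks a directory tree and flags files neither read nor written in the last 90 days; note that filesystems mounted with noatime do not maintain reliable access times, so treat the results as a starting point:

```python
import os
import time

def find_stale_files(root, days=90):
    """Return paths under `root` whose last access AND last modification
    are both older than `days` days."""
    cutoff = time.time() - days * 86400
    stale = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if st.st_atime < cutoff and st.st_mtime < cutoff:
                stale.append(path)
    return stale
```

Summing bytes per owner (st_uid) in the same walk would also surface the "orphan data" mentioned above, whose owners have left the company.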
In some cases, orphan or stale data represents business value, and needs to be kept around for business or legal reasons. Perhaps some government regulation requires you to retain this information for some years. In that case, rather than deleting it, move it to tape, perhaps using the IBM System Storage DR550 to protect it for the time required and handle its eventual disposal.
Certainly something to think about, while you snap the ears off those chocolate bunnies, watching your kids run around looking for eggs. Enjoy your weekend!
In his Backup Blog, fellow blogger Scott Waterhouse from EMC has yet another post about Tivoli Storage Manager (TSM) titled [TSM and the Elephant]. He argues that only the cost of new TSM servers should be considered in any comparison, on the assumption that if you deploy another server, you have to attach fresh new disk storage and a brand new tape library to it, and hire an independent group of backup administrators to manage it. Of course, that is bull; people reuse much of their existing infrastructure and skilled labor pool every time new servers are added, as I tried to point out in my post [TSM Economies of Scale].
However, Scott does suggest that we should look at all the costs, not just the cost of a new server, which we in the industry call Total Cost of Ownership (TCO). Here is an excerpt:
Final point: there is actually a really important secondary point here--what is the TCO of your backup infrastructure. In some ways, TSM is one of the most expensive (number of servers and tape drives, for example), relative to other backup applications. However, I think it would be a really interesting exercise to critically examine the TCO of the various backup applications at different scales to evaluate if there is any genuine cost differentiation between them.
Fortunately, I have a recent TCO/ROI analysis for a large customer in the Eastern United States that compares their existing EMC Legato deployment to a new proposed TSM deployment. The assessment was performed by our IBM Tivoli ROI Analyst team, using a tool developed by Alinean. The process compares the TCO of the currently deployed solution (in this case EMC Legato) with the TCO of the proposed replacement solution (in this case IBM TSM) for 55,000 client nodes at expected growth rates over a three year period, and determines the amount of investment, cost savings and other benefits, and return on investment (ROI).
Here are the results:
"A risk adjusted analysis of the proposed solution's impact was conducted and it was projected that implementing the proposed solutions resulted in $16,174,919 of 3 year cumulative benefits. Of these projected benefits, $8,015,692 are direct benefits and $8,159,227 are indirect benefits.
Top cumulative benefits for the project include:
Backup Coverage Risk Avoidance - $6,749,796
Reduction in Maintenance of Competitive Products - $1,576,000
Reduction in Existing Tivoli Maintenance (Storage and Monitoring) - $1,490,000
IT Operations Labor Savings - Storage Management - $982,919
Network Bandwidth Savings - $575,196
Standardization - $366,667
Future cost avoidance of additional competitive licenses - $350,000
These benefits can be grouped regarding business impact as:
$6,456,025 in IT cost reductions
$1,559,667 in business operating efficiency improvements
$8,159,227 in business strategic advantage benefits
The proposed project is expected to help the company meet the following goals and drive the following benefits:
Reduce Business Risks $6,749,796
Consolidate and Standardize IT Infrastructure $4,975,667
Reduce IT Infrastructure Costs $2,057,107
Improve IT System Availability / Service Levels $1,409,431
Improve IT Staff Efficiency / Productivity $982,919
To implement the proposed project will require a 3 year cumulative investment of $5,760,094 including:
$0 in initial expenses
$4,650,000 in capital expenditures
$1,110,094 in operating expenditures
Comparing the costs and benefits of the proposed project using discounted cash flow analysis and factoring in a risk-adjusted discount rate of 9.5%, the proposed business case predicts:
Risk Adjusted Return on Investment (RA ROI) of 172%
Return on Investment (ROI) of 181%
Net Present Value (NPV) savings of $8,425,014
Payback period of 9.0 month(s)
Note: The project has been risk-adjusted for an overall deployment schedule of 5 months."
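The Alinean tool itself is proprietary, but the headline numbers above follow from textbook formulas. Here is a minimal sketch of simple ROI and discounted-cash-flow NPV only, not the full risk-adjusted model:

```python
def roi(total_benefits, total_costs):
    """Return on investment: net benefit as a fraction of total cost."""
    return (total_benefits - total_costs) / total_costs

def npv(cash_flows, rate):
    """Net present value of yearly cash flows discounted at `rate`.
    cash_flows[0] is year 0 (typically the initial outlay, negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# The quoted 3-year cumulative benefits and investment reproduce the
# headline ROI figure: (16,174,919 - 5,760,094) / 5,760,094, about 181%.
project_roi = roi(16_174_919, 5_760_094)
```

The 172% risk-adjusted figure and the $8.4M NPV additionally depend on how the benefits and costs are spread across the three years and risk-weighted, which the excerpt does not break out.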
IBM Tivoli Storage Manager uses less bandwidth and fewer disk and tape storage resources than EMC Legato. Even for a large deployment of this kind, the payback period is only NINE MONTHS. Generally, if a proposed investment has a payback period of less than 24 months, you have enough to get both the CFO and CIO excited, so this one is a no-brainer.
Perhaps this helps explain why TSM enjoys a much larger market share than EMC Legato in the backup software marketplace. No doubt Scott might be able to come up with a counter-example, a very small business with fewer than 10 employees where an EMC Legato deployment might be less expensive than a comparable TSM deployment. However, when it comes to scalability, TSM is king. The majority of the Fortune 1000 companies use Tivoli Storage Manager, and IBM uses TSM internally for its own IT, managed storage services, and cloud computing facilities.
Last week, I presented IBM's strategic initiative, the IBM Information Infrastructure, which is part of IBM's New Enterprise Data Center vision. This week, I will try to get around to talking about some of the products that support those solutions.
I was going to set the record straight on a variety of misunderstandings, rumors or speculations, but I think most have been taken care of already. IBM blogger BarryW covered the fact that SVC now supports XIV storage systems in his post [SVC and XIV], and addressed some of the FUD already. Here was my list:
Now that IBM has an IBM-branded model of XIV, IBM will discontinue (insert another product here)
I had seen speculation that XIV meant the demise of the N series, the DS8000 or IBM's partnership with LSI. However, the launch reminded people that IBM announced a new release of DS8000 features, new models of the N series N6000, and the new DS5000 disk, so that squashes those rumors.
IBM XIV is a (insert tier level here) product
While there seems to be no industry-standard or agreement for what a tier-1, tier-2 or tier-3 disk system is, there seemed to be a lot of argument over what pigeon-hole category to put IBM XIV in. No question many people want tier-1 performance and functionality at tier-2 prices, and perhaps IBM XIV is a good step at giving them this. In some circles, tier-1 means support for System z mainframes. The XIV does not have traditional z/OS CKD volume support, but Linux on System z partitions or guests can attach to XIV via SAN Volume Controller (SVC), or through NFS protocol as part of the Scale-Out File Services (SoFS) implementation.
Whenever any radical game-changing technology comes along, competitors with last century's products and architectures want to frame the discussion that it is just yet another storage system. IBM plans to update its Disk Magic and other planning/modeling tools to help people determine which workloads would be a good fit with XIV.
IBM XIV lacks (insert missing feature here) in the current release
I am glad to see that the accusations that XIV had unprotected, unmirrored cache were retracted. XIV mirrors all writes in the cache of two separate modules, with ECC protection. XIV allows concurrent code load for bug fixes to the software. XIV offers many of the features that people enjoy in other disk systems, such as thin provisioning, writeable snapshots, remote disk mirroring, and so on. IBM XIV can be part of a bigger solution, either through SVC, SoFS or GMAS, that provides the business value customers are looking for.
IBM XIV uses (insert block mirroring here) and is not as efficient for capacity utilization
It is interesting that this came from a competitor that still recommends RAID-1 or RAID-10 for its CLARiiON and DMX products. On the IBM XIV, each 1MB chunk is written on two different disks in different modules. When disks were expensive, how much usable space for a given set of HDD was worthy of argument. Today, we sell you a big black box, with 79TB usable, for (insert dollar figure here). For those who feel 79TB is too big to swallow all at once, IBM offers "capacity on demand" pricing, where you can pay initially for as little as 22TB, but get all the performance, usability, functionality and advanced availability of the full box.
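The placement idea is simple to illustrate. The sketch below is my own hypothetical illustration of the principle (two copies of each 1MB chunk on disks in two different modules), not IBM's actual distribution algorithm:

```python
import random

def place_chunk(chunk_id, modules, disks_per_module):
    """Choose (module, disk) locations for a chunk's primary and mirror
    copies, guaranteeing the two copies land in different modules."""
    rng = random.Random(chunk_id)  # deterministic per chunk, like a hash
    primary_module, mirror_module = rng.sample(range(modules), 2)  # distinct modules
    primary = (primary_module, rng.randrange(disks_per_module))
    mirror = (mirror_module, rng.randrange(disks_per_module))
    return primary, mirror
```

Because every chunk's mirror copy lives in another module, losing a whole module (or any single disk) always leaves one intact copy, and rebuild work is spread across all remaining disks instead of hammering a single spare.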
IBM XIV consumes (insert number of Watts here) of energy
For every disk system, a portion of the energy is consumed by the hard disk drives (HDD) and the remainder by the UPS, power conversion, processors and cache memory. Again, the XIV is a big black box, and you can compare the 8.4 KW of this high-performance, low-cost, one-frame storage system with the wattage consumed by competitive two-frame (sometimes called two-bay) systems, if you are willing to take some trade-offs. To get comparable performance and hot-spot avoidance, competitors may need to over-provision or use faster, energy-consuming FC drives, and offer additional software to monitor and re-balance workloads across RAID ranks. To get comparable availability, competitors may need to drop from RAID-5 down to either RAID-1 or RAID-6. To get comparable usability, competitors may need more storage infrastructure management software to hide the inherent complexity of their multi-RAID design.
Of course, if energy consumption is a major concern for you, XIV can be part of IBM's many blended disk-and-tape solutions. When it comes to being green, you can't get any greener storage than tape! Blended disk-and-tape solutions help get the best of both worlds.
Well, I am glad I could help set the record straight. Let me know which other products you would like me to focus on next.
Perhaps the recent financial meltdown is making storage vendors nervous. Both IBM and EMC gained market share in 3Q08, but EMC is reacting strangely to IBM's latest series of plays and announcements. Almost contradictory!
Benchmarks bad, rely on your own in-house evaluations instead
Let's start with fellow blogger Barry Burke from EMC, who offers his latest post [Benchmarketing Badly] with commentary about Enterprise Strategy Group's [DS5300 Lab Validation Report]. The IBM System Storage DS5300 is one of IBM's recently announced midrange disk systems. Take for example this excerpt from BarryB's blog post:
"I was pleasantly surprised to learn that both IBM and ESG agree with me about the relevance and importance of the Storage Performance Council benchmarks.
That is, SPC's are a meaningless tool by which to measure or compare enterprise storage arrays."
Nowhere does the ESG report say this, nor have I found any public statement from either IBM or ESG that makes this claim. Instead, the ESG report explains that traditional benchmarks from the Storage Performance Council [SPC] focus on a single, specific workload, and ESG has chosen to complement this with a variety of other benchmarks to perform their product validation, including VMware's "VMmark", Oracle's Orion Utility, and Microsoft's JetStress.
Benchmarks provide prospective clients additional information to make purchase decisions. IBM understands this, ESG understands this, and other well-respected companies like VMware, Oracle and Microsoft understand this. EMC is afraid that benchmarks might encourage a client to "mistakenly" purchase a faster IBM product over a slower EMC product. Sunshine makes a great disinfectant, but EMC (and vampires) prefer their respective "prospects" remain in the dark.
Perhaps stranger still is BarryB's postscript. Here's an excerpt:
"... a customer here asked me if EMC would be willing to participate in an initiative to get multiple storage vendors to collaborate on truly representative real-world "enterprise-class" benchmarks, and I reassured him that I would personally sponsor active and objective participation in such an effort - IF he could get the others to join in with similar intent."
As I understand it, EMC was once part of the Storage Performance Council a long time ago, then chose to drop out of it. Why re-invent the wheel by creating yet another storage industry benchmark group? EMC is welcome to come back to SPC anytime! In addition to the SPC-1 and SPC-2 workloads, there is work underway for an SPC-3 benchmark. Each SPC workload provides additional insight for product comparisons to help with purchase decisions. If EMC can suggest an SPC-4 benchmark that it feels is more representative of real-world conditions, they are welcome to join the SPC party and make that a reality.
The old adage applies: ["It's better to light a candle than curse the darkness"]. EMC has been cursing the lack of what it considers to be acceptable benchmarks, but has yet to offer anything more realistic or representative than SPC. What does EMC suggest you do instead? Get an evaluation box and run your own workloads and see for yourself! EMC has in the past offered evaluation units specifically for this purpose.
In-house evaluations bad, it's a trap!
Certainly, if you have the time and staff to run your own evaluation, with your own applications in your own environment, then I agree with EMC that this can provide better insight for your particular situation than standardized benchmarks.
In fact, that is exactly what IBM is doing for IBM XIV storage units, which are designed for Web 2.0 and Digital Archive workloads that current SPC benchmarks don't focus on. Fellow blogger Chuck Hollis from EMC opines in his post [Get yer free XIV!]. Here's an excerpt:
"Now that I think about it, this could get ugly. Imagine a customer who puts one on the floor to evaluate it, and -- in a moment of desperation or inattention -- puts production data on the device.
Nobody was paying attention, and there you are. Now IBM comes calling for their box back, and you've got a choice as to whether to go ahead and sign the P.O., or migrate all your data off the thing. Maybe they'll sell you an SVC to do this?
Yuck. I bet that happens more than once. And I can't believe that IBM (or the folks at XIV) aren't aware of this potentially happening."
Perhaps Chuck is speaking from experience here, as this may have happened with customers with EMC evaluation boxes, and he is afraid this could happen with IBM XIV. I don't see anything unique about IBM XIV in the above concern. Typical evaluations involve copying test data onto the box, testing it out with some particular application or workload, and then deleting the data when no longer required. Repeat as needed. Moving data off an IBM XIV is as easy as moving data off an EMC DMX, EMC CLARiiON or EMC Celerra, and I am sure IBM would gladly demonstrate this on any EMC gear you now have.
Thanks to its clever RAID-X implementation, losing data on an IBM XIV is less likely than losing data on any RAID-5 based disk array from any storage vendor. Of course, there will always be skeptics of new technology who will want to try the box out for themselves.
If EMC thought the IBM XIV had nothing unique to offer, that its performance was just "OK", and that it is not as easy to manage as IBM says it is, then you would think EMC would gladly encourage such evaluations and comparisons, right?
No, I think EMC is afraid that companies will discover what EMC already knows: that IBM has quality products that would stand a fair chance in side-by-side comparisons with their own offerings. We have enough fear, uncertainty and doubt from the current meltdown of the global financial markets; don't let EMC add any more.
Have a safe and fun Halloween! If you need to add some light to your otherwise dark surroundings, consider some of these ideas for [Jack-O-Lanterns]!
Next week, thousands will convene in Las Vegas for [IBM Pulse 2014], an IBM conference that will focus on Cloud, Service and Storage Management.
To lead up to this event, my colleague Steve Wojtowecz, or 'Woj' as we like to call him, IBM VP of Storage and Network Management Software Development, has a five-part series that is worth a read. Here are some excerpts:
"Storage-as-a-utility will pick up momentum. Call it [storage-as-a-service], or a storage / back-up cloud, or whatever name you prefer, deployments of this capability will ramp up dramatically."
"Making something simple look complex is easy, making something complex look simple is hard. Like it or not, we all like things simple and easy to grasp."
"Any data that a company is willing to store should be important enough to (1) be protected and backed up as part of a disaster recovery (DR) plan and (2) used for analytics for new business opportunities."
"Hybrid (specifically hybrid storage and data protection clouds) is no longer hype. Nearly every IT shop speculated that hybrid cloud storage was the future of enterprise storage and in 2014 the future is here."
"... the industry will see accelerated adoption in enterprises (private cloud), as an off-premise managed service (public cloud), and across both (hybrid cloud) based on cost, compliance, security and criticality of data to the enterprise."
"IT teams used to thinking of enterprise data as “their baby” are going to have to get comfortable with the idea that the baby is now living somewhere else."
"Line of business organizations have been using analytics to uncover new revenue streams and business opportunities for years. Now, this technology is being turned inward and applied to the data center itself to drive operational efficiency."
"This level of insight and predictability starts to dabble into the notion of cognitive computing as applied to storage and the data it holds."
"Operational analytics will also be applied for productivity / performance gains for the infrastructure itself, like auto-tiering data for priority applications across heterogeneous hardware platforms."
For more insights into these predictions, attend [IBM Pulse 2014] in Las Vegas, next week, February 23-26.
Sadly, I won't be there in person. Although I helped launch the original IBM Pulse back in 2008, I have only been invited once to come back, and that was as a last minute replacement for another speaker in 2012. Unfortunately, I could not accept because of my [near-death experience].
Last week, I was in Austin, and had dinner at [Rudy's Country Store and BBQ]. They offer their self-proclaimed "Worst BBQ in Austin!" with brisket, sausage and other meats by weight. I got a beer, some potato salad, and creamed corn, all at additional cost, of course. When I went to the cashier to pay, I was offered all the white bread I wanted at no additional charge. Are you kidding me? You are going to charge me for beer, but give me 8 to 12 complimentary slices of white bread (practically half a loaf)? Honestly, I consider bread and beer to be basically the same functional food item, differing only in solid versus liquid form. I chose to have only four slices. The food was awesome!
I am reminded of that by my latest exchange with EMC. It didn't take long after IBM's announcement yesterday of IBM's continued investment in its strategic product set, the IBM System Storage DS8000 series, for competitors to respond. In particular, fellow blogger BarryB from EMC has a post [DS8000 Finally Gets Thin Provisioning] that pokes fun at the new Thin Provisioning feature.
Interestingly, the attack is not on the technical implementation, which is straightforward and rock-solid, but rather on the fact that the feature is charged at a flat rate of $69,000 US dollars (list price) per disk array. BarryB claims that EMC Corporate recently decided to reduce the price of their own thin provisioning, called Symmetrix Virtual Provisioning (VP), on a select subset of models of their storage portfolio, although I have not found an EMC press release to confirm this. In other words, EMC will bury the cost of thin provisioning into the total cost for new sales, and stop shafting, er.. over-charging their existing Symmetrix customers that are interested in licensing this feature.
BarryB claims this was a lucky coincidence that his blog post happened just days before IBM's announcement.
(Update: While the timing appears suspicious, I am not accusing Mr. Burke of any wrongdoing involving insider information of IBM's plans, nor am I aware of any investigations on this matter from the SEC or any other government agency, and I apologize if my previous attempt at humor suggested otherwise. BarryB claims that the reduction in price was motivated to counter HDS's publicly announced "Switch It On" program, and that it is not a secret that EMC reduced VP pricing weeks ago, effective beginning 3Q09, just not widely advertised in any formal EMC press release. Perhaps this new VP pricing was only disclosed to EMC's existing Symmetrix customers, Business Partners, and employees. Perhaps EMC's decision not to announce this in a press release was to avoid upsetting all the EMC CLARiiON customers that continue to pay for thin provisioning, or to avoid a long line of existing VP customers asking for refunds. In any case, people are innocent until proven otherwise, and BarryB rightfully deserves the presumption of innocence in this regard. I'm sorry, BarryB, for any trouble my previous comments may have caused you.)
Instead, let's explore some events over the past year that have led up to this.
Let's start with what EMC previously charged for this feature. Software features like this often follow a common pricing method: priced per TB, so larger configurations pay more, but tiered so that larger configurations pay less per TB, combined with a yearly maintenance cost.
(Updated: EMC has asked me nicely not to post their actual list prices, so I will provide rough estimates instead. According to BarryB, these are no longer the current prices, so I present them as historical figures for comparison purposes only.)
(Table of rough estimates, by line item: initial list price, software maintenance (SWMA) percentage, software maintenance per year, number of years, and software license cost over 4 years.)
Holy cow! How did EMC get away with charging so much for this? To be fair, these prices are often deeply discounted, a practice common across the industry. However, it was easy for IBMers to show EMC customers that putting SVC or N series gateways in front of their existing EMC disks was more cost effective. Both SVC and N series, as well as IBM's XIV, provide thin provisioning at no additional charge.
HDS offers their own thin provisioning, called Hitachi Dynamic Provisioning. Hitachi also offers an SVC-like capability to virtualize storage behind the USP-V. However, I suspect that fewer than 10 percent of their install base actually licensed this capability because it cost so much. Under the cost pressure from IBM's thin provisioning capabilities in SVC, XIV and N series, Hitachi launched its ["Switch It On"] marketing campaign to activate virtualization and provide some features at no additional charge, including the first 10TB of Hitachi Dynamic Provisioning.
Last week, Martin Glassborow on his StorageBod blog argued that EMC and HDS should [Set the Wide Stripes Free]. Here is an excerpt:
HDS and EMC are both extremely guilty in this regard, both Virtual Provisioning and Dynamic Provisioning cost me extra as an end-user to license. But this is the technology upon which all future block-based storage arrays will be built. If you guys want to improve the TCO and show that you are serious about reducing the complexity to manage your arrays, you will license for free. You will encourage the end-user to break free from the shackles of complexity and you will improve the image of Tier-1 storage in the enterprise.
Martin is using the term "free" in two contexts above. In the Linux community, we are careful to clarify "free, as in free speech" or "free, as in free beer". Technically, EMC's virtual provisioning is neither, as one has to purchase the hardware to get the feature, so the term "at no additional charge" is more legally correct.
However, the discussion of "free beer" brings me back to my first paragraph about Rudy's BBQ. Nearly everyone eats bread, with the exception of those with [Celiac Disease], which causes an intolerance for the gluten protein in wheat, so burying the cost of white bread in the base cost of the BBQ meat is reasonable. In contrast, not everyone drinks beer, and there are probably several people who would complain if the cost of beer were included in the cost of the BBQ meat, so charging separately for beer makes business sense.
The same applies in the storage industry. When all (or most) customers of a product can benefit from a feature, it makes sense to include it at no additional charge. When a significant subset might not want to pay a higher base price because they won't use or benefit from a feature, it makes sense to make it optionally priced.
For the IBM SVC, XIV and N series, all customers can benefit from thin provisioning, so it is included at no additional charge.
For the IBM System Storage DS8000, perhaps some 30 to 40 percent of our clients have only System z and/or System i servers attached, and therefore would not benefit from this new thin provisioning. It may seem unfair to raise the price on everybody. The $69,000 flat rate was competitively priced against the prices EMC, HDS and 3PAR were charging for similar capability, and lower than the cost to add a new SVC cluster in front of the DS8000. IBM also charges an annual maintenance, but far lower than what others charged as well.
(Note: These list prices are approximate, and vary slightly based on whether you are on legacy, ESA, Servicesuite or ServiceElect software and subscription (S&S) service plans, and the machine type/model. The tables were too complicated to include here in this post, so these numbers are rounded for comparison purposes only.)
The comparison covered the IBM flat rate, software maintenance per year (approx), number of years, and total software license cost over 4 years.
Pricing is more art than science. Getting the right pricing structure that appears fair to everyone involved can be a complicated process.
Despite having business meetings every day I was here in Moscow, I managed to do a bit of sightseeing. June is a good month to visit Russia, as there are nearly 18 hours of daylight to see things. Some things are outdoors, and not constrained to normal business hours.
Near my hotel, the [Crowne Plaza at the World Trade Center], was a cute little park called "Ulitsa 1905 Goda". It is always nice to see large cities set aside space for nature. There were plenty of park benches to sit and enjoy. The word Ulitsa simply means "street" in Russian, and 1905 refers to a year of historical importance.
The [1905 Russian Revolution] was a wave of mass political and social unrest that spread through vast areas of the Russian Empire. It included worker strikes, peasant unrest, and military mutinies, including sailors aboard the battleship Potemkin. Alexander Adrianov became Moscow's first official mayor. The revolution led to the establishment of the State Duma of the Russian Empire, the multi-party system, and the Russian Constitution of 1906, ending the reign of Nicholas II, the last Tsar of Russia.
Walking from my hotel towards the direction of the Kremlin, I managed to find the [Old Arbat street], which has been around since the 15th century. This was considered a prestigious area of town, home to many artists, academics and politicians. Today, it is pedestrian-only, no cars allowed, with various souvenir shops and restaurants.
This is [Saint Basil's Cathedral], on the [Red Square]. This is officially The Cathedral of the Protection of Most Holy Theotokos on the Moat, but there is no longer any moat.
There is a lot to see around the Red Square. The [Kremlin] is a walled castle with an [Armoury Chamber] and various other cathedrals and government buildings to see inside. A ticket for the Armoury Chamber will set you back 700 rubles (about 22 bucks). [Lenin's Mausoleum] is free of charge, but only open for three hours on weekdays, from 10:00am to 1:00pm, so plan accordingly.
Returning to the hotel from the event venue on Wednesday, I walked past the [Cathedral of Christ the Saviour] on my way to the Kropotkinskaya subway station. It is actually across the river from the Red Square. Originally completed in the 19th century, demolished in 1931, and rebuilt in 2000, it is considered the tallest Orthodox church in the world at 344 feet. The domes are electroplated in gold.
I found the taxis to be ridiculously expensive here in Moscow, so I took to the subway instead. If fellow filmmaker John Waters can [hitchhike across the state of Ohio], I can certainly be adventurous and ride the Moscow Metro.
The Moscow Metro is the second most-used rapid transit system in the world (the first being Tokyo's). As a result, the subway can get quite crowded, but I found being squashed into a carload of Russian supermodels to be quite tolerable. The price is a bargain at only 28 rubles per ride (less than a dollar), with unlimited transfers.
While the Metro is a great way to get around the city, it is also a destination in itself: the system was built in 1935 and has historic architecture that you can only see underground. At the [Ploshchad Revolyutsii station], for example, there is a whole collection of bronze statues of men and women in different work roles. At the statue of the frontier guard, so many people rub the dog's nose for good luck that it has become bright and shiny.
Dispel quickly the notion that you need to eat traditional Russian food while in Moscow. A bowl of Borscht (a watery soup made from beets) and a plate of Beef Stroganoff set me back 50 bucks! Apparently, restaurants know that only tourists ask for "traditional Russian food", so the prices are set accordingly.
I had to find less expensive eats to stay within my per diem meal limits. Where do the locals eat? Russia is a modern country, with plenty of Burger King, Wendy's, Baskin Robbins, Dunkin Donuts and Starbucks.
No visit to any foreign country would be complete without at least eating one meal at McDonald's. Before working for IBM, I did software engineering for McDonald's, so as a former employee, I try to visit at least one McDonald's in every country. They have restaurants in over 120 countries, so I have a ways to go yet.
A meal consisting of a "Royal" quarter-pounder with cheese, large fries and a Coke was only 214 rubles, less than seven dollars. The meat patty was medium rare, just like I make at home. You just can't get that in the States, where everything has to be overcooked to avoid food-borne illnesses. The fries were a bit over-salted, but the Coke struck just the right balance of syrup and carbonation.
Moscow is home to many museums and art galleries. The [State Tretyakov Gallery] focuses on sculptures and oil paintings from Russian artists, and is named after the Russian merchant who donated his collection to get it started.
Plan a good two hours to see everything. There were many guided tour groups when I was there, which slowed me down getting through the large crowds of old people.
There were over 50 rooms, with subject matter ranging from portraits, ships, and buildings, to piles of dead bodies in battle scenes. I especially liked the unique styles of [Mikhail Vrubel] and [Vasily Vereshchagin]. In many of the rooms, there were laminated placards in large-type English that explained the pieces on display.
My last stop was the [Lomonosov Moscow State University (MSU)]. This served two purposes. First, it is situated up on a hill so that you can see a great view of the rest of the city. Second, there were street vendors selling souvenirs, including the ever-popular [Matryoshka dolls], military hats, keychains, and refrigerator magnets.
In other countries, I have found going to the movies as an interesting way to see the locals in action. Foreign movies are shown here in their original language, with either Russian subtitles for the locals or headphones to hear the Russian dubbed audio track. Sadly, I did not have time to do that this week. This poster, depicting the latest Disney movie "Brave", indicates that it opens this weekend.
As always, from a sightseeing perspective, I try to leave a few things un-done, so I have reason to come back. If you know of any other exciting things to see or do in Moscow, please put that in the comments below so that I can consider it for my next trip! I would like to thank my IBM Russia colleagues Rimma Vladimirova and Sunil Bagai for their suggestions and assistance.
This week I'm in beautiful Guadalajara, Mexico teaching at our [System Storage Portfolio Top Gun class]. We have all of our various routes-to-market represented here, including our direct sales force, our technical teams, our online IBM.COM website sales, as well as IBM Business Partners. Everyone is excited over last week's IBM announcement of [4Q07 and full year 2007 results], which includes double-digit growth in our IBM System Storage business, led by sales of our DS8000, SAN Volume Controller and Tape systems. Obviously, as an IBM employee and stockholder, I am biased, so instead I thought I would provide some excerpts from other bloggers and journalists.
But what was striking in the company’s conference call on Thursday afternoon was the unhedged optimism in its outlook for 2008, given the strong whiff of recession fear elsewhere.
The questions from Wall Street analysts in the conference call had a common theme. Why are you so comfortable about the 2008 outlook? Now, that might just be professional churlishness, since so many of them have been so wrong recently about I.B.M. Wall Street had understandably thought, for example, that I.B.M.’s sales to financial services companies — the technology giant’s largest single customer category — would suffer in the fourth quarter, given the way banks have been battered by the mortgage credit crunch.
But Mr. Loughridge said that revenue from financial services customers rose 11 percent in the fourth quarter, to $8 billion. The United States, he noted, accounts for only 25 percent of I.B.M.’s financial services business.
The other thing that seems apparent is how much I.B.M.’s long-term strategy of moving up to higher-profit businesses and increasingly relying on services and software is working. Its huge services business grew 17 percent to $14.9 billion in the quarter. After the currency benefit, the gain was 10 percent, but still impressive. Software sales rose 12 percent to $6.3 billion.
Looking at IBM's business segments, it can be seen that they offer far more coverage of the technology space than those of the typical tech company:
IBM is just so big and diversified that there is little comparison between it and most other tech companies. IBM is a member of an elite group of companies like Cisco Systems (CSCO), Microsoft (MSFT), Oracle (ORCL) or Hewlett-Packard (HPQ).
IBM's wide international coverage and deep technological capabilities dwarf those of most tech companies. Not only do they have sales organizations worldwide but they have developers, consultants, R&D workers and supply chain workers in each geographic region. Their product mix runs from custom software to packaged enterprise software, hardware (mainframes and servers), semiconductors, databases, middleware technology, etc., etc. There are few tech companies that even attempt to support that many kinds and variations of products.
As color on the fourth quarter earnings announcement, there are a couple of observations that I would like to make. The first one speaks to IBM's international prowess. The company indicated that growth in the Americas was only 5%. International sales were a primary driver of IBM's good results. As an insight on the difference between IBM and most other tech companies, it is clear that nowadays, a tech company that isn't adept at selling internationally is going to be in trouble.
Terrific performance in a terrific year - no doubt a result of its strong global model. IBM operates in 170 countries, with about 65% of its employees outside the US and about 30% in Asia Pacific. For fiscal 2007, revenues from the Americas grew 4% to $41.1 billion (42% of total revenue), [EMEA] grew 14% to $34.7 billion (35% of total revenue), and Asia-Pacific grew by 11% to $19.5 billion (19.7% of total revenue). IBM sees growth prospects not just in [BRIC] but also countries like Malaysia, Poland, South Africa, Peru, and Singapore.
Thus far 2008–all two weeks of it–hasn’t been pretty for the tech industry. Worries about the economy prevail. And even companies that had relatively good things to say, like Intel, get clobbered. It’s ugly out there–unless you’re IBM.
I am sure there will be more write-ups and analyses on this over the coming weeks, and others will probably wait until more tech companies announce their results for comparison.
This week I am in Japan, so my week's theme will center around travel, speaking at conferences, and Japan itself. I first travelled to Japan in the late 1980s, to visit a college friend who was working for Ford Motor Company, on assignment in Japan as liaison to Mazda Corp.
Back then, the only Japanese phrase I knew was "Wakarimashita", which means "I know" or "I understand". If you only know one phrase in a foreign language, this could possibly be the worst one to know.
My second trip, I was better prepared. I learned three "survival phrases":
sumimasen - "I'm sorry / excuse me"
hanashimasen - "I don't speak"
wakarimasen - "I don't know / I don't understand"
These are great phrases to know individually, but even more powerful strung all together, to signal that you are about to begin speaking English, but at least with good reason (and perhaps a bit of irony).
I've been to Japan many times since, and have picked up more of the language. When travelling to Japan, or anywhere for that matter, it is important to "pack light". I'll be gone for two weeks, but all I bring is a laptop bag and one carry-on piece of luggage.
I went on a trip to Prague (Czech Republic) with a female co-worker who brought FOUR pieces of luggage. One was just for shoes. Another piece was just for hair styling gel, make-up, face creams and finger nail polish. Today, the rules are different, and the TSA allows only a single quart-size plastic bag containing little jars of 3 ounces or less of liquids or gels. I didn't have any "quart-size" bags, so I used a smaller sandwich-size bag.
What does all this have to do with storage? I've helped many clients move data centers, and this involves moving their servers, their networks, and their storage. Servers and Networks are easy to move, but storage presents some challenges. In many cases, the entire company is shut down, the storage is moved, and then the company is operational again. Needless to say, it is best to do this over a weekend.
I tell clients to "pack light" and figure out what data they really need in the move. What do you really need to operate your business? Bring just that, the rest can arrive later.
This same concept applies for Business Continuity and Disaster Recovery planning. What do you really need after a disaster occurs? Can you run your business for a few weeks on that data, until the rest of the data is restored? If you can't run your entire business on that data, can you run your most important parts of your business?
If you run a bank, perhaps keeping your ATM cash machines running is more important than making out new loans. In Japan, if a bank has any outages that impact their ATM machines, they put out a full page advertisement in the local papers to apologize for the inconvenience.
Business Continuity is one of the nine "Infrastructure Solutions" that IBM can help clients with. If you are interested in learning more on how IBM can help you with your Business Continuity, click here.
An avid reader of this blog pointed me to a blog post [A Small Tec DIGG on IBM XIV], by Gowri Ananthan, a System Engineer in Singapore. Basically, she covers past battles, er.. discussions between me and fellow blogger BarryB from EMC, and [blegs] for answers to three questions.
Gowri, here are your answers:
Q1. Does IBM offer a Pay-as-you-Go [PAYGO] upgrade path for its IBM XIV disk storage system?
The concern was expressed as:
PAYGO also requires the customer to purchase the remaining capacity within 12 months of installation. So it is more of a 12-month installment plan than pay-as-you-grow.
A1. Actually, IBM offers several methods for your convenience:
With IBM's Capacity on Demand (CoD) plan, you get the full frame with 15 modules installed on your data center floor, but only pay for the first four modules (21 TB), then pay for 5.3 TB module increments as you need them over the next 12 months. This is ideal for companies that don't know how fast they will grow, but do not want to wait for new modules to be delivered and installed when needed.
With IBM's Partial Rack offering, you can get a system with as little as six modules (27 TB), and then over time, add more modules as you need them. This does not have to be done within 12 months; you can stay at six modules for as long as you like, and take as long as you want to add more modules. When you are ready for more capacity, the drawer or drawers can be delivered and installed non-disruptively.
Neither of these is a "payment installment plan", but certainly if you want to spread your costs into regularly-scheduled monthly payments across multiple years, IBM Global Financing can probably work something out.
Q2. Does IBM consider the XIV as green storage?
The concern was expressed as:
You are powering (8.4KW) and cooling all 180 drives for the whole duration, whether you're using the capacity or not. is it what you called Greener power usage..?
A2. Yes. IBM considers the IBM XIV as green storage. The 8.4 kW per frame is less than the 10-plus kW that a comparable 2-frame EMC DMX-950 system would consume. The energy savings in the IBM XIV come from delivering FC-like speeds using SATA disks that rotate more slowly, and therefore take less energy to spin.
In the fully-populated or Capacity on Demand configuration, you would spin all 180 disks. However, using the partial rack configuration, the 6-module system has only 40 percent of the disks, and therefore consumes only 40 percent of the energy. If you don't plan to store at least 20-30 TB, you might consider the DS3000, DS4000, DS5000, or DS8000 disk systems instead.
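As a quick sanity check of the partial-rack proportion above, here is a couple of lines of shell arithmetic. The 12-disks-per-module figure is my assumption about the XIV layout (it is consistent with 180 disks across 15 modules); this illustrates the proportion only, it is not a power-measurement tool.

```shell
# 15 modules of 12 disks each = 180 disks; a 6-module partial rack has 72.
modules_full=15
modules_partial=6
disks_per_module=12
pct=$((100 * modules_partial / modules_full))
echo "$((modules_partial * disks_per_module)) of $((modules_full * disks_per_module)) disks = ${pct}% of the spindles"
# → 72 of 180 disks = 40% of the spindles
```

Since spinning disks dominate the power draw, running 40 percent of the spindles is what yields roughly 40 percent of the energy consumption.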
Q3. How do you connect more than 24 host ports to an IBM XIV?
The concern was expressed as:
And finally do not forget my question on 24-FC Ports… Up to 24 Fiber Channel ports offering 4 Gbps, 2Gbps or 1 Gbps multi-mode and single-mode support.Stop.. stop.. how you gonna squeeze existing bunch of FC cables in 24 ports?
A3. Best practices suggest that if you have ten or more physical servers, each with two separate FC ports, then you should use a SAN switch or director in between. If you require four ports per server, then you would need a SAN switch beyond six servers to connect to the IBM XIV. If you consider that 24 FC ports, at 4Gbps, represents nearly 10 GB/sec of bandwidth, you will recognize that this is not a performance bottleneck for the system.
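As a back-of-envelope check on that bandwidth figure (my arithmetic, not an official specification):

```shell
# 24 Fibre Channel ports at 4 Gbit/s each:
ports=24
gbps_per_port=4
raw=$((ports * gbps_per_port))     # 96 Gbit/s of raw line rate
# FC's 8b/10b encoding carries 8 data bits per 10 line bits, so usable
# payload is about 80% of the line rate; then divide by 8 bits per byte:
usable=$((raw * 8 / 10 / 8))       # ~9 GB/s usable, i.e. "nearly 10 GB/sec"
echo "${raw} Gbit/s raw, roughly ${usable} GB/s usable"
```

That aggregate comfortably exceeds what a couple dozen directly-attached hosts would drive, which is why a SAN switch in between is not a performance concern.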
This week I was aboard the Queen Mary in Long Beach, California! This was a business event organized by [Key Info Systems], a valued IBM Business Partner. Key Info resells IBM servers, storage and switches.
The Queen Mary retired in 1967, and has been converted into a hotel and events venue. The locals just parked their car and walked on board, but I got to stay Tuesday through Thursday in one of the cabins. It was long and narrow, with round windows! There were four dials for the bathtub: Cold Salt, Hot Fresh, Cold Fresh, and Hot Salt.
Stepping on the boat was like walking back in time through history! If you decide to go see it, check out the [Art Deco bar] at the front of the Promenade deck. The ship is still in the water, but is permanently docked. It is sectioned off to prevent the ocean waves from affecting it, so we did not have the nauseating rocking back and forth normally associated with cruise ships.
(It is with a bit of irony that we are on the Queen Mary just days after the tragedy of the [Costa Concordia], the largest Italian cruise ship, which ran aground near Isola del Giglio. The captain will have to explain how he [fell into a lifeboat] before he had a chance to wait for everyone else to get safely off the shipwreck. He was certainly no [Captain Sully]! I am thankful that most of the 4,200 people survived the incident.)
Lief Morin, Founder and Chief Executive for Key Info Systems, kicked off the meeting with highlights of 2011 successes. I have known Lief for years, as Key Info comes to the Tucson EBC on a frequent basis. This event was designed to give his sellers an update on the latest for each product line, and what to look forward to in the next 12-18 months.
The next speaker was from Vision Solutions that provides High Availability solutions for IBM i on Power Systems. In 2010, their company nearly doubled in size with the acquisition of Double-Take, which provides data replication for x86 servers running Windows, Linux, VMware, Hyper-V and other hypervisors. The capabilities of Double-Take sounded similar to what IBM offers with [Tivoli Storage Manager FastBack] and [Tivoli Storage Manager for Virtual Environments].
Dinner at Sir Winston's
Rather than take the "Ghosts and Legends" tour, I opted for dinner at the Queen Mary's signature restaurant, Sir Winston's. This is a fancy place, so dress accordingly. If you want the Raspberry soufflé, order it early as it takes 30 minutes to prepare!
[Storwize V7000], including the new Storwize V7000 Unified configuration
Storage is an important part of the Key Info Systems revenue stream, so I was glad to have lots of questions and interactions from the audience.
Murder Mystery Dinner
The acting troupe from [Dinner Detective] put on quite the show for us! With all that is going on in the world, it is good to laugh out loud every now and then.
In other murder mystery dinners I have participated in, each person is assigned a "character" and given a script of what to say and when to say it. This was different, we got to pick our own characters. I chose "Doctor Watson", from the Sherlock Holmes series. Several attendees thought it was a double meaning with [IBM Watson], the computer that figured out the clues on Jeopardy! television game show, and has since been [put to work at Wellpoint] to help out the Healthcare industry.
After the "murder" happened, two actors portraying policemen selected members of the audience to answer questions. We didn't get a script of what to say, so everyone had to "ad lib". I was singled out as a suspect, and had fun playing along in character. One of the attendees afterwards said he was impressed that I was able to fabricate such amusing and elaborate responses to their personal and embarrassing questions. As a public speaker for IBM, I have had a lot of practice thinking quickly on my feet.
Fibre Channel and Ethernet Switches
The next two speakers gave us an update on Fibre Channel and Ethernet switches, and their thoughts on the inevitability of Fibre Channel over Ethernet (FCoE). One of the exciting new developments is the [Brocade Network Subscription] which creates a flexible pay-per-use Ethernet port rental model for customers. This is especially timely given the Financial Accounting Standards Board proposed [FASB Change 13] that affects operating leases in the balance sheet.
With the Brocade Network Subscription, you pay monthly for the ports you are using. Need more ports? Brocade will install the added gear. Using fewer ports? Brocade will take the equipment back. There is no term endpoint or residual value like traditional leasing, so when you are done using the equipment, give it back any time. This is ideal for companies that may need a lot of Ethernet ports for the next 2-3 years, but then plan to taper down, and don't want to get stuck with a long-term commitment or capital depreciation.
The last speaker was from VMware. IBM is the #1 reseller of VMware, and VMware commands an impressive 81 percent marketshare in the x86 virtualization space. The speaker presented VMware's strategy going forward, which aligns well with IBM's own strategy, to help companies Cloud-enable their existing IT infrastructures, in preparation for eventual moves to Hybrid or Public cloud deployments.
Special thanks to Lief Morin for sponsoring this event, Raquel Hernandez from IBM for coordinating my travel, and Pete, Christina and Kendrell from Key Info Systems for organizing the activities!
This is a reasonable question. Since Invista 2.0 came out months ago in August, and Invista 2.1 is rumored to be out by the end of this month, why put out a press release now, rather than just wait a few weeks? The significant part of this announcement was that EMC finally has their first customer reference. To be fair, getting a customer to agree to be a reference is difficult for any vendor. Some non-profits and government agencies have rules against it, and some corporations just don't want to be bothered by journalists, or take phone calls from other prospective customers. I suspect EMC wanted to put the good folks from Purdue University in front of the cameras and microphones before they:
In Moore's terminology, Purdue University would be a "technology enthusiast", interested in exploring the technology of the EMC Invista. Universities by their very nature often see themselves as early adopters, willing to take big risks in hopes of reaping big rewards. The chasm happens later, when there are a lot of early adopters, all willing to be reference accounts. The mainstream market--shown here as pragmatists, conservatives, and skeptics--is unwilling to accept reference claims from early adopters, searching instead for moderate gains from minimal risks. They prefer references from customers that are similar in size and industry. Whether a vendor can get a product to cross this chasm is the focus of the book.
Why "SAN" virtualization?
Technically, Invista is "storage" virtualization, not "SAN" virtualization. Virtualization is any technology that makes one set of resources look and feel like a different set of resources, preferably with more desirable characteristics. You can virtualize servers, SANs, and storage resources.
Virtual SAN (VSAN) technology, supported by the Cisco MDS 9500 Series Multilayer Director Switch, partitions a single physical SAN into multiple VSANs, allowing different business functions and requirements to share a common physical infrastructure.
How does Invista advance Cisco's VSAN functionality? It doesn't, but that doesn't make the title a falsehood, or the press release by association full of lies. If you read the entire press release, EMC correctly states that Invista is "storage" virtualization. Some storage virtualization products, like EMC Invista and IBM System Storage SAN Volume Controller (SVC), require a SAN as a platform on which to perform their magic. Marketing people might use the term "SAN" to refer not just to the network gear that provides the plumbing, but also to the storage devices that are attached to the SAN. In that light, the use of "SAN virtualization" can be understood in the title.
More importantly, it appears that EMC no longer requires that you purchase new SAN equipment from them with Invista. When Invista first came out, it cost over a quarter-million US dollars to cover the cost of the intelligent switches, but with the price drop to $100K, I imagine this means they assume everyone has an appropriately-supported intelligent switch already deployed.
Why this architecture?
In his post [Storage Virtualization and Invista 2.0], EMC blogger ChuckH does a fair job explaining why EMC went in this direction for Invista, and how it differs from other storage virtualization products.
Most storage virtualization products are cache-based. The world's first disk storage virtualization product, the IBM 3850 Mass Storage System, introduced in 1974, and the first tape virtualization product, the IBM 3494 Virtual Tape Server, introduced in 1997, both used disk cache in front of tape storage. Later virtualization products, like IBM SVC and HDS USP-V, use DRAM memory cache in front of disk storage, but the concept is the same. People are comfortable with cache-based solutions, because the technology is mature and well proven in the marketplace, and are excited and delighted that these can offer the following features in a mixed heterogeneous disk environment:
instantaneous point-in-time copy
None of these features are provided by Invista, as there is no cache in the switch. Instead, Invista is a "packet cracker": it cracks open each FCP packet, inspects and modifies the contents, then passes the FCP packet along to the appropriate storage device. This process slows down each read and write by some amount, perhaps 20 microseconds. The disadvantage of slowing down every read and write is offset by other benefits, like non-disruptive data migration.
To compensate for Invista's inability to provide these features, EMC offers a second solution called EMC RecoverPoint, an in-band cache-based appliance similar in design to SVC, but which maps all virtual disks one-to-one to physical disks. It offers remote-distance asynchronous mirroring between heterogeneous devices. EMC supports RecoverPoint in front of Invista, but if you are considering buying both to get the combined set of features, you might as well buy an IBM SVC or HDS USP-V instead, getting everything in one system rather than two, which is much less complicated. IBM SVC and HDS USP-V have both "crossed the chasm", having sold thousands of units to every type and size of customer.
Hopefully, this answers the questions you might have about EMC Invista.
Are you tired of hearing about Cloud Computing without having any hands-on experience? Here's your chance. IBM has recently launched its IBM Development and Test Cloud beta. This gives you a "sandbox" to play in. Here's a few steps to get started:
Generate a "key pair". There are two keys: a "public" key that will reside in the cloud, and a "private" key that you download to your personal computer. Don't lose this private key.
Request an IP address. This step is optional, but I went ahead and got a static IP, so I don't have to type in long hostnames like "vm353.developer.ihost.com".
Request storage space. Again, this step is optional, but you can request a 50 GB, 100 GB, or 200 GB LUN. I picked a 200 GB LUN. Note that each instance comes with some 10 to 30 GB of storage already. The advantage of a storage LUN is that it is persistent, and you can mount it to different instances.
Start an "instance". An "instance" is a virtual machine, pre-installed with whatever software you chose from the "asset catalog". These are Linux images running under Red Hat Enterprise Virtualization (RHEV), which is based on Linux's kernel virtual machine (KVM). When you start an instance, you get to decide its size (small, medium, or large), whether to use your static IP address, and where to mount your storage LUN. In the examples below, I gave each instance a static IP and mounted the storage LUN at the /media/storage subdirectory. The process takes a few minutes.
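As a sketch of the key-pair step, here is how it might look with OpenSSH on a Linux or Mac personal computer (PuTTYgen does the equivalent on Windows). The key file name is my own illustrative choice; the idcuser login and vm353 hostname come from later in this post.

```shell
# Generate a 2048-bit RSA key pair. The -N "" gives an empty passphrase
# for brevity here -- use a real passphrase in practice.
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 2048 -f ~/.ssh/ibm_cloud_key -N "" -q

# ibm_cloud_key.pub is the public half you upload to the cloud portal;
# ibm_cloud_key is the private half that stays on your machine.
ls -l ~/.ssh/ibm_cloud_key ~/.ssh/ibm_cloud_key.pub

# Later, log in to a running instance with the private key, e.g.:
#   ssh -i ~/.ssh/ibm_cloud_key idcuser@vm353.developer.ihost.com
```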
So, now that you are ready to go, what instance should you pick from the catalog? Here are three examples to get you started:
IBM WebSphere sMASH Application Builder
Base OS server to run LAMP stack
Next, I decided to try out one of the base OS images. There are a lot of books on Linux, Apache, MySQL and PHP (LAMP), which together power nearly 70 percent of the web sites on the internet. This instance lets you install all the software from scratch. Between the Red Hat and Novell SUSE distributions of Linux, Red Hat is focused on being the Hypervisor of choice, and SUSE is focusing on being the Guest OS of choice. Most of the images in the "asset catalog" are based on SLES 10 SP2. However, there was a base OS image of Red Hat Enterprise Linux (RHEL) 5.4, so I chose that.
To install software, you either have to find the appropriate RPM package, or download a tarball and compile from source. To try both methods out, I downloaded tarballs of Apache Web Server and PHP, and got the RPM packages for MySQL. If you just want to learn SQL, there are instances on the asset catalog with DB2 and DB2 Express-C already pre-installed. However, if you are already an expert in MySQL, or are following a tutorial or examples based on MySQL from a classroom textbook, or just want a development and test environment that matches what your company uses in production, then by all means install MySQL.
This is where my SSH client comes in handy. I am able to log in to my instance and use "wget" to fetch the appropriate files. An alternative is to use "SCP" (also part of PuTTY) to do a secure copy from your personal computer up to the instance. You will need to do everything via the command line interface, including editing files, so I found this [VI cheat sheet] useful. I copied all of the tarballs and RPMs to my storage LUN ( /media/storage ) so as not to have to download them again.
Compiling and configuring them is a different matter. By default, you log in as an end user, "idcuser" (which stands for IBM Developer Cloud user). However, sometimes you need "root" level access. Use "sudo bash" to get into root level mode, which allows you to put the files where they need to be. If you haven't done a configure/make/make install in a while, here's your chance to relive those "glory days".
In the end, I was able to confirm that Apache, MySQL and PHP were all running correctly. I wrote a simple index.php that invoked phpinfo() to show all the settings were set correctly. I rebooted the instance to ensure that all of the services started at boot time.
Rational Application Developer over VDI
For this last example, I started an instance pre-installed with Rational Application Developer (RAD), which is a full Integrated Development Environment (IDE) for Java and J2EE applications. I used the "NX Client" to launch a virtual desktop image (VDI), which in this case was Gnome on SLES 10 SP2. You might want to increase the screen resolution on your personal computer so that the VDI does not take up the entire screen.
From this VDI, you can launch any of the programs, just as if it were your own personal computer. Launch RAD, and you get the familiar environment. I created a short Java program and launched it on the internal WebSphere Application Server test image to confirm it was working correctly.
If you are thinking, "This is too good to be true!" there is a small catch. The instances are only up and running for 7 days. After that, they go away, and you have to start up another one. This includes any files you had on the local disk drive. You have a few options to save your work:
Copy the files you want to save to your storage LUN. This storage LUN appears persistent, and continues to exist after the instance goes away.
Take an "image" of your "instance", a function provided in the IBM Developer and Test Cloud. If you start a project Monday morning, work on it all week, then on Friday afternoon, take an "image". This will shut down your instance, and back up all of the files to your own personal "asset catalog" so that the next time you request an instance, you can choose that "image" as the starting point.
Another option is to request an "extension" which gives you another 7 days for that instance. You can request up to five unique instances running at the same time, so if you wanted to develop and test a multi-host application, perhaps one host that acts as the front-end web server, another host that does some kind of processing, and a third host that manages the database, this is all possible. As far as I can tell, you can do all the above from either a Windows, Mac or Linux personal computer.
Getting hands-on access to Cloud Computing really helps to understand this technology!
My how time flies. This week marks my 24th anniversary working here at IBM. This would have escaped me completely, had I not gotten an email reminding me that it was time to get a new laptop. IBM manages these on a four-year depreciation schedule, and I received my current laptop back in June 2006, on my 20th anniversary.
When I first started at IBM, I was a developer on DFHSM for the MVS operating system, now called DFSMShsm on the z/OS operating system. We all had 3270 [dumb terminals], large cathode ray tubes affectionately known as "green screens", and all of our files were stored centrally on the mainframe. When Personal Computers (PC) were first deployed, I was assigned the job of deciding who got them when. We were getting 120 machines, in five batches of 24 systems each, spaced out over the next two years, and I had to recommend who should get a PC in the first batch, the second batch, and so on. I was concerned that everyone would want to be part of the first batch, so I put out a survey, asking questions on how familiar they were with personal computers, whether they owned one at home, whether they were familiar with DOS or OS/2, and so on.
It was actually my last question that helped make the decision process easy:
How soon do you want a Personal Computer to replace your existing 3270 terminal?
As late as possible
I had five options, and roughly 24 respondents checked each one, making my job extremely easy. Ironically, once the early adopters of the first batch discovered that these PCs could be used for more than just 3270 terminal emulation, many of the others wanted theirs sooner.
Back then, IBM employees resented any form of change. Many took their new PC, configured it to be a full-screen 3270 emulation screen, and continued to work much as they had before. My mentor, Jerry Pence, would print out his emails, and file the printed copies into hanging file folders in his desk credenza. He did not trust saving them on the mainframe, so he was certainly not going to trust storing them on his new PC. One employee used his PC as a door stop, claiming he would continue to use his 3270 terminal until it was taken away from him.
Moving forward to 2006, I was one of the first in my building to get a ThinkPad T60. It was so new that many of the accessories were not yet available. It had Windows XP on a single-core 32-bit processor, 1GB RAM, and a huge 80GB disk drive. The built-in 1GbE Ethernet went unused for a while, as we had a 16 Mbps Token Ring network.
I was the marketing strategist for IBM System Storage back then, and needed all this excess power and capacity to handle all my graphic-intense applications, like GIMP and Second Life.
Over the past four years, I made a few slight improvements. I partitioned the hard drive to dual-boot between Windows and Linux, and created a separate partition for my data that could be accessed from either OS. I increased the memory to 2GB and replaced the disk with a drive holding 120GB capacity.
A few years ago, IBM surprised us by deciding to support Windows, Linux and Mac OS computers. But actually it made a lot of sense. IBM's world-renowned global services manages the help-desk support of over 500 other companies in addition to the 400,000 employees within IBM, so they already had to know how to handle these other operating systems. Now we can choose whichever we feel makes us more productive. Happy employees are more productive, of course. IBM's vision is that almost everything you need to do would be supported on all three OS platforms:
Access your email, calendar, to-do list and corporate databases via Lotus Notes on either Windows, Linux or Mac OS. Corporate databases store our confidential data centrally, so we don't have to have them on our local systems. We can make local replicas of specific databases for offline access, and these are encrypted on our local hard drive for added protection. Emails can link directly to specific entries in a database, so we don't have huge attachments slowing down email traffic. IBM also offers LotusLive, a public cloud offering for companies to get out of managing their own Lotus Domino email repositories.
Create presentations, documents and spreadsheets on either Windows, Linux or Mac OS. Lotus Symphony is based on open source OpenOffice and is compatible with Microsoft Office. This allows us to open and update directly in Microsoft's PPT, DOC and XLS formats.
Many of the corporate applications have now been converted to be browser-accessible. The Firefox browser is available on Windows, Linux and Mac OS. This is a huge step forward, in my opinion, as we often had to download applications just to do the simplest things like submit our time-sheet or travel expense reimbursement. I manage my blog, Facebook and Twitter all from online web-based applications.
The irony here is that the world is switching back to thin clients, with data stored centrally. The popularity of Web 2.0 helped this along. People are using Google Docs or Microsoft OfficeOnline to eliminate having to store anything locally on their machines. This vision positions IBM employees well for emerging cloud-based offerings.
Sadly, we are not quite completely off Windows. Some of our Lotus Notes databases use Windows-only APIs to access our Siebel databases. I have encountered PowerPoint presentations and Excel spreadsheets that just don't render correctly in Lotus Symphony. And finally, some of our web-based applications work only in Internet Explorer! We use the outdated IE6 corporate-wide, which is enough reason to switch over to Firefox, Chrome or Opera browsers. I have to put special tags on my blog posts to suppress YouTube and other embedded objects that aren't supported on IE6.
So, this leaves me with two options: get a Mac and run Windows on the side as a guest operating system, or get a ThinkPad to run Windows or Windows/Linux. I've opted for the latter, and put in my order for a ThinkPad T410 with a dual-core 64-bit Intel i5 processor, VT-capable to provide hardware assistance for virtualization, 4GB of RAM, and a huge 320GB drive. It will come installed with Windows XP as one big C: drive, so it will be up to me to re-partition it into a Windows/Linux dual-boot and/or run Windows and Linux as guest OS machines.
(Full disclosure to make the FTC happy: This is not an endorsement for Microsoft or against Apple products. I have an Apple Mac Mini at home, as well as Windows and Linux machines. IBM and Apple have a business relationship, and IBM manufactures technology inside some of Apple's products. I own shares of Apple stock, I have friends and family that work for Microsoft that occasionally send me Microsoft-logo items, and I work for IBM.)
I have until the end of June to receive my new laptop, re-partition, re-install all my programs, reconfigure all my settings, and transfer over my data so that I can send my old ThinkPad T60 back. IBM will probably refurbish it and send it off to a deserving child in Africa.
If you have an old PC or laptop, please consider donating it to a child, school or charity in your area. To help out a deserving child in Africa or elsewhere, consider contributing to the [One Laptop Per Child] organization.
Seth Godin has an interesting post titled Times a Million. He recounts how many people determine the fuel savings of higher-mileage cars to be only $300-$900 per year, and that this is not enough to motivate the purchase of a more efficient vehicle, such as a hybrid or electric car. Of course, if everyone drove more efficient vehicles, the benefits "times a million" would benefit everyone and the world's ecology.
When I discuss storage-related concepts, many executives mistakenly relate them to the one area of information technology they know best: their laptop. Let's take a look at some examples:
Information Lifecycle Management
Information Lifecycle Management (ILM) includes classifying data by business value, and then using this to determine placement, movement or deletion. If you think about the amount of time and effort to review the files on your individual laptop, and to manually select and move or delete data, versus the benefits for the individual laptop owner, you would dismiss the concept. Most administrative tasks are done manually on laptops, because automated software is either unavailable or too expensive to justify for a single owner.
In medium and large size enterprises, automated software to help classify, move and delete data makes a lot of sense. Executives who decide that ILM is not for their data center, based on their experiences with their laptop, are losing out on the "times a million" effect.
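To see why automation changes the economics, consider how little logic a basic age-based placement policy requires when software, rather than a person, applies it across millions of files. The thresholds, tier names, and classify function below are purely illustrative inventions of mine, not the policy engine of any IBM product:

```python
import time

# Hypothetical placement policies: (minimum age in seconds, action),
# checked oldest-first so the strongest action wins.
POLICIES = [
    (365 * 86400, "delete"),     # untouched for a year: candidate for deletion
    (90 * 86400, "tape"),        # untouched for 90 days: migrate to a tape pool
    (30 * 86400, "nearline"),    # untouched for 30 days: move to cheaper disk
]

def classify(last_access, now=None):
    """Map a file's last-access timestamp to a placement action."""
    if now is None:
        now = time.time()
    age = now - last_access
    for threshold, action in POLICIES:
        if age >= threshold:
            return action
    return "primary"   # recently used data stays on primary disk
```

Running this by hand over the files on one laptop would never pay off; run automatically across an enterprise file system, the same few lines drive the "times a million" savings.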
Laptops have various controls to minimize the use of battery, and these controls are equally available when plugged in. Many users don't bother turning off the features and functions they don't need when plugged in, because they feel the cost savings would only amount to pennies per day.
Times a million, energy savings do add up, and options to reduce the amount used per server, per TB of data stored, not only save millions of dollars per year, but can also postpone the need to build a new data center, or upgrade the electrical systems in your existing data center.
Backup and Disaster Recovery planning
I am not surprised how many laptops do not have adequate backup and disaster recovery plans. When executives think in terms of the time and effort to back up their data, often crudely copying key files to CD-ROM or USB key, and worrying about the management of those copies, which copies are the latest, and when those copies can be destroyed, they might reject deploying appropriate backup policies for others.
Times a million, the collected data stored on laptops could easily be half of your company's emails and intellectual property. Products like IBM Tivoli Storage Manager can manage a large number of clients with a few administrators, keeping track of how many copies to keep, and how long to keep them.
So, next time you are looking at technology or solutions for your data center, don't suffer from "Laptop Mentality". Focus instead on the data center as a whole.
It's Tuesday, and that means more IBM announcements!
I haven't even finished blogging about all the other stuff that got announced last week, and here we are with more announcements. Since IBM's big [Pulse 2010 Conference] is next week, I thought I would cover this week's announcement on Tivoli Storage Manager (TSM) v6.2 release. Here are the highlights:
Client-Side Data Deduplication
This is sometimes referred to as "source-side" deduplication, as storage admins can get confused on which servers are clients in a TSM client-server deployment. The idea is to identify duplicates at the TSM client node, before sending to the TSM server. This is done at the block level, so even files that are similar but not identical, such as slight variations from a master copy, can benefit. The dedupe process is based on a shared index across all clients and the TSM server, so if you have a file that is similar to a file on a different node, the blocks that are identical in both would be deduplicated.
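To make the idea concrete, here is a minimal Python sketch of block-level deduplication against a shared index. It illustrates the general technique only; the fixed block size, the hashing scheme, and the in-memory index are my own simplifications, not TSM's actual implementation:

```python
import hashlib

BLOCK_SIZE = 4096   # fixed-size blocks for simplicity; real products often chunk variably

shared_index = {}   # digest -> block contents, shared across all clients and the server

def backup(data: bytes) -> list:
    """Split data into blocks; store only blocks not already in the shared index.

    Returns a "recipe" of digests from which the original data can be rebuilt.
    """
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in shared_index:   # only new, unique blocks are sent/stored
            shared_index[digest] = block
        recipe.append(digest)
    return recipe

def restore(recipe: list) -> bytes:
    """Reassemble the original data from its recipe of block digests."""
    return b"".join(shared_index[d] for d in recipe)
```

Two files that differ only slightly from a master copy share most of their block digests, so only the changed blocks consume new space, which is exactly why dedupe helps with "similar but not identical" files.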
This feature is available for both backup and archive data, and can also be useful for archives using the IBM System Storage Archive Manager (SSAM) v6.2 interface.
Simplified management of Server virtualization
TSM 6.2 improves its support of VMware guests by adding auto-discovery. Now, when you spontaneously create a new virtual machine OS guest image, you won't have to tell TSM; it will discover this automatically! TSM's legendary support of VMware Consolidated Backup (VCB) now eliminates the manual process of keeping track of guest images. TSM also added support of the vStorage API for file-level backup and recovery.
While IBM is the #1 reseller of VMware, we also support other forms of server virtualization. In this release, IBM adds support for Microsoft Hyper-V, including support using Microsoft's Volume Shadow Copy Services (VSS).
Automated Client Deployment
Do you have clients at all different levels of TSM backup-archive client code deployed all over the place? TSM v6.2 can upgrade these clients to the latest client level automatically, using push technology, from any client running v5.4 and above. This can be scheduled so that only certain clients are upgraded at a time.
Simultaneous Background Tasks
The TSM server has many background administrative tasks:
Migration of data from one storage pool to another, based on policies, such as moving backups and archives on a disk pool over to a tape pool to make room for new incoming data.
Storage pool backup, typically data on a disk pool is copied to a tape pool to be kept off-site.
Copy active data. In TSM terminology, if you have multiple backup versions, the most recent version is called the active version, and the older versions are called inactive. TSM can copy just the active versions to a separate, smaller disk pool.
In previous releases, these were done one at a time, so it could make for a long service window. With TSM v6.2, these three tasks are now run simultaneously, in parallel, so that they all get done in less time, greatly reducing the server maintenance window, and freeing up tape drives for incoming backup and archive data. Often, the same file on a disk pool is going to be processed by two or more of these scheduled tasks, so it makes sense to read it once and do all the copies and migrations at one time while the data is in buffer memory.
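The serial-versus-parallel difference is easy to picture with a toy Python sketch. The task names echo the three TSM maintenance tasks above, but the threading model and the timings are purely illustrative:

```python
import threading
import time

def run_task(name, duration, log):
    """Stand-in for a maintenance task that moves data between storage pools."""
    time.sleep(duration)
    log.append(name)

tasks = [("migration", 0.1), ("storage pool backup", 0.1), ("copy active data", 0.1)]

# Serial (previous releases): total time is the sum of the task durations.
log_serial = []
start = time.perf_counter()
for name, dur in tasks:
    run_task(name, dur, log_serial)
serial_time = time.perf_counter() - start

# Parallel (v6.2 behavior): total time is roughly the longest single task.
log_parallel = []
start = time.perf_counter()
threads = [threading.Thread(target=run_task, args=(n, d, log_parallel))
           for n, d in tasks]
for t in threads:
    t.start()
for t in threads:
    t.join()
parallel_time = time.perf_counter() - start
```

With three equal-length tasks, the parallel pass finishes in roughly a third of the serial time, which is the effect that shrinks the server maintenance window.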
Enhanced Security during Data Transmission
Previous releases of TSM offered secure in-flight transmission of data for Windows and AIX clients. This security uses Secure Socket Layer (SSL) with 256-bit AES encryption. With TSM v6.2, this feature is expanded to support Linux, HP-UX and Solaris.
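For a feel of what restricting a connection to 256-bit AES looks like in practice, here is a sketch using Python's standard ssl module. This illustrates the concept only; it is not how TSM itself is configured:

```python
import ssl

# Build a client-side TLS context and restrict its TLS 1.2 cipher list to
# AES-256 suites, mirroring the "SSL with 256-bit AES" idea in the text.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("AES256")   # OpenSSL cipher string selecting AES-256 suites

# Inspect which suites the context will actually negotiate.
aes256_suites = [c["name"] for c in ctx.get_ciphers()
                 if "AES256" in c["name"] or "AES_256" in c["name"]]
```

Any client wrapped with this context would encrypt data in flight with 256-bit AES, so backup traffic crossing the network is unreadable to eavesdroppers.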
Improved support for Enterprise Resource Planning (ERP) applications
I remember back when we used to call these TDPs (Tivoli Data Protection). TSM for ERP allows backup of ERP applications, seamlessly integrating with database-specific tools like IBM DB2, Oracle RMAN, and SAP BR*Tools. This allows one-to-many and many-to-one configurations between SAP servers and TSM servers. In other words, you can have one SAP server back up to several TSM servers, or several SAP servers back up to a single TSM server. This is done by splitting databases into "sub-database objects", and then processing each object separately. This can be extremely helpful if you have databases over 1TB in size. In the event that backing up an object fails and has to be restarted, it does not impact the backup of the other objects.
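The sub-database object approach can be sketched in a few lines of Python. The helper names and the retry policy below are hypothetical, meant only to show how a failure in one object leaves the others untouched:

```python
def split_into_objects(db, num_objects):
    """Divide a database (modeled here as a list of records) into sub-database objects."""
    size = (len(db) + num_objects - 1) // num_objects
    return [db[i:i + size] for i in range(0, len(db), size)]

def backup_all(objects, backup_one, max_retries=2):
    """Back up each object separately; a failure only re-runs that one object."""
    done = {}
    for idx, obj in enumerate(objects):
        for attempt in range(max_retries + 1):
            try:
                done[idx] = backup_one(obj)   # backup_one is the caller's backup routine
                break
            except IOError:
                if attempt == max_retries:    # give up only after retries are exhausted
                    raise
    return done
```

Because each object is an independent unit of work, a transient failure mid-backup of a 1TB database costs one object's worth of rework rather than a full restart.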
As I have mentioned before, I started this blog on September 1, 2006 as part of IBM's big ["50 Years of Disk Systems Innovation"] campaign. IBM introduced the first commercial disk system on September 13, 1956 and so the 50th anniversary was in 2006. That means this month, IBM celebrates the "Diamond" anniversary, 60 years of Disk Systems!
For those who missed it, IBM announced last Tuesday encryption capability for the TS1120 drive, our enterprise tape drive that reads and writes 3592 cartridges. Do you need special cartridges for this? No! Use the same ones you have already been using!
You can read more about it at www.ibm.com/storage/tape.
Short and sweet, but it got me started, and I ended up writing 21 blog posts that first month. You can read blog posts from all 10 years by looking at the left panel of my blog under "Archive".
While traditional disk and tape storage are still very important and relevant in today's environment, IBM has also expanded into other technologies:
In 2012, IBM [acquired Texas Memory Systems]. In 2014, IBM shipped 62PB, more Flash capacity than any other vendor. In 2015, IBM continued its #1 status, shipping 170PB of Flash, again more than any other vendor.
IBM has flash everywhere, from the advanced FlashSystem 900, V9000, A9000 and A9000R models, to other all-flash array and hybrid flash-and-disk systems with various sets of features and functions to meet a variety of workload requirements.
The DS8888 all-flash array, and the DS8886 and DS8884 hybrid flash-and-disk systems, round out the latest in the DS8000 storage systems family. The SAN Volume Controller and Storwize family of products, based on IBM Spectrum Virtualize software, also have all-flash array and hybrid configurations, the most recent being the Gen2+ models of Storwize V7000F and V5030F. The latest solution is the DeepFlash 150 models, designed for analytics and unstructured data.
Between internally-developed IBM Spectrum Scale and IBM Spectrum Archive, and IBM's [acquisition of Cleversafe], IBM is ranked #1 in Object Storage. IBM Cloud Object Storage System, IBM's new name for Cleversafe's flagship product, is available as software-only, pre-built systems, or in the IBM SoftLayer cloud.
Software-Defined Storage (SDS) with IBM Spectrum Storage
Last year, IBM re-branded its various storage software products under the "IBM Spectrum Storage" family. Earlier this year, IBM announced the new [IBM Spectrum Storage Suite license] which makes it even easier to procure, either with a perpetual software license, elastic monthly licensing, or utility license that combines some of each.
IBM is ranked #1 in Software-Defined Storage, with over 40 percent marketshare, offering solutions as Software-only, pre-built systems, and in IBM SoftLayer cloud.
This Thursday, June 16, 2011, marks IBM's Centennial, its 100-year anniversary. It happens to also be my 25th anniversary with IBM Storage. To avoid conflicts in celebrations, we decided to celebrate my induction into the "Quarter Century Club" (QCC) last Friday instead.
My colleague Harley Puckett was master of ceremonies. Here he is presenting me with a commemorative plaque and keychain. Harley mentioned a few facts about 1986, the year I started working for IBM. Ronald Reagan was the US President, gasoline cost only 93 cents per gallon, and the US National Debt was only 2 trillion US dollars!
Here are my colleagues from DFSMShsm. From left to right: Ninh Le, Henry Valenzuela, Shannon Gallaher, and Stan Kissinger. I started in 1986 as a software developer on DFHSM, and slowly worked my way up to be a lead architect of DFSMS.
Here are my colleagues from Tivoli Storage Manager (TSM). From left to right: Matt Anglin, Ken Hannigan and Mark Haye. I first met them when they worked in DFDSS, having moved from San Jose, CA down to Tucson. While I never worked on the TSM code itself, I did co-author some of the patents used in the product and in other products, like the 3494 Virtual Tape Server, that make use of TSM internally. I also traveled extensively to promote TSM, often with a TSM developer tagging along so they could learn the ropes about how to travel and make presentations.
Here are my colleagues from the disk team. From left to right: Joe Bacco, Carlos Pratt, Gary Albert, and Siebo Friesenborg. I worked on the SMI-S interface for the ESS 800 and DS8000 disk systems needed for the Tivoli Storage Productivity Center. Joe leads the "Disk Magic" tools team. Carlos and I worked on qualifying the various disk products to run with Linux on System z host attachment. Gary Albert is the Business Line Executive (BLE) of Enterprise Disk. Siebo Friesenborg was a disk expert on performance and disaster recovery, but is now enjoying his retirement.
Here are my colleagues from the support team. From left to right: Max Smith, Dave Reed, and Greg McBride. I used to work in Level 2 Support for DFSMS with Max and Dave, carrying a pager and managing the queue on RETAIN. We had enough people so that each Level 2 only had to carry the pager two weeks per year. On Monday afternoons, the person with the pager would give it to the next person on the rotation. On Monday, September 10, 2001, I got the pager, and the following morning, it went off to help all the many clients affected by the September 11 tragedy.
I worked with Greg McBride when he was in DFSMS System Data Mover (SDM), and then again in Tivoli Storage Productivity Center for Replication (TPC-R), and now he is supporting IBM Scale-Out Network Attached Storage (SONAS).
Standing in the light blue striped shirt is Greg Van Hise, my first office-mate and mentor when I first joined IBM. He went on to be part of the elite "DFHSM 2.4.0" prima donna team, then moved on to be an architect for Tivoli Storage Manager (TSM).
I wasn't limited to inviting just coworkers; I was also able to invite friends and family. Here are Monica, Richard, and my mother. Normally, my parents head south for the summer, but they postponed their flights so that they could participate in my QCC celebration.
From left to right: my father, Greg Tevis, and myself. It was pure coincidence that my father would wear a loud darkly patterned shirt like mine. Honestly, we did not plan this in advance. Greg Tevis and I were lead architects for the Tivoli Storage Productivity Center, and Greg is now the Technology Strategist for the Tivoli Storage product line.
Here is Jack Arnold, fellow subject matter expert who works with me here at the Tucson Executive Briefing Center, sampling the food. We had quite the spread, including egg rolls, meatballs, luncheon meats, chicken strips, and fresh vegetables.
More colleagues from the Tucson Executive Briefing Center, from left to right: Joe Hayward, Lee Olguin, and Shelly Jost. Joe was a subject matter expert on Tape when I first joined the EBC in 2007, but he has moved back to the Tape development/test team. Lee is our master "Gunny" sergeant, managing all of our briefing schedules. Shelly is our Client Support Manager, and was the one who organized all the food and preparations for this event!
Lastly, here are Brad Johns, myself, and Harley Puckett. Brad was my mentor for my years in Marketing, and has since retired from IBM and now works on his golf game. I would like to thank all of the Tucson EBC staff for pulling off such a great event, and all my coworkers, friends and family for coming out to celebrate this milestone in my career!
In addition to the plaque and keychain, Harley presented me with a book of congratulatory letters. If you would like to send a letter, it's not too late, contact Mysti Wood (email@example.com).