This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years with IBM Systems, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, Lloyd has been leading efforts across the Communication/CSI Market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years Lloyd supported the industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist during his time in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM Storage in this high-value solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and LinkedIn Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
The IBM Storage and Storage Networking Symposium continues ...
DS8300 Benchmark for Global Mirror
Phil Allison of Fidelity National Information Services presented his success switching from the competition over to IBM DS8300 disk systems for use with Global Mirror. They used Performance Associates' famous PAIO driver to help with the benchmark testing. They ran the benchmarks at 2x and 3x their current workloads to see how well the DS8000 performed, measuring IOPS, MB/sec, and millisecond response time (msec). They were very impressed with their results, staying below their target of 0.8 msec for most of their runs.
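The "below target for most of their runs" check boils down to a simple calculation. Here is a minimal sketch; the per-interval response times are hypothetical sample values, not Fidelity's actual benchmark results:

```python
def pct_below_target(response_times_ms, target_ms=0.8):
    """Percentage of benchmark intervals whose average response
    time stayed below the target (0.8 msec in this case)."""
    below = sum(1 for t in response_times_ms if t < target_ms)
    return 100.0 * below / len(response_times_ms)

# Hypothetical per-interval averages from a PAIO-style run, in msec:
samples = [0.45, 0.52, 0.61, 0.73, 0.78, 0.85, 0.66, 0.58]
print(pct_below_target(samples))   # 87.5
```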
For Global Mirror, they did a performance "bake-off" between the Ciena CN2000 and the Cisco 9216i. These are implemented differently. Ciena uses a Layer-2 approach, encapsulating the Fibre Channel packets directly for transport over SDH/SONET or Gigabit Ethernet (GigE), which required dedicated circuits between Jacksonville, Florida and Little Rock, Arkansas. By contrast, Cisco uses a Layer-3 approach, encapsulating Fibre Channel packets within IP packets, which can leverage an existing datacenter-to-datacenter backbone.
To add stress to the benchmarks, they used a "Network Impairment" emulator. These artificially inject errors, lost packets, and other signal loss conditions. Running both Cisco and Ciena under these tests helped them decide which to purchase, and also reinforced the idea that they made the right choice in choosing IBM for their remote distance mirroring solution.
Comparison of Bare Machine Recovery Techniques
"Bare machine recovery" is the phrase used for restoring a machine that has no operating system installed (or the wrong operating system). Dave Canan from IBM Advanced Technical Support did a great job reviewing the various products and techniques available, and the pros and cons of each approach. The ones he covered were:
Tivoli Storage Manager - install fresh Windows Operating System, TSM client, and then follow certain steps
Automated System Recovery (ASR) - a new feature of Windows XP and Windows 2003 that works with the TSM client
Symantec Ghost - formerly called PowerQuest Drive Image, there are now two versions: Ghost Home Edition and Ghost Corporate Solution Suite
Cristie Bare Machine Recovery (CBMR) - this is an IBM partner that provides both Linux and Windows PE versions. Cristie includes a license for Windows PE, so there is no need to use the alternative BartPE method.
SAN Volume Controller - Customer Experience
Bill Giles of Catholic Medical Center, a hospital in New Hampshire, presented his experiences with IBM System Storage SAN Volume Controller. They have a mix of IBM System x, System p, and System i servers, as well as machines from HP, Sun, and Dell. For applications, they have a Picture Archiving and Communication System (PACS) for cardiology and radiology, an HL7 interface engine, a Clinical Information System, TSM for backup, and Microsoft Exchange for e-mail.
They deployed SVC with hosts running AIX, Solaris, Windows 2000 and Windows 2003. They were delighted with the results:
Centralized Storage Provisioning
Consolidating disparate storage into a universal platform
Enables non-disruptive data migration
Increased utilization of existing disk resources
Improved disaster recovery with FlashCopy and Metro Mirror
Birds of a Feather (BOF) sessions
We had two BOFs, one for storage attached to System z operating systems, and another for storage attached to Linux, UNIX and Windows systems. This distinction made sense when mainframes could only attach to CKD disk and ESCON/FICON tape, and distributed systems could only do FCP/SCSI, but these days there is all kinds of convergence going on.
Linux on System z can now attach via FCP to LTO tape and SAN Volume Controller, allowing a wide range of storage options for that platform. z/OS, z/VM, z/VSE and Linux on System z can all access IBM System Storage N series via NFS.
The format was a traditional Q&A panel: we had experts at the front of the room, handling the questions and discussion topics brought up by the audience. I'll spare you the individual questions and answers.
Continuing my blog coverage of the [Forrester IT Forum 2009 conference], I finally catch up with some keynote sessions this morning. Here's my recap on the rest of the main tent general session keynote presentations from BP, Microsoft and CFIL.
Dana Deasy, CIO and Group VP, Information Technology and Services (IT&S), BP
Dana presented "The gift we've been given - reinventing the IT organization". He is the CIO of BP, an energy company that made over 360 billion dollars selling oil and gas. In fact, it is the fourth largest company in the world, with 92,000 employees in more than 100 countries. Back in 2007, business was good, but the senior management team felt that IT needed to be straightened out. Dana was brought in as a "fresh thinking" outsider, managing a group of 4,000 IT staff composed mostly of contractors, dealing with more than 2,000 IT suppliers and more than 60 versions of SAP.
Dana presented the results of their IT makeover. In the first year, he was able to cut out 400 million US dollars from the IT budget, including the reduction of 500 people from the IT staff. He increased the employee/contractor ratio to 40/60, with plans to bring this up to 65/35 over the next year. He was able to get 1800 IT employees to perform a self-assessment to understand their strengths and weaknesses. He was able to centralize the IT leadership team, and deploy a common [ITIL] best practices implementation.
What did he learn from all this? Here were his top four "lessons learned":
No time to dwell but know your facts
Work in parallel to push the pace of change
Listen but in the end take your own counsel
Tell a compelling story to energize your employees and your leadership
Chris Capossela, Senior VP of Information Worker Product Management Group, Microsoft
Chris presented "Uncovering Value in the Cloud and On Your Desktop", on how Microsoft customers are taking advantage of the software they have already purchased. For example, Jamba Juice was able to use Microsoft SharePoint to cut document searches from 15 minutes to just seconds, saving 10-15 hours per week for more than 500 managers. More importantly, they were more confident that the document they found was the right one. This is often referred to as "one version of the truth." In another example, Tyson Foods was able to connect Microsoft Word to their SAP application, and have that then connect to their Microsoft SharePoint.
Chris was amazed that many Microsoft customers don't take advantage of all that is available to them.He gave four examples:
Planning Services: If you buy an enterprise license to Microsoft products, you get planning services, from either Microsoft's own Microsoft Consulting Services or from thousands of Microsoft Business Partners. Only 8 percent of customers take advantage of this.
Home Use Rights: For enterprise license customers, employees can purchase "home use rights" to use the Enterprise level of Microsoft Office software for only 10 US dollars, but only about 3 percent take advantage of this.
Training: Many enterprise licenses come with 2-4 weeks of training vouchers, but only 40 percent take advantage of these vouchers.
E-Learning: Microsoft also offers e-learning, which Microsoft customers can either have delivered from Microsoft's own hosted services, or they can get a copy of the E-learning materials hosted inside their own company firewall. Again, few take advantage of this.
Chris wrapped up his presentation by citing some examples of customers that migrated from in-house, on-premises collaboration software to Microsoft's "Exchange Online" and "SharePoint Online" cloud computing Software-as-a-Service [SaaS] offerings. The cloud versions of this software do not offer all the features of the on-premises versions, but Microsoft is working to close this gap.
(IBM offers similar cloud computing services for email and collaboration called [LotusLive])
Gary presented "Tough Times: Opportunity for Innovation and Corporate Makeover". He had some great quotes intended to help people become better leaders, like this:
"Leadership failures do not usually result from leaders not knowing what to do; rather these failures result because leaders fail to do what they know full well they should and must do. Most leaders never get fully comfortable with the changes that they wish for their organizations."
Change the Conversation - employees want to have a compelling reason to change.
Create a compelling description of the future - employees want a vision of where they are headed.
Emotionally enlist employees in the cause - leaders are not remembered for their attributes, as much as the causes they stood for.
Help me understand the business - employees often do not have information in context to act accordingly.
Choose passion - employees want to see leaders who are passionate about, and confident in, the process and strategic direction.
Create a To-Stop list - we all have "to do" lists, but perhaps you need a "to don't" list. In other words, a list of bad habits and practices you need to discontinue.
Gary indicated that trust must be given before it is earned. If a leader doesn't trust the employees, how do you expect the employees to trust the leader? When asking employees to change their behavior, or self-assess their own skills, a leader must emphasize "I mean you no harm." Otherwise, mistrust will undermine the intended results.
The keynote sessions the past three days have provided clear motivation to the CIOs and IT leaders in the audience to consider making the necessary changes, with impressive results and actionable advice.
IBM introduces the eighth generation of Linear Tape-Open (LTO) tape drive technology, with corresponding support in all of the IBM tape libraries.
Fellow blogger Jon Toigo, of Drunkendata.com fame, came to Tucson to interview Lee Jesionowski, Ed Childers, Calline Sanchez, and me about this. Check out the various segments on YouTube or his website.
The LTO-8 cartridges are not yet available, but when they are, they will hold 12 TB raw capacity, or 30 TB effective capacity at a 2.5-to-1 compression ratio. The new drives are N-1 compatible, reading and writing LTO-7 cartridge media.
Previous generations also supported reading N-2 generation tapes; LTO-8 breaks from that tradition and will not support LTO-6 cartridges at all.
LTO-8 comes in both "Full Height" (FH) and Half-Height (HH) models. The FH models can transfer data at 360 MB/sec (or 900 MB/sec effective at 2.5-to-1 compression), and the HH models at 300 MB/sec (or 750 MB/sec effective at 2.5-to-1).
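As a quick sanity check, the "effective" figures above are simply the raw numbers multiplied by the quoted 2.5-to-1 compression ratio:

```python
def effective(raw, ratio=2.5):
    """Apply a nominal compression ratio to a raw capacity or rate."""
    return raw * ratio

raw_capacity_tb = 12   # LTO-8 raw cartridge capacity, TB
fh_rate_mb_s = 360     # Full Height native transfer rate, MB/sec
hh_rate_mb_s = 300     # Half Height native transfer rate, MB/sec

print(effective(raw_capacity_tb))  # 30.0 TB effective capacity
print(effective(fh_rate_mb_s))     # 900.0 MB/sec effective (FH)
print(effective(hh_rate_mb_s))     # 750.0 MB/sec effective (HH)
```

Of course, real-world compression depends on the data; already-compressed or encrypted data gets little benefit from the drive's compression.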
LTO-8 supports IBM Spectrum Archive and the "Linear Tape File System" (LTFS) tape format for self-describing long-term retention of data.
Compliance storage has come under many names. For tape and optical media, we had "WORM" for Write-Once, Read-Many. For disk-based storage, we had "Fixed-Content" or "Content-Addressable Storage". For file systems, we had "Immutable Storage".
Fortunately, the clever folks who crafted SEC Rule 17a-4 came up with an umbrella term: "Non-Erasable, Non-Rewriteable" (NENR), which covers all storage media, from WORM tape and optical, to tamperproof flash, disk and cloud-based solutions.
The other major change is "Concentrated Dispersal" mode, or "CD mode" for short. Erasure Coding works best when data is dispersed across three or more sites. When this happens, you can lose all of the data at one site, and still have 100 percent access to all data from the other locations.
IBM's "Information Dispersal Algorithm", or IDA for short, scatters slices of data across many servers. This is great for high availability and performance, but often meant that the minimum deployment was 500 TB or greater.
Not every organization is ready for such a large purchase. Some want to just [dip their toe in the water] with something smaller and less expensive. Well, IBM delivered!
The new CD mode means that instead of one slice per Slicestor node, you can pack lots of slices on each node. Each slice will be on a distinct disk drive, for high availability.
Entry-level configurations can now be as small as 72-104 TB, across 1, 2 or 3 sites.
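The availability math behind dispersal can be sketched in a few lines. The 12-slice, threshold-8 configuration below is a hypothetical illustration, not IBM's actual Slicestor defaults:

```python
def fault_tolerance(total_slices: int, threshold: int) -> int:
    """An IDA writes `total_slices` erasure-coded slices and can
    rebuild the data from any `threshold` of them, so it tolerates
    losing total_slices - threshold slices (drives, nodes, or sites)."""
    return total_slices - threshold

def storage_overhead(total_slices: int, threshold: int) -> float:
    """Raw capacity consumed per byte of user data."""
    return total_slices / threshold

# Hypothetical 12-slice, read-threshold-8 configuration, dispersed
# across 3 sites (4 slices per site): losing one whole site still
# leaves 8 slices, which meets the threshold, so data stays readable.
slices, threshold = 12, 8
print(fault_tolerance(slices, threshold))   # 4 slices can be lost
print(storage_overhead(slices, threshold))  # 1.5x raw-to-usable
```

This is why dispersal is cheaper than full replication: 1.5x overhead in this sketch, versus 3x for three complete copies, while still surviving a site loss.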
I did not register soon enough to get into the MGM Grand itself, so I am staying at a Hilton at the other end of the Las Vegas Strip, but am able to hop on the "Monorail" to get to the MGM, just in time for breakfast and the first welcome session.
This conference has a familiar setup: six keynote sessions, 62 break-out sessions, and four town hall meetings. Thanks to electronic survey devices on the seats, speakers were able to gather real-time demographics. A large portion of attendees, including myself, are attending this conference for the first time. Here's my recap of the first three keynote sessions:
The Future of Infrastructure and Operations: The Engine of Cloud Computing
How much do companies spend just to keep current? As much as 70 percent! The speaker noted that the best companies can get this down to 10 to 30 percent, leaving the rest of the IT budget to facilitate transformation. He predicts that companies are transforming their data centers from sprawled servers to virtualization, towards a fully automated, service-oriented, real-time infrastructure.
Whereas the original motivation for IT virtualization was to reduce costs, companies now recognize that it greatly improves agility, the ability to rapidly provision resources for new workloads, and that this will then lead to opportunities for alternative sourcing, such as cloud computing.
The operating system is becoming commoditized, shifting attention instead to a new concept: the "Meta OS". VMware's Virtual Data Center and Microsoft's Azure Fabric Controller are just two examples. Currently, analysts estimate only about 12 percent of x86 workloads are running virtualized, but this could be over 50 percent by 2012. In the same time frame, by 2012, storage terabytes are expected to increase 6.5-fold, with WAN bandwidth growing 35 percent per year.
Virtualization is not just for business applications. There are opportunities to eliminate the most costly part of any business: the Personal Computer, poster child of the skyrocketing costs of the client/server movement. Remote hosting of applications, streaming of applications, Software as a Service (SaaS) and virtual machines for the desktop can greatly reduce the costs of customized PC images and help desk support.
Cloud computing not only reduces per-use costs, but provides a lower barrier to entry and some much-needed elasticity. Draw a line anywhere along the application-to-hardware software/hardware stack, and you can define a cloud computing platform/service. About 65 percent of the attendees surveyed indicated that they were already doing something with cloud computing, or were planning to in the next four years.
To help get there, the speaker felt that Value-Added Resellers (VARs) and System Integrators (SIs) would evolve into "service brokers", providing Small and Medium-sized Businesses (SMBs) "one throat to choke" in mixed multisourced operations. The term "multisource" caught me a bit off guard; it refers to having some workloads run internally (insourced) while other workloads run out on the cloud (outsourced). Larger enterprises might have a "Dynamic Sourcing Team", a set of key employees serving as decision makers, employing both business and IT skills to determine the best sourcing for each application workload.
What are the biggest obstacles to getting there? The speaker felt it was the IT staff: people and culture are the most difficult to change. The second is the lack of appropriate metrics. Here were the survey results of the attendees:
41 percent had metrics for infrastructure economic attributes
49 percent had metrics for qualities of service (QoS)
12 percent had metrics to measure agility, speed of resource provisioning
The Data Center Scenario: Planning for the Future
This second keynote had two analyst "co-presenters". The focus was on the importance of having a documented Data Center strategy and architecture. Unfortunately, most Data Centers "happen on their own", with a major overhaul every 5 to 10 years. The speakers presented some "best practices" for driving this effort.
The first issue was to identify tiers of criticality, similar to those defined by the [Uptime Institute]. In their example, the most critical workloads would have recovery point objectives (RPO) of zero, and recovery time objectives (RTO) of less than 15 minutes. This is achievable using synchronous mirroring with full automation to handle the failover.
The second issue was to recognize that many applications were designed for local area networks (LAN), but many companies have distributed processing over a wide area network (WAN). Latency over these longer distances can kill the performance of these distributed applications.
The third issue was that different countries offer different levels of security, privacy and law enforcement. Canada and Ireland, for example, had the lowest risk; countries like India had medium risk; and countries like China and Russia had the highest risk, based on these factors.
The speakers suggested the following best practices:
Get a better understanding of the costs involved in providing IT services
Centralize applications that are not affected by latency, but regionalize latency-sensitive applications to remote locations to minimize distance delays.
Work towards a "lights out" data center facility, with operations personnel physically separated from data center facilities.
For the unfortunate few who are trying to stretch more life out of their existing aging data centers, the speakers offered this advice:
Build only what you need
Decommission orphaned servers and storage, which can be 1 to 12 percent of your operations
Target for replacement any hardware over five years old, not just to reduce maintenance costs, but also to get more energy-efficient equipment.
Consider moving test workloads, and as much as half of your web servers, off UPS and onto the native electricity grid. In the event of an outage, this reduces UPS consumption.
Implement power-capping and load-shedding, especially during peak times.
Enacting these changes can significantly improve the bottom line. Archaic data centers, those typically over 10 years old with power usage effectiveness (PUE) over 3.0, can cost over twice as much as a more efficient data center. To learn more about PUE as a metric, see the Green Grid whitepaper [Data Center power efficiency metrics: PUE and DCiE].
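The PUE arithmetic behind that "twice as much" claim is straightforward. The kilowatt figures below are hypothetical, chosen only to illustrate how a PUE above 3.0 compares to a more efficient facility:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT power.
    1.0 would be ideal; the Green Grid flags anything over 3.0 as poor."""
    return total_facility_kw / it_equipment_kw

# Hypothetical loads: the same 500 kW of IT gear in two facilities.
archaic = pue(1600, 500)     # 3.2, over the 'archaic' 3.0 mark
efficient = pue(750, 500)    # 1.5

# Same useful work, but the archaic facility draws over twice the power:
print(archaic / efficient)   # ~2.13
```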
While virtualization can help with these issues, it also introduces new problems, such as VM sprawl anddealing with antiquated licensing schemes of software companies.
The Four Traits of the World's Best-Performing Business Leaders
Best-selling author Jason Jennings presented his findings in researching his various books:
It's Not the Big That Eat the Small... It's the Fast That Eat the Slow : How to Use Speed as a Competitive Tool in Business
Less Is More : How Great Companies Use Productivity As a Competitive Tool in Business
Think Big, Act Small
Hit the Ground Running : A Manual for New Leaders
Jason identified the best companies and interviewed their leaders, including such companies as Koch Industries, Nucor Steel, and IKEA furniture. The leaders he interviewed felt a calling to serve as stewards of their companies, not just write mission and vision statements, and to be willing to let go of projects or people that aren't working out.
Jason indicated that a 2007 Gallup poll on the American workplace found that 70 percent of employees do not feel engaged in their jobs. The focus of these leaders is to hire people with the right attitudes, rather than the right aptitudes, and to give those people the knowledge and the right to make business decisions. If done well, employees will think and act as owners, and hold themselves accountable for their economic results. Jason found cases where 25-year-olds were given responsibility to make billion-dollar decisions!
I found his talk inspiring! The audience felt motivated to do their jobs better, and be more engaged in the success of their companies.
These keynote sessions set the mood for the rest of the week. I can tell already that the speakers will toss out a large salad of buzzwords and IT industry acronyms. I saw several people in the audience confused by some of the terminology; hopefully they will come over to IBM booth 20 at the Solutions Expo for straight talk and explanation.
The author is wondering whether EMC will try to avoid the fate of Hitachi's mainframe business, focusing on "moving into the IBM field" of offering software and services for more complete solutions.
Interestingly, one comment opines that EMC's acquisition of Documentum was "followed" by IBM's acquisition of FileNet, not realizing that IBM already had the leading document management software (IBM Content Manager).
Another comment cites IBM's recent push of Xen as another example of "following" EMC's acquisition of VMware, again not realizing that IBM has had Logical Partition (LPAR) capability in its System z, System p and System i server lines for many years.
Tuesday is always good for announcements. Today, Gartner, Inc. announced that IBM has overtaken HP in its climb to the top. I'll quote directly from today's press release:
STAMFORD, Conn., March 6, 2007 — Worldwide external controller-based (ECB) disk storage revenue totaled $15.2 billion in 2006, a 4.1 percent increase over 2005 revenue of $14.6 billion, according to Gartner, Inc. IBM overtook Hewlett-Packard for the No. 2 position in 2006 (see Table 1). IBM's worldwide ECB market share increased to 15.8 percent, while HP's market share dropped to 13.1 percent.
IBM beat HP both in 4Q06 and for the 2006 full year. You can read more about it in the Gartner Dataquest report "Market Share: Disk Array Storage, All Regions, All Countries, 1Q05-4Q06" on their website. (Note: non-IBMers might need an account with Gartner to access this; I'm not sure.)
The focus was on external controller-based disk: not external controller-less SCSI/SAS disk, not disk arrays posing as virtual tape libraries, nor any disk sold inside HP, Sun, IBM or Dell servers. This allows comparison with disk-only vendors such as EMC and HDS. The revenues reflect hardware only, including hardware-related parts of financial leases and managed services. Revenues from optionally priced software features such as multi-pathing drivers, management software, or advanced copy services were excluded. I discussed these types of analyst reports in a blog post last September: Space Race Heats Up.
These market share numbers are based on revenues, not units or terabytes. When a box gets sold, the revenue is counted toward the vendor that sold it, not the manufacturer that built it. In this last report:
When Dell sells an EMC box, it gets counted as Dell. When Fujitsu Siemens sells an EMC box, it gets counted as "Other".
When HP sells an HDS box, it gets counted as HP. When Sun sells the HDS box, it gets counted as Sun.
When IBM sells its System Storage N series (from the OEM agreement with NetApp), it gets counted as IBM. Both IBM and NetApp experienced growth in the NAS/unified storage arena.
It's still cold here in the Washington DC area, but at least good news like this helps warm me up!
"IBM says revenue for its mainframe business rose 32% in the second quarter compared with a year earlier, easily outpacing overall sales growth of 13%. A big driver was February's launch of IBM's next-generation mainframe line, the z10, its first big upgrade since 2004. IBM spent about $1.5 billion on the new line.
With their power and size, mainframes have some unique advantages over (distributed) servers. Many companies cobble together many servers, powered by industry standard chips made by Intel (INTC) and Advanced Micro Devices, (AMD) to do jobs that were once the province of mainframes. IBM, too, sells such servers.
IBD: Can you tell me more about this business?
Gelardi: Traditionally, the mainframe was the back-office powerhouse for batch and transactional processing — sort of the thing behind banks, the thing behind retailers, the thing behind insurance companies.
It's the thing that, if you screw this up, you just gave your whole business away. The new thing, which is really sort of the second driver of growth, is the introduction of Linux (an open-source operating system popular with some servers) on the mainframe. Z-Linux (IBM's Linux mainframe software) is where we have been able to drive substantially new workloads to the mainframe.
IBD: Why is the mainframe business important to IBM?
Gelardi: It's a very differentiated product environment where we feel very confident that we can say to a client, look, we built this thing from the casters all the way up; the software stack, all the way up. We've built into this a level of performance and scalability and efficiency. We're very, very confident that we can resolve any issue (for customers).
Let me give you an example. If I take (1,500 Intel) servers . . . and put them on a single mainframe, I'll have no performance problems whatsoever. But I'm taking all of that workload that was on 1,500 separate servers and consolidating them on one mainframe. While it may be a million-dollar machine and up, it's actually cheaper than those 1,500 servers.
IBD: What are some big drivers for your clients today?
Gelardi: Energy. If you look at a workload on a previous generation mainframe, z9, for the equivalent performance on a z10, I'm going to use 15% less energy for the same amount of performance.
Look at the (physical data-center space) in the industry. The question used to be, "How much space do you want?" The question now is, "How much energy are you going to consume?" It's more efficient to manage the work loads inside the larger (mainframe).
IBD: So, you're saying that using a mainframe addresses these modern problems better than servers?
IBD: Is it hard to convince people of that?
Gelardi: It's a legitimate question for clients who never had a mainframe. There are a few. (In those cases) it will probably be more complicated (to convince them).
However, a year or so ago we put out a press release about an entertainment (company). Their story was, "We're going to build a new gaming environment." Long story short, they said, "Why not use the mainframe?" There are new clients coming to the mainframe.
IBD: Do mainframes help other IBM businesses?
Gelardi: Clearly. I have very broad coverage. We are the server vendor. We have the storage capacity; we have the operating environment; we have the software stack, (including) Websphere, Tivoli, DB2. We have the services capabilities. We have the consulting capability. You can sort of go on. It becomes an ecosystem that is really valuable to the company at large.
IBD: What mainframe customers were active in the second quarter?
Gelardi: Interesting enough (given the state of the industry), the financial services sector was very strong. That was particularly true in the Americas and in Europe. We have a pretty broad spread (of users), but there is no question that financial services is a core market."
IBM offers a lower total cost of ownership (TCO) than HP or Sun can offer. For more about the IBM System z10 EC, see my posts last month:
According to Gartner data (from 2005!), host-based storage accounts for 34 percent of the overall market for external storage, with the remaining 66 percent going to "fabric-attached" (network) storage; this share was expected to grow from 66 percent to 77 percent by 2007. What is the current reality? SAN vs. NAS? FC vs. iSCSI?
IBM subscribes to a lot of data from different analysts; they all have their own methods for collecting this data, from taking surveys of customers to reviewing the financial results of each vendor. While they might not agree entirely, there are some common threads that lead one to believe they represent "reality". Here are some numbers from an IDC December 2007 report:
Worldwide Disk Storage
While the 32/68 split is similar to the 34/66 split you mentioned before, you can see that external storage is growing faster, so internal host-based storage will drop to 25 percent by 2011, with external storage growing to 75 percent, very close to the 77 percent predicted. Looking at just the external disk storage, there are basically three kinds: DAS (direct cable attachment), NAS (file-level protocols such as NFS, CIFS, HTTP and FTP), and SAN (block-level protocols like FC, iSCSI, ESCON and FICON):
Worldwide External Disk Storage
At these rates, fabric-attached storage (SAN and NAS) will continue to dominate the storage landscape. Now let's look more closely at the block-oriented protocols.
Worldwide External Disk Storage
Fibre Channel (FC)
At these rates, iSCSI will overtake FC by 2011. IBM System Storage N series, DS3300 and XIV Nextra all support iSCSI attachment.
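A crossover prediction like that is just compound growth applied to both protocols. Here is a minimal sketch of the calculation; the starting revenues and growth rates below are hypothetical placeholders, since the IDC figures themselves are not reproduced in this post:

```python
def years_to_overtake(trailing, leading, trailing_growth, leading_growth):
    """Years until the trailing technology's revenue passes the
    leader's, assuming constant year-over-year growth rates."""
    years = 0
    while trailing <= leading:
        trailing *= 1 + trailing_growth
        leading *= 1 + leading_growth
        years += 1
    return years

# Hypothetical: iSCSI at 1/10th of FC revenue, but growing 90%/year
# versus 5%/year for FC. The gap closes in a handful of years.
print(years_to_overtake(trailing=1.0, leading=10.0,
                        trailing_growth=0.90, leading_growth=0.05))  # 4
```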
Jon Toigo over at DrunkenData offers some additional data from ex-STKer Fred Moore: [Fred Moore Outlook on Storage 2008]. I met Fred at a conference. He had left STK back in 1998, and started his own company called Horison. Neither Jon nor Fred cites the sources of the statistics, but the following comment leads me to believe he hasn't been paying close attention to the tape market:
With the demise of STK, who will be the leader in the tape industry?
Depending on how old you are, you might remember exactly where you were when a significant event occurred, for example the [Space Shuttle Challenger] explosion. For many IBMers, it was the day our friends at Sun Microsystems announced they were [putting our lead tape competitor out of its misery]. I was in New York that day, but there was still some confetti on the floor in the halls of the IBM Tucson lab when I got home a few days later. IBM has been the number one market share leader in tape for the past four years.
This week, I presented at the "IBM TechU Comes to You" event in beautiful Dubai, United Arab Emirates. This was a three-day event, so here is my recap of Day 2.
Introducing the Spectrum Storage Suite
Mike Griese (IBM WW Spectrum Storage Software Evangelist) presented an overview of the IBM Spectrum Storage family of products, and the new IBM Spectrum Storage Suite license which drastically simplifies the TB-licensing of all six products into a single number.
Spectrum Scale - introduction, use cases, competitive advantages
I presented an overview of IBM Spectrum Scale v4.2.1 release. I covered our support for POSIX, NFS, SMB, Hadoop, OpenStack Swift and Amazon S3 interfaces.
IBM Spectrum Scale is an ideal solution to replace NetApp filers, EMC Isilon or DataDomain storage devices. Use cases include clustered NAS, Object store, and Hadoop repository for analytics.
IBM Spectrum Archive -- Integration with Spectrum Scale, and its Applications in CCTV and Media
This was another special request from the UAE team, and I had a lot of fun putting it together. I started talking about IBM's recent acquisitions in video technologies, including LiveStream and ClearLeap.
I then explained how Spectrum Scale works, and how Spectrum Archive works either separately, or in combination with Spectrum Scale.
A live demo was planned to show this all off, but sadly I had network, firewall and/or VPN issues that prevented me from attaching to my Tucson-based systems. I then wrapped up with client references that have successfully used IBM Spectrum Archive in this area.
IBM Virtual Storage Center - Prepare your existing storage for the future
Mike Griese presented IBM Virtual Storage Center, which combines the "Control Plane" product of IBM Spectrum Control Advanced Edition, with the "Data Plane" products under IBM Spectrum Virtualize.
Introduction to Object Storage and its Applications - Cleversafe
I presented the basics of object storage, a radical new way of storing information, and how it differs from block- and file-based storage alternatives.
I then covered the features of IBM Cleversafe solutions, available as software, pre-built appliances, and in the Cloud. I wrapped up with practical use cases for Content Repository, Enterprise Collaboration, Active Archive, Storage as a Service, and Backup storage pool.
Integration between IBM Spectrum Scale and Cleversafe
This was a fun session.
I presented an overview of IBM Spectrum Scale which provides volume, file and object-level storage interfaces on data that can span various flash, hybrid and spinning disk storage devices.
I gave a quick recap of Cleversafe for those who missed my earlier session.
I then showed how files can be migrated from IBM Spectrum Scale to either Cleversafe on-premises, Cleversafe in the Cloud on IBM SoftLayer, or LTFS-enabled tape using Spectrum Archive, or to any combination of disk, tape, object storage, Cleversafe and Cloud through IBM Spectrum Protect HSM and Space Management features.
Tuesday evening I went out to dinner with the z Systems team. Earlier in my career, I was the chief architect of DFSMS, the storage management element of z/OS operating system, so I continue to have close ties with the folks from Poughkeepsie.
Was Dubai too far away for you to attend? Want to hear the latest technical information about IBM Storage, but not willing to wait until the big [IBM Edge Conference] this September? We will have several more "IBM TechU Comes to You" events in May and June.
I'm glad this is the final day of the IBM Systems Technical Conference (STC08) here in Los Angeles. While I enjoyed the conference, one quickly reaches the saturation point with all the information presented.
XIV Architecture Overview
Before this conference, many of the attendees didn't understand IBM's strategy, didn't understand Web 2.0 and digital archive workloads, and didn't understand why IBM acquired XIV to offer "yet another disk system that serves LUNs to distributed server platforms." Brian Sherman changed all that!
Brian Sherman, IBM Advanced Technical Support (ATS), is part of the exclusive dedicated XIV technical team that installs these boxes at client locations, so he is very knowledgeable about the technical aspects of the architecture. He presented the current XIV-branded model that clients can purchase now in select countries, and what will change with the IBM-branded model when it becomes available worldwide.
Those who missed my earlier series on XIV can find them here:
Beyond this, Brian gave additional information on how thin provisioning, storage pools, disk mirroring, consistency groups, management consoles, and microcode updates are implemented.
N series and VMware Deep Dive
Norm Bogard, IBM Advanced Technical Support, presented why the IBM N series makes such great disk storage for VMware deployments. This was clearly labeled as a "deep dive", so anyone who got lost in all of the acronyms could not blame Norm for misrepresentation.
IBM has been doing server virtualization for over 40 years, so it makes sense that it happens to be the number one reseller of VMware offerings. VMware ESX Server is a hypervisor that runs on an x86 host and provides an emulation layer for "guest" operating systems. Each guest can have one or more virtual disks, which are represented by VMware as VMDK files. VMware ESX Server accepts read/write requests from the guests and forwards them on to physical storage. Many of VMware's most exciting features require storage to be external to the host machine. [VMotion] allows guests to move from one host to another, [Distributed Resource Scheduler (DRS)] allows a set of hosts to load-balance the guests across the hosts, and [High Availability (HA)] allows the guests on a failed host to be resurrected on a surviving host. All of these require external disk storage.
ESX Server allows up to 256 LUNs, attached via FCP and/or iSCSI, and up to 32 NFS mount points. Across LUNs, ESX Server uses the VMFS file system, a clustered file system like IBM GPFS that allows multiple hosts to access the same LUNs. ESX Server has its own built-in native multipathing driver, and even provides multipathing across FCP and iSCSI. In other words, you can have a LUN on an IBM System Storage N series that is attached over both FCP and iSCSI, so if the SAN switch or HBA fails, ESX Server can fail over to the iSCSI connection.
ESX Server can instead use the NFS protocol to access the VMDK files. While the default is only 8 NFS mount points, you can increase this to 32 mount points. NAS can take advantage of Link Aggregation Control Protocol [LACP] groups, what some call "trunking" or "EtherChannel". This is the ability to consolidate multiple streams onto fewer inter-switch Ethernet links, similar to what happens on SAN switches. For the IBM N series, IBM recommends a "fixed" path policy, rather than "most recently used".
IBM recommends disabling Snapshot schedules and setting the Snap reserve to 0 percent. Why? A Snapshot of an ESX Server datastore contains the VMDK files of many guests, all of which would have had to quiesce or stop for the Snapshot of the datastore to be "crash consistent" and even make sense. So, if you want to take Snapshots, it should be something you coordinate with the ESX Server and its guest OS images, and not something scheduled by the N series itself.
If you are running the NFS protocol to the N series, you can turn off "access time" updates. In normal file systems, when you read a file, it updates the "access time" in the file directory. This can be useful if you are looking for files that haven't been read in a while, such as software that migrates infrequently accessed files to tape. Assuming you are not doing that on your N series, you might as well turn off this feature and reduce the unnecessary write activity to the IBM N series box.
ESX Server can also support "thin provisioning" on the IBM N series. There is a checkbox for "space reserved". Checked means "thick provisioning" and unchecked means "thin provisioning". If you decide to use "thin provisioning" with VMware, you should consider setting AutoSize to automatically increase your datastore when needed, and to auto-delete your oldest Snapshots first.
The key advantage of using NFS rather than FCP or iSCSI is that it eliminates the use of the VMFS file system. The IBM N series has the WAFL file system instead, so you don't have to worry about the VMFS partition alignment issue. Most VMDKs are misaligned, so their performance is sub-optimal. If you can align each VMDK to a 32KB or 64KB boundary (depending on guest OS), then you get better performance. WAFL does this for you automatically, but VMFS does not. For Windows guests, use "Windows PE" to configure correctly-aligned disks. For UNIX or Linux guests, use the "fdisk" utility.
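To see why alignment matters, here is a small Python sketch of the boundary check; this is my own illustration, not any IBM or VMware tool. A classic DOS-style partition starts at sector 63, which is why it misses both boundaries.

```python
# Check whether a partition's starting byte offset lands on a 32 KB or
# 64 KB boundary. A default DOS partition table starts at sector 63,
# which at 512 bytes/sector is a 32,256-byte offset -- aligned to neither.

SECTOR_SIZE = 512  # bytes per sector, typical for disks of this era

def is_aligned(start_sector, boundary_kb):
    """True if the partition's byte offset is a multiple of the boundary."""
    return (start_sector * SECTOR_SIZE) % (boundary_kb * 1024) == 0

print(is_aligned(63, 32))    # default DOS partition: False
print(is_aligned(63, 64))    # False
print(is_aligned(128, 32))   # partition started at sector 128 (64 KB): True
print(is_aligned(128, 64))   # True
```

Starting a partition at sector 128 (a 64 KB offset) satisfies both boundaries, which is what the Windows PE and fdisk procedures above accomplish.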
What Industry Analysts are saying about IBM
Vic Peltz gave a presentation highlighting the accolades from securities analysts, IT analysts, and news agencies about IBM and IBM storage products. For example, analysts like that IBM offers many of the exciting new technologies their clients are demanding, like "thin provisioning", RAID-6 double-drive protection, and SATA and Solid State Disk (SSD) drive technology. Analysts also like that IBM is open to non-IBM heterogeneous environments. Whereas EMC Celerra gateways support only EMC disk, IBM N series gateways and IBM SAN Volume Controller support a mix of IBM and non-IBM equipment.
Analysts also like IBM's "datacenter-wide" approach to issues like security and "Green IT". Rather than focusing on these issues with individual point solutions, IBM attacks these challenges with a complete "end-to-end" solution approach. A typical 25,000 square foot data center consumes $2.6 million USD in power and cooling today, and IBM has proven technologies to cut this cost in half. IBM's DS8000 on average consumes 26.5 to 27.8 percent less electricity than a comparable EMC DMX-4 disk system. IBM's tape systems consume less energy than comparable Sun or HP models.
IBM iDataPlex product technical presentation
Vallard Benincosa, IBM Technical Sales Specialist, presented the recently-announced [IBM System x iDataPlex]. This is designed for our clients that have thousands of x86 servers, that buy servers "racks at a time", to support Web 2.0 and digital archive workloads. The iDataPlex is designed for efficient power and cooling, rapid scalability, and usable server density.
iDataPlex is such a radical design departure that it might be difficult to describe in words. Most racks take up two floor tiles; each tile is a 2 foot by 2 foot square. In that space, a traditional rack would have 19-inch-wide servers slide in horizontally, with flashing lights and hot-swappable disks in the front, and all the power supplies, fans and networking connections in the back. Even with IBM BladeCenter, you have chassis in these racks, servers slide in vertically in the front, and all of the power supply, fan and networking connections are in the back. To service these racks, you have to be able to open the door on both the front and back. And the cooling air has to travel at least 26.5 inches from the front of the equipment to the back.
iDataPlex turns the rack sideways. Instead of two feet wide and four feet deep, it is four feet wide and two feet deep. This gives you two 19-inch columns to slide equipment into, and the air only has to travel 15 inches from front to back. Less distance makes cooling more efficient.
Next, iDataPlex makes the power cord the only thing in the back, controlled by an intelligent power distribution unit (iPDU) so you can turn the power off without having to physically pull the plug. Everything else is serviced from the front. This means that the back door can now be an optional "Rear Door Heat Exchanger" [RDHX] filled with running water to make cooling the rack extremely efficient. Water from one coolant distribution unit (CDU) can supply about three to four RDHX doors.
Let's say you wanted to compare traditional racks with iDataPlex for 84 servers. You can put 42 "1U" servers in each of two racks; each rack requires 10 kVA (kilovolt-amps), so you give each rack two 8.6 kVA feeds. That is four feeds, and at $1500-2000 USD per feed per month, will cost you $6000-8000. With iDataPlex, you can fit all 84 servers in one 20 kVA rack with only three 8.6 kVA feeds, saving you $1500-2000 USD per month.
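The arithmetic above can be written out as a quick sketch, using the feed counts and the $1500-2000 USD per-feed monthly cost quoted above; this is my own illustration of the comparison, not an IBM sizing tool.

```python
# Worked comparison of monthly power-feed costs for 84 servers:
# traditional 1U racks need four 8.6 kVA feeds, iDataPlex needs three.

FEED_COST_LOW, FEED_COST_HIGH = 1500, 2000  # USD per feed per month

traditional_feeds = 4   # two racks x two 8.6 kVA feeds each
idataplex_feeds = 3     # one 20 kVA rack on three 8.6 kVA feeds

for cost in (FEED_COST_LOW, FEED_COST_HIGH):
    trad = traditional_feeds * cost
    idp = idataplex_feeds * cost
    print(f"At ${cost}/feed: traditional ${trad}, iDataPlex ${idp}, "
          f"saving ${trad - idp}/month")
```

At the low end this gives $6000 vs. $4500 per month, and at the high end $8000 vs. $6000, matching the $1500-2000 monthly savings above.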
Fans are also improved. Fan efficiency is based on diameter, so the small fans in 1U servers aren't as effective as iDataPlex's 2U fans, saving about 12-49W per server. Whereas typical 1U server racks spend 10-20 percent of their energy on the fans, the iDataPlex spends only about 1 percent, saving 8 to 36 kWh per year per rack.
Each 2U chassis snaps into a single power supply and a bank of 2U fans. A "Y" power cord allows you to have one cord for two power supplies. A chassis can hold either two small server "flexnodes" or one big "flexnode". An iDataPlex rack can hold up to 84 small servers or 42 big servers. Since each "Y" cord can power up to four "flexnode" servers, you greatly reduce the number of PDU sockets used, leaving some sockets available for traditional 1U switches.
The small "flexnode" server can have one 3.5 inch HDD or two 2.5 inch HDDs, either SAS or SATA, and the big "flexnode" can have twice that. If you need more storage, there is a 2U chassis that holds five 3.5 inch HDDs or eight 2.5 inch HDDs. These are all "simple-swappable" (servers must be powered down to pull out the drives). For hot-swappable drives, there is a 3U chassis with twelve 3.5 inch SAS or SATA drives.
The small "flexnode" server has one [PCI Express] slot; the big servers have two. These could be used for [Myrinet] clustering. With only 25W of power available, the PCI Express slots cannot support graphics cards.
The iDataPlex is managed using the "Extreme Cluster Administration Toolkit" [XCAT]. This is an open source project under Eclipse that IBM contributes to.
Finally was the concept of "pitch". This is the distance from the center of one "cold aisle" to the next. In typical data centers, a pitch is 9 to 11 tiles. With the iDataPlex it is only three tiles when using the RDHX doors, or six tiles without. Most data centers run out of power and cooling before they run out of floor space, so having denser equipment doesn't help if it doesn't also use less electricity. Since the iDataPlex uses 40 percent less power and cooling, you can pack more racks per square foot of an existing data center floor with the existing power and cooling available. That is what IBM calls "usable density"!
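To illustrate how pitch translates into floor density, here is a rough sketch; the 90-foot floor depth is a made-up example of mine, not a figure from the session.

```python
# "Pitch" is the distance from the center of one cold aisle to the next,
# measured here in 2-foot floor tiles. A smaller pitch means more rows
# of racks fit in the same floor depth.

TILE_FEET = 2

def rows_per_floor(floor_depth_feet, pitch_tiles):
    """How many rack rows fit in a given floor depth at a given pitch."""
    return floor_depth_feet // (pitch_tiles * TILE_FEET)

floor_depth = 90  # feet; illustrative room size

print(rows_per_floor(floor_depth, 9))  # typical data center pitch -> 5 rows
print(rows_per_floor(floor_depth, 3))  # iDataPlex with RDHX doors -> 15 rows
```

Tripling the row count only helps, of course, if the equipment also draws less power, which is the "usable density" point above.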
What Did You Say? Effective Questioning and Listening Techniques
Maria L. Anderson, IBM Human Resources Learning, gave this "professional development" talk. I deal with different clients every week, so I fully understand that there is a mix of art and science in crafting the right questions and listening to the responses. The focus was on how to ask better questions and improve understanding and communication during consultative engagements. This involves the appropriate mix of closed and open-ended questions, exchanging or prefacing as needed. This was a good overview of the ERIC technique (Explore, Refine, Influence, and Confirm).
Well, that wraps up my week here in Los Angeles. Special thanks to my two colleagues, Jack Arnold and Glenn Hechler, both from the Tucson Executive Briefing Center, who helped me prepare and review my presentations!
Well, I have left Japan, and while everyone else is enjoying the Super Bowl, I am now in Australia at another conference. Today I had the pleasure of hearing filmmakers talk about their successes, and how IBM helps the movie industry.
At one extreme was Khoa Do, independent filmmaker. After acting in movies alongside Michael Caine and Billy Zane, he decided to become his own director. He started a project to help seven disadvantaged youths from a poor, drug-ridden section of Sydney by having them act in his first full-length film. Armed with only an IBM laptop and a small budget, he made the film "The Finished People", which received critical acclaim.
The film was a success, and many of the disadvantaged youths have gone on to act in other movies. In 2005, Khoa Do was named "Young Australian of the Year".
Thanks to IBM technology, filmmaking is now accessible to a wider range of aspiring directors. It is no longer necessary to be part of a large film studio with a multi-million dollar budget to tell your story.
At the other extreme was Xavier Desdoigts, director of technical operations at Animal Logic, the Computer Graphics (CG) arthouse that produced special effects for movies like "The Matrix", "House of Flying Daggers" and "World Trade Center". They started out producing digital effects for TV commercials, like this one for Carlton Draught Beer.
With the support of a large film studio and a multi-million dollar budget, Animal Logic now boasts the 86th most powerful "Supercomputer", based on IBM BladeCenter technology, with over 4000 servers connected into a cluster, for making the movie "Happy Feet". The movie took four years to make, involving over 500 people of 27 different nationalities. It was the first CG movie made in Australia, and has been well received by audiences worldwide.
Mr. Desdoigts gave out some interesting facts and figures about the movie:
While visually stunning on the big screen, each frame is only 1.4 Megapixel, about the same resolution as most camera phones.
In one scene, there are 427,086 penguins all appearing on frame.
Mumble, the lovable lead character, is made up of over 6 million feathers.
As many as 17 dancers were "motion-captured" to choreograph the tap-dancing and character interaction segments.
Only one system admin was needed to manage this entire server farm. (IBM Systems Director technology makes this possible)
The movie consumed 103 TB of disk space, backed up to 595 LTO tape cartridges.
An estimated 17 million CPU-hours were needed for all the processing and rendering.
Rather than talking about technology for technology's sake, these filmmakers showed how technology could be put to use, in a practical sense, to provide the world something of value.
This week I am in Minneapolis, MN, and I was hoping that the complicated process of moving this blog over to "MyDeveloperWorks" would happen while I was gone, but alas, that does not appear to be the case.
Meanwhile, my partner in crime, Barry Whyte, has successfully moved his blog [Storage Virtualization] over to the new server.
Perhaps next week. If all goes well, the URL links should redirect correctly, but those of you using feed readers might need to re-subscribe to get the right RSS feeds.
A Forrester analyst drew an analogy between a river and the upcoming onslaught of Millennials. Some 100 years ago, smart companies positioned themselves near rivers; the water provided power as well as a means of transporting products. Today, however, being positioned near a river doesn't ensure a company's success, and there are plenty of examples of long-established companies now filing for bankruptcy.
As we come out of this recession, the war for people will be intense. In the United States, as many as 76 million [Baby Boomers], born between 1946 and 1964, are retiring or approaching retirement, being replaced by 46 million [Gen X], born between 1965 and 1976. By 2010, there will be as many as 31 million [Millennials], born between 1977 and 1998, in the workforce.
To drive the point home, the Forrester analyst cited [Whirlpool] as an example: a company more than 100 years old, with 73,000 employees across 170 countries. Whirlpool manufactures kitchen, laundry and other home appliances. From 1997 to 2002, however, Whirlpool's per-ticket sales were dropping at a rate of 3.4 percent per year. To reverse this trend, they established the Whirlpool Young Professional program, assigned i-mentors, and invested in Web 2.0 collaboration tools. They realized that they needed to harness the Gen X and Millennial energy. The result? From 2002 to 2006, they had a complete turnaround, with per-ticket sales growing 5.9 percent per year.
Since I covered IBM's keynote session yesterday, I thought it would only be fair to cover HP's today. IBM and HP are the top two IT vendors in the world, and not surprisingly also the top two IT storage vendors; both are Platinum sponsors for this event.
Phil McKinney, VP and CTO of Hewlett-Packard (HP) Personal Systems Group
Phil presented "Enabling Innovation: A Strength In Any Economy", which covered HP's approach to innovation, not just within HP itself, but also in helping their customers. He presented an interesting progression for IT. In the first stage, IT is very technology-centric, focusing on standardizing platforms and automating tasks. In the second, IT is more process-oriented, standardizing and automating business processes measured for reliable IT outcomes. In the third, IT is business-aligned, standardizing and automating services, measured on business results. He argued that the challenge for companies is to transform their IT through this progression to improve business impact.
To help customers, HP focuses on four aspects of an Innovation Management Framework:
Strategy, Measurement and Metrics
Systems, Collaboration tools and knowledge management
Culture, Education and Training
Ecosystem, business partnerships and customer innovation
He wrapped up his talk reminding us that ideas without execution are just hobbies.
Tom Peck, Senior VP and CIO, Levi Strauss & Co
Levi Strauss & Co. manufactures denim pants and other clothing apparel, and has been doing so for more than 150 years. Tom made a point of actually wearing denim jeans and a sports coat on stage for his talk. His presentation "Dealing with Disruption" was not about disruptive technologies, but rather the disruption the economic downturn has had on the retail industry. To survive this recession, IT leaders need to be bold about their hiring, reorganizing and rethinking of IT, because disruption is everywhere.
IT is not a cost center at Levi Strauss, and represents only 3.5 percent of their total expenses. Instead, they have educated their stakeholders that IT is an investment for competitive advantage. They have focused on simplifying, which is important because their line of pants has grown incredibly complex. When you factor in the different fabrics, colors, styles, sizes, fits and finishes, you end up with a large number of different pants. This complexity came from an effort to provide exactly what every customer thought they needed. He cited a great quote:
“If I had asked people what they wanted, they would have said faster horses.” --- Henry Ford
This same complexity occurs in IT. To address the changes needed, Tom combined "Lean IT" principles with "Six Sigma" methodologies. Lean IT helped identify problems with the overall flow of processes and provided the tools to remove steps that did not add value. Six Sigma was applied to the remaining steps that did add value, to improve capability and effectiveness.
Companies that have been around for a while, like IBM, Whirlpool and Levi Strauss & Co., have learned to adapt to the changing business and IT landscape, and to adopt new ideas and new ways of doing things.
Before the acquisition, Diligent offered only software. The task of putting this software on an appropriate x86 server with sufficient memory and processor capability was left as an exercise for the storage admin. With the TS7650G, IBM installs the ProtecTIER software on the fastest servers in the industry, the IBM System x3850 M2 and x3950 M2. This eliminates having storage admins pretend that they have hardware engineering degrees.
Before the acquisition, the software worked only on a single system. IBM was able to offer multiple configurations of the TS7650G, including a single-controller model as well as a clustered dual-controller model. The clustered dual-controller model can ingest data at an impressive 900 MB/sec, which is up to nine times faster than some of the competitive deduplication offerings.
Before the acquisition, ProtecTIER emulated DLT tape technology. This limited its viability, as the market share for DLT has dropped dramatically, and continues to dwindle. Most major backup software supports DLT as an option, but going forward this may not be true much longer for new tape applications. IBM was able to extend support by adding LTO emulation on the TS7650G gateway, future-proofing it well into the 21st century.
At last week's launch, covering so many products with so few slides, this announcement was shrunk down to a single line, "Store 25 TB of backups onto 1 TB of disk, in 8 hours", and perhaps a few people missed that this was actually covering two key features.
With deduplication, the TS7650G might achieve up to a 25-fold reduction on disk. If you back up a 1 TB database that changes only slightly from one day to the next, once a day for 25 days, it might take only 1 TB or so of disk to hold all the unique versions, as most of the blocks would be identical, rather than 25 TB on traditional disk or tape storage systems. The TS7650G can manage up to 1 PB of disk, which in theory could represent up to 25 PB of backup data.
With an ingest rate of 900 MB/sec, the TS7650G could ingest 25 TB of backups during a typical 8 hour backup window.
The 25 TB of the first may not necessarily be the 25 TB of the second, but the wording was convenient for marketing purposes, and a comma was used to avoid misunderstandings. Of course, depending on the type of application, the frequency of daily change, and the backup software employed, your mileage may vary.
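You can sanity-check both claims with a bit of arithmetic; the 25:1 deduplication ratio is the "up to" figure, so treat this sketch of mine as illustrative rather than a sizing guarantee.

```python
# Sanity-check "Store 25 TB of backups onto 1 TB of disk, in 8 hours"
# using the figures above: 900 MB/sec ingest and an assumed 25:1 dedup ratio.

INGEST_MB_PER_SEC = 900
BACKUP_WINDOW_HOURS = 8
DEDUP_RATIO = 25  # "up to" figure; real ratios vary by workload

ingested_tb = INGEST_MB_PER_SEC * 3600 * BACKUP_WINDOW_HOURS / 1_000_000
disk_needed_tb = ingested_tb / DEDUP_RATIO

print(f"Ingested in window: {ingested_tb:.2f} TB")   # about 25.9 TB
print(f"Disk after dedup:   {disk_needed_tb:.2f} TB")  # about 1 TB
```

So 900 MB/sec for 8 hours works out to roughly 25.9 TB ingested, and at 25:1 that lands on about 1 TB of disk, consistent with the marketing line.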
Well it's Tuesday again, and you know what that means? IBM Announcements!
You might be thinking, didn't IBM just have a [huge storage announcement October 8, 2013]? You would be right! IBM's additional $1B investment in storage has been like a shot of adrenaline, getting new features and functions out sooner to our clients.
DS8870 Disk System Release 7.2
New IBM POWER7+ controllers. The previous models of the DS8870 were based on POWER7 controllers; these new models have POWER7+ processors. This change enhances performance across the board, from mainframe to distributed systems, from sequential to random. Customers with existing POWER7-based models will be able to do an MES upgrade to POWER7+ next year.
For comparison with older DS8000 models, here are some internal IBM measurements we took for database workloads on both z/OS (mainframe) and distributed systems, with a typical 70% read, 30% write and 50% cache-hit mix:
IBM Internal Measurements (thousands of IOPS)
DB Distributed systems
New 1.2TB (10K RPM) and 4TB (7200 RPM) self-encrypting enterprise drives (SED). This is a 33% capacity boost over the 900GB and 3TB drives previously available. As with all the other drives in the DS8870, these new drives include the encryption chip right on the drive itself, offering encryption with scalability.
Improved security. Release 7.2 will support the U.S. National Institute of Standards and Technology [NIST.gov] 800-131A specification, raising the security strength from 96 bits to the required 112 bits on the customer IP network. This involves updates to the security firmware, management software and digital signatures on code loads.
Metro Mirror enhancement for System z. By avoiding serial conflicts of updated blocks, this enhancement can boost performance up to 100 percent when using Metro Mirror with z/OS applications on System z mainframes.
Easy Tier™ reporting and graphs to determine optimal mix. Now you can see for yourself how sub-LUN automated tiering is helping your applications.
Easy Tier Workload Categorization
New workload visuals help clients and IBM technical specialists compare activity across tiers, within and across pools, to help determine the optimal drive mix for current workloads.
Easy Tier Data Movement Daily Report
A new Easy Tier summary report every 24 hours illustrates data migration activity (at 5-minute intervals) and can help visualize migration types and patterns for current workloads.
Easy Tier Workload Skew Curve
Shows the skew of all workloads across the system in a graph to help clients visualize and accurately tier configurations when adding capacity or a new system. Clients can import this data into Disk Magic.
All-Flash Optimization. Yesterday, in my post [IBM FlashSystem versus EMC XtremeIO], I mentioned that any hybrid systems like the IBM Storwize V7000 that can support a mix of SSD and HDD can obviously be configured as SSD-only. Apparently, that was not obvious to many readers, so I apologize. For the DS8870, you can configure an all-Flash (SSD only) configuration, and Release 7.2 added some optimization when configured with SSD only.
For example, IBM compared a configuration of 1,056 146GB 15K RPM drives in RAID-10 against 224 400GB SSDs in RAID-5. Both deliver the same 72 TB of usable capacity, but the all-Flash configuration was 70 percent faster, required 33 percent less floor space, and consumed 62 percent less energy.
(Note: Performance results based on measurements and projections using IBM benchmarks in a controlled environment.)
OpenStack™ support. The DS8870 now offers the [OpenStack Cinder] interface for block LUN allocations in OpenStack environments. IBM is a Platinum sponsor of OpenStack, and OpenStack is the strategic platform for IBM private and hybrid clouds.
XIV Storage System
Following on the heels of the [XIV enhancements announced], IBM has now added 800GB Solid State Drives (SSD) as Read cache for its 4TB drive-based models.
DCS3860 Disk System
The DCS3860 is the next generation of the DCS3700 disk system. Designed with Linux x86 servers in mind, the system offers direct SAS host attachment, 24GB of cache, and 60 drives in a compact 4U drawer. Like its predecessor, the drives sit on five pull-out trays, with twelve hot-swappable drives per tray. You can add up to five more expansion units, with 60 drives each, for a total of 360 drives in 24U of rack space.
These new models will help our clients deploy new workloads and consolidate existing workloads.
Well, I'm back from my vacation in Bali and Singapore, and am glad to see that my fellow blogger BarryB [aka Storage Anarchist] also had a chance to take a break in exotic locations.
Next Thursday, in the USA, is [Thanksgiving holiday], so this will give me a chance to catch up on my email and read everyone's blog posts and product announcements.
The following week, December 2-5, I'll be attending the 27th annual [Data Center Conference] at the MGM Grand hotel and casino in Las Vegas, Nevada. IBM is a Premier and Platinum sponsor for this event. Look for me in one of the many break-out sessions, one-on-one executive meetings, or IBM's "booth 20" at the solution center. Our team will be showing off IBM's XIV, SVC and TotalStorage Productivity Center offerings, as well as explaining IBM Information Infrastructure and the rest of the New Enterprise Data Center strategy.
"... firms don't have the detailed electricity consumption data they need to implement energy efficiency initiatives. What they have is an energy bill for a facility."
A common adage is that "you can't manage what you don't measure." IBM has beefed up the ability to measure and monitor electricity usage, not just for IBM servers and storage, but also for non-IBM IT equipment and facilities infrastructure like UPS, HVAC, lighting and security alarm systems.
Hitch Green IT to data centre refurbishment projects
"Energy savings alone don't constitute a business case to overhaul an existing data centre, undertake a refurbishment project or build a new Green Data Centre."
Either CIOs don't have the electricity measurements needed to perform an ROI or cost/benefit analysis, or the facilities folks who sense improvements are possible may not see the big picture compared to other business investments. Instead, IBM seeks to incorporate IT energy efficiency best practices into existing business plans for data center improvements.
Tackle corporate energy efficiency and emissions
"... a strategy discussion and corporate carbon diagnostic are the start point to stimulate demand. Not a cold sell on Green IT."
Project Big Green is more than just an IT project. IBM's Global Business Services consultants have transformed it into a Carbon Management Strategy encompassing employees, information, property, the supply chain, customers and products. For companies that are looking at reducing their carbon footprint overall, this approach makes a lot of sense.
Differentiate offerings by industry and country
"The inability to get more power into urban data centres has driven demand for energy efficiency by banks, telcos and outsourcers."
Different countries, and different industries, have different priorities. Europe, and in particular the UK, focuses on carbon emissions as much as on energy costs due to mandatory emissions caps. For data centers in the largest cities, an increase in electrical supply may not be available, or may be too expensive, and the time it takes to build a new data center elsewhere, typically 12-18 months, may not be soon enough to handle current business growth rates. Energy efficiency projects can help buy them some time.
Plan for slow customer adoption
"IBM is developing the market for IT energy efficiency and carbon management services. And its very much an early stage market today."
IBM is frequently at the forefront of new technologies and emerging markets, so it is no surprise that we are used to dealing with slow customer adoption. The combination of high energy costs, tightening regulations and stakeholder pressure will drive the market. Larger companies and government organizations that have the means to make these necessary changes will probably lead the adoption curve.
Prepare for investment barriers to IT energy efficiency
"With the low hanging fruit picked, IBM has found that there is an unwillingness to spend money on planting a new orchard."
IBM has helped IT clients with quick fixes offering rapid payback, such as adjusting data center temperature and humidity to reduce energy consumption. But in the current economic environment, persuading firms to install variable speed fans with a 6-year payback is much tougher. Again, this is a matter of CIOs and other upper-level management balancing financial investment decisions with some foresight and vision for the future.
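The gap between "quick fix" and "new orchard" comes down to simple-payback arithmetic: upfront cost divided by annual savings. A minimal sketch, using hypothetical cost and savings figures (not actual project numbers), shows why one proposal sails through and the other stalls:

```python
# Sketch: simple-payback comparison for energy-efficiency projects.
# All costs and savings below are invented for illustration.

def payback_years(upfront_cost: float, annual_savings: float) -> float:
    """Years until cumulative energy savings cover the upfront cost."""
    return upfront_cost / annual_savings

# A quick fix (raising the temperature set-point) vs. a capital project
# (variable-speed fans), assuming both save the same 20,000/year.
quick_fix = payback_years(upfront_cost=5_000, annual_savings=20_000)
fans = payback_years(upfront_cost=120_000, annual_savings=20_000)
print(quick_fix, fans)  # → 0.25 6.0
```

A three-month payback is easy to approve out of an operating budget; a six-year payback competes with every other capital investment the firm could make, which is the "unwillingness to plant a new orchard" in a nutshell.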
Project Big Green launched back in May 2007, and last month IBM renewed its commitment with Project Big Green 2.0, continuing to enhance product and service offerings in support of this much-needed area. And while the leaders at the G8 Summit will discuss a variety of topics, three top "green" issues on their agenda include rising energy costs, global climate change and controlling carbon emissions.
Continuing my theme of "Innovation that matters", I thought I would cover MapQuest and NeverLost.
When Shawn Callahan on Anecdote wrote [Our need for the knowledge worker is over], he was referring to the fact that we no longer need the term "knowledge worker", because practically everyone is a "knowledge worker" today. He asks, "How does knowledge help us to work better?"
It is said that as much as 30 percent of a knowledge worker's time is spent looking for the information needed to do their job. This could be information to make a decision, choose between several options, take a specific action, or schedule when those actions should take place. The logistics of planning a business trip, and actually navigating in unfamiliar surroundings, is a good example of this, and presents some unique challenges.
Before these technologies
Before these technologies, planning a trip involved finding someone who lived in or had visited the destination city, could recommend hotels and restaurants near the meeting facility, and could suggest approximately how long it would take to drive from one place to another. I would bring a compass, and would shop for a city map, either before leaving or upon arrival.
On one trip to Raleigh, I asked a local IBMer who lived there for a hotel recommendation. The hotel was nice, but involved a long 45-60 minute commute each day to the meeting facility. When I asked her why she suggested that particular hotel, she said it was because it was "close to the airport". I have since learned never to ask for the "best" of anything, as this is so subject to interpretation.
On another trip, I was travelling with a colleague in Germany. He asked how I knew which bus to take, and which bus stop to wait at. I pulled out my compass, and told him that, based on the schedule, the bus that went in a specific direction must be the correct one. The entire busload of people burst out laughing: we fit the universal stereotype of men who refuse to ask for directions. This method works only in Germany, where timeliness is next to godliness. In other countries, time schedules are more of a suggestion.
Sometimes, maps of the destination city were not always easy to find. Now with the Internet and Google Earth, maps are available before leaving on the trip. (See my post on Inner Workings of Storage which discusses how Google Earth works.)
I like using MapQuest, available online at [mapquest.com], and have not yet looked into the similar systems from Google or Yahoo. I map out each leg of my trip that involves driving, walking or trains. These are often airport-to-hotel, hotel-to-meeting, and meeting-to-airport. Having a feel for the times and distances between locations helps in choosing hotels and restaurants, deciding when to leave, and so on.
I even use MapQuest in Tucson. Recently, a route I generated to visit a friend across town took into account long-running construction on Highway I-10, where 8 miles of on-ramps are closed, and routed me around this mess accordingly. This is one key advantage over a static map, whether a paper map or one downloaded from Google Earth.
While MapQuest may not always choose the "best" route, it always finds "a route" that works, and generally works for me.
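Under the covers, route planners like MapQuest treat the road network as a weighted graph and run a shortest-path search over it; finding "a route" rather than "the best route" is exactly what these algorithms guarantee. Below is a minimal Dijkstra-style sketch over an invented toy network (the place names and travel times are made up for illustration); changing the edge weights is what turns "shortest time" into something like "least use of highways".

```python
import heapq

# Minimal sketch of how a route planner finds "a route": model the road
# network as a weighted graph and run Dijkstra's shortest-path search.
# The network below is a toy example, not real map data.

def shortest_path(graph, start, goal):
    """Return (cost, path) for the cheapest route from start to goal."""
    queue = [(0, start, [start])]  # (cost so far, node, path taken)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []  # no route exists

# Toy network: minutes of driving time between points.
roads = {
    "airport": [("highway401", 5), ("sidestreet", 12)],
    "highway401": [("hotel", 15)],
    "sidestreet": [("hotel", 25)],
}
print(shortest_path(roads, "airport", "hotel"))
# → (20, ['airport', 'highway401', 'hotel'])
```

To avoid a toll road like the 407, a planner need only inflate (or remove) that road's edge weight, and the same search then returns the side-street route instead.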
A few problems with a MapQuest print-out I have found are:
It is on paper, which can be a distraction while driving, as I have to look away from the road to read the instructions.
If it can't find a specific address, it provides generic instructions, and often, this involves airports.
It often starts with "Head Northeast...", so unless you brought your compass, or can tell which direction you are facing from the Sun, Moon or stars, you may end up leaving in the wrong direction.
Recently, I checked the "Request NeverLost" box on my Hertz Gold profile, and now I seem to get NeverLost in nearly every rental. The system is based on the [Global Positioning System] set of satellites, complemented by CD-based street information and yellow pages data for the US and Canada, stored in the trunk.
The NeverLost system knows which way the car is oriented, can tell which direction you are driving, and tells you with voice prompts to be in the left or right lane and when to make left and right turns. There is no need for a compass or any knowledge of which way is North, East, West or South.
I also like that it gives you three choices for the route: (a) Shortest time, (b) Most use of Highways, and (c) Least use of Highways. This came in handy when I was in Toronto last week. Apparently, the 407 highway had recently implemented Electronic Toll Road (ETR) billing based on license plate. While this system is fine for residents, it is not designed for rental car companies. Hertz left a note in my car warning me NOT to use the 407 highway, or I would be charged an $8.50 penalty. I chose "Least use of Highways" and proceeded to tour the city of Toronto for 90 minutes from Pearson Airport to my hotel in Markham, a trip that would otherwise have taken only 20 minutes.
Once you enter your destination street address, it can estimate the distance to get there. This is not a quick process: there is no keyboard, so you have to enter each letter using the up/down/left/right keys. You can enter the name of a street, hotel or restaurant. The "Sal Grosso" restaurant in Smyrna was at 1927 Powers Ferry Road, but NeverLost said that Powers Ferry only ran from 2750 to 6350. I had to select 2750 and hope it was close enough.
In Dallas, I tried to find the "P. F. Chang's" restaurant, and you have to make sure that the periods and spaces are entered exactly. I ended up browsing restaurants in Grapevine, Texas, and going through the list of all that start with the letter "P".
Another issue is that it sometimes takes a while to find the satellites in the sky. I get the car started, hit the enter button to start NeverLost, enter the address, and only then does it start looking for satellites. Why doesn't it look for satellites while you spend 3-5 minutes entering the street address? In my case, I take out my MapQuest print-out, head in the right direction, and hope that NeverLost catches up eventually, in time to help me reach the final location.
It is not clear how often Hertz updates the CD-ROM that contains the street and yellow pages data. About 30-40 percent of the time, it can't find the street address I am looking for, and I have to be creative about how to get myself into the general area.
Part of the problem is that I have not read the entire instruction manual, and do not have time to learn it while I am driving. I might have to put this on my reading to-do list before my next trip. Some of my colleagues have purchased their own GPS-based systems, like those from Garmin or Magellan, so that they always have one available and always know how to use it. This has the added advantage that you can use it when walking around, or in your own car when you are home.
Despite these few problems, I am impressed by the innovations involved in making this all happen. All of the mapping information is stored, transmitted, searched, and then plotted in a manner that provides the specific information you need to get the job done. For now, I will probably use a combination of these tools to plan and travel on my business trips. Wouldn't it be nice if other areas of your life had this kind of support?
This week I am off to Budapest, Hungary, for business meetings. It is the closest major city to IBM's manufacturing plant in a small town called Vac (rhymes with "knots"), where the IBM System Storage DS8000 series and SAN Volume Controller are assembled.
Hey everyone, I'm having a great time in New York.
Here are a few webinars this week you might be interested in, related to tape, and tape encryption:
1) Wednesday: If regulatory compliance and protecting your data against security breaches are top of mind for you, I invite you to attend a webinar on a new enterprise encryption solution from IBM featuring the IBM System Storage™ TS1120 tape drive. On September 20, 2006, Jon Oltsik, Senior Analyst for Information Security with the Enterprise Strategy Group, will moderate a discussion of IBM's encryption strategy and latest data security advances with a panel of our product and industry experts.