Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is a Master Inventor and Senior IT Specialist for the IBM System Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2011, Tony celebrated his 25th anniversary with IBM Storage on the same day as IBM's Centennial. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
If you store your VMware bits on external SAN or NAS-based disk storage systems, this post is for you. The subject of the post, VM Volumes, is a potential storage management game changer!
Fellow blogger Stephen Foskett mentioned VM Volumes in his [Introducing VMware vSphere Storage Features] presentation at the IBM Edge 2012 conference. His session on VMware's storage features covered the VMware vStorage APIs for Array Integration (VAAI), the vSphere APIs for Storage Awareness (VASA), vCenter plug-ins, and a new concept he called "vVol", now more formally known as VM Volumes. This post follows up on that session, describing the VM Volumes concept, architecture, and value proposition.
"VM Volumes" is a future architecture that VMware is developing in collaboration with IBM and other major storage system vendors. So far, very little information about VM Volumes has been released. At VMworld 2012 Barcelona, VMware is highlighting VM Volumes for the first time, and IBM is demonstrating VM Volumes with the IBM XIV Storage System (more about this demo below). VM Volumes is worth your attention -- when it becomes generally available, everyone using storage arrays in a VMware environment will have to reconsider their storage management practices -- no exaggeration!
But enough drama. What is this all about?
(Note: for the sake of clarity, this post refers to block storage only. However, the VM Volumes feature applies to NAS systems as well. Special thanks to Yossi Siles and the XIV development team for their help on this post!)
The VM Volumes concept is simple: VM disks are mapped directly to special volumes on a storage array system, as opposed to storing VMDK files on a vSphere datastore.
The following images illustrate the differences between the two storage management paradigms.
You may still be asking yourself: bottom line, how will I benefit from VM Volumes?
Well, take a VM snapshot for example. With VM Volumes, vSphere can simply offload the operation by invoking a hardware snapshot of the hardware volume. This has significant implications:
VM-Granularity: Only the right VMs are copied. (With datastores, backing up or cloning an individual VM's portion of a hardware snapshot of a datastore would require more complex configuration, tools, and work.)
Hardware Offload: No ESXi server resources are consumed
XIV advantage: With XIV, snapshots consume no space upfront and are completed instantly.
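The offload idea can be sketched in a few lines of Python. Everything here is illustrative -- the class and function names are invented for this post, and are not part of any real VMware or XIV API:

```python
# Hypothetical sketch: instead of copying VMDK blocks through the ESXi
# host, vSphere asks the array to snapshot the one volume that backs a
# specific VM.

class ArrayVolume:
    """A per-VM volume on the storage array."""
    def __init__(self, name):
        self.name = name
        self.snapshots = []

    def hardware_snapshot(self, label):
        # On XIV this would be an instant, space-efficient operation;
        # here we just record the snapshot name.
        snap = f"{self.name}@{label}"
        self.snapshots.append(snap)
        return snap

def snapshot_vm(vm_to_volume, vm_name, label):
    # VM-granular: only the volume backing this VM is touched, and no
    # ESXi host cycles are spent copying data.
    return vm_to_volume[vm_name].hardware_snapshot(label)

vols = {"payroll-vm": ArrayVolume("vvol-payroll")}
print(snapshot_vm(vols, "payroll-vm", "pre-upgrade"))  # vvol-payroll@pre-upgrade
```

Compare that with a datastore: the hardware snapshot would capture every unrelated VM sharing the LUN, and extracting just one VM's files afterwards is extra work.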
Here's the first takeaway: With VM Volumes, advanced storage services (which cost a lot when you buy a storage array) become available at the individual VM level. In a cloud world, this means that applications can be easily provisioned with advanced storage services, such as snapshots and mirroring.
Now, let's take a closer look at another relevant scenario where VM Volumes will make a lot of difference - provisioning an application with special mirroring requirements:
VM Volumes case: The application is ordered via the private cloud portal. The requestor checks a box requesting an asynchronous mirror. He changes the default RPO for his needs. When the request is submitted, the process wraps up automatically: Volumes are created on one of the storage arrays, configured with a mirror and RPO exactly as specified. A few minutes later, the requestor receives an automatic mail pointing to the application virtual machine.
Datastores case #1: As might be expected, no datastore mirrored with the special RPO exists. As a result, the automated workflow sets a pending status on the request, creates an urgent ticket for a VMware administrator, and aborts. When the VMware admin handles that ticket, she re-assigns it to the storage administrator, asking for a new volume, mirrored with the special RPO and mapped to the right ESXi cluster. The next day, the volume is created, and the ticket is re-assigned back to the VMware admin, pointing to the new LUN. The VMware administrator follows up and creates the datastore on top of it. Since the automated workflow was aborted, she then re-assigns the ticket to the cloud administrator, who sometime later completes the application provisioning manually.
Datastores case #2: Luckily for the requestor, a datastore that is mirrored with the special RPO does exist. However, that particular datastore is consuming space from a high performance XIV Gen3 system with SSD caching, while the application does not require that level of performance, so the workflow requires a storage administrator approval. The approval is given to save time, but the storage administrator opens a ticket for himself to create a new volume on another array, as well as a follow-up ticket for the VMware admin to create a new datastore using the new volume and migrate the application to the other datastore. In this case, provisioning was relatively rapid, but required manual follow up, involving the two administrators.
Here's the second takeaway: With VM Volumes, management is simplified, and end-to-end automation is much more applicable. The reason is that there are no datastores. Datastores physically group VMs that may otherwise be totally unrelated, and require close coordination between storage and VMware administrators.
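The VM Volumes case above can be sketched as a short automated flow. The portal request fields and the array call are hypothetical, invented purely to illustrate the end-to-end automation:

```python
# Hypothetical end-to-end provisioning flow: the mirroring requirement
# travels straight from the portal checkbox to the storage array, with
# no tickets and no pending status.

class Array:
    def create_mirrored_volume(self, name, mirror_type, rpo_seconds):
        # A real array would configure replication here; we just
        # describe what was asked for.
        return f"{name} ({mirror_type} mirror, RPO {rpo_seconds}s)"

def provision_application(request, array):
    vol = array.create_mirrored_volume(
        name=request["app_name"],
        mirror_type="async" if request["async_mirror"] else "sync",
        rpo_seconds=request["rpo_seconds"],
    )
    return f"VM for {request['app_name']} backed by volume: {vol}"

print(provision_application(
    {"app_name": "billing", "async_mirror": True, "rpo_seconds": 300},
    Array(),
))
```

In the datastore cases, the equivalent of `create_mirrored_volume` is a human ticket queue, which is exactly where the lag comes from.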
Now, the above mainly focuses on the VMware or cloud administrator perspective. How does VM Volumes impact storage management?
VMs are the new hosts: Today, storage administrators have visibility of physical hosts in their management environment. In a non-virtualized environment, this visibility is very helpful. The storage administrator knows exactly which applications in a data center are storage-provisioned, or affected by storage management operations, because the applications run on well-known hosts. However, in virtualized environments the association of an application to a physical host is temporary. To keep at least the same level of visibility as in physical environments, VMs should become part of the storage management environment, like hosts. Hosts are still interesting, for example to manage physical storage mapping, but without VM visibility, storage administrators will know less about their operation than they are used to, or need to. VM Volumes enables such visibility, because volumes are provisioned to individual VMs. The XIV VM Volumes demonstration at VMworld Barcelona, although experimental, shows a view of VM volumes in XIV's management GUI.
Here's a screenshot:
That's not all!
Storage Profiles and Storage Containers: A Storage Profile is a vSphere specification of a set of storage services. A storage profile can include properties like thin or thick provisioning, mirroring definition, snapshot policy, minimum IOPS, etc.
Storage administrators define a portfolio of supported storage services, maintained as a set of storage profiles, and published (via VASA integration) to vSphere.
VMware or cloud administrators define the required storage profiles for specific applications.
VMware and storage administrators need to coordinate the typical storage requirements and the automatically-available storage services. When a request to provision an application is made, the associated storage profiles are matched against the published set of available storage profiles. The matching published profiles will be used to create volumes, which will be bound to the application VMs. All that will happen automatically.
Note that when a VM is created today, a datastore must be specified. With VM Volumes, a new management entity called Storage Container (also known as Capacity Pool) replaces the use of datastore as a management object. Each Storage Container exposes a subset of the available storage profiles, as appropriate. The storage container also has a capacity quota.
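The matching step described above is easy to picture in code. Here is a hedged sketch: the profile fields and names are illustrative, not the actual VASA schema:

```python
# Illustrative sketch of profile matching: an application's required
# storage profile is compared against the profiles published (via VASA)
# by the chosen storage container.

PUBLISHED_PROFILES = [
    {"name": "gold",   "thin": True, "mirrored": True,  "snapshots": True},
    {"name": "silver", "thin": True, "mirrored": False, "snapshots": True},
]

def match_profile(required, published=PUBLISHED_PROFILES):
    # A published profile matches if it provides every required property.
    for profile in published:
        if all(profile.get(key) == value for key, value in required.items()):
            return profile["name"]
    return None  # no match: in the datastore world, this meant a ticket

print(match_profile({"mirrored": True, "snapshots": True}))  # gold
```

When a match is found, volumes are created from the matching profile and bound to the application VMs, all automatically.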
Here are some more takeaways:
New way to interface vSphere and storage management: Storage administrators structure and publish storage services to vSphere via storage profiles and storage containers.
Automated provisioning, out of the box: The provisioning process automatically matches application-required storage profiles against storage profiles available from the specified storage containers. There is no need to build custom scripts and custom processes to automate storage provisioning to applications.
The XIV advantage:
XIV services are very simple to define and publish. The typical number of available storage profiles would be low. It would also be easy to define application storage profiles.
XIV provides consistent high performance, up to very high capacity utilization levels, without any maintenance. As a result, automated provisioning (which inherently implies less human attention) will not create an elevated risk of reduced performance.
Note: A storage vendor VASA provider is required to support VM Volumes, storage profiles, storage containers and automated provisioning. The IBM Storage VASA provider runs as a standalone service that needs to be deployed on a server.
To summarize the VM Volumes value proposition:
Streamline cloud operation by providing storage services at VM and application level, enabling end-to-end provisioning automation, and unifying VMware and storage administration around volumes and VMs.
Increase storage array ROI, improve vSphere scalability and response time, and reduce cloud provisioning lag, by offloading VM-level provisioning, failover, backup, storage migration, storage space recycling, monitoring, and more, to the storage array, using advanced storage operations such as mirroring and snapshots.
Simplify the adoption of VM Volumes using XIV, with smaller and simpler sets of storage profiles. Apply XIV's supreme fast cloning to individual VMs, and keep automation risks at bay with XIV's consistent high performance.
Until you can get your hands on a VM Volumes-capable environment, the VMware and IBM developer groups will be collaborating and working hard to realize this game-changing feature. This information is sure to trigger questions and comments, and our development teams are eager to learn from them and respond. Enter your comments below, and I will try to answer them and help shape the next post on this subject. There's much more to be told.
A lot was announced this week, so I decided to break it up into several separate posts. This is part 3 in my 3-part series, focusing on our Tivoli Storage products.
To read the rest of the series, see:
The latest release of FlashCopy Manager now supports NetApp and IBM N series storage devices. This provides application-aware snapshots, coordinated with applications like SAP, DB2 and Oracle.
FlashCopy Manager now integrates with Metro and Global Mirror capabilities, so that application-consistent copies are available at remote sites for disaster recovery, or to off-load the FlashCopy destination copy from disk to Tivoli Storage Manager storage pools.
Tivoli Storage Manager v6.4
IBM Tivoli Storage Manager is part of IBM's Unified Recovery Management. Here are some highlights:
Enhanced Reporting. Cognos reporting to monitor backup and archive environments.
TSM for ERP. I remember when these were called "Tivoli Data Protection" modules. We still refer to them as "TDPs". The TSM for ERP provides backup capability for SAP environments, and this latest release adds support for in-memory SAP HANA databases.
TSM for Virtual Environments. IBM TSM is famous for its patented "Progressive Incremental Backup," which is far more efficient than full+incrementals or full+differentials. IBM now extends this method to VM images. With people consolidating more and more VMs onto fewer host servers, TSM-VE now offers multiple backup streams in parallel. TSM-VE can now take application-aware backups of Microsoft Exchange, SQL Server, and Active Directory running in VMs. TSM-VE will also support vApp and VM templates. If it takes you [a day and a half to build a VMware template], you would want to make sure all that work was backed up, right?
Enhanced Security. Complex password support, and improved user authentication and management through integration with Lightweight Directory Access Protocol (LDAP).
A lot was announced yesterday, so I decided to break it up into several separate posts. This is part 2 in my 3-part series, focusing on: Storwize V7000 Unified, LTO-6 tape, and the SmartCloud Virtual Storage Center.
The Storwize V7000 Unified is a product that consists of a 2U-high Storwize V7000 control enclosure that provides block-based access, combined with two 2U-high File Modules that provide file-based NAS protocols: CIFS, NFS, HTTPS, SCP and FTP. The problem was that when it was introduced, it was based on Storwize V7000 v6.3, so when the Storwize V7000 v6.4 features were announced last June, they did not apply to the Storwize V7000 Unified.
That is all fixed now, so the Storwize V7000 Unified now supports the full v6.4 features, including Real-time Compression for both file and block-based access to primary data, and Fibre Channel over Ethernet (FCoE) for block access.
The two File Modules are no longer limited to a single Storwize V7000 control enclosure; you can now connect up to four control enclosures clustered together. Combined with up to nine expansion enclosures per control enclosure for additional disk, this raises the maximum to 960 drives.
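The 960-drive maximum follows from simple arithmetic, assuming the 24-drive (2.5-inch) variant for every enclosure:

```python
# Quick check of the 960-drive maximum for a clustered Storwize V7000
# Unified, assuming 24 x 2.5-inch drives per enclosure (the 3.5-inch
# variant holds fewer, so this is the best case):
control = 4             # clustered control enclosures
expansion_per_ctrl = 9  # expansion enclosures behind each control enclosure
drives_per_encl = 24

total = (control + control * expansion_per_ctrl) * drives_per_encl
print(total)  # 960
```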
If you don't already have an Active Directory or LDAP server, the Storwize V7000 Unified now offers an embedded LDAP server, for smaller deployments that want to reduce the number of servers they need to purchase for a complete solution.
Like the [IBM XIV Gen3 storage system], both the Storwize V7000 and V7000 Unified now also support the OpenStack Nova-volume interface.
Lastly, if you have a Storwize V7000 v6.4, you can upgrade it to a Storwize V7000 Unified by simply adding the two File Modules. This can be done in the field.
IBM LTO-6 for tape libraries and drives
IBM introduces the sixth generation of Linear Tape Open (LTO-6) drives, which can be used as stand-alone IBM TS1060 drives or in IBM tape libraries. As with previous models of LTO, the LTO-6 drive can read tape media from the two prior generations (LTO-4 and LTO-5), and can write to the previous generation (LTO-5) media. You can buy the LTO-6 drives now, and use the older media until LTO-6 tape cartridges become available (hopefully later this year!).
My friend, Brad Johns, from Brad Johns Consulting, has a great post on this [LTO-6 Announcement]. While you expect the new drives to be faster with a denser tape media format, the key advantage to the LTO-6 is that it improves the compression algorithm, from the previous 2:1 to the new 2.5:1 compression ratio:
Thus, with the improved compression, the LTO-6 is 40 percent faster, with double the tape cartridge density. This can reduce backup times by 30 percent, increase the amount of data that sits in your automated tape libraries, and reduce the courier costs sending tapes off-site.
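Those two claims follow from the native drive specs plus the compression ratios. The native figures below are from the published LTO roadmap (treat them as approximate):

```python
# Native capacity (TB), native throughput (MB/s), and compression ratio:
lto5 = {"tb": 1.5, "mbps": 140, "ratio": 2.0}
lto6 = {"tb": 2.5, "mbps": 160, "ratio": 2.5}

# Effective (compressed) throughput and cartridge capacity, relative to LTO-5:
speedup  = (lto6["mbps"] * lto6["ratio"]) / (lto5["mbps"] * lto5["ratio"])
capacity = (lto6["tb"]  * lto6["ratio"]) / (lto5["tb"]  * lto5["ratio"])
print(f"{speedup:.0%} of LTO-5 throughput, {capacity:.2f}x capacity")
# 143% of LTO-5 throughput, 2.08x capacity
```

So roughly 40 percent faster and just over double the effective cartridge capacity, matching Brad's numbers.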
IBM SmartCloud Virtual Storage Center v5.1
Last year, IBM coined the phrase "Storage Hypervisor" to refer to the underlying technology in the IBM SAN Volume Controller (SVC) and Storwize V7000 disk systems.
At the IBM Edge conference last June, my colleague Mike Griese presented [SmartCloud Virtual Storage Center]. Back then, it was a pilot program (beta test), and this week, IBM announces that it will be formally available as a product.
The idea was simple: take the basic storage hypervisor, and add the necessary software to make it a complete solution.
If all of your disk is currently virtualized behind IBM SAN Volume Controller (SVC), or you want to put all of your data behind SVC, then SmartCloud Virtual Storage Center is for you. Basically, for one per-TB price, you get all of the following:
The software features of SAN Volume Controller v6.4, including FlashCopy, Metro Mirror and Global Mirror.
The full advanced features of IBM Tivoli Storage Productivity Center v5.1, including the Storage Analytics Engine that does "Right-Tiering", recommending which LUNs should be moved entirely from one disk system to another, based on policies and access patterns.
IBM Tivoli Storage FlashCopy Manager v3.2 which manages FlashCopy with full coordination with applications, including Microsoft Exchange, SQL Server, DB2, Oracle, SAP, and VMware. This ensures that the FlashCopy destination copies are clean, eliminating the need to run backout or redo logs to correct any incomplete units of work.
If this combination sounds familiar, it was based on IBM's previous attempt called [Rapid Application Storage] which combined the Storwize V7000 with Tivoli Storage Productivity Center Midrange Edition and FlashCopy Manager.
The key difference is that SmartCloud VSC does not include the SVC hardware itself; you buy that separately. If you want Real-time Compression, it is charged separately for the subset of TB of the volumes you select for compression.
Well it's Wednesday, and you know what that means... IBM Announcements.
(Normally, announcements are on Tuesdays, but we moved this one over to Wednesday to line up with our big launch event in Pinehurst, NC.)
A lot was announced today, so I decided to break it up into several separate posts. I will start with our Enterprise Systems: DS8870, TS7700 Release 3, and XIV Gen3.
Enterprise systems are the servers, storage and software at the core of an enterprise IT infrastructure. Enterprise systems enable a private cloud infrastructure at enterprise scale, with flexible service delivery models that provide dynamic efficiency for resource and workload management. They make sure critical data is always available across the enterprise, making it accessible in new ways so that actionable insights can be derived from advanced and operational analytics. They also provide ultimate security, ensuring the integrity of critical data while mitigating risk and providing assured compliance.
IBM System Storage DS8870® disk system
This new storage system is the next generation in IBM's DS8000 series, based on IBM's POWER7 chipset. Each CEC can have 2, 4, 8 or 16 cores. Like the DS8800, you can have a mix of 2.5-inch and 3.5-inch disk drives of different speeds and capacities, up to 1,536 drives in a four-frame configuration. The maximum cache is now 1TB usable. The combination of faster chipset and more cache can triple performance for some workloads!
All DS8870s ship standard with Full Disk Encryption (FDE-capable) drives. The problem in the past was that people would buy a DS8000 with non-FDE drives, and later want to activate encryption, only to discover that they had to swap out their drives for ones with the encryption chip built in. Now, all drives on the DS8870 have the encryption chip. This also allows Easy Tier sub-volume automated tiering to move encrypted data between all media types.
Flash optimization with DS8000 Easy Tier can improve performance up to 3 times with 3% of data on solid-state storage. Easy Tier is easy to deploy and runs automatically.
Support of the American National Standards Institute's (ANSI) T10 Data Integrity Field (DIF) standard. This is a feature that the mainframe has had for years, and is now being extended to distributed operating systems. The concept is simple. When sending data between server and storage, generate a checksum at the source, and then validate the checksum at the target. When you write a block of data, the server generates the checksum, and the DS8870 validates the checksum on arrival. When you read the data back, the DS8870 generates the checksum, and the server validates it on arrival. This ensures that data was not corrupted in between. There is a great write-up on IBM developerWorks: [End-to-end data protection using T10 standard data integrity field].
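The write/read checksum handshake described above is easy to sketch. A real T10 DIF implementation appends an 8-byte protection field to each 512-byte block (a 2-byte guard CRC, a 2-byte application tag, and a 4-byte reference tag); the sketch below shows only the guard tag, using the CRC-16 polynomial the standard specifies:

```python
def crc16_t10dif(data: bytes) -> int:
    # CRC-16/T10-DIF: polynomial 0x8BB7, no reflection, zero init --
    # the guard-tag CRC defined by the T10 DIF standard.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7 if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

def write_block(data):
    # Source side: the server generates the guard tag as the block
    # leaves; the array validates it on arrival (not shown).
    return data, crc16_t10dif(data)

def read_block(data, guard):
    # Target side: recompute and compare; a mismatch means the block
    # was corrupted somewhere along the path.
    if crc16_t10dif(data) != guard:
        raise IOError("T10 DIF guard-tag mismatch: block corrupted in flight")
    return data

block, tag = write_block(b"512 bytes of payload...")
assert read_block(block, tag) == block
```

The same check runs in both directions, which is what makes the protection end-to-end rather than hop-by-hop.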
Energy Efficient. The DS8870 consumes less energy than its predecessor, the DS8800. For example, a fully-configured four-frame DS8870 with 1,536 disk drives consumes only 23.2kW, compared to 26.3kW for the same number of drives in a DS8800. By comparison, the DS8700 with five frames and 1,024 drives consumed 29.2kW.
Support for new System z load balancing algorithm. System z Workload Manager now interacts with the DS8870 I/O Priority Manager to optimize designated Quality of Service (QoS) levels. We also have the fastest operational analytics solution, with DB2 List Prefetch cache optimization and DS8870 High Performance FICON (zHPF) integration. This solution increases DB2 query performance up to 11 times with disk, and up to 60 times with solid-state drives (SSD). File scans are up to 30 percent faster using DS8870 zHPF support for sequential access methods (QSAM, BPAM, and BSAM).
VMware vStorage APIs for Array Integration (VAAI) support. Why should the IBM DS8000 series support VMware when IBM already offers great VMware support with SAN Volume Controller (SVC), Storwize V7000 and XIV storage systems? Good question. This was hotly debated between development and marketing. Several DS8000 customers have already added SVC to provide full VMware VAAI support. As a consultant, I am neither development nor marketing, but felt it necessary to weigh in with my opinion. The DS8000 is a consolidation platform. According to one analyst survey, 22 percent of companies run on a single disk platform, so for DS8000 to be the one, it needs to support VMware and exploit these special APIs.
Six Nines Availability. Critical enterprise systems need to deliver continuous data availability, or very close to it. IBM solutions can help deliver up to six "nines" of availability, or 99.9999 percent, when combining DS8000 Metro Mirror with GDPS HyperSwap. That's about 30 seconds of downtime per year.
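The downtime figure follows directly from the availability percentage. Here is the arithmetic, using a 365.25-day year:

```python
# Six nines of availability, expressed as seconds of downtime per year:
availability = 0.999999
seconds_per_year = 365.25 * 24 * 3600
downtime = (1 - availability) * seconds_per_year
print(f"{downtime:.1f} seconds/year")  # 31.6 seconds/year
```

Roughly half a minute per year, which is why each additional "nine" is so hard (and so expensive) to earn.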
The TS7700 Release 3 represents a refresh to our existing virtual tape libraries. These are mainframe-only, offered in two models: TS7720 is a disk-only device, and the TS7740 is a blended disk-and-tape solution.
Industry standard hardware encryption. This applies to user data stored on the TS7700 system cache (disk), and to data transferred between TS7700 systems. This is especially important for regulations like the Payment Card Industry Data Security Standard (PCI-DSS). In previous models, the data was not encrypted until it was moved off disk and written to tape. Now, it is encrypted the minute it lands on the disk cache, and stays encrypted as it is replicated from one TS7700 to another in the grid.
Up to 4 million logical volumes. This is twice the previous limit.
More physical capacity for TS7720 systems. The maximum capacity for the disk-only model is raised from 440TB to 620TB, representing a 40 percent increase.
My latest book "Inside System Storage: Volume V" is now available!
I have published the fifth volume in my "Inside System Storage" series! Currently, it is only available in paperback. My editor, Susan Pollard, is hoping to have the eBook and hardcover versions ready for Cyber Monday. The foreword was written by Dr. Sondra Ashmore.
You can order this, and all my other books, in all formats, directly from my [Author Spotlight] page. The paperback will also be available soon from other online booksellers, search for ISBN 978-1-300-26223-7.
Improved Scalability. A new Multi-system Manager (MSM) server reduces the operational complexity for large and multi-site XIV deployments. Previously, admins connected directly to XIV boxes. If you had 10 admins logged in, then every XIV box was managing 10 admin conversations. The new MSM acts as a go-between. The admins connect to the MSM, and the MSM connects to the XIV boxes. The MSM polls and caches the status of each XIV, greatly increasing the number of XIV boxes that an admin can manage.
Enhanced User Interface. Support is added for IPsec and U.S. Government (USGv6) certification for administering the XIV over IPv6 networks. The XIV Mobile Dashboard app for iPhone and iPad has been spiffed up. Finally, the GUI has been internationalized and translated into Japanese.
Enhanced Integration for Cloud. For OpenStack, XIV now offers a Nova-volume driver which provides persistent storage to OpenStack compute nodes. The Nova task force is now looking to move storage into its own project called Cinder. For VMware, XIV has full support for Site Recovery Manager (SRM) v4.1 and v5.0 releases. XIV now also supports the Microsoft System Center Virtual Machine Manager, which can manage Hyper-V, VMware and Citrix XenServer hypervisors.
Smaller entry point. The original XIV supported 1TB and 2TB drives, with the smallest offering being 27TB usable. When IBM introduced the XIV Gen3, the two choices were 2TB and 3TB disk drives. Unfortunately, this meant that the initial entry model was now 55TB in size, and each additional module was more expensive as well. IBM will now offer 1TB support for XIV Gen3 at a lower price point; these are actually 2TB drives with half the capacity turned off.
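The go-between pattern behind the Multi-system Manager described under "Improved Scalability" above can be sketched like this (the class and method names are invented for illustration, not the actual MSM API):

```python
# One polling loop serves any number of admin sessions, so each XIV box
# sees a single management conversation instead of one per admin.
import time

class MultiSystemManager:
    def __init__(self, arrays, poll_interval=60):
        self.arrays = arrays          # name -> callable that fetches status
        self.poll_interval = poll_interval
        self.cache = {}

    def poll(self):
        # The MSM connects to the arrays and caches what it finds.
        for name, fetch_status in self.arrays.items():
            self.cache[name] = {"status": fetch_status(),
                                "polled_at": time.time()}

    def status(self, name):
        # Admins read the cached status; no direct array connection.
        return self.cache[name]["status"]

msm = MultiSystemManager({"xiv-01": lambda: "OK", "xiv-02": lambda: "OK"})
msm.poll()
print(msm.status("xiv-01"))  # OK
```

Because admin reads hit the cache rather than the arrays, adding admins costs the arrays nothing, which is what raises the number of boxes one admin can manage.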
The job is located in Tucson, Arizona, which is a great place to live! Tucson is the headquarters for IBM storage design and development, with the largest collection of engineers, software developers and testers. The IBM Tucson Executive Briefing Center is located on the [University of Arizona Science and Technology Park] campus that houses over 7,000 employees from 50 different companies.
What does the job entail?
Primarily, you will be developing, customizing and presenting PowerPoint presentations and live product demos. For some briefings, you will work with sales reps, IBM Business Partners, and clients to develop an agenda of topics to discuss. At times, a presentation may involve working to solve the client's problems, drawing on the whiteboard or flip charts to help capture the requirements and architect a solution.
Which products are we talking about?
The [IBM System Storage product line] includes solid-state drives (SSD), block and file-based disk systems, tape drives and libraries, storage virtualization, and storage management software.
Is there any opportunity for travel?
Most of the presentations will be performed in Tucson, either in person, by webcast or video conference call. Sometimes, this includes discussions over drinks, dinner or golfing. Occasionally, there will be travel to present at client locations, IBM branch offices, events or conferences. My manager estimates approximately 10 percent travel.
Is the pay based on a commission?
Absolutely not! We are consultants, not salespeople. To maintain our "trusted advisor" status, it is a flat salary, with possibility for year-end bonus based on how well our division does overall. This allows us to present and position all of the products fairly to the clients at briefings without bias. Our clients appreciate that! The job is considered pre-sales technical support.
Is training included?
Yes. Assuming you already have a strong background in storage hardware and software, and how these connect to SAN and LAN networks for a variety of operating systems like z/OS, AIX, Windows and Linux, there will be training for the latest updates and features of the IBM products throughout the year. Also, there will be professional training to build up your public speaking and meeting facilitation skills.
How do I apply?
If you are an American citizen, fluent in the English language, and have at least a Bachelor's degree, go to the [IBM Employment website] and look for the "Storage Support Specialist" position using job code "STG-0524037" or "STG-0525309". IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Last year, the Austin Executive Briefing Center had a room full of experts to help customers learn about IBM hardware to run Oracle applications. This year, IBM is back in San Francisco, with subject matter experts representing Power Systems, System x servers, PureSystems, Storage and System z mainframes. If you are in San Francisco, consider taking 1-2 hours out of your schedule to speak with IBM experts. These mini-briefings are intended to answer the question: Why choose IBM for your Oracle (and other) workloads?
Event: IBM Mini-Briefings
Location: San Francisco Marriott Marquis, 55 Fourth Street, very close to the Moscone Center
Dates: Monday through Wednesday, October 1-3, 2012
Subject Matter Experts:
Pat O'Rourke, Austin Briefing Center, Power Systems
Dennis Wunder, Poughkeepsie Briefing Center, System z mainframes
Steve Loeschorn, Raleigh Briefing Center, System x servers
Curtis Neal, Tucson Briefing Center, Storage
IBM will also have a booth presence on the main Oracle OpenWorld showroom floor. Please stop by and visit my colleagues! To sign up for a Mini-Briefing at Oracle OpenWorld, for any or all of the topics above, visit the new [IBM STG Austin EBC] website.
Many thanks to the 186 people who registered for yesterday's webcast "Solving the Storage Capacity Crisis -- Tools and Practices for Effective Management!" We had some excellent questions posed during the live Q&A:
Do you recommend moving to a SAN before implementing the management techniques you described, or will these tactics work just as well on direct-attached storage?
How does data center tiering differ from hierarchical storage management?
How do you recommend decisions about data priority be made when there are multiple stakeholders competing for attention?
You didn't mention deduplication. Does that have much impact on capacity management?
When outsourcing to a storage service provider, do you have any recommendations of the merits of wholesale outsourcing vs. partial outsourcing?
What are the dangers of giving end-users the ability to manage their own storage? What kind of education should be put in place?
The webcast was recorded, so in case you missed it, or just want to hear it again, the recording is now available in the [On24 archives].
Now an avid reader of my blog has brought this to my attention. Apparently, EMC has been showing customers a presentation, [Accelerating Storage Transformation with VMAX and VPLEX], that makes false and misleading performance comparisons between the IBM DS8000, HDS VSP, and EMC VMAX 40K disk systems.
(FTC Disclosure: This would be a good time to remind my readers that I work for IBM and own IBM stock. I do not endorse any of the EMC or HDS products mentioned in this post, and have no financial affiliation or investments directly with either EMC or HDS. I am basing my information solely on the presentation posted on the internet and other sources publicly available, and not on any misrepresentations from EMC speakers at the various conferences where these charts might have been shown.)
The problem with misinformation is that it is not always obvious. The EMC presentation is quite pretty and professional-looking. It is the typical slick, attention-getting, low-content, over-simplified marketing puffery you have come to expect from EMC. There are two slides in particular that I have issue with.
This first graphic implies that IBM and HDS are nearly tied in performance, but that EMC VMAX 40K has nearly triple that bandwidth. Overall the slide has very little detail. That makes it difficult to determine what exactly is being claimed and whether a fair comparison is being made.
The title claims that VMAX 40K is "#1 in High Bandwidth Apps". Only three disk systems are shown so the claim appears to be relative to only the three systems. The wording "High Bandwidth Apps" is confusing considering the cited numbers are for disk systems and no application is identified. By comparison, IBM SONAS can drive up to 105 GB/sec sequential bandwidth, nearly double what EMC claims for its VMAX 40K, so EMC is certainly not even close to #1.
Is the workload random or sequential? That is not easy to determine. The use of "GB/s" along with the large block size of 128KB implies the I/O workload is sequential, which is great for some workloads like high performance computing, technical computing and video broadcasts. Random workloads, on the other hand, are usually measured in I/Os per second (IOPS) with a block size ranging from 4KB to 64KB. (I am assuming the 128K blocks refers to 128KB block size, and not reading the same block of cache 128,000 times.)
The slide states "Maximum Sustainable RRH Bandwidth 128K Blocks". The acronym "RRH" is not defined, but I suspect it refers to "random read hits". For random workloads, 100 percent random read hits from cache represents one corner of the infamous "four corners" test. Real-world workloads have a mix of reads and writes, and a mix of cache hits and cache misses. It is also unclear whether the hits are from standard data cache or from internal buffers in adapters (perhaps accessing the same blocks repeatedly) or something else. So is this really for a random workload, or a sequential workload?
(The term "Hitachi Math" was coined by an EMC blogger precisely to slam Hitachi Data Systems for their blatant use of four-corners results, arguing that ridiculously large but equally unrealistic 100 percent random read hit numbers don't provide any useful information. I agree. There are much better industry-standard benchmarks available that represent real-world IT environments, such as SPC-1 for random workloads, SPC-2 for sequential workloads, and even benchmarks for specific applications. For EMC to shame HDS over four-corners results, only to use similar figures in its own presentation, is truly hypocritical!)
The IBM system is identified as "DS8000". DS8000 is a generic family name that applies to multiple generations of systems first introduced in 2004. The specific model is not identified, but that is critical information. Is this a first generation DS8100, or the latest DS8800, or something in between?
The slide says "Full System Configs", but that term is not defined, and configuration details, which are also critical in assessing system performance capabilities, are not specified. If the EMC box costs seven times more than IBM or HDS, would you really buy it to get 3x more performance? Is the EMC packed with the maximum amount of SSD? Were there any SSD in the IBM or HDS boxes to match?
The source of the claimed IBM DS8000 performance numbers is not identified. Did they run their own tests? While I cannot tell, the VMAX may have been configured with 64 Fibre Channel 8Gbps host connections. In that case each channel is theoretically capable of supporting about 800 MB/s at 100% channel utilization. Multiplying 64 x 800MB/s = 51.2GB/s, so did EMC just do the performance comparison on the back of a napkin, assuming there are no other bottlenecks in the system? Even then, I would not round up 51.2 to 52!
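The back-of-napkin arithmetic is easy to reproduce. Here is a quick sketch; note that the 64-port count and 800 MB/s per port are my own assumptions, since EMC disclosed no configuration details:

```python
# Back-of-napkin aggregate bandwidth estimate -- figures are assumed, not EMC's
# disclosed configuration.
PORTS = 64          # assumed number of 8 Gbps Fibre Channel host connections
MB_PER_PORT = 800   # rough theoretical MB/s per 8 Gbps FC port at 100% utilization

# Multiply ports by per-port throughput, convert MB/s to GB/s.
aggregate_gb_s = PORTS * MB_PER_PORT / 1000
print(aggregate_gb_s)  # 51.2 -- and this ignores every other bottleneck in the system
```

Real systems never sustain 100 percent utilization on every port simultaneously, which is exactly why this kind of multiplication is not a performance claim anyone should rely on.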
Response times were not identified. For random I/Os, response time is a very important metric. It is possible that the Symmetrix was operating with some resources at 100% utilization to get the highest GB/s result, but that would likely make I/O response times unacceptable for real-world random I/O workloads.
IBM and HDS have both published Storage Performance Council [SPC] industry-standard performance benchmarks. EMC has not published any SPC benchmarks for VMAX systems. If EMC is interested in providing customers with audited, detailed performance information along with detailed configuration information, all based on benchmarks designed to represent real-world workloads, EMC can always publish SPC benchmark results as IBM and other vendors have done. In past blog fights, EMC has resorted to the excuse that SPC isn't perfect, but can they really argue that the vague and unrealistic claims cited in this presentation are better?
The second graphic is so absurd, you would think it came directly from Larry Ellison at an Oracle OpenWorld keynote session. EMC is comparing a configuration with VMAX 40K plus an EMC VFCache host-side flash memory cache card to IBM and HDS disk systems configured without any host-side flash memory cache. The comparison is clearly apples-to-oranges. Other disk system configuration details are also omitted.
FAST VP is EMC's name for its sub-volume drive tiering feature, comparable to IBM Easy Tier and Hitachi's Dynamic Tiering. The graph implies that IBM and HDS can only achieve a modest incremental improvement from their sub-volume tiering. I beg to differ. I have seen various cases where a small amount of SSD on IBM DS8000 series can drastically improve performance by 200 to 400 percent.
The "DBClassify" shown on the graph is a tool, run as part of an EMC professional services offering called Database Performance Tiering Assessment, that makes recommendations for storing various database objects on different drive tiers based on object usage and importance. Do you really need to pay for professional services? With IBM Easy Tier, you just turn it on, and it works. No analysis required, no tools, no professional services, and no additional charge!
VFCache is an optional product from EMC that currently has no integration whatsoever with VMAX. A fair comparison would have included a host-side flash memory cache (from any vendor) when the IBM or HDS storage system was configured. Or leave it out altogether and just focus on the sub-volume tiering comparison.
Keep in mind that EMC's VFCache supports only selected x86-based hosts. IBM has published a [Statement of Direction] indicating that it will offer host-side flash memory cache, integrated with DS8000 Easy Tier, for Power systems running AIX and Linux as well.
I feel EMC's claims about IBM DS8000 performance are vague and misleading. EMC appears to lack the kind of technical marketing integrity that IBM strives to attain.
Since EMC is not able or willing to publish fair and meaningful performance comparisons, it is up to me to set the record straight and point out EMC's failings in this matter.
Reminder: It's not too late to register for my Webcast "Solving the Storage Capacity Crisis" on Tuesday, September 25. See my blog post [Upcoming events in September] to register!
Can you believe it is September already? We have a number of upcoming events that you might be interested in.
IBM Smarter Analytics by Design
Join the first of our 'Smarter Analytics by Design' virtual events to learn more from leading industry analyst IDC on how analytics can help you solve business challenges, and the capabilities you'll need to be successful in this ever-changing landscape. You'll also hear real case examples from AXTEL and Miami-Dade County and the results of their analytics approaches.
Webcast: IBM Smarter Analytics by Design Date: Thursday, September 13, 2012 Time: 1:00 pm ET / 12:00 pm CT / 10:00 am PT
Dan Vessett and Jean Bozman, International Data Corporation (IDC)
Gaspar Rivera Del Valle, AXTEL, Monterrey, Mexico
Adrienne DiPrima, Rosario Fiallos, Jaci Newmark, Miami-Dade County, South Florida
The problems that used to keep storage managers awake at night -- power, cooling and physical footprint -- are being successfully addressed by technology, but a more vexing issue still remains: How to get more out of the limited supply of skilled storage management professionals.
Webcast: Solving the Storage Capacity Crisis Date: Tuesday, September 25, 2012 Time: 12:00 pm ET / 10:00 am CT / 09:00 am PT
Demand for storage capacity continues to grow far faster than the pool of people to manage it. With no end in sight to data growth, businesses need to apply technology and practices that distribute management responsibility to the people who need storage, and multiply the volumes of storage that skilled professionals can handle.
In this session, I will cover best practices and new tools that are enabling leaps in productivity, in three main areas:
IBM is bringing back and expanding its Mini Briefing program to Oracle OpenWorld.
What is a Mini-Briefing you might ask? It is a small, customized briefing by the Executive Briefing Centers, held nearby a related conference, allowing conference attendees to take 1-2 hours out of their schedule to speak to IBM experts. These are intended to answer the question: Why choose IBM for your Oracle (and other) workloads?
Event: IBM Mini-Briefings Location: San Francisco Marriott Marquis, 55 Fourth Street, very close to the Moscone Center Dates: Monday through Wednesday, October 1-3, 2012
Last year, the Austin Executive Briefing Center had a room full of experts to help customers learn about IBM hardware to run Oracle applications. This year, IBM is back in San Francisco, with subject matter experts representing Power Systems, System x servers, PureSystems, Storage and System z mainframes.
Subject Matter Experts:
Pat O'Rourke, Austin Briefing Center, Power Systems
Dennis Wunder, Poughkeepsie Briefing Center, System z mainframes
Steve Loeschorn, Raleigh Briefing Center, System x servers
Curtis Neal, Tucson Briefing Center, Storage
Of course, IBM will also have a booth presence on the main Oracle OpenWorld showroom floor. Sadly, I will not be there myself this year. Please stop by and visit my colleagues!
To sign up for a Mini-Briefing at Oracle OpenWorld, for any or all of the topics above, visit the new [IBM STG Austin EBC] website.
I hope you can participate in one or more of these events!
This month, I am pleased to announce the new [IBM STG Executive Briefing Center] website, representing a huge improvement over the previous website we had been using for the past two years. STG refers to IBM's Systems and Technology Group, the division that focuses on servers, storage, switches and the system software that makes them run. This new website is for the dozen STG EBCs that span the globe. The new website reminds me of this famous quote:
"Perfection is achieved, not when there is nothing left to add, but when there is nothing left to take away"
-- Antoine de Saint-Exupery
Let's take a quick look at what makes it so much better.
The previous website required registration. At every briefing, those of us who work in the EBCs had to pass around a sign-up sheet for email addresses from each attendee so that we could send them an invitation to register for the site. We would have a hard time reading people's handwriting, resulting in some emails coming back rejected.
Inspired by self-service gas stations, automated teller machines, and the many self-service portals of Cloud Computing, the new website has everything up-front, without registration. IBM Business Partners and sales representatives can easily request a briefing at any of the dozen briefing centers represented!
IBM-managed and IBM-hosted
We had a difficult time explaining to our attendees why our previous website was hosted on a lone machine and maintained by a third party. Think about it: IBM manages the data centers of over 400 clients. IBM has provided web hosting for the most mission-critical workloads, with high levels of availability and reliability, and is recognized as one of the "Big 5" Cloud companies. I have done web design myself in my career, and we were terribly disappointed with the third party chosen to create and maintain our previous website, constantly having to point out errors in their HTML and CSS.
For the new website, IBM took back control. Staff from each EBC, myself included, came up with a simple page to bring the essence of each location to life. Special thanks to my colleague Hal Jennings, from the Austin EBC, for bringing this all together!
Despite two years of manually registering attendees to use the previous website, Google Analytics showed that few people visited, and the few that did spent little time exploring the vast repository of content.
The new website is vastly simpler. The front page points to all twelve EBCs, and a single mouse click gets you to the location you are interested in, with all the details you need to make a decision to book a briefing, and the contact information to make it happen.
Elimination of Wasted and Duplicate Effort
In the previous website, we spent as much as 15 hours just to create, voice over, edit and produce a single 15-minute recorded presentation. Less than six percent of the previous website visitors watched more than five minutes of these videos, making us feel that most of our effort was wasted.
The EBC staff kept wasting time, month after month, thanks to all-stick, no-carrot tactics that mandated minimum contributions of more and more content that nobody was looking at. Even more disappointing, much of our work duplicated the formal responsibilities of our IBM Marketing team, causing confusion between the roles of our two teams. They weren't happy about it either.
Finally, we said enough was enough! The new STG EBC website is a marvel in minimalism. If you want to see presentations, videos, expert profiles, or partake in on-going conversations, I welcome you to visit the [IBM Expert Network], the [IBM Storage YouTube Channel], and the [Storage Community] where they belong.
With all of the distractions this week, from the Republican National Convention in Florida, to Tropical Storm Isaac hitting New Orleans on the 7th anniversary of Hurricane Katrina, I thought I would continue this week's theme on the IBM zEnterprise EC12.
Processing an insurance claim: $56 U.S. dollars (USD) with mainframes, versus $92 USD with distributed servers.
Processing a mobile subscriber: $18.26 USD with mainframes, versus $26.12 USD with distributed servers.
IT cost for an ATM: $572 USD with mainframes, versus $1021 USD with distributed servers.
In the whitepaper [Total Economic Impact of IBM System z], Forrester Research interviews the executives of five existing mainframe clients, and through in-depth analysis of their deployments, is able to present a "composite" company with an IT-staff of 4,500 employees. The result is impressive: deploying an IBM System z had an ROI of 199 percent. That is a payback period of less than five months!
I finish this post with a quick [6-minute YouTube video], featuring my colleague, Nick Sardino. Nick and I have worked together in the past at various conferences and conventions.
Well it's Tuesday again, and you know what that means! IBM Announcements!
For nearly 50 years, IBM has been leading the IT industry with its mainframe servers. Today, IBM announced its 12th generation mainframe in its [System z product family], the IBM zEnterprise EC12, or zEC12 for short. I joined IBM in 1986, and my first job was to work on DFHSM for the MVS operating system. The product is now known as DFSMShsm as part of the Data Facility Storage Management System, and the operating systems went through several name changes: MVS/ESA, OS/390, and lately z/OS. I was the lead architect for DFSMS up until 2001. I then switched to be part of the team that brought Linux to the mainframe. Both of these experiences come in handy as I deal with mainframe storage clients at the Tucson Executive Briefing Center.
Let's take a look at some recent developments over the past few years.
In the 9th and 10th generations (IBM System z9 and z10, respectively), IBM introduced the concept of a large "Enterprise Class", and a small "Business Class" to offer customer choice. These were referred to as the EC and BC models.
For the 12th generation, IBM kept the name "zEnterprise", but went back to "EC" to refer to Enterprise Class. Rather than offer a separate "small" Business Class version, the zEC12 comes in 60 different sub-capacity levels. Many software vendors charge per core, or per [MIPS], so sub-capacity means that some portion of the processors are turned off and the software license cost is lower. The top rating for the zEC12 is 78,000 MIPS. (I would have thought we would have switched over to BIPS by now!)
If you currently have a z10 or z196, then it can be upgraded to zEC12. The zEC12 can attach to up to four zBX model 003 frames that can run AIX, Microsoft Windows and Linux-x86. If you currently have zBX model 002 frames, these can be upgraded to model 003.
The key enhancements reflect the three key initiatives:
Operational Analytics - Most analytics are done after-the-fact, but IBM zEnterprise can enable operational analytics in real-time, such as fraud detection while the person is using the credit card at a retail outlet, or online websites providing real-time suggestions for related products while the person is still adding items to their shopping cart. Operational analytics provides not just insight, but insight delivered in time to act on it. There is even work in place to [certify Hadoop on the mainframe].
Security and Resiliency - IBM is famous for having the most secure solutions. With industry-leading EAL5+ security rating, it beats out competitive offerings that are typically only EAL4 or lower. IBM has a Crypto Express4S card to provide tamper-proof co-processing for the system. IBM introduces the new "zAware" feature, which is like "Operational Analytics" pointed inward, evaluating all of the internal processes, error logs and traces, to determine if something needs to be fixed or optimized.
Cloud Agile - When people hear the phrase "Cloud Agile" they immediately think of IBM System Storage, but servers can be Cloud Agile also, and the mainframe can run Linux and Java better, faster, and at a lower cost, than many competitive alternatives.
Earlier this week, Jon Erickson from Forrester Research, and Chris Saul from IBM, co-presented a webcast on the economic impact of using SAN Volume Controller for storage virtualization. The event was co-sponsored by IBM, InformationWeek, and UBM TechWeb. Jon spoke first, covering the cost savings and financial benefits of using SAN Volume Controller in your environment. His analysis shows a payback period of only 18 months!
Chris Saul (IBM) then covered the latest features introduced last June for SAN Volume Controller v6.4 release. Many of these features are available on older hardware models of SAN Volume Controller. One of the most exciting features is Real-time Compression.
If you missed the webcast, you can listen to the [Replay]. There is also a [whitepaper] if you prefer that format.
The Real-time Compression benefits can vary by the type of data compressed. Some data yields only 20 percent savings; other data compresses 80 percent or more. The best way to find out how much compression would benefit your environment is to run the [IBM Comprestimator Tool] against your own data!
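To get a feel for how much savings depend on the data itself, here is a rough do-it-yourself sketch using Python's zlib. To be clear, this is not the IBM Comprestimator Tool, and zlib is not the same algorithm as Real-time Compression; it only illustrates why repetitive data saves a lot and already-random data saves almost nothing:

```python
import os
import zlib

def savings_percent(data: bytes) -> float:
    """Percentage of space saved by zlib compression -- a rough proxy only."""
    compressed = zlib.compress(data)
    return (1 - len(compressed) / len(data)) * 100

# Highly repetitive text compresses extremely well...
text = b"the quick brown fox jumps over the lazy dog " * 1000
# ...while random bytes (a stand-in for already-compressed or encrypted data) do not.
noise = os.urandom(44_000)

print(f"repetitive text: {savings_percent(text):.0f}% saved")
print(f"random bytes:    {savings_percent(noise):.0f}% saved")
```

On my reading of the announcement, this is exactly why IBM recommends measuring against your own data rather than trusting a single headline ratio.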
If you are constantly battling out-of-space conditions, and would like to make extra room on your existing storage devices, your dreams have come true!
IBM has announced it has entered into a definitive agreement to acquire Texas Memory Systems, Inc. (TMS), a privately held Houston, Texas-based company with about 100 employees, that focuses on solid-state flash optimized systems and solutions, including the RamSan family of external rack-mounted storage, as well as PCIe cards for internal storage that fit inside servers.
I've mentioned Solid-State Drive storage quite a few times over the past few years in this blog, which included some great interactions with my friends over at Texas Memory Systems. Here's a quick look:
In my now infamous blog post [Hybrid, Solid State and the future of RAID], I resorted to a deck of [Tarot cards] in an effort to fight [writer's block] while responding to a query about combining solid-state with spinning disk. In the original post, I poked fun at Texas Memory Systems having the slogan "World's Fastest Storage". Woody Hutsell, then VP of marketing for Texas Memory Systems, explained that the reason that TMS did not have faster benchmark results was because it did not have a million dollars to buy the fastest IBM UNIX server.
In my post [Good News and Bad News], I mentioned that Texas Memory Systems has an impressive SPC benchmark result. The Storage Performance Council [SPC] publishes the benchmarking industry standard by which all block-based storage devices are measured. It looks like the TMS performance test department finally got the million-dollar IBM server they needed for this.
My colleagues in marketing were not amused, afraid that mentioning small companies like TMS would give them a huge boost in marketing awareness, above and beyond what TMS could do on their own modest marketing budget, similar to the [Colbert Bump]. I could call it the Pearson Bump. If you first heard of Texas Memory Systems from my blog, or bought TMS products based on my discussion, please post a comment below!
IBM made history as the first major storage vendor to [break the 1 million IOPS barrier with Solid State Disk]. The project was known as "Quicksilver", and was able to demonstrate that a product like SAN Volume Controller with Solid-State Drives (SSD) can indeed provide a significant boost in performance to external disk arrays. The IBM 2145-CF8 and 2145-CG8 models allow up to four SSD in each node. I was asked not to blog the entire month of August, so that our upcoming September announcements would get more notice, but I couldn't resist covering Quicksilver. The original post had mentioned Texas Memory Systems, but those mentions were later removed to avoid the "Pearson Bump".
In my post [Day 2 IBM Storage University - Solutions Expo - TMS After-party], I mentioned that I attended the TMS after-party. Texas Memory Systems had just been qualified as Solid-State Drive (SSD) storage behind the IBM SAN Volume Controller, and the two products work extremely well together for IBM Easy Tier, the sub-volume automated tiering capability to optimize storage performance. I was able to catch up with my friend Erik Eyberg, and meet CEO and Founder Holly Frost.
Nearly half (43 percent) of IT decision makers say they have plans to use SSD technology in the future or are already using it in their datacenter. Solid-state can refer to both volatile Random Access Memory (RAM) and non-volatile Flash, and Texas Memory Systems has built solutions around both types. The survey question referred to non-volatile Flash Solid-State Drives (SSD) that do not require a battery to keep the data from fading away after the power goes out. Nearly all storage in the datacenter has volatile Random Access Memory (RAM).
Speeding delivery of data was the motivation behind 75 percent of respondents who plan to use or already use SSD technology. I would have thought this would have been 100 percent, but the other options included reduced energy consumption, and improved drive reliability, which are both also true with Solid-State Drives.
However, for those who were not using SSD today, the major factor was cost, according to 71 percent of respondents. On a dollar-per-GB basis, Solid-State Drives continue to be anywhere from 10 to 25 times more expensive than spinning disk. Last year's tsunami in Japan, and the floods in Thailand, caused spinning disk prices to rise to cover component shortages, thereby shrinking the price gap between SSD and spinning disk.
Respondents also say they plan on increasing storage investments in virtualization (48 percent), cloud (26 percent), flash memory/solid state (24 percent), and analytics (22 percent).
I am back from lovely Taipei. The IBM Top Gun class went well. Here are a few pictures of things I found interesting while I was there.
On the first day of class, I asked for some coffee. Our lovely class assistant, Ashley, brought me a cup with an interesting paper filter hanging on the edge. I have since learned that there are two drinks never to order in Taiwan: coffee and wine. If you enjoy either, you won't enjoy them here. Instead, I drank the local "Taiwan Beer" and various types of tea.
Our class was on the 14th floor of the building, and there was this warning sign posted in the elevator. I have no idea what the Chinese characters say, but we found the cartoon depictions of elevator dangers amusing. We interpreted the lower left corner to mean "Don't let your evil twin sister push you out of a moving elevator!"
I have to say that the variety of food was excellent. One night, we had dinner at a [Spanish Tapas] restaurant. The Spanish had a settlement on Taiwan island, known as Formosa back then, until driven out by the Dutch in 1642. We also had a traditional Chinese lunch, with dumplings, pickled cabbage, and "Lion's Head" soup.
From the classroom floor, we could see the Taipei 101 building, considered the third [tallest skyscraper in the world]. This wasn't here the last time I was in Taiwan.
On the last day, we were treated to some [Bubble tea], a specialty drink that originated in Taiwan in the 1980's. The straw was unusually thick, about twice as thick as a normal straw. We quickly figured out why. It was so that we could slurp up the brown floating things at the bottom. We didn't realize this until after the first sip. These floaties were actually Boba Tapioca pearls. The tea itself was delicious and sweet.
Special thanks to Joe Ebidia for managing the class, his assistant Ashley, and our local support team Justin and Stewart. I would also like to thank the staff at the Sherwood Hotel.
This week, I am in Taipei, teaching Top Gun class. There was concern that another typhoon would hit the island of Taiwan later this week, but it looks like it is now headed for Hong Kong instead.
Elsewhere in the world, there are several events going on next week, so I thought I would bring them to your attention.
ECTY - South Africa
Next week, Jerry Kluck, IBM Global Sales Executive for Storage Optimization and Integration Services, will be the keynote speaker at "Edge Comes to You" (ECTY) conference in South Africa. This is a one-day event, similar to the [ECTY event in Moscow, Russia] that I spoke at last June.
Here is the schedule for South Africa next week:
Monday, August 20, 2012 - Johannesburg
Wednesday, August 22, 2012 - Cape Town
(I visited both Jo'burg and Cape Town back in 1994. A month after Apartheid ended, I was part of a small group of IBMers sent to re-establish IBM's business operations there. I would have liked to attend the events next week, not just to hear Jerry speak, but also to see how much the country has changed over the past 18 years, but I could not get a work permit in time.)
If you are interested in attending either of these next week, contact your local IBM Business Partner or sales rep to attend.
Forrester's Total Economic Impact Study of Virtualized Storage
Virtualized storage can help organizations stretch their storage investment dollar and storage administration and management resources. Jon Erickson from Forrester Research will review the latest findings from IBM SAN Volume Controller (SVC) users studied as part of the recently completed Forrester Total Economic Impact Study of IBM System Storage SAN Volume Controller.
Date: Tuesday, August 21, 2012
Time: 10:00 AM PDT / 1:00 PM EDT
Duration: 60 minutes
Among the findings, users were able to:
Avoid the capital cost of additional storage
Increase IT productivity
Provide greater end user data availability
The second presenter is Chris Saul, IBM Storage Virtualization Manager, who will explain how SVC can manage heterogeneous disk from a single point of control, autonomously manage tiered disk storage, and store up to five times as much data on your existing disk using IBM Real-time Compression.
Not all virtualization solutions are created equal! That's true for storage virtualization, like the SAN Volume Controller mentioned above, and it's true for server virtualization as well.
This webcast discusses the real-world impact on businesses that deploy IBM's PowerVM®
virtualization technology as compared to those using Oracle® VM for SPARC (OVM SPARC), Microsoft® Hyper-V, VMware® vSphere or other competing products.
Date: Wednesday, August 22, 2012
Time: 10:00 AM PDT / 1:00 PM EDT
Duration: 60 minutes
This webcast will include findings from a [Solitaire Interglobal] study of over 61,000 customer sites on the value of virtualization from a business perspective and how IBM's PowerVM provides real business value.
Other key discussion points that will be covered during this webcast include:
Behavioral characteristics of server virtualization technologies that were examined and analyzed from survey participants' environments
How IT colleagues were able to obtain a faster time-to-market for business initiatives when using IBM PowerVM
Why the learning curve for PowerVM is as much as 2.58 times shorter than for other offerings
Why VM reboots with PowerVM resulted in 5.5 times less downtime than with competitive platforms
A TCO reduction of up to 71.4% for PowerVM compared to alternative options
This webcast will also feature an in-depth discussion on the IBM PowerVM solution from an IBM product expert who will share the unique virtualization features available when PowerVM is utilized within the IBM Power Systems™ environment.
Every year, I teach hundreds of sellers how to sell IBM storage products. I have been doing this since the late 1990s, and it is one task that has carried forward from one job to another as I transitioned through various roles from development, to marketing, to consulting.
This week, I am in the city of [Taipei] to teach Top Gun sales class, part of IBM's [Sales Training] curriculum. This is only my second time here on the island of Taiwan.
As you can see from this photo, Taipei is a large city with just row after row of buildings. The metropolitan area has about seven million people, and I saw lots of construction for more on my ride in from the airport.
The student body consists of IBM Business Partners and field sales reps eager to learn how to become better sellers. Typically, some of the students have just been hired or recently finished IBM Sales School, a few might have transferred from selling other product lines, while others are established storage sellers looking for a refresher on the latest solutions and technologies.
I am part of the teaching team, comprising seven instructors from different countries. Here is what the week entails for me:
Monday - I will present "Selling Scale-Out NAS Solutions" that covers the IBM SONAS appliance and gateway configurations, and be part of a panel discussion on Disk with several other experts.
Tuesday - I have two topics, "Selling Disk Virtualization Solutions" and "Selling Unified Storage Solutions", which cover the IBM SAN Volume Controller (SVC), Storwize V7000 and Storwize V7000 Unified products.
Wednesday - I will explain how to position and sell IBM products against the competition.
Thursday - I will present "Selling Infrastructure Management Solutions" and "Selling Unified Recovery Management Solutions", which focus on the IBM Tivoli Storage portfolio, including Tivoli Storage Productivity Center, Tivoli Storage Manager (TSM), and Tivoli Storage FlashCopy Manager (FCM). The day ends with the dreaded "Final Exam".
Friday - The students will present their "Team Value Workshop" presentations, and the class concludes with a formal graduation ceremony for the subset of students who pass. A few outstanding students will be honored with "Top Gun" status.
These are the solution areas I present most often as a consultant at the IBM Executive Briefing Center in Tucson, so I can provide real-life stories of different client situations to help illustrate my examples.
The weather here in Taipei calls for rain every day! I was able to take this photo on Sunday morning while it was still nice and clear, but later in the afternoon, we had quite the downpour. I am glad I brought my raincoat!
With all the announcements we had in June, it is easy for some of the more subtle enhancements to get overlooked. While I was in Orlando for the IBM Edge conference, I was able to blog about some of the key featured announcements. Later, back in Tucson, I blogged about [More IBM Storage Announcements]. For IBM's Scale-Out Network Attached Storage (SONAS), I had simply:
"SONAS v1.3.2 adds support for management by the newly announced IBM Tivoli Storage Productivity Center v5.1 release. Also, IBM now officially supports Gateway configurations that have the storage nodes connected to XIV or Storwize V7000 disk systems. These gateway configurations offer new flexible choices and options for our ever-expanding set of clients."
In my defense, IBM numbers its software releases with version.release.modification, so 1.3.2 is Version 1, Release 3, Modification 2. Generally, modification announcements don't get much attention. The big announcement for v1.3.0 of SONAS happened last October, see my blog post [October 2011 Announcements - Part I] or
the nice summary post [IBM Scale-out Network Attached Storage 1.3.0] from fellow blogger Roger Luethy.
Here is a diagram showing the three configurations of SONAS.
I have covered the SONAS Appliance model in depth in previous blogs, with options for fast and slow disk speeds, choice of RAID protection levels, a collection of enterprise-class software features provided at no additional charge, and interfaces to support a variety of third party backup and anti-virus checking software.
The basics haven't changed. The SONAS appliance consists of 2 to 32 interface nodes, 2 to 60 storage nodes, and up to 7,200 disk drives. The maximum configuration takes up 17 frames and holds 21.6PB of raw disk capacity, which is about 17PB usable space when RAID6 is configured. Each interface node has one or two hex-core processors with up to 144GB of RAM, offering up to 3.5GB/sec of throughput. This makes IBM SONAS the fastest performing and most scalable disk system in IBM's System Storage product line.
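As a quick back-of-the-envelope check on those maximum numbers (the 7,200 drives and 3TB size come from the paragraph above; the 8+2 RAID6 geometry is my assumption for approximating usable space):

```python
# Rough capacity math for a maximum SONAS appliance configuration.
# Figures from the text: 7,200 drives at 3TB each; the 8+2 RAID6
# geometry is an assumption used to approximate usable space.
drives = 7200
drive_tb = 3

raw_pb = drives * drive_tb / 1000      # 21.6 PB raw
usable_pb = raw_pb * 8 / 10            # ~17.3 PB usable with 8+2 RAID6

print(f"raw: {raw_pb:.1f} PB, usable: {usable_pb:.1f} PB")
```

That lines up with the "about 17PB usable" figure quoted above.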
I thought I would go a bit deeper on the gateway models. These models support up to ten storage nodes, organized in pairs. The key difference is that instead of internal disk controllers, the storage nodes connect to external disk systems. There is enough space in the base SONAS rack to hold up to six interface nodes, or you can add a second rack if you need more interface nodes for increased performance.
SONAS with XIV gateway
XIV offers a clever approach to storage that allows for incredibly fast access to data on relatively slow 7200 RPM drives. By scattering data across all drives and taking advantage of parallel processing, rebuild times for a failed 3TB drive are less than 75 minutes. Compare that to typical rebuild times for 3TB drives that could take as much as 9-10 hours under active I/O loads!
In this configuration, each pair of storage nodes connects to external SAN Fabric switches that then connect to one or two XIV storage systems. How simple is that? These can be the original XIV systems that support 1TB and 2TB drives, or the new XIV Gen3 systems that support 400GB solid-state drives (SSD) and 3TB spinning disk drives. In both cases, you can acquire additional storage capacity in increments as small as 12 drives (one XIV module holds 12 drives).
The maximum configuration of ten XIV boxes could hold 1,800 drives. At 3TB per drive, that would be 2.4PB usable capacity.
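A rough sketch of where that usable figure comes from: XIV mirrors every block of data, and I am assuming roughly 10% of the remaining space is held back for spares and metadata (the system and drive counts are from the text; the reserve percentage is my illustration):

```python
# XIV gateway capacity sketch. Mirroring (RAID-X) halves raw capacity;
# the ~10% spare/metadata reserve is an assumption for illustration.
xiv_systems = 10          # maximum for the SONAS gateway configuration
drives_per_xiv = 180
drive_tb = 3

raw_pb = xiv_systems * drives_per_xiv * drive_tb / 1000   # 5.4 PB raw
usable_pb = raw_pb / 2 * 0.9                              # ~2.4 PB usable

print(f"raw: {raw_pb:.1f} PB, usable: {usable_pb:.1f} PB")
```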
The SONAS with XIV gateway does not require the XIV devices to be dedicated for SONAS purposes. Rather, you can assign some XIV storage space for the SONAS, and the rest is available for other servers. In this manner, SONAS just looks like another set of Linux-based servers to the XIV storage system. This in effect gives you "Unified Storage", with a full complement of NAS protocols from the SONAS side (NFS, CIFS, FTP, HTTPS, SCP) as well as block-based protocols directly from the XIV (FCP, iSCSI).
SONAS with Storwize V7000 gateway
The other gateway offering is the SONAS with Storwize V7000. Like the SONAS with XIV gateway model, you connect a pair of SONAS storage nodes to 1 or 2 Storwize V7000 disk systems. However, you do not need a SAN Fabric switch in between. You can instead connect the SONAS storage nodes directly to the Storwize V7000 control enclosures.
To acquire additional storage capacity, you can purchase a single drive at a time. That's right. Not 12 drives, or 60 drives, at a time, but one at a time. The Storwize V7000 supports a wide range of SSD, SAS and NL-SAS drives at different sizes, speeds and capacities. The drives can be configured into various RAID protection levels: RAID 0, 1, 3, 5, 6 and 10.
Each Storwize V7000 control enclosure can have up to nine expansion drawers. If you choose the 2.5-inch 24-bay models, you can have up to 480 drives per storage node pair, for a total of 2,400 drives. If you choose the 3.5-inch 12-bay models, you can have up to 240 drives per node pair, 1,200 drives total. At 3TB per drive, this could be 3.6PB of raw capacity. The usable PB would depend on which RAID level you selected. Of course, you don't have to limit yourself all to one size or the other. Feel free to mix 2.5-inch and 3.5-inch drawers to provide different storage pool capabilities.
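The drive-count arithmetic above can be sketched as follows (the pair, enclosure, and bay counts are all taken from this paragraph; nothing else is assumed):

```python
# Drive counts for SONAS with Storwize V7000 gateways, per the text.
pairs = 5               # up to ten storage nodes, organized in pairs
v7000_per_pair = 2      # each pair connects to one or two V7000s
enclosures = 1 + 9      # control enclosure plus up to nine expansions

drives_25in = pairs * v7000_per_pair * enclosures * 24   # 2.5-inch, 24 bays
drives_35in = pairs * v7000_per_pair * enclosures * 12   # 3.5-inch, 12 bays
raw_pb_35in = drives_35in * 3 / 1000                     # 3TB drives

print(drives_25in, drives_35in, raw_pb_35in)   # 2400 1200 3.6
```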
All three SONAS configurations support Active Cloud Engine. This is a collection of features that differentiate SONAS from the other scale-out NAS wannabes in the marketplace:
Policy-driven Data Placement -- Different files can be directed to different storage pools. You no longer have to associate certain file systems to certain storage technologies.
High-speed Scan Engine -- SONAS can scan 10 million files per minute, per node. These scans can be used to drive data migration, backups, expirations, or replications, for example. It is over 100 times faster than traditional walk-the-directory-tree approaches employed by other NAS solutions.
Policy-driven Migration -- You can migrate files from one storage pool to another, based on age, days since last reference, size, and other criteria. Files can be moved from disk to disk, or moved out of SONAS entirely and stored on external media, such as tape or a virtual tape library. A lot of data stored on NAS systems is dormant, with little or no likelihood of being looked at again. Why waste money keeping that kind of data on expensive disk? With SONAS, moving those files to tape can save lots of money. The files are stubbed in the SONAS file system, so that an access request for a file automatically triggers a recall to fetch the data from tape back to the SONAS system.
Policy-driven Expiration -- SONAS can help you keep your system clean, by helping you decide what files should be deleted. This is especially useful for things like logs and traces that tend to just hang around until someone deletes them manually.
WAN Caching -- This allows one SONAS to act as a "Cloud Storage Gateway" for another SONAS at a remote location connected by Wide Area Network (WAN). Let's say your main data center has a large SONAS repository of files, and a small branch office has a smaller SONAS. WAN caching gives all locations a "Global" view of all the interconnected SONAS systems, with a high-speed user experience for local LAN-based access to the most recent and frequently used files.
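To give a feel for the scan-engine rate quoted earlier, here is a quick calculation (the billion-file repository and eight-node count are hypothetical; the 10 million files per minute per node rate comes from the text):

```python
# Time to scan a large file tree at the quoted SONAS scan rate.
files = 1_000_000_000          # hypothetical: one billion files
rate_per_node = 10_000_000     # files per minute, per node (from the text)
nodes = 8                      # hypothetical node count

minutes = files / (rate_per_node * nodes)
print(f"{minutes:.1f} minutes")   # 12.5 minutes
```

A walk-the-directory-tree approach running 100 times slower would take most of a day for the same job, which is why the scan engine matters for driving migration and backup policies.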
If you want to learn more, see the [IBM SONAS landing page]. Next week, I will be across the Pacific Ocean in [Taipei], to teach IBM Top Gun class to sales reps and IBM Business Partners. "Selling SONAS" will be one of the topics I will be covering!
Next week we have two events related to Infrastructure for midsize businesses!
On Monday, August 6th, 1pm EDT, we have a TweetChat to cover "IT Infrastructure Improvements for Midsize Businesses." You can join at [http://tweetchat.com/room/expertsyschat] or simply tweet with hashtag: #ExpertSysChat
On Tuesday, August 7th, 12pm EDT, IBM's Midsize Insider is hosting me as a speaker for a Webcast: [Storage Management with IBM]. Midsize Insider is a valuable repository of expert content tailored for small-to-midsized business owners and IT decision makers.