This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years with IBM Systems, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist during his time in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this solution, a key part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
We have a new member of the ever-growing IBM Spectrum Storage family! IBM Spectrum Discover is modern metadata management software that delivers data insight for petabyte-scale, unstructured data.
IBM Spectrum Discover easily connects to IBM Cloud Object Storage (COS) and IBM Spectrum Scale and Elastic Storage Server (ESS) to rapidly ingest, consolidate, and index metadata for billions of files and objects, providing a rich layer of metadata on top of these storage sources. IBM plans to extend support to other platforms next year.
This metadata enables data scientists, storage administrators, and data stewards to efficiently manage, classify, and gain insights from massive amounts of unstructured data. The insights gained accelerate large-scale analytics, improve storage economics, and help with governance to create competitive advantage, speed critical research, and mitigate risk.
This initial release is labeled v2.0 as IBM has deployed this in beta form already at various client locations. Here are some key highlights:
Event-notifications and policy-based workflows to automate metadata ingestion and metadata indexing at a petabyte scale
Fine-grained views of storage consumption based on a wide range of system and custom metadata
Fast, efficient search through petabytes of data, resulting in highly relevant results for large-scale analytics
Ability to quickly differentiate mission-critical business data from data that can either be deleted or moved to a cheaper, colder tier
Policy-based custom tagging that enables organizations to classify and categorize data, and align this data with the needs of the business
A software developers kit (SDK) to build action agents that extract metadata from file headers and content, automate data movement, and provide integration to open source software, such as Apache Spark, Apache Tika, PyTorch, Caffe and TensorFlow, to facilitate data identification and speed large-scale data processing
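To make the "action agent" idea concrete, here is a minimal sketch of what such an agent might do: inspect a file's leading bytes and emit custom metadata tags. The function name, tag names, and magic-byte table are my own illustrative assumptions, not the actual Spectrum Discover SDK API.

```python
# Hypothetical action agent: derive custom metadata tags from a file's
# leading bytes (magic numbers). Illustrative only, not the real SDK.
MAGIC = {
    b"\xff\xd8\xff": "jpeg",
    b"\x89PNG": "png",
    b"%PDF": "pdf",
    b"ID3": "mp3",
}

def extract_tags(header: bytes) -> dict:
    """Return custom metadata tags based on a file's leading bytes."""
    for magic, kind in MAGIC.items():
        if header.startswith(magic):
            # Already-compressed media is a poor candidate for further compression.
            return {"content_type": kind,
                    "compressible": kind not in ("jpeg", "png", "mp3")}
    return {"content_type": "unknown", "compressible": True}

print(extract_tags(b"%PDF-1.7 ..."))  # {'content_type': 'pdf', 'compressible': True}
```

A policy-based workflow could then apply such tags at ingest time, letting searches and tiering decisions key off `content_type` rather than file names.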
The latest IBM FlashSystem 900 comes in two models, the AE3 "full purchase" model, and the UF3 "storage utility pricing" model where you pay less initially, and then more as you consume more of the capacity. They are the same hardware, just licensed differently.
Currently, IBM offers FCP or InfiniBand host attachment, with up to twelve 3.6TB, 8.5TB or 18TB modules (PCIe cards). A full 2U drawer would be configured as 10+P+S RAID5 for high availability and data protection.
Each module offers an embedded compression chip, but modules previously had only enough DRAM cache to support a maximum of 22TB of effective compressed data. So while the 3.6TB and 8.5TB modules could compress data up to 2.5x, the 18TB card was somewhat limited at 1.2x, which might be fine for some already-compressed data like MP3 audio or JPEG photos.
This month, IBM offers new XL MicroLatency Modules, 18TB cards with enough DRAM cache to support 44TB compressed data, up to an effective 2.4x compression ratio. A full twelve-module drawer could hold up to 440TB of effective capacity.
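The drawer-level figure follows directly from the per-module numbers above. A quick back-of-envelope check, using the 10+P+S RAID5 layout in which 10 of the 12 modules hold data:

```python
# Effective capacity of a full FlashSystem 900 drawer of XL modules,
# using the figures quoted in the announcement.
data_modules = 10              # 12 slots: 10 data + 1 parity + 1 spare
effective_per_module_tb = 44   # 18TB XL module at up to ~2.4x compression

drawer_effective_tb = data_modules * effective_per_module_tb
print(drawer_effective_tb)     # 440
```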
IBM also now offers a quad-port 16Gb FCP card that supports both SCSI and NVMe commands over fabric. This is often denoted as either FC-NVMe or NVMe/FC. The FlashSystem 900 already supported NVMe-OF for InfiniBand (see my blog post [IBM February 2018 Announcements]).
IBM Cloud Tape Connector for z/OS is a software-defined storage solution that provides an alternative to virtual tape libraries like the TS7760. Here are some highlights:
Robust virtual tape emulation solution with e-vaulting to cloud-based offsite storage for cold, archival, or backup data. Virtual tape emulation simulates IBM compatible tape controllers, tape drives, and tape volumes, maintained on any IBM z/OS-compatible disk system, such as IBM DS8000. IBM Cloud Tape Connector for z/OS provides several vault, transfer, and recovery options to support business continuity and resiliency.
Sequential z/OS data set cloud storage and retrieval. Sequential data sets stored on disk or flash storage can be moved to the cloud by IBM Cloud Tape Connector for z/OS without the requirement of performing a tape-write operation.
Automatic application recall of data from cloud, whether e-vaulted through virtual tape emulation or copied directly to the cloud.
Pervasive encryption support. This feature enables enterprises to ensure that any data copied to the cloud is encrypted before it is transmitted, automatically protecting and handling the encryption keys.
Support for IBM Cloud Object Storage using S3 protocol, as well as Amazon S3, Hitachi HCP protocol, and EMC Elastic Cloud Service Protocol.
I was in Hollywood, Florida for the IBM Systems Technical University. Here is my recap of the final two days, days 4 and 5.
The Pendulum Swings Back: Understanding Converged and Hyperconverged Systems
Once again, I presented my popular session on converged and hyperconverged systems. For converged, IBM offers IBM PureApplication systems with Power and x86 servers, as well as partnership with Cisco called VersaStack. Both support IBM Cloud Private as a platform for running applications.
For Hyperconverged, IBM offers Spectrum Accelerate and Spectrum Scale, as well as partnerships with SuperMicro that combines Spectrum Accelerate on SuperMicro x86 servers, and partnership with Nutanix for CS-models of Power servers pre-installed with Nutanix software.
Unlike other converged and hyperconverged solutions that act as isolated islands of compute and storage, IBM's solutions can be incorporated into an existing datacenter with IBM Cloud Private for orchestration, and IBM Spectrum Scale to provide common access to data.
The Seven Tiers of Business Continuity and Disaster Recovery
With all the natural disasters that happened last year in the USA, and the more recent ones all over the world, this session continues to draw a crowd.
The seven tiers range from the least expensive to most expensive. The least expensive involves restoring data from tapes stored in an offsite vault. Tape continues to be the least expensive storage medium, and can be used to bring up a company in a few days.
For faster recovery, there are options like electronic vaulting to virtual tape libraries, and now the use of Cloud storage for ubiquitous access to data from different locations.
Snapshots of entire volumes, virtual machines or databases are also quite popular. IBM offers IBM Spectrum Protect Snapshot, Spectrum Protect Plus, and Spectrum Copy Data Management for this.
Faster recovery is possible with remote mirroring. This involves sending all of the updates to a secondary location. In the event of a disaster, clients can switch processing with the data already there. IBM has over 800 clients able to do just that in less than 30 minutes.
Event Night by the Pool
Photography by Mo Reyes
While Hurricane Michael raged in upper Florida the week prior, the event coordinators were a bit nervous to offer an evening dinner event by the pool, but the weather cooperated!
Photography by Mo Reyes
I was a social butterfly, moving from table to table to talk to all of the various attendees. A light breeze and excellent food and music made for an enjoyable night!
The pool reception went on until about 10:00pm. IBM had lit up its logo in the pools for a great view from above. Perhaps just 30 minutes after I arrived back at my hotel room, we had quite the thunderstorm! How incredibly lucky this did not happen during the event!
The following day, I presented my session on "Managing Risk with Data Footprint Reduction", a repeat of the session I did earlier that week.
This was a pleasant way to end the week! Aside from the heat and humidity being above average for October, it was a beautiful hotel in a lovely city.
Last week, I was in Hollywood, Florida for the IBM Systems Technical University. Here is my recap of days 2 and 3.
Information Lifecycle Management: Why Archive is Different than Backup
Some companies keep backup copies for years and years. They think this is all they need to do to comply with government regulations for data retention. They could not be more wrong!
This session explained why keeping backups for more than a few months is a bad idea, and how to fix it with proper Information Lifecycle Management practices: the proper use of archive as an alternative to keeping backups too long, and the advantages of archive versus backup.
Storage for Rookies: Introduction to IBM Cloud Object Storage
My session on IBM COS was so popular, we repeated it for the "Storage for Rookies" track. In this track, registrants attend specifically selected topics to complete a "degree". This is "University" after all!
My session was organized into three sections. First, a general overview of "Object Storage" that can be accessed via HTTP over TCP/IP networks, and how this is different from traditional block or file storage.
Second, a review of the architecture and features of IBM Cloud Object Storage, and how these can be deployed on-premises, in a hybrid cloud configuration, or used in the public IBM Cloud.
Third, how to use IBM Cloud Object Storage for various use cases, including programming languages that support object storage, NAS gateways, and backup software like IBM Spectrum Protect.
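To illustrate the first point of the session, here is a toy in-memory sketch of the object-storage access model: a flat namespace of bucket/key pairs manipulated with whole-object PUT/GET/DELETE operations, the way an HTTP client talks to IBM COS. The class is purely illustrative, not the COS API.

```python
# Toy model of object storage semantics: flat bucket/key namespace,
# whole-object operations, no seek or append as with file storage.
class MockObjectStore:
    def __init__(self):
        self._objects = {}  # (bucket, key) -> bytes

    def put(self, bucket: str, key: str, data: bytes):
        # Objects are written whole; updates replace the entire object.
        self._objects[(bucket, key)] = data

    def get(self, bucket: str, key: str) -> bytes:
        return self._objects[(bucket, key)]

    def delete(self, bucket: str, key: str):
        del self._objects[(bucket, key)]

store = MockObjectStore()
store.put("backups", "2018/10/db.dump", b"full database dump")
print(store.get("backups", "2018/10/db.dump"))  # b'full database dump'
```

The "folders" you see in object storage GUIs are just key prefixes like `2018/10/` in a flat namespace, which is one of the key differences from a true file system hierarchy.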
Managing Risks with Data Footprint Reduction
What happens when airlines sell more tickets than actual seats on the airplane? Travelers get upset, and sometimes the airline has to forcibly drag people off the plane.
Likewise, storage admins who over-provision storage run the risk of having application outages from out-of-space conditions. This session explained how thin provisioning, deduplication and compression can help, but at other times make things more complicated.
IBM Spectrum Scale Users Group
We had a great turn-out for this "Users Group". IBM Spectrum Scale and Elastic Storage Server grew substantially last year, and we are keeping up the momentum!
We had several presenters cover various updates, followed by cocktails!
After all that excitement, we went to Jimmy Buffett's Margaritaville for a "Storage Team" dinner. There were karaoke singers, accompanied by a live band. A fun time was had by all!
Last week, I was in Hollywood, Florida for the IBM Systems Technical University. Here is my recap of day 1.
Introduction to IBM Cloud Object Storage (powered by CleverSafe)
For the first session of the week, at 8:30 in the morning, this was a surprisingly interactive session. I received lots of questions from the attendees.
This session was organized into three sections. First, a general overview of "Object Storage" that can be accessed via HTTP over TCP/IP networks, and how this is different from traditional block or file storage.
Second, a review of the architecture and features of IBM Cloud Object Storage, and how these can be deployed on-premises, in a hybrid cloud configuration, or used in the public IBM Cloud.
Third, how to use IBM Cloud Object Storage for various use cases, including programming languages that support object storage, NAS gateways, and backup software like IBM Spectrum Protect.
Opening Session: Storage Panel of Experts
The opening session started out with an introduction of Calline Sanchez, the new Vice President for IBM Systems Lab Services.
This was followed by something completely different. Mo McCullough asked a panel of experts a series of questions, combined with recommended sessions that support each solution. We had the following experts, shown sitting from left to right in the photo:
Clod Barrera, IBM CTO for Storage
Kelly Groff, Senior offering manager for FlashSystem
Jack Arnold, Security Specialist for U.S. Federal Systems
Brian Sherman, IBM Washington Systems Center
Tony Pearson (yes that is me on the far right)
The session was then wrapped up by Mario Franzone, manager of Technical Events, showing off the latest features of the "IBM TechU" mobile app, which provides the agenda, maps, and other useful information to navigate the conference smoothly.
IBM Hybrid Cloud and Multicloud Storage Solutions
This was previously called IBM Hybrid Cloud Storage Solutions, but now that many clients choose to have multiple different cloud configurations, I added "Multicloud" to the mix.
I organized this talk into five sections:
Archiving less active storage to the Cloud
Hybrid Cloud configurations for backups and snapshots
Business Continuity and Disaster Recovery
Daily Operations, Reporting and Analytics
Production applications in the Hybrid Cloud
I added some slides near the end of my talk about IBM Cloud Private. IBM Spectrum Access blueprints with IBM Spectrum Connect provide interfaces for persistent storage for VMware, Microsoft, Cloud Foundry and Docker Containers.
This was a good way to start the week! Attendees were thankful that they had missed Hurricane Michael that swept through Florida the week before. The red tide had abated, and wind speeds were back to normal levels.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
Here is a quick recap of the October 9, 2018 announcements this week.
IBM Elastic Storage Server V5.3.2
The new IBM Elastic Storage Server v5.3.2 offers support for new drawers, non-disruptive upgrades of older models, and an optional 100GbE switch.
When the ESS was first announced, we had GSx models and GLx models, where x represented the number of storage drawers. The "S" stood for small 2U-24 drive drawers, so for example the GS4 had two Power8 servers combined with four 2U-size flash SSD drawers. The "L" stood for large 4U-60 drive nearline HDD drawers.
The second generation models append "S" for Second, so we had GS4S and GL6S. The large models changed to larger 5U-84 drive drawers. As with the previous "L" models, two slots per system contain Solid State Drives for internal use and caching, leaving the rest for slower spinning HDD disk.
Before this week, upgrading from one model to another meant moving the data off, installing and configuring the additional drawers, and then moving the data back. With today's announcements, you can now non-disruptively upgrade GS1S to GS2S to GS4S models, and GL1S to GL2S to GL4S to GL6S.
While you can federate GS and GL models together, that may mean spending more on Power8 servers than you are comfortable with, so IBM added "GHxy" Hybrid models, with x 2U-24 drive drawers and y 5U-84 drive drawers. Initial models included the GH14 and GH24, which had one or two flash drawers and four large drawers. This week, IBM announced a new GH12 model. The SSD flash in the 2U drawer can be 3.84TB or 15.36TB, and the nearline drives in the 5U drawers can be 4TB, 8TB or 10TB capacities.
What did IBM call the third generation GL models? Instead of using "T", which is both the next letter in the alphabet after "S" and the initial letter of the word "third", IBM decided to use "C" to designate the CORAL project, the Collaboration of Oak Ridge, Argonne, and Lawrence Livermore national labs. Since the change applied only to the GL models, not the GS models, this makes sense.
To meet the requirements to build the world's fastest supercomputer for the CORAL project, IBM created a modified Elastic Storage Server model with 4U drawers that contained 106 drives. Now, these are available to the general public! IBM announced GL1C, GL2C, GL4C and GL6C models. In these, there are 2 SSD drives, and the rest are 10TB nearline drives.
The new optional 100GbE switch has 32 ports with a total of 6.4 Tbps of bandwidth. These can support 10, 40, 50 and 100GbE data rates, with 300 nsec port-to-port latency for 100GbE.
Spectrum Scale is licensed two ways: the Standard Edition, based on the number of sockets, with different prices for NSD servers, FPO servers and NSD clients; and the "Data Management" edition, which offers advanced features and is based on the capacity of NSD storage, independent of the number of servers and clients attached.
Clients liked the capacity-based license model, but did not necessarily need the advanced features. In response, IBM now offers the "Data Access" edition, which offers the same features and functions of Standard Edition, but with capacity-based licensing.
For ESS models, you can choose to license by disk as before, or by capacity in combination with Spectrum Scale capacity-based deployments.
Hortonworks Data Platform v3.0.1 has followed suit. With the merger between Hortonworks and Cloudera, Hortonworks now offers capacity-based licensing for shared storage, like the IBM Elastic Storage Server.
IBM FlashSystem A9000/A9000R software version 12.3
There are three enhancements in this release: Three-site replication, a new model of A9000R, and raising a previous pool size limit.
For three-site replication, you can now combine HyperSwap, which maintains two identical copies at distance, with a third copy kept via asynchronous mirroring. The first two are typically within 100 km of each other, but the third copy can be a much greater distance away, across the continent if you like.
The A9000 "Pod" had three x86-based controllers and one FlashCore drawer. The A9000R "Rack" had four, six or eight x86-based controllers and two, three or four FlashCore drawers, respectively, as well as a Power Distribution Unit (PDU) and a pair of InfiniBand switches to connect everything together. The new "Grid Starter" model is very much like the "Pod", with three controllers and one FlashCore drawer, but adds the PDU and IB switches. The idea is that you can start with a "Grid Starter", then later upgrade to the larger A9000R models as you grow.
Back in XIV days, the architectural limit per pool of 1PB was plenty big. But with the new capacities on the A9000 and A9000R, the 1PB limit was starting to draw complaints. This limit was lifted, so that now a single pool can be made with the entire capacity of the box.
In the mainframe world, IBM Geographically Dispersed Parallel Sysplex, now just GDPS, provides the highest BC-7 business continuity tier, providing end-to-end coordination of servers, networks and storage devices. For IBM Power Systems, similar BC-7 support is provided by IBM Geographically Dispersed Resiliency.
In this week's announcement, IBM Geographically Dispersed Resiliency (GDR) for Power Systems has been renamed and now offered in two editions: VM Recovery Manager HA and VM Recovery Manager DR. The "HA" edition provides high availability using Power Systems Live Partition Mobility for AIX, IBM i and Linux operating systems.
The "DR" edition provides both High Availability and Disaster Recovery capabilities, supporting mirrored storage systems like IBM DS8000, SAN Volume Controller, FlashSystem 9100 and V9000, and Storwize systems, as well as competitive storage from Dell EMC and Hitachi.
Next week, I will be in Hollywood, Florida for IBM Technical University (Oct 15-19), and then Rome for the IBM Technical University (Oct 22-26). I will be covering many of these announcements above, and more!
I have returned safely from the IBM Technical University in Johannesburg, South Africa, and am now preparing for my next events. IBM plans to hold back-to-back Technical University events in Hollywood, Florida.
October 8-12 will focus on [IBM Z mainframe, and a subset of IBM Storage] that offer synergy for IBM Z, such as the DS8880 storage system and the TS7760 Virtual Tape Engine. There will be 28 sessions on storage related to IBM Z.
It's September, and many students are going back to school. A friend had asked me for advice to give his son as he enters high school. Here were my thoughts.
(FTC Disclosure: I work for IBM. I have not received any compensation from any third parties for the products or services mentioned in this post.)
I highly endorse David Allen's book [Getting Things Done]. Trying to remember all of the homework, tasks and assignments that you need to get done can add unnecessary stress.
The trick is to write these things down. Whether this is on paper, or electronically, David's GTD process works.
Students should learn to become "Search Ninjas" in finding information to complete their homework and tasks. To that end, I recommend using a site like "LastPass" to store unique passwords for each online service.
LastPass is short for "the last password you will ever need to remember", as it stores all of your passwords for banking, social media, and other online resources. One strong password gets you in. This is further strengthened by two-factor authentication, such as using "Google Authenticator". In this manner, to log into your LastPass account, you need both your strong password and access to your smartphone, where Google Authenticator provides a six-digit code to validate your identity.
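For the curious, the six-digit codes from authenticator apps are not magic: they come from the TOTP algorithm (RFC 6238), which combines a shared secret with the current 30-second time window. A minimal stdlib-only sketch:

```python
import hashlib
import hmac
import struct

# Minimal TOTP (RFC 6238): HMAC the time-step counter with the shared
# secret, dynamically truncate, and keep the last `digits` decimal digits.
def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    counter = struct.pack(">Q", timestamp // step)          # 8-byte big-endian
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at time 59 seconds
# yields 94287082 (8 digits, SHA-1).
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Because both your phone and the server derive the code from the same secret and clock, the code proves possession of the phone without ever sending the secret over the network.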
Writing your thoughts down sometimes requires different approaches. A [mind map] is a hierarchical diagram to help capture thoughts non-linearly. I have seen these used to capture thoughts generated during idea brainstorming sessions. I use them to help me create new PowerPoint presentations.
There are many mind mapping tools available. On my smartphone, I use the [SimpleMind] app. On my Linux laptop, I use [View Your Mind]. Try several out, and pick the one that works best for you.
The trick is to identify which general pattern a specific problem falls under, and use the general solution as the basis for solving it. Part of this approach is to identify all of the inherent contradictions and eliminate or address them one by one.
Whenever Rafael complains to me he has a problem to solve, like figuring out how to get the oil changed in his car, I ask him how he would solve it in a video game. He would reply that he would determine what "items" he needed, either to trade or gain entry into a realm, and what sequence of steps needed to happen in what order. I would then explain that life is just like that, except instead of jewels and swords, you are using cash or credit cards!
Not surprisingly, IBM technology can be found in certain models of Sony PlayStation, Nintendo Wii, and Microsoft Xbox.
Imagine having homework in three different subjects. A student might spend all night on one topic, and never get around to the other two. The [Pomodoro Technique] is surprisingly simple. It focuses on two problems kids have these days: getting started, and staying focused.
The technique divides up the hours available into 25-minute slots, with 5-minute breaks in between. For example, the student might spend the first 25 minutes on math homework, then take 5-minute break playing video games, then 25 minutes reading History, then 5-minute break checking Facebook, then 25 minutes completing an essay for Spanish class. Each 5-minute break helps to clear the mind for the next task.
I use this method at work. Often, I have a variety of tasks facing me, booking flights for my next trip, updating PowerPoint presentations, and writing my next blog post. Breaking up the day into smaller 25-minute segments helps me stay focused.
In Italian, "pomodoro" means tomato, and the 25 minutes was inspired by a 25-minute kitchen timer shaped like a tomato. You certainly don't need a tomato-shaped timer to use this technique, as there are smartphone apps available to do this for you.
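The scheduling arithmetic above is simple enough to sketch in a few lines. Here is a small illustrative helper (names are my own) that slices available time into 25-minute work slots with 5-minute breaks, cycling through pending subjects:

```python
# Slice available minutes into Pomodoro slots: 25 minutes of work,
# then a 5-minute break, cycling through the pending tasks.
def pomodoro_schedule(tasks, total_minutes):
    schedule, minute, i = [], 0, 0
    while tasks and minute + 25 <= total_minutes:
        schedule.append((minute, minute + 25, tasks[i % len(tasks)]))
        minute += 30   # a 25-minute work slot plus a 5-minute break
        i += 1
    return schedule

for start, end, task in pomodoro_schedule(["math", "history", "Spanish essay"], 90):
    print(f"minute {start}-{end}: {task}")
```

With 90 minutes available, this yields three slots, matching the evening described above: math, then history, then the Spanish essay, each separated by a short break.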
High school is a good time to start developing good habits in project management, time management, problem solving and password security.
Do you have any suggestions? Please feel free to contribute in the comment section below!
Last week, September 11-13, I was in Johannesburg for the IBM Technical University! The event was held at the Hyatt Regency in the Rosebank section of town. This event was focused on IBM Systems, including storage, Power systems, and IBM Z mainframe servers. Here is my recap for the third and final day:
What else can you use that data for? Adventures in Data Reuse
Did you know that IBM invented "Copy Data Management" in 1998? I do, of course, since I was one of the inventors! Originally developed for DFSMS on z/OS, there are now copy data management solutions for a multitude of operating systems, databases and applications.
This session covered IBM Spectrum Protect Snapshot, IBM Spectrum Protect Plus, and IBM Spectrum Copy Data Management.
Copies of production data are not just for data protection and disaster recovery. The copies can be reused for other IT or business purposes:
Testing and DevOps - After a copy of production is made, columns of databases containing sensitive, personally-identifiable information (PII) can be masked, scrambled or obfuscated, to keep them out of the prying eyes of testers and developers. IBM Spectrum Copy Data Management offers data masking features.
Reporting and Analytics - Running reports or analytics against production data can drastically impact performance and cache hit rate on storage devices. Making copies to other systems, and running reports and analytics elsewhere makes a lot of sense.
Hybrid Cloud - Why limit your copies to just your own data center? Copies of data can be sent to off-premises to perform DevOps, Reporting and Analytics in the cloud.
Be Persistent in your Journey to Private Cloud
IBM offers persistent storage for IBM Cloud Private deployments. This includes IBM Spectrum Virtualize family of products, Spectrum Accelerate family of products, VersaStack converged systems, and DS8000 systems.
IBM Spectrum Access blueprints are available to deploy persistent storage for IBM Cloud Private software on VersaStack, POWER and IBM Z servers.
IBM Spectrum Connect provides the necessary interfaces for Kubernetes to claim persistent storage for Docker containers.
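In Kubernetes terms, an application claims that storage with a PersistentVolumeClaim against a storage class served by the Spectrum Connect interfaces. A hypothetical example follows; the claim name and storage class name are illustrative placeholders, not values from any IBM documentation:

```yaml
# Hypothetical PersistentVolumeClaim; names below are illustrative only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: spectrum-connect-gold   # assumed storage class name
```

A Docker container's pod then mounts the claim as a volume, and the backing storage follows the container wherever Kubernetes schedules it.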
Is your data center ready for NVMe, NVMe-OF or FC-NVMe? Initiated in 2011, the NVMe standard is relatively young. I covered its short history, why zero-copy protocols like FCP and RDMA can drastically reduce latency, and all the components needed for a complete end-to-end solution.
Inside All-Flash Arrays, you can use standard 12Gbps SAS to connect to SCSI-based Solid-State Drives (SSDs), or you can use the much faster PCIe bus at 32Gbps with NVMe-based drives.
NVMe provides for advanced parallelism, since flash is not mechanical and does not rely on the position of a read/write head over a platter as spinning disks do. Traditional SSDs pretend to be spinning disks, so they often process one command at a time to maintain the charade.
NVMe is designed to work only with flash devices, so it uses a streamlined set of 15 commands, versus the 34 commands SCSI uses to handle other storage media.
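The benefit of that parallelism can be shown with a toy simulation (simulated timings, not real device behavior): each mock command carries a fixed latency, and keeping many commands in flight at once, as NVMe's deep submission queues allow, overlaps that latency instead of paying it serially.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Toy model: 32 commands, each with 10 ms of simulated device latency.
def io_command(_):
    time.sleep(0.01)   # pretend latency of one I/O command
    return 1

commands = range(32)

start = time.perf_counter()
sum(io_command(c) for c in commands)   # one outstanding command at a time
serial = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=32) as pool:   # many commands in flight
    sum(pool.map(io_command, commands))
parallel = time.perf_counter() - start

print(f"serial: {serial:.2f}s, parallel: {parallel:.2f}s")
```

The serial pass takes roughly 32 x 10 ms, while the parallel pass completes in little more than a single command's latency, which is the essence of why deep queues matter for non-mechanical media.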
But having an NVMe-inside All-Flash Array is not the end of the story. Rather than sending all of those SCSI commands across the network, only for some to be disregarded when they arrive, you can send the streamlined NVMe commands instead. NVMe over the network is available now: NVMe-OF offers support for Ethernet and InfiniBand, and FC-NVMe offers support for FCP.
The last stage is application exploitation from the host server. The industry still needs Operating System drivers, multipathing drivers, and applications that take advantage of NVMe. IBM anticipates this will occur later this year, and into 2019.
IBM Storage Infrastructure Optimization (SIO) assessment
Ishmail Shaik of IBM Lab Services presented an interactive peek at what an SIO entails.
In 2005, I led a series of "Information Lifecycle Management" (ILM) studies for various clients, combining the methods from "disk studies" and "tape studies" that I had performed since the 1980s.
These ILM studies proved to be the ultimate win-win scenario, not only saving the clients millions of dollars, but often resulting in follow-on sales of IBM storage hardware, software and services.
Over those 18 months, I trained several IBM Systems Lab Services colleagues in the process. These studies formed the basis of "Storage Infrastructure Optimization" assessments launched officially in 2011.
The SIO assessment process has evolved a lot since I was last involved with it. Here are a few of the changes I noticed from his presentation:
Core Modules - No longer just focused on Lifecycle Management, SIO studies offer four additional modules: Modernize & Transform, Business Resiliency, Manage & Control, and New Workloads.
Data Collection - The biggest challenge back then was collecting data to provide recommendations. I managed with in-person interviews and what few tools were available back then, collected into TCO spreadsheets, Visio diagrams and PowerPoint slides. Today, we have sophisticated data collection tools, including IBM Spectrum Control, Storage Insights, Arxview, and Butterfly.
Engagement Workshop - SIO now has incorporated "Design Thinking" methodology to help clients prioritize findings into a set of short-term, medium-term and long-term recommendations.
The three-day event ended with a closing session, hosted by Mario Franzone.
Last week, September 11-13, I was in Johannesburg for the IBM Technical University! The event was held at the Hyatt Regency in the Rosebank section of town. This event was focused on IBM Systems, including storage, Power systems, and IBM Z mainframe servers. Here is my recap for the second day:
Nutanix 101: Intro to Hyperconverged Infrastructure and Private Cloud on IBM Power Systems
I attended this based on the abstract for this session:
"Learn in this session why IBM has partnered with Nutanix around hyperconvergence, how this architecture can help drive simplicity, performance and cost efficiency into your IT landscape. You will get both a high level overview on Nutanix, as well as how IBM CS Series is using the Nutanix software to deliver a worldclass application platform, followed by a live demo to show you how Nutanix works."
Sadly, I felt the title and abstract were partially misleading.
Rui Gonclaves from Nutanix gave a nice overview of how Nutanix software can help drive simplicity and cost efficiency to x86 server deployments. It supports VMware, Hyper-V and its own version of Linux KVM called the Acropolis Hypervisor (AHV). Its PRISM software helps to provide one-click management convenience for a cluster of x86 servers.
Nutanix considers its software to be the value of the solution, and treats the servers it runs on as mere commodities. By partnering with IBM, Nutanix adds another concubine to its harem. The only subtle reference to the new CS models was an IBM logo among the logos of Lenovo, HP, DellEMC, and Cisco UCS. Rui failed to cover any details of the CS models, nor their advantages over x86 servers.
(IBM, on the other hand, considers its hardware to be the value of the solution, and treats the applications as commodities. IBM Power servers run open source databases like MongoDB and EnterpriseDB particularly well. For example, a 3-node cluster of IBM CS822 servers (22-core models) was able to run more than twice the transactions per second (tps) per dollar of a comparable cluster of 24-core Dell CX630-10 machines.)
Rui finished his presentation 25 minutes early, so there would have been enough time to cover the CS models, or show a live demo, but that didn't happen either.
Save the World! Save your IT Budget with IBM Cloud Object Storage
All of the presenters at this conference were asked to come up with fun and quirky titles for their sessions. Since clients use IBM Cloud Object Storage (COS) to save large repositories of active archives, the phrase "Save the World!" has a double meaning.
IBM has clients with more than 100 PB deployments of IBM Cloud Object Storage, so the idea that you can "Save the world's amount of data" was not too outrageous.
IBM COS is relatively inexpensive, with a total cost of ownership up to 70 percent lower than traditional disk-based solutions. A lot of your data is probably static, stable, unstructured content ideal for low-cost storage with IBM COS, so the idea that you can save your IT budget wasn't outlandish either.
Discover advanced features & last announcements with IBM Spectrum Virtualize
When I saw this title, I was afraid it might overlap too much with my session "Dip your TOE in our Pool". Instead, Dominique Salomon from the IBM Client Experience Center in Montpellier, France, presented a great overview of the basic and advanced features of the Spectrum Virtualize family of products.
He covered automated tiering with IBM Easy Tier, data footprint reduction with Thin Provisioning, Compression and Deduplication, as well as Copy Services like FlashCopy and remote mirroring.
How big is your NAS? Sizing, Management, and Deployment
While I had fun coming up with quirky titles for my sessions, their drawback is that they force people to read the abstracts to understand what will be covered in each session.
In this session, I covered IBM's three main NAS offerings: Spectrum Scale, Spectrum NAS, and IBM Cloud Object Storage with NAS gateways from Ctera Networks, Avere, Panzura, and Nasuni.
The rest of the session covered IBM's new File and Object Storage Design Engine (FOS-DE) studio, an online tool that helps decide which of the three NAS solutions is the best fit, and produces a rough configuration that meets a client's specific capacity and performance requirements.
The FOS-DE tool is available at no charge to all IBM employees, IBM Business Partners, and prospective clients.
I wasn't planning to give a live demo, but I finished ten minutes early and had a decent Wi-Fi connection, so I was able to demonstrate the FOS-DE studio with the remainder of my time slot.
Nightmares and Dreams: Manage your entire Storage Infrastructure with IBM Spectrum Control and Storage Insights
What keeps you up at night? That was the question that motivated the title of this session. I organized this topic into three segments:
Visibility - Can you even understand your storage infrastructure? IBM Storage Insights is available at no additional charge for IBM block storage devices, and can greatly enhance your visibility into your capacity growth, performance bottlenecks, and other vital insights.
Control - Reporting alone is not enough; sometimes you need to take action. IBM Spectrum Control Standard Edition, Spectrum Connect, and Copy Services Manager can help you configure, provision, and perform other actions on your storage infrastructure.
Automation - As data centers grow, the actions required often overwhelm existing IT staff. IBM Spectrum Control Advanced Edition adds analytics and automation.
Johannesburg is nine hours ahead of my home town of Tucson, Arizona. Jet lag hit me hard this second day, so I opted out of the evening activities, and got some much needed rest.
Last week, September 11-13, I was in Johannesburg for the IBM Technical University! The event was held at the Hyatt Regency in the Rosebank section of town. This event was focused on IBM Systems, including storage, Power systems, and IBM Z mainframe servers. Here is my recap for the first day:
Opening Keynote Session
The conference was opened by a warm welcome from Ronnie Moodley, IBM Executive for Systems Hardware. He explained that we live in a VUCA world. For those who have not heard this term before, it is a four-letter acronym that combines four different business challenges: [Volatility, Uncertainty, Complexity, and Ambiguity].
Ronnie also mentioned the shifts in marketing, from the "four P's" to the "four E's":
Experience - Clients are no longer evaluating individual products, but also the services that come with them, the context in which they are used, the identity of the users, and other characteristics that together provide a complete experience.
Exchange - With so many free, open-source alternatives, the question is not how the prices of competing products compare, but what you give up by choosing one option over another. This is often referred to as total cost of ownership (TCO), or "opportunity cost" in economic terms.
Everyplace - The Internet and cloud technologies now allow people to buy and use products practically anywhere. Having a brick-and-mortar location on a busy street corner may no longer be a competitive advantage.
Evangelism - Old marketing methods relied on uni-directional promotion from corporate marketing teams. Today, social media, blogs, and word-of-mouth evangelism exert greater influence on purchase decisions.
The second segment was "The World is our Lab", by Kugendran Naidoo, IBM Research South Africa. Unlike some companies that consolidate all of their research to one location, IBM does research across the globe, with two locations in Africa (Nairobi, Kenya and here in Johannesburg, South Africa).
Dr. Naidoo explained that research often leads us into areas we weren't expecting. For example, an algorithm developed to detect black holes in space failed at that task, but the mathematics behind it turned out to be the key to Wi-Fi.
The story begins back in 1974, when Stephen Hawking theorized that under certain circumstances, small black holes might "evaporate" and simultaneously emit radio signals. These hypothesized black holes were about the mass of Mount Everest, and smaller than an atom. Soon after, the physicist and engineer John O'Sullivan tried to find these signals.
If these small black holes were evaporating, they would emit radio signals as they vanished. But because of their great distance from us, these signals would be hard to identify because they would be tiny by the time they arrived, as well being buried in a background of louder 'noise'. Furthermore, this tiny signal would be 'smeared' (turned from a sharp spike into a rounded shape). So he and his colleagues came up with a wonderful mathematical tool to detect these tiny, smeared signals.
As it turned out, they never did find these small black holes.
In 1992, John O'Sullivan was at the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Australia, trying to develop computer networks that communicated without wires.
But there was a big problem. The signals he wanted to detect were tiny, smeared and buried in a background of louder 'noise'. Just like the black hole signals.
By a wonderful coincidence, his black hole mathematics turned out to be the key to Wi-Fi. CSIRO took out patents in Australia in 1992, and in the US in 1996. By 2000, they had some working chips.
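For the curious, the core trick in the story above, pulling a faint, "smeared" pulse out of much louder noise by correlating the data against the pulse's known shape, can be sketched in a few lines of Python. This is a toy illustration only, with made-up numbers; O'Sullivan's actual work used far more sophisticated signal processing:

```python
import numpy as np

rng = np.random.default_rng(7)

# A faint, "smeared" pulse: a sharp spike blurred into a rounded Gaussian shape.
n = 8192
arrival = 5000                                  # true location of the pulse
width = 40.0                                    # how badly the pulse is smeared

k = np.arange(401)
template = np.exp(-0.5 * ((k - 200) / width) ** 2)   # the known rounded shape

data = rng.normal(0.0, 1.0, n)                       # loud background noise
data[arrival - 200:arrival + 201] += 1.5 * template  # faint signal buried inside

# Matched filter: slide the known shape along the data and correlate.
score = np.correlate(data, template, mode="valid")
detected = int(np.argmax(score)) + 200          # convert start index to pulse center

print(detected)  # prints a value close to 5000
```

Even though the pulse peak is barely above the noise floor, the correlation concentrates its energy into one sharp maximum, which is why the same mathematics worked for weak, multipath-smeared radio signals.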
Improve your NAS environment in One Day! Introducing IBM Spectrum NAS
IBM has been in the NAS storage business for decades. IBM Spectrum NAS is our most recent software defined storage offering. This session gave an overview of how Spectrum NAS is designed. This software can be deployed on as few as four nodes in less than an hour, leaving you the rest of the day to migrate your data from other NAS solutions.
IBM Spectrum NAS fills the gap between a single file server and expensive dual-controller models available commercially. A single file server, running perhaps Windows Storage Server or Linux with NFS and Samba, represents a single point of failure (SPOF). Lose the one server, and your department or team loses access to all of those shared files!
At the other extreme, commercial dual-controller NAS devices, such as those from NetApp or DellEMC, are loaded with advanced features and application-specific capabilities. Some people take advantage of these, others don't.
IBM Spectrum NAS is software defined storage that runs on four or more nodes, is highly available, and provides many of the advanced features offered by commercial dual-controller models at roughly half the total cost of ownership.
Dip your TOE in our Pool! iSER and Data Reduction with IBM Spectrum Virtualize
All of the presenters at this conference were asked to come up with fun and quirky titles for their sessions. The title is a bit of wordplay.
When IBM launched its SAN Volume Controller in 2003, I was one of the "Technical Evangelists" that traveled around the world to explain how it works. Today, 15 years later, I am still talking about how great this technology is.
Ethernet network interface cards that have co-processors to offload some of the TCP/IP processing are called TCP-Offload-Engines, or "TOE" cards.
IBM recently announced two new flavors of 25GbE cards, one that supports RDMA over Converged Ethernet (RoCE), and another that supports Internet Wide Area RDMA protocol (iWARP).
To implement data deduplication, the Spectrum Virtualize team refactored the code that handled pools of managed space. The original pools are now referred to as "Legacy Storage Pools", and the new pools are referred to as "Data Reduction Pools".
Fahima Zair, Tony Pearson, and Maria Lancaster
After the sessions, we had a nice evening reception to celebrate the General Availability of the IBM FlashSystem 9100. At events like these, many attendees are local and commute to the event, so I was happy to see many stuck around to have conversations with the experts.
I was able to reconnect with many of my colleagues, including Fahima Zair in charge of our VersaStack relationship with Cisco, and Maria Lancaster from our Storage Marketing team.
Well, it's Tuesday again, and you know what that means? IBM Announcements! This week I am in San Francisco, California speaking to clients. A bit colder than Tucson, Arizona!
(FTC Disclosure: I work for IBM. Special thanks to Mark Larson (IBM SAN team), and both Craig Nelson and Peter Schmelter from Broadcom, for their assistance with this post. I have no personal financial interest in Broadcom. This blog post can be considered a "paid celebrity endorsement" of the IBM products mentioned below.)
Spectrum Control v5.3
Back in 2003, I was the chief architect of Spectrum Control v1, formerly called TotalStorage Productivity Center, and later Tivoli Storage Productivity Center. IBM Spectrum Control is part of the IBM Spectrum Storage Suite.
There are two editions: Standard Edition and Advanced Edition.
(What happened to the other editions? The "Base Edition" is now called IBM Spectrum Connect. The "Spectrum Control Storage Insights" service in the IBM Cloud is now just called IBM Storage Insights and Storage Insights Pro.)
The Standard Edition v5.3 offers the following:
Capacity visualization and management, Performance troubleshooting, Health and performance alerting, Application modeling, and support for VMware data sources
Create, save, and send reports directly in the web UI. The reports can be run now, or scheduled to be run later. When a report is run, it can be sent by email or exported and saved in different file types.
Support for the IBM FlashSystem 900 AE3 models using compression, and for the new IBM FlashSystem 9100
Improved automation of counting the licenses for enclosure-based storage devices
The latest IBM Copy Services Manager (CSM) v6.2 for managing remote mirroring, replacing the previous IBM Tivoli Storage Productivity Center for Replication.
The Advanced Edition v5.3 provides all of the above, as well as the following.
Tiered storage optimization with intelligent analytics
Service catalog with policy-based provisioning
Self-service provisioning with restricted use logins
Analysis of reclaimable space
Showback and Chargeback reports
Application-based snapshot management using IBM Spectrum Protect Snapshot (formerly known as IBM FlashCopy Manager, FCM)
Clients with v5.2.x version of IBM Spectrum Control can upgrade to this new release.
Clients with IBM Spectrum Virtualize-based appliances can bundle Spectrum Control v5.3 with the latest Spectrum Virtualize v8 code. This bundle is referred to as "IBM Virtual Storage Center", or VSC for short. VSC supports SAN Volume Controller, FlashSystem 9100 and V9000, Storwize V7000 and V5000 models.
IBM's announcement of NVMe-capable FlashSystem 9100 has caused many to re-evaluate their SAN infrastructure. All IBM b-type Gen5 and Gen6 switches and directors are NVMe-ready!
(Last year, Broadcom completed its acquisition of Brocade. I am thankful both start with the letter "B", so we won't have to rename our B-type switches to another letter!)
There are two new products in this announcement. The SAN 128B-6 is a Gen6 switch in a 2U enclosure. The other is a 64-port blade that fits into existing Gen6 Directors, like the 256B-6 or 512B-6 models.
But the 128B-6 doesn't have 128 standard ports! It actually has 96 standard ports, plus eight "Q-Flex" ports that can be used to create a total of 128 ports. Likewise, the 64-port blades have 16 Q-Flex ports that can be used to create 64 ports.
What is going on? The Q-Flex ports can actually run four channels in different colors of light over the same fiber optic cable, reducing the wiring mess. These Q-Flex ports can be used for host or device traffic, but are often used as "Inter-Switch Links", or ISL for short.
All of the standard and Q-Flex ports are 32Gbps, but are capable of autosensing 4, 8, 16, and 32 Gbps port speeds, depending on the SFPs used, for interoperability with existing servers and storage devices. In the case of Q-Flex, all four colors must run at the same speed, so a Q-Flex port represents either 4x32, 4x16, 4x8 or 4x4 Gbps links. You cannot mix different speeds on a single Q-Flex port.
In addition, the 64-port blade also supports 10 GbE, 25 GbE, and 40 GbE using the appropriate QSFP transceivers.
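For those keeping score, the port arithmetic above works out as follows. A quick Python sanity check, with the port counts taken from this announcement:

```python
# Port math for the new b-type Gen6 hardware described above.
def effective_ports(standard_ports, qflex_ports, lanes_per_qflex=4):
    """Each Q-Flex port carries four lanes (colors) over one cable."""
    return standard_ports + qflex_ports * lanes_per_qflex

san_128b6 = effective_ports(96, 8)    # SAN 128B-6: 96 standard + 8 Q-Flex
blade     = effective_ports(0, 16)    # 64-port director blade: 16 Q-Flex

print(san_128b6, blade)  # 128 64
```

The same helper makes the speed rule easy to see: a single Q-Flex port at 32Gbps per lane contributes 4x32 = 128Gbps, but all four lanes must run at one speed.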
Well, it's Tuesday again, and you know what that means? IBM Announcements!
(This week I am in Pennsylvania and New York speaking to clients. The weather this week has not been cooperative!)
Spectrum Protect Plus 10.1.2
Just in time for the upcoming VMworld conference, IBM announces the following features added to Spectrum Protect Plus, a snapshot-based backup software for VMware, Hyper-V and databases.
Data-at-Rest Encryption for local backups stored in the vSnap repository
IBM Db2 support with point-in-time recovery
VMware vSphere 6.7 support
Alerting for backup and restore jobs and storage threshold limits
Drill-down capabilities for dashboard widgets
Spectrum Protect 8.1.6
IBM also continues to enhance its traditional file-based backup product. Here are some of the features:
Tier data by backup state for container pools. When you have multiple backup versions, the most recent version is called the "active" version, and the older versions are called "inactive" versions. Rarely do you recover inactive versions, so this feature allows them to be migrated off to object or cloud storage.
Ransomware detection for Virtual Environment workloads. This is an enhancement of the "Ransomware detection" introduced earlier this year, but for VMware and Hyper-V images.
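To make the tier-by-backup-state idea above concrete, here is an illustrative sketch of how versions might be partitioned. The data structures and function name here are hypothetical, purely for explanation, and are not Spectrum Protect's actual API:

```python
# Illustrative sketch only: partitioning backup versions by state.
def split_by_state(versions):
    """The newest version of each file is 'active'; all older ones are 'inactive'."""
    by_file = {}
    for v in sorted(versions, key=lambda v: v["timestamp"], reverse=True):
        by_file.setdefault(v["file"], []).append(v)
    active, inactive = [], []
    for vs in by_file.values():
        active.append(vs[0])       # most recent version stays in the container pool
        inactive.extend(vs[1:])    # older versions are candidates for the cloud/object tier
    return active, inactive

versions = [
    {"file": "a.txt", "timestamp": 1},
    {"file": "a.txt", "timestamp": 2},
    {"file": "b.txt", "timestamp": 5},
]
active, inactive = split_by_state(versions)
print(len(active), len(inactive))  # 2 1
```

The point of the feature is exactly this split: the "inactive" list, which is rarely restored, is what gets migrated to cheaper object or cloud storage.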
IBM DS8882F All-Flash Array
When IBM announced the DS8880, it shocked folks by changing from the previous 33-inch width to a standard 19-inch width. The IBM Z team followed up with 19-inch wide models of its mainframe servers.
Now, IBM can bring these together. There are two flavors of the new DS8882F:
The "Rackless" model is 17U in height with the optional keyboard/monitor, and can be put into existing 19-inch racks. These can be used with VMware, Linux, Windows, AIX and z/OS.
The "Flex Frame" model is 16U, allowing it to fit nicely inside a single-rack IBM z14 ZR1 model, or LinuxONE Rockhopper II model. It is 16U instead of 17U because it shares the existing 1U-high keyboard/monitor unit.
Like the DS8888F, DS8886F, and DS8884F models, the new DS8882F uses the High Performance Flash Enclosure (HPFE) gen2 drawers, supporting either high-performance/high-endurance drives (400GB to 3.2TB each), or high-capacity/standard-endurance drives (3.8TB to 15.3 TB each).
The R8.5 release of firmware that accompanies this announcement also supports data-in-flight encryption for Transparent Cloud Tiering. It also supports a new feature called "Safeguarded Copies", up to 500 copies to protect against hackers and ransomware.
IBM Spectrum Access blueprints have been extended to support IBM Z and LinuxOne. These blueprints show how to run IBM Cloud Private with Spectrum Connect with IBM block storage, including IBM DS8880/F, SVC, Storwize and FlashSystem models.
IBM Storage Solutions for Virtual Desktop Infrastructures (VDI)
IBM offers a new blueprint to configure Virtual Desktops with its newly announced IBM FlashSystem 9100 device. The low latency/high IOPS capability of the FlashSystem 9100 is perfect for the type of "boot storms" that are often encountered with VDI deployments.
IBM Spectrum Scale 5.0.2 and Elastic Storage Server
At a recent IBM Technical University, I joked that the IBM Elastic Storage Server is only "part of a complete breakfast" because it only supported the NSD POSIX interface. To make it useful in most situations, you needed to buy additional servers outside of the ESS to run Spectrum Scale protocol nodes to provide industry-standard file and object protocols.
Today, IBM announced that you can order a new "IBM Elastic Storage Server Data Server" (5148-22L) which is a POWER server with the Spectrum Scale software pre-installed for protocol node support. It has [similar specifications] to the IBM Elastic Storage Server Management Server (5148-21L).
If you prefer to run Spectrum Scale in the cloud, you can "Bring your own license" (BYOL) to Amazon Web Services.
I travel a lot. In the first six months of this year, I was on the road 17 of the 26 weeks. This week, I am visiting clients in beautiful Minneapolis, MN.
Several readers have asked me what mobile phone or web apps I find the most useful, and here are my top three. For each, I will explain how I use them, and why they are useful.
(FTC Disclosure: I work for IBM, and have no financial connections to any of the companies mentioned below, and have not been compensated in any way to mention them on this post. IBM has selected Concur as its travel platform, which runs TripIt mentioned below. This blog post can be considered a "paid celebrity endorsement" for each of the three sites below.)
[Rome2Rio] is one I use long before I plan my trip, and it works both on my mobile phone and as a web application. Many people use apps like "Google Maps" for driving directions from point A to point B. But Rome2Rio handles airlines, trains and other alternative modes of transportation. It also provides estimated prices for each mode of transportation.
Landing in Gatwick Airport, I used Rome2Rio to figure out the most cost-effective way to get to my hotel on Southampton Row. A taxi would have been $160-200, Ride-share like Uber or Lyft $75-95, and train $17-28. I chose the train and saved a lot of money!
Rome2Rio is a great app, both for advanced planning, as well as dealing with situations in the moment. I have it bookmarked on my browser, and the app installed on my phone.
Long before IBM signed on Concur as its travel expense and trip planning tool, I was using [TripIt]. It automatically enters all of my airfare, hotel and car rental reservations into a single chronological itinerary, but then lets me add everything in between, such as meetings, dinner restaurant plans, and other activities.
While I am planning my travels, TripIt ensures I have all the connections I need. If I land at this airport, do I have a rental car or other transportation to the hotel? This forces me to get in advance all of the times and locations of every client dinner, briefing, or other meeting, so that I can plan how to get from point A, to B, to C, accordingly.
A few days before my trip, I can print my TripIt itinerary to a PDF file to send to my family and co-workers.
While traveling, the TripIt app keeps all the information I need close at hand, including hotel addresses and confirmation numbers once I arrive at the hotel.
[FlightStats] will show you the status of all flights, on any airline. Just enter the 2-character airline code, like AA for American Airlines, or DL for Delta Airlines, then the flight number. Here are the different ways I find this useful:
When I land at a connecting airport, but have not yet left the plane, I can use FlightStats to determine which gate I have arrived at, and which gate I need for my next flight. This gives me a good sense of how much time I have: do I need to hurry, can I stop for a snack, and so on.
FlightStats seems to be more up-to-date than computer screens at the airport. I have learned of flight delays from FlightStats sooner than I have from the computer screens or gate agents.
If my flight is canceled or delayed, FlightStats also can find flights from point A to B using real-time information.
Are there any apps or web sites you recommend? Please comment below!
Mark your calendars! IBM plans to have back-to-back Technical University events in Hollywood, Florida:
October 8-12, will focus on IBM Z mainframe, and a subset of IBM Storage that offer synergy for IBM Z, such as DS8880 storage system, and the TS7760 Virtual Tape Engine.
October 15-19, will focus on IBM Power Systems and the entire IBM Storage portfolio.
When I first learned of this, I was not aware there was a city called Hollywood in Florida. The Hollywood in Florida is situated between Fort Lauderdale and Miami, so you can fly into either of those two airports to get to the conference.
(Did you know? The Hollywood most people know in California is no longer its own city, but rather incorporated as a neighborhood district into Los Angeles back in 1910. There are actually thirty different places called "Hollywood" around the world, two dozen in the United States, with the rest scattered in Ireland, Turkey, Russia, Singapore and the Philippines. Not all of these are formally "cities", but in some cases neighborhoods, districts, unincorporated areas, or other populated places. The Hollywood in Maryland claims to be the first, established in 1867!)
I plan to attend only the second week, October 15-19. Here are some highlights:
In the past, IBM had keynote sessions for each brand, for example, one focused on IBM Power systems, and another on IBM Storage. However, these were scheduled during the same time slot, forcing some people to make a tough choice.
To solve this, the two keynote sessions will be staggered, so attendees can attend both!
The storage keynote will take on a new format, with a panel of experts. I have been invited as one of the experts to participate! If there is a particular topic you want to hear about on the panel, please enter your comments below.
As with most conferences, there is a "Call for Papers" requesting speakers submit the topics they can present, and then conference coordinators accept, adjust or reject them in building the final agenda.
Here are the topics I submitted:
Build your personal brand! Social Media tips from an experienced blogger
The Pendulum Swings Back - Understanding Converged and Hyperconverged Systems
IBM Hybrid and Multi-Cloud storage solutions
IBM Cloud Object Storage (powered by Cleversafe)
Managing Risks with Data Footprint Reduction
Information Lifecycle Management: Why Archive is different than Backup
The Seven Tiers of Business Continuity and Disaster Recovery
If you attended the IBM Technical University in Orlando last May, the conference in October will have six months' worth of new announcements and products to cover.
I also plan to be at the IBM Technical University events in Johannesburg, South Africa (September 11-13), and Rome, Italy (October 22-26). If you plan to be at any of these events, let me know! If not, you can follow along with Twitter hashtag: #IBMtechU
Several readers have asked me what is the difference between Hybrid Cloud and Multi-Cloud. The two phrases are used in various contexts, not just by IBM, but also by our competitors, as well as the press and industry analysts.
A hybrid cloud attempts to develop a single platform to run a specific Cloud workload. This single platform combines two or more of the following resources:
on-premise private Cloud
off-premise private Cloud
off-premise public Cloud
A Hybrid Cloud is like the United Nations peacekeeping force. A single force, with a single mission, representing the combined resources of many countries.
A Hybrid Cloud is a deployment model that might offer advantages over just using a Private Cloud, or just using a Public Cloud.
A practical example is Tennis Australia. For three weeks every January, they run the Australian Open, a tennis tournament, with over 4,000 employees, and millions of visits to their website each day. For the rest of the year, they have only about 300 employees, and manage quite well to run smaller tournaments for high-school and college students, as well as plan for next year's event.
In this case, a Hybrid Cloud that combines perhaps two racks of an on-premise private Cloud, combined with the incredible power of IBM Cloud, gives them the variability and agility needed to run smoothly without wasting CAPEX on equipment they don't need.
Many "Hybrid Cloud" products focus on being the "glue" that combines two different resources together. This can be at the management layer, the data layer, the application layer, or the infrastructure layer.
In contrast, a Multi-Cloud represents a deployment strategy for different Cloud workloads. One workload might be better served on a Private Cloud, another workload might be better served on a Public Cloud, and a third workload, as we saw above, might benefit from the combined resources of a Hybrid Cloud.
In the past, people felt that all Cloud Service Providers were the same. Just as people buy gasoline from whichever gas station offers the lowest prices, many just chose their Cloud Service Provider based entirely on the costs involved. Loyalty can change the minute new price tables are published.
But today, Cloud Service Providers have made an effort to provide differentiation. For example, your Multi-Cloud might have three Hybrid Clouds. One cloud platform combines your on-premise Private Cloud with IBM Cloud, another combines your on-premise Private Cloud with Amazon Web Services, and a third combines your on-premise Cloud with Microsoft Azure.
In this case, a Multi-Cloud is like the various armed forces. You might deploy the Army for one mission, the Navy for another, and the Air Force or Marines for a third.
Many "Multi-Cloud" products focus on being versatile and multi-purpose. For example, the same FlashSystem 9100 that you deploy in your "Analytics Cloud" platform could also be useful for your "Docker Container Cloud" platform, or your "DevOPS Cloud" platform. IBM's various Multi-Cloud Solutions provide the additional software and services needed to complement the FlashSystem 9100 to pull this off.
Deciding to use a Multi-Cloud strategy is mostly a business decision. Deploying a Hybrid Cloud as one of your Multi-Cloud platforms could be a combination of business and technical decision.
Well, it's Tuesday again, and you know what that means? IBM Announcements! After much needed vacation in Cancun Mexico, Lake Havasu and Sedona, Arizona, I am glad to be back at work! This week, I was visiting clients in the Los Angeles area.
IBM FlashSystem 9100
IBM's latest addition to its lineup of All-Flash Arrays is the FlashSystem 9100.
There are actually two models: the 9110 (model AF7) has 8-core processors, and the 9150 (model AF8) has 14-core processors. Both models are 2U 19-inch shelves with 24 drives on the front, with two control node canisters in the back. The term "FlashSystem 9100" applies to both 9110 and 9150 models.
Each canister has two processors, 64GB to 768GB of cache memory, an on-board 1GbE port for management, four 10GbE ports for Ethernet, and three HIC slots for I/O adapters, which can be any mix of quad-port FC cards, dual-port 25GbE Ethernet cards, or 12Gb SAS cards for expansion drawers.
For drives, you can have any mix of FlashCore Modules (FCM) or Industry-Standard NVMe (ISN) drives. The FlashCore modules are similar to the FlashCore boards in the FlashSystem 900, including Variable-Striped RAID, advanced flash management, heat binning, health separation, hardware-embedded encryption and compression.
These FCM are packaged into standard NVMe SSD form-factor, with 4.8, 9.6 and 19.2 TB capacities. The Industry-Standard NVMe drives come in 1.92, 3.84, 7.68 and 15.36 TB capacities to offer additional price/capacity options to clients.
A fully maxed-out system with twenty-four 19.2TB FCM modules represents approximately 400TB of usable capacity. Combined with 5:1 data footprint reduction from deduplication and compression, that can provide up to 2PB of effective capacity in as little as 2U of rack space!
The NVMe and FlashCore technology truly accelerates performance, with latencies as low as 100 microseconds, 2.5x lower than competitive offerings. Each control enclosure can deliver up to 2.5 million IOPS, and a four-way cluster up to 10 million IOPS in just 8U!
You can mix and match FCM and ISN drives in the same controller, but FCM and ISN have to be in their own separate RAID groups. To use Distributed RAID6 (DRAID6), you need at least six drives.
IBM has made a "Statement of Direction" that these models are NVMe-oF hardware ready and will support both FC-NVMe and NVMe-oF over Ethernet by year end. Part of this involves changes to server-side software, including various operating systems, device drivers, and multi-pathing drivers.
The FlashSystem 9100 supports up to 40U of expansion drawers over 12Gb SAS, in two sizes: a 2U drawer for 24 SFF drives, and a 5U drawer for 92 SFF/LFF drives. Each FlashSystem 9100 can support up to 760 drives. These expansion drawers are not NVMe; the Solid-State Drives (SSD) inside them use standard SAS. Consider using Easy Tier sub-LUN automated tiering to move hot data up to the FCM/ISN drives, and colder data down to these SAS-based SSDs.
Even though it doesn't have a "V" in its name, the FlashSystem 9100 runs Spectrum Virtualize, so you can also virtualize other storage behind it. Over 400 different storage devices from leading storage vendors are supported. The FlashSystem 9100 can be virtualized behind SVC or FlashSystem V9000.
FlashSystem 9100 can also cluster with Gen2 and Gen2+ models of the Storwize V7000 and V7000F controllers. You can connect up to four of any of these into a single cluster, supporting up to 3,040 drives.
The FlashSystem 9100 offers all of the features you have come to love from the rest of the Spectrum Virtualize products: data deduplication and compression, encryption, high-availability guarantee, data footprint reduction guarantee, hardware refresh option after three years, storage utility pricing, and IBM Storage Insights support.
IBM has no plans to withdraw either the existing FlashSystem V9000 or the Storwize V7000/F models anytime soon. They continue to be available for purchase.
To complement the hardware features of the FlashSystem 9100, IBM has come up with three Multi-cloud solutions.
Multi-Cloud Solution for Data Reuse, Protection and Efficiency - this combines Spectrum CDM with Spectrum Protect Plus to take snapshots of volumes on FlashSystem 9100. These snapshots are not just for data protection, but can also be "reused" for other purposes, like dev/test, DevOps, or analytics.
Multi-Cloud Solution for Business Continuity and Data Reuse - combines Spectrum CDM with Spectrum Virtualize in the Public Cloud, allowing you to take snapshots to the IBM Cloud for disaster recovery. The snapshots can be used in the cloud, or copied back to the same or different data center.
Multi-Cloud Solution for Private Cloud Flexibility and Data Protection - combines IBM Cloud Private, Spectrum CDM, and Spectrum Connect to support clients' efforts to re-factor their applications with Docker containers and Kubernetes. IBM FlashSystem 9100 can be used as persistent storage for containerized applications.
This release applies only to the Storwize V7000/F and the new FlashSystem 9100 models, and provides support for iSCSI Extensions over RDMA (iSER) on the 25GbE NIC cards. If you want to cluster existing Storwize V7000/F models with the new FlashSystem 9100 models, all of them must be at least at the v8.2.0 release.
Lower latencies and higher bandwidth requirements can be addressed by using RDMA to implement iSCSI. iSER is a new interconnect protocol that allows iSCSI to run on top of RDMA technology. RDMA can be implemented by using RoCE (RDMA over Converged Ethernet) or iWARP (Internet Wide-area RDMA Protocol). iSER enables iSCSI to run on top of it regardless of which of these technologies is used underneath.
The "Storage Utility" pricing available for many of IBM's other products has been extended to include the IBM FlashSystem 9100 and IBM Cloud Object Storage.
Basically, this is a variable-priced, usage-based lease. Say you lease 500TB of capacity but use only 150TB: for the first few months, you pay only for 150TB. Later, as your usage grows to, say, 200TB, your monthly payment rises accordingly. The price can go up or down. At the end of the lease, typically 36 or 60 months, you have a choice: return the equipment, or pay the difference.
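The billing model described above can be sketched as a small function. This is a hypothetical illustration of usage-based utility billing; the function name and the flat $/TB rate are made up for the example and are not IBM's actual pricing mechanics.

```python
# Hypothetical sketch of usage-based "Storage Utility" billing:
# you commit to a leased capacity, but each month you pay only
# for the capacity you actually use, capped at the lease amount.
def monthly_bill(used_tb: float, leased_tb: float, rate_per_tb: float) -> float:
    """Bill for actual usage, never exceeding the leased capacity."""
    billable = min(used_tb, leased_tb)
    return billable * rate_per_tb

# 500 TB lease at a notional $10 per TB per month:
print(monthly_bill(150, 500, 10))   # early months: pay only for 150 TB
print(monthly_bill(200, 500, 10))   # later: usage grows, so does the bill
```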
Demonstrate that IBM technologies in areas like Artificial Intelligence (AI), Blockchain, Cloud, and the Internet of Things (IoT) are relevant in solving the world's biggest challenges
Encourage developers to contribute their time and talent to open source projects that benefit the greater good
Generate fresh ideas on how to tackle age-old problems that plague society
Each year will have a different focus. This year, the focus is on preventing, responding to and recovering from natural disasters, which is especially important with 2017 ranked as one of the worst years on record for catastrophic events, including fires, floods, earthquakes and storms.
Call for Code invites developers to create new applications to help communities and people better prepare for natural disasters. For example, developers may create an app that uses weather data and supply chain information to alert pharmacies to increase supplies of medicine, bottled water and other items based on predicted weather-related disruption. Or it could be an app that predicts when and where the disaster will be most severe, so emergency crews can be dispatched ahead of time in proper numbers to treat those in need.
Can't think of any ideas for an app? Here are some TED videos that might inspire you:
IBM's $30 million USD investment over five years will fund access to developer tools, technologies, free code and training with experts. To raise awareness and interest in Call for Code, IBM is coordinating interactive educational events, hackathons and community support for developers around the world in more than 50 cities, including Amsterdam, Bengaluru, Berlin, Delhi, Dubai, London, New York, San Francisco, Sao Paulo and Tel Aviv.
(My earliest memory of using a contest for fresh ideas was back in 1975, after the city of Tucson purchased the Tucson Rapid Transit Company. Rather than hiring an expensive marketing agency to run focus groups or surveys, the City of Tucson published a "Name that Bus" contest in the local newspaper. The winning entry was [Sun Tran], submitted by 25-year-old college student [Benjamin Rios]. He won the grand prize: a $150 portable television!)
The winning Call for Code team will receive a financial prize and access to long-term support to help move their idea from prototype to real-world application.
Developers can register today at the [Callforcode.org] website. Projects can be submitted by individuals – or teams of up to five people – between June 18, 2018 and August 31, 2018. If you would like me on your team, as an honorary member, technical adviser or mentor, please let me know!
Thirty semi-finalists will be selected in September. A prominent jury, including some of the most iconic technologists in the world, will choose the winning solution from three finalists. The winner will be announced in October 2018 during a live-streamed concert and award event coordinated by David Clark Cause.
Additional details, a full schedule of in-person and virtual events, and training and enablement for Call for Code are available at [www.developer.ibm.com/callforcode] website.
This month, the IBM Tucson Development Lab is celebrating its 40th anniversary! IBM has been operating in Arizona for the past 70 years, and of course IBM has been in the storage business for the past 90 years if you consider "punched cards" as storage on paper.
This year also marks the 40th anniversary of DFHSM, the first product I worked on when I started here back in 1986. DFHSM stands for the Data Facility Hierarchical Storage Manager, which automatically moves data between disk and tape storage.
IBM put up two banners to celebrate! The first was for IBM Enterprise Tape storage. My first question was "What are punched cards doing on a banner for magnetic tape?"
A bit of history will explain that the first tape storage was non-magnetic. Back in 1725, Basile Bouchon developed the control of a loom by punched holes in paper tape. These were used to create intricate patterns in woven cloth.
In the late 1880s, Herman Hollerith, a young technical whiz at the US Census Bureau, had an idea for a machine that could count and sort census results far faster than human clerks. The bureau funded Hollerith’s work, and the [first tabulating machines] helped count the 1890 census, saving the bureau several years’ work and more than US$5 million.
Hollerith left the bureau to form the Tabulating Machine Company, selling his system to other countries’ census offices and then to businesses such as railroads and retailers. Hollerith had little competition, and his machines and punched cards became the standard for the industry.
In 1911, financier Charles Flint bought the Tabulating Machine Company and merged it with the International Time Recording Company and the Computing Scale Company of America to form the Computing-Tabulating-Recording Company, or C-T-R, later renamed IBM in 1924.
In 1928, IBM introduced a new version of the punched card with rectangular holes and 80 columns. The 80-character standard carried forward to everything from the first computer screens to the first file layouts.
It wasn't until 1952 that the first magnetic tape system hit the scene: the IBM model 726. Tape reels were the size of pizzas, and were prominently shown spinning around in various Hollywood movies to represent computers "working" on a problem.
In my now infamous 2007 post [Hu Yoshida should know better], I explain the 3850 Mass Storage System (MSS). Introduced in 1974, the IBM 3850 MSS was one of the first hybrid disk-and-tape storage systems. It was an automated tape library pretending to be disk, with tape cartridges stored in hexagonal honeycomb shelves. The tape cartridges were cylindrical, about the size of a can of soda. Each spool of 770 feet of tape media held just 5MB of data.
A full IBM 3850 MSS configuration with thousands of tape cartridges was used for the 1980 US Census, holding 102 GB database, representing the data collected about 226.5 million U.S. residents. That's about 450 bytes per resident, enough to fill six punched cards.
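The per-resident arithmetic in the paragraph above is easy to verify. A quick, illustrative check:

```python
# Check the 1980 census arithmetic: 102 GB of data for
# 226.5 million U.S. residents, measured in 80-column punched cards.
database_bytes = 102 * 10**9        # 102 GB database
residents = 226.5 * 10**6           # 226.5 million residents
card_bytes = 80                     # one 80-column punched card

per_resident = database_bytes / residents
print(round(per_resident))          # about 450 bytes per resident
print(per_resident / card_bytes)    # a little under six punched cards
```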
The second banner was for IBM Enterprise Disk storage.
IBM introduced the IT industry's first commercial disk system in 1956. While the banner says "RAMAC 305", that is the name of the server. The storage system was called the [350 Disk Storage Unit]. It was the size of two refrigerators and held 5 MB of data.
In the early 1990s, I visited a client in Germany that had a 3990 controller with two 3390 disk systems attached, holding 90 GB of data in the size of three refrigerators. They had five storage administrators to manage this configuration.
A few years later at another client, they had roughly 7000 GB (7 TB) of data on their mainframe, and an equal amount across all of their Windows and UNIX servers. I met with their storage administrators, there were two for the mainframe, and about three dozen for the distributed servers.
I had two questions for them. First, why were there two storage admins for the mainframe? The mature policy-based automation on the platform would mean only one person required. Their response: when one of us is on a two-week vacation, the other can handle the workload.
My second question was for the remaining storage admins: When was the last time any of you took a two-week vacation? None had, of course, since the storage administration tools back then meant they were all working overtime on various tedious and manual tasks!
In February 2006, the folks in IBM Germany asked the IBM Storage Marketing team what events or celebration were planned for September 13, 2006, the 50th anniversary of disk. My marketing colleagues responded, "that is only seven months away, you didn't give us enough lead time notice to plan!"
Well, it's Tuesday again, and you know what that means? IBM Announcements!
(FTC Disclosure: I work for IBM, and have either written code and/or presented the DS8000 storage system and Spectrum Storage products in my professional capacity. This blog post can be considered a "paid celebrity endorsement" for the IBM DS8000 Storage System and Spectrum Storage software.)
IBM DS8880 and DS8880F Storage Systems
For those not up on the DS8000 nomenclature, here's a quick recap:
DS8880 supports a hybrid mix of Flash cards, SSD, 15K, 10K and 7200 rpm drives.
This includes the DS8884 and DS8886. The Flash cards are held in High Performance Flash Enclosures (HPFE) directly attached to the controllers, whereas the SSD and spinning disk are in shelves connected via the Device Adapters.
DS8880F is an all-flash array, with Flash cards only in HPFE. This includes the DS8884F, DS8886F and DS8888F models.
DS8880/F is convenient shorthand to refer to both the hybrid and all-flash models collectively.
Today, IBM announces new 7.68TB flash cards for the High Performance Flash Enclosures of the IBM DS8880/F. These are double the capacity of the 3.84TB cards currently available, doubling the total capacity to 368.6TB per HPFE.
Different DS8880 models support a different number of HPFEs. An HPFE is a pair of 2U drawers, holding a total of 48 flash cards. You can purchase flash in groups of 16 cards, with the option to mix and match within the HPFE. For example, you can have 16 cards at the 1.6TB capacity, 16 cards at 3.84TB and 16 cards of the new 7.68TB capacity, all in a single HPFE.
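The HPFE capacity math above can be sketched in a few lines. This is an illustrative calculation using the card capacities quoted in the announcement:

```python
# An HPFE pair holds 48 flash cards, purchased in groups of 16,
# which may mix capacities within the enclosure (TB per card).
GROUP = 16
mix = [1.6, 3.84, 7.68]             # one 16-card group of each capacity

total_tb = sum(GROUP * cap for cap in mix)
print(total_tb)                     # total TB in one mixed HPFE

# An HPFE filled entirely with the new 7.68 TB cards:
print(48 * 7.68)                    # matches the announced ~368.6 TB
```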
The new 7.68TB cards support one Drive Write Per Day (DWPD). Some people call these "Read-Intensive" drives, but IBM refers to them as "High-Capacity Drives", to differentiate them from the "High-Performance Drives" that support 10 DWPD.
In reality, the read performance is similar in both types of Flash cards offered, but the write performance is slightly slower for the High-Capacity drives due in part to additional garbage collection performed in the background. Our studies found that over 90 percent of workloads might find the High-Capacity drives good enough to handle I/O requirements.
IBM Easy Tier was updated to distinguish between High-Performance and High-Capacity flash cards, so that blocks of data that have higher or lower I/O characteristics will be relocated to the appropriate level of storage.
The newest level of IBM Spectrum Storage Suite simplifies procurement by bringing together the latest releases of the following software:
IBM Spectrum Accelerate V11
IBM Spectrum Archive Enterprise Edition V1 (Linux edition)
IBM Spectrum Control Advanced Edition V5
IBM Spectrum Protect Suite V8 (including Spectrum Protect Plus!)
IBM Spectrum Scale Data Management Edition V5
IBM Spectrum Virtualize Software for SAN Volume Controller V8 (including FlashCopy and Remote Mirror, Real-time Compression and Encryption Software)
IBM Spectrum Virtualize Software-only V8
IBM Cloud Object Storage System V3
Instead of buying software products separately, a single license enables administrators to deploy IBM Spectrum Storage Suite software when and where they need it, without having to wait. Simplified capacity pricing can significantly reduce software costs and time spent on license management.
The Spectrum Storage suite also offers a "sandbox" approach for try-and-buy. Since you have access to all the software listed, you can set up a sandbox to experiment with the functionality, without having to pay for the added capacity, until you deploy it to dev/test, quality assurance, or production.
The suite is licensed per Tebibyte [TiB]. For those not familiar with international standards, here is a comparison table:
Terabyte (TB): always decimal, 10 to the 12th power (1,000,000,000,000 bytes)
Tebibyte (TiB): always binary, 2 to the 40th power (1,099,511,627,776 bytes)
The two terms sound similar and represent nearly the same quantity within 10 percent of each other, so it is understandable when people mistakenly use the terms interchangeably.
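The "within 10 percent" claim is easy to confirm, and the conversion matters when comparing licensed capacity to vendor-quoted capacity. A small sketch (the helper function name is mine, for illustration):

```python
# TB (decimal) vs TiB (binary): nearly the same quantity, but not identical.
TB = 10**12     # terabyte, decimal definition
TiB = 2**40     # tebibyte, binary definition

print(TiB / TB)                     # ≈ 1.0995, i.e. within 10 percent

def tb_to_tib(tb: float) -> float:
    """Convert decimal terabytes to binary tebibytes."""
    return tb * TB / TiB

print(round(tb_to_tib(500), 1))     # a 500 TB array is about 454.7 TiB
```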
From farm to fork, IBM Food Trust platform is a collaborative network of growers, processors, wholesalers, distributors, manufacturers, retailers and others enhancing visibility and accountability in each step of the food supply.
Powered by the IBM Blockchain Platform on IBM servers and storage systems, IBM Food Trust directly connects participants through a permissioned, permanent and shared record of food origin details, processing data, shipping details and more.
(This reminds me of a funny story: the man sitting next to me on my flight back from an IBM Systems conference in New Orleans asked me, "You look familiar. Didn't I see you at the conference this week?" I responded, "Yes, were you there for the 'server' or 'storage' side?" He thought about it for a while, and said, "I guess the server side." "Too bad," I replied, "I am on the storage side."
It took us a while, but I realized he worked in the food and restaurant industry, and that he was at a completely different conference. It happened to also have both a "server" and "storage" side!)
The IBM Food Trust platform provides new levels of transparency, quicker recalls, better standardized communication and protection of brand value. As an authorized user, you have immediate access to shared, actionable food supply data through integrated IBM Blockchain-powered modules for faster traceability and more confidence in provenance.
Today, IBM announces new services to enable clients to successfully connect to and make use of the IBM Food Trust Platform.