Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Systems Client Experience Center in Tucson Arizona, and featured contributor
to IBM's developerWorks. In 2018, Tony celebrates his 32nd anniversary with IBM Storage. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson)
My books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
The Collaboration of Oak Ridge, Argonne, and Livermore [CORAL] is a joint procurement activity among three of the Department of Energy's National Laboratories, launched in 2014 to build state-of-the-art high-performance computing (HPC) technologies that are essential for supporting U.S. national nuclear security and are key tools used for technology advancement and scientific discovery.
Of course, when you hear "state-of-the-art technology", IBM is probably the first company that comes to mind!
The new IBM Spectrum Scale 5.0 has been greatly enhanced to meet CORAL requirements:
Dramatic improvements in I/O performance
Significant reduction in internode software path latency to support the newest low-latency, high-bandwidth hardware such as NVMe
Improved performance for many small and large block size workloads simultaneously from new 4 MB default block size with variable sub-block size based on block size choice
Improved metadata operation performance to a single directory from multiple nodes
Spectrum Scale 5.0 now automatically tunes more than twenty communication protocol and buffer management parameters, simplifying setup for optimal performance. The enhanced GUI features many capabilities including performance, capacity, and network monitoring, AFM (multicluster management), transparent cloud tiering, and enhanced maintenance and support, including interaction with IBM remote support.
Spectrum Scale 5.0 now offers file-level immutability. Previous releases supported immutability only at the fileset level, so this provides finer granularity. Immutability can be an effective tool as part of an overall Non-Erasable, Non-Rewriteable [NENR] compliance policy.
Spectrum Scale comes in both "Standard Edition" and "Data Management Edition". The latter offers some additional features, including Transparent Cloud Tiering, Asynchronous AFM Disaster Recovery support, and Encryption. Some additional enhancements to Data Management Edition in Spectrum Scale 5.0 are:
File audit logging capability to track user accesses to the file system, with events supported across all nodes and all protocols
Parseable audit data stored in a secure, retention-protected fileset
Data security after removal of physical media, protected by on-disk encryption
The new IBM Storage Utility Offerings include the IBM FlashSystem 900 (9843-UF3), IBM Storwize V5030 (2078-U5A), and Storwize V7000 (2076-U7A) storage utility models that enable variable capacity usage and billing.
These models provide a fixed total capacity, with a base and variable usage subscription of that total capacity. IBM Spectrum Control Storage Insights is used to monitor the system capacity usage. It is used to report on capacity used beyond the base subscription capacity, referred to as variable usage.
The variable capacity usage is billed on a quarterly basis. This enables customers to grow or shrink their usage, and only pay for configured capacity.
Suppose you only need 300 TB today, but expect this to grow to 1 PB (1,000 TB) over the course of three years. You install the full 1 PB of capacity, pay for the 300 TB base, plus whatever you use above 300 TB during each subsequent quarter. After 36 months, you pay for the rest of the installed capacity.
(There are comparable offerings from IBM's competitors, but they often require that you pay for at least 75 to 85 percent of the installed amount, and then you would need to continue to disrupt your operations with additional capacity installed throughout the 12 to 36 month period. IBM's approach allows you to avoid installation disruption during the entire 36 month period!)
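To make the quarterly arithmetic concrete, here is a hypothetical sketch of the variable-usage billing model. The rate, function names, and usage figures are invented for illustration; they are not IBM's actual billing terms.

```python
# Hypothetical sketch of the Storage Utility billing model described above.
# Rates and numbers are illustrative, not actual IBM pricing.

def quarterly_variable_bill(base_tb, used_tb, rate_per_tb):
    """Bill only for capacity used above the base subscription."""
    variable_tb = max(0, used_tb - base_tb)
    return variable_tb * rate_per_tb

# 1 PB installed, 300 TB base subscription, made-up $10/TB quarterly rate
base = 300
usage_by_quarter = [280, 350, 500, 750]   # TB actually configured each quarter

bills = [quarterly_variable_bill(base, used, rate_per_tb=10.0)
         for used in usage_by_quarter]
print(bills)  # [0.0, 500.0, 2000.0, 4500.0]
```

In the first quarter, usage is below the base, so there is no variable charge at all; charges only start accruing once usage exceeds the 300 TB base.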
IBM Spectrum Virtualize for Public Cloud V8.1.1 delivers a powerful solution for the deployment of IBM Spectrum Virtualize software in public cloud, starting with IBM Cloud. This new capability provides a monthly license to deploy and use Spectrum Virtualize in IBM Cloud to enable hybrid cloud solutions.
Remote replication will be supported between on-premises Spectrum Virtualize-based appliances (including SAN Volume Controller (SVC), the Storwize family, IBM FlashSystem V9000, and VersaStack with the Storwize family or SVC) or Spectrum Virtualize software and the IBM Cloud.
Using IP-based replication with Metro Mirror, Global Mirror, or Global Mirror with Change Volumes, clients can create secondary copies of on-premises data in the public cloud for disaster recovery. IBM has over 25 data centers around the world available to choose from. Remote copy services can also be used between two IBM Cloud data centers for improved availability.
The solution is based on bare metal servers. You can create either two- or four-node high availability clusters.
Spectrum Virtualize on-premises SVC and Storwize systems now also support 2.4 TB 10K rpm 2.5-inch SAS hard disk drives.
IBM has been holding various "Hackathons" and "Meetups" as a new way to reach out to prospective clients. IBM sponsored a meetup at the Austin Executive Briefing Center (EBC) to discuss Machine Learning with TensorFlow on IBM Power systems, October 26, 2017.
This was a joint event, co-sponsored by [IBM Watson/Cognitive Austin] and [Big Data/AI Revealed] meetup groups. Special thanks to my colleague Cathy Cocco, IBM Executive IT Architect with the IBM Austin EBC, for coordinating this event with their organizers.
(What is a Meetup? [Meetup.com] is an online social networking website that facilitates in-person local group meetings. Meetup allows members to find and join groups unified by a common interest, such as books, games, pets, technology, careers or hobbies. As of 2017, there are 32 million users with 280 thousand groups available across 182 countries.)
Here was the agenda for the event:
Registration, Pizza & Soft drinks
Tensorflow 101 presentation
Demo: Using TensorFlow for Financial Market Predictions on IBM POWER Systems
Lightning Talk: IBM Data Science Experience
Clarisse Taaffe-Hedglin: Intro to TensorFlow on IBM Power servers
Our guest speaker was my colleague Clarisse Taaffe-Hedglin, IBM Cognitive Senior Technical Architect, part of the same Worldwide Client Centers team that I work in. She flew in from Charlotte, NC.
Her topic was TensorFlow, an open source [Machine Learning] framework. TensorFlow was originally developed by Google, but was made open source in November 2015.
Machine Learning is popular in a variety of industries, from self-driving cars and trucks, speech recognition and video surveillance, to what movie to watch next on Netflix. There are three aspects to Machine Learning:
Data: Start with the data you want to analyze. This could be IoT sensor data, security logs, or social media feeds. Check out all that happens in an "Internet Minute"!
Compute: While mathematical computations can be performed on traditional CPUs, some frameworks are optimized and accelerated with Graphical Processing Units (GPU). These GPUs can perform teraflops of single- and double-precision calculations.
Technique: As methodologies have grown more complex over the years, frameworks have evolved to match.
The [TensorFlow] framework is now one of the most popular among data scientists. You can download it for free at [Github].
Clarisse showed the various programming/calculation tools used by data scientists. The top five were: Python, R, SQL language, MapReduce, and Microsoft Excel.
Mathematical models come in many flavors. Clarisse explained they can be used to identify clusters of data that might have similar properties, or to perform classification, or linear regression. The results can be "descriptive", gaining a better understanding of what already is, or "predictive" for what might be.
Some frameworks, like Chainer or Torch, are more flexible, using a dynamic Build-by-Run approach; however, these do not scale well. Theano and TensorFlow, on the other hand, employ a Define-then-Run approach, which scales better for larger projects. With the growth in popularity of TensorFlow, the Theano framework has been "functionally stabilized".
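The difference between the two execution styles can be illustrated with a toy sketch in plain Python. This is not actual TensorFlow or Chainer code; every name here is invented for illustration.

```python
# Build-by-Run (Chainer/Torch style): operations execute immediately.
def eager_multiply_add(w, x, b):
    return w * x + b            # computed the moment it is called

# Define-then-Run (Theano/TensorFlow style): first describe a graph...
class Node:
    def __init__(self, fn):
        self.fn = fn
    def run(self, feed):
        return self.fn(feed)    # ...then execute it later, on demand

def placeholder(name):
    return Node(lambda feed: feed[name])

def mul(a, b):
    return Node(lambda feed: a.run(feed) * b.run(feed))

def add(a, b):
    return Node(lambda feed: a.run(feed) + b.run(feed))

# Define the graph once; no arithmetic happens yet.
w, x, b = placeholder("w"), placeholder("x"), placeholder("b")
y = add(mul(w, x), b)

# Run it as many times as needed. Because the whole graph is known before
# execution, it can be optimized and distributed -- which is why this
# style scales better for larger projects.
print(eager_multiply_add(2, 3, 1))        # 7
print(y.run({"w": 2, "x": 3, "b": 1}))    # 7
```

Both styles compute the same answer; the difference is *when* the computation happens and how much the framework can see (and optimize) ahead of time.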
Clarisse Taaffe-Hedglin: Financial Markets Demo
For the demo, Clarisse had historical closing data for U.S., Australian, and Asian stock markets. The hypothesis: can we determine a Buy/Sell signal for U.S. stocks based on the closing results of non-U.S. markets? This is a classic "Binary Classification" model. The other stock markets close 4 to 16 hours before the U.S. markets open, so this has real-world applicability.
Since the data was in different monetary units, she did some cleanup to normalize the data, removing the trends and converting everything to U.S. Dollars (USD).
Clarisse used "Supervised Learning" on an 80 percent subset of the data, then used the remaining 20 percent to validate how well the model did.
As with any model, you measure how good it is by how close it comes to the correct answer. Wrong answers are weighted by how wrong they are; this is often referred to as "Loss" or "Cost". Different models can therefore be compared by how well they minimize the loss.
Using a simple y=wx+b mathematical model, she ran 30,000 iterations. After 5,000 iterations, the model was already guessing correctly 55 percent of the time; by 30,000 iterations, it was up to 68 percent accuracy.
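For the curious, here is a toy re-creation of that kind of simple y=wx+b binary classifier, trained by gradient descent to minimize a logistic loss. The data is synthetic and the code is my own illustration, not Clarisse's actual demo.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic binary-classification data: label is 1 when x > 0.5.
# (The real demo used normalized stock closing prices instead.)
data = [(i / 100.0, 1 if i > 50 else 0) for i in range(101)]

def accuracy(w, b):
    hits = sum(1 for x, y in data if (sigmoid(w * x + b) > 0.5) == (y == 1))
    return hits / len(data)

w, b, lr = 0.0, 0.0, 0.5
start = accuracy(w, b)                # ~50%: no better than a coin flip
for _ in range(5000):                 # iterations, as in the demo
    gw = gb = 0.0
    for x, y in data:
        err = sigmoid(w * x + b) - y  # gradient of the loss for this point
        gw += err * x
        gb += err
    w -= lr * gw / len(data)          # step w and b downhill on the loss
    b -= lr * gb / len(data)

print(round(start, 2), round(accuracy(w, b), 2))  # accuracy climbs with training
```

Just as in the demo, accuracy starts out near a coin flip and improves as the iterations minimize the loss.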
TensorFlow also supports "hidden layers", basically intermediate variables that are used in subsequent layers for more complicated calculations, mirroring the way neurons connect in the brain. With two added layers, she re-ran the 30,000 iterations, and accuracy was now up to 73 percent.
Normally, this kind of analysis would take hours or days, but since TensorFlow takes advantage of the IBM POWER8 CPUs and NVIDIA Tesla K80 GPUs in the IBM Power server, the whole thing finished in five minutes!
Tuhin Mahmed: Lightning Talk on IBM Data Science Experience (DSX)
Tuhin Mahmed, IBM Software Developer, is the organizer for the Big Data/AI meetup group. He wants to promote the idea of "Lightning Talks" where each person presents for just 10-15 minutes. This is a variant of the popular [Pecha Kucha] events.
To get things started, he presented 10-15 minutes on [IBM Data Science Experience], or DSX for short. Taking Multiple Listing Service (MLS) real estate data of closing prices for houses sold in a range of Austin-area zip codes, he plotted them on an x-y chart: the x axis was square feet, and the y axis was closing price.
Using DSX, he was able to develop a mathematical model that estimates house closing prices based on their zip code and square footage.
This was a simple example, but it showed the power of Jupyter Notebooks, and how anyone can get a 30-day free trial of DSX for their own experimentation.
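The heart of such a model is ordinary least squares. Here is a minimal sketch of the idea; the house data below is made up for illustration, not actual Austin MLS data.

```python
def fit_line(points):
    """Return slope w and intercept b minimizing squared error."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in points)
    var = sum((x - mean_x) ** 2 for x, _ in points)
    w = cov / var
    b = mean_y - w * mean_x
    return w, b

# (square feet, closing price in $1000s) -- illustrative numbers only
homes = [(1200, 240), (1500, 300), (1800, 360), (2400, 480)]
w, b = fit_line(homes)
print(round(w * 2000 + b))   # estimated price for a 2,000 sq ft home: 400
```

In DSX this kind of fit is a few lines in a Jupyter Notebook, with the real work being the data cleanup and choosing which features (like zip code) to include.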
Currently, being a data scientist is more of an art than a science. This is one of those fields that takes only a few months to learn, but years to master.
Rather than building a model from scratch, data scientists can take existing models, and modify them to fit their needs. There are a variety of existing models available in what is called the "Model Zoo". Google has over 2,000 projects already.
Those interested in trying out TensorFlow for themselves were directed to [Nimbix], a Cloud Service Provider that offers POWER servers with NVIDIA GPUs.
There were about 50 attendees, more than half of whom identified themselves as data scientists. As the inaugural sponsored event for the IBM Austin EBC, I think this was a success!
If you are in the Austin area, the next meetup will be at the [Capital Factory] on Brazos Street on November 30, 2017.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
Today, IBM announces a complete refresh of its IBM FlashSystem® all-flash array product line.
(FCC Disclosure: I work for IBM. Compression, data footprint reduction, and performance results, based here on internal IBM tests, vary widely by data and workload type. Your mileage may vary. This blog post can be considered a "paid celebrity endorsement".)
New FlashSystem 900 model AE3
The new AE3 model introduces new Microlatency cards at larger capacities: 3.6, 8.5 and 18 TB. Compare that to the previous model AE2 at 1.2, 2.9 and 5.7 TB.
These capacities are achieved by combining a three-dimensional (3D) chip layout with Triple-Level Cell (TLC) transistors, often referred to as 3D-TLC. The previous technology was single-layer, two-dimensional Multi-Level Cell (MLC).
Last week, at IBM Systems Technical University in New Orleans, Clod Barrera, IBM Distinguished Engineer and Chief Technical Strategist, explained this with an analogy. The two-dimensional approach is like a bungalow: if you want to pack in more people, you need to make the rooms smaller, which is getting more difficult. Alternatively, you could build a multi-story skyscraper, where adding more floors relieves the pressure to shrink the rooms.
Triple-level cell holds three bits per transistor. In the past, we had Single-level Cell (SLC) that stored one bit, and Multi-level Cell (MLC) that stored two bits. A future technology, Quad-level Cell (QLC) is not yet ready for production workloads in a datacenter.
The new AE3 models also offer Embedded inline Compression (EiC), with "Always-On" compression done right on the Microlatency cards. With a fully-loaded 12-card 2U drawer, in a 10+P+S RAID-5 configuration, the effective capacity is drastically increased:
[Table: FlashSystem 900 Model AE3 -- 2U drawer usable TB versus effective TB with EiC, by Microlatency card size]
The compression gets 2x to 3.5x on typical data, but your mileage may vary. The small Microlatency cards are capped at 110 TB effective capacity, and the medium and large at 220 TB, to avoid overwhelming the on-board DRAM cache. For clients who need smaller amounts of flash, IBM will continue to sell the AE2 models with 1.2 TB MLC Microlatency cards.
After the compression, the data is encrypted with AES 256-bit encryption. This is the same as the previous AE2 models, so nothing changes there.
The EiC compression and encryption do not impact performance. The new Microlatency cards achieve as low as 95 microsecond latency, about 10x faster than traditional Solid-State Drives (SSD) found in Dell EMC XtremIO and Pure Storage competitive offerings, and 40 percent faster than the new NVMe Solid-State drives. A 2U drawer can deliver up to 1.2 million IOPS, slightly more than the AE2 models (1.1 Million IOPS).
The new FlashSystem V9000 takes advantage of the new FlashSystem 900 AE3 models, effectively tripling the usable capacity.
The interesting thing now is compression. The V9000 offers two methods, both hardware-accelerated: EiC done on the Flash cards, and Real-time Compression (RtC) done by the Intel QuickAssist chips in the controllers.
The EiC method works on 4 KB blocks, so it only gets 2.5x to 3.5x on typical data. The RtC method works on larger 32 KB blocks, so it can find more repeated sequences of characters and achieve up to a 5x ratio, and it keeps compressed data in the controller node cache for better cache hit ratios.
However, RtC is limited to only 512 volumes, so admins would run the [Comprestimator tool] and select the cache friendly workloads with the best compression, such as Databases and CAD/CAM images.
With new FlashSystem V9000, you now get the benefits of both. Continue to use RtC for data that is better served with 4x-5x compression, and let EiC compress everything else!
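The capacity arithmetic behind this can be sketched in a few lines. The ratios and caps below are the figures quoted in this post; the function itself and the usable-TB inputs are illustrative, not an IBM sizing tool.

```python
# Back-of-the-envelope sketch of effective capacity under compression,
# using the ratios and caps quoted in this post. Illustrative only.

def effective_tb(usable_tb, ratio, cap_tb=None):
    """Usable capacity times compression ratio, limited by any cache cap."""
    effective = usable_tb * ratio
    return min(effective, cap_tb) if cap_tb else effective

# A hypothetical drawer of medium cards with EiC at ~3x, capped at 220 TB
print(effective_tb(85, 3.0, cap_tb=220))   # 220 -- the cap applies
# Cache-friendly database data under RtC at ~5x (no drawer cap shown)
print(effective_tb(36, 5.0))               # 180.0
```

The cap matters: past a certain point, adding compression ratio buys nothing, which is exactly why the highest-ratio (RtC) method is best reserved for the workloads that compress well.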
[Table: FlashSystem V9000 model AE3 -- usable TB for one drawer and for eight drawers, by Microlatency card size]
Running a typical 70/30 workload, representing 70 percent reads and 30 percent writes, each controller pair can deliver up to 600,000 IOPS. With four V9000 controller pairs clustered together, that is 2.4 Million IOPS. For more read-intensive, cache-friendlier workloads, IBM has clocked the system up to 1.3 million IOPS per controller node-pair, and 5.2 million for a four-pair cluster.
As with the previous model, the FlashSystem V9000 offers "Easy Tier" automatic sub-LUN tiering, and "storage virtualization" to manage both SAS-attached and SAN-attached storage. Over 400 different devices from major vendors are supported. This means that the busiest blocks will be moved up to low-latency Flash, and less active data will be moved to spinning disk.
As with the FlashSystem V9000, the A9000 and A9000R model 425 systems use the new FlashSystem 900 AE3, increasing their effective capacity.
The A9000/R models will continue to do "Data Footprint Reduction" (pattern removal, data deduplication, and RtC compression) to achieve up to a 5x compression ratio. However, to improve performance, internal metadata will not be compressed with RtC; instead, the underlying Flash cards handle it with EiC, which reduces CPU workload.
The FlashSystem A9000 model 425, aka "The Pod", has three grid controllers combined with the new FlashSystem 900 model AE3 for a compact 8U solution that can store nearly a petabyte. For smaller deployments, IBM also offers an 8-card, partially-filled drawer for a smaller entry size.
[Table: FlashSystem A9000 model 425 -- number of cards per drawer versus effective TB at 5x]
The FlashSystem A9000R model 425, aka "The Rack", has two to four grid elements, each grid element has two grid controllers and one FlashSystem 900 AE3 drawer. The previous 415 model supported five and six grid elements, but for now, model 425 is limited to just two, three or four. The A9000R model 425 supports all three Microlatency sizes, whereas the previous 415 model only supported medium (2.9 TB) and large (5.7 TB) sizes.
[Table: FlashSystem A9000R model 425 -- usable TB for two, three, and four grid elements, by Microlatency card size]
Performance of both the A9000 and A9000R scales with the number of grid controllers. Each grid controller delivers about 300,000 IOPS. The A9000 pod with three controllers gets up to 900,000 IOPS. Each A9000R grid element has two controllers, so 600,000 IOPS per element, with 2.4 million IOPS for a maxed-out four-element A9000R rack.
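The scaling rule is simple enough to sketch directly. The 300,000-IOPS-per-controller figure is the one quoted in this post; the function names are my own.

```python
# Sketch of the grid-controller IOPS arithmetic described above,
# using the per-controller figure quoted in this post.

IOPS_PER_CONTROLLER = 300_000

def a9000_iops(controllers):
    return controllers * IOPS_PER_CONTROLLER

def a9000r_iops(grid_elements):
    # each A9000R grid element has two grid controllers
    return a9000_iops(2 * grid_elements)

print(a9000_iops(3))     # 900000 -- the A9000 "pod"
print(a9000r_iops(4))    # 2400000 -- a maxed-out four-element rack
```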
Along with the hardware changes, IBM released version 12.2 of the Spectrum Accelerate software that runs in the FlashSystem A9000/R models.
This version supports Asynchronous mirroring between FlashSystem A9000/R systems and IBM XIV Gen3 storage. The replication can go in either direction, but the intent is to use FlashSystem for production, replicating to XIV Gen3 at a disaster recovery facility. Version 12.2 also increased the number of volumes, snapshots, and consistency groups supported.
24,000 volumes and snaps
1024 consistency groups, 512 volumes per consistency group
The new version applies to both the new model 425, as well as the previous 415 models!
This week, I am presenting at the IBM Systems Technical University for IBM Storage and POWER Systems. This conference is being held in New Orleans, Louisiana, October 16-20, 2017, at the beautiful Hyatt Regency. There are about 800 clients attending.
This is my recap for the last few sessions before I left town, spanning Tuesday afternoon and Wednesday afternoon.
Reasons why IBM hyperconverged systems powered by Nutanix surpass other HCI from HPE, Cisco and more
Rob Simpson, Senior Strategic Marketing Manager at Nutanix, presented Nutanix hyperconverged systems. Nutanix runs on both x86 and POWER. For x86, it supports VMware, Microsoft Hyper-V, and Citrix XenServer, as well as their own Acropolis Hypervisor (AHV) derived from Linux KVM. For POWER, it uses AHV recompiled for the POWER chip set.
Hyperconverged systems can be sold in full rack configurations, as individual appliances, or as software that can be deployed on your own servers. Rob compared Nutanix against three competitive appliances: Dell EMC VxRAIL based on VMware VSAN, HPE Simplivity, and Cisco HyperFlex.
Everything you wanted to know about IBM Spectrum Scale metadata but didn't know to ask
Eric Sperley, IBM Software Defined Infrastructure Architect, presented the internal metadata structures used in IBM Spectrum Scale.
Why, oh why, did I attend this presentation? I had worked on Spectrum Scale back when it was called GPFS over 15 years ago, and thought I already knew everything about "inodes" that I ever wanted to, but Eric proved me wrong!
"Laws, like sausages, cease to inspire respect in proportion as we know how they are made."
--John Godfrey Saxe
A lot has changed! There have been a lot of improvements to the internal structures to improve parallel I/O performance, and reduce latency of administrative tasks.
IBM Spectrum Scale can be divided into different file systems, each of which can be configured with different performance characteristics and block size, such as random small files for scanned images, versus large sequential files for streaming videos.
My presentation was nowhere near as technical as Eric's above. I provided an overview of how IBM Spectrum Scale is configured, how it works, and how it interacts with IBM Cloud Object Storage System, Spectrum Protect, and Spectrum Archive.
I also covered the latest GSxS and GLxS models of the Elastic Storage Server, or ESS for short. These models provide awesome performance at low cost. The GSxS models are all-flash arrays for high performance. The GLxS models are hybrid with 2 Solid-State Drives and the rest NL-SAS 7200 rpm spinning disk for high capacity.
IBM COS new features
Andy Kutner, IBM Channel and Alliances Architect, presented the latest features in IBM Cloud Object Storage, IBM COS for short.
Compliance Enabled Vaults, or CEV for short, offer Non-Erasable, Non-Rewriteable (NENR) tamperproof protection for objects. Objects written to a CEV vault cannot be deleted or replaced with newer versions for a specified retention period.
(Note: Some folks mistakenly use the term "Write Once, Read Many" (WORM) for this. WORM applies only to tape, optical, paper tape, punched cards, and non-erasable ROM chips. For this reason, the term "Non-Erasable, Non-Rewriteable" (NENR), used in the U.S. Securities and Exchange Commission regulation SEC Rule 17a-4, was created to extend this tamperproof protection to flash, disk and cloud-based storage architectures.)
The new entry-level configurations lower the minimum system capacity. Previously, IBM recommended at least 500 TB of capacity to consider IBM COS. Now, the combination of embedded Accessers and Concentrated Dispersal mode can lower the starting point to as low as 72 TB, while still allowing you to grow to multiple PBs.
This week, I am presenting at the IBM Systems Technical University for Storage and POWER Systems. This conference is being held in New Orleans, Louisiana, October 16-20, 2017, at the beautiful Hyatt Regency.
This is my recap for sessions on Day 2 morning.
FlashSystem A9000 and A9000R Overview
Andy Walls, IBM Fellow, CTO and Chief Architect, and Brent Yardley, IBM STSM and Master Inventor, co-presented this session. This was the "deep dive" on the A9000/R, a continuation of the one they did yesterday.
The Pendulum Swings Back -- Understanding converged and hyperconverged integrated systems
With IBM's partnership with Nutanix, this has become a particularly popular topic. I cover the last 50 years of storage evolution, from internal storage and external storage to NAS and SAN storage networks.
More recently, people have been willing to give up some of those gains for something simpler and less expensive, even if less powerful and less reliable. Enter Converged and Hyperconverged Systems. IBM PureSystems and VersaStack lead the pack for Converged Systems, along with IBM Spectrum Scale, Spectrum Accelerate and Nutanix on IBM Power Systems for Hyperconverged Integrated Systems.
New Generation of Storage Tiering -- Less Management, Lower Costs, and Improved Performance
There are orders of magnitude between the fastest All-Flash Array and the least expensive tape storage. Ideally, there would be a "slider bar" that allowed people to select from the fastest to the least expensive. IBM offers a variety of solutions to offer this "slider bar", with automation to move data as needed between tiers.
I start with IBM Easy Tier, available on DS8000 and Spectrum Virtualize products; move to IBM Virtual Storage Center, where advanced analytics move data to the right location; and finish with IBM Spectrum Scale, which provides the ultimate tiering, across multiple locations, between flash, disk and tape.
The lunches at these conferences are amazing, but then the "Big Easy" is known for its food!