This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years with IBM Systems, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles at IBM during his 19 plus years at IBM. Lloyd most recently has been leading efforts across the Communication/CSI Market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years Lloyd supported the industry accounts as a Storage Solution architect and prior to that as a Storage Software Solutions specialist during his time in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value IBM solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Last week, I was in São Paulo, Brazil for IBM Systems Technical University.
Instead of separate physical rooms for each breakout session, this event had "virtual rooms". One speaker called it the "Software Defined Stage". Basically, there were five "rooms" in the main ballroom, and another eight rooms in a second ballroom.
Rather than blasting out each speaker's voice over loudspeakers, each speaker spoke softly into a headset microphone. All attendees wore headsets. Rooms 1 through 4 offered real-time translation, so attendees could choose to hear in English or Brazilian Portuguese.
In the remaining "rooms", local speakers spoke in Brazilian Portuguese, but they still used headset microphones so that no one had to talk over the speaker in the next "room". For many of these sessions, the charts were written in English.
My translators, Luciana and Marilia, explained the advantage of this approach. At past events, when speakers presented in English, attendees who needed the real-time translation wore the "headphone of shame", which advertised to everyone else that their English proficiency was poor.
Sometimes, those who did not understand English well would skip the headsets, nodding or laughing along with the other attendees, but failing to understand the message. By having everyone wear headsets, there is no stigma, and everyone can discreetly select the language they prefer to listen in.
Here is my recap for the breakout sessions on Day 2:
In this presentation, I gave an overview of Cloud technologies, including OpenStack and RESTful APIs to manage server and storage resources. I then covered IBM Hybrid Cloud Storage configurations in five categories:
Cold storage for data infrequently accessed
Backup and Snapshot storage
Disaster Recovery storage
Daily Operations and Reporting
Special thanks to Chris Vollmar and Brian Sherman for their help in preparing this presentation.
Data Optimization: How to verify your data is being used efficiently
It is hard to believe that it was over 15 years ago that I was the chief architect for the software we now call IBM Spectrum Control. There are a variety of editions and bundles for this product, but my focus in this talk was on the advanced storage analytics found in IBM Virtual Storage Center and IBM Spectrum Control Advanced Edition.
I covered three use cases:
What storage tier to put your workload in, and how to move existing data into a faster or slower tier to meet business requirements and IT budgets.
For steady state environments, how to re-balance storage pools within a single tier to keep things even for optimal performance.
When it is time to decommission storage, how to transform volumes from one storage pool to another without downtime or outages.
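These analytics all come down to placement decisions. As an illustrative sketch only (this is not Spectrum Control's actual algorithm, and the thresholds are made-up), a tiering tool might classify volumes by I/O density:

```python
# Illustrative sketch only -- not IBM Spectrum Control's actual
# algorithm. The thresholds are hypothetical, but the idea is the same:
# rank volumes by I/O density (IOPS per TB) and place the hottest
# data on the fastest tier.
def recommend_tier(iops, capacity_tb, hot=500.0, cold=50.0):
    """Return a tier name based on I/O density (IOPS per TB)."""
    density = iops / capacity_tb
    if density >= hot:
        return "tier0-flash"        # hottest data earns flash
    if density <= cold:
        return "tier2-nearline"     # coldest data can ride on HDD
    return "tier1-enterprise"       # everything else in the middle

print(recommend_tier(10000, 5))     # 2000 IOPS/TB -> tier0-flash
print(recommend_tier(100, 10))      # 10 IOPS/TB   -> tier2-nearline
```

A 5 TB volume doing 10,000 IOPS lands on flash, while a 10 TB volume doing 100 IOPS can be moved down to nearline disk to free up the faster tier.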
Special thanks to Bryan Odom for his help in preparing this presentation.
IBM Hyperconverged Systems powered by Nutanix: Technical Overview
Ricardo Matinata, IBM Senior Technical Staff Member for Linux, KVM and Cloud on POWER, presented the latest IBM CS models for POWER systems that are pre-installed with Nutanix software running their Acropolis Hypervisor (AHV) to run Linux on POWER application virtual machines.
Managing Risks with Thin Provisioning, Compression, and Data Deduplication
This session had four parts. First, an overview of "Data Footprint Reduction" technologies, like compression, data deduplication, space-efficient snapshots and thin provisioning.
Second, a look at how these technologies can get storage administrators in trouble. Much like airlines selling more tickets than seats on the airplane, storage administrators may over-provision based on data reduction estimates, and then suddenly run out of storage capacity.
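To make the overbooking risk concrete, here is a back-of-the-envelope sketch. All of the numbers are hypothetical:

```python
# Hedged sketch with hypothetical numbers: like an airline overselling
# seats, an administrator provisions more capacity than the pool
# physically holds, betting on a data-reduction estimate.
def physical_needed_tb(provisioned_tb, written_fraction, actual_reduction):
    """Physical TB consumed when hosts fill `written_fraction` of their
    provisioned space and the data actually reduces by `actual_reduction`."""
    return provisioned_tb * written_fraction / actual_reduction

pool_physical_tb = 100
# 400 TB provisioned against 100 TB physical, betting on 4:1 reduction.
# If reduction only achieves 2.5:1 and hosts write 80% of their space:
demand = physical_needed_tb(400, written_fraction=0.8, actual_reduction=2.5)
print(demand, "TB needed vs", pool_physical_tb, "TB physically available")
# -> 128.0 TB needed vs 100 TB available: the pool runs out of space
```

The bet looks safe until the workload mix changes, which is exactly why the reporting and analytics covered later in the session matter.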
Third, an overview of IBM FlashSystem A9000 and A9000R products, often referred to as "A9000/R" to cover both as a family. These models offer data footprint reduction for all data.
Finally, I explained how the Hyper-Scale Manager GUI can help with reporting and analytics to avoid these risks. This GUI is available for the FlashSystem A9000/R, as well as XIV Gen3 and Spectrum Accelerate software clusters.
Special thanks to Rivka Matosevich for her help in preparing this presentation.
The Right Flash for the Right Workload
Fabiano Gomes, IBM Client Technical Specialist, presented IBM's portfolio of All-Flash Arrays, from FlashSystem and DS8000F to Elastic Storage Server and Storwize V7000F and V5000F models. Each of these have their own characteristics, which might favor one over the others for particular workloads and use cases.
The day was capped off with a nice evening reception at the pool bar. Bartenders were serving Caipirinhas, a Brazilian cocktail traditionally made with sugar cane liquor, sugar and lime, but in this case also offered in other flavors, such as pineapple or passion fruit.
This week, May 14-18, is Business Continuity Awareness Week!
This worldwide event, sponsored by the [Business Continuity Institute], promotes education and awareness designed to increase our understanding of business continuity, teach clients ways to understand and manage IT and business risks, and introduce new techniques and technologies designed to minimize or even eliminate business and personal disruption.
IBM is actively involved. Monday starts off with opening statements by Andrea Sayles, IBM General Manager of Resiliency Services, and Michael Puldy, IBM Director of Global Business Continuity Management.
The event offers a variety of online webinars, as well as a wealth of educational resources.
IBM Master Inventor, Senior IT Architect, and Event Content Manager
Well, it's Tuesday again, and you know what that means? IBM Announcements! We have a lot today, so I will just give you the quick highlights, and then Chris and Lloyd will follow-up with more detailed posts.
New IBM Storwize V5000 models
IBM introduces several new entry-level models.
The Storwize V5010E and V5030E are the "Express" models that allow for hybrid configurations, mixing Flash and spinning HDD disk. The Storwize V5010E is a single controller, two-canister model, with basic features. The Storwize V5030E adds more memory, more CPU power, and additional features like Data Reduction Pools and data-at-rest Encryption. Hosts can attach via SAS, 16Gb FCP or iSCSI.
The Storwize V5100 is the baby model of the FlashSystem 9100, supporting both FlashCore and industry-standard NVMe Flash drives, with the option to SAS-attach expansion drawers, mixing Flash and spinning HDD disk. The Storwize V5100F is the all-flash version. Hosts can attach via 32Gbps FCP, 25GbE RoCE, 25GbE iWarp, and iSCSI.
IBM Spectrum Virtualize for Public Cloud on Amazon Web Services (AWS)
IBM Spectrum Virtualize for Public Cloud, or what our young folks unofficially shorten to SV4PC, has been available on IBM Cloud and will now also be available on Amazon Web Services.
For those readers asking "What took so long?": Amazon was not going to put specialized equipment in its data centers, so IBM had to make the Spectrum Virtualize software container-native. Yes, the SVC code now runs in its own Docker container.
Basically, a 2-node cluster is represented as two AWS EC2 instances, virtualizing EBS storage. The Transparent Cloud Tiering (TCT) that lets you "FlashCopy-to-the-Cloud" can be used to go directly to Amazon's S3 object storage.
This conversion to container-native has worked so well, IBM now plans to offer container-native software-defined storage capability across the board, for object storage, block storage, and file storage.
Did you notice that the Storwize V5100/F models support 32Gbps FCP in the section above? If that raised your eyebrow, I am pleased to tell you that IBM will be supporting 32Gbps FCP on these new Storwize V5100/F, the Storwize V7000 Gen3 and the FlashSystem 9100 devices.
We have also added a new b-type SAN switch, the SAN18B-6 which is Broadcom's Gen6 technology in a sleek 1U configuration, sporting 12 FCP ports that support 32Gbps and auto-negotiate to slower speeds as needed for compatibility with 8Gbps and 16Gbps devices. The other six ports are Ethernet, and can be used for disaster recovery replication, either using native TCP/IP or FCIP protocols.
IBM has enhanced the alerting capabilities of both the on-premises IBM Spectrum Control and its "as-a-Service" sister offering, IBM Storage Insights. This allows you to set up alerts for "device groups" spanning multiple storage devices, as well as set up filters to make the alerts more meaningful, eliminating some of the noise.
When IBM first introduced IBM Storage Insights, it was intended as an alternative to the on-premises solution. Now clients demand both, so if you have one, we can offer you the other! The new [IBM Storage Insights for IBM Spectrum Control] is an IBM Cloud service that can help you predict and prevent storage problems before they impact your business.
It is complementary to IBM Spectrum Control and is available at no additional cost if you have an active license with a current subscription and support agreement for IBM Virtual Storage Center, IBM Spectrum Storage Suite, or any edition of IBM Spectrum Control.
As an on-premises application, IBM Spectrum Control doesn't send the metadata about monitored devices offsite, which is ideal for dark shops and sites that don't want to open ports to the cloud. However, if your organization allows for communication between its network and the cloud, you can use IBM Storage Insights for IBM Spectrum Control to transform your support experience for IBM block storage.
IBM Spectrum Scale has been certified to run with the Hortonworks Data Platform (HDP) 3.1 release.
(Ha, I probably could have fit all that in the title of this section, but instead I just said "IBM Spectrum Scale", so you were thinking "Oh boy!" and then saw something that could have fit in the title and felt all disappointed. It is kind of like when the local news asks "Was the restaurant where you had lunch today contaminated with Salmonella?" and then follows up with "Find out on the 11:00pm evening news!" And then you wait until 11:00pm for them to say, "No, there was no Salmonella found in any of the restaurants.")
So, I would not have mentioned the Spectrum Scale certification of HDP 3.1 unless there was at least something else worth mentioning. There is! IBM Spectrum Scale has also enhanced its performance for SMB and NFS, and has improved the scalability and resiliency of its Active File Management (AFM) feature.
The IBM FlashSystem A9000 and A9000R are targeted to Cloud and Managed Service Providers (CSP/MSP). The 12.3.2 release now supports VLAN tagging for iSCSI deployments. This VLAN tagging allows multiple virtual networks and IP addresses to share iSCSI ports, making it ideal for multi-tenancy for CSP/MSP clients.
IBM manages over a hundred Blockchain networks for its clients. For those not familiar with Blockchain, it is a way to record transactions: whenever money or product changes hands, an entry is recorded in the blockchain ledger for all to see.
This has two drawbacks. One is privacy: information stored in the ledger may include details you do not want everyone to see. The other is scalability: storing photos and other supporting documents may be nice to have, but takes up a lot of space and slows down transaction rates.
The solution is "off-chain" data. These are supporting documents that aren't needed in the blockchain itself. To connect them, you store a checksum hash of the supporting document in the ledger, then store the supporting document as off-chain data on-premises. If you need to produce the document for an audit, its checksum hash will match what is in the ledger.
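The hash-anchoring step is easy to sketch. Here is a minimal illustration using SHA-256; how the digest actually gets written to the ledger depends on the blockchain platform's own APIs, and the document name below is hypothetical:

```python
import hashlib

# Minimal sketch of the off-chain pattern: record only a SHA-256 digest
# in the ledger entry, keep the document itself on-premises, and verify
# the document against the ledger at audit time.
def anchor(document: bytes) -> str:
    """Digest to record in the blockchain ledger entry."""
    return hashlib.sha256(document).hexdigest()

def verify(document: bytes, ledger_hash: str) -> bool:
    """At audit time: does the off-chain document still match the ledger?"""
    return hashlib.sha256(document).hexdigest() == ledger_hash

doc = b"bill of lading #1234"           # hypothetical supporting document
ledger_hash = anchor(doc)               # this short string goes on-chain
print(verify(doc, ledger_hash))         # True: untampered document
print(verify(doc + b"X", ledger_hash))  # False: any edit is detected
```

The ledger stays small and public while the bulky, possibly sensitive document stays private, yet any tampering with the off-chain copy is immediately detectable.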
In the beginning, people thought Docker containers would just be used for microservices with no persistent storage. Then clients realized they needed persistent storage, and they needed to orchestrate that storage provisioning. The IT industry has a variety of orchestrators, like Kubernetes, Docker Swarm, and Mesos, and all of them manage persistent storage differently. IBM has focused on Kubernetes, using the Ubiquity open source project to manage FlexVolumes.
The Container Storage Interface (CSI) is an effort to standardize the provisioning of persistent storage, allowing containerized applications to have access to storage that persists even after a container shuts down or crashes. For the next few years, I suspect IBM will need to support both the old way (FlexVolumes) and the new way (CSI) until the standards settle.
You can hear all about these exciting announcements at the upcoming IBM Systems Technical University (TechU) in Atlanta, GA (USA), April 29-May 3. Visit [ibm.biz/Atlanta2019] to learn more and register. The three of us all plan to be there! Stop by and say hello.
IBM Senior Certified Executive IT Architect
Well, it's Tuesday again, and you know what that means? IBM Announcements!
Today I want to write about a very recent enhancement for IBM storage clients. This announcement applies to all IBM Storage clients using IBM Spectrum Control.
IBM recently announced a new solution for existing Spectrum Control clients to obtain a cloud-based version and get even more value from their Spectrum Control investment. Whether you have Spectrum Control Standard, Advanced, Select or Virtual Storage Center, all editions are covered under this new solution.
Existing Spectrum Control clients are entitled to this new solution using their existing Spectrum Control licensed capacity. The new solution, a cloud-based Software as a Service (SaaS) offering, is titled Storage Insights for Spectrum Control, and is provided to current Spectrum Control clients at no additional cost.
Since the release of Storage Insights and Storage Insights Pro, existing Spectrum Control clients have asked for a similar cloud-based option. Today we have that option.
Here is a rundown of the capabilities compared, each evaluated for the On Premises offering (IBM Spectrum Control) and the Cloud offering (IBM Storage Insights for IBM Spectrum Control):
Other devices: non-IBM storage devices, VMware, SAN switches
Asset management: type, model, serial number, firmware
Support management: ticket creation / log upload
Health (status of each entity): direct / call home
Alerting (send message to user): status / thresholds / e-mail / SNMP / scripts
Storage / fabric performance and error reporting
Performance interval and retention: 5 minutes with 24-hour retention, 5 minutes with 1-year retention, or 1 minute with customizable retention, depending on the offering
Provisioning using Service Classes and Capacity Pools with automatic zoning
Reclamation analysis of unused volumes
Service management: chargeback and consumer reports
Custom reporting: GUI / API
Tiering support across pools: recommend and implement
Balance workload across pools
User management: Active Directory/LDAP integration
Cloud portal SLA
If your existing Spectrum Control instance is meeting your requirements, consider Storage Insights for Spectrum Control for the added value of enhanced IBM Storage Support, plus ongoing access to the latest features of Storage Insights for Spectrum Control without any of the maintenance or upgrade activities.
Whatever the reason you need to reach out to IBM Storage Support, giving IBM Support immediate access to your storage configuration details will reduce the time and effort your team spends getting a resolution or recommendation from IBM on how to proceed.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
Here is a quick recap of this week's October 9, 2018 announcements.
IBM Elastic Storage Server V5.3.2
The new IBM Elastic Storage Server v5.3.2 offers support for new drawers, non-disruptive upgrades of older models, and an optional 100GbE switch.
When the ESS was first announced, we had GSx models and GLx models, where x represented the number of storage drawers. The "S" stood for small 2U-24 drive drawers, so for example the GS4 had two Power8 servers combined with four 2U-size flash SSD drawers. The "L" stood for large 4U-60 drive nearline HDD drawers.
The second generation models append "S" for Second, so we had GS4S and GL6S. The large models changed to larger 5U-84 drive drawers. As with the previous "L" models, two slots per system contain Solid State Drives for internal use and caching, leaving the rest for slower spinning HDD disk.
Before this week, upgrading from one model to another meant moving the data off, installing and configuring the additional drawers, and then moving the data back. With today's announcements, you can now non-disruptively upgrade GS1S to GS2S to GS4S models, and GL1S to GL2S to GL4S to GL6S.
While you can federate as many GS and GL models together as you like, that may mean spending more on Power8 servers than you are comfortable with, so IBM added "GHxy" Hybrid models, with x 2U-24 drive drawers and y 5U-84 drive drawers. Initial models included the GH14 and GH24, which had one or two flash drawers, and four large drawers. This week, IBM announced a new GH12 model. The SSD flash in the 2U drawer can be 3.84TB or 15.36TB, and the nearline drives in the 5U drawers can be 4TB, 8TB or 10TB capacities.
What did IBM call the third generation GL models? Instead of using "T" which is both the next letter in the alphabet after "S", and the initial letter of the word "third", IBM instead decided to use "C" to designate CORAL project, the Collaboration of Oakridge, Argonne, and Lawrence Livermore national labs. Since the change applied only to the GL models, not the GS models, this makes sense.
To meet the requirements to build the world's fastest supercomputer for the CORAL project, IBM created a modified Elastic Storage Server model with 4U drawers that contained 106 drives. Now, these are available to the general public! IBM announced GL1C, GL2C, GL4C and GL6C models. In these, there are 2 SSD drives, and the rest are 10TB nearline drives.
The new optional 100GbE switch has 32 ports with a total of 6.4 Tbps of bandwidth. The ports can support 10, 40, 50 and 100GbE data rates, with 300 nanoseconds of port-to-port latency at 100GbE.
Spectrum Scale has been licensed in two ways: the Standard Edition, based on the number of sockets, with different prices for NSD servers, FPO servers and NSD clients; and the "Data Management" edition, which offers advanced features and is priced by NSD capacity, independent of the number of servers and clients attached.
Clients liked the capacity-based license model, but did not necessarily need the advanced features. In response, IBM now offers the "Data Access" edition, which offers the same features and functions of Standard Edition, but with capacity-based licensing.
For ESS models, you can choose to license by disk as before, or by capacity, in combination with Spectrum Scale capacity-based deployments.
Hortonworks Data Platform v3.0.1 has followed suit. With the merger between Hortonworks and Cloudera, Hortonworks now offers capacity-based licensing for shared storage, like the IBM Elastic Storage Server.
IBM FlashSystem A9000/A9000R software version 12.3
There are three enhancements in this release: Three-site replication, a new model of A9000R, and raising a previous pool size limit.
For three-site replication, you can now combine HyperSwap, which maintains two identical copies at distance, with asynchronous mirroring to a third copy. The first two copies are typically within 100 km of each other, but the third can be a much greater distance away, across the continent if you like.
The A9000 "Pod" had three x86-based controllers and one FlashCore drawer. The A9000R "Rack" had four, six or eight x86-based controllers and two, three or four FlashCore drawers, respectively, as well as a Power Distribution Unit (PDU) and a pair of InfiniBand switches to connect everything together. The new "Grid Starter" model is very much like the "Pod", with three controllers and one FlashCore drawer, but adds the PDU and IB switches. The idea is that you can start with a "Grid Starter", then later upgrade to the larger A9000R models as you grow.
Back in XIV days, the architectural limit per pool of 1PB was plenty big. But with the new capacities on the A9000 and A9000R, the 1PB limit was starting to draw complaints. This limit was lifted, so that now a single pool can be made with the entire capacity of the box.
In the mainframe world, IBM Geographically Dispersed Parallel Sysplex, now just GDPS, provides the highest BC-7 business continuity tier, with end-to-end coordination across servers, networks and storage devices. For IBM Power Systems, similar BC-7 support is provided by IBM Geographically Dispersed Resiliency.
In this week's announcement, IBM Geographically Dispersed Resiliency (GDR) for Power Systems has been renamed and now offered in two editions: VM Recovery Manager HA and VM Recovery Manager DR. The "HA" edition provides high availability using Power Systems Live Partition Mobility for AIX, IBM i and Linux operating systems.
The "DR" edition provides both High Availability and Disaster Recovery capabilities, supporting mirrored storage systems like IBM DS8000, SAN Volume Controller, FlashSystem 9100 and V9000, and Storwize systems, as well as competitive storage from Dell EMC and Hitachi.
Next week, I will be in Hollywood, Florida for IBM Technical University (Oct 15-19), and then Rome for the IBM Technical University (Oct 22-26). I will be covering many of these announcements above, and more!
(Actually, the [XIV Model 314] was announced last year, on Nov 10, 2015, but announcements made in November and December are often overlooked amid distractions like holidays and year-end processing. Today's announcement eliminates the "not available in some countries" restriction. The last time I mentioned on this blog that a product was not available in some countries, I got tons of questions asking "why". Hopefully, waiting until a product is available in all countries eliminates that concern.)
What does the XIV model 314 offer? IBM doubled the processors, up to 180 cores, and doubled the DRAM cache, up to 1440 GB. Both of these changes were done to improve the Real-time compression capability.
To reduce test effort cycle time, IBM simplified the configuration options:
Instead of ranging from 6 to 15 modules, the model 314 is limited to 9-15 modules.
The drive sizes are reduced to just 4TB and 6TB capacities.
If you want a Solid-State drive (SSD) for cache boost, only the 800GB option is available.
Through a combination of thin provisioning and compression, you can define up to 2 PB of soft capacity per rack.
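As a rough sketch of how those two features multiply together (the ratios below are made-up illustrations, not actual XIV sizing guidance):

```python
# Rough sketch with hypothetical numbers: thin provisioning and
# compression multiply into the "soft" capacity you can define.
def soft_capacity_tb(physical_usable_tb, compression_ratio, thin_overcommit):
    """Capacity you can provision: physical space, stretched by the
    expected compression ratio and a thin-provisioning overcommit factor."""
    return physical_usable_tb * compression_ratio * thin_overcommit

# e.g. 500 TB usable, 2:1 compression, 2x thin overcommit -> 2,000 TB soft
print(soft_capacity_tb(500, 2.0, 2.0))  # 2000.0
```

The 2 PB figure is a per-rack ceiling on that soft capacity; actual achievable ratios depend on the workload's data.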
The firmware v11.6.1 reduces the minimum volume size for compression from 103GB to 51GB. Firmware perpetually licensed for Spectrum Accelerate can be used with the XIV Model 314.
This week, IBM clients, Business Partners and executives get together for the new IBM [Think 2018] conference. This is a combination of last year's three events: Edge, InterConnect, and World of Watson (WoW).
(The theme this week is "Putting smart to work." Some might feel that this is a grammatically-incorrect use of the adjective [smart], referring to having quick-witted intelligence or being neat and well-dressed. Many words in the English language have multiple meanings and uses. The word smart is also a noun, referring to either business acumen, technical skills, or "a sharp stinging pain")
The keynote session today was "Science Slam: Unveiling 5 Breakthrough Technologies That Will Change the World!" by Arvind Krishna, IBM Research Director. IBM has over 3,000 researchers, in 12 labs, across six continents.
This talk was based on IBM's annual five-in-five, five predictions that might change the world in the next five years. For amusement, read my 10-year-old blog post [Five in five for 2008], including predictions for smart thermostats that can be controlled remotely, and self-driving cars.
("Science Slam" is IBM Research's version of [Pecha Kucha], but instead of art students having 20 minutes to show 20 PowerPoint slides, each IBM research scientist has 5-7 minutes to explain the research project they are exploring. These are done both internally and for audiences outside the company.)
Jamie Garcia served as emcee, introducing each of the five experts. Each spent 5-7 minutes, Science Slam style, on what projects they were working on.
1. Crypto-anchors and blockchain technology
‘Everything you don’t understand about money combined with everything you don’t understand about computers’ [25-minute video]
Andreas Kind presented first. Blockchain is not just a provenance system that enables Bitcoin and other cryptocurrencies, it can be used for other goods.
(The best layman's explanation of blockchain and cryptocurrencies I have seen was John Oliver's humorous take on his HBO show [Last Week Tonight]!)
Counterfeit goods, from cinnamon and footwear to medicine and automotive parts, are estimated at over $1.8 trillion US dollars. IBM is working on ways to use blockchain for other things, such as restoring trust in the global supply chain. IBM hopes to cut the number of counterfeit goods in half or more.
Andreas explained tamper-proof technologies called "crypto-anchors" -- from indelible ink on pharmaceuticals to computers smaller than a grain of salt -- that can be used to track products as they travel from one country to the next.
2. Lattice Cryptography and Fully Homomorphic Encryption
Cecilia Boschini from IBM Zurich presented next. As quantum computers get more powerful, the basic math involving prime numbers that most current encryption schemes are based on becomes vulnerable.
(Don't worry, she assured the audience, hackers would need a 1000-Qubit quantum computer to break today's encryption codes, and such a machine doesn't exist yet!)
What we need are post-quantum, or quantum-resistant, mathematical models. Lattice Cryptography aims to use math problems that are harder to solve, making it more difficult for hackers to break the code, even when armed with quantum computers.
Another challenge with existing encrypted data is that we must decrypt the data to perform computations on it. Fully Homomorphic Encryption, or [FHE] for short, allows computations to be done on data in its encrypted state. For example, if I had a list of names with encrypted credit card or social security numbers, I could sort this list alphabetically without decrypting any of the data.
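Full FHE libraries are beyond a blog post, but the core idea of computing on data while it stays encrypted can be shown with textbook RSA, which is multiplicatively homomorphic (a partial, not fully, homomorphic scheme). This is a toy sketch with a classroom-sized key, never something to use for real security:

```python
# Textbook RSA is *multiplicatively* homomorphic: multiplying two
# ciphertexts yields the ciphertext of the product of the plaintexts.
# Toy key size for illustration only -- never use this in practice.
n, e, d = 3233, 17, 2753     # classic toy RSA key (p=61, q=53)

def enc(m):
    return pow(m, e, n)      # encrypt: m^e mod n

def dec(c):
    return pow(c, d, n)      # decrypt: c^d mod n

c1, c2 = enc(7), enc(6)
c_product = (c1 * c2) % n    # compute on the *ciphertexts* only
print(dec(c_product))        # 42 -- the product of the plaintexts
```

A fully homomorphic scheme extends this idea to both addition and multiplication, which is enough to evaluate arbitrary computations on encrypted data.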
3. AI-enabled robotic microscopes to monitor ocean water
Tom Zimmerman is known as IBM Almaden's [MacGyver], able to use common technologies in new and innovative ways.
By 2025, over half of the world's population will be living in water-stressed locations. IBM is working on robotic microscopes that can be deployed across the oceans, connected to the Cloud, monitoring the state of plankton.
Why plankton? Plankton produces two-thirds of all oxygen we breathe, and serves as the "baby food" for all oceanic species. Tom has re-programmed "face recognition" in smartphone cameras to recognize plankton, identifying what they are doing and eating.
Monitoring plankton provides an "early warning system", the proverbial [canaries in the coal mine] for impending water problems.
4. Eliminating Bias from Artificial Intelligence (AI)
Why do our brains develop bias? First, information overload! Overwhelmed by too much information, our brains sort it out by either looking only for differences, or focusing on familiar things that confirm our existing beliefs.
Not enough meaning. Lacking complete information, our brains fill the gaps and connect the dots to find patterns that aren't patterns at all. Racism, prejudice, and stereotypes are examples of this.
The need to act fast! Survival in some cases demands acting fast, to avoid being eaten by an animal, for example. Unfortunately, our brains favor the quick and simple, over the more important but often delayed, distant or complicated response.
What should we remember? We decide what to remember and what to forget. Our brains often favor generalities over specifics, as they take up less space. And the details we do remember are often edited or reinforced after the fact.
IBM is collaborating with the Massachusetts Institute of Technology [MIT] to reduce bias in Artificial Intelligence by rating different AI models on fairness.
The AI models that will win in the future are those where the biases are tamed or eliminated altogether.
5. Quantum Computing
Talia Gershon was the last speaker.
Many problems become exponentially more difficult to solve with classical computers. For example, simulating protein molecular bonding gets more difficult the larger the molecules are, because you have more electron interactions.
Quantum computers run at a temperature of 15 millikelvin (mK), about 460 degrees Fahrenheit below zero. The computation unit is called a [Qubit], and a 5-Qubit quantum computer can solve problems that your laptop can solve classically. IBM now has "IBM Q" 50-Qubit computers available.
The IT industry is still in the early stages, but the IBM Quantum Information Software Kit (QISKit) allows programmers to experiment and develop algorithms for this new computational model.
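QISKit is how you would actually program such a machine. As a dependency-free illustration of the math a quantum SDK automates, here is a tiny pure-Python statevector sketch of the two-qubit Bell state (a Hadamard gate, then a CNOT), the "hello world" of quantum computing. This is plain simulation, not the QISKit API:

```python
import math

# Pure-Python statevector sketch (not the QISKit API): build the Bell
# state with H on qubit 0 followed by CNOT from qubit 0 to qubit 1.
# Qubit 0 is the least-significant bit of the state index.
def h_on_qubit0(state):
    s = 1 / math.sqrt(2)
    out = state[:]
    for i in range(0, len(state), 2):   # index pairs (i, i+1) differ in bit 0
        a, b = state[i], state[i + 1]
        out[i], out[i + 1] = s * (a + b), s * (a - b)
    return out

def cnot_0_to_1(state):
    out = state[:]
    out[0b01], out[0b11] = state[0b11], state[0b01]  # flip bit 1 when bit 0 is set
    return out

state = [1.0, 0.0, 0.0, 0.0]            # start in |00>
state = cnot_0_to_1(h_on_qubit0(state))
probs = [round(abs(a) ** 2, 3) for a in state]
print(probs)  # [0.5, 0.0, 0.0, 0.5]: measure 00 or 11, never 01 or 10
```

The two qubits end up entangled: half the measurements read 00 and half read 11, a correlation a classical pair of bits cannot reproduce, and the kind of state a real quantum SDK builds with two lines of gate calls.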
Over the next five years, IBM predicts that Quantum Computing will transition from the lab, to the mainstream, to solve problems that were previously too difficult or time-consuming to solve.
Demonstrate that IBM technologies in areas like Artificial Intelligence (AI), Blockchain, Cloud, and the Internet of Things (IoT) are relevant in solving the world's biggest challenges
Encourage developers to contribute their time and talent to open source projects that benefit the greater good
Generate fresh ideas on how to tackle age-old problems that plague society
Each year will have a different focus. This year, the focus is in preventing, responding to and recovering from natural disasters, especially important with 2017 ranked as one of the worst years on record for catastrophic events, including fires, floods, earthquakes and storms.
Call for Code invites developers to create new applications to help communities and people better prepare for natural disasters. For example, developers may create an app that uses weather data and supply chain information to alert pharmacies to increase supplies of medicine, bottled water and other items based on predicted weather-related disruption. Or it could be an app that predicts when and where the disaster will be most severe, so emergency crews can be dispatched ahead of time in proper numbers to treat those in need.
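As a toy illustration of the pharmacy idea, here is a sketch of the alerting logic. Every name, threshold, and data structure below is invented for illustration; no real weather or supply-chain API is being used.

```python
# Hypothetical sketch of the pharmacy-alert idea: all names, thresholds and
# data structures here are invented for illustration, not a real API.
def stock_alerts(forecasts, inventory, severity_threshold=0.7):
    """Return pharmacies that should increase stock before a predicted disruption.

    forecasts: {region: predicted disruption severity, 0.0-1.0}
    inventory: {region: days of supply on hand}
    """
    alerts = []
    for region, severity in forecasts.items():
        days_on_hand = inventory.get(region, 0)
        # Flag regions facing severe disruption with under a week of supply.
        if severity >= severity_threshold and days_on_hand < 7:
            alerts.append((region, "increase medicine and bottled water stock"))
    return alerts

print(stock_alerts({"gulf-coast": 0.9, "midwest": 0.2},
                   {"gulf-coast": 3, "midwest": 30}))
# [('gulf-coast', 'increase medicine and bottled water stock')]
```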
Can't think of any ideas for an app? Here are some TED videos that might inspire you:
IBM's $30 million USD investment over five years will fund access to developer tools, technologies, free code and training with experts. To raise awareness and interest in Call for Code, IBM is coordinating interactive educational events, hackathons and community support for developers around the world in more than 50 cities, including Amsterdam, Bengaluru, Berlin, Delhi, Dubai, London, New York, San Francisco, Sao Paulo and Tel Aviv.
(My earliest memory of using a contest for fresh ideas was back in 1975, after the city of Tucson purchased the Tucson Rapid Transit Company. Rather than hiring an expensive marketing agency to run focus groups or surveys, the City of Tucson published in the local newspaper a "Name that Bus" contest. The winning entry was [Sun Tran], submitted by 25-year-old college student [Benjamin Rios]. He won the grand prize: a $150 portable television!)
The winning Call for Code team will receive a financial prize and access to long-term support to help move their idea from prototype to real-world application.
Developers can register today at the [Callforcode.org] website. Projects can be submitted by individuals – or teams of up to five people – between June 18, 2018 and August 31, 2018. If you would like me on your team, as an honorary member, technical adviser or mentor, please let me know!
Thirty semi-finalists will be selected in September. A prominent jury, including some of the most iconic technologists in the world, will choose the winning solution from three finalists. The winner will be announced in October 2018 during a live-streamed concert and award event coordinated by David Clark Cause.
Additional details, a full schedule of in-person and virtual events, and training and enablement for Call for Code are available at [www.developer.ibm.com/callforcode] website.
This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Here's my recap of the sessions of Day 3.
Ethernet-only SANs -- Myth or Reality?
Anuj Chandra, IBM Advisory Engineer, presented an excellent overview of Ethernet-based SANs. He started with a quick history of Ethernet, starting with Robert Metcalfe's original drawing for his concept.
In the past, Ethernet was used for email and message transfer, and so dropped packets were tolerated. However, with the use of Ethernet for SANs, many standards have been adopted to make Ethernet networks more robust. These meet requirements for Flow Control, Congestion management, low latency, data integrity and confidentiality, network isolation, and high availability.
These standards are known as IEEE 802.1Q "Data Center Bridging", including 802.1Qbb Priority Flow Control, 802.1Qaz Enhanced Transmission Selection, and 802.1Qau Congestion Notification. There is also the IETF Transparent Interconnection of Lots of Links (TRILL) to replace Spanning Tree Protocol (STP). All of these features are negotiated between server and storage endpoints. Ethernet that supports these new standards is often referred to as "Converged Ethernet" since it handles both traditional email/message traffic as well as SAN data traffic.
In addition to 1GbE and 10GbE, we now have 2.5, 5, 25, 40, 50, and 100 Gb Ethernet speeds. By 2020, Anuj estimates over half of all Ethernet ports will be 25 GbE or faster. Amazingly, some of these speeds can run over existing twisted-pair cabling.
Anuj also covered Remote Direct Memory Access (RDMA), and the RDMA-capable Network Interface Cards (RNIC) that support them. In one chart, shown here, Anuj explained Infiniband, RDMA over Converged Ethernet (RoCE) and RoCE v2, and Internet Wide Area RDMA Protocol (iWARP).
While many of these enhancements were intended for Fibre Channel over Ethernet (FCoE), the beneficiary has been iSCSI. Now there is iSCSI Extensions for RDMA (iSER) to take even more advantage of these changes, and it can work with Infiniband, RoCE or iWARP. All of these networks can also be used as the basis for NVMe over Fabric (NVMeOF).
Ethernet is the backbone of Cloud usage, and IBM is well positioned to take advantage of these new networking technologies.
Digital Video Surveillance solutions for extended video evidence protection
Dave Taylor, IBM Executive Architect for Software Defined Storage solutions, presented this session on Digital Video Surveillance (DVS).
Most video surveillance is either analog-based, going to standard VHS tapes, or file-based. Sadly, security guards that watch live camera feeds lose their attention span after 22 minutes.
There are an estimated 72 million cameras globally, with 1.5 million more every year.
City governments spend 57 percent of their budget on "public safety". This can include body cams for police departments. Taser International, now called AXON, dominates the body-cam market.
City budgets may not be prepared to store all of this video content into a cloud that complies with Criminal Justice Information Services (CJIS) standards. These Cloud services tend to be more expensive, as the videos must be treated as evidence, tamper-proof, and with appropriate chain of custody.
DVS is not just storing movies. IBM offers Intelligent Video Analytics. It is important to be able to derive insight and actionable response.
Storage capacity adds up quickly. A standard 1080p (1920 by 1080 pixel) camera generates 2.92 GB per hour, about 70 GB per day, and over 2 TB per month. If you have 1,000 cameras, that's over 2 PB of data per month.
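The arithmetic above is easy to verify with a quick back-of-the-envelope calculation:

```python
# Rough check of the 1080p surveillance numbers quoted above.
GB_PER_HOUR = 2.92

per_day = GB_PER_HOUR * 24            # ~70 GB per camera per day
per_month = per_day * 30              # ~2.1 TB per camera per month (in GB)
fleet_tb = per_month * 1000 / 1000    # 1,000 cameras, converted GB -> TB

print(round(per_day, 1),              # 70.1 GB/day
      round(per_month / 1000, 2),     # ~2.1 TB/month per camera
      round(fleet_tb / 1000, 2))      # ~2.1 PB/month for the fleet
```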
For xProtect servers running Windows, the Tiger Bridge Connector can be used to move the video files to either IBM Spectrum Scale or IBM Cloud Object Storage.
Deep Dive into HyperSwap for Active-Active applications and Disaster Recovery
Andrew Greenfield, IBM Global Engineer for Storage, explained the different ways HyperSwap is implemented across the IBM storage portfolio.
For IBM DS8000, HyperSwap is based on Metro Mirror synchronous replication. In the event that the primary DS8000 fails, the host server can automatically re-direct all I/O to the secondary DS8000. This is often referred to as "High Availability" (HA), and in some cases can serve as Disaster Recovery.
For IBM Spectrum Virtualize products, including SAN Volume Controller (SVC), FlashSystem V9000, Storwize V7000 and V5000 products, as well as Spectrum Virtualize sold as software, the implementation is different.
Previously, SVC offered Stretched Clusters, which put one node in one site, and a second node at another site, which allows for an Active/Active configuration. Unfortunately, the nodes in FlashSystem V9000 and Storwize are "connected at the hip", effectively bolted together, so putting separate nodes in different locations was not possible. To solve this, IBM developed HyperSwap that allows one node-pair to replicate across sites to another node-pair in the same Spectrum Virtualize cluster.
However, even though it is called "HyperSwap", it is not implemented in any way similar to the DS8000 method. Instead, Spectrum Virtualize uses the Global Mirror with Change Volumes to replicate data between sites.
IBM Storage and VMware Integration
This session was co-presented by Brian Sherman, IBM Distinguished Engineer, and Steve Solewin, IBM Corporate Solutions Architect.
For nearly two decades, IBM has been a "Technology Alliance Partner" with VMware. To provide consistent integration with all the features and functions of VMware, IBM Spectrum Control Base Edition (SCBE) is provided at no additional charge for IBM DS8000, XIV, FlashSystem and Spectrum Virtualize products.
SCBE is downloadable as an RPM for Red Hat Enterprise Linux (RHEL), and can run bare-metal or as a VM.
For those using Hyper-Scale Manager, it will automatically install a special version of SCBE that manages only the A-line products (FlashSystem A9000, FlashSystem A9000R, XIV and Spectrum Accelerate).
Storage admins can define "storage services" that can be assigned to vCenter. This allows VMware admins to allocate storage in self-service mode.
After the meetings were over, IBM had a special event at the Universal City Walk to enjoy some drinks, food, and conversation, and to watch Blue Man Group.
Last week, I presented at the "IBM TechU Comes to You" event in beautiful Nairobi, Kenya. This was a three-day event, so here is my recap of Day 2, Wednesday Aug 3, 2016.
IBM Spectrum Scale overview and update
This session was covered by Mack Kigada, IBM Executive Consultant for the "Executive Advisory Practice" portion of Systems Lab Services. This session explained the basic features of Spectrum Scale, including the latest features of version 4.2, and related Elastic Storage Server pre-built systems.
Software Defined Storage - IBM Spectrum Overview
This session was presented by Saumil Shah, IBM Spectrum Protect Sales Leader for Middle East, Turkey & Africa. Since SDS is an important topic, the conference coordinators schedule several speakers to present at different time slots, to give everyone a chance to hear the SDS message. Rather than using my same charts, Saumil used his own deck, which he customized based on his experience working in this region.
Flash and the Next Generation Data Center
This session was covered by Firat Ozturk, IBM FlashSystem Sales Leader for Middle East, Turkey & Africa. While IBM offers all-flash array versions of its DS8000, SVC and Storwize product lines, Firat focused on the IBM FlashSystem family, including the FlashSystem 900, FlashSystem V9000, and the new A9000/A9000R models.
According to IDC, Flash-based technologies are predicted to represent 50 percent of the storage capacity sold in 2018. Today it is about 10 percent, so that is a big leap. The primary reason, he feels, is new applications like Cloud and Mobile that are driving customer expectations for faster performance.
Which product should you get? Firat indicated that the FlashSystem 900 is ideal to boost the performance of specific applications, like Oracle or SAP HANA. The FlashSystem V9000 borrows all the code base from SVC and Storwize with Real-time compression ideal for OLTP and Database applications, while offering Storage Virtualization to protect your existing storage infrastructure investment. The FlashSystem A9000 and A9000R are targeted to Cloud deployments, as well as Server Virtualization and Virtual Desktop Infrastructure (VDI).
What is Big Data? Architectures and Practical Use Cases
I have been presenting this since 2013, but it still draws a new crowd every time. Based on my [2015 Presentation], I made some updates to reflect IBM's latest support for Spark, and the new POWER8 solution offerings.
Storage Tiering on z Systems: Less Management, Lower Costs, and Increased Performance
When I present Storage Tiering for distributed systems, I typically focus on Easy Tier feature of SAN Volume Controller, the Analytics-based storage optimization of Spectrum Control, and the Information Lifecycle Management (ILM) policies of Spectrum Scale and Spectrum Archive. This time, Glenn Anderson asked me to give this a "z Systems" slant, for a mainframe-oriented audience.
In this new version, I focused on Easy Tier on IBM DS8000 systems, Hierarchical Storage Management in DFSMShsm, and the new Class Transition features that were introduced initially with DFSMS OAM for objects, and now extended for data sets.
Linux on IBM z Systems and its Participation in Open Source Ecosystem, including Blockchain
Wow! What a long title!
This session was presented by Holger Smolinski, IBM Senior Performance Analyst Linux and KVM on IBM z Systems from the Boeblingen, Germany Lab. Back in the late 1990s, Holger and I worked on porting Linux to the S/390 platform. I led a team to test all of the device drivers for IBM disk and tape storage systems, working with Holger and his team to fix the drivers and submit them to the Open Source Community, so that they would be incorporated formally into the latest Red Hat and SUSE distributions.
Holger gave quite an extensive overview of the entire Open Source Ecosystem that runs on Linux on z System mainframes. Over 60 percent of new mainframe customers use the Linux on z Systems operating system, and the complete set of capabilities makes this quite practical.
One of the latest of these is [Blockchain], a new way to track transactions between organizations. The open source project for this is [HyperLedger]. Transactions are recorded into blocks that are encrypted with a hash code, which prevents tampering and fraud. These blocks are then chained together as transactions occur between organizations.
For example, if a product is manufactured in China, shipped over the Pacific Ocean by a shipping company, received at a port in the United States, processed by US Customs, then shipped via trucking company to the buyer, these all would be represented as transaction blocks chained together.
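The hash-chaining idea can be sketched in a few lines of Python. This is a toy illustration of the concept only, not how HyperLedger is actually implemented (real Hyperledger adds consensus, smart contracts, and much more):

```python
import hashlib
import json

def make_block(transaction, prev_hash):
    """Create a block whose hash covers its transaction and the previous hash."""
    block = {"tx": transaction, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

# Chain the supply-chain transactions from the example above.
chain = []
prev = "0" * 64  # genesis
for tx in ["manufactured in China", "loaded by shipping company",
           "received at US port", "cleared US Customs", "delivered by truck"]:
    block = make_block(tx, prev)
    chain.append(block)
    prev = block["hash"]

def chain_valid(chain):
    """Recompute each hash; any tampering breaks the chain."""
    prev = "0" * 64
    for b in chain:
        body = {"tx": b["tx"], "prev": b["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if b["prev"] != prev or b["hash"] != expected:
            return False
        prev = b["hash"]
    return True

print(chain_valid(chain))      # True
chain[1]["tx"] = "diverted"    # tamper with an earlier block...
print(chain_valid(chain))      # False -- the fraud is detectable
```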
Wednesday we had a free evening to explore on our own. Some of my colleagues went to an all-you-can-eat steakhouse for dinner, but I will get plenty of that in my upcoming trip to Sao Paulo, Brazil, so I went elsewhere.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
The Collaboration of Oak Ridge, Argonne, and Livermore [CORAL] is a joint procurement activity among three of the Department of Energy's National Laboratories, launched in 2014 to build state-of-the-art high-performance computing (HPC) technologies that are essential for supporting U.S. national nuclear security and are key tools used for technology advancement and scientific discovery.
Of course, when you hear "state-of-the-art technology", IBM is probably the first company that comes to mind!
The new IBM Spectrum Scale 5.0 has been greatly enhanced to meet CORAL requirements:
Dramatic improvements in I/O performance
Significant reduction in internode software path latency to support the newest low-latency, high-bandwidth hardware such as NVMe
Improved performance for many small and large block size workloads simultaneously from new 4 MB default block size with variable sub-block size based on block size choice
Improved metadata operation performance to a single directory from multiple nodes
Spectrum Scale 5.0 now automatically tunes more than twenty communication protocol and buffer management parameters, simplifying setup for optimal performance. The enhanced GUI features many capabilities including performance, capacity, network monitoring, AFM (multicluster management), transparent cloud tiering, and enhanced maintenance and support, including interaction with IBM remote support.
Spectrum Scale 5.0 now offers file-level immutability. Previous releases supported immutability at the file set granularity, so this allows greater granularity. Immutability can be an effective tool as part of an overall Non-Erasable, Non-Rewriteable [NENR] compliance policy.
Spectrum Scale comes in both "Standard Edition" and "Data Management Edition". The latter offers some additional features, including Transparent Cloud Tiering, Asynchronous AFM Disaster Recovery support, and Encryption. Some additional enhancements to Data Management Edition in Spectrum Scale 5.0 are:
File audit logging capability to track user accesses to file system and events supported across all nodes and all protocols
Parseable data stored in secure retention-protected fileset
Data security following removal of physical media protected by on-disk encryption
The new IBM Storage Utility Offerings include the IBM FlashSystem 900 (9843-UF3), IBM Storwize V5030 (2078-U5A), and Storwize V7000 (2076-U7A) storage utility models that enable variable capacity usage and billing.
These models provide a fixed total capacity, with a base and variable usage subscription of that total capacity. IBM Spectrum Control Storage Insights is used to monitor the system capacity usage. It is used to report on capacity used beyond the base subscription capacity, referred to as variable usage.
The variable capacity usage is billed on a quarterly basis. This enables customers to grow or shrink their usage, and only pay for configured capacity.
Suppose you only need 300 TB today, but expect this to grow to 1 PB (1000 TB) over the course of three years. You install 1000 TB (1 PB) of capacity, and pay for the base 300 TB, plus whatever above this 300 TB you might be using during each subsequent quarter. After 36 months, you pay for the rest of capacity installed.
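The billing model in this example can be sketched as follows (illustrative numbers only, matching the 300 TB base / 1 PB installed scenario above):

```python
# Sketch of the utility-offering math: 300 TB base subscription on
# 1 PB (1000 TB) installed, with usage above the base billed quarterly.
BASE_TB = 300
INSTALLED_TB = 1000

def quarterly_variable_tb(used_tb):
    """TB billed as variable usage this quarter (usage above the base)."""
    return max(0, min(used_tb, INSTALLED_TB) - BASE_TB)

# As capacity use grows over three years, only usage above the
# 300 TB base is billed each quarter.
for used in (280, 450, 700, 1000):
    print(used, quarterly_variable_tb(used))
# 280 0
# 450 150
# 700 400
# 1000 700
```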
(There are comparable offerings from IBM's competitors, but they often require that you pay for at least 75 to 85 percent of the installed amount, and then you would need to continue to disrupt your operations with additional capacity installed throughout the 12 to 36 month period. IBM's approach allows you to avoid installation disruption during the entire 36 month period!)
IBM Spectrum Virtualize for Public Cloud V8.1.1 delivers a powerful solution for the deployment of IBM Spectrum Virtualize software in public cloud, starting with IBM Cloud. This new capability provides a monthly license to deploy and use Spectrum Virtualize in IBM Cloud to enable hybrid cloud solutions.
Remote replication will be supported between Spectrum Virtualize-based appliances (including SAN Volume Controller (SVC), the Storwize family, IBM FlashSystem V9000, and VersaStack with Storwize family or SVC), or Spectrum Virtualize Software, to the IBM Cloud.
Using IP-based replication with Metro Mirror, Global Mirror, or Global Mirror with Change Volumes, clients can create secondary copies of on-premises data in the public cloud for disaster recovery. IBM has over 25 data centers around the world available to choose from. Remote copy services can also be used between two IBM Cloud data centers for improved availability.
The solution is based on bare metal servers. You can create either two- or four-node high availability clusters.
Spectrum Virtualize on-premise SVC and Storwize now also support 2.4 TB 10K rpm 2.5-inch SAS hard disk drives.
Last year, Hurricanes Harvey, Irma, Jose, and Maria ravaged various parts of North America and the Caribbean. My topic on Business Continuity and Disaster Recovery (BC/DR) was well attended. I have been working in BC/DR for most of my career, including the "High Availability Center of Competency", or HACOC for short.
However, natural disasters like hurricanes, tornadoes, forest fires and floods represent less than 20 percent of all disasters. The majority of disasters, nearly 75 percent, arise from electrical power outages, human error, system failure and ransomware.
The seven tiers were developed by a group of IBM customers back in the 1980s, and have stood the test of time. I recently published an article in IBM Systems Magazine (January/February 2018) based on this presentation.
Cloud storage comes in four flavors: persistent, ephemeral, hosted, and reference. The first two I refer to as "Storage for the Computer Cloud" and the latter two I refer to as "Storage as the Storage Cloud".
I also explained the differences between block, file and object access, and why different Cloud storage types use different access methods.
Finally, I covered some Hybrid Cloud Storage configurations, showing how Traditional IT, on-premise local private cloud, off-premise dedicated private cloud, and public cloud can be combined to provide added value.
Reporting and Monitoring: How to Verify your Storage is Being Used Efficiently
It is hard to believe that it was over 15 years ago that I was the chief architect for the software we now call IBM Spectrum Connect, Spectrum Control and Storage Insights. There are a variety of editions and bundles for this product, but my focus on this talk was on the advanced storage analytics found in IBM Virtual Storage Center and IBM Spectrum Control Advanced Edition.
I covered three use cases:
What storage tier to put your workload in, and how to move existing data into a faster or slower tier to meet business requirements and IT budgets.
For steady state environments, how to re-balance storage pools within a single tier to keep things even for optimal performance.
When it is time to decommission storage, how to transform volumes from one storage pool to another without downtime or outages.
Special thanks to Bryan Odom for his help in updating this presentation.
Spectrum Virtualization Data Reduction Pools 101
Barry Whyte, IBM Master Inventor and ATS for Storage Virtualization for the Asia Pacific region, presented on how Data Reduction Pools were implemented in version 8.1.2 of Spectrum Virtualize, the software in the latest IBM SAN Volume Controller (SVC), IBM Storwize products, and IBM FlashSystem V9000.
Basically, rather than saying we "re-wrote" the code, we prefer softer euphemisms like the code was "re-imagined" or, my favorite lately, "re-factored". Legacy Storage Pools will continue to be supported, but IBM anticipates that people will transition over time to the new Data Reduction Pools (DR Pools).
Like Legacy Storage Pools, the new DR Pools also support a mix of Fully-allocated, Thin-Provisioned, and Compressed-Thin volumes. IBM has made a statement of direction that it will offer Data Deduplication feature in the future, but these will only be on the new DR Pools.
While DR Pools are available today with version 8.1.2, there are a few restrictions. There is a limit of four DR Pools per cluster, and the amount of total capacity of each pool depends on the extent size and number of I/O groups configured. Some of the migration methods developed for Legacy Storage Pools are not available, and in reality don't make sense in the new DR pool scheme. Child Pools are not supported either.
One of the big improvements that DR Pools offer is in the area of compression. With Legacy Storage Pools, CPU cores were dedicated for compression, so they were either under-utilized or overwhelmed. With DR pools, all CPU cores can be used for either I/O or compression, which potentially can increase performance by up to 40 percent!
After the sessions, IBM had its "Solution Center Reception". This is a chance to relax and unwind after a long day, with food and drink, and various sponsors in booths to explain their latest offerings.
This is Katie Thacker from [FIT]. In March 2018, FIT was recognized as IBM’s Top Strategic Service Provider of the year!
These are Elizabeth Krivan and Kelly Bouchard, two recently-hired IBM storage sellers. They attended my sessions at the IBM Technical University in New Orleans last October, so it was good to see them again at my sessions here in Orlando.
You can follow along with Twitter hashtag #IBMtechU, or follow me at @az990tony.
Well, it's Tuesday again, and you know what that means? IBM Announcements! There were lots of announcements today, so I have split this up into two posts. One for the Tape and Cloud announcements, and the other for the Spectrum Storage family.
IBM Spectrum Virtualize Software V7.8.1
IBM Spectrum Virtualize™ V7.8.1 is the latest software for FlashSystem V9000, SAN Volume Controller and Storwize products.
Last release, IBM introduced "Host Groups" for clusters that needed to share a common set of volumes. This release offers "Host cluster I/O throttling": I/O throttling can be managed at the host level (individually or in groups) and at the managed disk level for improved performance management, with GUI support.
Increased background FlashCopy transfer rates: This feature enables you to increase the rate of background FlashCopy transfers, providing faster copies as the infrastructure allows. This takes advantage of the higher performance capabilities of today's systems, processing the copy in a shorter period of time. The default was 64 MB/sec, and now we can go up to 2 GB/sec, for those who want their FlashCopy to be done as fast as possible.
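To see what this change means in practice, consider a full background copy of a 2 TB volume at the old default rate versus the new maximum:

```python
# How long does a full background copy of a 2 TB volume take at the old
# 64 MB/sec default rate versus the new 2 GB/sec maximum?
VOLUME_GB = 2048

old_hours = VOLUME_GB * 1024 / 64 / 3600    # MB total / (MB/sec) -> hours
new_minutes = VOLUME_GB / 2 / 60            # GB total / (GB/sec) -> minutes

print(round(old_hours, 1))     # about 9.1 hours at the old default
print(round(new_minutes, 1))   # about 17.1 minutes at the new maximum
```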
Port Congestion Statistic: Zero buffer credit statistics help detect SAN congestion when diagnosing performance-related issues, improving support in high-performance environments. IBM had this for the 8Gbps FCP cards, but not for the 16Gbps cards, so now that's fixed.
Resizing of volumes in remote mirror relationships: Target volumes in remote mirror relationships will be automatically resized when source volumes are resized. Lots of clients asked for this, and IBM delivered!
Consistency protection for Metro/Global Mirror relationships: An automatic restart of mirroring relationships after a link fails between the mirror sites improves disaster recovery scenarios, helping to ensure the applications are protected throughout the process.
When IBM introduced "Global Mirror with Change Volumes" (GM CV), I wanted to call it "Trickle Mirror", because the primary site takes a FlashCopy, trickles the data over, then takes another FlashCopy at the remote site. Now, clients using traditional Metro or Global Mirror can add "Change Volumes" as protection. In the unlikely event a network disruption occurs, it drops down to GM CV until the link resumes full speed.
Support of SuperMicro servers for the Spectrum Virtualize as Software Only offering: Support for x86-based Intel servers from SuperMicro for Spectrum Virtualize Software is available with this release.
Last year, IBM offered Spectrum Virtualize as software that could run on Lenovo servers. However, now there are clients who want alternative server choices.
Supermicro SuperServer 2028U-TRTP+ is supported to run Spectrum Virtualize Software. This is a great option for end clients, managed service or cloud service providers deploying private clouds, building hosted services, or using software-defined storage on third party Intel servers. This is a fully inclusive license with all key features available on Spectrum Virtualize in a single, downloadable image.
IBM Spectrum Control V5.2.13 and IBM Virtual Storage Center V5.2.13
We often joke that IBM Virtual Storage Center is the [Happy Meal] combining storage virtualization with Spectrum Virtualize hardware like FlashSystem V9000, SAN Volume Controller or Storwize as the "hamburger", Spectrum Control as the "fries" and "Spectrum Protect Snapshot" as the "soft drink". Storage Analytics was included as a "prize inside" only available in the VSC bundle to entice clients to choose this option.
Whenever IBM updates Spectrum Control, they often put out a new version of the Virtual Storage Center bundle as well. I was the Chief Architect for Spectrum Control from 2001 to 2002, and Technical Evangelist for SVC in 2003 when we first introduced the product, so I have a long history with both products.
This release provides additional information and performance metrics on Dell EMC VMAX and EMC VNX devices. This is done natively; the devices no longer need to be virtualized by Spectrum Virtualize, as was often done in the past.
IBM now offers better visibility of drives within IBM Cloud Object Storage Slicestor® nodes. IBM acquired Cleversafe 18 months ago, and is working to get it under the Spectrum Control management umbrella.
IBM Spectrum Scale™ file system to external pool correlation. Spectrum Scale can migrate data to three different types of "external pools":
Cloud Object pool, either on-premise Object Storage or off-premise Cloud Service Provider storage.
Spectrum Protect pool, where Spectrum Protect manages the migrated data on one of 700 supported devices, including tape, virtual tape, optical, flash, disk, object storage or cloud.
Spectrum Archive pool, where data is written directly to physical tape using the Industry-standard LTFS format.
This release provides additional information on the copy data panel about SAN Volume Controller (SVC) HyperSwap® and vDisk mirror.
While the "Virtual Storage Center" bundle is an awesome deal, some clients have asked for the "Vegetarian Option" (Fries and Drink only). Why? Because they want the advanced storage analytics (prize inside) for other devices like DS8000, XIV, etc. So, IBM created the "IBM Spectrum Control Advanced Edition", which has everything in VSC except the Spectrum Virtualize itself.
Advanced edition adds improvements to the chargeback report. It also includes IBM Spectrum Protect™ Snapshot V8.1 release.
IBM Spectrum Control Storage Insights Software as a Service
Storage Insights is IBM's "Software-as-a-Service" reporting-only offering, a subset of Spectrum Control Advanced Edition. It includes direct support for Dell EMC VMAX, VNX, and VNXe storage systems. This is huge! Clients who have only EMC hardware can now, on a monthly basis, figure out where they are wasting money and decrease their costs.
Other features carried over include the enhanced drive support for IBM® Cloud Object Storage, enhanced external capacity views for IBM Spectrum Scale™, and the additional replication views for vDisk mirror and HyperSwap® relationships for SAN Volume Controller (SVC) and Storwize® devices that I mentioned above.
Well, it's Tuesday again, and you know what that means? IBM Announcements! There were a lot of IBM Power System announcements on Tuesday, so the IBM Power team asked us to wait until Thursday to post about all of the IBM storage announcements, to avoid overwhelming excitement levels with the press and analysts.
(FTC Disclosure: I work for IBM. I have either worked on the code, developed marketing materials, and/or represented each of the products below in my professional capacity. This blog post can be considered a "paid celebrity endorsement")
A few months ago, IBM re-factored the internals of Spectrum Virtualize. It will continue to support its legacy storage pools, but now also offers "Data Reduction Pools", or "DR pools" for short. At the time, this supported only Thin Provisioning and Compression. See fellow blogger Barry Whyte's post on [Data Reduction Pools] for more details.
Spectrum Virtualize 8.1.3 release now adds Data Deduplication and RESTful API support for the Spectrum Virtualize family, including SAN Volume Controller, FlashSystem V9000 and Storwize products. These features also apply to Spectrum Virtualize as software only, and to Spectrum Virtualize for the Public Cloud.
Data Deduplication is a form of data footprint reduction. Like the deduplication in Spectrum Protect and FlashSystem A9000/R products, Spectrum Virtualize will use SHA1 hash codes to identify duplicate 8K blocks. If the hash code of the block about to be written does not match any existing hash code previously written to the cluster, it is considered unique data.
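The general technique can be sketched in Python. This is only an illustration of hash-based block deduplication; the actual implementation lives inside Spectrum Virtualize.

```python
import hashlib

# Illustrative sketch of hash-based deduplication: split data into 8K
# blocks, hash each block, and store duplicate blocks only once.
BLOCK_SIZE = 8 * 1024

def dedupe(data):
    """Split data into 8K blocks; return (unique blocks stored, reference list)."""
    store = {}   # hash -> block payload (unique data only)
    refs = []    # one hash reference per logical block
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha1(block).hexdigest()
        if digest not in store:   # unseen hash: unique data, store it
            store[digest] = block
        refs.append(digest)
    return store, refs

# Three identical blocks plus one different block...
data = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE
store, refs = dedupe(data)
print(len(refs), len(store))  # 4 logical blocks, only 2 stored
```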
Legacy storage pools supported three kinds of volumes: fully-allocated, thin-provisioned, and compressed-thin volumes. The new DR pools support five kinds: fully-allocated, thin-provisioned, deduped-thin, compressed-thin, and deduped-compressed-thin volumes.
The new deduplication feature is included at no additional charge with the base Spectrum Virtualize license.
The RESTful API enables storage admins to easily automate common tasks with industry-standard tools. It provides secure authentication to the IBM Spectrum Virtualize family and lets you create vDisk volumes and generate the views normally available through the command-line interface (CLI).
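As a sketch of what such a call might look like: the port number, `/rest/auth` endpoint, and header names below are my recollection of the 8.1.3 documentation, so verify them against your code level before relying on them. The cluster address and credentials are hypothetical.

```python
import urllib.request

def make_auth_request(cluster_ip, username, password):
    """Build (but do not send) an authentication request for the
    Spectrum Virtualize REST API. Endpoint and header names are
    assumptions based on the 8.1.3 docs; check your code level."""
    return urllib.request.Request(
        url=f"https://{cluster_ip}:7443/rest/auth",
        method="POST",
        headers={"X-Auth-Username": username, "X-Auth-Password": password},
    )

# Hypothetical cluster address and credentials, for illustration only
req = make_auth_request("198.51.100.10", "superuser", "passw0rd")
print(req.get_method(), req.full_url)
```

A successful authentication returns a token that subsequent calls (for example, to create vDisks) would pass back in a header.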
The SAN Volume Controller, FlashSystem V9000 and Storwize family now also support 12TB drives for internal storage. These are 7200 rpm 3.5 inch drives that can be in the 2U 12-bay or 5U 92-bay expansion drawers, or directly in the 12-bay Storwize controllers. Spectrum Virtualize 7.8.1 is the minimum level to support these high-capacity disks.
IBM Spectrum Virtualize for Public Cloud, available on IBM Cloud, has been enhanced to support a full eight node cluster (four node-pairs, or "I/O Groups" as they are called). This can be used as a target for remote mirror from your Spectrum Virtualize cluster on premises.
IBM offers data footprint reduction, high availability, and technical refresh guarantee programs for these products. See Ernie Pitt's blog post on [Peace of Mind with IBM Storage].
IBM Spectrum Scale 5.0 is a highly scalable file and object storage system. It is available as software, as pre-built appliances, and in the Cloud.
The pre-built appliances are called "Elastic Storage Server", combining Spectrum Scale software on two IBM Power servers with drawers of flash or disk drives.
IBM introduces two new "Hybrid" models to the ESS family. The GH14 has one 2U drawer with 24 Solid State Drives (SSD) combined with four 5U drawers of 7200 rpm spinning disk. The GH24 has two 2U drawers with four 5U drawers.
Like the GS models, the SSDs come in 3.84TB or 15.36TB capacities. The 5U drawers are similar to those in the GL models, with 4TB, 8TB or 10TB drive capacities.
A new Enterprise Slim Rack (S42) is now available to hold these. The S42 is available for all ESS orders, including the GS, GL and new GH models.
IBM has shortened the name of "Spectrum Control Storage Insights" to just "Storage Insights" and made it available in two flavors: Storage Insights, and Storage Insights Pro.
Storage Insights is a no-cost cloud Artificial Intelligence (AI) service that provides common monitoring capabilities to all of your IBM block-level storage, including IBM FlashSystem, SAN Volume Controller (SVC), Storwize, DS8000 models and IBM XIV Storage Systems. Here are some of the capabilities offered:
View the health, performance, and capacity of all your IBM-supported devices from a single place
Filter storage device events to help you focus on the things that require your immediate attention
Act on predictive insights provided by device intelligence before anomalies have an impact on service levels
Use actionable data you get to resolve more issues on your own
Open and view IBM support tickets
Enable IBM Support to automatically collect log packages with no interaction with the client
IBM Storage Insights Pro is a fee-based cloud service, licensed per TiB per month, that includes everything in Storage Insights plus these additional capabilities:
Business impact analysis
Data placement optimization with tier planning
Capacity optimization with reclamation planning
Supports file and object storage, including IBM Spectrum Scale, Elastic Storage Server (ESS), and IBM Cloud Object Storage (IBM COS)
Both Storage Insights and Storage Insights Pro use a "data collector" that runs on premises. This can be any bare metal server or Virtual Machine running Windows, Linux or AIX operating system connected to the SAN, with access to the Internet to upload the data to the IBM Cloud.
If you have IBM block storage today, there is no reason not to try this out. You can download the "data collector" and start using Storage Insights right away. If you like it, consider upgrading to Storage Insights Pro, or the full on-premises Spectrum Control product.
As I have mentioned before, I started this blog on September 1, 2006 as part of IBM's big ["50 Years of Disk Systems Innovation"] campaign. IBM introduced the first commercial disk system on September 13, 1956 and so the 50th anniversary was in 2006. That means this month, IBM celebrates the "Diamond" anniversary, 60 years of Disk Systems!
For those who missed it, IBM announced last Tuesday encryption capability for the TS1120 drive, our enterprise tape drive that reads and writes 3592 cartridges. Do you need special cartridges for this? No! Use the same ones you have already been using!
You can read more about it at www.ibm.com/storage/tape.
Short and sweet, but it got me started, and I ended up writing 21 blog posts that first month. You can read blog posts from all 10 years by looking at the left panel of my blog under "Archive".
While traditional disk and tape storage are still very important and relevant in today's environment, IBM has also expanded into other technologies:
In 2012, IBM [acquired Texas Memory Systems]. In 2014, IBM shipped 62PB, more Flash capacity than any other vendor. In 2015, IBM continued its #1 status, shipping 170PB of Flash, again, more than any other vendor.
IBM has flash everywhere, from the advanced FlashSystem 900, V9000, A9000 and A9000R models, to other all-flash array and hybrid flash-and-disk systems with various sets of features and functions to meet a variety of workload requirements.
The DS8888 all-flash array, and the DS8886 and DS8884 hybrid flash-and-disk systems, round out the latest in the DS8000 storage systems family. The SAN Volume Controller and Storwize family of products, based on IBM Spectrum Virtualize software, also have all-flash array and hybrid configurations, the most recent being the Gen2+ models of the Storwize V7000F and V5030F. The latest addition is the DeepFlash 150, designed for analytics and unstructured data.
Between internally-developed IBM Spectrum Scale and IBM Spectrum Archive, and IBM's [acquisition of Cleversafe], IBM is ranked #1 in Object Storage. IBM Cloud Object Storage System, IBM's new name for Cleversafe's flagship product, is available as software-only, pre-built systems, or in the IBM SoftLayer cloud.
Software-Defined Storage (SDS) with IBM Spectrum Storage
Last year, IBM re-branded its various storage software products under the "IBM Spectrum Storage" family. Earlier this year, IBM announced the new [IBM Spectrum Storage Suite license] which makes it even easier to procure, either with a perpetual software license, elastic monthly licensing, or utility license that combines some of each.
IBM is ranked #1 in Software-Defined Storage, with over 40 percent marketshare, offering solutions as Software-only, pre-built systems, and in IBM SoftLayer cloud.
The article starts out giving background history of the current mess we are in. Here is an excerpt:
"Throughout most of U.S. history, American high school students were routinely taught vocational and job-ready skills along with the three Rs: reading, writing and arithmetic...
...But in the 1950s, a different philosophy emerged: the theory that students should follow separate educational tracks according to ability...
Ability tracking did not sit well with educators or parents, who believed students were assigned to tracks not by aptitude, but by socio-economic status and race. ...
...The backlash against tracking, however, did not bring vocational education back to the academic core. Instead, the focus shifted to preparing all students for college, and college prep is still the center of the U.S. high school curriculum..."
My father was a mechanical engineer who enjoyed fixing cars and woodworking on the weekends. I had plenty of "vocational training" growing up at home, no need for me to have this in school, allowing me to focus on getting ready for college.
Nicholas asks legitimate questions at this stage: "So what’s the harm in prepping kids for college? Won’t all students benefit from a high-level, four-year academic degree program?" His initial response is:
"... As it turns out, not really. For one thing, people have a huge and diverse range of different skills and learning styles. Not everyone is good at math, biology, history and other traditional subjects that characterize college-level work.
Not everyone is fascinated by Greek mythology, or enamored with Victorian literature, or enraptured by classical music. Some students are mechanical; others are artistic. Some focus best in a lecture hall or classroom; still others learn best by doing, and would thrive in the studio, workshop or shop floor..."
Hard to argue that people are different, and learn in different ways. Not everyone is meant for college.
"...And not everyone goes to college. The latest figures from the U.S. Bureau of Labor Statistics (BLS) show that about 68 percent of high school students attend college. That means over 30 percent graduate with neither academic nor job skills..."
Here is where I have the most problems. To claim that the 30 percent of high school students who graduate but do not go to college have neither academic nor job skills? I disagree, as there are many jobs for which the academic and job skill training they received in high school is more than adequate. Nicholas then doubled down:
"...But even the 68 percent aren't doing so well. Almost 40 percent of students who begin four-year college programs don’t complete them, which translates into a whole lot of wasted time, wasted money, and burdensome student loan debt. Of those who do finish college, one-third or more will end up in jobs they could have had without a four-year degree. The BLS found that 37 percent of currently employed college grads are doing work for which only a high school degree is required.
It is true that earnings studies show college graduates earn more over a lifetime than high school graduates. However, these studies have some weaknesses. For example, over 53 percent of recent college graduates are unemployed or under-employed. And income for college graduates varies widely by major – philosophy graduates don’t nearly earn what business studies graduates do. Finally, earnings studies compare college graduates to all high school graduates. But the subset of high school students who graduate with vocational training – those who go into well-paying, skilled jobs – the picture for non-college graduates looks much rosier.
Yet despite the growing evidence that four-year college programs serve fewer and fewer of our students, states continue to cut vocational programs..."
There are a lot of successful billionaires who did not complete four years of college: Bill Gates, Steve Jobs, Michael Dell, Henry Ford, and Howard Hughes, just to name a few.
If you feel that the only purpose of attending high school or college is to get job-specific skills, then you are missing out on all the other aspects of those institutions that teach you valuable life lessons: getting along with others, teamwork, communications, and other "soft skills" that aren't necessarily job-specific.
Teenagers entering college are still growing up, trying to figure out what they want to do with their lives, discovering new ideas, new ways of thinking, and networking with people of different backgrounds and cultures.
"...The U.S. economy has changed. The manufacturing sector is growing and modernizing, creating a wealth of challenging, well-paying, highly skilled jobs for those with the skills to do them. The demise of vocational education at the high school level has bred a skills shortage in manufacturing today, and with it a wealth of career opportunities for both under-employed college grads and high school students looking for direct pathways to interesting, lucrative careers. Many of the jobs in manufacturing are attainable through apprenticeships, on-the-job training, and vocational programs offered at community colleges. They don’t require expensive, four-year degrees for which many students are not suited..."
The skills shortage is real, but until employers are willing to pay people for what they're worth, the situation will not be resolved. The free market has a way to fix skills shortages. High demand raises salaries, and causes people to invest in high school and college education in part to vie for these positions. That is in part why medical doctors are paid so much.
"...The modern workplace favors those with solid, transferable skills who are open to continued learning. Most young people today will have many jobs over the course of their lifetime, and a good number will have multiple careers that require new and more sophisticated skills..."
A few years ago, I was hosting clients for dinner in Tucson. The sales rep had brought his daughter and her roommate along, as there was a shooting at their college campus and classes were canceled for the week. The daughter asserted, "In 18 months, I will no longer have to learn anything again. I will be done with school." Her roommate chimed in, "Ha! I am a year ahead of you, and only six months away from that!"
I was the bearer of bad news. "Ladies," I said, "you will have to get used to learning new things the rest of your lives." The highest ranking client at the table overheard me, and she re-iterated, "Ladies, that is probably the best advice I have heard in awhile. I suggest you heed it carefully."
A big part of high school and college education is to teach you how to learn on your own. Learn to read, search out information, take measurements, gather data, make plans, and ask the right questions. These are skills that are useful in a wide variety of careers.
Nicholas concludes with:
"...Just a few decades ago, our public education system provided ample opportunities for young people to learn about careers in manufacturing and other vocational trades. Yet, today, high-schoolers hear barely a whisper about the many doors that the vocational education path can open. The “college-for-everyone” mentality has pushed awareness of other possible career paths to the margins. The cost to the individuals and the economy as a whole is high. If we want everyone’s kid to succeed, we need to bring vocational education back to the core of high school learning."
I agree the educational system in the United States is broken, but I am not sure I agree with everything that Nicholas writes in this article.
This week, IBM sponsored a nice multi-client event in San Juan, Puerto Rico. I was quite impressed with the quality of this video. Our marketing department has really done a good job on this!
This event was not just multi-client, but also spanned different industry sectors. IBM recently has realigned to five different sectors, and we had clients from different sectors attending the event.
The night before, I was able to meet most of the other IBM executives who came down for the event. Unfortunately, two were delayed because of the snow storms in the Northeast part of the United States, but they were able to arrive the next day.
The venue was the El Touro restaurant, near the Hilton Caribe. The weather was just right, about 75 degrees and breezy. It was a little humid for me, but everyone else was just happy to be out of the cold. Meanwhile, it was nearly 90 degrees in Tucson, Arizona, where I am from.
This was billed as a "Lunch and Learn" and the food was delicious! In an effort to keep it simple, we had small dishes of fish with fruit-based cream sauce, paella with rabbit meat and rice, pork belly, Crema Catalana, and a churro for dessert. This gave everyone a sample taste of everything, without having to order off a menu.
We basically took the same approach with the presentation. First, Marcos Obermaeir and Marcos Otero, the two leads for this event, thanked the audience and explained their new roles. Marcos Obermaeir is focused on Financial and Insurance sector, while Marcos Otero focused on Communications sector.
Next we had Debbie Niven and Roopam Master, both IBM Executives, explain their roles, and how IBM can help both clients and Business Partners in Puerto Rico.
I presented samples of much larger presentations on three topics. First, the excitement over Software Defined Storage with IBM Spectrum Storage family of products. Second, IBM Spectrum Scale as a better replacement for Hadoop File System (HDFS) for Hadoop, IBM BigInsights and Hortonworks analytics deployments. Third, IBM Cloud Object Storage, and how this can be combined with IBM Spectrum Protect to backup your data to object storage either on premises, or in the Cloud.
I could have easily spoken an hour on each topic, but instead, we shortened to about 20 minutes each, in keeping with the "Tapas" theme of the restaurant. This allowed those clients who wanted to hear more to have a reason to request a follow-up visit or call.
After the clients left, the IBM team had a reception for the IBM Business Partners. About 80 percent of IBM's storage business in Puerto Rico is done through IBM Business Partners, so they are an important link in IBM's "Go-to-Market" strategy.
The moon was nearly full, and the breeze and waves were a spectacular backdrop to the conversations I had with each person I met.
Next month, I will be presenting at the IBM Systems Technical University for Storage and POWER. This conference will be held in New Orleans, Louisiana, October 16-20, 2017.
Instead of a "Meet the Experts" Q&A panel, this event will feature a "Poster Session". I had the pleasure of doing one of these down in Melbourne, Australia last month. For those who missed it, here are my blog posts:
By now, you have already decided on a title and abstract of your poster. You will need to figure out a quick and easy way to explain your poster, and as always, shorter is better. It reminds me of a famous quote:
"Sorry this letter is too long...
If I had more time, I could have made it shorter!
-- Blaise Pascal
The event team asked me to write some instructions on the mechanics of how to put together a poster for this, since it is new for many people. I use Microsoft PowerPoint 2013 and ImageMagick tools to accomplish this.
Arrangement of Slides
Posters for the IBM Systems Technical University in New Orleans will be 24x36 inches in size. If you print out your poster in 8.5x11 inch standard size letter pages, that would be eight slides, 2 columns, 4 rows. This leaves one inch border all around.
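As a quick sanity check on that layout, assuming landscape-oriented Letter pages on a portrait poster:

```python
# Sanity-check the poster layout: 2 columns x 4 rows of landscape
# Letter pages (11 x 8.5 inches) on a portrait 24 x 36 inch poster.
POSTER_W, POSTER_H = 24, 36
PAGE_W, PAGE_H = 11, 8.5
COLS, ROWS = 2, 4

grid_w, grid_h = COLS * PAGE_W, ROWS * PAGE_H
margin_w = (POSTER_W - grid_w) / 2   # border on left and right
margin_h = (POSTER_H - grid_h) / 2   # border on top and bottom
print(grid_w, grid_h, margin_w, margin_h)   # 22 34.0 1.0 1.0
```

The 2x11 inch grid width and 4x8.5 inch grid height leave exactly the one-inch border on every side.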
The event will provide both the foam board and double-sided sticky tape. You can bring your poster as a stack of Letter-sized pages in a folder, and assemble your poster at the event.
You can increase the size of an individual image to 17x22 inches, to offer the "Big Picture" view. Basically, we take a standard 8.5x11 Letter size page, expand it onto four separate pages, and then put them on the poster! I will show you how in the steps below.
Lastly, you can have two big slides. If your poster is organized as "Before/After" or "Problem/Solution" then this arrangement could be perfect for you.
Setting Custom Paper Size on PowerPoint
In Melbourne, I had to use European A4 standard paper, and had to figure out how to do this in PowerPoint. I was surprised to learn that the PowerPoint default is 4:3 ratio of 10x7.5 inch, and that this is stretched to be whatever paper size you print on.
The difference is slight, but I prefer [WYSIWYG], so we will change the slide to "Custom size" and force it to 8.5x11 inches, with "Landscape" orientation. This will avoid anything looking stretched or squished on the big poster.
Converting a PowerPoint Slide to PNG Image file
If you would like to resize one or more of your PowerPoint slides, you will need to save those slides as images. Select "File" and "Save As" and as the format, choose "PNG" format. You can also select GIF or JPG, but I prefer PNG.
You can export all of your slides as images, in which case it will create a folder and number each slide individually. Or, you can select "Just This One" for the current slide.
By default, it will use the same name as your PPT file, just change the extension to PNG. I suggest you name the file something meaningful to you. In my examples below, I use "small.png" as the file name.
I am using PowerPoint 2013, which defaults to 96 dpi. So, an 8.5x11 paper becomes 1056x816 pixels in size.
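The arithmetic is simply inches multiplied by dots per inch, which is easy to sanity-check:

```python
def page_pixels(width_in, height_in, dpi):
    """Pixel dimensions of a printed page: inches times dots per inch."""
    return round(width_in * dpi), round(height_in * dpi)

print(page_pixels(11, 8.5, 96))    # PowerPoint 2013 default: (1056, 816)
print(page_pixels(11, 8.5, 300))   # at 300 dpi: (3300, 2550)
```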
If you have PowerPoint 2003 or higher, you can change the Windows registry to specify image resolutions. Not recommended for the faint of heart. Or anyone else. But here's the deal if you want to try (if the following doesn't make any sense, it might be better not to mess with the registry):
Quit PowerPoint if it's running
Navigate to HKEY_CURRENT_USER\Software\Microsoft\Office\X.0\PowerPoint\Options
(For X.0 above, substitute 16.0 for PowerPoint 2016, 15.0 for PowerPoint 2013, 14.0 for PowerPoint 2010, 12.0 for PowerPoint 2007, and 11.0 for PowerPoint 2003.)
Add a new DWORD value named ExportBitmapResolution and set its DECIMAL value to the DPI value you want (for example, 300 means 300 dots per inch)
Close REGEDIT, start PowerPoint and test. Your files will be 3300x2550 pixels instead.
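If you prefer to script the change rather than edit by hand in REGEDIT, the same DWORD can be set from an elevated Windows command prompt. This is a sketch for PowerPoint 2013 (15.0); substitute your own version number, and the change only takes effect after PowerPoint restarts:

```shell
reg add "HKCU\Software\Microsoft\Office\15.0\PowerPoint\Options" ^
    /v ExportBitmapResolution /t REG_DWORD /d 300 /f
```

The same caveat applies: if this doesn't make sense to you, it is better not to mess with the registry.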
Resizing and splitting up PNG Image files
To expand and chop the slide into four letter-sized pages, we will use "ImageMagick", an open source collection of command line utilities you can download for free at [ImageMagick]. The first utility, "identify", will confirm the pixel size of your PNG image. Replace "small.png" with whatever you named your PNG image above. Next, the "convert" utility doubles the image in both dimensions to create "big.png".
Lastly, we crop the "big.png" image we just created into four smaller pieces. Each piece will be exactly the same size as your original image! The files will be named big_0.png, big_1.png, big_2.png and big_3.png.
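Putting those steps together, the commands look roughly like this, using standard ImageMagick utilities. The first line just manufactures a sample "small.png" at the 96 dpi landscape-Letter pixel size for demonstration; in practice you would start from the PNG you exported from PowerPoint:

```shell
# For demonstration only: create a sample "small.png" at landscape
# Letter pixel size (11 x 8.5 inches at 96 dpi = 1056 x 816 pixels)
convert -size 1056x816 xc:white small.png

# 1. Confirm the pixel size of your exported slide
identify small.png

# 2. Double both dimensions: 8.5x11 inches becomes 17x22 inches
convert small.png -resize 200% big.png

# 3. Chop big.png into a 2x2 grid; each tile is the original page size
convert big.png -crop 2x2@ +repage big_%d.png
```

The `2x2@` geometry tells ImageMagick to cut the image into four equal tiles, and `+repage` resets each tile's canvas offset so it prints as a normal full page.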
Since the resulting four pieces are exactly the size of a page, you can put them back into your PowerPoint deck. Create four blank slides, select Insert then Pictures. Insert each picture (big_0.png, big_1.png, big_2.png, and big_3.png) as a separate page.
You can print this out, and bring with you to the event, or send it to someone to have them print for you.
Upload files to IBM@Box
This next step is completely optional, but I found it adds a nice touch. As an IBMer, you can upload your presentation, and any documents, whitepapers or other materials, to [IBM@Box]. Create a directory that is unique to you, such as your last name and the conference. For example, I have "Pearson-STU-NOLA-2017" as my folder name.
You can create a "URL Link" to this folder. Select "Share", then "Share Link" to create a dialog box. It is important to specify "People with this link" if you want those outside of IBM, such as clients and IBM Business Partners, to have access.
Press the little "gear" button on the upper right, and it gives you options to customize the URL. Normally the URL is some long random sequence of characters, but you can rename it to something meaningful and easier to remember.
Generate a QR Code
Since you have a URL Share Link for your files on IBM@Box, you can generate a QR Code for this link, and include on your poster!
There are several online websites that can generate a QR Code for free. I use [QRme.com] in this example. Go to the website, paste in the URL, and press the "Generate" button.
Once the QR Code is generated, right-click and "Save Image" to a file on your hard drive. This image can be inserted as a picture, like we did above, onto any slide. You can resize it as needed.
In Melbourne, one of the posters had the QR Code at the top, with the title, where it was nearly impossible to see, making it difficult to scan with a smartphone. For this reason, I recommend putting the QR code in the center or lower right corner of your poster, between shoulder and waist height for the audience, so it is comfortable to scan.
I am looking forward to going back to New Orleans to speak at this conference!
Well, it's Tuesday again, and you know what that means? IBM Announcements!
IBM Elastic Storage Server
Replacing the older "GSn" and "GLn" models, IBM announces the "Second Generation" GSnS and GLnS models (the second "S" stands for Second Generation), the "n" continues to refer to the number of storage drawers. All of these have a pair of POWER8 servers to drive amazing performance at a low price point.
The "GSnS" models are based on smaller 2U, 24-drive storage drawers, with 3.84 and 15.36 TB Tier-1 Read-intensive Solid-State Drives (SSD). The "GLnS" models are based on larger 5U, 84-drive storage drawers, with 4TB, 8TB and 10TB nearline (7200 rpm) spinning disk.
These new models have the latest IBM Spectrum Scale software pre-installed.
In addition to IBM's two existing Hyperconverged offerings--IBM Spectrum Accelerate for x86 servers, and IBM Spectrum Scale for x86, POWER and z Systems servers--IBM Power Systems now offers a third option. This integrated offering combines Nutanix's Enterprise Cloud Platform software with IBM Power Systems™ hardware to deliver a turnkey hyperconverged solution that targets critical workloads in large enterprises.
Nutanix is offered and will be defaulted/required on these Power® servers only:
While "Hyperconvergence" is still fairly new, and only about 1 percent of data centers have deployed this new technology, I am glad that IBM is a leader in this space with multiple offerings across both x86 and POWER systems platforms.
Edge will be different in many ways this year. The past few years we had separate "Executive Edge" for C-level executives, "Winning Edge" for IBM Business Partners, and "Technical Edge" for server, network and storage administrators.
This year, all 1,000 sessions are combined back into one, but with clever hints in the titles. The words "General Session", "Outthink" or "Cognitive" are used to indicate C-level executive talks. Those that use the terms "Winning" or "Community" target IBM Business Partners, Managed Service Providers and Cloud Service Providers. Those that mention z Systems, POWER servers, or Storage solutions, often adding the term "Deep-Dive", are technical.
(Unlike other sessions that might appeal to one portion of the audience or another, mine are suitable for everyone, from C-level executives and IBM Business Partners to storage administrators. To help people find them under the new naming scheme, I have added "Tony Pearson Presents", or words to that effect.)
About 260 breakout sessions relate to IBM Storage, but there are only 20 or so time slots, so obviously you can't see them all in person.
I strongly suggest you pick about three to five topics per time slot, so that you are not overwhelmed by the dozens of choices during the event and can quickly decide which session to attend in each slot.
Occasionally, a session might get canceled, postponed, or be so full of attendees that nobody else is allowed in, so having three to five topics selected allows you to choose an alternate.
Here is my schedule for next week at Edge 2016.
Trends & Directions: The Future of Storage in the Cloud and Cognitive Era
All Flash is Not Created Equal: Tony Pearson Contrasts IBM FlashSystem and SSD
MGM Grand - Studio 9
Solution EXPO: Reception
Edge at Night: Poolside Reception and Concert "Train"
Tony Pearson Presents IBM Cloud Object Storage System and Its Applications
MGM Grand - Room 114
The Pendulum Swings Back: Tony Pearson Explains Converged and Hyperconverged Environments
MGM Grand - Room 113
Solution EXPO: Reception
Tony Pearson Presents IBM's Cloud Storage Options
MGM Grand - Room 116
My colleagues Dave Dabney or Adam Bergren will be located at the WW Systems Client Centers Booth 125 of the Solution EXPO.
If you are active in Social Media, consider using the hashtags #IBMedge, #IBMstorage, and #IBMcloud. You can follow me on Twitter, my handle is @az990tony
For those interested in a one-on-one meeting with me, over breakfast, lunch or dinner, or some other time, I have several slots still available. Fill out a request form on BriefingSource at: [https://briefingsource.dst.ibm.com/]
Well, it's Tuesday again, and you know what that means? IBM Announcements!
IBM Storwize V5030F and V7000F all-flash high-density expansion enclosure
The 5U-high, 92-drive expansion enclosure introduced for the IBM Storwize V5000 and V7000 is now available for the all-flash models V5030F and V7000F. High-density expansion enclosure Model A9F requires IBM Spectrum Virtualize Software V7.8, or later, for operation.
The enclosure allows any mix of "Tier 0" write-endurance SSD at 1.6TB and 3.2TB capacities, and "Tier 1" read-intensive SSD at 1.92TB, 3.84TB, 7.68TB and 15.36TB capacities.
Storwize V5030F control enclosure models support attachment of up to 40U of expansion enclosures, which equates to eight high-density expansion enclosures, up to 760 drives per control enclosure, and up to 1,056 drives per clustered system.
Storwize V7000F control enclosure models support attachment of up to eight high-density expansion enclosures, up to 760 drives per control enclosure, and up to 3,040 drives per clustered system.
IBM has adopted an "Agile" process for all of its IBM Spectrum Storage software, which means quarterly delivery of new features and functions! Spectrum Virtualize is offered in a variety of forms: the FlashSystem V9000, SAN Volume Controller, the Storwize family, and Spectrum Virtualize as software that runs on Lenovo and SuperMicro servers.
Lots of small enhancements were added in this release:
Apply Quality-of-Service (QoS) limits to a Host Cluster in terms of IOPS and/or MB/s throughput.
SAN Congestion reporting, via buffer credit starvation reporting in Spectrum Control and via the XML statistics reporting, for the 16Gbps FCP Host Bus Adapter (HBA).
Resizing for Metro Mirror and Global Mirror remote copy services of thin provisioned volumes.
Consistency Protection for Metro Mirror and Global Mirror. You can now define "Change Volumes" so that, in the event of problems with MM or GM, replication will switch over to GMCV mode.
Increased FlashCopy Background Copy Rates
Proactive Host Failover during temporary and permanent node removals from cluster
IBM Aspera® Files cloud service helps to enable fast, easy, and secure exchange of files and folders of any size between users, even across separate organizations. Aspera Files is currently available in three all-inclusive editions of Personal, Business, and Enterprise. Clients can subscribe either to a committed amount of data transferred on a monthly or annual basis or as a pay-per-use option.
Personal edition now includes 20 authorized users and a single workspace.
Business edition now includes 100 authorized users, 100 workspaces, support for IBM Aspera Drive, support for IBM Mobile applications, and support for Single-Sign-On.
Enterprise edition now includes 500 authorized users, no limit on number of workspaces, support for IBM Aspera Drive, support for IBM Mobile applications, and support for Single-Sign-On.
IBM is now introducing a new "Elite edition", which includes 2,500 authorized users, no limit on the number of workspaces, support for IBM Aspera Drive, support for IBM Mobile applications, support for Single-Sign-On, and access to the IBM Aspera Developer Network and a nonproduction organization.
With the addition of the new Elite edition, clients have the flexibility to subscribe to additional functionality in Aspera Files that helps provide higher value and greater differentiation. The Elite edition is available as a subscription and on a pay-per-use basis.
In addition to the existing charge metric of data transferred, a user subscription metric is now included for all four editions. Each edition comes with an included number of authorized users in addition to other key features and capabilities.
Last week, I presented at the "IBM TechU Comes to You" event in beautiful Nairobi, Kenya. This was a three-day event, so here is my recap of Day 1, Tuesday Aug 2, 2016.
Opening Keynote Session
Once again, Marlin Maddy, IBM Manager of Worldwide Systems Technical Events, served as master of ceremonies. He arrived in Nairobi just a few hours earlier, and we were worried that one of us might have to jump in and take over if he had any delays in his flight schedule. Fortunately, he arrived and did a great job welcoming the audience.
Eric Jaoko, chief manager of Kenya's Rural Electrification Agency [REA], presented next. Back in 1973, the Kenyan government wanted to have all of its rural areas offering electrical service. Some 30 years later, in 2002, only 4 percent of the rural areas had achieved this. In 2006, the Kenyan government formed this new REA agency to accelerate the progress. By 2008, nearly 25 percent of rural areas were electrified. Currently (2016), they are now at 68 percent, including all primary schools (more than 20,000 across the country).
Eric mentioned that this success was due in part to their partnership with IBM for Information Technology. REA switched from Oracle to SAP applications on IBM Power systems with IBM Storwize V7000, resulting in lower costs, lower power consumption, easier deployment and management, redundancy and high availability, scalability, and high-speed access to critical data. Not surprisingly, IBM's leadership in "Mobility" plays another key role, since these areas are rural and often connected only by cellular phone service.
REA employs both AIX and Linux on POWER operating systems, and uses OpenStack to manage both the server and storage components. PowerVM, PowerVC and PowerHA complete the solution to provide a more robust environment. REA found it very easy to clone their SAP systems, which made it simple to test software upgrades without impacting their production environments.
The next speaker was IBM's own Glenn Anderson, IBM z Systems Consultant and Worldwide Technical Events Content Manager. His talk was titled "Think Outside the Cubicle" to emphasize that there are changes underfoot in the IT industry. Rather than focusing on IT as a cost to be reduced, enlightened CEOs are discovering that IT can be used to optimize value for their organization.
One trend that has changed drastically is what IBM refers to as "Systems of Engagement". To better connect with clients, customers and suppliers, organizations now create conversations on social media channels, listen and react to those conversations, building communities that allow them to better understand and serve their markets.
Another trend was "Two-speed IT", often called "Bimodal IT", which indicates that some projects should have "fast-track" status, streamlining the process of design, development and deployment for new innovations. This is in contrast to traditional "slower" projects for mission critical "Systems of Record" operations, like databases and Online Transaction Processing (OLTP).
The last trend he covered was the notion of "Cognitive Business", the use of self-learning, natural language processing systems to assist in business decision making. Glenn compared the old way to a static map that indicated "You Are Here". The new way is more like GPS, which indicates where you are, where you want to be, and the steps to get there.
(You might ask, "Why do business leaders need such assistance?" First, business executives cannot ingest and comprehend the vast amount of data they need to make correct decisions, causing them to make less-than-optimal choices with limited information. Second, business leaders are often on the job only a few years, moving from one opportunity to another, and do not build the depth of experience that a computer ingesting millions of documents can achieve much more quickly. Third, business leaders are often prone to bias, surrounding themselves with ["yes-men"], unwilling to accept any information that contradicts their world view. Computers do not have that bias, and are capable of finding insights, trends and patterns that business leaders might not have considered.)
Software Defined Storage -- What? Why? How?
I was honored to be asked to be the keynote kick-off for the IBM Storage track of this conference. There is still much confusion over the concept of Software Defined Storage (SDS). While there are many different positions on this, IBM has adopted the IDC definition, which requires all three criteria to be met:
Solutions based on industry-standard, off-the-shelf components.
Solutions that offer the complete set of storage features and functions, such as point-in-time copies, data footprint reduction, technical refresh migration, and remote replication.
Solutions that are offered in multiple ways, such as software-only, pre-built systems using industry-standard off-the-shelf components, and cloud-based services.
IBM's SDS offerings include all of the IBM Spectrum Storage family available as software-only, pre-built systems like SAN Volume Controller and XIV Gen3, and cloud-based services like IBM Cloud Managed Backup and Archive, and IBM Cloud Object Storage System (formerly Cleversafe).
IBM ranks #1 in the SDS marketplace, with over 40 percent market share. The advantage of IBM's approach is that it does not require a complete rip-and-replace of existing IT infrastructure. IBM solutions can work with the servers and storage you already have in place, allowing for a smooth and graceful transition.
Cloud Computing Concepts and the Role of Infrastructure
This session was covered by Mack Kigada, IBM Executive Consultant for the "Executive Advisory Practice" portion of Systems Lab Services. Frankly, I think this should have been classified as "Cross-Brand" rather than "Storage", as it showed not just storage but also how servers and OpenStack participate in a complete Hybrid Cloud solution.
The new IBM FlashSystem A9000 GUI
This session was presented by Dominique Salomon, IBM Certified IT Specialist Storage and European New Technology Introduction Leader. He works at the IBM Montpellier Briefing Center in France, a sister organization to the IBM Tucson Executive Briefing Center that I work in.
When IBM was ready to launch its newest FlashSystem offering, which combines the low-latency IBM FlashCore technology from IBM FlashSystem 900 with the IBM Spectrum Accelerate software from XIV, they had to decide what Graphical User Interface [GUI] to deploy it with. The IBM development team had narrowed it down to three options:
Use the IBM XIV Gen3 GUI, which is installed client code that runs on a handful of select operating systems. This GUI is nine years old.
Adopt and modify the browser-based GUI used by all of the other IBM Storage systems like DS8000 and SAN Volume Controller. By using HTML5, AJAX and Dojo widgets, this newer approach eliminates Operating System and Java dependencies, and can run on desktops, laptops, tablets and smartphones. However, this technology is four years old.
Deploy a new GUI, adopting the latest techniques and methods, offering a new, simpler way to manage the new device.
The development team decided on the third option, and so Dominique spent the first half hour explaining what the IBM FlashSystem A9000 and A9000R systems are, and the last half showing a live demo connecting back to his systems in Montpellier, France.
IBM XIV, Spectrum Accelerate and the new IBM FlashSystem A9000
This session was covered by Maurice "Mo" McCullough, IBM Storage Technical Content Leader for IBM Systems Worldwide Technical Events. In retrospect, he admitted that he should have scheduled this session before Dominique's session above, which would have reduced the time Dominique spent fielding questions and explaining the IBM FlashSystem A9000, leaving more time to show the new GUI.
Mo first covered the newest model of the XIV Gen3 pre-built system, the model 314. It has double the cache memory and double the processing cores to drastically improve Real-time compression. Then, he explained IBM Spectrum Accelerate, available as either software you can deploy on your own x86 servers on-premises, or in cloud-based servers from IBM SoftLayer. Finally, Mo covered the A9000 and A9000R, the newest members of the IBM FlashSystem family that share features and capabilities with the XIV Gen3 and Spectrum Accelerate offerings.
Tuesday evening we had a welcome reception for all the attendees, staff and speakers. This was a great time to relax and meet everyone on a social level.
Last week, I presented at the "IBM TechU Comes to You" event in beautiful Nairobi, Kenya. This was a three-day event, so here is my recap of Day 3, Thursday Aug 4, 2016.
Business Continuity and Disaster Recovery for z Systems
I have been working in Business Continuity and Disaster Recovery my entire career at IBM, so when I was asked to give a "z Systems" mainframe slant to my standard BC/DR pitch, I was up to the challenge. IBM offers a complete set of solutions, and I presented best practices for each.
Data Protection, Management and Journey to the Cloud with IBM Spectrum Protect
This session was presented by Saumil Shah, IBM Spectrum Protect Sales Leader for Middle East, Turkey & Africa. I am glad that Saumil volunteered to cover IBM Spectrum Protect, as I already had six sessions on my plate for the week. My version tends to focus on the "What and How" of data protection, whereas Saumil focused instead on the "Why": why should you protect data, and why should you use IBM Spectrum Protect instead of the various other software products in the marketplace?
IBM Spectrum Virtualize - Understanding SVC, Storwize and the FlashSystem V9000
IBM Spectrum Virtualize is the new name for the code base shared by all of these products. I presented the latest features of SVC, Storwize and FlashSystem V9000 hardware models, as well as the latest software features.
How to combine the advantages of Storage Virtualization and Flash performance (the Turbocompression effect)
This session was presented by Dominique Salomon, IBM Certified IT Specialist Storage and European New Technology Introduction Leader. He works at the IBM Montpellier Briefing Center in France, a sister organization to the IBM Tucson Executive Briefing Center that I work in. The term "Turbocompression" was initially coined by his team in Montpellier to explain the combined benefits of Flash technology, Easy Tier automated sub-LUN tiering, and Real-time Compression.
I have to admit that the first time I heard this, I was skeptical. It sounded like a marketing gimmick to mention these together. However, once I saw the demo and the resulting numbers, I was convinced. IBM Easy Tier technology identifies and ranks which blocks are the busiest, and moves extents to the appropriate place. Real-time Compression can compress data in cache memory, flash and spinning disk, allowing more of the busiest blocks of data to reside in the fastest storage media. This means higher hit ratios for cache, lower latency for flash, and less wear-and-tear on the spinning disk drives.
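As a toy illustration of the tiering half of this idea (this is not IBM's actual Easy Tier algorithm, just a sketch of the general "rank by heat, promote the hottest" approach, with hypothetical extent names):

```python
def retier(heat_by_extent, flash_slots):
    """Rank extents by access count; hottest go to the fast tier.

    Returns (flash_extents, disk_extents). A real tiering engine
    works on sliding windows of I/O statistics and moves extents
    gradually, but the ranking idea is the same.
    """
    ranked = sorted(heat_by_extent, key=heat_by_extent.get, reverse=True)
    return set(ranked[:flash_slots]), set(ranked[flash_slots:])

# Hypothetical per-extent access counts gathered over a monitoring window
heat = {"ext0": 3, "ext1": 950, "ext2": 41, "ext3": 700}
flash, disk = retier(heat, flash_slots=2)
print(sorted(flash))  # ['ext1', 'ext3'] -- the two busiest extents
```

Compression then multiplies the benefit: if the hot extents compress, even more of the busiest data fits in the limited flash capacity.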
Storage Integration with OpenStack
While OpenStack is used by more than 60 percent of Cloud Service providers, it is used by fewer than 10 percent of the Fortune 500 corporations. This represents an excellent opportunity for IBM, which leads in supporting this important open source interface across its storage products.
IBM supports the OpenStack Cinder interface for its block-level devices, including DS8000 and XIV. IBM supports OpenStack Swift for its object storage, including IBM Spectrum Scale, IBM Spectrum Archive, and IBM Cloud Object Storage System (formerly Cleversafe). IBM Spectrum Scale supports the OpenStack Cinder, Swift, and Manila interfaces for a complete solution across volumes, files and objects.
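To make the Cinder support concrete, here is a hedged sketch of what enabling an IBM Storwize/SVC backend looks like in `cinder.conf`. The driver path reflects OpenStack releases of this era; the address, credentials and pool name are placeholders, so check the Cinder driver reference for your release before using anything like this:

```ini
[DEFAULT]
enabled_backends = ibm-storwize

[ibm-storwize]
# iSCSI variant of the IBM Storwize/SVC Cinder driver
volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_iscsi.StorwizeSVCISCSIDriver
san_ip = 192.0.2.10                  # placeholder management address
san_login = cinder_admin             # placeholder credentials
san_password = CHANGE_ME
storwize_svc_volpool_name = Pool0    # placeholder storage pool
volume_backend_name = ibm-storwize
```

Once the backend is defined, volumes created through the normal OpenStack APIs are provisioned on the IBM system with no change to the consuming applications.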
Marlin Maddy, IBM Manager of Worldwide Systems Technical Events, served as master of ceremonies. He thanked the audience for attending, and drew names for prizes. This time, the prizes were Samsung smartwatches.
Thursday evening, some people left, and the few of us remaining had dinner at the Intercontinental Hotel. I joined folks from USA, Germany and Middle East. I love our informal discussions! I learn so much listening to other points of view.
This week, I am in Las Vegas for [Edge 2016], IBM's Premiere IT Infrastructure conference of the year.
Day 4, the last day of the conference, is only a partial day, and many people opted to leave on Wednesday evening, or Thursday morning instead. The breakfast and lunch meals had fewer people than the previous days. Here is my recap of day 4 Thursday breakout sessions.
Building Hyperconverged Infrastructure for Next-Generation Workloads
Supermicro is more than happy to customize these, upgrading the CPU, RAM, disk or networking connectivity as needed. This solution is roughly half the price of Nutanix, and offers a better Next-Business-Day/9am-to-5pm support package.
The last time I was in Las Vegas, I presented this topic at [IBM Interconnect conference]. Back then, I was given only 20 minutes and was placed on the Solutions Expo showroom floor, competing with the noise and traffic of attendees going to lunch.
This time, it was much better, a large room, and a bigger-than-expected audience given that it was scheduled on Thursday morning.
Cloud storage comes in four flavors: persistent, ephemeral, hosted, and reference. The first two I refer to as "Storage for the Computer Cloud" and the latter two I refer to as "Storage as the Storage Cloud".
I also explained the differences between block, file and object access, and why different Cloud storage types use different access methods. I wrapped up the session covering the various storage solutions that IBM offers for all four Cloud Storage types.
IBM Storwize and IBM FlashSystem with VersaStack versus NetApp FlexPod
Norm Patten, part of the IBM Competitive Project Office Storage Team, presented a competitive comparison between VersaStack with IBM storage, versus FlexPod with NetApp storage.
Commodity Solid State Drives (SSD) and Shingled Magnetic Recording [SMR] drives offer low-cost, high-capacity storage.
However, they have their own set of problems, so IBM is developing software that can be included in IBM Spectrum Accelerate, Spectrum Scale, and Spectrum Virtualize to optimize their utility.
The concept of Log-Structured Array has been around since 1988. The IBM RAMAC Virtual Array back in the 1990s used it. NetApp's Write-Anywhere File System (WAFL) is an implementation of the [Log-Structured File System] general concept.
SALSA combines Log-Structured Array with enhancements borrowed from the IBM FlashSystem design, which I covered in my Monday and Wednesday presentations, to enhance write endurance by as much as 4.6 times!
This was an NDA session, so I cannot blog any of the details.
World-class Flash-optimized Data Reduction and Efficiency with IBM FlashSystem A9000 and A9000R
Tomer Carmeli, IBM Offering Manager for the A9000 and A9000R presented. He presented an overview of these models on Monday, so this session was focused on the data footprint reduction technologies.
Basically, it is a three-step process. First, all "standard patterns" are removed. IBM has identified some 260 standard patterns that are 8KB in length, such as all zeros, all ones, or all spaces, and immediately replaces these blocks with a pattern token.
Second, [SHA-1] 20-byte hash codes are computed on 8KB pieces at a rolling 4KB alignment boundary. In other words, if a 64KB block of data is written, bytes 0-to-8KB are hashed and compared to existing hash codes. If there is no match, then bytes 4KB-to-12KB are hashed, and so on. This approach nearly doubles the likelihood of finding duplicates. When a block match is found, the algorithm replaces it with a pointer and reference count.
Third, any unique data that still remains is compressed using the Lempel-Ziv algorithm. This is done with the [Intel® QuickAssist] co-processor, which can compress data 20 times faster than software algorithms running on general-purpose x86 processors.
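The three steps can be sketched in miniature. This is an illustrative toy, not IBM's implementation: it uses only three stand-in patterns instead of 260, scans non-overlapping 8KB pieces instead of the rolling 4KB alignment, and uses zlib in place of the QuickAssist Lempel-Ziv hardware:

```python
import hashlib
import zlib

CHUNK = 8 * 1024  # 8KB reduction unit

# Illustrative stand-ins for a few of the ~260 standard 8KB patterns
STANDARD_PATTERNS = {
    b"\x00" * CHUNK: "ALL_ZEROS",
    b"\xff" * CHUNK: "ALL_ONES",
    b" " * CHUNK: "ALL_SPACES",
}

def reduce_block(data, seen):
    """Apply the three steps to non-overlapping 8KB pieces.

    (The real system also re-hashes at a rolling 4KB alignment,
    which roughly doubles the chance of finding duplicates.)
    """
    out = []
    for i in range(0, len(data), CHUNK):
        piece = data[i:i + CHUNK]
        if piece in STANDARD_PATTERNS:                 # Step 1: pattern token
            out.append(("pattern", STANDARD_PATTERNS[piece]))
        else:
            digest = hashlib.sha1(piece).digest()      # Step 2: SHA-1 dedup
            if digest in seen:
                seen[digest] += 1                      # bump reference count
                out.append(("dup", digest))
            else:
                seen[digest] = 1
                out.append(("unique", zlib.compress(piece)))  # Step 3: compress
    return out

# Example: a zero block, a text block, and a repeat of the same text block
seen = {}
text = (b"hello world! " * 700)[:CHUNK]
result = reduce_block(b"\x00" * CHUNK + text + text, seen)
print([kind for kind, _ in result])  # ['pattern', 'unique', 'dup']
```

The ordering matters: pattern removal and deduplication are cheap and run first, so only genuinely unique data pays the cost of compression.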
Do you want an estimate of how much "reduction ratio" you may achieve? IBM has developed two estimator tools to help. The first tool is a complete scan for data expected to be dedupe-friendly. It is a slow process, taking 8 hours per TB. This would be ideal for Virtual Desktop Infrastructure or backup copies.
The second tool is the infamous [Comprestimator] that IBM has offered for a while to help estimate compression savings for IBM Spectrum Virtualize storage solutions like SVC, Storwize and FlashSystem V9000. This tool is very fast, looking at only a statistically valid subset of the data.
The results of both tools are merged, and the combined result is accurate to within five percent. This allows IBM to offer guidance on which data to place on these new A9000 and A9000R models, as well as offer a "reduction ratio" guarantee.
A client asked me why I bother to attend other sessions, when I probably know most of the material they present. I explained that I can always learn from others. I can honestly say that I learned something new and useful at every session I attended.
I am not in Las Vegas this week for this year's event, but the sessions will be streamed live through [IBM GO].
IBM Systems Technical University - May 22-26, 2017 - Orlando, FL
IBM Systems Technical University is the evolution of a variety of other conferences related to servers, storage and software. It started out as the "IBM Storage Symposium", added "System x" servers to become "Storage and System x University", then dropped "System x" when IBM sold off that business to Lenovo.
A few years ago, it was renamed "Edge", initially focused just on Storage, but two years ago it was combined with System z mainframe servers and POWER Systems for IBM i and AIX platforms. It also covers software products that previously had their own conferences, like IBM Pulse or MaximoWorld.
Last year, the IBM Marketing team tried a daring experiment. Let's change "Edge" to be a "Cognitive Solutions and Cloud Platform" conference, with emphasis on IT Infrastructure.
The experiment failed. Not because IBM Systems don't support these new initiatives, but because the audience was more interested in hearing how IBM Systems help their current day-to-day business. As many attendees told me, "If we wanted to hear about Cognitive or Cloud, we have plenty of other conferences that cover that already!"
While 40 percent of IBM revenues are generated from Cognitive Solutions and Cloud Platform, the other 60 percent come from traditional, on-premises, systems-of-record application workloads, the kind that businesses, non-profit groups, and government agencies have been using for the past few decades!
To address this need, IBM offered three-day "IBM Systems Technical University" events at various locations. Last year, I presented storage topics at events in Atlanta, Austin, Bogota, Boston, Chicago, Dubai, Nairobi, and São Paulo.
We will have several of those this year as well. The main one will be a full 5-day event, May 22-26, in Orlando Florida. I will be there presenting various sessions on storage!
IBM World of Watson - October 29-November 2, 2017 - Las Vegas, NV
This is a Cognitive Solutions and Cloud Platform conference, with an emphasis on Analytics and Database technologies.
I did not attend World of Watson, or WoW for short, last year, but it was an evolution of the conference previously called "IBM Insight". I am sure everything from DB2 and Open Source databases to Hadoop and Spark will be covered this year as well.
In writing this post, I realize that this year will be like a "Conference Sandwich". Cognitive-and-Cloud at the top and bottom, with all the meat, veggies and garnish in the middle!
We have a new member of the ever-growing IBM Spectrum Storage family! IBM Spectrum Discover is modern metadata management software that delivers data insight for petabyte-scale, unstructured data.
IBM Spectrum Discover easily connects to IBM Cloud Object Storage (COS) and IBM Spectrum Scale and Elastic Storage Server (ESS) to rapidly ingest, consolidate, and index metadata for billions of files and objects, providing a rich layer of metadata on top of these storage sources. IBM plans to extend support to other platforms next year.
This metadata enables data scientists, storage administrators, and data stewards to efficiently manage, classify, and gain insights from massive amounts of unstructured data. The insights gained accelerate large-scale analytics, improve storage economics, and help with governance to create competitive advantage, speed critical research, and mitigate risk.
This initial release is labeled v2.0 as IBM has deployed this in beta form already at various client locations. Here are some key highlights:
Event-notifications and policy-based workflows to automate metadata ingestion and metadata indexing at a petabyte scale
Fine-grained views of storage consumption based on a wide range of system and custom metadata
Fast, efficient search through petabytes of data, resulting in highly relevant results for large-scale analytics
Ability to quickly differentiate mission-critical business data from data that can either be deleted or moved to a cheaper, colder tier
Policy-based custom tagging that enables organizations to classify and categorize data, and align this data with the needs of the business
A software developers kit (SDK) to build action agents that extract metadata from file headers and content, automate data movement, and provide integration to open source software, such as Apache Spark, Apache Tika, PyTorch, Caffe and TensorFlow, to facilitate data identification and speed large-scale data processing
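To give a flavor of what an action agent does, here is a self-contained, purely hypothetical sketch (the Spectrum Discover SDK's actual API is not shown here): a metadata extractor that reads a PNG file header and returns tags that could then be indexed alongside the system metadata.

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_header_tags(blob):
    """Extract format/width/height tags from a PNG header (first 24 bytes).

    Returns an empty dict for non-PNG data, mimicking an agent that
    simply contributes no custom tags for files it does not recognize.
    """
    if blob[:8] != PNG_SIGNATURE:
        return {}
    # IHDR chunk follows the signature: length(4) + type(4), then width(4) + height(4)
    width, height = struct.unpack(">II", blob[16:24])
    return {"format": "png", "width": width, "height": height}

# Build a minimal synthetic PNG header for demonstration
header = PNG_SIGNATURE + b"\x00\x00\x00\rIHDR" + struct.pack(">II", 640, 480)
print(png_header_tags(header))  # {'format': 'png', 'width': 640, 'height': 480}
```

A real agent would be triggered by the event notifications described above and would push its tags back into the metadata index rather than printing them.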
The latest IBM FlashSystem 900 comes in two models, the AE3 "full purchase" model, and the UF3 "storage utility pricing" model where you pay less initially, and then more as you consume more of the capacity. They are the same hardware, just licensed differently.
Currently, IBM offers FCP or InfiniBand host attachment, with up to twelve 3.6TB, 8.5TB or 18TB modules (PCIe cards). A full 2U drawer would be configured as 10+P+S RAID 5 for high availability and data protection.
Each module has an embedded compression chip, but previous modules only had enough DRAM cache to allow a maximum of 22TB of effective (compressed) data. So while the 3.6TB and 8.5TB modules could compress data up to 2.5x, the 18TB card was somewhat limited at 1.2x, which might be fine for some already-compressed data like MP3 audio or JPEG photos.
This month, IBM offers new XL MicroLatency Modules, 18TB cards with enough DRAM cache to support 44TB compressed data, up to an effective 2.4x compression ratio. A full twelve-module drawer could hold up to 440TB of effective capacity.
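For the curious, the arithmetic behind those figures works out as follows (in the 10+P+S RAID 5 layout, 10 of the 12 modules hold data):

```python
data_modules = 10   # 10+P+S RAID 5: 10 data, 1 parity, 1 spare
raw_tb = 18         # XL MicroLatency Module raw capacity
effective_tb = 44   # per-module maximum with compression

print(effective_tb / raw_tb)       # ~2.44, quoted as "up to 2.4x"
print(data_modules * effective_tb) # 440 TB effective per full drawer
```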
IBM also now offers a quad-port 16Gb FCP card that supports both SCSI and NVMe commands over the fabric. This is often denoted as either FC-NVMe or NVMe/FC. The FlashSystem 900 already supported NVMe-oF for InfiniBand (see my blog post [IBM February 2018 Announcements]).
IBM Cloud Tape Connector for z/OS is a software-defined storage solution that provides an alternative to virtual tape libraries like the TS7760. Here are some highlights:
Robust virtual tape emulation solution with e-vaulting to cloud-based offsite storage for cold, archival, or backup data. Virtual tape emulation simulates IBM compatible tape controllers, tape drives, and tape volumes, maintained on any IBM z/OS-compatible disk system, such as IBM DS8000. IBM Cloud Tape Connector for z/OS provides several vault, transfer, and recovery options to support business continuity and resiliency.
Sequential z/OS data set cloud storage and retrieval. Sequential data sets stored on disk or flash storage can be moved to the cloud by IBM Cloud Tape Connector for z/OS without the requirement of performing a tape-write operation.
Automatic application recall of data from cloud, whether e-vaulted through virtual tape emulation or copied directly to the cloud.
Pervasive encryption support. This feature enables enterprises to ensure that any data copied to the cloud is encrypted before it is transmitted, automatically protecting and handling the encryption keys.
Support for IBM Cloud Object Storage using S3 protocol, as well as Amazon S3, Hitachi HCP protocol, and EMC Elastic Cloud Service Protocol.
This week, IBM InterConnect conference is going on in Las Vegas, Nevada.
One time in Las Vegas, I took the gondola ride at the Venetian Hotel. These are not boats with a motor on a chain or track, but are actually steered and propelled independently by the gondolier. At various points on our path, our gondolier would serenade our group with beautiful Italian songs.
As the ride was ending, I asked our gondolier how long their training program was to do this job. He told me "six weeks". I said "Wow, I would love to learn how to sing Italian songs like that in six weeks". He corrected me, "No, silly, they only hire experienced singers, and teach them six weeks to manage the gondola by turning the oar in the water."
(FCC Disclosure: I work for IBM. I have no financial interest in the Venetian Hotel, CBS Studios, or the producers of any television shows mentioned in this post. David Spark has provided me a complimentary copy of his book. This blog post can be considered an "unpaid celebrity endorsement" for the book reviewed below.)
InterConnect 2017 includes "Concourse", a trade show floor with people showing off the latest technologies. In the past 25 years, I have attended many conferences, and on occasion I have worked "booth duty". I am not in Las Vegas this week, so this post is advice to those that are.
One time, when the coordinators for an upcoming conference announced at an all-hands meeting that they were looking for "a number of knowledgeable and outgoing volunteers" to work the IBM booth, one of the employees in the audience asked, "How many of each?" While this might have been meant to draw laughs, it underscored a real problem.
In many IT and engineering fields, the terms "knowledgeable" and "outgoing" are seen as mutually exclusive; people are either one or the other. A study titled [Personality types in software engineering], by Luiz Fernando Capretz of the University of Western Ontario, analyzed Myers-Briggs Type Indicator (MBTI) personality types and found the majority of software engineers were "Introverts".
This line of thinking is further reinforced by the various characters on television shows like "The Big Bang Theory". If you are familiar with the show, Sheldon and Amy are the most knowledgeable but also the most socially awkward, while Penny and Howard are less knowledgeable but at the more outgoing end of the spectrum.
I understand that for many engineers, working a booth at a trade show is far outside their "comfort zone". But what do you think is more likely: that in six weeks you can train an engineer to work a booth, be more outgoing, hold the right conversations, and tell the right stories -- or train a professional model, a young, good-looking man or woman who is already outgoing and friendly, to answer technical engineering questions about your products and services?
I have been attending conferences for over 25 years, and occasionally have worked a booth or two. I started out as an engineer, but went through extensive training for public speaking, talking to the media and press, and moderating Q&A Expert panels.
Sadly, most people who work the booth get little to no training at all. You might be told your scheduled hours, how to scan bar codes on badges, and where the brochures and swag are stored. Then, you get your official "shirt" and are told to wear it with a certain color of pants, so that everyone looks like part of the team.
Fortunately, fellow blogger David Spark, of Spark Media Solutions, has written a book titled "Three Feet from Seven Figures", loaded with advice on how to work a booth, using one-on-one engagement techniques to qualify more leads at trade shows.
The title of his book warrants a bit of explanation. When you are working a booth, potential buyers and influencers are walking by, often just three feet away from you, and these could represent million-dollar opportunities.
Too often, the folks working a booth take a passive approach. They look down at their phones, chat with their colleagues, and basically wait for complete strangers to ask them a question or request a demo. This non-verbal communication can really be a turn-off. David explains this in all-too-familiar detail and how to be more actively engaged.
David shows how to break the ice and build rapport with each attendee, how to qualify them as legitimate leads, and how to handle each type of situation.
For qualified leads, you need to maximize the opportunity. If you imagine how much a company spends to send its employees to work the booth, plus the cost of the booth itself, and divide it by the limited number of hours that the trade show floor is open, you quickly realize that each hour is precious.
Your time is valuable, and certainly their time is valuable too. Don't spend too much time on a single lead; rather, capture the information, end the conversation, and move on.
If you are working a booth at IBM InterConnect, or plan to work a booth at an event later this year, I highly recommend getting this book! It is available in a variety of hard copy and online formats at [ThreeFeetBook.com].
The study surveyed 5,676 leaders from various industries, education, and government agencies responsible for workforce development and labor/workforce policy. This was a truly global survey, with respondents from North and South America, the Nordics, Europe, Africa, Middle East and Asia.
A gloomy picture for the future
The survey paints a gloomy picture for the future. The majority of industry executives struggle to keep their workforce skills current, in light of rapidly changing technological advancements.
Only 55 percent of the respondents felt the current education system, from grade school up to university, was adequate to ensure lifelong learning and skills development. Most blamed inadequate investment from private industry in addressing these issues.
Any problem can be solved if (a) everyone agrees what the problem is, and (b) everyone feels it is high enough priority to solve. The study found there was a disparity of what the problem is, what the priorities are, and who should solve it.
In the book Class Counts: Education, Inequality, and the Shrinking Middle Class, the author Allan Ornstein argues ".. the debate centers on whether the government should take a backseat or manage the economy, whether a free market should prevail or whether we should redefine or tinker with market forces..."
Which workplace skills are in short supply?
Can we at least agree on which workplace skills are in short supply?
Not surprisingly, industry leaders ranked these as the top three skills required:
Technical core capabilities in Science, Technology, Engineering and Math [STEM]
Basic computer and software/application skills
Fundamental core capabilities around reading, writing and arithmetic (often called [the three Rs])
These are all "hard skills", referring to the knowledge, skills and competencies to perform specific tasks. Nearly 75 percent of corporate training budgets are focused on hard skills.
Government leaders, on the other hand, especially those responsible for labor/workforce policy, ranked these as their top three skills:
Ability to communicate effectively in a business context
Willingness to be flexible, agile and adaptable to change
Ability to work effectively in team environments
These would all be classified as "soft skills", referring to the people skills, social skills, communication and emotional intelligence to effectively navigate the environment and work well with others.
In fact, these government leaders ranked STEM, computer skills and "the three Rs" as the lowest-priority requirements.
"Unless managers have forgotten everything they learned in Econ 101, they should recognize that one way to fill a vacancy is to offer qualified job seekers a compelling reason to take the job. Higher pay, better benefits, and more accommodating work hours are usually good reasons for job applicants to prefer one employment offer over another."
"... the long-hours pandemic is a symptom of the tech and design sectors' badge-of-honor-martyr-complex. ... part of the reason that women can't have it all is that American business has grown this time-macho culture, a relentless competition to work harder, stay later, pull more all-nighters, ... the classic 40-hour work week have trained us to measure our labor by the number of hours we log,... However, this mindset is dead wrong when applied to today's professionals. The value ... isn't the time they spend, but the value they create through their knowledge."
IT jobs require creativity and focus. In a feature article titled [Why you should work 4 hours a day, according to science], Alex Soojung-Kim Pang, author of Rest: Why You Get More Done When You Work Less, looks at the work habits of highly accomplished creative people through history and finds that they all shared a passion for their work, a terrific ambition to succeed, and an almost superhuman capacity to focus.
Yet when you look closely at their daily lives, they only spent a few hours a day doing what we would recognize as their most important work. The rest of the time, they were hiking mountains, taking naps, going on walks with friends, or just sitting and thinking.
Encouraging more students to develop these skills early
While we can all agree that employers should raise salaries, offer better benefits, and fix their morally corrupt culture of overwork, that only addresses the demand half of the equation. We also need to get kids to learn the necessary hard and soft skills at an early age.
Do students have what it takes to work in the IT industry? John Rampton lists the [15 Characteristics of a Good Programmer]. Most are soft skills, with my favorites being: Laziness, Impatience and Hubris.
In his book Why Good People Can't Get Jobs: The Skills Gap and What Companies Can Do About It, Peter Cappelli advises corporations to take a more proactive role:
"... a huge part of the so-called skills gap actually springs from weak employer efforts to promote internal training for either current employees or future hires ... It makes no sense for the employers, as consumers of skills, to remain an arm's-length distance from the schools that produce those skills..."
The major stakeholders, from industry to education to government, should partner together. For example, the Chicago Public Schools (CPS) system will be the first in the United States to [require all students to take computer science] in high school, starting with the class graduating in 2020. Grants and training are being provided by IT industry giants like Google and Microsoft.
IBM is also doing its part with [a new education paradigm], called Pathways in Technology Early College High Schools [P-TECH]. Traditional high school spans four years (grades 9 to 12), but P-TECH is a system of innovative public schools spanning grades 9 through 14 that brings together the best elements of high school, college, and career training. The additional two years (grades 13 and 14) of community college can help teach the soft and hard skills needed for particular jobs in IT.
After the six years, students graduate with a no-cost associate's degree in applied science, engineering, computers or related disciplines, along with the skills and knowledge they need to continue their studies or step easily into well-paying, high-potential IT jobs across multiple industries.
The paradigm has grown from one school in 2011 to 60 schools by September 2016, with over 300 large and small companies affiliated with P-TECH schools, serving thousands of students.
So the future may not be as gloomy as predicted. These problems can be addressed if everyone works together to solve them. In the meantime, I will be taking the rest of the year off for a long-overdue vacation. Perhaps I will hike mountains and take naps, as Alex suggests above.