This week, I am in Orlando, Florida for the [IBM Technical University], with a focus on IBM storage, IBM Z mainframes and IBM Power servers. Here is my recap of the morning breakout sessions on Day 2.
A Survey of Deep Learning Techniques
Nin Lei, IBM Distinguished Engineer, presented a sampling of Deep Learning techniques used today: CNN, RNN, and GAN.
Basic decision making follows a simple pattern: gather data, have it reviewed by a subject matter expert (SME), and produce an outcome. This is done for a variety of situations: fraudulent vs. legitimate credit card transactions, approving or rejecting a loan application, determining whether a tumor is benign or malignant. Machine Learning effectively replaces the SME with a mathematical function.
Various tools are available for this: TensorFlow, SnapML, SAS and SPSS are just a few.
Deep Learning is based on "Neural Networks", a subset of Machine Learning. There is an input layer, one or more hidden layers, and then an output layer. For example, for a photo, each pixel could be an input feature, so a 200x200 pixel photo represents 40,000 input values. In the past, networks rarely had more than three hidden layers. Today, we can have 20 to 50 layers, because we now have more computational power, achieving 95 to 97 percent accuracy.
For each connection between the input, hidden and output layers, you identify weights and biases. A research paper by Hornik (1989) posits that any machine learning task can be performed by a sufficiently large neural network.
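To make the weights-and-biases arithmetic concrete, here is a toy forward pass in Python with NumPy. The layer sizes, random weights and two-class output are made up for illustration; they are not from Nin's session.

```python
import numpy as np

def relu(x):
    """Common hidden-layer activation: negative values become zero."""
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Hypothetical sizes: a 200x200 photo flattened to 40,000 input features,
# two hidden layers, and a 2-class output (e.g. benign vs. malignant).
x = rng.random(40_000)
W1, b1 = 0.01 * rng.standard_normal((128, 40_000)), np.zeros(128)
W2, b2 = 0.01 * rng.standard_normal((64, 128)), np.zeros(64)
W3, b3 = 0.01 * rng.standard_normal((2, 64)), np.zeros(2)

h1 = relu(W1 @ x + b1)          # first hidden layer: weights, bias, activation
h2 = relu(W2 @ h1 + b2)         # second hidden layer
logits = W3 @ h2 + b3           # output layer
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: two class probabilities
print(probs)
```

Training consists of adjusting those weights and biases until the output probabilities match the SME's answers on known examples.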
Convolution Neural Network (CNN) is often used for image recognition, for object classification or detection.
Some features are invariant. Location invariant means it doesn't matter where the object is located within the photo. Color invariant means it does not matter what color the object is, so the model can work with black-and-white or grayscale photos.
For example, for facial recognition, earlier layers are focused on identifying edges, and later layers identify facial features like eyes, nose and mouth.
Image recognition is used in self-driving cars, drones for power line maintenance or crop inspection, social media, video surveillance, medical image diagnosis, car racing, and determining the ripeness of fruits and vegetables.
CNN is also used for auto-encoding. This takes detailed photos, compresses them, and then decodes them back to something similar. It can take weeks to train a model with a million photo images.
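As a sketch of the idea, here is a minimal auto-encoder, assuming PyTorch; a production model would use convolutional layers and train on real photos, which is where the weeks of training time come in.

```python
import torch
import torch.nn as nn

# Squeeze a flattened image through a small "bottleneck" (encode),
# then reconstruct something similar on the way back out (decode).
encoder = nn.Sequential(nn.Linear(40_000, 64), nn.ReLU())
decoder = nn.Sequential(nn.Linear(64, 40_000), nn.Sigmoid())
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))
loss_fn = nn.MSELoss()

photos = torch.rand(32, 40_000)   # stand-in batch of 200x200 grayscale images
for step in range(100):
    opt.zero_grad()
    reconstruction = decoder(encoder(photos))
    loss = loss_fn(reconstruction, photos)  # how close is the decode to the original?
    loss.backward()
    opt.step()
```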
Recurrent Neural Network (RNN) is focused on time sequence.
This is useful for predicting sequences of letters or words. However, since the mathematics involve a long sequence of multiplications, gradients will either shrink toward zero or grow toward infinity; this is known as the "vanishing gradient problem".
The solution is "Long Short Term Memory" (LSTM) cells. Basically, the model selectively remembers information from previous steps, which reduces the number of multiplications.
RNNs need to know related words. For example: men-women, king-queen, walking-walked, swimming-swam, Spain-Madrid. These are referred to as "embeddings", which are stored in the hidden layers for quick lookup.
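Here is a toy sketch of how embeddings capture the classic king-queen relationship. The three-dimensional vectors below are invented for illustration; real embeddings have hundreds of dimensions, learned from text.

```python
import numpy as np

emb = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "queen": np.array([0.8, 0.1, 0.9]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    """Similarity of two embeddings, ignoring their lengths."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The classic analogy: king - man + woman should land near queen.
target = emb["king"] - emb["man"] + emb["woman"]
best = max(emb, key=lambda w: cosine(emb[w], target))
print(best)   # queen
```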
Generative Adversarial Networks (GAN) are used to generate fake photos to train other models.
Sometimes you do not have enough photos in each category for training, so you can generate fake images to supplement them. Noise is fed into a "Generator" model, and the results are evaluated by a "Discriminator" model, which compares the fakes with real photos. Repetition allows each model to improve, so that the fake photos become realistic enough for training purposes.
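Below is a minimal sketch of that Generator/Discriminator loop, assuming PyTorch, with a toy one-dimensional distribution standing in for real photos; none of this is from the session.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # Generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # Discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" samples: N(3, 0.5)
    fake = G(torch.randn(64, 8))            # Generator turns noise into fakes

    # Discriminator learns to label real as 1, fake as 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator learns to make the Discriminator call its fakes real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())   # should drift toward 3.0
```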
The death of the one-size-fits-all cloud: The mainstreaming of multi-arch
Elise Spence and Drew Thorstensen of IBM Power Systems for Software Defined Cloud Infrastructure presented this topic. The session covered IBM Cloud Private and the multiple architectures supported by Docker and Kubernetes.
There are actually six different architectures supported for Docker containers.
While containers are "portable" between systems, the binaries are typically only written for a single architecture, typically Linux-x86 or Windows-x86, and won't run on POWER or IBM Z.
The solution is to create a multi-arch manifest file, and port all the binaries to all of these different architectures. This way, when the containerized application is run on POWER, the manifest will identify the POWER-based binaries.
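For illustration, here is a hypothetical, abbreviated manifest list for an image published for three architectures; the digests are placeholders, and a real manifest also carries size fields for each entry.

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:aaaa...",
      "platform": { "architecture": "amd64", "os": "linux" }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:bbbb...",
      "platform": { "architecture": "ppc64le", "os": "linux" }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:cccc...",
      "platform": { "architecture": "s390x", "os": "linux" }
    }
  ]
}
```

When a client pulls the image, the registry matches the client's platform against this list and serves the right per-architecture image.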
Introduction to IBM Cloud Object Storage (powered by Cleversafe)
Before 2015, IBM offered two "Object Storage" products: IBM Spectrum Scale and IBM Spectrum Archive, and I was constantly having to compare and contrast IBM products to Cleversafe.
Not any more! With the IBM acquisition of Cleversafe, IBM now offers all three!
This session explained all of the features and functions of IBM Cloud Object Storage System, available as software, as pre-built systems, including a VersaStack CVD, and as Storage-as-a-Service (STaaS) in the IBM Cloud.
(IBM renamed Cleversafe DSnet to "IBM Cloud Object Storage System". I joked that if IBM ever acquired Coca-Cola, they would probably rename their signature soft drink as the "Brown Carbonated Sugar Liquid", or BroCarb SugarLiq for short!)
I provided a general overview, as well as the latest features of Concentrated Dispersal Mode and Compliance Enabled Vaults.
You can follow along with Twitter hashtag #IBMtechU, or follow me at @az990tony.
Last year, Hurricanes Harvey, Irma, Jose, and Maria ravaged various parts of North America and the Caribbean. My topic on Business Continuity and Disaster Recovery (BC/DR) was well attended. I have been working in BC/DR for most of my career, including the "High Availability Center of Competency", or HACOC for short.
However, natural disasters like hurricanes, tornadoes, forest fires and floods represent less than 20 percent of all disasters. The majority of disasters, nearly 75 percent, arise from electrical power outages, human error, system failure and ransomware.
The seven tiers of disaster recovery were developed by a group of IBM customers back in the 1980s, and have stood the test of time. I recently published an article in IBM Systems Magazine (January/February 2018) based on this presentation.
Cloud storage comes in four flavors: persistent, ephemeral, hosted, and reference. The first two I refer to as "Storage for the Computer Cloud" and the latter two I refer to as "Storage as the Storage Cloud".
I also explained the differences between block, file and object access, and why different Cloud storage types use different access methods.
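As a rough sketch of two of those access methods, the Python below contrasts file access (open a path in a directory hierarchy, read in place) with object access (fetch a whole object by key over HTTP, S3-style; objects are replaced, not updated in place). The mount point, endpoint and bucket names are hypothetical; block access works below this level, on raw volumes, and is not shown.

```python
import http.client

# File access: navigate a directory hierarchy and read the file in place
# (NFS/SMB work this way; the mount path here is hypothetical).
with open("/mnt/nfs_share/reports/q1.csv") as f:
    header = f.readline()

# Object access: fetch a whole object by key over HTTP.
# The endpoint and bucket are hypothetical placeholders.
conn = http.client.HTTPSConnection("s3.example.com")
conn.request("GET", "/mybucket/reports/q1.csv")
body = conn.getresponse().read()
```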
Finally, I covered some Hybrid Cloud Storage configurations, showing how a combination of traditional IT, on-premise local private cloud, off-premise dedicated private cloud and public cloud can be combined to provide added value.
Reporting and Monitoring: How to Verify your Storage is Being Used Efficiently
It is hard to believe that it was over 15 years ago that I was the chief architect for the software we now call IBM Spectrum Connect, Spectrum Control and Storage Insights. There are a variety of editions and bundles for this product, but my focus in this talk was on the advanced storage analytics found in IBM Virtual Storage Center and IBM Spectrum Control Advanced Edition.
I covered three use cases:
What storage tier to put your workload in, and how to move existing data into a faster or slower tier to meet business requirements and IT budgets.
For steady state environments, how to re-balance storage pools within a single tier to keep things even for optimal performance.
When it is time to decommission storage, how to transform volumes from one storage pool to another without downtime or outages.
Special thanks to Bryan Odom for his help in updating this presentation.
Spectrum Virtualization Data Reduction Pools 101
Barry Whyte, IBM Master Inventor and ATS for Storage Virtualization for the Asia Pacific region, presented on how Data Reduction Pools were implemented in version 8.1.2 of Spectrum Virtualize, the software in the latest IBM SAN Volume Controller (SVC), IBM Storwize products, and IBM FlashSystem V9000.
Basically, rather than saying we "re-wrote" the code, we prefer softer euphemisms: the code was "re-imagined" or, my favorite lately, "re-factored". Legacy Storage Pools will continue to be supported, but IBM anticipates that people will transition over time to the new Data Reduction Pools (DR Pools).
Like Legacy Storage Pools, the new DR Pools support a mix of Fully-allocated, Thin-Provisioned, and Compressed-Thin volumes. IBM has made a statement of direction that it will offer a Data Deduplication feature in the future, but only on the new DR Pools.
While DR Pools are available today with version 8.1.2, there are a few restrictions. There is a limit of four DR Pools per cluster, and the total capacity of each pool depends on the extent size and the number of I/O groups configured. Some of the migration methods developed for Legacy Storage Pools are not available, and in reality don't make sense in the new DR Pool scheme. Child Pools are not supported either.
One of the big improvements that DR Pools offer is in the area of compression. With Legacy Storage Pools, CPU cores were dedicated for compression, so they were either under-utilized or overwhelmed. With DR pools, all CPU cores can be used for either I/O or compression, which potentially can increase performance by up to 40 percent!
After the sessions, IBM had its "Solution Center Reception". This is a chance to relax and unwind after a long day, with food and drink, and various sponsors in booths to explain their latest offerings.
This is Katie Thacker from [FIT]. In March 2018, FIT was recognized as IBM’s Top Strategic Service Provider of the year!
These are Elizabeth Krivan and Kelly Bouchard, two recently-hired IBM storage sellers. They attended my sessions at the IBM Technical University in New Orleans last October, so it was good to see them again at my sessions here in Orlando.
You can follow along with Twitter hashtag #IBMtechU, or follow me at @az990tony.
This week, I am in Orlando, Florida for the [IBM Technical University], with a focus on IBM storage, IBM Z mainframes and IBM Power servers. Here is my recap of the keynote sessions on Day 1.
Art Beller, IBM Vice President of WW Systems Technical Sales
Art Beller, my third-line manager, kicked off the event. He explained that with [Artificial Intelligence], or AI for short, we are entering the "age of the incumbent". All across industries, the companies that have established dominance over the decades have the most data to get value from.
Kathryn Guarini, IBM Vice President Research Strategy
Kathryn provided an overview of the latest news on AI. Over 700 students at MIT, and 1,000 students at Stanford University, have signed up for "Intro to AI" classes. There are over 30,000 AI-related jobs in IT today. The investment in AI is 10 times more than it was just four years ago.
Kathryn explained there are three levels of AI: Narrow, Broad, and General. Narrow AI finally works, such as face recognition or speech-to-text translation. Broad AI is still a ways out, and General AI is not expected until year 2050.
An area of research is to "learn more with less". For example, if you train a photo image recognition model to identify different species of dogs, can you extend some of this learning to recognize different cats? This is often referred to as "Transfer Learning".
Cyber-criminals are already using AI, and if they can infiltrate AI training models, they can introduce some scary scenarios. The next cyber battlefield will be AI vs. AI.
AI results need to be "Explainable", both in the training and debugging phases, as well as the inference/deployment phases. We need to detect and eliminate human biases, and rank different models on their fairness.
Kathryn gave some real examples:
Medical Sieve: An MRI scan captures over 10,000 images. Through AI, the top 25 most important images can be identified, making a doctor's job easier in identifying tumors.
Cancer Research: There are over 800 billion DNA base pairs to evaluate for different cancers, combined with 723 million published articles of relevant research. AI can help sort this out, matching the best research to the appropriate type of cancer.
Banking Regulations: There are over a million compliance documents, and some banks have more than 10,000 employees focused on enforcing compliance. About 10 percent of these compliance documents change every year, making this a moving target.
Fraud Detection: There are too many "false positives" in today's algorithms for suspicious spending behavior. AI can help identify this better.
Video Highlights: AI can be used to generate movie trailers or sports highlights by identifying the most relevant portions of a movie or sporting event.
Reduce Air Pollution: China is investigating the use of AI to reduce air pollution in its country. Large cities like Beijing are particularly over-polluted.
Hillery Hunter, IBM Fellow and Director of Accelerated Cognitive Infrastructure at IBM Research
AI takes Terabytes of information, both structured and unstructured data, to develop a model that is very small, perhaps a few MB or GB.
The four steps are: identify your data sources, do some data preparation, train your model, and then infer using that model. Your data sources are stored in a Capacity Tier (often referred to as Data Lake). Inference must be done quickly, so a Performance tier is needed for that phase.
In some cases, data can't move, so for those situations, we need "Federated AI" where we can combine results from different systems.
IBM has added Distributed Deep Learning (DDL) to its PowerAI set of libraries. To estimate "Click-Thru Rate", a typical approach with 4.2 billion training examples took 70 minutes. With PowerAI DDL, this was reduced to 91 seconds. In another example, training that took nine days was reduced to four hours.
Lastly, Hillery mentioned "in-memory computing". Rather than reading data in from memory, and performing some computation on it, this new approach does part of the compute processing on the memory chip itself, eliminating a lot of data transfers.
Clod Barrera, IBM Distinguished Engineer and Chief Technical Strategist for storage
In previous years, IBM Technical University would offer brand-specific keynote sessions for IBM Z, IBM Power and IBM Storage. However, these were in the same time slot, so you could only see one of them. This year, IBM Storage was put into a different slot, so people could hear about their server of choice, and then also listen to the storage keynote.
Clod gave a state of the industry related to different storage media. For Flash, for example, he explained that Phase Change Memory is being developed, using the difference between amorphous and crystalline states to represent ones and zeros.
Tape is also seeing a resurgence. In 2005, Microsoft declared tape dead. Today, Microsoft Azure is a big fan of tape to store data at reduced cost. Tape is 20 times less expensive than disk.
Clod summarized his talk by stating the key areas of storage development:
Optimizing for Artificial Intelligence
Automation for Security and Privacy
Data Governance and Management
You can follow along this week with Twitter hashtag #IBMTechU, or follow me at @az990tony.
The New Orleans event was a five-day event, but I had to leave Wednesday evening for other meetings, so I missed the last two days.
I do plan to be there all of next week in Orlando. Look for me at one of my sessions, during the breaks, the Solutions Reception on Monday evening, the Poster Session on Tuesday evening, or Universal Studios event on Thursday evening.
You can follow along with Twitter hashtag #IBMtechU, or follow me at @az990tony.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
(FTC Disclosure: I work for IBM. This blog post can be considered a "paid celebrity endorsement" of the IBM Z and IBM storage products mentioned below.)
DS8880 R8.3.3 Enhancements
Back in 2015, IBM introduced the [DS8880 models] of the DS8000 family. Sales drastically increased, in part because IBM re-designed the systems to fit a standard 19-inch wide rack, rather than the 33-inch wide custom sizes used before. Many cloud service providers (CSP) and managed service providers (MSP) require 19-inch standard rack configurations.
To meet client requirements, the newest IBM mainframes, including Z14 model ZR1 and LinuxONE Rockhopper II, are now following the same 19-inch rack size!
IBM DS8880 models now have enhanced support for zHyperlink connections. Clients with existing 6-core DS8884/F or 8-core DS8886/F models can upgrade to add more cores for zHyperlink connectivity.
(The announcement included a table mapping the number of cores per CEC to the maximum number of zHyperlink connections.)
The zHyperlink supports both 40-meter and 150-meter cables. This allows applications like DB2 to read data with substantially lower latency than traditional FICON attachment.
For IBM z/OS clients, the Transparent Cloud Tiering feature allows migration of data directly from DS8000 storage systems to the cloud. This eliminates the need to migrate data through the IBM Z itself, consuming MIPS and FICON bandwidth, out to a tape or virtual tape system. IBM now offers 10GbE cards for the DS8880, providing faster throughput than the 1GbE cards previously available.
IBM Spectrum Scale v5.0 for IBM Elastic Storage Server
IBM Spectrum Scale v5.0 was available as software last year, and now is available as a Software PID for Elastic Storage Server hardware.
The new version introduces per-drive licensing with two editions: Data Access edition and Data Management edition. Here are highlights of some of the features:
Enhancements to GUI usability, including managing file systems between ESS and non-ESS storage
Audit File Logging (Data Management Edition only) for Open, Close, Destroy (Delete), Rename, Unlink, Remove Directory, Extended Attribute change, Access Control List (ACL) change
Enhancements to Active File Management, providing WAN-caching for multi-site deployments
Independent KPMG certification will be done for Spectrum Scale v5.0 on ESS for the "Immutability" feature. Some people refer to this as WORM, Government Compliance, Tamperproof, or Non-Erasable, Non-Rewriteable (NENR) protection
Enhancements to Transparent Cloud Tiering, providing archive of less-active data to IBM Cloud Object Storage, IBM Cloud, or Amazon S3.
Certification for analytics on both x86 and POWER platforms: Hortonworks Data Platform (HDP) v2.6, and Ambari v2.5
Improved I/O performance for many small and large block size workloads simultaneously, including a 4 MB default block size with variable sub-block size based on block size choice
Spectrum Scale 5.0 is incorporated into "Elastic Storage Server Solution Release 5.3". It is unfortunate that the numbering is different. Existing ESS clients can download the new ESS 5.3 code from IBM FixCentral today, and starting next week or so, new Elastic Storage Servers will ship with ESS solution release 5.3 pre-installed.
The TS4500 tape library now supports mixed media, with both TS1100 and LTO tape drives in the same library. If you are using Library-Managed Encryption (LME) with LTO drives and cartridges, then IBM Security Key Lifecycle Manager is required as the key manager.