This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years with IBM Systems, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communications/CSI market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist during his time in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private; he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
This week, I am in Orlando, Florida for the [IBM Technical University], with focus on IBM storage, IBM Z mainframes and IBM Power servers. This is my recap of afternoon breakout sessions on Day 2.
Spectrum NAS 101 and key use cases
Chris Maestas presented IBM's latest addition to the Spectrum Storage family of Software-Defined Storage. Spectrum NAS was written from scratch in C/C++, instead of building on open source code like Samba. It supports both NFS and SMB protocols.
Like IBM Cloud Object Storage, the Spectrum NAS software is shipped with the operating system, so you have a single ISO to run everything. You start with four nodes and can grow capacity and performance as needed by adding more nodes. All nodes have identical roles.
All of the storage is internal. Spectrum NAS uses DRAM memory, NVMe-based Solid State Drives (SSD), and spinning disk HDD. The NVMe drives must support at least five Drive Writes per Day (DWPD).
Each Spectrum NAS node can handle 2,000 connections, and up to 4,000 connections during fail-over processing. With 10GbE bandwidth, you can migrate 100 TB/day from other NAS devices to Spectrum NAS. If you want to try out Spectrum NAS yourself, there is a 60-day free trial offer now available. There are a collection of videos on the [Spectrum NAS YouTube channel] to walk you through the installation process.
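As a rough sanity check on that 100 TB/day migration figure (my own back-of-the-envelope arithmetic, not from the session), a sustained 10 Gbit/s link works out to about that order of magnitude:

```python
# How much data can a sustained 10 Gbit/s link move in one day?
link_gbits_per_sec = 10                          # 10GbE line rate
bytes_per_sec = link_gbits_per_sec * 1e9 / 8     # 1.25 GB/s
seconds_per_day = 24 * 60 * 60

tb_per_day = bytes_per_sec * seconds_per_day / 1e12
print(f"{tb_per_day:.0f} TB/day at line rate")   # 108 TB/day
```

In practice protocol overhead and source-side read speeds eat into that, which is why 100 TB/day is a reasonable planning number rather than the theoretical 108.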
Clients are Hyper for Hyperconverged
Marc Richardson and Bruce Jones, both from IBM Cognitive Systems, presented this client case study on successful deployment of IBM Hyperconverged Systems powered by Nutanix, often referred to as the "IBM CS" models of the POWER server line. They covered three use cases:
Modernize to Private Cloud
IBM CS models use the Nutanix Acropolis Hypervisor (AHV) to run Ubuntu and CentOS little-Endian virtual machines on POWER. The speakers claimed that they can run 50 percent faster, and 88 percent more workloads per core, than traditional x86 methods. IBM has made a statement of direction that IBM CS models will support AIX 7.2 virtual machines later this year.
The IBM CS models can also run IBM Cloud Private, a collection of software that supports Docker and Kubernetes.
Simplify the Data Center
The client was not happy with the high prices of their external, high-end storage systems. When you add another IBM CS model to the cluster, you get more storage capacity and CPU capability at the same time, in lock step. What could be simpler?
Infrastructure for Modern Data Workloads
IBM CS models can run traditional Db2 and WebSphere applications. The client also reduced their costs by switching from expensive Oracle databases to open source databases like MongoDB and EnterpriseDB Postgres.
I was honored to be selected for this week's poster session. I was poster 16, explaining the What, Why and How of IBM Cloud Object Storage. Here I am posing with my colleague Heather Allen, IBM.
Kelly Groff, IBM FlashSystem, had poster 15 on how the embedded compression on the latest FlashSystem 900 models has almost no performance impact. Jeff Barnett, IBM, had poster 14 for IBM's Pay-as-you-grow Storage Utility Pricing.
Barry Whyte drew large crowds with his poster 13 on NVMe. Andy Kutner, IBM, had poster 11 on IBM Cloud Object Storage.
Fahima Zamir, IBM, had poster 29 on the VersaStack solution, which combines best-of-breed x86 servers and switches from Cisco with IBM storage into a converged system. Sharie Mims is from VSS, an IBM Business Partner.
You can follow along with Twitter hashtag #IBMtechU, or follow me at @az990tony.
This week, I am in Orlando, Florida for the [IBM Technical University], with focus on IBM storage, IBM Z mainframes and IBM Power servers. Here is my recap for the keynote sessions on Day 1.
Art Beller, IBM Vice President of WW Systems Technical Sales
Art Beller, my third-line manager, kicked off the event. He explained that with [Artificial Intelligence], or AI for short, we are entering the "age of the incumbent". All across industries, the companies that have established dominance over the decades have the most data to get value from.
Kathryn Guarini, IBM Vice President Research Strategy
Kathryn provided an overview of the latest news on AI. Over 700 students at MIT, and 1,000 students at Stanford University, have signed up for "Intro to AI" classes. There are over 30,000 AI-related jobs in IT today. The investment in AI is 10 times more than it was just four years ago.
Kathryn explained there are three levels of AI: Narrow, Broad, and General. Narrow AI finally works, such as face recognition or speech-to-text translation. Broad AI is still a ways out, and General AI is not expected until around the year 2050.
An area of research is to "Learn more with less". For example, if you train an image-recognition model to identify different breeds of dogs, can you extend some of that learning to recognize different cats? This is often referred to as "Transfer Learning".
Cyber-criminals are already using AI, and if they can infiltrate AI training models, they can introduce some scary scenarios. The next cyber battlefield will be AI vs. AI.
AI results need to be "Explainable", both in the training and debugging phases, as well as the infer/deployment phases. We need to detect and eliminate human biases, and rank different models on their fairness.
Kathryn gave some real examples:
Medical Sieve: An MRI scan captures over 10,000 images. Through AI, the top 25 most important images can be identified, making a doctor's job easier in identifying tumors.
Cancer Research: There are over 800 billion DNA base pairs to evaluate for different cancers, combined with 723 million published articles of relevant research. AI can help sort this out, matching the best research to the appropriate type of cancer.
Banking Regulations: There are over a million compliance documents, and some banks have more than 10,000 employees focused on enforcing compliance. About 10 percent of these compliance documents change every year, making this a moving target.
Fraud Detection: There are too many "false positives" in today's algorithms for suspicious spending behavior. AI can help identify this better.
Video Highlights: AI can be used to generate movie trailers or sports highlights by identifying the most relevant portions of a movie or sporting event.
Reduce Air Pollution: China is investigating the use of AI to reduce air pollution in its country. Large cities like Beijing are particularly over-polluted.
Hillery Hunter, IBM Fellow and Director of Accelerated Cognitive Infrastructure at IBM Research
AI takes Terabytes of information, both structured and unstructured data, to develop a model that is very small, perhaps a few MB or GB.
The four steps are: identify your data sources, do some data preparation, train your model, and then infer using that model. Your data sources are stored in a Capacity Tier (often referred to as Data Lake). Inference must be done quickly, so a Performance tier is needed for that phase.
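The four steps above can be sketched as a simple pipeline. This is a hedged illustration of the workflow, not an IBM API; all the function names are mine:

```python
# Minimal sketch of the four-step AI workflow:
# identify data sources -> prepare -> train -> infer.

def identify_sources():
    # Capacity tier / data lake: raw, mixed-quality records
    return [" Cat ", "dog", None, "CAT", "bird", "dog "]

def prepare(records):
    # Data preparation: drop missing values, normalize text
    return [r.strip().lower() for r in records if r]

def train(samples):
    # "Training" here is just counting label frequencies --
    # a stand-in for a real model-fitting step
    model = {}
    for s in samples:
        model[s] = model.get(s, 0) + 1
    return model

def infer(model, query):
    # Performance tier: fast lookup against the trained model
    return model.get(query, 0)

model = train(prepare(identify_sources()))
print(infer(model, "cat"))  # 2
```

The point of the sketch is the shape of the flow: the bulky, messy data lives on the capacity tier up front, while the artifact you deploy for inference is small and fast to query.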
In some cases, data can't move, so for those situations, we need "Federated AI" where we can combine results from different systems.
IBM has added Distributed Deep Learning (DDL) to its PowerAI set of libraries. To estimate "Click-Thru Rate", a typical approach with 4.2 billion training examples took 70 minutes. With PowerAI DDL, this was reduced to 91 seconds. In another example, training that took nine days was reduced to four hours.
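For context (my own arithmetic, not figures from the talk), both DDL examples work out to roughly a 45x-55x speedup:

```python
# Speedup factors implied by the two PowerAI DDL examples
click_thru = (70 * 60) / 91     # 70 minutes down to 91 seconds
training   = (9 * 24) / 4       # nine days down to four hours

print(f"Click-thru example: {click_thru:.0f}x faster")  # 46x
print(f"Training example:   {training:.0f}x faster")    # 54x
```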
Lastly, Hillery mentioned "in-memory computing". Rather than reading data in from memory, and performing some computation on it, this new approach does part of the compute processing on the memory chip itself, eliminating a lot of data transfers.
Clod Barrera, IBM Distinguished Engineer and Chief Technical Strategist for storage
In previous years, IBM Technical University would offer brand-specific keynote sessions for IBM Z, IBM Power and IBM Storage. However, these were in the same time slot, so you could only see one of them. This year, IBM Storage was put into a different slot, so people could hear about their server of choice, and then also listen to the storage keynote.
Clod gave a state of the industry related to different storage media. For Flash, for example, he explained that Phase Change Memory is being developed, using the difference between amorphous and crystalline states to represent ones and zeros.
Tape is also seeing a resurgence. In 2005, Microsoft had declared tape was dead. Today, their Microsoft Azure is a big fan of tape to store data at reduced cost. Tape is 20 times less expensive than disk.
Clod summarized his talk by stating the key areas of storage development:
Optimizing for Artificial Intelligence
Automation for Security and Privacy
Data Governance and Management
You can follow along this week with Twitter hashtag #IBMTechU, or follow me at @az990tony.
Last year, Hurricanes Harvey, Irma, Jose, and Maria ravaged various parts of North America and the Caribbean. My topic on Business Continuity and Disaster Recovery (BC/DR) was well attended. I have been working in BC/DR for most of my career, including the "High Availability Center of Competency", or HACOC for short.
However, natural disasters like hurricanes, tornadoes, forest fires and floods represent less than 20 percent of all disasters. The majority of disasters, nearly 75 percent, arise from electrical power outages, human error, system failure and ransomware.
The seven tiers of disaster recovery were developed by a group of IBM customers back in the 1980s, and have stood the test of time. I recently published an article in IBM Systems Magazine (January/February 2018) based on this presentation.
Cloud storage comes in four flavors: persistent, ephemeral, hosted, and reference. The first two I refer to as "Storage for the Computer Cloud" and the latter two I refer to as "Storage as the Storage Cloud".
I also explained the differences between block, file and object access, and why different Cloud storage types use different access methods.
Finally, I covered some Hybrid Cloud Storage configurations, showing how traditional IT, on-premises local private cloud, off-premises dedicated private cloud, and public cloud can be combined to provide added value.
Reporting and Monitoring: How to Verify your Storage is Being Used Efficiently
It is hard to believe that it was over 15 years ago that I was the chief architect for the software we now call IBM Spectrum Connect, Spectrum Control and Storage Insights. There are a variety of editions and bundles for this product, but my focus on this talk was on the advanced storage analytics found in IBM Virtual Storage Center and IBM Spectrum Control Advanced Edition.
I covered three use cases:
What storage tier to put your workload in, and how to move existing data into a faster or slower tier to meet business requirements and IT budgets.
For steady state environments, how to re-balance storage pools within a single tier to keep things even for optimal performance.
When it is time to decommission storage, how to transform volumes from one storage pool to another without downtime or outages.
Special thanks to Bryan Odom for his help in updating this presentation.
Spectrum Virtualization Data Reduction Pools 101
Barry Whyte, IBM Master Inventor and ATS for Storage Virtualization for the Asia Pacific region, presented on how Data Reduction Pools were implemented in version 8.1.2 of Spectrum Virtualize. This software runs in the latest IBM SAN Volume Controller (SVC), IBM Storwize products, and IBM FlashSystem V9000.
Basically, rather than saying we "re-wrote" the code, we prefer softer euphemisms like the code was "re-imagined" or, my favorite lately, "re-factored". Legacy Storage Pools will continue to be supported, but IBM anticipates that people over time will transition to the new Data Reduction Pools (DR Pools).
Like Legacy Storage Pools, the new DR Pools also support a mix of Fully-allocated, Thin-Provisioned, and Compressed-Thin volumes. IBM has made a statement of direction that it will offer Data Deduplication feature in the future, but these will only be on the new DR Pools.
While DR Pools are available today with version 8.1.2, there are a few restrictions. There is a limit of four DR Pools per cluster, and the amount of total capacity of each pool depends on the extent size and number of I/O groups configured. Some of the migration methods developed for Legacy Storage Pools are not available, and in reality don't make sense in the new DR pool scheme. Child Pools are not supported either.
One of the big improvements that DR Pools offer is in the area of compression. With Legacy Storage Pools, CPU cores were dedicated for compression, so they were either under-utilized or overwhelmed. With DR pools, all CPU cores can be used for either I/O or compression, which potentially can increase performance by up to 40 percent!
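As a rough illustration of why compression matters for data footprint reduction, here is a quick sketch using Python's zlib. To be clear, zlib is only a stand-in for demonstration; it is not the algorithm Spectrum Virtualize uses:

```python
import zlib

# Compressible data: repetitive text, like many log or database pages
original = b"transaction_id=0000 status=OK " * 1000
compressed = zlib.compress(original)

ratio = len(original) / len(compressed)
print(f"{len(original)} -> {len(compressed)} bytes "
      f"(about {ratio:.0f}:1 reduction)")
```

Real-world ratios depend heavily on the data: already-compressed or encrypted data gains almost nothing, while databases and logs often compress several-fold, which is why pooling CPU cores for compression work pays off.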
After the sessions, IBM had its "Solution Center Reception". This is a chance to relax and unwind after a long day, with food and drink, and various sponsors in booths to explain their latest offerings.
This is Katie Thacker from [FIT]. In March 2018, FIT was recognized as IBM’s Top Strategic Service Provider of the year!
These are Elizabeth Krivan and Kelly Bouchard, two recently-hired IBM storage sellers. They attended my sessions at the IBM Technical University in New Orleans last October, so it was good to see them again at my sessions here in Orlando.
You can follow along with Twitter hashtag #IBMtechU, or follow me at @az990tony.
Last week, I presented at the "IBM TechU Comes to You" event in beautiful Nairobi, Kenya. This was a three-day event, so here is my recap of Day 3, Thursday Aug 4, 2016.
Business Continuity and Disaster Recovery for z Systems
I have been working in Business Continuity and Disaster Recovery my entire career at IBM, so when I was asked to give a "z Systems" mainframe slant to my standard BC/DR pitch, I was up to the challenge. IBM offers a complete set of solutions, and I presented best practices for each.
Data Protection, Management and Journey to the Cloud with IBM Spectrum Protect
This session was presented by Saumil Shah, IBM Spectrum Protect Sales Leader for Middle East, Turkey & Africa. I am glad that Saumil volunteered to cover IBM Spectrum Protect, as I already had six sessions on my plate for this week. My version tends to focus on the "What and How" of data protection, whereas Saumil focused instead on the "Why" of data protection: why should you protect data, and why should you use IBM Spectrum Protect instead of the various other software in the marketplace?
IBM Spectrum Virtualize - Understanding SVC, Storwize and the FlashSystem V9000
IBM Spectrum Virtualize is the new name for the code base shared by all of these products. I presented the latest features of SVC, Storwize and FlashSystem V9000 hardware models, as well as the latest software features.
How to combine the advantages of Storage Virtualization and Flash performance (the Turbocompression effect)
This session was presented by Dominique Salomon, IBM Certified IT Specialist Storage and European New Technology Introduction Leader. He works at the IBM Montpelier Briefing Center in France, a sister organization to the IBM Tucson Executive Briefing Center that I work in. The term "Turbocompression" was initially coined by his team in Montpelier to explain the combined benefits of Flash technology, Easy Tier automated sub-LUN tiering, and Real-time Compression.
I have to admit that the first time I heard this, I was skeptical. It sounded like a marketing gimmick to mention these together. However, once I saw the demo and the resulting numbers, I was convinced. IBM Easy Tier technology identifies and ranks which blocks are the busiest, and moves extents to the appropriate place. Real-time Compression can compress data in cache memory, flash and spinning disk, allowing more of the busiest blocks of data to reside in the fastest storage media. This means higher hit ratios for cache, lower latency for flash, and less wear-and-tear on the spinning disk drives.
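The Easy Tier idea of ranking extents by I/O "heat" and placing the hottest on the fastest media can be sketched in a few lines. This is a toy illustration of the concept, not IBM's actual algorithm:

```python
# Toy sub-LUN tiering: rank extents by access count ("heat"),
# then fill the flash tier with the hottest extents first.
io_counts = {            # extent id -> I/Os observed this interval
    "ext-a": 5000,
    "ext-b": 120,
    "ext-c": 9800,
    "ext-d": 40,
    "ext-e": 3100,
}
FLASH_SLOTS = 2          # capacity of the fast tier, in extents

ranked = sorted(io_counts, key=io_counts.get, reverse=True)
placement = {ext: ("flash" if i < FLASH_SLOTS else "hdd")
             for i, ext in enumerate(ranked)}

print(placement["ext-c"], placement["ext-d"])  # flash hdd
```

The real feature also dampens the ranking over time and throttles migrations so that the data movement itself does not hurt performance, but the core idea is this hot/cold sort.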
Storage Integration with OpenStack
While OpenStack is used by more than 60 percent of Cloud Service providers, it is used by fewer than 10 percent of the Fortune 500 corporations. This represents an excellent opportunity for IBM, which leads in having its storage products support this important open source interface.
IBM supports OpenStack Cinder interface for its block level devices, including DS8000 and XIV. IBM Supports OpenStack Swift for its object storage, including IBM Spectrum Scale, IBM Spectrum Archive, and IBM Cloud Object Storage System (formerly Cleversafe). IBM Spectrum Scale supports OpenStack Cinder, Swift, and Manila interfaces for a complete solution across volumes, files and objects.
Marlin Maddy, IBM Manager of Worldwide Systems Technical Events, served as master of ceremonies. He thanked the audience for attending, and drew names for prizes. This time these were Samsung "smart-watches".
Thursday evening, some people left, and the few of us remaining had dinner at the Intercontinental Hotel. I joined folks from the USA, Germany and the Middle East. I love our informal discussions! I learn so much listening to other points of view.
Last week, I presented at the "IBM TechU Comes to You" event in beautiful Nairobi, Kenya. This was a three-day event, so here is my recap of Day 2, Wednesday Aug 3, 2016.
IBM Spectrum Scale overview and update
This session was covered by Mack Kigada, IBM Executive Consultant for the "Executive Advisory Practice" portion of Systems Lab Services. This session explained the basic features of Spectrum Scale, including the latest features of version 4.2, and related Elastic Storage Server pre-built systems.
Software Defined Storage - IBM Spectrum Overview
This session was presented by Saumil Shah, IBM Spectrum Protect Sales Leader for Middle East, Turkey & Africa. Since SDS is an important topic, the conference coordinators schedule several speakers to present at different time slots, to give everyone a chance to hear the SDS message. Rather than using my same charts, Saumil used his own deck, which he customized based on his experience working in this region.
Flash and the Next Generation Data Center
This session was covered by Firat Ozturk, IBM FlashSystem Sales Leader for Middle East, Turkey & Africa. While IBM offers all-flash array versions of its DS8000, SVC and Storwize product lines, Firat focused on the IBM FlashSystem family, including the FlashSystem 900, FlashSystem V9000, and the new A9000/A9000R models.
According to IDC, Flash-based technologies are predicted to represent 50 percent of the storage capacity sold in 2018. Today it is about 10 percent, so that is a big leap. The primary driver, he feels, is new applications like Cloud and Mobile that are raising customer expectations for faster performance.
Which product should you get? Firat indicated that the FlashSystem 900 is ideal to boost the performance of specific applications, like Oracle or SAP HANA. The FlashSystem V9000 borrows all the code base from SVC and Storwize with Real-time compression ideal for OLTP and Database applications, while offering Storage Virtualization to protect your existing storage infrastructure investment. The FlashSystem A9000 and A9000R are targeted to Cloud deployments, as well as Server Virtualization and Virtual Desktop Infrastructure (VDI).
What is Big Data? Architectures and Practical Use Cases
I have been presenting this topic since 2013, but it still draws a new crowd every time. Based on my [2015 Presentation], I made some updates to reflect IBM's latest support for Spark, and the new POWER8 solution offerings.
Storage Tiering on z Systems: Less Management, Lower Costs, and Increased Performance
When I present Storage Tiering for distributed systems, I typically focus on Easy Tier feature of SAN Volume Controller, the Analytics-based storage optimization of Spectrum Control, and the Information Lifecycle Management (ILM) policies of Spectrum Scale and Spectrum Archive. This time, Glenn Anderson asked me to give this a "z Systems" slant, for a mainframe-oriented audience.
In this new version, I focused on Easy Tier on IBM DS8000 systems, Hierarchical Storage Management in DFSMShsm, and the new Class Transition features that were introduced initially with DFSMS OAM for objects, and are now extended to data sets.
Linux on IBM z Systems and its Participation in Open Source Ecosystem, including Blockchain
Wow! What a long title!
This session was presented by Holger Smolinski, IBM Senior Performance Analyst Linux and KVM on IBM z Systems from the Boeblingen, Germany Lab. Back in the late 1990s, Holger and I worked on porting Linux to the S/390 platform. I led a team to test all of the device drivers for IBM disk and tape storage systems, working with Holger and his team to fix the drivers and submit them to the Open Source Community, so that they would be incorporated formally into the latest Red Hat and SUSE distributions.
Holger gave quite an extensive overview of the entire Open Source ecosystem that runs on Linux on z Systems mainframes. Over 60 percent of new mainframe customers use the Linux on z Systems operating system, and the complete set of capabilities makes this quite practical.
One of the latest of these is [Blockchain], a new way to track transactions between organizations. The open source project for this is [HyperLedger]. Transactions are recorded into blocks that are encrypted with a hash code, which prevents tampering and fraud. These blocks are then chained together as transactions occur between organizations.
For example, if a product is manufactured in China, shipped over the Pacific Ocean by a shipping company, received at a port in the United States, processed by US Customs, then shipped via trucking company to the buyer, these all would be represented as transaction blocks chained together.
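The hash-chaining idea behind that supply-chain example is easy to sketch. Here is a minimal illustration using SHA-256; real HyperLedger blocks carry far more structure (endorsements, timestamps, Merkle trees), so treat this as the concept only:

```python
import hashlib
import json

def make_block(transaction, prev_hash):
    # Each block records a transaction plus the hash of the
    # previous block, so altering any earlier block breaks the chain
    payload = json.dumps({"tx": transaction, "prev": prev_hash},
                         sort_keys=True)
    return {"tx": transaction, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

chain = []
prev = "0" * 64  # genesis marker
for tx in ["manufactured in China", "loaded by shipping company",
           "received at US port", "cleared US Customs",
           "delivered by trucking company"]:
    block = make_block(tx, prev)
    chain.append(block)
    prev = block["hash"]

def valid(chain):
    # Recompute every hash and check each block links to the last
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or b != make_block(b["tx"], prev):
            return False
        prev = b["hash"]
    return True

print(valid(chain))  # True
```

Change the text of any block, even the first, and `valid()` returns False, which is exactly the tamper-evidence property that makes the chain trustworthy across organizations.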
Wednesday we had a free evening to explore on our own. Some of my colleagues went to an all-you-can-eat steakhouse for dinner, but I will get plenty of that on my upcoming trip to Sao Paulo, Brazil, so I went elsewhere.
Last week, I presented at the "IBM TechU Comes to You" event in beautiful Nairobi, Kenya. This was a three-day event, so here is my recap of Day 1, Tuesday Aug 2, 2016.
Opening Keynote Session
Once again, Marlin Maddy, IBM Manager of Worldwide Systems Technical Events, served as master of ceremonies. He arrived into Nairobi just a few hours earlier, and we were worried that one of us might have to jump in and take over if he had any delays in his flight schedule. Fortunately, he arrived and did a great job welcoming the audience.
Eric Jaoko, chief manager of Kenya's Rural Electrification Agency [REA], presented next. Back in 1973, the Kenyan government wanted to have all of its rural areas offering electrical service. Some 30 years later, in 2002, only 4 percent of the rural areas had achieved this. In 2006, the Kenyan government formed this new REA agency to accelerate the progress. By 2008, nearly 25 percent of rural areas were electrified. Currently (2016), they are now at 68 percent, including all primary schools (more than 20,000 across the country).
Eric mentioned that this success was due in part to their partnership with IBM for Information Technology. REA switched from Oracle to SAP applications on IBM Power systems with IBM Storwize V7000, resulting in lower costs, less power consumption, easier deployment and management, redundancy and high availability, scalability, and high-speed access to critical data. Not surprisingly, IBM's leadership in "Mobility" plays another key role, since these areas are rural and often connected only by cellular phone service.
REA employs both AIX and Linux on POWER operating systems, and uses OpenStack to manage both the server and storage components. PowerVM, PowerVC and PowerHA complete the solution to provide a more robust environment. REA found it was very easy to clone their SAP systems, which made it simple to test software upgrades without impacting their production environments.
The next speaker was IBM's own Glenn Anderson, IBM z Systems Consultant and Worldwide Technical Events Content Manager. His talk was titled "Think Outside the Cubicle" to emphasize that there are changes underfoot in the IT industry. Rather than focusing on IT as a cost to be reduced, enlightened CEOs are discovering that IT can be used to optimize value for their organization.
One trend that has changed drastically is what IBM refers to as "Systems of Engagement". To better connect with clients, customers and suppliers, organizations now create conversations on social media channels, listen and react to those conversations, building communities that allow them to better understand and serve their markets.
Another trend was "Two-speed IT", often called "Bimodal IT", which indicates that some projects should have "fast-track" status, streamlining the process of design, development and deployment for new innovations. This is in contrast to traditional "slower" projects for mission critical "Systems of Record" operations, like databases and Online Transaction Processing (OLTP).
The last trend he covered was the notion of "Cognitive Business", the use of self-learning, natural language processing to assist in business decision making. Glenn compared the old way to a static map that indicated "You Are Here". The new way is more like GPS, which indicates where you are, where you want to be, and the steps to get there.
(You might ask "Why do business leaders need such assistance?" First, business executives cannot ingest and comprehend the vast amount of data they need to make correct decisions, causing them to make less-than-optimal choices with limited information. Second, business leaders are often on the job only a few years, moving from one opportunity to another, and do not build the background of experience that a computer able to ingest millions of documents can achieve much more quickly. Third, business leaders are often prone to bias, surrounding themselves with ["yes-men"], unwilling to accept any information that contradicts their world view. Computers do not have that bias, and are capable of finding insights, trends and patterns that business leaders might not have considered.)
Software Defined Storage -- What? Why? How?
I was honored to be asked to be the keynote kick-off for the IBM Storage track of this conference. There is still much confusion over the concept of Software Defined Storage (SDS). While there are many different positions on this, IBM has adopted the IDC definition, which requires all three criteria to be met:
Solutions based on Industry-standard, off-the-shelf components.
Solutions that offer the complete set of storage features and functions, such as point-in-time copies, data footprint reduction, technical refresh migration, and remote replication.
Solutions that are offered in multiple ways, such as software-only, pre-built systems using industry-standard off-the-shelf components, and cloud-based services.
IBM's SDS offerings include all of the IBM Spectrum Storage family available as software-only, pre-built systems like SAN Volume Controller and XIV Gen3, and cloud-based services like IBM Cloud Managed Backup and Archive, and IBM Cloud Object Storage System (formerly Cleversafe).
IBM ranks #1 in the SDS marketplace, with over 40 percent market share. The advantage of IBM's approach is that it does not require a complete rip-and-replace of your existing IT infrastructure. IBM solutions can work with the servers and storage you already have in place! This allows for a smooth and graceful transition.
Cloud Computing Concepts and the Role of Infrastructure
This session was covered by Mack Kigada, IBM Executive Consultant for the "Executive Advisory Practice" portion of Systems Lab Services. Frankly, I think this should have been classified as "Cross-Brand" rather than "Storage", as it showed not just storage but also how servers and OpenStack participate in a complete Hybrid Cloud solution.
The new IBM FlashSystem A9000 GUI
This session was presented by Dominique Salomon, IBM Certified IT Specialist Storage and European New Technology Introduction Leader. He works at the IBM Montpelier Briefing Center in France, a sister organization to the IBM Tucson Executive Briefing Center that I work in.
When IBM was ready to launch its newest FlashSystem offering, which combines the low-latency IBM FlashCore technology from IBM FlashSystem 900 with the IBM Spectrum Accelerate software from XIV, they had to decide what Graphical User Interface [GUI] to deploy it with. The IBM development team had narrowed it down to three options:
Use the IBM XIV Gen3 GUI, which is installed client code that runs on a handful of select operating systems. This GUI is nine years old.
Adopt and modify the browser-based GUI used by all of the other IBM Storage systems like DS8000 and SAN Volume Controller. By using HTML5, AJAX and Dojo widgets, this newer approach eliminates Operating System and Java dependencies, and can run on desktops, laptops, tablets and smartphones. However, this technology is four years old.
Deploy a new GUI, adopting the latest techniques and methods, offering a new, simpler way to manage the new device.
The development team decided on the third option, and so Dominique spent the first half hour explaining what the IBM FlashSystem A9000 and A9000R systems are, and the last half showing a live demo connecting back to his systems in Montpellier, France.
IBM XIV, Spectrum Accelerate and the new IBM FlashSystem A9000
This session was covered by Maurice "Mo" McCullough, IBM Storage Technical Content Leader for IBM Systems Worldwide Technical Events. In retrospect, he admitted that he should have scheduled this session before Dominique's session above, which would have spared Dominique the time and questions spent explaining the IBM FlashSystem A9000, leaving more time to show the new GUI.
Mo first covered the newest model of the XIV Gen3 pre-built system, the model 314. It has double the cache memory and double the processing cores to drastically improve Real-time compression. Then, he explained IBM Spectrum Accelerate, available as either software you can deploy on your own x86 servers on-premises, or in cloud-based servers from IBM SoftLayer. Finally, Mo covered the A9000 and A9000R, the newest members of the IBM FlashSystem family that share features and capabilities with the XIV Gen3 and Spectrum Accelerate offerings.
Tuesday evening we had a welcome reception for all the attendees, staff and speakers. This was a great time to relax and meet everyone on a social level.
Last week, September 11-13, I was in Johannesburg for the IBM Technical University! The event was held at the Hyatt Regency in the Rosebank section of town. This event was focused on IBM Systems, including storage, Power systems, and IBM Z mainframe servers. Here is my recap for the third and final day:
What else can you use that data for? Adventures in Data Reuse
Did you know that IBM invented "Copy Data Management" in 1998? I do, of course, since I was one of the inventors! Originally developed for DFSMS on z/OS, there are now copy data management solutions for a multitude of operating systems, databases and applications.
This session covered IBM Spectrum Protect Snapshot, IBM Spectrum Protect Plus, and IBM Spectrum Copy Data Management.
Copies of production data are not just for data protection and disaster recovery. The copies can be reused for other IT or business purposes:
Testing and DevOps - After a copy of production is made, columns of databases containing sensitive, personally-identifiable information (PII) can be masked, scrambled or obfuscated, to keep them out of the prying eyes of testers and developers. IBM Spectrum Copy Data Management offers data masking features.
Reporting and Analytics - Running reports or analytics against production data can drastically impact performance and cache hit rate on storage devices. Making copies to other systems, and running reports and analytics elsewhere makes a lot of sense.
Hybrid Cloud - Why limit your copies to just your own data center? Copies of data can be sent to off-premises to perform DevOps, Reporting and Analytics in the cloud.
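As a rough illustration of the masking idea for the Testing and DevOps case (a minimal Python sketch, not how IBM Spectrum Copy Data Management actually implements it), sensitive columns can be replaced with deterministic tokens, so masked copies still join consistently while keeping the real PII out of sight:

```python
import hashlib

def mask_value(value, salt="demo-salt"):
    """Deterministically obfuscate a sensitive value.

    The same input always yields the same token, so joins across
    masked tables still line up, but the original PII is not exposed.
    """
    digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
    return "MASKED-" + digest[:8]

def mask_columns(rows, sensitive_columns):
    """Return a copy of the rows with the named columns masked."""
    return [
        {col: mask_value(val) if col in sensitive_columns else val
         for col, val in row.items()}
        for row in rows
    ]

# Hypothetical production rows, copied and masked for the test team.
production = [
    {"id": "1", "name": "Alice Smith", "ssn": "123-45-6789"},
    {"id": "2", "name": "Bob Jones",   "ssn": "987-65-4321"},
]
test_copy = mask_columns(production, {"name", "ssn"})
```

Commercial tools add format-preserving masking and policy controls on top of this basic idea, but the principle is the same: testers and developers work with realistic, consistent data that contains no real PII.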
Be Persistent in your Journey to Private Cloud
IBM offers persistent storage for IBM Cloud Private deployments. This includes IBM Spectrum Virtualize family of products, Spectrum Accelerate family of products, VersaStack converged systems, and DS8000 systems.
IBM Spectrum Access blueprints are available to deploy persistent storage for IBM Cloud Private software on VersaStack, POWER and IBM Z servers.
IBM Spectrum Connect provides the necessary interfaces for Kubernetes to claim persistent storage for Docker containers.
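As a rough sketch of what that claim looks like from the Kubernetes side, a container requests storage through a PersistentVolumeClaim that names a storage class backed by the array. The storage class name below ("spectrum-gold") is a hypothetical placeholder, not an actual Spectrum Connect service name:

```python
import json

# Minimal sketch of a Kubernetes PersistentVolumeClaim manifest,
# built as a Python dict and rendered as JSON. The storage class
# "spectrum-gold" is hypothetical; in practice it would map to a
# storage service defined by the provisioner.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "db-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "spectrum-gold",
        "resources": {"requests": {"storage": "20Gi"}},
    },
}

manifest = json.dumps(pvc, indent=2)
print(manifest)
```

Kubernetes matches the claim to the named storage class, the provisioner carves out a volume on the array, and the container mounts it like any other persistent volume.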
Is your data center ready for NVMe, NVMe-oF or FC-NVMe?
Initiated in 2011, the NVMe standard is relatively young. I covered its short history, why zero-copy protocols like FCP and RDMA can drastically reduce latency, and all the components needed for a complete end-to-end solution.
Inside All-Flash Arrays, you can use standard 12Gbps SAS to connect to SCSI-based Solid-State Drives (SSDs), or you can use the much faster PCIe bus at 32Gbps with NVMe-based drives.
NVMe provides for advanced parallelism, since flash is not mechanical and does not rely on the position of a read/write head over a platter as spinning disks do. Traditional SSDs pretend to be spinning disks, so they often process one command at a time to maintain the charade.
NVMe is designed to work only with flash devices, so it uses a streamlined set of 15 commands, versus the 34 commands SCSI needs to handle other storage media.
But having NVMe inside an All-Flash Array is not the end of the story. Rather than sending all of those SCSI commands across the network, only for some to be disregarded when they arrive, you can send the streamlined NVMe commands instead. NVMe over the network is available now: NVMe-oF offers support for Ethernet and InfiniBand, and FC-NVMe offers support for FCP.
The last stage is application exploitation from the host server. The industry still needs Operating System drivers, multipathing drivers, and applications that take advantage of NVMe. IBM anticipates this will occur later this year, and into 2019.
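The parallelism argument above can be sketched with a toy queue model. The queue counts and per-command latency below are made-up, illustrative numbers, not measured figures; the point is only that draining many queues concurrently divides completion time by the number of queues, which is why NVMe's deep, parallel queues suit flash so well:

```python
# Toy model: a legacy device draining one command queue serially,
# versus an NVMe-style device draining many queues in parallel.
# All numbers here are hypothetical and purely illustrative.

def drain_time_us(total_commands, per_command_us, queues):
    """Time to complete all commands if `queues` drain in parallel."""
    commands_per_queue = -(-total_commands // queues)  # ceiling division
    return commands_per_queue * per_command_us

total = 64_000       # outstanding commands (hypothetical workload)
latency = 10         # microseconds per command (hypothetical)

serial_style = drain_time_us(total, latency, queues=1)    # one queue
nvme_style   = drain_time_us(total, latency, queues=128)  # many queues
```

With one queue the workload takes 640 ms in this model; spread across 128 queues it takes 5 ms. Real devices are bounded by media and controller limits, but the direction of the effect is the same.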
IBM Storage Infrastructure Optimization (SIO) assessment
Ishmail Shaik, IBM Lab Services, presented an interactive peek at what an SIO assessment entails.
In 2005, I led a series of "Information Lifecycle Management" (ILM) studies for various clients, combining the methods from "disk studies" and "tape studies" that I had performed since the 1980s.
These ILM studies proved to be the ultimate win-win scenario, not only saving the clients millions of dollars, but often resulting in follow-on sales of IBM storage hardware, software and services.
Over those 18 months, I trained several IBM Systems Lab Services colleagues in the process. These studies formed the basis of "Storage Infrastructure Optimization" assessments launched officially in 2011.
The SIO assessment process has evolved a lot since I was last involved with it. Here are a few of the changes I noticed from his presentation:
Core Modules - No longer just focused on Lifecycle Management, SIO studies offer four additional modules: Modernize & Transform, Business Resiliency, Manage & Control, and New Workloads.
Data Collection - The biggest challenge back then was collecting data to support recommendations. I managed with in-person interviews and the few tools available at the time, collecting results into TCO spreadsheets, Visio diagrams and PowerPoint slides. Today, we have sophisticated data collection tools, including IBM Spectrum Control, Storage Insights, Arxview, and Butterfly.
Engagement Workshop - SIO now has incorporated "Design Thinking" methodology to help clients prioritize findings into a set of short-term, medium-term and long-term recommendations.
The three-day event ended with a closing session, hosted by Mario Franzone.
Last week, September 11-13, I was in Johannesburg for the IBM Technical University! The event was held at the Hyatt Regency in the Rosebank section of town. This event was focused on IBM Systems, including storage, Power systems, and IBM Z mainframe servers. Here is my recap for the second day:
Nutanix 101: Intro to Hyperconverged Infrastructure and Private Cloud on IBM Power Systems
I attended this based on the abstract for this session:
"Learn in this session why IBM has partnered with Nutanix around hyperconvergence, how this architecture can help drive simplicity, performance and cost efficiency into your IT landscape. You will get both a high level overview on Nutanix, as well as how IBM CS Series is using the Nutanix software to deliver a worldclass application platform, followed by a live demo to show you how Nutanix works."
Sadly, I felt the title and abstract were partially misleading.
Rui Gonclaves from Nutanix gave a nice overview of how Nutanix software can help drive simplicity and cost efficiency to x86 server deployments. It supports VMware, Hyper-V and its own version of Linux KVM called the Acropolis Hypervisor (AHV). Its PRISM software helps to provide one-click management convenience for a cluster of x86 servers.
Nutanix considers its software to be the value of the solution, and treats the servers it runs on as mere commodities. By partnering with IBM, Nutanix simply adds another hardware vendor to its roster. The only subtle reference to the new CS models was an IBM logo among the logos of Lenovo, HP, DellEMC, and Cisco UCS. Rui did not cover any details of the CS models, or their advantages over x86 servers.
(IBM, on the other hand, considers its hardware to be the value of the solution, and treats the applications as commodities. IBM Power servers are able to run open source databases like MongoDB and EnterpriseDB better. For example, a 3-node cluster of IBM CS822 servers (22-core models) was able to run more than twice the transactions per second (tps) per dollar than a comparable cluster of 24-core Dell CX630-10 machines.)
Rui finished his presentation 25 minutes early, so there would have been enough time to cover the CS models, or show a live demo, but that didn't happen either.
Save the World! Save your IT Budget with IBM Cloud Object Storage
All of the presenters at this conference were asked to come up with fun and quirky titles for their sessions. Since clients use IBM Cloud Object Storage (COS) to save large repositories of active archives, the phrase "Save the World!" has a double meaning.
IBM has clients with more than 100 PB deployments of IBM Cloud Object Storage, so the idea that you can "Save the world's amount of data" was not too outrageous.
IBM COS is relatively inexpensive, at a total cost of ownership that is up to 70 percent less expensive than traditional disk-based solutions. A lot of your data is probably static, stable, unstructured content ideal for low-cost storage with IBM COS, so the idea that you can save your IT budget wasn't outlandish either.
Discover advanced features & last announcements with IBM Spectrum Virtualize
When I saw this title, I was afraid it might overlap too much with my session "Dip your TOE in our Pool". Instead, Dominique Salomon from the IBM Client Experience Center in Montpellier, France, presented a great overview of the basic and advanced features of the Spectrum Virtualize family of products.
He covered automated tiering with IBM Easy Tier; data footprint reduction with Thin Provisioning, Compression and Deduplication; and Copy Services like FlashCopy and remote mirroring.
How big is your NAS? Sizing, Management, and Deployment
While I had fun coming up with fun and quirky titles for my sessions, the drawback is that they force people to read the abstracts to understand what will be covered in each session.
In this session, I covered IBM's three main NAS offerings: Spectrum Scale, Spectrum NAS, and IBM Cloud Object Storage with NAS gateways from Ctera Networks, Avere, Panzura, and Nasuni.
The rest of the session covered IBM's new File and Object Storage Design Engine (FOS-DE) studio, an online tool that helps decide which of the three NAS solutions is the best fit, and produces a rough-sketch configuration that meets a client's specific capacity and performance requirements.
The FOS-DE tool is available at no charge to all IBM employees, IBM Business Partners, and prospective clients.
I wasn't planning to give a live demo, but I ended ten minutes early and had a decent Wi-Fi connection, so I was able to demonstrate the FOS-DE studio with the remainder of my time slot.
Nightmares and Dreams: Manage your entire Storage Infrastructure with IBM Spectrum Control and Storage Insights
What keeps you up at night? That was the question that motivated the title of this session. I organized this topic into three segments:
Visibility - Can you even understand your storage infrastructure? IBM Storage Insights is available at no additional charge for IBM block storage devices, and can greatly enhance your visibility into your capacity growth, performance bottlenecks, and other vital insights.
Control - Reporting is not enough when you need to take action. IBM Spectrum Control Standard Edition, Spectrum Connect, and Copy Services Manager can help configure, provision and perform other actions needed on your storage infrastructure.
Automation - As data centers grow, the actions required often overwhelm existing IT staff. IBM Spectrum Control Advanced Edition adds analytics and automation.
Johannesburg is nine hours ahead of my home town in Tucson, Arizona. Jet lag hit me hard this second day, so I opted out of the evening activities, and got some much needed rest.
Last week, September 11-13, I was in Johannesburg for the IBM Technical University! The event was held at the Hyatt Regency in the Rosebank section of town. This event was focused on IBM Systems, including storage, Power systems, and IBM Z mainframe servers. Here is my recap for the first day:
Opening Keynote Session
The conference was opened with a warm welcome from Ronnie Moodley, IBM Executive for Systems Hardware. He explained that we live in a VUCA world. For those who have not heard this term before, it is a four-letter acronym that stands for four business challenges: Volatility, Uncertainty, Complexity, and Ambiguity.
Ronnie also mentioned the shifts in marketing, from the "four P's" (Product, Price, Place, Promotion) to the "four E's":
Experience - Clients no longer evaluate individual products alone, but also the services that come with them, the context in which they are used, the identity of the users, and other characteristics that provide a complete experience.
Exchange - With so many free, open-source alternatives, the question is not comparing the prices of competing products, but what you give up by choosing one option over another, often referred to as total cost of ownership (TCO) or "opportunity cost" in economic terms.
Everyplace - The Internet and cloud technologies now allow people to buy and use products practically anywhere. Having a bricks-and-mortar location on a busy street corner may no longer be a competitive advantage.
Evangelism - Old marketing methods relied on uni-directional promotion from corporate marketing teams. Today, social media, blogs, and word-of-mouth evangelism exert greater influence on purchase decisions.
The second segment was "The World is our Lab", by Kugendran Naidoo, IBM Research South Africa. Unlike some companies that consolidate all of their research to one location, IBM does research across the globe, with two locations in Africa (Nairobi, Kenya and here in Johannesburg, South Africa).
Dr. Naidoo explained that often research leads us into areas we weren't expecting. For example, an algorithm developed to detect black holes in space failed, but it turned out to be useful for detecting Wi-Fi hot spots.
The story begins back in 1974, when Stephen Hawking theorized that under certain circumstances, small black holes might "evaporate" and simultaneously emit radio signals. These hypothesized black holes were about the mass of Mount Everest, yet smaller than an atom. Soon after, the physicist and engineer John O'Sullivan set out to find these signals.
If these small black holes were evaporating, they would emit radio signals as they vanished. But because of their great distance from us, these signals would be hard to identify: they would be tiny by the time they arrived, as well as buried in a background of louder 'noise'. Furthermore, this tiny signal would be 'smeared' (turned from a sharp spike into a rounded shape). So he and his colleagues came up with a wonderful mathematical tool to detect these tiny, smeared signals.
As it turned out, they never did find these small black holes.
In 1992, John O'Sullivan was at Commonwealth Scientific and Industrial Research Organization (CSIRO) in Australia, trying to develop computer networks that communicated without wires.
But there was a big problem. The signals he wanted to detect were tiny, smeared and buried in a background of louder 'noise'. Just like the black hole signals.
By a wonderful coincidence, his black hole mathematics turned out to be the key to Wi-Fi. CSIRO took out patents in Australia in 1992, and in the US in 1996. By 2000, they had some working chips.
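The underlying trick can be illustrated with a matched filter: slide the known, smeared pulse shape along the noisy trace and look for where the correlation peaks. This is a toy sketch in pure Python with made-up signal sizes and noise levels, not O'Sullivan's actual Fourier-based algorithm, but it shows how a pulse too faint to spot by eye can still be pulled out of noise:

```python
import math
import random

random.seed(42)

def smeared_pulse(width):
    """A sharp spike smeared into a rounded (Gaussian-like) shape."""
    return [math.exp(-((i - width / 2) ** 2) / (width / 4))
            for i in range(width)]

def cross_correlate(signal, template):
    """Slide the known pulse shape along the signal (a matched filter)."""
    n, m = len(signal), len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

# Bury a smeared pulse at position 300 in a trace of louder noise.
pulse = smeared_pulse(20)
signal = [random.gauss(0.0, 0.25) for _ in range(1000)]
for j, p in enumerate(pulse):
    signal[300 + j] += p

scores = cross_correlate(signal, pulse)
detected = max(range(len(scores)), key=lambda i: scores[i])
```

The correlation score peaks where the template lines up with the buried pulse, so `detected` lands at (or very near) position 300 even though individual samples there look like ordinary noise.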
Improve your NAS environment in One Day! Introducing IBM Spectrum NAS
IBM has been in the NAS storage business for decades. IBM Spectrum NAS is our most recent software-defined storage offering. This session gave an overview of how Spectrum NAS is designed. The software can be deployed on as few as four nodes in less than an hour, leaving you the rest of the day to migrate your data from other NAS solutions.
IBM Spectrum NAS fills the gap between a single file server and expensive dual-controller models available commercially. A single file server, running perhaps Windows Storage Server or Linux with NFS and Samba, represents a single point of failure (SPOF). Lose the one server, and your department or team loses access to all of those shared files!
At the other extreme, commercial dual-controller NAS devices, such as those from NetApp or DellEMC, are loaded with advanced features and application-specific capabilities. Some people take advantage of these, others don't.
IBM Spectrum NAS is software-defined storage that runs on four or more nodes, is highly available, and provides many of the advanced features offered by commercial dual-controller models at roughly half the total cost of ownership.
Dip your TOE in our Pool! iSER and Data Reduction with IBM Spectrum Virtualize
All of the presenters at this conference were asked to come up with fun and quirky titles for their sessions. The title is a bit of wordplay.
When IBM launched its SAN Volume Controller in 2003, I was one of the "Technical Evangelists" that traveled around the world to explain how it works. Today, 15 years later, I am still talking about how great this technology is.
Ethernet network interface cards that have co-processors to offload some of the TCP/IP processing are called TCP-Offload-Engines, or "TOE" cards.
IBM recently announced two new flavors of 25GbE cards, one that supports RDMA over Converged Ethernet (RoCE), and another that supports Internet Wide Area RDMA protocol (iWARP).
To implement data deduplication, the Spectrum Virtualize team refactored the code that handled pools of managed space. The original pools are now referred to as "Legacy Storage Pools", and the new pools are referred to as "Data Reduction Pools".
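As a rough illustration of what deduplication inside a pool does (a toy content-addressed store, not the actual Spectrum Virtualize implementation), incoming data can be split into chunks, fingerprinted, and stored only once per unique chunk:

```python
import hashlib

class DedupPool:
    """Toy content-addressed store: identical chunks are kept once."""

    def __init__(self, chunk_size=8):
        self.chunk_size = chunk_size
        self.chunks = {}      # fingerprint -> chunk data
        self.volumes = {}     # volume name -> list of fingerprints

    def write(self, volume, data):
        """Chunk the data; store each unique chunk only once."""
        refs = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(fp, chunk)   # store only if new
            refs.append(fp)
        self.volumes[volume] = refs

    def read(self, volume):
        """Reassemble a volume's data from its chunk references."""
        return b"".join(self.chunks[fp] for fp in self.volumes[volume])

    def physical_bytes(self):
        return sum(len(c) for c in self.chunks.values())

pool = DedupPool()
pool.write("vol1", b"ABCDEFGH" * 4)                     # 32 logical bytes
pool.write("vol2", b"ABCDEFGH" * 2 + b"12345678")       # 24 logical bytes
```

Here 56 logical bytes across two volumes occupy only 16 physical bytes, because the repeated chunk is stored once and referenced four more times. Real arrays add reference counting, garbage collection, and compression on top, but the fingerprint-and-share idea is the core of it.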
Fahima Zair, Tony Pearson, and Maria Lancaster
After the sessions, we had a nice evening reception to celebrate the General Availability of the IBM FlashSystem 9100. At events like these, many attendees are local and commute to the event, so I was happy to see many stuck around to have conversations with the experts.
I was able to reconnect with many of my colleagues, including Fahima Zair in charge of our VersaStack relationship with Cisco, and Maria Lancaster from our Storage Marketing team.
I was in Hollywood, Florida for the IBM Systems Technical University. Here is my recap of the final two days, days 4 and 5.
The Pendulum Swings Back: Understanding Converged and Hyperconverged Systems
Once again, I presented my popular session on converged and hyperconverged systems. For converged, IBM offers IBM PureApplication systems with Power and x86 servers, as well as a partnership with Cisco called VersaStack. Both support IBM Cloud Private as a platform for running applications.
For hyperconverged, IBM offers Spectrum Accelerate and Spectrum Scale, as well as a partnership with SuperMicro that combines Spectrum Accelerate with SuperMicro x86 servers, and a partnership with Nutanix for CS models of Power servers pre-installed with Nutanix software.
Unlike other converged and hyperconverged solutions that act as isolated islands of compute and storage, IBM's solutions can be incorporated into an existing datacenter with IBM Cloud Private for orchestration, and IBM Spectrum Scale to provide common access to data.
The Seven Tiers of Business Continuity and Disaster Recovery
With all the natural disasters that happened last year in the USA, and the more recent ones all over the world, this session continues to draw a crowd.
The seven tiers range from the least expensive to the most expensive. The least expensive involves restoring data from tapes stored in an offsite vault. Tape continues to be the least expensive storage medium, and can be used to bring a company back up in a few days.
For faster recovery, there are options like electronic vaulting to virtual tape libraries, and now the use of Cloud storage for ubiquitous access to data from different locations.
Snapshots of entire volumes, virtual machines or databases are also quite popular. IBM offers IBM Spectrum Protect Snapshot, Spectrum Protect Plus, and Spectrum Copy Data Management for this.
Faster recovery is possible with remote mirroring. This involves sending all updates to a secondary location. In the event of a disaster, clients can switch processing to the secondary site, where the data is already in place. IBM has over 800 clients able to do just that in less than 30 minutes.
Event Night by the Pool
Photography by Mo Reyes
With Hurricane Michael having raged through the Florida Panhandle the week prior, the event coordinators were a bit nervous about offering an evening dinner event by the pool, but the weather cooperated!
I was a social butterfly, moving from table to table to talk to all of the various attendees. A light breeze and excellent food and music made for an enjoyable night!
The pool reception went on until about 10:00 PM. IBM had lit up its logo in the pool for a great view from above. Perhaps just 30 minutes after I arrived back at my hotel room, we had quite the thunderstorm! How incredibly lucky this did not happen during the event!
The following day, I presented my session on "Managing Risk with Data Footprint Reduction", a repeat of the session I did earlier that week.
This was a pleasant way to end the week! Aside from the heat and humidity being above average for October, it was a beautiful hotel in a lovely city.