This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years at IBM, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI Market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this solution, a key part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
It is funny how an article or blog post can remind me of something long, long ago.
Back in 2005, my manager, Rich Lechner, was then the Executive Advocate for a client in Chicago. While visiting that client, he asked what the client wanted most. The answer: for IBM to come in and do an "Information Lifecycle Management" (ILM) study of his IT environment. Rich agreed to send me on-site for a week.
I had done disk and tape studies of this kind before, but this time I was going to do an end-to-end study, evaluating their growth and determining the best storage media for different data types.
Joining me were three "observers" from IBM Lab Services: Barbara Read, Steve Bisel and Tom Moore. As if I did not have enough pressure from the client, now I had to be "watched" while I interviewed the storage administrators, generated and reviewed reports.
At the end of the week, I provided the client's upper management with a list of short-term, mid-term and long-term recommendations. As a side benefit, the client decided to purchase two DS8000 storage systems, replacing their HDS equipment!
After that initial engagement, the four of us formed a team. We performed similar studies at other client locations. Barbara Read was the process expert who wrote the "Documents of Understanding". Steve was our financial expert, and used spreadsheets to show total cost of ownership comparisons. Tom was our infrastructure expert, and used Microsoft Visio to document the inventory of IT equipment, and how it was all interconnected.
I was the consultant and public speaker for the team. I was able to incorporate the work of the three others into a Powerpoint presentation. During the week, we would show initial findings to the client, and then follow it up a few weeks later with a full report.
A lot has changed in the past 13 years! First, ILM was renamed to "Storage Infrastructure Optimization" (SIO) studies. Our initial team trained dozens of other practitioners. Today, SIO studies are done all over the world.
This week -- Jan 29 to Feb 2, 2018 -- I am in New York city with other IBM Storage executives, to meet with Channel distributors and Business Partners. If you are in the NYC area, and wish to have a product briefing, or just dinner or drinks, let me know!
I believe the "T" stands for "Third generation", as we have had other 9132 boxes before. Here are the details:
Small: Just 1U in size
Ports: 8, 16 or 32 ports
Transceivers: 32, 16, 8, and 4 Gbps
Protocols: FCP only, no FICON, FCIP, FCoE or iSCSI
Why is this important? Because the 16 Gbps and 32 Gbps transceivers support NVMe over Fabrics. Let's do a quick NVMe recap:
Last May, IBM announced that its developers are re-tooling the end-to-end storage stack to support [New Faster Protocols for Flash Storage], to boost the experience of everyone consuming the massive amounts of data now being perpetuated across cloud services, retail, banking, travel and other industries.
NVMe is a new protocol that is replacing the traditional SAS and SATA standards for solid state drives (SSD). By employing parallelism to process data simultaneously across a network of devices, clients can expect significantly reduced delays caused by data bottlenecks, and can move higher volumes of data within their existing flash storage systems.
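To make the parallelism point concrete, here is a toy, Little's-law style bound of my own (illustrative numbers, not from the announcement): a legacy AHCI/SATA controller exposes a single queue of depth 32, while the NVMe specification allows up to 64K queues of up to 64K commands each, so far more commands can be in flight at once.

```python
# Toy upper bound on IOPS from outstanding commands (Little's law):
# throughput <= commands in flight / per-command latency.
# All numbers below are illustrative, not measured results.
def max_iops(queues, queue_depth, latency_us):
    """Best-case IOPS with queues * queue_depth commands in flight,
    each taking latency_us microseconds to complete."""
    in_flight = queues * queue_depth
    return int(in_flight * 1_000_000 / latency_us)

sata_bound = max_iops(queues=1, queue_depth=32, latency_us=100)    # 320,000
nvme_bound = max_iops(queues=8, queue_depth=1024, latency_us=100)  # 81,920,000
```

The real win is that the device never sits idle waiting for the host to refill one shallow queue; the spec's 64K-by-64K limits are far beyond what any single workload needs.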
IBM's NVMe strategy is based on optimizing the entire storage system stack - from applications requiring the data to flash technology to store it. Through the development of its FlashSystem family of all-flash storage solutions, IBM recognized years ago that multiple technologies would be required to address the demands of ultra-low latency data processing. IBM is developing solutions with NVMe across its storage portfolio, which it plans to bring to market in 2018.
At the AI Summit New York, December 2017, IBM disclosed a [technology preview and demonstration] with the integration of IBM POWER9 Systems and IBM FlashSystem 900 using NVMe-over-Fabrics InfiniBand. This combination of technologies is ideally suited to run cognitive solutions such as IBM PowerAI Vision, which can ingest massive amounts of data while simultaneously completing real time inferencing (object detection).
Whether it is streams of data, transactional data, or batch processes, a consistent requirement is the lowest possible latency. Among the leading all-flash storage vendors, IBM, with its FlashSystem 900, has stuck to its mission of delivering low-latency all-flash arrays. Along comes NVMe-oF, which is, at its core, about getting rid of latency.
How do you take an already low latency protocol, like InfiniBand or Fibre Channel, and make it faster? Replace SCSI with NVMe and enable NVMe from server to fabric to storage array.
The FlashSystem 900 has been shipping with InfiniBand using SRP (SCSI RDMA Protocol) for many years. In the technology preview, the very same InfiniBand adapter, based on the Mellanox chip set, is instead used to support the OpenFabrics driver distribution and NVMe-oF over InfiniBand.
While the demonstration last December used InfiniBand, this is not the only transport. NVMe-oF can also be used with Ethernet, either via Internet Wide Area RDMA Protocol (iWARP) or RDMA over Converged Ethernet (RoCE). NVMe-oF over Fibre Channel is often referred to as FC-NVMe, and can drive NVMe over FCP or FCoE. Even though iWARP, RoCE and FCoE are all Ethernet-based, NVMe-oF RDMA on the first two is different from FC-NVMe over FCoE.
Why not just drive NVMe commands over standard TCP/IP? The NVMe standards board is actually investigating this, but probably won't have anything until 2019.
This week, IBM will be at the [Cisco Live!] event in Barcelona, Spain, talking about this new 9132T switch, as well as all of our VersaStack solutions! I won't be there, obviously, since I am in New York City, but if you are there, please send me photos! Barcelona is a wonderful city!
I hope everyone had a festive and restful winter break! I sure did!
(FCC Disclosure: I work for IBM. IBM is in our 17-day "quiet period" before it announces full-year and 4Q results on January 18. Therefore, I picked today's topic that has nothing to do with storage products, recent client wins, or financials.)
It's January, so I thought I would discuss [New Year's resolutions], a tradition in United States in which a person resolves to change an undesired trait or behavior, to accomplish a personal goal, or otherwise improve their life. Early Romans made promises to their god Janus, for whom the month of January is named.
Sadly, most of us are unsuccessful. This is often because the resolutions were unrealistic, people failed to measure and track their progress, or they simply lost interest midyear.
From my own experience, most resolutions can be lumped into four major categories:
Get healthy: Eat better, lose weight, exercise more, sit less, quit smoking
Get organized: Stop procrastinating, pay off debt, de-clutter, switch to a better job, reduce stress
Become social: Spend more time with friends and family, meet new people, travel, volunteer for charity
Learn new skills: Learn a new language, take up a new hobby, learn to paint or create arts and crafts
A technique I use to develop presentations might help people keep New Year's resolutions. The technique, called [SCIPAB®] and created by Mandel Communications, is an elegantly simple, six-step method for starting important conversations or creating [Effective Presentations]. Since resolutions are basically "conversations with yourself", let's give it a try!
Situation: "Oh No! The boss's daughter, Nell Fenwick, is tied to the railroad tracks!"
Complication: A train approaches!
Implication: If nobody does anything soon, she will die
Position: "I, Dudley Do-Right, will save her!"
Action: Untie her from the tracks and set her free
Arrest the villain, Snidely Whiplash
Benefit: Nell lives! "Dudley Do-Right, you are my hero!"
Let's see how we can use this approach on different categories of resolutions. To get healthy, we might use:
Situation: "Oh No! My latest doctor visit indicates that my numbers are too high!"
(AMA Disclosure: I am not a doctor. This is not medical advice. Here numbers could represent any appropriate health measurement of your BMI, blood pressure, cholesterol, triglycerides, liver enzymes, or blood sugar, for example.)
Complication: I am not getting any younger.
Implication: I am at risk of heart disease, cancer, or other health issue. This situation will not go away on its own.
Position: I need to change my lifestyle to get healthy
Action: Set an appointment to see my doctor
Follow doctor's recommendations for diet, medication and exercise
Schedule follow-up appointments to measure and track progress
Benefit: My health measurements will return to normal range.
Rather than resolving to "eat less and exercise more", the above approach focuses on the end result instead of intermediate actions, and therefore has a better chance of success: getting your health within normal range.
Let's try another one. To get better organized, we might use:
Situation: "Sigh! All of my projects are over budget and behind schedule, my desk is a mess, I forget important thoughts and ideas, and I am always late to meetings."
Complication: I just got assigned to lead project XYZ.
Implication: If I am not better organized, I could lose my job.
Position: I need to change my work routine to get organized.
Action: Read David Allen's book and learn his system for "Getting Things Done" [GTD], or one of the many variants, like [GSD] or [ZTD].
Decide where to write down and keep track of my thoughts, tasks and projects, either on paper, like a notebook or [Hipster PDA], or in an online mobile account like [Evernote] or [Google Keep]. Choose something that will be within arm's reach 24 hours a day.
Work with project managers to track and measure progress of project XYZ.
Benefit: Project XYZ will be completed on schedule, within budget. I might even get a bonus, raise, or promotion!
I could go on, but you get the idea.
In his WSJ article [Blame it on the Brain], Jonah Lehrer cautions against trying to change too many habits all at once. If you have multiple resolutions, try to focus on establishing new habits for one resolution for a month or two, before starting the next one. Prioritize what is most important.
The study surveyed 5,676 leaders from various industries, education, and government agencies responsible for workforce development and labor/workforce policy. This was a truly global survey, with respondents from North and South America, the Nordics, Europe, Africa, Middle East and Asia.
A gloomy picture for the future
The survey paints a gloomy picture for the future. The majority of industry executives struggle to keep their workforce skills current, in light of rapidly changing technological advancements.
Only 55 percent of the respondents felt the current education system, from grade school up to university, was adequate to ensure lifelong learning and skills development. Most blamed inadequate investment from private industry in addressing these issues.
Any problem can be solved if (a) everyone agrees what the problem is, and (b) everyone feels it is high enough priority to solve. The study found there was a disparity of what the problem is, what the priorities are, and who should solve it.
In the book Class Counts: Education, Inequality, and the Shrinking Middle Class, the author Allan Ornstein argues ".. the debate centers on whether the government should take a backseat or manage the economy, whether a free market should prevail or whether we should redefine or tinker with market forces..."
Which workplace skills are in short supply?
Can we at least agree on which workplace skills are in short supply?
Not surprisingly, Industry leaders ranked the top three skills required:
Technical core capabilities for Science, Technology, Engineering and Math [STEM]
Basic computer and software/application skills
Fundamental core capabilities around reading, writing and arithmetic (often called [the three Rs])
These are all "hard skills", referring to the knowledge, skills and competencies to perform specific tasks. Nearly 75 percent of corporate training budgets are focused on hard skills.
Government leaders, on the other hand, especially those responsible for labor/workforce policy, ranked these as the top three skills:
Ability to communicate effectively in a business context
Willingness to be flexible, agile and adaptable to change
Ability to work effectively in team environments
These would all be classified as "soft skills", referring to the people skills, social skills, communication and emotional intelligence to effectively navigate the environment and work well with others.
In fact, these government leaders ranked STEM, computer skills and "the three Rs" lowest among their priorities.
"Unless managers have forgotten everything they learned in Econ 101, they should recognize that one way to fill a vacancy is to offer qualified job seekers a compelling reason to take the job. Higher pay, better benefits, and more accommodating work hours are usually good reasons for job applicants to prefer one employment offer over another."
"... the long-hours pandemic is a symptom of the tech and design sectors' badge-of-honor-martyr-complex. ... part of the reason that women can't have it all is that American business has grown this time-macho culture, a relentless competition to work harder, stay later, pull more all-nighters, ... the classic 40-hour work week have trained us to measure our labor by the number of hours we log,... However, this mindset is dead wrong when applied to today's professionals. The value ... isn't the time they spend, but the value they create through their knowledge."
IT jobs require creativity and focus. In a feature article titled [Why you should work 4 hours a day, according to science], Alex Soojung-Kim Pang, author of Rest: Why You Get More Done When You Work Less, looks at the work habits of highly accomplished creative people through history and finds that they all shared a passion for their work, a terrific ambition to succeed, and an almost superhuman capacity to focus.
Yet when you look closely at their daily lives, they only spent a few hours a day doing what we would recognize as their most important work. The rest of the time, they were hiking mountains, taking naps, going on walks with friends, or just sitting and thinking.
Encouraging more students to develop the skills early
While we all agree that employers should raise salaries, offer better benefits, and fix their morally-corrupt culture of working too many hours, that only addresses part of the problem, the demand half of the equation. We also need to get kids to learn the hard and soft skills needed at an early age.
Do students have what it takes to work in the IT industry? John Rampton lists the [15 Characteristics of a Good Programmer]. Most are soft skills, with my favorites being: Laziness, Impatience and Hubris.
In his book Why Good People Can't Get Jobs: The Skills Gap and What Companies Can Do About It, Peter Cappelli advises corporations to take a more proactive role:
"... a huge part of the so-called skills gap actually springs from weak employer efforts to promote internal training for either current employees or future hires ... It makes no sense for the employers, as consumers of skills, to remain an arm's-length distance from the schools that produce those skills..."
The major stakeholders, from industry to education to government, should partner together. For example, the Chicago Public Schools (CPS) system will be the first in the United States to [require all students to take computer science] in high school, starting with the class graduating in 2020. Grants and training are being provided by IT industry giants like Google and Microsoft.
IBM is also doing its part with [a new education paradigm], called Pathways in Technology Early College High Schools [P-TECH]. Normal high school is typically four years (grades 9 to 12), but P-TECH is a system of innovative public schools spanning grades 9 to 14 that bring together the best elements of high school, college, and career. The additional two years (grades 13 and 14) of community college can help teach the soft and hard skills needed for particular jobs in IT.
After the six years, students graduate with a no-cost associate's degree in applied science, engineering, computers and related disciplines, along with the skills and knowledge they need to continue their studies or step easily into well-paying, high-potential jobs in the IT arena across multiple industries.
The paradigm has grown from one school in 2011 to 60 schools by September 2016, with over 300 large and small companies affiliated with P-TECH schools serving thousands of students.
So the future may not be as gloomy as predicted. Problems can be addressed if everyone works together to solve them. In the mean time, I will be taking the rest of the year off for long-overdue vacation. Perhaps I will go hike mountains and take naps, as Alex suggests above.
It's official. We have changed our name! The Worldwide IBM Systems Executive Briefing Centers (EBC) are now being called the Worldwide IBM Systems Client Experience Centers!
I joined the Tucson EBC team in 2007. For the past 10 years, I have been running design workshops, consulting with clients and architecting solutions.
Why the name change? The term "Executive Briefing Center" implies one-way communication with [death by PowerPoint], which can be ineffective in today's dynamic and collaborative work environments.
Client expectations for two-way communications have given rise to immersive and interactive engagements where clients not only learn about IBM's solution offerings, they experience them.
Through hybrid briefing/workshop engagements, demonstrations, and active promotion of our ISV Ecosystem partners, we take clients on a journey where they envision utilizing our technology and solutions to achieve desired business outcomes. The new Client Experience Center moniker more accurately represents the work we do and the value we provide.
(Note: I realize that the new acronym for the Client Experience Center (CEC) is the same as the Central Electronic Complex (CEC) used in both storage and server products. I can assure you that the executives who decided to rename the centers did not choose this to be funny! Consider it a mere coincidence.)
Of course, changing the name is not cheap. We will have to update all of our websites, and order new signage, new water bottles, new coasters, new embroidered shirts, and new business cards, just to name a few!
The weather in Tucson is awesome these next few months, so come on down! Can't travel? We can come visit you, or do it over the phone via webinar.
Our Worldwide IBM Systems Client Experience Centers are located in:
Last Friday, I helped students learn about Science, Technology, Engineering and Math (STEM). This was the annual [2017 Arizona STEM Adventure] event in Tucson, Arizona. Once again, Pima Community College Northwest Campus provided the venue.
The event hosted 1,200 students, ranging from fourth to eighth grades. Buses collected them from ten different school districts in the area. Home-schooled, private-schooled and charter-schooled children participated as well.
There were three dozen exhibits, some were indoors, and others in tents outside. The weather was delightful for November.
IBM's exhibit used a simple bicycle wheel to demonstrate the properties of a [gyroscope]. A gyroscope is a spinning wheel that maintains its angular momentum. This is useful both for measuring forces that try to affect it and for counteracting those forces.
We had the kids stand on a rotating platform, holding the bicycle wheel with both hands. A volunteer would spin the wheel. If the kid leaned the wheel left or right, the platform would spin to counteract the force. (The effect can be accomplished while sitting on a swivel chair. See [Exploratorium] for an example.)
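The platform demo is just conservation of angular momentum about the vertical axis. Here is a rough back-of-the-envelope calculation; every number is assumed for illustration, nothing was measured at the event:

```python
# Angular momentum about the vertical axis is conserved on the
# free-spinning platform. Tilting the spinning wheel from horizontal
# to vertical moves the wheel's angular momentum onto the vertical
# axis, so the kid + platform must counter-rotate. Values assumed.
I_wheel = 0.10       # kg*m^2, bicycle wheel about its axle (assumed)
omega_wheel = 30.0   # rad/s, spin given by the volunteer (assumed)
I_platform = 4.0     # kg*m^2, kid + platform about vertical axis (assumed)

L_vertical = I_wheel * omega_wheel          # wheel's momentum, now vertical
omega_platform = -L_vertical / I_platform   # platform counter-rotation
# omega_platform = -0.75 rad/s: roughly one turn every 8 seconds,
# opposite in sense to the wheel's new spin direction.
```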
Gyroscopes are used in everything from airplanes to submarines to help with navigation; they keep space-based telescopes like the Hubble pointed in the right direction, help dig tunnels straight, and enable [Steadicam] filming for Hollywood movies and [IBM Client Center] videos!
According to the U.S. Environmental Protection Agency (EPA):
"The state has warmed about two degrees (F) in the last century. Throughout the southwestern United States, heat waves are becoming more common, and snow is melting earlier in spring. In the coming decades, the changing climate is likely to decrease the flow of water in the Colorado River, threaten the health of livestock, increase the frequency and intensity of wildfires, and convert some rangelands to desert." (Source: [What Climate Change means for Arizona])
Their robots stole the show! This one pictured here was remote controlled. Another one was able to pick up and throw basketballs.
(This is not my first exposure to FIRST. See my 2009 blog post [Helping Young Students] on how I helped fourth graders learn C programming language by building robots with LEGO Mindstorms.)
The team draws students from the five high schools of the Vail school district. I drive by one of these, the Vail Academy and High School, on the way to the IBM Client Experience Center. This is not just for boys; about one third of the team members are girls!
The students design each robot, do the welding, even do the C++ programming themselves, and participate in competitions!
Lunch and Logistics
With all the focus on science and technology exhibits, it is easy to forget all the work done behind the scenes. An [Eventbase] website was used to help us direct all of the students, teachers and volunteers to the right place.
Since we had enough volunteers for the IBM exhibit, I chose instead to be a "general volunteer" and was assigned the task of collecting and distributing lunches. For some schools, the students brought their own lunches on the bus; these were collected when they got off the bus and distributed when it was their time to eat. For other schools, their staff packed lunches for each student.
We staggered the distribution into five groups, with color coded labels, starting from 10:30am, every 20 minutes, to 11:50am. The volunteers themselves did not eat until 1:30pm. We were provided pulled pork sandwiches from [Mama's Hawaiian BBQ], a local favorite!
This was a great day! There are plenty of problems that need to be solved in our world, and a shortage of scientists and engineers to solve them. Encouraging kids to pursue these careers is a good step forward.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
The Collaboration of Oak Ridge, Argonne, and Livermore [CORAL] is a joint procurement activity among three of the Department of Energy's National Laboratories, launched in 2014 to build state-of-the-art high-performance computing (HPC) technologies that are essential for supporting U.S. national nuclear security and are key tools used for technology advancement and scientific discovery.
Of course, when you hear "state-of-the-art technology", IBM is probably the first company that comes to mind!
The new IBM Spectrum Scale 5.0 has been greatly enhanced to meet CORAL requirements:
Dramatic improvements in I/O performance
Significant reduction in internode software path latency to support the newest low-latency, high-bandwidth hardware such as NVMe
Improved performance for many small and large block size workloads simultaneously, thanks to a new 4 MB default block size with a variable sub-block size based on the chosen block size
Improved metadata operation performance to a single directory from multiple nodes
Spectrum Scale 5.0 now automatically tunes more than twenty communication protocol and buffer management parameters, aiding setup for optimal performance. The enhanced GUI features many capabilities, including performance, capacity, and network monitoring, AFM (multicluster management), transparent cloud tiering, and enhanced maintenance and support, including interaction with IBM remote support.
Spectrum Scale 5.0 now offers file-level immutability. Previous releases supported immutability only at the fileset granularity, so this allows finer control. Immutability can be an effective tool as part of an overall Non-Erasable, Non-Rewriteable [NENR] compliance policy.
Spectrum Scale comes in both "Standard Edition" and "Data Management Edition". The latter offers some additional features, including Transparent Cloud Tiering, Asynchronous AFM Disaster Recovery support, and Encryption. Some additional enhancements to Data Management Edition in Spectrum Scale 5.0 are:
File audit logging capability to track user accesses to file system and events supported across all nodes and all protocols
Parseable data stored in secure retention-protected fileset
Data security following removal of physical media protected by on-disk encryption
The new IBM Storage Utility Offerings include the IBM FlashSystem 900 (9843-UF3), IBM Storwize V5030 (2078-U5A), and Storwize V7000 (2076-U7A) storage utility models that enable variable capacity usage and billing.
These models provide a fixed total capacity, with a base and variable usage subscription of that total capacity. IBM Spectrum Control Storage Insights monitors system capacity usage and reports on capacity used beyond the base subscription, referred to as variable usage.
The variable capacity usage is billed on a quarterly basis. This enables customers to grow or shrink their usage, and only pay for configured capacity.
Suppose you only need 300 TB today, but expect this to grow to 1 PB (1,000 TB) over the course of three years. You install the full 1,000 TB of capacity, and pay for the 300 TB base, plus whatever you use above that 300 TB during each subsequent quarter. After 36 months, you pay for the rest of the installed capacity.
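As a sketch of that arithmetic (my own simplified model with an assumed linear growth curve; actual IBM billing terms will differ):

```python
# Utility-model billing sketch: pay for a 300 TB base subscription,
# then each quarter pay only for usage above the base.
# The linear growth curve below is assumed for illustration.
BASE_TB = 300
INSTALLED_TB = 1000
QUARTERS = 12  # 36 months

def variable_tb(used_tb):
    """Capacity billed as 'variable usage' for one quarter."""
    return max(0, used_tb - BASE_TB)

# Grow linearly from 300 TB to 1,000 TB over the term
usage = [BASE_TB + (INSTALLED_TB - BASE_TB) * q // (QUARTERS - 1)
         for q in range(QUARTERS)]
variable = [variable_tb(u) for u in usage]
# First quarter bills 0 TB of variable usage; the last bills 700 TB.
```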
(There are comparable offerings from IBM's competitors, but they often require that you pay for at least 75 to 85 percent of the installed amount, and then you would need to continue to disrupt your operations with additional capacity installed throughout the 12 to 36 month period. IBM's approach allows you to avoid installation disruption during the entire 36 month period!)
IBM Spectrum Virtualize for Public Cloud V8.1.1 delivers a powerful solution for the deployment of IBM Spectrum Virtualize software in public cloud, starting with IBM Cloud. This new capability provides a monthly license to deploy and use Spectrum Virtualize in IBM Cloud to enable hybrid cloud solutions.
Remote replication will be supported between Spectrum Virtualize-based appliances (including SAN Volume Controller (SVC), the Storwize family, IBM FlashSystem V9000, and VersaStack with Storwize family or SVC), or Spectrum Virtualize Software, to the IBM Cloud.
Using IP-based replication with Metro Mirror, Global Mirror, or Global Mirror with Change Volumes, clients can create secondary copies of on-premises data in the public cloud for disaster recovery. IBM has over 25 data centers around the world to choose from. Remote copy services can also be used between two IBM Cloud data centers for improved availability.
The solution is based on bare metal servers. You can create either two- or four-node high availability clusters.
On premises, Spectrum Virtualize-based SVC and Storwize systems now also support 2.4 TB 10K rpm 2.5-inch SAS hard disk drives.
IBM has been holding various "Hackathons" and "Meetups" as a new way to reach out to prospective clients. IBM sponsored a meetup at the Austin Executive Briefing Center (EBC) to discuss Machine Learning with TensorFlow on IBM Power systems, October 26, 2017.
This was a joint event, co-sponsored by [IBM Watson/Cognitive Austin] and [Big Data/AI Revealed] meetup groups. Special thanks to my colleague Cathy Cocco, IBM Executive IT Architect with the IBM Austin EBC, for coordinating this event with their organizers.
(What is a Meetup? [Meetup.com] is an online social networking website that facilitates in-person local group meetings. Meetup allows members to find and join groups unified by a common interest, such as books, games, pets, technology, careers or hobbies. In 2017, there are 32 million users with 280 thousand groups available across 182 countries.)
Here was the agenda for the event:
Registration, Pizza & Soft drinks
Tensorflow 101 presentation
Demo: Using TensorFlow for Financial Market Predictions on IBM POWER Systems
Lightning Talk: IBM Data Science Experience
Clarisse Taaffe-Hedglin: Intro to TensorFlow on IBM Power servers
Our guest speaker was my colleague Clarisse Taaffe-Hedglin, IBM Cognitive Senior Technical Architect, part of the same Worldwide Client Centers team that I work in. She flew in from Charlotte, NC.
Her topic was TensorFlow, an open source [Machine Learning] framework. TensorFlow was originally developed by Google, but was made open source in November 2015.
Machine Learning is popular in a variety of industries, from self-driving cars and trucks, speech recognition and video surveillance, to what movie to watch next on Netflix. There are three aspects to Machine Learning:
Data: Start with the data you want to analyze. This could be IoT sensor data, security logs, or social media feeds. Check out all that happens in an "Internet Minute"!
Compute: While mathematical computations can be performed on traditional CPUs, some frameworks are optimized and accelerated with Graphical Processing Units (GPU). These GPU can perform Teraflops of single and double precision calculations.
Technique: As methodologies have grown more complicated over the years, frameworks have evolved to match.
The [TensorFlow] framework is now one of the most popular among data scientists. You can download it for free at [Github].
Clarisse showed the various programming/calculation tools used by data scientists. The top five were: Python, R, SQL language, MapReduce, and Microsoft Excel.
Mathematical models come in many flavors. Clarisse explained they can be used to identify clusters of data that might have similar properties, or to perform classification, or linear regression. The results can be "descriptive", gaining a better understanding of what already is, or "predictive" for what might be.
Some frameworks like Chainer or Torch are more flexible, using a dynamic Build-by-Run approach. However, these do not scale well. Theano and TensorFlow, on the other hand, employ a Define-then-Run approach, which scales better for larger projects. With the growth in popularity of TensorFlow, the Theano framework has been "functionally stabilized".
Clarisse Taaffe-Hedglin: Financial Markets Demo
For the demo, Clarisse had historical stock closing data for U.S., Australian and Asian stock markets. The hypothesis: can we determine a Buy/Sell signal for U.S. stocks based on the closing results of non-U.S. markets? This is a classic "Binary Classification" model. The other stock markets close 4-16 hours before the U.S. markets open, so this has real-world applicability.
Since the data was in different monetary units, she did some cleanup to normalize the data, removing the trends and converting everything to U.S. Dollars (USD).
Clarisse used "Supervised Learning" on an 80 percent subset of the data, then used the remaining 20 percent to validate how well the model did.
As with any model, you measure how good it is by how close it results in the correct answer. Wrong answers are weighted by how bad they are. This is often referred to as "Loss" or "Cost". Different models can therefore be compared by minimizing the loss.
Using a simple y=wx+b mathematical model, she ran 30,000 iterations. After 5,000 iterations, the model was already guessing correctly 55 percent of the time; by 30,000 iterations, accuracy was up to 68 percent.
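The single-layer model can be sketched in plain Python. This is a toy stand-in, not the demo code: the data, threshold, and learning rate below are all made up, and real TensorFlow would vectorize these loops on the GPU.

```python
import math
import random

random.seed(42)

# Toy stand-in for the demo data -- all values here are made up.
# x plays the role of an overseas closing result; the label is
# 1 ("buy") when x is above an arbitrary threshold.
data = []
for _ in range(200):
    x = random.uniform(-1, 1)
    data.append((x, 1 if x > 0.25 else 0))

w, b, lr = 0.0, 0.0, 0.1          # weight, bias, learning rate

def predict(x):
    """Sigmoid of the linear model y = wx + b."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Full-batch gradient descent on the log-loss ("cost")
for _ in range(5000):
    gw = gb = 0.0
    for x, y in data:
        err = predict(x) - y
        gw += err * x
        gb += err
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

accuracy = sum((predict(x) > 0.5) == (y == 1) for x, y in data) / len(data)
```

After training, the decision boundary -b/w lands near the threshold that generated the labels, which is exactly what "minimizing the loss" buys you.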
TensorFlow also supports "hidden layers", basically intermediate variables that are then used in subsequent layers for more complicated calculations. This is the way our brain works with neural networks. With two added layers, she re-ran the 30,000 iterations, and now was up to 73 percent accuracy.
Normally, this kind of analysis would take hours or days, but since TensorFlow takes advantage of the IBM Power8 CPU and NVidia Tesla K80 GPU in the IBM Power server, the whole thing finished in five minutes!
Tuhin Mahmed: Lightning Talk on IBM Data Science Experience (DSX)
Tuhin Mahmed, IBM Software Developer, is the organizer for the Big Data/AI meetup group. He wants to promote the idea of "Lightning Talks" where each person presents for just 10-15 minutes. This is a variant of the popular [Pecha Kucha] events.
To get things started, he presented 10-15 minutes on [IBM Data Science Experience], or DSX for short. Taking Multiple Listing Service (MLS) real estate data of closing prices on houses sold in a range of zip codes in the Austin area, he mapped these on an x-y axis: square feet on the x axis, closing price on the y axis.
Using DSX, he was able to develop a mathematical model that estimates house closing prices based on their zip code and square footage.
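A model like this boils down to ordinary least squares. Here is a minimal sketch in plain Python, using made-up numbers (the MLS data is not public) and square footage as the single feature; the DSX demo also used zip code:

```python
# Hypothetical MLS-style data: (square_feet, closing_price) -- made up.
homes = [(1100, 180000), (1500, 235000), (1800, 268000),
         (2200, 331000), (2600, 379000), (3000, 442000)]

n = len(homes)
sx = sum(x for x, _ in homes)
sy = sum(y for _, y in homes)
sxx = sum(x * x for x, _ in homes)
sxy = sum(x * y for x, y in homes)

# Ordinary least-squares fit: price = slope * sqft + intercept
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def estimate(square_feet):
    """Estimated closing price for a house of the given size."""
    return slope * square_feet + intercept
```

In DSX you would do the same thing interactively in a Jupyter Notebook, typically with pandas and scikit-learn rather than by hand.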
This was a simple example, but it showed the power of Jupyter Notebooks, and how anyone can get a 30-day free trial of DSX for their own experimentation.
Currently, being a data scientist is more of an art than a science. This is one of those fields that takes only a few months to learn, but years to master.
Rather than building a model from scratch, data scientists can take existing models, and modify them to fit their needs. There are a variety of existing models available in what is called the "Model Zoo". Google has over 2,000 projects already.
Those interested in trying this out TensorFlow for themselves were directed to [Nimbix], a Cloud Service Provider that offers POWER servers with NVidia GPUs.
There were about 50 attendees, more than half of whom identified themselves as data scientists. As the inaugural sponsored event for the IBM Austin EBC, I think this was a success!
If you are in the Austin area, the next meetup will be at the [Capital Factory] on Brazos Street on November 30, 2017.
This week, I am presenting at the IBM Systems Technical University for IBM Storage and POWER Systems. This conference is being held in New Orleans, Louisiana, October 16-20, 2017, at the beautiful Hyatt Regency. About 800 clients are attending.
This is my recap for the last few sessions before I left town, spanning Tuesday afternoon and Wednesday afternoon.
Reasons why IBM hyperconverged systems powered by Nutanix surpass other HCI from HPE, Cisco and more
Rob Simpson, Senior Strategic Marketing Manager at Nutanix, presented Nutanix hyperconverged systems. Nutanix runs on both x86 and POWER. For x86, it supports VMware, Microsoft Hyper-V, and Citrix XenServer, as well as Nutanix's own Acropolis Hypervisor (AHV) derived from Linux KVM. For POWER, it uses AHV recompiled for the POWER chip set.
Hyperconverged systems can be sold in full rack configurations, as individual appliances, or as software that can be deployed on your own servers. Rob compared Nutanix against three competitive appliances: Dell EMC VxRAIL based on VMware VSAN, HPE Simplivity, and Cisco HyperFlex.
Everything you wanted to know about IBM Spectrum Scale metadata but didn't know to ask
Eric Sperley, IBM Software Defined Infrastructure Architect, presented the internal metadata structures used in IBM Spectrum Scale.
Why, oh why, did I attend this presentation? I had worked on Spectrum Scale back when it was called GPFS over 15 years ago, and thought I already knew everything about "inodes" that I ever wanted to, but Eric proved me wrong!
"Laws, like sausages, cease to inspire respect in proportion as we know how they are made."
--John Godfrey Saxe
A lot has changed! There have been a lot of improvements to the internal structures to improve parallel I/O performance, and reduce latency of administrative tasks.
IBM Spectrum Scale can be divided into different file systems, each of which can be configured with different performance characteristics and block size, such as random small files for scanned images, versus large sequential files for streaming videos.
My presentation was nowhere near as technical as Eric's above. I provided an overview of how IBM Spectrum Scale is configured, how it works, and how it interacts with IBM Cloud Object Storage System, Spectrum Protect, and System Archive.
I also covered the latest GSxS and GLxS models of the Elastic Storage Server, or ESS for short. These models provide awesome performance at low cost. The GSxS models are all-flash arrays for high performance. The GLxS models are hybrid with 2 Solid-State Drives and the rest NL-SAS 7200 rpm spinning disk for high capacity.
IBM COS new features
Andy Kutner, IBM Channel and Alliances Architect, presented the latest features in IBM Cloud Object Storage, IBM COS for short.
Compliance Enabled Vaults, or CEV for short, offer Non-Erasable, Non-Rewriteable (NENR) tamperproof protection for objects. Objects written to a CEV vault cannot be deleted or replaced with newer versions for a specified retention period.
(Note: Some folks mistakenly use the term "Write Once, Read Many" (WORM) for this. WORM applies only to tape, optical, paper tape, punched cards, and non-erasable ROM chips. For this reason, the term "Non-Erasable, Non-Rewriteable" (NENR), used in the U.S. Securities Exchange Commission (SEC 17a-4) regulation, has been created to extend this tamperproof protection to flash, disk and cloud-based storage architectures.)
New entry-level configurations lower the minimum capacity. Before, IBM recommended at least 500 TB of capacity to consider IBM COS. Now, the combination of embedded Accessers and Concentrated Dispersal mode can lower the starting point to as low as 72 TB, while still allowing you to grow to multiple PBs.
This week, I am presenting at the IBM Systems Technical University for Storage and POWER Systems. This conference is being held in New Orleans, Louisiana, October 16-20, 2017, at the beautiful Hyatt Regency.
This is my recap for sessions on Day 2 morning.
FlashSystem A9000 and A9000R Overview
Andy Walls, IBM Fellow, CTO and Chief Architect, and Brent Yardley, IBM STSM and Master Inventor, co-presented this session. This was the "deep dive" of the A9000/R, a continuation of the one they did yesterday.
The Pendulum Swings Back -- Understanding converged and hyperconverged integrated systems
With IBM's partnership with Nutanix, this has become a particularly popular topic. I cover the last 50 years of storage evolution, from internal storage and external storage to NAS and SAN storage networks.
More recently, people have been willing to give up all those gains for something simpler, less powerful, less reliable, and less expensive. Enter Converged and Hyperconverged Systems. IBM PureSystems and VersaStack lead the pack for Converged Systems, along with IBM Spectrum Scale, Spectrum Accelerate and Nutanix on IBM Power Systems for Hyperconverged Integrated Systems.
New Generation of Storage Tiering -- Less Management, Lower Costs, and Improved Performance
There are orders of magnitude between the fastest All-Flash Array and the least expensive tape storage. Ideally, there would be a "slider bar" that allowed people to select from the fastest to the least expensive. IBM offers a variety of solutions to offer this "slider bar", with automation to move data as needed between tiers.
I start with IBM Easy Tier, available on DS8000 and Spectrum Virtualize products, to IBM Virtual Storage Center where advanced analytics moves data to the right location, to IBM Spectrum Scale which provides the ultimate tiering, across multiple locations, between flash, disk and tape.
The lunches at these conferences are amazing, but then the "Big Easy" is known for its food!
This week, I am presenting at the IBM Systems Technical University for Storage and POWER systems. This conference is being held in New Orleans, Louisiana, October 16-20, 2017, at the beautiful Hyatt Regency.
The afternoon sessions on Monday were all about Cloud.
Back in 2009, I was designated the IBM Cloud Storage Center of Competency lead for all of the IBM Systems client centers. That was nearly a decade ago, and I am still talking about Cloud Storage!
Since then, IBM has decided to be a "Cloud Platform" company, and now everyone wants to know about Cloud Storage. Cloud is no longer just about lowering costs, as it once was, but about innovation and business value.
Nearly all of IBM Storage is enabled for cloud, from our high-end FlashSystem, DS8000 and XIV flash and disk storage arrays, to our Spectrum Storage software suite, to our various tape products.
Building Private Cloud with Ubuntu and OpenPOWER
Ivan Dobos, from Canonical--the company that makes Ubuntu--presented Ubuntu on OpenPOWER. Other Linux distributions like Red Hat and SuSE offer both a "community supported" version (CentOS or OpenSUSE) and an "enterprise" version (RHEL or SLES). Ubuntu doesn't fork its versions; there is a single version for everyone.
Ubuntu 14.04 LTS was made available as a Little-Endian distribution for IBM POWER and OpenPOWER. Ubuntu was the first Linux distribution to support CAPI and PowerKVM for the POWER8 platform.
(A note on release numbers. Ubuntu releases every April and October, so 14.04 represents 2014/April release. Every two years, a release is designated "Long Term Support" (LTS) which is supported for five years.)
Since version 16.04, Ubuntu offers the LXD Container Hypervisor, based on LXC, similar to Solaris Zones, but running as a daemon. Virtual Machines are heavy because each has its own kernel. Containers instead share the kernel of the underlying host, but are limited to Linux guests. The Linux guests can be older versions of Debian, Red Hat or SuSE, but run on the latest, most secure Ubuntu kernel for safety and security.
(Canonical gives Ubuntu away for free, but offers "Enterprise Services" for a fee to companies that want an added level of support. One of the features of Enterprise Services is "Live Kernel Update". Normally, updating the Linux kernel requires a reboot, which would cause an outage to all of the VMs and containers running on that host server; Live Kernel Update applies kernel patches without the reboot.)
Like VMs, you can launch containers, switch to bash shell, install software, run applications, and shut down containers, all isolated from other containers. The LXD daemon can run LXC and Docker containers. Some advantages of doing this:
Lift and Shift, live mobility from one system to another
Collocation of different workloads on same node
More efficient to use containers than Virtual Machines
14x greater density with LXD than traditional KVM or VMware (tested on x86)
Based on open source LXC containers
Ubuntu is designed for the "Elastic Hybrid Cloud". Canonical recommends combining on-premises data center with two or more public cloud providers. Scarcity has shifted from "code" to "operations". Are you ready to run applications you don't understand?
Total Cost of Ownership is shifting from code license costs to operational costs. Canonical offers a free, downloadable, operations orchestration platform called "Juju" to help install, configure and scale applications. Juju means "magic" in Swahili.
Scripts on Juju are called charms. There are Juju charms to install and configure things like MongoDB and IBM Spectrum Scale. Furthermore, Juju charms can be bundled together for more complicated deployments.
Juju is not limited to LXD; it can be used with VMware, OpenStack, bare metal servers, and public clouds. It is available on Ubuntu, Red Hat and Windows. As a demo, Ivan built an entire working OpenStack environment, with 20 applications on 4 bare metal servers, all installed and launched with Juju.
For OpenStack, you can use the basic "Ubuntu OpenStack", or a more complete "Canonical OpenStack", or even have Canonical folks manage your environment for you.
Canonical MaaS (Metal-as-a-Service) uses hardware APIs to manage bare metal servers, providing physical provisioning, dynamic allocation for workloads, and even Ubuntu and CentOS operating system installs. Canonical has clients with over 100,000 servers managed with MaaS.
Introduction to IBM Cloud Object Storage System and its applications (powered by Cleversafe)
Before 2015, IBM offered two "Object Storage" products: IBM Spectrum Scale and IBM Spectrum Archive, and I was constantly having to compare and contrast IBM products to Cleversafe.
Not any more! With the IBM acquisition of Cleversafe, IBM now offers all three!
This session explained all of the features and functions of IBM Cloud Object Storage System, available as software, as pre-built systems, including a VersaStack CVD, and as Storage-as-a-Service (STaaS) in the IBM Cloud.
(IBM renamed Cleversafe DSnet to "IBM Cloud Object Storage System". I joked that if IBM ever acquired Coca-Cola, they would probably rename their signature soft drink as the "Brown Carbonated Sugar Liquid", or BroCarb SugarLiq for short!)
In the evening, we had a nice reception with food and drink at the Solution Center. The Solution Center has booths where all of the IBM and Business Partners have their experts answering questions and handing out brochures of their offerings.
This week, I am presenting at the IBM Systems Technical University for Storage and POWER Systems. This conference is being held in New Orleans, Louisiana, October 16-20, 2017, at the beautiful Hyatt Regency.
Storage: Opening Keynote Session
Clod Barrera, IBM Distinguished Engineer and Chief Technical Strategist, and Craig Nelson, Brocade, co-presented this session.
Clod Barrera presented the latest in Storage trends. He organized his talk around four layers: Infrastructure, Storage Management, Storage Systems, and Storage Media.
Craig Nelson presented the changes in Storage Networking. With advancements in both server and storage bandwidth, the storage network becomes the bottleneck. Insane flash storage performance requires insanely fast storage networks. IBM offers Brocade-manufactured switches and directors that now support 32Gbps. Combining four paths together, these can offer Interswitch Connection Links (ICL) at 128 Gbps.
The Seven Tiers of Business Continuity and Disaster Recovery
With the recent Hurricanes Harvey, Irma, Jose, and Maria, my topic on Business Continuity and Disaster Recovery (BC/DR) was well attended. I have been working in BC/DR for most of my career, including the "High Availability Center of Competency" or HACOC.
Back in 2005, I was here in New Orleans, the week before Hurricane Katrina, for the IBM Storage Symposium, August 22-26, the predecessor of this conference. I left on Friday, August 26, and the storm hit that weekend.
I met with people photographing all the buildings, in hopes of selling "before pictures" to insurance companies and filmmakers after the hurricane hit. Film director Spike Lee bought much of this footage. Smart!
However, natural disasters like hurricanes, tornados and floods represent less than 20 percent of all disasters. The majority of disasters, nearly 75 percent, arise from electrical power outages, human error, system failure and ransomware.
IBM FlashSystem Overview
Andy Walls, IBM Fellow, CTO and Chief Architect, and Brent Yardley, IBM STSM and Master Inventor, co-presented this session. Andy started with FlashSystem 900, V9000 and A9000/R.
The room was packed with standing room only, and Andy was answering so many questions that he never finished his portion, and Brent Yardley never had a chance to cover his portion.
Fortunately, there were "deep dive" sessions on FlashSystem 900, V9000 and A9000/R later in the week, so Andy suggested everyone go to lunch and attend these other more detailed sessions.
IBM introduces the eighth generation of Linear Tape Open (LTO) tape drive technology, with corresponding support in all of the IBM tape libraries.
Fellow blogger Jon Toigo, of Drunkendata.com fame, came to Tucson to interview Lee Jesionowski, Ed Childers, Calline Sanchez, and me about this. Check out the various segments on YouTube or his website.
The LTO-8 cartridges are not yet available, but when they are, they will hold 12 TB raw capacity, or 30 TB effective capacity at 2.5-to-1 compression ratio. The new drives are N-1 compatible to read/write LTO-7 cartridge media.
Previous generations also supported reading N-2 generation tapes; LTO-8 breaks from that tradition and will not support LTO-6 cartridges at all.
LTO-8 comes in both "Full Height" (FH) and "Half Height" (HH) models. The FH models can transfer data at 360 MB/sec (or 900 MB/sec effective at 2.5-to-1 compression), and the HH models at 300 MB/sec (or 750 MB/sec effective at 2.5-to-1).
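The capacity and throughput figures above are simply the native numbers scaled by the assumed 2.5-to-1 compression ratio:

```python
def effective(native, ratio=2.5):
    """Effective capacity or throughput at the assumed compression ratio."""
    return native * ratio

print(effective(12))    # TB per LTO-8 cartridge -> 30.0
print(effective(360))   # MB/sec, full-height drive -> 900.0
print(effective(300))   # MB/sec, half-height drive -> 750.0
```

Keep in mind that 2.5:1 is a marketing assumption; already-compressed or encrypted data will not compress further on the drive.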
LTO-8 supports IBM Spectrum Archive and the "Linear Tape File System" (LTFS) tape format for self-describing long-term retention of data.
Compliance storage has come under many names. For tape and optical media, we had "WORM" for Write-Once, Read-Many. For disk-based storage, we had "Fixed-Content" or "Content-Addressable Storage". For file systems, we had "Immutable Storage".
Fortunately, the clever folks who crafted the SEC 17a-4 law came up with an umbrella term: "Non-Erasable, Non-Rewriteable" (NENR) that covers all storage media, from WORM tape and optical, to tamperproof flash, disk and cloud-based solutions.
The other major change is "Concentrated Dispersal" mode, or "CD mode" for short. Erasure Coding works best when data is dispersed across three or more sites. When this happens, you can lose all of the data at one site, and still have 100 percent access to all data from the other locations.
IBM's "Information Dispersal Algorithm", or IDA for short, scatters slices of data across many servers. This is great for high availability and performance, but often meant that the minimum deployment was 500 TB or greater.
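The dispersal idea can be illustrated with a toy 2+1 XOR parity scheme. This is only an illustration: the real IDA is a general k-of-n erasure code spread across Slicestor nodes, not simple parity.

```python
def make_slices(data: bytes):
    """Toy 2+1 dispersal: two data slices plus an XOR parity slice.
    Any ONE slice can be lost and the data rebuilt from the other two.
    (The real IDA is a general k-of-n erasure code, not simple parity.)"""
    if len(data) % 2:
        data += b"\x00"                  # pad to an even length
    half = len(data) // 2
    a, b = data[:half], data[half:]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return [a, b, parity]

def rebuild(slices):
    """Recover the original data when one slice is missing (None)."""
    a, b, parity = slices
    if a is None:
        a = bytes(x ^ y for x, y in zip(b, parity))
    if b is None:
        b = bytes(x ^ y for x, y in zip(a, parity))
    return a + b
```

With a true k-of-n code, the same principle extends to losing an entire site and still reading 100 percent of the data from the remaining locations.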
Not every organization is ready for such a large purchase. Some want to just [dip their toe in the water] with something smaller and less expensive. Well, IBM delivered!
The new CD mode means that instead of one slice per Slicestor node, you can pack lots of slices on each node. Each slice will be on distinct disk drives, for high availability.
Entry-level configurations now can be as little as 72-104 TB, across 1, 2 or 3 sites.
Next month, I will be presenting at the IBM Systems Technical University for Storage and POWER. This conference will be held in New Orleans, Louisiana, October 16-20, 2017.
Instead of a "Meet the Experts" Q&A panel, this event will feature a "Poster Session". I had the pleasure of doing one of these down in Melbourne, Australia last month. For those who missed it, here are my blog posts:
By now, you have already decided on a title and abstract of your poster. You will need to figure out a quick and easy way to explain your poster, and as always, shorter is better. It reminds me of a famous quote:
"Sorry this letter is too long...
If I had more time, I could have made it shorter!
-- Blaise Pascal
The event team asked me to write some instructions on the mechanics of how to put together a poster for this, since it is new for many people. I use Microsoft PowerPoint 2013 and ImageMagick tools to accomplish this.
Arrangement of Slides
Posters for the IBM Systems Technical University in New Orleans will be 24x36 inches in size. If you print out your poster in 8.5x11 inch standard size letter pages, that would be eight slides, 2 columns, 4 rows. This leaves one inch border all around.
The event will provide both the foam board and double-sided sticky tape. You can bring your poster as a stack of Letter-sized pages in a folder, and assemble your poster at the event.
You can increase the size of an individual image to 17x22 inches, to offer the "Big Picture" view. Basically, we can take a standard 8.5x11 Letter size page, expand it onto four separate pages, and then put them on the poster! I will show you how in the steps below.
Lastly, you can have two big slides. If your poster is organized as "Before/After" or "Problem/Solution" then this arrangement could be perfect for you.
Setting Custom Paper Size on PowerPoint
In Melbourne, I had to use European A4 standard paper, and had to figure out how to do this in PowerPoint. I was surprised to learn that the PowerPoint default is 4:3 ratio of 10x7.5 inch, and that this is stretched to be whatever paper size you print on.
The difference is slight, but I prefer [WYSIWYG], so we will change the slide to "Custom size" and force it to 8.5x11 inches, with "Landscape" orientation. This will avoid anything looking stretched or squished on the big poster.
Converting a PowerPoint Slide to PNG Image file
If you would like to resize one or more of your PowerPoint slides, you will need to save those slides as images. Select "File" and "Save As" and as the format, choose "PNG" format. You can also select GIF or JPG, but I prefer PNG.
You can export all of your slides as images, in which case it will create a folder and number each slide individually. Or, you can select "Just This One" for the current slide.
By default, it will use the same name as your PPT file, just change the extension to PNG. I suggest you name the file something meaningful to you. In my examples below, I use "small.png" as the file name.
I am using PowerPoint 2013, which defaults to 96 dpi. So, an 8.5x11 paper becomes 1056x816 pixels in size.
If you have PowerPoint 2003 or higher, you can change the Windows registry to specify image resolutions. Not recommended for the faint of heart. Or anyone else. But here's the deal if you want to try (if the following doesn't make any sense, it might be better not to mess with the registry):
Quit PowerPoint if it's running
Navigate to HKEY_CURRENT_USER\Software\Microsoft\Office\X.0\PowerPoint\Options
(For X.0 above, substitute 16.0 for PowerPoint 2016, 15.0 for PowerPoint 2013, 14.0 for PowerPoint 2010, 12.0 for PowerPoint 2007, and 11.0 for PowerPoint 2003.)
Add a new DWORD value named ExportBitmapResolution and set its DECIMAL value to the DPI value you want (for example, 300 means 300 dots per inch)
Close REGEDIT, start PowerPoint and test. At 300 dpi, your exported files will be 3300x2550 pixels instead.
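The exported pixel dimensions are simply inches times dots-per-inch, which you can sanity-check yourself:

```python
def export_pixels(width_in, height_in, dpi=96):
    """Pixel size of a slide exported as PNG: inches times dots-per-inch."""
    return round(width_in * dpi), round(height_in * dpi)

print(export_pixels(11, 8.5))        # landscape Letter at 96 dpi -> (1056, 816)
print(export_pixels(11, 8.5, 300))   # after the registry change  -> (3300, 2550)
```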
Resizing and splitting up PNG Image files
To expand and chop the slide into four letter-sized pages, we will use [ImageMagick], an open source collection of command line utilities you can download for free. The first utility, "identify", will confirm the pixel size of your PNG image. Replace "small.png" with whatever you named your PNG image above. Next, "convert" with the "-resize" option doubles the image into a new file, "big.png".
Lastly, we crop the "big.png" image we just created into four smaller pieces. Each piece will be exactly the same size as your original image! The files will be named big_0.png, big_1.png, big_2.png and big_3.png.
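Collected in one place, the command sequence looks something like the following. The small Python list is only for illustration; run the commands at your own command prompt. File names follow the examples above, and "-resize" and the "-crop 2x2@" tile syntax are standard ImageMagick options:

```python
# ImageMagick steps for the poster workflow; run these at a shell prompt.
# "small.png" / "big.png" follow the naming used in the text above.
commands = [
    "identify small.png",                             # 1. confirm the pixel size
    "convert small.png -resize 200% big.png",         # 2. double both dimensions
    "convert big.png -crop 2x2@ +repage big_%d.png",  # 3. split into 4 equal tiles
]
for cmd in commands:
    print(cmd)
```

The "2x2@" form tells ImageMagick to cut the image into a 2-by-2 grid of equal tiles, and "%d" numbers the output files big_0.png through big_3.png.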
Since the resulting four pieces are exactly the size of a page, you can put them back into your PowerPoint deck. Create four blank slides, select Insert then Pictures. Insert each picture (big_0.png, big_1.png, big_2.png, and big_3.png) as a separate page.
You can print this out and bring it with you to the event, or send it to someone to have them print it for you.
Upload files to IBM@Box
This next step is completely optional, but I found it adds a nice touch. As an IBMer, you can upload your presentation, and any documents, whitepapers or other materials, to [IBM@Box]. Create a directory that is unique to you, such as your last name and the conference. For example, I have "Pearson-STU-NOLA-2017" as my folder name.
You can create a "URL Link" to this folder. Select "Share", then "Share Link" to create a dialog box. It is important to specify "People with this link" if you want those outside of IBM, such as clients and IBM Business Partners, to have access.
Press the little "gear" button on the upper right, and it gives you options to customize the URL. Normally the URL is some long random sequence of characters, but you can rename it to something meaningful and easier to remember.
Generate a QR Code
Since you have a URL Share Link for your files on IBM@Box, you can generate a QR Code for this link, and include on your poster!
There are several online websites that can generate a QR Code for free. I use [QRme.com] in this example. Go to the website, paste in the URL, and press the "Generate" button.
Once the QR Code is generated, right-click and "Save Image" to a file on your hard drive. This image can be inserted as a picture, like we did above, onto any slide. You can resize it as needed.
In Melbourne, one of the posters had the QR Code at the top with the title, where it was too high to scan comfortably with a smartphone. For this reason, I recommend putting the QR code in the center or lower right corner of your poster, between shoulder and waist height for the audience, so it is comfortable to scan.
I am looking forward to going back to New Orleans to speak at this conference!
Well, it's Tuesday again, and you know what that means? IBM Announcements!
IBM announced a new product, IBM Spectrum Protect Plus. To understand why, I will need to discuss a bit of history related to Data Protection.
(FCC Disclosure: I work for IBM. This blog post can be considered a "paid celebrity endorsement" for IBM Spectrum Protect, IBM Spectrum Protect Snapshot, IBM Spectrum Protect for Virtual Environments, and IBM Spectrum Copy Data Management products. I was not paid in any manner to promote Geoffrey Moore's book mentioned below.)
IBM Spectrum Protect was originally developed as the Workstation Data Save Facility (WDSF) back in the 1980s, back when Personal Computers were just getting deployed.
I started in 1986 developing mainframe software, so we all had bulky 3270 terminals. When our area was offered 120 PCs to replace them, I was tasked with determining how to roll these out, 24 at a time, over five months.
My job was to determine who would get a PC in the first round, the second round, and so on. I handed out a simple one-page survey, asking everyone basic questions: Are you familiar with Personal Computers? Do you have one at home? Are you comfortable using a mouse? My plan was to give PCs to those most familiar with them sooner, and to those less familiar in later rounds.
However, it was my final question that sealed the deal:
How soon do you want a PC to replace your 3270 terminal?
[ ]Immediately [ ]Next month [ ]No Hurry [ ]Put me last [ ]Never!
Surprisingly, I had roughly 24 folks choosing each option on this last question, which made my decision process easy for me!
(In his book Crossing the Chasm, fellow author Geoffrey Moore would come up with similar groups: Innovators, Early Adopters, Early Majority, Late Majority, and Laggards. This is a great book and I highly recommend it!)
Of course, we used WDSF to back up the files. WDSF would later morph into DFDSM, then ADSM, then TSM, and now it is called IBM Spectrum Protect.
Over the decades, the product has evolved from just backing up data on personal computers. IBM Spectrum Protect can now protect all kinds of machines, from tablets, mobile devices, and smartphones, to virtual machines, databases, and application servers in the data center.
Besides creating backup versions of files, IBM Spectrum Protect can also migrate older, less frequently used files to less expensive media, as well as archive files for long-term retention.
Different files can be assigned to different "management classes" that determine policies to be applied and enforced on the backup, migration and archive copies. For backups, this includes how many versions to keep while the file exists, how many versions to keep after the original file is deleted, how long to keep those inactive versions.
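The policy knobs described above can be pictured as a simple mapping. The key names here are made up for readability, loosely modeled on Spectrum Protect copy-group settings; they are not the actual parameter syntax:

```python
# Illustrative management-class policy. Key names are made up for
# readability -- they are NOT actual Spectrum Protect parameter syntax.
management_class = {
    "versions_while_file_exists": 4,   # versions kept while the file exists
    "versions_after_file_deleted": 2,  # versions kept after the file is deleted
    "retain_inactive_days": 30,        # how long inactive versions are kept
}

def versions_to_keep(policy, file_still_exists):
    """How many backup versions the policy retains for a given file."""
    key = ("versions_while_file_exists" if file_still_exists
           else "versions_after_file_deleted")
    return policy[key]
```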
Instead of the grandfather-father-son [backup tape rotation], full-plus-incremental, or full-plus-differential schemes employed by other backup software, IBM Spectrum Protect has a unique "Incremental-Forever" approach that reduces backup time, LAN bandwidth requirements, and backup storage media.
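The incremental-forever idea can be sketched as a scan for files changed since the last run. This is a toy illustration only; Spectrum Protect tracks file state in its server database rather than by comparing timestamps alone:

```python
import os

def incremental_candidates(root, last_backup_time):
    """Sketch of incremental-forever selection: after the one initial
    full backup, each run sends only files changed since the last run.
    (A toy illustration -- Spectrum Protect tracks file state in its
    server database, not just by comparing timestamps.)"""
    changed = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > last_backup_time:
                changed.append(path)
    return changed
```

Because no periodic full backup is ever repeated, both the backup window and the media consumed stay proportional to the daily change rate.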
While most companies still back up to tape, IBM Spectrum Protect can back up to flash, disk, tape, virtual and physical tape libraries, object storage, and even to public Cloud Service Providers such as IBM Bluemix, Amazon S3, and Microsoft Azure.
IBM Spectrum Protect offers both client-side and server-side data footprint reduction technologies, including compression and deduplication, eliminating the need for expensive, single-purpose data deduplication devices like Dell-EMC Data Domain.
IBM Spectrum Protect is recognized as a leader in Data Protection software, able to scale up to meet the demands of the largest enterprises. However, the parameters and options that IBM Spectrum Protect has acquired over time have been compared to the cockpit or flight deck of an airplane!
For clients with Virtual Machines, IBM offered three solutions:
IBM Spectrum Protect Snapshot
Formerly called Tivoli Storage FlashCopy Manager (FCM), [IBM Spectrum Protect Snapshot] takes frequent, near-instant, non-disruptive, application-aware backups and restores for SAP, Oracle and Db2. It can also be used for VMware using advanced snapshot technology, on both IBM and non-IBM storage systems.
IBM Spectrum Protect Snapshot can be used as a stand-alone product, or integrated with IBM Spectrum Protect to move the snapshots and FlashCopy targets to other storage media.
IBM Spectrum Protect for Virtual Environments (VE)
Formerly called IBM Tivoli Storage Manager for Virtual Environments, [IBM Spectrum Protect VE] protects both VMware and Microsoft Hyper-V virtual machines.
IBM Spectrum Protect VE safely moves backup workloads to a centralized IBM Spectrum Protect server and enables administrators to create backup policies or restore virtual machines with just a few clicks. It allows you to protect data without a traditional backup window.
IBM Spectrum Copy Data Management makes copies available to DBAs, Developers and VM administrators when and where they need them. While this product is focused on DevOps and Dev/Test workflows, it can also be used to automate and schedule snapshots that can serve as backups.
Surprisingly, many companies do not take advantage of these solutions. Even clients who already have IBM Spectrum Protect deployed either (a) simply use Spectrum Protect clients on individual VM guests, or (b) use third-party products to back up VMs outside of the Spectrum Protect infrastructure.
"Problems cannot be solved with the same mind set that created them."
-- Albert Einstein
Smaller clients want something simpler to deploy, and easier to use and administer. Rather than simplify the products above, a process called "kneecapping" in the IT industry, IBM opted for a clean slate, [start-from-scratch] approach.
The result is IBM Spectrum Protect Plus, new software announced as a preview last Wednesday, in time for this week's VMworld 2017 conference in Las Vegas and next month's VMworld conference in Barcelona, Spain.
IBM Spectrum Protect Plus is available as either a stand-alone product, or integrated with IBM Spectrum Protect for long-term protection. It is focused exclusively on VMware and Hyper-V environments. General Availability is expected some time in 4Q 2017.
Key features include:
Simple to install in less than 15 minutes, configured in an hour
Easy to use by DBA, VM or application administrator. No IBM Spectrum Protect skills required for stand-alone deployment
Pre-defined Gold, Silver and Bronze policies are ready to use. Additional customized policies can be configured as needed
Supports both application-aware and crash-consistent methods
Data Footprint Reduction technologies including compression and deduplication
Instant data recovery to support DevOps, Dev/Test, Reporting, Analytics and Training
Granular search and restore of entire Virtual Machines, VMDKs, and individual files
As for the name, I would have preferred "IBM Spectrum Protect Basic Edition". The "Plus" implies that the new product is more advanced, or offers more features, than the existing Spectrum Protect editions.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
Enhanced Spectrum Virtualize software
IBM announces v8.1 of the Spectrum Virtualize software that works with the latest models of SAN Volume Controller, Storwize and FlashSystem V9000 products.
This v8.1 release will not support older hardware. For the following older models, continue to use the v7.8.1 release until end of service and support:
SAN Volume Controller, CF8 and CG8 models
FlashSystem V840, AC0 model
Storwize V7000 Gen 1, models 1xx, 2xx and 3xx
Storwize V5000 Gen 1, models 24 C/E, 12 C/E
Storwize V3500 and V3700, all models
Hot Spare Node
Higher availability is provided by automatically swapping a spare node into the cluster when the cluster detects a failing node. Following the N-port ID Virtualization (NPIV) features introduced in the previous release, this new feature is available for SVC and FlashSystem V9000.
Spare nodes can also be extremely helpful with code updates and node refreshes. Update the code load on a spare node, and use this to roll forward the other nodes. In this manner, you are never in "single node" mode!
You can have up to four spare nodes per SVC cluster, and three spare nodes per FlashSystem V9000 cluster. These spares are "site-aware" to support Enhanced Stretch Cluster and HyperSwap configurations.
This feature requires Fibre Channel switches, so it won't work if you are using direct-attached SAS, iSCSI or FC point-to-point connections.
256 GB memory support
Spectrum Virtualize will now take full advantage of system memory, rather than just the first 64 GB. A fixed 12 GB is set aside for write cache, the rest is used for operating system code, read cache, and compression work space.
IBM supports up to 128 GB per canister on the Storwize V7000 Gen2+ models, and up to 256 GB for SAN Volume Controller SV1 and FlashSystem V9000 models.
On two-socket nodes, IBM previously dedicated specific cores to perform I/O operations, and others for Real-time Compression. With the v8.1 release, the team implemented a more sophisticated multi-socket, multi-core, multi-threaded approach. Internal tests showed this improved performance 36 to 50 percent on SAN Volume Controller DH8 and SV1 models.
Enhancements for Encryption
IBM Security Key Lifecycle Manager (SKLM) support has been expanded to support up to three Key Server clones for a total of four Key Servers (one master and three clones).
You can use both central key management (SKLM servers) and local key management (USB keys physically attached to the back of the controllers) at the same time. This can be useful to transition from one method to the other, or to use both concurrently for added flexibility.
Both SKLM and USB-based keys can also be used to encrypt FlashCopy targets written to the Cloud with Transparent Cloud Tiering.
Remote support assistance
IBM support engineers can perform system or upgrade recoveries over secure support sessions. This enables remote concurrent upgrades to be done securely, and is available only for clients who purchase Enterprise Class Support.
Since you are already sending periodic inventory updates as part of "call home" support, you might as well let IBM review the configuration and provide customized recommendations!
There is no additional cost, and this provides an additional review to catch any potential problems, single points of failure, or other issues that could be a problem later on.
New GUI
Based on the success of the Hyper-Scale Manager GUI developed for the FlashSystem A9000, the new Spectrum Virtualize GUI offers an updated look and feel, with new fonts, colors, banner, navigation, dashboard, and other interactive elements.
New Pause Feature for Concurrent Code Update (CCU)
The Pause function will allow users to pause CCU indefinitely. This pause allows customers to do any problem determination, such as multi-pathing issues, or simply to pause the upgrade, take a break for lunch, then resume the upgrade when convenient to do so.
There were also enhancements to the hardware models themselves.
IBM FlashSystem V9000
The IBM FlashSystem V9000 has two enhancements. First, there is an option to add a pair of AC3 nodes without AE2 enclosures to scale performance.
The second is the ability to add a single AC3 node for use as a hot spare node. You can have up to three of these extra AC3 spares per V9000 cluster.
IBM Storwize V7000
IBM Storwize V7000 Gen2+ offers increased cache of up to 256 GB per controller, 128 GB per canister. This follows on the heels of the recent increase to 256 GB per node for the SAN Volume Controller and FlashSystem V9000. More memory means higher cache hit ratios for faster performance, and more compressed volumes.
900 GB 15K rpm 2.5-inch SAS drive
IBM SAN Volume Controller (SVC) and the Storwize family deliver an additional option with a 900 GB 15K rpm 2.5-inch SAS drive.
(Honestly, I didn't think we would see larger capacity 15K drives, but IBM was qualifying these for the DS8000 boxes, and it made sense to add them to the Spectrum Virtualize hardware offerings as well.)
This week, I was in beautiful Melbourne, Australia for IBM Systems Technical University.
PowerAI overview and Cognitive Solutions on POWER
Anand Subramaniam, IBM Technical Specialist, presented this session on PowerAI. IBM packaged a collection of Machine Learning libraries, optimized them for the POWER8 chip set, and made the entire package freely available for download as "PowerAI".
IBM is also working on a priced, value-add collection called "PowerAI Vision".
Hadoop Infrastructure solutions and Point-of-View
Alexis Giral, IBM Executive Storage Architect, presented the benefits of IBM Spectrum Scale using a simple example. Suppose you are gathering 40 TB of sensor readings per day. How many TB of storage would you need to hold two years' worth of data?
Traditionally, HDFS maintains three copies of the data. A recently added feature, "HDFS-EC", provides erasure coding to reduce the overall storage requirements. Giral showed a chart comparing the capacity needed under three schemes: traditional HDFS 3-way replication, HDFS-EC with 5+4 erasure coding, and IBM Spectrum Scale ESS with 8+3 erasure coding.
And this is assuming all the data is hot. If you decide to keep only 30 percent hot, perhaps the most recent eight months, and the other 70 percent on colder storage, you may reduce your storage requirement costs even further.
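Running the numbers on Giral's example (my own arithmetic, assuming 2 × 365 days of uncompressed data) shows why the protection scheme dominates the storage bill:

```python
daily_tb = 40
raw_tb = daily_tb * 2 * 365        # two years of sensor data: 29,200 TB

def stored_tb(user_tb, data_strips, parity_strips):
    """Physical capacity needed under a data+parity protection scheme.

    Three-way replication can be modeled as 1 data strip plus 2 extra
    copies, giving a 3.0x overhead.
    """
    return user_tb * (data_strips + parity_strips) / data_strips

hdfs_3way = stored_tb(raw_tb, 1, 2)   # 3x replication:      87,600 TB
hdfs_ec   = stored_tb(raw_tb, 5, 4)   # 5+4 EC, 1.8x:        52,560 TB
ess_8p3   = stored_tb(raw_tb, 8, 3)   # ESS 8+3, 1.375x:     40,150 TB
```

The same 29,200 TB of sensor data needs less than half the physical capacity on 8+3 erasure coding compared to 3-way replication, before any hot/cold tiering savings are counted.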
IBM Cloud Object Storage - Redefining backup infrastructure
Maciej "Mac" Lasota presented the use of IBM Cloud Object Storage as a backup repository. While IBM Spectrum Protect is the preferred choice, IBM COS also works well with Commvault and NetBackup.
He listed some of the challenges that companies have with backups to tape, and how IBM COS addresses these challenges.
(While IBM COS is three to four times more expensive than tape, it is a luxury many clients can now afford!)
He wrapped up the session showing five different deployments that he worked on for clients.
New Generation of Storage Tiering: Simpler Management, Lower Costs, and Improved Performance
With ever-changing amounts of storage, it is hard to find metrics that are consistent year to year. Fortunately, I found I/O density to be the metric to focus my efforts on, armed with real data from Intelligent Information Lifecycle Management (IILM) studies done at various clients. From that, I was able to talk about storage tiering on three fronts:
IBM Easy Tier on DS8000 and Spectrum Virtualize to provide tiering within a system.
IBM Virtual Storage Center (VSC) to provide tiering between systems in a data center.
IBM Spectrum Scale, Spectrum Archive and IBM Cloud Object Storage System to provide global tiering across multiple locations, and across flash, disk, tape and cloud resources.
Spectrum Scale for Volume, File and Object Storage
IBM Spectrum Scale was formerly called GPFS and has been around since 1998. I am glad it was renamed, as GPFS suffered from "guilt by association" with other file systems, AFS, DFS, XFS, ZFS, and so on.
Spectrum Scale does so much more: it supports volume, file and object level access; supports POSIX standards for Windows, AIX and Linux; supports Hadoop and Spark with a 100 percent compatible HDFS Transparency Connector; and supports NFS, SMB and iSCSI protocols, as well as OpenStack Swift and Amazon S3 object-based access.
Initially designed for video streaming and High Performance Computing (HPC), IBM has extended its reach to work in a variety of workloads across different industries. More than 5,000 production systems are running at client locations.
Beating Ransomware! A deep exploration of threat vectors for applications and storage
Andrew Greenfield, IBM Global Engineer for Spectrum Storage, presented on the threat of ransomware. In addition to being an expert in various storage technologies, he is also an expert in security.
If you think security is just setting up your network firewalls and turning on data-at-rest encryption on your storage, you are sadly mistaken. Many of the threat vectors come from the inside: disgruntled employees or temporary contractors who plant viruses, bombs and worms that may not activate until long after they leave.
There is now a category of products, called security information and event management (SIEM), that provides real-time analysis of security alerts generated by network hardware and applications. Two that Andrew was familiar with were IBM QRadar and Varonis. These identify standard and abnormal behavior patterns among users.
Andrew feels products like Splunk do a great job of collecting information, but don't do the analysis that QRadar or Varonis do.
I was very pleased with this conference. This was a concentrated 3-day event, but everyone I talked to was happy with the format, and felt their time spent worthwhile!
This week, I was in beautiful Melbourne, Australia for IBM Systems Technical University. On Wednesday evening, we had a poster session.
(I have so many photos that I will split this post up into topics. This post will focus on IBM Z systems, see my other posts for storage and IBM Power systems.)
Topics can be anything that is of interest to your peers and colleagues. It can be research-related, a specific solution you implemented or an interesting customer case you want to share.
Linux Scalability at a Small Scale (or, An Adventure In Minimalist Multitudinousness)
Vic Cross, IBM Senior Systems Engineer, used the Ganglia Monitor System to generate traffic and measure 1,680 Linux guests on a single IBM Z mainframe LPAR with only 16GB of memory! His poster consisted of 18 pages of material, a mix of traditional presentation slides, screen shots of web pages, and densely detailed performance results.
Ganglia is a scalable distributed monitoring system for high-performance computing systems such as clusters and Grids. It is based on a hierarchical design targeted at federations of clusters. It leverages widely used technologies such as XML for data representation, XDR for compact, portable data transport, and RRDtool for data storage and visualization. It uses carefully engineered data structures and algorithms to achieve very low per-node overheads and high concurrency. The implementation is robust, has been ported to an extensive set of operating systems and processor architectures, and is currently in use on thousands of clusters around the world. It has been used to link clusters across university campuses and around the world and can scale to handle clusters with 2000 nodes. Learn more at [http://ganglia.sourceforge.net/]
Spectrum Scale 2 site cluster
Antony Steel, IBM Senior Consulting IT Specialist, presented an option to configure a two-site GPFS (Spectrum Scale) "almost active-active" cluster when a third site is not available. This option requires only simple administrative tasks to make the DR filesystem available should the production site fail. Spectrum Scale runs on IBM Z, IBM Power and x86 servers.
The poster used 13 traditional landscape slides, printed on what appears to be A4 paper. A4 is 297 mm wide, so three side by side exceeds the 841 mm width of the poster foam board. These were arranged with a title slide on top, and then 12 content slides in four rows of three.
While I was glad that someone else had a QR code on their poster, the placement was way at the top, making it difficult for anyone to actually scan. I thought of this, and had mine at waist level on the middle right side of my poster.
Life is better with Linux
I couldn't resist taking a photo of the back of this guy's tee-shirt, which says "Life is better with Linux"
In effect, tee-shirts can also be "posters", although that would make for an awkward "poster session" if everyone wore them. Pointing at your own chest would be weird, and pointing to your own back nearly impossible!
In 1999-2001, I helped port Linux to the IBM S/390 mainframe architecture by testing and debugging the disk and tape device drivers. I was the first to install Linux on an IBM mainframe in Tucson, AZ!
I would then go on to work with SAN Volume Controller, Tivoli Storage Manager (now called Spectrum Protect), Tivoli Storage Productivity Center (now called Spectrum Control), and the General Parallel File System (GPFS, now called Spectrum Scale). All of these run on Linux!
I would become the "Linux storage expert" at conferences like SHARE and GUIDE. While my co-workers in DFSMS and z/OS felt Linux was just a fad, I predicted that Linux would be a major force in the IT industry. I was right: not only does Linux run on all of our IBM Z and Power servers, it is the underlying operating system for nearly all IBM storage devices.
Today, I run Linux directly on my laptop, using a Windows KVM guest image as needed for specific projects or applications.
Erina Araki poses for a photo with one of the attendees, Marco. Erina was the organizer for this poster event, and was my primary contact to answer all of my questions. I think the poster session was a big success!
This week, I was in beautiful Melbourne, Australia for IBM Systems Technical University. On Wednesday evening, we had a poster session.
(I have so many photos that I will split this post up into topics. This post will focus on posters related to IBM Power systems. See my other posts for storage and IBM Z.)
Ding! IBM i Systems Management redefined with SQL
A poster presentation should trigger question-and-answer sessions, and the exchange of ideas and information regarding your topic.
Scott Forstie, IBM Db2 for i Business Architect, coined the phrase "Scott's Query Language", focusing on Data Services for the Db2 database on the IBM i operating system. His design took several charts, printed in landscape mode, and organized them in three columns of four charts each. His "title" page was printed twice, and placed on the left and right sides.
Scott explained GROUP_PTF_CURRENCY, LICENSE_EXPIRATION_CHECK and ACTIVE_JOB_INFO. I am not familiar with any of these things, but I enjoyed how passionate Scott was. He even had business cards for people to get more information at: [ibm.biz/Db2foriServices]
IBM Spectrum Scale with Hortonworks Data Platform
Chris Maestas, IBM Global Senior Solutions Architect, IBM and Par Hettinga, IBM Global SDI Enablement Leader, created this poster.
Hortonworks is a leading innovator in the industry, creating, distributing and supporting enterprise-ready open data platforms and modern data applications. They focus on driving innovation in open source communities such as Apache Hadoop, NiFi, and Spark. Their product, Hortonworks Data Platform (HDP), runs on both x86 and Power systems.
The poster design was clean, with basically three enlarged presentation slides. On the top, it explains that Hortonworks now supports IBM Spectrum Scale for storage of files and objects to be analyzed by Hadoop. On the bottom left, it shows how Spectrum Scale eliminates the ingest-and-discard approach used by other HDFS-based systems. On the bottom right, it shows an architecture diagram to build your own "data lake".
Optimizing Power Performance with Affinity Groups – Real World 40Gbit LPM Results / Lessons Learnt
This poster employed a unique 1-6-6 design. Top slide was for title and author: Stephen Diwell, Senior Power Systems Engineer, DXC Technologies
In the middle, the poster had six traditional text-only presentation slides, arranged in two rows of three. LPAR Affinity Groups give you the ability to hint to the Hypervisor that you would like a group of LPARs to be located on processor chips that are closer to each other. Use Affinity Groups to help the Hypervisor place LPARs nearer to the VIO servers. LPARs that share common resources, like the Fibre Channel and Ethernet adapters within a VIO server, obtain better performance and adapter throughput the closer they are. The lighting on some of these posters was really poor, and perhaps too dark to read small fonts like this.
At the bottom were performance bar chart results, in three rows of two. I liked the use of color for the graphs. For a network job with 8 threads, Stephen achieved a 54% increase in network bandwidth for LPARs communicating on the same chip compared to those communicating between nodes in the E800 frame.
Sundata Power Server Cloud offering
Leave it to the marketing department of a local cloud service provider to turn their poster into an advertising billboard! This one was presented by Kon Kakanis, Managing Director, Sundata Pty Ltd
The Sundata poster encouraged people to move their AIX, IBM i and Linux on POWER workloads to their "PowerCloud" platform. They summarized their advantages into four bullet points:
Reliable and cost-effective partnership
Advice, Guidance and Support
Migration, management and support services
Located in Sydney and Brisbane
Founded in 1986, Sundata is an Australia-based organization that helps its clients transform into the Cloud, select and deploy IT hardware, and keep the lights on with ongoing support and managed services. More than 100 corporations, government departments and schools enjoy a close and ongoing relationship with them.
The large fonts, simple design, and the cute cat-in-a-cape logo in the lower right corner captured people's attention!
In between reading posters and talking to everyone, it was good to take a quick look out the floor-to-ceiling windows. At 297 meters, Eureka Tower has some amazing views. Here is one of the Yarra river and Central Business District.
This week, I was in beautiful Melbourne, Australia for IBM Systems Technical University. On Wednesday evening, we had a poster session. This was the first time I presented a poster session, so I was understandably very excited.
(I have so many photos that I will split this post up into topics. This post will focus on storage posters. See my other posts for IBM Power and Z systems.)
The venue was Eureka Skydeck 89, the top floor of the Eureka Tower. This tower is 297 meters tall (974 feet), and the views it afforded of the city of Melbourne were stunning.
Mo and I arrived early as I was one of the 11 finalists that got selected to present a poster. While it is a hot summer back in Arizona, it is cold here in Australia. I am glad we brought our heavy coats for the brisk 8-minute walk from our hotel, the Crown Promenade, to the Eureka Tower.
Posters are designed to present specific topics in a concise and interactive way to appeal to peers and colleagues at conferences and/or public displays. Everyone was given an "A0" size foam board on which to tape their poster, 841 mm wide and 1189 mm tall (roughly three feet by four feet).
Understanding Converged and Hyperconverged Systems
My design was simple. I took my summary chart from one of my presentations, and enlarged it to fit the "A0" poster size. I chose my "Pendulum Swings" presentation that explains the history of storage infrastructure, and the rise in interest in Converged and Hyperconverged Infrastructure.
In the early days of IT, storage was internal to its server. Over time, storage outgrew its container, and we started having externally attached storage, with benefits like RAID and clustered servers for high availability. Then SANs, LANs and WANs took the main stage, allowing for greater connectivity and distance.
But now, it seems the pendulum is swinging back with converged and hyperconverged systems. Converged Systems like IBM PureSystems, or VersaStack from IBM and Cisco, provide best-of-breed hardware for servers, storage and networks in a pre-cabled, pre-configured rack. With everything in a single rack, port count and cable distance limits are no longer a major concern.
Hyperconverged Systems, such as IBM Spectrum Scale, IBM Spectrum Accelerate, Nutanix or Simplivity, focus instead on offering commodity servers with internal flash and disk storage. Software-Defined Storage software is then used to glue together multiple units over a LAN infrastructure. With the huge increase in flash and disk capacities, a server with internal storage can hold many TB of data.
My poster included a "QR Code" that pointed to a link on BOX so that people could use their smartphones to access all of my presentations.
IBM Spectrum Scale with focus on Active File Management
A poster presents not all the details but the most important information.
Trishali Nayar, IBM AFM/Spectrum Scale Development from Pune India, had a poster on IBM Spectrum Scale with focus on Active File Management (AFM). She had a clean, simple design, basically two presentation slides enlarged to fill the poster size.
Active File Management (AFM) enables sharing of data across clusters, even if the networks are unreliable or have high latency. AFM allows you to create associations between IBM Spectrum Scale™ clusters, or between IBM Spectrum Scale clusters and an NFS data source. With AFM, you can implement a single name space view across sites around the world, making your global name space truly global. You can also duplicate data for disaster recovery purposes without suffering from WAN latencies.
IBM Ubiquity Storage Service for Container Ecosystems
Your audience isn't trying to replicate your solution or case -- they are simply after the basics. Take for example, this poster on IBM's Ubiquity Storage Services.
Ashutosh Mate, IBM WW Senior Solutions Architect, created this poster on storage for Containers. Not to be confused with the Containers used in Spectrum Protect container pools, or the Containers supported by IBM Cloud Object Storage!
The poster had six enlarged presentation slides. Two at the top under "Abstract" covered business need and technology overview. The two in the middle under "Ubiquity Architecture" had a connection diagram and a list of supported environments. The last two under "IBM Vision" covered customer value, use cases, and additional resources.
As people transition from monolithic applications to microservices, IT is shifting from heavy Virtual Machines to lightweight Docker containers.
The Ubiquity project enables persistent storage for the Kubernetes and Docker container frameworks. It is a pluggable framework available for different storage systems. The framework interfaces with the storage systems, using their plugins. Different container frameworks can use Ubiquity concurrently, allowing access to different storage systems.
IBM has support for Spectrum Scale, all of the Spectrum Accelerate offerings (including XIV, FlashSystem A9000/R) and all of the Spectrum Virtualize offerings (including SVC, Storwize and FlashSystem V9000).
Single-page handouts as "take-aways" were a nice extra touch.