It is funny how an article or blog post can remind me of something long, long ago.
Back in 2005, my manager, Rich Lechner, was then the Executive Advocate for a client in Chicago. While visiting that client, he asked what the client wanted most. The answer: for IBM to come in and do an "Information Lifecycle Management" (ILM) study of his IT environment. Rich agreed to send me on-site for a week.
I had done disk and tape studies of this kind before, but this time I was going to do an end-to-end evaluation of their growth and determine the best storage media for each data type.
Joining me were three "observers" from IBM Lab Services: Barbara Read, Steve Bisel and Tom Moore. As if I did not have enough pressure from the client, now I had to be "watched" while I interviewed the storage administrators and generated and reviewed reports.
At the end of the week, I had to provide the client's upper management with a list of short-term, mid-term and long-term recommendations. As a side benefit, the client decided to purchase two DS8000 storage systems, replacing their HDS equipment!
After that initial engagement, the four of us formed a team. We performed similar studies at other client locations. Barbara Read was the process expert who wrote the "Documents of Understanding". Steve was our financial expert, and used spreadsheets to show total cost of ownership comparisons. Tom was our infrastructure expert, and used Microsoft Visio to document the inventory of IT equipment, and how it was all interconnected.
I was the consultant and public speaker for the team. I was able to incorporate the work of the three others into a Powerpoint presentation. During the week, we would show initial findings to the client, and then follow it up a few weeks later with a full report.
A lot has changed in the past 13 years! First, ILM was renamed to "Storage Infrastructure Optimization" (SIO) studies. Our initial team trained dozens of other practitioners. Today, SIO studies are done all over the world.
This week -- Jan 29 to Feb 2, 2018 -- I am in New York City with other IBM Storage executives, to meet with Channel distributors and Business Partners. If you are in the NYC area, and wish to have a product briefing, or just dinner or drinks, let me know!
I believe the "T" stands for "Third generation", as we have had other 9132 boxes before. Here are the details:
Small: Just 1U in size
Ports: 8, 16, or 32
Transceivers: 32, 16, 8, and 4 Gbps
Protocols: FCP only, no FICON, FCIP, FCoE or iSCSI
Why is this important? Because the 16 Gbps and 32 Gbps transceivers support NVMe over Fabrics. Let's do a quick NVMe recap:
Last May, IBM announced that its developers are re-tooling the end-to-end storage stack to support [New Faster Protocols for Flash Storage], to boost the experience of everyone consuming the massive amounts of data now being perpetuated across cloud services, retail, banking, travel and other industries.
NVMe is a new protocol that is replacing the traditional SAS and SATA standards for solid state drives (SSD). By employing parallelism to process data simultaneously across a network of devices, clients can anticipate significantly reduced delays caused by data bottlenecks, and can move higher volumes of data within their existing flash storage systems.
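To make the parallelism point concrete, here is a toy Python model (my own sketch, not real NVMe code) contrasting a single shallow queue, roughly how SATA/AHCI behaves, with many commands in flight at once, as NVMe allows. The 64-command depth and 100-microsecond service time are assumed values for illustration:

```python
import asyncio
import time

CMD_TIME = 0.0001  # assume ~100 microseconds of device time per command

async def device_io():
    await asyncio.sleep(CMD_TIME)  # stand-in for the drive servicing one command

async def shallow_queue(n_cmds):
    # SATA/AHCI-style: one outstanding command at a time
    for _ in range(n_cmds):
        await device_io()

async def deep_queues(n_cmds, depth=64):
    # NVMe-style: up to 'depth' commands in flight concurrently
    sem = asyncio.Semaphore(depth)
    async def submit():
        async with sem:
            await device_io()
    await asyncio.gather(*(submit() for _ in range(n_cmds)))

def timed(coro):
    start = time.perf_counter()
    asyncio.run(coro)
    return time.perf_counter() - start

print("serial:   %.2f s" % timed(shallow_queue(2000)))
print("parallel: %.2f s" % timed(deep_queues(2000)))
```

The same total device work completes far sooner when many commands overlap, which is the effect NVMe's deep, parallel queues exploit.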
IBM's NVMe strategy is based on optimizing the entire storage system stack - from applications requiring the data to flash technology to store it. Through the development of its FlashSystem family of all-flash storage solutions, IBM recognized years ago that multiple technologies would be required to address the demands of ultra-low latency data processing. IBM is developing solutions with NVMe across its storage portfolio, which it plans to bring to market in 2018.
At the AI Summit New York, December 2017, IBM disclosed a [technology preview and demonstration] with the integration of IBM POWER9 Systems and IBM FlashSystem 900 using NVMe-over-Fabrics InfiniBand. This combination of technologies is ideally suited to run cognitive solutions such as IBM PowerAI Vision, which can ingest massive amounts of data while simultaneously completing real time inferencing (object detection).
Whether it is streams of data, transactional data, or batch processes, a consistent requirement is the lowest possible latency. Among the leading all-flash storage vendors, IBM, with its FlashSystem 900, has stuck to its mission of delivering low-latency all-flash arrays. Along comes NVMe-oF, which is, at its core, about getting rid of latency.
How do you take an already low latency protocol, like InfiniBand or Fibre Channel, and make it faster? Replace SCSI with NVMe and enable NVMe from server to fabric to storage array.
The FlashSystem 900 has been shipping with InfiniBand using SRP (SCSI RDMA Protocol) for many years. In the technology preview, the very same InfiniBand adapter, based on the Mellanox chip set, is instead used to support the OpenFabrics driver distribution and NVMe-oF over InfiniBand.
While the demonstration last December used InfiniBand, this is not the only transport. NVMe-oF can also be used with Ethernet, either using Internet Wide Area RDMA Protocol (iWARP) or RDMA over Converged Ethernet (RoCE). NVMe-oF over Fibre Channel is often referred to as FC-NVMe, and can drive NVMe over FCP or FCoE. Even though iWARP, RoCE and FCoE are all Ethernet-based, NVMe-oF RDMA on the first two is different from FC-NVMe over FCoE.
Why not just drive NVMe commands over standard TCP/IP? The NVMe standards board is actually investigating this, but probably won't have anything ready until 2019.
This week, IBM will be at the [Cisco Live!] event in Barcelona, Spain, talking about this new 9132T switch, as well as all of our VersaStack solutions! I won't be there, obviously, since I am in New York City, but if you are there, please send me photos! Barcelona is a wonderful city!
I hope everyone had a festive and restful winter break! I sure did!
(FCC Disclosure: I work for IBM. IBM is in our 17-day "quiet period" before it announces full-year and 4Q results on January 18. Therefore, I picked today's topic that has nothing to do with storage products, recent client wins, or financials.)
It's January, so I thought I would discuss [New Year's resolutions], a tradition in the United States in which a person resolves to change an undesired trait or behavior, to accomplish a personal goal, or otherwise improve their life. Early Romans made promises to their god Janus, for whom the month of January is named.
Sadly, most of us are unsuccessful. This is often because the resolutions were unrealistic, we failed to measure and track our progress, or we simply lost interest midyear.
From my own experience, most resolutions can be lumped into four major categories:
Get healthy: Eat better, lose weight, exercise more, sit less, quit smoking
Get organized: Stop procrastinating, pay off debt, de-clutter, switch to a better job, reduce stress
Become social: Spend more time with friends and family, meet new people, travel, volunteer for charity
Learn new skills: Learn a new language, take up a new hobby, learn to paint or create arts and crafts
A technique I use to develop presentations might help people keep New Year's resolutions. The technique, called [SCIPAB®] and created by Mandel Communications, is an elegantly simple, six-step method for starting important conversations or creating [Effective Presentations]. Since resolutions are basically "conversations with yourself", let's give it a try!
Situation: "Oh No! The boss's daughter, Nell Fenwick, is tied to the railroad tracks!"
Complication: A train approaches!
Implication: If nobody does anything soon, she will die
Position: "I, Dudley Do-Right, will save her!"
Action: Untie her from the tracks and set her free
Arrest the villain, Snidely Whiplash
Benefit: Nell lives! "Dudley Do-Right, you are my hero!"
Let's see how we can use this approach on different categories of resolutions. To get healthy, we might use:
Situation: "Oh No! My latest doctor visit indicates that my numbers are too high!"
(AMA Disclosure: I am not a doctor. This is not medical advice. Here numbers could represent any appropriate health measurement of your BMI, blood pressure, cholesterol, triglycerides, liver enzymes, or blood sugar, for example.)
Complication: I am not getting any younger.
Implication: I am at risk of heart disease, cancer, or other health issue. This situation will not go away on its own.
Position: I need to change my lifestyle to get healthy
Action: Set an appointment to see my doctor
Follow doctor's recommendations for diet, medication and exercise
Schedule follow-up appointments to measure and track progress
Benefit: My health measurements will return to normal range.
Rather than resolving to "eat less and exercise more", the above approach focuses on the end result instead of intermediate actions, and therefore has a better chance of success: getting your health measurements back within normal range.
Let's try another one. To get better organized, we might use:
Situation: "Sigh! All of my projects are over budget and behind schedule, my desk is a mess, I forget important thoughts and ideas, and I am always late to meetings."
Complication: I just got assigned to lead project XYZ.
Implication: If I am not better organized, I could lose my job.
Position: I need to change my work routine to get organized.
Action: Read David Allen's book and learn his system for "Getting Things Done" [GTD], or one of the many variants, like [GSD] or [ZTD].
Decide on where to write down and keep track of my thoughts, tasks and projects, either on paper like a notebook or [Hipster PDA], or an online mobile account like [Evernote] or [Google Keep]. Choose something that will be within arm's reach 24 hours a day.
Work with project managers to track and measure progress of project XYZ.
Benefit: Project XYZ will be completed on schedule, within budget. I might even get a bonus, raise, or promotion!
I could go on, but you get the idea.
In his WSJ article [Blame it on the Brain], Jonah Lehrer cautions against trying to change too many habits all at once. If you have multiple resolutions, try to focus on establishing new habits for one resolution for a month or two, before starting the next one. Prioritize what is most important.
The study surveyed 5,676 leaders from various industries, education, and government agencies responsible for workforce development and labor/workforce policy. This was a truly global survey, with respondents from North and South America, the Nordics, Europe, Africa, Middle East and Asia.
A gloomy picture for the future
The survey paints a gloomy picture for the future. The majority of industry executives struggle to keep their workforce skills current, in light of rapidly changing technological advancements.
Only 55 percent of the respondents felt the current education system, from grade school up to university, was adequate to ensure lifelong learning and skills development. Most blamed inadequate investment from private industry in addressing these issues.
Any problem can be solved if (a) everyone agrees what the problem is, and (b) everyone feels it is high enough priority to solve. The study found there was a disparity of what the problem is, what the priorities are, and who should solve it.
In the book Class Counts: Education, Inequality, and the Shrinking Middle Class, the author Allan Ornstein argues ".. the debate centers on whether the government should take a backseat or manage the economy, whether a free market should prevail or whether we should redefine or tinker with market forces..."
Which workplace skills are in short supply?
Can we at least agree on which workplace skills are in short supply?
Not surprisingly, industry leaders ranked the top three skills required as:
Technical core capabilities for Science, Technology, Engineering and Math [STEM]
Basic computer and software/application skills
Fundamental core capabilities around reading, writing and arithmetic (often called [the three Rs])
These are all "hard skills", referring to the knowledge, skills and competencies to perform specific tasks. Nearly 75 percent of corporate training budgets are focused on hard skills.
Government leaders, on the other hand, especially those responsible for labor/workforce policy, ranked these as the top three skills:
Ability to communicate effectively in a business context
Willingness to be flexible, agile and adaptable to change
Ability to work effectively in team environments
These would all be classified as "soft skills", referring to the people skills, social skills, communication and emotional intelligence to effectively navigate the environment and work well with others.
In fact, these government leaders felt that STEM, computer skills and "the three Rs" ranked the lowest requirements in their priority.
"Unless managers have forgotten everything they learned in Econ 101, they should recognize that one way to fill a vacancy is to offer qualified job seekers a compelling reason to take the job. Higher pay, better benefits, and more accommodating work hours are usually good reasons for job applicants to prefer one employment offer over another."
"... the long-hours pandemic is a symptom of the tech and design sectors' badge-of-honor-martyr-complex. ... part of the reason that women can't have it all is that American business has grown this time-macho culture, a relentless competition to work harder, stay later, pull more all-nighters, ... the classic 40-hour work week have trained us to measure our labor by the number of hours we log,... However, this mindset is dead wrong when applied to today's professionals. The value ... isn't the time they spend, but the value they create through their knowledge."
IT jobs require creativity and focus. In a feature article titled [Why you should work 4 hours a day, according to science], Alex Soojung-Kim Pang, author of Rest: Why You Get More Done When You Work Less, looks at the work habits of highly accomplished creative people through history and finds that they all shared a passion for their work, a terrific ambition to succeed, and an almost superhuman capacity to focus.
Yet when you look closely at their daily lives, they only spent a few hours a day doing what we would recognize as their most important work. The rest of the time, they were hiking mountains, taking naps, going on walks with friends, or just sitting and thinking.
Encouraging more students to develop the skills early
While we all agree that employers should raise salaries, offer better benefits, and fix their morally-corrupt culture of working too many hours, that only addresses part of the problem, the demand half of the equation. We also need to get kids to learn the hard and soft skills needed at an early age.
Do students have what it takes to work in the IT industry? John Rampton lists the [15 Characteristics of a Good Programmer]. Most are soft skills, with my favorites being: Laziness, Impatience and Hubris.
In his book Why Good People Can't Get Jobs: The Skills Gap and What Companies Can Do About It, Peter Cappelli advises corporations to take a more proactive role:
"... a huge part of the so-called skills gap actually springs from weak employer efforts to promote internal training for either current employees or future hires ... It makes no sense for the employers, as consumers of skills, to remain an arm's-length distance from the schools that produce those skills..."
The major stakeholders, from industry to education to government, should partner together. For example, the Chicago Public Schools (CPS) system will be the first in the United States to [require all students to take computer science] in high school, starting with the class graduating in 2020. Grants and training are being provided by IT industry giants like Google and Microsoft.
IBM is also doing its part with [a new education paradigm], called Pathways in Technology Early College High Schools [P-TECH]. Normal high school is typically four years (grades 9 to 12), but P-TECH is a system of innovative public schools spanning grades 9 to 14 that bring together the best elements of high school, college, and career. The additional two years (grades 13 and 14) of community college can help teach the soft and hard skills needed for particular jobs in IT.
After the six years, students graduate with a no-cost associate degree in applied science, engineering, computers and related disciplines, along with the skills and knowledge they need to continue their studies or step easily into well-paying, high-potential jobs in the IT arena for multiple industries.
The paradigm has grown from one school in 2011 to 60 schools by September 2016, with over 300 large and small companies affiliated with P-TECH schools serving thousands of students.
So the future may not be as gloomy as predicted. Problems can be addressed if everyone works together to solve them. In the meantime, I will be taking the rest of the year off for a long-overdue vacation. Perhaps I will go hike mountains and take naps, as Alex suggests above.
It's official. We have changed our name! The Worldwide IBM Systems Executive Briefing Centers (EBC) are now being called the Worldwide IBM Systems Client Experience Centers!
I joined the Tucson EBC team in 2007. For the past 10 years, I have been running design workshops, consulting with clients and architecting solutions.
Why the name change? The term "Executive Briefing Center" implies one-way communication with [death by PowerPoint], which can be ineffective in today's dynamic and collaborative work environments.
Client expectations for two-way communications have given rise to immersive and interactive engagements where clients not only learn about IBM's solution offerings, they experience them.
Through hybrid briefing/workshop engagements, demonstrations, and active promotion of our ISV Ecosystem partners, we take clients on a journey where they envision utilizing our technology and solutions to achieve desired business outcomes. The new Client Experience Center moniker more accurately represents the work we do and the value we provide.
(Note: I realize that the new acronym for the Client Experience Center (CEC) is the same as the Central Electronic Complex (CEC) used in both storage and server products. I can assure you that the executives who decided to rename the centers had not chosen this to be funny! Consider it a mere coincidence.)
Of course, changing the name is not cheap. We will have to update all of our websites, and order new signage, new water bottles, new coasters, new embroidered shirts, and new business cards, just to name a few!
The weather in Tucson is awesome these next few months, so come on down! Can't travel? We can come visit you, or do it over the phone via webinar.
Our Worldwide IBM Systems Client Experience Centers are located in:
Last Friday, I helped students learn about Science, Technology, Engineering and Math (STEM). This was the annual [2017 Arizona STEM Adventure] event in Tucson, Arizona. Once again, Pima Community College Northwest Campus provided the venue.
The event hosted 1,200 students, ranging from fourth to eighth grades. Buses collected them from ten different school districts in the area. Home-schooled, private-schooled and charter-schooled children participated as well.
There were three dozen exhibits; some were indoors, others in tents outside. The weather was delightful for November.
IBM's exhibit used a simple bicycle wheel to demonstrate the properties of a [gyroscope]. A gyroscope is a spinning wheel that maintains its angular momentum. This is useful both for measuring forces that try to affect it and for counteracting those forces.
We had the kids stand on a rotating platform, holding the bicycle wheel with both hands. A volunteer would spin the wheel. If the kid leaned the wheel left or right, the platform would spin to conserve the total angular momentum. (The effect can be accomplished while sitting on a swivel chair. See [Exploratorium] for an example.)
Gyroscopes are used in everything from airplanes to submarines to help with navigation, keep space-based telescopes like the Hubble pointed in the right direction, help dig tunnels straight, and stabilize [Steadicam] filming for Hollywood movies and [IBM Client Center videos]!
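For the curious, here is a rough back-of-the-envelope Python model of the turntable demo; every value is an assumption, the wheel is treated as a simple hoop, and it considers the extreme case of flipping the spinning wheel fully upside-down, which reverses its angular momentum and hands the difference to the kid and platform:

```python
import math

# all values are assumed for illustration
wheel_mass = 2.0      # kg, rim-weighted bicycle wheel
wheel_radius = 0.30   # m
wheel_rpm = 120.0     # how fast the volunteer spun it
kid_inertia = 1.5     # kg*m^2, rough moment of inertia of kid + platform

# treat the wheel as a hoop: I = m * r^2
I_wheel = wheel_mass * wheel_radius ** 2
omega_wheel = wheel_rpm * 2.0 * math.pi / 60.0   # rad/s
L_wheel = I_wheel * omega_wheel                  # angular momentum (kg*m^2/s)

# flipping the wheel over changes its angular momentum by 2L, and the
# kid + platform must pick up that difference to keep the total constant
omega_kid = 2.0 * L_wheel / kid_inertia
print("platform spins at about %.0f rpm" % (omega_kid * 60.0 / (2.0 * math.pi)))
```

Leaning the wheel only part way transfers a fraction of that angular momentum, which is why the platform turned gently rather than whipping around.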
According to the U.S. Environmental Protection Agency (EPA):
"The state has warmed about two degrees (F) in the last century. Throughout the southwestern United States, heat waves are becoming more common, and snow is melting earlier in spring. In the coming decades, changing the climate is likely to decrease the flow of water in the Colorado River, threaten the health of livestock, increase the frequency and intensity of wildfires, and convert some rangelands to desert." (Source: [What Climate Change means for Arizona])
Their robots stole the show! This one pictured here was remote controlled. Another one was able to pick up and throw basketballs.
(This is not my first exposure to FIRST. See my 2009 blog post [Helping Young Students] on how I helped fourth graders learn C programming language by building robots with LEGO Mindstorms.)
The team draws students from the five high schools of the Vail school district. I drive by one of these, the Vail Academy and High School, on my way to the IBM Client Experience Center. This is not just for boys; about one third of the team are girls!
The students design each robot, do the welding, even do the C++ programming, and participate in competitions!
Lunch and Logistics
With all the focus on science and technology exhibits, it is easy to forget all the work done behind the scenes. An [Eventbase] website was used to help us direct all of the students, teachers and volunteers to the right place.
Since we had enough volunteers for the IBM exhibit, I chose instead to be a "general volunteer" and was assigned the task of collecting and distributing lunches. For some schools, the students brought their own lunches on the bus; these were collected when they got off, and distributed back to them when it was their time to eat. For other schools, their staff packed lunches for each student.
We staggered the distribution into five color-coded groups, every 20 minutes from 10:30am to 11:50am. The volunteers themselves did not eat until 1:30pm. We were provided pulled pork sandwiches from [Mama's Hawaiian BBQ], a local favorite!
This was a great day! There are plenty of problems that need to be solved in our world, and a shortage of scientists and engineers to solve them. Encouraging kids to pursue these careers is a good step forward.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
The Collaboration of Oak Ridge, Argonne, and Livermore [CORAL] is a joint procurement activity among three of the Department of Energy's National Laboratories, launched in 2014 to build state-of-the-art high-performance computing (HPC) technologies that are essential for supporting U.S. national nuclear security and are key tools used for technology advancement and scientific discovery.
Of course, when you hear "state-of-the-art technology", IBM is probably the first company that comes to mind!
The new IBM Spectrum Scale 5.0 has been greatly enhanced to meet CORAL requirements:
Dramatic improvements in I/O performance
Significant reduction in internode software path latency to support the newest low-latency, high-bandwidth hardware such as NVMe
Improved performance for small and large block size workloads simultaneously, thanks to a new 4 MB default block size with a variable sub-block size based on the block size chosen (see the sketch after this list)
Improved metadata operation performance to a single directory from multiple nodes
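To see why the variable sub-block size matters for small files, here is a minimal Python sketch; the pre-5.0 rule of one thirty-second of the block size, and the 8 KiB sub-block for 4 MB blocks, are my assumptions based on common descriptions of the release:

```python
import math

block_size = 4 * 1024 * 1024        # new 4 MB default block size

old_subblock = block_size // 32     # assumed pre-5.0 rule: 1/32 of block = 128 KiB
new_subblock = 8 * 1024             # assumed 5.0 sub-block for 4 MB blocks

def allocated(file_size, subblock):
    # space is allocated in whole sub-blocks
    return math.ceil(file_size / subblock) * subblock

small_file = 10 * 1024              # a 10 KiB file
print("old rule: %3d KiB allocated" % (allocated(small_file, old_subblock) // 1024))
print("5.0 rule: %3d KiB allocated" % (allocated(small_file, new_subblock) // 1024))
```

A large default block keeps streaming I/O fast, while the much smaller sub-block keeps small files from wasting space, which is how one file system can now serve both workloads well.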
Spectrum Scale 5.0 now automatically tunes more than twenty communication protocol and buffer management parameters, aiding setup for optimal performance. The enhanced GUI features many capabilities including performance, capacity, network monitoring, AFM (multicluster management), transparent cloud tiering, and enhanced maintenance and support, including interaction with IBM remote support.
Spectrum Scale 5.0 now offers file-level immutability. Previous releases supported immutability only at the fileset granularity, so this allows finer control. Immutability can be an effective tool as part of an overall Non-Erasable, Non-Rewriteable [NENR] compliance policy.
Spectrum Scale comes in both "Standard Edition" and "Data Management Edition". The latter offers some additional features, including Transparent Cloud Tiering, Asynchronous AFM Disaster Recovery support, and Encryption. Some additional enhancements to Data Management Edition in Spectrum Scale 5.0 are:
File audit logging capability to track user accesses to the file system, with events supported across all nodes and all protocols
Parseable audit data stored in a secure, retention-protected fileset
Data remains secure after removal of physical media, protected by on-disk encryption
The new IBM Storage Utility Offerings include the IBM FlashSystem 900 (9843-UF3), IBM Storwize V5030 (2078-U5A), and Storwize V7000 (2076-U7A) storage utility models that enable variable capacity usage and billing.
These models provide a fixed total capacity, with a base and variable usage subscription of that total capacity. IBM Spectrum Control Storage Insights monitors the system capacity usage and reports on capacity used beyond the base subscription capacity, referred to as variable usage.
The variable capacity usage is billed on a quarterly basis. This enables customers to grow or shrink their usage, and only pay for configured capacity.
Suppose you only need 300 TB today, but expect this to grow to 1 PB (1000 TB) over the course of three years. You install 1000 TB (1 PB) of capacity, and pay for the base 300 TB, plus whatever above this 300 TB you might be using during each subsequent quarter. After 36 months, you pay for the rest of capacity installed.
(There are comparable offerings from IBM's competitors, but they often require that you pay for at least 75 to 85 percent of the installed amount, and then you would need to continue to disrupt your operations with additional capacity installed throughout the 12 to 36 month period. IBM's approach allows you to avoid installation disruption during the entire 36 month period!)
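Here is a minimal Python sketch of how that example might bill out over the term; the quarterly usage figures and the billing mechanics are illustrative assumptions, not IBM contract terms:

```python
base_tb = 300          # base subscription, paid from day one
installed_tb = 1000    # 1 PB physically installed up front

# assumed capacity actually configured at the end of each quarter
quarterly_usage_tb = [310, 340, 390, 450, 510, 570, 640, 700, 770, 840, 920, 980]

for quarter, used in enumerate(quarterly_usage_tb, start=1):
    variable_tb = max(0, used - base_tb)   # billed only for usage above the base
    print("Q%02d: %4d TB used -> %3d TB variable capacity billed"
          % (quarter, used, variable_tb))

# after 36 months (12 quarters), the remaining installed capacity is paid for
```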
IBM Spectrum Virtualize for Public Cloud V8.1.1 delivers a powerful solution for the deployment of IBM Spectrum Virtualize software in public cloud, starting with IBM Cloud. This new capability provides a monthly license to deploy and use Spectrum Virtualize in IBM Cloud to enable hybrid cloud solutions.
Remote replication will be supported between Spectrum Virtualize-based appliances (including SAN Volume Controller (SVC), the Storwize family, IBM FlashSystem V9000, and VersaStack with Storwize family or SVC), or Spectrum Virtualize Software, to the IBM Cloud.
Using IP-based replication with Metro Mirror, Global Mirror, or Global Mirror with Change Volumes, clients can create secondary copies of on-premises data in the public cloud for disaster recovery. IBM has over 25 data centers around the world to choose from. Remote copy services can also be used between two IBM Cloud data centers for improved availability.
The solution is based on bare metal servers. You can create either two- or four-node high availability clusters.
Spectrum Virtualize on-premises SVC and Storwize systems now also support 2.4 TB 10K rpm 2.5-inch SAS hard disk drives.
IBM has been holding various "Hackathons" and "Meetups" as a new way to reach out to prospective clients. IBM sponsored a meetup at the Austin Executive Briefing Center (EBC) to discuss Machine Learning with TensorFlow on IBM Power systems, October 26, 2017.
This was a joint event, co-sponsored by [IBM Watson/Cognitive Austin] and [Big Data/AI Revealed] meetup groups. Special thanks to my colleague Cathy Cocco, IBM Executive IT Architect with the IBM Austin EBC, for coordinating this event with their organizers.
(What is a Meetup? [Meetup.com] is an online social networking website that facilitates in-person local group meetings. Meetup allows members to find and join groups unified by a common interest, such as books, games, pets, technology, careers or hobbies. As of 2017, there are 32 million users in 280 thousand groups across 182 countries.)
Here was the agenda for the event:
Registration, Pizza & Soft drinks
Tensorflow 101 presentation
Demo: Using TensorFlow for Financial Market Predictions on IBM POWER Systems
Lightning Talk: IBM Data Science Experience
Clarisse Taaffe-Hedglin: Intro to TensorFlow on IBM Power servers
Our guest speaker was my colleague Clarisse Taaffe-Hedglin, IBM Cognitive Senior Technical Architect, part of the same Worldwide Client Centers team that I work in. She flew in from Charlotte, NC.
Her topic was TensorFlow, an open source [Machine Learning] framework. TensorFlow was originally developed by Google, but was made open source in November 2015.
Machine Learning is popular in a variety of industries, from self-driving cars and trucks, speech recognition and video surveillance, to what movie to watch next on Netflix. There are three aspects to Machine Learning:
Data: Start with the data you want to analyze. This could be IoT sensor data, security logs, or social media feeds. Check out all that happens in an "Internet Minute"!
Compute: While mathematical computations can be performed on traditional CPUs, some frameworks are optimized and accelerated with Graphical Processing Units (GPUs). These GPUs can perform teraflops of single- and double-precision calculations.
Technique: As methodologies have gotten more complicated over the years, frameworks have evolved to match.
The [TensorFlow] framework is now one of the most popular among data scientists. You can download it for free from [GitHub].
Clarisse showed the various programming/calculation tools used by data scientists. The top five were: Python, R, SQL language, MapReduce, and Microsoft Excel.
Mathematical models come in many flavors. Clarisse explained they can be used to identify clusters of data that might have similar properties, or to perform classification, or linear regression. The results can be "descriptive", gaining a better understanding of what already is, or "predictive" for what might be.
Some frameworks, like Chainer or Torch, are more flexible, using a dynamic Build-by-Run approach. However, these do not scale well. Theano and TensorFlow, on the other hand, employ a Define-then-Run approach, which scales better for larger projects. With the growth in popularity of TensorFlow, the Theano framework has been "functionally stabilized".
Clarisse Taaffe-Hedglin: Financial Markets Demo
For the demo, Clarisse had historical stock closing data for USA, Australian and Asian stock markets. The hypothesis: can we determine a Buy/Sell signal for U.S. stocks based on the closing results of non-American markets? This is a classic "Binary Classification" model. The other stock markets close 4-16 hours before the U.S. markets open, so this has real-world applicability.
Since the data was in different monetary units, she did some cleanup to normalize the data, removing the trends and converting everything to U.S. Dollars (USD).
Clarisse used "Supervised Learning" on an 80 percent subset of the data, and then used the remaining 20 percent to validate how well it did.
As with any model, you measure how good it is by how often it produces the correct answer. Wrong answers are weighted by how far off they are. This is often referred to as "Loss" or "Cost". Different models can therefore be compared by minimizing the loss.
Using a simple y=wx+b mathematical model, she ran 30,000 iterations. After 5,000 iterations, the model was already guessing correctly 55 percent of the time; by the time it hit 30,000, this was up to 68 percent accuracy.
TensorFlow also supports "hidden layers", basically intermediate variables that are then used in subsequent layers for more complicated calculations, mimicking the way neurons connect in our brains. With two added layers, she re-ran the 30,000 iterations, and accuracy was now up to 73 percent.
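For readers who want to see the shape of such a model, here is a minimal sketch in the TensorFlow 1.x API of that era; the feature count, the random stand-in data, and the learning rate are my own assumptions, not Clarisse's actual notebook:

```python
import numpy as np
import tensorflow as tf

n_features = 8  # assumed: normalized closes of 8 overseas markets
X_train = np.random.rand(1000, n_features).astype(np.float32)
y_train = np.random.randint(0, 2, (1000, 1)).astype(np.float32)  # 1 = buy, 0 = sell

x = tf.placeholder(tf.float32, [None, n_features])
y = tf.placeholder(tf.float32, [None, 1])

w = tf.Variable(tf.zeros([n_features, 1]))
b = tf.Variable(tf.zeros([1]))
logits = tf.matmul(x, w) + b        # the simple y = wx + b model

# the "loss" penalizes wrong answers by how far off they are
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

correct = tf.equal(tf.cast(logits > 0, tf.float32), y)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(30000):
        sess.run(train_step, feed_dict={x: X_train, y: y_train})
        if step % 5000 == 0:
            print(step, sess.run(accuracy, feed_dict={x: X_train, y: y_train}))
```

Adding a hidden layer amounts to inserting something like tf.layers.dense(x, 32, activation=tf.nn.relu) between the inputs and the final logits.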
Normally, this kind of analysis would take hours or days, but since TensorFlow takes advantage of the IBM POWER8 CPUs and NVIDIA Tesla K80 GPUs in the IBM Power server, the whole thing finished in five minutes!
Tuhin Mahmed: Lightning Talk on IBM Data Science Experience (DSX)
Tuhin Mahmed, IBM Software Developer, is the organizer for the Big Data/AI meetup group. He wants to promote the idea of "Lightning Talks" where each person presents for just 10-15 minutes. This is a variant of the popular [Pecha Kucha] events.
To get things started, he presented 10-15 minutes on [IBM Data Science Experience], or DSX for short. Taking Multiple Listing Service (MLS) real estate data of closing prices on houses sold in a range of zip codes in the Austin area, he mapped these on an x-y axis: square feet on the x axis, and closing price on the y axis.
Using DSX, he was able to develop a mathematical model that estimates house closing prices based on their zip code and square footage.
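Outside of DSX, the same idea fits in a few lines of Python; the housing figures below are made up for illustration:

```python
import numpy as np

# assumed sample data: square footage vs. closing price
sqft  = np.array([1100, 1450, 1800, 2100, 2600, 3200], dtype=float)
price = np.array([180e3, 230e3, 285e3, 320e3, 410e3, 495e3])

slope, intercept = np.polyfit(sqft, price, 1)  # degree-1 fit = a straight line

estimate = slope * 2000 + intercept            # estimate a 2,000 sq ft house
print("estimated closing price: $%.0f" % estimate)
```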
This was a simple example, but it showed the power of Jupyter Notebooks, and how anyone can get a 30-day free trial of DSX for their own experimentation.
Currently, being a data scientist is more of an art than a science. This is one of those fields that takes only a few months to learn, but years to master.
Rather than building a model from scratch, data scientists can take existing models, and modify them to fit their needs. There are a variety of existing models available in what is called the "Model Zoo". Google has over 2,000 projects already.
Those interested in trying out TensorFlow for themselves were directed to [Nimbix], a Cloud Service Provider that offers POWER servers with NVIDIA GPUs.
There were about 50 attendees, more than half of whom identified themselves as data scientists. As the inaugural sponsored event for the IBM Austin EBC, I think this was a success!
If you are in the Austin area, the next meetup will be at the [Capital Factory] on Brazos Street on November 30, 2017.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
Today, IBM announces a complete refresh of its IBM FlashSystem® all-flash array product line.
(FCC Disclosure: I work for IBM. Compression, data footprint reduction, and performance results, based here on internal IBM tests, vary widely by data and workload type. Your mileage may vary. This blog post can be considered a "paid celebrity endorsement".)
New FlashSystem 900 model AE3
The new AE3 model introduces new Microlatency cards at larger capacities: 3.6, 8.5 and 18 TB. Compare that to the previous model AE2 at 1.2, 2.9 and 5.7 TB.
These capacities are achieved by combining a three-dimensional (3D) chip layout with Triple-Level Cell (TLC) transistors, often referred to as 3D-TLC. The previous technology was two-dimensional, single-layer Multi-Level Cell (MLC).
Last week, at IBM Systems Technical University in New Orleans, Clod Barrera, IBM Distinguished Engineer and Chief Technical Strategist, explained this with an analogy. Two-dimensional flash is like a bungalow: if you want to pack in more people, you need to make the rooms smaller, which is getting more difficult. Alternatively, you could build a multi-story skyscraper; adding more floors relieves the pressure to shrink the rooms.
Triple-level cell holds three bits per transistor. In the past, we had Single-level Cell (SLC) that stored one bit, and Multi-level Cell (MLC) that stored two bits. A future technology, Quad-level Cell (QLC) is not yet ready for production workloads in a datacenter.
The new AE3 models also offer Embedded inline Compression (EiC), with "Always-On" compression being done right on the Microlatency cards. With a fully-loaded 12-card 2U drawer, that is, a 10+P+S RAID-5 configuration, the effective capacity is drastically increased:
(Table: FlashSystem 900 Model AE3, 2U drawer usable TB vs. effective TB with EiC.)
The compression gets 2x to 3.5x on typical data, but your mileage may vary. The small Microlatency cards are capped at 110 TB effective capacity, and the medium and large at 220 TB, to avoid overwhelming the on-board DRAM cache. For clients who need smaller amounts of flash, IBM will continue to sell the AE2 models with 1.2 TB MLC Microlatency cards.
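Here is a minimal Python sketch of how that cap plays out; the per-drawer usable capacities are my rough estimates based on the 10+P+S layout, not official figures:

```python
def effective_tb(usable_tb, ratio, cap_tb):
    # effective capacity is compressed capacity, limited by the DRAM-cache cap
    return min(usable_tb * ratio, cap_tb)

# assumed usable TB per 2U drawer (10 data cards) and the stated caps
drawers = {
    "small  (3.6 TB cards)": (36.0, 110.0),
    "medium (8.5 TB cards)": (85.0, 220.0),
    "large  (18 TB cards)":  (180.0, 220.0),
}

for name, (usable, cap) in drawers.items():
    for ratio in (2.0, 3.5):
        print("%s at %.1fx -> %5.1f TB effective"
              % (name, ratio, effective_tb(usable, ratio, cap)))
```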
After the compression, the data is encrypted with AES 256-bit encryption. This is the same as the previous AE2 models, so nothing changes there.
The EiC compression and encryption do not impact performance. The new Microlatency cards achieve latency as low as 95 microseconds, about 10x faster than the traditional Solid-State Drives (SSD) found in competitive offerings from Dell EMC XtremIO and Pure Storage, and 40 percent faster than the new NVMe Solid-State Drives. A 2U drawer can deliver up to 1.2 million IOPS, slightly more than the AE2 models (1.1 million IOPS).
The new FlashSystem V9000 takes advantage of the new FlashSystem 900 AE3 models, effectively tripling the usable capacity.
The interesting thing now is compression. There are two methods, both hardware-accelerated: EiC, done on the flash cards, and Real-time Compression (RtC), done by the Intel QuickAssist chips in the controllers.
The EiC method works on 4KB blocks, so it only gets 2.5x to 3.5x on typical data. The RtC method works on larger 32KB blocks, is therefore able to find more repeated sequences of characters, and gets up to a 5x ratio, with compressed data kept in the controller node cache for better cache hit ratios.
However, RtC is limited to only 512 volumes, so admins would run the [Comprestimator tool] and select the cache-friendly workloads with the best compression, such as databases and CAD/CAM images.
With the new FlashSystem V9000, you now get the benefits of both. Continue to use RtC for data that is better served with 4x-5x compression, and let EiC compress everything else!
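Here is a toy Python illustration of that block-size effect, with zlib standing in for the RtC and EiC engines and deliberately repetitive made-up data; real ratios depend entirely on the data:

```python
import zlib

data = b"customer_record:austin,tx;balance=1234.56;" * 4096  # repetitive sample

def ratio(chunk_size):
    # compress the data independently in fixed-size chunks, as a block
    # compressor must, and report the overall compression ratio
    compressed = sum(len(zlib.compress(data[i:i + chunk_size]))
                     for i in range(0, len(data), chunk_size))
    return len(data) / compressed

print("4 KiB chunks:  %.1fx" % ratio(4 * 1024))
print("32 KiB chunks: %.1fx" % ratio(32 * 1024))
```

Larger chunks give the compressor a wider window to find repeats, which is why RtC's 32KB blocks can reach higher ratios than EiC's 4KB blocks.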
(Table: FlashSystem V9000 model AE3, usable TB for one drawer and for eight drawers.)
Running a typical 70/30 workload, representing 70 percent reads and 30 percent writes, each controller pair can deliver up to 600,000 IOPS. With four V9000 controller pairs clustered together, that is 2.4 million IOPS. For more read-intensive, cache-friendlier workloads, IBM has clocked the system at up to 1.3 million IOPS per controller node-pair, and 5.2 million for a four-pair cluster.
As with the previous model, the FlashSystem V9000 offers "Easy Tier" automatic sub-LUN tiering, and "storage virtualization" to manage both SAS-attached and SAN-attached storage. Over 400 different devices from major vendors are supported. This means that the busiest blocks will be moved up to low-latency Flash, and less active data will be moved to spinning disk.
As with the FlashSystem V9000, the A9000/R model 425 uses the new FlashSystem 900, increasing the effective capacity.
The A9000/R models will continue to do "Data Footprint Reduction" (pattern removal, data deduplication and RtC compression) to achieve up to a 5x compression ratio. However, to improve performance, internal metadata will no longer be compressed with RtC, allowing the underlying flash cards to do EiC instead. This reduces CPU workload.
The FlashSystem A9000 model 425, aka "The Pod", has three grid controllers combined with the new FlashSystem 900 model AE3 for a compact 8U solution that can store nearly a petabyte. For smaller deployments, IBM also offers a partially-filled 8-card drawer for a lower entry system size.
(Table: A9000 Model 425, number of cards per drawer and effective TB at 5x.)
The FlashSystem A9000R model 425, aka "The Rack", has two to four grid elements; each grid element has two grid controllers and one FlashSystem 900 AE3 drawer. The previous 415 model supported five and six grid elements, but for now, model 425 is limited to just two, three or four. The A9000R model 425 supports all three Microlatency sizes, whereas the previous 415 model only supported the medium (2.9 TB) and large (5.7 TB) sizes.
(Table: FlashSystem A9000R model 425, usable TB with two, three, and four grid elements.)
Performance of both the A9000 and A9000R is based on the number of grid controllers. Each grid controller gets about 300,000 IOPS. The A9000 pod with three controllers gets up to 900,000 IOPS. Each A9000R grid element has two controllers, so 600,000 IOPS per element, with 2.4 million IOPS for a maxed-out four-element A9000R rack.
Along with the hardware changes, IBM released version 12.2 of the Spectrum Accelerate software that runs in the FlashSystem A9000/R models.
This version supports asynchronous mirroring between FlashSystem A9000/R systems and IBM XIV Gen3 storage. The replication can go in either direction, but the intent is to use FlashSystem for production, replicating to XIV Gen3 at a disaster recovery facility. Version 12.2 also increased the limits on volumes, snapshots, and consistency groups:
24,000 volumes and snaps
1024 consistency groups, 512 volumes per consistency group
The new version applies to both the new model 425, as well as the previous 415 models!
This week, I am presenting at the IBM Systems Technical University for IBM Storage and POWER Systems. The conference is being held in New Orleans, Louisiana, October 16-20, 2017, at the beautiful Hyatt Regency, with about 800 clients attending.
This is my recap for the last few sessions before I left town, spanning Tuesday afternoon and Wednesday afternoon.
Reasons why IBM hyperconverged systems powered by Nutanix surpass other HCI from HPE, Cisco and more
Rob Simpson, Senior Strategic Marketing Manager at Nutanix, presented Nutanix hyperconverged systems. Nutanix runs on both x86 and POWER. For x86, it supports VMware, Microsoft Hyper-V, and Citrix XenServer, as well as their own Acropolis Hypervisor (AHV), derived from Linux KVM. For POWER, it uses AHV recompiled for the POWER chip set.
Hyperconverged systems can be sold in full rack configurations, as individual appliances, or as software that can be deployed on your own servers. Rob compared Nutanix against three competitive appliances: Dell EMC VxRAIL based on VMware VSAN, HPE Simplivity, and Cisco HyperFlex.
Everything you wanted to know about IBM Spectrum Scale metadata but didn't know to ask
Eric Sperley, IBM Software Defined Infrastructure Architect, presented the internal metadata structures used in IBM Spectrum Scale.
Why, oh why, did I attend this presentation? I had worked on Spectrum Scale back when it was called GPFS over 15 years ago, and thought I already knew everything about "inodes" that I ever wanted to, but Eric proved me wrong!
"Laws, like sausages, cease to inspire respect in proportion as we know how they are made."
--John Godfrey Saxe
A lot has changed! There have been many improvements to the internal structures to improve parallel I/O performance and reduce the latency of administrative tasks.
IBM Spectrum Scale can be divided into different file systems, each of which can be configured with different performance characteristics and block size, such as random small files for scanned images, versus large sequential files for streaming videos.
My presentation was nowhere near as technical as Eric's above. I provided an overview of how IBM Spectrum Scale is configured, how it works, and how it interacts with IBM Cloud Object Storage System, IBM Spectrum Protect, and IBM Spectrum Archive.
I also covered the latest GSxS and GLxS models of the Elastic Storage Server, or ESS for short. These models provide awesome performance at low cost. The GSxS models are all-flash arrays for high performance. The GLxS models are hybrid, with two Solid-State Drives and the rest NL-SAS 7200 rpm spinning disk, for high capacity.
IBM COS new features
Andy Kutner, IBM Channel and Alliances Architect, presented the latest features in IBM Cloud Object Storage, IBM COS for short.
Compliance Enabled Vaults, or CEV for short, offer Non-Erasable, Non-Rewriteable (NENR) tamperproof protection for objects. Objects written to a CEV vault cannot be deleted or replaced with newer versions for a specified retention period.
(Note: Some folks mistakenly use the term "Write Once, Read Many" (WORM) for this. WORM applies only to tape, optical, paper tape, punched cards, and non-erasable ROM chips. For this reason, the term "Non-Erasable, Non-Rewriteable" (NENR), used in the U.S. Securities and Exchange Commission (SEC Rule 17a-4) regulation, was adopted to extend this tamperproof protection to flash, disk and cloud-based storage architectures.)
The new entry-level systems lower the minimum capacity. Before, IBM recommended at least 500 TB of capacity to consider IBM COS. Now, the combination of embedded Accessers and Concentrated Dispersal mode can lower the starting point to as low as 72 TB, while still allowing you to grow to multiple PBs.