Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Systems Client Experience Center in Tucson, Arizona, and a featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
My books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
GDPR is the IT industry's next "Y2K crisis." Effective May 25, 2018, it ensures that any citizen of the European Union can review, rectify, and even erase any personal data from corporate datacenters. Companies that fail to respond to requests can be heavily fined. See Bob Yelland's quick 13-page guidebook on this, titled [GDPR - How it Works].
His team also developed the Non-Obvious Relationship Awareness (NORA) software for the casinos, combining the records of 15 million customers, 20,000 employees, and 18 different watch lists. If a casino did business with people on certain watch lists, they could be put out of business or heavily fined.
NORA alerts identified 24 active VIP players as known cheaters, flagged 12 employees as active gamblers in violation of company policy, and found 192 employees with possible relationships to casino vendors; in seven cases, the player was the vendor. One casino discovered it was paying to have one of these cheaters flown to Las Vegas to play at its tables!
(IBM acquired Jeff's company Systems Research and Development (SRD) back in 2005. I had the pleasure of working with Jeff during his 11-year stint at IBM, and participated in his G2 project that was later spun off in 2016 to form his newest company, Senzing. See my 2011 blog post [Storage Innovation Executive Summit] for Jeff's thoughts back then.)
Jeff identifies four challenges in complying with GDPR regulation. Suppose an EU citizen comes to your company and asks just to review all information that you have on them. How would you do that?
So this is Challenge #1: There are lots of places to look. You have a customer database, loyalty club, marketing programs, vendor and supplier databases, and customer service. But wait, the person might have also been an employee! Does your employee database let you search for information on former employees?
Challenge #2 is that the data occurs in variations. Liz Reston could be stored as Elizabeth or Beth. Her last name might have changed from various marriages and divorces. Can you generate all of the variations to search on?
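Challenge #2 can be attacked programmatically. Here is a minimal, hypothetical Python sketch of expanding one person's name into the variants worth searching on; the nickname table and helper function are illustrative only, not the actual NORA or Senzing API:

```python
# Hypothetical sketch: expand one person's name into the variants worth
# searching on. The nickname table and helper are illustrative only --
# not the actual NORA or Senzing API.
NICKNAMES = {
    "elizabeth": {"liz", "beth", "betty", "eliza", "lizzie"},
    "robert": {"bob", "rob", "bobby", "robbie"},
    "anthony": {"tony"},
}

def name_variants(first, last, former_surnames=()):
    """Return the set of (first, last) pairs to search for one person."""
    first = first.lower()
    firsts = {first}
    for formal, nicks in NICKNAMES.items():
        if first == formal or first in nicks:
            firsts |= {formal} | nicks   # formal name plus all nicknames
    lasts = {last.lower(), *(s.lower() for s in former_surnames)}
    return {(f, l) for f in firsts for l in lasts}

# "Liz Reston", previously "Liz Smith", yields a dozen search keys.
variants = name_variants("Liz", "Reston", former_surnames=["Smith"])
```

Each (first, last) pair would then be fed to every system in Challenge #1, which is exactly why the search count balloons so quickly.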
(I know this personally. I am not the only famous "Tony Pearson" out there. There is Tony Pearson, a cricket player in England. There is Tony Pearson, Chief of Staff in the Australian government. And finally, there is 61-year-old "Mr. Universe" Tony Pearson, the "Michael Jackson" of Bodybuilding. Needless to say, women who showed up at my house unannounced looking for him instead were sometimes disappointed!)
Challenge #3 is that existing systems have search limitations. Imagine going to a library that doesn't have a card catalog or computerized index. Rather, you must go floor by floor, row by row, book by book, searching for the information you need.
Human Resources software might only offer search options for name, date of birth, or employee serial number. Hotel systems may not let you search by billing or home address.
Small typos can result in incomplete search results. Home addresses, for example, are often written in different ways, suite or apartment numbers may be represented differently as well, and abbreviations may be used to represent fully-qualified names.
What are you going to do, ask the IT department to write custom SQL queries for you? One of the unexpected benefits of Jeff's NORA system was that it could match entities between databases by street address, a trick that normally isn't designed into most applications.
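To illustrate the street-address trick, here is a toy Python normalizer. It is a minimal sketch under my own assumptions (the abbreviation table is invented for illustration); real entity-resolution systems like NORA use far richer rules and postal reference data:

```python
import re

# A minimal, illustrative address normalizer -- real entity-resolution
# systems (like NORA) use far richer rules and postal reference data.
ABBREV = {"street": "st", "avenue": "ave", "apartment": "apt",
          "suite": "ste", "#": "apt"}

def normalize_address(addr):
    # Split into words, a lone "#", or dotted abbreviations like "st."
    tokens = re.findall(r"#|[\w.]+", addr.lower())
    # Map known long forms, stripping trailing periods from "st." etc.
    return " ".join(ABBREV.get(t.rstrip("."), t.rstrip(".")) for t in tokens)

a = normalize_address("123 Main Street, Apartment 4B")
b = normalize_address("123 Main St. #4B")
# Both normalize to the same string, so the two records now match.
```

Once every address is reduced to a canonical form like this, records from different databases can be joined on it, which is the matching trick most applications never design in.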
Challenge #4 is that not all things that look alike are alike. For example, Liz Reston and her co-dependent husband Bob might [share the same email address].
Family members might have the same home address and phone number. Sons are often named after their fathers, but don't always write "Senior," "Junior," or "III" at the end of their names.
In other cases, roommates in college, who are not related in any other way, might share the same home address. The same apartment number or home address could be used by different people as the house is sold or apartment is rented from one family to another.
It took Jeff decades to appreciate the implications of these entity relationships, and then GDPR arrived in 2016. When a citizen asks to review their personal data, which after May 25 they can do for free, a company must deliver it within 30 days. The person can then ask to rectify certain information, or have it erased altogether.
So what seems like a simple enough question, "What do we know about Liz Reston?" turns out to be challenging to answer for a variety of reasons. Jeff did a survey of over 1,000 European companies; here were the results:
Most companies are not ready, and are concerned about their ability to comply with this GDPR regulation.
Companies expect an average of 246 requests per month.
The search will require accessing, on average, 43 different system databases.
Each database search will take seven minutes.
Companies will need to dedicate seven to eight full time employees to complete these search requests.
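Those survey figures hang together arithmetically. A quick back-of-the-envelope check (the 160 working hours per employee per month is my assumption, not from the survey):

```python
# Back-of-the-envelope check of the survey figures (the 160 working
# hours per employee per month is my assumption, not from the survey).
requests_per_month = 246
databases_per_request = 43
minutes_per_search = 7

hours = requests_per_month * databases_per_request * minutes_per_search / 60
fte = hours / 160
# hours comes to about 1,234 per month, or roughly 7.7 full-time
# employees -- matching the survey's "seven to eight" figure.
```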
Having access to powerful enterprise-wide "single subject search" discovery tools, however, can also lead to search abuse. For example, a famous celebrity is admitted to a hospital, and suddenly sensitive information is leaked to the tabloids or paparazzi. Someone asks their friend, a police officer, to search the license plate on someone's vehicle. A father searches his corporate database for information on his daughter's new boyfriend.
To address this privacy concern, Jeff suggests a tamper-proof audit log that shows who searched for whom. Where are we going to get technology to do this? We already have it: Blockchain! That's right, the technology that enables Bitcoin to operate without government controls already includes a tamper-proof audit log for transactions.
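Blockchain's tamper-proof property comes from hash-chaining: each log entry commits to the hash of the previous one, so any later edit breaks the chain. A toy Python sketch of a "who searched for whom" log (field names are illustrative, not a product API):

```python
import hashlib
import json

# Toy sketch of a tamper-evident "who searched for whom" audit log,
# built on the same hash-chain idea a blockchain ledger uses. The
# field names here are illustrative, not a product API.
class AuditLog:
    def __init__(self):
        self.entries = []

    def record(self, who, whom):
        """Append one search event, chained to the previous entry's hash."""
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        digest = hashlib.sha256(json.dumps([who, whom, prev]).encode()).hexdigest()
        self.entries.append({"who": who, "whom": whom, "prev": prev, "hash": digest})

    def verify(self):
        """Recompute the chain; any edited entry breaks every later link."""
        prev = "0" * 64
        for e in self.entries:
            digest = hashlib.sha256(
                json.dumps([e["who"], e["whom"], e["prev"]]).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("analyst7", "Liz Reston")
log.record("analyst7", "Bob Reston")
assert log.verify()
log.entries[0]["whom"] = "someone else"   # tampering...
assert not log.verify()                   # ...is detected
```

A production blockchain adds distribution and consensus on top of this, so no single administrator can quietly rewrite the whole chain.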
Jeff's plan for his new company Senzing is to deliver software for different use cases, with APIs for popular programming languages like Java and Python, and a workbench that runs on Windows. He is also considering a "Community Edition" that would be affordable for even the smallest of businesses, and he challenged the audience to contribute to this as an open source project.
Last week, IBM clients, Business Partners and executives got together for the inaugural IBM [Think 2018] conference. There were over 30,000 attendees.
In an age of exponentially more data, connected devices and computing power, there are more ways for attackers to breach an organization than ever before. Teams are challenged to manage these threats as they deal with too many disparate tools from too many vendors, an enormous security and IT skills shortage, and a growing number of compliance mandates.
Marc van Zadelhoff, General Manager, IBM Security, kicked off the session "Ready For Anything: Build a Cyber Resilient Organization". The year 2017 was a tough year for security. People can relate to the number of security breaches that happened.
Why do companies struggle in this area? It is not just because hackers have become more sophisticated. IBM Security has over 8,000 security experts to help clients. When IBM is called in, we find that 90 percent of companies lack basic fundamentals such as firewall rules and patch management. It takes companies 200 days, on average, to detect breaches. Sadly, 77 percent do not have a response plan for after the breach happens.
To help this, IBM has come up with new terminology. At a certain point, [the shit hits the fan], a Canadian phrase meaning "messy consequences are brought about by a previously secret situation becoming public." Marc explained that it often is accompanied by FBI agents showing up at the front door.
Marc referred to this event as "the Boom". All of the preparation and prevention happen "left of Boom". The clean-up, salvaging your brand reputation, and remediating the damage is "right of Boom". Here are some examples of a Boom event:
Compromised Cloud app
Left of Boom is our domain of choice. We are surrounded by familiar security and IT problems, problems we have studied our entire careers, involving daily activities we complete with a sense of certainty.
Right of Boom is a completely different matter. Others get involved, including Legal, HR, and sometimes even the Board of Directors. These are distant, hazy problems that don't occur every day, and they carry far more uncertainty.
The Boom is not the initial breach, but when the breach becomes public, an average of 200 days later. Hackers can do quite a lot of damage during these 200 days. What might have started as phishing emails, might continue with access to sensitive databases, stolen credentials to other servers, access to internal networks, and additional compromises.
Likewise, companies should not expect to clean up the mess in just a few days either. IT forensics are used to determine the scope of the breach. Regulators and auditors are notified, press conferences and legal dispositions are scheduled to address the public concerns, and social media sentiment might fall.
Back in 2016, [IBM acquired Resilient], a security software company. Ted Julian, IBM VP Product Management and Co-Founder of Resilient, performed a live demo of this software. Basically, it is a dashboard that automates gathering incident data, determines the tasks required, and then orchestrates appropriate responses. This allows the security administrator to launch remediation directly in context.
Last year, over 1,400 customers took advantage of IBM's security breach simulator lab, the IBM X-Force Command Center. On the right side of the Boom, time matters. What might take 90 minutes manually can be done in two minutes with the IBM Resilient dashboard and the right amount of practice and training.
Next on stage were Wendi Whitmore, IBM Security Services, and Mike Errity, Vice President, IBM Resiliency Services. While Wendi's team handles the situation from afar, Mike's team lives in the data center. Mike explained Recovery Time Objective (RTO) and Recovery Point Objective (RPO), which apply to recovery after a cyberattack, similar to Disaster Recovery after a hurricane.
Wendi indicated that executives need visibility into what is going on after a breach, and should have retainers in place with PR firms and other industry experts who can be called on short notice, as needed, right of Boom.
Richard Puckett, Vice President Security Operations, Strategy and Architecture, at Thomson Reuters, was the final speaker. Richard spent the first six months of his job uplifting the security protocols at Thomson Reuters. They partnered with IBM to build up their talent for their Security Operation Center (SOC).
Threats are asymmetric. Unlike traditional physical threats from mobs of people, or trucks parked at the front door, cyber threats go undetected. Once they are detected, it can be difficult to identify the perpetrator. Richard suggests that good security requires good management. Patch management is not the sexiest topic, but it is critical. Don't focus on shiny new objects, but rather on fixing weak passwords and poor patch management procedures.
In the struggle to keep up, organizations are not doing a good job of mastering the security fundamentals. IBM believes that with the right approach, technologies and experts, our clients can fight back. IBM can deliver security and resiliency at the scale and speed necessary to protect businesses against the challenges of today, and tomorrow.
While Sal Khan was a hedge fund manager in Northern California, he was also a math tutor to his cousin Nadia over the Internet in the evenings. This extended to 15 other family members. In November 2006, Sal started to record his teachings on a YouTube channel. His cousins liked the YouTube recordings better, as they could go at their own pace.
In 2007, Sal realized that many people who were not family-related were watching his educational videos on YouTube. Sal quit his job and set up [Khan Academy] as a non-profit organization. Unfortunately, the donations he received from students and parents were not enough to support his monthly expenses. However, he received a generous $10,000 donation from a parent who used the site with her kids.
Word got around. Bill Gates from Microsoft mentioned Khan Academy in an on-stage interview. Mr. Gates admired Sal's wife for letting him quit his job to pursue his interests.
(Later, Mr. Gates invited Sal to visit the Microsoft campus in Seattle, WA, asking him "What could Khan Academy achieve if you had more resources?" A question folks in public education, or the IT industry for that matter, rarely hear! )
By Fall 2010, the Gates Foundation, Google, [and other supporters] helped make this a fully funded organization, and Sal was able to hire engineers and educators.
Sal gave an interesting analogy. Imagine building a house: the first step is to pour the concrete foundation, instructing the builders to "do what you can in two weeks". The inspection indicates problems, but you go ahead and build the first floor with the same approach, "do what you can in two weeks", then the second floor. Eventually, the house collapses.
Sal organized Khan Academy along the lines of [Kung Fu belt colors], rather than the way traditional American schools group students by age and promote them lock-step, regardless of their readiness. Many students have gaps, and being moved to the next grade just results in more gaps. The solution is to fill the gaps in a timely manner.
Sal gave three inspiring stories of some of his students:
Charlie dropped out of high school his freshman year. When he came back to school, he was put in remedial math and science classes. Charlie was able to catch up using Khan Academy, graduated as high school valedictorian, and went on to major in Computer Science at Princeton. Hearing this testimonial, Sal offered him an internship during his Junior year at Princeton. Charlie is now fully employed at Khan Academy.
Some engineers from Silicon Valley went to Mongolia to set up computer labs for kids in an orphanage. One orphan, Zaya, sent an [email with video] to Sal about how much she appreciated learning through Khan Academy. Zaya is now 19 years old, and one of the top contributors to Khan Academy in the Mongolian language, helping to educate her own people.
Seven years ago, a girl named Sultana was living in Afghanistan. The Taliban took over her town, and physically prevented girls from attending school. Sultana had Internet access at home, and taught herself English. She asked her uncle to bring back any reading materials in English he could find. He brought back a Time magazine with an article on Khan Academy.
Between her ten hours of household chores every day, Sultana taught herself math, chemistry, biology, and physics using Khan Academy. She illegally crossed into Pakistan, a dangerous 30-hour journey, just to take the SAT exam, and did surprisingly well.
Nicholas Kristof from the New York Times wrote an article, [Meet Sultana, the Taliban's worst fear]. Sultana was able to get asylum in the United States, and is now doing research with a top physicist at MIT.
But how effective is Khan Academy overall? Working with the College Board, Sal was able to do efficacy studies. A study of 250,000 students found that just 20 hours of Khan Academy PSAT/SAT prep produced 100 percent extra gain. A similar study of 10,500 students in Idaho found 80 percent extra gain. In Brazil, a 7,000-student study found that one hour of Khan Academy per week resulted in 30 percent more learning.
The videos on Khan Academy favor being simple and authentic, rather than high production value. The software and equipment used to make the first videos only cost a few hundred dollars. The costs are just 30 US cents per hour of learning.
Today, the free online learning resources cover preschool through early college education, including K-12 math, grammar, biology, chemistry, physics, economics, finance, history, and SAT prep. Khan Academy also provides teachers with tools and data so they can help their students develop the skills, habits, and mindsets they need to succeed in school and beyond.
The concept scales well. Khan Academy has over 150 employees, with another 14,000 volunteers helping with translations. Over 59 million students have registered across 190 countries. Every year, about 300,000 people send in donations. The website has had over 1.4 billion views.
Sal finished his talk with a thought experiment: Go back 400 years ago to Western Europe, a time when only about 10 percent of men, and 5 percent of women, could read. If you asked someone, back then, what percentage of people could be taught to read, they would estimate only 20 to 30 percent.
Today we know that nearly 100 percent of people can be taught to read. However, if you ask people today what percentage of people could become a software engineer, start a business, or write a novel, they estimate only one to five percent.
IBM Watson is also helping out in the area of education. Register today at [Teacher Advisor]!
This week, IBM clients, Business Partners and executives get together for the inaugural IBM [Think 2018] conference. There are over 30,000 attendees.
This is a combination of last year's three events: Edge, InterConnect, and World of Watson (WoW). The combined event is divided into four "campuses":
Cloud and Data -- formerly covered at InterConnect
Modern Infrastructure -- formerly covered at Edge
Business and AI -- formerly covered at World of Watson
Security and Resiliency -- covered in the other three events
(I am not in Las Vegas! In my first post in this series, [Science Slam], I forgot to mention that I was not physically there, and have since been flooded with invitations and requests for one-on-one meetings with clients and cocktail parties. Sorry folks! I am in Tucson writing these blog posts by watching the live stream videos of the event.)
Putting Smart to Work
Ginni Rometty, IBM Chairman, President and CEO, kicked off the event. In the opening video, we realize that "smart" is just a placeholder, translated to "Putting Cloud to Work", "Putting AI to work", and so on.
Ginni described an "interesting moment" that happens every 25 years, when business and technology change at the same time. Those who learn exponentially are disruptors, not victims of disruption.
[Moore's law]: Double the number of transistors on a chip every 18-24 months.
[Metcalfe's law]: The value of a network is related to the square of the number of nodes involved.
[Watson's law]: Ginni would like to coin this new law to refer to exponential learning from data using Artificial Intelligence (AI).
How much of the world's data is searchable? Only about 20 percent. The other 80 percent is proprietary data that provides competitive advantage. IBM is helping clients be the "incumbent disruptor".
Ginni covered three inflection points: your business, society, and IBM itself.
Companies must go on the offense, leverage multiple digital platforms (plural), and empower people by enabling "man+machine" learning in every process they have. What are better decisions worth? Over $2 trillion US dollars!
Man+Machine is better than man alone or machine alone. At [Credit Mutuel], a leading European bank, Watson technology is used to answer 60 percent of customer emails, and 95 percent of the employees there are happier for it.
IT technology represents both the greatest opportunity and the biggest issue of our time.
Trust and responsibility. We must be data stewards, with focus on privacy and security. Only 4 percent of data is encrypted.
Jobs and skills. Man+Machine augments man alone. 100 percent of jobs will change. Ginni coined the term "new collar jobs" a few years ago.
Inclusion is important. IBM is one of the leaders in this area, with its 400,000 employees spanning all races, genders, and sexual orientations. IBM was awarded the [Catalyst award] for companies making real change for women in the workplace. IBM is the only tech company ever to receive it, and this will be the fourth time IBM is honored with the award.
IBM has revamped its own HR with [Workday]. In 2016, Workday partnered with IBM on a seven-year deal to use the IBM Cloud for its platform. IBM in turn has switched its HR to using Workday applications.
Mainframe technologies and POWER9 are now on the IBM Cloud. IBM is also expanding IBM Cloud Private to include "IBM Cloud Private for Data".
To date, IBM has completed 16,000 Watson engagements. Watson Oncology is now in 150 hospitals, analyzing 13 different types of cancer.
The big system Watson used to play Jeopardy! in 2011 has been broken down into micro-services and APIs that are more easily consumable by applications.
IBM and Apple have announced integration with Watson. Apple [CoreML] natively goes to Watson. IBM can now go straight to Apple Swift code. A new "Watson Studio" allows you to develop AI models in the cloud, then deploy them in private on-premises.
IBM will also offer "Watson Assistant". In the past, buying Watson was like buying a puppy: you needed to train it yourself. If you wanted a vicious guard dog, or a seeing-eye dog, that was up to you. Now, IBM offers "Watson Assistant", which is pre-trained.
Secure to the core
IBM is obsessed with security and trust, from Blockchain to Pervasive Encryption.
In the past, IBM often tried to do this all on its own, but in today's business climate, IBM now has strategic partnerships in these many areas.
Lowell McAdam, Chairman and Chief Executive Officer of Verizon Communications, was the first guest speaker.
In April 2017, Verizon launched Oath, formed from the company's acquisition of AOL and Yahoo, which houses more than 50 digital and technology brands that together engage more than 1 billion people worldwide.
(I personally have been working with Verizon for decades, back when they were just NYNEX, Bell Atlantic, and GTE, before the joint venture with Vodafone and the acquisitions of MCI, AOL, and Yahoo! I use Flickr, one of the Yahoo brands.)
Oath now serves over 1.2 billion consumers. The name came from a promise to customers: to give them what they want, when they want it.
Verizon is the largest fiber provider in the USA, with enough fiber on hand to stretch to Mars.
They invest $18 billion per year, though the payoff often doesn't come for another five years. [5G Wireless network technology] is an example. Lowell feels that 5G will usher in the "fourth" industrial revolution:
Speeds over 1 Gbps for consumers and 25 Gbps for commercial use, compared to the 10 Mbps typical today.
5G will support 1,000 more devices per cell site, enabling IoT applications like intelligent lighting, video surveillance, and face recognition.
5G has short latency: 1 msec to the cell site and back, compared to 200 msec today. This shorter latency will enable Augmented Reality and Virtual Reality (AR/VR).
5G also reduces battery consumption; imagine charging your cell phone only once per month!
Verizon delivers value three ways:
Provide connectivity only. Verizon will continue to do this for some markets.
Like IBM, Verizon promises it will not use customer data in any manner that the customer did not "opt in" for. Business is based on trust, and those businesses that lose trust have a difficult time regaining it.
Shipping, Supply Chain and Global Trade
Michael J. White manages the Global Trade Digitization organization for Maersk. He was recently named CEO-designate of the IBM-Maersk Joint Venture.
Shipping is a $4 trillion US dollar business. As much as 80 percent of what we consume came over the ocean. On average, 20 percent of the shipping cost is administrative paperwork; in some cases, the administrative costs exceed the physical transport costs.
Over the last five years, the industry has grown at a 3.7 percent compound annual growth rate (CAGR). This is expected to increase to 4 percent as economies bounce back. Many companies run lean, expecting their supply chains to provide supplies "just in time".
Unfortunately, shipping is hugely inefficient and paper-based, which impedes the growth of trade. Take, for example, the shipment of a container of avocados from Kenya to the Netherlands: 30 entities involved, over 100 individuals, over 200 transactions.
Why did IBM-Maersk joint venture pick blockchain? Blockchain is not a solution searching for a problem. The problems are well known, and blockchain addresses them. Smart contracts and decentralized authority provides immutable trust, critical in an industry where many parties do not know each other.
The IBM-Maersk joint venture was formed over the past 18 months to create the world's best global trading platform. There are 25 companies on-boarding now, and another 40 companies have expressed interest in joining soon.
Unlike the anonymity of Bitcoin, which enables terrorists and murder-for-hire, IBM is focused on transparency, so that all parties identify each other.
Blockchain benefits all the key parties involved. Carriers benefit, customers benefit, and ports and terminals get information earlier upstream for better planning during peak periods, resulting in better utilization of available resources.
(Not everyone benefits - counterfeiters and corrupt government officials will not be happy with Blockchain used in this manner!)
Paperless transactions reduce the re-keying of information by 80 percent. Less re-keying means fewer mistakes and fewer typos.
This new global trade platform offers opportunities in adjacent blockchain networks for financial services, insurance, and food safety. To ensure food safety, Blockchain is used by Walmart, Kroger, Unilever and 20 others. One third of food grown is wasted.
Dave McKay, President & Chief Executive Officer, Royal Bank of Canada (RBC), was the next speaker. Dave graduated from the University of Waterloo, and is a COBOL computer programmer at heart. RBC still uses COBOL programs in its banking applications!
RBC is the top bank in Canada, and would be the #5 bank if it were based in the USA. It will celebrate its 150th anniversary in 2019, and has earned the highest customer satisfaction ratings for multiple years running. RBC has 13 million customers, and is also Canada's #1 broker/dealer for investment banking.
Back in the 1980s, banks were only open 10am-3pm, and treated it as a privilege for clients to work with the bank. Account holders came in several times per week, and relationships were built with local branches. Today, account holders are not coming into branch offices; they use ATMs and mobile phones instead.
In the past, consumers used their RBC Credit Cards, and this provided brand recognition for RBC. Today, traditional banking services are being embedded into other value chains. With Apple Wallet, for example, you enter your RBC credit card once, and then nobody knows which bank you are using to pay for coffee.
Like any bank, RBC is focused on three areas: moving money, storing money, and lending money. AI is needed to turn these transactions into knowledge, to provide business value and insight. However, RBC has only 40 applied and pure data science researchers on staff. This was deemed not enough, so RBC partnered with IBM.
The Cloud provides the computing power and speed needed; RBC has 60 apps in development in the IBM Cloud. While Silicon Valley start-ups might "let the app fail faster in the hands of clients", that approach doesn't work with money transactions.
RBC has invested heavily in blockchain, which will transform how it works with others. Digital transformation is not just technology, but also cultural change. Is RBC in the mortgage business or the "housing enablement" business? Is it in the car loan business or the "transportation enablement" business?
Small businesses want to focus on their own clients, not bookkeeping and accounting. RBC has deployed AI in the Cloud to create the Advisor's Virtual Assistant [AVA] application. There have been over 48 million interactions in the first four months!
RBC is also investing $500 million this year to build the IT skills of their employees.
RBC is also focused on the stewardship of data. The strength and trust of financial institutions is the core to a strong economy. RBC policies are based on "opt in" to provide value relevant to both clients and the bank. Banks that breach that trust will struggle.
Ginni (and the rest of the company) has re-invented IBM to achieve exponential change. The change impacts all industries, not just the three we saw on the stage during this keynote session.
To follow along with the rest of Think2018 conference, watch the live stream on [www.ibm.com/events/think/watch] or follow the twitter hashtag #Think2018
This week, IBM clients, Business Partners and executives get together for the new IBM [Think 2018] conference. This is a combination of last year's three events: Edge, InterConnect, and World of Watson (WoW).
(The theme this week is "Putting smart to work." Some might feel that this is a grammatically incorrect use of the adjective [smart], referring to having quick-witted intelligence or being neat and well-dressed. Many words in the English language have multiple meanings and uses. The word smart is also a noun, referring to either business acumen, technical skills, or "a sharp stinging pain.")
The keynote session today was "Science Slam: Unveiling 5 Breakthrough Technologies That Will Change the World!" by Arvind Krishna, IBM Research Director. IBM has over 3,000 researchers, in 12 labs, across six continents.
This talk was based on IBM's annual five-in-five, five predictions that might change the world in the next five years. For amusement, read my 10-year-old blog post [Five in five for 2008], including predictions for smart thermostats that can be controlled remotely, and self-driving cars.
("Science Slam" is IBM Research's version of [Pecha Kucha], but instead of art students showing 20 slides for 20 seconds each, each IBM research scientist has 5-7 minutes to explain the research project they are exploring. These are done both internally, as well as for audiences outside the company.)
Jamie Garcia served as emcee, introducing each of the five experts. Each spent 5-7 minutes, Science Slam style, on the projects they were working on.
1. Crypto-anchors and blockchain technology
‘Everything you don’t understand about money combined with everything you don’t understand about computers’ [25-minute video]
Andreas Kind presented first. Blockchain is not just a provenance system that enables Bitcoin and other cryptocurrencies; it can also be used for other goods.
(The best layman's explanation of blockchain and cryptocurrencies I have seen was John Oliver's humorous take on his HBO show [Last Week Tonight]!)
The global market for counterfeit goods, from cinnamon to footwear to medicine and automotive parts, is estimated at over $1.8 trillion US dollars. IBM is working on how to use blockchain for other purposes, such as restoring trust in the global supply chain, and hopes to cut the number of counterfeit goods in half or more.
Andreas explained tamper-proof technologies called "crypto-anchors" -- from indelible ink on pharmaceuticals to computers smaller than a grain of salt -- that can be used to track products as they travel from one country to the next.
2. Lattice Cryptography and Fully Homomorphic Encryption
Cecilia Boschini from IBM Zurich presented next. As quantum computers get more powerful, the basic math involving prime numbers that most current encryption models are based on become vulnerable.
(Don't worry, she assured the audience: hackers would need a 1000-Qubit quantum computer, which doesn't exist yet, to break today's encryption codes!)
What we need are post-quantum, or quantum-resistant, mathematical models. Lattice Cryptography uses harder mathematical problems to make codes more difficult for hackers to break, even when armed with quantum computers.
Another challenge with existing encrypted data is that we must decrypt the data to perform computations on it. Fully Homomorphic Encryption, or [FHE] for short, allows computations to be done on data in its encrypted state. For example, if I had an encrypted list of names with credit card or social security numbers, I could sort the list alphabetically without decrypting any of the data.
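As a loose illustration of the idea, here is a toy sketch of my own. This is NOT real FHE and is not secure; it only shows the flavor of computing on data a server cannot read, using a trivially additive cipher:

```shell
# Toy illustration of homomorphic computation -- NOT real FHE, not secure.
# Encrypt small integers as E(x) = (x + key) mod 1000. The sum of two
# ciphertexts is then an encryption of the sum of the plaintexts, so a
# server that never sees the key can still perform the addition.
key=347
enc() { echo $(( ($1 + key) % 1000 )); }
c1=$(enc 42)
c2=$(enc 58)
csum=$(( (c1 + c2) % 1000 ))               # computed on encrypted values only
plain=$(( (csum - 2*key + 2000) % 1000 ))  # decrypt: remove the key once per addend
echo "$plain"
```

Real lattice-based FHE schemes support both addition and multiplication on ciphertexts, which is what makes arbitrary computation possible.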
3. AI-enabled robotic microscopes to monitor ocean water
Tom Zimmerman is known as IBM Almaden's [MacGyver], able to use common technologies in new and innovative ways.
By 2025, over half of the world's population will be living in water-stressed locations. IBM is working on robotic microscopes that can be deployed across the oceans, connected to the Cloud, monitoring the state of plankton.
Why plankton? Plankton produces two-thirds of all oxygen we breathe, and serves as the "baby food" for all oceanic species. Tom has re-programmed "face recognition" in smartphone cameras to recognize plankton, identifying what they are doing and eating.
Monitoring plankton provides an "early warning system", the proverbial [canaries in the coal mine] for impending water problems.
4. Eliminating Bias from Artificial Intelligence (AI)
Information overload! Overwhelmed by too much input, our brains cope either by looking only for differences, or by focusing on familiar things that confirm our existing beliefs.
Not enough meaning. Lacking complete information, our brains fill the gaps and connect the dots to find patterns that aren't patterns at all. Racism, prejudice, and stereotypes are examples of this.
The need to act fast! Survival in some cases demands acting fast, to avoid being eaten by an animal, for example. Unfortunately, our brains favor the quick and simple, over the more important but often delayed, distant or complicated response.
What should we remember? We decide what to remember and what to forget. Our brains often favor generalities over specifics, as they take up less space, and the details we do remember are often edited or reinforced after the fact.
IBM is collaborating with the Massachusetts Institute of Technology [MIT] to reduce bias in Artificial Intelligence by rating different AI models on fairness.
The AI models that will win in the future are those where the biases are tamed or eliminated altogether.
5. Quantum Computing
Talia Gershon was the last speaker.
Many problems become exponentially more difficult to solve with classical computers. For example, simulating protein molecular bonding gets more difficult the larger the molecules are, because you have more electron interactions.
Quantum Computers run at a temperature of 15 millikelvin (mK), roughly 460 degrees below zero Fahrenheit. The computation unit is called a [Qubit], and a 5-Qubit quantum computer can only solve problems that your laptop can already solve classically. IBM now has "IBM Q" with 50-Qubit computers available.
The IT industry is still in the early stages, but the IBM Quantum Information Software Kit [QISKit] lets programmers experiment and develop algorithms for this new computational model.
Over the next five years, IBM predicts that Quantum Computing will transition from the lab, to the mainstream, to solve problems that were previously too difficult or time-consuming to solve.
Back then, IBM allowed its employees the option to run Windows, Linux or Mac OS. Since then, dual-boot Windows/Linux configurations, like the one I had on my ThinkPad T410, proved too difficult for our help desk to support, so they are no longer allowed.
In 2015, I received my new ThinkPad T440p to replace the old T410 model. For the 20 to 25 percent of the IBM employee population that manage, support and connect directly to client networks, IBM required Linux encrypted with LUKS, running Windows as KVM guests when needed for specific applications. This is more secure than running Windows natively, preventing viruses and other malware from spreading between IBM and its clients.
As I am occasionally asked to help out our colleagues in lab services or with critical situations, I decided to set up my laptop to match, just in case. RHEL is rock solid, and running Windows as a KVM guest could not be easier. Not having to worry about Windows viruses while traveling on business is a huge benefit as well.
Upgrading from RHEL 6.1 all the way up to RHEL 6.9 was simply a push of a button: all the new applications and the kernel were installed, followed by a quick reboot. The migration from RHEL 6.9 to RHEL 7.4, however, was a major undertaking.
In past migrations, I moved from a working laptop to a second laptop, which let me remain fully productive on the old machine until I was ready to cut over. This time, I was performing a fresh install on my existing machine. To avoid problems and delays, I wrote myself an 8-page, 17-step migration plan capturing all the tasks needed to minimize the impact on my productivity.
(Of course, IBM has a help desk. You hand over your laptop; they back up the home directory, wipe the system clean, do a fresh install, restore your home directory, and return the laptop 3-5 days later, leaving the rest of the tasks up to you. Basically, that would merely replace the first three of my 17 steps below. I did not feel like burdening our help desk, nor waiting 3-5 days without a laptop!)
Here were my steps:
Backup my existing system
In addition to backing up all my individual files to the Cloud, I also used [Clonezilla] to create a full image backup of my 500GB drive to an external USB drive.
Not all data is in file form. I exported my browser bookmarks so that I could import them back later, and ran "rpm -qa" to get a list of my installed applications.
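To make that step concrete, here is a minimal sketch of the package-list diff. The printf lines stand in for real "rpm -qa" output, and all file names here are illustrative:

```shell
# Before the wipe (RHEL 6.9):  rpm -qa --qf '%{NAME}\n' | sort > old-packages.txt
# After the fresh install:     rpm -qa --qf '%{NAME}\n' | sort > new-packages.txt
# Sample data standing in for real rpm output on both systems:
printf 'gimp\ninkscape\nvim\n' > old-packages.txt
printf 'vim\n' > new-packages.txt
# Lines only in the old list are packages that still need reinstalling:
comm -23 old-packages.txt new-packages.txt > to-reinstall.txt
cat to-reinstall.txt
```

Note that comm requires both input files to be sorted, which the sort in the capture commands guarantees.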
Initially, I thought to format the 4TB external drive as UDF, which is readable by Windows, Linux and Mac OS and supports files larger than 4GB. Not knowing whether [ExFAT] or Universal Disk Format [UDF] would serve better, I split the 4TB into two 1.9TB partitions and formatted one as each. Both formats support files larger than 4GB, which I have, but I discovered that the older RHEL 6.9 release, based on a 2.6 Linux kernel, can only write 68GB of data to a UDF partition. This is fixed in later kernels, but that did not help my existing RHEL 6.9 system.
Fortunately, the latest Clonezilla LiveCD chops the cloned image into files small enough to write to a variety of formats, and ships with a newer kernel that can write the full capacity of a UDF partition.
In a crisis, I can restore back to RHEL 6.9 within two hours. This was my "relief valve" in case I hit any major delays or had to travel for business on short notice.
Fresh install of RHEL 7.4 Linux
This completely wipes my drive and creates two partitions: a tiny "/boot" partition needed to boot the system, and the remaining drive capacity as a large LUKS-encrypted LVM volume, internally divided into "/" and "swap" logical volumes.
Copy all of my files back
The challenge is that some restored files might clobber the configurations of the newly installed applications. For this reason, I created /home/tpearson/RHEL69 and put everything there, so that I could move files to their correct locations as appropriate.
Copying all the files back in this manner also eliminated having to stay tethered to the external USB drive.
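A sketch of that staging step (all paths here are illustrative stand-ins, not my actual directories):

```shell
# Restore the USB backup into one staging directory so old dotfiles don't
# clobber the fresh RHEL 7.4 configs; move items out individually later.
# (Paths are illustrative; the echo creates a stand-in for the real backup.)
mkdir -p /tmp/usb-backup /tmp/home/RHEL69
echo 'sample data' > /tmp/usb-backup/bookmarks.html
cp -R /tmp/usb-backup/. /tmp/home/RHEL69/
# Later, move individual pieces into place, e.g.:
#   mv /tmp/home/RHEL69/bookmarks.html ~/Documents/
ls /tmp/home/RHEL69
```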
Setup LAN connectivity
I have to connect to IBM and guest networks, so this configuration is important. It includes the EAP, TLS and VPN configurations. I thought I could simply re-use my RHEL 6.9 certificates, but no: I had to create and register fresh certificates for the RHEL 7.4 release.
Configure Cinnamon Desktop
RHEL 7.4 uses Gnome 3 by default, which is quite different from the Gnome 2 used in RHEL 6.9. I don't care for it, so I configured the [Cinnamon desktop] instead. Many people who use Linux Mint or Ubuntu will be familiar with it, and for those switching from Windows or RHEL 6.9 Linux, Cinnamon has a familiar "Start" button in the lower left corner.
By default, our RHEL 7.4 image comes with Firefox and Chrome browsers, so all I needed to do was import the bookmarks that I had exported in step 1 above.
Configure KVM guests
I was able to bring over my Windows 7 Kernel-based Virtual Machine [KVM] guest from RHEL 6.9 and run it without problems, but it was bloated, consuming nearly 60GB of space. I therefore decided to build fresh Windows 7 and Windows 10 guest images instead.
As with Linux, I had written down which applications I had installed on Windows, and used that list to configure the new guests. Nearly everything I do runs natively on Linux, but I do use Microsoft Office (PowerPoint, Excel, Word) and a nice tool called [CutePDF] that lets me print to PDF instead of an actual printer.
Windows 10 comes with a built-in "Print to PDF" feature, so CutePDF is not needed there.
Configure IBM Notes, Sametime and Gnote
IBM is a heavy user of [IBM Notes] (formerly called Lotus Notes), not just for email but also for its document management and database capabilities. Sametime is our "Instant Messenger" app. [Gnote] is a Linux tool for storing short notes; I use it for all of my email templates for quick copy-and-paste responses.
IBM recently made using printers super easy. Print to the common "Cloud printer", and then pick up your print-outs from any printer in the building, any IBM building, worldwide. I could print in Tucson, for example, and pick up my print-outs when I am in the IBM buildings in Austin, Texas!
I also had to configure my printer at home, for those days where I need to print a boarding pass or quick document.
Configure File Sharing
IBM has deployed IBM [Spectrum Scale] internally as a company-wide file sharing service for employees called "Global Storage Architecture" (GSA). Configuration for me just meant finding my local cell (tucgsa) for Tucson and entering my credentials.
Install Docker and DSX Desktop
[DSX Desktop] is the local laptop version of IBM's cloud-based [Data Science Experience], allowing me to perform Hadoop and Spark analytics for the various projects I work on. It runs as a Docker container, so I had to configure Docker as well.
Install Multimedia Codecs
One of the big drawbacks of Linux, compared to Windows or Mac OS, is the lack of out-of-the-box multimedia support. Linux distros like Red Hat don't ship with these codecs pre-installed, leaving this as an exercise for the end user.
IBM produces a lot of audio and video content, including replays of conference calls and webinars for internal training. I keep a collection of different audio and video files to verify that everything is configured correctly for proper playback.
Install GIMP and other software
The GNU Image Manipulation Program [GIMP] is a great tool for quick editing of graphics. Another tool, Inkscape, is designed for vector graphics.
Configure file-level backup
In addition to the full-volume image backups with Clonezilla, I back up individual files, which are sent over the IBM internal network to a central server. All I needed to do was point the client at my previous backup set and create the appropriate include/exclude list.
Many employees might back up just their home directory, but I customize a lot of the Linux configuration, so I back up a few more directories than that. Here is what I choose to back up:
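I won't reproduce my exact list, but for illustration, an include/exclude file for the IBM Spectrum Protect backup-archive client looks something like this (the directory choices here are examples of my own, not my actual list):

```
* Illustrative Spectrum Protect (dsmc) include/exclude fragment.
* The "..." wildcard matches any number of subdirectories.
exclude.dir /home/*/.cache
include     /home/.../*
include     /etc/.../*
include     /boot/.../*
```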
Configure Grub2 boot menu
RHEL 7.4 supports [Grub2], which can boot ISO files directly. I like to add Clonezilla and [SystemRescueCD] as boot options. These were simple enough to add: just follow the instructions, copy the ISO files to the /boot directory, and create a menuentry for each.
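For reference, a loop-boot stanza for Grub2 looks roughly like this. The ISO file name and kernel arguments are illustrative and vary by Clonezilla release, so check the project's documentation for the exact lines:

```
menuentry "Clonezilla live (ISO)" {
    set isofile="/clonezilla-live.iso"
    loopback loop ($root)$isofile
    linux (loop)/live/vmlinuz boot=live union=overlay findiso=$isofile
    initrd (loop)/live/initrd.img
}
```

A stanza like this is typically appended via /etc/grub.d/40_custom, followed by regenerating grub.cfg.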
Validate final configuration
After eight days, I finally completed all these steps and was able to validate that everything works correctly. I ran some sample workflows:
Launch a Windows KVM guest, edit a PowerPoint presentation, and print it to a PDF file
Open email, launch embedded URL links, and copy-and-paste templates from Gnote
Launch GIMP, edit a graphic, and import the result into a PowerPoint presentation
Download and play a webinar replay MP4 file
Fresh Clone of full volume image
Using the Clonezilla entry I added to the Grub2 boot menu, I backed up my full 500GB drive. I will keep the RHEL 6.9 image for a few weeks as an emergency backup, but so far, everything seems to be working just fine.
This took longer than I expected, but I am happy with the final result. Red Hat is rock-solid, and the new RHEL 7.4 lets me run DSX Desktop, Windows 10, and other applications that were not available on our previous RHEL 6.9 build.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
Everyone is getting ready for next week's "Think 2018" event, so these might get missed under all the excitement.
IBM Spectrum Archive Enterprise Edition V1.2.6
IBM [Spectrum Archive] Enterprise Edition supports Linear Tape File System (LTFS) cartridges as part of a larger IBM Spectrum Scale deployment. Version 1.2.6 provides features to help transition from old technology to new technology, at the library, drive and cartridge level. It also adds support for "Little Endian" mode for IBM Power servers.
Tape library replacement procedure
Tape intermixing in pool for technology upgrade
Support for LTO 8 Media on LTO 8 drives
Support for Power Systems in Little Endian (LE) mode
IBM Copy Services Manager [CSM] was formerly known as Tivoli Storage Productivity Center for Replication. It manages copy services, like FlashCopy and remote mirroring, for the DS8000, Spectrum Virtualize family, and Spectrum Accelerate family products. Version 6.2.2 adds some nice features:
Support for scheduled tasks against Copy Services Manager sessions
Support to create DS8000 system diagnostics from the Copy Services Manager GUI and CLI for issue resolution
New SNMP event and email notifications for any detected path failures
Ability to enable embedded Easy Tier heat map transfer to support full Copy Services Manager session configuration, including practice volumes
Next week, I will not be in Las Vegas for Think 2018. If you won't be there either, you might consider watching some of the livestream videos at [www.ibm.com/events/think/watch] starting March 19, 2018.
Many of you have seen the Storage announcements that were made last month on February 20. I gave you all the skinny about the context of the technology shift and some resources to go deeper still in my blog post [IBM Storage Announcements for February 2018].
So, there’s a lot going on in IBM Storage right now. I’m looking forward to the upcoming IBM Systems Technical University in Orlando, Florida, from April 30 to May 4, 2018.
TechUs are my favorite events to attend. This is a true event for techies! You get hands-on labs, demos, technical sessions, birds-of-a-feather (BOF) sessions, and open technology discussions.
There are over 200 sessions on IBM Storage. I have the honor of sharing the latest in storage technology and strategy. Here are the topics I am scheduled to present:
IBM hybrid cloud storage solutions
Managing risks with data footprint reduction
Information lifecycle management: Why archive is different than backup
The seven tiers of business continuity and disaster recovery
Introduction to IBM Cloud Object Storage System (powered by Cleversafe)
The pendulum swings back: Understanding Converged and Hyperconverged Systems
Reporting and monitoring: How to verify your storage is being used efficiently
Well, it's Tuesday again, and you know what that means? IBM Announcements! This week IBM announced new and refreshed storage products.
On Feb 20, there will be a [Live Stream event] to watch the announcements online. The event is at Half Moon Bay in California, starting at 9:30am Pacific Standard Time (PST).
IBM decided to do things a bit differently for this launch. Instead of dozens of stodgy press releases, IBM opted to complement the announcements with a series of blog posts, with [Storage innovation drives 21st century business] providing an overall recap.
(FTC Disclosure: I work for IBM. This blog post can be considered a "paid celebrity endorsement")
IBM Spectrum NAS
IBM Spectrum NAS is a new software-defined storage offering to address three specific market segments:
General purpose file serving and home directories
Native SMB protocol NAS for Microsoft Windows Applications
File serving for Virtualization Environments, such as VMware and Hyper-V
IBM Spectrum NAS is software that you can run on your x86 servers, either bare metal or as Virtual Machines. You start with four nodes, and can scale out to tens of machines as you grow.
IBM Spectrum NAS was written from scratch, not based on the open source Samba software. It was deployed internally within IBM last year, and is now being productized. It is highly compatible with the SMB2 and SMB3.1 protocol specifications, and supports the NFS3, NFS4 and NFS4.1 protocols as well.
As a scale-out solution, it is both more robust and scalable than a single Windows server, and less expensive to run than traditional dual-controller NAS filers.
IBM Spectrum Protect has been enhanced to detect ransomware attacks, and offers improved auditing to meet the European Union's General Data Protection Regulation [GDPR] privacy legislation.
(If you are not in Europe, and feel this legislation does not apply, you may be sadly mistaken. This legislation may affect any company that shares information with EU companies, or has even a single client from the European Union. Think of it as this year's [Y2K crisis]. It hits globally on May 25, 2018.)
IBM Spectrum Protect Plus offers snapshot support for both VMware and Hyper-V virtualization environments. The vSnap repository can now be replicated to a remote facility for Business Continuity and Disaster Recovery (BC/DR). IBM Spectrum Protect Plus is now also available as a Software-as-a-Service (SaaS) offering on IBM Cloud.
IBM Spectrum Virtualize is the software in SAN Volume Controller, FlashSystem V9000, and the Storwize V7000 and V5000 series. It is also available as software you can deploy on your own x86 servers, or in the IBM Cloud. Fellow IBM master inventor and blogger Barry Whyte has a great post on the details of Spectrum Virtualize v8.1.2 latest release, including [Data Reduction Pools].
Cohasset Associates has reviewed the IBM Cloud Object Storage (IBM COS) Compliance Enabled Vaults (CEV) capability and determined that this feature meets the U.S. Securities and Exchange Commission SEC Rule 17a-4 requirement for non-erasable, non-rewriteable (NENR) tamperproof enforcement.
Some clients also refer to this as Immutability, Content Addressable Storage, or Write-Once Read-Many (WORM). Rather than invent new terminology, IBM opts to use "non-erasable, non-rewriteable" to match the standard language of the SEC 17a-4 regulation.
IBM COS is now also eligible for "Storage Utility" pricing. See my blog post [IBM Announcements 2017 November] for details on how Storage Utility pricing is implemented.
More than 15 years ago, I was the chief architect for IBM Spectrum Control, which back then was called the IBM TotalStorage Productivity Center.
A subset of IBM Spectrum Control was needed for a variety of IBM storage products to support VMware in a consistent manner, so IBM made this available as the "Spectrum Control Base Edition", entitled at no additional charge. Last year, IBM also merged in storage enablement for containerized environments like Docker.
Since "IBM Spectrum Control Base Edition with Storage Enablement for containerized environments" is too long to say, IBM shortened this to "Spectrum Connect". In addition to VMware and Docker support, Spectrum Connect also supports Microsoft PowerShell and IBM Cloud Private.
If you have 11.6.2a microcode on your XIV Gen3, you can now perform Online Volume Migration (OLVM) to FlashSystem A9000 and A9000R systems running 12.2.1 release. This will help clients in their migration efforts.
It is funny how an article or blog post can remind me of something long, long ago.
Back in 2005, my manager, Rich Lechner, was the Executive Advocate for a client in Chicago. While visiting that client, he asked what they wanted most. The answer: for IBM to come in and do an "Information Lifecycle Management" (ILM) study of their IT environment. He agreed to send me on-site for a week.
I had done disk and tape studies of this kind before, but this time I was going to do an end-to-end study, evaluating their growth and identifying the best storage media for different data types.
Joining me were three "observers" from IBM Lab Services: Barbara Read, Steve Bisel and Tom Moore. As if I did not have enough pressure from the client, I now had to be "watched" while I interviewed the storage administrators and generated and reviewed reports.
At the end of the week, I had to provide the client's upper management with a list of short-term, mid-term and long-term recommendations. As a side benefit, the client decided to purchase two DS8000 storage systems, replacing their HDS equipment!
After that initial engagement, the four of us formed a team. We performed similar studies at other client locations. Barbara Read was the process expert who wrote the "Documents of Understanding". Steve was our financial expert, and used spreadsheets to show total cost of ownership comparisons. Tom was our infrastructure expert, and used Microsoft Visio to document the inventory of IT equipment, and how it was all interconnected.
I was the consultant and public speaker for the team, incorporating the work of the other three into a PowerPoint presentation. During the week we would show initial findings to the client, then follow up a few weeks later with a full report.
A lot has changed in the past 13 years! First, ILM was renamed to "Storage Infrastructure Optimization" (SIO) studies. Our initial team trained dozens of other practitioners. Today, SIO studies are done all over the world.
This week -- Jan 29 to Feb 2, 2018 -- I am in New York City with other IBM Storage executives, to meet with Channel distributors and Business Partners. If you are in the NYC area, and wish to have a product briefing, or just dinner or drinks, let me know!
I believe the "T" stands for "Third generation", as we have had other 9132 boxes before. Here are the details:
Small: Just 1U in size
Ports: 8, 16 or 32 ports
Transceivers: 32, 16, 8, and 4 Gbps
Protocols: FCP only, no FICON, FCIP, FCoE or iSCSI
Why is this important? Because the 16 Gbps and 32 Gbps transceivers support NVMe over Fabrics. Let's do a quick NVMe recap:
Last May, IBM announced that its developers are re-tooling the end-to-end storage stack to support [New Faster Protocols for Flash Storage], to boost the experience of everyone consuming the massive amounts of data now being perpetuated across cloud services, retail, banking, travel and other industries.
NVMe is a new protocol that is replacing the traditional SAS and SATA standards for solid state drives (SSD). By employing parallelism to process data simultaneously across a network of devices, clients can expect significantly reduced delays from data bottlenecks, and can move higher volumes of data within their existing flash storage systems.
IBM's NVMe strategy is based on optimizing the entire storage system stack - from applications requiring the data to flash technology to store it. Through the development of its FlashSystem family of all-flash storage solutions, IBM recognized years ago that multiple technologies would be required to address the demands of ultra-low latency data processing. IBM is developing solutions with NVMe across its storage portfolio, which it plans to bring to market in 2018.
At the AI Summit New York, December 2017, IBM disclosed a [technology preview and demonstration] with the integration of IBM POWER9 Systems and IBM FlashSystem 900 using NVMe-over-Fabrics InfiniBand. This combination of technologies is ideally suited to run cognitive solutions such as IBM PowerAI Vision, which can ingest massive amounts of data while simultaneously completing real time inferencing (object detection).
Whether it is streams of data, transactional data, or batch processes, a consistent requirement is the lowest possible latency. Among the leading all flash storage vendors, IBM with its FlashSystem 900, has stuck to its mission delivering low latency all flash arrays. Along comes NVMe-oF, which is, at its core, about getting rid of latency.
How do you take an already low latency protocol, like InfiniBand or Fibre Channel, and make it faster? Replace SCSI with NVMe and enable NVMe from server to fabric to storage array.
The FlashSystem 900 has been shipping with InfiniBand using SRP (the SCSI RDMA Protocol) for many years. In the technology preview, the very same InfiniBand adapter, based on the Mellanox chip set, is instead used with the OpenFabrics driver distribution and NVMe-oF over InfiniBand.
While the demonstration last December used InfiniBand, that is not the only transport. NVMe-oF can also be used over Ethernet, using either the Internet Wide Area RDMA Protocol (iWARP) or RDMA over Converged Ethernet (RoCE). NVMe-oF over Fibre Channel is often referred to as FC-NVMe, and can drive NVMe over FCP or FCoE. Even though iWARP, RoCE and FCoE are all Ethernet-based, NVMe-oF RDMA on the first two is different from FC-NVMe over FCoE.
Why not just drive NVMe commands over standard TCP/IP? The NVMe standards board is investigating this, but probably won't have anything ready until 2019.
This week, IBM will be at the [Cisco Live!] event in Barcelona, Spain, talking about this new 9132T switch, as well as all of our VersaStack solutions! I won't be there, obviously, since I am in New York City, but if you are there, please send me photos! Barcelona is a wonderful city!
I hope everyone had a festive and restful winter break! I sure did!
(FTC Disclosure: I work for IBM. IBM is in its 17-day "quiet period" before it announces full-year and 4Q results on January 18. Therefore, I picked a topic for today that has nothing to do with storage products, recent client wins, or financials.)
It's January, so I thought I would discuss [New Year's resolutions], a tradition in United States in which a person resolves to change an undesired trait or behavior, to accomplish a personal goal, or otherwise improve their life. Early Romans made promises to their god Janus, for whom the month of January is named.
Sadly, most of us are unsuccessful. This is often because the resolutions were unrealistic, we failed to measure and track our progress, or we simply lost interest midyear.
From my own experience, most resolutions can be lumped into four major categories:
Get healthy: Eat better, lose weight, exercise more, sit less, quit smoking
Get organized: Stop procrastinating, pay off debt, de-clutter, switch to a better job, reduce stress
Become social: Spend more time with friends and family, meet new people, travel, volunteer for charity
Learn new skills: Learn a new language, take up a new hobby, learn to paint or create arts and crafts
A technique I use to develop presentations might also help people keep New Year's resolutions. The technique, called [SCIPAB®] and created by Mandel Communications, is an elegantly simple, six-step method for starting important conversations or creating [Effective Presentations]. Since resolutions are basically "conversations with yourself," let's give it a try!
Situation: "Oh No! The boss's daughter, Nell Fenwick, is tied to the railroad tracks!"
Complication: A train approaches!
Implication: If nobody does anything soon, she will die
Position: "I, Dudley Do-Right, will save her!"
Action: Untie her from the tracks and set her free
Arrest the villain, Snidely Whiplash
Benefit: Nell lives! "Dudley Do-Right, you are my hero!"
Let's see how we can use this approach on different categories of resolutions. To get healthy, we might use:
Situation: "Oh No! My latest doctor visit indicates that my numbers are too high!"
(AMA Disclosure: I am not a doctor, and this is not medical advice. Here, "numbers" could represent any appropriate health measurement: your BMI, blood pressure, cholesterol, triglycerides, liver enzymes, or blood sugar, for example.)
Complication: I am not getting any younger.
Implication: I am at risk of heart disease, cancer, or other health issue. This situation will not go away on its own.
Position: I need to change my lifestyle to get healthy
Action: Set an appointment to see my doctor
Follow doctor's recommendations for diet, medication and exercise
Schedule follow-up appointments to measure and track progress
Benefit: My health measurements will return to normal range.
Rather than resolving to "eat less and exercise more," the above approach focuses on the end result rather than on intermediate actions, and therefore has a better chance of success: getting your health measurements back within the normal range.
Let's try another one. To get better organized, we might use:
Situation: "Sigh! All of my projects are over budget and behind schedule, my desk is a mess, I forget important thoughts and ideas, and I am always late to meetings."
Complication: I just got assigned to lead project XYZ.
Implication: If I am not better organized, I could lose my job.
Position: I need to change my work routine to get organized.
Action: Read David Allen's book and learn his system for "Getting Things Done" [GTD], or one of the many variants, like [GSD] or [ZTD].
Decide where to write down and keep track of my thoughts, tasks and projects, either on paper, like a notebook or [Hipster PDA], or in an online mobile account like [Evernote] or [Google Keep]. Choose something that will be within arm's reach 24 hours a day.
Work with project managers to track and measure progress of project XYZ.
Benefit: Project XYZ will be completed on schedule, within budget. I might even get a bonus, raise, or promotion!
I could go on, but you get the idea.
In his WSJ article [Blame it on the Brain], Jonah Lehrer cautions against trying to change too many habits all at once. If you have multiple resolutions, try to focus on establishing new habits for one resolution for a month or two, before starting the next one. Prioritize what is most important.
The study surveyed 5,676 leaders from various industries, education, and government agencies responsible for workforce development and labor/workforce policy. This was a truly global survey, with respondents from North and South America, the Nordics, Europe, Africa, Middle East and Asia.
A gloomy picture for the future
The survey paints a gloomy picture for the future. The majority of industry executives struggle to keep their workforce skills current, in light of rapidly changing technological advancements.
Only 55 percent of the respondents felt the current education system, from grade school up to university, was adequate to ensure lifelong learning and skills development. Most blamed inadequate investment from private industry in addressing these issues.
Any problem can be solved if (a) everyone agrees what the problem is, and (b) everyone feels it is high enough priority to solve. The study found there was a disparity of what the problem is, what the priorities are, and who should solve it.
In the book Class Counts: Education, Inequality, and the Shrinking Middle Class, the author Allan Ornstein argues ".. the debate centers on whether the government should take a backseat or manage the economy, whether a free market should prevail or whether we should redefine or tinker with market forces..."
Which workplace skills are in short supply?
Can we at least agree on which workplace skills are in short supply?
Not surprisingly, industry leaders ranked the top three skills required:
Technical core capabilities for Science, Technology Engineering and Math [STEM]
Basic computer and software/application skills
Fundamental core capabilities around reading, writing and arithmetic (often called [the three Rs])
These are all "hard skills", referring to the knowledge, skills and competencies to perform specific tasks. Nearly 75 percent of corporate training budgets are focused on hard skills.
Government leaders, on the other hand, especially those that are responsible for labor/workforce policy, ranked the top three skills:
Ability to communicate effectively in a business context
Willingness to be flexible, agile and adaptable to change
Ability to work effectively in team environments
These would all be classified as "soft skills", referring to the people skills, social skills, communication and emotional intelligence to effectively navigate the environment and work well with others.
In fact, these government leaders ranked STEM, computer skills and "the three Rs" lowest in priority.
"Unless managers have forgotten everything they learned in Econ 101, they should recognize that one way to fill a vacancy is to offer qualified job seekers a compelling reason to take the job. Higher pay, better benefits, and more accommodating work hours are usually good reasons for job applicants to prefer one employment offer over another."
"... the long-hours pandemic is a symptom of the tech and design sectors' badge-of-honor-martyr-complex. ... part of the reason that women can't have it all is that American business has grown this time-macho culture, a relentless competition to work harder, stay later, pull more all-nighters, ... the classic 40-hour work week have trained us to measure our labor by the number of hours we log,... However, this mindset is dead wrong when applied to today's professionals. The value ... isn't the time they spend, but the value they create through their knowledge."
IT jobs require creativity and focus. In a feature article titled [Why you should work 4 hours a day, according to science], Alex Soojung-Kim Pang, author of Rest: Why You Get More Done When You Work Less, looks at the work habits of highly accomplished creative people through history and finds that they all shared a passion for their work, a terrific ambition to succeed, and an almost superhuman capacity to focus.
Yet when you look closely at their daily lives, they only spent a few hours a day doing what we would recognize as their most important work. The rest of the time, they were hiking mountains, taking naps, going on walks with friends, or just sitting and thinking.
Encouraging more students to develop the skills early
While we all agree that employers should raise salaries, offer better benefits, and fix their morally-corrupt culture of working too many hours, that only addresses part of the problem, the demand half of the equation. We also need to get kids to learn the hard and soft skills needed at an early age.
Do students have what it takes to work in the IT industry? John Rampton lists the [15 Characteristics of a Good Programmer]. Most are soft skills, with my favorites being: Laziness, Impatience and Hubris.
In his book Why Good People Can't Get Jobs: The Skills Gap and What Companies Can Do About It, Peter Cappelli advises corporations to take a more proactive role:
"... a huge part of the so-called skills gap actually springs from weak employer efforts to promote internal training for either current employees or future hires ... It makes no sense for the employers, as consumers of skills, to remain an arm's-length distance from the schools that produce those skills..."
The major stakeholders, from industry to education to government, should partner together. For example, the Chicago Public Schools (CPS) system will be the first in the United States to [require all students to take computer science] in high school, starting with the class graduating in 2020. Grants and training are being provided by IT industry giants like Google and Microsoft.
IBM is also doing its part with [a new education paradigm], called Pathways in Technology Early College High Schools [P-TECH]. Normal high school is typically four years (grades 9 to 12), but P-TECH is a system of innovative public schools spanning grades 9 to 14 that bring together the best elements of high school, college, and career. The additional two years (grades 13 and 14) of community college can help teach the soft and hard skills needed for particular jobs in IT.
After the six years, students graduate with a no-cost associate degree in applied science, engineering, computers and related disciplines, along with the skills and knowledge they need to continue their studies or step easily into well-paying, high-potential jobs in the IT arena across multiple industries.
The paradigm has grown from one school in 2011 to 60 schools by September 2016, with over 300 large and small companies affiliated with P-TECH schools serving thousands of students.
So the future may not be as gloomy as predicted. Problems can be addressed if everyone works together to solve them. In the meantime, I will be taking the rest of the year off for a long-overdue vacation. Perhaps I will go hike mountains and take naps, as Alex suggests above.
It's official. We have changed our name! The Worldwide IBM Systems Executive Briefing Centers (EBC) are now being called the Worldwide IBM Systems Client Experience Centers!
I joined the Tucson EBC team in 2007. For the past 10 years, I have been running design workshops, consulting with clients and architecting solutions.
Why the name change? The term "Executive Briefing Center" implies one-way communication with [death by PowerPoint], which can be ineffective in today's dynamic and collaborative work environments.
Client expectations for two-way communications have given rise to immersive and interactive engagements where clients not only learn about IBM's solution offerings, they experience them.
Through hybrid briefing/workshop engagements, demonstrations, and active promotion of our ISV Ecosystem partners, we take clients on a journey where they envision utilizing our technology and solutions to achieve desired business outcomes. The new Client Experience Center moniker more accurately represents the work we do and the value we provide.
(Note: I realize that the new acronym for the Client Experience Center (CEC) is the same as the Central Electronic Complex (CEC) used in both storage and server products. I can assure you that the executives who decided to rename the centers did not choose this to be funny! Consider it a mere coincidence.)
Of course, changing the name is not cheap. We will have to update all of our websites, and order new signage, new water bottles, new coasters, new embroidered shirts, and new business cards, just to name a few!
The weather in Tucson is awesome these next few months, so come on down! Can't travel? We can come visit you, or do it over the phone via webinar.
Our Worldwide IBM Systems Client Experience Centers are located in:
Last Friday, I helped students learn about Science, Technology, Engineering and Math (STEM). This was the annual [2017 Arizona STEM Adventure] event in Tucson, Arizona. Once again, Pima Community College Northwest Campus provided the venue.
The event hosted 1,200 students, ranging from fourth to eighth grades. Buses collected them from ten different school districts in the area. Home-schooled, private-schooled and charter-schooled children participated as well.
There were three dozen exhibits; some were indoors, others in tents outside. The weather was delightful for November.
IBM's exhibit used a simple bicycle wheel to demonstrate the properties of a [gyroscope]. A gyroscope is a spinning wheel that maintains its angular momentum. This is useful both for measuring forces that try to affect it and for counteracting those forces.
We had the kids stand on a rotating platform, holding the bicycle wheel with both hands. A volunteer would spin the wheel. If the kid leaned the wheel left or right, the platform would spin to counteract the force. (The effect can be accomplished while sitting on a swivel chair. See [Exploratorium] for an example.)
Gyroscopes are used in everything from airplanes to submarines to help with navigation. They keep space-based telescopes like the Hubble pointed in the right direction, help dig tunnels straight, and stabilize [Steadicam] filming for Hollywood movies and [IBM Client Center videos]!
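The physics behind the platform demo is conservation of angular momentum: with the wheel's axis held horizontal, the vertical angular momentum is zero; tilting the wheel gives it a vertical component, so the kid and platform must counter-rotate. Here is a back-of-the-envelope sketch; all masses and spin rates are invented assumptions, not measurements from our exhibit:

```python
# Conservation of angular momentum for the bicycle-wheel demo.
# All numbers are illustrative assumptions, not measurements.
import math

I_wheel = 0.15                   # kg*m^2, bicycle wheel (assumed)
omega_wheel = 2 * math.pi * 3    # rad/s, wheel spun at ~3 rev/s (assumed)
I_person = 1.2                   # kg*m^2, kid + platform about the vertical axis (assumed)

L_wheel = I_wheel * omega_wheel  # magnitude of the wheel's angular momentum

def platform_spin(tilt_deg):
    """Vertical angular momentum the wheel gains when tilted; the platform
    must spin the opposite way so the total vertical momentum stays zero."""
    L_vertical = L_wheel * math.sin(math.radians(tilt_deg))
    return -L_vertical / I_person   # platform angular velocity, rad/s

# Tilting the wheel a full 90 degrees transfers all of its angular momentum:
print(round(platform_spin(90), 2))
```

Notice the sign: the platform always turns opposite to the tilt, which is exactly what the kids felt on the rotating platform.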
According to the U.S. Environmental Protection Agency (EPA):
"The state has warmed about two degrees (F) in the last century. Throughout the southwestern United States, heat waves are becoming more common, and snow is melting earlier in spring. In the coming decades, changing the climate is likely to decrease the flow of water in the Colorado River, threaten the health of livestock, increase the frequency and intensity of wildfires, and convert some rangelands to desert." (Source: [What Climate Change means for Arizona])
Their robots stole the show! This one pictured here was remote controlled. Another one was able to pick up and throw basketballs.
(This is not my first exposure to FIRST. See my 2009 blog post [Helping Young Students] on how I helped fourth graders learn C programming language by building robots with LEGO Mindstorms.)
The team draws students from the five high schools of the Vail school district. I drive by one of these, the Vail Academy and High School, on my way to the IBM Client Experience Center. This is not just for boys: about one third of the team are girls!
The students design each robot, do the welding, even write the C++ programming, and participate in competitions!
Lunch and Logistics
With all the focus on science and technology exhibits, it is easy to forget all the work done behind the scenes. An [Eventbase] website was used to help us direct all of the students, teachers and volunteers to the right place.
Since we had enough volunteers for the IBM exhibit, I chose instead to be a "general volunteer" and was assigned the task of collecting and distributing lunches. For some schools, the students brought their own lunches on the bus; these were collected when they got off the bus and distributed to them when it was their time to eat. For other schools, their staff packed lunches for each student.
We staggered the distribution into five groups, with color coded labels, starting from 10:30am, every 20 minutes, to 11:50am. The volunteers themselves did not eat until 1:30pm. We were provided pulled pork sandwiches from [Mama's Hawaiian BBQ], a local favorite!
This was a great day! There are plenty of problems that need to be solved in our world, and a shortage of scientists and engineers to solve them. Encouraging kids to pursue these careers is a good step forward.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
The Collaboration of Oak Ridge, Argonne, and Livermore [CORAL] is a joint procurement activity among three of the Department of Energy's National Laboratories, launched in 2014 to build state-of-the-art high-performance computing (HPC) technologies that are essential for supporting U.S. national nuclear security and are key tools used for technology advancement and scientific discovery.
Of course, when you hear "state-of-the-art technology", IBM is probably the first company that comes to mind!
The new IBM Spectrum Scale 5.0 has been greatly enhanced to meet CORAL requirements:
Dramatic improvements in I/O performance
Significant reduction in internode software path latency to support the newest low-latency, high-bandwidth hardware such as NVMe
Improved performance for mixed small and large block size workloads, thanks to the new 4 MB default block size with a variable sub-block size based on the block size chosen
Improved metadata operation performance to a single directory from multiple nodes
Spectrum Scale 5.0 now automatically tunes more than twenty communication protocol and buffer management parameters, simplifying setup for optimal performance. The enhanced GUI features many capabilities, including performance, capacity and network monitoring, AFM (multicluster management), transparent cloud tiering, and enhanced maintenance and support, including interaction with IBM remote support.
Spectrum Scale 5.0 now offers file-level immutability. Previous releases supported immutability only at the fileset granularity, so this allows finer control. Immutability can be an effective tool as part of an overall Non-Erasable, Non-Rewriteable [NENR] compliance policy.
Spectrum Scale comes in both "Standard Edition" and "Data Management Edition". The latter offers some additional features, including Transparent Cloud Tiering, Asynchronous AFM Disaster Recovery support, and Encryption. Some additional enhancements to Data Management Edition in Spectrum Scale 5.0 are:
File audit logging capability to track user accesses to file system and events supported across all nodes and all protocols
Parseable data stored in secure retention-protected fileset
Data security following removal of physical media protected by on-disk encryption
The new IBM Storage Utility Offerings include the IBM FlashSystem 900 (9843-UF3), IBM Storwize V5030 (2078-U5A), and Storwize V7000 (2076-U7A) storage utility models that enable variable capacity usage and billing.
These models provide a fixed total capacity, with a base subscription plus variable usage of that total capacity. IBM Spectrum Control Storage Insights monitors the system capacity usage and reports on capacity used beyond the base subscription, referred to as variable usage.
The variable capacity usage is billed on a quarterly basis. This enables customers to grow or shrink their usage, and pay only for the capacity they use.
Suppose you only need 300 TB today, but expect this to grow to 1 PB (1000 TB) over the course of three years. You install 1000 TB (1 PB) of capacity, and pay for the base 300 TB, plus whatever above this 300 TB you might be using during each subsequent quarter. After 36 months, you pay for the rest of capacity installed.
(There are comparable offerings from IBM's competitors, but they often require that you pay for at least 75 to 85 percent of the installed amount, and then you would need to continue to disrupt your operations with additional capacity installed throughout the 12 to 36 month period. IBM's approach allows you to avoid installation disruption during the entire 36 month period!)
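The billing arithmetic in the 300 TB example works out as simple thresholding; this sketch uses an invented usage curve for illustration, the quarterly figures are not from any actual contract:

```python
# Sketch of variable-capacity utility billing: pay for the base up front,
# then pay each quarter only for usage above the base.
# The usage numbers below are invented for illustration.
BASE_TB = 300
INSTALLED_TB = 1000

def quarterly_variable_tb(used_tb):
    """TB billed as 'variable usage' for one quarter."""
    return max(0, min(used_tb, INSTALLED_TB) - BASE_TB)

# Usage grows from 300 TB toward 1 PB over three years (12 quarters):
usage_by_quarter = [300, 350, 420, 480, 550, 610, 680, 740, 800, 870, 930, 1000]
variable = [quarterly_variable_tb(u) for u in usage_by_quarter]
print(variable[0], variable[-1])   # nothing extra in Q1, 700 TB in the last quarter
```

After the 36 months, the remaining installed capacity is paid for, as described above.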
IBM Spectrum Virtualize for Public Cloud V8.1.1 delivers a powerful solution for the deployment of IBM Spectrum Virtualize software in public cloud, starting with IBM Cloud. This new capability provides a monthly license to deploy and use Spectrum Virtualize in IBM Cloud, enabling hybrid cloud solutions.
Remote replication will be supported between Spectrum Virtualize-based appliances (including SAN Volume Controller (SVC), the Storwize family, IBM FlashSystem V9000, and VersaStack with Storwize family or SVC), or Spectrum Virtualize Software, to the IBM Cloud.
Using IP-based replication with Metro Mirror, Global Mirror, or Global Mirror with Change Volumes, clients can create secondary copies of on-premises data in the public cloud for disaster recovery. IBM has over 25 data centers around the world available to choose from. Remote copy services can also be used between two IBM Cloud data centers for improved availability.
The solution is based on bare metal servers. You can create either two- or four-node high availability clusters.
Spectrum Virtualize on-premises SVC and Storwize systems now also support 2.4 TB 10K rpm 2.5-inch SAS hard disk drives.
IBM has been holding various "Hackathons" and "Meetups" as a new way to reach out to prospective clients. IBM sponsored a meetup at the Austin Executive Briefing Center (EBC) to discuss Machine Learning with TensorFlow on IBM Power systems, October 26, 2017.
This was a joint event, co-sponsored by [IBM Watson/Cognitive Austin] and [Big Data/AI Revealed] meetup groups. Special thanks to my colleague Cathy Cocco, IBM Executive IT Architect with the IBM Austin EBC, for coordinating this event with their organizers.
(What is a Meetup? [Meetup.com] is an online social networking website that facilitates in-person local group meetings. Meetup allows members to find and join groups unified by a common interest, such as books, games, pets, technology, careers or hobbies. As of 2017, there are 32 million users with 280 thousand groups available across 182 countries.)
Here was the agenda for the event:
Registration, Pizza & Soft drinks
Tensorflow 101 presentation
Demo: Using TensorFlow for Financial Market Predictions on IBM POWER Systems
Lightning Talk: IBM Data Science Experience
Clarisse Taaffe-Hedglin: Intro to TensorFlow on IBM Power servers
Our guest speaker was my colleague Clarisse Taaffe-Hedglin, IBM Cognitive Senior Technical Architect, part of the same Worldwide Client Centers team that I work in. She flew in from Charlotte, NC.
Her topic was TensorFlow, an open source [Machine Learning] framework. TensorFlow was originally developed by Google, but was made open source in November 2015.
Machine Learning is popular in a variety of industries, from self-driving cars and trucks, speech recognition and video surveillance, to what movie to watch next on Netflix. There are three aspects to Machine Learning:
Data: Start with the data you want to analyze. This could be IoT sensor data, security logs, or social media feeds. Check out all that happens in an "Internet Minute"!
Compute: While mathematical computations can be performed on traditional CPUs, some frameworks are optimized and accelerated with Graphical Processing Units (GPU). These GPU can perform Teraflops of single and double precision calculations.
Technique: As methodologies have gotten more complicated over the years, frameworks have evolved to match.
The [TensorFlow] framework is now one of the most popular among data scientists. You can download it for free at [Github].
Clarisse showed the various programming/calculation tools used by data scientists. The top five were: Python, R, SQL, MapReduce, and Microsoft Excel.
Mathematical models come in many flavors. Clarisse explained they can be used to identify clusters of data that might have similar properties, or to perform classification, or linear regression. The results can be "descriptive", gaining a better understanding of what already is, or "predictive" for what might be.
Some frameworks like Chainer or Torch are more flexible, using a dynamic Build-by-Run approach. However, these do not scale well. Theano and TensorFlow, on the other hand, employ a Define-then-Run approach, which scales better for larger projects. With the growth in popularity of TensorFlow, the Theano framework has been "functionally stabilized".
Clarisse Taaffe-Hedglin: Financial Markets Demo
For the demo, Clarisse had historical stock closing data for U.S., Australian and Asian stock markets. The hypothesis: can we determine a Buy/Sell signal for U.S. stocks based on the closing results of non-American markets? This is a classic "Binary Classification" model. The other stock markets close 4-16 hours before the U.S. markets open, so this has real-world applicability.
Since the data was in different monetary units, she did some cleanup to normalize the data, removing the trends and converting everything to U.S. Dollars (USD).
Clarisse used "Supervised Learning" on an 80 percent subset of the data, and then used the remaining 20 percent to validate how well the model did.
As with any model, you measure how good it is by how close its results come to the correct answer. Wrong answers are weighted by how bad they are; this is often referred to as "Loss" or "Cost". Different models can therefore be compared by minimizing the loss.
Using a simple y=wx+b mathematical model, she ran 30,000 iterations. After 5,000 iterations, the model was already guessing correctly 55 percent of the time; by the time it hit 30,000, accuracy was up to 68 percent.
TensorFlow also supports "hidden layers", basically intermediate variables that are then used in subsequent layers for more complicated calculations, analogous to the neural networks in our brains. With two added layers, she re-ran the 30,000 iterations, and accuracy was now up to 73 percent.
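Clarisse's demo ran in TensorFlow on POWER; as a rough stand-in for the same idea, here is a minimal pure-Python sketch of a y = wx + b binary classifier trained by gradient descent. The "overseas closing" data is synthetic and the hyperparameters are my own invented assumptions, not hers:

```python
# Minimal sketch of supervised binary classification with a y = wx + b model.
# Data, weights, and learning rate are all invented for illustration.
import math
import random

random.seed(0)
# Synthetic stand-in: 3 normalized "overseas closing" features per day,
# label 1.0 if the (synthetic) U.S. market went up afterwards.
data = []
for _ in range(500):
    x = [random.gauss(0, 1) for _ in range(3)]
    signal = 1.5 * x[0] - 2.0 * x[1] + 0.8 * x[2] + random.gauss(0, 0.5)
    data.append((x, 1.0 if signal > 0 else 0.0))

train, test = data[:400], data[100:]   # 80/20 split, as in the demo
train, test = data[:400], data[400:]

w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b   # the y = wx + b model
    return 1 / (1 + math.exp(-z))                  # sigmoid -> P("Buy")

for _ in range(2000):                              # training iterations
    gw, gb = [0.0, 0.0, 0.0], 0.0
    for x, y in train:
        err = predict(x) - y                       # gradient of log loss ("cost")
        for i in range(3):
            gw[i] += err * x[i]
        gb += err
    for i in range(3):
        w[i] -= lr * gw[i] / len(train)
    b -= lr * gb / len(train)

accuracy = sum((predict(x) > 0.5) == (y == 1.0) for x, y in test) / len(test)
print(f"validation accuracy: {accuracy:.0%}")
```

Adding hidden layers, as Clarisse did, means feeding intermediate weighted sums through further layers of the same kind of calculation before the final sigmoid.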
Normally, this kind of analysis would take hours or days, but since TensorFlow takes advantage of the IBM Power8 CPU and NVidia Tesla K80 GPU in the IBM Power server, the whole thing finished in five minutes!
Tuhin Mahmed: Lightning Talk on IBM Data Science Experience (DSX)
Tuhin Mahmed, IBM Software Developer, is the organizer for the Big Data/AI meetup group. He wants to promote the idea of "Lightning Talks" where each person presents for just 10-15 minutes. This is a variant of the popular [Pecha Kucha] events.
To get things started, he presented 10-15 minutes on [IBM Data Science Experience], or DSX for short. Taking Multiple Listing Service (MLS) real estate data of closing prices on houses sold in a range of zip codes in the Austin area, he mapped these on an x-y plot, with square footage on the x axis and closing price on the y axis.
Using DSX, he was able to develop a mathematical model that estimates house closing prices based on their zip code and square footage.
This was a simple example, but it showed the power of Jupyter Notebooks, and how anyone can get a 30-day free trial of DSX for their own experimentation.
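The fit Tuhin demonstrated can be mimicked with an ordinary least-squares line. The listings below are invented for illustration; the real demo used Austin MLS data inside DSX:

```python
# Least-squares fit of closing price vs. square footage.
# Listing data is invented, not the Austin MLS data from the demo.
homes = [(1100, 210_000), (1450, 265_000), (1800, 330_000),
         (2200, 395_000), (2600, 450_000), (3000, 515_000)]

n = len(homes)
mean_x = sum(x for x, _ in homes) / n
mean_y = sum(y for _, y in homes) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in homes)
         / sum((x - mean_x) ** 2 for x, _ in homes))
intercept = mean_y - slope * mean_x

def estimate(sq_ft):
    """Estimated closing price for a home of the given size."""
    return slope * sq_ft + intercept

print(f"${estimate(2000):,.0f} estimated for a 2,000 sq ft home")
```

In DSX this all happens in a Jupyter Notebook, with the zip code added as a second feature of the model.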
Currently, being a data scientist is more of an art than a science. This is one of those fields that takes only a few months to learn, but years to master.
Rather than building a model from scratch, data scientists can take existing models, and modify them to fit their needs. There are a variety of existing models available in what is called the "Model Zoo". Google has over 2,000 projects already.
Those interested in trying out TensorFlow for themselves were directed to [Nimbix], a Cloud Service Provider that offers POWER servers with NVidia GPUs.
There were about 50 attendees, more than half of whom identified themselves as data scientists. As the inaugural sponsored event for the IBM Austin EBC, I think this was a success!
If you are in the Austin area, the next meetup will be at the [Capital Factory] on Brazos Street on November 30, 2017.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
Today, IBM announces a complete refresh of its IBM FlashSystem® all-flash array product line.
(FCC Disclosure: I work for IBM. Compression, data footprint reduction, and performance results, based here on internal IBM tests, vary widely by data and workload type. Your mileage may vary. This blog post can be considered a "paid celebrity endorsement".)
New FlashSystem 900 model AE3
The new AE3 model introduces new Microlatency cards at larger capacities: 3.6, 8.5 and 18 TB. Compare that to the previous model AE2 at 1.2, 2.9 and 5.7 TB.
These capacities are achieved by combining three-dimensional (3D) chip layout with Triple-Level Cell (TLC) transistors, often referred to as 3D-TLC. The previous technology was single-layer 2-dimensional, multi-level cells (MLC).
Last week, at IBM Systems Technical University in New Orleans, Clod Barrera, IBM Distinguished Engineer and Chief Technical Strategist, explained this via an analogy. The 2-dimensional layout is like a bungalow: if you want to pack in more people, you need to make the rooms smaller, which is getting more difficult. Alternatively, you could build a multi-story skyscraper; adding more floors relieves the pressure to shrink the rooms.
Triple-level cell holds three bits per transistor. In the past, we had Single-level Cell (SLC) that stored one bit, and Multi-level Cell (MLC) that stored two bits. A future technology, Quad-level Cell (QLC) is not yet ready for production workloads in a datacenter.
The new AE3 models also offer Embedded inline Compression (EiC), with "Always-On" compression done right on the Microlatency cards. With a fully-loaded 12-card 2U drawer, that is, a 10+P+S RAID-5 configuration, the effective capacity is drastically increased:
FlashSystem 900 Model AE3
2U Drawer (Usable TB)
2U Drawer (Effective TB) w/EiC
The compression gets 2x to 3.5x on typical data, but your mileage may vary. The small Microlatency cards are capped at 110 TB effective capacity, and the medium and large at 220 TB, to avoid overwhelming the on-board DRAM cache. For clients who need smaller amounts of flash, IBM will continue to sell the AE2 models with 1.2 TB MLC Microlatency cards.
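The usable-to-effective arithmetic works out roughly as follows. The card sizes, 10+P+S layout, and caps are the figures quoted above; the 2.5x default ratio is an assumption within the typical range:

```python
# Rough sketch of usable vs. effective capacity for a fully loaded
# FlashSystem 900 AE3 drawer. In a 10+P+S RAID-5 layout, 10 of the
# 12 cards hold data; EiC compression then multiplies that usable
# capacity, up to the per-size effective-capacity cap quoted above.
CARD_TB = {"small": 3.6, "medium": 8.5, "large": 18.0}
CAP_TB = {"small": 110, "medium": 220, "large": 220}
DATA_CARDS = 10   # 12 cards minus parity (P) and spare (S)

def effective_tb(size, compression_ratio=2.5):
    """Effective TB for one drawer at an assumed compression ratio."""
    usable = DATA_CARDS * CARD_TB[size]
    return min(usable * compression_ratio, CAP_TB[size])

for size in CARD_TB:
    print(size, DATA_CARDS * CARD_TB[size], "TB usable ->",
          effective_tb(size), "TB effective")
```

Note how only the large cards actually hit the 220 TB cap at this ratio; highly compressible data would hit the caps sooner.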
After the compression, the data is encrypted with AES 256-bit encryption. This is the same as the previous AE2 models, so nothing changes there.
The EiC compression and encryption do not impact performance. The new Microlatency cards achieve as low as 95 microsecond latency, about 10x faster than traditional Solid-State Drives (SSD) found in Dell EMC XtremIO and Pure Storage competitive offerings, and 40 percent faster than the new NVMe Solid-State drives. A 2U drawer can deliver up to 1.2 million IOPS, slightly more than the AE2 models (1.1 Million IOPS).
The new FlashSystem V9000 takes advantage of the new FlashSystem 900 AE3 models, effectively tripling the usable capacity.
The interesting thing now is compression. Both are hardware-accelerated, with EiC being done on the Flash cards, and Real-time Compression (RtC) being done by the Intel QuickAssist chips in the controllers.
The EiC method works on 4KB blocks, so it gets only 2.5x to 3.5x on typical data. The RtC method works on larger 32KB blocks, so it is able to find more repeated sequences of characters and gets up to a 5x ratio, with compressed data kept in the controller node cache for better cache hit ratios.
However, RtC is limited to only 512 volumes, so admins would run the [Comprestimator tool] and select the cache friendly workloads with the best compression, such as Databases and CAD/CAM images.
With new FlashSystem V9000, you now get the benefits of both. Continue to use RtC for data that is better served with 4x-5x compression, and let EiC compress everything else!
FlashSystem V9000 model AE3
Usable (1 drawer) TB
Usable (8 drawers) TB
Running a typical 70/30 workload, representing 70 percent reads and 30 percent writes, each controller pair can deliver up to 600,000 IOPS. With four V9000 controller pairs clustered together, that is 2.4 Million IOPS. For more read-intensive, cache-friendlier workloads, IBM has clocked the system up to 1.3 million IOPS per controller node-pair, and 5.2 million for a four-pair cluster.
As with the previous model, the FlashSystem V9000 offers "Easy Tier" automatic sub-LUN tiering, and "storage virtualization" to manage both SAS-attached and SAN-attached storage. Over 400 different devices from major vendors are supported. This means that the busiest blocks will be moved up to low-latency Flash, and less active data will be moved to spinning disk.
As with the FlashSystem V9000, the A9000/R models 425 use the new FlashSystem 900, increasing the effective capacity.
The A9000/R models will continue to do "Data Footprint Reduction" of pattern removal, data deduplication and RtC compression for data to achieve up to 5x compression ratio. However, to improve performance, internal metadata will not be compressed with RtC, allowing the underlying Flash cards to do EiC instead. This reduces CPU workload.
The FlashSystem A9000 model 425, aka "The Pod", has three grid controllers combined with the new FlashSystem 900 model AE3 for a compact 8U solution that can store nearly a petabyte. For smaller deployments, IBM also offers an 8-card partially-filled drawer for a lower entry point.
A9000 Model 425
Number of cards/drawer
Effective @5x TB
The FlashSystem A9000R model 425, aka "The Rack", has two to four grid elements, each grid element has two grid controllers and one FlashSystem 900 AE3 drawer. The previous 415 model supported five and six grid elements, but for now, model 425 is limited to just two, three or four. The A9000R model 425 supports all three Microlatency sizes, whereas the previous 415 model only supported medium (2.9 TB) and large (5.7 TB) sizes.
FlashSystem A9000R model 425
Usable (2 elements) TB
Usable (3 elements) TB
Usable (4 elements) TB
Performance of both the A9000 and A9000R are based on the number of grid controllers. Each grid controller gets about 300,000 IOPS. The A9000 pod with three controllers gets up to 900,000 IOPS. Each A9000R grid element has two controllers, so 600,000 IOPS per element, with 2.4 million IOPS for a maxed out four-element A9000R rack.
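The IOPS scaling above is linear in the number of grid controllers, which makes it easy to sketch (using the approximate 300,000 IOPS-per-controller figure from the post):

```python
# A9000/R performance scales with grid controllers, at roughly
# 300,000 IOPS each (the approximate figure quoted above).
IOPS_PER_CONTROLLER = 300_000

def a9000_iops(controllers=3):
    """The A9000 'Pod' has three grid controllers."""
    return controllers * IOPS_PER_CONTROLLER

def a9000r_iops(grid_elements):
    """Each A9000R grid element contains two grid controllers."""
    return grid_elements * 2 * IOPS_PER_CONTROLLER

print(a9000_iops())        # the three-controller Pod
print(a9000r_iops(4))      # a maxed-out four-element rack
```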
Along with the hardware changes, IBM released version 12.2 of the Spectrum Accelerate software that runs in the FlashSystem A9000/R models.
This version supports Asynchronous mirroring between FlashSystem A9000/R systems and IBM XIV Gen3 storage. The replication can go in either direction, but the intent is to use FlashSystem for production, replicating to XIV Gen3 at a disaster recovery facility. Version 12.2 also increased the number of volumes, snapshots, and consistency groups supported.
24,000 volumes and snaps
1024 consistency groups, 512 volumes per consistency group
The new version applies to both the new model 425, as well as the previous 415 models!
This week, I am presenting at the IBM Systems Technical University for IBM Storage and POWER Systems. This conference is being held in New Orleans, Louisiana, October 16-20, 2017, at the beautiful Hyatt Regency, with about 800 clients attending.
This is my recap for the last few sessions before I left town, spanning Tuesday afternoon and Wednesday afternoon.
Reasons why IBM hyperconverged systems powered by Nutanix surpass other HCI from HPE, Cisco and more
Rob Simpson, Senior Strategic Marketing Manager at Nutanix, presented Nutanix hyperconverged systems. Nutanix runs on both x86 and POWER. For x86, it supports VMware, Microsoft Hyper-V, and Citrix XenServer, as well as their own Acropolis Hypervisor (AHV) derived from Linux KVM. For POWER, it uses AHV recompiled for the POWER chip set.
Hyperconverged systems can be sold in full rack configurations, as individual appliances, or as software that can be deployed on your own servers. Rob compared Nutanix against three competitive appliances: Dell EMC VxRAIL based on VMware VSAN, HPE Simplivity, and Cisco HyperFlex.
Everything you wanted to know about IBM Spectrum Scale metadata but didn't know to ask
Eric Sperley, IBM Software Defined Infrastructure Architect, presented the internal metadata structures used in IBM Spectrum Scale.
Why, oh why, did I attend this presentation? I had worked on Spectrum Scale back when it was called GPFS over 15 years ago, and thought I already knew everything about "inodes" that I ever wanted to, but Eric proved me wrong!
"Laws, like sausages, cease to inspire respect in proportion as we know how they are made."
--John Godfrey Saxe
A lot has changed! There have been a lot of improvements to the internal structures to improve parallel I/O performance, and reduce latency of administrative tasks.
IBM Spectrum Scale can be divided into different file systems, each of which can be configured with different performance characteristics and block size, such as random small files for scanned images, versus large sequential files for streaming videos.
My presentation was nowhere near as technical as Eric's above. I provided an overview of how IBM Spectrum Scale is configured, how it works, and how it interacts with IBM Cloud Object Storage System, Spectrum Protect, and Spectrum Archive.
I also covered the latest GSxS and GLxS models of the Elastic Storage Server, or ESS for short. These models provide awesome performance at low cost. The GSxS models are all-flash arrays for high performance. The GLxS models are hybrid with 2 Solid-State Drives and the rest NL-SAS 7200 rpm spinning disk for high capacity.
IBM COS new features
Andy Kutner, IBM Channel and Alliances Architect, presented the latest features in IBM Cloud Object Storage, IBM COS for short.
Compliance Enabled Vaults, or CEV for short, offer Non-Erasable, Non-Rewriteable (NENR) tamperproof protection for objects. Objects written to a CEV vault cannot be deleted or replaced with newer versions for a specified retention period.
(Note: Some folks mistakenly use the term "Write Once, Read Many" (WORM) for this. WORM applies only to tape, optical, paper tape, punched cards, and non-erasable ROM chips. For this reason, the term "Non-Erasable, Non-Rewriteable" (NENR), used in the U.S. Securities and Exchange Commission rule 17a-4 (SEC 17a-4), was created to extend this tamperproof protection to flash, disk and cloud-based storage architectures.)
New entry-level configurations lower the minimum capacity required. Previously, IBM recommended at least 500 TB of capacity to consider IBM COS. Now, the combination of embedded Accessers and Concentrated Dispersal mode can lower the starting point to as little as 72 TB, while still allowing you to grow to multiple PBs.
This week, I am presenting at the IBM Systems Technical University for Storage and POWER Systems. This conference is being held in New Orleans, Louisiana, October 16-20, 2017, at the beautiful Hyatt Regency.
This is my recap for sessions on Day 2 morning.
FlashSystem A9000 and A9000R Overview
Andy Walls, IBM Fellow, CTO and Chief Architect, and Brent Yardley, IBM STSM and Master Inventor, co-presented this session. This was the "deep dive" of the A9000/R, a continuation of the overview they did yesterday.
The Pendulum Swings Back -- Understanding converged and hyperconverged integrated systems
With IBM's partnership with Nutanix, this has become a particularly popular topic. I cover the last 50 years of storage evolution, from internal storage and external storage to NAS and SAN storage networks.
More recently, people have been willing to give up all those gains for something simpler, less powerful, less reliable, less expensive. Enter Converged and Hyperconverged Systems. IBM PureSystems and VersaStack lead the pack for Converged Systems, along with IBM Spectrum Scale, Spectrum Accelerate and Nutanix on IBM Power Systems for Hyperconverged Integrated Systems.
New Generation of Storage Tiering -- Less Management, Lower Costs, and Improved Performance
There are orders of magnitude of difference in price and performance between the fastest All-Flash Array and the least expensive tape storage. Ideally, there would be a "slider bar" that allowed people to select anywhere from the fastest to the least expensive. IBM offers a variety of solutions to provide this "slider bar", with automation to move data as needed between tiers.
I start with IBM Easy Tier, available on DS8000 and Spectrum Virtualize products, to IBM Virtual Storage Center where advanced analytics moves data to the right location, to IBM Spectrum Scale which provides the ultimate tiering, across multiple locations, between flash, disk and tape.
The lunches at these conferences are amazing, but then the "Big Easy" is known for its food!
This week, I am presenting at the IBM Systems Technical University for Storage and POWER systems. This conference is being held in New Orleans, Louisiana, October 16-20, 2017, at the beautiful Hyatt Regency.
The afternoon sessions on Monday were all about Cloud.
Back in 2009, I was designated the IBM Cloud Storage Center of Competency for all of the IBM Systems client centers. That was nearly a decade ago, and I am still talking about Cloud Storage!
Since then, IBM has decided to be a "Cloud Platform" company, and now everyone wants to know about Cloud Storage. Cloud is no longer just about lowering costs, as it started out; it is now about innovation and business value.
Nearly all of IBM Storage is enabled for cloud, from our high-end FlashSystem, DS8000 and XIV flash and disk storage arrays, to our Spectrum Storage software suite, to our various tape products.
Building Private Cloud with Ubuntu and OpenPOWER
Ivan Dobos, from Canonical--the company that makes Ubuntu--presented Ubuntu on OpenPOWER. Other Linux distributions, like Red Hat and SUSE, offer both a "community supported" version (CentOS or openSUSE) and an "enterprise" version (RHEL or SLES). Ubuntu doesn't fork its releases; there is a single version for everyone.
Ubuntu 14.04 LTS was made available as a Little-Endian distribution for IBM POWER and OpenPOWER. Ubuntu was the first Linux distribution to support CAPI and PowerKVM for the POWER8 platform.
(A note on release numbers. Ubuntu releases every April and October, so 14.04 represents 2014/April release. Every two years, a release is designated "Long Term Support" (LTS) which is supported for five years.)
Since version 16.04, Ubuntu offers the LXD container hypervisor, based on LXC, similar to Solaris Zones but running as a daemon. Virtual Machines are heavy because each has its own kernel. Containers instead share the kernel of the underlying host, but are limited to Linux guests. The Linux guests can be older versions of Debian, Red Hat or SUSE, but run on the latest, most secure Ubuntu kernel.
(Canonical gives Ubuntu away for free, but offers "Enterprise Services" for a fee to companies that want this added level of support. One of the features with Enterprise Services is "Live Kernel Update". Normally, updating the Linux kernel requires a reboot, which would cause outage to all of the VMs and containers running on that host server.)
Like VMs, you can launch containers, switch to a bash shell, install software, run applications, and shut down containers, all isolated from other containers. The LXD daemon can run both LXC and Docker containers. Some advantages of this approach:
Lift and Shift, live mobility from one system to another
Collocation of different workloads on same node
More efficient to use containers than Virtual Machines
14x greater density with LXD than traditional KVM or VMware (tested on x86)
Based on open source LXC containers
Ubuntu is designed for the "Elastic Hybrid Cloud". Canonical recommends combining on-premises data center with two or more public cloud providers. Scarcity has shifted from "code" to "operations". Are you ready to run applications you don't understand?
Total Cost of Ownership is shifting from code license costs to operational costs. Canonical offers a free, downloadable operations-orchestration platform called "Juju" to help install, configure and scale applications. The name comes from a West African word for "magic".
Scripts on Juju are called charms. There are Juju charms to install and configure things like MongoDB and IBM Spectrum Scale. Furthermore, Juju charms can be bundled together for more complicated deployments.
Juju is not limited to LXD; it can be used with VMware, OpenStack, bare-metal servers, and public clouds. It is available on Ubuntu, Red Hat and Windows. As a demo, Ivan built an entire working OpenStack environment, with 20 applications on 4 bare-metal servers, all installed and launched with Juju.
For OpenStack, you can use the basic "Ubuntu OpenStack", or a more complete "Canonical OpenStack", or even have Canonical folks manage your environment for you.
Canonical MaaS (Metal-as-a-Service) uses hardware APIs to manage bare metal servers, providing physical provisioning, dynamic allocation for workloads, and even Ubuntu and CentOS operating system installs. Canonical has clients with over 100,000 servers managed with MaaS.
Introduction to IBM Cloud Object Storage System and its applications (powered by Cleversafe)
Before 2015, IBM offered two "Object Storage" products: IBM Spectrum Scale and IBM Spectrum Archive, and I constantly had to compare and contrast these IBM products with Cleversafe.
Not any more! With the IBM acquisition of Cleversafe, IBM now offers all three!
This session explained all of the features and functions of IBM Cloud Object Storage System, available as software, as pre-built systems, including a VersaStack CVD, and as Storage-as-a-Service (STaaS) in the IBM Cloud.
(IBM renamed Cleversafe DSnet to "IBM Cloud Object Storage System". I joked that if IBM ever acquired Coca-Cola, they would probably rename their signature soft drink as the "Brown Carbonated Sugar Liquid", or BroCarb SugarLiq for short!)
In the evening, we had a nice reception with food and drink at the Solution Center. The Solution Center has booths where all of the IBM and Business Partners have their experts answering questions and handing out brochures of their offerings.
This week, I am presenting at the IBM Systems Technical University for Storage and POWER Systems. This conference is being held in New Orleans, Louisiana, October 16-20, 2017, at the beautiful Hyatt Regency.
Storage: Opening Keynote Session
Clod Barrera, IBM Distinguished Engineer and Chief Technical Strategist, and Craig Nelson, Brocade, co-presented this session.
Clod Barrera presented the latest in Storage trends. He organized his talk around four layers: Infrastructure, Storage Management, Storage Systems, and Storage Media.
Craig Nelson presented the changes in Storage Networking. With advancements in both server and storage bandwidth, the storage network becomes the bottleneck. Insane flash storage performance requires insanely fast storage networks. IBM offers Brocade-manufactured switches and directors that now support 32 Gbps. Combining four of these links together, directors can offer Inter-Chassis Links (ICLs) at 128 Gbps.
The Seven Tiers of Business Continuity and Disaster Recovery
With the recent Hurricanes Harvey, Irma, Jose, and Maria, my topic on Business Continuity and Disaster Recovery (BC/DR) was well attended. I have been working in BC/DR for most of my career, including the "High Availability Center of Competency" or HACOC.
Back in 2005, I was here in New Orleans, the week before Hurricane Katrina, for the IBM Storage Symposium, August 22-26, the predecessor of this conference. I left on Friday, August 26, and the storm hit that weekend.
I met with people photographing all the buildings, in hopes of selling "before pictures" to insurance companies and filmmakers after the hurricane hit. Film director Spike Lee bought much of this footage. Smart!
However, natural disasters like hurricanes, tornadoes and floods represent less than 20 percent of all disasters. The majority of disasters, nearly 75 percent, arise from electrical power outages, human error, system failure and ransomware.
IBM FlashSystem Overview
Andy Walls, IBM Fellow, CTO and Chief Architect, and Brent Yardley, IBM STSM and Master Inventor, co-presented this session. Andy started with FlashSystem 900, V9000 and A9000/R.
The room was packed with standing room only, and Andy was answering so many questions that he never finished his portion, and Brent Yardley never had a chance to cover his portion.
Fortunately, there were "deep dive" sessions on FlashSystem 900, V9000 and A9000/R later in the week, so Andy suggested everyone go to lunch and attend these other more detailed sessions.
Tomorrow, I will be presenting at the IBM Systems Technical University for Storage and Cognitive Systems (formerly POWER servers). This conference will be held in New Orleans, Louisiana, October 16-20, 2017.
Here is my speaking schedule:
The Seven Tiers of Business Continuity and Disaster Recovery (BC/DR)
IBM's Cloud Storage Options
Introduction of IBM Cloud Object Storage System and its Applications (powered by Cleversafe)
The Pendulum Swings Back -- Understanding Converged and Hyperconverged Integrated Systems
New generation of storage tiering: Simpler management, lower costs, and increased performance
Introduction of IBM Cloud Object Storage System and its Applications (powered by Cleversafe) **repeat**
IBM Spectrum Scale for File and Object storage
If these topics seem familiar, I have presented them at prior events earlier this year, including the STU Orlando in Orlando Florida, and the one in Melbourne Australia. However, I have made updates! New products have been announced!
If you are planning to attend, here are some of my past blog posts to help you get up to speed:
STU Orlando - Orlando, Florida
This event was a large 5-day event to replace the technical portion of IBM's previous "Edge" conference.
STU Melbourne - Melbourne, Australia
This event was a smaller 3-day event bringing the STU to other countries. We used to call these "Edge Comes to You" events, but now we call them "IBM Systems Technical University" just like the ones in the USA.
The STU at New Orleans will be a 5-day event. Instead of a "Meet the Experts" session, they are having a "Poster Session" in its place. Many of the posters will have QR codes, so make sure you have a "QR Scanner" application installed on your smartphone so you can scan them quickly!
Everyone, speakers and attendees alike, should consider making a QR code for themselves for this event. Go to [any number of websites] that generate a QR code. This could be a VCF file with all of your contact information, a link to your blog or website, or a pointer to your presentations on Slideshare or IBM@Box.
The next time someone at the event asks for this information, display the QR code on your smartphone, and let them scan it. Alternatively, you can send the image via MMS text message.
(My QR Code is fully functional, so go ahead and scan it with your smartphone for practice!)
I arrive in New Orleans Sunday afternoon, so if you are in town, give me a shout! Or tweet me at @az990tony
IBM introduces the eighth generation of Linear Tape Open (LTO) tape drive technology, with corresponding support in all of the IBM tape libraries.
Fellow blogger Jon Toigo, of Drunkendata.com fame, came to Tucson to interview Lee Jesionowski, Ed Childers, Calline Sanchez, and me about this. Check out the various segments on YouTube or his website.
The LTO-8 cartridges are not yet available, but when they are, they will hold 12 TB raw capacity, or 30 TB effective capacity at a 2.5-to-1 compression ratio. The new drives are N-1 compatible, able to read and write LTO-7 cartridge media.
Previous generations also supported reading N-2 generation tapes; LTO-8 breaks from that tradition and will not support LTO-6 cartridges at all.
LTO-8 drives come in both "Full Height" (FH) and "Half Height" (HH) models. The FH models can transfer data at 360 MB/sec (900 MB/sec effective at 2.5-to-1 compression), and the HH models at 300 MB/sec (750 MB/sec effective).
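The effective figures above are simple multiples of the 2.5-to-1 compression ratio; a quick sketch to check the arithmetic:

```python
# Effective LTO-8 capacity and throughput at the quoted 2.5:1 compression.
COMPRESSION = 2.5

raw_tb = 12    # raw cartridge capacity, TB
fh_mb_s = 360  # Full-Height drive transfer rate, MB/sec
hh_mb_s = 300  # Half-Height drive transfer rate, MB/sec

print(raw_tb * COMPRESSION)   # effective TB per cartridge (30.0)
print(fh_mb_s * COMPRESSION)  # effective FH rate, MB/sec (900.0)
print(hh_mb_s * COMPRESSION)  # effective HH rate, MB/sec (750.0)
```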
LTO-8 supports IBM Spectrum Archive and the "Linear Tape File System" (LTFS) tape format for self-describing long-term retention of data.
Compliance storage has come under many names. For tape and optical media, we had "WORM" for Write-Once, Read-Many. For disk-based storage, we had "Fixed-Content" or "Content-Addressable Storage". For file systems, we had "Immutable Storage".
Fortunately, the clever folks who crafted the SEC 17a-4 law came up with an umbrella term: "Non-Erasable, Non-Rewriteable" (NENR) that covers all storage media, from WORM tape and optical, to tamperproof flash, disk and cloud-based solutions.
The other major change is "Concentrated Dispersal" mode, or "CD mode" for short. Erasure Coding works best when data is dispersed across three or more sites. When this happens, you can lose all of the data at one site, and still have 100 percent access to all data from the other locations.
IBM's "Information Dispersal Algorithm", or IDA for short, scatters slices of data across many servers. This is great for high availability and performance, but it often meant that the minimum deployment was 500 TB or greater.
Not every organization is ready for such a large purchase. Some want to just [dip their toe in the water] with something smaller and less expensive. Well, IBM delivered!
The new CD mode means that instead of one slice per Slicestor node, you can pack many slices on each node. Each slice is still placed on a distinct disk drive, for high availability.
Entry-level configurations now can be as little as 72-104 TB, across 1, 2 or 3 sites.
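The dispersal arithmetic behind this is easy to sketch. The width and threshold values below are hypothetical illustrations for a three-site layout, not actual IBM COS parameters:

```python
# Illustrative erasure-coding (IDA) arithmetic. The width/threshold
# values are hypothetical examples, not actual IBM COS defaults.
def slices_tolerable(width, threshold):
    """Slices that can be lost while all data stays readable."""
    return width - threshold

def raw_expansion(width, threshold):
    """Raw capacity consumed per usable unit of data."""
    return width / threshold

# Example: 12 slices written, any 8 sufficient to reconstruct.
# Spread 4 slices per site across 3 sites, and a whole site can fail.
print(slices_tolerable(12, 8))  # 4 slices may be lost
print(raw_expansion(12, 8))     # 1.5x raw capacity per usable TB
```

The same two formulas apply whether the slices land on separate sites (classic dispersal) or on separate drives within fewer nodes (Concentrated Dispersal).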
Next month, I will be presenting at the IBM Systems Technical University for Storage and POWER. This conference will be held in New Orleans, Louisiana, October 16-20, 2017.
Instead of a "Meet the Experts" Q&A panel, this event will feature a "Poster Session". I had the pleasure of doing one of these down in Melbourne, Australia last month. For those who missed it, here are my blog posts:
By now, you have already decided on a title and abstract of your poster. You will need to figure out a quick and easy way to explain your poster, and as always, shorter is better. It reminds me of a famous quote:
"I have made this letter longer than usual, only because I have not had the time to make it shorter."
-- Blaise Pascal
The event team asked me to write some instructions on the mechanics of how to put together a poster for this, since it is new for many people. I use Microsoft PowerPoint 2013 and ImageMagick tools to accomplish this.
Arrangement of Slides
Posters for the IBM Systems Technical University in New Orleans will be 24x36 inches in size. If you print out your poster on 8.5x11 inch standard Letter pages in landscape orientation, that works out to eight slides: 2 columns by 4 rows. This leaves a one-inch border all around.
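A quick sketch of the page arithmetic, assuming the Letter pages are printed in landscape orientation:

```python
# Eight landscape Letter pages (11 x 8.5 in) tiled 2 columns x 4 rows
# on a 24 x 36 inch poster, leaving a one-inch border on each edge.
page_w, page_h = 11.0, 8.5
cols, rows = 2, 4
poster_w, poster_h = 24.0, 36.0

tiled_w = cols * page_w  # 22.0 inches
tiled_h = rows * page_h  # 34.0 inches
print(poster_w - tiled_w)  # 2.0 -> one inch left, one inch right
print(poster_h - tiled_h)  # 2.0 -> one inch top, one inch bottom
```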
The event will provide both the foam board and double-sided sticky tape. You can bring your poster as a stack of Letter-sized pages in a folder, and assemble your poster at the event.
You can increase the size of an individual image to 17x22 inches, to offer the "Big Picture" view. Basically, we can take a standard 8.5x11 Letter-size page, expand it onto four separate pages, and then put them on the poster! I will show you how in the steps below.
Lastly, you can have two big slides. If your poster is organized as "Before/After" or "Problem/Solution" then this arrangement could be perfect for you.
Setting Custom Paper Size on PowerPoint
In Melbourne, I had to use A4 paper, the standard outside North America, and had to figure out how to set this in PowerPoint. I was surprised to learn that the PowerPoint default is a 4:3 ratio of 10x7.5 inches, and that this is stretched to whatever paper size you print on.
The difference is slight, but I prefer [WYSIWYG], so we will change the slide to "Custom size" and force it to 8.5x11 inches, with "Landscape" orientation. This will avoid anything looking stretched or squished on the big poster.
Converting a PowerPoint Slide to PNG Image file
If you would like to resize one or more of your PowerPoint slides, you will need to save those slides as images. Select "File", then "Save As", and choose "PNG" as the format. You can also select GIF or JPG, but I prefer PNG.
You can export all of your slides as images, in which case it will create a folder and number each slide individually. Or, you can select "Just This One" for the current slide.
By default, it will use the same name as your PPT file, with the extension changed to PNG. I suggest you name the file something meaningful to you. In my examples below, I use "small.png" as the file name.
I am using PowerPoint 2013, which defaults to 96 dpi. So, an 8.5x11 paper becomes 1056x816 pixels in size.
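The pixel dimensions follow directly from the DPI setting; a small helper to check the numbers (the function name is mine, for illustration):

```python
# Pixel dimensions of a slide exported at a given DPI.
def export_pixels(width_in, height_in, dpi):
    return round(width_in * dpi), round(height_in * dpi)

print(export_pixels(11, 8.5, 96))   # (1056, 816) - PowerPoint default
print(export_pixels(11, 8.5, 300))  # (3300, 2550) - with registry tweak
```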
If you have PowerPoint 2003 or higher, you can change the Windows registry to specify image resolutions. Not recommended for the faint of heart. Or anyone else. But here's the deal if you want to try (if the following doesn't make any sense, it might be better not to mess with the registry):
Quit PowerPoint if it's running
Navigate to HKEY_CURRENT_USER\Software\Microsoft\Office\X.0\PowerPoint\Options
(For X.0 above, substitute 16.0 for PowerPoint 2016, 15.0 for PowerPoint 2013, 14.0 for PowerPoint 2010, 12.0 for PowerPoint 2007, and 11.0 for PowerPoint 2003.)
Add a new DWORD value named ExportBitmapResolution and set its DECIMAL value to the DPI value you want (for example, 300 means 300 dots per inch)
Close REGEDIT, start PowerPoint and test. Your files will be 3300x2550 pixels instead.
After using ImageMagick to enlarge the image and split it into four pieces exactly the size of a page, you can put them back into your PowerPoint deck. Create four blank slides, select Insert, then Pictures. Insert each picture (big_0.png, big_1.png, big_2.png, and big_3.png) as a separate page.
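The post names ImageMagick but does not show the exact invocation; here is one plausible way to build a command that doubles small.png and crops it into the four page-sized tiles big_0.png through big_3.png. The flags are my assumption, not necessarily what the author used:

```python
# Build an ImageMagick command that doubles "small.png" (one Letter page
# at 200%) and crops it into four equal tiles: big_0.png .. big_3.png.
# The exact flags are an assumption; verify against your ImageMagick.
def split_command(src="small.png", dst_prefix="big"):
    return (f"convert {src} -resize 200% "  # 8.5x11 in -> 17x22 in
            "-crop 50%x50% +repage "        # four equal page-sized tiles
            f"{dst_prefix}_%d.png")         # numbered output files

print(split_command())
```

Running the printed command requires ImageMagick; `+repage` resets each tile's canvas offset so the pieces print cleanly as standalone pages.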
You can print this out and bring it with you to the event, or send it to someone to have them print it for you.
Upload files to IBM@Box
This next step is completely optional, but I found it adds a nice touch. As an IBMer, you can upload your presentation, and any documents, whitepapers or other materials, to [IBM@Box]. Create a directory that is unique to you, such as your last name and the conference. For example, I have "Pearson-STU-NOLA-2017" as my folder name.
You can create a "URL Link" to this folder. Select "Share", then "Share Link" to create a dialog box. It is important to specify "People with this link" if you want those outside of IBM, such as clients and IBM Business Partners, to have access.
Press the little "gear" button on the upper right, and it gives you options to customize the URL. Normally the URL is some long random sequence of characters, but you can rename it to something meaningful and easier to remember.
Generate a QR Code
Since you have a URL Share Link for your files on IBM@Box, you can generate a QR Code for this link, and include on your poster!
There are several online websites that can generate a QR Code for free. I use [QRme.com] in this example. Go to the website, paste in the URL, and press the "Generate" button.
Once the QR Code is generated, right-click and select "Save Image" to save it to a file on your hard drive. This image can be inserted as a picture, like we did above, onto any slide. You can resize it as needed.
In Melbourne, one of the posters had the QR Code at the top, next to the title, where it was nearly impossible to scan with a smartphone. For this reason, I recommend putting the QR code in the lower right corner of your poster, between shoulder and waist height for the audience, where it is comfortable to scan.
I am looking forward to going back to New Orleans to speak at this conference!