This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years at IBM, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this solution, a key part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
This week, I am in Orlando, Florida for the [IBM Technical University], with focus on IBM storage, IBM Z mainframes and IBM Power servers. This is my recap of afternoon breakout sessions on Day 2.
Spectrum NAS 101 and key use cases
Chris Maestas presented IBM's latest addition to the Spectrum Storage family of Software-Defined Storage. Spectrum NAS was written from scratch in C/C++ language, instead of using open source code like SAMBA. It supports both NFS and SMB protocols.
Like IBM Cloud Object Storage, the Spectrum NAS software is shipped with the operating system, so you have a single ISO to run everything. You start with four nodes and can grow capacity and performance as needed by adding more nodes. All nodes have identical roles.
All of the storage is internal. Spectrum NAS uses DRAM memory, NVMe-based Solid State Drives (SSD), and spinning disk HDD. The NVMe drives must support at least five Drive Writes per Day (DWPD).
Each Spectrum NAS node can handle 2,000 connections, and up to 4,000 connections during fail-over processing. With 10GbE bandwidth, you can migrate 100 TB/day from other NAS devices to Spectrum NAS. If you want to try out Spectrum NAS yourself, there is a 60-day free trial offer now available. There are a collection of videos on the [Spectrum NAS YouTube channel] to walk you through the installation process.
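A quick back-of-the-envelope check of that 100 TB/day figure (my own arithmetic sketch; the 90 percent link-efficiency factor is an assumption, not an IBM spec):

```python
# Back-of-the-envelope check of the 100 TB/day migration figure.
# Assumes ~10 Gb/s sustained and ~90% link efficiency -- both assumptions.

def tb_per_day(gbits_per_sec: float, efficiency: float = 0.9) -> float:
    """Convert a sustained line rate in Gb/s into TB moved per day."""
    bytes_per_sec = gbits_per_sec * 1e9 / 8 * efficiency   # bits -> bytes
    return bytes_per_sec * 86_400 / 1e12                   # seconds/day, bytes -> TB

print(f"{tb_per_day(10):.0f} TB/day")   # roughly 97 TB/day at 90% efficiency
```

So 10GbE really does put 100 TB/day within reach, as long as the link stays busy around the clock.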
Clients are Hyper for Hyperconverged
Marc Richardson and Bruce Jones, both from IBM Cognitive Systems, presented this client case study on successful deployment of IBM Hyperconverged Systems powered by Nutanix, often referred to as the "IBM CS" models of the POWER server line. They covered three use cases:
Modernize to Private Cloud
IBM CS models use the Nutanix Acropolis Hypervisor (AHV) to run Ubuntu and CentOS little-Endian virtual machines on POWER. The speakers claimed that they can run 50 percent faster, and 88 percent more workloads per core, than traditional x86 methods. IBM has made a statement of direction that IBM CS models will support AIX 7.2 virtual machines later this year.
The IBM CS models can also run IBM Cloud Private, a collection of software that supports Docker and Kubernetes.
Simplify the Data Center
The client was not happy with the high prices of their external, high-end storage systems. When you add another IBM CS model to the cluster, you get more storage capacity and CPU capability at the same time, in lock step. What could be simpler?
Infrastructure for Modern Data Workloads
IBM CS models can run traditional Db2 and WebSphere applications. The client also reduced their costs by switching from expensive Oracle databases to open source databases like MongoDB and EnterpriseDB Postgres.
I was honored to be selected for this week's poster session. I was poster 16, explaining the What, Why and How of IBM Cloud Object Storage. Here I am posing with my colleague Heather Allen, IBM.
Kelly Groff, IBM FlashSystem, had poster 15 on how the embedded compression on the latest FlashSystem 900 models has almost no performance impact. Jeff Barnett, IBM, had poster 14 for IBM's Pay-as-you-grow Storage Utility Pricing.
Barry Whyte drew large crowds with his poster 13 on NVMe. Andy Kutner, IBM, had poster 11 on IBM Cloud Object Storage.
Fahima Zamir, IBM, had poster 29 on the VersaStack solution, which combines best-of-breed x86 servers and switches from Cisco with IBM storage into a converged system. Sharie Mims is from VSS, an IBM Business Partner.
You can follow along with Twitter hashtag #IBMtechU, or follow me at @az990tony.
This week, I am in Orlando, Florida for the [IBM Technical University], with focus on IBM storage, IBM Z mainframes and IBM Power servers. Here is my recap for morning break out sessions on Day 2.
A Survey of Deep Learning Techniques
Nin Lei, IBM Distinguished Engineer, presented a sample of Deep Learning techniques used today: CNN, RNN, and GAN.
Basic decision making involves gathering data, having it reviewed by a subject matter expert (SME), and producing an outcome. This is done for a variety of situations: fraudulent vs. legitimate credit card transactions, approving or rejecting a loan application, determining whether a tumor is benign or malignant. Machine Learning effectively replaces the SME with a mathematical function.
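To make "replacing the SME with a mathematical function" concrete, here is my own toy sketch of the credit-card example: a logistic function scores a transaction, and a threshold turns the score into a decision. The features, weights, and bias are invented for illustration only.

```python
import math

# Toy fraud scorer: a logistic model standing in for the subject matter expert.
# Feature names, weights, and bias are all invented for this illustration.
WEIGHTS = {"amount_usd": 0.004, "foreign_country": 2.0, "night_time": 0.8}
BIAS = -4.0

def fraud_probability(transaction: dict) -> float:
    """Weighted sum of features, squashed to (0, 1) by the logistic function."""
    z = BIAS + sum(WEIGHTS[k] * transaction.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def decide(transaction: dict, threshold: float = 0.5) -> str:
    return "fraudulent" if fraud_probability(transaction) >= threshold else "legitimate"

print(decide({"amount_usd": 35, "foreign_country": 0, "night_time": 0}))   # legitimate
print(decide({"amount_usd": 900, "foreign_country": 1, "night_time": 1}))  # fraudulent
```

In real Machine Learning, of course, the weights are learned from labeled examples rather than written by hand.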
Various tools are available for this: TensorFlow, SnapML, SAS, and SPSS are just a few.
Deep Learning is based on "Neural Networks", a subset of Machine Learning. There is an input layer, one or more hidden layers, and then an output layer. For example, for a photo, each pixel could be an input feature; a 200x200 pixel photo represents 40,000 input values. In the past, there were rarely more than three hidden layers. Today, we can have 20 to 50 layers, because we now have more computational power, achieving 95-97 percent accuracy.
For each connection between the input, hidden, and output layers, you identify weights and biases. A research paper by Hornik (1989) posits that any machine learning task can be performed by a sufficiently large neural network.
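The layers, weights, and biases described above can be sketched in a few lines. This is my own minimal illustration with toy sizes and random weights (four inputs rather than 40,000, one hidden layer rather than 50):

```python
import math, random

random.seed(42)

def dense(inputs, weights, biases, activation=math.tanh):
    """One fully-connected layer: out[j] = act(sum_i inputs[i] * W[j][i] + b[j])."""
    return [activation(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

n_in, n_hidden = 4, 3                       # toy sizes, not a real 40,000-pixel input
W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
W2 = [[random.uniform(-1, 1) for _ in range(n_hidden)]]
b2 = [0.0]

x = [0.2, 0.7, 0.1, 0.9]                    # e.g. four pixel intensities
hidden = dense(x, W1, b1)                   # input layer -> hidden layer
output = dense(hidden, W2, b2)              # hidden layer -> output layer
print(output)                               # a single value in (-1, 1)
```

Training is simply the process of adjusting those W and b values until the outputs match the labeled examples.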
Convolution Neural Network (CNN) is often used for image recognition, for object classification or detection.
Some features are invariant. Location invariant means it doesn't matter where it is located within the photo. Color invariant means it does not matter what color it is, and can work with black-and-white or grayscale photos.
For example, for facial recognition, earlier layers are focused on identifying edges, and later layers identify facial features like eyes, nose and mouth.
Image recognition is used with self-driving cars, drones to determine power line maintenance or crop inspection, social media, video surveillance, medical image diagnosis, car racing, and ripeness of fruits and vegetables.
CNN is also used for auto-encoding. This takes detailed photos, compresses them, and then decodes them back to something similar. It can take weeks to train a model with a million photo images.
Recurrent Neural Network (RNN) is focused on time sequence.
This is useful for predicting sequences of letters or words. However, because a long sequence of multiplications is involved, the result tends toward either zero or infinity; this is known as the "vanishing gradient problem".
The solution is "Long Short Term Memory" (LSTM) cells. Basically, the model selectively remembers information from previous steps, which reduces the number of multiplications.
RNN need to know related words. For example, men-women, king-queen, walking-walked, swimming-swam, Spain-Madrid. These are referred to as "embeddings", which are stored in the hidden layers for quick lookup.
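The idea of embeddings is that related words sit close together as vectors. Here is a hand-made toy version (three dimensions, values invented; real embeddings have hundreds of dimensions and are learned from text):

```python
import math

# Hand-made toy embeddings, purely illustrative. Real embeddings are learned.
EMBEDDINGS = {
    "king":   [0.9, 0.8, 0.1],
    "queen":  [0.9, 0.1, 0.8],
    "walked": [0.0, 0.5, 0.5],
}

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    return sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b))

# A related pair scores higher than an unrelated pair:
print(cosine(EMBEDDINGS["king"], EMBEDDINGS["queen"]))
print(cosine(EMBEDDINGS["king"], EMBEDDINGS["walked"]))
```

The model stores these vectors so that "queen" can be looked up as the nearest neighbor of "king" adjusted for gender, and so on.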
Generative Adversarial Networks (GAN) are used to generate fake photos to train other models.
Sometimes, you do not have enough photos in each category for training, so you can generate fake images to help with the training system. Noise is fed into a "Generator" model, and then the results are evaluated by a "Discriminator" model, comparing the fake with real photos. Repetition allows each model to improve so that the fake photos become more realistic for training purposes.
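The generator/discriminator loop can be sketched structurally. This is a deliberately tiny one-dimensional toy with no real gradients: the "generator" has a single parameter it nudges whenever the "discriminator" rejects its output, but the alternating roles are the same as in a real GAN.

```python
import random

random.seed(0)

# Structural sketch of a GAN loop (toy, no neural networks or gradients).
REAL_MEAN = 5.0
real_samples = [random.gauss(REAL_MEAN, 1.0) for _ in range(1000)]

def generator(param: float) -> float:
    """Produce a fake sample from the generator's current parameter."""
    return random.gauss(param, 1.0)

def discriminator(sample: float) -> bool:
    """True if the sample looks real (close to the real-data mean)."""
    mean = sum(real_samples) / len(real_samples)
    return abs(sample - mean) < 2.0

param = 0.0                                   # generator starts far from reality
for step in range(2000):
    fake = generator(param)
    if not discriminator(fake):               # rejected fake: nudge toward realism
        param += 0.01 if fake < REAL_MEAN else -0.01

print(round(param, 1))                        # ends up near the real mean
```

In a real GAN both models are neural networks trained by gradient descent, but the rhythm is the same: generate, judge, adjust, repeat, until the fakes are good enough to use as training data.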
The death of the one-size-fits-all cloud: The mainstreaming of multi-arch
Elise Spence and Drew Thorstensen, IBM Power Systems for Software Defined Cloud Infrastructure, presented this topic. The session was on IBM Cloud Private, and the multiple architectures supported by Docker and Kubernetes.
There are actually six different architectures supported for Docker containers:
While containers are "portable" between systems, the binaries inside are typically compiled for a single architecture, such as Linux-x86 or Windows-x86, and won't run on POWER or IBM Z.
The solution is to create a multi-arch manifest file, and port all the binaries to all of these different architectures. This way, when the containerized application is run on POWER, the manifest will identify the POWER-based binaries.
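Conceptually, a manifest list maps each architecture to its own image. Here is my own sketch of that lookup, with an invented tag and placeholder digests (this is the idea, not the actual OCI data structures):

```python
# Sketch of multi-arch image selection. The tag and digests are invented
# placeholders; a real manifest list follows the OCI image specification.
MANIFEST_LIST = {
    "example.com/myapp:latest": [
        {"architecture": "amd64",   "os": "linux", "digest": "sha256:aaa..."},
        {"architecture": "ppc64le", "os": "linux", "digest": "sha256:bbb..."},
        {"architecture": "s390x",   "os": "linux", "digest": "sha256:ccc..."},
    ]
}

def select_image(tag: str, architecture: str, os_name: str = "linux") -> str:
    """Return the digest of the platform-specific image, as a runtime would."""
    for entry in MANIFEST_LIST[tag]:
        if entry["architecture"] == architecture and entry["os"] == os_name:
            return entry["digest"]
    raise LookupError(f"no image for {architecture}/{os_name}")

# On a POWER (ppc64le) host, the same tag resolves to the POWER binaries:
print(select_image("example.com/myapp:latest", "ppc64le"))
```

The user pulls one tag everywhere; the runtime quietly picks the right architecture's image underneath.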
Introduction to IBM Cloud Object Storage (powered by Cleversafe)
Before 2015, IBM offered two "Object Storage" products: IBM Spectrum Scale and IBM Spectrum Archive, and I was constantly having to compare and contrast IBM products to Cleversafe.
Not any more! With the IBM acquisition of Cleversafe, IBM now offers all three!
This session explained all of the features and functions of IBM Cloud Object Storage System, available as software, as pre-built systems, including a VersaStack CVD, and as Storage-as-a-Service (STaaS) in the IBM Cloud.
(IBM renamed Cleversafe DSnet to "IBM Cloud Object Storage System". I joked that if IBM ever acquired Coca-Cola, they would probably rename their signature soft drink as the "Brown Carbonated Sugar Liquid", or BroCarb SugarLiq for short!)
I provided a general overview, as well as the latest features of Concentrated Dispersal Mode and Compliance Enabled Vaults.
You can follow along with Twitter hashtag #IBMtechU, or follow me at @az990tony.
Last year, Hurricanes Harvey, Irma, Jose, and Maria, ravaged various parts of North America and the Caribbean. My topic on Business Continuity and Disaster Recovery (BC/DR) was well attended. I have been working in BC/DR for most of my career, including the "High Availability Center of Competency" or HACOC, for short.
However, natural disasters like hurricanes, tornadoes, forest fires and floods represent less than 20 percent of all disasters. The majority of disasters, nearly 75 percent, arise from electrical power outages, human error, system failure and ransomware.
The seven tiers were developed by a group of IBM customers back in the 1980s, and have stood the test of time. I recently published an article in IBM Systems Magazine (January/February 2018) based on this presentation.
Cloud storage comes in four flavors: persistent, ephemeral, hosted, and reference. The first two I refer to as "Storage for the Computer Cloud" and the latter two I refer to as "Storage as the Storage Cloud".
I also explained the differences between block, file and object access, and why different Cloud storage types use different access methods.
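The key contrast between block and object access (file access sits in between) can be shown with a toy in-memory model of each. This is my own illustration, not any IBM interface: blocks are fixed-size and updated in place by address, while objects are whole items PUT and GET by key, with no partial update.

```python
# Toy contrast between block and object access. My own illustration only.
BLOCK_SIZE = 4

class BlockDevice:
    """Fixed-size blocks, addressed by logical block address, updated in place."""
    def __init__(self, n_blocks: int):
        self.blocks = [bytes(BLOCK_SIZE) for _ in range(n_blocks)]
    def write(self, lba: int, data: bytes):
        assert len(data) == BLOCK_SIZE
        self.blocks[lba] = data
    def read(self, lba: int) -> bytes:
        return self.blocks[lba]

class ObjectStore:
    """Whole objects, addressed by key; PUT replaces the entire object."""
    def __init__(self):
        self.bucket = {}
    def put(self, key: str, data: bytes):
        self.bucket[key] = data
    def get(self, key: str) -> bytes:
        return self.bucket[key]

disk = BlockDevice(8)
disk.write(3, b"ABCD")                          # random-access, in-place update
store = ObjectStore()
store.put("photos/cat.jpg", b"...jpeg bytes...")  # replace-whole-object semantics
print(disk.read(3), store.get("photos/cat.jpg"))
```

Databases want the in-place block behavior; cloud-scale archives of photos and backups are a natural fit for the simpler object behavior.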
Finally, I covered some Hybrid Cloud Storage configurations, showing how a combination of traditional IT, on-premises local private cloud, off-premises dedicated private cloud, and public cloud can be combined to provide added value.
Reporting and Monitoring: How to Verify your Storage is Being Used Efficiently
It is hard to believe that it was over 15 years ago that I was the chief architect for the software we now call IBM Spectrum Connect, Spectrum Control and Storage Insights. There are a variety of editions and bundles for this product, but my focus in this talk was on the advanced storage analytics found in IBM Virtual Storage Center and IBM Spectrum Control Advanced Edition.
I covered three use cases:
What storage tier to put your workload in, and how to move existing data into a faster or slower tier to meet business requirements and IT budgets.
For steady state environments, how to re-balance storage pools within a single tier to keep things even for optimal performance.
When it is time to decommission storage, how to transform volumes from one storage pool to another without downtime or outages.
Special thanks to Bryan Odom for his help in updating this presentation.
Spectrum Virtualization Data Reduction Pools 101
Barry Whyte, IBM Master Inventor and ATS for Storage Virtualization for the Asia Pacific region, presented on how Data Reduction Pools were implemented in version 8.1.2 of Spectrum Virtualize. This software runs in the latest IBM SAN Volume Controller (SVC), IBM Storwize products, and IBM FlashSystem V9000.
Basically, rather than say we "re-wrote" the code, we prefer softer euphemisms like the code was "re-imagined" or, my favorite lately, "re-factored". Legacy Storage Pools will continue to be supported, but IBM anticipates that people over time will transition to the new Data Reduction Pools (DR Pools).
Like Legacy Storage Pools, the new DR Pools also support a mix of Fully-allocated, Thin-Provisioned, and Compressed-Thin volumes. IBM has made a statement of direction that it will offer Data Deduplication feature in the future, but these will only be on the new DR Pools.
While DR Pools are available today with version 8.1.2, there are a few restrictions. There is a limit of four DR Pools per cluster, and the amount of total capacity of each pool depends on the extent size and number of I/O groups configured. Some of the migration methods developed for Legacy Storage Pools are not available, and in reality don't make sense in the new DR pool scheme. Child Pools are not supported either.
One of the big improvements that DR Pools offer is in the area of compression. With Legacy Storage Pools, CPU cores were dedicated for compression, so they were either under-utilized or overwhelmed. With DR pools, all CPU cores can be used for either I/O or compression, which potentially can increase performance by up to 40 percent!
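The scheduling idea behind that improvement can be sketched with a shared worker pool. This is my analogy, not IBM's implementation: instead of reserving workers for compression (leaving them idle or overwhelmed), one pool serves both I/O-style and compression-style work.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Analogy for DR Pool scheduling (my illustration, not IBM's code): a single
# shared pool of "cores" handles both I/O tasks and compression tasks,
# instead of statically dedicating workers to compression.

def io_task(n: int) -> int:
    return n                              # stand-in for an I/O operation

def compress_task(data: bytes) -> int:
    return len(zlib.compress(data))       # compressed size in bytes

with ThreadPoolExecutor(max_workers=4) as pool:   # all 4 workers are shared
    io_results = list(pool.map(io_task, range(8)))
    sizes = list(pool.map(compress_task, [b"A" * 4096] * 4))

print(io_results, sizes)                  # highly compressible data shrinks a lot
```

When no compression work is queued, every worker does I/O, and vice versa, which is where the claimed performance headroom comes from.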
After the sessions, IBM had its "Solution Center Reception". This is a chance to relax and unwind after a long day, with food and drink, and various sponsors in booths to explain their latest offerings.
This is Katie Thacker from [FIT]. In March 2018, FIT was recognized as IBM’s Top Strategic Service Provider of the year!
These are Elizabeth Krivan and Kelly Bouchard, two recently-hired IBM storage sellers. They attended my sessions at the IBM Technical University in New Orleans last October, so it was good to see them again at my sessions here in Orlando.
You can follow along with Twitter hashtag #IBMtechU, or follow me at @az990tony.
This week, I am in Orlando, Florida for the [IBM Technical University], with focus on IBM storage, IBM Z mainframes and IBM Power servers. Here is my recap for the keynote sessions on Day 1.
Art Beller, IBM Vice President of WW Systems Technical Sales
Art Beller, my third-line manager, kicked off the event. He explained that with [Artificial Intelligence], or AI for short, we are entering the "age of the incumbent". All across industries, the companies that have established dominance over the decades have the most data to get value from.
Kathryn Guarini, IBM Vice President Research Strategy
Kathryn provided an overview of the latest news on AI. Over 700 students at MIT, and 1,000 students at Stanford University, have signed up for "Intro to AI" classes. There are over 30,000 AI-related jobs in IT today. The investment in AI is 10 times more than it was just four years ago.
Kathryn explained there are three levels of AI: Narrow, Broad, and General. Narrow AI finally works, such as face recognition or speech-to-text translation. Broad AI is still a ways out, and General AI is not expected until year 2050.
An area of research is to "Learn more with less". For example, if you train a photo image recognition to identify different species of dogs, can you extend some of this learning to recognize different cats? This is often referred to as "Transfer Learning".
Cyber-criminals are already using AI, and if they can infiltrate AI training models, they can introduce some scary scenarios. The next cyber battle-field will be AI vs. AI.
AI results need to be "Explainable", both in the training and debugging phases, as well as the infer/deployment phases. We need to detect and eliminate human biases, and rank different models on their fairness.
Kathryn gave some real examples:
Medical Sieve: An MRI scan captures over 10,000 images. Through AI, the top 25 most important images can be identified, making a doctor's job easier in identifying tumors.
Cancer Research: There are over 800 billion DNA base pairs to evaluate for different cancers, combined with 723 million published articles of relevant research. AI can help sort this out, matching the best research to the appropriate type of cancer.
Banking Regulations: There are over a million compliance documents, and some banks have more than 10,000 employees focused on enforcing compliance. About 10 percent of these compliance documents change every year, making this a moving target.
Fraud Detection: There are too many "false positives" in today's algorithms for suspicious spending behavior. AI can help identify this better.
Video Highlights: AI can be used to generate movie trailers or sports highlights by identifying the most relevant portions of a movie or sporting event.
Reduce Air Pollution: China is investigating the use of AI to reduce air pollution in its country. Large cities like Beijing are particularly over-polluted.
Hillery Hunter, IBM Fellow and Director of Accelerated Cognitive Infrastructure at IBM Research
AI takes Terabytes of information, both structured and unstructured data, to develop a model that is very small, perhaps a few MB or GB.
The four steps are: identify your data sources, do some data preparation, train your model, and then infer using that model. Your data sources are stored in a Capacity Tier (often referred to as Data Lake). Inference must be done quickly, so a Performance tier is needed for that phase.
In some cases, data can't move, so for those situations, we need "Federated AI" where we can combine results from different systems.
IBM has added Distributed Deep Learning (DDL) to its PowerAI set of libraries. To estimate "Click-Thru Rate", a typical approach with 4.2 billion training examples took 70 minutes. With PowerAI DDL, this was reduced to 91 seconds. In another example, training that took nine days was reduced to four hours.
Lastly, Hillery mentioned "in-memory computing". Rather than reading data in from memory, and performing some computation on it, this new approach does part of the compute processing on the memory chip itself, eliminating a lot of data transfers.
Clod Barrera, IBM Distinguished Engineer and Chief Technical Strategist for storage
In previous years, IBM Technical University would offer brand-specific keynote sessions for IBM Z, IBM Power and IBM Storage. However, these were in the same time slot, so you could only see one of them. This year, IBM Storage was put into a different slot, so people could hear about their server of choice, and then also listen to the storage keynote.
Clod gave a state of the industry related to different storage media. For Flash, for example, he explained that Phase Change Memory is being developed, using the difference between amorphous and crystalline states to represent ones and zeros.
Tape is also seeing a resurgence. In 2005, Microsoft had declared tape was dead. Today, their Microsoft Azure is a big fan of tape to store data at reduced cost. Tape is 20 times less expensive than disk.
Clod summarized his talk by stating the key areas of storage development:
Optimizing for Artificial Intelligence
Automation for Security and Privacy
Data Governance and Management
You can follow along this week with Twitter hashtag #IBMTechU, or follow me at @az990tony.
The New Orleans event was a five-day event, but I had to leave Wednesday evening for other meetings, so missed out on the last two days.
I do plan to be there all of next week in Orlando. Look for me at one of my sessions, during the breaks, the Solutions Reception on Monday evening, the Poster Session on Tuesday evening, or Universal Studios event on Thursday evening.
You can follow along with Twitter hashtag #IBMtechU, or follow me at @az990tony.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
(FTC Disclosure: I work for IBM. This blog post can be considered a "paid celebrity endorsement" of the IBM Z and IBM storage products mentioned below.)
DS8880 R8.3.3 Enhancements
Back in 2015, IBM introduced the [DS8880 models] of the DS8000 family. Sales drastically increased, in part because IBM re-designed the systems to fit a standard 19-inch wide rack, rather than the 33-inch wide custom sizes used before. Many cloud service providers (CSP) and managed service providers (MSP) require 19-inch standard rack configurations.
To meet client requirements, the newest IBM mainframes, including Z14 model ZR1 and LinuxONE Rockhopper II, are now following the same 19-inch rack size!
IBM DS8880 models now have enhanced support for zHyperlink connections. Clients with existing 6-core DS8884/F or 8-core DS8886/F models can upgrade to add more cores for zHyperlink connectivity.
Cores per CEC
Maximum zHyperlink connections
The zHyperlink supports both 40-meter and 150-meter cables. This allows applications like DB2 to read data with substantially lower latency than traditional FICON attachment.
For IBM z/OS clients, the Transparent Cloud Tiering feature allows migration of data directly from DS8000 storage systems to the cloud. This eliminates the need to migrate data through the IBM Z host, consuming MIPS and FICON bandwidth, on its way out to a tape or virtual tape system. IBM now offers 10GbE cards for the DS8880, providing faster throughput than the 1GbE cards previously available.
IBM Spectrum Scale v5.0 for IBM Elastic Storage Server
IBM Spectrum Scale v5.0 was available as software last year, and now is available as a Software PID for Elastic Storage Server hardware.
The new version introduces per-drive editions for licensing: Data Access edition, and Data Management edition. Here are highlights of some of the features:
Enhancements to GUI usability, including managing file systems between ESS and non-ESS storage
Audit File Logging (Data Management Edition only) for Open, Close, Destroy (Delete), Rename, Unlink, Remove Directory, Extended Attribute change, and Access Control List (ACL) change events
Enhancements to Active File Management, providing WAN-caching for multi-site deployments
Independent KPMG certification will be done for Spectrum Scale v5.0 on ESS for the "Immutability" feature. Some people refer to this as WORM, Government Compliance, Tamperproof, or Non-Erasable, Non-Rewriteable (NENR) enforcement protection
Enhancements to Transparent Cloud Tiering, providing archive of less-active data to IBM Cloud Object Storage, IBM Cloud, or Amazon S3.
Certification for analytics on both x86 and POWER platforms: Hortonworks Data Platform (HDP) v2.6, and Ambari v2.5
Improved I/O performance for many small and large block size workloads simultaneously, including a 4 MB default block size with variable sub-block size based on block size choice
Spectrum Scale 5.0 is incorporated into "Elastic Storage Server Solution Release 5.3". It is unfortunate the numbering is different. Existing ESS clients can download this new ESS 5.3 code from IBM FixCentral today. Going forward, starting next week or so, new Elastic Storage Servers will ship with ESS solution release 5.3 pre-installed.
The TS4500 tape library supports both TS1100 and LTO tape drives.
This feature supports mixed media in a TS4500 tape library. If you are using Library-Managed Encryption (LME), then IBM Security Key Lifecycle Manager is required as the key manager with LTO drives and cartridges.
GDPR is the IT industry's next "Y2K crisis." Effective May 25, 2018, it ensures that any citizen of the European Union can review, rectify, and even erase any personal data from corporate datacenters. Companies that fail to respond to requests can be heavily fined. See Bob Yelland's quick 13-page guidebook on this, titled [GDPR - How it Works].
His team also developed the Non-Obvious Relationship Awareness (NORA) software for the casinos, combining the records of 15 million customers, 20,000 employees, and 18 different watch lists. If a casino did business with people on certain watch lists, they could be put out of business or heavily fined.
NORA alerts identified 24 active VIP players as known cheaters, 12 employees were active gamblers against company policy, 192 employees had possible relationships with casino vendors, and in seven cases the players were the vendor. One casino discovered they were paying to have one of these cheaters flown to Las Vegas to play at their tables!
(IBM acquired Jeff's company, Systems Research and Development (SRD), back in 2005. I had the pleasure of working with Jeff during his 11-year stint at IBM, and participated in his G2 project that was later spun off in 2016 to form his newest company, Senzing. See my 2011 blog post [Storage Innovation Executive Summit] for Jeff's thoughts back then.)
Jeff identifies four challenges in complying with GDPR regulation. Suppose an EU citizen comes to your company and asks just to review all information that you have on them. How would you do that?
So this is Challenge #1: There are lots of places to look. You have a customer database, loyalty club, marketing programs, vendor and supplier databases, and customer service. But wait, the person might have also been an employee! Does your employee database let you search for information on former employees?
Challenge #2 is that the data occurs in variations. Liz Reston could be stored as Elizabeth or Beth. Her last name might have changed from various marriages and divorces. Can you generate all of the variations to search on?
(I know this personally. I am not the only famous "Tony Pearson" out there. There is Tony Pearson, a cricket player in England. There is Tony Pearson, Chief of Staff in the Australian government. And finally, there is 61-year-old "Mr. Universe" Tony Pearson, the "Michael Jackson" of Bodybuilding. Needless to say, women who showed up at my house unannounced looking for him instead were sometimes disappointed!)
Challenge #3 is that existing systems have search limitations. Imagine going to a library that doesn't have a card catalog or computerized index. Instead, you must go floor by floor, row by row, book by book, hunting for the information you want.
Human Resources software might only offer search options for name, date of birth or employee serial number. Hotel systems don't offer search by billing or home address.
Small typos can result in incomplete search results. Home addresses, for example, are often written in different ways, suite or apartment numbers may be represented differently as well, and abbreviations may be used to represent fully-qualified names.
What are you going to do, ask the IT department to write custom SQL queries for you? One of the unexpected benefits of Jeff's NORA system was that it could match entities between databases by street address, a trick that normally isn't designed into most applications.
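One illustrative remedy for these search limitations is fuzzy matching, comparing whole record strings instead of exact fields. This is my own sketch (the records, query, and 0.6 threshold are invented; it is not NORA's algorithm), using Python's standard-library difflib:

```python
from difflib import SequenceMatcher

# Fuzzy matching across record variations -- the kind of comparison an exact
# SQL lookup misses. Records, query, and threshold are invented examples.

def similarity(a: str, b: str) -> float:
    """0.0 to 1.0 ratio of how much two strings have in common."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

records = [
    "Elizabeth Reston, 100 Main St Apt 4",
    "Liz Reston, 100 Main Street #4",
    "Robert Jones, 77 Oak Ave",
]
query = "Beth Reston, 100 Main St., Apt. 4"

matches = [r for r in records if similarity(query, r) > 0.6]
print(matches)   # the Reston variants match despite typos and abbreviations
```

Notice that "Beth", "Liz" and "Elizabeth" with slightly different address formats still cluster together, while the unrelated record stays out.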
Challenge #4 is that not all things that look alike are alike. For example, Liz Reston and her co-dependent husband Bob might [share the same email address].
Family members might have the same home address and phone number. Sons are often named after their fathers, but don't always write "Senior" or "Junior" or "III" at the end of their names.
In other cases, roommates in college, who are not related in any other way, might share the same home address. The same apartment number or home address could be used by different people as the house is sold or apartment is rented from one family to another.
It took Jeff decades to appreciate the results of these entity relationships, and then GDPR happened in 2016. When a citizen asks to review their personal data, which they can after May 25 for free, a company must deliver within 30 days. The person can then ask to rectify certain information, or have it erased altogether.
So what seems like a simple enough question, "What do we know about Liz Reston?", turns out to be challenging to answer for a variety of reasons. Jeff did a survey of over 1,000 European companies; here are the results:
Most companies are not ready, and are concerned about their ability to comply with this GDPR regulation.
Companies expect an average of 246 requests per month.
The search will require accessing, on average, 43 different system databases.
Each database search will take seven minutes.
Companies will need to dedicate seven to eight full time employees to complete these search requests.
Having access to powerful enterprise-wide "single subject search" discovery tools, however, can also lead to search abuse. For example, a famous celebrity is admitted to a hospital, and suddenly sensitive information is leaked to the tabloids or paparazzi. Someone asks their friend, a police officer, to search the license plate on someone's vehicle. A father searches his corporate database for information on his daughter's new boyfriend.
To address this privacy concern, Jeff suggests a tamper-proof audit log that shows who searched for whom. Where are we going to get technology to do this? We already have it: Blockchain! That's right, the technology that enables Bitcoin to operate without government controls already includes a tamper-proof audit log for transactions.
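The tamper-proof audit log idea is essentially a hash chain, the same building block blockchains use. Here is a minimal sketch (my own, not Jeff's product): each entry records the hash of the previous entry, so rewriting history breaks the chain.

```python
import hashlib, json

# Minimal tamper-evident audit log: each entry carries the hash of the
# previous one, so altering any past entry is detectable. My own sketch.

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(log: list, who: str, whom: str) -> None:
    prev = entry_hash(log[-1]) if log else "genesis"
    log.append({"who": who, "searched_for": whom, "prev": prev})

def verify(log: list) -> bool:
    """True only if every entry still matches the hash of its predecessor."""
    return all(log[i]["prev"] == entry_hash(log[i - 1]) for i in range(1, len(log)))

audit = []
append(audit, "clerk_07", "Liz Reston")
append(audit, "nurse_12", "Famous Celebrity")
print(verify(audit))                      # True -- chain intact

audit[0]["who"] = "someone_else"          # attempt to cover tracks...
print(verify(audit))                      # False -- tampering detected
```

Anyone searching for a celebrity or an ex-spouse leaves a fingerprint that cannot be quietly erased afterward.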
Jeff's plan for his new company, Senzing, is to deliver software for different use cases, with APIs for popular programming languages like Java and Python, and a workbench that runs on Windows. He is also considering a "Community Edition" that could be affordable for even the smallest of businesses, with a challenge to the audience to please contribute to this as an open source project.
Last week, IBM clients, Business Partners and executives got together for the inaugural IBM [Think 2018] conference. There were over 30,000 attendees.
In an age of exponentially more data, connected devices and computing power, there are more ways for attackers to breach an organization than ever before. Teams are challenged to manage these threats as they deal with too many disparate tools from too many vendors, an enormous security and IT skills shortage, and a growing number of compliance mandates.
Marc van Zadelhoff, General Manager, IBM Security, kicked off the session "Ready For Anything: Build a Cyber Resilient Organization". The year 2017 was a tough year for security. People can relate to the number of security breaches that happened.
Why do companies struggle in this area? It is not just because hackers have become more sophisticated. IBM Security has over 8,000 security experts to help clients. When IBM is called in, we find 90 percent lack basic fundamentals such as firewall rules and patch management. It takes on average 200 days for companies to detect breaches. Sadly, 77 percent do not have a response plan for after the breach happens.
To help this, IBM has come up with new terminology. At a certain point, [the shit hits the fan], a Canadian phrase meaning "messy consequences are brought about by a previously secret situation becoming public." Marc explained that it often is accompanied by FBI agents showing up at the front door.
Marc referred to this event as "the Boom". All of the preparation and prevention happen "left of Boom". The clean-up, salvaging your brand reputation, and remediating the damage was called "right of Boom". Here are some examples of a Boom event:
Compromised Cloud app
Left of Boom is our domain of choice. Here we face familiar security and IT problems, problems we have studied our entire careers, involving daily activities we complete with a sense of certainty.
Right of Boom is a completely different matter. Others get involved, including Legal, HR, and sometimes even the Board of Directors. These are distant, hazy problems that don't occur every day, bringing more uncertainty.
The Boom is not the initial breach, but when the breach becomes public, an average of 200 days later. Hackers can do quite a lot of damage during these 200 days. What might have started as phishing emails, might continue with access to sensitive databases, stolen credentials to other servers, access to internal networks, and additional compromises.
Likewise, companies should not expect to clean up the mess in just a few days either. IT forensics are used to determine the scope of the breach. Regulators and auditors are notified, press conferences and legal depositions are scheduled to address the public concerns, and social media sentiment might fall.
Back in 2016, [IBM acquired Resilient], a security software company. Ted Julian, IBM VP Product Management and Co-Founder of Resilient, performed a live demo of this software. Basically, it is a dashboard that automates gathering incident data, determines the tasks required, and then orchestrates appropriate responses. This allows the security administrator to launch remediation directly in context.
Last year, over 1,400 customers took advantage of IBM's security breach simulator lab, the IBM X-Force Command Center. On the right side of the Boom, time matters. What might take 90 minutes manually can be done in two minutes with the IBM Resilient dashboard and the right amount of practice and training.
Next on stage were Wendi Whitmore, IBM Security Services, and Mike Errity, Vice President IBM Resiliency Services. While Wendi's team handles the situation from afar, Mike's team lives in the data center. Mike explained Recovery Time Objective (RTO) and Recovery Point Objective (RPO), which apply to recovery after a cyberattack, similar to Disaster Recovery after a hurricane.
Wendi indicated that executives need visibility into what is going on after a breach, and should have retainers in place with PR firms and other industry experts who can be called on short notice as needed right of Boom.
Richard Puckett, Vice President Security Operations, Strategy and Architecture, at Thomson Reuters, was the final speaker. Richard spent the first six months of his job uplifting the security protocols at Thomson Reuters. They partnered with IBM to build up their talent for their Security Operation Center (SOC).
Threats are asymmetric. Unlike traditional physical threats from mobs of people, or trucks parked at the front door, cyber threats go undetected. Once they are detected, it can be difficult to identify the perpetrator. Richard suggests that good security requires good management. Patch management is not the sexiest, but is critical. Don't focus on shiny new objects, but rather fixing weak passwords and poor patch management procedures.
In the struggle to keep up, organizations are not doing a good job of mastering the security fundamentals. IBM believes that with the right approach, technologies and experts, our clients can fight back. IBM can deliver security and resiliency at the scale and speed necessary to protect businesses against the challenges of today, and tomorrow.
While Sal Khan was a hedge fund manager in Northern California, he was also a math tutor to his cousin Nadia over the Internet in the evenings. This extended to 15 other family members. In November 2006, Sal started to record his teachings on a YouTube channel. His cousins liked the YouTube recordings better, as they could go at their own pace.
In 2007, Sal realized that many people who were not family-related were watching his educational videos on YouTube. Sal quit his job and set up [Khan Academy] as a non-profit organization. Unfortunately, the donations he received from students and parents were not enough to support his monthly expenses. However, he received a generous $10,000 US dollar donation from a parent who used the site with her kids.
Word got around. Bill Gates from Microsoft mentioned Khan Academy in an on-stage interview. Mr. Gates admired Sal's wife for letting him quit his job to pursue his interests.
(Later, Mr. Gates invited Sal to visit the Microsoft campus in Seattle, WA, asking him "What could Khan Academy achieve if you had more resources?" A question folks in public education, or the IT industry for that matter, rarely hear! )
By Fall 2010, the Gates Foundation, Google, [and other supporters] helped make this a fully funded organization, and he was able to hire engineers and educators.
Sal gave an interesting analogy. Imagine building a house: the first step is to pour the concrete foundation, instructing the builders to "do what you can in two weeks". The inspection indicates problems, but you go ahead and build the first floor with the same "do what you can in two weeks" approach, then the second floor. Eventually, the house collapses.
Sal organized Khan Academy similar to [Kung Fu belt colors], rather than the way traditional American schools group students by age and promote them in lock-step, regardless of their readiness. Many students have gaps, and being moved to the next grade just results in more gaps. The solution is to fill the gaps in a timely manner.
Sal gave three inspiring stories of some of his students:
Charlie dropped out of high school his freshman year. When he came back to school, he was put in remedial math and science classes. Charlie was able to catch up using Khan Academy, graduated as high school valedictorian, and went on to major in Computer Science at Princeton. Hearing this testimonial, Sal offered him an internship during his Junior year at Princeton. Charlie is now fully employed at Khan Academy.
Some engineers from Silicon Valley went to Mongolia to setup computer labs for kids in an orphanage. One orphan, Zaya, sent an [email with video] to Sal about how much she appreciated learning through Khan Academy. Zaya is now 19 years old, and one of the top contributors to Khan Academy in the Mongolian language, helping to educate her own people.
Seven years ago, a girl named Sultana was living in Afghanistan. The Taliban took over her town, and physically prevented girls from attending school. Sultana had Internet access at home, and taught herself English. She asked her uncle to bring back any reading materials in English he could find. He brought back a Time magazine with an article on Khan Academy.
Between her ten hours of household chores every day, Sultana taught herself math, chemistry, biology and physics using Khan Academy. She illegally crossed into Pakistan, a dangerous 30-hour journey, just to take the SAT exam, and did surprisingly well.
Nicholas Kristof from the New York Times wrote an article, [Meet Sultana, the Taliban's worst fear]. Sultana was able to get asylum in the United States, and is now doing research with a top physicist at MIT.
But how effective is Khan Academy overall? Working with the College Board, Sal was able to do efficacy studies. A study of 250,000 students using Khan Academy for PSAT/SAT prep for just 20 hours showed 100 percent extra gain. A similar study of 10,500 students in Idaho found 80 percent extra gain. In Brazil, a 7,000-student study found that one hour of Khan Academy per week resulted in 30 percent more learning.
The videos on Khan Academy favor being simple and authentic, rather than high production value. The software and equipment used to make the first videos only cost a few hundred dollars. The costs are just 30 US cents per hour of learning.
Today, the free online learning resources cover preschool through early college education, including K-12 math, grammar, biology, chemistry, physics, economics, finance, history, and SAT prep. Khan Academy also provides teachers with tools and data so they can help their students develop the skills, habits, and mindsets they need to succeed in school and beyond.
The concept scales well. Khan Academy has over 150 employees, with another 14,000 volunteers helping with translations. Over 59 million students have registered across 190 countries. Every year, about 300,000 people send in donations. The website has had over 1.4 billion views.
Sal finished his talk with a thought experiment: Go back 400 years ago to Western Europe, a time when only about 10 percent of men, and 5 percent of women, could read. If you asked someone, back then, what percentage of people could be taught to read, they would estimate only 20 to 30 percent.
Today we know that nearly 100 percent of people can be taught to read. However, if you ask people today what percentage of people could become a software engineer, start a business, or write a novel, they estimate only one to five percent.
IBM Watson is also helping out in the area of education. Register today at [Teacher Advisor]!
This week, IBM clients, Business Partners and executives get together for the inaugural IBM [Think 2018] conference. There are over 30,000 attendees.
This is a combination of last year's three events: Edge, InterConnect, and World of Watson (WoW). The combined event is divided into four "campuses":
Cloud and Data -- formerly covered at InterConnect
Modern Infrastructure -- formerly covered at Edge
Business and AI -- formerly covered at World of Watson
Security and Resiliency -- covered in the other three events
(I am not in Las Vegas! In my first post in this series, [Science Slam], I forgot to mention that I was not physically there, and have since been flooded with invitations and requests for one-on-one meetings with clients and cocktail parties. Sorry folks! I am in Tucson writing these blog posts by watching the live stream videos of the event.)
Putting Smart to Work
Ginni Rometty, IBM Chairman, President and CEO, kicked off the event. In the opening video, we realize that "smart" is just a placeholder, translated to "Putting Cloud to Work", "Putting AI to work", and so on.
Ginni described an "interesting moment" that happens every 25 years, when business and technology change at the same time. Those who learn exponentially are disruptors, not victims of disruption.
[Moore's law]: Double the number of transistors on a chip every 18-24 months.
[Metcalfe's law]: The value of a network is related to the square of the number of nodes involved.
[Watson's law]: Ginni would like to coin this new law to refer to exponential learning from data using Artificial Intelligence (AI).
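For readers who like seeing the first two laws as formulas, here is a toy sketch in Python (the starting transistor count and the 24-month doubling period are illustrative assumptions, not figures from the keynote):

```python
def moores_law_transistors(start, months, doubling_period=24):
    """Transistor count after `months`, doubling every 18-24 months."""
    return start * 2 ** (months / doubling_period)

def metcalfe_value(nodes):
    """Network value grows with the square of the number of nodes."""
    return nodes ** 2

# Two doublings over four years quadruples the transistor count:
print(moores_law_transistors(1_000_000, 48))   # -> 4000000.0
# Doubling the nodes in a network quadruples its value:
print(metcalfe_value(10) / metcalfe_value(5))  # -> 4.0
```

Watson's law, by contrast, is deliberately left without a formula: the point is that value now comes from learning on data, not from any single hardware or network metric.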
How much of the world's data is searchable? Only about 20 percent. The other 80 percent is proprietary data that provides competitive advantage. IBM is helping clients be the "incumbent disruptor".
Ginni covered three inflection points: your business, society, and IBM itself.
Companies must go on the offense, leverage multiple digital platforms (plural), and empower people by enabling "man+machine" learning in every process they have. What are better decisions worth? Over $2 trillion US dollars!
Man+Machine is better than man alone or machine alone. At [Credit Mutuel], a leading European bank, Watson technology is used to answer 60 percent of customer emails, and 95 percent of the employees there are happier for it.
IT technology represents both the greatest opportunity and the biggest issue of our time.
Trust and responsibility. We must be data stewards, with focus on privacy and security. Only 4 percent of data is encrypted.
Jobs and skills. Man+Machine augments man alone. 100 percent of jobs will change. Ginni coined the term "new collar jobs" a few years ago.
Inclusion is important. IBM is one of the leaders in this area with its 400,000 employees spanning all races, genders, and sexual orientations. IBM was awarded the [Catalyst award] for companies making real change for women in the workplace. IBM is the only tech company to ever be awarded this, and this will be the fourth time IBM is honored with the award.
IBM has revamped its own HR with [Workday]. In 2016, Workday partnered with IBM on 7-year deal to use IBM Cloud for its platform. IBM in turn has switched its HR to using Workday applications.
Mainframe technologies and POWER9 are now on the IBM Cloud. IBM is also expanding IBM Cloud Private to include "IBM Cloud Private for Data".
To date, IBM has completed 16,000 Watson engagements. Watson Oncology is now in 150 hospitals, analyzing 13 different types of cancer.
The big system Watson used to play Jeopardy! in 2011 has been broken down into micro-services and APIs that are more easily consumable by applications.
IBM and Apple have announced integration with Watson. Apple [CoreML] natively goes to Watson. IBM can now go straight to Apple Swift code. A new "Watson Studio" allows you to develop AI models in the cloud, then deploy them in private on-premises.
IBM will also offer "Watson Assistant". In the past, buying Watson was like buying a puppy, you needed to train it yourself. If you wanted a vicious guard dog, or a seeing-eye dog, that was up to you. Now, IBM offers "Watson Assistant" which is pre-trained.
Secure to the core
IBM is obsessed with security and trust, from Blockchain to Pervasive Encryption.
In the past, IBM often tried to do this all on its own, but in today's business climate, IBM now has strategic partnerships in these many areas.
Lowell McAdam, Chairman and Chief Executive Officer, Verizon Communications was the first guest speaker.
April 2017, Verizon launched Oath, formed from the company’s acquisition of AOL and Yahoo, which houses more than 50 digital and technology brands that together engage more than 1 billion people worldwide.
(I personally have been working with Verizon for decades, back when they were just NYNEX, Bell Atlantic, and GTE, before the Vodafone, MCI, AOL and Yahoo deals! I use Flickr, one of the Yahoo brands.)
With the acquisition of AOL and Yahoo, Verizon formed "Oath", with over 1.2 billion consumers. The name came from the promise to customers: to give them what they want, when they want it.
Verizon is the largest fiber provider in the USA, with enough fiber on hand to stretch to Mars.
They invest $18 billion per year, but the payoff often doesn't come for another five years. [5G Wireless network technology] is an example. Lowell feels that 5G will usher in the "fourth" industrial revolution:
Speeds over 1 Gbps for consumers, 25 Gbps for commercial, compared to the 10 Mbps typical today.
5G will support 1,000 more devices per cell site, enabling IoT like intelligent lighting, video surveillance, face recognition.
5G has low latency: 1 msec to the cell site and back, compared to 200 msec today. This shorter latency will enable Augmented Reality and Virtual Reality (AR/VR).
5G also reduces battery consumption, imagine only charging your cell phone once per month!
Verizon delivers value three ways:
Provide connectivity only. Verizon will continue to do this for some markets.
Like IBM, Verizon promises it will not use customer data in any manner that the customer did not "opt in" for. Business is based on trust. Businesses that lose trust have a difficult time regaining it.
Shipping, Supply Chain and Global Trade
Michael J. White manages the Global Trade Digitization organization for Maersk. He was recently named CEO-designate of the IBM-Maersk Joint Venture.
Shipping products is a $4 trillion US dollar business. As much as 80 percent of what we consume came over the ocean. On average, 20 percent of the shipping cost is administrative paperwork; in some cases, the administrative costs exceed the physical transport costs.
The state of the industry over the last 5 years has been a 3.7 percent compound annual growth rate (CAGR). This is expected to increase to 4 percent as economies bounce back. Many companies run lean, expecting their supply chains to provide supplies "just in time".
Unfortunately, shipping is hugely inefficient and paper-based. This impedes the growth of trade. Take for example the shipment of a container of avocados from Kenya to the Netherlands: 30 entities involved, over 100 individuals, over 200 transactions.
Why did the IBM-Maersk joint venture pick blockchain? Blockchain is not a solution searching for a problem. The problems are well known, and blockchain addresses them. Smart contracts and decentralized authority provide immutable trust, critical in an industry where many parties do not know each other.
The IBM-Maersk joint venture was formed over the past 18 months to create the world's best global trading platform. There are 25 companies on-boarding now, and another 40 companies have expressed interest in joining soon.
Unlike the anonymity of Bitcoin, which enables terrorists and murderers for hire, IBM is focused on transparency, so that all parties identify each other.
Blockchain benefits all the key parties involved. Carriers benefit, customers benefit, and ports and terminals get information earlier upstream for better planning during peak periods, and this results in better utilization of resources available.
(Not everyone benefits - counterfeiters and corrupt government officials will not be happy with Blockchain used in this manner!)
Paperless transactions reduce the re-keying of information by 80 percent. Less re-keying means fewer mistakes and fewer typos.
This new global trade platform offers opportunities in adjacent blockchain networks for financial services, insurance, and food safety. To ensure food safety, Blockchain is used by Walmart, Kroger, Unilever and 20 others. One third of food grown is wasted.
Dave McKay, President & Chief Executive Officer, Royal Bank of Canada (RBC) was the next speaker. Dave graduated from the University of Waterloo, and is a COBOL computer programmer at heart. RBC still uses COBOL programs in its banking applications!
RBC is the top bank in Canada, and would be the #5 bank if it were based in the USA. It will celebrate its 150th anniversary in 2019. It has earned the highest customer satisfaction rating for multiple years running. RBC has 13 million customers, and is also Canada's #1 broker/dealer for investment banking.
Back in the 1980s, banks were only open 10am-3pm, and treated it as a privilege for clients to work with the bank. Account holders came in several times per week, and relationships were built with local branches. Today, account holders are not coming into branch offices, using ATMs and mobile phones instead.
In the past, consumers used their RBC Credit Cards, and this provided brand recognition for RBC. Today, traditional banking services are now being embedded into other value chains. With Apple Wallet, for example, you enter your RBC credit card once, and then nobody knows what bank you are using to pay for coffee.
Like any bank, RBC is focused on three areas: moving money, storing money, and lending money. AI is needed to turn these transactions into knowledge, to provide business value and insight. However, RBC had only 40 applied and pure data science researchers on staff. This was deemed not enough, so RBC partnered with IBM.
For the computing power and speed needed, RBC turned to the cloud, and now has 60 apps in development in the IBM Cloud. While Silicon Valley start-ups might "let the app fail faster in the hands of clients", that approach doesn't work with money transactions.
RBC has invested heavily in blockchain, which will transform how we work with others. Digital transformation is not just technology, but also cultural change. Is RBC in the mortgage business or the "housing enablement business"? Is it in the car loan business or the "transportation enablement business"?
Small businesses want to focus on their own clients, not bookkeeping and accounting. To help, RBC has deployed AI in the Cloud to create the Advisor's Virtual Assistant [AVA] application. There have been over 48 million interactions in the first four months!
RBC is also investing $500 million this year to build the IT skills of their employees.
RBC is also focused on the stewardship of data. The strength and trust of financial institutions is the core to a strong economy. RBC policies are based on "opt in" to provide value relevant to both clients and the bank. Banks that breach that trust will struggle.
Ginni (and the rest of the company) has re-invented IBM to achieve exponential change. The change impacts all industries, not just the three we saw on the stage during this keynote session.
To follow along with the rest of the Think 2018 conference, watch the live stream at [www.ibm.com/events/think/watch] or follow the Twitter hashtag #Think2018.