Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Systems Client Experience Center in Tucson Arizona, and featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
My books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Day 1 included keynote sessions. Here is my recap for the morning.
General Session "The Quantum Age"
Amy Hirst, IBM Director of Systems Training, served as emcee for the General Session. The theme this week is "Power of Knowledge, Power of Technology, Power of You. You to the IBM'th power".
Chris Schnabel, IBM Q Offering Manager, explained what "IBM Q" is.
Chris feels "our intuition of what we can compute is wrong". Classical (non-quantum) computing has evolved over the past 100 years.
Consider molecular geometry. The best supercomputer can only handle the smallest molecules, those with 40 to 50 electrons, and even then is unable to calculate bond lengths to within 10 percent accuracy. Quantum computing can.
Another area is what computer scientists call the "Traveling Salesman Problem". If you had a list of 57 cities, what would be the optimal path that minimizes the distance traveled to visit all of the cities? An exhaustive search would require on the order of 10 to the 76th power operations. Dynamic Programming techniques provide some shortcuts, reducing this to roughly 10 to the 20th power, but even that is impossible on most computers.
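Those growth rates are easy to reproduce yourself. Here is a small Python sketch (illustrative only; the exact exponents depend on how you count operations) comparing the brute-force search space against the Held-Karp dynamic-programming bound of roughly n-squared times 2-to-the-n:

```python
import math

def brute_force_orderings(n):
    # Exhaustive search: fix the starting city, permute the other n-1
    return math.factorial(n - 1)

def held_karp_states(n):
    # Held-Karp dynamic programming visits about n^2 * 2^n subproblems
    return n * n * 2 ** n

n = 57
print(f"Brute force: ~10^{math.log10(brute_force_orderings(n)):.0f} orderings")
print(f"Held-Karp:   ~10^{math.log10(held_karp_states(n)):.0f} states")
```

The dynamic-programming count is astronomically smaller than the brute-force count, yet still far too large to evaluate on a classical machine.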
Chris mentioned that there are easy problems to solve in polynomial time, and hard problems that are exponential, in that they get worse and worse the bigger the input set. There will always be hard problems.
"Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy."
-- Richard Feynman
Nature encodes information, but not in ones and zeros. Quantum computers are measured on the number of Qubits, their error rate, etc. The three factors that IBM focuses on are Coherence, Controllability and Connectivity.
Chris explained how Superposition and Entanglement are used in Quantum Computers. I won't bore you with the details here, but rather save this for a future post.
Today: 5 to 16 Qubits (can be simulated with today's classical computers; 5 qubits is within the power of a typical laptop)
Near future: 50-100 Qubits (too big to simulate on supercomputers), with answers that are approximate or correct only 2/3 of the time.
Future: millions of Qubits, fault-tolerant to provide exact, precise answers consistently.
Quantum Computing opens up a new range of problems, what Chris calls "Quantum Easy" problems. Problems that might take years to solve on classical supercomputers could be solved in seconds on a Quantum computer.
Chris showed a picture of [Colossus], the first digital electronic computer, used in the 1940s. Quantum computing today is like the 1940s of classical computing.
IBM is now working on Hybrid Quantum-Classical algorithms, for example:
Quantum Chemistry - can be used in materials design, healthcare, and pharmaceuticals
Optimization - logistics/shipping, risk analytics
There are different ways to build a quantum computer. IBM chose a single-junction transmon design, using Josephson junctions. While the chips are small, the refrigerators they are contained in are huge, and have to keep the chips at a very cold 15 millikelvin (about minus 459 degrees Fahrenheit)!
To get people excited about Quantum computing, IBM created the "IBM Q Experience" [ibm.com/ibmq] that allows the public to run algorithms on a basic 5 Qubit system using a simple drag-and-drop interface to put different transformational gates in sequence.
The IBM Research team was shocked to see 17 publications in prestigious journals make practical use of this 5-qubit system! Since then, IBM now offers a Software Development Kit (SDK) called QISkit (pronounced Cheese-kit) as a text-based alternative to the drag-and-drop interface.
Amy Hirst came back on stage to remind people to use Twitter hashtag #ibmtechu to follow the event. There are two more events like this planned for the end of the year. A Power/Storage conference in New Orleans, October 16-20, and another event focused on z Systems mainframe, November 13-17.
Pendulum Swings Back -- Understanding Converged and Hyperconverged Systems
This presentation has an interesting back-story. At a client briefing, I was asked to explain the difference between "Converged" and "Hyperconverged" Systems, which I did with the analogy of a pendulum. I used the whiteboard, and then later made it into a single chart.
At the far left of the pendulum, I start with mainframe systems of the early 1950s that had internal storage. As the pendulum swings to the middle, I discuss the added benefits of external storage, from RAID protection and Cache memory to centralized management and backup.
To the far right of the pendulum, it swings over to networked storage, from NAS to SAN attached devices for flash, disk and tape. This offers excellent advantages, including greater host connectivity, and greater distances supported to help with things like disaster recovery.
Here is where the pendulum swings back. IBM introduced the AS/400 a long while ago, and more recently IBM PureSystems that combined servers, storage and switches into a single rack configuration. Other vendors had similar offerings, such as VCE Vblock, Flexpod from NetApp and Cisco, and Oracle Exadata.
Lately, the pendulum has swung fully back to internal storage, with storage-rich commodity servers running specialized software. There are two kinds:
Pre-built systems like Nutanix, Simplivity or EVO:Rail which are x86 based server systems, pre-installed with software and internal flash and disk storage.
Software that can be deployed on your own choice of hardware, such as IBM Spectrum Accelerate, IBM Spectrum Scale FPO, or VMware VSAN.
So, over time, my single slide has evolved and fleshed out into a full-blown, hour-long presentation!
Cloud storage comes in four flavors: persistent, ephemeral, hosted, and reference. The first two I refer to as "Storage for the Computer Cloud" and the latter two I refer to as "Storage as the Storage Cloud".
I also explained the differences between block, file and object access, and why different Cloud storage types use different access methods.
Finally, I covered some of our new public cloud storage offerings, using OpenStack Swift and Amazon S3 protocols to access objects off premises, including the new Cold Vault and Flex pricing on IBM Cloud Object Storage System in IBM Bluemix Cloud.
(FCC Disclosure: I work for IBM. I have no financial interest in SUSE, Scality, or any other storage vendor mentioned in this post. This blog post can be considered a "paid celebrity endorsement" for IBM Storwize, IBM Cloud Object Storage, and IBM Spectrum Storage software mentioned below.)
The study takes a realistic request for 250 TB of storage, at 25 percent compound annual growth rate (CAGR), to store infrequently accessed data in an online archive, and then looks at the Total Cost of Ownership (TCO) over a five-year period.
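As a quick sanity check on those study inputs, a few lines of Python (using the study's stated parameters: 250 TB starting capacity, 25 percent CAGR, five years) show how the capacity requirement grows:

```python
def capacity_over_time(start_tb, cagr, years):
    # Capacity needed each year under compound annual growth
    return [round(start_tb * (1 + cagr) ** y, 1) for y in range(years + 1)]

print(capacity_over_time(250, 0.25, 5))
# -> [250.0, 312.5, 390.6, 488.3, 610.4, 762.9]
```

By year five the environment has roughly tripled, which is why the TCO comparison matters more than the day-one purchase price.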
The study compares five different Software-Defined Solutions and three pre-built systems. The Software-defined solutions come as software-only, requiring that you purchase the hardware separately and build it yourself. The three pre-built systems were chosen from the top three storage vendors in the marketplace: Dell EMC, IBM and NetApp.
The cost of support is factored in, as it should be. To keep things equal, no data reduction techniques such as deduplication or compression were used.
In an odd approach, the study mixes block, file and object based approaches all in the same study.
You can read the full 14-page study (linked above). I have organized the results into a single table, ranked from best to worst, color coded for the best deals in green ($100K to $200K), moderate solutions in yellow ($200K to $300K) and most expensive in red (over $300K). I put the software-only options on the left and pre-built systems on the right.
SUSE Enterprise Storage 4
IBM Storwize V5010
DataCore SAN Symphony
Red Hat Ceph Storage
Dell EMC Unity 300
I am often asked, "Isn't the software-only, build-it-yourself approach, always the lowest cost option?" Now, I can answer, "Sometimes yes, sometimes no." Fortunately, IBM offers Software-Defined Storage in a variety of packaging options including software-only, pre-built systems, and in the Cloud as a service.
IBM Storwize V5010 is based on IBM Spectrum Virtualize software, which you can deploy as software-only on your own x86 servers. This was not mentioned in the study, and perhaps it is my job to remind people that this option is also available for those who want to build their own storage.
For that matter, IBM Cloud Object Storage System -- available as software-only, pre-built systems, and in the Cloud -- might also be a cost-effective alternative.
Next week I will be in Orlando, Florida for the IBM Systems Technical University. If you are attending, stop by one of my presentations, or look for me at the Solution Center at one of the IBM peds, or attend the "Meet the Experts for IBM Storage" on Thursday!
I have been blogging for more than 10 years now, so I am no stranger to commenting on competitive comparisons. In some cases, I am setting the record straight, and other times, poking fun at competitor results, claims or conclusions. This comparison from Brian Carmody was too juicy to ignore.
(FCC Disclosure: I work for IBM. I have no financial interest in Infinidat, Dell EMC, nor Pure Storage, mentioned in this post. I do have friends and former co-workers who now work for Infinidat. This blog post can be considered a "paid celebrity endorsement" for IBM FlashSystem products.)
Here is an excerpt, I have added (Infinidat) wherever Brian says "we" just so there is no confusion:
"... So last week we (Infinidat) finally got around to running the same profiles against an INFINIDAT F6230 in our Waltham Solution Center, configured with 1.1TB of DDR-4 DRAM, 200TB TLC NAND, and 480 3TB Nearline HDDs.
In summary, we (Infinidat) wrecked the Pure and EMC systems. Here are the results side by side with EMC's data:
EMC Unity 600F
16K IOPS (80% Read): 9x Pure, 5x Unity
256K BW MBps: 10.6x Pure, 3x Unity
4.5x Pure, 1.6x Unity
Steady-state latency (ms): 1/7 Pure, 1/2 Unity
By the way, we (Infinidat) took the liberty of running the test with a 200TB data set instead of Pure and EMC's 50TB because modern workloads require performance at scale, and we ran it with in-line compression enabled because our compression algorithm doesn't hurt performance.
This was an interesting test to run, and we (Infinidat) hope it helps the storage industry move away from media type wars and benchmarks (you will lose every time on performance if INFINIDAT is in the mix) ..."
Notice anything wrong here? Anything missing?
The Tortoise beat "Hare 1" and "Hare 2", but did not invite the Cheetah to the race?
Brian was smart enough not to compare their product to anything from IBM. IBM has a wide variety of All-Flash Arrays, including the DS8880F models, the Storwize V7000F and V5030F models, and Elastic Storage Server models. However, for this workload, IBM would probably recommend the FlashSystem V9000, A9000 or A9000R.
Any All-Flash Array with a steady-state latency of 2 milliseconds or greater is embarrassing, but then Infinibox is not really an All-Flash Array.
The architecture of their Infinibox appears much like the original XIV. It has a mix of DRAM memory and SSD cache, combined with spinning drives. It offers only compression, not data deduplication. Unlike the IBM XIV powered by six to 15 servers, the Infinibox appears under-powered with just three servers.
The Infinibox uses software-based in-line compression, which must put a huge tax on the few CPUs they have in those three servers. Infinidat chose not to compress the data in their cache, probably to reduce the additional overhead on their over-taxed CPUs.
The IBM FlashSystem V9000 has an innovative design, based on IBM Spectrum Virtualize, the mature software that you also find in the IBM SAN Volume Controller and Storwize family of products.
The FlashSystem V9000 offers hardware-accelerated compression. IBM takes advantage of the integrated Intel QuickAssist co-processor, which runs the compression algorithm 20 times faster than a standard Intel Broadwell CPU.
IBM compresses its cache, using a two-tier approach. The "upper cache" receives the data uncompressed, so that it can then tell the application to continue, for fastest turn-around time. Then the data is compressed, and stored in the "lower cache", optimizing the value and benefits of DRAM memory. Many databases get up to 80 percent savings, resulting in a 5-to-1 benefit in DRAM cache memory.
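To make the two-tier idea concrete, here is a toy Python sketch of the general pattern (purely illustrative; this is not IBM's Spectrum Virtualize implementation): writes land uncompressed in an upper tier so the application gets a fast acknowledgement, then a background step compresses data into a lower tier to stretch the DRAM:

```python
import zlib

class TwoTierCache:
    """Toy sketch of a two-tier write cache: the upper tier holds raw
    data for fast acknowledgement; a demote step compresses it into
    the lower tier to get more effective capacity out of DRAM.
    (Illustrative only; not IBM's actual implementation.)"""

    def __init__(self):
        self.upper = {}   # uncompressed, newly written data
        self.lower = {}   # compressed data

    def write(self, key, data):
        self.upper[key] = data   # application is acknowledged here
        return "ack"

    def demote(self):
        # Background step: compress upper-tier data into the lower tier
        for key, data in self.upper.items():
            self.lower[key] = zlib.compress(data)
        self.upper.clear()

    def read(self, key):
        if key in self.upper:
            return self.upper[key]
        return zlib.decompress(self.lower[key])

cache = TwoTierCache()
cache.write("block42", b"database page " * 100)
cache.demote()
assert cache.read("block42") == b"database page " * 100
```

On repetitive data like database pages, the compressed copy in the lower tier is a fraction of the original size, which is the source of the cache-capacity multiplier described above.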
The IBM FlashSystem A9000 and A9000R also have an innovative design, based on IBM Spectrum Accelerate, the code originally developed for the IBM XIV storage system.
(Fun fact: Infinidat's founder, [Moshe Yanai], was formerly the founder and designer of XIV, and it appears that Infinidat is just a re-design of old XIV technology architecture, re-packaged with a few differences. Since Moshe left, IBM has drastically enhanced the IBM XIV.)
Like the IBM Spectrum Virtualize family, the IBM FlashSystem A9000 and A9000R have hardware-accelerated in-line compression, and two-tier approach to cache. The "upper cache" receives the data uncompressed, then the data is compressed and deduplicated, and stored in the "lower cache", optimizing the value and benefits of DRAM memory.
The IBM FlashSystem A9000 and A9000R also offer in-line data deduplication. Modern workloads are virtualized, and Virtual Machine (VM) and Virtual Desktop Infrastructure (VDI) get significant benefits from data deduplication. Infinidat does not play here. For the FlashSystem A9000, most of the metadata related to data deduplication is in cache, minimizing the overhead.
IBM FlashSystem A9000 and A9000R have full performance that blows these published Infinibox results away WITH compression and deduplication turned on.
Brian ran a workload that used the DRAM and SSD cache exclusively, eliminating the reality that any REAL WORLD workload would have to tap into those much slower spinning drives. This is not really a side-by-side benchmark. He is comparing his live run on Infinibox to published numbers from a previous comparison run on a completely different set of data.
This raises the question, why pay for all those spinning drives at all, if you plan to only use the DRAM and Flash storage for your workloads?
A week later, Brian followed up with another post [The INFINIDAT Challenge], acknowledging his comparison was bogus. Here's an excerpt. Again, I have added (Infinidat) wherever Brian is referring to his employer just so there is no confusion:
"... It's not likely that a room full of storage engineers will ever agree on parameters for a synthetic benchmark since storage evaluations are competitive and control of test parameters will invariably predetermine the 'winner'. However, I hope we can all agree that synthetic benchmarks are a waste of time, and that real world performance is what matters in the data center.
So, what can we (Infinidat) do about it?
We (Infinidat) cordially invite every enterprise storage customer who wants lower latency and lower storage cost to visit [FasterThanAllFlash.com] and sign up for The INFINIDAT Challenge.
We (Infinidat) will Give you an Infinibox system to test
We (Infinidat) will Help you clone and test your environment with Infinibox
We (Infinidat) Guarantee your applications will run faster on Infinibox than your All-Flash Array.
If we (Infinidat) fail, we'll take the system back and Donate $10,000 to the charity of your choice.
If our technology delivers, you can keep the system, and we'll (Infinidat) Donate $10,000 in your name to the charity of our choice (The American Cancer Society).
Thanks again to all who participated in the dialog over the past week. I know the post generated some controversy. Traditional storage companies are fighting for their lives trying to keep enterprise storage expensive; indeed their business models are predicated upon maintaining price levels from a bygone era...."
As a consolidation play offering a full range of data services, I do not see this Infinibox working out. Talking to clients who have the Infinibox, performance deteriorates in REAL WORLD workloads as you add more data to the unit.
The Infinibox seems fine for workloads that do not demand high performance, so I was surprised Brian compared it to All-Flash arrays. The Infinibox is out of its league!
(To be fair, Pure Storage and EMC XtremIO aren't really in the same league as IBM FlashSystem, either, given that both of those products are based on commodity SSD. IBM FlashSystem models consistently deliver 4 to 10 times lower latency than these commodity-SSD based competitors.)
The Infinibox also lacks features many people expect in an Enterprise-class storage array, like Call-Home capability to identify problems quickly, and synchronous remote mirroring for disaster recovery. It is common for startups like Infinidat to deliver a [Minimum Viable Product] as their first offering.
To paraphrase Brian himself, your applications will lose every time on performance if INFINIDAT is in your datacenter.
The new TS1155 enterprise tape drive can write up to 15 TB uncompressed data to existing JD/JZ/JL media.
It can read/write existing 10TB-formatted JD media and 7TB-formatted JC media written by earlier TS1150 drives. It also offers read-only support for older 4TB-formatted JC media from TS1140 drives.
These are uncompressed capacities, and some clients achieve 2x or 3x compression on top of these capacities. This depends heavily on the type of data. Your mileage may vary, as they say.
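A quick sketch of what those ratios mean for the new 15 TB JD cartridges (the ratios here are illustrative; actual results depend heavily on the data):

```python
def effective_capacity_tb(native_tb, ratio):
    # Effective cartridge capacity at a given compression ratio
    return native_tb * ratio

# TS1155 JD cartridge: 15 TB native
for ratio in (1, 2, 3):
    print(f"{ratio}x compression: {effective_capacity_tb(15, ratio)} TB per cartridge")
# -> 15, 30, and 45 TB respectively
```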
Most of the rest of the features of the TS1150 drives carry forward. The 360 MB/sec performance is similar, encryption via IBM Security Key Lifecycle Manager (SKLM) is similar, and support for IBM Spectrum Archive via the Linear Tape File System (LTFS) format is similar.
An interesting development is that the TS1155, in addition to standard 8Gb Fibre Channel attach, is the first IBM enterprise drive to also offer 10Gb Ethernet support. IBM will offer both RDMA over Converged Ethernet (RoCE) as well as iSCSI support.
The newest member of the IBM Spectrum Storage software family, IBM Spectrum Copy Data Management automates the creation of snapshot images (FlashCopy for those familiar with IBM terminology) on IBM, NetApp and EMC storage arrays. These copies can be made for various uses, such as DevOps, Dev/Test, Backup/Restore, and Disaster Recovery.
At some data centers, these copies can consume as much as 60 percent of your total storage space, because often each developer and tester generates their own copies. Instead, having copies automated, registered, cataloged, and made available to developers and testers eliminates rogue copies.
This release adds support for additional databases, including Microsoft SQL Server on physical machines, SAP HANA in-memory databases, and Epic/Caché from InterSystems used in Electronic Health Records (EHR) management systems.
IBM also adds support for long-distance vMotion for VMware virtual machine images. The target for this movement is IBM Spectrum Accelerate running on IBM Bluemix Cloud, supporting Hybrid Cloud configurations.
Over the past ten years, my co-workers have asked to write a "guest post" on this blog. This time, Moshe Weiss, IBM Senior Manager, Development and Design, has offered the following post, not in his own voice, but in the voice of his "baby", the Hyper-Scale Manager software.
You might think this is a strange approach, but today we have robots that can dance, and cars that can drive themselves! If software could talk, this is what IBM Hyper-Scale Manager would say:
"I was born a year ago.
It wasn't an easy birth… there were many complications. In fact, so many that I was almost born prematurely!
Most of my development, in preparation for labor and delivery, was done within the last 6 months of the overall 18 months. I was shaped and designed, and sometimes re-shaped, three times. Lots of assumptions had to be made in hopes of easing a successful delivery and helping bring me to full term.
During my first year of maturity, I focused on learning how customers used me; what frustrated them the most, and what they loved or 'almost' loved, while still needing refinement and redesign.
The number of customers adopting me grew higher and higher, as did the number of complaints and bugs that I had to deal with, and my users’ frustrations and dislikes because I wasn't yet a complete solution and still had some missing features.
I was renewed four times! Each renewal improved me and made my senses better and faster, adding new capabilities that helped make me more approachable, intuitive and delightful.
Choosing how to renew, and what to add to each renewal, is not an easy task. Basically, it was about prioritizing user experience versus gaps that were deferred from my birth, versus differentiators to make me unique and sell more, versus features in my roadmap, versus investing huge efforts in my quality.
Each renewal was a complex process with lots of features and behaviors to add, while trying to make my customers’ life a bit easier, since features that were important to them were sometimes considered low priority.
But, there were also good times during my first year:
Huge customer adoption rate
100 new customers in two months!
Growing was a great thing and my parents were and are still so proud! But, like with most things, it came with a price - a lot of support issues from the field, requests for changes, and negative feedback that I was hard to use and missing core elements.
Being a new baby in the Storage world is not a simple thing, as expectations are huge (mainly because of my successful elder brother, the XIV GUI) and I must quickly keep up with all of them.
Although, I am getting tons of good feedback for being revolutionary and unique. People are emotionally engaged with me, and being that I’m a baby, I love to see emotions!
Huge marketing efforts to put me center stage
However, because of some initial problems at the start -- I am a new product, remember? -- I was thrown out of multiple customer sites, and some sales/marketing guys just stopped believing in me. That made me sad.
My parents did a great job, though, in talking, explaining and demonstrating what I can do, together with what I can’t do now, but will do soon. This really helped in some areas, and customers began to see what my parents saw in me for so many years.
I’m really enthusiastic to hear what people will think of me when I’m two years old!
As part of the four renewals I had during my first year, design elements were reconsidered, redesigned and rewritten to find the best solutions ever. No product has come even close to what I offer to the world… I am so proud of myself!
Additionally, my parents wrote approximately 20 patents on my User Interface (UI) elements and User Experience (UX) concepts, which makes me extremely unique.
Prioritizing what went in and what didn't, especially as fewer and fewer babysitters handled me during that year, was a real challenge. Read my parents' post [How to drive forward an exhausted team?] for more details.
But my parents did it! They succeeded in adding cool features like:
Filter analytics and free text, making the filter a great experience that everyone is using.
Great UX improvements like redesigning the tabs, adding right click menus, and adding more on-boarding enablers
Improving the dashboard.
Improving my core business, capacity management (four different times!), and still working on it.
Adding features that were initially deferred in my birth. Deferring features back then was the way to make my birth go smoother. Now, these missing features annoy people.
Improving quality dramatically, adding automation to the way people test me.
Adding differentiators, like the health widget, with more than 20 best practices that provide helpful tips to the customer when there’s a need to change something in their environment, to avoid future issues.
Continuing to bring added value for the 'A-family'. I am monitoring FlashSystem A9000/R, XIV and Spectrum Accelerate, both on and off premises. This added value gives the family one of the most powerful management solutions and experiences."
If you are planning to attend the upcoming IBM Systems Technical University in Orlando, Florida, May 22-26, there will be a variety of hands-on labs. I recommend participating in the hands-on session to see and experience the next release of IBM Hyper-Scale Manager.
This week, I was part of an all-day event called "Healthcare and Research Trends & Directions in a Cognitive World" at the IBM Executive Briefing Center (EBC) in Rochester, MN. I was one of many presenters covering Information Technology to improve healthcare outcomes. Todd Stacy, IBM Director Server Sales for US Public Market, served as our emcee.
This was a great day. Special thanks to Kathy Lehr, Trish Froeschle, and Scott Gass for organizing this event! We had clients from a variety of Health Care and Life Science industry backgrounds. I certainly learned a few things myself.
Dr. Michael Weiner, IBM Chief Medical Information Officer, Watson Health, covered some of the real challenges not just facing the United States, but also other countries. On average, healthcare in USA [costs over $10,000 USD per American citizen]! Compare that to only $3,700 USD for the folks in the United Kingdom! In fact, nearly all industrial nations spend between $2,000 and $5,000 per person. Where does all the U.S. money go?
A big challenge is our ever-aging population. Every day, there are 10,000 [Baby Boomers] reaching their 65th birthday, with fewer people in the 25-44 age group to work as nurses to take care of them. About 15 percent of the US population are elderly (over age 65) and this is expected to grow to 20 percent in year 2040. The situation is even worse in Japan, where 25 percent of the population today is elderly, and this is expected to be 40 percent by year 2060.
New Care Models
In some countries, like Australia and Japan, postal workers who once spent all their time delivering mail can now stop in to check on elderly people. As people send less mail, using social media or email instead, this keeps the postal workers employed in a manner that provides value to society.
The USA enjoys some of the lowest food costs, but then suffers from an epidemic of obesity, with over 34 percent of Americans obese. When New York City eliminated trans fats, heart attacks dropped considerably.
In 2009, the Health Information Technology for Economic and Clinical Health [HITECH] Act required the digitization of medical information, known as "Meaningful Use", which has greatly influenced healthcare facilities. This was implemented by a combination of incentives and penalties. Now, more than 92 percent of hospitals in the USA have digitized medical information! The rest are still using paper and X-ray film images. Some places were initially exempted, such as Assisted Living Homes, so there is still more work to be done.
An advantage of using computer-based solutions like Artificial Intelligence is that it can help reduce bias. When a woman walks into an Emergency Room complaining of chest pains, few health staff would consider this a sign of heart attack. When a man does the same, health staff consider heart attack as the first diagnosis, at the risk of missing other possibilities.
Every year, over a million articles related to healthcare research are published. Who can read all this in a timely manner? IBM Watson! After [winning in Jeopardy], IBM Watson was "sent to medical school" to learn how to assist doctors in diagnosing patients.
Transforming Health Care Data Management with IBM Spectrum Storage
Greg Tevis, IBM Software Defined Storage Architect, and Raj Tandon, IBM Senior Strategist, co-presented this introduction to the IBM Spectrum Storage family of products. They covered examples with IBM Spectrum Virtualize, IBM Spectrum Control, IBM Spectrum Protect, IBM Spectrum Scale, IBM Cloud Object Storage, and IBM Copy Data Management. The latter offers direct support for Epic and Caché databases.
Cognitive Imaging Solutions for Healthcare Providers
Jason Crites, IBM Healthcare and Life Sciences Data Solutions Leader, and Wayland Vacek, Enterprise Sales Manager for Merge, presented IBM Watson Imaging Clinical Review, from IBM's acquisition of the Merge company. The solution is based on IBM Spectrum Scale as the back-end storage repository.
Merge has been around for more than 20 years, with clinical workflow offerings in Cardiology, Radiology, Orthopedics and Eye care. Often, IBM Watson is able to identify things in medical images that escape the review of radiologists or other medical specialists.
At the HIMSS conference earlier this year, human radiologists were shown a collection of images used to train IBM Watson. The human radiologists identified only 20 percent of the images correctly, while IBM Watson got all of them right, every time. In many cases, human radiologists have only a few seconds to look at an X-ray image. Computers like IBM Watson are now fast enough to compete directly with human radiologists in the same number of seconds.
Building a Foundation for the Cognitive Era in Healthcare and Life Sciences
Dr. Jane Yu, IBM Systems Architect, Healthcare & Life Sciences, and Dr. Frank Lee, IBM Global Sales Leader, IBM Software Defined Infrastructure & Life Sciences, co-presented this topic. They presented five challenges:
Growing data volumes are making it more difficult to manage, process and store this data.
Scientists find themselves spending more than 80 percent of their time manually integrating data from silos, and less than 20 percent of their time doing actual research and deriving insights from their analyses.
Compute- and data-intensive workflows may take days to complete on existing server and storage systems.
IT organizations must keep up with rapidly evolving applications, development frameworks, and databases preferred for Healthcare and Life Science (HCLS) applications. These include SAS, Matlab, Hadoop, Spark, and NoSQL databases, as well as Deep Learning and Machine Learning workloads.
Scientific integrity and government mandates increasingly require collaboration across organizational boundaries.
In one example, Sidra Medical and Research Center plans to map the genomes of all 250,000 citizens in the Middle Eastern country of Qatar. Sequencing each Qatari citizen will generate about 200 GB of data, resulting in 50 Petabytes (PB) of data for this project!
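The back-of-the-envelope math checks out. Here is a quick sketch, assuming decimal storage units (1 PB = 1,000,000 GB):

```python
# Rough capacity estimate for a population-scale genomics project.
citizens = 250_000
gb_per_genome = 200                 # ~200 GB of sequence data per person

total_gb = citizens * gb_per_genome
total_pb = total_gb / 1_000_000     # convert GB to PB (decimal units)

print(total_pb)  # 50.0
```

At that scale, even small changes in per-genome data volume shift the storage budget by whole petabytes.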
Combining IBM Spectrum Compute products with IBM Spectrum Scale storage can help address these challenges.
Modernize & Transform Healthcare with IBM Storage Solutions
Finally, I presented a 90-minute breakout session that covered three solution areas:
Flash storage to speed up medical records and research. Those who have already implemented Electronic Health Records (EHR) for "Meaningful Use" compliance recognize the value this provides to improving healthcare. Adding All-Flash Arrays such as IBM FlashSystem, Storwize V7000F or DS8000F can drastically improve application performance.
Spectrum Scale and IBM Cloud Object Storage for Vendor Neutral Archive. It seems silly that each PACS vendor has its own little island of storage. A better approach is to send all PACS data from various vendors into a "Vendor-Neutral" storage repository. Both IBM Spectrum Scale and IBM Cloud Object Storage System, either linked together or used separately, can be part of a VNA solution.
VersaStack to simplify deployments. VersaStack is a Converged System that combines best-of-breed Cisco servers and switches with best-of-breed IBM storage, pre-cabled, pre-configured, and pre-loaded with all the necessary software to manage the environment as a single entity. This can reduce the time it takes to deploy new medical applications from weeks to just hours.
Next month, I will be presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. There will not be an "IBM Edge" conference this year, so this is your best opportunity to hear the latest information on all of the IBM server and storage products at one conference.
I will be there! Here are the topics I will be presenting:
The pendulum swings back -- Understanding Converged and Hyperconverged environments
IBM cloud storage options
Software Defined Storage -- Why? What? How?
Business continuity -- The seven tiers of business continuity and disaster recovery
Introduction to object storage and its applications - Cleversafe
New generation of storage tiering: Less management, lower investment and increased performance
IBM Spectrum Scale for file and object storage
This conference is not all lectures, which some refer to as "Death by Powerpoint".
There will also be a variety of hands-on labs. I recommend participating in the hands-on session to experience first-hand the next release of IBM Hyper-Scale Manager, the management application for what IBM calls its A-line storage family -- FlashSystem A9000/R, XIV Storage System, and Spectrum Accelerate software.
Hyper-Scale Manager is the most advanced GUI on the market today, and may help cut your management total cost of ownership (TCO) in half!
You can [Enroll Today!] There is an "early-bird" special to save hundreds of dollars if you enroll by April 16!
Well, it's Tuesday again, and you know what that means? IBM Announcements!
Last week, at the InterConnect conference, IBM's premiere Cloud and Mobile event, there were announcements regarding IBM Cloud Object Storage.
IBM Cloud Object Storage Cold Vault
It seems that all the major Cloud Service Providers (CSPs) offer storage for warm, cool, and cold data.
IBM Cloud Object Storage Cold Vault service gives clients access to cold storage data on the IBM Cloud and is designed to lead the category for cold data among major competitors. Cold Vault joins the existing Standard and Vault tiers.
Cold Vault -- for rarely accessed mixed workloads and applications: archiving, long-term data retention, historical data compliance.
Vault -- for cooler workloads accessed once a month or less: archiving, short-term data retention, digital asset preservation, tape replacement and disaster recovery.
Standard -- for warm data and active workloads accessed multiple times per month: DevOps, social, mobile, collaboration and analytics.
These tiers are available in both "Regional" and "Cross Regional" deployment models. Customers can choose between Regional Service, which spreads their data across multiple data centers in a given region, and Cross Regional Service, which spans at least three geographically dispersed regions. For example, Regional may have data in three data centers all located in Texas, but Cross-Regional might have data in California, Texas and New York.
IBM Cloud Object Storage Flex
In the past, clients used "Information Lifecycle Management" (ILM) policies to place data where it was initially needed, then move it to less expensive tiers of storage as it ages and is accessed less frequently. This was done between Flash, Disk and Tape storage on premises, but the concept can be applied to off-premises Cloud storage as well.
Flex is a new cloud storage service offering simplified pricing for clients whose data usage patterns are difficult to predict. Flex enables clients to benefit from the cost savings of cold storage for rarely accessed data, while maintaining high accessibility to all data.
Data that is accessed several times per month (warm) will be charged at Standard rates, data that is accessed once a month or so (cool) will be charged Vault rates, and data that is rarely accessed (cold) will be charged at Cold Vault rates.
In effect, this gives you ILM cost savings without ever having to move your data between Standard, Vault or Cold Vault tiers.
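The Flex pricing idea above can be sketched as a simple rate lookup driven by access frequency. The rates and thresholds below are made-up placeholders for illustration, not actual IBM Cloud pricing:

```python
# Hypothetical illustration of Flex-style tiered billing: the rate applied
# to data depends on how often it was accessed that month, with no need to
# migrate the data between tiers. All rates are invented examples.
RATES_PER_GB = {"standard": 0.030, "vault": 0.012, "cold_vault": 0.007}

def monthly_charge(size_gb: float, accesses_per_month: int) -> float:
    """Pick a billing tier from access frequency, then charge per GB."""
    if accesses_per_month >= 2:        # warm: accessed several times a month
        tier = "standard"
    elif accesses_per_month == 1:      # cool: accessed about once a month
        tier = "vault"
    else:                              # cold: rarely accessed
        tier = "cold_vault"
    return size_gb * RATES_PER_GB[tier]

# 1 TB of cold data is billed at the cheapest rate automatically,
# while the same 1 TB accessed frequently is billed at Standard rates.
print(monthly_charge(1000, 0))   # 7.0
print(monthly_charge(1000, 5))   # 30.0
```

The point of the design is that the tier decision happens at billing time, per billing period, rather than by physically relocating objects.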
IBM Cloud Object Storage partnership with NetApp AltaVault
Many backup software products have been enhanced to write directly to IBM Cloud Object Storage, including IBM Spectrum Protect (formerly known as IBM Tivoli Storage Manager or TSM for short), as well as Commvault Simpana and Veritas NetBackup.
IBM is now extending this ecosystem to NetApp. NetApp AltaVault cloud backup solutions will be able to send backups automatically to IBM Cloud Object Storage on the IBM Cloud. Native support for IBM Cloud Object Storage in AltaVault allows NetApp users to store backup data in the IBM Cloud.
IBM Cloud Object Storage for Bluemix Garage
For software developers, [IBM Bluemix Garage] provides a global consultancy with the DNA of a startup that combines the best practices of Design Thinking, Agile Development and Lean Startup to accelerate application development and cloud innovation.
Now, you can use IBM Cloud Object Storage in the IBM Cloud to support Garage method projects.
Want to try it out? With Promo Code "COSFREE", you can get a free year of Standard Cross-region IBM Cloud Object Storage with up to 25 GB/month access! See [Free Storage Promotion] for details.
This week, IBM InterConnect conference is going on in Las Vegas, Nevada.
One time in Las Vegas, I took the gondola ride at the Venetian Hotel. These are not boats with a motor on a chain or track, but are actually steered and propelled independently by the gondolier. At various points on our path, our gondolier would serenade our group with beautiful Italian songs.
As the ride was ending, I asked our gondolier how long their training program was to do this job. He told me "six weeks". I said "Wow, I would love to learn how to sing Italian songs like that in six weeks". He corrected me, "No, silly, they only hire experienced singers, and teach them six weeks to manage the gondola by turning the oar in the water."
(FCC Disclosure: I work for IBM. I have no financial interest in the Venetian Hotel, CBS Studios, or the producers of any television shows mentioned in this post. David Spark has provided me a complimentary copy of his book. This blog post can be considered an "unpaid celebrity endorsement" for the book reviewed below.)
InterConnect 2017 includes "Concourse", a trade show floor with people showing off the latest technologies. In the past 25 years, I have attended many conferences, and on occasion I have worked "booth duty". I am not in Las Vegas this week, so this post is advice to those that are.
One time, when the coordinators for an upcoming conference announced at an all-hands meeting that they were looking for "a number of knowledgeable and outgoing volunteers" to work the IBM booth, one of the employees in the audience asked "How many of each?" While this might have been meant to draw laughs, it underscored a real problem.
In many IT and engineering fields, the terms "knowledgeable" and "outgoing" are seen as mutually exclusive. People are either one or the other. A study titled [Personality types in software engineering], by Luiz Fernando Capretz of The University of Western Ontario, analyzed Myers-Briggs Type Indicator of personality and found the majority of engineers were "Introverts".
This line of thinking is further reinforced by the various characters on television shows like "The Big Bang Theory". If you are familiar with the show, Sheldon and Amy are the most knowledgeable, but also the most socially awkward, while Penny and Howard are less knowledgeable but at the more outgoing end of the spectrum.
I understand that for many engineers, working a booth at a trade show is far outside their "comfort zone". But what do you think is more likely, that you can train an engineer to work a booth in six weeks, be more outgoing, hold the right conversations, tell the right stories -- or -- train a professional model, a young, good looking man or woman, who is already outgoing and friendly, to answer technical engineering questions about your products and services?
I have been attending conferences for over 25 years, and occasionally have worked a booth or two. I started out as an engineer, but went through extensive training for public speaking, talking to the media and press, and moderating Q&A Expert panels.
Sadly, most people who work the booth get little to no training at all. You might be told your scheduled hours, how to scan bar codes on badges, and where the brochures and swag are stored. Then you get your official "shirt" and are told to wear it with a certain color of pants, so that everyone looks like part of the team.
Fortunately, fellow blogger David Spark, of Spark Media Solutions, has written a book titled "Three feet from Seven Figures" with loads of advice on how to work a booth with one-on-one engagement techniques to qualify more leads at trade shows.
The title of his book warrants a bit of explanation. When you are working a booth, potential buyers and influencers are walking by, often just three feet away from you, and these could represent million-dollar opportunities.
Too often, the folks working a booth take a passive approach. They look down at their phones, chat with their colleagues, and basically wait for complete strangers to ask them a question or request a demo. This non-verbal communication can really be a turn-off. David explains this in all-too-familiar detail and how to be more actively engaged.
David shows how to break the ice and build rapport with each attendee, how to qualify them as legitimate leads, and how to handle each type of situation.
For qualified leads, you need to maximize the opportunity. If you imagine how much a company spends to send its employees to work the booth, plus the cost of the booth itself, and divide it by the limited number of hours that the trade show floor is open, you quickly realize that each hour is precious.
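That "each hour is precious" argument is easy to quantify. Here is a rough sketch with entirely made-up example figures (none of these are real costs):

```python
# Illustrative only: every number below is an invented assumption,
# used to show why each hour of exhibit-floor time is expensive.
booth_cost = 50_000           # booth space, shipping, setup
staff_cost = 6 * 3_000        # six staffers, travel and time each
floor_hours = 20              # hall open ~4 hours/day for 5 days

cost_per_hour = (booth_cost + staff_cost) / floor_hours
print(cost_per_hour)  # 3400.0
```

At thousands of dollars per hour of floor time, standing around looking at your phone is a measurably expensive habit.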
Your time is valuable, and certainly their time is valuable too. Don't spend too long on a single lead; capture the information, end the conversation, and move on.
If you are working a booth at IBM InterConnect, or plan to work a booth at an event later this year, I highly recommend getting this book! It is available in a variety of hard copy and online formats at [ThreeFeetBook.com].
I am not in Las Vegas this week for this year's event, but the sessions will be streamed live through [IBM GO].
IBM Systems Technical University - May 22-26, 2017 - Orlando, FL
IBM Systems Technical University is the evolution of a variety of other conferences related to servers, storage and software. It started out as the "IBM Storage Symposium", added "System x" servers and was renamed "Storage and System x University", then dropped "System x" when IBM sold off that business to Lenovo.
A few years ago, it was renamed "Edge", initially focused just on Storage, but two years ago it was combined with System z mainframe servers and POWER Systems for IBM i and AIX platforms. It also covers software products that previously had their own conferences, like IBM Pulse or MaximoWorld.
Last year, the IBM Marketing team tried a daring experiment: change "Edge" into a "Cognitive Solutions and Cloud Platform" conference, with less emphasis on IT Infrastructure.
The experiment failed. Not because IBM Systems don't support these new initiatives, but because the audience was more interested in hearing how IBM Systems help their current day-to-day business. As many attendees told me, "If we wanted to hear about Cognitive or Cloud, we have plenty of other conferences that cover that already!"
While 40 percent of IBM revenues are generated from Cognitive Solutions and Cloud Platform, the other 60 percent are traditional, on-premises, systems-of-record application workloads, the kind that businesses, non-profit groups, and government agencies have been using for the past few decades!
To address this need, IBM offered three-day "IBM Systems Technical University" events at various locations. Last year, I presented storage topics at events in Atlanta, Austin, Bogota, Boston, Chicago, Dubai, Nairobi, and São Paulo.
We will have several of those this year as well. The main one will be a full 5-day event, May 22-26, in Orlando Florida. I will be there presenting various sessions on storage!
IBM World of Watson - October 29-November 2, 2017 - Las Vegas, NV
This is a Cognitive Solutions and Cloud Platform conference, with an emphasis on Analytics and Database technologies.
I did not attend World of Watson, or WoW for short, last year, but it was an evolution of the conference previously called "IBM Insight". I am sure everything from DB2 and Open Source databases to Hadoop and Spark will be covered this year as well.
In writing this post, I realize that this year will be like a "Conference Sandwich". Cognitive-and-Cloud at the top and bottom, with all the meat, veggies and garnish in the middle!