This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years with IBM Systems, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value solution, a part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter at @ldean0558 and on LinkedIn as Lloyd Dean.
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored, listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
This week, IBM sponsored a nice multi-client event in San Juan, Puerto Rico. I was quite impressed with the quality of this video. Our marketing department has really done a good job on this!
This event was not just multi-client, but also spanned different industry sectors. IBM has recently realigned into five sectors, and we had clients from several of them attending the event.
The night before, I was able to meet most of the other IBM executives who came down for the event. Unfortunately, two were delayed because of the snow storms in the Northeast part of the United States, but they were able to arrive the next day.
The venue was the El Touro restaurant, near the Hilton Caribe. The weather was just right, about 75 degrees and breezy. It was a little humid for me, but everyone else was just happy to be out of the cold. Meanwhile, it was nearly 90 degrees in Tucson, Arizona, where I am from.
This was billed as a "Lunch and Learn" and the food was delicious! In an effort to keep it simple, we had small dishes of fish with a fruit-based cream sauce, paella with rabbit meat and rice, pork belly, and Crema Catalana with a churro for dessert. This gave everyone a sample taste of everything, without having to order off a menu.
We took basically the same approach with the presentation. First, Marcos Obermaeir and Marcos Otero, the two leads for this event, thanked the audience and explained their new roles. Marcos Obermaeir focuses on the Financial and Insurance sector, while Marcos Otero focuses on the Communications sector.
Next we had Debbie Niven and Roopam Master, both IBM Executives, explain their roles, and how IBM can help both clients and Business Partners in Puerto Rico.
I presented samples of much larger presentations on three topics. First, the excitement over Software Defined Storage with the IBM Spectrum Storage family of products. Second, IBM Spectrum Scale as a better replacement for the Hadoop Distributed File System (HDFS) for Hadoop, IBM BigInsights and Hortonworks analytics deployments. Third, IBM Cloud Object Storage, and how this can be combined with IBM Spectrum Protect to back up your data to object storage either on premises or in the cloud.
I could have easily spoken an hour on each topic, but instead, we shortened to about 20 minutes each, in keeping with the "Tapas" theme of the restaurant. This allowed those clients who wanted to hear more to have a reason to request a follow-up visit or call.
After the clients left, the IBM team had a reception for the IBM Business Partners. About 80 percent of IBM's storage business in Puerto Rico is done through IBM Business Partners, so they are an important link in IBM's "Go-to-Market" strategy.
The moon was nearly full, and the breeze and waves were a spectacular backdrop to the conversations I had with each person I met.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
IBM Storwize V5030F and V7000F all-flash high-density expansion enclosure
The 5U-high, 92-drive expansion enclosure introduced for the IBM Storwize V5000 and V7000 is now available for the all-flash models V5030F and V7000F. High-density expansion enclosure Model A9F requires IBM Spectrum Virtualize Software V7.8, or later, for operation.
The enclosure allows any mix of "Tier 0" write-endurance SSD at 1.6TB and 3.2TB capacities, and "Tier 1" read-intensive SSD at 1.92TB, 3.84TB, 7.68TB and 15.36TB capacities.
Storwize V5030F control enclosure models support attachment of up to 40U of expansion enclosures, which equates to eight high-density expansion enclosures, up to 760 drives per control enclosure, and up to 1,056 drives per clustered system.
Storwize V7000F control enclosure models support attachment of up to eight high-density expansion enclosures, up to 760 drives per control enclosure, and up to 3,040 drives per clustered system.
IBM has adopted an "Agile" process for all of its IBM Spectrum Storage software. Spectrum Virtualize is offered in a variety of forms: IBM offers the FlashSystem V9000, SAN Volume Controller, Storwize family, and Spectrum Virtualize as software that runs on Lenovo and SuperMicro servers. This means quarterly delivery of new features and functions!
Lots of small enhancements were added in this release:
Apply Quality-of-Service (QoS) limits to a Host Cluster in terms of IOPS and/or MB/s throughput.
SAN Congestion reporting, via buffer credit starvation reporting in Spectrum Control and via the XML statistics reporting, for the 16Gbps FCP Host Bus Adapter (HBA).
Resizing for Metro Mirror and Global Mirror remote copy services of thin provisioned volumes.
Consistency Protection for Metro Mirror and Global Mirror. You can now define "Change Volumes" to be used in the event of problems with MM or GM; the relationship will switch over to GMCV mode.
Increased FlashCopy Background Copy Rates
Proactive Host Failover during temporary and permanent node removals from cluster
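For the curious, rate limits like the Host Cluster QoS throttle above are commonly implemented as token buckets. Here is a toy Python sketch of the idea; this is my own illustration of the general technique, not IBM's actual implementation:

```python
import time

class IOPSThrottle:
    """Toy token-bucket limiter illustrating an IOPS cap.

    A sketch of the concept only, not Spectrum Virtualize internals.
    """
    def __init__(self, iops_limit: int):
        self.rate = iops_limit           # tokens replenished per second
        self.tokens = float(iops_limit)  # start with a full bucket
        self.last = time.monotonic()

    def try_io(self) -> bool:
        # Replenish tokens based on elapsed time, capped at the limit.
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should queue or delay the I/O
```

A burst of requests drains the bucket, after which I/Os are admitted only at the configured steady rate.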
IBM Aspera® Files cloud service helps to enable fast, easy, and secure exchange of files and folders of any size between users, even across separate organizations. Aspera Files is currently available in three all-inclusive editions of Personal, Business, and Enterprise. Clients can subscribe either to a committed amount of data transferred on a monthly or annual basis or as a pay-per-use option.
Personal edition now includes 20 authorized users and a single workspace.
Business edition now includes 100 authorized users, 100 workspaces, support for IBM Aspera Drive, support for IBM Mobile applications, and support for Single-Sign-On.
Enterprise edition now includes 500 authorized users, no limit on number of workspaces, support for IBM Aspera Drive, support for IBM Mobile applications, and support for Single-Sign-On.
IBM is now introducing a new "Elite edition", which includes 2,500 authorized users, no limit on the number of workspaces, support for IBM Aspera Drive, support for IBM Mobile applications, support for Single-Sign-On, and access to the IBM Aspera Developer Network and a nonproduction organization.
With the addition of the new Elite edition, clients have the flexibility to subscribe to additional functionality in Aspera Files that helps provide higher value and greater differentiation. The Elite edition is available as a subscription and on a pay-per-use basis.
In addition to the existing charge metric of data transferred, a user subscription metric is now included for all four editions. Each edition comes with an included number of authorized users in addition to other key features and capabilities.
Well, it's Tuesday again, and you know what that means? IBM Announcements! There were lots of announcements today, so I have split this up into two posts. One for the Tape and Cloud announcements, and the other for the Spectrum Storage family.
IBM Spectrum Virtualize Software V7.8.1
IBM Spectrum Virtualize™ V7.8.1 is the latest software for the FlashSystem V9000, SAN Volume Controller and Storwize products.
Last release, IBM introduced "Host Groups" for clusters that needed to share a common set of volumes. This release offers "Host cluster I/O throttling": I/O throttling can be managed at the host level (individual or groups) and at the managed disk level for improved performance management, with GUI support.
Increased background FlashCopy transfer rates: This feature enables you to increase the rate of background FlashCopy transfers, providing faster copies as the infrastructure allows. This takes advantage of the higher performance capabilities of today's systems, processing the copy in a shorter period of time. The default was 64 MB/sec, and now we can go up to 2 GB/sec, for those who want their FlashCopy to be done as fast as possible.
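To see why the higher rate matters, here is a quick back-of-the-envelope calculation. The 2 TiB volume size is just an example of mine:

```python
def copy_time_seconds(volume_gib: float, rate_mib_per_s: float) -> float:
    """Time to background-copy a volume at a given FlashCopy rate."""
    return volume_gib * 1024 / rate_mib_per_s

# A 2 TiB volume at the old 64 MiB/s default versus the new 2 GiB/s maximum:
old = copy_time_seconds(2048, 64)    # 32768 s, about 9.1 hours
new = copy_time_seconds(2048, 2048)  # 1024 s, about 17 minutes
```

The same copy that used to tie up most of a workday can now finish over a coffee break, infrastructure permitting.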
Port Congestion Statistics: Zero-buffer-credit statistics help detect SAN congestion in performance-related issues, improving support in high-performance environments. IBM had this for the 8 Gbps FCP cards, but not for the 16 Gbps cards, so now that's fixed.
Resizing of volumes in remote mirror relationships: Target volumes in remote mirror relationships will be automatically resized when source volumes are resized. Lots of clients asked for this, and IBM delivered!
Consistency protection for Metro/Global Mirror relationships: An automatic restart of mirroring relationships after a link fails between the mirror sites improves disaster recovery scenarios, helping to ensure the applications are protected throughout the process.
When IBM introduced "Global Mirror with Change Volumes" (GM CV), I wanted to call it "Trickle Mirror", because the primary site takes a FlashCopy, trickles the data over, then FlashCopy at the remote site. Now, clients using traditional Metro or Global Mirror can add "Change Volumes" as protection. In the unlikely event a network disruption occurs, it drops down to GMCV until the link resumes full speed.
Support of SuperMicro servers for the Spectrum Virtualize as Software Only offering: Support for x86-based Intel™ servers by SuperMicro for Spectrum Virtualize Software is available with this release.
Last year, IBM offered Spectrum Virtualize as software that could run on Lenovo servers. However, now there are clients who want alternative server choices.
Supermicro SuperServer 2028U-TRTP+ is supported to run Spectrum Virtualize Software. This is a great option for end clients, managed service or cloud service providers deploying private clouds, building hosted services, or using software-defined storage on third-party Intel servers. This is a fully inclusive license with all key features of Spectrum Virtualize available in a single, downloadable image.
IBM Spectrum Control V5.2.13 and IBM Virtual Storage Center V5.2.13
We often joke that IBM Virtual Storage Center is the [Happy Meal] combining storage virtualization with Spectrum Virtualize hardware like FlashSystem V9000, SAN Volume Controller or Storwize as the "hamburger", Spectrum Control as the "fries" and "Spectrum Protect Snapshot" as the "soft drink". Storage Analytics was included as a "prize inside", only available in the VSC bundle, to entice clients to choose this option.
Whenever IBM updates Spectrum Control, they often put out a new version of the Virtual Storage Center bundle as well. I was the Chief Architect for Spectrum Control from 2001 to 2002, and Technical Evangelist for SVC in 2003 when we first introduced the product, so I have a long history with both products.
This release provides additional information and performance metrics on Dell EMC VMAX and EMC VNX devices. This is done natively; the devices do not need to be virtualized behind Spectrum Virtualize as was often done in the past.
IBM now offers better visibility of drives within IBM Cloud Object Storage Slicestor® nodes. IBM acquired Cleversafe 18 months ago and is working to bring it under the Spectrum Control management umbrella.
IBM Spectrum Scale™ file system to external pool correlation. Spectrum Scale can migrate data to three different types of "external pools":
Cloud Object pool, either on-premise Object Storage or off-premise Cloud Service Provider storage.
Spectrum Protect pool, where Spectrum Protect manages the migrated data on one of 700 supported devices, including tape, virtual tape, optical, flash, disk, object storage or cloud.
Spectrum Archive pool, where data is written directly to physical tape using the Industry-standard LTFS format.
This release provides additional information on the copy data panel about SAN Volume Controller (SVC) HyperSwap® and vDisk mirror.
While the "Virtual Storage Center" bundle is an awesome deal, some clients have asked for the "Vegetarian Option" (Fries and Drink only). Why? Because they want the advanced storage analytics (prize inside) for other devices like DS8000, XIV, etc. So, IBM created the "IBM Spectrum Control Advanced Edition", which has everything in VSC except the Spectrum Virtualize itself.
Advanced edition adds improvements to the chargeback report. It also includes IBM Spectrum Protect™ Snapshot V8.1 release.
IBM Spectrum Control Storage Insights Software as a Service
Storage Insights is IBM's "Software-as-a-Service" reporting-only offering, a subset of Spectrum Control Advanced Edition. It includes direct support for Dell EMC VMAX, VNX, and VNXe storage systems. This is huge! Now, clients who have only EMC hardware can, on a monthly basis, figure out where they are wasting money and decrease their costs.
Other features carried over include the enhanced drive support for IBM® Cloud Object Storage, enhanced external capacity views for IBM Spectrum Scale™, and the additional replication views for vDisk mirror and HyperSwap® relationships for SAN Volume Controller (SVC) and Storwize® devices that I mentioned above.
Well, it's Tuesday again, and you know what that means? IBM Announcements! There were lots of announcements today, so I have split this up into two posts. One for the Tape and Cloud announcements, and the other for the Spectrum Storage family.
IBM TS7700 Virtual Tape System
IBM TS7700 release 4.1.1 now supports seven- and eight-way grids with approved RPQs. Before this, grids could only have up to six TS7700 systems connected together.
IBM also plans to extend the capacity of the TS7760 base frame to over 600 TB, and to extend the capacity of a fully configured TS7760 system to over 2.45 PB, before compression, by supporting 8 TB disk drives. This is a huge increase over the 4TB and 6TB drives used today.
IBM offers the IBM Cloud Object Storage System in three ways: as software, as pre-built systems, and as a cloud service on IBM Bluemix (formerly known as SoftLayer).
For those not familiar with IBM Cloud Object Storage (IBM COS), consider it "Valet Parking" for your storage. In a valet parking environment, you have valet parking attendants that drive the cars, parking garages that hold the cars, and a manager that oversees the operation. With IBM COS, you have Accesser® nodes that receive and retrieve your data like valet parking attendants, you have Slicestor® nodes that store your objects like cars in a parking garage, and you have IBM COS Manager to oversee the operation.
Today, IBM announced new HDD options for the S01, S02 and S03 models of Slicestor nodes. These are all 7200 RPM, 3.5-inch Nearline drives, at capacities of 4 TB, 6 TB, 8 TB and 10 TB.
In addition, a short-range 40 GbE SFP+ transceiver is available for ordering on IBM Cloud Object Storage Accesser models A00, A01, and A02, and IBM Cloud Object Storage Slicestor models S01 and S02. This improves the performance of data transfer between the Accesser nodes and the Slicestor nodes. Think of it like shortening the distance valet parking attendants have to drive your car to the garage and run back.
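The "Slicestor" name hints at what happens under the covers: objects are sliced with an erasure code so that a subset of the slices is enough to rebuild the whole. As a toy illustration of my own, far simpler than the real dispersal algorithm, here is a two-data-plus-one-parity XOR scheme where any two of the three slices recover the object:

```python
def make_slices(data: bytes):
    # Pad to an even length (padding handling is elided for brevity),
    # split into two data slices, and add one XOR parity slice.
    if len(data) % 2:
        data += b"\x00"
    half = len(data) // 2
    a, b = data[:half], data[half:]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return [a, b, parity]

def recover(slices):
    # Any two of the three slices are enough to rebuild the object.
    a, b, p = slices
    if a is None:
        a = bytes(x ^ y for x, y in zip(b, p))
    if b is None:
        b = bytes(x ^ y for x, y in zip(a, p))
    return a + b
```

Production systems spread many more slices across Slicestor nodes, so several nodes can fail without losing data, but the principle is the same.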
I have been presenting Cloud Storage for nearly 10 years now. People are often shocked to learn that most of the major cloud providers -- including Amazon, Google, Microsoft -- do not offer "Data at Rest" encryption on their storage offerings.
Why not? Because it would mean investing in Self-Encrypting Drives, Key management software, and other related technology to make it happen. Instead, Cloud Service Providers (CSPs) expect you to encrypt the data in software. Most users encrypt data before it lands on the cloud, but what if you create the data in the cloud?
IBM solved this by offering IBM Cloud Object Storage in its IBM Cloud (formerly known as SoftLayer). It has integrated encryption software that takes care of this for you.
This new product, IBM Multi-Cloud Data Encryption V1.0, enables you to encrypt files, folders, and volumes in any cloud while maintaining local control of encryption keys. It integrates with IBM Security Key Lifecycle Manager (SKLM). This is designed to allow you to move cipher data between clouds that are running Multi-Cloud Data Encryption without decrypting and re-encrypting the data.
For example, you can use IBM Multi-Cloud Data Encryption to protect your data on Amazon, Google or Microsoft, then later realize that you can save a ton of money moving to IBM Cloud instead, and you are now able to move the data over seamlessly!
(Back in 2010, I poked fun at EMC with my post [VPLEX: EMC's Latest Wheel is Round], pointing out that EMC had announced "new features" that already existed in IBM's SAN Volume Controller. Oops! They did it again!)
Basically, Dell EMC is working on a new "2 Tiers" approach that combines a high-performance flash tier with a high-capacity object storage tier. Guess what? IBM already offers this! Why wait?
IBM Spectrum Scale, formerly known as the General Parallel File System (GPFS), supports POSIX, HDFS, OpenStack Swift, Amazon S3, NFS, SMB and iSCSI protocols.
Spectrum Scale can provide this front-end abstraction layer between flash and object storage, including IBM Cloud Object Storage system and IBM Bluemix (formerly SoftLayer) cloud services.
But why limit yourself to just two tiers? IBM Spectrum Scale can also support 15K, 10K and 7200 RPM spinning disk tiers, as well as a virtual or physical tape tier, the ultimate low-cost, high-capacity tier!
Several years ago, IBM coined the phrase "FLAPE" to discuss the two-tier approach of combining Flash with Tape using Spectrum Scale as the front-end abstraction layer.
Perhaps we should call combinations of Flash and Object "FLobject" storage? If the name catches on, you read it here first!
IBM is in a transition from being a "Systems, Software and Services" company, to become the leading "Cognitive Solutions and Cloud Platform" company. IBM has been in this transformation for the past three years or so, and [over 40 percent of its revenue] now comes from these strategic initiatives.
The purpose of AI and cognitive systems developed and applied by the IBM company is to augment human intelligence. Our technology, products, services and policies will be designed to enhance and extend human capability, expertise and potential. Our position is based not only on principle but also on science.
Cognitive systems will not realistically attain consciousness or independent agency. Rather, they will increasingly be embedded in the processes, systems, products and services by which business and society function -- all of which will and should remain within human control.
For cognitive systems to fulfill their world-changing potential, it is vital that people have confidence in their recommendations, judgments and uses. Therefore, the IBM company will make clear:
When and for what purposes AI is being applied in the cognitive solutions we develop and deploy.
The major sources of data and expertise that inform the insights of cognitive solutions, as well as the methods used to train those systems and solutions.
The principle that clients own their own business models and intellectual property and that they can use AI and cognitive systems to enhance the advantages they have built, often through years of experience. We will work with our clients to protect their data and insights, and will encourage our clients, partners and industry colleagues to adopt similar practices.
The economic and societal benefits of this new era will not be realized if the human side of the equation is not supported. This is uniquely important with cognitive technology, which augments human intelligence and expertise and works collaboratively with humans.
Therefore, the IBM company will work to help students, workers and citizens acquire the skills and knowledge to engage safely, securely and effectively in a relationship with cognitive systems, and to perform the new kinds of work and jobs that will emerge in a cognitive economy.
This week, I was reminded that back in 2011, Watson beat two human players, Ken Jennings and Brad Rutter on the TV game show "Jeopardy!" On his last response, Ken wrote "I for one welcome our new computer overlords." With IBM investing heavily in Cognitive Solutions, should people be worried, or welcome the new technology?
Back in 1950, in his short-story collection "I, Robot", Isaac Asimov laid out his "Three Laws of Robotics":
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Let's take a look at how Artificial Intelligence has been represented in the movies over the past few decades. I have put these in chronological order when they were initially released in the United States.
(FCC Disclosure and Spoiler Alert: I work for IBM. This blog post can be considered a "paid celebrity endorsement" for cognitive solutions made by IBM. While IBM may have been involved or featured in some of these movies, I have no financial interest in them. I have seen them all and highly recommend them. I am hoping that you have all seen these, or at least familiar enough with their plot lines that I am not spoiling them for you.)
2001: A Space Odyssey
Back in 1968, Stanley Kubrick and Arthur C. Clarke made a masterpiece movie about a mysterious monolith floating near Jupiter. To investigate, a crew of human beings takes a spaceship managed by a sentient computer named [HAL-9000].
(Many people thought HAL was a subtle reference to IBM. Stanley Kubrick clarifies:
"By the way, just to show you how interpretation can sometimes be bewildering: A cryptographer went to see the film, and he said, 'Oh. I get it. Each letter of HAL's name is one letter ahead of IBM. The H is one letter in front of I, the A is one letter in front of B, and the L is one letter in front of M.'
Now this is a pure coincidence, because HAL's name is an acronym of heuristic and algorithmic, the two methods of computer programming...an almost inconceivable coincidence. It would have taken a cryptographer to have noticed that."
Source: The Making of 2001: A Space Odyssey, Eye Magazine Interview, Modern Library, pp. 249)
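Coincidence or not, the cryptographer's observation is easy to verify: shifting each capital letter forward by one in the alphabet does turn "HAL" into "IBM":

```python
def shift_letters(name: str, offset: int = 1) -> str:
    # Shift each capital letter forward by `offset`, wrapping around at Z.
    return "".join(
        chr((ord(c) - ord("A") + offset) % 26 + ord("A")) for c in name
    )

shift_letters("HAL")  # -> "IBM"
```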
The problem arises when HAL-9000 refuses commands from the astronauts. The astronauts are not in control: HAL-9000 was given separate orders from ground control back on Earth, and it has determined the mission would be more successful without the crew.
Westworld
In 1973, Michael Crichton wrote and directed this movie about an amusement park with three uniquely themed areas: Medieval World, Roman World, and Westworld. Robots are used to staff the parks to make them more realistic, interacting with the guests in character appropriate for each time period.
A malfunction spreads like a computer virus among the robots, causing them to harm or kill the park's guests. Yul Brynner played a robot called simply "the Gunslinger". Equipped with fast reflexes and infrared vision, the Gunslinger proves especially deadly!
(Michael Crichton also wrote "Jurassic Park", which had a similar story line involving dinosaurs with catastrophic results!)
Last year, HBO launched a TV series called "Westworld", based on the same themes covered in this movie. The first season of 10 episodes just finished, and the next season is scheduled for 2018.
Blade Runner
Directed by Ridley Scott, this 1982 movie stars Harrison Ford as Rick Deckard, a law enforcement officer. Rick is tasked to hunt down and "retire" four cognitive androids called "replicants" that have killed some humans and are now in search of their creator, whom they try to reach through a designer named J. F. Sebastian.
(I enjoy the euphemisms used in these movies. Terms like kill, murder or assassinate apply to humans but not machines. The word "retire" in this movie refers to destruction of the robots. As we say in IBM, "retirement is not something you do, it is something done to you!")
Destroying machines does not carry the same emotional toll as killing humans, but this movie explores that empathy. A sequel called "Blade Runner 2049" will be released later this year.
WarGames
In 1983, Matthew Broderick plays David, a young high school student who hacks into the U.S. Military's War Operation Plan Response (WOPR) computer. The WOPR was designed to run various strategic games, including war game simulations, learning as it goes. David decides to initiate the game "Global Thermonuclear War", and the military responds as if the threats were real.
Can the computer learn that the only way to win a war is not to wage it in the first place? And if a computer can learn this, can our human leaders learn this too?
The Terminator
In this series of movies, a franchise spanning from 1984 to 2009, the US military builds a defense grid computer called [Skynet]. After cognitive learning at an alarming rate, Skynet becomes self-aware, and decides to launch missiles, starting a nuclear war that kills over 3 billion people.
Arnold Schwarzenegger plays the Terminator model T-800, a cognitive solution in human form designed by Skynet to finish the job and kill the remainder of humanity.
I, Robot
In this 2004 movie, Will Smith plays Del Spooner, a technophobic cop who investigates a crime committed by a cognitive robot.
(Many people associate the title with author Isaac Asimov. A short story called "I, Robot" written by Earl and Otto Binder was published in the January 1939 issue of 'Amazing Stories', well before the unrelated and more well-known book 'I, Robot' (1950), a collection of short stories, by Asimov.
Asimov admitted to being heavily influenced by the Binder short story. The title of Asimov's collection was changed to "I, Robot" by the publisher, against Asimov's wishes. Source: IMDB)
Del Spooner uncovers a bigger threat to humanity, not just a single malfunctioning robot, but rather the Virtual Interactive Kinesthetic Interface, or simply VIKI for short, a cognitive solution that controls all robots. VIKI interprets Asimov's three laws in a manner not originally intended.
Ex Machina
In this 2015 movie, Domhnall Gleeson plays Caleb, a 26-year-old programmer at the world's largest internet company. Caleb wins a competition to spend a week at a private mountain retreat. However, when Caleb arrives he discovers that he must interact with Ava, the world's first true artificial intelligence, a beautiful robot played by Alicia Vikander.
(The title derives from the Latin phrase "deus ex machina", meaning "a god from the machine", a device that originated in Greek tragedies. Source: IMDB)
Nathan, the reclusive CEO of this company, relishes this opportunity to have Caleb participate in this experiment, explaining how Artificial Intelligence (AI) will transform the world.
(The three main characters all have appropriate biblical names. Ava is a form of Eve, the first woman; Nathan was a prophet in the court of David; and Caleb was a spy sent by Moses to evaluate the Promised Land. Source: IMDB)
The premise is based in part on the famous [Turing Test], developed by Alan Turing. This is designed to test a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
Movies that depict the bad guys as a particular nationality, ethnicity or religion may be offensive to some movie audiences. Instead, having dinosaurs, monsters, aliens or robots provides a villain that all people can fear equally. This helps movie makers reach a more global audience!
Of course, if robots, androids and other forms of Artificial Intelligence did exactly what humans expect them to, we would not have the tense, thrilling action movies to watch on the big screen.
This is not a complete list of movies. Enter in the comments below your favorite movie that features Artificial Intelligence and why it is your favorite!
(As IBM is focused on its transformation from a "Systems, Software and Services" company to a "Cognitive Solutions and Cloud Platform" company, it seems appropriate to highlight my 1,000th blog post on the concept of cognitive solutions.)
A lot of people ask me to explain what exactly does IBM mean by "cognitive", which is a fair question. Let's start with the [Dictionary definition]:
of or relating to cognition; concerned with the act or process of knowing, perceiving, etc.
of or relating to the mental processes of perception, memory, judgment, and reasoning, as contrasted with emotional and volitional processes.
What exactly does IBM mean by Cognitive? IBM has taken this definition, and focused on four key strategic areas:
In the summer of 1981, I debugged a "Pascal" compiler at the University of Texas at Austin. I wasn't told that was what I was doing. Rather, I was tasked with writing sample Pascal programs that would demonstrate the features and capabilities of the language.
Every day, I would come up with a concept of a program, punch up the cards, run it through the CDC hopper, and verify that it would work properly. If I didn't have it working by lunch, I would take it to the "help desk", they would look it over, and tell me how to fix it after I got back.
Most of the time, it was a mistake in my software. A few times, however, it was a flaw in the compiler itself. My programs were basically test cases, and the Pascal Compiler development team was fixing or enhancing the compiler code every time I had a problem.
Compilers basically work by parsing the program text, looking for fixed keywords that are entered in a specifically prescribed order to make sense. Other keywords may represent data types, variables, constants or pre-defined macros.
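As a toy illustration of that first parsing step, here is a minimal tokenizer that classifies words as keywords, identifiers, numbers or symbols. The keyword list is purely illustrative, not the actual Pascal reserved-word set:

```python
import re

# Illustrative keyword set -- NOT the real Pascal reserved words.
KEYWORDS = {"program", "begin", "end", "var", "if", "then"}

def tokenize(source):
    """Split source text into (kind, value) tokens."""
    tokens = []
    for word in re.findall(r"[A-Za-z]+|\d+|[^\sA-Za-z\d]", source):
        if word.lower() in KEYWORDS:
            tokens.append(("KEYWORD", word.lower()))
        elif word.isdigit():
            tokens.append(("NUMBER", word))
        elif word.isalpha():
            tokens.append(("IDENT", word))
        else:
            tokens.append(("SYMBOL", word))
    return tokens
```

A real compiler would then check that these tokens appear in the grammar's prescribed order; that rigidity is exactly what separates it from a cognitive system.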
But compilers are not cognitive. Cognitive solutions can understand natural language, and have to handle all the ambiguity of words not being in the correct order, or different words having different meanings.
As an Electrical Engineer, I had to take many classes on classical analog signal processing. In fact, all computers have some amount of analog components, where threshold processing is used to differentiate a zero (0) from a one (1).
For example, if a "zero" value was represented by 1 volt, and a "one" value by 5 volts, then you can set a threshold at 3 volts. Any voltage less than 3 would be considered a "zero" value, and anything 3 volts or greater a "one" value.
But threshold processing is not cognitive. Cognitive solutions also use thresholds, but their thresholds are dynamically determined, through advanced analytics and statistical mathematical models, and may adjust up and down as needed, based on machine learning over time.
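A minimal sketch of that difference, using made-up voltage samples: a fixed 3-volt cutoff versus a threshold recomputed from recently observed levels. Real cognitive systems use far more sophisticated statistical models; this just shows the shape of the idea:

```python
from statistics import mean

def fixed_threshold(voltage, cutoff=3.0):
    # Classic threshold processing: below the cutoff is a 0 bit, else 1.
    return 0 if voltage < cutoff else 1

def adaptive_threshold(voltage, recent_samples):
    # Toy "learned" threshold: the mean of recently observed levels,
    # so the cutoff drifts as the observed signal drifts.
    cutoff = mean(recent_samples)
    return 0 if voltage < cutoff else 1
```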
IBM Research is proud to have developed the world's most advanced caching algorithms for its storage systems. Cache memory is very fast, but also very expensive, so offered in limited quantities. Caching algorithms decide which blocks of data should remain in cache, and which should be kicked out.
Ideally, a block in read cache would be kicked out precisely after the last time it was read, with little or no expectation for being read again anytime soon. Likewise, a block in write cache would be destaged to persistent storage precisely after the last time it was updated, with little or no expectation for being updated again anytime soon.
Traditional approach is "Least Recently Used" or [LRU]. Cache entries that were read recently or updated recently, would be placed on the top of the list, and the least referenced would be at the bottom of the list. When space is needed in cache, the entries at the bottom of the list would be kicked out.
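The LRU policy described above can be sketched in a few lines. This is the textbook scheme, not IBM's production caching code:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: the least recently used entry is evicted first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None                     # cache miss
        self.entries.move_to_end(key)       # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # kick out the LRU entry
```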
IBM's [Adaptive Cache Algorithm outperforms LRU]. For example, on a workstation disk drive workload with a 16MB cache, LRU delivers a hit ratio of 4.24 percent while IBM's Adaptive Replacement Cache (ARC) achieves 23.82 percent; for an SPC1 benchmark with a 4GB cache, LRU delivers a hit ratio of 9.19 percent while ARC achieves 20 percent.
But caching algorithms, including IBM's Adaptive Cache, are not cognitive. These algorithms respond programmatically based on the current state of the cache. Cognitive solutions learn, and improve with usage. This is often referred to as "Machine Learning".
The human-computer interface (HCI) has much room for improvement in a variety of areas.
Take for example a snack vending machine. In college, we had assignments to simulate the computing logic of these. We had to interact with the buyer, receive coins entered into the slot--nickels, dimes and quarters representing 5, 10 and 25 cents--determine a total monetary balance, and then dispense snacks of various prices and return an appropriate amount of change, if any. There is even a [greedy algorithm] designed to optimize how the change is returned.
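The greedy change-making algorithm mentioned above is simple to sketch: repeatedly dispense the largest coin that fits the remaining balance. For the US quarter/dime/nickel system the greedy choice happens to be optimal, though that is not true of every coin system:

```python
def make_change(cents, coins=(25, 10, 5)):
    """Greedy change-making: dispense the largest coin that fits, repeatedly.

    Returns a dict mapping coin value -> count of coins dispensed.
    """
    change = {}
    for coin in coins:
        count, cents = divmod(cents, coin)
        if count:
            change[coin] = count
    if cents:
        raise ValueError("cannot make exact change with these coins")
    return change
```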
But vending machines are not cognitive. Like the caching algorithms, vending machines interact based on fixed programmatic logic, treating all buyers in the same manner. Cognitive solutions can interact with different users in different ways, customized to their needs, and these interactions can improve over time, based on machine learning.
IBM is exploring the use of Cognitive Solutions in a variety of different industries, from Healthcare to Retail, Financial Services to Manufacturing, and more.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
(Yes, OK, it's actually Thursday. I wrote this post weeks ago, but was embargoed until Jan 10, and then was asked to wait until Jan 12 so that the IBM Marketing team could translate my text into 15 different languages.)
This week, the IBM DS8000 team announces a new High Performance Flash Enclosure (HPFE-Gen2) and a series of All-Flash Array DS8880F models that exploit this new technology.
New High Performance Flash Enclosure (HPFE-Gen2)
The original HPFE was 1U high with 16 or 30 flash cards, and could support RAID-5 or RAID-10. Most used RAID-5, resulting in four array sites of 6+P each, leaving two cards for spare. These 1.8-inch cards were only 400 or 800 GB in size, so the maximum raw capacity was only 24TB per 1U enclosure.
The new HPFE-Gen2 enclosure is a complete re-design, consisting of two Microbays and two TeraPacks. The I/O Bays attach to the Microbays via PCIe Gen3. The Microbays in turn attach to both TeraPacks via redundant 6 Gb or 12 Gb SAS.
Each TeraPack holds 24 flash cards. Since the TeraPacks come in pairs, you can install 16, 32 or 48 flash cards per enclosure. Each 16-card set represents two array sites, for a maximum of six array sites per HPFE-Gen2.
RAID-5 for 400/800 GB. Two 6+P arrays, four 7+P arrays, and two spares.
RAID-6 for 400/800/1600/3200 GB. Two 5+P+Q arrays, four 6+P+Q arrays, and two spares.
RAID-10 for 400/800/1600/3200 GB. Two 3+3 arrays, four 4+4 arrays, and four spares.
(Technically, these new "Flash cards" are 2.5-inch Solid State Drives (SSD) placed into the HPFE Gen2 and connected to the PCIe Gen3 interface, with 50 percent additional capacity to tolerate up to 10 drive writes per day (DWPD). IBM will continue to call them "Flash Cards" for naming consistency between the two generations of HPFE.)
The new HPFE-Gen2 enclosures are substantially faster, offering up to 90 percent more IOPS, and up to 268 percent more throughput (GB/sec). The Microbays use a new flash-optimized ASIC to perform the RAID calculations.
New All-Flash Array DS8880F models
IBM introduces the DS8884F, DS8886F and DS8888F that are based entirely on the HPFE-Gen2 enclosures described above.
Hybrid - HDD/SSD/HPFE mix
Hybrid - HDD/SSD/HPFE mix
AFA - HPFE only
AFA - HPFE-Gen2 only
AFA - HPFE-Gen2 only
AFA - HPFE-Gen2 only
New zHyperLink connection
Also, as a "Statement of Direction", IBM intends to deliver field upgradable support for zHyperLink on existing IBM System Storage DS8880 machines for connection to z System servers. zHyperLink is a short-distance, mainframe-attach link designed for lower latency than High Performance FICON.
Typical latency with FICON/zHPF is around 140-170 microseconds, and this new zHyperLink is estimated to reduce this down to 20-30 microseconds, but is limited to 150 meter fiber optic cable distance. zHyperLink is intended to speed up DB2® for z/OS® transaction processing and improve active log throughput.
Last month, I had the pleasure of helping train Watson on its latest mission: answering questions from sellers. These are not just the IBM feet on the street, but IBM distributors and IBM Business Partners as well.
"... [survey by SearchYourCloud] revealed 'workers took up to 8 searches to find the right document and information.' Here are a few other statistics that help tell the tale of information overload and wasted time spent searching for correct information -- either external or internal:
'According to a McKinsey report, employees spend 1.8 hours every day -- 9.3 hours per week, on average -- searching and gathering information. Put another way, businesses hire 5 employees but only 4 show up to work; the fifth is off searching for answers, but not contributing any value.' Source: [Time Searching for Information]
'19.8 percent of business time -- the equivalent of one day per working week -- is wasted by employees searching for information to do their job effectively,' according to Interact. Source: [A Fifth of Business Time is Wasted]
IDC data shows that 'the knowledge worker spends about 2.5 hours per day, or roughly 30 percent of the workday, searching for information ... 60 percent [of company executives] felt that time constraints and lack of understanding of how to find information were preventing their employees from finding the information they needed.' Source: [Information: The Lifeblood of the Enterprise]."
In the early days of the Internet, before search engines like Google or Bing, I competed in [Internet Scavenger Hunts]. A dozen or more contestants would be in a room, and would be given a list of 20 questions to find answers for. Each of us would then hunt down answers on the Internet. The person who found the most documented answers before time ran out won. It was quite the challenge!
Over the years, I have honed my skills as a [Search Ninja]. With over 30 years of experience in IBM Storage, many sellers come to me for answers. Sometimes sellers are just too lazy to look for the answers themselves, too busy trying to meet client deadlines, or too green to know where to look.
A good portion of my 60-hour week is spent helping sellers find the answers they are looking for. Sometimes I dig into the [SSIC], product data sheets, or various IBM Redbooks.
Other times, I would confer with experts, engineers and architects in particular development teams. Often, I learn something new myself. In a few cases, I have turned some questions into ideas for blog posts!
It was no surprise when I was asked to help train Watson for the new "Systems SmartSeller" tool. This will be a tool that runs on smartphones or desktops to help sellers answer questions when responding to RFPs or other client queries.
The premise was simple. Treat Watson as a student at "Cognitive University" taking classes from dozens of IBM professors, in a series of semesters, or "phases".
Phase I involved building the "Corpus", the set of documents related to z Systems, POWER systems, Storage and SDI solutions; and a "Grading Tool" that would be used as the Graphical User Interface. I was not involved in phase I.
Phase II was where I came in. Hundreds of questions were categorized by product area; I worked on 500 questions for storage. For each question, Watson offered up to eleven different responses, typically a paragraph from the Corpus. My job as a professor was to grade these responses:
★ (one star)
Irrelevant, answer not even storage-related
★★ (two stars)
Relevant, at least it is storage-related, but does not answer the question, or answers it poorly
★★★ (three stars)
Relevant, adequately answers the question
★★★★ (four stars)
Relevant, answers the question well
Most of the answers were either 1-star (not storage related) or 2-star (mentioned storage, but poor response). I would search through the existing Corpus looking for a better answer, and at best found only 3-star responses, which I would add to the list and grade as a 3-star response.
I then searched the Internet for better answers. Once I found a good match, I would type up a 4-star response, add it to the list, and point it to the appropriate resources on the Web.
Other professors, who were also looking at these questions, would then get to grade my suggested responses as well. Watson would learn based on the consensus of how appropriate and accurate each response was graded.
I don't know where the Cognitive University team got some of the questions, but they were quite representative of the ones I get every week. In some cases, the seller didn't understand the question heard from the client, making it difficult for me to figure out what the client was actually asking for.
It reminds me of that parlor game ["Telephone" or "Chinese Whispers"], in which one person whispers a message to the ear of the next person through a line of people until the last player announces the message to the entire group. I have actually played this at an IBM event in China!
Watson needs to parse each question into nouns and verbs, and use Natural Language Processing (NLP) to search the Corpus for an appropriate answer. I identified three challenges for Watson in this case:
The questions are not always fully formed sentences. For example, "Object storage?" Is this asking what is object storage in general, or rather what does IBM offer in this area?
The questions often do not spell the names of products correctly, or use informal abbreviations. "Can Store-wise V7 do RtC?" is a typical example, short for "Can the IBM Storwize V7000 storage controller perform Real-time Compression?"
The questions ask what is planned in the future. "When will IBM offer feature x in product y?" I am sorry, but Watson is not [Zoltar, the fortune teller]!
I managed to grade the responses in the two weeks we were given. Part of my frustration was that the grading tool itself was a bit buggy, and I spent some time trying to track down its flaws.
The next phase is in late January and February. This will give the Cognitive University team a chance to update the Corpus, improve the grading interface, find more professors, and prepare a different set of questions. I volunteered the most recent four years' worth of my blog posts to be added to the Corpus.
Maybe this tool will help me turn my 60-hour week back to the 40-hour week it should be!
Fellow blogger Chris Mellor from The Register has an interesting post titled [It's a ratchet: Old storage guard face incoming tech squeeze]. Chris opines that the big traditional storage vendors -- which he refers to as the "old guard": Dell EMC, HDS, HPE, IBM and NetApp -- are being squeezed out by startups with new technologies.
Last week, I saw the play [Fiddler on the Roof], a musical production by Arizona Theater Company (ATC), and thought of various parallels with Chris's post.
For those not familiar, the story centers around a father named Tevye and his wife trying to stick to tradition, with five daughters who are open to breaking with tradition to get married. The family lives in a small rural town, back in a time long ago when people were persecuted for their religious and ethnic background. Aren't you glad we live in [more enlightened times]!
Back to Chris Mellor, he writes in his post:
"This old guard has so far failed to squash newcomers in the all-flash array, hyperscale, object and software-defined storage areas. This is despite the established firms adopting these technologies and acquiring some startups."
Should the old guard try to squash newcomers? Often, these startups provide much needed innovations that move the IT industry forward.
In the play, Tevye wants to stick to tradition, whereby the town's matchmaker would find a husband for each daughter, and he, as father of each bride, would then provide his permission and blessing to the match.
Obviously, these startups are neither asking the old guard for their permission nor their blessing. While I can't speak for the rest of the "old guard", IBM is leading in these various spaces. Let's look at each of these new trends.
All-Flash Arrays (AFA)
The category of "All-Flash Arrays" includes both purpose-built hardware and traditional devices based on solid-state drives (SSD). While the R&D investment needed for purpose-built hardware can limit this to some of the largest vendors, nearly any startup can slap commodity SSD into traditional HDD controllers and call it AFA.
IBM offers the world's fastest AFA, and has been a leader in the AFA category for the past three years, investing over $1 Billion USD on its FlashSystem, DS8000, Elastic Storage Server (ESS), SVC and Storwize product families.
Software-Defined Storage (SDS)
While the definition for SDS is still in a bit of flux, IDC has tried to identify three characteristics:
Storage software stack that can be installed on commodity resources (x86 hardware, hypervisors, or cloud) and/or off-the-shelf computing hardware
SDS should offer a full suite of storage services
Federation between the underlying persistent data placement resources to enable data mobility of its tenants between these resources
IBM has been ranked [Number 1 in Software Defined Storage] for several years now, investing over $1 Billion USD in its IBM Spectrum Storage family. This collection of software is implemented in a variety of offerings, including pre-built systems, software that you can deploy on commodity off-the-shelf servers, and in the Cloud.
Object storage breaks tradition with block and file-based storage solutions. Rather than reading and writing files using POSIX, NFS or SMB protocols, objects are accessed via HTTP GET and PUT requests. The two most common protocols are Amazon S3 and OpenStack Swift.
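To show the shape of that access model, here is a toy in-memory object store: whole objects are written with PUT and read back with GET, addressed by bucket and key, with no byte-range updates or POSIX file semantics. Real S3 or Swift clients speak HTTP, of course; this only mimics the interface:

```python
class ToyObjectStore:
    """In-memory sketch of the object PUT/GET access model (not a real client)."""

    def __init__(self):
        self.buckets = {}

    def put(self, bucket, key, data):
        # Objects are written whole; a PUT replaces any prior version.
        self.buckets.setdefault(bucket, {})[key] = bytes(data)

    def get(self, bucket, key):
        # Objects are read back whole by bucket and key.
        return self.buckets[bucket][key]
```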
Object storage is ideal for static and stable data that either never changes, or changes infrequently. A lot of new workloads are based on unstructured data that falls in this category, such as Big Data Analytics, High-performance Computing (HPC), and active archives.
In the latest IDC MarketScape, [IBM is ranked #1 in Object Storage]. IBM actually has three software-defined storage offerings that support object access methods: IBM Spectrum Scale, IBM Spectrum Archive, and IBM Cloud Object Storage System. The latter comes from the 2015 acquisition of Cleversafe.
"Hyperscale leverages commodity servers and a software-defined approach, scaling the resources needed for applications and storage separately. As storage needs grow, companies can add servers running software-defined storage (SDS) to the storage tier to expand capacity... Data is automatically distributed across the entire cluster of storage servers as new nodes are added to the system... With hyperscale, ... cluster nodes network together to form a storage resource pool."
This breaks from the tradition of dual-controller high-end arrays, which scale-up, rather than scale-out. IBM offers its IBM Spectrum Accelerate, IBM Spectrum Scale, and IBM Cloud Object Storage System to fill this hyperscale requirement.
In the play, Tevye realizes the world is changing all around him: he can either fight these changes and stick to tradition, or accept that he must change also, and move on. After 105 years, IBM continues to lead the IT industry, primarily by adopting new trends and technologies, moving to new business opportunities as they present themselves.
IBM is doing a bit of year-end housekeeping. The Storage Community (storagecommunity.org) will be discontinued as of January 1, 2017.
IBM will continue to host a community for all of its followers and contributors to share insights on the latest trends in storage at [ibm.co/StorageSolutions].
All of the most recent IBM content from storagecommunity.org will now be available at this new domain. IBM hopes that you will continue to engage in its community of storage industry thought leaders.
If you would like to contribute to the new community, please [register here]. Simply click the silhouette icon in the top right-hand corner of the page and select "register." Input your email address and create a password, then sign in. You will receive an email from IBM with further instructions to get you set up.
IBM's twitter handle (@SmarterStorage) will also be sunset as of January 1, 2017, but I encourage you to follow @IBMStorage, or my own twitter handle @az990tony, for the latest storage news and announcements from IBM.
Last Thursday, Dec 15, I had the pleasure to present to 162 clients and IBM Business Partners, followed by the premiere showing of [Rogue One, a Star Wars movie]!
(FCC Disclosure: I work for IBM. This blog post can be considered a "paid celebrity endorsement" for IBM products and services. I have no financial interest in Lucasfilm Ltd, or its parent company Disney, LEGO company, or any competitor mentioned in this post. I was not compensated to review this film or mention it on my blog. All graphics from the film used in this blog and related presentation were publicly available under the U.S. "fair use" doctrine. There are no spoilers in this blog, so keep reading!)
This event was a collaboration between:
Arrow, one of IBM's distributors
Corus360, an IBM Business Partner
Regal Medlock 18, a theater with comfy seats with a bar that serves beer and wine
As a public speaker for IBM, I get to travel all over the world, and throughout the United States. This trip wraps up my travel for 2016, with 34 weeks on the road!
Normally, when I am asked to present, I am given a list of products or topics to cover. This time, I was just given the title "Has Your Data Gone Rogue? -- Using IBM Flash and solutions to obtain enhanced business insights" and the suggestion to keep within the theme of Star Wars.
I had 45 minutes to cover whatever I thought would be of interest to the clients in the audience, which spanned a variety of industries, from Healthcare and Financial services to Retail and Manufacturing.
I turned to mind-mapping software to brainstorm some ideas. On my smartphone, I use an app called [SimpleMind], and on my laptop, I use [View Your Mind (vym)]. Here is what I came up with:
I arrived to the theater early to setup and mingle with the clients in the lobby. The sponsors that organized this event had gifts to raffle off, including two drones, and three Star Wars themed LEGO sets.
I was told to be done by 7:30pm. It turns out that the movie is streamed electronically, rather than having the actual media distributed physically to the theaters, as a way to prevent piracy.
My PowerPoint charts were in 16:9 format to fill the screen. This was perhaps the biggest screen I had ever presented on! I look so tiny in comparison!
IBM has been a leader in all-flash arrays for the past three years in a row, and as an IBM Business Partner, Corus360 has been one of our top sellers in the Southeastern United States. IBM offers a wide array of choices, from DS8000 to FlashSystem to the new [IBM DeepFlash Elastic Storage Server (ESS)].
Rebels are inquisitive. IBM is considered number one in Analytics. For every type of question, IBM has analytics to help answer. Here are some examples:
What is happening? -- Descriptive Analytics
Why did this happen? -- Diagnostic Analytics
What might happen next? -- Predictive Analytics
What actions should we take? -- Prescriptive Analytics
I focused on the use of Hadoop and Spark with the [IBM Spectrum Scale] software pre-installed on the DeepFlash ESS device. The DeepFlash ESS combines powerful POWER8 servers with the DeepFlash 150, a 3U-high JBOF that holds up to 64 solid-state boards of 8 TB each, optimized for analytics of unstructured data content.
Spectrum Scale is supported on any open source distribution of Hadoop and Spark, and is an optional add-on to [IBM BigInsights]. The [IBM HDFS Transparency Connector] offers 100 percent compatibility, allowing Hadoop and Spark analytics programs to run directly without modification.
To provide valuable insight to the storage environment itself, IBM offers IBM Spectrum Control. The newest edition is [IBM Spectrum Control Storage Insights], a Software-as-a-Service (SaaS) that charges on a monthly per-capacity basis. Perfect for the Rebel Alliance on a tight budget and schedule!
The Galactic Empire has a different set of problems. They are behind schedule, having worked on the Death Star for the past 20 years, and upper management is growing impatient. A major test is imminent to prove its progress.
To speed development and test efforts, IBM offers a variety of FlashSystem products:
IBM FlashSystem 900
the World's Fastest Storage®, roughly 5 to 10 times faster than competitors based on commodity Solid State Drives (SSD) like Dell EMC XtremIO and PureStorage.
IBM FlashSystem V9000
adds the robust functionality of IBM Spectrum Virtualize, with Real-time Compression, Thin Provisioning, FlashCopy snapshots, and remote mirroring. Like the IBM SAN Volume Controller and Storwize family of products, the FlashSystem V9000 can virtualize almost 400 different storage devices from a variety of vendors.
IBM FlashSystem A9000 and A9000R
add the robust functionality of IBM Spectrum Accelerate, offering Real-time compression and data deduplication, making it ideal for Cloud, Virtual Machine and Virtual Desktop deployments.
As we learned in Episodes I to III of the Star Wars saga, a big problem was too many clones. The IBM Spectrum Storage family has introduced its newest member: IBM Spectrum Copy Data Management. This software creates and catalogs database clones to help with development and test efforts, reducing the number of rogue copies.
Lastly, the Empire must keep its secrets safe and protected. I covered the basics of data-at-rest encryption, the use of symmetric and asymmetric keys, [IBM Security Key Lifecycle Manager (SKLM)], and how these are deployed on IBM flash, disk and tape products.
Then, we watched the movie. I found it quite entertaining!
Well, it's Tuesday again, and you know what that means? IBM Announcements!
I just got back from my vacation, so this is a guest post from my colleagues Moshe Weiss, Senior Manager, Development and Design, IBM Storage; and Diane Benjuya, Portfolio Marketing Manager for IBM Spectrum Accelerate.
1. What is IBM announcing?
Today IBM announces another leap forward in storage management, with the availability of IBM Hyper-Scale Manager version 5.1. In April 2016, when IBM announced IBM FlashSystem A9000 and A9000R, they also introduced a fully revamped GUI: IBM Hyper-Scale Manager 5.0. That version brought FlashSystem A9000/A9000R clients a terrific new storage management experience, with advanced look and feel, analytics tools, and other enhancements for managing smarter, with greater simplicity and in less time.
Hyper-Scale Manager 5.x dramatically reduces task time -- by 45% for this task
With Hyper-Scale Manager 5.1, IBM is bringing this exceptional GUI and unified user management experience across the entire set of Spectrum Accelerate-based products, which IBMers internally refer to as the "A family":
IBM Spectrum Accelerate software
IBM XIV Storage System
IBM Hyper-Scale Manager lets you view and move quickly across software-defined, disk based, and all-flash storage in seconds, equipping you with the information you need to ensure every application is performing at its peak.
2. What is innovative about the new GUI -- how does it help clients?
IBM Hyper-Scale Manager 5 makes storage management more insightful and easier in multiple ways, helping clients find information, act, and troubleshoot faster. Concepts implemented include: a web application with tablet-ready design, single-page application, strong navigation scheme, smart filter with analytics, capacity trend/forecast, call for action, and better communication using social media. All this helps users make fast, informed decisions while seeing at a glance the impact of any change on the environment, including into the future. The IBM team designed it over the past three years, working closely with clients and using Design Thinking methodology.
Get a holistic view of your storage
Provisioning, Monitoring and Troubleshooting
Find everything, get anywhere
Call for action!
The IBM team applied an "emotional design" approach that makes users feel emotionally attached to the GUI for its coolness and elegance -- making the experience not just more productive but also more pleasant.
Version 5.1 brings many exciting and important new features to ease the client's day to day activities. Here are some key ones:
Managing your "A Family" in one UI
Instantly gain insights, spot problematic areas
Integrated Capacity Analytics
4. Any unique features that will be focused on?
The IT industry is entering a cognitive era, right? So IBM has brought cognitive into the GUI. The GUI actually learns each user's habits and preferences over time and adapts the experience to the specific user.
5. How does 5.1 add value to the family of products based on Spectrum Accelerate software?
Hyper-Scale Manager makes this powerful family for private, public, and hybrid block storage clouds that much more attractive and relevant. Just imagine yourself:
Waking up, driving to the office, opening the UI and seeing that your FlashSystem A9000 systems are doing worse than your XIV in terms of IOPS. Scary, but no worries.
You drill down to the specific FlashSystem A9000 by comparing IOPS. You find that a QoS performance class is deliberately reducing performance for the host. A quick analysis, and you find that it is due to the contract with the host. After a short chat with the host admin, you establish better terms, and decide to stop the IO limitation on the volumes and move them to a disk-based XIV to reduce dollar-per-TB cost.
You look for the best candidate by examining the capacity trend/forecast charts for each XIV and the growth rate per month. You compare performance metrics and choose the preferred XIV to move the volumes to.
You migrate the volumes from the A9000 to the chosen XIV using the same interface, creating connectivity in one click. You then add the same host configuration as for the A9000 to the XIV in a second click. Then just map and monitor the new IO statistics with a third click. Easy!
Imagine carrying out your daily work and decisions -- creating volumes, monitoring, mirroring, troubleshooting and configuring -- across different systems of different types within the family in single clicks -- without the need to move between user interfaces. You can think of Hyper-Scale Manager 5.1 as a GUI come alive: a dynamic, breathing, thinking work enhancer that simplifies and helps you make the most of your investment.
Come see it in action! Register now for the [Live demo webinar], scheduled for Wednesday, November 9, 2016, from 10am to 11:30am MST!
Download the software from [IBM Fix Central]; installation is one click and takes just seconds!
Here is an infographic!
Comments? Feedback? Enter them below. Both Moshe and Diane would be pleased to hear from you!
Well, it's Tuesday again, and you know what that means? IBM Announcements!
(OK, yes, today is Friday, but I was busy getting married on Tuesday, so IBM pushed the announcements out one day to Wednesday, and technically I am writing this blog post during my honeymoon vacation, so the IBM marketing team and my new wife both cut me some slack. Work/Life balance is all about compromises, right?)
IBM DS8880 Storage System
The IBM DS8880 comes in three models, the DS8884 entry level, the DS8886 enterprise level, and the DS8888 all-flash array. IBM offers 1, 2, 3 and 4 year warranties.
The new High Performance Flash Enclosure (HPFE) Gen2 delivers more capacity than Gen1. The 2U flash enclosures are configured in pairs, with each enclosure supporting up to twenty-four 2.5-inch flash cards in capacities of 400 GB, 800 GB, 1.6 TB and 3.2 TB.
The HPFE Gen2 is currently available for both the DS8884 and DS8886 models. The maximum flash capacity for the DS8886 increases from 96 TB to 614.4 TB, delivering reduced storage costs through a lower cost per IOPS with this new flash enclosure. IBM has made a statement of direction to offer the HPFE Gen2 on the DS8888 as well.
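As a back-of-envelope check, the 614.4 TB maximum is consistent with four fully populated enclosures of 48 cards each at the largest 3.2 TB card size. This is my own arithmetic from the figures above, not an official IBM configuration table:

```python
# My own arithmetic from the announced figures -- not an IBM config table.
cards_per_enclosure = 48   # two TeraPacks of 24 flash cards each
card_capacity_tb = 3.2     # largest announced flash card size
enclosures = 4             # assumed maximum HPFE Gen2 enclosures on a DS8886

max_flash_tb = enclosures * cards_per_enclosure * card_capacity_tb
print(round(max_flash_tb, 1))  # 614.4
```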
To improve security, IBM DS8880 now supports customer-defined digital certificates for authentication, and configurable Hardware Management Console (HMC) firewall support.
For IBM's mainframe clients, IBM now offers "Extents-level" space release support for z/OS®, DSCLI (Command Line Interface) support for z/OS environment, and FICON® Information Unit (IU) pacing improvements.
IBM Spectrum Virtualize™ V7.8 delivers support for the latest SAN Volume Controller, FlashSystem V9000 and Storwize® product family, and adds new software functionality and improvements.
In conjunction with [IBM Spectrum Copy Data Management], Spectrum Virtualize v7.8 offers flexible data protection with transparent cloud tiering to leverage the cloud as FlashCopy targets and restore these snapshots from the cloud on select platforms.
Previously, the encryption keys were kept on USB thumb drives, which were either left in the USB ports on the back of the hardware, or locked away in a safe, only to be retrieved as needed when rebooting the systems or upgrading the firmware.
Now, IBM Spectrum Virtualize v7.8 supports the IBM Security Key Lifecycle Manager (SKLM) to manage encryption keys. IBM continues to support USB thumb drives if you prefer, but SKLM is used to manage keys for most of the rest of IBM products, and provides centralized management.
The SVC and Storwize models can directly attach expansion drawers via 12Gb SAS. Until now, IBM offered a 2U-high 12-bay drawer that supports Large Form Factor (LFF) 3.5-inch Nearline (7200 rpm) drives, and a 2U-high 24-bay drawer that supports Small Form Factor (SFF) 2.5-inch drives (SSD, 15K, 10K and 7200 rpm).
With Spectrum Virtualize v7.8, IBM now offers a third option, the 5U-high 92-bay that supports both LFF and SFF drives. This new expansion can be attached to Storwize V5000 Gen2, Storwize V7000 (models 524/Gen2 and 624/Gen2+), and SVC (models DH8 and SV1).
For the 12-bay and 92-bay, IBM now supports 10TB capacity 3.5-inch Nearline drives. For the 24-bay and 92-bay, IBM now supports 7.68 TB and 15.36 TB capacity Solid State Drives (SSD).
For those concerned about the phrase "lower endurance" in the press release, let me explain. SSDs include a bit of extra spare capacity. If you write the full capacity of the drive every day for a year, you will "burn up" about one percent of the capacity.
To handle ten "Full Drive Writes per Day" (10 FDWP) over the course of five years, IBM adds 50 percent extra spare capacity above the 400 GB, 800 GB, 1.6 TB and 3.2 TB capacities. So, a 400GB full-endurance drive is really 600 GB inside. These were sometimes referred to as "Enterprise" SSD.
For the larger device sizes, the IT industry has determined that 1 FDWP is sufficient, so instead of 50 percent spare capacity, IBM adds only 5 percent extra. These were earlier referred to as "Read-Intensive" SSD, and come in 1.92 TB, 3.84 TB, 7.68 TB and 15.36 TB capacities; for example, the 7.68 TB drive is really 8.06 TB inside.
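The over-provisioning arithmetic above can be sketched in a few lines of Python. The 50 and 5 percent figures come from the text; actual drive internals vary by vendor and model:

```python
# Rough sketch of SSD over-provisioning arithmetic.
# Full-endurance ("Enterprise", 10 FDWP) drives carry ~50% extra capacity;
# read-intensive (1 FDWP) drives carry ~5% extra.
def internal_capacity(nominal_tb, full_endurance=True):
    """Return the approximate physical capacity inside the drive, in TB."""
    overprovision = 0.50 if full_endurance else 0.05
    return round(nominal_tb * (1 + overprovision), 2)

print(internal_capacity(0.4))          # 0.6  -> a 400 GB drive is 600 GB inside
print(internal_capacity(7.68, False))  # 8.06 -> a 7.68 TB drive is ~8.06 TB inside
```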
IBM is also offering non-disruptive model conversions. Storwize V5010 can now be converted to V5020, and V5020 can be converted to V5030. The Storwize V7000 Model 524 (Gen2) can be converted to model 624 (Gen2+).
The DeepFlash 150 is a perfect JBOF addition to the ESS family. The current ESS models have either 2U-high 24-drive bays or 4U-high 60-drive bays. This new model is 3U-high with 64 high-capacity (8 TB) Board Solid State Drives (BSSD).
The ESS includes all the features of IBM Spectrum Scale, including both 8+2 and 8+3 Erasure Coding data protection. This provides file and object access to data, including POSIX compliance for Windows, Linux and AIX operating systems, as well as HDFS-compliant access for big data analytics.
Last month, I presented at the "IBM Technical University" event in beautiful Atibaia, Brazil. Here is my recap of the event.
Marcelo Porto, IBM General Manager for Brazil and Client Unit Executive for Retail
What a great way to start a conference! Marcelo asked if everyone was comfortable. Everyone cheered in the affirmative.
He then said "Well, not for long. We will take you out of your comfort zone! You will disrupt yourself, and disrupt your companies. You will learn about new technologies and solutions that will make you very uncomfortable."
He explained how everything is going virtual, citing three companies: Airbnb, Waze, and Uber. All three have new, transformational business models, and he suggested all companies should follow suit.
He then said people need to be focused on four things:
Adopting an "agile attitude"
Act like you own the company
Don't cling to the past
Have the courage to re-invent yourself and your company
Frank Koja, IBM Vice President for Sales, Enterprise Systems Hardware
(Managers and business leaders could probably raise this percentage considerably if they talked to their employees before making decisions, but that's another blog post!)
Frank showed a video of an IBM client, Plenty of Fish (POF). This is a worldwide dating site with three million POF members in Brazil. They now process over 30,000 requests and/or messages per minute. FlashSystem connected to 30 servers makes that possible.
The OpenPOWER consortium started with just five companies in 2014 for technology collaboration. Today, 250 members across 26 countries on six continents collaborate to make POWER technology as ubiquitous a commodity as Intel x86.
Frank then switched to innovation in business models. Of the audience of about 800 people, only about 10 raised their hands when asked who had heard of Blockchain (he asked IBMers not to raise their hands, since all IBMers have heard of Blockchain!).
Frank feels that Blockchain is the most disruptive innovation since Internet banking. Blockchain affects supply chain, finance, insurance, shipping logistics, customs inspections, and government registrations.
A video showed a woman from Everledger, which uses Blockchain for shipping diamonds. IBM offers Blockchain on LinuxOne mainframe servers.
Hybrid Cloud is a point of no return, encompassing Local, Dedicated, and Public clouds. Frank feels we need to cloudify all business processes.
Mauro Angelo, IBM Enterprise Strategy & Industry Solutions Director
Mauro explained that ideas are turned into inventions, and inventions are put to good use to bring forth innovations.
If your business is not cognitive, you are a full era behind. Machine learning is not new; IBM Deep Blue beat chess Grandmaster Garry Kasparov back in 1997.
Mauro then focused on eight specific trends:
Systems of Engagement (SoE)
This is the combination of Mobile applications and Social business. IBM invented the first smartphone, the Simon, back in 1994; Apple's iPhone came later, in 2007. Pokemon Go is an example of augmented reality.
Cloud offers new service and location models. IBM [SoftLayer], [Bluemix], and [Kenexa] are a few examples.
There have been a lot of enhancements in this space, including Natural Language Processing (NLP), visual recognition, and even smell recognition. Cognitive solutions can also identify the appropriate context, such as GPS location, and can interact with users to ask for clarifications. They can process "Big Data", the collection of unstructured data that normal Relational Database Management Systems (RDBMS) do not touch. Finally, they can learn, something often referred to as "Machine Learning".
In 2011, IBM Watson beat two human champions on the TV game show Jeopardy! Today, [Dino, a toy from CogniToys] provides Watson-like capabilities to children.
Mauro got one for his daughter. She naturally interacts with the toy. "How much does an elephant weigh?" she asks. "It depends on the elephant, but a fully grown elephant weighs more than 2,000 kilos," it responds. That's cool.
Wearables like Fitbit can track blood pressure, minutes of exercise, total steps walked. IBM helped Under Armour company develop an app in this space.
Blockchain eliminates the middleman, or trusted third party (TTP). The hotel chain Hilton is testing out a robot called Pepper, which can use Blockchain to book tennis courts.
Nanotechnology involves materials thinner than a strand of hair, measured in nanometers. The focus is to develop stronger, lighter materials, and macromolecules for medicine delivery in the life sciences.
With 3D printing, mass customization meets personalization and fast design prototyping. This is not limited to plastic: printers also handle metal, paper, wood, biomaterials, ceramics, food, and even cement.
Cement? That's right. A Chinese company prints houses using a cement 3D printer. In a country of over one billion people, this company has figured out how to build houses without human laborers.
Internet of Things (IoT)
Olli, a 12-person self-driving bus, is the brainchild of Local Motors. They are testing it out in National Harbor, and hope to roll it out to cities like Copenhagen, Miami, and Las Vegas.
Luis Liguori, IBM Distinguished Engineer and CTO for IBM Brazil
What does IBM mean by "Digital transformation?" What separates success from failure? Developed countries from less developed countries?
Is it culture? Whether people focus on the long term, or just the short term? Does the culture encourage you to foresee the future, and adapt accordingly? Does the culture encourage you to be brave and bold? Do you hide behind Business case return on investments (ROI)? Does your culture consider conflict to be good or bad? The answer: Good!
Does your company have a purpose? When humans no longer serve purpose, they die. The same is true for companies. He said the secret to success is the four "R's" -- Relevant, Resources, Reputation and Rigor.
For example, Kodak was ranked the 4th largest brand in 1996, yet it filed for bankruptcy in 2012 because it was no longer relevant.
Consider Samsung. Samsung has lost its reputation with the latest "Samsung Galaxy Note7" fiasco of exploding batteries!
Airbnb is an example of Digital Transformation. Who knew that there were lots of people who wanted to rent out their bedrooms and bathrooms to strangers!
Luis feels that successful companies are either born digital or are transforming to digital. Industries are merging, and the lines between them are blurring. The recent AT&T acquisition of Time Warner is an example.
Cognitive brings intelligence to decision making. For example, Watson Health has been put to task on Leukemia. In one case, Watson was able to [pinpoint a rare form of Leukemia] that had been misdiagnosed and was being treated incorrectly, with little effect.
Why cognitive? Because human beings cannot read or remember as well as computers. There are thousands of peer-reviewed articles published every day. People are afraid to act to avoid mistakes. Computers are fearless.
Did you know that Brazil celebrates "Black Friday"? There is no "Thanksgiving" in Brazil, but retailers liked the idea of having people stand outside in the middle of the night to start their Christmas shopping! A few years ago, there were [a few problems], but in most recent years, it has shown to help [boost retail sales.] Based on these initial purchases, Watson can be used to help drive the rest of the Christmas retail season.
Watson can analyze personality based on social media writings. The world will be taken over by digital natives. The last century was focused inward, or "ego-centric", but in this 21st century, we will be focused outward, towards a complete "ecosystem".
Who are your competitors? Are they the companies that make products and services similar to yours? No! They are the companies that are competing for your customer's time and attention.
While I speak English and Spanish fluently, my Brazilian Portuguese is terribly rusty. We had several rooms with a pair of real-time translators. I presented the following:
Software Defined Storage -- Why? What? How?
The Pendulum Swings Back -- Understanding Converged and Hyperconverged Environments
IBM Spectrum Scale for File and Object Storage
IBM Storage integration with OpenStack
Introduction to IBM Cloud Object Storage System and its Applications (powered by Cleversafe)
IBM's Cloud Storage Options
All of my sessions were well received, and well attended!
Photo by Dominique Salomon, IBM Certified IT Specialist
On Wednesday night, we had a nice pool-side reception with beers, Caipirinhas, and Caipiroskas. Caipirinhas combine a sugarcane-based distilled alcohol called cachaça with muddled limes and added sugar; Caipiroskas substitute vodka, in this case muddled with kiwi fruit.
(Many of the IBMers from the United States skipped this event to get dinner early, so they could come back in time to watch the third and final US Presidential Debate. Because of the time zone difference, it didn't start until 11:00 PM, so they could have easily attended the event and had dinner, with plenty of time to spare!)
There was also a live band! This three-piece band had two guitarists and a lead singer, who also played maracas and drums while singing. They covered both English- and Portuguese-language songs.
Rodrigo Giaffredo, IBM Engagement Catalyst
Rodrigo gave the closing session. Wearing jeans and sneakers, he reminded me of the casual storytelling style of Jeff Jonas. He organized his stories around four points:
Consider the battle between Twitter and Pownce in 2007. Twitter won because it offered better ways to limit what you read and whom you communicate with, through methods like hashtags, groups, and so on.
Henry Ford disrupted transportation. He realized that time and space are money. However, as he famously said, "If I had asked people what they wanted, they would have said faster horses!"
Today the challenge is processing data faster. The company that is able to process faster has economic advantage.
Strong ideas focus on user needs. Weak ideas are tactical and feature-driven. Consider the [Hippo Roller]. For centuries, African women and children carried water from faraway wells either in their hands or on their heads. Much of it would spill during the long walks. The Hippo Roller holds 90 liters (about 24 gallons) and rolls easily over rough terrain.
Rodrigo showed a graph. On the y-axis was "Importance" and on the x-axis "Feasibility". Solutions in the upper right corner are obvious choices. Solutions in the upper left, important but not very feasible, are considered "big bets". Solutions in the lower right, feasible but not very important, he labeled "amenities".
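Rodrigo's grid can be sketched as a tiny classifier. The 0-to-10 scale, the cutoff of 5, and the "discard" label for the lower-left quadrant are my own assumptions for illustration:

```python
# Toy classifier for the importance-vs-feasibility graph.
# Scores and the cutoff of 5 are hypothetical illustration values.
def classify(importance, feasibility, cutoff=5):
    if importance >= cutoff and feasibility >= cutoff:
        return "obvious choice"   # upper right
    if importance >= cutoff:
        return "big bet"          # upper left: important, hard to do
    if feasibility >= cutoff:
        return "amenity"          # lower right: easy, less important
    return "discard"              # lower left (my label, not Rodrigo's)

print(classify(9, 8))  # obvious choice
print(classify(9, 2))  # big bet
print(classify(2, 9))  # amenity
```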
Most designers, architects and developers know that the later the error is found, the more expensive it is to fix. A prototype is worth a thousand meetings.
Take the company Zappos, which sells shoes online over the Internet. The founder, Nick Swinmurn, tried to get investors, and got a typical response: "What are you drinking?" (In the USA, we would ask "What are you smoking?", but this is the way Brazilians say it.)
With no investors, Nick built a simple website, took pictures of shoes, and fulfilled orders by purchasing the shoes from local San Francisco retailers and shipping them to the clients.
Nick started this in 1999, and finally got some $20 Million USD in funding in 2004. His simple prototype allowed him to focus on post-sales support. Zappos was recognized as having the best call center, and moved its operations to Las Vegas, NV.
Consider the challenges of urban mobility. A traditional approach delivers nothing usable until the final product is complete, while an agile approach delivers a working prototype at every step. Both methods eventually result in a car, but the agile prototypes allow for more effective experimental milestones.
As for Zappos, its prototype proved successful. Amazon acquired them for $1.2 Billion USD in 2009.
It is that simple: Understand, explore, prototype, and evaluate. IBM has adopted "Design Thinking" across its development organizations to better meet the needs of the marketplace.
Overall, it was a delightful event. It is nearly summer down in the Southern hemisphere, so a bit warm and humid. The attendees were all looking forward to a turn-around in the Brazilian economy, and the business opportunities that brings.
Well it's Tuesday again, and you know what that means? IBM announcements!
Today, IBM announced a few things related to storage.
IBM Spectrum Copy Data Management
This new member of the IBM Spectrum Storage family helps manage all of those snapshot and FlashCopy images made to support DevOps, data protection, disaster recovery, and Hybrid Cloud computing environments.
The software automates the creation and cataloging of copy data on existing storage infrastructure, such as snapshots, vaults, clones, and replicas. This can be especially useful with Oracle, Microsoft SQL Server, and other databases that are often copied to support application development, testing, and data protection.
Initially, the following storage devices are supported:
IBM storage systems running IBM Spectrum Virtualize™ Software V7.3, and later, including IBM SAN Volume Controller, IBM Storwize®, and IBM FlashSystem® V9000
Storage systems running IBM Spectrum Accelerate™ 11.5.3, and later, including IBM FlashSystem A9000, A9000R, and IBM XIV® and the Supermicro Hyperconverged Appliance
IBM SKLM is IBM's lead offering for creating and managing encryption keys used by various Flash, Disk, Tape and SAN products.
This software release enhances the separation of duties for better alignment with regulatory requirements, simplifying the administrative access, LDAP integration, and device certificate TrustStore management. Device-group key import and export improves the flexibility in key management across multiple organizations.
For those using Hardware Security Modules [HSM], this software now offers HSM-based backup and restore of the encryption key database.
IBM is also enhancing its support of the Key Management Interoperability Protocol [KMIP], an industry standard to support encryption keys and the products that use them. This release now supports integration with any KMIP-compliant device from any vendor, including the introduction of KMIP Opaque and Suite B profiles.
IBM Storage Networking MDS 9000 24/10-port SAN Extension Module
The new MDS 9000 24/10-port SAN Extension Module is supported on MDS 9700 Series Multilayer SAN Fabric Directors. It supports 24 Fibre Channel ports (auto-negotiating at 2/4/8/10/16 Gbps), eight 1/10 GbE Fibre Channel over IP (FCIP) ports for long-distance replication, and two 40 GbE FCIP ports.
The module supports virtual SANs (VSANs), hardware-based encryption to help secure sensitive traffic with Internet Protocol Security (IPsec), and hardware-based compression to dramatically enhance performance for both high-speed and low-speed links. This can help reduce costs for long-distance replication over expensive WAN infrastructure.
Two years ago, the folks at the University of Toronto asked me to help their graduate students build a "Watson" running entirely on IBM SoftLayer, to see if this would be a worthwhile class project. Needless to say, it was more difficult than they expected, but we managed to pull it off during that summer, and were able to answer a handful of simple questions from a single-page corpus.
Last month, [Industry Leaders Establish Partnership on AI], combining the talents from Amazon, DeepMind/Google, Facebook, IBM and Microsoft, to form a non-profit to explore best practices and ethical questions related to Watson and other Artificial Intelligence applications.
Since data is at the core of any Artificial Intelligence, IBM is pleased to announce today that IBM Cloud Object Storage System is now available on IBM SoftLayer. This is based on the Cleversafe technology IBM acquired last year.
While other cloud service providers have offered data storage in the cloud, this new offering also allows hybrid configurations with geographically dispersed erasure coding. Unlike RAID, which protects against the loss of one or two drives, erasure coding can protect against a larger number of concurrent failures. For example, with an Information Dispersal Algorithm of "7+5", data is encoded into twelve slices on independent disks, any seven of which are enough to reconstruct it, so the system can lose up to five disk drives without losing any data.
Combining this with Geographically Dispersed Configuration across three or more sites means that you can lose an entire data center, four of the twelve disks, and still have instant full access to all of your data from eight drives at the other locations. In the graphic, you see two on-premise data centers combined with a third location in IBM SoftLayer.
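The "7+5" arithmetic above can be sketched as follows. The slice counts come straight from the example; the four-slices-per-site split is taken from the three-site scenario just described:

```python
# The "7+5" Information Dispersal Algorithm arithmetic from the example.
data_slices = 7     # slices required to reconstruct the data
total_slices = 12   # slices written across independent disks

# Up to (total - required) slices can be lost with no data loss.
print(total_slices - data_slices)  # 5 concurrent disk failures tolerated

# Three-site layout: four slices per site. Losing an entire site
# still leaves 8 slices, more than the 7 needed for full access.
slices_per_site = 4
surviving = total_slices - slices_per_site
print(surviving >= data_slices)  # True: full access survives a site loss
```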
Today, I met with Teresa Ferraro and Mike Buttrum from FirstRain in their Manhattan office in downtown New York City. IBM recently contracted FirstRain to provide IBMers like myself with analytics on publicly-available news to keep us informed for business meetings. Here's how IBMers can get the most out of this service.
Basically, FirstRain takes a list of companies and topics, and generates summaries of the most relevant publicly-available news. You can organize these into different channels. Here I have seven channels.
"Companies to watch" refers to existing or prospective clients that I plan to talk with soon. Some of my colleagues are assigned to specific clients, so they can set this up once and enjoy the news for the rest of the year. I, on the other hand, meet with different clients every week, so I will be updating this list frequently.
I have divided the Competitors between major ones, and smaller startups. Since I am often working with business partners and distributors, I made that a separate channel as well.
For product lines, I picked three: Data migration, Data storage solutions, and Software defined storage.
For conferences where I don't know which companies will attend, such as the IBM Technical University, I can set up information by territory. Here is one for Brazil.
I also attend industry-oriented events, so I can pick those vertical markets that might be helpful with dinner conversations. In this example, I chose Energy, Electric Utilities and Gas Utilities.
Once you have your channels configured, you get your results in various sections:
Management Changes lists any changes in top C-level positions: who left the company, and who was recently hired.
Key Developments indicates news like mergers and acquisitions and government regulations.
First Reads prioritizes the top six articles for your channel. You can access more, but these six will get you started as you have your morning coffee.
First Tweets gives you the six most relevant tweets, if those articles above were just "TL;DR"
A section on Business Influencers and Market Drivers is interesting to see who the big players are, and what topics are driving the most conversation. Here's an example from my Energy/Electric/Gas channel:
The Most Talked About section covers quotes and commentary about the most talked about companies in your channel.
With most news sources focused on politics, weather and celebrity gossip, it is nice to have a quicker, more focused approach to get the news I need to prepare for my client briefings. Special thanks to my hosts Teresa and Mike for their hospitality!
This week, I am in Las Vegas for [Edge 2016], IBM's Premiere IT Infrastructure conference of the year.
Day 4, the last day of the conference, is only a partial day, and many people opted to leave on Wednesday evening, or Thursday morning instead. The breakfast and lunch meals had fewer people than the previous days. Here is my recap of day 4 Thursday breakout sessions.
Building Hyperconverged Infrastructure for Next-Generation Workloads
Supermicro is more than happy to customize these, upgrading the CPU, RAM, disk, or network connectivity as needed. This solution is roughly half the price of Nutanix, and offers a better Next-Business-Day/9am-to-5pm support package.
The last time I was in Las Vegas, I presented this topic at the [IBM Interconnect conference]. Back then, I was given only 20 minutes and was placed on the Solutions Expo showroom floor, competing with the noise and traffic of attendees going to lunch.
This time, it was much better, a large room, and a bigger-than-expected audience given that it was scheduled on Thursday morning.
Cloud storage comes in four flavors: persistent, ephemeral, hosted, and reference. The first two I refer to as "Storage for the Computer Cloud" and the latter two I refer to as "Storage as the Storage Cloud".
I also explained the differences between block, file and object access, and why different Cloud storage types use different access methods. I wrapped up the session covering the various storage solutions that IBM offers for all four Cloud Storage types.
IBM Storwize and IBM FlashSystem with VersaStack versus NetApp FlexPod
Norm Patten, part of the IBM Competitive Project Office Storage Team, presented a competitive comparison between VersaStack with IBM storage, versus FlexPod with NetApp storage.
Commodity Solid State Drives (SSD) and Shingled Magnetic Recording [SMR] offer low-cost, high-capacity storage.
However, they have their own set of problems, so IBM is developing software that can be included in IBM Spectrum Accelerate, Spectrum Scale, and Spectrum Virtualize to optimize their utility.
The concept of Log-Structured Array has been around since 1988. The IBM RAMAC Virtual Array back in the 1990s used it. NetApp's Write-Anywhere File System (WAFL) is an implementation of the [Log-Structured File System] general concept.
SALSA combines the Log-Structured Array with enhancements borrowed from the IBM FlashSystem design, which I covered in my Monday and Wednesday presentations, to improve write endurance by as much as 4.6 times!
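To make the Log-Structured Array concept concrete, here is a minimal toy sketch in Python. This is not SALSA or RAMAC Virtual Array code, just the general append-and-remap idea the technique is built on:

```python
# Minimal Log-Structured Array sketch: logical writes always append to
# the end of a log, and a mapping table redirects each logical block
# address (LBA) to its newest location. Garbage collection of stale
# entries is omitted for brevity.
class LogStructuredArray:
    def __init__(self):
        self.log = []        # append-only physical medium
        self.map = {}        # logical block address -> log position

    def write(self, lba, data):
        self.map[lba] = len(self.log)   # old copy becomes garbage
        self.log.append(data)

    def read(self, lba):
        return self.log[self.map[lba]]

lsa = LogStructuredArray()
lsa.write(7, b"v1")
lsa.write(7, b"v2")    # overwrite appends; it never rewrites in place
print(lsa.read(7))     # b'v2'
print(len(lsa.log))    # 2: the stale b'v1' awaits garbage collection
```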
This was an NDA session, so I cannot blog any of the details.
World-class Flash-optimized Data Reduction and Efficiency with IBM FlashSystem A9000 and A9000R
Tomer Carmeli, IBM Offering Manager for the A9000 and A9000R presented. He presented an overview of these models on Monday, so this session was focused on the data footprint reduction technologies.
Basically, it is a three step process. First, all "standard patterns" are removed. IBM has identified some 260 standard patterns that are 8KB in length, such as all zeros, all ones, or all spaces, and replaces these blocks immediately with a pattern token.
Second, [SHA-1] 20-byte hash codes are computed on 8KB pieces on a rolling 4KB alignment boundary. In other words, if a 64KB block of data is written, bytes 0-to-8KB are hashed and compared to existing hash codes. If there is no match, then bytes 4KB-to-12KB are hashed, and so on. This approach nearly doubles the likelihood of finding duplicates. When a matching block is found, the algorithm replaces it with a pointer and a reference count.
Third, any unique data that still remains is compressed using the Lempel-Ziv algorithm. This is done using the [Intel® QuickAssist] co-processor, which can compress data 20 times faster than software algorithms running on general-purpose x86 processors.
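To make the three-step pipeline concrete, here is a toy Python sketch. It is not A9000 code: the chunk sizes are shrunk from 8 KB/4 KB to 8/4 bytes, only three of the ~260 standard patterns are modeled, and the token format is invented for illustration:

```python
import hashlib
import zlib

# Toy sketch of the three-step reduction pipeline. The real system works
# on 8 KB chunks with a 4 KB rolling alignment and compresses with
# QuickAssist hardware; everything here is shrunk for illustration.
CHUNK, STEP = 8, 4                      # stand-ins for 8 KB and 4 KB
PATTERNS = {b"\x00" * CHUNK, b"\xff" * CHUNK, b" " * CHUNK}

def reduce_block(data, seen_hashes):
    tokens = []
    for offset in range(0, max(len(data) - CHUNK + 1, 1), STEP):
        chunk = data[offset:offset + CHUNK]
        if chunk in PATTERNS:                        # step 1: pattern removal
            tokens.append(("pattern", chunk[:1]))
            continue
        digest = hashlib.sha1(chunk).digest()        # step 2: dedup by hash
        if digest in seen_hashes:
            tokens.append(("dup", digest[:4]))
            continue
        seen_hashes.add(digest)
        tokens.append(("unique", zlib.compress(chunk)))  # step 3: compress
    return tokens

seen = set()
kinds = [k for k, _ in reduce_block(b"\x00" * 8 + b"ABCDABCD", seen)]
print(kinds)  # ['pattern', 'unique', 'unique']
```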
Do you want an estimate of how much "reduction ratio" you may achieve? IBM has developed two estimator tools to help. The first tool is a complete scan for data expected to be dedupe-friendly. It is a slow process, taking 8 hours per TB. This would be ideal for Virtual Desktop Infrastructure or backup copies.
The second tool is the venerable [Comprestimator] that IBM has offered for a while to help estimate compression savings for IBM Spectrum Virtualize storage solutions like SVC, Storwize, and FlashSystem V9000. This tool is very fast, looking at only a statistically valid subset of the data.
The results of both tools are merged, and the result is within five percent accuracy. This allows IBM to offer guidance on which data to place on these new A9000 and A9000R models, as well as offer a "reduction ratio" guarantee.
A client asked me why I bother to attend other sessions, when I probably know most of the material they present. I explained that I can always learn from others. I can honestly say that I learned something new and useful at every session I attended.
This week, I am in Las Vegas for [Edge 2016], IBM's Premiere IT Infrastructure conference of the year. Here is my recap of Day 3 Wednesday.
Become your own Storage Consultant
Gary Graham, IBM Field Technical Specialist for Storage, and Brian Pioreck, IBM Client Technical Specialist for Storage, co-presented this session. This session explained how to use IBM's 30-day free trial of IBM Spectrum Control Storage Insights, a cloud-based services offering.
(Note: 15 years ago, I was the chief architect of version 1 of what we now call IBM Spectrum Control. I am pleased to see how well this product has evolved over the years.)
Storage Insights provides a reporting-only subset of the popular IBM Spectrum Control Standard and Advanced editions. It reports on IBM storage devices, as well as any non-IBM devices that are virtualized behind IBM Spectrum Virtualize products like SAN Volume Controller (SVC), Storwize, and FlashSystem V9000.
If you are a storage administrator, consider trying this out for 30 days to get some immediate results. Since it is cloud-based, you only need a Windows, Linux, or AIX system on site to install a "collector". This collector sends data up to the Cloud at one of IBM's SoftLayer facilities. The installation process takes only 30 minutes, and you can download the code from the Internet.
If you find Storage Insights valuable, helping you reclaim some unused space, or provide other insight that saves your company money, consider buying the service, for only 250 US Dollars per 50 TB monitored. If you want more than just monitoring and reporting, consider one of the on-premise solutions like IBM Spectrum Control Standard, or IBM Spectrum Control Advanced edition, which provide provisioning and configuration capabilities as well.
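The pricing arithmetic works out as below. Note that treating $250 per 50 TB as a per-increment, rounded-up charge is my assumption, so check current IBM pricing before quoting anyone:

```python
import math

# Hypothetical cost sketch for Storage Insights: assumes the quoted
# $250 applies per 50 TB increment of monitored capacity, rounded up.
def storage_insights_cost_usd(monitored_tb, price=250, block_tb=50):
    return math.ceil(monitored_tb / block_tb) * price

print(storage_insights_cost_usd(50))   # 250
print(storage_insights_cost_usd(120))  # 750 (three 50 TB increments)
```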
Enhance your Security posture with At-Rest Encryption using the latest IBM Spectrum Virtualize
All of the IBM Spectrum Virtualize products support Data-at-Rest Encryption. For direct-attached storage, the 12Gb SAS controller performs hardware-assisted encryption.
For SAN-attached storage via FCP, FCoE or iSCSI back-end devices, IBM uses the [AES-NI instruction set] that comes included in certain Intel CPU processors.
In November 2015, [IBM acquired Cleversafe] for $1.3 Billion US dollars, in part because Cleversafe had the brand-name recognition of being the #1 Object Storage vendor two years in a row (2014 and 2015). On July 1 of this year, the transition was complete, and the flagship product was officially renamed the IBM Cloud Object Storage System, which some abbreviate informally as IBM COS.
Since then, IBM has been busy integrating IBM COS into the rest of the storage portfolio. I explained how IBM COS can be used for all kinds of static-and-stable data, but not suited for frequently changed data, such as Virtual machines or Databases.
Object storage can be accessed via NFS or SMB NAS protocols using a gateway product, like IBM Spectrum Scale, or those from third-party partners like Ctera, Avere, Nasuni, or Panzura. It can also be used as an alternative to tape for backup copies, and is already supported by major backup software like IBM Spectrum Protect, Commvault Simpana, and Veritas NetBackup.
A few years ago, I explained to a client that Converged and Hyperconverged were like a pendulum swinging back. Over the past few decades, we have gone from internal disk, to externally attached disk, to SAN and LAN networks.
Each time, we gained more flexibility, greater connectivity and longer distances. Then I explained that Converged and Hyperconverged is like going backwards, the pendulum swinging back to the days of internal and direct-attached storage. The analogy was a hit, and thus this session was born!
IBM offers multiple Converged Systems. IBM PureSystems, PureData, PurePower and PureApplication solutions offer racks of compute, storage and network gear. Last year, IBM collaborated with Cisco to create VersaStack, a converged system that combines Cisco's x86 blade servers and switches with IBM FlashSystem and Storwize products.
IBM also offers Hyperconverged solutions. IBM Spectrum Accelerate allows the compute, storage, and network functions to run on 3 to 15 VMware ESXi hosts to form a cluster. The cluster can then make iSCSI-based volumes available to other virtual machines running on these same hosts. The volumes can also be made available to servers outside the cluster, such as bare-metal servers or other hypervisors. This is available as software-only, or as a pre-built system called the Supermicro Hyperconverged Appliance.
IBM Spectrum Scale provides a clustered file system that allows the compute, storage and network functions to run on 3 to 16,000 machines. Formerly called General Parallel File System (GPFS), IBM Spectrum Scale has been around for over 18 years. Over 200 of the world's largest "Top 500" supercomputers run IBM Spectrum Scale today.
IBM Spectrum Virtualize and IBM Storwize Birds-of-a-Feather
Barry Whyte, fellow blogger and IBM Master Inventor, presented an overview of the latest features, and where IBM is headed in 2017 for the IBM Spectrum Virtualize family of products. Barry now works in Advanced Technical Skills for Storage Virtualization Asia/Pacific Region.
The group then moved to another room, offering delicious food and drink, as Eric Stouffer, IBM Director, Storwize Offering Manager and Business Line Executive, presented the future areas that IBM is considering for this product family.
All of this was done under Non-Disclosure Agreements (NDA), preventing me from blogging any details. Back in 2003, Las Vegas started a marketing campaign ["What Happens in Vegas, Stays in Vegas"]. Coincidentally, this is the same year IBM introduced the IBM SAN Volume Controller, the first product in the IBM Spectrum Virtualize family.
This was a long day, but I was pleased with the large audiences I had at my sessions.
This week, I am in Las Vegas for [Edge 2016], IBM's Premiere IT Infrastructure conference of the year. Here is my recap of breakout sessions on Day 2.
Introducing IBM FlashSystem A9000 and A9000R: Grid Architecture Designed for the Hybrid Cloud
Tomer Carmeli, IBM Offering Manager for the A9000 and A9000R, presented. Both models offer data-at-rest encryption, snapshots, remote mirroring, and data footprint reduction, assumed at 5.26:1, through a combination of pattern removal, data deduplication and hardware-assisted Real-time Compression.
The A9000 is an 8U high pod that can fit into existing racks. It comes in 60TB, 150TB and 300TB effective capacity.
The A9000R includes its own 42U rack. The rack is organized as two to six "grid elements" combined with two InfiniBand switches. Grid elements come in 150TB and 300TB effective capacities, giving you up to a whopping 1.8 PB in a single rack!
Similar to the IBM XIV and IBM Spectrum Accelerate offerings, the A9000 and A9000R support Hyper-Scale features. Hyper-Scale Manager lets you manage up to 144 devices on a single pane of glass. Hyper-Scale Mobility lets you move volumes (LUNs) non-disruptively from one device to another.
Different data compresses or dedupes at different ratios. Your mileage may vary. Unless you are evaluating a JBOF (just a bunch of flash) device, there is a great difference between raw, usable, and effective capacity. Raw capacity can be calculated by the size of each chip, times the number of chips. Usable capacity factors out RAID, and any spare capacity set aside for RAID rebuild and garbage collection. Effective capacity indicates the amount of information that can be stored by taking advantage of data footprint reduction technologies, such as compression or data deduplication.
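To make the raw/usable/effective distinction concrete, here is a minimal Python sketch. The raw capacity and overhead fraction are illustrative assumptions for the sake of example, not published specifications for any particular device; only the 5.26:1 reduction ratio comes from the session above.

```python
# Illustrative only: walk raw -> usable -> effective capacity.
# The 72 TB raw figure and 20 percent overhead are assumptions,
# not specs for any real device. The 5.26:1 ratio is from the session.

def usable_capacity(raw_tb, overhead_fraction=0.20):
    """Subtract RAID parity, spare space, and garbage-collection reserve."""
    return raw_tb * (1 - overhead_fraction)

def effective_capacity(usable_tb, reduction_ratio=5.26):
    """Apply data footprint reduction (compression plus deduplication)."""
    return usable_tb * reduction_ratio

raw = 72.0                             # TB of flash chips, illustrative
usable = usable_capacity(raw)          # after overhead
effective = effective_capacity(usable) # after data footprint reduction
print(f"raw={raw:.1f} TB  usable={usable:.1f} TB  effective={effective:.1f} TB")
```

As the text notes, your mileage may vary: the whole calculation hinges on how well your particular data compresses and dedupes.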
IBM offers three options:
Measured Estimate -- IBM has a set of data reduction estimator tools that can scan your existing data, and estimate your reduction ratio, within 5 percent accuracy.
Competitive Match -- If a competitor had run their own set of estimator tools, IBM might be able to match the reduction ratio, without repeating the analysis, by just reviewing the competitor results.
"Sight unseen" -- without analyzing your actual data, the reduction ratio is determined by the type of data (DB2, Oracle, SQL Server, etc.), based on experience with similar data at other data centers.
Both A9000 and A9000R models are published at 250 microsecond latency, about 30 times faster than traditional spinning disk, although some workloads actually can run even faster than that. Assuming 5.26:1 reduction, these sell for about $1.50 per effective GB.
Flash Primer - Ready to move from disk storage?
Patricia Crowell, IBM Worldwide FlashSystem Enablement Manager, presented an interesting timeline:
First Solid-State Drive (SSD)
First Flash card, such as for digital cameras
First USB stick
Flash used in specialized IT appliances
Flash for the enterprise - Microsoft and UCSD paper on SSD
In 2012, Microsoft Research and University of California San Diego published ["The Bleak Future of NAND Flash Memory"], 8 pages, by Laura M. Grupp, John D. Davis, and Steven Swanson. Here is an excerpt:
"The technology trends we have described put SSDs in an unusual position for a cutting-edge technology: SSDs will continue to improve by some metrics (notably density and cost per bit), but everything else about them is poised to get worse. This makes the future of SSDs cloudy: While the growing capacity of SSDs and high IOP rates will make them attractive in many applications, the reduction in performance that is necessary to increase capacity while keeping costs in check may make it difficult for SSDs to scale as a viable technology for some applications"
IBM disagreed with this bleak assessment, announced it was investing $1 billion US Dollars into this technology, acquired Texas Memory Systems, and has deployed flash throughout its product line. For the past three years, IBM has been the #1 vendor for Flash storage systems.
Patricia offered the following example. What would it take to run 20 million IOPS? Here's a comparison:
Disk systems 15K rpm
Disk systems 7200 rpm
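The specific drive counts from Patricia's comparison didn't survive into this recap, but the arithmetic is easy to sketch yourself. In the Python below, the per-device IOPS figures are my own rough ballparks, not numbers from the session:

```python
import math

TARGET_IOPS = 20_000_000

# Assumed per-device IOPS: rough industry ballparks, NOT figures
# from Patricia's session.
assumed_iops = {
    "15K rpm disk": 200,
    "7200 rpm disk": 80,
    "flash module": 1_000_000,
}

# Number of devices needed to reach the 20 million IOPS target
devices_needed = {name: math.ceil(TARGET_IOPS / iops)
                  for name, iops in assumed_iops.items()}

for name, count in devices_needed.items():
    print(f"{name}: {count:,} devices")
```

Whatever the exact per-device numbers, the point of the comparison stands: reaching 20 million IOPS with spinning disk takes orders of magnitude more devices than with flash.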
How to migrate from SONAS to IBM Spectrum Scale/ESS using Active File Manager
Paul Schena, IBM Senior IT Specialist, presented his experiences migrating existing SONAS data to new IBM Spectrum Scale or Elastic Storage Server (ESS) deployments. SONAS is going End-of-Service (EOS) on April 30, 2018, so it is never too soon to start this migration.
Paul gave two different methodologies. The first used Active File Management (AFM):
Set up an IBM Spectrum Scale "Gateway Node" in "Independent-Writer" AFM mode. Paul recommends 10 threads per gateway node.
Issue an AFM pre-fetch, disabling the "cache eviction" feature to ensure data remains. AFM transfers the directory structure and file data, including sparse files, Access Control Lists (ACL), and extended attributes.
Define your exports with no-root-squash and move your user mounts to the new systems
Once all the data is moved, convert the cache filesets to regular filesets
Define your quotas, export settings, ILM policies and rules
Decommission the SONAS
The second used Robocopy and Rsync, which may be required if there is a high-latency, long-distance connection that prevents proper AFM connections:
Configure IBM Spectrum Scale CES servers to appropriate NFS and/or SMB protocols
Use Robocopy and/or Rsync as appropriate to move the data to the new system
Decommission the SONAS
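For the Rsync path, the flavor of invocation involved looks something like the sketch below. The paths are hypothetical, and `echo` is used so the command is displayed rather than run against a live system; on a real migration you would drop the `echo` and test first with `--dry-run`:

```shell
# Hypothetical source (SONAS NFS mount) and destination (Spectrum Scale CES).
SRC="/mnt/sonas/projects"
DST="/gpfs/fs1/projects"

# -a  archive mode (permissions, times, symlinks)
# -H  preserve hard links   -A  preserve ACLs   -X  preserve extended attributes
# echo shows the command instead of executing it in this sketch.
echo rsync -aHAX --numeric-ids "$SRC/" "$DST/"
```

Preserving hard links, ACLs and extended attributes matters here for the same reason AFM transfers them: otherwise the migrated filesets lose permissions and metadata that applications depend on.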
Having it all: Hybrid Cloud Storage Services for Block, Power and Backup
Clint Parish, Director of Enterprise Solutions and Services for VSS, and Marc Théberge, Business Development for Supermicro, co-presented this session.
VSS offers POWER8-based Cloud services. They consider themselves "boutique" with POWER8 servers, able to run AIX, IBM i and Linux on POWER applications, but not at the scale and size of larger x86-based clouds like Amazon Web Services or Microsoft Azure.
For IBM i, they attach to IBM Storwize V7000. For AIX and Linux on POWER, they use IBM Storwize V7000 and/or Supermicro Hyperconverged Appliance, a pre-built system based on IBM Spectrum Accelerate.
Supermicro offers three "tee-shirt sizes": small systems with 6 nodes, medium with 9 nodes, and large with 15 nodes. Unlike other Hyperconverged systems, the ones from Supermicro include a rack, and are pre-cabled with all the Ethernet switches necessary to make a complete solution.
To offer backup services, VSS uses IBM Spectrum Protect with the Supermicro appliances.
In the evening, we were treated to a concert by Train, known for songs like "Meet Virginia", "Hey Soul Sister", "Calling all Angels" and "Drops of Jupiter". They played all of these, plus covered some songs by Led Zeppelin, Journey, Queen and Aerosmith.
This week, I am in Las Vegas for [Edge 2016], IBM's Premiere IT Infrastructure conference of the year. Here is my recap of breakout sessions for Monday, Sep 19, 2016:
How do you store a Zettabyte? IBM and Microsoft Know...
A [Zettabyte] is a million Petabytes, or a billion Terabytes, of data. Most clients I deal with have less than 10 PB of centralized storage in their data center, but there are a few that have much larger data repositories.
Ed Childers, IBM STSM and manager for Tape and LTFS development, and Aaron Ogus, Microsoft Architect, discussed different solutions developed by IBM and Microsoft. IBM's solution has been productized, and is available as IBM Spectrum Scale and IBM Spectrum Archive. Microsoft's solution is not productized, but is being "operationalized" to be used within Microsoft's Azure Cloud.
Not surprisingly, to be able to store a Zettabyte of data, you have to be creative and cost-effective with storage media. The current winner is magnetic tape, which continues to be 20 times less expensive than disk. IBM developed the Linear Tape File System (LTFS) and then shared it with other leading IT vendors. Ed also covered some future storage media developments, from using Macro-molecular strands of DNA, to Phase Change Memory (PCM).
All Flash is not Created Equal - Contrasting IBM FlashSystem with Solid State Drives (SSD)
Many IBM FlashSystem presentations focus on the product, but don't explain the underlying technology: specifically, what differentiates IBM FlashSystem from substantially slower competitive alternatives, like EMC XtremIO and Pure Storage, that are based instead on fallible commodity Solid State Drives (SSD).
By working closely with our chip vendor, Micron, IBM was able to improve the write endurance of these Multi-level cell (MLC) chips by 9.4x, and reduce write amplification by 45 percent.
I explained IBM's clever asymmetrical wear-level balancing, heat segregation, read disturb mitigation, voltage level shifting, and health binning, all of which contribute to the performance and reliability of this solution. IBM's innovative Error Correcting Code provides LDPC-like correction strength but at much faster BCH-like latency speed.
This was a popular session. Despite being moved to a much larger room, they still had to turn people away, so I will be repeating this session on Wednesday, 11:00am.
Real-time Compression: Bendigo and Adelaide Bank's Perspective
James Harris, Senior Storage Systems Specialist for [Bendigo and Adelaide Bank], presented his success story with the use of Real-time Compression. Oracle RAC databases got 60-70 percent savings. SQL databases got 70-80 percent savings. VMware VMFS datastores averaged 50 percent savings. For IBM i, he is getting 60-70 percent savings for SYSBAS, and over 70 percent savings on the rest of his IBM i production data.
As a result, the bank has not had to make any Capital Expenditures (CAPEX) for disk for 2-3 years since they started compressing in 2014.
Storage Options for Big Data and Analytics: IBM FlashSystem or Traditional Disk Systems?
Eric Sperley, IBM Software Defined Storage Architect, presented the basics of Hadoop and the Hadoop File System (HDFS), then explained how IBM Spectrum Scale, when combined with the right tiers of flash and disk technology, could be used to optimize an environment for big data analytics.
The Solutions EXPO is open all day, for people to visit the booths in between sessions. I stopped in for the evening reception. This is a great way to catch up on the latest products, re-connect with some clients or colleagues that I haven't seen in person for awhile, and meet new friends.
Shown here is Angie Welchert, who just started working for IBM a few years ago! I took her around to introduce her to some IBM executives at the Solutions EXPO.
This week, I am in Las Vegas for [Edge 2016], IBM's Premiere IT Infrastructure conference of the year.
General Session - Outthink Status Quo
This week's motto is "Outthink the Status Quo... Before the Status Quo disrupts your business!"
Tom Rosamilia, IBM Senior VP for IBM Systems (and my fifth-line manager), kicked off the event. There are about 5,500 people at this event. He mentioned that just like a picture is worth a thousand words, "a prototype is worth a thousand meetings."
He showed a video of our client "Plenty of Fish" [POF], which is a dating site. They have 100 million members, of which 4 million access their site every day. IBM FlashSystem paid for itself, with an ROI payback period of 2 months.
Jason Pontin, Editor in Chief and Publisher of [MIT Technology Review], mentioned three major areas to watch:
Explosive innovation in Artificial Intelligence (AI), including IBM Watson, machine learning, etc.
Pervasive computing, including augmented reality or virtual reality, what IBM calls Internet of Things (IoT)
Re-writing life, directly editing genomes for healthcare and agriculture
Jason feels there are two major challenges for humans. First, what is the "future of work"? People are no longer working for the same company for their entire career. Rather, they come and go, moving in and out of companies. Second, how will we deliver food and water to the 9.6 billion population expected by 2050, with the added challenge of climate change?
Ed Walsh, IBM General Manager for Storage and Software Defined Infrastructure, presented next. Last year, I was asked to throw my hat in the ring to be the next General Manager of IBM Storage. I was up against some strong competition, and in the end upper management selected Ed Walsh instead. He is a good choice, and I support his efforts.
Matt Cadieux, CIO for [Red Bull Racing], presented on the IT challenges of designing, building and racing Formula One racing cars. They have 21 races per year, and each race has slightly different specifications, forcing Red Bull Racing to break down and rebuild their cars for each race.
Michael Lawley, Senior IT Vice President for [HealthPlan Services], explained how his business grew 300 percent in the past four years. Their workloads are very "spiky", so it is good that they can scale up or down their IT infrastructure 3-4x as needed, within minutes.
Jacob Yundt, CIO for University of Pittsburgh Medical Center [UPMC], explained the importance of genomics as the next frontier of medicine. Genomics allows for more accurate cancer determinations, which helps target specific treatments. They moved from x86-based clusters to those based on Power LC models from IBM. For analytics, they chose IBM Power8 S822L servers with Elastic Storage Server (ESS) and the Hadoop Transparency Layer.
Lastly, Terri Virnig welcomed two technology partners to the stage for some major announcements. First, Jim Totton from Red Hat announced that RHEV v4 (based on Linux KVM) has been announced for the POWER platform. Second, Scott Gnau, CTO for [Hortonworks], announced that Hortonworks will run on the POWER platform, as part of the IBM and Hortonworks Open Data Platform [ODP] initiative.
Trends & Directions: The Future of Storage in the Cloud and Cognitive Era
Eric Herzog, IBM Vice President, Product Marketing and Management Software Defined Infrastructure, served as emcee for this session.
Ed Walsh, IBM General Manager for IBM Storage and Software Defined Infrastructure, marveled at IBM's "storied history in storage innovation". He suggests clients modernize and transform their business with IBM's storage portfolio, the broadest in the IT industry.
Clod Barrera, IBM Engineer and the Chief Technical Strategist for IBM Systems Storage, explained that in the past 60 years of disk systems, areal density has improved by a factor of one billion. Unfortunately, that is slowing down, and we won't see such improvements anymore.
Bina Hallman, IBM Vice President, Software Defined Storage Solutions Offering Management, hosted a panel of clients, including:
Bob Osterlin, from [Nuance], which has 5-10 PB of data on IBM Spectrum Scale for voice recognition software.
Rich Spurlock, from [Cobalt Iron], which provides Backup-as-a-Service using IBM Spectrum Protect. Their clients experience an 80 percent reduction in operating expenditures (OPEX) using Spectrum Protect.
Moshe Perez, from [RR Media], which distributes television channels like ESPN and BBC to other countries. They use IBM Spectrum Accelerate to handle demand peaks, such as the Olympics.
Mike Kuhn, IBM Vice President for Storage Solutions Offering Management, also hosted a panel of clients, including:
Kevin Muha, from [UPMC], managing 13 PB of storage, across a variety of IBM storage devices, including 700 TB of FlashSystem V9000.
Bill Reed, CTO for [Arizona State Land Department], which uses VersaStack with IBM FlashSystem V9000 for geographic information system [GIS] applications. They manage over 9.2 million acres to help fund K-12 schools in Arizona.
Owen Morley, from Plenty of Fish [POF] dating website, evaluated nearly every flash device in the market, and chose IBM FlashSystem. "The one metric that matters is Latency!"
These were the two main keynote sessions on Monday morning. During the rest of the week there will be over 285 storage-related breakout sessions, dozens of labs, and 7 panels.
This week, I am in Las Vegas for [Edge 2016], IBM's Premiere IT Infrastructure conference of the year. In previous years, this conference was held in May, June or July, but this year, it was moved back to September, to coincide with the 60th Anniversary of IBM Disk Systems.
I have arrived safely in Las Vegas, and checked in at the Edge 2016 Conference Registration.
This year, the Solutions EXPO opens early, on Sunday with a reception. This gives people a chance to go to booth #330 to make appointments for one-on-one with various IBM Executives!
I was able to catch up with co-workers I have not seen in a while! There is a whole section on IBM storage products such as the IBM DS8888 All-Flash Array, as well as software products like IBM Spectrum Protect and IBM Spectrum Control.
On Monday, my session "All Flash is Not Created Equal: Tony Pearson Contrasts IBM FlashSystem and SSD" has moved from the tiny room to a much larger room "Studio A". There was a lot of demand for this session, so I have agreed to present this again, as a repeat session, on Wednesday.
Edge will be different in many ways this year. The past few years we had separate "Executive Edge" for C-level executives, "Winning Edge" for IBM Business Partners, and "Technical Edge" for server, network and storage administrators.
This year, all 1,000 sessions are combined back into one, but with clever hints in the titles. The words "General Session", "Outthink" or "Cognitive" are used to indicate C-level executive talks. Those that use the terms "Winning" or "Community" target IBM Business Partners, Managed Service Providers and Cloud Service Providers. Those that mention z Systems, POWER servers, or Storage solutions, often adding the term "Deep-Dive", are technical.
(Unlike other sessions that might appeal to one portion of the audience or another, mine are suitable for everyone, from C-level executives and IBM Business Partners to storage administrators. To help people find them under the new naming scheme, I have added "Tony Pearson Presents", or words to that effect.)
About 260 breakout sessions relate to IBM Storage, but there are only 20 or so time slots, so obviously you can't see them all in person.
I strongly suggest you pick about three to five topics per time slot, so that you are not overwhelmed by the dozens of choices during the event. This allows you to make a quick decision on which one you finally decide on during each time slot.
Occasionally, a session might get canceled, postponed, or be so full of attendees that nobody else is allowed in, so having three to five topics selected allows you to choose an alternate.
Here is my schedule for next week at Edge 2016.
Trends & Directions: The Future of Storage in the Cloud and Cognitive Era
All Flash is Not Created Equal: Tony Pearson Contrasts IBM FlashSystem and SSD
MGM Grand - Studio 9
Solution EXPO: Reception
Edge at Night: Poolside Reception and Concert "Train"
Tony Pearson Presents IBM Cloud Object Storage System and Its Applications
MGM Grand - Room 114
The Pendulum Swings Back: Tony Pearson Explains Converged and Hyperconverged Environments
MGM Grand - Room 113
Solution EXPO: Reception
Tony Pearson Presents IBM's Cloud Storage Options
MGM Grand - Room 116
My colleagues Dave Dabney or Adam Bergren will be located at the WW Systems Client Centers Booth 125 of the Solution EXPO.
If you are active in Social Media, consider using the hashtags #IBMedge, #IBMstorage, and #IBMcloud. You can follow me on Twitter, my handle is @az990tony
For those interested in a one-on-one meeting with me, over breakfast, lunch or dinner, or some other time, I have several slots still available. Fill out a request form on BriefingSource at: [https://briefingsource.dst.ibm.com/]
SAP HANA is an in-memory, relational database management system supported on Linux for x86 and POWER servers. The "HANA" acronym is short for "High-Performance Analytic Appliance" software. By keeping the data in memory, analytics and queries can be performed much faster than from traditional disk repositories.
Server memory, however, is volatile storage, so the data needs to be stored on persistent storage such as flash or disk drives. SAP has certified several configurations, some involve IBM Spectrum Scale solutions. I will use the following graphic to explain the three configurations.
Linux on x86-64 with Spectrum Scale FPO
With SAP HANA on Lenovo x86-64 servers, SAP has certified internal flash or disk drives running IBM Spectrum Scale in "File Placement Optimization" (FPO) mode. FPO provides a shared-nothing architecture that matches the SAP HANA architecture. IBM Spectrum Protect can back up this configuration, providing data protection and disaster recovery support.
Linux on POWER with Elastic Storage Server
With SAP HANA on POWER servers, SAP has certified the external Elastic Storage Server (ESS). Not only is POWER a better platform than x86-64 for running SAP HANA, but the Elastic Storage Server also offers erasure coding that provides fast rebuild times and excellent storage efficiency.
The ESS is a pre-built system that combines IBM Spectrum Scale software with server and storage hardware. IBM Spectrum Protect can also back up this configuration, providing data protection and disaster recovery support.
Block-level Storage over Storage Area Network (SAN)
Various IBM block-level devices are supported for SAP HANA on both Linux on x86-64 and Linux on POWER. Unfortunately, to date SAP has only certified the use of the XFS file system. The problem many clients mention about this configuration is the lack of end-to-end backup and disaster recovery. This is solved by the Spectrum Scale configurations in the previous two examples.
Other combinations, such as SAP HANA on POWER with Spectrum Scale FPO, or on x86-64 servers with the Elastic Storage Server, are either not SAP-certified or require SAP's explicit approval.
IBM and SAP have worked closely together for many years, and I am glad to see SAP HANA and IBM Spectrum Scale based solutions continue this tradition.
As we get to larger and larger flash and spinning disk drives, a common question I get is whether to use RAID-5 versus RAID-6. Here is my take on the matter.
A quick review of basic probability statistics
Failure rates are based on probabilities. Take, for example, a traditional six-sided die, with the numbers one through six represented as dots on each face. What are the chances that we can roll the die several times in a row without ever rolling a six? You might think that if there is a 1/6 (16.7 percent) chance to roll a six, then you would be guaranteed to hit a six after six rolls. That is not the case.
# of Rolls    Probability of no sixes (percent)
 1            83.3
 6            33.5
12            11.2
24             1.26
So, even after 24 rolls, there is more than 1 percent chance of not rolling a six at all. The formula is (1-1/6) to the 24th power.
Let's say that rolling one to five is success, and rolling a six is a failure. Being successful requires that no sixes appear in a sequence of events. This is the concept I will use for the rest of this post. If you don't care for the math, jump down to the "Summary of Results" section below.
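If you would rather check the dice arithmetic in code than by hand, here is a short Python sketch of the formula above:

```python
# Probability of rolling no sixes in n rolls of a fair die: (1 - 1/6) ** n
def p_no_sixes(n):
    return (1 - 1/6) ** n

for n in (1, 6, 12, 24):
    print(f"{n:>2} rolls: {p_no_sixes(n) * 100:.2f} percent chance of no sixes")
```

Even after 24 rolls, the chance of no sixes is still above 1 percent, which is the whole point: "unlikely per event" never becomes "impossible over many events".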
Error Correcting Codes (ECC) and Unrecoverable Read Errors (URE)
When I speak to my travel agent, I have to provide my six-character [Record Locator] code. Pronouncing individual letters can be error prone, so we use a "spelling alphabet".
The International Radiotelephony Spelling Alphabet, sometimes known as the [NATO phonetic alphabet], has 26 code words assigned to the 26 letters of the English alphabet in alphabetical order as follows: Alfa, Bravo, Charlie, Delta, Echo, Foxtrot, Golf, Hotel, India, Juliett, Kilo, Lima, Mike, November, Oscar, Papa, Quebec, Romeo, Sierra, Tango, Uniform, Victor, Whiskey, X-ray, Yankee, Zulu.
Foxtrot Golf Mike Oscar Victor Whiskey
Foxtrot Gold Mine Oscar Vector Whisker
Boxcart Golf Miko Boxcart Victor Whiskey
Having five or so characters to represent a single character may seem excessive, but you can see that this can be helpful when the communications link has static, or background noise is loud, as is often the case at the airport!
If spelling words are misheard, either (a) they are close enough like "Gold" for "Golf", or "Whisker" for "Whiskey", that the correct word is known, or (b) not close enough, such that "Boxcart" could refer to either "Foxtrot" or "Oscar" that we can at least detect that the failure occurred.
For data transfers, or data that is written, and later read back, the functional equivalent is an Error Correcting Code [ECC], used in transmission and storage of data. Some basic ECC can correct a single bit error, and detect double bit errors as failures. More sophisticated ECC can correct multiple bit errors up to a certain number of bits, and detect most anything worse.
When reading a block, sector or page of data from a storage device, if the ECC detects an error, but is unable to correct the bits involved, we call this an "Unrecoverable Read Error", or URE for short.
Bit Error Rate (BER)
Different storage devices have different block, sector or page sizes. Some use 512 bytes, 4096 bytes or 8192 bytes, for example. To normalize likelihood of errors, the industry has simplified this to a single bit error rate or BER, represented often as a power of 10.
Bit Error Rate per read (BER)
Consumer HDD (PC/Laptops)
Enterprise 15k/10k/7200 rpm
Solid-State and Flash
IBM TS1150 tape
In other words, the chance that a bit is unreadable on optical media is 1 in 10 trillion (1E13), on enterprise 15k drives is 1 in 10 quadrillion, and on LTO-7 tape is 1 in 10 quintillion.
There are eight bits per byte, so reading 1 GB of data is like rolling the die eight billion times. The chance of successfully reading 1GB on DVD, then would be (1 - 1/1E13) to the 8 billionth power, or 99.92 percent, or conversely a 0.08 percent chance of failure.
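That calculation, expressed in Python:

```python
BITS_PER_GB = 8_000_000_000        # 8 bits per byte x 1 billion bytes

def p_read_ok(gigabytes, ber):
    """Chance of reading this much data with no unrecoverable bit error."""
    return (1 - 1 / ber) ** (gigabytes * BITS_PER_GB)

p = p_read_ok(1, 1e13)             # 1 GB from optical media (BER 1E13)
print(f"success: {p * 100:.2f} percent, failure: {(1 - p) * 100:.2f} percent")
```

Swap in a different BER (1E16 for enterprise disk, 1E19 for LTO-7 tape) and a different number of gigabytes to see how the odds change with the media.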
In its well-known paper ["Failure Trends in a Large Disk Drive Population"], Google studied drive failure using an "Annual Failure Rate" or AFR. Here are two graphs from this paper:
This first graph shows AFR by age. Some drives fail in their first 3-6 months, often called "infant mortality". Then they are fairly reliable for a few years, down to 1.7 percent, then as they get older, they start to fail more often, up to 8.3 percent.
This second graph factors in how busy the drives are. Dividing the drive set into quartiles, "Low" represents the least busy drives (the bottom quartile), "Medium" represents the median two quartiles, and "High" represents the busiest drives, the top quartile. Not surprisingly, the busiest drives tend to fail more often than medium-busy drives.
Given an AFR, what are the chances a drive will fail in the next hour? There are 8,766 hours per year, so the success of a drive over the course of a year is like rolling the die 8,766 times. This allows us to calculate a "Drive Error Rate" or DER:
AFR (percent)    Drive Error Rate per hour (DER)
 1               1 in 872,200
 3               1 in 287,800
 5               1 in 170,900
10               1 in 83,200
For example, an AFR=3 drive has a 1 in 287,800 chance of failing in a particular hour. The probability this drive will fail in the next 24 hours would be like rolling the die 24 times. The formula is (1-1/287,800) to the 24th power, resulting in a failure rate of roughly 0.008 percent.
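The same die-rolling logic converts any AFR into a per-hour rate, so you can plug in your own numbers. A quick Python sketch:

```python
HOURS_PER_YEAR = 8766

def der(afr_percent):
    """Per-hour failure probability implied by an annual failure rate."""
    survive_year = 1 - afr_percent / 100
    return 1 - survive_year ** (1 / HOURS_PER_YEAR)

def p_fail_within(afr_percent, hours):
    """Chance the drive fails at some point in the next `hours` hours."""
    return 1 - (1 - der(afr_percent)) ** hours

print(f"AFR=3: 1 in {1 / der(3):,.0f} per hour")
print(f"AFR=3, next 24 hours: {p_fail_within(3, 24) * 100:.4f} percent")
```

This reproduces the "1 in 287,800" figure for AFR=3, and the roughly 0.008 percent chance of failing in the next 24 hours.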
Let's take a typical RAID-5 rank with 600GB drives at 15K rpm, in a 7+P RAID-5 configuration.
During normal processing, if a URE occurs on an individual drive, RAID comes to the rescue. The system can rebuild the data from parity, and correct the broken block of data.
When a drive fails, however, we don't have this rescue, so a URE that occurs during the rebuild process is catastrophic. How likely is this? Data is read from the other seven drives, and written to a spare empty drive. At 8 bits per byte, reading 4200 GB of data is rolling the die 33.6 trillion times. The formula is then (1-1/1E16) to the 33.6 trillionth power, or approximately a 0.372 percent chance of URE during the rebuild process.
The time to perform the rebuild depends heavily on the speed of the drive, and how busy the RAID rank is doing other work. Under heavy load, the rebuild might only run at 25 MB/sec, and under no workload perhaps 90 MB/sec. If we take a 60 MB/sec moderate rebuild rate, then it would take 10,000 seconds or nearly 3 hours. The chance that any of the seven drives fail during these three hours, at AFR=10 rolling the DER die (7 x 3) 21 times, results in a 0.025 percent chance of failure.
It is nearly 15 times more likely to get a URE failure than a second drive failure. A rebuild failure would happen with either of these, with a probability of 0.397 percent.
The situation gets worse with higher capacity Nearline drives. Let's do a RAID-5 rank with 6TB Nearline drives at 7200 rpm, in a 7+P configuration. The likelihood of URE reading 42 TB of data, is rolling the die 336 trillion times, or approximately 3.66 percent chance of URE failure. Yikes!
The time to rebuild is also going to take longer. A moderate rebuild rate might only be 30 MB/sec, so that rebuilding a 6TB drive would take 55 hours. The chance that one of the other seven drives fail, assuming again AFR=10, during these 55 hours results in a 0.462 percent.
This time, a URE failure is nearly eight times more likely than a double drive failure. The chance of a rebuild failure is 4.12 percent. Good thing you backed up to tape or object storage!
The math can be done easily using modern spreadsheet software. The URE failure rate is based on the quantity of data read from the remaining drives, so a 4+P with 600GB drives is the same as 8+P with 300GB drives. Both read 2.4 TB of data to recalculate from parity. The Double Drive failure rate is based on the number of drives being read times the number of hours during the rebuild. Slower, higher capacity drives take longer to rebuild. However, in both the 15K and 7200rpm examples, the chance of a URE failure was 8 to 15 times more likely than double drive failure.
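The spreadsheet math can also be sketched in Python. This reproduces the 600GB 15k example above (7+P, BER of 1E16, AFR=10, and the roughly 3-hour rebuild window):

```python
HOURS_PER_YEAR = 8766
BITS_PER_GB = 8_000_000_000

def p_ure(gigabytes_read, ber):
    """Chance of at least one unrecoverable read error in this much data."""
    return 1 - (1 - 1 / ber) ** (gigabytes_read * BITS_PER_GB)

def p_drive_fail(afr_percent, drives, hours):
    """Chance any of `drives` fails during `hours` of rebuild, given AFR."""
    per_hour = 1 - (1 - afr_percent / 100) ** (1 / HOURS_PER_YEAR)
    return 1 - (1 - per_hour) ** (drives * hours)

# 7+P RAID-5, 600 GB 15k drives, BER 1E16, AFR=10, ~3 hour rebuild
ure = p_ure(7 * 600, 1e16)         # URE while reading the surviving drives
ddf = p_drive_fail(10, 7, 3)       # second drive dies during the rebuild
print(f"URE during rebuild:  {ure * 100:.3f} percent")
print(f"second drive fails:  {ddf * 100:.3f} percent")
print(f"rebuild failure:     {(ure + ddf) * 100:.3f} percent")
```

Change the drive size, drive count, BER, AFR and rebuild hours to match your own configuration; the 6TB Nearline case is the same call with 6000 GB drives and a 55-hour rebuild.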
Many of the problems associated with RAID-5 above can be mitigated with RAID-6.
After a single drive fails, any URE during rebuild can be corrected from parity. However, if a second drive fails during the rebuild process, then a URE on the remaining drives would be a problem.
Let's start with the 600GB 15k drives in a 6+P+Q RAID-6 configuration. The chance of a second drive failing is 0.0252 percent, as we calculated above. The likelihood of a URE is then based on the remaining six drives, 3600 GB of data. Doing the math, that is a 0.319 percent chance. So, the chance of a URE during a RAID-6 rebuild is the probability of both occurring, roughly 0.0000806 percent. Far more reliable than RAID-5!
Likewise, we can calculate the probability of a triple drive failure. After the second drive fails, the likelihood of a third drive failing, at AFR=10, results in 0.00000546 percent.
Combining these, the chance of a rebuild failure is 0.0000861 percent.
Switching to 6 TB Nearline drives, in a 6+P+Q RAID-6 configuration, we can do the math in the same manner. The likelihood of URE and two drives failing is 0.0145 percent, and for triple drive failure is 0.00183 percent. The chance of rebuild failure is 0.0163 percent.
Summary of Results
Putting all the results in a table, we have the following:
Drive type        RAID-5 rebuild failure (percent)    RAID-6 rebuild failure (percent)
600GB 15K rpm     0.397                               0.0000861
6 TB 7200rpm      4.12                                0.0163
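To experiment with your own drive sizes and rebuild rates, here is a Python sketch of the RAID-6 (6+P+Q) calculation, using the same BER, AFR and rebuild-time assumptions as the examples above:

```python
HOURS_PER_YEAR = 8766
BITS_PER_GB = 8_000_000_000

def p_ure(gigabytes_read, ber):
    """Chance of at least one unrecoverable read error in this much data."""
    return 1 - (1 - 1 / ber) ** (gigabytes_read * BITS_PER_GB)

def p_drive_fail(afr_percent, drives, hours):
    """Chance any of `drives` fails during `hours` of rebuild, given AFR."""
    per_hour = 1 - (1 - afr_percent / 100) ** (1 / HOURS_PER_YEAR)
    return 1 - (1 - per_hour) ** (drives * hours)

def raid6_rebuild_failure(drive_gb, data_drives, ber, afr, rebuild_hours):
    """6+P+Q: a rebuild is lost only if a second drive fails AND either a
    URE hits the remaining drives or a third drive also fails."""
    second = p_drive_fail(afr, data_drives + 1, rebuild_hours)  # 7 remain
    ure = p_ure(data_drives * drive_gb, ber)                    # 6 remain
    third = p_drive_fail(afr, data_drives, rebuild_hours)
    return second * (ure + third)

# 600 GB 15k (3 hour rebuild) and 6 TB Nearline (55 hour rebuild)
print(f"600GB 15K:   {raid6_rebuild_failure(600, 6, 1e16, 10, 3) * 100:.7f} percent")
print(f"6TB 7200rpm: {raid6_rebuild_failure(6000, 6, 1e16, 10, 55) * 100:.4f} percent")
```

The key structural difference from RAID-5 is the multiplication: RAID-6 only loses the rebuild when two bad events coincide, which is why the probabilities drop by several orders of magnitude.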
Hopefully, I have shown you how to calculate these yourself, so that you can plug in your own drive sizes, rebuild rates, and other parameters to convince yourself of this.
In all cases, RAID-6 drastically reduced the probability of rebuild failure. With modern cache-based systems, the write-penalty associated with additional parity generally does not impact application performance. As clients transition from faster 15K drives to slower, higher capacity 10K and 7200 rpm drives, I highly recommend using RAID-6 instead of RAID-5 in all cases.
As I have mentioned before, I started this blog on September 1, 2006 as part of IBM's big ["50 Years of Disk Systems Innovation"] campaign. IBM introduced the first commercial disk system on September 13, 1956 and so the 50th anniversary was in 2006. That means this month, IBM celebrates the "Diamond" anniversary, 60 years of Disk Systems!
"For those who missed it, IBM announced last Tuesday encryption capability for the TS1120 drive, our enterprise tape drive that reads and writes 3592 cartridges. Do you need special cartridges for this? No! Use the same ones you have already been using!
You can read more about it at www.ibm.com/storage/tape."
Short and sweet, but it got me started, and I ended up writing 21 blog posts that first month. You can read blog posts from all 10 years by looking at the left panel of my blog under "Archive".
While traditional disk and tape storage are still very important and relevant in today's environment, IBM has also expanded into other technologies:
In 2012, IBM [acquired Texas Memory Systems]. In 2014, IBM shipped 62PB, more Flash capacity than any other vendor. In 2015, IBM continued its #1 status, shipping 170PB of Flash, again more than any other vendor.
IBM has flash everywhere, from the advanced FlashSystem 900, V9000, A9000 and A9000R models, to other all-flash arrays and hybrid flash-and-disk systems with various sets of features and functions to meet a variety of workload requirements.
The DS8888 all-flash array, and the DS8886 and DS8884 hybrid flash-and-disk systems, round out the latest in the DS8000 storage systems family. The SAN Volume Controller and Storwize family of products, based on IBM Spectrum Virtualize software, also have all-flash and hybrid configurations, the most recent being the Gen2+ models of the Storwize V7000F and V5030F. The latest addition is the DeepFlash 150, designed for analytics and unstructured data.
Between internally-developed IBM Spectrum Scale and IBM Spectrum Archive, and IBM's [acquisition of Cleversafe], IBM is ranked #1 in Object Storage. IBM Cloud Object Storage System, IBM's new name for Cleversafe's flagship product, is available as software-only, pre-built systems, or in the IBM SoftLayer cloud.
Software-Defined Storage (SDS) with IBM Spectrum Storage
Last year, IBM re-branded its various storage software products under the "IBM Spectrum Storage" family. Earlier this year, IBM announced the new [IBM Spectrum Storage Suite license] which makes it even easier to procure, either with a perpetual software license, elastic monthly licensing, or utility license that combines some of each.
IBM is ranked #1 in Software-Defined Storage, with over 40 percent market share, offering solutions as software-only, pre-built systems, and in the IBM SoftLayer cloud.