This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years at IBM, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI Market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private; he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
(FTC Disclosure: I work for IBM. I have no financial interest in SUSE, Scality, or any other storage vendor mentioned in this post. This blog post can be considered a "paid celebrity endorsement" for the IBM Storwize, IBM Cloud Object Storage, and IBM Spectrum Storage software mentioned below.)
The study takes a realistic request for 250 TB of storage, at a 25 percent compound annual growth rate (CAGR), to store infrequently accessed data in an online archive, and then looks at the Total Cost of Ownership (TCO) over a five-year period.
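To see what that growth rate implies, here is a quick sketch of the capacity the five-year TCO must cover, assuming simple year-end compounding from the 250 TB starting point:

```python
# Project storage capacity growth at a compound annual growth rate (CAGR).
# Assumes simple year-end compounding from the study's 250 TB starting point.

def projected_capacity(start_tb: float, cagr: float, years: int) -> list:
    """Return the capacity at the end of each year, including year 0."""
    return [start_tb * (1 + cagr) ** year for year in range(years + 1)]

capacities = projected_capacity(250, 0.25, 5)
for year, tb in enumerate(capacities):
    print(f"Year {year}: {tb:,.1f} TB")
# Year 5 ends at roughly 763 TB -- more than triple the initial request.
```

That tripling is why the support and expansion costs matter so much over the study period.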
The study compares five software-defined solutions and three pre-built systems. The software-defined solutions come as software-only, requiring that you purchase the hardware separately and build the system yourself. The three pre-built systems were chosen from the top three storage vendors in the marketplace: Dell EMC, IBM and NetApp.
The cost of support is factored in, as it should be. To keep things equal, no data reduction techniques, such as deduplication or compression, were used.
In an odd approach, the study mixes block, file and object storage in the same comparison.
You can read the full 14-page study (linked above). I have organized the results into a single table, ranked from best to worst, color coded for the best deals in green ($100K to $200K), moderate solutions in yellow ($200K to $300K) and most expensive in red (over $300K). I put the software-only options on the left and pre-built systems on the right.
SUSE Enterprise Storage 4
IBM Storwize V5010
DataCore SAN Symphony
Red Hat Ceph Storage
Dell EMC Unity 300
I am often asked, "Isn't the software-only, build-it-yourself approach, always the lowest cost option?" Now, I can answer, "Sometimes yes, sometimes no." Fortunately, IBM offers Software-Defined Storage in a variety of packaging options including software-only, pre-built systems, and in the Cloud as a service.
IBM Storwize V5010 is based on IBM Spectrum Virtualize software, which you can deploy as software-only on your own x86 servers. This was not mentioned in the study, and perhaps it is my job to remind people that this option is also available for those who want to build their own storage.
For that matter, IBM Cloud Object Storage System -- available as software-only, pre-built systems, and in the Cloud -- might also be a cost-effective alternative.
Next week I will be in Orlando, Florida for the IBM Systems Technical University. If you are attending, stop by one of my presentations, or look for me at the Solution Center at one of the IBM peds, or attend the "Meet the Experts for IBM Storage" on Thursday!
The first official day of the [Systems Technical University 2014] conference had keynote sessions in the morning. The conference features experts from IBM Power Systems, IBM System x, IBM PureSystems, and IBM System Storage.
The keynote sessions were started with Amy Purdy, IBM Director of Technical Training Services, the group that is running this conference.
This conference is not focused on System z solutions, as many System z clients were in New York City for the mainframe's 50th birthday event, but System z came up several times during the keynote sessions.
(FTC Disclosure: I work for IBM, and this blog post may be considered a paid, celebrity endorsement of IBM products and services. IBM has business relationships with both Intel and Amazon, mentioned during the course of the keynote sessions, but I have no financial stake in either company. I was the chief architect for DFSMS, the storage management component of the z/OS mainframe operating system, and was part of the team that ported Linux to the System z mainframe.)
Nicolas Sekkaki, IBM Vice President of Systems and Technology Group in Europe, discussed IBM's commitment to clients' privacy, the x86 and POWER server platforms, and a variety of mind-boggling announcements. He is focused on three trends: Big Data, Cloud, and Mobile.
IBM is focusing its hardware efforts on high-value, high-margin solutions such as System Storage, POWER Systems and System zEnterprise mainframe environments. Did you know that 65 percent of the world's business transactions are processed by either POWER systems or System zEnterprise mainframe?
IBM is also extending its continued focus on Linux and Open Source initiatives. For the System zEnterprise mainframes, 78 percent of our clients run Linux on System z. Over 290 clients have added the "zBX" option that allows them to run Windows and AIX on the mainframe as well. It is now less expensive to run workloads on System zEnterprise -- about 1 dollar per day per server -- than public cloud offerings from Amazon Web Services. Linux on POWER also has lower Total Cost of Ownership (TCO) than Linux-x86.
Nicolas also mentioned major changes for the POWER Systems, starting with the [OpenPOWER Consortium], formed by IBM, Google, Mellanox, NVIDIA and Tyan.
The move makes POWER hardware and software available to open development for the first time as well as making POWER Intellectual Property licensable to others, greatly expanding the ecosystem of innovators on the platform. The consortium will offer open-source POWER firmware, the software that controls basic chip functions. By doing this, IBM and the consortium can offer unprecedented customization in creating new styles of server hardware for a variety of computing workloads.
IBM POWER has switched from being "Big Endian" to being "Bi-Endian", allowing operating systems to choose between "Big Endian" or "Little Endian" modes. The Big Endian mode allows for Linux compatibility with the System zEnterprise mainframe, and the Little Endian mode for compatibility with Linux-x86.
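What endianness means in practice: the same 32-bit integer is laid out in memory with its most significant byte first (Big Endian) or last (Little Endian). A purely illustrative sketch using Python's struct module:

```python
# Illustrate the difference between Big Endian and Little Endian byte order
# for the same 32-bit integer value.
import struct

value = 0x01020304

big = struct.pack(">I", value)     # Big Endian: most significant byte first
little = struct.pack("<I", value)  # Little Endian: least significant byte first

print(big.hex())     # 01020304
print(little.hex())  # 04030201
```

A Bi-Endian processor can run operating systems built for either layout, which is what enables the compatibility story on both sides.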
Thorston Kahrmann, Intel Account Director for EMEA, presented Intel's rich history of collaboration with IBM, from technologies like Bluetooth and PCIe Generation 3, to platforms like BladeCenter and NeXtScale, to industry standards.
IBM had a lot of "firsts" in the x86 server area, including the first 16-processor server, the first to offer hot-swap memory, and over 100 leading performance benchmarks.
The latest Intel Xeon chip is the E7 version 2. For example, changing from DB2 v10.1 on the old E7, to running DB2 BLU columnar acceleration on the new E7 version 2, resulted in a 148 times increase in performance. A query on a 10TB database that previously took four hours was completed in under 90 seconds.
Thorston also wanted to remind the audience that nearly every System Storage product from IBM, from the high-end XIV, SAN Volume Controller, SONAS and FlashSystem V840, to the midrange and entry-level Storwize products, is based on Intel's x86 processors.
Louise covered the findings from the latest 2012 CEO study, gathering insight from 1,709 CEO interviews. The major focus areas for CEOs are:
Empowering employees through company-wide values
Engaging customers as individuals, rather than via demographics
Amplifying innovation with strategic and tactical partnerships
With smartphones, tablets and ubiquitous Internet access, everyone is now a technologist, so that IT is now becoming a competitive differentiator. IT projects and Business projects are no longer separate. If your IT department is seen as an expense, it will continue to get its budget cut. If, however, your IT department is part of your revenue stream, then it can be viewed as an asset.
Sadly, over 75 percent of IT projects fail: they are way over budget, delivered late, or both. Business leaders are pushing for IT improvements, but CIOs are often too afraid to take the risks needed to move the business forward. Louise cited three reasons for this, which she called the three C's:
The IT and Business leaders did not fully understand the context of the project.
The content of the project was not properly defined between IT and Business architects.
The collaboration between IT and Business personnel was not properly established.
Louise wrapped up her session by asking a simple question: How much does a light bulb cost? Some might focus on the cost of the bulb itself, others might add the cost of maintenance, including the ladders and personnel to replace bulbs as needed, and still others might include the electricity consumed. Both Business and IT leaders need to focus on Total Cost of Ownership (TCO) in their planning.
Amy Hirst, IBM Director, z Systems, Power, & Storage Technical Training, kicked off the general session.
Dr. Seshadri "Sesha" Subbanna, IBM Corporate Innovation and Technology Evaluation, asked the audience what capability is needed to drive business growth. A recent poll indicated that the ability for businesses to innovate was the number one response.
The IT industry has had its own version of growth. Consider that the Apollo 11 [Guidance Computer] used to land a man on the moon had just 4 KB of RAM and 36 KB of ROM. A typical smartphone has 62,000,000 times as much.
The Apollo missions led and motivated integrated-circuit technology, but Dr. Subbanna feels that Silicon may soon, perhaps within the next 10 years, run its course. Today, both POWER8 and z13 servers are based on 22nm chips. IBM has projected possible reductions to 17nm, 13nm, 10nm, and finally 7nm. That's it: going smaller than 7nm may not be possible without hitting atomic-scale issues.
The City of Rio de Janeiro, Brazil is a good example. In 2010, heavy rains resulted in flooding and landslides that killed over 110 residents. To prevent such high death rates in the future, IBM helped the city government deploy predictive analytics and forecasting that allow "rain simulations" to see how well the city can handle different situations.
IBM is already looking for a more holistic view of systems, and new technologies like cognitive computing. New 3D technology allows various chip technologies to be stacked as layers on a single chip. For example, you could have compute on the bottom layer, flash non-volatile storage in the middle layers, and networking on the top layer. Connecting the layers is merely a matter of drilling holes and filling them with metal.
The idea that compute is the center of the universe, with a mainframe server surrounded by input and output "peripheral" storage devices, is giving way to a more storage-centric model, where central storage repositories (or data lakes) are accessed by "peripheral" smartphones, tablets and a variety of servers. For example, the IBM DB2 Acceleration Appliance acts as a storage-centric model: IBM z System mainframes connect to it, send data in, process complex database queries, and get results up to 2000x faster.
In another client example, IBM helped a bank in China to determine optimal placement of bank branches, based on public information of average salary levels of each neighborhood.
CPU processors are also getting help from co-processor accelerators like GPUs (Graphical Processing Units) and FPGAs (Field Programmable Gate Arrays). Comparing a single IBM POWER8 server that is CAPI-attached to an IBM FlashSystem against a stack of x86 servers with internal SSD, the POWER8 solution consumes 12x less rack space and 12x less electricity, and reduces per-user costs from $24/user on x86 down to $7.50/user on POWER8.
Social media, mobile phones and the Internet of Things (IoT) generate a lot of data. If you then factor in the "context multiplier effect" of all the links, connections and cross-references, you quickly see that data is growing at incredible rates.
Another issue is the difficulty of identifying application inter-dependencies. Forecasting disruptive anomalies can be quite difficult. In one example, administrators received warning messages 65 minutes before a major outage, but they did not respond in time because they were unable to understand the full implications.
Cognitive computing is different from the tabulating and programming paradigms of prior decades. It focuses on Natural Language Processing, citing evidence to support its responses, and the ability to learn and improve from experience. The IBM Watson group is working with Memorial Sloan Kettering to help oncology doctors with cancer patients.
In an interesting demo, an IBM Watson computer analyzed thousands of "TED Talk" videos, and was able to respond to search queries by playing the 30-second video clip that most closely addressed the search topic.
Cognitive computing is also looking at "Neuro-Synaptic" chips that work very much like the neurons and synapses in the brain. I have seen some of this work already at the IBM Almaden Research Center in California.
The general session ended with a Q&A panel with Dr. Subbanna, Frank De Gilio, and Bill Starke.
This week, I am attending the [InterConnect Conference] in Las Vegas, Feb 21-25, 2016. This is IBM's premier Cloud & Mobile conference for the year.
Sunday, I attended a series from IBM Research talking about the latest research areas.
7110A Future Directions in Enterprise Mobile Computing
Gabi Zodik (IBM) presented. Mobile and wearables are transforming all industries. Enabling technologies are required to support the new computing models that are cognitive in nature. Real-time proactive decisions can be made based on the mobile context of a user. Driven by the huge amounts of data produced by mobile devices, the next wave in computing will need to exploit data and computing at the edge of the network.
Future mobile apps will have to be cognitive to "understand" user intentions based on all the available interactions and unstructured data. A new distributed programming paradigm is emerging to meet these needs, which has to deal with massive amounts of data and devices. While the compute and storage capacity on individual devices is small, collectively they exceed all of the servers and storage in Cloud datacenters.
7107A Wearables in the Enterprise
Asaf Adi (IBM) presented. Wearable technology is booming. It is only our imagination that will limit the number of industrial, military, consumer and healthcare applications for this new emerging technology. Wearables are transforming industries and professions, enabling new business opportunities. From a show of hands, half the audience was wearing smart technology already.
In one example, he focused on the construction industry. In the USA alone, there are thousands of workplace injuries, costing $190 billion. Wearable technologies can be incorporated into anything from a hardhat to a bright orange vest. In a steel mill, heat stress can be determined from ambient temperature and an employee's heart rate. Over time, we will have multiple wearables communicating with each other.
In another example, he made a hand gesture (waving his hand in front of his smartphone) and used it to generate a code fragment that software developers can use to detect that particular hand gesture in any application.
Wearables cannot assume they are always connected to the Cloud. Take for example mining, where miners are deep below the ground. Technology to ensure safety needs to work regardless of connectivity.
Privacy is also a big concern. Wearables should not be used by employers to monitor every movement and activity of the employees.
7152A Cognitive IoT -- Today, Tomorrow and Beyond
Alessandro Curioni (IBM) presented. Today's sensors aren't up to the task of unlocking the complex links between people, places and things. To reach the next level, we need technologies that enable them to gather and integrate data from many sources, to reason over that data, and to learn from it. IBM calls this the Cognitive Internet of Things (IoT).
We already know IoT data can be used to predict maintenance needs, but what if it can also help designers engineer more reliable products from scratch? In addition, with advancements in nanotechnology and machine learning we can bring the power of cognitive to the edge—where the data is collected. Imagine tiny edge computers providing Watson services on every sensor?
It is estimated that we have 13 billion IoT sensors today, and that this will more than double to 29 billion by 2020. This introduces new security threats, new levels of employee engagement, and fundamental shifts in business models.
Sadly, 88 percent of all IoT data is dark, meaning that it is not collected or processed for analysis. While the IT industry has done amazing things with the other 12 percent, we realize that current programming techniques are too limited.
That is why cognitive is needed to unleash the value of the data. IBM Watson offers excellent capabilities, including Natural Language Processing (NLP), Machine Learning (ML), Image/Video analytics, and Text Analytics.
Manufacturers like Whirlpool are investigating use of IoT for home appliances, like refrigerators, washers and dryers. This is just the beginning, other industries including Healthcare, Retail, Oil, Mining and Farming will also benefit.
7108A Blockchain and the Future of Finance
Ramesh Gopinath (IBM) presented. Transferring products and funds today is inefficient, expensive, and vulnerable. Blockchain is an emerging fabric for transaction services. It has the potential to radically transform multi-party business networks, enabling significant cost and risk reduction and innovative new business models.
About 18 months ago, the "Blockchain" concept was not ready for business. Since then, the Linux Foundation has accepted the "HyperLedger" project, with 17 founding companies.
Imagine a company in China or India exporting a product to a company in the USA. There may be 10 or more companies or agencies involved, including multiple banks, port authorities, trucking companies, and so on. To hand off the equipment and ensure all parties are paid, some 30 different paper documents may be needed. Each company maintains its own set of records, and all the middlemen take their cut.
Blockchain represents a digitally-signed, encrypted, immutable "ledger" that records all of the steps related to a particular transaction. Since each new block has a checksum of all of the previous blocks, it prevents tampering and fraud. All parties have access to all of the ledger, eliminating discrepancies between different repositories of records.
This can be used to sell stocks, buy real estate, or transfer funds to your family overseas. Each party involved has a node in a peer-to-peer network that can access the shared Blockchain. A user initiates a transaction, and the nodes in the network validate it using a Practical Byzantine Fault Tolerance [PBFT] protocol.
By providing [disintermediation], with fewer middlemen in the process, Blockchain reduces costs, processing time, and risk. The method preserves the user's transactional privacy, but also ensures accountability and auditability.
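The tamper-evidence property comes from each block embedding a hash of its predecessor, so changing any historical record breaks every link after it. A minimal Python sketch of that chaining idea (purely illustrative; not how any production ledger is actually implemented):

```python
# Minimal hash-chain sketch: each block records the hash of the previous
# block, so tampering with history invalidates all downstream links.
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: dict) -> None:
    """Append a new block that records the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def verify(chain: list) -> bool:
    """Recompute every link; any tampering breaks a downstream hash."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
append_block(ledger, {"step": "goods shipped"})
append_block(ledger, {"step": "customs cleared"})
append_block(ledger, {"step": "payment released"})
print(verify(ledger))                           # True
ledger[1]["data"]["step"] = "customs bypassed"  # tamper with history
print(verify(ledger))                           # False
```

Real systems add digital signatures and a consensus protocol (such as PBFT) on top of this chaining, so no single party can rewrite the shared ledger.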
7234A Building Cloud Infrastructure for Next-Generation Workloads
Krishna Nathan (IBM) presented. Today's cloud providers are efficient at providing today's cloud services at low costs. However, this efficiency comes with the penalty of inflexible instance types and no real guarantees on performance or quality of service.
Today's systems are organized and optimized for transactional processing, a result of evolution of the past 60 years. Relational Databases offer specific features like Atomicity, Consistency, Isolation, and Durability, known collectively as [ACID].
However, we are expanding beyond "automating our world" to "understanding our world". This means tapping into the 90% of workloads that are unstructured, with multi-modal scanning, noise tolerance, variable precision and probabilistic outcomes.
Cloud Providers have used the "best practices" of transactional datacenters. Consequently, next-generation workloads that often do not share the characteristics of traditional workloads are limited in expressing their full potential because of these infrastructure limitations. Now they need to focus on four characteristics: Locality, Composability, Heterogeneity, and Dynamic resource allocation.
New workloads need a combination of CPU, GPU, NVMe, and other resources. How do you schedule which equipment to deploy for incoming workload processing that optimizes performance? By taking these factors into account, clever Cloud providers can optimize performance results to provide best fit for each workload request.
7135A Storing and Using Data in the Cloud -- Putting Together the Puzzle Pieces
Michael Factor (IBM) presented. What do OpenStack Swift, Spark, CouchDB, Kafka and ElasticSearch have in common? They are all open source, they are all available on IBM's cloud today, and they all focus on storing and using data. The trick, though, is putting these puzzle pieces together to solve real problems. You need smart integration between data services, motivated by real examples from domains such as IoT, transport and retail.
There are a plethora of open services to manage data. A recent IDC analyst study indicates that the world's data will grow from 8.6 Zettabytes today to 40 Zettabytes in 2020. Michael gave some eye-opening comparisons. If the data were stored on 10-TB hard disk drives, we could make some physical comparisons:
Imagine stacking all of those disk drives one on top of another, like a stack of books. The stack today would be 22,000 kilometers tall, more than halfway to geosynchronous satellites; in 2020 it would be over 100,000 kilometers, well past those satellites.
The weight of those drives today would be comparable to 1,450 Airbus 380 airplanes. In 2020, they would weigh as much as 6,755 Airbus 380 airplanes.
If the drives were spread across the entire Mandalay Bay convention center floor, they would be 1.7 meters deep today (about 5 feet), but would be 8 meters deep in 2020.
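The stack-height comparison is easy to sanity-check yourself, assuming a standard 3.5-inch drive about 2.6 cm thick (my assumption, not a figure from the talk):

```python
# Back-of-envelope check of the drive-stack comparison, assuming 10 TB
# drives in a standard 3.5-inch form factor (about 2.6 cm thick).
ZB_IN_TB = 1_000_000_000      # 1 zettabyte = 10^9 terabytes
DRIVE_TB = 10
DRIVE_THICKNESS_KM = 2.6e-5   # 2.6 cm expressed in kilometers

for label, zb in [("today", 8.6), ("2020", 40)]:
    drives = zb * ZB_IN_TB / DRIVE_TB
    stack_km = drives * DRIVE_THICKNESS_KM
    print(f"{label}: {drives:.2e} drives, stack ~{stack_km:,.0f} km")
# today: ~22,360 km; 2020: ~104,000 km -- matching the figures above.
```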
He gave an example of the EMT Madrid bus company using real-time sensors to react to traffic conditions.
Here are the various pieces:
OpenStack Swift -- provides object storage
ElasticSearch, based on Apache Lucene -- a search engine, for example for metadata queries
Apache Spark -- combines SQL, streams and complex analytics, with filter pushdown support
Apache Parquet -- a column-based data format to replace the row-based Comma-Separated Values (CSV) format
Apache Kafka -- a message bus; works with dashDB and Secor
Beyond programming "glue", we need smart integration to get an order of magnitude boost in performance.
"Information is moving—you know, nightly news is one way, of course, but it's also moving through the blogosphere and through the Internets." --- George W. Bush
As multinational companies transition to become globally integrated enterprises, information is going to move across national boundaries. Laws that pertain to how data is stored and accessed need to be addressed.
Jon W. Toigo over at DrunkenData.com discusses an interesting proposal on Google censorship. The New York Sun reports that NYC comptroller William Thompson Jr. is targeting both Google and Yahoo over their policies of abiding by the local laws in each country they do business in. The proposal includes asking Google to fight local laws, publicize when Google complies with local laws, and publicize when local governments ask Google to comply with their laws. While Toigo focuses on Google, this issue applies to Yahoo, Microsoft, and many other companies that do business in multiple countries.
I admire when government officials use diplomacy to influence the policy of other governments, and when individuals act to influence the policies of those who govern them, but Thompson is doing neither. In this matter, Thompson is trying to influence the policies of another government outside his jurisdiction, as a manager of investments in companies that do business there. Investors have two choices when trying to influence how companies do business:
Stop investing in those companies
Purchase shares, and vote your portion of the shares.
It appears Thompson is exercising the latter, proposing that this issue be brought to a shareholder vote via proxy. There can only be two results from such a vote:
Shareholders vote for it, and Google changes the way it does business in this and other countries, possibly ceasing to do business in countries that don't appreciate hegemony.
Shareholders vote against it, and Google continues its great balancing act, complying with local laws and its own corporate culture.
Did we forget that we have censorship in the USA as well? Would Thompson's proposals apply to the rules and regulations that our own government requires?
IBM does business in most, but not all, countries on this planet. In the countries we don't do business in, we have good reason not to. In the countries we do, we comply with all the laws that apply in each case. When I travel to these countries, including some of the countries specifically targeted by this proposal, I must abide by their laws. No exceptions.
The world is shrinking, and technologies now allow companies to become globally integrated. Before writing "The World Is Flat", Thomas Friedman wrote a book titled "The Lexus and the Olive Tree", which covers the various issues related to conflicts between global companies and the countries and cultures they do business in.
This reminds me of the wisdom of the Prime Directive, introduced in the late 1960s on the popular TV show "Star Trek". The concept was simple: honor the sovereignty of other cultures, on other worlds, and play by their rules when you are on their planet. I say "wisdom" in that it took me years to truly appreciate this idea. Initially, I considered it just a plot device to introduce conflict each time the captain and crew of the starship "Enterprise" visit a new location and discover a culture different from their own. But over the years, as I have traveled to many countries, I began to see and understand the wisdom of the "Prime Directive", and it applies as much now, in real life, as it did back then in the futuristic 1960s TV show.
Who are we to say that our way of doing things is the one and only way to do them?
New Generation Storage Tiering: Less Management, Lower Investment and Increased Performance
This was not just an update of my session last year in Brussels, Belgium. Rather, I decided to start over and focus on I/O density as the key metric, armed with real data from Intelligent Storage Tiering Analysis (ISTA) studies done at various clients. From that, I was able to talk about storage tiering on three fronts:
Storage tiering between Flash and disk. IBM FlashSystem and IBM Easy Tier on DS8000 and Storwize family for hybrid Flash-and-disk configurations.
Storage tiering between disk and tape. HSM and Information Lifecycle Management (ILM) on SONAS, Storwize V7000 Unified and LTFS-EE.
Storage tiering automation across your entire environment. ISTA studies can help identify a target mix of Tier 0, Tier 1, Tier 2 and Tier 3 storage. SmartCloud Virtual Storage Center can recommend or perform the movement of LUNs to more appropriate tiers, based on age and I/O density measurements.
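The I/O density metric (IOPS per GB) is what drives a tier recommendation for each LUN. A simplified sketch of that classification logic, with hypothetical threshold values and LUN names chosen purely for illustration (not ISTA's actual thresholds):

```python
# Classify LUNs into storage tiers by I/O density (IOPS per GB).
# The thresholds and sample LUNs below are hypothetical illustrations.

def io_density(iops: float, capacity_gb: float) -> float:
    """I/O density: how hot the data is per unit of capacity."""
    return iops / capacity_gb

def recommend_tier(density: float) -> str:
    if density >= 1.0:
        return "Tier 0 (Flash)"
    if density >= 0.1:
        return "Tier 1 (enterprise disk)"
    if density >= 0.01:
        return "Tier 2 (nearline disk)"
    return "Tier 3 (tape archive)"

# Hypothetical LUNs: name -> (average IOPS, capacity in GB)
luns = {"db_logs": (5000, 500), "home_dirs": (120, 4000), "backups": (2, 8000)}
for name, (iops, gb) in luns.items():
    print(name, "->", recommend_tier(io_density(iops, gb)))
```

Tools like SmartCloud Virtual Storage Center apply this kind of analysis continuously, also weighing data age, rather than as a one-shot classification.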
Next Generation FlashSystem 840 and V840, Architecture Deep Dive
Detlef Helmbrecht, from the IBM Advanced Technical Skills team in Germany, presented this deep dive in our latest IBM FlashSystem offerings. He started with an analogy. Latency is like a single car driving down an empty highway. IOPS, on the other hand, is like a lot of cars stuck in slow traffic, with all lanes filled on the autobahn. While there are more cars transported on a full highway, the individual cars are not driving very fast. Flash versus disk has similar comparisons.
Detlef explained the differences between the previous FlashSystem 810/820 and the new 840, and talked about the FlashAdapter 90, now available as a PCIe card.
Finally, we talked about SAN Volume Controller combined with Flash, and the new FlashSystem V840 which combines SVC and FlashSystem 840 to have an incredibly function-rich, robust solution.
Data Footprint Reduction - Understanding IBM Storage Efficiency Options
My last session of the week! This session covered all of the various technologies for data footprint reduction, including Thin Provisioning, space-efficient FlashCopy and snapshots, Real-time Compression and data deduplication. Frankly, I wasn't expecting many people to attend the last session of the last day, but nearly 50 percent of the seats were filled, so I was quite pleased with the turnout.
Fun Fact: Istanbul is considered by TripAdvisor in 2014 as the #1 most popular city to visit in Europe!
My session on IBM Cloud Object Storage had three sections. First, I covered an overview of what "Object Storage" is in general, and how it differs from traditional block or file storage approaches.
Second, I explained what is unique and different about the IBM Cloud Object Storage System, formerly called dsNet from Cleversafe. IBM acquired Cleversafe in 2015.
Third, I explained the various applications, use cases and industries that can take advantage of Object Storage.
IBM Storage and the NVMe Revolution
Brian Sherman, IBM Distinguished Engineer for Storage Advanced Technical Services, presented an overview of NVMe, NVMe Over Fabric (NVMeOF) and what IBM is doing in this area.
How to Build a Rockstar Personal Brand
Andrea Edwards, The Digital Conversationalist, is a globally award-winning B2B communications professional with more than 20 years of experience from around the globe, including 12 years exclusively in Asia Pacific. IBM has hired her in the Asia Pacific region to train many IBMers in social media.
She condensed her normal 5-6 hour training down to a single hour for this event. She explained why building a personal brand was important, how to do it, and why businesses and organizations should encourage their employees to do so.
For example, who has the most influence on most people? After friends and family come bloggers. Bloggers are more influential than journalists, religious leaders, celebrities and politicians.
(As the #1 blogger of IBM, I am considered to already have a "rockstar personal brand". I am pleased to see that IBM is taking social media seriously. I have been blogging since 2006, and have influenced over $4 billion US dollars in IBM revenue in the past 11 years.)
IBM Spectrum Virtualize technical updates
Andrew Martin, IBM Spectrum Virtualize Support Architect, presented the last 18 months of enhancements to Spectrum Virtualize, from v7.6.1 introduced in March 2016 to v7.8.1 released earlier this year.
He managed to highlight quite a few enhancements:
Distributed RAID 5 and RAID 6
Integrated Compresstimator tool
New hardware: SVC, Storwize V7000 Gen2+, Storwize V5000 Gen 2, and 92-drive 5U High Density Expansion Enclosure
N-Port ID Virtualization (NPIV)
Virtualization Over iSCSI
Encryption for Distributed RAID Arrays
64GB Read Cache
Tier 1 Flash Support
Compressed IP Replication
Spectrum Virtualize as Software for Lenovo and SuperMicro servers
Host Clusters and Throttling
Raised limit to 10,000 Volumes
Transparent Cloud Tiering
Storwize Model Conversions
IBM SKLM Support for Encryption
Consistency Protection for Metro and Global Mirror remote-distance replication
Andrew called this a "reverse roadmap": rather than presenting where we are going in the next 18 months, he presented where we have been.
Solution Center Reception
Here I am with Morgan Tracey and Jenna Brooker from Computer Merchants, an IBM Business Partner.
Not only were Computer Merchants a sponsor with a booth at the Solution Center, but they also gave a customer testimonial at one of the breakout sessions on how they were able to use IBM Artificial Intelligence to help with their business.
I also spent time at the SuSE booth. SuSE is a distributor of Linux that runs on x86, POWER and IBM Z mainframe systems.
While I was working, Mo took a tour to Phillip Island. On the way, they stopped at Maru to feed kangaroos and take pictures with koalas.
At Phillip Island, Mo watched penguins come out of the ocean, waddle up on shore and march to their burrows. This happens every evening and is one of the top tourist attractions near Melbourne.
This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Here's my recap of the afternoon sessions of Day 1.
Storage Brand Opening Session - Craig Nelson
Craig Nelson, Brocade manager for the IBM Field Sales Channel, indicated that network equipment is the bridge that brings servers and storage together.
The squeeze -- faster servers and flash storage cause the storage network to become the bottleneck. Fibre Channel will remain the protocol of choice for the next decade.
"Speed is the net currency of Business" -- Marc Benioff, Salesforce CEO.
Craig drew an analogy. We have been focused on making hard disk drives faster, and then Flash changed the game. Likewise, car manufacturers have focused on making gas engines better, and then Tesla Motors introduces an electric car with insane performance. The early models actually had an "Insane Mode".
The new Gen6 models of IBM b-type SAN equipment will support 32Gbps and 128Gbps ports. That's Insane!
Later models of Tesla Motors offer a "Ludicrous Mode". For flash storage, it is NVMe. NVMe can get storage down to 20 microsecond latency. That's Ludicrous!
Craig put in a plug for two Brocade sessions: "BEWARE - The four potholes on your road to success when deploying flash storage" and "Tune up your storage network! Is it healthy enough for flash storage and next-gen server platforms?"
Storage Brand Opening Session - Clod Barrera
Clod Barrera, IBM Distinguished Engineer and Chief Technical Strategist, presented storage industry trends.
IDC predicts data capacity to grow at 60-80% CAGR. This would require a 44 percent drop in $/GB per year to maintain a flat budget. Unfortunately, flash media cost is only dropping 25-30 percent per year, and spinning disk only 19 percent per year.
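The flat-budget arithmetic can be checked directly: if capacity grows by a factor g in a year, the price per GB must fall to 1/g of its former value, a drop of 1 - 1/g. A quick sketch:

```python
# Checking the flat-budget arithmetic: if capacity grows by a given rate,
# $/GB must drop by 1 - 1/(1 + rate) for total storage spend to stay flat.
def required_price_drop(capacity_growth_rate):
    g = 1 + capacity_growth_rate
    return 1 - 1 / g

# At the high end of IDC's prediction (80% growth), roughly a 44% drop
# in $/GB is needed -- well beyond the 25-30% flash is actually delivering.
print(round(required_price_drop(0.80) * 100, 1))  # 44.4
```

This is why the next paragraph's conclusion follows: media price declines alone cannot absorb the capacity growth, so efficiency technologies must make up the gap.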
Since storage media will not offset capacity growth, we need other technologies to compensate, including compression, deduplication, defensible disposal, and "cold" storage to tape or optical media.
The smallest persistent storage that IBM has been able to achieve is 12 atoms. Current disk technology is 1200 atoms. Since 1956, IBM and the rest of the IT industry have improved storage 9 orders of magnitude, and now there are only 2 orders of magnitude left.
Clod poked fun at the "Star Wars: Rogue One" movie, indicating that their idea of the future of storage was a huge tape library. See my December 2016 blog post [Has your data gone rogue?]
What does it take to store information forever? Tape will certainly be around. IBM Zurich demonstrated a 220TB tape cartridge back in 2015 as a proof of technology.
A good example of the need for long-term retention is US films. Of those from the silent era, over 90 percent are lost. Over half of the films made prior to 1950 are lost. The silver nitrate film stock that the reels were made of has deteriorated. Now that more movies are made digitally, can we do better?
Clouds will move from 10GbE to 25GbE. There is no slowdown for FC in datacenters. Flash storage and object storage are both growing quickly.
Move over Software-Defined Storage, Converged and Hyperconverged systems; the new up-and-coming thing is "Composable Systems deployed in Pods", adjustable hourly by workload requirements.
To protect against ransomware, use "air gap" protection, keeping copies on media that are not on the same network as production workloads.
New storage models are needed for Cognitive workloads. Clod put in a plug for Joe Dain's presentation "Introducing cognitive index and search for IBM Cloud Object Storage leveraging Watson"
Storage Brand Opening Session - Axel Koester
Axel Koester, IBM Storage Chief Technologist, presented more storage industry directions.
What will the world look like in 10 years? Today, computing is mostly procedural programming, with some statistical big data and a bit of machine learning. In 10 years, it will be mostly statistical and machine learning, with very little procedural programming. Why? Because it is faster to train computers with machine learning than to program them procedurally.
Examples of machine learning are IBM Watson, Google AlphaGo, and Drive.ai. Axel would rather be a passenger in a machine-learned self-driving car than in a procedurally-programmed one.
Neural networks can interpret hand-written numbers. Welcome to "unsupervised learning".
A subset of Machine Learning is Deep Learning, a major breakthrough in 2006, which uses three or more layers of neural networks. For example, the "deep learning" algorithms used for face recognition can also be used to detect defects through visual inspection of circuit boards.
How does this impact storage?
Procedural -- archive test cases used
Statistical -- store all data for parallel processing
Machine Learning - train sample data, then archive and re-train yearly. Driving 5 minutes = 4 TB of sensor data used for self-driving cars
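The sensor-data figure above implies a striking sustained data rate, which is easy to check as back-of-envelope arithmetic (using decimal TB):

```python
# Back-of-envelope check on the self-driving-car figure: 4 TB of sensor
# data generated in 5 minutes of driving implies this sustained rate.
terabytes = 4
seconds = 5 * 60
rate_gb_per_s = terabytes * 1000 / seconds  # decimal TB -> GB per second
print(round(rate_gb_per_s, 1))  # 13.3
```

Over 13 GB per second of raw sensor data is why storage architecture matters so much for training these systems.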
For neural processing, x86 CPUs are suitable for prototyping. GPU co-processors are better: efficient, but uncommon. IBM has developed the "TrueNorth" chip, which does nothing but neural processing: 4096 cores with only 70 mW of energy consumption. There is no clock; instead, it has dendrites, synapses, axons and neurons.
Instead of "Build or Buy?" the new question is "Train or Buy?" Train with confidential data, or buy ready-to-run 100% pre-trained cognitive systems as a service.
AI frameworks are available in Docker containers with Kubernetes and persistent storage (Ubiquity), such as Spectrum Scale. These frameworks include DL4J, Chainer, Caffe, Torch, Theano and TensorFlow.
NVMe -- NVM is local only, so how do you provide HA and DR? There are three options:
DB asynchronous shadowing
DB mirroring over NVMeOF
Cluster file system replication of persistent data, such as IBM Spectrum Scale
As an example, a car manufacturer runs 50 SAP HANA in-memory instances on 4 Spectrum Scale nodes. IBM achieved 50,000 new files per second; most NAS systems do much less.
Faster media on smaller electronics: Holmium atoms on magnesium oxide on a silver base, resulting in "single-atom storage." An STM needle tip magnetizes the atom, which is measured with tunnel magnetoresistance. Unfortunately, reading the data causes it to lose its value, so it is not as persistent as the 12-atom method described by Clod earlier.
As the title suggests, I explained why there is so much interest in Software-Defined Storage in the IT industry, what software-defined storage is, and how to deploy these solutions in your existing infrastructure without the full rip-and-replace. I covered which IBM products are available as software, pre-built systems and/or Cloud services.
I am back safely from my travels to New Zealand and Australia, and would like to wish everyone today a Happy [Earth Day]!
The Tucson area has been continuously inhabited for the past 3,500 years. One of the great challenges for this arid desert region is water. Recently, Tucson was selected for a [2013 IBM Smarter Cities Challenge] grant. Here is an excerpt from a blog post by Tucson Mayor Jonathan Rothschild titled [Ensuring Tucson's Water Future]:
"One critical area for cost-effective investment is technology. We are converting all of our customer water meters to digital in order to reduce the amount of labor required to manually read all the 225,000 customer meters each month. And we are replacing our Supervisory Control and Data Acquisition (SCADA) system in order to improve our ability to control and manage our water distribution system.
I was pleased that Tucson was selected for a 2013 IBM Smarter Cities Challenge grant. As a result, a team of senior IBM executives came to Tucson for three weeks to listen to our story, learn about our water system and lend their expertise. They came from North Carolina, Texas, New York, California and Virginia to learn about how one of the most arid American cities is setting the standard for wise water use. The IBM team lived in our community and worked with the Tucson Water Department. They learned a great deal and helped us even more.
The Smarter Cities team's final report delivered exactly what we were looking for. It contained a roadmap with both shorter and longer term recommendations. The report did not recommend additional investments beyond our means, but it did make an effective case for the timing and scheduling of our planned investments – recommendations which will help us achieve better near-term results while we develop sustainable practices for this ongoing project. The four areas of improvement detailed in the roadmap were:
Improve customer service with automated metering
Modernize our meter management systems
Implement advanced operations management systems
Build additional capacities for our existing information technology systems
It's clear that IBM has made a strategic decision to focus on the opportunities and challenges facing cities around the world through its Smarter Cities program. They understand that a city is a 'system of systems,' and that comprehensive analyses of the ways these systems interact with one another and with the populations they serve are critical to improving the quality of life of citizens everywhere. IBM's selection of Tucson as a global smarter city has given us the chance to demonstrate that we have some of the highest standards for resource management, conservation, financial planning and community engagement for municipal water departments anywhere in the United States."
While this is certainly good for the environment, IBM's focus on helping the Earth become a smarter planet has been good for its bottom line as well. According to the latest 1Q 2013 financial results, IBM revenues related to Smarter Planet initiatives, including the Smarter Cities campaign, have increased 25 percent year-to-year.
This week, I am presenting at the IBM Systems Technical University for Storage and POWER Systems. This conference is being held in New Orleans, Louisiana, October 16-20, 2017, at the beautiful Hyatt Regency.
This is my recap for sessions on Day 2 morning.
FlashSystem A9000 and A9000R Overview
Andy Walls, IBM Fellow, CTO and Chief Architect, and Brent Yardley, IBM STSM and Master Inventor, co-presented this session. This was the "deep dive" on the A9000/R, a continuation of the one they did yesterday.
The Pendulum Swings Back -- Understanding converged and hyperconverged integrated systems
With IBM's partnership with Nutanix, this has become a particularly popular topic. I cover the last 50 years of storage evolution, from internal storage and external storage to NAS and SAN storage networks.
More recently, people have been willing to give up all those gains for something simpler, less powerful, less reliable, less expensive. Enter Converged and Hyperconverged Systems. IBM PureSystems and VersaStack lead the pack for Converged Systems, along with IBM Spectrum Scale, Spectrum Accelerate and Nutanix on IBM Power Systems for Hyperconverged Integrated Systems.
New Generation of Storage Tiering -- Less Management, Lower Costs, and Improved Performance
There are orders of magnitude between the fastest All-Flash Array and the least expensive tape storage. Ideally, there would be a "slider bar" that allowed people to select from the fastest to the least expensive. IBM offers a variety of solutions to offer this "slider bar", with automation to move data as needed between tiers.
I start with IBM Easy Tier, available on DS8000 and Spectrum Virtualize products, to IBM Virtual Storage Center where advanced analytics moves data to the right location, to IBM Spectrum Scale which provides the ultimate tiering, across multiple locations, between flash, disk and tape.
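The "slider bar" idea behind automated tiering can be sketched as a simple heat-based placement policy. This is a toy illustration with made-up thresholds, not Easy Tier's actual algorithm:

```python
# Simplified sketch of an automated tiering policy (hypothetical thresholds,
# not Easy Tier's actual algorithm): place each extent on the tier matching
# its recent I/O "heat", from fastest flash down to cheapest tape.
def choose_tier(io_per_day):
    if io_per_day > 10_000:
        return "flash"   # hot data: fastest, most expensive tier
    elif io_per_day > 100:
        return "disk"    # warm data: middle of the slider bar
    return "tape"        # cold data: least expensive tier

extents = {"db-index": 50_000, "home-dir": 800, "2010-archive": 3}
placement = {name: choose_tier(heat) for name, heat in extents.items()}
print(placement)
# {'db-index': 'flash', 'home-dir': 'disk', '2010-archive': 'tape'}
```

Real tiering engines re-evaluate heat continuously and migrate extents in the background, but the cost/performance trade-off they automate is exactly this one.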
The lunches at these conferences are amazing, but then the "Big Easy" is known for its food!