This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years at IBM, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, Lloyd has been leading efforts across the Communication/CSI Market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist during his time in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this solution, a key part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
This week, IBM InterConnect conference is going on in Las Vegas, Nevada.
One time in Las Vegas, I took the gondola ride at the Venetian Hotel. These are not boats with a motor on a chain or track, but are actually steered and propelled independently by the gondolier. At various points on our path, our gondolier would serenade our group with beautiful Italian songs.
As the ride was ending, I asked our gondolier how long the training program for this job was. He told me "six weeks". I said, "Wow, I would love to learn how to sing Italian songs like that in six weeks". He corrected me: "No, silly, they only hire experienced singers, and spend six weeks teaching them to manage the gondola by turning the oar in the water."
(FCC Disclosure: I work for IBM. I have no financial interest in the Venetian Hotel, CBS Studios, or the producers of any television shows mentioned in this post. David Spark has provided me a complimentary copy of his book. This blog post can be considered an "unpaid celebrity endorsement" for the book reviewed below.)
InterConnect 2017 includes "Concourse", a trade show floor with people showing off the latest technologies. In the past 25 years, I have attended many conferences, and on occasion I have worked "booth duty". I am not in Las Vegas this week, so this post is advice to those that are.
One time, when the coordinators for an upcoming conference announced at an all-hands meeting that they were looking for "a number of knowledgeable and outgoing volunteers" to work the IBM booth, one of the employees in the audience asked, "How many of each?" While this might have been meant to draw laughs, it underscored a real problem.
In many IT and engineering fields, the terms "knowledgeable" and "outgoing" are seen as mutually exclusive: people are either one or the other. A study titled [Personality types in software engineering], by Luiz Fernando Capretz of The University of Western Ontario, analyzed Myers-Briggs Type Indicator personality data and found that the majority of engineers were "Introverts".
This line of thinking is further reinforced by the various characters on television shows like "The Big Bang Theory". If you are familiar with the show, Sheldon and Amy are the most knowledgeable, but also the most socially awkward, while Penny and Howard are less knowledgeable but at the more outgoing end of the spectrum.
I understand that for many engineers, working a booth at a trade show is far outside their "comfort zone". But what do you think is more likely, that you can train an engineer to work a booth in six weeks, be more outgoing, hold the right conversations, tell the right stories -- or -- train a professional model, a young, good looking man or woman, who is already outgoing and friendly, to answer technical engineering questions about your products and services?
I have been attending conferences for over 25 years, and occasionally have worked a booth or two. I started out as an engineer, but went through extensive training for public speaking, talking to the media and press, and moderating Q&A Expert panels.
Sadly, most people who work the booth get little to no training at all. You might be told your scheduled hours, how to scan bar codes on badges, and where the brochures and swag are stored. Then you get your official "shirt" and are told to wear it with a certain color of pants, so that everyone looks like part of the team.
Fortunately, fellow blogger David Spark, of Spark Media Solutions, has written a book titled "Three Feet from Seven Figures" with loads of advice on how to work a booth, using one-on-one engagement techniques to qualify more leads at trade shows.
The title of his book warrants a bit of explanation. When you are working a booth, potential buyers and influencers are walking by, often just three feet away from you, and these could represent million-dollar opportunities.
Too often, the folks working a booth take a passive approach. They look down at their phones, chat with their colleagues, and basically wait for complete strangers to ask them a question or request a demo. This non-verbal communication can really be a turn-off. David explains this in all-too-familiar detail and how to be more actively engaged.
David shows how to break the ice and build rapport with each attendee, how to qualify them as legitimate leads, and how to handle each type of situation.
For qualified leads, you need to maximize the opportunity. If you imagine how much a company spends to send its employees to work the booth, plus the cost of the booth itself, and divide it by the limited number of hours that the trade show floor is open, you quickly realize that each hour is precious.
Your time is valuable, and certainly their time is valuable too. Don't spend too much time on a single lead; capture the information, end the conversation, and move on.
If you are working a booth at IBM InterConnect, or plan to work a booth at an event later this year, I highly recommend getting this book! It is available in a variety of hard copy and online formats at [ThreeFeetBook.com].
This week, I am in Las Vegas for [Edge 2016], IBM's Premiere IT Infrastructure conference of the year.
Day 4, the last day of the conference, is only a partial day, and many people opted to leave on Wednesday evening, or Thursday morning instead. The breakfast and lunch meals had fewer people than the previous days. Here is my recap of day 4 Thursday breakout sessions.
Building Hyperconverged Infrastructure for Next-Generation Workloads
Supermicro is more than happy to customize these, upgrading the CPU, RAM, disk or networking connectivity as needed. This solution is roughly half the price of Nutanix, and offers a better Next-Business-Day/9am-to-5pm support package.
The last time I was in Las Vegas, I presented this topic at the [IBM InterConnect conference]. Back then, I was given only 20 minutes and was placed on the Solutions Expo showroom floor, competing with the noise and traffic of attendees heading to lunch.
This time was much better: a large room, and a bigger-than-expected audience given that the session was scheduled on Thursday morning.
Cloud storage comes in four flavors: persistent, ephemeral, hosted, and reference. The first two I refer to as "Storage for the Computer Cloud" and the latter two I refer to as "Storage as the Storage Cloud".
I also explained the differences between block, file and object access, and why different Cloud storage types use different access methods. I wrapped up the session covering the various storage solutions that IBM offers for all four Cloud Storage types.
IBM Storwize and IBM FlashSystem with VersaStack versus NetApp FlexPod
Norm Patten, part of the IBM Competitive Project Office Storage Team, presented a competitive comparison between VersaStack with IBM storage, versus FlexPod with NetApp storage.
Commodity Solid State Drives (SSD) and Shingled Magnetic Recording [SMR] offer low-cost, high-capacity storage.
However, they have their own set of problems, so IBM is developing software that can be included in IBM Spectrum Accelerate, Spectrum Scale, and Spectrum Virtualize to optimize their utility.
The concept of the Log-Structured Array (LSA) has been around since 1988; the IBM RAMAC Virtual Array used it back in the 1990s. NetApp's Write Anywhere File Layout (WAFL) is an implementation of the more general [Log-Structured File System] concept.
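The core LSA idea is simple: never overwrite data in place; instead, append every write to the end of a log and keep a mapping table from logical block addresses to log positions. Here is a toy Python sketch of that idea (class and method names are my own invention, and garbage collection and persistence are ignored entirely):

```python
class LogStructuredArray:
    """Toy illustration of a Log-Structured Array: writes always append."""

    def __init__(self):
        self.log = []   # append-only list of data blocks
        self.map = {}   # logical block address -> position in the log

    def write(self, lba, data):
        self.map[lba] = len(self.log)   # new copy supersedes the old one
        self.log.append(data)           # append; never overwrite in place

    def read(self, lba):
        return self.log[self.map[lba]]  # follow the map to the latest copy


lsa = LogStructuredArray()
lsa.write(7, b"old")
lsa.write(7, b"new")            # rewriting LBA 7 appends a second copy
assert lsa.read(7) == b"new"
assert len(lsa.log) == 2        # stale copy lingers until garbage collection
```

The stale copy of LBA 7 stays in the log until a garbage-collection pass reclaims it, which is exactly why free-space management and write amplification dominate real LSA designs.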
SALSA combines the Log-Structured Array with enhancements borrowed from the IBM FlashSystem design, which I covered in my Monday and Wednesday presentations, to improve write endurance by as much as 4.6 times!
This was an NDA session, so I cannot blog any of the details.
World-class Flash-optimized Data Reduction and Efficiency with IBM FlashSystem A9000 and A9000R
Tomer Carmeli, IBM Offering Manager for the A9000 and A9000R presented. He presented an overview of these models on Monday, so this session was focused on the data footprint reduction technologies.
Basically, it is a three-step process. First, all "standard patterns" are removed. IBM has identified some 260 standard patterns that are 8KB in length, such as all zeros, all ones, or all spaces, and replaces such blocks immediately with a pattern token.
Second, [SHA-1] 20-byte hash codes are computed on 8KB pieces along a rolling 4KB alignment boundary. In other words, if a 64KB block of data is written, bytes 0-to-8KB are hashed and compared to existing hash codes. If there is no match, then bytes 4KB-to-12KB are hashed, and so on. This approach nearly doubles the likelihood of finding duplicates. When a block match is found, the algorithm replaces it with a pointer and increments a reference count.
Third, any unique data that remains is compressed using the Lempel-Ziv algorithm, offloaded to the [Intel® QuickAssist] co-processor, which can compress data 20 times faster than software algorithms running on general-purpose x86 processors.
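Taken together, the three steps form a pipeline: pattern tokens first, then hash-based deduplication on a rolling alignment, then compression of whatever is left. Here is a much-simplified Python sketch under my own assumptions (one standard pattern instead of 260, zlib standing in for the QuickAssist hardware, and none of the real bookkeeping):

```python
import hashlib
import zlib

CHUNK = 8 * 1024   # 8KB hash window
STEP = 4 * 1024    # rolling 4KB alignment boundary

# The real product recognizes some 260 standard 8KB patterns; this
# sketch models just one of them (all zeros) for illustration.
ZERO_CHUNK = b"\x00" * CHUNK


def reduce_block(data, seen):
    """Much-simplified sketch of the three-step reduction pipeline."""
    out = []
    off = 0
    while off + CHUNK <= len(data):
        piece = data[off:off + CHUNK]
        if piece == ZERO_CHUNK:
            out.append(("pattern", 0))          # step 1: standard-pattern token
            off += CHUNK
            continue
        h = hashlib.sha1(piece).digest()        # step 2: 20-byte hash code
        if h in seen:
            out.append(("dup", h))              # duplicate: pointer + refcount
            off += CHUNK
        else:
            seen[h] = True
            out.append(("unique", zlib.compress(piece)))  # step 3: Lempel-Ziv
            off += STEP   # slide 4KB so shifted duplicates can still be found
    return out


# A chunk first seen at one alignment is still found when shifted by 4KB:
seen = {}
a = bytes(i % 251 for i in range(CHUNK))
reduce_block(a + ZERO_CHUNK, seen)                            # stores 'a'
out = reduce_block(b"\xff" * STEP + a + b"\xff" * STEP, seen)
assert any(kind == "dup" for kind, _ in out)
```

The usage at the bottom shows why the rolling 4KB boundary matters: a piece of data first written at one offset is still detected as a duplicate when it reappears shifted by 4KB.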
Do you want an estimate of how much "reduction ratio" you may achieve? IBM has developed two estimator tools to help. The first tool is a complete scan for data expected to be dedupe-friendly. It is a slow process, taking 8 hours per TB. This would be ideal for Virtual Desktop Infrastructure or backup copies.
The second tool is the well-known [Comprestimator] that IBM has offered for a while to help estimate compression savings for IBM Spectrum Virtualize storage solutions like SVC, Storwize and FlashSystem V9000. This tool is very fast, looking at only a statistically valid subset of the data.
The results of both tools are merged, and the result is within five percent accuracy. This allows IBM to offer guidance on which data to place on these new A9000 and A9000R models, as well as offer a "reduction ratio" guarantee.
A client asked me why I bother to attend other sessions, when I probably know most of the material they present. I explained that I can always learn from others. I can honestly say that I learned something new and useful at every session I attended.
I am not in Las Vegas this week for this year's event, but the sessions will be streamed live through [IBM GO].
IBM Systems Technical University - May 22-26, 2017 - Orlando, FL
IBM Systems Technical University is the evolution of a variety of other conferences related to servers, storage and software. It started out as the "IBM Storage Symposium", added "System x" servers to become the "Storage and System x University", and then dropped "System x" when IBM sold that business to Lenovo.
A few years ago, it was renamed "Edge", initially focused just on Storage, but two years ago it was combined with z Systems mainframe servers and POWER Systems for IBM i and AIX platforms. It also covers software products that previously had their own conferences, like IBM Pulse or MaximoWorld.
Last year, the IBM Marketing team tried a daring experiment. Let's change "Edge" to be a "Cognitive Solutions and Cloud Platform" conference, with emphasis on IT Infrastructure.
The experiment failed. Not because IBM Systems don't support these new initiatives, but because the audience was more interested in hearing how IBM Systems help their current day-to-day business. As many attendees told me, "If we wanted to hear about Cognitive or Cloud, we have plenty of other conferences that cover that already!"
While 40 percent of IBM revenues are generated from Cognitive Solutions and Cloud Platform, the other 60 percent come from traditional, on-premise, systems-of-record application workloads, the kind that businesses, non-profit groups, and government agencies have been using for the past few decades!
To address this need, IBM offered three-day "IBM Systems Technical University" events at various locations. Last year, I presented storage topics at events in Atlanta, Austin, Bogota, Boston, Chicago, Dubai, Nairobi, and São Paulo.
We will have several of those this year as well. The main one will be a full 5-day event, May 22-26, in Orlando Florida. I will be there presenting various sessions on storage!
IBM World of Watson - October 29-November 2, 2017 - Las Vegas, NV
This is a Cognitive Solutions and Cloud Platform conference, with an emphasis on Analytics and Database technologies.
I did not attend World of Watson, or WoW for short, last year, but it was an evolution of the conference previously called "IBM Insight". I am sure everything from DB2 and Open Source databases to Hadoop and Spark will be covered this year as well.
In writing this post, I realize that this year will be like a "Conference Sandwich". Cognitive-and-Cloud at the top and bottom, with all the meat, veggies and garnish in the middle!
Last week, I presented at the "IBM TechU Comes to You" event in beautiful Nairobi, Kenya. This was a three-day event, so here is my recap of Day 3, Thursday Aug 4, 2016.
Business Continuity and Disaster Recovery for z Systems
I have been working in Business Continuity and Disaster Recovery my entire career at IBM, so when I was asked to give a "z Systems" mainframe slant to my standard BC/DR pitch, I was up to the challenge. IBM offers a complete set of solutions, and I presented best practices for each.
Data Protection, Management and Journey to the Cloud with IBM Spectrum Protect
This session was presented by Saumil Shah, IBM Spectrum Protect Sales Leader for Middle East, Turkey & Africa. I am glad that Saumil volunteered to cover IBM Spectrum Protect, as I already had six sessions on my plate for this week. My version tends to focus on the "What and How" of data protection, whereas Saumil focused instead on the "Why" of data protection. Why should you protect data, and why you should use IBM Spectrum Protect instead of the various other software out in the marketplace.
IBM Spectrum Virtualize - Understanding SVC, Storwize and the FlashSystem V9000
IBM Spectrum Virtualize is the new name for the code base shared by all of these products. I presented the latest features of SVC, Storwize and FlashSystem V9000 hardware models, as well as the latest software features.
How to combine the advantages of Storage Virtualization and Flash performance (the Turbocompression effect)
This session was presented by Dominique Salomon, IBM Certified IT Specialist Storage and European New Technology Introduction Leader. He works at the IBM Montpelier Briefing Center in France, a sister organization to the IBM Tucson Executive Briefing Center that I work in. The term "Turbocompression" was initially coined by his team in Montpelier to explain the combined benefits of Flash technology, Easy Tier automated sub-LUN tiering, with Real-time Compression.
I have to admit that the first time I heard this, I was skeptical. It sounded like a marketing gimmick to mention these together. However, once I saw the demo and the resulting numbers, I was convinced. IBM Easy Tier technology identifies and ranks which blocks are the busiest, and moves extents to the appropriate place. Real-time Compression can compress data in cache memory, flash and spinning disk, allowing more of the busiest blocks of data to reside in the fastest storage media. This means higher hit ratios for cache, lower latency for flash, and less wear-and-tear on the spinning disk drives.
Storage Integration with OpenStack
While OpenStack is used by more than 60 percent of Cloud Service Providers, it is used by fewer than 10 percent of the Fortune 500 corporations. This represents an excellent opportunity for IBM, which leads in supporting this important open source interface across its storage products.
IBM supports OpenStack Cinder interface for its block level devices, including DS8000 and XIV. IBM Supports OpenStack Swift for its object storage, including IBM Spectrum Scale, IBM Spectrum Archive, and IBM Cloud Object Storage System (formerly Cleversafe). IBM Spectrum Scale supports OpenStack Cinder, Swift, and Manila interfaces for a complete solution across volumes, files and objects.
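To make the Cinder side concrete, here is a hedged sketch of what a `cinder.conf` backend stanza for a Storwize/SVC system looks like. The driver module path and option names vary by OpenStack release, and all values shown (addresses, credentials, pool name) are hypothetical:

```ini
[DEFAULT]
enabled_backends = ibm-storwize

[ibm-storwize]
# Driver path differs across OpenStack releases; treat as illustrative
volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_iscsi.StorwizeSVCISCSIDriver
san_ip = 192.0.2.10                      ; management IP (example address)
san_login = superuser
san_password = passw0rd
storwize_svc_volpool_name = Pool0        ; backing storage pool
volume_backend_name = ibm-storwize
```

Once a backend is defined this way, volumes are created through the normal Cinder API or CLI, and the driver translates those requests into commands on the storage system.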
Marlin Maddy, IBM Manager of Worldwide Systems Technical Events, served as master of ceremonies. He thanked the audience for attending, and drew names for prizes. This time these were Samsung "smart-watches".
Thursday evening, some people left, and the few of us remaining had dinner at the Intercontinental Hotel. I joined folks from USA, Germany and Middle East. I love our informal discussions! I learn so much listening to other points of view.
Last week, I presented at the "IBM TechU Comes to You" event in beautiful Nairobi, Kenya. This was a three-day event, so here is my recap of Day 1, Tuesday Aug 2, 2016.
Opening Keynote Session
Once again, Marlin Maddy, IBM Manager of Worldwide Systems Technical Events, served as master of ceremonies. He arrived into Nairobi just a few hours earlier, and we were worried that one of us might have to jump in and take over if he had any delays in his flight schedule. Fortunately, he arrived and did a great job welcoming the audience.
Eric Jaoko, chief manager of Kenya's Rural Electrification Agency [REA], presented next. Back in 1973, the Kenyan government wanted to have all of its rural areas offering electrical service. Some 30 years later, in 2002, only 4 percent of the rural areas had achieved this. In 2006, the Kenyan government formed this new REA agency to accelerate the progress. By 2008, nearly 25 percent of rural areas were electrified. Currently (2016), they are now at 68 percent, including all primary schools (more than 20,000 across the country).
Eric mentioned that this success was due in part to their partnership with IBM for Information Technology. REA switched from Oracle to SAP applications on IBM Power Systems with IBM Storwize V7000, resulting in lower costs, less power consumption, easier deployment and management, redundancy and high availability, scalability, and high-speed access to critical data. Not surprisingly, IBM's leadership in "Mobility" plays another key role, since these areas are rural and often connected only by cellular phone service.
REA employs both AIX and Linux on POWER operating systems, and uses OpenStack to manage both the server and storage components. PowerVM, PowerVC and PowerHA complete the solution to provide a more robust environment. REA found it very easy to clone their SAP systems, which made it simple to test software upgrades without impacting their production environments.
The next speaker was IBM's own Glenn Anderson, IBM z Systems Consultant and Worldwide Technical Events Content Manager. His talk was titled "Think Outside the Cubicle" to emphasize that there are changes underfoot in the IT industry. Rather than focusing on IT as a cost to be reduced, enlightened CEOs are discovering that IT can be used to optimize value for their organization.
One trend that has changed drastically is what IBM refers to as "Systems of Engagement". To better connect with clients, customers and suppliers, organizations now create conversations on social media channels, listen and react to those conversations, building communities that allow them to better understand and serve their markets.
Another trend was "Two-speed IT", often called "Bimodal IT", which indicates that some projects should have "fast-track" status, streamlining the process of design, development and deployment for new innovations. This is in contrast to traditional "slower" projects for mission critical "Systems of Record" operations, like databases and Online Transaction Processing (OLTP).
The last trend he covered was the notion of "Cognitive Business", the use of self-learning, natural language processing systems to assist in business decision making. Glenn compared the old way to a static map that indicates "You Are Here". The new way is more like GPS, which indicates where you are, where you want to be, and the steps to get there.
(You might ask, "Why do business leaders need such assistance?" First, business executives cannot ingest and comprehend the vast amount of data they need to make correct decisions, causing them to make less-than-optimal choices with limited information. Second, business leaders are often on the job only a few years, moving from one opportunity to another, and so do not build up the deep experience that a computer, able to ingest millions of documents, can achieve much more quickly. Third, business leaders are often prone to bias, surrounding themselves with ["yes-men"], unwilling to accept any information that contradicts their world view. Computers do not have that bias, and are capable of finding insights, trends and patterns that business leaders might not have considered.)
Software Defined Storage -- What? Why? How?
I was honored to be asked to be the keynote kick-off for the IBM Storage track of this conference. There is still much confusion over the concept of Software Defined Storage (SDS). While there are many different positions on this, IBM has adopted the IDC definition, which requires all three criteria to be met:
Solutions based on Industry-standard, off-the-shelf components.
Solutions that offer the complete set of storage features and functions, such as point-in-time copies, data footprint reduction, technical refresh migration, and remote replication.
Solutions that are offered in multiple ways, such as software-only, pre-built systems using industry-standard off-the-shelf components, and cloud-based services.
IBM's SDS offerings include all of the IBM Spectrum Storage family available as software-only, pre-built systems like SAN Volume Controller and XIV Gen3, and cloud-based services like IBM Cloud Managed Backup and Archive, and IBM Cloud Object Storage System (formerly Cleversafe).
IBM ranks #1 in the SDS marketplace, with over 40 percent market share. The advantage of IBM's approach is that it does not require a complete rip-and-replace of existing IT infrastructure. IBM solutions can work with the servers and storage you already have in place! This allows for a smooth and graceful transition.
Cloud Computing Concepts and the Role of Infrastructure
This session was covered by Mack Kigada, IBM Executive Consultant for the "Executive Advisory Practice" portion of Systems Lab Services. Frankly, I think this should have been classified as "Cross-Brand" rather than "Storage", as it showed not just storage but also how servers and OpenStack participate in a complete Hybrid Cloud solution.
The new IBM FlashSystem A9000 GUI
This session was presented by Dominique Salomon, IBM Certified IT Specialist Storage and European New Technology Introduction Leader. He works at the IBM Montpelier Briefing Center in France, a sister organization to the IBM Tucson Executive Briefing Center that I work in.
When IBM was ready to launch its newest FlashSystem offering, which combines the low-latency IBM FlashCore technology from IBM FlashSystem 900 with the IBM Spectrum Accelerate software from XIV, they had to decide what Graphical User Interface [GUI] to deploy it with. The IBM development team had narrowed it down to three options:
Use the IBM XIV Gen3 GUI, which is installed client code that runs on a handful of select operating systems. This GUI is nine years old.
Adopt and modify the browser-based GUI used by all of the other IBM Storage systems like DS8000 and SAN Volume Controller. By using HTML5, AJAX and Dojo widgets, this newer approach eliminates Operating System and Java dependencies, and can run on desktops, laptops, tablets and smartphones. However, this technology is four years old.
Deploy a new GUI, adopting the latest techniques and methods, offering a new, simpler way to manage the new device.
The development team decided on the third option, and so Dominique spent the first half hour explaining what the IBM FlashSystem A9000 and A9000R systems are, and then the last half showing a live demo connecting back to his systems in Montpelier, France.
IBM XIV, Spectrum Accelerate and the new IBM FlashSystem A9000
This session was covered by Maurice "Mo" McCullough, IBM Storage Technical Content Leader for IBM Systems Worldwide Technical Events. In retrospect, he admitted that he should have scheduled this session before Dominique's session above, which would have let Dominique spend less time explaining the IBM FlashSystem A9000 and more time showing the new GUI.
Mo first covered the newest model of the XIV Gen3 pre-built system, the model 314. It has double the cache memory and double the processing cores to drastically improve Real-time compression. Then, he explained IBM Spectrum Accelerate, available as either software you can deploy on your own x86 servers on-premises, or in cloud-based servers from IBM SoftLayer. Finally, Mo covered the A9000 and A9000R, the newest members of the IBM FlashSystem family that share features and capabilities with the XIV Gen3 and Spectrum Accelerate offerings.
Tuesday evening we had a welcome reception for all the attendees, staff and speakers. This was a great time to relax and meet everyone on a social level.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
IBM Storwize V5030F and V7000F all-flash high-density expansion enclosure
The 5U-high, 92-drive expansion enclosure introduced for the IBM Storwize V5000 and V7000 is now available for the all-flash models V5030F and V7000F. High-density expansion enclosure Model A9F requires IBM Spectrum Virtualize Software V7.8, or later, for operation.
The enclosure allows any mix of "Tier 0" write-endurance SSD at 1.6TB and 3.2TB capacities, and "Tier 1" read-intensive SSD at 1.92TB, 3.84TB, 7.68TB and 15.36TB capacities.
Storwize V5030F control enclosure models support attachment of up to 40U of expansion enclosures, which equates to eight high-density expansion enclosures, up to 760 drives per control enclosure, and up to 1,056 drives per clustered system.
Storwize V7000F control enclosure models support attachment of up to eight high-density expansion enclosures, up to 760 drives per control enclosure, and up to 3,040 drives per clustered system.
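The per-control-enclosure and V7000F cluster maximums above can be sanity-checked with simple arithmetic, assuming (my assumption, consistent with the stated totals) a 24-drive control enclosure and a four-way V7000F clustered system:

```python
DRIVES_PER_HD_ENCLOSURE = 92   # one 5U high-density expansion enclosure
MAX_HD_ENCLOSURES = 8          # 8 x 5U = 40U of expansion
CONTROL_ENCLOSURE_DRIVES = 24  # assumption: 24-drive control enclosure

per_control = (MAX_HD_ENCLOSURES * DRIVES_PER_HD_ENCLOSURE
               + CONTROL_ENCLOSURE_DRIVES)
assert per_control == 760      # matches the stated per-control-enclosure max

V7000F_CLUSTER_SIZE = 4        # assumption: four-way clustered system
assert V7000F_CLUSTER_SIZE * per_control == 3040
```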
IBM has adopted an "Agile" process for all of its IBM Spectrum Storage software, which means quarterly delivery of new features and functions! Spectrum Virtualize is offered in a variety of forms: the FlashSystem V9000, SAN Volume Controller, the Storwize family, and Spectrum Virtualize as software that runs on Lenovo and Supermicro servers.
Lots of small enhancements were added in this release:
Apply Quality-of-Service (QoS) limits to a Host Cluster in terms of IOPS and/or MB/s throughput.
SAN Congestion reporting, via buffer credit starvation reporting in Spectrum Control and via the XML statistics reporting, for the 16Gbps FCP Host Bus Adapter (HBA).
Resizing for Metro Mirror and Global Mirror remote copy services of thin provisioned volumes.
Consistency Protection for Metro Mirror and Global Mirror. You can now define "Change Volumes" so that, in the event of problems with MM or GM, the relationship switches over to GMCV (Global Mirror with Change Volumes) mode.
Increased FlashCopy Background Copy Rates
Proactive Host Failover during temporary and permanent node removals from cluster
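The Host Cluster QoS limit in the list above is, conceptually, a token-bucket throttle: I/Os are admitted while tokens remain, and tokens refill at the configured IOPS rate. This toy Python sketch is my own illustration of the concept, not the Spectrum Virtualize implementation:

```python
import time


class IOPSThrottle:
    """Toy token-bucket limiter, refilled at `iops` tokens per second."""

    def __init__(self, iops):
        self.rate = iops
        self.tokens = float(iops)        # allow an initial burst of one bucket
        self.last = time.monotonic()

    def try_io(self):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at one bucket
        self.tokens = min(self.rate,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True                  # admit the I/O
        return False                     # over the limit; queue or delay it


throttle = IOPSThrottle(iops=100)
admitted = sum(throttle.try_io() for _ in range(1000))
assert 100 <= admitted < 1000            # a burst admits roughly one bucket
```

An MB/s throughput cap works the same way, with tokens counted in bytes instead of I/O operations.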
IBM Aspera® Files cloud service helps to enable fast, easy, and secure exchange of files and folders of any size between users, even across separate organizations. Aspera Files is currently available in three all-inclusive editions of Personal, Business, and Enterprise. Clients can subscribe either to a committed amount of data transferred on a monthly or annual basis or as a pay-per-use option.
Personal edition now includes 20 authorized users and a single workspace.
Business edition now includes 100 authorized users, 100 workspaces, support for IBM Aspera Drive, support for IBM Mobile applications, and support for Single-Sign-On.
Enterprise edition now includes 500 authorized users, no limit on number of workspaces, support for IBM Aspera Drive, support for IBM Mobile applications, and support for Single-Sign-On.
IBM is now introducing a new "Elite edition", which includes 2,500 authorized users, no limit on the number of workspaces, support for IBM Aspera Drive, support for IBM Mobile applications, support for Single-Sign-On, and access to the IBM Aspera Developer Network and a nonproduction organization.
With the addition of the new Elite edition, clients have the flexibility to subscribe to additional functionality in Aspera Files that helps provide higher value and greater differentiation. The Elite edition is available as a subscription and on a pay-per-use basis.
In addition to the existing charge metric of data transferred, a user subscription metric is now included for all four editions. Each edition comes with an included number of authorized users in addition to other key features and capabilities.
Edge will be different in many ways this year. The past few years we had separate "Executive Edge" for C-level executives, "Winning Edge" for IBM Business Partners, and "Technical Edge" for server, network and storage administrators.
This year, all 1,000 sessions are combined back into one, but with clever hints in the titles. The words "General Session", "Outthink" or "Cognitive" are used to indicate C-level executive talks. Those that use the terms "Winning" or "Community" target IBM Business Partners, Managed Service Providers and Cloud Service Providers. Those that mention z Systems, POWER servers, or Storage solutions, often adding the term "Deep-Dive", are technical.
(Unlike other sessions that might appeal to one portion of the audience or another, mine are suitable for everyone, from C-level executives and IBM Business Partners to storage administrators. To help people find them under the new naming scheme, I have added "Tony Pearson Presents", or words to that effect.)
About 260 breakout sessions relate to IBM Storage, but there are only 20 or so time slots, so obviously you can't see them all in person.
I strongly suggest you pick three to five topics per time slot, so that you are not overwhelmed by the dozens of choices during the event. This lets you make a quick final decision in each time slot.
Occasionally, a session might get canceled, postponed, or be so full of attendees that nobody else is allowed in, so having three to five topics selected allows you to choose an alternate.
Here is my schedule for next week at Edge 2016.
Trends & Directions: The Future of Storage in the Cloud and Cognitive Era
All Flash is Not Created Equal: Tony Pearson Contrasts IBM FlashSystem and SSD
MGM Grand - Studio 9
Solution EXPO: Reception
Edge at Night: Poolside Reception and Concert "Train"
Tony Pearson Presents IBM Cloud Object Storage System and Its Applications
MGM Grand - Room 114
The Pendulum Swings Back: Tony Pearson Explains Converged and Hyperconverged Environments
MGM Grand - Room 113
Solution EXPO: Reception
Tony Pearson Presents IBM's Cloud Storage Options
MGM Grand - Room 116
My colleagues Dave Dabney and Adam Bergren will be at the WW Systems Client Centers booth (Booth 125) in the Solution EXPO.
If you are active in Social Media, consider using the hashtags #IBMedge, #IBMstorage, and #IBMcloud. You can follow me on Twitter, my handle is @az990tony
For those interested in a one-on-one meeting with me, over breakfast, lunch or dinner, or some other time, I have several slots still available. Fill out a request form on BriefingSource at: [https://briefingsource.dst.ibm.com/]
Well, it's Tuesday again, and you know what that means? IBM Announcements!
IBM Elastic Storage Server
Replacing the older "GSn" and "GLn" models, IBM announces the "Second Generation" GSnS and GLnS models (the second "S" stands for Second Generation), the "n" continues to refer to the number of storage drawers. All of these have a pair of POWER8 servers to drive amazing performance at a low price point.
The "GSnS" models are based on smaller 2U, 24-drive storage drawers, with 3.84 and 15.36 TB Tier-1 Read-intensive Solid-State Drives (SSD). The "GLnS" models are based on larger 5U, 84-drive storage drawers, with 4TB, 8TB and 10TB nearline (7200 rpm) spinning disk.
These new models have the latest IBM Spectrum Scale software pre-installed.
In addition to IBM's two existing Hyperconverged offerings--IBM Spectrum Accelerate for x86 servers, and IBM Spectrum Scale for x86, POWER and z Systems servers--IBM Power Systems now offers a third option. This integrated offering combines Nutanix's Enterprise Cloud Platform software with IBM Power Systems™ hardware to deliver a turnkey hyperconverged solution that targets critical workloads in large enterprises.
Nutanix is offered, and will be the default and required software, on these Power® servers only:
While "Hyperconvergence" is still fairly new, and only about 1 percent of data centers have deployed this new technology, I am glad that IBM is a leader in this space with multiple offerings across both x86 and POWER systems platforms.
This week, IBM sponsored a nice multi-client event in San Juan, Puerto Rico. I was quite impressed with the quality of this video. Our marketing department has really done a good job on this!
This event was not just multi-client, but also spanned different industry sectors. IBM recently has realigned to five different sectors, and we had clients from different sectors attending the event.
The night before, I was able to meet most of the other IBM executives who came down for the event. Unfortunately, two were delayed because of the snow storms in the Northeast part of the United States, but they were able to arrive the next day.
The venue was the El Touro restaurant, near the Hilton Caribe. The weather was just right, about 75 degrees and breezy. It was a little humid for me, but everyone else was just happy to be out of the cold. Meanwhile, it is nearly 90 degrees in Tucson, Arizona, where I am from.
This was billed as a "Lunch and Learn" and the food was delicious! In an effort to keep it simple, we had small dishes of fish with a fruit-based cream sauce, paella with rabbit meat and rice, pork belly, and Crema Catalana with a churro for dessert. This gave everyone a sample taste of everything, without having to order off a menu.
We basically took the same approach with the presentation. First, Marcos Obermaeir and Marcos Otero, the two leads for this event, thanked the audience and explained their new roles. Marcos Obermaeir is focused on the Financial and Insurance sector, while Marcos Otero is focused on the Communications sector.
Next we had Debbie Niven and Roopam Master, both IBM Executives, explain their roles, and how IBM can help both clients and Business Partners in Puerto Rico.
I presented samples of much larger presentations on three topics. First, the excitement over Software Defined Storage with IBM Spectrum Storage family of products. Second, IBM Spectrum Scale as a better replacement for Hadoop File System (HDFS) for Hadoop, IBM BigInsights and Hortonworks analytics deployments. Third, IBM Cloud Object Storage, and how this can be combined with IBM Spectrum Protect to backup your data to object storage either on premises, or in the Cloud.
I could have easily spoken for an hour on each topic, but instead we shortened each to about 20 minutes, in keeping with the "Tapas" theme of the restaurant. This gave those clients who wanted to hear more a reason to request a follow-up visit or call.
After the clients left, the IBM team had a reception for the IBM Business Partners. About 80 percent of IBM's storage business in Puerto Rico is done through IBM Business Partners, so they are an important link in IBM's "Go-to-Market" strategy.
The moon was nearly full, and the breeze and waves were a spectacular backdrop to the conversations I had with each person I met.
Next month, I will be presenting at the IBM Systems Technical University for Storage and POWER. This conference will be held in New Orleans, Louisiana, October 16-20, 2017.
Instead of a "Meet the Experts" Q&A panel, this event will feature a "Poster Session". I had the pleasure of doing one of these down in Melbourne, Australia last month. For those who missed it, here are my blog posts:
By now, you have already decided on a title and abstract of your poster. You will need to figure out a quick and easy way to explain your poster, and as always, shorter is better. It reminds me of a famous quote:
"Sorry this letter is too long...
If I had more time, I could have made it shorter!
-- Blaise Pascal
The event team asked me to write some instructions on the mechanics of how to put together a poster for this, since it is new for many people. I use Microsoft PowerPoint 2013 and ImageMagick tools to accomplish this.
Arrangement of Slides
Posters for the IBM Systems Technical University in New Orleans will be 24x36 inches in size. If you print out your poster in 8.5x11 inch standard size letter pages, that would be eight slides, 2 columns, 4 rows. This leaves one inch border all around.
The event will provide both the foam board and double-sided sticky tape. You can bring your poster as a stack of Letter-sized pages in a folder, and assemble your poster at the event.
You can increase the size of an individual image to 17x22 inches, to offer a "Big Picture" view. Basically, we take a standard 8.5x11 Letter-size page, expand it onto four separate pages, and then put them on the poster! I will show you how in the steps below.
Lastly, you can have two big slides. If your poster is organized as "Before/After" or "Problem/Solution" then this arrangement could be perfect for you.
Setting Custom Paper Size on PowerPoint
In Melbourne, I had to use European A4 standard paper, and had to figure out how to do this in PowerPoint. I was surprised to learn that the PowerPoint default is 4:3 ratio of 10x7.5 inch, and that this is stretched to be whatever paper size you print on.
The difference is slight, but I prefer [WYSIWYG], so we will change the slide to "Custom size" and force it to 8.5x11 inches, with "Landscape" orientation. This will avoid anything looking stretched or squished on the big poster.
Converting a PowerPoint Slide to PNG Image file
If you would like to resize one or more of your PowerPoint slides, you will need to save those slides as images. Select "File" and "Save As" and as the format, choose "PNG" format. You can also select GIF or JPG, but I prefer PNG.
You can export all of your slides as images, in which case it will create a folder and number each slide individually. Or, you can select "Just This One" for the current slide.
By default, it will use the same name as your PPT file, with the extension changed to PNG. I suggest you name the file something meaningful to you. In my examples below, I use "small.png" as the file name.
I am using PowerPoint 2013, which defaults to 96 dpi. So, an 8.5x11 paper becomes 1056x816 pixels in size.
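The arithmetic behind that pixel count is straightforward; here is a quick sketch (plain shell, nothing PowerPoint-specific) of the inches-to-pixels conversion:

```shell
# Pixels = inches x dots-per-inch (dpi)
dpi=96
width_px=$(( 11 * dpi ))          # 11 inches wide (Landscape orientation)
height_px=$(( 85 * dpi / 10 ))   # 8.5 inches tall
echo "${width_px}x${height_px}"  # prints 1056x816
```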
If you have PowerPoint 2003 or later, you can change the Windows registry to specify the image export resolution. This is not recommended for the faint of heart. Or anyone else. But here are the steps if you want to try (if the following doesn't make sense to you, it might be better not to mess with the registry):
Quit PowerPoint if it's running
Navigate to HKEY_CURRENT_USER\Software\Microsoft\Office\X.0\PowerPoint\Options
(For X.0 above, substitute 16.0 for PowerPoint 2016, 15.0 for PowerPoint 2013, 14.0 for PowerPoint 2010, 12.0 for PowerPoint 2007, and 11.0 for PowerPoint 2003.)
Add a new DWORD value named ExportBitmapResolution and set its DECIMAL value to the DPI value you want (for example, 300 means 300 dots per inch)
Close REGEDIT, start PowerPoint and test. Your files will be 3300x2550 pixels instead.
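For those comfortable at the command line, the same change can be scripted instead of clicking through REGEDIT. This is just a sketch for PowerPoint 2013 (key 15.0); adjust the version number per the table above, and treat the 300 dpi value as an example:

```shell
:: Windows Command Prompt -- run after quitting PowerPoint
:: Adds ExportBitmapResolution = 300 (decimal) for PowerPoint 2013
reg add "HKCU\Software\Microsoft\Office\15.0\PowerPoint\Options" ^
    /v ExportBitmapResolution /t REG_DWORD /d 300 /f
```

As with the manual steps, close the Command Prompt, restart PowerPoint, and test an export before relying on it.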
Resizing and splitting up PNG Image files
To expand and chop the slide into four Letter-sized pages, we will use [ImageMagick], an open source collection of command-line utilities that you can download for free. The first utility, "identify", will confirm the pixel size of your PNG image. Replace "small.png" with whatever you named your PNG image above.
Lastly, we crop the "big.png" image we just created into four smaller pieces. Each piece will be exactly the same size as your original image! The files will be named big_0.png, big_1.png, big_2.png and big_3.png.
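Putting those steps together, here is a sketch of the ImageMagick commands (the file names small.png and big.png are just the examples used above, and the 200% resize assumes the 96 dpi export from PowerPoint):

```shell
# Confirm the pixel size of the exported slide
identify small.png

# Double it in each dimension: 1056x816 becomes 2112x1632,
# which prints as 17x22 inches at 96 dpi
convert small.png -resize 200% big.png

# Crop into four equal Letter-size tiles: big_0.png through big_3.png
convert big.png -crop 2x2@ +repage big_%d.png
```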
Since the resulting four pieces are exactly the size of a page, you can put them back into your PowerPoint deck. Create four blank slides, select Insert then Pictures. Insert each picture (big_0.png, big_1.png, big_2.png, and big_3.png) as a separate page.
You can print this out and bring it with you to the event, or send it to someone to print for you.
Upload files to IBM@Box
This next step is completely optional, but I found it adds a nice touch. As an IBMer, you can upload your presentation, and any documents, whitepapers or other materials, to [IBM@Box]. Create a directory that is unique to you, such as your last name and the conference. For example, I have "Pearson-STU-NOLA-2017" as my folder name.
You can create a "URL Link" to this folder. Select "Share", then "Share Link" to create a dialog box. It is important to specify "People with this link" if you want those outside of IBM, such as clients and IBM Business Partners, to have access.
Press the little "gear" button on the upper right, and it gives you options to customize the URL. Normally the URL is some long random sequence of characters, but you can rename it to something meaningful and easier to remember.
Generate a QR Code
Since you have a URL Share Link for your files on IBM@Box, you can generate a QR Code for this link, and include on your poster!
There are several online websites that can generate a QR Code for free. I use [QRme.com] in this example. Go to the website, paste in the URL, and press the "Generate" button.
Once the QR Code is generated, right-click and "Save Image" to a file on your hard drive. This image can be inserted as a picture, as we did above, onto any slide. You can resize it as needed.
In Melbourne, one of the posters had the QR Code at the top, next to the title, where it was nearly impossible to see, making it difficult to scan with a smartphone. For this reason, I recommend putting the QR code in the center or lower right corner of your poster, between shoulder and waist height for the audience, so it is comfortable to scan.
I am looking forward to going back to New Orleans to speak at this conference!
As I have mentioned before, I started this blog on September 1, 2006 as part of IBM's big ["50 Years of Disk Systems Innovation"] campaign. IBM introduced the first commercial disk system on September 13, 1956 and so the 50th anniversary was in 2006. That means this month, IBM celebrates the "Diamond" anniversary, 60 years of Disk Systems!
For those who missed it, IBM announced last Tuesday encryption capability for the TS1120 drive, our enterprise tape drive that reads and writes 3592 cartridges. Do you need special cartridges for this? No! Use the same ones you have already been using!
You can read more about it at www.ibm.com/storage/tape.
Short and sweet, but it got me started, and I ended up writing 21 blog posts that first month. You can read blog posts from all 10 years by looking at the left panel of my blog under "Archive".
While traditional disk and tape storage are still very important and relevant in today's environment, IBM has also expanded into other technologies:
In 2012, IBM [acquired Texas Memory Systems]. In 2014, IBM shipped 62 PB of Flash capacity, more than any other vendor. In 2015, IBM continued its #1 status, shipping 170 PB of Flash, again more than any other vendor.
IBM has flash everywhere, from the advanced FlashSystem 900, V9000, A9000 and A9000R models, to other all-flash arrays and hybrid flash-and-disk systems with various sets of features and functions to meet a variety of workload requirements.
The DS8888 all-flash array, and the DS8886 and DS8884 hybrid flash-and-disk systems, round out the latest in the DS8000 storage systems family. The SAN Volume Controller and Storwize family of products, based on IBM Spectrum Virtualize software, also have all-flash and hybrid configurations; the most recent are the Gen2+ models of the Storwize V7000F and V5030F. The latest solution is the DeepFlash 150, designed for analytics and unstructured data.
Between internally-developed IBM Spectrum Scale and IBM Spectrum Archive, and IBM's [acquisition of Cleversafe], IBM is ranked #1 in Object Storage. IBM Cloud Object Storage System, IBM's new name for Cleversafe's flagship product, is available as software-only, pre-built systems, or in the IBM SoftLayer cloud.
Software-Defined Storage (SDS) with IBM Spectrum Storage
Last year, IBM re-branded its various storage software products under the "IBM Spectrum Storage" family. Earlier this year, IBM announced the new [IBM Spectrum Storage Suite license] which makes it even easier to procure, either with a perpetual software license, elastic monthly licensing, or utility license that combines some of each.
IBM is ranked #1 in Software-Defined Storage, with over 40 percent marketshare, offering solutions as Software-only, pre-built systems, and in IBM SoftLayer cloud.
The article starts out by giving the background history of the current mess we are in. Here is an excerpt:
"Throughout most of U.S. history, American high school students were routinely taught vocational and job-ready skills along with the three Rs: reading, writing and arithmetic...
...But in the 1950s, a different philosophy emerged: the theory that students should follow separate educational tracks according to ability...
Ability tracking did not sit well with educators or parents, who believed students were assigned to tracks not by aptitude, but by socio-economic status and race. ...
...The backlash against tracking, however, did not bring vocational education back to the academic core. Instead, the focus shifted to preparing all students for college, and college prep is still the center of the U.S. high school curriculum..."
My father was a mechanical engineer who enjoyed fixing cars and woodworking on the weekends. I had plenty of "vocational training" growing up at home, no need for me to have this in school, allowing me to focus on getting ready for college.
Nicholas asks legitimate questions at this stage: "So what’s the harm in prepping kids for college? Won’t all students benefit from a high-level, four-year academic degree program?" His initial response is:
"... As it turns out, not really. For one thing, people have a huge and diverse range of different skills and learning styles. Not everyone is good at math, biology, history and other traditional subjects that characterize college-level work.
Not everyone is fascinated by Greek mythology, or enamored with Victorian literature, or enraptured by classical music. Some students are mechanical; others are artistic. Some focus best in a lecture hall or classroom; still others learn best by doing, and would thrive in the studio, workshop or shop floor..."
It is hard to argue with this: people are different, and learn in different ways. Not everyone is meant for college.
"...And not everyone goes to college. The latest figures from the U.S. Bureau of Labor Statistics (BLS) show that about 68 percent of high school students attend college. That means over 30 percent graduate with neither academic nor job skills..."
Here is what I have the most problems with. To claim that the 30 percent of high school students who graduate but do not go to college have neither academic nor job skills? I disagree, as there are many jobs where the academic and job-skill training they received in high school is more than adequate. Nicholas then doubles down:
"...But even the 68 percent aren't doing so well. Almost 40 percent of students who begin four-year college programs don’t complete them, which translates into a whole lot of wasted time, wasted money, and burdensome student loan debt. Of those who do finish college, one-third or more will end up in jobs they could have had without a four-year degree. The BLS found that 37 percent of currently employed college grads are doing work for which only a high school degree is required.
It is true that earnings studies show college graduates earn more over a lifetime than high school graduates. However, these studies have some weaknesses. For example, over 53 percent of recent college graduates are unemployed or under-employed. And income for college graduates varies widely by major – philosophy graduates don’t nearly earn what business studies graduates do. Finally, earnings studies compare college graduates to all high school graduates. But the subset of high school students who graduate with vocational training – those who go into well-paying, skilled jobs – the picture for non-college graduates looks much rosier.
Yet despite the growing evidence that four-year college programs serve fewer and fewer of our students, states continue to cut vocational programs..."
There are a lot of successful billionaires who did not complete four years of college: Bill Gates, Steve Jobs, Michael Dell, Henry Ford, and Howard Hughes, just to name a few.
If you feel that the only purpose of attending high school or college is to get job-specific skills, then you are missing out on all the other aspects that teach valuable life lessons: getting along with others, teamwork, communication, and other "soft skills" that aren't necessarily job-specific.
Teenagers entering college are still growing up, trying to figure out what they want to do with their lives, discovering new ideas, new ways of thinking, and networking with people of different backgrounds and cultures.
"...The U.S. economy has changed. The manufacturing sector is growing and modernizing, creating a wealth of challenging, well-paying, highly skilled jobs for those with the skills to do them. The demise of vocational education at the high school level has bred a skills shortage in manufacturing today, and with it a wealth of career opportunities for both under-employed college grads and high school students looking for direct pathways to interesting, lucrative careers. Many of the jobs in manufacturing are attainable through apprenticeships, on-the-job training, and vocational programs offered at community colleges. They don’t require expensive, four-year degrees for which many students are not suited..."
The skills shortage is real, but until employers are willing to pay people for what they're worth, the situation will not be resolved. The free market has a way to fix skills shortages. High demand raises salaries, and causes people to invest in high school and college education in part to vie for these positions. That is in part why medical doctors are paid so much.
"...The modern workplace favors those with solid, transferable skills who are open to continued learning. Most young people today will have many jobs over the course of their lifetime, and a good number will have multiple careers that require new and more sophisticated skills..."
A few years ago, I was hosting clients for dinner in Tucson. The sales rep had brought his daughter and her roommate along, as there was a shooting at their college campus and classes were canceled for the week. The daughter asserted, "In 18 months, I will no longer have to learn anything again. I will be done with school." Her roommate chimed in, "Ha! I am a year ahead of you, and only six months away from that!"
I was the bearer of bad news. "Ladies," I said, "you will have to get used to learning new things the rest of your lives." The highest ranking client at the table overheard me, and she re-iterated, "Ladies, that is probably the best advice I have heard in awhile. I suggest you heed it carefully."
A big part of high school and college education is to teach you how to learn on your own. Learn to read, search out information, take measurements, gather data, make plans, and ask the right questions. These are skills that are useful in a wide variety of careers.
Nicholas concludes with:
"...Just a few decades ago, our public education system provided ample opportunities for young people to learn about careers in manufacturing and other vocational trades. Yet, today, high-schoolers hear barely a whisper about the many doors that the vocational education path can open. The “college-for-everyone” mentality has pushed awareness of other possible career paths to the margins. The cost to the individuals and the economy as a whole is high. If we want everyone’s kid to succeed, we need to bring vocational education back to the core of high school learning."
I agree the educational system in the United States is broken, but I am not sure I agree with everything that Nicholas writes in this article.
Well, it's Tuesday again, and you know what that means? IBM Announcements! There were lots of announcements today, so I have split this up into two posts. One for the Tape and Cloud announcements, and the other for the Spectrum Storage family.
IBM Spectrum Virtualize Software V7.8.1
IBM Spectrum Virtualize™ V7.8.1 is the latest software release for the FlashSystem V9000, SAN Volume Controller and Storwize products.
In the last release, IBM introduced "Host Groups" for clusters that needed to share a common set of volumes. This release adds "Host cluster I/O throttling": I/O throttling can be managed at the host level (individual or groups) and at the managed disk level for improved performance management, with GUI support.
Increased background FlashCopy transfer rates: This feature enables you to increase the rate of background FlashCopy transfers, providing faster copies as the infrastructure allows. This takes advantage of the higher performance capabilities of today's systems, processing the copy in a shorter period of time. The default was 64 MB/sec, and now we can go up to 2 GB/sec, for those who want their FlashCopy to be done as fast as possible.
Port Congestion Statistic: A zero-buffer-credit statistic helps detect SAN congestion when diagnosing performance-related issues, improving support in high-performance environments. IBM had this for the 8 Gbps FCP cards, but not for the 16 Gbps cards, so now that's fixed.
Resizing of volumes in remote mirror relationships: Target volumes in remote mirror relationships will be automatically resized when source volumes are resized. Lots of clients asked for this, and IBM delivered!
Consistency protection for Metro/Global Mirror relationships: An automatic restart of mirroring relationships after a link fails between the mirror sites improves disaster recovery scenarios, helping to ensure the applications are protected throughout the process.
When IBM introduced "Global Mirror with Change Volumes" (GM CV), I wanted to call it "Trickle Mirror", because the primary site takes a FlashCopy, trickles the data over, then FlashCopy at the remote site. Now, clients using traditional Metro or Global Mirror can add "Change Volumes" as protection. In the unlikely event a network disruption occurs, it drops down to GMCV until the link resumes full speed.
Support for SuperMicro servers in the Spectrum Virtualize "Software Only" offering: Support for x86-based Intel® servers from SuperMicro running Spectrum Virtualize Software is available with this release.
Last year, IBM offered Spectrum Virtualize as software that could run on Lenovo servers. However, now there are clients who want alternative server choices.
The Supermicro SuperServer 2028U-TRTP+ is supported to run Spectrum Virtualize Software. This is a great option for end clients, managed service providers, or cloud service providers deploying private clouds, building hosted services, or using software-defined storage on third-party Intel servers. This is a fully inclusive license, with all key features of Spectrum Virtualize available in a single, downloadable image.
IBM Spectrum Control V5.2.13 and IBM Virtual Storage Center V5.2.13
We often joke that IBM Virtual Storage Center is the [Happy Meal], combining storage virtualization with Spectrum Virtualize hardware like FlashSystem V9000, SAN Volume Controller or Storwize as the "hamburger", Spectrum Control as the "fries" and "Spectrum Protect Snapshot" as the "soft drink". Storage Analytics was included as a "prize inside", only available in the VSC bundle, to entice clients to choose this option.
Whenever IBM updates Spectrum Control, it often puts out a new version of the Virtual Storage Center bundle as well. I was the Chief Architect for Spectrum Control in 2001-2002, and Technical Evangelist for SVC in 2003 when we first introduced the product, so I have a long history with both products.
This release provides additional information and performance metrics on Dell EMC VMAX and EMC VNX devices. This is done natively; the devices no longer need to be virtualized behind Spectrum Virtualize, as was often done in the past.
IBM now offers better visibility of drives within IBM Cloud Object Storage Slicestor® nodes. IBM acquired Cleversafe 18 months ago, and is working to bring it under the Spectrum Control management umbrella.
IBM Spectrum Scale™ file system to external pool correlation. Spectrum Scale can migrate data to three different types of "external pools":
Cloud Object pool, either on-premises Object Storage or off-premises Cloud Service Provider storage.
Spectrum Protect pool, where Spectrum Protect manages the migrated data on one of 700 supported devices, including tape, virtual tape, optical, flash, disk, object storage or cloud.
Spectrum Archive pool, where data is written directly to physical tape using the Industry-standard LTFS format.
This release provides additional information on the copy data panel about SAN Volume Controller (SVC) HyperSwap® and vDisk mirror.
While the "Virtual Storage Center" bundle is an awesome deal, some clients have asked for the "Vegetarian Option" (Fries and Drink only). Why? Because they want the advanced storage analytics (prize inside) for other devices like DS8000, XIV, etc. So, IBM created the "IBM Spectrum Control Advanced Edition", which has everything in VSC except the Spectrum Virtualize itself.
Advanced edition adds improvements to the chargeback report. It also includes IBM Spectrum Protect™ Snapshot V8.1 release.
IBM Spectrum Control Storage Insights Software as a Service
Storage Insights is IBM's "Software-as-a-Service" reporting-only offering, a subset of Spectrum Control Advanced Edition. It includes direct support for Dell EMC VMAX, VNX, and VNXe storage systems. This is huge! Clients who have only EMC hardware can now, on a monthly basis, figure out where they are wasting money and decrease their costs.
Other features carried over include the enhanced drive support for IBM® Cloud Object Storage, enhanced external capacity views for IBM Spectrum Scale™, and the additional replication views for vDisk mirror and HyperSwap® relationships for SAN Volume Controller (SVC) and Storwize® devices that I mentioned above.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
The Collaboration of Oak Ridge, Argonne, and Livermore [CORAL] is a joint procurement activity among three of the Department of Energy's National Laboratories, launched in 2014 to build state-of-the-art high-performance computing (HPC) technologies that are essential for supporting U.S. national nuclear security and are key tools used for technology advancement and scientific discovery.
Of course, when you hear "state-of-the-art technology", IBM is probably the first company that comes to mind!
The new IBM Spectrum Scale 5.0 has been greatly enhanced to meet CORAL requirements:
Dramatic improvements in I/O performance
Significant reduction in internode software path latency to support the newest low-latency, high-bandwidth hardware such as NVMe
Improved performance for many small and large block size workloads simultaneously, thanks to a new 4 MB default block size with variable sub-block sizes based on the block size chosen
Improved metadata operation performance to a single directory from multiple nodes
Spectrum Scale 5.0 now automatically tunes more than twenty communication protocol and buffer management parameters, aiding setup for optimal performance. The enhanced GUI features many capabilities, including performance, capacity and network monitoring, AFM (multicluster management), transparent cloud tiering, and enhanced maintenance and support, including interaction with IBM remote support.
Spectrum Scale 5.0 now offers file-level immutability. Previous releases supported immutability only at the fileset granularity, so this allows finer control. Immutability can be an effective tool as part of an overall Non-Erasable, Non-Rewriteable [NENR] compliance policy.
Spectrum Scale comes in both "Standard Edition" and "Data Management Edition". The latter offers some additional features, including Transparent Cloud Tiering, Asynchronous AFM Disaster Recovery support, and Encryption. Some additional enhancements to Data Management Edition in Spectrum Scale 5.0 are:
File audit logging capability to track user accesses to file system and events supported across all nodes and all protocols
Parseable data stored in secure retention-protected fileset
Data security following removal of physical media protected by on-disk encryption
The new IBM Storage Utility Offerings include the IBM FlashSystem 900 (9843-UF3), IBM Storwize V5030 (2078-U5A), and Storwize V7000 (2076-U7A) storage utility models that enable variable capacity usage and billing.
These models provide a fixed total capacity, with a base and variable usage subscription of that total capacity. IBM Spectrum Control Storage Insights is used to monitor the system capacity usage. It is used to report on capacity used beyond the base subscription capacity, referred to as variable usage.
The variable capacity usage is billed on a quarterly basis. This enables customers to grow or shrink their usage, and only pay for configured capacity.
Suppose you only need 300 TB today, but expect this to grow to 1 PB (1000 TB) over the course of three years. You install 1000 TB (1 PB) of capacity, and pay for the base 300 TB, plus whatever above this 300 TB you might be using during each subsequent quarter. After 36 months, you pay for the rest of the installed capacity.
(There are comparable offerings from IBM's competitors, but they often require that you pay for at least 75 to 85 percent of the installed amount, and then you would need to continue to disrupt your operations with additional capacity installed throughout the 12 to 36 month period. IBM's approach allows you to avoid installation disruption during the entire 36 month period!)
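The billing model above is easy to work through. Here is a small sketch of the quarterly calculation, using the 300 TB base / 1 PB installed figures from the example; the quarterly usage numbers are hypothetical illustrations, not IBM pricing.

```python
# Sketch of the utility billing model: you pay for the base capacity
# up front, and each quarter you are billed only for usage above it.
BASE_TB = 300        # base subscription capacity, paid up front
INSTALLED_TB = 1000  # total installed capacity

def variable_tb(used_tb):
    """Capacity billed for a quarter: usage above the base, if any."""
    return max(0, min(used_tb, INSTALLED_TB) - BASE_TB)

# Hypothetical usage growing over twelve quarters (three years).
quarterly_usage = [300, 350, 420, 500, 600, 700, 800, 900, 950, 1000, 1000, 1000]
billed = [variable_tb(u) for u in quarterly_usage]
print(billed)  # early quarters bill 0 TB; later quarters bill the overage
```

The first quarter bills nothing because usage matches the base subscription; by year three, the variable charge covers the full 700 TB above the base.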
IBM Spectrum Virtualize for Public Cloud V8.1.1 delivers a powerful solution for the deployment of IBM Spectrum Virtualize software in public cloud, starting with IBM Cloud. This new capability provides a monthly license to deploy and use Spectrum Virtualize in IBM Cloud to enable hybrid cloud solutions.
Remote replication will be supported between Spectrum Virtualize-based appliances (including SAN Volume Controller (SVC), the Storwize family, IBM FlashSystem V9000, and VersaStack with Storwize family or SVC), or Spectrum Virtualize Software, to the IBM Cloud.
Using IP-based replication with Metro Mirror, Global Mirror, or Global Mirror with Change Volumes, clients can create secondary copies of on-premises data in the public cloud for disaster recovery. IBM has over 25 data centers around the world available to choose from. Remote copy services can also be used between two IBM Cloud data centers for improved availability.
The solution is based on bare metal servers. You can create either two- or four-node high availability clusters.
Spectrum Virtualize on-premises SVC and Storwize systems now also support 2.4 TB 10K rpm 2.5-inch SAS hard disk drives.
Well, it's Tuesday again, and you know what that means? IBM Announcements! There were a lot of IBM Power System announcements on Tuesday, so the IBM Power team asked us to wait until Thursday to post about all of the IBM storage announcements, to avoid overwhelming excitement levels with the press and analysts.
(FTC Disclosure: I work for IBM. I have either worked on the code, developed marketing materials, and/or represented each of the products below in my professional capacity. This blog post can be considered a "paid celebrity endorsement")
A few months ago, IBM re-factored the internals of Spectrum Virtualize. It continues to support its legacy storage pools, but also offers "Data Reduction Pools", or "DR pools" for short. At the time, these supported only Thin Provisioning and Compression. See fellow blogger Barry Whyte's post on [Data Reduction Pools] for more details.
The Spectrum Virtualize 8.1.3 release now adds Data Deduplication and RESTful API support for the Spectrum Virtualize family, including SAN Volume Controller, FlashSystem V9000 and Storwize products. These features also apply to Spectrum Virtualize as software only, and to Spectrum Virtualize for the Public Cloud.
Data Deduplication is a form of data footprint reduction. Like the deduplication in Spectrum Protect and FlashSystem A9000/R products, Spectrum Virtualize will use SHA1 hash codes to identify duplicate 8K blocks. If the hash code of the block about to be written does not match any existing hash code previously written to the cluster, it is considered unique data and written out; if it does match, only a reference to the existing block is recorded.
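The idea can be sketched in a few lines of Python. This mimics the hash-matching concept only; Spectrum Virtualize's actual on-disk layout, reference counting, and collision handling are not shown.

```python
# Illustrative sketch of hash-based deduplication: data is chunked
# into fixed 8 KiB blocks, each identified by its SHA-1 digest.
import hashlib

BLOCK_SIZE = 8 * 1024  # 8 KiB blocks
store = {}             # digest -> stored block (stands in for the cluster)

def write(data: bytes):
    """Return a list of block references; store only unique blocks."""
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha1(block).hexdigest()
        if digest not in store:   # unseen hash -> unique data, store it
            store[digest] = block
        refs.append(digest)       # duplicates become references
    return refs

refs = write(b"A" * BLOCK_SIZE * 3)  # three identical blocks
print(len(refs), len(store))         # 3 references, but only 1 stored block
```

Three identical blocks produce three references but consume the space of one, which is the essence of the footprint reduction.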
Legacy storage pools supported three kinds of volumes: fully-allocated, thin-provisioned, and compressed-thin volumes. The new DR pools support five kinds: fully-allocated, thin-provisioned, deduped-thin, compressed-thin, and deduped-compressed-thin volumes.
The new deduplication feature is included at no additional charge with the base Spectrum Virtualize license.
The RESTful API enables storage admins to easily automate common tasks with industry-standard tools. REST API support provides an interface to the command-line interface (CLI): you can create vDisk volumes, generate the views normally available through the CLI, and authenticate securely to the IBM Spectrum Virtualize family.
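As a rough sketch of how such automation looks, the helper below builds REST requests mirroring CLI views. The hostname, port, header names, and endpoint paths here are illustrative assumptions — consult your system's REST API documentation for the exact URLs and payloads before using them.

```python
# Hedged sketch of driving a Spectrum Virtualize system via REST.
# All addresses and header names below are assumptions for illustration.
BASE = "https://svc.example.com:7443/rest"  # hypothetical management address

def auth_request(user, password):
    """Build the login request (assumed: POST /rest/auth with credential headers)."""
    return ("POST", f"{BASE}/auth",
            {"X-Auth-Username": user, "X-Auth-Password": password})

def cli_view(token, command):
    """Build a request mirroring a CLI view, e.g. 'lsvdisk'."""
    return ("POST", f"{BASE}/{command}", {"X-Auth-Token": token})

# With a live system you would send these with an HTTP client such as
# the 'requests' library, capture the auth token from the login
# response, then call cli_view(token, "lsvdisk") to list volumes.
method, url, headers = cli_view("abc123", "lsvdisk")
print(method, url)
```

The value of this style is that the same scripts work from any industry-standard tool that can issue HTTP requests, rather than requiring SSH access to the CLI.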
The SAN Volume Controller, FlashSystem V9000 and Storwize family now also support 12TB drives for internal storage. These are 7200 rpm 3.5 inch drives that can be in the 2U 12-bay or 5U 92-bay expansion drawers, or directly in the 12-bay Storwize controllers. Spectrum Virtualize 7.8.1 is the minimum level to support these high-capacity disks.
IBM Spectrum Virtualize for Public Cloud, available on IBM Cloud, has been enhanced to support a full eight node cluster (four node-pairs, or "I/O Groups" as they are called). This can be used as a target for remote mirror from your Spectrum Virtualize cluster on premises.
IBM offers data footprint reduction, high availability, and technical refresh guarantee programs for these products. See Ernie Pitt's blog post on [Peace of Mind with IBM Storage].
IBM Spectrum Scale 5.0 is a highly scalable file and object storage system. It is available as software, pre-built appliances, and in the Cloud.
The pre-built appliances are called "Elastic Storage Server", combining Spectrum Scale software on two IBM Power servers with drawers of flash or disk drives.
IBM introduces two new "Hybrid" models to the ESS family. The GH14 has one 2U drawer with 24 Solid State Drives (SSD) combined with four 5U drawers of 7200rpm spinning disk. The GH24 has two 2U SSD drawers combined with four 5U disk drawers.
Like the GS models, the SSDs come in either 3.84TB or 15.3TB capacities. The 5U drawers are similar to those in the GL models, with either 4TB, 8TB or 10TB drives.
A new Enterprise Slim Rack (S42) is now available to hold these. The S42 is available for all ESS orders, including the GS, GL and new GH models.
IBM has shortened the name of "Spectrum Control Storage Insights" to just "Storage Insights" and made it available in two flavors: Storage Insights, and Storage Insights Pro.
Storage Insights is a no-cost cloud Artificial Intelligence (AI) service that provides common monitoring capabilities to all of your IBM block-level storage, including IBM FlashSystem, SAN Volume Controller (SVC), Storwize, DS8000 models and IBM XIV Storage Systems. Here are some of the capabilities offered:
View the health, performance, and capacity of all your IBM-supported devices from a single place
Filter storage device events to help you focus on the things that require your immediate attention
Act on predictive insights provided by device intelligence before anomalies have an impact on service levels
Use actionable data you get to resolve more issues on your own
Open and view IBM support tickets
Enable IBM Support to automatically collect log packages with no interaction with the client
IBM Storage Insights Pro is a fee-based cloud service, licensed per TiB per month, that includes everything in Storage Insights plus these additional capabilities:
Business impact analysis
Data placement optimization with tier planning
Capacity optimization with reclamation planning
Supports file and object storage, including IBM Spectrum Scale, Elastic Storage Server (ESS), and IBM Cloud Object Storage (IBM COS)
Both Storage Insights and Storage Insights Pro use a "data collector" that runs on premises. This can be any bare metal server or Virtual Machine running Windows, Linux or AIX operating system connected to the SAN, with access to the Internet to upload the data to the IBM Cloud.
If you have IBM block storage today, there is no reason not to try this out. You can download the "data collector" and start using Storage Insights right away. If you like it, consider upgrading to Storage Insights Pro, or the full on-premises Spectrum Control product.
We have a new member of the ever-growing IBM Spectrum Storage family! IBM Spectrum Discover is modern metadata management software that delivers data insight for petabyte-scale, unstructured data.
IBM Spectrum Discover easily connects to IBM Cloud Object Storage (COS) and IBM Spectrum Scale and Elastic Storage Server (ESS) to rapidly ingest, consolidate, and index metadata for billions of files and objects, providing a rich layer of metadata on top of these storage sources. IBM plans to extend support to other platforms next year.
This metadata enables data scientists, storage administrators, and data stewards to efficiently manage, classify, and gain insights from massive amounts of unstructured data. The insights gained accelerate large-scale analytics, improve storage economics, and help with governance to create competitive advantage, speed critical research, and mitigate risk.
This initial release is labeled v2.0 as IBM has deployed this in beta form already at various client locations. Here are some key highlights:
Event-notifications and policy-based workflows to automate metadata ingestion and metadata indexing at a petabyte scale
Fine-grained views of storage consumption based on a wide range of system and custom metadata
Fast, efficient search through petabytes of data, resulting in highly relevant results for large-scale analytics
Ability to quickly differentiate mission-critical business data from data that can either be deleted or moved to a cheaper, colder tier
Policy-based custom tagging that enables organizations to classify and categorize data, and align this data with the needs of the business
A software developers kit (SDK) to build action agents that extract metadata from file headers and content, automate data movement, and provide integration to open source software, such as Apache Spark, Apache Tika, PyTorch, Caffe and TensorFlow, to facilitate data identification and speed large-scale data processing
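To make the policy-based tagging idea above concrete, here is a small sketch of applying tag policies over a metadata index. The record fields and policy format are invented for illustration — they are not Spectrum Discover's actual schema or API.

```python
# Illustrative sketch of policy-based custom tagging: each policy
# maps a predicate over a record's metadata to a (key, value) tag.
records = [
    {"path": "/proj/genome/run1.bam", "size": 4_000_000_000, "owner": "lab1"},
    {"path": "/tmp/scratch/old.log",  "size": 10_000,        "owner": "ops"},
]

policies = [
    (lambda r: r["path"].endswith(".bam"),   ("datatype", "genomics")),
    (lambda r: r["path"].startswith("/tmp"), ("retention", "deletable")),
]

def apply_policies(record):
    """Evaluate every policy against one record; return its tags."""
    tags = {}
    for predicate, (key, value) in policies:
        if predicate(record):
            tags[key] = value
    return tags

for r in records:
    print(r["path"], apply_policies(r))
```

Once records carry tags like these, queries such as "show everything tagged deletable" become simple index lookups, which is how classification drives storage-economics decisions at scale.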
The latest IBM FlashSystem 900 comes in two models, the AE3 "full purchase" model, and the UF3 "storage utility pricing" model where you pay less initially, and then more as you consume more of the capacity. They are the same hardware, just licensed differently.
Currently, IBM offers FCP or InfiniBand host attachment, with up to twelve 3.6TB, 8.5TB or 18TB modules (PCIe cards). A full 2U drawer is configured as 10+P+S RAID5 for high availability and data protection.
Each module has an embedded compression chip, but modules previously had only enough DRAM cache to support a maximum of 22TB of effective (compressed) data. So while the 3.6TB and 8.5TB modules could compress data up to 2.5x, the 18TB card was limited to about 1.2x, which might be fine for already-compressed data like MP3 audio or JPEG photos.
This month, IBM offers new XL MicroLatency Modules, 18TB cards with enough DRAM cache to support 44TB compressed data, up to an effective 2.4x compression ratio. A full twelve-module drawer could hold up to 440TB of effective capacity.
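The arithmetic behind these ratios is straightforward: the effective ratio is the DRAM-cache-limited effective capacity divided by the module's raw capacity, and the drawer total follows from the 10+P+S RAID 5 layout mentioned earlier (ten data modules).

```python
# Checking the effective-capacity figures quoted above.
module_tb = 18           # raw capacity of an 18TB MicroLatency module
old_cache_limit_tb = 22  # max effective data, original module
xl_cache_limit_tb = 44   # max effective data, new XL module

print(round(old_cache_limit_tb / module_tb, 1))  # ~1.2x effective ratio
print(round(xl_cache_limit_tb / module_tb, 1))   # ~2.4x effective ratio

# Full drawer: 10+P+S RAID 5 leaves ten data modules of usable capacity.
print(10 * xl_cache_limit_tb)                    # 440 TB effective
```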
IBM also now offers a quad-port 16Gb FCP card that supports both SCSI and NVMe commands over fabric. This is often denoted as either FC-NVMe or NVMe/FC. The FlashSystem 900 already supported NVMe-oF for InfiniBand (see my blog post [IBM February 2018 Announcements]).
IBM Cloud Tape Connector for z/OS is a software-defined storage solution that provides an alternative to virtual tape libraries like the TS7760. Here are some highlights:
Robust virtual tape emulation solution with e-vaulting to cloud-based offsite storage for cold, archival, or backup data. Virtual tape emulation simulates IBM compatible tape controllers, tape drives, and tape volumes, maintained on any IBM z/OS-compatible disk system, such as IBM DS8000. IBM Cloud Tape Connector for z/OS provides several vault, transfer, and recovery options to support business continuity and resiliency.
Sequential z/OS data set cloud storage and retrieval. Sequential data sets stored on disk or flash storage can be moved to the cloud by IBM Cloud Tape Connector for z/OS without the requirement of performing a tape-write operation.
Automatic application recall of data from cloud, whether e-vaulted through virtual tape emulation or copied directly to the cloud.
Pervasive encryption support. This feature enables enterprises to ensure that any data copied to the cloud is encrypted before it is transmitted, automatically protecting and handling the encryption keys.
Support for IBM Cloud Object Storage using S3 protocol, as well as Amazon S3, Hitachi HCP protocol, and EMC Elastic Cloud Service Protocol.
Last week, I presented at the "IBM TechU Comes to You" event in beautiful Nairobi, Kenya. This was a three-day event, so here is my recap of Day 2, Wednesday Aug 3, 2016.
IBM Spectrum Scale overview and update
This session was covered by Mack Kigada, IBM Executive Consultant for the "Executive Advisory Practice" portion of Systems Lab Services. This session explained the basic features of Spectrum Scale, including the latest features of version 4.2, and related Elastic Storage Server pre-built systems.
Software Defined Storage - IBM Spectrum Overview
This session was presented by Saumil Shah, IBM Spectrum Protect Sales Leader for Middle East, Turkey & Africa. Since SDS is an important topic, the conference coordinators schedule several speakers to present at different time slots, to give everyone a chance to hear the SDS message. Rather than using my same charts, Saumil used his own deck, which he customized based on his experience working in this region.
Flash and the Next Generation Data Center
This session was covered by Firat Ozturk, IBM FlashSystem Sales Leader for Middle East, Turkey & Africa. While IBM offers all-flash array versions of its DS8000, SVC and Storwize product lines, Firat focused on the IBM FlashSystem family, including the FlashSystem 900, FlashSystem V9000, and the new A9000/A9000R models.
According to IDC, Flash-based technologies are predicted to represent 50 percent of the storage capacity sold in 2018. Today it is about 10 percent, so that is a big leap. The primary reason, he feels, is new applications like Cloud and Mobile that are driving customer expectations for faster performance.
Which product should you get? Firat indicated that the FlashSystem 900 is ideal to boost the performance of specific applications, like Oracle or SAP HANA. The FlashSystem V9000 borrows all the code base from SVC and Storwize with Real-time compression ideal for OLTP and Database applications, while offering Storage Virtualization to protect your existing storage infrastructure investment. The FlashSystem A9000 and A9000R are targeted to Cloud deployments, as well as Server Virtualization and Virtual Desktop Infrastructure (VDI).
What is Big Data? Architectures and Practical Use Cases
I have been presenting this since 2013, but it still draws a new crowd every time. Based on my [2015 Presentation], I made some updates to reflect IBM's latest support for Spark, and the new POWER8 solution offerings.
Storage Tiering on z Systems: Less Management, Lower Costs, and Increased Performance
When I present Storage Tiering for distributed systems, I typically focus on Easy Tier feature of SAN Volume Controller, the Analytics-based storage optimization of Spectrum Control, and the Information Lifecycle Management (ILM) policies of Spectrum Scale and Spectrum Archive. This time, Glenn Anderson asked me to give this a "z Systems" slant, for a mainframe-oriented audience.
In this new version, I focused on Easy Tier on IBM DS8000 systems, Hierarchical Storage Management in DFSMShsm, and the new Class Transition features that were introduced initially with DFSMS OAM for objects, and now extended to data sets.
Linux on IBM z Systems and its Participation in Open Source Ecosystem, including Blockchain
Wow! What a long title!
This session was presented by Holger Smolinski, IBM Senior Performance Analyst Linux and KVM on IBM z Systems from the Boeblingen, Germany Lab. Back in the late 1990s, Holger and I worked on porting Linux to the S/390 platform. I led a team to test all of the device drivers for IBM disk and tape storage systems, working with Holger and his team to fix the drivers and submit them to the Open Source Community, so that they would be incorporated formally into the latest Red Hat and SUSE distributions.
Holger gave quite an extensive overview of the entire Open Source Ecosystem that runs on Linux on z Systems mainframes. Over 60 percent of new mainframe customers use the Linux on z Systems operating system, and the complete set of capabilities makes this quite practical.
One of the latest of these is [Blockchain], a new way to track transactions between organizations. The open source project for this is [HyperLedger]. Transactions are recorded into blocks that are encrypted with a hash code, which prevents tampering and fraud. These blocks are then chained together as transactions occur between organizations.
For example, if a product is manufactured in China, shipped over the Pacific Ocean by a shipping company, received at a port in the United States, processed by US Customs, then shipped via trucking company to the buyer, these all would be represented as transaction blocks chained together.
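The supply-chain example above can be sketched in a few lines: each block records the hash of its predecessor, so altering any earlier block invalidates every later one. This shows the hash-chaining idea only; real ledgers such as Hyperledger add consensus, signatures, and much more.

```python
# Minimal hash-chain sketch: tampering with any block breaks the chain.
import hashlib
import json

GENESIS = "0" * 64

def make_block(transaction, prev_hash):
    """Create a block whose hash covers both its payload and its predecessor."""
    body = json.dumps({"tx": transaction, "prev": prev_hash}, sort_keys=True)
    return {"tx": transaction, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

chain, prev = [], GENESIS
for tx in ["manufactured in China", "loaded onto ship",
           "received at US port", "cleared US Customs",
           "delivered by trucking company"]:
    block = make_block(tx, prev)
    chain.append(block)
    prev = block["hash"]

def valid(chain):
    """Recompute each hash and check every link back to the genesis value."""
    prev = GENESIS
    for b in chain:
        if b["prev"] != prev or b["hash"] != make_block(b["tx"], b["prev"])["hash"]:
            return False
        prev = b["hash"]
    return True

print(valid(chain))            # True: the chain is intact
chain[1]["tx"] = "diverted"    # tamper with an early transaction...
print(valid(chain))            # False: every later link is now invalid
```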
Wednesday we had a free evening to explore on our own. Some of my colleagues went to an all-you-can-eat steakhouse for dinner, but I will get plenty of that on my upcoming trip to Sao Paulo, Brazil, so I went elsewhere.
Last year, Hurricanes Harvey, Irma, Jose, and Maria, ravaged various parts of North America and the Caribbean. My topic on Business Continuity and Disaster Recovery (BC/DR) was well attended. I have been working in BC/DR for most of my career, including the "High Availability Center of Competency" or HACOC, for short.
However, natural disasters like hurricanes, tornadoes, forest fires and floods represent less than 20 percent of all disasters. The majority of disasters, nearly 75 percent, arise from electrical power outages, human error, system failure and ransomware.
The seven tiers were developed by a group of IBM customers back in the 1980s, and have stood the test of time. I recently published an article in IBM Systems Magazine (January/February 2018) based on this presentation.
Cloud storage comes in four flavors: persistent, ephemeral, hosted, and reference. The first two I refer to as "Storage for the Computer Cloud" and the latter two I refer to as "Storage as the Storage Cloud".
I also explained the differences between block, file and object access, and why different Cloud storage types use different access methods.
Finally, I covered some Hybrid Cloud Storage configurations, showing how traditional IT, on-premises local private cloud, off-premises dedicated private cloud, and public cloud can be combined to provide added value.
Reporting and Monitoring: How to Verify your Storage is Being Used Efficiently
It is hard to believe that it was over 15 years ago that I was the chief architect for the software we now call IBM Spectrum Connect, Spectrum Control and Storage Insights. There are a variety of editions and bundles for this product, but my focus on this talk was on the advanced storage analytics found in IBM Virtual Storage Center and IBM Spectrum Control Advanced Edition.
I covered three use cases:
What storage tier to put your workload in, and how to move existing data into a faster or slower tier to meet business requirements and IT budgets.
For steady state environments, how to re-balance storage pools within a single tier to keep things even for optimal performance.
When it is time to decommission storage, how to transform volumes from one storage pool to another without downtime or outages.
Special thanks to Bryan Odom for his help in updating this presentation.
Spectrum Virtualization Data Reduction Pools 101
Barry Whyte, IBM Master Inventor and ATS for Storage Virtualization for the Asia Pacific region, presented on how Data Reduction Pools were implemented in version 8.1.2 of Spectrum Virtualize, the software in the latest IBM SAN Volume Controller (SVC), IBM Storwize products, and IBM FlashSystem V9000.
Basically, rather than say we "re-wrote" the code, we prefer softer euphemisms like the code was "re-imagined" or, my favorite lately, "re-factored". Legacy Storage Pools will continue to be supported, but IBM anticipates that people over time will transition to the new Data Reduction Pools (DR Pools).
Like Legacy Storage Pools, the new DR Pools also support a mix of Fully-allocated, Thin-Provisioned, and Compressed-Thin volumes. IBM has made a statement of direction that it will offer Data Deduplication feature in the future, but these will only be on the new DR Pools.
While DR Pools are available today with version 8.1.2, there are a few restrictions. There is a limit of four DR Pools per cluster, and the amount of total capacity of each pool depends on the extent size and number of I/O groups configured. Some of the migration methods developed for Legacy Storage Pools are not available, and in reality don't make sense in the new DR pool scheme. Child Pools are not supported either.
One of the big improvements that DR Pools offer is in the area of compression. With Legacy Storage Pools, CPU cores were dedicated for compression, so they were either under-utilized or overwhelmed. With DR pools, all CPU cores can be used for either I/O or compression, which potentially can increase performance by up to 40 percent!
After the sessions, IBM had its "Solution Center Reception". This is a chance to relax and unwind after a long day, with food and drink, and various sponsors in booths to explain their latest offerings.
This is Katie Thacker from [FIT]. In March 2018, FIT was recognized as IBM’s Top Strategic Service Provider of the year!
These are Elizabeth Krivan and Kelly Bouchard, two recently-hired IBM storage sellers. They attended my sessions at the IBM Technical University in New Orleans last October, so it was good to see them again at my sessions here in Orlando.
You can follow along with Twitter hashtag #IBMtechU, or follow me at @az990tony.
This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Here's my recap of the sessions of Day 3.
Ethernet-only SANs -- Myth or Reality?
Anuj Chandra, IBM Advisory Engineer, presented an excellent overview of Ethernet-based SANs. He started with a quick history of Ethernet, starting with Robert Metcalfe's original drawing for his concept.
In the past, Ethernet was used for email and message transfer, and so dropped packets were tolerated. However, with the use of Ethernet for SANs, many standards have been adopted to make Ethernet networks more robust. These meet requirements for Flow Control, Congestion management, low latency, data integrity and confidentiality, network isolation, and high availability.
These standards are known as IEEE 802.1Q "Data Center Bridging", including 802.1Qbb Priority Flow Control, 802.1Qaz Enhanced Transmission Selection, and 802.1Qau Congestion Notification. There is also the IETF Transparent Interconnection of Lots of Links (TRILL) protocol to replace Spanning Tree Protocol (STP). All of these features are negotiated between server and storage endpoints. Ethernet that supports these new standards is often referred to as "Converged Ethernet", since it handles both traditional email/message traffic as well as SAN data traffic.
In addition to 1GbE and 10GbE, we now have 2.5, 5, 20, 40, 50, and 100 Gb Ethernet speeds. By 2020, Anuj estimates over half of all Ethernet ports will be 25 GbE or faster. Amazingly, some of these speeds can run over existing twisted-pair cabling.
Anuj also covered Remote Direct Memory Access (RDMA), and the RDMA-capable Network Interface Cards (RNIC) that support them. In one chart, shown here, Anuj explained Infiniband, RDMA over Converged Ethernet (RoCE) and RoCE v2, and Internet Wide Area RDMA Protocol (iWARP).
While many of these enhancements were intended for Fibre Channel over Ethernet (FCoE), the beneficiary has been iSCSI. Now there is iSCSI Extensions for RDMA (iSER) to take even more advantage of these changes, and can work with Infiniband, RoCE or iWARP. All of these networks can also be used as the basis for NVMe over Fabric (NVMeOF).
Ethernet is the backbone of Cloud usage, and IBM is well positioned to take advantage of these new networking technologies.
Digital Video Surveillance solutions for extended video evidence protection
Dave Taylor, IBM Executive Architect for Software Defined Storage solutions, presented this session on Digital Video Surveillance (DVS).
Most video surveillance is either analog-based, recorded to standard VHS tapes, or file-based. Sadly, security guards that watch live camera feeds lose their attention span after 22 minutes.
There are an estimated 72 million cameras globally, with 1.5 million more every year.
City governments spend 57 percent of their budget on "public safety". This can include body cams for police departments. Taser International, now called AXON, dominates the body-cam market.
City budgets may not be prepared to store all of this video content into a cloud that complies with Criminal Justice Information Services (CJIS) standards. These Cloud services tend to be more expensive, as the videos must be treated as evidence, tamper-proof, and with appropriate chain of custody.
DVS is not just storing movies. IBM offers Intelligent Video Analytics. It is important to be able to derive insight and actionable response.
Storage capacity adds up quickly. A standard 1080p (1920 by 1080 pixel) camera generates 2.92 GB per hour, about 70 GB per day, and over 2TB per month. If you have 1,000 cameras, that's over 2PB of data per month.
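Those figures check out with simple arithmetic, assuming continuous recording at the quoted per-hour rate:

```python
# Verifying the surveillance capacity figures above.
gb_per_hour = 2.92                     # one 1080p camera, continuous recording

gb_per_day = gb_per_hour * 24
tb_per_month = gb_per_day * 30 / 1000  # 30-day month, decimal TB
print(round(gb_per_day))               # ~70 GB per camera per day
print(round(tb_per_month, 1))          # ~2.1 TB per camera per month

cameras = 1000
pb_per_month = cameras * tb_per_month / 1000
print(round(pb_per_month, 1))          # ~2.1 PB per month for the fleet
```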
For xProtect servers running Windows, the Tiger Bridge Connector can be used to move the video files to either IBM Spectrum Scale or IBM Cloud Object Storage.
Deep Dive into HyperSwap for Active-Active applications and Disaster Recovery
Andrew Greenfield, IBM Global Engineer for Storage, explained the different ways HyperSwap is implemented across the IBM storage portfolio.
For IBM DS8000, HyperSwap is based on Metro Mirror synchronous replication. In the event that the primary DS8000 fails, the host server can automatically re-direct all I/O to the secondary DS8000. This is often referred to as "High Availability" (HA), and in some cases can serve as Disaster Recovery.
For IBM Spectrum Virtualize products, including SAN Volume Controller (SVC), FlashSystem V9000, Storwize V7000 and V5000 products, as well as Spectrum Virtualize sold as software, the implementation is different.
Previously, SVC offered Stretched Clusters, which put one node in one site, and a second node at another site, which allows for an Active/Active configuration. Unfortunately, the nodes in FlashSystem V9000 and Storwize are "connected at the hip", effectively bolted together, so putting separate nodes in different locations was not possible. To solve this, IBM developed HyperSwap that allows one node-pair to replicate across sites to another node-pair in the same Spectrum Virtualize cluster.
However, even though it is called "HyperSwap", it is not implemented in any way similar to the DS8000 method. Instead, Spectrum Virtualize uses the Global Mirror with Change Volumes to replicate data between sites.
IBM Storage and VMware Integration
This session was co-presented by Brian Sherman, IBM Distinguished Engineer, and Steve Solewin, IBM Corporate Solutions Architect.
For nearly two decades, IBM has been a "Technology Alliance Partner" with VMware. To provide consistent integration with all the features and functions of VMware, IBM Spectrum Control Base Edition (SCBE) is provided at no additional charge for IBM DS8000, XIV, FlashSystem and Spectrum Virtualize products.
SCBE is downloadable as an RPM for Red Hat Enterprise Linux (RHEL) and can run bare metal or as a VM.
For those using Hyper-Scale Manager, it will automatically install a special version of SCBE that manages only the A-line products (FlashSystem A9000, FlashSystem A9000R, XIV and Spectrum Accelerate).
Storage admins can define "storage services" that can be assigned to vCenter. This allows VMware admins to allocate storage in self-service mode.
After the meetings were over, IBM had a special event at the Universal City Walk to enjoy some drinks, food, and conversation, and to watch Blue Man Group.