This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years with IBM Systems, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI Market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist during his time in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this solution, a key part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Last year, Hurricanes Harvey, Irma, Jose, and Maria ravaged various parts of North America and the Caribbean. My session on Business Continuity and Disaster Recovery (BC/DR) was well attended. I have worked in BC/DR for most of my career, including the "High Availability Center of Competency", or HACOC for short.
However, natural disasters like hurricanes, tornadoes, forest fires and floods represent less than 20 percent of all disasters. The majority of disasters, nearly 75 percent, arise from electrical power outages, human error, system failure and ransomware.
The seven tiers were developed by a group of IBM customers back in the 1980s, and have stood the test of time. I recently published an article in IBM Systems Magazine (January/February 2018) based on this presentation.
Cloud storage comes in four flavors: persistent, ephemeral, hosted, and reference. The first two I refer to as "Storage for the Computer Cloud" and the latter two I refer to as "Storage as the Storage Cloud".
I also explained the differences between block, file and object access, and why different Cloud storage types use different access methods.
Finally, I covered some Hybrid Cloud Storage configurations, showing how traditional IT, on-premise local private cloud, off-premise dedicated private cloud, and public cloud can be combined to provide added value.
Reporting and Monitoring: How to Verify your Storage is Being Used Efficiently
It is hard to believe that it was over 15 years ago that I was the chief architect for the software we now call IBM Spectrum Connect, Spectrum Control and Storage Insights. There are a variety of editions and bundles for this product, but my focus on this talk was on the advanced storage analytics found in IBM Virtual Storage Center and IBM Spectrum Control Advanced Edition.
I covered three use cases:
What storage tier to put your workload in, and how to move existing data into a faster or slower tier to meet business requirements and IT budgets.
For steady state environments, how to re-balance storage pools within a single tier to keep things even for optimal performance.
When it is time to decommission storage, how to transform volumes from one storage pool to another without downtime or outages.
Special thanks to Bryan Odom for his help in updating this presentation.
Spectrum Virtualization Data Reduction Pools 101
Barry Whyte, IBM Master Inventor and ATS for Storage Virtualization for the Asia Pacific region, presented on how Data Reduction Pools were implemented in version 8.1.2 of Spectrum Virtualize, the software in the latest IBM SAN Volume Controller (SVC), IBM Storwize products, and IBM FlashSystem V9000.
Basically, rather than say we "re-wrote" the code, we prefer softer euphemisms like the code was "re-imagined" or, my favorite lately, "re-factored". Legacy Storage Pools will continue to be supported, but IBM anticipates that people over time will transition to the new Data Reduction Pools (DR Pools).
Like Legacy Storage Pools, the new DR Pools also support a mix of Fully-allocated, Thin-Provisioned, and Compressed-Thin volumes. IBM has made a statement of direction that it will offer a Data Deduplication feature in the future, but only on the new DR Pools.
While DR Pools are available today with version 8.1.2, there are a few restrictions. There is a limit of four DR Pools per cluster, and the amount of total capacity of each pool depends on the extent size and number of I/O groups configured. Some of the migration methods developed for Legacy Storage Pools are not available, and in reality don't make sense in the new DR pool scheme. Child Pools are not supported either.
One of the big improvements that DR Pools offer is in the area of compression. With Legacy Storage Pools, CPU cores were dedicated for compression, so they were either under-utilized or overwhelmed. With DR pools, all CPU cores can be used for either I/O or compression, which potentially can increase performance by up to 40 percent!
After the sessions, IBM had its "Solution Center Reception". This is a chance to relax and unwind after a long day, with food and drink, and various sponsors in booths to explain their latest offerings.
This is Katie Thacker from [FIT]. In March 2018, FIT was recognized as IBM’s Top Strategic Service Provider of the year!
These are Elizabeth Krivan and Kelly Bouchard, two recently-hired IBM storage sellers. They attended my sessions at the IBM Technical University in New Orleans last October, so it was good to see them again at my sessions here in Orlando.
You can follow along with Twitter hashtag #IBMtechU, or follow me at @az990tony.
Well, it's Tuesday again, and you know what that means? IBM Announcements! There were a lot of IBM Power System announcements on Tuesday, so the IBM Power team asked us to wait until Thursday to post about all of the IBM storage announcements, to avoid overwhelming excitement levels with the press and analysts.
(FTC Disclosure: I work for IBM. I have either worked on the code, developed marketing materials, and/or represented each of the products below in my professional capacity. This blog post can be considered a "paid celebrity endorsement")
A few months ago, IBM re-factored the internals of Spectrum Virtualize. It continues to support legacy storage pools, but also offers "Data Reduction Pools", or "DR pools" for short. At the time, these supported only Thin Provisioning and Compression. See fellow blogger Barry Whyte's post on [Data Reduction Pools] for more details.
The Spectrum Virtualize 8.1.3 release now adds Data Deduplication and RESTful API support for the Spectrum Virtualize family, including SAN Volume Controller, FlashSystem V9000 and Storwize products. These features also apply to Spectrum Virtualize as software only, and to Spectrum Virtualize for the Public Cloud.
Data Deduplication is a form of data footprint reduction. Like the deduplication in Spectrum Protect and FlashSystem A9000/R products, Spectrum Virtualize will use SHA1 hash codes to identify duplicate 8K blocks. If the hash code of the block about to be written does not match any existing hash code previously written to the cluster, it is considered unique data.
Legacy storage pools supported three kinds of volumes: fully-allocated, thin-provisioned, and compressed-thin volumes. The new DR pools support five kinds: fully-allocated, thin-provisioned, deduped-thin, compressed-thin, and deduped-compressed-thin volumes.
The new deduplication feature is included at no additional charge with the base Spectrum Virtualize license.
The RESTful API enables storage admins to automate common tasks with industry-standard tools. It provides a programmatic interface to the same functions as the command-line interface (CLI), including creating vDisk volumes and generating the views normally available through the CLI, along with secure authentication to the IBM Spectrum Virtualize family.
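To give a flavor of what such a token-based REST workflow looks like, here is a minimal Python sketch. The port, endpoint paths, and header names below are assumptions for illustration only; consult the IBM Spectrum Virtualize REST API documentation for the actual interface.

```python
import urllib.request

def build_auth_request(host, user, password):
    """Step 1: authenticate with a POST carrying credentials; the
    response would contain a session token. (Paths/headers assumed.)"""
    req = urllib.request.Request(
        f"https://{host}:7443/rest/auth", method="POST")
    req.add_header("X-Auth-Username", user)
    req.add_header("X-Auth-Password", password)
    return req

def build_command_request(host, token, command):
    """Step 2: later calls pass the token and name a CLI-style command."""
    req = urllib.request.Request(
        f"https://{host}:7443/rest/{command}", method="POST")
    req.add_header("X-Auth-Token", token)
    return req

# Example: request the vDisk volume view (REST analogue of 'lsvdisk').
auth = build_auth_request("svc.example.com", "admin", "secret")
cmd = build_command_request("svc.example.com", "<token>", "lsvdisk")
print(cmd.full_url)  # https://svc.example.com:7443/rest/lsvdisk
```

The same two-step pattern (authenticate, then replay CLI-style views over HTTPS) is what makes the API easy to drive from Ansible, curl, or any scripting language.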
The SAN Volume Controller, FlashSystem V9000 and Storwize family now also support 12TB drives for internal storage. These are 7200 rpm 3.5 inch drives that can be in the 2U 12-bay or 5U 92-bay expansion drawers, or directly in the 12-bay Storwize controllers. Spectrum Virtualize 7.8.1 is the minimum level to support these high-capacity disks.
IBM Spectrum Virtualize for Public Cloud, available on IBM Cloud, has been enhanced to support a full eight node cluster (four node-pairs, or "I/O Groups" as they are called). This can be used as a target for remote mirror from your Spectrum Virtualize cluster on premises.
IBM offers data footprint reduction, high availability, and technical refresh guarantee programs for these products. See Ernie Pitt's blog post on [Peace of Mind with IBM Storage].
IBM Spectrum Scale 5.0 is a highly scalable file and object storage system. It is available as software, as pre-built appliances, and in the Cloud.
The pre-built appliances are called "Elastic Storage Server", combining Spectrum Scale software on two IBM Power servers with drawers of flash or disk drives.
IBM introduces two new "Hybrid" models to the ESS family. The GH14 has one 2U drawer with 24 Solid State Drives (SSDs), combined with four 5U drawers of 7200 rpm spinning disk. The GH24 has two 2U SSD drawers combined with four 5U disk drawers.
Like the GS models, the SSDs come in 3.84TB or 15.3TB capacities. The 5U drawers are similar to those in the GL models, with drives of 4TB, 8TB or 10TB capacities.
A new Enterprise Slim Rack (S42) is now available to hold these. The S42 is available for all ESS orders, including the GS, GL and new GH models.
IBM has shortened the name of "Spectrum Control Storage Insights" to just "Storage Insights" and made it available in two flavors: Storage Insights, and Storage Insights Pro.
Storage Insights is a no-cost cloud Artificial Intelligence (AI) service that provides common monitoring capabilities to all of your IBM block-level storage, including IBM FlashSystem, SAN Volume Controller (SVC), Storwize, DS8000 models and IBM XIV Storage Systems. Here are some of the capabilities offered:
View the health, performance, and capacity of all your IBM-supported devices from a single place
Filter storage device events to help you focus on the things that require your immediate attention
Act on predictive insights provided by device intelligence before anomalies have an impact on service levels
Use actionable data you get to resolve more issues on your own
Open and view IBM support tickets
Enable IBM Support to automatically collect log packages with no interaction with the client
IBM Storage Insights Pro is a fee-based cloud service, licensed per TiB per month, that includes everything in Storage Insights plus these additional capabilities:
Business impact analysis
Data placement optimization with tier planning
Capacity optimization with reclamation planning
Supports file and object storage, including IBM Spectrum Scale, Elastic Storage Server (ESS), and IBM Cloud Object Storage (IBM COS)
Both Storage Insights and Storage Insights Pro use a "data collector" that runs on premises. This can be any bare metal server or Virtual Machine running Windows, Linux or AIX operating system connected to the SAN, with access to the Internet to upload the data to the IBM Cloud.
If you have IBM block storage today, there is no reason not to try this out. You can download the "data collector" and start using Storage Insights right away. If you like it, consider upgrading to Storage Insights Pro, or the full on-premise Spectrum Control product.
This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Here's my recap of the sessions of Day 3.
Ethernet-only SANs -- Myth or Reality?
Anuj Chandra, IBM Advisory Engineer, presented an excellent overview of Ethernet-based SANs. He started with a quick history of Ethernet, starting with Robert Metcalfe's original drawing for his concept.
In the past, Ethernet was used for email and message transfer, and so dropped packets were tolerated. However, with the use of Ethernet for SANs, many standards have been adopted to make Ethernet networks more robust. These meet requirements for Flow Control, Congestion management, low latency, data integrity and confidentiality, network isolation, and high availability.
These standards are known as IEEE 802.1Q "Data Center Bridging", including 802.1Qbb Priority Flow Control, 802.1Qaz Enhanced Transmission Selection, and 802.1Qau Congestion Notification. There is also the IETF Transparent Interconnection of Lots of Links (TRILL) to replace Spanning Tree Protocol (STP). All of these features are negotiated between the server and storage endpoints. Ethernet that supports these new standards is often referred to as "Converged Ethernet", since it handles both traditional email/message traffic as well as SAN data traffic.
In addition to 1GbE and 10GbE, we now have 2.5, 5, 20, 25, 40, 50, and 100 Gb Ethernet speeds. By 2020, Anuj estimates over half of all Ethernet ports will be 25 GbE or faster. Amazingly, some of these speeds can run over existing twisted-pair copper cabling.
Anuj also covered Remote Direct Memory Access (RDMA), and the RDMA-capable Network Interface Cards (RNIC) that support them. In one chart, shown here, Anuj explained Infiniband, RDMA over Converged Ethernet (RoCE) and RoCE v2, and Internet Wide Area RDMA Protocol (iWARP).
While many of these enhancements were intended for Fibre Channel over Ethernet (FCoE), the beneficiary has been iSCSI. Now there is iSCSI Extensions for RDMA (iSER) to take even more advantage of these changes, and can work with Infiniband, RoCE or iWARP. All of these networks can also be used as the basis for NVMe over Fabric (NVMeOF).
Ethernet is the backbone of Cloud usage, and IBM is well positioned to take advantage of these new networking technologies.
Digital Video Surveillance solutions for extended video evidence protection
Dave Taylor, IBM Executive Architect for Software Defined Storage solutions, presented this session on Digital Video Surveillance (DVS).
Most video surveillance is either analog-based, going to standard VHS tapes, or file-based. Sadly, security guards that watch live camera feeds lose their attention span after 22 minutes.
There are an estimated 72 million cameras globally, with 1.5 million more every year.
City governments spend 57 percent of their budget on "public safety". This can include body cams for police departments. Taser International, now called AXON, dominates the body-cam market.
City budgets may not be prepared to store all of this video content into a cloud that complies with Criminal Justice Information Services (CJIS) standards. These Cloud services tend to be more expensive, as the videos must be treated as evidence, tamper-proof, and with appropriate chain of custody.
DVS is not just storing movies. IBM offers Intelligent Video Analytics. It is important to be able to derive insight and actionable response.
Storage capacity adds up quickly. A standard 1080p (1920 by 1080 pixel) camera generates about 2.92 GB per hour, 70 GB per day, and over 2TB per month. If you have 1,000 cameras, that's over 2PB of data.
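Those figures are easy to verify with a quick back-of-the-envelope calculation:

```python
# Scale one 1080p camera at roughly 2.92 GB/hour to a day, a month,
# and a 1,000-camera deployment (decimal units, 30-day month assumed).
GB_PER_HOUR = 2.92

per_day_gb = GB_PER_HOUR * 24               # ~70 GB/day per camera
per_month_gb = per_day_gb * 30              # ~2.1 TB/month per camera
fleet_pb = per_month_gb * 1000 / 1_000_000  # 1,000 cameras, in PB

print(f"{per_day_gb:.0f} GB/day, {per_month_gb / 1000:.1f} TB/month, "
      f"{fleet_pb:.1f} PB for 1,000 cameras")
# -> 70 GB/day, 2.1 TB/month, 2.1 PB for 1,000 cameras
```

The numbers line up with the figures quoted above, which is why video surveillance deployments outgrow traditional storage budgets so quickly.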
For xProtect servers running Windows, the Tiger Bridge Connector can be used to move the video files to either IBM Spectrum Scale or IBM Cloud Object Storage.
Deep Dive into HyperSwap for Active-Active applications and Disaster Recovery
Andrew Greenfield, IBM Global Engineer for Storage, explained the different ways HyperSwap is implemented across the IBM storage portfolio.
For IBM DS8000, HyperSwap is based on Metro Mirror synchronous replication. In the event that the primary DS8000 fails, the host server can automatically re-direct all I/O to the secondary DS8000. This is often referred to as "High Availability" (HA), and in some cases can serve as Disaster Recovery.
For IBM Spectrum Virtualize products, including SAN Volume Controller (SVC), FlashSystem V9000, Storwize V7000 and V5000 products, as well as Spectrum Virtualize sold as software, the implementation is different.
Previously, SVC offered Stretched Clusters, which put one node in one site, and a second node at another site, which allows for an Active/Active configuration. Unfortunately, the nodes in FlashSystem V9000 and Storwize are "connected at the hip", effectively bolted together, so putting separate nodes in different locations was not possible. To solve this, IBM developed HyperSwap that allows one node-pair to replicate across sites to another node-pair in the same Spectrum Virtualize cluster.
However, even though it is called "HyperSwap", it is not implemented in any way similar to the DS8000 method. Instead, Spectrum Virtualize uses Global Mirror with Change Volumes to replicate data between sites.
IBM Storage and VMware Integration
This session was co-presented by Brian Sherman, IBM Distinguished Engineer, and Steve Solewin, IBM Corporate Solutions Architect.
For nearly two decades, IBM has been a "Technology Alliance Partner" with VMware. To provide consistent integration with all the features and functions of VMware, IBM Spectrum Control Base Edition (SCBE) is provided at no additional charge for IBM DS8000, XIV, FlashSystem and Spectrum Virtualize products.
SCBE is downloadable as an RPM for Red Hat Enterprise Linux (RHEL) and can run bare-metal or as a VM.
For those using Hyper-Scale Manager, it will automatically install a special version of SCBE that manages only the A-line products (FlashSystem A9000, FlashSystem A9000R, XIV and Spectrum Accelerate).
Storage admins can define "storage services" that can be assigned to vCenter. This allows VMware admins to allocate storage in self-service mode.
After the meetings were over, IBM had a special event at the Universal City Walk to enjoy some drinks, food, and conversation, and to watch Blue Man Group.
Last week, I presented at the "IBM TechU Comes to You" event in beautiful Nairobi, Kenya. This was a three-day event, so here is my recap of Day 2, Wednesday Aug 3, 2016.
IBM Spectrum Scale overview and update
This session was covered by Mack Kigada, IBM Executive Consultant for the "Executive Advisory Practice" portion of Systems Lab Services. This session explained the basic features of Spectrum Scale, including the latest features of version 4.2, and related Elastic Storage Server pre-built systems.
Software Defined Storage - IBM Spectrum Overview
This session was presented by Saumil Shah, IBM Spectrum Protect Sales Leader for Middle East, Turkey & Africa. Since SDS is an important topic, the conference coordinators scheduled several speakers at different time slots, to give everyone a chance to hear the SDS message. Rather than using my charts, Saumil used his own deck, customized based on his experience working in this region.
Flash and the Next Generation Data Center
This session was covered by Firat Ozturk, IBM FlashSystem Sales Leader for Middle East, Turkey & Africa. While IBM offers all-flash array versions of its DS8000, SVC and Storwize product lines, Firat focused on the IBM FlashSystem family, including the FlashSystem 900, FlashSystem V9000, and the new A9000/A9000R models.
According to IDC, Flash-based technologies are predicted to represent 50 percent of the storage capacity sold in 2018. Today it is about 10 percent, so that is a big leap. The primary reason, he feels, is new applications like Cloud and Mobile that are driving customer expectations for faster performance.
Which product should you get? Firat indicated that the FlashSystem 900 is ideal to boost the performance of specific applications, like Oracle or SAP HANA. The FlashSystem V9000 borrows all the code base from SVC and Storwize with Real-time compression ideal for OLTP and Database applications, while offering Storage Virtualization to protect your existing storage infrastructure investment. The FlashSystem A9000 and A9000R are targeted to Cloud deployments, as well as Server Virtualization and Virtual Desktop Infrastructure (VDI).
What is Big Data? Architectures and Practical Use Cases
I have been presenting this topic since 2013, but it still draws a new crowd every time. Based on my [2015 Presentation], I made some updates to reflect IBM's latest support for Spark, and the new POWER8 solution offerings.
Storage Tiering on z Systems: Less Management, Lower Costs, and Increased Performance
When I present Storage Tiering for distributed systems, I typically focus on Easy Tier feature of SAN Volume Controller, the Analytics-based storage optimization of Spectrum Control, and the Information Lifecycle Management (ILM) policies of Spectrum Scale and Spectrum Archive. This time, Glenn Anderson asked me to give this a "z Systems" slant, for a mainframe-oriented audience.
In this new version, I focused on Easy Tier on IBM DS8000 systems, Hierarchical Storage Management in DFSMShsm, and the new Class Transition features that were introduced initially with DFSMS OAM for objects, and now extended to data sets.
Linux on IBM z Systems and its Participation in Open Source Ecosystem, including Blockchain
Wow! What a long title!
This session was presented by Holger Smolinski, IBM Senior Performance Analyst Linux and KVM on IBM z Systems from the Boeblingen, Germany Lab. Back in the late 1990s, Holger and I worked on porting Linux to the S/390 platform. I led a team to test all of the device drivers for IBM disk and tape storage systems, working with Holger and his team to fix the drivers and submit them to the Open Source Community, so that they would be incorporated formally into the latest Red Hat and SUSE distributions.
Holger gave quite an extensive overview of the entire Open Source ecosystem that runs on Linux on z Systems mainframes. Over 60 percent of new mainframe customers use the Linux on z Systems operating system, and the complete set of capabilities makes this quite practical.
One of the latest of these is [Blockchain], a new way to track transactions between organizations. The open source project for this is [HyperLedger]. Transactions are recorded into blocks that are encrypted with a hash code, which prevents tampering and fraud. These blocks are then chained together as transactions occur between organizations.
For example, if a product is manufactured in China, shipped over the Pacific Ocean by a shipping company, received at a port in the United States, processed by US Customs, then shipped via trucking company to the buyer, these all would be represented as transaction blocks chained together.
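The chaining idea in that supply-chain example can be sketched in a few lines of Python. This is a toy illustration of hash chaining only, assuming a simplified block format; a real HyperLedger deployment adds Merkle trees, consensus, smart contracts, and much more:

```python
import hashlib
import json

def add_block(chain, transaction):
    """Append a block whose hash covers both the transaction and the
    previous block's hash -- altering any earlier block breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"tx": transaction, "prev": prev_hash}, sort_keys=True)
    chain.append({"tx": transaction, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every hash; any tampering shows up as a mismatch."""
    for i, block in enumerate(chain):
        prev_hash = chain[i - 1]["hash"] if i else "0" * 64
        payload = json.dumps({"tx": block["tx"], "prev": prev_hash},
                             sort_keys=True)
        if (block["prev"] != prev_hash or
                block["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
    return True

chain = []
for step in ["manufactured in China", "shipped across the Pacific",
             "received at US port", "cleared US Customs", "delivered by truck"]:
    add_block(chain, step)
print(verify(chain))          # True
chain[1]["tx"] = "diverted"   # tamper with a middle block
print(verify(chain))          # False
```

Because each block's hash includes the previous block's hash, rewriting any one transaction invalidates every block that follows it, which is what makes the ledger tamper-evident.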
Wednesday we had a free evening to explore on our own. Some of my colleagues went to an all-you-can-eat steakhouse for dinner, but I will get plenty of that on my upcoming trip to Sao Paulo, Brazil, so I went elsewhere.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
The Collaboration of Oak Ridge, Argonne, and Livermore [CORAL] is a joint procurement activity among three of the Department of Energy's National Laboratories, launched in 2014 to build state-of-the-art high-performance computing (HPC) technologies that are essential for supporting U.S. national nuclear security and are key tools used for technology advancement and scientific discovery.
Of course, when you hear "state-of-the-art technology", IBM is probably the first company that comes to mind!
The new IBM Spectrum Scale 5.0 has been greatly enhanced to meet CORAL requirements:
Dramatic improvements in I/O performance
Significant reduction in internode software path latency to support the newest low-latency, high-bandwidth hardware such as NVMe
Improved performance for mixed small and large block size workloads, thanks to a new 4 MB default block size with a variable sub-block size based on the block size chosen
Improved metadata operation performance to a single directory from multiple nodes
Spectrum Scale 5.0 now automatically tunes more than twenty communication protocol and buffer management parameters, aiding setup for optimal performance. The enhanced GUI adds many capabilities, including performance, capacity, and network monitoring, AFM (multicluster management), transparent cloud tiering, and enhanced maintenance and support, including interaction with IBM remote support.
Spectrum Scale 5.0 now offers file-level immutability. Previous releases supported immutability at the file set granularity, so this allows greater granularity. Immutability can be an effective tool as part of an overall Non-Erasable, Non-Rewriteable [NENR] compliance policy.
Spectrum Scale comes in both "Standard Edition" and "Data Management Edition". The latter offers some additional features, including Transparent Cloud Tiering, Asynchronous AFM Disaster Recovery support, and Encryption. Some additional enhancements to Data Management Edition in Spectrum Scale 5.0 are:
File audit logging to track user accesses to the file system, with events supported across all nodes and all protocols
Parseable data stored in secure retention-protected fileset
Data security following removal of physical media protected by on-disk encryption
The new IBM Storage Utility Offerings include the IBM FlashSystem 900 (9843-UF3), IBM Storwize V5030 (2078-U5A), and Storwize V7000 (2076-U7A) storage utility models that enable variable capacity usage and billing.
These models provide a fixed total capacity, with a base and variable usage subscription of that total capacity. IBM Spectrum Control Storage Insights is used to monitor the system capacity usage. It is used to report on capacity used beyond the base subscription capacity, referred to as variable usage.
The variable capacity usage is billed on a quarterly basis. This enables customers to grow or shrink their usage, and only pay for configured capacity.
Suppose you only need 300 TB today, but expect this to grow to 1 PB (1,000 TB) over the course of three years. You install 1,000 TB (1 PB) of capacity, and pay for the base 300 TB, plus whatever you use above 300 TB during each subsequent quarter. After 36 months, you pay for the rest of the installed capacity.
(There are comparable offerings from IBM's competitors, but they often require that you pay for at least 75 to 85 percent of the installed amount, and then you would need to continue to disrupt your operations with additional capacity installed throughout the 12 to 36 month period. IBM's approach allows you to avoid installation disruption during the entire 36 month period!)
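The billing model above reduces to a simple formula: each quarter you are billed for the base subscription plus any usage above it. This sketch assumes a straightforward "base plus excess" calculation; actual contract terms will differ:

```python
# Utility-model billing sketch: 300 TB base subscription on 1,000 TB
# installed, with usage above the base billed as variable capacity.
BASE_TB = 300
INSTALLED_TB = 1000

def quarterly_variable_tb(used_tb):
    """Billable variable capacity for a quarter: usage above the base
    subscription, capped at the installed total."""
    used_tb = min(used_tb, INSTALLED_TB)
    return max(0, used_tb - BASE_TB)

# Usage grows over three years; only the excess over 300 TB is variable.
for used in [250, 300, 450, 700, 1000]:
    print(f"{used} TB used -> {quarterly_variable_tb(used)} TB variable")
```

Note that usage at or below the base (the first two cases) incurs no variable charge, which is why the model also accommodates quarters where usage shrinks.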
IBM Spectrum Virtualize for Public Cloud V8.1.1 delivers a powerful solution for the deployment of IBM Spectrum Virtualize software in public cloud, starting with IBM Cloud. This new capability provides a monthly license to deploy and use Spectrum Virtualize in IBM Cloud to enable hybrid cloud solutions.
Remote replication will be supported between Spectrum Virtualize-based appliances (including SAN Volume Controller (SVC), the Storwize family, IBM FlashSystem V9000, and VersaStack with Storwize family or SVC), or Spectrum Virtualize Software, to the IBM Cloud.
Using IP-based replication with Metro Mirror, Global Mirror, or Global Mirror with Change Volumes, clients can create secondary copies of on-premises data in the public cloud for disaster recovery. IBM has over 25 data centers around the world available to choose from. Remote copy services can also be used between two IBM Cloud data centers for improved availability.
The solution is based on bare metal servers. You can create either two- or four-node high availability clusters.
Spectrum Virtualize on-premise SVC and Storwize now also support 2.4 TB 10K rpm 2.5-inch SAS hard disk drives.
Well, it's Tuesday again, and you know what that means? IBM Announcements! There were lots of announcements today, so I have split this up into two posts. One for the Tape and Cloud announcements, and the other for the Spectrum Storage family.
IBM Spectrum Virtualize Software V7.8.1
IBM Spectrum Virtualize™ V7.8.1 is the latest software for FlashSystem V9000, SAN Volume Controller and Storwize products.
Last release, IBM introduced "Host Groups" for clusters that needed to share a common set of volumes. This release offers "Host cluster I/O throttling": I/O throttling can be managed at the host level (individual hosts or groups) and at the managed disk level for improved performance management, along with GUI support.
Increased background FlashCopy transfer rates: This feature enables you to increase the rate of background FlashCopy transfers, providing faster copies as the infrastructure allows. This takes advantage of the higher performance capabilities of today's systems, processing the copy in a shorter period of time. The default was 64 MB/sec, and now we can go up to 2 GB/sec, for those who want their FlashCopy to be done as fast as possible.
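The practical effect of the higher rate is easy to estimate. This sketch ignores real-world contention and simply assumes the copy runs at the configured rate:

```python
# Elapsed time to background-copy a volume at a given FlashCopy rate,
# comparing the old 64 MB/s default to the new 2 GB/s maximum.
def copy_hours(volume_gb, rate_mb_per_s):
    """Hours to copy a volume of volume_gb GB at rate_mb_per_s MB/s."""
    return volume_gb * 1024 / rate_mb_per_s / 3600

vol = 2048  # a 2 TB volume
print(f"64 MB/s : {copy_hours(vol, 64):.1f} hours")    # 9.1 hours
print(f"2 GB/s  : {copy_hours(vol, 2048):.2f} hours")  # 0.28 hours
```

A copy that used to take most of a work shift can now complete in well under an hour, infrastructure permitting.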
Port Congestion Statistic: Zero buffer-credit statistics help detect SAN congestion when diagnosing performance-related issues, improving support in high-performance environments. IBM had this for the 8Gbps FCP cards, but not for the 16Gbps cards, so now that's fixed.
Resizing of volumes in remote mirror relationships: Target volumes in remote mirror relationships will be automatically resized when source volumes are resized. Lots of clients asked for this, and IBM delivered!
Consistency protection for Metro/Global Mirror relationships: An automatic restart of mirroring relationships after a link fails between the mirror sites improves disaster recovery scenarios, helping to ensure the applications are protected throughout the process.
When IBM introduced "Global Mirror with Change Volumes" (GMCV), I wanted to call it "Trickle Mirror", because the primary site takes a FlashCopy, trickles the data over, then takes a FlashCopy at the remote site. Now, clients using traditional Metro or Global Mirror can add "Change Volumes" as protection. In the unlikely event a network disruption occurs, it drops down to GMCV until the link resumes full speed.
Support of SuperMicro servers for the Spectrum Virtualize as Software Only offering: Support for x86-based Intel™ servers by SuperMicro for Spectrum Virtualize Software is available with this release.
Last year, IBM offered Spectrum Virtualize as software that could run on Lenovo servers. However, now there are clients who want alternative server choices.
Supermicro SuperServer 2028U-TRTP+ is supported to run Spectrum Virtualize Software. This is a great option for end clients, managed service or cloud service providers deploying private clouds, building hosted services, or using software-defined storage on third-party Intel servers. This is a fully inclusive license, with all key features of Spectrum Virtualize available in a single, downloadable image.
IBM Spectrum Control V5.2.13 and IBM Virtual Storage Center V5.2.13
We often joke that IBM Virtual Storage Center is the [Happy Meal] combining storage virtualization with Spectrum Virtualize hardware like FlashSystem V9000, SAN Volume Controller or Storwize as the "hamburger", Spectrum Control as the "fries" and "Spectrum Protect Snapshot" as the "soft drink". Storage Analytics was included as a "prize inside" only available in the VSC bundle to entice clients to choose this option.
Whenever IBM updates Spectrum Control, they often put out a new version of the Virtual Storage Center bundle as well. I was the Chief Architect for Spectrum Control from 2001 to 2002, and Technical Evangelist for SVC in 2003 when we first introduced the product, so I have a long history with both products.
This release provides additional information and performance metrics on Dell EMC VMAX and EMC VNX devices. This is done natively; the devices no longer need to be virtualized behind Spectrum Virtualize, as was often done in the past.
IBM now offers better visibility of drives within IBM Cloud Object Storage Slicestor® nodes. IBM acquired Cleversafe 18 months ago, and is working to bring it under the Spectrum Control management umbrella.
IBM Spectrum Scale™ file system to external pool correlation. Spectrum Scale can migrate data to three different types of "external pools":
Cloud Object pool, either on-premises Object Storage or off-premises Cloud Service Provider storage.
Spectrum Protect pool, where Spectrum Protect manages the migrated data on one of 700 supported devices, including tape, virtual tape, optical, flash, disk, object storage or cloud.
Spectrum Archive pool, where data is written directly to physical tape using the industry-standard LTFS format.
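As a hedged illustration, an external pool in the Spectrum Scale policy language is declared with an interface script, and a MIGRATE rule moves data to it. The pool name and script path below are made up for the example:

```
/* Declare an external pool; the interface script path is illustrative. */
RULE EXTERNAL POOL 'coldpool' EXEC '/usr/local/bin/external_pool.sh'

/* Migrate from the internal 'system' pool when it exceeds 85% full, down to 75%. */
RULE 'MoveCold' MIGRATE FROM POOL 'system' THRESHOLD(85,75) TO POOL 'coldpool'
```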
This release provides additional information on the copy data panel about SAN Volume Controller (SVC) HyperSwap® and vDisk mirror.
While the "Virtual Storage Center" bundle is an awesome deal, some clients have asked for the "Vegetarian Option" (Fries and Drink only). Why? Because they want the advanced storage analytics (prize inside) for other devices like DS8000, XIV, etc. So, IBM created the "IBM Spectrum Control Advanced Edition", which has everything in VSC except the Spectrum Virtualize itself.
Advanced Edition adds improvements to the chargeback report. It also includes the IBM Spectrum Protect™ Snapshot V8.1 release.
IBM Spectrum Control Storage Insights Software as a Service
Storage Insights is IBM's "Software-as-a-Service" reporting-only offering, a subset of Spectrum Control Advanced Edition. It includes direct support for Dell EMC VMAX, VNX, and VNXe storage systems. This is huge! Now, clients who have only EMC hardware can, on a monthly basis, figure out where they are wasting money and decrease their costs.
Other features carried over include the enhanced drive support for IBM® Cloud Object Storage, enhanced external capacity views for IBM Spectrum Scale™, and additional replication views for vDisk mirror and HyperSwap® relationships for SAN Volume Controller (SVC) and Storwize® devices that I mentioned above.
The article starts out giving background history of the current mess we are in. Here is an excerpt:
"Throughout most of U.S. history, American high school students were routinely taught vocational and job-ready skills along with the three Rs: reading, writing and arithmetic...
...But in the 1950s, a different philosophy emerged: the theory that students should follow separate educational tracks according to ability...
Ability tracking did not sit well with educators or parents, who believed students were assigned to tracks not by aptitude, but by socio-economic status and race. ...
...The backlash against tracking, however, did not bring vocational education back to the academic core. Instead, the focus shifted to preparing all students for college, and college prep is still the center of the U.S. high school curriculum..."
My father was a mechanical engineer who enjoyed fixing cars and woodworking on the weekends. I had plenty of "vocational training" growing up at home, no need for me to have this in school, allowing me to focus on getting ready for college.
Nicholas asks legitimate questions at this stage: "So what’s the harm in prepping kids for college? Won’t all students benefit from a high-level, four-year academic degree program?" His initial response is:
"... As it turns out, not really. For one thing, people have a huge and diverse range of different skills and learning styles. Not everyone is good at math, biology, history and other traditional subjects that characterize college-level work.
Not everyone is fascinated by Greek mythology, or enamored with Victorian literature, or enraptured by classical music. Some students are mechanical; others are artistic. Some focus best in a lecture hall or classroom; still others learn best by doing, and would thrive in the studio, workshop or shop floor..."
Hard to argue that people are different, and learn in different ways. Not everyone is meant for college.
"...And not everyone goes to college. The latest figures from the U.S. Bureau of Labor Statistics (BLS) show that about 68 percent of high school students attend college. That means over 30 percent graduate with neither academic nor job skills..."
Here is what I have the most problems with. To think that the 30 percent of high school students who graduate but do not go to college have neither academic nor job skills? I disagree with this, as there are many jobs for which the academic and job-skill training they received in high school is more than adequate. Nicholas then doubles down:
"...But even the 68 percent aren't doing so well. Almost 40 percent of students who begin four-year college programs don’t complete them, which translates into a whole lot of wasted time, wasted money, and burdensome student loan debt. Of those who do finish college, one-third or more will end up in jobs they could have had without a four-year degree. The BLS found that 37 percent of currently employed college grads are doing work for which only a high school degree is required.
It is true that earnings studies show college graduates earn more over a lifetime than high school graduates. However, these studies have some weaknesses. For example, over 53 percent of recent college graduates are unemployed or under-employed. And income for college graduates varies widely by major – philosophy graduates don’t nearly earn what business studies graduates do. Finally, earnings studies compare college graduates to all high school graduates. But the subset of high school students who graduate with vocational training – those who go into well-paying, skilled jobs – the picture for non-college graduates looks much rosier.
Yet despite the growing evidence that four-year college programs serve fewer and fewer of our students, states continue to cut vocational programs..."
There are a lot of successful billionaires who did not complete four years of college: Bill Gates, Steve Jobs, Michael Dell, Henry Ford, and Howard Hughes, just to name a few.
If you feel that the only purpose of attending high school or college is to get job-specific skills, then you are missing out on all the other aspects of school that teach you valuable life lessons: getting along with others, teamwork, communication, and other "soft skills" that aren't necessarily job-specific.
Teenagers entering college are still growing up, trying to figure out what they want to do with their lives, discovering new ideas, new ways of thinking, and networking with people of different backgrounds and cultures.
"...The U.S. economy has changed. The manufacturing sector is growing and modernizing, creating a wealth of challenging, well-paying, highly skilled jobs for those with the skills to do them. The demise of vocational education at the high school level has bred a skills shortage in manufacturing today, and with it a wealth of career opportunities for both under-employed college grads and high school students looking for direct pathways to interesting, lucrative careers. Many of the jobs in manufacturing are attainable through apprenticeships, on-the-job training, and vocational programs offered at community colleges. They don’t require expensive, four-year degrees for which many students are not suited..."
The skills shortage is real, but until employers are willing to pay people for what they're worth, the situation will not be resolved. The free market has a way to fix skills shortages. High demand raises salaries, and causes people to invest in high school and college education in part to vie for these positions. That is in part why medical doctors are paid so much.
"...The modern workplace favors those with solid, transferable skills who are open to continued learning. Most young people today will have many jobs over the course of their lifetime, and a good number will have multiple careers that require new and more sophisticated skills..."
A few years ago, I was hosting clients for dinner in Tucson. The sales rep had brought his daughter and her roommate along, as there was a shooting at their college campus and classes were canceled for the week. The daughter asserted, "In 18 months, I will no longer have to learn anything again. I will be done with school." Her roommate chimed in, "Ha! I am a year ahead of you, and only six months away from that!"
I was the bearer of bad news. "Ladies," I said, "you will have to get used to learning new things the rest of your lives." The highest ranking client at the table overheard me, and she re-iterated, "Ladies, that is probably the best advice I have heard in awhile. I suggest you heed it carefully."
A big part of high school and college education is to teach you how to learn on your own. Learn to read, search out information, take measurements, gather data, make plans, and ask the right questions. These are skills that are useful in a wide variety of careers.
Nicholas concludes with:
"...Just a few decades ago, our public education system provided ample opportunities for young people to learn about careers in manufacturing and other vocational trades. Yet, today, high-schoolers hear barely a whisper about the many doors that the vocational education path can open. The “college-for-everyone” mentality has pushed awareness of other possible career paths to the margins. The cost to the individuals and the economy as a whole is high. If we want everyone’s kid to succeed, we need to bring vocational education back to the core of high school learning."
I agree the educational system in the United States is broken, but I am not sure I agree with everything that Nicholas writes in this article.
As I have mentioned before, I started this blog on September 1, 2006 as part of IBM's big ["50 Years of Disk Systems Innovation"] campaign. IBM introduced the first commercial disk system on September 13, 1956 and so the 50th anniversary was in 2006. That means this month, IBM celebrates the "Diamond" anniversary, 60 years of Disk Systems!
"For those who missed it, IBM announced last Tuesday encryption capability for the TS1120 drive, our enterprise tape drive that reads and writes 3592 cartridges. Do you need special cartridges for this? No! Use the same ones you have already been using!
You can read more about it at www.ibm.com/storage/tape."
Short and sweet, but it got me started, and I ended up writing 21 blog posts that first month. You can read blog posts from all 10 years by looking at the left panel of my blog under "Archive".
While traditional disk and tape storage are still very important and relevant in today's environment, IBM has also expanded into other technologies:
In 2012, IBM [acquired Texas Memory Systems]. In 2014, IBM shipped 62 PB, more Flash capacity than any other vendor. In 2015, IBM continued its #1 status, shipping 170 PB of Flash, again more than any other vendor.
IBM has flash everywhere, from the advanced FlashSystem 900, V9000, A9000 and A9000R models, to other all-flash array and hybrid flash-and-disk systems with various sets of features and functions to meet a variety of workload requirements.
The DS8888 all-flash array, and the DS8886 and DS8884 hybrid flash-and-disk systems, round out the latest in the DS8000 storage systems family. The SAN Volume Controller and Storwize family of products, based on IBM Spectrum Virtualize software, also have all-flash array and hybrid configurations, the most recent being the Gen2+ models of Storwize V7000F and V5030F. The latest solution is the DeepFlash 150 models, designed for analytics and unstructured data.
Between internally-developed IBM Spectrum Scale and IBM Spectrum Archive, and IBM's [acquisition of Cleversafe], IBM is ranked #1 in Object Storage. IBM Cloud Object Storage System, IBM's new name for Cleversafe's flagship product, is available as software-only, pre-built systems, or in the IBM SoftLayer cloud.
Software-Defined Storage (SDS) with IBM Spectrum Storage
Last year, IBM re-branded its various storage software products under the "IBM Spectrum Storage" family. Earlier this year, IBM announced the new [IBM Spectrum Storage Suite license] which makes it even easier to procure, either with a perpetual software license, elastic monthly licensing, or utility license that combines some of each.
IBM is ranked #1 in Software-Defined Storage, with over 40 percent marketshare, offering solutions as Software-only, pre-built systems, and in IBM SoftLayer cloud.
Next month, I will be presenting at the IBM Systems Technical University for Storage and POWER. This conference will be held in New Orleans, Louisiana, October 16-20, 2017.
Instead of a "Meet the Experts" Q&A panel, this event will feature a "Poster Session". I had the pleasure of doing one of these down in Melbourne, Australia last month. For those who missed it, here are my blog posts:
By now, you have already decided on a title and abstract of your poster. You will need to figure out a quick and easy way to explain your poster, and as always, shorter is better. It reminds me of a famous quote:
"Sorry this letter is too long...
If I had more time, I could have made it shorter!
-- Blaise Pascal
The event team asked me to write some instructions on the mechanics of how to put together a poster for this, since it is new for many people. I use Microsoft PowerPoint 2013 and ImageMagick tools to accomplish this.
Arrangement of Slides
Posters for the IBM Systems Technical University in New Orleans will be 24x36 inches in size. If you print your poster on standard 8.5x11 inch Letter pages in landscape orientation, that works out to eight slides: 2 columns by 4 rows, or 22x34 inches. This leaves a one-inch border all around.
The event will provide both the foam board and double-sided sticky tape. You can bring your poster as a stack of Letter-sized pages in a folder, and assemble your poster at the event.
You can increase the size of an individual image to 17x22 inches, to offer the "Big Picture" view. Basically, we can take a standard 8.5x11 Letter-size page, expand it onto four separate pages, and then put them on the poster! I will show you how in the steps below.
Lastly, you can have two big slides. If your poster is organized as "Before/After" or "Problem/Solution" then this arrangement could be perfect for you.
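If you want to double-check the layout arithmetic for the basic 2x4 arrangement, a quick shell sketch:

```shell
# Poster: 24x36 in. Pages: 8.5x11 in, placed landscape (11 wide, 8.5 tall).
cols=2; rows=4
grid_w=$(( cols * 11 ))           # 22 inches wide
grid_h=$(( (rows * 85) / 10 ))    # 34 inches tall (8.5 kept in tenths for integer math)
echo "side border: $(( (24 - grid_w) / 2 )) in"
echo "top/bottom border: $(( (36 - grid_h) / 2 )) in"
```

Both borders come out to one inch, matching the layout described above.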
Setting Custom Paper Size on PowerPoint
In Melbourne, I had to use A4 standard paper, and had to figure out how to do this in PowerPoint. I was surprised to learn that the PowerPoint default is a 4:3 ratio of 10x7.5 inches, which is stretched to whatever paper size you print on.
The difference is slight, but I prefer [WYSIWYG], so we will change the slide to "Custom size" and force it to 8.5x11 inches, with "Landscape" orientation. This will avoid anything looking stretched or squished on the big poster.
Converting a PowerPoint Slide to PNG Image file
If you would like to resize one or more of your PowerPoint slides, you will need to save those slides as images. Select "File" and "Save As" and as the format, choose "PNG" format. You can also select GIF or JPG, but I prefer PNG.
You can export all of your slides as images, in which case it will create a folder and number each slide individually. Or, you can select "Just This One" for the current slide.
By default, it will use the same name as your PPT file, with the extension changed to PNG. I suggest you name the file something meaningful to you. In my examples below, I use "small.png" as the file name.
I am using PowerPoint 2013, which exports at 96 dpi by default. So, an 8.5x11 inch page in landscape becomes 1056x816 pixels in size.
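The pixel math is simply inches times DPI; a quick check:

```shell
# Letter in landscape is 11 x 8.5 inches; PowerPoint 2013 exports at 96 dpi by default.
dpi=96
width=$(( 11 * dpi ))            # 1056
height=$(( (85 * dpi) / 10 ))    # 816 (8.5 kept in tenths for integer math)
echo "${width}x${height}"        # prints 1056x816
```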
If you have PowerPoint 2003 or higher, you can change the Windows registry to specify the image export resolution. Not recommended for the faint of heart. Or anyone else. But here's the deal if you want to try (if the following doesn't make any sense, it might be better not to mess with the registry):
Quit PowerPoint if it's running
Navigate to HKEY_CURRENT_USER\Software\Microsoft\Office\X.0\PowerPoint\Options
(For X.0 above, substitute 16.0 for PowerPoint 2016, 15.0 for PowerPoint 2013, 14.0 for PowerPoint 2010, 12.0 for PowerPoint 2007, and 11.0 for PowerPoint 2003.)
Add a new DWORD value named ExportBitmapResolution and set its DECIMAL value to the DPI value you want (for example, 300 means 300 dots per inch)
Close REGEDIT, start PowerPoint, and test. At 300 dpi, your exported files will be 3300x2550 pixels instead.
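If you prefer the command line, the same change can be made with Windows reg.exe. The example below is for PowerPoint 2013 (key version 15.0) at 300 dpi; the same "back up your registry first" caveats apply:

```
reg add "HKCU\Software\Microsoft\Office\15.0\PowerPoint\Options" /v ExportBitmapResolution /t REG_DWORD /d 300 /f
```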
Resizing and splitting up PNG Image files
To expand and chop the slide into four Letter-sized pages, we will use [ImageMagick], an open source collection of command-line utilities that you can download for free. The first utility, "identify", confirms the pixel size of your PNG image. Replace "small.png" with whatever you named your PNG image above. Next, the "convert" utility resizes the image to 200 percent, creating "big.png" at twice the width and height.
Lastly, we crop the "big.png" image we just created into four smaller pieces. Each piece will be exactly the same size as your original image! The files will be named big_0.png, big_1.png, big_2.png and big_3.png.
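The steps above can be sketched as follows. The first "convert" line just manufactures a stand-in "small.png" so the example is self-contained; with a real exported slide, skip it:

```shell
# Create a stand-in image so the example is self-contained; skip for a real slide.
convert -size 1056x816 xc:white small.png

identify small.png                               # confirm the pixel size: 1056x816
convert small.png -resize 200% big.png           # double both dimensions to 2112x1632
convert big.png -crop 2x2@ +repage big_%d.png    # four tiles: big_0.png ... big_3.png
identify big_0.png                               # each tile is 1056x816 again
```

The `-crop 2x2@` form tells ImageMagick to cut the image into a 2-by-2 grid of equal tiles, and `+repage` resets each tile's page geometry so it behaves as a standalone image.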
Since the resulting four pieces are exactly the size of a page, you can put them back into your PowerPoint deck. Create four blank slides, select Insert then Pictures. Insert each picture (big_0.png, big_1.png, big_2.png, and big_3.png) as a separate page.
You can print this out and bring it with you to the event, or send it to someone to print for you.
Upload files to IBM@Box
This next step is completely optional, but I found it adds a nice touch. As an IBMer, you can upload your presentation, and any documents, whitepapers or other materials, to [IBM@Box]. Create a directory that is unique to you, such as your last name and the conference. For example, I have "Pearson-STU-NOLA-2017" as my folder name.
You can create a "URL Link" to this folder. Select "Share", then "Share Link" to create a dialog box. It is important to specify "People with this link" if you want those outside of IBM, such as clients and IBM Business Partners, to have access.
Press the little "gear" button on the upper right, and it gives you options to customize the URL. Normally the URL is some long random sequence of characters, but you can rename it to something meaningful and easier to remember.
Generate a QR Code
Since you have a URL Share Link for your files on IBM@Box, you can generate a QR Code for this link, and include on your poster!
There are several online websites that can generate a QR Code for free. I use [QRme.com] in this example. Go to the website, copy in the URL, and press "Generate" button.
The QR Code is generated successfully, right click and "Save Image" to a file on your hard drive. This image can be inserted as a picture like we did above onto any slide. You can resize as needed.
In Melbourne, one of the posters had the QR Code at the top with the title, where it was nearly impossible to see, making it difficult to scan with a smartphone. For this reason, I recommend putting the QR code in the center or lower right corner of your poster, between shoulder and waist height for the audience, so it is comfortable to scan.
I am looking forward to going back to New Orleans to speak at this conference!
This week, IBM sponsored a nice multi-client event in San Juan, Puerto Rico. I was quite impressed with the quality of this video. Our marketing department has really done a good job on this!
This event was not just multi-client, but also spanned different industry sectors. IBM has recently realigned into five different sectors, and we had clients from several of them attending the event.
The night before, I was able to meet most of the other IBM executives who came down for the event. Unfortunately, two were delayed because of the snow storms in the Northeast part of the United States, but they were able to arrive the next day.
The venue was the El Touro restaurant, near the Hilton Caribe. The weather was just right, about 75 degrees and breezy. It was a little humid for me, but everyone else was just happy to be out of the cold. Meanwhile, it is nearly 90 degrees in Tucson, Arizona, where I am from.
This was billed as a "Lunch and Learn" and the food was delicious! In an effort to keep it simple, we had small dishes of fish with a fruit-based cream sauce, paella with rabbit meat and rice, pork belly, and Crema Catalana with a churro for dessert. This gave everyone a sample taste of everything, without having to order off a menu.
We basically took the same approach with the presentation. First, Marcos Obermaeir and Marcos Otero, the two leads for this event, thanked the audience and explained their new roles. Marcos Obermaeir is focused on Financial and Insurance sector, while Marcos Otero focused on Communications sector.
Next we had Debbie Niven and Roopam Master, both IBM Executives, explain their roles, and how IBM can help both clients and Business Partners in Puerto Rico.
I presented samples of much larger presentations on three topics. First, the excitement over Software Defined Storage with the IBM Spectrum Storage family of products. Second, IBM Spectrum Scale as a better replacement for the Hadoop Distributed File System (HDFS) in Hadoop, IBM BigInsights and Hortonworks analytics deployments. Third, IBM Cloud Object Storage, and how this can be combined with IBM Spectrum Protect to back up your data to object storage either on premises, or in the Cloud.
I could have easily spoken an hour on each topic, but instead, we shortened to about 20 minutes each, in keeping with the "Tapas" theme of the restaurant. This allowed those clients who wanted to hear more to have a reason to request a follow-up visit or call.
After the clients left, the IBM team had a reception for the IBM Business Partners. About 80 percent of IBM's storage business in Puerto Rico is done through IBM Business Partners, so they are an important link in IBM's "Go-to-Market" strategy.
The moon was nearly full, and the breeze and waves were a spectacular backdrop to the conversations I had with each person I met.