Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Systems Client Experience Center in Tucson, Arizona, and a featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson)
My books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
IBM introduces the eighth generation of Linear Tape Open (LTO) tape drive technology, with corresponding support in all of the IBM tape libraries.
Fellow blogger Jon Toigo, of Drunkendata.com fame, came to Tucson to interview Lee Jesionowski, Ed Childers, Calline Sanchez, and me about this. Check out the various segments on YouTube or his website.
The LTO-8 cartridges are not yet available, but when they are, they will hold 12 TB raw capacity, or 30 TB effective capacity at a 2.5-to-1 compression ratio. The new drives are N-1 compatible, able to read and write LTO-7 cartridge media.
Previous generations also supported reading N-2 generation tapes; LTO-8 breaks from that tradition and will not support LTO-6 cartridges at all.
LTO-8 comes in both "Full Height" (FH) and "Half Height" (HH) models. The FH models can transfer data at 360 MB/sec (or 900 MB/sec effective at 2.5-to-1 compression), and the HH models at 300 MB/sec (or 750 MB/sec effective at 2.5-to-1).
LTO-8 supports IBM Spectrum Archive and the "Linear Tape File System" (LTFS) tape format for self-describing long-term retention of data.
Compliance storage has come under many names. For tape and optical media, we had "WORM" for Write-Once, Read-Many. For disk-based storage, we had "Fixed-Content" or "Content-Addressable Storage". For file systems, we had "Immutable Storage".
Fortunately, the clever folks who crafted SEC Rule 17a-4 came up with an umbrella term: "Non-Erasable, Non-Rewriteable" (NENR) that covers all storage media, from WORM tape and optical, to tamperproof flash, disk and cloud-based solutions.
The other major change is "Concentrated Dispersal" mode, or "CD mode" for short. Erasure Coding works best when data is dispersed across three or more sites. When this happens, you can lose all of the data at one site, and still have 100 percent access to all data from the other locations.
IBM's "Information Dispersal Algorithm", or IDA for short, scatters slices of data across many servers. This is great for high availability and performance, but it often meant that the minimum deployment was 500 TB or greater.
Not every organization is ready for such a large purchase. Some want to just dip their toe in the water with something smaller and less expensive. Well, IBM delivered!
The new CD mode means that instead of one slice per Slicestor node, you can pack lots of slices on each node. Each slice will be on distinct disk drives, for high availability.
Entry-level configurations can now be as small as 72-104 TB, across 1, 2 or 3 sites.
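The core idea of surviving a lost site can be sketched with a toy example. This is plain XOR parity, far simpler than IBM's actual Information Dispersal Algorithm (which, like Reed-Solomon codes, tolerates multiple simultaneous losses), but it shows how the remaining slices can rebuild any one missing slice:

```python
from functools import reduce

def disperse(data: bytes, k: int):
    """Split data into k equal slices plus one XOR parity slice.
    Toy illustration only -- real erasure codes tolerate multiple losses."""
    assert len(data) % k == 0
    size = len(data) // k
    slices = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*slices))
    return slices + [parity]

def recover(slices, lost_index):
    """Rebuild the slice at lost_index by XOR-ing all surviving slices."""
    survivors = [s for i, s in enumerate(slices) if i != lost_index]
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))

pieces = disperse(b"sensordatasensor", 4)  # 4 data slices + 1 parity slice
rebuilt = recover(pieces, 2)               # pretend the site holding slice 2 failed
assert rebuilt == pieces[2]
```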
Next month, I will be presenting at the IBM Systems Technical University for Storage and POWER. This conference will be held in New Orleans, Louisiana, October 16-20, 2017.
Instead of a "Meet the Experts" Q&A panel, this event will feature a "Poster Session". I had the pleasure of doing one of these down in Melbourne, Australia last month. For those who missed it, here are my blog posts:
By now, you have already decided on a title and abstract for your poster. You will need to figure out a quick and easy way to explain your poster, and as always, shorter is better. It reminds me of a famous quote:
"Sorry this letter is too long...
If I had more time, I could have made it shorter!
-- Blaise Pascal
The event team asked me to write some instructions on the mechanics of how to put together a poster for this, since it is new for many people. I use Microsoft PowerPoint 2013 and ImageMagick tools to accomplish this.
Arrangement of Slides
Posters for the IBM Systems Technical University in New Orleans will be 24x36 inches in size. If you print out your poster in 8.5x11 inch standard size letter pages, that would be eight slides, 2 columns, 4 rows. This leaves one inch border all around.
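A quick sanity check of that layout arithmetic (landscape Letter pages, all dimensions in inches):

```python
# Poster: 24 x 36 inches; landscape Letter pages are 11 x 8.5 inches.
poster_w, poster_h = 24, 36
page_w, page_h = 11, 8.5                          # landscape orientation
cols, rows = 2, 4
used_w, used_h = cols * page_w, rows * page_h     # 22 x 34 inches of pages
border_w = (poster_w - used_w) / 2                # 1 inch on each side
border_h = (poster_h - used_h) / 2                # 1 inch top and bottom
print(used_w, used_h, border_w, border_h)
```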
The event will provide both the foam board and double-sided sticky tape. You can bring your poster as a stack of Letter-sized pages in a folder, and assemble your poster at the event.
You can increase the size of an individual image to 17x22 inches, to offer the "Big Picture" view. Basically, we can take a standard 8.5x11 Letter-size page, expand it onto four separate pages, and then put them on the poster! I will show you how in the steps below.
Lastly, you can have two big slides. If your poster is organized as "Before/After" or "Problem/Solution" then this arrangement could be perfect for you.
Setting Custom Paper Size on PowerPoint
In Melbourne, I had to use the international A4 standard paper size, and had to figure out how to do this in PowerPoint. I was surprised to learn that the PowerPoint default is a 4:3 ratio of 10x7.5 inches, and that this is stretched to whatever paper size you print on.
The difference is slight, but I prefer WYSIWYG, so we will change the slide to "Custom size" and force it to 8.5x11 inches, with "Landscape" orientation. This will avoid anything looking stretched or squished on the big poster.
Converting a PowerPoint Slide to PNG Image file
If you would like to resize one or more of your PowerPoint slides, you will need to save those slides as images. Select "File" and "Save As", and choose "PNG" as the format. You can also select GIF or JPG, but I prefer PNG.
You can export all of your slides as images, in which case it will create a folder and number each slide individually. Or, you can select "Just This One" for the current slide.
By default, it will use the same name as your PPT file, just changing the extension to .png. I suggest you name the file something meaningful to you. In my examples below, I use "small.png" as the file name.
I am using PowerPoint 2013, which defaults to 96 dpi. So, an 8.5x11 paper becomes 1056x816 pixels in size.
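The pixel math is simple enough to check yourself (dimensions here assume a landscape Letter page):

```python
def pixels(width_in, height_in, dpi):
    """Convert a page size in inches to pixel dimensions at a given DPI."""
    return round(width_in * dpi), round(height_in * dpi)

print(pixels(11, 8.5, 96))    # PowerPoint 2013 default of 96 dpi
print(pixels(11, 8.5, 300))   # after raising the export resolution to 300 dpi
```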
If you have PowerPoint 2003 or later, you can change the Windows registry to specify the image export resolution. Not recommended for the faint of heart. Or anyone else. But here's the deal if you want to try (if the following doesn't make any sense, it might be better not to mess with the registry):
Quit PowerPoint if it's running
Navigate to HKEY_CURRENT_USER\Software\Microsoft\Office\X.0\PowerPoint\Options
(For X.0 above, substitute 16.0 for PowerPoint 2016, 15.0 for PowerPoint 2013, 14.0 for PowerPoint 2010, 12.0 for PowerPoint 2007, and 11.0 for PowerPoint 2003.)
Add a new DWORD value named ExportBitmapResolution and set its DECIMAL value to the DPI value you want (for example, 300 means 300 dots per inch)
Close REGEDIT, start PowerPoint and test. Your files will be 3300x2550 pixels instead.
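For those comfortable with the command line, the same steps can be done with a single reg.exe command instead of clicking through REGEDIT. This is shown for PowerPoint 2013 (15.0); adjust the version number for yours, and as above, edit the registry at your own risk:

```bat
REM Set the PowerPoint image export resolution to 300 dpi (PowerPoint 2013 = 15.0)
reg add "HKCU\Software\Microsoft\Office\15.0\PowerPoint\Options" ^
    /v ExportBitmapResolution /t REG_DWORD /d 300 /f
```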
Since the resulting four pieces are exactly the size of a page, you can put them back into your PowerPoint deck. Create four blank slides, select Insert then Pictures. Insert each picture (big_0.png, big_1.png, big_2.png, and big_3.png) as a separate page.
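In case you are wondering how to produce those four pieces in the first place: ImageMagick can double the exported slide and tile-crop it into a 2x2 grid. A sketch of the geometry, with the command I would expect to use (file names match my example above; verify the command against your ImageMagick version):

```python
# Compute the crop boxes for quartering a doubled slide export.
def quarter(width, height):
    """Return (x, y, w, h) crop boxes for the four quadrants, in reading order."""
    w, h = width // 2, height // 2
    return [(x, y, w, h) for y in (0, h) for x in (0, w)]

# A 1056x816 export doubled to 2112x1632 yields four 1056x816 pages:
print(quarter(2112, 1632))

# One possible ImageMagick command (tile-crop into a 2x2 grid; output names
# big_0.png through big_3.png):
print("convert small.png -resize 200% -crop 2x2@ +repage big_%d.png")
```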
You can print this out and bring it with you to the event, or send it to someone to have them print it for you.
Upload files to IBM@Box
This next step is completely optional, but I found it adds a nice touch. As an IBMer, you can upload your presentation, and any documents, whitepapers or other materials, to IBM@Box. Create a directory that is unique to you, such as your last name and the conference. For example, I have "Pearson-STU-NOLA-2017" as my folder name.
You can create a "URL Link" to this folder. Select "Share", then "Share Link" to create a dialog box. It is important to specify "People with this link" if you want those outside of IBM, such as clients and IBM Business Partners, to have access.
Press the little "gear" button on the upper right, and it gives you options to customize the URL. Normally the URL is some long random sequence of characters, but you can rename it to something meaningful and easier to remember.
Generate a QR Code
Since you have a URL Share Link for your files on IBM@Box, you can generate a QR Code for this link, and include it on your poster!
There are several online websites that can generate a QR Code for free. I use QRme.com in this example. Go to the website, paste in the URL, and press the "Generate" button.
Once the QR Code is generated, right-click and "Save Image" to a file on your hard drive. This image can be inserted as a picture, like we did above, onto any slide. You can resize it as needed.
In Melbourne, one of the posters had the QR Code at the top with the title, where it was impossible to see and difficult to scan with a smartphone. For this reason, I recommend putting the QR code in the lower right corner of your poster, between shoulder and waist height for the audience, to be comfortable to scan.
I am looking forward to going back to New Orleans to speak at this conference!
Well, it's Tuesday again, and you know what that means? IBM Announcements!
IBM announced a new product, IBM Spectrum Protect Plus. To understand why, I will need to discuss a bit of history related to Data Protection.
(FCC Disclosure: I work for IBM. This blog post can be considered a "paid celebrity endorsement" for IBM Spectrum Protect, IBM Spectrum Protect Snapshot, IBM Spectrum Protect for Virtual Environments, and IBM Spectrum Copy Data Management products. I was not paid in any manner to promote Geoffrey Moore's book mentioned below.)
IBM Spectrum Protect was originally developed as the Workstation Data Save Facility (WDSF) back in the 1980s, back when Personal Computers were just getting deployed.
I started in 1986 developing mainframe software, so we all had bulky 3270 terminals. When our area was offered 120 PCs to replace them, I was tasked with determining how to roll these out, 24 at a time, over five months.
My job was to determine who would get a PC in the first round, the second round, and so on. I handed out a simple one-page survey, asking everyone basic questions. Are you familiar with Personal Computers? Do you have one at home? Are you comfortable using a mouse? My plan was to give PCs to those most familiar with them sooner, and to those less familiar in later rounds.
However, it was my final question that sealed the deal:
How soon do you want a PC to replace your 3270 terminal?
[ ]Immediately [ ]Next month [ ]No Hurry [ ]Put me last [ ]Never!
Surprisingly, I had roughly 24 folks choosing each option on this last question, which made my decision process easy for me!
(In his book Crossing the Chasm, fellow author Geoffrey Moore would come up with similar groups: Innovators, Early Adopters, Early Majority, Late Majority, and Laggards. This is a great book and I highly recommend it!)
Of course, we used WDSF to back up the files. WDSF would later morph into DFDSM, then ADSM, then TSM, and now it is called IBM Spectrum Protect.
Over the decades, the product has evolved from just backing up data on personal computers. IBM Spectrum Protect can now protect all kinds of machines, from tablets, mobile devices, and smartphones, to virtual machines, databases, and application servers in the data center.
Besides creating backup versions of files, IBM Spectrum Protect can also migrate older, less frequently used files to less expensive media, as well as archive files for long-term retention.
Different files can be assigned to different "management classes" that determine policies to be applied and enforced on the backup, migration and archive copies. For backups, this includes how many versions to keep while the file exists, how many versions to keep after the original file is deleted, how long to keep those inactive versions.
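As an illustration of how those policies are expressed, here is a sketch of defining a management class on a Spectrum Protect server. The domain, policy set, and class names here are invented for the example, and the values are arbitrary; check the administrator command reference for your release before using anything like this:

```
define mgmtclass mydomain standard prodfiles
define copygroup mydomain standard prodfiles standard type=backup destination=backuppool verexists=3 verdeleted=1 retextra=30 retonly=60
assign defmgmtclass mydomain standard prodfiles
activate policyset mydomain standard
```

Here verexists and verdeleted control how many versions are kept while the file exists and after it is deleted, while retextra and retonly control how long (in days) those inactive versions are retained.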
Instead of the grandfather-father-son backup tape rotation, full-plus-incremental, or full-plus-differential schemes employed by other backup software, IBM Spectrum Protect has a unique "Incremental Forever" approach that reduces backup time, LAN bandwidth requirements, and backup storage media.
While most companies still back up to tape, IBM Spectrum Protect can back up to flash, disk, tape, virtual and physical tape libraries, object storage, and even to public Cloud Service Providers such as IBM Bluemix, Amazon S3, and Microsoft Azure.
IBM Spectrum Protect offers both client-side and server-side data footprint reduction technologies, including compression and deduplication, eliminating the need for expensive, single-purpose data deduplication devices like Dell-EMC Data Domain.
IBM Spectrum Protect is recognized as a leader in Data Protection software, able to scale up to meet the demands of the largest enterprises. However, the parameters and options that IBM Spectrum Protect has acquired over time have been compared to the cockpit or flight deck of an airplane!
For clients with Virtual Machines, IBM offered three solutions:
IBM Spectrum Protect Snapshot
Formerly called Tivoli Storage FlashCopy Manager (FCM), IBM Spectrum Protect Snapshot takes frequent, near-instant, non-disruptive, application-aware backups and restores for SAP, Oracle and Db2. It can also be used for VMware using advanced snapshot technology, on both IBM and non-IBM storage systems.
IBM Spectrum Protect Snapshot can be used as a stand-alone product, or integrated with IBM Spectrum Protect to move the snapshots and FlashCopy targets to other storage media.
IBM Spectrum Protect for Virtual Environments (VE)
Formerly called IBM Tivoli Storage Manager for Virtual Environments, IBM Spectrum Protect VE protects both VMware and Microsoft Hyper-V virtual machines.
IBM Spectrum Protect VE safely moves backup workloads to a centralized IBM Spectrum Protect server and enables administrators to create backup policies or restore virtual machines with just a few clicks. It allows you to protect data without a traditional backup window.
IBM Spectrum Copy Data Management makes copies available to DBAs, Developers and VM administrators when and where they need them. While this product is focused on DevOps and Dev/Test workflows, it can also be used to automate and schedule snapshots that can serve as backups.
Surprisingly, many companies do not take advantage of these solutions. Even clients who already have IBM Spectrum Protect deployed either (a) simply use Spectrum Protect clients on individual VM guests, or (b) use third-party products to back up VMs outside of the Spectrum Protect infrastructure.
"Problems cannot be solved with the same mind set that created them."
-- Albert Einstein
Smaller clients want something simpler to deploy, and easier to use and administer. Rather than simplify the products above, a process called "kneecapping" in the IT industry, IBM opted for a clean-slate, start-from-scratch approach.
The result is IBM Spectrum Protect Plus, new software announced as a preview last Wednesday, in time for this week's VMworld 2017 conference in Las Vegas and next month's VMworld conference in Barcelona, Spain.
IBM Spectrum Protect Plus is available as either a stand-alone product, or integrated with IBM Spectrum Protect for long-term protection. It is focused exclusively on VMware and Hyper-V environments. General Availability is expected some time in 4Q 2017.
Key features include:
Simple to install in less than 15 minutes, configured in an hour
Easy to use by DBA, VM or application administrator. No IBM Spectrum Protect skills required for stand-alone deployment
Pre-defined Gold, Silver and Bronze policies are ready to use. Additional customized policies can be configured as needed
Supports both application-aware and crash-consistent methods
Data Footprint Reduction technologies including compression and deduplication
Instant data recovery to support DevOps, Dev/Test, Reporting, Analytics and Training
Granular search and restore of entire Virtual Machines, VMDKs, and individual files
As for the name, I would have preferred "IBM Spectrum Protect Basic Edition". The "Plus" implies that the new product is more advanced, or offers more features, than the existing Spectrum Protect editions.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
Enhanced Spectrum Virtualize software
IBM announces v8.1 of the Spectrum Virtualize software that works with the latest models of SAN Volume Controller, Storwize and FlashSystem V9000 products.
This v8.1 release will not support older hardware. For these older models, continue to use the v7.8.1 release until end of service and support:
SAN Volume Controller, CF8 and CG8 models
FlashSystem V840, AC0 model
Storwize V7000 Gen 1, models 1xx, 2xx and 3xx
Storwize V5000 Gen 1, models 24 C/E, 12 C/E
Storwize V3500 and V3700, all models
Hot Spare Node
Higher availability is provided by automatically swapping a spare node into the cluster if the cluster detects a failing node. Following the N-Port ID Virtualization (NPIV) features introduced in the previous release, this new feature is available for SVC and FlashSystem V9000.
Spare nodes can also be extremely helpful with code updates and node refreshes. Update the code load on a spare node, and use this to roll forward the other nodes. In this manner, you are never in "single node" mode!
You can have up to four spare nodes per SVC cluster, and three spare nodes per FlashSystem V9000 cluster. These spares are "site-aware" to support Enhanced Stretch Cluster and HyperSwap configurations.
This feature requires Fibre Channel switches, so it won't work if you are using direct-attached SAS, iSCSI or FC point-to-point connections.
256 GB memory support
Spectrum Virtualize will now take full advantage of system memory, rather than just the first 64 GB. A fixed 12 GB is set aside for write cache; the rest is used for operating system code, read cache, and compression work space.
IBM supports up to 128 GB per canister on the Storwize V7000 Gen2+ models, and up to 256 GB for SAN Volume Controller SV1 and FlashSystem V9000 models.
On two-socket nodes, IBM previously dedicated specific cores to perform I/O operations, and others to Real-time Compression. With the v8.1 release, the team implemented a more sophisticated multi-socket, multi-core, multi-threaded approach. Internal tests showed this improved performance 36 to 50 percent on SAN Volume Controller DH8 and SV1 models.
Enhancements for Encryption
IBM Security Key Lifecycle Manager (SKLM) support has been expanded to support up to three Key Server clones for a total of four Key Servers (one master and three clones).
You can use both central key management (SKLM servers) and local key management (using USB keys physically attached to the back of the controllers) at the same time. This can be useful to transition from one method to the other, or to use both concurrently for added flexibility.
Both SKLM and USB-based keys can also be used to encrypt FlashCopy targets written to the Cloud with Transparent Cloud Tiering.
Remote support assistance
IBM support engineers can perform system or upgrade recoveries over secure support sessions. This enables remote concurrent upgrades to be done securely, and is available only for clients who purchase Enterprise Class Support.
Since you are already sending periodic inventory updates as part of "call home" support, you might as well let IBM review the configuration and provide customized recommendations!
There is no additional cost, and this provides an additional review to catch any potential problems, single points of failure, or other issues that could be a problem later on.
Based on the success of the Hyper-Scale Manager GUI developed for the FlashSystem A9000, the new Spectrum Virtualize GUI offers an updated look and feel, with new fonts, colors, banner, navigation, dashboard, and other interactive elements.
New Pause Feature for Concurrent Code Update (CCU)
The Pause function will allow users to pause CCU indefinitely. This pause allows customers to do any problem determination, such as multi-pathing issues, or simply to pause the upgrade, take a break for lunch, then resume the upgrade when convenient to do so.
There were also enhancements to the hardware models themselves.
IBM FlashSystem V9000
The IBM FlashSystem V9000 has two enhancements. First, there is an option to add a pair of AC3 nodes without AE2 enclosures to scale performance.
The second is the ability to add a single AC3 node for use as a hot spare node. You can have up to three of these extra AC3 spares per V9000 cluster.
IBM Storwize V7000
IBM Storwize V7000 Gen2+ offers increased cache of up to 256 GB per controller, 128 GB per canister. This follows on the heels of the recent increase to 256 GB per node for the SAN Volume Controller and FlashSystem V9000. More memory means higher cache hit ratios for faster performance, and more compressed volumes.
900 GB 15K rpm 2.5-inch SAS drive
IBM SAN Volume Controller (SVC) and the Storwize family deliver an additional option with a 900 GB 15K rpm 2.5-inch SAS drive.
(Honestly, I didn't think we would see larger-capacity 15K drives, but IBM was qualifying these for the DS8000 boxes, and it made sense to add them to the Spectrum Virtualize hardware offerings as well.)
This week, I was in beautiful Melbourne, Australia for IBM Systems Technical University.
PowerAI overview and Cognitive Solutions on POWER
Anand Subramaniam, IBM Technical Specialist, presented this session on PowerAI. IBM packaged a collection of Machine Learning libraries, optimized them for the POWER8 chipset, and made this entire package freely available for download as "PowerAI".
IBM also is working on a priced value-add collection called "PowerAI Vision".
Hadoop Infrastructure solutions and Point-of-View
Alexis Giral, IBM Executive Storage Architect, presented the benefits of IBM Spectrum Scale using a simple example. Suppose you are gathering 40 TB of sensor readings per day. How many TB of storage would you need to hold two years' worth of data?
Traditionally, HDFS maintains three copies of the data. A recently added feature "HDFS-EC" provides erasure coding to reduce the overall storage requirements. Giral showed this chart:
The chart compared HDFS-EC using 5+4 erasure coding against Spectrum Scale ESS using 8+3 erasure coding.
And this is assuming all the data is hot. If you decide to keep only 30 percent hot, perhaps the most recent eight months, and the other 70 percent on colder storage, you may reduce your storage requirement costs even further.
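The arithmetic behind Giral's example is easy to reproduce. This is a sketch; the exact figures on his chart may have differed, but the overhead ratios follow directly from the replication and coding schemes:

```python
raw_tb = 40 * 365 * 2                  # 40 TB/day for two years = 29,200 TB
three_copies = raw_tb * 3              # classic HDFS 3-way replication
hdfs_ec_5_4 = raw_tb * (5 + 4) / 5     # 5+4 erasure coding: 1.8x overhead
ess_8_3 = raw_tb * (8 + 3) / 8         # Spectrum Scale ESS 8+3: 1.375x overhead
print(raw_tb, three_copies, hdfs_ec_5_4, ess_8_3)
```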
IBM Cloud Object Storage - Redefining backup infrastructure
Maciej "Mac" Lasota presented the use of IBM Cloud Object Storage as a backup repository. While IBM Spectrum Protect is the preferred choice, IBM COS also works well with Commvault and NetBackup.
He listed some of the challenges that companies have with backups to tape, and how IBM COS addresses these challenges.
(While IBM COS is three to four times more expensive than tape, it is a luxury many clients can now afford!)
He wrapped up the session showing five different deployments that he worked on for clients.
New Generation of Storage Tiering: Simpler Management, Lower Costs, and Improved Performance
With ever-changing amounts of storage, it is hard to find metrics that are consistent year to year. Fortunately, I found I/O density to be the metric to focus my efforts on, armed with real data from Intelligent Information Lifecycle Management (IILM) studies done at various clients. From that, I was able to talk about storage tiering on three fronts:
IBM Easy Tier on DS8000 and Spectrum Virtualize to provide tiering within a system.
IBM Virtual Storage Center (VSC) to provide tiering between systems in a data center.
IBM Spectrum Scale, Spectrum Archive and IBM Cloud Object Storage System to provide global tiering across multiple locations, and across flash, disk, tape and cloud resources.
Spectrum Scale for Volume, File and Object Storage
IBM Spectrum Scale was formerly called GPFS and has been around since 1998. I am glad it was renamed, as GPFS suffered from "guilt by association" with other file systems: AFS, DFS, XFS, ZFS, and so on.
Spectrum Scale does much more: it supports volume, file and object level access; POSIX standards on Windows, AIX and Linux; Hadoop and Spark with a 100 percent compatible HDFS Transparency Connector; NFS, SMB and iSCSI protocols; as well as OpenStack Swift and Amazon S3 object-based access.
Initially designed for video streaming and High Performance Computing (HPC), IBM has extended its reach to work in a variety of workloads across different industries. More than 5,000 production systems are running at client locations.
Beating Ransomware! A deep exploration of threat vectors for applications and storage
Andrew Greenfield, IBM Global Engineer for Spectrum Storage, presented on the threat of ransomware. In addition to being an expert in various storage technologies, he is also an expert in security.
If you think security is just setting up your network firewalls and turning on data-at-rest encryption on your storage, you are sadly mistaken. Many of the threat vectors come from the inside: disgruntled employees or temporary contractors who plant viruses, bombs and worms that may not activate until long after they leave.
There is now a category of products called Security Information and Event Management (SIEM) that provide real-time analysis of security alerts generated by network hardware and applications. Two that Andrew was familiar with were IBM QRadar and Varonis. These identify standard and abnormal behavior patterns among users.
Andrew feels products like Splunk do a great job to collect information, but don't do the analysis that Qradar or Varonis do.
I was very pleased with this conference. This was a concentrated 3-day event, but everyone I talked to was happy with the format, and felt their time spent worthwhile!
This week, I was in beautiful Melbourne, Australia for IBM Systems Technical University. On Wednesday evening, we had a poster session.
(I have so many photos that I will split this post up into topics. This post will focus on IBM Z systems, see my other posts for storage and IBM Power systems.)
Topics can be anything that is of interest to your peers and colleagues. It can be research-related, a specific solution you implemented or an interesting customer case you want to share.
Linux Scalability at a Small Scale (or, An Adventure In Minimalist Multitudinousness)
Vic Cross, IBM Senior Systems Engineer, used the Ganglia Monitor System to generate traffic and measure 1,680 Linux guests on a single IBM Z mainframe LPAR with only 16GB of memory! His poster consisted of 18 pages of material, a mix of traditional presentation slides, screen shots of web pages, and densely detailed performance results.
Ganglia is a scalable distributed monitoring system for high-performance computing systems such as clusters and Grids. It is based on a hierarchical design targeted at federations of clusters. It leverages widely used technologies such as XML for data representation, XDR for compact, portable data transport, and RRDtool for data storage and visualization. It uses carefully engineered data structures and algorithms to achieve very low per-node overheads and high concurrency. The implementation is robust, has been ported to an extensive set of operating systems and processor architectures, and is currently in use on thousands of clusters around the world. It has been used to link clusters across university campuses and around the world and can scale to handle clusters with 2000 nodes. Learn more at http://ganglia.sourceforge.net/
Spectrum Scale 2 site cluster
Antony Steel, IBM Senior Consulting IT Specialist, presented an option to configure a two-site GPFS (Spectrum Scale) "almost active-active" cluster when a third site is not available. This option requires simple administrative tasks to make the DR filesystem available should the production site fail. Spectrum Scale runs on IBM Z, IBM Power and x86 servers.
The poster used 13 traditional landscape slides, printed on what appears to be A4 paper. A4 is 297 mm wide, so three side by side exceeds the 841 mm width of the poster foam board. These were arranged with a title slide on top, and then 12 content slides in four rows of three.
While I was glad that someone else had a QR code on their poster, the placement was way at the top, and difficult for anyone to actually scan it. I thought of this, and had mine at waist level in the middle right side of my poster.
Life is better with Linux
I couldn't resist taking a photo of the back of this guy's tee-shirt, which says "Life is better with Linux"
In effect, tee-shirts can also be "posters", although that would make for an awkward "poster session" if everyone wore them. Pointing at your chest would be weird, and pointing to your back would be near impossible!
From 1999 to 2001, I helped port Linux to the IBM S/390 mainframe architecture by testing and debugging the disk and tape device drivers. I was the first to install Linux on an IBM mainframe in Tucson, AZ!
I would then go on to work with SAN Volume Controller, Tivoli Storage Manager (now called Spectrum Protect), Tivoli Storage Productivity Center (now called Spectrum Control), and the General Parallel File System (GPFS, now called Spectrum Scale). All of these run on Linux!
I would become the "Linux storage expert" at conferences like SHARE and GUIDE. While my co-workers in DFSMS and z/OS felt Linux was just a fad, I predicted that Linux was going to be a major force in the IT industry. I was right: not only does Linux run on all of our IBM Z and Power servers, it is the underlying operating system for nearly all IBM storage devices.
Today, I run Linux directly on my laptop, using a Windows KVM guest image as needed for specific projects or applications.
Erina Araki poses for a photo with one of the attendees, Marco. Erina was the organizer for this poster event, and was my primary contact to answer all of my questions. I think the poster session was a big success!
This week, I was in beautiful Melbourne, Australia for IBM Systems Technical University. On Wednesday evening, we had a poster session.
(I have so many photos that I will split this post up into topics. This post will focus on posters related to IBM Power systems. See my other posts for storage and IBM Z.)
Ding! IBM i Systems Management redefined with SQL
A poster presentation should trigger question-and-answer sessions, and the exchange of ideas and information regarding your topic.
Scott Forstie, IBM Db2 for i Business Architect, coined the phrase "Scott's Query Language", focused on Data Services for Db2 database on IBM i operating system. His design took several charts, printed in landscape mode, and organized in 3 columns of four charts each. His "title" page was printed twice, and placed on the left and right sides.
Scott explained GROUP_PTF_CURRENCY, LICENSE_EXPIRATION_CHECK and ACTIVE_JOB_INFO. I am not familiar with any of these things, but I enjoyed how passionate Scott was. He even had business cards for people to get more information at ibm.biz/Db2foriServices
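For the curious, these services are ordinary SQL objects in the QSYS2 schema on IBM i, so they can be queried like any table or view. A couple of hedged examples (I have not run these myself; consult the Db2 for i services documentation for the exact result columns):

```sql
-- Check whether your PTF groups are current:
SELECT * FROM QSYS2.GROUP_PTF_CURRENCY;

-- List active jobs via a table function:
SELECT * FROM TABLE(QSYS2.ACTIVE_JOB_INFO()) AS X;
```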
IBM Spectrum Scale with Hortonworks Data Platform
Chris Maestas, IBM Global Senior Solutions Architect, and Par Hettinga, IBM Global SDI Enablement Leader, created this poster.
Hortonworks is a leading innovator in the industry, creating, distributing and supporting enterprise-ready open data platforms and modern data applications. They focus on driving innovation in open source communities such as Apache Hadoop, NiFi, and Spark. Their product, Hortonworks Data Platform (HDP), runs on both x86 and Power systems.
The poster design was clean, with basically three enlarged presentation slides. On the top, it explains that Hortonworks now supports IBM Spectrum Scale for storage of files and objects to be analyzed by Hadoop. On the bottom left, it shows how Spectrum Scale eliminates the ingest-and-discard approach used by other HDFS-based systems. On the bottom right, an architecture diagram shows how to build your own "data lake".
Optimizing Power Performance with Affinity Groups – Real World 40Gbit LPM Results / Lessons Learnt
This poster employed a unique 1-6-6 design. Top slide was for title and author: Stephen Diwell, Senior Power Systems Engineer, DXC Technologies
In the middle, the poster had six traditional text-only presentation slides, arranged in two rows of three. LPAR Affinity Groups give you the ability to hint to the Hypervisor that a group of LPARs should be placed on processor chips close to one another. Use Affinity Groups to help the Hypervisor place LPARs nearer to the VIO Servers. LPARs that share common resources, such as the Fibre Channel and Ethernet adapters within a VIO server, obtain better performance and adapter throughput the closer they are. The lighting on some of these posters was really poor, and perhaps too dark to read small fonts like this.
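Affinity groups are assigned through partition profiles on the HMC. As a hedged sketch only (the `affinity_group_id` attribute name is from my recollection of the HMC `chsyscfg` command, and the system and profile names are invented; check your HMC's documentation):

```shell
# Assign two LPARs that share a VIO server's adapters to affinity group 1
chsyscfg -r prof -m myManagedSystem \
  -i "name=normal,lpar_name=lpar1,affinity_group_id=1"
chsyscfg -r prof -m myManagedSystem \
  -i "name=normal,lpar_name=lpar2,affinity_group_id=1"
```

The hint takes effect the next time the Hypervisor places the partitions, for example after a full system restart.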
At the bottom were performance bar chart results, in three rows of two. I like the use of color for the graphs. For a network job with 8 threads, Stephen achieved a 54% increase in network bandwidth for LPARs communicating on the same Chip to those communicating between Nodes in the E800 frame.
Sundata Power Server Cloud offering
Leave it to the marketing department of a local cloud service provider to turn their poster into an advertising billboard! This one was presented by Kon Kakanis, Managing Director, Sundata Pty Ltd.
The Sundata poster encouraged people to move their AIX, IBM i and Linux on POWER workloads to their "PowerCloud" platform. They summarized their advantages into four bullet points:
Reliable and cost-effective partnership
Advice, Guidance and Support
Migration, management and support services
Located in Sydney and Brisbane
Founded in 1986, Sundata is an Australia-based organization that helps its clients transform to the Cloud, select and deploy IT hardware, and keep the lights on with ongoing support and managed services. More than 100 corporations, government departments and schools enjoy a close and ongoing relationship with them.
The large fonts, simple design, and the cute cat-in-a-cape logo in the lower right corner captured people's attention!
In between reading posters and talking to everyone, it was good to take a quick look out the floor-to-ceiling windows. At 297 meters, Eureka Tower has some amazing views. Here is one of the Yarra River and the Central Business District.
This week, I was in beautiful Melbourne, Australia for IBM Systems Technical University. On Wednesday evening, we had a poster session. This was the first time I presented a poster session, so I was understandably very excited.
(I have so many photos that I will split this post up into topics. This post will focus on storage posters. See my other posts for IBM Power and Z systems.)
The venue was Eureka Skydeck 89, the top floor of the Eureka Tower. This tower is 297 meters tall (974 feet), and the views it afforded of the city of Melbourne were stunning.
Mo and I arrived early as I was one of the 11 finalists that got selected to present a poster. While it is a hot summer back in Arizona, it is cold here in Australia. I am glad we brought our heavy coats for the brisk 8-minute walk from our hotel, the Crown Promenade, to the Eureka Tower.
Posters are designed to present specific topics in a concise and interactive way to appeal to peers and colleagues at conferences and/or public displays. Everyone was given an "A0" size foam board on which to tape their poster, 841 mm wide and 1189 mm tall (roughly three feet by four feet).
Understanding Converged and Hyperconverged Systems
My design was simple. I took my summary chart from one of my presentations, and enlarged it to fit the "A0" poster size. I chose my "Pendulum Swings" presentation that explains the history of storage infrastructure, and the rise in interest in Converged and Hyperconverged Infrastructure.
In the early days of IT, storage was internal to its server. Over time, storage outgrew its container, and we started having externally attached storage, with benefits like RAID and clustered servers for high availability. Then, SANs, LANs and WANs took the main stage, allowing for greater connectivity and distance.
But now, it seems the pendulum is swinging back with converged and hyperconverged systems. Converged Systems like IBM PureSystems, or VersaStack from IBM and Cisco, provide best-of-breed hardware for servers, storage and networks in a pre-cabled, pre-configured rack. With everything in a single rack, port count and cable distance limits are no longer a major concern.
Hyperconverged Systems, such as IBM Spectrum Scale, IBM Spectrum Accelerate, Nutanix or SimpliVity, focus instead on offering commodity servers with internal flash and disk storage. Software-Defined Storage software is then used to glue together multiple units over a LAN infrastructure. With the huge increase in Flash and Disk capacities, a server with internal storage can hold many TB of data.
My poster included a "QR Code" that pointed to a link on BOX so that people could use their smartphones to access all of my presentations.
IBM Spectrum Scale with focus on Active File Management
A poster presents not every detail, but the most important information.
Trishali Nayar, IBM AFM/Spectrum Scale Development from Pune India, had a poster on IBM Spectrum Scale with focus on Active File Management (AFM). She had a clean, simple design, basically two presentation slides enlarged to fill the poster size.
Active File Management (AFM) enables sharing of data across clusters, even if the networks are unreliable or have high latency. AFM allows you to create associations between IBM Spectrum Scale™ clusters, or between IBM Spectrum Scale clusters and an NFS data source. With AFM, you can implement a single name space view across sites around the world, making your global name space truly global. You can also duplicate data for disaster recovery purposes without suffering from WAN latencies.
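To make this concrete, here is a hedged sketch of creating an AFM cache fileset from the Spectrum Scale command line. The file system, fileset, and server names are invented for illustration, and the exact options vary by release, so check the mmcrfileset documentation for your level:

```shell
# Create a single-writer AFM cache fileset backed by an NFS home export
mmcrfileset fs1 cacheFileset --inode-space new \
  -p afmTarget=homeServer:/export/data -p afmMode=single-writer

# Link the fileset into the file system namespace
mmlinkfileset fs1 cacheFileset -J /gpfs/fs1/cacheFileset
```

Applications at the cache site then simply read and write files under /gpfs/fs1/cacheFileset, and AFM moves data to and from the home site asynchronously.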
IBM Ubiquity Storage Service for Container Ecosystems
Your audience isn't trying to replicate your solution or use case -- they are simply after the basics. Take, for example, this poster on IBM's Ubiquity Storage Service.
Ashutosh Mate, IBM WW Senior Solutions Architect, created this poster on storage for Containers. Not to be confused with the Containers used in Spectrum Protect container pools, or the Containers supported by IBM Cloud Object Storage!
The poster had six enlarged presentation slides. Two at the top under "Abstract" covered business need and technology overview. The two in the middle under "Ubiquity Architecture" had a connection diagram and a list of supported environments. The last two under "IBM Vision" covered customer value, use cases, and additional resources.
As people transition from monolithic applications to microservices, IT is shifting from heavy Virtual Machines to lightweight Docker containers.
The Ubiquity project enables persistent storage for the Kubernetes and Docker container frameworks. It is a pluggable framework available for different storage systems. The framework interfaces with the storage systems, using their plugins. Different container frameworks can use Ubiquity concurrently, allowing access to different storage systems.
IBM has support for Spectrum Scale, all of the Spectrum Accelerate offerings (including XIV, FlashSystem A9000/R) and all of the Spectrum Virtualize offerings (including SVC, Storwize and FlashSystem V9000).
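As a hedged illustration of what "persistent storage for Kubernetes" means in practice, here is a generic PersistentVolumeClaim requesting storage from a dynamically provisioned storage class. The class name `ubiquity-gold` is invented for illustration; consult the Ubiquity project documentation for the actual provisioner name and parameters:

```yaml
# Request a 10 GiB volume from a (hypothetical) Ubiquity-backed storage class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ubiquity-gold
```

A pod then references `demo-pvc` in its volumes section, and the framework takes care of creating and attaching the volume on the backing storage system.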
Single-page handouts as "take-aways" were a nice extra touch.
This week, I was in beautiful Melbourne, Australia for IBM Systems Technical University. Here is my recap of Day 2.
The Truth Behind Converged/Hyperconverged Solutions
Abilio De Oliveira, IBM Client Technical Specialist, presented his thoughts on Converged and Hyperconverged solutions.
I went to hear what Abilio had to say, as I was presenting a similar session later the same day. There is a lot of hype surrounding both Converged and Hyperconverged systems, and Abilio was not buying it. He cautioned that there were over 25 vendors in this space, and often what they claim does not match reality.
He ended with a hilarious comparison, using the Television shows "Finding Bigfoot" and "Monster Hunters" as analogies.
Cloud storage comes in four flavors: persistent, ephemeral, hosted, and reference. The first two I refer to as "Storage for the Computer Cloud" and the latter two I refer to as "Storage as the Storage Cloud".
I also explained the differences between block, file and object access, and why different Cloud storage types use different access methods.
Finally, I covered some of our new public cloud storage offerings, using OpenStack Swift and Amazon S3 protocols to access objects off premises, including the new Cold Vault and Flex pricing on IBM Cloud Object Storage System in IBM Bluemix Cloud.
A guide to assist you to build a business continuity solution
Alexis Giral, IBM Executive Storage Architect, presented business continuity and the various technologies IBM has to offer for disaster recovery.
I went to hear what Alexis had to say, as I was presenting a similar session later the same day. The first part of his presentation was a nearly identical overview of basic concepts, such as recovery point objective (RPO) and recovery time objective (RTO), but the rest of his talk focused on the technologies in the storage products to use for each Business Continuity tier.
Pendulum Swings Back -- Understanding Converged and Hyperconverged Systems
For Converged Infrastructure, IBM and Cisco have greatly expanded the offerings in VersaStack. IBM supports SVC, Storwize V7000, Storwize V5000, FlashSystem 900, FlashSystem V9000 and FlashSystem A9000. The Cisco UCS x86 servers can be configured for IBM Cloud Object Storage System. VersaStack also supports Cisco CloudCenter to provide a Hybrid Cloud solution, taking advantage of IBM Spectrum Copy Data Management.
For Hyperconverged Infrastructure, IBM offers both Spectrum Accelerate and Spectrum Scale software. Recently, IBM has partnered with Nutanix to provide pre-installed POWER8 servers that run a customized version of their Acropolis Hypervisor. This supports Little-Endian Linux distributions, such as CentOS and Ubuntu, running as Virtual Machines.
Business Continuity - The seven tiers of Disaster Recovery
Back in 1983, a task force of IBM clients at a GUIDE conference developed "Seven Business Continuity Tiers for Disaster Recovery", which I refer to as "BC Tiers". I divided the presentation into three sections:
Backup and Restore: BC tiers 1 through 3 are based on backup and restore methodologies. I explained how to back up Hadoop analytics data, all of the various options for IBM Spectrum Protect software, and how to encrypt the tape data that gets sent off premises.
Rapid Data Recovery: BC tiers 4 and 5 reduce the Recovery Point Objective (RPO) and Recovery Time Objective (RTO) with snapshots, database journal shadowing, and IBM Cloud Object Storage.
Continuous Operations: BC tiers 6 and 7 provide data replication mirroring across locations. I covered 2-site, 3-site and 4-site configurations. I added details on IBM GDR for Power Systems, which supports AIX, IBM i and Linux on POWER disaster recovery with DS8000 and Spectrum Virtualize storage.
While I was working, Mo took a city tour. Here she is taking a picture on the river walk along Melbourne's Yarra River.
Melbourne is a very clean city, people are friendly, and the architecture of the various buildings in the "Central Business District", or CBD as the locals call it, is stunning. Every building is unique!
Tonight we have a special "poster session" on the top floor of Melbourne's tallest building that is said to have excellent views of the city.
My session on IBM Cloud Object Storage had three sections. First, I covered an overview of what "Object Storage" is in general, and how it differs from traditional block or file storage approaches.
Second, I explained what is unique and different about IBM Cloud Object Storage System, formerly the dsNet product from Cleversafe. IBM acquired Cleversafe in 2015.
Third, I explained the various applications, use cases and industries that can take advantage of Object Storage.
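Since I can't embed the slides here, here is a toy sketch of the dispersal idea behind the Cleversafe technology. This is not the actual algorithm (the real product uses a far more general erasure code across many slices and sites); it is a minimal 2-data-plus-1-parity XOR example showing how an object can survive the loss of any one slice:

```python
# Toy information dispersal: split an object into 2 data slices + 1 XOR
# parity slice. Any single lost slice can be rebuilt from the other two.
# (Illustration only -- not the real Cleversafe/IBM COS erasure code.)

def disperse(data: bytes):
    if len(data) % 2:                      # pad to an even length
        data += b"\x00"
    a, b = data[0::2], data[1::2]          # de-interleave into two slices
    parity = bytes(x ^ y for x, y in zip(a, b))
    return [a, b, parity]

def reconstruct(slices):
    a, b, parity = slices
    if a is None:                          # rebuild a missing slice via XOR
        a = bytes(x ^ y for x, y in zip(b, parity))
    elif b is None:
        b = bytes(x ^ y for x, y in zip(a, parity))
    data = bytes(byte for pair in zip(a, b) for byte in pair)
    return data.rstrip(b"\x00")            # drop padding (toy shortcut)

slices = disperse(b"hello!")
slices[0] = None                           # simulate losing one slice
print(reconstruct(slices))                 # b'hello!'
```

The real system spreads many such slices across storage nodes in different data centers, so an entire site can fail without losing data and without keeping full replicas.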
IBM Storage and the NVMe Revolution
Brian Sherman, IBM Distinguished Engineer for Storage Advanced Technical Services, presented an overview of NVMe, NVMe Over Fabric (NVMeOF) and what IBM is doing in this area.
How to Build a Rockstar Personal Brand
Andrea Edwards, The Digital Conversationalist, is a globally award-winning B2B communications professional with more than 20 years of experience from around the globe, including 12 years exclusively in Asia Pacific. IBM has hired her in the Asia Pacific region to train many IBMers in Social Media.
She condensed her normal 5-6 hour training down to a single hour for this event. She explained why building a personal brand was important, how to do it, and why businesses and organizations should encourage their employees to do so.
For example, who has the most influence on most people? Behind friends and family are bloggers. Bloggers are more influential than journalists, religious leaders, celebrities and politicians.
(As the #1 blogger of IBM, I am considered to already have a "rockstar personal brand". I am pleased to see that IBM is taking social media seriously. I have been blogging since 2006, and have influenced over $4 billion US dollars in IBM revenue in the past 11 years.)
IBM Spectrum Virtualize technical updates
Andrew Martin, IBM Spectrum Virtualize Support Architect, presented the last 18 months of enhancements to Spectrum Virtualize, from v7.6.1 introduced in March 2016 to v7.8.1 released earlier this year.
He managed to highlight quite a few enhancements:
Distributed RAID 5 and RAID 6
Integrated Compresstimator tool
New hardware: SVC, Storwize V7000 Gen2+, Storwize V5000 Gen 2, and 92-drive 5U High Density Expansion Enclosure
N-Port ID Virtualization (NPIV)
Virtualization Over iSCSI
Encryption for Distributed RAID Arrays
64GB Read Cache
Tier 1 Flash Support
Compressed IP Replication
Spectrum Virtualize as Software for Lenovo and SuperMicro servers
Host Clusters and Throttling
Raised limit to 10,000 Volumes
Transparent Cloud Tiering
Storwize Model Conversions
IBM SKLM Support for Encryption
Consistency Protection for Metro and Global Mirror remote-distance replication
Andrew called this a "reverse roadmap": rather than presenting where we are going in the next 18 months, he presented where we have been.
Solution Center Reception
Here I am with Morgan Tracey and Jenna Brooker from Computer Merchants, an IBM Business Partner.
Not only were Computer Merchants a sponsor with a booth at the Solution Center, but they also gave a customer testimonial at one of the breakout sessions on how they were able to use IBM Artificial Intelligence to help with their business.
I also spent time at the SuSE booth. SuSE is a distributor of Linux that runs on x86, POWER and IBM Z mainframe systems.
While I was working, Mo took a tour to Phillip Island. On the way, they stopped at Maru to feed kangaroos and take pictures with Koala bears.
At Phillip Island, Mo watched penguins come out of the ocean, waddle up on shore and march to their burrows. This happens every evening and is one of the top tourist attractions near Melbourne.
Last week, I was in São Paulo, Brazil for IBM Systems Technical University.
Did the resort ask these two security guards to dress up as clowns? No, it turns out these were clowns dressed up as security guards! On other days, they were dressed in drag as housewives, or as Jamaican Rastafari in dreadlocks and tie-dyed tee shirts. Some of the attendees enjoyed their comic relief.
Here is my recap of Day 3 breakout sessions:
Demystifying Transparent Cloud Tiering for DS8000 and DFSMShsm
Ricardo Alan, IBM Client Technical Specialist, covered this recently announced synergy between DS8000 firmware and DFSMShsm, a part of the z/OS operating system for IBM Z mainframes.
(Historical note: I started my career as a software engineer for DFHSM, which was later renamed DFSMShsm, working my way up to lead architect for DFSMShsm, and later as chief architect for DFSMS overall. A good portion of my 19 patents are related to these products.)
Since the 1970s, mainframe clients have been able to move less active data from expensive disk storage to lower-cost tape media. DFSMShsm would read data sets into the mainframe processor, chop them up into 16KB blocks, and then write them out to tape, often through an automated tape library.
Transparent Cloud Tiering introduces an alternative option. DFSMShsm now identifies which tracks of data need to be re-located and sends the request to the IBM DS8000 storage device, and the DS8000 sends the tracks as objects to the Cloud. Any application that references these data sets would automatically trigger a recall to bring the data back from the Cloud.
This feature is available for the DS8870 and DS8880 models, using the existing Ethernet ports already installed. No additional hardware is required. Enhancements to DFSMShsm will be rolled out via SPEs on z/OS releases. Initially, the system uses OpenStack Swift object protocol, but IBM has plans to support Amazon S3 protocol as well.
Data Migration Challenges and Solutions with IBM Enterprise Storage
Sidney Varoni Jr. presented this session on data migration methods. Data is migrated for three reasons. First, to re-balance across multiple storage arrays. If you bring in a new storage array, you often want to move data from older arrays to balance the workload.
The second reason is to get rid of old hardware altogether, which means migrating the data to new hardware first. With Dell's acquisition of EMC, for example, many clients are using tools like TDMF to move data off EMC and onto IBM DS8000 storage systems. IBM DS8000 storage systems are faster, easier to use, and less expensive to operate, from a total cost of ownership (TCO) perspective, than comparable capacity on EMC VMAX devices.
The third reason is to migrate from one data center to another. The average data center was built 10-15 years ago, and many no longer meet the needs and requirements of newer IT operations. Some clients are building new data centers, while others are moving their data to co-location facilities.
NVMe Over Fabrics: The next evolution in high performance for SSD interfaces is NVMe
Waner Dall Averde, Territory Representative from Brocade, presented this session on NVMe and NVMe Over Fabric (NVMeOF). As a joke, he showed this chart in Japanese.
(Fun Fact: The first Japanese immigrants arrived in Brazil in 1908. Brazil is home to the largest Japanese population outside Japan. Source: Wikipedia)
For the past 20 years, hosts have sent commands to SAS and SATA disk devices through legacy host interfaces such as the Advanced Host Controller Interface (AHCI) for SATA.
Unfortunately, AHCI is now the bottleneck between faster servers and faster Non-Volatile Memory, such as Flash and Solid State Drive (SSD) storage devices. It supports only 32 outstanding commands on a single command queue.
NVMe offers a replacement for the SCSI command set. It can support up to 64,000 commands on each of as many as 64,000 parallel command queues. Designed for 32 Gbps PCIe bus speeds, it is faster than traditional 6 Gbps and 12 Gbps SAS connections, reducing latency by 200 microseconds.
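To put those numbers in perspective, here is a quick back-of-the-envelope calculation using the figures from the talk, plus AHCI's well-known single queue of 32 commands:

```python
# Outstanding-command capacity: AHCI vs NVMe (figures from the talk)
ahci_capacity = 1 * 32                  # 1 queue x 32 commands
nvme_capacity = 64_000 * 64_000         # 64,000 queues x 64,000 commands each

# Wire time for one 4 KiB block at the quoted link rates
def transfer_us(nbytes: int, gbps: float) -> float:
    return nbytes * 8 / (gbps * 1e9) * 1e6   # microseconds

pcie_us = transfer_us(4096, 32)         # ~1.0 us at 32 Gbps PCIe
sas_us = transfer_us(4096, 12)          # ~2.7 us at 12 Gbps SAS

print(ahci_capacity, nvme_capacity)
print(round(pcie_us, 2), round(sas_us, 2))
```

Of course, the real latency win comes from the leaner command path and massively parallel queues, not wire speed alone; that is where the 200-microsecond reduction cited above comes from.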
Unfortunately, PCIe cables are limited to short distances. PCIe Gen 1 supported 15 inches, PCIe Gen 2 supported 12 inches, and PCIe Gen 3 only 8 inches. To provide greater distances, NVMeOF allows the NVMe command set to be carried over long-distance networks, such as Ethernet, InfiniBand or Fibre Channel.
Brocade Gen5 (16 Gbps) and Gen6 (32 and 128 Gbps) Fibre Channel switches and directors already support NVMeOF, and are designed to allow co-existence between NVMe and SCSI commands for smooth transition in mixed environments. Clients can buy their networking gear directly from IBM.
IBM Power Systems Flash Cache Acceleration
Petra Bührer, IBM Offering Manager for Power Systems software, explained the recent performance enhancement called "Flash Cache Acceleration".
This is a feature on POWER8 servers running AIX 7.1 TL4 SP2 or AIX 7.2 TL0 SP0, or higher. By using internal or direct-attached SSD, the operating system can cache the most active blocks of data from external storage systems.
While this is certified for use with Oracle, it supports only single-instance databases. Oracle RAC and other active/active configurations are not supported at this time.
The Secret to IBM Disk Encryption - Deep Dive
As if Mo McCullough, one of the event coordinators for this conference, were not busy enough keeping the conference going, he also gave technical presentations.
With the excitement over the IBM z14 end-to-end encryption announcement, there has been increased demand for everything related to encryption and security.
Unfortunately, I had to leave for the airport before the "Closing Session". The Club Med Lake Paradise resort was 60-90 minutes away from the GRU airport, and rush hour traffic in a city of 12 million people can get really bad.
Last week, I was in São Paulo, Brazil for IBM Systems Technical University.
Instead of separate physical rooms for each breakout session, this event had "virtual rooms". One speaker called it the "Software Defined Stage". Basically, there were five "rooms" in the main ballroom, and another eight rooms in a second ballroom.
Rather than blasting out each speaker's voice over loudspeakers, each speaker spoke softly into a headset microphone. All attendees wore headsets. Rooms 1 through 4 offered real-time translation, so attendees could choose to hear in English or Brazilian Portuguese.
In the other 13 "rooms", local speakers spoke in Brazilian Portuguese, but you still had to wear headsets to avoid speaking louder than the speaker next to you. For many of these, the charts were written in English.
My translators, Luciana and Marilia, explained to me the advantage of this approach. Normally, when speakers present in English, those who need the real-time translation wear the "headphone of shame", which advertises to all others that the attendee's English proficiency is poor.
Sometimes, those who did not understand English well would skip the headsets, nodding or laughing along with the other attendees but failing to understand the message. By having everyone wear headsets, there is no stigma, and everyone can discreetly select the language they prefer to listen in.
Here is my recap for the breakout sessions on Day 2:
In this presentation, I gave an overview of interest in Cloud technologies, including OpenStack and RESTful APIs to manage server and storage resources. I then covered IBM Hybrid Cloud Storage configurations in five categories:
Cold storage for data infrequently accessed
Backup and Snapshot storage
Disaster Recovery storage
Daily Operations and Reporting
Special thanks to Chris Vollmar and Brian Sherman for their help in preparing this presentation.
Data Optimization: How to verify your data is being used efficiently
It is hard to believe that it was over 15 years ago that I was the chief architect for the software we now call IBM Spectrum Control. There are a variety of editions and bundles for this product, but my focus in this talk was on the advanced storage analytics found in IBM Virtual Storage Center and IBM Spectrum Control Advanced Edition.
I covered three use cases:
What storage tier to put your workload in, and how to move existing data into a faster or slower tier to meet business requirements and IT budgets.
For steady state environments, how to re-balance storage pools within a single tier to keep things even for optimal performance.
When it is time to decommission storage, how to transform volumes from one storage pool to another without downtime or outages.
Special thanks to Bryan Odom for his help in preparing this presentation.
IBM Hyperconverged Systems powered by Nutanix: Technical Overview
Ricardo Matinata, IBM Senior Technical Staff Member for Linux, KVM and Cloud on POWER, presented the latest IBM CS models for POWER systems that are pre-installed with Nutanix software running their Acropolis Hypervisor (AHV) to run Linux on POWER application virtual machines.
Managing Risks with Thin Provisioning, Compression, and Data Deduplication
This session had four parts. First, an overview of "Data Footprint Reduction" technologies, like compression, data deduplication, space-efficient snapshots and thin provisioning.
Second, a look at how these technologies can get storage administrators in trouble. Much like airlines selling more tickets than seats on the airplane, storage administrators may over-provision based on data reduction estimates, and then suddenly run out of storage capacity.
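The airline analogy can be put into numbers. A hedged sketch, with ratios invented purely for illustration:

```python
# Over-provisioning risk: promised capacity depends on an *estimated*
# data reduction ratio actually holding up. (Illustrative numbers only.)
physical_tb = 100

estimated_ratio = 3.0        # admin plans for 3:1 reduction...
provisioned_tb = physical_tb * estimated_ratio   # ...so provisions 300 TB

actual_ratio = 2.0           # ...but the data only reduces 2:1
physical_needed_tb = provisioned_tb / actual_ratio   # 150 TB really needed

shortfall_tb = physical_needed_tb - physical_tb
print(shortfall_tb)          # TB short if hosts fill what they were promised
```

In this sketch, the array runs out of physical capacity once hosts have written two-thirds of what they were promised, which is exactly the "overbooked flight" scenario storage administrators need monitoring to avoid.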
Third, an overview of IBM FlashSystem A9000 and A9000R products, often referred to as "A9000/R" to cover both as a family. These models offer data footprint reduction for all data.
Finally, I explained how the Hyper-Scale Manager GUI can help with reporting and analytics to avoid these risks. This GUI is available for the FlashSystem A9000/R, as well as XIV Gen3 and Spectrum Accelerate software clusters.
Special thanks to Rivka Matosevich for her help in preparing this presentation.
The Right Flash for the Right Workload
Fabiano Gomes, IBM Client Technical Specialist, presented IBM's portfolio of All-Flash Arrays, from FlashSystem and DS8000F to Elastic Storage Server and the Storwize V7000F and V5000F models. Each of these has its own characteristics, which might favor one over the others for particular workloads and use cases.
The day was capped off with a nice evening reception at the pool bar. Bartenders were serving Caipirinhas, a Brazilian cocktail traditionally made with sugar cane liquor, sugar and lime, but in this case also offered in other flavors, such as pineapple or passion fruit.
Last week, I was in São Paulo, Brazil for IBM Systems Technical University.
Luciana and Marilia
While I speak Spanish fluently, my Brazilian Portuguese is a bit rusty, so I was asked to present in English language, and let these two real-time translators, Luciana and Marilia, speak on my behalf.
A big challenge is that English is a terse language, but Brazilian Portuguese is more verbose. It takes more syllables, and thus more time, to perform real-time translation. I have learned to pause at the end of each sentence to give a chance for my translators to catch up.
Servers (2 syllables) → Servidores (4 syllables)
Storage (2 syllables) → Armazenamento (6 syllables)
In this table, you can see that some technical terms take more syllables in Brazilian Portuguese than English. Often, I heard the local speakers just say "Servers" or "Storage" for convenience.
Here is my recap of breakout sessions on Day 1.
IBM Storage Trends and Directions
Alcides Bertazi, IBM Executive IT Specialist, presented the latest in Storage Trends and Directions.
Introduction to Object Storage and its Applications
This session had three sections. First, I covered an overview of what "Object Storage" is in general, and how it differs from traditional block or file storage approaches.
Second, I explained what is unique and different about IBM Cloud Object Storage System, formerly the dsNet product from Cleversafe. IBM acquired Cleversafe in 2015.
Third, I explained the various applications, use cases and industries that can take advantage of Object Storage.
IBM Spectrum Copy Data Management for Beginners
Eduardo Tomaz, IBM Client Technical Sales for Software Defined Storage solutions, presented an overview of IBM Spectrum Copy Data Management (CDM), the newest member of the IBM Spectrum Storage family.
IBM Spectrum Protect Update
Rosane Lagnor, IBM Certified IT Specialist - Storage Consultant Lab Services, and her two colleagues co-presented this session on the latest in IBM Spectrum Protect. The review went chronologically, from v7.1.4, introduced in late 2015, all the way to v8.1.1, the latest generally available release.
(Note: IBM just announced v8.1.2, but it is not generally available yet in Brazil.)
I managed to understand the local speakers in their native Brazilian Portuguese. In many cases, the charts were in English, so I was able to read in English what I may not have understood when spoken.
Last week, I was in São Paulo, Brazil for IBM Systems Technical University. With over 12 million people, it is the most-populous city in the Americas. Our venue was the Club Med Lake Paradise resort on the outskirts of town. We had about 700 attendees.
We had several local speakers do the opening session. Here is my recap:
Marcelo Porto, IBM General Manager for Brazil
This year, IBM Brazil celebrates its 100th anniversary. This all happened because Valentim Bouças persuaded then-IBM President Thomas Watson, Sr. to approve the establishment of a Rio de Janeiro office for the sale of IBM machines, beginning in 1917.
For 100 years now, IBM has thrived with a set of core values. In every era in the past, IBM systems have been perfect for the business needs at the time, from punch cards to personal computers. But what got us here won't get us there in the future. The biggest challenge to transformation is people and culture. We must break the chains that hold us to the past. IBM drives disruption.
To prepare for the future, Marcelo recommended the following. First, learn English, because the English language is the "API of Business". Second, keep a curious mind. Seek out new things to learn. The new world needs skills and expertise in a variety of areas. Third, watch the movie "Hidden Figures", featuring the IBM mainframe computer.
The IBM Watson computer now speaks and understands Brazilian Portuguese. Grupo Fleury uses Watson for genomics research. MRV Engineering uses it for chatbots. Mãe de Deus Hospital uses it for Oncology, as cancer patients now make up the largest share of patients there. Walmart uses Blockchain to focus on food safety.
IBM Watson is used at Pinacoteca de São Paulo Museum to offer "Voz de Arte", the ability to ask IBM Watson about each painting in handheld smartphone devices. An example of this was available in the Solution Center.
In addition to natural language processing (NLP), IBM Watson can also do image recognition, a task normally only humans could do.
Watson can validate signatures, perform facial recognition at different angles, and even identify shirts, pants and shoes of fashion models in photographs.
Companies and organizations that are unable to transform data into insights and business decisions will fail.
Mauro D'Angelo, IBM Strategy and Business Development for Brazil
Why are companies like Uber and Airbnb successful? Mauro felt that it was because they had a proper Cloud infrastructure combined with the right data architecture.
(In this case, "success" is based on company valuation, often billions of US dollars. However, many of these companies are not profitable, losing millions of dollars in an aggressive effort to gain customers and establish their platform. It might take 12 to 24 months before a new customer becomes profitable.)
The data explosion is driving digital transformation. Cognitive systems must understand natural language, reason, learn and interact with humans. Machine Learning is much like training a puppy. You need to reward good behavior and fix bad behavior, and be patient, as it takes a long time.
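The "training a puppy" analogy maps loosely onto error-driven learning: when the model's behavior is correct, its weights are left alone (rewarded), and when it misbehaves, the weights are nudged toward the right answer. Here is a minimal, purely illustrative sketch of that idea using a toy perceptron; none of this represents any IBM Watson API:

```python
# Toy illustration of "reward good behavior, fix bad behavior":
# a perceptron adjusts its weights only when it makes a mistake.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, label) pairs with labels 0 or 1."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, label in samples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction          # 0 when behavior is "good"
            if error:                           # "fix bad behavior"
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Learn a simple AND function -- patience (many passes over the data) pays off.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])   # prints [0, 0, 0, 1]
```

Real cognitive systems use far larger models and datasets, but the feedback loop — predict, compare against the desired answer, correct — is the same in spirit.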
In the USA, doctors give patients a correct diagnosis on the first consultation only about 50 percent of the time; additional doctors or additional tests are often needed to reach the correct assessment. In Brazil, the rate is probably lower. Hopefully, Watson will help improve this.
Watson can also detect emotional tone and personality in social media. Is a customer angry? This could help prioritize which customer issues to address first.
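The triage idea — score each incoming message for anger and handle the angriest customers first — can be sketched in a few lines. Watson's tone analysis uses trained models; the naive keyword counter below (with made-up ticket text and word list) only illustrates the workflow, not the technology:

```python
# Naive sketch: score customer messages for "anger" and triage the
# angriest first. A real tone-analysis service uses trained models;
# this keyword counter only illustrates the prioritization workflow.

ANGRY_WORDS = {"furious", "unacceptable", "terrible", "refund", "worst", "angry"}

def anger_score(message):
    """Fraction of words in the message that signal anger."""
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(1 for w in words if w.strip(".,!?") in ANGRY_WORDS) / len(words)

def triage(tickets):
    """Return tickets sorted so the angriest customers come first."""
    return sorted(tickets, key=anger_score, reverse=True)

tickets = [
    "How do I export my report to PDF?",
    "This is unacceptable! Worst service ever, I want a refund!",
    "Thanks for the quick fix last week.",
]
for t in triage(tickets):
    print(t)
```

Because Python's `sorted` is stable, tickets with equal scores keep their arrival order, so calm customers are still served first-come, first-served.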
Schools have not changed since the days of Aristotle. Mauro showed a picture of a school taken in 1934, and a picture of the same classroom, taken recently, showing it is nearly the same. Students want to learn anytime, anywhere, and from any channel.
At Georgia Tech, a professor told his engineering students that nine "Teaching Assistants" (TAs) were available to help answer questions online. One of these, "Jill Watson", was actually the IBM Watson computer responding to the students. The students could not tell that Jill was not human!
In a traditional school, a teacher may reach only 50 to 60 students. Compare this to Khan Academy, whose instructional videos have had over 1.3 million views!
Frank Koja, IBM Systems Vice President for Brazil
When you buy something over the internet, what are your decision criteria? Often, the deciding factor is lowest cost. Digital transformation often requires re-invention.
Trust beats risk. The new IBM z14 mainframe focuses on trust, with end-to-end encryption, Blockchain and Machine Learning. zHyperLink drastically improves the connection between the mainframe and IBM DS8880 storage. IBM is helping over 400 clients adopt Blockchain.
The FlashSystem A9000 and A9000R models are 30x faster than traditional disk systems, and more dense, able to consolidate 20 racks down to one.
The new "PowerAI" bundle is a complete offering for Machine Learning and Deep Learning (ML/DL) on Power Systems, taking advantage of GPU and NVLink capabilities.
The "waitless" world has arrived.
This was a good start to the conference. The three speakers of the opening session were passionate about their topics, and people were excited to learn more as the week progressed.
The article starts out giving background history of the current mess we are in. Here is an excerpt:
"Throughout most of U.S. history, American high school students were routinely taught vocational and job-ready skills along with the three Rs: reading, writing and arithmetic...
...But in the 1950s, a different philosophy emerged: the theory that students should follow separate educational tracks according to ability...
Ability tracking did not sit well with educators or parents, who believed students were assigned to tracks not by aptitude, but by socio-economic status and race. ...
...The backlash against tracking, however, did not bring vocational education back to the academic core. Instead, the focus shifted to preparing all students for college, and college prep is still the center of the U.S. high school curriculum..."
My father was a mechanical engineer who enjoyed fixing cars and woodworking on the weekends. I got plenty of "vocational training" growing up at home, so I didn't need it in school and could focus on getting ready for college.
Nicholas asks legitimate questions at this stage: "So what’s the harm in prepping kids for college? Won’t all students benefit from a high-level, four-year academic degree program?" His initial response is:
"... As it turns out, not really. For one thing, people have a huge and diverse range of different skills and learning styles. Not everyone is good at math, biology, history and other traditional subjects that characterize college-level work.
Not everyone is fascinated by Greek mythology, or enamored with Victorian literature, or enraptured by classical music. Some students are mechanical; others are artistic. Some focus best in a lecture hall or classroom; still others learn best by doing, and would thrive in the studio, workshop or shop floor..."
It is hard to argue with the point that people are different and learn in different ways. Not everyone is meant for college.
"...And not everyone goes to college. The latest figures from the U.S. Bureau of Labor Statistics (BLS) show that about 68 percent of high school students attend college. That means over 30 percent graduate with neither academic nor job skills..."
Here is what I have the most problems with. To claim that the 30 percent of high school students who graduate but do not go to college have neither academic nor job skills? I disagree, as there are many jobs for which the academic and job-skill training received in high school is more than adequate. Nicholas then doubled down:
"...But even the 68 percent aren't doing so well. Almost 40 percent of students who begin four-year college programs don’t complete them, which translates into a whole lot of wasted time, wasted money, and burdensome student loan debt. Of those who do finish college, one-third or more will end up in jobs they could have had without a four-year degree. The BLS found that 37 percent of currently employed college grads are doing work for which only a high school degree is required.
It is true that earnings studies show college graduates earn more over a lifetime than high school graduates. However, these studies have some weaknesses. For example, over 53 percent of recent college graduates are unemployed or under-employed. And income for college graduates varies widely by major – philosophy graduates don’t nearly earn what business studies graduates do. Finally, earnings studies compare college graduates to all high school graduates. But the subset of high school students who graduate with vocational training – those who go into well-paying, skilled jobs – the picture for non-college graduates looks much rosier.
Yet despite the growing evidence that four-year college programs serve fewer and fewer of our students, states continue to cut vocational programs..."
There are a lot of successful billionaires who did not complete four years of college: Bill Gates, Steve Jobs, Michael Dell, Henry Ford, and Howard Hughes, just to name a few.
If you feel that the only purpose of attending high school or college is to gain job-specific skills, then you are missing out on everything else those institutions teach: valuable life lessons, getting along with others, teamwork, communication, and other "soft skills" that aren't necessarily job-specific.
Teenagers entering college are still growing up, trying to figure out what they want to do with their lives, discovering new ideas, new ways of thinking, and networking with people of different backgrounds and cultures.
"...The U.S. economy has changed. The manufacturing sector is growing and modernizing, creating a wealth of challenging, well-paying, highly skilled jobs for those with the skills to do them. The demise of vocational education at the high school level has bred a skills shortage in manufacturing today, and with it a wealth of career opportunities for both under-employed college grads and high school students looking for direct pathways to interesting, lucrative careers. Many of the jobs in manufacturing are attainable through apprenticeships, on-the-job training, and vocational programs offered at community colleges. They don’t require expensive, four-year degrees for which many students are not suited..."
The skills shortage is real, but until employers are willing to pay people what they are worth, the situation will not be resolved. The free market has a way of fixing skills shortages: high demand raises salaries, which leads people to invest in high school and college education in part to vie for those positions. That is partly why medical doctors are paid so much.
"...The modern workplace favors those with solid, transferable skills who are open to continued learning. Most young people today will have many jobs over the course of their lifetime, and a good number will have multiple careers that require new and more sophisticated skills..."
A few years ago, I was hosting clients for dinner in Tucson. The sales rep had brought his daughter and her roommate along, as there was a shooting at their college campus and classes were canceled for the week. The daughter asserted, "In 18 months, I will no longer have to learn anything again. I will be done with school." Her roommate chimed in, "Ha! I am a year ahead of you, and only six months away from that!"
I was the bearer of bad news. "Ladies," I said, "you will have to get used to learning new things the rest of your lives." The highest-ranking client at the table overheard me and added, "Ladies, that is probably the best advice I have heard in a while. I suggest you heed it carefully."
A big part of high school and college education is to teach you how to learn on your own. Learn to read, search out information, take measurements, gather data, make plans, and ask the right questions. These are skills that are useful in a wide variety of careers.
Nicholas concludes with:
"...Just a few decades ago, our public education system provided ample opportunities for young people to learn about careers in manufacturing and other vocational trades. Yet, today, high-schoolers hear barely a whisper about the many doors that the vocational education path can open. The “college-for-everyone” mentality has pushed awareness of other possible career paths to the margins. The cost to the individuals and the economy as a whole is high. If we want everyone’s kid to succeed, we need to bring vocational education back to the core of high school learning."
I agree the educational system in the United States is broken, but I am not sure I agree with everything that Nicholas writes in this article.