Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Systems Client Experience Center in Tucson Arizona, and featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
My books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Continuing my drawn-out coverage of IBM's big storage launch of February 9, today I'll cover the IBM System Storage TS7680 ProtecTIER data deduplication gateway for System z.
On the host side, TS7680 connects to mainframe systems running z/OS or z/VM over FICON attachment, emulating an automated tape library with 3592-J1A devices. The TS7680 includes two controllers that emulate the 3592 C06 model, with 4 FICON ports each. Each controller emulates up to 128 virtual 3592 tape drives, for a total of 256 virtual drives per TS7680 system. The mainframe sees up to 1 million virtual tape cartridges, up to 100GB raw capacity each, before compression. For z/OS, the automated library has full SMS Tape and Integrated Library Management capability that you would expect.
Inside, the two control units are connected to a redundant clustered pair of ProtecTIER engines running the HyperFactor deduplication algorithm, which processes the deduplication inline, as data is ingested, rather than using the post-process approach of other deduplication solutions. These engines are similar to the TS7650 gateway machines for distributed systems.
On the back end, these ProtecTIER deduplication engines are connected to external disk, up to 1PB. If you get a 25x deduplication ratio on your data, that would be 25PB of mainframe data stored on just 1PB of physical disk. The disk can be any disk supported by ProtecTIER over FCP protocol, not just the IBM System Storage DS8000, but also the IBM DS4000, DS5000 or IBM XIV storage system, various models of EMC and HDS, and of course the IBM SAN Volume Controller (SVC) with all of its supported disk systems.
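For a quick sense of that arithmetic, here is a minimal sketch in Python; the 25x ratio is illustrative, since actual ratios depend entirely on the data:

```python
# Back-of-the-envelope effective capacity behind a deduplicating gateway.
physical_tb = 1000      # 1 PB of physical disk behind the gateway
dedup_ratio = 25        # illustrative deduplication ratio, not a guarantee

effective_tb = physical_tb * dedup_ratio
print(f"{physical_tb / 1000:.0f} PB physical holds {effective_tb / 1000:.0f} PB of backup data")
# 1 PB physical holds 25 PB of backup data
```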
It's Tuesday, and that means more IBM announcements!
I haven't even finished blogging about all the other stuff that got announced last week, and here we are with more announcements. Since IBM's big [Pulse 2010 Conference] is next week, I thought I would cover this week's announcement of the Tivoli Storage Manager (TSM) v6.2 release. Here are the highlights:
Client-Side Data Deduplication
This is sometimes referred to as "source-side" deduplication, since storage admins can get confused about which machines are the clients in a TSM client-server deployment. The idea is to identify duplicates at the TSM client node, before sending data to the TSM server. This is done at the block level, so even files that are similar but not identical, such as slight variations from a master copy, can benefit. The dedupe process is based on an index shared across all clients and the TSM server, so if you have a file that is similar to a file on a different node, the blocks that are identical in both are deduplicated.
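To make the idea concrete, here is a toy sketch of source-side, block-level deduplication. This is not TSM's actual algorithm (TSM's chunking and shared index are far more sophisticated); it just shows why only the changed blocks of a near-identical file cross the network:

```python
import os
import hashlib

# Hypothetical fixed-size chunking; real products use smarter, variable-size chunking.
CHUNK_SIZE = 256 * 1024
server_index = set()   # stands in for the index shared across all client nodes

def backup(data: bytes) -> int:
    """Fingerprint each chunk; send only chunks the server has never seen."""
    bytes_sent = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in server_index:       # duplicate blocks are skipped
            server_index.add(digest)
            bytes_sent += len(chunk)
    return bytes_sent

master = os.urandom(10 * CHUNK_SIZE)                      # a 2.5MB "master copy"
variant = master[:-CHUNK_SIZE] + os.urandom(CHUNK_SIZE)   # slight variation of it
print(backup(master))    # first copy sends all 10 chunks
print(backup(variant))   # the variant sends only its 1 changed chunk
```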
This feature is available for both backup and archive data, and can also be useful for archives using the IBM System Storage Archive Manager (SSAM) v6.2 interface.
Simplified Management of Server Virtualization
TSM 6.2 improves its support of VMware guests by adding auto-discovery. Now, when you spontaneously create a new virtual machine guest image, you won't have to tell TSM; it will discover it automatically! TSM's legendary support of VMware Consolidated Backup (VCB) now eliminates the manual process of keeping track of guest images. TSM also adds support for the VMware vStorage API for file-level backup and recovery.
While IBM is the #1 reseller of VMware, we also support other forms of server virtualization. In this release, IBM adds support for Microsoft Hyper-V, including support using Microsoft's Volume Shadow Copy Services (VSS).
Automated Client Deployment
Do you have clients at all different levels of TSM backup-archive client code deployed all over the place? TSM v6.2 can automatically upgrade these clients to the latest level, using push technology, for any client running v5.4 and above. This can be scheduled so that only certain clients are upgraded at a time.
Simultaneous Background Tasks
The TSM server has many background administrative tasks:
Migration of data from one storage pool to another, based on policies, such as moving backups and archives on a disk pool over to a tape pool to make room for new incoming data.
Storage pool backup, where typically data on a disk pool is copied to a tape pool to be kept off-site.
Copy active data. In TSM terminology, if you have multiple backup versions, the most recent version is called the active version, and the older versions are called inactive. TSM can copy just the active versions to a separate, smaller disk pool.
In previous releases, these were done one at a time, so it could make for a long service window. With TSM v6.2, these three tasks are now run simultaneously, in parallel, so that they all get done in less time, greatly reducing the server maintenance window, and freeing up tape drives for incoming backup and archive data. Often, the same file on a disk pool is going to be processed by two or more of these scheduled tasks, so it makes sense to read it once and do all the copies and migrations at one time while the data is in buffer memory.
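To see why the parallel schedule shrinks the window, here is a minimal sketch using plain Python threads; the three functions are placeholders standing in for the TSM tasks above, not actual TSM commands:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def migrate():      time.sleep(2)   # disk pool -> tape pool, by policy
def backup_pool():  time.sleep(2)   # copy disk pool to an off-site tape pool
def copy_active():  time.sleep(2)   # copy active versions to a smaller disk pool

start = time.time()
with ThreadPoolExecutor() as pool:  # run all three tasks at once
    for task in (migrate, backup_pool, copy_active):
        pool.submit(task)
print(f"maintenance window: {time.time() - start:.0f}s")  # ~2s, versus ~6s one at a time
```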
Enhanced Security during Data Transmission
Previous releases of TSM offered secure in-flight transmission of data for Windows and AIX clients. This security uses Secure Sockets Layer (SSL) with 256-bit AES encryption. With TSM v6.2, this feature is expanded to support Linux, HP-UX and Solaris.
Improved support for Enterprise Resource Planning (ERP) applications
I remember back when we used to call these TDPs (Tivoli Data Protection). TSM for ERP allows backup of ERP applications, seamlessly integrating with database-specific tools like IBM DB2, Oracle RMAN, and SAP BR*Tools. It allows one-to-many and many-to-one configurations between SAP servers and TSM servers. In other words, you can have one SAP server back up to several TSM servers, or several SAP servers back up to a single TSM server. This is done by splitting databases into "sub-database objects" and then processing each object separately, which can be extremely helpful if you have databases over 1TB in size. In the event that backing up an object fails and has to be restarted, it does not impact the backup of the other objects.
Continuing on the [IBM Storage Launch of February 9], John Sing has offered to write the following guest post about the [announcement] of IBM Scale Out Network Attached Storage [IBM SONAS]. John and I have known each other for a while, having traveled the world together to work with clients and speak at conferences. He is an Executive IT Consultant on the SONAS team.
Guest Post written by John Sing, IBM San Jose, California
What is IBM SONAS? It’s many things, so let’s start with this list:
It’s IBM’s delivery of a productized, pre-packaged Scale Out NAS global virtual file server, delivered in a easy-to-use appliance
IBM’s solution for large enterprise file-based storage requirements, where massive scale in capacity and extreme performance is required, especially for today’s modern analytics-based Competitive Advantage IT applications
Scales to many petabytes of usable storage and billions of files in a single global namespace
Provides integrated central management, central deployment of petabyte levels of storage
Modular commercial-off-the-shelf [COTS] building blocks. I/O, storage, network capacity scale independently of each other. Up to 30 interface nodes and 60 storage nodes, in an IBM General Parallel File System [GPFS]-based cluster. Each 10Gb CEE interface node port is capable of streaming at 900 MB/sec
Files are written in block-sized chunks, striped over many disk drives in parallel (see the sketch after this list), aggregating both read and write throughput on a massive scale, as well as providing auto-tuning and auto-balancing
Functionality delivered via one program product, IBM SONAS Software, which provides all of the above functions, along with clustered CIFS, NFS v2/v3 with session auto-failover, FTP, high availability, and more
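For readers unfamiliar with wide striping, here is a toy sketch of the placement idea: cut a file into block-size chunks and deal them round-robin across every disk, so reads and writes are spread over all spindles at once. GPFS's real placement and rebalancing are far more sophisticated; the block size and disk count below are arbitrary:

```python
# Hypothetical block size and disk count, for illustration only.
BLOCK = 1024 * 1024
disks = [f"disk{n:02d}" for n in range(60)]

def placement(file_size: int):
    """Return (block_number, disk) pairs for a file striped round-robin."""
    num_blocks = (file_size + BLOCK - 1) // BLOCK   # round up to whole blocks
    return [(b, disks[b % len(disks)]) for b in range(num_blocks)]

for block, disk in placement(5 * BLOCK):
    print(f"block {block} -> {disk}")   # blocks land on disk00..disk04
```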
IBM SONAS makes automated tiered storage achievable and realistic at petabyte levels:
Integrated high performance parallel scan engine capable of scanning over 10 million files per minute per node
Integrated parallel data movement engine to physically relocate the data within tiered storage (sketched below)
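Conceptually, those two engines cooperate on the kind of pass sketched below: scan file metadata, pick files that have gone cold, and physically relocate them. This is only a toy illustration (SONAS does this inside GPFS with parallel policy and movement engines, not a directory walk), and the mount points and 90-day threshold are invented:

```python
import os, shutil, time

AGE_LIMIT = 90 * 24 * 3600                   # "cold" = untouched for 90 days (hypothetical)
fast_tier, slow_tier = "/tier1", "/tier2"    # hypothetical tier mount points

# Scan phase: identify candidate files by last-access time.
# Move phase: physically relocate each candidate to the cheaper tier.
for root, _, files in os.walk(fast_tier):
    for name in files:
        path = os.path.join(root, name)
        if time.time() - os.stat(path).st_atime > AGE_LIMIT:
            shutil.move(path, os.path.join(slow_tier, name))
```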
And we’re just scratching the surface. IBM has plans to deploy additional protocols, storage hardware options, and software features.
However, the real question of interest should be, “who really needs that much storage capacity and throughput horsepower?”
The answer may surprise you. IMHO, the answer is: almost any modern enterprise that intends to stay competitive. Hmmm…… Consider this: the reason that IT exists today is no longer to simply save cost (that may have been true 10 years ago). Everyone is reducing cost… but how much competitive advantage is purchased through “let’s cut our IT budget by 10% this year”?
Notice that in today’s world, there are (many) bright people out there, changing our world every day through New Intelligence Competitive Advantage analytics-based IT applications such as real time GPS traffic data, real time energy monitoring and redirection, real time video feed with analytics, text analytics, entity analytics, real time stream computing, image recognition applications, HDTV video on demand, etc. Think of how GPS industry, cell phone / Twitter / Facebook, iPhone and iPad applications, as examples, are creating whole new industries and markets almost overnight.
Then start asking yourself, “What's behind these Competitive Advantage IT applications – as they are the ones that are driving all my storage growth? Why do they need so much storage? What do those applications mean for my storage requirements?”
To be “real-time”, long-held IT paradigms are being broken every day. Things like “data proximity”: we can no longer extract terabytes of data from production databases and load them into a data warehouse – where’s the “real-time” in that? Instead, today’s modern analytics-based applications demand:
Multiple processes and servers (sometimes numbering in the 100s) simultaneously ….
Running against hundreds of terabytes of live production data, streaming in from an expanding number of smarter sensors, input devices, and users
Producing digital image-intensive results that must be programmatically sent to an ever increasing number of mobile devices in geographically dispersed locations
Requiring parallel performance levels, that used to be the domain only of High Performance Computing (HPC)
This is a major paradigm shift in storage – and these are the solution and storage capabilities that IBM SONAS is designed to address. And of course, you should be able to save significant cost through the SONAS global virtual file server consolidation and virtualization as well.
Certainly, this topic warrants more discussion. If you found it interesting, contact me, your local IBM Business Partner or IBM Storage rep to discuss Competitive Advantage IT applications and SONAS further.
Wrapping up my coverage of the Data Center Conference 2009, the week ended with a celebration. This year we had six "Hospitality Suites" sponsored by various vendors. Each suite had its own theme, decorations and entertainment. The first suite was VMware's "Cloud 9 Ultra Lounge", which offered blue cotton candy martinis. IBM is the leading reseller of VMware.
When the red martini liquid was poured on top of the blue cotton candy, the result was a nasty muddy brown-grey color. The guy on the left chose to get the martini without the blue cotton candy. We joked that this is perhaps a good metaphor for cloud computing in general: it looks good on paper, until you actually put it all together and realize it does not look as blue and puffy as you were expecting. However, it tasted good!
The next suite was sponsored by Cisco, one of IBM's storage networking partners. Cisco also decorated in blue, as Jake, the guy in the middle, demonstrates.
The next suite was sponsored by Brocade, our supplier for IBM-branded networking gear. They went with a red-and-black color scheme. Sadly, many of my pictures inside involved straitjackets and unicycles, so they are not appropriate for this blog. However, it was easy to remember that they were talking about their "extraordinary networks". Makes you want to help out Brocade by contacting your nearest IBM storage sales rep and buying yourself a SAN768B or two.
Somewhere along the way, we picked up Hawaiian leis at the "Margaritaville" Hospitality Suite, compliments of sponsor APC by Schneider Electric. We had the best "Filet Mignon" appetizers at "Club Dedupe" by our competitor DataDomain, and some fun with my friends over at Computer Associates' "Top Gun" suite. Pictured at right are Paula Koziol and Christian Barrera from Argentina. A good time was had by all.
Well, it's Tuesday again, and today we had our third big storage launch of 2009! A lot got announced today as part of IBM's big "Dynamic Infrastructure" marketing campaign. I will just focus on the disk-related announcements today:
IBM System Storage DS8700
IBM adds a new model to its DS8000 series with the
[IBM System Storage DS8700]. Earlier this month, fellow blogger and arch-nemesis Barry Burke from EMC posted [R.I.P DS8300] on the mistaken assumption that the new DS8700 meant that the DS8300 was going away, or that anyone who bought a DS8300 recently would be out of luck. Obviously, I could not respond until today's announcement, as the last thing I want to do is lose my job by disclosing confidential information. BarryB is wrong on both counts:
IBM will continue to sell the DS8100 and DS8300, in addition to the new DS8700.
Clients can upgrade their existing DS8100 or DS8300 systems to DS8700.
BarryB's latest post [What's In a Name - DS8700] is fair game, given all the fun and ridicule everyone had at his expense over EMC's "V-Max" name.
So the DS8700 is new hardware with only 4 percent new software. On the hardware side, it uses faster POWER6 processors instead of POWER5+, has faster PCI-e buses instead of the RIO-G loops, and faster four-port device adapters (DAs) for added bandwidth between cache and drives. The DS8700 can be ordered as a single-frame dual 2-way that supports up to 128 drives and 128GB of cache, or as a dual 4-way, consisting of one primary frame, and up to four expansion frames, with up to 384GB of cache and 1024 drives.
Not mentioned explicitly in the announcements were the things the DS8700 does not support:
ESCON attachment - Now that FICON is well-established for the mainframe market, there is no need to support the slower, bulkier ESCON options. This greatly reduced testing effort. The 2-way DS8700 can support up to 16 four-port FICON/FCP host adapters, and the 4-way can support up to 32 host adapters, for a maximum of 128 ports. The FICON/FCP host adapter ports can auto-negotiate between 4Gbps, 2Gbps and 1Gbps as needed.
LPAR mode - When IBM and HDS introduced LPAR mode back in 2004, it sounded like a great idea the engineers came up with. Most other major vendors followed our lead to offer similar "partitioning". However, it turned out to be what we call in the storage biz a "selling apple" not a "buying apple". In other words, something the salesman can offer as a differentiating feature, but that few clients actually use. It turned out that supporting both LPAR and non-LPAR modes merely doubled the testing effort, so IBM got rid of it for the DS8700.
Update: I have been reminded that both IBM and HDS delivered LPAR mode within a month of each other back in 2004, so it was wrong for me to imply that HDS followed IBM's lead when obviously development happened in both companies for the most part concurrently prior to that. EMC was late to the "partition" party, but who's keeping track?
Initial performance tests show up to 50 percent improvement for random workloads, and up to 150 percent improvement for sequential workloads, and up to 60 percent improvement in background data movement for FlashCopy functions. The results varied slightly between Fixed Block (FB) LUNs and Count-Key-Data (CKD) volumes, and I hope to see some SPC-1 and SPC-2 benchmark numbers published soon.
The DS8700 is compatible for Metro Mirror, Global Mirror, and Metro/Global Mirror with the rest of the DS8000 series, as well as the ESS model 750, ESS model 800 and DS6000 series.
New 600GB FC and FDE drives
IBM now offers [600GB drives] for the DS4700 and DS5020 disk systems, as well as the EXP520 and EXP810 expansion drawers. In each case, we are able to pack up to 16 drives into a 3U enclosure.
Personally, I think the DS5020 should have been given a DS4xxx designation, as it resembles the DS4700
more than the other models of the DS5000 series. Back in 2006-2007, I was the marketing strategist for the IBM System Storage product line, and part of my job involved all of the meetings to name or rename products. Mostly I gave reasons why products should NOT be renamed, and why it was important to name products correctly at the beginning.
IBM System Storage SAN Volume Controller hardware and software
Fellow IBM Master Inventor Barry Whyte has been covering the latest on the [SVC 2145-CF8 hardware]. IBM put out a press release last week on this, and today is the formal announcement with prices and details. Barry's latest post
[SVC CF8 hardware and SSD in depth] covers just part of the entire announcement.
The other part of the announcement was the [SVC 5.1 software] which can be loaded
on earlier SVC models 8F2, 8F4, and 8G4 to gain better performance and functionality.
To avoid confusion on what is hardware machine type/model (2145-CF8 or 2145-8A4) and what is software program (5639-VC5 or 5639-VW2), IBM has introduced two new [Solution Offering Identifiers]:
5465-028 Standard SAN Volume Controller
5465-029 Entry Edition SAN Volume Controller
The latter is designed for smaller deployments, supports only a single SVC node-pair managing up to
150 disk drives, and is available in Raven Black or Flamingo Pink.
EXN3000 and EXP5060 Expansion Drawers
IBM offers the [EXN3000 for the IBM N series]. These expansion drawers can pack 24 drives in a 4U enclosure. The drives can be either all-SAS or all-SATA, in 300GB, 450GB, 500GB and 1TB capacities.
The [EXP5060 for the IBM DS5000 series] is a high-density expansion drawer that can pack up to 60 drives into a 4U enclosure. A DS5100 or DS5300
can handle up to eight of these expansion drawers, for a total of 480 drives.
Pre-installed with Tivoli Storage Productivity Center Basic Edition. Basic Edition can be upgraded with license keys to the Data, Disk and Standard Editions, extending reporting and management support to XIV, N series, and non-IBM disk systems.
Pre-installed with Tivoli Key Lifecycle Manager (TKLM). This can be used to manage the Full Disk Encryption (FDE) encryption-capable disk drives in the DS8000 and DS5000, as well as LTO and TS1100 series tape drives.
IBM Tivoli Storage FlashCopy Manager v2.1
The [IBM Tivoli Storage FlashCopy Manager V2.1] combines two products into one. IBM used
to offer IBM Tivoli Storage Manager for Copy Services (TSM for CS) that protected Windows application data, and IBM Tivoli Storage Manager for Advanced Copy Services (TSM for ACS) that protected AIX application data.
The new product has some excellent advantages. FlashCopy Manager offers application-aware backup of LUNs containing SAP, Oracle, DB2, SQL Server and Microsoft Exchange data. It can support IBM DS8000, SVC and XIV point-in-time copy functions, as well as the Volume Shadow Copy Services (VSS) interfaces of the IBM DS5000, DS4000 and DS3000 series disk systems. It is priced by the number of TB you copy, not by the speed or number of CPU processors inside the server.
Don't let the name fool you. IBM FlashCopy Manager does not require that you use Tivoli Storage Manager (TSM) as your backup product. You can run IBM FlashCopy Manager on its own, and it will manage your FlashCopy target versions on disk, and these can be backed up to tape or another disk using any backup product. However, if you are lucky enough to also be using TSM, then there is optional integration that allows TSM to manage the target copies, move them to tape, inventory them in its DB2 database, and provide complete reporting.
Yup, that's a lot to announce in one day. And this was just the disk-related portion of the launch!
I saw this as an opportunity to promote the new IBM Tivoli Storage Manager v6.1 which offers a variety of new scalability features, and continues to provide excellent economies of scale for large deployments, in my post [IBM has scalable backup solutions].
"So does TSM scale? Sure! Just add more servers. But this is not an economy of scale. Nothing gets less expensive as the capacity grows. You get a more or less linear growth of costs that is directly correlated to the growth of primary storage capacity. (Technically, it costs will jump at regular and predictable intervals, by regular and predictable and equal amounts, as you add TSM servers to the infrastructure--but on average it is a direct linear growth. Assuming you are right sized right now, if you were to double your primary storage capacity, you would double the size of the TSM infrastructure, and double your associated costs.)"
I talked about inaccurate vendor FUD in my post [The murals in restaurants], and recently, I saw StorageBod's piece, [FUDdy Waters]. So what would "economies of scale" look like? Using Scott's own words:
Without Economies of Scale
"If it costs you $5 to backup a given amount of data, it probably costs you $50 to back up 10 times that amount of data, and $500 to back up 100 times that amount of data."
With Economies of Scale
"If anybody can figure out how to get costs down to $40 for 10 times the amount of data, and $300 for 100 times the amount of data, they will have an irrefutable advantage over anybody that has not been able to leverage economies of scale."
So, let's do some simple examples. I'll focus on a backup solution just for employee workstations, where each employee has 100GB of personal data to back up on their laptop or PC. We'll look at a one-person company, a ten-person company, and a hundred-person company.
Case 1: The one-person company
The sole owner needs a backup solution. Here are all the steps she might perform:
Spend hours evaluating the different backup products available, making sure her operating system, file system and applications are supported
Spend hours shopping for external media, which could be an external USB disk drive, optical DVD drive, or tape drive, and confirm it is supported by the selected backup software.
Purchase the backup software, external drive, and if optical or tape, blank media cartridges.
Spend time learning the product: purchase "Backup for Dummies" or a similar book, and/or take a training class.
Install and configure the software
Operate the software, or set it up to run automatically, and take the media offsite at the end of the day and bring it back each morning
Case 2: The ten-person company
I guess if each of the ten employees went off and performed all of the same steps as above, there would be no economies of scale.
Fortunately, co-workers are amazingly efficient in avoiding unnecessary work.
Rather than have all ten people evaluate backup solutions, have one person do it. If everyone runs the same or similar operating system, file systems and applications, this takes about the same effort as the one-person case.
Ditto on the storage media. Why should 10 people go off and evaluate their own storage media? One person can do it for all ten in about the same time as in the one-person case.
Purchasing the software and hardware. Ok, here is where some costs may be linear, depending on your choices. Some software vendors give bulk discounts, so purchasing 10 seats of the same software could be less than 10 times the cost of one license. As for storage hardware, it might be possible to share drives and even media. Perhaps one or two storage systems can be shared by the entire team.
For a lot of backup software, most of the work is in the initial set up, then it runs automatically afterwards. That is the case for TSM. You create a "dsm.opt" file, and it can list all of the include/exclude files and other rules and policies. Once the first person sets this up, they share it with their co-workers.
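For readers who have not seen one, here is roughly what such a shared options file might look like. The option names are standard TSM client options, but every value below is a placeholder, not a recommendation:

```
* dsm.opt -- shared across co-workers' Windows workstations (illustrative values)
COMMMETHOD         TCPIP
TCPSERVERADDRESS   tsm.example.com
TCPPORT            1500
PASSWORDACCESS     GENERATE
EXCLUDE.DIR        "C:\Temp"
INCLUDE            "C:\Data\*"
```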
If storage hardware is consolidated so that you have fewer drives than people, you can also have fewer people responsible for operations. For example, let's have the first five employees sharing one drive managed by Joe, and the second five employees sharing a second drive managed by Sally. Only two people need to spend time taking media offsite, bringing it back, and so on.
Case 3: The hundred-person company
Again, it is possible that a hundred-person company consists of 10 departments of 10 people each, and they all follow the above approach independently, resulting in no economies of scale. But again, that is not likely.
Here one or a few people can invest time to evaluate backup solutions. Certainly far less than 100 times the effort for a one-person company.
Same with storage media. With 100 employees, you can now invest in a tape library with robotic automation.
Purchase of software and hardware. Again, discounts will probably apply for large deployments. Purchasing 1 tape library for all one hundred people is less than 10 times the cost and effort of 10 departments all making independent purchases.
With a hundred employees, you may have some differences in operating system, file systems and applications. Still, this might mean two to five versions of dsm.opt, and not 10 or 100 independent configurations.
Operations is where the big savings happen. TSM has "progressive incremental backup", so it only backs up changed data. Other backup schemes involve taking periodic full backups, which tie up the network and consume a lot of back-end resources. In head-to-head comparisons between IBM Tivoli Storage Manager and Symantec's NetBackup, IBM TSM was shown to use significantly less network LAN bandwidth, less disk storage capacity, and fewer tape cartridges than NetBackup.
The savings are even greater with data deduplication. Whether using hardware, like the IBM TS7650 ProtecTIER data deduplication solution, or software, like the data deduplication capability built into IBM TSM v6.1, you can take advantage of the fact that 100 employees might have a lot of common data between them.
So, I have demonstrated how savings through economies of scale are achieved using IBM Tivoli Storage Manager. Adding one more person in each case is cheaper than the first person, so the situation is not linear as Scott suggests. But what about larger deployments? The IBM TS3500 Tape Library can hold 1 PB of data in only 10 square feet of data center floor space. The IBM TS7650G gateway can manage up to 1 PB of disk, holding as much as 25 PB of backup copies. IT Analysts Tony Palmer, Brian Garrett and Lauren Whitehouse from Enterprise Strategy Group tried IBM TSM v6.1 out for themselves and wrote up a ["Lab Validation"] report. Here is an excerpt:
"Backup/recovery software that embeds data reduction technology can address all three of these factors handily. IBM TSM 6.1 now has native deduplication capabilities built into its Extended Edition (EE) as a no-cost option. After data is written to the primary disk pool, a deduplication operation can be scheduled to eliminate redundancy at the sub-file level. Data deduplication, as its name implies, identifies and eliminates redundant data.
TSM 6.1 also includes features that optimize TSM scalability and manageability to meet increasingly demanding service levels resulting from relentless data growth. The move from a proprietary back-end database to IBM DB2 improves scalability, availability, and performance without adding complexity; the DB2 database is automatically maintained and managed by TSM. IBM upgraded the monitoring and reporting capabilities to near real-time and completely redesigned the dashboard that provides visibility into the system. TSM and TSM EE include these enhanced monitoring and reporting capabilities at no cost."
The majority of Fortune 1000 customers use IBM Tivoli Storage Manager, and it is the backup software that IBM uses itself in its own huge data centers, including the cloud computing facilities. In combination with IBM Tivoli FastBack for remote office/branch office (ROBO) situations, and complemented with point-in-time and disk mirroring hardware capabilities such as IBM FlashCopy, Metro Mirror, and Global Mirror, IBM Tivoli Storage Manager can be an effective, scalable part of a complete Unified Recovery Management solution.
This week, scientists at IBM Research and the California Institute of
Technology announced a scientific advancement that could be a major
breakthrough in enabling the semiconductor industry to pack more power
and speed into tiny computer chips, while making them more energy
efficient and less expensive to manufacture. IBM is a leader in
solid-state technology, and this scientific breakthrough shows promise.
But first, a discussion of how solid-state chips are made in the first place. Basically, a round thin wafer is etched using [photolithography]
with lots of tiny transistor circuits. The same chip is repeated over
and over on a single wafer, and once the wafer is complete, it is
chopped up into little individual squares. Wikipedia has a nice article
on [semiconductor device fabrication], but I found this
[YouTube video] more illuminating.
Up until now, the industry was able to get features down to 22 nanometers, and was hitting physical limitations in getting down to anything smaller. The new development from IBM and Caltech is to use self-assembling DNA strands, folded into specific shapes using other strands that act as staples, and then using these folded structures as scaffolding on which to place nanotubes. The result? Features as small as 6 nanometers. How cool is that?
While NAND Flash Solid-State Drives are available today, this new technique can help develop newer, better technologies like Phase Change Memory.
Continuing my week in Chicago for the IBM Storage Symposium 2008, we had sessions that focused on individual products. IBM System Storage SAN Volume Controller (SVC) was a popular topic.
SVC - Everything you wanted to know, but were afraid to ask!
Bill Wiegand, IBM ATS, who has been working with SAN Volume Controller since it was first introduced in 2003, answered some frequently asked questions about IBM System Storage SAN Volume Controller.
Do you have to upgrade all of your HBAs, switches and disk arrays to the recommended firmware levels before upgrading SVC? No. These are recommended levels, but not required. If you do plan to update firmware levels, focus on the host end first, switches next, and disk arrays last.
How do we request special support for stuff not yet listed on the Interop Matrix?
Submit an RPQ/SCORE, same as for any other IBM hardware.
How do we sign up for SVC hints and tips? Go to the IBM
[SVC Support Site] and select the "My Notifications" under the "Stay Informed" box on the right panel.
When we call IBM for SVC support, do we select "Hardware" or "Software"?
While the SVC is a piece of hardware, there are very few mechanical parts involved. Unless there are sparks,
smoke, or front bezel buttons dangling from springs, select "Software". Most of the questions are
related to the software components of SVC.
When we have SVC virtualizing non-IBM disk arrays, who should we call first?
IBM has world-renowned service, with some of IT's smartest people working the queues. All of the major storage vendors play nice as part of the [TSAnet Agreement] when a mutual customer is impacted.
When in doubt, call IBM first, and if necessary, IBM will contact other vendors on your behalf to resolve the problem.
What is the difference between livedump and a Full System Dump?
Most problems can be resolved with a livedump. While not complete information, it is generally enough, and it is completely non-disruptive. Other times, the full state of the machine is required, so a Full System Dump is requested. This involves rebooting one of the two nodes, so virtual disks may temporarily run slower on that node.
What does "svc_snap -c" do?The "svc_snap" command on the CLI generates a snap file, which includes the cluster error log and trace files from all nodes. The "-c" parameter includes the configuration and virtual-to-physical mapping that can be useful for
disaster recovery and problem determination.
I just sent IBM a check to upgrade my TB-based license on my SVC, how long should I wait for IBM to send me a software license key?
IBM trusts its clients. No software license key will be sent. Once the check clears, you are good to go.
During migration from old disk arrays to new disk arrays, I will temporarily have 79TB more disk under SVC management, do I need to get a temporary TB-based license upgrade during the brief migration period?
Nope. Again, we trust you. However, if you are concerned about this at all, contact IBM and they will print out
a nice "Conformance Letter" in case you need to show your boss.
How should I maintain my Windows-based SVC Master Console or SSPC server?
Treat this like any other Windows-based server in your shop, install Microsoft-recommended Windows updates,
run Anti-virus scans, and so on.
Where can I find useful "How To" information on SVC?
Specify "SAN Volume Controller" in the search field of the
[IBM Redbooks vast library of helpful books.
I just added more managed disks to my managed disk group (MDG), can I get help writing a script to redistribute the extents to improve wide-striping performance?
Yes, IBM has scripting tools available for download on
[AlphaWorks]. For example, svctools will take
the output of the "lsinfo" command, and generate the appropriate SVC CLI to re-migrate the disks around to optimize
performance. Of course, if you prefer, you can use IBM Tivoli Storage Productivity Center instead for a more automated approach.
Any rules of thumb for sizing SVC deployments?
IBM's Disk Magic tool includes support for SVC deployments. Plan for 250 IOPS/TB for light workloads,
500 IOPS/TB for average workloads, and 750 IOPS/TB for heavy workloads.
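Those rules of thumb reduce to a one-line calculator; a minimal sketch (the 80TB example is mine, not from the session):

```python
IOPS_PER_TB = {"light": 250, "average": 500, "heavy": 750}   # session rules of thumb

def svc_sizing(capacity_tb: float, workload: str) -> float:
    """Estimate the IOPS an SVC deployment must sustain for a given capacity."""
    return capacity_tb * IOPS_PER_TB[workload]

print(svc_sizing(80, "heavy"))   # 80 TB of heavy workload -> plan for 60,000 IOPS
```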
Can I migrate virtual disks from one managed disk group (MDG) to another with a different extent size?
Yes, the new Vdisk Mirroring capability can be used to do this. Create the mirror for your Vdisk between the
two MDGs, wait for the copy to complete, and then split the mirror.
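On the CLI, the flow is roughly the following. This is a hedged sketch from memory, not from the session; treat the exact command names and parameters as approximate, and check the SVC command reference for your code level:

```
svctask addvdiskcopy -mdiskgrp NEW_MDG 12        # add a mirrored copy of vdisk 12 in the target MDG
svcinfo lsvdisksyncprogress 12                   # wait until the new copy reports synchronized
svctask splitvdiskcopy -copy 1 -name newvd 12    # split off the synchronized copy
```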
Can I add or replace SVC nodes non-disruptively? Absolutely, see the Technotes [SVC Node Replacement] page.
Can I really order an SVC EE in Flamingo Pink? Yes. While my blog post that started all
this [Pink It and Shrink It] was initially just some Photoshop humor, the IBM product manager for SVC accepted this color choice as an RPQ option.
The default color remains Raven Black.
The focus on square footage resulted in higher density. This reminds me of the classic IBM commercial ["The Heist"] where Gil panics that the roomful of servers is missing, and Ned explains that it was all consolidated onto a single IBM server.
I suspect few people picked up on the fact that the acronym for ["new enterprise data center"] spells "Ned", our donut-eating hero in this series of videos.
Costs in the data center are proportional to power usage rather than space.
Power efficiency is more of a behavior problem than it is a technology problem.
This is definitely a step in the right direction. Both servers and storage systems consume a large portion of the energy on the data center floor. IBM Tivoli Usage and Accounting Manager can include energy consumption as part of the chargeback calculations.
However, I have to assume his real question is ... "what is the quick and easy way for me to build a lightweight database app like Microsoft Access that I can distribute as a standalone executable?"
To which I would say, "Lotus has a program called Approach, which is part of Lotus SmartSuite, which some people still use. However, a lot of the focus in IBM now centers around the lightweight Cloudscape database, which IBM acquired from Informix and which is now known as the [open source project called Derby]. Many IBM and Lotus products, such as Lotus Expeditor, use the JDBC connection to Derby, which allows you to use Windows, Linux, Flash, etc., with no vendor lock-in."
I am familiar with Cloudscape, and I evaluated it as a potential database for IBM TotalStorage Productivity Center, when I was the lead architect defining the version 1 release. It runs entirely on Java, which is both a plus and minus. Plus in that it runs anywhere Java runs, but a minus in that it is not optimized for high performance or large scalability. Because of this, we decided instead on using the full commercial DB2 database instead for Productivity Center.
Not to be outdone, my colleagues over at DB2 offered a different alternative, [DB2 Express-C], which runs on a variety of Windows, Linux-x86, and Linux on POWER platforms. It is "free" as in beer, not free as in speech, which means you can download and use it today at no charge, and even ship products with it included, but you are not allowed to modify and distribute altered versions of it, as you can with "free as in speech" open source code, as in the case of Derby above (see the [Apache License 2.0] for details).
As I see it, DB2 Express-C has two key advantages. First, if you like the free version, you can purchase a "support contract" for those that need extra hand-holding, or are using this as part of a commercial business venture. Second, for those who do prefer vendor lock-in, it is easy to upgrade Express-C to the full IBM DB2 database product, so if you are developing a product intended for use with DB2, you can develop it first with DB2 Express-C, and migrate up to the full DB2 commercial version when you are ready.
This is perhaps more information than you probably expected for such a simple question. Meanwhile, I am still trying to figure out MySQL as part of my [OLPC volunteer project].
Well, we had another successful event in Second Life today.
Unlike our April 26 launch of our System Storage products, which was for IBM Business Partners only, this time we decided to use a "Meet the Storage Experts" Q&A Panel format and open up registration to everyone. The subject matter experts sat at the front of the room on four stools. We had six rows of chairs arranged semi-circularly.
Shown above, from left to right, are the avatars of our four experts:
IBM System Storage N series, focusing on recent N3000 disk system announcements
Harold Pike (holding the microphone while speaking)
IBM System Storage DS3000 and DS4000 series, focusing on recent DS3000 disk system announcements
IBM System Storage TS series, focusing on recent TS2230, TS3400 and TS7700 tape system announcements
IBM storage networking, focusing on recent IBM SAN256B director blade announcements
While Eric was a veteran Second Lifer, having presented at our April event, the other three were trained on how to raise their hand, speak into the microphone, sit on the stool, and so on. I want to thank all of our experts for putting in this effort!
The event was produced by Katrina H Smith. She did a great job, and made sure we were on top of all the issues and tasks required to get the job done. Running a Second Life event is every bit as hard as running a real face-to-face event. We had several meetings to discuss venue details, placement of chairs, placement of product demos, audio/video recording, wall decorations, tee-shirt and coffee mug design, logistics, and so on.
I acted as moderator/emcee for the event. That is my back in the picture above. The process was simple, modeled after the "Birds of a Feather" sessions at events like SHARE and the IBM Storage and Storage Networking Symposium. We threw out a list of topics the experts would cover, and people in the audience would "raise their left hand". I, as the moderator, would then walk over to each person and hold out the microphone for them to ask their question. I would then repeat the question and ask the appropriate expert to provide an answer. We defined gestures for how to "raise hand" and "put hand down" that we gave to each registered participant.
We had four dedicated "camera-avatars" in world to capture both video and screenshots. Our video editors are now working to edit "highlight videos" that we can use at future events, for training materials, and for our internal "BlueTube" online video system.
The room was filled with examples of each of our products, made into 3D objects that were dimensionally correct and "textured" with photographs of the actual products. If you clicked on an object, you got a "notecard" that provided more information. Special thanks to Scott Bissmeyer for making all of these objects for us.
We made posters of each expert and placed them in all four corners of the room. On the bottom of each coffee mug was a picture of one of the experts, and if you walked under a poster, you were "dispensed" a coffee mug matching the expert shown in that poster. Participants could "Collect all Four!" When you bring the coffee mug up to take a sip, the picture on the bottom of the mug is exposed for all to see. And as a final give-away to the audience, we made a variety of event tee-shirts and polo-shirts.
At the end of the session, we asked everyone to click on the "Survey" kiosk near the exit door. We asked six simple questions using SurveyMonkey.com that took only a few minutes to answer. We found that asking questions immediately at the end of the event was the best way to capture this feedback.
From a "Green" perspective, we had people registered from the following countries: US, India, Mexico, Australia, United Kingdom, Brazil, Germany, Argentina, Chile, China, Canada, and Venezuela. Second Life allows all these people, who probably could not travel or could not afford the time and expense to travel, to participate in a simulated face-to-face meeting without the energy consumption of traditional travel methods.
More importantly, we got several leads for business. People often ask, "Yes, but is there any business associated with this?" This time, there was: based on the answers to the questions, several avatars asked for a real sales call to follow up on the products and offerings that were discussed.
With such a great success, we have already scheduled our next Second Life event for November 8. Mark your calendars! I'll post more details on the registration process for the November event when available.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
(FTC Disclosure: I work for IBM. This blog post can be considered a "paid celebrity endorsement" of the IBM Z and IBM storage products mentioned below.)
DS8880 R8.3.3 Enhancements
Back in 2015, IBM introduced the [DS8880 models] of the DS8000 family. Sales increased dramatically, in part because IBM re-designed the systems to fit a standard 19-inch rack, rather than the 33-inch wide custom frames used before. Many cloud service providers (CSP) and managed service providers (MSP) require 19-inch standard rack configurations.
To meet client requirements, the newest IBM mainframes, including Z14 model ZR1 and LinuxONE Rockhopper II, are now following the same 19-inch rack size!
IBM DS8880 models now have enhanced support for zHyperlink connections. Clients with existing 6-core DS8884/F or 8-core DS8886/F models can upgrade to add more cores for zHyperlink connectivity.
[Table from the original post: cores per CEC versus maximum zHyperlink connections]
The zHyperlink supports both 40-meter and 150-meter cables. This allows applications like DB2 to read data with substantially lower latency than traditional FICON attachment.
For IBM z/OS clients, the Transparent Cloud Tiering feature allows migration of data directly from DS8000 storage systems to the cloud. This eliminates the need to move data through the IBM Z itself, consuming MIPS and FICON bandwidth, on its way out to a tape or virtual tape system. IBM now offers 10GbE cards for the DS8880, providing faster throughput than the 1GbE cards previously available.
IBM Spectrum Scale v5.0 for IBM Elastic Storage Server
IBM Spectrum Scale v5.0 was available as software last year, and now is available as a Software PID for Elastic Storage Server hardware.
The new version introduces per-drive licensing editions: Data Access edition and Data Management edition. Here are highlights of some of the features:
Enhancements to GUI usability, including managing file systems between ESS and non-ESS storage
Audit File Logging (Data Management Edition only) for Open, Close, Destroy (Delete), Rename, Unlink, Remove Directory, Extended Attribute change, and Access Control List (ACL) change
Enhancements to Active File Management, providing WAN-caching for multi-site deployments
Independent KPMG certification will be done for Spectrum Scale v5.0 on ESS for the "Immutability" feature. Some people refer to this as WORM, Government Compliance, Tamperproof, or Non-Erasable, Non-Rewriteable (NENR) enforcement protection
Enhancements to Transparent Cloud Tiering, providing archive of less-active data to IBM Cloud Object Storage, IBM Cloud, or Amazon S3.
Certification for analytics on both x86 and POWER platforms: Hortonworks Data Platform (HDP) v2.6, and Ambari v2.5
Improved I/O performance for many small and large block size workloads simultaneously, including a 4 MB default block size with variable sub-block size based on block size choice
Spectrum Scale 5.0 is incorporated into "Elastic Storage Server Solution Release 5.3". It is unfortunate the numbering is different. Existing ESS clients can download this new ESS 5.3 code from IBM FixCentral today. Going forward, starting next week or so, new Elastic Storage Servers will ship with ESS solution release 5.3 pre-installed.
The TS4500 tape library supports both TS1100 and LTO tape drives.
This feature supports mixed media in a TS4500 tape library. If you are using Library-Managed Encryption (LME), then IBM Security Key Lifecycle Manager is required as the key manager with LTO drives and cartridges.
GDPR is the IT industry's next "Y2K crisis." Effective May 25, 2018, it ensures that any citizen of the European Union can review, rectify, and even erase any personal data from corporate datacenters. Companies that fail to respond to requests can be heavily fined. See Bob Yelland's quick 13-page guidebook on this, titled [GDPR - How it Works].
His team also developed the Non-Obvious Relationship Awareness (NORA) software for the casinos, combining the records of 15 million customers, 20,000 employees, and 18 different watch lists. If a casino did business with people on certain watch lists, they could be put out of business or heavily fined.
NORA alerts identified 24 active VIP players as known cheaters, 12 employees were active gamblers against company policy, 192 employees had possible relationships with casino vendors, and in seven cases the players were the vendor. One casino discovered they were paying to have one of these cheaters flown to Las Vegas to play at their tables!
(IBM acquired Jeff's company Systems Research and Development (SRD) back in 2005. I had the pleasure of working with Jeff during his 11 year stint at IBM, and participated in his G2 project that was later spun off in 2016 to form his newest company, Senzing. See my 2011 blog post [Storage Innovation Executive Summit] of Jeff's thoughts back then.)
Jeff identifies four challenges in complying with GDPR regulation. Suppose an EU citizen comes to your company and asks just to review all information that you have on them. How would you do that?
So this is Challenge #1: There are lots of places to look. You have a customer database, loyalty club, marketing programs, vendor and supplier databases, and customer service. But wait, the person might have also been an employee! Does your employee database let you search for information on former employees?
Challenge #2 is that the data occurs in variations. Liz Reston could be stored as Elizabeth or Beth. Her last name might have changed from various marriages and divorces. Can you generate all of the variations to search on?
(I know this personally. I am not the only famous "Tony Pearson" out there. There is Tony Pearson, a cricket player in England. There is Tony Pearson, Chief of Staff in the Australian government. And finally, there is 61-year-old "Mr. Universe" Tony Pearson, the "Michael Jackson" of Bodybuilding. Needless to say, women who showed up at my house unannounced looking for him instead were sometimes disappointed!)
Challenge #3 is that existing systems have search limitations. Imagine going to a library that doesn't have a card catalog or computerized index. Instead, you must go floor by floor, row by row, book by book, looking for the information you need.
Human Resources software might only offer search options for name, date of birth or employee serial number. Hotel systems don't let you search by billing or home address.
Small typos can result in incomplete search results. Home addresses, for example, are often written in different ways, suite or apartment numbers may be represented differently as well, and abbreviations may be used to represent fully-qualified names.
What are you going to do, ask the IT department to write custom SQL queries for you? One of the unexpected benefits of Jeff's NORA system was that it could match entities between databases by street address, a trick that normally isn't designed into most applications.
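Here is a toy sketch of the normalization-and-variation idea behind that kind of matching; the nickname table and abbreviation rules are invented for illustration, and NORA's real entity resolution is far richer:

```python
NICKNAMES = {"liz": "elizabeth", "beth": "elizabeth", "bob": "robert"}

def norm_name(name: str) -> str:
    """Reduce a first name to a canonical form so variations compare equal."""
    first = name.lower().split()[0]
    return NICKNAMES.get(first, first)

def norm_address(addr: str) -> str:
    """Strip punctuation and abbreviate common words before comparing."""
    out = addr.lower().replace(".", "").replace(",", " ")
    for word, abbrev in (("street", "st"), ("suite", "ste"), ("apartment", "apt")):
        out = out.replace(word, abbrev)
    return " ".join(out.split())

print(norm_name("Liz Reston") == norm_name("Elizabeth Reston"))      # True
print(norm_address("123 Main Street, Suite 4") ==
      norm_address("123 main st ste 4"))                             # True
```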
Challenge #4 is that not all things that look alike are alike. For example, Liz Reston and her co-dependent husband Bob might [share the same email address].
Family members might have the same home address and phone number. Sons are often named after their fathers, but don't always write "Senior" or "Junior" or "III" at the end of their names.
In other cases, roommates in college, who are not related in any other way, might share the same home address. The same apartment number or home address could be used by different people as the house is sold or apartment is rented from one family to another.
It took Jeff decades to appreciate the results of these entity relationships, and then GDPR happened in 2016. When a citizen asks to review their personal data, which they can after May 25 for free, a company must deliver within 30 days. The person can then ask to rectify certain information, or have it erased altogether.
So what seems like a simple enough question, "What do we know about Liz Reston?", turns out to be challenging to answer for a variety of reasons. Jeff surveyed over 1,000 European companies; here are the results:
Most companies are not ready, and are concerned about their ability to comply with this GDPR regulation.
Companies expect an average of 246 requests per month.
The search will require accessing, on average, 43 different system databases.
Each database search will take seven minutes.
Companies will need to dedicate seven to eight full-time employees to complete these search requests. (The arithmetic checks out: 246 requests × 43 databases × 7 minutes comes to roughly 74,000 minutes, or about 1,230 hours, per month.)
Having access to powerful enterprise-wide "single subject search" discovery tools, however, can also lead to search abuse. For example, a famous celebrity is admitted to a hospital, and suddenly sensitive information is leaked to the tabloids or paparazzi. Someone asks their friend, a police officer, to search the license plate on someone's vehicle. A father searches his corporate database for information on his daughter's new boyfriend.
To address this privacy concern, Jeff suggests a tamper-proof audit log that shows who searched for whom. Where are we going to get technology to do this? We already have it: Blockchain! That's right, the technology that enables Bitcoin to operate without government controls already includes a tamper-proof audit log for transactions.
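You don't need a full blockchain to see the core property: a hash chain already gives tamper evidence. A minimal sketch, with invented field names:

```python
import hashlib, json, time

# Each entry folds the previous entry's hash into its own,
# so editing any past entry breaks the whole chain.
log = []

def record(searcher: str, subject: str):
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": time.time(), "who": searcher, "whom": subject, "prev": prev}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify() -> bool:
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("ts", "who", "whom", "prev")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

record("officer_jones", "license ABC-123")
record("analyst_kim", "Liz Reston")
print(verify())                    # True
log[0]["whom"] = "someone else"    # tamper with history
print(verify())                    # False: the tampering is detected
```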
Jeff's plan for his new company Senzing is to deliver software for different use cases, with APIs for popular programming languages like Java and Python, and a workbench that runs on Windows. He is also considering a "Community Edition" that could be affordable for even the smallest of businesses, with a challenge to the audience to please contribute to this as an open source project.
Last week, IBM clients, Business Partners and executives got together for the inaugural IBM [Think 2018] conference. There were over 30,000 attendees.
In an age of exponentially more data, connected devices and computing power, there are more ways for attackers to breach an organization than ever before. Teams are challenged to manage these threats as they deal with too many disparate tools from too many vendors, an enormous security and IT skills shortage, and a growing number of compliance mandates.
Marc van Zadelhoff, General Manager, IBM Security, kicked off the session "Ready For Anything: Build a Cyber Resilient Organization". The year 2017 was a tough year for security. People can relate to the number of security breaches that happened.
Why do companies struggle in this area? It is not just because hackers have become more sophisticated. IBM Security has over 8,000 security experts to help clients, and when IBM is called in, we find that 90 percent of companies lack basic fundamentals such as firewall rules and patch management. It takes companies an average of 200 days to detect a breach. Sadly, 77 percent do not have a response plan for after the breach happens.
To help with this, IBM has come up with new terminology. At a certain point, [the shit hits the fan], a Canadian phrase meaning "messy consequences are brought about by a previously secret situation becoming public." Marc explained that it often is accompanied by FBI agents showing up at the front door.
Marc referred to this event as "the Boom". All of the preparation and prevention happen "left of Boom". The clean-up, salvaging your brand reputation, and remediating the damage was called "right of Boom". Here are some examples of a Boom event:
Compromised Cloud app
Left of Boom is our domain of choice. We are surrounded by familiar security and IT problems, problems we have studied our entire careers, involving daily activities we complete with a sense of certainty.
Right of Boom is a completely different matter. Others get involved, including Legal, HR, and sometimes even the Board of Directors. These are distant, hazy problems that don't occur every day, bringing more uncertainty.
The Boom is not the initial breach, but when the breach becomes public, an average of 200 days later. Hackers can do quite a lot of damage during these 200 days. What might have started as phishing emails, might continue with access to sensitive databases, stolen credentials to other servers, access to internal networks, and additional compromises.
Likewise, companies should not expect to clean up the mess in just a few days either. IT forensics are used to determine the scope of the breach. Regulators and auditors are notified, press conferences and legal depositions are scheduled to address the public concerns, and social media sentiment might fall.
Back in 2016, [IBM acquired Resilient] a security software company. Ted Julian, IBM VP Product Management and Co-Founder of Resilient, performed a live demo of this software. Basically, it is a dashboard that automates gathering incident data, determines the tasks required, and then orchestrates appropriate responses. This allows the security administrator to launch remediation directly in context.
Last year, over 1,400 customers took advantage of IBM's security breach simulator lab, the IBM X-Force Command Center. On the right side of the Boom, time matters. What might take 90 minutes manually can be done in two minutes with the IBM Resilient dashboard and the right amount of practice and training.
Next on stage were Wendi Whitmore, IBM Security Services, and Mike Errity, Vice President, IBM Resiliency Services. While Wendi's team handles the situation from afar, Mike's team lives in the data center. Mike explained Recovery Time Objective (RTO) and Recovery Point Objective (RPO), which apply to recovery after a cyberattack much as they do to disaster recovery after a hurricane: RTO is how quickly you must be back up and running, and RPO is how much data you can afford to lose.
Wendi noted that executives need visibility into what is going on after a breach, and should have retainers in place with PR firms and other industry experts who can be called on short notice, as needed, right of Boom.
Richard Puckett, Vice President Security Operations, Strategy and Architecture, at Thomson Reuters, was the final speaker. Richard spent the first six months of his job uplifting the security protocols at Thomson Reuters. They partnered with IBM to build up their talent for their Security Operation Center (SOC).
Threats are asymmetric. Unlike traditional physical threats from mobs of people, or trucks parked at the front door, cyber threats go undetected, and once they are detected, it can be difficult to identify the perpetrator. Richard suggests that good security requires good management. Patch management is not the sexiest topic, but it is critical. Don't focus on shiny new objects; focus instead on fixing weak passwords and poor patch management procedures.
In the struggle to keep up, organizations are not doing a good job of mastering the security fundamentals. IBM believes that with the right approach, technologies and experts, our clients can fight back. IBM can deliver security and resiliency at the scale and speed necessary to protect businesses against the challenges of today, and tomorrow.
While Sal Khan was a hedge fund manager in Northern California, he was also a math tutor to his cousin Nadia over the Internet in the evenings. This extended to 15 other family members. In November 2006, Sal started to record his teachings on a YouTube channel. His cousins liked the YouTube recordings better, as they could go at their own pace.
In 2007, Sal realized that many people who were not family members were watching his educational videos on YouTube. Sal quit his job and set up [Khan Academy] as a non-profit organization. Unfortunately, the donations he received from students and parents were not enough to cover his monthly expenses, though he did receive a generous $10,000 donation from a parent who used the site with her kids.
Word got around. Bill Gates from Microsoft mentioned Khan Academy in an on-stage interview. Mr. Gates admired Sal's wife for letting him quit his job to pursue his interests.
(Later, Mr. Gates invited Sal to visit the Microsoft campus in Seattle, WA, asking him "What could Khan Academy achieve if you had more resources?" A question folks in public education, or the IT industry for that matter, rarely hear! )
By Fall 2010, the Gates Foundation, Google, [and other supporters] had made this a fully funded organization, and Sal was able to hire engineers and educators.
Sal gave an interesting analogy. Imagine building a house: the first step is to pour the concrete foundation, instructing the builders to "do what you can in two weeks". The inspection reveals problems, but you go ahead and build the first floor with the same approach, "do what you can in two weeks", then the second floor. Eventually, the house collapses.
Sal organized Khan Academy similar to [Kung Fu belt colors], where students advance on mastery, rather than the way traditional American schools group students by age and promote them lock-step, regardless of readiness. Many students have gaps, and being moved to the next grade just creates more gaps. The solution is to fill the gaps in a timely manner.
Sal gave three inspiring stories of some of his students:
Charlie dropped out of high school his freshman year. When he came back to school, he was put in remedial math and science classes. Charlie was able to catch up using Khan Academy, graduated as his high school's valedictorian, and went on to major in Computer Science at Princeton. Hearing this testimonial, Sal offered him an internship during his junior year at Princeton. Charlie is now fully employed at Khan Academy.
Some engineers from Silicon Valley went to Mongolia to set up computer labs for kids in an orphanage. One orphan, Zaya, sent an [email with video] to Sal about how much she appreciated learning through Khan Academy. Zaya is now 19 years old, and one of the top contributors to Khan Academy in the Mongolian language, helping to educate her own people.
Seven years ago, a girl named Sultana was living in Afghanistan when the Taliban took over her town and physically prevented girls from attending school. Sultana had Internet access at home and taught herself English. She asked her uncle to bring back any English reading materials he could find. He brought back a Time magazine with an article on Khan Academy.
Around her ten hours of household chores every day, Sultana taught herself math, chemistry, biology and physics using Khan Academy. She illegally crossed into Pakistan, a dangerous 30-hour journey, just to take the SAT exam, and did surprisingly well.
Nicholas Kristof of the New York Times wrote an article, [Meet Sultana, the Taliban's worst fear]. Sultana was granted asylum in the United States, and is now doing research with a top physicist at MIT.
But how effective is Khan Academy overall? Working with the College Board, Sal was able to run efficacy studies. A study of 250,000 students using Khan Academy for just 20 hours of PSAT/SAT prep showed 100 percent extra gain. A similar study of 10,500 students in Idaho found 80 percent extra gain. In Brazil, a 7,000-student study found that one hour of Khan Academy per week resulted in 30 percent more learning.
The videos on Khan Academy favor being simple and authentic over high production value. The software and equipment used to make the first videos cost only a few hundred dollars, and the cost works out to just 30 US cents per hour of learning.
Today, the free online learning resources cover preschool through early college education, including K-12 math, grammar, biology, chemistry, physics, economics, finance, history, and SAT prep. Khan Academy also provides teachers with tools and data so they can help their students develop the skills, habits, and mindsets they need to succeed in school and beyond.
The concept scales well. Khan Academy has over 150 employees, plus another 14,000 volunteers helping with translations. Over 59 million students have registered across 190 countries. Every year, about 300,000 people send in donations, and the website has had over 1.4 billion views.
Sal finished his talk with a thought experiment: Go back 400 years to Western Europe, a time when only about 10 percent of men, and 5 percent of women, could read. If you asked someone back then what percentage of people could be taught to read, they would estimate only 20 to 30 percent.
Today we know that nearly 100 percent of people can be taught to read. However, if you ask people today what percentage of people could become a software engineer, start a business, or write a novel, most respond with only one to five percent.
IBM Watson is also helping out in the area of education. Register today at [Teacher Advisor]!
This week, IBM clients, Business Partners and executives get together for the inaugural IBM [Think 2018] conference. There are over 30,000 attendees.
This is a combination of last year's three events: Edge, InterConnect, and World of Watson (WoW). The combined event is divided into four "campuses":
Cloud and Data -- formerly covered at InterConnect
Modern Infrastructure -- formerly covered at Edge
Business and AI -- formerly covered at World of Watson
Security and Resiliency -- formerly covered across all three events
(I am not in Las Vegas! In my first post in this series, [Science Slam], I forgot to mention that I was not physically there, and have since been flooded with invitations and requests for one-on-one meetings with clients and cocktail parties. Sorry folks! I am in Tucson writing these blog posts by watching the live stream videos of the event.)
Putting Smart to Work
Ginni Rometty, IBM Chairman, President and CEO, kicked off the event. In the opening video, we realize that "smart" is just a placeholder, translated to "Putting Cloud to Work", "Putting AI to work", and so on.
An "interesting moment" that happens every 25 years, when business and technology change at the same time. Those who learn exponentially are disruptors, not victims of disruption.
[Moore's law]: Double the number of transistors on a chip every 18-24 months.
[Metcalfe's law]: The value of a network is related to the square of the number of nodes involved.
[Watson's law]: Ginni would like to coin this new law to refer to exponential learning from data using Artificial Intelligence (AI).
How much of the world's data is searchable? Only about 20 percent. The other 80 percent is proprietary data that provides competitive advantage. IBM is helping clients become "incumbent disruptors".
Ginni covered three inflection points: your business, society, and IBM itself.
Companies must go on the offense, leverage multiple digital platforms (plural), and empower people by enabling "man+machine" learning in every process they have. What are better decisions worth? Over $2 trillion US dollars!
Man+Machine beats man-alone and machine-alone. At [Credit Mutuel], a leading European bank, Watson technology is used to answer 60 percent of customer emails, and 95 percent of the employees there are happier for it.
IT technology represents both the greatest opportunity and the biggest issue of our time.
Trust and responsibility: We must be data stewards, with a focus on privacy and security. Only 4 percent of data is encrypted.
Jobs and skills: Man+Machine augments man alone. 100 percent of jobs will change. Ginni coined the term "new collar jobs" a few years ago.
Inclusion: IBM is a leader in this area, with 400,000 employees spanning all races, genders, and sexual orientations. IBM received the [Catalyst award] for companies making real change for women in the workplace; it is the only tech company ever to receive it, and this is the fourth time IBM has been honored.
IBM has revamped its own HR with [Workday]. In 2016, Workday signed a 7-year deal with IBM to use the IBM Cloud for its platform, and IBM in turn has switched its HR to Workday applications.
Mainframe technologies and POWER9 are now on the IBM Cloud. IBM is also expanding IBM Cloud Private to include "IBM Cloud Private for Data".
IBM has completed 16,000 Watson engagements to date. Watson Oncology is now in 150 hospitals, analyzing 13 different types of cancer.
The big system Watson used to play Jeopardy! in 2011 has been broken down into micro-services and APIs that are more easily consumable by applications.
IBM and Apple have announced Watson integration. Apple [CoreML] connects natively to Watson, and IBM can now work directly with Apple Swift code. The new "Watson Studio" allows you to develop AI models in the cloud, then deploy them on-premises.
IBM will also offer "Watson Assistant". In the past, buying Watson was like buying a puppy: you needed to train it yourself. If you wanted a vicious guard dog, or a seeing-eye dog, that was up to you. Watson Assistant, in contrast, comes pre-trained.
Secure to the core
IBM is obsessed with security and trust, from Blockchain to Pervasive Encryption.
In the past, IBM often tried to do this all on its own, but in today's business climate, IBM now has strategic partnerships in these many areas.
Lowell McAdam, Chairman and Chief Executive Officer of Verizon Communications, was the first guest speaker.
In April 2017, Verizon launched Oath, formed from the company's acquisitions of AOL and Yahoo, which houses more than 50 digital and technology brands that together engage more than 1 billion people worldwide.
(I personally have been working with Verizon for decades, back when they were just NYNEX, Bell Atlantic, and GTE, before they acquired Vodafone's wireless stake, MCI, AOL and Yahoo! I use Flickr, one of the Yahoo brands.)
The name "Oath" came from the promise to customers: to give them what they want, when they want it.
Verizon is the largest fiber provider in the USA, with enough fiber on hand to stretch to Mars.
They invest $18 billion per year, but the payoff often does not come for another 5 years. [5G Wireless network technology] is an example. Lowell feels that 5G will usher in the "fourth" industrial revolution:
Speeds over 1 Gbps for consumers, and 25 Gbps for commercial use, compared to the 10 Mbps typical today.
5G will support 1,000 more devices per cell site, enabling IoT applications like intelligent lighting, video surveillance, and face recognition.
5G has much lower latency: 1 msec to the cell site and back, compared to 200 msec today. This shorter latency will enable Augmented Reality and Virtual Reality (AR/VR).
5G also reduces battery consumption, imagine only charging your cell phone once per month!
Verizon delivers value three ways:
Provide connectivity only; Verizon will continue to do this for some markets.
Like IBM, Verizon promises it will not use customer data in any manner that the customer did not "opt in" for. Business is based on trust, and businesses that lose trust have a difficult time regaining it.
Shipping, Supply Chain and Global Trade
Michael J. White manages the Global Trade Digitization organization for Maersk. He was recently named CEO-designate of the IBM-Maersk Joint Venture.
Shipping products is a $4 trillion US dollar business. As much as 80 percent of what we consume came over the ocean. On average, 20 percent of the shipping cost is administrative paperwork; in some cases, the administrative costs exceed the physical transport costs.
Over the last 5 years, the industry has grown at a 3.7 percent compound annual growth rate (CAGR). This is expected to increase to 4 percent as economies bounce back. Many companies run lean, expecting their supply chains to provide supplies "just in time".
Unfortunately, shipping is hugely inefficient and paper-based, which impedes the growth of trade. Take, for example, the shipment of a container of avocados from Kenya to the Netherlands: 30 entities involved, over 100 individuals, over 200 transactions.
Why did the IBM-Maersk joint venture pick blockchain? Blockchain is not a solution searching for a problem here: the problems are well known, and blockchain addresses them. Smart contracts and decentralized authority provide immutable trust, critical in an industry where many parties do not know each other.
The IBM-Maersk joint venture was formed over the past 18 months to create the world's best global trading platform. There are 25 companies on-boarding now, and another 40 companies have expressed interest in joining soon.
Unlike the anonymity of Bitcoin, which enables terrorists and murderers-for-hire, IBM is focused on transparency, so that all parties can identify each other.
Blockchain benefits all the key parties involved. Carriers benefit, customers benefit, and ports and terminals get information earlier upstream, allowing better planning during peak periods and better utilization of available resources.
(Not everyone benefits - counterfeiters and corrupt government officials will not be happy with Blockchain used in this manner!)
Paperless transactions reduce the re-keying of information by 80 percent. Less re-keying means fewer mistakes and fewer typos.
This new global trade platform offers opportunities in adjacent blockchain networks for financial services, insurance, and food safety. To ensure food safety, blockchain is used by Walmart, Kroger, Unilever and 20 others. One third of the food grown today is wasted.
Dave McKay, President & Chief Executive Officer, Royal Bank of Canada (RBC), was the next speaker. Dave graduated from the University of Waterloo and is a COBOL programmer at heart. RBC still uses COBOL programs in its banking applications!
RBC is the top bank in Canada, and would be the #5 bank if it were based in the USA. It celebrates its 150th anniversary in 2019. It has earned the highest customer satisfaction ratings for multiple years running, has 13 million customers, and is Canada's #1 broker/dealer for investment banking.
Back in the 1980s, banks were only open 10am-3pm, and treated it as a privilege for clients to work with the bank. Account holders came in several times per week, and relationships were built with local branches. Today, account holders are not coming into branch offices; they use ATMs and mobile phones instead.
In the past, consumers used their RBC Credit Cards, and this provided brand recognition for RBC. Today, traditional banking services are being embedded into other value chains. With Apple Wallet, for example, you enter your RBC credit card once, and then nobody knows which bank you are using to pay for coffee.
Like any bank, RBC is focused on three areas: moving money, storing money, and lending money. AI is needed to turn these transactions into knowledge, providing business value and insight. However, RBC had only 40 applied and pure data-science researchers on staff. This was deemed not enough, so RBC partnered with IBM.
Cloud provides the compute power and speed needed; RBC has 60 apps in development on the IBM Cloud. While Silicon Valley start-ups might "let the app fail fast in the hands of clients", that approach doesn't work with money transactions.
RBC has invested heavily in blockchain, which will transform how it works with others. Digital transformation is not just technology, but also cultural change. Is RBC in the mortgage business, or the "housing enablement" business? Is it in the car loan business, or the "transportation enablement" business?
Small businesses want to focus on their own clients, not bookkeeping and accounting. RBC has deployed AI in the Cloud to create the Advisor's Virtual Assistant [AVA] application, which saw over 48 million interactions in its first four months!
RBC is also investing $500 million this year to build the IT skills of their employees.
RBC is also focused on the stewardship of data. The strength and trust of financial institutions is core to a strong economy. RBC policies are based on "opt in", to provide value relevant to both clients and the bank. Banks that breach that trust will struggle.
Ginni (and the rest of the company) has re-invented IBM to achieve exponential change. The change impacts all industries, not just the three we saw on the stage during this keynote session.
To follow along with the rest of Think2018 conference, watch the live stream on [www.ibm.com/events/think/watch] or follow the twitter hashtag #Think2018
This week, IBM clients, Business Partners and executives get together for the new IBM [Think 2018] conference. This is a combination of last year's three events: Edge, InterConnect, and World of Watson (WoW).
(The theme this week is "Putting smart to work." Some might feel that this is a grammatically incorrect use of the adjective [smart], referring to having quick-witted intelligence or being neat and well-dressed. Many words in the English language have multiple meanings and uses; the word smart is also a noun, referring to business acumen, technical skills, or "a sharp stinging pain".)
The keynote session today was "Science Slam: Unveiling 5 Breakthrough Technologies That Will Change the World!" by Arvind Krishna, IBM Research Director. IBM has over 3,000 researchers, in 12 labs, across six continents.
This talk was based on IBM's annual five-in-five, five predictions that might change the world in the next five years. For amusement, read my 10-year-old blog post [Five in five for 2008], including predictions for smart thermostats that can be controlled remotely, and self-driving cars.
("Science Slam" is IBM Research version of [Pecha Kucha], but instead of art students having 20 minutes to show 20 PowerPoint slides, each IBM research scientist has 5-7 minutes to explain the research project they are exploring. These are done both internally, as well as to audiences outside the company.)
Jamie Garcia served as emcee, introducing each of the five experts. Each spent 5-7 minutes, Science Slam style, on what projects they were working on.
1. Crypto-anchors and blockchain technology
‘Everything you don’t understand about money combined with everything you don’t understand about computers’ [25-minute video]
Andreas Kind presented first. Blockchain is not just the provenance system that enables Bitcoin and other cryptocurrencies; it can be applied to other goods as well.
(The best layman's explanation of blockchain and cryptocurrencies I have seen was John Oliver's humorous take on his HBO show [Last Week Tonight]!)
Counterfeit goods, from cinnamon to footwear to medicine and automotive parts, are estimated at over $1.8 trillion US dollars. IBM is working on using blockchain for other purposes, such as restoring trust in the global supply chain, and hopes to cut the volume of counterfeit goods in half or more.
Andreas explained tamper-proof technologies called "crypto-anchors" -- from indelible ink on pharmaceuticals to computers smaller than a grain of salt -- that can be used to track products as they travel from one country to the next.
2. Lattice Cryptography and Fully Homomorphic Encryption
Cecilia Boschini from IBM Zurich presented next. As quantum computers get more powerful, the basic math involving prime numbers that most current encryption models are based on becomes vulnerable.
(Don't worry, she assured the audience, hackers would need a 1000-qubit quantum computer to break today's encryption codes, and those don't exist yet!)
What we need are post-quantum, or quantum-resistant, mathematical models. Lattice cryptography is based on math problems that remain hard to solve, making it difficult for hackers to break the code even when armed with quantum computers.
Another challenge with existing encrypted data is that we must decrypt it before performing computations on it. Fully Homomorphic Encryption, or [FHE] for short, allows computations to be done on data while it remains encrypted. For example, if I had a list of names with encrypted credit card or social security numbers, I could sort the list without decrypting any of the data.
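As a toy illustration (this shows only a partial homomorphism, not full FHE): textbook RSA encrypts a message m as E(m) = m^e mod n, so multiplying two ciphertexts gives E(m1) x E(m2) = (m1 x m2)^e mod n = E(m1 x m2). Anyone can multiply the two hidden values without ever decrypting them. FHE extends this idea to support both addition and multiplication on ciphertexts, which is enough to compute any function on encrypted data.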
3. AI-enabled robotic microscopes to monitor ocean water
Tom Zimmerman is known as IBM Almaden's [MacGyver], able to use common technologies in new and innovative ways.
By 2025, over half of the world's population will be living in water-stressed locations. IBM is working on robotic microscopes that can be deployed across the oceans, connected to the Cloud, monitoring the state of plankton.
Why plankton? Plankton produce two-thirds of all the oxygen we breathe, and serve as the "baby food" for all oceanic species. Tom has re-purposed the "face recognition" capability of smartphone cameras to recognize plankton, identifying what they are doing and eating.
Monitoring plankton provides an "early warning system", the proverbial [canaries in the coal mine] for impending water problems.
4. Eliminating Bias from Artificial Intelligence (AI)
Information overload! Overwhelmed by too much information, our brains cope by looking only for differences, or by focusing on what we are already familiar with, confirming our existing beliefs.
Not enough meaning! Lacking complete information, our brains fill the gaps and connect the dots, finding patterns that aren't patterns at all. Racism, prejudice, and stereotypes are examples of this.
The need to act fast! Survival sometimes demands acting fast, to avoid being eaten by an animal, for example. Unfortunately, our brains favor the quick and simple over the more important but delayed, distant or complicated response.
What should we remember? We decide what to remember and what to forget. Our brains often favor generalities over specifics, as they take up less space, and the details we do remember are often edited or reinforced after the fact.
IBM is collaborating with the Massachusetts Institute of Technology [MIT] to reduce bias in Artificial Intelligence by rating different AI models on fairness.
The AI models that will win in the future are those where the biases are tamed or eliminated altogether.
5. Quantum Computing
Talia Gershon was the last speaker.
Many problems become exponentially more difficult to solve with classical computers. For example, simulating protein molecular bonding gets more difficult the larger the molecules are, because you have more electron interactions.
Quantum computers run at a temperature of 15 millikelvin (mK), nearly 460 degrees Fahrenheit below zero. The computation unit is called a [Qubit]; a 5-qubit quantum computer can solve problems that your laptop can also solve classically. IBM now has "IBM Q" with 50-qubit computers available.
The IT industry is still in the early stages, but the IBM Quantum Information Software Kit (QISKit) allows programmers to experiment and develop algorithms for this new computational model.
Over the next five years, IBM predicts that Quantum Computing will transition from the lab, to the mainstream, to solve problems that were previously too difficult or time-consuming to solve.
Back then, IBM allowed its employees the option to run Windows, Linux or Mac OS. Since then, dual-boot Windows/Linux configurations, like the one I had on my Thinkpad T410, proved too difficult for our help desk to support, so they are no longer allowed.
In 2015, I received my new Thinkpad T440p to replace the old T410 model. For the 20 to 25 percent of the IBM employee population that manage, support and connect directly to client networks, IBM required Linux encrypted with LUKS, running Windows as KVM guests when needed for specific applications. This is more secure than running Windows natively, preventing viruses and other malware from spreading between IBM and its clients.
As I am occasionally asked to help out our colleagues in lab services or with critical situations, I decided to set up my laptop to match, just in case. RHEL is rock solid, and running Windows as a KVM guest could not be easier. Not having to worry about Windows viruses while travelling on business is a huge benefit as well.
Upgrading from RHEL 6.1 all the way up to RHEL 6.9 was simply a push of a button: all the new applications and the kernel get installed, followed by a quick reboot. The migration from RHEL 6.9 to RHEL 7.4, however, was a major undertaking.
In past migrations, I moved from a working laptop to a second laptop, allowing me to stay fully productive on the old machine until I was ready to cut over. In this case, I performed a fresh install on my existing machine. To avoid problems and delays, I wrote myself an 8-page, 17-step migration plan to capture all the tasks needed to minimize the impact on my productivity.
(Of course, IBM has a help desk. You hand over your laptop; they back up the home directory, wipe the system clean, do a fresh install, restore your home directory, and return the laptop 3-5 days later, leaving the rest of the tasks up to you. Basically, this would merely replace the first three of my 17 steps below. I did not feel like burdening our help desk, nor waiting 3-5 days without a laptop!)
Here were my steps:
Backup my existing system
In addition to backing up all my individual files to the Cloud, I also used [Clonezilla] to create a full image backup of my 500GB drive to an external USB drive.
Not all data is in file form. I also exported my browser bookmarks, so that I could import them back later, and ran an "rpm -qa" to get a list of the applications I had installed.
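The package inventory is a one-liner. A minimal sketch (the /mnt/usb mount point for the external drive is my assumption):

    # Save a sorted list of installed packages, to compare against the
    # fresh RHEL 7.4 install later (external drive mounted at /mnt/usb)
    rpm -qa | sort > /mnt/usb/rpm-list-rhel69.txt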
Initially, I thought to format the 4TB external drive in UDF format, which is readable by Windows, Linux and Mac OS and supports files larger than 4GB.
Not knowing whether [ExFAT] or Universal Disk Format [UDF] would serve me better, I split the 4TB into two 1.9TB partitions, formatting one as ExFAT and the other as UDF. Both formats support files greater than 4GB in size, which I have, but I discovered that on the older RHEL 6.9 release, based on a 2.6 Linux kernel, you can only write 68GB of data to a UDF partition. This is fixed in later kernels, but that doesn't help on my existing RHEL 6.9 release.
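Hypothetically, splitting and formatting the drive might look like the sketch below (the /dev/sdb device name is an assumption; verify with lsblk first, as these commands destroy all data on the drive):

    # Create a GPT label and two equal partitions on the 4TB drive
    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart primary 0% 50%
    parted -s /dev/sdb mkpart primary 50% 100%

    # Format one partition as ExFAT (exfat-utils) and one as UDF (udftools)
    mkfs.exfat /dev/sdb1
    mkudffs --media-type=hd /dev/sdb2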
Fortunately, the latest Clonezilla LiveCD chops the cloned image into files small enough to write to a variety of formats, and has a newer kernel that allows writing the full capacity of a UDF partition.
In a crisis, I can restore back to RHEL 6.9 within 2 hours. This was my "relief valve" if I encountered any major delays and had to go travel for business on short notice.
Fresh install of RHEL 7.4 Linux
This completely wipes the drive clean and creates two partitions: a tiny "/boot" partition needed to boot the system, and the remaining drive capacity as a large LUKS-encrypted LVM, internally divided between "/" and "swap" logical volumes.
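For the curious, the layout is roughly equivalent to the following manual commands; this is a sketch only, since the installer does all of this for you, and the /dev/sda names and 8GB swap size are my assumptions:

    # /dev/sda1 becomes /boot; /dev/sda2 holds everything else, encrypted
    cryptsetup luksFormat /dev/sda2            # encrypt the main partition
    cryptsetup luksOpen /dev/sda2 luks-root    # unlock as /dev/mapper/luks-root

    # Build LVM on top of the encrypted device
    pvcreate /dev/mapper/luks-root
    vgcreate rhel /dev/mapper/luks-root
    lvcreate -L 8G -n swap rhel                # swap logical volume
    lvcreate -l 100%FREE -n root rhel          # "/" gets the rest

    mkfs.xfs /dev/rhel/root                    # RHEL 7 defaults to XFS
    mkswap /dev/rhel/swap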
Copy all of my files back
The challenge is that some restored files might clobber the configurations of the newly installed applications. For this reason, I created /home/tpearson/RHEL69 and put everything there, so that I could move files to their correct locations as appropriate.
Copying all the files back in this manner eliminated having to stay tethered to the external USB drive.
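A minimal sketch of the staging copy, again assuming the external drive is mounted at /mnt/usb:

    # Restore everything into a staging directory, preserving permissions
    # and timestamps, so nothing clobbers the fresh RHEL 7.4 dot-files
    mkdir -p /home/tpearson/RHEL69
    rsync -avh /mnt/usb/home/tpearson/ /home/tpearson/RHEL69/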
Setup LAN connectivity
I have to connect to IBM and guest systems, so this configuration is important. This includes the EAP, TLS and VPN configurations. I thought I could just re-use the certificates I had for RHEL 6.9, but no, I had to create and register fresh new certificates for the RHEL 7.4 release.
Configure Cinnamon Desktop
RHEL 7.4 uses GNOME 3 by default, which is quite different from the GNOME 2 used in the RHEL 6.9 release. I don't care for it, so I configured the [Cinnamon desktop] instead. Many people who use Linux Mint or Ubuntu will be familiar with it, and for those switching from Windows or RHEL 6.9 Linux, Cinnamon has a familiar "Start" button in the lower left corner.
By default, our RHEL 7.4 image comes with Firefox and Chrome browsers, so all I needed to do was import the bookmarks that I had exported in step 1 above.
Configure KVM guests
I was able to bring over my Windows 7 Kernel-based Virtual Machine [KVM] guest from RHEL 6.9 and run it without problems, but it was bloated, consuming nearly 60GB of space. Therefore, I decided to create fresh Windows 7 and Windows 10 guest images instead.
As with Linux, I had written down which applications I had installed on Windows, and used that list to configure the Windows guests. Nearly everything I do runs natively on Linux, but I do use Microsoft Office (Powerpoint, Excel, Word) and a nice tool called [CutePDF] that allows me to print to PDF instead of an actual printer.
Windows10 comes with the "Print-to-PDF" feature built-in, so no need for CutePDF on that one.
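Creating a fresh guest is a single virt-install command. A sketch, assuming the installation ISO has already been downloaded (the guest name, sizes, and ISO path below are illustrative):

    # Create a Windows 10 KVM guest with a 40GB qcow2 disk
    virt-install \
      --name win10 \
      --memory 4096 \
      --vcpus 2 \
      --disk size=40,format=qcow2 \
      --cdrom /var/lib/libvirt/images/Win10.iso \
      --os-variant win10 \
      --graphics spice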
Configure IBM Notes, Sametime and Gnote
IBM is a heavy user of [IBM Notes] (formerly called Lotus Notes), not just for email but also for its document management and database capabilities. Sametime is our "Instant Messenger" app. [Gnote] is a Linux-based tool for storing short notes; I use it for all of my email templates for quick copy-and-paste responses.
IBM recently made using printers super easy. Print to the common "Cloud printer", and then pick up your print-outs from any printer in the building, any IBM building, worldwide. I could print in Tucson, for example, and pick up my print-outs when I am in the IBM buildings in Austin, Texas!
I also had to configure my printer at home, for those days where I need to print a boarding pass or quick document.
Configure File Sharing
IBM has deployed IBM [Spectrum Scale] internally as a company-wide file sharing service called "Global Storage Architecture" (GSA). Configuration for me just meant finding my local cell (tucgsa) for Tucson and entering my credentials.
Install Docker and DSX Desktop
[DSX Desktop] is the local laptop version of IBM's cloud-based [Data Science Experience], allowing me to perform Hadoop and Spark analytics for the various projects I work on. It runs as a Docker container, so I had to configure Docker as well.
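Getting Docker itself going on RHEL 7 is straightforward. A sketch of the generic setup (the DSX container image is internal, so I show a hello-world smoke test rather than the actual image name):

    # Install Docker from the RHEL 7 extras repo and start it at boot
    sudo yum install -y docker
    sudo systemctl enable docker
    sudo systemctl start docker

    # Smoke test: pull and run a trivial container
    sudo docker run --rm hello-world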
Install Multimedia Codecs
One of the big detractors of Linux, compared to Windows or Mac OS, is the lack of out-of-the-box multimedia support. Linux distros like Red Hat don't ship with codecs pre-installed, leaving this as an exercise for the end user.
IBM produces a lot of audio and video files, including replays of conference calls and webinars for internal training. I keep a collection of different audio and video files to verify that everything is configured correctly for proper playback.
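As a rough sketch, codec installation usually means adding a third-party repository and pulling in the GStreamer plugin packages (the repo choice and package names are assumptions that vary by release; verify against your distro's documentation):

    # Typical codec packages from third-party repos such as EPEL or Nux Dextop
    sudo yum install -y gstreamer1-plugins-good gstreamer1-plugins-ugly gstreamer1-libav

    # Quick playback test with one of my sample files
    totem ~/testfiles/sample.mp4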
Install GIMP and other software
The GNU Image Manipulation Program [GIMP] is a great tool for quick editing of graphics. Another tool, Inkscape, is designed for vector graphics.
Configure file-level backup
In addition to doing full-volume image backups with Clonezilla, I back up individual files, which are sent over the IBM internal network to a central server. All I needed to do was point to my previous backup set and create the appropriate include/exclude list.
Many employees might just back up their home directory, but I customize a lot of the Linux configuration, so I back up several additional directories as well, along the lines of the sketch below.
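To give the flavor of it, here is an illustrative include/exclude list, assuming an IBM Spectrum Protect (TSM) backup-archive client; the directory choices are examples, not a definitive list:

    * Illustrative TSM inclexcl statements (processed bottom-up)
    exclude.dir /home/*/.cache
    include /etc/.../*
    include /usr/local/.../*
    include /home/.../*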
Configure Grub2 boot configuration
RHEL 7.4 supports [Grub2], which can boot ISO files directly. I like to add Clonezilla and [SystemRescueCD] as boot options. These were simple enough to add: copy the ISO files to the /boot directory and create a menuentry for each.
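For illustration, a Clonezilla entry might look like the sketch below, appended to /etc/grub.d/40_custom and followed by "grub2-mkconfig -o /boot/grub2/grub.cfg" to regenerate the menu. The ISO filename, the (hd0,1) /boot partition reference, and the kernel paths inside the ISO are assumptions that vary by Clonezilla release:

    menuentry "Clonezilla Live (ISO)" {
        set isofile="/clonezilla-live.iso"
        loopback loop (hd0,1)$isofile
        linux (loop)/live/vmlinuz boot=live components findiso=$isofile
        initrd (loop)/live/initrd.img
    }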
Validate final configuration
After eight days, I finally completed all these steps and was able to validate that everything works correctly. I ran some sample workflows, such as:
Verify that I can launch Windows KVM guest, edit Powerpoint presentation, and print to PDF file.
Verify that I can open email, launch embedded URL links, and copy-and-paste templates from Gnote
Launch GIMP, verify that I can edit graphics, and import the results in a Powerpoint presentation.
Download and play a Webinar replay MP4 file
Fresh Clone of full volume image
Using the Clonezilla entry that I added to the Grub2 boot menu, I was able to back up my full 500GB drive. I will keep the RHEL 6.9 image for a few weeks as an emergency backup, but so far, everything seems to be working just fine.
This took longer than I expected, but I am happy with the final result. Red Hat is rock-solid, and the new RHEL 7.4 allows me to run DSX Desktop, Windows 10, and other applications that were not available on our previous RHEL 6.9 build.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
Everyone is getting ready for next week's "Think 2018" event, so these might get missed under all the excitement.
IBM Spectrum Archive Enterprise Edition V1.2.6
IBM [Spectrum Archive] Enterprise Edition supports Linear Tape File System (LTFS) cartridges as part of a larger IBM Spectrum Scale deployment. Version 1.2.6 provides features to help transition from old technology to new technology, at the library, drive and cartridge level. It also adds support for "Little Endian" mode for IBM Power servers.
Tape library replacement procedure
Tape intermixing in pool for technology upgrade
Support for LTO 8 Media on LTO 8 drives
Support for Power Systems in Little Endian (LE) mode
IBM Copy Services Manager [CSM] was formerly known as Tivoli Storage Productivity Center for Replication. It manages copy services like FlashCopy and remote mirroring for the DS8000, Spectrum Virtualize family, and Spectrum Accelerate family of products. Version 6.2.2 adds some nice features:
Support for scheduled tasks against Copy Services Manager sessions
Support to create DS8000 system diagnostics from the Copy Services Manager GUI and CLI for issue resolution
New SNMP event and email notifications for any detected path failures
Ability to enable embedded Easy Tier heat map transfer to support full Copy Services Manager session configuration, including practice volumes
Next week, I will not be in Las Vegas for Think 2018. If you won't be there either, you might consider watching some of the livestream videos at [www.ibm.com/events/think/watch] starting March 19, 2018.
Many of you have seen the Storage announcements that were made last month on February 20. I gave you the skinny on the context of the technology shift, and some resources to go deeper, in my blog post [IBM Storage Announcements for February 2018].
So, there’s a lot going on in IBM Storage right now. I’m looking forward to the upcoming IBM Systems Technical University in Orlando, Florida, from April 30 to May 4, 2018.
TechU’s are my favorite events to attend. This is a true event for techies! You get hands-on labs, demos, technical sessions, and birds of a feather (BOF) sessions and open technology discussions.
There are over 200 sessions on IBM Storage. I have the honor of sharing the latest in storage technology and strategy. Here are the topics I am scheduled to present:
IBM hybrid cloud storage solutions
Managing risks with data footprint reduction
Information lifecycle management: Why archive is different than backup
The seven tiers of business continuity and disaster recovery
Introduction to IBM Cloud Object Storage System (powered by Cleversafe)
The pendulum swings back: Understanding Converged and Hyperconverged Systems
Reporting and monitoring: How to verify your storage is being used efficiently