This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years with IBM Systems, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI Market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Kevin's perspective focused on the evolution of "information science" over the past 100 years, in six chapters: sensing, memory, processing, logic, connecting, and architecture. He covered the technology from IBM punched cards and core memory to the latest optical chips and the DeepQA technology in IBM Watson.
Steve's perspective was on IBM as a corporation, and how IBM and other corporations have evolved over the past century. In the late 19th century and early 20th century, "Internationals" had their headquarters in the United States, and regional sales and distribution offices elsewhere. The mid-20th century gave rise to "Multinationals" that invested more heavily in regional headquarters scattered across the globe. Today, in the 21st century, IBM and its clients are [Globally Integrated Enterprises] that move work to the lowest costs, best skills, and most attractive business climates.
Jeffrey M. O'Brien
Jeffrey M. O'Brien has been a senior editor at [Fortune] and [Wired] magazines, and his work has appeared in The Best of Technology Writing, The Best American Science and Nature Writing, and The Best American Science Writing.
Jeffrey's perspective is on the impact technology has on humanity, organized into five steps towards progress: Seeing, Mapping, Understanding, Believing, and Acting. These steps have been around long before IBM, and Jeffrey is able to draw parallels to such efforts as Lewis & Clark mapping out the Louisiana Purchase, advancements in genetically modified foods, and the thousands of IBMers required to land a man on the moon.
This afternoon, everyone at the IBM Tucson site will be getting together to celebrate IBM's Centennial!
Well, it's Tuesday, in the United States at least, and you know what that means... IBM Announcements! I am actually down under in Sydney, Australia, and it is Wednesday already as I write this. I feel like a time traveler.
IBM announces its latest disk system, the [IBM System Storage DCS3700], designed for high-performance computing (HPC), business analytics, video broadcasting, and other sequential workloads. The "DCS" stands for Deep Computing Storage. IBM already has the DCS9900 for large enterprise deployments, so the smaller DCS3700 is targeted at midrange deployments.
In a compact 4U package, the DCS3700 packs dual active-active controllers and up to 60 disk drives. The controller drawer can support two additional expansion drawers, of 60 drives each in 4U drawers, for a maximum total of 180 drives in 12U of rack space. Packed with "green" 7200RPM energy-efficient 2TB drives, a system can have up to a 360TB raw capacity. The system supports RAID levels 0, 1, 3, 5, 6, and 10.
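Those figures are easy to verify; here is the simple arithmetic, using only the numbers from the announcement above:

```python
# DCS3700 maximum configuration, using the figures from the announcement
DRIVES_PER_DRAWER = 60     # controller drawer or expansion drawer
DRAWERS = 3                # one controller drawer + two expansion drawers
RACK_UNITS_PER_DRAWER = 4
DRIVE_TB = 2               # "green" 7200RPM 2TB drives

drives = DRIVES_PER_DRAWER * DRAWERS          # 180 drives
print(drives, "drives,", drives * DRIVE_TB, "TB raw, in",
      DRAWERS * RACK_UNITS_PER_DRAWER, "U of rack space")
# 180 drives, 360 TB raw, in 12 U of rack space
```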
The system comes with the latest 6Gbps SAS connections for host attachment, but you can choose 8Gbps Fibre Channel Protocol (FCP) instead, allowing the DCS3700 to be managed by SVC or Storwize V7000.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
(Yes, OK, it's actually Thursday. I wrote this post weeks ago, but was embargoed until Jan 10, and then was asked to wait until Jan 12 so that the IBM Marketing team could translate my text into 15 different languages.)
This week, the IBM DS8000 team announces a new High Performance Flash Enclosure (HPFE-Gen2) and a series of All-Flash Array DS8880F models that exploit this new technology.
New High Performance Flash Enclosure (HPFE-Gen2)
The original HPFE was 1U high with 16 or 30 flash cards, and could support RAID-5 or RAID-10. Most used RAID-5, resulting in four array sites of 6+P each, leaving two cards for spare. These 1.8-inch cards were only 400 or 800 GB in size, so the maximum raw capacity was only 24TB per 1U enclosure.
The new HPFE-Gen2 enclosure is a complete re-design, consisting of two Microbays and two TeraPacks. The I/O Bays attach to the Microbays via PCIe Gen3. The Microbays in turn attach to both TeraPacks via redundant 6 Gb or 12 Gb SAS.
Each TeraPack holds 24 flash cards. Since the TeraPacks come in pairs, you can install 16, 32 or 48 flash cards per enclosure. Each 16-card set represents two array sites, for a maximum of six array sites per HPFE-Gen2. For a fully populated 48-card enclosure, the supported layouts are listed below (a small arithmetic check follows the list):
RAID-5 for 400/800 GB. Two 6+P arrays, four 7+P arrays, and two spares.
RAID-6 for 400/800/1600/3200 GB. Two 5+P+Q arrays, four 6+P+Q arrays, and two spares.
RAID-10 for 400/800/1600/3200 GB. Two 3+3 arrays, four 4+4 arrays, and four spares.
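Here is a quick sketch verifying that each layout accounts for all 48 cards, and estimating usable capacity for a given card size. The card-size parameter and the arithmetic are mine, not an IBM sizing tool:

```python
# Check that each HPFE-Gen2 layout accounts for all 48 flash cards,
# and estimate usable capacity for a given card size (in TB).
LAYOUTS = {
    # name: (list of (data_cards, protection_cards) per array, spare cards)
    "RAID-5":  ([(6, 1)] * 2 + [(7, 1)] * 4, 2),
    "RAID-6":  ([(5, 2)] * 2 + [(6, 2)] * 4, 2),
    "RAID-10": ([(3, 3)] * 2 + [(4, 4)] * 4, 4),
}

def summarize(card_tb=0.4):  # 400 GB cards by default
    for name, (arrays, spares) in LAYOUTS.items():
        total = sum(d + p for d, p in arrays) + spares
        usable = sum(d for d, _ in arrays) * card_tb
        print(f"{name}: {total} cards, ~{usable:.1f} TB usable")

summarize()
# RAID-5: 48 cards, ~16.0 TB usable
# RAID-6: 48 cards, ~13.6 TB usable
# RAID-10: 48 cards, ~8.8 TB usable
```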
(Technically, these new "Flash cards" are 2.5-inch Solid State Drives (SSD) placed into the HPFE-Gen2 and connected over the PCIe Gen3 interface, with 50 percent additional over-provisioned capacity to tolerate up to 10 drive-writes-per-day (DWPD). IBM will continue to call them "Flash Cards" for naming consistency between the two generations of HPFE.)
The new HPFE-Gen2 enclosures are substantially faster, offering up to 90 percent more IOPS, and up to 268 percent more throughput (GB/sec). The Microbays use a new flash-optimized ASIC to perform the RAID calculations.
New All-Flash Array DS8880F models
IBM introduces the DS8884F, DS8886F and DS8888F that are based entirely on the HPFE-Gen2 enclosures described above.
DS8884 - Hybrid - HDD/SSD/HPFE mix
DS8886 - Hybrid - HDD/SSD/HPFE mix
DS8888 - AFA - HPFE only
DS8884F - AFA - HPFE-Gen2 only
DS8886F - AFA - HPFE-Gen2 only
DS8888F - AFA - HPFE-Gen2 only
New zHyperLink connection
Also, as a "Statement of Direction", IBM intends to deliver field upgradable support for zHyperLink on existing IBM System Storage DS8880 machines for connection to z System servers. zHyperLink is a short-distance, mainframe-attach link designed for lower latency than High Performance FICON.
Typical latency with FICON/zHPF is around 140-170 microseconds, and this new zHyperLink is estimated to reduce that to 20-30 microseconds, roughly a five- to eight-fold improvement, but it is limited to a 150-meter fiber optic cable distance. zHyperLink is intended to speed up DB2® for z/OS® transaction processing and improve active log throughput.
Well, it's Wednesday, and you know what that means... IBM Announcements!
(Actually most IBM announcements are on Tuesdays, but IBM gave me extra time to recover from my trip to Europe!)
Today, IBM announced [IBM PureSystems], a new family of expert-integrated systems that combine storage, servers, networking, and software, based on IBM's decades of experience in the IT industry. You can register for the [Launch Event] today (April 11) at 2pm EDT, and download the companion "Integrated Expertise" event app for Apple, Android or Blackberry smartphones.
(If you are thinking, "Hey, wait a minute, hasn't this been done before?" you are not alone. Yes, IBM introduced the System/360 back in 1964, and the AS/400 back in 1988, so today's announcement is right on schedule for this 24-year cycle. Based on IBM's past success in this area, others have followed, most recently, Oracle, HP and Cisco.)
Initially, there are two offerings:
IBM PureFlex™ System
IBM PureFlex is like IaaS-in-a-box, allowing you to manage the system as a pool of virtual resources. It can be used for private cloud deployments, hybrid cloud deployments, or by service providers to offer public cloud solutions. IBM drinks its own champagne, and will have no problem integrating these into its [IBM SmartCloud] offerings.
To simplify ordering, the IBM PureFlex comes in three tee-shirt sizes: Express, Standard and Enterprise.
IBM PureFlex is based on a 10U-high, 19-inch wide, standard rack-mountable chassis that holds 14 bays, organized in a 7-by-2 matrix. Unlike BladeCenter, where blades are inserted vertically, the IBM PureFlex nodes are horizontal. Some of the nodes take up a single bay (half-wide), while a few are full-wide, taking up two bays across the full 19-inch width of the chassis. Compute and storage snap in the front, while power supplies, fans, and networking snap in the back. You can fit up to four chassis in a standard 42U rack.
Unlike competitive offerings, IBM does not limit you to x86 architectures. Both x86 and POWER-based compute nodes can be mixed into a single chassis. Out of the box, the IBM PureFlex supports four operating systems (AIX, IBM i, Linux and Windows), four server hypervisors (Hyper-V, Linux KVM, PowerVM, and VMware), and two storage hypervisors (SAN Volume Controller and Storwize V7000).
There are a variety of storage options for this. IBM will offer SSD and HDD inside the compute nodes themselves, direct-attached storage nodes, and an integrated version of the Storwize V7000 disk system. Of course, every IBM System Storage product is supported as external storage. Since Storwize V7000 and SAN Volume Controller support external virtualization, many non-IBM devices will be supported automatically as well.
Networking is also optimized, with options for 10Gb and 40Gb Ethernet/FCoE, 40Gb and 56Gb Infiniband, 8Gbps and 16Gbps Fibre Channel. Much of the networking traffic can be handled within the chassis, to minimize traffic on external switches and directors.
For management, IBM offers the Flex System Manager, which allows you to manage all the resources from a single pane of glass. The goal is to greatly simplify the IT lifecycle experience of procurement, installation, deployment and maintenance.
IBM PureApplication™ System
IBM PureApplication is like PaaS-in-a-box. Based on the IBM PureFlex infrastructure, the IBM PureApplication adds additional software layers focused on transactional web, business logic, and database workloads. Initially, it will offer two platforms: a Linux platform based on x86 processors, Linux KVM and Red Hat Enterprise Linux (RHEL); and a UNIX platform based on POWER7 processors, PowerVM and the AIX operating system. It will be offered in four tee-shirt sizes (small, medium, large and extra large).
In addition to having IBM middleware like DB2 and WebSphere optimized for this platform, over 600 companies will announce this week that they will support and participate in the IBM PureSystems ecosystem as well. Already, there are 150 "Patterns of Expertise" ready to deploy from the IBM PureSystems Centre, a kind of "data center app store", borrowing an idea used today with smartphones.
By packaging applications in this manner, workloads can easily shift between private, hybrid and public clouds.
If you are unhappy with the inflexibility of your VCE Vblock, HP Integrity, or Oracle ExaLogic, talk to your local IBM Business Partner or Sales Representative. We might be able to buy your boat anchor off your hands, as part of an IBM PureSystems sale, with an attractive IBM Global Financing plan.
IBM Senior Certified Executive IT Architect
Well, it's Tuesday again, and you know what that means? IBM Announcements!
(Note from Lloyd: This is my first post on this blog! )
IBM recently announced version 2.0 of the IBM Storage Enabler for Containers solution. This version adds support for IBM Spectrum Scale for dynamic provisioning of storage for stateful containers, alongside IBM block storage devices like the DS8000 family and systems running IBM Spectrum Virtualize and IBM Spectrum Accelerate. IBM Storage Enabler for Containers allows IBM storage systems to be used as persistent volumes for stateful applications running in IBM Cloud Private clusters or Kubernetes clusters. To learn more, read [IBM Storage Solutions for IBM Cloud Private Blueprint].
IBM Storage Enabler for Containers v2.0 extends IBM Spectrum Connect v3.6 (for IBM block storage) and IBM Spectrum Scale (for file storage) to Kubernetes-orchestrated container environments. It currently supports dynamic storage provisioning with either block storage or IBM Spectrum Scale within a single cluster, using the Ubiquity/FlexVolume solution developed by IBM and contributed to the open source community. Refer to the IBM Storage Enabler for Containers Release Notes for the supported operating systems. Once IBM Storage Enabler for Containers is installed and an IBM Spectrum Scale file system is mounted on the hosts supporting the Kubernetes pods, Kubernetes or IBM Cloud Private clusters can consume the file system for stateful container applications.
The IBM Storage Enabler for Containers enables Kubernetes dynamic provisioning, creating and deleting volumes on IBM storage systems, in place of the host-path-only approach originally supported by Kubernetes for IBM block or file storage systems. For details about volume provisioning with Kubernetes, refer to [Kubernetes Concepts: Volumes]. In addition, the IBM Storage Enabler for Containers utilizes the full set of Kubernetes FlexVolume APIs for volume operations on a host, including initiation, attach/detach, and mount/unmount.
The IBM Spectrum Scale file system or file systems must already exist and be mounted on the physical or virtual hosts supporting the Kubernetes pods/containers being deployed. Once mounted, container/pod deployments, via either Helm charts or Kubernetes deployments, can gain access to dynamically provisioned storage from Spectrum Scale for stateful containers, all in an automated manner through the IBM Spectrum Scale API and the IBM Storage Enabler for Containers.
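To make that concrete, here is a minimal sketch of what dynamic provisioning looks like from the consuming side, using the official Kubernetes Python client. The StorageClass name below is a hypothetical placeholder; the actual class names come from your Storage Enabler for Containers configuration:

```python
# Minimal sketch: request a dynamically provisioned volume by creating a
# PersistentVolumeClaim against a StorageClass backed by the IBM Storage
# Enabler for Containers. "ibm-spectrum-scale-gold" is a placeholder name.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],  # Spectrum Scale is a shared file system
        storage_class_name="ibm-spectrum-scale-gold",  # placeholder
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim("default", pvc)
# The provisioner creates the backing storage and binds it to the claim;
# a pod then mounts the claim to hold stateful application data.
```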
In his post on Rough Type titled ["McKinsey surveys the new software landscape"], Nick Carr discusses the growing acceptance in the marketplace for Software-as-a-Service, or SaaS. He summarizes the results of McKinsey's recent [Enterprise Software Customer Survey 2008]. IBM is already well established as part of the Web 2.0 Big "5" (the other four are Google, Yahoo, Amazon and Microsoft), so it may not be much surprise that it introduced some new offerings focused on this emerging market.
For managed hosting, [IBM Managed Storage Services] has been extended to support archive data through its entire lifecycle: supporting access, migration, non-erasable non-rewriteable (NENR) protection, and expiration/destruction. This offering supports locating the storage on the customer premises, a hosting center, or an IBM Service Delivery Center. IBM's blended disk and tape approach provides a better alignment between information value and storage costs.
Last December IBM acquired Arsenal Digital, which offers a remote "Enterprise Email Archive" service, supporting retention policies that can apply per user, per group, or even per message, as needed. This service provides fast user access to email archives, as well as e-discovery search. The search covers not just the email body text, but over 370 different attachment types as well. Deduplication technology is used to reduce the actual amount of storage needed by 80 percent. All of this comes with the security and comfort of knowing that these email archives are encrypted and protected in a disaster-recovery-class datacenter managed by IBM. Blocks and Files presents their thoughts on this in the article ["IBM storing data and mail in the cloud"].
The Radicati Group has published some interesting statistics about email archiving in [Volume 4, Issue 3]. Here's an excerpt:
"In 2007, a typical corporate email account receives about 18 MB of data per day. This number is expected to grow to over 28 MB by 2011. Today, there is no way to effectively manage these messages, but with the help of an archiving solution.
Today, the worldwide percentage of corporate mailboxes protected by archiving solutions is estimated to be around 14%, however it is growing at a fast pace, and is expected to reach over 70% by 2011.
A survey of 102 corporate organizations worldwide, showed that 68% of large businesses view compliance as their top security concern in 2007."
For those who are actually providing these services to others over the cloud, you might want to use the new [IBM System x iDataPlex]. Compared to traditional server environments, the iDataPlex provides five times the computing power by doubling the number of servers per rack, but with 40 percent less energy consumption. Thanks to clever cooling technology, the system can run in standard office "room temperature" environments. You can customize with a mix of compute, network and storage nodes to meet your application requirements. In addition to Web 2.0 and SaaS workloads, the iDataPlex can be useful for financial risk analysis, high-performance computing, and even batch processing.
Whether you are looking to contract out for SaaS, or to provide a service to others over the cloud, IBM can help!
Christopher Vollmar, IBM Storage Architect, supporting customers in Canada and the Caribbean
Well, it's Tuesday again, and you know what that means? IBM Announcements!
IBM recently announced two things for the IBM FlashSystem A9000/A9000R in the release of software version 12.3.1: the first introduces "intelligent capacity management", and the second is scalability for the smallest A9000R model. Let's start with intelligent capacity management for deduplication, which provides high-precision estimates for:
Capacity optimization using reported reclaimable capacity, per volume
Capacity chargeback using the attributable fair-share capacity, per volume
From its initial release, FlashSystem A9000 supported always-on, cross-volume deduplication, in addition to other data reduction capabilities, such as compression and the special processing for well-known patterns. Deduplication implies that the same stored data may belong to more than one volume. With deduplication, reporting the amount of data stored for a particular volume will have different answers for different uses:
Reclaimable capacity: the amount of physical capacity that will be freed when a volume is deleted or migrated to another array. For example, if two identical volumes exist, deleting one of them will not reclaim any capacity. When many volumes share many and variable extents of data, knowing the reclaimable capacity for a volume requires intelligent software to overcome the computational challenge.
Attributed capacity: the amount of physical capacity that can be fairly attributed to a volume, most typically for chargeback purposes. For example, if two identical volumes exist, it would be fair to attribute half the capacity to the chargeback of each volume. When many volumes share many and variable extents of data, knowing the attributed capacity for a particular volume likewise requires intelligent software (see the toy sketch after this list).
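The distinction is easy to see in a toy model where each deduplicated block records which volumes reference it. This is only an illustration of the two definitions; the real system uses estimation algorithms, as described below:

```python
# Toy illustration of reclaimable vs. attributed capacity with deduplication.
# Each stored block records its size (GB) and the volumes referencing it.
blocks = [
    {"size": 100, "volumes": {"vol1", "vol2"}},  # identical data on both
    {"size": 40,  "volumes": {"vol1"}},          # unique to vol1
    {"size": 60,  "volumes": {"vol2"}},          # unique to vol2
]

def reclaimable(vol):
    """Capacity freed if vol were deleted: blocks referenced only by vol."""
    return sum(b["size"] for b in blocks if b["volumes"] == {vol})

def attributed(vol):
    """Fair-share capacity: each block split evenly among its volumes."""
    return sum(b["size"] / len(b["volumes"]) for b in blocks if vol in b["volumes"])

for v in ("vol1", "vol2"):
    print(v, "reclaimable:", reclaimable(v), "GB, attributed:", attributed(v), "GB")
# vol1 reclaimable: 40 GB, attributed: 90.0 GB
# vol2 reclaimable: 60 GB, attributed: 110.0 GB
```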
Both FlashSystem A9000/A9000R software version 12.3.1 and Hyper-Scale Manager version 5.5.1 (or later versions) are required to produce the reclaimable and attributed capacity information. The array provides metadata information, and the Hyper-Scale Manager uses patented IBM research algorithms to compute high-accuracy estimates with specified error margins, without performance impact.
To produce these reports, IBM Hyper-Scale Manager will require incrementally more memory on the hosting virtual machine or physical host. Find more information about the system requirements in the [Hyper-Scale Manager release notes].
Last year, IBM introduced the lowest entry-point configuration of the A9000R, using a single flash enclosure. With 12.3.1, IBM FlashSystem A9000/A9000R enhances this offering with support for scaling out that configuration. Clients using this configuration will be able to non-disruptively add flash enclosures and grid controllers to scale out system capacity and performance.
IBM Systems Technical University in Atlanta is just around the corner! This event will have 24 sessions on the IBM FlashSystem A9000/A9000R systems; here's a snapshot of them:
FlashSystem A9000 thrashes SQL, Oracle and Hadoop workloads: Client examples
Deep dive into best practices for host and hypervisor tuning for IBM FlashSystem A9000
Secrets of A9000 scripting and advanced customizations
SVC/A9000 implementation best practices
Managing the IBM FlashSystem A9000/R with the IBM Hyper-Scale Manager
Business resiliency for the DS8000 and FlashSystem A9000/R using IBM Copy Services Manager
IBM FlashSystem A9000/R high availability and disaster recovery with the multi-site solution
Demo-ing FlashSystem A9000/R Provisioning, Monitoring and Troubleshooting with Hyper-Scale Manager
NDA: Use IBM Hyper-Scale Manager to present & educate FlashSystem A9000/R
NDA: 6 FlashSystem A9000/R Management prototypes: and your pick is?
Make the most of FlashSystem A9000/R - tips, tricks and secrets
Meet the A9000 and A9000R development experts: Open discussion about pros and cons
The full customer experience of IBM FlashSystem A9000/R: Daily work, upgrades, integrations, support
Reducing TCO while managing your IBM FlashSystem A9000/R
IBM FlashSystem A9000/R: Consistent performance, flexibility and efficiency built for the cloud
IBM FlashSystem A9000/R customers' stories
Deep dive into FlashSystem A9000 and A9000R HyperSwap
The blazing I/O trail across the grid - FlashSystem A9000 data path
Deep dive into FlashSystem A9000 and A9000R multi-site high availability and disaster recovery
Intelligent capacity management for FlashSystem A9000/R always-on data reduction
FlashSystem A9000/R data reduction under the hood
FlashSystem A9000 and A9000R roadmap - NDA
FlashSystem A9000 and A9000R for private clouds and MSPs
Fast Start - What's new with FlashSystem A9000 and A9000R
For more details on the TechU event in Atlanta, GA (USA), April 29-May 3, visit [ibm.biz/Atlanta2019]. The three of us plan to be there! Stop by and say hello.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
IBM announced a new product, IBM Spectrum Protect Plus. To understand why, I will need to discuss a bit of history related to Data Protection.
(FCC Disclosure: I work for IBM. This blog post can be considered a "paid celebrity endorsement" for IBM Spectrum Protect, IBM Spectrum Protect Snapshot, IBM Spectrum Protect for Virtual Environments, and IBM Spectrum Copy Data Management products. I was not paid in any manner to promote Geoffrey Moore's book mentioned below.)
IBM Spectrum Protect was originally developed as the Workstation Data Save Facility (WDSF) back in the 1980s, when Personal Computers were just getting deployed.
I started in 1986 developing mainframe software, so we all had bulky 3270 terminals. When our area was offered 120 PCs to replace them, I was tasked with determining how to roll these out, 24 at a time, over five months.
My job was to determine who would get a PC in the first round, the second round, and so on. I handed out a simple one-page survey, asking everyone basic questions: Are you familiar with Personal Computers? Do you have one at home? Are you comfortable using a mouse? My plan was to give PCs to those most familiar with them in the earlier rounds, and to those less familiar in later rounds.
However, it was my final question that sealed the deal:
How soon do you want a PC to replace your 3270 terminal?
[ ]Immediately [ ]Next month [ ]No Hurry [ ]Put me last [ ]Never!
Surprisingly, I had roughly 24 folks choosing each option on this last question, which made my decision process easy for me!
(In his book Crossing the Chasm, fellow author Geoffrey Moore would come up with similar groups: Innovators, Early Adopters, Early Majority, Late Majority, and Laggards. This is a great book and I highly recommend it!)
Of course, we used WDSF to back up the files. WDSF would later morph into DFDSM, then ADSM, then TSM, and now it is called IBM Spectrum Protect.
Over the decades, the product has evolved from just backing up data on personal computers. IBM Spectrum Protect can now protect all kinds of machines, from tablets, mobile devices, and smartphones, to virtual machines, databases, and application servers in the data center.
Besides creating backup versions of files, IBM Spectrum Protect can also migrate older, less frequently used files to less expensive media, as well as archive files for long-term retention.
Different files can be assigned to different "management classes" that determine the policies to be applied and enforced on the backup, migration and archive copies. For backups, this includes how many versions to keep while the file exists, how many versions to keep after the original file is deleted, and how long to keep those inactive versions.
Instead of a grandfather-father-son [backup tape rotation], full-plus-incremental, or full-plus-differential scheme employed by other backup software, IBM Spectrum Protect has a unique "Incremental-Forever" approach that reduces backup time, LAN bandwidth requirements, and backup storage media.
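Here is a highly simplified sketch of the incremental-forever idea: after the first pass, only new or changed files are sent, and older copies become inactive versions pruned by policy. The version-retention parameter is illustrative, not an actual Spectrum Protect setting:

```python
# Toy sketch of "incremental forever": only new or changed files are sent,
# and older copies are kept as inactive versions up to a policy limit.
import os
from collections import defaultdict

VERSIONS_TO_KEEP = 3          # illustrative stand-in for a management class rule
catalog = defaultdict(list)   # path -> list of (mtime, size), newest last

def incremental_backup(root):
    sent = 0
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue                            # skip unreadable files
            stamp = (st.st_mtime, st.st_size)
            versions = catalog[path]
            if not versions or versions[-1] != stamp:
                versions.append(stamp)              # "send" the changed file
                del versions[:-VERSIONS_TO_KEEP]    # prune inactive versions
                sent += 1
    return sent

print(incremental_backup("/tmp"), "files sent on the first pass")
print(incremental_backup("/tmp"), "files sent on an unchanged second pass")  # 0
```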
While most companies still back up to tape, IBM Spectrum Protect can back up to flash, disk, tape, virtual and physical tape libraries, object storage, and even to public Cloud Service Providers such as IBM Bluemix, Amazon S3, and Microsoft Azure.
IBM Spectrum Protect offers both client-side and server-side data footprint reduction technologies, including compression and deduplication, eliminating the need for expensive, single-purpose data deduplication devices like Dell-EMC Data Domain.
IBM Spectrum Protect is recognized as a leader in Data Protection software, able to scale up to meet the demands of the largest enterprises. However, the parameters and options that IBM Spectrum Protect has acquired over time have been compared to the cockpit or flight deck of an airplane!
For clients with Virtual Machines, IBM offered three solutions:
IBM Spectrum Protect Snapshot
Formerly called Tivoli Storage FlashCopy Manager (FCM), [IBM Spectrum Protect Snapshot] takes frequent, near-instant, non-disruptive, application-aware backups and restores for SAP, Oracle and Db2. It can also be used for VMware using advanced snapshot technology, on both IBM and non-IBM storage systems.
IBM Spectrum Protect Snapshot can be used as a stand-alone product, or integrated with IBM Spectrum Protect to move the snapshots and FlashCopy targets to other storage media.
IBM Spectrum Protect for Virtual Environments (VE)
Formerly called IBM Tivoli Storage Manager for Virtual Environments, [IBM Spectrum Protect VE] protects both VMware and Microsoft Hyper-V virtual machines.
IBM Spectrum Protect VE safely moves backup workloads to a centralized IBM Spectrum Protect server and enables administrators to create backup policies or restore virtual machines with just a few clicks. It allows you to protect data without a traditional backup window.
IBM Spectrum Copy Data Management
IBM Spectrum Copy Data Management makes copies available to DBAs, Developers and VM administrators when and where they need them. While this product is focused on DevOps and Dev/Test workflows, it can also be used to automate and schedule snapshots that can serve as backups.
Surprisingly, many companies do not take advantage of these solutions. Even clients who already have IBM Spectrum Protect deployed either (a) simply use Spectrum Protect clients on individual VM guests, or (b) use third-party products to back up VMs outside the Spectrum Protect infrastructure.
"Problems cannot be solved with the same mind set that created them."
-- Albert Einstein
Smaller clients want something simpler to deploy, and easier to use and administer. Rather than simplify the products above, a process called "kneecapping" in the IT industry, IBM opted for a clean slate, [start-from-scratch] approach.
The result is IBM Spectrum Protect Plus, new software announced as a preview last Wednesday, in time for this week's VMworld 2017 conference in Las Vegas and next month's VMworld conference in Barcelona, Spain.
IBM Spectrum Protect Plus is available as either a stand-alone product, or integrated with IBM Spectrum Protect for long-term protection. It is focused exclusively on VMware and Hyper-V environments. General Availability is expected some time in 4Q 2017.
Key features include:
Simple to install in less than 15 minutes, configured in an hour
Easy to use by DBA, VM or application administrator. No IBM Spectrum Protect skills required for stand-alone deployment
Pre-defined Gold, Silver and Bronze policies are ready to use. Additional customized policies can be configured as needed
Supports both application-aware and crash-consistent methods
Data Footprint Reduction technologies including compression and deduplication
Instant data recovery to support DevOps, Dev/Test, Reporting, Analytics and Training
Granular search and restore of entire Virtual Machines, VMDKs, and individual files
As for the name, I would have preferred "IBM Spectrum Protect Basic Edition". The "Plus" implies that the new product is more advanced, or offers more features, than the existing Spectrum Protect editions.
Today, January 16, IBM launches its latest disk system, the DS3000 series.
There are actually three products in the DS3000 series:
The DS3200 is a 2U-high, 12-drive system that attaches to servers via a 3Gbps Serial Attached SCSI (SAS) interface. You can expand this to 48 drives by adding EXP3000 expansion units. Here are the DS3200 specifications.
The DS3400 is a 2U-high, 12-drive system that attaches to servers via a 4Gbps Fibre Channel (FC) interface. You can expand this to 48 drives by adding EXP3000 expansion units. Here are the DS3400 specifications.
The EXP3000 is a 2U-high, 12-drive expansion drawer. It was announced back in August 2006, but is part of the overall DS3000 series. It can be used directly with servers, but is also designed to be attached to the back of the DS3200 or DS3400 to increase capacity. Here are the EXP3000 specifications.
With this announcement, IBM provides entry-level storage at the "less-than-$5000" price point, with support for intermixing 10K and 15K RPM drives, scalable up to 14.4 TB of capacity. This would be ideal storage for HP, Dell, IBM System x and BladeCenter servers.
This week, I am attending the [InterConnect Conference] in Las Vegas, Feb 21-25, 2016. This is IBM's premier Cloud & Mobile conference for the year.
On Sunday, I attended a series of sessions from IBM Research about their latest research areas.
7110A Future Directions in Enterprise Mobile Computing
Gabi Zodik (IBM) presented. Mobile and wearables are transforming all industries. Enabling technologies are required to support the new computing models that are cognitive in nature. Real-time proactive decisions can be made based on the mobile context of a user. Driven by the huge amounts of data produced by mobile devices, the next wave in computing will need to exploit data and computing at the edge of the network.
Future mobile apps will have to be cognitive to "understand" user intentions based on all the available interactions and unstructured data. A new distributed programming paradigm is emerging to meet these needs, which has to deal with massive amounts of data and devices. While the compute and storage capacity on individual devices is small, collectively they exceed all of the servers and storage in Cloud datacenters.
7107A Wearables in the Enterprise
Asaf Adi (IBM) presented. Wearable technology is booming. It is only our imagination that will limit the number of industrial, military, consumer and healthcare applications for this new emerging technology. Wearables are transforming industries and professions, enabling new business opportunities. From a show of hands, half the audience was wearing smart technology already.
In one example, he focused on the construction industry. In the USA alone, there are thousands of workplace injuries, costing $190 billion. Wearable technologies can be incorporated into anything from a hardhat to a bright orange vest. In a steel mill, heat stress can be determined from ambient temperature and an employee's heart rate. Over time, we will have multiple wearables communicating with each other.
In another example, he was able to make a hand gesture (waving his hand in front of his smartphone), and use that to generate a code fragment that software developers can use to detect that particular hand gesture in any application.
Wearables cannot assume they are always connected to the Cloud. Take for example mining, where miners are deep below the ground. Technology to ensure safety needs to work regardless of connectivity.
Privacy is also a big concern. Wearables should not be used by employers to monitor every movement and activity of the employees.
7152A Cognitive IoT -- Today, Tomorrow and Beyond
Alessandro Curioni (IBM) presented. Today's sensors aren't up to the task of unlocking the complex links between people, places and things. To reach the next level, we need technologies that enable them to gather and integrate data from many sources, to reason over that data, and to learn from it. IBM calls this the Cognitive Internet of Things (IoT).
We already know IoT data can be used to predict maintenance needs, but what if it can also help designers engineer more reliable products from scratch? In addition, with advancements in nanotechnology and machine learning, we can bring the power of cognitive to the edge, where the data is collected. Imagine tiny edge computers providing Watson services on every sensor.
It is estimated that we have 13 billion IoT sensors today, and that this will more than double to 29 billion by year 2020. This introduces new security threats, new levels of employee engagement, and fundamental shifts in business models.
Sadly, 88 percent of all IoT data is dark, meaning that it is not collected or processed for analysis. While the IT industry has done amazing things with the other 12 percent, we realize that current programming techniques are too limited.
That is why cognitive is needed to unleash the value of the data. IBM Watson offers excellent capabilities, including Natural Language Processing (NLP), Machine Learning (ML), Image/Video analytics, and Text Analytics.
Manufacturers like Whirlpool are investigating use of IoT for home appliances, like refrigerators, washers and dryers. This is just the beginning, other industries including Healthcare, Retail, Oil, Mining and Farming will also benefit.
7108A Blockchain and the Future of Finance
Ramesh Gopinath (IBM) presented. Transferring products and funds today is inefficient, expensive, and vulnerable. Blockchain is an emerging fabric for transaction services. It has the potential to radically transform multi-party business networks, enabling significant cost and risk reduction and innovative new business models.
About 18 months ago, the "Blockchain" concept was not ready for business. Since then, the Linux Foundation has launched the "Hyperledger" project, with 17 founding companies.
Imagine a company in China or India exporting a product to a company in the USA. There may be 10 or so companies or agencies involved, including multiple banks, port authorities, trucking companies, etc. To hand off the equipment and ensure all parties are paid, some 30 different paper documents may be needed. Each company maintains its own set of records, and all the middlemen take their cut.
Blockchain represents a digitally-signed, encrypted, immutable "ledger" that records all of the steps related to a particular transaction. Since each new block includes a checksum that chains it to all of the previous blocks, tampering and fraud are prevented. All parties have access to the same ledger, eliminating discrepancies between different repositories of records.
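That tamper-evidence comes from simple hash chaining, which a few lines of Python can illustrate (a toy ledger for intuition, not the Hyperledger implementation):

```python
# Toy hash chain: each block embeds the previous block's hash, so altering
# any earlier record invalidates every hash that follows.
import hashlib, json

def make_block(data, prev_hash):
    body = {"data": data, "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(
        {"data": data, "prev": prev_hash}, sort_keys=True).encode()).hexdigest()
    return body

def verify(chain):
    for i, b in enumerate(chain):
        expected = hashlib.sha256(json.dumps(
            {"data": b["data"], "prev": b["prev"]}, sort_keys=True).encode()).hexdigest()
        if b["hash"] != expected or (i > 0 and b["prev"] != chain[i - 1]["hash"]):
            return False
    return True

chain = [make_block("ship goods", "0")]
chain.append(make_block("customs cleared", chain[-1]["hash"]))
chain.append(make_block("payment released", chain[-1]["hash"]))
print(verify(chain))                   # True
chain[0]["data"] = "ship fewer goods"  # tamper with history...
print(verify(chain))                   # False: the chain detects it
```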
This can be used to sell stocks, buy real estate, or transfer funds to your family overseas. Each party involved in a Blockchain has a node in a peer-to-peer network of nodes that can access the shared Blockchain. A user initiates a transaction request, and the nodes in the network validate it using a Practical Byzantine Fault Tolerance [PBFT] protocol.
By providing [disintermediation], with fewer middlemen in the process, Blockchain reduces costs, processing time, and risk. The method allows for the user's transactional privacy, but also ensures accountability and auditability.
7234A Building Cloud Infrastructure for Next-Generation Workloads
Krishna Nathan (IBM) presented. Today's cloud providers are efficient at providing today's cloud services at low costs. However, this efficiency comes with the penalty of inflexible instance types and no real guarantees on performance or quality of service.
Today's systems are organized and optimized for transactional processing, a result of evolution of the past 60 years. Relational Databases offer specific features like Atomicity, Consistency, Isolation, and Durability, known collectively as [ACID].
However, we are expanding beyond "automating our world" toward "understanding our world". This means tapping into the roughly 90 percent of data that is unstructured, with multi-modal scanning and noise-tolerant processing that allows variable precision and probabilistic outcomes.
Cloud Providers have used the "best practices" of transactional datacenters. Consequently, next-generation workloads that often do not share the characteristics of traditional workloads are limited in expressing their full potential because of these infrastructure limitations. Now they need to focus on four characteristics: Locality, Composability, Heterogeneity, and Dynamic resource allocation.
New workloads need a combination of CPU, GPU, NVMe, and other resources. How do you schedule which equipment to deploy for incoming workloads in a way that optimizes performance? By taking these factors into account, clever Cloud providers can provide the best fit for each workload request (a toy placement sketch follows).
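As a toy illustration of that scheduling question, a weighted best-fit heuristic might look like the following. Everything here, the machines, weights, and scoring, is my own invented example, not an IBM scheduler:

```python
# Toy weighted best-fit placement: put each workload on the machine that
# leaves the least weighted spare capacity, so scarce resources such as
# GPUs are not wasted on jobs that never asked for them.
WEIGHTS = {"cpu": 1, "gpu": 10, "nvme_tb": 2}  # scarcer resources weigh more

machines = [
    {"name": "cpu-box", "cpu": 32, "gpu": 0, "nvme_tb": 2},
    {"name": "gpu-box", "cpu": 16, "gpu": 4, "nvme_tb": 8},
]

def fits(m, w):
    return all(m[r] >= w.get(r, 0) for r in WEIGHTS)

def leftover(m, w):
    return sum(WEIGHTS[r] * (m[r] - w.get(r, 0)) for r in WEIGHTS)

def place(w):
    candidates = [m for m in machines if fits(m, w)]
    if not candidates:
        return None                    # queue the job, or scale out
    best = min(candidates, key=lambda m: leftover(m, w))
    for r in WEIGHTS:
        best[r] -= w.get(r, 0)         # reserve the resources
    return best["name"]

print(place({"cpu": 8}))               # cpu-box: keeps the GPUs free
print(place({"cpu": 8, "gpu": 2}))     # gpu-box: the only machine with GPUs
```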
7135A Storing and Using Data in the Cloud -- Putting Together the Puzzle Pieces
Michael Factor (IBM) presented. What do OpenStack Swift, Spark, CouchDB, Kafka and ElasticSearch have in common? They are all open source, they are all available on IBM's cloud today, and they all focus on storing and using data. The trick, though, is putting these puzzle pieces together to solve real problems: you need smart integration between data services, motivated by real examples from domains such as IoT, transport and retail.
There are a plethora of open services to manage data. A recent IDC analyst study indicates that the world's data will grow from 8.6 Zettabytes today to 40 Zettabytes in 2020. Michael gave some eye-opening comparisons: if the data were stored on 10-TB hard disk drives, we could make some physical comparisons (a quick sanity check in code follows the list):
Imagine stacking all of those disk drives one on top of another like a stack of books. The stack today would be 22,000 kilometers tall, more than halfway to geosynchronous orbiting satellites; in 2020 it would be over 100,000 kilometers, way past those satellites.
The weight of those drives today would be comparable to the weight of 1,450 Airbus 380 airplanes. In 2020, they would weigh as much as 6,755 Airbus 380 airplanes.
If the drives were spread across the entire Mandalay Bay convention center floor, they would be 1.7 meters deep today (about 5 feet), but would be 8 meters deep in 2020.
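The stack-height comparison checks out with simple arithmetic; the drive height of roughly 2.6 cm is my assumption for a standard 3.5-inch drive:

```python
# Sanity check of the stack-of-drives comparison above.
ZB = 1e21                # zettabyte, in bytes
DRIVE_BYTES = 10e12      # a 10-TB drive
DRIVE_HEIGHT_M = 0.026   # ~2.6 cm per 3.5-inch drive (my assumption)

for label, zb in (("today", 8.6), ("2020 ", 40)):
    drives = zb * ZB / DRIVE_BYTES
    km = drives * DRIVE_HEIGHT_M / 1000
    print(f"{label}: {drives:.1e} drives, stack ~{km:,.0f} km")
# today: 8.6e+08 drives, stack ~22,360 km
# 2020 : 4.0e+09 drives, stack ~104,000 km
```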
He gave an example of the EMT Madrid bus company using real-time sensors to react to traffic conditions.
Here are the various pieces (a small PySpark sketch of how two of them fit together follows the list):
OpenStack Swift -- provides object storage
ElasticSearch, based on Apache Lucene -- a search engine, such as for metadata or queries
Apache Spark -- combines SQL, streams and complex analytics, with filter pushdown support
Apache Parquet -- a column-based data format to replace the row-based Comma-Separated Values (CSV) format
Apache Kafka -- a message bus; works with dashDB and Secor
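For instance, here is a hedged PySpark sketch of the Spark-plus-Parquet piece, reading a columnar dataset with a filter Spark can push down to the scan. The file name and columns are made up for illustration:

```python
# Sketch: Spark reading Parquet with filter pushdown. Because Parquet is
# columnar and carries per-chunk statistics, Spark can skip data that cannot
# match the filter, instead of scanning whole rows as it would with CSV.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("pushdown-demo").getOrCreate()

readings = spark.read.parquet("sensor_readings.parquet")  # hypothetical dataset
hot = readings.filter(col("temperature") > 30).select("sensor_id", "temperature")

hot.explain()    # the physical plan lists PushedFilters on "temperature"
print(hot.count())
```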
Beyond programming "glue", we need smart integration to get an order of magnitude boost in performance.