This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years at IBM, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter (@ldean0558) and on LinkedIn (Lloyd Dean).
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Here is my recap of the sessions on the morning of Day 4.
Configurable IBM Spectrum Scale
Kent Koeninger presented IBM Spectrum Scale software, which Kent refers to as "Configurable Spectrum Scale" (or CSS for short), as opposed to the pre-built system known as Elastic Storage Server (ESS).
Why choose CSS versus ESS? Lower entry price. You can start with just two single-socket servers and a drawer of disk.
IBM Spectrum Scale was formerly called IBM General Parallel File System (GPFS). Many who tried earlier versions of GPFS found it difficult to configure, because it only had a command line interface. Now, Spectrum Scale has a fully-functional GUI, and clients have been able to install and configure Spectrum Scale in just 30 minutes!
How big can Spectrum Scale grow? As much as your budget can afford! With an architecture that can support yottabytes of data and 900 quintillion files, you won't hit any limits anytime soon.
There are some unique capabilities of ESS not available in CSS. For example, ESS offers Spectrum Scale Native RAID (erasure coding) with fast rebuild times, and ESS is certified for SAP HANA. You can combine CSS and ESS in the same Spectrum Scale cluster to create a "data lake" for mixed workloads.
A good use case for Spectrum Scale, either CSS or ESS, is backup. Kent explained why it is an excellent target for backups made with enterprise backup software such as IBM Spectrum Protect or Commvault.
VersaStack - Hybrid Cloud like no other
This session was jointly presented by Chris Vollmar, IBM Storage Architect, and Brent Anderson, Cisco Global Consulting Systems Engineer. IBM and Cisco have been partners for more than 25 years.
VersaStack combines Cisco UCS x86 servers, Cisco Nexus and MDS switches, and IBM FlashSystem or Spectrum Virtualize storage.
What if your SAN infrastructure is built entirely from IBM b-type (Brocade-based) switches? Cisco supports its SAN switches in such environments, but nobody has tested VersaStack in this combination, and UCS Director does not manage it, so IBM does not support it. Instead, for this situation, IBM recommends external connection via Ethernet, or direct-attach configurations.
The Cisco Validated Design process spends four months testing, and gives you a bulletproof process to deploy the solution.
There is a difference between Cisco UCS Manager and UCS Director. UCS Manager is available at no additional charge, but manages only the Cisco x86 servers. UCS Director is an optional, extra-priced product that manages Cisco servers, Cisco networking, and IBM Spectrum Virtualize storage.
Brent explained the benefits of UCS Management through policies and profiles.
Chris covered Cisco CloudCenter, which the Cisco team shortens to just "C3". IBM Spectrum Copy Data Management can be used to move snapshots of data between on-premises and off-premises Cloud to help in Hybrid Cloud configurations.
How to Design an IBM Spectrum Scale solution
Tomer Perry, IBM Spectrum Scale I/O Development, presented this session.
For those who want to bring up a quick IBM Spectrum Scale environment to play around with, you can do so in as little as 30 minutes. But to design a mission-critical deployment, additional requirements may need to be addressed. You may need to consult not just storage admins, but also application owners, network admins and security personnel.
Large companies have hundreds or thousands of applications, so Tomer recommends grouping these into "workload families" based on data set types, access patterns and performance requirements. For NAS take-out, 80 percent of NAS I/O is "get attribute" requests that can easily be served directly from cache memory.
For each workload family, you may need to decide on snapshots, quotas, namespace (bind mounts, symlinks, etc.), security (ACL, encryption), estimated capacity, replication BC/DR, backup and ILM requirements.
Unless this is a completely greenfield deployment, the existing infrastructure needs to be evaluated. This includes the LAN and WAN network topology, name resolution (DNS), time services (NTP), authentication (AD, LDAP, NIS, Keystone), key server (IBM SKLM), and monitoring and migration requirements.
Tomer suggests designing the environment in this order: Cluster, File System, Storage Pools, Fileset, Replication, and finally Monitoring.
Generally, you need three NSD servers per cluster. For those licensing Spectrum Scale Standard Edition by the socket, you may be tempted to put everything into one big cluster. The new capacity-based Spectrum Scale Data Management Edition eliminates that concern, so Tomer recommends having separate compute clusters and storage clusters, connected by a cross-cluster mount. All nodes in a cluster are considered an "ssh" administration domain.
A single Spectrum Scale namespace can support up to 256 file systems. There are various reasons to have multiple file systems: block size, backup/recovery, snapshots, quotas, and cross-cluster isolation. If a file system gets corrupted, it will not affect other file systems. In an internal test, an "fsck" on a file system with 1 billion files and 1 PB of data took only 30 minutes to repair.
Storage Pool design can separate metadata from content, and workloads can be separated to different storage media. With ILM, HSM and TCT, you can move colder data to Cloud, Object Storage, Spectrum Protect or Spectrum Archive.
Filesets are tree branches within each file system. IBM Spectrum Scale supports both dependent and independent filesets. Filesets can be used for Non-Erasable, Non-Rewritable (NENR) immutability, policies, quotas, and snapshots. Consider using a fileset instead of carving off a new file system.
Spectrum Scale offers both synchronous and asynchronous replication. For Synchronous, the ReadReplicaPolicy can be set to default, local or fastest. For Asynchronous, there are a variety of AFM modes (Read-only, Local-Update, Single-Writer, Independent-Writer, and Disaster Recovery). You may need to decide if your AFM gateways are dedicated or collocated. You will need to tune your TCP buffers for WAN performance to get the RPO you desire.
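Tuning those TCP buffers usually starts from the bandwidth-delay product, the number of bytes that must be in flight to keep the link full. A quick sketch, where the link speed and round-trip time are example values rather than recommendations:

```python
# Bandwidth-delay product: the TCP buffer size needed to keep a WAN
# link full. The link speed and RTT below are example values for
# illustration, not tuning recommendations.

def bdp_bytes(bandwidth_bits_per_sec, rtt_seconds):
    """Bytes in flight needed to fill the pipe: bandwidth * RTT."""
    return int(bandwidth_bits_per_sec * rtt_seconds / 8)

# A 1 Gb/s link with a 50 ms round-trip time:
print(bdp_bytes(1_000_000_000, 0.050))  # 6250000 bytes (~6 MB)
```

If your TCP buffers are smaller than this product, the link can never run at full speed, and your AFM replication lag (and therefore your RPO) grows accordingly.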
The nice thing about IBM solutions is that you can start small, and grow big. In all of these examples above, IBM offers sizes to match nearly any IT budget.
As you can imagine, I get a lot of email from around the world. This one, from a loyal reader from overseas, was particularly interesting. Normally, I would direct them to read the fantastic manual [RTFM], but decided instead to go ahead and tackle it here in my blog.
I have followed your blog for several years; it has served as a reference and a training resource for me in my professional career, and I want to thank you.
I am writing because my company has acquired a new IBM Storwize V7000 Gen2 to replace a Gen1, with 16 FC ports (8 ports per controller node), along with an 8-port FC FlashSystem 900. The idea is to virtualize part of the FlashSystem 900 storage behind the V7000, and assign the rest directly to hosts. After much reading of forums and storage Redbooks, I am still not clear on how the SAN should be cabled, or how the zoning should be done to carry out this installation. I would appreciate it if you could write about this subject, which seems to be as controversial as SAN zoning and cabling always are, and if possible clarify my scenario.
I will tackle this in three steps.
First, let's attach "Server 1" and the FlashSystem 900 to the SAN fabric. IBM Spectrum Virtualize can handle one, two or even four separate fabrics. Let's assume you have a dual-port Host Bus Adapter (HBA) in server 1, and two redundant fabrics. We will connect each server port to each FCP switch. Likewise, we will connect each FCP switch to the FlashSystem 900, carve up "Volume 1", and create SAN "Zone A1" and "Zone A2", which identify "Server 1" as the initiator, and "FlashSystem 900" as the target. This is all basic stuff.
"All Storwize V7000 Gen2 nodes in the Storwize V7000 Gen2 clustered system are connected to the same SANs, and they present volumes to the hosts. These volumes are created from storage pools that are composed of MDisks presented by the disk subsystems.
The fabric must have three distinct zones:
Storwize V7000 Gen2 cluster system zones: Create one cluster zone per fabric, and include any port per node that is designated for intra-cluster traffic. No more than four ports per node should be allocated to intra-cluster traffic.
Host zones: Create a host zone for each server host bus adapter (HBA) port accessing Storwize V7000 Gen2.
Storage zones: Create one Storwize V7000 Gen2 storage zone for each storage system that is virtualized by the Storwize V7000 Gen2. Some storage control systems need two separate zones (one per controller) so that they do not 'see' each other."
Second, we connect the Storwize V7000 Gen2 to the FCP switches. You don't need to connect all of the ports, but I recommend connecting each controller node to each FCP switch, requiring four cables. Add more connections for additional performance bandwidth.
Carve up "Volume 2", which will be referred to as a "managed disk" (MDisk for short), and create a "storage pool", formerly known as a "managed disk group", which is why you often see MDG in naming conventions and examples. Storage pools can contain one or more managed disks, and you can add more dynamically as needed.
The "storage zone" indicates the Storwize V7000 Gen2 as the initiator, and the FlashSystem 900 as the target. If you want to increase the performance bandwidth, consider more cables between the FCP switches and the FlashSystem 900. We create "Zone B1" and "Zone B2". I recommend a separate storage zone for each additional storage system that you choose to attach to the Storwize V7000 Gen2.
The "cluster zone" connects all of the Storwize V7000 Gen2 node ports together for node-to-node (intra-cluster) communication. Storwize V7000 Gen2 ports can serve as both initiators and targets dynamically. For example, when you write to one node, that node copies the cache block over to the second node, so there are two copies stored safely on separate nodes. Since we have two fabrics, we create "Zone C1" and "Zone C2".
Third, we connect "Server 2" to the FCP switches, same as we did with "Server 1". We create "Volume 3", which is a "virtual disk" (vDisk for short), from the storage pool containing Volume 2. The "host zone" indicates Server 2 as the initiator, and the Storwize V7000 Gen2 as the target. We create "Zone D1" and "Zone D2". I recommend putting each additional server in its own set of host zones.
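To summarize the three steps, here is the whole zoning plan expressed as a small Python structure; the port aliases (SRV1_P1 and so on) are hypothetical placeholders, not real WWPNs.

```python
# Zoning plan from the three steps above, one entry per zone.
# Aliases like "SRV1_P1" are hypothetical placeholders for WWPNs.

zones = {
    # Host zones: Server 1 directly to the FlashSystem 900 (Volume 1)
    "Zone_A1": {"fabric": 1, "initiator": "SRV1_P1", "target": "FS900_P1"},
    "Zone_A2": {"fabric": 2, "initiator": "SRV1_P2", "target": "FS900_P2"},
    # Storage zones: the V7000 virtualizing the FlashSystem 900 (Volume 2)
    "Zone_B1": {"fabric": 1, "initiator": "V7K_N1P1", "target": "FS900_P3"},
    "Zone_B2": {"fabric": 2, "initiator": "V7K_N2P1", "target": "FS900_P4"},
    # Cluster zones: V7000 node-to-node (intra-cluster) traffic
    "Zone_C1": {"fabric": 1, "initiator": "V7K_N1P2", "target": "V7K_N2P2"},
    "Zone_C2": {"fabric": 2, "initiator": "V7K_N1P3", "target": "V7K_N2P3"},
    # Host zones: Server 2 to the virtualized vDisk on the V7000 (Volume 3)
    "Zone_D1": {"fabric": 1, "initiator": "SRV2_P1", "target": "V7K_N1P4"},
    "Zone_D2": {"fabric": 2, "initiator": "SRV2_P2", "target": "V7K_N2P4"},
}

# Sanity check: each redundant fabric carries one zone of each type.
for fabric in (1, 2):
    print(fabric, sorted(z for z, v in zones.items() if v["fabric"] == fabric))
```

Notice the symmetry: every zone type appears once per fabric, so losing an entire fabric still leaves a complete path from every initiator to every target.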
In theory, you could have a server connected to both Volume 1 and Volume 3. For example, a Windows server would have a "C:" drive connected directly to FlashSystem 900 for high-speed performance, and have a "D:" drive on Storwize V7000 Gen2 to contain data. The Storwize V7000 Gen2 introduces 60 to 100 microseconds of added latency, but provides added value such as FlashCopy, Thin Provisioning, and Real-time compression.
Of course, there are unique situations that might require special configurations, depending on the servers, operating systems, host bus adapters, FCP switches, and storage systems involved.
The blog team is working on re-directs for those who don't see this in time. Depending on which RSS feed reader you use, you may need to unsubscribe and re-subscribe to re-activate. You can update the URL for the feed to one of these:
This week, I am in Las Vegas for [Edge 2016], IBM's Premiere IT Infrastructure conference of the year.
General Session - Outthink Status Quo
This week's motto is "Outthink the Status Quo... before the Status Quo disrupts your business!"
Tom Rosamilia, IBM Senior VP for IBM Systems (and my fifth-line manager), kicked off the event. There are about 5,500 people at this event. He mentioned that just like a picture is worth a thousand words, "a prototype is worth a thousand meetings."
He showed a video of our client "Plenty of Fish" [POF], which is a dating site. They have 100 million members, of which 4 million access their site every day. IBM FlashSystem paid for itself, with an ROI payback period of 2 months.
Jason Pontin, Editor in Chief and Publisher of [MIT Technology Review], mentioned three major areas to watch:
Explosive innovation in Artificial Intelligence (AI), including IBM Watson, machine learning, etc.
Pervasive computing, including augmented reality or virtual reality, what IBM calls Internet of Things (IoT)
Re-writing life, directly editing genomes for healthcare and agriculture
Jason feels there are two major challenges for humans. First, what is the "future of work"? People are no longer working for the same company for their entire career. Rather, they come and go, moving in and out of companies. Second, how will we deliver food and water to the 9.6 billion population expected by 2050, with the added challenge of climate change?

Ed Walsh, IBM General Manager for Storage and Software Defined Infrastructure, presented next. Last year, I was asked to throw my hat in the ring to be the next General Manager of IBM Storage. I was up against some strong competition, and in the end upper management selected Ed Walsh instead. He is a good choice, and I support his efforts.
Matt Cadieux, CIO for [Red Bull Racing], presented on the IT challenges of designing, building and racing Formula One racing cars. They have 21 races per year, and each race has slightly different specifications, forcing Red Bull Racing to break down and rebuild their cars for each race.
Michael Lawley, Senior IT Vice President for [HealthPlan Services], explained how his business grew 300 percent in the past four years. Their workloads are very "spiky", so it is good that they can scale up or down their IT infrastructure 3-4x as needed, within minutes.
Jacob Yundt, CIO for University of Pittsburgh Medical Center [UPMC], explained the importance of genomics as the next frontier of medicine. Genomics allows for more accurate cancer determinations, which helps target specific treatments. They moved from x86-based clusters to those based on Power LC models from IBM. For analytics, they chose IBM Power8 S822L servers with Elastic Storage Server (ESS) and the Hadoop Transparency Layer.
Lastly, Terri Virnig hosted two technology partners on the stage for some major announcements. First, Jim Totton from Red Hat announced that RHEV v4 (based on Linux KVM) is coming to the POWER platform. Second, Scott Gnau, CTO of [Hortonworks], announced that Hortonworks will run on the POWER platform, as part of the IBM and Hortonworks Open Data Platform [ODP] initiative.
Trends & Directions: The Future of Storage in the Cloud and Cognitive Era
Eric Herzog, IBM Vice President, Product Marketing and Management Software Defined Infrastructure, served as emcee for this session.
Ed Walsh, IBM General Manager for IBM Storage and Software Defined Infrastructure, marveled at IBM's "storied history in storage innovation". He suggests clients modernize and transform their business with IBM's storage portfolio, the broadest in the IT industry.
Clod Barrera, IBM Engineer and the Chief Technical Strategist for IBM Systems Storage, explained that in the past 60 years of disk systems, areal density has improved by a factor of one billion. Unfortunately, that is slowing down, and we won't see such improvements anymore.
Bina Hallman, IBM Vice President, Software Defined Storage Solutions Offering Management, hosted a panel of clients, including:
Bob Osterlin, from [Nuance], which has 5-10 PB of data using IBM Spectrum Scale for its voice recognition software.
Rich Spurlock, from [Cobalt Iron], which provides Backup-as-a-Service using IBM Spectrum Protect. Their clients experience an 80 percent reduction in operating expenditures (OPEX) using Spectrum Protect.
Moshe Perez, from [RR Media], which distributes television channels like ESPN and BBC to other countries. They use IBM Spectrum Accelerate to handle demand peaks, such as the Olympics.
Mike Kuhn, IBM Vice President for Storage Solutions Offering Management, also hosted a panel of clients, including:
Kevin Muha, from [UPMC], managing 13 PB of storage, across a variety of IBM storage devices, including 700 TB of FlashSystem V9000.
Bill Reed, CTO for [Arizona State Land Department], that uses VersaStack with IBM FlashSystem V9000 for geographic information system [GIS] applications. They manage over 9.2 million acres to help fund K-12 schools in Arizona.
Owen Morley, from Plenty of Fish [POF] dating website, evaluated nearly every flash device in the market, and chose IBM FlashSystem. "The one metric that matters is Latency!"
These were the two main keynote sessions on Monday morning. During the rest of the week there will be over 285 storage-related breakout sessions, dozens of labs, and 7 panels.
Well it's Tuesday again, and you know what that means? IBM announcements!
(For those wondering where I went in July, then perhaps the better question should be "where didn't I go?". I started in Boston, MA, then Iceland, England, Hungary, Romania, Qatar, Kenya, Dubai UAE, and finally Seattle, WA. Whew! This week, I am visiting clients in Tennessee.)
Today, IBM launches a whole set of updated offerings based on the IBM Spectrum Virtualize software code base.
IBM Spectrum Virtualize v7.7.1 software-only offering
Like the rest of the IBM Spectrum Storage family of products, IBM Spectrum Virtualize can now be purchased as software only, allowing you to install it on your own x86 servers, rather than purchasing pre-built systems from IBM.
The software license comes in two flavors. The traditional "perpetual license" allows you to move the software from one x86 server to another. Say after 4 years, you have depreciated the server, or the hardware components fail, and you want to get a newer server. This is the same perpetual license that clients with IBM SAN Volume Controller and Storwize family have enjoyed since 2003.
The other is a "monthly license", which allows you to stand up your own "SVC" on your own x86 servers for the months needed for a development/test project, disaster recovery, or some other purpose. After the project is over, you can discontinue the license and re-purpose the x86 servers for something else. This is especially handy for Managed Service Providers (MSP) and Cloud Service Providers (CSP), but can certainly prove useful in traditional datacenters as well. The monthly licensing option is also available for IBM SAN Volume Controller (SVC).
The software license is based on tebibytes (TiB). For those not familiar with international standards, here is a comparison table:
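For readers who have not worked with binary prefixes before, the difference is powers of 1024 versus powers of 1000; this short Python snippet prints such a comparison:

```python
# Binary (IEC) prefixes vs decimal (SI) prefixes for storage capacity.
units = [("KiB/KB", 1), ("MiB/MB", 2), ("GiB/GB", 3), ("TiB/TB", 4)]
for name, power in units:
    binary = 1024 ** power    # IEC: powers of 1024 (KiB, MiB, GiB, TiB)
    decimal = 1000 ** power   # SI:  powers of 1000 (KB, MB, GB, TB)
    pct = (binary - decimal) / decimal * 100
    print(f"{name}: {binary:>16,} vs {decimal:>16,} bytes (+{pct:.1f}%)")
```

The gap widens with each prefix: a TiB is nearly 10 percent larger than a TB, which matters when you are comparing license quotes.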
The new SV1 model is based on two 8-core [Intel Broadwell] processors, which IBM has clocked at up to 30 percent performance improvement over the DH8 model. It also offers up to 256GB of cache memory per node, of which, sadly, only the first 64GB is usable at the current software level. Someday, a future release of software will address all 256GB of memory.
The IBM SAN Volume Controller now offers "Enterprise Class Support" as an option. In the past, the SVC was a "customer setup" box, similar to midrange and entry-level products. Now, you can upgrade your support to match that of IBM DS8000 and XIV enterprise class offerings. This means that IBM experts will maintain your microcode levels for you.
The new 624 model is based on a single 10-core Intel Broadwell processor, which IBM has clocked at up to 45 percent performance improvement over the previous model. It also offers up to 128GB of cache memory per system, 64GB per node, double what came standard on the 524 model!
Why "Gen2+"? Moving from an 8-core Haswell to a 10-core Broadwell CPU, and doubling the cache memory didn't seem to be enough "architectural change" to justify calling it a "Gen3", so marketing decided on Gen2+ instead.
I refer to the IBM FlashSystem V9000 as my "Superman" product. When Superman dons his glasses, he becomes "Clark Kent", mild-mannered newspaper reporter. But behind the glasses, he is always Superman! Likewise, the FlashSystem V9000 is an all-flash array with an impressive set of features, but take off the fancy bezel, and you find that it is a pair of fully-loaded SAN Volume Controllers (which we call "Control Enclosures AC3") and a FlashSystem 900 drawer of the world's fastest flash storage.
The new FlashSystem V9000 is based on the new SV1 models of SVC. Each V9000 can attach up to 20 expansion enclosures over 12Gb SAS connections. The expansion enclosure can hold either 24 of the smaller 2.5-inch drives, or 12 of the larger 3.5-inch drives. Of course, the FlashSystem V9000 can also virtualize any of almost 400 different kinds of storage arrays, from all the major vendors, similar to SAN Volume Controller. This provides tiering options that match well with the FlashSystem 900 inside.
IBM Storwize V7000F and V5030F all-flash array models
The FlashSystem V9000 was originally going to be called the Storwize V9000, but the FlashSystem folks wanted to keep all of the "FlashCore" technology under one name. In perhaps a bit of retaliation, or maybe sibling rivalry, the Storwize team added the letter "F" to refer to the all-flash models, the Storwize V7000F and V5030F.
The "flash" in the V7000F and V5030F consists of solid-state drives, not nearly as fast as the cards in the FlashSystem models. The drives come in 1.92TB and 3.84TB capacities. You might see these rounded up to 2TB and 4TB in some presentations, but IBM officially never likes to exaggerate.
IBM is doing a bit of year-end housekeeping. The Storage Community (storagecommunity.org) will be discontinued as of January 1, 2017.
IBM will continue to host a community for all of its followers and contributors to share insights on the latest trends in storage at [ibm.co/StorageSolutions].
All of the most recent IBM content from storagecommunity.org will now be available at this new domain. IBM hopes that you will continue to engage in its community of storage industry thought leaders.
If you would like to contribute to the new community, please [register here]. Simply click the silhouette icon in the top right-hand corner of the page and select "register." Input your email address and create a password, then sign in. You will receive an email from IBM with further instructions to get you set up.
IBM's twitter handle (@SmarterStorage) will also be sunset as of January 1, 2017, but I encourage you to follow @IBMStorage, or my own twitter handle @az990tony, for the latest storage news and announcements from IBM.
This week, I am presenting at the IBM Systems Technical University for Storage and POWER systems. This conference is being held in New Orleans, Louisiana, October 16-20, 2017, at the beautiful Hyatt Regency.
The afternoon sessions on Monday were all about Cloud.
Back in 2009, I was designated the IBM Cloud Storage Center of Competency for all of the IBM Systems client centers. That was nearly a decade ago, and I am still talking about Cloud Storage!
Since then, IBM has decided to become a "Cloud Platform" company, and now everyone wants to know about Cloud Storage. Cloud is no longer just about lowering costs, as it was when it started out, but about innovation and business value.
Nearly all of IBM Storage is enabled for cloud, from our high-end FlashSystem, DS8000 and XIV flash and disk storage arrays, to our Spectrum Storage software suite, to our various tape products.
Building Private Cloud with Ubuntu and OpenPOWER
Ivan Dobos, from Canonical--the company that makes Ubuntu--presented Ubuntu on OpenPOWER. Other Linux distributions like Red Hat and SuSE offer both a "community supported" version (CentOS or openSUSE) and an "enterprise" version (RHEL or SLES). Ubuntu doesn't fork its versions; there is a single version for everyone.
Ubuntu 14.04 LTS was made available as a Little-Endian distribution for IBM POWER and OpenPOWER. Ubuntu was the first Linux distribution to support CAPI and PowerKVM for the POWER8 platform.
(A note on release numbers. Ubuntu releases every April and October, so 14.04 represents 2014/April release. Every two years, a release is designated "Long Term Support" (LTS) which is supported for five years.)
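That numbering scheme can be decoded mechanically. A small sketch, where the LTS rule (April releases of even-numbered years) is inferred from the pattern of 14.04 and 16.04 rather than taken from official documentation:

```python
# Decode an Ubuntu version string like "16.04" into its release date.
# The LTS rule below (April releases of even-numbered years) is
# inferred from the pattern of 14.04 and 16.04, not an official rule.

def decode(version):
    year, month = (int(part) for part in version.split("."))
    is_lts = (month == 4) and (year % 2 == 0)
    return 2000 + year, month, is_lts

print(decode("14.04"))  # (2014, 4, True)
print(decode("16.10"))  # (2016, 10, False)
```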
Since version 16.04, Ubuntu offers the LXD container hypervisor, based on LXC, similar to Solaris Zones, but running as a daemon. Virtual machines are heavy because each has its own kernel. Containers instead share the kernel of the underlying hypervisor, but are limited to Linux guests. The Linux guests can be older versions of Debian, Red Hat or SuSE, but run on the latest, most secure Ubuntu kernel for safety and security.
(Canonical gives Ubuntu away for free, but offers "Enterprise Services" for a fee to companies that want an added level of support. One of the features of Enterprise Services is "Live Kernel Update". Normally, updating the Linux kernel requires a reboot, which would cause an outage for all of the VMs and containers running on that host server; Live Kernel Update avoids this.)
Like VMs, you can launch containers, switch to a bash shell, install software, run applications, and shut down containers, all isolated from other containers. The LXD daemon can run both LXC and Docker containers. Some advantages of this approach:
Lift and Shift, live mobility from one system to another
Collocation of different workloads on same node
More efficient to use containers than Virtual Machines
14x greater density with LXD than traditional KVM or VMware (tested on x86)
Based on open source LXC containers
Ubuntu is designed for the "Elastic Hybrid Cloud". Canonical recommends combining on-premises data center with two or more public cloud providers. Scarcity has shifted from "code" to "operations". Are you ready to run applications you don't understand?
Total Cost of Ownership is shifting from code license costs to operational costs. Canonical offers a free, downloadable, operations orchestration platform called "Juju" to help install, configure and scale applications. Juju means "magic" in Swahili.
Scripts on Juju are called charms. There are Juju charms to install and configure things like MongoDB and IBM Spectrum Scale. Furthermore, Juju charms can be bundled together for more complicated deployments.
Juju is not limited to LXD; it can be used with VMware, OpenStack, bare-metal servers, and public clouds. It is available on Ubuntu, Red Hat and Windows. As a demo, Ivan built an entire working OpenStack environment, with 20 applications on 4 bare-metal servers, all installed and launched with Juju.
For OpenStack, you can use the basic "Ubuntu OpenStack", or a more complete "Canonical OpenStack", or even have Canonical folks manage your environment for you.
Canonical MaaS (Metal-as-a-Service) uses hardware APIs to manage bare metal servers, providing physical provisioning, dynamic allocation for workloads, and even Ubuntu and CentOS operating system installs. Canonical has clients with over 100,000 servers managed with MaaS.
Introduction to IBM Cloud Object Storage System and its applications (powered by Cleversafe)
Before 2015, IBM offered two "Object Storage" products: IBM Spectrum Scale and IBM Spectrum Archive, and I was constantly having to compare and contrast IBM products to Cleversafe.
Not any more! With the IBM acquisition of Cleversafe, IBM now offers all three!
This session explained all of the features and functions of IBM Cloud Object Storage System, available as software, as pre-built systems, including a VersaStack CVD, and as Storage-as-a-Service (STaaS) in the IBM Cloud.
(IBM renamed Cleversafe DSnet to "IBM Cloud Object Storage System". I joked that if IBM ever acquired Coca-Cola, they would probably rename their signature soft drink as the "Brown Carbonated Sugar Liquid", or BroCarb SugarLiq for short!)
In the evening, we had a nice reception with food and drink at the Solution Center. The Solution Center has booths where all of the IBM and Business Partners have their experts answering questions and handing out brochures of their offerings.
Tomorrow, I will be presenting at the IBM Systems Technical University for Storage and Cognitive Systems (formerly POWER servers). This conference will be held in New Orleans, Louisiana, October 16-20, 2017.
Here is my speaking schedule:
The Seven Tiers of Business Continuity and Disaster Recovery (BC/DR)
IBM's Cloud Storage Options
Introduction of IBM Cloud Object Storage System and its Applications (powered by Cleversafe)
The Pendulum Swings Back -- Understanding Converged and Hyperconverged Integrated Systems
New generation of storage tiering: Simpler management, lower costs, and increased performance
Introduction of IBM Cloud Object Storage System and its Applications (powered by Cleversafe) **repeat**
IBM Spectrum Scale for File and Object storage
If these topics seem familiar, I have presented them at prior events earlier this year, including the STU in Orlando, Florida, and the one in Melbourne, Australia. However, I have made updates! New products have been announced!
If you are planning to attend, here are some of my past blog posts to help you get up to speed:
STU Orlando - Orlando, Florida
This was a large 5-day event that replaced the technical portion of IBM's previous "Edge" conference.
This was a smaller 3-day event bringing STU to other countries. We used to call these "Edge Comes to You" events, but now we call them "IBM Systems Technical University" just like the ones in the USA.
The STU at New Orleans will be a 5-day event. Instead of a "Meet the Experts" session, they are having a "Poster Session" in its place. Many of the posters will have QR codes, so make sure you have a "QR Scanner" application installed on your smartphone so you can scan them quickly!
Everyone, speakers and attendees alike, should consider making a QR code for themselves for this event. Go to [any number of websites] that generate a QR code. This could be a VCF file with all of your contact information, a link to your blog or website, or a pointer to your presentations on Slideshare or IBM@Box.
The next time someone at the event asks for this information, display the QR code on your smartphone, and let them scan it. Alternatively, you can send the image via MMS text message.
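If you want to build the vCard (VCF) contents yourself before feeding them to a QR generator, here is a minimal sketch in Python using only the standard library. The name, phone number, email, and URL below are hypothetical placeholders; substitute your own details, then paste the resulting text into any QR code generator.

```python
def make_vcard(full_name, org, phone, email, url):
    """Build a minimal vCard 3.0 string suitable for encoding in a QR code."""
    lines = [
        "BEGIN:VCARD",
        "VERSION:3.0",
        f"FN:{full_name}",
        f"ORG:{org}",
        f"TEL;TYPE=CELL:{phone}",
        f"EMAIL:{email}",
        f"URL:{url}",
        "END:VCARD",
    ]
    # The vCard specification calls for CRLF line endings
    return "\r\n".join(lines) + "\r\n"

# Hypothetical contact details for illustration only
vcard = make_vcard("Jane Doe", "Example Corp", "+1-555-0100",
                   "jane.doe@example.com", "https://example.com/blog")
print(vcard)
```

Most smartphone QR scanner apps recognize vCard text and offer to add the contact directly to your address book.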
(My QR Code is fully functional, so go ahead and scan it with your smartphone for practice!)
I arrive in New Orleans Sunday afternoon, so if you are in town, give me a shout! Or tweet me at @az990tony
Want to hear the latest technical information about IBM Storage, but not willing to wait until the big [IBM Edge Conference] this September? We will have several "Systems Technical University" events over the next few weeks in various locations.
In the United States, I will be presenting several topics at the following:
Atlanta, GA -- April 12-14
San Francisco, CA -- May 10-12
Chicago, IL -- May 18-20
Boston, MA -- June 7-9
Here's my schedule for the one in Atlanta:
Introduction to Object Storage and its Applications with Cleversafe
Software Defined Storage -- Why? What? How?
Integration between Spectrum Scale and Cleversafe
IBM Spectrum Scale for File and Object storage
What Is Big Data? Architectures and Practical Use Cases
New Generation of Storage Tiering: Less Management, Lower Cost and Increased Performance
The Pendulum Swings Back -- Understanding Converged and Hyperconverged Environments
(FCC Disclosure: I work for IBM. I have no financial interest in SUSE, Scality, or any other storage vendor mentioned in this post. This blog post can be considered a "paid celebrity endorsement" for IBM Storwize, IBM Cloud Object Storage, and IBM Spectrum Storage software mentioned below.)
The study takes a realistic request for 250 TB of storage, at 25 percent compound annual growth rate (CAGR), to store infrequently accessed data in an online archive, and then looks at the Total Cost of Ownership (TCO) over a five-year period.
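To give a feel for what 25 percent CAGR means for capacity planning, here is a quick sketch of the compounding arithmetic. Whether the study applies four or five compounding periods within the five-year window is its own assumption; this sketch simply shows the growth curve from the initial 250 TB.

```python
def projected_capacity(initial_tb, cagr, years):
    """Capacity in TB after `years` of compound annual growth."""
    return initial_tb * (1 + cagr) ** years

# 250 TB growing at 25% per year, as in the study's scenario
for year in range(6):
    print(f"Year {year}: {projected_capacity(250, 0.25, year):.1f} TB")
# After five full years of growth, 250 TB roughly triples to about 763 TB
```

In other words, the solution chosen on day one must be able to scale to roughly three times its initial capacity by the end of the TCO window, which is why support and expansion costs matter as much as the entry price.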
The study compares five different Software-Defined Solutions and three pre-built systems. The Software-defined solutions come as software-only, requiring that you purchase the hardware separately and build it yourself. The three pre-built systems were chosen from the top three storage vendors in the marketplace: Dell EMC, IBM and NetApp.
The cost of support is factored in, as it should be. To keep comparisons equal, no data reduction techniques, such as deduplication or compression, were used.
In an odd approach, the study mixes block, file and object based approaches all in the same study.
You can read the full 14-page study (linked above). I have organized the results into a single table, ranked from best to worst, color coded for the best deals in green ($100K to $200K), moderate solutions in yellow ($200K to $300K) and most expensive in red (over $300K). I put the software-only options on the left and pre-built systems on the right.
SUSE Enterprise Storage 4
IBM Storwize V5010
DataCore SAN Symphony
Red Hat Ceph Storage
Dell EMC Unity 300
I am often asked, "Isn't the software-only, build-it-yourself approach always the lowest-cost option?" Now, I can answer, "Sometimes yes, sometimes no." Fortunately, IBM offers Software-Defined Storage in a variety of packaging options, including software-only, pre-built systems, and in the Cloud as a service.
IBM Storwize V5010 is based on IBM Spectrum Virtualize software, which you can deploy as software-only on your own x86 servers. This was not mentioned in the study, and perhaps it is my job to remind people that this option is also available for those who want to build their own storage.
For that matter, IBM Cloud Object Storage System -- available as software-only, pre-built systems, and in the Cloud -- might also be a cost-effective alternative.
Next week I will be in Orlando, Florida for the IBM Systems Technical University. If you are attending, stop by one of my presentations, or look for me at the Solution Center at one of the IBM peds, or attend the "Meet the Experts for IBM Storage" on Thursday!