Software-defined computing

Elastic Storage Server redefines unified storage for modern workloads


What does unified storage mean for modern workloads?
Ten years ago, the concept of unified storage was new. It meant a single storage solution that served both block and file workloads, and it solved the storage-silo problem of that era: separate storage area network (SAN) and network-attached storage (NAS) arrays. In the early 2000s, the dominant storage workloads were block based and served by SAN arrays; file-based workloads were limited, primarily file and print serving or test and development.

Fast-forward to today’s businesses born on the cloud. Today’s workloads are composed primarily of unstructured data, driven by tremendous growth in cloud, analytics, mobile, social and security. So what does unified storage mean for modern workloads?

Unified is no longer block and file. Unified storage for today’s businesses is a storage solution that provides a common pool of storage for file, object and Hadoop/big data workloads. That’s exactly what the IBM Elastic Storage Server does; it redefines unified storage by consolidating file, object and Hadoop workloads into a single storage solution.

Elastic Storage Server not only reduces the capital and operational expenditures of managing separate storage arrays for each workload; more important, it creates an agile environment for accelerating your business applications. For example, you can ingest data as objects over HTTP but analyze it at high speed through file interfaces. Customers using Elastic Storage Server for analytics also appreciate that they no longer need to move data between file storage and the Hadoop Distributed File System (HDFS) for Hadoop workloads. Elastic Storage Server includes an HDFS connector to the underlying file system, so you can run your Hadoop workloads directly on Elastic Storage Server and eliminate the headaches and delays of shuttling data back and forth between file storage and HDFS storage.
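The "ingest as object, analyze as file" pattern is easy to sketch locally. The toy HTTP handler and temp directory below are stand-ins for Spectrum Scale's object interface and shared file system, not its actual API; the point is simply that one write lands in one place and is immediately readable through a second interface with no copy step.

```python
# Minimal local sketch of the "ingest as object, read as file" pattern.
# All names here are illustrative; a real deployment would use the
# storage system's object (HTTP) interface, not this toy handler.
import http.server
import tempfile
import threading
import urllib.request
from pathlib import Path

DATA_DIR = Path(tempfile.mkdtemp())  # stands in for the shared storage pool

class ObjectIngestHandler(http.server.BaseHTTPRequestHandler):
    def do_PUT(self):
        # Store the uploaded object body as a plain file in the shared pool
        body = self.rfile.read(int(self.headers["Content-Length"]))
        (DATA_DIR / self.path.lstrip("/")).write_bytes(body)
        self.send_response(201)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), ObjectIngestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# 1) Ingest data over HTTP, as an object
req = urllib.request.Request(
    f"http://127.0.0.1:{port}/sensor-batch-001",
    data=b"reading,42\n", method="PUT")
urllib.request.urlopen(req)

# 2) Analyze the same data through the file interface -- no copy step
contents = (DATA_DIR / "sensor-batch-001").read_text()
print(contents)

server.shutdown()
```

In a unified pool the second step is what matters: analytics tools that speak POSIX see the object the moment it is ingested, with no export or migration job in between.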

Software-defined storage minus the risks of building your own solution
There are many advantages to software-defined storage (SDS), and the analyst community and storage vendors have written about it at length, with varying definitions. But everyone agrees on one point: in SDS, all of the intelligence for storing, managing and protecting data is implemented in software.

By that definition, Elastic Storage Server is software-defined storage, because 100 percent of its storage intelligence is implemented in software. In addition to the IBM Spectrum Scale parallel file system (formerly General Parallel File System), Elastic Storage Server uses Spectrum Scale RAID, which rebuilds a failed disk in minutes rather than the several hours required by traditional RAID. The combination of Spectrum Scale and Spectrum Scale RAID allows Elastic Storage Server to use entirely standard hardware: a pair of servers attached to just a bunch of disks (JBOD) enclosures. Using JBODs instead of storage controllers reduces the cost of the overall solution, and those savings are passed on to users.
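The rebuild-time difference comes down to simple arithmetic: rebuild bandwidth scales with the number of disks sharing the work, and declustered schemes like Spectrum Scale RAID spread rebuild work across every disk in the enclosure rather than one small array group. A back-of-envelope sketch, with assumed capacities and per-disk rates (illustrative numbers, not measured Spectrum Scale figures):

```python
# Back-of-envelope model of why declustered RAID rebuilds faster.
# All numbers below are illustrative assumptions, not product specs.

disk_capacity_gb = 4000        # capacity of the failed disk to rebuild
per_disk_rebuild_mbps = 50     # sustained rebuild bandwidth per disk,
                               # throttled so user I/O keeps running

def rebuild_hours(participating_disks):
    # Total rebuild bandwidth scales with how many disks share the work.
    total_mbps = participating_disks * per_disk_rebuild_mbps
    return disk_capacity_gb * 1000 / total_mbps / 3600

# Traditional RAID-6: only the handful of disks in one array group
# participate in rebuilding onto the spare.
traditional = rebuild_hours(participating_disks=7)

# Declustered RAID: data and spare space are spread across the whole
# enclosure, so every disk contributes a small slice of the rebuild.
declustered = rebuild_hours(participating_disks=100)

print(f"traditional: {traditional:.1f} h, declustered: {declustered:.1f} h")
```

Under these assumptions the declustered rebuild finishes more than an order of magnitude sooner, which is the "minutes versus hours" effect; the exact ratio depends only on how many disks participate.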

Why do I say “minus the risks”? Because Elastic Storage Server is an integrated solution: it already includes the optimum server, memory and network hardware to match the number of disks it supports. There is no headache in figuring out the right hardware to pair with the storage software, and no exposure to incompatible firmware or hardware. Elastic Storage Server comes in seven models to meet both high-throughput and high-capacity application needs, each with an optimum combination of compute and storage, and with all of the storage intelligence implemented in software minus the hassles of a build-your-own solution.

To learn more, check out the white paper on the rich storage management features of Spectrum Scale or the white paper on Elastic Storage Server, and see why I think this storage solution is future-ready unified storage for modern workloads.
