
Elastic Storage Server redefines unified storage for modern workloads


What does unified storage mean for modern workloads?
Ten years ago, the concept of unified storage was new. It meant a single storage solution that served both block and file workloads. Unified storage solved the storage-silo problem of that era: separate storage area network (SAN) and network-attached storage (NAS) arrays. In the early 2000s, the dominant storage workloads were block based and served by SAN arrays. File-based workloads were limited, primarily file and print or test and development.

Fast-forward to today’s businesses born on the cloud. Today’s workloads consist primarily of unstructured data, driven by tremendous growth in cloud, analytics, mobile, social and security. So what does unified storage mean for modern workloads?

Unified is no longer block and file. Unified storage for today’s businesses is a storage solution that provides a common pool of storage for file, object and Hadoop/big data workloads. That’s exactly what the IBM Elastic Storage Server does; it redefines unified storage by consolidating file, object and Hadoop workloads into a single storage solution.

Elastic Storage Server not only reduces the capital and operational expenditures involved in managing separate storage arrays for each workload but, more important, creates an agile environment for accelerating your business applications. For example, you can ingest data as objects over HTTP but analyze it at high speed through file interfaces. Customers using Elastic Storage Server for analytics also appreciate that they no longer need to move data between file storage and the Hadoop Distributed File System (HDFS) for Hadoop workloads. Elastic Storage Server includes an HDFS connector to the underlying file system, so you can run your Hadoop workloads directly on Elastic Storage Server and eliminate the headaches and delays of moving data back and forth between file storage and HDFS storage.
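The dual-access idea can be sketched conceptually: because the object and file interfaces sit on one shared pool, data ingested one way is immediately readable the other way, with no copy step. The sketch below is a toy illustration only, standing in for the shared pool with a local temporary directory; `shared_pool`, `put_object` and the on-disk layout are assumptions for illustration, not the Spectrum Scale or Elastic Storage Server API.

```python
import os
import tempfile

# Stand-in for the single shared storage pool: a local directory.
# In Elastic Storage Server this role is played by one Spectrum Scale
# file system exposed through object, file and HDFS interfaces.
shared_pool = tempfile.mkdtemp()

def put_object(container, name, data):
    """Hypothetical object-style ingest (in a real deployment this
    would be an HTTP PUT to the object interface)."""
    path = os.path.join(shared_pool, container)
    os.makedirs(path, exist_ok=True)
    with open(os.path.join(path, name), "wb") as f:
        f.write(data)

# Ingest data as an "object"...
put_object("sensor-data", "readings.csv", b"t,temp\n0,21.4\n")

# ...then analyze the very same bytes through the file interface,
# with no copy or migration step in between.
with open(os.path.join(shared_pool, "sensor-data", "readings.csv"), "rb") as f:
    print(f.read().decode())
```

The point of the sketch is the single namespace: one write, two access paths, zero data movement.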

Software defined storage minus the risks involved in building your own solution
There are many advantages to software defined storage (SDS), and the analyst community and storage vendors have written about it at length, offering varying definitions. But one point everyone agrees on is that in software defined storage, all of the intelligence for storing, managing and protecting data is implemented in software.

By that definition, Elastic Storage Server is software defined storage: 100 percent of its storage intelligence is implemented in software. In addition to the IBM Spectrum Scale parallel file system (formerly General Parallel File System), Elastic Storage Server uses Spectrum Scale RAID, which rebuilds failed disks in minutes rather than the several hours typical of traditional RAID. The combination of Spectrum Scale and Spectrum Scale RAID allows Elastic Storage Server to use entirely standard hardware: a pair of servers attached to just a bunch of disks (JBOD) enclosures. This ability to use JBODs instead of storage controllers reduces the cost of the overall storage solution and passes the savings on to users.
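A back-of-the-envelope model shows why a declustered scheme like Spectrum Scale RAID rebuilds so much faster: a traditional RAID rebuild writes an entire replacement disk at one disk's speed, while a declustered rebuild spreads the reconstruction work across every surviving disk in the pool. All numbers below (disk size, per-disk bandwidth, pool size, bandwidth throttle) are assumptions for illustration, not measured Elastic Storage Server figures.

```python
def traditional_rebuild_hours(disk_tb, write_mbps):
    """Traditional RAID: the whole replacement disk is rewritten
    at a single disk's write speed."""
    seconds = disk_tb * 1e6 / write_mbps  # TB -> MB
    return seconds / 3600

def declustered_rebuild_hours(disk_tb, write_mbps, num_disks, throttle=0.25):
    """Declustered RAID: rebuild work is spread across all surviving
    disks; 'throttle' caps the fraction of each disk's bandwidth used,
    leaving the rest for application I/O."""
    aggregate_mbps = write_mbps * (num_disks - 1) * throttle
    seconds = disk_tb * 1e6 / aggregate_mbps
    return seconds / 3600

trad = traditional_rebuild_hours(4, 150)        # one 4 TB disk at 150 MB/s
decl = declustered_rebuild_hours(4, 150, 120)   # same disk in a 120-disk pool
print(f"traditional: {trad:.1f} h, declustered: {decl:.1f} h")
# prints: traditional: 7.4 h, declustered: 0.2 h
```

Even with only a quarter of each surviving disk's bandwidth devoted to the rebuild, the aggregate bandwidth of a large pool turns an hours-long rebuild into minutes, which is the effect the paragraph above describes.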

Why do I say “minus the risks”? Because Elastic Storage Server is an integrated solution: it already includes the optimum server, memory and network hardware to go with the number of disks it supports. There is no headache in figuring out the right hardware to pair with the storage software, and no exposure to incompatible firmware or hardware. Elastic Storage Server comes in seven models to meet both high-throughput and high-capacity application needs, each with an optimum combination of compute and storage, all storage intelligence implemented in software, and none of the hassles of a build-your-own solution.

To learn more, check out the white paper on the rich storage management features of Spectrum Scale or the white paper on Elastic Storage Server to see why I think this storage solution is future-ready, unified storage for modern workloads.

