Software-defined computing

Elastic Storage Server redefines unified storage for modern workloads


What does unified storage mean for modern workloads?
Ten years ago, the concept of unified storage was new. It meant a single storage solution that served both block and file workloads. Unified storage solved the storage-silo problem of that time: separate storage area network (SAN) and network-attached storage (NAS) arrays. In the early 2000s, the dominant storage workloads were block based and served by SAN arrays, while file-based workloads were limited, primarily file and print or test and development.

Fast-forward to today’s businesses born on the cloud. Today’s workloads consist primarily of unstructured data and are driven by tremendous growth in cloud, analytics, mobile, social and security. So what does unified storage mean for modern workloads?

Unified is no longer block and file. Unified storage for today’s businesses is a storage solution that provides a common pool of storage for file, object and Hadoop/big data workloads. That’s exactly what the IBM Elastic Storage Server does; it redefines unified storage by consolidating file, object and Hadoop workloads into a single storage solution.

Elastic Storage Server not only reduces the capital and operational expenditures of managing a separate storage array for each workload but, more important, creates an agile environment that accelerates your business applications. For example, you can ingest data as objects over HTTP but analyze it at high speed through file interfaces. Customers using Elastic Storage Server for analytics also appreciate that they no longer need to move data between file storage and the Hadoop Distributed File System (HDFS) for Hadoop workloads. Elastic Storage Server includes an HDFS connector to the underlying file system, so you can run Hadoop workloads directly on Elastic Storage Server and eliminate the headaches and delays of copying data back and forth between file storage and HDFS.
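As a rough sketch of what the in-place Hadoop access looks like from the client side, the standard Hadoop core-site.xml simply points fs.defaultFS at the connector service rather than a native HDFS cluster. The hostname and port below are placeholders for illustration, not IBM-documented defaults:

```xml
<!-- core-site.xml on a Hadoop client (illustrative; host and port are placeholders) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- Point at the Spectrum Scale HDFS connector service
         instead of a native HDFS NameNode -->
    <value>hdfs://ess-connector-host:8020</value>
  </property>
</configuration>
```

Because the connector exposes the same underlying file system that the file and object interfaces use, a MapReduce or Spark job reads the data in place; no distcp-style copy into a separate HDFS cluster is required.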

Software-defined storage minus the risks involved in building your own solution
There are many advantages to software-defined storage (SDS), and the analyst community and storage vendors have written about it at length, with varying definitions. But one point everyone agrees on is that in software-defined storage, all of the intelligence for storing, managing and protecting data is implemented in software.

By that definition, Elastic Storage Server is software-defined storage: 100 percent of its storage intelligence is implemented in software. In addition to the IBM Spectrum Scale parallel file system, Elastic Storage Server uses Spectrum Scale RAID, which rebuilds disks in minutes rather than the hours required by traditional RAID. The combination of Spectrum Scale and Spectrum Scale RAID allows Elastic Storage Server to use entirely standard hardware: a pair of servers attached to just-a-bunch-of-disks (JBOD) enclosures. Using JBODs instead of storage controllers reduces the cost of the overall storage solution and passes the savings on to users.
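The minutes-versus-hours rebuild difference comes from declustering: instead of reconstructing an entire failed disk onto a single spare drive, the rebuild work is spread across all the remaining disks. The toy model below illustrates that arithmetic; it is my own illustrative sketch with assumed disk sizes and throughputs, not IBM's implementation or published rebuild figures:

```python
# Toy model of why declustered RAID rebuilds faster than traditional RAID.
# All numbers (8 TB disks, 100 MB/s per-disk rebuild rate, 58-disk array)
# are illustrative assumptions, not measured Elastic Storage Server values.

def traditional_rebuild_hours(disk_tb: float, spare_mb_s: float) -> float:
    """Traditional RAID writes the whole failed disk onto one spare,
    so the single spare's write speed is the bottleneck."""
    return disk_tb * 1e6 / spare_mb_s / 3600


def declustered_rebuild_hours(disk_tb: float, per_disk_mb_s: float,
                              array_disks: int) -> float:
    """Declustered RAID spreads rebuild work across all surviving disks,
    so effective rebuild throughput scales with the disk count."""
    effective_mb_s = per_disk_mb_s * (array_disks - 1)
    return disk_tb * 1e6 / effective_mb_s / 3600


trad = traditional_rebuild_hours(8.0, 100.0)       # roughly a day
decl = declustered_rebuild_hours(8.0, 100.0, 58)   # tens of minutes
print(f"traditional: {trad:.1f} h, declustered: {decl:.2f} h")
```

In practice surviving disks also serve application I/O during a rebuild, so real throughput per disk is lower than this model assumes, but the scaling argument (one spare's bandwidth versus the whole array's) is what turns hours into minutes.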

Why do I say “minus the risks”? Because Elastic Storage Server is an integrated solution: it already includes the optimum server, memory and network hardware for the number of disks it supports. There is no headache in figuring out the right hardware to pair with the storage software, and no exposure to the risks of incompatible firmware or hardware. Elastic Storage Server comes in seven models to meet both high-throughput and high-capacity application needs, each with an optimum combination of compute and storage, all implemented in software, minus the hassles of a build-your-own solution.

To learn more, check out the white paper on the rich storage management features of Spectrum Scale or the white paper on Elastic Storage Server to see why I think this storage solution is future ready, unified storage for modern workloads.

Product Marketing Manager, Elastic Storage Server (Spectrum Scale Integrated Storage)
