What does unified storage mean for modern workloads?
Ten years ago, the concept of unified storage was new: a single storage solution that served both block and file workloads. It solved the storage-silo problem of that era, in which separate storage area network (SAN) and network-attached storage (NAS) arrays each served their own workloads. In the early 2000s, the dominant workloads were block based and served by SAN arrays; file-based workloads were limited, primarily file and print or test and dev.
Fast-forward to today’s businesses born on the cloud. Modern workloads consist primarily of unstructured data, driven by tremendous growth in cloud, analytics, mobile, social and security. So what does unified storage mean for modern workloads?
Unified no longer means just block and file. For today’s businesses, unified storage is a solution that provides a common pool of storage for file, object and Hadoop/big data workloads. That’s exactly what the IBM Elastic Storage Server does: it redefines unified storage by consolidating file, object and Hadoop workloads into a single storage solution.
Elastic Storage Server not only reduces the capital and operational expenditures of managing separate storage arrays for each workload but, more importantly, creates an agile environment for accelerating your business applications. For example, you can ingest data as objects over HTTP and then analyze it at high speed through file interfaces. Customers using Elastic Storage Server for analytics also appreciate that they no longer need to move data between file storage and the Hadoop Distributed File System (HDFS) for Hadoop workloads: Elastic Storage Server includes an HDFS connector to the underlying file system, so Hadoop workloads run directly on Elastic Storage Server, eliminating the delays of copying data back and forth.
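The reason an object written over HTTP can later be read through a file interface is that both interfaces sit on the same underlying file system, with the object namespace mapped onto directories and files. The exact on-disk layout is product-specific; the mapping below is a simplified, hypothetical illustration of the idea (the mount point and namespace components are made-up examples, not an ESS specification):

```python
def object_to_file_path(fs_mount: str, account: str, container: str, obj: str) -> str:
    """Hypothetical mapping: components of an object's name become
    directories under the shared file system mount, so the same bytes
    are reachable as a POSIX file after an HTTP PUT."""
    return f"{fs_mount}/{account}/{container}/{obj}"

# An object PUT to /acct1/logs/2015/day1.json would land here,
# where file-based analytics tools could open it directly:
path = object_to_file_path("/gpfs/objfs", "acct1", "logs", "2015/day1.json")
print(path)
```

In a real deployment the storage software maintains this mapping itself; the point of the sketch is only that object ingest and file access share one namespace, so no copy step sits between them.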
Software defined storage minus the risks involved in building your own solution
Software defined storage (SDS) has many advantages, and the analyst community and storage vendors have written about it at length, with varying definitions. But one point everyone agrees on is that in software defined storage, all of the intelligence for storing, managing and protecting data is implemented in software.
By that definition, Elastic Storage Server is software defined storage: 100 percent of its storage intelligence is implemented in software. In addition to the IBM Spectrum Scale parallel file system (formerly General Parallel File System, or GPFS), Elastic Storage Server uses Spectrum Scale RAID, a declustered RAID implementation that rebuilds a failed disk in minutes rather than the hours traditional RAID requires, because the rebuild work is spread across all the disks in the array instead of funneling onto a single spare. The combination of Spectrum Scale and Spectrum Scale RAID allows Elastic Storage Server to use standard hardware: a pair of servers attached to just a bunch of disks (JBODs). Using JBODs instead of storage controllers reduces the cost of the overall storage solution and passes the savings on to users.
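The minutes-versus-hours rebuild difference falls out of simple arithmetic: a traditional rebuild writes one disk’s worth of data at a single drive’s speed, while a declustered rebuild divides that work across many drives. A rough back-of-the-envelope sketch (the disk size, throughput and drive counts below are illustrative assumptions, not ESS specifications):

```python
# Illustrative rebuild-time comparison: traditional vs. declustered RAID.
# All figures are assumed for the sake of the example.

def rebuild_hours(disk_tb: float, mb_per_s: float, parallel_disks: int) -> float:
    """Time to reconstruct one failed disk's data, given how many
    disks share the rebuild work."""
    total_mb = disk_tb * 1_000_000           # data to reconstruct, in MB
    throughput = mb_per_s * parallel_disks   # aggregate rebuild bandwidth
    return total_mb / throughput / 3600      # seconds -> hours

# Traditional RAID: the rebuild funnels onto a single spare drive.
traditional = rebuild_hours(disk_tb=8, mb_per_s=100, parallel_disks=1)

# Declustered RAID: the rebuild is spread across, say, 100 drives.
declustered = rebuild_hours(disk_tb=8, mb_per_s=100, parallel_disks=100)

print(f"traditional: {traditional:.1f} h")        # tens of hours
print(f"declustered: {declustered * 60:.1f} min") # minutes
```

With these assumed numbers the single-spare rebuild takes roughly a day while the declustered rebuild finishes in under a quarter of an hour, which is the scale of difference the article describes.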
Why do I say “minus the risks”? Because Elastic Storage Server is an integrated solution: it already pairs the right server, memory and network hardware with the number of disks it supports. There is no guesswork in matching hardware to the storage software, and no exposure to incompatible firmware or hardware. Elastic Storage Server comes in seven models to meet both high-throughput and high-capacity application needs, each with an optimum combination of compute and storage, minus the hassles of a build-your-own solution.
To learn more, check out the white paper on the rich storage management features of Spectrum Scale or the white paper on Elastic Storage Server, and see why I think this storage solution is future-ready, unified storage for modern workloads.