Bringing data protection into the cognitive era


This is part one of a three-part blog post series on innovations in data protection. Read part two here and part three here.

Part one: The evolution of data protection

Data protection has been a critical process for safeguarding information for as long as computing systems have existed.

Just how data is protected has continually evolved to keep pace with the systems needing protection. When data growth, data value, costs and time are factored into the equation, we have a recipe that necessitates ongoing change. New application requirements, as well as new product offerings, have driven the need for regular product updates.

Assigning values to different kinds of data has helped organizations prioritize protection, so that data of varying value can be protected differently. Moving higher-value protected data to disk has become a popular option, while moving certain lower-risk data to the cloud remains a good option as well. Over time, however, data analysts have found that the strict lines of data value are starting to blur.

Data availability is the cornerstone of the more than 75 data protection tools available in the market today. Having access to the data that is captured in the data protection process is key. Since data will always need to be protected (and the methods of data protection continue to evolve), the challenge is finding protection that is simpler, smarter and less expensive than what is available today.

There are a number of variables in the data protection equation. Two of the most common are the recovery point objective (RPO) and the recovery time objective (RTO). The RTO describes how quickly you are able to get your data back; the RPO describes how old that data is once you have it back. These two primary characteristics have started to drive the service-level agreement (SLA) around data protection.
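To make the two objectives concrete, here is a minimal sketch of how an incident can be measured against RPO and RTO targets. The SLA values, timestamps and function names are illustrative assumptions, not taken from any particular product:

```python
from datetime import datetime, timedelta

# Hypothetical SLA targets, for illustration only
RPO = timedelta(hours=4)   # maximum tolerable data loss (age of last copy)
RTO = timedelta(hours=1)   # maximum tolerable downtime before recovery

def check_sla(last_backup, failure_time, restore_done):
    """Return (rpo_met, rto_met) for a single outage incident."""
    data_loss = failure_time - last_backup   # how old the recovered data is
    downtime = restore_done - failure_time   # how long recovery took
    return (data_loss <= RPO, downtime <= RTO)

# Example incident: backup at 02:00, failure at 05:30, restored by 06:15
last_backup = datetime(2019, 6, 1, 2, 0)
failure = datetime(2019, 6, 1, 5, 30)
restored = datetime(2019, 6, 1, 6, 15)
print(check_sla(last_backup, failure, restored))  # (True, True)
```

Here the recovered data is 3.5 hours old (within the 4-hour RPO) and the service was back in 45 minutes (within the 1-hour RTO), so both objectives are met.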

Today, organizations are also working with version recovery objectives (VROs) and geographic recovery objectives (GROs) for data protection. As companies grow and become more global, the SLAs begin to morph and expand.

IT practitioners need to be looking at software-defined models that are scalable and flexible enough to grow and evolve with the needs of the business. In my view, the best solution is one that employs a high-availability solution with superior data recovery and access capabilities, such as IBM Spectrum Protect Plus.

Data protection SLA management

IBM Spectrum Protect Plus provides an intuitive interface for administrators to set up data protection service levels. It uses VM data protection APIs and snapshot technology to automate the creation of data copies. With the advantages of automation and snapshot technology, data copies are created quickly and are more space efficient, enabling more point-in-time copies for improved recovery point objectives at lower cost.

Designed with ease of use in mind, IBM Spectrum Protect Plus offers two primary methods of management.

  • A self-service portal gives administrators control over backup, recovery, access, monitoring and environment setup. This interface provides the control that a backup administrator requires, yet is easy enough for VM or application administrators to use. Integration with Active Directory helps an administrator grant or deny management permissions to the users who require them. Administrators can also use the application to create data protection templates and define global policies. SLA dashboards provide an at-a-glance view of both successfully completed jobs and those that are faltering, with the ability to correct issues.
  • RESTful API-based management allows administrators to integrate protection and data copy processes with scripts and third-party tools. As new use cases emerge for protected data, this integration method helps ensure an organization’s data protection solution will continue to evolve as the needs of the business evolve.
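As a sketch of what script-based integration with a RESTful management interface can look like, the snippet below builds (but does not send) an HTTP request to start a backup job. The base URL, endpoint path, header names and payload fields are hypothetical placeholders, not the documented IBM Spectrum Protect Plus API:

```python
import json
from urllib import request

# Hypothetical server address for illustration
BASE = "https://spp.example.com/api"

def build_job_request(token, sla_policy, vm_name):
    """Construct (without sending) a REST request to start a backup job
    for one VM under a named SLA policy. All names here are assumptions."""
    payload = json.dumps({"slaPolicy": sla_policy,
                          "resource": vm_name}).encode()
    return request.Request(
        f"{BASE}/jobs",                      # hypothetical endpoint
        data=payload,
        headers={"X-Auth-Token": token,      # hypothetical auth header
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_job_request("abc123", "Gold", "vm-finance-01")
print(req.get_method(), req.full_url)
```

In practice such a script would be scheduled or wired into third-party tooling, which is the kind of evolving use case the API-based management method is meant to support.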