Bringing data protection into the cognitive era

This is part one of a three-part blog post series on innovations in data protection. Read part two here and part three here.

Part one: The evolution of data protection

Data protection has been a critical process for safeguarding information since the earliest computing systems were developed.

Just how data is protected has continually evolved to keep pace with the systems needing protection. When data growth, data value, costs and time are factored into the equation, we have a recipe that necessitates ongoing change. New application requirements, along with new product offerings, have driven the need for regular product updates.

Assigning values to different kinds of data has helped organizations prioritize protection, so that data of varying value can be protected differently. Moving higher-value protected data to disk has become a popular option, while moving certain lower-risk data to the cloud remains a great option as well. Over time, however, data analysts have discovered that the strict lines of data value are starting to blur.

Data availability is the cornerstone of over 75 data protection tools available in the market today. Having access to the data that is captured in the data protection process is key. Since data will always need to be protected (and the methods of data protection continue to evolve), the challenge is finding protection that is simpler, smarter and less expensive than what is available today.

There are a number of variables in the data protection equation. Two of the most common are the recovery point objective (RPO) and the recovery time objective (RTO). The RPO describes how old your data is once you have recovered it (in other words, how much recent data you can afford to lose), while the RTO describes how quickly you are able to get your data back. These two primary characteristics have come to drive the service-level agreements (SLAs) around data protection.

Today, organizations are also working with version recovery objectives (VROs) and geographic recovery objectives (GROs) for data protection. As companies grow and become more global, the SLAs begin to morph and expand.
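To make these objectives concrete, here is a minimal sketch of how a backup administrator might check a protection run against RPO-, RTO- and version-style objectives. The policy names and thresholds are illustrative assumptions, not part of any product's API.

```python
from datetime import datetime, timedelta

# Hypothetical SLA policy; the field names and values are illustrative only.
SLA = {
    "rpo": timedelta(hours=4),   # max acceptable age of recovered data
    "rto": timedelta(hours=1),   # max acceptable time to restore
    "vro": 7,                    # min number of recoverable versions
}

def check_sla(last_backup: datetime, now: datetime,
              estimated_restore: timedelta, versions: int) -> dict:
    """Return a pass/fail verdict for each recovery objective."""
    return {
        "rpo_met": (now - last_backup) <= SLA["rpo"],
        "rto_met": estimated_restore <= SLA["rto"],
        "vro_met": versions >= SLA["vro"],
    }

now = datetime(2019, 6, 1, 12, 0)
verdict = check_sla(last_backup=datetime(2019, 6, 1, 9, 30),
                    now=now,
                    estimated_restore=timedelta(minutes=45),
                    versions=10)
print(verdict)
```

A geographic recovery objective (GRO) would add a similar check on where copies reside, for example requiring at least one copy in a second region.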

IT practitioners need to be looking at software-defined models that are scalable and flexible enough to grow and evolve with the needs of the business. In my view, the best approach combines high availability with superior data recovery and access capabilities, such as IBM Spectrum Protect Plus.

Data protection SLA management

IBM Spectrum Protect Plus provides an intuitive interface for administrators to set up data protection service levels. It uses data protection APIs and VM snapshot technology to automate the creation of data copies. With the advantages of automation and snapshots, data copies are created quickly and are more space efficient, enabling more point-in-time copies, improved recovery point objectives and lower costs.

Designed with ease of use in mind, IBM Spectrum Protect Plus offers two primary methods of management.

  • A self-service portal gives administrators control over backup, recovery, access, monitoring and environment setup. This interface provides the control a backup administrator requires, yet is easy enough for VM or application administrators to use. Integration with Active Directory lets administrators grant or deny management permissions to users who require them. Administrators can also use the application to create data protection templates and define global policies. SLA dashboards provide an at-a-glance view of both successfully completed jobs and those that are faltering, with the ability to correct issues.
  • RESTful API-based management allows administrators to integrate protection and data copy processes with scripts and third-party tools. As new use cases emerge for protected data, this integration method helps ensure an organization’s data protection solution will continue to evolve as the needs of the business evolve.
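As a sketch of what such API-driven integration can look like, the snippet below assembles (but does not send) a REST call that would trigger an on-demand backup of one VM under a named SLA policy. The host, endpoint path, payload fields and token placeholder are all hypothetical assumptions for illustration; the actual IBM Spectrum Protect Plus REST API differs, so consult the product documentation before scripting against it.

```python
import json
from urllib import request

# Hypothetical base URL; a real deployment would use its own host.
BASE_URL = "https://spp.example.com/api"

def build_backup_job_request(sla_policy: str, vm_name: str) -> request.Request:
    """Assemble a POST request asking the server to back up one VM
    under the given SLA policy. Endpoint and fields are illustrative."""
    payload = json.dumps({"slaPolicy": sla_policy,
                          "resource": vm_name}).encode()
    return request.Request(
        url=f"{BASE_URL}/jobs/backup",
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer <session-token>"},
        method="POST",
    )

req = build_backup_job_request("Gold", "db-server-01")
print(req.method, req.full_url)
# A real client would then send it with urllib.request.urlopen(req),
# or wire the same call into a third-party orchestration tool.
```

Because the request is ordinary HTTP plus JSON, the same pattern works from shell scripts, configuration-management tools or monitoring pipelines without any product-specific client library.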