Data resiliency refers to an organization’s ability to recover from data breaches and other forms of data loss: to enact business continuity plans immediately, recover lost assets effectively and protect its data aggressively moving forward.
The concept is growing in relevance because organizations face an ever-increasing number of cyberthreats. These threats are becoming more sophisticated as criminal enterprises search for new ways to disrupt organizations or hold their data hostage through ransomware attacks, and they continue to trend upward as attackers discover ways to thwart existing cybersecurity defenses.
Criminal activity is not the only threat to data security, either. Even though much modern computing runs in the cloud, data integrity can be compromised when physical servers are exposed to natural disasters such as flooding, along with the power outages those disasters often trigger. Critical data can also be harmed through human error, such as simple file deletion or poor data management practices.
But regardless of how data is stolen, lost or damaged, an organization must be prepared to execute an adequate incident response strategy at all times. Such a strategy should do the following:
Clearly, the best time to create such a strategy is before it’s needed.
It’s as true of data resiliency as it is of everything else in our world: time is money. And the clock is definitely ticking when a data breach or some other type of data outage occurs.
An organization’s reputation can erode with each passing hour and day, especially if repeat incidents occur or the organization takes too long to implement its disaster recovery plans. If customers experience significant downtime or feel their sensitive data is not being properly protected, they may lose trust and take their business elsewhere.
Because a serious incident could put an organization out of business altogether, the financial importance of data resiliency cannot be overstated.
Some have posited that an organization should be able to withstand three different types of data problems, based on their respective scale and complexity:
These experts say that if an organization can handle all three types of disruption and still maintain normal business practices, that organization can be considered data resilient.
Data resiliency is not a “one-size-fits-all” proposition. However, there are a number of guiding principles organizations should keep in mind when developing their own data resiliency strategy.
For any organization that wants to smartly integrate data resiliency into its operations, its first step should be to back up all of its data. In addition to ensuring that redundant backups are safeguarded from any physical harm and kept in multiple locations, the organization must make sure that backing up data doesn’t affect the data’s ongoing daily use. Beyond that, it’s important that data backups are tested on a regular basis to check their continued viability.
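To make the idea of testing backups concrete, here is a minimal Python sketch of automated backup verification. It assumes backups are plain files mirrored to several locations; all paths and directory names are illustrative, not part of any specific product.

```python
# Minimal sketch of backup verification: compare each live file against its
# copies in multiple backup locations using SHA-256 checksums.
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backups(source_dir: Path, backup_dirs: list[Path]) -> list[str]:
    """Report every backup copy that is missing or does not match the source."""
    problems = []
    for source_file in source_dir.rglob("*"):
        if not source_file.is_file():
            continue
        expected = checksum(source_file)
        relative = source_file.relative_to(source_dir)
        for backup_dir in backup_dirs:
            copy = backup_dir / relative
            if not copy.exists():
                problems.append(f"missing copy: {copy}")
            elif checksum(copy) != expected:
                problems.append(f"corrupted copy: {copy}")
    return problems

if __name__ == "__main__":
    # Hypothetical locations; in practice these would be separate volumes or sites.
    issues = verify_backups(Path("/data/live"),
                            [Path("/backups/site-a"), Path("/backups/site-b")])
    print("backups healthy" if not issues else "\n".join(issues))
```

Running a check like this on a schedule helps confirm that redundant copies remain both present and intact, rather than discovering silent corruption during an actual recovery.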
Making plentiful backups is just one aspect of fostering data resiliency. The other is mounting a proper defense by adopting appropriate cybersecurity standards, such as installing antivirus software, firewalls and intrusion detection systems. These measures can guard against various attacks, such as hacking, phishing or the use of malware. Regular testing of systems and infrastructure is recommended for detecting security issues.
It’s also important that the data protection software an organization chooses follows industry best practices. That means the software needs to support the following:
Should a cyber incident occur, an organization’s immediate priority will be to return to standard operations as soon as possible. That’s why it’s essential that an organization already have a disaster recovery plan that’s ready to be implemented. This plan should combine several types of information: a step-by-step guide to the necessary actions, notes on key data and applications, descriptions of recovery procedures, and contact lists for key personnel.
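One way to keep such a plan actionable is to store it in a structured, machine-readable form. The sketch below is purely illustrative; the fields, systems and contacts are assumptions, not a template from any specific standard.

```python
# Illustrative structure for a disaster recovery runbook entry.
runbook = {
    "scenario": "ransomware outbreak on file servers",
    "recovery_steps": [
        "isolate affected servers from the network",
        "restore latest verified backups to standby hardware",
        "validate application functionality before reopening access",
    ],
    "key_systems": ["customer-orders", "billing-db"],
    "contacts": [
        {"role": "incident lead", "name": "on-call engineer", "phone": "ext. 1001"},
    ],
}

# Print the step-by-step guide in order during an incident.
for step_number, step in enumerate(runbook["recovery_steps"], start=1):
    print(f"{step_number}. {step}")
```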
Data segmentation is a triage process for an existing or anticipated IT emergency in which data files have been, or might be, lost or stolen. Segmentation helps an organization prioritize data files according to how essential they are and how quickly they must be restored. The best time to determine an effective data segmentation strategy is before a disruption occurs; the more preparation an organization can do ahead of such an event, the better.
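A simple way to capture that prioritization ahead of time is to tag each dataset with a recovery tier, so the restore order is already decided when an incident occurs. The tiers, datasets and tolerances below are assumptions chosen for illustration.

```python
# Sketch of data segmentation: tag datasets with a recovery tier and a
# tolerable downtime, then derive the restore order from those tags.
from dataclasses import dataclass

@dataclass
class DataSegment:
    name: str
    tier: int            # 1 = restore first, higher numbers can wait
    max_hours_down: int  # how long the business can tolerate losing this data

CATALOG = [
    DataSegment("customer-orders", tier=1, max_hours_down=1),
    DataSegment("marketing-assets", tier=3, max_hours_down=72),
    DataSegment("employee-directory", tier=2, max_hours_down=24),
]

def restore_order(segments: list[DataSegment]) -> list[DataSegment]:
    """Sort segments so the most critical data is restored first."""
    return sorted(segments, key=lambda s: (s.tier, s.max_hours_down))

for segment in restore_order(CATALOG):
    print(f"restore {segment.name} (tier {segment.tier}, "
          f"tolerate {segment.max_hours_down}h of downtime)")
```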
The immediate benefits of achieving data resiliency are numerous and include guarding against losses of data, boosting reliability and minimizing downtime.
Another benefit of achieving data resiliency is that it supports data retention and persistent data management policies, which help organizations remain compliant with binding requirements for archiving business and legal data.
But perhaps the greatest benefit is how data resiliency can safeguard a company’s reputation. With proper measures in place, a business can keep running smoothly and executing its regular functions even when it experiences a data loss, because many losses can be managed immediately and their effects mitigated before the company’s reputation suffers.
In a best-case scenario, an organization’s customer base will not even know such a data loss has occurred. However, even if the loss becomes common knowledge, it may not harm the company’s reputation, provided the company appears to have acted in good faith, tried to protect customer data and committed to taking all reasonable measures to thwart cyberattacks from that point on.
After numerous high-profile incidents, cyberattacks have lost some of their initial shock value. The public has learned that cyberattacks can target and disrupt any organization, and such attacks are likely to be an unfortunate and ongoing aspect of modern life. Typically, consumers now don’t blame a company that experiences a data loss. They only blame the company if it fails to adequately respond to the incident and doesn’t protect consumer data moving forward.
Developing an effective data resilience strategy depends on successfully considering several variables.
The first variable involves time, specifically recovery time objectives (RTOs). These are target times for getting data or applications back online and restoring services. Another way to consider the RTO is as the amount of time an organization can afford to be without data or apps that have lost accessibility or functionality.
The second variable concerns backup strategy, in particular the maximum amount of time allowed to elapse between data backups, known as the recovery point objective (RPO). In effect, the RPO defines how much recent data an organization can afford to lose.
The third variable can be called “thoroughness” and relates to which data should be safeguarded. Ideally, all workloads would be protected, including various endpoints and Software-as-a-Service (SaaS) apps, such as Microsoft 365, but this is not always possible.
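These first two variables can be recorded per workload so they are easy to monitor. The following is a small sketch under assumed values; the workload name, targets and timestamps are illustrative only.

```python
# Sketch of tracking RTO and RPO targets per workload and flagging workloads
# whose most recent backup is already older than the RPO allows.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ResilienceObjective:
    workload: str
    rto: timedelta        # how long the workload may stay offline
    rpo: timedelta        # how much recent data the business can afford to lose
    last_backup: datetime

def rpo_at_risk(obj: ResilienceObjective, now: datetime) -> bool:
    """True if the time since the last backup already exceeds the RPO."""
    return now - obj.last_backup > obj.rpo

objectives = [
    ResilienceObjective("billing-db", timedelta(hours=2), timedelta(minutes=15),
                        datetime(2024, 1, 1, 8, 0, tzinfo=timezone.utc)),
]
now = datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc)
for obj in objectives:
    status = "RPO exceeded" if rpo_at_risk(obj, now) else "within RPO"
    print(f"{obj.workload}: {status}")
```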
Keeping these variables in mind, designing and implementing an effective data resilience strategy usually entails the following steps:
A data loss can quickly become an emergency situation, and when it occurs, time is truly of the essence. A disaster recovery plan needs to be created beforehand with ample flexibility so it can provide an emergency path forward, regardless of the severity of the data loss event experienced. Disaster recovery plans should be reviewed regularly for any necessary updates.
Managing the profusion of data now being generated and collected can be complex. To foster data resiliency, an organization must inventory all of its data. With a full view of its data, the company then needs to prioritize it by importance so it knows what must be restored and in what order. The most important and sensitive data should be stored using the 3-2-1 method: three copies of the data, kept on two different types of media, with one copy stored offsite.
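A backup inventory can be checked against the 3-2-1 rule automatically. The sketch below assumes a simple, made-up inventory format; the locations and media types are examples, not a prescribed schema.

```python
# Check a backup inventory against the 3-2-1 rule: at least three copies,
# on at least two different media types, with at least one copy offsite.
from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str
    media: str      # e.g. "disk", "tape", "object-storage"
    offsite: bool

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    enough_copies = len(copies) >= 3
    two_media = len({c.media for c in copies}) >= 2
    one_offsite = any(c.offsite for c in copies)
    return enough_copies and two_media and one_offsite

inventory = [
    BackupCopy("primary-datacenter", "disk", offsite=False),
    BackupCopy("secondary-datacenter", "tape", offsite=False),
    BackupCopy("cloud-region-b", "object-storage", offsite=True),
]
print("3-2-1 satisfied" if satisfies_3_2_1(inventory) else "3-2-1 NOT satisfied")
```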
The modern pace of technological change is astounding, and today’s cyberhackers make it their mission to follow news about new capabilities and any vulnerabilities they may contain. That’s why it’s critical that organizations install the latest upgrades for their disaster recovery software as soon as possible.
While crafting a sturdy disaster recovery plan is key, such a plan focuses strictly on resolving the IT emergency itself. Companies also need full contingency plans that address how to return regular business operations to normal, including how to administer assets and manage human resources and business partners.
Developing recovery plans is only half the battle. An organization must also ensure that everyone at the company is familiar with both the disaster recovery plan and any other contingency plans that have been created. Information about the most sensitive data, including where it is stored and where any copies are kept, should remain on a need-to-know basis.
Some companies mistakenly think they only need to identify their recovery point objectives (RPOs) and recovery time objectives (RTOs) once. However, company data fluctuates from month to month, as does the number of applications in operation, so regular testing should be conducted to confirm that RPO and RTO targets are still being met.
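Recovery drills produce measurements that can be compared directly against those targets. The numbers below are made up for illustration; the point is simply to show the comparison, not a real test result.

```python
# Compare measured results from a recovery drill against RTO/RPO targets.
from datetime import timedelta

targets = {"billing-db": {"rto": timedelta(hours=2), "rpo": timedelta(minutes=15)}}
drill_results = {"billing-db": {"time_to_restore": timedelta(hours=3),
                                "data_loss_window": timedelta(minutes=10)}}

for workload, target in targets.items():
    result = drill_results[workload]
    rto_ok = result["time_to_restore"] <= target["rto"]
    rpo_ok = result["data_loss_window"] <= target["rpo"]
    print(f"{workload}: RTO {'met' if rto_ok else 'missed'}, "
          f"RPO {'met' if rpo_ok else 'missed'}")
```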
We live in an age of ceaseless change, and not all of it is good. There are always new cyber schemes being introduced by criminal elements. On top of that, nearly every industry is subject to challenges or threats specific to that industry (e.g., the banking and financial services industry). Accordingly, forward-thinking organizations must work to remain knowledgeable and vigilant about security threats on the horizon.
As a concept, data resiliency is still relatively young, but as a topic, it’s likely to be a permanent part of the IT landscape moving forward. Data resiliency technology is becoming increasingly sophisticated and more popular, as the following trends suggest: