November 12, 2019 By Vu Le 3 min read

Major headlines every week seem to focus on data breaches or other cyber events against well-known, reputable businesses and government agencies. Cyberattacks are becoming more frequent and sophisticated, so it’s no longer a question of if one will affect your organization, but when. Certain attacks, such as ransomware, can cripple an organization, if not shut it down completely, which is why every organization needs to focus on cyber-resiliency.

Cyber-resiliency is the ability to continue operations in the event of a cyberattack. While cyber-resiliency has multiple aspects, in this post I want to focus on storage resiliency, which should be designed around three key assumptions:

  1. Compromise is inevitable.
  2. Critical data must be copied and stored beyond the reach of compromise.
  3. Organizations must have the tools to automate, test and learn to recover when a breach or attack occurs.

Let’s break down each of these aspects and look at what organizations can do to bolster their cyber-resiliency.

Compromise is inevitable

While it’s nearly impossible in today’s world to completely avoid data breaches or other cyberattacks, there are certain practices that enhance security and help protect against attacks:

  • Discover and patch systems
  • Automatically fix vulnerabilities
  • Adopt a zero-trust policy

However, when an attack comes, you need a plan to be able to respond and recover rapidly.

Critical data must be copied and stored beyond the reach of compromise

Organizations need to understand what data is required for their operations to continue running, such as customer account information and transactions. Protected copies of this mission-critical data shouldn’t be accessible or modifiable from production systems, which can be compromised.

There are several important points of consideration in protecting data:

Limit privileged users: Oftentimes, threats come from internal actors or from an external attacker who has compromised a superuser account, giving the attacker total control and the ability to corrupt or destroy production and backup data. You can help prevent this by limiting privileged accounts and only authorizing access on an as-needed basis.
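
As a minimal illustration of this practice, the sketch below (in Python, with a placeholder group name and approved-account list that are not from this article) flags members of a privileged group who aren’t on a pre-approved list, which is one simple way to keep an eye on privileged access:

```python
# Hypothetical sketch: flag privileged accounts that are not on an approved,
# as-needed access list. Group and account names are illustrative only.
import grp

# Assumption: this list is maintained and reviewed by the security team.
APPROVED_ADMINS = {"storage_admin_1", "backup_operator_1"}

def audit_privileged_group(group_name="wheel"):
    """Return members of a privileged group that are not pre-approved."""
    members = set(grp.getgrnam(group_name).gr_mem)
    return sorted(members - APPROVED_ADMINS)

if __name__ == "__main__":
    unexpected = audit_privileged_group()
    if unexpected:
        print("Review these privileged accounts:", unexpected)
    else:
        print("No unexpected privileged accounts found.")
```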

Generate immutable copies: It’s critical to have protected copies of your data that can’t be manipulated. There are multiple storage options for ensuring the immutability of your most critical data, such as Write Once Read Many (WORM) media like tape, cloud object storage or specialized storage devices. By contrast, a snapshot that can be mounted to a host is still corruptible.
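
As one example of WORM-style cloud object storage, the sketch below uses the Amazon S3 Object Lock feature through boto3 to write a backup object with a compliance-mode retention date. The bucket, key and retention window are assumptions for illustration; other object stores offer similar retention locks.

```python
# Minimal sketch: write a backup copy to object storage with a WORM-style
# retention lock, so it cannot be overwritten or deleted until the date passes.
# Bucket name, key and retention window are placeholders, not from the article.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")
retain_until = datetime.now(timezone.utc) + timedelta(days=30)

with open("critical-data-backup.tar.gz", "rb") as backup:
    s3.put_object(
        Bucket="example-immutable-backups",  # bucket must be created with Object Lock enabled
        Key="backups/critical-data-backup.tar.gz",
        Body=backup,
        ObjectLockMode="COMPLIANCE",         # compliance mode: retention cannot be shortened
        ObjectLockRetainUntilDate=retain_until,
    )
```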

Maintain isolation: You also need to maintain logical and physical separation between protected copies of the data and host systems. For example, put a network air gap between a host and its protected copies.

Consider performance: Different methods of data protection come with different performance characteristics: copy duration (how long will the backup take, and what is the performance impact on production?), recovery point objective (RPO; how current is my protected data?) and recovery time objective (RTO; how fast can I restore my data?). Organizations need to understand the tradeoffs between their budgets and their business objectives.
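
To make the tradeoff concrete, here is a rough back-of-the-envelope sketch; the data size, backup interval and restore throughput are hypothetical numbers, not figures from this article:

```python
# Rough worked example of the RPO/RTO tradeoff with made-up numbers.
data_size_tb = 50                       # protected data set size (assumption)
backup_interval_hours = 6               # how often protected copies are taken (assumption)
restore_throughput_gb_per_hour = 2000   # effective restore speed of the recovery system (assumption)

# RPO: in the worst case, you lose everything written since the last protected copy.
worst_case_rpo_hours = backup_interval_hours

# RTO: time to copy the data back, ignoring validation and application restart.
estimated_rto_hours = (data_size_tb * 1000) / restore_throughput_gb_per_hour

print(f"Worst-case RPO: about {worst_case_rpo_hours} hours of data")
print(f"Estimated RTO: about {estimated_rto_hours:.1f} hours to restore {data_size_tb} TB")
```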

Organizations must have the tools to automate, test and learn to recover when a breach or attack occurs

Build automation: Restoration and recovery normally involve multiple complex steps and coordination across multiple systems. The last thing you want to worry about in a high-pressure, time-critical situation is the possibility of user error. Automating recovery procedures provides a consistent approach in any situation.
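
As a rough sketch of what scripted recovery can look like, the Python example below chains a few ordered, repeatable steps behind a single entry point. The step names and functions are illustrative placeholders, not an actual recovery procedure.

```python
# Sketch of a scripted recovery runbook: each step is an ordered, repeatable
# action, so an operator runs one command instead of improvising under pressure.
# Step names and functions are illustrative placeholders.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("recovery")

def isolate_affected_hosts():
    log.info("Isolating affected hosts from the network")

def mount_protected_copy():
    log.info("Mounting the latest validated protected copy")

def restore_critical_data():
    log.info("Restoring mission-critical data to clean systems")

def validate_and_resume():
    log.info("Validating restored data and resuming operations")

RUNBOOK = [isolate_affected_hosts, mount_protected_copy, restore_critical_data, validate_and_resume]

def run_recovery():
    for step in RUNBOOK:
        step()  # in a real runbook, each step would check results and stop on failure
    log.info("Recovery runbook completed")

if __name__ == "__main__":
    run_recovery()
```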

Make it easy to use: Recovery methods should be straightforward enough for operators to handle without calling 10 different engineers, especially in a high-pressure situation. Tools such as push-button web interfaces that can launch an automated disaster recovery process make recovery more accessible.

Practice makes perfect: Testing the recovery process often is important, not only to validate the process but also to give the people executing it familiarity with it. This can be done using recovery systems that won’t affect production systems.

It’s not just important to focus on cybersecurity and the prevention of cyberattacks; it’s equally important to be able to recover and continue operations when attacks occur.

IBM Systems Lab Services has a team of consultants ready to help organizations address the risks and impacts of cyberattacks. We can help you plan ahead, detect issues and recover quickly should a breach occur. If you have storage or cyber-resiliency questions, please contact us.
