February 9, 2021 By Andrea Sayles 3 min read

Business continuity and disaster recovery (BCDR) plans need to keep pace with increasing business demands and growth in physical and compute infrastructures. For a hyperconnected digital business, even a small disruptive event can ripple through the entire organization. Today most businesses have BCDR plans. But do these plans deliver operational resilience during the moment of truth?

With the world becoming increasingly uncertain and risks proliferating, IT leaders must look beyond crisis management to achieve operational resilience. A review of an organization’s operational resilience posture must prioritize the following four areas.

Data center resilience

Many organizations still have aging data center facilities that aren’t well aligned with current business and technology demands. Cloud migration – even cloud repatriation – in many cases takes place in multiple stages and is seldom planned or governed in an integrated manner. As complexity increases, blind spots emerge that are easy to overlook.

For example, power distribution and cooling systems in data centers are often weak links in BCDR plans. In many legacy data centers, cooling systems are connected separately to diesel generators, not to the uninterruptible power supply (UPS). This puts compute infrastructures at risk of overheating should there be a generator failure. With business growth and changes in compute infrastructures, power equipment and capacities can fall out of alignment, exposing your business to significant risk.

Modern businesses need next-generation data centers that are failsafe, responsive and workload-aware, while complying with industry standards, regulations and green/energy norms.

Integrated business continuity strategy

Closely aligned with a data center strategy should be a holistic BCDR strategy that considers all types of risks (system failure, natural disaster, human error or cyberattack) and outage scenarios, and provides plans for mitigation with minimal or no impact to the business. The strategy must also consider organization and culture, business processes, technology, standards and regulations. And no strategy or plan can be effective unless it is regularly tested. A well-planned data center design, integrated system testing and regular functional tests of the BCDR plan can help IT managers detect equipment faults or vulnerabilities in near real time.

Recoverability and reliability

Business continuity best practices suggest that backup sites be built at physically different locations, in different seismic zones. Cloud-based data protection and recovery allows organizations to back up and store critical data and applications off-site, so they are protected from local disruptions. However, managing backup and storage as well as disaster and cyber recovery for a hybrid environment with hundreds or thousands of applications isn’t easy. Many organizations simply don’t have the resources, skills and expertise to do so.

Recovery at scale within minutes or seconds of an outage in such complex environments can only be achieved with an orchestrated recovery platform – a platform that also allows frequent tests to establish recovery reliability. While manual tests are slow, error-prone and dependent on availability of skilled resources, an orchestrated recovery platform can help eliminate human error and improve recoverability and recovery reliability.
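The core idea behind an orchestrated recovery platform can be sketched in a few lines: recover applications in dependency order and verify each one before moving on, while timing the whole run. This is a minimal illustration, not a real recovery product; the application names, dependency graph and stand-in recovery action are all hypothetical.

```python
import time
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependency graph: each application maps to the set of
# applications that must be recovered before it.
dependencies = {
    "database": set(),
    "app-server": {"database"},
    "web-frontend": {"app-server"},
}

recovered = []

def recover(app):
    """Stand-in for the real recovery action (restore VM, remount storage,
    restart services). Returns True if the post-recovery health check passes."""
    recovered.append(app)
    return True  # a real platform would probe the application's health endpoint

def run_recovery(deps):
    """Recover applications in dependency order and time the whole run."""
    start = time.monotonic()
    for app in TopologicalSorter(deps).static_order():
        if not recover(app):
            raise RuntimeError(f"recovery verification failed for {app}")
    return time.monotonic() - start, list(recovered)

elapsed, order = run_recovery(dependencies)
```

Because the plan is code, it can be executed identically in every test and every real recovery, which is exactly what removes the human error a manual process introduces.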

Rapid response and recovery

While many organizations have robust BCDR plans, the need for planned production downtime inhibits their test schedules. Some use manual runbooks to perform failovers and failbacks, which requires significant training and experience. By automating the runbook, tests and failover/failback processes, organizations can conduct regular disaster and cyber recovery drills that keep the runbooks current and the execution smooth during real disasters.
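Automating a runbook means encoding its steps as ordered, executable actions so a drill runs the same way every time. A minimal sketch of a failover/failback drill, with hypothetical step names standing in for real DNS, replication and orchestration API calls:

```python
# Each runbook step is a small function; the drill executes them in order
# and records what ran, so the log doubles as the drill report.
log = []

def stop_writes_on_primary():  log.append("stop writes on primary")
def promote_dr_replica():      log.append("promote replica at DR site")
def redirect_traffic_to_dr():  log.append("redirect traffic to DR site")
def resync_former_primary():   log.append("resync former primary")
def fail_traffic_back():       log.append("fail traffic back to primary")

FAILOVER = [stop_writes_on_primary, promote_dr_replica, redirect_traffic_to_dr]
FAILBACK = [resync_former_primary, fail_traffic_back]

def run_drill():
    """Execute the full failover then failback sequence; any exception
    raised by a step fails the drill and pinpoints the stale step."""
    for action in FAILOVER + FAILBACK:
        action()
    return log

results = run_drill()
```

When a step drifts out of date (a renamed host, a changed API), the drill fails loudly in a test window instead of silently during a real disaster.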

It’s not enough to have data backups or IT infrastructure components available in real time. Organizations need the ability to quickly recover critical applications and data supporting business operations. Increasing cases of cyberattacks put the highest emphasis on the integrity of data being replicated in real time, as the backup data itself can also be corrupted.
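One common way to catch corruption in replicated data is to fingerprint it at the source and re-verify the fingerprint at the backup site. A minimal sketch using SHA-256; real data protection products layer this with immutable snapshots and anomaly detection, and the sample payload here is purely illustrative:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 fingerprint computed at the source before replication."""
    return hashlib.sha256(data).hexdigest()

def verify_replica(source_digest: str, replica_data: bytes) -> bool:
    """Recompute the hash at the backup site and compare to the source's."""
    return digest(replica_data) == source_digest

payload = b"critical business records"
fingerprint = digest(payload)

intact = verify_replica(fingerprint, payload)            # clean copy passes
corrupted = verify_replica(fingerprint, payload + b"x")  # tampered copy fails
```

If an attacker or a fault alters the replica, the recomputed hash no longer matches, so the corruption is detected before the bad copy is trusted for recovery.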

As the stakes get higher, achieving operational resilience is a business imperative. Many organizations have witnessed devastating outages over the past few years, some of which could have been avoided. The cost of ignoring these situations will be increasingly dear in a post-COVID, hyper-digitized era.

To learn more about how a small disruptive event can have a ripple effect across your company, and what you can do to prevent it, explore the Moment of Truth.
