August 16, 2021 | 2 min read
By David Nguyen, Lucas Conforti and Karen Tylak

Going cloud means automatic resiliency, right? Yes and no. It depends on what you are doing in the cloud.

Building resilient applications on the cloud is a combination of using the resilient features that are offered by IBM Cloud and aligning them with your business objectives.

The underlying cloud infrastructure is designed to avoid a single point of failure with the following features:

  • Redundant power feeds
  • Redundant network devices and connections
  • Redundant systems

Well, you get the point. 

We’re going to talk through some of the resiliency features that are provided to you, and what you can do to further improve your resiliency when you are using IBM Cloud VPC. This blog post is the first of a multi-part series.

Is resiliency automatic?

Yes, you are automatically protected from a single point of failure at the system and network level.

This applies whether you are moving existing workloads into IBM Cloud VPC or developing your applications there.

IBM Cloud VPC offerings are designed to automatically protect you against a shared single point of failure at the hardware and network level. IBM Cloud VPC is built on multi-zone regions (MZRs): a VPC can span multiple availability zones that are interconnected with redundant, high-speed connections, ensuring robust business continuity.

For example, when you order a virtual server instance (VSI), the VSI is instantiated on a system that has redundant power, fans and dual network connections. The VSI can also be automatically migrated in response to certain host events to ensure the highest uptime.
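
To make this concrete, here is a minimal Terraform sketch of spreading VSIs across the zones of a region, assuming the IBM Cloud Terraform provider (IBM-Cloud/ibm). The region, zone names, instance profile and the image/SSH-key variables are placeholders for illustration, not values from this post:

    terraform {
      required_providers {
        ibm = {
          source = "IBM-Cloud/ibm"
        }
      }
    }

    provider "ibm" {
      region = "us-south" # placeholder region
    }

    variable "image_id" {}   # assumed: ID of an OS image
    variable "ssh_key_id" {} # assumed: ID of an SSH key

    resource "ibm_is_vpc" "vpc" {
      name = "resilient-vpc"
    }

    # One subnet per availability zone, so workloads can be spread out.
    resource "ibm_is_subnet" "subnet" {
      for_each                 = toset(["us-south-1", "us-south-2", "us-south-3"])
      name                     = "subnet-${each.key}"
      vpc                      = ibm_is_vpc.vpc.id
      zone                     = each.key
      total_ipv4_address_count = 256
    }

    # One VSI per zone; losing a zone leaves the others serving traffic.
    resource "ibm_is_instance" "vsi" {
      for_each = ibm_is_subnet.subnet
      name     = "vsi-${each.key}"
      vpc      = ibm_is_vpc.vpc.id
      zone     = each.key
      profile  = "bx2-2x8" # placeholder profile
      image    = var.image_id
      keys     = [var.ssh_key_id]
      primary_network_interface {
        subnet = each.value.id
      }
    }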

Similarly, other VPC service offerings, such as Load Balancer as a Service (LBaaS) or VPN gateways, run on redundant systems to give you a highly available service with no action needed on your part.
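
Building on the subnets above, one way to pick up that redundancy is to attach a load balancer to subnets in multiple zones. Again, this is a sketch rather than a definitive recipe, and the name is a placeholder:

    # A public application load balancer spanning all three zone subnets.
    resource "ibm_is_lb" "lb" {
      name    = "multi-zone-lb"
      type    = "public"
      subnets = [for s in ibm_is_subnet.subnet : s.id]
    }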

Keep in mind that resiliency can differ between services, so we highly recommend that you always read through the documentation for each service you use.

So I don’t need to do anything?

Not quite. There are additional steps you can take to improve your resiliency.

Other actions you can take to improve your overall resiliency strategy include the following (sketches of the placement group, auto-scale and snapshot features follow below):

  • Ensure you distribute the VSI workload and cluster resources correctly across zones.
  • Take advantage of anti-affinity through VSI placement groups.
  • Set up auto-scale to dynamically add and remove VSIs based on workload.
  • Set up snapshots for backup and disaster recovery.
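
For the anti-affinity item above, a placement group with a spread strategy asks the scheduler to keep member VSIs on separate hosts, so a single host failure cannot take out the whole cluster. This sketch builds on the earlier resources; the strategy value and the placement_group argument on the instance reflect our reading of the IBM Cloud Terraform provider, so verify them against the provider docs:

    # Keep member VSIs on different physical hosts.
    resource "ibm_is_placement_group" "spread" {
      name     = "anti-affinity-group"
      strategy = "host_spread" # "power_spread" separates power/network zones instead
    }

    resource "ibm_is_instance" "cluster" {
      count           = 2
      name            = "cluster-node-${count.index}"
      vpc             = ibm_is_vpc.vpc.id
      zone            = "us-south-1" # placeholder zone
      profile         = "bx2-2x8"    # placeholder profile
      image           = var.image_id
      keys            = [var.ssh_key_id]
      placement_group = ibm_is_placement_group.spread.id
      primary_network_interface {
        subnet = ibm_is_subnet.subnet["us-south-1"].id
      }
    }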

You should take the time to plan your workload and VSIs and to set up the auto-scale, placement group and snapshot features. It is easy to forget these tasks when you are doing a million other things, but they are a critical part of protecting your workloads and applications.
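
Here is one possible shape of the auto-scale and snapshot pieces, again sketched against the IBM Cloud Terraform provider and building on the earlier resources. Attribute names such as manager_id and source_volume reflect our reading of the provider, and the boot-volume variable is an assumption for illustration:

    variable "boot_volume_id" {} # assumed: ID of the volume to protect

    # Template describing the VSIs the group will create.
    resource "ibm_is_instance_template" "tpl" {
      name    = "autoscale-template"
      vpc     = ibm_is_vpc.vpc.id
      zone    = "us-south-1" # placeholder zone
      profile = "bx2-2x8"    # placeholder profile
      image   = var.image_id
      keys    = [var.ssh_key_id]
      primary_network_interface {
        subnet = ibm_is_subnet.subnet["us-south-1"].id
      }
    }

    # Instance group spread across the zone subnets.
    resource "ibm_is_instance_group" "group" {
      name              = "autoscale-group"
      instance_template = ibm_is_instance_template.tpl.id
      instance_count    = 2
      subnets           = [for s in ibm_is_subnet.subnet : s.id]
    }

    # Autoscale manager: keep between 2 and 6 members depending on load.
    resource "ibm_is_instance_group_manager" "mgr" {
      name                 = "autoscale-manager"
      instance_group       = ibm_is_instance_group.group.id
      manager_type         = "autoscale"
      enable_manager       = true
      min_membership_count = 2
      max_membership_count = 6
    }

    # Scale toward an average CPU utilization of 70%.
    resource "ibm_is_instance_group_manager_policy" "cpu" {
      name                   = "cpu-target"
      instance_group         = ibm_is_instance_group.group.id
      instance_group_manager = ibm_is_instance_group_manager.mgr.manager_id
      metric_type            = "cpu"
      metric_value           = 70
      policy_type            = "target"
    }

    # Point-in-time snapshot of a volume for backup and disaster recovery.
    resource "ibm_is_snapshot" "backup" {
      name          = "boot-snapshot"
      source_volume = var.boot_volume_id
    }

With terraform apply, the manager grows or shrinks the group within those bounds around the CPU target, and the snapshot gives you a restore point for the volume.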

Automation help

We also provide Terraform scripts to help automate setting up your infrastructure in a resilient manner. The scripts can be used as a tutorial or modified to fit your business and application requirements.

What’s next?

Stay tuned; upcoming posts in this series will dig deeper into each of these resiliency features.
