Modernizing your cloud application? Don’t forget scalability, availability, and automation.

In my last blog post, I explored why treating a computer system as finished once it’s deployed in the cloud is an anti-pattern, and why continual modernization is the smart way to maintain such apps. In this post, I’m going to take the ideas a little further and look at scalability, availability, and automation.

How the cloud impacts scalability and availability

The fundamental non-functional considerations of any IT system design centre on scalability and availability. Scalability is the ability of a computer system to grow its capacity as usage increases. Availability is the ability of the system to keep running without service interruption.

The advent of cloud has generally made both of these qualities easier to achieve. Indeed, elasticity—the ability to scale up and down—is one of the core features of a cloud service. I am surprised, however, that many customers still fall into the trap of thinking “the cloud is highly available, ergo my application is highly available.” This is actually another common cloud computing anti-pattern.

It’s true that many cloud services do offer high availability. IBM Cloud offers highly available database services, storage options, and more. But applications need to be developed such that they can still operate, to a degree, when a cloud service is unexpectedly unavailable.

For example, if the application tries to write to a database that is down for maintenance, will the application fall over in a heap and show the user a nasty error message, or will it detect the outage and cope with it? If you’re a cloud application developer and haven’t already, I would highly recommend you read The Twelve-Factor App.
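To make “cope with it” concrete, here is a minimal sketch of the retry-and-degrade idea in Python. The write_fn callable and the in-memory fallback queue are assumptions for illustration, not part of any IBM Cloud SDK; a production application would typically use a durable message queue or a circuit-breaker library instead.

```python
import logging
import time

pending_writes = []  # illustrative in-memory fallback; use a durable queue in production

def write_with_retry(write_fn, record, retries=3, base_delay=1.0):
    """Attempt a database write, retrying with exponential backoff on outages."""
    for attempt in range(retries):
        try:
            return write_fn(record)
        except ConnectionError as err:  # substitute your database driver's outage exception
            wait = base_delay * (2 ** attempt)  # back off: 1s, 2s, 4s, ...
            logging.warning("Database unavailable (%s); retrying in %.0fs", err, wait)
            time.sleep(wait)
    # Retries exhausted: degrade gracefully rather than showing the user a raw error.
    pending_writes.append(record)
    return None
```

The key point is the final branch: rather than crashing, the application records the work for later replay and can tell the user their request has been accepted.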

Admittedly, infrastructure availability is a little harder to achieve and, in general, it relies on deploying at least two of everything. However, as you modernize, you can build availability in as you go. For example, moving to a Virtual Private Cloud (VPC) makes it far easier to deploy services across multiple zones in a region. And if you migrate your databases to one of our database service offerings, high availability, as well as scalability, is taken care of for you.

Where containerization comes in

Containerization via IBM Cloud Kubernetes Service or Red Hat OpenShift also helps with high availability and scalability. In both cases, the underlying Kubernetes container orchestrator detects node failures and reschedules the affected containers onto surviving nodes.

For a closer look at Kubernetes and container orchestration, see our video “Kubernetes Explained.”

It’s also possible to set up auto-scaling rules that react quickly and help applications cope with changing load. Our multi-zone clusters enhance availability still further, and we have also recently launched VMware stretch clusters, which deploy business-critical VMware workloads across the zones of a region.
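As an illustration, here is a minimal sketch of creating such an auto-scaling rule with the official Kubernetes Python client. The deployment name, namespace, and thresholds are assumptions for the example; in practice you might define the same rule in YAML or through the IBM Cloud console.

```python
from kubernetes import client, config

# Load credentials from your local kubeconfig
# (e.g., after running `ibmcloud ks cluster config`).
config.load_kube_config()

# Scale an assumed "web" Deployment between 2 and 10 replicas,
# targeting 70% average CPU utilization across its pods.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,   # keep at least two replicas for availability
        max_replicas=10,  # cap growth to control cost
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

Note how min_replicas=2 serves availability (two of everything) while max_replicas bounds cost; the autoscaler handles everything in between.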

Backup and disaster recovery

Sometimes, the worst does happen, and businesses and their IT systems need to be protected against disaster. The cloud is not immune to disaster—which can take many forms—and users should have tested plans in place to protect their critical systems. Again, cloud services can help, when used correctly.

Strategies need to be in place whereby backups are taken regularly and restores are tested. Critical backup files should be protected from loss, for example, by storing them in regional or cross-regional Cloud Object Storage buckets. IBM Cloud offers services such as Veeam and Zerto that can be used as the backbone of a solid backup and disaster recovery strategy.
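For instance, a nightly job might push each backup archive to a cross-regional bucket. Here is a minimal sketch using the IBM Cloud Object Storage Python SDK (ibm-cos-sdk); the credentials, endpoint, bucket, and file paths are placeholders to replace with your own values.

```python
from datetime import date

import ibm_boto3
from ibm_botocore.client import Config

# Placeholder credentials and endpoint; substitute your own service values.
cos = ibm_boto3.client(
    "s3",
    ibm_api_key_id="YOUR_API_KEY",
    ibm_service_instance_id="YOUR_SERVICE_INSTANCE_CRN",
    config=Config(signature_version="oauth"),
    endpoint_url="https://s3.us.cloud-object-storage.appdomain.cloud",  # a cross-regional endpoint
)

# Upload today's backup archive under a dated key so older backups are retained.
cos.upload_file(
    Filename="/backups/app-backup.tar.gz",
    Bucket="my-critical-backups",
    Key=f"app/{date.today().isoformat()}/app-backup.tar.gz",
)
```

Remember that the upload is only half the job: schedule regular test restores from the bucket to prove the backups are actually usable.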

The role of automation

Automation also plays a key role in keeping a good cloud application up and running. DevOps services are a means of automating many code release management tasks, and these are a core offering in IBM Cloud.

The IBM Cloud Schematics service is a great way to automate infrastructure and service builds, too. Simply create a script or select one of the available templates, and a new environment or single machine, built the way it’s needed, is just a mouse click away.
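As a rough sketch of what that automation can look like from a script, the example below drives Schematics through the IBM Cloud CLI’s schematics plug-in. The workspace payload, Terraform template URL, and workspace ID are assumptions for illustration; check `ibmcloud schematics help` for the current command set.

```python
import json
import subprocess

# Describe a Schematics workspace that points at a Terraform template.
# The repository URL (and the whole payload) is a placeholder for illustration.
workspace = {
    "name": "demo-environment",
    "type": ["terraform_v0.12"],
    "template_repo": {"url": "https://github.com/your-org/your-terraform-template"},
    "template_data": [{"folder": ".", "type": "terraform_v0.12"}],
}
with open("workspace.json", "w") as f:
    json.dump(workspace, f)

# Create the workspace, then apply it to build the environment.
subprocess.run(
    ["ibmcloud", "schematics", "workspace", "new", "--file", "workspace.json"],
    check=True,
)
subprocess.run(
    ["ibmcloud", "schematics", "apply", "--id", "YOUR_WORKSPACE_ID"],
    check=True,
)
```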

Register for the webcast to learn more

Are your applications able to scale to meet demand and stay available? Log in to IBM Cloud and continue your application modernization journey today.

Register now for the webcast “Improve scale and automation by modernizing existing applications,” occurring Wednesday, April 1, 2020.
