How to use cloud automation tools like Terraform and Ansible for disposable infrastructure.

In the traditional on-premises provisioning model, engineers have to physically set up the IT infrastructure and configure the servers and networks. Historically, infrastructure provisioning and management was a manual, time-consuming, inconsistent and error-prone process.

With the advent of cloud computing, infrastructure management has been revolutionized. You can build and dispose of cloud infrastructure on demand within minutes; this is called disposable infrastructure. Disposable infrastructure is the practice of automating the provisioning, configuration, deployment and teardown of cloud infrastructure and services.

Many system administrators may have the following questions: 

  • How do I dispose of my infrastructure at the click of a button? 
  • How do I quickly set up my infrastructure in a new region? 
  • How do I configure my systems and ensure that they are all consistent with the same configurations?

The answer to all these questions is Infrastructure as Code.

What is Infrastructure as Code?

Infrastructure as Code (IaC) automates the provisioning of infrastructure, enabling your organization to develop, deploy and scale cloud applications with greater speed, less risk and reduced cost.

Using IaC, you treat your infrastructure components like software, which addresses the problems of scalability, high availability, agility and efficiency in infrastructure management. Many cloud automation tools are available for IaC, including Terraform, AWS CloudFormation, Azure Resource Manager, Google Cloud Deployment Manager, Ansible, Chef, Puppet, Vagrant, Pulumi, Crossplane and more.

Using Terraform and Ansible

In this blog post, we are using Terraform and Ansible for cloud automation. In a public cloud, you can provision the infrastructure resources using Terraform (.tf files) and then run Ansible playbooks (.yml files) against those provisioned resources to install dependencies, apply configurations and deploy your applications and code.
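To make that split concrete, here is a minimal, hypothetical sketch of the pairing. The Terraform fragment provisions a VPC, a subnet and a Virtual Server Instance with the IBM Cloud provider (the resource names, zone, profile and variables are placeholders to adapt to your account):

```hcl
terraform {
  required_providers {
    ibm = {
      source = "IBM-Cloud/ibm"
    }
  }
}

# Placeholder names and zone — adjust to your account and region
resource "ibm_is_vpc" "demo" {
  name = "demo-vpc"
}

resource "ibm_is_subnet" "demo" {
  name                     = "demo-subnet"
  vpc                      = ibm_is_vpc.demo.id
  zone                     = "us-south-1"
  total_ipv4_address_count = 256
}

resource "ibm_is_instance" "demo" {
  name    = "demo-vsi"
  vpc     = ibm_is_vpc.demo.id
  zone    = "us-south-1"
  profile = "bx2-2x8"
  image   = var.image_id    # OS image ID (placeholder variable)
  keys    = [var.ssh_key_id]

  primary_network_interface {
    subnet = ibm_is_subnet.demo.id
  }
}
```

A matching Ansible playbook then configures the instance once it is reachable over SSH (the `web` host group and the nginx tasks are purely illustrative):

```yaml
# playbook.yml — illustrative configuration of the provisioned VSI
- hosts: web
  become: true
  tasks:
    - name: Install nginx
      package:
        name: nginx
        state: present

    - name: Start and enable nginx
      service:
        name: nginx
        state: started
        enabled: true
```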

The diagram below depicts a scenario using Terraform to provision the infrastructure and Ansible for configuration management in an IBM public cloud:

Customer use cases

The following are some of the customer use cases that use Terraform and Ansible for hybrid cloud infrastructure provisioning automation:

  1. F5 load balancer’s Active/Passive capability for a Virtual Network Function (VNF) high availability solution.
  2. Interconnecting on-prem network with the IBM Cloud network using a Virtual Private Network (VPN) gateway.
  3. Interconnecting on-prem network with the IBM Cloud network using Transit Gateway (TG) and Domain Name Service (DNS).
  4. Interconnecting on-prem network with the IBM Cloud network using a Strongswan VPN tunnel.

Let’s see each of these automated one-click deployment use cases in detail. The Terraform and Ansible examples provided below are for IBM Cloud.

Use case 1: F5 load balancer’s Active/Passive capability for a Virtual Network Function (VNF) high availability solution

In this use case, we provision and configure Virtual Server Instances (VSIs), applications and other network resources that utilize the F5 load balancer’s Active/Passive capability. The following is the cloud architecture diagram for this use case:

You can see that there is an F5 Active/Passive load balancer that has Management, Internal and External IPs for the Active/Passive pair. In this solution, we need to update the custom route's next hop in the routing table with the External IP of the currently active F5 load balancer. When the active F5 load balancer fails over to standby, we need to invoke a custom application that fetches the routes from the cloud (RIAAS Endpoint) and updates the next hop with the External IP of the newly active F5 load balancer.
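As an illustration of the routing piece, a custom route along the following lines (a hedged sketch using the IBM Cloud Terraform provider's `ibm_is_vpc_routing_table_route` resource; the IDs, zone, destination CIDR and variables are placeholders) pins the next hop to the active F5's External IP:

```hcl
# Hypothetical custom route: traffic to the workload CIDR is delivered
# via the External IP of the currently active F5 instance
resource "ibm_is_vpc_routing_table_route" "via_active_f5" {
  vpc           = var.vpc_id
  routing_table = var.routing_table_id
  zone          = "us-south-1"
  name          = "via-active-f5"
  destination   = "10.20.0.0/24"
  next_hop      = var.f5_active_external_ip
}
```

On failover, the custom application's job reduces to replacing `next_hop` with the External IP of the newly active instance.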

See the Terraform and Ansible code for this use case here.

Use case 2: Interconnecting on-prem network with the IBM Cloud network using a Virtual Private Network (VPN) gateway

This is a hybrid cloud network use case. The following is the cloud architecture diagram for this use case:

Here, you can see that two different clouds are interconnected using a VPN gateway connection. In a Virtual Private Cloud (VPC1), a three-tier application with a frontend, application layer and Cloudant database is deployed in a Red Hat OpenShift Kubernetes cluster on VPC infrastructure that is available in multiple zones.

To expose an app in a VPC cluster, a layer 7 multizone Application Load Balancer (ALB) for VPC is created. The application is load balanced with a Private VPC Application Load Balancer. Since the ALB is private, it is accessible only to the systems that are connected within the same region and VPC1.
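For reference, exposing the frontend through a private ALB is typically done with an Ingress resource roughly like the following (a sketch: the ingress class name, host and Service name are assumptions to verify against your cluster's ALB configuration):

```yaml
# Hypothetical Ingress routing traffic to the frontend via the private ALB
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  ingressClassName: private-iks-k8s-nginx  # assumed private ALB class name
  rules:
    - host: <hostname assigned to the private ALB>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend  # hypothetical frontend Service
                port:
                  number: 80
```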

When you connect to a virtual server in the VPC network (VPC2), you can access your app through the hostname that is assigned by the VPC to the Application Load Balancer service in the format 1234abcd-<region>.lb.appdomain.cloud.

See the Terraform and Ansible code for this use case here.

Use case 3: Interconnecting on-prem network with the IBM Cloud network using Transit Gateway (TG) and Domain Name Service (DNS)

The following is the cloud architecture diagram for this use case:

Here, you can see that two different networks in the cloud are interconnected using a Transit Gateway connection. In Classic Infrastructure, a three-tier application with a frontend, application layer and Cloudant database is deployed in an IBM Cloud Kubernetes Service cluster on classic infrastructure that is available in multiple zones. To expose an app in an IBM Cloud Kubernetes Service cluster, a layer 4 Network Load Balancer (NLB) is created. The application is load balanced with a private Network Load Balancer. Since the NLB is private, it is accessible only to the systems that are connected within the classic network.
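A private NLB of this kind is usually requested with a Service of type LoadBalancer carrying the private ip-type annotation, roughly as follows (a sketch; the Service name, selector and ports are placeholders):

```yaml
# Hypothetical Service exposing the frontend through a private NLB
apiVersion: v1
kind: Service
metadata:
  name: frontend-nlb
  annotations:
    service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: private
spec:
  type: LoadBalancer
  selector:
    app: frontend       # hypothetical app label
  ports:
    - port: 80
      targetPort: 8080  # hypothetical container port
```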

When you connect to a virtual server in a VPC network, you can access your app in Classic through the static IP that is assigned to the Network Load Balancer service.

See the Terraform and Ansible code for this use case here.

Use case 4: Interconnecting on-prem network with the IBM Cloud network using a Strongswan VPN tunnel

This use case also includes deploying a private NLB and accessing the application deployed in IBM Cloud Kubernetes Service from VPC. This is a hybrid cloud network use case, and the following is the cloud architecture diagram:
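On the on-prem side, the strongSwan end of such a tunnel is defined in /etc/ipsec.conf, roughly as follows (a hedged sketch: all IPs, CIDRs and cipher choices are placeholders for your environment, and the pre-shared key would live in /etc/ipsec.secrets):

```
# /etc/ipsec.conf — hypothetical site-to-site tunnel to IBM Cloud
conn onprem-to-ibmcloud
    type=tunnel
    keyexchange=ikev2
    auto=start
    authby=secret
    left=%defaultroute
    leftsubnet=192.168.0.0/24    # on-prem CIDR (placeholder)
    right=150.239.0.10           # cloud VPN peer public IP (placeholder)
    rightsubnet=10.240.0.0/24    # IBM Cloud subnet CIDR (placeholder)
    ike=aes256-sha256-modp2048!
    esp=aes256-sha256!
```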

See the Terraform and Ansible code for this use case here.

Conclusion

You now have a basic understanding of how cloud automation tools are used for disposable infrastructure. You can try running the sample code from the use cases above to set up hybrid cloud infrastructure using Terraform and Ansible.
