In the traditional on-premises provisioning model, engineers have to physically set up the IT infrastructure and configure the servers and networks. Infrastructure provisioning and management was a manual, time-consuming, inconsistent and error-prone process.
With the advent of cloud computing, infrastructure management has been revolutionized. You can build and dispose of cloud infrastructure solutions on demand within minutes; this is called disposable infrastructure. Disposable infrastructure is the practice of automating the provisioning, configuration, deployment and teardown of cloud infrastructure and services.
Many system administrators ask how they can move away from this manual, error-prone way of provisioning and managing infrastructure. The answer is Infrastructure as Code.
Infrastructure as Code (IaC) automates the provisioning of infrastructure, enabling your organization to develop, deploy and scale cloud applications with greater speed, less risk and reduced cost.
Using IaC, you treat your infrastructure components like software, which addresses the problems of scalability, high availability, agility and efficiency in infrastructure management. Many cloud automation tools are available for IaC, including Terraform, AWS CloudFormation, Azure Resource Manager, Google Cloud Deployment Manager, Ansible, Chef, Puppet, Vagrant, Pulumi, Crossplane and more.
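For example, here is a minimal sketch of a VPC and a subnet declared as code with the IBM Cloud Terraform provider; the names, region and zone are illustrative:

```
terraform {
  required_providers {
    ibm = {
      source = "IBM-Cloud/ibm"
    }
  }
}

provider "ibm" {
  region = "us-south" # illustrative region
}

# A VPC and a subnet described declaratively; "terraform apply" creates them
# and "terraform destroy" tears them down again.
resource "ibm_is_vpc" "demo_vpc" {
  name = "demo-vpc"
}

resource "ibm_is_subnet" "demo_subnet" {
  name                     = "demo-subnet"
  vpc                      = ibm_is_vpc.demo_vpc.id
  zone                     = "us-south-1"
  total_ipv4_address_count = 256
}
```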
In this blog post, we use Terraform and Ansible for cloud automation. In a public cloud, you provision the infrastructure resources with Terraform (.tf files) and then run Ansible playbooks (.yml files) against those provisioned resources to install dependencies, apply configuration and deploy your applications and code.
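As a rough sketch of that workflow, the snippet below builds on the VPC and subnet declared above: Terraform provisions a VSI and a floating IP, and a local-exec provisioner then runs a hypothetical Ansible playbook (site.yml) against the new server. The image name, SSH key name and instance profile are placeholders.

```
# Look up an existing image and SSH key (names are assumed placeholders).
data "ibm_is_image" "os" {
  name = "ibm-ubuntu-22-04-minimal-amd64-1"
}

data "ibm_is_ssh_key" "key" {
  name = "my-ssh-key"
}

# Provision the VSI in the subnet created earlier.
resource "ibm_is_instance" "app_vsi" {
  name    = "app-vsi-1"
  vpc     = ibm_is_vpc.demo_vpc.id
  zone    = "us-south-1"
  image   = data.ibm_is_image.os.id
  profile = "bx2-2x8"
  keys    = [data.ibm_is_ssh_key.key.id]

  primary_network_interface {
    subnet = ibm_is_subnet.demo_subnet.id
  }
}

# Attach a floating IP so Ansible can reach the VSI over SSH.
resource "ibm_is_floating_ip" "app_fip" {
  name   = "app-fip"
  target = ibm_is_instance.app_vsi.primary_network_interface[0].id
}

# Hand off to Ansible: run the hypothetical playbook against the VSI.
resource "null_resource" "configure_app" {
  triggers = {
    instance_id = ibm_is_instance.app_vsi.id
  }

  provisioner "local-exec" {
    command = "ansible-playbook -u root -i '${ibm_is_floating_ip.app_fip.address},' site.yml"
  }
}
```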
The diagram below depicts a scenario using Terraform to provision the infrastructure and Ansible for configuration management in an IBM public cloud:
The following are some of the customer use cases that use Terraform and Ansible for hybrid cloud infrastructure provisioning automation:
Let’s see each of these automated one-click deployment use cases in detail. The Terraform and Ansible examples provided below are for IBM Cloud.
In this use case, we provision and configure Virtual Server Instances (VSIs), applications and other network resources that utilize the F5 load balancer’s Active/Passive capability. The following is the cloud architecture diagram for this use case:
You can see that there is an F5 Active/Passive load balancer pair with Management, Internal and External IPs. In this solution, we need to update the routing table so that the custom route's next hop points to the External IP of the currently active F5 load balancer. When the active F5 load balancer goes to standby, we need to invoke a custom application that fetches the routes from the cloud (RIAAS endpoint) and updates the next hop with the External IP of the newly active F5 load balancer.
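As a hedged sketch, the custom route itself could be provisioned in Terraform along these lines; the VPC ID variable, zone, destination CIDR and the variable holding the active F5's External IP are illustrative, and the failover application would later update the next hop through the VPC API when the roles switch:

```
variable "vpc_id" {
  description = "ID of the VPC that contains the F5 pair (placeholder)"
  type        = string
}

variable "active_f5_external_ip" {
  description = "External IP of the currently active F5 load balancer"
  type        = string
}

# A custom routing table plus a route whose next hop is the active F5.
resource "ibm_is_vpc_routing_table" "edge_rt" {
  vpc  = var.vpc_id
  name = "edge-routing-table"
}

resource "ibm_is_vpc_routing_table_route" "to_active_f5" {
  vpc           = var.vpc_id
  routing_table = ibm_is_vpc_routing_table.edge_rt.routing_table
  name          = "next-hop-active-f5"
  zone          = "us-south-1" # illustrative zone
  destination   = "0.0.0.0/0"  # illustrative destination CIDR
  action        = "deliver"
  next_hop      = var.active_f5_external_ip
}
```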
This is a hybrid cloud network use case. The following is the cloud architecture diagram for this use case:
Here, you can see that two different clouds are interconnected using a VPN gateway connection. In a Virtual Private Cloud (VPC1), a three-tier application with a frontend, an application tier and a Cloudant database is deployed in a Red Hat OpenShift Kubernetes cluster on VPC infrastructure, spread across multiple zones.
To expose the app in the VPC cluster, a Layer 7 multizone Application Load Balancer (ALB) for VPC is created, and the application is load balanced with this private VPC ALB. Because the ALB is private, it is accessible only to systems connected within the same region and VPC1.
When you connect to a virtual server in the second VPC network (VPC2), you can access your app through the hostname that the VPC assigns to the Application Load Balancer service.
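A minimal sketch of the VPN piece in Terraform might look like the following; the subnet IDs, address ranges and preshared key are placeholders, and a mirror connection is created on the VPC2 gateway in the same way:

```
variable "vpc1_subnet_id" {
  type = string
}

variable "vpc2_subnet_id" {
  type = string
}

variable "preshared_key" {
  type      = string
  sensitive = true
}

# One VPN gateway per VPC (policy-based in this sketch).
resource "ibm_is_vpn_gateway" "vpc1_gw" {
  name   = "vpc1-vpn-gateway"
  subnet = var.vpc1_subnet_id
  mode   = "policy"
}

resource "ibm_is_vpn_gateway" "vpc2_gw" {
  name   = "vpc2-vpn-gateway"
  subnet = var.vpc2_subnet_id
  mode   = "policy"
}

# Connection from VPC1 to VPC2; a mirror connection is needed on vpc2_gw.
resource "ibm_is_vpn_gateway_connection" "vpc1_to_vpc2" {
  name          = "vpc1-to-vpc2"
  vpn_gateway   = ibm_is_vpn_gateway.vpc1_gw.id
  peer_address  = ibm_is_vpn_gateway.vpc2_gw.public_ip_address
  preshared_key = var.preshared_key
  local_cidrs   = ["10.10.0.0/16"] # VPC1 address range (illustrative)
  peer_cidrs    = ["10.20.0.0/16"] # VPC2 address range (illustrative)
}
```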
The following is the cloud architecture diagram for this use case:
Here, you can see that two different networks in the cloud are interconnected using a Transit Gateway connection. In Classic Infrastructure, a three-tier application with a frontend, an application tier and a Cloudant database is deployed in an IBM Cloud Kubernetes Service cluster on Classic infrastructure, spread across multiple zones. To expose the app in the IBM Cloud Kubernetes Service cluster, a Layer 4 Network Load Balancer (NLB) is created, and the application is load balanced with this private NLB. Because the NLB is private, it is accessible only to systems connected within the Classic network.
When you connect to a virtual server in a VPC network, you can access your app in Classic through the static IP that is assigned to the Network Load Balancer service.
This use case also includes deploying a private NLB and accessing the application deployed in IBM Cloud Kubernetes Service from VPC. This is a hybrid cloud network use case, and the following is the cloud architecture diagram:
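A minimal Terraform sketch of the Transit Gateway piece is shown below; the location and the VPC CRN variable are illustrative placeholders:

```
variable "vpc_crn" {
  description = "CRN of the VPC to attach to the Transit Gateway (placeholder)"
  type        = string
}

# A local Transit Gateway with one connection to Classic and one to the VPC.
resource "ibm_tg_gateway" "hybrid_tgw" {
  name     = "classic-to-vpc-tgw"
  location = "us-south" # illustrative region
  global   = false
}

resource "ibm_tg_connection" "classic_conn" {
  gateway      = ibm_tg_gateway.hybrid_tgw.id
  network_type = "classic"
  name         = "classic-connection"
}

resource "ibm_tg_connection" "vpc_conn" {
  gateway      = ibm_tg_gateway.hybrid_tgw.id
  network_type = "vpc"
  name         = "vpc-connection"
  network_id   = var.vpc_crn
}
```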
You now have a basic understanding of how cloud automation tools are used for disposable infrastructure. You can try running the sample code in the above use cases to set up hybrid cloud infrastructure using Terraform and Ansible.