Store Terraform states in Cloud Object Storage


Terraform enables users to automatically provision and apply changes to infrastructure. We have several tutorials in our documentation highlighting how to use Terraform with IBM Cloud. As I was writing one of these tutorials, I was looking at the options to persist the Terraform state. Not finding exactly what I wanted, I came up with a side project to persist the Terraform state in Cloud Object Storage.

State of the infrastructure

The first time you apply a configuration with Terraform, it creates a Terraform state – a file with the .tfstate file extension. The state is a JSON document containing the description of all resources created by Terraform. It is a representation of the real-world deployed infrastructure. Terraform uses the state to compute its plans when you want to apply an update to your infrastructure. It is essential to Terraform operations. Lose the state file and you lose Terraform’s knowledge about your infrastructure.
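Because the state is plain JSON, it is easy to inspect. As an illustration (not from the article), here is a minimal Python sketch that lists the resources tracked by a state file; it assumes the newer state format where resources live under a top-level `resources` key:

```python
import json

def list_resources(tfstate_path):
    """Parse a Terraform state file and return the resource addresses it tracks."""
    with open(tfstate_path) as f:
        state = json.load(f)
    # In recent state formats, resources live under the top-level "resources" key.
    return [f'{r["type"]}.{r["name"]}' for r in state.get("resources", [])]

# Example with a minimal, synthetic state document:
sample = {
    "version": 4,
    "resources": [
        {"type": "ibm_is_vpc", "name": "my_vpc", "instances": []},
    ],
}
with open("terraform.tfstate", "w") as f:
    json.dump(sample, f)

print(list_resources("terraform.tfstate"))  # → ['ibm_is_vpc.my_vpc']
```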

Keep the Terraform state safe

Storing the state file in version control was common in the early days of Terraform – and there are teams still doing this today. But later Terraform introduced the concept of a backend as a more robust option. A backend is an abstraction enabling remote storage of the Terraform state. Since then, the recommendation has been to use one of the remote backends, potentially enabling locking and versioning if the backend supports it.

Store Terraform states in IBM Cloud Object Storage

One backend type supports storing the states in an S3 bucket. Given that IBM Cloud Object Storage supports a subset of the S3 API, I thought I would give it a try. To use S3 tools with IBM Cloud Object Storage, the first step is to generate the right set of HMAC credentials.

  1. Log in to the IBM Cloud console and navigate to your instance of Object Storage.

  2. In the side navigation, click Service Credentials.

  3. Click New credential. Specify {"HMAC": true} in the Add Inline Configuration Parameters (Optional) field to request HMAC keys.

  4. Click Add to generate the service credential.

Look at the credentials and note the cos_hmac_keys:

  "cos_hmac_keys": {
    "access_key_id": "123456abcdef",
    "secret_access_key": "24aaaccceeefffee"

In your Terraform files, use these credentials to configure the S3 backend to work with Cloud Object Storage:

terraform {
  backend "s3" {
    bucket                      = "terraforming"
    key                         = "terraform.tfstate"
    region                      = "us-geo"
    skip_region_validation      = true
    skip_credentials_validation = true
    skip_get_ec2_platforms      = true
    skip_requesting_account_id  = true
    skip_metadata_api_check     = true
    endpoint                    = ""
    access_key                  = ""
    secret_key                  = ""
  }
}

  • bucket is the name of the bucket in which to store the state – make sure to create the bucket beforehand;

  • key is the name under which to persist the state;

  • region is not used but can be set to the region of the Cloud Object Storage instance;

  • setting skip_region_validation, skip_credentials_validation, skip_get_ec2_platforms, skip_requesting_account_id, and skip_metadata_api_check to true disables calls specific to the AWS S3 API in the backend;

  • endpoint is the Cloud Object Storage endpoint to use – find the value for your instance in the Endpoints section;

  • access_key and secret_key come from the cos_hmac_keys sub-section of the credentials.
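Rather than hardcoding access_key and secret_key in a .tf file, you can leave them out and supply them at init time through Terraform's partial backend configuration. The sketch below (my own addition, with assumed environment variable names) writes the sensitive values to a separate backend.hcl file so they never land in version control:

```python
import os
import textwrap

# Hypothetical helper: render the sensitive parts of the backend configuration
# from environment variables. COS_ENDPOINT, COS_ACCESS_KEY_ID and
# COS_SECRET_ACCESS_KEY are assumed names, not from the original article.
config = textwrap.dedent(f"""\
    endpoint   = "{os.environ.get('COS_ENDPOINT', '')}"
    access_key = "{os.environ.get('COS_ACCESS_KEY_ID', '')}"
    secret_key = "{os.environ.get('COS_SECRET_ACCESS_KEY', '')}"
""")
with open("backend.hcl", "w") as f:
    f.write(config)
```

You would then initialize with `terraform init -backend-config=backend.hcl`, keeping only the non-sensitive settings in the terraform block.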

With this configuration, Terraform operations read and write state files to Cloud Object Storage. It is the S3 compatibility and the presence of the skip_ flags that make it possible to use Cloud Object Storage as a backend. Although it works today, future updates to the Terraform S3 backend may break this setup if the right flags are no longer exposed. Missing from this configuration is support for versioning and locking.

Add locking and versioning with a serverless Terraform backend

But wait, there is another backend type that may address these concerns. Terraform has an http backend type: it stores the state using a simple REST client. All it takes is a server that supports the GET, POST, LOCK, and UNLOCK operations. What if we were to implement these operations? We could persist the state wherever we see fit, and implement locking and even versioning.

And I did just that. Because there is no need for such a backend server to be up and running idle all the time, I chose to implement the backend on a serverless platform, IBM Cloud Functions.

One action handles the GET, POST, LOCK, and UNLOCK operations and interacts with Cloud Object Storage to load and save Terraform states. The http backend can be configured to point to this action, exposed through the API Gateway. It supports locking when you set lock_address and unlock_address, and versioning when you specify it in the address. LOCK and UNLOCK are replaced with the HTTP PUT and DELETE verbs, as the former are not supported by the API Gateway.
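The dispatch logic of such an action can be sketched as follows. This is a simplified illustration, not the actual code from the repository: an in-memory dict stands in for Cloud Object Storage, and it assumes the IBM Cloud Functions web action convention where the HTTP verb and request body arrive in the __ow_method and __ow_body parameters:

```python
# Simplified sketch of the backend action. In the real implementation the
# dict operations below would be S3 get/put calls against Cloud Object Storage.
storage = {}   # stand-in for the COS bucket
locks = {}     # lock info per state name

def main(params):
    method = params.get("__ow_method", "get").lower()
    name = params.get("env", "terraform") + ".tfstate"
    if method == "get":                  # Terraform reads the state
        body = storage.get(name)
        return {"statusCode": 200, "body": body} if body else {"statusCode": 404}
    if method == "post":                 # Terraform writes the state
        storage[name] = params.get("__ow_body", "")
        return {"statusCode": 200}
    if method == "put":                  # LOCK, mapped to PUT for API Gateway
        if name in locks:
            return {"statusCode": 423, "body": locks[name]}  # already locked
        locks[name] = params.get("__ow_body", "")
        return {"statusCode": 200}
    if method == "delete":               # UNLOCK, mapped to DELETE
        locks.pop(name, None)
        return {"statusCode": 200}
    return {"statusCode": 405}
```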

terraform {
  backend "http" {
    # Serverless backend endpoint
    # Optional query parameters:
    # env: name for the terraform state, e.g. mystate, us/south/staging (.tfstate will be added automatically)
    # versioning: set to true to keep multiple copies of the states in the storage
    address = "https://API_GATEWAY_URL?env=name&versioning=true"

    # Uncomment to enable locking. Set to same value as address
    # lock_address = "https://API_GATEWAY_URL?env=name&versioning=true"
    # unlock_address = "https://API_GATEWAY_URL?env=name&versioning=true"

    # API Key for Cloud Object Storage
    password = "SET_YOUR_KEY"

    # Do not change the following
    username = "cos"
    update_method          = "POST"
    lock_method            = "PUT"
    unlock_method          = "DELETE"
    skip_cert_verification = "false"
  }
}

This repository has all the code to deploy the serverless Terraform backend. Refer to the README for detailed instructions.

View the code.

If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter: @L2FProd.
