Infrastructure Provisioning Automation with Terraform Backend & State Lock

Sayed Imran
6 min readJul 11, 2023


In today’s rapidly evolving technological landscape, organizations are constantly seeking ways to enhance their infrastructure provisioning processes. With the rise of cloud computing and the increasing complexity of modern systems, manual configuration and management of infrastructure is inefficient and error-prone.

Automation is the key to overcoming this challenge, and in the context of infrastructure automation, not speaking of Terraform is impossible!

Terraform is an open-source infrastructure as code (IaC) software that enables the provisioning and management of infrastructure resources across multiple cloud platforms.

In this article, I’ll be demonstrating how we can set up Terraform’s remote backend and state locking features in conjunction with a Jenkins CI/CD pipeline to achieve complete infrastructure provisioning automation.

Infrastructure over AWS

The above setup includes launching the following AWS services with the necessary configurations:

  • AWS VPC (Virtual Private Cloud)
  • Subnet (with a unique, non-overlapping CIDR)
  • Internet Gateway (for the instances to be accessible over the Internet)
  • Route Table (to configure routing rules)
  • 1 EC2 instance (to serve as the master node for Kubernetes)
  • 3 EC2 instances (to serve as worker nodes for Kubernetes)

Infrastructure Automation Setup

The entire setup includes the usage of the following tools and technologies:

  • GitHub (as SCM for Terraform Configuration Files)
  • Jenkins (to automate the Terraform Configuration with webhook)
  • Terraform (IaC tool)
  • Amazon S3 (for remote backend of Terraform)
  • Amazon DynamoDB (To lock the state of Terraform)

Terraform Remote Backend

Terraform’s remote backend and state locking mechanism provide essential functionality for ensuring the reliability and consistency of infrastructure provisioning.

It enables the storage and retrieval of Terraform state files in a centralized location, allowing teams to collaborate and share infrastructure code seamlessly. Rather than storing state files locally, the remote backend keeps them in a remote location accessible to multiple users.

To prevent concurrent access and modifications to the Terraform state, Terraform supports state locking mechanisms, which rely on external systems to coordinate and manage the state file locks.

Amazon DynamoDB fulfils this purpose, allowing Terraform to lock the state and avoid concurrent modifications to the remote state file.
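Under the hood, the lock is just a conditional write to DynamoDB: Terraform inserts an item keyed by the state path, and the write fails if another run already holds the lock. A rough sketch of the mechanism with the AWS CLI (the table name matches the backend configuration used later; the exact LockID value Terraform writes is an assumption here):

```shell
# Attempt to acquire the lock: succeeds only if no item with this LockID exists
aws dynamodb put-item \
  --table-name terraform-locking \
  --item '{"LockID": {"S": "aws-terraform-infra-setup/prod/k3s/terraform.tfstate"}}' \
  --condition-expression "attribute_not_exists(LockID)"

# A second, concurrent attempt fails with ConditionalCheckFailedException,
# which is how a parallel `terraform apply` gets blocked.

# Releasing the lock deletes the item
aws dynamodb delete-item \
  --table-name terraform-locking \
  --key '{"LockID": {"S": "aws-terraform-infra-setup/prod/k3s/terraform.tfstate"}}'
```

This is why the `ConditionalCheckFailedException` shows up in Terraform's "Error acquiring the state lock" message when two applies collide.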

Amazon S3

Steps to follow to set up an S3 bucket as the backend:

Create an S3 bucket with versioning enabled, so that every change the terraform.tfstate file undergoes is kept as a recoverable version.

Enabling versioning on S3 bucket
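The same bucket setup can be scripted with the AWS CLI; a sketch assuming the bucket name and region used in the backend configuration later in this article:

```shell
# Create the bucket (regions outside us-east-1 need an explicit location constraint)
aws s3api create-bucket \
  --bucket aws-terraform-infra-setup \
  --region ap-south-1 \
  --create-bucket-configuration LocationConstraint=ap-south-1

# Enable versioning so every terraform.tfstate update is kept as a version
aws s3api put-bucket-versioning \
  --bucket aws-terraform-infra-setup \
  --versioning-configuration Status=Enabled
```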

DynamoDB

Steps to follow to create a DynamoDB table for state locking:

Create a table in DynamoDB with the partition key LockID, so that the locking feature can be enabled and race conditions avoided.

DynamoDB table for Terraform state locking
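The equivalent table creation from the AWS CLI (on-demand billing is an assumption here, chosen to avoid capacity planning for a low-traffic lock table):

```shell
# LockID (string) must be the partition key for Terraform's S3 backend locking
aws dynamodb create-table \
  --table-name terraform-locking \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region ap-south-1
```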

AWS IAM User

Create an IAM user for Terraform to deploy resources on AWS, with the required IAM permissions and policies.

AWS Permissions for Terraform

NOTE: Here I’ve used very open permissions just for demonstration purposes; in practice, you should grant more restrictive, least-privilege permissions.

Generate an Access Key and Secret Key for the IAM user, to be used later to configure Jenkins credentials.
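These steps can also be scripted; a sketch with the AWS CLI, where the user name is illustrative and the AdministratorAccess policy mirrors the open demo permissions noted above:

```shell
# Create the user Terraform will act as (name is a placeholder)
aws iam create-user --user-name terraform-deployer

# Broad permissions for demonstration only -- scope this down in real environments
aws iam attach-user-policy \
  --user-name terraform-deployer \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Generate the access key pair to store in Jenkins credentials
aws iam create-access-key --user-name terraform-deployer
```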

Jenkins

Jenkins integrates GitHub with Terraform to get the infrastructure up and running.

  1. Install the CloudBees AWS Credentials Plugin on Jenkins.
Jenkins Plugin to store AWS Credentials

2. Now, add the AWS Access Key and Secret Key generated earlier

AWS Credentials over Jenkins

Now, we are ready to write the Jenkins Pipeline for the integration of GitHub with Terraform.

Link to the Pipeline Script: Jenkinsfile

To use the AWS credentials in the pipeline, you need to use the withCredentials block in the Jenkinsfile with the required AWS credential variables:

  • credentialsId
  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
stage('Initializing Terraform Backend') {
    agent {
        label 'docker'
    }
    steps {
        withCredentials([aws(accessKeyVariable: 'AWS_ACCESS_KEY_ID', credentialsId: 'b7273a37-838e-44bc-9607-e41102b8a172', secretKeyVariable: 'AWS_SECRET_ACCESS_KEY')]) {
            sh '''
                # Start from a clean checkout of the repository
                if [ -d "DevOps-and-Cloud" ]; then
                    echo "Folder exists. Deleting..."
                    sudo rm -rf "DevOps-and-Cloud"
                    echo "Folder deleted."
                fi
                git clone https://github.com/Sayed-Imran/DevOps-and-Cloud.git
                # Run terraform init inside the official Terraform image
                sudo docker run -t -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -v $PWD/DevOps-and-Cloud/Terraform/k3s-aws-s3-backend-jenkins:/workspace -w /workspace hashicorp/terraform init
            '''
        }
    }
}

We’ll be using the Terraform Docker image to run Terraform instead of installing it. We need to pass the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the AWS credentials configured earlier, and mount the Terraform configuration files directory as the workspace, as shown in the above pipeline snippet.
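The later stages can invoke plan and apply the same way; a sketch of the equivalent shell commands (the actual stage layout in the linked Jenkinsfile may differ):

```shell
cd DevOps-and-Cloud/Terraform/k3s-aws-s3-backend-jenkins

# Plan with the same containerized Terraform, saving the plan for a deterministic apply
sudo docker run -t \
  -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  -v $PWD:/workspace -w /workspace \
  hashicorp/terraform plan -out=tfplan

# -auto-approve (or applying a saved plan) is what keeps the pipeline hands-off
sudo docker run -t \
  -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  -v $PWD:/workspace -w /workspace \
  hashicorp/terraform apply tfplan
```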

Terraform Configuration

The Terraform code launches a VPC with a subnet in it, configured with an associated route table and an Internet Gateway so that instances launched in the subnet get public IPs for Internet accessibility.

There are 4 EC2 instances launched inside the VPC: one is configured as the K3s Kubernetes master node, and the other three join the master as worker nodes.

Link to the Terraform Files: K3S on AWS

Terraform backend is configured with the S3 bucket and DynamoDB created previously.

terraform {
    backend "s3" {
        bucket         = "aws-terraform-infra-setup"
        key            = "prod/k3s/terraform.tfstate"
        region         = "ap-south-1"
        dynamodb_table = "terraform-locking"
        encrypt        = true
    }
}

Here, key refers to the path within the bucket where the state file will be saved.

GitHub Webhook Configuration

Configure a GitHub webhook to trigger builds in Jenkins so the CI/CD flow works end to end.

GitHub webhook
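If you prefer to script the webhook instead of clicking through the repository settings UI, it can be created via the GitHub REST API; a sketch where the Jenkins host and token are placeholders:

```shell
# Jenkins (with the GitHub plugin) listens for push events at /github-webhook/
curl -X POST \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/Sayed-Imran/DevOps-and-Cloud/hooks \
  -d '{
        "name": "web",
        "active": true,
        "events": ["push"],
        "config": {
          "url": "https://<your-jenkins-host>/github-webhook/",
          "content_type": "json"
        }
      }'
```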

Jenkins Job Configuration

  1. Configure a job in Jenkins of type Pipeline
Pipeline type Job

2. For Build Triggers option, select GitHub hook trigger for GITScm polling

Jenkins trigger for GitHub webhook

3. For Pipeline option, change the Definition to Pipeline script from SCM and specify as per the below image.

Pipeline Script from SCM

4. Change the script path to the folder location where the Jenkinsfile is actually present.

Jenkinsfile script path

The complete setup is now ready: as soon as code is pushed to the repository, the push event triggers the Jenkins pipeline job, and Terraform deploys the infrastructure on AWS with the required configuration for the K3s Kubernetes cluster.

Jenkins job build

Pipeline Stage View

In the stage view of the pipeline, we can see that it takes an average of 3–4 minutes to get the entire setup up and running, whereas manual methods may take 20–30 minutes with a high possibility of errors.

Pipeline Stage View

Terraform state file in S3 Bucket with versions

Once the infrastructure is deployed, we can see that the state file has been updated, with the previous versions left untouched.

Terraform state on remote backend with versions

K3S Kubernetes Cluster

We can SSH into the K3s master instance to verify that the K3s Kubernetes cluster is up and running.

K3S Kubernetes Cluster up and running
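On the master instance, a couple of quick checks confirm the cluster state (assuming a default k3s installation):

```shell
# k3s bundles kubectl; all four nodes should report Ready
sudo k3s kubectl get nodes -o wide

# The k3s service itself should be active
sudo systemctl status k3s --no-pager
```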

In conclusion, we have achieved complete automation of infrastructure provisioning using Terraform with its remote backend and state locking features.

Hope you liked my content; I’d love to hear your feedback in the comments.

#devops #automation #terraform #aws #cloud #jenkins #cicd #github #community #medium #remotebackend #s3 #dynamodb #kubernetes #k3s #docker #iam #vpc #iac
