The Ops Community ⚙️

Moses Itoya


This article is part of a series focused on building a company's infrastructure on AWS using Terraform, all of which (and more) I learnt on the platform. The platform adopts a project-based learning style that has proven very effective for development, growth, and career success, evident in the many DevOps engineers who have passed through its DevOps program.

The company we will create an infrastructure for in this series needs a WordPress solution for its developers and a tooling solution for its DevOps engineers. This will all be in a private network. Part 1 centers on a basic intro, VPC creation, and subnet creation. I'll take you through the processes involved, should you want to create the same or a similar infrastructure, while also learning and improving at Terraform. You will write the code with Terraform and build the infrastructure as seen below.


(Architecture diagram of the target infrastructure.)

Programmable infrastructures allow you to manage on-premises and cloud resources through code instead of with the management platforms and manual methods traditionally used by IT teams.
An infrastructure captured in code is simpler to manage, can be replicated or altered with greater accuracy, and benefits from all sorts of automation. It can also have changes to it implemented and tracked with the version control methods customarily used in software development.

STEP 1 - Setup

  1. AWS strongly recommends following the security practice of granting least privilege, i.e. the minimum set of permissions necessary to perform a given task. So it's best to look at the infrastructure, see which services will be created or accessed, and grant only the required permissions. For simplicity in this series, however, you will create a user with programmatic access and AdministratorAccess permissions.

  2. Configure programmatic access from your workstation. I would recommend using the AWS CLI for this, with the aws configure command.

  3. Create an S3 bucket via the console to store the Terraform state file. List the newly created S3 bucket from your terminal to confirm access was configured properly. You are all set up when you can see it.
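Later, this bucket can be used as a remote backend for the state file. As a minimal sketch of what that looks like (the bucket name and key below are placeholders, not from this article; substitute the bucket you created):

```hcl
# Sketch: pointing Terraform at an S3 bucket for remote state.
# Bucket name, key, and region are illustrative assumptions.
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "project/terraform.tfstate"
    region = "us-east-1"
  }
}
```

For now the state file stays local; we'll keep working with the default local backend until the refactoring stage.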

Best practice is to ensure every resource is tagged, using multiple key-value pairs. Secondly, write reusable code: avoid hard-coding values wherever possible.
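Both practices can be combined: keep shared tags in a variable and merge in resource-specific ones. A sketch, assuming a `tags` map variable (not yet defined in this article):

```hcl
# Shared tags declared once, reused across resources.
variable "tags" {
  type = map(string)
  default = {
    Environment = "dev"
    Owner       = "devops"
  }
}

resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"

  # merge() combines the shared tags with a resource-specific Name tag
  tags = merge(var.tags, { Name = "example-vpc" })
}
```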

STEP 2 - VPC Creation

  1. Create a directory structure which should have:
  • A folder (name it whatever you like)

  • A main.tf file, which is our main configuration file where we are going to define our resources.

  • A variables.tf file, which would store your variable declarations. It's best to use variables as they keep your code neat and tidy. Variables prevent hard-coded values and make code easily reusable.

  • A terraform.tfvars file, which would contain the values for your variables. This project will get complex really soon, and I think it's best to understand variables and how to manage them effectively. I came across a great article from Spacelift that did justice to that.
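To make the later snippets concrete, here is a sketch of what variables.tf could hold at this stage. The variable names come from the resource blocks used in this article; the types and defaults are illustrative assumptions:

```hcl
# variables.tf -- declarations for the values referenced in main.tf
variable "region" {
  type    = string
  default = "us-east-1"
}

variable "vpc_cidr" {
  type    = string
  default = "10.0.0.0/16"
}

variable "enable_dns_support" {
  type    = bool
  default = true
}

variable "enable_classiclink" {
  type    = bool
  default = false
}

variable "preferred_number_of_public_subnets" {
  type    = number
  default = 2
}
```

The matching terraform.tfvars would then just assign values, e.g. `region = "us-east-1"`, overriding any defaults.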

Set up your Terraform CLI

Add a provider block (AWS is the provider):

provider "aws" {
  region = var.region
}

Add a resource that creates a VPC for the infrastructure.

# Create VPC
resource "aws_vpc" "main" {
  cidr_block                     = var.vpc_cidr
  enable_dns_support             = var.enable_dns_support
  enable_dns_hostnames           = var.enable_dns_support
  enable_classiclink             = var.enable_classiclink
  enable_classiclink_dns_support = var.enable_classiclink
}

Run terraform init. Terraform relies on plugins called “providers” to interact with cloud providers, SaaS providers, and other APIs. Terraform configurations must declare which providers they require so that Terraform can install and use them. terraform init finds and downloads those providers from either the public Terraform Registry or a third-party provider registry. This is the part that generates the .terraform.lock.hcl you would notice when you run terraform init.
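Provider requirements are usually declared explicitly in a terraform block, so that terraform init knows exactly what to download. A sketch, assuming the official AWS provider (the version constraint is an illustrative assumption, not from the article):

```hcl
# Sketch: declaring the required provider for terraform init.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}
```

Pinning a version constraint like this keeps `terraform init` reproducible across machines, which is exactly what the lock file described below records.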

Notice that a new directory has been created: .terraform/.... This is where Terraform keeps plugins. Generally, it is safe to delete this folder; it just means that you must execute terraform init again to download them.

Run terraform plan to see what would be created when you decide to create your aws_vpc resource.

Run terraform apply only if you accept the changes that would occur.

A new file, terraform.tfstate, is created. This is how Terraform keeps itself up to date with the exact state of the infrastructure. It reads this file to know what already exists and what should be added or destroyed, based on the entire Terraform code that is being developed.

Also created is the lock file .terraform.lock.hcl which contains information about the providers; in future command runs, Terraform will refer to that file in order to use the same provider versions as it did when the file was generated.

STEP 3 - Subnet Creation

According to the infrastructure design, you will require 6 subnets:

  • 2 public
  • 2 private for webservers
  • 2 private for data layer

Create the first 2 public subnets.

Do not try to memorize the code for anything in Terraform. Just understand the structure. You can easily get whatever resource block you need, from any provider, on the Terraform Registry. All you have to do is tweak it to the desired resource for your infra, which is why a good understanding of the structure of Terraform is key. With time, writing resources will be a breeze.

# The availability zones data source referenced below must be declared
data "aws_availability_zones" "available" {
  state = "available"
}

# Create public subnets
resource "aws_subnet" "public" {
  count                   = var.preferred_number_of_public_subnets == null ? length(data.aws_availability_zones.available.names) : var.preferred_number_of_public_subnets
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(var.vpc_cidr, 4, count.index)
  map_public_ip_on_launch = true
  availability_zone       = data.aws_availability_zones.available.names[count.index]
}
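It helps to see what cidrsubnet() actually computes. It takes a base prefix, a number of additional bits, and a network number; with a /16 base and 4 new bits, each subnet becomes a /20. A small sketch, assuming a vpc_cidr of 10.0.0.0/16 (illustrative, not mandated by the article):

```hcl
# cidrsubnet(prefix, newbits, netnum): extends the prefix length by `newbits`,
# then selects the netnum-th network of that size.
locals {
  example_subnets = [for i in range(2) : cidrsubnet("10.0.0.0/16", 4, i)]
  # evaluates to ["10.0.0.0/20", "10.0.16.0/20"]
}
```

Using count.index as the netnum guarantees each subnet gets a distinct, non-overlapping CIDR block.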

The final file would look like this:
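Assembled from the snippets above, main.tf at this stage would look roughly like this (a sketch; your variable defaults and ordering may differ):

```hcl
provider "aws" {
  region = var.region
}

data "aws_availability_zones" "available" {
  state = "available"
}

# Create VPC
resource "aws_vpc" "main" {
  cidr_block                     = var.vpc_cidr
  enable_dns_support             = var.enable_dns_support
  enable_dns_hostnames           = var.enable_dns_support
  enable_classiclink             = var.enable_classiclink
  enable_classiclink_dns_support = var.enable_classiclink
}

# Create public subnets
resource "aws_subnet" "public" {
  count                   = var.preferred_number_of_public_subnets == null ? length(data.aws_availability_zones.available.names) : var.preferred_number_of_public_subnets
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(var.vpc_cidr, 4, count.index)
  map_public_ip_on_launch = true
  availability_zone       = data.aws_availability_zones.available.names[count.index]
}
```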


I'm sure you noticed the private subnets have not been created yet. In the next publication, you will move on to creating more resources while also refactoring your code. "Doing is the best way of learning," and that is exactly what is going to happen in this project.
