Christoffer Windahl Madsen

Terraform weekly tips, Terraform best practices (9) - Dynamic configuration




Hi all, and welcome to another Terraform weekly tips blog post. Today, we will be focusing on using the programmatic features of Terraform to create scalable and reusable configuration files. We have previously discussed some of these features, so I suggest reading our previous post on loops in Terraform before continuing with this one. You can find it at terraform weekly tips, #7, terraform best practices & more (codeterraform.com)


As always, we will go through the tips in the order of their description, followed by coding examples. To follow along, I recommend cloning today's post from codeterraform/terraform projects/weekly tips/week 9 03-07-2023 at main · ChristofferWin/codeterraform · GitHub before continuing.


First, let's define what we mean by 'dynamic configuration' by listing the different programmatic concepts that contribute to the topic.

  1. Defining reusable 'input variables'

  2. Using 'loops' to define resource definitions in bulk

  3. Using 'local variables' to define logic that will aid in expanding the elasticity of any configuration

By starting from the top of the list, you can define reusable variables that can be directly referenced or manipulated to fit different scenarios. One such scenario could involve creating Azure resources while adhering to a naming standard. In this case, you can create a variable that stores a string defining a "resource base name" or prefix. This variable can then be referenced in every resource definition that requires it.


Additionally, boolean variables are useful for holding key information about the state or context in which a configuration should run. For example, you can use a boolean variable to determine whether a specific resource, such as monitoring, should be created. This allows flexibility as the value of the boolean can differ from run to run.
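As a minimal sketch of this idea (the variable name 'deploy_monitoring' and the resource shown are hypothetical and not part of today's script), such a toggle could look like:

```hcl
variable "deploy_monitoring" {
  description = "whether to deploy the monitoring resources"
  type        = bool
  default     = false
}

resource "azurerm_log_analytics_workspace" "monitoring_object" {
  # count = 0 skips the resource entirely, count = 1 creates it
  count               = var.deploy_monitoring ? 1 : 0
  name                = "monitoring-logspace"
  resource_group_name = "monitoring-rg"
  location            = "westeurope"
  sku                 = "PerGB2018"
}
```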


Now, with all of the above out of the way, let's dig into some code utilizing these concepts.


If you haven't cloned the repo, create a folder anywhere on your local machine and create all 3 standard Terraform configuration files: 'main.tf', 'outputs.tf' and 'variables.tf'.


We will be building a Terraform script that changes its deployment behaviour depending on the environment provided in the variable 'environment_type'.


Go into variables.tf and define all the variables below:

variable "name_prefix" {
  description = "a default prefix name to use on any resource type"
  type = string
  default = "company"
}

variable "location" {
  description = "the location of any resource type"
  type = string
  default = "westeurope"
}

variable "environment_type" {
  description = "must be exactly one of the following strings: 'dev', 'test', 'prod'"
  type = string

  validation {
    condition = length(regexall("^(dev|test|prod)$", var.environment_type)) > 0
    error_message = "the environment_type provided '${var.environment_type}' did not match any of 'dev||test||prod'"
  }
}

variable "ip_address_space" {
  description = "a list of string defining the address space(s) of the environment"
  type = list(string)
  default = ["10.0.0.0/16", "172.16.0.0/24", "192.168.0.0/24"]
}

Now, to make a 'dynamic configuration', we need to define variables that can help us determine which scenario / context to run our script in.


Go to the main.tf and define:

terraform {
  required_providers {
    azurerm = {
        source = "hashicorp/azurerm"
    }
  }
}

provider "azurerm" {
  features {
  }
}

locals {
  name_prefix = "${var.environment_type}-${var.name_prefix}"
}

resource "azurerm_resource_group" "rg_object" {
  count = var.environment_type == "prod" ? 2 : 1
  name = count.index == 1 ? "${local.name_prefix}-mgmt-rg" : "${local.name_prefix}-rg"
  location = var.location

  tags = {
    environment = var.environment_type
  }
}

First, we define all the required Terraform boilerplate code. We also define the 'locals' block, which includes a dynamic prefix generated by combining the values of the input variables 'var.environment_type' and 'var.name_prefix'.


Next, we create the first resource(s) using a 'count' loop combined with logic to determine whether to create 1 or 2 resource groups, depending on the environment type.


With the resource group(s) created, let's use the return object and carve out the resource group name(s). To do this, we define a new local variable like so:

locals {
  name_prefix = "${var.environment_type}-${var.name_prefix}"
  rg_name = azurerm_resource_group.rg_object.*.name
}

We use the splat expression to carve out the name(s) of the resource group(s). This returns a list of simple string names - just as we want it. This way we can reuse this local variable, and don't worry, it will soon make sense why we need more than one resource group when the environment is 'prod'.
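To make the shape of 'local.rg_name' concrete, here is roughly what it evaluates to in the two scenarios (assuming the default 'name_prefix' of 'company'):

```hcl
# environment_type = "prod" -> count = 2, so the splat returns both names:
# local.rg_name == ["prod-company-rg", "prod-company-mgmt-rg"]

# environment_type = "dev" -> count = 1, so the splat returns a single name:
# local.rg_name == ["dev-company-rg"]
```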


With the resource group definition done, let's create an Azure Log Analytics workspace like so:


resource "azurerm_log_analytics_workspace" "logspace_object" {
  name                = "${local.name_prefix}-logspace"
  location            = var.location
  resource_group_name = local.rg_name[0]
  sku                 = "PerGB2018"
  allow_resource_only_permissions = var.environment_type == "prod"
  local_authentication_disabled = var.environment_type == "prod"
  retention_in_days = var.environment_type == "prod" ? 120 : 90
  daily_quota_gb = var.environment_type != "prod" ? 5 : null
  tags = {
    environment = var.environment_type
  }
}

Notice how this definition does not use a loop. The reason is that we want every environment to have a Log Analytics workspace. We use the newly created local variable to reach index 0 of the list of rg names, which will always return the resource group name prefixed with the environment name without 'mgmt'.


Additionally, it's worth noting that several parameters are set via conditional logic. This allows us to create a logspace for production with additional security settings, while configuring the opposite for development and testing purposes.


We now want to define the resource definition for the Azure virtual network(s):


resource "azurerm_virtual_network" "vn_objects" {
  count = var.environment_type == "prod" ? 2 : 1
  name = count.index == 1 ? "${local.name_prefix}-mgmt-vnet" : "${local.name_prefix}-vnet"
  resource_group_name = local.rg_name[0]
  location = var.location
  address_space = [local.ip_address_space[count.index]]

  dynamic "subnet" {
    for_each = var.environment_type == "prod" ? {for ip in [local.ip_address_space[count.index]] : ip => ip} : {}
    content {
      name = count.index == 0 ? "DMZ" : "GatewaySubnet"
      address_prefix = "${cidrsubnet(subnet.key, 1, 0)}"
    }
  }

  tags = {
    environment = var.environment_type
  }
}

To enable the creation of one or two virtual networks, we utilize count together with logic. This approach is necessary because the production environment requires a separate virtual network for managing the VPN (Virtual Network Gateway) that will be configured later.


We incorporate a dynamic subnet block within the resource definition, allowing us to create configurations with or without subnets. To utilize dynamic blocks, we must use the 'for-each' loop type. Terraform provides this capability to determine when and how many times the dynamic block should run for each resource. Within the specific 'for-each' loop of the dynamic block, we iterate over our local variable 'ip_address_space', which contains a list of string values (explained below).


Before that, we use a 'for-loop' to transform each element in the variable into a map. This step is necessary because the 'for-each' loop iterates over each key in a map. The resulting map is straightforward, consisting of only one element at a time, where the key and value are the same IP address range. This structure enables us to reference 'subnet.key' in the underlying content block, providing us with the required IP prefix for the 'subnet block'.
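To illustrate, for the first production virtual network the 'for' expression evaluates roughly like this (using the 'prod' address spaces defined in the locals shortly):

```hcl
# local.ip_address_space[0] is "10.0.1.0/24" in 'prod', so:
# {for ip in ["10.0.1.0/24"] : ip => ip}
#   == { "10.0.1.0/24" = "10.0.1.0/24" }
#
# The map has exactly one key, so the dynamic block renders once and
# 'subnet.key' resolves to the address space itself.
```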


Define 3 new local variables: 'calculate_ip_address_space_prod', 'calculate_ip_address_space_nonprod' & 'ip_address_space':

locals {
  name_prefix = "${var.environment_type}-${var.name_prefix}"
  
  rg_name = azurerm_resource_group.rg_object.*.name
  
  calculate_ip_address_space_prod = var.environment_type == "prod" ? [cidrsubnet(var.ip_address_space[0], 8, 1), cidrsubnet(var.ip_address_space[0], 8, 99)] : null
  
  calculate_ip_address_space_nonprod = var.environment_type == "test" ? [var.ip_address_space[1]] : [var.ip_address_space[2]]
  
  ip_address_space = local.calculate_ip_address_space_prod != null ? local.calculate_ip_address_space_prod : local.calculate_ip_address_space_nonprod
}

As mentioned earlier, when the environment is set to 'prod', we need to create two virtual networks. To handle this, we define a local variable for each scenario: one for 'prod' and one for 'dev' and 'test'. In the production environment, we leverage the Terraform function 'cidrsubnet()' to split the overall '10.0.0.0/16' network into two '/24' networks. These two new subnets are then stored in a list of strings. Notably, we use 'null' on the false side of the conditional, which lets the final 'ip_address_space' local pick whichever of the two variables is not 'null'. Additionally, we define the IP address space for both the 'dev' and 'test' environments, each residing in its own list of strings.
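As a quick reference for how 'cidrsubnet()' behaves here (netnums 1 and 99 are simply the offsets chosen for this script):

```hcl
# cidrsubnet(prefix, newbits, netnum) extends the prefix mask by 'newbits'
# and picks subnet number 'netnum' of that size:
# cidrsubnet("10.0.0.0/16", 8, 1)  == "10.0.1.0/24"
# cidrsubnet("10.0.0.0/16", 8, 99) == "10.0.99.0/24"
```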


Now let's define the Azure public IP, which will be required for the VPN defined later on:

resource "azurerm_public_ip" "pip_object" {
  count = var.environment_type == "prod" ? 1 : 0
  name = "${local.name_prefix}-gw-pip"
  location = var.location
  resource_group_name = local.rg_name[0]
  sku = "Standard"
  allocation_method = "Static"

  tags = {
    environment = var.environment_type
  }
}

Nothing special about this resource.


Let's define the final resource definition, which is the Azure virtual network gateway:

resource "azurerm_virtual_network_gateway" "gw_object" {
  count = var.environment_type == "prod" ? 1 : 0
  name = "${local.name_prefix}-gw"
  location = var.location
  resource_group_name = local.rg_name[1]
  sku = "Basic"
  type = "Vpn"
  private_ip_address_enabled = true

  ip_configuration {
    name = "ip_config"
    private_ip_address_allocation = "Static"
    subnet_id = flatten([for each in flatten(azurerm_virtual_network.vn_objects.*.subnet) : each if each.name == "GatewaySubnet"])[count.index].id
    public_ip_address_id = azurerm_public_ip.pip_object[count.index].id
  }

  tags = {
    environment = var.environment_type
  }
}

The intriguing aspect of the above definition is the combination of various programmatic features in Terraform to obtain the specific subnet ID required for configuring the Azure virtual network gateway. Initially, we iterate over the return objects of the virtual networks, which provide information about the underlying subnets. Since a for-loop requires a list of objects or strings, we need to convert the return object from maps using the splat expression. Conveniently, we employ 'splat' to access the subnet objects themselves, which are our target.


Additionally, we use an if clause to check whether the subnet name is 'GatewaySubnet', keeping only the matching object in a new list. Finally, we flatten the list to remove the 'set of object' type, as 'sets' only allow retrieving all values at once, preventing indexing. Then, we access the first and only interesting index of the newly created list and extract the 'id' attribute of the subnet.
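Broken down step by step, the 'subnet_id' expression works roughly like this:

```hcl
# 1. azurerm_virtual_network.vn_objects.*.subnet
#      -> a list containing one set of subnet objects per virtual network
# 2. flatten(...)
#      -> a single flat list of all subnet objects across the vnets
# 3. [for each in ... : each if each.name == "GatewaySubnet"]
#      -> keeps only the gateway subnet object
# 4. flatten(...)[count.index].id
#      -> indexes into the resulting list (index 0, since count is 1 in
#         'prod') and extracts the subnet's 'id' attribute
```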


As the last configuration part, and before we run the code, add the following to the outputs.tf file:

output "pip_gw_ip" {
  value = azurerm_public_ip.pip_object.*.ip_address
}

We utilize the splat expression because we created the public IP resource definition using the count loop. As a result, we need to treat the return object as a list, regardless of its length. If we run as 'dev' or 'test', the output will simply be an empty list, as the resource only gets created in 'prod'.
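Concretely, the output differs per environment like so:

```hcl
# environment_type = "dev" or "test" -> count = 0, so the splat yields:
# pip_gw_ip = []
#
# environment_type = "prod" -> one public IP is created:
# pip_gw_ip = [<the allocated static ip, known after apply>]
```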


Now, first run terraform init if you haven't already done so:

terraform init

If you aren't already authenticated with az cli, please do so now. In case you're in doubt about how to do so, check this post and go to 'Using azure cli to authenticate to Azure' -> Terraform weekly tips (1) (codeterraform.com)


Let's run terraform plan on dev:

terraform plan -var="environment_type=dev"

Which results in the following 3 resources:

[screenshot: terraform plan output - dev]

If we then run the plan on test, notice how it changes just slightly:

[screenshot: terraform plan output - test]

Now, when running on 'prod', we will see how big a difference there can be within the same configuration file:


Public IP & logspace (1):

[screenshot: terraform plan output - prod (1)]

2 resource groups & 1 of the virtual networks (2):

[screenshot: terraform plan output - prod (2)]

And finally, the other virtual network & the virtual network gateway (3):

[screenshot: terraform plan output - prod (3)]

In summary, by utilizing a few simple input variables, logic, local variables, and loops, we can create either a straightforward 'dev' or 'test' environment or a more complex 'prod' environment.


The provided code merely scratches the surface of what is achievable in Terraform when we leverage the concept of 'Dynamic configuration.' As a side note, we could further enhance the code's cleanliness by creating additional local variables to capture return values. This approach would be advantageous during the script's expansion, as we could conveniently reference these local variables instead of having to deal with long and tedious references from direct resources.


That was all for today folks, thank you so much for reading along.


Cheers!


PS.

Want to learn more about Terraform? Click here -> terraform (codeterraform.com)

Want to learn more about other cool stuff like Automation or PowerShell? -> powershell (codeterraform.com) / automation (codeterraform.com)




