Running Docker Containers with Terraform

    Why Terraform and Docker?

    Firstly, using Terraform to run Docker containers is complete overkill in our opinion! You are better off using docker-compose to run a handful of containers. But if your goal is to learn Terraform, this is the simplest place to start.

    There is no remote state to worry about, because the state file lives locally. This is a huge advantage: you do not need to set up a cloud services account, such as an AWS account; all you need is your own virtual machine! It also means you do not need to learn any AWS, GCP or Azure cloud architecture, because the only architecture you will use belongs to Docker itself.
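
    By default, Terraform keeps that state in a terraform.tfstate file in your working directory. You can spell this out with the built-in local backend if you like, although omitting the block behaves exactly the same; a minimal sketch:

    terraform {
      backend "local" {
        # this is already the default location, so the whole block is optional
        path = "terraform.tfstate"
      }
    }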

    The provider we will be using is very simple, as it is really just translating the docker-compose arguments into HashiCorp Configuration Language (HCL). This is useful because it gets us used to the different variable formats, such as sets, sets of strings and block types. It also gets us used to reading the Terraform documentation to work out what our configuration needs.
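
    As a quick sketch of the difference, here are two arguments we will set for real further down (both live inside a docker_container resource): dns is a simple set of strings, while ports is a block type.

    # a set of strings: a bracketed list of quoted values
    dns = ["8.8.8.8", "8.8.4.4"]

    # a block type: repeated braces containing named arguments
    ports {
      internal = 8080
      external = 8086
    }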

    You can write a docker-compose file first so you know what a working model looks like.

    Creating a container with Terraform

    Prerequisite: you will need Docker installed!

    Step 1: Create a providers file. A Terraform provider is a plugin that enables interaction with an API. Below is ours, to give you an example.

    providers.tf

    terraform {
      required_providers {
        docker = {
          source  = "kreuzwerker/docker" <-- The name of our provider
          version = "3.0.1"
        }
      }
    }
    
    provider "docker" {
      host = "unix:///var/run/docker.sock" <-- how to talk to docker service
    }

    Most Terraform projects contain a file called main.tf, which holds the main part of the code, but our project is very small, so we will name each file according to its purpose.
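
    For reference, the project directory ends up containing just the two files we write in this post (Terraform adds its own lock file and state file alongside them once we run it):

    .
    ├── providers.tf   # provider requirements and configuration
    └── sabnzbd.tf     # image and container resources for sabnzbd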

    Step 2: Create our first container. Below is our docker-compose file for the “sabnzbd” container.

    version: "3"
    services:
      sabnzbd:
        image: "mumiehub/sabnzbdvpn"
        container_name: "sabnzbd"
        ports:
          - "8080:8080"
        volumes:
          - /dockers/sabnzbd_config/:/config
          - /etc/localtime:/etc/localtime:ro
          - /dockers/openpvn:/etc/openvpn/custom:ro
          - /dockers/data/:/data
        environment:
          - LOCAL_NETWORK=192.168.0.0/24
          - PUID=1000
          - PGID=1000
        network_mode: "bridge"
        cap_add:
          - NET_ADMIN
        devices:
          - /dev/net/tun
        dns:
          - "8.8.8.8"
          - "8.8.4.4"
        restart: always

    Looking at the Terraform documentation for the provider's docker_container resource, we want to pull our image and then create the container. We can achieve this by doing the following:

    # Start a container
    resource "docker_container" "ubuntu" {
      name  = "foo"
      image = docker_image.ubuntu.image_id
    }
    
    # Find the latest Ubuntu precise image.
    resource "docker_image" "ubuntu" {
      name = "ubuntu:precise"
    }

    So for our container it would be ….

    # Pulls the image
    resource "docker_image" "sabnzbd" {
      name = "mumiehub/sabnzbdvpn"
    }
    
    # Create a container
    resource "docker_container" "sabnzbd_container" {
      image = docker_image.sabnzbd.image_id
      name  = "sabnzbd"
    }

    Configuring the container

    We now need to add the rest of our container's setup to the Terraform file (such as our mounted volumes, environment variables, ports etc.). In our YAML-based docker-compose file, our ports looked like this:

    ports:
      - "8080:8080"

    If we look at the Terraform documentation for our provider, ports are specified as:

    ports (Block List) Publish a container’s port(s) to the host. (see below for nested schema)

    But what goes into a ‘nested schema’? Clicking on the link we can see that the block schema requires an internal port argument with the external port argument being optional (in our case, we require both):

    Required:
    * internal (Number) Port within the container.
    Optional:
    * external (Number) Port exposed out of the container. If not given, a free random port will be used.

    Therefore we will end up with something that looks like this:

    resource "docker_container" "sabnzbd_container" {
      image = docker_image.sabnzbd.image_id
      name  = "sabnzbd"
      ports {
        internal = 8080
        external = 8086
      }
    }

    If you continue following the documentation, you will be able to fill in the missing details you need to complete your docker setup.

    Ours ended up like this:

    # Pulls the image
    resource "docker_image" "sabnzbd" {
      name = "mumiehub/sabnzbdvpn"
    }
    
    # Create a container
    resource "docker_container" "sabnzbd_container" {
      image = docker_image.sabnzbd.image_id
      name  = "sabnzbd"
      ports {
        internal = 8080
        external = 8086
      }
    # For each volume add a block set
      volumes {
        host_path      = "/datapool/Dockers/sabnzbd"
        container_path = "/config"
      }
    
      volumes {
        host_path      = "/etc/localtime"
        container_path = "/etc/localtime"
        read_only      = true
      }
      volumes {
        host_path      = "/datapool/Dockers/sabnzbd"
        container_path = "/etc/openvpn/custom"
        read_only      = true
      }
      volumes {
        host_path      = "/mnt/SolidState1/"
        container_path = "/data"
      }
    # Environment variables are passed as a list of strings
      env = [
        "LOCAL_NETWORK=10.0.0.0/8",
        "PUID=1000",
        "PGID=1000"
      ]
    # Configure the networking mode
      network_mode = "bridge"
      capabilities {
        add = ["NET_ADMIN"]
      }
      devices {
        host_path = "/dev/net/tun"
      }
      dns = [
        "8.8.8.8",
        "4.4.4.4"
      ]
    # Ensure our container is always online
      restart = "always"
    }

    Running our Terraform code

    Now we have our providers.tf and our sabnzbd.tf files, we are ready to spin up our container.

    Step 1: Initialise our Terraform project (this is where Terraform downloads its requirements, such as the provider, from the registry)

    terraform init

    Step 2: Get Terraform to format your code nicely (if you have been typing it in by hand so far, it is probably getting messy by now)

    terraform fmt

    Step 3: Plan our code deployment

    terraform plan

    Here you should see the plan detailing the creation of our Docker resources:

      # docker_image.sabnzbd will be created
      + resource "docker_image" "sabnzbd" {
          + id          = (known after apply)
          + image_id    = (known after apply)
          + name        = "mumiehub/sabnzbdvpn"
          + repo_digest = (known after apply)
        }
    
    Plan: 2 to add, 0 to change, 0 to destroy.

    Step 4: After reviewing the changes, and if you are happy with them, it’s time to build your containers!

    terraform apply

    We can see our new resources:

    $ terraform state list 
    docker_container.sabnzbd_container 
    docker_image.sabnzbd

    Which we can also see in Docker:

    $ docker ps 
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 
    048af7bdc237 81abad42f605 "/usr/bin/dumb-init …" 2 minutes ago Up 2 minutes 8090/tcp, 0.0.0.0:8086->8080/tcp sabnzbd
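
    If you want a quick sanity check from the host, you can hit the port we published; sabnzbd’s web interface listens on the internal port 8080, which we mapped to 8086:

    # an HTTP response here means the container is up and serving traffic
    $ curl -I http://localhost:8086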

    Summary

    We found this a useful way of getting accustomed to Terraform: its documentation, the different types of variables, and how to set up and run a Docker container from within Terraform.

    If you get stuck following this tutorial, drop us a message. Don’t worry, we got stuck plenty of times too!
