Automated Testing of Packer Templates with Kitchen Terraform

Published in: Infrastructure as a Service, DevOps, HashiCorp · Date: December 15, 2021
Martin Buchleitner, Senior IT-Consultant

In the previous post about HashiCorp Packer, we created a Packer template that was able to build the same virtual machine on AWS and Azure. But we left that post with the question of whether these virtual machines are really the same. The way to answer this question is testing. We use kitchen-terraform to create integration tests, which ensure that the created virtual machine(s) match certain criteria.

kitchen-terraform

Kitchen Terraform brings the benefits of test-driven development to Terraform projects. As the use of Terraform continues to gain popularity in production environments, its logic must be thoroughly tested. The development cycle can now be driven by a suite of tests to verify new features and protect against regressions. Using Kitchen Terraform enables a test matrix that can vary in platforms, input variables and even fixture modules. Kitchen Terraform provides a consistent approach for testing Terraform projects locally and in continuous integration pipelines. In the current setup, we are going to use Kitchen Terraform to create virtual machines from our freshly built image templates and run certain integration tests on these machines.

kitchen.yml

The following code sample is a kitchen definition that uses kitchen-terraform to validate an Azure virtual machine as well as an AWS virtual machine. There are 2 test suites defined, one for each infrastructure on which we are going to verify our produced image template. Under the profile_locations item, you can define a list of integration tests. Both test suites are defined identically except for the driver's root_module_directory, which separates Azure from AWS.

driver:
  name: terraform

provisioner:
  name: terraform

verifier:
  name: terraform
  fail_fast: false

platforms:
  - name: unit_under_test
    driver:
      variable_files:
        - terraform.tfvars
    verifier:
      systems:
        - name: local
          backend: local
          attrs_outputs:
            remote_user_attribute: remote_user
          controls:
            - inspec_attributes

suites:
  - name: azure_base
    driver:
      root_module_directory: test/fixtures/azure
    verifier:
      systems:
        - name: terraform
          hosts_output: uut_ip
          backend: ssh
          key_files:
            - ansible/ssh/dotfiles_ssh_rsa
          profile_locations:
            - test/integration/customize

  - name: aws_base
    driver:
      root_module_directory: test/fixtures/aws
    verifier:
      systems:
        - name: terraform
          hosts_output: uut_ip
          backend: ssh
          key_files:
            - ansible/ssh/dotfiles_ssh_rsa
          profile_locations:
            - test/integration/customize

Root module definition

In the following code sample, you find an AWS root module definition that can be used with kitchen-terraform. In the security group, only the SSH port is allowed because, in this example, we are not going to test any web service.

The code defines 2 variables that are related only to AWS:

  • aws_ami_id: the ID of our generated template
  • aws_remote_user: the username that will be used by kitchen-terraform

and there are also 2 outputs which must be common to all infrastructures:

  • uut_ip: the actual IP address of the created virtual machine
  • remote_user: the user that will be used in kitchen-terraform, which is just the variable from above

test/fixtures/aws/main.tf

variable "aws_ami_id" {
  type = string
}
variable "aws_remote_user" {
  type    = string
}

data "aws_ami" "ami_under_test" {
  most_recent = true

  filter {
    name   = "image-id"
    values = [var.aws_ami_id]
  }
}

resource "aws_security_group" "uut" {
  name   = "uut_secgroup_${var.aws_ami_id}"
  vpc_id = data.aws_vpc.uut.id

  ingress {
    description = "allow all ports for testing"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "packer uut"
  }
}

module "keypair" {
  source  = "mitchellh/dynamic-keys/aws"
  version = "2.0.0"
  path    = "${path.root}/keys"
  name    = "${var.aws_ami_id}"
}

resource "aws_instance" "packer_test" {
  ami                         = data.aws_ami.ami_under_test.id
  instance_type               = "t3.medium"
  vpc_security_group_ids      = [aws_security_group.uut.id]
  key_name                    = module.keypair.key_name
  associate_public_ip_address = true
}
output "uut_ip" {
  value = aws_instance.packer_test.public_ip
}
output "remote_user" {
  value = var.aws_remote_user
}
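
The kitchen platform above references a terraform.tfvars file via variable_files. What goes into it depends on your Packer build; a minimal sketch for the AWS fixture, with placeholder values, could look like the following (the fixture additionally needs the usual aws provider configuration with a region, which is not shown in the article):

# terraform.tfvars (AWS) - example values only, replace with your own
aws_ami_id      = "ami-0123456789abcdef0" # AMI ID reported by the Packer build
aws_remote_user = "coder"                 # user baked into the image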

test/fixtures/azure/main.tf

The Azure infrastructure is defined analogously to the previous AWS infrastructure setup. The difference here is, of course, that you must define different variables:

  • vm_image_id: the ID of the image template you created
  • vm_remote_user: the counterpart of aws_remote_user above
  • location: the Azure location where the test infrastructure should be created

The outputs are the same as above, so the kitchen-terraform setup can be reused.

variable "vm_image_id" {
  type = string
}

variable "vm_remote_user" {
  default = "coder"
  type    = string
}

variable "location" {
  type    = string
  default = "westeurope"
}

resource "azurerm_resource_group" "uut_resources" {
  name     = "packertest"
  location = var.location
}

resource "azurerm_virtual_network" "uut_network" {
  name                = "packertest"
  address_space       = ["10.0.0.0/16"]
  location            = var.location
  resource_group_name = azurerm_resource_group.uut_resources.name
}

resource "azurerm_subnet" "uut_subnet" {
  name                 = "packertest"
  resource_group_name  = azurerm_resource_group.uut_resources.name
  virtual_network_name = azurerm_virtual_network.uut_network.name
  address_prefixes     = ["10.0.1.0/24"]
}

resource "azurerm_public_ip" "uut_publicip" {
  name                = "myPublicIP"
  location            = var.location
  resource_group_name = azurerm_resource_group.uut_resources.name
  allocation_method   = "Dynamic"
}

resource "azurerm_network_security_group" "uut_secgroup" {
  name                = "packertest"
  location            = var.location
  resource_group_name = azurerm_resource_group.uut_resources.name

  security_rule {
    name                       = "SSH"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

resource "azurerm_network_interface" "uut_vm_nic" {
  name                = "packertest"
  location            = var.location
  resource_group_name = azurerm_resource_group.uut_resources.name

  ip_configuration {
    name                          = "packertestip_"
    subnet_id                     = azurerm_subnet.uut_subnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.uut_publicip.id
  }
}

resource "azurerm_network_interface_security_group_association" "uut_secgroup_assoc" {
  network_interface_id      = azurerm_network_interface.uut_vm_nic.id
  network_security_group_id = azurerm_network_security_group.uut_secgroup.id
}

resource "azurerm_storage_account" "uut_storage_account" {
  name                     = "st"
  resource_group_name      = azurerm_resource_group.uut_resources.name
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "tls_private_key" "ssh_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "azurerm_linux_virtual_machine" "uut" {
  name                  = "packertest"
  location              = var.location
  resource_group_name   = azurerm_resource_group.uut_resources.name
  network_interface_ids = [azurerm_network_interface.uut_vm_nic.id]
  size                  = "Standard_DS1_v2"

  os_disk {
    name                 = "osdisk_"
    caching              = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }
  computer_name = "packer"

  source_image_id = var.vm_image_id

  admin_username                  = var.vm_remote_user
  disable_password_authentication = true

  admin_ssh_key {
    username   = var.vm_remote_user
    public_key = tls_private_key.ssh_key.public_key_openssh
  }

  boot_diagnostics {
    storage_account_uri = azurerm_storage_account.uut_storage_account.primary_blob_endpoint
  }
}

data "azurerm_public_ip" "uut_ip" {
  name                = azurerm_public_ip.uut_publicip.name
  resource_group_name = azurerm_linux_virtual_machine.uut.resource_group_name
}

output "uut_ip" {
  value = data.azurerm_public_ip.uut_ip.ip_address
}

output "remote_user" {
  value = var.vm_remote_user
}
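
As with AWS, the Azure fixture needs a matching terraform.tfvars (plus the usual azurerm provider block with features {} and the tls provider, which are not shown in the article). A minimal sketch with placeholder values, where the exact image ID depends on how Packer published the managed image:

# terraform.tfvars (Azure) - example values only, replace with your own
vm_image_id    = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/images/<image-name>"
vm_remote_user = "coder"
location       = "westeurope"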

Kitchen integration test

test/integration/customize/inspec.yml

The integration test is defined by inspec.yml. The important part here is that the InSpec profile has an attribute remote_user, which gets passed in from kitchen.

name: customize
title: check generic customization of virtual machines
version: 0.1.0
attributes:
  - name: remote_user
    type: string
    required: true
    description: user to check for rights on docker group

test/integration/customize/controls/remote_user.rb

The initial check is, of course, that our remote_user exists and, for example, has a certain well-defined UID.

control "remote_user" do
  username = attribute("remote_user")
  describe user(username) do
    it { should exist }
    its("uid") { should eq 1010 }
    its("shell") { should eq "/bin/bash" }
    its("home") { should eq "/home/#{username}" }
  end
end

test/integration/customize/controls/packer_artifacts.rb

We do not want to have Packer artifacts on our images; this also includes an InSpec report that has already been generated by Packer.

control 'packer_provisioning' do
  desc 'check if any packer provisioning directories are still present'
  describe command('ls /tmp/packer*') do
    its('exit_status') { should_not eq 0 }
  end
  describe command('ls /tmp/127.0.0.1') do
    its('exit_status') { should_not eq 0 }
  end
end

control 'inspec_artifacts' do
  desc 'check if any inspec artifacts are still present'
  describe command('ls /tmp/*report.xml') do
    its('exit_status') { should_not eq 0 }
  end
end

test/integration/customize/controls/dotfiles.rb

In the base customize process, we roll out a default bash and git configuration. This is tested by the following control set:

username = attribute("remote_user")
userhome = "/home/#{username}"
control "dotfiles_customize" do

  describe directory "/usr/local/dotfiles" do
    it { should exist }
    its("owner") { should eq "root" }
    its("mode") { should cmp "0755" }
  end

  describe file "#{userhome}/.bashrc" do
    it { should exist }
    it { should be_symlink }
    its("link_path") { should eq "/usr/local/dotfiles/.bashrc" }
  end

  describe file "#{userhome}/.bash_profile" do
    it { should exist }
    it { should be_symlink }
    its("link_path") { should eq "/usr/local/dotfiles/.bash_profile" }
  end

  describe file "#{userhome}/.bash_prompt" do
    it { should exist }
    it { should be_symlink }
    its("link_path") { should eq "/usr/local/dotfiles/.bash_prompt" }
  end

  describe file "#{userhome}/.gitconfig" do
    it { should exist }
    it { should be_file }
  end
end

Running the actual kitchen-terraform tests

In this setup, now generate correct terraform.tfvars files for Azure and/or AWS (as sketched above), and then you are ready to test your actual images:

$ kitchen test azure-base-unit-under-test
$ kitchen test aws-base-unit-under-test

Final thoughts

With this tool setup, you can verify your virtual machine templates across multiple cloud infrastructures and ensure that they are really configured and behave in the way you actually expect. The virtual machines should always behave the same, whether they run on AWS, Azure, or in VirtualBox on your local machine.

After creating your custom virtual machine templates and verifying that they are configured the same way and show the same behaviour, there is just one more thing to do: distribute them across multiple regions. That will be the next topic in this HashiCorp Packer series.