Automated Testing of Packer Templates with Kitchen Terraform



In the latest post about HashiCorp Packer, we created a Packer template that builds the same virtual machine on both AWS and Azure. We left that post with an open question: are these virtual machines really the same? Testing provides the answer. We use kitchen-terraform to create integration tests that verify the created virtual machines meet certain criteria.

kitchen-terraform

Kitchen Terraform brings the benefits of test-driven development to Terraform projects. As the use of Terraform in production environments continues to grow, its logic must be thoroughly tested. The development cycle can now be driven by a suite of tests that verify new features and protect against regressions. Kitchen Terraform enables a test matrix that can vary in platforms, input variables, and even fixture modules, and it provides a consistent approach for testing Terraform projects locally and in continuous integration pipelines. In this setup, we are going to use Kitchen Terraform to create virtual machines from our freshly built image templates and run a set of integration tests against them.

kitchen.yml

The following code sample is a kitchen definition that uses kitchen-terraform to validate both an Azure and an AWS virtual machine. Two test suites are defined, one for each infrastructure on which we verify the produced image template. Under profile_locations you can define a list of integration test profiles. Both test suites are defined identically except for the driver's root_module_directory, which separates the Azure fixture from the AWS fixture.

driver:
  name: terraform

provisioner:
  name: terraform

verifier:
  name: terraform
  fail_fast: false

platforms:
  - name: unit_under_test
    driver:
      variable_files:
        - terraform.tfvars
    verifier:
      systems:
        - name: local
          backend: local
          attrs_outputs:
            remote_user_attribute: remote_user
          controls:
            - inspec_attributes

suites:
  - name: azure_base
    driver:
      root_module_directory: test/fixtures/azure
    verifier:
      systems:
        - name: terraform
          hosts_output: uut_ip
          backend: ssh
          key_files:
            - ansible/ssh/dotfiles_ssh_rsa
          profile_locations:
            - test/integration/customize

  - name: aws_base
    driver:
      root_module_directory: test/fixtures/aws
    verifier:
      systems:
        - name: terraform
          hosts_output: uut_ip
          backend: ssh
          key_files:
            - ansible/ssh/dotfiles_ssh_rsa
          profile_locations:
            - test/integration/customize

Root module definition

In the following code sample, you find an AWS root module definition that can be used with kitchen-terraform. In the security group, only the SSH port is allowed because we are not testing any web services in this example.

The code defines two variables that are specific to AWS:

  • aws_ami_id: the ID of our generated template
  • aws_remote_user: the username that will be used by kitchen-terraform

There are also two outputs that must be common to all infrastructures:

  • uut_ip: the actual IP address of the created virtual machine
  • remote_user: the user that will be used in kitchen-terraform, which is just the variable from above

test/fixtures/aws/main.tf

variable "aws_ami_id" {
  type = string
}

variable "aws_remote_user" {
  type = string
}

# Look up the AMI that was just built by Packer.
data "aws_ami" "ami_under_test" {
  most_recent = true

  filter {
    name   = "image-id"
    values = [var.aws_ami_id]
  }
}

# VPC to place the security group in. The original listing references this data
# source without showing it; here we simply use the default VPC.
data "aws_vpc" "uut" {
  default = true
}

resource "aws_security_group" "uut" {
  name   = "uut_secgroup_${var.aws_ami_id}"
  vpc_id = data.aws_vpc.uut.id

  ingress {
    description = "allow SSH for testing"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "packer uut"
  }
}

# Generate a throwaway SSH keypair for the test instance.
module "keypair" {
  source  = "mitchellh/dynamic-keys/aws"
  version = "2.0.0"
  path    = "${path.root}/keys"
  name    = var.aws_ami_id
}

resource "aws_instance" "packer_test" {
  ami                         = data.aws_ami.ami_under_test.id
  instance_type               = "t3.medium"
  vpc_security_group_ids      = [aws_security_group.uut.id]
  key_name                    = module.keypair.key_name
  associate_public_ip_address = true
}

output "uut_ip" {
  value = aws_instance.packer_test.public_ip
}

output "remote_user" {
  value = var.aws_remote_user
}
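The fixture above does not include a provider block; it assumes the AWS provider configuration (region and credentials) is supplied elsewhere, for example through environment variables or a separate file in the fixture directory. A minimal, hypothetical sketch of such a file could look like this - the region is a placeholder and must be the region in which Packer built the AMI:

# test/fixtures/aws/provider.tf (hypothetical)
# Credentials are taken from the environment (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY)
# or a shared credentials file.
provider "aws" {
  region = "eu-central-1" # placeholder: must match the region of the AMI under test
}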

test/fixtures/azure/main.tf

The Azure infrastructure is defined analogously to the AWS setup above. The difference is, of course, that you must define different variables:

  • vm_image_id: the ID of the image template you created
  • vm_remote_user: the counterpart of aws_remote_user above
  • location: the Azure region in which the test infrastructure is created

The outputs are the same as above, so the kitchen-terraform configuration can be reused.

variable "vm_image_id" {
  type = string
}

variable "vm_remote_user" {
  default = "coder"
  type    = string
}

variable "location" {
  type    = string
  default = "westeurope"
}

resource "azurerm_resource_group" "uut_resources" {
  name     = "packertest"
  location = var.location
}

resource "azurerm_virtual_network" "uut_network" {
  name                = "packertest"
  address_space       = ["10.0.0.0/16"]
  location            = var.location
  resource_group_name = azurerm_resource_group.uut_resources.name
}

resource "azurerm_subnet" "uut_subnet" {
  name                 = "packertest"
  resource_group_name  = azurerm_resource_group.uut_resources.name
  virtual_network_name = azurerm_virtual_network.uut_network.name
  address_prefixes     = ["10.0.1.0/24"]
}

resource "azurerm_public_ip" "uut_publicip" {
  name                = "myPublicIP"
  location            = var.location
  resource_group_name = azurerm_resource_group.uut_resources.name
  allocation_method   = "Dynamic"
}

# Only SSH is needed for the integration tests.
resource "azurerm_network_security_group" "uut_secgroup" {
  name                = "packertest"
  location            = var.location
  resource_group_name = azurerm_resource_group.uut_resources.name

  security_rule {
    name                       = "SSH"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

resource "azurerm_network_interface" "uut_vm_nic" {
  name                = "packertest"
  location            = var.location
  resource_group_name = azurerm_resource_group.uut_resources.name

  ip_configuration {
    name                          = "packertestip_"
    subnet_id                     = azurerm_subnet.uut_subnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.uut_publicip.id
  }
}

resource "azurerm_network_interface_security_group_association" "uut_secgroup_assoc" {
  network_interface_id      = azurerm_network_interface.uut_vm_nic.id
  network_security_group_id = azurerm_network_security_group.uut_secgroup.id
}

# Storage account for boot diagnostics. Note that the name must be globally
# unique and consist of 3-24 lowercase alphanumeric characters.
resource "azurerm_storage_account" "uut_storage_account" {
  name                     = "st"
  resource_group_name      = azurerm_resource_group.uut_resources.name
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

# Generate an SSH keypair for the VM's admin user.
resource "tls_private_key" "ssh_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "azurerm_linux_virtual_machine" "uut" {
  name                  = "packertest"
  location              = var.location
  resource_group_name   = azurerm_resource_group.uut_resources.name
  network_interface_ids = [azurerm_network_interface.uut_vm_nic.id]
  size                  = "Standard_DS1_v2"

  os_disk {
    name                 = "osdisk_"
    caching              = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }
  computer_name = "packer"

  # The image template produced by Packer.
  source_image_id = var.vm_image_id

  admin_username                  = var.vm_remote_user
  disable_password_authentication = true

  admin_ssh_key {
    username   = var.vm_remote_user
    public_key = tls_private_key.ssh_key.public_key_openssh
  }

  boot_diagnostics {
    storage_account_uri = azurerm_storage_account.uut_storage_account.primary_blob_endpoint
  }
}

# Read the public IP after the VM is created, because a Dynamic public IP only
# receives its address once it is attached to a running machine.
data "azurerm_public_ip" "uut_ip" {
  name                = azurerm_public_ip.uut_publicip.name
  resource_group_name = azurerm_linux_virtual_machine.uut.resource_group_name
}

output "uut_ip" {
  value = data.azurerm_public_ip.uut_ip.ip_address
}

output "remote_user" {
  value = var.vm_remote_user
}
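As with AWS, the fixture assumes the provider configuration is kept separately. Note that the azurerm provider additionally requires a features block; a minimal, hypothetical example:

# test/fixtures/azure/provider.tf (hypothetical)
# Authentication is expected via the Azure CLI or the usual ARM_* environment variables.
provider "azurerm" {
  features {}
}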

Kitchen integration tests

test/integration/customize/inspec.yml

The integration test profile is defined in inspec.yml. The important part here is that the InSpec profile declares an attribute remote_user, which gets passed in from kitchen via the attrs_outputs mapping in kitchen.yml.

name: customize
title: check generic customization of virtual machines
version: 0.1.0
attributes:
  - name: remote_user
    type: string
    required: true
    description: user to check for rights on docker group

test/integration/customize/controls/remote_user.rb

The initial check is, of course, that our remote_user exists and, for example, has a well-defined UID, shell, and home directory.

control "remote_user" do
  username = attribute("remote_user")
  describe user(username) do
    it { should exist }
    its("uid") { should eq 1010 }
    its("shell") { should eq "/bin/bash" }
    its("home") { should eq "/home/#{username}" }
  end
end

test/integration/customize/controls/packer_artifacts.rb

We do not want any Packer artifacts left on our images; this also includes the InSpec report that is already generated during the Packer build.

control 'packer_provisioning' do
  desc 'check if any packer provisioning directories are still present'
  describe command('ls /tmp/packer*') do
    its('exit_status') { should_not eq 0 }
  end
  describe command('ls /tmp/127.0.0.1') do
    its('exit_status') { should_not eq 0 }
  end
end

control 'inspec_artifacts' do
  desc 'check if any inspec artifacts are still present'
  describe command('ls /tmp/*report.xml') do
    its('exit_status') { should_not eq 0 }
  end
end

test/integration/customize/controls/dotfiles.rb

The base customization process rolls out a default Bash and Git configuration, which is tested by this control set:

username = attribute("remote_user")
userhome = "/home/#{username}"

control "dotfiles_customize" do

  describe directory "/usr/local/dotfiles" do
    it { should exist }
    its("owner") { should eq "root" }
    its("mode") { should cmp "0755" }
  end

  describe file "#{userhome}/.bashrc" do
    it { should exist }
    it { should be_symlink }
    its("link_path") { should eq "/usr/local/dotfiles/.bashrc" }
  end

  describe file "#{userhome}/.bash_profile" do
    it { should exist }
    it { should be_symlink }
    its("link_path") { should eq "/usr/local/dotfiles/.bash_profile" }
  end

  describe file "#{userhome}/.bash_prompt" do
    it { should exist }
    it { should be_symlink }
    its("link_path") { should eq "/usr/local/dotfiles/.bash_prompt" }
  end

  describe file "#{userhome}/.gitconfig" do
    it { should exist }
    it { should be_file }
  end
end

Running the actual Kitchen Terraform tests

With this setup in place, generate the correct terraform.tfvars files for Azure and/or AWS, and then you are ready to test your actual images.
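As a rough sketch, the variable files could look like the following - the image IDs and user name are placeholders and must match your own Packer output:

# terraform.tfvars for the AWS fixture (hypothetical values)
aws_ami_id      = "ami-0123456789abcdef0"  # AMI produced by the Packer build
aws_remote_user = "coder"                  # user baked into the image

# terraform.tfvars for the Azure fixture (hypothetical values)
vm_image_id    = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/images/<image-name>"
vm_remote_user = "coder"
location       = "westeurope"

Once the variable files are in place, run the test for each suite: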

$ kitchen test azure-base-unit-under-test
$ kitchen test aws-base-unit-under-test

Final thoughts

With this tool setup, you can verify your virtual machine templates across multiple cloud infrastructures and ensure that they are configured and behave the way you actually expect. The virtual machines should always behave the same, whether they run on AWS, Azure, or in VirtualBox on your local machine.

After creating your custom virtual machine templates and verifying that they are configured the same way and behave the same, there is just one more thing to do: distribute them across multiple regions. That will be the next topic in this HashiCorp Packer series.
