Automate Virtual Machine Templates with Packer for Azure



Automated Cloud Templates with HashiCorp Packer

In our previous post about VMware and Packer and our revised post, we already covered how to create templates for virtual machines with HashiCorp Packer for VMware. The same automation also works for Azure, AWS, Google Cloud and others. With this setup you can also reuse the same provisioning you are already using for VMware.

For this new post we are using the HCL2 format of HashiCorp Packer, which was introduced with Packer v1.5.0. With the latest releases, HCL2 support reached feature parity with the previous JSON format. Packer v1.7.1 introduced the hcl2_upgrade command, which converts JSON definitions to HCL2.
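A conversion run looks like this (the file name build.json is a placeholder for your own template; this requires a local Packer installation of version 1.7.1 or later):

```shell
# Sketch: convert an existing JSON template to HCL2.
# Writes build.json.pkr.hcl next to the input file.
packer hcl2_upgrade build.json
```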

Azure ARM Templates

Azure virtual machine images are referenced as sources in the HCL definition. For this you need the typical Azure credentials: client ID, client secret, subscription ID and tenant ID. A good setup instruction can be found at Microsoft. In a nutshell, the steps are:

az login
az account show | jq '.tenantId'
az account list --query "[?tenantId=='<tenantId>'].{Name:name,ID:id}" --output table
az account set --subscription "<subscription_id_or_subscription_name>"
az ad sp create-for-rbac --name "<service_principal_name>"

Creating the service principal yields an identity that can be used to create the virtual machines. The password cannot be retrieved again, so store it in a password manager like 1Password. The appId later becomes the client ID, and the password is the client secret. The subscription ID and tenant ID are the same values you used in the commands above.
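As a minimal sketch, the JSON printed by az ad sp create-for-rbac can be mapped to the environment variables used later in this post (this assumes jq is installed; SP_JSON with dummy values stands in for the real command output here):

```shell
# Hypothetical mapping from the service principal JSON to the ARM_* variables
# that the Packer definition below reads from the environment.
SP_JSON='{"appId":"11111111-1111-1111-1111-111111111111","password":"s3cret","tenant":"22222222-2222-2222-2222-222222222222"}'
export ARM_CLIENT_ID=$(echo "$SP_JSON" | jq -r '.appId')
export ARM_CLIENT_SECRET=$(echo "$SP_JSON" | jq -r '.password')
export ARM_TENANT_ID=$(echo "$SP_JSON" | jq -r '.tenant')
echo "$ARM_CLIENT_ID"
```

The subscription ID is not part of this output; take it from the az account list call above.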

Setting up Packer starts with defining a source to build from. Within HCL this is done by the source block, followed by the type of the source implementation matching the cloud or infrastructure you are going to use. The last parameter is the name of the actual resource; you might have more than one defined within the HCL file.

On Azure, we recommend the Managed Image variant over VHD, because the Managed Image implementation lets you use the Shared Image Gallery. The Shared Image Gallery allows you to share images across your Azure environment, so you are not forced to recreate and duplicate the managed images. Packer already offers a configuration option to set up the publication to a gallery:

"shared_image_gallery": {
    "subscription": "00000000-0000-0000-0000-00000000000",
    "resource_group": "ResourceGroup",
    "gallery_name": "GalleryName",
    "image_name": "ImageName",
    "image_version": "1.0.0"
}

By adding this element, you have to increment the image version on every build to get a valid build result. Alternatively, you can achieve the same result with Terraform in a post-processing step. That allows you to set up a test stage in your pipeline where the newly created instance is started and validated against a set of rules, and only if the tests pass does the new image become available in the image gallery. You can also implement your pipeline so that the publishing step into the gallery only happens on a merge to the default branch main. These are the reasons why we are not using this feature at the moment: we test our images in a later pipeline step, and publishing only happens on a merge into the main branch.
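If you do use the gallery option, the version bump can live in the pipeline. A minimal sketch (the current version string is an example; in a real pipeline you would query the gallery for the latest version first):

```shell
# Bump the patch part of image_version before calling packer.
current="1.0.3"
next=$(echo "$current" | awk -F. '{printf "%d.%d.%d", $1, $2, $3 + 1}')
echo "$next"
```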

source "azure-arm" "core" {

  client_id       = var.client_id
  client_secret   = var.client_secret
  subscription_id = var.subscription_id
  tenant_id       = var.tenant_id

  managed_image_name                = "UbuntuDocker"
  managed_image_resource_group_name = "images"

  os_type         = "Linux"
  image_publisher = "Canonical"
  image_offer     = "0001-com-ubuntu-server-hirsute"
  image_sku       = "21_04"
  image_version   = "latest"
  azure_tags = {
    COMMIT_REF = var.ci_commit_ref
    COMMIT_SHA = var.ci_commit_sha
  }

  location = "westeurope"
  vm_size  = "Standard_F2s"
}

The resource group managed_image_resource_group_name, where your images will be saved, must exist before starting Packer. This can be done using Terraform.
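A Terraform sketch for that resource group could look like this (the group name and location are assumptions matching the source block above):

```hcl
# Hypothetical Terraform fragment: create the resource group Packer expects
# before the build starts.
resource "azurerm_resource_group" "images" {
  name     = "images"
  location = "westeurope"
}
```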

The most interesting parts of this code segment are of course the image details that define the base image you build your own image on. These are not listed in the Azure web interface; you have to use the Azure Cloud Shell to look them up:

$location = "westeurope"
Get-AzVMImagePublisher -Location $location | Select PublisherName
$publisher = "Canonical"
Get-AzVMImageOffer -Location $location -PublisherName $publisher | Select Offer
$offer = "0001-com-ubuntu-server-hirsute"
Get-AzVMImageSku -Location $location -PublisherName $publisher -Offer $offer | Select Skus

By using the functions Get-AzVMImagePublisher, Get-AzVMImageOffer and Get-AzVMImageSku you can find the required information. As these lists can get long, you might append | Out-File -FilePath <a-file> to dump the data into a file and review it there.

Build/Customize the Image

After defining the base image to build on, you can reuse the provisioning definitions from the VMware builds. In our setup these are the same steps for every infrastructure:

  • setup.sh is a script that ensures that ansible is available on the target machine
  • ansible-local is the actual provisioning task where multiple playbooks exist depending on the target configuration
  • cleanup.sh is a script that removes packages ( e.g. ansible,... ) that are not needed for the actual runtime and cleans up the machine
{% raw %}
variable "playbook" {
  type    = string
  default = "docker"
}

build {
  sources = ["source.azure-arm.core"]

  provisioner "shell" {
    inline = ["while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done"]
  }

  provisioner "shell" {
    execute_command = "echo 'packer' | {{ .Vars }} sudo -S -E bash '{{ .Path }}'"
    script          = "packer/scripts/setup.sh"
  }

  provisioner "ansible-local" {
    clean_staging_directory = true
    playbook_dir            = "ansible"
    galaxy_file             = "ansible/requirements.yaml"
    playbook_files          = ["ansible/${var.playbook}.yml"]
  }

  provisioner "shell" {
    execute_command = "echo 'packer' | {{ .Vars }} sudo -S -E bash '{{ .Path }}'"
    script          = "packer/scripts/cleanup.sh"
  }
}
{% endraw %}

Full Combined Packer Definition

Finally, here is the full HashiCorp Packer HCL definition to build an Azure Managed Image. In the full example, additional variables are declared which are filled by default from environment variables. You can also use Packer variable files to set this information. In a pipeline definition you might prefer environment variables so these settings are not persisted: set them in the GitLab CI/CD settings or pull them from a HashiCorp Vault instance.

{% raw %}
variable "subscription_id" {
  type    = string
  default = "${env("ARM_SUBSCRIPTION_ID")}"
}

variable "tenant_id" {
  type    = string
  default = "${env("ARM_TENANT_ID")}"
}

variable "client_id" {
  type    = string
  default = "${env("ARM_CLIENT_ID")}"
}

variable "client_secret" {
  type    = string
  default = "${env("ARM_CLIENT_SECRET")}"
}

variable "playbook" {
  type    = string
  default = "docker"
}

# provided by GitLab CI, used to tag the resulting image
variable "ci_commit_ref" {
  type    = string
  default = "${env("CI_COMMIT_REF_NAME")}"
}

variable "ci_commit_sha" {
  type    = string
  default = "${env("CI_COMMIT_SHA")}"
}

source "azure-arm" "core" {

  client_id       = var.client_id
  client_secret   = var.client_secret
  subscription_id = var.subscription_id
  tenant_id       = var.tenant_id

  managed_image_name                = "UbuntuDocker"
  managed_image_resource_group_name = "images"

  os_type         = "Linux"
  image_publisher = "Canonical"
  image_offer     = "0001-com-ubuntu-server-hirsute"
  image_sku       = "21_04"
  image_version   = "latest"
  azure_tags = {
    COMMIT_REF = var.ci_commit_ref
    COMMIT_SHA = var.ci_commit_sha
  }

  location = "westeurope"
  vm_size  = "Standard_F2s"
}

build {
  sources = ["source.azure-arm.core"]

  provisioner "shell" {
    inline = ["while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done"]
  }

  provisioner "shell" {
    execute_command = "echo 'packer' | {{ .Vars }} sudo -S -E bash '{{ .Path }}'"
    script          = "packer/scripts/setup.sh"
  }

  provisioner "ansible-local" {
    clean_staging_directory = true
    playbook_dir            = "ansible"
    galaxy_file             = "ansible/requirements.yaml"
    playbook_files          = ["ansible/${var.playbook}.yml"]
  }

  provisioner "shell" {
    execute_command = "echo 'packer' | {{ .Vars }} sudo -S -E bash '{{ .Path }}'"
    script          = "packer/scripts/cleanup.sh"
  }
}
{% endraw %}

Final thoughts

With this Packer definition you are able to provision any virtual image within the Azure Cloud. By setting the playbook variable you define which Ansible playbook is used, so you get a very flexible Packer definition without duplicating the provisioning code. In the next post we will highlight how you can use this definition for different infrastructures, so your custom virtual machines all have the same settings independent of the infrastructure they run on.
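For example, a build for a different image could be triggered by overriding the variable on the command line (the playbook name kubernetes is hypothetical and assumes ansible/kubernetes.yml exists, with the ARM_* environment variables already exported):

```shell
# Hypothetical invocation: build the same definition with another playbook.
packer build -var 'playbook=kubernetes' .
```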
