Automated VMware Templates with HashiCorp Packer - Revised

Published in: Infrastructure as a Service, DevOps, HashiCorp - Date: August 06, 2020
Martin Buchleitner, Senior IT-Consultant


In our previous post about VMware and Packer we covered the process of building Packer images on VMware vSphere. That implementation had one major issue: you were forced to modify an ESXi host with a custom setting to enable the Packer VMware build.

With Packer v1.5.2, the JetBrains vSphere builder was merged into the Packer core.

The major change in this implementation is that the official vCenter API is used, so no ESXi host modification is required and your builds are no longer limited to a single, specially modified host. In addition, a VM can now be converted into a VM template without running two post-processors - a single configuration option, convert_to_template, does the job.

On the other hand, your vSphere installation must be reasonably up-to-date: at least version 6.5. Builds on lower versions may work, but some configuration options can trigger errors against the older vSphere API.

In our current definitions we replaced the builder with vsphere-iso to create new VMs from scratch and with vsphere-clone to clone VMs from existing templates.
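For illustration, a minimal vsphere-clone configuration might be sketched as follows (the template name, VM name, and user variables here are placeholders, not values from our setup):

```json
{
    "builders": [
        {
            "type": "vsphere-clone",
            "vm_name": "Centos7-clone",
            "template": "Centos7",
            "convert_to_template": true,

            "vcenter_server": "{{user `vcenter_server` }}",
            "username": "{{user `esxi_username` }}",
            "password": "{{user `esxi_password` }}",
            "insecure_connection": "true",

            "ssh_username": "vagrant",
            "ssh_password": "vagrant"
        }
    ]
}
```

The template option names the existing VM template to clone from; the rest of the connection and communicator options mirror those of vsphere-iso.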

At the time of writing this blog post (Packer v1.6.1), the existing examples do not work with the latest implementation, but the documentation is up-to-date and very helpful.

vsphere-iso Example

Note: the ISO URL and the playbook filename below are placeholders; substitute the values for your environment.

    {
        "builders": [
            {
                "type": "vsphere-iso",
                "name": "Centos7",
                "vm_name": "Centos7",
                "guest_os_type": "centos7_64Guest",
                "convert_to_template": true,
                "CPUs": 2,
                "RAM": 4096,
                "storage": [
                    {
                        "disk_size": 20000,
                        "disk_thin_provisioned": true
                    }
                ],
                "network_adapters": [
                    {
                        "network": "VM Network",
                        "network_card": "vmxnet3"
                    }
                ],
                "boot_command": [
                    "<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/centos7.cfg<enter><wait>"
                ],
                "boot_wait": "10s",
                "iso_urls": [
                    "http://mirror.example.com/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal.iso"
                ],
                "iso_checksum": "659691c28a0e672558b003d223f83938f254b39875ee7559d1a4a14c79173193",
                "ssh_username": "vagrant",
                "ssh_password": "vagrant",
                "ssh_port": 22,
                "ssh_wait_timeout": "10m",
                "shutdown_command": "echo 'vagrant'|sudo -S /sbin/halt -h -p",

                "vcenter_server": "{{user `vcenter_server` }}",
                "host": "{{user `esxi_host` }}",
                "username": "{{user `esxi_username` }}",
                "password": "{{user `esxi_password` }}",
                "cluster": "Test",
                "datastore": "Local",
                "insecure_connection": "true",

                "http_directory": "preseeds",
                "http_port_min": 9001,
                "http_port_max": 9001
            }
        ],
        "provisioners": [
            {
                "type": "shell",
                "script": "scripts/setup.sh",
                "execute_command": "echo 'vagrant' | {{.Vars}} sudo -S -E bash '{{.Path}}'"
            },
            {
                "type": "ansible-local",
                "playbook_dir": "ansible",
                "clean_staging_directory": true,
                "playbook_files": [
                    "main.yml"
                ],
                "extra_arguments": [
                    "--extra-vars \"vmware_build=yes\""
                ]
            },
            {
                "type": "shell",
                "script": "scripts/cleanup.sh",
                "execute_command": "echo 'vagrant' | {{.Vars}} sudo -S -E bash '{{.Path}}'"
            }
        ]
    }

In this example we again use a dedicated host assignment because we are not using DRS on this test system. If you are using DRS on your vSphere cluster, you can replace the host parameter with resource_pool. The set of vSphere connection parameters is well documented.
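With DRS enabled, the connection section of the builder could be sketched like this (the resource pool name is a placeholder; the other values follow the example above):

```json
{
    "vcenter_server": "{{user `vcenter_server` }}",
    "username": "{{user `esxi_username` }}",
    "password": "{{user `esxi_password` }}",
    "cluster": "Test",
    "resource_pool": "packer-builds",
    "datastore": "Local",
    "insecure_connection": "true"
}
```

In that case vSphere picks a suitable host from the cluster, so no single ESXi host has to be named in the template.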