Sometimes a wonderful tool comes along that makes a kludgy process radically better. Packer (thanks again, HashiCorp!) and ServerSpec are great examples. The 1-2 punch of Packer + ServerSpec, combined with the automation abilities of Jenkins, made a significant impact on our automated server image creation. This combination reduced our time-to-deploy, took our visibility from ‘translucent’ to ‘transparent’, improved our traceability, and generally made our Ops Engineers much, much happier. Read on to find out how these tools can make you happy too.


One of the advantages of having your application run in the cloud is that you can rapidly provision and decommission the servers the application runs on. At Hootsuite, we love AWS and we create and replace EC2 instances all the time. This is a fairly easy process using server images (AMIs), but a tedious one if you provision them manually. Hence, we needed to create server images rapidly, and in a way that the images are tested, in order to guarantee a healthy cloud infrastructure.


Most of our servers are in EC2 and we use Ansible to provision those servers. To create a server on EC2 we had two processes, the first of which involved using Ansible to provision the server:

  • Spin up an EC2 instance using a custom base image
  • Run an Ansible playbook to provision the instance
  • Do a manual sanity check for all the services running on the instance

This process is slow, and requiring manual steps is not ideal.

The second process involved using a server image (AMI) and spinning up an EC2 instance from it. This process is faster, but isn’t without problems: building the server image was slow, and there were no checks in place to test the image before use. Additionally, there were no provisioning logs for debugging the server image, which made it difficult for our Operations Engineers to troubleshoot.


Not happy with the potential for error, we redid the entire process using Packer.

What is Packer?

Packer is a tool for creating server images for multiple platforms. It is easy to use and automates the process of creating server images. It supports multiple provisioners, all built into Packer.

Why Packer?

In order to simplify the steps involved in creating a server image, we chose HashiCorp’s Packer. A few simple reasons made Packer the obvious choice:

  • Supports multiple platforms such as Amazon EC2, OpenStack, VMware and VirtualBox
  • It’s easy to use and is mostly automated
  • It supports Ansible as a provisioner, and Hootsuite loves Ansible
  • It’s written in Go and comes from the same team that built Serf, Consul, and Vagrant – all current parts of the Hootsuite infrastructure

Challenges with Packer

We encountered three specific challenges with Packer that we needed to overcome:

  • Integrating Packer with our Ansible playbooks was always going to be a challenge: Packer’s Ansible “local” provisioner requires the playbook to be run locally on the server being provisioned, while our playbooks assume the servers are being provisioned remotely. This led to problems because some playbooks are shared across projects and require very specific path inclusions. These playbooks had to be rewritten in a way that works with Packer.
  • Testing the Packer-built server images
  • Automating the build and test process

Our Implementation

Our implementation of Packer is a very simple one. Packer needs a configuration file that has all the information needed to create a server image. We also used a variables file that holds the variables the configuration file needs, along with custom variables for tagging the server image. A Packer configuration file can be divided into a few sections:

  • Variables
  • Builders
  • Provisioners

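To make the sections concrete, here is a minimal sketch of a Packer template showing all three, not our production template; the builder (amazon-ebs, chosen for brevity), AMI ID, region, and names are placeholders:

```json
{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-xxxxxxxx",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "example-webserver-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "ansible-local",
    "playbook_file": "playbooks/webserver.yml"
  }]
}
```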
The variables section has all the variables that are used in the configuration file. These variables can be passed at runtime on the Packer CLI, or predefined in a variables file.
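Both ways of supplying variables can be sketched as follows (the key values and template filename are made up):

```shell
# Pass variables on the command line at build time:
#   packer build -var 'aws_access_key=AKIAEXAMPLEKEY' \
#                -var 'aws_secret_key=exampleSecret' template.json

# Or keep them in a JSON variables file...
cat > variables.json <<'EOF'
{
  "aws_access_key": "AKIAEXAMPLEKEY",
  "aws_secret_key": "exampleSecret"
}
EOF

# ...and pass the whole file instead:
#   packer build -var-file=variables.json template.json
```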

The variables file itself is written in simple JSON.

The builders section defines the builder that will be used to create the server image. Since most of our servers are on AWS, we use the Amazon EC2 (AMI) builders, which include:

  • amazon-ebs: Create EBS-backed AMIs by launching a source AMI and re-packaging it into a new AMI after provisioning
  • amazon-instance: Create instance-store AMIs by launching and provisioning a source instance, then rebundling it and uploading it to S3
  • amazon-chroot: Create EBS-backed AMIs from an existing EC2 instance by mounting the root device and using a Chroot environment to provision that device

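As an illustration of a builder section, here is a hypothetical amazon-instance stanza; because this builder rebundles the instance and uploads it to S3, it needs account, bucket, and X.509 certificate details on top of the usual fields. Every value below is a made-up placeholder:

```json
{
  "type": "amazon-instance",
  "access_key": "{{user `aws_access_key`}}",
  "secret_key": "{{user `aws_secret_key`}}",
  "account_id": "{{user `aws_account_id`}}",
  "region": "us-east-1",
  "source_ami": "ami-xxxxxxxx",
  "instance_type": "m3.medium",
  "ssh_username": "ubuntu",
  "s3_bucket": "example-packer-images",
  "x509_cert_path": "cert.pem",
  "x509_key_path": "key.pem",
  "ami_name": "example-{{timestamp}}"
}
```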
We mostly use the “amazon-instance” builder, as most of our EC2 instances are instance-store backed. The provisioners section defines the provisioners that will be used to provision the server. Packer supports a variety of provisioners, including:

  • Ansible
  • Chef Client
  • Chef Solo
  • Puppet Masterless
  • Puppet Server
  • Salt
  • Custom provisioners

Since we heavily use Ansible for provisioning our servers, Packer’s Ansible provisioner was the logical choice for us. While using the Ansible provisioner, we found that playbooks written using “roles” could be easily integrated with Packer. Some of our Ansible playbooks are shared across various projects, which made them interesting to work with; these shared playbooks can be included in the Ansible provisioner configuration by defining their location or path. For example:

  "playbook_paths": [ "/Path/To/Some/Shared/Playbook/Or/Variable/" ],

Roles can likewise be included by defining their location or path:

  "role_paths": [ "/Users/username/projects/ansible-playbooks/projectname/roles/nginx" ],

The group variables for the playbook can be included and defined as follows:

  "group_vars": "/projects/example-project/group_vars"

You can also pass extra arguments to the ansible-playbook CLI command that Packer will execute while provisioning your server:

  "extra_arguments": "--extra-vars 'basedir=/tmp/packer-provisioner-ansible-local/playbooks/'",

We use the variable “basedir” to define the base directory that is used to reference all the playbooks and variables.
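Putting these options together, the provisioner stanza might look like the following sketch. The playbook_file value is an assumed example, and note that recent Packer releases expect extra_arguments as an array of strings rather than a single string:

```json
{
  "type": "ansible-local",
  "playbook_file": "playbooks/webserver.yml",
  "playbook_paths": [ "/Path/To/Some/Shared/Playbook/Or/Variable/" ],
  "role_paths": [ "/Users/username/projects/ansible-playbooks/projectname/roles/nginx" ],
  "group_vars": "/projects/example-project/group_vars",
  "extra_arguments": "--extra-vars 'basedir=/tmp/packer-provisioner-ansible-local/playbooks/'"
}
```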

Provision Logs

During the provisioning process, the logs that are generated by Ansible are stored on the server image for debugging purposes. This was done by enabling Ansible’s logging capabilities in the configuration file located at /etc/ansible/ansible.cfg:

  [defaults]
  log_path=/path/to/provisioning.log

Benefits of Using Packer

There are various benefits of using Packer in terms of performance, automation, and security:

  • Packer spins up an EC2 instance, creates temporary security groups and temporary keys for the instance, provisions the instance, creates an AMI, and terminates the instance – and it’s all completely automated
  • Packer uploads all the Ansible playbooks and associated variables to the remote server, and then runs the provisioner locally on that machine. It has a default staging directory (/tmp/packer-provisioner-ansible-local/) that it creates on the remote server, and this is the location where it stores all the playbooks, variables, and roles. Running the playbooks locally on the instance is much faster than running them remotely.
  • Packer parallelizes its work: multiple builders defined in a template are built concurrently
  • With Packer we supply Amazon’s API keys locally. The temporary keys that are created when the instance is spun up are removed after the instance is provisioned, for increased security.

Testing Before Creating Server Images

Packer helped solve the problem of automating server image creation. We still thought there was room for improvement in testing whether an instance was provisioned correctly, so we began using ServerSpec.

What is ServerSpec?

ServerSpec lets you write RSpec tests for your provisioned servers. RSpec is a popular testing framework in Ruby programming, and ServerSpec builds on it to easily test your servers by executing a few commands locally. We wrote some simple tests using ServerSpec that would help indicate whether the instance was ready to be imaged:

  • Testing services that make up our web server such as Nginx, PHP-FPM, various routers, etc
  • Testing common monitoring and alerting services such as Sensu and Diamond
  • Testing ports that various services run on

Here’s a sample ServerSpec test for checking Sensu service on our web servers:
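A minimal sketch of such a test, assuming the Sensu client runs as a service named sensu-client and listens on port 3030:

```ruby
require 'serverspec'

# Exec backend: run the checks directly on the machine under test.
set :backend, :exec

# Service name is an assumption; adjust for your Sensu setup.
describe service('sensu-client') do
  it { should be_enabled }
  it { should be_running }
end

# Port 3030 is the assumed Sensu client socket.
describe port(3030) do
  it { should be_listening }
end
```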

Integrating ServerSpec with Packer

In order to decide whether the server image should be created or not, ServerSpec was integrated with Packer to run right after provisioning was complete. This was done by using Packer’s “shell” and “file” provisioners. First, we create a temporary directory on the server and copy in the ServerSpec tests to be run based on server type, then run a simple bash script that executes the ServerSpec tests locally on that machine. The Packer configuration file had the following defined in order to run the ServerSpec tests:

  { "type": "shell", "inline": ["mkdir /tmp/tests"] },
  { "type": "file", "source": "serverspec_tests/server-type/", "destination": "/tmp/tests" },
  { "type": "shell", "script": "scripts/" }

If the tests passed, Packer would go ahead and image the server and create an AMI. However, if a test failed, Packer would receive a non-zero exit code from the shell provisioner, which would terminate the image-creation process and the instance. A simple bash script installed ServerSpec and ran the tests.
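A sketch of what that script might look like, assuming RubyGems is available on the base image; the gem flags and spec filenames are assumptions:

```shell
#!/bin/bash
# Abort on any failure so Packer's shell provisioner sees a
# non-zero exit code and cancels the image build.
set -e

# Install ServerSpec (pulls in RSpec) on the instance being imaged.
gem install serverspec --no-ri --no-rdoc

# Run every spec copied into the staging directory by the file provisioner.
cd /tmp/tests
rspec *_spec.rb
```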

This process allowed us to be confident about the images that were being built using Packer.

Automation Using Jenkins

Jenkins was used to automate the process of creating and testing images. A Jenkins job was parameterized to take inputs such as project name, username, Amazon’s API keys, and test flags, which allowed our engineers to build project-specific images rapidly without installing Packer and its CLI tools. Jenkins took care of the AMI tagging, the CLI parameters for Packer, and notifying the engineering team about the status of the job.
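For illustration, the heart of such a Jenkins job can reduce to a single parameterized Packer invocation; the environment variable names here are hypothetical job parameters, not Packer options:

```shell
# Hypothetical Jenkins "Execute shell" build step.
packer build \
  -var "aws_access_key=${AWS_ACCESS_KEY}" \
  -var "aws_secret_key=${AWS_SECRET_KEY}" \
  -var "built_by=${BUILD_USER}" \
  "templates/${PROJECT_NAME}.json"
```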


In the Pipeline

There’s still room for improvement with regard to image creation. Still in the pipeline are:

  • Automatically triggering the Jenkins job on git commit
  • Creating an AMI management tool for creating and deleting server images
  • Running continuous sanity checks on our EC2 servers to identify any drift in configuration

Anubhav Mishra


About the Author: Mishra is an Operations Engineer at Hootsuite. He works closely with the Ops team to build, deploy, and maintain cloud infrastructure at Hootsuite. He enjoys playing soccer, DJing and loves coding in his free time. Follow Mishra on Twitter @anubhavm.