Deploying a LAMP Stack on AWS (part 2): Build Deployable Artifacts with Packer

Kevin Fishner

First off, a deployable artifact is a snapshot or configuration from which a server can be deployed. For example, an Amazon Machine Image (AMI) provides the information needed to launch an instance on Amazon Web Services. Deployable artifacts are sometimes referred to as "golden images"; they allow operators to configure a server once and reuse that configuration to reliably and repeatably provision more servers.

Using Packer to create AMIs

Packer by HashiCorp is a tool for creating identical machine images for multiple platforms from a single source configuration. You can write one Packer configuration to create identical machine images for AWS, Google Compute Engine, DigitalOcean, OpenStack, and more.

    In this LAMP tutorial, we use Packer to create three AMIs — one that configures Consul instances, one that configures a MySQL instance, and one that configures Apache+PHP instances. A key benefit of these AMIs is that they can be re-used with confidence that newly provisioned instances will be identically configured.

    General setup

    To finish this tutorial, you will need to:

    1. Clone this repository
    2. Create an Atlas account
    3. Generate an Atlas token and save it as an environment variable: export ATLAS_TOKEN=<your_token>
    4. In the Vagrantfile, Packer, and Terraform files, replace all instances of <username>, YOUR_ATLAS_TOKEN, YOUR_SECRET_HERE, and YOUR_KEY_HERE with your Atlas username, Atlas token, and AWS keys. Using "find and replace" in your text editor is probably the fastest way to do this.
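    As an alternative to editing the AWS keys into each file, the standard AWS environment variables can be exported alongside the Atlas token (the AWS variable names below are the usual AWS-SDK conventions that Packer's Amazon builders fall back to, not values taken from this tutorial's files):

```shell
# Atlas token, used by `packer push` and `vagrant push`
export ATLAS_TOKEN=<your_token>

# Standard AWS credential variables; Packer's Amazon builders
# read these when keys are not hard-coded in the template
export AWS_ACCESS_KEY_ID=<your_access_key>
export AWS_SECRET_ACCESS_KEY=<your_secret_key>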

    Packer configuration overview

    Packer configurations have four key sections:

    • builders
    • provisioners
    • post-processors
    • push

    The builders section defines the type of artifact to be created. The provisioners section defines what steps will be run to configure the artifact itself. The post-processors section defines what to do with the artifact once it is created. Finally, the push section defines how the build configuration is sent, stored, and versioned in Atlas.
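    As a sketch of that shape, a minimal template with all four sections might look like the following (the builder, provisioner, and artifact values here are illustrative, not the exact contents of this tutorial's files):

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-9a562df2",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "apache-php {{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["sudo apt-get update", "sudo apt-get install -y apache2 php5"]
  }],
  "post-processors": [{
    "type": "atlas",
    "artifact": "<username>/apache-php",
    "artifact_type": "amazon.ami"
  }],
  "push": {
    "name": "<username>/apache-php"
  }
}
```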

    For more information on Packer and its configurations, read through the Packer documentation.

    Step 1: Use Packer and Atlas to build an Apache+PHP AMI

    The first thing to do is to build an AMI with Apache and PHP installed. To do this, run this in the ops directory:

    packer push -create apache-php.json

    This will send the build configuration to Atlas so it can remotely build your AMI with Apache and PHP installed.

    View the status of your build in the Operations tab of your Atlas account. The build creates an AMI with Apache and PHP installed; next, you need to send the actual PHP application code to Atlas and link it to the build configuration. From the app directory, run:

    vagrant push

    This will send your PHP application, which is just the test.php file for now. Next, link the PHP application with the Apache+PHP build configuration by clicking on your build configuration, then "Links" in the left navigation. Complete the form with your username, "php" as the application name, and "/app" as the destination path.
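    For vagrant push to know where to send the application, the Vagrantfile defines an Atlas push strategy. A minimal excerpt looks like the following (the application name assumes the "php" name used in the linking step above):

```ruby
# Vagrantfile (excerpt): push the application directory to Atlas
Vagrant.configure(2) do |config|
  config.push.define "atlas" do |push|
    push.app = "<username>/php"
  end
end
```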

    Now that your application and build configuration are linked, simply rebuild the Apache+PHP configuration and you will have a fully-baked AMI with Apache and PHP installed and your application code in place. When an AWS instance is provisioned using this AMI, the instance will be fully configured and have your application code on it.

    Step 2: Use Packer and Atlas to build a Consul AMI

    Now we're going to build an AMI with Consul installed. To do this, navigate to the ops directory and run:

    packer push -create consul.json

    This will send the build configuration to Atlas so it can build your Consul AMI remotely. You can view the status of your build in the Operations tab of your Atlas account.

    Step 3: Use Packer and Atlas to build a MySQL AMI

    Now, let's build an AMI with MySQL installed. To do this, we can run the following code from the ops directory:

    packer push -create mysql.json

    This will send the build configuration to Atlas so it can build your MySQL AMI remotely. Again, you can view the status of your build in the Operations tab of your Atlas account.


    In your Atlas account there are now three deployable artifacts:

    • an AMI with Apache+PHP and your application code configured
    • an AMI with MySQL configured
    • an AMI with Consul master configured

    The next step is provisioning infrastructure on AWS using these artifacts as source configurations. Before that step, it's important to understand how each of these services will connect once deployed.

    For LAMP to work properly in a distributed system, the servers running Apache+PHP must know which servers are running MySQL. To accomplish this, we use Consul and Consul Template. Any time a MySQL server is created, destroyed, or changes health state, Consul Template re-renders the PHP configuration from the php.ctmpl template so the connection details stay correct. Pay close attention to the database connection details:

    $password = "password";{{range service "mysql.database"}}
    $hostname = "{{.Address}}"{{end}};

    Consul Template will query Consul for all "database" servers with the tag "mysql", and then iterate through the list to populate the PHP configuration. When rendered with one healthy MySQL instance at, for example, 172.31.0.10, the PHP configuration will look like:

    $password = "password";
    $hostname = "172.31.0.10";

    This setup allows us to destroy and create Apache+PHP servers with confidence that their configurations will always be correct and they will always write to the proper MySQL instances. You can think of Consul and Consul Template as the connective webbing between services.
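    In practice, this rendering is driven by a consul-template process running on each Apache+PHP instance. An invocation along these lines keeps the file current and reloads Apache on every change (the Consul address, template and output paths, and reload command here are assumptions for illustration, not values from this tutorial's files):

```shell
# Watch Consul and re-render the PHP config whenever the
# mysql-tagged database service changes, then reload Apache
consul-template \
  -consul 127.0.0.1:8500 \
  -template "/app/php.ctmpl:/var/www/html/db.php:service apache2 reload"
```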

    Next Steps

    Now that the deployable artifacts are fully configured, the final step is provisioning infrastructure on AWS using them. That process is walked through in the next and final article in the Getting Started with Atlas series.