Google Cloud Platform II: Continuously Deploying A Docker Application On Google Container Engine

John Kariuki

We previously looked at how to deploy a Docker application to Google Cloud Platform (GCP) with Google Container Engine (GKE) using Kubernetes.

That walkthrough gave us an opportunity to appreciate how Kubernetes makes it easy to manage and orchestrate GKE clusters.

One thing we missed out on was how deploying Docker containers pans out when working in a team that employs Continuous Integration / Continuous Deployment (CI/CD), where each member frequently makes updates to the application and, well, those updates are continuously deployed to staging or production.

As you would imagine, things can quickly get out of hand when each team member individually pushes their own version of the image to the Google Container Registry.

Image tags may conflict, images may go out of date, or buggy images may be pushed to production, and the world as we know it would come to a stop.

Continuous Deployment with CircleCI

CircleCI is one of the most popular CI/CD tools in use today, alongside Travis CI and Codeship. We will be using CircleCI to continuously deploy our application, just like we would in a distributed team.

To achieve this, we will do the following:

  • Write tests for our application.
  • Configure CircleCI to authenticate with our GKE cluster.
  • Add a circle.yml configuration file for the application deployment process.
  • If the tests pass on CircleCI, deploy the updated containerized app to GKE.

Sounds good? Sweet! Let's get started!

Adding Tests

We'll use supertest to test the routes in index.js and the mocha test framework to run the tests.

npm install --save-dev mocha supertest

Once installed, create test/index_test.js in the application root and add the test below.

test/index_test.js

//Load supertest and the app instance.
const request = require('supertest');
const app = require('../index').app;

describe('Index Tests', () => {
    describe('GET /', () => {
        //Should return status code 200 with JSON message: welcome to root
        it('Should return root message', (done) => {
            request(app)
                .get('/')
                .expect(200)
                .expect(JSON.stringify('welcome to root'))
                .end(done);
        });
    });
});

The test makes a GET request to the root of our project and asserts that a JSON string is returned with a 200 status code. Note that this assumes index.js exports the Express app instance (for example, module.exports.app = app;). Add the test command to the scripts section of our package.json file.

package.json

...
"scripts": {
    "test": "node_modules/.bin/mocha",
    "start": "node index.js"
  },
...

View the test result by running npm test. You should get the following.

Test result

Integrating CircleCI

With our tests passing, let's add a circle.yml file and instruct CircleCI to run our tests with mocha. Luckily, every CircleCI build comes with mocha preinstalled, so all we need to do is override the test command.

Note that by default, CircleCI will try to run npm test.

test:
    override:
    - mocha

Add Project to CircleCI

Once everything is set up, push the code to a GitHub repository, log in to CircleCI with GitHub, then go to CircleCI > Add Projects and build your repository. If everything is correctly set up, you should have a successful build.

CircleCI build

With this, let's make CircleCI do all the heavy lifting for us.

Authenticating The GKE Cluster on CircleCI

The objective of this section is simply to let CircleCI know that we own the scotch-cluster. When creating the Deployments and Services locally, we followed this protocol:

  • Log in to Google - using gcloud init in the initial setup or gcloud auth login later on.
  • Set the default project - gcloud config set project scotch-155622
  • Set the default cluster - gcloud config set container/cluster scotch-cluster
  • Get the cluster credentials - gcloud container clusters get-credentials scotch-cluster --zone us-central1-a --project scotch-155622
  • Gone ahead to be awesome.

But how do we authenticate CircleCI? We cannot exactly use gcloud auth login, can we? That would mean granting access to our Gmail accounts. Sounds wrong, doesn't it?

And it is! Thankfully, Google Cloud Platform provides Service Accounts, which come in handy here.

A service account is a special account that can be used by services and applications running on your Google Compute Engine instance to interact with other Google Cloud Platform APIs. Applications can use service account credentials to authorize themselves to a set of APIs and perform actions within the permissions granted to the service account and virtual machine instance.

Let's set up our service account!

Generating a Service Account

To create a new service account, go to the Service Accounts page and select your project. You will be required to generate a service account ID/email here. Go ahead and enter a name for the account and select a role for it, in our case Service Account Actor.

Service Accounts page

You can also create a service account ID using the gcloud command.

gcloud iam service-accounts create scotch-sa --display-name "Scotch Service Account"

Once the service account is created, a JSON file will be downloaded to your local system. We will use that in the next section.
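If you created the service account with the gcloud command instead, no key file is downloaded automatically; you can generate one yourself. Here is a quick sketch, with an example key file name (the service account email follows the NAME@PROJECT_ID.iam.gserviceaccount.com pattern):

# Generate and download a JSON key for the service account created above
gcloud iam service-accounts keys create scotch-sa-key.json --iam-account scotch-sa@scotch-155622.iam.gserviceaccount.com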

Environment Variables

It's always easier to set up frequently used data as environment variables in any project. Better yet, if some of these variables are sensitive, like our service account token, they are best stored away from our codebase.

To get started, let's add the project ID, cluster name, deployment name, container name and the compute zone of our cluster to circle.yml. These are the less sensitive details that we can afford to keep in our configuration file.

circle.yml

machine:
  environment:
    PROJECT_ID: scotch-155622
    CLUSTER_NAME: scotch-cluster
    COMPUTE_ZONE: us-central1-a
    #As specified in Deployment.yml
    DEPLOYMENT_NAME: scotch-dep
    CONTAINER_NAME: node-app

test:
    override:
    - mocha

Simple and neat.

Now to store the service account JSON file as an environment variable. It would be catastrophic if we decided to add it in the publicly accessible circle.yml file. Lucky for us, CircleCI allows us to set environment variables for each project.

First, let's encode our JSON file into base64 format so that we can store it as a string. On OS X, run the following command to encode the file and copy the result to the clipboard:

base64 scotch-2efb8709c63d.json | pbcopy
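pbcopy is an OS X tool; on Linux, assuming xclip is installed, the equivalent would be:

base64 scotch-2efb8709c63d.json | xclip -selection clipboard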

Next, go to CircleCI > Builds > Repository settings > Environment Variables > Add Variable and add the encoded service key (SERVICE_KEY) and the service account ID/email (ACCOUNT_ID).

CircleCI environment variable

We now have access to $SERVICE_KEY and $ACCOUNT_ID in our CircleCI build.

Provisioning Our Deployment Environment

Just like we did in our local environment, we need to make sure that the gcloud and kubectl command-line tools are installed, as well as good old Docker. Once again, CircleCI comes with these preinstalled; we just have to make sure they are up to date before we do anything.

Once they are updated, we need to:

  • Check for any changes in the deployment branch (master in our case)
  • Decode the service account and authenticate the build
  • Set the default project, cluster and compute zone
  • Get the credentials for the scotch-cluster
  • Start Docker

circle.yml

machine:
  environment:
    PROJECT_ID: scotch-155622
    CLUSTER_NAME: scotch-cluster
    COMPUTE_ZONE: us-central1-a
    #As specified in Deployment.yml
    DEPLOYMENT_NAME: scotch-dep
    CONTAINER_NAME: node-app

test:
    override:
    - mocha

#Ensure that gcloud and kubectl are updated.
dependencies:
  pre:
    - sudo /opt/google-cloud-sdk/bin/gcloud --quiet components update --version 120.0.0
    - sudo /opt/google-cloud-sdk/bin/gcloud --quiet components update --version 120.0.0 kubectl

deployment:
    production:
        branch: master
        commands:
        # Save the string to a text file
        - echo $SERVICE_KEY > key.txt
        # Decode the Service Account
        - base64 -i key.txt -d > ${HOME}/gcloud-service-key.json
        # Authenticate CircleCI with the service account file
        - sudo /opt/google-cloud-sdk/bin/gcloud auth activate-service-account ${ACCOUNT_ID} --key-file ${HOME}/gcloud-service-key.json
        # Set the default project
        - sudo /opt/google-cloud-sdk/bin/gcloud config set project $PROJECT_ID
        # Set the default cluster
        - sudo /opt/google-cloud-sdk/bin/gcloud --quiet config set container/cluster $CLUSTER_NAME
        # Set the compute zone
        - sudo /opt/google-cloud-sdk/bin/gcloud config set compute/zone $COMPUTE_ZONE
        # Get the cluster credentials.
        - sudo /opt/google-cloud-sdk/bin/gcloud --quiet container clusters get-credentials $CLUSTER_NAME
        # Start good old Docker
        - sudo service docker start
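Before pushing this configuration, it may be worth sanity-checking the service account key locally with the same commands the build will run. A quick sketch, using the key file downloaded earlier and <ACCOUNT_ID> as a stand-in for your service account email:

# Note: this switches your active gcloud account to the service account
gcloud auth activate-service-account <ACCOUNT_ID> --key-file scotch-2efb8709c63d.json
gcloud container clusters get-credentials scotch-cluster --zone us-central1-a --project scotch-155622
kubectl get pods

# Switch back to your own account when you are done
gcloud config set account <YOUR_GOOGLE_ACCOUNT_EMAIL>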

Rolling Out Application Updates

Updating circle.yml configuration

For those of us with a hankering to finally get to the good stuff, welcome to Docker town! We are about to get this container rolling. Up until now, we have been preparing our Continuous Deployment environment, but we have yet to actually build an image. Fret not!

If you recall where we last left things, the final steps when deploying from our local system involved the following:

  • Building a Docker image from our application and correctly tagging it.
  • Pushing the Docker image to the GCP Container Registry.

Well, let's add one more step to that grand process. Once a new image is pushed, we need to set it as the image that our Kubernetes Deployment runs its containers from.

Note that we are using $CIRCLE_SHA1, CircleCI's environment variable holding the hash of the latest Git commit, as the image tag, since it will always be unique. You can, however, adopt any tagging model that your team fancies, such as Semantic Versioning.

Here is a look at the commands added to the end of the commands section in the final circle.yml:

deployment:
    production:
        branch: master
        commands:
        # ... (the authentication and setup commands from the previous section)
        # Build a Docker image and use the Git commit hash ($CIRCLE_SHA1) as the tag
        - docker build -t gcr.io/${PROJECT_ID}/node-app:$CIRCLE_SHA1 .
        # Push the image to the GCP Container Registry
        - sudo /opt/google-cloud-sdk/bin/gcloud docker -- push gcr.io/${PROJECT_ID}/node-app:$CIRCLE_SHA1
        # Update the default image for the deployment
        - sudo /opt/google-cloud-sdk/bin/kubectl set image deployment/${DEPLOYMENT_NAME} ${CONTAINER_NAME}=gcr.io/${PROJECT_ID}/node-app:$CIRCLE_SHA1

This will recreate the pods and ensure users always get the most recent version of the application without any downtime on their side. Kubernetes achieves this with a rolling update, replacing one pod at a time before moving on to the next.

Here is a snapshot of the pods recreation process. Notice how fast the pods come alive!

Pods recreation
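You can also follow the rollout from your own terminal with kubectl, using the deployment name we set in circle.yml:

# Watch the rolling update as old pods are replaced one at a time
kubectl rollout status deployment/scotch-dep

# Confirm that the new pods are up and running
kubectl get pods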

Making an application update

To see the update in action, let's add a new route in our application.

index.js

...
app.get('/baz', (req, res) => {
  res.status(200).json('Baz!');
});
...

Commit the changes and push to GitHub. Once the CircleCI build is successful, visit the /baz route.
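If you don't have the cluster's external IP at hand, you can look it up and hit the new route from the terminal, assuming the Service from the previous article is still exposing the deployment:

# Find the external IP of the Service
kubectl get services

# Hit the new route (replace <EXTERNAL-IP> with the value from the output above)
curl http://<EXTERNAL-IP>/baz
# Returns: "Baz!"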

GKE image update

Works like a charm!

Take a look at the Container Registry, and you'll notice that we have a swanky new image with a newer tag.

GKE Container Registry update

Conclusion

In this article, we were able to continuously deploy our containerized application to Google Cloud Platform's Google Container Engine (GKE) with CircleCI builds, with ease. There is still a lot more out there to learn on GCP.

Go forth and conquer!
