by Sam Bott
4 April, 2017 - 13 minute read

I’m going to start off by saying what this post is not. This is not a discussion on the benefits of practicing Agile or adopting its logical conclusion, Continuous Deployment. I’m going to assume you’ve already been convinced. If you are looking for further persuasion then I would recommend reading The Phoenix Project: it is very entertaining, and it describes the need for Agile while providing solid analogies and business context. Alternatively, there are plenty of great posts out there:

The Components

Continuous Deployment

Continuous Deployment, Continuous Integration and Continuous Delivery are closely related concepts, and the terms are often used interchangeably.

[Image: CD definitions]

I will be using the following definitions for the purposes of this post:

- Continuous Integration: every change is merged frequently, and each merge triggers an automated build and test run.
- Continuous Delivery: building on Continuous Integration, every change that passes its tests produces an artifact that is ready to deploy, with the final promotion to production triggered by a manual approval.
- Continuous Deployment: every change that passes its tests is deployed to production automatically, with no manual step.

Deploying Applications as Containers

There is a phrase I heard recently that sums up my view on software deployment very well: “Treat your servers like cattle, not pets”. This suggests creating environments for our applications that only last as long as the deployments do, destroying them when applications are updated rather than updating and nurturing them throughout the applications’ lifetime. This stops our application servers accumulating state and becoming “snowflakes” (each one unique, unpredictable and difficult to reproduce). It also gives us confidence that we can deploy our applications from scratch easily in a disaster scenario.

There are a couple of CD techniques suitable for achieving this without a lot of manual effort, which matters when Agile gives us a very short release cycle. One is to use configuration management tools such as Puppet or Chef in conjunction with your continuous integration process. The other – which I will use in this example – is to deploy your applications as containers, packaged along with their required environment. Platforms such as Kubernetes and Mesos provide a distributed layer of abstraction over multiple hosts, allowing us to run containers with availability guarantees in production.

The Build Environment

The idea of snowflake servers reaches further than deployed application servers. Build servers themselves are some of the worst examples I’ve come across: accumulated state consisting of a rare combination of build tool versions and dependency libraries installed over time, as well as all the little configuration changes that were made manually to get one particular build working. These servers become a dangerous single point of failure once they are the only environments where applications can be built. The presence of older build tools can also make introducing newer builds more difficult – possibly resulting in yet more bespoke setup.

A build process also tends to change after it has been created. This can be problematic if you have defined your build steps directly in your build tool: at the point the build process needs changing, either the branch that requires the change or every other branch will fail to build and test correctly. Once the change has been merged into master and other branches have been rebased, your current branches’ HEADs will be fine, but building historical versions of your application will no longer work. Because of this, it is better to define your build steps, in code, alongside your application source. This way, the steps required to build and test are captured with your source, every revision will build correctly, and build changes can be subject to the same code review controls to minimise risk and ensure the knowledge is shared.

I’ll be creating an environment that uses freshly created Docker containers for each build to overcome these two potential issues. These containers will contain all compile-time and build dependencies in a stateless, reproducible and self-documenting format, while the build server itself will not contain any build dependencies. To keep our build and test steps captured in code, there are many continuous integration products that support keeping your build steps alongside your code. I have found that Jenkins with its Pipeline plugin works very well, especially as there is also a module that adds Docker features to the Pipeline DSL.
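
To give a flavour of that DSL, here is a trimmed sketch (the image name is illustrative; the full Jenkinsfile for this post’s project appears later):

node {
  // Build an image from a Dockerfile checked in alongside the source...
  def buildEnv = docker.build('build_env:latest', 'custom-build-env')
  // ...then run the build steps inside a fresh container of that image
  buildEnv.inside {
    sh 'sbt test'
  }
}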

An Example – Creating a Jenkins Server, Scala Microservice and Deployment Process to Kubernetes

What I’d Like To Achieve

For this post I wanted to create a reproducible reference example of setting up a CD pipeline from scratch, using Jenkins Pipeline and Docker for building, testing and triggering deployments, and Kubernetes for hosting our application.

I have created an application that can run as a container. It is written in Scala and provides a very simple gRPC API. We will:

- build and test the application in a freshly created Docker build environment;
- package the server as a Docker image and push it to a registry;
- deploy the image to a dev environment running on Kubernetes and verify the deployment;
- promote the same image to the live environment after a manual approval.

[Diagram: Target CD process]

Installation of Jenkins

These instructions are for Amazon Linux – which is based on CentOS – but it should be fairly trivial to adapt them for use with other distributions. With the recent release of Windows Server 2016 Containers, I would hope a similar Windows setup is possible too; however, that will need to be a topic for another day.

First, create a new machine in your environment and SSH to it. Out of habit, I will ensure software is up to date and install vim and tmux – this is optional.

sudo yum update
sudo yum install vim tmux

We can then install our necessities: Docker, a JVM, Git and Jenkins. I also add the “jenkins” user to the “docker” group, which gives it access to start, stop and otherwise interact with containers.

sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
sudo yum install jenkins docker java git
sudo usermod -a -G docker jenkins
sudo chkconfig docker on
sudo service docker start
sudo chkconfig jenkins on
sudo service jenkins start

Take note of the content of the /var/lib/jenkins/secrets/initialAdminPassword file. You will need this key shortly.
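
If you need to view it again later, you can print it with:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword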

If you use your own internal Docker repository then configure this now.
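
On Amazon Linux, Docker daemon options can usually be set in /etc/sysconfig/docker. For example, if your internal registry is served over plain HTTP, something like the following (registry.internal.example is a placeholder for your own registry host):

# in /etc/sysconfig/docker:
OPTIONS="--insecure-registry registry.internal.example:5000"

# then restart the daemon:
sudo service docker restart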

Configuration of Jenkins

Open a web browser and point it at port 8080 of your new server. It should prompt you for the initial admin password for your Jenkins instance. This was the content of the file you took note of in the previous step.

[Screenshot: Jenkins unlock screen]

Next, close the wizard that starts by clicking the top right “X” – we’ll do these steps separately.

[Screenshot: dismissing the setup wizard]

Select “Manage Jenkins” from the left hand side.

Select “Manage Plugins” from the management menu. There are many plugins available for Jenkins; I like to keep the selection fairly minimal so the build process is captured in our Jenkinsfile as much as possible. Select the following:

- Pipeline
- Git
- Docker Pipeline
- Blue Ocean
- Credentials Binding

If it is useful to you in your organisation, also select:

- Active Directory

If you are going to be running many build pipelines, I would recommend installing a plugin that allows adding additional build nodes easily, either over SSH or dynamically in AWS:

- SSH Slaves
- Amazon EC2

Click “Install without restart”.

[Screenshot: plugins installed]

Navigate to “Manage Jenkins” > “Manage Users” and add any local user accounts. Then navigate to “Manage Jenkins” > “Configure Global Security”. Here you can configure the privileges for your newly added users and configure your Active Directory connection if you installed the plugin earlier.

The Deployment Environment

To run our containers we will install a Kubernetes cluster. There are good instructions available for this from the Kubernetes Getting Started site covering many possible deployment scenarios. I have used both the kube-aws tool from CoreOS and kops from Kubernetes themselves with success.
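
As a rough sketch, creating a cluster with kops looks something like this (the cluster name, zone and S3 state-store bucket are placeholders for your own values):

# kops keeps cluster state in an S3 bucket
export KOPS_STATE_STORE=s3://my-kops-state-bucket
# generate the cluster configuration...
kops create cluster --zones=eu-west-1a k8s.dev.example.com
# ...then apply it to AWS
kops update cluster k8s.dev.example.com --yes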

For this example, I created two namespaces in my Kubernetes cluster to represent my live and dev environments. Generally one would use two separate clusters for this purpose.

You will want to construct a kubeconfig that contains a Context for each of your environments. These Contexts should contain the hostname and credentials/certificates needed to connect to your clusters or namespaces.

kubectl config set-cluster sam-dev --server https://kube.dev.bott.tech --certificate-authority=~/kube-demo/ca.pem --embed-certs=true --kubeconfig=~/kube-demo/kubeconfig
kubectl config set-credentials admin --client-certificate=~/kube-demo/admin.pem --client-key=~/kube-demo/admin-key.pem --embed-certs=true --kubeconfig=~/kube-demo/kubeconfig
kubectl config set-context ROOT --cluster=sam-dev --user=admin --kubeconfig=~/kube-demo/kubeconfig
kubectl --kubeconfig=~/kube-demo/kubeconfig --context=ROOT create namespace sam-dev
kubectl --kubeconfig=~/kube-demo/kubeconfig --context=ROOT create namespace sam-live
kubectl config set-context DEV --cluster=sam-dev --user=admin --namespace=sam-dev --kubeconfig=~/kube-demo/kubeconfig
kubectl config set-context LIVE --cluster=sam-dev --user=admin --namespace=sam-live --kubeconfig=~/kube-demo/kubeconfig

Once you have installed your cluster and constructed your kubeconfig, you will want to create a Deployment to describe how your application should run.

kubectl --kubeconfig=~/kube-demo/kubeconfig --context=DEV run grpc-demo --image=sambott/grpc-test:0.2 --port=11235

You will then want to expose this as a service. In the example below I am setting the service type to “LoadBalancer”. On supported platforms such as GCE and AWS this will add an externally facing load balancer.

kubectl --kubeconfig=~/kube-demo/kubeconfig --context=DEV expose deployment/grpc-demo --type=LoadBalancer --port=11235

This process should be replicated for the live environment. You can also create YAML files to express the deployments, pods, services and additional config together, but I have skipped this for simplicity.
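
For reference, a minimal sketch of what an equivalent YAML definition might look like – this mirrors the kubectl commands above rather than being taken from the project:

# grpc-demo.yaml – illustrative only
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grpc-demo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: grpc-demo
    spec:
      containers:
      - name: grpc-demo
        image: sambott/grpc-test:0.2
        ports:
        - containerPort: 11235
---
apiVersion: v1
kind: Service
metadata:
  name: grpc-demo
spec:
  type: LoadBalancer
  selector:
    app: grpc-demo
  ports:
  - port: 11235
# apply with:
# kubectl --kubeconfig=~/kube-demo/kubeconfig --context=DEV apply -f grpc-demo.yaml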

A Working Example

The Project

Clone the sample repository at https://github.com/wintoncode/Winton.Blogs.GrpcDemo.git

This is a Scala project that contains a gRPC server, a couple of unit tests and an example client that can be used to verify a deployment. When run, the sample client will connect to a deployed server and test that it gets a response from the defined API.

$ ./sample-client/target/universal/stage/bin/demo-client grpc-demo.dev.bott.tech 11235

2017-02-12 10:39:09 [main] INFO com.winton.DemoClient$ - Creating client
Feb 12, 2017 10:39:09 AM io.grpc.internal.ManagedChannelImpl <init>
INFO: [ManagedChannelImpl@6b927fb] Created with target grpc-demo.dev.bott.tech:11235
2017-02-12 10:39:09 [main] INFO com.winton.DemoClient$ - Client Created
2017-02-12 10:39:09 [main] INFO com.winton.DemoClient$ - calling: getMessage(A Message!)
2017-02-12 10:39:09 [ForkJoinPool-1-worker-5] INFO com.winton.DemoClient$ - Received: Hi! You just sent me A Message!

$

The key features to understand are:

- the custom build environment, defined in custom-build-env/Dockerfile;
- the Jenkinsfile, which defines the whole build, test and deployment pipeline;
- the server’s Docker packaging, staged by sbt (server/docker:stage);
- the sample client, which the pipeline uses to verify each deployment.

The Build Environment

To ensure that our build server is free of state – and to explicitly define and document what is required to build our project – I have added a Dockerfile to the repository. custom-build-env/Dockerfile is used to create a clean environment for every build, containing:

- a JVM, required by sbt and the Scala toolchain;
- sbt, the build tool used to compile, test and package the project.

Creating the build environment may take a few minutes the first time the project is built, but it should add no more than a second to subsequent builds because Docker caches images (and the layers that make up images).
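
I won’t reproduce the repository’s Dockerfile here, but a minimal sketch of such a build environment might look like this (the base image, sbt version and download URL are illustrative rather than taken from the project):

# Illustrative build environment: a JDK plus sbt
FROM openjdk:8-jdk
ENV SBT_VERSION 0.13.16
# Install sbt from its release tarball and put it on the PATH
RUN curl -fsL https://github.com/sbt/sbt/releases/download/v${SBT_VERSION}/sbt-${SBT_VERSION}.tgz | tar -xz -C /usr/local \
 && ln -s /usr/local/sbt/bin/sbt /usr/local/bin/sbt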

The Jenkinsfile

To define the build process in Jenkins, a Jenkinsfile has been added to the project. This file is written in Jenkins’ Groovy-based DSL and outlines the stages required.

#!groovy

String GIT_VERSION

node {

  def buildEnv
  def devAddress

  stage ('Checkout') {
    deleteDir()
    checkout scm
    GIT_VERSION = sh (
      script: 'git describe --tags',
      returnStdout: true
    ).trim()
  }

  stage ('Build Custom Environment') {
    buildEnv = docker.build("build_env:${GIT_VERSION}", 'custom-build-env')
  }

  buildEnv.inside {

    stage ('Build') {
      sh 'sbt compile'
      sh 'sbt sampleClient/universal:stage'
    }

    stage ('Test') {
      parallel (
        'Test Server' : {
          sh 'sbt server/test'
        },
        'Test Sample Client' : {
          sh 'sbt sampleClient/test'
        }
      )
    }

    stage ('Prepare Docker Image') {
      sh 'sbt server/docker:stage'
    }
  }

  stage ('Build and Push Docker Image') {
    withCredentials([[$class: "UsernamePasswordMultiBinding", usernameVariable: 'DOCKERHUB_USER', passwordVariable: 'DOCKERHUB_PASS', credentialsId: 'Docker Hub']]) {
      sh 'docker login --username $DOCKERHUB_USER --password $DOCKERHUB_PASS'
    }
    def serverImage = docker.build("sambott/grpc-test:${GIT_VERSION}", 'server/target/docker/stage')
    serverImage.push()
    sh 'docker logout'
  }

  stage ('Deploy to DEV') {
    devAddress = deployContainer("sambott/grpc-test:${GIT_VERSION}", 'DEV')
  }

  stage ('Verify Deployment') {
    buildEnv.inside {
      sh "sample-client/target/universal/stage/bin/demo-client ${devAddress}"
    }
  }
}

stage ('Deploy to LIVE') {
  timeout(time:2, unit:'DAYS') {
    input message:'Approve deployment to LIVE?'
  }
  node {
    deployContainer("sambott/grpc-test:${GIT_VERSION}", 'LIVE')
  }
}

def deployContainer(image, env) {
  docker.image('lachlanevenson/k8s-kubectl:v1.5.2').inside {
    withCredentials([[$class: "FileBinding", credentialsId: 'KubeConfig', variable: 'KUBE_CONFIG']]) {
      def kubectl = "kubectl  --kubeconfig=\$KUBE_CONFIG --context=${env}"
      sh "${kubectl} set image deployment/grpc-demo grpc-demo=${image}"
      sh "${kubectl} rollout status deployment/grpc-demo"
      return sh (
        script: "${kubectl} get service/grpc-demo -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'",
        returnStdout: true
      ).trim()
    }
  }
}


// vim: set syntax=groovy :

Looking at the contents of this file you will see it is broken down into logical steps using the stage function. These stages are:

- Checkout – clean the workspace, check out the source and derive a version from the Git tags;
- Build Custom Environment – build the Docker image that provides the build environment;
- Build – compile the project and stage the sample client;
- Test – run the server and sample client test suites in parallel;
- Prepare Docker Image – stage the files needed to build the server’s Docker image;
- Build and Push Docker Image – build the server image and push it to Docker Hub;
- Deploy to DEV – roll the new image out to the dev environment;
- Verify Deployment – run the sample client against the freshly deployed server;
- Deploy to LIVE – after manual approval, roll the same image out to live.

There are a few areas of the file that are worth noting:

- The LIVE approval step uses input wrapped in a two-day timeout, and sits outside any node block so that no executor is held while the pipeline waits for approval.
- Credentials are never hard-coded: the Docker Hub login and the kubeconfig are injected from the Jenkins credential store with withCredentials.
- kubectl runs inside a pre-built container (lachlanevenson/k8s-kubectl), so it does not need to be installed on the build server.
- The version derived from git describe --tags is used to tag both the build environment image and the application image, tying every artifact back to a revision.

[Screenshot: Continuous Delivery approval prompt]

Adding this project to Jenkins

Adding a Jenkinsfile-based build to Jenkins is trivial: we only need to tell Jenkins which repository to reference and:

- create a new “Multibranch Pipeline” item pointing at the repository;
- add a username/password credential with the ID “Docker Hub” containing your registry login;
- add a file credential with the ID “KubeConfig” containing the kubeconfig constructed earlier.

[Screenshot: Jenkins credential store]

Running the build

This configuration will poll the Git repository for changes. To run the build manually, navigate to the pipeline in the “Blue Ocean UI” and click the play icon next to the master branch.
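
If you would rather capture the trigger in code as well, scripted pipelines can register SCM polling from the Jenkinsfile itself – a sketch, not part of the example project:

// at the top of the Jenkinsfile: poll the repository roughly every
// five minutes ('H' lets Jenkins spread the load across the hour)
properties([pipelineTriggers([pollSCM('H/5 * * * *')])])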

[Screenshot: build success]