Jenkins & OpenShift – a Scalable, Flexible Pipeline

Introduction

At UKCloud, we’ve been rapidly changing our technology stack to suit our ever-changing needs and to provide a better overall service. One of the biggest changes we’ve made is adopting Red Hat OpenShift as our application platform of choice. OpenShift is Red Hat’s take on the Kubernetes container orchestration layer and provides numerous developer-oriented benefits.


In tandem with this change, we’ve also been migrating our build pipelines from Atlassian Bamboo to Jenkins. Whilst Bamboo integrates well with our remote source control management tool, it doesn’t play nicely with our current Docker-oriented build strategies, particularly when we need to run unit tests or other heavy workloads. Other issues include its lack of tag recognition (Bamboo seems unable to trigger builds on tags) and the fact that the build pipeline’s configuration is stored within Bamboo itself, which is far from ideal.

Enter Jenkins

Jenkins is able to solve the latter two problems described above, at the unfortunate expense of software integrations. Out of the box, though, it doesn’t help us with our build strategies – we are still forced to build our images using Docker & Docker Compose. Not only does this approach force us to build & maintain a large number of Jenkins agents, but it’s also insecure – there are several security-related disadvantages to running Docker in production or production-like environments (more on this later). Since we already have access to an OpenShift environment and we already feel comfortable building, running and troubleshooting Docker containers, why not marry the two? We could use Jenkins to spin up ephemeral Jenkins agents in OpenShift, do our work, then destroy them. This way, we don’t have to worry about managing Jenkins agent configuration or requesting new agents when we start hitting our capacity ceiling. Instead, we request new OpenShift worker nodes or clusters, for which we already have an established build-and-maintain process. Not only is this established practice, but it keeps us within the confines of one process instead of creating a new, separate one. Seems like a no-brainer.

The Jenkins Kubernetes Plugin

Thanks to the extensive Jenkins plugin ecosystem, there’s already a plugin for what we need to do – the Kubernetes Plugin. While the plugin is relatively new, there’s a lot of community activity surrounding it and the documentation is excellent, suggesting that it may have a bright future ahead of it. Installing this plugin gives us a new Jenkins configuration section: Cloud. Here, we can create one or more new clouds.


There are a number of configuration elements to be aware of here:

  • Kubernetes URL: The URL of the Kubernetes (or OpenShift) API server.
  • Kubernetes Namespace: The namespace in which Jenkins will create its agents.
  • Credentials: The credentials required to log in to the Kubernetes cluster.
  • Jenkins URL: The Jenkins master URL, so that the containerised agents can talk back to Jenkins.
  • Jenkins Tunnel: The host and JNLP port of the Jenkins master. Note that the JNLP port must be set to a fixed (static) port in your Jenkins security configuration.

Note that, in our case, the credentials type that we’re using is ‘OpenShift Login’. This credential type is available after installing the OpenShift Login Plugin.
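
The Kubernetes plugin can also authenticate with a plain service account token (stored, for example, as a ‘Secret text’ credential) if you’d rather not rely on the OpenShift Login plugin. A rough sketch of minting one – the account name and namespace here are placeholders, and the exact commands vary between OpenShift versions:

# Create a dedicated service account for Jenkins to provision agents with,
# grant it rights to manage pods in the agent namespace, and print its token
# so it can be stored as a Jenkins credential.
oc create serviceaccount jenkins -n jenkins-agents
oc policy add-role-to-user edit system:serviceaccount:jenkins-agents:jenkins -n jenkins-agents
oc sa get-token jenkins -n jenkins-agents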


Now that we have our cloud configured, we can push the Test Connection button. If it doesn’t succeed, you’ll know!

Writing our Groovy Script

Now that we have configured our Jenkins Kubernetes plugin to communicate with OpenShift, we can start writing our first pipeline. Unfortunately, there is no declarative DSL for defining our resources in OpenShift – so we’ll be writing a native Groovy script to do the heavy lifting.

Here’s one I made earlier:

def cloud = 'openshift'
def nodeLabel = "hello-world-${UUID.randomUUID().toString()}"

podTemplate(
  cloud: cloud,
  label: nodeLabel,
  name: "${nodeLabel}-pod",
  nodeUsageMode: 'EXCLUSIVE',
  serviceAccount: 'jenkins-builder',
  podRetention: never(),
  containers: [
    containerTemplate(
      name: 'jnlp',
      image: 'registry.access.redhat.com/openshift3/jenkins-slave-base-rhel7',
      alwaysPullImage: true,
      args: '${computer.jnlpmac} ${computer.name}'
    ),
    containerTemplate(
      name: 'hello-world',
      image: 'alpine:latest',
      alwaysPullImage: true,
      ttyEnabled: true,
      command: 'cat'
    )
  ]
) {
  node(nodeLabel) {
    stage('Greet') {
      container('hello-world') {
        sh 'echo "Hello, from \$(hostname)!"'
      }
    }
  }
}


This may seem like a lot to take in at first – so let’s break this down:

  • Firstly, we define our cloud name. This corresponds to the name configured in Jenkins.
  • Secondly, we define our nodeLabel. This is the label that Jenkins uses to reference the ephemeral Jenkins agent.
  • Next comes our podTemplate. This is the bread and butter of the Jenkins Kubernetes plugin, and allows us to create a pod to our specification!
    • We can define a serviceAccount to run this pod as. This is important, because it means we can make extensive use of OpenShift’s excellent RBAC.
    • We can specify that the pod is immediately destroyed after it has completed its tasks with the podRetention parameter.
    • The containers parameter allows us to specify an array of containers we want to be part of this pod. Normally, the jnlp container is provided by default, but you can override its original specification if you need something different, as we do, since we’re running Red Hat OpenShift and not native Kubernetes.
      • Within each container template, we can specify a number of parameters. A full list is available in the documentation. In our example, we’re using a few parameters:
        • name: The name of the container within the pod. We use this value to select which container we run scripts against.
        • image: The image that the container is based on.
        • alwaysPullImage: Whether to always pull the container image or not. In some cases, we want to do this so that we always have the latest image. In other cases, if we’re using an image with a semantically versioned tag, we may not want to always pull the image.
        • command: We can specify the command we run on each container. In 99% of cases, you will want a command that ‘holds’ the container so commands can be run against it. We’re using cat here, because by default, cat will wait for an input stream on STDIN.
        • ttyEnabled: cat will not wait for an input stream without a TTY – the container will exit immediately. This ensures the container is attached to a TTY.
    • In the body that we pass to our podTemplate call comes the actual work – we pass the pod label (stored in the nodeLabel variable) to our node declaration and then start doing stuff!
      • We create a pipeline stage using the stage function.
        • Inside this stage, we use the container function to specify the container that we’d like to use.
          • Inside this call to container, we then run a shell script that prints a simple message.

Running Our Groovy Script

To run our Groovy script, we create a simple pipeline job in Jenkins. For the purposes of this blog, the job simply contains the script above as an inline pipeline definition.

After saving the pipeline configuration, we can trigger a build from the GUI. We get the following output:

Started by user vagrant
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Still waiting to schedule task
‘hello-world-f26574b0-a3c4-40f1-83ee-2e6e2abee8b5-pod-r5b1-mvm9q’ is offline
Agent hello-world-f26574b0-a3c4-40f1-83ee-2e6e2abee8b5-pod-r5b1-mvm9q is provisioned from template Kubernetes Pod Template
Agent specification [Kubernetes Pod Template] (hello-world-f26574b0-a3c4-40f1-83ee-2e6e2abee8b5):
* [jnlp] registry.access.redhat.com/openshift3/jenkins-slave-base-rhel7
* [hello-world] alpine:latest

Running on hello-world-f26574b0-a3c4-40f1-83ee-2e6e2abee8b5-pod-r5b1-mvm9q in /home/jenkins/workspace/blog
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Greet)
[Pipeline] container
[Pipeline] {
[Pipeline] sh
+ hostname
+ echo 'Hello, from hello-world-f26574b0-a3c4-40f1-83ee-2e6e2abee8b5-pod-r5b1-mvm9q!'
Hello, from hello-world-f26574b0-a3c4-40f1-83ee-2e6e2abee8b5-pod-r5b1-mvm9q!
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS

Success! Our simple pipeline works. But what if we want to do something more complex? In UKCloud’s case, we use OpenShift to build & test software artefacts. But how did we get there?

Building Artefacts

The original method for building & testing images at UKCloud was to build & run our containers on a VM with Docker installed. Of course, this doesn’t scale well and involves a fair amount of manual intervention to get things running. Because of the scalability that OpenShift provides, we decided that building our images inside our OpenShift clusters was the best approach. There is a major problem with this approach, though: running Docker inside a Docker container is tedious and exposes a large attack surface. Typically, to get this working, you’d mount /var/run/docker.sock into the “builder” pod, which would then use the Docker API to issue commands through the Docker socket. This means that builds are actually occurring on the underlying OpenShift worker nodes, which breaks the encapsulation that a pod normally provides. This is a big no-no – pods running in OpenShift could potentially manipulate other pods via the Docker socket, essentially subverting the authentication and authorization mechanisms provided by OpenShift. There’s also the potential for local images to be swapped for malicious ones – by pulling or building a malicious image, then tagging it with the name of an existing image. Any deployment using that image name would then run the malicious image on its next rollout, or whenever a pod is deleted and recreated.
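
For clarity, this is the sort of pattern described above, shown here as a plain docker run rather than a pod spec (the image name and build arguments are purely illustrative):

# The pattern we want to avoid: handing the host's Docker socket to a build
# container. Anything inside the container can now drive the node's Docker
# daemon - and therefore every other container on that node.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD":/src -w /src \
  docker:latest \
  docker build -t some-team/some-image .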

Going Dockerless

Fortunately, there are plenty of alternative solutions to Docker – many of which were the subject of research & development at UKCloud before we picked our technology of choice. We ended up picking Google’s Kaniko, mainly because it does a very good job of isolating our builds from the underlying hosts, while also being easy to use and widely supported. It does have certain disadvantages – we still need to run as root inside the container itself – but this is a problem that is difficult to solve and requires hacking on a level I’m not willing to entertain with a production-grade OpenShift cluster.

Kaniko Demo

To scratch the surface of operating Kaniko, I’ve provided a small Dockerfile:

FROM ubuntu:18.04

RUN apt update && apt upgrade -y

Since Kaniko itself ships as a container image designed to run in Kubernetes, for this demo we’ll run it inside Docker – which in the real world would defeat the point – but for illustrative purposes it works well.

Running a command inside an image build using Kaniko
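
The screenshot isn’t reproduced here, but the command in question looks roughly like this, assuming the upstream gcr.io/kaniko-project/executor image (paths are illustrative):

# Mount the Dockerfile into Kaniko's default workspace and build the image
# without pushing the result anywhere.
docker run --rm \
  -v "$PWD/Dockerfile":/workspace/Dockerfile \
  gcr.io/kaniko-project/executor:latest \
  --dockerfile /workspace/Dockerfile \
  --context dir:///workspace \
  --no-push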

In the example above, we mount our Dockerfile into Kaniko – Kaniko is able to parse Dockerfiles natively – and instruct it to build without pushing to a registry. The output that follows is similar to that of running docker build – but this time, our image build is happening entirely inside a container, without the Docker daemon being involved in the build itself!

Essentially, Kaniko runs the required commands within the context of the container’s filesystem, creating intermediate snapshots that it records as individual layers. It then packages these layers into an OCI (Open Container Initiative)-compliant image, which Docker – or any other OCI-compatible runtime – is able to run as a container.

So now we can build an image without relying on Docker – any container runtime will do. We can also build our image inside OpenShift or Kubernetes.

Since Kaniko only implements a select few of the features present in Docker, we may also need other tools. Docker itself is a monolithic application and provides the ability to build, run, push, pull, load, save… Kaniko can only do a few of these, and in a limited capacity. So what happens when, say, we need to move images around from within an OpenShift or Kubernetes cluster without relying on a Docker daemon? Kaniko’s ability to push images is welcome, but it will only take us so far…

Putting it all Together

Now that we’ve established which tools we’re going to use, we can work on fitting all of this together. We’re going to create a pipeline that builds two images and ships them off to the OpenShift internal registry: one image installs Ruby, the other installs both Ruby and Python.

Our Pipeline

The pipeline we want looks something like the following – which is a lot to take in at once – so let’s break it down.

def cloud = 'openshift'
// We define our Node label - so Jenkins knows which pod to use
def nodeLabel = "builder-${UUID.randomUUID().toString()}"

podTemplate(
  cloud: cloud,
  label: nodeLabel,
  name: "${nodeLabel}-pod",
  nodeUsageMode: 'EXCLUSIVE',
  serviceAccount: 'jenkins-builder',
  podRetention: never(),
  containers: [
    containerTemplate(
      name: 'jnlp',
      image: 'registry.access.redhat.com/openshift3/jenkins-slave-base-rhel7',
      alwaysPullImage: true,
      args: '${computer.jnlpmac} ${computer.name}'
    ),
    containerTemplate(
      name: 'kaniko-warmer',
      image: 'twistedvines/kaniko-executor:latest',
      alwaysPullImage: true,
      ttyEnabled: true,
      command: '/busybox/cat',
      resourceRequestCpu: '150m',
      resourceRequestMemory: '256Mi',
      resourceLimitCpu: '750m',
      resourceLimitMemory: '1024Mi'
    ),
    containerTemplate(
      name: 'ruby-builder',
      image: 'twistedvines/kaniko-executor:latest',
      alwaysPullImage: true,
      ttyEnabled: true,
      command: '/busybox/cat',
      resourceRequestCpu: '150m',
      resourceRequestMemory: '256Mi',
      resourceLimitCpu: '750m',
      resourceLimitMemory: '1024Mi'
    ),
    containerTemplate(
      name: 'ruby-python-builder',
      image: 'twistedvines/kaniko-executor:latest',
      alwaysPullImage: true,
      ttyEnabled: true,
      command: '/busybox/cat',
      resourceRequestCpu: '150m',
      resourceRequestMemory: '256Mi',
      resourceLimitCpu: '750m',
      resourceLimitMemory: '1024Mi'
    ),
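    // A throwaway registry used purely as a shared layer cache for Kaniko
    // (see the --cache-repo flag in the Build stage).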
    containerTemplate(
      name: 'ephemeral-registry',
      image: 'registry:2.6.2',
      resourceRequestCpu: '150m',
      resourceRequestMemory: '256Mi',
      resourceLimitCpu: '500m',
      resourceLimitMemory: '1024Mi'
    )
  ],
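  // Shared emptyDir volumes: /cache holds warmed base images, /kaniko/.docker
  // holds the registry credentials written in the Prepare stage.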
  volumes: [
    emptyDirVolume(mountPath: '/cache', memory: false),
    emptyDirVolume(mountPath: '/kaniko/.docker', memory: false)
  ]
) {
  node(nodeLabel) {
    stage('Prepare') {
      container('jnlp') {
        sh '''#!/bin/sh
          b64_encoded_credentials="\$(
            printf "serviceaccount:\$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" | \
              base64 -w 0
          )"
          echo "{
            \\"auths\\": {
              \\"docker-registry.default.svc:5000\\": {
                \\"auth\\": \\"\${b64_encoded_credentials}\\"
                }
              }
            }" > \
            /kaniko/.docker/config.json
        '''
      }
    }

    stage('Cache') {
      container(name: 'kaniko-warmer', shell: '/busybox/sh') {
        withEnv(['PATH+EXTRA=/busybox']) {
          sh '''#!/busybox/sh
            /kaniko/warmer -i alpine:latest
          '''
        }
      }
    }

    stage('Build') {
      container(name: 'ruby-builder', shell: '/busybox/sh') {
        withEnv(['PATH+EXTRA=/busybox']) {
          sh '''#!/busybox/sh
            echo -e "FROM alpine:latest\nRUN apk add ruby" > /workspace/Dockerfile
          '''

          sh '''#!/busybox/sh
            /kaniko/executor \
              --cache \
              --cache-repo '127.0.0.1:5000/build/cache' \
              -f "/workspace/Dockerfile" \
              -c "/workspace" \
              --insecure \
              --destination "docker-registry.default.svc:5000/ci/ruby-build:latest"
          '''
        }
      }

      container(name: 'ruby-python-builder', shell: '/busybox/sh') {
        withEnv(['PATH+EXTRA=/busybox']) {
          sh '''#!/busybox/sh
            echo -e "FROM alpine:latest\nRUN apk add ruby\nRUN apk add python" > /workspace/Dockerfile
          '''

          sh '''#!/busybox/sh
            /kaniko/executor \
              --cache \
              --cache-repo '127.0.0.1:5000/build/cache' \
              -f "/workspace/Dockerfile" \
              -c "/workspace" \
              --insecure \
              --destination "docker-registry.default.svc:5000/ci/ruby-python-build:latest"
          '''
        }
      }
    }
  }
}

Breakdown

Hopefully you’re now somewhat familiar with the podTemplate call, so we’ll go through the new additions and explain what they’re for and why we’re using them.

The Kaniko Image Warmer

    containerTemplate(
      name: 'kaniko-warmer',
      image: 'twistedvines/kaniko-executor:latest',
      alwaysPullImage: true,
      ttyEnabled: true,
      command: '/busybox/cat',
      resourceRequestCpu: '150m',
      resourceRequestMemory: '256Mi',
      resourceLimitCpu: '750m',
      resourceLimitMemory: '1024Mi'
    ),


The Kaniko project ships with an image warmer – available at gcr.io/kaniko-project/warmer:latest – which is able to cache images to a local filesystem. This is useful if, like in our example, we’re using the same base image more than once. Normally, two instances of Kaniko would download the same image twice – thanks to this container, we only download the base images we need once.

We’re also specifying resource requests & limits – a useful Kubernetes feature for maintaining cluster stability and ensuring our build containers get the resources they require.

Note that, in this instance, we’re also using a custom Kaniko image (that I have created & maintain). One of the downsides to using Kaniko is, unfortunately, that Google keep it as lean as possible – so getting it to work in an environment such as this is a challenge! I created this image based on an older version of Kaniko, to prevent the version drift that would occur through use of the upstream latest tag.
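
I haven’t reproduced that image here, but for reference the upstream project also publishes a debug variant of the executor that ships a BusyBox shell – a custom image along these lines (not necessarily what twistedvines/kaniko-executor contains) would satisfy the pipeline’s /busybox/sh and /busybox/cat requirements:

# Hypothetical starting point for a shell-equipped Kaniko agent image.
# In practice, pin a specific debug-vX.Y.Z tag rather than a floating one
# to avoid version drift.
FROM gcr.io/kaniko-project/executor:debug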

The Shared Volumes

    volumes: [
      emptyDirVolume(mountPath: '/cache', memory: false),
      emptyDirVolume(mountPath: '/kaniko/.docker', memory: false)
    ]

We create two shared volumes here: the first, /cache, is so that the image warmer can share its cached images with the other Kaniko containers.
The /kaniko/.docker volume is so that we can share Docker configuration between our multiple Kaniko image builders – saves us from having to populate the configuration twice.

The ‘Prepare’ Stage

    stage('Prepare') {
      container('jnlp') {
        sh '''#!/bin/sh
          b64_encoded_credentials="\$(
            printf "serviceaccount:\$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" | \
              base64 -w 0
          )"
          echo "{
            \\"auths\\": {
              \\"docker-registry.default.svc:5000\\": {
                \\"auth\\": \\"\${b64_encoded_credentials}\\"
                }
              }
            }" > \
            /kaniko/.docker/config.json
        '''
      }
    }


This stage populates the Docker config file with the service account credentials supplied by OpenShift. This allows us to authenticate with the internal OpenShift registry. It doesn’t look pretty – but it works well.

The ‘Cache’ Stage

    stage('Cache') {
      container(name: 'kaniko-warmer', shell: '/busybox/sh') {
        withEnv(['PATH+EXTRA=/busybox']) {
          sh '''#!/busybox/sh
            /kaniko/warmer -i alpine:latest
          '''
        }
      }
    }

The ‘Cache’ stage downloads the required base images and stores them in /cache. Since /cache is a volume shared between all containers in this pod, all of our Kaniko containers have access to the warmed images.

The ‘Build’ Stage

    stage('Build') {
      container(name: 'ruby-builder', shell: '/busybox/sh') {
        withEnv(['PATH+EXTRA=/busybox']) {
          sh '''#!/busybox/sh
            echo -e "FROM alpine:latest\nRUN apk add ruby" > /workspace/Dockerfile
          '''

          sh '''#!/busybox/sh
            /kaniko/executor \
              --cache \
              --cache-repo '127.0.0.1:5000/build/cache' \
              -f "/workspace/Dockerfile" \
              -c "/workspace" \
              --insecure \
              --destination "docker-registry.default.svc:5000/ci/ruby-build:latest"
          '''
        }
      }

      container(name: 'ruby-python-builder', shell: '/busybox/sh') {
        withEnv(['PATH+EXTRA=/busybox']) {
          sh '''#!/busybox/sh
            echo -e "FROM alpine:latest\nRUN apk add ruby\nRUN apk add python" > /workspace/Dockerfile
          '''

          sh '''#!/busybox/sh
            /kaniko/executor \
              --cache \
              --cache-repo '127.0.0.1:5000/build/cache' \
              -f "/workspace/Dockerfile" \
              -c "/workspace" \
              --insecure \
              --destination "docker-registry.default.svc:5000/ci/ruby-python-build:latest"
          '''
        }
      }
    }


We’re doing quite a lot here – we’re using withEnv to specify a custom $PATH variable, since the Kaniko image only ships BusyBox utilities (under /busybox).

We are actually generating our Dockerfile dynamically inside the Kaniko container. Usually, we wouldn’t do this – we’d keep the Dockerfile in a repository and check it out into the jnlp container. The Kubernetes plugin creates its own shared volume at /home/jenkins, which would allow our Kaniko containers to access that checked-out code. However, the example pipeline is already complex enough without this additional step!
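
Purely as an illustration, that variant might look something like the following – the repository URL is a placeholder, and it relies on the Jenkins workspace (under the shared /home/jenkins volume) being visible to the Kaniko containers:

stage('Checkout') {
  container('jnlp') {
    // Check the Dockerfile (and any build context) out into the shared workspace
    git url: 'https://git.example.com/our-org/ruby-image.git'
  }
}

stage('Build') {
  container(name: 'ruby-builder', shell: '/busybox/sh') {
    withEnv(['PATH+EXTRA=/busybox']) {
      sh '''#!/busybox/sh
        /kaniko/executor \
          -f "${WORKSPACE}/Dockerfile" \
          -c "${WORKSPACE}" \
          --insecure \
          --destination "docker-registry.default.svc:5000/ci/ruby-build:latest"
      '''
    }
  }
}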

When we run Kaniko’s executor, we’re doing several things of note:

  • --cache tells the executor to use the existing cache (/cache) and any remote caches.
  • --cache-repo allows us to specify a remote cache – which is why we spin up a Docker registry as part of this pod – so we can share created layers!
  • -f allows us to specify a Dockerfile path.
  • -c allows us to specify a build context.
  • --insecure allows us to use an insecure (HTTP) OpenShift internal registry.
  • --destination allows us to specify a remote destination for our final artefact.

Assumptions

Of course, this pipeline makes a few assumptions. It mainly assumes that you’ve created & configured a service account, jenkins-builder, to push to the OpenShift internal registry.
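
Roughly speaking – and assuming the builds live in a ci project, as the image destinations above suggest – that setup looks something like this (role and command names may differ between OpenShift versions):

# Create the service account the pod template runs as, and allow it to push
# images to the 'ci' project via the internal registry.
oc create serviceaccount jenkins-builder -n ci
oc policy add-role-to-user system:image-builder system:serviceaccount:ci:jenkins-builder -n ci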

End Result 

Started by user vagrant
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Still waiting to schedule task
‘builder-01a6d3fc-3cdb-4c38-aaf2-02111ec3b63c-pod-slm36-4sc85’ is offline
Agent builder-01a6d3fc-3cdb-4c38-aaf2-02111ec3b63c-pod-slm36-4sc85 is provisioned from template Kubernetes Pod Template
Agent specification [Kubernetes Pod Template] (builder-01a6d3fc-3cdb-4c38-aaf2-02111ec3b63c):
* [jnlp] registry.access.redhat.com/openshift3/jenkins-slave-base-rhel7
* [kaniko-warmer] twistedvines/kaniko-executor:latest(resourceRequestCpu: 150m, resourceRequestMemory: 256Mi, resourceLimitCpu: 750m, resourceLimitMemory: 1024Mi)
* [ruby-builder] twistedvines/kaniko-executor:latest(resourceRequestCpu: 150m, resourceRequestMemory: 256Mi, resourceLimitCpu: 750m, resourceLimitMemory: 1024Mi)
* [ruby-python-builder] twistedvines/kaniko-executor:latest(resourceRequestCpu: 150m, resourceRequestMemory: 256Mi, resourceLimitCpu: 750m, resourceLimitMemory: 1024Mi)
* [ephemeral-registry] registry:2.6.2(resourceRequestCpu: 150m, resourceRequestMemory: 256Mi, resourceLimitCpu: 500m, resourceLimitMemory: 1024Mi)

Running on builder-01a6d3fc-3cdb-4c38-aaf2-02111ec3b63c-pod-slm36-4sc85 in /home/jenkins/workspace/blog
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Prepare)
[Pipeline] container
[Pipeline] {
[Pipeline] sh
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Cache)
[Pipeline] container
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] sh
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Build)
[Pipeline] container
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] sh
[Pipeline] sh
INFO[0000] Downloading base image alpine:latest
2019/03/20 15:44:36 No matching credentials were found, falling back on anonymous
INFO[0001] Found sha256:d05ecd4520cab5d9e5d877595fb0532aadcd6c90f4bbc837bc11679f704c4c82 in local cache
INFO[0001] Checking for cached layer 127.0.0.1:5000/build/cache:2bfa2359cb030885e15a9a503aa85d4d4000138b70372bd069484d6a8083bad7...
2019/03/20 15:44:37 No matching credentials were found, falling back on anonymous
INFO[0001] No cached layer found for cmd RUN apk add ruby
INFO[0001] Unpacking rootfs as cmd RUN apk add ruby requires it.
INFO[0002] Taking snapshot of full filesystem...
INFO[0002] Skipping paths under /kaniko, as it is a whitelisted directory
INFO[0002] Skipping paths under /home/jenkins, as it is a whitelisted directory
INFO[0002] Skipping paths under /var/run, as it is a whitelisted directory
INFO[0002] Skipping paths under /dev, as it is a whitelisted directory
INFO[0002] Skipping paths under /proc, as it is a whitelisted directory
INFO[0002] Skipping paths under /sys, as it is a whitelisted directory
INFO[0002] Skipping paths under /cache, as it is a whitelisted directory
INFO[0002] Skipping paths under /busybox, as it is a whitelisted directory
INFO[0002] RUN apk add ruby
INFO[0002] cmd: /bin/sh
INFO[0002] args: [-c apk add ruby]
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
(1/9) Installing ca-certificates (20190108-r0)
(2/9) Installing gmp (6.1.2-r1)
(3/9) Installing ncurses-terminfo-base (6.1_p20190105-r0)
(4/9) Installing ncurses-terminfo (6.1_p20190105-r0)
(5/9) Installing ncurses-libs (6.1_p20190105-r0)
(6/9) Installing readline (7.0.003-r1)
(7/9) Installing yaml (0.2.1-r0)
(8/9) Installing ruby-libs (2.5.3-r1)
(9/9) Installing ruby (2.5.3-r1)
Executing busybox-1.29.3-r10.trigger
Executing ca-certificates-20190108-r0.trigger
OK: 27 MiB in 23 packages
INFO[0046] Taking snapshot of full filesystem...
INFO[0046] Skipping paths under /kaniko, as it is a whitelisted directory
INFO[0046] Skipping paths under /home/jenkins, as it is a whitelisted directory
INFO[0046] Skipping paths under /var/run, as it is a whitelisted directory
INFO[0046] Skipping paths under /dev, as it is a whitelisted directory
INFO[0046] Skipping paths under /proc, as it is a whitelisted directory
INFO[0046] Skipping paths under /sys, as it is a whitelisted directory
INFO[0046] Skipping paths under /cache, as it is a whitelisted directory
INFO[0046] Skipping paths under /busybox, as it is a whitelisted directory
INFO[0051] Pushing layer 127.0.0.1:5000/build/cache:2bfa2359cb030885e15a9a503aa85d4d4000138b70372bd069484d6a8083bad7 to cache now
2019/03/20 15:45:26 pushed blob sha256:584c86aa03d92083a7f6b18cdb4bb69878fddba699c278f42438d5a8280fa3ec
2019/03/20 15:45:28 pushed blob sha256:cf1c5fb657b50dd1a68a041186cb44b82d81c7b6c65bc05703ace592e42371f1
2019/03/20 15:45:28 127.0.0.1:5000/build/cache:2bfa2359cb030885e15a9a503aa85d4d4000138b70372bd069484d6a8083bad7: digest: sha256:db0e157bb9c1a8556b3c0720338e28d6631c65081faeaeb9b7f7c95fb8ec21a7 size: 428
2019/03/20 15:45:28 existing blob: sha256:8e402f1a9c577ded051c1ef10e9fe4492890459522089959988a4852dee8ab2c
2019/03/20 15:45:28 pushed blob sha256:99d96965624f6c26067d96c5d78100698e0f99c729e977b30711413740eb33f7
2019/03/20 15:45:29 pushed blob sha256:cf1c5fb657b50dd1a68a041186cb44b82d81c7b6c65bc05703ace592e42371f1
2019/03/20 15:45:29 docker-registry.default.svc:5000/ci/ruby-build:latest: digest: sha256:055cde38230ab6a3e4f1005e851b0383880f87e3dbb40c3ac449e9ebbcd3b63b size: 592
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // container
[Pipeline] container
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] sh
mkdir: can't create directory '/workspace': File exists
[Pipeline] sh
INFO[0000] Downloading base image alpine:latest
2019/03/20 15:45:32 No matching credentials were found, falling back on anonymous
INFO[0001] Found sha256:d05ecd4520cab5d9e5d877595fb0532aadcd6c90f4bbc837bc11679f704c4c82 in local cache
INFO[0001] Checking for cached layer 127.0.0.1:5000/build/cache:2bfa2359cb030885e15a9a503aa85d4d4000138b70372bd069484d6a8083bad7...
2019/03/20 15:45:33 No matching credentials were found, falling back on anonymous
INFO[0001] Using caching version of cmd: RUN apk add ruby
INFO[0001] Checking for cached layer 127.0.0.1:5000/build/cache:6bf9d90f062448428f60dd669b4e3e1184c6ab536721b0fb7db134268b2d77c8...
2019/03/20 15:45:33 No matching credentials were found, falling back on anonymous
INFO[0001] No cached layer found for cmd RUN apk add python
INFO[0001] Unpacking rootfs as cmd RUN apk add python requires it.
INFO[0002] Taking snapshot of full filesystem...
INFO[0002] Skipping paths under /kaniko, as it is a whitelisted directory
INFO[0002] Skipping paths under /home/jenkins, as it is a whitelisted directory
INFO[0002] Skipping paths under /var/run, as it is a whitelisted directory
INFO[0002] Skipping paths under /dev, as it is a whitelisted directory
INFO[0002] Skipping paths under /proc, as it is a whitelisted directory
INFO[0002] Skipping paths under /sys, as it is a whitelisted directory
INFO[0002] Skipping paths under /cache, as it is a whitelisted directory
INFO[0002] Skipping paths under /busybox, as it is a whitelisted directory
INFO[0002] RUN apk add ruby
INFO[0002] Found cached layer, extracting to filesystem
INFO[0004] Taking snapshot of files...
INFO[0007] RUN apk add python
INFO[0007] cmd: /bin/sh
INFO[0007] args: [-c apk add python]
(1/6) Installing libbz2 (1.0.6-r6)
(2/6) Installing expat (2.2.6-r0)
(3/6) Installing libffi (3.2.1-r6)
(4/6) Installing gdbm (1.13-r1)
(5/6) Installing sqlite-libs (3.26.0-r3)
(6/6) Installing python2 (2.7.15-r3)
Executing busybox-1.29.3-r10.trigger
OK: 66 MiB in 29 packages
INFO[0028] Taking snapshot of full filesystem...
INFO[0028] Skipping paths under /kaniko, as it is a whitelisted directory
INFO[0028] Skipping paths under /home/jenkins, as it is a whitelisted directory
INFO[0028] Skipping paths under /var/run, as it is a whitelisted directory
INFO[0028] Skipping paths under /dev, as it is a whitelisted directory
INFO[0028] Skipping paths under /proc, as it is a whitelisted directory
INFO[0028] Skipping paths under /sys, as it is a whitelisted directory
INFO[0028] Skipping paths under /cache, as it is a whitelisted directory
INFO[0028] Skipping paths under /busybox, as it is a whitelisted directory
INFO[0037] Pushing layer 127.0.0.1:5000/build/cache:6bf9d90f062448428f60dd669b4e3e1184c6ab536721b0fb7db134268b2d77c8 to cache now
2019/03/20 15:46:09 pushed blob sha256:2957525b1ffae8b9efca952c63bd419c7b5fe203460366612426b95cd001e374
2019/03/20 15:46:14 pushed blob sha256:faccc903f6939e22e32951b0763c1e3761f730460b95eff50701f4c7b0a36e9a
2019/03/20 15:46:14 127.0.0.1:5000/build/cache:6bf9d90f062448428f60dd669b4e3e1184c6ab536721b0fb7db134268b2d77c8: digest: sha256:d32db3be7a042e29fff3e89e4a215947879fc3df755b5a75a7a0dabce409a0e6 size: 429
2019/03/20 15:46:14 existing blob: sha256:8e402f1a9c577ded051c1ef10e9fe4492890459522089959988a4852dee8ab2c
2019/03/20 15:46:14 pushed blob sha256:1f5d1b190ab7e59428eb4e8807c8d875c8125801f4bec7e7ff1ddf41456663e7
2019/03/20 15:46:21 pushed blob sha256:495293a72b1865284ccf1ccfea296d7cb4b48bcc54c877fe70f73b2a774824ab
2019/03/20 15:46:24 pushed blob sha256:faccc903f6939e22e32951b0763c1e3761f730460b95eff50701f4c7b0a36e9a
2019/03/20 15:46:25 docker-registry.default.svc:5000/ci/ruby-python-build:latest: digest: sha256:6e08708f66c020426dfb20a54783069d8ac2ae3500d1c40185001cb41f44f572 size: 757
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS


Advantages

This approach has a number of advantages:

  • We no longer need bespoke Linux agents – we can define our needs in Docker image builds and run them anywhere
  • We don’t need to interact with Docker directly – which means that when our OpenShift clusters migrate away from Docker as their default container runtime, we won’t need to rewrite anything!
  • Instead of creating a new process for building bespoke Linux agents, we can just scale up our OpenShift instances or build more OpenShift clusters
  • Our agents are software-defined using Dockerfiles – meaning that they can be managed by developers and tailored to their needs
  • We can easily manage resource requirements and limits using the OpenShift API
  • We can easily troubleshoot or destroy rogue jobs and manage their lifecycles using the OpenShift CLI/GUI
  • Scalability is handled by the Kubernetes orchestration layer – so to increase job throughput we just add more nodes; we don’t need to reconfigure anything

Limitations

This approach has a few limitations:

  • You will need a stable Kubernetes/OpenShift cluster – which is a significant undertaking if you don’t already have one!
  • Running jobs on a service-level platform (i.e., one that runs live services) may impact stability and performance (so it’s recommended to build OpenShift clusters strictly for running Jenkins jobs)
  • More complex than running static Linux agents – requires more development time and debugging
  • Can sometimes be difficult to troubleshoot – although the log at /var/log/jenkins/jenkins.log usually has the information you need!
  • We unfortunately need to run the Kaniko container as root – for now… (see the note below)
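
On OpenShift, that last point usually means granting the pod’s service account an SCC that permits running as root – something along these lines, reusing the jenkins-builder account and ci namespace assumed earlier:

# Allow pods running as the jenkins-builder service account to run as root,
# which the Kaniko executor currently requires.
oc adm policy add-scc-to-user anyuid -z jenkins-builder -n ci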

Additional Tools

There are other tools that we use at UKCloud in our build pipelines that weren’t used as part of this guide – they’re definitely worth mentioning!

Skopeo

Skopeo is an excellent tool for copying Docker images without having to rely on a Docker daemon. Because Docker images are essentially filesystem layers stacked on top of one another, we can easily manipulate these layers without relying on the Docker engine. Skopeo allows us to copy to and from different formats and locations. Want to save a local Docker image to a remote registry? No problem. Need to save a remote image to a local tar archive? That’s cool. Need to ship an image between registries? Yep, we can do that too.

Just a couple of Skopeo examples
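
The original screenshot isn’t shown here, but typical invocations look something like this (registry hostnames are placeholders):

# Copy an image straight from one registry to another - no Docker daemon needed
skopeo copy \
  docker://docker-registry.default.svc:5000/ci/ruby-build:latest \
  docker://registry.example.com/ci/ruby-build:latest

# Save a remote image to a local tarball for offline transfer
skopeo copy docker://alpine:latest docker-archive:/tmp/alpine-latest.tar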

We make extensive use of Skopeo at UKCloud as part of our build pipelines to ensure build artefacts are where we need them to be.

Summary

Hopefully, this blog has helped to explain how we at UKCloud create build pipelines using Jenkins & OpenShift. Because the Jenkins Kubernetes plugin is so flexible, the possibilities are near-enough endless.

If you are interested in trialling UKCloud’s OpenShift service, please click HERE to request a free trial.