Part II – OpenShift: Deploying OpenShift With OpenShift Pipelines

In Part I we explored how we used OpenStack IaaS orchestration through Heat to build a base set of compute resources to support an OpenShift deployment. We also ran through using Ansible to dynamically determine the base infrastructure details, build an appropriate deployment manifest, and then push the OpenShift deployment out.

In this second post we will introduce OpenShift Pipelines.

OpenShift Pipelines for Jenkins Slaves

This is where the clever part really starts. OpenShift has tight integration with Jenkins. Jenkins is pretty much the de facto tool used for continuous integration (although Concourse seems to be getting a lot of attention lately). In many respects, it’s a glorified scheduler.

Jenkins now has a “core” concept of pipelines, which allows users to declaratively build pipelines that build, test and promote software between various stages, from development through to production.

OpenShift is tightly integrated with Jenkins pipelines: it dynamically provisions containerised Jenkins instances and manages and tracks the jobs running within them.

It is this tightly integrated continuous integration environment that developers love about OpenShift, and it is exactly what we use to manage the build of OpenShift described above, allowing us to continuously integrate our OpenShift platform as a service. We are in the very early stages of our development, but our current use cases are as follows:

  • New builds – New feature releases

We trigger builds when we make pull requests to enable new features within OpenShift; for example, we recently released persistent storage integrated with OpenStack Cinder. This particular pipeline builds and validates a platform from scratch, as defined above.

  • Existing builds – New feature releases

This is the same trigger as above but validates making the change on a pre-existing platform.

  • Upgrades – New builds

This triggers when a new OpenShift version is released and builds and validates a platform from scratch as defined above.

  • Upgrades – Existing builds

This, again, is the same as above but validates the upgrade from an existing version to the new version.

Everything as code

First off, the configuration of OpenShift that supports our CI pipelines needs to be defined in code, so we can blow away our management OpenShift platform at any time and successfully rebuild it.

For this we have some initialisation code that sets up our required environments (projects), deploys the Jenkins master that will manage the slaves and pipelines, and pre-seeds the environment with variables and secrets, such as the OpenStack endpoints and credentials for the tenancies we use during CI.
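As a rough sketch, that initialisation boils down to a handful of oc commands. The project names, credential values and secret keys below are illustrative placeholders, not our actual values (only the build-openshift-pre-prod namespace appears in our real pipeline):

```shell
# Create the CI project and the promotion target project
oc new-project build-openshift-ci          # illustrative name
oc new-project build-openshift-pre-prod    # promotion target used by the pipeline

# Deploy the stock Jenkins master from the built-in OpenShift template
oc new-app jenkins-persistent -n build-openshift-ci

# Pre-seed OpenStack endpoints and credentials as a secret for the pipelines
# (keys and values here are hypothetical)
oc create secret generic openstack-credentials \
    --from-literal=OS_AUTH_URL=https://keystone.example.com:5000/v3 \
    --from-literal=OS_USERNAME=ci-user \
    --from-literal=OS_PASSWORD=changeme \
    -n build-openshift-ci
```

Because all of this is plain CLI (or the equivalent templates), the whole management platform can be torn down and rebuilt from the repository at any time.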

CI for our CI

The first challenge we had was integrating with OpenStack: the OpenShift Jenkins containers provided by Red Hat do not include the OpenStack tooling we need to manage the deployment defined earlier.

Whilst it is possible to build a new master Jenkins image and replace the default one used when creating the management OpenShift platform, we didn’t want to break some of the logical flows Red Hat have already put in place. To this end we kept the default master image and built out a custom Jenkins slave. This has the advantage of keeping our base management cluster as vanilla as possible.

In line with the philosophy of CI we created a pipeline to create and validate a build of a new Jenkins slave with the OpenStack tools.

We have created a very simple pipeline described below.

In the first part we write a Dockerfile that adds another Docker layer, installing the OpenStack tools on top of the generic Red Hat Jenkins slave container. The code is incredibly simple:


FROM registry.access.redhat.com/openshift3/jenkins-slave-base-rhel7:latest
USER root
# Install build dependencies and the Ansible/OpenStack tooling, then clean up
RUN yum install -y python-devel gcc ansible openshift-ansible vim && \
    curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py" && \
    python get-pip.py && \
    pip install python-openstackclient python-heatclient && \
    yum clean all
USER 1001

This is saved to a Git repository so it can be monitored and dynamically pulled by OpenShift whenever a new build is triggered.

We then create a build config in OpenShift which references the Dockerfile and tells it how to perform the build.
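A build config of this kind looks roughly like the following sketch; the repository URL is a hypothetical stand-in for our actual repo, and the image stream names assume the streams described next:

```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: openstack-jenkins-slave
spec:
  source:
    type: Git
    git:
      uri: https://example.com/our-org/openstack-jenkins-slave.git  # hypothetical repo
  strategy:
    type: Docker
    dockerStrategy:
      from:
        kind: ImageStreamTag
        name: jenkins-slave-base-rhel7:latest   # the upstream Red Hat base image
  output:
    to:
      kind: ImageStreamTag
      name: openstack-jenkins-slave:latest      # our custom slave image
```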

Within OpenShift we create two image streams. An image stream allows us to automatically perform actions, such as building an image or undertaking a deployment, when certain events are triggered.

Our first image stream monitors the upstream Jenkins slave image from Red Hat for any changes. Our second monitors for any changes to our own Jenkins slave image, which we produce as an artifact of our build pipeline. Currently we don’t trigger any actions when this gets updated, as this is handled by our pipeline below.
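The two image streams can be sketched as follows (a minimal illustration, assuming the names used above):

```yaml
# Tracks Red Hat's upstream slave base image for changes
apiVersion: v1
kind: ImageStream
metadata:
  name: jenkins-slave-base-rhel7
spec:
  tags:
  - name: latest
    from:
      kind: DockerImage
      name: registry.access.redhat.com/openshift3/jenkins-slave-base-rhel7:latest
    importPolicy:
      scheduled: true   # periodically re-import to detect upstream updates
---
# Holds the slave image our build pipeline produces; no triggers attached
apiVersion: v1
kind: ImageStream
metadata:
  name: openstack-jenkins-slave
```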

Both the build config and the image streams are created via an OpenShift template.

Finally, we create a special build config (a pipeline) that defines:

  • How to build the Jenkins slave by referencing the build config we created above
  • How to push the newly built image to the development registry
  • How to download the newly built docker image from the development registry
  • How to test the image is working
  • If the tests are successful, how to push the image to the pre-production registry

This special build config is set to monitor triggers from the image stream tracking the upstream Red Hat image, and we also configure a githook on the repository where the Dockerfile and Jenkinsfile live, so any change to the build code will trigger the pipeline.
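In OpenShift terms this is a build config using the JenkinsPipeline strategy; a sketch might look like the following, where the repository URL is hypothetical and the webhook secret is elided:

```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: openstack-jenkins-slave-pipeline
spec:
  source:
    type: Git
    git:
      uri: https://example.com/our-org/openstack-jenkins-slave.git  # hypothetical repo
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfilePath: Jenkinsfile      # the pipeline file shown below
  triggers:
  # Re-run the pipeline when Red Hat update the upstream base image
  - type: ImageChange
    imageChange:
      from:
        kind: ImageStreamTag
        name: jenkins-slave-base-rhel7:latest
  # Re-run the pipeline on pushes to the Dockerfile/Jenkinsfile repository
  - type: GitHub
    github:
      secret: <webhook-secret>
```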

The pipeline described above is simply a Jenkins pipeline file, as shown below:

node {
  stage('build') {
    openshiftBuild(buildConfig: 'openstack-jenkins-slave', showBuildLogs: 'true')
  }
}
node('openstack-jenkins-slave') {
  stage('test') {
    sh("openstack -h")
    sh("openstack stack -h")
  }
}
node {
  stage('promote') {
    openshiftTag(sourceStream: 'openstack-jenkins-slave', sourceTag: 'latest',
                 destinationNamespace: 'build-openshift-pre-prod',
                 destinationStream: 'openstack-jenkins-slave',
                 destinationTag: 'latest')
  }
}

At the end of this setup we have a tested Jenkins slave image we can use for the final, exciting phase: building a pipeline to validate a new OpenShift deployment.