Behind the Scenes With OpenShift

Here at UKCloud we pride ourselves on giving our customers innovative technology and choice through a range of cloud technologies, from OpenStack to Azure Stack. Actually developing and running these technologies is no easy feat, however. We spent twenty minutes with Steve Mulholland, our Technical Authority for OpenShift, to find out what happens behind the scenes and what goes into developing our clouds.

What has the OpenShift team been working on recently?

Our primary focus in the last couple of months has been on testing the integration of Red Hat OpenShift Container Platform with the Cloud Native Storage solution from Portworx, and on testing the new v4 release of OpenShift. A large part of this work involves us deploying these technologies in our predefined architectures that we deliver to customers and testing the new features and capabilities that they offer so we can provide effective support in their adoption and usage.

To explain our architecture briefly: we design the architecture and deployment to suit each customer's requirements. This can include deployment in a multi-network scenario, giving customers the ability to deploy applications to different networks. This could be a private network that connects to other cloud stacks, for example linking VMware and OpenStack, or one that gives access to community networks such as HSCN or PSN as well as the internet.

This means customers don't need to build applications in different places: they can target the same application platform and use labelling inside OpenShift to offer their applications on one or more of those networks at any given time, with segregation provided by OpenShift and Kubernetes technologies. As part of the validation process, we have been modifying the infrastructure that OpenShift is deployed on.
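As a rough sketch of what that labelling can look like (the `network` label key and its values here are illustrative, not our actual scheme): an OpenShift route can carry a label that a router shard dedicated to a given network uses to decide whether to expose the application there.

```yaml
# Hypothetical example: a router shard configured for the HSCN-facing
# network would admit only routes carrying a matching label. The label
# key/value and application name are placeholders.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: patient-portal
  labels:
    network: hscn        # expose this application via the HSCN-facing router
spec:
  to:
    kind: Service
    name: patient-portal
```

The same application could add a second route labelled for the internet-facing router, making it available on both networks at once without being rebuilt or redeployed.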

An interesting point here is that we are ourselves a customer of OpenStack, which is great because we continually test the OpenStack cloud as we deploy OpenShift on top of it. Whenever we modify our code, we use Jenkins Pipelines to validate that our deployments still work.

What all of this means is that when a customer requests OpenShift with HSCN connectivity, for example, it is deployed rapidly with no configuration needed from the customer's side; we take care of the architecture. Customers are given an OpenShift cluster with the appropriate connectivity, ready to deploy their applications from GitHub or a container registry and make them immediately available to their onward customers.

How many lines of code do you write?

The amount of code we write varies month to month, depending on the work we are doing. For example, we recently had to validate an internet-only design pattern: the actual code change was minimal, as the work was mostly about testing and proving that it worked. However, when we needed to deploy a new proxy, that required a lot of new code to deploy the proxy into the environment and configure connectivity through it.

In my opinion, the number of lines of code you write isn't a good measure of progress, or of how effective, busy or efficient you are, because good code is succinct. With tools like Ansible in particular, you can use a module to apply changes in a repeatable and consistent manner, as opposed to a shell script that may be more lines of code but less reliable.
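To illustrate the point (a minimal sketch; the file path and proxy address are assumptions for illustration): an Ansible module states the desired end state once and is safe to re-run, whereas the shell equivalent needs its own guard logic to achieve the same reliability.

```yaml
# Illustrative playbook comparing the two approaches; values are placeholders.
- hosts: all
  become: true
  tasks:
    # Idempotent: lineinfile converges to the desired state and reports
    # "changed" only when it actually edits the file.
    - name: Ensure the proxy variable is set (module)
      lineinfile:
        path: /etc/environment
        line: "HTTP_PROXY=http://proxy.example.internal:3128"

    # The shell version needs an explicit guard to avoid appending a
    # duplicate line on every run - more code, and easier to get wrong.
    - name: Ensure the proxy variable is set (shell equivalent)
      shell: |
        grep -q '^HTTP_PROXY=' /etc/environment || \
          echo 'HTTP_PROXY=http://proxy.example.internal:3128' >> /etc/environment
```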


Is all of your code publicly available?

Yes, absolutely: all of our code sits in GitHub, because we are working in the public sphere. We work on the internet using our OpenStack public cloud, validating and testing the OpenStack technology on a daily basis. We use the OpenStack public cloud just like a customer would, which is great for both teams.

I spoke about this in more detail, and explained how we use OpenShift at UKCloud, at OpenShift Commons back in January 2018 – watch the video here.

As UKCloud work with public sector organisations, we regularly undertake security testing of our cloud platforms. Any findings from these tests that require changes to the platform are fed back to Red Hat, which in turn results in more secure platforms for all OpenShift and Kubernetes community users.

Why do you need to create new code? Why not use it out of the box?

We use a lot of code from the upstream community. With OpenShift in particular, it's important we stay within the bounds of the supported deployment models and code, to ensure we still get support from Red Hat, as that's a compelling part of the package that OpenShift offers.

The main area where we deviate from this is in writing our own code to deploy the underlying infrastructure and connectivity that OpenShift sits on top of. This is because the OpenShift service is very specific: we provide bespoke architecture to meet each customer's requirements.

When we come to deploy and configure this infrastructure, and to tweak the configuration of clusters post-deployment, it's all done through modular, parameterised code, whether that's OpenStack Heat templates or Ansible playbooks. This ensures we can meet a variety of architectural requirements without having to manually configure and maintain any clusters.
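As a rough sketch of what "parameterised" means here (the resource names, image and flavour values are illustrative, not production values): a Heat template can take the target network and instance size as parameters, so the same template serves an internet-only design as well as an HSCN-connected one.

```yaml
# Illustrative Heat template fragment; parameter names and defaults
# are assumptions for the example.
heat_template_version: 2018-08-31
description: Parameterised OpenShift node (sketch)

parameters:
  node_flavor:
    type: string
    default: m1.large
  app_network:
    type: string
    description: Network to attach the node to (e.g. internet or HSCN)

resources:
  app_node:
    type: OS::Nova::Server
    properties:
      flavor: { get_param: node_flavor }
      image: rhel-server-7
      networks:
        - network: { get_param: app_network }
```

Passing a different `app_network` at stack-creation time is all it takes to place the same node design on a different network.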


What is the team like, and who do you work with?

We are a fairly small team of engineers, but we work with quite a variety of technologies, from OpenShift and Kubernetes through to Elasticsearch, Jenkins, OpenStack Heat templates and monitoring tools like Prometheus and Grafana – suffice to say we're a multi-skilled team. Most of us come from an infrastructure and infrastructure-automation background; however, the people we interact with are primarily developers who just want to run their applications, so it helps that we speak developer language and understand developer needs.

Although the immediate team is small, we also work closely with the OpenStack team that manages the IaaS platform we deploy on top of. In fact, I came from that team to begin the work on OpenShift when we first started to develop our OpenShift product.

We also use OpenShift internally as a business, which means we can draw on our internal experience of OpenShift, both in running and maintaining customers' clusters and in providing help and support to customers based on how we deploy and configure applications ourselves.

In effect, this means that our OpenShift team extends far beyond the core engineers focused specifically on customer platforms.

Can you describe the feeling you get when you finally see a piece of code working?

It's incredibly satisfying when something you've been working on for a while finally clicks and works exactly as you expect. But when you factor in project timescales and milestones, it can also be a bit of a relief when something finally works.

A good example is the proxy configuration for v3.11 of OpenShift that we needed to implement due to some upstream changes. We knew customer deployments were coming up and we wanted those deployments on the latest version, so there was no overhead for customers in terms of an upgrade later down the line. For this to be possible we had to deliver the new capability very quickly and test it thoroughly. It was a huge relief when we finally got it working, and it meant that the customer deployments could go ahead on the latest version.
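For context, the openshift-ansible installer in the 3.x releases exposes cluster-wide proxy settings as inventory variables. A fragment of such an inventory might look like this (the proxy address, port and domains are placeholders, not our configuration):

```ini
# Illustrative inventory fragment for a proxied OpenShift 3.11 install;
# all values below are placeholders.
[OSEv3:vars]
openshift_http_proxy=http://proxy.example.internal:3128
openshift_https_proxy=http://proxy.example.internal:3128
openshift_no_proxy=.cluster.local,.example.internal
```

Getting settings like these right, and proving that builds, image pulls and node traffic all flow correctly through the proxy, was the bulk of the testing effort.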

Which technology team works hardest?

I'd say every technology team on the platform works hard; we are all delivering major features across all products for different customers. The UKCloud differentiator for me is the focus on customers – customers are at the centre of everything we do. Our team collaborates well with all of the cloud technology teams, and together we are all working hard.

What are you most looking forward to in the next 6 months? 

I'm looking forward to getting v4 of OpenShift delivered, as this brings more automation to cluster configuration and further increases customers' ability to self-serve on their cluster configuration. It should make auto-scaling much easier and reduce customers' dependency on us to configure the underlying hosts, moving closer to a fully self-service experience for those who want it.
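In OpenShift 4, that auto-scaling is driven declaratively through the Machine API. A minimal sketch (the MachineSet name and the replica and node limits are illustrative) pairs a ClusterAutoscaler with a MachineAutoscaler targeting a worker MachineSet:

```yaml
# Illustrative OpenShift 4 autoscaling resources; names and limits
# are placeholders, not recommended values.
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    maxNodesTotal: 12        # hard ceiling on cluster size
---
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-autoscaler
  namespace: openshift-machine-api
spec:
  minReplicas: 3
  maxReplicas: 9
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: worker-machineset  # placeholder MachineSet name
```

With resources like these in place, the cluster adds and removes worker nodes on its own as demand changes, which is exactly the reduced dependency on manual host configuration mentioned above.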

It should also be a significant turning point for OpenShift, with Red Hat's work to integrate the CoreOS features into OpenShift coming to a conclusion. That's exciting because it means the roadmap of future features can start to be realised more quickly, with things like a supported Service Mesh deployment and Knative being items I'm looking forward to offering to our customers.

This gives you an insight into how we at UKCloud work hard to deliver choice to our customers. The sheer volume of work produced by the teams over a year is staggering, and is testament to our dedication and commitment to customers within the UK public sector community.

For more information on the platforms visit: