Part I – OpenShift: Deploying OpenShift with OpenShift Pipelines

UKCloud has been busy developing a new container platform to help the UK Public Sector focus development on driving value to their cause rather than managing infrastructure as code.

The platform is based on Red Hat OpenShift Container Platform, which is a container orchestration platform that builds on the capabilities of Docker and Kubernetes. In this series of blogs, we will explore how we have used OpenShift itself to build, validate, deploy and ultimately maintain multiple OpenShift clusters.

This first post is fairly high level, explaining the different stages in our deployment. We’ll be publishing subsequent posts that will dive deeper into some of the complex areas we explore.

We’ll be covering:

- OpenShift architecture
- OpenShift compute resources
- OpenShift deployment

OpenShift Architecture

We’re not going to go deep into the OpenShift architecture here but if you’re interested in more detail check out their docs.

From a logical perspective, OpenShift provides a control plane handling functions like the API and authentication, scheduling, and the cluster state store (etcd).

It also provides worker resources that allow users to deploy containers to the platform through a self-service portal or API.

These logical functions obviously need compute resources to run, and there are myriad ways these can be architected, from single-node instances to large clusters where roles are isolated on specific node groups.

At UKCloud we use a mix of these models. For example, our developers use Minishift on their laptops; we offer customer trials on our public IaaS (OpenStack) using smaller clusters with more combined roles; and for high-volume production workloads we run clusters with separate master, infrastructure and worker node roles.

OpenShift compute resources

Now that we understand the basic architecture, we can explore how we provision the compute resources to support OpenShift.

UKCloud runs OpenShift on standard operating systems like RHEL and CentOS, so consuming an Infrastructure as a Service (IaaS) platform makes a lot of sense. We have made a conscious decision to consume our own services as the foundation for OpenShift. There are several advantages to UKCloud consuming our own services.

We have taken deliberate efforts to decouple the compute provisioning from the OpenShift deployment because we know that infrastructure changes. In the future, we may deliver OpenShift on our VMware cloud or on bare metal so being able to re-use elements of the provisioning pipeline makes sense.

We currently use HEAT to deploy the underlying compute resources on OpenStack.

Base infrastructure setup:

Stage 1

We use HEAT resource declarations to set up networking infrastructure like routers and networks, and then create groups of servers for specific functions, like the master nodes.

  master_group:
    type: OS::Heat::ResourceGroup
    depends_on: [ internal_net, bastion_host ]
    properties:
      count: { get_param: master_scale }
      resource_def:
        type: server_atomic.yaml
        properties:
          server_name: master-%index%
          flavor: { get_param: flavor }
          image: { get_param: image }
          key_name: { get_param: key_name }
          rhn_orgid: { get_param: rhn_orgid }
          rhn_activationkey: { get_param: rhn_activationkey }
          networks:
            - network: { get_attr: [ internal_net, network ] }
          storage_setup: |
            GROWPART=true
            ROOT_SIZE=40G
            DATA_SIZE=15G
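
The server_atomic.yaml referenced above is a nested template that wraps the actual server resource. As a minimal sketch (the property names and template contents here are illustrative assumptions, not our exact template), it might look like:

```yaml
# server_atomic.yaml -- illustrative sketch of the nested server template
heat_template_version: 2016-04-08

parameters:
  server_name: { type: string }
  flavor: { type: string }
  image: { type: string }
  key_name: { type: string }
  rhn_orgid: { type: string }
  rhn_activationkey: { type: string, hidden: true }
  networks: { type: json }
  storage_setup: { type: string }

resources:
  server:
    type: OS::Nova::Server
    properties:
      name: { get_param: server_name }
      flavor: { get_param: flavor }
      image: { get_param: image }
      key_name: { get_param: key_name }
      networks: { get_param: networks }
      # SOFTWARE_CONFIG is what lets os-collect-config receive
      # software deployments from the HEAT API later on
      user_data_format: SOFTWARE_CONFIG
      # the real template also wires storage_setup and the RHN
      # parameters into cloud-init; elided here for brevity

outputs:
  server_id:
    value: { get_resource: server }
```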


Stage 2

Once HEAT has assigned all of the metadata, such as IP addresses and NATs, it returns the results to the HEAT stack.

OpenShift environment variables:

Stage 3

Once the base servers are provisioned, we use HEAT's software deployment resources to inject Ansible code that prepares the infrastructure for OpenShift to run. This uses os-collect-config (which we install with cloud-init when the servers are built in the HEAT deployment above) to collect the Ansible code we wish to run from the HEAT API. This Ansible code passes in all of the hostnames and IP addresses OpenStack assigned in the stage above, so we can tell OpenShift about the shape of the platform.

More information about using HEAT's OS::Heat::SoftwareConfig and OS::Heat::SoftwareDeployment resources can be found in Steve Hardy's excellent blog post.
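
As a hedged sketch of the pattern (resource names, attribute names and the playbook body are illustrative, not our exact code), a SoftwareConfig holds the Ansible playbook and a SoftwareDeployment binds it to a server along with the values HEAT assigned:

```yaml
  prep_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ansible          # executed by the heat-config ansible hook on the node
      inputs:
        - name: master_hostnames
        - name: master_ips
      config: |
        ---
        - hosts: localhost
          tasks:
            - name: Record the cluster shape for the OpenShift install
              copy:
                content: "{{ master_hostnames }} {{ master_ips }}"
                dest: /etc/openshift-prep/cluster-shape

  prep_deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: { get_resource: prep_config }
      # assumes the nested server template exposes a server_id output
      server: { get_attr: [ master_group, resource.0.server_id ] }
      input_values:
        master_hostnames: { get_attr: [ master_group, server_name ] }
        master_ips: { get_attr: [ master_group, first_address ] }
```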

Stage 4

Once os-collect-config has run the Ansible code, it POSTs the exit code of the run, along with stdout and stderr, to the HEAT notification endpoint so we can confirm the status of the deployment.
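
That signal surfaces as standard output attributes on the OS::Heat::SoftwareDeployment resource, along the lines of:

```yaml
# Standard attributes exposed by a SoftwareDeployment once signalled
deploy_status_code: 0            # exit code of the Ansible run
deploy_stdout: "PLAY RECAP ..."  # captured standard output (contents illustrative)
deploy_stderr: ""                # captured standard error
```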

OpenShift Deployment

Now we have the base infrastructure provisioned we can run the OpenShift deployment.

We are using the openshift-ansible project (https://github.com/openshift/openshift-ansible) to deploy OpenShift itself. This is done by pulling the project and running Ansible from the build server.
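
openshift-ansible is driven by an inventory describing the cluster shape, which is where the hostnames and addresses gathered in the earlier stages end up. A minimal sketch for a cluster with separated roles (hostnames and variable values here are illustrative) might look like:

```ini
# inventory/hosts -- illustrative openshift-ansible inventory
[OSEv3:children]
masters
etcd
nodes

[OSEv3:vars]
ansible_user=cloud-user
openshift_deployment_type=openshift-enterprise

[masters]
master-[0:2].example.com

[etcd]
master-[0:2].example.com

[nodes]
master-[0:2].example.com
infra-[0:1].example.com openshift_node_labels="{'region': 'infra'}"
worker-[0:4].example.com
```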

We run some pre-deployment code which:

We then run the OpenShift deployment to install OpenShift, and finally run a post-deployment playbook that sets up things like the persistent storage classes that support our different OpenStack tiers of storage.
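
For example, a storage class can map persistent volume claims onto a Cinder volume type via the Kubernetes Cinder provisioner (the class and tier names here are illustrative):

```yaml
# Illustrative StorageClass for one OpenStack Cinder storage tier
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: tier2-block
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/cinder
parameters:
  type: TIER2          # Cinder volume type to provision from
  availability: nova   # availability zone for the volumes
```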

At the end of this phase we *should* have a fully functioning OpenShift service. I say should because at this point we haven’t yet validated that the service is working as it should.

In part II and part III we’ll explore building Jenkins pipelines to deploy and validate OpenShift.

Authors: Charles Llewellyn & Steve Mulholland