Part 3 – Backend as a Service – Proof of Concept
This is the third part of a four-part series of posts in which we aim to run you through a proof of concept that we have been working on at UKCloud to deliver backend services to support the public sector. In the previous two posts, we discussed building a set of data services on OpenStack and provisioning those services from within IaaS. In this post, we build on this by integrating the service broker into OpenShift to enable developer-centric service provisioning.
Setting up OpenShift with the Service Catalog
Support for the Open Service Broker is very recent. There is a definite lack of documentation, but the community is working hard to remedy this. Currently the Service Catalog is only available in an upstream alpha release.
For our demo, we’ll be using the following repository to set up and configure the Service Catalog: https://github.com/fusor/catasb.
Within our OpenStack cloud we have created a fairly beefy CentOS 7 VM to host OpenShift and associated a floating IP with it. We have also set up a security group allowing ports 22, 443 and 8443.
We have added the DNS server for our data services to the VM’s /etc/resolv.conf. As discussed in the previous post, this is required so that the dynamically registered, user-provisioned service endpoints, which are passed back to the developer to access their services, can be resolved.
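As an illustration, the top of /etc/resolv.conf ends up looking something like this (the first nameserver address is a placeholder; use the address of the Consul DNS server from your own data-services deployment):
# Consul DNS server for the data services (placeholder address)
nameserver 10.0.0.2
# existing upstream resolver
nameserver 8.8.8.8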
We log in to the VM and install and configure Docker by running:
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
sudo yum install docker -y
We need to allow insecure registry access to the alpha Docker images that will run OpenShift.
sudo sed -i "s/\# INSECURE_REGISTRY='--insecure-registry'/INSECURE_REGISTRY='--insecure-registry=172.30.0.0\/16'/g" /etc/sysconfig/docker
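To confirm the change took effect:
grep INSECURE_REGISTRY /etc/sysconfig/docker
The line should now read INSECURE_REGISTRY='--insecure-registry=172.30.0.0/16'.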
Next, we enable and start Docker:
sudo systemctl enable docker
sudo systemctl start docker
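Before moving on, it’s worth confirming that the unprivileged user can talk to the Docker daemon (if this fails with a permission error, log out and back in so the group change takes effect):
docker ps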
Next, we need to install the remaining dependencies:
sudo yum install gcc python-devel openssl-devel -y
curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"
sudo python get-pip.py
sudo pip install ansible
sudo pip install six
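A quick sanity check that the tooling is in place:
ansible --version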
Clone the Ansible playbooks that will set up the Service Catalog and OpenShift:
git clone https://github.com/fusor/catasb
In our setup, we want to be able to access OpenShift externally, so before running Ansible we set the public IP in config/linux_env_vars (from the root of the cloned catasb repository). In our case this is the floating IP we set up earlier:
sed -ri 's/(export PUBLIC_IP.*)/#\1\nexport PUBLIC_IP="REPLACE_WITH_YOUR_IP"/g' config/linux_env_vars
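A quick check that the change was applied:
grep PUBLIC_IP config/linux_env_vars
The original export should now be commented out, followed by a line containing your floating IP.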
For this demo, we don’t need the Ansible Service Broker that Red Hat is working on, as we are integrating our own broker. To keep the catalog looking cleaner, we remove it from the Ansible setup.
To remove the broker run:
sed -i 's/- ansible_service_broker_setup/#- ansible_service_broker_setup/g' ansible/setup_local_environment.yml
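You can confirm the role has been commented out:
grep ansible_service_broker_setup ansible/setup_local_environment.yml
The matching line should now start with a #.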
If you decide to keep the broker, you instead need to set the Docker Hub organisation to pull it from:
sed 's/dockerhub_org_name: example_org/dockerhub_org_name: ansibleplaybookbundle/g' config/my_vars.yml.example > config/my_vars.yml
Finally run the setup:
cd local/linux/
./run_setup_local.sh
You’ll be prompted for Docker Hub credentials, but you do not need to provide these for this demo; just press enter.
Configuring your broker
First, we need to create an OpenShift secret containing the credentials used to authenticate against our broker’s Open Service Broker API.
Add the following file, broker-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: anynines-secret
  namespace: openshift
data:
  username: YWRtaW4K
  password: eDVlZmpybdDlqMTVhdsdsNzdqZmwK
The username and password must be base64 encoded.
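For reference, the values can be generated on the command line; the username value above is simply 'admin' encoded, and the password placeholder below should be replaced with your own broker password:
echo 'admin' | base64
echo 'your-broker-password' | base64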
Add the secret by running
oc create -f broker-secret.yaml
Next, add the broker by creating a file called broker.yaml:
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Broker
metadata:
  name: anynines-postgres
spec:
  url: http://postgresql-service-broker.service.dc1.consul:3000/
  authInfo:
    basicAuthSecret:
      name: "anynines-secret"
      namespace: "openshift"
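Create the broker resource by running:
oc create -f broker.yaml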
Once created, we can query the status of the broker by running:
oc describe broker anynines-postgres
Using the services
Now we have wired our services into the catalog, we can log in to OpenShift. In the alpha release, we are greeted with a catalog showing our new services.
Choosing this option, we are shown the plans that we expose via our external service catalog.
We follow the wizard to instantiate the service type that we want.
Looking at the overview page, we see the newly provisioned service.
For this demo, we’re going to create a new Rails app that uses a Postgres backend to test the services. We deploy the app with:
oc new-app https://github.com/charliejllewellyn/a9s_postgres_app
Once the app is building, we can go ahead and create a binding. The binding request will generate authentication and endpoint details for the service which will be passed to OpenShift.
OpenShift will create these as secrets in the tenancy.
Finally, these secrets are automatically added as environment variables within the container that you bound to so that your app can create the relevant connection string for the service.
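For reference, the equivalent binding created from the command line against the alpha Service Catalog API looks roughly like the following. This is a sketch only: the instance and secret names are hypothetical, and the exact fields may differ between alpha releases.
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Binding
metadata:
  name: anynines-postgres-binding
spec:
  instanceRef:
    name: anynines-postgres-instance
  secretName: anynines-postgres-credentials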
Once the app has built and deployed, we can log in to the container and run export to see our newly created binding details.
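For example (the pod name below is a placeholder, and the exact variable names depend on what the broker returns in the binding):
oc get pods
oc rsh <your-app-pod>
export | grep -i postgres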
Wrapping it up
This new functionality really feels like a step change not only for developers, but also for cloud providers, whether they be public or private.
Having an open standard to deliver value-added services to developers, whilst preventing lock-in, is incredibly powerful. Enabling a common framework that lets infrastructure providers implement whatever solution makes sense to them, while abstracting away the proprietary aspects, can only lead to far greater innovation.
Hats off to all those who are making this happen and thanks to Cloud Foundry for taking the initiative to open source the framework!