Part 4 | Backend as a Service – proof of concept

Part 4 – Dataservices features and roadmap

This is the final part of a four-part series in which we run you through a proof of concept we have been working on at UKCloud to deliver backend services to support the public sector. In the first three parts, we discussed deploying anynines data services to OpenStack and demonstrated a couple of options for consuming those services from both IaaS and PaaS.

Okay, so that’s pretty cool, but what about the service features that we expect: failover, backups, updates and so on? This is where BOSH really shines, in my opinion. For this example, we have created a small Postgres cluster that configures itself for high availability (HA) as defined by the service plan.


Using BOSH, we’ll query the status of the service instances.

bosh vms -n d71da55

This returns the details of the VMs dedicated to the customer’s service.


We can take the IP of one of those VMs and destroy that VM in OpenStack.



Once the VM is gone, we can see BOSH pick up the failure.


BOSH will now resurrect the VM by rebuilding it and integrating it back into the cluster to fully restore service.


From a customer perspective, Postgres managed the failover during this period, so the application incurred no downtime.

We’ve seen how simple it is to interact with services, but as all good administrators know, that’s only part of the story. There are other tasks involved in maintaining the service. We’re going to talk about two dominant ones: backups and updates.


In our POC, we run backups twice a day. Backups can be queried via a dedicated API, which returns the backup and restore details for each service instance. An administrator of the services can then perform a restore.
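To sketch how an administrator might consume that API programmatically, the helper below picks out the most recent backup for each instance from the JSON the backup manager returns. This is a hypothetical example: the field names (`instance_guid`, `backups`, `created_at`, `backup_id`) are assumptions about the response shape, not documented anynines API, and the payload is fabricated for illustration.

```python
# Hypothetical helper: given the parsed JSON list returned by the backup
# manager's /instances endpoint, return the newest backup per instance.
# NOTE: the field names below are assumed, not documented anynines API.
def latest_backups(instances):
    result = {}
    for inst in instances:
        backups = inst.get("backups", [])
        if backups:
            # Assumes ISO 8601 timestamps, which sort correctly as strings.
            result[inst["instance_guid"]] = max(backups, key=lambda b: b["created_at"])
    return result

# Fabricated payload of the assumed shape, using the instance GUID from
# this post's curl examples:
sample = [
    {
        "instance_guid": "bfd8a05c-6b93-11e7-907b-a6006ad3dba0",
        "backups": [
            {"backup_id": 18, "created_at": "2017-07-18T06:00:00Z"},
            {"backup_id": 19, "created_at": "2017-07-18T18:00:00Z"},
        ],
    }
]

print(latest_backups(sample)["bfd8a05c-6b93-11e7-907b-a6006ad3dba0"]["backup_id"])
```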

Currently the backups are pushed to Amazon S3; however, we aim to move this store to UKCloud’s S3-compatible object storage.

For this example, we will write some data to our Postgres database.

psql -U a9sc718927f190d5aea9beda9d6a7e0f0ba93edf733 -h d71321b-psql-master-alias.node.dc1.consul -d d71321b -f sql-demo.sql

This will create a table called films and add a row. Querying the database now returns:


We can also force an immediate backup via the anynines backup API. First, we list the service instances to find the instance GUID:

curl -uadmin:password http://backup-service-backup-manager.service.dc1.consul:3000/instances


We can then trigger the backup for that instance:

curl -uadmin:password http://backup-service-backup-manager.service.dc1.consul:3000/backup_agent/backup -d "instance_guid=bfd8a05c-6b93-11e7-907b-a6006ad3dba0" -H "Accept: application/json"

Once the backup is complete, which we determine via the API, we’ll delete the data by dropping the database. Querying the table now returns:


No data. Ouch!

With a simple command we can restore the database backup that we took.

First, we determine our backup information via the API:

curl -uadmin:password http://backup-service-backup-manager.service.dc1.consul:3000/instances | jq '[.[7]]'


We use this information to perform the restore.

curl -uadmin:password http://backup-service-backup-manager.service.dc1.consul:3000/backup_agent/restores -H "Accept: application/json; charset=UTF-8" -d "backup_id=19" -d "instance_id=8"


Querying the backup status again will show us whether it’s complete.
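Rather than re-running the status query by hand, the check can be wrapped in a small polling loop. This is a minimal sketch under stated assumptions: the predicate looks for a `status` field that eventually reads `"done"`, which is our guess at the response shape rather than documented anynines behaviour, and the URL would be one of the backup-manager endpoints shown above.

```python
import time
import urllib.request

def wait_for(url, is_done, interval=10, attempts=30):
    """Poll `url` until `is_done(body)` returns True or attempts run out.

    Hypothetical helper; `url` would be a backup-manager endpoint like
    those in the curl commands above.
    """
    for _ in range(attempts):
        with urllib.request.urlopen(url) as resp:
            if is_done(resp.read().decode()):
                return True
        time.sleep(interval)
    return False

def restore_done(body):
    # ASSUMPTION: the response carries a "status" field that reads "done"
    # on completion; the real anynines field names/values may differ.
    return '"status": "done"' in body
```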


If we return to postgres and query the films table we see our table and data once again.



If the service is a single instance, there will be a small amount of downtime as the endpoint switches to the new VM; during HA cluster upgrades, however, there is no downtime.

To update the VMs or packages, we either pull new manifests from anynines or a new BOSH stemcell, which replaces the underlying OS. Once this is done, we can tell the customer instance to update by running:

curl -q --user admin:password -X PATCH -d '{}' http://postgresql-service-broker.service.dc1.consul:3000/v2/service_instances/bfd8a05c-6b93-11e7-907b-a6006ad3dba0
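The same PATCH call can be scripted. The sketch below mirrors the curl command above in Python; the broker hostname, port, path and instance GUID are taken from the command, while the helper names and the explicit JSON content type are our own additions.

```python
import base64
import urllib.request

def instance_url(broker, guid):
    # URL shape taken from the service broker curl command above.
    return "http://{}:3000/v2/service_instances/{}".format(broker, guid)

def build_update_request(broker, guid, user="admin", password="password"):
    # Equivalent of: curl --user admin:password -X PATCH -d '{}' <url>
    req = urllib.request.Request(instance_url(broker, guid), data=b"{}", method="PATCH")
    token = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    # curl -d would send form-encoded data; we declare JSON explicitly,
    # since the body is an (empty) JSON object.
    req.add_header("Content-Type", "application/json")
    return req

# Sending it is then a one-liner (not executed here):
# urllib.request.urlopen(build_update_request(
#     "postgresql-service-broker.service.dc1.consul",
#     "bfd8a05c-6b93-11e7-907b-a6006ad3dba0"))
```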

We can query BOSH and see that a task to update the deployment has started.


Once complete, we can access our database with the security patches applied.

What’s next?

This has been an incredibly valuable POC for UKCloud, and there are many packages of work required to build this out into a production service.

There is a lot of work to do even to get this to a beta phase. However, we have a high-level view of the tasks we will be working on over the next few sprints to help us deliver this service.

  • Detailed architecture review and design
  • Automated deployment
  • Security review and penetration testing
  • Performance testing
  • Automated validation pipelines

As for anynines, they are committed to evolving their data services capabilities. Their current roadmap includes:

  • Self-service backup and restore
  • Self-service upgrades
  • Support for other S3 endpoints
