Backend as a Service – Driving Business Value

We’re going to take a look at the evolution of Backend as a Service. In this series of two blog posts, we’ll cover how to drive business value by utilising Backend as a Service (BaaS).

Sounds dodgy, eh? Backend as a Service is a logical progression from the efficiencies we’ve seen through the adoption of cloud. Cloud computing has enabled infrastructure as code to drive more effective infrastructure management; however, that is just the beginning of the journey.

If we take a step back and ask ourselves why all this infrastructure exists in the first place, the answer is pretty simple: to support business processes and drive more value.

What we’ve seen over the past few years is a move away from inflexible, monolithic IT management to collaborative, multi-disciplinary teams that share ideas between traditional infrastructure operators and developers.

However, infrastructure as a service (IaaS) has fast become another bottleneck. Deploying and managing servers, load balancers and firewalls takes enormous effort from teams that now possess development skills and could be delivering value further up the stack. Wouldn’t it be better to take the problem of managing servers away completely?

Evolution

This article will elaborate on the evolutionary journey shown below.

In the beginning, we saw a disruptive technology (virtualisation) cause a paradigm shift in the way organisations consume IT services, through infrastructure as a service (IaaS). This changed the shape of teams and led IT operations staff to work more closely with developers, allowing each to share their discipline and operate infrastructure more effectively.

As these teams embedded software practices into the way they drive infrastructure, commonalities were noted in how applications were deployed, allowing standardised approaches to common infrastructure problems like high availability and scaling. This gave rise to the movement known as DevOps, in which cross-disciplinary teams drive business value by operating standardised platforms (PaaS), usually coupled with IaaS for traditional services like databases.

We are now seeing a further collapse of the infrastructure function: by consuming backend services from service providers, teams can concentrate on developing their applications rather than developing and managing infrastructure as code.

Disruptive technology

In the same way that virtualisation was the catalyst for cloud (IaaS) adoption, containers and micro-services are the catalysts for PaaS and more recently BaaS.

Containerisation

Traditionally, applications were deployed directly to operating systems. These applications were tightly coupled to specific libraries, causing complexities in reproducing environments for testing and difficulties in scaling independent parts of an application.

Containerisation partially addresses these issues by allowing repeatable, self-contained units to be created and run in a predictable fashion anywhere: on a local developer machine, an internal IT platform or a cloud service, in a development, pre-production or production environment.

There’s a good article here that explains this in more detail.
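To make this concrete, here is a minimal sketch (in Go, chosen purely for illustration) of the kind of self-contained service that would typically be built into a container image. The port variable and health-check path are assumptions for the example, not part of the original scenario.

    // A minimal, self-contained HTTP service: the kind of unit that is built
    // once into a container image and then run unchanged on a laptop, an
    // internal platform or a cloud service.
    package main

    import (
        "fmt"
        "log"
        "net/http"
        "os"
    )

    func main() {
        // Configuration comes from the environment, so the same image can run
        // in development, pre-production or production without modification.
        port := os.Getenv("PORT")
        if port == "" {
            port = "8080"
        }

        http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "ok")
        })

        log.Printf("listening on :%s", port)
        log.Fatal(http.ListenAndServe(":"+port, nil))
    }

The same artefact, packaged once, behaves identically wherever the container runs.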

Micro-services Architecture

Whilst containers have allowed us to standardise application delivery, application architecture has also been key to taking advantage of containers to deploy and scale applications cost-effectively.

Traditional Architecture

In the following examples, we have a simple online service that allows citizens to register for a new license. The service provides the following functions:

  • Authentication
  • License registration
  • A forum
  • A search service

In a traditional IaaS setting, the application architecture may be as simple as the following diagram.

All the functions of the site are deployed on a set of application servers, which in turn store and retrieve data from a centralised database cluster. The application is monolithic (i.e. a single binary that is deployed to all application servers), and any change could easily impact the entire service.

Deploying infrastructure like this is inefficient. What happens if the forum suddenly gains a huge user base? The application servers must be scaled and the application binary, containing all parts of the application, deployed to each of them. This means all parts of the site are scaled unnecessarily, presuming it is even possible to scale every component across the number of servers needed to support the forum. Solving this problem may require extensive rewrites, redesign and redeployment of the underlying infrastructure.
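As a rough sketch of that monolithic shape (in Go, with hypothetical handler names that are not part of the original example), all four functions live in a single binary, so they can only be deployed and scaled together:

    // A sketch of the monolithic shape: authentication, license registration,
    // the forum and search all live in one binary, so scaling or redeploying
    // any one of them means scaling or redeploying all of them.
    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/auth", handleAuth)         // authentication
        mux.HandleFunc("/register", handleRegister) // license registration
        mux.HandleFunc("/forum", handleForum)       // forum
        mux.HandleFunc("/search", handleSearch)     // search

        // One process, one deployment unit, one scaling decision for everything.
        log.Fatal(http.ListenAndServe(":8080", mux))
    }

    // Placeholder handlers standing in for the real functions of the site.
    func handleAuth(w http.ResponseWriter, r *http.Request)     { w.Write([]byte("auth")) }
    func handleRegister(w http.ResponseWriter, r *http.Request) { w.Write([]byte("register")) }
    func handleForum(w http.ResponseWriter, r *http.Request)    { w.Write([]byte("forum")) }
    func handleSearch(w http.ResponseWriter, r *http.Request)   { w.Write([]byte("search")) }

Scaling the forum here means scaling this whole process, authentication and search included.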

Micro-services Architecture

Moving to a micro-services architecture based on containers may look like the following.

In this model, we gain much more control over the application. We have moved to a position where each of the application’s functions is split into a standalone, self-contained service that interacts with the others through a defined set of APIs, meaning each service can be worked on independently. Furthermore, each service is divided into several discrete functions, again allowing more flexibility in deployment and development.

Each discrete function would typically run in its own container, or a group of containers where high availability is required.
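As an illustration of that separation, here is a hedged sketch (again in Go) of the license registration function running as its own service and talking to the authentication service only through its API. The environment variable, service URL and endpoint paths are assumptions made for the example.

    // A sketch of one standalone micro-service: license registration runs as
    // its own process (and container) and talks to the authentication service
    // only through its HTTP API.
    package main

    import (
        "fmt"
        "log"
        "net/http"
        "os"
    )

    func main() {
        // The location of the auth service is injected rather than hard-coded,
        // so each service can be deployed, scaled and replaced independently.
        authURL := os.Getenv("AUTH_SERVICE_URL") // e.g. http://auth:8080

        http.HandleFunc("/register", func(w http.ResponseWriter, r *http.Request) {
            // Delegate token validation to the auth service via its API.
            resp, err := http.Get(authURL + "/validate?token=" + r.URL.Query().Get("token"))
            if err != nil {
                http.Error(w, "auth service unavailable", http.StatusBadGateway)
                return
            }
            defer resp.Body.Close()
            if resp.StatusCode != http.StatusOK {
                http.Error(w, "unauthorised", http.StatusUnauthorized)
                return
            }
            fmt.Fprintln(w, "registration accepted")
        })

        log.Fatal(http.ListenAndServe(":8081", nil))
    }

Because the registration service knows nothing about how authentication is implemented, either side can be redeployed, rewritten or scaled without touching the other.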

Whilst this model provides huge benefits over a monolithic application, there is still a large overhead within the team in understanding how to deploy, manage and scale common third-party solutions like databases, message queues and load balancers.