Backend as a service – how complex is it to manage servers?

Following on from our last blog post, where we looked at backend as a service and the move to a micro-services architecture, we’ll now look at Platform as a Service (PaaS) and what it aims to do.

Removing the complexity of managing servers is what “platform-as-a-service” aims to do. It’s similar to the evolution from the physical server to the virtual server paradigm; however, this time it goes beyond virtualising physical assets, providing deeper, lighter-weight and faster virtualisation whilst (more importantly) offering an opinionated set of principles that allow developers to design applications faster.

These principles encompass things like micro-services architecture (where applications are separated into very small independent units), idempotent deployments (achieving the same results when deployed several times) and many other principles covered in the 12-factor application manifesto (12factor.net). By adopting these principles at the application layer, assumptions can be made about how to build out supporting infrastructure platforms, the benefits of which are common approaches to failure and recovery and simpler scaling operations.
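To illustrate one of these principles, factor III of the manifesto (store config in the environment), here is a minimal sketch of how a service might pick up its backing-service settings from environment variables rather than hard-coding them. The function name and variable names are invented for illustration:

```python
import os


def load_db_config():
    """Read database settings from the environment (12-factor, factor III).

    Keeping config out of the code means the same build can be deployed
    repeatably to any environment; only the environment variables change,
    which is what makes deployments idempotent.
    """
    return {
        "host": os.environ.get("DB_HOST", "localhost"),
        "port": int(os.environ.get("DB_PORT", "3306")),
        "user": os.environ.get("DB_USER", "app"),
    }


if __name__ == "__main__":
    print(load_db_config())
```

Because the code makes no assumptions about where it runs, the same container image can move between development, staging and production unchanged.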

PaaS allows service providers to offer developers infrastructure where they can deploy their applications in containers, safe in the knowledge that they will be managed effectively so that they remain highly available and scale without issue, with no need to manage servers.

Well, nearly…

Backend as a Service

As always, new technology brings new challenges, and that is the case with PaaS. Many third-party applications required to support a development team (e.g. databases, DNS, message queues) either don’t suit containers, because they don’t adhere to principles like stateless deployment, or they require domain-specific knowledge that is expensive to resource and doesn’t provide any direct value to the business (e.g. managing DNS). These services are often described using the “pets versus cattle” analogy.

Essentially, older applications are treated as pets: you care for their needs attentively, you name them and nurture them, and the effort you expend in caring for them is high. Containerised applications are more like cattle: you treat them all the same, you give them numbers rather than names, and if one gets sick you kill it off and get a new one.

This has meant that teams adopting PaaS often end up in a half-way house, consuming PaaS for their own application whilst deploying and managing IaaS for supporting applications like databases and message queues.

With BaaS, we further abstract the common third-party services that are key to the success of modern containerised applications. Instead of deploying and managing third-party applications like databases, the team concentrates on developing the custom business application and consumes service endpoints for databases, message queues, load balancers and so on. They get the functionality of the application without the burden of designing and operating the underlying infrastructure.

The diagram below describes this model.

This delivery model is incredibly powerful and frees developers to concentrate on writing code that drives direct value for their business.

Examples of these services are OpenStack Trove (database as a service), OpenStack Manila (filesystem as a service) and OpenStack Designate (DNS as a service), all of which provide a simple API to drive the consumption of a service. For example, with Trove, rather than:

  • Designing a highly available database cluster
  • Deploying servers
  • Installing the relevant database software
  • Configuring the database
  • Creating a backup plan and executing it periodically

The developer can run:

trove database-create databaseCluster my-new-db --character_set latin2 --collate latin2_general_ci

After the command executes, the developer gets access to a connection string they can use within their application to store persistent data.

mysql://db_user1:xxxxx@10.14.1.9:3306/my-new-db

Or, equivalently, via the MySQL command-line client:

mysql -h 10.14.1.9 -u db_user1 -p my-new-db

With this string, the developer can directly authenticate and start consuming a database, removing the bottleneck caused by infrastructure management.
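As a sketch of the application side of that handshake, the connection string above (placeholder credentials, example IP) can be split into the parts a database client library needs. The helper function name here is invented for illustration:

```python
from urllib.parse import urlparse


def parse_connection_string(url):
    """Split a database URL of the form scheme://user:pass@host:port/db
    into the individual parameters a client library expects."""
    parts = urlparse(url)
    return {
        "user": parts.username,
        "password": parts.password,
        "host": parts.hostname,
        "port": parts.port,
        "database": parts.path.lstrip("/"),
    }


if __name__ == "__main__":
    # The example connection string from the text (placeholder credentials).
    print(parse_connection_string("mysql://db_user1:xxxxx@10.14.1.9:3306/my-new-db"))
```

From the developer’s point of view, this one string is the entire interface to the database service; everything behind it is the provider’s problem.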

Where next?

Whilst this improves efficiency, the emergence of function-as-a-service (FaaS) is starting to extend the concepts defined above, allowing containers to be created dynamically to perform actions only when needed, i.e. when certain triggers occur.

The main advantage of this is cost reduction, as containers exist only for the short time that they are required.

Let’s use the example application that we have defined above to explore this in a little more depth.

What if the forum service needs to create a new user? Assume the user will create a user record, add some details, upload an image and then save the record to the database.

It’s possible that, if the application is further divided into smaller units and coupled with an event-driven service, some of the parts need only be deployed when needed.

 

Here we can see that the API gateway will direct requests to the forum API. If a new user is required, the micro-service that manages new users can kick off the process, creating a new record and capturing the user’s information.

When it comes to processing the photo, the events service can notify the infrastructure that a new photo upload needs handling, and a container can be created dynamically to handle the upload and write it to object storage. Once complete, another container can start to manage cropping the photo into the relevant sizes, and then a final process can spawn a container to compress the resulting images and store them back to S3.
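The chain above can be sketched with a toy in-process event bus standing in for the real events service; each “function” runs only when its trigger event fires, mirroring the FaaS model. The event names and handlers here are invented for illustration, and the storage steps are simulated:

```python
# Toy event bus standing in for a real events service.
handlers = {}


def on(event):
    """Register a handler to run only when its trigger event fires."""
    def register(fn):
        handlers[event] = fn
        return fn
    return register


def emit(event, payload):
    """Fire an event, invoking the registered handler (a stand-in for
    the platform dynamically spinning up a container)."""
    return handlers[event](payload)


@on("photo.uploaded")
def store_photo(payload):
    # Write the raw upload to object storage (simulated), then trigger cropping.
    payload["stored"] = True
    return emit("photo.stored", payload)


@on("photo.stored")
def crop_photo(payload):
    # Produce the required image sizes, then trigger compression.
    payload["sizes"] = ["thumb", "medium", "full"]
    return emit("photo.cropped", payload)


@on("photo.cropped")
def compress_photo(payload):
    # Compress the cropped images and write them back to object storage.
    payload["compressed"] = True
    return payload


if __name__ == "__main__":
    print(emit("photo.uploaded", {"user": "42", "file": "avatar.png"}))
```

In a real FaaS deployment, each handler would be an independently deployed function and each `emit` a notification from the events service, so compute is consumed only while a photo is actually being processed.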

In essence, the drive is to move from managing infrastructure to consuming services, over time reaching a model where resources are instantiated only at the exact moment they are needed, saving infrastructure costs in both raw resources and operational staff.

It should be noted that this new model requires a new approach to code development, testing and deployment. Whilst FaaS platforms like OpenWhisk and Kong provide a place to run the functions, the ecosystem around the development lifecycle is not yet mature. Until these projects mature and start to integrate with other services like object storage, it is unlikely that we will see wide-scale adoption.

