In the previous post we briefly touched on the Azure Stack proposition and the value customers will see. We also explored how to deploy an Azure Stack development environment so customers can start to understand the differences between Azure and Azure Stack.
In this post we’re going to explore Azure Stack from the perspective of a service provider to understand some of its limitations and what this means to our customers in the early stages of adoption.
First off, Azure Stack comes in two models: connected and disconnected. Both models will be applicable to service providers wishing to offer a good level of integration with Azure while also providing higher-security domains connected to specialised networks, such as those UKCloud provides. There are huge differences between these two models, and our focus is initially on the connected model.
Before we get into the detail around some of these differences let’s quickly position Azure Stack.
Azure Stack IS NOT a virtualisation platform.
Azure Stack IS NOT a hyper scale cloud.
Well, if that’s the case, where does it fit?
Azure Stack IS a low-operational-touch, “cloud-inspired” platform. This basically means consistency with the Azure public cloud; it obviously doesn’t offer identical features, such as AI services or limitless scale, but it can offer features like Functions-as-a-Service, blob storage or tables, all with minimal operational burden.
With this in mind we need to be clear that this presents some challenges for service providers. For me this falls into three areas:
Whilst hyper-scale isn’t required, large scale is common for service providers, with platforms seeing hundreds to thousands of nodes deployed. Azure Stack is just not there yet.
In a “private” on-premises Azure Stack deployment, the “public” network is actually a network within the boundaries of an organisation. This is not the case for service providers, which creates challenges around how to securely make use of the many “add-on” shared features, like SQL databases, in a truly multi-tenant fashion.
The last challenge for service providers, which differs from enterprises, is the sheer breadth of use cases. It is not possible to forecast how your cloud will be consumed, especially with all the features Azure Stack brings. To cope with this, service providers lean towards decoupled infrastructure, where services can be independently scaled based on demand, rather than a tightly coupled infrastructure like Azure Stack.
We cannot change the Azure Stack limitations right now, so we have to accept the current model and work out how customers can get the best value whilst protecting the resources on the platform to give a consistent experience. So how are we reacting to these challenges?
Our experience here is that storage grows quicker than compute, especially if the storage is blob! Azure Stack cannot scale storage independently of compute because it is a hyper-converged infrastructure. So our first challenge has been to work out VM sizing based on a balance between compute density and storage density. The sizing is driven by the amount of RAM in each node: too little RAM and you don’t get many VMs per node, increasing the cost of compute; too much RAM and you get great compute density but very little storage per VM. Bear in mind the average storage per VM also includes blob, table and queue! Once the VM sizes were defined, we “hardcoded” our concept of the “right size” into quotas to enforce consumption in the way that gives the best flexibility whilst allowing a solid operating model.
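To make the trade-off concrete, the balance can be sketched as a simple calculation: for a given node, the chosen VM RAM size determines how many VMs fit per node, which in turn determines the average storage each VM can draw on. All figures below (node RAM, host reservation, usable storage) are purely illustrative assumptions, not UKCloud’s actual sizing.

```python
# Hypothetical sizing sketch: compute density vs storage density on a
# hyper-converged node. Every number here is illustrative only.

def vms_per_node(node_ram_gb, host_reserved_gb, vm_ram_gb):
    """How many VMs of a given RAM size fit on one node."""
    return (node_ram_gb - host_reserved_gb) // vm_ram_gb

def avg_storage_per_vm_gb(node_usable_storage_gb, vm_count):
    """Average storage per VM -- remember this pool is also shared
    with blob, table and queue consumption."""
    return node_usable_storage_gb / vm_count

# Assumed node: 384 GB RAM (32 GB reserved for the host), 40 TB usable storage.
NODE_RAM_GB, HOST_RESERVED_GB, NODE_STORAGE_GB = 384, 32, 40_000

for vm_ram in (8, 16, 32):
    count = vms_per_node(NODE_RAM_GB, HOST_RESERVED_GB, vm_ram)
    per_vm = avg_storage_per_vm_gb(NODE_STORAGE_GB, count)
    print(f"{vm_ram:>3} GB VMs: {count:>2} per node, ~{per_vm:,.0f} GB storage each")
```

Smaller VM sizes raise compute density but shrink the storage each VM can consume; a table like this is what drives the quota values we then enforce.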
The second method we are looking at is education, both of ourselves and of our customers. For example, we will not be suggesting customers consume blob storage for large-scale apps on Azure Stack; for that we have a multi-petabyte object storage solution. Instead, we will be suggesting they use blob storage for assets used in their deployments, or queues to help deliver scalable micro-services. We’ll start to cover more of these “Azure Stack best practices” in future posts.
Finally, we have submitted feature requests to Microsoft asking them to consider a decoupled model, which would offer more flexibility, and we are also working on the concept of an Azure Stack storage scale-unit (a self-contained rack under an Azure Stack region), which may help us synthesise decoupled storage in the interim.
Sadly, many of the advanced features released in Azure Stack are just not ready for multi-tenancy. We have been rigorous about testing the architecture with security in mind, and in our first release we will not be delivering some of the “add-on” services, like SQL database, because they simply aren’t ready for multi-tenancy. We are, however, working closely with Microsoft to articulate where the current model is flawed, with a view to solving these problems and releasing these features into the platform. Not all of this will require Microsoft to solve; some of it can potentially be orchestrated out of band to “provide the glue” that delivers the service in a multi-tenant fashion. However, we are so early in our journey that we wish to release a solid v1 and then incrementally expose robust and secure features.
Azure Stack is new to everyone, so we are running a UKCloud demonstrator (BETA) to help us engage with customers and test the assumptions we have made above. On the back of this we will be making tweaks to our modelling and updating our best practices, feeding back to Microsoft to steer the Azure Stack roadmap. This will be a constantly moving target, but with demonstrators for future feature releases we hope to strike the balance between a usable platform and service-provider predictability.
Our first challenge has been to do a gap analysis between Azure and Azure Stack, and then between Azure Stack and a service-provider deployment of Azure Stack. We are currently compiling a series of posts that will explain what the first UKCloud Azure Stack release will look like and provide guidance to customers already using Azure. Once this is complete and we have learned how customers actually use the platform, we will forecast what features are likely to be in the v2 iteration.