Azure Stack – Compute

Compute in the cloud means different things to different people. In this article we consider the most basic compute assets, virtual machines (instances), as provisioned through IaaS. We will also touch briefly on infrastructure as code as the "cloudy" way of interacting with compute resources.

The purpose of this post is to call out considerations when moving from Azure to Azure Stack to make the process as frictionless as possible.

Virtual Machine sizes

Because the hardware is prescriptive and defined by the Azure Stack supplier, many of the specialist VM sizes are not available. This, coupled with limited scaling, has borne a restricted but versatile set of VM sizes. The sizes currently supported are:

Type              Series        Sizes
General purpose   Basic A       A0-A4
General purpose   Standard A    A0-A6
General purpose   Standard Dv2  D1v2-D4v2
Memory optimized  Standard Dv2  D11v2-D12v2


This means machine types may need to be adjusted in any deployment code you have, although these are very likely to be parameterised as variables anyway.
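One way to keep deployment code portable is a pre-deployment check that maps a requested size onto the restricted set. A minimal sketch, assuming the supported-size list from the table above; the `pick_size` helper and its fallback value are illustrative, not part of any Azure tooling.

```python
# Sizes from the supported-sizes table above, in the naming convention
# deployment code uses ("Basic_A0", "Standard_D1_v2", ...).
SUPPORTED_SIZES = {
    # Basic A and Standard A (general purpose)
    "Basic_A0", "Basic_A1", "Basic_A2", "Basic_A3", "Basic_A4",
    "Standard_A0", "Standard_A1", "Standard_A2", "Standard_A3",
    "Standard_A4", "Standard_A5", "Standard_A6",
    # Standard Dv2 (general purpose and memory optimized)
    "Standard_D1_v2", "Standard_D2_v2", "Standard_D3_v2", "Standard_D4_v2",
    "Standard_D11_v2", "Standard_D12_v2",
}

def pick_size(requested: str, fallback: str = "Standard_D2_v2") -> str:
    """Return the requested size if Azure Stack supports it, else a fallback."""
    if requested in SUPPORTED_SIZES:
        return requested
    return fallback
```

Feeding the result into your template's size variable means the same deployment code runs against Azure and Azure Stack, degrading gracefully when a specialist size is requested.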

Virtual Machine extensions

Virtual machine extensions extend the capabilities of virtual machines, for example integrating Azure logging, provisioning SQL Server, or running custom scripts. Azure Stack supports a subset of extensions, and that subset is pinned at specific versions. Operators will enable all currently supported extensions, so you can check compatibility against what they expose. You will also need to check the version you are using in Azure to make sure the functions you rely on are compatible with the version available in Azure Stack.
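Because the subset is pinned, it is worth validating pinned extension versions before a deployment rather than at runtime. A sketch under one loud assumption: the dictionary contents below are made up; the real list comes from your operator or the compute resource provider.

```python
# Hypothetical snapshot of what an operator has enabled on a stamp.
# Extension names and versions here are illustrative only.
STACK_EXTENSIONS = {
    "CustomScriptExtension": "1.9",
    "IaaSDiagnostics": "1.10",
}

def extension_supported(name: str, version: str) -> bool:
    """True if the extension exists on the stamp at the pinned version."""
    return STACK_EXTENSIONS.get(name) == version
```

A deployment script can then fail fast with a clear message instead of surfacing an opaque provisioning error mid-rollout.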

Public IP addresses

Unlike in a hyper-scale cloud, public IP addresses are a limited commodity on many community networks supporting the public sector. To make best use of your quota, consider placing services behind load balancers to distribute traffic rather than mapping many public IPs directly to virtual machines. This approach has many advantages regardless, so it is not a bad thing. You may also want to consider using a jump server to access VMs within the private networks, or setting up an IPsec tunnel for management.
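The load-balancer pattern can be taken further for management access: inbound NAT rules multiplex SSH or RDP to many VMs through a single public IP by giving each VM a unique frontend port. A minimal sketch of that port allocation; the port range and VM names are illustrative, and the resulting map would feed your load-balancer configuration.

```python
def nat_rules(vms, base_port=50000, backend_port=22):
    """Map each VM to a (frontend_port, backend_port) pair on one public IP.

    Each VM gets a unique frontend port so a single public IP can front
    the whole group; backend_port is the service port on the VM (22 = SSH).
    """
    return {vm: (base_port + i, backend_port) for i, vm in enumerate(vms)}
```

One public IP then serves an entire subnet's management traffic, which makes a noticeable difference against a small quota.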


Storage

We covered storage in detail in the following post, so we will not deep dive here. The two main points to remember are that storage is not limitless as in a hyper-scale cloud, and that managed disks are not available, which may alter your provisioning code. That feature is very likely to be released shortly though.
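The practical effect of the missing managed disks is that provisioning code must name each OS disk as a VHD blob in a storage account. A minimal sketch of building that URI, assuming the public Azure endpoint suffix; an Azure Stack stamp has its own deployment-specific suffix, so treat the default below as a placeholder.

```python
def vhd_uri(storage_account: str, container: str, vm_name: str,
            endpoint_suffix: str = "core.windows.net") -> str:
    """Build the blob URI for a VM's unmanaged OS disk VHD.

    endpoint_suffix is deployment-specific: the default is Azure's
    public suffix, not the one your Azure Stack stamp uses.
    """
    return (f"https://{storage_account}.blob.{endpoint_suffix}"
            f"/{container}/{vm_name}-osdisk.vhd")
```

Parameterising the suffix keeps the same provisioning code usable against both clouds until managed disks arrive on Azure Stack.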

API compatibility

Azure Stack may be a few releases behind the Azure API, so if you have written your own code against the API this could cause some challenges. However, it is more likely that you are using SDKs or DevOps tooling like Terraform, which will abstract some of the complexity. If you are using cutting-edge features you may find a lag before you can consume them on Azure Stack, so be mindful.
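Since ARM resource apiVersion strings are dates, one cheap sanity check is to scan a template for versions newer than what the stamp supports. A sketch under a stated assumption: the cut-off date below is made up for illustration, not a real Azure Stack API profile.

```python
import json

# Illustrative cut-off only; a real value comes from your stamp's
# supported API profile, not from this article.
STACK_API_CUTOFF = "2016-06-01"

def unsupported_api_versions(template: dict, cutoff: str = STACK_API_CUTOFF):
    """Yield (resource type, apiVersion) pairs newer than the cut-off.

    apiVersion strings start with a YYYY-MM-DD date, so lexical
    comparison on the first ten characters is sufficient.
    """
    for resource in template.get("resources", []):
        version = resource.get("apiVersion", "")
        if version[:10] > cutoff:
            yield resource.get("type"), version

template = json.loads("""
{
  "resources": [
    {"type": "Microsoft.Compute/virtualMachines", "apiVersion": "2017-03-30"},
    {"type": "Microsoft.Network/virtualNetworks", "apiVersion": "2015-06-15"}
  ]
}
""")
```

Running the scan on the sample template flags only the virtual machine resource, since its apiVersion postdates the cut-off.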

Availability and scale

Azure Stack is currently very restricted in terms of scale, with a single region and a single scale unit (somewhat like an availability zone). This is set to change very soon, but in the meantime there is a single fault and update domain.

Virtual machine scale sets do not support auto-scale, which may mean you have to consider how you handle load within your applications in a different way.
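Without auto-scale, the scaling decision moves into your own tooling: something must turn a load metric into a desired instance count, which your pipeline then applies by redeploying with a new count. A sketch of one simple proportional rule; the target, bounds, and metric source are all assumptions.

```python
import math

def desired_instances(current: int, avg_cpu: float,
                      target_cpu: float = 60.0,
                      min_count: int = 2, max_count: int = 10) -> int:
    """Proportional scaling rule: size the group so average CPU nears target.

    current   -- instances running now
    avg_cpu   -- observed average CPU percentage across the group
    Clamped to [min_count, max_count] so a metric spike cannot exhaust quota.
    """
    desired = math.ceil(current * avg_cpu / target_cpu)
    return max(min_count, min(max_count, desired))
```

For example, four instances averaging 90% CPU against a 60% target yields a desired count of six; the clamp also stops a quiet period from scaling below a safe floor.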

Considering that Azure Stack is an on-premises cloud with the obvious differences forced by a lack of hyper-scale, the differences are not huge and should still allow consistent deployments between Azure and Azure Stack.

A final point to note is that you can check your ARM templates' compatibility by following this guide.