Building Azure Resource Manager Templates – Couchbase special

It is amazing how quickly technology gets old; there is always a shiny new wonderful thing coming along in a short period of time.

Here is yet another example: Microsoft Azure's Resource Manager (ARM) templates.

Over the past few weeks, Full Scale 180 had the opportunity to work on this exciting new way of deploying resources to Microsoft Azure, and to take part in preparing some templates for the Microsoft team that showcase the power of ARM templates.

To reiterate what has been said in many different places: when Microsoft Azure (formerly known as Windows Azure, and before that by the code name Red Dog) was announced, it had a management API, exposed as REST operations. That endpoint was also affectionately known as RDFE (Red Dog Front End). It went through a lot of updates and had many versions.

With the introduction of the Azure Resource Management APIs last year, Microsoft signaled what was coming next. The API is still exposed as a set of REST operations, but at a different endpoint.

At the Build conference this week, Microsoft announced new operations on the ARM stack, in addition to what was announced last year.

The ARM stack, just like its predecessor, supports imperative operations, but what is exciting is that it also provides a new way of deploying artifacts (resources, in Azure parlance) in a declarative fashion: through a template, written in JSON object notation, using a special template language.

Full Scale 180 built the following templates, showcasing complex deployments while working closely with the Microsoft Azure teams.

You can find the result of the collaborative effort across multiple teams at

What we learned during that process was interesting and cool, and I'd like to walk through a template example we developed, pointing out the tips and tricks in the following posts.

Explaining the Example Template

We would like to deploy a Couchbase cluster reflecting real-life scenarios using ARM templates. Couchbase has some documented minimum and recommended resource requirements. A data store is usually not exposed to the internet, so we want to deploy the cluster on a subnet that can only be accessed by certain other clients. We would also like to attach data disks to the VMs that perform well, so we stripe the disks.

Since you may well not have a handy site-to-site VPN configured after deploying the cluster, we also included two VMs, deployed as test/configuration machines, that can be accessed from the internet. Those two machines are optional, and their deployment can be controlled through a parameter on the template.


You can find the set of templates and scripts to make this deployment in a declarative way at

Anatomy of the Example Template

At the top level, we have the “azuredeploy.json” template. This defines the topmost structure of what is going to be deployed.
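The overall shape of such a topmost template is standard; here is a minimal sketch of the skeleton (the section contents are elided, and the `tshirtSize` parameter shown is an assumption based on the t-shirt sizes discussed later, while `storageAccountNamePrefix` and its "cbdeploy" default come from the walkthrough below):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountNamePrefix": {
      "type": "string",
      "defaultValue": "cbdeploy",
      "metadata": { "description": "Prefix used to build the storage account names" }
    },
    "tshirtSize": {
      "type": "string",
      "defaultValue": "Small",
      "allowedValues": [ "Small", "Medium", "Large" ]
    }
  },
  "variables": { },
  "resources": [ ]
}
```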


The topmost template may have a parameters section (please see the ARM Template Language documentation for details), which drives your deployment when the resource group creation is submitted through various channels.

The parameters file can be specified at the command line through options (both for the x-plat CLI and PowerShell). If it is not specified, the tools will prompt for the values, offering the supplied defaults.
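A parameters file is just another small JSON document mapping parameter names to values; a minimal sketch, assuming the parameter names used in this example:

```json
{
  "storageAccountNamePrefix": { "value": "cbdeploy" },
  "tshirtSize": { "value": "Large" }
}
```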

Let’s see what happens when this template is deployed:

  1. Everything depends on shared-resources.json, so it starts deploying first.
  2. Then the following two start deploying:
    • the storage accounts loop
    • jumpbox-resources.enabled.json, where the following happen in parallel:
      • the storage account deployment
      • the jumpbox public IP deployment
        • once the public IP is deployed, the jumpbox NIC gets deployed
        • once the NIC and the storage account are deployed, the jumpbox VM starts to deploy
      • the Windows client public IP deployment
        • once the public IP is deployed, the Windows client NIC gets deployed
        • once the NIC and the storage account are deployed, the Windows client VM starts to deploy
  3. Once the deployments in the second step are done, the first N-1 nodes start to deploy, in the following order:
    1. NIC deployment
    2. VM deployment
    3. Custom script extension to install the Couchbase package
  4. The Nth node starts to deploy:
    1. NIC deployment
    2. VM deployment
    3. Custom script extension to configure Couchbase
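The ordering above is driven entirely by `dependsOn` entries on the nested deployment resources; a sketch of how a node deployment waits on the shared resources (the deployment names and the `templateBaseUrl` variable are illustrative, not necessarily the names used in the real templates):

```json
{
  "apiVersion": "2015-01-01",
  "type": "Microsoft.Resources/deployments",
  "name": "node-resources0",
  "dependsOn": [
    "Microsoft.Resources/deployments/shared-resources"
  ],
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[concat(variables('templateBaseUrl'), 'cluster-nodes-D14.json')]",
      "contentVersion": "1.0.0.0"
    },
    "parameters": { }
  }
}
```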

Couchbase special

The template’s Couchbase special sections are:

  • VM sizes for the Small, Medium, and Large t-shirt sizes are A2, A6, and D14 respectively. The cluster sizes are up to the user to modify; they are 3, 4, and 5 VMs respectively and do not follow any best practices.
  • Multiple data disks, up to the maximum number allowed for the VM size, are attached to each VM. Data disks are striped using mdadm to optimize IO.
  • All of the cluster members are placed in an availability set to provide HA.
  • Swappiness is disabled
  • Transparent Huge Page (THP) is disabled
  • Couchbase rack-awareness settings are left to the user
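The per-VM setup work (disk striping, swappiness, THP, package install) is driven by the CustomScript extension for Linux; a sketch of such an extension resource, assuming illustrative variable and script names rather than the exact ones in the real templates:

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(variables('vmName'), '/installCouchbase')]",
  "apiVersion": "2015-05-01-preview",
  "location": "[variables('location')]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.OSTCExtensions",
    "type": "CustomScriptForLinux",
    "typeHandlerVersion": "1.2",
    "settings": {
      "fileUris": [ "[concat(variables('scriptUrl'), 'couchbase-azure-install.sh')]" ],
      "commandToExecute": "bash couchbase-azure-install.sh"
    }
  }
}
```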

Storage Accounts

Allocation of the storage accounts is based on the VM size and is controlled by the value of the "storageAccountCount" property of the clusterSpec variable, along with the "storageAccountSuffixForNode*" variables for the VM-to-storage-account mapping.
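A sketch of the variables involved for the Large size; the `storageAccountSuffixForNode*` names follow the convention described here, while the exact shape of `clusterSpec` is an assumption:

```json
"variables": {
  "clusterSpec": {
    "vmSize": "Standard_D14",
    "storageAccountCount": 5,
    "lastNodeId": 4
  },
  "storageAccountSuffixForNodeLarge0": "0",
  "storageAccountSuffixForNodeLarge1": "1",
  "storageAccountSuffixForNodeLarge2": "2",
  "storageAccountSuffixForNodeLarge3": "3",
  "storageAccountSuffixForNodeLarge4": "4"
}
```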

A non-premium Azure storage account can support up to 40 disks because of the account IOPS limit (20,000 entities or messages per second, as documented in the scalability targets document). Each disk's maximum IOPS is 500 (each operation counts as one access out of the 20,000), and 20,000 divided by 500 is 40. The maximum numbers of disks that can be attached to A2 and A6 instances are 4 and 8 respectively.

Let’s walk through the VM-to-storage-account mapping story, with the Large t-shirt size as an example.

  1. “storageAccountCount” is set to 5 in the clusterSpec.
  2. azuredeploy.json deploys 5 storage accounts, numbered 0 to 4, with names starting with the value of the “storageAccountNamePrefix” parameter (“cbdeploy” being the default).
  3. Each time cluster-nodes-D14.json is invoked, the storageAccountName parameter takes a different value, built by concatenating “storageAccountSuffixForNode”, the t-shirt size, and the node id (the copyindex() value for the first N-1 nodes, and the lastNodeId value from the cluster spec for the Nth node). So for Large it ranges over storageAccountSuffixForNodeLarge0 to storageAccountSuffixForNodeLarge4; the variable value found that way (0 to 4) is then concatenated to the end of the value of “storageAccountNamePrefix”. For example’s sake, assuming the default storage account name prefix “cbdeploy”, the names will be cbdeploy0 to cbdeploy4.
  4. cluster-nodes-D14.json then deploys the VM using the passed-in storageAccountName parameter’s value.
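The lookup in step 3 relies on ARM's ability to nest template functions, so a variable name can itself be computed; the expression looks roughly like this (the exact expression in the real template may differ):

```json
"storageAccountName": {
  "value": "[concat(parameters('storageAccountNamePrefix'), variables(concat('storageAccountSuffixForNodeLarge', copyindex())))]"
}
```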

Couchbase Rack Awareness

The cluster is deployed to one single availability set to ensure the distribution of VMs across different update domains (UDs) and fault domains (FDs). Although Couchbase Server replicates your data across multiple instances, the placement of the replicas relative to FDs is important. It is important to make sure the primary data partition and its replicas do not end up in the same FD; otherwise, in the case of a failure, it could result in possible data unavailability. So, even though it is possible to specify the number of FDs and UDs with the “PlatformFaultDomainCount” and “PlatformUpdateDomainCount” properties of the availability set (and thus indirectly influence the distribution of VMs across UDs and FDs), we have chosen not to specify those and to leave that to the discretion of the administrator.
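For reference, this is roughly what explicitly pinning the domain counts would look like on the availability set resource; as noted above, the template deliberately omits these two properties (the resource name, location variable, and counts here are illustrative):

```json
{
  "type": "Microsoft.Compute/availabilitySets",
  "name": "couchbaseAvailabilitySet",
  "apiVersion": "2015-05-01-preview",
  "location": "[variables('location')]",
  "properties": {
    "platformFaultDomainCount": 3,
    "platformUpdateDomainCount": 5
  }
}
```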

Additional Resources


Please make sure to read our “Tips and Tricks” Series for ARM templates as well.