Azure Kubernetes Service (AKS) Automatic is a new SKU that simplifies the management of your AKS clusters. With this SKU, Azure ensures that your cluster is production-ready, with built-in best practices and a great code-to-Kubernetes experience.

Creating an AKS Automatic cluster

Creating an AKS cluster with the Automatic SKU is as simple as running the following Azure CLI command:

az aks create \
    --resource-group <resource group name> \
    --name <cluster name> \
    --sku automatic \
    --generate-ssh-keys

Deploying with Bicep is also simple, but an agent pool profile is required:

@description('The name of the managed cluster resource.')
param clusterName string = 'aks-automatic'

@description('The location of the managed cluster resource.')
param location string = resourceGroup().location

resource aks 'Microsoft.ContainerService/managedClusters@2024-03-02-preview' = {
  name: clusterName
  location: location
  sku: {
    name: 'Automatic'
    tier: 'Standard'
  }
  properties: {
    agentPoolProfiles: [
      {
        name: 'systempool'
        count: 3
        vmSize: 'Standard_DS4_v2'
        osType: 'Linux'
        mode: 'System'
      }
    ]
  }
  identity: {
    type: 'SystemAssigned'
  }
}

With just a few lines and almost no configuration you get a production-ready cluster.
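
If you save the template above as main.bicep (the file name is just an example), it can be deployed with a standard resource group deployment:

az deployment group create \
    --resource-group <resource group name> \
    --template-file main.bicep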

What is pre-configured (built-in) with AKS Automatic?

Once an AKS Automatic cluster is deployed, its full configuration can be retrieved using the Azure portal JSON view for the service. The JSON object is too large to be included here, but I’ll highlight some of the most important parts of it.

SKU

The SKU of the cluster is Automatic and the tier is Standard.

1"sku": {
2    "name": "Automatic",
3    "tier": "Standard"
4},

This means that the cluster is SLA enabled (Standard tier) and also that it can be migrated to the Base SKU later if needed.
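
If you ever need to move off Automatic, the migration should be possible with a single update command. A hedged sketch; double-check the exact flag values against the current Azure CLI docs:

# Migrate the cluster from the Automatic SKU to the Base SKU
az aks update \
    --resource-group <resource group name> \
    --name <cluster name> \
    --sku base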

Node Pools (Agent Pool Profile)

The cluster has a single agent pool named systempool with 3 nodes of size Standard_DS4_v2 running Azure Linux.

{
    "name": "systempool",
    "count": 3,
    "vmSize": "Standard_DS4_v2",
    "osDiskSizeGB": 128,
    "osDiskType": "Ephemeral",
    "kubeletDiskType": "OS",
    "maxPods": 250,
    "type": "VirtualMachineScaleSets",
    "availabilityZones": [
        "1",
        "2",
        "3"
    ],
    "enableAutoScaling": false,
    ...
    "orchestratorVersion": "1.28",
    "currentOrchestratorVersion": "1.28.10",
    "nodeTaints": [
        "CriticalAddonsOnly=true:NoSchedule"
    ],
    "mode": "System",
    "osType": "Linux",
    "osSKU": "AzureLinux",
    "nodeImageVersion": "AKSCBLMariner-V2gen2-202406.25.0",
    "upgradeSettings": {
        "maxSurge": "10%"
    },
    ...
}

The pool has Availability Zones enabled, auto-scaling disabled (more on this later), and the nodes are tainted with CriticalAddonsOnly=true:NoSchedule so that no user workloads are scheduled on them.
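
You can verify the taints (and confirm that only system pods land on these nodes) with a quick kubectl query, for example:

# List each node with the keys of its taints
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'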

Addons (Addon Profiles)

The following addons are enabled by default in the cluster:

  • Azure Key Vault Secrets Provider: allows for the integration of an Azure Key Vault as a secret store with an Azure Kubernetes Service (AKS) cluster via a CSI volume.
  • Azure Policy: apply and enforce built-in security policies on your Azure Kubernetes Service (AKS) clusters using Azure Policy.
  • OMS Agent: enables monitoring of the Azure Kubernetes Service (AKS) cluster.
 1"addonProfiles": {
 2    "azureKeyvaultSecretsProvider": {
 3        "enabled": true,
 4        "config": {
 5            "enableSecretRotation": "true"
 6        },
 7        "identity": {
 8            "resourceId": "...",
 9            ...
10        }
11    },
12    "azurepolicy": {
13        "enabled": true,
14        "config": null,
15        "identity": {
16            "resourceId": "...",
17            ...
18        }
19    },
20    "omsAgent": {
21        "enabled": true,
22        "config": {
23            "logAnalyticsWorkspaceResourceID": "...",
24            "useAADAuth": "true"
25        }
26    }
27},

Note the use of Managed Identities and Microsoft Entra ID integration.
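
As an illustration of the Key Vault Secrets Provider, here is a minimal SecretProviderClass sketch; the vault name, tenant ID, and secret name are placeholders you would replace with your own:

kubectl apply -f - <<EOF
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kv-example
spec:
  provider: azure
  parameters:
    keyvaultName: <key vault name>   # placeholder
    tenantId: <tenant id>            # placeholder
    objects: |
      array:
        - |
          objectName: <secret name>
          objectType: secret
EOF

A pod then consumes it through a CSI volume with the secrets-store.csi.k8s.io driver, referencing this SecretProviderClass by name.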

Azure CNI Overlay and Cilium (Network Profile)

The network profile of the cluster is configured with Azure CNI in overlay mode, with Cilium for both the network policy engine and the data plane. This way, only the Kubernetes cluster nodes are assigned IPs from subnets, while pods receive IPs from a private CIDR provided at the time of cluster creation.

With support for eBPF, Azure CNI Powered by Cilium provides the following benefits:

  • Functionality equivalent to existing Azure CNI and Azure CNI Overlay plugins
  • Improved Service routing
  • More efficient network policy enforcement
  • Better observability of cluster traffic
  • Support for larger clusters (more nodes, pods, and services)
1"networkProfile": {
2    "networkPlugin": "azure",
3    "networkPluginMode": "overlay",
4    "networkPolicy": "cilium",
5    "networkDataplane": "cilium",
6    ...
7}
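
Because the Cilium data plane enforces Kubernetes network policies natively, a standard NetworkPolicy object just works. A minimal sketch that denies all ingress traffic to pods in a hypothetical team-a namespace:

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}   # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress
EOF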

Microsoft Entra ID integration and cluster authentication (AAD Profile)

The cluster is configured with both Microsoft Entra ID integration and Azure RBAC enabled.

1"aadProfile": {
2    "managed": true,
3    "enableAzureRBAC": true,
4    "tenantID": "<tenant id>"
5},

Local accounts are also disabled for the cluster, which means that you’ll have to use kubelogin to access it.

1"disableLocalAccounts": true,

Auto Upgrade Profile

The cluster is configured with auto-upgrade enabled and the upgrade channel set to stable.

1"autoUpgradeProfile": {
2    "upgradeChannel": "stable",
3    "nodeOSUpgradeChannel": "NodeImage"
4},

The stable auto-upgrade channel automatically upgrades the cluster to the latest supported patch release on minor version N-1, where N is the latest supported minor version.
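
If you want upgrades to happen only during a defined window, you can pair auto-upgrade with a planned maintenance configuration. A hedged sketch; verify the flags against the az aks maintenanceconfiguration docs:

az aks maintenanceconfiguration add \
    --resource-group <resource group name> \
    --cluster-name <cluster name> \
    --name aksManagedAutoUpgradeSchedule \
    --schedule-type Weekly \
    --day-of-week Sunday \
    --interval-weeks 1 \
    --start-time 00:00 \
    --duration 8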

Image Cleaner and Workload Identity (Security Profile)

The cluster is configured with the image cleaner enabled and workload identity enabled.

Image Cleaner is an AKS addon, based on the OSS project Eraser, that removes unused images with vulnerabilities from the cluster nodes.

Workload identity allows you to easily configure the use of Microsoft Entra application credentials or managed identities to access Microsoft Entra protected resources from within the cluster.

1"securityProfile": {
2    "imageCleaner": {
3        "enabled": true,
4        "intervalHours": 168
5    },
6    "workloadIdentity": {
7        "enabled": true
8    }
9},
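
To use workload identity, you typically annotate a service account with the client ID of a managed identity that has a federated credential, and label the pod so the token gets injected. A minimal sketch with placeholder values:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: workload-identity-sa
  annotations:
    azure.workload.identity/client-id: <managed identity client id>   # placeholder
---
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod
  labels:
    azure.workload.identity/use: "true"   # opts the pod into token injection
spec:
  serviceAccountName: workload-identity-sa
  containers:
    - name: app
      image: mcr.microsoft.com/azure-cli
      command: ["sleep", "3600"]
EOF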

Deployment safeguards (Safeguards Profile)

The cluster is configured with the Safeguards Profile enabled and set to Warning. Deployment safeguards programmatically assess your clusters at creation or update time for compliance.

In warning mode, deployment safeguards will not block the deployment of resources that are not compliant with the policy. Instead, they will display warning messages in the terminal to alert you of any noncompliant cluster configurations.

 1"safeguardsProfile": {
 2    "level": "Warning",
 3    "version": "v1.0.0",
 4    "systemExcludedNamespaces": [
 5        "kube-system",
 6        "calico-system",
 7        "tigera-system",
 8        "gatekeeper-system"
 9    ]
10},
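
To see this in action, you can apply a workload that omits some of the checked best practices (missing resource limits and probes are, I believe, among the checks) and look for warnings in the kubectl output:

# A bare deployment with no limits or probes; in Warning mode the apply
# succeeds, but safeguards should print warnings instead of rejecting it
kubectl create deployment nginx-test --image=nginx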

Ingress Controller (Ingress Profile)

The cluster is configured with the Web App Routing addon enabled. At the time of writing, this addon provides easy configuration of managed NGINX ingress controllers based on the Kubernetes NGINX ingress controller, as well as integration with Azure DNS for public and private zone management and SSL termination with certificates stored in Azure Key Vault.

 1"ingressProfile": {
 2    "webAppRouting": {
 3        "enabled": true,
 4        "dnsZoneResourceIds": null,
 5        "identity": {
 6            "resourceId": "...",
 7            ...
 8        }
 9    }
10},
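
To use the managed NGINX controller, you reference its ingress class in a standard Ingress resource. A minimal sketch, assuming a service named demo-app listening on port 80:

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app
                port:
                  number: 80
EOF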

KEDA and VPA (Workload Auto Scaler Profile)

The cluster is configured with KEDA and the Vertical Pod Autoscaler enabled.

1"workloadAutoScalerProfile": {
2    "keda": {
3        "enabled": true
4    },
5    "verticalPodAutoscaler": {
6        "enabled": true,
7        "addonAutoscaling": "Unspecified"
8    }
9},
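
With KEDA already installed, you can create ScaledObjects right away. A minimal sketch that scales a hypothetical demo-app deployment on CPU utilization (the deployment is assumed to exist and to define CPU requests, which the CPU scaler needs):

kubectl apply -f - <<EOF
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: demo-app-scaler
spec:
  scaleTargetRef:
    name: demo-app
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "60"   # target average CPU utilization in percent
EOF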

Managed Prometheus and Grafana (Azure Monitor Profile)

The cluster is configured with Azure Monitor and Container Insights enabled. This configuration is optional, but it is considered a best practice if your organization is not using a third-party monitoring solution.

 1"azureMonitorProfile": {
 2    "metrics": {
 3        "enabled": true,
 4        "kubeStateMetrics": {
 5            "metricLabelsAllowlist": "",
 6            "metricAnnotationsAllowList": ""
 7        }
 8    },
 9    "containerInsights": {
10        "enabled": true,
11        "logAnalyticsWorkspaceResourceId": "..."
12    }
13},

Node Autoprovisioning (Node Provisioning Profile)

The cluster is configured with the node provisioning mode set to Auto. With this configuration, the cluster automatically provisions nodes as needed using node autoprovisioning (NAP), which is based on the Karpenter OSS project.

1"nodeProvisioningProfile": {
2    "mode": "Auto"
3},

The cluster autoscaler must be disabled when using node autoprovisioning, which is why auto-scaling is disabled on the system node pool, as noted earlier.
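
A simple way to see node autoprovisioning working is to deploy a workload with explicit resource requests that exceeds the current capacity and watch new nodes appear. The inflate name is borrowed from common Karpenter demos and is just an example:

# Create an idle deployment, give it CPU requests, then scale it up;
# the pending pods should trigger NAP to provision new nodes
kubectl create deployment inflate --image=mcr.microsoft.com/oss/kubernetes/pause:3.6 --replicas=0
kubectl set resources deployment inflate --requests=cpu=1
kubectl scale deployment inflate --replicas=20
kubectl get nodes -w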

Hope it helps!

References: