Last year I wrote a post about Enabling NAP with Terraform. While the post is still valid, I wanted to write about a scenario that many of you might be facing: Enabling NAP when bringing your own VNET.
So let’s learn how to create an AKS cluster and enable Node Autoprovisioning (NAP) with Terraform when bringing your own VNET.
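As a quick taste of the bring-your-own-VNET part, here is a minimal, hedged sketch (resource names and CIDRs are made up, and the NAP-specific settings shown further down are still needed): the key difference from a managed network is pointing the node pool at your own subnet via vnet_subnet_id.

```hcl
# Hypothetical names and CIDRs; a sketch of wiring an AKS cluster into your own VNET.
resource "azurerm_resource_group" "this" {
  name     = "rg-aks-nap-byovnet"
  location = "westeurope"
}

resource "azurerm_virtual_network" "this" {
  name                = "vnet-aks"
  location            = azurerm_resource_group.this.location
  resource_group_name = azurerm_resource_group.this.name
  address_space       = ["10.10.0.0/16"]
}

resource "azurerm_subnet" "nodes" {
  name                 = "snet-aks-nodes"
  resource_group_name  = azurerm_resource_group.this.name
  virtual_network_name = azurerm_virtual_network.this.name
  address_prefixes     = ["10.10.0.0/24"]
}

resource "azurerm_kubernetes_cluster" "this" {
  name                = "aks-nap-byovnet"
  location            = azurerm_resource_group.this.location
  resource_group_name = azurerm_resource_group.this.name
  dns_prefix          = "aksnapbyovnet"

  default_node_pool {
    name           = "system"
    vm_size        = "Standard_D4s_v5"
    node_count     = 3
    vnet_subnet_id = azurerm_subnet.nodes.id # nodes land in your own subnet
  }

  network_profile {
    network_plugin      = "azure"
    network_plugin_mode = "overlay" # NAP requires Azure CNI overlay
  }

  identity {
    type = "SystemAssigned"
  }
}
```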
In this post, we will go through the steps required to install Chaos Mesh on an Azure Kubernetes Service (AKS) cluster using Terraform.
Chaos Mesh is a cloud-native Chaos Engineering platform that orchestrates chaos on Kubernetes environments. It is designed to be a scalable and extensible platform for chaos engineering.
Chaos Mesh is required if you want to use Azure Chaos Studio to run chaos experiments on your AKS clusters.
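To give an idea of the Terraform shape, here is a minimal sketch using the helm provider (2.x set-block syntax); the chart repository and the containerd-related values follow the public Chaos Mesh documentation, but pin versions and double-check values before relying on it.

```hcl
# Assumes the helm and kubernetes providers are already configured against the AKS cluster.
resource "kubernetes_namespace" "chaos_testing" {
  metadata {
    name = "chaos-testing"
  }
}

resource "helm_release" "chaos_mesh" {
  name       = "chaos-mesh"
  repository = "https://charts.chaos-mesh.org"
  chart      = "chaos-mesh"
  namespace  = kubernetes_namespace.chaos_testing.metadata[0].name

  # AKS uses containerd, so point the chaos daemon at the containerd socket.
  set {
    name  = "chaosDaemon.runtime"
    value = "containerd"
  }
  set {
    name  = "chaosDaemon.socketPath"
    value = "/run/containerd/containerd.sock"
  }
}
```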
In this post, we will go through the steps required to install Kube Resource Orchestrator (kro) on an Azure Kubernetes Service (AKS) cluster using Terraform. kro is an open-source project that enables you to define custom Kubernetes APIs using simple and straightforward configuration.
Defining custom Kubernetes APIs is becoming essential to simplify the developer experience and increase productivity.
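As a rough sketch of what the install looks like with the helm provider, assuming the OCI chart path from the kro documentation (verify the path and pin a version against the project README):

```hcl
# Assumes the helm provider is already configured against the AKS cluster.
resource "helm_release" "kro" {
  name             = "kro"
  repository       = "oci://ghcr.io/kro-run/kro" # OCI registry path per the kro docs
  chart            = "kro"
  namespace        = "kro"
  create_namespace = true
  # version        = "x.y.z" # pin a released version in real use
}
```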
In this post, we will go through the steps required to install the Azure Service Operator (ASO) on an Azure Kubernetes Service (AKS) cluster using Terraform. The Azure Service Operator is an open-source project that enables you to provision and manage Azure resources using Kubernetes custom resources.
To install the Azure Service Operator on an AKS cluster, follow these steps:
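On the Terraform side, the shape is roughly the following hedged sketch (helm provider 2.x syntax): cert-manager must already be installed, the IDs are placeholders, and the chart repository and value names are taken from the ASO v2 docs as I recall them, so verify them before use.

```hcl
# Assumes the helm provider is configured against the cluster and cert-manager is installed.
resource "helm_release" "aso" {
  name             = "aso2"
  repository       = "https://raw.githubusercontent.com/Azure/azure-service-operator/main/v2/charts"
  chart            = "azure-service-operator"
  namespace        = "azureserviceoperator-system"
  create_namespace = true

  set {
    name  = "azureSubscriptionID"
    value = "00000000-0000-0000-0000-000000000000" # placeholder
  }
  set {
    name  = "azureTenantID"
    value = "00000000-0000-0000-0000-000000000000" # placeholder
  }
  set {
    name  = "azureClientID"
    value = "00000000-0000-0000-0000-000000000000" # placeholder: identity ASO authenticates with
  }
  set {
    name  = "crdPattern"
    value = "resources.azure.com/*;containerservice.azure.com/*" # install only the CRDs you need
  }
}
```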
Let’s learn how to create an AKS cluster and enable Static Egress Gateway with Terraform.
Static Egress Gateway in AKS provides a solution for configuring fixed source IP addresses for outbound traffic from your AKS workloads. This means you can use a specific range for egress traffic from specific workloads, which can be useful for scenarios like whitelisting IP addresses in a firewall.
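As far as I know there is no dedicated azurerm argument for this yet, so a hedged sketch with azapi_update_resource (2.x object-body syntax) looks roughly like the following; the property path and API version are my best recollection of the preview AKS API and should be verified before use.

```hcl
# Flips the Static Egress Gateway feature on an existing azurerm-managed cluster.
resource "azapi_update_resource" "static_egress_gateway" {
  type        = "Microsoft.ContainerService/managedClusters@2024-05-02-preview" # example preview API version
  resource_id = azurerm_kubernetes_cluster.this.id

  body = {
    properties = {
      networkProfile = {
        staticEgressGatewayProfile = {
          enabled = true
        }
      }
    }
  }
}
```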
The Flex Consumption plan for Azure Functions is a new hosting option that provides more flexibility and cost efficiency for running serverless applications. Unlike the traditional Consumption plan, which charges based on the number of executions and execution time, the Flex Consumption plan allows you to specify the maximum number of instances and memory allocation for your function app. This plan is ideal for scenarios where you need predictable performance and cost, as it enables you to control the scaling behavior of your functions more precisely.
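To make that concrete, here is a hedged sketch of what this can look like in Terraform; the azurerm_function_app_flex_consumption resource and its argument names are from recent azurerm (4.x) releases as I recall them, a storage account resource named azurerm_storage_account.this and a "deployments" container are assumed, and everything should be checked against the provider docs before use.

```hcl
# The "FC1" SKU selects the Flex Consumption plan; instance count and memory are set per app.
resource "azurerm_service_plan" "flex" {
  name                = "asp-flex"
  resource_group_name = azurerm_resource_group.this.name
  location            = azurerm_resource_group.this.location
  os_type             = "Linux"
  sku_name            = "FC1"
}

resource "azurerm_function_app_flex_consumption" "app" {
  name                = "func-flex-demo"
  resource_group_name = azurerm_resource_group.this.name
  location            = azurerm_resource_group.this.location
  service_plan_id     = azurerm_service_plan.flex.id

  storage_container_type      = "blobContainer"
  storage_container_endpoint  = "${azurerm_storage_account.this.primary_blob_endpoint}deployments"
  storage_authentication_type = "StorageAccountConnectionString"
  storage_access_key          = azurerm_storage_account.this.primary_access_key

  runtime_name    = "dotnet-isolated"
  runtime_version = "8.0"

  # The two knobs the Flex Consumption plan exposes for scaling behaviour.
  maximum_instance_count = 100
  instance_memory_in_mb  = 2048

  site_config {}
}
```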
Let’s learn how to create an AKS cluster and enable Node Autoprovisioning (NAP) with Terraform.
Note: Since at the time of writing NAP is a preview feature, we will use the azapi provider to enable it.
Creating an AKS cluster and enabling Node Autoprovisioning (NAP): create a file called main.tf with the following contents:
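A minimal, hedged sketch of what that file can contain, assuming azapi 2.x object-body syntax and an example preview API version (verify the property names and API version against the current AKS API):

```hcl
terraform {
  required_providers {
    azurerm = { source = "hashicorp/azurerm" }
    azapi   = { source = "azure/azapi" }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "this" {
  name     = "rg-aks-nap"
  location = "westeurope"
}

resource "azurerm_kubernetes_cluster" "this" {
  name                = "aks-nap"
  location            = azurerm_resource_group.this.location
  resource_group_name = azurerm_resource_group.this.name
  dns_prefix          = "aksnap"

  default_node_pool {
    name       = "system"
    vm_size    = "Standard_D4s_v5"
    node_count = 3
  }

  network_profile {
    network_plugin      = "azure"
    network_plugin_mode = "overlay" # NAP requires Azure CNI overlay
  }

  identity {
    type = "SystemAssigned"
  }
}

# Since NAP is still in preview, enable it through the preview API with azapi.
resource "azapi_update_resource" "nap" {
  type        = "Microsoft.ContainerService/managedClusters@2024-05-02-preview" # example preview API version
  resource_id = azurerm_kubernetes_cluster.this.id

  body = {
    properties = {
      nodeProvisioningProfile = {
        mode = "Auto" # let AKS provision node pools automatically
      }
    }
  }
}
```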
In this post I’ll show you how to set up Workload Identity in an AKS cluster using Terraform and then deploy a pod with the Azure CLI that you will use to log in to Azure.
Long story short: once Workload Identity is configured and enabled, Kubernetes will inject three environment variables needed to log in with the Azure CLI:
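On the Terraform side, the Azure pieces look roughly like this hedged sketch (names are made up, and a resource group resource called azurerm_resource_group.this plus a service account named workload-sa in the default namespace are assumed): enable the OIDC issuer and workload identity on the cluster, then federate a user-assigned identity with the service account the pod runs under.

```hcl
resource "azurerm_kubernetes_cluster" "this" {
  name                = "aks-wi"
  location            = azurerm_resource_group.this.location
  resource_group_name = azurerm_resource_group.this.name
  dns_prefix          = "akswi"

  # The two settings that make Workload Identity work on the cluster side.
  oidc_issuer_enabled       = true
  workload_identity_enabled = true

  default_node_pool {
    name       = "system"
    vm_size    = "Standard_D2s_v5"
    node_count = 1
  }

  identity {
    type = "SystemAssigned"
  }
}

resource "azurerm_user_assigned_identity" "workload" {
  name                = "id-workload"
  location            = azurerm_resource_group.this.location
  resource_group_name = azurerm_resource_group.this.name
}

# Trust tokens issued by the cluster's OIDC issuer for the pod's service account.
resource "azurerm_federated_identity_credential" "workload" {
  name                = "fic-workload"
  resource_group_name = azurerm_resource_group.this.name
  parent_id           = azurerm_user_assigned_identity.workload.id
  audience            = ["api://AzureADTokenExchange"]
  issuer              = azurerm_kubernetes_cluster.this.oidc_issuer_url
  subject             = "system:serviceaccount:default:workload-sa"
}
```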
When you deploy an Azure Kubernetes Service cluster with a node pool composed of spot virtual machines, you are running a cluster with the risk of losing nodes, depending on the configuration you set.
Eviction may occur based on capacity or max price.
In this post I’ll show you how to deploy an AKS cluster with such configuration and simulate a node eviction. The exercise will help you understand the resiliency of your solution and how to query related events with Log Analytics.
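For reference, a spot node pool attached to an existing azurerm-managed cluster looks roughly like the sketch below (made-up names); eviction_policy and spot_max_price are the settings that drive the eviction behaviour explored in the post.

```hcl
resource "azurerm_kubernetes_cluster_node_pool" "spot" {
  name                  = "spot"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.this.id
  vm_size               = "Standard_D4s_v5"
  node_count            = 2

  priority        = "Spot"
  eviction_policy = "Delete" # evicted VMs are deleted rather than deallocated
  spot_max_price  = -1       # -1 = pay up to the current on-demand price, never evicted for price

  # Mark the pool so only workloads that tolerate eviction are scheduled here.
  node_labels = {
    "kubernetes.azure.com/scalesetpriority" = "spot"
  }
  node_taints = [
    "kubernetes.azure.com/scalesetpriority=spot:NoSchedule",
  ]
}
```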
When deploying an AKS cluster, even if you configure RBAC or AAD integration, local accounts will be enabled by default. This means that, given the right set of permissions, a user will be able to run the az aks get-credentials command with the --admin flag, which grants non-auditable access to the cluster.
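With the azurerm provider, turning local accounts off is a single argument on the cluster, combined with Entra ID (AAD) integration; a hedged sketch with made-up names, assuming a resource group resource called azurerm_resource_group.this:

```hcl
resource "azurerm_kubernetes_cluster" "this" {
  name                = "aks-no-local"
  location            = azurerm_resource_group.this.location
  resource_group_name = azurerm_resource_group.this.name
  dns_prefix          = "aksnolocal"

  # Disable local accounts so --admin credentials can no longer be issued.
  local_account_disabled = true

  azure_active_directory_role_based_access_control {
    azure_rbac_enabled = true # authorize kubectl access through Azure RBAC
  }

  default_node_pool {
    name       = "system"
    vm_size    = "Standard_D2s_v5"
    node_count = 1
  }

  identity {
    type = "SystemAssigned"
  }
}
```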
This morning I saw this tweet from Mr Brendan Burns:
AKS Cost Monitoring and Governance With Kubecost https://t.co/OStwIBsuPp
— brendandburns (@brendandburns) April 30, 2021

And I’m sure that once you read through it, you’ll learn that you have to take several steps in order to achieve AKS Cost Monitoring and Governance With Kubecost.
By default, Cloud Shell sessions run in a container on a Microsoft-managed network, separate from any resources you may have deployed in Azure. So what happens when you want to access services you have deployed inside a Virtual Network, such as a private AKS cluster, a Virtual Machine, or Private Endpoint enabled services?
Let’s see how Azure ARM templates, Terraform, Azure Service Operator for Kubernetes and other solutions compare to each other so you can choose the right weapon to win the Infrastructure as Code War!
In this workshop you’ll learn how to deploy, monitor, scale, secure and debug workloads in AKS:
Deploy an application. Configure monitoring and health checks for your application. Scale your application to meet demand. Enable SSL/TLS with an ingress controller. Secret Management with AKS & Key Vault. Debugging your Kubernetes application.
Today I’m going to show you how to manage Terraform Cloud with .NET Core using the Tfe.NetClient library.
The idea is to create a simple console application that will:
Add GitHub as a VCS Provider. Create a Workspace connected to a GitHub repo where your Terraform files live. Create a variable in the workspace. Create a Run (Plan) based on the Terraform files. Apply the Run. Tfe.NetClient is still in alpha and not every Terraform Cloud API or feature is present. Please feel free to submit any issues, bugs or pull requests.