Blog

AKS: Disable local accounts with Terraform
·833 words·4 mins
azure kubernetes aks terraform aad azure active directory
When deploying an AKS cluster, even if you configure RBAC or AAD integration, local accounts are enabled by default. This means that, given the right set of permissions, a user can run the az aks get-credentials command with the --admin flag, which grants non-auditable access to the cluster.
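To give a flavour of the fix, here is a minimal Terraform sketch of an AKS cluster with local accounts disabled, assuming a recent azurerm 3.x provider; the resource names, node pool settings and the admin group variable are illustrative assumptions, not the post's actual code.

    # Illustrative sketch: AKS with AAD integration and local accounts disabled.
    resource "azurerm_kubernetes_cluster" "aks" {
      name                   = "aks-no-local-accounts"
      location               = azurerm_resource_group.rg.location
      resource_group_name    = azurerm_resource_group.rg.name
      dns_prefix             = "aksnolocal"
      local_account_disabled = true # blocks az aks get-credentials --admin

      default_node_pool {
        name       = "default"
        node_count = 1
        vm_size    = "Standard_D2s_v3"
      }

      identity {
        type = "SystemAssigned"
      }

      azure_active_directory_role_based_access_control {
        managed                = true
        admin_group_object_ids = [var.admin_group_object_id] # assumed variable
        azure_rbac_enabled     = true
      }
    }

Note that disabling local accounts only makes sense together with AAD integration; otherwise there would be no way left to authenticate against the cluster.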
Protect your Node.js or .NET API with Azure Active Directory
·1404 words·7 mins
dotnet azure dotnet nodejs aad azure active directory
One question I often get from my customers is how to use Azure Active Directory to protect their Node.js or .NET APIs. Every single time I answer by redirecting them to this amazing post (Proteger una API en Node.js con Azure Active Directory), written in Spanish by my friend and peer Gisela Torres (0gis0).
Azure Database for MySQL Flexible Server: Failover Test
·709 words·4 mins
azure mysql availability zones
Azure Database for MySQL Flexible Server allows configuring high availability with automatic failover. With Zone-redundant HA your service has redundancy of infrastructure across multiple availability zones. Zone-redundant HA is preferred when you want to achieve the highest level of availability against any infrastructure failure in the availability zone and when latency across the availability zone is acceptable.
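As a rough illustration of what zone-redundant HA looks like in Terraform, assuming a recent azurerm provider that ships azurerm_mysql_flexible_server; the names, zones, SKU and password variable below are placeholders, not the post's code:

    # Illustrative sketch: MySQL Flexible Server with zone-redundant high availability.
    resource "azurerm_mysql_flexible_server" "mysql" {
      name                   = "mysql-failover-sample"
      resource_group_name    = azurerm_resource_group.rg.name
      location               = azurerm_resource_group.rg.location
      administrator_login    = "mysqladmin"
      administrator_password = var.mysql_password # assumed variable
      sku_name               = "GP_Standard_D2ds_v4"
      zone                   = "1"

      high_availability {
        mode                      = "ZoneRedundant"
        standby_availability_zone = "2"
      }
    }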
Azure Cache for Redis: Failover Test
·625 words·3 mins
azure redis availability zones
Azure Cache for Redis supports zone redundancy in its Premium and Enterprise tiers. A zone-redundant cache runs on VMs spread across multiple Availability Zones, providing higher resilience and availability. Today I’ll show how to test the failover of a zone-redundant cache.

Deploy Azure Cache for Redis with availability zones

Create a main.tf file with the following content:

    terraform {
      required_version = "> 0.14"
      required_providers {
        azurerm = {
          version = "= 2.57.0"
        }
        random = {
          version = "= 3.1.0"
        }
      }
    }

    provider "azurerm" {
      features {}
    }

    # Location of the services
    variable "location" {
      default = "west europe"
    }

    # Resource Group Name
    variable "resource_group" {
      default = "redis-failover"
    }

    # Name of the Redis cluster
    variable "redis_name" {
      default = "redis-failover"
    }

    resource "random_id" "random" {
      byte_length = 8
    }

    resource "azurerm_resource_group" "rg" {
      name     = var.resource_group
      location = var.location
    }

    resource "azurerm_redis_cache" "redis" {
      name                = "${var.redis_name}-${lower(random_id.random.hex)}"
      location            = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
      capacity            = 2
      family              = "P"
      sku_name            = "Premium"
      enable_non_ssl_port = true
      minimum_tls_version = "1.2"

      redis_configuration {
      }

      zones = ["1", "2"]
    }

    resource "azurerm_log_analytics_workspace" "logs" {
      name                = "redis-logs"
      location            = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
      sku                 = "PerGB2018"
      retention_in_days   = 30
    }

    resource "azurerm_monitor_diagnostic_setting" "monitor" {
      name                       = lower("extaudit-${var.redis_name}-diag")
      target_resource_id         = azurerm_redis_cache.redis.id
      log_analytics_workspace_id = azurerm_log_analytics_workspace.logs.id

      metric {
        category = "AllMetrics"

        retention_policy {
          enabled = false
        }
      }

      log {
        category = "ConnectedClientList"
        enabled  = false

        retention_policy {
          days    = 0
          enabled = false
        }
      }

      lifecycle {
        ignore_changes = [metric]
      }
    }

    output "redis_name" {
      value = azurerm_redis_cache.redis.name
    }

    output "redis_host_name" {
      value = azurerm_redis_cache.redis.hostname
    }

    output "redis_primary_access_key" {
      value     = azurerm_redis_cache.redis.primary_access_key
      sensitive = true
    }

Note that the zones are specified (zones = ["1", "2"]), making the cache zone-redundant.
AKS: Resize Persistent Volume Claim to expand a Managed Premium Disk
·428 words·3 mins
azure kubernetes aks persistent volume claim managed disk
If you deployed a persistent volume claim using the managed-premium storage class, then ran out of space, and now you are searching for how to expand the disk, this is how you can do it from scratch: the managed-premium storage class is a premium storage class that allows volume expansion: allowVolumeExpansion: true.
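The key prerequisite is a storage class that allows volume expansion. Purely as an illustration, and switching technique, the post works with the built-in managed-premium class and kubectl, while this sketch declares an equivalent expandable class with the Terraform kubernetes provider, using assumed names:

    # Illustrative sketch: a premium Azure Disk storage class that allows volume
    # expansion, so a PersistentVolumeClaim bound to it can later be resized.
    resource "kubernetes_storage_class" "premium_expandable" {
      metadata {
        name = "managed-premium-expandable"
      }
      storage_provisioner    = "kubernetes.io/azure-disk"
      reclaim_policy         = "Delete"
      allow_volume_expansion = true
      parameters = {
        storageaccounttype = "Premium_LRS"
        kind               = "Managed"
      }
    }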
AKS: Open Service Mesh Traffic Access Control
·799 words·4 mins
azure kubernetes aks osm
In my previous post, AKS: Open Service Mesh & mTLS, I described how to deploy an AKS cluster with Open Service Mesh enabled, how easy it is to onboard applications onto the mesh by enabling automatic sidecar injection of the Envoy proxy, and how OSM enables secure service-to-service communication. This time I’ll show you that Open Service Mesh (OSM) also provides a nice feature for controlling traffic between microservices: Traffic Access Control, based on the SMI specifications.
AKS: Open Service Mesh & mTLS
·840 words·4 mins
azure kubernetes aks osm
Open Service Mesh (OSM) is a lightweight and extensible cloud native service mesh, easy to install and configure, with features such as mTLS to secure your microservice environments. Now that Open Service Mesh (OSM) integration with Azure Kubernetes Service (AKS) is GA (check the announcement), I’ll show you not only how to deploy it but also how to add your microservices to the mesh so communication between them is encrypted.
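For context, the add-on itself can be enabled straight from Terraform; here is a minimal, hedged sketch assuming an azurerm provider version that exposes open_service_mesh_enabled, with placeholder names:

    # Illustrative sketch: AKS cluster with the Open Service Mesh add-on enabled.
    resource "azurerm_kubernetes_cluster" "aks" {
      name                      = "aks-osm-sample"
      location                  = azurerm_resource_group.rg.location
      resource_group_name       = azurerm_resource_group.rg.name
      dns_prefix                = "aksosm"
      open_service_mesh_enabled = true

      default_node_pool {
        name       = "default"
        node_count = 2
        vm_size    = "Standard_D2s_v3"
      }

      identity {
        type = "SystemAssigned"
      }
    }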
AKS: High Available Storage with Rook and Ceph
·1681 words·8 mins
azure kubernetes aks rook ceph storage
Disclaimer: this is just a Proof of Concept. If you deploy Azure Kubernetes Service clusters with availability zones, you’ll probably need a highly available storage solution. In that situation you may use Azure Files as an external storage solution. But what if you need something that performs better? Or something running inside your cluster?
AKS: Container Insights Pod Requests and Limits
·602 words·3 mins
azure kubernetes aks azure monitor log analytics container insights
Today I’ll show you how to use Container Insights and Azure Monitor to check your AKS cluster for pods without requests and limits. You’ll need to use the following tables and fields:

KubePodInventory: table that stores the Kubernetes cluster’s pod and container information.
- ClusterName: ID of the Kubernetes cluster from which the event was sourced.
- Computer: computer/node name in the cluster that hosts this pod/container.
- Namespace: Kubernetes namespace for the pod/container.
- ContainerName: this is in poduid/containername format.

Perf: performance counters from Windows and Linux agents that provide insight into the performance of hardware components, operating systems and applications.
- ObjectName: name of the performance object.
- CounterName: name of the performance counter.
- CounterValue: the value of the counter.

And take a close look at the following Objects and Counters:
Static website hosting in an Azure Storage Account protected with Private Endpoint
·766 words·4 mins
azure static website storage account private endpoint storage
This post will show you how to deploy a Static Website on a Storage Account protected with Private Endpoint using Terraform.

Define the Terraform providers to use

Create a providers.tf file with the following contents:

    terraform {
      required_version = "> 0.12"
      required_providers {
        azurerm = {
          source  = "azurerm"
          version = "~> 2.26"
        }
      }
    }

    provider "azurerm" {
      features {}
      skip_provider_registration = true
    }

Define the variables

Create a variables.tf file with the following contents:
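The excerpt stops before the resource definitions, so here is a minimal, hedged sketch of the two resources the post revolves around: a storage account with a static_website block and a private endpoint on its web sub-resource. The names and the subnet reference are illustrative assumptions, not the post's actual variables.

    # Illustrative sketch only: a storage account hosting a static website,
    # reached privately through a private endpoint on its "web" sub-resource.
    resource "azurerm_storage_account" "example" {
      name                     = "staticwebsitesample"
      resource_group_name      = azurerm_resource_group.rg.name
      location                 = azurerm_resource_group.rg.location
      account_tier             = "Standard"
      account_replication_type = "LRS"

      static_website {
        index_document     = "index.html"
        error_404_document = "404.html"
      }
    }

    resource "azurerm_private_endpoint" "web" {
      name                = "static-website-pe"
      resource_group_name = azurerm_resource_group.rg.name
      location            = azurerm_resource_group.rg.location
      subnet_id           = azurerm_subnet.endpoints.id # assumed subnet

      private_service_connection {
        name                           = "static-website-connection"
        private_connection_resource_id = azurerm_storage_account.example.id
        subresource_names              = ["web"]
        is_manual_connection           = false
      }
    }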
AKS: Windows node pool with spot virtual machines and ephemeral disks
·945 words·5 mins
kubernetes azure windows ephemeral disks spot virtual machines
Some months ago a customer asked me if there was a way to deploy a Windows node pool with spot virtual machines and ephemeral disks in Azure Kubernetes Service (AKS). The idea was to create a cluster that could be used to run Windows batch workloads and minimize costs by deploying the following:
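The excerpt ends before that list, but a minimal sketch of what such a node pool can look like in Terraform follows; the pool name, cluster reference and VM size are illustrative assumptions, not taken from the post.

    # Illustrative sketch: Windows node pool using Spot VMs and an ephemeral OS disk.
    # The VM size must have enough cache/temp storage to host an ephemeral OS disk.
    resource "azurerm_kubernetes_cluster_node_pool" "win_spot" {
      name                  = "winspt" # Windows pool names are limited to 6 characters
      kubernetes_cluster_id = azurerm_kubernetes_cluster.aks.id # assumed cluster reference
      vm_size               = "Standard_D4s_v3"
      node_count            = 1
      os_type               = "Windows"
      os_disk_type          = "Ephemeral"

      priority        = "Spot"
      eviction_policy = "Delete"
      spot_max_price  = -1 # pay up to the current on-demand price
    }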
AKS: Persistent Volume Claim with an Azure File Storage protected with a Private Endpoint
·853 words·5 mins
kubernetes azure aks persistent volume claim azure files private endpoint
This post will show you the steps you’ll have to take to deploy an Azure Files Storage with a Private Endpoint and use it to create volumes for an Azure Kubernetes Service cluster.

Create a Bicep file to declare the Azure resources

You’ll have to declare the following resources:
Plan IP addressing for AKS configured with Azure CNI Networking
·328 words·2 mins
kubernetes azure aks container network interface cni ip
When configuring Azure Kubernetes Service with Azure Container Network Interface (CNI), every pod gets an IP address from the subnet you’ve configured. So how do you plan your address space? What factors should you consider?

- Each node consumes one IP.
- Each pod consumes one IP.
- Each internal LoadBalancer Service you anticipate consumes one IP.
- Azure reserves 5 IP addresses within each subnet.
- The maximum pods per node is 250.
- The lower limit for maximum pods per node is 10.
- The minimum per cluster is 30 pods.
- The maximum nodes per cluster is 1000.
- When a cluster is upgraded, a new node is added as part of the process, which requires a minimum of one additional block of IP addresses to be available. Your node count is then n + 1.
- When you scale a cluster, an additional node is added. Your node count is then n + number-of-additional-scaled-nodes-you-anticipate + 1.

With all that in mind, the formula to calculate the number of IPs required for your cluster should look like this:
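The formula itself is cut off in this excerpt. Purely as an illustration (the numbers and the exact expression are assumptions derived from the factors listed above, not necessarily the post's formula), a Terraform locals block could combine them like this:

    # Illustrative IP-count calculation, assuming the factors above.
    locals {
      node_count        = 50 # current nodes (n)
      scale_out_nodes   = 10 # additional nodes you anticipate scaling to
      max_pods_per_node = 30 # configured max pods per node
      internal_lb_ips   = 2  # internal LoadBalancer Services you anticipate

      # (n + scale nodes + 1 upgrade node), each needing one node IP plus one IP per pod,
      # plus the internal load balancer IPs and the 5 IPs Azure reserves per subnet.
      required_ips = (local.node_count + local.scale_out_nodes + 1) * (local.max_pods_per_node + 1) + local.internal_lb_ips + 5
    }

    output "required_ips" {
      value = local.required_ips
    }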
Running k3s inside WSL2 on a Surface Pro X
·236 words·2 mins
kubernetes k3s arm64 WSL2
I’m a proud owner of a Surface Pro X SQ2, which is an ARM64 device. If you’ve been reading me, you know I like to tinker with Kubernetes, so I needed a solution for this device. I remembered reading about k3s, a lightweight Kubernetes distro built for IoT & Edge computing, and decided to give it a try.
Deploy AKS + Kubecost with Terraform
·910 words·5 mins
azure kubernetes aks terraform kubecost
This morning I saw this tweet from Mr. Brendan Burns: AKS Cost Monitoring and Governance With Kubecost https://t.co/OStwIBsuPp — brendandburns (@brendandburns) April 30, 2021. And I’m sure that once you read through it, you’ll learn that you have to take several steps in order to achieve AKS cost monitoring and governance with Kubecost.
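A common way to wire Kubecost up with Terraform is the Helm provider; here is a hedged sketch, where the chart repository, chart name, namespace and token variable are assumptions based on Kubecost's public Helm chart, not taken from the post.

    # Illustrative sketch: install Kubecost on the cluster via the Helm provider.
    resource "helm_release" "kubecost" {
      name             = "kubecost"
      repository       = "https://kubecost.github.io/cost-analyzer/" # assumed chart repo
      chart            = "cost-analyzer"
      namespace        = "kubecost"
      create_namespace = true

      set {
        name  = "kubecostToken" # token obtained from Kubecost (assumption)
        value = var.kubecost_token
      }
    }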
Deploy a Private Azure Cloud Shell with Terraform
·932 words·5 mins
azure terraform cloud shell
By default, Cloud Shell sessions run in a container inside a Microsoft network, separate from any resources you may have deployed in Azure. So what happens when you want to access services you have deployed inside a Virtual Network, such as a private AKS cluster, a Virtual Machine, or Private Endpoint enabled services?
ASP.NET Core OpenTelemetry Logging
·361 words·2 mins
dotnet opentelemetry aspnetcore
As you may know, I’ve been collaborating with Dapr, and I’ve learned that one of the things it enables you to do is collect traces with the OpenTelemetry Collector and push the events to Azure Application Insights. After some reading, I went and checked whether I could also write my ASP.NET Core applications to log using the OpenTelemetry Log and Event record definition:
Dapr: Reading local secrets with .NET 5
·308 words·2 mins
dotnet dapr secrets
Now that Dapr is about to hit version 1.0.0, let me show you how easy it is to read secrets with a .NET 5 console application.

Create a console application

    dotnet new console -n DaprSecretSample
    cd DaprSecretSample

Add a reference to the Dapr.Client library

    dotnet add package Dapr.Client --prerelease

Create a Secret Store component

Create a components folder and inside place a file named secretstore.yaml with the following contents:
What I Learned From Hacktoberfest 2020
·343 words·2 mins
kubernetes hacktoberfest
Hacktoberfest® is an open global event where people all around the globe contribute to open source projects. The idea behind Hacktoberfest® is great; in my opinion it encourages and motivates contributions, especially from those who don’t know where to start with OSS. But sadly, what we saw this year was many people, let’s call them trolls, spamming repos with useless pull requests in order to claim the nice tee. The Hacktoberfest® organization reacted quickly to fix the situation, and the rules of the game have been changed: the event is now officially opt-in only for projects and maintainers.
Managing Terraform Cloud with .NET Core
·791 words·4 mins
dotnet terraform terraform cloud
Today I’m going to show you how to manage Terraform Cloud with .NET Core using the Tfe.NetClient library. The idea is to create a simple console application that will:

- Add GitHub as a VCS Provider.
- Create a Workspace connected to a GitHub repo where your Terraform files live.
- Create a variable in the workspace.
- Create a Run (Plan) based on the Terraform files.
- Apply the Run.

Tfe.NetClient is still in alpha and not every Terraform Cloud API or feature is present. Please feel free to submit any issues, bugs or pull requests.