In this post, I’ll walk you through the major refactoring of Azure Quick Review (azqr), where I used GitHub Copilot’s plan mode and agent mode while supervising every change. My role was purely architectural: I defined what needed to change, reviewed every proposal, and guided the AI through the process.
TL;DR # I refactored Azure Quick Review (azqr) without writing a single line of code by using GitHub Copilot’s plan mode (to design the architecture) and agent mode (to implement it). The refactor eliminated massive technical debt (72 scanner packages, 72 command files, and hundreds of individual ARM calls) and replaced it with a centralized scanner registry, batched Azure Resource Graph queries, a modular pipeline, dynamic command generation, and a unified throttling policy.
As organizations scale their Azure OpenAI workloads, throttling (HTTP 429 errors) becomes a critical operational concern. These errors indicate that your requests exceed the provisioned capacity, leading to degraded user experience, failed completions, and potential revenue loss.
This post introduces the Azure Quick Review openai-throttling plugin, which helps you identify throttling patterns, analyze affected deployments, and make data-driven decisions for capacity planning.
Azure availability zones are critical for high availability and disaster recovery. However, zone numbers (1, 2, 3) are logical abstractions: their physical datacenter mappings vary from subscription to subscription. Your zone-redundant deployment might actually share infrastructure with your DR environment because different subscriptions map zones differently.
This post explores the Azure Quick Review zone-mapping plugin and why zone mappings matter for high availability, disaster recovery, and cross-subscription architectures.
In this post, I’ll show you how to deploy Azure AI Foundry connected to Bing Grounding using Terraform. This setup enables you to leverage Azure’s powerful AI capabilities and enrich them with Bing’s search data, all managed as code for repeatability and automation.
Prerequisites # Before you begin, make sure you have:
Last year I wrote a post about Enabling NAP with Terraform. While that post is still valid, I wanted to write about a scenario that many of you might be facing: enabling NAP when bringing your own VNET.
So let’s learn how to create an AKS cluster and enable Node Autoprovisioning (NAP) with Terraform when bringing your own VNET.
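To give a sense of the bring-your-own-VNET part, here is a minimal Terraform sketch (resource names, location, and address ranges are placeholders): you create the VNET and subnet yourself and point the cluster’s default node pool at that subnet via vnet_subnet_id. The NAP enablement itself is not shown here.

```hcl
# Illustrative only: names, location and address ranges are placeholders.
resource "azurerm_resource_group" "rg" {
  name     = "rg-aks-nap"
  location = "westeurope"
}

resource "azurerm_virtual_network" "vnet" {
  name                = "vnet-aks"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  address_space       = ["10.10.0.0/16"]
}

resource "azurerm_subnet" "nodes" {
  name                 = "snet-nodes"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.10.0.0/20"]
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "aks-nap-byo-vnet"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "aksnap"

  default_node_pool {
    name       = "system"
    vm_size    = "Standard_D4s_v5"
    node_count = 1
    # The bring-your-own-VNET piece: place the nodes in your own subnet.
    vnet_subnet_id = azurerm_subnet.nodes.id
  }

  identity {
    type = "SystemAssigned"
  }

  # NAP expects Azure CNI in overlay mode; enabling NAP itself (preview) is done separately.
  network_profile {
    network_plugin      = "azure"
    network_plugin_mode = "overlay"
  }
}
```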
In this post, we will explore how to create custom evaluators to evaluate your Generative AI application locally with the Azure AI Evaluation SDK.
The results of the evaluation can be uploaded to your Azure AI Foundry project where you can visualize and track the results.
Prerequisites # Before you begin, ensure you have the following:
In this post, we will go through the steps required to install Chaos Mesh on an Azure Kubernetes Service (AKS) cluster using Terraform.
Chaos Mesh is a cloud-native Chaos Engineering platform that orchestrates chaos on Kubernetes environments. It is designed to be a scalable and extensible platform for chaos engineering.
Chaos Mesh is required if you want to use Azure Chaos Studio to run chaos experiments on your AKS clusters.
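As a rough idea of what the Terraform side looks like, the chart can be installed with the helm provider; the values below follow the Chaos Mesh docs for containerd-based clusters, and it is assumed the helm provider is already wired to the AKS cluster credentials.

```hcl
# Assumes the helm provider is already configured against the AKS cluster.
resource "helm_release" "chaos_mesh" {
  name             = "chaos-mesh"
  repository       = "https://charts.chaos-mesh.org"
  chart            = "chaos-mesh"
  namespace        = "chaos-mesh"
  create_namespace = true

  # AKS nodes use containerd, so point the chaos daemon at its socket.
  set {
    name  = "chaosDaemon.runtime"
    value = "containerd"
  }
  set {
    name  = "chaosDaemon.socketPath"
    value = "/run/containerd/containerd.sock"
  }
}
```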
In this post, we will go through the steps required to install Kube Resource Orchestrator (kro) on an Azure Kubernetes Service (AKS) cluster using Terraform. kro is an open-source project that enables you to define custom Kubernetes APIs using simple and straightforward configuration.
Defining custom Kubernetes APIs is becoming essential to simplify the developer experience and increase productivity.
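To make that concrete, here is a minimal sketch using the helm provider. kro is published as an OCI Helm chart; the registry path below is an assumption based on the project’s install instructions, so verify it (and pin a version) against the kro repository.

```hcl
# Sketch only: the OCI chart location is an assumption to verify against the kro docs.
resource "helm_release" "kro" {
  name             = "kro"
  repository       = "oci://ghcr.io/kro-run/kro"
  chart            = "kro"
  # version        = "x.y.z"  # pin to a release you have tested
  namespace        = "kro"
  create_namespace = true
}
```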
In this post, we will go through the steps required to install the Azure Service Operator (ASO) on an Azure Kubernetes Service (AKS) cluster using Terraform. The Azure Service Operator is an open-source project that enables you to provision and manage Azure resources using Kubernetes custom resources.
To install the Azure Service Operator on an AKS cluster, follow these steps:
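As a hedged sketch of the shape of it, ASO v2 is installed from its Helm chart once cert-manager is running in the cluster, and it needs to know which Azure tenant, subscription, and identity to use. The chart location, value names, and placeholder IDs below are assumptions to verify against the ASO documentation.

```hcl
# Sketch only: chart location and value names are assumptions to verify.
# ASO requires cert-manager to be installed in the cluster first (not shown).
resource "helm_release" "aso" {
  name             = "aso2"
  repository       = "https://raw.githubusercontent.com/Azure/azure-service-operator/main/v2/charts"
  chart            = "azure-service-operator"
  namespace        = "azureserviceoperator-system"
  create_namespace = true

  set {
    name  = "azureSubscriptionID"
    value = "<subscription-id>"
  }
  set {
    name  = "azureTenantID"
    value = "<tenant-id>"
  }
  set {
    name  = "azureClientID"
    value = "<client-id>" # the identity ASO uses to talk to ARM
  }
  set {
    # Only install the CRD groups you actually need.
    name  = "crdPattern"
    value = "resources.azure.com/*;containerservice.azure.com/*"
  }
}
```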
Today we will walk through a GitHub Actions workflow that automates the Azure Quick Review (azqr) scan process. This workflow is designed to run on a schedule, on push events to the main branch, and on pull requests to the main branch.
Prerequisites # Before you start, make sure you have the following prerequisites in place:
Tinyproxy is a lightweight HTTP/HTTPS proxy server designed to be fast and small. It is useful for scenarios where you need to set up a proxy server quickly and easily.
Recently I used it to check what happens when a set of Azure domains is blocked (e.g. management.azure.com), and it worked like a charm.
Let’s learn how to create an AKS cluster and enable Static Egress Gateway with Terraform.
Static Egress Gateway in AKS provides a solution for configuring fixed source IP addresses for outbound traffic from your AKS workloads. This means you can use a specific IP range for egress traffic from specific workloads, which can be useful for scenarios like whitelisting IP addresses in a firewall.
The Flex Consumption plan for Azure Functions is a new hosting option that provides more flexibility and cost efficiency for running serverless applications. Unlike the traditional Consumption plan, which charges based on the number of executions and execution time, the Flex Consumption plan allows you to specify the maximum number of instances and memory allocation for your function app. This plan is ideal for scenarios where you need predictable performance and cost, as it enables you to control the scaling behavior of your functions more precisely.
Let’s learn how to create an AKS cluster and enable Node Autoprovisioning (NAP) with Terraform.
Note: since NAP is a preview feature at the time of writing, we will use the azapi provider to enable it.
Creating an AKS cluster and enabling Node Autoprovisioning (NAP) # Create a file called main.tf with the following contents:
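The post walks through the full main.tf; the fragment below is only a sketch of the azapi piece, showing one way to set the node provisioning mode on an existing azurerm_kubernetes_cluster resource (the API version and property path are assumptions to check against the current preview documentation).

```hcl
# Sketch only: API version and property names are assumptions to verify.
# Assumes an azurerm_kubernetes_cluster.aks resource exists elsewhere in the config.
resource "azapi_update_resource" "enable_nap" {
  type        = "Microsoft.ContainerService/managedClusters@2024-03-02-preview"
  resource_id = azurerm_kubernetes_cluster.aks.id

  body = {
    properties = {
      nodeProvisioningProfile = {
        mode = "Auto"
      }
    }
  }
}
```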
Azure Kubernetes Service (AKS) Automatic is a new SKU that simplifies the management of your AKS clusters. With this SKU, Azure ensures that your cluster is production ready with built-in best practices and a great code-to-Kubernetes experience.
Creating an AKS Automatic cluster # Creating an AKS cluster with the Automatic SKU is as simple as running the following Azure CLI command:
In this post I’ll show you how to set up Workload Identity in an AKS cluster using Terraform and then deploy a pod with Azure CLI that you will use to log in to Azure.
Long story short: once workload identity is configured and enabled, Kubernetes will inject three environment variables into the pod that are needed to log in with Azure CLI.
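Before that point, the cluster and identity plumbing has to exist. Here is a condensed Terraform sketch of that side (names are placeholders, and the Kubernetes service account referenced in the federated credential subject is assumed to be created separately):

```hcl
# Placeholders throughout; the AKS resource is trimmed to the flags that matter here.
resource "azurerm_kubernetes_cluster" "aks" {
  name                = "aks-wi-demo"
  location            = "westeurope"
  resource_group_name = "rg-aks-wi-demo"
  dns_prefix          = "akswi"

  # The two switches that matter for workload identity:
  oidc_issuer_enabled       = true
  workload_identity_enabled = true

  default_node_pool {
    name       = "system"
    vm_size    = "Standard_D4s_v5"
    node_count = 1
  }

  identity {
    type = "SystemAssigned"
  }
}

# User-assigned identity the pod will authenticate as.
resource "azurerm_user_assigned_identity" "wi" {
  name                = "id-aks-workload"
  location            = "westeurope"
  resource_group_name = "rg-aks-wi-demo"
}

# Trust between the cluster's OIDC issuer and the Kubernetes service account.
resource "azurerm_federated_identity_credential" "wi" {
  name                = "fc-aks-workload"
  resource_group_name = "rg-aks-wi-demo"
  parent_id           = azurerm_user_assigned_identity.wi.id
  audience            = ["api://AzureADTokenExchange"]
  issuer              = azurerm_kubernetes_cluster.aks.oidc_issuer_url
  subject             = "system:serviceaccount:default:workload-sa"
}
```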
What is Azure Quick Review? # If you are looking for a way to quickly assess the status and configuration of your Azure resources, you might want to try Azure Quick Review (azqr): a command-line interface (CLI) tool that scans your Azure resources and generates an Excel report with detailed information and recommendations based on Azure’s best practices.
Back in 2017 I wrote a post about how to run a precompiled .NET Core Azure Function in a container. Fast forward to 2023 and, as some of you know, I’ve been playing with Golang for a while now so I thought it was about time to translate the .NET code and make it work with Golang.
After years of talking about Kubernetes, Dapr and KEDA, it’s time to run our microservices and containerized applications on a true serverless platform: Azure Container Apps.
In this session you’ll learn:
- Basic concepts: environments, containers, and revisions.
- The benefits of built-in support for Dapr & KEDA.
- How to use managed identities.
- How to secure and monitor your platform.

Fast forward the video to 4:24:00.
When you deploy an Azure Kubernetes Service cluster with a node pool composed of spot virtual machines, you are running a cluster with the risk of losing nodes based on the configuration you set.
Eviction may occur based on capacity or max price.
In this post I’ll show you how to deploy an AKS cluster with such a configuration and simulate a node eviction. The exercise will help you understand the resiliency of your solution and how to query related events with Log Analytics.
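For reference, a spot node pool in Terraform mostly comes down to three attributes on azurerm_kubernetes_cluster_node_pool (the cluster reference and VM size below are placeholders):

```hcl
# Placeholders for the cluster reference and VM size.
resource "azurerm_kubernetes_cluster_node_pool" "spot" {
  name                  = "spot"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.aks.id
  vm_size               = "Standard_D4s_v5"
  node_count            = 2

  priority        = "Spot"
  eviction_policy = "Delete" # evicted VMs are deleted rather than deallocated
  spot_max_price  = -1       # -1 means pay up to the current on-demand price

  # AKS taints spot pools; workloads must tolerate this taint to land here.
  node_taints = ["kubernetes.azure.com/scalesetpriority=spot:NoSchedule"]
}
```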