Detect and Blur Faces with .NET Core and Face API

by Carlos Mendible on 30 Jul 2016 » Azure, dotNetCore

Today I’ll show you how to create a small console application that detects and blurs faces with .NET Core and the Face API.

First be aware of the following prerequisites:

| **OS** | **Prerequisites** |
| --- | --- |
| Windows | .NET Core SDK for Windows, or both Visual Studio 2015 Update 3 and .NET Core 1.0 for Visual Studio |
| Linux, Mac, or Docker | Check out .NET Core |

You will also need an Azure Cognitive Services Face API account and its access keys (you can subscribe in seconds if you don't have a Cognitive Services account; see the Face API documentation for details).
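Under the hood, the Face API client library simply POSTs the image bytes to a REST endpoint, authenticated with your key in the `Ocp-Apim-Subscription-Key` header. As a rough sketch (in Python, just for illustration; the endpoint URL shown is the 2016-era Project Oxford one and is an assumption, so check the Face API documentation for the URL of your region):

```python
import urllib.request

def build_detect_request(image_bytes, api_key,
                         endpoint="https://api.projectoxford.ai/face/v1.0/detect"):
    """Build (but do not send) the HTTP request that face detection boils down to."""
    return urllib.request.Request(
        endpoint,
        data=image_bytes,  # raw image bytes in the request body
        headers={
            "Ocp-Apim-Subscription-Key": api_key,        # your Face API key
            "Content-Type": "application/octet-stream",  # binary image upload
        },
        method="POST",
    )

# Placeholder bytes and key, only to show the shape of the request.
req = build_detect_request(b"\xff\xd8\xff\xe0", "[Your key here]")
```

The response is a JSON array of detected faces, each with a `faceRectangle`; that is what the .NET client surfaces as `FaceRectangle` objects later in this post.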

Now let’s start:

1. Create a folder for your new project


Open a command prompt and run:

mkdir projectoxford

2. Create the project


cd projectoxford
dotnet new

3. Create a settings file


Create an appsettings.json file to hold your Face API key (remember to replace the value with the one from your Cognitive Services account):

{
  "FaceAPIKey": "[Your key here]"
}

4. Modify the project file


Modify the project.json to add the Microsoft.ProjectOxford.Face dependency, and specify in the buildOptions section that the appsettings.json file must be copied to the output so it becomes available to the application once you build it.

We’ll also need ImageProcessorCore to process the image (System.Drawing is not available in .NET Core), plus the configuration extensions and tools to work with JSON settings files and user secrets.

{
  "userSecretsId": "cmendible3-dotnetcore.samples-projectOxford",
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable",
    "emitEntryPoint": true,
    "copyToOutput": { 
      "include": "appsettings.json"
    }
  },
  "dependencies": {
    "Microsoft.Extensions.Configuration": "1.0.0",
    "Microsoft.Extensions.Configuration.Json": "1.0.0",
    "Microsoft.Extensions.Configuration.UserSecrets": "1.0.0",
    "Microsoft.ProjectOxford.Face": "1.1.0",
    "System.Runtime.Serialization.Primitives": "4.1.1",
    "ImageProcessorCore": "1.0.0-alpha-966"
  },
  "tools": {
    "Microsoft.Extensions.SecretManager.Tools": "1.0.0-*"
  },
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.0"
        }
      },
      "imports": "dnxcore50"
    }
  }
}

5. Add the ImageProcessorCore package source


ImageProcessorCore is in alpha stage and its packages are only available via MyGet, so add a NuGet.config file with the following content:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="imageprocessor" value="https://www.myget.org/F/imageprocessor/api/v3/index.json" protocolVersion="3" />
  </packageSources>
</configuration>

6. Restore packages


You just added new dependencies to the project.json file and a NuGet.config file, so restore the packages with the following command:

dotnet restore

7. Modify Program.cs


Replace the contents of the Program.cs file with the following code:

namespace ConsoleApplication
{
    using System;
    using System.IO;
    using System.Linq;
    using System.Threading.Tasks;
    using ImageProcessorCore;
    using Microsoft.Extensions.Configuration;
    using Microsoft.ProjectOxford.Face;
    using Microsoft.ProjectOxford.Face.Contract;

    public class Program
    {
        /// <summary>
        /// Let's detect and blur some faces!
        /// </summary>
        public static void Main(string[] args)
        {
            // The name of the source image.
            const string sourceImage = "faces.jpg";

            // The name of the destination image
            const string destinationImage = "detectedfaces.jpg";

            // Get the configuration
            var configuration = BuildConfiguration();

            // Detect the faces in the source file
            DetectFaces(sourceImage, configuration["FaceAPIKey"])
                .ContinueWith((task) =>
                {
                    // Save the result of the detection
                    var faceRects = task.Result;

                    Console.WriteLine($"Detected {faceRects.Length} faces");

                    // Blur the detected faces and save in another file
                    BlurFaces(faceRects, sourceImage, destinationImage);

                    Console.WriteLine("Done!!!");
                });

            Console.ReadLine();
        }

        /// <summary>
        /// Build the configuration
        /// </summary>
        /// <returns>Returns the configuration</returns>
        private static IConfigurationRoot BuildConfiguration()
        {
            // Enable the app to read json settings files
            var builder = new ConfigurationBuilder()
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);

#if DEBUG
            // We use user secrets in Debug mode so API keys are not uploaded to source control 
            builder.AddUserSecrets("cmendible3-dotnetcore.samples-projectOxford");
#endif

            return builder.Build();
        }

        /// <summary>
        /// Blur the detected faces from the source image.
        /// </summary>
        /// <param name="faceRects">Detected face rectangles</param>
        /// <param name="sourceImage">Name of the source image file</param>
        /// <param name="destinationImage">Name of the destination image file</param>
        private static void BlurFaces(FaceRectangle[] faceRects, string sourceImage, string destinationImage)
        {
            if (File.Exists(destinationImage))
            {
                File.Delete(destinationImage);
            }

            if (faceRects.Length > 0)
            {
                using (FileStream stream = File.OpenRead(sourceImage))
                using (FileStream output = File.OpenWrite(destinationImage))
                {
                    var image = new Image<Color, uint>(stream);

                    // Blur every detected face
                    foreach (var faceRect in faceRects)
                    {
                        var rectangle = new Rectangle(
                            faceRect.Left,
                            faceRect.Top,
                            faceRect.Width,
                            faceRect.Height);

                        image = image.BoxBlur(20, rectangle);
                    }

                    image.SaveAsJpeg(output);
                }
            }

        }

        /// <summary>
        /// Detect faces calling the Face API
        /// </summary>
        /// <param name="imageFilePath">Path of the image file to analyze</param>
        /// <param name="apiKey">Face API key</param>
        /// <returns>Detected faces rectangles</returns>
        private static async Task<FaceRectangle[]> DetectFaces(string imageFilePath, string apiKey)
        {
            var faceServiceClient = new FaceServiceClient(apiKey);

            try
            {
                using (Stream imageFileStream = File.OpenRead(imageFilePath))
                {
                    var faces = await faceServiceClient.DetectAsync(imageFileStream);
                    var faceRects = faces.Select(face => face.FaceRectangle);
                    return faceRects.ToArray();
                }
            }
            catch (Exception)
            {
                // If the service call fails, behave as if no faces were detected
                return new FaceRectangle[0];
            }
        }
    }
}
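The `BoxBlur(20, rectangle)` call above replaces each pixel inside a face rectangle with the average of its neighborhood, which is what hides the face. To make that concrete, here is a simplified grayscale sketch of a box blur restricted to a rectangle (plain Python for illustration only; this is not the ImageProcessorCore implementation, which works on full-color images):

```python
def box_blur_region(pixels, left, top, width, height, radius):
    """Box-blur only the given rectangle of a 2D grayscale image:
    each pixel inside it becomes the mean of its (2*radius+1)^2
    neighborhood, clamped at the image borders."""
    rows, cols = len(pixels), len(pixels[0])
    out = [row[:] for row in pixels]  # pixels outside the rectangle are untouched
    for y in range(top, min(top + height, rows)):
        for x in range(left, min(left + width, cols)):
            y0, y1 = max(0, y - radius), min(rows, y + radius + 1)
            x0, x1 = max(0, x - radius), min(cols, x + radius + 1)
            window = [pixels[j][i] for j in range(y0, y1) for i in range(x0, x1)]
            out[y][x] = sum(window) // len(window)
    return out

# An 8x8 image with a sharp vertical edge (black left half, white right half).
image = [[0] * 8 for _ in range(8)]
for r in range(8):
    image[r][4:] = [255] * 4

# Blurring a 4x4 rectangle smooths the edge inside it and nothing else.
blurred = box_blur_region(image, left=2, top=2, width=4, height=4, radius=1)
```

A larger radius (the article uses 20) averages over a wider neighborhood, so the face becomes unrecognizable rather than merely softened.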

8. Build and run


Build and run the application with the following command:

dotnet run

9. Expected results


The command line output should read:

Detected 26 faces
Done!!!

The resulting detectedfaces.jpg file should show each detected face blurred.

You can get the code here: https://github.com/cmendible/dotnetcore.samples/tree/master/projectoxford

Hope it helps!