
Infuse Some AI Into Umbraco

With the rise of AI and the explosion of intelligent services, there has never been a better moment to jump on this train. With the computing power of the Azure cloud, Artificial Intelligence is at our fingertips, waiting to be infused into Umbraco.

Let me start with a small introduction. My name is Henk Boelman and I work as an Azure Architect at a company called Ordina. I have been using Umbraco since version 4, and it is great to see how Umbraco keeps evolving while keeping its vision clear. Currently I am exploring developer-ready AI: instead of figuring out how to build my own classifiers and machine learning algorithms, I simply consume them as a service. Because I work at a Microsoft-focused company, the Microsoft Cognitive Services are a logical choice to dive into. I like working with Umbraco and this AI-as-a-Service stack, which is why I combined the two in some ready-to-use packages for Umbraco.

What are Cognitive Services?

First it is good to know what exactly these services are. Cognitive Services are ready-to-use APIs built on top of very smart machine learning models. These services range from detecting and analyzing faces to natural language processing.

Infuse some vision in Umbraco

In the vision section of the Cognitive Services there are around seven APIs to choose from; some process images and some process video. We are going to take a closer look at the Computer Vision API.

The Computer Vision API can tell you useful things about an image: it gives a description and a set of tags, and it can even detect celebrities, landmarks and adult content.
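Under the hood this is just one HTTP call: you POST the image bytes to the analyze endpoint with your subscription key in a header, and the visualFeatures query parameter selects what comes back. A minimal sketch of building such a request (the endpoint region and helper names are my own; the header and parameter names follow the Computer Vision v1.0 REST API):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;

public static class VisionRequest
{
    // Build the analyze URL for the Computer Vision v1.0 REST API.
    // The visualFeatures query parameter selects what the service returns.
    public static string BuildAnalyzeUrl(string endpoint, params string[] features)
        => endpoint.TrimEnd('/') + "/analyze?visualFeatures=" + string.Join(",", features);

    // Sketch of the actual call: POST the image bytes with the key header.
    public static HttpRequestMessage BuildRequest(string url, string apiKey, byte[] imageBytes)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, url)
        {
            Content = new ByteArrayContent(imageBytes)
        };
        request.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
        request.Headers.Add("Ocp-Apim-Subscription-Key", apiKey);
        return request;
    }
}
```

You would send the request with an HttpClient and read the JSON from the response body; the SDK used later in this article wraps exactly this plumbing for you.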

Enable Umbraco to see

Now that we know what these services are capable of, it is time to work out how to infuse them into the Umbraco media library.

From the content above, I hope it is clear that the core concept of a Cognitive Service is simple: you call an API and get back a JSON response with the data. Microsoft has made it even easier for most of the services, because SDKs are available that save you the hassle of mapping the JSON to C# objects.
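To give an idea of what that JSON looks like, here is an abbreviated sample response in the shape the Computer Vision API returns, deserialized with Json.NET into plain C# classes. The POCOs below are my own illustration, not the SDK's contract types; the SDK does this mapping for you.

```csharp
using System;
using Newtonsoft.Json;

// Minimal POCOs mirroring part of the Computer Vision JSON response.
// Illustrative types only; the SDK ships its own contract classes.
public class VisionTag { public string Name { get; set; } public double Confidence { get; set; } }
public class VisionCaption { public string Text { get; set; } public double Confidence { get; set; } }
public class VisionDescription { public VisionCaption[] Captions { get; set; } }
public class VisionAdult { public bool IsAdultContent { get; set; } public bool IsRacyContent { get; set; } }

public class VisionAnalysis
{
    public VisionTag[] Tags { get; set; }
    public VisionDescription Description { get; set; }
    public VisionAdult Adult { get; set; }
}

public static class VisionSample
{
    // Abbreviated sample, shaped like the real analyze response.
    public const string SampleJson = @"{
        ""tags"": [ { ""name"": ""outdoor"", ""confidence"": 0.98 }, { ""name"": ""dog"", ""confidence"": 0.95 } ],
        ""description"": { ""captions"": [ { ""text"": ""a dog lying in the grass"", ""confidence"": 0.91 } ] },
        ""adult"": { ""isAdultContent"": false, ""isRacyContent"": false }
    }";

    // Json.NET matches property names case-insensitively by default,
    // so the lowercase JSON keys map onto the PascalCase properties.
    public static VisionAnalysis Parse(string json) =>
        JsonConvert.DeserializeObject<VisionAnalysis>(json);
}
```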

Since these services are about describing images, we are going to extend the media library with them. For that to work we will create an event handler that is triggered when media is being saved. To read more about the MediaService, have a look at the Umbraco documentation.

Step 1: Create a Vision API

To create a Computer Vision API you need a Microsoft Azure account. If you don't have access to Microsoft Azure yet, you can create a free account (which comes with $200 of credit for your first month). The Computer Vision API has two price plans: a free one with rate limits, and a paid one that can handle up to 20 requests per second.

Step 2: Setup Umbraco and create an event handler

Create a new Umbraco project or use an existing one. In your solution, install Microsoft's Vision SDK, available on NuGet. Once the package is installed, create a new class VisionEventHandler that extends ApplicationEventHandler.

using Umbraco.Core;
using Umbraco.Core.Events;
using Umbraco.Core.Models;
using Umbraco.Core.Services;

namespace InfuseSomeAIintoU.Vision.SampleCode
{

    public class VisionEventHandler : ApplicationEventHandler
    {
        protected override void ApplicationStarted(UmbracoApplicationBase umbracoApplication, ApplicationContext applicationContext)
        {
            // Subscribe to the Saving event so every media save passes through our handler
            MediaService.Saving += MediaService_Saving;
        }

        private void MediaService_Saving(IMediaService sender, SaveEventArgs<IMedia> e)
        {
            foreach (IMedia media in e.SavedEntities)
            {
                // The Vision API call will be added here in step 4
            }
        }
    }
}

Next, add two keys to the appSettings section of your web.config.

<add key="VisionApiUrl" value="[Insert the vision endpoint url]" />
<add key="VisionApiKey" value="[Insert your key]" />

You can find both values in the Azure portal, on your Vision API resource. After that, extend your VisionEventHandler to load the keys from the web.config and create a VisionServiceClient.

using System.Configuration;
using System.Linq;
using Microsoft.ProjectOxford.Vision;
using Umbraco.Core;
using Umbraco.Core.Events;
using Umbraco.Core.Models;
using Umbraco.Core.Services;

namespace InfuseSomeAIintoU.Vision.SampleCode
{

    public class VisionEventHandler : ApplicationEventHandler
    {
        private readonly string _visionApiKey = ConfigurationManager.AppSettings["VisionApiKey"];
        private readonly string _visionApiUrl = ConfigurationManager.AppSettings["VisionApiUrl"];

        protected override void ApplicationStarted(UmbracoApplicationBase umbracoApplication, ApplicationContext applicationContext)
        {
            MediaService.Saving += MediaService_Saving;
        }

        private void MediaService_Saving(IMediaService sender, SaveEventArgs<IMedia> e)
        {
            VisionServiceClient visionServiceClient = new VisionServiceClient(_visionApiKey, _visionApiUrl);

            foreach (IMedia media in e.SavedEntities.Where(a => a.ContentType.Name.Equals(Constants.Conventions.MediaTypes.Image)))
            {
            }
        }
    }
}

Step 3: Create the properties

The Vision API returns a lot of information that can be useful for your scenario. We want to store this information on the MediaType "Image". See the screenshot below and add these properties, with their matching property types, to the MediaType.

[Screenshot: the "Image" MediaType set up with the extra AI properties]
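Once the properties are filled, showing them on the front end is a matter of reading them back from the media item. A minimal Razor sketch, assuming the property aliases used later in this article (infusedAI_description and infusedAI_general_tags) and a hypothetical media id:

```csharp
@inherits Umbraco.Web.Mvc.UmbracoTemplatePage
@{
    // Hypothetical media id; in a real view you would get this from the model
    var image = Umbraco.TypedMedia(1234);
    // Depending on your property editor setup, the tags may come back
    // as a comma-separated string
    var tags = (image.GetPropertyValue<string>("infusedAI_general_tags") ?? "").Split(',');
}
<img src="@image.Url" alt="@image.GetPropertyValue<string>("infusedAI_description")" />
<ul>
    @foreach (var tag in tags)
    {
        <li>@tag.Trim()</li>
    }
</ul>
```

As a nice side effect, the AI-generated caption doubles as an alt text for accessibility.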

Step 4: Putting it all together

Now that Umbraco has the fields to store the data in, the event handler is implemented and the SDK is included, it's time to send images to the API and get some data back.

To get this done, follow these three steps:

  • Get the image as a Stream.
  • Call the AnalyzeImageAsync method of the VisionServiceClient and specify the visual features you want returned.
  • Map the data returned in the AnalysisResult object to the values in your Umbraco MediaType.

Below is an example of the minimal code needed to get the job done. If you are using it in production, please don't forget to handle errors, or just use my package.

using System.Collections.Generic;
using System.Configuration;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.ProjectOxford.Vision;
using Microsoft.ProjectOxford.Vision.Contract;
using Newtonsoft.Json;
using Umbraco.Core;
using Umbraco.Core.Events;
using Umbraco.Core.IO;
using Umbraco.Core.Models;
using Umbraco.Core.Services;
using Umbraco.Web.Models;

namespace InfuseSomeAIintoU.Vision.SampleCode
{

    public class VisionEventHandler : ApplicationEventHandler
    {
        private readonly string _visionApiKey = ConfigurationManager.AppSettings["VisionApiKey"];
        private readonly string _visionApiUrl = ConfigurationManager.AppSettings["VisionApiUrl"];

        protected override void ApplicationStarted(UmbracoApplicationBase umbracoApplication, ApplicationContext applicationContext)
        {
            MediaService.Saving += MediaService_Saving;
        }

        private void MediaService_Saving(IMediaService sender, SaveEventArgs<IMedia> e)
        {
            var visionServiceClient = new VisionServiceClient(_visionApiKey, _visionApiUrl);
            var mediaFileSystem = FileSystemProviderManager.Current.GetFileSystemProvider<MediaFileSystem>();

            foreach (IMedia media in e.SavedEntities.Where(a => a.ContentType.Name.Equals(Constants.Conventions.MediaTypes.Image)))
            {
                string relativeImagePath = JsonConvert.DeserializeObject<ImageCropDataSet>(media.GetValue<string>(Constants.Conventions.Media.File)).Src;

                // Computer Vision API
                using (Stream imageFileStream = mediaFileSystem.OpenFile(relativeImagePath))
                {
                    // Call the Computer Vision API (blocking on the async call,
                    // since the Saving event is handled synchronously)
                    AnalysisResult computervisionResult = visionServiceClient
                        .AnalyzeImageAsync(
                            imageFileStream,
                            new[] { VisualFeature.Description, VisualFeature.Adult, VisualFeature.Tags, VisualFeature.Categories }
                        ).Result;

                    if (computervisionResult != null)
                    {
                        // Get the result 
                        IEnumerable<string> tags = computervisionResult.Tags.Select(a => a.Name);
                        string caption = computervisionResult.Description.Captions.First().Text;
                        bool isAdult = computervisionResult.Adult.IsAdultContent;
                        bool isRacy = computervisionResult.Adult.IsRacyContent;

                        // Set the properties in Umbraco
                        media.SetTags("infusedAI_general_tags", tags, true);
                        media.SetValue("infusedAI_description", caption);
                        media.SetValue("infusedAI_isAdult", isAdult);
                        media.SetValue("infusedAI_isRacy", isRacy);
                    }
                }
            }
        }
    }
}

And you are ready to go!

With around 100 lines of code you have infused Umbraco with AI. The data returned from this API can be used in many different ways: you can scan for adult content and handle it accordingly before it is even published on your website, or, as Niels showed during CodeGarden, automatically sort and categorize images.
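For the adult-content scenario, the gate itself can be a tiny helper called from the Saving handler; when it trips, setting e.Cancel = true stops the media from being saved at all. A sketch (the helper and its name are my own, not part of any SDK):

```csharp
using System;

// Decide whether a media item should be blocked, based on the
// flags returned by the Computer Vision API. Illustrative helper,
// not part of the Vision SDK or Umbraco.
public static class ContentGate
{
    // Always block outright adult content; optionally block racy
    // content as well for stricter sites.
    public static bool ShouldBlock(bool isAdult, bool isRacy, bool blockRacy = false)
    {
        return isAdult || (blockRacy && isRacy);
    }
}
```

Inside MediaService_Saving you would call `if (ContentGate.ShouldBlock(isAdult, isRacy)) { e.Cancel = true; }` right after reading the AnalysisResult, so flagged images never reach the media library.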

I hope you enjoy this new kind of AI and that it empowers you to do great things with Umbraco!

This blog was first published on 24 days in Umbraco.

Henk Boelman

Hi, I'm Henk Boelman, a software developer at Ordina interested in Cognitive Services, IoT, machine learning, AI and Umbraco. I speak at conferences, give trainings and workshops, and write this blog.
