
Watson Visual Recognition, drones, and custom classifiers


Back in May, I authored “Bringing Drones to the Enterprise with Bluemix and OpenWhisk”, demonstrating a proof-of-concept application that brought together a cloud-connected drone with cognitive image tagging and facial recognition services by leveraging both Bluemix and OpenWhisk.

The Watson Visual Recognition service has since been upgraded to general availability, and with the recent announcement of custom classifier support with updates/retraining, I figured it would be a great idea to update the “smart”-drone app to take advantage of everything the Watson Visual Recognition service has to offer. So, I did exactly that. First, I updated the Skylink drone application to leverage the latest GA services and take advantage of custom classifiers. Custom classifiers allow you to train the Watson Visual Recognition service with your own images; in this case I trained it to recognize a tennis court near my house, but it could be trained for far more general cases than this one. Next, I incorporated the new update/retraining features to build a mechanism that enables your classifier to improve over time with usage.

Skylink demonstrates near-realtime image analysis

For reference, here’s a quick video introducing the app:



With these changes, the architecture of the Skylink app didn’t change at all; it’s still event-driven and asynchronous. It uses a DJI Phantom consumer drone connected to a native iOS app.

Skylink App Architecture

While in flight, the native application interacts with the aircraft and saves data and media locally on the device with Cloudant Sync. This data automatically replicates up to the remote Cloudant service, and all writes to the Cloudant database automatically trigger Watson Visual Recognition analysis using OpenWhisk, IBM’s event-driven serverless platform.
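To make the trigger side of that flow more concrete, here is a minimal sketch of what an OpenWhisk action could look like in Python, assuming the Cloudant document carries the captured frame as an attachment named image.jpg and that credentials are bound to the action as default parameters; the parameter names and attachment name are illustrative, not the actual Skylink code:

```python
# Sketch of an OpenWhisk Python action fired by a Cloudant change trigger.
# Assumes the doc has an "image.jpg" attachment and that CLOUDANT_DB_URL and
# VR_API_KEY are bound to the action; names are illustrative, not Skylink's.
import requests

CLASSIFY_URL = "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify"

def main(params):
    doc_id = params["id"]               # supplied by the Cloudant change trigger
    db_url = params["CLOUDANT_DB_URL"]  # e.g. https://<account>.cloudant.com/skylink
    api_key = params["VR_API_KEY"]

    # Pull down the image the iOS app replicated up to Cloudant
    image = requests.get("{0}/{1}/image.jpg".format(db_url, doc_id)).content

    # Send it to Watson Visual Recognition (custom classifier ids can be added later)
    resp = requests.post(
        CLASSIFY_URL,
        params={"api_key": api_key, "version": "2016-05-20"},
        files={"images_file": ("image.jpg", image, "image/jpeg")},
    )
    return resp.json()  # e.g. merged back into the Cloudant document by a follow-on action
```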

Training Visual Recognition service with custom classifiers

Custom classifiers give you the ability to train and customize the Visual Recognition service based on your own needs. If you need to recognize certain types of objects, whether buildings, structures, automobiles, or vegetation, or conditions such as rust or cracks, you can use custom classifiers to accomplish this.

The first thing you need to do is create a classifier. To do so, you simply invoke the classifiers REST endpoint and post images for training. These images are used to create and train a classifier, and you’ll receive a response with the id and details of that classifier. Keep the id in particular, because you’ll need the classifier id(s) when you later call the service to classify an image. I wrote a simple shell script so I could easily delete and recreate my classifiers later.
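The shell script itself isn’t shown here, but the create call is roughly equivalent to the following Python sketch; the zip file names, “tennis” class name, and API key handling are placeholders for illustration:

```python
# Sketch: create and train a custom classifier from zips of example images.
# The zip names, "tennis" class name, and API key here are placeholders.
import requests

API_KEY = "your-visual-recognition-api-key"
CLASSIFIERS_URL = "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers"

with open("tennis_positive.zip", "rb") as pos, open("not_tennis.zip", "rb") as neg:
    resp = requests.post(
        CLASSIFIERS_URL,
        params={"api_key": API_KEY, "version": "2016-05-20"},
        files={
            "name": (None, "tennis"),
            # one "<class>_positive_examples" zip per class the classifier should learn
            "tennis_positive_examples": pos,
            # optional zip of images that should NOT match
            "negative_examples": neg,
        },
    )

classifier = resp.json()
print(classifier["classifier_id"])  # e.g. "tennis_1234567890" -- keep this for classify calls
```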

Once you have created the classifier, not much changes from a functional perspective; you just need to pass in the classifier id(s) you’re using when you want to classify an image. For example, posting an image to the URL below will classify it using two classifiers (default and tennis_1234567890):

https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify?api_key={api_key}&version=2016-05-20&threshold=0.0&owners=me,IBM&classifier_ids=default,tennis_1234567890

In the response data, you’ll receive results for both classifiers, and you’ll want to display this appropriately within your own applications, as I’ve done in the screenshot below:

Screenshot: Skylink classification results from the default and custom classifiers
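As a rough guide to the shape of that response (the field values here are illustrative), results come back grouped per classifier, so it’s straightforward to pull out the default tags and the custom classifier’s score separately:

```python
# Sketch: the classify response groups results by classifier; values are illustrative.
resp_json = {
    "images": [{
        "image": "drone-frame.jpg",
        "classifiers": [
            {"classifier_id": "default", "classes": [{"class": "sport facility", "score": 0.71}]},
            {"classifier_id": "tennis_1234567890", "classes": [{"class": "tennis", "score": 0.62}]},
        ],
    }],
}

for image in resp_json["images"]:
    for classifier in image["classifiers"]:
        for result in classifier["classes"]:
            print(classifier["classifier_id"], result["class"], result["score"])
```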

Improving the training of your classifier

Over time you might end up processing a lot of images, and these can be very helpful in improving the training of your classifier. With the new features that allow you to update classifiers, you can incrementally add images to an existing classifier. You can submit a single image or batches of images; one image might not make much difference immediately, but a thousand images submitted over a month can make a big difference to your classifier. Note: the images have to be posted within a zip file, even if it’s just a single image.

If you look at the screenshot above, you’ll notice I added two small buttons next to the custom classifier’s result: a thumbs up and a thumbs down. These let me apply both positive and negative reinforcement to the classifier. If an image was classified correctly but with a low confidence score, hit the thumbs up button and the selected image is submitted as a positive example, helping to build up your classifier. If the image was classified incorrectly, hit the thumbs down button and it is submitted as a negative example, which also improves your classifier.
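Here is a minimal sketch of how that feedback could be submitted, assuming a simple helper rather than the actual Skylink implementation: the image gets wrapped in a zip (even a single image must be zipped), and the form field name decides whether it counts as a positive or negative example:

```python
# Sketch: submit a single thumbs-up / thumbs-down image back to the classifier.
# The classifier id, class name, and helper are placeholders, not Skylink's code.
import io
import zipfile
import requests

API_KEY = "your-visual-recognition-api-key"
CLASSIFIER_ID = "tennis_1234567890"
UPDATE_URL = ("https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers/"
              + CLASSIFIER_ID)

def submit_feedback(image_bytes, thumbs_up=True):
    # The service expects a zip even for one image, so build one in memory
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("example.jpg", image_bytes)
    buf.seek(0)

    # thumbs up -> "<class>_positive_examples", thumbs down -> "negative_examples"
    field = "tennis_positive_examples" if thumbs_up else "negative_examples"
    return requests.post(
        UPDATE_URL,
        params={"api_key": API_KEY, "version": "2016-05-20"},
        files={field: ("examples.zip", buf, "application/zip")},
    )
```

Batches work the same way: zip up as many images as you like and post them to the same endpoint.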

You can learn more about updating your classifiers from the Watson Documentation.

What’s next?

Just like my first post, I’ve made the entire project open source as a learning resource for Bluemix and drone developers alike. You can find more detail using the resources listed below:

