This 3-hour course on IBM’s developerWorks provides a great overview of three of the most commonly used Watson services: the Alchemy API, Visual Recognition, and Text to Speech. It then provides a step-by-step tutorial on how to use each service to extend the functionality of Swift-based mobile apps. In this blog series, we will explore each service and tutorial so you can understand how and where it might be applicable to you.
So let’s start by looking at the Alchemy API. In the first part of the course, you will build a cognitive mobile application using Sentiment Analysis via the Alchemy API.
As you can see in the video above, this is not as difficult as it may seem. While the video shows the app being built at an accelerated pace, the process is nearly as straightforward as it looks.
As described, the Alchemy API offers a set of services that enable you to build apps which understand the content and context of text in webpages, news articles, and blogs. One of the most common use cases for cognitive applications is to build in functionality that will allow for personalization based on this understanding. To achieve this, the Sentiment Analysis capabilities of the Alchemy API are used.
We start with the Alchemy API and Sentiment Analysis for two reasons:
it is an easy service to get started with quickly
it provides a foundation for using the other Watson services and building more advanced functionality into your own apps
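To make the idea concrete, here is a rough sketch of how a client might construct a call to a text-sentiment endpoint. The endpoint URL, parameter names, and credentials below are assumptions for illustration only, not the exact Alchemy API contract (the course itself builds this in Swift; Python is used here just to keep the sketch short):

```python
import urllib.parse

# Hypothetical endpoint, for illustration only -- consult the actual
# Alchemy API / Watson documentation for the real URL and parameters.
SENTIMENT_ENDPOINT = "https://example-gateway.watsonplatform.net/calls/text/TextGetTextSentiment"

def build_sentiment_request(api_key: str, text: str) -> str:
    """Build the full request URL for a text sentiment call."""
    params = urllib.parse.urlencode({
        "apikey": api_key,        # your service credentials
        "text": text,             # the text block to analyze
        "outputMode": "json",     # ask for a JSON response
    })
    return f"{SENTIMENT_ENDPOINT}?{params}"

url = build_sentiment_request("MY_KEY", "I love this phone")
```

From a mobile app, the same request would typically be issued asynchronously and the JSON response parsed on a background thread.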
Demonstrating Sentiment Analysis
So what kinds of things can you do with Sentiment Analysis? Let's explore two demos and see for ourselves.
The first is a simple Text Analysis demo that lets you analyze a block of text or a URL and determine whether the text, and the subjects it mentions, are positive or negative. It is simple, but it provides a quick illustration of how the service works.
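The positive-or-negative call the demo makes can be mimicked by reading a sentiment field out of the service's JSON response. The response shape below is an assumption for illustration; the real service's field names may differ:

```python
import json

# Assumed response shape, for illustration only -- the actual JSON
# fields are defined by the service's documentation.
sample_response = json.loads("""
{
  "status": "OK",
  "docSentiment": { "type": "positive", "score": "0.61" }
}
""")

def classify(response: dict) -> str:
    """Return 'positive', 'negative', or 'neutral' from a sentiment response."""
    sentiment = response.get("docSentiment", {})
    return sentiment.get("type", "neutral")

label = classify(sample_response)
```

An app could branch on `label` to personalize what it shows the user.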
The second, Your Celebrity Match, takes this a bit further. It extracts content from social media feeds, then compares it against the feeds of other individuals to determine which are most similar and most different. It is fun, but more importantly it shows how you can take known data, analyze it, and use the results to shape how you interact with the people behind it.
Using this kind of analysis, you could predict behaviors and preferences: a retailer, for example, could make more personalized recommendations to its customers. And by folding in existing data sources from your own databases, you can increase the accuracy.
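The matching idea can be sketched as comparing two users' per-topic sentiment scores. Cosine similarity is my choice here for the comparison, not necessarily what the demo uses, and the topics and scores are made-up illustration data:

```python
import math

def cosine_similarity(a: dict, b: dict) -> float:
    """Cosine similarity of two topic -> sentiment-score maps."""
    topics = set(a) | set(b)                      # union of topics; missing scores count as 0
    va = [a.get(t, 0.0) for t in topics]
    vb = [b.get(t, 0.0) for t in topics]
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(x * x for x in vb))
    return dot / (na * nb) if na and nb else 0.0

# Made-up sentiment profiles derived from three topics in a feed.
user    = {"sports": 0.8, "music": -0.2, "travel": 0.5}
celeb_a = {"sports": 0.7, "music": -0.1, "travel": 0.4}
celeb_b = {"sports": -0.6, "music": 0.9, "travel": -0.3}

# The closer the similarity is to 1.0, the more alike the two feeds' sentiment.
best = max([("A", cosine_similarity(user, celeb_a)),
            ("B", cosine_similarity(user, celeb_b))], key=lambda p: p[1])
```

Here the user's profile lines up with celebrity A, so `best` picks A; the same score could just as easily drive product recommendations instead of celebrity matches.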
In part 2 of this blog series, we will discuss the Visual Recognition service and tutorial.