
Your guide to cognitive computing: An interview with solutions architect Chris Ackerson



Solutions architects are the experts on our team at understanding and implementing Watson technology. They have developed this expertise by providing technical support to developers through multiple channels. Through their work, they have formed a deep understanding of, and point of view on, not just the Watson APIs but the cognitive landscape at large. I interviewed solutions architect Chris Ackerson about his thoughts on Watson and cognitive computing, as well as his specific tips and resources.

Where do you see the Watson APIs growing in 2016 and beyond?

The Watson Developer Cloud launched back in 2014 with a single service, the QA API. Since then we've expanded to more than 30 services, including APIs for natural language processing, computer vision, and speech recognition, among other capabilities. In addition, we decomposed the QA API into independent functional APIs, including dialog, natural language classification, retrieve and rank, and document conversion, greatly increasing the flexibility developers have in building conversational applications.


The net effect is that the number of use-case patterns developers are experimenting with has exploded. We continue to bring new APIs to the platform (we just released emotion modeling and an adaptable visual recognition service), but 2016 has brought an enhanced focus on identifying repeatable use-case patterns and building accelerators to help developers stand up applications quickly. Examples of these accelerators can be found in the official Application Starter Kits hosted on Watson Developer Cloud, in the enhanced tooling for building conversational apps and domain annotators that will be released this year, and in the ever-growing set of open source tools and sample apps available in the Watson Developer Cloud and Cognitive Catalyst GitHub repositories.

What is the future of cognitive computing in 2016 and beyond?

A computing system can be boiled down to an input, some logic, and an output. In traditional computing, the developer builds that logic, and depending on the application, the logic can be extremely complex. With cognitive computing, that paradigm flips on its head: the developer starts with a set of example inputs and outputs, and the cognitive system learns to represent the logic automatically.

This is called supervised learning, and it has produced enormous strides on problems where building the logic by hand would be prohibitively complex. Think about recognizing faces in images or speech in an audio signal. There is enormous variation in the input, and therefore exponential complexity in representing the logic for all cases. Cognitive systems, which leverage a set of techniques called machine learning, build this logic automatically from examples. Over the last few years they have produced state-of-the-art results on these problems and in many cases now outperform humans.
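To make the flip concrete, here is a minimal sketch of the train-on-examples pattern in Python. It uses scikit-learn purely for illustration (Watson services expose the same idea through REST APIs, not this library), and the utterances and intent labels are invented:

```python
# Instead of hand-coding the logic, we hand the system labeled examples
# and let it infer the input-to-output mapping itself.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Input/output examples (the "ground truth"): short utterances and intents.
examples = ["what is my balance", "show my recent transactions",
            "I lost my card", "my card was stolen"]
labels = ["account", "account", "card", "card"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(examples, labels)  # the system builds the "logic" from the examples

print(model.predict(["I can't find my card"]))  # -> ['card']
```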

The future of cognitive computing is extremely exciting for a number of reasons. For background, IBM has predicted that cognitive computing represents a $2 trillion market by 2025, which gives you a sense of the size of the problem space. More specifically, there are two areas where cognitive will take massive leaps over the next few years:

  1. Innovation in unsupervised learning algorithms will open up a new set of problems. Unlike supervised learning, where the system learns from labeled examples, in unsupervised learning the system identifies patterns in data on its own (see the sketch after this list). Human beings largely build our mental models of the world through unlabeled observation. Think of supervised learning as controlled classroom education, and unsupervised learning as what we gain from interacting with the world.
  2. We are at a tipping point in terms of access to cognitive technology. Historically, fields like machine learning and natural language processing have been the domain of PhD researchers and data scientists. With the advent of platforms like Watson, the entire developer community can take advantage of these technologies. Within a few hours, a developer can integrate an API that helps them understand the personality of their users. Within weeks, a developer can build a digital assistant with the same basic components that power the digital assistant in your smartphone.
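For point 1 above, the sketch below shows the unsupervised flavor of the same idea: the algorithm receives raw observations with no labels at all and discovers the grouping on its own. Again, scikit-learn and the toy data points are illustrative assumptions, not anything specific to Watson:

```python
# Unsupervised learning: no labels, only observations. K-means finds the
# cluster structure in the data by itself.
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled observations: two loose groups of 2-D points.
data = np.array([[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],
                 [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.labels_)  # e.g. [0 0 0 1 1 1]: a pattern found without labels
```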

Which resources do you think are most helpful for developers to get started with Watson? 

I would start with the Application Starter Kits. They'll give you a sense of some of the common use-case patterns that can be constructed by mixing and matching APIs. From there, move to the Watson Developer Cloud documentation, where you'll find an explanation of each API, potential use cases, the science behind each service, a tutorial, and a demo. Once you have an understanding of the APIs, check out the Watson Developer Cloud GitHub repository for SDKs, tools, and sample apps to help you get started. Finally, check out IBM developerWorks, where you'll find a community of developers, both external and IBMers, working with the platform.

Do you have any advice that you find you are continuously telling developers?

A lot of it is practical knowledge of how to develop and maintain machine learning models in production, which is a new concept to a lot of people. What does it mean to train a cognitive system? How do you evaluate the performance of that model and maintain it over time?

At a high level, ground truth is the set of input/output examples used to teach the algorithm to predict outputs for new inputs. This data needs to be representative of what the algorithm will encounter in production. For example, in building a virtual agent, input examples should be representative of the types of questions a user would ask the agent. In machine learning, gathering these examples from scratch is called the cold-start problem, and there are a number of strategies we've learned over the years. One of the key insights is to get the model in front of users as early in the development process as possible. In other words, bootstrap the early training data from logs if there is an existing app, from the web via forums, or from users through surveys, but don't spend significant cycles optimizing the model until you've put the bootstrapped version in front of users and learned how they actually interact with it. With some of our APIs, you don't even need to train a model before putting it in front of users. For example, retrieve and rank lets you stand up a Solr search cluster to gather training data before worrying about the rank model.
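As a rough sketch of what that looks like in practice, the snippet below assembles a toy ground-truth set for a virtual agent and holds a portion out for testing. The question/intent pairs are invented for illustration; in a real project they would be bootstrapped from logs, forums, or user surveys as described above:

```python
# Ground truth as input/output pairs, plus a held-out test split.
import random

ground_truth = [
    ("how do I reset my password", "password_reset"),
    ("I forgot my login password", "password_reset"),
    ("where is my order", "order_status"),
    ("has my package shipped", "order_status"),
    # ...many more questions, representative of real user traffic...
]

random.seed(42)
random.shuffle(ground_truth)
cut = int(len(ground_truth) * 0.7)  # hold out ~30% for testing
train_set, test_set = ground_truth[:cut], ground_truth[cut:]
```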

Once a model is trained, the challenge becomes evaluating its performance. From the initial set of ground truth, you should carve out some portion for testing, usually 30-50% of the total. Developers often go straight to testing accuracy (the percentage of total responses that are correct), and this can be misleading. For example, assume you are building an email spam filter and 1% of all email is actually spam. You train a model that predicts that 100% of emails are not spam. The accuracy turns out to be 99%, which sounds pretty good, but the system is obviously useless for its purpose. Precision and recall are more common metrics for evaluating machine learning performance, but they can have different interpretations depending on the type of problem you're solving, e.g. the spam filter (classification) vs. a virtual agent (ranking). It's important to define the specific success criteria for your application and design a set of evaluation metrics against those criteria.
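Working through the spam-filter arithmetic makes the point concrete. The short sketch below uses the numbers from the example (10,000 emails, 1% spam, a model that flags nothing) and shows why 99% accuracy can coexist with zero recall:

```python
# Accuracy vs. precision/recall for a degenerate "never spam" classifier.
emails = 10_000
actual_spam = 100                       # 1% of all email is spam
true_positives = 0                      # spam correctly flagged: none
false_negatives = actual_spam           # every spam email is missed
true_negatives = emails - actual_spam   # all non-spam passes through

accuracy = (true_positives + true_negatives) / emails
recall = true_positives / (true_positives + false_negatives)
precision = 0.0  # 0/0 when nothing is flagged; treated as 0 here

print(f"accuracy={accuracy:.0%} recall={recall:.0%} precision={precision:.0%}")
# accuracy=99% recall=0% precision=0%
```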

If you are interested in reading more about what Chris has to say, follow his blog on Medium, where he highlights how developers can leverage Watson to add machine learning, natural language processing, and computer vision capabilities to their applications.


