Watson is a computer that is not bound by the rules that govern traditional systems. Traditional computers are programmatic systems built on rules like X + Y = Z. These systems require every input and output to be accounted for; otherwise, the system breaks.
These systems handle structured data like tables well but struggle with unstructured data like language. This makes it complicated, expensive, or even impossible to build certain types of applications. For example, consider a machine built to assist doctors in diagnosing disease. The presence of a symptom is important, but the absence of symptoms can be just as important, or more so.
Programming a computer to understand not only the symptoms but also their absence becomes exponentially complex.
Watson is different: it complements programmatic machines. It is a learning system designed to consume information and learn subjects (aka domains) the way a human does. It answers queries based on hypotheses, each with a degree of confidence. Over time, we can improve and expand Watson’s skills to strengthen those hypotheses.
The first instantiation of Watson that the public saw was in 2011 on the TV show Jeopardy. The Watson computer consumed Wikipedia, medical journals, and previous Jeopardy questions and answers, among other sources. Two hundred million pages were consumed, learned, and made accessible to the machine during the show, in which it beat two grand champions, Ken Jennings and Brad Rutter.
While Jeopardy was an accomplishment in itself, more importantly it provided evidence that a system can interact with humans in their own language, language that includes the complexity and nuance of puns and colloquialisms.
In 2012, the Watson group partnered with MD Anderson to adapt the Watson machine to help in the fight against cancer. In 2013, IBM adapted the machine to help customer service centers better interact with their customers. In 2014, IBM is releasing a number of cognitive technologies built on the Watson foundation.
This is not just a Q&A system anymore. IBM has exposed underlying technologies, created new ones, and improved on the ones that people have been hearing about for several years. The value for developers is that they don’t need to be data scientists or have a machine learning background to get started with the services. Watson services are exposed as REST APIs.
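Because the services are exposed as REST APIs, calling one from any language reduces to an authenticated HTTP POST with a JSON body. The sketch below builds such a request in Python; the endpoint URL, credentials, and payload fields here are hypothetical placeholders for illustration, not the documented API, since the real values come from the Bluemix service binding.

```python
import base64
import json
import urllib.request

# Hypothetical base URL for illustration only; a real deployment would read
# this (and the credentials) from its Bluemix service binding.
BASE_URL = "https://example.bluemix.net/watson/question-and-answer/v1"

def build_request(question, username, password):
    """Build (but do not send) an authenticated JSON POST to a
    Watson-style REST endpoint."""
    # Assumed payload shape for the sketch.
    payload = json.dumps({"question": {"questionText": question}}).encode("utf-8")
    req = urllib.request.Request(BASE_URL + "/question", data=payload)
    req.add_header("Content-Type", "application/json")
    # REST services on Bluemix commonly use HTTP Basic authentication.
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req

req = build_request("What are common flu symptoms?", "user", "pass")
print(req.get_full_url())
```

Sending the request is then a single `urllib.request.urlopen(req)` call, with the JSON response parsed via `json.loads`.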
The new services are available through Bluemix and include:
User Modeling: Uses linguistic analytics to extract a set of personality and social traits from the way a person communicates. The service can analyze any communication the user makes available, such as their text messages, tweets, posts, email, and more. Users of the service can understand, connect, and communicate with people on a more personally tailored level by analyzing personality and social traits.
Machine Translation: Converts text input in one language into a destination language for the end user. Translation is available among English, Brazilian Portuguese, Spanish and French.
Language Identification: Detects the language in which text is written. This helps inform next steps such as translation, voice to text, or direct analysis. Today, the service can identify 15 languages.
Concept Expansion: Analyzes text and interprets its meaning based on usage in other, similar contexts. For example, it could interpret “The Big Apple” as meaning “New York City.” It can be used to create a dictionary of related words and concepts so that euphemisms, colloquialisms, or otherwise unclear phrases can be better understood and analyzed.
Message Resonance: Analyzes draft content and scores how well it is likely to be received by a specific target audience. Today, analysis can be done against people active in cloud computing or big data discussions but future versions will let users provide their own community data.
Relationship Extraction: Parses sentences into their various components and detects relationships between the components. The service maps the relationships between the components so that users or analytics engines can more easily understand the meaning of individual sentences and documents.
Question & Answer: Interprets and answers user questions directly based on primary data sources that have been selected and gathered into a body of data or “corpus.” The service returns candidate responses with associated confidence levels and links to supporting evidence. The current data corpora on Bluemix focus on the Travel and Healthcare industries.
Visualization Rendering: Takes input data and graphically renders it as an interactive visualization which can range from a common business chart to more advanced layouts. The visualizations can be easily modified to match user needs, visual styling, and types of data being analyzed.
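Since the Question & Answer service returns candidate responses with associated confidence levels rather than a single definitive answer, a client typically filters and ranks those candidates itself. A minimal sketch of that pattern follows; the JSON response shape (an `answers` list with `text` and `confidence` fields) is an assumption for illustration, not the documented payload.

```python
import json

# Hypothetical response payload for illustration; the real field names are
# defined by the service's documentation on Bluemix.
sample_response = json.loads("""
{
  "answers": [
    {"text": "Pack sunscreen and light clothing.", "confidence": 0.42},
    {"text": "Visit between May and September.", "confidence": 0.87},
    {"text": "Book flights early.", "confidence": 0.31}
  ]
}
""")

def best_answer(response, threshold=0.5):
    """Return the highest-confidence candidate answer, or None if no
    candidate clears the confidence threshold."""
    candidates = [a for a in response["answers"] if a["confidence"] >= threshold]
    if not candidates:
        return None
    return max(candidates, key=lambda a: a["confidence"])

top = best_answer(sample_response)
print(top["text"])  # the candidate with the highest confidence above 0.5
```

Thresholding like this is how an application decides when to surface Watson’s hypothesis to a user and when to fall back to a human or a clarifying question.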
Bluemix is IBM’s cloud development platform and is designed for developers to quickly build, run, scale and manage applications. IBM is committed to open technology in cloud and for that reason Bluemix is built on the Cloud Foundry open source project.
Applications typically rely on a variety of underlying technologies that developers build on top of. With the introduction of the Watson services, we have now added more than 60 IBM, third-party, and open source services to Bluemix since we introduced the platform in late February of this year.
As such, developers have access not only to Watson services through Bluemix, but also to databases, runtimes, and services for web, mobile, DevOps, analytics, and the Internet of Things.
At the Digital NYC launch last week we were thrilled to be joined by one of our clients, eyeQ Insights. eyeQ specializes in personalized in-store shopping experiences, and their application uses the user modeling service.
A shopper in a bike store simply enters their Twitter ID into a 42-inch in-store touchscreen. The entire display, including the looping video feed, then adapts based on the identified characteristics of the shopper.
Check out our sample apps
In addition, we’ve built a number of sample applications for the new Watson services and have embedded the user modeling sample app below. It comes populated with text from Moby Dick; if you click “analyze,” you can see the kind of results Watson can return, both the underlying data and analysis and a visualization built with the new visualization rendering SDK.
We believe this is another step toward applications that radically change the way businesses provide services and reduce the barriers of communication with their consumers across industries, domains, and disciplines.