Watson, ‘Jeopardy!’ champion
The DeepQA computer won TV’s smartest quiz show and kicked off an era of natural language processing
The IBM Power7 hardware that enabled Watson’s win on television’s Jeopardy!

In the category of computers, the answer is: “The name of the computer that beat the best human Jeopardy! player ever.” The correct question: “What is IBM Watson?”

In a televised Jeopardy! contest viewed by millions in February 2011, IBM’s Watson DeepQA computer made history by defeating the TV quiz show’s two foremost all-time champions, Brad Rutter and Ken Jennings.

Named after IBM’s first CEO, Thomas J. Watson Sr., Watson is a question-answering computer system developed by an IBM research team led by principal investigator David Ferrucci. It was the leading edge of a new generation of computers capable of understanding questions posed in natural language and answering them far more accurately than any standard search technology — without being connected to the internet.

Watson’s ability to uncover insights in unstructured data represented a big leap in a subset of artificial intelligence called natural language processing and an important step toward a world in which intelligent machines are able to understand and respond to everyday questions to improve decision-making. In the years since its Jeopardy! victory, Watson has had a far-reaching impact on industry and society, from analyzing satellite imagery to help improve conservation efforts, to empowering customer-service centers with better responses to questions and concerns.

Pushing the boundaries of natural language processing

IBM had been searching for a new human-versus-machine challenge ever since its Deep Blue computer defeated chess champion Garry Kasparov in a historic 1997 match. In 2006, Ferrucci, an IBM computer scientist with a background in AI, pitched his bosses on an idea to create a computer that could beat a human at Jeopardy!, widely regarded as the most challenging quiz show on television.

Ferrucci thought that building a computer to compete in a question-and-answer game could push the boundaries of natural language processing, in which computers are programmed to analyze and respond to common words and phrases. “The goal is not to model the human brain,” he said. “The goal is to build a computer that can be more effective in understanding and interacting in natural language, but not necessarily the same way humans do it.”

Ferrucci assembled a team of more than two dozen scientists, engineers and programmers at the Thomas J. Watson Research Center in Yorktown Heights, New York. Among the programmers assigned to the project was Ed Toutant, who had won USD 10,000 on Jeopardy! in 1989. It took the team roughly five years to build and refine the question-answering system.

Teaching Watson to read, listen and understand

The original Watson was a room-size computer consisting of 10 racks holding 90 servers, with a total of 2,880 processor cores. It ran on IBM’s DeepQA software technology, which it used to generate hypotheses, gather evidence and analyze data. Over a period of years, Watson ingested mountains of information from Wikipedia and encyclopedias, dictionaries, religious texts, novels, plays, and books from Project Gutenberg, among other sources.

Unlike search engines, which parse keywords and return a list of related documents that may or may not be relevant, Watson could understand questions posed in natural language and return responses that directly addressed them. Watson’s main innovation was its ability to quickly execute hundreds of algorithms that simultaneously analyzed a question from many directions, found and scored potential answers, gathered additional supporting evidence for each answer, and evaluated everything using natural language processing.
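
To make that fan-out-and-score pattern concrete, here is a minimal sketch in Python. It is not IBM’s DeepQA code: the two toy scorers and every name in it are hypothetical stand-ins for the hundreds of specialized evidence-scoring algorithms Watson ran in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for DeepQA's hundreds of evidence scorers.
# Each examines one (question, candidate) pair from a different angle
# and returns an independent score between 0 and 1.
def keyword_overlap(question: str, candidate: str) -> float:
    q, c = set(question.lower().split()), set(candidate.lower().split())
    return len(q & c) / max(len(q), 1)

def brevity_prior(question: str, candidate: str) -> float:
    # Toy prior: Jeopardy! responses are usually short noun phrases.
    return 1.0 / (1.0 + len(candidate.split()))

SCORERS = [keyword_overlap, brevity_prior]  # Watson ran hundreds of these

def score_candidates(question, candidates):
    """Fan every scorer out over every candidate answer in parallel,
    then collect the list of per-scorer scores for each candidate."""
    with ThreadPoolExecutor() as pool:
        futures = {c: [pool.submit(s, question, c) for s in SCORERS]
                   for c in candidates}
        return {c: [f.result() for f in fs] for c, fs in futures.items()}
```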

The more of its algorithms that independently arrived at the same answer, the higher Watson’s confidence in that answer. If the confidence level was high enough, Watson buzzed in; if not, it stayed silent. Watson performed all of these calculations in about three seconds.
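
Continuing the sketch above, the buzz decision reduces to aggregating those independent scores into a single confidence and comparing it against a threshold. Watson’s actual confidence estimation was learned from training data, weighting each algorithm by its reliability; the plain average and the threshold value below are illustrative assumptions only.

```python
def decide_buzz(scores_by_candidate, threshold=0.5):
    """Pick the best-supported candidate and decide whether to buzz.

    Confidence here is the mean of each candidate's per-scorer scores,
    so more agreement among scorers yields a higher confidence.
    """
    confidences = {cand: sum(scores) / len(scores)
                   for cand, scores in scores_by_candidate.items()}
    best = max(confidences, key=confidences.get)
    return best, confidences[best], confidences[best] >= threshold

# Example with made-up scores from three hypothetical scorers:
scores = {"Chicago": [0.9, 0.8, 0.7], "Toronto": [0.4, 0.5, 0.2]}
answer, confidence, should_buzz = decide_buzz(scores)
# -> ("Chicago", ~0.8, True): enough agreement to buzz in
```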

Watson wins — then goes to work

Watson was put to the test in two Jeopardy! matches played over three days. Watson wasn’t perfect. In the first match, Watson missed the “Final Jeopardy!” clue in the category US Cities (“Its largest airport was named for a World War II hero; its second largest, for a World War II battle”). The correct response was “What is Chicago?” but Watson answered, “What is Toronto?????” with five question marks indicating a substantial lack of confidence. Watson’s blunder prompted an IBM engineer to wear a Toronto Blue Jays jacket to the recording of the second match.

Watson entered the second match in a tie with Brad Rutter but quickly pulled into a commanding lead and finished with a resounding victory. Watson won USD 77,147, which was donated to various charities, besting Ken Jennings’s USD 24,000 and Brad Rutter’s USD 21,600. After the contest, Jennings wryly commented, “‘Quiz show contestant’ may be the first job made redundant by Watson, but I’m sure it won’t be the last.”

Since Watson’s Jeopardy! victory, the underlying technology has gone on to help organizations predict, optimize and automate business processes across numerous industries. Roughly 70% of global financial institutions and 13 of the top 14 systems integrators use Watson. In a project with The Weather Company, Watson provides hyperlocal forecasts down to the neighborhood or even street level. This helps retailers understand how the weather affects buying behavior and adjust their stock accordingly. OmniEarth, a provider of global Earth observation and analytics, has used Watson computer vision services to analyze satellite and aerial imagery and gauge water usage on a property-by-property basis, helping water districts in drought-stricken California improve conservation efforts.

Turns out Watson knows even more answers than it let on during Jeopardy!.
