August 5, 2013 | Written by: IBM Research Editorial Staff
Dr. Gerald Tesauro, the IBM Research scientist who taught Watson how to make wagers during its Jeopardy! matches, has been named an Association for the Advancement of Artificial Intelligence (AAAI) Fellow. His development of TD-Gammon, “a self-teaching neural network that learned to play backgammon at human world championship level,” along with his work applying machine learning across disciplines from computer virus recognition to computer chess, made him an ideal candidate for the association’s title.
You’ve worked on machines that play Jeopardy!, chess and backgammon. What is the significance of machines that can play games?
Dr. Gerald Tesauro
In the early decades of AI, algorithms were not ready to tackle the ambiguous, ill-defined nature of real-world problems. Researchers therefore proposed that complex board games like chess and backgammon could serve as an ideal testing ground for AI algorithms (the so-called “Drosophila of AI”). Tasks such as playing grandmaster-level chess may be incredibly complex, but they can be precisely specified for the computer.
By working in these domains, researchers made enormous progress in search, learning, and simulation techniques, to the point where the best computers now surpass the best humans in virtually all classic board games. As a result, AI is now moving on to tackle real-world ambiguity head-on.
In the Jeopardy! Grand Challenge, we still had a game environment with precise rules of play, but now had to deal with highly ambiguous natural-language questions, having no explicitly defined meaning. Looking forward, the next “Drosophila of AI” may be in life-like virtual reality games, such as World of Warcraft. In such environments, AI software would need to move simulated bodies via simulated physics, and would need to engage in deep dialogues (including bargaining, persuasion, etc.) with other human or computerized players.
How does a machine learning to play a game translate to things like e-commerce and virus recognition?
One aspect of learning in games is learning how to detect generalizable structure in a game state (i.e., “pattern recognition”) that is useful for categorizing or evaluating the state. This type of learning directly carries over to virus recognition, where we look for patterns in the raw binaries of .EXE files that may indicate likelihood of infection. The other main aspect is learning how to make the best decision (i.e., select the best move) to achieve the player’s long-range objectives.
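The pattern-recognition side can be made concrete with a small sketch. One simple, generic feature representation for raw binaries (a standard technique shown for illustration, not necessarily the features used in the IBM antivirus work) is to count overlapping byte n-grams, which a classifier can then use to separate infected from clean files:

```python
from collections import Counter

def byte_ngrams(data: bytes, n: int = 2) -> Counter:
    """Count overlapping byte n-grams in a raw binary; the counts
    can feed any standard classifier as a feature vector."""
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

# Toy buffer standing in for the head of an .EXE file ("MZ" header):
feats = byte_ngrams(b"MZ\x90\x00MZ\x90\x00")
```

The same counting trick generalizes to board games: board positions, like byte sequences, expose local patterns whose frequencies carry predictive signal.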
By developing general-purpose Reinforcement Learning algorithms in game environments, we were able to then directly apply those algorithms in both e-commerce (submitting the optimal bid in a double-auction marketplace) as well as in autonomic computing (dynamically assigning server capacity to transactional workloads in data centers).
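The flavor of such a general-purpose learner can be sketched with tabular value learning in a toy one-shot auction. Everything below (the market model, bid levels, and learning parameters) is an illustrative assumption, not the actual e-commerce system:

```python
import random

random.seed(0)

VALUE = 10               # buyer's private valuation of the good
BIDS = [4, 5, 6, 7, 8]   # discrete bid levels the learner may choose

def auction_round(bid):
    """Toy market: a clearing price is drawn at random; the bid wins
    when it meets the price, earning profit VALUE - bid, else 0."""
    price = random.choice([4, 5, 6])
    return VALUE - bid if bid >= price else 0

# Tabular value estimates updated under an epsilon-greedy policy.
# With a single state this reduces to a bandit-style update, but the
# same recipe extends to multi-step reinforcement learning problems
# such as dynamic server allocation.
q = {b: 0.0 for b in BIDS}
alpha, epsilon = 0.05, 0.1
for _ in range(50_000):
    if random.random() < epsilon:
        bid = random.choice(BIDS)          # explore a random bid
    else:
        bid = max(q, key=q.get)            # exploit the best estimate
    q[bid] += alpha * (auction_round(bid) - q[bid])

best_bid = max(q, key=q.get)
```

In this toy market a bid of 6 always wins and earns profit 4, the highest expected value, so the learned estimates concentrate around it; a real double-auction bidder additionally faces inventory state and competing adaptive agents.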
Now that Watson is working in medicine and customer service, what new things are you teaching it?
Personally I’m not teaching it anything. My motto for Machine Learning is “Human out of the loop.” Actually, I’m part of a big team that is articulating IBM’s vision and roadmap for “Cognitive Computing.” Besides Watson, IBM has many other technology components that contribute to Cognitive Computing, such as SyNAPSE, a computational platform that leverages brain architecture principles, and IMARS, which provides semantically meaningful labeling of raw multimedia (speech, image, video, etc.) content.
My colleagues and I are working out how to combine our various technology offerings to create an enhanced version of Watson, with sufficient capabilities at natural language dialogue, massive-scale multi-modal inference, etc., to participate as a genuine partner in a collaborative problem-solving team.
What are you working on now? Where else can theoretical and applied machine learning be used?
Guess what — it’s all about Analytics on Big Data. One current topic is choosing what data to train on in a high-volume streaming environment. Imagine there is so much data coming in so rapidly that you could not keep up if you looked at all of it. So, the question is, how do you choose the best subset to examine, given that you can never see the full data for any example?
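One classic answer to exactly this subset-selection question (a standard baseline, not necessarily the method under study here) is reservoir sampling, which maintains a uniform random sample of fixed size while touching each streamed example only once:

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Keep a uniform random sample of k items from a stream of
    unknown length, examining each item exactly once (Algorithm R)."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)        # fill the reservoir first
        else:
            j = rng.randrange(i + 1)      # replace with prob. k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

sample = reservoir_sample(range(1_000_000), 10)
```

The harder research question is biasing that choice toward the most informative examples rather than a uniform draw, while still never seeing the full data.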
I’m also using massive amounts of weather data from geosynchronous satellites to learn predictive models of available solar energy, over a wide range of spatial and temporal scales. Accurate predictions could result in billions of dollars of spending reductions in the US on unnecessary backup capacity by the utility companies.
What does it mean to you to be named an AAAI Fellow?
I’ve already been honored by the many colleagues who have built upon my work, and many students who have been inspired to seek careers related to AI. But it’s a special honor and privilege to be officially recognized by the leading professional society devoted to AI, and to be counted in the company of so many esteemed earlier Fellows, including all of the founders of the field.