
Extending Game-Based AI Research into the Wild

A few days from now, Google’s AlphaGo artificial intelligence (AI) program will take on one of the world’s greatest Go players, South Korea’s Lee Se-Dol, in a live-streamed, five-day match of what is considered the most difficult board game for computers. As researchers who have worked extensively with games, we are excited to see such great progress in computer Go, and we will, of course, be rooting for the machine and the scientists behind the machine.

From the beginning of AI, researchers were drawn to games like checkers and chess, due to the enormous complexity of calculating the best moves. An additional perceived advantage of these board games was that they were “clean,” i.e., an exact formal specification of the task could be given to computers, in terms of the game states, the rules of legal play, and the determination of the ultimate outcome.

Developing AI programs to master board games drove great progress in the field, including advances in techniques for search algorithms and evaluation functions. These techniques were supercharged in the 1990s, when IBM’s Deep Blue team (including M.C.) combined advances in search and evaluation with large-scale parallel computing, enabling a win in 1997 over world chess champion Garry Kasparov. Major innovations in machine learning were also developed, notably the self-teaching programs of IBM researchers Arthur Samuel in checkers in the 1950s and one of us (G.T.) in backgammon in the 1990s.
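As a rough illustration of how search and evaluation fit together, the sketch below implements depth-limited minimax search with a heuristic evaluation function. The GameState interface it assumes (legal_moves, apply, is_terminal, evaluate) is a hypothetical stand-in for whatever board representation a real program would use.

```python
def minimax(state, depth, maximizing):
    """Depth-limited minimax: look ahead `depth` plies, then fall back on a
    heuristic evaluation of the position (scored from the maximizing player's view)."""
    if depth == 0 or state.is_terminal():
        return state.evaluate()
    if maximizing:
        return max(minimax(state.apply(m), depth - 1, False)
                   for m in state.legal_moves())
    return min(minimax(state.apply(m), depth - 1, True)
               for m in state.legal_moves())

def best_move(state, depth=4):
    """Pick the legal move whose resulting position scores best;
    after our move it is the opponent's turn, hence maximizing=False."""
    return max(state.legal_moves(),
               key=lambda m: minimax(state.apply(m), depth - 1, False))
```

Programs like Deep Blue build on this basic recursion with alpha-beta pruning, move ordering, and massively parallel hardware, while learning-based programs such as Samuel’s checkers player and TD-Gammon tune the evaluation function automatically through self-play.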

However, research in such “clean” game domains didn’t really address most real-life tasks, which have a “messy” nature. By “messy,” we mean that, unlike board games, it may be infeasible to write down an exact specification of what happens when actions are taken, or indeed of what exactly the objective of the task is. Real-world tasks typically pose additional challenges, such as ambiguous, hidden, or missing data, and “non-stationarity,” meaning that the task can change unexpectedly over time. Moreover, they generally require human-level cognitive faculties, such as fluency in natural languages, common-sense reasoning and knowledge, and a theory of the motives and thought processes of other humans.

To begin to tackle the “messy” nature of real-world tasks, research scientists turned again to games. IBM’s Watson incorporated facets of AI, machine learning, deep question answering and natural language processing to play the American quiz show Jeopardy!, triumphing against the world’s best human players.

However, despite more than 60 years of research, AI scientists still appear to be decades away from creating fully autonomous systems that manifest human-level cognitive faculties. Yet we believe that now is a fantastic time for AI to begin tackling the messy problems of real life in a big way. Machines may not embody human-level cognition, but they can now crunch web-scale datasets with cloud-scale compute power, and thereby extract insights using highly sophisticated algorithms for machine learning. The key to progress, we believe, is to shift research focus from the sci-fi fantasy of machines that fully replicate human general intelligence (like Commander Data on Star Trek), and instead emphasize developing collaborative machines that can work in concert with humans, exploiting the disparate strengths of each.

IBM is firmly committed to this direction with Watson and our research in cognitive computing — systems that ingest large amounts of diverse data, reason over data, learn from their interactions with information and people, and interact with people in ways that are natural to us. None of this research focuses on sentience or autonomy on the part of machines. Rather, it consists of augmenting the human ability to understand — and act upon — the complex systems of our society.

Since Watson won on Jeopardy!, IBM’s researchers and engineers have expanded its capabilities so it can take on complex problems faced by organizations and individuals. The first target was a big one — cancer. Our colleagues are collaborating with Memorial Sloan Kettering Cancer Center to apply cognitive computing to help doctors make evidence-based treatment decisions. Oncologists — like all clinicians — aim to keep up with the large volume of data created daily regarding the disease, including research, medical records, and clinical trials.

The Watson for Oncology offering provides a window into how cognitive computing can work in concert with human judgment. We asked our IBM colleagues at Watson Health to explain how it works (a rough code sketch of the same flow appears after their description):

• “First, Watson analyzes the patient’s medical record. Watson for Oncology has an advanced ability to analyze the meaning and context of structured and unstructured data in clinical notes and reports, assimilating key patient information written in plain English that may be critical to selecting a treatment pathway.
• Next, Watson identifies potential evidence-based treatment options. By combining attributes from the patient’s file with clinical expertise, external research, and data, Watson for Oncology identifies potential treatment plans a doctor may want to consider for a patient.
• Then, Watson finds and provides supporting evidence from a wide variety of sources. Watson ranks identified treatment options and provides links to supporting evidence for each option to help oncologists as they consider treatment plans for their patient. Watson for Oncology draws from an impressive corpus of information, including MSK-curated literature and rationales, as well as more than 290 medical journals, more than 200 textbooks, and 12 million pages of text.”
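To make the three steps above concrete, here is a minimal, purely illustrative sketch of such a pipeline. The function names (extract_patient_attributes, match_treatments, rank_with_evidence) and the data structures are assumptions for illustration only, not Watson’s actual API, and the natural-language analysis step is stubbed out entirely.

```python
from dataclasses import dataclass, field

@dataclass
class TreatmentOption:
    name: str
    evidence: list = field(default_factory=list)  # links to supporting literature
    score: float = 0.0

def extract_patient_attributes(medical_record: str) -> dict:
    """Step 1 (illustrative): pull structured attributes out of free-text notes.
    A real system would apply NLP to clinical notes and reports."""
    return {"diagnosis": "example carcinoma", "stage": "II", "age": 62}

def match_treatments(attributes: dict, guidelines: list) -> list:
    """Step 2 (illustrative): find guideline entries compatible with the patient."""
    return [TreatmentOption(g["treatment"], evidence=g["citations"])
            for g in guidelines
            if g["diagnosis"] == attributes["diagnosis"]]

def rank_with_evidence(options: list) -> list:
    """Step 3 (illustrative): rank options, here simply by the amount of supporting evidence."""
    for opt in options:
        opt.score = len(opt.evidence)
    return sorted(options, key=lambda o: o.score, reverse=True)
```

The division of labor is the point of the sketch: the system surfaces and ranks evidence, while the treatment decision remains with the oncologist.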

This type of work, applying cognitive technology to a big problem like cancer, has the potential to be truly transformative. As we build out Watson’s cognitive capabilities, the prospective benefits of augmented intelligence (humans + machines) are looking brighter and brighter across many industries, including medicine, education, banking, insurance, law, government, retailing, and manufacturing. We see the potential for this technology to transform industries and professions, leading to greater productivity and better-informed decision-making.

For the foreseeable future, it appears that the challenges of a complex world will require a combination of people and machines working together, taking advantage of the strengths of each to move toward a more productive and better world.

——
To learn more about the new era of computing, read Smart Machines: IBM’s Watson and the Era of Cognitive Computing.

3 Comments



unknown

But still not better than the human brain … Watson is just another form of information retrieval system; humans can retrieve better answers than Watson does using search engines like Google.


PStrohm

“These techniques were supercharged in the 1990s, when IBM’s Deep Blue team”

When Deep Blue topped world chess champion Garry Kasparov in 1997, it did so with what’s called brute force. In essence, IBM’s supercomputer analyzed the outcome of every possible move, looking further ahead than any human possibly could. That’s simply not possible with Go. In chess, at any given turn, there are an average of 35 possible moves. With Go—in which two players compete with polished stones on a 19-by-19 grid—there are 250. And each of those 250 has another 250, and so on.

AlphaGo > DeepBlue
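As a quick back-of-the-envelope check on those figures, the snippet below compounds the branching factors cited in the comment above (35 for chess, 250 for Go) over a few plies; the depths chosen are arbitrary.

```python
# Compare how 35 vs. 250 legal moves per turn compound with search depth.
for depth in (2, 4, 6):
    chess_positions = 35 ** depth
    go_positions = 250 ** depth
    print(f"depth {depth}: chess ~{chess_positions:.2e}, go ~{go_positions:.2e}")
```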


Eyukah

It is always intriguing to see how AI does against human experts in established strategy games, and it is also well covered by the media. It would be interesting to see how Watson would do against one of the newer, popular, “messy” games, such as Hearthstone.
