
Extending Game-Based AI Research into the Wild


A few days from now, Google DeepMind’s AlphaGo artificial intelligence (AI) program will take on one of the world’s greatest Go players, South Korea’s Lee Se-Dol, in a live-streamed, five-day match of Go, widely considered the most difficult board game for computers. As researchers who have worked extensively with games, we are excited to see such great progress in computer Go, and we will, of course, be rooting for the machine and the scientists behind the machine.

From the beginning of AI, researchers were drawn to games like checkers and chess because of the enormous complexity of calculating the best moves. An additional perceived advantage of these board games was that they were “clean,” i.e., an exact formal specification of the task could be given to a computer, in terms of the game states, the rules of legal play, and the determination of the ultimate outcome.
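
To make the idea of a “clean” specification concrete, the sketch below shows those three ingredients for tic-tac-toe, a stand-in for games like checkers or chess. This is a minimal illustration of our own; the function names are hypothetical and belong to no particular system.

```python
# A minimal, hypothetical sketch of a "clean" game specification:
# the entire task is captured by three ingredients -- states, legal
# moves, and an outcome rule. Tic-tac-toe stands in here for a game
# like checkers or chess; the names are ours, not any system's API.

EMPTY = "."

def initial_state():
    """A game state: a 3x3 board (a 9-character string) plus the player to move."""
    return (EMPTY * 9, "X")

def legal_moves(state):
    """The rules of legal play: any empty square may be filled."""
    board, _ = state
    return [i for i, cell in enumerate(board) if cell == EMPTY]

def play(state, move):
    """The state transition is exact and deterministic -- nothing 'messy'."""
    board, player = state
    board = board[:move] + player + board[move + 1:]
    return (board, "O" if player == "X" else "X")

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def outcome(state):
    """The ultimate outcome: a win for X or O, a draw, or None if unfinished."""
    board, _ = state
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a] + " wins"
    return "draw" if EMPTY not in board else None
```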

Developing AI programs to master board games drove great progress in the field, including advances in techniques for search algorithms and evaluation functions. These techniques were supercharged in the 1990s, when IBM’s Deep Blue team (including one of us, M.C.) combined advances in search and evaluation with large-scale parallel computing, enabling a win in 1997 over world chess champion Garry Kasparov. Major innovations in machine learning also came out of games, notably the self-teaching checkers program of IBM researcher Arthur Samuel in the 1950s and the self-teaching backgammon program of one of us (G.T.) in the 1990s.
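
As a rough illustration of how search and evaluation fit together, here is a minimal depth-limited minimax sketch. It is a simplified outline under our own assumptions, not Deep Blue’s actual method, and `legal_moves`, `play`, and `evaluate` are placeholder callbacks (the tic-tac-toe functions above would serve for the first two).

```python
# A minimal sketch of depth-limited minimax search guided by an
# evaluation function -- the basic recipe behind classic game-playing
# programs. This is our simplified outline, not Deep Blue's code:
# real systems add alpha-beta pruning, parallel search, and heavily
# engineered evaluation terms.

def minimax(state, depth, maximizing, legal_moves, play, evaluate):
    """Return (score, best_move) from the maximizing player's viewpoint."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        # At the search horizon (or end of game), fall back on the
        # evaluation function: a heuristic estimate of position quality.
        return evaluate(state), None
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in moves:
        score, _ = minimax(play(state, move), depth - 1,
                           not maximizing, legal_moves, play, evaluate)
        if (maximizing and score > best_score) or \
           (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```

Even a trivial `evaluate` (say, one that returns 0 for every position) yields a complete, if weak, player; the strength of the classic programs came from searching deeper and evaluating positions far more intelligently.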

However, research in such “clean” game domains didn’t really address most real-life tasks, which have a “messy” nature. By “messy,” we mean that, unlike in board games, it may be infeasible to write down an exact specification of what happens when actions are taken, or even of what exactly the objective of the task is. Real-world tasks typically pose additional challenges, such as ambiguous, hidden or missing data, and “non-stationarity,” meaning that the task can change unexpectedly over time. Moreover, they generally require human-level cognitive faculties, such as fluency in natural language, common-sense reasoning and knowledge, and a theory of the motives and thought processes of other humans.

To begin to tackle the “messy” nature of real-world tasks, research scientists turned again to games. IBM’s Watson incorporated facets of AI, machine learning, deep question answering and natural language processing to play the American quiz show Jeopardy!, triumphing against the world’s best human players.

However, despite more than 60 years of research, AI scientists still appear to be decades away from creating fully autonomous systems that manifest human-level cognitive faculties. Yet we believe that now is a fantastic time for AI to begin tackling the messy problems of real life in a big way. Machines may not embody human-level cognition, but they can now crunch web-scale datasets with cloud-scale compute power, and thereby extract insights using highly sophisticated machine learning algorithms. The key to progress, we believe, is to shift research focus away from the sci-fi fantasy of machines that fully replicate human general intelligence (like Commander Data on Star Trek) and toward collaborative machines that work in concert with humans, exploiting the disparate strengths of each.

IBM is firmly committed to this direction with Watson and our research in cognitive computing: systems that ingest large amounts of diverse data, reason over that data, learn from their interactions with information and people, and communicate with people in ways that are natural to us. None of this research focuses on sentience or autonomy on the part of machines. Rather, it consists of augmenting the human ability to understand, and act upon, the complex systems of our society.

Since Watson won on Jeopardy!, IBM’s researchers and engineers have expanded its capabilities so it can take on complex problems faced by organizations and individuals. The first target was a big one: cancer. Our colleagues are collaborating with Memorial Sloan Kettering Cancer Center to apply cognitive computing to help doctors make evidence-based treatment decisions. Oncologists, like all clinicians, must keep up with the large volume of data created daily about the disease, including research, medical records, and clinical trials.

The Watson for Oncology offering provides a window into how cognitive computing can work in concert with human judgment. We asked our IBM colleagues at Watson Health to explain how it works; their three steps are quoted below, followed by a schematic sketch of the pipeline:

• “First, Watson analyzes the patient’s medical record. Watson for Oncology has an advanced ability to analyze the meaning and context of structured and unstructured data in clinical notes and reports, assimilating key patient information written in plain English that may be critical to selecting a treatment pathway.
• Next, Watson identifies potential evidence-based treatment options. By combining attributes from the patient’s file with clinical expertise, external research, and data, Watson for Oncology identifies potential treatment plans a doctor may want to consider for a patient.
• Then, Watson finds and provides supporting evidence from a wide variety of sources. Watson ranks the identified treatment options and provides links to supporting evidence for each option to help oncologists as they consider treatment plans for their patient. Watson for Oncology draws from an impressive corpus of information, including MSK-curated literature and rationales, as well as more than 290 medical journals, more than 200 textbooks, and 12 million pages of text.”
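
The quoted steps map onto a simple three-stage pipeline, sketched schematically below. This is purely our reading of the description; every function name, data field, and scoring rule is hypothetical and does not reflect Watson’s actual implementation.

```python
# A purely illustrative sketch of the three-stage pipeline described
# above. Every function name, data field, and score here is
# hypothetical; this is not Watson's actual implementation.

def analyze_record(medical_record):
    """Stage 1 (hypothetical): pull key patient attributes from
    structured fields and free-text clinical notes."""
    attributes = dict(medical_record.get("structured", {}))
    # A real system applies natural language processing to the notes;
    # here we simply carry the raw text along.
    attributes["note_text"] = list(medical_record.get("notes", []))
    return attributes

def identify_options(attributes, guidelines):
    """Stage 2 (hypothetical): match patient attributes against
    evidence-based guidelines to produce candidate treatments."""
    return [dict(g) for g in guidelines
            if all(attributes.get(k) == v for k, v in g["criteria"].items())]

def rank_with_evidence(options, literature):
    """Stage 3 (hypothetical): rank the candidates and attach links
    to supporting evidence for the oncologist to review."""
    for option in options:
        option["evidence"] = [doc["url"] for doc in literature
                              if doc["topic"] == option["treatment"]]
        option["score"] = len(option["evidence"])  # toy ranking signal
    return sorted(options, key=lambda o: o["score"], reverse=True)
```

An application would simply chain the stages, passing a patient record through `analyze_record`, `identify_options`, and `rank_with_evidence` in turn; crucially, the output is a ranked, evidence-linked list for the doctor to weigh, with the final decision remaining in human hands.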

This type of work, applying cognitive technology to a big problem like cancer, has the potential to be truly transformative. As we build out Watson’s cognitive capabilities, the prospective benefits of augmented intelligence (humans + machines) look brighter and brighter across many industries, including medicine, education, banking, insurance, law, government, retailing, and manufacturing. We see a potential for this technology to transform industries and professions, leading to greater productivity and much better-informed decision-making.

For the foreseeable future, it appears that the challenges of a complex world will require a combination of people and machines working together, taking advantage of the strengths of each to move toward a more productive and better world.

——
To learn more about the new era of computing, read Smart Machines: IBM’s Watson and the Era of Cognitive Computing.

