Today, an artificial intelligence (AI) system engaged in the first-ever live, public debates with humans. At an event held at IBM’s Watson West site in San Francisco, a champion debater and IBM’s AI system, Project Debater, began by preparing arguments for and against the statement: “We should subsidize space exploration.” Both sides then delivered a four-minute opening statement, a four-minute rebuttal, and a two-minute summary.
Project Debater made an opening argument that supported the statement with facts, including the points that space exploration benefits humankind because it can help advance scientific discovery and it inspires young people to think beyond themselves. Noa Ovadia, the 2016 Israeli national debate champion, opposed the statement, arguing that there are better applications for government subsidies, including subsidies for scientific research here on Earth. After listening to Noa’s argument, Project Debater delivered a rebuttal speech, countering with the view that the potential technological and economic benefits of space exploration outweigh those of other government spending. Following closing summaries from both sides, a snap poll showed that a majority of audience members thought Project Debater enriched their knowledge more than its human counterpart did.
Just think about that for a moment. An AI system engaged with an expert human debater, listened to her argument, and responded convincingly with its own, unscripted reasoning to persuade an audience to consider its position on a controversial topic. Later, we held a second debate between the system and another Israeli debate expert, Dan Zafrir, that featured opposing arguments on the statement: “We should increase the use of telemedicine.”
For the initial demonstrations of this new technology, we selected from a curated list of topics to ensure a meaningful debate. But Project Debater was never trained on the topics. Over time, and in relevant business applications, we will naturally move toward using the system for issues that haven’t been screened.
Project Debater moves us a big step closer to one of the great boundaries in AI: mastering language. It is the latest in a long line of major AI innovations at IBM, which also include “Deep Blue,” the IBM system that took on chess world champion Garry Kasparov in 1997, and IBM Watson, which beat the top human champions on Jeopardy! in 2011.
Project Debater reflects the mission of IBM Research today to develop broad AI that learns across different disciplines to augment human intelligence. AI assistants have become highly useful to us through their ability to conduct sophisticated keyword searches and respond to simple questions or requests (such as “how many ounces in a liter?” or “call Mom”). Project Debater explores new territory: it absorbs massive and diverse sets of information and perspectives to help people build persuasive arguments and make well-informed decisions.
This technology will expand upon the capabilities of IBM Watson, which is being used today by dozens of companies to mine massive, internal data sets for new business insights. The system already uses the Watson Speech to Text API, and it will contribute to enhancing Watson’s advanced language and dialogue features. Project Debater’s underlying technologies will also be commercialized in IBM Cloud and IBM Watson in the future.
Building the system was a remarkably difficult and complex challenge. Over the past six years, a global IBM Research team led by our Haifa, Israel, lab endowed Project Debater with three capabilities, each breaking new ground in AI: first, data-driven speech writing and delivery; second, listening comprehension that can identify key claims hidden within long, continuous spoken language; and third, modeling human dilemmas in a unique knowledge graph to enable principled arguments (read more about the technical details in the 30+ published papers with access to the training data sets here).
The debate format offers the ideal testing ground for these capabilities. Debate rules stem from a human culture of discussion and are not arbitrary, and the value of arguments is often inherently subjective. Project Debater must adapt to human rationale and propose lines of argument that people can follow. In debate, AI must learn to navigate our messy, unstructured human world as it is – not by using a pre-defined set of rules, as in a board game.
For this reason, Project Debater sometimes makes mistakes, just like people. Though work on this technology is far from complete, it has the potential to assist with thousands of complicated human decisions. For example, it could help identify financial facts that either support or oppose a financial thesis, or present pro and con arguments related to public policies. Project Debater could be the ultimate fact-based sounding board without the bias that often comes from humans.
That’s a very positive development for AI. The more transparent and explainable we can make this transformative technology, the more we can trust it. And the more we can trust it, the more it will help us make the best, most informed decisions in an increasingly complex world.
IBM researchers published the first major release of the Adversarial Robustness 360 Toolbox (ART). Initially released in April 2018, ART is an open-source library for adversarial machine learning that provides researchers and developers with state-of-the-art tools to defend and verify AI models against adversarial attacks. ART addresses growing concerns about people’s trust in AI, specifically the security of AI in mission-critical applications.
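To make the idea of an adversarial attack concrete, here is a minimal, self-contained sketch of the Fast Gradient Sign Method (FGSM), one of the classic evasion attacks that libraries like ART implement. This is an illustrative standalone example on a toy logistic-regression model, not ART’s actual API; the model weights and epsilon are arbitrary choices for the demonstration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w, b):
    """Logistic-regression prediction: class 1 if probability > 0.5."""
    return sigmoid(sum(xi * wi for xi, wi in zip(x, w)) + b) > 0.5

def fgsm_perturb(x, y, w, b, eps):
    """Fast Gradient Sign Method: nudge each feature of x by eps in the
    direction that increases the model's cross-entropy loss."""
    p = sigmoid(sum(xi * wi for xi, wi in zip(x, w)) + b)
    grad_x = [(p - y) * wi for wi in w]          # d(loss)/dx for logistic regression
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad_x)]

# Toy model that classifies inputs by the sign of their first feature.
w, b = [2.0, 0.0], 0.0
x, y = [0.5, 1.0], 1.0                           # correctly classified as class 1

print(predict(x, w, b))                          # True: original input classified as 1
x_adv = fgsm_perturb(x, y, w, b, eps=0.6)
print(predict(x_adv, w, b))                      # False: small perturbation flips the label
```

The point of the example is that a tiny, targeted change to the input, invisible to a casual observer, can flip a model’s decision, which is exactly the class of vulnerability ART’s defenses and verification tools are built to address.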
Following the massive success of deep learning in solving complicated tasks, it is no surprise that there is a growing demand for automated deep learning. Even though deep learning is a highly effective technology, a tremendous amount of human effort still goes into designing a deep learning algorithm.
Convex optimization problems, which involve the minimization of a convex function over a convex set, can be approximated in theory to any fixed precision in polynomial time. However, practical algorithms are known only for special cases. An important question is whether it is possible to develop algorithms for a broader subset of convex optimization problems that are efficient in both theory and practice.
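One of the simplest practical algorithms for this setting is projected gradient descent, which minimizes a smooth convex function over a convex set by alternating gradient steps with Euclidean projection back onto the set. The sketch below is my own illustration of that general technique (not an algorithm from the work described above), using a box constraint, whose projection is just coordinate-wise clipping:

```python
def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n (a convex set)."""
    return [min(max(xi, lo), hi) for xi in x]

def projected_gradient_descent(grad, project, x0, step, iters):
    """Minimize a smooth convex function over a convex set by alternating
    a gradient step with projection back onto the feasible set."""
    x = x0
    for _ in range(iters):
        g = grad(x)
        x = project([xi - step * gi for xi, gi in zip(x, g)])
    return x

# Example: minimize f(x) = (x0 - 2)^2 + (x1 + 1)^2 over the box [0, 1]^2.
# The unconstrained minimizer (2, -1) lies outside the box, so the
# constrained optimum sits on the boundary at (1, 0).
grad = lambda x: [2 * (x[0] - 2), 2 * (x[1] + 1)]
x_star = projected_gradient_descent(
    grad, lambda x: project_box(x, 0.0, 1.0),
    x0=[0.5, 0.5], step=0.1, iters=200)
print([round(v, 3) for v in x_star])   # [1.0, 0.0]: the constrained minimizer
```

For box constraints the projection is trivial, which is what makes this method efficient in practice; the open question in the paragraph above is how far such efficiency extends to broader families of convex sets and objectives.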