
Could AI fight fake news? Yes—with the help of humans.


He was one of the first people to develop a self-driving car, and now he trains fish to play soccer. But it wasn’t until he began to grapple with the problem of “fake news” that Dean Pomerleau finally hit on a task that seemed to exceed the limits of plausibility. In late November of last year, disheartened by the proliferation of disinformation in the lead-up to the presidential election, Pomerleau put out a call on Twitter challenging his followers to develop a machine-learning tool that could accurately distinguish between true and false claims, using Snopes.com’s database of 100,000 verified and debunked claims to train and test their algorithms. Pomerleau put up $1,000 of his own money at five-to-one odds. “I was rather pessimistic,” he recalls.

You may have heard of Fake News Challenge, the worldwide competition that blossomed out of Pomerleau’s disbelieving wager on Twitter, whose first-round winner was announced this month. But the machine-learning arms race that followed Pomerleau’s call to action hardly resembled the project he had initially envisioned. As interest in the challenge mounted, Pomerleau joined forces with Delip Rao, founder of the machine-learning firm Joostware. Together they winnowed the much larger goal of building a machine-learning-based fact-checker down to a more manageable task that could be completed in a short amount of time. The challenge’s first stage, FNC-1, focused on “stance detection”: automatically sorting pieces of text by their relative agreement or disagreement with a specified claim. FNC-2 and subsequent stages will build on that foundation, with the goal of moving toward a more complete fact-checking system.
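To make the task concrete, here is a minimal, hypothetical sketch of a stance-detection classifier in Python: it pairs a claim with a piece of text, extracts TF-IDF features from the joined pair, and trains a classifier to predict a stance label (the FNC-1 task used the labels agree, disagree, discuss, and unrelated). The toy examples, labels, and model choices below are illustrative only; this is not the challenge’s baseline or any winning entry.

```python
# Illustrative stance-detection sketch (not the FNC-1 baseline or a winning entry):
# given a (claim, text) pair, predict whether the text agrees with, disagrees
# with, or is unrelated to the claim.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training pairs; a real system would train on the FNC-1 corpus.
pairs = [
    ("Vaccines cause autism", "Large studies have found no link between vaccines and autism."),
    ("Vaccines cause autism", "A new report blames childhood vaccines for rising autism rates."),
    ("The economy added jobs last month", "Officials confirmed strong job growth in the latest report."),
    ("The economy added jobs last month", "This article reviews the best hiking trails in Scotland."),
]
labels = ["disagree", "agree", "agree", "unrelated"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # word and bigram features over the joined claim/text pair
    LogisticRegression(max_iter=1000),     # simple linear classifier over those features
)
model.fit([f"{claim} ||| {text}" for claim, text in pairs], labels)

# Predict the stance of a new text toward a claim.
print(model.predict(["The economy added jobs last month ||| Hiring slowed sharply, new data show."]))
```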

A stance-detection algorithm wasn’t the miracle machine Pomerleau had envisioned. But by speaking with journalists and fact-checkers about how artificial intelligence, machine learning, and natural language processing could best help their organizations, he and Rao began to see the ways existing technology could reduce repetitive, time-consuming work in the newsroom, and the restructured challenge gave those insights direction. The notion of automated fact-checking conducted without human intervention, while widely appealing, has taken a ribbing across various media, but it doesn’t take a technological utopian to envision how augmented intelligence could benefit journalism—much as it now allows doctors to make more accurate diagnoses or HR departments to run more effective hiring processes. An AI can sift instantly through piles of data, discovering patterns and inconsistencies, while sorting and categorizing information in a way that simplifies the work of human professionals. For researchers like Pomerleau and fact-checkers around the world, it’s an opportunity for humans to work in tandem with AI to enhance their decision-making.

“We’re in June, aren’t we?” asks Mevan Babakar, digital product manager at the UK fact-checking charity Full Fact, when we speak late in the month. Babakar possesses both a big-picture understanding of the capabilities of AI fact-checking and a micro-scale awareness of Full Fact’s place within it. This granular focus enabled her to compose the 30-page whitepaper “The State of Automated Fact-checking” with director Will Moy, and would, under normal circumstances, make her unlikely to lose track of something as simple as the date. But it’s been quite a year for Full Fact, even apart from the $500,000 grant they’ve just received from Omidyar Network and Open Society Foundations, which the charity says is the largest investment in any automated fact-checking project to date.

Babakar joined Full Fact’s team a couple of months before the 2015 UK general election, on the coattails of the Scottish independence referendum; the charity has since covered that election, Britain’s EU referendum, and, most recently, Theresa May’s snap election earlier this month (Babakar also found the time to serve as an advisor to Fake News Challenge). To boot, the organization received a €150,000 grant from the Google Digital News Initiative in November. “It’s been a few wild rides,” Babakar said. “But it does mean we have maybe one of the most experienced fact-checking charities doing work around elections as a result.”

Full Fact may have become a leader in fact-checking AI, but removing humans from the task of fact-checking has never been part of the equation. In its automation whitepaper, the charity lays out a vision for open, shared AI technology that enhances the work already being performed by humans. Echoing their partners at Fake News Challenge, Full Fact states that “a successful automated fact-checking system is one that saves fact-checkers and journalists time, and makes fact-checking more effective at limiting the spread of unsubstantiated claims.” The organization adds that accuracy should take precedence over comprehensiveness: “If an automated tool can monitor and check 10% of claims accurately, that is 10% more claims than we can handle at the moment without automated fact-checking tools. It is all a benefit.”

With the investment from Open Society and Omidyar, Full Fact plans to develop and release two products for journalists and fact-checking professionals by the end of this year. The first, Live, aims to fact-check television in real time using broadcast subtitles, aiding journalists in setting the record straight for the roughly 66% of Britons who get their news from TV, either by drawing from Full Fact’s existing database or by surfacing relevant data for a human fact-checker to make a swift assessment. The second product, Trends, monitors repeated claims and determines who is making them and where, allowing fact-checkers to form a better understanding of why a specific claim is being repeated and where it originates.
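A product like Live hinges on claim matching: noticing when a sentence in a live transcript repeats a claim that has already been checked. As a rough illustration of that step, and making no assumptions about Full Fact’s actual implementation, the sketch below compares each incoming subtitle line against a small database of previously checked claims using TF-IDF cosine similarity and surfaces any close match for a human fact-checker to review; the claims, threshold, and similarity measure are all placeholders.

```python
# Hypothetical claim-matching sketch: flag subtitle lines that closely resemble
# claims a fact-checking team has already written up. Not Full Fact's code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder database of previously checked claims.
checked_claims = [
    "The Labour party took 800,000 children out of poverty",
    "66% of Britons get their news from TV",
    "NHS funding has fallen in real terms",
]

vectorizer = TfidfVectorizer().fit(checked_claims)
claim_vectors = vectorizer.transform(checked_claims)

def match_subtitle(line, threshold=0.4):
    """Return (closest checked claim, similarity) if it clears the threshold, else None."""
    scores = cosine_similarity(vectorizer.transform([line]), claim_vectors)[0]
    best = int(scores.argmax())
    return (checked_claims[best], float(scores[best])) if scores[best] >= threshold else None

# A line from a live transcript that repeats a checked claim in different words.
print(match_subtitle("Labour lifted 800,000 children out of poverty"))
```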

Babakar remains a pragmatic eye of the storm in the midst of Full Fact’s rapid product development. “The point at which we have fully cognizant amazing AI that can make decisions and pass the Turing test—that’s the point at which we can have incredible, fully automated fact-checking in that sense,” Babakar says. “There are some parts of the fact-checking process…that don’t need our researchers, who are experts in their own fields.” She says that rather than looking up how much funding the National Health Service does or doesn’t receive, for instance, Full Fact’s team should be increasingly focused on the thorny, nuanced claims that machines can’t yet tackle: “What we want our fact-checkers to be working on is the harder problems, like ‘Is the NHS in crisis?’” Babakar explains.

Even today, Full Fact’s work traffics in shades of gray, an approach that sets them apart from prominent American counterparts, who frequently issue ratings that reflect the accuracy of a given claim. Whereas PolitiFact, the Pulitzer Prize-winning fact-checking website focused on American politics, assigns grades to statements on a six-point “Truth-o-Meter” scale—from “True” to “Pants on Fire!”—Full Fact sets claims and the charity’s own written conclusions side by side. “We think that [ratings] detract from real learning, in a way,” Babakar says, pointing to a claim made by Jeremy Corbyn in late November that the Labour party had taken 800,000 children out of poverty. Poverty rates, as Full Fact concluded, are determined by a number of different measures, making this statement far more complex than a “true” or “false” label would allow. “I think for us it’s about introducing complexity,” Babakar says, “showing people that actually, there are lots of shades of grey between the black and white.”

But how much more powerful could readers—and voters—become when AI’s capabilities and availability put all of those shades of gray within their reach? And how will advances in affective computing assist in, or take the reins of, decision-making—in the newsroom, in a doctor’s office, in a boardroom, or beyond? These are the sorts of questions at the heart of the Cognitive and Immersive Systems Lab (CISL), a joint project between IBM and Rensselaer Polytechnic Institute (RPI). At the core of CISL’s research is the Situations Room Infrastructure, an immersive environment meant to facilitate group decision-making through group-computer dialogue: a marriage of cognitive computing with advanced sensory input, allowing the room itself to respond to and enhance the human decision-making happening within its walls. By monitoring group dynamics from the background, the room could itself offer assistance in decision-making, taking into account not only the content of the group’s discussion, but also nuances such as the intention behind language and the socio-behavioral cues used by humans in the environment. The Situations Room’s first use case is in mergers and acquisitions, but as RPI President Shirley A. Jackson explained at CISL’s launch, its applications are seemingly infinite. It could enable architects and designers to collaborate on different aspects of one project at the same time, businesspeople to account for extreme circumstances that may arise as a result of climate disaster or international terrorism, and students with all manner of learning styles to learn with one another in a single classroom.

But even with some of the most advanced computing in the world, CISL’s researchers emphasize the value of human-computer collaboration. Like the nonpartisan fact-checking charities paving the way for machine learning in the fight against fake news, CISL seeks to expand the ways in which AI can enhance—not replace—human decision-making. In a TEDx talk on why humans should be excited rather than frightened by the future of AI, CISL’s Jim Hendler said that the best way to address the biggest problems facing the world today will be “putting together teams of the best minds we have, and for the foreseeable future the best minds will include humans and computers.” And as CISL researcher Qiang Ji points out, “The human actually has the final say.”

The Situations Room’s enormous capability rests on the multimodal input from a range of sensors and algorithms, a process Ji illustrates with a politician making a false claim on television. He explains that the room’s first task would be to digest the politician’s natural language in order to comprehend the content of what they are saying. In the meantime, Ji says, “You have another sensor that’s monitoring the physical behavior—the body behavior, the gesture, the facial behavior—of the person.” If a politician is lying, he says, that’s something these sensors can potentially detect based on physical behavior alone. Finally, he explains, “That information [is] combined with information from the speech and content… to decide if the politician is telling the truth or not.” Right now, Ji notes, not even humans themselves have the capability to juggle the multimodal input required to make a real-time judgment based on so many factors.
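What Ji describes is, in essence, late fusion: each modality produces its own estimate, and those estimates are weighted and combined into a single judgment. A minimal sketch of that idea, with entirely made-up modality scores and weights (nothing here comes from CISL’s system), might look like this.

```python
# Hypothetical late-fusion sketch of the multimodal judgment Ji describes:
# each sensor/model scores the statement independently, and the scores are
# combined with weights reflecting how much each modality is trusted.
# The modalities, scores, and weights are illustrative placeholders.

def fuse(scores, weights):
    """Weighted average of per-modality 'claim is false' probabilities."""
    total = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total

scores = {
    "language_content": 0.85,  # claim contradicts what the content model knows
    "facial_behavior": 0.60,   # mild signs of stress detected
    "gesture": 0.40,           # body language inconclusive
}
weights = {"language_content": 0.6, "facial_behavior": 0.25, "gesture": 0.15}

print(f"Combined probability the claim is false: {fuse(scores, weights):.2f}")
```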

For now, though, we’re left doing the best we can. And as Pomerleau points out, that’s quite a lot. “If you could debunk or validate short claims like Snopes does automatically, that takes a level of understanding both of language and of the world—how people interact with each other, what’s reasonable, what’s unreasonable.” He adds that the first round of the Fake News Challenge validated his belief “that we’re a long way off from having artificial general intelligence, and that we should focus more of our efforts on tools to assist humans rather than replace them.”

As it turns out, fully automated fact-checking is a much bigger challenge than teaching fish to play soccer. But as Pomerleau notes, “It takes a lot of patience to do both.”
