October 12, 2017 | Written by: Carrie Kirby
Categorized: New Thinking
Programmer Krishna Kaliannan lives in Manhattan. He can sum up the way he felt the day after the 2016 presidential election in one word: Blindsided. Who were the millions of Americans who had voted differently than polls had predicted? Why did they do it? Kaliannan didn’t know them, so he couldn’t ask.
At the same time, Harvard Business School MBA student Henry Tsai had a similar realization. Even though he knew both liberals and conservatives socially, he realized with shame that he hadn’t really been engaging in meaningful conversations with both sides.
“We don’t even listen to each other. We are dismissive or demonizing of others who don’t agree with us,” Tsai said.
Like many, both Kaliannan and Tsai put some blame for the disconnect on technology. The effect that online organizer Eli Pariser dubbed the “filter bubble” back in 2011 is now widely acknowledged to contribute to the increasing polarization among Americans: Facebook, Google and other technology companies deploy algorithms to show us the online content and communications that we like best, which tends to be content that reinforces our beliefs. Human nature compounds that effect when we use these tools to create networks of like-minded folks, removing with a click anyone who challenges us.
Tsai and Kaliannan are among a crop of entrepreneurs and researchers who have decided to fight electrons with electrons, harnessing machine learning and other technology to pop the filter bubble, throw open the echo chamber, and bridge the empathy gap—to, in short, get people who disagree talking to each other in a civil manner again.
“Yes, social media and all this personalization has created a filter bubble, but the solution is not to stop using social media,” Tsai says. “Our goal is to connect people so there’s more understanding. And we do that through technology.”
Kaliannan zeroed in on the problem that people tend to see the world through the lens of over-customized media, reading articles that like-minded friends shared while missing equally valid articles that would have challenged their beliefs. So he created EscapeYourBubble, a Chrome extension that combines machine learning and human editors to show users five to seven high-quality articles each week aimed at expanding their perspective. If you indicate at sign-up that you’d like to learn to be more accepting of Republicans, you might see, inserted into your Facebook feed or in your inbox, an article explaining the economic concerns that left Red State voters looking for change.
Tsai, on the other hand, bypassed the media and aimed to get people talking, one on one. He partnered with MIT computer science student Yasyf Mohamedali to create Hi From the Other Side, a service that matches up people with opposite political stances for friendly chats, either online or in person.
You wouldn’t think that sitting down for coffee with someone you disagree with would be all that enticing, yet 6,000 people have signed up. The key to making the connection a positive experience: Hi From the Other Side’s intelligent software matches each user with the ideal partner, someone with lots in common other than political stance. And the service makes it clear that you are signing up for a conversation, not a debate.
“First of all, we filter for nice people,” Tsai says. Would-be users have to answer a number of questions about their view of the world, and even say what they wanted to be when they grew up.
“The way people talk about their hopes and dreams is pre-political. It helps us find people who are likely to have common ground with each other. It doesn’t have to be that both people say, ‘I wanted to be a firefighter.’ It’s the words that they use: ‘My grandfather always told me …’”
Other projects strive to get Internet users out of their own bubble by letting them into the bubbles of others. That’s the goal of FlipFeed, a Chrome extension created by the MIT Media Lab’s Laboratory for Social Machines to show users exactly how someone from the other side experiences Twitter.
FlipFeed grew out of one of the lab’s previous projects, the Electome, which the lab describes as “a machine-driven mapping and analysis of public sphere content and conversation associated with the 2016 presidential election campaign.” Another bubble-bursting project to come out of the Electome is Social Mirror, an app that shows you where specific Twitter users—including media outlets—fall in the political spectrum. A partnership with Twitter provides the lab with both funding and access to huge volumes of tweets—the same tweets that any user could view publicly, but available for the researchers to process en masse.
Much justifiable anxiety has been generated over the negative effects that intelligent machines could have on society. But all three of these projects show that machine learning can also be harnessed to help us understand society in new ways and make prosocial changes. All three use machine learning to sort vast amounts of data into useful categories. At EscapeYourBubble, machines “algorithmically curate” the media.
“Each day thousands of articles on politics are published,” Kaliannan says. “Our editors can only read so many articles. So we use machine learning to filter that down to a reasonable number.”
A harder, but equally important job for machine learning at EscapeYourBubble is pinpointing the elements that make a story into a conversation starter, or the opposite.
“We’ve been using machine learning to understand why, for some of the past articles we’ve sent out, people react in a more close-minded manner, while others encourage folks to react in a more open-minded manner,” Kaliannan says.
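EscapeYourBubble's actual pipeline is not public, but the filtering step Kaliannan describes can be imagined as a relevance scorer that ranks the day's articles so editors only review a shortlist. The sketch below is purely illustrative, using naive keyword-overlap scoring in place of a trained model; the article texts and keywords are invented.

```python
# Illustrative sketch only: EscapeYourBubble's real system is not public.
# A trained model would replace this naive keyword-overlap score, but the
# shape of the task is the same: thousands of articles in, a handful out.

def score_article(text, topic_keywords):
    """Fraction of topic keywords that appear in the article text."""
    words = set(text.lower().split())
    hits = sum(1 for kw in topic_keywords if kw in words)
    return hits / len(topic_keywords)

def shortlist(articles, topic_keywords, top_n=5):
    """Keep only the top_n highest-scoring articles for human editors."""
    ranked = sorted(articles,
                    key=lambda a: score_article(a["text"], topic_keywords),
                    reverse=True)
    return ranked[:top_n]

articles = [
    {"title": "Rural economy and trade",
     "text": "tariffs jobs manufacturing economy rural voters"},
    {"title": "Celebrity gossip",
     "text": "red carpet awards fashion"},
    {"title": "Healthcare town hall",
     "text": "healthcare premiums voters town hall economy"},
]
keywords = ["economy", "voters", "jobs"]
picks = shortlist(articles, keywords, top_n=2)
print([a["title"] for a in picks])
```

The human editors then read only the shortlist, which is how a two-person editorial team can keep up with thousands of daily political articles.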
At Hi From the Other Side, computers must sift through dozens of survey answers submitted by thousands of users to figure out which Democrat to pair with which Republican. The software started by matching words, flagging people who used the same or similar words when they talked about themselves. Now it is venturing into the trickier process of “sentiment analysis”: the software may identify clues, for example, that both applicants are generally “hopeful.” Tsai and Mohamedali are working on adding nuance by considering more variables to quantify commonality, for instance looking at differences on certain issues even among people who vote for the same party, and adding local issues to the mix.
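The early word-matching stage Tsai describes can be sketched as a word-set overlap (Jaccard similarity) between survey answers, followed by a greedy pairing of each Democrat with the most similar unmatched Republican. This is a toy reconstruction, not the service's real matcher; the names and answers are invented.

```python
# Illustrative sketch: Hi From the Other Side's matcher is not public.
# Early versions reportedly matched on shared words in survey answers;
# this mimics that stage with Jaccard similarity over word sets.

def jaccard(a, b):
    """Word-set overlap between two free-text survey answers."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def match(democrats, republicans):
    """Greedily pair each Democrat with the most similar free Republican."""
    pairs, free = [], set(republicans)
    for d, d_answer in democrats.items():
        best = max(free, key=lambda r: jaccard(d_answer, republicans[r]),
                   default=None)
        if best is not None:
            pairs.append((d, best))
            free.discard(best)
    return pairs

democrats = {
    "dana": "my grandfather always told me to help people so i wanted to be a firefighter",
}
republicans = {
    "rob": "i wanted to be a firefighter like my grandfather",
    "rita": "stocks and markets fascinated me",
}
print(match(democrats, republicans))
```

Note how the match keys on the pre-political language Tsai points to, "my grandfather" and "firefighter", rather than on any policy position.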
At MIT’s Laboratory for Social Machines, machine learning is what makes it possible to sort the “firehose” gush of tweets that Twitter provides into distinct ideological streams. The lab’s machine learning system, which is used in the Electome, Social Mirror and FlipFeed, analyzes a Twitter user’s description, the tweets they send, and who they communicate with to identify right-wing and left-wing tweeters.
“Of course the real political spectrum is much more nuanced than left and right,” says research assistant Nabeel Gillani. Teaching the software to distinguish those nuances may be fodder for future projects.
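The lab's classifier itself is not published, but the signals Gillani's team describes, a user's bio, their tweet text, and who they communicate with, can each be imagined as contributing to a single left/right score. The sketch below is a toy illustration with hand-invented lexicon weights; a real system would learn these weights from labeled data.

```python
# Toy illustration only: the Laboratory for Social Machines' actual model
# is not public. It reportedly combines a user's description, tweets, and
# communication graph; here each signal adds to one score, with invented
# weights (positive = right-leaning, negative = left-leaning).

LEXICON = {"maga": 1.0, "conservative": 0.8,
           "resist": -1.0, "progressive": -0.8}
ACCOUNT_LEAN = {"@FoxNews": 0.6, "@MSNBC": -0.6}  # invented weights

def lean_score(bio, tweets, follows):
    """Sum word-based and network-based signals into one ideology score."""
    score = 0.0
    for word in (bio + " " + " ".join(tweets)).lower().split():
        score += LEXICON.get(word, 0.0)
    for account in follows:
        score += ACCOUNT_LEAN.get(account, 0.0)
    return score

def classify(bio, tweets, follows):
    """Collapse the score to the binary label the current tools use."""
    return "right" if lean_score(bio, tweets, follows) > 0 else "left"

print(classify("proud progressive", ["we must resist"], ["@MSNBC"]))
```

Collapsing the score to two labels is exactly the simplification Gillani flags: a finer-grained model would predict a position on a spectrum, or separate scores per issue, rather than a binary.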
Social media companies have made billions of dollars serving citizens exactly the content they want. Encouraging people to look beyond that, to engage with content and conversations that challenge their beliefs, probably won’t generate the same profits.
Of course, none of the researchers and entrepreneurs behind these projects started them to get rich. Grant money may be a more likely route for sustaining their work. The John S. and James L. Knight Foundation recently put up $50,000 for EscapeYourBubble and The UC Santa Cruz Science Communication Program to research how people learn about climate change from the media. EscapeYourBubble will use its platform to serve users different kinds of information about climate change, then conduct surveys to see how it affects attitudes.
Tsai has a day job, and uses this project as an expression of his identity as a “technology optimist.” He has no plans to make money from Hi From the Other Side. Philanthropic backers have expressed interest, but so far, he says, it hasn’t been terribly expensive to maintain.
The MIT researchers, meanwhile, are focused on how they can go beyond FlipFeed’s modest goal of showing a glimpse of the other side, to pursue the lab’s stated goal of using machine learning and other data science to enact positive change in human networks.
“FlipFeed is by no means even close to the answer to the political and empathy divides in our country. What we’re hoping to do now as a group is really think about other interventions, not just other tools but other approaches,” Gillani says.
Although their approaches are different, all three projects share the common thread of using machine learning to foster empathy and conversation—not to stifle disagreement or change minds.
“There is some good in conflict in politics,” Kaliannan says. The problem, as he sees it, is that we have moved from disagreeing with others to seeing those who disagree as fundamentally evil or immoral. Nor do any of the teams labor under the illusion that technology alone will save democracy.