
How AI and Humans Are Working Together to Reduce Online Toxicity


“Creating non-toxic communities on the net is the Next Big Thing,” tweeted Walter Isaacson, president of the Aspen Institute, earlier this fall. Isaacson, who has a reputation for spotting trends, was pointing to a WIRED interview with Instagram CEO Kevin Systrom and others on the platform’s efforts to curb online toxicity. Instagram, like most other platforms, has been busy playing whac-a-mole with nudity, racism, misogyny, child exploitation, and the always tricky sexual innuendo.

Given the nuance of human communication, that is easier said than done. Sometimes a cigar is just a cigar. Other times, given its context or surrounding text, it is potentially offensive and against a platform’s accepted code of conduct. Major social media companies and others concerned with offensive material online have been actively pushing toward enhanced moderation that strikes a balance between AI, which is better for scale, and humans, who are better at understanding context.

“We have always believed that it is good to use AI where it is strongest and humans where they are strongest,” says Carlos Figueiredo, the Director of Community Trust & Safety at Two Hat Security. The Vancouver-based company produces a chat filter and automated moderation software called CommunitySift, designed for social products. As Figueiredo points out, “social media platforms are often dealing with millions of lines of chat or potentially billions of lines of comments.”
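To make that division of labor concrete, here is a minimal sketch of what automated chat triage at scale might look like, written in Python. It is a hypothetical illustration only: the word lists, scoring heuristic, and thresholds are invented and say nothing about how CommunitySift actually works.

```python
# Hypothetical sketch of chat triage at scale; not Two Hat's actual
# CommunitySift implementation. A cheap automated pass scores every
# line so that only a small, ambiguous slice ever reaches a human.

from dataclasses import dataclass

# Stand-in word lists; a real filter would rely on trained models,
# conversational context, and per-community policies.
BLOCKLIST = {"<slur>", "<threat>"}
WATCHLIST = {"hate", "kill", "stupid"}

@dataclass
class Verdict:
    action: str   # "allow", "block", or "review"
    score: float  # crude toxicity estimate in [0, 1]

def score_line(line: str) -> float:
    """Rough heuristic standing in for a real toxicity classifier."""
    words = set(line.lower().split())
    if words & BLOCKLIST:
        return 1.0
    if words & WATCHLIST:
        return 0.6
    return 0.0

def triage(line: str) -> Verdict:
    score = score_line(line)
    if score >= 0.9:
        return Verdict("block", score)   # clearly against policy
    if score >= 0.5:
        return Verdict("review", score)  # ambiguous: queue for a human
    return Verdict("allow", score)       # clearly fine

# A large platform would run something like this over millions of
# lines of chat per day.
for line in ["good game everyone", "I hate this map", "<threat> you"]:
    print(f"{line!r} -> {triage(line)}")
```

The shape of the pipeline is the point: the automated pass handles the overwhelming majority of lines, and only the ambiguous slice is queued for people.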

In other words, large social media platforms have far outgrown their ability to rely solely on human moderation. While this carries risks, as seen in the recent anger of parents who discovered inappropriate content on YouTube Kids, most large-scale platforms find it necessary to use AI to sift through the vast amount of content written or uploaded to a site. And with platforms currently focused on incentivizing more content from users, such as through livestreaming, the struggle to maintain an environment free of offensive content (relative to each platform’s community standards) will only become more pronounced. Users are demanding that platforms battle trolls and inappropriate material even as the volume of content keeps growing.

As AI continues to advance, it raises the question: Why not remove the human moderators entirely?

“If we just allow AI to go rogue, we can miss a lot of the nuanced, human factors,” says Figueiredo. “The machine cannot make total sense of a situation.” Take the famous “Napalm Girl” photo, for example. On its face, analyzed solely by AI, the photo would be flagged as child pornography and banned from every platform. Its iconic nature, however, carries a historical significance that tempers what would otherwise be a knee-jerk response to the image of a nude child. After initially removing the photo last fall, Facebook, the world’s largest social network, had to reverse course and reinstate it.

It is complicated situations like this that showcase the need for AI and humans to work together to reduce online toxicity. Online moderation cuts both ways: platforms need not only to eliminate “bad” content but also to avoid eliminating “good” content. Finding this balance requires careful consideration. According to Figueiredo, AI is needed to make sense of the huge volume of content, while human expertise is used for validation and quality control.

“A good team of humans can make the really hard decisions about the context,” says Figueiredo. “They can make judgments based on context, culture, intent, and connotation.” Understanding the culture and history behind the Napalm Girl photo is what makes it appropriate for Facebook. While AI is good at finding the overtly “good” and the overtly “bad,” humans are still a necessary component in navigating the grey areas of content appropriateness. In addition, AI image recognition is not yet a perfect system, as illustrated by the difficulty AI has in distinguishing a chihuahua from a blueberry muffin. Humans are often needed to clear up the false positives of AI filtering.
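A rough way to picture that safety valve is a routing rule that acts automatically only when the model is very confident and defers to a person everywhere else. The sketch below is hypothetical: the labels, confidence thresholds, and reviewer hook are placeholders for illustration, not any platform’s real system.

```python
# Hypothetical human-in-the-loop routing for image moderation. The
# classifier output, labels, and thresholds are placeholders.

from typing import Callable, NamedTuple

class Prediction(NamedTuple):
    label: str         # e.g. "nudity", "violence", "benign"
    confidence: float  # classifier confidence in [0, 1]

def route(pred: Prediction,
          ask_human: Callable[[Prediction], str]) -> str:
    """Decide an image's fate, escalating uncertain cases to a human."""
    if pred.label == "benign" and pred.confidence >= 0.95:
        return "publish"                 # overtly "good"
    if pred.label != "benign" and pred.confidence >= 0.98:
        return "remove"                  # overtly "bad"
    # Grey area: a chihuahua that looks like a muffin, or a historic
    # photo like "Napalm Girl". The human verdict overrides the model.
    return ask_human(pred)

# Example: a reviewer decides a flagged image is a false positive.
reviewer = lambda pred: "publish"
print(route(Prediction(label="nudity", confidence=0.70), reviewer))  # publish
```

The thresholds are the whole game: set them too loosely and offensive material slips through automatically; set them too tightly and human reviewers drown in volume.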

The content that platforms are worried about, however, is far more offensive than chihuahuas. One major decision a company has to make when tackling online toxicity is how heavily to rely on user reporting and other self-moderating tools, or in other words, how much of the moderation burden to put on users. In the situation facing YouTube Kids, for example, the platform can respond to potentially inappropriate content after being notified by a user, in addition to its human and AI moderation tools. Until a truly foolproof system is created, which may never happen, this kind of help from users remains a necessary component of online moderation.
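One simple way to picture how user reports feed into that machinery, sketched below with invented field names and an invented weighting, is to let report volume push content toward the front of the human review queue.

```python
# Hypothetical sketch: user reports bump content up the human review
# queue. The fields and the priority formula are invented for
# illustration only.

import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ReviewItem:
    priority: float                        # lower value = reviewed sooner
    content_id: str = field(compare=False)

def priority(ai_score: float, user_reports: int) -> float:
    """Blend the AI's toxicity score with the number of user reports."""
    return -(ai_score + 0.1 * user_reports)

queue: list[ReviewItem] = []
heapq.heappush(queue, ReviewItem(priority(0.55, 0), "video-001"))
heapq.heappush(queue, ReviewItem(priority(0.40, 12), "video-002"))

# The heavily reported video is reviewed first, even though the AI
# on its own was less suspicious of it.
print(heapq.heappop(queue).content_id)  # -> video-002
```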

Adding a further layer of complication to reducing online toxicity is the fundamental question of censorship. “If we are attempting to monitor human expression, whether it is language, media, or interaction, in order to create a more positive online experience, we are potentially infringing on most freedom of expression laws,” states Rares Crisan, co-founder and CTO of Toronto-based Logojoy.

This is the challenge with AI and human expression. As a society, our moral compass is ever evolving, usually for the better, but it is driven primarily by the people of that society and what they are willing to tolerate or censor. If we introduce artificial intelligence into the process, we are also introducing another influence on what shapes our society’s moral compass. The danger is that this is one influencer with the ability to shape us without our awareness.

Leaving aside the philosophical debate about just how much AI should be employed, it is important to consider where the AI/human partnership against online toxicity is headed. “There is machine learning and deep learning that is working to guess what would happen in the next two to three seconds,” says Dan Faggella, founder of AI market research firm Tech Emergence. Given the vulnerability of livestreaming, where suicide, public nudity, or violence is always a possibility, this type of prediction could prove invaluable.

Before we reach the point of accurate AI predictions, the short term will likely be geared toward advancing detection. “The low-hanging fruit is basic nudity and violence,” says Faggella. He gives the example of a hot dog. In one context, it is a G-rated staple of summer barbecues. Used slightly differently in a photo, it becomes sexualized content that may offend the community standards of a given platform. It is this kind of contextual novelty that AI currently struggles with, leading to the need for human assistance. But Faggella is bullish on AI’s ability to keep advancing toward nearly human-free moderation, given consistent feedback from moderators and self-reporting tools. Facebook’s increased hiring of human moderators, according to Faggella, is more a reaction to user demand than a sign of the future.
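That feedback loop can be as simple as logging every human verdict as a labeled example for the next round of model training. The sketch below is a hypothetical illustration of the idea, not any platform’s actual pipeline; the file format and labels are invented.

```python
# Hypothetical illustration of the feedback loop: every human verdict
# becomes a labeled example that future models can be retrained on.

import csv
from datetime import datetime, timezone

def log_human_verdict(path: str, content_id: str,
                      ai_label: str, human_label: str) -> None:
    """Append a moderator's decision as training data for the next model."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            content_id,
            ai_label,     # what the model predicted
            human_label,  # what the human decided (ground truth)
        ])

# A disagreement like this one (model said "sexual", human said the
# barbecue photo was benign) is exactly the kind of example that can
# teach the next model about context.
log_human_verdict("moderation_labels.csv", "img-123", "sexual", "benign")
```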

