
When machine learning detects mental illness, ethical questions arise


Instagram knows if you’re depressed. Twitter can indicate PTSD. Facebook posts and tweets describe a region’s relative public health. These are some of the findings of the research conducted by the Computational Story Lab, a group led by Chris Danforth and Peter Sheridan Dodds of the University of Vermont. How do these platforms know so much about you? It’s simple: You’re telling them. “I’m sure the average person doesn’t realize how much they’re revealing when they’re tweeting or posting pictures,” Danforth says.

The advent of social media, as Danforth explains it, brought about an “explosion of data about our behavior that just didn’t exist before people started to communicate more on the Internet and carry around their phones.” He says that the sudden influx of data has given rise to a sea change in behavioral science research, “like astronomy did when telescopes were launched into space,” and in atmospheric sciences “when satellites started to be able to take pictures of the earth.” How best to harness that massive trove of information in the name of human health and happiness, and how to do so responsibly, remains a largely unanswered question for the social media platforms with millions of users’ data at their fingertips. The same is true for online marketplaces, app developers, and other companies, which have both their own data and a vast supply of publicly available tweets, pictures, and posts to mine and use.

This August, Danforth and his Computational Story Lab colleagues released a study indicating that Instagram posts contain indicators of clinical depression, allowing their team to predict the illness in users of the social-media platform—even before depressed individuals realized they were unwell themselves. The Story Lab researchers applied machine learning tools to analyze the Instagram histories of 166 individuals, about half of whom had received a diagnosis of clinical depression within the previous three years. With metadata analysis and a face-detection algorithm, the researchers found that they could indeed predict depression with even greater accuracy than a general practitioner. Depressed Instagram users posted fewer photos of people than their non-depressed counterparts, and perhaps surprisingly, they posted more frequently. They also favored the black-and-white “Inkwell” filter, while the non-depressed users liked “Valencia.”
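
The study’s exact features and model aren’t spelled out here, but the general shape of the approach it describes, turning each user’s posting history into a handful of numeric signals (faces detected per photo, posting frequency, filter choice) and training an off-the-shelf classifier against known diagnoses, can be sketched roughly as follows. This is an illustrative sketch only; the feature set, toy data, and scikit-learn classifier are assumptions, not the Story Lab’s actual pipeline.

```python
# Illustrative sketch only: a simplified stand-in for the kind of pipeline the
# study describes (per-user metadata features + an off-the-shelf classifier).
# The feature names and toy data below are hypothetical, not the study's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Each row summarizes one user's Instagram history:
# [mean faces per photo, posts per week, share of posts with a B&W filter]
X = np.array([
    [1.8, 2.0, 0.05],   # hypothetical non-depressed user
    [2.1, 1.5, 0.02],
    [0.6, 4.5, 0.40],   # hypothetical depressed user
    [0.9, 3.8, 0.35],
])
y = np.array([0, 0, 1, 1])  # 1 = prior depression diagnosis

clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=2)  # tiny toy set; real work needs far more data
print("toy cross-validated accuracy:", scores.mean())
```

Even this toy version captures the two ingredients the study relies on: machine-readable metadata rather than self-report, and a standard supervised classifier fit against known diagnoses.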

Danforth concedes that his sample was a small one and that some of the behaviors of the depressed individuals in the study, including the tendency to post more often than “healthy” Instagram users, could be unique to that small population. His team’s findings, however, fall in line with existing research about social media behavior. Of the studies similar to the Computational Story Lab’s depression analysis, Danforth notes, “They all indicate that there are aspects of our behavior—some of which we’re aware of and some that we aren’t—that reveal our emotional state in ways that we maybe don’t realize we’re revealing.” Danforth says he hopes that insights into individuals’ health and behavior can eventually be captured via an app and compared on a much larger scale, with a pool of millions rather than a sample set of a couple hundred. That information would provide general practitioners with a much fuller picture of an individual’s well-being over time than they’re currently able to receive. “Rather than 10 minutes once a year, your doctor would have a lot more visibility into how you’re doing,” Danforth says.

Lyle Ungar of the University of Pennsylvania stresses the possible advantages of machine learning tools for improving human quality of life, not only as diagnostic devices, but also as potential modes of intervention. As he notes, however, the sorts of tools he and Danforth imagine medical practitioners using in the future require careful handling of highly personal data and a great deal of patient education. To this point, Ungar says, “people haven’t mostly opted in to being monitored for depression or suicidality.” Apps that would allow doctors and researchers to compare a single individual to thousands or millions of other users around the world might be the telescopes that allow us to see new worlds in psychology and behavioral science, but they would also create pools of sensitive data that could be abused, hacked, or sold for profit. Even existing platforms like Google and Facebook, which already purport to assist people in need of support, are still working to connect people with valid resources for help, Ungar says: “We don’t know yet how to help people in a way that doesn’t violate too much privacy.”

At the World Well-Being Project, an initiative of UPenn’s Positive Psychology Center, Ungar and his colleagues study social media and its ability to shed light on mental and physical well-being; the initiative has a particular focus on language, which Ungar says provides insights that can make interventions more effective. “We’re looking at people in opioid treatment clinics, and I’d love to know what it is that leads some people to be successful at recovery and other people to relapse and go back to using,” Ungar explains. “And looking at the images they post may say something, but I think that [language is] the best insight into what it was about your week that caused you to start using again.” He, too, envisions an app that would allow a healthcare professional to get a fuller picture of an individual’s life outside the office through social media analysis. He also suggests that tools used for the purposes of intervention could enable peer-to-peer counseling, summarize conversations, and deliver information to patients in efficient, sensitive ways.

The possibilities as Ungar describes them are vast, if still years in the future. “Can we develop tools that help train people—front-line service workers, nurses, therapists, doctors—so that they, in fact, do communicate better, or so they don’t get burnt out?” he asks. “Can we help people monitor for burnout? Can we help organizations monitor which teams are over-stressed or under-stressed?” And crucially, he adds, “How do we do these in ways that respect privacy and in a reasonable way?”

Danforth echoes similar concerns, and he is quick to note that the sort of sensitive data analyzed in the Instagram study can also be used to other ends, such as to capitalize on particular emotional states to market products when customers are their most vulnerable. “There’s continued evidence that machines are going to be better and better at taking these seemingly disparate and seemingly unimportant pieces of data and building predictions about who we are, what we like, what we want to buy, which drug might work for us, what our future tastes are going to be like,” Danforth says.

Major social-media platforms are already performing their own monitoring for at-risk behavior. Last fall, Instagram rolled out a new support feature for users whose posts include hashtags that the platform has associated with self-harm, as well as a place for users to anonymously report others’ worrisome posts. A suite of machine-learning tools unveiled in March helps Facebook detect suicidal behavior among its two billion active users. And the data that users of these sites have willingly surrendered to the web has created opportunities for third parties to seize upon. In late 2014, UK suicide-prevention charity Samaritans launched an app that alerted Twitter users when the people they followed used language indicating they might be at risk for self-harm. Samaritans pulled the app just a week later, following complaints that it violated individuals’ privacy. “These companies want to help—both the for-profits and the not-for-profits—but it’s a super-sensitive line as to how intrusive can you be,” Ungar says.
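
The article doesn’t detail how Instagram’s support feature works internally, but the basic pattern it describes, checking a post’s hashtags against a curated watchlist and surfacing support resources on a match, can be sketched roughly as follows. The watchlist entries, function name, and message text here are hypothetical placeholders.

```python
# Rough illustration of hashtag-based flagging; not Instagram's actual system.
# The watchlist entries and message text are hypothetical placeholders.
from typing import Optional, Set

# In practice such a list would be curated with clinicians and moderators.
SELF_HARM_WATCHLIST: Set[str] = {"#example-flagged-tag"}

def support_prompt_for(post_hashtags: Set[str]) -> Optional[str]:
    """Return a support message if any hashtag matches the watchlist, else None."""
    if post_hashtags & SELF_HARM_WATCHLIST:
        return ("It looks like you may be going through a difficult time. "
                "If you need support, help is available.")
    return None

# Example: a post tagged with a watchlisted hashtag triggers the prompt.
print(support_prompt_for({"#sunset", "#example-flagged-tag"}))
```

The Samaritans episode shows why even a pattern this simple is fraught: the same matching logic, pointed at other people’s posts rather than one’s own, starts to look like surveillance.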

Still, the rewards of finding precisely that line could be vast, once health resources are as close at hand as the nearest mobile device. After all, Danforth notes with a laugh, “We’re carrying these things around with us all the time—and they’re not necessarily good for us.” Why not make lemonade?

