Co-directors of Science for Social Good, Kush Varshney (left) and Aleksandra (Saška) Mojsilović, at the IBM Thomas J. Watson Research Center in Yorktown Heights, New York.
In 2016, we launched the Science for Social Good initiative at IBM Research as a way of addressing social and humanitarian challenges with data and artificial intelligence (AI). A collaboration between our top scientists and engineers, social change organizations, and fellows from universities, our program seeks to incubate novel solutions to some of the most pressing issues facing humanity.
Since then, we have executed 28 projects, from understanding disease epidemics, to creating antimicrobial peptides, to modeling hate speech, to developing cognitive counselors that guide people out of poverty. We’ve done so by relying on more than 110 of our researchers who have volunteered their unique skills, expertise and passions to these projects. We’ve also contributed 47 scientific papers and awarded 36 Social Good student fellowships. It has been quite a journey.
We are pleased to announce today the fourth, 2019 edition of our program. This year’s projects include:
Prescribing Guidelines for Addressing the Opioid Epidemic (Partner: IBM Watson Health): Opioid abuse is among the deadliest population health crises in the United States. In most cases, addiction begins with a prescription. Understanding the patterns of addiction, learning evidence-based guidelines for responsible prescribing and creating early warning systems are instrumental in battling the epidemic. The team will couple advanced machine learning and causal inference methods with the wealth of IBM Watson Health data to develop insights and make them available to providers, payers and public health officials.
Causal Pathways Out of Poverty (Partner: CityLink Center): The team will aim to model paths out of poverty for clients of CityLink Center, a non-profit integrated social service provider in Cincinnati, Ohio. In particular, the team will explore modeling social services such as one-on-one counseling sessions and group classes as timestamped events. Using a unique longitudinal dataset, the team will develop a causal model of events and outcomes such as employment, wellness, education and housing that reveals potential transition probabilities and expected times to transition – measures that are meaningful to CityLink Center for operational planning and for providing insights to counselors.
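To make the quantities of interest concrete, here is a minimal sketch (not the team's actual causal model) of estimating transition probabilities and expected times to transition from timestamped event sequences; the event names and histories below are illustrative assumptions, not CityLink data.

```python
# Sketch: empirical transition probabilities and mean waiting times
# from timestamped event histories. States and data are hypothetical.
from collections import defaultdict

# Hypothetical client histories: lists of (timestamp_in_days, state).
histories = [
    [(0, "intake"), (30, "counseling"), (90, "employment")],
    [(0, "intake"), (45, "counseling"), (60, "housing")],
    [(0, "intake"), (20, "counseling"), (80, "employment")],
]

counts = defaultdict(lambda: defaultdict(int))   # transition counts
waits = defaultdict(lambda: defaultdict(list))   # days between events
for history in histories:
    for (t0, s0), (t1, s1) in zip(history, history[1:]):
        counts[s0][s1] += 1
        waits[s0][s1].append(t1 - t0)

for s0, nexts in counts.items():
    total = sum(nexts.values())
    for s1, n in nexts.items():
        prob = n / total
        mean_wait = sum(waits[s0][s1]) / n
        print(f"{s0} -> {s1}: p={prob:.2f}, mean days={mean_wait:.1f}")
```

A causal analysis would go further, adjusting for confounding between services received and outcomes; this sketch shows only the descriptive transition measures the project description mentions.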
Repurposing Drugs for Cancer Treatment (Partner: Cures Within Reach for Cancer): A large body of evidence suggests that hundreds of off-patent drugs well known for treating non-cancer indications could also be useful for treating cancer. The team will take a systematic approach to find and evaluate all of the evidence on these non-cancer generic drugs. Using natural language processing techniques, the team will analyze scientific literature to uncover the preclinical and early clinical research on these drugs being tested as cancer treatments. Since there are thousands of relevant publications, with this number continuously growing, the team will go beyond just unearthing the publications. They will also develop models to automatically capture key information about each paper, such as the type of cancer studied, the type of study conducted, and the nature of the evidence reported. The eventual goal is to identify the most promising drug repurposing candidates that can then be tested in large-scale randomized controlled clinical trials.
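As a toy illustration of the kind of structured fields such models would capture, here is a minimal keyword-based sketch (not the team's actual NLP pipeline); the vocabularies and example abstract are illustrative assumptions.

```python
# Sketch: rule-based extraction of cancer type and study type from an
# abstract. A real system would use learned NLP models instead.
CANCER_TYPES = ["melanoma", "breast cancer", "glioblastoma", "leukemia"]
STUDY_TYPES = {
    "randomized controlled trial": "clinical",
    "phase i": "clinical",
    "mouse model": "preclinical",
    "in vitro": "preclinical",
}

def extract_fields(abstract: str) -> dict:
    text = abstract.lower()
    cancers = [c for c in CANCER_TYPES if c in text]
    studies = sorted({label for kw, label in STUDY_TYPES.items() if kw in text})
    return {"cancer_types": cancers, "study_types": studies}

example = ("Metformin suppressed tumor growth in a mouse model "
           "of melanoma, motivating a phase I study.")
print(extract_fields(example))
```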
Fairness in AI-based skin cancer diagnosis: Lighter-skinned individuals have the highest risk of developing skin cancer, but the mortality rate for African-Americans in the United States is much higher, primarily due to misdiagnosis. In a recent international melanoma detection challenge, machine-learning methods achieved superhuman performance in melanoma detection, and it is important that past disparities not be propagated in learned models. The team will develop new methods for making AI-based skin cancer diagnosis models relevant for all populations of the world.
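One concrete disparity a diagnosis model can be audited for is a gap in false-negative rates (missed melanomas) across skin-tone groups. The sketch below, with entirely synthetic labels and predictions, shows the check; it is an illustration, not the team's method.

```python
# Sketch: per-group false-negative rates for a binary melanoma
# classifier. Labels, predictions, and group names are synthetic.
def false_negative_rate(y_true, y_pred):
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(1 for t, p in positives if p == 0) / len(positives)

# 1 = melanoma present; tuples are (true labels, model predictions).
groups = {
    "lighter": ([1, 1, 1, 0, 0, 1], [1, 1, 0, 0, 0, 1]),
    "darker":  ([1, 1, 1, 0, 1, 0], [0, 1, 0, 0, 0, 0]),
}
rates = {g: false_negative_rate(y, p) for g, (y, p) in groups.items()}
gap = rates["darker"] - rates["lighter"]
print(rates, f"FNR gap: {gap:.2f}")
```

A large gap like the one in this synthetic example is exactly the kind of disparity that fairness-aware training methods aim to reduce.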
Trustworthy AI Pentathlon: It is well known that machine-learning models can achieve high accuracy on various tasks, but accuracy alone is not a strong enough criterion to earn users’ trust, especially for high-stakes decision making. Several other criteria are also important, including explainability, fairness, robustness to dataset shift, and robustness to adversarial examples. The team will aim to develop benchmarking datasets, baseline models, and a contest for machine-learning researchers to evaluate their models on all five aforementioned criteria. The project may utilize the Python open-source Adversarial Robustness Toolbox and AI Fairness 360 toolkit.
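To give a feel for one of the five criteria, here is a from-scratch sketch of a fast-gradient-sign perturbation against a hand-set logistic model; the weights, input, and budget are illustrative assumptions, and libraries such as the Adversarial Robustness Toolbox provide production-grade versions of such attacks.

```python
# Sketch: fast-gradient-sign perturbation of a linear logistic model.
# All numbers are illustrative; this is not the ART implementation.
import numpy as np

w = np.array([2.0, -1.0])   # assumed model weights
b = 0.0
x = np.array([0.5, 0.2])    # input the model classifies as positive
eps = 0.3                   # perturbation budget (L-infinity)

def prob(x):
    """Positive-class probability of the logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# For a linear model, the gradient of the score w.r.t. x is just w;
# stepping against its sign pushes the score toward the other class.
x_adv = x - eps * np.sign(w)

print(f"clean p={prob(x):.2f}, adversarial p={prob(x_adv):.2f}")
```

Even this tiny perturbation flips the prediction, which is why adversarial robustness is evaluated alongside accuracy rather than assumed from it.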
Learn more about Science for Social Good and where we’re headed here.
Two years in, and the MIT-IBM Watson AI Lab is now engaging with leading companies to advance AI research. Today, the Lab announced its new Membership Program with Boston Scientific, Nexplore, Refinitiv and Samsung as the first companies to join.
IBM researchers published the first major release of the Adversarial Robustness 360 Toolbox (ART). Initially released in April 2018, ART is an open-source library for adversarial machine learning that provides researchers and developers with state-of-the-art tools to defend and verify AI models against adversarial attacks. ART addresses growing concerns about people’s trust in AI, specifically the security of AI in mission-critical applications.
What is the minimal description that captures a space? Asking a mathematician’s basic question of a biological dataset reveals interesting answers about biology itself. This summarizes our underlying approach to subtyping hematological cancer. Disease subtyping is a central tenet of precision medicine, and is the challenging task of identifying and classifying patients with similar presentations […]