Cognitive Computing for better decision making

Editor’s note: This article is by Léa Deleris, manager of Risk Management at IBM Research-Ireland. Additional contributions made by Charles Jochim, research staff member at IBM Research-Ireland.
Alice: “Would you tell me, please, which way I ought to go from here?”
Cheshire Cat: “That depends a good deal on where you want to get to.”
Alice: “I don’t much care where—”
Cheshire Cat: “Then it doesn’t matter which way you go.”
Alice: “So long as I get somewhere.”
Cheshire Cat: “Oh, you’re sure to do that, if you only walk long enough.”

This dilemma, and Stanford decision analysis professor Ronald Howard, who presented this Alice in Wonderland concept of indifference in a lecture, inspired me to study how mathematics can be applied to the risk, uncertainty, and personal preferences that influence the decisions we make every day, about everything. What Alice and the Cheshire Cat so eloquently illustrate is that preferences are not as obvious as they may seem. As a PhD student sitting among my classmates, I wanted to know: could natural language processing and cognitive computing power web applications that, in turn, help us make more logical decisions?

Fast forward to today: I have gone from student to IBM research scientist, and I’m applying artificial intelligence to our human intelligence, exploring how we can debate with machines to help us make good decisions when facing uncertainty. Here’s how I explained it to a TEDxParis audience in February.

Now my team in Dublin is using machines to help medical professionals make more rational decisions. The tool we developed, called MedicalRecap, extracts information from PubMed’s 24 million online citations to create a risk model for doctors.

MedicalRecap’s semantic module allows doctors to cluster the extracted terms (variables of the risk model) by grouping similar or related terms into concepts. It also has an aggregation module, which allows the user to combine the extracted dependence and probability statements into a dependence graph, also known as a Bayesian network. 
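To make the two modules concrete, here is a minimal sketch of the idea, not MedicalRecap’s actual code: the terms, the similarity rule (grouping by shared head word), and the dependence statements are all invented for illustration. The semantic step groups surface variants of a term into one concept; the aggregation step counts how many extracted statements support each directed edge of the dependence graph.

```python
from collections import defaultdict

# Hypothetical extracted terms; in the real pipeline these would come
# from PubMed abstracts.
extracted_terms = ["coffee intake", "coffee consumption", "tea drinking",
                   "tea consumption", "endometrial cancer"]

def cluster_terms(terms):
    """Toy semantic module: group terms sharing the same first word."""
    clusters = defaultdict(list)
    for term in terms:
        head = term.split()[0]  # crude similarity criterion, for illustration
        clusters[head].append(term)
    return dict(clusters)

concepts = cluster_terms(extracted_terms)
# e.g. 'coffee' -> ['coffee intake', 'coffee consumption']

# Toy aggregation module: merge extracted dependence statements into the
# structure of a directed dependence graph, keyed by concept. Repeated
# statements stand in for corroborating papers.
statements = [("coffee", "endometrial"), ("tea", "endometrial"),
              ("coffee", "endometrial")]

graph = defaultdict(int)
for parent, child in statements:
    graph[(parent, child)] += 1  # count of statements supporting each edge
```

In a real Bayesian network each edge would also carry conditional probability estimates aggregated from the extracted probability statements; the edge counts here only hint at that step.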

Imagine a doctor who needs to understand the role of tea and coffee consumption in the incidence of endometrial cancer. Today, doctors address this task manually: searching for relevant papers, reading them, taking notes (by hand or by copy-pasting into a spreadsheet), and aggregating the data.

MedicalRecap, instead, presents the extracted and aggregated data in an intuitive graphical format, and lets the user trace back from the summarised information to the original input. The tool also allows users to edit the output of the algorithms when they encounter an error; these corrections are fed back into the system to improve its knowledge and performance over time.
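The traceback-and-feedback loop can be sketched as follows. This is an illustrative data model, not MedicalRecap’s actual one: the edge, relation labels, and source identifiers (“paper-1”, “paper-2”) are all placeholders. Each aggregated edge keeps the citations it was extracted from, and user edits are logged so they can later serve as training signal.

```python
# Hypothetical provenance store: each aggregated edge records which source
# papers it came from, so the user can trace back to the original input.
edges = {
    ("coffee consumption", "endometrial cancer"): {
        "sources": ["paper-1", "paper-2"],  # placeholder citation IDs
        "relation": "decreases risk",
    }
}

def trace_back(edge):
    """Return the source citations behind an aggregated edge."""
    return edges[edge]["sources"]

def record_correction(edge, corrected_relation, corrections_log):
    """Apply a user edit and log it; logged edits feed later retraining."""
    edges[edge]["relation"] = corrected_relation
    corrections_log.append((edge, corrected_relation))

log = []
record_correction(("coffee consumption", "endometrial cancer"),
                  "no significant effect", log)
```

Keeping corrections as an explicit log, rather than silently overwriting the model, is what lets the system learn from its users over time.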

MedicalRecap also relies on the doctor’s expertise, so ideally, errors are reduced by combining the doctor’s knowledge with the inferences the tool makes in finding dependency relationships. The assumption is that the doctor does not have all the input required, but is exploring the space. The tool helps the practitioner look for answers, but does not provide them. Ideally, it will reach the same conclusions the doctor has already reached, so that he or she will trust the system more.

We also want MedicalRecap to provide evidence for new conclusions to be drawn. For example, if the doctor sees that coffee consumption is linked to some cancers, which she already knew, the tool could show that in fact this is primarily for certain populations, which she didn’t know. 

As similar as it may sound, MedicalRecap is different from IBM Watson Health. Our tool is a web-based GUI focused only on published medical literature and is not designed for personalized medicine, but instead to make more global inferences between diseases and related risk factors. But like Watson, MedicalRecap’s Extractor, Clusterer, and Aggregator services are available on IBM’s SoftLayer cloud infrastructure as a service.

Our risk information extraction models, like in MedicalRecap, can be applied to other domains. In the future, oil and gas experts could use the tool to extract information from academic papers about factors influencing reservoir capacity and shape. As long as we have the main ingredient of a large body of literature related to a profession or domain, our decision support system tool might even be able to offer Alice somewhere to go, no matter how unsure she is. 

