
Using AI and science to predict heart failure

Heart disease has been the leading cause of death in the United States for decades, so it’s no surprise that rates of heart failure, a specific type of heart disease in which the heart becomes too weak to pump blood throughout the body, are on the rise. In fact, the number of American adults with heart failure is expected to increase by 46 percent by 2030, to eight million people, and about half of those who have heart failure die within five years of diagnosis.

Heart failure is very hard to detect early, but with the help of a National Institutes of Health (NIH) grant, a team of scientists at IBM Research partnered with scientists from Sutter Health and clinical experts from Geisinger Health System to study and predict heart failure based on hidden clues in Electronic Health Records (EHRs). Over the last three years, using the latest advances in artificial intelligence (AI) such as natural language processing, machine learning and big data analytics, the team trained models to identify heart failure one to two years earlier than a typical diagnosis today. The research uncovered important insights about the practical tradeoffs and the types of data needed to train models, and produced new application methods that could allow future models to be more easily adopted.

Illustration of a normal heart and a weakened heart that is characteristic of heart failure.
© American Heart Association, Inc.

Today, doctors will typically document signs and symptoms of heart failure in the patient record and order diagnostic tests for the condition. Despite these best efforts, a patient is usually diagnosed with heart failure only after an acute event that leads to hospitalization, by which point the disease has advanced and caused irreversible, progressive organ damage.

Our team focused on investigating whether we could use the data contained in EHR systems to detect and predict a patient’s risk of heart failure one or more years before a typical clinical diagnosis.

To analyze the patient data in the project, we developed and applied several cognitive computing and AI technologies, including natural language processing and machine learning methods that supported our aims.

During the course of the project we worked towards a number of goals and landed on several unexpected findings, including:

  1. One project aim was understanding how useful the Framingham Heart Failure Signs and Symptoms (FHFSS) — traditional risk factors clinicians commonly use to diagnose heart failure — were for early detection. We used natural language processing techniques to extract information from unstructured data such as physician notes, parsing the text and identifying concepts, which could include Framingham risk criteria or other types of symptoms (a toy sketch of this extraction step appears after this list). Interestingly, our findings showed that only six of the 28 original FHFSS signs and symptoms were consistently found to be reliable predictors of a future diagnosis of heart failure.
  2. A second aim was to determine if we could more accurately predict heart failure by combining unstructured data from doctors’ notes with structured EHR data. To do this, we applied machine learning methods to build predictive models that took into account a mix of variables. Our findings showed that other data types routinely collected in EHRs, such as disease diagnoses, medication prescriptions and lab tests, were more helpful predictors of a patient’s onset of heart failure when combined with the FHFSS.

    Figure showing the heart failure prediction research set-up, which resulted in models that identified heart failure one to two years earlier than can be done today. Using longitudinal EHR data, various structured and unstructured data types were extracted and analyzed during the observation window; the index date represents the earliest date the prediction is made, and the prediction window is the period of time before diagnosis within which the team’s models were able to make the prediction.
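
To make the first finding concrete, here is a toy sketch of how FHFSS-style concepts might be pulled out of free-text notes. The project itself used full clinical NLP pipelines; the term dictionary, negation cues, and clause-based matching below are simplified stand-ins for illustration only.

```python
import re

# A few illustrative FHFSS criteria mapped to concept labels
# (hypothetical names, not the study's actual vocabulary).
FHFSS_TERMS = {
    "paroxysmal nocturnal dyspnea": "PND",
    "ankle edema": "ANKLE_EDEMA",
    "rales": "RALES",
    "night cough": "NIGHT_COUGH",
}
NEGATION_CUES = ("no ", "denies ", "without ")

def extract_concepts(note):
    """Return FHFSS concepts mentioned in a note, skipping ones that
    appear in a negated clause (a deliberately naive heuristic)."""
    found = set()
    text = note.lower()
    for term, concept in FHFSS_TERMS.items():
        for match in re.finditer(re.escape(term), text):
            # Look only at the clause the term appears in.
            clause = re.split(r"[.;,]", text[:match.start()])[-1]
            if not any(cue in clause for cue in NEGATION_CUES):
                found.add(concept)
    return found

print(extract_concepts("Patient denies night cough; rales noted at both bases."))
# -> {'RALES'}
```

A production system would add sentence segmentation, concept normalization against a clinical vocabulary, and far more robust negation and uncertainty detection.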
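
The windowed set-up in the figure can likewise be sketched in code. This is a minimal, hypothetical example that assumes patient records arrive as simple (patient, date, code) event lists; the window lengths and field layout are illustrative choices, not the study’s actual parameters.

```python
from datetime import date, timedelta

# Hypothetical longitudinal EHR events: (patient_id, event_date, code).
events = [
    ("p1", date(2012, 3, 1), "I10"),          # hypertension diagnosis
    ("p1", date(2012, 9, 15), "furosemide"),  # medication order
    ("p1", date(2014, 6, 1), "HF"),           # heart failure diagnosis
]

OBSERVATION = timedelta(days=2 * 365)  # look-back window for features
PREDICTION = timedelta(days=365)       # lead time before diagnosis

def build_example(patient_events, diagnosis_date=None):
    """Gather features from the observation window ending at the index
    date; the label records heart failure in the prediction window."""
    if diagnosis_date is not None:
        index_date = diagnosis_date - PREDICTION  # predict one year early
        label = 1
    else:
        index_date = max(d for _, d, _ in patient_events)  # last visit
        label = 0
    window_start = index_date - OBSERVATION
    features = sorted({code for _, d, code in patient_events
                       if window_start <= d <= index_date})
    return features, label

print(build_example(events, diagnosis_date=date(2014, 6, 1)))
# -> (['I10', 'furosemide'], 1)
```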

Our research also led us to a deeper understanding of the tradeoffs between certain data types and their usefulness in helping detect an individual’s likelihood of heart failure. For example, we found that the model’s performance improved as more diverse data types were used, with the combination of diagnosis, medication order, and hospitalization data proving most important. We leveraged knowledge-driven ontologies of medications and diagnoses to summarize variables into higher-level concepts, and developed data-driven methods to identify and select the most salient variables, producing a smaller and more robust subset. This led us to develop predictive models that were high in both performance and practicality.
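
As a sketch of this roll-up-and-select pattern, the fragment below maps raw codes to higher-level ontology concepts and then uses L1-regularized logistic regression to keep only the salient ones. The toy ontology, feature names, and synthetic labels are assumptions made for illustration, and the L1 penalty stands in for the project’s own data-driven selection methods.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical knowledge-driven roll-up: raw code -> higher-level concept.
ONTOLOGY = {
    "furosemide": "LOOP_DIURETIC",
    "bumetanide": "LOOP_DIURETIC",
    "I10": "HYPERTENSION",
    "I11.9": "HYPERTENSION",
    "E11": "DIABETES",
}
CONCEPTS = sorted(set(ONTOLOGY.values()))

def to_vector(codes):
    """Binary indicator vector over the rolled-up concepts."""
    present = {ONTOLOGY[c] for c in codes if c in ONTOLOGY}
    return [1.0 if c in present else 0.0 for c in CONCEPTS]

print(to_vector(["furosemide", "E11"]))  # -> [1.0, 0.0, 1.0]

# Synthetic data standing in for real patient feature vectors: one
# concept is made genuinely predictive, the rest are noise.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, len(CONCEPTS))).astype(float)
signal = X[:, CONCEPTS.index("LOOP_DIURETIC")]
y = (signal + rng.normal(0, 0.5, 200) > 0.5).astype(int)

# The L1 penalty drives uninformative coefficients to exactly zero,
# yielding the smaller, more practical variable subset discussed above.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)
kept = [c for c, w in zip(CONCEPTS, model.coef_[0]) if abs(w) > 1e-6]
print("salient variables:", kept)  # typically just ['LOOP_DIURETIC']
```

The same vectorization step is where note-derived FHFSS concepts and structured variables would be concatenated into a single feature vector, as in the second finding above.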

From a clinical point of view this is critical: a model could draw on more than 1,000 patient factors, but no healthcare professional will want to adopt a tool that requires such an extensive number of variables to be input. These findings suggest possible guidelines for the minimum amount and type of data needed to train effective predictive disease models. This and other practical implications of the research were documented in a paper (“Early Detection of Heart Failure Using Electronic Health Records”) and an editorial (“Learning About Machine Learning: The Promise and Pitfalls of Big Data and the Electronic Health Record”) in Circulation: Cardiovascular Quality and Outcomes in November last year.

All three parties will continue to collaborate to build on these findings, and what’s exciting about this work is its potential application to other diseases. The confluence of the availability of big data and advances in cognitive computing promises dramatic advances in clinical diagnosis and earlier disease detection.


Kenney Ng, Manager, Health Analytics, Center for Computational Health, IBM Research
