There are some puzzles in life that can just eat at you, always in the back of your mind. It’s a place where intuition seems to point to an answer, but in the end falls short.
For me as a neonatologist—caring for newborns—that puzzle has been how to better detect when premature infants are at risk of sepsis, an infection of the bloodstream. Sepsis stalks those whose immature or compromised immune systems are ill-equipped to fight it off. For premature babies, timely treatment with antibiotics can be the difference between life and death: without it, up to a fifth of babies born weighing less than 1500g (3.31lbs) die of sepsis complications.
A quest for early detection of sepsis in newborns
Working in neonatal intensive care units (NICUs) in Australia and Belgium over the past decade, I had long taken note of trends and patterns linking vital signs to complications of preterm birth. Along the way, I wrote research papers to document these insights and relationships. But I found that this work had little to no impact on what happened at the bedside for my patients and their parents. That, at root, was what drove and frustrated me most: I wanted not only to know what the sepsis risk factors are, but also a way to predict sepsis and to make that prediction operational.
Serendipity followed. I brought up my frustration to a friend on one of our weekly bike rides, an IBM executive with expertise in AI and predictive analytics, and it became a regular topic. As we went deeper into the subject, bringing our respective domain knowledge into the mix, we knew we could take it to the next level. In fact, these productive discussions proved a critical stage in the development of a research project, known as Innocens (Improving Neonatal Outcome with a Clinical Early Notification System), that began at the NICU of the University Hospital of Antwerp and led to our partnership with data scientists at IBM BNL in Amsterdam and developers at the IBM Watson Center in Munich.
Machine learning training is critical to “explainable AI”
We wanted to use AI and edge computing to build a predictive tool. Roughly ten years of admissions data on very low birth weight infants gave us a strong starting point. In developing the solution, we recognized that a special kind of accuracy was necessary for a predictive system that could be woven into our clinical operations. Specifically, in training the machine learning models used for prediction, we needed to thread the needle: accurately detecting subtle patterns in premature infants’ vital signs while minimizing false alarms.
Indeed, for what we are doing, there’s nothing more important than building trust.
We followed a supervised learning approach, employing multiple cross-validation steps, all performed using IBM Watson Studio. We also took advantage of the “explainable AI” capabilities built into IBM Cloud Pak for Data, the data platform used for the modeling, which runs on the Red Hat OpenShift Container Platform. By helping the users of the output to better comprehend what the models are telling them, and why, this approach provides the foundation for caregivers to trust what they’re seeing.
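To make the training idea concrete, here is a minimal, purely illustrative sketch of supervised learning with cross-validation, tuning an alarm threshold so that sensitivity is maximized under a fixed false-alarm budget. Everything in it is hypothetical: the synthetic data, the single heart-rate-variability feature, the thresholds, and the 5% budget are stand-ins, not the Innocens models or data.

```python
import random

# Synthetic stand-in data: (heart-rate-variability reading, sepsis label).
# The real vital-sign features and labels are, of course, not public.
rng = random.Random(42)
data = []
for _ in range(300):
    septic = rng.random() < 0.1
    # Hypothetical signal: reduced heart-rate variability before sepsis onset.
    hrv = rng.gauss(10.0 if septic else 20.0, 4.0)
    data.append((hrv, septic))

def evaluate(records, threshold):
    """Sensitivity and false-alarm rate for 'flag if HRV below threshold'."""
    tp = sum(1 for hrv, s in records if s and hrv < threshold)
    fp = sum(1 for hrv, s in records if not s and hrv < threshold)
    pos = sum(1 for _, s in records if s)
    neg = len(records) - pos
    return tp / max(pos, 1), fp / max(neg, 1)

# Manual 5-fold cross-validation: pick the alarm threshold on each
# training split, then measure performance on the held-out fold.
k = 5
idx = list(range(len(data)))
rng.shuffle(idx)
folds = [idx[i::k] for i in range(k)]

fold_results = []
for i in range(k):
    test = [data[j] for j in folds[i]]
    train = [data[j] for f in range(k) if f != i for j in folds[f]]
    # Choose the most sensitive threshold whose training false-alarm
    # rate stays under a fixed budget (5% here, purely illustrative).
    best_t, best_sens = None, -1.0
    for t in (c / 2 for c in range(10, 60)):
        sens, far = evaluate(train, t)
        if far <= 0.05 and sens > best_sens:
            best_t, best_sens = t, sens
    fold_results.append(evaluate(test, best_t))

for sens, far in fold_results:
    print(f"held-out sensitivity {sens:.2f}, false-alarm rate {far:.2f}")
```

The point of the held-out folds is exactly the trade-off described above: a threshold that looks sensitive on the training data only earns trust if it keeps its false-alarm rate low on data it has never seen.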
Put simply, if you can’t trust what AI is telling users, you can’t operationalize it.
Faster insights lead to earlier clinical intervention
So what constitutes success for our predictive models? To us, it’s helping our human caregivers make better and more timely decisions that improve outcomes for preterm infants in a neonatal intensive care unit. It means accurately detecting that small sub-segment of those infants at risk of sepsis complications, and early enough to intervene successfully.
Based on our retrospective dataset, the edge computing solution we developed, which uses real-time data from medical sensors and runs on IBM Cloud, reduces the time required to identify at-risk infants by up to several hours. To protect patient data privacy, the models run locally within hospital firewalls for prediction and visualization, and are fine-tuned and retrained in the cloud. And because the system detects severe sepsis with 75% accuracy while generating less than one false alarm per week, it keeps doctors focused where it matters most.
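In spirit, the local prediction loop could look something like the sketch below: score each incoming sensor reading on the bedside machine and raise an alert only when the risk stays high for several consecutive readings, one common way to trade a little detection latency for fewer false alarms. The scoring rule, feature names, thresholds, and persistence window here are all hypothetical, not the Innocens implementation.

```python
from collections import deque

def risk_score(vitals):
    # Hypothetical stand-in for a trained model's output in [0, 1].
    low_hrv = vitals["hrv"] < 12.0
    tachycardic = vitals["heart_rate"] > 170
    return 0.9 if (low_hrv and tachycardic) else 0.1

class SepsisAlerter:
    """Alert only when the risk score stays high for `persistence`
    consecutive readings, suppressing one-off spikes."""

    def __init__(self, threshold=0.8, persistence=3):
        self.threshold = threshold
        self.recent = deque(maxlen=persistence)

    def update(self, vitals):
        self.recent.append(risk_score(vitals) >= self.threshold)
        return len(self.recent) == self.recent.maxlen and all(self.recent)

alerter = SepsisAlerter()
normal = {"hrv": 22.0, "heart_rate": 150}
worrying = {"hrv": 9.0, "heart_rate": 178}

# One normal reading, then three consecutive high-risk readings:
# the alert fires only on the third high-risk reading in a row.
alerts = [alerter.update(normal)] + [alerter.update(worrying) for _ in range(3)]
# → [False, False, False, True]
```

Nothing in this loop needs to leave the hospital network; only model updates would cross the firewall, which matches the split between local prediction and cloud retraining described above.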
The other facet of improved decision making is the ability to tailor treatments based on insights. For a 1500g neonate, there’s virtually no margin for therapy initiated based on inaccurate assumptions. Because our model provides a continuous, explainable, and data-driven basis for care, we’ve augmented the intelligence of bedside healthcare workers and potentially reduced the risk of unintended harm.
We see our work on early sepsis detection and treatment as the beginning of a longer journey toward using AI to improve newborn outcomes. Beyond deploying the solution in other NICUs and hospital systems, we envision following the same model-driven approach to detect other complications of prematurity, such as brain injury, chronic lung disease, or retinopathy, at an earlier stage. Here again, the root of it is trust. In IBM, we have a trusted partner that can deliver on all aspects of trustworthy AI.
To learn more about the Innocens Project, view Dr. Van Laere’s IBM Health Forum session replay “Every hour counts: Catching Sepsis early in NICU Infants.”