Trustworthy AI helps provide equitable preventative care for diabetics

A North American healthcare organization is re-imagining better care for their members

4 minute read | October 6, 2022


Over 30 million people in America have diabetes, and they need to remain vigilant about their health. They need the extra attention and resources provided by their healthcare systems because, unfortunately, around 38% to 40% of people with diabetes end up visiting the ER due to complications. Healthcare organizations – both providers and payers – across the nation are seeking transformative new ways to render quick aid to vulnerable members. For many, part of the solution is trustworthy AI.

A large North American healthcare organization uses an AI-powered solution to help them identify vulnerable members who can benefit from timely intervention. The organization had established a community health program, with units ready to reach out to member communities to promote better health and improve health outcomes. They needed a system to identify the people who most needed the help. If they could deliver preventative care, they could also reduce member trips to the ER and help members enjoy a better quality of life, while reducing costs for the company and optimizing hospital staff and equipment.

Healthcare organization uses technology to identify members for proactive care

The goal was to predict member groups that were at risk 30–60 days before hospitalization would be necessary, to give community health units time to intervene. In addition, they needed demographic data to ensure appropriate care for the community in need. For example, if a non-Spanish-speaking unit member reaches out to a primarily Spanish-speaking community, the odds of successful intervention will be lower. The healthcare company understood that it’s not enough to build an accurate machine learning model; they needed to connect it to the human experience.

To accomplish this, the organization brought together various data sets, analyzed and combined them, and then built predictive Machine Learning (ML) models to identify their most at-risk members. It is at this step in the journey to AI that many businesses run into trouble with their AI initiatives. And with good reason: not all AI systems are created with the proper ethical guardrails in place. Organizations need to be able to trust their data science outcomes. An AI system, especially one with an impact on health, must be fair, explainable, robust and transparent. The AI must be trustworthy.

A lot can go wrong when an organization decides to operationalize AI, and avoiding undue risk is a significant part of the process. To mitigate that potential risk, many business leaders are finding success using proven data fabric architecture patterns and adapting those patterns to their specific organizational processes.

What healthcare is doing with data fabric and AI to mitigate risks

A data fabric architecture provides visibility and insights into data, enhanced access, control over your data and advanced protection and security. Here’s how that North American healthcare company achieved its goals using data fabric.

The first step was ensuring they were using relevant data sets. They started with claims data, then combined it with demographic data and diagnostic information from patients' past medical visits. To complete the picture, the organization brought in socio-economic data via Social Determinants of Health (SDOH) datasets.
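In practice, joining sources like these often comes down to merging tables on a shared member identifier. The sketch below illustrates that step with pandas; the column names and sample values are purely hypothetical, not from the organization's actual data.

```python
import pandas as pd

# Hypothetical stand-ins for the four data sources described above
claims = pd.DataFrame({
    "member_id": [1, 2, 3],
    "er_visits_last_year": [2, 0, 1],
})
demographics = pd.DataFrame({
    "member_id": [1, 2, 3],
    "age": [67, 54, 72],
    "preferred_language": ["es", "en", "es"],
})
diagnostics = pd.DataFrame({
    "member_id": [1, 2, 3],
    "a1c_last": [9.1, 7.2, 8.4],  # most recent HbA1c reading
})
sdoh = pd.DataFrame({
    "member_id": [1, 2, 3],
    "sdoh_risk_index": [0.8, 0.3, 0.6],  # composite socio-economic risk score
})

# Join everything on the member identifier into a single modeling table
features = (
    claims
    .merge(demographics, on="member_id")
    .merge(diagnostics, on="member_id")
    .merge(sdoh, on="member_id")
)
print(features)
```

A real pipeline would also handle members missing from one source (e.g. via outer joins) and apply the governance rules and policies the article describes before any modeling begins.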


After connecting all the different data sources — claims data, diagnostics data, socio-economic data, and demographics data — with appropriate rules and policies in place within a data fabric using IBM Cloud Pak for Data, a team of data scientists built Machine Learning models, following best practices for the AI/ML lifecycle.
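The article does not detail the models themselves, but a classifier predicting the risk of hospitalization in a 30–60 day window could be trained roughly as follows. This is a minimal sketch using scikit-learn on synthetic data; the features, labels, and model choice are illustrative assumptions, not the organization's actual approach.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic feature matrix standing in for the combined data set
# (e.g. age, ER visits, lab results, SDOH risk index)
X = rng.normal(size=(500, 4))
# Synthetic label: 1 = hospitalized within the prediction window (illustrative)
y = (X[:, 1] + X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Hold out a validation split, a basic piece of ML lifecycle hygiene
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

A production lifecycle would add much more: versioned training data, validation against clinical criteria, and the deployment monitoring discussed below.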

Accurate predictions are important, but it is equally important that the predictions be equitable. The organization must have confidence that the predictions will cover a diverse member population, to ensure quality care reaches everyone. Guardrails were put in place to check for and catch bias at various stages of the AI/ML lifecycle: checking for bias in the data set, running checks during the model build and validation stages, and monitoring for bias on an ongoing basis after the model is deployed. Similar guardrails were built to monitor quality and data drift as well as to generate explanations for the model predictions.
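One widely used fairness check of the kind these guardrails perform is the disparate impact ratio: the rate at which one group receives the favorable outcome (here, being flagged for outreach) divided by the rate for another group. The sketch below is a minimal illustration with made-up group labels and predictions; the article does not specify which metrics or tools the organization actually used.

```python
import pandas as pd

# Hypothetical model outputs: flagged-for-outreach decisions for two groups
preds = pd.DataFrame({
    "group":           ["A", "A", "A", "A", "B", "B", "B", "B"],
    "flagged_at_risk": [ 1,   1,   0,   1,   1,   0,   0,   0 ],
})

# Favorable-outcome rate per group, then the ratio between groups.
# A common rule of thumb treats a ratio below 0.8 as a signal of potential bias.
rate = preds.groupby("group")["flagged_at_risk"].mean()
disparate_impact = rate["B"] / rate["A"]
print(f"Disparate impact ratio: {disparate_impact:.2f}")
```

In this toy example the ratio falls well below 0.8, which would trigger a review; in a deployed system the same computation would run continuously against live predictions, alongside the quality and drift monitors the article mentions.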

How trustworthy AI provides better help

Using architecture patterns for data fabric and trustworthy AI, the North American healthcare organization can ensure care that is equitable across diverse member races and social classes. The solution identifies at-risk members who need intervention, and the automation saves time, increases efficiency, and provides a pathway to get members help when they most need it. Additionally, community health workers have access to better information about the communities they serve, making it easy for them to explain why members are receiving a visit, which builds trust in the process and maintains a good relationship between the organization and members.

IBM Expert Labs offers a variety of architecture patterns mapped to successful use cases and common entry points like Data Governance and AI Governance/Trustworthy AI. The healthcare organization used such an architecture pattern to help them better address their members’ health and well-being. What could your business use it for?