Examining health disparities in precision medicine
By Irene Dankwa-Mullan, MD, MPH | 4 minute read | October 14, 2019
Precision medicine is an emerging approach to patient care that considers the individual variability in genes, environment and lifestyle. It involves the integration of all the relevant clinical and social information to provide the right intervention for patients. This new approach to medical care has implications for the types and sources of data, scientific methods and evidence that need to be aligned to inform clinical decision-making.
Big data and machine learning algorithms provide the models that drive precision medicine. Clinicians may use the output of these models to help guide interventions tailored to individual patients’ characteristics. A model’s predictive performance, and thus its value for informing clinical practice, depends on the size and diversity of the data it was trained on. Studies show that machine learning models generated from large, diverse, longitudinal patient datasets have the potential to inform clinical practice.1,2,3
How a lack of cohort diversity contributes to bias and inequity
The ways in which our healthcare system collects, records and uses data can be subject to bias in multiple ways. There is growing concern within healthcare that systemic discriminatory practices and inequities may already be present in the data, and that algorithms trained on it can perpetuate these disparities.4 Risk prediction models built on limited patient datasets without attention to fairness and potential bias can be harmful or can prevent favorable outcomes.4,5 Machine learning models may be particularly sensitive to underrepresentation or overrepresentation in the patient cohorts used for training, which can lead to a biased model and adverse consequences or outcomes.4,5
For example, in a study of hypertrophic cardiomyopathy, a cardiac disease that is most commonly inherited and associated with certain gene mutations, genetic variants carried disproportionately by African-American patients were misclassified as pathogenic. A reanalysis by another team found these variants to be benign. They reported that, “…simulations showed that the inclusion of even small numbers of black Americans in control cohorts probably would have prevented these misclassifications.”4
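The mechanism behind this kind of misclassification can be illustrated with a toy calculation. The allele frequencies, cohort sizes, rarity threshold and classification rule below are all hypothetical, chosen only to mirror the pattern Manrai et al. describe, not to reproduce their analysis:

```python
# Hypothetical true carrier frequencies for a benign variant that is
# common in one population and rare in another (illustrative numbers).
FREQ_BY_GROUP = {"black": 0.10, "white": 0.001}

# A naive rule of thumb: call a variant "pathogenic" for a rare disease
# if it is expected in fewer than 1% of healthy controls.
RARITY_THRESHOLD = 0.01

def expected_control_frequency(cohort_composition, freq_by_group):
    """Expected carrier frequency in a control cohort given its makeup.

    cohort_composition: {group: number of individuals in the cohort}
    freq_by_group: {group: true carrier frequency in that group}
    """
    total = sum(cohort_composition.values())
    carriers = sum(n * freq_by_group[g] for g, n in cohort_composition.items())
    return carriers / total

def classify(cohort_composition):
    """Apply the naive rarity rule to a control cohort of a given makeup."""
    freq = expected_control_frequency(cohort_composition, FREQ_BY_GROUP)
    return "pathogenic" if freq < RARITY_THRESHOLD else "benign"

# In an all-white control cohort the variant looks vanishingly rare,
# so the naive rule misclassifies it as pathogenic.
print(classify({"white": 200}))               # -> pathogenic (misclassified)

# Including even a small number of Black Americans reveals the variant
# is far too common to cause a rare Mendelian disease.
print(classify({"white": 180, "black": 20}))  # -> benign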
Another example involves responses to the drug clopidogrel, also known as Plavix, an alternative to aspirin used to prevent heart attacks and stroke. The clinical trial participants who showed a reduction in heart attacks and stroke from a 75 mg dose (versus a 150 mg dose) were predominantly white males.5 The standard 75 mg dose has not been efficacious for patients who are poor metabolizers of the drug due to a genetic variant that is common in Pacific Islanders. The FDA subsequently issued a Boxed Warning stating, “…poor metabolizers may not receive the full benefit of Plavix and may remain at risk for heart attacks and stroke.”6
If these biases go undetected, the data that encodes them may be used to train machine learning and AI algorithms that perpetuate them. Researchers within the healthcare information technology industry are working on trust, transparency and fairness in this context, with varying definitions of “fairness.”
Steps toward more fairness and less bias in healthcare
Efforts to advance digital health while promoting fairness and reducing bias are taking shape at IBM Watson Health. To help promote health equity, IBM Watson Health is committed to being purposeful and transparent about eliminating bias in our data, analytics, AI and services:
- We’ve made a clear Commitment to AI Principles of Purpose, Transparency, and Skills in Healthcare.
- Health equity is a focus area of IBM Watson Health’s research collaborations with two academic medical centers, Brigham and Women’s Hospital (a teaching hospital of Harvard Medical School) and Vanderbilt University Medical Center, to help advance the science of AI and its application to major public health issues.
- IBM Research has developed AI Fairness 360, a comprehensive open-source toolkit that uses state-of-the-art algorithms and metrics to check for unwanted bias in datasets and machine learning models.
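To make concrete what such toolkits measure, here is a minimal, dependency-free sketch of two group-fairness metrics of the kind AI Fairness 360 provides (statistical parity difference and disparate impact). The records and group labels are hypothetical, and this sketch is an illustration, not the toolkit’s implementation:

```python
def selection_rate(records, group):
    """Fraction of a group receiving the favorable outcome (label == 1)."""
    outcomes = [label for g, label in records if g == group]
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(records, unprivileged, privileged):
    """P(favorable | unprivileged) - P(favorable | privileged); 0 is ideal."""
    return selection_rate(records, unprivileged) - selection_rate(records, privileged)

def disparate_impact(records, unprivileged, privileged):
    """Ratio of the two selection rates; 1 is ideal, and values below
    roughly 0.8 are commonly flagged (the "four-fifths rule")."""
    return selection_rate(records, unprivileged) / selection_rate(records, privileged)

# Hypothetical (group, outcome) pairs: 1 = referred to a care program.
records = ([("A", 1)] * 60 + [("A", 0)] * 40 +
           [("B", 1)] * 30 + [("B", 0)] * 70)

# Group A is referred 60% of the time, group B only 30%.
print(statistical_parity_difference(records, "B", "A"))  # about -0.3
print(disparate_impact(records, "B", "A"))               # about 0.5
```

A dataset that fails checks like these is a signal to revisit cohort composition or apply bias-mitigation techniques before the model reaches clinical use.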
Simply put, “one-size-fits-all” does not work in healthcare. It is essential to join forces to design more inclusive data, advanced analytics and AI, and to bring diverse voices and experiences to the table to promote fairness and minimize bias for the future of precision medicine and healthcare.
1. Konerman MA, Beste LA, et al. Machine learning models to predict disease progression among veterans with hepatitis C virus. PLoS One. 2019 Jan 4;14. https://www.ncbi.nlm.nih.gov/pubmed/30608929
2. Ngufor C, Van Houten H, et al. Mixed effect machine learning: A framework for predicting longitudinal change in hemoglobin A1c. J Biomed Inform. 2019 Jan;89:56-67. https://www.ncbi.nlm.nih.gov/pubmed/30189255
3. Rahimian F, Salimi-Khorshidi G, et al. Predicting the risk of emergency admission with machine learning: development and validation using electronic health records. PLoS Med. 2018 Nov 20;15(11).
4. Manrai AK, Funke BH, Rehm HL, et al. Genetic misdiagnoses and the potential for health disparities. N Engl J Med. 2016;375(7):655-665. doi:10.1056/NEJMsa1507092. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5292722/
5. Wu, et al. The Hawaii clopidogrel lawsuit: the possible effect on clinical laboratory testing. Per Med. 2015 Jun;12(3):179-181. doi:10.2217/pme.15.4. https://www.ncbi.nlm.nih.gov/pubmed/29771642/
6. CAPRIE Steering Committee. A randomised, blinded, trial of clopidogrel versus aspirin in patients at risk of ischaemic events (CAPRIE). Lancet. 1996;348:1329-1339. https://www.ncbi.nlm.nih.gov/pubmed/8918275?dopt=Abstract