Mitigating AI Bias Goes Beyond the Data

CU School of Medicine faculty member Matthew DeCamp, MD, PhD, says even unbiased data can become tainted when they enter a complex, and sometimes biased, world.


Written by Kara Mason on July 17, 2023

The discussion around bias in artificial intelligence (AI) is no longer confined to the data. Even the most impartial algorithm or analysis can encounter prejudice.

University of Colorado School of Medicine researcher Matthew DeCamp, MD, PhD, says in a new paper that the clinical setting and social context can both have a significant impact on the fairness of an AI health tool.

“In AI, biased data and biased algorithms result in biased outcomes for patients, but so do unbiased data and algorithms when they enter a biased world,” writes DeCamp, associate professor of internal medicine at the Center for Bioethics and Humanities, who maintains in the latest edition of the journal Science that fair implementation of AI in health care is as important as fair algorithms.  

“We see a lot of attention given to developing fair algorithms and fair datasets,” he says. “While that’s important, things can change when algorithms enter a clinical environment where there are new and different forms of bias, such as overly skeptical clinicians.”

Defining fairness

Monitoring performance is a key factor in preventing bias beyond the data, especially because the debate over what defines fairness has been ongoing for hundreds of years.

“We shouldn’t expect AI to settle that debate,” DeCamp says.  

Instead, he and co-author Charlotta Lindvall, MD, PhD, assistant professor and director of clinical informatics at Harvard University’s Dana-Farber Cancer Institute, say in their Science essay that there should be a greater focus on latent biases, which they describe as “biases waiting to happen.”

“Several strategies exist to identify and address latent biases,” they write. “One strategy could involve providing clinicians with model-specific, individual-level performance feedback regarding whether they tend to outperform or underperform the model, or if they are systematically following or overriding a model only for certain patient groups.”
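To make that concrete, here is a minimal sketch of the kind of feedback report such a strategy might produce, assuming a simple log of model recommendations and whether each clinician followed them. The column names and the data are invented for illustration; this is not the authors’ implementation.

```python
import pandas as pd

# Hypothetical audit log: one row per model recommendation, recording
# whether the clinician followed it (1) or overrode it (0). Columns
# and values are invented for illustration.
log = pd.DataFrame({
    "clinician_id":  ["a", "a", "a", "b", "b", "b"],
    "patient_group": ["G1", "G2", "G2", "G1", "G1", "G2"],
    "followed":      [1, 0, 0, 1, 1, 1],
})

# Override rate per clinician and patient group: a large gap between
# groups for the same clinician suggests the model is being overridden
# systematically only for certain patients.
override_rates = (
    1 - log.groupby(["clinician_id", "patient_group"])["followed"].mean()
).unstack("patient_group")

print(override_rates)
```

A report like this could be fed back to individual clinicians, flagging whether they systematically follow or override the model only for certain patient groups, which is the pattern the essay describes.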

Ironically, knowing about biases in AI may result in more skepticism among clinicians, further contributing to the challenge.

A health care system or a set of clinicians with greater distrust of AI than others, for example, may be less likely to follow recommendations prompted by AI. DeCamp says that can have “trickle-down effects for the patients cared for by those clinicians.”

“It would mean that even though they should be following those recommendations, there’s a systematic and institutional disadvantage that occurs for those patients.”

The world outside the data presents a host of variations and biases. Socioeconomic status, race, ethnicity, geography, and other factors can also shape access to care and clinicians, as well as trust in AI technology in health care, all of which can influence the effectiveness and fairness of these tools.

“It’s not just about the data set being inclusive and diverse, it’s also about how the algorithm is used in the real world and whether it performs equally well among different types of patients,” DeCamp says. “Knowing that it does may engender far more trust and encourage more use than simply knowing that a dataset was unbiased.”
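One way to check real-world performance across patient types is to stratify a simple accuracy measure by group. The sketch below assumes a hypothetical post-deployment audit table; the group labels, column names, and values are made up for illustration.

```python
import pandas as pd

# Hypothetical post-deployment audit: true outcomes and model
# predictions tagged with a patient-group attribute. All values
# are fabricated for illustration.
audit = pd.DataFrame({
    "group":      ["G1"] * 5 + ["G2"] * 5,
    "outcome":    [1, 0, 1, 1, 0, 1, 1, 0, 0, 1],
    "prediction": [1, 0, 1, 0, 0, 0, 1, 1, 0, 0],
})

# Accuracy stratified by group: a persistent gap means the tool does
# not perform equally well among different types of patients.
audit["correct"] = audit["outcome"] == audit["prediction"]
print(audit.groupby("group")["correct"].mean())
```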

AI tracking AI

DeCamp and Lindvall also suggest to their peers in the health care industry that “the gaze of AI should be turned on itself.”

While the technology can introduce or exacerbate biases, it also has the potential to detect them. AI Fairness 360 and FairML are two of several existing tools that aim to help researchers mitigate bias in AI work, but more development and “rigorous scientific examination” are necessary, the researchers say.
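As one example of what these toolkits automate, AI Fairness 360 (the aif360 Python package) can compute standard group-fairness metrics such as disparate impact. The sketch below uses a fabricated toy dataset; the column names and group encoding are assumptions for illustration, not an application from the essay.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy dataset: binary favorable outcome (1) plus a binary protected
# attribute (1 = privileged group). All values are fabricated.
df = pd.DataFrame({
    "outcome": [1, 0, 1, 1, 0, 1, 0, 0],
    "group":   [1, 1, 1, 1, 0, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["outcome"],
    protected_attribute_names=["group"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"group": 1}],
    unprivileged_groups=[{"group": 0}],
)

# Disparate impact: ratio of favorable-outcome rates (unprivileged /
# privileged); values far below 1.0 flag potential bias.
print(metric.disparate_impact())
```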

“Although there are hints of algorithms that can assess whether AI is biased, we still need to work to understand how well they perform when they’re analyzing health care applications specifically,” DeCamp says.

The two urge more implementation research to identify and understand different factors that affect AI bias.

“All patients deserve to benefit from both fair algorithms and fair implementation,” they say.

Featured Experts

Matthew DeCamp, MD, PhD