
November 2024 Bytes to Bedside Recap: AI Integration and Decision Making Bias

Examining the role of AI in clinical operations and how biases affect our decision-making.


by Melinda Lammert | December 9, 2024
Photo: Michael Rosenberg, MD, and Zachary Kilpatrick, PhD

Decision-making is an essential part of our daily lives. At the Department of Biomedical Informatics (DBMI), we bring this concept to life with our weekly seminar series, "Bytes to Bedside." Last month, we focused on the role of artificial intelligence (AI) in cardiology and examined how bias can affect our decision-making. Through engaging discussions with leading experts, we explored AI integration, the complexities of decision-making, and the influence of bias.

Harnessing the power of AI in everyday practice

There is a lot of excitement surrounding AI right now. Michael Rosenberg, MD, associate professor of medicine in the Division of Cardiology at the University of Colorado School of Medicine, is no stranger to this hype. Still, he is focused on the important next step: translating this enthusiasm into practical applications that truly impact patient care and improve hospital workflows. The challenge is understanding AI’s potential and implementing it effectively to create tangible benefits for healthcare systems and patients alike.

Rosenberg primarily works with two main types of healthcare models: predictive and inferential. He explained that predictive models are used to forecast outcomes—examples include risk scores for identifying high-risk patients and tools that streamline tasks to save time and reduce errors. On the other hand, inferential models analyze data to uncover cause-and-effect relationships, often used in clinical trials to evaluate new treatments or understand risk factors for conditions like myocardial infarction. Beyond these, Rosenberg highlighted the importance of models focused on organizing and communicating data, such as managing electronic health records (EHRs) and processing data from implanted devices. These organizational and communication models are essential for seamless care coordination, improved patient outcomes, and ensuring timely access to critical health information.

So, how can AI be integrated into healthcare workflows? Rosenberg suggested starting by identifying where data models and digital technologies are already in use. He provided the example of ECG interpretation, where a computer algorithm detects peak amplitudes and intervals, applies rule-based logic to identify abnormalities, and generates a text interpretation for each finding. This results in a computer-generated report that is reviewed and edited by a clinician before being shared with the patient. By incorporating a deep-learning model into the rule-based algorithm, the accuracy of detecting abnormalities can be significantly improved.

Slides from Rosenberg's presentation. ECG Interpretation (left). ECG Interpretation with deep learning model (right)
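To make that pipeline concrete, here is a minimal sketch of the flow described above: measured ECG features pass through rule-based logic to produce a draft interpretation, and a deep-learning model, where available, can contribute additional findings before clinician review. The thresholds, the EcgFeatures fields, and the dl_model interface are illustrative assumptions, not a clinical algorithm.

```python
# Minimal sketch of the ECG pipeline described above: measure features, apply
# rule-based logic, emit a draft interpretation for clinician review.
# All thresholds and the dl_model interface are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class EcgFeatures:
    heart_rate_bpm: float   # derived from R-R intervals
    pr_interval_ms: float
    qrs_duration_ms: float

def rule_based_findings(f: EcgFeatures) -> list[str]:
    """Apply simple rule-based logic to the measured intervals and amplitudes."""
    findings = []
    if f.heart_rate_bpm > 100:
        findings.append("Sinus tachycardia")
    if f.pr_interval_ms > 200:
        findings.append("First-degree AV block")
    if f.qrs_duration_ms > 120:
        findings.append("Wide QRS complex")
    return findings or ["Normal ECG"]

def draft_report(f: EcgFeatures, dl_model=None, signal=None) -> str:
    """Generate a computer interpretation; a deep-learning model, if supplied,
    can add findings from the raw waveform before the clinician reviews it."""
    findings = rule_based_findings(f)
    if dl_model is not None and signal is not None:
        # Hypothetical model interface: returns extra labels from the waveform.
        findings += dl_model.predict_labels(signal)
    return "; ".join(dict.fromkeys(findings))  # de-duplicate, keep order
```

In this arrangement the deep-learning stage augments rather than replaces the rules, so the clinician still reviews and edits a single draft report.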

Rosenberg pointed out that the same approach can enhance clinical decision support. For example, if a patient with hypertension is prescribed amlodipine, an algorithm can access the patient’s allergy history, flag any allergy to amlodipine, alert the clinician, and suggest an alternative medication to prevent a potential allergic reaction. He noted that this process can be refined further using online learning models (reinforcement learning), which update and adapt as they are used, improving their recommendations over time. “While most models can predict an outcome or suggest an action, reinforcement learning can do both. It learns dynamically by making decisions and receiving feedback,” Rosenberg explained.


Slides from Rosenberg's presentation. Clinical decision support (left). Clinical decision support with reinforcement learning model (right)
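A rough sketch of that allergy-alert flow might look like the following; the drug names, the alternatives table, and the record structure are hypothetical, chosen only to mirror the amlodipine example.

```python
# Minimal sketch of the allergy-check flow described above. The drug names and
# alternatives mapping are hypothetical, not a real formulary.

ALTERNATIVES = {"amlodipine": ["lisinopril", "hydrochlorothiazide"]}

def check_prescription(allergies: set[str], drug: str) -> dict:
    """Flag a documented allergy and suggest alternatives before the order is placed."""
    if drug in allergies:
        return {"alert": f"Patient has a documented allergy to {drug}",
                "alternatives": ALTERNATIVES.get(drug, [])}
    return {"alert": None, "alternatives": []}

# Example: a hypertensive patient with a documented amlodipine allergy.
print(check_prescription({"amlodipine"}, "amlodipine"))
# {'alert': 'Patient has a documented allergy to amlodipine',
#  'alternatives': ['lisinopril', 'hydrochlorothiazide']}
```

An online learner could then rank those alternatives by tracking which suggestions clinicians actually accept, the feedback loop Rosenberg described; a minimal version of that kind of update appears in the next sketch.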

Understanding decision-making involves recognizing its key components: action, reward, state, and context. Rosenberg walked through each of these, explaining that the action is what needs to be decided, the reward represents the potential outcomes or benefits, and the state includes the information about each individual that informs the decision. Context defines when and why a decision must be made. He added that the process also involves the agent making the decision, the environment, which determines how the state changes and what reward follows, and time, which sets the intervals between decisions and when they are reassessed. Rosenberg emphasized, “When evaluating decisions, it’s important to consider both positive and negative goals.” By understanding the state and action, we can build models to predict rewards, estimating how much reward is associated with different outcomes, or the value of an action taken from a given state.
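In reinforcement-learning terms, that last step is estimating the value of an action taken from a given state. Below is a toy, bandit-style value update, the simplest form of the online learning Rosenberg mentioned; the states, actions, and reward values are invented purely for illustration.

```python
# Toy illustration of the state/action/reward framing: from a state, an agent
# picks an action, observes a reward, and updates its value estimate.
# The states, actions, and reward numbers are made up for illustration.

q = {}        # q[(state, action)] -> estimated value of that action
ALPHA = 0.1   # learning rate

def update(state, action, reward):
    """One online update: move the value estimate toward the observed reward."""
    key = (state, action)
    old = q.get(key, 0.0)
    q[key] = old + ALPHA * (reward - old)

# Example: a hypothetical "high-risk" state with two candidate actions.
update("high_risk", "medication_A", reward=1.0)   # good outcome observed
update("high_risk", "medication_B", reward=-1.0)  # adverse outcome observed
best = max(["medication_A", "medication_B"],
           key=lambda a: q.get(("high_risk", a), 0.0))
print(best)  # -> medication_A
```

Each observed outcome nudges the stored estimate toward the reward actually received, which is the sense in which such a model learns by making decisions and receiving feedback.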

The good, the bad and the ugly ways bias affects our choices

How do people make decisions? Zachary Kilpatrick, PhD, associate professor of applied mathematics at CU Boulder, posed this question to the DBMI seminar attendees. He explained that decision-making can start with something as simple as checking the weather to decide what to wear, how to get to work, or what to plan for lunch. However, in many situations, decisions must be made under time constraints and with limited resources, requiring us to use those resources as effectively as possible.

The Good – Biases can encode environmental regularities.
The Bad – Rare or asymmetric evidence can lead to poor decision-making.
The Ugly – Rapid decisions often reflect strong biases.

Kilpatrick shared, “When people have to make decisions based on some evidence that they’re observing, they might try to do so with a good strategy, but a lot of times this will result in us having particular biases in the way we make decisions, which in some contexts can be useful, and in others can be quite bad.” He noted that decision-making and memory processes must work in tandem to guide our choices, explaining that certain orientations, colors, or positions may be more strongly represented in our minds due to their frequency in our environment, a phenomenon known as environmental heterogeneity.

In typical decision-making, we evaluate whether we have sufficient evidence to draw a conclusion. Kilpatrick noted that when faced with numerous options, we often rely on heuristics, mental shortcuts that help people make decisions quickly and efficiently, rather than the exhaustive evidence-weighing a purely Bayesian approach would prescribe. He explained that while biases can be beneficial by encoding regularities, they can also create limitations. For instance, he described how visual information flows from the retina to the thalamus and then on to the cortex. Certain cells in the visual system, known for encoding orientation, respond more strongly to vertical and horizontal lines than to oblique ones, suggesting that these orientations are more commonly encountered in our environment. His research team found that when individuals are shown a color and asked to recall it four seconds later, certain colors are represented more strongly and lead to fewer errors in reporting.


Zachary Kilpatrick, PhD, with DBMI trainee Jennifer Briggs

However, not all consequences of biases are positive. Rare or asymmetric evidence can lead to poor decision outcomes. Kilpatrick illustrated this with an example involving two jars filled with blue and red balls. One jar contains few red balls, while the other has more, making a red ball the rare piece of evidence. Suppose you draw four balls from the jar with more red balls and all four are blue. Because red balls are uncommon even in that jar, this run of blue draws is weak evidence: it does not let you conclude which jar you are drawing from, and it certainly does not rule out the red-heavy jar.
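To see why, it helps to run the numbers under assumed proportions. The sketch below applies Bayes’ rule with hypothetical jars (10% red versus 40% red) and shows that four blue draws shift belief toward the blue-heavy jar without coming close to ruling out the other.

```python
# Worked version of the two-jar example with assumed proportions:
# jar A is 10% red, jar B is 40% red. After drawing four blue balls,
# how confident can we be about which jar we are drawing from?

def posterior_jar_b(n_blue: int, p_red_a=0.10, p_red_b=0.40, prior_b=0.5) -> float:
    """Posterior probability of the red-heavy jar after n_blue blue draws."""
    like_a = (1 - p_red_a) ** n_blue      # P(data | jar A)
    like_b = (1 - p_red_b) ** n_blue      # P(data | jar B)
    return like_b * prior_b / (like_b * prior_b + like_a * (1 - prior_b))

print(round(posterior_jar_b(4), 3))
# ~0.165: the evidence favors jar A, but jar B is far from ruled out
```

The same arithmetic applies to the epilepsy example that follows: a long seizure-free stretch lowers, but cannot zero out, the probability of the diagnosis.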

This concept extends to medical diagnoses. Kilpatrick explained, “If a doctor is monitoring a patient for signs of epilepsy and no seizures are observed for an extended period, that absence of evidence does not confirm the patient does not have epilepsy. A seizure occurring after 100 days could indicate epilepsy or another condition. In this case, the rare evidence cannot definitively rule out the diagnosis.”

Lastly, there is the "ugly" side of decision-making: the influence of speed on bias. Kilpatrick and colleagues explored this issue in a study that revealed how, when decisions are made under time constraints, people rely heavily on their existing knowledge and biases. This can lead to decisions that are skewed by the first or most vocal opinion in a group, which can significantly sway the outcome.

To participate in more insightful seminar events hosted by DBMI, we invite you to join our community. You'll gain access to a diverse range of educational and professional events designed to foster knowledge sharing and collaboration in the field. Don't miss the opportunity to engage with experts and peers in groundbreaking discussions and presentations.

Featured Experts

Zachary Kilpatrick, PhD

Michael Rosenberg, MD