
AI Can Be a Powerful Tool in Medicine, But Is it Always the Right Tool?

A leading CU Department of Medicine expert on AI ethics studies the promise and peril of emerging technology and how it can impact patient-centered care.


by Mark Harden | June 24, 2025

Artificial intelligence “is poised to dramatically affect medical research and practice,” says Matthew DeCamp, MD, PhD, of the University of Colorado Department of Medicine. But there also is an “urgent need” to better understand AI’s potential impact on patient-centered care and to create ethical guardrails for its use, he says.

AI ethics is a key theme in DeCamp’s research as an associate professor in the CU Division of General Internal Medicine and the Center for Bioethics and Humanities at the CU Anschutz Medical Campus. He also lectures frequently on the subject at CU, in the local community, and around the country.

“A lot of my research on AI ethics is motivated by my patient care in an outpatient internal medicine clinic,” DeCamp says. “Like many of us, I could see what was coming, with AI tools having more of an effect on the way we deliver patient care.”

But alongside the promise of AI, DeCamp cautions, there’s also the potential for peril, such as bias, errors, and a shift away from what he calls the “north star” of medicine: “patient-centered, high quality, equitable care delivered through caring relationships.”

In a talk last year as part of the CU Anschutz “Transforming Healthcare” lecture series, DeCamp recalled a line from a 1964 book by philosopher Abraham Kaplan: “Give a small boy a hammer, and he will find that everything he encounters needs pounding” – suggesting that tools like AI may be right for some jobs and not others.

“The most fundamental question is, what is the problem we’re trying to solve?” he says. “And how do we develop tools to solve that problem, rather than saying, ‘We have a tool – AI – and now let’s go find a problem to solve.’”

Automated risk scores

DeCamp has recently overseen two major research projects on AI ethics. One, funded with $655,000 from the National Institute of Nursing Research in 2023, examines the use of AI tools for individualized treatment planning in end-of-life and palliative care, especially the creation of an automated “risk score” that predicts a patient’s prognosis or probability of death years in advance, based on electronic health record data.

“It’s certainly possible that AI-based mortality predictions could help us make better end-of-life decisions, but they create a number of ethics issues, as you can imagine,” DeCamp says. “Do patients need to be told before the score is calculated? Who should have access to the score? How does knowledge of the score affect the choices we make about care?” An AI-derived risk score might inadvertently contribute to depersonalized or unempathetic care, he adds.

For the ongoing study, DeCamp and his colleagues have interviewed more than 80 patients, families, and clinical care team members at four sites around the country, and surveyed more than 500 palliative care physicians on their views of this technology.

“Later this year, we’re convening an expert panel to develop recommendations around these ethical issues,” he adds. “This builds on a natural strength of CU, where we’re leaders in palliative care.”

Living with Livi

Another study led by DeCamp looked at how patients of UCHealth, the CU Department of Medicine’s clinical partner, interact with Livi, which he describes as “this amazing little chatbot that they can access on their mobile phone and is linked to their electronic health record.” Livi serves as a virtual assistant that can help UCHealth patients access test results, schedule appointments, find a specialist, and more.

“When it comes to Livi, all sorts of questions come up,” DeCamp says. “What do patients think of Livi? Do they know it’s a computer? What information do they like to share with Livi, and what do they think about the privacy of that information? How does using it affect their perception of their human care team?”

In a 2024 publication, DeCamp and a team including Jessica Ellis, MS, a professional research assistant at the Center for Bioethics and Humanities, and CU School of Medicine student Marlee Akerson surveyed 617 patients using Livi. The team found that 6% thought Livi was a real person, 13% weren’t sure, and 17% thought it was a computer being monitored by a person in real time.

“There is a promise for technologies such as patient-facing chatbots to leapfrog traditional barriers to care, whether it’s geography or insurance or more,” DeCamp says. “There’s a real potential to help people in that way. But we can’t stop there. If we implement technology in such a way that, five years from now, our rural communities can only access care via technology, maybe we’ve solved an access problem, but we’ve inadvertently created an unfair double standard in care.”

Making AI decisions

Looking forward, DeCamp says his next major area of research in this space is “coming to a better understanding of how health care systems make decisions about implementing and using AI tools.”

DeCamp expects federal regulators will play a role in setting guidelines for AI use in medicine. “But at the end of the day, important decisions will be made by health systems all around the country about whether or how to implement these AI-based tools.”

Health care systems are “creating governance committees and processes to make these decisions, but they're all doing it in different ways,” he adds. “But what should be the roles of patients, families, and communities in informing those AI implementation decisions? That’s a key question we need to ask.”

When AI tools work wonders, such as creating artwork from a few words, “the first reaction is a sense of awe. It’s pretty amazing what they can do,” DeCamp says. “Sometimes that sense of awe could blind us to what could be missing. We want to be amazed, but we can’t let that blind us to the fact that these tools are just tools. They make mistakes, they’re inaccurate at times, and we have to be vigilant to the potential for that, even as they get better.”

Featured Experts

Matthew DeCamp, MD, PhD