Jess Ellis, Natalie Dellavalle, and Matt DeCamp present their research on patient perceptions of chatbots and information privacy in August 2024.
For Matthew DeCamp, MD, PhD, the road to medicine wasn’t straightforward. As a high school student, DeCamp gravitated toward scientific research. “Medicine came late,” DeCamp explains. “I started in biochemistry and was headed for a PhD. Then the ethical questions around genetics—who we are, what choices we make, and fairness—pulled me in.”
From Molecules to Meaning
In the 2000s, DeCamp enrolled in an MD/PhD program focused on biochemistry. He soon found himself asking questions raised by the newly completed Human Genome Project, which catalogued 20,000 to 25,000 human genes and made them accessible for further biological study. How will understanding our genetic code affect how we think about who we are, the choices we make about health and life, and who has the power to make those choices? What are the unintended consequences and potential harms? Captivated by the ethical quandaries these landmark scientific advances raised, he changed his PhD from biochemistry to philosophy.
“I’ve always believed ideas have the power to change the world. Thinking clearly and critically about ethics can improve how we treat one another and make decisions.”
Now, as a faculty bioethics researcher at the CU Center for Bioethics and Humanities and associate professor of medicine with the CU School of Medicine, DeCamp stands at the forefront of similar conversations about artificial intelligence in health care.
What Patients Should Know about AI in Health Care
“AI isn’t coming,” he says. “It’s already here.”
For DeCamp, who is also an internal medicine physician with UCHealth, one advantage of practicing medicine is that he sees things happening in the real world, in real time, that raise ethical questions. Watching artificial intelligence shape how his patients viewed health information, for example, inspired a research study on patient perceptions of a health portal chatbot.
“AI is already affecting health care in so many ways,” he says.
The message you receive back from a clinician in the patient portal of an electronic health record may be partly generated by AI. AI can help transcribe and translate what you and your doctor say, almost in real time, into summary notes in your medical record. Health systems use AI to analyze patterns and quality of care to improve how care is delivered. AI can also analyze patient data and diagnostic images, recognizing problems, risks, and diagnoses potentially faster and more accurately than humans. It could even help expand access to health care, especially in rural areas.
Yet, DeCamp believes it is important to proceed with purpose and caution. “These tools are only as good as the data upon which they're trained. AI could reinforce existing biases if it’s trained on flawed data or if certain populations are underrepresented in that data.”
AI tools are also most effective when the humans creating and using them understand how they work. “There's a whole movement now around training, especially for medical students and practicing physicians. Learning how to interact with AI is a skillset in and of itself,” DeCamp says.
“The hope is that, with medical schools like the University of Colorado placing more attention on basic competencies in AI, we will be able to train the next generation—as well as the current generation—to use these tools most effectively.”
As AI becomes more common in health care, one concern is privacy. But it’s not as simple as just saying “no” to access. “Some people may be surprised that their message goes somewhere and is analyzed by AI to generate a response. On the other hand, we also know that health systems are always analyzing our data. Insurance companies are always analyzing our data. So in some ways, it's not new, even though it feels new and raises concerns around privacy and who has access.”
Other concerns include fairness and the subtle ways AI might influence decision-making—such as guiding a patient or provider toward one treatment over other options—when we know that AI-based predictions may be more or less accurate for some people than for others. “We’re okay with Netflix recommending a movie, but should we be okay with AI shaping our health care choices in the same way?”
Empowering Community Voices
One solution, according to DeCamp, is more robust community engagement in how health care decisions are made. He envisions a future where patients, families, and community members actively help shape policies and practices—especially as health systems adopt more advanced technologies.
“Currently, there are patient advisory councils at the clinic and system level,” he says, “but we need even better ways to engage people, and we’re actively researching better ways to involve communities, too.”
This could lead to more opportunities for patients to help shape how health care is delivered and how technology like AI is used in their local clinics. “Health systems want this feedback,” DeCamp emphasizes. “And including community voices isn’t just practical—it’s ethical.”
The Path Ahead: A Call for Interdisciplinary Collaboration
When asked where he hopes to see progress in the next five years, DeCamp is clear: he wants to see ethics, humanities, and social sciences have a greater impact on how AI and other technologies are developed and deployed.
“Many people have a perception now that AI is being largely driven by technology. Because we can do it, we'll do it. And we know in medicine that that is not enough.”
To ensure ethical innovation, DeCamp calls for deeper collaboration across disciplines. “Bioethicists, engineers, clinicians, and community members all need to be at the table,” he says. “But that requires time, space, and institutional support to break down silos.”
He believes the University of Colorado Anschutz Medical Campus is uniquely positioned to lead this work. “There’s a spirit of collaboration here,” he says. “And we’ve seen our research influence how health systems like UCHealth are using AI. That’s a start, but we need to do more.”
Shaping the Next Generation
DeCamp sees the biggest success of his work as less about policy or technology and more about training people to lead the future he wants to see.
“If I had to name an achievement,” he reflects, “it would be helping shape future doctors, researchers, and policymakers who recognize how bioethics and the humanities matter.”
As he sees it, the values of justice, humility, and critical thinking don’t just belong in theory—they belong in the everyday decisions that affect people’s health and lives.
For students, faculty, and community members alike, DeCamp’s message is both a challenge and an invitation: be informed, get involved, and don’t underestimate the power of ideas to create a more just, ethical, and inclusive future.
Dive Deeper
Learn more about the chatbot research by Matt DeCamp and PRAs Jess Ellis, Marlee Akerson, Ahmed Alasmar, Mika Hamer, Natalie Dellavalle, and others!
Ellis J, Hamer MK, Akerson M, Andazola M, Moore A, Campbell EG, DeCamp M. Patient Perceptions of Chatbot Supervision in Health Care Settings. JAMA Netw Open. 2024 Apr 1;7(4). PMID: 38687483

Ellis JR, Dellavalle NS, Hamer MK, Akerson M, Andazola M, Moore AA, Campbell EG, DeCamp M. The Halo Effect: Perceptions of Information Privacy Among Healthcare Chatbot Users. J Am Geriatr Soc. 2025 May;73(5):1472-1483. PMID: 39936842

Akerson M, Andazola M, Moore A, DeCamp M. More Than Just a Pretty Face? Nudging and Bias in Chatbots. Ann Intern Med. 2023 Jul;176(7):997-998. PMID: 37276595