
How Artificial Intelligence is Changing Health Care

CU School of Medicine faculty address the challenges and benefits of artificial intelligence.


by Kara Mason | October 25, 2023

In nearly every corner of the University of Colorado Anschutz Medical Campus – in clinics, in classrooms, in offices, and in laboratories – faculty members and students are thinking about the power artificial intelligence, or AI, holds in health care, from finding treatments for rare diseases to developing machine learning standards to helping ophthalmologists assess patients.

On a Wednesday morning in August, the waiting room of the Sue Anschutz-Rodgers Eye Center is half-full by 8 a.m. As patients are called into clinic rooms where physicians take photos of their eyes, AI is already in play.

“Ophthalmology patients may currently encounter AI in their care through AI-assisted diagnostic tools, such as automated retinal image analysis for early disease detection,” explains Malik Kahook, MD, professor of ophthalmology and the Slater Family Endowed Chair in Ophthalmology at the University of Colorado School of Medicine. “This technology is evolving rapidly as new algorithms and approaches are being developed and integrated into clinical practice.”

It’s a common theme in nearly all aspects of medicine. These are a few ways faculty members in the CU School of Medicine use, study, and think about AI’s future in their work and in patient care.

Improving care

When Kahook, a prolific inventor, is designing and building physical devices to be used in the operating room or in the clinic, he’s also thinking about what AI can offer to the process. Earlier this year, when Kahook asked ChatGPT about unmet needs in ophthalmology, the language processing tool replied: “We currently lack pharmaceutical methods for treating ocular diseases with precision and efficacy, where drugs are delivered directly to the targeted tissues, and where patients experience fewer side effects without compromises.”

That AI-born response became the beginning of a presentation Kahook gave in February, when he unveiled promising results for SpyGlass, a device he invented that can be injected directly into the eye to deliver glaucoma medication for up to three years.

While AI’s use as a tool to improve patient care evolves, a major question is how it compares to the physicians who are experts in identifying and treating disease. Kahook says he believes the technology could help make a diagnosis more objective. Still, challenges do exist.

“The pros of using AI to achieve a diagnosis include reducing variability among clinicians, potentially leading to more consistent and accurate diagnoses,” he says. “It can also augment clinicians’ expertise and speed up the diagnostic process. However, the cons involve potential over-reliance on AI, overlooking subtle nuances that experienced clinicians might notice, and ethical concerns regarding AI’s role in decision-making.”

Like many, Kahook sees AI as another tool to improve medicine and patient care.

“The fact that this technology is rapidly evolving means that the clinical setting can benefit from increasingly advanced tools that augment current capabilities,” he says. “AI can assist in more accurate diagnostics, personalized treatment plans, and efficient administrative tasks, ultimately improving patient care and outcomes.”


Malik Kahook, MD, in a clinic at the Sue Anschutz-Rodgers Eye Center where AI is already a part of patient care. Photo by Justin LeVett.

There’s also the possibility for AI to help uncover disease at an earlier point of development, ultimately giving doctors more time to slow progression and address symptoms, and advancing research aimed at preserving vision.

“For physicians, this would mean the ability to diagnose and treat patients at an earlier stage, leading to better outcomes and quality of life,” Kahook says. “Additionally, such early detection could accelerate research efforts by providing large datasets for studying disease progression and treatment responses.”

Machine learning

The continual addition of large datasets and the application of the resulting analysis – a process known as machine learning – can be used to enhance the power of physicians and health care professionals. Machine learning ranges from relatively simple implementations, such as using closed captioning on a video call with a patient, to complex ones, like discovering new personalized medicine treatments for rare diseases.

Casey Greene, PhD, founding chair of the Department of Biomedical Informatics (DBMI), offers a simple definition of what machine learning does: “A computer program’s encounter with the data changes the model.” But how faculty apply this type of AI is anything but simple.
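Greene’s definition can be made concrete in a few lines of code. The sketch below is purely illustrative (the data, parameter, and learning rate are invented for this example and come from no CU project), but it shows a model, here a single slope, changing each time the program encounters a data point:

```python
# A toy illustration of Greene's definition: the program's encounter
# with each data point changes the model. The "model" here is a single
# slope parameter for y ≈ w * x, nudged by stochastic gradient descent.
# Data, parameter, and learning rate are invented for illustration.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (x, y) pairs

w = 0.0            # the model: one parameter, initially uninformed
learning_rate = 0.05

for epoch in range(200):
    for x, y in data:
        error = w * x - y               # how wrong the model is on this example
        w -= learning_rate * error * x  # the data changes the model

print(f"learned slope: {w:.2f}")  # settles near 2, the trend in the data
```

The same principle scales from this one-parameter toy up to the billion-parameter models behind tools like ChatGPT.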

Assistant professor Milton Pividori, PhD, developed Clustermatch, a machine learning method that can identify non-linear correlations between two variables, such as sex-specific effects, much more quickly than existing methods, allowing him to delve into previously out-of-reach genome-wide datasets. Fan Zhang, PhD, assistant professor of medicine with a secondary appointment in the department, uses machine learning to identify new target treatments for rheumatoid arthritis, which currently has no cure or widely successful therapeutic options.
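To make “non-linear correlation” concrete, the sketch below illustrates the general idea behind clustering-based correlation measures such as Clustermatch. It is a deliberately crude stand-in, not Pividori’s published method; the data, bin counts, and scoring choices are invented for illustration. It shows how agreement between partitions of two variables can reveal a strong non-linear relationship that Pearson correlation misses:

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1000)
y = x ** 2 + rng.normal(0, 0.01, 1000)  # strong, but non-linear and non-monotonic

def quantile_partition(values, k):
    """Assign each value to one of k roughly equal-sized, rank-based bins."""
    ranks = np.argsort(np.argsort(values))
    return ranks * k // len(values)

def partition_agreement(a, b, max_k=6):
    """Best agreement between quantile partitions of a and b over several
    bin counts -- a crude stand-in for clustering each variable."""
    return max(
        adjusted_rand_score(quantile_partition(a, k), quantile_partition(b, k))
        for k in range(2, max_k + 1)
    )

print(f"Pearson r:       {np.corrcoef(x, y)[0, 1]:+.3f}")   # near zero
print(f"partition score: {partition_agreement(x, y):+.3f}")  # clearly positive
```

On this synthetic data, Pearson’s r comes out near zero while the partition score is clearly positive; picking up that kind of hidden signal efficiently is what makes such methods useful on genome-wide datasets.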

In the last decade, machine learning has advanced significantly. In 2014, computer scientists joked that it would take hours for a computer to identify a bird in a photo. Now, an app on your phone can monitor a birdfeeder in real time, notifying you when a bird arrives and telling you what type it is.

“AI can assist in more accurate diagnostics, personalized treatment plans, and efficient administrative tasks, ultimately improving patient care and outcomes.” - Malik Kahook, MD

A confluence of two forces in information technology contributed to the rapid improvement of machine learning, Greene says. First, a focus on gathering data raised the possibility of training larger models than ever before. Second, specialized computer hardware, largely based on graphics processing units like those used for video games, made processing that data feasible.

“We discovered that the types of neural networks that worked well on the problems we were working with had been developed for quite some time, but didn’t have a natural match in the computing sphere so that we could build them quickly and larger than what had been built before,” Greene says. “We also needed to be able to train them over larger amounts of data.”

The next frontier of machine learning in health care, Greene says, is multimodal integration: systems that can take input in one form, such as text, and generate output in another, such as an image. Systems for art and language, such as DALL-E and ChatGPT, have already been making waves. The next few years may bring more multimodal integration, where images, video, text, and other data are generated together to form a new kind of reality, which may have many biomedical applications.

For health care, Greene says he’d like to see a focus on the accuracy of machine learning. That will likely require hybridizing AI methods from a few decades ago, which were heavily focused on reasoning, with the machine learning methods that are now prominent, he says.

There is also a need for health care professionals who understand AI. Greene notes that the need for data scientists who understand health care systems, electronic health records, medicine, and biology continues to grow.

“In health care, our data, processes, and the needs of our patients and providers are unique, so there’s a critical need for more scientists trained in these techniques,” he says.

Addressing bias

As AI technology rapidly evolves, so do questions from health care ethicists, like Matthew DeCamp, MD, PhD, associate professor of internal medicine at the CU Center for Bioethics and Humanities, who has spent the last several years studying bias in health care AI.

DeCamp says he had an “ah-ha” moment in 2019. After reading a book about the applications of AI in health care, he began noticing changes in his own primary care clinic.

“I remember thinking that these scenarios aren’t theoretical anymore. They’re actually happening,” he says. “In fact, there were technologies already being developed at that time and you could see the writing on the wall that there were going to be more AI-based tools coming into the clinic.”

One of those tools, the chatbot, has been a main focus for DeCamp. In June, he and other CU School of Medicine researchers published a paper in the Annals of Internal Medicine that challenges researchers and health care professionals to closely examine chatbots through a health equity lens and to investigate whether the technology improves all patients’ outcomes equally. The look of a chatbot, for example, can raise new questions of bias.

“Some health systems created chatbots as symptom-checkers,” DeCamp explains. “You can go online and type in symptoms such as cough and fever and it would tell you what to do. As a result, we became interested in the ethics around the broader use of this technology.”
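A symptom-checker of this kind can be pictured, in massively simplified form, as a lookup from reported symptoms to canned guidance. The toy sketch below is invented for this article; its rules and wording resemble no deployed or clinically validated system, but it shows the basic input-to-advice shape DeCamp describes:

```python
# A toy symptom-checker: the user reports symptoms, the bot returns
# canned guidance. Rules and wording are invented for illustration
# only; real systems are far more complex and clinically validated.
GUIDANCE = {
    frozenset({"cough", "fever"}): "Your symptoms may be flu-like. "
        "Consider contacting your primary care clinic.",
    frozenset({"chest pain"}): "Chest pain can be serious. "
        "Seek emergency care.",
}

def check_symptoms(reported: set[str]) -> str:
    """Return guidance for the best-matching rule, if any."""
    matches = [(len(symptoms & reported), advice)
               for symptoms, advice in GUIDANCE.items()
               if symptoms <= reported]
    if not matches:
        return "No guidance available; please contact a clinician."
    return max(matches)[1]

print(check_symptoms({"cough", "fever"}))
```

Even in a sketch this small, choices about which symptoms are recognized, how advice is worded, and what the default answer is show where bias could enter.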

Physicians’ own opinions about AI can also have a role in its effectiveness – another topic of interest for DeCamp. Who a patient’s doctor is could really matter. In an article published in the journal Science in July, DeCamp and a co-author write: “Although not specific to AI, in some prior studies, older physicians are less likely to follow algorithm-based recommendations, whereas in other studies, younger physicians override decision support more.”

“We see a lot of attention given to developing fair algorithms and fair datasets,” DeCamp says. “While that’s important, things can change when algorithms enter a clinical environment where there are new and different forms of bias, such as overly skeptical clinicians.”

Ultimately, DeCamp says, it’s not up to one sector of the health care industry to solve the bias that arises in AI tools. There are questions that everyone should consider.

“We have to ask ourselves, ‘Who are we forgetting? Who is left out by these tools?’ There’s a place in ethics that puts special emphasis on the way we treat those who are or who may be in the minority and are harmed by what we’re doing,” he says. “It’s important to not assume that just because something is better overall it’s better for everyone. There may still be individuals and groups who are harmed by the technology.”

Addressing those challenges should begin in the design process of new tools.

“We need to start demanding that AI proactively reduce disparities and inequities, not design it and wait for that to happen later,” DeCamp says.

Teaching AI

Most physicians and researchers can agree that AI isn’t going away, and as the technology becomes more integrated into research and patient care, medical educators are thinking about how to introduce ever-evolving AI into the classroom and what guidelines to set around it. Shanta Zimmer, MD, senior associate dean of education at the CU School of Medicine and professor of medicine in the Division of Infectious Diseases, says it is important for students to understand the limits of AI and to develop the critical thinking skills to know when to pull back from the technology and make decisions on their own – whether that is studying for a quiz or meeting with a patient.

Earlier this year, Zimmer wanted to know whether ChatGPT could be helpful in a medical education setting, so she sat down at a computer and conducted her own experiment.

“I opened ChatGPT — I’d never tried it before — and I said, ‘Give me an outline of the competencies for artificial intelligence and machine learning and medical education,’” Zimmer says. “I went to get a cup of coffee and came back a few minutes later to this beautiful outline.”

Zimmer compared the outline to a scholarly paper written by educators. She was impressed. “There were a lot of similarities,” she says. “It was done quickly, and I thought, ‘Well, that’s really very cool. I wonder if we should be doing that more as we’re developing guidelines for AI.’”

"I think that students will have to use these tools in the future, so rather than shutting things down, we should embrace it and develop some guidelines around how to use it." - Shanta Zimmer, MD

Zimmer, like others throughout the field of medicine, has some skepticism about the ability of AI to diagnose patients and return accurate results, but she says it’s worth introducing to students early, so they can also continue to grow and evolve along with the technology.

“We are thinking about ways to develop some assignments where students use artificial intelligence and then critique it, because we want them to understand that there may be things that are wrong,” Zimmer says. “A key for medicine and science in general is that we can’t have a top-down approach. We’ve got to create these tools together.”

This approach has led to discussions with students, who are already using AI programs for studying. Zimmer points out that there are many discussions about transparency, including how to cite the use of AI features like ChatGPT, just as one would with a book or journal article.

“There’s general concern and thoughts that maybe it shouldn’t be part of the curriculum, but I think that students will have to use these tools in the future, so rather than shutting things down, we should embrace it and develop some guidelines around how to use it,” she says.

Featured Experts

Matthew DeCamp, MD, PhD

Malik Kahook, MD

Casey Greene, PhD

Shanta Zimmer, MD