Artificial intelligence (AI) assistants are everywhere in health care.
They’re helping researchers detect diseases earlier. They’re drafting replies for doctors who spend increasing amounts of time answering patients’ electronic messages. They’re even at the center of a family dinner for Casey Greene, PhD, professor and founding chair of the Department of Biomedical Informatics at the University of Colorado School of Medicine.
“One of the things that's changed my job over the past 18 months is that I used to struggle to get people excited about the intersection of AI and medicine. Then, as soon as ChatGPT came on the scene, I went from the person no one wants to talk to at the Thanksgiving table to the person people won't stop talking to at the Thanksgiving table,” says Greene, who dived into what AI means for the future of medicine at the April installment of the Transforming Healthcare lecture series on the CU Anschutz Medical Campus.
He and fellow researchers CT Lin, MD, professor of internal medicine and chief medical informatics officer at UCHealth, Jayashree Kalpathy-Cramer, PhD, professor of ophthalmology, Tell Bennett, MD, MS, professor of biomedical informatics, and Matthew DeCamp, MD, PhD, associate professor of internal medicine in the Center for Bioethics and Humanities, discussed how they’re using AI and algorithms to solve some of medicine’s toughest problems while also thinking about the ethical dilemmas these new technologies present.
The group agrees that AI is a powerful tool that’s forever changing the world of health care — often for the better — but big questions remain about this evolving technology and what it can do to improve human health.
“The standard of care should not be standardized care,” Greene says. “That's one of the things that drew me to CU. There is this pervading philosophy across the institution that we can do better.”
In the Colorado Center for Personalized Medicine, which Greene currently directs, that means investigating genetic variants to ensure that a drug is the right fit for a patient. When a health care provider orders a genetic test, the center will deliver results back via an electronic health record. This gets physicians “the right information at exactly the right time and allows them to make decisions that work,” Greene explains.
In the Department of Ophthalmology, faculty members are using AI in imaging to learn more about disease and better diagnose patients.
“We’re seeing AI implemented throughout the workflow,” says Kalpathy-Cramer, who is chief of the Division of Artificial Medical Intelligence in Ophthalmology. “We’re seeing it used from the time the patient comes into the clinic, to how best to take the image, to getting a better diagnosis, and then to see how patients are responding to treatments.”
AI has been especially useful in diagnosing retinopathy of prematurity (ROP), which is a leading cause of preventable blindness around the world. AI can help physicians cut down on stressful exams for fragile premature babies, Kalpathy-Cramer says.
“There are many parts of the world where there is limited access to pediatric ophthalmologists, so we are hoping that AI can help,” she says.
Kalpathy-Cramer notes that advances in imaging technology are also helping researchers diagnose diseases by looking at the eyes.
There are already countless examples of how AI has made a difference in research, even proving helpful in modernizing the diagnostic criteria for sepsis, a dangerous condition, in children. Bennett, who led a large AI project to update the criteria, is now working with partners on an app that can be deployed in low-resource environments.
“I've spent the last seven days working in the ICU and I used the new criteria every day,” he says. “We know this is having real impact.”
With more possibilities come bigger questions, many of them about the ethics of using AI, prioritizing transparency, and alleviating bias.
“Our North Star, from my standpoint of a primary care physician, is patient-centered, high-quality, equitable care delivered through caring relationships,” DeCamp says. “I think if we keep this North Star in mind, it can help us as we think through some of the challenges of using AI in practice.”
Keeping that North Star in mind can help health care providers resist technology’s temptations, he continues. It might mean using electronic health records to strengthen communication rather than to serve competing interests. It’s also crucial to ensure health care spaces remain sites of healing.
“We want to ask not only whether we can, but whether we should,” DeCamp says.
For DeCamp and his colleagues, the use of AI boils down to making medicine better in the lab, the clinic, and the classroom.
“Ethics should be able to help us modify our tools, refine our tools, and to do a better job,” he says. “Sociological research reminds us that whether it's the computer, the written language, the number, the automobile, or more, the tools we use don't just change what we do. Unchecked, they change who we are and what we value. It's up to us to be sure it's for the better.”