Department of Medicine

‘The Greatest Experiment’: How is AI Changing Health Care?

Written by Tayler Shaw | March 19, 2026

From taking notes during doctor’s appointments to helping answer clinicians’ questions as they treat patients, artificial intelligence (AI) tools are already making substantial changes to the health care industry. At the University of Colorado Anschutz Department of Medicine, faculty members are not only embracing these changes — they’re innovating, researching, and implementing AI tools to improve the field for clinicians and patients alike.

To discuss the ways AI can make a difference in the health care system, the department welcomed Robert Wachter, MD, professor and chair of the Department of Medicine at the University of California, San Francisco, for a Grand Rounds presentation in March. As a past president of the Society of Hospital Medicine, former chair of the American Board of Internal Medicine, and elected member of the National Academy of Medicine, Wachter is a nationally recognized health care leader. His new book, “A Giant Leap: How AI is Transforming Healthcare and What That Means for Our Future,” offers unique insights into the opportunities and challenges that AI presents.

“I think we are in the early stages of probably the greatest experiment in the history of medicine,” Wachter told attendees during his presentation. “Not only are these tools remarkably good at what they do and getting better fast, but health care needs it. In fact, I think health care’s desperate need for transformation is part of what makes me particularly optimistic.”

Cardiac electrophysiologist Michael Rosenberg, MD, an associate professor in the CU Anschutz Division of Cardiology, shares that informed optimism. As medical director of the division’s clinical sciences program and the UCHealth University of Colorado Hospital electrocardiogram (ECG) laboratory, Rosenberg has witnessed the evolution of AI in the clinical and research spaces.

“My overall goal is to figure out how to use technology to make life easier and help inform decision-making for treating patients,” Rosenberg said. “Our division focuses on both the technical side of AI and implementation science, critically thinking about how we can actually use these tools to improve outcomes for patients. I think CU Anschutz is really well-poised to conduct this important research.”

It’s the same type of work being done in the CU Anschutz Division of Hematology by faculty members like Andrew Kent, MD, PhD, an assistant professor who focuses his clinical and research work on improving care for patients with myeloid disorders.

“We’ve been early adopters of AI in our division,” Kent said. “We’re embracing this technology in a deliberate way so that we are hopefully helping its implementation become better and more useful.”

Left to right: Robert Wachter, MD, chair of the Department of Medicine at the University of California, San Francisco, smiles alongside Vineet Chopra, MBBS, MD, MSc, chair of the CU Anschutz Department of Medicine, after a Grand Rounds presentation. Image taken by Jesica Whittaker.

Accelerating work with AI

One of the reasons Wachter believes AI can help improve the health care system is that there are “not enough humans to do these jobs,” he said.

He pointed to two areas in particular where AI is needed. The first is using AI as a medical scribe that listens and takes notes during patient appointments, reducing the time clinicians spend on data entry. The second is AI chart summarization, where an AI tool scans a patient’s medical records and creates a summary that the clinician can quickly review.

“My own belief is that AI is actually our only hope to do the things that patients need us to do,” Wachter said.

AI is already helping accelerate the workflow of cardiologists at CU Anschutz. In Rosenberg’s work as a cardiac electrophysiologist — a doctor who addresses issues with the heart’s electrical system and irregular heartbeats — technology is a critical component of clinical care, whether it be analyzing X-rays, electronic health records, or ECGs (a test that records the electrical signals in the heart). The lab Rosenberg leads typically analyzes between 2,000 and 2,500 ECGs each week, which is only possible with the assistance of AI.

“If I had to sit down and write an interpretation of every ECG, it would take all my time. By using rule-based automation systems that we program, this work can be completed much faster,” Rosenberg said. “However, we don’t just rely on what the computer says. We always have a cardiologist read over them to ensure there are no mistakes.”

Similar to the cardiology division, the hematology division at CU Anschutz is applying AI tools to improve its clinical and research work. Each year at CU Anschutz, hematologists like Kent care for hundreds of patients with blood disorders and perform around 100 stem cell transplants. For almost 15 years, the clinicians have collected data about each of these patients to help determine which patients responded to treatment, which did not, and the potential reasons why.

“Now, we can use AI to say, ‘I want you to analyze this database from a certain perspective to answer this question.’ And within minutes, it can deliver an answer that would have taken days or weeks for us to find,” Kent said. “We’re also using conventional tools to analyze the data alongside the AI, so we can compare those results head-to-head.”

Kent then uses discoveries made by the clinic to help inform research, such as projects that study samples from patients to understand on a biological level why certain treatments work and others do not.

“Then, I help take those lab discoveries and create clinical trials so we can apply their findings to see if it translates into new therapies for patients,” he said.

Andrew Kent, MD, PhD, second from left, is a physician-scientist, assistant professor of hematology, and CU Anschutz Cancer Center member. Image taken by Justin LeVett.

Improving clinical decision-making

The use of AI as a tool to support clinical decision-making continues to evolve, Wachter explained.

“To me, this may turn out to be the most important issue — the degree to which AI enables us to give the right information to the right person at the right time in ways that fit into the workflow,” Wachter said.

Rosenberg has noticed strong interest in integrating AI into clinical decision support tools. He is currently involved in several research projects that focus on various domains of AI. For example, he is helping develop multimodal language models (AI models trained on large datasets to interpret and generate multiple types of data, such as text and images) that can scan, analyze, and answer questions about ECGs. He’s also contributing to projects that focus on reinforcement learning, where AI models learn from experience and trial and error.

“I’ve spent a lot of time thinking about how to balance what is exciting to the data scientists and engineers with assessing how these tools can actually be applied by clinicians to help patients,” Rosenberg said.

To improve clinical decision-making, Kent and his colleagues are developing interactive AI tools that clinicians can use to answer questions they have about patient care.

“Hypothetically, you could ask AI, ‘Based on our database, what would be the best treatment for this patient?’ It will take information from what is published online, what is within our established national guidelines, and from the outcomes in our database to create a synthesis of that information and share recommendations,” Kent said.

The division is also working to integrate AI into its website so that patients can access real-time information about their conditions.

“We want to make our data accessible to patients so they can learn more about our team and the care they will receive. Patients should be able to easily click through the website and find relevant, up-to-date information,” Kent said. “We hope it builds community engagement by improving how we communicate our skillsets and experience with patients.”

The potential challenges

Wachter’s biggest concern is AI being used to spread misinformation and disinformation, noting that it’s possible to create deepfake videos that portray people saying and doing things they never actually did.

As it currently stands, AI is “right often enough to be useful and wrong often enough to not be entirely trusted,” Wachter said.

Kent emphasized the importance of humans verifying what AI produces, noting that AI large language models can “hallucinate,” confidently generating misleading or incorrect information.

However, Wachter warns that human verification does not guarantee accuracy, as humans make mistakes as well.

“It’s something we should be thinking hard about,” Wachter said, underscoring the need to test different models of verifying information.

Other concerns include cybersecurity, health and data privacy, and the potential biases that AI can have. Kent and Rosenberg explained there need to be guardrails to ensure that outside companies, such as pharmaceutical or insurance companies, are not skewing the results that AI models deliver.

“ChatGPT doesn’t really care what happens to you, whereas your doctor cares. Large language models can be influenced very subtly to produce answers that reflect certainty that is not always supported by science,” Rosenberg said. “I worry a bit about the potential of commercial influences, especially if not enough clinicians are involved in the technical development and oversight of AI.”

Regarding concerns that AI may take over the role of clinicians, Rosenberg, Kent, and Wachter agreed that they do not see this happening in the near future.

“AI will not replace clinicians anytime soon,” Kent said. “There are many things that a human brain does better than a computer in terms of operating on limited information, balancing risks and benefits when you don’t have every single data point, and making decisions that will protect the patient through human understanding and compassion.”

Why patients should also use AI

Though AI is not guaranteed to always deliver accurate results, these tools are still useful for clinicians, researchers, and patients.

“You don’t know what you’re dealing with until you use it,” Rosenberg said. “Once you start using it, you can better identify what AI does well and where it may make mistakes.”

For patients asking AI medical questions, Wachter encourages them to pose the same question to two different AI tools and compare the answers to verify the information they receive.

Many of Kent’s patients have reported using tools like ChatGPT to seek medical advice, he said, and the information they receive is often useful.

“I always encourage people to use and supplement what their doctor says with additional information,” he said. “To help prevent misinformation, you can also give AI strict rules and guidelines when you ask it questions, such as by telling it to stick to the facts.”

No matter who is using AI, Kent encourages all users to be conscientious consumers.

“The technology of AI is rapidly improving and the potential for its benefit to medicine and mankind is undeniable,” Kent said. “The way it is used now is the tip of the iceberg, and being an early adopter means also being an early skeptic. Part of adoption is early vetting, early trials, and early improvements on it that we can contribute to.

“AI has the potential to improve various aspects of medicine, so we are trying to take advantage of it and help it get to the point where it does this in a more trustworthy and safe way.”