
AI in Healthcare: Results Over Hype

National AI expert provides vision of technology’s future in the field and overview of the unique challenges ahead

by Matthew Hastings | May 5, 2025
A black and white photo of a doctor and patient speaking. Behind them, a collage of a graph layout alongside interconnected circles filled with solid colors, handwriting, mathematical equations and lines of code.

Headlines about artificial intelligence often blare big promises and underscore the breakneck pace of this evolving technology. 

Within healthcare and medical research, the application of AI is likewise focused on the possibilities, though with a more measured approach. The marriage is solidly anchored in improving results for both patient and provider.

“I think there are a lot of conversations about AI replacing your doctor,” said Casey Greene, PhD, a professor in the Department of Biomedical Informatics at the University of Colorado School of Medicine and a national expert on computational biology and artificial intelligence. “I think what gets me excited is not AI replacing your doctor. It's helping your doctor spend more time with you and less time in the chart.”

Greene, also the founding director of the Center for Health AI, said that AI works well in healthcare because it integrates seamlessly into existing areas. “There’s an opportunity to assess AI’s reliability in the health sciences,” Greene said. “After its reliability is proven, it can gain that trust with both providers and patients.”

Editor's note: This story is the first in an ongoing series on artificial intelligence in the health sciences. The series will feature scientists and providers who are using these technologies at CU Anschutz to pursue new avenues of research and better care for patients.

That trust, in Greene’s eyes, is built through both innovative thinking and deployment of these technologies to demonstrate and prove their worth in the real world – not just on the computer. That goes hand in hand with following strict data and privacy standards and using data that is representative of the patient populations being served. 

Casey Greene, PhD, professor in the Department of Biomedical Informatics and founding director of the Center for Health AI, stands in front of a series of monitors.

In the following Q&A, Greene lays out where artificial intelligence in healthcare currently stands, future developments he sees on the horizon, as well as the opportunities – and potential pitfalls – these technologies have in store for both providers and patients. 


Where does AI stand in terms of research and patient care?

Overall, AI is being actively used and developed for both research and care. AI technologies have proven useful in high-volume, data-rich areas – common features of imaging or hospital operations. Sometimes AI is helping the nuts and bolts of healthcare work better. But broader clinical integration is still uneven.

On the care side, it's bringing potentially new insights to bear in several ways. One is early identification, through AI-assisted monitoring, of patients at risk of deterioration. There are also ways that AI systems help patients and providers connect while keeping the provider in the loop. This isn’t a situation of something that’s autonomously pretending to be your doctor; rather it’s a series of systems being studied in pilots that help the provider better respond to patients. The doctor, involved at each step, responds more quickly, effectively and accurately. 

On the research side, we're seeing an explosion in new potential, especially as the barrier to entry has dramatically shrunk through improved technologies. One example of an AI advance accelerating research is a program and AI tool called AlphaFold, which excels at predicting protein structures. These structures help in areas like drug discovery, allowing efficient computational screening of potential drugs. 

Especially in research settings, we are seeing AI's accelerating use in variant-effect prediction: examining the points where individuals' genomes differ and predicting the biochemical impact of each difference. In the longer term, you could imagine trying to identify which variant in a genome could be contributing to a rare disease. Predicting the effect of each variant could reveal key clues. Ultimately, this could lead to better diagnosis and treatment of these conditions.

What are some examples of things that you see coming into this space in both the near and long-term?

On the near-term side of things, I’ll say there's a lot of growth in what I’d call “quiet tools.” These would sit within existing systems and help take pressure off clinicians, reduce friction and improve the connection between providers and their patients. 

AI scribes are one example. Wouldn’t it be helpful for providers to be able to look at and focus on their patients while they meet instead of the providers needing to look at a computer while taking notes? What if an AI could then draft the summary, enabling providers to focus on ensuring that the most important elements are included?

Another area is inbox management – helping providers respond to their patients more quickly and comprehensively. It may not be flashy, but at the same time, it's improving the human connection. For any such system, it's critically important that the first draft produced with AI support is a solid starting place; otherwise, we've added the extra work of correcting a computer on top of connecting with patients. Yanjun Gao, a faculty member in our department, has worked on improving such systems to be better for patients and providers.

AI and imaging have been radically transformed over the past decade and a half by the advent of deep-learning algorithms. These algorithms are starting to reach the point of supporting providers in areas where imaging is key. 


In the longer term, I think we'll see tools that can help with earlier identification of certain diseases, including those where early detection is challenging but interventions applied at the earliest stages have the best outcomes – such as Alzheimer's. Catching these diseases earlier and intervening quickly could help extend extra quality years of life. Still, we need to ensure these approaches are proven out through clinical trials.

AI excels at bringing together complex data from multiple sources. To this point, we’ve largely discussed data from patient encounters, the electronic health record, or imaging, but there are also opportunities to use data from other systems – think wearables like your watch or phone, alongside genomics.

How does AI in the health sciences and healthcare vary from other fields?

It comes down to approach. Within the health sciences, AI needs to be rigorously evaluated. This type of evaluation doesn't make for hype-driven headlines. Such systems are developed alongside patients and providers rather than over the objections of potential users, and they are ultimately grounded in the understanding of human physiology and disease.

Tell Bennett’s work in developing new pediatric sepsis criteria is a good example. It’s using cutting-edge technologies, yes, but in careful and considered ways. It’s also done with an eye toward practicality now and in the future, designed to work in settings ranging from state-of-the-art children’s hospitals to anywhere children receive care.

How do you think about managing hype around AI?

We seek to make CU Anschutz the place where the rubber meets the road. We can’t get caught up in overpromising or hype. We must always center ourselves and ask: “Is this driving the research forward and leading to new treatments and better understanding of health?” It’s fundamentally a different challenge in healthcare. If there’s an error in a generated picture of a cat – the impact is not that high. If there's an error in patient care, it's a different ballgame.

What are the challenges around building a data set for AI in healthcare?

Health data is fragmented and standardization takes real effort. The sepsis project, for example, pulled together over 3.5 million pediatric records across five countries and three continents – that’s not trivial. 

At CU Anschutz, we’ve built Health Data Compass, which was the first research data warehouse in Google Cloud. This and associated infrastructure make it possible to create large, high-quality and well-governed data sets to support research and care innovations. 

At the same time, it’s about striking a balance between pursuing innovations and fully locking down data. With these technologies, you’ve got to establish norms and expectations. It will be important for each academic medical center to navigate this carefully: "Yes, we need to ensure that we can take advantage of technology at the forefront without putting our patients and students at risk."

How does CU Anschutz handle privacy and data concerns?

The stewardship here is considerable because privacy is non-negotiable. The federal Health Insurance Portability and Accountability Act (HIPAA), research regulations, de-identification and consent all shape this conversation. Data identifying a patient is protected – and we treat it that way. At CU Anschutz, Health Data Compass governs access based on the use case, with tiers ranging from de-identified datasets to limited-use personal health information.

Our research happens inside secure environments with oversight and role-based access. For us, it’s not just about compliance – it’s about earning trust. For example, one person can’t access all the data available. Access needs the buy-in of our regulatory folks. 

How do you maintain AI as a tool and not allow it to become a crutch?

That's absolutely vital. We want to reduce friction but not critical thinking. There are areas where humans excel, and we want to ensure we’re supporting that and not replacing it. 

The sepsis criteria I talked about earlier are a great example. It’s not replacing a clinician’s judgment, as they always know their patients best, but giving them additional integrated information that can support decision-making. 

Also, for medical students coming up alongside these technologies, it’s important to maintain objectivity. Last year I taught a class with Sean Davis, MD, PhD, a professor in our Department of Biomedical Informatics, where our approach was to teach students to question the AI models being used. How was the model trained, how is it evaluated, what data was used, and when should you trust it? Those questions should be at the front of your mind. We tried to remind them that things can look great in a tech demo, but as a future provider, you always have to think about the rigors of deployment in a care setting.

What makes you the most uneasy about this field? What are you most excited about?

One-size-fits-all does not work in healthcare. We need to keep an eye on bias, transparency and long-term usability – not just model performance on day one. Importantly, this extends to our data – we must ensure it is representative. If we were building an AI system that was solely based on data gathered, for example, at an academic medical center and didn't bring in data from community hospitals, I would be worried about building an AI tool that then gets deployed in a community hospital setting, because it may not be as useful. Or it may, in fact, be inaccurate in ways that are difficult to detect.

On the other hand, I’m excited when we see AI go from the lab to the bedside – especially when it leads to earlier diagnoses, less burnout, or fewer adverse events. It’s an exciting time to be in this field, working with clinicians to bring better care to our patients – bringing dreams into reality. 

Featured Experts

Casey Greene, PhD