<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=799546403794687&amp;ev=PageView&amp;noscript=1">

What Is Artificial Intelligence in Healthcare?

A leading AI researcher at CU Anschutz explains how the technology works


by Matthew Hastings | September 12, 2022
What you need to know:

Have you wondered what “machine learning” or “neural networks” mean? As the technology advances and becomes more commonplace, Casey Greene, PhD, explains the core concepts behind the field and provides examples of artificial intelligence in healthcare.

Casey Greene, PhD, chair of the University of Colorado School of Medicine’s Department of Biomedical Informatics, is working toward a future of “serendipity” in healthcare – using artificial intelligence (AI) to help doctors receive the right information at the right time to make the best decision for a patient.

Finding that serendipity begins with the data. Greene said the Department’s faculty works with data ranging from genomic sequencing to cell imaging to electronic health records. Each area has its own robust constraints – ethical and privacy protections – to ensure that the data are being used in accordance with people’s wishes.

His team uses petabytes of sequencing data that are available to anyone, Greene said. “I think it’s empowering,” he said, noting that anyone with an internet connection can conduct scientific research. 

Following the selection or creation of a data set, Greene and other AI researchers at the CU Anschutz Medical Campus begin the core focus of AI work – building algorithms and programs that can detect patterns. The goal is to find links in these large data sets that ultimately offer better treatments for patients. Still, human insight brings essential perspectives to the research, Greene said. 


“The algorithms do learn patterns, but they can be very different patterns – and can become confused in interesting ways,” he said. Greene used a hypothetical example of sheep and hillsides, two things often seen together. Researchers must teach the program to separate the two items, he said. 

“A person can look at a hillside and see sheep and recognize sheep. They can also see a sheep somewhere unexpected and realize that the sheep is out of place. But these algorithms don't necessarily distinguish between sheep and hillsides at first because people usually take pictures of sheep on hillsides. They don't often take pictures of sheep at the grocery store, so these algorithms can start to predict that all hillsides have sheep,” Greene said. 

“It's a little bit esoteric when you're thinking about hillsides and sheep,” he said. “But it matters a lot more if you're having algorithms that look at medical images where you'd like to predict in the same way that a human would predict – based on the content of the image and not based on the surroundings.” Encoding prior human knowledge (“knowledge engineering”) into these systems can lead to better healthcare down the line, Greene said.
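To make that confound concrete, here is a toy sketch in Python. Everything in it – the two-feature “images,” the dataset sizes, the scikit-learn model – is invented for illustration and is not drawn from Greene’s work:

```python
# Toy sketch of the sheep/hillside confound. Each "image" is reduced to two
# hypothetical binary features: [sheep_texture_present, background_is_hillside].
import numpy as np
from sklearn.linear_model import LogisticRegression

# In training, sheep only ever appear on hillsides, so the two features are
# perfectly confounded (made-up data).
X_train = np.array([[1, 1]] * 50 + [[0, 0]] * 50)  # sheep-on-hillside vs. sheepless grocery store
y_train = np.array([1] * 50 + [0] * 50)            # 1 = "sheep", 0 = "no sheep"

model = LogisticRegression().fit(X_train, y_train)

# Out-of-distribution cases the model never saw during training:
tests = np.array([
    [1, 0],  # a sheep at the grocery store
    [0, 1],  # an empty hillside
])
print(model.predict_proba(tests)[:, 1])
# Both probabilities come out near 0.5: the model split its weight between
# "sheep" and "hillside" and cannot tell which feature actually signals a sheep.
```

In this sketch the model is not wrong about its training data; it simply has no way to know which of the two correlated features a human would consider the real signal.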

And when it comes to AI in healthcare, Greene said it is key to have open models and diverse teams doing the work. “It gives others a chance to probe these models with their own questions. And I think that leads to more trust.”

In the Q&A below, Greene provides a general overview of the terms and technology behind AI alongside the challenges he and his fellow researchers face.


What is artificial intelligence in healthcare?

I'll define AI systems as those that sift through large amounts of data and aim to either identify or explain the patterns seen in those data. These systems can make a difference in how we discover new drugs, how we deliver care, and how we solve other challenges in biology and medicine.

What is “big data”?

It’s a trite answer, but: data bigger than most people deal with. It turns out to be a moving target, as researchers get better and better at handling data that one might consider “big.” Those once-big data sets become routine – and then the next wave of big data is an ever-larger set.

When we think about big data in the Department, we're usually thinking about either data derived from healthcare or data derived from some sort of molecular or imaging-based profiling.

In biology and medicine, some key technologies have accelerated the rate at which we gather data. Things that were infeasible two or three years ago are now possible. These include new methods to characterize the genome, as well as those that touch on healthcare – such as electronic health records.

What is “machine learning”?

Machine learning describes many techniques used to identify patterns in very large collections of data. 

I think the best way to think about machine learning is as a set of algorithms that sift through large amounts of data, either to discover new groupings in the data or to make predictions based on those data. A key distinction is that the patterns – whether for discovery or prediction – are derived from the data themselves rather than from a set of rules pre-coded by a programmer.
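As a rough sketch of that distinction, the toy Python example below contrasts a rule a programmer codes by hand with a threshold a model derives from labeled examples. The data and the 12.5 cutoff are made up for illustration:

```python
# Hand-coded rules vs. patterns learned from data (invented toy example).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Rule-based approach: a programmer decides the cutoff up front.
def rule_based(value):
    return 1 if value > 10.0 else 0

# Machine-learning approach: the cutoff is derived from labeled examples.
rng = np.random.default_rng(0)
X = rng.uniform(0, 20, size=(200, 1))  # a made-up 1-D measurement
y = (X[:, 0] > 12.5).astype(int)       # the "true" pattern hidden in the data

model = DecisionTreeClassifier(max_depth=1).fit(X, y)
print(model.tree_.threshold[0])  # ~12.5, recovered from the data, not pre-coded
```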

What is “deep learning”?

Deep learning is a type of machine learning – essentially machine learning that stacks pattern discovery and pattern use on top of each other. For example, a deep learning algorithm might take very low-level data, identify patterns in those data, use those patterns to make a prediction, and then use the correctness of those predictions to refine the patterns.
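A minimal sketch of that loop, using PyTorch and an invented toy task (XOR), might look like the following. None of this comes from Greene's work; it just shows predictions flowing forward and corrections flowing back to refine the patterns:

```python
# A tiny deep-learning loop: discover patterns, predict, refine (toy example).
import torch
import torch.nn as nn

X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])  # XOR labels for the toy task

model = nn.Sequential(
    nn.Linear(2, 8), nn.ReLU(),    # layer 1: discovers intermediate patterns
    nn.Linear(8, 1), nn.Sigmoid()  # layer 2: uses those patterns to predict
)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for step in range(2000):
    pred = model(X)           # make predictions from the current patterns
    loss = loss_fn(pred, y)   # measure how wrong they are
    opt.zero_grad()
    loss.backward()           # push the error back through the layers...
    opt.step()                # ...and refine the patterns accordingly

print(model(X).round())       # ~[[0], [1], [1], [0]]
```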

What are “neural networks”?

Neural networks are one particular framing of how to train a computer to learn from data. They’re inspired by biological neurons, but it’s a very loose inspiration.

They are currently the dominant technology for deep learning, in part thanks to modern graphics cards. The same hardware that helps produce photo-realistic video games is remarkably good at performing math on matrices quickly – which is how we train neural networks. 

The other thing that's nice about neural networks is that they can be readily stacked and trained together. Rather than having to develop a pattern detector and a predictor separately, you can let your end task refine the patterns you're discovering along the way.
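As an illustration of why matrix-friendly hardware matters here, the short NumPy sketch below (with arbitrary, made-up layer sizes) shows that a forward pass through stacked layers is just repeated matrix multiplication:

```python
# A forward pass through stacked layers is repeated matrix math (toy sizes).
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 100))  # a batch of 32 inputs, 100 features each

# Three stacked layers, each just a weight matrix plus a simple nonlinearity.
layers = [rng.standard_normal((100, 64)),
          rng.standard_normal((64, 64)),
          rng.standard_normal((64, 10))]

h = x
for W in layers:
    h = np.maximum(h @ W, 0)  # matrix multiply, then ReLU

print(h.shape)  # (32, 10): one 10-number output per input
```

Graphics cards accelerate exactly this kind of operation, which is why the same hardware behind photo-realistic games also trains neural networks quickly.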


Featured Experts

Casey Greene, PhD