
Should I Pay Extra for AI to Read My Mammogram?

How your doctor is – and is not – using AI to take care of you


by Carie Behounek | July 18, 2025

During a routine mammogram screening at a stand-alone clinic, I was offered a choice to have my results sent to an independent company. For $120, the company promised an artificial intelligence-based scan of my mammogram to detect breast arterial calcification, a potential indicator of cardiovascular disease.

Insurance covered the mammogram itself. I opted not to pay extra for AI. But days later, I couldn’t help but wonder if I’d made the right choice – and what other AI-based offerings patients might expect while receiving routine care.

So I reached out to David Kao, MD, associate professor of cardiology at the University of Colorado School of Medicine and medical director of the Colorado Center for Personalized Medicine. He shared what we all should know about how doctors are using AI in patient care today.

This is the sixth article in an ongoing series on artificial intelligence in the health sciences. See other articles in the series.


How is AI being used in clinical care?

This question is easy, because the answer is that it’s not. There are companies with AI products that are trying to integrate into routine clinical care, but for several reasons, they haven’t entered the actual clinical care process.

The tools that are out there are direct-to-patient, such as health data from a wearable device or bespoke (custom) imaging tools. But when you walk into a doctor’s office, there is no AI being used in clinical decision making. 

So it's not being used in the clinic, but it was being offered in a stand-alone imaging center for screening. Why is this?

Well, that’s a direct-to-patient product. Even with imaging, AI isn’t being used for clinical decision making in a healthcare system. You can get scans that use AI to highlight an interesting finding, which is what you were offered. But even that isn’t technically usable for clinical decision making. The U.S. Food and Drug Administration (FDA) categorizes AI as a medical device. There are no clinical decision-making AI products cleared by the FDA. 

So what do we need to know about AI readings of imaging studies if we are offered one?

In cases where AI is being used to flag something in an image, a person will review it and agree or disagree with AI’s findings. In the case of freestanding imaging clinics, such as the place you went, it’s a little murkier. But if the AI detects something that’s off, you’d still need to go to a physician, and the physician would order another test to confirm. Plus, your insurance probably won’t cover AI reads from stand-alone imaging centers. My take is that at this point, you don’t need to pay out of pocket when your doctor’s office can provide the testing you need for an appropriate workup. 

"AI – especially in medicine – is dependent on the population that was used to train it." – David Kao, MD 

Are companies that offer AI-based imaging or screenings predatory?

They are not necessarily predatory, but I think people don’t understand what they’re getting. If it flags something, you still need to see your doctor, and you’ll need to repeat the study. You still need your image to be read by a human.

These types of places, which include freestanding health fairs, will offer testing that, in my opinion, sometimes leads people to freak out. For example, yesterday in the clinic, someone who had already had a heart attack told me a test showed their carotid artery thickness was elevated. They wondered whether that made them sicker than they already knew they were. The answer is no. But people believe these types of tests are telling them something we can’t work through in the clinic.

What other concerns do you have?

It’s not just about what the AI-based scan will find. It’s also what it does not find. Let's take an extreme example. Say you had an MRI of the brain, and AI read it and said there was nothing there. No human looked at it, and it turns out you had an early brain tumor that could have been treated. So what we worry about is having people receive a negative finding on an AI read, when a human eye would have detected something more.

It’s something we all need to keep in mind: AI – especially in medicine – is dependent on the population that was used to train it. It’s very sensitive to where you are, who you are, the environment and so on, and that’s a problem when you try to develop a tool for a general population. It’s one of the things the FDA is having a hard time with – what to require to prove that a tool works well for large groups of people across the country.

So, if I go see my primary care doctor, I won’t see any AI?

You won’t see any AI making decisions about your health.

So, it sounds like AI isn’t taking over healthcare just yet.

Well, I know it's not. I'm the medical director of our innovation center. We look at digital health and AI companies a few times a week to see what they do and evaluate how we could use them in clinical care. We're working with one radiology company that follows up on incidental findings, but they don't look at the images. They're scanning for the words in the report.

Most of the AI you’re seeing is in research. And in research there’s access to a tremendous amount of computational power and storage that isn’t available in clinical care. 

"Progress moves at the speed of trust." – David Kao, MD

What’s your prediction for how AI will be used in clinical care in the near future?

We’re not that far off from being able to use these tools more routinely. I also think patients will increasingly use them on their own. Setting expectations is really important. The key question is: What do you do with the results you get?

I often see cases where someone brings in results from a new algorithmic scan or diagnostic tool. As someone who works in precision medicine and AI, I understand the appeal. But those results always need to be confirmed through other methods. That’s the reality right now, for both clinicians and patients. Progress moves at the speed of trust. There’s nothing wrong with trying out new technology, but if we don’t yet know how reliable it is, it’s critical not to jump to conclusions without further input.

Where might you see AI in the doctor’s office?


You may start to see AI being used to take notes during your visit with your doctor. I’ve used it, and it’s really cool. It’s an app on my phone, and it just listens to our conversation. It creates a structured note, organized just as I’d organize a note in your chart. So it’s not just copying down everything we say; it’s actually extracting information and categorizing it. It’ll draft the entire thing, which could take 20 minutes or more to do otherwise. I still have to review the note, make changes and approve it.


It’s meant to help reduce burnout in providers by reducing our administrative burden while hopefully increasing efficiency. So far, physicians are loving it. Interestingly, though, it hasn’t actually led to much less time spent in the chart – but physicians perceive that it has. And if they’re happier, their burnout is probably lower. That’s huge.

Featured Experts

David Kao, MD