You see your doctor in the morning for routine lab tests. Before you head to bed, you log into your health record for the results – and out pops a digital version of your actual doctor, asking you how she can help.
It looks like your doctor. It sounds like your doctor. But it is, in fact, a chatbot made in your doctor’s image. This so-called “digital twin” is something health systems can create and deploy.
But should they?
Philosopher and internist Matthew DeCamp, MD, PhD, is encouraging healthcare teams to ask this question. As an associate professor of internal medicine at the University of Colorado School of Medicine and director of research ethics for the Colorado Clinical and Translational Sciences Institute (CCTSI), he wants to be sure healthcare teams implement ethical frameworks built on transparency, beneficence and choice – and centered on what patients and families want and need.
Let AI solve the “right” problems
AI is well positioned to solve problems in research, clinical care and the broader healthcare system. From increasing efficiencies and enhancing diagnostics to improving personalized care and boosting rural access to specialty providers, AI has the potential to enhance outcomes while reducing costs. But if science fiction has taught us anything over the years, it’s that such a venture comes with risk.
“We want to be sure we’re shaping AI in medicine to focus on the things we really care about,” DeCamp said.
He acknowledges that ethics can sometimes come off as critical of AI. But in medicine, lives are at stake. Adding artificial intelligence into patient care requires scrutiny – and skepticism. Ethics can help shape AI for the better.
“We must always be honest and transparent about how no AI model will be perfectly fair and continually evaluate for improvement.” – Matthew DeCamp, MD, PhD
For AI to be ethical, transparency is a must
From an ethical perspective, DeCamp said asking patients to communicate with their doctor’s digital twin may be crossing a line.
He uses a real-world example of Livi, an AI-based chatbot introduced at UCHealth to assist patients by scheduling appointments, answering follow-up questions and providing education on health-related topics. Livi includes a prominent disclaimer at the beginning of every chat that says it is a “virtual assistant.” Despite this transparent disclaimer, a survey of patients who interacted with Livi revealed that one in three weren’t sure what Livi was – and a small percentage thought Livi was a real person typing back to them in real time.
That confusion leads DeCamp to offer three questions that encourage transparency and must be answered before implementing AI-based technology made in the image of a human:
- Does the patient understand it’s a computer?
- Does the patient know that the responses they receive are, in part, generated by AI?
- Does the chatbot have a clear framework for the kinds of questions that can and cannot be answered, and does the patient understand this?
Transparency, to DeCamp, is a key ethical value that must be part of any AI deployment.
“We want to maintain trust in healthcare clinicians and the healthcare system in general,” he said. “Transparency must be part of this.”
"We must develop and teach people the skill to interact with AI – to poke and prod AI to reveal its weaknesses.” – Matthew DeCamp, MD, PhD
For AI to be ethical, it must involve patients and families in decision-making
When tech developers are given a problem to solve, they tend to approach it from a technical imperative.
The digital twin is a great example. Patients want access to their doctors after hours. Tech developers see the solution as a digital twin chatbot.
But is that what patients really want – and need?
It’s why DeCamp believes healthcare teams must bring together health professionals, technical developers and patients and families.
“Interacting with your personal physician’s digital twin feels, on some level, like an intuitive ‘no.’ But unless our teams work together and with patients, we’ll be missing out on important insights,” he said.
Research into how patients interacted with Livi came back mixed. Some liked that Livi could help them schedule appointments. But others said a chatbot wasn’t needed. To them, the answer was a better scheduling system – not AI.
“The patient and family voice is really important to incorporate because they can help us understand what they need and value the most. And healthcare teams need to listen to that,” DeCamp said. This might include educating patients and families before AI is used and giving them alternatives to AI when appropriate.
For AI to be ethical, beneficence should be considered
Is AI being used for the well-being of patients, communities and societies? It’s an important consideration, DeCamp said.
For example, access to healthcare is a problem throughout the United States. Could a patient-facing AI chatbot help us reach rural communities, which tend to lack specialists?
“If the answer is yes, then great. By all means, let’s do it. But ethically, we can’t stop there. We can’t have AI be the only way a rural community has access to specialty care,” DeCamp said.
For AI to be ethical, it must ensure choice (with a caveat)
The ability to choose is an important aspect of Western medical ethics, for both clinicians and patients.
If we start by placing patient-centered, quality care at the heart of our decisions, DeCamp asks, how do we do so in a way that continues to promote choice – for clinicians and for patients? Can we use AI to inform our choices without controlling them?
Choice, DeCamp points out, could be an illusion.
Using generative AI models such as ChatGPT takes a toll on the environment, including heavy demands for electricity and water. Take search engines, for example. Many now have their own AI tools. Currently, when we open a search engine, it defaults to a less resource-intensive search based on algorithms and an index of webpages – something people who consider the environment may prefer over AI. Should the search engine change its default to its AI model, the power to choose how we access information is, on some level, taken away.
“If AI becomes the default, you no longer have the choice for an environmentally better option,” DeCamp said. Considering what choices might be lost in the implementation of AI in medicine is an ethical aspect healthcare teams shouldn’t ignore.
The ongoing need for education and evaluation
Implementing AI through an ethical framework can help guide decisions that will ultimately impact clinicians and patients. DeCamp cautions that bias in AI models will always exist, because AI makes predictions based on information learned in the past.
“Our ethics obligation is to ensure that we're continually evaluating AI models along different dimensions of ethics,” he said. “We must always be honest and transparent about how no AI model will be perfectly fair and continually evaluate for improvement.”
Western ethics guide AI. Should other frameworks be considered?
“An important thing to keep in mind with all AI is that it’s a reflection of how the model has been trained. So when we ask ChatGPT to give us an ethical framework, it will reflect back to us the ethical values that were most represented during the model’s training. This means that it might leave out community-based ethics, virtue ethics or other traditions. But if we improve our prompt literacy – that is, learn to ask better questions – and ask ChatGPT to include ethical considerations it may have left out, AI will oblige and mention other approaches. So, two important points: 1) We must know that we’re receiving an answer that reflects the approaches most represented when the model received its training data; and 2) We must develop and teach people the skill to interact with AI – to poke and prod AI to reveal its weaknesses.”