What are people most comfortable using a chatbot like Livi for?
It’s an interesting mix. On the one hand, patients liked the convenience of Livi being available 24/7 for what we called administrative tasks: scheduling appointments, getting contact information, checking test results.
On the other hand, a number of patients wanted to use Livi for more sensitive issues, such as asking questions about mental health, billing or finding a new provider.
Why was Livi used for sensitive topics?
It maybe shouldn’t have been surprising to us, but one of the reasons patients were motivated to do so was that they perceived Livi to be more private, more anonymous and less judgmental than a human. Many respondents in the study said Livi felt like a neutral bystander.
Obviously, in medical education and care, we're always working to eliminate implicit and explicit biases, but we know they still exist. That casts a shadow over where patients wanted to go first when it came to some of those sensitive topics.
There are some topics that are probably both administrative and sensitive though, correct?
Exactly, it depends on the circumstance. The best example I can give you of that is asking Livi questions about billing.
Some patients saw asking the chatbot a billing question as purely administrative, but others found financial matters, or the need to set up a payment plan, uncomfortable to discuss with a person because it would reveal financial insecurity.
Do chatbots risk eroding trust further if providers grow used to not having potentially sensitive conversations with patients?
In ethics, we often talk in terms of "on the one hand" and "on the other hand." On the one hand, in the short term, Livi could be filling a real need. But on the other hand, in the longer term, you could interpret our findings to say, "We need to do a much better job at ensuring that human clinicians are able to have these conversations in a way that's non-judgmental."
If we simply stop at that short-term fix, that's where deskilling could happen, because we would run the risk of preferentially shifting this task over to a chatbot. We should continue working at helping humans have better conversations in a welcoming environment.
A research study partnership
DeCamp stresses that the growth of chatbots following the COVID-19 pandemic, combined with what he was observing in the clinic with his patients, motivated the research with his colleagues to find out more about how patients were using chatbots.
“We're fortunate enough to have great partners at UCHealth who were very willing to let us do an honest look at what patients think about chatbots like Livi,” DeCamp said.
Does having a chatbot in healthcare require transparency and education around its capabilities and shortcomings – so-called digital literacy?
A collaborator on our team, Annie Moore, emphasizes how important digital literacy is. That was clear in our study: People were not always sure what Livi was or where their responses and data went. Some assumed their data would be shared. Others didn’t think it was, and some who were sharing had preferences that it not be shared.
If that’s the case – and it is what the research shows – we have an ethical obligation to disclose repeatedly both where your data is going and that the chatbot is not a person. For Livi specifically, at the time of our study, information was not placed into a patient’s health record.
Have you and the team found anything interesting on digital literacy as you’ve researched this topic?
Contrary to what we thought, older users were more willing to share with Livi and less concerned about privacy than younger ones.
That may seem a little counterintuitive because I think we have a stereotype that older users are more concerned about privacy or less technology savvy and so on. But the interesting finding was that older users very clearly associated Livi with the health system and their healthcare. So they had expectations around privacy that apply to healthcare information – such as HIPAA – and felt comfortable sharing.
How many people in the study identified Livi as fully automated?
Only one in three. That is similar to what’s seen in business settings and not unique to healthcare, but it was a big eye-opener for us.
What’s an area where respondents felt that Livi fell short?
Diagnoses. It’s a little hard to say exactly why, but we suspect it has to do with people feeling like the technology is not quite there yet. We did this study a few years ago, prior to a lot of recent advancements in chatbots and AI products.
In our interviews, we also heard from people that they thought things like diagnoses should really just come from a human. That only a human clinician has the judgment and the capability of making and delivering a diagnosis with care.
This suggests that there are limits to the types of tasks that people want to assign to chatbots. It pushes on that notion of, "Is accuracy the only thing we care about, or is there something more that comes from a human interaction and a human diagnosis?"
You mentioned that people saw Livi as neutral and unbiased – what are the dangers of people assuming all chatbots are unbiased?
If there’s a dominant assumption and narrative that technology is neutral, we need to do a better job of spreading the message that many AI technologies do have the potential for bias or differential performance based on the user or on how they were constructed in the first place. We also need to do a better job of helping people understand the limitations of these technologies.
What questions should we be considering on chatbots going forward?
There are a few things:
- Chatbots should be patient-centered throughout the entire development and implementation cycle.
- We should think very clearly about what tasks are best for a chatbot. It can be tempting to cut corners, but we have to consider the ethics of: “Should we be doing this?”
- We should consider the environmental costs of running AI platforms in terms of energy, land and water.
- And remember it’s not all negative: Chatbots have a lot of potential to solve access problems in healthcare and give more people the right types of care and information at the right time.
I do think that more and more chatbots are going to be developed in healthcare settings and used and rolled out in all sorts of ways. I just hope that we can do so in a way that's patient-centered, patient-first.