As the use of artificial intelligence (AI) grows in clinical settings, so does research aiming to understand its use, accuracy, and challenges. While most of this research has focused on clinical decision support or specialty areas, few investigations have homed in on nursing.
“Nursing is very much about patient interaction,” says James Mitchell, PhD, MSc, assistant professor of biomedical informatics at the University of Colorado School of Medicine. “Nurses typically spend more time with patients than most other healthcare professionals, providing consistent bedside care and monitoring. Nurses are engaged in a lot of education with patients, which theoretically complements the physician’s role but is unique when it comes to the use of AI and large language models (LLMs).”
In a new rapid review, Mitchell and researchers around the world examine the current and potential uses of LLMs — such as ChatGPT and similar AI systems — in nursing. The review also recommends future empirical and qualitative studies to better understand how the technology may improve nursing practice, education, and research.
Mitchell notes that nursing needs to be sufficiently highlighted in LLM research, emphasizing that “more inclusive collaboration across all levels of healthcare professionals could enhance the insights gained.”
“We noticed very quickly that nobody had done a review like this,” he adds. “We see these reviews in specialty areas, but nursing is a particular skill set.”
LLMs are a subset of AI but differ in that they are focused on producing natural language rather than numbers and data. This technology may help nurses in education and in clinical settings by making large amounts of complicated information more digestible.
“What I like about this technology is that it can provide something that you can understand. You can take something really complex and summarize it in a different way so that it makes sense,” Mitchell says. “From an educational perspective, it's skilled at narrowing complex terms and ideas.”
The field of nursing should proceed with caution, though. The technology can produce errors or incorrect information, which could be troublesome for students and professionals alike. It is important that the system producing the information be trustworthy.
Researchers found that most uses of LLMs in nursing are similar to other applications across health care. The technology can provide summaries to make work more efficient, but that too can create problems.
“This can be helpful, but at the same time, you're introducing another issue,” Mitchell says. “For example, there might be less text to read, but because you've summarized 100 patient notes into something that a nurse can very quickly understand, you're potentially introducing too much trust in what is summarized.”
Mitchell and his colleagues agree that researchers should continue looking at LLMs in nursing.
“Given the relative novelty of large language models, ongoing efforts to develop and implement meaningful assessments, evaluations, standards, and guidelines for applying large language models in nursing are recommended to ensure appropriate, accurate, and safe use,” the researchers say. “Future research along with clinical and educational partnerships is needed to enhance understanding and application of large language models in nursing and healthcare.”
Ethical issues, such as drawing on copyrighted textbooks or protected patient information, are among the obvious challenges facing LLM use in nursing, but Mitchell points to other considerations, like sustainability, too.
“Large language models require loads of data and loads of information and loads of computational power,” he explains. “The question becomes how to narrow those down so that you can make these domain expert systems that only work to help nurses summarize notes internally, and don’t use a load of computational power. Because that costs a lot of money.”
Now that Mitchell has delved into existing research on nursing and LLM use, he says he is considering expanding his work toward improving clinical guidelines or gathering clinical feedback for health care systems.
“It’s really about making communication better, creating better patient interactions, and finding ways to instill trust in the systems that can allow these objectives to happen,” he says.