Colorado School of Public Health

New fall semester class will probe the promise and limitations of AI in public health

Written by Tyler Smith | July 21, 2025

We know students use AI to complete assignments, that businesses will soon consider it a required job skill, and that it can be a powerful tool. But to use AI most effectively, students need to know the best practices, the drawbacks, and the ethical considerations that come with these tools.

The list of hot topics in media, politics, and everyday conversation changes frequently, but artificial intelligence (AI) consistently generates heat. It is among the most searched terms and sparks intense debate about both its usefulness and its potential dangers. Will it save humanity or destroy it? Is it hope or hype? Do we even understand what AI is?

That’s why a new class scheduled for the fall 2025 semester at Colorado School of Public Health, “Applied Artificial Intelligence in Public Health Research and Practice,” aims to help students understand both the power and the peril of AI while helping them learn how to incorporate the technology into their research and professional work.

“It is very difficult to understand why AI models generate a particular output; they are ‘black boxes,’ but it’s not difficult to understand how they were created,” said the class instructor, Dr. Marcelo Perraillon, associate professor in the Department of Health Systems, Management, and Policy at ColoradoSPH.

“A lot of companies are moving to be ‘AI first,’ and I think it is one of those skills that students are going to need to have when they graduate,” Perraillon added. “I think they are going to have an advantage if they know how AI works and how to use the models.”

AI as a new force in the classroom

That advantage could also apply to those who teach, Perraillon said. He argues that working with students today requires that instructors adapt to the rapid growth of both open-source and closed-source, or proprietary, AI models. The tools are “very good research assistants” and “excellent tutors” that are capable of supporting and improving instruction, provided “you always verify the output,” he cautioned.

“Faculty will have to adapt because we cannot keep teaching the way we teach right now,” Perraillon said. “We have this amazing tool that creates tutors that you can train to be better at some specific tasks that I think students could benefit from.”

For example, Perraillon said that he uses AI to help explain concepts at different levels of complexity, tailored to students’ previous studies and their degree of technical understanding.

The models “come up with excellent answers” that include analogies and examples, he said, and “you can make them be very creative sometimes.”

The need to manage AI

At the same time, Perraillon emphasized that AI is far from a plug-and-play resource that takes in questions and instructions and spits out infallible answers. The systems rely on large language models that are trained on an enormous amount of data (text, audio, images, and video) and learn to identify patterns and make predictions. But the quality of those predictions rests on the training process and “can be spotty,” Perraillon said.

Underlying those models are neural networks, the architecture on which large language models are built and another key subject for the class. These networks are the interconnected information highways of AI, functioning at a very basic level like the synapses of the brain to process and interpret the data they encounter.

“They are in essence prediction models, much like the basic linear regression model students learn in introductory statistics. They are just much, much bigger and interconnected,” Perraillon said.
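
To make the analogy concrete, here is a minimal sketch in Python (an illustration for this article, not course material): a single “neuron” computes the same weighted sum as a linear regression and then passes it through an activation function, and a neural network is simply many such units wired together.

    import numpy as np

    def linear_prediction(x, weights, bias):
        # The familiar regression form: y = w1*x1 + w2*x2 + ... + b
        return np.dot(x, weights) + bias

    def neuron(x, weights, bias):
        # A neuron computes the same weighted sum, then applies a
        # nonlinear activation (ReLU: negative values become zero)
        return np.maximum(0.0, linear_prediction(x, weights, bias))

    x = np.array([1.5, -2.0, 0.5])   # three input features
    w = np.array([0.4, 0.1, -0.3])   # learned weights
    b = 0.2                          # learned intercept ("bias")

    print(linear_prediction(x, w, b))  # plain regression prediction
    print(neuron(x, w, b))             # the same sum, after activation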

But just as the human brain has its limits, AI models can make mistakes, fabricate information, produce inaccurate predictions, and draw outlandish conclusions, Perraillon said. That is because the models are “sort of impersonators” that “mimic the reasoning process” of the humans reflected in the data used to train them, he said.

“The current models can do a lot of things, but they still need a ‘human in the loop’ to verify the output,” he said.

Human choices and decisions drive the power of AI

Humans drive the responses of AI models by providing direction through a prompt, the written input that models “grab” and use to make predictions based on their training, Perraillon said. The quality of the response rests on the clarity of the prompt: longer, more specific prompts are often better than shorter or vague ones, he added.
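
As a simple illustration of that advice (these prompts are hypothetical, not drawn from the course), compare a vague request with a more specific one:

    Vague:    Explain regression.
    Specific: Explain logistic regression to a first-year MPH student
              who already knows linear regression, using a smoking and
              lung disease example, in under 200 words.

The second prompt tells the model the audience, the prior knowledge to build on, a concrete example to use, and a length limit, leaving far less room for the prediction to drift.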

“A model’s predictions may be completely off in some cases. I want students to understand the situations in which they are most likely to encounter errors, how to create systems to verify the information, and how to write prompts that are more useful,” Perraillon said.

Students can use AI most effectively, he added, when they apply it to subjects they know well or for which they have a way to confirm the information.

“Coding is a good example,” Perraillon said. “You ask the AI to write code, and then you can run the code and verify that it works. With other tasks, it becomes more difficult.”
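
A minimal sketch of that verification loop in Python (the function and checks are hypothetical, invented for this article): run AI-suggested code against inputs whose correct answers are already known before trusting it.

    # Suppose an AI assistant suggested this function for computing a
    # crude rate per 1,000 people (a hypothetical example).
    def rate_per_thousand(cases, population):
        if population <= 0:
            raise ValueError("population must be positive")
        return 1000 * cases / population

    # The "human in the loop" step: check it against known answers.
    assert rate_per_thousand(50, 10_000) == 5.0
    assert rate_per_thousand(0, 2_500) == 0.0
    print("AI-suggested code passed the checks")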

Cause and effect and the limitations of AI

A strong understanding of subject matter can also help students avoid problems with causal inference, another topic of the class.

Causal inference also uses predictions, in this case to identify cause and effect. A very simple example: you go to bed, wake up in the morning, see that the sidewalk is wet, and conclude that it rained overnight, even though you didn’t see the rainfall. A far more complicated problem is trying to predict what might have happened if a change in public policy, business, or another complex realm had not occurred, Perraillon said.

“One requirement of making causal inferences is that you have a very good conceptual understanding of the world,” he said. “You need to know how things are related to one another. AI models don’t have a conceptual model of the world that allows you to establish causal effects, so relying on current AI models for scientific research with off-the-shelf models is more hype than reality.” On the other hand, he said, further training and specialized AI models will advance scientific discovery.

Protecting data used in AI models

Perraillon said he will also address the “thorny” issue of governance: who owns the data used in AI modeling, and what threats AI companies pose to privacy. He advocates for researchers and others to run open-source models on local machines to complete their tasks, so that sensitive information stays protected. “That’s a ‘must’ for medical research,” he said.

“I will show students examples of how they could use open-source AI models on their own computers,” Perraillon said. “Open-source models are smaller and less capable, but for most tasks they are very good alternatives,” he noted. He warned that in most cases, users should assume that their prompts and data will “be read by either a person or a machine and will be used for training,” even if companies say that they will not.
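
As one hedged sketch of the kind of local setup he describes (the library and model named here are illustrative choices, not necessarily what the class will use), a small open-source model can run entirely on a personal computer with the Hugging Face transformers library, so prompts and data never leave the machine:

    from transformers import pipeline

    # Load a small open-source model; the weights are downloaded once
    # and then run locally. "distilgpt2" is just a compact example.
    generator = pipeline("text-generation", model="distilgpt2")

    # The prompt is processed on this machine, not sent to an AI
    # company's servers.
    result = generator("Public health surveillance is",
                       max_new_tokens=40)
    print(result[0]["generated_text"])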

“AI companies have not earned my trust,” Perraillon explained.

Students will be evaluated on biweekly homework assignments, a final project proposal, and a final project, he said. He anticipates a mix of master’s- and PhD-level students as well as health professionals, and he hopes to see “tons of different projects at different levels” that draw on the skills students learn in applying AI to their work.

“This is new territory,” Perraillon said. “I know there are certain things I want students to understand about how the models work and the definitions of AI, but how they apply the tools is a lot more open.”

To join this course: Enroll through your student portal