
Exploring the Ethical Issues of AI

12th Annual CCTSI Research Ethics Conference


by Cristine Schmidt | December 7, 2022

While artificial intelligence (AI) is not a new concept in science, the clinical research community has seen tremendous growth in the area over the last decade. At the 12th Annual Research Ethics Conference, researchers, members of the Institutional Review Board (IRB) and ethicists gathered in person and via Zoom to discuss AI's expanding role in research.

The event on November 10 was titled Real World Ethics for Artificial Intelligence Research: Managing the Issues, and it was co-sponsored by the Center for Bioethics and Humanities and the Colorado Clinical and Translational Sciences Institute (CCTSI). Matthew DeCamp, MD, PhD, director of Research Ethics for the CCTSI, planned this year's conference for the first time.

"A key theme from the conference was the need for researchers, IRB members and others to learn to recognize bias and issues of equity that go far beyond the dataset," said Dr. DeCamp. "While datasets are important, we need to understand that biased data sets reflect broader social issues around race, trust and power within health care." 

Nicole Martinez-Martin, JD, PhD, assistant professor at the Stanford Center for Biomedical Ethics, delivered the keynote presentation, The Body of the Data: Equity & Inclusion. The audience was highly engaged in the discussion afterwards, exploring how to include diverse populations in datasets and the ethical obligations of the people behind the data. "There's a need to make sure to include even more people from diverse populations, which is very important in this area, within the datasets or in recruiting and building datasets, making sure that there's sufficient inclusion of different people," Martinez-Martin said.

"With a background in law and the social sciences and a commitment to elevating the voices of marginalized populations, Dr. Martinez-Martin was the perfect choice to highlight issues of privacy, consent and bias (among others) in research using artificial intelligence," DeCamp said.

Attendees also listened to four brief presentations and participated in a Q&A and discussion. Jayashree Kalpathy-Cramer, PhD, presented Evaluating and managing fairness and bias in medical imaging, which covered how AI could deliver tools beyond what humans can accomplish in specific tasks such as radiology. Antonio R. Porras, PhD, discussed Using pediatric craniofacial imaging to screen for genetic and developmental anomalies. Matt Andazola, MPH, presented Conversational agents ("chatbots") in health care: How far should they go? Lastly, Michael Rosenberg, MD, gave a talk on Interpretability vs. Accuracy in Machine Learning.

Although the FDA and others may one day regulate AI more extensively, investigators and IRB members are responsible for ensuring ethical AI research today. Throughout the conference, discussions centered on bias and fairness in AI, safety and privacy.

"Looking to the future, there is a need to ensure privacy is protected in data that are used and shared. And this doesn't mean just telling people about risks – that's what consent is for – it's also about actively taking steps to protect privacy," DeCamp said. 

The CCTSI has a Research Ethics Consult Service available to all biomedical and behavioral researchers at the University of Colorado Anschutz Medical Campus and its clinical affiliates. For more information, contact ResearchEthics@ucdenver.edu.
