In 2019, Ahmed Alasmar, BS, was a recent biology graduate searching for a career that blended science with social impact. Bioethics turned out to be the perfect fit.
“I realized I was really interested in the public health and policy dimensions of healthcare,” he recalls.
As a Senior Professional Research Assistant, Alasmar now works across the full spectrum of research—grant development, participant interviews, data analysis, manuscript writing, and presenting results. While the methodology and process may sound similar to clinical trials designed to test a new intervention or drug, research in bioethics aims to bring clarity to complex problems.
“Research in bioethics helps illuminate complex questions in healthcare that don’t have easy answers. We’re tackling moral and philosophical questions with scientific methods. There’s not much else quite like empirical bioethics,” Alasmar said.
But what makes his role truly distinctive is the project that has defined the last four years of his career: an NIH‑funded study on AI‑based prognostication in palliative and end‑of‑life care. This work examines the intersection of technology, ethics, and some of the most sensitive moments in medicine.
Working closely with Center faculty and Principal Investigator Matt DeCamp, MD, PhD, Alasmar examines how AI tools designed to predict patient outcomes might influence clinical decision‑making, communication, and trust. The team’s research involves gathering perspectives from patients, clinicians, and other key stakeholders to understand how people experience these technologies in real life—not just how they’re designed to work.
A key part of the study explores how different groups understand the role of AI. Do clinicians see these tools as helpful guides or sources of pressure? How might patients and families interpret an algorithm’s prediction when facing emotional, complex decisions? What safeguards are needed to ensure the technology supports—not replaces—human judgment?
“AI tools are being introduced into some of the most sensitive moments in medicine. We want to understand what that actually feels like for clinicians, patients, and families,” he explained.
The goal is simple—to make sure new technologies work to help people, especially in moments when care decisions and communication matter most.
By collecting real-world data on these questions, Alasmar’s work aims to shape future policies and ethical guidelines around AI in serious illness care.
The implications reach far beyond academic discussion. “AI is moving fast. Evidence-based ethical guidance has to keep up if we want patients to benefit without being harmed,” said Alasmar.
Without thoughtful governance, AI tools risk reinforcing biases, creating misunderstandings, or influencing treatment decisions in ways patients may not fully grasp. The project aims to offer a roadmap for using AI responsibly, with transparency and compassion at the forefront.
Ultimately, Alasmar’s contributions help ensure that as healthcare systems adopt new technologies, they do so in ways that protect patients, support clinicians, and honor the values of people navigating some of life’s most challenging moments.
As for his own future? Ahmed is leaving space for exploration.
“I just want to keep doing good work and stay open to the next opportunity,” he says.