AI is increasingly used to analyze medical images such as X-rays, MRIs, and CT scans, helping doctors make faster and more accurate decisions, particularly for diseases like cancer.
"AI systems can process thousands of images quickly and provide predictions much faster than human reviewers," said Onur Asan, Associate Professor at Stevens Institute of Technology. "Unlike humans, AI does not get tired or lose focus over time."
Despite these advantages, many clinicians remain cautious about relying on AI due to a lack of understanding about how it generates its predictions—a challenge known as the "black box" problem. "When clinicians don't know how AI generates its predictions, they are less likely to trust it," Asan explained. He added that his team wanted to study whether providing more detailed explanations from AI would improve clinician trust and diagnostic accuracy.
Asan collaborated with PhD student Olya Rezaeian and Assistant Professor Alparslan Emrah Bayrak at Lehigh University to conduct a study involving 28 oncologists and radiologists using AI tools to analyze breast cancer images. The participants received varying levels of explanation for the AI’s assessments and were later asked about their confidence in the system and the difficulty of the task.
The study found that clinicians who used AI achieved higher diagnostic accuracy than those who did not, but that more detailed explanations did not always increase their trust in the system. "We found that more explainability doesn't equal more trust," said Asan. Extra or more complex explanations forced clinicians to spend more time processing information, which degraded their performance.
"Processing more information adds more cognitive workload to clinicians. It also makes them more likely to make mistakes and possibly harm the patient," Asan stated. "You don't want to add cognitive load to the users by adding more tasks."
Additionally, the research showed that excessive trust in AI can itself be harmful: clinicians with high confidence in the tool's output may overlook important details. "If an AI system is not designed well and makes some errors while users have high confidence in it, some clinicians may develop a blind trust, believing that whatever the AI is suggesting is true, and not scrutinize the results enough," said Asan.
The findings were published in two papers: "The impact of AI explanations on clinicians' trust and diagnostic accuracy in breast cancer," in Applied Ergonomics on November 1, 2025; and "Explainability and AI Confidence in Clinical Decision Support Systems: Effects on Trust, Diagnostic Performance, and Cognitive Load in Breast Cancer Care," in the International Journal of Human–Computer Interaction on August 7, 2025.
Asan emphasized that while AI will continue assisting with medical imaging interpretation, system designers must take care that added explanatory features do not hinder usability. He also highlighted the need for user training focused on interpreting, rather than blindly trusting, AI outputs. "Clinicians who use AI should receive training that emphasizes interpreting the AI outputs and not just trusting it," he said.
Asan concluded that striking a balance between ease of use and utility is key to effective adoption by healthcare professionals. "Research finds that there are two main parameters for a person to use any form of technology: perceived usefulness and perceived ease of use," he said. "So if doctors think that this tool is useful for doing their job, and it's easy to use, they are going to use it."