Lori Ellis, Head of Insights | Biospace
Patient Daily | Feb 23, 2026

DIA explores integration of artificial intelligence into regulatory review at annual meeting

DIA, a global interdisciplinary life science association, is focusing on how regulatory agencies are adopting artificial intelligence (AI) in their review processes. At the 2026 DIA Global Annual Meeting in Philadelphia, discussions will center on how agencies like the FDA are using AI to enhance efficiency while maintaining public trust and scientific rigor.

The FDA has been increasing its use of AI for reviewing medical products since 2025. This technology assists with scientific and safety evaluations for drugs, biologics, and medical devices. AI is also being used to automate routine tasks and speed up review timelines, allowing staff to focus on more complex work. This marks a move toward more data-driven decision-making.

Regulators worldwide are considering risk-based approaches to AI oversight. High-risk decisions require human involvement, while lower-risk administrative tasks may be automated without direct human oversight. Proper validation of both models and workflows is necessary to address risks such as errors or unsupported outputs.

DIA’s AI Consortium, launched in 2025 as a public-private partnership, brings together regulators, industry representatives, academics, and technology providers. Its aim is to translate risk-based principles into practical workflows. One working group within the consortium is developing a validation framework that stresses the importance of demonstrating reliability at both the technical and operational levels.

Consortium partners are mapping how regulators classify AI use cases by risk level and context. This helps organizations determine where AI fits in regulatory workflows and what kind of oversight is needed for different applications. For example, tools used for summarization or data extraction may need less oversight than those influencing clinical decisions.

AI is also being explored for post-market surveillance to identify safety signals more quickly—a strategy pursued by agencies including ANVISA, MHRA, PMDA, and Health Canada. In addition, predictive risk assessments using historical and real-time data could help anticipate future issues with drugs or manufacturing processes.

The level of human involvement required depends on the risk associated with each use case. Oversight remains essential for higher-risk scenarios to catch mistakes before final decisions are made. Transparency in AI algorithms is crucial so that their decision-making can be audited.

Addressing bias in AI systems is another priority to prevent disparities in health outcomes. Because these models can drift over time as data and conditions change, ongoing monitoring is necessary to maintain product safety and effectiveness.

In recent years, regulators have issued guidance documents about validating AI tools based on their intended use and potential consequences of errors. The FDA’s draft guidance from January 2025 introduced a Risk-Based Credibility Assessment framework linking scrutiny levels to context of use. The EMA’s Network Data Steering Group has started workstreams focused on using AI for better analytics within Europe’s regulatory network. The EU AI Act further classifies systems by risk category with corresponding requirements across member states.

A joint paper from the FDA and EMA addressing these topics was published in early 2026; other agencies, including the MHRA, PMDA, and ANVISA, have also released related guidance documents. These developments will be discussed during sessions at the upcoming Global Annual Meeting.

According to DIA: “Through forums such as the Global Annual Meeting and initiatives like the AI Consortium, DIA is helping shape a shared, global understanding of how trustworthy AI can be responsibly integrated into regulatory decision-making while preserving scientific rigor and human judgment.”
