Artificial intelligence models used to predict aggressive incidents in acute psychiatric care may amplify existing social and structural inequities, according to an April 7 study led by researchers at the Centre for Addiction and Mental Health (CAMH). The findings, published in npj Mental Health Research, show that these AI tools can overestimate the likelihood of aggression among already marginalized groups.
The research highlights concerns about fairness and equity as AI becomes more common in mental healthcare. Predictive models are being considered or deployed internationally to anticipate violent behavior and enable earlier intervention, but little is known about how these tools perform across different patient populations.
Dr. Marta Maslej, Staff Scientist at the Krembil Centre for Neuroinformatics (KCNI) and senior co-author of the study, said: "While fairness of clinical AI tools has been evaluated in other areas, this study highlights a critical gap in mental healthcare, considering that assessments, which are used to train AI models, are often based on subjective observations that are shaped by underlying social and structural biases. If fairness is not built in, the clinical use of AI models can lead to significant distress, loss of trust, and even precipitate aggressive incidents that would have otherwise not occurred. There is a clear need to develop AI applications that centre and promote equity."
The research team trained a machine learning model using electronic health records from over 17,000 CAMH inpatients. They found higher false positive rates for Black and Middle Eastern individuals, men, patients admitted by police through emergency care, and those in unstable or supportive housing, meaning these groups were more likely to be flagged as high risk even when no aggressive incident occurred.
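To make the metric concrete, here is a minimal sketch of how false positive rates can be compared across groups for a binary risk classifier. The column names, group labels, and numbers are hypothetical illustrations for this article, not data or code from the CAMH study.

```python
# Hypothetical sketch: per-group false positive rates (FPR) for a binary
# aggression-risk classifier. All names and values are illustrative only.
import pandas as pd

def false_positive_rates(df: pd.DataFrame, group_col: str) -> pd.Series:
    """FPR per group: share of no-incident patients flagged as high risk."""
    negatives = df[df["aggression_occurred"] == 0]  # patients with no incident
    return negatives.groupby(group_col)["predicted_high_risk"].mean()

# Toy example: two groups with identical outcomes but different flag rates.
records = pd.DataFrame({
    "group":               ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted_high_risk": [1,   0,   0,   0,   1,   1,   1,   0],
    "aggression_occurred": [0,   0,   0,   0,   0,   0,   0,   0],
})
print(false_positive_rates(records, "group"))
# group A: 0.25, group B: 0.75 -> group B is wrongly flagged three times as often
```

A gap of this kind means the model over-predicts risk for some groups relative to others even when actual outcomes are identical, which is the pattern of inequity the study describes.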
The KCNI Predictive Care Lab is working to address such issues through further research funded by the Canadian Institutes of Health Research (CIHR). Their new project aims to design a tool, FARE+, that identifies sources of bias within predictions so that mitigation strategies can be developed.
Dr. Laura Sikstrom said: "There is potential to use AI to redress historical and ongoing inequities in our health system by moving away from binary risk prediction to more patient-centred tools. By shifting from individual risk prediction to systemic bias detection, this research advances a new paradigm for AI in mental healthcare, one that prioritizes fairness, health equity, and the well-being of both patients and staff."