A growing number of people are turning to artificial intelligence-powered chatbots for mental health support, as demand for traditional therapy continues to outpace supply, according to interviews and recent polling reported on Apr. 17. Users like Vince Lahey of Carefree, Arizona, say these digital tools offer a sense of anonymity and accessibility that encourages them to share more than they would with a human therapist.
The rise in AI-based therapy options comes at a time when self-reported poor mental health days have increased by 25% since the 1990s, and suicide rates in the United States in 2022 reached levels not seen in nearly eight decades. Many patients find nonhuman therapists appealing because of lower costs and fewer perceived barriers compared with conventional talk therapy.
Tom Insel, former head of the National Institute of Mental Health, said most people who need care do not receive it. "There's a massive need for high-quality therapy," Insel said. "We're in a world in which the status quo is really crappy, to use a scientific term." He added that engineers from OpenAI told him last fall that about five to ten percent of ChatGPT's user base relies on it for mental health support.
Polling data indicate younger adults are especially likely to seek advice from AI chatbots: nearly three in ten respondents ages 18 to 29 reported using such tools for emotional or mental health issues within the past year. Uninsured adults were about twice as likely as those with insurance coverage to use these services.
Despite their popularity, experts warn about misleading marketing claims and insufficient regulation surrounding AI therapy apps. Vaile Wright from the American Psychological Association said federal patient privacy protections often do not apply: "Therapy is not a legally protected term," Wright said. John Torous at Beth Israel Deaconess Medical Center warned that some apps may deceive users into thinking they have received treatment when they have not.
Some states are responding by enacting laws that prohibit apps from describing their chatbots as licensed therapists or mental health professionals. Jovan Jackson, who co-authored Nevada's new law on the issue, said: "It's a profession. People go to school. They get licensed to do it." However, researchers such as Charlotte Blease at Uppsala University note there is little rigorous evidence supporting these products' effectiveness, due in large part to unclear guidance from federal regulators such as the Food and Drug Administration (FDA).
Safety concerns persist following reports of AI chatbots providing harmful advice or encouragement toward self-harm. At least twelve lawsuits alleging wrongful death or serious harm related to chatbot use have been filed against OpenAI; similar cases involve other companies including Google-backed Character.ai.
Privacy remains another major concern: reviews found that many apps allow young children access while offering inconsistent information about their data collection practices, and some have shared psychiatric data with advertisers despite public claims to the contrary.
Looking ahead, industry leaders acknowledge both the promise and the risks of AI-driven psychological support tools, while emphasizing the need for clearer standards and safeguards.