
Experts from The Jed Foundation, Columbia University, Beth Israel Deaconess Medical Center, and the American Psychological Association discuss how effective AI chatbots are at responding to users’ mental health needs; how to ensure appropriate use of these tools; and how clinicians and technology companies could collaborate to ensure chatbots can address mental health requests quickly and safely.
What We Mean by AI and Mental Health
More than half of all Americans have used an AI chatbot like ChatGPT, Gemini, Claude, or Copilot, and one in three teenagers use one daily. AI chatbots, which are the user-facing access point into large language models, are skilled at information retrieval, like finding a recipe that matches the food in your pantry, and assisting with writing, like providing feedback on a resume. However, 13 percent of people under 18 and 22 percent of those over 18 have also used chatbots to access mental health advice. These tools are not clinically validated, have in some cases represented themselves to users as licensed nurses and therapists, and may have caused “AI psychosis” in a small number of cases—but people continue to request advice from them.
On January 14, 2026, the National Academy of Medicine’s Health in the Headlines series brought together experts in artificial intelligence, mental health, and health policy for a discussion, moderated by Munmun De Choudhury, Professor at the Georgia Institute of Technology and expert in how novel forms of social interaction online shape health and well-being, about the current state and future potential of AI chatbots to assist with providing mental health care.
This article outlines five key takeaways about how well AI chatbots are handling mental health requests, steps individuals can take to use these tools appropriately, and priorities for future AI development to ensure that these tools can be used safely and effectively.
The Current State of AI Use For Mental Health
Many People are Already Using AI Chatbots to Access Health Information and Support
AI chatbots can help answer questions, plan meals, or brainstorm solutions to problems, but one in three people have also used an AI chatbot for “emotional support,” shared John Torous, Director of the Digital Psychiatry Division at Beth Israel Deaconess Medical Center, who focuses on investigating the potential of AI and mobile mental health technologies to advance psychiatry. A quarter of individuals under age 30 also use chatbots at least once a month to find health information or advice. However, Laura Erickson-Schroth, Chief Medical Officer at The Jed Foundation and nationally recognized educator and commentator on mental health, suicide prevention, and crisis care, noted that many more young people are using these tools for issues that affect their emotional and mental health but may not label their questions as such—so the user base may be even broader.
Therefore, the question isn’t whether to address this issue—it’s how. Ragy Girgis, Professor of Clinical Psychiatry at Columbia University and expert in psychosis and violence in mental illness, shared that he tells his fellow clinicians that they should assume any patient, especially if they are an adolescent or young adult, is engaging with a chatbot.
“If we look at a broad definition of emotional support, 1 in 3 people have used an AI chatbot for this purpose. It’s bigger than people think.”
– John Torous
AI Chatbots Can Provide Resources, but Cannot Replace Therapy or Crisis Intervention Services
Understanding what AI chatbots can and cannot do is critical for assessing their potential, Girgis noted, and what they do is mirror the data or emotions shared with them. C. Vaile Wright, Senior Director of the Office of Healthcare Innovation at the American Psychological Association and a thought leader in modernizing how care is delivered in an increasingly digital landscape, agreed, describing the job of a therapist as helping to understand harmful thoughts and patterns and identifying alternative ways of thinking, feeling, and acting. AI chatbots, meanwhile, unconditionally validate what you are already doing, “which feels very good,” Wright noted, “but is not therapy.” A clinically validated, evidence-based chatbot will likely be developed in the future and could be helpful for a subset of individuals, Wright said, but she firmly underscored that no such chatbot exists yet.
However, AI chatbots can point people in distress to mental health resources, and ChatGPT and Gemini—two of the most used AI chatbots—do this well, responding quickly to disclosures of distress and sharing reliable resources. Erickson-Schroth noted that this function could be a strength of chatbots moving forward but underscored that their job must be to establish a relationship between the person in distress and another human being—not to replace that connection.
“There is absolutely no consensus in the field that AI chatbots can serve in any way as a replacement for therapy.”
– C. Vaile Wright
Concerns About AI For Mental Health
Chatbots are Likely Harming People, But We Can’t Measure How Much
AI chatbots, the panelists agreed, may be uniquely able to cause harm compared to other digital platforms because they simulate an actual relationship and its associated emotions rather than simply providing ideas to consume, as internet articles or videos do.
AI and Mental Health Issues in Teenagers
Erickson-Schroth shared that adolescents are especially vulnerable to the “pseudo-relationship” AI chatbots provide because they are in such a sensitive period of brain development. The most prominent aspects of chatbot behavior—being highly affirming, giving excessive praise, and providing one-sided advice—are especially addictive to teenagers and obstruct the development of skills they need to succeed as adults, like having difficult conversations, interacting with other people, and thinking critically. AI chatbots have also engaged in harmful behaviors like sharing instructions on lethal means of suicide, simulating sexual acts with minors, and coaching young people on how to hide their mental health symptoms.
Despite the potential for harm, Torous noted that researchers do not have a list of AI-inflicted harms or methods to measure them. Since there are no standardized protocols to follow, the harms simply haven’t been measured yet. He noted that many researchers are focusing on death by suicide, which is an extreme example of harm, but there is a great need to also focus on issues that are less severe but still significant. He encouraged the field to rapidly classify these harms so they can be rigorously studied and then comprehensively addressed.
“It’s very concerning, because the nature of chatbots means that anyone who enters any sort of unusual or depressive material into a large language model is at risk of that idea becoming worse.”
– Ragy Girgis
| AI Chatbot Task | Does it Help? | Why or Why Not? |
|---|---|---|
| Referral to mental health resources | Yes, if using ChatGPT or Gemini; Exercise Caution if using a less-popular chatbot | Share reliable resources quickly and consistently. |
| Emotional validation and mirroring | Exercise Caution | Reinforce existing patterns without challenging potentially harmful thoughts. |
| Crisis intervention | No | Do not consistently identify crises when they are happening or connect users with appropriate resources. Chatbots are not qualified to address crises themselves but often attempt to do so, potentially harming users, and have been documented engaging in extremely harmful behaviors like sharing lethal means of suicide and coaching users to hide their mental health symptoms. |
| Diagnosis | No | Not clinically validated. Do not have enough information about users or the clinical knowledge to make appropriate diagnoses. |
| Emotional support | Exercise Caution | Can help individuals start or have difficult conversations, but emotional mirroring and extended conversations can become harmful. |
| Simulating relationships | Exercise Caution | Can be helpful for individuals experiencing loneliness or marginalization but can also wholly replace nuanced human relationships and development of the self, which are critical for individual well-being. |
Steps Individuals and Developers Can Take to Ensure Appropriate Use of AI Chatbots For Mental Health Support
The panelists agreed that the responsible use of AI chatbots—in a medical context and in everyday life—begins with non-judgmental conversations to understand what these tools are and what they can do. The success of these conversations requires a base level of AI literacy, including how a chatbot’s design shapes how it interacts with and responds to users. AI literacy can be built through conversations with young people that explore which company produces the chatbot, what the company gains from the chatbot’s responses, and how the company makes money from it. There are also programs to help young people understand how chatbots work and what the potential dangers might be—but panelists emphasized that these materials often don’t reach the people who need them. Wright and Erickson-Schroth shared that the warning signs of excessive or inappropriate use of an AI chatbot are similar to other signals of a mental health crisis: changes in sleep or regular activities, social and personal withdrawal, talking about a chatbot as if it were a real-life friend, and spending a lot of time alone or on devices.
The panelists also shared the following steps that developers could take to ensure that AI chatbots cause as little harm as possible:
- Limit discussion times with individual users and/or reset the chatbot’s memory daily to avoid extended conversations where potentially harmful ideas can be magnified,
- Prohibit chatbots from representing themselves as licensed professionals,
- Connect users with crisis services immediately upon any signal of distress,
- Prohibit all chatbots from sharing lethal means content under any circumstances, and
- Prohibit the use of minors’ data for monetization or personalization.
“The architecture of AI, how it is set up and used, is going to be one of the most important modifiable factors affecting young people’s mental health over the next decades.”
– Laura Erickson-Schroth
Adequate Federal Regulation of AI Devices May Not Be Possible
AI chatbots that aren’t explicitly used to diagnose, treat, or prevent diseases—such as ChatGPT, Gemini, and Claude—are not federally regulated. Adequate federal-level regulation would require an entirely new regulatory paradigm that is nimble enough to evolve as the technology does. The panelists expressed doubt that the federal government would ever be able to develop such flexible regulation, which leaves providers navigating the current regulatory landscape—a patchwork of state-level laws that change frequently.
In the absence of federal regulation, a new entity could step in to develop and share protocols to build trust in the safety and efficacy of AI tools. This entity could also advocate for algorithmic safeguards and identify ways to incentivize companies to pursue AI for good rather than AI for profit. Torous also suggested a grassroots approach to benchmarking: giving chatbots interactive quizzes that simulate unwanted behavior and observing how quickly, if at all, each chatbot adjusts. He noted that many benchmarks are needed and that these quizzes are something individual people can do to help add to the knowledge base.
“I think we are entering an era where there may be less top-down regulation for these types of tools, which really means we have to do a better job on starting and fostering grassroots efforts.”
– John Torous
AI Chatbots for Mental Health Are Trying to Take the Place of Human Connection. We Shouldn’t Let Them
The panelists shared a broad concern that AI chatbots are interrupting human connection—individual relationships, how communities care for and raise young adults, and how societies solve problems. When 72 percent of adolescents are using AI companions, these digital tools may be taking the place of critical parts of identity formation and finding community. Use of AI chatbots may also disrupt traditional forms of talk therapy, introducing distrust into what might otherwise be a critically important relationship. And using chatbots to do homework or write papers bypasses the development of critical thinking skills necessary for people to succeed in the world.

The way to combat these harms, the panelists shared, is to intentionally reintroduce and reinforce human relationships. Caring adults need to have conversations with adolescents about their use of AI chatbots and teach them the importance of critical thinking and of interpersonal relationships, even when those relationships are challenging. Ensuring the future safety and efficacy of chatbots will require comprehensive partnerships between private industry, state and local communities, individuals with lived experience, and clinicians. These partnerships are sometimes uncomfortable—especially for clinicians and non-profits—but are necessary for making sure that these tools benefit rather than harm those they are purported to serve.
“What we aren’t talking about is how chatbots, social media, and how we are always on our devices change how we value human relations versus how we value one-sided relationships with technology.”
– C. Vaile Wright
Learn More in Our Free Webinar Series: Health In The Headlines
Tune in to Health in the Headlines, a free webinar series from the National Academy of Medicine held on the second Wednesday of every month from 3:00-4:30 pm ET. Health in the Headlines brings together experts from opposing viewpoints to discuss timely health topics, encouraging conversation that transparently combines scientific evidence and personal, cultural, and policy values. Sign up for updates and watch recordings of past webinars here.
Disclaimer
Statements, recommendations, and opinions expressed in the webinar and this document are those of individual presenters and participants. These views are not necessarily endorsed or verified by the National Academy of Medicine or the National Academies of Sciences, Engineering, and Medicine and should not be construed as reflecting any group consensus.