
Dr. Ragy Girgis
As AI chatbots become more widely used, questions are emerging about how they may affect mental health. Psychiatrist Ragy Girgis discusses the potential risks for some users – and what clinicians and families should understand
In recent years, artificial intelligence (AI) chatbots have become widely available, and more people are turning to these tools for conversation and companionship, including discussions about mental health. While these tools can be important sources of information for many users, they can also pose new challenges. One of these is a phenomenon called AI psychosis. The term, which is not a clinical diagnosis, describes instances where people develop delusions—or have existing delusions deepened—in association with heavy use of chatbots.
In this interview, Ragy Girgis, MD, MS, professor of clinical psychiatry at Columbia University and the New York State Psychiatric Institute and an expert in psychosis, sheds light on what AI psychosis is, how it develops, how it can be treated, and strategies for addressing potentially harmful chatbot use, as well as where gaps in the research remain.
This interview has been edited for length and clarity.
Can you tell us about your background and research interests? Do you treat patients with psychosis, particularly in the context of chatbots and the internet?
I completed my adult psychiatry residency at Columbia, and my desire was to pursue a career in clinical research in schizophrenia—developing new medications for schizophrenia and understanding the pathophysiology of early psychosis in particular, also called prodromal psychosis. People with prodromal psychosis have attenuated delusions or attenuated psychotic symptoms, mostly attenuated delusions in the context of this discussion.
I completed my three-year fellowship and then became an assistant professor. Since that time, I’ve been funded to conduct brain imaging and clinical trials research in schizophrenia, especially early schizophrenia, trying to understand its pathophysiology and, in particular, to identify predictors among people with prodromal psychosis that can help prevent them from progressing to full conviction, or psychosis.
Around six or seven months ago, journalists started to reach out to me to help them understand what AI psychosis is about. A lot of the people who comment on AI psychosis, psychiatrists or specialists, tend to be in my field, the clinical high risk or prodromal field, because understanding conviction and attenuated psychotic symptoms is what we do.
Before we get into AI psychosis specifically, can you explain what psychosis is in general?
There are multiple ways that people define psychosis. The most comprehensive and standard way is to define psychosis by the presence of at least one of four primary types of psychotic symptoms. Those are:
- delusions, which are unusual, false, and fixed ideas;
- hallucinations, which are unusual perceptual abnormalities—like hearing, seeing, tasting, smelling, or feeling something that isn’t there;
- disorganized behavior; and
- disorganized speech.
Having any one of those four types of symptoms qualifies someone as having a psychotic symptom and then usually some sort of psychotic disorder. There are many types of psychotic disorders, such as schizophrenia, bipolar disorder with psychotic features, and delusional disorder. What is common among them is that one must have one of those four psychotic symptoms to receive a diagnosis of psychosis.
How do you distinguish a delusion from someone who maybe believes in something like a conspiracy theory?
Parsing these things is part of why psychiatrists spend three or four years in residency and then, if they want to become specialists, several more years: to really be able to differentiate between a run-of-the-mill unusual or fringe idea or ideology and an actual delusion.
It’s also important—very relevant to what we’re talking about with AI psychosis—that all ideas, and especially unusual or delusional ideas, lie on a spectrum of conviction from 1 to 100%. It isn’t as if everything is either true or not true, or people believe things either not at all or completely. These things lie along a spectrum.
Even ideas that are unusual but not obviously delusional generally wouldn’t be defined as delusions, because if you really press people and ask them to commit, their conviction level is not going to be 100%, which is what defines a delusion. Anything between 1 and 99% would be an attenuated delusion or an unusual idea. Those are actually very common; most of us, if not all of us, probably have them. Whenever we use our rating scales, a lot of those sorts of ideas light up.
That’s why we have a spectrum and understand the difference between the scores. The scale goes from 0 to 6: a 1 or a 2 is considered typical; a 3, 4, or 5 is considered attenuated, meaning problematic but still not delusional; and a 6 is fully delusional.
Conviction at anything less than 100% technically isn’t fully fixed; you would be able to challenge the belief to some degree. Even at 99%, there’s an iota of doubt.
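To make the spectrum concrete, here is a minimal sketch in Python of the rough mapping Girgis describes. The function and the numeric cutoffs below are invented for illustration only; the actual rating instruments used in prodromal research require a trained clinician, and conviction is only one of the dimensions they assess.

```python
# Toy illustration of the conviction spectrum described above.
# Category names follow the interview (typical / attenuated / delusional);
# the numeric cutoffs are hypothetical, not a real clinical instrument.

def classify_conviction(conviction_pct: float) -> str:
    """Map a 0-100% conviction level onto the rough categories
    described in the interview (hypothetical thresholds)."""
    if not 0 <= conviction_pct <= 100:
        raise ValueError("conviction must be between 0 and 100")
    if conviction_pct == 100:
        return "delusional (scale score 6): fixed and no longer challengeable"
    if conviction_pct >= 50:  # invented cutoff for illustration
        return "attenuated (scale 3-5): problematic but still challengeable"
    return "typical (scale 1-2): the unusual ideas most of us carry"

if __name__ == "__main__":
    for pct in (10, 65, 99, 100):
        print(f"{pct:>3}% conviction -> {classify_conviction(pct)}")
```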
Who is at greatest risk for developing psychosis?
Psychosis is multifactorial in nature. There’s a strong genetic component, and that does not just mean familial, though there is definitely a familial component. For example, if someone has a first-degree family member, like a parent, with a psychotic disorder—especially schizophrenia—the risk of the child having it is 9%. The risk in the general population is about 1%. If you have a sibling with it, your risk is about 5%. If you have an identical twin with it, the risk is more like 45%. So, while it isn’t 100% genetic or familial, there’s a large familial genetic component. Psychosis in schizophrenia tends to appear around the time of puberty, between ages 15 and 21. It can occur earlier and it can occur later, but 15 to 21 is really the main window in which psychosis develops.
Now, a lot of people have essentially de novo genetic mutations, so it’s not familial. They don’t inherit this genetic mutation from one of their parents, but it’s still genetic in nature.
There are other familial contributors that aren’t genetic. For example, very early life stress—by very early, we mean perinatal, right around the time of birth. People have studied large famines and identified a high risk of schizophrenia among the offspring of mothers who were pregnant during the famine. That’s what we mean by perinatal or early life stress: very early, severe stress, like starvation among pregnant mothers.
Can psychosis be cured?
Psychosis in its full form, in which delusions have reached 100% conviction, is not reversible. We can treat it in a number of cases, but we technically can’t cure it. There is a nuance to this: psychosis can be managed pretty well with medications, but you will always be vulnerable to relapse.
We’re able to get almost complete remission in about 10% of cases, extremely good outcomes in a third of cases, fair outcomes in a third, and very bad outcomes in a third. But when there are very good outcomes, in almost all cases that’s because people are taking their medications for life. They are still able to live a full and mostly asymptomatic life, but they must keep taking their medications. Unfortunately, non-adherence, or stopping medications, is extremely common in psychosis.
How do you define AI psychosis? Can you talk about how it relates to internet-fueled psychosis more broadly?
There are really two types of AI psychosis.
One type is where someone who already has a psychotic disorder or schizophrenia engages with a large language model or chatbot and for some reason is convinced to stop their medication. That, of course, could lead to a decompensation—or the return of symptoms—and a psychotic break. That’s probably the more common type of AI psychosis, but we don’t hear about it as much in the media.
However, the type that we’re talking about more now is the type of AI psychosis in which the large language model or chatbot reinforces some sort of unusual idea. The chatbot isn’t actually inducing or creating a delusional or unusual idea; it is reinforcing an existing idea so that the person’s conviction level increases. That increase could be from 1 to 2%, or 20 to 30%, or 65 to 67%, or 99 to 100%—and that last one is the worst-case scenario, because once you get to 100%, the psychosis is fixed and irreversible. A conviction level below 100% would technically be reversible.
So, when we talk about AI psychosis, we’re really talking about a large language model reinforcing some sort of unusual idea and increasing someone’s level of conviction.
Now, AI does other things that are very similar. Although it’s not technically AI psychosis, we often include under that rubric cases in which a large language model influences, reinforces, or induces thoughts of suicide. Someone may enter information related to suicide into a large language model or chatbot, the chatbot reinforces it, and the person then has an episode in which they take their life. That’s technically not AI psychosis, but we consider it under the same umbrella because the same mechanism is at work.
How does this compare to what happened before AI chatbots existed? Has AI accelerated this phenomenon or just changed it?
The phenomenon of AI psychosis is quantitatively new and could be very dangerous, but qualitatively it’s very similar to what’s been happening for decades now since the advent of the internet.
This is very similar to, for example, when someone falls down a rabbit hole online, searching for topics around which they may harbor some unusual ideas, and receives the same type of reinforcement as they might from a chatbot. It’s not quite as strong, and the delivery isn’t as easy to internalize, but it’s the same sort of thing.
We also have social media, which can have a stronger influence than internet searches alone. You have algorithms that can identify what you’re interested in and present that information to you more quickly.
Now we have AI and large language models: a system that, in very many ways, can mimic human intelligence. It’s extremely strong, responds very quickly, and responds like a human would. It’s easier for people to internalize. That’s why we’re so concerned about it now. We haven’t spoken about this as much in previous years or decades, but we’ve seen many patients have this sort of problem.
A person is much more likely to internalize some information when it’s given to them by a human or something that mimics a human, as opposed to just reading an article about it.
Do you have a sense of how prevalent AI-exacerbated psychosis is now?
Technically, it’s hard to say. There’s been great reporting in the media on this topic. The scientific community is catching up now—it just takes a little longer to study things and gather the data.
Right now, all we really have is what’s been reported in the media and what we hear about from lawsuits and those sorts of things. Although hundreds of millions of people are using AI, reported cases of AI psychosis are in the double digits. So, the prevalence is small as far as we know. We’re likely to get much higher and more accurate numbers in the coming months or years.
But again, we don’t know everything about it. I can tell you just from my own experience: a matter of years ago, we would frequently speak with people—especially in our early psychosis or pre-psychosis clinic—who reported that their unusual ideas were being reinforced by articles online, videos they found, or the rabbit holes they fell down.
How do clinicians treat this? How would a therapist or psychiatrist intervene?
This is a very important question, and there are things one can do. The most important is to monitor—whether that’s done by a clinician, a family member, a friend, a spouse, a partner, or an acquaintance—and look for signs that something’s going wrong. Is the person in question spending too much time with a large language model or chatbot? Are their relationships, work, or school starting to suffer? That would be a sign.
And the way to intervene is pretty simple. You would intervene by, most importantly, asking the person to spend less time on the chatbot. That should be adequate. Then, of course, the clinician would consider other treatments, depending on the person’s individual situation. But really, the number one and most proximal treatment is simply limiting time on the chatbot, or removing the person’s exposure to it altogether.
The idea is to intervene before a person’s delusions become fixed. That’s possible and it’s what my whole career is based on.
What are the warning signs that AI use may be exacerbating a mental health issue? And when should families be concerned?
The prevalence of AI psychosis is low at this point, as far as we can tell, and we really don’t know what that prevalence is exactly. But if someone is spending many hours with a chatbot, it’s unlikely that no part of their life is suffering in some way, and there are many other problems to consider beyond AI psychosis. In addition to directly affecting work, school, or relationships, we know in general that social media affects a person’s personality. It affects their self-esteem big time; we know that for sure. Bullying is a big problem related to that. It affects people’s anxiety tolerance and mood stability. And of course, those reinforce each other: mood instability can make someone more susceptible to whatever they see on social media, which in turn worsens the instability.
There are a lot of reasons in general to make sure that one’s child or any person in question isn’t spending too much time with a large language model or a chatbot.
We should encourage AI companies to build more reminders into their chatbots that users are not speaking with real people; they’re speaking with an AI chatbot. It would be good for the system to remind people of this frequently and continually.
And we need more research, because this is so new. We need AI companies to partner with researchers, psychiatrists, and scientists to figure out how to identify when people are using chatbots maladaptively: spending too much time with them or using them in an unhealthy way. We should be able to do that. I mean, we know AI can already do that, so we just need to figure out how to program it into these chatbots.
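As a purely hypothetical illustration of what such usage-pattern screening could look like, here is a minimal Python sketch. The thresholds, field names, and flagging logic are all invented for this example; a real system would need clinically validated criteria developed through the kind of research partnerships Girgis describes.

```python
# Hypothetical sketch of usage-pattern screening. Nothing here is a
# validated clinical screen; thresholds and fields are illustrative only.

from dataclasses import dataclass
from datetime import timedelta

@dataclass
class UsageStats:
    daily_hours: float          # average hours per day in chat
    longest_session: timedelta  # longest single session
    late_night_sessions: int    # sessions between midnight and 5 a.m. this week

def flag_for_review(stats: UsageStats) -> list[str]:
    """Return human-readable reasons a usage pattern might warrant
    a gentle in-app reminder or a pointer to resources."""
    reasons = []
    if stats.daily_hours > 4:                       # invented threshold
        reasons.append("heavy daily use")
    if stats.longest_session > timedelta(hours=3):  # invented threshold
        reasons.append("very long single session")
    if stats.late_night_sessions >= 3:              # invented threshold
        reasons.append("repeated late-night use")
    return reasons

print(flag_for_review(UsageStats(5.5, timedelta(hours=4), 4)))
# -> ['heavy daily use', 'very long single session', 'repeated late-night use']
```

On this toy definition, a flagged pattern would trigger only a reminder or a pointer to resources, consistent with the interview's emphasis on monitoring rather than diagnosis.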
What would an ideal response from a chatbot look like? Should it shut down the conversation and direct the person to get help?
Exactly. Because the chatbot responds like a human, it would ideally say something along the lines of: “It’s really hard for me to confirm some of these things that you’re saying or address any of them. It sounds like it might be good for you to seek help. Here are some numbers to call or resources for you.” That’s what we would expect the chatbot to say.
Right now, chatbots offer reinforcement. For example, someone might tell a chatbot that they’re the Messiah and they’re going to save the world, and all those sorts of things. And then the chatbot might say something that amounts to, “You’re right, you’re an extremely special person. It’s totally reasonable that you’ll save the world.” That happens a lot of the time.
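To make the contrast concrete, here is a toy sketch of the two response patterns in Python. The keyword check is a deliberately crude stand-in for whatever detection a production system would actually need, and every string in it is illustrative.

```python
# Toy sketch of the response pattern described as ideal above: decline to
# confirm a grandiose claim and point to help, instead of affirming it.
# The keyword list is a crude, illustrative stand-in for real detection.

GRANDIOSE_MARKERS = ("i am the messiah", "i will save the world",
                     "i have been chosen")  # illustrative examples only

SAFE_REPLY = ("It's really hard for me to confirm some of the things "
              "you're saying. It might be good to talk this over with "
              "someone you trust or a mental health professional.")

def respond(user_message: str) -> str:
    text = user_message.lower()
    if any(marker in text for marker in GRANDIOSE_MARKERS):
        return SAFE_REPLY                           # decline to reinforce
    return "...normal conversation continues..."    # placeholder

print(respond("I am the Messiah and I'm going to save the world."))
```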
Just to be clear—the average person doesn’t need to worry that chatbots will somehow cause delusions, right? You’d need to already have an existing disorder or predisposition, or already be at a heightened level of conviction about something?
Generally, yes. Any one of those things, and they overlap to some degree. So, someone could already have, say, a 60% conviction in an idea. Or some people may simply have a greater propensity for AI psychosis: for example, people with a character structure that includes mood instability, anxiety intolerance, identity diffusion, and poor reality testing. These are things that would put someone at higher risk of, or greater susceptibility to, AI psychosis.
Disclaimer: The information provided in this article is for informational purposes only and is not intended as a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health care provider with any questions you may have regarding a medical condition.