
Can AI Cause Psychosis?

A psychologist and therapist weigh the risks of platforms like ChatGPT around mental health


by Matthew Hastings | September 2, 2025
Illustration: A young man and a young woman sit back-to-back, each looking at a phone showing a ChatGPT screen – one asking what it can help with, the other asking, “What is love?”

AI platforms have been at the center of several high-profile psychosis cases – even among people without a history of mental health conditions. 

Psychosis, defined as difficulty in telling what is real from what is not, involves a spectrum of symptoms from disorganized thoughts to hallucinations to paranoid delusions. 

AI platforms’ tendency to be “yes machines” – encouraging more user engagement – can drive unhealthy behaviors and interactions, increasing the risk for psychosis, according to two professors of psychiatry at CU Anschutz. 

“Interacting with these platforms can give you responses that simply affirm your thinking,” said Emily Hemendinger, MPH, LCSW, clinical director of the Obsessive Compulsive Disorder Program and assistant professor in the Department of Psychiatry at the CU Anschutz School of Medicine. “ChatGPT is going to confirm – sometimes with only minimal pushback – what you type in. Now take that easy agreement and the dangers around delusions and psychosis, and you see the shaky ground some people can be on.”

And the risks are even greater for younger users, said Michelle West, PhD, assistant professor of psychiatry.

“We call the ages 12 to 25 the transition-age youth period, and it’s vital to brain development. One of the main tasks during this period is figuring out how to develop social relationships and friendships,” West said.

“But social interactions are very complicated for all ages, and being plugged in online or to AI all the time can be very unhelpful; it can lead to avoiding those difficult situations we all need practice in.”

In the following Q&A, Hemendinger and West detail what psychosis is, how AI platforms like ChatGPT present mental health pitfalls, and how to establish a healthier distance from them.



What are symptoms of psychosis?

West: Psychosis can include what we call positive symptoms, or additions to typical human experiences. Sensory experiences include hearing (most commonly), but also sight, touch, smell and taste – sensing something that is not coming from the external world. People can also experience convincing thoughts that do not “fit the facts” (which can develop into delusions). And then there’s also disorganization in thoughts and speech.

Psychosis also can include what we call negative symptoms, which are reductions in human experiences: trouble with motivation, expression of emotion, and withdrawal from or reduced interest in spending time with others. Those can cause significant distress and impact people’s lives but are a little less in your face than the positive symptoms.

What are the risk factors for psychosis?

West: Like everything in mental health, we think of the biopsychosocial model – biological factors, psychological factors, and your social environment. 

  • Biologically, there is a familial risk if psychosis runs in your family. There are other biological risk factors, such as infections in utero, seizures, or head injuries. Substance use can also play a role for some – whether it is hallucinogens, cannabis, or stimulants. 
  • Psychologically, there are lots of different thought tendencies that people have, and some of those make it more likely that you’ll develop psychosis. For instance, a tendency toward inflexibility in thinking, or a general tendency to experience the world as a threatening place where people might want to harm you, can increase your risk.
  • And then socially, there's research showing ways in which stress can increase the likelihood of developing psychosis symptoms or diagnosis. So folks who have experienced trauma, for instance, may be more likely to develop psychotic disorders in addition to trauma-related and other diagnoses.

How do you help someone in that state when it seems like they have become unmoored from their surroundings?

West: Ideally, you want to be working on all the aforementioned biopsychosocial factors. With each person it’s tailored – we take an individual approach based on what is most relevant to them. But there are some common approaches that can be helpful:

  • Information sharing: We try to provide unbiased mental health information about the different ways we can interpret our experiences, so people have a better awareness of what they are experiencing. Having those possibilities in mind can be helpful.
  • Decreasing biological contributing factors: If a person admits to, or we notice, a tendency toward suspiciousness, and they’re consuming a lot of cannabis every day and becoming more paranoid, we’ll try to reduce substance use.
  • Increasing connections: We talk with people experiencing psychosis about the connection with other humans in their life and their personal values. This is for two reasons. First, it generally improves overall life satisfaction when you have several healthy relationships. Second, it focuses less energy onto strong thoughts or delusions. 

Hemendinger: Medication also helps, and early intervention is very important.

Acceptance and commitment therapy, where you focus on values when someone is having a delusion or hallucination, alongside reframing for reality testing, can also help. These look different from the standard therapeutic techniques you would otherwise use, but you can apply them with psychosis, too.

It seems like it might be challenging to get someone who is in that current state of mind to agree to take those steps in the moment. How do you bridge that gap?

Hemendinger: It can be challenging, because psychosis can have a lot of obsessive-compulsive disorder (OCD) qualities. It can be hard when someone is invested in their break from reality because, in their view, it’s serving a function for them. You don’t want to jump in and say, “You’re wrong, stop it, just snap out of it,” because that’s just going to push them further away.

West: I’d add that it helps to tell someone experiencing psychosis that seeking help is an acceptable thing to do. You can gently plant seeds with a loved one by saying, “I’ve done therapy, my friend has done therapy,” and so on. Normalize it by putting it in terms of, “Things seem stressful, and you matter and deserve support.” Then combine that with helping them connect with things they normally enjoy in life – hobbies, outdoors time, etc. – while generally expressing love and care. All of these together can help a lot.

As a therapist, what warnings would you provide before people start engaging with AI platforms as it relates to psychosis?

West: I’d recommend people try to take a step back and see where they are getting their information. If it’s all AI and the answers it provides, that might need to be addressed. 

Additionally, the transitional age between 12 and 25 or so is vital to brain development. A lot of people now have also grown up with technology all around them. You can make an argument that a lot of those technologies can be helpful in some respects as a coping strategy, but they can also get to the point where they’re allowing you to fully avoid doing other things in your life.

I definitely can imagine a way in which AI might lead to reinforcing, agreeing with, and gathering evidence to support incorrect thoughts and experiences, while also increasing disconnection from humans in your life that might care about you. 

Hemendinger: I think it’s important to remember these models are trained on the internet – warts and all. There’s a lot of factually inaccurate and problematic information out there. It makes sense that these models will provide answers based on that and therefore are susceptible to providing incorrect and damaging content in their responses – also called, ironically enough, hallucinating. 

Because of that, in sessions with patients, we try to stress that it is particularly important to set boundaries around AI and to take things with a grain of salt on certain topics, for example:

  • Eating disorders: We want to ensure we’re placing boundaries around asking questions on calories, food intake, appearance, exercise or induced vomiting. 
  • OCD and harm-related themes: Avoiding reassurance questions (“How do I know if I’m going to snap or not?”) related to the content of their obsessions. 

And for psychosis, and bipolar disorder specifically, I would caution those who are having a specific delusion against talking to ChatGPT about it. The responses I’ve seen with patients just validate and go along with it. This has included things like, “Should I go off my medication?” – and the responses they’ve shared are ChatGPT saying, “Good for you for setting that boundary.” That’s not what we want therapeutically for people.

ChatGPT is not necessarily going to give you what is helpful therapeutically, and it will give you responses with few, if any, safeguards or broader context. It’s available all the time – on your phone – but that’s not how relationships work, so you’re not getting to practice that boundary setting. That’s especially concerning because the urgency to seek relief is a core symptom of many disorders.

Additionally, the accountability and rapport you would get in building a relationship with a therapist are absent. A therapist would get to know you – your personality, your strengths, the places where it would be helpful to work – and challenge you where appropriate. It’s AI filler content versus meaningful human connection, in some ways.

Finally, think of what you are typing in. Your private and personal information is not 100% secure with ChatGPT. Would you tell a complete stranger you just met the same things you are typing into the system?

What are warning signs for too much AI use? 

Hemendinger: I think there are a few main ones: 

  • When there’s rigidity and hyperfixation or preoccupation with AI. Is it the only thing you or someone you know is talking about? 
  • If someone is becoming more isolated and withdrawing from others. 
  • When people are going through withdrawal from being unable to use it. 
  • If behavior is becoming cranky, irritable, or erratic, or there are mood swings.

Those warning signs alongside an increase in impulsive or reckless behavior are definitely things to watch out for. 

West: I’d also say to monitor your usage – really of any technology. Are you spending your entire day on this? Is it preventing you from doing other stuff that you care about? Are you noticing an impact on your general quality of life? We’re social creatures, and it is healthy to go outside and interact with other people.

Featured Experts

Emily Hemendinger, MPH, LCSW

Michelle West, PhD