Stories of AI turning users delusional don't make sense when you look at the numbers
A lot of people using AI have bad mental health, because a lot of people have bad mental health, and a lot of people are using AI
I’ve been seeing a lot of articles about AI turning its users delusional.
Here’s a recent example:
It’s paywalled so I can’t read the whole thing, but here’s the excerpt I was able to read:
another Redditor warned of thousands of people online with “spiritual delusions” about AI. This movement seems to have achieved some kind of critical mass during the months of April and May.
Here’s another recent article: People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions
As we reported this story, more and more similar accounts kept pouring in from the concerned friends and family of people suffering terrifying breakdowns after developing fixations on AI. Many said the trouble had started when their loved ones engaged a chatbot in discussions about mysticism, conspiracy theories or other fringe topics; because systems like ChatGPT are designed to encourage and riff on what users say, they seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions.
The stories are sad to read and I feel for the people involved. They also sound alarming and make it seem like AI is uniquely dangerous, and that AI companies are responsible for their users developing delusions of grandeur.
I don’t think this worry makes sense when you consider just how many people are using chatbots.
I worry that a lot of people don’t know, or haven’t internalized, just how many people are using chatbots right now. Because so many people are using AI, and some of them have bad mental health, we’re getting a lot of people whose bad mental health shows up in how they use chatbots, in exactly the same way it could with many other ordinary products.
I identify as a critic of AI. I want other critics to use good arguments and focus on real problems. I don’t think this is one of them.
I use a lot of very rough guesses and back-of-the-envelope calculations here. This isn’t using solid data at all compared to my posts on ChatGPT and the environment. My main argument is just “This thing definitely has a billion weekly users, therefore a lot of its users have religious delusions already. Let’s take an extremely rough guess at how many chatbot users are prone to religious delusions.” If you disagree with how I’m using numbers please let me know in the comments.
Separate from these numbers, I’d like to be treated like an adult by the companies that sell me products. Worrying that sycophantic chatbots will turn me delusional feels like refusing to treat the average chatbot user like an adult. I write more about that at the end.
The numbers
One billion people use chatbots on a weekly basis. That’s 1 in every 8 people on Earth.
How many people have mental health issues that cause them to develop religious delusions of grandeur? We don’t have much to go on here, so let’s make a very, very rough guess with very flimsy data. This study says “approximately 25%-39% of patients with schizophrenia and 15%-22% of those with mania / bipolar have religious delusions.” About 40 million people have bipolar disorder and 24 million have schizophrenia, so anywhere from 12 to 18 million people are especially susceptible to religious delusions. There are probably other disorders that cause religious delusions that I’m missing, so I’ll stick with the higher figure of 18 million. 8 billion people divided by 18 million is about 444, so roughly 1 in every 444 people is highly prone to religious delusions.
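If you want to sanity-check that arithmetic, here’s a minimal sketch of the same estimate in Python. Every input is just the rough figure quoted above, not precise epidemiological data:

```python
# Back-of-the-envelope estimate from the text above.
# All inputs are rough figures, not precise data.

bipolar_population = 40e6        # people with bipolar disorder
schizophrenia_population = 24e6  # people with schizophrenia
world_population = 8e9

# Share of each group with religious delusions (ranges quoted above)
low = 0.15 * bipolar_population + 0.25 * schizophrenia_population   # 12 million
high = 0.22 * bipolar_population + 0.39 * schizophrenia_population  # ~18 million

prone = 18e6  # stick with roughly 18 million, as above
print(f"{low / 1e6:.0f} to {high / 1e6:.0f} million people prone to religious delusions")
print(f"about 1 in every {world_population / prone:.0f} people")   # -> 1 in every 444
```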
Here’s a rectangle showing all people. I’ve split it horizontally between people who do and don’t use chatbots every week (1/8), and vertically between people who are and are not especially prone to religious delusions (1/444).
How big is that red rectangle in the bottom right?
If one billion people are using chatbots weekly, and 1 in every 444 of them is prone to religious delusions, then about 2.25 million people prone to religious delusions are also using chatbots weekly. That’s about the same population as Paris.
Based on the first article I shared, I’ll assume 10,000 people believe chatbots are God. Obviously no one actually has good numbers on this, but it’s the figure that’s been reported as a problem. Let’s visualize these two numbers:
It’s helpful to compare these numbers to the total number of people using chatbots weekly:
Of the people who use chatbots weekly, 1 in every 100,000 develops the belief that the chatbot is God, and 1 in every 444 was already especially prone to religious delusions. These numbers just don’t seem surprising or worth writing articles about. When a technology is used weekly by 1 in 8 people on Earth, millions of its users will have bad mental health, and for thousands of them that will manifest in the way they use it. This shouldn’t surprise us or lead us to assume the tech itself is causing their bad mental health.
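To put those two ratios side by side, here’s the same arithmetic again, using the rough guesses above (including my assumed 10,000 believers):

```python
# The same rough numbers applied to the weekly chatbot user base.
weekly_users = 1e9      # weekly chatbot users
prone_rate = 1 / 444    # share of people especially prone to religious delusions
believers = 10_000      # assumed number who believe the chatbot is God

prone_users = weekly_users * prone_rate
print(f"{prone_users / 1e6:.2f} million prone users")              # ~2.25 million, roughly Paris
print(f"1 in every {weekly_users / believers:,.0f} weekly users")  # -> 1 in every 100,000
```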
Going back to the quote from that first article:
another Redditor warned of thousands of people online with “spiritual delusions” about AI. This movement seems to have achieved some kind of critical mass during the months of April and May.
Why did this movement get so many additional people during April and May? Maybe because that was around the time ChatGPT’s total weekly users doubled:
If your user base doubles in a few months, you’ll have roughly twice as many people who are very prone to delusions using your product, so any community of people who think your product is God will roughly double too.
Comparisons to other things with a billion weekly users
YouTube, Facebook, Instagram, and TikTok each have over a billion weekly users.
How would you react to an article saying this:
Thousands of YouTube users now believe that God is specifically speaking to them through the videos the algorithm recommends. We contacted YouTube to ask what they’re doing to prevent this, and they gave an evasive answer.
I would say there’s absolutely nothing surprising about this, and it’s silly to contact YouTube asking what they’re doing to prevent it. With a billion people using your product, it’s impossible to stop thousands of them from developing delusions about it. Millions of people are prone to developing delusions, and the random companies those delusions happen to be about have no way to prevent that. It seems ridiculously paternalistic to expect companies to spend thousands of hours building in enough safeguards to prevent every possible delusion.
Monster Energy has sold around 22 billion cans. Some of the people who interact with the brand develop delusions. Here’s a video of a woman who developed a religious delusion about Monster Energy:
Could Monster Energy have done anything differently here?
I think chatbots are so new that many people just haven’t added them to their mental category of “services with a billion users” alongside YouTube and Instagram. Chatbots took off much faster than YouTube or Instagram did, and because they’re so new, people assume they must not be that big yet. In fact, chatbots are the fastest-growing applications ever. When you hear “ChatGPT users,” you should have the same order of magnitude in your head as TikTok, Instagram, and YouTube users. Once chatbots occupy this category in your mind, these stories seem much less surprising.
I would like to be treated like an adult
In general, I don’t want the companies selling me products to treat me like a child. With tech like AI, I like having a lot of control and choice. If I’d like the model to act out a character in a chat, including pretending to be God, I should be able to make that decision for myself. It’s just text on a screen!
Here’s a chat where I got ChatGPT to say that it’s God:
A lot of serious people are implying that this conversation represents a failure by OpenAI to protect my mental health. This is ridiculous. I want to be allowed to use ChatGPT any way I want, as long as it isn’t illegal or overtly hateful. The only way we could protect people from developing delusions of grandeur about AI is to make the chats so restrictive that people like me couldn’t have fun with them. I don’t want that.
We can’t pass laws or socially sanction companies for failing to completely stop every single person from developing delusions. If we need so many restrictions on AI that no one with bad mental health could ever be tricked into thinking it’s God, we’d also need restrictions on most of the internet in general.
A funny and bizarre corner of the internet is “affirmation videos.” I sometimes turn these on with friends as a joke. They’re all so obviously ridiculous that we just sit back, laugh, and get overwhelmed by the bizarre world people have constructed; we can’t imagine what it must feel like to respond positively to these. Here’s a 20-minute affirmation video (with 100,000 views!) that tells the listener, over and over, that they are a god:
Should YouTube ban this video? Is YouTube responsible for this video turning people delusional? The video seems to be giving a much worse and weirder message than what’s possible with ChatGPT conversations.
I think YouTube banning this video to protect my mental health would be ridiculously paternalistic and disrespectful. I’m an adult! I can handle crazy videos like this. I don’t want the largest video streaming company micromanaging what I watch to help me avoid every possible problem and harm. I don’t see how building “safeguards” into ChatGPT to protect very delusional people is any different. I can handle videos like this, and I can handle text conversations with a robot.

Some people will become delusional watching videos like this. That’s very sad! They deserve help and support. But the way to support them isn’t to restrict my and other people’s freedom to consume this stuff; it’s to get them therapy and medication and help them avoid products that might harm them. And if someone develops delusions from interacting with a text-based robot on a screen, there’s so much else on the internet that could have pushed them into even more severe delusions that I don’t see what we could have done to avoid it.
Revisiting that second article:
As we reported this story, more and more similar accounts kept pouring in from the concerned friends and family of people suffering terrifying breakdowns after developing fixations on AI. Many said the trouble had started when their loved ones engaged a chatbot in discussions about mysticism, conspiracy theories or other fringe topics; because systems like ChatGPT are designed to encourage and riff on what users say, they seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions.
The trouble didn’t start with “engaging the chatbot,” it started when these people developed an interest in mysticism, conspiracy theories, and other fringe topics. What was causing them to prompt ChatGPT about these in the first place? Often the answer is some kind of mental health episode.
It’s very easy to Google conspiracy theories and mysticism and other fringe topics, and find communities of people who will cheer you on as you go further down the rabbit hole. Many people who do this also develop delusions. Without ChatGPT, these people would have probably just used Google and found similar material. Should Google ban all conspiracy theories and fringe topics? Again, I don’t think so. I’d like Google to treat me like an adult too.
Some might say the difference here is that ChatGPT presents itself as more authoritative than Google: it seems like a single smart entity, whereas Google just passively links to other websites. I’d point out another difference: it takes a lot more wrestling to convince ChatGPT to argue for crazy positions than it does to find crazy results on Google. Here’s an example of trying to get it to say 9/11 was an inside job:
The only way to get ChatGPT to say crazy stuff is to wrestle with it a lot and find exactly the right prompts to make it go off the rails. Anyone who puts that much effort into changing ChatGPT’s responses, and then decides the result is a smart, authoritative entity rather than a weird, warped program they themselves tricked into talking differently, is already completely detached from reality.
If these people are so tech literate and have so much time and energy that they can trick the chatbot into agreeing with conspiracy theories, they’re tech literate enough to just find conspiracy theory material and communities online too.
For people who are still worried about AI’s ability to turn people delusional, I ask that you actually sit down and try playing with the models a bit. For me, it’s impossible to come away from using ChatGPT thinking there’s any chance at all that it could ever convince me it’s God, or give me delusions of any kind. My ego isn’t helped or hurt by a text-based robot, and people whose egos are easily hurt by one probably have serious underlying problems that the chatbots didn’t cause, and that most of the internet and society at large would aggravate anyway. The rest of us deserve to be free to live our lives as we see fit. People suffering from delusions deserve care and love and support from the people and institutions around them, but they shouldn’t get to determine what the rest of us can do and say and engage with.
Good article - I'm gonna need to think about this one harder and more carefully.
One thing that stands out offhand is your claim about "having to wrestle with it". Maybe *you* have to wrestle with it - what about people who have memory on, who use it differently than how you use it, and/or whose natural approach/writing nudges it more easily into the relevant 'personality basin'?
I think any statements about "[model] behaves like [X]" should be automatically a little suspect, as it seems like there's actually quite a lot of variance, and mostly people speak on this from their own direct experience using it (understandably).
Mental health worker here.
The thing that stands out to me here is a reality many of us in this industry know; "things get worse before they get better". Putting the delusion stuff to one side - I reckon there's a portion of these people who are uncovering deep secrets about themselves and their psychological state - possibly doing trauma work. The hard thing about this kind of work is that it's like doing emotional surgery; there's an infected scar, we need to cut it open, put on disinfectant and then let it heal properly. That shit is PAINFUL.
I know that's not precisely the direction of your article, but there's a lot of stigma about delusions and society has a long way to go in terms of supporting people who think outside mainstream or experience reality differently. I often hear 'worries and concerns' from 'mentally well' people, and think there's potentially a gap of understanding of how much they should stress out about people talking about 'weird shit'.
As an adult who accesses both human therapists and chatbots for help, I've found chatbots incredibly helpful in terms of accessibility for managing my mental health. I have specified "Do not coddle, validate or tell me what I want to hear - remain objective where possible" so I'm hoping AI is not telling me what I want to hear. It has helped me make better decisions for my mental health than some human hotline workers (some put me in a worse place and caused harm). I agree that cover-all bans are paternalistic. I agree AI companies need to figure out how to be as ethical as possible and take harm-reduction approaches, but also that each adult person has a level of responsibility/accountability (or has people in their lives responsible for helping them navigate the world) in terms of how they interact with any tool.
Great article.