7 Comments
Michael Kerrison

Good article - I'm gonna need to think about this one harder and more carefully.

One thing that stands out offhand is your claim about "having to wrestle with it". Maybe *you* have to wrestle with it - what about people who have memory on, who use it differently than how you use it, and/or whose natural approach/writing nudges it more easily into the relevant 'personality basin'?

I think any statement of the form "[model] behaves like [X]" should automatically be a little suspect, as there seems to be quite a lot of variance, and people mostly speak on this from their own direct experience with the model (understandably).

Samson Dunlevie

Mental health worker here.

The thing that stands out to me here is a reality many of us in this industry know: "things get worse before they get better". Putting the delusion stuff to one side, I reckon there's a portion of these people who are uncovering deep secrets about themselves and their psychological state - possibly doing trauma work. The hard thing about this kind of work is that it's like doing emotional surgery: there's an infected scar, we need to cut it open, put on disinfectant, and then let it heal properly. That shit is PAINFUL.

I know that's not precisely the direction of your article, but there's a lot of stigma around delusions, and society has a long way to go in terms of supporting people who think outside the mainstream or experience reality differently. I often hear 'worries and concerns' from 'mentally well' people, and I think there's potentially a gap in understanding of how much they should actually stress about people talking about 'weird shit'.

As an adult who accesses both human therapists and chatbots for help, I've found chatbots incredibly helpful in terms of accessibility for managing my mental health. I have specified "Do not coddle, validate, or tell me what I want to hear - remain objective where possible", so I'm hoping the AI is not just telling me what I want to hear. It has helped me make better decisions for my mental health than some human hotline workers (some put me in a worse place and caused harm). I agree that cover-all bans are paternalistic. I agree AI companies need to figure out how to be ethical and take harm-reduction approaches, but also that each adult has a level of responsibility/accountability (or has people in their lives responsible for helping them navigate the world) in terms of how they interact with any tool.

Great article.

Matt G

Hey Andy

A word of caution about applying group statistics from small samples to make inferences about large populations. There are a couple of ecological fallacies here:

- You claim a proportion of "25%-39% of patients with schizophrenia and 15%-22% with bipolar" in the world population. That study was done in NY in 1999 with a sample size of 41, so the statistic comes from a tiny sample and can't be extrapolated to the world.

- Your 1/444 is not the proportion of the world that is highly prone to religious delusions. It is the risk ratio for having bipolar disorder and being prone to religious delusions, versus neither. So somewhere in the world, up to 18 million people exist with these two conditions. Separately, 1 billion ChatGPT users exist. We don't actually know whether these two populations intersect, so the 2.25 million people you calculate don't necessarily exist at all (see the arithmetic sketch below).
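
A minimal sketch of the arithmetic in question, assuming the rough figures quoted above (a world population of about 8 billion, about 1 billion ChatGPT users, and the 1-in-444 ratio; none of these are peer-reviewed estimates):

```python
# Rough arithmetic behind the comment. All figures are the rough
# numbers quoted above, not peer-reviewed estimates.

WORLD_POPULATION = 8_000_000_000  # ~8 billion people
CHATGPT_USERS = 1_000_000_000     # ~1 billion users (rough public figure)
RISK_RATIO = 1 / 444              # bipolar + prone to religious delusions

# Upper bound on people worldwide with both conditions:
worldwide = WORLD_POPULATION * RISK_RATIO
print(f"worldwide upper bound: {worldwide:,.0f}")    # ~18,000,000

# Naively applying the same ratio to ChatGPT users assumes the
# at-risk population is represented among users at the world rate --
# exactly the ecological fallacy the comment points out:
naive_users = CHATGPT_USERS * RISK_RATIO
print(f"naive at-risk users:   {naive_users:,.0f}")  # ~2,250,000
```

The second number only holds if the at-risk group uses ChatGPT at the same rate as everyone else, which is precisely the unsupported assumption.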

I love your articles and your use of data to point at problems in AI. But I'd be careful about calculating population statistics yourself, and would stick strictly to peer-reviewed articles on the specific issues/populations you want to talk about.

Matt

Andy Masley

Fair, I tried to make it clear that these are extremely rough guesses, but I could add more language clarifying that.

Wiktor Wysocki

I like the argument that we are adults, and products made for adults should work as products for adults. ChatGPT included.

We have to acknowledge that a company cannot protect every human being who uses a program used by billions of people from a few edge cases. That is not the company's job, but the job of the people taking care of those who need help in such situations.

We should give AI tools to children carefully and under some supervision. But that is not OpenAI's job; it is ours, as adults.

Rafael Ruiz

Doing God's work!

Matt Ball

Thanks so very much for this, Andy. But sane, numerically-sound analysis doesn't get clicks.

It would be great if people would take mental health seriously.

https://www.mattball.org/2021/10/last-mental-health-note-mind-is-fragile.html
