Discussion about this post

Michael Kerrison:

Good article - I'm gonna need to think about this one harder and more carefully.

One thing that stands out offhand is your claim about "having to wrestle with it". Maybe *you* have to wrestle with it - what about people who have memory on, who use it differently than how you use it, and/or whose natural approach/writing nudges it more easily into the relevant 'personality basin'?

I think any statement of the form "[model] behaves like [X]" should automatically be a little suspect, as there seems to be quite a lot of variance, and people mostly speak on this from their own direct experience using it (understandably).

Samson Dunlevie:

Mental health worker here.

The thing that stands out to me here is a reality many of us in this industry know: "things get worse before they get better". Putting the delusion stuff to one side - I reckon there's a portion of these people who are uncovering deep secrets about themselves and their psychological state, possibly doing trauma work. The hard thing about this kind of work is that it's like doing emotional surgery: there's an infected scar, we need to cut it open, put on disinfectant and then let it heal properly. That shit is PAINFUL.

I know that's not precisely the direction of your article, but there's a lot of stigma about delusions, and society has a long way to go in terms of supporting people who think outside the mainstream or experience reality differently. I often hear 'worries and concerns' from 'mentally well' people, and I think there's a gap in understanding of how much they should actually stress out about people talking about 'weird shit'.

As an adult who accesses both human therapists and chatbots for help, I've found chatbots incredibly helpful in terms of accessibility for managing my mental health. I have specified "Do not coddle, validate or tell me what I want to hear - remain objective where possible", so I'm hoping the AI is not just telling me what I want to hear. It has helped me make better decisions for my mental health than some human hotline workers (some put me in a worse place and caused harm). I agree that cover-all bans are paternalistic. I agree AI companies need to figure out how to be as ethical as possible and take harm-reduction approaches, but also that each adult has a level of responsibility/accountability (or has people in their lives responsible for helping them navigate the world) in terms of how they interact with any tool.

Great article.

