Discussion about this post

Lars Olof Berg

I’ve been thinking about this a lot lately — maybe we need to shift our mental picture. We’ve grown so used to computers being exact that it feels strange to accept answers in terms of probabilities or rules of thumb, like you mention.

But in nature, in biology, and especially in human relationships, there’s almost never 100% certainty. It’s all shades of probability.

Seen that way, I think it’s actually a pretty useful mindset for how we approach AI and LLMs.

Matt Ball

I find it fascinating how ChatGPT can fail. But I find it laughable when people write off LLMs because they can hallucinate and make mistakes. Have they never made a mistake themselves? Have they never seen how goofy the world of humans is?
