3 Comments
Gregory Forché

What you are saying is of course true - these are different orders of evaluation, and people sometimes conflate them.

I also think it isn’t just the logic of people’s arguments that is driving the dynamics.

The technology plays a role too.

It is not saying “I do a few useful things well, but I always stay in my lane”.

We are invited to use it for basically anything language can be used for, with no guardrails and no error messages, just this amazing thing that almost seems to think.

Nor are any of the CEOs responsible for shaping the public image of LLMs saying anything modest about the technology having narrowly defined utility. Instead we hear it's going to change everything that has come before in human experience.

These factors influence how people show up in a conversation, and what they hear.

So I'm all for conversations going well, and I just want to add that understanding where people are coming from, i.e. identifying with the context of their experience, is just as important to the interaction as the logic of what they say, however flawed.

Hugh Hawkins

Unfortunately, "good" as in useful and "good" as in morally good, are often conflated. So the people who hate AI, mostly for art-related reasons (at least to start with), want AI to be "bad" as in useless, not just "bad" as in bad for society.

Program Denizen

So the "bad overall" is more a critique or judgement, than suggesting the world would be better without them?

"More bad than good— but what would replace the uses and not have the same traits?" type of sentiment?
