AI can be bad without being useless
A common way conversations get tripped up
One of the most common misunderstandings in AI conversations I’m in: when I say AI is obviously useful in specific contexts, people assume I’m saying it’s obviously good overall. Here’s a recent example from the Hacker News comments on my last post:
Me: I think a lot of people are getting a lot of misinformation from TikTok, and I think it’d be better if TikTok didn’t exist, but I’d consider anyone who said that TikTok is completely useless to its users to be pretty goofy. I feel the same about chatbots.
User: How can you both think TikTok shouldn't exist and think that it's useful to its users, without using a pretty unique definition of useful?
I was surprised by this reply. It showed up in a lot of other places too. To me it seems obvious that something can be useful in specific situations, but still be bad overall. Here are some examples:
Nuclear weapons: I would prefer a world where nuclear weapons didn’t exist, but nuclear weapons are extremely useful to the specific countries that possess them. It seems ridiculous to call nuclear weapons “useless.”
Guns: I’d support strict gun control if I thought it worked, and I think guns often harm their owners, but there are a lot of specific cases where guns provide a lot of value to their owners. Guns aren’t “useless.”
Alcohol: I think it’d be good for everyone to boycott alcohol, because the effects it’s having on the people addicted to it are so bad, but alcohol obviously also adds a lot of value to specific people’s lives. It’s not “useless.”
TikTok: TikTok seems pretty addictive, distracting, and contains a lot of misinformation and weird ideological rabbit holes. It’s also fun to use and can teach you a lot of specific social cues and context. I think it’s overall bad for society, but it’s definitely not “useless” to the people on it.
When I say that it’s not debatable anymore that chatbots are sometimes useful, a lot of people read me as saying that it’s not debatable that chatbots are good overall, and that all critics of chatbots are crazy. I don’t believe this at all.
I think it’s totally rational to think that chatbots are bad overall. I could totally buy this criticism:
Chatbots hallucinate a lot and give lots of incorrect information.
The everyday person thinks they can rely on chatbots for correct information. They’re receiving more incorrect information than they would using traditional sources. This nets out to being bad overall.
In addition, chatbots are giving people too many opportunities to cut corners on tasks they need to think about themselves. They’re very bad overall.
Any critic saying this could also agree that there are a lot of specific places chatbots are useful to people. For example, the only times I ask chatbots for contentious or important information, I also ask them to provide sources, and I always check the sources. Most of the time, I’m asking them for much simpler, lower-stakes things, like sorting lists or doing simple back-of-the-envelope calculations. Here’s a specific example where I get a lot of value out of using a chatbot. Any critic should be able to agree that those are examples of a chatbot being useful.
A lot of people do still talk as if chatbots cannot ever be useful for anything, period. People will question why I’d ever poke around on ChatGPT at all, or consider using it for literally anything. I think those people are obviously just wrong and silly. They’re making a fundamentally different claim than the people saying that chatbots are bad overall.
It’s no longer debatable that chatbots are useful in specific instances. It’s obviously very debatable whether chatbots and AI more broadly are good for the world overall. Clarifying that these are two different claims seems important if we want conversations to go well.
What you are saying is of course true - these are different orders of evaluation, and people sometimes conflate them.
I also think it isn’t just the logic of people’s arguments that is driving the dynamics.
The technology plays a role too.
It doesn’t say “I do a few useful things well, but I always stay in my lane.”
We are invited to use it for anything language can be used for, basically. With no guardrails, no error messages, just this amazing thing that almost seems to think.
Nor are any of the CEOs responsible for the public image of LLMs saying modest things about the technology, presenting it as having narrowly defined utility. Instead we hear it’s going to change everything that has happened so far in human experience.
These factors influence how people show up in a conversation, and what they hear.
So, I’m all for conversations going well, and I just want to add that understanding where people are coming from (i.e., identifying with the context of their experience) is just as important to the interaction as the logic of what they say, however flawed.
Unfortunately, "good" as in useful and "good" as in morally good, are often conflated. So the people who hate AI, mostly for art-related reasons (at least to start with), want AI to be "bad" as in useless, not just "bad" as in bad for society.