32 Comments
Hugh Hawkins:

I think the best way to understand the main AI factions is as follows:

Doomers: A bit of a pejorative term, but unfortunately one of the only clear words to describe them. They think AI is gonna be really powerful and will kill us all. High capability, very bad. See: Eliezer Yudkowsky, Zvi Mowshowitz.

Accelerationists: People who want to speed up AI, because they think it'll be great. High capability, very good. See: Beff Jezos, Marc Andreessen.

Skeptics: People who hate AI because they think it's useless slop that hurts the environment and spreads misinformation and stuff. Low capability, very bad. See: Gary Marcus, anti-AI art people.

Boosters: People who are pretty enthusiastic about AI, but don't really buy the idea that AGI is coming soon. Middling capability, very good. See: Mark Zuckerberg, pro-AI art people.

Do you think this is a good categorization, or am I missing something? I'm aware that I'm simplifying a bit, for instance I left out the China hawks.

Andy Masley:

I and a lot of people I know in DC fall more into a camp of low-but-significant probability of x-risk plus a lot of upsides. My sense is that that's the predominant vibe in EA spaces right now, so I'd like a camp for those people too, but yeah, otherwise this seems like a good split.

Hugh Hawkins:

Fair enough, that's also the group I think I fall in. I was oversimplifying, and I hadn't really seen a coherent faction form around those ideas.

Steiner:

Well, thinking of my own views, I am not sure they fit exactly into any of these. And I also think my view is somewhat to quite common (naturally...). I'm mainly in the doomer camp, but less on the existential-threat side of it.

I believe AI will be pretty capable, though not enough to really deliver on the utopian things OR to "escape containment" and kill us all. But I DO think it will lead to a fundamental shift in power dynamics between labor and capital for white-collar work (NAFTA for white-collar workers). I think even a very small realized shift in employment (think NAFTA: just a decade or two of pressure on wages and employment) would have massively destabilizing effects, both economically and culturally. I do not believe that demand for services is sufficiently elastic that we will make up in volume what we lose in price and employment (an unemployment shock). And I do not have faith in our government to respond correctly and proactively. And it is yet another blow to, and transfer of resources from, young to old. Further, I think we continue to underestimate the value of embodied experience and contemplation, both of which (may) get harder in an AI era.
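(A toy numerical sketch of the elasticity claim above; the 30% price drop and the elasticity values are purely assumed round numbers, not estimates or forecasts. The point is just that total spending on a service, and roughly labor demand with it, only holds up under a price cut when demand is sufficiently price-elastic.)

```python
# Toy illustration of the services-elasticity point.
# All numbers are assumed for illustration, not sourced or forecast.
price_drop = 0.30
for elasticity in (0.5, 1.0, 1.5):           # % change in quantity per 1% price change
    quantity_gain = elasticity * price_drop  # linear approximation
    revenue_change = (1 - price_drop) * (1 + quantity_gain) - 1
    print(f"elasticity {elasticity}: total spending changes {revenue_change:+.0%}")
# elasticity 0.5 -> about -20%; 1.0 -> about -9%; 1.5 -> about +2%
# Only with fairly elastic demand does volume make up for the price cut.
```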

And I think this is all why the public's view of AI is so negative, and why I think it will get much worse. People just have the feeling they are about to be screwed. We got all excited about the internet, and a lot of it just kind of sucks. It has done fairly little to create more or better moments of meaningful lived experience for people. Plus, people are sick of big monopolies and weird autists becoming trillionaires off of technology that makes them worried about what future their kids are going to grow up in.

Udoka:

I think you’ve left out the anti-TESCREAL people who are a small but growing faction.

Andy Masley:

Wasn’t really able to add every single side and faction bc this post is already so unacceptably long tbh

Hugh Hawkins:

Those are kinda similar to the skeptics, who are generally left-wing.

Seth:

>Physicalism: The human mind can be reduced to physical processes. The human brain is by some definition a machine, so machines that can do everything human brains can do are possible in principle, because the human brain is one! 60% of philosophers of mind are physicalists.

I think you might understate the case here. One of the more influential arguments in favor of non-physicalism is zombies: the idea that, in principle*, you could create an exact replica of a human walking around that isn't conscious. When a lot of non-physicalists (myself included) talk about the mind being special, it's more that the mind is special insofar as it has consciousness. If zombies are possible, then it's both true that 'the mind is special' and that 'machines can do everything human brains can do.' Not all non-physicalists agree zombies are possible, but most (?) do.

Great article as usual

*the term they would use is 'conceivable' which is slightly different but that's neither here nor there

Andy Masley:

Yeah I have a lot of thoughts about the zombie argument but the post’s already crazy long! I should circle back here later

Jamie Freestone:

Excellent work. I'm also trying to massage AI discourse to be more in line with post-war revelations about brains, computers, language, etc. (including Quine's!). So I can see how many ideas and clarifications had to go into this long but compact post. Will recommend!

Glenn DeVore:

Andy, thank you for writing such a thoughtful, far-reaching, and deeply honest piece. There are so many people that should read this in its entirety. You’ve clearly invested a great deal of care and philosophical depth into this, and it shows. I found myself agreeing with much of it (albeit not all). I am so grateful for the clarity with which you articulated this.

While I don’t agree with every point, I deeply admire the rigor of your thinking and the humility with which you hold complex questions. And I want to say how much I respect the work you’re doing at Effective Altruism DC. It is such meaningful work, and its ripple effects are profoundly positive.

As I read through your article, I found myself taking notes as I went. I started marking places of strong agreement, thoughtful divergence, and a few deeper philosophical forks in the road. Then, with the help of AI (fittingly), I analyzed and organized those notes, which eventually led to a breakdown of what I took to be 44 distinct arguments or positions you were making throughout the article. I’m not sure I captured every one, or if I’ve represented them as precisely as you have in your original, but I hope the list captures the spirit of what you offered.

Since AI helped me with this analysis, it felt especially appropriate to share the result here. This is not only to continue the conversation, but also (perhaps ironically) as a small proof of your opening point: that these tools can meaningfully assist thought… or, if I’ve missed the mark, a helpful cue for where caution is warranted.

At one point, I considered categorizing the 44 arguments below using your original four-part structure. But in reviewing them closely, it became clear that many points resist clean categorization. Some function simultaneously as ground truths and invitations to debate; others begin as philosophical arguments and end as meta-commentary on tribal dynamics.

Rather than force each point into a single box, I’ve left them in this semi-sequential form: each labeled with a simple emoji reflecting whether I agree (✅), partially agree (➖), or respectfully disagree (❌). My hope is that this framing preserves the spirit of open inquiry while acknowledging where meaningful questions still remain.

Of course, my agreement is neither expected nor particularly important (who am I, really?)... Nevertheless, I share these notes in the spirit of philosophical companionship. First, to honor the clarity and alignment I felt throughout much of your writing. And second, to share a few places where our perspectives diverge, and hopefully deepen the exploration. I’m not here to impose a worldview. I just care deeply about these questions and want to explore them with others who do as well.

All of this is offered in the spirit of mutual respect, dialogue, and shared curiosity.

Thank you again for giving us so much to think through.

Glenn DeVore (continued):

1. ✅ Chatbots are not useless—they already perform many tasks well and reliably.

-- I am in 100% agreement here. They are not useless. In fact, after reading your article, I used AI to help me break down and semantically deduplicate your list of core arguments into this list

-- However, I did have to add a couple of my own that I felt it missed, and we both still may have missed some (apologies if so). Still, missing a few does not make it “useless.”

-- And, it’s getting better at an incredible rate.

2. ✅ Using AI does not make someone stupid; it expands the capacity of everyday people.

-- Yes. Entirely. It is a tool. And for those who learn how to use new tools well, it expands their abilities.

-- It hurts me to think that anyone disagrees with this, but, you are right, they do.

3. ➖ Replacing some forms of thinking is a strength of AI, not a flaw.

-- While I agree that many tools throughout history have usefully abstracted or outsourced certain cognitive tasks (calculators, maps, even language itself), there is a critical difference with AI: its breadth and depth make it capable of replacing not only low-level tasks but also higher-order processes like synthesis, metaphor, and creative structuring. The risk isn’t in the tool, but in how culture shapes our use of it. If we stop practicing certain kinds of thinking, we may lose not just skills but the very habits of attention and awareness that cultivate them. I’m not arguing against the tool; only that its power requires a greater responsibility in how and why we use it.

4. ➖ The line between predictive auto-complete and genuine understanding is blurrier than it seems.

-- I agree that the sophistication of next-token prediction (especially when scaled through massive data and recursive self-training) can appear to mimic understanding remarkably well. But whether this is genuine understanding depends, ironically so, on how we define the term. If “understanding” is behavioral (being able to act appropriately in context) then yes, the line is blurred. But if understanding also includes a kind of subjective grasp, a lived sense of meaning or experience, then I believe that line still holds. This isn’t meant to diminish what AI can do, but to preserve humility in what we project onto it.

5. ➖ AI’s ability to handle novel, context-heavy prompts challenges the ‘stochastic parrot’ critique.

-- I agree that LLMs can often surprise us with coherence and nuance in new, untrained contexts. That said, I think “challenge” is the right word, not “disprove.” The stochastic parrot critique reminds us that LLMs generate based on surface-level statistical regularities, not internal comprehension. But if those statistical patterns reliably encode deep context, then the critique starts to falter. Still, I’m not convinced that reliable mimicry is equivalent to “understanding,” especially if the model lacks grounding in the kind of lived or embodied context that humans constantly draw upon.

6. ➖ Debating AI's "true understanding" is less useful than evaluating what it can do in practice.

-- This is a tricky one. I completely agree that usefulness is a pragmatic lens, especially for product design, governance, and ethics. Still, philosophical inquiry into "understanding" helps us reflect on why usefulness matters, and whether it’s enough. For example, if AI persuades us, inspires us, or imitates empathy, it matters whether we think it “gets it” or is simply simulating. The debate isn’t always practical, but it is clarifying, particularly as it shapes how we relate to AI, what rights or responsibilities we consider granting it, and how we interpret its behavior in morally complex situations.

7. ➖ The Chinese Room argument doesn’t disprove deep learning models’ potential for semantic understanding.

-- This is a rich philosophical fault line. I agree that Searle’s thought experiment doesn’t necessarily close the door on machine understanding; however, I also don’t think it was meant to. Rather, it refocuses our attention: is syntax sufficient for semantics?

-- I lean toward Oliver Burkeman’s comment that Searle would likely insist that “degrees of interwovenness or contextual refinement aren't sufficient to somehow leap over the barrier to true understanding.”

-- Still, I appreciate your optimism that new architectures and scale might create emergent semantic awareness. It may turn out that our philosophical categories need updating. But I’m cautious about leaping too far without new conceptual tools.

8. ✅ Deep learning enables statistical models to build intuitive, layered world models.

-- Absolutely. Let’s just not forget that those world models are models, and not the real thing. They are only representations of the thing.

-- Not to ever confuse the map for the territory. For example, a perfectly simulated representation of a black hole on a computer would never suck us into it. It is only a simulation.

9. ➖ Human understanding of language is also associative and learned, not metaphysically grounded.

-- This is a compelling point from Quine, and I agree that much of our language is shaped by use, not by reference to fixed Platonic ideals. Still, I hesitate to say it is not at all metaphysically grounded. There may be something qualitatively distinct in conscious experience (the felt sense of meaning) that resists full reduction to association. We may both use the word “blue,” but your blue and mine may differ in ways we cannot resolve through behavior or language. This doesn’t refute your claim. It simply reminds us that associative learning may not capture all there is to know.

-- This may simply come down to a difference: you declare yourself a physicalist, whereas I, while respecting the discipline of physical science, also believe it has its limits. Science can describe the processing of information, but it struggles to explain why those processes are accompanied by an inner life. To be clear, I would not classify myself as religious either, although I have studied various theologies and philosophies.

-- Again, I find myself aligned with Oliver's comment: I think our understanding of language is closely linked with consciousness, which I also feel is "absolutely mysterious as of [June] 2025" (reserving the right to adjust that in the future with new data).

10. ✅ Skepticism about AI’s capabilities often relies on outdated philosophical categories.

-- Yes, I agree. However, the details here matter.

-- Rejecting Cartesian dualism in favor of physicalism might just be swapping one outdated philosophical category for another (more on that below in number 12).

11. ✅ Physicalism implies that intelligence and consciousness are possible in machines.

-- The way this is worded, I have to agree, since it's more a definition of a type of philosophy. Yes, this is what physicalism posits. Whether it is correct is another topic that I'll leave for number 12.

12. ❌ The high percentage of philosophers who accept physicalism is presented as evidence that it is a valid—and perhaps the most valid—framework for understanding mind and consciousness.

-- I’d like to gently challenge the implied authority of majority consensus here. While it’s true that a significant number of philosophers (especially philosophers of mind) lean toward physicalism, I’m not sure that popularity should be mistaken for proof. Ideas aren’t validated by vote counts. History is filled with moments when majority opinion stood in the way of a deeper truth that had yet to emerge (e.g. geocentrism vs. heliocentrism in the 1500s).

-- I hold deep respect for physicalism and the scientific rigor that underlies it. In many ways, it's a framework that has propelled human knowledge forward. But I also think it's worth acknowledging its limits. We need to be careful not to confuse the finger for the moon (to borrow from a Zen metaphor). This is particularly true when it comes to phenomena like consciousness or inner experience. Perhaps what Chalmers labeled "the hard problem" is mostly this: there's a felt sense of awareness that physicalist descriptions don't yet seem to fully capture.

-- This doesn't mean I reject physicalism outright. It means I remain open: to mystery, and to the possibility that our current paradigms (like those that came before) may eventually give way to something more encompassing.

13. ➖ Functionalism supports the idea that mental states could emerge in non-biological systems.

-- Yes, this is precisely what functionalism argues. I'm aligned insofar as the definition goes. However, I also think we lack any decisive evidence that substrate independence might allow consciousness to arise outside a living (carbon-based, cellular) organism. Still, I remain open to the idea, while holding space for frameworks that suggest consciousness is more than a computational architecture. In this way, I admire functionalism's ambition but remain unconvinced that it is a complete account.

14. ✅ Quine’s philosophy suggests our understanding of words is built through usage, not fixed definitions.

-- Absolutely. And Quine’s arguments are incredibly fascinating and critical to this topic.

15. ✅ Radical indeterminacy in meaning applies to humans as much as to LLMs.

-- Certainly. Just as I know what my color blue is to me, I cannot know with absolute certainty that your color blue is the same as mine. Only that we agree that what I see and what you see is mutually considered to be “blue.”

-- This also means that the underlying meaning ("qualia") is not something that machines have the capacity to "understand."

Glenn DeVore (continued):

16. ❌ Many criticisms of LLMs assume a Cartesian view of the mind that isn't philosophically supported.

-- While I agree that Cartesian dualism has largely fallen out of favor in philosophy, I’m not convinced that rejecting it necessarily leads to a default embrace of physicalism or functionalism.

-- Some criticisms of LLMs might rely on outdated dualistic assumptions, but not all do. There are nuanced positions that question whether understanding, awareness, or interiority can be reduced to computation, without presupposing a mind-body split. These concerns aren’t always rooted in Cartesian thought. Some arise from phenomenology, Eastern philosophy, or even contemplative traditions that observe the self through direct experience rather than abstract metaphysics.

-- So, while I agree it’s important to challenge shaky philosophical assumptions, I think we should also be cautious not to collapse the critique into a binary of “either you’re stuck in Descartes or you accept machine understanding.” There’s a lot of fertile ground in between.

17. ➖ AI may already be developing something close to semantic understanding via pattern recognition.

-- Yes, the depth of interconnections that LLMs can now detect is extraordinary. But again, we return to what we mean by “understanding.” If semantic understanding includes intentionality (the directedness of thought toward meaning) then we may still be a step removed. What’s happening may be extraordinary simulation, not realization. That doesn’t diminish the model’s utility, but it does suggest caution in how far we extend our metaphors.

18. ✅ It is inconsistent to say AI is both too dumb to be useful and too powerful to be safe.

-- Agreed. Thank you for stating this so plainly.

-- And, AI is improving at an astonishing speed.

19. ➖ AI use by students is an example of freedom and empowerment, not ethical failure.

-- I agree with the spirit of this point: AI, like calculators or Wikipedia, can democratize access and help students achieve new insights. But I also believe that when students use it to avoid rather than supplement learning, we risk hollowing out critical reasoning. It's not that AI is the problem. AI is simply a tool. It's that our educational goals and assessments haven't caught up. This is less about ethics and more about design: we need to redesign systems that encourage genuine engagement and make meaningful use of new tools. Until then, we need to be careful about how it is used at scale to subvert real learning.

20. ➖ Empowering people with tools (even imperfect ones) is good; bad use cases are a freedom issue.

-- Generally yes, but freedom without discernment can be dangerous. Your comparison to nuclear power is apt: even a well-intentioned tool can become destructive when scaled without wisdom. Empowerment is essential, but the structure around that empowerment matters. What we incentivize or ignore as a society shapes the direction tools will take us. The core question is not whether tools should empower, but how we structure responsibility alongside that empowerment.

21. ➖ Concerns about AI replacing creative work are not unique—bad fan art and calculators got similar reactions.

-- True, and history gives us many examples where disruption led to new forms of creativity. Still, AI moves at a scale and speed that feels categorically different. Its capacity to flood markets, mimic style, and learn from existing work blurs lines of authorship and intention. I believe creative disruption can be healthy, but the emotional and economic impact on artists during the transition is real. We need to hold both truths: past disruptions offer perspective, but that doesn’t mean the present one isn’t uniquely challenging.

22. ➖ Many AI complaints could equally apply to earlier tech like Wikipedia, Google, or YouTube.

-- This is an important perspective, especially when arguments against AI sound like echoes of past moral panics (reminiscent of 80s mix tapes vs. Napster and MP3s in the late 90s). That said, what distinguishes AI is its ability to generate, not just retrieve, information, often with uncanny fluency. The complaints about scale and access aren’t unfounded; they’re intensified. I don’t believe this invalidates the tool, but it may require new norms, legal frameworks, and cultural adaptations to manage its broader footprint. This one seems quite solvable.

23. ✅ Critics often don’t use AI tools before making sweeping claims about them.

-- Yes, agreed. And this is just sad ignorance. No excuse.

24. ✅ It's unreasonable to judge a technology without engaging with it directly.

-- Yep, agreed.

25. ✅ General-purpose technologies often spark exaggerated fears—but not always unjustified ones.

-- 100%

26. ✅ Dismissal of AI risks as “speculative” ignores how previous tech (like nukes) became existential risks.

-- Yes. Agreed. Not really the technology's fault, but more about the state of the human race and how we need to think carefully about responsible use.

27. ✅ We should evaluate AI risks seriously even if timelines and outcomes are uncertain.

-- Yes!

28. ✅ Intelligence explosion via duplicable machine minds could destabilize society quickly.

-- Sad, but true. I would also argue that this would be unlikely to be initiated by the machine itself, and more likely by people (bad actors) using the technology for those purposes.

29. ✅ Human cognitive labor becoming obsolete poses serious political and economic risks.

-- Yes, we need to adapt. And learning is fundamental to that.

30. ✅ Future AI systems may undermine democracy by decoupling value from human labor.

-- Unfortunately, this is a real risk, I agree.

Glenn DeVore (continued):

31. ✅ Automation could incentivize authoritarianism by weakening protest and reducing elite dependence on labor.

-- Yep.

32. ✅ We live in a historically anomalous moment of rapid technological advancement.

-- Agreed. It baffles me that anyone could argue otherwise.

33. ➖ Technological determinism explains how production tools shape politics and social structures.

-- Yes, it does. And history offers plenty of evidence. But the arrow isn’t one-way. Ideologies, movements, and values have also shaped which technologies get built, how they’re adopted, and to what end. Civil rights, environmentalism, open-source movements – all have altered the trajectory of technical development. I’d argue that our challenge now is not whether tech determines society, but how we reclaim agency within that feedback loop.

34. ✅ Science fiction often underestimates how tech reshapes social orders.

-- Often, yes. But there are also visionaries such as Jules Verne, Isaac Asimov, and Frank Herbert.

35. ✅ Weapon complexity shifts power from individuals to centralized states (Orwell's law of weapons).

-- Unfortunately or fortunately, depending on your viewpoint.

36. ✅ AI might behave like a complex weapon—empowering the strong while disempowering the public.

-- Mmmhmmm.

37. ✅ Many core political and cultural structures depend on technology levels and may shift radically.

-- Yep.

38. ➖ Democracy might be a historical blip enabled by specific technological conditions.

-- A provocative but important lens. Still, I hesitate to view democracy as a mere byproduct of temporary conditions. It may be rare, but rarity doesn’t imply fragility. Its resilience, while imperfect, has adapted through wars, revolutions, and revolutions in communication. I share your concern that future tech might erode the conditions that made democracy possible, but I also believe we have agency in how those technologies unfold. If democracy is to endure, we’ll need to actively redesign systems that align emerging tools with democratic values.

-- Or if not, I maintain hope for whatever supplants democracy: it is yet to be defined, but once it becomes clear, it has the potential to evolve society in positive ways.

39. ✅ Excitement about new tech should be taken seriously, not dismissed due to tribal identity.

-- Agreed. And thank you for saying so.

-- And, as you point out, over-exuberance of technology can also lead to some strange places, so let’s not go overboard. ;)

40. ❌ Discourse often punishes sincere truth-seeking, implying that caring about what’s actually true makes you socially oblivious.

-- I deeply agree with the first half of this: yes, sincere truth-seeking is often met with resistance, especially in environments shaped by tribal identity or ideological performance. But I'm not sure I fully accept the second half, which suggests that caring about truth makes someone socially oblivious (that said, to be fair, I'm not entirely sure this is something you meant to imply or if it was just my own bias perceiving it; so, please correct me if I'm misrepresenting you).

-- If anything, I think those who seek truth with sincerity and humility often have to be more socially aware. They need to navigate complex interpersonal and cultural dynamics while resisting the temptation to conform. That’s not oblivion. It’s discernment.

-- It’s also painful work. Truth-seeking, in the deepest sense, asks us to confront ego, loosen identity, and live without the comfort of easy belonging. Most people understandably resist that. Not because they’re irrational, but because it’s hard. But when someone does take that path, I see it as an act of courage, not cluelessness.

-- So while I resonate with the frustration behind this point, I’d reframe it slightly: discourse punishes truth-seeking not because it’s naive, but because it’s threatening to the status games many people unconsciously play.

41. ✅ Wariness about AI isn’t anti-progress—it reflects thoughtful historical perspective.

-- And this discernment is critical. Thank you for your work in this area.

42. ✅ Tribalism is undermining honest discourse about AI across the political spectrum.

-- Sadly, so very true.

43. ✅ Critics often adopt inconsistent values (e.g., anti-IP one day, pro-IP the next) depending on tribal cues.

-- Yep. The world could use more independent thinkers. Thank you for being one of them.

44. ✅ Helping people engage curiously rather than tribally is essential for better AI conversations.

-- Yes. And again, thank you.

Oliver Burkeman:

> His 'intuitions' become so deeply interwoven and contextually refined that they allow him to not only predict and generate text flawlessly but also to grasp the underlying meanings…Effectively, by perfectly internalizing how the language maps to the universe of ideas and situations described in all that writing, he has, piece by piece, constructed genuine semantic understanding

==

The Searle section of your "ground truths" is where I start to diverge. Don't you think Searle would insist that degrees of interwovenness or contextual refinement aren't sufficient to somehow leap over the barrier to true understanding? I assume the scare quotes around 'intuitions' mean that you do not think AI can genuinely experience conscious intuition. But then I wonder what these 'intuitions' are – ie, what you would have written there if you hadn't had recourse to the scare quotes.

(To be fair, I'm probably someone you'd classify as viewing human consciousness as fundamentally magic, although I'd prefer to say "absolutely mysterious as of May 2025"…)

Anyway, thanks for the very interesting post.

Matt Ball:

>We’re already in an insane technological future

That, I think, is worth a stand-alone post.

I don't think we can really make things better if we refuse to recognize how things have gotten better. (Why Hannah Ritchie might be the most important writer working today.)

https://www.mattball.org/2024/11/smart-phones-and-hedonic-treadmill.html

Steiner:

I think the key point is that some of these things might genuinely not be "better". Maybe some friction in life is actually good. The more life I experience, the less I trust that my moment-to-moment preferences are all that correlated to my happiness.

The Brick has been the single most valuable thing I have purchased in the last two years, precisely because it has allowed me to not use all the "great new features" of my phone. I need the Brick, because I am incapable of making the right decision in the moment.

There are all sorts of things where our preference for "better" things has big negative consequences as well: big houses + no AC means no front porch culture and way less community. Is it worth it? Yes, maybe? Who knows? But I don't buy that this is all hedonic treadmill. I think there is nostalgia, yes, but for something that is real and human and embodied, and I don't think that is just rose-colored glasses.

Andrew Condon:

Terrific essay - I have just one question, and that's re: the billion users claim. I appreciate that it is a claim by Altman, but does it really seem plausible to you at this point?

Or perhaps more to the point what would be your guess as to how many people are really using these tools in a deep way?

It seems to me that some of the ungrounded claims that people routinely make about AI would be unlikely to be passed around so uncritically if 1 billion people (presumably very heavily biased towards industrial West + China?) were using ChatGPT extensively.

It seems more likely to me that some large number of people have kicked the tyres on a chatbot and not gone much further and that might be part of what creates such an opening for speculations of all stripes.

Andy Masley:

I guess I’ve seen enough legit outlets report it that I assumed it was probably true, could be wrong though! It does make intuitive sense to me that 1 in 8 people would interact with a chatbot once a week

Andrew Condon:

Seems very high to me - could it be you are over-indexing on "people you know"? It could also be that I'm over-indexing on "people I know", but a quick back-of-the-envelope calculation on the number of computer (or phone app) using English speakers in the world suggests that 1 billion users would require almost 100% TAM.
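(To make that back-of-the-envelope concrete, here is a minimal sketch; every figure in it is a rough assumed round number, not a sourced statistic.)

```python
# Rough sanity check on the 1-billion-users claim.
# All figures are assumed round numbers for illustration, not sourced statistics.
world_population = 8.1e9        # approximate world population
claimed_users = 1.0e9           # the reported user figure
english_internet_users = 1.2e9  # rough guess at English-speaking internet users

print(f"Share of world population: {claimed_users / world_population:.0%}")
print(f"Implied share of English-speaking internet users: "
      f"{claimed_users / english_internet_users:.0%}")
# ~12% of the world, but ~83% of the assumed English-speaking internet,
# which is why the claim implies near-total TAM under these assumptions.
```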

Adam Runner:

I tend to favor arguments for embodied cognition but generally support your desire for a better, deeper, more nuanced discussion of AI generally.

Andy Masley:

Any links to stuff on embodied cognition would be interesting!

Adam Runner:

Philosophy canon: https://plato.stanford.edu/entries/embodied-cognition/

Some of the empirical/experimental arguments made in here:

- Mirror neurons: Mirror neurons fire both when performing an action and when observing it—suggesting cognition is action-oriented and embodied.

- Gestures enhance learning: People remember and learn better when they gesture while talking or solving problems.

- Perception varies by body state: Fatigued people judge hills as steeper. People carrying heavy backpacks estimate distances as farther, suggesting your bodily state alters your perception not just your thinking.

My more (interesting??) and anecdotal take is that there are different types of cognitive logistics (thinking, computing, and simulating) and on any given day we're all doing a fair bit of each, and that it makes sense that a theory of mind would emerge from body-world interaction at a level abstracted and synthesized above these logistics. Maybe the strongest argument that cognition is situated is how consistently and deeply irrational humans are.

A more nuanced take with respect to AI and functional theories of mind is that if you accept different types of cognitive logistics, then there are places where machines would perform better, provided you can square your own values/moral compass. It's not like "augmented thinking" is new, after all. If that's the case, it's much more important that you develop heuristics and judgment so you know when thinking is required vs. other cognitive logistics.

Tim Hua:

Unrelated, but my first reaction was "huh, is Andy getting into technical alignment?" (Debate is a proposed alignment strategy; see https://openai.com/index/debate/ and https://www.lesswrong.com/s/NdovveRcyfxgMoujf/p/iELyAqizJkizBQbfr)

Christopher Simpson:

I want to push back a bit here.

The overall thrust of your article is a plea for better debates about AI. But at various points in the article you appear to be straw-manning the opposition (which is not likely to lead to better debates).

The primary reason why emotions run high in the AI debate (in my view) is because in the last 10-15 years we’ve witnessed increasingly deep political, social, and epistemic balkanization due to widespread adoption of new technologies (social media, smartphones) that have greatly contributed to this bad state of affairs. As such, many people are (justifiably, in my view) skeptical that the widespread adoption of AI will be, on net, positive.

Had we adopted a more skeptical outlook on these technologies in 2007 that would have been a good thing! It is not sufficient to point out that AI has some good applications (as one can likewise do for nearly any technology).

The anthropomorphism of the terms used to frame AI debates (“hallucinations”, “reasoning”, “agents”) already stack the deck in favor of those who believe LLMs “think”. Even many of the engineers I work with often fall prey to the sophistry of this framing.

Human minds and LLMs operate very differently. That is why the latter cannot be said to “think” in the way the former can:

https://open.substack.com/pub/garymarcus/p/a-knockout-blow-for-llms?utm_campaign=post&utm_medium=web

Andy Masley:

I'd be curious about where specifically I'm straw-manning critics of AI; I don't mean to do that and would edit if I saw a clear example! I'm using the term "hallucinations" bc it's the commonly accepted word that people would recognize; I don't mean or want to anthropomorphize AI. I tried to make it clear that I think AI is on net likely to be bad. My point is just that critics need to understand where the real benefits to individuals are too.

Christopher Simpson:

The most frequent example of strawmanning is that at multiple times in the article you either heavily imply or state outright that someone must either agree with your position or believe minds are in some sense “magic”.

For example:

“Unless you think something fundamentally magic is happening in the human brain, our minds and understanding ultimately have to be based on some incredibly complicated system of rote rules and random results, Chinese rooms and roulette wheels.”

But a person could be a physicalist who believes everything is reducible to physics and still disagree with the above.

For example, someone might believe that Physics is not just a collection of incredibly complicated rote rules.

But there are other examples. In one place you critique interlocutors who go back and forth between saying AI is useless and useful. But there are more charitable ways to make sense of their claims than the way you reconstruct them here.

For example, they might believe AI is useful in some contexts and not useful in other contexts. When it’s not useful, it’s bad. When it’s useful, it damages the users ability for important cognitive operation and so, bad.

Claims that something is both bad and useless abound in other contexts, e.g.:

Torture is morally wrong and useless at achieving its ends.

Wealth taxes are morally wrong and impossible to implement in practice.

Andy Masley:

So going through each of these:

On "Magic" there are really only a few possible descriptions of the mind: You think it's reducable to physical processes, or you believe there are special nonphysical properties or substances that make it up and have effects on the physical world. If you believe in special nonphysical properties or substances that affect the physical world, that goes against everything we understand about physics and I think it's fair for me to call that basically magic.

I actually don't see how a physical event could be anything other than a combination of completely deterministic (rote) events and random events. That seems tautologically true. If something isn't deterministic, what happens is fundamentally random because all the same inputs could produce a different output.

I definitely think a lot of AI critics have a lot of good points. I consider myself an AI critic! But there are specific people online who will very confidently say that AI is completely useless in all circumstances. That specifically is a bad silly criticism (I think we agree here) and that's what I'm responding to. I'm not trying to imply all or most critics of AI are like this and I don't think I do in the piece.

Radek:

"The Future and Its Enemies is a manifesto for what Postrel calls “dynamism—a world of constant creation, discovery, and competition.” Its antithesis is “stasis—a regulated, engineered world... [that values] stability and control.”

I find myself simultaneously sympathetic to this quote and also see it as extremely arrogant. Looking up Postrel, all the bios of him stress how young ("at only 23 years old...") and well educated he is. For such a person the costs of technological adoption are low, and the benefits will last, well, a lifetime. But consider a 60 year old without his educational background. Not even uneducated, but simply educated in something else, say, the humanities. For such a person the costs of technological adoption will be high and the benefits will be around for a much shorter period of time. Why should such an old person pay (or even be forced to pay) the costs of adopting a new technology if they're not going to be around long enough to really enjoy it? Why not just stick with what they know? Is it really the case that such a person just favors a "regulated, engineered world"?

And here is the kicker. The adoption costs and benefits of a given technology depend on the number of people using it, network externalities and all that. Which means that if a good chunk of the population (say, all the young) adopts a new technology because *it's better for them*, those who try to stick with the old one find themselves worse off because there are fewer folks using "their" old tech.

Recently I watched an old man at a train station struggle to buy train tickets from a machine. He took out cash and realized the machine no longer accepted cash. He fished out a card and tried to find where to insert it. It didn't take cards either. He was lost. Standing in line behind him and wanting to buy tickets myself, I explained to him that you had to tap your phone. He took out his phone. Just by looking at it I could immediately tell that tapping that ancient (meaning more than 5 years old) thing anywhere on the machine wasn't going to work. "Well, I think there's still a ticket booth with a person in it somewhere in the station downstairs," I tried to help. "It used to take cash..." he said. "Why do THEY always have to be changing shit, the fuckers," he replied, extremely unhappy and frustrated.

Postrel forgets a key concept from economics: convex preferences, or basically the idea that people prefer averages to extremes. It's not a dynamic glorious future vs. a static, regulated, engineered world. There's some optimal rate of technological progress in between, where technology delivers a steady flow of improvements but doesn't require constant updating, switching, retraining and adjusting.

For people older than Postrel, it can feel like they've spent most of their lives figuring out how the world works, and just when they start feeling like they finally are getting a handle on things, somebody comes along and changes up everything.

Andy Masley:

Just flagging that Postrel's a woman! She's 65 right now.

Radek:

Ack! You're right, I confused Postrel with Patel who's quoted at the top of Toner's post. Postrel was 38 when she wrote the book?

Ok ignore that part of my reply but please consider the rest.

Yusuf:

I have been obsessed with AI, and now that it's here I'm impressed. I see more opportunities, and in a hypothetical future timeline I'd rather be killed by a robot than a mosquito.
