I saw a lot of AI boosters talking about that study with concern. Some seemed genuinely surprised by the findings and suggested they still don't believe them, even though they seem so obvious to you (and me).
While it is true that these people aren't actually stupid, it is also true that people are good at fooling themselves and justifying their preferences. People who enjoy using AI for whatever reason have an interest in seeing the technology as good and ignoring tradeoffs. This study might state the obvious, but an MIT study with brain scans is harder to ignore than just following the logic chain to a conclusion you dislike.
This largely seems like a question of essentialism. People may not "be stupid," but they often don't care (or are unable) to avoid making stupid decisions. So to some extent it boils down to the same thing.
Maybe an alternate framing is that a lack of information is often not what causes people to make bad decisions.
It reminds me of how people on the left in particular blame every problem on a lack of education. Lack of access to education is usually not the problem.
(OK, dismissing the "cookie" message makes the long text one has written disappear. I've learned something... let me rewrite it.)
I love your energy post, but I am genuinely puzzled by this one. Or rather, I get the general gist and message you want to convey; I just don't find the arguments convincing at all.
Concerning the "obviousness" of GenAI loneliness: i) I am not sure the correlation (or causation) is as obvious as you imply. Many people have made a similar argument concerning smartphones: "of course staring at a small screen all day is making people lonely." Well, except the studies investigating this show mixed results. If they had not, many people could have said something similar: "Duh, why even study this?" ii) Irrespective of the overall relation, and as I try to drill into my Philosophy of Science students, we really shouldn't only care about the direction of an effect, but also its size. To me it seems like the studies could actually speak to that, and might help the reader update their priors on how strong a correlation to expect (plus all the other more nuanced findings in the studies). iii) Sure, the quote in question is superficial, but one can't include every nuance when talking to a journalist. In any case, I felt informed by the study.
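To make the direction-versus-effect-size point concrete, here is a minimal sketch in Python. The group labels, sample sizes, and "loneliness scores" are all made up for illustration and come from none of the studies discussed; the point is only that Cohen's d separates a real-but-negligible effect from one worth worrying about, even when both point the same way.

```python
import numpy as np

def cohens_d(a, b):
    """Standardized mean difference between two samples (pooled SD)."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) +
                  (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
# Hypothetical loneliness scores: same direction, very different magnitudes.
control    = rng.normal(50, 10, 500)
tiny_shift = rng.normal(51, 10, 500)   # "users are lonelier" -- barely
big_shift  = rng.normal(58, 10, 500)   # "users are lonelier" -- a lot

print(f"d (tiny):  {cohens_d(tiny_shift, control):.2f}")   # ~0.1, negligible
print(f"d (large): {cohens_d(big_shift, control):.2f}")    # ~0.8, large
```

Both hypothetical studies would confirm the "obvious" direction; only the effect size tells the reader how much to update.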
Concerning the MIT brain study: i) I've worked with GenAI in education for a few years, offered a RAG to hundreds of students since January 2024, and done experimental studies in this field. I am genuinely not sure it is that obvious to (most?) students that using GenAI in one's work has such effects. More concretely, I am not sure students realize that copy-pasting several segments of chatbot-written text doesn't lead to some kind of learning. In other words, the study argues against a Matrix-style "download" concept of learning, which I do think captures some of the discourse around learning. Now, the treatment is fairly extreme (and doesn't really cover a nuanced scenario), but that is quite usual in social science: one begins with more extreme treatments and makes them more realistic along the way. In any case, I am quite confident that many students will feel informed when learning about this study, which is why I have written about it in a text aimed at students. ii) The MIT study included interesting findings, such as homogenization and memory issues, and it presented a solid theoretical framework in the form of cognitive load. Sure, the sample size was way too small, the EEG stuff is probably p-hacked (or an equivalent term), and the results are oversold, both by the authors and by the media and influencers. Nevertheless, it informed me (not that my priors moved that much). And I really don't think I am that stupid when it comes to GenAI :).
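On the "probably p-hacked" point, a quick simulation shows why running many uncorrected comparisons on a small sample will reliably turn up "significant" results even in pure noise. The subject and comparison counts below are hypothetical stand-ins, not the MIT study's actual design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects, n_comparisons = 18, 200   # small sample, many channel/band contrasts

false_positives = 0
for _ in range(n_comparisons):
    group_a = rng.normal(0, 1, n_subjects)   # pure noise: no true effect
    group_b = rng.normal(0, 1, n_subjects)
    _, p = stats.ttest_ind(group_a, group_b)
    false_positives += p < 0.05

# Expect roughly 5% of 200 = ~10 "significant" results from noise alone.
print(f"{false_positives} of {n_comparisons} comparisons 'significant' at p < 0.05")
```

With hundreds of electrode-by-band contrasts available, a handful of uncorrected sub-0.05 p-values is exactly what noise predicts.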
Busy but will take time to reply more soon!
I think it's okay – dare I say, not stupid – to do a study that produces unsurprising results. I agree that more quantified outputs (just how unadventurous the essays were, or just how lonely the chatbot users were compared to a control) would be much more interesting and useful. It's one thing to know donuts are bad for you; it's another to learn they contain trans fats that are 5x worse than the fats in other oily foods.
True, when I think about it more I'm mainly mad at the reporting on the studies
"Another big claim that fails my test is that AI chatbots are useless. 10% of the world are now choosing to use them weekly. If they were useless, this would mean that 10% of the world is so stupid that they can’t tell that this tool they’re using every single week isn’t providing any value to them at all. There’s basically nothing else like this that people interact with regularly."
Great point - this is part of my answer for explaining how the big AI products are different from previous tech bubbles like crypto. There's clearly utility for folks here; the numbers essentially speak for themselves
<3 <3 <3
I have a similar rule: Is this what a cranky old man would say?
Cranky old men have always always always complained about "kids these days."
(OTOH, half of all people *are* below average, in intelligence, or empathy, or driving skill, etc.)
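As a runnable aside on that parenthetical: exactly half of people sit below the *median* by definition, while the share below the *average* depends on how skewed the trait is. A quick illustration with a made-up, right-skewed trait (nothing here is real data about intelligence or driving skill):

```python
import numpy as np

rng = np.random.default_rng(7)
# A right-skewed trait: the mean is pulled above the median.
trait = rng.lognormal(mean=0, sigma=1, size=100_000)

below_mean   = (trait < trait.mean()).mean()
below_median = (trait < np.median(trait)).mean()
print(f"below mean:   {below_mean:.1%}")    # ~69% for this distribution
print(f"below median: {below_median:.1%}")  # ~50%, by definition
```

For a roughly symmetric trait like measured IQ the two coincide, so the quip holds there.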
>predictions of environmental disasters and resource constraints
This is the perfect example. The Doom Cult always screams "This time is different!"
I'm curious to know of any delineation by social/economic class and political leanings vis-à-vis attitudes here.
This perspective seems to be grounded in the economic theory of rational actors, which constantly falls down in the real world. I can appreciate wanting to avoid the mentality of "I'm smart, it's everyone else that's stupid," but I really think it's reasonable to state that an *individual* can be smart because they have a brain, but a *population* has no brain and therefore can't be expected to behave like one.
See also: Boids (https://www.youtube.com/watch?v=bqtqltqcQhw)
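For anyone who doesn't follow the Boids reference: each agent applies three purely local rules (separation, alignment, cohesion), and flock-level behavior emerges that no individual rule describes. A minimal sketch of one update step, with arbitrary parameter values and no speed cap, so it is illustrative rather than a polished simulation:

```python
import numpy as np

def boids_step(pos, vel, radius=1.0, dt=0.1):
    """One update of Reynolds' three local rules for N boids in 2D."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        dists = np.linalg.norm(pos - pos[i], axis=1)
        neighbors = (dists < radius) & (dists > 0)
        if not neighbors.any():
            continue
        sep = (pos[i] - pos[neighbors]).sum(axis=0)       # steer away from close boids
        ali = vel[neighbors].mean(axis=0) - vel[i]        # match neighbors' heading
        coh = pos[neighbors].mean(axis=0) - pos[i]        # move toward neighbors' center
        new_vel[i] += 0.05 * sep + 0.05 * ali + 0.01 * coh
    return pos + new_vel * dt, new_vel

rng = np.random.default_rng(1)
pos = rng.uniform(0, 5, (30, 2))   # 30 boids at random positions
vel = rng.normal(0, 1, (30, 2))
for _ in range(100):
    pos, vel = boids_step(pos, vel)
```

No single boid plans the flock, which is exactly the point about populations versus individuals.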
I'm definitely not presuming people are rational actors, it's just that I have trouble believing that someone can be staring a tool directly in the face and be tricked into thinking it's adding direct immediate value to them when it's not
Counterpoint: Infomercials
But more seriously, a not insignificant amount of marketing is designed to do exactly this. Bigger number, better status symbol, more green, supports causes, etc.
I'm absolutely happy to accept an argument wherein "direct immediate value" can include emotional value, but I would then argue that's not the kind of value being discussed by the authors of the studies
“If a claim about how society works implies that most people are incredibly stupid, much more stupid than anyone I encounter in my day to day life, I dismiss it.”
There is the rather distinct possibility that your social circle is not representative of "most people." For example, I would wager almost everyone you know has at least a bachelor's degree, yet 2/3 of U.S. adults do not.
It's generally incomprehensible to liberals living in cities that so many people could have voted for Trump, just as it is hard for rural Republicans to believe so many voted for Kamala. Population biases can be massive.
I mean strangers I bump into and interact with, like everyone I’ve ever bumped into in my life. Not my immediate friend group
Fist-bumping a stranger does not provide you the opportunity to judge their intelligence, though. You are presumably fairly smart, well educated, good career, etc. Surely you must allow for the possibility that you and everyone you know are top-quartile IQ, skewing your sample.
I was a teacher for 7 years, worked with students in wildly different life circumstances, and have in general been around.
Fair enough. That context helps, I think.
Must admit I find most of your examples for your point rather confusing. At no point does the extract from the article on ChatGPT essay-writing say anything I interpret as 'Students don't realise they are learning less by doing this'. At most there is an implicit statement that students are making a bad decision to prioritise convenience now over learning that could help them later, but that's not the same thing.
And given the considerable evidence that human beings have very high discount rates (or, more simply, are pretty short-termist), it's not an implausible statement even if you do think the article is saying it.
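To make the discount-rate point concrete, here is a back-of-the-envelope sketch. The payoff numbers, five-year horizon, and rates are invented purely for illustration, not drawn from the article or any study; the point is only that under a high enough discount rate, trading future learning for immediate convenience is the "rational" choice.

```python
def present_value(future_payoff, discount_rate, years):
    """Standard exponential discounting of a delayed payoff."""
    return future_payoff / (1 + discount_rate) ** years

# Hypothetical: writing the essay yourself pays off 100 units of career
# value in 5 years; letting ChatGPT write it saves 10 units of effort now.
for rate in (0.05, 0.50, 2.00):
    pv = present_value(100, rate, 5)
    choice = "write it yourself" if pv > 10 else "use ChatGPT"
    print(f"discount rate {rate:.0%}: future learning worth {pv:.1f} now -> {choice}")
```

A student discounting heavily enough can be fully aware of the trade-off and still take the shortcut.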
I just don't see where saying 'Students used ChatGPT in a way which produced bad essays and didn't teach them much' implies '[...] students are somehow blind to the idea that copying work from other places means they don’t actually learn'. You can think people are aware of a trade-off and still think it is a bad trade-off.
People are smart; they know clickbait is a waste of time. Yet they click it.