Using ChatGPT for recipes and meal planning has led to so much discovery and prevented so much wasted food that this alone probably pays for its own subscription twice over, never mind the power/water use.
It's not just the assumption that there's a fixed amount of thinking to be done, but that there's actually enough time and bandwidth for all the thinking we might _want_ to get done. In practice, "urgent" often crowds out "important" and "necessary" crowds out "meaningful". Our brains, without the assistance of LLMs, phones, notepads and pens, aren't doing exactly 100% of what they're meant to be doing; they're making compromises and muddling through, meeting some fraction of a far greater demand.
So we end up doing a sloppy job of grocery planning, because it's not fun and it feels like a drain on our stretched mental resources. This isn't a muscle I'd ever even want my brain to flex, so why care about outsourcing it?
Also reminded of the old TV and movie trope of "the executive getting his secretary to pick out the gifts". It was shorthand for being lazy or emotionally out of touch, but we didn't worry about the guy's brain.
Belated thought: All this also seems related to your previous post, with people disliking the effort-saving new thing so much, that they now find themselves arguing that saving any effort is itself bad.
I know it's a massive issue, but I'm not really sure why people struggle with food waste so much — are they buying way more food than they intend to eat? The process seems simple to me: make a meal with the food that goes off soonest, repeat daily until there are no more perishables, then go shopping for more food. No planning necessary.
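That rule of thumb is essentially a greedy heuristic: always cook around whatever expires first. A minimal Python sketch (the items and dates are made up, purely for illustration):

```python
from datetime import date

# Hypothetical pantry contents: (item, use-by date)
pantry = [
    ("spinach", date(2024, 6, 3)),
    ("chicken", date(2024, 6, 1)),
    ("yogurt", date(2024, 6, 5)),
]

def next_meal_base(pantry):
    """Greedy rule: build tonight's meal around whatever goes off soonest."""
    return min(pantry, key=lambda entry: entry[1])

item, use_by = next_meal_base(pantry)  # chicken expires first, so cook that
```

Each day you'd cook around `next_meal_base`, drop the used item from the list, and only go shopping once nothing perishable is left.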
"Struggle" is perhaps overstating it, but it's certainly easier if you always know whether you'll be eating at home and with how many people, if everyone likes/eats the same things, if plans never change, if the foods you eat all belong to broadly the same cuisine, etc.
There's also the curse of good intentions: buying ingredients for those healthy meals that actually take more time to prepare, which you then end up not making; you then raid those ingredients for some other meal (because they'll go off), which leaves you with "orphan" ingredients.
It's all very solvable with some care and attention, but those two things are often scarce in practice.
I disagree, in that the threat to cognition from LLMs is greater than that from storage and retrieval AIs like Maps or Search.
What most are not articulating (yet) is that because LLMs are creating language, using them gives the self an illusion of the understanding that is actually developed through the writing process. It's by refining what we write that we develop our opinions; otherwise, stream of consciousness is the original version of slop.
I re-wrote this comment a few times to match my intent - I could have queried an LLM for a response. But then I'd never know what my formulation of the issue is. I think there is a way for the product experiences around LLMs to encourage more of a feedback loop between the response and initial query, which they're not really doing right now, since all user feedback solicited is about the overall quality of the text.
You should write about offloading linguistic work, specifically!
Thank you so much for this post!
I run into this all the time, people thinking that AI is somehow "different" from previous generations of technology. Most of the time, it's from younger people, and AI is the first round of technology that is "new" to them - the internet and smartphones and social media have always been around. They have never had to adjust before, and frankly they're coping worse than previous generations did.
And the lump of cognition fallacy existed for previous generations. I'm old enough to remember when people would say, "why did you have to look that up on this internet thing? Why can't you just go to the library?"
Indeed, when I used a pocket calculator for high school math, which was actually a school requirement even in the 1980s, my parents (who had never studied algebra or calculus) thought it was cheating.
Even as someone who is deeply concerned about cognitive offloading, I liked this perspective. You make good points about the space created by offloading tedious tasks, and about how it can inspire avenues of thought that wouldn't have existed otherwise.
Personally, I’m not worried about the grocery-list-type tasks, but the lines between what should and shouldn’t be offloaded can be blurry. Most people will slip into using chatbots for the things on your “bad to offload” list if they use them frequently enough -- even if they originally intended not to and know they shouldn't.
It reminds me a bit of a sentiment like, “social media is great if you only use it to stay in touch with friends.” The reality isn’t that simple. People end up using social media in ways they'd prefer not to and that they know are bad, like getting sucked down a rabbit hole of short-form content. I think attributing the negative effects of social media to a lack of personal judgement or discipline isn’t fair; these platforms are engineered to maximize engagement (i.e. profit).
Unfortunately, I don’t think ChatGPT and similar platforms will be different. Social media went pretty badly as a result of this engagement maximization/profit-driven model. Not to say that it couldn't have gone another way if facilitating genuine human connection had actually been the goal.
I worry that due to the same profit-driven incentives as social media, and due to the interactive nature of chatbots, the potential for things to go badly in insidious ways is high. So, while I see your points, it's still very unclear to me that it will be a net-positive on a societal scale. Curious to hear your thoughts on this.
This post helped grow my lump, so to speak. But I do think it is lazy (in a moral sense!) to put on YouTube for your kids when you’re capable of telling them a story, for example. Or when they are capable of telling themselves stories via play.
In any case, I’m glad you made the effort to engage with these thoughts and post them here, instead of just confirming them with an LLM 🙏
Love this!
This triggered something in my head! I've been developing a strategic frame for EdTech integration (ALIGN) and kept wrestling with why schools adopt frameworks enthusiastically, then shelve them quietly. There's all sorts of research out there, but this was all of that, simplified!
"People assume the framework does the thinking, so there's less thinking left for leaders to do. Fill in the grid, tick the boxes. Lump of cognition in action."
Wrote it up here https://fixedtechstrategy.substack.com/p/align-cognition-frame-not-compliance-checklist
Background on ALIGN https://fixedtechstrategy.substack.com/p/align-diagnostic-frame-edtech-strategy
What an excellent and marvelous read; I shall be bringing up the lump of cognition fallacy whenever I hear this all too common and irritating complaint.
People only fall for the lump of cognition fallacy because they still treat thinking like a scarce resource instead of a system that expands the moment you offload the low value parts. When you hand routine cognition to an external tool, whether that’s Google Maps, a recipe, or a chatbot, you’re reducing cognitive entropy so you can spend your bandwidth on higher order meaning. The fear that AI replaces thought misunderstands how co-cognition actually works. Outsourcing the boring layers creates more conceptual surface area, not less. If anything, refusing these tools just traps people in synthetic realness, where they are performing effort for the sake of the performance instead of doing the deeper thinking that actually prevents drift.
I was thinking about something similar (zero-sum thinking), and this encapsulates a lot of my sentiments towards these matters. Great piece!
There is insight to be gained just by extending your labor analogy (focusing on physical labor). The amount of physical labor being done today is orders of magnitude more than in the past. We now mine entire mountains out of existence, which would not be possible without machines. Does this mean I can assure someone in the 1700s, “Don’t worry, people will still exercise in the future, because the need for physical labor will just keep growing, balancing things out”? Well, people in developed countries hardly get any exercise from work. Even manual laborers have machines augmenting them. People invented sports and exercise machines just to keep themselves physically healthy. Homework and writing are to cognitive activity what exercise and sport are to physical activity.
So yes, I do believe AI is negatively affecting our ability to think - not because of reduced quantity, but reduced variety. To continue the labor analogy, mechanized physical labor tends to overuse fewer muscle groups, risking repetitive stress injury, while also reducing the use of other muscles, resulting in less whole-body strength compared to purely non-mechanized physical labor. Could the same be possible with cognitive labor? For example, many programmers today don’t really understand how computers work, because they use only high-level languages. Older programmers had no choice but to think about it, but compilers have automated that thinking, so younger programmers never exercised that skill. The end result, though, is that younger programmers tend to write more bloated, inefficient code - which has consequences. To generalize: as we offload thinking to AI, we will get worse at what AI does well, and that imbalance can be bad for our cognitive health.
I really appreciate "the lump of cognition" as a concept. It's definitely one I'll carry with me.
But there's something that is bothering me a bit. I'm going to use a coding metaphor to try to explain. You can wrap a piece of code in an abstraction, which you can then manipulate at a higher level. E.g. with Python, you don't need to worry about how the computer is allocating memory, because the language does that for you.
Often, this type of abstraction is really powerful: by taking some of the questions away you can code more things! But losing access to the details can also be limiting; for some applications that require extreme performance, you would use a "lower-level" language because those details matter a lot when you're trying to write fast code. The abstraction can be more useful, but also more difficult to steer.
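To make the memory example concrete (this is a CPython implementation detail, shown purely as an illustration of the hidden work): you never ask for memory when growing a list, because the interpreter quietly over-allocates capacity in chunks behind the scenes:

```python
import sys

xs = []
sizes = []
for i in range(10):
    xs.append(i)
    sizes.append(sys.getsizeof(xs))

# The reported size stays flat across several appends and then jumps:
# CPython over-allocates capacity so that most appends need no new memory.
```

In a lower-level language you would manage (and could tune) that growth policy yourself; that's exactly the detail the abstraction both hides from you and takes out of your hands.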
I worry that AI often moves up a layer of abstraction in a way that leads to less overall agency. For instance, I've written classical music (and songs) since I was 12 or so. It's a slow process for me, and one which requires me to pay attention to A LOT of little details. Using a tool like Suno would help me not pay attention to a lot of those details, but I don't think it leads me to make better/more interesting music. Almost certainly, any Suno music I could create would be less interesting than the hand-written music. And I'd have had less to think about than if I'd tried to write the song on my own.
This doesn't mean that abstraction is bad in music. Sonata form is a very powerful template, so much so that when people stopped using it for everything, musical form arguably got less interesting. (It also doesn't mean AI music is per se bad --- I subscribe to your AI music YouTube channel.) But it does mean that the AI is less empowering than not having it, at least now that I've built up the specialist skill.
You cover something like this in your caveats: "Is a valuable experience on its own." But even if I hated every moment of writing music and only cared about the best possible final product, I think I can do more interesting things without the tool.
As I'm coming to the end of this comment, I'm not actually sure how to reconcile the abstraction idea with the extended mind. But do you get what I'm hinting at? Happy to clarify further
Yeah, maybe the issue is that people are terrible judges of which mental work is actually important-but-painful for building up tacit knowledge, and we can unknowingly lose a lot of opportunities to do this if we jump ahead? I think this is going to be a big issue with AI apps, and tbh the "places I wouldn't use them" section should mention it more. I can circle back later
I wish substack had a !remindme bot so I can reply when I have thought about it more.
I want to write up a blog post on this if I have time this weekend, and I'll think about this more in the meantime
I actually used to worry about this with Google Maps a little. It endlessly annoyed me that friends had absolutely no idea how to navigate a city (often one they’d lived in for years) if their phone died or lacked data or something. I still intuitively feel that a lot of easily outsourceable skills like that are worth developing, even if you switch them off in everyday life to free your capacity up for more important things. But I can’t really make a principled argument that the cognitive skills I have an affective preference for are really the important ones - more likely what’s happening is that I’m an urbanist dork, and it therefore makes me sad that everyday life doesn’t require people to understand as much about topics like the elegance of DC’s grid system as it did in the relatively computation-scarce past. I suspect a lot of the cases that ruffle people’s feathers the most with chatbot use (aside from sociopathic stuff like catfishing on dating apps) are the ones where the specific application similarly hits close to home for someone.
There’s also, I think, an underappreciated flipside to this, which emerges from people’s tendency not to consciously notice the water they’re swimming in. Sometimes, pre-outsourcing, people fail to appreciate the significance or broader applicability of some idea that’s very integrated into their day-to-day lives. In the urbanism example, a person who grew up familiar with the system in their city might not stop to reflect on its cleverness or significance, because it’s so mundane. But in a world of Google Maps, a curious person who has less unconscious knowledge of the system might be more likely to notice some feature of the city, like the lack of an X, Y, or Z street. In a world of chatbots, that can spark an interest and a Wikipedia-type deep dive into the history of the city, the historical development of ideas about standardization, the efficiency of right angles, etc., and now the person has a whole new window into features of the world around them that might not have occurred to them had the earlier stage of outsourcing not made some basic thing less invisible.
It’s also possible that the person who grew up pre-outsourcing and the one who grew up post-outsourcing will both lack curiosity and miss out on learning a cool thing. Ultimately interesting thinking is done by curious people and moderate degrees of technological change are unlikely to change that much!
This is just the standard-issue Moral Panic at new technology, amplified greatly because LLMs are directly targeted at writers, so they feel very, very threatened. I don't know if you followed the one we had about Google many years ago; it was extremely similar. The original one (or at least the first we know about) was Socrates' denunciation of the anti-cognitive effects of writing. Ironically, we only know about it because it was written down.
What’s useful here is treating cognition as an architecture, not a budget. Civilization, tools, and now AI act as external compression layers that expand what the system can meaningfully engage with. Problems arise only when offloading interrupts the formation of internal models rather than supporting them.