34 Comments

Thanks for writing this, Andy! A point worth sharing here is that the environmental critique of LLMs seems to have been "transferred" from the same critique of blockchain technology and NFTs. An environmental critique of NFTs is, as far as I know, valid — selling 10 NFTs does about as much harm, measured in carbon emissions, as switching to a hybrid car does good. What may have happened is that two coalitions that were arguing with each other about one technology simply reused the same arguments when a newer, reputedly-energy-intensive technology came around.

This hypothesis is not my own, but it strikes me as extremely plausible. I couldn't otherwise see how critics of AI could have anchored on this argument, when there are so many other perfectly valid arguments about the downsides of LLMs!


FWIW the critique was actually equally incorrect and immaterial as applied to blockchain technology (selling 10 NFTs does not emit as much carbon as switching to a hybrid car saves), so the animus is likely instead a dislike of the types of people who pursue both of these technologies rather than an object-level critique of them.


Nice post!

Could you give exact sources for the numbers in the "Water consumed by ChatGPT vs other activities" graph? I prefer to personally verify claims in infographics before resharing them, but sources like "US Census Bureau" and "UNEP" put out a _lot_ of data so it's not obvious which one of their reports I should look at.


That's the one graph I didn't actually make and can't find a reliable source for who originally made it. Here's the burger statistic: https://www.latimes.com/food/dailydish/la-dd-gallons-of-water-to-make-a-burger-20140124-story.html#:~:text=Just%20to%20get%20a%20sense,requires%20660%20gallons%20of%20water. And the water used per ChatGPT search statistic: https://www.govtech.com/question-of-the-day/how-much-water-does-chatgpt-drink-for-every-20-questions-it-answers
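For anyone who wants to sanity-check the ratio themselves, the arithmetic is short. This is a rough sketch using the figures from the linked articles (660 gallons per burger, and roughly a 500 ml bottle of water per 20 questions, per the commonly cited estimate); adjust the constants if the sources report otherwise:

```python
# Back-of-envelope: ChatGPT questions per hamburger, by water use.
# Both input figures are estimates taken from the linked articles.
GALLONS_PER_BURGER = 660          # LA Times figure for one hamburger
LITERS_PER_20_QUESTIONS = 0.5     # ~one 500 ml bottle per 20 questions
LITERS_PER_GALLON = 3.785

gallons_per_question = (LITERS_PER_20_QUESTIONS / LITERS_PER_GALLON) / 20
questions_per_burger = GALLONS_PER_BURGER / gallons_per_question
print(f"{gallons_per_question:.5f} gal/question, "
      f"~{questions_per_burger:,.0f} questions per burger")
```

Under those assumptions, one burger's water footprint works out to roughly a hundred thousand ChatGPT questions.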


Great, thanks!


Heard this post referenced on the "Hard Fork" podcast and I'm glad I took time to read it. It is illuminating. Thank you for breaking it all down. It also confirms things I've read in books like "Not the End of the World" by Hannah Ritchie, who also stresses the importance of things like eating less meat and improving crop yields (to reduce our agricultural footprint) as some of the most important ways to manage our individual climate impact.


Since you seem open to updating the graphs, could you please color "Asking ChatGPT a question" on the "online activities" graph a different color (probably red for both the text and bar)? The way Substack crushes images by default renders the text much less legible, and needing to expand the image defeats the at-a-glance purpose of a tl;dr.

Beyond that, though, thank you so much for giving me something to link to whenever someone trots out this dumb argument.


Great point, thanks! I'll have time to fix it this weekend.


Thank you very much for this. Every time I tried to crunch the numbers, even being generous in estimates, they did not add up to the claims of "AI burning the rainforest"... but seeing it all in one place is even more effective.


Great article, but this also contains some zombie facts about older models that consumed way more energy - the costs keep coming down over time. https://simonwillison.net/2024/Dec/31/llms-in-2024/#the-environmental-impact-got-much-much-worse


Yup, appreciate you pointing this out! I knew the stats were overestimates and thought about addressing that, but with the limited space I have I figured it'd be more convincing if I used the highest estimates we have for ChatGPT's energy and water use. I might add an addendum at the end, though I'm worried the intro's already very long.


Thank you for writing this. It was so illuminating! Would it be a lot to ask for an addendum to this excellent piece that incorporates the environmental implications of generative AI for images and video?


I could add the stats about them to the article later yeah! Just need to track them down.


Would also be interested in this! Also – if possible – audio (Elevenlabs & Suno). Then this would be a comprehensive review. (It's great already btw, thank you!)


Looking into this more, I'm seeing a really wild range of estimates of how much these use, anywhere from 1 ChatGPT search to like 100x that. I'm worried that I'd basically need to make two long sections of the article for "Here's the low end, it's not so bad" and "Here's the high end, it's actually a lot!" which might be worth a separate post instead. Ultimately, estimating this stuff is in some ways impossible because the energy use is spread out across so many different machines at once and there are so many users; the best we can do is divide one very large uncertain number by another and hope for the best. I might wait a few months until we have better information about this and then try to do an image/music deep dive.
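To make the "divide one huge uncertain number by another" problem concrete, you can propagate low/high bounds instead of point estimates. A minimal sketch, with all numbers hypothetical:

```python
# Illustrative only: per-image energy = fleet energy / images generated,
# where both the numerator and denominator are highly uncertain.
fleet_kwh = (1e6, 1e7)   # hypothetical low/high bound on fleet energy (kWh)
images = (1e9, 1e10)     # hypothetical low/high bound on images served

# Lowest per-image cost: least energy spread over the most images;
# highest: the reverse. The interval brackets every consistent answer.
low = fleet_kwh[0] / images[1]
high = fleet_kwh[1] / images[0]
print(f"per-image energy: {low * 1000:.3f} to {high * 1000:.1f} Wh")
```

Even with only a 10x uncertainty on each input, the resulting range spans two orders of magnitude, which is exactly the 1x-to-100x spread described above.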


Completely understand – sounds very reasonable! I'm already following your substack so if the data does become more solid and you wind up getting to the audio/video post, I'll be in the loop 😎 Thanks again!


This was a fantastic read, thanks for writing it. The environmental cost is mentioned in passing in some articles by respectable (at least, I think so) writers, and I kind of took it for granted.


Yup I think a lot of otherwise smart and well-meaning people are just misunderstanding the magnitudes here!


I wanted to write a post like this until I read this one! Excellent writing :D

arguably, in the medium-term (next ~5 years), AI will be very good for the climate because:

1. it'll make climate scientists and engineers more efficient by helping them rapidly ideate, test hypotheses, and find information. it can democratize knowledge, so more people can contribute to climate policy and research.

2. it'll incentivize massive capital expenditure by the world's largest tech corporations with net 0 carbon commitments who are investing hundreds of billions into clean energy infrastructure to help power AI data centers. this creates economies of scale, driving down the cost of nuclear, solar, batteries, transformers, and more - all critical for a green transition. tech accounted for 68% of all clean energy funding in 2024.

3. optimization. like AlphaFold, AI can test millions of combinations of materials for better solar panels or batteries, or it can help optimize electricity usage and reduce the carbon intensity of industrial operations. the energy company Vistra used AI models to save ~1.6M tons of carbon annually - equivalent to taking 348k cars off the road or removing the carbon footprint of New Haven, CT. AI optimization of HVAC systems can reduce GHG emissions by ~40%. Google - though admittedly biased - made a report with BCG showing that AI can mitigate 5-10% of global GHG emissions by 2030. that's the emissions of all of Europe.

what other technologies have the potential for such positive climate impacts at such a low carbon cost?

it's disappointing that the hyper-online left and TikTok creators have started seeing AI tools as a climate bogeyman. holding this view requires a lack of numeracy and blindness to the data


Thank you for doing the hard work and putting this together! I think the two charts highlighting carbon emissions and water consumption compared to other common activities basically nail the point.


Nice post, thank you for writing!

> At the last few parties I’ve been to I’ve offhandedly mentioned that I use ChatGPT

What kind of people are you chatting to who have these views?


A lot of everyday people from across the spectrum!


The comments are interesting so far: from my understanding, they're celebrating that "AI isn't bad," or at least not that bad. Well ...

... another "easy" metric is datacenter count x energy consumption. Please show me how that is going down and I'll shut up.

Because until the third Summer of AI, datacenter consumption growth was slow, or even pointing down per center. Systems got more and more energy efficient thanks to optimization of hardware and virtualization.

But now energy and CO2 are up 30 to 50% since 2020 for Microsoft, Google and so on. Why the f** would they need nuclear energy so badly?

Also, the current speculation on the datacenters *needed* in the future for anything AI is a business - much like the oil industry - that will create demand one way or another. Either demand is high and the bet pays off. Or, by overprovisioning, the cost becomes so low that AI is dirt cheap for a while, cheap enough to put in anything, right down to your buttons. WIN WIN.

On top of that, future AI - unless there is a transformer miracle - will need far more than a few hundred tokens per task: AI agent systems need constant, multiplied compute time to function.

But hey, what do I know, I only see the big picture and read the financial statements of AI & datacenter companies. Maybe it's all a bubble. Maybe you should read the environment and risk sections, too.


But whistleblowing on them is bad for your health.


Nice to see independent numerical analysis. BTW, it may be worth noting that the ChatGPT power numbers you use come from an April 2023 paper (Li et al, arXiv:2304.03271) that estimates water/power usage based on GPT-3 (175B dense model) figures published in *2021*. There is a newer paper that directly measured power usage on Llama 65B (on V100/A100 hardware) and showed 14X better efficiency (Samsi et al, arXiv:2310.03003).

Since I've been running my own tests, I decided to see what this looked like using a similar direct-testing methodology in 2025. On the latest vLLM with Llama 3.3 70B FP8, my results were 120X more efficient than the commonly cited Li et al numbers.

For those interested in a full analysis/table with all the citations (including my full testing results) see this o1 chat:

https://chatgpt.com/share/678b55bb-336c-8012-97cc-b94f70919daa
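The core arithmetic of the direct-measurement approach is simple: average GPU power times wall-clock time, divided by queries served. A minimal sketch (all numbers are hypothetical placeholders, not my measured results):

```python
# Sketch of the direct-measurement approach: serve a fixed batch of
# queries, record average GPU power over the run, and divide.
gpu_power_watts = 400   # hypothetical average draw during generation
wall_seconds = 50       # hypothetical time to serve the batch
num_queries = 100       # queries served in that window

joules = gpu_power_watts * wall_seconds        # energy = power * time
wh_per_query = joules / 3600 / num_queries     # 3600 J per Wh
print(f"{wh_per_query:.3f} Wh per query")
```

In practice you'd read the power draw from something like nvidia-smi while the batch runs, but the division at the end is the whole method.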


Hey, this ignores training...


I include training in the emissions and water sections! If you search for "training," I have a long piece about it in both.


> Google isn’t perfect either, and yet most people get a lot of value out of using it.

The output of a classic Google (web) search is fundamentally different from an LLM's. Unlike LLMs, a search does not pretend to be able to do the hard part of answering a question, which is understanding the answer. Answering a difficult question from Google alone can be hard enough, but at least you got to see the sources and were forced to weigh them against each other if they disagreed.

> Receiving bad or incorrect responses from an LLM is to be expected.

Most of the relative enthusiasts using LLMs now understand that, but what if the various big players succeed in shoving LLMs down everyone's throats through integration with phones, search, and office software, as is the current trend? Even in 2025, people drive their cars into the ocean because they blindly trust their GPS navigation, because machines, and LLMs in particular, are always confident regardless of their answers' truth or usefulness, which they cannot know.

And even if every user ever understood the caveat - how would a person asking an LLM be able to tell whether or not the LLM gave a useful, true answer, or at least didn't hallucinate one entirely? By doing the research themselves? But then you might as well cut out the middle man.

> I can ask very specific clarifying questions about a topic that it would take a long time to dig around the internet to find.

That is the opposite of understanding though. To truly understand a non-trivial fact, you must embed it in context. All that "useless" digging around for an answer enhances that context, and you can be much more confident that a certain piece of information is true and/or useful.

Case in point: Did you research and write this article yourself, or did you let ChatGPT do even parts of it?


I think most of this is just deeply incorrect and kind of ridiculously hyperbolic.

How many people is GPS killing each year?

If struggling through meaningless lists of links to find something I’m looking for were useful for “true” understanding, why not add that to other websites? Have Wikipedia include 20 useless links per article? This seems silly.

Most of this post came from research I did using ChatGPT, yes. I link back to it a lot here. You can have ChatGPT generate citations for its claims, so I just checked those and included them as links.


My claim was not "GPS bad", let alone "technology bad". It's obvious that the benefits of e.g. GPS far outweigh its drawbacks. But that wasn't the point either way.

I make the claim that AI gets disproportionately more implicit trust than most other technologies simply because it can communicate, verbally and otherwise, nearly flawlessly and in a style optimized for human consumption.

That's why I make the claim that dismissing concerns about AI usefulness by saying "Receiving bad or incorrect responses from an LLM is to be expected," implying that reasonable people will double-check LLM output anyway, will not be quite so relevant once the technology is widely used by normal people who take AI marketing at face value, as opposed to the mostly enthusiast usage now.

I make the claim that the current AI paradigm of LLMs is a dead end because of exhaustion and poisoning of training data.

Based on all that, I make the claim that, even while fully agreeing with your overall point that you have to look at cost vs. benefit (not just cost alone), the overall benefit of LLMs on the societal level is so small and possibly negative that virtually no amount of environmental impact is worth training and deploying LLMs other than *maybe* for research purposes.


I mean, I tried to make it clear in the post that I agree that LLMs being useful or not will affect how we think about their energy impact, but I think there are enough misconceptions about their energy use floating around that it was worth putting it in context in a blog post. I didn't mean this as an overall defense of LLMs, and I tried to make it clear that I personally think they're really useful and that people should at least try them before making a final call.


Fair enough, let's leave it at that.

Another question then. For lack of other known numbers, you have to use ChatGPT-4 numbers. While speculating about GPT-5 et al. would be just that, speculation, it seems pretty clear that at least the training costs would go up significantly, and probably operating costs as well. Or what do you make of news like this:

>https://epoch.ai/blog/how-much-does-it-cost-to-train-frontier-ai-models

"The cost of training frontier AI models has grown by a factor of 2 to 3x per year for the past eight years, suggesting that the largest models will cost over a billion dollars by 2027."

>https://www.datacenterdynamics.com/en/news/openai-wants-to-buy-vast-quantities-of-nuclear-fusion-energy-from-helion-report/

"Microsoft is also an investor in OpenAI, and the two companies are reportedly working together to develop a 5GW AI data center known as Stargate. This would cost up to $100 billion and come online in 2028, the same year as Helion plans to open its power plant."

>https://www.seattletimes.com/business/microsoft-hungry-for-ai-power-spurs-revival-of-three-mile-island-nuclear-plant/

"The owner of the shuttered Three Mile Island nuclear plant in Pennsylvania will invest $1.6 billion to revive it, agreeing to sell all the output to Microsoft as the tech titan seeks carbon-free electricity for data centers to power the artificial intelligence boom."

Yes, nuclear plants are carbon-neutral, but just as you wrote about water consumption, there are more ecological issues to consider than just the carbon footprint. There are also opportunity costs to consider such as not shutting down fossil fuel plants.

So when both Microsoft and OpenAI believe they have to revive TMI and invest in fusion plants just to operate their AI datacenters, you can convert queries to hamburgers all day long and I will remain more than sceptical that LLMs' ecological impact will stay irrelevant. ChatGPT-4 will certainly not remain the most expensive LLM for long, and the demand signalled by using today's LLMs can only hasten their successors' arrival.


I specifically framed the post around individually using ChatGPT because I didn't want to get into the broader environmental debate about AI as a whole. AI is using enough energy and being used for enough different purposes that it'd be an incredibly hefty project to sum up how it's being used overall and whether that's environmentally acceptable. It's totally possible that AI as a whole is going to use too much energy in the future, but individuals using LLMs shouldn't feel guilty at all about their contribution to that. That was the only point I was trying to make.
