What's the full "hidden" climate cost of a ChatGPT prompt?
It's so tiny, and if we include "all the hidden costs" it might actually be a negative number
Many say that the individual energy cost of prompting ChatGPT isn’t its “true” environmental cost, because there are hidden emissions that aren’t factored into the calculation. Suspiciously, they never actually try to show what that “true” cost is. They just gesture at it and leave you to infer that it must be large.
Let’s try to figure out what it is.
The full cost of a prompt
We’ll start with the agreed-on best rough guess for the median ChatGPT prompt: 0.3 Wh of energy. This includes the energy costs of cooling in the data center, idling chips, and data center overhead.
What is this missing?
Training the model
Well, it doesn’t factor in the cost of training the model. There’s no agreed-upon figure for how much energy went into training GPT-5 or how many prompts it will serve over its lifetime, so we can’t just divide the training energy by the number of prompts. We do know that roughly 40% of all energy used on AI in the US right now goes to training new models, so we can roughly approximate that the 0.3 Wh of inference represents ~60% of the true cost once training is included. Dividing 0.3 Wh by 0.6 gets us to ~0.5 Wh per prompt for training + inference.
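If you want to check that arithmetic yourself, here’s a minimal sketch in Python, treating the figures above as rough working assumptions rather than measured values:

```python
# Rough gross-up of per-prompt energy to include training.
# All numbers are the rough estimates from the text, not measurements.

inference_wh_per_prompt = 0.3       # median ChatGPT prompt, incl. cooling, idle chips, overhead
training_share_of_ai_energy = 0.4   # ~40% of US AI energy currently goes to training new models

# If inference is ~60% of the total, divide by that share to get the full cost.
full_wh_per_prompt = inference_wh_per_prompt / (1 - training_share_of_ai_energy)
print(f"{full_wh_per_prompt:.2f} Wh per prompt")  # ~0.50
```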
Emissions
Of course, we don’t actually care about the energy used. We care about the emissions. The average American power plant emits 0.37 g CO2 per Wh, but data centers in America use energy that’s 48% more carbon intensive than average. Combining these:
(0.5 Wh / prompt) × (0.37 g CO2 / Wh) × 1.48 ≈ 0.27 g CO2 / prompt.1
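Or, spelled out as a tiny script (same rough numbers as above):

```python
# Per-prompt emissions from per-prompt energy, using the rough figures in the text.
wh_per_prompt = 0.5            # training + inference estimate from above
us_avg_g_co2_per_wh = 0.37     # average US power plant carbon intensity
data_center_multiplier = 1.48  # data centers' energy mix is ~48% more carbon intensive than average

g_co2_per_prompt = wh_per_prompt * us_avg_g_co2_per_wh * data_center_multiplier
print(f"{g_co2_per_prompt:.2f} g CO2 per prompt")  # ~0.27
```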
Transmission from the data center to your computer
Transmitting information from data centers to your personal device uses minuscule amounts of energy. Moving 1 GB of data across the internet costs roughly 10 Wh, and a standard ChatGPT response is 1–10 kB of text. That works out to roughly 0.00001–0.0001 Wh per response, at most ~0.02% of the cost of a prompt. It’s negligible.
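Here’s that transmission math as a quick sketch, using the ~10 Wh/GB figure and a generous 10 kB response size from above:

```python
# Energy to ship one response from the data center to your device.
# The per-GB figure and response size are the rough estimates from the text.
wh_per_gb_transfer = 10.0   # rough estimate for internet data transfer
response_bytes = 10_000     # generous 10 kB response
prompt_wh = 0.5             # full per-prompt energy from above

transfer_wh = wh_per_gb_transfer * response_bytes / 1e9
print(f"{transfer_wh:.5f} Wh per response")                   # ~0.00010
print(f"{100 * transfer_wh / prompt_wh:.3f}% of a prompt")    # ~0.020%
```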
Embodied carbon of the AI chips
What about the cost of physically making the AI hardware? A new study, the most comprehensive attempt so far to estimate the “embodied cost” of actual AI hardware, finds that only about 4% of the emissions associated with an AI chip used for GPT-4 come from producing it; the other 96% happen when the chip is used for training and inference. This makes sense. AI chips are designed to have energy running through them 24/7. It’s a bit like building a wire and counting all the carbon emitted to generate the electricity that ran through it over its lifetime as part of the wire’s emissions: you’d expect most of those emissions to come from the electricity, not from producing the wire itself. In the same way, actually using the chip should be energy intensive relative to building it.
Conclusion
So if embodied emissions are 4% of the total, the 0.27 g CO2 we calculated is the other 96%, and the full carbon cost is ~0.28 g CO2 per prompt (0.27 / 0.96 ≈ 0.28).
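And the final gross-up, under the study’s roughly 4%/96% split:

```python
# Fold embodied (manufacturing) emissions into the operational estimate.
operational_g_co2 = 0.27   # training + inference emissions from above
operational_share = 0.96   # per the embodied-carbon study: ~96% use, ~4% manufacturing

full_g_co2_per_prompt = operational_g_co2 / operational_share
print(f"{full_g_co2_per_prompt:.2f} g CO2 per prompt")  # ~0.28
```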
This is the same amount of carbon emissions as:
Running a microwave for 3 seconds
Using a laptop for 1 minute
Running a clothes dryer for 1 second
Driving a sedan 5 feet
Playing a PS5 gaming console for 15 seconds
It’s 0.0005% of the average American’s daily emissions; an American emits ~200,000 times as much every day. You would have to prompt ChatGPT 2,000 times in a day just to increase your emissions by 1%, and prompting it that much would eat up so much time that it would likely crowd out higher-emission activities like driving. Unless you were constantly prompting ChatGPT while multitasking through everything else you do, it would be very hard to raise your emissions at all by using it. It seems like, even with all the hidden costs included, ChatGPT is much more likely to lower your emissions, because it emits so much less than the average ways Americans spend their time.
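If you want to check the daily comparison, here’s a sketch. The ~56 kg/day figure is just the daily footprint implied by the “200,000 times as much” claim above (about 20 tonnes of CO2 per year), so treat it as a working assumption:

```python
# Compare one prompt against a day of average American emissions.
prompt_g_co2 = 0.28
daily_american_g_co2 = 56_000   # ~56 kg/day, the footprint implied by "200,000x" above (assumption)

share = prompt_g_co2 / daily_american_g_co2
print(f"{share:.4%} of a day's emissions")                            # ~0.0005%
print(f"{daily_american_g_co2 / prompt_g_co2:,.0f}x more per day")    # ~200,000

prompts_for_one_percent = 0.01 * daily_american_g_co2 / prompt_g_co2
print(f"{prompts_for_one_percent:.0f} prompts to add 1%")             # ~2,000
```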
What about more abstract costs?
Normalizing AI
Maybe using AI is bad because it “normalizes AI” and causes other people to use it.
For the most contagious strains of COVID, the average unvaccinated person could expect to infect ~8-10 other unvaccinated people.
Let’s say using AI is as “contagious” as the most contagious strains of COVID. If you start using ChatGPT, 10 of your friends start using it too. Maybe in this case you’re responsible for all their emissions too.
So every prompt you send causes 10 other prompts from your friends. Any one prompt is actually 10x as bad. The cost changes from 0.28 g CO2 to 2.8 g CO2.
Here, each prompt is still just a 20,000th of the average American’s daily emissions. Here are 20,000 dots:
You would have to prompt ChatGPT 200 times just to raise your emissions by 1%.
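The same arithmetic for the “contagion” scenario, under the very generous assumption that each of your prompts induces ten more from friends:

```python
# "Normalization" scenario: you take responsibility for 10 friends' prompts too.
prompt_g_co2 = 0.28
contagion_multiplier = 10       # assume COVID-level "contagiousness" of the habit
daily_american_g_co2 = 56_000   # same working assumption as above

effective_g_co2 = prompt_g_co2 * contagion_multiplier       # ~2.8 g
fraction_of_day = effective_g_co2 / daily_american_g_co2    # ~1/20,000
prompts_for_one_percent = 0.01 * daily_american_g_co2 / effective_g_co2

print(f"{effective_g_co2:.1f} g CO2 per prompt, ~1/{1 / fraction_of_day:,.0f} of a day")
print(f"{prompts_for_one_percent:.0f} prompts to add 1%")   # ~200
```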
So “normalizing AI” doesn’t raise ChatGPT prompts up to a level where you should be concerned about them.
Digital infrastructure
Maybe we should include the full carbon costs of the digital infrastructure and supply chains required to support AI, not just the data centers themselves. By using AI, you’re complicit in the rapid buildout of data centers and global electronics supply chains that support it, not just the individual energy cost of your prompt.
Something strange about comparisons of the full infrastructure costs of AI to other things we do is that they often fail to account for the infrastructure costs of those other things.
A recent popular blog post was titled “Why Saying “AI Uses the Energy of Nine Seconds of Television” is Like Spraying Dispersant Over an Oil Slick.” The author’s main point is that each individual AI prompt is able to use so little energy only because of a vast and expanding background buildout of AI infrastructure, so just reporting (as I do) that an AI prompt uses only as much energy as running a microwave for a few seconds hides the more ominous reason it’s able to be so cheap in the first place. By using AI, you’re complicit in some way in that infrastructure buildout.
This criticism would make more sense to me if everything else in society didn’t also have a vast, sprawling physical infrastructure supporting it. “9 seconds of TV” sits on top of huge networks of electronics, enormous amounts of money and people-hours spent making the most entertaining TV possible, and lavish (often wasteful) lifestyles enabled by the profits from TV. Obviously, TV advertising also encourages people to buy more stuff from other complex supply chains.
If you make a comparison like this:
One off cost of an AI prompt + the full infrastructure supporting AI ←→ 9 seconds of TV
then it’s easy to make AI seem like the bigger problem, but if you make what I think is the correct comparison instead:
One off cost of an AI prompt + the full infrastructure supporting AI ←→ 9 seconds of TV + the full infrastructure supporting TV
then I suspect the infrastructure costs of AI and TV will roughly cancel each other out, and you might as well just make the original comparison:
One off cost of an AI prompt ←→ 9 seconds of TV
This is why I think it’s reasonable to make this comparison.
We don’t really hold this standard for anything else we talk about. I can say “Your sedan emits about 320 g of CO2 for every mile you drive” and I don’t think that’s deceptive, even though the sedan is relying on a vast road infrastructure that costs 69 million tonnes of CO2 each year in America alone just to maintain, the sedan itself has “embodied carbon costs” from manufacturing it that I’m not including, and driving a car normalizes the behavior for other people. I think people understand that these additional costs exist when they talk about how much cars emit per mile, and I think they also understand these costs exist when we talk about how much AI emits per prompt.
AI infrastructure is being built out faster than most other infrastructure, but how fast something’s emissions are growing doesn’t, on its own, tell us much about how bad it is for the climate. The carbon emissions from the global supply chain of Labubus recently began to grow rapidly, but Labubus will remain such a tiny part of global emissions that this doesn’t matter. What matters is the total amount of emissions and how much value we’re getting from them.
AI and electronics will obviously emit way way more in total than Labubus. However, the IEA expects the data center buildout to, on net, significantly decrease emissions overall, because AI will be optimizing so many other processes in society and making green energy and smart grid tech more viable. They project that AI alone will prevent 4 g of carbon emissions for every 1 g all global data centers emit (for both the internet and AI). So I could end up saying something strange like “Every ChatGPT prompt encourages the data center buildout, which is good because forecasts imply that the buildout will on net lower global emissions, so every ChatGPT prompt you send decreases emissions by x amount.” We’ve ascended into a level of abstraction that I think is goofy. Things get goofy when you try to cram the responsibility for every possible outcome of AI into the individual cost of an AI prompt. Your predictions about this “total, hidden” cost will mostly depend on the decisions people make in the future about how and where to use AI, not on ChatGPT’s individual in-the-moment impact itself. Seems goofy!
This goofiness is why I think it makes more sense to just limit what we say about the climate impacts of individual prompts to those prompts themselves, and leave broader forecasts of AI’s total climate impact in the future for separate conversations.
Using ChatGPT is not bad for the environment
This will be my final word for now on the costs of individual ChatGPT prompts. If someone approaches you and says that AI has “hidden environmental costs” that are big enough that your individual prompts add in any meaningful way to your daily carbon emissions, you should politely but firmly tell them that they’re wrong. Specifically, push them to share what the actual cost is and how it actually compares to other regular things we do. It shouldn’t be so easy to just vaguely gesture at this “real cost” without having some rough idea of what it actually is, and yet people do it all the time with no pushback.
There’s a separate conversation to be had about the net environmental impacts of AI overall, or of data centers built in specific places. That’s very different from the impact of individual AI prompts. Both conversations are important, but we need to be clear which one we’re having. And when it comes to the cost of individual prompts, the numbers are clear. In general, I don’t think the average person should focus much on their personal carbon footprint compared to systemic changes to the energy grid, so I’d really rather we just have the other conversation anyway.
1. I am aware I’m committing sig fig sins. If I didn’t, the embodied costs of AI wouldn’t show up at all and I think some readers would be confused.



Are people actually arguing "normalization"? It wouldn't surprise me, but I hadn't heard that one before. It's a bit late in the game, though. Normalization is now a cultural phenomenon, not a meme secretly transmitted by seeing someone use ChatGPT in public, or by crossing a picket line.
There could actually be a per-prompt anti-normalization effect: "I'm seeing everyone around me is using AI so much, it really must be destroying the planet!" or "Look at them! I'll never be one of those drones who hands off their thinking to AI!"
Either way, there's a natural ceiling to personal AI use, and personal use already mostly doesn't require today's smartest models. (Though for now ChatGPT-5 Thinking still does a lot better at creating recipes.) It's just that the ceiling is a lot higher than critics are comfortable with.
Trying to account for "contagion" is tricky. On the one hand, the causal effect doesn't stop after 1 spread, so you end up with exponential causal responsibility. On the other hand, if you have exponential spread happening anywhere then pretty soon everyone has been infected so maybe your contribution doesn't make any *counterfactual* difference: unless *every* spreader is neutralized, the end result is inevitable no matter what you in particular do.
[Edited to add: I take this to show that we shouldn't really model "normalization" as a form of unlimited contagion. But it's left a bit of a mystery how we *should* think about it.]