Generative AI is a Climate and Societal Issue

I got set off this weekend when a friend informed me that more and more people in a mutual friend group are talking about using generative AI (genAI).

I know these people well. I know they genuinely care about people and planet. I cannot comprehend why anyone there would want to use the power- and water-guzzling, exploitative, polluting theft machine that makes things cost more and rots your brain. It is wrong, but faster. That is all it does. It has no use case.

None of the issues with genAI are new. What is new is the tremendous scale. Everything I say below has always been a concern, to some extent, with data centres, social media, and other forms of digital usage and communication. But with the explosion in genAI growth, these problems have gotten significantly worse over the past half-decade or so. And as I point out in the last section, genAI accelerates them for no real benefit to society.

This article covers topics which may not seem directly related to climate. However, once you understand climate as a whole-of-society problem and see how it touches on everything, you see the relevance of everything else. The generative AI industry is doing the same shit the fossil fuel industry has been doing, but faster – and the same has been true of many other damaging industries throughout history. I wanted to include some analysis of how data centres are a new tool of imperialism, but this article is already long, and that could easily be its own piece. So I’ll do that another time.

How GenAI works

There are many different kinds of AI systems. The ones people are commonly referring to when they say “AI” these days are generative AI systems: the ones that will “create” something for you when you input a prompt. The most well-known genAI systems are large language models (LLMs), which generate text based on the prompts you supply them with. There are other genAI systems which generate images, music and other forms of media; these are known as multimodal foundation models (MFMs).

GenAI systems are trained on reams of data scraped – stolen – from the internet. This includes data which is free to access, data which is paywalled, and even digitised copies of physical media purchased illegally from the dark web. This information is then fed into the systems that power these models, which, in turn, “create” their content based on all of the material they have been trained on. Without this training data, they would not exist.

LLMs are probability machines. When they generate text, they calculate which word would make the most sense to come next in the current sequence, based on the data they have been trained on. They do not understand any of their outputs, which is why they frequently lie (the industry euphemistically refers to these lies as “hallucinations”). Even calling the machine a liar doesn’t feel quite right; it’s not really lying, since it doesn’t understand anything it’s outputting. Using any humanising language isn’t correct, really, as genAI does not “think”, “calculate”, or anything of the sort. It works on probabilities. However, it is hard to talk about these machines without anthropomorphising language, which is why I continue to use it. Bear that in mind as you read.
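
To make the “probability machine” point concrete, here’s a minimal toy sketch in Python of the “pick a statistically likely next word” step. Everything in it – the tokens, the probabilities, the tiny vocabulary – is invented for illustration; a real LLM derives its probabilities from billions of learned parameters, but the core move is the same weighted dice roll:

```python
import random

# Toy illustration of next-token sampling – NOT a real model.
# A real LLM computes a probability for every token in its
# vocabulary; these numbers are hard-coded purely for demonstration.
next_token_probs = {
    "mat": 0.6,    # the statistically likely continuation
    "sofa": 0.3,
    "moon": 0.1,   # unlikely, but still possible
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick the next token at random, weighted by its probability."""
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

prompt = "The cat sat on the"
print(prompt, sample_next_token(next_token_probs))
```

Notice what’s missing: nothing in that loop checks whether the output is true. “Plausible-sounding” and “correct” are different properties, and the machine only optimises for the first.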

GenAI consumes a tremendous amount of resources. As with everything on the internet, information stored and processed in “the cloud” is not a pie-in-the-sky fantasy, but a very real, on-the-ground reality for a lot of people. To improve performance (which has diminishing returns), a model needs to be constantly fed new inputs for training. This consumes a great deal of resources, which are then wasted when the company releases a new model a few weeks later. That is on top of the resources consumed every time someone makes a query. While this has always been an issue, the scale of genAI and the resources required for training and upkeep have made the problem significantly worse.

Power guzzling

The amount of power used per query is not much, but with so many users, it all adds up. It’s also much more power than a search engine uses (provided you’re using one without AI-generated “summaries”).

I just wanted to find out if I could eat my favourite roll after my IBD diagnosis.

This is keeping fossil fuel infrastructure alive. Old coal and other fossil fuel plants that were due to wind down have been given a new lease on life through deals with tech companies that keep them running when they otherwise would have closed.

They don’t care where the energy comes from, as a former Google executive told Congress last year. He said AI’s energy use was expected to triple over the next year, and that much of it would come from fossil fuels, as nuclear cannot be built quickly enough. There are so many fucking examples of AI extending the life of fossil fuels, and the relationship is mutually beneficial, with some AI firms spruiking their ability to make fossil fuel extraction more efficient – the big companies provide no shortage of specific examples.

Even when they use renewable energy, data centres are displacing energy that would otherwise have gone to everyday people, keeping them on fossil fuels and delaying the transition. And, also… most of them are not using renewable energy anyway. This is why the fossil fuel and genAI industries get along so well.

Google supposedly released its energy consumption numbers a few weeks ago; they were promptly torn apart by analysts. My favourite piece is from Ketan Joshi, which you can find here. Joshi is an outstanding writer who focuses on climate data, and if you’re not following him, you should. He’s constantly posting good stuff on LinkedIn.

Back to genAI. There were two key issues Joshi identified with Google’s energy figures:

  • It focused on the median value per query, which obscures overall emissions: a small number multiplied across millions of queries becomes a big number, but Google didn’t mention that anywhere (see the quick arithmetic after this list).
  • It only covered text generation, ignoring the more power-hungry image and video generation.
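
To make the “small number multiplied across millions” point concrete, here’s some back-of-the-envelope arithmetic in Python. The 0.24 Wh median is the per-prompt figure widely reported from Google’s release; the one-billion-prompts-a-day volume is purely my assumption for illustration, since Google doesn’t disclose query counts:

```python
# Back-of-the-envelope: a tiny per-query median times a huge query
# volume is a big number. 0.24 Wh is the median text-prompt figure
# reported from Google's release; prompts_per_day is a made-up
# illustrative volume, since Google doesn't disclose the real one.
median_wh_per_prompt = 0.24
prompts_per_day = 1_000_000_000

daily_kwh = median_wh_per_prompt * prompts_per_day / 1_000
yearly_gwh = daily_kwh * 365 / 1_000_000
print(f"{daily_kwh:,.0f} kWh/day, about {yearly_gwh:,.1f} GWh/year")
# -> 240,000 kWh/day, about 87.6 GWh/year – for text prompts alone
```

And per the second point, whatever the image and video numbers are, they would sit on top of that.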

Here are two banger quotes from Joshi:

“Something useful here is that Google have made it clear they’re perfectly capable of determining the very specific energy consumption of generative systems. They could share the same information about images and video, but they choose not to. I think that’s telling enough.”

“Most data centre growth in the world has been concentrated in America, and most data centre growth relates specifically to the types of hardware used to power generative text, images and video. And most of America’s growth in power demand relates to data centres.”

Joshi’s piece has a couple of awesome graphs showing the annual power consumption of major US tech companies. Between 2023 and 2024, the growth in Google’s energy use doubled, and it’s a similar story for most of the other companies.

Why would they release such a shit figure?

To control the narrative. And a lot of people have fallen for it hook, line and sinker. By releasing something, Google knew most people would simply accept it at face value, believe Gemini isn’t bad for the environment, and point to their propaganda as “evidence”. And they were right. You always, always, always need to think about the source of your information. Google, being an AI developer who profits from its proliferation, has a vested interest in downplaying the environmental harms. Don’t trust them, or any other genAI developer, as your sole source of information.

GenAI producers claim we don’t need to worry about the energy consumption because the machines will help us solve the climate crisis. They have produced absolutely zero evidence to back up their claims, and given they have an obvious financial stake in this agenda, I don’t trust them. It echoes decades of claims from fossil fuel companies that they can keep polluting because we will invent our way out of it, or offsets will save the day. It is bullshit. This article discusses a lot of key issues with offsets and remains relevant even now, a decade after it was published. Offsets have always been a way to stall climate action and facilitate guilt-free pollution, and the same is true of claims that genAI will solve the climate crisis – or any kind of problem, really. These claims should be treated with utter contempt, as the propaganda they are.

Water guzzling

Data centres run hot, and require enormous amounts of water to keep them cool. If you want to read more about the technical details of how they operate, do so here.

They are sucking up water that communities need. In Sydney, data centres currently use less than 1% of the city’s water. Sydney Water expects this to leap to 25% within the next 10 years. Not by 25%; to 25%. That’s too much for a drought-ravaged country, made even worse by the fact that the climate crisis will make our droughts longer and more severe.

Companies running data centres thrive on a lack of transparency. This is also true of energy, but it’s starker when it comes to water. Water usage is treated as a trade secret, which means locals are blocked from accessing that information:

“[Citizens in Uruguay] were forced to go to court to gain even limited information about Google’s plans, and only then learned that its cooling towers will need 7.6 million liters (2 million gallons) of potable water a day.”
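
For a rough sense of that scale, compare it against ordinary household use. The figure of 150 litres per person per day below is my assumed benchmark for illustration (typical household usage varies widely by country), not a number from the reporting:

```python
# How many people's daily water use does one data centre's cooling
# demand equal? 7.6 million litres/day is the figure from the
# Uruguay reporting; 150 L/person/day is an assumed household
# benchmark, since actual usage varies widely by country.
cooling_litres_per_day = 7_600_000
litres_per_person_per_day = 150

people_equivalent = cooling_litres_per_day / litres_per_person_per_day
print(f"about {people_equivalent:,.0f} people's daily water use")
# -> about 50,667 people – a small city's worth of potable water,
#    every single day, for one facility
```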

Tech companies claim to be developing ways of cooling their data centres that mean they can recycle water, avoid freshwater, or even use no water at all. I treat this the same way I treat offsets and hypothetical emissions reductions from technology: I’ll believe it when I see it. Show me evidence of it being implemented everywhere, not quietly rolled out only in the cases where it happens to work well. Microsoft implemented a more efficient cooling system for some of its data centres in Chile – but only after significant opposition to the data centres forced it to change. Why didn’t they do that from the outset? Because they care more about their bottom line than about communities.

Exploitative

Workers in Kenya were paid $2 an hour to make ChatGPT less racist. There is an entire industry around slicing data into microtasks for labelling – images, text, video, and everything else – to ensure the information going into the machines is accurate. The labelling is often carried out by workers in developing countries, where tech companies can get away with severely underpaying the people who carry those companies on their backs. Those countries also tend to have poor workers’ rights laws and records – often a legacy of laws carried over from colonial Western governments.

Dividing the work into microtasks saves employers on labour, at tremendous cost to the workers themselves. Workers have to compete with one another for mind-numbing tasks that pay little. If it takes you longer to complete a task than the time allocated – even if that’s because the contractor set an unreasonable timeframe – too bad: you don’t get paid. Jobs are first-come, first-served, so workers sit on microtask platforms trying to beat other workers around the world to each job. They can’t leave their computers without risking missing out. They will sit there for 12 or 16 hours a day, or even more, and maybe get a few hours’ worth of work; much of the time is spent waiting for opportunities to come up, only to be beaten to them by someone else anyway.

The pay system sucks, too. Workers can only withdraw payments once they exceed a certain amount, and if you’re one of the unlucky ones who gets banned from the platform – often for something you didn’t actually do – your earnings are lost. The decentralised and anonymised nature of the work also means workers cannot unionise or otherwise band together to fight for better pay and conditions. For more detail on microtasks, check out Work Without the Worker.

This isn’t an industry unique to genAI; social media companies use it too. And it takes a huge psychological toll on those workers, who often receive little to no support. They may not even feel comfortable seeking psychological support for fear of losing their jobs.

Don’t even try to tell me tech companies are unaware; it’s common knowledge to anyone who pays attention. They could choose to engage only with contractors who actually care for their workers. These companies are enormous and can force change if they throw their weight around. Even better, since this is ongoing work, there is nothing stopping them from bringing these workers in-house as direct employees, with the better pay and benefits that entails. But they don’t give a shit about people, only their bottom line, so they won’t.

Polluting

Putting climate pollution aside, data centres pollute in other ways, too. Some of them use highly polluting energy sources, which can release gnarly amounts of harmful chemicals. This has been gaining attention recently as a community in Memphis sounds the alarm.

These things are also noisy: their energy sources and cooling systems can generate a significant amount of noise pollution. This causes issues for staff, whose hearing can be damaged as they move about a facility. Communities living near data centres notice an uptick in headaches, sleep disruption and stress, which can cause myriad long-term health issues. It’s bad for wildlife, too. Head here for a primer on data centres and noise pollution.

I’m bringing back Joshi’s piece for more on how this is affecting communities in America.

Theft

GenAI is trained on data scraped from the internet. Developers use anything and everything they can find, which is why they need data labellers to keep harmful content out. This means all of your social media posts have been fed into the machine. All those 10-year photo challenges will be used to train facial recognition algorithms. And if you still use Facebook, you might want to double-check your phone settings to make sure it’s not absorbing all the photos on your camera roll.

Unsurprisingly, artists, musicians and writers are not happy about this. GenAI developers are taking our works without our consent and monetising the outputs, without any of that money making its way back to the creators whose works fuel the machines. This has given rise to a host of litigation from people whose works have been stolen.

GenAI developers counter that their machines could not exist if they were forced to pay for the inputs. To which I say, fine. If your business model requires stealing from hundreds of millions of people, it should not be permitted to continue. Do it ethically, or piss off.

Wrong, but faster

The kicker? Despite all the problems listed in this article, they don’t even fucking work. This is the most mind-boggling part of this whole thing, to me.

GenAI is riddled with inaccuracies. I wrote the AI guidelines at my last workplace, and my recommendation was that people shouldn’t use it unless they were going to fact-check it – which, in many cases, would defeat the purpose of using it in the first place. And that’s because these systems just suck and cannot do anywhere near the number of things they are advertised to do.

There’s a famous IBM quote: “A computer can never be held accountable, therefore a computer must never make a management decision.” The same is true of software tools. If your response to a dodgy output is to blame the machine and then keep using it, the fault is with you. You know it sucks and persist anyway, and you deserve to face consequences the same way you would if you had used any other tool inappropriately. If your workplace is forcing you to use it, then the blame should fall on them.

People often use LLMs to generate text or do “research”. However, LLMs have no concept of truth or falsehood. They will always output correct-sounding text, but you need to check it yourself to make sure it’s accurate. At a previous workplace, someone asked ChatGPT a question, and then asked for a citation. She went and checked the citation it provided. The report it quoted was real, but the information it attributed to the report wasn’t there; in fact, the report didn’t cover her query at all. Lawyers have been reprimanded for not checking their AI-generated briefs and submitting bullshit. Having to do this every time it spits something out? Why would you bother, when you could just do the research yourself?

GenAI doesn’t comprehend numbers. If you give it two numbers and ask it to tell you which is bigger, it will be correct sometimes, and sometimes not – about 50% of the time, in fact, since it’s just guessing. Quoting Joshi again:

“Doing a calculation using a chatbot, one of the most-used functions, is several million times more energy intensive than using a calculator and significantly more likely to be flat-out wrong.”
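
That “several million times” claim is easy to sanity-check with rough numbers. Both inputs below are my assumptions for illustration: roughly 0.24 Wh per chatbot prompt (the reported median for text), and on the order of 0.0001 joules for a single operation on a pocket calculator (a sub-milliwatt chip running for a fraction of a second):

```python
# Sanity check of "several million times more energy intensive".
# Both figures are rough, assumed-for-illustration values:
# ~0.24 Wh per chatbot prompt, ~1e-4 J per calculator operation.
joules_per_chatbot_prompt = 0.24 * 3600   # 0.24 Wh -> 864 J
joules_per_calculator_op = 1e-4           # sub-milliwatt chip, under a second

ratio = joules_per_chatbot_prompt / joules_per_calculator_op
print(f"chatbot / calculator: about {ratio:,.0f}x")
# -> about 8,640,000x – "several million times" holds up
```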

Using it to summarise a meeting or email chain? It’s definitely leaving out key information and getting technical details wrong. Someone at a previous workplace tried it for meeting summaries and later stopped for exactly those reasons.

Wanna use it to code? Been using it to code and insist it’s been saving you time? There’s a good chance it’s actually been costing you time.

“When developers are allowed to use AI tools, they take 19% longer to complete issues—a significant slowdown that goes against developer beliefs and expert forecasts. This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.”

GenAI gets weaponised by malicious actors, who can produce lies and propaganda, faster. Proponents claim this is a price we have to pay for the technology. But they don’t have the right to decide that for everyone else, especially when they’re constantly ignoring or dismantling their safety teams (as outlined in granular detail in Karen Hao’s Empire of AI).

Makes things cost more

GenAI is expensive to train and not even close to being profitable. These companies need to recoup costs somewhere, and those with other products are doing it through those products. The most infamous example is Microsoft pushing up the cost of Office by at least 30% once it started bundling its AI malware, Copilot.

It doesn’t even let you use Copilot as much as you want; you get AI credits that are consumed when you use AI features. Once you run out, that’s it until you get more next month. So you’re paying that much more for a limited-use product. And if you don’t want it at all, stiff shit: you have to pay the price hike anyway, as there is no longer an option to have Office without Copilot.

Electricity providers are struggling to meet energy demand as it is. With the addition of data centres, electricity prices are expected to go up.

Rots your brain

If you stop using a muscle, it gets weaker. The same is true of your brain. A recent study by MIT found that:

“Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning.”

Even if genAI didn’t have every other issue listed in this article, I would avoid using it for this reason alone. I treasure the way my brain works, especially the way I can make connections between seemingly separate concepts. I don’t want to lose it.

I don’t even want to think about the atrophying effect it’s having on developing brains, or the way its sycophancy is causing real psychological harms.

No use case

There is, as far as I can tell, nothing it does well, as I outlined in “Wrong, but faster”. At best, it could be used to come up with ideas, but those will always be middle-of-the-road, since it spits out the averages of its training data. It will never give you anything original because, by its nature, it cannot. It will only remix the inputs that were stolen from all of us.

This is why so much UI design revolves around bullying people into using the tech. It would get significantly less use otherwise, because it sucks. I recall at my last job, we logged into Gmail one Friday morning to find Gemini had inserted itself into our emails. I was glad to see an option to switch it off… which turned out to be fucking empty.

I had to waste half an hour on the line with support to be graciously granted the option to switch it off.

If it were such a great product, they would have let us switch it off from the get-go, confident that plenty of people would like it and keep it on. But they didn’t, because it’s shit and they know it. The only other person in the office with me that day was also being driven mad by its suggestions.

Fuck all of it

There is no natural force driving this arms race. Tech companies could choose to be principled and slow the fuck down to make it more sustainable. But they’re not, because they don’t give a shit about the environment or the people living in it.

Generative AI is technology that does tremendous harm to society for no benefit that I can see. You can care about people and planet or you can use genAI. You cannot do both, and I am tired of pretending this isn’t the case because people might feel bad. You should, in fact, feel bad for doing a bad thing, and then stop doing the bad thing if it is within your means to do so. Yes, I am judging you.

It’s often taken for granted that companies will always do everything they can to make money, consequences to people and planet be damned. This is true, but it doesn’t have to be. Executives are people who knowingly decide to inflict harm on others and our communities and we should all be enraged that they continue to do so.

We should be equally enraged at governments, another group of people who are choosing to do nothing about all of this, and letting corporations get away with it.

Both of these groups wield a tremendous amount of power, and keep refusing to use it to make the world a better place. Never forget that all of these shit things are a direct result of conscious choices that very powerful people are making. They have the resources to look into things before they enact them, and when they don’t, they should be held accountable. “I didn’t know” is not an excuse when you have billions of dollars and dozens or hundreds of workers at your fingertips.
