Oxford University Press's
Academic Insights for the Thinking World


The real existential threat of AI

How does Artificial Intelligence (AI) affect climate change? This is one of the unprecedented questions AI raises for societies, challenging traditional notions of fairness, trust, safety, and environmental protection. While AI, to some extent, follows a trajectory similar to past disruptive technologies, such as the steam engine or the computer, the scope and depth of its influence demand a thorough examination of its impacts across society. The transformative potential and the complex dilemmas AI introduces become especially clear when we consider it in the context of climate change.

While much ink has been spilled on the “existential risk” AI may or may not pose to humankind via rogue actors or rogue AI—and this is definitely an area for regulation and foresight—the “real existential threat of AI” is its effect on the climate and the environment. Research indicates that AI and Information and Communication Technology (ICT) can both mitigate and contribute to climate change.

On the positive side, AI can reduce energy, water, and material consumption through optimization in project planning and implementation. For example, AI optimization can significantly cut the energy needed for cooling data centers. Additionally, AI can enhance the efficiency of low-carbon energy systems and aid in the integration of renewable energy sources. These applications demonstrate AI’s potential to contribute positively to environmental sustainability and climate change mitigation.

Conversely, AI and ICT are notable contributors to climate change, with ICT alone responsible for up to 3.9% of global greenhouse gas emissions, compared to approximately 2.5% from global air travel. The training and deployment of AI models, especially large ones, are particularly resource-intensive, consuming substantial amounts of energy and water. Recent studies reveal that creating a single image with a leading image-generation AI consumes as much energy as charging a standard smartphone. And within a single year, the use of a large AI model typically consumes far more energy than its training did. By 2027, AI’s total energy consumption is projected to rival that of countries such as Argentina or the Netherlands. Indeed, Google has recently announced that the AI-driven increase in energy demand for its data centers puts its entire 2030 “carbon zero” target at risk.
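The claim that a year of usage can outweigh training rests on simple arithmetic: a one-off training cost is compared against a small per-query cost multiplied by enormous query volumes. The sketch below illustrates that logic; all figures (training energy, per-query energy, query volume) are hypothetical round numbers chosen for illustration, not measurements of any real model.

```python
# Back-of-envelope sketch: why inference energy can dwarf training energy.
# All numeric inputs below are hypothetical, for illustration only.

def inference_energy_mwh(energy_per_query_wh: float,
                         queries_per_day: float,
                         days: int) -> float:
    """Total inference energy in MWh for a given usage pattern."""
    return energy_per_query_wh * queries_per_day * days / 1e6  # Wh -> MWh

TRAINING_ENERGY_MWH = 1_000      # assumed one-off training cost
ENERGY_PER_QUERY_WH = 3.0        # assumed energy per user query
QUERIES_PER_DAY = 10_000_000     # assumed global daily usage

yearly = inference_energy_mwh(ENERGY_PER_QUERY_WH, QUERIES_PER_DAY, 365)
print(f"Training (one-off): {TRAINING_ENERGY_MWH:,.0f} MWh")
print(f"Inference (1 year): {yearly:,.0f} MWh")
# Under these assumptions, one year of inference uses roughly
# ten times the energy of training.
```

The point of the sketch is structural, not numerical: because inference cost scales with usage while training is paid once, any sufficiently popular model will eventually spend most of its lifetime energy on inference.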

These statistics highlight the pressing need for regulatory measures to ensure that AI and ICT practices become more environmentally sustainable. While there is some leeway to interpret the GDPR in an environmentally aware fashion, the most obvious sources for immediately tackling the climate effects of AI are the EU AI Act and the Biden Executive Order on AI. The latter, however, looks primarily at the potential of AI to help with “resilience against climate change impacts and building an equitable clean energy economy for the future,” not at the energy costs of AI itself. The AI Act, in turn, focuses on reporting energy consumption, but only for development and training, not for the actual use of the model (inference), which generates by far the most emissions. In addition, the AI Act requires providers of certain AI models (in high-risk sectors and for large foundation models like ChatGPT) to assess and mitigate risks to fundamental rights. From a legal perspective, environmental protection is not a fundamental right in the EU. Yet the AI Act explicitly, and wrongly, names environmental protection as a key example of the “fundamental rights” the Act protects.

Significantly, this language arguably introduces a ‘Trojan horse’ into the Act: an element that smuggles environmental protection into the AI Act’s risk assessment. While an explicit Sustainability Impact Assessment for AI did not make it into the final version of the Act, the reference to fundamental rights implicitly introduces a hard requirement to take environmental effects, including energy and water consumption, into account for both training and usage (see also the Fundamental Rights Impact Assessment requirement for deployers).

Finally, one might also consider including AI processes, and data centers in particular, in emissions trading systems (ETS) to cap total energy consumption. These systems would have to be adapted beyond their current structure: the rationale for including AI processes or data centers in an ETS would not be that they directly emit large amounts of GHG. Indeed, at least some of the energy used by AI is already subject to carbon pricing schemes (e.g., at the source, in carbon-emitting power plants). However, many data centers are located in countries that rely significantly on non-renewable energy yet lack any effective carbon pricing strategy; this carbon leakage might justify additional caps or constraints on data centers, beyond existing ETS.

Overall, new approaches are needed to responsibly and effectively tackle the dual societal transformations of AI and climate change, both on a national and international scale. Interdisciplinary research is indispensable in this endeavor: it can help us chart this little-known territory, and map a path to a future in which pressing normative and societal questions are addressed based on scientific evidence and cross-disciplinary reflection. As we navigate the AI-driven future, it is imperative to engage in these thoughtful dialogues and proactive policymaking to harness the benefits of AI while addressing its challenges responsibly.

Featured image by Markus Distelrath via Pexels.
