Virtualis

AI’s effect on the natural world: A toxic relationship?

Sophia Welch

February 10, 2026

Introduction

“The AI boom has caused the same amount of CO2 to be released as the entire city of New York.” This was a headline from a major news outlet just last week. A gripping title that is sure to have left many individuals shocked, alarmed and, above all, confused. How is it that software, especially software so relatively new to our world, has had such a substantial impact on our environment? This question is magnified by the attention that environmental impact currently receives across all industries. With climate change no longer just a “theoretical risk” but very much a “failure underway”, and with the concurrent mass growth of generative AI, upon which individuals and companies alike are becoming increasingly dependent, the picture is painted clearly: something has got to give. So, how can we ascertain the real effect that AI is having on the natural world? This question is more convoluted than one might first expect. This article will therefore explore the actual energy consumption of AI and data centres, the ways AI could in fact positively affect the environment, sustainable equity, and the improvements needed to reconcile this new technology with the protection of our planet and with the legally binding instruments that tech companies have agreed to.

 

AI’s Actual Use

Firstly, it should be stated from the outset that it is incredibly hard to obtain accurate figures for the energy use and carbon footprint of AI software. This boils down to a number of different factors, which will be looked at in turn. To start, data centre operators (data centres being the facilities in which AI computers are housed) do not publicly disclose their required inputs. Therefore, we must start by looking at the estimates given by Big Tech companies themselves. For instance, Google claimed in a 2025 report that its AI model Gemini uses, per prompt, 0.24 watt-hours of energy, emits 0.03 grams of CO2 and consumes about five drops of water. From the angle that Google presents, things do not seem so bad! Especially when it uses the handy analogy that the per-prompt energy consumption is equivalent to watching TV for less than nine seconds. But does framing AI’s energy use and carbon footprint ‘per prompt’ give an accurate picture of the overall numbers? The answer is a resounding no. Many critics of tech companies’ self-analyses have pointed out that measuring environmental impact on the basis of one prompt is highly misleading, because most single prompts actually trigger many further queries. AI reasoning encourages the software to break a question down step by step, prompting itself with further questions, so basing energy consumption on a single query understates the true total. Further, presenting energy use from the perspective of one prompt leaves out the extensive training that all AI models must go through and its consequential environmental effects. This is especially significant considering that training can account for up to 50% of an AI model’s resource use. Some of the biggest AI companies, such as OpenAI, do not share their training information at all. Therefore, it is clear that the very way in which the biggest tech companies present their data can be seriously misleading.
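To see why per-prompt framing can understate totals, consider a minimal back-of-the-envelope sketch in Python. Only the 0.24 watt-hour per-prompt figure comes from Google’s report; the daily query volume and the reasoning-step multiplier below are purely illustrative assumptions, not reported data.

```python
# Back-of-the-envelope scaling of a per-prompt energy figure.
# Only PER_PROMPT_WH is a reported number (Google, 2025); the
# query volume and reasoning multiplier are assumptions chosen
# for illustration.

PER_PROMPT_WH = 0.24           # reported median energy per Gemini prompt (Wh)
DAILY_PROMPTS = 1_000_000_000  # assumed: one billion prompts per day
REASONING_MULTIPLIER = 5       # assumed: hidden step-by-step queries per prompt

def daily_energy_mwh(per_prompt_wh, prompts, multiplier=1):
    """Total daily energy in megawatt-hours (1 MWh = 1,000,000 Wh)."""
    return per_prompt_wh * prompts * multiplier / 1_000_000

naive = daily_energy_mwh(PER_PROMPT_WH, DAILY_PROMPTS)
with_reasoning = daily_energy_mwh(PER_PROMPT_WH, DAILY_PROMPTS, REASONING_MULTIPLIER)

print(f"Naive per-prompt total: {naive:,.0f} MWh/day")
print(f"With reasoning steps:   {with_reasoning:,.0f} MWh/day")
```

Under these assumed inputs the aggregate figure grows fivefold once hidden reasoning queries are counted, and neither figure includes training, which is the point critics make about per-prompt reporting.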

Moreover, one of the most convoluted measurements of AI’s resource use is water consumption. Company-wide data suggests that the AI water footprint could be in the range of the global annual consumption of bottled water. This is another shocking statistic, but again we must ask whether it is accurate. The main difficulty in measuring this factor is the difference between direct and indirect water consumption. Computers running AI systems within data centres become extremely hot as a result of energy expenditure. Therefore, in order to cool them down and ensure their smooth running, it is necessary to pump water and run air conditioning throughout the data centres. This is the direct water consumption of AI. The indirect water consumption lies in the generation of electricity for the data centres, for example by hydroelectric and thermoelectric power plants. Herein lies the problem. Tech companies are very inconsistent in the way they measure their water consumption. Most, like Google, choose not to include indirect water consumption, stating that the company “does not fully control the water consumption in electricity generation.” At present, Meta is the only company incorporating indirect water use into its metrics. To highlight the importance of this difference in reporting, Meta’s indirect water use is substantially higher than its direct water consumption. This implies that many companies would report higher water consumption if reporting methodologies were harmonised. However, Google could be right to exclude indirect consumption from its metrics, because water used in hydroelectric and thermoelectric power plants is mostly returned to rivers and oceans. It could therefore be argued that the environmental impact of these power plants, in terms of water, is not significantly harmful. Taking this into consideration, the lack of indirect consumption reporting seems less of a problem.
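The reporting gap described above can be sketched numerically. In this illustrative snippet, the direct figure echoes Google’s roughly five drops (about 0.26 mL) per prompt, while the indirect figure is an invented placeholder; the point is only that the choice of methodology, not the underlying physical water use, drives the reported number.

```python
# Illustrative comparison of two water-reporting methodologies.
# DIRECT_ML loosely mirrors Google's "about five drops" per prompt;
# INDIRECT_ML is an invented placeholder, not a reported figure.

DIRECT_ML = 0.26   # on-site cooling water per prompt (mL)
INDIRECT_ML = 2.0  # assumed: water consumed generating the electricity (mL)

def reported_footprint(direct_ml, indirect_ml, include_indirect):
    """Per-prompt water footprint (mL) under a given reporting methodology."""
    return direct_ml + (indirect_ml if include_indirect else 0.0)

direct_only = reported_footprint(DIRECT_ML, INDIRECT_ML, include_indirect=False)
with_indirect = reported_footprint(DIRECT_ML, INDIRECT_ML, include_indirect=True)

print(f"Direct only:       {direct_only:.2f} mL/prompt")
print(f"Direct + indirect: {with_indirect:.2f} mL/prompt")
```

With these assumed inputs, the two methodologies yield figures differing by nearly an order of magnitude for the same data centre, which is why harmonised reporting matters.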
However, it does expose the issue that runs across all tech company energy reports: the lack of consistency. This, therefore, is the biggest concern when questioning the accuracy of AI’s environmental impact, for how can proper safeguards be put in place when there is no accurate picture of actual energy usage and thus of environmental harm?

 

AI to help the environment and equality?

Whilst the true metrics of consumption and harm are vague, could the harmful effects of AI on the environment be offset by the benefits AI systems could bring in producing effective solutions to climate change and pollution concerns? Using a case study of illegal mining in Ghana, Nti has sought to exemplify the ways in which AI can be used to reverse water pollution. He gives examples of how AI models can support processes such as ion exchange and chemical precipitation to help purify contaminated water for local residents and the surrounding ecosystem. This example provides hope for the multitude of ways in which AI can be used to meaningfully help the environment and communities.

In its 2025 Environmental Sustainability Report, Microsoft listed a number of ways in which it was using AI to address the changing climate and to help us better understand our environment. It cited the use of Aurora, its AI foundation model for environmental forecasting, which helps paint a more accurate picture of changing weather patterns and allows for more timely interventions against severe weather impacts. It also describes its collaboration with the University of Michigan on a project using generative AI to find an alternative to the environmentally harmful vanadium central to mature flow batteries. Microsoft has also been using AI to predict energy demand in communities and to help communities participate in power purchase agreements. But this last endeavour raises another important concern within the use of AI for environmental change: equality.

Gaines has stated that equity is one of the five pillars of the theoretical framework for energy sustainability governance. This includes intra-generational equity, meaning equality in energy consumption and its effects across social classes, marginalised groups and countries. We know that AI bias (the occurrence of skewed results due to human biases and prejudice embedded in an AI model’s training) has already had dire consequences in real-world interventions. For example, computer-aided diagnosis systems have been found to return lower-accuracy results for African-American patients than for white patients. So can we truly rely on AI, as Microsoft is planning to do, to create widespread equality concerning energy demand?

AI has already been seen to produce extreme distributive injustices as a result of the new infrastructure built to accommodate the growing demand for data centres. Elon Musk’s xAI recently built a supercomputer, “Colossus 1”, in South Memphis, USA. The computer required multiple methane gas generators, which release harmful gases such as nitrogen oxides into the surrounding air. The computer is situated in a historically Black community, with communities such as this now being dubbed “sacrifice zones”. A disconcerting title, especially when considering this quote from local representative Justin J. Pearson: “if you are African American in this country, you’re 75% more likely to live near a toxic hazardous waste facility.” Thankfully, the electricity generation for Colossus 1 has recently been found unlawful, with the Environmental Protection Agency confirming that air permits are still required even when generators are portable. Yet this case highlights the harmful ways in which tech companies are willing to cut legal corners in order to hasten the development of AI. We have also seen the risks of placing equality solutions in the hands of AI itself due to its potential bias. Still, could benefits such as the water purification case study in Ghana and Microsoft’s current sustainability projects provide enough hope for an environmentally friendly future for AI? This is still unclear. Kate Crawford, an AI researcher and professor, has said that AI could still benefit societies as a whole, “but currently it is borrowing against the future.” This implies that whatever environmental solutions AI could provide in ensuing decades are, at present, outweighed by the significant harm being done to the planet today.

Unified Governance

When asked whether the use of electricity by AI was sustainable, Amanda Peterson Corio (Google’s global head of data centre energy) replied that “it’s a challenge”, citing the need to find ways to expand whilst also meeting climate goals. Corio’s statement cements the fact that the sustainable growth of AI is becoming a real dilemma, especially when considering the climate goals under the Paris Agreement. The legally binding Paris Agreement commits its parties to limiting global warming to well below 2, and preferably to 1.5, degrees Celsius above pre-industrial levels. Although many of the Big Tech companies are based in the US, a country which has recently left the Paris Agreement for the second time, companies such as Google and Microsoft pledged to continue following the targets of the agreement as part of former NYC mayor Bloomberg’s “We Are Still In” campaign.

The US remains the front-runner in AI development, and a difference in regulatory methods could explain why. In 2024, the US produced 40 notable AI models compared to Europe’s three. The EU’s AI Act is regarded by many as the pioneer of regulation for ethical and responsible AI use. Juxtaposed with this is the USA’s legislation surrounding AI. Starting with the National Artificial Intelligence Initiative Act (2020), President Trump’s administration set a theme of decidedly more “hands-off” AI governance, aiming to create more of a free market. This continued in 2025 with the executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” which aimed to revoke many policies or directives that acted as a “barrier to innovation.” This approach to AI governance, although seemingly effective at fostering the development of new technology, is also thoroughly light-touch and thus may leave potential harms to the environment unrestricted. Although states like California have laws such as AB 2013 on training data transparency, which can aid the understanding of energy consumption, the federal structure of the US makes any unified governance of AI difficult. This unregulated, disunified structure of governance in the biggest exporter of AI in the world can be deemed one of the biggest drivers of AI’s negative effect on the environment.

Conclusion

What must be done to create a more sustainable future for AI? First, it seems evident from the discussions above that companies need a more streamlined, universal methodology for measuring the energy use and carbon footprint of their AI models: one which would include a harmonised method of water consumption measurement, the scrapping of “per prompt” examples and the inclusion of training metrics. This would allow for more accurate estimates of environmental impact. A further improvement could be the mandatory use of AI to find ingenious solutions to climate issues. If a legal framework were put in place requiring companies to offset any environmental impact with such solutions, this could tip the balance and turn AI from an environmental foe into a friend. Moreover, it is crucial that the infrastructure used to meet the growing demand for AI is safe and does not carry with it distributive injustices of the kind found in Memphis. Google and Microsoft seem to be at the forefront of this endeavour, with Google noting in its report that it conducts “watershed health assessments” in order to limit water use in high-stress locations. If, as the likely trend in US legislation seems to suggest, AI continues to be a largely unregulated field, it is up to tech companies to form unified methodologies and to adhere to the international agreements they have signed up to. With improvements such as these, it is possible that AI can pose less of a threat to the future of our world and instead act as one of the greatest players in its survival.






Partners

KVK number: 86554336

© 2026 DSLA All rights reserved.
