Just two months after its launch in November 2022, OpenAI’s ChatGPT reached 100 million active users, igniting a competitive rush among tech companies to build generative artificial intelligence (A.I.) products of their own. Experts have likened the technology’s influence to transformative historical events such as the advent of the Internet, the Industrial Revolution, and even the discovery of fire. Whatever its true significance turns out to be, one undeniable consequence of A.I.’s rapid expansion is its environmental impact, which continues to escalate.
The deployment of A.I. technologies is linked to increased carbon emissions from reliance on non-renewable energy sources, as well as to the consumption of vast amounts of fresh water. The infrastructure behind A.I. adds to this toll through the energy-intensive processes of building and maintaining the necessary hardware. As technology firms race to embed in-demand A.I. tools across sectors ranging from résumé assistance to complex medical procedures, lawmakers, activists, and global organizations increasingly question whether the anticipated benefits will outweigh the technology’s environmental risks.
Currently, users cannot easily assess how an A.I. interaction—such as requesting homework assistance—affects carbon emissions or freshwater resources. Last week in Washington, D.C., Massachusetts Senator Edward Markey highlighted the issue, stating, “The development of the next generation of A.I. tools cannot come at the expense of the health of our planet.” He and other lawmakers introduced a bill that would compel the federal government to assess A.I.’s environmental impact and establish a standardized framework for reporting it in the future. In a similar vein, the European Union recently approved the “A.I. Act,” which mandates that “high-risk A.I. systems,” including those powered by foundation models like ChatGPT, report their energy usage and resource consumption throughout their operational lifecycle. The regulation takes effect next year.
In addition, the International Organization for Standardization (ISO) plans to release guidelines for “sustainable A.I.” later this year, covering metrics such as energy efficiency, raw-material consumption, transportation, and water usage. The aim is to let users make informed choices about the A.I. technologies they consume. For now, the lack of transparency around A.I.’s environmental costs leaves consumers and stakeholders in the dark, particularly about the significant volumes of water needed to cool the data centers that house A.I. systems.
Recent studies, including work by UC Riverside’s Shaolei Ren, estimate that a typical Q&A session with an A.I. model like GPT-3 consumes roughly half a liter of fresh water, though the figure varies with the data center’s location and the size of the model. Much remains unknown, however, about the water consumed by computer-cooling processes. Researchers have also pointed to the difficulty of obtaining reliable data on A.I.’s greenhouse gas emissions, which complicates efforts to devise effective mitigation strategies.
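To give a sense of scale, the per-session estimate above can be turned into a rough aggregate figure. The sketch below is purely illustrative: the half-liter value is the article’s reported estimate, while the daily session count is a made-up assumption, since real usage figures are not public.

```python
# Back-of-envelope water-footprint arithmetic.
# LITERS_PER_SESSION comes from the article (Ren's rough estimate for a
# GPT-3-scale Q&A session); sessions_per_day is a hypothetical assumption.

LITERS_PER_SESSION = 0.5           # article's estimate for one Q&A session
sessions_per_day = 10_000_000      # illustrative assumption, not a real figure

daily_liters = LITERS_PER_SESSION * sessions_per_day
annual_megaliters = daily_liters * 365 / 1_000_000  # liters -> megaliters

print(f"Daily fresh water:  {daily_liters:,.0f} L")
print(f"Annual fresh water: {annual_megaliters:,.0f} ML")
```

Even under these invented usage numbers, the arithmetic shows how a half-liter-per-session figure compounds quickly at internet scale, which is why researchers push for disclosure of the real operating data.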
As A.I. applications continue to proliferate, data centers are projected to consume around 1,000 terawatt-hours of electricity by 2026, roughly the annual electricity consumption of Japan. While A.I. can run on many kinds of devices, the most sought-after applications demand computational resources far beyond those of typical consumer electronics. Large-scale A.I. models must process immense data sets rapidly, a job most efficiently handled by specialized graphics processing units (GPUs) designed for such calculations. Efficiency has improved with the advent of hyperscale data centers, which accommodate more computers and scale operations more effectively.
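One way to make the terawatt-hour projection concrete is to convert it into an average continuous power draw. The conversion below uses only the article’s 1,000 TWh figure and standard unit arithmetic; no additional data is assumed.

```python
# Convert an annual electricity consumption (in TWh) into the equivalent
# average continuous power draw (in GW): energy divided by hours in a year.

HOURS_PER_YEAR = 365 * 24                  # 8,760 hours
annual_twh = 1_000.0                       # article's 2026 projection

average_gw = annual_twh * 1_000 / HOURS_PER_YEAR   # TWh -> GWh, then / hours

print(f"Average continuous draw: {average_gw:.0f} GW")  # about 114 GW
```

The distinction between terawatts (power) and terawatt-hours (energy) is exactly the one addressed in the article’s second correction: 1,000 TWh over a year works out to a continuous draw of roughly 114 GW, not 1,000 TW.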
Current estimates put the number of cloud data centers worldwide at between 9,000 and 11,000, with many more in development. The International Energy Agency (IEA) anticipates that electricity demand from these facilities will double by 2026, raising further questions about the environmental toll of A.I. operations. Notably, the IEA’s projections cover all data center activity, not solely A.I.; attention to A.I.’s outsize energy appetite can obscure the footprint of other data center operations, such as e-commerce and cryptocurrency mining.
Despite the lack of clarity surrounding A.I.’s energy consumption, companies like Google have disclosed that machine learning, which underpins advanced A.I. applications, accounts for less than 15 percent of their overall data center energy usage. At the same time, A.I. could help shrink humanity’s carbon footprint through applications in climate modeling and resource optimization. A.I.-enabled smart homes, for example, could cut household carbon emissions by up to 40 percent. And A.I. projects such as Google’s work on flight-path optimization aim to reduce aviation’s environmental impact by minimizing contrails, which contribute to global warming.
However, the water usage of major tech firms has been on the rise, with Google reporting a 20 percent increase in 2022 compared to the previous year and Microsoft’s consumption rising by 34 percent. This increase in resource consumption has sparked protests in regions like Chile and Uruguay, where plans for new data centers threaten local drinking water supplies. In The Dalles, Oregon, where Google operates multiple data centers, a legal battle ensued over the transparency of the company’s water usage, which reportedly accounts for over a quarter of the city’s total water supply.
Experts argue that a cultural shift is needed within the A.I. development community: the creators of generative A.I. should prioritize transparency about their technologies’ environmental implications rather than focus solely on technical advances. Some researchers, like Dodge, suggest that future A.I. systems could inform users of the water and carbon cost of each request, giving them the insight needed for environmentally conscious decisions. Until then, individual users have little means of gauging the impact of their A.I. consumption. “There’s not much individuals can do, unfortunately,” Ren notes, adding that for now the best approach is to use A.I. services judiciously.
Correction, February 21, 2024: An earlier version of this article incorrectly quoted researcher Dave Patterson as referring to CO₂ emissions from global aviation. Patterson was actually referring to CO₂e (“carbon dioxide equivalent”) emissions, a measurement that includes both CO₂ and other greenhouse gases.

Correction, July 14, 2025: An earlier version of this article incorrectly stated that data centers were projected to consume 1,000 terawatts of electricity in 2026. They were projected to consume 1,000 terawatt-hours.
