A recent open letter from scientists and technologists calling for a six-month pause in AI experimentation warns that AI systems “can pose profound risks to society and humanity.” These risks range from “flooding the internet with disinformation and automating away jobs to more catastrophic future risks out of the realms of science fiction,” reports NPR.
AI systems can have serious environmental impacts as well, according to the AI Index Report from the Stanford Institute for Human-Centered Artificial Intelligence. For example, training the BLOOM model (at the lower end of the energy-consumption scale) “consumed enough energy to power the average American home for 41 years,” the report says.
Also, according to Google researchers, “artificial intelligence made up 10 to 15 percent of the company’s total electricity consumption, which was 18.3 terawatt hours in 2021,” says this DataCenter Knowledge (Bloomberg) article. “That would mean that Google’s AI burns around 2.3 terawatt hours annually, about as much electricity each year as all the homes in a city the size of Atlanta.”
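The arithmetic behind that estimate is straightforward: AI's 10 to 15 percent share of Google's reported 18.3 terawatt hours works out to roughly 1.8 to 2.7 TWh, with a midpoint near the quoted 2.3 TWh. A quick check, using only the figures from the article:

```python
# Figures from the DataCenter Knowledge (Bloomberg) article:
# Google's 2021 electricity use was 18.3 TWh, with AI at 10-15% of it.
total_twh = 18.3
low, high = 0.10 * total_twh, 0.15 * total_twh
midpoint = (low + high) / 2
print(f"AI share: {low:.2f}-{high:.2f} TWh (midpoint ~{midpoint:.1f} TWh)")
```

The midpoint of that range is about 2.3 TWh, matching the article's "city the size of Atlanta" figure.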
In this article, we’ll look at additional facts to better understand the environmental impact of AI.
Climate Costs
In terms of energy use and related carbon emissions, the AI Index Report noted that “many factors determine the amount of carbon emissions emitted by AI systems, including the number of parameters in a model, the power usage effectiveness of a data center, and the grid carbon intensity.”
Of the language models compared in the report, “GPT-3 released the most carbon, 1.4 times more than Gopher, 7.2 times more than OPT, and 20.1 times more than BLOOM.”
“Despite the stark differences in energy consumption,” says Mack DeGeurin, “three of the four models (excluding DeepMind’s Gopher) were trained on roughly equivalent 175 billion parameters.” OpenAI’s GPT-4 is a different story, however. “OpenAI won’t say how many parameters its newly released GPT-4 is trained on, but given the huge leaps in data required between previous versions of the model, it’s safe to assume GPT-4 requires even more data than its predecessors.”
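The factors the report lists (training energy, data-center power usage effectiveness, and grid carbon intensity) combine multiplicatively in a common back-of-the-envelope estimate. A minimal sketch of that calculation, using illustrative inputs that are assumptions rather than figures from the report:

```python
def training_emissions_tco2(energy_mwh: float, pue: float,
                            grid_kg_per_kwh: float) -> float:
    """Rough training-emissions estimate: IT energy (MWh) scaled by
    data-center overhead (PUE), then by the grid's carbon intensity
    (kg CO2 per kWh), returned in tonnes of CO2."""
    kwh = energy_mwh * 1000            # MWh -> kWh
    kg_co2 = kwh * pue * grid_kg_per_kwh
    return kg_co2 / 1000               # kg -> tonnes

# Illustrative values only (not from the AI Index Report): 1,000 MWh of
# training energy, a PUE of 1.2, and a grid at 0.4 kg CO2 per kWh.
print(training_emissions_tco2(1000, 1.2, 0.4))  # ~480 tonnes of CO2
```

The same energy budget on a low-carbon grid (say, 0.05 kg CO2/kWh) would yield roughly an eighth of those emissions, which is why the report flags grid carbon intensity alongside model size.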
Transparency Needed
Such lack of transparency obscures the scope of the impact. As the DataCenter Knowledge article states, “the sector is growing so fast — and has such limited transparency — that no one knows exactly how much total electricity use and carbon emissions can be attributed to AI.”
Accurate estimates depend on other factors as well. “The emissions could also vary widely depending on what type of power plants provide that electricity; a data center that draws its electricity from a coal or natural gas-fired plant will be responsible for much higher emissions than one that draws power from solar or wind farms,” the article says.
Although the training of AI models involves huge upfront power costs, the article states, “researchers found in some cases it’s only about 40 percent of the power burned by the actual use of the model.” Those researchers also point out that publishing emissions data could spur competition around consuming less power and lead to more accurate reporting. “If the whole ML field were to adopt best practices, total carbon emissions from training would reduce,” they predict.
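That 40 percent figure implies that ongoing use of a model can consume about 1.5 times the energy of training it. A quick sanity check on the split (the 40 percent share is the only input taken from the article):

```python
# If training is ~40% of a model's lifetime power (per the researchers
# quoted above), inference/use accounts for the remaining ~60%.
training_share = 0.40
use_share = 1 - training_share
ratio = use_share / training_share
print(f"Use burns ~{ratio:.1f}x the training energy")
```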
Other Takeaways
The 386-page AI Index Report covers an array of topics around AI growth, including research and development trends, financial costs, AI education in schools, and public opinion.
Here are a few highlights from the report:
- Since 2011, the total number of AI-related GitHub projects has steadily increased, from 1,536 in 2011 to 347,934 in 2022.
- As of 2022, 24.2 percent of GitHub AI projects were contributed by developers in India. The next most represented geographic areas were the European Union and the United Kingdom (17.3%), and then the United States (14.0%).
- The share of American GitHub AI projects has been declining steadily since 2016.
- In terms of global sentiment toward AI, a 2022 IPSOS survey said a large majority of Chinese respondents (78%) “agreed with the statement that products and services using AI have more benefits than drawbacks.” Respondents from Saudi Arabia (76%) and India (71%) were next, while only 35 percent of Americans surveyed agreed that AI products and services had more benefits than drawbacks.
Clearly, rapid AI growth presents many challenges and questions relating to ethics, privacy, disinformation, human rights, and environmental concerns. This article by Josh Dzieza, for example, looks at the massive amount of low-paid human labor involved in labeling data to train AI.
Additionally, following the petition for an AI pause mentioned previously, UNESCO has issued its own call for all countries “to fully implement its Recommendation on the Ethics of Artificial Intelligence immediately.” The recommendation provides guidance on how to maximize the benefits of AI and reduce the risks it entails.
Learn More
- Addressing Global Risks with the Power of Open Source — FOSSlife
- AI programs consume large volumes of scarce water — UC-Riverside News
- Artificial Intelligence’s Environmental Costs and Promise — Council on Foreign Relations
- How to Fight Climate Change with Open Source — FOSSlife
- Open Source Projects to Help Measure and Manage Energy Use — FOSSlife
- The Staggering Ecological Impacts of Computation and the Cloud — MIT Reader