US Environmental, Social, and Governance Legal Considerations for AI Companies — Status Quo and Practical Next Steps
Artificial intelligence (AI) has become an indispensable tool, but the technology's rapid advancement has brought a rise in electricity demand, drawing regulatory attention to the natural resources that AI requires. Within this context, companies and policymakers are considering how AI may contribute both to sustainable solutions and to environmental impacts. For companies that have committed to environmental targets and are simultaneously advancing AI and seeking the power to do so, managing both goals can yield a competitive advantage.
State of Play
Scaling the infrastructure that underpins AI will require significant resources, including energy and water. Recent research points to mounting data center power demand during this decade, spurred in part by AI, even as gains in energy efficiency slow. That research projects that data centers could account for 3–4% of global power demand and 8% of US power demand by 2030, with carbon dioxide emissions from data centers potentially doubling by the end of the decade.
While companies are already pursuing solutions to reduce their environmental footprint, AI is nonetheless cited as a factor in rising electricity demand for companies developing the technology. How this demand will translate into absolute carbon emissions is not yet clear, but the question may become especially pressing as some companies approach 2030 deadlines for voluntary commitments.
What Companies Can Do and Are Doing
Leading companies are proactively pursuing practical and technical approaches, including augmenting portfolios of carbon-free energy sources, entering into power purchase agreements to increase the availability of clean energy, strategically locating data centers in low-carbon geographies, and leveraging and developing new methods to improve data center efficiency. Companies are also optimistic about how AI may advance sustainability-focused solutions.
From a governance perspective, companies are referencing resources like the AI Risk Management Framework, which is promulgated by the National Institute of Standards and Technology (NIST) — an agency within the US Department of Commerce. The framework outlines actions and outcomes that organizations may consider, and we have seen companies specifically highlight their intention to implement Measure 2.12, whereby “Environmental impact and sustainability of AI model training and management activities … are assessed and documented.” For more information on how companies are considering AI governance at the board level, see this Latham article.
Finally, the environmental implications of AI, including what growing electricity demand means from an emissions perspective, will also be contingent on policy decisions. In this context, companies are publicly advocating for policy frameworks that balance concerns about environmental impact and resource use with the potential for AI to drive innovations in sustainability.
Developments at the US Federal Level
In February 2024, Congressional Democrats introduced the Artificial Intelligence Environmental Impacts Act of 2024 (H.R. 7197 and S. 3732). If passed, the bill would require the Administrator of the US Environmental Protection Agency to conduct a study on the environmental impacts of AI, and the Director of NIST to convene stakeholders and develop a voluntary reporting system for such impacts. Although the bill acknowledges positive environmental uses for AI, it also stresses potential negative impacts, such as increases in energy consumption, resource- and energy-intensive manufacturing processes, and electronic waste. The bill remains in committee at the time of writing and was not mentioned in the recent roadmap for AI policy issued by the Bipartisan Senate AI Working Group.
In April 2024, the White House reported on progress following its executive order (EO) to coordinate activity on AI across the federal government, detailed here and here. Among other items, the announcement highlighted initiatives undertaken by the US Department of Energy, including identifying in a new report how AI can help modernize the electric grid and support a clean energy economy, and piloting AI tools to enhance permitting and improve clean energy infrastructure siting. In tandem, the US Department of Energy announced and described actions broadly focused on evaluating the energy opportunities and challenges of AI, spurring innovation to manage its increasing energy demand, and driving clean energy deployment.
Pursuant to the EO, NIST also published draft guidance for generative AI that supplements the AI Risk Management Framework referenced above. The profile identifies 12 risks specifically presented by generative AI, including environmental risks, described as "Impacts due to high resource utilization in training GAI models, and related outcomes that may result in damage to ecosystems." The guidance provides more than 400 actions that can assist organizations. Those addressing environmental risks, which also fall primarily under Measure 2.12, include, for example, "Document anticipated environmental impacts of model development, maintenance, and deployment in product design decisions," and "Verify effectiveness of carbon capture or offset programs, and address green-washing risks." The guidance is expected to be finalized in July.
Looking Forward
We expect more focus on this topic and are paying close attention to how US policymakers take up these questions. As the landscape evolves, companies should continue to evaluate and recalibrate their targets, stay apprised of rulemaking on the energy and environmental implications of AI at the US federal and state levels, and monitor global regulations that require energy transition plans or push for net zero targets.
Indeed, questions at the intersection of AI and ESG speak to business strategy, material risks, and companies’ social license to operate.
Latham & Watkins is actively watching developments in this space.