
AI and ESG: How Companies Are Thinking About AI Board Governance

April 19, 2024
Companies that use AI must carefully consider how to manage board oversight and disclosure risks.

Questions around the governance of artificial intelligence (AI) have come to the fore via recent, prominent US shareholder proposals and first-of-their-kind enforcement actions, which we describe in depth here. In this context, companies seeking to deploy AI, or otherwise affected by it, will have to thoughtfully consider how to manage such risks in order to responsibly realize the benefits of this emerging technology. Companies, both public and private, are increasingly raising an important question: How should the board of directors oversee AI?

In his remarks this month at The SEC Speaks in 2024, Johnny Gharib, Deputy Chief Risk Officer in the Office of Risk and Strategy of the SEC's Division of Corporation Finance, said that the number of annual reports filed by large accelerated filers that mention AI has increased significantly, with a majority now doing so. The disclosures, which extended across industries, appeared most commonly in the risk-factor section, followed by the business section and management's discussion and analysis (MD&A); a notable portion of filings contained disclosure in both the risk-factor and business sections. Noting that the current disclosure regime does not expressly address AI, Gharib added that existing rules may still require companies to describe related items, including the role of the board in overseeing risk as well as disclosure controls and procedures.

State of Play

According to a recent study of proxy statements filed from September 2022 to September 2023, approximately 15% of S&P 500 companies made some disclosure regarding board oversight of AI, defined as board or committee responsibility for AI, any director expertise with AI, or an AI ethics board or similar governance structure. However, the analysis determined that only 1.6% of companies disclosed explicit board or committee oversight of AI, and only 0.8% disclosed the presence of an AI ethics board, while 13% of companies identified one or more directors with AI expertise. The IT sector led with 38% of companies providing some disclosure, followed by the healthcare sector at 18%.

While disclosure was rare among the proxy statements analyzed, the study found that when responsibility for overseeing AI is delegated to a committee, companies are opting to expand the responsibilities of existing committees rather than create new ones. For example, some companies are extending the audit committee's existing oversight of technology risks to include AI, and companies that already have a technology committee are adding AI to its purview. Other companies, as the study notes, are assigning AI oversight to committees focused on environmental, social, or public policy matters.

We have observed the same trend. For example, we have seen charters of environmental, social, and public policy committees (or their equivalents) list responsible AI among the key matters to be reviewed for the board and management, noting in particular the public policy and social implications of technologies like AI. Companies are also introducing AI use policies that outline an ethical approach to the technology for their business.

Guidance and Standards

Government entities as well as industry associations have started to make available resources that can help guide companies and boards. These include the AI Risk Management Framework released in January 2023 by the National Institute of Standards and Technology (NIST), a part of the US Department of Commerce. The framework is intended to be a voluntary resource for organizations across sectors "designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems."

We expect additional guidance relevant to businesses to follow from the wide-ranging executive order (EO) that President Biden issued in October 2023 to create a coordinated government approach to AI. As we describe in more detail here, the EO directs many federal agencies to study AI and make policy recommendations, enforce existing consumer protection laws, or engage in new rulemaking. The EO also directs the Secretary of Commerce, acting through the Director of NIST, to establish "guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems."

What’s Next?

As companies increasingly use AI, and amid scrutiny from the SEC and other US and international regulators and governments, questions about how the board oversees risk and guides strategy, and how companies disclose that oversight, will only grow in importance.

Latham & Watkins is closely monitoring developments in this space.
