Artificial intelligence (AI) has already begun transforming a wide range of industries, including entertainment, advertising, e-commerce, education, finance, and healthcare. In an effort to ensure that AI is developed and used safely and responsibly in the United States, President Biden issued an Executive Order on October 30, 2023, directing the “most sweeping actions ever taken to protect Americans from the potential risks of AI systems,” according to the White House.

What does the President mean when referring to “AI”? The Order explains that “AI” includes any “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” The Order also covers: “AI models” (components of a system that implement AI technology and use computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs); “AI systems” (any data system, software, hardware, application, tool, or utility that uses AI); and “generative AI” (AI models that emulate the characteristics of input data to generate synthetic content, including images, videos, audio, text, and other digital content). In short, the term is meant to encompass the full spectrum of artificial intelligence offerings, from decision-making algorithms to content-generating applications such as large language models and beyond.

The Order also addresses a number of issues raised by the rapid development of AI technology, including the “most pressing security risks”: those related to biotechnology, cybersecurity, critical infrastructure, and national security. It also sets forth the Administration's eight major policy goals and priorities: make AI safe and secure, promote innovation and competition, support American workers, advance equity and civil rights, protect consumers from fraud and discrimination, protect Americans’ privacy, hire and train public service-oriented AI professionals, and continue America’s global leadership in technological progress. These goals largely fall into two distinct policy priorities: consumer protection (including privacy and discrimination protections) and championing American innovation and competition.

CONSUMER PROTECTION 

  • Safety and Security:  

The Order contains broad directions regarding the safety and security of U.S. consumers, such as requiring that “developers of the most powerful AI systems” share their safety test results with the U.S. government. It is unclear what qualifies as “the most powerful AI systems,” but, at a minimum, the Order requires that anyone developing “a foundational model that poses a serious risk to national security, national economic security, or national health and safety” notify the federal government when training the AI model and share the results of safety testing to ensure that the system is “safe, secure, and trustworthy” before making the model public.

The Order also seeks to establish guidelines and best practices for the development and deployment of safe, secure, and trustworthy AI, including the development of a companion resource to the AI Risk Management Framework (NIST AI 100-1) addressing generative AI. The Administration also plans to launch an initiative to create benchmarks and auditing capabilities for AI generally, with a focus on high-risk areas such as cybersecurity and biosecurity.

Additionally, the Order seeks to address issues of consumer fraud and disinformation by establishing best practices for detecting AI-generated content and authenticating official government content. The Department of Commerce will develop guidance for content authentication and watermarking to label AI-generated content for use by federal agencies. This labeling is expected to “set an example for the private sector.” 

The Order’s focus on safety extends beyond the U.S.: it directs the accelerated development and implementation of AI standards with international partners to ensure that “technology is safe, secure, trustworthy, and interoperable” on a global scale.

  • Supporting Workers: 

The Order also addresses the potential displacement of workers by AI, requesting a report from the Secretary of Labor that assesses how federal programs could be used to respond to AI-related workplace disruptions and identifies options for supporting affected workers. Here, again, the Administration expresses concern with a range of potential consequences of AI development and calls for best practices that address equity, health, and safety in the workplace. Specifically, those practices are meant to address the “implication for workers of AI related collection and use of data.”

  • Equity and Civil Rights: 

Much of the recent discourse regarding AI has focused on bias and discrimination, including in the employment, consumer financial, and housing contexts. To address these concerns, the Order requires training for the Department of Justice and federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI. Specific federal agencies are also directed to analyze the use of AI in their sectors and, as appropriate, require that regulated entities use AI and other tools to ensure compliance with federal law. For example, in the housing and consumer financial markets, the Federal Housing Finance Agency and the Consumer Financial Protection Bureau are encouraged to have their regulated entities evaluate underwriting models for bias or disparities affecting protected groups and to evaluate automated processes in ways that minimize bias. Additionally, the Order calls for guidance on the use of tenant screening systems, which may violate federal law when they lead to discriminatory outcomes.

It is worth noting that various federal agencies have already published guidance regarding the enforcement of individual civil rights in the AI space. Organizations that operate in these sensitive areas, or that collect sensitive data, should be particularly careful in their practices and refer to the existing guidelines when processing such data. 

  • Privacy and Other Protections: 

Privacy protections are clearly a primary emphasis for the Administration. Of particular concern is the potential for increased privacy violations arising from the unregulated training of AI systems. To address this, the Order calls for the evaluation of agency standards associated with the collection, processing, maintenance, and use of commercially available information. In addition, the White House’s press release explicitly calls for the passage of bipartisan data privacy legislation. Although such a law has been a frequent topic of discussion in Washington in recent years, there has been little progress toward comprehensive federal privacy legislation since the American Data Privacy and Protection Act was proposed in 2022.

The Order also directs the strengthening of privacy-preserving research and technologies through the funding of a Research Coordination Network and the development of guidelines from federal agencies to evaluate the effectiveness of privacy-preserving techniques.

Additionally, the Order discusses a variety of other actions meant to “protect American consumers from fraud, discrimination, and threats to privacy.” The Administration seeks to deploy AI in myriad ways, such as using predictive and generative AI in healthcare delivery and financing, while simultaneously conducting safety monitoring of AI-enabled technologies in these spaces. Other key industries include transportation and education: the Secretaries of Education and Transportation are tasked with creating guidance and policies to ensure the safe and nondiscriminatory use of AI while minimizing any adverse impact on underserved communities.

ADVANCING INNOVATION AND COMPETITION

  • Innovation and Competition:

The Biden Administration seeks to strengthen public-private partnerships that advance innovation, commercialization, and risk-mitigation methods for AI, particularly through increased funding for AI research and potential updates to patent eligibility guidance to address AI and other emerging technologies. The Administration is especially interested in using AI to further its agenda in areas such as healthcare, veterans’ affairs, and climate change.

The AI marketplace is also of interest to the Administration. The Order tasks the Federal Trade Commission with ensuring “fair competition in the AI marketplace” and that “consumers and workers are protected from harms that may be enabled by the use of AI.” At the same time, the Administrator of the Small Business Administration will help establish one or more Small Business AI Innovation and Commercialization Institutes that provide support, technical assistance, and other resources to small businesses seeking to innovate, commercialize, scale, and otherwise advance the development of AI. The Administration is likely responding to the growing concentration of AI capabilities in a handful of large technology companies.

  • Advancing Government Use of AI: 

The Director of the Office of Management and Budget will convene an interagency council to coordinate AI development and usage across government agencies with the goal of issuing guidance to agencies to “strengthen the effective and appropriate use of AI, advance innovation, and manage risks from AI in the government.” 

With the rise of generative AI, many companies have been struggling to develop internal policies for the use of such tools. In the Order, the Administration discourages federal agencies from “imposing broad general bans or blocks on agency use of generative AI.” Instead, agencies are directed to limit access based on risk assessment and establish guidelines and limitations on the appropriate use of generative AI. This should be of particular interest to companies as they look to federal examples for the development of their own internal use policies. 

  • Strengthening American Leadership: 

The Order also looks toward international relationships, directing engagement with international allies to establish frameworks for managing AI risks, encourage the adoption of commitments similar to those made by the U.S., and advance responsible global technical standards for AI development. The Secretary of State, the Administrator of the U.S. Agency for International Development, and the Secretary of Commerce shall also publish an AI Global Development Playbook that incorporates the AI Risk Management Framework principles into “contexts beyond those of the United States borders.”


THE TAKEAWAYS

President Biden’s Executive Order seeks to establish uniform guidance and best practices for the development and use of AI, particularly in areas of high risk. However, given the limitations of an Executive Order compared with formal legislation, it remains to be seen how the Order will affect private industry.

Importantly, the Order does not address any of the hotly contested intellectual property issues raised by AI, including whether use of copyrighted materials to train AI models constitutes infringement or whether AI-generated outputs are eligible for copyright or patent protections. We continue to blog about these issues as they are battled out in court (see, e.g., here and here).

For now, it is clear that the Administration is heavily focused on issues of bias, discrimination, privacy, and security. Developers and organizations using AI should refer to the existing regulatory guidance referenced in the Order, such as the NIST AI Risk Management Framework, as AI products are developed and brought to market. Additionally, organizations and AI developers involved in high-risk areas, such as employment, housing, healthcare, or consumer finance, should begin conducting bias audits and stay up to date on agency-specific regulations. Those not involved in high-risk areas should nevertheless begin developing internal AI policies, paying particular attention to potential adverse effects on consumers from the use of AI.
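By way of illustration only, and not as a method prescribed by the Order or any agency, the short sketch below shows one commonly used bias-audit metric: the adverse (disparate) impact ratio, which compares the rate of favorable outcomes for a protected group against that of a reference group, with the informal “four-fifths rule” (a ratio below 0.8) often treated as a signal for closer review. The function names, threshold, and sample figures here are illustrative assumptions, not requirements drawn from the Order.

```python
# Illustrative sketch of a basic bias-audit metric (not mandated by the Executive Order).
# Computes the adverse impact ratio: the favorable-outcome rate for a protected group
# divided by the rate for a reference group. A ratio below 0.8 (the informal
# "four-fifths rule") is commonly treated as a flag warranting closer review.

from dataclasses import dataclass

@dataclass
class GroupOutcomes:
    favorable: int   # e.g., loan approvals or offers extended for this group
    total: int       # total decisions made for this group

def selection_rate(group: GroupOutcomes) -> float:
    """Share of this group's decisions that were favorable."""
    return group.favorable / group.total if group.total else 0.0

def adverse_impact_ratio(protected: GroupOutcomes, reference: GroupOutcomes) -> float:
    """Protected group's selection rate relative to the reference group's."""
    ref_rate = selection_rate(reference)
    return selection_rate(protected) / ref_rate if ref_rate else float("nan")

if __name__ == "__main__":
    # Hypothetical underwriting decisions, purely for illustration.
    protected = GroupOutcomes(favorable=45, total=100)   # 45% approval rate
    reference = GroupOutcomes(favorable=60, total=100)   # 60% approval rate
    ratio = adverse_impact_ratio(protected, reference)
    print(f"Adverse impact ratio: {ratio:.2f}")          # 0.75, below the 0.8 benchmark
    if ratio < 0.8:
        print("Below the four-fifths benchmark; further review may be warranted.")
```

A real bias audit would, of course, go well beyond a single ratio and should be scoped with counsel and any applicable agency guidance in mind; this sketch is meant only to make the concept concrete.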