The White House recently issued its most extensive policy directive yet concerning the development and use of artificial intelligence (AI) through a 100-plus-page Executive Order (EO) titled "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" and accompanying "Fact Sheet" summary.

Following in the footsteps of last year's Blueprint for an AI Bill of Rights and the updates to the National Artificial Intelligence Research and Development Strategic Plan published earlier this year, the EO represents the Biden administration's most significant step yet on AI. Like those earlier efforts, the EO acknowledges both the potential and the challenges associated with AI while setting a policy framework aimed at the safe and responsible use of the technology, with implications for a wide variety of companies. The EO also signals the government's intention to use its purchasing power as leverage for Responsible AI and other initiatives, a point of particular significance for government contractors.

As a unilateral action of the executive branch, this EO cannot alter existing laws or appropriate funds (both of which would require Congressional approval). Rather, the EO primarily provides guidance and directives to federal agencies and, more broadly, outlines the administration's policies and priorities on AI. The EO places a strong emphasis on inter-agency coordination, international collaboration, and a multistakeholder approach to navigating the complexities of AI. Importantly, the EO also establishes requirements and expectations for AI technologies that the federal government procures through its contracts. Through these directives, the EO aims to promote innovation, protect individuals' rights, and establish the United States as a leader in the global AI landscape while addressing the inherent risks and challenges posed by AI technologies.

Eight Guiding Principles

The EO emphasizes advancing and governing AI development and use based on eight guiding principles, summarized below. Federal agencies are instructed to adhere to these principles while considering views from a wide range of stakeholders including industry, academia, labor unions, and international allies.

  • Safety and Security. The EO directs agencies to ensure AI systems are reliable and secure, with robust evaluations and risk mitigation strategies. This includes directing the National Institute of Standards and Technology (NIST) within the U.S. Department of Commerce to establish mechanisms for testing and monitoring AI systems both before and after deployment, with a particular emphasis on generative AI and "dual-use foundation models" (defined as large, general-purpose models that excel at "tasks that pose a serious risk to security, national economic security, national public health or safety"). Additionally, the EO directs agencies to explore potential mechanisms (e.g., watermarking) and develop guidance to help Americans identify AI-generated content.
  • Innovation and Competition. The EO emphasizes fostering a competitive AI ecosystem, including by developing the U.S. AI workforce and promoting investments in AI-related education, training, and research. It also directs agencies to address potential intellectual property challenges posed by AI, such as by directing the U.S. Patent and Trademark Office to issue guidance on the patentability of AI technologies and AI-assisted inventions. Additionally, with the aim of promoting competition, the EO mandates that all agencies developing AI policies and regulations, with particular emphasis on the Federal Trade Commission (FTC), use their existing authorities to promote a competitive AI marketplace, including by taking steps to prevent dominant market players from disadvantaging competitors and by providing new opportunities for small businesses and entrepreneurs.
  • Workforce Support. The EO instructs federal agencies, such as the U.S. Department of Labor (DOL) and U.S. Department of Education, to consider the impact of AI on the workforce and promote job training, education, and other measures to assist workers, particularly those displaced by advancements in AI technologies.
  • Equity and Civil Rights. Building on the Biden administration's efforts with the Blueprint for AI Bill of Rights, the EO mandates that AI policies support the administration's goals of advancing equity and civil rights and combating the spread of bias and discrimination through use of AI technologies. For example, the EO targets use of AI technologies in the criminal justice system, directing the attorney general to collaborate with various federal agencies to develop guidance and best practices to prevent use of AI technologies from exacerbating discrimination in sentencing and other aspects of the criminal justice system.
  • Protecting the Public. The EO encourages federal agencies to consider using their existing authorities to enforce consumer protection laws and to enact safeguards against fraud, bias, discrimination, and other potential harms from AI, particularly in critical fields like healthcare, financial services, education, and telecommunications.
  • Protecting Privacy. The EO emphasizes the importance of protecting privacy and civil liberties as AI technologies continue to develop. In particular, it directs NIST to develop guidelines for agencies' use of privacy-enhancing technologies and directs the National Science Foundation to fund further research and development of privacy-enhancing technologies.
  • Advancing Federal Government Use of AI. The EO mandates interagency efforts, led by the Office of Management and Budget, to enhance the federal government's AI capabilities, including by increasing the responsible use of AI technologies by government agencies, where appropriate, and hiring and developing AI talent. Government contractors should expect these initiatives to accelerate agency efforts (such as those of the U.S. Department of Defense as part of its Responsible AI Strategy and Implementation Pathway) to incorporate Responsible AI and ethics principles into contracts with federal agencies.
  • Strengthening Global Leadership. The EO aims to bolster American leadership in AI through international collaboration and the setting of technical standards. It mandates a coordinated effort, led by various federal agencies (including the Department of State), to engage with international allies, promote responsible AI practices, and manage AI-associated risks on a global scale. The EO directs the secretary of commerce to create a global engagement plan to promote and develop AI standards, including standards for AI nomenclature, data handling practices, AI system trustworthiness, and risk management, guided by NIST's AI Risk Management Framework. The EO also outlines plans for safeguarding critical infrastructure from AI-related risks and advancing AI in global development, emphasizing a comprehensive approach toward responsible and beneficial use of AI both domestically and internationally.

The White House Artificial Intelligence Council

To help implement the directives, the EO establishes the White House Artificial Intelligence Council. Its primary role is to ensure that federal agencies are aligned in developing and implementing policies related to AI, as stipulated by the EO. The AI Council will be chaired by the assistant to the president and deputy chief of staff for policy. Its membership will include a broad array of cabinet members and heads of key agencies. The chair is empowered to form and oversee subgroups within the AI Council and to include additional agency heads as necessary to facilitate effective policy coordination and action on AI-related matters.

Shaping the Regulatory Landscape at Home and Abroad

The EO is the most comprehensive effort to date to outline a federal strategy on AI. It arrives at a pivotal moment for emerging efforts to adopt AI legislation and regulation, reflecting a strategic response to global AI regulatory developments amidst a challenging legislative environment in the United States. With Congressional action unlikely in the near term and a presidential election coming next year, the EO demonstrates the Biden administration's intent to take the initiative, within the confines of executive power, to shape AI policy both domestically and internationally.

With the EU nearing completion of its comprehensive AI regulatory framework, the AI Act, the EO represents the United States' intent to assert itself as an international leader in AI regulation. This proactive stance may be partly motivated by the United States' experience with the EU's General Data Protection Regulation (GDPR), which had widespread implications for international business practices and effectively set the EU's standards as the global benchmark for privacy regulation.

As the EO signals potentially heightened regulatory requirements for AI systems in the future, the Artificial Intelligence & Machine Learning industry group at Perkins Coie will continue to monitor the rapidly evolving AI regulatory environment, both domestically and abroad, to anticipate challenges for our clients. These challenges may include navigating the complexities of compliance with emerging standards, protecting intellectual property, and managing legal risks associated with the deployment and use of AI. Our focus remains on providing our clients with the strategic insight necessary to navigate the development and use of AI solutions within an ever-shifting regulatory landscape.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.