Senate Majority Leader Charles Schumer (D-NY) unveiled his much-anticipated bipartisan legislative framework for regulating artificial intelligence (AI) during a recent keynote address at the Center for Strategic and International Studies. It is likely no coincidence that this effort gets underway as the European Union advances its sweeping draft AI Act, which is expected to be finalized this fall.

Bipartisan Working Group

To support his proposed framework and resulting policy proposals, Senator Schumer has formed a bipartisan AI working group that will work closely with the leaders of the Senate's Commerce, Homeland Security, Antitrust, Judiciary, and Intelligence committees to shape the draft legislation. Senator Schumer stated that traditional legislative processes will not suffice because the complexity and speed of AI development demand a novel and fast-moving approach. Currently, the working group is headed by Senators Schumer, Heinrich (D-NM), Young (R-IN), and Rounds (R-SD). However, Senator Schumer noted that he has asked each of the committee chairs to reach across the aisle to ranking members and work together on developing proposals. Further, he specifically invited Senators Bennet (D-CO), Thune (R-SD), Blumenthal (D-CT), Blackburn (R-TN), and Hawley (R-MO), each of whom has spoken out about the potential risks of AI, to join the group.

Key Considerations and Questions

Senator Schumer emphasized that he finds it necessary for Congress to actively engage with the AI revolution, and he stated that failing to regulate the rapidly advancing technology would put American citizens and businesses at risk, noting "the age of AI is here, and it is here to stay." The framework comes in part in response to warnings from some AI experts about the potential risks of unregulated AI. The initiative also aligns with ongoing efforts undertaken by the White House and other lawmakers to address this issue.

Senator Schumer began his speech by outlining the questions he thinks Congress must consider in drafting AI legislation, including:

  1. What is the proper balance between collaboration and competition among the entities developing AI?
  2. How much federal intervention, if any, is needed to encourage innovation?
  3. What is the proper balance between private AI systems and open AI systems?
  4. How does the government ensure that innovation and competition are open, free, and fair?

A New Framework

Senator Schumer's proposed legislative framework, called "SAFE Innovation for AI," spans four key pillars designed to encourage AI innovation while ensuring it proceeds safely:

  1. Security. Strengthen national security by examining potential AI threats, and ensure economic security for workers, particularly those in low-skilled, low-income jobs.
  2. Accountability. Support the creation of systems to address misinformation and bias, protect creator rights, and address copyright concerns.
  3. Foundation. Set the norms for AI use to ensure AI systems uphold democratic values, protect elections, and promote societal benefits.
  4. Explainability. Require companies to explain how an AI system reached a certain outcome in an understandable manner.

The framework aims to harness the potential benefits of AI, such as combating diseases and improving efficiency, while addressing potential risks, such as displacement of workers, misinformation campaigns, and electoral interference.

Congressional Forums on AI

In addition to his new framework, Senator Schumer plans to host a series of "AI Insight Forums" beginning this fall. The forums will feature top AI developers, executives, scientists, community leaders, security experts, workers, and others. The findings of these discussions will form the basis for detailed policy proposals for Congress to consider but will not replace efforts already underway. Senator Schumer outlined the following subjects as topics to be discussed during the forums:

  • What are the right questions?
  • Copyright.
  • Intellectual property.
  • Use cases and risk management.
  • Workforce implications.
  • National security.
  • How to guard against doomsday scenarios.
  • AI's role in our social world.
  • Transparency, explainability, and alignment.
  • Privacy and liability.

Moving Forward

AI remains at the forefront of the minds of regulators, and further developments can be expected throughout the year. The Artificial Intelligence & Machine Learning industry group at Perkins Coie will continue monitoring changes to the AI regulatory landscape to keep clients informed of potential legal and regulatory issues surrounding the development and use of AI products and services.
