The Role Of Acceptable Use Policies In AI

William Fry


William Fry is a leading full-service Irish law firm with over 310 legal and tax professionals and 460 staff. The firm's client-focused service combines technical excellence with commercial awareness and a practical, constructive approach to business issues. The firm advises leading domestic and international corporations, financial institutions and government organisations. It regularly acts on complex, multi-jurisdictional transactions and commercial disputes.

As the deployment of Artificial Intelligence (AI) systems becomes more widespread, acceptable use policies (AUPs) are emerging as a critical mechanism for ensuring compliance with regulatory frameworks like the Artificial Intelligence Act (AI Act).

The versatility of general-purpose AI systems necessitates robust AUPs to mitigate the risk of misuse. Without such policies, providers and users alike may inadvertently engage in activities that violate legal and ethical standards, particularly those set out in the AI Act, exposing themselves to significant liabilities, regulatory obligations and penalties.

Defining Acceptable Use Policies

Acceptable use policies are contractual agreements that outline permissible and impermissible uses of technology. In the context of foundation models, these policies specify what users can and cannot do with AI systems, aiming to prevent harmful or illegal applications. By embedding these restrictions within terms of service or model licenses, developers create binding obligations for users, thereby extending their control over the use of their technologies.

The Necessity of Acceptable Use Policies for AI Systems

The benefit of AUPs is underscored by the findings of research published by Stanford University's Center for Research on Foundation Models. The research by Kevin Klyman notes that foundation model developers are increasingly proactive in adopting AUPs to prevent unacceptable uses of their AI systems. Despite this, there is significant variability in how these policies are implemented and enforced. The study reveals that out of ten leading developers, only three disclose their enforcement mechanisms, and only two provide justifications when enforcing these policies. Klyman suggests that this lack of transparency raises questions about the efficacy and consistency of AUPs in practice.

Challenges and Considerations

Organisations face several challenges when it comes to regulating foundation models through AUPs. One primary concern highlighted by the research is the enforcement of these policies. Even when AUPs are clearly defined, enforcing them can be difficult, especially in decentralised or open-source environments.

It is also important to consider that if a user of a general-purpose AI system makes a significant modification to the intended purpose of that system, such that the system becomes prohibited or high-risk under the AI Act, the user who made that modification could be deemed the provider of the system under the AI Act, with all the obligations that brings.

Content-Based Prohibitions and Industry-Specific Restrictions

The research indicates that AUPs typically include content-based prohibitions, such as bans on generating explicit, fraudulent, abusive, or deceptive content. However, there is a lack of uniformity in these restrictions across different developers. For instance, while some developers prohibit the use of their models for political content, others do not. Similarly, prohibitions on generating content related to eating disorders, weapons, and misinformation vary widely.

Industry-specific restrictions are another critical aspect of AUPs. The research highlights that many developers prohibit the use of their models for weapons development; for legal, medical and financial advice; and for surveillance. These restrictions aim to prevent misuse in highly regulated or sensitive industries, although the specifics can differ significantly from one developer to another.

Enforcement and Transparency

The effectiveness of AUPs hinges on their enforcement, yet Klyman notes that many developers provide little information on how they enforce these policies. This opacity contrasts with other digital technologies, where transparency reports are more common. The research suggests that without clear enforcement mechanisms, the deterrent effect of AUPs may be limited, reducing their potential to prevent harmful uses.

The Role of Governments and Self-Regulation

Governments have a vested interest in ensuring that foundation models are used responsibly. The EU's AI Act, for instance, mandates that providers of general-purpose AI models disclose their AUPs to both the EU's AI Office and downstream providers. This requirement aims to enhance transparency and accountability. However, Klyman points out that the enforcement of these policies remains a challenge, particularly in jurisdictions where regulatory frameworks are still evolving.

Self-regulation, through voluntary commitments and industry norms, plays a complementary role. The research references the White House's Voluntary AI Commitments, which encourage companies to disclose appropriate and inappropriate uses of their models. While these commitments signal a positive step towards responsible AI use, they lack the binding force of formal regulations.

Emerging Norms and Policy Proposals

Norms around the use of foundation models are still developing. Leading AI companies have begun to adopt best practices, such as publishing usage guidelines that prohibit harmful activities, as noted by the study. However, the enforcement and transparency of these guidelines vary, highlighting the need for more consistent and robust approaches.

Policy proposals aimed at restricting the use of foundation models should consider existing AUPs. Enhancing the enforceability of these policies and supporting their implementation can help bridge gaps in current regulatory regimes. For instance, the research suggests that governments could incentivise companies to develop more transparent and enforceable AUPs, thereby strengthening the overall governance of AI technologies.

Conclusion

Acceptable use policies are useful when it comes to managing the risks associated with the deployment of foundation models. They serve as a helpful self-regulatory tool, helping developers and users navigate the complex landscape of AI regulation. However, the effectiveness of AUPs depends on their clarity, enforceability, and transparency. By addressing the challenges and limitations of current AUPs, stakeholders can better harness the transformative potential of AI technologies while mitigating their risks.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
