By David A. Steiger and Stratton Horres

May 16, 2023 - The momentum of artificial intelligence (AI) just keeps building as its rapid-fire adoption permeates industry after industry. Well-known YouTube contributor Rick Beato recently posted a fascinating video that looks at the latest developments in AI-generated music and its likely impact on record companies and streaming services. The White House announced a new $140 million initiative to mitigate AI risks, to be headed up by Vice President Kamala Harris. Meanwhile, CNBC reports that researchers at the University of Texas at Austin have developed a noninvasive AI system that translates human brain activity into a stream of text.

It is reasonable to wonder whether human society is now strapped to a miraculous new technology that will take us to heretofore unimagined heights, or set on an inevitable, terrifying path to dystopia. Can we regulate AI to get the benefits of what it offers while avoiding catastrophe?

Where is AI taking us?

Scanning various media over the past six months, one quickly realizes there are two basic schools of thought on AI. Call one the "transcendently useful tool" school, and the other "the end of the world as we know it" school. Or, as we might say, Singing the Body Electric versus Skynet. It is perhaps important to keep in mind that the way we look at AI today faithfully tracks the way we have looked at it for decades.

"I Sing the Body Electric" was a Twilight Zone episode written by Ray Bradbury and broadcast in 1962. The story revolves around a highly empathetic robotic grandmother who helps heal a family suffering from tragic loss. It points us in the direction of warmly embracing new technology that will lift up and support humanity. It is opposed by, among other things, the Terminator franchise that began in the early 1980s, featuring Skynet, a fully sentient artificial intelligence bent on extermination of all humans.

Arguably, the potential future of AI encompasses both and neither of these scenarios. This all becomes relevant as society collectively decides when and how to regulate the emerging AI industry. Given how important AI is likely to be in the years ahead, getting its regulation right - and getting it right soon - is of critical importance. But getting regulation right means taking an honest look at what regulation can and cannot do in this sphere, given how far it has already developed - and how it is likely to develop from here.

Government regulation and industry self-regulation

Regulation of AI by various governmental bodies was already being developed prior to the attention generated by the public release of ChatGPT late last year. But it was inevitable that as AI became a phenomenon not only recognized by the general public but also viewed by a number of industry insiders as a potential threat, various governmental entities would spring into action.

As a recent Harvard Business Review article by Andrew Burt noted, this includes the European Commission, the U.S. Federal Trade Commission, the U.S. Department of the Treasury and the State of Virginia via its Consumer Data Protection Act. Burt points out that some coalescence of regulatory principles was emerging: requirements to identify risks posed by AI and how to address them; third-party review and testing of systems to ensure independence between developers and those assessing risk; and continuous review of AI systems. All of these seem to be reasonable first steps in the regulation of AI.

However, it is important to recognize the limitations and issues surrounding government regulation of AI.

  • First, AI is developing at eye-watering speed. There has always been some time lag between the emergence of a new technology and effective efforts to regulate it. Here, where the speed of change is so great, regulators will likely struggle to keep up with the cutting edge of the technology and how to handle emerging risks.
  • Second, regulators may require industry participants to identify known unintended consequences, but governing bodies often have blind spots when it comes to the unintended consequences of their own actions - or inaction.
  • Third, regulators can reach those industry participants that seek to do mainstream business in the United States or the European Union, for example, but they have much less sway in rogue countries that may seek to weaponize AI in a variety of ways.
  • Additionally, government regulation tends to become politicized - history provides examples of regulatory capture by established industry players who used regulation to create oligopolies that prevented innovative new participants from gaining a foothold in the marketplace.

Industry self-regulation received attention when the Future of Life Institute issued its open letter in March 2023 - signed by tens of thousands of experts, including Elon Musk - which called on all AI labs to immediately pause for six months any training of AI systems more powerful than GPT-4.

Generally speaking, the letter argued that given what a massive impact advanced AI could have on life as we have known it, its rollout must be prepared for and managed with great care and with adequate resources. The letter brings to mind another old-school work critical of technology run amok - Mary Shelley's Frankenstein.

Underlying the warnings in the Future of Life Institute letter is the notion that developers are so busy asking what can be done with the new technology that no one is asking whether we should automate away so many jobs, or otherwise risk losing control of civilization. A six-month pause, the letter argues, could be used to step back from a dangerous race toward ever more powerful models with emergent capabilities and instead focus on making the systems we already have more accurate, transparent, trustworthy and loyal.

The letter's premise is not entirely unreasonable - but it is probably unrealistic. An estimated $154 billion is rushing into AI development worldwide in 2023 alone, and those who have invested that money will likely not abide a months-long pause in the underlying work. They expect a return on their investment, and it is hard to imagine even the leading tech companies willingly committing self-sabotage in this arena while their competitors continue to move forward apace. But what perhaps might really blunt industry self-regulation - and even government regulation - of this unique technology is summed up in two simple words: open source.

How open source might complicate regulatory efforts

The semianalysis.com site recently published what it described as a leaked document from an anonymous Google employee under the title "Google 'We Have No Moat, And Neither Does OpenAI.'" It was presented as that single employee's opinions and not those of Google, but it is a fascinating read nonetheless.

The anonymous author argues that neither Google nor OpenAI is poised to win the next round of developments in AI; rather, it is independent developers using open source - in this case, a leaked version of Meta's LLaMA foundation model. After just weeks of tinkering, at least in the anonymous author's view, open-source models are "faster, more customizable, more private and pound for pound more capable."

What does this mean in practical terms? The article suggests, "The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop." It is not difficult to see how, if true, this could present a nightmare scenario for regulators.

When regulators seek to build guardrails in a given industry, there are generally a finite number of players who can be monitored and prosecuted if need be for failing to play by the newly promulgated rules. But if open source allows untold thousands or even millions of independent developers to make major changes and enhancements on a weekly or daily basis, how does any regulatory body practically enforce its regulations?

End-user requirements and preferences

There is a third way that AI products can and will be influenced, at least in part: by customers in the open marketplace. Some customers will belong to industry associations, bar associations or medical associations that may develop ethical guidelines requiring certain data security protections or efforts to curtail "deep fake" technology that creates plausible but false video or audio.

These customers will demand products that abide by these ethical guidelines; even in the absence of outside requirements, a customer might simply want to avoid the bad optics that a given AI product might generate if it contains problematic features. Again, this will not deter bad actors developing systems for their own nefarious purposes, but it is another avenue for building generally accepted principles of AI that the majority of commercial actors would, one hopes, uphold.

Upshot: AI is still in the Wild West phase

Basic regulation of AI is hard to argue with - to the extent that AI is incorporated into critical systems, protections against hallucination need to be built in, and discriminatory biases need to be engineered out. Where things get trickier is on issues such as protecting humans' jobs. Who decides how much automation is good or bad? What criteria would be used to make that call?

Realistically, regulation of AI will come in fits and starts, at least in the short term. We also need to recognize that a number of people working outside the mainstream may push AI in a direction that is negative from global society's perspective in order to serve their own ends. This is perhaps the most dangerous challenge. If the Skynet school is right and AI can at some point become the ultimate weapon of mass destruction, we have examples of mixed success in multilateral efforts to control the use of WMDs, such as the Biological Weapons Convention and the Nuclear Non-Proliferation Treaty. At a minimum, our government should begin working with other state actors to develop a similar framework for AI, recognizing, however, as noted above, that unlike traditional WMDs, open source may create many more potential AI stakeholders than just nation states.

Conclusion

Most people recognize the many possible benefits that AI is likely to bring in the coming months and years. Like just about all new technology, it needs to be harnessed and excesses need to be curtailed, particularly when credible dangers to human life emerge.

In the end, though, the authors of this piece, with no disrespect to the signatories of the Future of Life open letter or to Geoffrey Hinton, tend to agree with the views of neural network pioneer Jürgen Schmidhuber, who recently stated that "in 95% of all cases, AI research is really about our old motto, which is make human lives longer and healthier and easier." This view is supported by a new paper from Stanford arguing that the seeming "emergent capabilities" of large AI models are a mirage created by the particular metrics some researchers have used to evaluate them.

So Sing the Body Electric most of the time, but - on the fringes - perhaps we should still keep a watchful eye out for Skynet. As the movie Terminator 2: Judgment Day reminds us, "The future's not set. There's no fate but what we make for ourselves."

Originally published by Thomson Reuters Westlaw Today.
