Achieving Alignment

The technological advancement of artificial intelligence is inevitable. Ensuring it is used for the betterment of society is not. For this, we need nothing short of leadership, legal cooperation and comity among nations.

Words by Luke Scanlon

Illustrations by Mario Wagner

International bodies, governments and regulators across the world are grappling with the question of how best to regulate AI.

Central to many of these discussions are attempts to achieve alignment in two respects: with human interests, values and objectives, and with how the laws themselves operate across borders.

At a global level, this is no easy task. To date, there remain notable differences between proposed legal frameworks, even among those of the Group of Seven (G7), an intergovernmental forum that is now working together through the Hiroshima AI Process, a collective means for considering how AI can be regulated.

To understand the scope of this challenge, I’ve outlined the key areas in which common ground is needed and the likely trajectory for the development of AI regulations that align both with one another and with the objectives of humanity overall.

Defining AI

For a legal framework to be effective, there must be certainty about what it is intended to regulate. Unfortunately, at this stage, there is a high level of divergence regarding what should fall within the scope of AI laws. 

The European Union, through its draft of the AI Act, has issued a detailed prescriptive law intended to regulate all development and use of AI systems. In its initial draft, the European Commission, one of the EU’s three key legislative bodies, defines AI systems to include not only machine-learning techniques but also all software that can, for a set of human-defined objectives, generate outputs using “logic-based techniques” or “search and optimization methods.” The breadth of this definition would classify many existing software products used today as AI.

Other EU bodies involved in the legislative process have, however, put forward a more limited definition that parallels the approach the United Kingdom is taking. 

The U.K., in its initial attempts to define AI, has focused less on the prescriptive detail of the techniques or methods used and more on the behavior of the systems themselves. For a system to be considered AI and therefore subject to future AI regulation, it must display two characteristics: the ability to act autonomously and the ability to adapt in response to training.

In the U.S., the National Institute of Standards and Technology (NIST), a standards-setting body, has taken somewhat of a middle-ground approach. Its definition of AI systems includes engineered or machine-based systems that can, for a given set of objectives, generate outputs. However, it restricts the category of software and systems that may be considered AI to those “designed to operate with varying levels of autonomy.”

The White House’s Office of Science and Technology Policy has taken yet another approach and in its Blueprint for an AI Bill of Rights distinguishes “automated systems” from “passive computing infrastructure.” What differentiates the two is that automated systems influence or determine the outcome of decisions, make or aid decisions, inform policy implementation and/or collect data or observations; a passive computing infrastructure does not.

Various U.S. agencies — including the Civil Rights Division of the Department of Justice, the Federal Trade Commission, the Consumer Financial Protection Bureau and the Equal Employment Opportunity Commission — have similarly stated that automated systems, including those “used to automate workflows,” fall within their regulatory authority, but there’s no guarantee that their interpretations of that term (and how they apply it) will be consistent.

For the business community, achieving a common understanding of what will be subject to AI regulation is no academic exercise. If prescriptive requirements are introduced that restrict the use of AI, and there is no consensus on which software or systems are subject to those restrictions, businesses will face a burdensome administrative cost in investigating and classifying systems to determine whether they fall within the scope of AI laws.

The Issue of Enforcement

The ability to enforce AI laws is another hot-button issue. If enforcement mechanisms cannot be established that are effective across borders, attempts to restrict the development and use of AI in one country may have the unintended consequence of giving individuals or businesses located elsewhere an early-mover advantage, or cause unsafe practices to shift to locations difficult to govern. 

The potential for a lack of consistent enforcement has led to a very real discussion about whether an international level of governance for AI can be established. Comparisons are being made to the use of nuclear power and the role intergovernmental organizations can take in governing and, where necessary, restraining its development and use.

Unlike nuclear power, where restrictions can be placed on the transport of uranium or the process of enrichment, the cross-border transfer of the raw materials and data required to build effective AI applications is not easily prevented.

Developing an effective global enforcement system for AI is therefore problematic.

It is more likely that many countries will plan carefully for the jurisdictional reach of their own laws to extend beyond their geographical borders. In this regard, AI laws may include localization restrictions and, for example, follow the path of U.S. securities laws, which in certain circumstances restrict crypto activity beyond U.S. borders. Or they might mirror EU laws regarding personal data, which place significant restrictions on processing data outside the EU.

Who Should Have Authority?

A key decision for lawmakers is whether to restrict the use of AI to people and organizations that obtain a license from a statutory or regulatory authority.

Generally, only authorized providers can give legal advice, provide financial services or build nuclear power plants — the reasoning goes that the same restrictions should apply to working with AI.

The EU has taken this approach and set out in its draft legislation prohibitions on the use of AI for high-risk purposes where detailed conformity assessments have not been passed. Before releasing an AI system, a business would need to engage in a process that requires a review of its quality-management system, technical documentation and, in some cases, source code, and it would also need to give details of any third-party data sources it has used.

Calls for a similar approach are beginning to be heard in other places. For example, during the “Oversight of A.I.: Rules for Artificial Intelligence” hearing in May of this year, conducted by the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law, multiple references were made to the need to restrict the use of AI in the U.S. to those who first obtain a license.  

If licensing regimes are to be introduced in different jurisdictions, businesses operating across borders will need to pay close attention to the requirements of each jurisdiction. Significant regulatory enforcement fines may result if licensing requirements are not met.

Key Prohibitions and Protections

Whether or not strict licensing regimes are introduced, there is a high likelihood that many jurisdictions will take steps to ensure that a range of legal requirements for AI are set out more prescriptively in law. These requirements could range from protections against unfair bias and discrimination to complete prohibitions on activities deemed to be malicious or manipulative.

The EU’s current approach to bias and discrimination requires organizations to take steps to ensure that unfair bias is addressed throughout the training, validation and testing of datasets. Throughout each of these stages, processes will be needed to demonstrate that an AI system does not rely on data that may lead to health or safety issues or discriminatory outcomes.

The EU’s position also requires bias monitoring, detection and correction measures to be put in place, and for organizations to be particularly focused on preventing “automation bias” or overreliance on the output of AI systems. The risk of biased outputs resulting in negative feedback loops — that is, influencing future operations of an AI system — will also need to be managed.  

Other jurisdictions are taking a similar approach to managing bias and discrimination and putting in place specific rules regarding transparency. There is a growing expectation that users be made aware of risks before they interact with AI directly — especially when AI is used in a manner that will directly impact their legal rights or financial interests.

A Clear Prescription

Every organization that uses an AI system needs to understand the extent to which the outputs it produces may be protected by intellectual property (IP) rights or may infringe another business’ or person’s IP.

Courts have already begun to consider cases, such as those brought by Getty Images, where AI systems have generated outputs that are alleged to reproduce existing works protected by IP. (Though, at the time of this writing, the case law remains unsettled.) 

There are also legal concerns regarding AI systems that generate identical results for two or more users, as this makes it difficult to determine which user, if any, should have inherent rights to exploit the potential commercial value of the outputs created.

In some jurisdictions, including in the U.S., there is uncertainty as to whether outputs generated by AI can be owned by anyone at all.

Data protection regulators are considering how best to address instances of AI generating outputs that use personal data or private information without the consent of the individual to whom the data relates or another legal ground that allows for the data to be processed. In Italy, for example, the data protection authority briefly banned ChatGPT, and another AI company was fined 20 million euros for unlawful processing of personal information. In the U.S., at least one case is challenging the legality of using personal data in developing AI models.

Related privacy concerns occur when trained large language models (LLMs) generate personal data by inference in response to user queries without the knowledge of the person to whom the data relates. 

Perhaps even more alarming, and of highest concern for lawmakers, is the extent to which AI systems can create misinformation at scale and negatively influence financial markets and political and social discussions, or otherwise result in harm. To counter this, lawmakers are considering how best to regulate the accuracy of data and AI models.

Existing legal protections against fraud, defamation and misleading conduct, as well as those targeting misinformation online, are being considered in the context of AI. In some jurisdictions, these protections may be strengthened, creating a greater liability risk for businesses that use AI without first putting in place accuracy and model-control mechanisms that safeguard against the generation of misinformation.

AI’s impact on advertising is also an area for concern and one that may be the subject of future direct regulation. The EU, for example, is taking the approach of prohibiting AI systems that deploy “subliminal techniques beyond a person’s consciousness,” which may distort a person’s behavior and cause psychological harm.

No mention is made by the EU, however, of how courts and regulators will determine whether a practice is a subliminal technique. As other regulation in the EU is already focusing on the use of “dark patterns,” described as practices that impair a person’s ability to make autonomous and informed choices, it is likely that more attention will be given to the use of AI in advertising contexts, including nudging, choice architectures and overall messaging.

No Time to Waste 

This is just a sample of the choices legislators and regulators need to make quickly — over the coming months, not years — to finalize their approaches and use them to build cross-border cooperation. You cannot keep this technological genie in a bottle. There is no time to waste.

With the potential for AI to touch every aspect of business and our lives, it is imperative to actively engage with the policy discussion that is currently taking place. The success of these discussions will determine if regulatory frameworks can be built that allow life-changing AI use cases, such as those in medicine, to continue to evolve — while also implementing strong safeguards against harmful and destructive practices.

Ultimately, ethical and legal considerations must shape and temper the world’s use of AI tools, rather than the other way around. Alignment is for our common good. 

Luke Scanlon is a U.K.-based lawyer at Pinsent Masons. He advises on AI legal and regulatory policy and the implementation of responsible AI frameworks.
