
The human factor in artificial intelligence (AI) regulation: ensuring accountability

As artificial intelligence (AI) technology continues to advance and permeate various aspects of society, it poses significant challenges to existing legal frameworks. One recurring issue is how the law should regulate actors who lack intention. Traditional legal principles often rely on the concept of mens rea, or the mental state of an agent, to establish liability in areas such as free speech, copyright, and criminal law. However, AI agents, as currently conceived, do not possess intentions in the same way that humans do. This presents a potential loophole where AI could be immune from liability simply because these systems lack the requisite mental state.

A new paper from Yale Law School, entitled “The Law of AI is the Law of Risky Agents Without Intentions,” addresses this critical issue by proposing the use of objective standards to regulate AI. These standards are drawn from various parts of the law that either attribute intent to actors or bind them to objective standards of conduct. The basic argument is that AI programs should be viewed as tools used by people and organizations, making those people and organizations liable for the actions of the AI. Traditional legal frameworks for determining liability depend on the mental state of the agent, a concept that does not apply to AI agents lacking intent; the paper therefore suggests moving to objective standards to fill this gap. The author argues that people and organizations using AI should be held liable for the harm it causes, much as principals are held liable for their agents. The paper also emphasizes imposing duties of due diligence and risk mitigation on those designing, implementing, and deploying AI technologies. Clear standards and legal rules need to be established to ensure that AI companies internalize the costs associated with the risks their technologies pose to society.

The article draws an instructive comparison between AI agents and the principal–agent relationship in tort law, which offers a valuable framework for understanding how liability should be assigned in the context of AI technologies. In tort law, principals are liable for the actions of their agents when those actions are performed on the principal’s behalf. The doctrine of respondeat superior is a specific application of this principle, under which employers are liable for torts committed by their employees in the course of employment. When people or organizations use AI systems, these systems can be viewed as agents acting on their behalf. The basic idea is that legal responsibility for the actions of AI agents should be assigned to the human entities that employ them. This ensures that individuals and companies cannot avoid liability simply by using AI to perform tasks that would otherwise be performed by human agents.

Therefore, given that AI agents do not have intentions, the law should require them and their human principals to adhere to objective standards, including:

  • Negligence – AI systems should be designed with due care.
  • Strict Liability – In certain high-risk applications, or where fiduciary duties are involved, the highest level of care may be required.
  • No Diminished Duty of Care – Replacing an AI agent with a human agent should not result in a diminished duty of care. For example, if an AI enters into a contract on behalf of a principal, the principal remains fully liable for the terms and consequences of the contract.

The article addresses the challenge of regulating AI programs that are inherently intentionless within existing legal frameworks that often rely on the concept of mens rea (the subject’s mental state) to assign liability. The author notes that the law already attributes intentions to entities that lack clear human intentions, such as corporations or associations, and holds actors to standards of behavior regardless of their actual intentions. The article therefore suggests that the law should treat AI programs as if they had intentions, presuming that they intend the reasonable and predictable consequences of their actions. This approach would hold AI systems accountable for outcomes in much the same way that humans are treated in some legal contexts.

The article also considers whether the subjective standards that typically protect human freedom should extend to AI programs. Its main argument is that AI programs lack the individual autonomy and political freedom that justify applying subjective standards to human actors. First Amendment protections, for example, balance the rights of speakers and listeners; protecting AI speech for the benefit of listeners does not justify subjective standards, because AI has no subjective intentions. Instead, the law should attribute intentions to AI programs, presuming that they intend the reasonable and predictable consequences of their actions, and should apply objective standards of behavior based on what a reasonable person would do in similar circumstances, including standards of reasonableness.

The article presents two practical applications in which AI programs should be regulated using objective standards: defamation and copyright infringement. It examines how objective standards and reasonable regulation can address liability issues arising from AI technologies, with particular emphasis on large language models (LLMs) that can generate harmful or infringing content.

The key components of the applications discussed are:

  • Defamatory hallucinations:

LLMs can generate false and defamatory content when prompted to do so, but unlike humans they do not have intent, making traditional defamation standards ill-suited. Instead, they should be treated analogously to defective products, and product designers should be expected to implement safeguards to reduce the risk of defamatory content. Where the AI system itself generates and disseminates the content, a product liability approach applies; human prompters, in turn, are liable if they publish defamatory material generated by LLMs, with standard defamation law modified to reflect the nature of AI. Users must exercise due diligence in designing prompts and verifying the accuracy of AI-generated content, refraining from disseminating material they know or reasonably suspect to be false and defamatory.

  • Copyright infringement:

Concerns about copyright infringement have led to numerous lawsuits against AI companies. LLMs may generate content that reproduces copyrighted material, raising questions about fair use and liability. To address this, AI companies can obtain licenses from copyright owners to use their works for training and to generate new content, or a collective rights organization could be created to issue blanket licenses, although this approach has limitations given the diverse and dispersed nature of copyright owners. In addition, AI companies should be required to take reasonable steps to reduce the risk of copyright infringement as a condition of the fair use defense.

Application:

The paper examines the legal liability of AI technologies using principles derived from agency law, imputed intent, and objective standards. By treating the actions of AI similarly to those of human agents under agency law, it emphasizes that principals must take responsibility for the actions of their AI agents, with no diminution of the duty of care.

Aabis Islam is an LLB student at National Law University, Delhi. With a keen interest in AI law, Aabis explores the intersections of AI and legal frameworks. Dedicated to understanding the implications of AI in various legal contexts, Aabis follows advancements in AI technologies and their practical applications in the legal field.
