The integration of artificial intelligence (AI) into daily life brings both promise and peril. From driverless cars to medical diagnoses to the flood of AI-written content appearing online, the technology's potential to improve efficiency and accuracy is undeniable. But when AI systems fail to perform as intended, or worse, cause harm, determining liability becomes a complex and pressing question that legal experts say existing laws aren't yet prepared to answer. Here's where the U.S. system currently stands on AI liability, and what businesses developing and using the technology need to keep in mind as we head into the unknown.
The Complex and Uncertain Landscape of AI Liability
Determining liability when AI fails involves a multitude of factors and stakeholders. Scenarios such as a driverless car hitting a pedestrian or AI software returning an incorrect medical diagnosis highlight the potential risks. According to law professor Jane Bambauer, as AI becomes more ingrained in decision-making processes, companies may find it increasingly difficult to evade liability.
“If in the coming years, we wind up using AI the way most commentators expect, by leaning on it to outsource a lot of our content and judgment calls, I don’t think companies will be able to escape some form of liability,” Bambauer says.
In many ways, high-stakes situations involving physical harm or medical malpractice are the more straightforward cases. Those industries have established guardrails for deciding who is at fault, and insurance to cover the liability cost. But when it comes to generative AI and speech produced by AI systems, the legal framework surrounding liability remains nebulous. Graham Ryan, a litigator at Jones Walker, points out that Section 230 of the Communications Decency Act, a cornerstone of internet regulation, does not cover AI-generated content.
“Generative AI is the wild west when it comes to legal risk for internet technology companies, unlike any other time in the history of the internet since its inception,” Ryan says. Section 230 has protected internet companies in the past from harm caused by content on their platforms. Think: if you say something defamatory about your neighbor on Facebook, your neighbor can sue you, but not Meta.
Now the concern is what happens when the AI itself writes something false or defamatory, and the debate over how much autonomy AI has in generating content further complicates matters. The absence of clear guidelines leaves companies vulnerable to lawsuits over AI-generated content.
Implications for AI Companies and Users
It’s clear to experts that companies will be sued when AI misbehaves on their platforms. The extent to which they will be held liable is murky, but the debate has already reached Congress and the Supreme Court. Normally, when companies perceive a gap in existing law, they lobby Congress for a fix. Lately, though, Congress has been keen to strip away some of the protections Section 230 offers, by making that protection conditional on companies playing by certain rules. Congress’s current mood is the opposite of what companies that make and use AI want.
Michael Karanicolas, executive director of the Institute for Technology, Law & Policy at UCLA, suggests that managing legal threats may involve restricting certain uses of AI technology.
“If we have these tools, and large volumes of people are doing dangerous things as a result of receiving garbage information from them, I’d argue it isn’t necessarily a bad thing to assign cost or liability as a result of these harms, or to make it unprofitable to offer these technologies,” he says.
The potential for AI-related lawsuits underscores the importance of proactive risk management strategies for companies. From implementing stringent quality control measures to investing in robust error-detection mechanisms, businesses must prioritize accountability and transparency in their AI deployments. Moreover, users must be aware of the inherent risks associated with AI technologies and advocate for regulatory frameworks that safeguard their rights and interests.
Conclusion
As AI continues to permeate various sectors, the question of liability looms large. The evolving legal landscape reflects a delicate balance between fostering innovation and ensuring accountability. With regulations gradually taking shape in the U.S. and beyond, stakeholders must navigate the complexities of AI liability with transparency, accountability, and risk assessment at the forefront. In doing so, they can mitigate legal risks while harnessing AI's transformative potential for the betterment of society.