When an AI system injures a lot of people, what will the lawsuits look like?
Will it be easier or harder to bring a class action for claims about AI products?
As we get into the swing of a new year, we find ourselves in the midst of another flurry of legislative activity around AI. For years, a lot of conversation has focused on what kinds of laws we need to pass to regulate AI—do we need new laws, and what should those new laws say?
Those are important questions. But we are also finding out that, regardless of the state of the law, the lawsuits are already here. And that raises a whole different set of questions about how suing people over AI is going to work. Who can sue whom? Over what? What will it take to prove a case? What kinds of remedies are on the table? These questions are part of the broader debate over how we should regulate AI, and they have arisen in various ways in legislative debates so far. But they can also take on a life of their own, especially as litigation springs up that forces courts to resolve these issues before legislators or regulators proactively address them. Part of what I hope to do on this blog is highlight the world of AI litigation alongside questions of AI policy, because litigation is going to be an important part of the policy picture.
And right now, one of the issues I find most interesting in AI litigation is a set of questions surrounding class actions. Class actions are an incredibly important type of lawsuit, especially in areas like consumer protection and civil rights. By allowing plaintiffs to join their claims together, they enable lawsuits that would be too expensive to bring individually; and by putting a lot of money on the table, they create powerful incentives, for better or worse, for lawyers to bring those lawsuits. So this post addresses one particular feature of class actions in the world of AI that I think we will be seeing more of in the months and years ahead: when AI tools' automation of tasks will make class actions easier to bring, and when it will make them harder. The answer is likely to shape how AI policy is implemented and how companies assess the risks of AI tools, which in turn will affect the uptake and distribution of the underlying technology.
Class actions and AI
We have some early evidence that AI litigation is going to involve a lot of class actions. Some of the most high-profile lawsuits involving machine-learning tools so far have been brought as proposed class actions, like the RealPage class actions alleging antitrust violations in rental pricing or the UnitedHealth class action alleging unlawful claim denials. There are other important suits that aren't class actions, too, like the New York Times' lawsuit against OpenAI. But class actions have been showing up often in the AI dockets over the last couple of years.
And, as I describe at more length in a forthcoming law review article, this shouldn't be that surprising. A key source of the putative value of AI (although not the only one) is its ability to automate tasks that previously required more human attention. In other words, AI tools allow businesses and individuals to automate and scale up processes, goods, and services that used to be harder to provide at scale. You might, for instance, have a smartphone app or a chatbot that can perform some kinds of medical diagnosis for hundreds or thousands of people at very little marginal cost compared to having doctors diagnose each patient.
And what happens when that app makes mistakes? Well, then the impacts of those mistakes may be correspondingly scaled up, and class actions—one of the main ways the law addresses injuries at scale—become a natural fit.
But will class actions work well when it comes to AI-related problems? It's a big question, and it's too early to make much in the way of blanket pronouncements. But there are a couple of potential interactions between AI tools and class action doctrine that I think are worth paying attention to. And one of them is the question whether AI tools are going to make it easier or harder to satisfy the requirement that members of a class action be injured in ways that are similar to each other.
Will AI-related injuries cause plaintiffs to be more similar to each other, or more different?
Class actions are powerful procedural devices, and not every case can be turned into one. There are many requirements, but some of the most important ones boil down roughly to the idea that the different class members' legal claims have to be similar to each other—and similar enough that it makes sense to resolve the claims all together, rather than addressing them individually. Bearing these similarity requirements in mind, I think there are two effects that AI tools are likely to have on the litigation landscape—what I think of as a “homogenizing effect” and a “differentiating effect.”
First, AI tools will have a homogenizing effect on plaintiffs’ claims in contexts where preexisting decentralized or informal systems are replaced with a single AI process. Take, for instance, Wal-Mart v. Dukes, in which the Supreme Court said that a group of women alleging employment discrimination could not proceed as a class action because, among other things, their allegations involved many different decisions made by many different individual managers across the country. Such a suit might well look different in a world in which a large company uses an AI tool as a significant part of its process for, e.g., hiring, promotion, or pay decisions. Using a single system to deal with many decisions can create a common failure point for people affected by those decisions. In a suit like Dukes alleging bias, if it’s possible to prove that the automated system was biased, that may go a long way toward advancing the claims of all the plaintiffs at once—making a strong case for class treatment. Of course, there may be many other obstacles, but all else being equal such a substitution from many different actors to one AI system should tend to make plaintiffs' claims more similar and thus more amenable to being brought as a class action.
But in other circumstances there is also the potential for the opposite effect: a differentiating effect that tends to make claims less similar to each other. That's because AI tools will allow for much more nuanced and individualized automation than previous types of systems. Think about an online investment adviser. It might previously have had a system where you click through a few screens with questions about your risk tolerance and financial goals and then end up with a recommended set of investment options. All of this—the questions, the description of the options—would be done with boilerplate text, identical for everyone.
In the near future, it's plausible to imagine that your account could instead be managed by an AI chatbot, which would have a bespoke conversation with you about your goals and offer real-time, responsive pitches for different investment products as part of that chat. A lawsuit alleging, for instance, that the company's advice was misleading or self-dealing might face more hurdles under this regime. Because the relevant communications are different for every individual, claims about those communications (like whether they were misleading) could be more difficult to prove on a class-wide basis. Again, this is on an "all else being equal" basis—these class actions may still be doable. But AI tools that allow for more differentiated treatment of consumers, employees, or other groups of people will tend to put hurdles in the path of class actions relating to those tools.
The world of AI litigation
It's too soon to say whether the homogenizing or the differentiating effect will be more pronounced, or how big the effects will be in general. But they are worth thinking about in the months and years ahead for a few reasons. First, as policymakers consider the rules and regulations that will govern AI, it will be important to keep an eye on enforcement mechanisms like private lawsuits—and it might be a mistake to assume they will operate in the AI context the same way they do in other contexts. Second, as everyone else considers the use of these new tools—whether as consumers, employees, employers, product developers, etc.—it will be important to have a sense of what the relevant risks are. Litigation will play a big role in that, both as a source of one kind of salient risk (litigation risk) and as a deterrent shaping conduct. So it's worth paying attention to the growing world of AI litigation, alongside the substantive laws on the table and the development of the technology more broadly.