If a Self-Driving Car Gets in an Accident, Who—or What—Is Liable?
The carmaker, the car owner, or the robot car itself? On the surprisingly not-crazy argument for granting robots legal personhood.
On first contact with the idea that robots should be extended legal personhood, it sounds crazy. Robots aren’t people! And that is true. But the concept of legal personhood is less about what is or is not a flesh-and-blood person and more about who or what can be hauled into court.

And if we want robots to do more things for us, like drive us around or deliver things to us, we might need to assign them a role in the law, says lawyer John Frank Weaver, author of the book Robots Are People, Too, in a post at Slate. “If we are dealing with robots like they are real people, the law should recognize that those interactions are like our interactions with real people,” Weaver writes. “In some cases, that will require recognizing that the robots are insurable entities like real people or corporations and that a robot’s liability is self-contained.”

Here’s the problem: If we don’t define robots as entities with certain legal rights and obligations, we will have a very difficult time using them effectively. And the tool we have for assigning those rights and obligations is legal personhood.

Right now, companies like Google, which operate self-driving cars, are in a funny place. Say Google sold a self-driving car to you, and then it got into an accident. Who should be responsible for the damages—you or Google? The algorithm that drives the car, not to mention the sensors and all the control systems, is Google’s product. Even the company’s own people have argued that tickets should not be given to any occupant of the car, but to Google itself.

(via If a Self-Driving Car Gets in an Accident, Who—or What—Is Liable? - The Atlantic)