Robot Criminals

When a robot harms humans, are there any grounds for holding it criminally liable for its misconduct? Yes, provided that the robot is capable of making, acting on, and communicating the reasons behind its moral decisions. If such a robot fails to observe the minimum moral standards that society requires of it, labeling it as a criminal can effectively fulfill criminal law’s function of censuring wrongful conduct and alleviating the emotional harm that may be inflicted on human victims.

Imposing criminal liability on robots does not absolve robot manufacturers, trainers, or owners of their individual criminal liability; nor is robot liability rendered redundant by human liability. It is possible that no human is sufficiently at fault in causing a robot to commit a particular morally wrongful action. Moreover, imposing criminal liability on robots may sometimes have significant instrumental value, such as helping to identify culpable individuals and serving as a self-policing device for those who interact with robots. Finally, treating robots that satisfy the above-mentioned conditions as moral agents appears much more plausible if we adopt a less human-centric account of moral agency.