The Case for Outsourcing Morality to AI

February 2, 2023

(Wired) – Once, it made sense to hold the manufacturer or operator of a machine responsible if the machine caused harm, but with the advent of machines that could learn from their interactions with the world, this practice made less sense. Learning automata (to use the philosopher Andreas Matthias's terminology) could do things that were neither predictable nor reasonably foreseeable by their human overseers. What's more, they could do these things without direct human supervision or control. It would no longer be morally fair or legally just to hold humans responsible for the actions of machines.