So, I got an interesting response from a reader concerning my recent rant on AI and robots and old science fiction. The part that raised some questions was:
Isaac Asimov's Three Laws of Robotics are a set of guidelines for the behavior of robots, designed to ensure their interaction with humans is safe and ethical. They are: 1) A robot may not harm a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The response from this reader was:
Law 1 is hugely problematic. Just think of all the 'hate laws' being pushed at the moment. What is 'harm'? And what if stopping a human being coming to harm requires harming them?
Yep. He's got it right. But then you have to think a little past that: about those laws, what they are trying to do, who is writing them, and the culture that promulgated them.
Consider for a moment the “trolley problem” lurking in my reader's question. “Holy Kobayashi Maru, Batman!” This tired conundrum gets trotted out and undergraduates preen and strut with their tired-ass rationales.
But I think that this kind of thing is exactly what worries my gentle reader who pointed out the dilemma. Our society really can’t stand the idea of “you’re damned if you do and you’re damned if you don’t”.
The standard presentation of the trolley problem is one where the mental and physical states of the person at the switch and of the victims on the tracks are unknown. That framing is both simplistic and stupid.
Imagine your own petty bigotries and problematic actions (and please don’t think they aren’t there), and then imagine that you knew the identities and mental states of the “victims” on the track. Now you have a real problem, don’t you?
What if the “one” is your daughter? I would venture to guess that there would be five dead people at the end of the experiment. What if you knew that four of the five had a terminal disease and would die within a week? Would the change in the timing of their deaths mean anything to you?
Let’s use an imaginary “Harry Potter” scenario but with no “magic” to help you out. What if the “one” was Sweet Hermione and the “five” were mean-old Slytherins and you were a Hufflepuff? Maybe a different answer depending on your house. I am certain members of Ravenclaw and Slytherin would not take much time to make their respective choices.
The robots and intelligences that we are trying to build will inherit the same hodgepodge of conflicting goals, prejudices, compromises, and methodologies that makes up our laws. In the end, the rules coded into them will be our rules, because we did the coding. The chance that they will come up with a solution that makes everyone happy is exactly zero.
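To make the point concrete, here is a minimal, purely hypothetical sketch of what "coding in" the Three Laws might look like: the laws become a strict priority ordering over candidate actions, and the programmer's own value judgment (here, the naive assumption that harm is just a body count) is baked right into the comparison. The names `Action` and `choose_action` are invented for illustration, not anyone's actual robot software.

```python
# Hypothetical sketch: Asimov's Three Laws as a strict priority ordering.
# The coder's values (counting bodies to measure "harm") are hard-wired in.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humans: int      # humans harmed by taking this action (First Law)
    disobeys_order: bool   # violates a human order (Second Law)
    destroys_self: bool    # sacrifices the robot (Third Law)

def choose_action(actions):
    # First Law dominates: minimize harm to humans, including harm
    # allowed "through inaction". Ties fall through to the lower laws.
    return min(actions, key=lambda a: (a.harms_humans,
                                       a.disobeys_order,
                                       a.destroys_self))

# A trolley-style dilemma: every available choice harms someone.
trolley = [
    Action("pull the lever", harms_humans=1, disobeys_order=False, destroys_self=False),
    Action("do nothing",     harms_humans=5, disobeys_order=False, destroys_self=False),
]

print(choose_action(trolley).name)  # the coded rules pick "pull the lever"
```

Notice that the machine does not resolve the dilemma; the programmer did, the moment harm was defined as an integer to be minimized. Swap in a different definition of harm (your daughter on one track, say) and the "impartial" rules produce a different verdict.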
My solution to the trolley problem is that I would walk away. If there is no way to win, don’t play. Maybe that is what we need to teach.