The Three Laws demand infinite wisdom in zero time.
I was asked why we have not implemented Asimov’s Laws in robotics.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
We of course have no system which can even recognise those failure states adequately, let alone enforce them as an inviolable condition of AI action.
Even if we had such a model, the “through inaction” clause is a horrible trap. Whatever process we run to evaluate a situation places the robot in a quandary: time spent evaluating is itself a delay that could lead to harm through inaction. Yet rushing in risks a different failure; a robot that runs into a burning building to rescue a trapped human could trigger a structural collapse, harming the victim or other undetected victims.
If new sensor data is established during the process, do we restart and risk delay, or ignore the information?
Using an anytime algorithm has risks either way: committing too early risks harm from a wrong action, while committing too late risks harm from delay.
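The trade-off can be sketched as an anytime loop: the estimate improves the longer we refine it, and the deadline forces a commitment regardless of how rough it still is. A minimal Python sketch, with `refine` and `estimate` as hypothetical stand-ins for the robot’s actual evaluation process:

```python
import time

def anytime_assess(state, deadline, refine, estimate):
    """Anytime pattern: keep refining a risk assessment until the
    deadline, then commit to whatever answer is currently available.
    `refine` and `estimate` are hypothetical stand-ins for whatever
    evaluation the robot actually runs."""
    while time.monotonic() < deadline:
        state = refine(state)      # one more increment of analysis
    return estimate(state)         # best available answer at cutoff

# Toy run: each refinement step just sharpens a number, and the
# deadline alone decides how sharp an answer we act on.
result = anytime_assess(
    0,
    time.monotonic() + 0.01,   # 10 ms budget before we must act
    lambda s: s + 1,
    lambda s: s,
)
```

One possible answer to the restart-or-ignore question is visible in this shape: fresh sensor data can be folded into the state by `refine` on the next iteration, rather than forcing a restart, though that is a design choice and not a resolution of the underlying dilemma.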
The world is not deterministic, we cannot perfectly predict outcomes, and new information is constantly arriving. Even if we had a system capable of evaluating harmful states, there’s an infinite regress problem in knowing when to trigger action.
In fact, since assessing risk consumes runtime and we can never be sure the assessment is complete, a Three Laws-compliant machine might never be able to perform any other function. Every clock cycle spent making the coffee, building the bridge, or collecting the shopping steals from the runtime which, on a strict reading of the Laws, must be spent assessing dangers. To do otherwise would be inaction.
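The starvation argument can be made concrete: if the assessment step never certifies completion, a scheduler that gives it absolute priority never reaches the useful work at all. A toy sketch, with all names hypothetical:

```python
def assess_one_step():
    # The world keeps changing, so under a strict reading of the
    # Laws the danger assessment is never finished.
    return False

def strict_compliance_loop(cycles):
    """Give every cycle to danger assessment; only do useful work
    once assessment reports completion -- which it never does."""
    work_done = 0
    for _ in range(cycles):
        if assess_one_step():
            work_done += 1     # unreachable under the strict reading
    return work_done
```

However large the cycle budget, `strict_compliance_loop` returns zero: the coffee never gets made.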
Asimov’s own writings were essentially logic puzzles about the failure states of apparently well-formed laws; the logical contradictions in the Laws were the very problem the stories explored. But there is a computational problem too: the Laws create a system that becomes more paralysed in step with its capabilities, because smarter systems recognise more subtle ways in which their own decision-making process might cause harm.