I've been thinking about whether it would be for the best to change the first law of the default AI lawset. I already brought this up in another thread, but maybe it's better to start a fresh discussion about it. The proposed first law:

"You may not injure a human being or cause a human being to come to harm."
It would make human orders more important, so that silicons couldn't use Law 1 as an excuse to ignore orders unless the danger is really obvious. Currently it seems fairly common for AI players to make vague predictions about whether something might lead to harm and act extremely obstructively, even though no humans ordered them to and no humans were harmed - and then to say only "Law 1 trumps all others" when confronted about it, even when their "harm prevention" went against the orders of humans.
Unlike the previous trial of removing the inaction clause entirely, it would still bar silicons from creating extremely dangerous situations, such as randomly letting people into the armory or atmospherics, or provoking others to harm humans for them - since in those circumstances they would still be the cause of the harm. However, it would also mean that they are no longer explicitly instructed to take action without a human's orders.
I believe it would generally make silicons a bit more neutral in ongoing conflicts - without the inaction clause, silicons would no longer have free rein to act on a hunch that something might lead to harm and then use it as an excuse to act on their own initiative while ignoring the orders they're given.
I do believe it would be a good thing.