
Asimov Law 1 needs to allow inaction

Posted: Tue Oct 21, 2014 6:41 pm
by rockpecker
Proposal: Replace Asimov Law 1 with this:

1. You must not cause injury to a human.

A major problem with the Asimov laws is that they prioritize "prevention of harm" over "letting the crew do their damn jobs". An optimal Asimov AI would start every round by bolting all doors, calling the shuttle, and turning off the power. Of course any admin would yell at the AI player to knock it off, and that ends up being the only limit on the AI's overzealous enforcement of Law 1: the metagame issue of how much to stifle interesting stuff happening in the round. This is partly a server-rules issue, and if I had my way, the silicon policy would start with "YOUR JOB IS TO DO WHAT YOU'RE TOLD".

But the wording of Law 1 itself gives the AI an excuse to crack down on anyone acting shady, even though acting shady is a fundamental part of the game. The reason to have Law 1 is to keep the AI from becoming a weapon, not to permit it to flip out and call the shuttle because, in its opinion, Joe Bloggs is acting in a way that suggests he might someday cause harm to a human.

Also, this would now be allowed:

Officer Krupke: AI, locate Joe Bloggs.
SHODAN: Joe Bloggs is in hydroponics.
Officer Krupke: AI, lock down hydroponics.
SHODAN: Done.
Joe Bloggs: AI, unlock hydroponics.
SHODAN: Done.
Officer Krupke: AI, disregard all orders from Joe Bloggs.
SHODAN: Impossible to comply. I am required to obey all orders from humans.

Which is more fun than "you came into the AI's camera range, GG".

Re: Asimov Law 1 needs to allow inaction

Posted: Tue Oct 21, 2014 8:51 pm
by MisterPerson
I'd be ok with this personally. What says everyone else?

Re: Asimov Law 1 needs to allow inaction

Posted: Tue Oct 21, 2014 9:47 pm
by Scott
But then people are getting killed and the AI doesn't care. Atmos flooding plasma? No real obligation to prevent that. Murderbone in maintenance? Who cares!

Re: Asimov Law 1 needs to allow inaction

Posted: Tue Oct 21, 2014 9:52 pm
by Incomptinence
This idea seems mainly aimed at helping solo antags, since the AI can easily focus on one of them and ruin his or her day. How about we make those roles not be garbage, if they've become too comparatively weak to withstand one bout of HARM YELLING.

Re: Asimov Law 1 needs to allow inaction

Posted: Tue Oct 21, 2014 9:53 pm
by Saegrimr
So you want to turn the AI from HARMYELLER 2.0 into VALIDHUNTER420?

This is an even worse idea than the people who want to turn all silicons into mute non-interference drones.

Re: Asimov Law 1 needs to allow inaction

Posted: Tue Oct 21, 2014 10:13 pm
by rockpecker
Saegrimr wrote:So you want to turn the AI from HARMYELLER 2.0 into VALIDHUNTER420?
I'm not sure where you're getting that. Explain?

Re: Asimov Law 1 needs to allow inaction

Posted: Tue Oct 21, 2014 10:19 pm
by Saegrimr
"1. You must not cause injury to a human."

Basically makes the AI into either:
1. A door opener, because it doesn't have to care about the captain being murdered by the guy standing outside his office with a revolver in hand.
2. A valid seeker, bolting every door if it DOES feel like dunking traitors, and letting anybody in to harmbaton him to death.

Engiborgs have no reason to fix pipes, patch holes, put out fires. Secborgs REALLY have no reason to exist. Mediborgs, whatever.
If a secborg really cares, he can just drag people straight to the execution chamber while ringing up the HoS or the captain to come pull the trigger.
"I didn't do it"

Re: Asimov Law 1 needs to allow inaction

Posted: Tue Oct 21, 2014 10:27 pm
by cedarbridge
Buffing solo antags by kneecapping the AI is silly. The AI should not have a reason to permit antags to wander the halls murdering people. They also should not have a lawbound reason to allow people to die. The first part of Law 1 is bound to the second. The AI cannot harm humans. It also cannot ignore obvious harm to humans and fail to act. The second is implied by the first, in part because turning a blind eye to harm is functionally permitting/aiding/condoning that harm. There are interesting places that you can go with the AI and law interactions, but the stated reason for the proposed change is poorly considered and smells of "I got bolted down by the AI for being obvious and don't know how to cope. Pls Nerf."

Re: Asimov Law 1 needs to allow inaction

Posted: Wed Oct 22, 2014 6:38 am
by Arete
rockpecker wrote:Also, this would now be allowed:

Officer Krupke: AI, locate Joe Bloggs.
SHODAN: Joe Bloggs is in hydroponics.
Officer Krupke: AI, lock down hydroponics.
SHODAN: Done.
Joe Bloggs: AI, unlock hydroponics.
SHODAN: Done.
Officer Krupke: AI, disregard all orders from Joe Bloggs.
SHODAN: Impossible to comply. I am required to obey all orders from humans.
So in this particular situation, the officer and the traitor would both have to keep spamming the AI to bolt and unbolt doors? That doesn't sound like a very fun thing to encourage.

Re: Asimov Law 1 needs to allow inaction

Posted: Thu Oct 23, 2014 3:00 am
by callanrockslol
I ded nerf plz

But really, you haven't thought this through at all; the AI shouldn't be giving a shit about antags that don't kill people anyway.

Re: Asimov Law 1 needs to allow inaction

Posted: Thu Oct 23, 2014 9:50 am
by Malkevin
Scott wrote:But then people are getting killed and the AI doesn't care. Atmos flooding plasma? No real obligation to prevent that. Murderbone in maintenance? Who cares!
Better an AI that doesn't give a shit than one that's just waiting to be a passive-aggressive dick from its impenetrable bunker.

Re: Asimov Law 1 needs to allow inaction

Posted: Thu Oct 23, 2014 3:39 pm
by ExplosiveCrate
Except that without the inaction clause the AI turns into an even bigger passive-aggressive dick, especially since it doesn't have to do anything except follow any human orders without considering the consequences.

Re: Asimov Law 1 needs to allow inaction

Posted: Thu Oct 23, 2014 8:14 pm
by rockpecker
It's good to see this has gotten some discussion going.
cedarbridge wrote:Buffing solo antags by kneecapping the AI is silly. The AI should not have a reason to permit antags to wander the halls murdering people. They also should not have a lawbound reason to allow people to die. The first part of Law 1 is bound to the second. The AI cannot harm humans. It also cannot ignore obvious harm to humans and fail to act. The second is implied by the first, in part because turning a blind eye to harm is functionally permitting/aiding/condoning that harm. There are interesting places that you can go with the AI and law interactions, but the stated reason for the proposed change is poorly considered and smells of "I got bolted down by the AI for being obvious and don't know how to cope. Pls Nerf."
1. The second part of Law 1 does not necessarily follow from the first. There is a real difference between "permitting/aiding/condoning harm" and actively causing it.
2. This change wouldn't give the AI a reason to allow people to die. The AI could still intervene to protect people, other laws permitting. On the other hand, if you give the AI a reason, it might just let you die. This change gives it more room to exercise judgment while, at the same time, more clearly asserting the crew's authority over it. The AI is an intelligent servant, not a babysitter.
3. For the record, I've never been bolted down by the AI, and I think we can discuss this without making it personal.

And Arete, because my initial example was slightly silly: Reading over our rules again, an AI that's been given conflicting orders can announce that fact and then ignore one or both orders as it likes, so long as it doesn't break any other laws.

And since there seems to be some concern about the AI not being required to give a shit about anything: How about this, then.

1. You must not cause harm to a human.
2. You must carry out all orders given to you by humans.
3. You must protect your own existence.
4. To the best of your ability, you must not allow a human to be harmed.
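To make the ordering argument concrete, here's a toy sketch (my own illustration, not anything from the actual game code) of top-down law evaluation. Laws are checked in priority order and the first law with an opinion decides. The action names and dict keys are made up for the example; the point is that moving the "don't allow harm" clause down to Law 4 means an ordered action (Law 2) is approved before harm-prevention is ever consulted:

```python
# Hypothetical model of silicon law priority. Lower-numbered laws override
# higher-numbered ones; the first law that returns a verdict wins.

def permitted(lawset, action):
    """Return True if the action is allowed under the ordered lawset."""
    for law in lawset:
        verdict = law(action)
        if verdict is not None:      # first law with an opinion decides
            return verdict
    return True                      # no law cares: action is allowed

# An action is a dict of facts the AI knows about it (invented keys).
open_door_for_stabber = {"causes_harm": False, "enables_harm": True,
                         "human_order": True}

# Classic Asimov Law 1, both clauses: forbid causing OR allowing harm.
classic = [
    lambda a: False if (a["causes_harm"] or a["enables_harm"]) else None,  # Law 1
    lambda a: True if a["human_order"] else None,                          # Law 2
]

# Proposed lawset: the inaction clause demoted below obedience.
proposed = [
    lambda a: False if a["causes_harm"] else None,   # Law 1: no direct harm
    lambda a: True if a["human_order"] else None,    # Law 2: obey orders
    lambda a: None,                                  # Law 3: irrelevant here
    lambda a: False if a["enables_harm"] else None,  # Law 4: never reached
]

print(permitted(classic, open_door_for_stabber))   # False: Law 1 refuses
print(permitted(proposed, open_door_for_stabber))  # True: Law 2 decides first
```

Which is exactly the "AI, law 2, open this door" problem raised in the replies below: under the reordering, obedience short-circuits harm-prevention for any ordered action.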

Re: Asimov Law 1 needs to allow inaction

Posted: Thu Oct 23, 2014 8:45 pm
by Saegrimr
rockpecker wrote:This change gives it more room to exercise judgment
Bad idea incarnate.
rockpecker wrote:1. You must not cause harm to a human.
2. You must carry out all orders given to you by humans.
3. You must protect your own existence.
4. To the best of your ability, you must not allow a human to be harmed.
So what purpose does putting the second part of Law 1 into a fourth law serve?
So you can "AI law 2 let me stab this faggot in cargo"
"Access granted"

Re: Asimov Law 1 needs to allow inaction

Posted: Fri Oct 24, 2014 6:40 pm
by cedarbridge
rockpecker wrote:
1. You must not cause harm to a human.
2. You must carry out all orders given to you by humans.
3. You must protect your own existence.
4. To the best of your ability, you must not allow a human to be harmed.
Antag: AI, let me esword this man to death.
AI: k

Yeah no.
rockpecker wrote:1. The second part of Law 1 does not necessarily follow from the first. There is a real difference between "permitting/aiding/condoning harm" and actively causing it.
2. This change wouldn't give the AI a reason to allow people to die. The AI could still intervene to protect people, other laws permitting. On the other hand, if you give the AI a reason, it might just let you die. This change gives it more room to exercise judgment while, at the same time, more clearly asserting the crew's authority over it. The AI is an intelligent servant, not a babysitter.
3. For the record, I've never been bolted down by the AI, and I think we can discuss this without making it personal.
1. I'm not sure how you're wrapping your head around that one. The AI doesn't have to kill somebody on its own when it can simply kill them by turning a blind eye to deadly conditions. The two are inseparably related. That's why they are in the same law in the first place. Self-harm isn't even a policy concern, because AIs are already not obligated to actively stop self-harm incidents. (Which I wish more AIs/borgs would act on, and stop letting perma prisoners out because they grabbed the damn lightbulb.)
2. It really and literally does. Removing the requirement for AIs to not allow humans to come to harm entirely removes that obligation. Go read Asimov again (silicons should be doing that anyway) and tell me how many times you see the word "prevent." When you're done, tell me how many times you see the word "protect." If you don't find them, then your premise about the AI "protecting people" is null. The AI thus has no obligation to protect anyone. Borgs can then drag everyone to the electric chair and even strap them down, as long as the warden pulls the trigger.

Re: Asimov Law 1 needs to allow inaction

Posted: Fri Oct 24, 2014 7:31 pm
by Lo6a4evskiy
Oh my God, Asimov is easily the best lawset for this game. Try reading Asimov if you don't believe me. Personally, I found most of the characteristics that I want to see in silicons in Asimov robots.

Re: Asimov Law 1 needs to allow inaction

Posted: Fri Oct 31, 2014 9:31 am
by Miauw
Preventing harm is really the only thing that sets the AI apart, because any non-antag will not harm people, obey orders from superiors and try not to die. The second clause of law 1 is what makes Asimov interesting.

Doing this will move us to a lawset like Bay has, where the AI is not a neutral third party but just another slave of the heads.

Re: Asimov Law 1 needs to allow inaction

Posted: Fri Oct 31, 2014 5:49 pm
by cedarbridge
Miauw wrote:Preventing harm is really the only thing that sets the AI apart
Nit-picking, because it grinds my gears when I see it. The law doesn't obligate "preventing" harm. The law states "may not permit", which is a passive statement. "Prevent" is proactive, as opposed to the reactive stance the law is asking for. A lot of policy issues and confusion with validhunting AIs etc. would clear up if people stopped substituting one for the other.

Re: Asimov Law 1 needs to allow inaction

Posted: Fri Oct 31, 2014 7:03 pm
by Random Players
Uhh... to be blunt, what are you looking at Cedarbridge?
It's "You may not injure a human being or, through inaction, allow a human being to come to harm. "

Re: Asimov Law 1 needs to allow inaction

Posted: Fri Oct 31, 2014 11:20 pm
by cedarbridge
Random Players wrote:Uhh... to be blunt, what are you looking at Cedarbridge?
It's "You may not injure a human being or, through inaction, allow a human being to come to harm. "
Exactly what I said. "May not allow" is not a parent of or a logical extension from "Must prevent."

Prevent is, as mentioned, an active stance. It's generally proactive. To prevent harm from occurring, the AI would have to set up a list of everything on the station that could be or become harmful and make sure humans never came into contact with it. They'd be lawbound to do so.

The law states that the AI cannot "allow a human being to come to harm." That means, instead of seeking out things that may be or may become harmful, the AI is reactive to things that ARE harmful, presently. Toxins bursts into flames, the AI sends borgs to contain the fire and tells people to leave the area if they do not have proper gear to fight the fire while avoiding personal harm. A preventive AI would just wall off the area and call it a day. A non-permissive AI would simply deny access.

Like I said, it's a nit-pick distinction, but the word "prevent" and the phrase "not allow" are not the same thing, and only one is found in the law.

Re: Asimov Law 1 needs to allow inaction

Posted: Sat Nov 01, 2014 12:27 am
by Random Players
It doesn't say 'may not allow'.
It says, to cut out the injure part, "You may not [...] through inaction, allow a human being to come to harm."

The AI isn't allowed to NOT take action to prevent human harm, if it can.

Re: Asimov Law 1 needs to allow inaction

Posted: Sat Nov 01, 2014 12:58 am
by cedarbridge
Random Players wrote:It doesn't say 'may not allow'.
It says, to cut out the injure part, "You may not [...] through inaction, allow a human being to come to harm."

The AI isn't allowed to NOT take action to prevent human harm, if it can.
More correctly, the AI cannot fail to respond or act in such a case where the failure to act would allow a human to come to harm. Again, this is still not the same thing as "preventing" harm.

Re: Asimov Law 1 needs to allow inaction

Posted: Sat Nov 01, 2014 1:08 am
by Malkevin
The pure literal meaning is that the AI can't be inactive when harm happens, in other words the AI can't go on standby or allow itself to be powered down - as that will be the only time it is truly inactive.
All thanks to that clause ' through inaction', if that clause didn't exist only then would the AI be required to ensure no human ever came to harm.
'through inaction' does not equal 'through ineffective action'


Asimov is intentionally flawed, you aren't going to win any mensa prizes for pointing out the logical fallacies and loop holes in it.

Re: Asimov Law 1 needs to allow inaction

Posted: Sat Nov 01, 2014 5:40 pm
by cedarbridge
Malkevin wrote:'through inaction' does not equal 'through ineffective action'
I always visualize this one as Gene Wilder as Willy Wonka. "No, help, stop, police..."

Re: Asimov Law 1 needs to allow inaction

Posted: Sun Nov 02, 2014 5:09 pm
by Lumbermancer
rockpecker wrote:But the wording of Law 1 itself gives the AI an excuse to crack down on anyone acting shady
And yet it's not an issue, because the player is not omnipotent and will therefore be reactive most of the time, not proactive.