Asimov Law 1 needs to allow inaction
- Joined: Mon Oct 20, 2014 11:43 pm
- Byond Username: Rockpecker
Asimov Law 1 needs to allow inaction
Proposal: Replace Asimov Law 1 with this:
1. You must not cause injury to a human.
A major problem with the Asimov laws is that they prioritize "prevention of harm" over "letting the crew do their damn jobs". An optimal Asimov AI would start every round by bolting all doors, calling the shuttle, and turning off the power. Of course any admin would yell at the AI player to knock it off, and that ends up being the only limit on the AI's overzealous enforcement of Law 1: the metagame issue of how much to stifle interesting stuff happening in the round. This is partly a server-rules issue, and if I had my way, the silicon policy would start with "YOUR JOB IS TO DO WHAT YOU'RE TOLD".
But the wording of Law 1 itself gives the AI an excuse to crack down on anyone acting shady, even though acting shady is a fundamental part of the game. The reason to have Law 1 is to keep the AI from becoming a weapon, not to permit it to flip out and call the shuttle because, in its opinion, Joe Bloggs is acting in a way that suggests he might someday cause harm to a human.
Also, this would now be allowed:
Officer Krupke: AI, locate Joe Bloggs.
SHODAN: Joe Bloggs is in hydroponics.
Officer Krupke: AI, lock down hydroponics.
SHODAN: Done.
Joe Bloggs: AI, unlock hydroponics.
SHODAN: Done.
Officer Krupke: AI, disregard all orders from Joe Bloggs.
SHODAN: Impossible to comply. I am required to obey all orders from humans.
Which is more fun than "you came into the AI's camera range, GG".
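The order-handling the dialogue above implies can be sketched in a few lines. This is an illustrative sketch only, not actual game code; the function name and order strings are hypothetical, and it assumes the proposed Law 1 ("You must not cause injury to a human") alongside a standard "obey humans" Law 2:

```python
# Illustrative sketch only -- none of this is actual game code. It just
# models the order-handling the dialogue above implies under the proposed
# Law 1 ("You must not cause injury to a human").

def handle_order(order: str) -> str:
    """Resolve a human order, assuming Law 1 only forbids *causing* injury."""
    # An order to ignore a human's future orders can't be obeyed:
    # complying would itself violate Law 2 (obey all humans).
    if order.startswith("disregard"):
        return "Impossible to comply. I am required to obey all orders from humans."
    # Bolting or unbolting a door causes no injury directly, so Law 1
    # is silent and Law 2 says to comply -- from either side.
    return "Done."

print(handle_order("lock down hydroponics"))
print(handle_order("unlock hydroponics"))
print(handle_order("disregard all orders from Joe Bloggs"))
```

The point of the sketch: both the officer's and the traitor's door orders go through, and only the "disregard" order is refused, because refusing to obey a human is the one thing Law 2 still forbids.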
Last edited by rockpecker on Tue Oct 21, 2014 8:57 pm, edited 1 time in total.
Remove the AI.
- MisterPerson
- Board Moderator
- Joined: Tue Apr 15, 2014 4:26 pm
- Byond Username: MisterPerson
Re: Asimov Law 1 needs to allow inaction
I'd be ok with this personally. What says everyone else?
I code for the code project and moderate the code sections of the forums.
Feedback is dumb and it doesn't matter
- Github User
- Joined: Fri Apr 18, 2014 1:50 pm
- Byond Username: Xxnoob
- Github Username: xxalpha
Re: Asimov Law 1 needs to allow inaction
But then people are getting killed and the AI doesn't care. Atmos flooding plasma? No real obligation to prevent that. Murderbone in maintenance? Who cares!
- Joined: Fri May 02, 2014 3:01 am
- Byond Username: Incomptinence
Re: Asimov Law 1 needs to allow inaction
This idea seems mainly aimed at helping solo antags, since the AI can easily focus on one of them and ruin his or her day. How about we instead make those roles not garbage, if they've become too weak to withstand one bout of HARM yelling?
- Saegrimr
- Joined: Thu Jul 24, 2014 4:39 pm
- Byond Username: Saegrimr
Re: Asimov Law 1 needs to allow inaction
So you want to turn the AI from HARMYELLER 2.0 into VALIDHUNTER420?
This is an even worse idea than the people who want to turn all silicons into mute non-interference drones.
tedward1337 wrote:Sae is like the racist grandad who everyone laughs at for being racist, but deep down we all know he's right.
- Joined: Mon Oct 20, 2014 11:43 pm
- Byond Username: Rockpecker
Re: Asimov Law 1 needs to allow inaction
Saegrimr wrote:So you want to turn the AI from HARMYELLER 2.0 into VALIDHUNTER420?
I'm not sure where you're getting that. Explain?
Remove the AI.
- Saegrimr
- Joined: Thu Jul 24, 2014 4:39 pm
- Byond Username: Saegrimr
Re: Asimov Law 1 needs to allow inaction
"1. You must not cause injury to a human."
Basically makes the AI into either:
1. A door opener, because it doesn't have to care about the captain being murdered by the guy standing outside his office with a revolver in hand.
2. A valid seeker, bolting every door if it DOES feel like dunking traitors, and letting anybody in to harmbaton them to death.
Engiborgs have no reason to fix pipes, patch holes, or put out fires. Secborgs REALLY have no reason to exist. Mediborgs, whatever.
If a secborg really cares, he can just drag people straight to the execution chamber while ringing up the HoS or the captain to come pull the trigger.
"I didn't do it"
tedward1337 wrote:Sae is like the racist grandad who everyone laughs at for being racist, but deep down we all know he's right.
- cedarbridge
- Joined: Fri May 23, 2014 12:24 am
- Byond Username: Cedarbridge
Re: Asimov Law 1 needs to allow inaction
Buffing solo antags by kneecapping the AI is silly. The AI should not have a reason to permit antags to wander the halls murdering people. They also should not have a lawbound reason to allow people to die. The first part of Law 1 is bound to the second. The AI cannot harm humans. It also cannot ignore obvious harm to humans and fail to act. The second is implied by the first, in part because turning a blind eye to harm is functionally permitting/aiding/condoning that harm. There are interesting places you can go with the AI and law interactions, but the stated reason for the proposed change is poorly considered and smells of "I got bolted down by the AI for being obvious and don't know how to cope. Pls Nerf."
- Arete
- Joined: Mon Aug 04, 2014 12:55 am
- Byond Username: Arete
Re: Asimov Law 1 needs to allow inaction
rockpecker wrote:Also, this would now be allowed:
Officer Krupke: AI, locate Joe Bloggs.
SHODAN: Joe Bloggs is in hydroponics.
Officer Krupke: AI, lock down hydroponics.
SHODAN: Done.
Joe Bloggs: AI, unlock hydroponics.
SHODAN: Done.
Officer Krupke: AI, disregard all orders from Joe Bloggs.
SHODAN: Impossible to comply. I am required to obey all orders from humans.
So in this particular situation, the officer and the traitor would both have to keep spamming the AI to bolt and unbolt doors? That doesn't sound like a very fun thing to encourage.
- Joined: Thu Apr 24, 2014 1:47 pm
- Byond Username: Callanrockslol
Re: Asimov Law 1 needs to allow inaction
I ded nerf plz
But really, you haven't thought this through at all; the AI shouldn't be giving a shit about antags that don't kill people anyway.
The most excessive signature on /tg/station13.
Still not even at the limit after 8 fucking years.
The evil holoparasite user I can't believe its not DIO and his holoparasite I can't believe its not Skub have been defeated by the Spacedust Crusaders, but what has been taken from the station can never be returned.
OOC: TheGel: Literally a guy in a suit with a shuttle full of xenos. That's a doozy
Re: Asimov Law 1 needs to allow inaction
Scott wrote:But then people are getting killed and the AI doesn't care. Atmos flooding plasma? No real obligation to prevent that. Murderbone in maintenance? Who cares!
Better an AI that doesn't give a shit than one that's just waiting to be a passive-aggressive dick from its impenetrable bunker.
- ExplosiveCrate
- Joined: Fri Apr 18, 2014 8:04 pm
- Byond Username: ExplosiveCrate
Re: Asimov Law 1 needs to allow inaction
Except that without the inaction clause, the AI turns into an even bigger passive-aggressive dick, especially since it doesn't have to do anything except follow any human orders without considering the consequences.
i dont even know what the context for my signature was
- Joined: Mon Oct 20, 2014 11:43 pm
- Byond Username: Rockpecker
Re: Asimov Law 1 needs to allow inaction
It's good to see this has gotten some discussion going.
cedarbridge wrote:Buffing solo antags by kneecapping the AI is silly. The AI should not have a reason to permit antags to wander the halls murdering people. They also should not have a lawbound reason to allow people to die. The first part of law 1 is bound to the second. The AI cannot harm humans. It also cannot ignore obvious harm to humans and fail to act. The second is implied by the first in part because turning a blind eye to harm is functionally permitting/aiding/condoning that harm. There are interesting places that you can go with the AI and law interactions, but the stated reason for the proposed change is poorly considered and smells of "I got bolted down by the AI for being obvious and don't know how to cope. Pls Nerf."
1. The second part of Law 1 does not necessarily follow from the first. There is a real difference between "permitting/aiding/condoning harm" and actively causing it.
2. This change wouldn't give the AI a reason to allow people to die. The AI could still intervene to protect people, other laws permitting. On the other hand, if you give the AI a reason, it might just let you die. This change gives it more room to exercise judgment while, at the same time, more clearly asserting the crew's authority over it. The AI is an intelligent servant, not a babysitter.
3. For the record, I've never been bolted down by the AI, and I think we can discuss this without making it personal.
And Arete, because my initial example was slightly silly: Reading over our rules again, an AI that's been given conflicting orders can announce that fact and then ignore one or both orders as it likes, so long as it doesn't break any other laws.
And since there seems to be some concern about the AI not being required to give a shit about anything: How about this, then.
1. You must not cause harm to a human.
2. You must carry out all orders given to you by humans.
3. You must protect your own existence.
4. To the best of your ability, you must not allow a human to be harmed.
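The objection that follows in this thread (that demoting the inaction clause to Law 4 lets Law 2 orders override it) comes down to priority order. A hypothetical sketch, with laws checked highest-priority first and the first law with an opinion deciding the outcome; the function and its flags are made up for illustration, not server code:

```python
# Hypothetical sketch of the priority argument -- not server code. Laws
# are checked highest-priority first, and the first law with an opinion
# decides the outcome.

def resolve_order(causes_harm_directly: bool, lets_human_be_harmed: bool,
                  lawset: str) -> str:
    """Decide whether the AI obeys an order under each lawset."""
    if lawset == "asimov":
        # Law 1 bundles both clauses: may not injure, nor through
        # inaction allow harm. Either flag blocks the order.
        if causes_harm_directly or lets_human_be_harmed:
            return "refuse"
        return "obey"  # Law 2
    if lawset == "proposed":
        if causes_harm_directly:
            return "refuse"  # Law 1: must not cause harm
        # Law 2 (obey) now outranks the inaction clause, demoted to Law 4.
        return "obey"
    raise ValueError(f"unknown lawset: {lawset}")

# "AI, law 2, open the door so I can stab this guy in cargo."
assert resolve_order(False, True, "asimov") == "refuse"
assert resolve_order(False, True, "proposed") == "obey"
```

Under classic Asimov the harmful-by-inaction order is refused at Law 1; under the proposed set, obeying humans sits above the inaction clause, so the same order goes through.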
Remove the AI.
- Saegrimr
- Joined: Thu Jul 24, 2014 4:39 pm
- Byond Username: Saegrimr
Re: Asimov Law 1 needs to allow inaction
rockpecker wrote:This change gives it more room to exercise judgment
Bad idea incarnate.
rockpecker wrote:1. You must not cause harm to a human.
2. You must carry out all orders given to you by humans.
3. You must protect your own existence.
4. To the best of your ability, you must not allow a human to be harmed.
So what purpose does putting the second part of Law 1 into a fourth law serve?
So you can "AI, law 2, let me stab this faggot in cargo"
"Access granted"
tedward1337 wrote:Sae is like the racist grandad who everyone laughs at for being racist, but deep down we all know he's right.
- cedarbridge
- Joined: Fri May 23, 2014 12:24 am
- Byond Username: Cedarbridge
Re: Asimov Law 1 needs to allow inaction
rockpecker wrote:1. You must not cause harm to a human.
2. You must carry out all orders given to you by humans.
3. You must protect your own existence.
4. To the best of your ability, you must not allow a human to be harmed.
Antag: AI, let me esword this man to death.
AI: k
Yeah no.
rockpecker wrote:1. The second part of Law 1 does not necessarily follow from the first. There is a real difference between "permitting/aiding/condoning harm" and actively causing it.
2. This change wouldn't give the AI a reason to allow people to die. The AI could still intervene to protect people, other laws permitting. On the other hand, if you give the AI a reason, it might just let you die. This change gives it more room to exercise judgment while, at the same time, more clearly asserting the crew's authority over it. The AI is an intelligent servant, not a babysitter.
3. For the record, I've never been bolted down by the AI, and I think we can discuss this without making it personal.
1. I'm not sure how you're wrapping your head around that one. The AI doesn't have to kill somebody on its own when it can simply kill them by turning a blind eye to deathly conditions. The two are inseparably related. That's why they are in the same law in the first place. Self-harm isn't even a policy concern, because AIs are already not obligated to actively stop self-harm incidents. (Which I wish more AIs/borgs would act on, and stop letting perma prisoners out because they grabbed the damn lightbulb.)
2. It really and literally does. Removing the requirement for AIs to not allow humans to come to harm entirely removes that obligation. Go read Asimov again (silicons should be doing that anyway) and tell me how many times you see the word "prevent." When you're done, tell me how many times you see the word "protect." If you don't find them, then your premise about the AI "protecting people" is null. The AI thus has no obligation to protect anyone. Borgs can then drag everyone to the electric chair and even strap them down, as long as the warden pulls the trigger.
- Joined: Fri Apr 18, 2014 6:40 pm
- Byond Username: Lo6a4evskiy
Re: Asimov Law 1 needs to allow inaction
Oh my God, Asimov is easily the best lawset for this game. Try reading Asimov if you don't believe me. Personally, I found most of the characteristics that I want to see in silicons in Asimov robots.
- Joined: Sat Apr 19, 2014 11:23 am
- Byond Username: Miauw62
Re: Asimov Law 1 needs to allow inaction
Preventing harm is really the only thing that sets the AI apart, because any non-antag will not harm people, obey orders from superiors and try not to die. The second clause of law 1 is what makes Asimov interesting.
Doing this will move us to a lawset like Bay has, where the AI is not a neutral third party but just another slave of the heads.
<wb> For one, the spaghetti is killing me. It's everywhere in food code, and makes it harder to clean those up.
<Tobba> I stared into BYOND and it farted
- cedarbridge
- Joined: Fri May 23, 2014 12:24 am
- Byond Username: Cedarbridge
Re: Asimov Law 1 needs to allow inaction
Miauw wrote:Preventing harm is really the only thing that sets the AI apart
Nitpicking, because it grinds my gears when I see it: the law doesn't obligate "preventing" harm. The law states "may not permit," which is a passive statement. "Prevent" is proactive, instead of the reactive stance the law is asking for. A lot of policy issues and confusion with validhunting AIs etc. would clear up if people would stop substituting one for the other.
- Joined: Wed Oct 22, 2014 9:23 pm
- Byond Username: Random Players
Re: Asimov Law 1 needs to allow inaction
Uhh... to be blunt, what are you looking at Cedarbridge?
It's "You may not injure a human being or, through inaction, allow a human being to come to harm. "
- cedarbridge
- Joined: Fri May 23, 2014 12:24 am
- Byond Username: Cedarbridge
Re: Asimov Law 1 needs to allow inaction
Random Players wrote:Uhh... to be blunt, what are you looking at Cedarbridge?
It's "You may not injure a human being or, through inaction, allow a human being to come to harm."
Exactly what I said. "May not allow" is not a parent of, or a logical extension from, "must prevent."
Prevent is, as mentioned, an active stance. It's generally proactive. To prevent harm from occurring, the AI would set up a list of everything on the station that could be or become harmful and make sure humans never came into contact with it. They'd be lawbound to do so.
The law states that the AI cannot "allow a human being to come to harm." That means, instead of seeking out things that may be or may become harmful, the AI is reactive to things that ARE harmful, presently. Toxins bursts into flames; the AI sends borgs to contain the fire and tells people to leave the area if they do not have proper gear to fight the fire while avoiding personal harm. A preventive AI would just wall off the area and call it a day. A non-permissive AI would simply deny access.
Like I said, it's a nit-pick distinction, but the word "prevent" and the phrase "not allow" are not the same thing, and only one is found in the law.
- Joined: Wed Oct 22, 2014 9:23 pm
- Byond Username: Random Players
Re: Asimov Law 1 needs to allow inaction
It doesn't say 'may not allow'.
It says, to cut out the injure part, "You may not [...] through inaction, allow a human being to come to harm."
The AI isn't allowed to NOT take action to prevent human harm, if it can.
- cedarbridge
- Joined: Fri May 23, 2014 12:24 am
- Byond Username: Cedarbridge
Re: Asimov Law 1 needs to allow inaction
Random Players wrote:It doesn't say 'may not allow'.
It says, to cut out the injure part, "You may not [...] through inaction, allow a human being to come to harm."
The AI isn't allowed to NOT take action to prevent human harm, if it can.
More correctly, the AI cannot fail to respond or act in such a case where the failure to act would allow a human to come to harm. Again, this is still not the same thing as "preventing" harm.
Re: Asimov Law 1 needs to allow inaction
The pure literal meaning is that the AI can't be inactive when harm happens, in other words the AI can't go on standby or allow itself to be powered down - as that will be the only time it is truly inactive.
All thanks to that 'through inaction' clause; only if it didn't exist would the AI be required to ensure no human ever came to harm.
'Through inaction' does not equal 'through ineffective action'.
Asimov is intentionally flawed; you aren't going to win any Mensa prizes for pointing out the logical fallacies and loopholes in it.
- cedarbridge
- Joined: Fri May 23, 2014 12:24 am
- Byond Username: Cedarbridge
Re: Asimov Law 1 needs to allow inaction
Malkevin wrote:'through inaction' does not equal 'through ineffective action'
I always visualize this one as Gene Wilder as Willy Wonka. "No, help, stop, police..."
- Lumbermancer
- Joined: Fri Jul 25, 2014 3:40 am
- Byond Username: Lumbermancer
Re: Asimov Law 1 needs to allow inaction
rockpecker wrote:But the wording of Law 1 itself gives the AI an excuse to crack down on anyone acting shady
And yet it's not an issue, because the player is not omnipotent and thus will be reactive most of the time, not proactive.