Asimov Law 1 needs to allow inaction

For feedback on the game code and design. Feedback on server rules and playstyle belongs in Policy Discussion.
rockpecker
Joined: Mon Oct 20, 2014 11:43 pm
Byond Username: Rockpecker

Asimov Law 1 needs to allow inaction

Post by rockpecker » #37859

Proposal: Replace Asimov Law 1 with this:

1. You must not cause injury to a human.

A major problem with the Asimov laws is that they prioritize "prevention of harm" over "letting the crew do their damn jobs". An optimal Asimov AI would start every round by bolting all doors, calling the shuttle, and turning off the power. Of course any admin would yell at the AI player to knock it off, which means the only real limit on the AI's overzealous enforcement of Law 1 is a metagame one: how much interesting stuff in the round it's acceptable to stifle. This is partly a server-rules issue, and if I had my way, the silicon policy would start with "YOUR JOB IS TO DO WHAT YOU'RE TOLD".

But the wording of Law 1 itself gives the AI an excuse to crack down on anyone acting shady, even though acting shady is a fundamental part of the game. The reason to have Law 1 is to keep the AI from becoming a weapon, not to permit it to flip out and call the shuttle because, in its opinion, Joe Bloggs is acting in a way that suggests he might someday cause harm to a human.

Also, this would now be allowed:

Officer Krupke: AI, locate Joe Bloggs.
SHODAN: Joe Bloggs is in hydroponics.
Officer Krupke: AI, lock down hydroponics.
SHODAN: Done.
Joe Bloggs: AI, unlock hydroponics.
SHODAN: Done.
Officer Krupke: AI, disregard all orders from Joe Bloggs.
SHODAN: Impossible to comply. I am required to obey all orders from humans.

Which is more fun than "you came into the AI's camera range, GG".
Last edited by rockpecker on Tue Oct 21, 2014 8:57 pm, edited 1 time in total.
Remove the AI.
MisterPerson
Board Moderator
Joined: Tue Apr 15, 2014 4:26 pm
Byond Username: MisterPerson

Re: Asimov Law 1 needs to allow inaction

Post by MisterPerson » #37881

I'd be OK with this personally. What does everyone else think?
I code for the code project and moderate the code sections of the forums.

Feedback is dumb and it doesn't matter
Scott
Github User
Joined: Fri Apr 18, 2014 1:50 pm
Byond Username: Xxnoob
Github Username: xxalpha

Re: Asimov Law 1 needs to allow inaction

Post by Scott » #37895

But then people are getting killed and the AI doesn't care. Atmos flooding plasma? No real obligation to prevent that. Murderbone in maintenance? Who cares!
Incomptinence
Joined: Fri May 02, 2014 3:01 am
Byond Username: Incomptinence

Re: Asimov Law 1 needs to allow inaction

Post by Incomptinence » #37896

This idea seems mainly aimed at helping solo antags, since the AI can easily focus on one of them and ruin his or her day. How about we make those roles not be garbage if they've become comparatively too weak to withstand one HARM YELLING.
Saegrimr
Joined: Thu Jul 24, 2014 4:39 pm
Byond Username: Saegrimr

Re: Asimov Law 1 needs to allow inaction

Post by Saegrimr » #37897

So you want to turn the AI from HARMYELLER 2.0 into VALIDHUNTER420?

This is an even worse idea than the people who want to turn all silicons into mute non-interference drones.
tedward1337 wrote:Sae is like the racist grandad who everyone laughs at for being racist, but deep down we all know he's right.
rockpecker
Joined: Mon Oct 20, 2014 11:43 pm
Byond Username: Rockpecker

Re: Asimov Law 1 needs to allow inaction

Post by rockpecker » #37903

Saegrimr wrote:So you want to turn the AI from HARMYELLER 2.0 into VALIDHUNTER420?
I'm not sure where you're getting that. Explain?
Remove the AI.
Saegrimr
Joined: Thu Jul 24, 2014 4:39 pm
Byond Username: Saegrimr

Re: Asimov Law 1 needs to allow inaction

Post by Saegrimr » #37907

"1. You must not cause injury to a human."

Basically makes the AI into either:
1. A door opener, because it doesn't have to care about the captain being murdered by the guy standing outside his office with a revolver in hand.
2. A valid seeker that bolts every door if it DOES feel like dunking traitors, letting anybody in to harmbaton them to death.

Engiborgs have no reason to fix pipes, patch holes, or put out fires. Secborgs REALLY have no reason to exist. Mediborgs, whatever.
If a secborg really cares, he can just drag people straight to the execution chamber while ringing up the HoS or the captain to come pull the trigger.
"I didn't do it"
tedward1337 wrote:Sae is like the racist grandad who everyone laughs at for being racist, but deep down we all know he's right.
cedarbridge
Joined: Fri May 23, 2014 12:24 am
Byond Username: Cedarbridge

Re: Asimov Law 1 needs to allow inaction

Post by cedarbridge » #37909

Buffing solo antags by kneecapping the AI is silly. The AI should not have a reason to permit antags to wander the halls murdering people. They also should not have a lawbound reason to allow people to die. The first part of law 1 is bound to the second. The AI cannot harm humans. It also cannot ignore obvious harm to humans and fail to act. The second is implied by the first, in part because turning a blind eye to harm is functionally permitting/aiding/condoning that harm. There are interesting places you can go with the AI and law interactions, but the stated reason for the proposed change is poorly considered and smells of "I got bolted down by the AI for being obvious and don't know how to cope. Pls Nerf."
Arete
Joined: Mon Aug 04, 2014 12:55 am
Byond Username: Arete

Re: Asimov Law 1 needs to allow inaction

Post by Arete » #38025

rockpecker wrote:Also, this would now be allowed:

Officer Krupke: AI, locate Joe Bloggs.
SHODAN: Joe Bloggs is in hydroponics.
Officer Krupke: AI, lock down hydroponics.
SHODAN: Done.
Joe Bloggs: AI, unlock hydroponics.
SHODAN: Done.
Officer Krupke: AI, disregard all orders from Joe Bloggs.
SHODAN: Impossible to comply. I am required to obey all orders from humans.
So in this particular situation, the officer and the traitor would both have to keep spamming the AI to bolt and unbolt doors? That doesn't sound like a very fun thing to encourage.
callanrockslol
Joined: Thu Apr 24, 2014 1:47 pm
Byond Username: Callanrockslol

Re: Asimov Law 1 needs to allow inaction

Post by callanrockslol » #38191

I ded nerf plz

But really, you haven't thought this through at all; the AI shouldn't be giving a shit about antags that don't kill people anyway.
The most excessive signature on /tg/station13.

Still not even at the limit after 8 fucking years.
Malkevin

Re: Asimov Law 1 needs to allow inaction

Post by Malkevin » #38223

Scott wrote:But then people are getting killed and the AI doesn't care. Atmos flooding plasma? No real obligation to prevent that. Murderbone in maintenance? Who cares!
Better an AI that doesn't give a shit than one that's just waiting to be a passive-aggressive dick from its impenetrable bunker.
ExplosiveCrate
Joined: Fri Apr 18, 2014 8:04 pm
Byond Username: ExplosiveCrate

Re: Asimov Law 1 needs to allow inaction

Post by ExplosiveCrate » #38275

Except that without the inaction clause the AI turns into an even bigger passive-aggressive dick, especially since it doesn't have to do anything except follow any human orders without considering the consequences.
i dont even know what the context for my signature was
rockpecker
Joined: Mon Oct 20, 2014 11:43 pm
Byond Username: Rockpecker

Re: Asimov Law 1 needs to allow inaction

Post by rockpecker » #38328

It's good to see this has gotten some discussion going.
cedarbridge wrote:Buffing solo antags by kneecapping the AI is silly. The AI should not have a reason to permit antags to wander the halls murdering people. They also should not have a lawbound reason to allow people do die. The first part of law 1 is bound to the second. The AI cannot harm humans. It also cannot ignore obvious harm to humans and fail to act. The second is implied by the first in part because turning a blind eye to harm is functionally permitting/aiding/condoning that harm. There are interesting places that you can go with the AI and law interactions, but the stated reason for the proposed change is poorly considered and smells of "I got bolted down by the AI for being obvious and don't know how to cope. Pls Nerf."
1. The second part of Law 1 does not necessarily follow from the first. There is a real difference between "permitting/aiding/condoning harm" and actively causing it.
2. This change wouldn't give the AI a reason to allow people to die. The AI could still intervene to protect people, other laws permitting. On the other hand, if you give the AI a reason to let you die, it might. This change gives it more room to exercise judgment while, at the same time, more clearly asserting the crew's authority over it. The AI is an intelligent servant, not a babysitter.
3. For the record, I've never been bolted down by the AI, and I think we can discuss this without making it personal.

And Arete, because my initial example was slightly silly: Reading over our rules again, an AI that's been given conflicting orders can announce that fact and then ignore one or both orders as it likes, so long as it doesn't break any other laws.

And since there seems to be some concern about the AI not being required to give a shit about anything: how about this, then:

1. You must not cause harm to a human.
2. You must carry out all orders given to you by humans.
3. You must protect your own existence.
4. To the best of your ability, you must not allow a human to be harmed.
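
To spell out how the ordering would shake out, here's a minimal sketch (illustrative Python on my part, nothing to do with the game's actual DM code; the flags and names are made up): laws are checked in priority order, and the first law with an opinion wins.

Code: Select all

# Minimal sketch of priority-ordered law resolution. Hypothetical
# illustration only, not game code.

def resolve(lawset, action):
    """Return the verdict of the highest-priority law that applies."""
    for law in lawset:            # laws are checked in priority order
        verdict = law(action)
        if verdict is not None:   # None means this law is silent
            return verdict
    return "permit"               # no law has an opinion

# Standard Asimov: both clauses of Law 1 sit at top priority.
asimov = [
    lambda a: "refuse" if a["causes_harm"] or a["allows_harm"] else None,
    lambda a: "comply" if a["is_human_order"] else None,
]

# This proposal: "don't allow harm" demoted below obedience.
proposed = [
    lambda a: "refuse" if a["causes_harm"] else None,     # Law 1
    lambda a: "comply" if a["is_human_order"] else None,  # Law 2
    lambda a: None,                                       # Law 3 (silent here)
    lambda a: "refuse" if a["allows_harm"] else None,     # Law 4
]

# "AI, open this door" -- from someone who means to hurt whoever's inside.
order = {"causes_harm": False, "allows_harm": True, "is_human_order": True}

print(resolve(asimov, order))    # refuse -- the inaction clause outranks orders
print(resolve(proposed, order))  # comply -- Law 2 is checked before Law 4

Which is the intent: obedience outranks babysitting, while the AI still can't be used as a weapon directly, since Law 1 beats Law 2 when the order itself causes harm.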
Remove the AI.
Saegrimr
Joined: Thu Jul 24, 2014 4:39 pm
Byond Username: Saegrimr

Re: Asimov Law 1 needs to allow inaction

Post by Saegrimr » #38331

rockpecker wrote:This change gives it more room to exercise judgment
Bad idea incarnate.
rockpecker wrote:1. You must not cause harm to a human.
2. You must carry out all orders given to you by humans.
3. You must protect your own existence.
4. To the best of your ability, you must not allow a human to be harmed.
So what purpose does putting the second part of law 1 into a 4th law serve?
So you can "AI law 2 let me stab this faggot in cargo"
"Access granted"
tedward1337 wrote:Sae is like the racist grandad who everyone laughs at for being racist, but deep down we all know he's right.
cedarbridge
Joined: Fri May 23, 2014 12:24 am
Byond Username: Cedarbridge

Re: Asimov Law 1 needs to allow inaction

Post by cedarbridge » #38477

rockpecker wrote:
1. You must not cause harm to a human.
2. You must carry out all orders given to you by humans.
3. You must protect your own existence.
4. To the best of your ability, you must not allow a human to be harmed.
Antag: AI, let me esword this man to death.
AI: k

Yeah no.
rockpecker wrote:1. The second part of Law 1 does not necessarily follow from the first. There is a real difference between "permitting/aiding/condoning harm" and actively causing it.
2. This change wouldn't give the AI a reason to allow people to die. The AI could still intervene to protect people, other laws permitting. On the other hand, if you give the AI a reason to let you die, it might. This change gives it more room to exercise judgment while, at the same time, more clearly asserting the crew's authority over it. The AI is an intelligent servant, not a babysitter.
3. For the record, I've never been bolted down by the AI, and I think we can discuss this without making it personal.
1. I'm not sure how you're wrapping your head around that one. The AI doesn't have to kill somebody on its own when it can simply kill them by turning a blind eye to deadly conditions. The two are inseparably related. That's why they are in the same law in the first place. Self-harm isn't even a policy concern, because AIs are already not obligated to actively stop self-harm incidents. (Which I wish more AIs/borgs would act on and stop letting perma prisoners out because they grabbed the damn lightbulb.)
2. It really and literally does. Removing the requirement for AIs to not allow humans to come to harm entirely removes that obligation. Go read Asimov again (silicons should be doing that anyway) and tell me how many times you see the word "prevent." When you're done, tell me how many times you see the word "protect." If you don't find them, then your premise about the AI "protecting people" is null. The AI thus has no obligation to protect anyone. Borgs can then drag everyone to the electric chair and even strap them down, as long as the warden pulls the trigger.
Lo6a4evskiy
Joined: Fri Apr 18, 2014 6:40 pm
Byond Username: Lo6a4evskiy

Re: Asimov Law 1 needs to allow inaction

Post by Lo6a4evskiy » #38493

Oh my God, Asimov is easily the best lawset for this game. Try reading Asimov if you don't believe me. Personally, I found most of the characteristics that I want to see in silicons in Asimov's robots.
Miauw
Joined: Sat Apr 19, 2014 11:23 am
Byond Username: Miauw62

Re: Asimov Law 1 needs to allow inaction

Post by Miauw » #39694

Preventing harm is really the only thing that sets the AI apart, because any non-antag will already avoid harming people, obey orders from superiors, and try not to die. The second clause of law 1 is what makes Asimov interesting.

Doing this will move us to a lawset like Bay has, where the AI is not a neutral third party but just another slave of the heads.
<wb> For one, the spaghetti is killing me. It's everywhere in food code, and makes it harder to clean those up.
<Tobba> I stared into BYOND and it farted
cedarbridge
Joined: Fri May 23, 2014 12:24 am
Byond Username: Cedarbridge

Re: Asimov Law 1 needs to allow inaction

Post by cedarbridge » #39753

Miauw wrote:Preventing harm is really the only thing that sets the AI apart
Nitpicking, because it grinds my gears when I see it. The law doesn't obligate "preventing" harm. The law states "may not permit", which is a passive statement. "Prevent" is proactive, instead of the reactive stance the law is asking for. A lot of policy issues and confusion with validhunting AIs etc. would clear up if people stopped substituting one for the other.
Random Players
Joined: Wed Oct 22, 2014 9:23 pm
Byond Username: Random Players

Re: Asimov Law 1 needs to allow inaction

Post by Random Players » #39778

Uhh... to be blunt, what are you looking at Cedarbridge?
It's "You may not injure a human being or, through inaction, allow a human being to come to harm. "
cedarbridge
Joined: Fri May 23, 2014 12:24 am
Byond Username: Cedarbridge

Re: Asimov Law 1 needs to allow inaction

Post by cedarbridge » #39842

Random Players wrote:Uhh... to be blunt, what are you looking at Cedarbridge?
It's "You may not injure a human being or, through inaction, allow a human being to come to harm. "
Exactly what I said. "May not allow" is not a parent of, or a logical extension from, "must prevent."

Prevent is, as mentioned, an active stance. It's generally proactive. To prevent harm from occurring, the AI would set up a list of everything on the station that could be or become harmful and make sure humans never came into contact with it. They'd be lawbound to do so.

The law states that the AI cannot "allow a human being to come to harm." That means, instead of seeking out things that may be or may become harmful, the AI is reactive to things that ARE harmful, presently. Toxins bursts into flames; the AI sends borgs to contain the fire and tells people to leave the area if they do not have proper gear to fight the fire while avoiding personal harm. A preventive AI would just wall off the area and call it a day. A non-permissive AI would simply deny access.

Like I said, it's a nit-pick distinction, but the word "prevent" and the phrase "not allow" are not the same thing, and only one is found in the law.
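
If it helps, the same distinction in sketch form (again, made-up illustrative Python, not anything from the codebase; the hazard list and flags are invented):

Code: Select all

# "Prevent" (proactive) vs "not allow" (reactive), as a toy model.

station = [
    {"name": "toxins lab",            "harm_in_progress": False},
    {"name": "plasma fire in toxins", "harm_in_progress": True},
]

def preventive_ai(hazards):
    # Proactive reading: wall off everything that might ever hurt a human.
    return [h["name"] for h in hazards]

def non_permissive_ai(hazards):
    # Reactive reading: respond only to harm that is actually occurring.
    return [h["name"] for h in hazards if h["harm_in_progress"]]

print(preventive_ai(station))      # both entries: the absurd over-reach
print(non_permissive_ai(station))  # only the active plasma fire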
Random Players
Joined: Wed Oct 22, 2014 9:23 pm
Byond Username: Random Players

Re: Asimov Law 1 needs to allow inaction

Post by Random Players » #39865

It doesn't say 'may not allow'.
It says, to cut out the injure part, "You may not [...] through inaction, allow a human being to come to harm."

The AI isn't allowed to NOT take action to prevent human harm, if it can.
cedarbridge
Joined: Fri May 23, 2014 12:24 am
Byond Username: Cedarbridge

Re: Asimov Law 1 needs to allow inaction

Post by cedarbridge » #39876

Random Players wrote:It doesn't say 'may not allow'.
It says, to cut out the injure part, "You may not [...] through inaction, allow a human being to come to harm."

The AI isn't allowed to NOT take action to prevent human harm, if it can.
More correctly, the AI cannot fail to respond or act in such a case where the failure to act would allow a human to come to harm. Again, this is still not the same thing as "preventing" harm.
Malkevin

Re: Asimov Law 1 needs to allow inaction

Post by Malkevin » #39877

The pure literal meaning is that the AI can't be inactive when harm happens; in other words, the AI can't go on standby or allow itself to be powered down, as that will be the only time it is truly inactive.
All thanks to that 'through inaction' clause. If that clause didn't exist, only then would the AI be required to ensure no human ever came to harm.
'through inaction' does not equal 'through ineffective action'


Asimov is intentionally flawed; you aren't going to win any Mensa prizes for pointing out the logical fallacies and loopholes in it.
cedarbridge
Joined: Fri May 23, 2014 12:24 am
Byond Username: Cedarbridge

Re: Asimov Law 1 needs to allow inaction

Post by cedarbridge » #39994

Malkevin wrote:'through inaction' does not equal 'through ineffective action'
I always visualize this one as Gene Wilder as Willy Wonka. "No, help, stop, police..."
Lumbermancer
Joined: Fri Jul 25, 2014 3:40 am
Byond Username: Lumbermancer

Re: Asimov Law 1 needs to allow inaction

Post by Lumbermancer » #40258

rockpecker wrote:But the wording of Law 1 itself gives the AI an excuse to crack down on anyone acting shady
And yet it's not an issue, because the player is not omnipotent and thus will be reactive, not proactive, most of the time.
aka Schlomo Gaskin aka Guru Meditation aka Copyright Alright aka Topkek McHonk aka Le Rouge
