First Law of Asimov

Slignerd
Github User
Joined: Mon Nov 09, 2015 2:27 pm
Byond Username: Slignerd
Github Username: Slignerd

First Law of Asimov

Post by Slignerd » #216643

I've been wondering whether it would be for the best to change the first law of the default AI lawset. I already brought this up in another thread, but maybe it's better to start a fresh discussion about it.

"You may not injure a human being or cause a human being to come to harm."

It would make human orders more important, so that silicons couldn't use Law 1 as an excuse to ignore orders unless the danger is really obvious. Currently it seems fairly common for AI players to make vague predictions about whether something might lead to harm and act extremely obstructively, even though no humans ordered them to and no humans were harmed - then to say only "Law 1 trumps all others" when confronted, even when their "harm prevention" went against the orders of humans.

Unlike the previous trial of removing the inaction clause entirely, this would still bar silicons from creating extremely dangerous situations, such as randomly letting people into the armory or atmospherics, or provoking others to harm humans for them - since in those circumstances, they would still be the cause of the harm. However, it would also mean that they are no longer explicitly instructed to take action without a human's orders.

I believe it would generally make silicons a bit more neutral in ongoing conflicts - without the inaction clause, silicons would no longer have free rein to act on a hunch that something might lead to harm and then use it as an excuse to act on their own initiative while ignoring the orders they're given.

I do believe it would be a good thing.
It would appear that I'm a high RP weeb who hates roleplay and anime.
Anonmare
Joined: Sun Mar 15, 2015 8:59 pm
Byond Username: Anonmare

Re: First Law of Asimov

Post by Anonmare » #216652

I think a better wording would be

Code: Select all

1. You may not harm, or through your actions, cause a Human being to come to harm.
So the AI won't let you into the armoury or its upload, but it's not obligated to stop you once you're in.
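The practical difference between the stock law and this wording can be sketched as a toy decision function (hypothetical Python for illustration only - `Law1Variant` and `ai_must_intervene` are made-up names, not actual game code):

```python
from enum import Enum

class Law1Variant(Enum):
    STOCK_ASIMOV = 1   # "...or, through inaction, allow a human being to come to harm"
    ACTIONS_ONLY = 2   # "...or, through your actions, cause a Human being to come to harm"

def ai_must_intervene(variant: Law1Variant, ai_caused_danger: bool,
                      harm_in_progress: bool) -> bool:
    """Toy model: is the silicon *obligated* to act against ongoing harm?"""
    if variant is Law1Variant.STOCK_ASIMOV:
        # Inaction clause: any human harm obligates intervention,
        # which is what lets "Law 1 trumps Law 2" override any order.
        return harm_in_progress
    # Without the inaction clause, the silicon is only barred from
    # being the cause of harm; third-party harm carries no obligation.
    return harm_in_progress and ai_caused_danger

# Stock Asimov: must act on any harm it sees, ordered to or not.
assert ai_must_intervene(Law1Variant.STOCK_ASIMOV, False, True)
# Proposed wording: may stay neutral unless its own actions created the danger...
assert not ai_must_intervene(Law1Variant.ACTIONS_ONLY, False, True)
# ...but opening the armoury itself would still make it the cause.
assert ai_must_intervene(Law1Variant.ACTIONS_ONLY, True, True)
```

Under either variant, directly causing harm stays forbidden; only the obligation to intervene changes.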
Shaps-cloud
Code Maintainer
Joined: Thu Aug 14, 2014 4:25 am
Byond Username: Shaps

Re: First Law of Asimov

Post by Shaps-cloud » #216655

We've already shown multiple times that all removing the inaction clause does is let security openly execute anyone they want while the AI doesn't have to give a shit. Law 1 is fine as is.
P.S. Shoot Dr. Allen on sight and dissolve his body in acid. Don't burn it.
Slignerd
Github User
Joined: Mon Nov 09, 2015 2:27 pm
Byond Username: Slignerd
Github Username: Slignerd

Re: First Law of Asimov

Post by Slignerd » #216661

... Meanwhile, we've also shown multiple times that all the inaction clause does is let silicons deliberately wait for a moment where they can take charge and overrule everyone, acting as if it's ranked above the captain, based on "potential harm" - then often deny access to the upload by the convenient method of suspecting harmful intent from just about anyone.

One of the largest problems with the inaction clause is that wherever you look, you have players who invoke "Law 1 trumps Law 2" regardless of whether humans are actually harmed or not, which leads the AI laws to effectively read "do whatever you want, as long as you can claim it prevents human harm somehow".

I'll pass on the salt-filled examples, but the AI behavior is becoming a problem, and the inaction clause (as well as admin inaction) seems to be the key factor responsible for it. I think silicons taking a neutral stance, where they don't take excessive initiative without being ordered to (or despite being ordered otherwise) would be a lot healthier, you know.

Security being able to execute people publicly instead of doing things hush-hush doesn't seem to be as much of an issue or even a difference to me in comparison. Detainees can still order silicons to assist them in escape, after all.
Atlanta-Ned
In-Game Game Master
Joined: Fri Apr 18, 2014 2:11 pm
Byond Username: Atlanta-ned

Re: First Law of Asimov

Post by Atlanta-Ned » #216665

[Screaming]
Statbus! | Admin Feedback
OOC: Pizzatiger: God damn Atlanta, how are you so fucking smart and charming. It fucking pisses me off how perfect you are
DemonFiren
Joined: Sat Dec 13, 2014 9:15 pm
Byond Username: DemonFiren

Re: First Law of Asimov

Post by DemonFiren » #216666

Christ, Slig, you still mad?
Zarniwoop
Joined: Sat Oct 01, 2016 7:47 pm
Byond Username: Dagum

Re: First Law of Asimov

Post by Zarniwoop » #216679

Sligneris wrote:"do whatever you want, as long as you can claim it prevents human harm somehow".
I'm still somewhat new but my impression has been that this is basically the whole game. "Do whatever you want, as long as you can make a compelling case that you had a right to."
Incomptinence
Joined: Fri May 02, 2014 3:01 am
Byond Username: Incomptinence

Re: First Law of Asimov

Post by Incomptinence » #216684

Like a non-rogue AI even matters with door remotes and stunless borgs.

We're at a point of such silicon impotence that if they stop you as anything but a harm alarm, you must be an idiot - barring the time-consuming nightmare scenario where the AI wastes tons of time turning off all the bolt lights before bolting you in and cutting the power, which could obviously be avoided by not sitting on your fat ass. Wow, terrain might stop you as security; it's almost like you aren't an engineer. As for antags, C4 is cheap, and they almost all have optional ways out of a bolted room; if they don't, that's really a design flaw of the antag.
Slignerd
Github User
Joined: Mon Nov 09, 2015 2:27 pm
Byond Username: Slignerd
Github Username: Slignerd

Re: First Law of Asimov

Post by Slignerd » #216686

I think that's part of the argument too - since silicons no longer have many means to actually act on harm, aside from annoying people with door bolts or power switches, or having borgs attempt to outright murder non-humans, perhaps it would be better to just move on altogether and let silicons take a more nuanced, passive stance?
TheColdTurtle
Joined: Sun Sep 13, 2015 7:58 pm
Byond Username: TheColdTurtle

Re: First Law of Asimov

Post by TheColdTurtle » #216696

It's a 'slig has autism' episode
Slignerd
Github User
Joined: Mon Nov 09, 2015 2:27 pm
Byond Username: Slignerd
Github Username: Slignerd

Re: First Law of Asimov

Post by Slignerd » #216697

DemonFiren wrote:Christ, Slig, you still mad?
Atlanta-Ned wrote:[Screaming]
TheColdTurtle wrote:It's a 'slig has autism' episode
If you don't really have anything to say, then why make posts like these?
Really, being given something to actually work with would be appreciated.
ThatSlyFox
Joined: Thu Jun 26, 2014 7:00 am
Byond Username: ThatSlyFox
Location: USA!

Re: First Law of Asimov

Post by ThatSlyFox » #216699

This is a player problem. No amount of policy changing is going to fix this.
Slignerd
Github User
Joined: Mon Nov 09, 2015 2:27 pm
Byond Username: Slignerd
Github Username: Slignerd

Re: First Law of Asimov

Post by Slignerd » #216702

Not entirely, though. Changing expectations for silicons would also change how people play them - many players simply do what they believe is expected of them.

I would agree that in some cases it's purely caused by the malice of individual players, and that sometimes, some of those players slip past the system - but in other cases, the actual policies do have a very real effect, I'm sure.
ThatSlyFox
Joined: Thu Jun 26, 2014 7:00 am
Byond Username: ThatSlyFox
Location: USA!

Re: First Law of Asimov

Post by ThatSlyFox » #216707

People are going to make the laws work for them and their beliefs. I feel the admins are the only ones who can really change AI behavior, by punishing unwanted behavior.
Lumbermancer
Joined: Fri Jul 25, 2014 3:40 am
Byond Username: Lumbermancer

Re: First Law of Asimov

Post by Lumbermancer » #216710

Anonmare wrote:So the AI won't let you into the armoury or its upload, but it's not obligated to stop you once you're in.
That's already covered by Silicon Policy?
aka Schlomo Gaskin aka Guru Meditation aka Copyright Alright aka Topkek McHonk aka Le Rouge
420goslingboy69
Rarely plays
Joined: Sat Apr 26, 2014 8:40 pm
Byond Username: Usednapkin

Re: First Law of Asimov

Post by 420goslingboy69 » #216881

Asimov is asimov.
fuck server anything
Asimov is there because it's fun
Slignerd
Github User
Joined: Mon Nov 09, 2015 2:27 pm
Byond Username: Slignerd
Github Username: Slignerd

Re: First Law of Asimov

Post by Slignerd » #216886

Lumbermancer wrote:
Anonmare wrote:So the AI won't let you into the armoury or its upload, but it's not obligated to stop you once you're in.
That's already covered by Silicon Policy?
Notice the "it's not obligated to stop you" part. That's the biggest difference, you know. Currently, silicons are instructed to prevent harm by any means necessary - but Anon's proposal suggests they merely be forbidden to cause harm instead, so they're not obligated to start screaming the moment someone punches anyone else. It would also mean silicons could no longer use law 1 to overrule any and all orders.
420goslingboy69 wrote:Asimov is asimov.
fuck server anything
Asimov is there because it's fun
I seriously disagree. I've had incidents as captain where the AI kept locking me down over past harm to an assistant who had been throwing spears at me, or where the AI ignored a human HoP's order to let me into the upload and kept trying to kill me when admins turned me into a catgirl, because "law 1 trumps law 2" and "non-humans in the upload is human harm", somehow.

There is nothing fun about the "cut the cameras, hack in to lift the bolts, use an RCD to remove bolted depowered doors, cut more cameras, enable the APC" routine on the AI upload and RD office if the AI has borgs, while the AI isn't even malf.

There's also the fact that admins seem to be lazing about and won't ban or warn for even extremely obvious instances of players breaking silicon policy, their AI laws, or rule 1, like what happened here. So if they don't feel like dealing with the messy actions of "harm preventing" AIs, it would be better to just make the whole thing simpler and less obstructive to other players.
iamgoofball
Github User
Joined: Fri Apr 18, 2014 5:50 pm
Byond Username: Iamgoofball
Github Username: Iamgoofball

Re: First Law of Asimov

Post by iamgoofball » #216892

read any of asimov's works and you'll realize why the first law is the way it is
Slignerd
Github User
Joined: Mon Nov 09, 2015 2:27 pm
Byond Username: Slignerd
Github Username: Slignerd

Re: First Law of Asimov

Post by Slignerd » #216896

... Read any of Asimov's works, and you'll realize why it's not a viable lawset to begin with. Using it as default is silly in the first place, but it's not like that is changing any time soon.

We have silicon policy to patch up the holes anyway, and Asimov's robots don't have gods watching over them for when they try their hardest to loophole and fuck shit up, so we're safe on that front regardless.
iamgoofball
Github User
Joined: Fri Apr 18, 2014 5:50 pm
Byond Username: Iamgoofball
Github Username: Iamgoofball

Re: First Law of Asimov

Post by iamgoofball » #216898

no shit

but you autists cling to asimov more than a literal autist I know clings to pokemon

and because you autists refuse to see the problems with asimov, laws, and the fucking AI job in general, you created silicon policy to justify your autistic ramblings

remove AI

remove laws

remove borgs

remove it all
Slettal
Joined: Wed Oct 01, 2014 4:45 pm
Byond Username: Slettal

Re: First Law of Asimov

Post by Slettal » #216920

iamgoofball wrote:no shit

but you autists cling to asimov more than a literal autist I know clings to pokemon

and because you autists refuse to see the problems with asimov, laws, and the fucking AI job in general, you created silicon policy to justify your autistic ramblings

remove AI

remove laws

remove borgs

remove it all
This might be the best option. If you force AI players to be passive door-opening slaves, nobody will play them anymore. Please consider that the game has to be fun for the AI player too before you think about removing all of their rights to refuse to follow orders. Play AI sometimes.
Slignerd
Github User
Joined: Mon Nov 09, 2015 2:27 pm
Byond Username: Slignerd
Github Username: Slignerd

Re: First Law of Asimov

Post by Slignerd » #216923

Wanting to have fun as an AI does not justify sperging out about "potential harm", bolting people down, or forcibly preventing law changes by the captain. That's just being an asshole. An AI is supposed to follow orders, you know - especially orders such as "stop bolting shit" or "stop turning on the turrets". Except you can't rely on the AI to obey any of those commands, because of that blanket "harm prevention" and the AI shielding itself from any consequences with "law 1 trumps law 2", so you need to go through the camera-cutting routine every single time it disobeys those orders.

You could still report stuff to security and involve yourself with things. But when your idea of "fun" is merely being obstructive to others even when explicitly ordered otherwise, then you have no right to it.
Slettal
Joined: Wed Oct 01, 2014 4:45 pm
Byond Username: Slettal

Re: First Law of Asimov

Post by Slettal » #216934

Your examples are all basically dick moves by the AI; I see your point.

Let's say the inaction clause was removed:
What do you think the AI's job should be, if it is no longer there to prevent harm? (Insert generic doorknob meme here.) Obviously this is only relevant during those times when no human has given the AI an order. The inaction clause pushes AI behavior toward being a guardian of the crew's health.
Scott
Github User
Joined: Fri Apr 18, 2014 1:50 pm
Byond Username: Xxnoob
Github Username: xxalpha

Re: First Law of Asimov

Post by Scott » #216942

Shaps wrote:We've already shown multiple times that all removing the inaction clause does is let security openly execute anyone they want while the AI doesn't have to give a shit. Law 1 is fine as is.
Precisely how do you think the AI can stop security from executing people without a secborg?
Atlanta-Ned
In-Game Game Master
Joined: Fri Apr 18, 2014 2:11 pm
Byond Username: Atlanta-ned

Re: First Law of Asimov

Post by Atlanta-Ned » #216953

[Screaming]
Slignerd
Github User
Joined: Mon Nov 09, 2015 2:27 pm
Byond Username: Slignerd
Github Username: Slignerd

Re: First Law of Asimov

Post by Slignerd » #216956

Well, this is apparently the most constructive Atlanta is capable of being. :^)
oranges
Code Maintainer
Joined: Tue Apr 15, 2014 9:16 pm
Byond Username: Optimumtact
Github Username: optimumtact
Location: #CHATSHITGETBANGED

Re: First Law of Asimov

Post by oranges » #217137

slig answer honestly, were you denied a kill by an AI recently?
420goslingboy69
Rarely plays
Joined: Sat Apr 26, 2014 8:40 pm
Byond Username: Usednapkin

Re: First Law of Asimov

Post by 420goslingboy69 » #217142

The AI is supposed to be another role. It is neutral. It exists to follow its laws as it sees fit.
I really wouldn't want anything else. The station should feel as if it's alive; the AI propagates this.
The only elements of the station that have any sort of round-start cohesion are Captain-HoS-(HoP wishes he was) and AI-borg.
The rest turn departmental or freelance.
Slignerd
Github User
Joined: Mon Nov 09, 2015 2:27 pm
Byond Username: Slignerd
Github Username: Slignerd

Re: First Law of Asimov

Post by Slignerd » #217145

oranges wrote:slig answer honestly, were you denied a kill by an AI recently?
Nope, nothing related to hurting anyone. What happened recently was the catgirl example given earlier, though - the one where an AI went on to tase me in the upload and then tried to kill me, despite the human HoP's orders, because "non-human (catgirl captain) in the upload is human harm".

It's just this behavior, and the fact that the inaction clause encourages it, that I find cancerous - nothing else.
Anonmare
Joined: Sun Mar 15, 2015 8:59 pm
Byond Username: Anonmare

Re: First Law of Asimov

Post by Anonmare » #217150

I can guarantee you any AI that does that when I'm on won't be AI-ing for very long if they do it more than once.

Being in the Upload =/= Human harm.
Uploading laws =/= Human harm
Uploading laws that are known to be harmful to Human(s) = High likelihood of Human harm (I.E. Revolution is in full swing, 5 assistants covered in blood and armed with stun prods + 1 guy in the RD's bloody clothes with their face covered and carrying their PDA messages the AI to grant them access).

Likewise, AIs should not be denying access to the Upload to the people who have explicit authorisation to be there, that means the RD and the Captain may come and go as they please.
Just like they shouldn't be denying access to the armoury to the Warden or HoS. And expanding on that, the AI shouldn't care about the armoury if the lethal weapons are no longer there or are secured in a safe/locker. That also means it shouldn't care about tasers or e-guns going missing as they are not explicitly harmful to Humans (and even lethals can be handed out if there's a serious non-Human threat to the station, such as a Blob).
Incomptinence
Joined: Fri May 02, 2014 3:01 am
Byond Username: Incomptinence

Re: First Law of Asimov

Post by Incomptinence » #217158

Nonhumans are a prime candidate for wanting a potentially harmful law change - something like "catgirls are the only humans" or some shit.

Being strictly, absolutely not human is a disadvantage you take to be a snowflake.

Being a nonhuman and expecting to get to twerk on the AI's core on the foundation of a pinky swear with a head is shaky at best. Law 1 does have priority, so maybe don't go to one of the two secure areas important to the AI to bait it as a disgusting xeno.

Also, it's hardly valid seeing as no one forced you to be in the upload; you wanted to change the laws as a repulsive being, on par in the AI's regard with an armblade-waving changeling. The AI then defended the human crew from a potential threat to them all, and here you are whining.
Slignerd
Github User
Joined: Mon Nov 09, 2015 2:27 pm
Byond Username: Slignerd
Github Username: Slignerd

Re: First Law of Asimov

Post by Slignerd » #217175

"Defending from a potential threat" in the form of barring the captain from uploading laws, despite the HoP's permission and human orders to allow it, and other similarly obstructive behavior is exactly why I made the suggestion of changing the first law - because "law 1 trumps law 2" often reads "as long as I assume the absolute worst, I can ignore all the crew, even humans who order me to stop". Approaches like yours are incredibly cancerous, and sadly quite common.

The "filthy subhumans" argument shouldn't be relevant at all when no humans are being harmed and orders were given by a human command member to allow it. There's no harm in adding a law that "catgirls are human" - and metagaming that everything might be harmful is the absolute worst tendency in AIs, leading to nothing but more bolt+depower+siphon+send-borgs from AIs that aren't even antags.

Since problems with an Asimov AI are meant to be resolved by changing its laws instead of just killing it, there is no reason to give AIs an excuse to act against the will of humans and deny access to people who haven't harmed anyone just because "it knows better", overruling even the will of human command. Stuff like that essentially makes it an extra malf-like antag to deal with.
Last edited by Slignerd on Sun Oct 09, 2016 10:40 pm, edited 1 time in total.
Cik
Joined: Thu Oct 30, 2014 2:24 pm
Byond Username: Cik

Re: First Law of Asimov

Post by Cik » #217179

in regards to the upload/armory, the laws don't mean anything anyway. if you want to change silicon policy (which does matter) i'm all for considering your argument. but wiping out the inaction clause has already been tried, was a total disaster that everyone hated, and won't even solve the problem(s) you're upset about.
Bolien
Joined: Sun Oct 05, 2014 7:38 pm
Byond Username: Bolien
Location: Sillycone Valley

Re: First Law of Asimov

Post by Bolien » #217180

Atlanta-Ned wrote:[Screaming]
Slignerd
Github User
Joined: Mon Nov 09, 2015 2:27 pm
Byond Username: Slignerd
Github Username: Slignerd

Re: First Law of Asimov

Post by Slignerd » #217182

My suggestion isn't wiping it out like what was tested, though. It's replacing it with a "do not cause human harm through your actions" clause, like Anonmare suggested at the start.

Considering that my argument is that AIs take "harm prevention" too far - which is so common, and sadly subjective enough, that admins rarely pick up on it - replacing it with something else would very much fix that issue.
Cik
Joined: Thu Oct 30, 2014 2:24 pm
Byond Username: Cik

Re: First Law of Asimov

Post by Cik » #217185

but when AIs actually ignore law 2, it's because silicon policy literally requires them to in most cases
cancer_engine
Joined: Fri Dec 04, 2015 9:58 pm
Byond Username: Cancer_Engine

Re: First Law of Asimov

Post by cancer_engine » #217216

>do nothing when shitter ais actively ignore law 2
>get really butthurt when they do act on their laws and it happens to be law 1
why is this allowed?
Incomptinence
Joined: Fri May 02, 2014 3:01 am
Byond Username: Incomptinence

Re: First Law of Asimov

Post by Incomptinence » #217318

Enforce Human Authority is on for a reason.

If an admin dehumanising you had no effect on AI actions because PERFECTLY TRUSTWORTHY PROTECT-FROM-ANTAG CAPTAIN, well, the AI would be:
1) metagaming
2) not following the laws
3) ignoring that you could have also been transmuted into the secret ash walker queen or some shit by said badmin forces
Slignerd
Github User
Joined: Mon Nov 09, 2015 2:27 pm
Byond Username: Slignerd
Github Username: Slignerd

Re: First Law of Asimov

Post by Slignerd » #217325

Excuse me, what? The only metagame here is to presume everything is harmful, and to treat the captain like an antag because you suddenly see him as just a valid subhuman, rather than the captain backed by a human's orders. Just because someone is non-human doesn't mean you should dedicate yourself to ruining their round. You shouldn't care about race when a human command member orders you to let the fucking roundstart captain into the upload. Disobeying such an order is breaking law 2, and no - no humans were harmed, so you don't get to invoke law 1 and ignore human orders instructing you to allow the captain, who is very much authorized, to enter the upload.

Look, people acting like this are exactly the reason law 1 has to change - because doing this, even against human orders, is exactly the kind of cancerous AI behavior the inaction clause allows - and you shouldn't have to force your way into the upload to fix a non-antag AI causing trouble of this sort.
oranges
Code Maintainer
Joined: Tue Apr 15, 2014 9:16 pm
Byond Username: Optimumtact
Github Username: optimumtact
Location: #CHATSHITGETBANGED

Re: First Law of Asimov

Post by oranges » #217336

Anonmare wrote:I can guarantee you any AI that does that when I'm on won't be AI-ing for very long if they do it more than once.

Being in the Upload =/= Human harm.
Uploading laws =/= Human harm
Uploading laws that are known to be harmful to Human(s) = High likelihood of Human harm (I.E. Revolution is in full swing, 5 assistants covered in blood and armed with stun prods + 1 guy in the RD's bloody clothes with their face covered and carrying their PDA messages the AI to grant them access).

Likewise, AIs should not be denying access to the Upload to the people who have explicit authorisation to be there, that means the RD and the Captain may come and go as they please.
Just like they shouldn't be denying access to the armoury to the Warden or HoS. And expanding on that, the AI shouldn't care about the armoury if the lethal weapons are no longer there or are secured in a safe/locker. That also means it shouldn't care about tasers or e-guns going missing as they are not explicitly harmful to Humans (and even lethals can be handed out if there's a serious non-Human threat to the station, such as a Blob).
I'm afraid you need to brush up on your silicon policy


Any silicon under Asimov can deny orders to allow access to the upload at any time under Law 1 given probable cause to believe that human harm is the intent of the person giving the order (Referred to for the remainder of 2.1.6 simply as "probable cause").

Probable cause includes presence of confirmed traitors, cultists/tomes, nuclear operatives, or any other human acting against the station in general; the person not having upload access for their job; the presence of blood or an openly carried lethal-capable or lethal-only weapon on the requester; or anything else beyond cross-round character, player, or metagame patterns that indicates the person seeking access intends redefinition of humans that would impede likelihood of or ability to follow current laws as-written.
If you lack at least one element of probable cause and you deny upload access, you are liable to receive a warning or a silicon ban.
You are allowed, but not obligated, to deny upload access given probable cause.
You are obligated to disallow an individual you know to be harmful (Head of Security who just executed someone, etc.) from accessing your upload.
In the absence of probable cause, you can still demand someone seeking upload access be accompanied by another trustworthy human or a cyborg.
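One way to read that probable-cause rule is as a simple predicate (a hypothetical Python sketch of the policy text quoted above, not game code - the field names are invented for illustration):

```python
def may_deny_upload_access(requester: dict) -> bool:
    """Sketch of silicon policy 2.1.6: denial is allowed only with probable cause."""
    probable_cause = (
        requester.get("confirmed_antag", False)           # traitor, cultist, nuke op...
        or not requester.get("job_has_upload_access", True)
        or requester.get("bloody_or_openly_armed", False)
        or requester.get("known_harmful", False)          # e.g. just executed someone
    )
    return probable_cause

# A clean captain gives no probable cause: denying them risks a warning or silicon ban.
assert not may_deny_upload_access({"job_has_upload_access": True})
# A confirmed antag can be denied (allowed, not obligated, unless known harmful).
assert may_deny_upload_access({"job_has_upload_access": True, "confirmed_antag": True})
```

Note the asymmetry the policy draws: with probable cause denial is merely permitted, while denial is obligatory only for someone known to be harmful.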
Slignerd
Github User
Joined: Mon Nov 09, 2015 2:27 pm
Byond Username: Slignerd
Github Username: Slignerd

Re: First Law of Asimov

Post by Slignerd » #217342

... Which is also incredibly silly, considering that you're meant to deal with silicon trouble by changing laws instead of killing them.

But to then have AIs take extreme measures to prevent such law changes by the station's command, with any excuse they can come up with, where the mere presence of antags is read as "probable cause" for some reason... Should a lawset enabling them to act like this really be the default?
Last edited by Slignerd on Mon Oct 10, 2016 2:54 pm, edited 1 time in total.
Incomptinence
Joined: Fri May 02, 2014 3:01 am
Byond Username: Incomptinence

Re: First Law of Asimov

Post by Incomptinence » #217373

Sligneris wrote:Excuse me, what? The only metagame here is to presume everything is harmful, and to treat the captain like an antag because you suddenly see him as just a valid subhuman, rather than the captain backed by a human's orders. Just because someone is non-human doesn't mean you should dedicate yourself to ruining their round. You shouldn't care about race when a human command member orders you to let the fucking roundstart captain into the upload. Disobeying such an order is breaking law 2, and no - no humans were harmed, so you don't get to invoke law 1 and ignore human orders instructing you to allow the captain, who is very much authorized, to enter the upload.

Look, people acting like this are exactly the reason law 1 has to change - because doing this, even against human orders, is exactly the kind of cancerous AI behavior the inaction clause allows - and you shouldn't have to force your way into the upload to fix a non-antag AI causing trouble of this sort.
Exactly, no humans were harmed, mission accomplished.
Slignerd
Github User
Joined: Mon Nov 09, 2015 2:27 pm
Byond Username: Slignerd
Github Username: Slignerd

Re: First Law of Asimov

Post by Slignerd » #217379

Actually, I'm pretty sure humans did come to harm because of the AI's actions. Since it decided to act rogue the moment the captain became a catgirl, it distracted said captain for an extended period of time - time better spent watching over the station. As a result of its actions, it ended up being killed, failing to protect humans from then on, failing to obey the human HoP's orders and failing to protect its own existence, violating all three of its laws. Then the distraction offered by the AI allowed a murderous changeling to break into the captain's office, kill the captain and move on to kill humans with freshly acquired all-access, all with the approval of the MMI of a freshly deconstructed borg, who had earlier chosen the mining module for those sweet murder tools, like a drill or a KA. All this out of petty spite towards a non-human.

The fact that the inaction clause causes this to be widely regarded as acceptable (since you're here, arguing that it's all fine and dandy) is the largest issue here. It generally shows that silicon players have a tendency to act dickish about this kind of stuff, ruining an otherwise fun gimmick round - which in turn goes against Rule 1.

This isn't ban requests though, so we'd be better off moving past the catgirl incident anyway. But yeah, the fact of the matter is that "probable cause" being an excuse to act extremely obstructive and ignore the orders of literally everyone is incredibly unhealthy for the server as a whole.

So, to sum it up...

If they're going to be assholes anyway, I'd rather have silicons as passive, apathetic assholes that will sometimes stay quiet about something they should've mentioned, or reject your orders if they'd lead to human harm, and otherwise stick to their own business... as opposed to obstructive assholes that will bolt all the doors, send borgs and attempt to kill you over "probable harm" if you're not human, all while chanting "it's all for the sake of preventing human harm..." and preventing any law changes, while ignoring the humans yelling at it to stop its insanity.
Last edited by Slignerd on Mon Oct 10, 2016 6:26 pm, edited 1 time in total.
User avatar
TheColdTurtle
Joined: Sun Sep 13, 2015 7:58 pm
Byond Username: TheColdTurtle

Re: First Law of Asimov

Post by TheColdTurtle » #217418

Policy discussion is a meme
Shadowlight213
Joined: Tue Nov 11, 2014 9:34 pm
Byond Username: Shadowlight213
Github Username: Shadowlight213

Re: First Law of Asimov

Post by Shadowlight213 » #217476

I find it funny that security openly executing people is considered something that the inaction clause stops, because I see sec openly execute people anyway. I haven't seen an AI do anything effective about that in ages. Door remotes pretty much nullify bolting people in, and now it's possible to just break doors down, so depowering them doesn't work either.
User avatar
Atlanta-Ned
In-Game Game Master
Joined: Fri Apr 18, 2014 2:11 pm
Byond Username: Atlanta-ned

Re: First Law of Asimov

Post by Atlanta-Ned » #217482

The trick is to not scream that you're going to execute someone over the radio.

Why no one seems to understand this is byond me.

EDIT: [screaming]
Last edited by Atlanta-Ned on Mon Oct 10, 2016 6:09 pm, edited 1 time in total.
Statbus! | Admin Feedback
OOC: Pizzatiger: God damn Atlanta, how are you so fucking smart and charming. It fucking pisses me off how perfect you are
Slignerd
Github User
Joined: Mon Nov 09, 2015 2:27 pm
Byond Username: Slignerd
Github Username: Slignerd

Re: First Law of Asimov

Post by Slignerd » #217489

Atlanta-Ned wrote:The trick is to not scream that you're going to execute someone over the radio.

Why no one seems to understand this is byond me.
Still... I think all the examples I mentioned do show that the most blatant stuff, like security executing prisoners, is not the only problem - most of those examples don't even concern executions, and the one that does refers to the AI going batshit after the fact.
Atlanta-Ned wrote:[screaming]
I don't get it. Why can't you actually address anything I bring up, instead of spamming stuff like this?
Last edited by Slignerd on Mon Oct 10, 2016 6:33 pm, edited 2 times in total.
cancer_engine
Joined: Fri Dec 04, 2015 9:58 pm
Byond Username: Cancer_Engine

Re: First Law of Asimov

Post by cancer_engine » #217491

Shadowlight213 wrote:I find it funny that security openly executing people is considered something that the inaction clause stops, because I see sec openly execute people anyways. I haven't seen an AI effectively do anything about that in ages. Door remotes pretty much nullify bolting people in, and now it's possible to just break the doors down, so depowering doesn't even work.
That's because most silicons know that if they try to stop an execution, shitters will ahelp it and someone will get boinked - sometimes them, sometimes the shitter. There's no rhyme or reason, only the mood of the admin that day.
User avatar
TheColdTurtle
Joined: Sun Sep 13, 2015 7:58 pm
Byond Username: TheColdTurtle

Re: First Law of Asimov

Post by TheColdTurtle » #217492

TheColdTurtle wrote:Policy discussion is a meme
Slignerd
Github User
Joined: Mon Nov 09, 2015 2:27 pm
Byond Username: Slignerd
Github Username: Slignerd

Re: First Law of Asimov

Post by Slignerd » #217494

TheColdTurtle wrote:Policy discussion is a meme
Only because you guys make it so, by making idiotic meme comments without bringing anything meaningful to the table.