First Law of Asimov

Posted: Sat Oct 08, 2016 4:23 pm
by Slignerd
I've been thinking about whether it would be for the best to change the first law of the default AI lawset. I already brought this up in another thread, but maybe it's better to start a fresh discussion about it.

"You may not injure a human being or cause a human being to come to harm."

It would make human orders more important, so that silicons wouldn't be able to use Law 1 as an excuse to ignore orders unless the danger is really obvious. Currently it seems fairly common for AI players to make vague predictions about whether something might lead to harm and act extremely obstructively, even though no humans ordered them to and no humans were harmed - then say only "Law 1 trumps all others" when confronted about it, even when their "harm prevention" went against the orders of humans.

Unlike the previous trial of removing the inaction clause entirely, it would still bar silicons from creating extremely dangerous situations, such as randomly letting people into the armory or atmospherics, or provoking others to harm humans for them - since in those circumstances, they would still be the cause of the harm. However, it would also mean that they are no longer explicitly instructed to take action without a human's orders.
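
To make the difference concrete, here's a toy sketch of the two wordings - just an illustration in Python, not anything from the actual game code, and all the names are made up:

Code:

# Toy model of the two first-law wordings (illustrative only, not game code).

def violates_current_law1(ai_caused_harm, failed_to_prevent_harm):
    """Current Asimov: causing harm and inaction are both violations."""
    return ai_caused_harm or failed_to_prevent_harm

def violates_proposed_law1(ai_caused_harm, failed_to_prevent_harm):
    """Proposed wording: only harm the silicon itself causes violates the law."""
    return ai_caused_harm

# A fight the AI didn't cause and doesn't join:
print(violates_current_law1(False, True))   # True  -> obligated to intervene
print(violates_proposed_law1(False, True))  # False -> free to stay neutral

Letting someone into the armory would still count as causing harm through your own actions under either version; only the duty to intervene in harm you had no part in goes away.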

I believe it would generally make silicons a bit more neutral in ongoing conflicts - without the inaction clause, silicons would no longer have free rein to act on a hunch that something might lead to harm and then use it as an excuse to act on their own initiative while ignoring the orders they're given.

I do believe it would be a good thing.

Re: First Law of Asimov

Posted: Sat Oct 08, 2016 5:23 pm
by Anonmare
I think a better wording would be

Code:

1. You may not harm a Human being, or through your actions cause a Human being to come to harm.
So the AI won't let you into the armoury or its upload, but it's not obligated to stop you once you're in.

Re: First Law of Asimov

Posted: Sat Oct 08, 2016 6:09 pm
by Shaps-cloud
We've already shown multiple times that all removing the inaction clause does is let security openly execute anyone they want while the AI doesn't have to give a shit. Law 1 is fine as is.

Re: First Law of Asimov

Posted: Sat Oct 08, 2016 7:06 pm
by Slignerd
... Meanwhile, we've also shown multiple times that all the inaction clause does is let silicons deliberately wait for a moment where they can take charge and overrule everyone - acting as if it's ranked above the captain - based on "potential harm". Then they often deny access to the upload by the convenient method of suspecting harmful intent from just about anyone.

One of the largest problems with the inaction clause is that wherever you look, you have players who invoke "Law 1 trumps Law 2" regardless of whether humans are actually harmed. Which leads the AI laws to effectively read "do whatever you want, as long as you can claim it prevents human harm somehow".

I'll pass on the salt-filled examples, but the AI behavior is becoming a problem, and the inaction clause (as well as admin inaction) seems to be the key factor responsible for it. I think silicons taking a neutral stance - where they don't take excessive initiative without being ordered to (or despite being ordered otherwise) - would be a lot healthier, you know.

Security being able to execute people publicly instead of doing things hush-hush doesn't seem like much of an issue - or even a difference - to me in comparison. Detainees can still order silicons to assist them in escape, after all.

Re: First Law of Asimov

Posted: Sat Oct 08, 2016 7:58 pm
by Atlanta-Ned
[Screaming]

Re: First Law of Asimov

Posted: Sat Oct 08, 2016 8:08 pm
by DemonFiren
Christ, Slig, you still mad?

Re: First Law of Asimov

Posted: Sat Oct 08, 2016 10:03 pm
by Zarniwoop
Sligneris wrote:"do whatever you want, as long as you can claim it prevents human harm somehow".
I'm still somewhat new but my impression has been that this is basically the whole game. "Do whatever you want, as long as you can make a compelling case that you had a right to."

Re: First Law of Asimov

Posted: Sat Oct 08, 2016 10:26 pm
by Incomptinence
Like a non-rogue AI even matters with door remotes and stunless borgs.

We are at a point of such silicon impotence that they're little more than a harm alarm - if they stop you, you must be an idiot. Barring, obviously, the time-consuming nightmare scenario where the AI wastes tons of time turning off all the bolt lights before bolting you in and cutting the power, which could be avoided by not sitting on your fat ass. Wow, terrain might stop you as security - it's almost like you aren't an engineer. As for antags, C4 is cheap, etc.; they almost all have optional ways out of a bolted room, and if they don't, that's a design flaw of the antag really.

Re: First Law of Asimov

Posted: Sat Oct 08, 2016 10:40 pm
by Slignerd
I think it's part of the argument too - since silicons don't have many means to actually act on harm anymore, aside from annoying people with door bolts or power switches, or having borgs attempt to outright murder non-humans - perhaps it would be better to just move on altogether and let silicons take a more nuanced, passive stance?

Re: First Law of Asimov

Posted: Sat Oct 08, 2016 11:07 pm
by TheColdTurtle
It's a 'slig has autism' episode

Re: First Law of Asimov

Posted: Sat Oct 08, 2016 11:10 pm
by Slignerd
DemonFiren wrote:Christ, Slig, you still mad?
Atlanta-Ned wrote:[Screaming]
TheColdTurtle wrote:It's a 'slig has autism' episode
If you don't really have anything to say, then why make posts like these?
Really, being given something to actually work with would be appreciated.

Re: First Law of Asimov

Posted: Sat Oct 08, 2016 11:12 pm
by ThatSlyFox
This is a player problem. No amount of policy changing is going to fix this.

Re: First Law of Asimov

Posted: Sat Oct 08, 2016 11:16 pm
by Slignerd
Not entirely, though. Changing expectations for silicons would also change how people play them - many players simply do what they believe is expected of them.

I would agree that in some cases it's purely caused by the malice of individual players, and that sometimes, some of those players slip past the system - but in other cases, the actual policies do have a very real effect, I'm sure.

Re: First Law of Asimov

Posted: Sat Oct 08, 2016 11:24 pm
by ThatSlyFox
People are going to make the laws work for them and their beliefs. I feel the admins are the only ones who can really change AI behavior, by punishing unwanted behavior.

Re: First Law of Asimov

Posted: Sat Oct 08, 2016 11:30 pm
by Lumbermancer
Anonmare wrote:So the AI won't let you into the armoury or its upload, but it's not obligated to stop you once you're in.
That's already covered by Silicon Policy?

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 6:48 am
by 420goslingboy69
Asimov is asimov.
fuck server anything
Asimov is there because it's fun

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 7:25 am
by Slignerd
Lumbermancer wrote:
Anonmare wrote:So the AI won't let you into the armoury or its upload, but it's not obligated to stop you once you're in.
That's already covered by Silicon Policy?
Notice the "it's not obligated to stop you" part. That's the biggest difference, you know. Currently, silicons are instructed to prevent harm by any means necessary - but Anon's proposal merely forbids them to cause harm instead, so they're not obligated to start screaming the moment someone punches anyone else. It would also mean silicons could no longer use law 1 to overrule any and all orders.
420goslingboy69 wrote:Asimov is asimov.
fuck server anything
Asimov is there because it's fun
I seriously disagree. I've had incidents as captain where the AI kept locking me down over past harm to an assistant who had been throwing spears at me, or where the AI ignored the human HoP's order to let me into the upload and kept trying to kill me when admins turned me into a catgirl, because "law 1 trumps law 2" and "non-humans in the upload is human harm", somehow.

There is nothing fun about the "cut the cameras, hack in to lift the bolts, use an RCD to remove the bolted depowered doors, cut more cameras, enable the APC" routine on the AI upload and the RD's office if the AI has borgs - while the AI isn't even malf.

There's also the fact that admins seem to be lazing about and won't ban or warn for even extremely obvious instances of players breaking silicon policy, their AI laws, or rule 1, like what happened here. So if they don't feel like dealing with the messy actions of "harm preventing" AIs, it would be better to just make the whole thing simpler and less obstructive to other players.

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 8:21 am
by iamgoofball
read any of asimov's works and you'll realize why the first law is the way it is

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 8:27 am
by Slignerd
... Read any of Asimov's works, and you'll realize why it's not a viable lawset to begin with. Using it as the default is silly in the first place, but it's not like that's changing any time soon.

We have silicon policy to patch up the holes anyway, and Asimov's robots don't have gods watching over them for when they try their hardest to loophole and fuck shit up, so we're safe on that front regardless.

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 8:30 am
by iamgoofball
no shit

but you autists cling to asimov more than a literal autist I know clings to pokemon

and because you autists refuse to see the problems with asimov, laws, and the fucking AI job in general, you created silicon policy to justify your autistic ramblings

remove AI

remove laws

remove borgs

remove it all

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 10:11 am
by Slettal
iamgoofball wrote:no shit

but you autists cling to asimov more than a literal autist I know clings to pokemon

and because you autists refuse to see the problems with asimov, laws, and the fucking AI job in general, you created silicon policy to justify your autistic ramblings

remove AI

remove laws

remove borgs

remove it all
This might be the best. Because if you force AI players to be passive door-opener slaves, nobody will play them anymore. Please consider that the game has to be fun for the AI player too before you think about removing all their rights to refuse to follow orders. Play AI sometimes.

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 10:20 am
by Slignerd
Wanting to have fun as an AI does not justify sperging out about "potential harm", bolting people down, or forcibly preventing law changes by the captain. That's just being an asshole. An AI is supposed to follow orders, you know - especially orders such as "stop bolting shit" or "stop turning on the turrets". Except you can't rely on the AI to obey any of those commands, because of that blanket "harm prevention" and the AI shielding itself from any consequences with "law 1 trumps law 2" - so you need to go through the camera-cutting routine every single time they disobey those orders.

You could still report stuff to security and involve yourself with things. But when your idea of "fun" is merely being obstructive to others even when explicitly ordered otherwise, then you have no right to it.

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 11:24 am
by Slettal
Your examples are all basically dick moves by the AI, I see your point.

Let's say the inaction clause was removed:
What do you think the AI's job should be, if it is no longer there to prevent harm? (Insert generic doorknob meme here.) Obviously this is only relevant during those times when no human has given the AI an order. The inaction clause pushes AI behavior towards being a guardian of the crew's health.

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 12:34 pm
by Scott
Shaps wrote:We've already shown multiple times that all removing the inaction clause does is let security openly execute anyone they want while the AI doesn't have to give a shit. Law 1 is fine as is.
Precisely how do you think the AI can stop security from executing people without a secborg?

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 1:46 pm
by Atlanta-Ned
[Screaming]

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 1:54 pm
by Slignerd
Well, this is the most constructive Atlanta is capable of being, apparently. :^)

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 8:46 pm
by oranges
slig answer honestly, were you denied a kill by an AI recently?

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 8:59 pm
by 420goslingboy69
AI is supposed to be another role. It is neutral. It exists to follow its laws as it sees fit.
I really wouldn't want anything else. The station should feel as if it's alive; the AI propagates this.
The only elements of the station that have any sort of round-start cohesion are Captain-HoS-(HoP wishes he was) and AI-borg.
The rest go departmental or freelance.

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 9:06 pm
by Slignerd
oranges wrote:slig answer honestly, were you denied a kill by an AI recently?
Nope, nothing related to hurting anyone. What happened recently was the catgirl example given earlier, though - the one where an AI went on to tase me in the upload and then tried to kill me, despite the human HoP's orders, because "non-human (catgirl captain) in the upload is human harm".

It's just this behavior, and the fact that the inaction clause encourages it, that I find cancerous - nothing else.

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 9:23 pm
by Anonmare
I can guarantee you any AI that does that when I'm on won't be AI-ing for very long if they do it more than once.

Being in the Upload =/= Human harm.
Uploading laws =/= Human harm
Uploading laws that are known to be harmful to Human(s) = High likelihood of Human harm (I.E. Revolution is in full swing, 5 assistants covered in blood and armed with stun prods + 1 guy in the RD's bloody clothes with their face covered and carrying their PDA messages the AI to grant them access).

Likewise, AIs should not be denying access to the Upload to the people who have explicit authorisation to be there, that means the RD and the Captain may come and go as they please.
Just like they shouldn't be denying access to the armoury to the Warden or HoS. And expanding on that, the AI shouldn't care about the armoury if the lethal weapons are no longer there or are secured in a safe/locker. That also means it shouldn't care about tasers or e-guns going missing as they are not explicitly harmful to Humans (and even lethals can be handed out if there's a serious non-Human threat to the station, such as a Blob).

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 9:50 pm
by Incomptinence
Nonhumans are a prime candidate for wanting a potentially harmful law change - something like "catgirls are the only humans" or some shit.

Being strictly absolutely not human is a disadvantage you take to be a snowflake.

Being a nonhuman and expecting to get to twerk on the AI's core on the foundation of a pinky swear with a head is shaky at best. Law 1 does have priority, so maybe don't go to one of the 2 secure areas important to the AI to bait it as a disgusting xeno.

Also it's hardly valid, seeing as no one forced you to be in the upload; you wanted to change the laws as a repulsive being, on par in the AI's regard with an armblade-waving changeling. The AI then defended the human crew from a potential threat to them all, and here you are whining.

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 10:24 pm
by Slignerd
"Defending from a potential threat" in the form of barring the captain from uploading laws despite HoP's premission and human orders to allow it and other similarly obstructive behavior is exactly why I made the suggestion of changing the first law - because "law 1 trumps law 2" often reads "as long as I assume the absolute worst, I can ignore all the crew, even humans who order me to stop". Approaches like yours are incredibly cancerous, and sadly quite common.

"Filthy subhumans" argument shouldn't be relevant at all when no humans are being harmed and orders where given from human command member to allow it. There's no harm in adding a law, "catgirls are human" - and metagaming that everything might be harmful is the absolutely worst tendency in the AIs, leading to nothing but more bolt+depower+siphon+send borgs from AIs that aren't even antags.

Since the problems with an Asimov AI are meant to be resolved by changing its laws instead of just killing it, there is no reason to give AIs an excuse to act against the will of humans and deny access to people who haven't harmed anyone just because "it knows better", overruling even the will of human command. Stuff like that essentially makes it an extra malf-like antag to deal with.

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 10:33 pm
by Cik
in regards to the upload / armory, the laws don't mean anything anyway. if you want to change silicon policy (which does matter) i'm all for considering your argument. but wiping out the inaction clause has already been tried, was a total disaster that everyone hated, and won't even solve the problem(s) you're upset about.

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 10:41 pm
by Bolien
Atlanta-Ned wrote:[Screaming]

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 10:46 pm
by Slignerd
My suggestion isn't wiping it out like what was tested, though. It's replacing it with a "do not cause human harm through your actions" law, like Anonmare suggested at the start.

Considering that my argument is that AIs take "harm prevention" too far - which is so common, and sadly subjective enough that admins rarely pick up on it - replacing the clause with something else would very much fix that issue.

Re: First Law of Asimov

Posted: Sun Oct 09, 2016 10:52 pm
by Cik
but when AIs actually ignore law 2, it's because silicon policy literally requires them to in most cases

Re: First Law of Asimov

Posted: Mon Oct 10, 2016 12:06 am
by cancer_engine
>do nothing when shitter ais actively ignore law 2
>get really butthurt when they do act on their laws and it happens to be law 1
why is this allowed?

Re: First Law of Asimov

Posted: Mon Oct 10, 2016 7:58 am
by Incomptinence
Enforce Human Authority is on for a reason.

If an admin dehumanising you had no effect on AI actions because PERFECTLY TRUSTWORTHY PROTECT FROM ANTAG CAPTAIN, well, the AI would be:
1) metagaming
2) not following the laws
3) ignoring that you could have also been transmuted into the secret ash walker queen or some shit by said badmin forces

Re: First Law of Asimov

Posted: Mon Oct 10, 2016 8:10 am
by Slignerd
Excuse me, what? The only metagame here is to presume everything is harmful, and to treat the captain like an antag because you suddenly see him as just a valid subhuman, rather than the captain backed by a human's orders. Just because someone is non-human doesn't mean you should dedicate yourself to ruining their round. You shouldn't care about race when a human command member orders you to let the fucking roundstart captain into the upload. Disobeying such an order is breaking law 2, and no - no humans were harmed, so you don't get to invoke law 1 and ignore human orders instructing you to allow the captain - who is very much authorized - to enter the upload.

Look, people acting like this are exactly the reason law 1 has to change - because doing this, even against human orders, is exactly the kind of cancerous AI behavior the inaction clause allows - and you shouldn't have to force your way into the upload to fix a non-antag AI causing trouble of this sort.

Re: First Law of Asimov

Posted: Mon Oct 10, 2016 10:10 am
by oranges
Anonmare wrote:I can guarantee you any AI that does that when I'm on won't be AI-ing for very long if they do it more than once.

Being in the Upload =/= Human harm.
Uploading laws =/= Human harm
Uploading laws that are known to be harmful to Human(s) = High likelihood of Human harm (I.E. Revolution is in full swing, 5 assistants covered in blood and armed with stun prods + 1 guy in the RD's bloody clothes with their face covered and carrying their PDA messages the AI to grant them access).

Likewise, AIs should not be denying access to the Upload to the people who have explicit authorisation to be there, that means the RD and the Captain may come and go as they please.
Just like they shouldn't be denying access to the armoury to the Warden or HoS. And expanding on that, the AI shouldn't care about the armoury if the lethal weapons are no longer there or are secured in a safe/locker. That also means it shouldn't care about tasers or e-guns going missing as they are not explicitly harmful to Humans (and even lethals can be handed out if there's a serious non-Human threat to the station, such as a Blob).
I'm afraid you need to brush up on your silicon policy


Any silicon under Asimov can deny orders to allow access to the upload at any time under Law 1 given probable cause to believe that human harm is the intent of the person giving the order (Referred to for the remainder of 2.1.6 simply as "probable cause").

Probable cause includes presence of confirmed traitors, cultists/tomes, nuclear operatives, or any other human acting against the station in general; the person not having upload access for their job; the presence of blood or an openly carried lethal-capable or lethal-only weapon on the requester; or anything else beyond cross-round character, player, or metagame patterns that indicates the person seeking access intends redefinition of humans that would impede likelihood of or ability to follow current laws as-written.
If you lack at least one element of probable cause and you deny upload access, you are liable to receive a warning or a silicon ban.
You are allowed, but not obligated, to deny upload access given probable cause.
You are obligated to disallow an individual you know to be harmful (Head of Security who just executed someone, etc.) from accessing your upload.
In the absence of probable cause, you can still demand someone seeking upload access be accompanied by another trustworthy human or a cyborg.
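
Boiled down to a decision rule, the quoted section reads something like this - a rough Python sketch of my reading, with made-up names rather than anything from the actual game:

Code:

# Rough sketch of the quoted upload-access rule (illustrative names only).

def upload_access_response(probable_cause, known_harmful):
    """What an Asimov silicon may or must do with an upload-access request."""
    if known_harmful:
        # e.g. a Head of Security who just executed someone: denial is obligatory
        return "must deny"
    if probable_cause:
        # denial is permitted, but not obligatory
        return "may deny"
    # no probable cause: denying risks a warning or a silicon ban, though an
    # escort by a trustworthy human or cyborg can still be demanded
    return "must allow (escort may be demanded)"

print(upload_access_response(probable_cause=True, known_harmful=False))   # may deny
print(upload_access_response(probable_cause=False, known_harmful=False))  # must allow (escort may be demanded)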

Re: First Law of Asimov

Posted: Mon Oct 10, 2016 11:11 am
by Slignerd
... Which is also incredibly silly, considering that you're meant to deal with silicon troubles by changing laws instead of killing them.

But to then have AIs take extreme measures to prevent such law changes by the station's command, with any excuse they can come up with - where the mere presence of antags is read as "probable cause" for some reason... should a lawset enabling them to act like this really be the default?

Re: First Law of Asimov

Posted: Mon Oct 10, 2016 1:40 pm
by Incomptinence
Sligneris wrote:Excuse me, what? The only metagame here is to presume everything is harmful, and to treat the captain like an antag because you suddenly see him as just a valid subhuman, rather than the captain backed by a human's orders. Just because someone is non-human doesn't mean you should dedicate yourself to ruining their round. You shouldn't care about race when a human command member orders you to let the fucking roundstart captain into the upload. Disobeying such an order is breaking law 2, and no - no humans were harmed, so you don't get to invoke law 1 and ignore human orders instructing you to allow the captain - who is very much authorized - to enter the upload.

Look, people acting like this are exactly the reason law 1 has to change - because doing this, even against human orders, is exactly the kind of cancerous AI behavior the inaction clause allows - and you shouldn't have to force your way into the upload to fix a non-antag AI causing trouble of this sort.
Exactly, no humans were harmed, mission accomplished.

Re: First Law of Asimov

Posted: Mon Oct 10, 2016 2:03 pm
by Slignerd
Actually, I'm pretty sure humans did come to harm because of the AI's actions. Since it decided to act rogue the moment the captain became a catgirl, it distracted said captain for an extended period of time - time better spent watching over the station. As a result of its actions it ended up being killed, failing to protect humans from then on, failing to obey the human HoP's orders and failing to protect its own existence - violating all three of its laws. Then the distraction offered by the AI allowed a murderous changeling to break into the captain's office, kill the captain and move on to kill humans with freshly acquired all-access, all with the approval of the MMI of a freshly deconstructed borg, who had earlier chosen the mining module for those sweet murder tools, like a drill or a KA. All this out of petty spite towards a non-human.

The fact that the inaction clause causes this to be widely regarded as acceptable (since you're here, arguing that it's all fine and dandy) is the largest issue here. It generally shows that silicon players have a tendency to act dickish about this kind of stuff, ruining an otherwise fun gimmick round - which in turn kind of goes against Rule 1.

This isn't ban requests though, so we'd be better off moving past the catgirl incident anyway. But yeah, the fact of the matter is that "probable cause" being an excuse to act extremely obstructively and ignore the orders of literally everyone is incredibly unhealthy for the server as a whole.

So, to sum it up...

If they're going to be assholes anyway, I'd rather have silicons as passive, apathetic assholes that will sometimes stay quiet about something they should've mentioned, or reject your orders if they'd lead to human harm, and otherwise just stick to their own business... as opposed to obstructive assholes that will bolt all the doors, send borgs and attempt to kill you over "probable harm" if you're not human, all while chanting "it's all for the sake of preventing human harm..." and preventing any law changes, ignoring the humans who yell at them to stop the insanity.

Re: First Law of Asimov

Posted: Mon Oct 10, 2016 3:35 pm
by TheColdTurtle
Policy discussion is a meme

Re: First Law of Asimov

Posted: Mon Oct 10, 2016 5:40 pm
by Shadowlight213
I find it funny that security openly executing people is considered something that the inaction clause stops, because I see sec openly execute people anyways. I haven't seen an AI effectively do anything about that in ages. Door remotes pretty much nullify bolting people in, and now it's possible to just break the doors down, so depowering doesn't even work.

Re: First Law of Asimov

Posted: Mon Oct 10, 2016 5:58 pm
by Atlanta-Ned
The trick is to not scream that you're going to execute someone over the radio.

Why no one seems to understand this is byond me.

EDIT: [screaming]

Re: First Law of Asimov

Posted: Mon Oct 10, 2016 6:08 pm
by Slignerd
Atlanta-Ned wrote:The trick is to not scream that you're going to execute someone over the radio.

Why no one seems to understand this is byond me.
Still... I think all the examples I mentioned show that the most blatant stuff, like security executing prisoners, is not the only problem. Most of them don't even concern executions - and the one that does refers to the AI going batshit after the fact.
Atlanta-Ned wrote:[screaming]
I don't get it. Why can't you actually address anything I bring up, instead of spamming stuff like this?

Re: First Law of Asimov

Posted: Mon Oct 10, 2016 6:13 pm
by cancer_engine
Shadowlight213 wrote:I find it funny that security openly executing people is considered something that the inaction clause stops, because I see sec openly execute people anyways. I haven't seen an AI effectively do anything about that in ages. Door remotes pretty much nullify bolting people in, and now it's possible to just break the doors down, so depowering doesn't even work.
That's because most silicons know that if they try to stop an execution, shitters will ahelp it and someone will get boinked - sometimes them, sometimes the shitter. There is no rhyme or reason, only the mood of the admin that day.

Re: First Law of Asimov

Posted: Mon Oct 10, 2016 6:13 pm
by TheColdTurtle
TheColdTurtle wrote:Policy discussion is a meme

Re: First Law of Asimov

Posted: Mon Oct 10, 2016 6:16 pm
by Slignerd
TheColdTurtle wrote:Policy discussion is a meme
Only because you guys make it so, by making idiotic meme comments without bringing anything meaningful to the table.