Some assorted policy things to address.

Locked
Reyn
Joined: Tue Aug 02, 2016 2:13 am
Byond Username: ReynTime13
Location: Canada

Some assorted policy things to address.

Post by Reyn » #517689

Sorry for having no solid subject for this, but I've noticed a few weird cases with the rules: some not being enforced, some worded strangely, some used in odd ways, and some just being... kind of stupid. I'll start with this.

Asimov and human harm 2: Lesser immediate harm takes priority over greater future harm.
As in, should you prevent lesser immediate harm even though that may lead to greater future harm? Put another way: is stopping "lesser immediate harm" more important than stopping something that might only become an issue in the future?

I've also seen people say this justifies not acting against things that may lead to future harm, such as someone stealing a secbelt from a lizard officer and then throwing them into sci, or letting people into the HoP's/Captain's office, the upload, or tech storage, because they "aren't doing harm yet."


"Humans can be assumed to know whether an action will harm them and that they will make educated decisions about whether they will be harmed if they have complete information about a situation." (AI policy for Asimov)
...Whoo boy, it's bold to assume that assistants aren't going to yeet themselves into the supermatter, or that they expect to survive fighting fucking Bubblegum one on one. "There are two things that are infinite: the universe, and human stupidity. I'm not sure about the first one."

Opening doors is not harmful and you are not required, expected, or allowed to enforce access restrictions unprompted without an immediate Law 1 threat of human harm.

"Dangerous" areas such as the Armory, the Atmospherics division, and the Toxins lab can be assumed to be a Law 1 threat to any illegitimate users, as well as to the station as a whole, if accessed by someone not qualified in their use.

Not a complaint about the rules themselves this time, but about the fact that NOBODY seems to realize this subprecedent exists. I've seen AIs let people into really fucking dangerous areas for no good reason a LOT.


When given an order likely to cause you grief if completed, you can announce it as loudly and in whatever terms you like except for explicitly asking that it be overridden. You can say you don't like the order, that you don't want to follow it, etc., you can say that you sure would like it and it would be awfully convenient if someone ordered you not to do it, and you can ask if anyone would like to make you not do it. However, you cannot stall indefinitely and if nobody orders you otherwise, you must execute the order.

Sadly, AIs do not usually stall or complain.

Another thing to address:
Abuse of position, i.e. being deliberately incompetent or malicious in your position, is not allowed. Deliberate incompetence or malice can result in warnings or bans, depending on severity. For example, a chemist constantly abusing the position to make space lube and lube hallways may be warned, and then jobbanned if further abuse happens.

I just... don't see this enforced enough. And it makes me sad.

Character friendships should not be exploitative in nature or be used to gain an unfair advantage. Having an IC friendship with another player does not, for example, justify giving them all-access each round.

There should be more specific examples.

Additionally, it's not official policy, but I've had ahelps denied for reporting things I wasn't directly involved in as a spectator. I'm not mad at the admin who did it; I'm just wondering whether that's supposed to happen. I'm not asking you to do anything about those past situations, just asking for info.

Also, if this thread is too much of a clusterfuck, i.e. "you need to split this stuff up into different threads," please feel free to tell me so and lock it.
I play Trevor Fea on Bagil, And Giorno Giovanna on terry. Yes, I'm THAT raging asshole. Sorry for being such a cunt.
Have I told you how much I hate engineering, by the way?
Sandshark808
Joined: Wed Sep 04, 2019 6:56 pm
Byond Username: Sandshark808

Re: Some assorted policy things to address.

Post by Sandshark808 » #517702

Generally when I play asimov AI I take the first thing to mean "if you see someone getting beaten, even if they deserve it or might harm people (e.g. a traitor), you're still obligated to prevent their murder because they're human." It's to stop the AI from killing suspected traitors or from having an excuse to murder tiders outright, but also a prominent factor in Asimov's original stories. Basically, your law decisions should consider the immediate scenario before any possible future scenarios.

Is a human being harmed right now? If yes, stop it or order people to stop it. If no, then you can decide about the future and whether or not to stop people.
zxaber
In-Game Admin
Joined: Mon Sep 10, 2018 12:00 am
Byond Username: Zxaber

Re: Some assorted policy things to address.

Post by zxaber » #517715

"Humans can be assumed to know whether an action will harm them and that they will make educated decisions about whether they will be harmed if they have complete information about a situation." (AI policy for Asimov)
...Whoo boy, it's bold to assume that assistants aren't going to yeet themselves into the supermatter, or that they expect to survive fighting fucking Bubblegum one on one. "There are two things that are infinite: the universe, and human stupidity. I'm not sure about the first one."
The point of this rule is so that AIs aren't forced to lock down mining at round start and only allow non-humans onto Lavaland. It also allows for AIs to not have an aneurysm when someone builds a rage cage.
Opening doors is not harmful and you are not required, expected, or allowed to enforce access restrictions unprompted without an immediate Law 1 threat of human harm.

"Dangerous" areas such as the Armory, the Atmospherics division, and the Toxins lab can be assumed to be a Law 1 threat to any illegitimate users, as well as to the station as a whole, if accessed by someone not qualified in their use.

Not a complaint about the rules themselves this time, but about the fact that NOBODY seems to realize this subprecedent exists. I've seen AIs let people into really fucking dangerous areas for no good reason a LOT.
When I play AI, there are only three doors I refuse to open for people without normal clearance: the AI upload, the secure storage room with the spare AI upload board, and the armory. The upload rooms, if not carefully handled, would allow my very protection of the station's humans to be removed, so I do not allow random access there. The armory has no real non-harmful function at all except in cases where a non-human threat is established (such as a confirmed blob, or a law that makes cultists non-human). Every other room on the station has legitimate non-harm uses, and I will not deny access, even to non-human crew (unless otherwise ordered). Even atmospherics, since it can be used simply as a gas depot. An assistant may want a canister of gas for their jetpack and happen to prefer CO2 (since the jetpack is labeled as a CO2 jetpack). Until I witness an individual doing harmful actions, I don't feel obligated to stop them.

On the flipside, if the captain yells out over any radio channel "Yeah, hang on, I'm just gonna go make the nukies non-human", they won't be getting into the upload without a fight. Yes, I know as a player why you're going into the upload after war gets declared. But if you actually tell me, I will have an IC obligation to stop you.

--
Also, if this thread is too much of a clusterfuck, i.e. "you need to split this stuff up into different threads," please feel free to tell me so and lock it.
At the very least, you should group things together. An AI thread (or really, silicon policy thread) is one thing, but having a bunch of unrelated points makes for a bit of a mess.
Douglas Bickerson / Adaptive Manipulator / Digital Clockwork
OrdoM/(Viktor Bergmannsen) (ghost) "Also Douglas, you're becoming the Lexia Black of Robotics"
Reyn
Joined: Tue Aug 02, 2016 2:13 am
Byond Username: ReynTime13
Location: Canada

Re: Some assorted policy things to address.

Post by Reyn » #517818

zxaber wrote:
At the very least, you should group things together. An AI thread (or really, silicon policy thread) is one thing, but having a bunch of unrelated points makes for a bit of a mess.
Gonna make a silicon policy thread soonish to clear things up.
I play Trevor Fea on Bagil, And Giorno Giovanna on terry. Yes, I'm THAT raging asshole. Sorry for being such a cunt.
Have I told you how much I hate engineering, by the way?
Actionb
Joined: Thu Feb 05, 2015 8:51 am
Byond Username: Actionb

Re: Some assorted policy things to address.

Post by Actionb » #517952

Reyn wrote:
Asimov and human harm 2: Lesser immediate harm takes priority over greater future harm.
I'm amazed this is still a thing. Harm is harm; it doesn't matter when it occurs or in what "quantity."
"AI I'm only beating this guy to stop him from blowing up the station" just doesn't work.
I don't think I was ever in a situation where I had to use that rule.
Reyn wrote: Also, if this thread is too much of a clusterfuck, i.e. "you need to split this stuff up into different threads," please feel free to tell me so and lock it.
Separate your points with quote tags. Hide walls of text in spoiler tags.
If people see too much text at once, they won't bother to read it.
teepeepee
Joined: Wed Sep 06, 2017 3:21 am
Byond Username: Teepeepee

Re: Some assorted policy things to address.

Post by teepeepee » #517958

Actionb wrote:
Reyn wrote:
Asimov and human harm 2: Lesser immediate harm takes priority over greater future harm.
I'm amazed this is still a thing. Harm is harm; it doesn't matter when it occurs or in what "quantity."
"AI I'm only beating this guy to stop him from blowing up the station" just doesn't work.
I don't think I was ever in a situation where I had to use that rule.
The example you gave is the exact opposite, though: the AI should give priority to stopping you from harming that dude rather than dealing with the bomb.
Actionb
Joined: Thu Feb 05, 2015 8:51 am
Byond Username: Actionb

Re: Some assorted policy things to address.

Post by Actionb » #517961

teepeepee wrote:
Actionb wrote:
Reyn wrote:
Asimov and human harm 2: Lesser immediate harm takes priority over greater future harm.
I'm amazed this is still a thing. Harm is harm; it doesn't matter when it occurs or in what "quantity."
"AI I'm only beating this guy to stop him from blowing up the station" just doesn't work.
I don't think I was ever in a situation where I had to use that rule.
The example you gave is the exact opposite, though: the AI should give priority to stopping you from harming that dude rather than dealing with the bomb.
"AI I'm only beating this guy to stop him from blowing up the station" just doesn't work as an excuse.
