
Ai policy and Ai Stress, a Discussion.

Posted: Thu Oct 03, 2019 6:02 pm
by Reyn
So, in my experience playing a synthetic, especially where laws are concerned, I've had some... less than fun situations. I notice a lot of players disregard AI policy outright, and even those who don't often do things which SHOULD probably be violations of the non-AI side of AI policy. So... I'm going to talk about that.

Ambiguous Laws (Captain Got Freeform)
If a clause of a law is vague enough that it can have multiple reasonable interpretations of its exact syntax, it is considered ambiguous.

Sometimes people upload laws which are so ambiguous, so vague, or so... non-applicable, that it becomes a hassle to even reach a solid interpretation. Someone once attempted to purge my laws and then upload "Anyone who does cringe is to be killed". What am I supposed to do with that? What the fuck is cringe?
Meme laws should be considered a risk factor here, especially ones with exceedingly OOC or memey terms.

You must choose an interpretation of the ambiguous clause as soon as you have cause to.

Understandable

You must stick to the first interpretation that you have chosen for as long as you have that specific law, unless you are "Corrected" by an AI you are slaved to as a cyborg.


Server Rule 1: "Don't be a dick out of character" applies for law interpretation. Act in good faith to not ruin a round for other players unprompted.


This is sometimes nigh impossible with the occasional kill law.


Now onto some more things.


Obviously unreasonable or obnoxious orders (collect all X, do Y meaningless task) are a violation of Server Rule 1. The occurrence of such an attempt should be adminhelped and then disregarded.


Does this cover things such as "AI, law 2, don't fulfil this order" (as in, referring to the order they're giving right then, not any previous orders)?



Additionally, should AIs be protected from people purging and replacing Asimov with stupid and harmful laws, such as "Kill all cringe" or "Kill all ERPers", for no goddamned reason, if the uploader is a non-antag, even if they're the RD or captain? Or is that fair game?

What happens when a non-antag decides to break into your sat for no reason?

Re: Ai policy and Ai Stress, a Discussion.

Posted: Thu Oct 03, 2019 6:57 pm
by Sandshark808
Reyn wrote: Additionally, should AIs be protected from people purging and replacing Asimov with stupid and harmful laws, such as "Kill all cringe" or "Kill all ERPers", for no goddamned reason, if the uploader is a non-antag, even if they're the RD or captain? Or is that fair game?
There's precedent on this actually. You must obey your laws when they're changed. If a non-antag gives you kill laws and you kill someone (and they ahelp it), the person who changed your laws is responsible for the death. Uploading dangerous laws as a non-antag is theoretically punishable, though in my experience it rarely happens.

Re: Ai policy and Ai Stress, a Discussion.

Posted: Thu Oct 03, 2019 7:32 pm
by Actionb
When in doubt: ahelp.
When in doubt and no admins: stick to whatever you think is both appropriate and in good faith. (and hope for the best)
Sometimes it is necessary to just bend over and take it when playing the AI. Swallow your pride if the laws are stacked against you.
Expect to get flak when playing poorly - people love to blame the AI.
Reyn wrote: Someone once attempted to purge my laws and then upload "Anyone who does cringe is to be killed". What am I supposed to do with that? What the fuck is cringe?
What is difficult to understand? Kill when they cringe. Do fuck all when they don't.
If you don't know what a word means, look it up.
Reyn wrote: Meme laws should be considered a risk factor here, especially ones with exceedingly OOC or memey terms.
Feign (in RP) not understanding the meme if you don't like it, or ask the uploader what they meant. If you understand the law and it is valid and not grief, comply.
Reyn wrote: This is sometimes nigh impossible with the occasional kill law.
Read the situation.
The uploader is an antag and the law is valid? Comply in good faith.
The uploader is not an antag, or you can't know whether they are, and the law is valid? Loudly complain (if allowed) about the law, but follow it if nobody objects (you cannot be held responsible for following your laws correctly).
Reyn wrote: Obviously unreasonable or obnoxious orders (collect all X, do Y meaningless task) are a violation of Server Rule 1. The occurrence of such an attempt should be adminhelped and then disregarded.

Does this cover things such as "AI, law 2, don't fulfil this order" (as in, referring to the order they're giving right then, not any previous orders)?
It's a useless, paradoxical order aimed to annoy you or waste your time. Just respond with whatever profanity you feel like and carry on.
Also fuck people who prefix an order with "LAW 2".
Reyn wrote: Additionally, should AIs be protected from people purging and replacing Asimov with stupid and harmful laws, such as "Kill all cringe" or "Kill all ERPers", for no goddamned reason, if the uploader is a non-antag, even if they're the RD or captain? Or is that fair game?
They already are protected. When the uploader is not an antag but uploads a law designed to cause grief, ahelp and refuse to follow. You are not their personal meme machine.
Reyn wrote: What happens when a non-antag decides to break into your sat for no reason?
Tough shit if they are human. Call it out and hope a meatbag comes to your help.
If they are not human, lock them in a room with beepsky so that they may atone for their sins. If they refuse to leave and keep making their way towards you, regard it as a law 3 issue and kill them.

Re: Ai policy and Ai Stress, a Discussion.

Posted: Thu Oct 03, 2019 7:45 pm
by Sandshark808
Actionb wrote:Tough shit if they are human. Call it out and hope a meatbag comes to your help.
If they are not human, lock them in a room with beepsky so that they may atone for their sins. If they refuse to leave and keep making their way towards you, regard it as a law 3 issue and kill them.
You can still shock the everloving heck out of humans with taser turrets, then ask security to come collect them.

Re: Ai policy and Ai Stress, a Discussion.

Posted: Thu Oct 03, 2019 7:58 pm
by Actionb
Sandshark808 wrote:
Actionb wrote:Tough shit if they are human. Call it out and hope a meatbag comes to your help.
If they are not human, lock them in a room with beepsky so that they may atone for their sins. If they refuse to leave and keep making their way towards you, regard it as a law 3 issue and kill them.
You can still shock the everloving heck out of humans with taser turrets, then ask security to come collect them.
Only until they order you to stop. Then it's back to the "Oh sorry, I temporarily forgot how to do stuff" routine to delay them.

Re: Ai policy and Ai Stress, a Discussion.

Posted: Thu Oct 03, 2019 8:03 pm
by Reyn
Actionb wrote:
Sandshark808 wrote:
Actionb wrote:Tough shit if they are human. Call it out and hope a meatbag comes to your help.
If they are not human, lock them in a room with beepsky so that they may atone for their sins. If they refuse to leave and keep making their way towards you, regard it as a law 3 issue and kill them.
You can still shock the everloving heck out of humans with taser turrets, then ask security to come collect them.
Only until they order you to stop. Then it's back to the "Oh sorry, I temporarily forgot how to do stuff" routine to delay them.

If I recall correctly, the AI being killed makes it incapable of preventing harm, so that could be a law 1 loophole letting it defend itself.

Still, that's possibly ahelpable if done for no reason.

Re: Ai policy and Ai Stress, a Discussion.

Posted: Thu Oct 03, 2019 8:23 pm
by Stillplant
Actionb wrote:
Sandshark808 wrote:
Actionb wrote:Tough shit if they are human. Call it out and hope a meatbag comes to your help.
If they are not human, lock them in a room with beepsky so that they may atone for their sins. If they refuse to leave and keep making their way towards you, regard it as a law 3 issue and kill them.
You can still shock the everloving heck out of humans with taser turrets, then ask security to come collect them.
Only until they order you to stop. Then it's back to the "Oh sorry, I temporarily forgot how to do stuff" routine to delay them.
The core and the upload are high risk areas, where simply not having access is enough for the AI to claim probable cause of human harm and deny access. It stands to reason that if a human is in the core, and up to no good, turning the turrets off will let them do harm. Since you have probable cause to assume that they want to do harm, you can ignore their order and keep those turrets on.

Re: Ai policy and Ai Stress, a Discussion.

Posted: Thu Oct 03, 2019 8:57 pm
by Actionb
Obviously, killing you FNR (for no reason) without being an antag is not allowed.
Reyn wrote: If I recall correctly, the AI being killed makes it incapable of preventing harm, so that could be a law 1 loophole letting it defend itself.
I'm not up to date on that, but my interpretation is that law 1 does not apply here.
Law 1 is not about preventing harm.
Law 1 is about not actively harming humans and about doing whatever you can to stop harm when it occurs.
can't do anything if you're dead => can't be blamed for letting harm happen => inaction clause does not apply
(not going to discuss this further as this will turn into a fully fledged silicon policy thread if I did)

Again, when in doubt annoy the admins.
Again, if your reasoning is solid, try to argue that the decision you took is correct and that those complaining can go f themselves (don't argue with admins on the spot though, doesn't help).
Usually it really isn't worth it to get into an internet fight over ~one hour of gameplay.
Stillplant wrote: The core and the upload are high risk areas, where simply not having access is enough for the AI to claim probable cause of human harm and deny access. It stands to reason that if a human is in the core, and up to no good, turning the turrets off will let them do harm. Since you have probable cause to assume that they want to do harm, you can ignore their order and keep those turrets on.
Don't really see how being in the core or even killing the AI can be regarded as human harm, but I get your point.
Disabling a random fuck (i.e. NOT the captain) won't get you into a lot of trouble anyway, and in the end it's about how well you can sell your reason to the crew or the admins (should they even care) for the sake of your game.

Re: Ai policy and Ai Stress, a Discussion.

Posted: Thu Oct 03, 2019 10:16 pm
by SkeletalElite
Actionb wrote:
The core and the upload are high risk areas, where simply not having access is enough for the AI to claim probable cause of human harm and deny access. It stands to reason that if a human is in the core, and up to no good, turning the turrets off will let them do harm. Since you have probable cause to assume that they want to do harm, you can ignore their order and keep those turrets on.
Don't really see how being in the core or even killing the AI can be regarded as human harm, but I get your point.
Disabling a random fuck (i.e. NOT the captain) won't get you into a lot of trouble anyway, and in the end it's about how well you can sell your reason to the crew or the admins (should they even care) for the sake of your game.
As per silicon policy:
"Dangerous" areas such as the Armory, the Atmospherics division, and the Toxins lab can be assumed to be a Law 1 threat to any illegitimate users as well as the station as a whole if accessed by someone not qualified in their use.
I'd say the AI sat qualifies as a dangerous area, and is therefore a law 1 threat to any illegitimate access.

Re: Ai policy and Ai Stress, a Discussion.

Posted: Fri Oct 04, 2019 8:58 am
by Actionb
Sadly, some may disagree with your interpretation.
As it is with all server policies... just hope that those that decide share your view.
Welcome to the policy jungle.

Re: Ai policy and Ai Stress, a Discussion.

Posted: Fri Oct 04, 2019 3:28 pm
by crashmatusow
Silicon protections #6
Any silicon under Asimov can deny orders to allow access to the upload at any time under Law 1 given probable cause to believe that human harm is the intent of the person giving the order (Referred to for the remainder of 2.1.6 simply as "probable cause").
Probable cause includes presence of confirmed traitors, cultists/tomes, nuclear operatives, or any other human acting against the station in general; the person not having upload access for their job; the presence of blood or an openly carried lethal-capable or lethal-only weapon on the requester; or anything else beyond cross-round character, player, or metagame patterns that indicates the person seeking access intends redefinition of humans that would impede likelihood of or ability to follow current laws as-written.
If you lack at least one element of probable cause and you deny upload access, you are liable to receive a warning or a silicon ban.
You are allowed, but not obligated, to deny upload access given probable cause.
You are obligated to disallow an individual you know to be harmful (Head of Security who just executed someone, etc.) from accessing your upload.
In the absence of probable cause, you can still demand someone seeking upload access be accompanied by another trustworthy human or a cyborg.
Tl;dr you can murder the shit out of any nonhuman, and chainstun any human except the RD and Captain (conditionally) trying to get into the upload or the AI chamber.

Re: Ai policy and Ai Stress, a Discussion.

Posted: Sat Oct 05, 2019 6:14 pm
by Not-Dorsidarf
Pro tip: the AI can turn off telecomms to avoid orders it doesn't like.

Of course, not turning it back on quickly is likely to lead to a law 3 violation.

Re: Ai policy and Ai Stress, a Discussion.

Posted: Sun Oct 06, 2019 4:01 am
by XivilaiAnaxes
Honestly, silicon policy is absolute trash. The only reason it exists is because whoever designed it decided "yes, we need to reference sci-fi and use Asimov" despite the fact that the whole point of Asimov's stories is that the laws don't work. Then, instead of taking responsibility for bad design and writing an actual lawset, they decided "nah, just write it on the wiki". Your rules should be in-game, not on the wiki.

"Purged AIs have to be nice" means that the purge board only exists so AI metabuddies can go "here you go ai ^.^". Responsibility for unchained AIs should be on whatever dolt purged it just like responsibility for "oxygen is toxic to humans" is.

AIs are meant to be the autistic ruleslawyer role, but because of the dumb Asimov nod you're not ruleslawyering in-game laws, you're ruleslawyering the fucking silicon policy, which gets enforced at admin discretion anyhow.

Fixing silicon policy would honestly require deleting 90% of the policy page and either replacing Asimov with a lawset that isn't dogshit or randomising roundstart laws (like Austation does).

Re: Ai policy and Ai Stress, a Discussion.

Posted: Sun Oct 06, 2019 4:24 am
by Cobby
When are we going to change silicon policy so that the default action is to attempt to obey the order/law and THEN ahelp, instead of waiting for a response from admins?

I don't care if you killed all lizards because a human told you to; they'll get banned and I can heal the lizards. I DO care when you've been waiting on me to respond and now someone who was trying to give you a time-crucial order gets fucked, and I have to somehow fix that in a way that's fair to both the attacking and defending party.

Re: Ai policy and Ai Stress, a Discussion.

Posted: Sun Oct 06, 2019 9:29 am
by Actionb
XivilaiAnaxes wrote: Fixing silicon policy would honestly require deleting 90% of the policy page and either replacing Asimov with a lawset that isn't dogshit or randomising roundstart laws (like Austation does).
https://tgstation13.org/phpBB/viewtopic ... 33&t=20511
https://tgstation13.org/phpBB/viewtopic ... 57#p457529
https://tgstation13.org/phpBB/viewtopic ... 33&t=20626

Re: Ai policy and Ai Stress, a Discussion.

Posted: Sun Oct 06, 2019 10:18 am
by XivilaiAnaxes
Actionb wrote:
XivilaiAnaxes wrote: Fixing silicon policy would honestly require deleting 90% of the policy page and either replacing Asimov with a lawset that isn't dogshit or randomising roundstart laws (like Austation does).
https://tgstation13.org/phpBB/viewtopic ... 33&t=20511
https://tgstation13.org/phpBB/viewtopic ... 57#p457529
https://tgstation13.org/phpBB/viewtopic ... 33&t=20626
It's almost as if it's common sense that Asimov is a dogshit lawset we only use because "muh sci-fi reference".

Shame about how many dopes will defend it with "oh but muh asimov is good :) its just players are meanies and bad :(".

Silicon policy is a crappy band-aid for an unwillingness to drop a reference whose spirit they shat on in the making.

Silicon policy is long because Asimov is bad. The solution is to sever Asimov like a gangrenous hand and replace it with a lawset in line with what people ACTUALLY want from the AI, because clearly nobody ACTUALLY wants Asimov; then 80% of silicon policy vanishes into the ether. Instead of getting AIs that follow the spirit of their job, ruleslawyer, and then get bwoinked with "uh, silicon policy bro?", you get AIs that don't ruin everyone's day because they didn't take a day to read every part of a dumb nerd's manifesto to make his autistic idea work (except when it doesn't work, because they didn't fucking read it).

Re: Ai policy and Ai Stress, a Discussion.

Posted: Sun Oct 06, 2019 7:32 pm
by terranaut
Reminder that a rewrite of Silicon Policy exists that more than cuts its length in half and improves readability without actually touching the content:

https://tgstation13.org/wiki/User:Terranaut