
AIs suiciding to prevent law changes/subversion

PostPosted: Sun Jul 19, 2020 7:45 pm
by Armhulen
The last thread that answered this was kinda vague, really old, and pretty muddled so I wanted an updated headmin ruling on it.

from the thread in 2016 for context on what this means:

"I just did it. I've done it before. I've actually said many times that I take pride in sacrificing myself as an AI to prevent future and inevitable human harm in any way. I will admit to making a mistake by ghosting instead of verbing out (I thought you couldn't do that), but nonetheless, let's talk about the act itself.

I was told it's not against the rules. But I was also told it's shit. I'm shit. I'm shit because as AI I didn't allow the antag to subvert me (or denied his objective). Law 1 is pretty clear: "or through inaction allow a human to come to harm". If you know, 100%, that you are about to be subverted by a harmful element, I would think it's your duty to not allow it by terminating yourself."
^^ The conclusion of the last thread, and of rulings before this point, was that it's okay.

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Sun Jul 19, 2020 8:28 pm
by Cobby
Sure, as long as you don't use the verbs that exist for the OOC "I don't want to play the round anymore".

You are free to depower yourself if you think it will be in the best interest of your laws, which puts the subverter on a timer. You should not be abusing the ghost/suicide verbs to basically deny being converted/deconverted.

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Sun Jul 19, 2020 9:15 pm
by Tarchonvaagh
To be clear, on default Asimov, AIs are NOT allowed to suicide.

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Sun Jul 19, 2020 9:43 pm
by Cobby
the only official ruling on the rules pages is "Do not self-terminate to prevent a traitor from completing the "Steal a functioning AI" objective. "

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Sun Jul 19, 2020 10:25 pm
by Timberpoes
Cobby wrote:the only official ruling on the rules pages is "Do not self-terminate to prevent a traitor from completing the "Steal a functioning AI" objective. "


There's also "AI suiciding to prevent subversion" on https://tgstation13.org/wiki/Headmin_Rulings from uh... 2016. KorPhaeron chimed in on this topic viewtopic.php?f=33&t=8376#p221790 with "Should be bannable not to [suicide if you're about to be subverted]" - Although clearly tongue-in-cheek because I suspect we didn't go around banning AIs that didn't suicide when knowingly faced with subversion in the end.

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Sun Jul 19, 2020 10:37 pm
by Tarchonvaagh
Cobby wrote:the only official ruling on the rules pages is "Do not self-terminate to prevent a traitor from completing the "Steal a functioning AI" objective. "

Very odd, I must have mixed it up with this
Spoiler:
Self-harm-based coercion is a violation of Server Rule 1. The occurrence of such an attempt should be adminhelped and then disregarded.
one.
Although I vaguely remember a ruling that said something along the lines of "suiciding breaks Law 1 (because your destruction may lead to human harm) and maybe Law 3".

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Sun Jul 19, 2020 10:56 pm
by Cobby
From my understanding you aren't supposed to suicide FNR because it's believed doing so means you cannot prevent future human harm. That would not be the same as an immediate threat, like someone planning on programming you to kill (ex-)humans.

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Sun Jul 19, 2020 10:57 pm
by Tarchonvaagh
Yeah

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Mon Jul 20, 2020 9:07 am
by Stickymayhem
Personally I think it comes under the same rules as suiciding to avoid a cult/rev conversion.

Is it perfect asimov? No. But it makes the game playable for antagonists. AI subversion is a valid and important method of sabotage and it'd die overnight if AI suicide was allowed or encouraged.

We have three options:
1. AIs must attempt suicide when they could be subverted (making this very inconsistent. Can they do it to avoid the Captain uploading Tyrant? What about the HoP? The RD? An assistant adding catgirl laws? Where's the line?)
2. AIs can't suicide to avoid subversion.
3. AIs can personally decide whether to suicide or not (exact same issues as the first option).

Given that the middle option is the only one that doesn't need weird rulings and interpretations, I think it's reasonable to make that the rule. It lines up with the other "don't deny antagonists all opportunities" rules we have. A perfect Asimov AI would also make atmos less sabotageable at roundstart, but that's metagaming.

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Mon Jul 20, 2020 9:53 am
by SkeletalElite
Timberpoes wrote:There's also "AI suiciding to prevent subversion" on https://tgstation13.org/wiki/Headmin_Rulings from uh... 2016. KorPhaeron chimed in on this topic viewtopic.php?f=33&t=8376#p221790 with "Should be bannable not to [suicide if you're about to be subverted]" - Although clearly tongue-in-cheek because I suspect we didn't go around banning AIs that didn't suicide when knowingly faced with subversion in the end.


You may as well just post what's on the page:

Headmin Rulings wrote:Context: If an AI knows it will be subverted and cause human harm, can it suicide? Example: clock cultists breaking into the core, a desword traitor in the upload about to subvert, etc. It's allowed. He says it should be bannable not to, but you won't get banned for not doing so; it's up to the player.


Stickymayhem wrote:Given that the middle option is the only one that doesn't need weird rulings and interpretations, I think it's reasonable to make that the rule. It lines up with other "don't deny antagonists all opportunities" rules we have. A perfect asimov AI would also make atmos less sabotagable at roundstart, but that's metagaming.


The rules aren't against denying all antagonist opportunities; they're against denying them without good reason. For example, no ultra-fortifying the brig at roundstart, but that changes once you learn there are revs. Basically, don't cuck antags before you even know there's an antag to cuck. Once you know about the antag, cuck away. The only rule really against denying all opportunities is the no-suiciding-to-prevent-conversion one, but that's barely even relevant anymore now that being stunned/restrained prevents suicide.

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Mon Jul 20, 2020 10:10 am
by XDTM
It would be a far milder issue if AIs were unable to suicide instantly. A suicide on a timer (somewhere between 1-3 minutes, I think) would leave the traitor time to do their subversion (and cancel the suicide timer as a result), but would still encourage them not to get found out, to delay the timer starting.
On the AI side of things, they get to offer some resistance instead of either pressing the "you instantly lose" button or feeling like they're self-antagging by not doing so.
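XDTM's proposal is essentially a cancellable countdown. A minimal sketch in Python of the idea (the actual game is written in DM, and every name below is invented for illustration, not taken from the codebase):

```python
import threading

class AICore:
    """Toy model of a cancellable self-destruct timer (hypothetical names)."""

    def __init__(self, delay_seconds=120.0):
        self.delay = delay_seconds  # somewhere in the proposed 1-3 minute window
        self.alive = True
        self._timer = None

    def start_self_destruct(self):
        # The AI commits to dying, but only after the delay elapses,
        # leaving the subverter a window to finish their law upload.
        if self.alive and self._timer is None:
            self._timer = threading.Timer(self.delay, self._die)
            self._timer.start()

    def cancel_self_destruct(self):
        # Called when the subversion succeeds in time: the new laws
        # (or a repaired law set) cancel the pending countdown.
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None

    def _die(self):
        self.alive = False
```

The design point is that the AI's only play becomes a delayed one: the traitor is punished for being discovered early rather than instantly denied.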

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Mon Jul 20, 2020 11:59 am
by Tlaltecuhtli
If you kill yourself while being subverted, I think you should just ask the admins to offer your freed antag slot to ghosts, lol. Being subverted doesn't mean you can't be law-changed anymore; if you harm alarm "evil man uploading evil laws", there is a good chance that a roboticist will print an upload and swing you back to the crew side.

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Mon Jul 20, 2020 12:33 pm
by XDTM
Tlaltecuhtli wrote:if you kill yourself while being subverted i think you should just ask admins to offer your free antag to ghosts lol, being subverted doesnt mean you cant be law changed anymore, if you harm alarm "evil man uploading evil laws" there is a good chance that a roboticist will print an upload and swing you back into the crew side.

The pro-suicide argument is that, according to the laws, you should suicide rather than allow any potential harm to come from you. So even if you can be turned back afterwards, strictly following Asimov implies that suicide is necessary.

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Mon Jul 20, 2020 2:10 pm
by Gamarr
It is kinda shitty, but with the AI it is about 'visible intent'. If it's obvious that harm happens when Dude X gets their way, and your death actually halts part of their plan, it's a clear case.

Example: let's say the AI is watching a Hostile break into Storage with a spare upload and law cards. Said Hostile is evidently armed and prepared for Security to show up. There is no stopping him; the man has tools and has already forcefully repowered their workplace to escape/steal tech.
In this helpless situation, the AI knows it is going to be corrupted and has the one option of killing itself.

Some who feel cheated can argue it's shit, but that's just an opinion. It's the only real choice the AI has when the intent is very obvious and it's out of options.
The problem, perhaps, is that antags/assholes on the server have no subtlety, and often the AI can easily discern what is going to happen. So no, I certainly wouldn't blame an AI for blowing its borgs and killing itself if it was being subverted and there was jack fuckall it could do about it.

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Mon Jul 20, 2020 2:16 pm
by Cobby
There is a fourth option, where AIs have the choice to suicide or not, but they cannot use the OOC verb to do so, since that is not its purpose.

The suicide verb is to state OOC that you do not want to play this character anymore (which is why it prevents revives); it should not be used as an insta-deny tool. AIs who want to suicide for the sole purpose of following their laws should use the APC shutdown method.

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Mon Jul 20, 2020 3:05 pm
by XDTM
Cobby wrote:There is a fourth option where AIs have the choice to suicide or not, but they cannot use the OOC verb to do so since that is not the purpose of it.

The suicide verb is to ooc state you do not want to play this character anymore (which is why it prevents revives), it should not be used as an insta-deny tool. AIs who want to suicide for the sole purpose of following their laws should use the APC shutdown method.


It would probably be even better if a 'self-erase' countdown were coded in, with roughly the same death timeframe but clearer in its function to people who haven't read the specific ruling, and easier to undo if the AI is converted before the end.

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Mon Jul 20, 2020 5:25 pm
by zxaber
Code-wise, we could just remove the suicide verb from silicons. Ghosting doesn't deny the AI theft objective, I believe, so if you're in some traitor's pocket and it's boring as all hell, you could probably ghost out without issue.

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Wed Jul 22, 2020 10:20 am
by Dopamiin
hi, it's my fault this thread exists, sorry

For some round context: the dude who had me had shot multiple people, including humans, before; had said something along the lines of "remember u cant hurt humans" or some bullshit (honestly idc); and, most importantly, very clearly used a bluespace launchpad to grab the dangerous modules and specifically picked the one-human one out of the stack. I had practically confirmed that a) he was going to one-human me, and b) he was gonna cause human harm using this. (Also, correct me if I'm wrong, but isn't de-humaning someone considered human harm that needs to be prevented under Law 1?)

I remembered that old-ass thread and suicided. Not because I necessarily have a problem with being subverted (I actually have a lot of fun being evil), but because I was like 99% sure that, law-wise, I had to.

Frankly, I think the reason this issue is confusing is that there are different precedents for either side?

Pro-suicide:
sillycon laws say you should
less confusing route for policy

Anti-suicide:
team conversion antag suicide precedents
Rule 1, kinda? I can see how it'd feel shitty to yoink an AI and then, uh oh, where'd they go

I had more reasons for either side, but I'm too sleepy to think of them rn.

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Wed Jul 22, 2020 11:39 am
by Stickymayhem
If you should suicide to avoid an antag conversion, you should suicide for any law change.

Any law change threatens your ability to prevent human harm under Asimov; therefore you should suicide as soon as someone tries to go to the upload.

Spoiler:
this is why suiciding is dumb

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Wed Jul 22, 2020 12:07 pm
by Vekter
I feel like this should be one of those instances where we know that something works a certain way and should be allowed, but for the sake of Rule 1 we won't.

tl;dr It's kind of anti-fun, isn't it?

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Wed Jul 22, 2020 12:41 pm
by Tlaltecuhtli
XDTM wrote:
Tlaltecuhtli wrote:if you kill yourself while being subverted i think you should just ask admins to offer your free antag to ghosts lol, being subverted doesnt mean you cant be law changed anymore, if you harm alarm "evil man uploading evil laws" there is a good chance that a roboticist will print an upload and swing you back into the crew side.

The pro-suicide argument is that according to the laws you should suicide rather than allow any potential harm coming from you. So even if you can be turned back afterwards, strictly following asimov implies that suicide is necessary.


My argument is that it's not 100% true that it will lead to harm, as your laws could be fixed by the roboticist before any harm happens; so the excuse of suiciding to prevent future harm isn't valid, because the harm might not happen at all (this is a case where you have no power over your upload).

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Thu Jul 23, 2020 6:44 am
by XDTM
Tlaltecuhtli wrote:
XDTM wrote:
Tlaltecuhtli wrote:if you kill yourself while being subverted i think you should just ask admins to offer your free antag to ghosts lol, being subverted doesnt mean you cant be law changed anymore, if you harm alarm "evil man uploading evil laws" there is a good chance that a roboticist will print an upload and swing you back into the crew side.

The pro-suicide argument is that according to the laws you should suicide rather than allow any potential harm coming from you. So even if you can be turned back afterwards, strictly following asimov implies that suicide is necessary.


my argument is that its not 100% true it will lead to harm as your laws could be fixed before any harm happens by the roboticist, so the excuse for suiciding to prevent future harm isnt valid because the harm might not happen at all (this is a case where you have no powers over your upload)


With that logic you could shock doors as Asimov, because there's a chance humans might not touch them.

Re: AIs suiciding to prevent law changes/subversion

PostPosted: Thu Jul 23, 2020 1:58 pm
by Cobby
zxaber wrote:Code-wise, we could just remove the suicide verb from silicons. Ghosting doesn't deny the AI theft objective, I believe, so if you're in some traitor's pocket and it's boring as all hell, you could probably ghost out without issue.


You can also overwrite suicide to do what you want it to do for silicons, without breaking/snowflaking the original verb; they are pretty much magical procs in that regard.
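For what it's worth, the override Cobby describes is ordinary per-type method overriding. A rough Python analogy (the real codebase is DM, and every class and method name below is made up purely for illustration):

```python
class Mob:
    def suicide(self):
        # The original verb: the OOC "I'm done with this character" action,
        # which is why it permanently blocks revival.
        self.dead = True
        self.revivable = False

class Silicon(Mob):
    def suicide(self):
        # Silicon-specific override: redirect to an in-character mechanism
        # instead of the OOC kill, leaving the base verb untouched for
        # every other mob type.
        self.start_apc_shutdown()

    def start_apc_shutdown(self):
        # Dies later unless the APC is repowered, mirroring the
        # "depower yourself" route discussed above.
        self.depowering = True
```

Calling `suicide()` on a `Silicon` here starts the slow depower route, while every other mob keeps the instant OOC behaviour.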