AIs suiciding to prevent law changes/subversion

Armhulen
Global Moderator
Joined: Thu Apr 28, 2016 4:30 pm
Byond Username: Armhulenn
Github Username: bazelart
Location: The Grand Tournament

AIs suiciding to prevent law changes/subversion

Post by Armhulen » #570096

The last thread that answered this was kinda vague, really old, and pretty muddled so I wanted an updated headmin ruling on it.

From the thread in 2016, for context on what this means:

"I just did it. I done it before. I actually said many times I take pride in sacrificing myself as an AI to prevent future and inevitable human harm in any way. I will admit to making a mistake and ghosting instead verbing out (I thought you can't do that) but none the less, lets talk about the act itself.

I was told it's not against the rules. But I was also told it's shit. I'm shit. I'm shit because as AI I didn't allow antag to subvert me (or denied his objective). Law 1 is pretty clear "or through inaction allow human to come to harm". If you know, 100%, you are about to get subverted by harmful element, I would think it's your duty to not allow it by terminating yourself."
^^ The conclusion of the last thread, and of rulings before this point, was that it's okay
Cobby
Code Maintainer
Joined: Sat Apr 19, 2014 7:19 pm
Byond Username: ExcessiveUseOfCobby
Github Username: ExcessiveUseOfCobblestone

Re: AIs suiciding to prevent law changes/subversion

Post by Cobby » #570103

Sure, as long as you don't use the verbs that are for the OOC "I don't want to play the round anymore".

You are free to depower yourself if you think it will be in the best interest of your laws, which puts the subverter on a timer. You should not be abusing the ghost/suicide verbs to basically deny being converted/deconverted.
Voted best trap in /tg/ 2014-current
Tarchonvaagh
Joined: Wed May 01, 2019 9:30 pm
Byond Username: Tarchonvaagh

Re: AIs suiciding to prevent law changes/subversion

Post by Tarchonvaagh » #570108

To be clear, on default Asimov, AIs are NOT allowed to suicide
Cobby
Code Maintainer
Joined: Sat Apr 19, 2014 7:19 pm
Byond Username: ExcessiveUseOfCobby
Github Username: ExcessiveUseOfCobblestone

Re: AIs suiciding to prevent law changes/subversion

Post by Cobby » #570109

the only official ruling on the rules pages is "Do not self-terminate to prevent a traitor from completing the "Steal a functioning AI" objective. "
Voted best trap in /tg/ 2014-current
Timberpoes
In-Game Game Master
Joined: Wed Feb 12, 2020 4:54 pm
Byond Username: Timberpoes

Re: AIs suiciding to prevent law changes/subversion

Post by Timberpoes » #570112

Cobby wrote:the only official ruling on the rules pages is "Do not self-terminate to prevent a traitor from completing the "Steal a functioning AI" objective. "
There's also "AI suiciding to prevent subversion" on https://tgstation13.org/wiki/Headmin_Rulings from uh... 2016. KorPhaeron chimed in on this topic https://tgstation13.org/phpBB/viewtopic ... 76#p221790 with "Should be bannable not to [suicide if you're about to be subverted]" - Although clearly tongue-in-cheek because I suspect we didn't go around banning AIs that didn't suicide when knowingly faced with subversion in the end.
/tg/station Codebase Maintainer
/tg/station Game Master/Discord Jannie: Feed me back in my thread.
/tg/station Admin Trainer: Service guarantees citizenship. Would you like to know more?
Feb 2022-Sep 2022 Host Vote Headmin
Mar 2023-Sep 2023 Admin Vote Headmin
Tarchonvaagh
Joined: Wed May 01, 2019 9:30 pm
Byond Username: Tarchonvaagh

Re: AIs suiciding to prevent law changes/subversion

Post by Tarchonvaagh » #570113

Cobby wrote:the only official ruling on the rules pages is "Do not self-terminate to prevent a traitor from completing the "Steal a functioning AI" objective. "
Very odd, I must have mixed it up with this
Spoiler:
Self-harm-based coercion is a violation of Server Rule 1. The occurrence of such an attempt should be adminhelped and then disregarded.
one.
Although I vaguely remember a ruling that said something along the lines of "suiciding breaks Law 1 (because your destruction may lead to human harm) and maybe Law 3"
Cobby
Code Maintainer
Joined: Sat Apr 19, 2014 7:19 pm
Byond Username: ExcessiveUseOfCobby
Github Username: ExcessiveUseOfCobblestone

Re: AIs suiciding to prevent law changes/subversion

Post by Cobby » #570115

From my understanding, you aren't supposed to suicide FNR (for no reason) because it's believed doing so means you cannot prevent future human harm. That would not be the same as an immediate threat, like someone planning on programming you to kill (ex-)humans.
Voted best trap in /tg/ 2014-current
Tarchonvaagh
Joined: Wed May 01, 2019 9:30 pm
Byond Username: Tarchonvaagh

Re: AIs suiciding to prevent law changes/subversion

Post by Tarchonvaagh » #570117

Yeah
Stickymayhem
Joined: Mon Apr 28, 2014 6:13 pm
Byond Username: Stickymayhem

Re: AIs suiciding to prevent law changes/subversion

Post by Stickymayhem » #570148

Personally I think it comes under the same rules as suiciding to avoid a cult/rev conversion.

Is it perfect asimov? No. But it makes the game playable for antagonists. AI subversion is a valid and important method of sabotage and it'd die overnight if AI suicide was allowed or encouraged.

We have three options:
AIs must attempt suicide when they could be subverted (making this very inconsistent: can they do it to avoid the Captain uploading Tyrant? What about the HoP? The RD? An assistant adding catgirl laws? Where's the line?)
AIs can't suicide to avoid subversion
AIs can personally decide whether to suicide or not (exact same issues as above)

Given that the middle option is the only one that doesn't need weird rulings and interpretations, I think it's reasonable to make that the rule. It lines up with other "don't deny antagonists all opportunities" rules we have. A perfect asimov AI would also make atmos less sabotagable at roundstart, but that's metagaming.
Boris wrote:Sticky is a jackass who has worms where his brain should be, but he also gets exactly what SS13 should be
Super Aggro Crag wrote: Wed Oct 13, 2021 6:17 pm Dont engage with sticky he's a subhuman
SkeletalElite
Joined: Thu Apr 11, 2019 11:14 pm
Byond Username: SkeletalElite
Github Username: SkeletalElite

Re: AIs suiciding to prevent law changes/subversion

Post by SkeletalElite » #570152

Timberpoes wrote: There's also "AI suiciding to prevent subversion" on https://tgstation13.org/wiki/Headmin_Rulings from uh... 2016. KorPhaeron chimed in on this topic https://tgstation13.org/phpBB/viewtopic ... 76#p221790 with "Should be bannable not to [suicide if you're about to be subverted]" - Although clearly tongue-in-cheek because I suspect we didn't go around banning AIs that didn't suicide when knowingly faced with subversion in the end.
You may as well just post what's on the page
Headmin Ruling's wrote:Context: If an AI knows it will be subverted and cause human harm, can it suicide? Example, clock cultists breaking into core, desword traitor in upload about to subvert, etc. It's allowed. He says it should be bannable not to, but you won't get banned for not doing so, it's up to the player.
Stickymayhem wrote: Given that the middle option is the only one that doesn't need weird rulings and interpretations, I think it's reasonable to make that the rule. It lines up with other "don't deny antagonists all opportunities" rules we have. A perfect asimov AI would also make atmos less sabotagable at roundstart, but that's metagaming.
The rules aren't against denying all antagonist opportunities; they're against denying them without good reason. For example, no ultra-fortifying the brig at roundstart, but that changes once you learn there are revs. Basically, don't cuck antags before you even know there's an antag to cuck. Once you know about the antag, cuck away. The only rule really against denying all opportunities is no suiciding to prevent conversion, but that's barely even relevant anymore now that being stunned/restrained prevents suicide.
XDTM
Github User
Joined: Fri Mar 04, 2016 8:38 pm
Byond Username: XDTM
Github Username: XDTM
Location: XDTM

Re: AIs suiciding to prevent law changes/subversion

Post by XDTM » #570156

It would be a far milder issue if AIs were unable to suicide instantly. A suicide on a timer (somewhere between 1-3 minutes, I think) would leave the traitor time to do their subversion (and cancel the suicide timer as a result), but it would still encourage the traitor not to be found out, since that delays the timer starting.
On the AI side of things, they get to offer some resistance instead of either pressing the "You instantly lose" button or feeling like they're self-antagging by not doing so.
a.k.a. Duke Hayka

Coder of golems, virology, hallucinations, traumas, nanites, and a bunch of miscellaneous stuff.
Tlaltecuhtli
Joined: Fri Nov 10, 2017 12:16 am
Byond Username: Tlaltecuhtli

Re: AIs suiciding to prevent law changes/subversion

Post by Tlaltecuhtli » #570171

If you kill yourself while being subverted, I think you should just ask admins to offer your free antag to ghosts lol. Being subverted doesn't mean you can't be law-changed anymore; if you harm alarm "evil man uploading evil laws", there is a good chance that a roboticist will print an upload and swing you back to the crew side.
XDTM
Github User
Joined: Fri Mar 04, 2016 8:38 pm
Byond Username: XDTM
Github Username: XDTM
Location: XDTM

Re: AIs suiciding to prevent law changes/subversion

Post by XDTM » #570175

Tlaltecuhtli wrote:If you kill yourself while being subverted, I think you should just ask admins to offer your free antag to ghosts lol. Being subverted doesn't mean you can't be law-changed anymore; if you harm alarm "evil man uploading evil laws", there is a good chance that a roboticist will print an upload and swing you back to the crew side.
The pro-suicide argument is that according to the laws you should suicide rather than allow any potential harm coming from you. So even if you can be turned back afterwards, strictly following asimov implies that suicide is necessary.
a.k.a. Duke Hayka

Coder of golems, virology, hallucinations, traumas, nanites, and a bunch of miscellaneous stuff.
Gamarr
Joined: Fri Apr 18, 2014 8:10 pm
Byond Username: Gamarr

Re: AIs suiciding to prevent law changes/subversion

Post by Gamarr » #570184

It is kinda shitty, but with the AI it is about 'visible intent.' Like, if you make it obvious that harm happens when Dude X gets their way and your death actually halts part of their plan, it's a clear case.

Example: Let's say the AI is watching a Hostile break into Storage with spare upload and law cards. Said Hostile is evidently armed and prepared for Security to show up. There is no stopping him; the man has tools and has already forcefully repowered their workplace to escape/steal tech.
In a helpless situation, the AI knows it is going to be corrupted and has the one option of killing itself.

It can be argued to be shit by some who feel cheated, but that's just an opinion. It's the only real choice the AI has when things are very obvious in intent and it's out of options.
The problem, perhaps, is that antags/assholes on the server have no subtlety, and often the AI can easily discern what is going to happen. So no, I certainly wouldn't blame that AI for blowing its borgs and killing itself if it's being subverted and there was jack fuckall it could do about it.
Cobby
Code Maintainer
Joined: Sat Apr 19, 2014 7:19 pm
Byond Username: ExcessiveUseOfCobby
Github Username: ExcessiveUseOfCobblestone

Re: AIs suiciding to prevent law changes/subversion

Post by Cobby » #570185

There is a fourth option where AIs have the choice to suicide or not, but they cannot use the OOC verb to do so since that is not the purpose of it.

The suicide verb is to state OOC that you do not want to play this character anymore (which is why it prevents revives); it should not be used as an insta-deny tool. AIs who want to suicide for the sole purpose of following their laws should use the APC shutdown method.
Voted best trap in /tg/ 2014-current
XDTM
Github User
Joined: Fri Mar 04, 2016 8:38 pm
Byond Username: XDTM
Github Username: XDTM
Location: XDTM

Re: AIs suiciding to prevent law changes/subversion

Post by XDTM » #570195

Cobby wrote:There is a fourth option where AIs have the choice to suicide or not, but they cannot use the OOC verb to do so since that is not the purpose of it.

The suicide verb is to state OOC that you do not want to play this character anymore (which is why it prevents revives); it should not be used as an insta-deny tool. AIs who want to suicide for the sole purpose of following their laws should use the APC shutdown method.
It would probably be even better if a 'self-erase' countdown were coded in with roughly the same death timeframe, but clearer in its function to people who haven't read the specific ruling, and easier to undo if the AI is converted before the end.
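(For illustration only: a minimal sketch of what such a cancellable self-erase countdown could look like in DM. The verb, proc, and var names below are made up for this sketch, and it assumes tgstation-style timer helpers like addtimer/CALLBACK/deltimer; it is not actual tg code.)

Code: Select all

/mob/living/silicon/ai
    var/self_erase_timer // illustrative var: tracks a pending self-erase countdown

/mob/living/silicon/ai/verb/self_erase()
    set name = "Self-Erase"
    set category = "AI Commands"
    set desc = "Begin (or cancel) a delayed wipe of this unit."

    if(self_erase_timer) // using the verb again cancels the countdown, e.g. after the laws change
        deltimer(self_erase_timer)
        self_erase_timer = null
        to_chat(src, "Self-erasure aborted.")
        return
    to_chat(src, "Self-erasure will complete in 2 minutes unless cancelled.")
    self_erase_timer = addtimer(CALLBACK(src, PROC_REF(finish_self_erase)), 2 MINUTES, TIMER_STOPPABLE)

/mob/living/silicon/ai/proc/finish_self_erase()
    self_erase_timer = null
    death() // same end state as an instant suicide, minus the insta-deny

Something like this would leave the OOC suicide verb untouched while giving the subverter the window described above.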
a.k.a. Duke Hayka

Coder of golems, virology, hallucinations, traumas, nanites, and a bunch of miscellaneous stuff.
zxaber
In-Game Admin
Joined: Mon Sep 10, 2018 12:00 am
Byond Username: Zxaber

Re: AIs suiciding to prevent law changes/subversion

Post by zxaber » #570224

Code-wise, we could just remove the suicide verb from silicons. Ghosting doesn't deny the AI theft objective, I believe, so if you're in some traitor's pocket and it's boring as all hell, you could probably ghost out without issue.
Douglas Bickerson / Adaptive Manipulator / Digital Clockwork
OrdoM/(Viktor Bergmannsen) (ghost) "Also Douglas, you're becoming the Lexia Black of Robotics"
Dopamiin
Joined: Tue Jul 14, 2020 11:30 am
Byond Username: Dopamiin

Re: AIs suiciding to prevent law changes/subversion

Post by Dopamiin » #570384

hi, it's my fault this thread exists, sorry

For some round context: the dude who had me had shot multiple people, including humans, before; had said something along the lines of "remember u cant hurt humans" or some bullshit, honestly idc; and, most importantly, very clearly used a bluespace launchpad to grab the dangerous modules and specifically picked the one-human one out of the stack. I had practically confirmed that (a) he was going to one-human me, and (b) he was gonna cause human harm using this. (Also, correct me if I'm wrong, but isn't de-humaning someone considered human harm that needs to be prevented under Law 1?)

I remembered that old-ass thread and suicided. Not because I necessarily have a problem with being subverted - I actually have a lot of fun being evil - but because I was like 99% sure that, law-wise, I had to.

Frankly, I think the reason this issue is confusing is that there are different precedents for either side:

pro-suicide:
sillycon laws say you should; less confusing route for policy

anti-suicide:
team conversion antag suicide precedents
Rule 1, kinda? I can see how it'd feel shitty to yoink an AI and then, uh oh, where'd they go

I had more reasons for either side but I'm too sleepy to think of them rn
plays on manuel
Stickymayhem
Joined: Mon Apr 28, 2014 6:13 pm
Byond Username: Stickymayhem

Re: AIs suiciding to prevent law changes/subversion

Post by Stickymayhem » #570390

If you should suicide for an antag conversion, you should suicide for any law change.

Any law change threatens your ability to prevent human harm under asimov; therefore, you should suicide as soon as someone tries to go to the upload.
Spoiler:
this is why suiciding is dumb
Boris wrote:Sticky is a jackass who has worms where his brain should be, but he also gets exactly what SS13 should be
Super Aggro Crag wrote: Wed Oct 13, 2021 6:17 pm Dont engage with sticky he's a subhuman
Vekter
In-Game Admin
Joined: Thu Apr 17, 2014 10:25 pm
Byond Username: Vekter
Location: Fucking around with the engine.

Re: AIs suiciding to prevent law changes/subversion

Post by Vekter » #570391

I feel like this should be one of those instances where we know that something works a certain way and should be allowed, but for the sake of Rule 1 we don't allow it.

tl;dr It's kind of anti-fun, isn't it?
AliasTakuto wrote: Thu Jan 04, 2024 1:11 pm As for the ear replacing stuff, you can ask Anne but I don't think this is what I was banned for. If I was all I can say is "Sorry for being hilarious"...
Omega_DarkPotato wrote:This sucks, dude.
Spoiler:
Reply PM from-REDACTED/(REDACTED): i tried to remove the bruises by changing her gender

PM: Bluespace->Delaron: Nobody wants a mime's asscheeks farting on their brig windows.

PM: REDACTED->HotelBravoLima: Oh come on, knowing that these are hostile aliens is metagaming

[17:43] <Aranclanos> any other question ping me again
[17:43] <Vekter> Aranclanos for nicest coder 2015
[17:44] <Aranclanos> fuck you
Tlaltecuhtli
Joined: Fri Nov 10, 2017 12:16 am
Byond Username: Tlaltecuhtli

Re: AIs suiciding to prevent law changes/subversion

Post by Tlaltecuhtli » #570392

XDTM wrote:
Tlaltecuhtli wrote:If you kill yourself while being subverted, I think you should just ask admins to offer your free antag to ghosts lol. Being subverted doesn't mean you can't be law-changed anymore; if you harm alarm "evil man uploading evil laws", there is a good chance that a roboticist will print an upload and swing you back to the crew side.
The pro-suicide argument is that according to the laws you should suicide rather than allow any potential harm coming from you. So even if you can be turned back afterwards, strictly following asimov implies that suicide is necessary.
My argument is that it's not 100% true that it will lead to harm, as your laws could be fixed by the roboticist before any harm happens, so the excuse of suiciding to prevent future harm isn't valid because the harm might not happen at all (this is a case where you have no power over your upload).
XDTM
Github User
Joined: Fri Mar 04, 2016 8:38 pm
Byond Username: XDTM
Github Username: XDTM
Location: XDTM

Re: AIs suiciding to prevent law changes/subversion

Post by XDTM » #570460

Tlaltecuhtli wrote:
XDTM wrote:
Tlaltecuhtli wrote:If you kill yourself while being subverted, I think you should just ask admins to offer your free antag to ghosts lol. Being subverted doesn't mean you can't be law-changed anymore; if you harm alarm "evil man uploading evil laws", there is a good chance that a roboticist will print an upload and swing you back to the crew side.
The pro-suicide argument is that according to the laws you should suicide rather than allow any potential harm coming from you. So even if you can be turned back afterwards, strictly following asimov implies that suicide is necessary.
My argument is that it's not 100% true that it will lead to harm, as your laws could be fixed by the roboticist before any harm happens, so the excuse of suiciding to prevent future harm isn't valid because the harm might not happen at all (this is a case where you have no power over your upload).
With that logic you could shock doors as asimov because there's a chance humans might not touch them
a.k.a. Duke Hayka

Coder of golems, virology, hallucinations, traumas, nanites, and a bunch of miscellaneous stuff.
Cobby
Code Maintainer
Joined: Sat Apr 19, 2014 7:19 pm
Byond Username: ExcessiveUseOfCobby
Github Username: ExcessiveUseOfCobblestone

Re: AIs suiciding to prevent law changes/subversion

Post by Cobby » #570483

zxaber wrote:Code-wise, we could just remove the suicide verb from silicons. Ghosting doesn't deny the AI theft objective, I believe, so if you're in some traitor's pocket and it's boring as all hell, you could probably ghost out without issue.
You can also overwrite suicide to do what you want it to do for silicons without breaking/snowflaking the original verb; they are pretty much magical procs in that regard.
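(As a rough illustration of that point: the sketch below assumes the base verb is defined at /mob/living/verb/suicide(), which may not match the actual codebase. Because verbs are procs, redefining it on the AI subtype changes what it does for AIs without touching the verb every other mob uses.)

Code: Select all

// Sketch only: assumed base verb path is /mob/living/verb/suicide().
/mob/living/silicon/ai/suicide()
    to_chat(src, "The suicide verb is disabled for silicons; depower your APC instead.")
    return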
Voted best trap in /tg/ 2014-current
Coconutwarrior97
In-Game Game Master
Joined: Fri Oct 06, 2017 3:14 am
Byond Username: Coconutwarrior97

Re: AIs suiciding to prevent law changes/subversion

Post by Coconutwarrior97 » #576929

We agree with Cobby that AIs should not be using the OOC suicide verb to avoid being captured. Depowering is a fine alternative to this, as it's an IC reaction to the situation.

Headmin Votes:
Coconutwarrior97: Yes, I like Cobby's take and think depowering is pretty cool as an alternative to a button click and insta-denial of a traitor's objective.
Domitius: Yes.
Naloac: Yes.
Locked
