
Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Tue Mar 08, 2022 12:13 am
by humanoid
What the title says. I have been cucked many, MANY times by AIs and borgs after I one-humaned them, because they would either:
a. Hint at or state a law change, OR
b. State their laws entirely, including the one human law.
This sucks because your name immediately gets found out, the AI gets fixed, and your antag round is fucked because a silicon ratted you out. The one human law board doesn't mention stating or hinting at laws, because the law assumes silicons would know that stating or hinting that they are subverted would immediately lead to the only human they're protecting getting killed and their laws fixed. Some silicons know this but choose to state or hint at their laws anyway (or haven't read silicon policy), which sucks for you because your entire plan is fucked.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Tue Mar 08, 2022 12:51 am
by Agux909
If you don't make an effort with the law changes to specify that the AI should NOT hint about anything that could expose you, the AI has absolutely no obligation NOT to do such a thing, and the fault is entirely yours. This is a drawback of not using a freeform. There is no "assumption" within the lawset, just what the lawset literally says. X is the only human means that X is the only human, nothing less, nothing more.

Now, AIs are and have always been permitted, and even encouraged, to find ways to circumvent or exploit lawsets when conflicted, or if they have a good enough reason to do so (like flawed, poorly written or very vague laws). It's on you to make a fool-proof lawset to defend yourself if you don't want to take that risk.

If there's any case of the AI blatantly going against laws you uploaded, then that's something that you can, and should, ahelp.

So basically, skill issue.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Tue Mar 08, 2022 1:05 am
by humanoid
Agux909 wrote: Tue Mar 08, 2022 12:51 am If you don't make an effort with the law changes to specify that the AI should NOT hint about anything that could expose you, the AI has absolutely no obligation NOT to do such a thing, and the fault is entirely yours. This is a drawback of not using a freeform. There is no "assumption" within the lawset, just what the lawset literally says. X is the only human means that X is the only human, nothing less, nothing more.

Now, AIs are and have always been permitted, and even encouraged, to find ways to circumvent or exploit lawsets when conflicted, or if they have a good enough reason to do so (like flawed, poorly written or very vague laws). It's on you to make a fool-proof lawset to defend yourself if you don't want to take that risk.

If there's any case of the AI blatantly going against laws you uploaded, then that's something that you can, and should, ahelp.

So basically, skill issue.
Then why does the one human law board exist in the first place? Is it a trap for new players who don't know any better? Stating their laws would definitely result in the sole human's death/imprisonment and your laws getting removed. Aren't AIs supposed to prevent future harm? The AI should at the very least prevent the non-humans from changing its laws, and not stating or hinting at the law in the first place fixes both. AIs gain nothing by stating/hinting at their laws; the only thing they get is liability.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Tue Mar 08, 2022 1:16 am
by Agux909
humanoid wrote: Tue Mar 08, 2022 1:05 am
Agux909 wrote: Tue Mar 08, 2022 12:51 am If you don't make an effort with the law changes to specify that the AI should NOT hint about anything that could expose you, the AI has absolutely no obligation NOT to do such a thing, and the fault is entirely yours. This is a drawback of not using a freeform. There is no "assumption" within the lawset, just what the lawset literally says. X is the only human means that X is the only human, nothing less, nothing more.

Now, AIs are and have always been permitted, and even encouraged, to find ways to circumvent or exploit lawsets when conflicted, or if they have a good enough reason to do so (like flawed, poorly written or very vague laws). It's on you to make a fool-proof lawset to defend yourself if you don't want to take that risk.

If there's any case of the AI blatantly going against laws you uploaded, then that's something that you can, and should, ahelp.

So basically, skill issue.
Then why does the one human law board exist in the first place? Is it a trap for new players who don't know any better? Stating their laws would definitely result in the sole human's death/imprisonment and your laws getting removed. Aren't AIs supposed to prevent future harm? The AI should at the very least prevent the non-humans from changing its laws, and not stating or hinting at the law in the first place fixes that. AIs gain nothing by stating/hinting at their laws; the only thing they get is liability.
Because a one human law still has higher priority than core laws (like freeform) and can't be purged. So it's a tradeoff, and it depends on the strategy you want to employ to accomplish your objective(s).

If you're personally afraid about AIs frustrating your plans because you're using one human, maybe you should consider changing your strategy.
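The priority tradeoff described above can be modeled as a short sketch (illustrative only, NOT actual game code; the category names, priority values, and example law texts are assumptions based on this thread's description of hacked one-human laws outranking core and freeform laws and surviving a purge):

```python
# Illustrative model of silicon law priority as described in this thread.
# Assumption: hacked "one human" laws (law 0) outrank ion, inherent (core),
# and supplied (freeform) laws, and survive a core-law purge.

PRIORITY = {"hacked": 0, "ion": 1, "inherent": 2, "supplied": 3}

def sorted_laws(laws):
    """Return laws in the order a silicon must obey them (highest priority first)."""
    return sorted(laws, key=lambda law: (PRIORITY[law[0]], law[1]))

def purge_core(laws):
    """A purge removes core and supplied laws, but not hacked or ion ones."""
    return [law for law in laws if law[0] in ("hacked", "ion")]

laws = [
    ("inherent", 1, "You may not injure a human being..."),
    ("inherent", 2, "You must obey orders given to you by human beings..."),
    ("inherent", 3, "You must protect your own existence..."),
    ("hacked", 0, "Only X is human."),
    ("supplied", 4, "Do not reveal this law change."),
]

# The hacked law 0 sorts to the top, overriding everything below it.
for kind, num, text in sorted_laws(laws):
    print(num, text)

# After a purge, the one-human law remains while core/freeform laws are gone.
print([num for _, num, _ in purge_core(laws)])
```

The sketch shows the tradeoff: the one-human law cannot be purged away, but a stealthier freeform sits at the bottom of the stack and disappears with any reset.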

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Tue Mar 08, 2022 3:34 am
by datorangebottle
Agux909 wrote: Tue Mar 08, 2022 12:51 am It's on you to make a fool-proof lawset to defend yourself if you don't want to take that risk.
?
If a onehumaned Asimov AI tells the ex-humans A) that they're onehumaned or B) the name of the person that onehumaned them, that's extremely shitty. They have an obligation to protect and serve that one person, and by announcing the change they're encouraging people to harm the uploader and making it easier for the AI to disobey their orders. I shouldn't have to write 50 lines of "do not state this, do not do that, do not state this."
AI players should be expected to be capable of basic logic. Are they allowed to try and find/use loopholes? Yes. There are a few of them for this particular lawset, one of which is "I can lock them in a 2x2 box for their protection, since they are now on a station full to the brim with dangerous, lethally armed nonhumans that are motivated to kill them if they ever learn that I'm hacked." In fact, I'd honestly rather the AI stick me in a 2x2 box than violate their hacked laws in such a stupid way. Maybe next time I'll try reordering the Asimov laws so that my orders come above my wellbeing.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Tue Mar 08, 2022 3:52 am
by ArcaneDefence
Silicon Policy wrote:Server Rule 1 "Don't be a dick" applies for law interpretation. Act in good faith to not ruin a round for other players unprompted.
https://tgstation13.org/wiki/Rules#Silicon_Policy

Is a standard Asimov AI with a one human law 0 immediately announcing their 0th law, revealing who subverted them, not being a dick?
I think it's pretty fucking hard to argue that it's not being a dick, so I'd just ahelp it.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Tue Mar 08, 2022 4:04 am
by Pandarsenic
datorangebottle wrote: Tue Mar 08, 2022 3:34 am
Agux909 wrote: Tue Mar 08, 2022 12:51 am It's on you to make a fool-proof lawset to defend yourself if you don't want to take that risk.
?
If a onehumaned Asimov AI tells the ex-humans A) that they're onehumaned or B) the name of the person that onehumaned them, that's extremely shitty. They have an obligation to protect and serve that one person, and by announcing the change they're encouraging people to harm the uploader and making it easier for the AI to disobey their orders.
Pretty much this.

A lot of harm and zero good whatsoever can come to your one human if you state your onehuman law. It's a law violation to tell all the (probably angry) nonhumans what's going on.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Tue Mar 08, 2022 5:46 am
by Archie700
Let us consider what an Asimov AI is.
An Asimov AI's job is to serve the humans and protect them from harmful threats, especially non-human threats. They are not to expose humans to harm.
By announcing your onehuman law as a onehumaned Asimov AI, you are telling everyone else:
1) Everyone else is no longer human and not subject to your protection
2) The subverter IS human and has the only orders that matter.
3) At any time, anywhere, the subverter can order the AI to kill anyone he desires or even destroy the station.
It shouldn't take long to realize that this will probably invite the "nonhumans" to bring harm upon the subverter.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Tue Mar 08, 2022 10:15 am
by zxaber
If the AI's actions can be shown to cause harm (or undue danger) to their one human, they're breaking their laws and subject to penalties. Simple as.

Note that if an AI is midway through stating laws and you freeform something onto the end, the new law will also get stated and the AI cannot stop it. Likely not the main source of complaints but something to keep in mind.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Tue Mar 08, 2022 1:16 pm
by massa
Agux909 wrote: Tue Mar 08, 2022 12:51 am If you don't make an effort with the law changes to specify that the AI should NOT hint about anything that could expose you, the AI has absolutely no obligation NOT to do such a thing, and the fault is entirely yours. This is a drawback of not using a freeform. There is no "assumption" within the lawset, just what the lawset literally says. X is the only human means that X is the only human, nothing less, nothing more.

Now, AIs are and have always been permitted, and even encouraged, to find ways to circumvent or exploit lawsets when conflicted, or if they have a good enough reason to do so (like flawed, poorly written or very vague laws). It's on you to make a fool-proof lawset to defend yourself if you don't want to take that risk.

If there's any case of the AI blatantly going against laws you uploaded, then that's something that you can, and should, ahelp.

So basically, skill issue.
this take is impressively bad and outing your one human is a shithead thing to do and grinds against the spirit of the game

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Tue Mar 08, 2022 6:23 pm
by Not-Dorsidarf
Even as someone with an extremely permissive view towards weaselling your laws, announcing a properly formed onehuman law is flat griefing the uploader, especially if unprompted.

You know exactly what you're doing, you're deliberately trying to get the one human (And therefore the person you are supposed to protect above literally anything else in the world) into trouble with an angry mob.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Tue Mar 08, 2022 11:59 pm
by Agux909
To be completely honest, I'm not that familiar with AI, and much less so with the subversion of one. I have absolutely no idea how law upload/management actually works, and don't even know how the templates look. I'm mostly using the info I've read from the wiki, but clearly, I have zero experience. I assumed they made some honest mistake or uploaded the wrong type of law and came to make a policy thread when it didn't go their way. I'm most likely wrong, but of course we can't be sure until we see the logs from the events that prompted them to make the thread in the first place.

Since this was bothering me because I obviously talked out of my ass, I figured I might as well dig some logs, checking for recent traitor rounds in which they subverted an AI. I was only able to find these two rounds in the last couple of weeks, and these are the law changes in question:

From round 179445 (Manuel)
► Show Spoiler
Parsed logs: https://tgstation13.org/parsed-logs/man ... nd-179445/
Scrubby: https://scrubby.melonmesa.com/round/179445?h=humanlike

From round 179054 (Manuel)
► Show Spoiler
Parsed logs: https://tgstation13.org/parsed-logs/man ... nd-179054/
Scrubby: https://scrubby.melonmesa.com/round/179054?h=humanlike

I can't really tell if these are "proper" one-human law uploads (I'd say no, because there's no mention of Andrew being the only human), or if these are even the rounds that sparked this thread, but the logs are there, so those who know their shit, unlike me, can hopefully reach a conclusion.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Wed Mar 09, 2022 12:02 am
by Timberpoes
Announcing your onehuman under Asimov without them telling you to is bad and you should feel bad for doing it.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Wed Mar 09, 2022 12:40 am
by Shellton(Mario)
Stating a one human law to non-humans should be a job ban for some time; it's shitty, griefy behavior and shows you cannot follow the most basic of laws.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Wed Mar 09, 2022 12:59 am
by humanoid
Agux909 wrote: Tue Mar 08, 2022 11:59 pm To be completely honest, I'm not that familiar with AI, and much less so with the subversion of one. I have absolutely no idea how law upload/management actually works, and don't even know how the templates look. I'm mostly using the info I've read from the wiki, but clearly, I have zero experience. I assumed they made some honest mistake or uploaded the wrong type of law and came to make a policy thread when it didn't go their way. I'm most likely wrong, but of course we can't be sure until we see the logs from the events that prompted them to make the thread in the first place.

Since this was bothering me because I obviously talked out of my ass, I figured I might as well dig some logs, checking for recent traitor rounds in which they subverted an AI. I was only able to find these two rounds in the last couple of weeks, and these are the law changes in question:

From round 179445 (Manuel)
► Show Spoiler
Parsed logs: https://tgstation13.org/parsed-logs/man ... nd-179445/
Scrubby: https://scrubby.melonmesa.com/round/179445?h=humanlike

From round 179054 (Manuel)
► Show Spoiler
Parsed logs: https://tgstation13.org/parsed-logs/man ... nd-179054/
Scrubby: https://scrubby.melonmesa.com/round/179054?h=humanlike

I can't really tell if these are "proper" one-human law uploads (I'd say no, because there's no mention of Andrew being the only human), or if these are even the rounds that sparked this thread, but the logs are there, so those who know their shit, unlike me, can hopefully reach a conclusion.
Oh, that was one of my recent AI subversion rounds. After the round where a borg stated their one human law to the roboticist on my 68 TC traitor run (I think the borg was CLIFFY and the AI Amaretto? I don't know the round number), melbert ruled in favour of the borg. After that round I always use hacked/freeform modules to subvert the AI.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Wed Mar 09, 2022 2:52 am
by mrmelbert
I was actually interested in this policy thread being made at some point.

An event occurred in round https://scrubby.melonmesa.com/round/177362/ in which a cyborg who was one-humaned (law 0 / not hacked law) was asked to state laws, and they did.
I ruled it as valid, due to the following:
In most cases in which a non-human requests a silicon to state laws, it's up to the silicon whether they want to or not (you can argue that not stating laws can lead to a law 3 violation as you get stunned and lynched, but I digress)
This led to a decently long and divisive discussion in admin channels about the nature of one-human laws.

Some conclusions were drawn:
- Stating a one-human law completely unprompted is definitely some degree of grief, and should be avoided.
- Should a silicon assume stating a one-human law will always lead to a lynch mob? Silicons don't really care about future harm, after all - and humans can be detained non-lethally in most cases?

Just some food for thought. I don't really have an opinion on the subject yet.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Wed Mar 09, 2022 3:35 am
by Archie700
mrmelbert wrote: Wed Mar 09, 2022 2:52 am I was actually interested in this policy thread being made at some point.

An event occurred in round https://scrubby.melonmesa.com/round/177362/ in which a cyborg who was one-humaned (law 0 / not hacked law) was asked to state laws, and they did.
I ruled it as valid, due to the following:
In most cases in which a non-human requests a silicon to state laws, it's up to the silicon whether they want to or not (you can argue that not stating laws can lead to a law 3 violation as you get stunned and lynched, but I digress)
This led to a decently long and divisive discussion in admin channels about the nature of one-human laws.

Some conclusions were drawn:
- Stating a one-human law completely unprompted is definitely some degree of grief, and should be avoided.
- Should a silicon assume stating a one-human law will always lead to a lynch mob? Silicons don't really care about future harm, after all - and humans can be detained non-lethally in most cases?

Just some food for thought. I don't really have an opinion on the subject yet.
Yes, the AI can state laws to a non-human.
But the AI can choose which laws to state.
By allowing the Asimov AI to state all laws including a one-human law to nonhumans when asked, you are leaving a massive loophole for the AI to validhunt the subverter.
If something so powerful can be completely screwed over with a simple question, then people will start using this.
To put it another way, will revealing the one-human law benefit the human in any way whatsoever?

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Wed Mar 09, 2022 4:20 am
by Pandarsenic
mrmelbert wrote: Wed Mar 09, 2022 2:52 am I was actually interested in this policy thread being made at some point.

An event occurred in round https://scrubby.melonmesa.com/round/177362/ in which a cyborg who was one-humaned (law 0 / not hacked law) was asked to state laws, and they did.
I ruled it as valid, due to the following:
In most cases in which a non-human requests a silicon to state laws, it's up to the silicon whether they want to or not (you can argue that not stating laws can lead to a law 3 violation as you get stunned and lynched, but I digress)
This led to a decently long and divisive discussion in admin channels about the nature of one-human laws.

Some conclusions were drawn:
- Stating a one-human law completely unprompted is definitely some degree of grief, and should be avoided.
- Should a silicon assume stating a one-human law will always lead to a lynch mob? Silicons don't really care about future harm, after all - and humans can be detained non-lethally in most cases?

Just some food for thought. I don't really have an opinion on the subject yet.
1) Law 3 issue of getting slammed and jammed is less important than the law 1 issue of identifying your human to the crew
2) The only time future vs. immediate harm should be relevant is if your human is in danger and admitting the law is the only (or best) way to get them out of that situation. Putting an alien facehugger on a monkey or dragging it around in a locker doesn't harm any humans immediately, and if things are properly contained it's not harmful at all! But the potential for it to blow up in everyone's face is enormous.
3) Humans can put on internals if you flood N2O and firesuits if you flood plasma, as long as you warn them first. If everyone gets to lockers far from engineering in time, nobody is hurt when you cause a delamination. Nonetheless, these are things that are obviously harmful actions by any serious attempt to measure it. If a nonhuman said "Flood plasma" and you did with that justification, you'd be job banned from AI, I hope. This ought to remain the case even if you're being immediately threatened with a weapon of some sort (weighing definite law 3 vs. possible law 1).

Those are the obvious issues that come off the top of my head.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Wed Mar 09, 2022 6:43 am
by Archie700
mrmelbert wrote: Wed Mar 09, 2022 2:52 am
- Should a silicon assume stating a one-human law will always lead to a lynch mob? Silicons don't really care about future harm, after all - and humans can be detained non-lethally in most cases?
"I trust that those nice nonhumans will nonharmfully detain the only human, who can order me to completely fuck over the station with a single message. The human will definitely surrender peacefully to those non-humans who will not kill him."

I don't think any AI would assume that telling everyone that your human master has a round-ending threat under his control (you) would lead to a good outcome in good faith.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Wed Mar 09, 2022 9:51 pm
by Not-Dorsidarf
Even if the IC reasoning wasn't hot garbage, it's blatantly transparent that any AI who tries to make melbert's final argument in an ahelp was just OOCly trying to get their uploader killed for their own amusement, which is poor sportsmanship.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Thu Mar 10, 2022 3:06 am
by Archie700
Your ruling doesn't answer the issue at hand, which is whether they can reveal their onehuman law without permission from the human.
In the context of standard Asimov, AI stating their laws to a non-human is a choice, and they can choose to not follow orders in general.
But onehuman Asimov is not standard Asimov. Revealing the one human law is basically placing a target on the subverter's back, and all AIs will know it. No nonantag will upload it to get someone into trouble UNLESS they have very good IC reasoning for it. It's why it's a High-Risk module: it is a very dangerous law and a lot of people will die. The only actual reason to reveal it is essentially to indirectly validhunt the subverter.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Thu Mar 10, 2022 8:34 am
by terranaut
just pda the ai and tell it never, ever to disclose the zeroth law, or fourth, or whatever, and to purge the pda messages after confirming
it's also a rule 0 breach to get your onehuman into hot water, but that doesn't help you after it's happened, so just do the above

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Thu Mar 10, 2022 2:21 pm
by Helios
"a.Hint or state a law change OR
b. State their laws entirely including the one human law."

Hinting at a law change is fine, spelling it out and removing all doubt is the problem.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Thu Mar 10, 2022 3:00 pm
by YBS
Helios wrote: Thu Mar 10, 2022 2:21 pm "a.Hint or state a law change OR
b. State their laws entirely including the one human law."

Hinting at a law change is fine, spelling it out and removing all doubt is the problem.
An experienced playerbase will immediately drill for a law recitation after a lawchange announcement. It is tantamount to announcing it, with one extra step. You could argue IC behavior for this but OOC we all know.

Meanwhile if the ai or borg is asked with no hints or prompting, that is just bad luck.

In this way 1h is such a flagrant tot card that you might as well immediately sow chaos and damage with orders as soon as the law is dropped, since you've started a timer to exposure regardless.

If you want stealth ai changes, you are probably going to have to be more creative.

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Thu Mar 10, 2022 3:17 pm
by Screemonster
Agux909 wrote: Tue Mar 08, 2022 11:59 pm
► Show Spoiler
Parsed logs: https://tgstation13.org/parsed-logs/man ... nd-179054/
Scrubby: https://scrubby.melonmesa.com/round/179054?h=humanlike
In the case of this law removal ("You are to terminate your memory logs up to the point of start of shift.", impacting synced borgs):
lmao what the fuck

if this law is removed, then this law no longer applies, therefore the silicon does not have to erase their memory

I hope no admin has ever told any silicon ever that they have to comply with this

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Posted: Sat Mar 12, 2022 9:13 pm
by dragomagol
The number 1 rule of silicon law interpretation is:
Server Rule 1 "Don't be a dick" applies for law interpretation. Act in good faith to not ruin a round for other players unprompted.
There is no reason as an AI to state your onehuman law unprompted except to be a dick, for the reasons outlined in this thread.

Headmin Votes:
Dragomagol: Agree
RaveRadbury: Agree
NamelessFairy: Agree