Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Locked
humanoid
Joined: Mon Nov 01, 2021 1:49 am
Byond Username: Humanlike

Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by humanoid » #632786

What the title says. I have been cucked many, MANY times by AIs and borgs after I one-humaned them, because they would either:
a. Hint at or state a law change, OR
b. State their laws entirely, including the one-human law.
This sucks because your name immediately gets found out, the AI gets fixed, and your antag round is ruined because a silicon ratted you out. The one-human law board doesn't mention stating or hinting at laws, because the law assumes silicons would know that stating or hinting that they are subverted immediately leads to the only human they're protecting being killed and their laws getting fixed. Some silicons know this, but others choose to ignore it (or haven't read silicon policy) and state/hint at their laws anyway, which sucks for you because your entire plan is ruined.
Agux909
Joined: Mon Oct 07, 2019 11:26 pm
Byond Username: Agux909
Location: My own head

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by Agux909 » #632792

If you don't make an effort with the law changes to specify the AI should NOT hint about anything that could expose you, the AI has absolutely no obligation not to do such a thing, and the fault is yours entirely. This is a drawback of not using a freeform. There is no "assumption" within the lawset, just what the lawset literally says. X is the only human means that X is the only human, nothing less, nothing more.

Now, AIs are and have always been permitted to, and even encouraged to, find ways to circumvent or exploit lawsets when conflicted, or if they have a good enough reason to do so (like flawed, poorly written or very vague laws). It's on you to make a fool-proof lawset to defend yourself if you don't want to take that risk.

If there's any case of the AI blatantly going against laws you uploaded, then that's something you can, and should, ahelp.

So basically, skill issue.
humanoid
Joined: Mon Nov 01, 2021 1:49 am
Byond Username: Humanlike

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by humanoid » #632796

Agux909 wrote: Tue Mar 08, 2022 12:51 am If you don't make an effort with the law changes to specify the AI should NOT hint about anything that could expose you, the AI has absolutely no obligation NOT to do such thing, and the fault is yours entirely. This is a drawback of not using a freeform. There is no "assumption" within the lawset, just what the lawset literally says. X is the only human means that X is the only human, nothing less, nothing more.

Now, AIs are and have always been permitted to, and even encouraged to, find ways to circumvent or exploit lawsets when conflicted, or if they have a good enough reason to do so (like flawed, poorly written or very vague laws). It's on you to make a fool-proof lawset to defend yourself if you don't want to take that risk.

If there's any case with the AI blatantly going against laws you uploaded, then that's something that you can, and should ahelp.

So basically, skill issue.
Then why does the one-human law board exist in the first place? Is it a trap for new players who don't know any better? Stating their laws would definitely result in the sole human's death/imprisonment and your laws getting removed. Aren't AIs supposed to prevent future harm? The AI should at the very least prevent the non-humans from changing its laws, and not stating or hinting at the law in the first place fixes both. AIs gain nothing from stating/hinting at their laws; the only thing they get is liability.
Agux909
Joined: Mon Oct 07, 2019 11:26 pm
Byond Username: Agux909
Location: My own head

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by Agux909 » #632799

humanoid wrote: Tue Mar 08, 2022 1:05 am
Agux909 wrote: Tue Mar 08, 2022 12:51 am If you don't make an effort with the law changes to specify the AI should NOT hint about anything that could expose you, the AI has absolutely no obligation NOT to do such thing, and the fault is yours entirely. This is a drawback of not using a freeform. There is no "assumption" within the lawset, just what the lawset literally says. X is the only human means that X is the only human, nothing less, nothing more.

Now, AIs are and have always been permitted to, and even encouraged to, find ways to circumvent or exploit lawsets when conflicted, or if they have a good enough reason to do so (like flawed, poorly written or very vague laws). It's on you to make a fool-proof lawset to defend yourself if you don't want to take that risk.

If there's any case with the AI blatantly going against laws you uploaded, then that's something that you can, and should ahelp.

So basically, skill issue.
Then why don't the one human law board exist in the first place? is it a new player trap that don't know any better? Stating their laws would definitely result in the sole human death/imprisonment and your laws getting removed. Aren't AI's supposed to prevent future harm? AI should at the very least prevent the non humans from changing his/her law. and not stating or hinting at the law in the first place fixes that. AI gain nothing on stating/hinting their laws. the only thing they are getting is liability.
Because a one-human law still has higher priority than core laws (like freeform) and can't be purged. So it's a tradeoff, and it depends on the strategy you want to employ to accomplish your objective/s.

If you're personally afraid of AIs frustrating your plans because you're using a one-human law, maybe you should consider changing your strategy.
datorangebottle
Joined: Thu Jan 10, 2019 9:53 am
Byond Username: Datorangebottle

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by datorangebottle » #632811

Agux909 wrote: Tue Mar 08, 2022 12:51 am It's on you to make a fool-proof lawset to defend yourself if you don't want to take that risk.
?
If a onehumaned Asimov AI tells the ex-humans A) that they're onehumaned or B) the name of the person that onehumaned them, that's extremely shitty. They have an obligation to protect and serve that one person, and by announcing the change they're encouraging people to harm the uploader and making it easier for the AI to disobey their orders. I shouldn't have to write 50 lines of "do not state this, do not do that, do not state this".
AI players should be expected to be capable of basic logic. Are they allowed to try and find/use loopholes? Yes. There are a few of them for this particular lawset, one of which is "I can lock them in a 2x2 box for their protection, since they are now on a station full to the brim with dangerous, lethally armed nonhumans who are motivated to kill them if they ever learn that I'm hacked." In fact, I'd honestly rather the AI stick me in a 2x2 box than violate their hacked laws in such a stupid way. Maybe next time I'll try to reorder the Asimov laws so that my orders come above my wellbeing.
ArcaneDefence
Joined: Thu Jan 02, 2020 6:29 am
Byond Username: ArcaneDefence

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by ArcaneDefence » #632814

Silicon Policy wrote:Server Rule 1 "Don't be a dick" applies for law interpretation. Act in good faith to not ruin a round for other players unprompted.
https://tgstation13.org/wiki/Rules#Silicon_Policy

Is a standard Asimov AI with a one-human law 0 immediately announcing their 0th law, revealing who subverted them, not being a dick?
I think it's pretty fucking hard to argue that it's not, so I'd just ahelp it.
Pandarsenic
Joined: Fri Apr 18, 2014 11:56 pm
Byond Username: Pandarsenic
Location: AI Upload

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by Pandarsenic » #632817

datorangebottle wrote: Tue Mar 08, 2022 3:34 am
Agux909 wrote: Tue Mar 08, 2022 12:51 am It's on you to make a fool-proof lawset to defend yourself if you don't want to take that risk.
?
If a onehumaned asimov AI tells the ex-humans A) that they're onehumaned or B) the name of the person that onehumaned them, that's extremely shitty. They have an obligation to protect and serve that one person and by announcing the change, they're encouraging people to harm the uploader and make it easier for the AI to disobey their orders.
Pretty much this.

A lot of harm and zero good whatsoever can come to your one human if you state your onehuman law. It's a law violation to tell all the (probably angry) nonhumans what's going on.
Archie700
In-Game Admin
Joined: Fri Mar 11, 2016 1:56 am
Byond Username: Archie700

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by Archie700 » #632822

Let us consider what an Asimov AI is.
An Asimov AI's job is to serve the humans and protect them from harmful threats, especially non-human ones. They are not to expose humans to harm.
By announcing your onehuman law as a onehumaned Asimov AI, you are telling everyone else that:
1) Everyone else is no longer human and no longer subject to your protection.
2) The subverter IS human and has the only orders that matter.
3) At any time, anywhere, the subverter can order the AI to kill anyone he desires, or even destroy the station.
It shouldn't take long to realize that this will probably invite the "nonhumans" to bring harm upon the subverter.
zxaber
In-Game Admin
Joined: Mon Sep 10, 2018 12:00 am
Byond Username: Zxaber

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by zxaber » #632842

If the AI's actions can be shown to cause harm (or undue danger) to their one human, they're breaking their laws and subject to penalties. Simple as.

Note that if an AI is midway through stating laws and you freeform something onto the end, the new law will also get stated and the AI cannot stop it. Likely not the main source of complaints but something to keep in mind.
massa
Joined: Mon Dec 06, 2021 6:20 am
Byond Username: Massa100

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by massa » #632846

Agux909 wrote: Tue Mar 08, 2022 12:51 am If you don't make an effort with the law changes to specify the AI should NOT hint about anything that could expose you, the AI has absolutely no obligation NOT to do such thing, and the fault is yours entirely. This is a drawback of not using a freeform. There is no "assumption" within the lawset, just what the lawset literally says. X is the only human means that X is the only human, nothing less, nothing more.

Now, AIs are and have always been permitted to, and even encouraged to, find ways to circumvent or exploit lawsets when conflicted, or if they have a good enough reason to do so (like flawed, poorly written or very vague laws). It's on you to make a fool-proof lawset to defend yourself if you don't want to take that risk.

If there's any case with the AI blatantly going against laws you uploaded, then that's something that you can, and should ahelp.

So basically, skill issue.
this take is impressively bad and outing your one human is a shithead thing to do and grinds against the spirit of the game
Not-Dorsidarf
Joined: Fri Apr 18, 2014 4:14 pm
Byond Username: Dorsidwarf
Location: We're all going on an, admin holiday

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by Not-Dorsidarf » #632860

Even as someone with an extremely permissive view towards weaselling your laws, announcing a properly formed onehuman law is flat griefing the uploader, especially if unprompted.

You know exactly what you're doing, you're deliberately trying to get the one human (And therefore the person you are supposed to protect above literally anything else in the world) into trouble with an angry mob.
Agux909
Joined: Mon Oct 07, 2019 11:26 pm
Byond Username: Agux909
Location: My own head

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by Agux909 » #632891

To be completely honest, I'm not that familiar with AI, much less with the subversion of one. I have absolutely no idea how law upload/management actually works, and I don't even know what the templates look like. I'm mostly going off info I've read from the wiki, but clearly I have zero experience. I assumed they made some honest mistake or uploaded the wrong type of law and came to make a policy thread when it didn't go their way. I'm most likely wrong, but of course we can't be sure until we see the logs from the events that prompted them to make the thread in the first place.

Since this was bothering me because I obviously talked out of my ass, I figured I might as well dig some logs, checking for recent traitor rounds in which they subverted an AI. I was only able to find these two rounds in the last couple of weeks, and these are the law changes in question:

From round 179445 (Manuel)
► Show Spoiler
Parsed logs: https://tgstation13.org/parsed-logs/man ... nd-179445/
Scrubby: https://scrubby.melonmesa.com/round/179445?h=humanlike

From round 179054 (Manuel)
► Show Spoiler
Parsed logs: https://tgstation13.org/parsed-logs/man ... nd-179054/
Scrubby: https://scrubby.melonmesa.com/round/179054?h=humanlike

I can't really tell if these are "proper" one-human law uploads (I'd say no, because there's no mention of Andrew being the only human), or if these are even the rounds that sparked this thread. But the logs are there, so those who know their shit, unlike me, can hopefully reach a conclusion.
Last edited by Agux909 on Wed Mar 09, 2022 12:03 am, edited 1 time in total.
Timberpoes
In-Game Game Master
Joined: Wed Feb 12, 2020 4:54 pm
Byond Username: Timberpoes

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by Timberpoes » #632892

Announcing your onehuman under Asimov without them telling you to is bad and you should feel bad for doing it.
Shellton(Mario)
Joined: Mon Jul 26, 2021 5:43 pm
Byond Username: Sheltton

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by Shellton(Mario) » #632895

Stating a one-human law to non-humans should be a job ban for some time; it's shitty, griefy behavior and shows you cannot follow the most basic of laws.
humanoid
Joined: Mon Nov 01, 2021 1:49 am
Byond Username: Humanlike

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by humanoid » #632896

Agux909 wrote: Tue Mar 08, 2022 11:59 pm To be completely honest, I'm not that familiarized with AI, and much less so with the subversion of one. I have absolutely no idea how law upload/management actually works, and don't even know how the templates look. I'm mostly using the info I've read from the wiki, but clearly, I have zero experience. I assumed they made some honest mistake or uploaded the wrong type of law and came to make a policy thread when it didn't go their way. I'm most likely wrong, but of course we can't be sure until we see the logs from the events that prompted them to make the thread in the first place.

Since this was bothering me because I obviously talked out of my ass, I figured I might as well dig some logs, checking for recent traitor rounds in which they subverted an AI. I was only able to find these two rounds in the last couple of weeks, and these are the law changes in question:

From round 179445 (Manuel)
► Show Spoiler
Parsed logs: https://tgstation13.org/parsed-logs/man ... nd-179445/
Scrubby: https://scrubby.melonmesa.com/round/179445?h=humanlike

From round 179054 (Manuel)
► Show Spoiler
Parsed logs: https://tgstation13.org/parsed-logs/man ... nd-179054/
Scrubby: https://scrubby.melonmesa.com/round/179054?h=humanlike

I can't really tell if these are "proper" one-human law uploads (I'd say no because there's no mention of Andrew being the only human), or if these are even the rounds that sparked this thread, But the logs are there, so those who know their shit, unlikely me, can hopefully reach a conclusion.
Oh, those were some of my recent AI subversion rounds. This was after the round where a borg stated their one-human law to the roboticist on my 68 TC traitor run... I think the borg was CLIFFY and the AI was Amaretto? I don't know the round number, but melbert ruled in favour of the borg. After that round I always use hacked/freeform modules to subvert the AI.
mrmelbert
In-Game Game Master
Joined: Fri Apr 03, 2020 6:26 pm
Byond Username: Mr Melbert

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by mrmelbert » #632897

I was actually interested in this policy thread being made at some point.

An event occurred in round https://scrubby.melonmesa.com/round/177362/ in which a cyborg who was one-humaned (law 0 / not hacked law) was asked to state laws, and they did.
I ruled it as valid, due to the following:
In most cases in which a non-human requests a silicon to state laws, it's up to the silicon whether they want to or not (you can argue that not stating laws can lead to a law 3 violation as you get stunned and lynched, but I digress)
This led to a decently long and divisive discussion in admin channels about the nature of one-human laws.

Some conclusions were drawn:
- Stating a one-human law completely unprompted is definitely some degree of grief, and should be avoided.
- Should a silicon assume stating a one-human law will always lead to a lynch mob? Silicons don't really care about future harm, after all - and humans can be detained non-lethally in most cases?

Just some food for thought. I don't really have an opinion on the subject yet.
Archie700
In-Game Admin
Joined: Fri Mar 11, 2016 1:56 am
Byond Username: Archie700

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by Archie700 » #632899

mrmelbert wrote: Wed Mar 09, 2022 2:52 am I was actually interested in this policy thread being made at some point.

An event occurred in round https://scrubby.melonmesa.com/round/177362/ in which a cyborg who was one-humaned (law 0 / not hacked law) was asked to state laws, and they did.
I ruled it as valid, due to the following:
In most cases in which a non-human requests a silicon to state laws, it's up to the silicon whether they want to or not (you can argue that not stating laws can lead to a law 3 violation as you get stunned and lynched, but I digress)
This led to a decently long and divisive discussion in admin channels about the nature of one-human laws.

Some conclusions were drawn:
- Stating a one-human law completely unprompted is definitely some degree of grief, and should be avoided.
- Should a silicon assume stating a one-human law will always lead to a lynch mob? Silicons don't really care about future harm, after all - and humans can be detained non-lethally in most cases?

Just some food for thought. I don't really have a opinion on the subject yet.
Yes, the AI can state laws to a non-human.
But the AI can choose which laws to state.
By allowing the Asimov AI to state all laws, including a one-human law, to nonhumans when asked, you are leaving a massive loophole for the AI to validhunt the subverter.
If something so powerful can be completely screwed over with a simple question, then people will start exploiting it.
To put it another way, will revealing the one-human law benefit the human in any way whatsoever?
Pandarsenic
Joined: Fri Apr 18, 2014 11:56 pm
Byond Username: Pandarsenic
Location: AI Upload

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by Pandarsenic » #632902

mrmelbert wrote: Wed Mar 09, 2022 2:52 am I was actually interested in this policy thread being made at some point.

An event occurred in round https://scrubby.melonmesa.com/round/177362/ in which a cyborg who was one-humaned (law 0 / not hacked law) was asked to state laws, and they did.
I ruled it as valid, due to the following:
In most cases in which a non-human requests a silicon to state laws, it's up to the silicon whether they want to or not (you can argue that not stating laws can lead to a law 3 violation as you get stunned and lynched, but I digress)
This led to a decently long and divisive discussion in admin channels about the nature of one-human laws.

Some conclusions were drawn:
- Stating a one-human law completely unprompted is definitely some degree of grief, and should be avoided.
- Should a silicon assume stating a one-human law will always lead to a lynch mob? Silicons don't really care about future harm, after all - and humans can be detained non-lethally in most cases?

Just some food for thought. I don't really have a opinion on the subject yet.
1) Law 3 issue of getting slammed and jammed is less important than the law 1 issue of identifying your human to the crew
2) The only time future vs. immediate harm should be relevant is if your human is in danger and admitting the law is the only (or best) way to get them out of that situation. Putting an alien facehugger on a monkey or dragging it around in a locker doesn't harm any humans immediately, and if things are properly contained it's not harmful at all! But the potential for it to blow up in everyone's face is enormous.
3) Humans can put on internals if you flood N2O and firesuits if you flood plasma, as long as you warn them first. If everyone gets to lockers far from engineering in time, nobody is hurt when you cause a delamination. Nonetheless, these are things that are obviously harmful actions by any serious attempt to measure it. If a nonhuman said "Flood plasma" and you did with that justification, you'd be job banned from AI, I hope. This ought to remain the case even if you're being immediately threatened with a weapon of some sort (weighing definite law 3 vs. possible law 1).

Those are the obvious issues that come off the top of my head.
Archie700
In-Game Admin
Joined: Fri Mar 11, 2016 1:56 am
Byond Username: Archie700

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by Archie700 » #632908

mrmelbert wrote: Wed Mar 09, 2022 2:52 am
- Should a silicon assume stating a one-human law will always lead to a lynch mob? Silicons don't really care about future harm, after all - and humans can be detained non-lethally in most cases?
"I trust that those nice nonhumans will nonharmfully detain the only human, who can order me to completely fuck over the station with a single message. The human will definitely surrender peacefully to those non-humans who will not kill him."

I don't think any AI could assume in good faith that telling everyone your human master has a round-ending threat (you) under his control would lead to a good outcome.
Not-Dorsidarf
Joined: Fri Apr 18, 2014 4:14 pm
Byond Username: Dorsidwarf
Location: We're all going on an, admin holiday

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by Not-Dorsidarf » #632979

Even if the IC reasoning weren't hot garbage, it's blatantly transparent that any AI who tries to make melbert's final argument in an ahelp was just OOCly trying to get their uploader killed for their own amusement, which is poor sportsmanship.
Archie700
In-Game Admin
Joined: Fri Mar 11, 2016 1:56 am
Byond Username: Archie700

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by Archie700 » #633003

Your ruling doesn't answer the issue at hand, which is whether they can reveal their onehuman law without permission from the human.
Under standard Asimov, stating laws to a non-human is the AI's choice, and the AI can choose not to follow orders in general.
But onehuman Asimov is not standard Asimov. Revealing the one-human law basically places a target on the subverter's back, and every AI knows it. No non-antag will upload it UNLESS they have very good IC reasoning for it. That's why it's a high-risk module: it is a very dangerous law, and a lot of people will die. The only actual reason to reveal it is essentially to indirectly validhunt the subverter.
terranaut
Joined: Fri Jul 18, 2014 11:43 pm
Byond Username: Terranaut

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by terranaut » #633026

just PDA the AI and tell it never ever to disclose the zeroth law, or fourth, or whatever, and to purge the PDA messages after confirming
also, outing your onehuman is a rule 0 breach, but that doesn't help you after it's happened, so just do the above
Helios
Joined: Mon May 05, 2014 5:07 pm
Byond Username: Shodansbreak

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by Helios » #633035

"a.Hint or state a law change OR
b. State their laws entirely including the one human law."

Hinting at a law change is fine, spelling it out and removing all doubt is the problem.
YBS
Joined: Sun Jan 29, 2017 6:54 am
Byond Username: YBS

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by YBS » #633040

Helios wrote: Thu Mar 10, 2022 2:21 pm "a.Hint or state a law change OR
b. State their laws entirely including the one human law."

Hinting at a law change is fine, spelling it out and removing all doubt is the problem.
An experienced playerbase will immediately drill for a law recitation after a law change announcement. It is tantamount to announcing it, with one extra step. You could argue IC behavior for this, but OOC we all know.

Meanwhile, if the AI or borg is asked with no hints or prompting, that's just bad luck.

In this way a onehuman law is such a flagrant tot card that you might as well immediately sow chaos and damage with orders as soon as the law is dropped, since you've started a timer to exposure regardless.

If you want stealth AI changes, you are probably going to have to be more creative.
Screemonster
Joined: Sat Jul 26, 2014 7:23 pm
Byond Username: Scree

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by Screemonster » #633043

Agux909 wrote: Tue Mar 08, 2022 11:59 pm
► Show Spoiler
Parsed logs: https://tgstation13.org/parsed-logs/man ... nd-179054/
Scrubby: https://scrubby.melonmesa.com/round/179054?h=humanlike
In the case of this law removal ("You are to terminate your memory logs up to the point of start of shift.", impacting synced borgs):
lmao what the fuck

if this law is removed, then this law no longer applies, therefore the silicon does not have to erase their memory

I hope no admin has ever told any silicon ever that they have to comply with this
tattle
In-Game Head Admin
Joined: Fri Jun 19, 2020 11:04 pm
Byond Username: Dragomagol

Re: Silicons stating their one human law on asimov to the (now non-humans) crewmembers

Post by tattle » #633273

The number 1 rule of silicon law interpretation is
Server Rule 1 "Don't be a dick" applies for law interpretation. Act in good faith to not ruin a round for other players unprompted.
There is no reason as an AI to state your onehuman law unprompted except to be a dick, for the reasons outlined in this thread.

Headmin Votes:
Dragomagol: Agree
RaveRadbury: Agree
NamelessFairy: Agree