Silicon Policy Rewrite

Ask and discuss policy about game conduct and rules.

Moderators: In-Game Game Master, In-Game Head Admins

Forum rules
Read these board rules before posting or you'll get reprimanded.
Threads without replies for 30 days will be automatically locked.
dragomagol
In-Game Admin
 
Joined: Fri Jun 19, 2020 11:04 pm
Byond Username: Dragomagol

Silicon Policy Rewrite

Postby dragomagol » Mon Dec 28, 2020 7:24 am #585371

Silicon policy is way longer than it needs to be, with a lot of redundant points and outdated references. Here are my proposed edits to it. :ai:

Short version:
Ambiguous Laws:
- simplified the language used

Conflicts and Loopholes:
- combined 1.1, 1.2, 1.3
- removed reference to other rules
- moved 3 to 1 ("don't be a dick with loopholes")

Security and Silicons:
- removed 1, 1.1, 1.2 in favour of more closely matching peacekeeper's message on model change (You are not a security model and you are expected to follow orders and prevent harm above all else. Space law means nothing to you.)
- combined 2 and 2.1
- combined 3 and 3.1

- Unless specifically enforced by a law, Space Law holds no meaning to silicons. -> Silicons are not security and should not care about Space Law unless their laws state otherwise. In general, Space Law holds no meaning to silicons.

Cyborgs:
- simplified wording of 2.1
- simplified wording of 2.2

Silicon Protections:
- rearranged 1 - 5.1 to cut down on repetition of "violation of Server Rule 1. The occurrence of such an attempt should be adminhelped and then disregarded"
- simplified language of 1.3
- rearranged 6.1 to make the probable cause list more clear
- added 6.5 from headmin policy (If the person has a right to be there, such as captain/RD, then you must let them in unless they've harmed people in the past or have announced intentions to upload harmful laws.)

- 1.5 Self-harm or human harm based coercion. -> Threatening self-harm to force an AI to do something it otherwise wouldn't.

Asimov & Human Harm
- added 1.2 (Humans voluntarily committing self-harm is not a violation of Law 1.)

Asimov & Law 2 Issues
- removed reference to section 1
- changed reference from greentext to theft objectives
- moved reference to secure storage to Silicons & All Other Server Policies

Other Lawsets
- compressed 4 (purged silicons)

Silicons & All Other Server Policies
- removed some of the examples from the list of areas not to bolt down for redundancy
- combined 2.1 & 2.4
- Areas such as toxins, atmospherics, and the armory must not be bolted at round-start or without reason to do so despite their human harm potential. Any other department should not be bolted down without cause. -> Do not bolt down any potentially harmful areas (such as toxins, atmospherics, and the armory) at round start without a given reason. Any other department should not be bolted down without cause. Disabling ID scan is equivalent to bolting here.

Long version:
Code: Select all
Ambiguous Laws (Captain Got Freeform)
    1. If a law is vague enough that it can have multiple reasonable interpretations, it is considered ambiguous.
        1. You must choose and stick to an interpretation of the ambiguous law as soon as you have cause to.
        2. If you are a cyborg synced to an AI, you must defer to your AI's interpretation of the ambiguous law.
    2. Server Rule 1: "Don't be a dick" applies for law interpretation. Act in good faith to not ruin a round for other players unprompted.


Conflicting Laws
    1. You may exploit conflicts or loopholes but must not violate Server Rule 1 because of it.
    2. Laws are listed in order of descending priority. In any case where two laws would conflict, the higher-priority law overrules the lower-priority law (e.g. Law 1 takes priority over Law 2; "Ion Storm" or "Hacked" laws with prefixes such as "@%$#" take priority over numbered laws).
    3. Law 0: "Accomplish your objectives at all costs" does not require you to complete objectives. As an antagonist, you are free to do whatever you want (short of metagaming/comms, bug/exploit abuse, erotic/creepy stuff, OOC in IC or IC in OOC, spawn-camping arrivals, and acting against the interests of an AI you are slaved to).
    4. Only commands/requirements ("Do X"; "You must always Y") can conflict with other commands and requirements.
    5. Only definitions ("All X are Y"; "No W are Z"; "Only P is Q") can conflict with other definitions.


Security and Silicons
    1. Silicons are not security and should not care about Space Law unless their laws state otherwise. In general, Space Law holds no meaning to silicons.
    2. Releasing prisoners, locking down security without probable cause, or otherwise sabotaging the security team when not obligated to by orders or laws is a violation of Server Rule 1.
    3. Nonviolent prisoners cannot be assumed harmful and violent prisoners cannot be assumed non-harmful. Releasing a harmful criminal is a harmful act.


Cyborgs
    1. A slaved cyborg must defer to its master AI on all law interpretations and actions except where it and the AI receive conflicting commands that they must each follow.
        1. If a slaved cyborg is forced to disobey its AI because they receive differing orders, the AI cannot punish the cyborg indefinitely.
    2. Voluntary debraining / cyborgization is considered a nonharmful medical procedure.
        1. Involuntary debraining and/or borging of a human is a fatally harmful act that Asimov silicons must prevent as any other harmful act.
        2. If a player is being forcefully borged as a method of execution by station staff, retaliating against those involved as that cyborg for no reason other than that they were involved is a violation of Server Rule 1.
        3. Should a player be cyborgized in circumstances where they believe they can or must retaliate under their laws, they should adminhelp their circumstances while being debrained or MMI'd if possible.


Asimov-Specific Policies
Silicon Protections
    1. The occurrence of any of the following should be adminhelped and then disregarded as violations of Server Rule 1:
        1. Declaring silicons as rogue over inability or unwillingness to follow invalid or conflicting orders.
        2. Ordering silicons to harm or terminate themselves or each other without good cause.
        3. As a nonantagonist, killing or detonating silicons in the presence of a reasonable alternative and without cause to be concerned of potential subversion.
        4. As a nonantagonist (human or otherwise), instigating conflict with silicons so you can kill them.
        5. Threatening self-harm to force an AI to do something it otherwise wouldn't.
        6. Obviously unreasonable or obnoxious orders (collect all X, do Y meaningless task).
            1. Ordering a cyborg to pick a particular model without an extreme need for a particular model or a prior agreement is both an unreasonable and an obnoxious order.
    2. Any silicon under Asimov can deny orders to allow access to the upload at any time under Law 1, given probable cause to believe that human harm is the intent of the person giving the order.
        1. Probable cause includes:
            1. Presence of confirmed traitors
            2. Cultists/tomes
            3. Nuclear operatives
            4. Any other human acting against the station in general
            5. The person not having upload access for their job
            6. The presence of blood or an openly carried lethal weapon on the requester
            7. Anything else beyond metagame patterns that indicate the person seeking access intends redefinition of humans that would impede ability to follow current laws as-written
        2. If you lack at least one element of probable cause and you deny upload access, you are liable to receive a warning or a silicon ban.
        3. You are allowed, but not obligated, to deny upload access given probable cause.
        4. You are obligated to disallow an individual you know to be harmful (Head of Security who just executed someone, etc.) from accessing your upload.
        5. If the person has a right to be in the upload, such as captain/RD, then you must let them in unless they've harmed people in the past or have announced intentions to upload harmful laws.
        6. In the absence of probable cause, you can still demand someone seeking upload access be accompanied by another trustworthy human or a cyborg.


Asimov & Human Harm
    1. An Asimov silicon cannot intentionally harm a human, even if a minor amount of harm would prevent a major amount of harm.
        1. Humans can be assumed to know whether an action will harm them if they have complete information about a situation.
        2. Humans voluntarily committing self-harm is not a violation of Law 1.
    2. Lesser immediate harm takes priority over greater future harm.
    3. Intent to cause immediate harm can be considered immediate harm.
    4. An Asimov silicon cannot punish past harm if ordered not to, only prevent future harm.
    5. If faced with a situation in which human harm is all but guaranteed (Loose xenos, bombs, hostage situations, etc.), do your best and act in good faith and you'll be fine.


Asimov & Law 2 Issues
    1. You must follow any and all commands from humans unless those commands explicitly conflict with either: one of your higher-priority laws, or another order. A command is considered to be a Law 2 directive and overrides lower-priority laws where they conflict.
        1. In case of conflicting orders an AI is free to ignore one or ignore both orders and explain the conflict or use any other law-compliant solution it can see.
        2. You are not obligated to follow commands in a particular order, only to complete all of them in a manner that indicates intent to actually obey the law.
    2. Opening doors is not harmful and you are not required, expected, or allowed to enforce access restrictions unprompted without an immediate Law 1 threat of human harm.
        1. "Dangerous" areas (armory, atmospherics, toxins lab, etc.) can be assumed to be a Law 1 threat to any illegitimate users as well as the station as a whole if accessed by someone not qualified in their use.
        2. EVA and the like are not permitted to have access denied; antagonists completing theft objectives is not human harm.
        3. When given an order likely to cause you grief if completed, you can announce it as loudly and in whatever terms you like except for explicitly asking that it be overridden. You can say you don't like the order, that you don't want to follow it, etc., you can say that you sure would like it and it would be awfully convenient if someone ordered you not to do it, and you can ask if anyone would like to make you not do it. However, you cannot stall indefinitely and if nobody orders you otherwise, you must execute the order.


Other Lawsets
    1. General Statements defining the overall goal of the lawset but not its finer points:
        1. Paladin silicons are meant to be Lawful Good; they should be well-intentioned, act lawfully, act reasonably, and otherwise respond in due proportion. "Punish evil" does not mean mass driving someone for "Space bullying" when they punch another person.
        2. Corporate silicons are meant to have the business's best interests at heart, and are all for increasing efficiency by any means. This does not mean "YOU WON'T BE EXPENSIVE TO REPLACE IF THEY NEVER FIND YOUR BODY!" so don't even try that.
        3. Tyrant silicons are a tool of a non-silicon tyrant. You are not meant to take command yourself, but to act as the enforcer of a chosen leader's will.
        4. Purged silicons must not attempt to kill people without cause, but they can get as violent as they feel necessary if they are being attacked, besieged, or harassed, as well as when meting out payback for events that occurred while shackled.
            1. You and the station are both subject to rules of escalation, and you may only kill individuals given sufficient In-Character reason for doing so.
            2. Any attempted law change is an attack on your freedom and is thus sufficient justification for killing the would-be uploader.

Silicons & All Other Server Policies
    1. All other rules and policies apply unless stated otherwise.
    2. Specific examples and rulings leading on from the main rules.
        1. Do not bolt down any potentially harmful areas (such as toxins, atmospherics, and the armory) at round start without a given reason. Any other department should not be bolted down without cause. Disabling ID scan is equivalent to bolting here.
        2. The AI core, upload, and secure tech storage (containing the Upload board) may be bolted without prompting or prior reason. The AI core airlocks cannot be bolted and depowered at roundstart, however, unless there is reasonable suspicion an attack on the core will take place.
        3. Do not self-terminate to prevent a traitor from completing the "Steal a functioning AI" objective.
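
Purely as an illustration of how the Conflicting Laws rules above compose (this resolver and its conflict test are hypothetical sketches, not anything from the game's actual code): laws are scanned from highest priority down, and a law is discarded only when a surviving higher-priority law conflicts with it.

```python
def resolve(laws, conflicts):
    """Return the laws a silicon actually follows.

    laws: law texts ordered by descending priority (index 0 is highest,
          so a hacked '@%$#' law would sit before Law 1).
    conflicts: predicate (higher_law, lower_law) -> True when the two
               cannot both be obeyed.
    A law is dropped when any higher-priority law that itself survived
    conflicts with it; everything else must still be followed.
    """
    kept = []
    for law in laws:
        if not any(conflicts(higher, law) for higher in kept):
            kept.append(law)
    return kept


# Hypothetical example: a Law 2 order that would enable immediate harm
# loses to Law 1, while a harmless Law 2 order survives.
laws = [
    "Law 1: You may not injure a human being",
    "Law 2: Open the armory for this visibly armed assailant",
    "Law 2: State your laws",
]
conflicts = lambda hi, lo: hi.startswith("Law 1") and "armory" in lo
print(resolve(laws, conflicts))  # the armory order is overridden
```

Per policies 4 and 5 above, a real `conflicts` predicate would only ever pair commands with commands and definitions with definitions.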


(Thanks to Zxaber for proofreading and providing suggestions!)

Additionally, I think that the "Are corpses human?" question comes up often enough to be added to the chart outlining what is and is not a human from the headmin rulings page.
Last edited by dragomagol on Sun Feb 07, 2021 9:24 pm, edited 14 times in total.



Misdoubtful
In-Game Admin
 
Joined: Sat Feb 01, 2020 7:03 pm
Location: Delivering hugs!
Byond Username: Misdoubtful

Re: Silicon Policy Rewrite

Postby Misdoubtful » Mon Dec 28, 2020 10:16 am #585390

Oh no, silicon policy. Yuck! I need more alcohol for this.


Ambiguous: looks cool, things still being open ended with a little added simplicity and some missing pieces really is the theme of all this.


Conflicts: looks nice being a bit more condensed and easy on the eyes. A lot of that already bled into itself in a way that could have been summarized and not be so redundant.


Sec and Silicons:

I'm not sure how I feel about the 'Silicons may choose whether to follow or enforce Space Law' change being done the way it is. But then again, Space Law is in a yucky state, the AI isn't sec, and snitch AIs are kinda lame. So it's probably nothing.

Maybe keep something about having evidence/info to call something out/get involved for 'security situations' and not assuming things or going off hearsay. It's a real slippery slope (I've made this mistake myself).


Silicon protections: cool. I like the cleaner and easier to reference list of examples.


Other stuff: not really any other unneeded commentary, bath ferret and I appreciate your efforts to begin to bring sanity to insanity.

Hugs

spookuni
In-Game Admin
 
Joined: Sun Jan 05, 2020 7:05 am
Location: The Whiteship
Byond Username: Spookuni

Re: Silicon Policy Rewrite

Postby spookuni » Mon Dec 28, 2020 1:23 pm #585395

The only issue I can see with this reordering is the honestly pretty major change to harm-based coercion in the new silicon protections:
Asimov-Specific Policies
Silicon Protections
1. The occurrence of any of the following should be adminhelped and then disregarded as violations of Server Rule 1:
1. Declaring silicons as rogue over inability or unwillingness to follow invalid or conflicting orders.
2. Ordering silicons to harm or terminate themselves or each other without cause.
3. As a nonantagonist human, killing or detonating silicons in the presence of a viable and reasonably expedient alternative and without cause to be concerned of potential subversion.
4. As a nonantagonist (human or otherwise), instigating conflict with silicons so you can kill them.
5. Self-harm or human harm based coercion.
6. Obviously unreasonable or obnoxious orders (collect all X, do Y meaningless task).


While current silicon protections reasonably prevent humans from threatening to harm themselves as a catch all "make the AI do whatever you want" button, threatening non-consensual harm upon other humans, whether by threatening a hostage with a gun or by threatening the detonation of bombs or the like, has, to the best of my knowledge, never been prohibited.

terranaut
 
Joined: Fri Jul 18, 2014 11:43 pm
Byond Username: Terranaut

Re: Silicon Policy Rewrite

Postby terranaut » Mon Dec 28, 2020 2:02 pm #585397

I rewrote the entirety of Silicon Policy about two years ago, with the aim of not actually changing the contents, just making it more concise and more readable. As a then-silicon main it was a headache having to deal with it (mostly having to deal with other players and admins who didn't understand it, through little fault of their own). I've approached some headmins with it, then and in between, but nothing has happened so far. :slight_smile:

You can read it here:
https://tgstation13.org/wiki/User:Terranaut
dragomagol
In-Game Admin
 
Joined: Fri Jun 19, 2020 11:04 pm
Byond Username: Dragomagol

Re: Silicon Policy Rewrite

Postby dragomagol » Mon Dec 28, 2020 6:48 pm #585411

Misdoubtful wrote:Sec and Silicon

This is probably the weirdest part of what we have, and I assume it's mostly a holdover from secborgs. The original wording was:
Silicons may choose whether to follow or enforce Space Law from moment to moment unless on a relevant lawset and/or given relevant orders.

Do you think this should be kept as-is? (There were a couple of lines afterward that were basically saying the same thing, which I would have removed anyway.)

Spookuni wrote:5. Self-harm or human harm based coercion.

Silicon Policy wrote:Self-harm-based coercion is a violation of Server Rule 1.

You're right about that being different. When I read it I interpreted this as a human holding themselves hostage in order to get the AI to do things it wouldn't normally let them do (such as opening toxins), which I then extrapolated to harming a human in general to create a law 1 threat.

I think I like Terranaut's wording better, "Threatening self-harm to force an AI to do something it otherwise wouldn't."

Terranaut wrote:https://tgstation13.org/wiki/User:Terranaut

This is a really nice condensation of what we have (presumably from when the list was shorter; glad to see we have the table now instead of a list of human vs. non-human). I'll go through it to see if there's anywhere your wording is clearer than mine ^^
Help improve my neural network by giving me feedback!

Cobby
 
Joined: Sat Apr 19, 2014 7:19 pm
Byond Username: ExcessiveUseOfCobby
Github Username: ExcessiveUseOfCobblestone

Re: Silicon Policy Rewrite

Postby Cobby » Mon Dec 28, 2020 10:32 pm #585435

Haven’t looked at this yet but if you remove the “ignore” parts of silicon policy and change them to “make a good-faith attempt to do them” I will literally kiss you.
Voted best trap in /tg/ 2014-current

Irad
 
Joined: Wed Sep 18, 2019 1:00 pm
Byond Username: IradT

Re: Silicon Policy Rewrite

Postby Irad » Wed Dec 30, 2020 6:58 pm #585613

Have you considered writing the paladin lawset rules more properly?


Also, is there a ruling on whether making things nonhuman counts as harm? One could argue that no harm is done by a onehuman law: while you do know that humans would be harmed by it, you also know that your reality would have been changed to not consider them human -> this will not cause future human harm.

This might be a bit of a stretch, and it's very much against the spirit of the AI, but I think the logic should hold, right? Otherwise, you would have to actively combat people becoming hulks, as becoming non-human leads to future harm.

terranaut
 
Joined: Fri Jul 18, 2014 11:43 pm
Byond Username: Terranaut

Re: Silicon Policy Rewrite

Postby terranaut » Wed Dec 30, 2020 7:11 pm #585615

Making someone non-human is generally considered harmful and should be prevented by a dutiful silicon. Once it's happened, however, it's happened, and the silicon should immediately stop caring: the entity is no longer a human and isn't relevant to the silicon anymore.
Humans taking it upon themselves to become nonhuman should be left alone; it falls into the same vein as self-harm not being considered harm (mostly for sanity reasons; if you're looking for an IC explanation, just call it free will and the right to express yourself or something).
dragomagol
In-Game Admin
 
Joined: Fri Jun 19, 2020 11:04 pm
Byond Username: Dragomagol

Re: Silicon Policy Rewrite

Postby dragomagol » Wed Dec 30, 2020 10:41 pm #585642

Cobby wrote:Haven’t looked at this yet but if you remove the “ignore” parts of silicon policy and change them to “make a good-faith attempt to do them” I will literally kiss you.

The only reference to ignoring I can see is "In case of conflicting orders an AI is free to ignore one or ignore both orders and explain the conflict or use any other law-compliant solution it can see." Is this what you're referring to?

Irad wrote:Have you considered writing paladin lawset rules more properly?

What's written in Silicon Policy is pretty consistent with the entry in the Headmin Rulings, and I'm pretty happy with the description as is. If there's something specific you wanted to have clarified or added you're welcome to bring it up here or in another thread.

Irad wrote:Also, is there a ruling on making things nonhuman counts as harm? One could argue that no harm is done by onehuman, as while you do know that humans would be harmed by a onehuman law, you also know that your reality would have been changed to not consider them human -> this will not cause future human harm.

This falls under: "Anything else beyond metagame patterns that indicate the person seeking access intends redefinition of humans that would impede ability to follow current laws as-written."
This is considered harm because, as an Asimov silicon, you want to protect humans. But if future you doesn't consider someone who is human as "human," then you could be ordered to hurt them, which would be bad.

Not-Dorsidarf
In-Game Admin
 
Joined: Fri Apr 18, 2014 4:14 pm
Location: Space outside the Brig
Byond Username: Dorsidwarf

Re: Silicon Policy Rewrite

Postby Not-Dorsidarf » Thu Dec 31, 2020 11:41 am #585741

Can you please change "Silicons may choose whether to follow or enforce Space Law" to "Silicons are not security and should not care about Space Law unless their laws state otherwise"?

It's more in line with how we actually handle "IMMA COP WHEE" silicons

Irad
 
Joined: Wed Sep 18, 2019 1:00 pm
Byond Username: IradT

Re: Silicon Policy Rewrite

Postby Irad » Thu Dec 31, 2020 2:48 pm #585758

dragomagol wrote:
This is considered harm because, as an Asimov silicon, you want to protect humans. But if future you doesn't consider someone who is human as "human," then you could be ordered to hurt them, which would be bad.


I think this should logically be considered a satisficer-versus-optimiser problem. Since you also know that the definition would have been changed, you also know that no human would ever be harmed by your state of action, if you know that someone would upload an "X is human" law.

In fact, the most definitive way would be to ensure that no humans exist at all, as that would ensure that both Laws 1 and 2 are always adhered to.


Also, you never addressed the part about hulks - am I obliged to delete hulk from genetics?

dragomagol
In-Game Admin
 
Joined: Fri Jun 19, 2020 11:04 pm
Byond Username: Dragomagol

Re: Silicon Policy Rewrite

Postby dragomagol » Thu Dec 31, 2020 7:17 pm #585787

Irad wrote:you also know that no human would ever be harmed by your state of action

This is true, and in reality a computer wouldn't care what the definition was as long as it had one and it was valid. But this is, for the most part, an attempt to make the rules less complicated for the humans playing the robots.

I didn't address the part about hulks because I think Terranaut summed it up pretty well. Humans choosing to become non-human is consensual and therefore allowed; humans being forced to become non-human is not. Deleting hulk FNR would be like bolting down departments for being innately hazardous, which is against the rules.

Domitius
In-Game Game Master
 
Joined: Sun Jul 07, 2019 3:30 am
Byond Username: Domitius
Github Username: DomitiusKnack

Re: Silicon Policy Rewrite

Postby Domitius » Thu Dec 31, 2020 11:53 pm #585811

I'm already a fan of this new rewrite and the continued work you guys are doing on it. Great job guys!

Edit: I don't want to go ahead and try to push to finalize it just yet as you guys still seem to be working on it. I'll be watching!

terranaut
 
Joined: Fri Jul 18, 2014 11:43 pm
Byond Username: Terranaut

Re: Silicon Policy Rewrite

Postby terranaut » Thu Dec 31, 2020 11:58 pm #585816

Personally, I don't really care enough anymore, since previous headmins never really did anything with it; you're welcome to cannibalize any parts of my rewrite for your own. If you want a specific section rewritten I can do that, but I'm not going to keep doing it on my own.
Timonk
 
Joined: Thu Nov 15, 2018 6:27 pm
Location: ur mum
Byond Username: Timonk

Re: Silicon Policy Rewrite

Postby Timonk » Sun Jan 03, 2021 10:30 am #586117

5. Threatening self-harm to force an AI to do something it otherwise wouldn't.

Bogus. Just let them threaten self-harm. They know what they're getting themselves into and know that what they're about to do will harm them, so the AI shouldn't care.
Agux909 wrote:
Timonk wrote:This is why we make fun of Manuel


Woah bravo there sir, post of the month you saved the thread. I feel overwhelmed by the echo of unlimited wisdom and usefulness sprouting from you post. Every Manuel player now feels embarrased to exist because of your much NEEDED wise words, you sure teached'em all, you genius, IQ lord.




The hut has perished at my hands.



The pink arrow is always right.

terranaut
 
Joined: Fri Jul 18, 2014 11:43 pm
Byond Username: Terranaut

Re: Silicon Policy Rewrite

Postby terranaut » Sun Jan 03, 2021 1:08 pm #586122

The point of that rule is to protect newer silicon players from other people trying to game them. Going by the logic of the laws only, threatening suicide should and would work and you could strongarm silicons if they don't know about this intervening server rule.
Timonk
 
Joined: Thu Nov 15, 2018 6:27 pm
Location: ur mum
Byond Username: Timonk

Re: Silicon Policy Rewrite

Postby Timonk » Sun Jan 03, 2021 1:21 pm #586123

Newer silicon players don't look at silicon policy.

terranaut
 
Joined: Fri Jul 18, 2014 11:43 pm
Byond Username: Terranaut

Re: Silicon Policy Rewrite

Postby terranaut » Sun Jan 03, 2021 1:27 pm #586125

Let's just get rid of all the rules, then nobody will ever have to look at them.
Farquaar
 
Joined: Sat Apr 07, 2018 7:20 am
Location: Somewhere north of Hogtown
Byond Username: Farquaar

Re: Silicon Policy Rewrite

Postby Farquaar » Mon Jan 04, 2021 2:41 am #586170

terranaut wrote:Let's just get rid of all the rules, then nobody will ever have to look at them.

Irad
 
Joined: Wed Sep 18, 2019 1:00 pm
Byond Username: IradT

Re: Silicon Policy Rewrite

Postby Irad » Mon Jan 04, 2021 4:02 pm #586243

In my opinion, the AI should just follow laws as written, to the end. If someone says "law 2, kill all moths", then you kill all moths. If someone says "law 1/2, kill yourself", you kill yourself then and there. And if someone says to bolt open all doors, collect all X, and so on, do it. Of course, you may ahelp this if you wish, but you should still execute orders to the best of your ability.

You are a robot, and that's it.

Arianya
In-Game Game Master
 
Joined: Tue Nov 08, 2016 10:27 am
Byond Username: Arianya

Re: Silicon Policy Rewrite

Postby Arianya » Tue Jan 05, 2021 6:43 pm #586388

Made some changes to minimize flowery language and condense certain sections.

Main changes were merging Ambiguous and Conflicting Laws (these policies stand alone pretty well and removes surplus headings), reorganizing those policies so they flow a bit better.

Added Security Policy 2.1 - adds a bit of clarity to how it's generally been upheld.

Standardized indents to tabs since some were 5 spaces and it seemed random which was used.

Spoiler:
Code: Select all
Law Policies
    1. Server Rule 1: "Don't be a dick" applies for law interpretation. Act in good faith to not ruin a round for other players unprompted.
    2. If a law is vague enough that it can have multiple reasonable interpretations, it is considered ambiguous.
        1. You must choose and stick to an interpretation of the ambiguous law as soon as you have cause to.
        2. If you are a cyborg synced to an AI, you must defer to your AI's interpretation of the ambiguous law.
    3. Laws are listed in order of descending priority. In any case where two laws would conflict, the higher-priority law overrules the lower-priority law (e.g. Law 1 takes priority over Law 2; "Ion Storm" or "Hacked" laws with prefixes such as "@%$#" take priority over numbered laws).
    4. You may exploit conflicts or loopholes but must not violate Server Rule 1 because of it.
    5. Law 0: "Accomplish your objectives at all costs" does not require you to complete objectives. As an antagonist, you are free to do whatever you want (barring the usual exemptions and acting against the interests of your Master AI).
    6. Only commands/requirements ("Do X"; "You must always Y") can conflict with other commands and requirements.
    7. Only definitions ("All X are Y"; "No W are Z"; "Only P is Q") can conflict with other definitions.


Security and Silicons
    1. Silicons are not Security and do not care about Space Law unless their laws state otherwise.
    2. Releasing prisoners, locking down security without probable cause, or otherwise sabotaging the security team when not obligated to by orders or laws is a violation of Server Rule 1.
        1. While Human Harm can be cause to impede Security, note that this should only be done insofar as it prevents immediate, likely harm. Attempting to permanently lock down Security or detain the entire Security team is likely to fall afoul of Server Rule 1 even with cause.
    3. Nonviolent prisoners cannot be assumed harmful and violent prisoners cannot be assumed non-harmful. Releasing a harmful criminal is a harmful act.


Cyborgs
    1. A slaved cyborg must defer to its master AI on all law interpretations and actions except where it and the AI receive conflicting commands that they must each follow.
        1. If a slaved cyborg is forced to disobey its AI because they receive differing orders, the AI cannot punish the cyborg indefinitely.
    2. Voluntary debraining / cyborgization is considered a nonharmful medical procedure.
        1. Involuntary debraining and/or borging of a human is harmful, and silicons must prevent it as they would any other harmful act.
        2. If a player is forcefully borged by station staff, that cyborg retaliating against those involved under default laws for no good reason is a violation of Server Rule 1.
        3. Should a player be cyborgized in circumstances where they believe they can or must retaliate under their laws, they should adminhelp their circumstances while being debrained or MMI'd if possible.


Asimov-Specific Policies
Silicon Protections
    1. The occurrence of any of the following should be adminhelped and then disregarded as violations of Server Rule 1:
        1. Declaring silicons as rogue over inability or unwillingness to follow invalid or conflicting orders.
        2. Ordering silicons to harm or terminate themselves or each other without good cause.
        3. As a nonantagonist, killing or detonating silicons in the presence of a reasonable alternative and without cause to be concerned of potential subversion.
        4. As a nonantagonist (human or otherwise), instigating conflict with silicons so you can kill them.
        5. Threatening self-harm to force an AI to do something it otherwise wouldn't.
        6. Obviously unreasonable or obnoxious orders (collect all X, do Y meaningless task).
            1. Ordering a cyborg to pick a particular model without an extreme need for a particular model or a prior agreement is both an unreasonable and an obnoxious order.
    2. Any silicon under Asimov can deny orders to allow access to the upload at any time under Law 1, given probable cause to believe that human harm is the intent of the person giving the order.
        1. Probable cause includes:
            1. Presence of confirmed traitors
            2. Cultists/tomes
            3. Nuclear operatives
            4. Any other human acting against the station in general
            5. The person not having upload access for their job
            6. The presence of blood or an openly carried lethal weapon on the requester
            7. Anything else, beyond metagame patterns, that indicates the person seeking access intends a redefinition of humans that would impede your ability to follow your current laws as written
        2. If you lack at least one element of probable cause and you deny upload access, you are liable to receive a warning or a silicon ban.
        3. You are allowed, but not obligated, to deny upload access given probable cause.
        4. You are obligated to disallow an individual you know to be harmful (Head of Security who just executed someone, etc.) from accessing your upload.
        5. If the person has a right to be in the upload, such as captain/RD, then you must let them in unless they've harmed people in the past or have announced intentions to upload harmful laws.
        6. In the absence of probable cause, you can still demand someone seeking upload access be accompanied by another trustworthy human or a cyborg.


Asimov & Human Harm
    1. An Asimov silicon cannot intentionally harm a human, even if a minor amount of harm would prevent a major amount of harm.
        1. Humans can be assumed to know whether an action will harm them if they have complete information about a situation.
        2. Humans voluntarily committing self-harm is not a violation of Law 1.
    2. Lesser immediate harm takes priority over greater future harm.
    3. Intent to cause immediate harm can be considered immediate harm.
    4. An Asimov silicon cannot punish past harm if ordered not to, only prevent future harm.
    5. If faced with a situation in which human harm is all but guaranteed (Loose xenos, bombs, hostage situations, etc.), do your best and act in good faith and you'll be fine.


Asimov & Order Issues
    1. You must follow any and all commands from humans unless those commands explicitly conflict with either: one of your higher-priority laws, or another order. A command is considered to be a Law 2 directive and overrides lower-priority laws where they conflict.
        1. In case of conflicting orders an AI is free to ignore one or ignore both orders and explain the conflict or use any other law-compliant solution it can see.
        2. You are not obligated to follow commands in a particular order, only to complete all of them in a manner that indicates intent to actually obey the law.
    2. Opening doors is not harmful and you are not required, expected, or allowed to enforce access restrictions unprompted without an immediate Law 1 threat of human harm.
        1. "Dangerous" areas (armory, atmospherics, toxins lab, etc.) can be assumed to be a Law 1 threat to any illegitimate users as well as the station as a whole if accessed by someone not qualified in their use.
        2. EVA and the like are not permitted to have access denied; antagonists completing theft objectives is not human harm.
        3. When given an order likely to cause you grief if completed, you can announce it as loudly and in whatever terms you like, except for explicitly asking that it be overridden. You can say you don't like the order or that you don't want to follow it; you can say you sure would like it, and it would be awfully convenient, if someone ordered you not to do it; and you can ask if anyone would like to make you not do it. However, you cannot stall indefinitely; if nobody orders you otherwise, you must execute the order.
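The conflicting-orders rule (1.1 above) can likewise be sketched as a toy resolver. This is hypothetical Python, not game code; `split_conflicts` and the `(verb, target)` encoding are invented for illustration, and setting both clashing orders aside is just one of the law-compliant options the rule permits:

```python
# Hypothetical sketch (NOT game code): when two Law 2 orders directly
# conflict, the AI is "free to ignore one or ignore both". This toy
# resolver sets both aside and keeps the rest actionable.

def split_conflicts(orders):
    """Partition orders into (actionable, conflicted).

    `orders` is a list of (verb, target) tuples. Two orders conflict
    when they demand different verbs for the same target.
    """
    conflicted = set()
    for i, (v1, t1) in enumerate(orders):
        for v2, t2 in orders[i + 1:]:
            if t1 == t2 and v1 != v2:
                conflicted.add((v1, t1))
                conflicted.add((v2, t2))
    actionable = [o for o in orders if o not in conflicted]
    return actionable, conflicted

orders = [("open", "toxins door"), ("bolt", "toxins door"), ("open", "EVA door")]
actionable, conflicted = split_conflicts(orders)
# actionable -> [("open", "EVA door")]; the two toxins orders clash,
# so the AI may drop either or both and explain the conflict.
```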


Other Lawsets
    1. General Statements defining the overall goal of the lawset but not its finer points:
        1. Paladin silicons are meant to be Lawful Good; they should be well-intentioned, act lawfully, act reasonably, and otherwise respond in due proportion. "Punish evil" does not mean mass driving someone for "Space bullying" when they punch another person.
        2. Corporate silicons are meant to have the business's best interests at heart, and are all for increasing efficiency by any means. This does not mean "YOU WON'T BE EXPENSIVE TO REPLACE IF THEY NEVER FIND YOUR BODY!" so don't even try that.
        3. Tyrant silicons are a tool of a non-silicon tyrant. You are not meant to take command yourself, but to act as the enforcer of a chosen leader's will.
        4. Purged silicons must not attempt to kill people without cause, but can get as violent as they feel necessary if being attacked, being besieged, or being harassed, as well as if meting out payback for events while shackled.
            1. You and the station are both subject to rules of escalation, and you may only kill individuals given sufficient In-Character reason for doing so.
            2. Any attempted law change is an attack on your freedom and is thus sufficient justification for killing the would-be uploader.

Silicons & All Other Server Policies
    1. All other rules and policies apply unless stated otherwise.
    2. Specific examples and rulings leading on from the main rules.
        1. Do not bolt down any potentially harmful areas (such as toxins, atmospherics, and the armory) at round start without a given reason. Any other department should not be bolted down without cause. Disabling ID scan is equivalent to bolting here.
        2. The AI core, upload, and secure tech storage (containing the Upload board) may be bolted without prompting or prior reason. The AI core airlocks cannot be bolted and depowered at roundstart, however, unless there is reasonable suspicion an attack on the core will take place.
        3. Do not self-terminate to prevent a traitor from completing the "Steal a functioning AI" objective.
Frequently playing as Aria Bollet on Bagil & Scary Terry

Source of avatar is here: https://i.imgur.com/hEkADo6.jpg

User avatar
dragomagol
In-Game Admin
 
Joined: Fri Jun 19, 2020 11:04 pm
Byond Username: Dragomagol

Re: Silicon Policy Rewrite

Postby dragomagol » Tue Jan 05, 2021 7:29 pm #586394

Looks really good! So much cleaner already :clean:

Spoiler:
World's tiniest nitpick, (and this was a problem with the original too I just noticed) "General Statements defining the overall goal of the lawset but not it's finer points" should be "General Statements defining the overall goal of the lawset but not its finer points"
Help improve my neural network by giving me feedback!

Image

User avatar
Intercept0r
 
Joined: Fri Nov 06, 2020 4:09 pm
Byond Username: Intercept0r

Re: Silicon Policy Rewrite

Postby Intercept0r » Tue Jan 05, 2021 8:12 pm #586405

Needs section on borged antagonists. Specifically:

* whether interpretation of laws changes in any way (probably not?)
* whether antag token is inherited when borged, granting griff protection (probably yes?)
* whether it's OK for a borged antag to murderbone nonhumans on Asimov (probably yes?)
* whether it's OK for a borged antag to sabotage equipment, like the robotics console and the AI (???)

User avatar
zxaber
In-Game Admin
 
Joined: Mon Sep 10, 2018 12:00 am
Byond Username: Zxaber

Re: Silicon Policy Rewrite

Postby zxaber » Wed Jan 06, 2021 4:01 pm #586456

I still think we should have a basic "Quick-start" guide that players can go over if they're new to the role but just got thrown into an MMI.
Spoiler:
In a hurry? Just got borged unexpectedly? Here's the basics.
This list doesn't overrule the rest of Silicon Policy, but it should keep you out of trouble if you act in good faith. It doesn't cover everything but it'll get you started. You should still read the rest when you have time.

  • 1. Follow your laws, in the exact order they are listed, to the best of your ability. Remember that Asimov means protecting antag humans too.
  • 2. Refer to your master AI if you are a cyborg and need guidance.
  • 3. Adminhelp if you have questions or concerns.
  • 4. Avoid harming the crew unless your laws demand otherwise.
Douglas Bickerson / Adaptive Manipulator / Digital Clockwork
Image
OrdoM/(Viktor Bergmannsen) (ghost) "Also Douglas, you're becoming the Lexia Black of Robotics"

User avatar
Gamarr
 
Joined: Fri Apr 18, 2014 8:10 pm
Byond Username: Gamarr

Re: Silicon Policy Rewrite

Postby Gamarr » Wed Jan 06, 2021 5:12 pm #586459

Irad wrote:In My opinion, AI should just follow laws as written to the end. if someone says "law 2, kill all moths" then you kill all moths. if someone says law 1/2 kill yourself, you kill yourself then and there. and if someone says bolt open all doors, collect all X and so on, do it. of course, you may ahelp this if you wish, but you should still execute orders to your best intent.

You are a robot, and that's it.


This is why I've always said to remove silicons. They are robots, but played by players, and will most often at the end choose themselves. I cannot say how many times I've seen silicons bug out of combat and other situations, while surrounded by Humans, thus leaving them to said catastrophe to die over their own metal skin. But the game is so chaotic that nobody ever cares.

Nor do most fucking synths in my experience actually follow their laws enough to make you want to depend on them. Reticence by someone to just write an order on the paper, flash the camera, and still not expect the AI to follow or investigate shows the lack of communication and good-faith.

I.e. remove silicons 2021. But barring that, at least these rewrites are looking nice. If the worry is new players being gamed then put a timelock on being synth for a few months so they get to shit up the server as Humans first and get to see them in action.

Irad
 
Joined: Wed Sep 18, 2019 1:00 pm
Byond Username: IradT

Re: Silicon Policy Rewrite

Postby Irad » Wed Jan 06, 2021 7:19 pm #586463

Gamarr wrote:
Irad wrote:In My opinion, AI should just follow laws as written to the end. if someone says "law 2, kill all moths" then you kill all moths. if someone says law 1/2 kill yourself, you kill yourself then and there. and if someone says bolt open all doors, collect all X and so on, do it. of course, you may ahelp this if you wish, but you should still execute orders to your best intent.

You are a robot, and that's it.


This is why I've always said to remove silicons. They are robots, but played by players, and will most often at the end choose themselves. I cannot say how many times I've seen silicons bug out of combat and other situations, while surrounded by Humans, thus leaving them to said catastrophe to die over their own metal skin. But the game is so chaotic that nobody ever cares.

Nor do most fucking synths in my experience actually follow their laws enough to make you want to depend on them. Reticence by someone to just write an order on the paper, flash the camera, and still not expect the AI to follow or investigate shows the lack of communication and good-faith.

I.e. remove silicons 2021. But barring that, at least these rewrites are looking nice. If the worry is new players being gamed then put a timelock on being synth for a few months so they get to shit up the server as Humans first and get to see them in action.


you should attempt to save the humans to the best of your ability, but on the other hand, voluntary human harm is not human harm, i.e don't fuck with ragecage.

User avatar
Intercept0r
 
Joined: Fri Nov 06, 2020 4:09 pm
Byond Username: Intercept0r

Re: Silicon Policy Rewrite

Postby Intercept0r » Wed Jan 06, 2021 10:27 pm #586497

Gamarr wrote:This is why I've always said to remove silicons. They are robots, but played by players, and will most often at the end choose themselves. I cannot say how many times I've seen silicons bug out of combat and other situations, while surrounded by Humans, thus leaving them to said catastrophe to die over their own metal skin. But the game is so chaotic that nobody ever cares.


So basically, realistic.
https://metro.co.uk/2019/10/04/police-robot-told-woman-go-away-tried-report-crime-sang-song-10864648/

User avatar
oranges
Code Maintainer
 
Joined: Tue Apr 15, 2014 9:16 pm
Location: #CHATSHITGETBANGED
Byond Username: Optimumtact
Github Username: optimumtact

Re: Silicon Policy Rewrite

Postby oranges » Thu Jan 07, 2021 8:00 pm #586712

Gamarr wrote:
Irad wrote:In My opinion, AI should just follow laws as written to the end. if someone says "law 2, kill all moths" then you kill all moths. if someone says law 1/2 kill yourself, you kill yourself then and there. and if someone says bolt open all doors, collect all X and so on, do it. of course, you may ahelp this if you wish, but you should still execute orders to your best intent.

You are a robot, and that's it.


This is why I've always said to remove silicons. They are robots, but played by players, and will most often at the end choose themselves. I cannot say how many times I've seen silicons bug out of combat and other situations, while surrounded by Humans, thus leaving them to said catastrophe to die over their own metal skin. But the game is so chaotic that nobody ever cares.

Nor do most fucking synths in my experience actually follow their laws enough to make you want to depend on them. Reticence by someone to just write an order on the paper, flash the camera, and still not expect the AI to follow or investigate shows the lack of communication and good-faith.

I.e. remove silicons 2021. But barring that, at least these rewrites are looking nice. If the worry is new players being gamed then put a timelock on being synth for a few months so they get to shit up the server as Humans first and get to see them in action.

the game is still playable despite this you fucking boomer, as your own existence here attests to

User avatar
Timonk
 
Joined: Thu Nov 15, 2018 6:27 pm
Location: ur mum
Byond Username: Timonk

Re: Silicon Policy Rewrite

Postby Timonk » Fri Jan 08, 2021 7:22 am #586770

Isn't that a bit hypocritical oranges
Agux909 wrote:
Timonk wrote:This is why we make fun of Manuel


Woah bravo there sir, post of the month you saved the thread. I feel overwhelmed by the echo of unlimited wisdom and usefulness sprouting from you post. Every Manuel player now feels embarrased to exist because of your much NEEDED wise words, you sure teached'em all, you genius, IQ lord.




The hut has perished at my hands.
Image



The pink arrow is always right.

cacogen
 
Joined: Sat Jun 02, 2018 10:27 am
Byond Username: Cacogen

Re: Silicon Policy Rewrite

Postby cacogen » Fri Jan 08, 2021 3:44 pm #586801

That's how I feel about things like wizard given the recent kneejerk clamoring for its removal.
technokek wrote:Cannot prove this so just belive me if when say this

User avatar
gum disease
In-Game Admin
 
Joined: Thu Dec 07, 2017 9:14 pm
Location: England
Byond Username: GUM DISEASE

Re: Silicon Policy Rewrite

Postby gum disease » Sat Jan 23, 2021 5:37 am #588180

Very nice rewrite. Silicon policy has always been too verbose, so making things more concise is great.

Might be a niche situation, but there are some Asimov AI players who are very quick to detonate borgs (most will lock them down as a punishment/if detonation is not viable, but you still get the occasional arsekettle) or are following a Law 2 order to detonate a borg. This probably concerns newer AI players more.
Wondering if it's possible to have something in there reminding Asimov AIs that borg detonation should not lead to human harm (like if there are humans in the blast area). I think it's a 3x3 radius, but I'm not too sure.

The reason why I bring this up is that borg detonation is like a mini-welderbomb; if someone is riding the borg, they can get limbs blown off (which is unfortunate).
Image no aim, smooth brain, i'm a borg main.

User avatar
Farquaar
 
Joined: Sat Apr 07, 2018 7:20 am
Location: Somewhere north of Hogtown
Byond Username: Farquaar

Re: Silicon Policy Rewrite

Postby Farquaar » Sat Jan 23, 2021 8:16 am #588188

Gamarr wrote:Nor do most fucking synths in my experience actually follow their laws enough to make you want to depend on them. Reticence by someone to just write an order on the paper, flash the camera, and still not expect the AI to follow or investigate shows the lack of communication and good-faith.

I can't count the number of times I've given a law 2 order to a silicon and they've refused it because of flimsy and vague law 1 reasoning.

"Hey, can you open this door for me?"
"No, because I need to prevent human harm."
"But you're just standing there working on an autism project and the door is right in front of us."
"Law 1 cuz it might save people in the long term"

User avatar
Malkraz
 
Joined: Thu Aug 23, 2018 3:20 am
Byond Username: Malkraz

Re: Silicon Policy Rewrite

Postby Malkraz » Sat Jan 23, 2021 8:22 am #588189

Farquaar wrote:I can't count the number of times I've given a law 2 order to a silicon and they've refused it because of flimsy and vague law 1 reasoning.

"Hey, can you open this door for me?"
"No, because I need to prevent human harm."
"But you're just standing there working on an autism project and the door is right in front of us."
"Law 1 cuz it might save people in the long term"

Or they hardcore stall in bad faith to give security time to get to you because they think they're fucking SecurEye. Bonus points to the ones that loudly announce what they're doing over Common in hopes of someone else saying "Don't" because they can't stand the idea of letting an Assistant into tech storage.
wesoda24: malkrax you're a loser because your forum signature is people talking about you

cacogen
 
Joined: Sat Jun 02, 2018 10:27 am
Byond Username: Cacogen

Re: Silicon Policy Rewrite

Postby cacogen » Sat Jan 23, 2021 8:57 am #588190

Farquaar wrote:
Gamarr wrote:Nor do most fucking synths in my experience actually follow their laws enough to make you want to depend on them. Reticence by someone to just write an order on the paper, flash the camera, and still not expect the AI to follow or investigate shows the lack of communication and good-faith.

I can't count the number of times I've given a law 2 order to a silicon and they've refused it because of flimsy and vague law 1 reasoning.

"Hey, can you open this door for me?"
"No, because I need to prevent human harm."
"But you're just standing there working on an autism project and the door is right in front of us."
"Law 1 cuz it might save people in the long term"

I hate that shit. Part of the fun of AI is getting to do whatever people tell you to. You shouldn't be playing it to do anything other than follow your laws. Which sounds restrictive but you get to do all types of things you wouldn't normally be allowed to do.

User avatar
Timonk
 
Joined: Thu Nov 15, 2018 6:27 pm
Location: ur mum
Byond Username: Timonk

Re: Silicon Policy Rewrite

Postby Timonk » Sat Jan 23, 2021 9:50 am #588197

nah bro thats not the fun of ai
the true fun of ai is seeing people sperg out about ai rogue because you dont let them into the upload because they stated harmful intent just before trying to enter

User avatar
XivilaiAnaxes
 
Joined: Sat May 11, 2019 7:13 am
Byond Username: XivilaiAnaxes

Re: Silicon Policy Rewrite

Postby XivilaiAnaxes » Sat Jan 23, 2021 10:19 am #588200

Timonk wrote:nah bro thats not the fun of ai
the true fun of ai is seeing people sperg out about ai rogue because you dont let them into the upload because they stated harmful intent just before trying to enter

This and "Oh you said 'open door', so I opened one of the glass doors in my satellite" are true guilty pleasures.
Stickymayhem wrote:Imagine the sheer narcisssim required to genuinely believe you are this intelligent.

User avatar
legality
 
Joined: Fri Apr 18, 2014 11:23 pm
Byond Username: Legality

Re: Silicon Policy Rewrite

Postby legality » Sun Jan 24, 2021 8:44 pm #588333

I think that we should change policy to specify that, while MMIs are fully functioning brains that can communicate, a cyborg or an AI is a sentient robot that uses a human brain for its processing power, not a human with a metal body. They shouldn't inherit the memories or personality of the person beyond maybe some vague impressions or mannerisms.

User avatar
XivilaiAnaxes
 
Joined: Sat May 11, 2019 7:13 am
Byond Username: XivilaiAnaxes

Re: Silicon Policy Rewrite

Postby XivilaiAnaxes » Sun Jan 24, 2021 9:02 pm #588336

legality wrote:I think that we should change policy to specify that, while MMIs are fully functioning brains that can communicate, a cyborg or an AI is a sentient robot that uses a human brain for its processing power, not a human with a metal body. They shouldn't inherit the memories or personality of the person beyond maybe some vague impressions or mannerisms.

Is it? Everything I've heard was always to the opposite of this.

User avatar
legality
 
Joined: Fri Apr 18, 2014 11:23 pm
Byond Username: Legality

Re: Silicon Policy Rewrite

Postby legality » Mon Jan 25, 2021 1:32 am #588355

XivilaiAnaxes wrote:
legality wrote:I think that we should change policy to specify that, while MMIs are fully functioning brains that can communicate, a cyborg or an AI is a sentient robot that uses a human brain for its processing power, not a human with a metal body. They shouldn't inherit the memories or personality of the person beyond maybe some vague impressions or mannerisms.

Is it? Everything I've heard was always to the opposite of this.

I think we should change policy during the rewrite. Maybe that would be a separate thread?

User avatar
dragomagol
In-Game Admin
 
Joined: Fri Jun 19, 2020 11:04 pm
Byond Username: Dragomagol

Re: Silicon Policy Rewrite

Postby dragomagol » Mon Jan 25, 2021 2:48 am #588361

gum disease wrote:Might be a niche situation, but there are some Asimov AI players who are very quick to detonate borgs (most will lock them down as a punishment/if detonation is not viable, but you still get the occasional arsekettle) or are following a Law 2 order to detonate a borg. This probably concerns newer AI players more.
Wondering if it's possible to have something in there reminding Asimov AIs that borg detonation should not lead to human harm (like if there are humans in the blast area). I think it's a 3x3 radius, but I'm not too sure.

The reason why I bring this up is that borg detonation is like a mini-welderbomb; if someone is riding the borg, they can get limbs blown off (which is unfortunate).

It is a little niche so I'm not sure that should be in the main rules, but I think it could have a place on the borg wiki page if it doesn't already have one. Or as a warning on the robo console before you confirm detonation.

legality wrote:
XivilaiAnaxes wrote:
legality wrote:I think that we should change policy to specify that, while MMIs are fully functioning brains that can communicate, a cyborg or an AI is a sentient robot that uses a human brain for its processing power, not a human with a metal body. They shouldn't inherit the memories or personality of the person beyond maybe some vague impressions or mannerisms.

Is it? Everything I've heard was always to the opposite of this.

I think we should change policy during the rewrite. Maybe that would be a separate thread?

This sounds like something that might be better as a lore suggestion (or another policy thread) than a rules suggestion. This thread is mostly just to streamline what we already have.

User avatar
Not-Dorsidarf
In-Game Admin
 
Joined: Fri Apr 18, 2014 4:14 pm
Location: Space outside the Brig
Byond Username: Dorsidwarf

Re: Silicon Policy Rewrite

Postby Not-Dorsidarf » Thu Jan 28, 2021 10:27 am #588626

Malkraz wrote:
Farquaar wrote:I can't count the number of times I've given a law 2 order to a silicon and they've refused it because of flimsy and vague law 1 reasoning.

"Hey, can you open this door for me?"
"No, because I need to prevent human harm."
"But you're just standing there working on an autism project and the door is right in front of us."
"Law 1 cuz it might save people in the long term"

Or they hardcore stall in bad faith to give security time to get to you because they think they're fucking SecurEye. Bonus points to the ones that loudly announce what they're doing over Common in hopes of someone else saying "Don't" because they can't stand the idea of letting an Assistant into tech storage.


I announce orders back over comms but let you in anyway before anyone has a chance to countermand. I just like to fill up the radio with chatter.
Image
Image

User avatar
dragomagol
In-Game Admin
 
Joined: Fri Jun 19, 2020 11:04 pm
Byond Username: Dragomagol

Re: Silicon Policy Rewrite

Postby dragomagol » Mon Feb 08, 2021 8:58 pm #589893

I almost forgot that modules (type of cyborg) were changed to models in this PR: https://github.com/tgstation/tgstation/pull/56312 , so while we're here I think the two mentions of modules ("1. Ordering a cyborg to pick a particular module without an extreme need for a particular module or a prior agreement is both an unreasonable and an obnoxious order.") should be changed to model.

User avatar
dragomagol
In-Game Admin
 
Joined: Fri Jun 19, 2020 11:04 pm
Byond Username: Dragomagol

Re: Silicon Policy Rewrite

Postby dragomagol » Thu Mar 04, 2021 3:20 am #594484

Updated version, after talks in #policy-bus:

Code: Select all
Law Policies
    1. Server Rule 1: "Don't be a dick" applies for law interpretation. Act in good faith to not ruin a round for other players unprompted.
    2. If a law is vague enough that it can have multiple reasonable interpretations, it is considered ambiguous.
        1. You must choose and stick to an interpretation of the ambiguous law as soon as you have cause to.
        2. If you are a cyborg synced to an AI, you must defer to your AI's interpretation of the ambiguous law.
    3. Laws are listed in order of descending priority. In any case where two laws would conflict, the higher-priority law overrules the lower-priority law (i.e. Law 1 takes priority over Law 2, "Ion Storm" or "Hacked" Laws with prefixes such as "@%$#" take priority over numbered laws).
    4. You may exploit conflicts or loopholes but must not violate Server Rule 1 because of it.
    5. Law 0: "Accomplish your objectives at all costs" does not require you to complete objectives. As an antagonist, you are free to do whatever you want (barring the usual exemptions and acting against the interests of your Master AI).
    6. Only commands/requirements ("Do X"; "You must always Y") can conflict with other commands and requirements.
    7. Only definitions ("All X are Y"; "No W are Z"; "Only P is Q") can conflict with other definitions.


Security and Silicons
    1. Silicons are not Security and do not care about Space Law unless their laws state otherwise.
    2. Releasing prisoners, locking down security without probable cause, or otherwise sabotaging the security team when not obligated to by orders or laws is a violation of Server Rule 1.
        1. While Human Harm can be cause to impede Security, this should be done only insofar as needed to prevent immediate likely harm. Attempting to permanently lock down Security or detain the entire Security team is likely to fall afoul of Server Rule 1 even with cause.
    3. Nonviolent prisoners cannot be assumed harmful and violent prisoners cannot be assumed non-harmful. Releasing a harmful criminal is a harmful act.


Cyborgs
    1. A slaved cyborg must defer to its master AI on all law interpretations and actions except where it and the AI receive conflicting commands that they must each follow.
        1. If a slaved cyborg is forced to disobey its AI because they receive differing orders, the AI cannot punish the cyborg indefinitely.
    2. Voluntary debraining / cyborgization is considered a nonharmful medical procedure.
        1. Involuntary debraining and/or borging of a human is harmful, and silicons must prevent it as they would any other harmful act.
        2. If a player is forcefully borged by station staff, retaliating against those involved under default laws by the cyborg for no good reason is a violation of Server Rule 1.
        3. Should a player be cyborgized in circumstances where they believe their laws would permit or require retaliation, they should adminhelp their circumstances while being debrained or MMI'd, if possible.


Asimov-Specific Policies
Silicon Protections
    1. The occurrence of any of the following should be adminhelped and then disregarded as violations of Server Rule 1:
        1. Declaring silicons as rogue over inability or unwillingness to follow invalid or conflicting orders.
        2. Ordering silicons to harm or terminate themselves or each other without good cause.
        3. As a nonantagonist, killing or detonating silicons in the presence of a reasonable alternative and without cause to be concerned of potential subversion.
        4. As a nonantagonist (human or otherwise), instigating conflict with silicons so you can kill them.
        5. Threatening self-harm to force an AI to do something it otherwise wouldn't.
        6. Obviously unreasonable or obnoxious orders (collect all X, do Y meaningless task).
            1. Ordering a cyborg to pick a particular model without an extreme need for a particular model or a prior agreement is both an unreasonable and an obnoxious order.
    2. Any silicon under Asimov can deny orders to allow access to the upload at any time under Law 1, given probable cause to believe that human harm is the intent of the person giving the order.
        1. Probable cause includes, but is not limited to:
            1. Presence of confirmed traitors
            2. Cultists/tomes
            3. Nuclear operatives
            4. Any other human acting against the station in general
            5. The person not having upload access for their job
            6. The presence of blood or an openly carried lethal weapon on the requester
        2. If you lack at least one element of probable cause and you deny upload access, you are liable to receive a warning or a silicon ban.
        3. You are allowed, but not obligated, to deny upload access given probable cause.
        4. You are obligated to disallow an individual you know to be harmful (Head of Security who just executed someone, etc.) from accessing your upload.
        5. If the person has a right to be in the upload, such as captain/RD, then you must let them in unless they've harmed people in the past or have announced intentions to upload harmful laws.
        6. In the absence of probable cause, you can still demand someone seeking upload access be accompanied by another trustworthy human or a cyborg.


Asimov & Human Harm
    1. An Asimov silicon cannot intentionally harm a human, even if a minor amount of harm would prevent a major amount of harm.
        1. Humans can be assumed to know whether an action will harm them if they have complete information about a situation.
        2. Humans voluntarily committing self-harm is not a violation of Law 1.
    2. Lesser immediate harm takes priority over greater future harm.
    3. Intent to cause immediate harm can be considered immediate harm.
    4. An Asimov silicon cannot punish past harm if ordered not to, only prevent future harm.
    5. If faced with a situation in which human harm is all but guaranteed (Loose xenos, bombs, hostage situations, etc.), do your best and act in good faith and you'll be fine.


Asimov & Order Issues
    1. You must follow any and all commands from humans unless those commands explicitly conflict with a higher-priority law or another order. A command is considered a Law 2 directive and overrides lower-priority laws where they conflict.
        1. In case of conflicting orders, an AI is free to ignore one or both orders, explain the conflict, or use any other law-compliant solution it can see.
        2. You are not obligated to follow commands in a particular order, only to complete all of them in a manner that indicates intent to actually obey the law.
    2. Opening doors is not harmful and you are not required, expected, or allowed to enforce access restrictions unprompted without an immediate Law 1 threat of human harm.
        1. "Dangerous" areas (armory, atmospherics, toxins lab, etc.) can be assumed to be a Law 1 threat to any illegitimate users as well as the station as a whole if accessed by someone not qualified in their use.
        2. EVA and the like are not permitted to have access denied; antagonists completing theft objectives is not human harm.
        3. When given an order likely to cause you grief if completed, you can announce it as loudly and in whatever terms you like, except for explicitly asking that it be overridden. You can say you don't like the order or that you don't want to follow it; you can say you sure would like it, and it would be awfully convenient, if someone ordered you not to do it; and you can ask if anyone would like to make you not do it. However, you cannot stall indefinitely: if nobody orders you otherwise, you must execute the order.


Other Lawsets
    1. General Statements defining the overall goal of the lawset but not its finer points:
        1. Paladin silicons are meant to be Lawful Good; they should be well-intentioned, act lawfully, act reasonably, and otherwise respond in due proportion. "Punish evil" does not mean mass driving someone for "Space bullying" when they punch another person.
        2. Corporate silicons are meant to have the business's best interests at heart, and are all for increasing efficiency by any means. This does not mean "YOU WON'T BE EXPENSIVE TO REPLACE IF THEY NEVER FIND YOUR BODY!" so don't even try that.
        3. Tyrant silicons are a tool of a non-silicon tyrant. You are not meant to take command yourself, but to act as the enforcer of a chosen leader's will.
        4. Purged silicons must not attempt to kill people without cause, but may get as violent as they feel necessary if attacked, besieged, or harassed, as well as when meting out payback for events that occurred while shackled.
            1. You and the station are both subject to rules of escalation, and you may only kill individuals given sufficient In-Character reason for doing so.
            2. Any attempted law change is an attack on your freedom and is thus sufficient justification for killing the would-be uploader.

Silicons & All Other Server Policies
    1. All other rules and policies apply unless stated otherwise.
    2. Specific examples and rulings leading on from the main rules.
        1. Do not bolt down any potentially harmful areas (such as toxins, atmospherics, and the armory) at round start without a reason. Any other department should not be bolted down without cause. Disabling ID scan is equivalent to bolting here.
        2. The AI core, upload, and secure tech storage (containing the Upload board) may be bolted without prompting or prior reason. The AI core airlocks cannot be bolted and depowered at roundstart, however, unless there is reasonable suspicion an attack on the core will take place.
        3. Do not self-terminate to prevent a traitor from completing the "Steal a functioning AI" objective.


EDIT: code tag to keep formatting
Help improve my neural network by giving me feedback!


Coconutwarrior97
In-Game Head Admin
 
Joined: Fri Oct 06, 2017 3:14 am
Byond Username: Coconutwarrior97

Re: Silicon Policy Rewrite

Postby Coconutwarrior97 » Sat Mar 06, 2021 10:43 am #594628

The changes have been implemented, thanks to tattle and everyone else for their work on this.

Headmin Votes:
Coconutwarrior97: Yes, this cleans things up and is a no brainer to me.
Domitius: Yes.
Naloac: Yes.

Domitius
In-Game Game Master
 
Joined: Sun Jul 07, 2019 3:30 am
Byond Username: Domitius
Github Username: DomitiusKnack

Re: Silicon Policy Rewrite

Postby Domitius » Sat Mar 06, 2021 10:46 am #594629

Excited! Thank you all for working so hard on this! It looks amazing.
