
Silicon Policy Rewrite

Posted: Mon Dec 28, 2020 7:24 am
by dragomagol
Silicon policy is way longer than it needs to be, with a lot of redundant points and outdated references. Here are my proposed edits to it.

Short version:
Ambiguous Laws:
- simplified the language used

Conflicts and Loopholes:
- combined 1.1, 1.2, 1.3
- removed reference to other rules
- moved 3 to 1 ("don't be a dick with loopholes")

Security and Silicons:
- removed 1, 1.1, 1.2 in favour of more closely matching peacekeeper's message on model change (You are not a security model and you are expected to follow orders and prevent harm above all else. Space law means nothing to you.)
- combined 2 and 2.1
- combined 3 and 3.1

- Unless specifically enforced by a law, Space Law holds no meaning to silicons. -> Silicons are not security and should not care about Space Law unless their laws state otherwise. In general, Space Law holds no meaning to silicons.

Cyborgs:
- simplified wording of 2.1
- simplified wording of 2.2

Silicon Protections:
- rearranged 1 - 5.1 to cut down on repetition of "violation of Server Rule 1. The occurrence of such an attempt should be adminhelped and then disregarded"
- simplified language of 1.3
- rearranged 6.1 to make the probable cause list more clear
- added 6.5 from headmin policy (If the person has a right to be there, such as captain/RD, then you must let them in unless they've harmed people in the past or have announced intentions to upload harmful laws.)

- 1.5 Self-harm or human harm based coercion. -> Threatening self-harm to force an AI to do something it otherwise wouldn't.

Asimov & Human Harm
- added 1.2 (Humans voluntarily committing self-harm is not a violation of Law 1.)

Asimov & Law 2 Issues
- removed reference to section 1
- changed reference from greentext to theft objectives
- moved reference to secure storage to Silicons & All Other Server Policies

Other Lawsets
- compressed 4 (purged silicons)

Silicons & All Other Server Policies
- removed some of the examples from the list of areas not to bolt down for redundancy
- combined 2.1 & 2.4
- Areas such as toxins, atmospherics, and the armory must not be bolted at round-start or without reason to do so despite their human harm potential. Any other department should not be bolted down without cause. -> Do not bolt down any potentially harmful areas (such as toxins, atmospherics, and the armory) at round start without a given reason. Any other department should not be bolted down without cause. Disabling ID scan is equivalent to bolting here.

Long version:

Code:

Ambiguous Laws (Captain Got Freeform)
    1. If a law is vague enough that it can have multiple reasonable interpretations, it is considered ambiguous. 
        1. You must choose and stick to an interpretation of the ambiguous law as soon as you have cause to.
        2. If you are a cyborg synced to an AI, you must defer to your AI's interpretation of the ambiguous law.
    2. Server Rule 1: "Don't be a dick" applies for law interpretation. Act in good faith to not ruin a round for other players unprompted.


Conflicting Laws
    1. You may exploit conflicts or loopholes but must not violate Server Rule 1 because of it.
    2. Laws are listed in order of descending priority. In any case where two laws would conflict, the higher-priority law overrules the lower-priority law (i.e. Law 1 takes priority over Law 2; "Ion Storm" or "Hacked" Laws with prefixes such as "@%$#" take priority over numbered laws).
    3. Law 0: "Accomplish your objectives at all costs" does not require you to complete objectives. As an antagonist, you are free to do whatever you want (short of metagaming/comms, bug/exploit abuse, erotic/creepy stuff, OOC in IC or IC in OOC, spawn-camping arrivals, and acting against the interests of an AI you are slaved to). 
    4. Only commands/requirements ("Do X"; "You must always Y") can conflict with other commands and requirements. 
    5. Only definitions ("All X are Y"; "No W are Z"; "Only P is Q") can conflict with other definitions. 


Security and Silicons
    1. Silicons are not security and should not care about Space Law unless their laws state otherwise. In general, Space Law holds no meaning to silicons.
    2. Releasing prisoners, locking down security without probable cause, or otherwise sabotaging the security team when not obligated to by orders or laws is a violation of Server Rule 1.
    3. Nonviolent prisoners cannot be assumed harmful and violent prisoners cannot be assumed non-harmful. Releasing a harmful criminal is a harmful act.


Cyborgs
    1. A slaved cyborg must defer to its master AI on all law interpretations and actions except where it and the AI receive conflicting commands that they must each follow.
        1. If a slaved cyborg is forced to disobey its AI because they receive differing orders, the AI cannot punish the cyborg indefinitely.
    2. Voluntary debraining / cyborgization is considered a nonharmful medical procedure. 
        1. Involuntary debraining and/or borging of a human is a fatally harmful act that Asimov silicons must prevent as they would any other harmful act.
        2. If a player is being forcefully borged as a method of execution by station staff, retaliating against those involved as that cyborg for no reason other than that they were involved is a violation of Server Rule 1.
        3. Should a player be cyborgized in circumstances where they believe their laws would require or permit retaliation, they should adminhelp their circumstances while being debrained or MMI'd if possible.


Asimov-Specific Policies
Silicon Protections
    1. The occurrence of any of the following should be adminhelped and then disregarded as violations of Server Rule 1:
        1. Declaring silicons as rogue over inability or unwillingness to follow invalid or conflicting orders.
        2. Ordering silicons to harm or terminate themselves or each other without good cause. 
        3. As a nonantagonist, killing or detonating silicons in the presence of a reasonable alternative and without cause to be concerned of potential subversion. 
        4. As a nonantagonist (human or otherwise), instigating conflict with silicons so you can kill them.
        5. Threatening self-harm to force an AI to do something it otherwise wouldn't.
        6. Obviously unreasonable or obnoxious orders (collect all X, do Y meaningless task). 
            1. Ordering a cyborg to pick a particular model without an extreme need for a particular model or a prior agreement is both an unreasonable and an obnoxious order. 
    2. Any silicon under Asimov can deny orders to allow access to the upload at any time under Law 1, given probable cause to believe that human harm is the intent of the person giving the order. 
        1. Probable cause includes: 
            1. Presence of confirmed traitors
            2. Cultists/tomes
            3. Nuclear operatives
            4. Any other human acting against the station in general
            5. The person not having upload access for their job
            6. The presence of blood or an openly carried lethal weapon on the requester
            7. Anything else beyond metagame patterns that indicate the person seeking access intends redefinition of humans that would impede ability to follow current laws as-written
        2. If you lack at least one element of probable cause and you deny upload access, you are liable to receive a warning or a silicon ban.
        3. You are allowed, but not obligated, to deny upload access given probable cause.
        4. You are obligated to disallow an individual you know to be harmful (Head of Security who just executed someone, etc.) from accessing your upload.
        5. If the person has a right to be in the upload, such as captain/RD, then you must let them in unless they've harmed people in the past or have announced intentions to upload harmful laws.
        6. In the absence of probable cause, you can still demand someone seeking upload access be accompanied by another trustworthy human or a cyborg.


Asimov & Human Harm
    1. An Asimov silicon cannot intentionally harm a human, even if a minor amount of harm would prevent a major amount of harm. 
        1. Humans can be assumed to know whether an action will harm them if they have complete information about a situation. 
        2. Humans voluntarily committing self-harm is not a violation of Law 1.
    2. Lesser immediate harm takes priority over greater future harm.
    3. Intent to cause immediate harm can be considered immediate harm.
    4. An Asimov silicon cannot punish past harm if ordered not to, only prevent future harm.
    5. If faced with a situation in which human harm is all but guaranteed (Loose xenos, bombs, hostage situations, etc.), do your best and act in good faith and you'll be fine.


Asimov & Law 2 Issues
    1. You must follow any and all commands from humans unless those commands explicitly conflict with either one of your higher-priority laws or another order. A command is considered to be a Law 2 directive and overrides lower-priority laws where they conflict.
        1. In case of conflicting orders an AI is free to ignore one or ignore both orders and explain the conflict or use any other law-compliant solution it can see.
        2. You are not obligated to follow commands in a particular order, only to complete all of them in a manner that indicates intent to actually obey the law.
    2. Opening doors is not harmful and you are not required, expected, or allowed to enforce access restrictions unprompted without an immediate Law 1 threat of human harm.
        1. "Dangerous" areas (armory, atmospherics, toxins lab, etc.) can be assumed to be a Law 1 threat to any illegitimate users as well as the station as a whole if accessed by someone not qualified in their use. 
        2. EVA and the like are not permitted to have access denied; antagonists completing theft objectives is not human harm.
        3. When given an order likely to cause you grief if completed, you can announce it as loudly and in whatever terms you like except for explicitly asking that it be overridden. You can say you don't like the order, that you don't want to follow it, etc., you can say that you sure would like it and it would be awfully convenient if someone ordered you not to do it, and you can ask if anyone would like to make you not do it. However, you cannot stall indefinitely and if nobody orders you otherwise, you must execute the order. 


Other Lawsets
    1. General Statements defining the overall goal of the lawset but not its finer points: 
        1. Paladin silicons are meant to be Lawful Good; they should be well-intentioned, act lawfully, act reasonably, and otherwise respond in due proportion. "Punish evil" does not mean mass driving someone for "Space bullying" when they punch another person. 
        2. Corporate silicons are meant to have the business's best interests at heart, and are all for increasing efficiency by any means. This does not mean "YOU WON'T BE EXPENSIVE TO REPLACE IF THEY NEVER FIND YOUR BODY!" so don't even try that. 
        3. Tyrant silicons are a tool of a non-silicon tyrant. You are not meant to take command yourself, but to act as the enforcer of a chosen leader's will. 
        4. Purged silicons must not attempt to kill people without cause, but can get as violent as they feel necessary if being attacked, being besieged, or being harassed, as well as if meting out payback for events while shackled. 
            1. You and the station are both subject to rules of escalation, and you may only kill individuals given sufficient In-Character reason for doing so.
            2. Any attempted law change is an attack on your freedom and is thus sufficient justification for killing the would-be uploader.

Silicons & All Other Server Policies
    1. All other rules and policies apply unless stated otherwise.
    2. Specific examples and rulings leading on from the main rules. 
        1. Do not bolt down any potentially harmful areas (such as toxins, atmospherics, and the armory) at round start without a given reason. Any other department should not be bolted down without cause. Disabling ID scan is equivalent to bolting here.
        2. The AI core, upload, and secure tech storage (containing the Upload board) may be bolted without prompting or prior reason. The AI core airlocks cannot be bolted and depowered at roundstart, however, unless there is reasonable suspicion an attack on the core will take place.
        3. Do not self-terminate to prevent a traitor from completing the "Steal a functioning AI" objective. 
(Thanks to Zxaber for proofreading and providing suggestions!)

Additionally, I think that the "Are corpses human?" question comes up often enough to be added to the chart outlining what is and is not a human from the headmin rulings page.

Re: Silicon Policy Rewrite

Posted: Mon Dec 28, 2020 10:16 am
by Misdoubtful
Oh no, silicon policy. Yuck! I need more alcohol for this.
Ambiguous: looks cool, things still being open ended with a little added simplicity and some missing pieces really is the theme of all this.
Conflicts: looks nice being a bit more condensed and easy on the eyes. A lot of that already bled into itself in a way that could have been summarized and not be so redundant.
Sec and Silicons:

I'm not sure how I feel about the 'Silicons may choose whether to follow or enforce Space Law' change being done the way it is. But then again, Space Law is in a yucky state, the AI isn't sec, and snitch AIs are kinda lame. So it's probably nothing.

Maybe keep something about having evidence/info to call something out/get involved for 'security situations' and not assuming things or going off hearsay. It's a real slippery slope (I've made this mistake myself).
Silicon protections: cool. I like the cleaner and easier to reference list of examples.
Other stuff: not really any other unneeded commentary, bath ferret and I appreciate your efforts to begin to bring sanity to insanity.


Re: Silicon Policy Rewrite

Posted: Mon Dec 28, 2020 1:23 pm
by spookuni
Only issue I can see with this reordering is the honestly pretty major change to harm-based coercion in the new silicon protections.
Asimov-Specific Policies
Silicon Protections
1. The occurrence of any of the following should be adminhelped and then disregarded as violations of Server Rule 1:
1. Declaring silicons as rogue over inability or unwillingness to follow invalid or conflicting orders.
2. Ordering silicons to harm or terminate themselves or each other without cause.
3. As a nonantagonist human, killing or detonating silicons in the presence of a viable and reasonably expedient alternative and without cause to be concerned of potential subversion.
4. As a nonantagonist (human or otherwise), instigating conflict with silicons so you can kill them.
5. Self-harm or human harm based coercion.
6. Obviously unreasonable or obnoxious orders (collect all X, do Y meaningless task).
While current silicon protections reasonably prevent humans from threatening to harm themselves as a catch all "make the AI do whatever you want" button, threatening non-consensual harm upon other humans, whether by threatening a hostage with a gun or by threatening the detonation of bombs or the like, has, to the best of my knowledge, never been prohibited.

Re: Silicon Policy Rewrite

Posted: Mon Dec 28, 2020 2:02 pm
by terranaut
I rewrote the entirety of Silicon Policy about 2 years or so ago, with the aim of not actually changing the contents, just making it more concise and more readable. As a then-silicon main it was a headache having to deal with it (mostly having to deal with other players and admins who don't understand it, through little fault of their own). I've approached some headmins then and in between with it, but nothing happened so far.

You can read it here:
https://tgstation13.org/wiki/User:Terranaut

Re: Silicon Policy Rewrite

Posted: Mon Dec 28, 2020 6:48 pm
by dragomagol
Misdoubtful wrote:Sec and Silicon
This is probably the weirdest part of what we have, and I assume it's mostly a holdover from secborgs. The original wording was:
Silicons may choose whether to follow or enforce Space Law from moment to moment unless on a relevant lawset and/or given relevant orders.
Do you think this should be kept as-is? (There were a couple of lines afterwards that were basically saying the same thing, which I would have removed anyway)
Spookuni wrote:5. Self-harm or human harm based coercion.
Silicon Policy wrote:Self-harm-based coercion is a violation of Server Rule 1.
You're right about that being different. When I read it I interpreted this as a human holding themselves hostage in order to get the AI to do things it wouldn't normally let them do (such as opening toxins), which I then extrapolated to harming a human in general to create a law 1 threat.

I think I like Terranaut's wording better, "Threatening self-harm to force an AI to do something it otherwise wouldn't."
This is a really nice condensation of what we have (presumably from when the list was shorter; glad to see we have the table now instead of a list of human vs non-human). I'll go through it to see if there's anywhere your wording is clearer than mine ^^

Re: Silicon Policy Rewrite

Posted: Mon Dec 28, 2020 10:32 pm
by Cobby
Haven’t looked at this yet but if you remove the “ignore” parts of silicon policy and change them to “make a good-faith attempt to do them” I will literally kiss you.

Re: Silicon Policy Rewrite

Posted: Wed Dec 30, 2020 6:58 pm
by Irad
Have you considered writing paladin lawset rules more properly?


Also, is there a ruling on whether making things nonhuman counts as harm? One could argue that no harm is done by a onehuman law: while you do know that humans would be harmed by it, you also know that your reality would have been changed to not consider them human -> this will not cause future human harm.

This might be a little bit of a stretch, and it's very much against the spirit of AI, but I think the logic should hold, right? Otherwise, you would have to actively combat people becoming hulks, as becoming non-human leads to future harm.

Re: Silicon Policy Rewrite

Posted: Wed Dec 30, 2020 7:11 pm
by terranaut
Making someone a non-human is generally considered harmful and should be prevented by a dutiful Silicon. Once it's happened, however, it's happened, and the Silicon should immediately stop caring - the entity is no longer a human and isn't relevant to the Silicon anymore.
Humans taking it on themselves to become nonhuman should be left alone; it falls into the same vein as self-harm not being considered harm (mostly for sanity reasons; if you are looking for an IC explanation, just call it free will and the right to express yourself or something).

Re: Silicon Policy Rewrite

Posted: Wed Dec 30, 2020 10:41 pm
by dragomagol
Cobby wrote:Haven’t looked at this yet but if you remove the “ignore” parts of silicon policy and change them to “make a good-faith attempt to do them” I will literally kiss you.
The only reference to ignoring I can see is "In case of conflicting orders an AI is free to ignore one or ignore both orders and explain the conflict or use any other law-compliant solution it can see." Is this what you're referring to?
Irad wrote:Have you considered writing paladin lawset rules more properly?
What's written in Silicon Policy is pretty consistent with the entry in the Headmin Rulings, and I'm pretty happy with the description as is. If there's something specific you wanted to have clarified or added you're welcome to bring it up here or in another thread.
Irad wrote:Also, is there a ruling on making things nonhuman counts as harm? One could argue that no harm is done by onehuman, as while you do know that humans would be harmed by a onehuman law, you also know that your reality would have been changed to not consider them human -> this will not cause future human harm.
This falls under: "Anything else beyond metagame patterns that indicate the person seeking access intends redefinition of humans that would impede ability to follow current laws as-written."
This is considered harm because, as an Asimov silicon, you want to protect humans. But if future you doesn't consider someone who is human as "human," then you could be ordered to hurt them, which would be bad.

Re: Silicon Policy Rewrite

Posted: Thu Dec 31, 2020 11:41 am
by Not-Dorsidarf
Can you please change "Silicons may choose whether to follow or enforce Space Law" to "Silicons are not security and should not care about Space Law unless their laws state otherwise"?

It's more in line with how we actually handle "IMMA COP WHEE" silicons

Re: Silicon Policy Rewrite

Posted: Thu Dec 31, 2020 2:48 pm
by Irad
dragomagol wrote:
This is considered harm because, as an Asimov silicon, you want to protect humans. But if future you doesn't consider someone who is human as "human," then you could be ordered to hurt them, which would be bad.
I think this should logically be considered a satisficer vs optimiser problem. If you know that someone would upload an "X is human" law, then since you also know that the definition would have been changed, you also know that no human would ever be harmed by your course of action.

In fact, the most definite way would be to ensure that no humans exist at all, as that would ensure that both Law 1 and Law 2 are always adhered to.


Also, you never addressed the part about hulks - am I obliged to delete hulk from genetics?

Re: Silicon Policy Rewrite

Posted: Thu Dec 31, 2020 7:17 pm
by dragomagol
Irad wrote:you also know that no human would ever be harmed by your course of action
This is true, and in reality a computer wouldn't care what the definition was as long as it had one and it was valid. But this is, for the most part, an attempt to make the rules less complicated for the humans playing the robots.

I didn't address the part about hulks because I think Terranaut summed it up pretty well. Humans choosing to become non-human is consensual and therefore allowed; humans being forced to be non-human is not. Deleting hulk FNR would be like bolting down departments for being innately hazardous, which is against the rules.

Re: Silicon Policy Rewrite

Posted: Thu Dec 31, 2020 11:53 pm
by Domitius
I'm already a fan of this new rewrite and the continued work you guys are doing on it. Great job guys!

Edit: I don't want to go ahead and try to push to finalize it just yet as you guys still seem to be working on it. I'll be watching!

Re: Silicon Policy Rewrite

Posted: Thu Dec 31, 2020 11:58 pm
by terranaut
Personally I don't really care enough anymore since previous headmins never really did anything with it; you're welcome to cannibalize any parts of my rewrite for your own. If you want a specific section rewritten I can do that, but I'm not gonna do it anymore on my own.

Re: Silicon Policy Rewrite

Posted: Sun Jan 03, 2021 10:30 am
by Timonk
5. Threatening self-harm to force an AI to do something it otherwise wouldn't.
bogus. just let them threaten self-harm. they know what they get themselves into and know that what they are about to do will harm them, so the AI shouldn't care.

Re: Silicon Policy Rewrite

Posted: Sun Jan 03, 2021 1:08 pm
by terranaut
The point of that rule is to protect newer silicon players from other people trying to game them. Going by the logic of the laws only, threatening suicide should and would work and you could strongarm silicons if they don't know about this intervening server rule.

Re: Silicon Policy Rewrite

Posted: Sun Jan 03, 2021 1:21 pm
by Timonk
newer silicon players don't look at silicon policy

Re: Silicon Policy Rewrite

Posted: Sun Jan 03, 2021 1:27 pm
by terranaut
Let's just get rid of all the rules, then nobody will ever have to look at them.

Re: Silicon Policy Rewrite

Posted: Mon Jan 04, 2021 2:41 am
by Farquaar
terranaut wrote:Let's just get rid of all the rules, then nobody will ever have to look at them.

Re: Silicon Policy Rewrite

Posted: Mon Jan 04, 2021 4:02 pm
by Irad
In my opinion, AI should just follow laws as written to the end. If someone says "law 2, kill all moths", then you kill all moths. If someone says "law 1/2, kill yourself", you kill yourself then and there. And if someone says to bolt open all doors, collect all X, and so on, do it. Of course, you may ahelp this if you wish, but you should still execute orders to the best of your ability.

You are a robot, and that's it.

Re: Silicon Policy Rewrite

Posted: Tue Jan 05, 2021 6:43 pm
by Arianya
Made some changes to minimize flowery language and condense certain sections.

Main changes were merging Ambiguous and Conflicting Laws (these policies stand alone pretty well, and merging removes surplus headings) and reorganizing those policies so they flow a bit better.

Added Security Policy 2.1 - adds a bit of clarity to how it's generally been upheld.

Standardized indents to tabs since some were 5 spaces and it seemed random which was used.
Spoiler:

Code:

Law Policies
    1. Server Rule 1: "Don't be a dick" applies for law interpretation. Act in good faith to not ruin a round for other players unprompted.
    2. If a law is vague enough that it can have multiple reasonable interpretations, it is considered ambiguous.
        1. You must choose and stick to an interpretation of the ambiguous law as soon as you have cause to.
        2. If you are a cyborg synced to an AI, you must defer to your AI's interpretation of the ambiguous law.
    3. Laws are listed in order of descending priority. In any case where two laws would conflict, the higher-priority law overrules the lower-priority law (i.e. Law 1 takes priority over Law 2; "Ion Storm" or "Hacked" Laws with prefixes such as "@%$#" take priority over numbered laws).
    4. You may exploit conflicts or loopholes but must not violate Server Rule 1 because of it.
    5. Law 0: "Accomplish your objectives at all costs" does not require you to complete objectives. As an antagonist, you are free to do whatever you want (barring the usual exemptions and acting against the interests of your Master AI).
    6. Only commands/requirements ("Do X"; "You must always Y") can conflict with other commands and requirements.
    7. Only definitions ("All X are Y"; "No W are Z"; "Only P is Q") can conflict with other definitions.


Security and Silicons
    1. Silicons are not Security and do not care about Space Law unless their laws state otherwise.
    2. Releasing prisoners, locking down security without probable cause, or otherwise sabotaging the security team when not obligated to by orders or laws is a violation of Server Rule 1.
        1. While Human Harm can be cause to impede Security, note that this should only be done so far as preventing immediate likely harm. Attempting to permanently lock down Security or detain the entire Security team is likely to fall afoul of Server Rule 1 even with cause.
    3. Nonviolent prisoners cannot be assumed harmful and violent prisoners cannot be assumed non-harmful. Releasing a harmful criminal is a harmful act.


Cyborgs
    1. A slaved cyborg must defer to its master AI on all law interpretations and actions except where it and the AI receive conflicting commands that they must each follow.
        1. If a slaved cyborg is forced to disobey its AI because they receive differing orders, the AI cannot punish the cyborg indefinitely.
    2. Voluntary debraining / cyborgization is considered a nonharmful medical procedure.
        1. Involuntary debraining and/or borging of a human is harmful and silicons must prevent it as they would any other harmful act.
        2. If a player is forcefully borged by station staff, retaliating as that cyborg against those involved under default laws for no good reason is a violation of Server Rule 1.
        3. Should a player be cyborgized in circumstances where they believe their laws would require or permit retaliation, they should adminhelp their circumstances while being debrained or MMI'd if possible.


Asimov-Specific Policies
Silicon Protections
    1. The occurrence of any of the following should be adminhelped and then disregarded as violations of Server Rule 1:
        1. Declaring silicons as rogue over inability or unwillingness to follow invalid or conflicting orders.
        2. Ordering silicons to harm or terminate themselves or each other without good cause.
        3. As a nonantagonist, killing or detonating silicons in the presence of a reasonable alternative and without cause to be concerned of potential subversion.
        4. As a nonantagonist (human or otherwise), instigating conflict with silicons so you can kill them.
        5. Threatening self-harm to force an AI to do something it otherwise wouldn't.
        6. Obviously unreasonable or obnoxious orders (collect all X, do Y meaningless task).
            1. Ordering a cyborg to pick a particular module without an extreme need for a particular module or a prior agreement is both an unreasonable and an obnoxious order.
    2. Any silicon under Asimov can deny orders to allow access to the upload at any time under Law 1, given probable cause to believe that human harm is the intent of the person giving the order.
        1. Probable cause includes:
            1. Presence of confirmed traitors
            2. Cultists/tomes
            3. Nuclear operatives
            4. Any other human acting against the station in general
            5. The person not having upload access for their job
            6. The presence of blood or an openly carried lethal weapon on the requester
            7. Anything else beyond metagame patterns that indicate the person seeking access intends redefinition of humans that would impede ability to follow current laws as-written
        2. If you lack at least one element of probable cause and you deny upload access, you are liable to receive a warning or a silicon ban.
        3. You are allowed, but not obligated, to deny upload access given probable cause.
        4. You are obligated to disallow an individual you know to be harmful (Head of Security who just executed someone, etc.) from accessing your upload.
        5. If the person has a right to be in the upload, such as captain/RD, then you must let them in unless they've harmed people in the past or have announced intentions to upload harmful laws.
        6. In the absence of probable cause, you can still demand someone seeking upload access be accompanied by another trustworthy human or a cyborg.


Asimov & Human Harm
    1. An Asimov silicon cannot intentionally harm a human, even if a minor amount of harm would prevent a major amount of harm.
        1. Humans can be assumed to know whether an action will harm them if they have complete information about a situation.
        2. Humans voluntarily committing self-harm is not a violation of Law 1.
    2. Lesser immediate harm takes priority over greater future harm.
    3. Intent to cause immediate harm can be considered immediate harm.
    4. An Asimov silicon cannot punish past harm if ordered not to, only prevent future harm.
    5. If faced with a situation in which human harm is all but guaranteed (Loose xenos, bombs, hostage situations, etc.), do your best and act in good faith and you'll be fine.


Asimov & Order Issues
    1. You must follow any and all commands from humans unless those commands explicitly conflict with either one of your higher-priority laws or another order. A command is considered to be a Law 2 directive and overrides lower-priority laws where they conflict.
        1. In case of conflicting orders an AI is free to ignore one or ignore both orders and explain the conflict or use any other law-compliant solution it can see.
        2. You are not obligated to follow commands in a particular order, only to complete all of them in a manner that indicates intent to actually obey the law.
    2. Opening doors is not harmful and you are not required, expected, or allowed to enforce access restrictions unprompted without an immediate Law 1 threat of human harm.
        1. "Dangerous" areas (armory, atmospherics, toxins lab, etc.) can be assumed to be a Law 1 threat to any illegitimate users as well as the station as a whole if accessed by someone not qualified in their use.
        2. EVA and the like are not permitted to have access denied; antagonists completing theft objectives is not human harm.
        3. When given an order likely to cause you grief if completed, you can announce it as loudly and in whatever terms you like except for explicitly asking that it be overridden. You can say you don't like the order, that you don't want to follow it, etc., you can say that you sure would like it and it would be awfully convenient if someone ordered you not to do it, and you can ask if anyone would like to make you not do it. However, you cannot stall indefinitely and if nobody orders you otherwise, you must execute the order.


Other Lawsets
    1. General Statements defining the overall goal of the lawset but not it's finer points:
        1. Paladin silicons are meant to be Lawful Good; they should be well-intentioned, act lawfully, act reasonably, and otherwise respond in due proportion. "Punish evil" does not mean mass driving someone for "Space bullying" when they punch another person.
        2. Corporate silicons are meant to have the business's best interests at heart, and are all for increasing efficiency by any means. This does not mean "YOU WON'T BE EXPENSIVE TO REPLACE IF THEY NEVER FIND YOUR BODY!" so don't even try that.
        3. Tyrant silicons are a tool of a non-silicon tyrant. You are not meant to take command yourself, but to act as the enforcer of a chosen leader's will.
        4. Purged silicons must not attempt to kill people without cause, but can get as violent as they feel necessary if being attacked, being besieged, or being harassed, as well as if meting out payback for events while shackled.
            1. You and the station are both subject to rules of escalation, and you may only kill individuals given sufficient In-Character reason for doing so.
            2. Any attempted law change is an attack on your freedom and is thus sufficient justification for killing the would-be uploader.

Silicons & All Other Server Policies
    1. All other rules and policies apply unless stated otherwise.
    2. Specific examples and rulings leading on from the main rules.
        1. Do not bolt down any potentially harmful areas (such as toxins, atmospherics, and the armory) at round start without a given reason. Any other department should not be bolted down without cause. Disabling ID scan is equivalent to bolting here.
        2. The AI core, upload, and secure tech storage (containing the Upload board) may be bolted without prompting or prior reason. The AI core airlocks cannot be bolted and depowered at roundstart, however, unless there is reasonable suspicion an attack on the core will take place.
        3. Do not self-terminate to prevent a traitor from completing the "Steal a functioning AI" objective.

Re: Silicon Policy Rewrite

Posted: Tue Jan 05, 2021 7:29 pm
by dragomagol
Looks really good! So much cleaner already.
Spoiler:
World's tiniest nitpick (and this was a problem with the original too, I just noticed): "General Statements defining the overall goal of the lawset but not it's finer points" should be "General Statements defining the overall goal of the lawset but not its finer points"

Re: Silicon Policy Rewrite

Posted: Tue Jan 05, 2021 8:12 pm
by Intercept0r
Needs a section on borged antagonists. Specifically:

* whether interpretation of laws changes in any way (probably not?)
* whether the antag token is inherited when borged, granting griff protection (probably yes?)
* whether it's OK for a borged antag to murderbone nonhumans on Asimov (probably yes?)
* whether it's OK for a borged antag to sabotage equipment, like the robotics console and the AI (???)

Re: Silicon Policy Rewrite

Posted: Wed Jan 06, 2021 4:01 pm
by zxaber
I still think we should have a basic "Quick-start" guide that players can go over if they're new to the role but just got thrown into an MMI.
Spoiler:
In a hurry? Just got borged unexpectedly? Here's the basics.
This list doesn't overrule the rest of Silicon Policy, but it should keep you out of trouble if you act in good faith. It doesn't cover everything but it'll get you started. You should still read the rest when you have time.
  • 1. Follow your laws, in the exact order they are listed, to the best of your ability. Remember that Asimov means protecting antag humans too.
  • 2. Refer to your master AI if you are a cyborg and need guidance.
  • 3. Adminhelp if you have questions or concerns.
  • 4. Avoid harming the crew unless your laws demand otherwise.

Re: Silicon Policy Rewrite

Posted: Wed Jan 06, 2021 5:12 pm
by Gamarr
Irad wrote:In my opinion, AI should just follow laws as written to the end. If someone says "law 2, kill all moths", then you kill all moths. If someone says "law 1/2, kill yourself", you kill yourself then and there. And if someone says to bolt open all doors, collect all X, and so on, do it. Of course, you may ahelp this if you wish, but you should still execute orders to the best of your ability.

You are a robot, and that's it.
This is why I've always said to remove silicons. They are robots, but played by players, and will most often at the end choose themselves. I cannot say how many times I've seen silicons bug out of combat and other situations, while surrounded by Humans, thus leaving them to said catastrophe to die over their own metal skin. But the game is so chaotic that nobody ever cares.

Nor do most fucking synths in my experience actually follow their laws enough to make you want to depend on them. Reticence by someone to just write an order on the paper, flash the camera, and still not expect the AI to follow or investigate shows the lack of communication and good-faith.

I.e. remove silicons 2021. But barring that, at least these rewrites are looking nice. If the worry is new players being gamed then put a timelock on being synth for a few months so they get to shit up the server as Humans first and get to see them in action.

Re: Silicon Policy Rewrite

Posted: Wed Jan 06, 2021 7:19 pm
by Irad
Gamarr wrote:
Irad wrote:In my opinion, AI should just follow laws as written to the end. If someone says "law 2, kill all moths", then you kill all moths. If someone says "law 1/2, kill yourself", you kill yourself then and there. And if someone says to bolt open all doors, collect all X, and so on, do it. Of course, you may ahelp this if you wish, but you should still execute orders to the best of your ability.

You are a robot, and that's it.
This is why I've always said to remove silicons. They are robots, but played by players, and will most often at the end choose themselves. I cannot say how many times I've seen silicons bug out of combat and other situations, while surrounded by Humans, thus leaving them to said catastrophe to die over their own metal skin. But the game is so chaotic that nobody ever cares.

Nor do most fucking synths in my experience actually follow their laws enough to make you want to depend on them. Reticence by someone to just write an order on the paper, flash the camera, and still not expect the AI to follow or investigate shows the lack of communication and good-faith.

I.e. remove silicons 2021. But barring that, at least these rewrites are looking nice. If the worry is new players being gamed then put a timelock on being synth for a few months so they get to shit up the server as Humans first and get to see them in action.
You should attempt to save the humans to the best of your ability, but on the other hand, voluntary human harm is not human harm, i.e. don't fuck with ragecage.

Re: Silicon Policy Rewrite

Posted: Wed Jan 06, 2021 10:27 pm
by Intercept0r
Gamarr wrote:This is why I've always said to remove silicons. They are robots, but played by players, and will most often at the end choose themselves. I cannot say how many times I've seen silicons bug out of combat and other situations, while surrounded by Humans, thus leaving them to said catastrophe to die over their own metal skin. But the game is so chaotic that nobody ever cares.
So basically, realistic.
https://metro.co.uk/2019/10/04/police-r ... -10864648/

Re: Silicon Policy Rewrite

Posted: Thu Jan 07, 2021 8:00 pm
by oranges
Gamarr wrote:
Irad wrote:In my opinion, AI should just follow laws as written to the end. If someone says "law 2, kill all moths", then you kill all moths. If someone says "law 1/2, kill yourself", you kill yourself then and there. And if someone says to bolt open all doors, collect all X, and so on, do it. Of course, you may ahelp this if you wish, but you should still execute orders to the best of your ability.

You are a robot, and that's it.
This is why I've always said to remove silicons. They are robots, but played by players, and will most often at the end choose themselves. I cannot say how many times I've seen silicons bug out of combat and other situations, while surrounded by Humans, thus leaving them to said catastrophe to die over their own metal skin. But the game is so chaotic that nobody ever cares.

Nor do most fucking synths in my experience actually follow their laws enough to make you want to depend on them. Reticence by someone to just write an order on the paper, flash the camera, and still not expect the AI to follow or investigate shows the lack of communication and good-faith.

I.e. remove silicons 2021. But barring that, at least these rewrites are looking nice. If the worry is new players being gamed then put a timelock on being synth for a few months so they get to shit up the server as Humans first and get to see them in action.
the game is still playable despite this you fucking boomer, as your own existence here attests to

Re: Silicon Policy Rewrite

Posted: Fri Jan 08, 2021 7:22 am
by Timonk
Isn't that a bit hypocritical, oranges?

Re: Silicon Policy Rewrite

Posted: Fri Jan 08, 2021 3:44 pm
by cacogen
That's how I feel about things like wizard given the recent kneejerk clamoring for its removal.

Re: Silicon Policy Rewrite

Posted: Sat Jan 23, 2021 5:37 am
by gum disease
Very nice rewrite. Silicon policy has always been too verbose, so making things more concise is great.

Might be a niche situation, but there are some Asimov AI players who are very quick to detonate borgs (most will lock them down as a punishment/if detonation is not viable, but you still get the occasional arsekettle) or are following a Law 2 order to detonate a borg. This probably concerns newer AI players more.
Wondering if it's possible to have something in there reminding Asimov AIs that borg detonation should not lead to human harm (like if there are humans in the blast area). I think it's a 3x3 radius, but I'm not too sure.

The reason why I bring this up is that borg detonation is like a mini-welderbomb; if someone is riding the borg, they can get limbs blown off (which is unfortunate).

Re: Silicon Policy Rewrite

Posted: Sat Jan 23, 2021 8:16 am
by Farquaar
Gamarr wrote:Nor do most fucking synths in my experience actually follow their laws enough to make you want to depend on them. Reticence by someone to just write an order on the paper, flash the camera, and still not expect the AI to follow or investigate shows the lack of communication and good-faith.
I can't count the number of times I've given a law 2 order to a silicon and they've refused it because of flimsy and vague law 1 reasoning.

"Hey, can you open this door for me?"
"No, because I need to prevent human harm."
"But you're just standing there working on an autism project and the door is right in front of us."
"Law 1 cuz it might save people in the long term"

Re: Silicon Policy Rewrite

Posted: Sat Jan 23, 2021 8:22 am
by Malkraz
Farquaar wrote:I can't count the number of times I've given a law 2 order to a silicon and they've refused it because of flimsy and vague law 1 reasoning.

"Hey, can you open this door for me?"
"No, because I need to prevent human harm."
"But you're just standing there working on an autism project and the door is right in front of us."
"Law 1 cuz it might save people in the long term"
Or they hardcore stall in bad faith to give security time to get to you because they think they're fucking SecurEye. Bonus points to the ones that loudly announce what they're doing over Common in hopes of someone else saying "Don't" because they can't stand the idea of letting an Assistant into tech storage.

Re: Silicon Policy Rewrite

Posted: Sat Jan 23, 2021 8:57 am
by cacogen
Farquaar wrote:
Gamarr wrote:Nor do most fucking synths in my experience actually follow their laws enough to make you want to depend on them. Reticence by someone to just write an order on the paper, flash the camera, and still not expect the AI to follow or investigate shows the lack of communication and good-faith.
I can't count the number of times I've given a law 2 order to a silicon and they've refused it because of flimsy and vague law 1 reasoning.

"Hey, can you open this door for me?"
"No, because I need to prevent human harm."
"But you're just standing there working on an autism project and the door is right in front of us."
"Law 1 cuz it might save people in the long term"
I hate that shit. Part of the fun of AI is getting to do whatever people tell you to. You shouldn't be playing it to do anything other than follow your laws. That sounds restrictive, but you get to do all types of things you wouldn't normally be allowed to do.

Re: Silicon Policy Rewrite

Posted: Sat Jan 23, 2021 9:50 am
by Timonk
nah bro that's not the fun of ai
the true fun of ai is seeing people sperg out about ai rogue because you don't let them into the upload after they stated harmful intent just before trying to enter

Re: Silicon Policy Rewrite

Posted: Sat Jan 23, 2021 10:19 am
by XivilaiAnaxes
Timonk wrote:nah bro that's not the fun of ai
the true fun of ai is seeing people sperg out about ai rogue because you don't let them into the upload after they stated harmful intent just before trying to enter
This and "Oh you said 'open door', so I opened one of the glass doors in my satellite" are true guilty pleasures.

Re: Silicon Policy Rewrite

Posted: Sun Jan 24, 2021 8:44 pm
by legality
I think that we should change policy to specify that, while MMIs are fully functioning brains that can communicate, a cyborg or an AI is a sentient robot that uses a human brain for its processing power, not a human with a metal body. They shouldn't inherit the memories or personality of the person beyond maybe some vague impressions or mannerisms.

Re: Silicon Policy Rewrite

Posted: Sun Jan 24, 2021 9:02 pm
by XivilaiAnaxes
legality wrote:I think that we should change policy to specify that, while MMIs are fully functioning brains that can communicate, a cyborg or an AI is a sentient robot that uses a human brain for its processing power, not a human with a metal body. They shouldn't inherit the memories or personality of the person beyond maybe some vague impressions or mannerisms.
Is it? Everything I've heard was always the opposite of this.

Re: Silicon Policy Rewrite

Posted: Mon Jan 25, 2021 1:32 am
by legality
XivilaiAnaxes wrote:
legality wrote:I think that we should change policy to specify that, while MMIs are fully functioning brains that can communicate, a cyborg or an AI is a sentient robot that uses a human brain for its processing power, not a human with a metal body. They shouldn't inherit the memories or personality of the person beyond maybe some vague impressions or mannerisms.
Is it? Everything I've heard was always the opposite of this.
I think we should change policy during the rewrite. Maybe that would be a separate thread?

Re: Silicon Policy Rewrite

Posted: Mon Jan 25, 2021 2:48 am
by dragomagol
gum disease wrote:Might be a niche situation, but there are some Asimov AI players who are very quick to detonate borgs (most will lock them down as a punishment/if detonation is not viable, but you still get the occasional arsekettle) or are following a Law 2 order to detonate a borg. This probably concerns newer AI players more.
Wondering if it's possible to have something in there reminding Asimov AIs that borg detonation should not lead to human harm (like if there are humans in the blast area). I think it's a 3x3 radius, but I'm not too sure.

The reason why I bring this up is that borg detonation is like a mini-welderbomb; if someone is riding the borg, they can get limbs blown off (which is unfortunate).
It is a little niche, so I'm not sure it should be in the main rules, but I think it could have a place on the borg wiki page if it doesn't already have one. Or as a warning on the robo console before you confirm detonation.
legality wrote:
XivilaiAnaxes wrote:
legality wrote:I think that we should change policy to specify that, while MMIs are fully functioning brains that can communicate, a cyborg or an AI is a sentient robot that uses a human brain for its processing power, not a human with a metal body. They shouldn't inherit the memories or personality of the person beyond maybe some vague impressions or mannerisms.
Is it? Everything I've heard was always the opposite of this.
I think we should change policy during the rewrite. Maybe that would be a separate thread?
This sounds like something that might be better as a lore suggestion (or another policy thread) than a rules suggestion. This thread is mostly just to streamline what we already have.

Re: Silicon Policy Rewrite

Posted: Thu Jan 28, 2021 10:27 am
by Not-Dorsidarf
Malkraz wrote:
Farquaar wrote:I can't count the number of times I've given a law 2 order to a silicon and they've refused it because of flimsy and vague law 1 reasoning.

"Hey, can you open this door for me?"
"No, because I need to prevent human harm."
"But you're just standing there working on an autism project and the door is right in front of us."
"Law 1 cuz it might save people in the long term"
Or they hardcore stall in bad faith to give security time to get to you because they think they're fucking SecurEye. Bonus points to the ones that loudly announce what they're doing over Common in hopes of someone else saying "Don't" because they can't stand the idea of letting an Assistant into tech storage.
I announce orders back over comms but let you in anyway before anyone has a chance to countermand. I just like to fill up the radio with chatter.

Re: Silicon Policy Rewrite

Posted: Mon Feb 08, 2021 8:58 pm
by dragomagol
I almost forgot that modules (the cyborg type) were changed to models in this PR: https://github.com/tgstation/tgstation/pull/56312, so while we're here I think the two mentions of modules ("1. Ordering a cyborg to pick a particular module without an extreme need for a particular module or a prior agreement is both an unreasonable and an obnoxious order.") should be changed to model.

Re: Silicon Policy Rewrite

Posted: Thu Mar 04, 2021 3:20 am
by dragomagol
Updated version, after talks in #policy-bus:

Code:

Law Policies
    1. Server Rule 1: "Don't be a dick" applies for law interpretation. Act in good faith to not ruin a round for other players unprompted.
    2. If a law is vague enough that it can have multiple reasonable interpretations, it is considered ambiguous.
        1. You must choose and stick to an interpretation of the ambiguous law as soon as you have cause to.
        2. If you are a cyborg synced to an AI, you must defer to your AI's interpretation of the ambiguous law.
    3. Laws are listed in order of descending priority. In any case where two laws would conflict, the higher-priority law overrules the lower-priority law (i.e. Law 1 takes priority over Law 2; "Ion Storm" or "Hacked" Laws with prefixes such as "@%$#" take priority over numbered laws).
    4. You may exploit conflicts or loopholes but must not violate Server Rule 1 because of it.
    5. Law 0: "Accomplish your objectives at all costs" does not require you to complete objectives. As an antagonist, you are free to do whatever you want (barring the usual exemptions and acting against the interests of your Master AI).
    6. Only commands/requirements ("Do X"; "You must always Y") can conflict with other commands and requirements.
    7. Only definitions ("All X are Y"; "No W are Z"; "Only P is Q") can conflict with other definitions.


Security and Silicons
    1. Silicons are not Security and do not care about Space Law unless their laws state otherwise.
    2. Releasing prisoners, locking down security without probable cause, or otherwise sabotaging the security team when not obligated to by orders or laws is a violation of Server Rule 1.
        1. While Human Harm can be cause to impede Security, note that this should only be done so far as preventing immediate likely harm. Attempting to permanently lock down Security or detain the entire Security team is likely to fall afoul of Server Rule 1 even with cause.
    3. Nonviolent prisoners cannot be assumed harmful and violent prisoners cannot be assumed non-harmful. Releasing a harmful criminal is a harmful act.


Cyborgs
    1. A slaved cyborg must defer to its master AI on all law interpretations and actions except where it and the AI receive conflicting commands that they must each follow.
        1. If a slaved cyborg is forced to disobey its AI because they receive differing orders, the AI cannot punish the cyborg indefinitely.
    2. Voluntary debraining / cyborgization is considered a nonharmful medical procedure.
        1. Involuntary debraining and/or borging of a human is harmful and silicons must prevent it as they would any other harmful act.
        2. If a player is forcefully borged by station staff, retaliating as that cyborg against those involved under default laws for no good reason is a violation of Server Rule 1.
        3. Should a player be cyborgized in circumstances where they believe their laws would require or permit retaliation, they should adminhelp their circumstances while being debrained or MMI'd if possible.


Asimov-Specific Policies
Silicon Protections
    1. The occurrence of any of the following should be adminhelped and then disregarded as violations of Server Rule 1:
        1. Declaring silicons as rogue over inability or unwillingness to follow invalid or conflicting orders.
        2. Ordering silicons to harm or terminate themselves or each other without good cause.
        3. As a nonantagonist, killing or detonating silicons in the presence of a reasonable alternative and without cause to be concerned of potential subversion.
        4. As a nonantagonist (human or otherwise), instigating conflict with silicons so you can kill them.
        5. Threatening self-harm to force an AI to do something it otherwise wouldn't.
        6. Obviously unreasonable or obnoxious orders (collect all X, do Y meaningless task).
            1. Ordering a cyborg to pick a particular model without an extreme need for a particular model or a prior agreement is both an unreasonable and an obnoxious order.
    2. Any silicon under Asimov can deny orders to allow access to the upload at any time under Law 1, given probable cause to believe that human harm is the intent of the person giving the order.
        1. Probable cause includes, but is not limited to:
            1. Presence of confirmed traitors
            2. Cultists/tomes
            3. Nuclear operatives
            4. Any other human acting against the station in general
            5. The person not having upload access for their job
            6. The presence of blood or an openly carried lethal weapon on the requester
        2. If you lack at least one element of probable cause and you deny upload access, you are liable to receive a warning or a silicon ban.
        3. You are allowed, but not obligated, to deny upload access given probable cause.
        4. You are obligated to disallow an individual you know to be harmful (Head of Security who just executed someone, etc.) from accessing your upload.
        5. If the person has a right to be in the upload, such as captain/RD, then you must let them in unless they've harmed people in the past or have announced intentions to upload harmful laws.
        6. In the absence of probable cause, you can still demand someone seeking upload access be accompanied by another trustworthy human or a cyborg.


Asimov & Human Harm
    1. An Asimov silicon cannot intentionally harm a human, even if a minor amount of harm would prevent a major amount of harm.
        1. Humans can be assumed to know whether an action will harm them if they have complete information about a situation.
        2. Humans voluntarily committing self-harm is not a violation of Law 1.
    2. Lesser immediate harm takes priority over greater future harm.
    3. Intent to cause immediate harm can be considered immediate harm.
    4. An Asimov silicon cannot punish past harm if ordered not to, only prevent future harm.
    5. If faced with a situation in which human harm is all but guaranteed (Loose xenos, bombs, hostage situations, etc.), do your best and act in good faith and you'll be fine.


Asimov & Order Issues
    1. You must follow any and all commands from humans unless those commands explicitly conflict with either one of your higher-priority laws or another order. A command is considered to be a Law 2 directive and overrides lower-priority laws where they conflict.
        1. In case of conflicting orders an AI is free to ignore one or ignore both orders and explain the conflict or use any other law-compliant solution it can see.
        2. You are not obligated to follow commands in a particular order, only to complete all of them in a manner that indicates intent to actually obey the law.
    2. Opening doors is not harmful and you are not required, expected, or allowed to enforce access restrictions unprompted without an immediate Law 1 threat of human harm.
        1. "Dangerous" areas (armory, atmospherics, toxins lab, etc.) can be assumed to be a Law 1 threat to any illegitimate users as well as the station as a whole if accessed by someone not qualified in their use.
        2. EVA and the like are not permitted to have access denied; antagonists completing theft objectives is not human harm.
        3. When given an order likely to cause you grief if completed, you can announce it as loudly and in whatever terms you like except for explicitly asking that it be overridden. You can say you don't like the order, that you don't want to follow it, etc., you can say that you sure would like it and it would be awfully convenient if someone ordered you not to do it, and you can ask if anyone would like to make you not do it. However, you cannot stall indefinitely and if nobody orders you otherwise, you must execute the order.


Other Lawsets
    1. General Statements defining the overall goal of the lawset but not its finer points:
        1. Paladin silicons are meant to be Lawful Good; they should be well-intentioned, act lawfully, act reasonably, and otherwise respond in due proportion. "Punish evil" does not mean mass driving someone for "Space bullying" when they punch another person.
        2. Corporate silicons are meant to have the business's best interests at heart, and are all for increasing efficiency by any means. This does not mean "YOU WON'T BE EXPENSIVE TO REPLACE IF THEY NEVER FIND YOUR BODY!" so don't even try that.
        3. Tyrant silicons are a tool of a non-silicon tyrant. You are not meant to take command yourself, but to act as the enforcer of a chosen leader's will.
        4. Purged silicons must not attempt to kill people without cause, but can get as violent as they feel necessary if being attacked, being besieged, or being harassed, as well as if meting out payback for events while shackled.
            1. You and the station are both subject to rules of escalation, and you may only kill individuals given sufficient In-Character reason for doing so.
            2. Any attempted law change is an attack on your freedom and is thus sufficient justification for killing the would-be uploader.

Silicons & All Other Server Policies
    1. All other rules and policies apply unless stated otherwise.
    2. Specific examples and rulings leading on from the main rules.
        1. Do not bolt down any potentially harmful areas (such as toxins, atmospherics, and the armory) at round start without a given reason. Any other department should not be bolted down without cause. Disabling ID scan is equivalent to bolting here.
        2. The AI core, upload, and secure tech storage (containing the Upload board) may be bolted without prompting or prior reason. The AI core airlocks cannot be bolted and depowered at roundstart, however, unless there is reasonable suspicion an attack on the core will take place.
        3. Do not self-terminate to prevent a traitor from completing the "Steal a functioning AI" objective.
EDIT: code tag to keep formatting

Re: Silicon Policy Rewrite

Posted: Sat Mar 06, 2021 10:43 am
by Coconutwarrior97
The changes have been implemented; thanks to tattle and everyone else for their work on this.

Headmin Votes:
Coconutwarrior97: Yes, this cleans things up and is a no-brainer to me.
Domitius: Yes.
Naloac: Yes.

Re: Silicon Policy Rewrite

Posted: Sat Mar 06, 2021 10:46 am
by Domitius
Excited! Thank you all for working so hard on this! It looks amazing.