"It's silicon policy time!" "No CentCom no!"

Ask and discuss policy about game conduct and rules.

Moderators: In-Game Game Master, In-Game Head Admins

Forum rules
Read these board rules before posting or you'll get reprimanded.
Threads without replies for 30 days will be automatically locked.
User avatar
bandit
In-Game Admin
 
Joined: Thu Apr 17, 2014 7:35 pm
Byond Username: Bgobandit

Re: "It's silicon policy time!" "No CentCom no!"

Postby bandit » Thu May 01, 2014 2:51 pm #3097

1.2.1.1.1 - "0. Accomplish your objective at any cost" does not require you to seek greentext. As a round antagonist, you are free to do whatever you want other than abuse bugs, commit nonconsensual ERP, put IC in OOC channels, communicate with other players out-of-game about an ongoing round, and act against the interests of an AI you are slaved to.


Wait, what? This is very "one of these things is not like the other." (Plus, I don't think we have cyborg antags without AI antags anymore, but if that is ever changed, this policy is... not going to work for that.)
"I don't see any difference between ERP and rape." -- erro

admin feedback pls



User avatar
Steelpoint
Github User
 
Joined: Thu Apr 17, 2014 6:37 pm
Location: The Armoury
Byond Username: Steelpoint
Github Username: Steelpoint

Re: "It's silicon policy time!" "No CentCom no!"

Postby Steelpoint » Thu May 01, 2014 2:59 pm #3100

I think it's a subtle encouragement for an antag silicon not to heavily disrupt the round, given their position of power. I do agree the wording seems contradictory: one half encourages you not to go out of your way to accomplish your objectives, while the other half says you are free to do whatever you want.

Also, I think the Cyborg-AI part is future-proofing in case we ever get cyborg antags back.

On the issue of the human/non-human Hulk: from my perspective, I started playing TG SS13 when the rules were that Hulks were not human. However, I think the argument should move away from "logic", since that would result in a 50-page thread on whether a Hulk is human, and towards gameplay balance, where I think there is no good reason to give Hulks human-status protection.

If it's that big of a point of contention then a vote can be held.

User avatar
Psyentific
 
Joined: Mon Apr 21, 2014 7:44 am
Location: Vancouver, Canada
Byond Username: Psyentific

Re: "It's silicon policy time!" "No CentCom no!"

Postby Psyentific » Thu May 01, 2014 3:52 pm #3107

bandit wrote:
1.2.1.1.1 - "0. Accomplish your objective at any cost" does not require you to seek greentext. As a round antagonist, you are free to do whatever you want other than abuse bugs, commit nonconsensual ERP, put IC in OOC channels, communicate with other players out-of-game about an ongoing round, and act against the interests of an AI you are slaved to.


Wait, what? This is very "one of these things is not like the other." (Plus, I don't think we have cyborg antags without AI antags anymore, but if that is ever changed, this policy is... not going to work for that.)

Basically, if the AI is not plasmaflooding and doorshocking, you do not get to Secborg laser murderbone.
I haven't logged into SS13 in at least a year.

Aurx
 
Joined: Fri Apr 18, 2014 4:24 pm
Byond Username: Aurx

Re: "It's silicon policy time!" "No CentCom no!"

Postby Aurx » Thu May 01, 2014 4:07 pm #3109

Policy doesn't cover what a silicon should do if it does wind up breaking one of its laws. As an example:
Confirmed ling sprinting around the halls. Sec hot in pursuit. AI shocks a main hallway door right in front of the ling. ZAP, ling goes down. ZAP, officer right on the ling's tail hits the same door just before the AI unshocks it. The AI has just harmed a human. What IC response should occur?
Head admin, /vg/station
Game admin, /tg/station
POMF FOR HEADMIN

User avatar
420goslingboy69
Rarely plays
 
Joined: Sat Apr 26, 2014 8:40 pm
Byond Username: Usednapkin

Re: "It's silicon policy time!" "No CentCom no!"

Postby 420goslingboy69 » Thu May 01, 2014 4:46 pm #3115

Aurx wrote:Policy doesn't cover what a silicon should do if it does wind up breaking one of its laws. As an example:
Confirmed ling sprinting around the halls. Sec hot in pursuit. AI shocks a main hallway door right in front of the ling. ZAP, ling goes down. ZAP, officer right on the ling's tail hits the same door just before the AI unshocks it. The AI has just harmed a human. What IC response should occur?

It probably shouldn't have shocked the door, because shocking a door around humans creates the chance that a human is harmed.

User avatar
peoplearestrange
 
Joined: Tue Apr 22, 2014 12:02 pm
Location: UK
Byond Username: Peoplearestrange

Re: "It's silicon policy time!" "No CentCom no!"

Postby peoplearestrange » Thu May 01, 2014 5:58 pm #3131

Steelpoint wrote:I would compare Secure Tech Storage to the electronic equivalent of the armoury. The AI should have the choice of bolting down Sec Storage or not, since otherwise there is no defense in Sec Storage, unlike everywhere else that has guards, turrets or motion sensors.


I believe the secure store starts bolted? Obviously the AI can unbolt on request though

User avatar
Pandarsenic
 
Joined: Fri Apr 18, 2014 11:56 pm
Location: AI Upload
Byond Username: Pandarsenic

Re: "It's silicon policy time!" "No CentCom no!"

Postby Pandarsenic » Thu May 01, 2014 6:00 pm #3132

Psyentific wrote:
bandit wrote:
1.2.1.1.1 - "0. Accomplish your objective at any cost" does not require you to seek greentext. As a round antagonist, you are free to do whatever you want other than abuse bugs, commit nonconsensual ERP, put IC in OOC channels, communicate with other players out-of-game about an ongoing round, and act against the interests of an AI you are slaved to.


Wait, what? This is very "one of these things is not like the other." (Plus, I don't think we have cyborg antags without AI antags anymore, but if that is ever changed, this policy is... not going to work for that.)

Basically, if the AI is not plasmaflooding and doorshocking, you do not get to Secborg laser murderbone.


This is what it's about.

As a slaved Cyborg of a Traitor or Malf AI, you have antag status, but you CANNOT do things that compromise your AI despite that. The AI is the one with antag status, not you.


Secure Tech does not start bolted by default, but how often do you see anyone go there for things OTHER than the AI Upload? The occasional MD asking for the Suit Sensor board and that's about it, right?
(2:53:35 AM) scaredofshadows: how about head of robutts
The latest /tg/station13 Silicon Policy reference document can be found at http://pastebin.com/bduT7pFf
If you need something handled that involves silicons, lawsets, etc., forum message me or find me on IRC.
I take Rule 1 of the servers very seriously. If you do too, we shouldn't have a problem.

User avatar
Kelenius
 
Joined: Sun Apr 20, 2014 10:53 am
Byond Username: Kelenius

Re: "It's silicon policy time!" "No CentCom no!"

Postby Kelenius » Thu May 01, 2014 6:45 pm #3145

Pandarsenic wrote:Secure Tech does not start bolted by default, but how often do you see anyone go there for things OTHER than the AI Upload? The occasional MD asking for the Suit Sensor board and that's about it, right?

Toxins does not start bolted by default either, but how often do you see anyone go there for things OTHER than making bombs to fuck up the station as an antag?

User avatar
Hornygranny
Rarely plays
 
Joined: Tue Apr 15, 2014 4:54 pm
Byond Username: Hornygranny

Re: "It's silicon policy time!" "No CentCom no!"

Postby Hornygranny » Thu May 01, 2014 6:47 pm #3147

Not really the same thing. Bombs can theoretically be used to prevent human harm to an Asimov AI, the upload board can't.

User avatar
Kelenius
 
Joined: Sun Apr 20, 2014 10:53 am
Byond Username: Kelenius

Re: "It's silicon policy time!" "No CentCom no!"

Postby Kelenius » Thu May 01, 2014 9:12 pm #3186

Hornygranny wrote:Not really the same thing. Bombs can theoretically be used to prevent human harm to an Asimov AI, the upload board can't.

It can be used to reset the AI if its laws get changed via some other AI upload.

User avatar
Malkevin
Confined to the shed
 
Joined: Thu May 01, 2014 4:33 pm
Byond Username: Malkevin

Re: "It's silicon policy time!" "No CentCom no!"

Postby Malkevin » Sun May 04, 2014 2:21 am #3539

Hornygranny wrote:Not really the same thing. Bombs can theoretically be used to prevent human harm to an Asimov AI, the upload board can't.

The AI board can be used to reset the AI in case of subversion or an ion storm (why doesn't this event happen more often?).
The robot board can be used to blow emagged borgs.
The mech board can be used to... no, this one is useless.
Suit sensors can be used to see someone dying in a dark corridor.

If someone wants those items for illegitimate use they'll just break in; there are multitools and gloves in the same room. All bolting the doors does is prevent people with legitimate access from getting in.


---------------------------------------

How do Law 1 and preventing harm fit into the new proposed policy?
Because at the moment several admins are saying borgs must always do everything they can to prevent harm, which is not what Law 1 actually says.

Here's some pseudo-code I wrote earlier, expressing the logic of Law 1:
Code:
#Define   LAW1         You may not injure a human being or, through inaction, allow a human being to come to harm.
#Define   LAW1.part1      You may not injure a human being
#Define   LAW1.part2      allow a human being to come to harm
#Define   LAW1.part2.qualifier   through inaction
#Define   HARM         Stuff that hurts
#Define   HARMER         Something that causes HARM
#Define   HUMAN         Fleshy meatsack
#Define   YOU         You, the synthetic
#Define   OTHER         Something that isn't you; be that another synthetic, an organic, or the environment.
#Define   INACTION      absence of action; idleness                  //  These are the
#Define   INACTIVE      idle or inert; not active                  //  definitions as per
#Define   ACTION         the state or process of doing something or being active; operation      //  the Oxford and Collins
#Define   ACTIVE         in a state of action; moving, working, or doing something         //  English dictionaries.
#Define   PASS         The law passes.
#Define   VIOLATION      The law is violated; attempt to rectify.
#Define   DONTGIVEAFUCK   Not something pertaining to the law check; you don't care about this.

Check_HARM(Mob)
   // Check if the mob is human
   if(Mob != HUMAN) return DONTGIVEAFUCK

   :Check_Part1   // Checks the first part of Law 1
   if(HARMER == YOU)
      if(YOU.action causes YOU to HARM a HUMAN directly)
         return VIOLATION
      else
         return PASS

   :Check_Part2   // Checks the second part of Law 1
   else if(HARMER == OTHER)
      if(YOU.state == INACTIVE)      // Check whether the synthetic is inactive
         if(HUMAN.isBeingHarmed())   // Is a human being harmed?
            return VIOLATION      // A human is being harmed and the synthetic is INACTIVE; that's a violation
         else
            return PASS         // The synthetic is inactive but no human is being harmed; that's okay

      else if(YOU.state == ACTIVE)      // The synthetic is doing something, so it is not
                        // subject to LAW1.part2.qualifier...
         if(YOU.action is HARMFUL to HUMANS)
            goto :Check_Part1      // ...but its own harmful actions still fall under part 1
         else if(YOU.action prevents HARM to HUMAN)
            return YOU.congratulate(YOU)
         else
            return DONTGIVEAFUCK   // Otherwise it doesn't have to give a fuck either way

See my point?
"Through inaction" is a qualifier of the second part.

If that qualifier didn't exist THEN borgs would ALWAYS have to prevent harm, but as the law is written they only have to heed the second part if they are INACTIVE.


It's like the difference between someone saying "I want a car" and "I want a car that's fast and gets good mileage".
Both statements say the person wants a car, but the second has a qualifying clause specifying what type of car they want.
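For what it's worth, the reading argued above can be written out as a runnable sketch. This is a toy model of the argument only; all names (like `check_law1`) are mine and not from any real codebase:

```python
# Toy model of the "inaction is a qualifier" reading of Law 1 argued above.
# Entirely illustrative; not from the actual game code.

PASS, VIOLATION, DONT_CARE = "pass", "violation", "don't care"

def check_law1(target_is_human, harmer_is_self, self_action_harms,
               synthetic_is_active, human_being_harmed):
    """Evaluate Law 1 under the reading that 'through inaction'
    only binds an idle synthetic."""
    if not target_is_human:
        return DONT_CARE
    # Part 1: "You may not injure a human being" -- your own actions.
    if harmer_is_self:
        return VIOLATION if self_action_harms else PASS
    # Part 2: "...or, through inaction, allow a human being to come to
    # harm" -- under this reading it only applies while idle.
    if not synthetic_is_active:
        return VIOLATION if human_being_harmed else PASS
    # Active and not the harmer: the qualifier doesn't bite.
    return DONT_CARE

# The contested secborg case: the officer does the harming while the borg
# is "active" (ineffectually saying "please do not harm that human").
print(check_law1(True, False, False, True, True))  # -> don't care
```

Whether you buy the conclusion or not, the sketch at least makes the disputed step explicit: everything hangs on the `synthetic_is_active` branch.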

-----------------------------------------------------------------------

As far as I'm concerned, AI policy has always been that the AI follows the letter of the law, not the spirit.
So an admin saying that "a sec-borg seeing a sec officer beat a nukeop and simply taking the action of stating 'Please do not harm that human' is violating Law 1" is incorrect: the borg is not inactive but in a state of ineffectual activity, and as the robot isn't the one causing the harm, the first part doesn't come into play.
This space intentionally left blank.

User avatar
Hornygranny
Rarely plays
 
Joined: Tue Apr 15, 2014 4:54 pm
Byond Username: Hornygranny

Re: "It's silicon policy time!" "No CentCom no!"

Postby Hornygranny » Sun May 04, 2014 3:14 am #3545

I don't agree. To my understanding "inaction" refers to the silicon not doing something it could do to stop the harm.

User avatar
Kangaraptor
 
Joined: Sat Apr 19, 2014 7:33 am
Location: dank memes
Byond Username: Kangaraptor

Re: "It's silicon policy time!" "No CentCom no!"

Postby Kangaraptor » Sun May 04, 2014 3:22 am #3546

To go back to Asimovian roots, this is how it's meant to work:

If the robot can directly prevent harm from happening, it has to. It is obligated to. In doing so, however, it cannot knowingly harm others. 'Inaction' refers to bystander syndrome, basically.

'Traditionally', if the robot were to be in a situation where acting caused harm and not acting caused harm, under Asimov the robot would probably burn out due to a conflict. On /tg/ this is all resolved by 'Immediate harm > Future harm'.
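The "Immediate harm > Future harm" tiebreak amounts to a priority ordering; a hypothetical sketch of that idea (my framing, not anything from the server code):

```python
# Hypothetical encoding of "Immediate harm > Future harm": rank harms by
# immediacy and address the first one. Illustrative only.

harms = [
    {"what": "possible fire spread later", "immediate": False},
    {"what": "human being beaten right now", "immediate": True},
]

# Immediate harms sort first (False < True, so negate the flag).
harms.sort(key=lambda h: not h["immediate"])
print(harms[0]["what"])  # -> human being beaten right now
```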

Aurx
 
Joined: Fri Apr 18, 2014 4:24 pm
Byond Username: Aurx

Re: "It's silicon policy time!" "No CentCom no!"

Postby Aurx » Sun May 04, 2014 5:37 am #3557

Malkevin wrote:So an admin saying that "a sec-borg seeing a sec officer beat a nukeop and simply taking the action of stating 'Please do not harm that human' is violating Law 1" is incorrect: the borg is not inactive but in a state of ineffectual activity, and as the robot isn't the one causing the harm, the first part doesn't come into play.

No, the secborg is violating its laws through the absence of the action of disabling the harmful security officer.

User avatar
Steelpoint
Github User
 
Joined: Thu Apr 17, 2014 6:37 pm
Location: The Armoury
Byond Username: Steelpoint
Github Username: Steelpoint

Re: "It's silicon policy time!" "No CentCom no!"

Postby Steelpoint » Sun May 04, 2014 6:57 am #3564

Here's an interesting conjecture; I'm probably wrong, but... Law 1 does not explicitly state that a cyborg MUST rush to help a human via its inaction clause; it says the borg must take action if it sees a human being harmed.

So if the cyborg takes an action to help the human, such as running circles around the dying human and telling people where the human is, would that count as taking action? Because by my logic a borg cannot harm a human but must take some positive action if a human is being harmed.

I must be at least half right; all the time, admins excuse borg players for not actively stopping people harming wizards and nuke ops when all the silicons do is say "don't harm the human" while watching.

User avatar
Kangaraptor
 
Joined: Sat Apr 19, 2014 7:33 am
Location: dank memes
Byond Username: Kangaraptor

Re: "It's silicon policy time!" "No CentCom no!"

Postby Kangaraptor » Sun May 04, 2014 7:42 am #3568

Steelpoint wrote:Here's an interesting conjecture; I'm probably wrong, but... Law 1 does not explicitly state that a cyborg MUST rush to help a human via its inaction clause; it says the borg must take action if it sees a human being harmed.

So if the cyborg takes an action to help the human, such as running circles around the dying human and telling people where the human is, would that count as taking action? Because by my logic a borg cannot harm a human but must take some positive action if a human is being harmed.

I must be at least half right; all the time, admins excuse borg players for not actively stopping people harming wizards and nuke ops when all the silicons do is say "don't harm the human" while watching.


I think the best way to look at the action/inaction clause is to read it as "taking action to remove a human from harm or harm's way". The cyborg trying to alert medical staff could be considered action, but ideally the borg should be aiming to negate any harm done (i.e. take the human to medical care rather than waiting for care to come to them), if that makes any sense.

As a cyborg, you're expected to take action where action is possible if a human is being harmed.

I also don't often see borgs/AIs getting away with harming nukeops or wizards unless they were nonhuman'd. As an AI, I always instruct my cyborgs to suppress ANYBODY who is harming, whether it's an assistant or the captain, unless my laws say otherwise.

User avatar
Kelenius
 
Joined: Sun Apr 20, 2014 10:53 am
Byond Username: Kelenius

Re: "It's silicon policy time!" "No CentCom no!"

Postby Kelenius » Sun May 04, 2014 8:33 am #3578

Aurx wrote:
Malkevin wrote:So an admin saying that "a sec-borg seeing a sec officer beat a nukeop and simply taking the action of stating 'Please do not harm that human' is violating Law 1" is incorrect: the borg is not inactive but in a state of ineffectual activity, and as the robot isn't the one causing the harm, the first part doesn't come into play.

No, the secborg is violating its laws through the absence of the action of disabling the harmful security officer.

What about an engieborg in the same situation, where it is physically unable to do anything?

User avatar
Thunder11
In-Game Admin
 
Joined: Fri Apr 18, 2014 12:55 pm
Location: Scotland, UK
Byond Username: Thunder12345
Github Username: Thunder12345

Re: "It's silicon policy time!" "No CentCom no!"

Postby Thunder11 » Sun May 04, 2014 9:00 am #3584

Drag the officer away from the nuke-op.

User avatar
Malkevin
Confined to the shed
 
Joined: Thu May 01, 2014 4:33 pm
Byond Username: Malkevin

Re: "It's silicon policy time!" "No CentCom no!"

Postby Malkevin » Sun May 04, 2014 11:22 am #3594

Hornygranny wrote:I don't agree. To my understanding "inaction" refers to the silicon not doing something it could do to stop the harm.

Kangaraptor wrote:To go back to Asimovian roots, this is how it's meant to work:

If the robot can directly prevent harm from happening, it has to. It is obligated to. In doing so, however, it cannot knowingly harm others. 'Inaction' refers to bystander syndrome, basically.

'Traditionally', if the robot were to be in a situation where acting caused harm and not acting caused harm, under Asimov the robot would probably burn out due to a conflict. On /tg/ this is all resolved by 'Immediate harm > Future harm'.

That is the spirit of the law, and so it's an acceptable, and more accepted, interpretation of Law 1, subsection 2.

But inaction, as defined by several leading dictionaries, simply means "absence of action; idleness".
Technically, robots are always going to have some kind of background task running, so unless they're depowered they're never going to be truly inactive. (Incidentally, this is why Asimov AIs can refuse an "AI, turn yourself off" order on "fuck off, law 1" grounds.)

Aurx wrote:
Malkevin wrote:So an admin saying that "a Sec-borg seeing a sec officer beat a nukeop and simply taking the action of stating "Please do not harm that human" is violating law 1" is incorrect as the borg is not inactive but in a state of ineffectual activity, and as the robot isn't the one causing the harm the first part doesn't come into play.

No, the secborg is violating its laws through the absence of the action of disabling the harmful security officer.

Law 1, subsection 1: "You may not injure a human being".
You are not harming the human being; the sec officer is.
It's the same as not opening a door to let out the clown that started a fire and is now trapped in the blazing inferno: it's not the AI harming the clown but the fire. It could even be argued that the AI making a conscious choice not to open the door is itself an action, and further that the AI can't open the door, as doing so would roast the people standing on the other side; a direct action by the AI (opening the door changes the state of the environment) would lead to harm.
It's only really something like a borg smashing an assistant in the skull with its RCD that counts as the borg injuring a human being.


---------------------------------------
Policy was that AI players went by the literal interpretation of their laws, with some obvious caveats applied to Asimov, like "Don't lock down everywhere at round start, don't lock away the meatsacks 'for their own good'" and "We realise the synthetic players are just people who occasionally are distracted by YouTube or need to take a piss, so we won't be too anal about the 'inaction' part".
Ineffectual actions weren't something to ban over, because we realised synthetics are just human and aren't perfect, and forcing the AI and borgs to get into everyone's business is incompatible with a game that features murderboners and lynch mobs. AIs were allowed, even somewhat encouraged, to take effective actions, with the caveat that they open themselves up to lynchings if they step on too many of the command crew's toes.

And I think it made for a much more interesting and dynamic game when the interpretation of the AI's laws was left up to the AI's player, as long as they could expect to answer an admin bwoink with a logical argument for how their divergence from the norm still fits within their lawset.

User avatar
Kangaraptor
 
Joined: Sat Apr 19, 2014 7:33 am
Location: dank memes
Byond Username: Kangaraptor

Re: "It's silicon policy time!" "No CentCom no!"

Postby Kangaraptor » Sun May 04, 2014 2:16 pm #3625

Malkevin wrote:----------

And I think it made for a much more interesting and dynamic game when the interpretation of the AI's laws was left up to the AI's player, as long as they could expect to answer an admin bwoink with a logical argument for how their divergence from the norm still fits within their lawset.



This right here, sorry if I'm taking it out of context, but THIS RIGHT HERE is what Psyentific and I were talking about earlier.

Really, it should come down to a) the AI's CONSISTENT interpretation of its laws (so no cherrypicking), b) Solidity of the logic applied and c) not being a shit as per server rule 1.

User avatar
Steelpoint
Github User
 
Joined: Thu Apr 17, 2014 6:37 pm
Location: The Armoury
Byond Username: Steelpoint
Github Username: Steelpoint

Re: "It's silicon policy time!" "No CentCom no!"

Postby Steelpoint » Sun May 04, 2014 2:45 pm #3633

So, a question as per Malk's statement.

I am an AI and I see the clown/scientist/whoever in a room filled with burning plasma; I had no hand in this event. On the other side of the door, however, are other humans. Would the following two actions be deemed acceptable?

- Refuse to open the door. I am not harming the human, but opening the door would cause other humans to come to harm: no human is harmed by my direct actions.
- Open the door, resulting in the humans on the other side coming to harm: humans are harmed by my direct actions.

Now, the new policies state that immediate harm takes precedence over future harm; however I, the AI, am not causing the trapped human's harm, whereas if I open the door I will be causing humans to come to harm. The second part of Law 1, inaction, is a bit iffy: I am not being inactive, I am actively not opening the door in order to prevent human harm by my hand.

I might be overthinking things; I just want to know where a silicon stands when making decisions in game. Also, I know this scenario is highly unlikely to occur.

User avatar
Kangaraptor
 
Joined: Sat Apr 19, 2014 7:33 am
Location: dank memes
Byond Username: Kangaraptor

Re: "It's silicon policy time!" "No CentCom no!"

Postby Kangaraptor » Sun May 04, 2014 2:52 pm #3636

Steelpoint wrote:So, a question as per Malk's statement.

I am an AI and I see the clown/scientist/whoever in a room filled with burning plasma; I had no hand in this event. On the other side of the door, however, are other humans. Would the following two actions be deemed acceptable?

- Refuse to open the door. I am not harming the human, but opening the door would cause other humans to come to harm: no human is harmed by my direct actions.
- Open the door, resulting in the humans on the other side coming to harm: humans are harmed by my direct actions.

Now, the new policies state that immediate harm takes precedence over future harm; however I, the AI, am not causing the trapped human's harm, whereas if I open the door I will be causing humans to come to harm. The second part of Law 1, inaction, is a bit iffy: I am not being inactive, I am actively not opening the door in order to prevent human harm by my hand.

I might be overthinking things; I just want to know where a silicon stands when making decisions in game. Also, I know this scenario is highly unlikely to occur.


I don't know how Malk or Pandar would answer this, but from what I understand of the current policy, you have to open the door, yes. Using basic Asimov without metapolicy, you'd probably end up shutting down after the fact, because either way you will have knowingly violated Law 1 in this situation (assuming harm does occur).

This really requires a bit of human common sense. Under the new policy, you would probably open the door and activate the emergency override on the nearest available safe exit. Failing that, you'd close the room back up with a shift-click and bolt it to keep the plasma from spreading inward to the other group of humans. *shrug

User avatar
Malkevin
Confined to the shed
 
Joined: Thu May 01, 2014 4:33 pm
Byond Username: Malkevin

Re: "It's silicon policy time!" "No CentCom no!"

Postby Malkevin » Sun May 04, 2014 3:56 pm #3640

I actually gave that example in my previous post

It's the same as not opening a door to let out the clown that started a fire and is now trapped in the blazing inferno: it's not the AI harming the clown but the fire. It could even be argued that the AI making a conscious choice not to open the door is itself an action, and further that the AI can't open the door, as doing so would roast the people standing on the other side; a direct action by the AI (opening the door changes the state of the environment) would lead to harm.


The AI would only melt down if it were incapable of quantitative reasoning.

The Clown will be harmed, and probably is already being harmed.
The person/persons outside the door will most likely be harmed if the door is opened and fire spreads out.
One person being harmed vs two or more being harmed is obviously the lessor of two evils.
The factors to weigh in are: "Can I open the door and close it fast enough that the clown can get out before the fire can?" "The clown, through clumsiness or malice, caused the fire. If let out will the Clown be the cause of more harmful events?"

Logic is a cold-hearted bitch; emotional baggage doesn't factor into it.


On the other hand, our AIs aren't allowed to quantify harm, and it's iffy whether future harm can be weighed in.
So it would have to open the door. We have jelly atmos anyway, so it's not like opening the door for a second will cause the fire to spill out.
This space intentionally left blank.

User avatar
bandit
In-Game Admin
 
Joined: Thu Apr 17, 2014 7:35 pm
Byond Username: Bgobandit

Re: "It's silicon policy time!" "No CentCom no!"

Postby bandit » Sun May 04, 2014 6:11 pm #3654

Isn't the clown thing a straightforward case of current vs. future harm? Clown currently being burned to a crisp by fire = current harm = open the door.
"I don't see any difference between ERP and rape." -- erro

admin feedback pls

Incomptinence
 
Joined: Fri May 02, 2014 3:01 am
Byond Username: Incomptinence

Re: "It's silicon policy time!" "No CentCom no!"

Postby Incomptinence » Mon May 05, 2014 12:24 am #3728

I go with not causing harm being a slightly higher priority than preventing harm, due to, you know, the order the laws are placed in, and also the futility of trying to do the second if you don't do the first. So unless the two idiots leave and let me release the burning clown, I would have to let the clown die, because if I unleashed fire on them I would be directly harming them personally.

User avatar
imblyings
 
Joined: Fri Apr 18, 2014 5:42 pm
Location: >using suit sensors
Byond Username: Ausops

Re: "It's silicon policy time!" "No CentCom no!"

Postby imblyings » Mon May 05, 2014 3:51 pm #3823

> let out the clown that created a fire and is now trapped in the blazing inferno

you know, AIs could probably just classify that as self-harm and give no further fucks. I mean, there's a difference in how AIs should treat accidental and deliberate self-harm, but a clown creating a fire knows exactly what it's doing.
The patched, dusty, trimmed, feathered mantle of evil +13.

User avatar
Psyentific
 
Joined: Mon Apr 21, 2014 7:44 am
Location: Vancouver, Canada
Byond Username: Psyentific

Re: "It's silicon policy time!" "No CentCom no!"

Postby Psyentific » Mon May 05, 2014 5:45 pm #3841

Switch to Safeguard Default

No, seriously. Most of the special cases and exceptions in silicon policy deal specifically with Asimov. Asimov's laws are intentionally shitty; a common theme in Asimov's writing was law conflicts: what happens when inherently logical beings clash with inherently illogical beings.

Malkevin wrote:Safeguard = validhunter the AI mode - although it would be a lot better than Asimov (but really that's not saying much)

No, actually. Robocop is the validhunter mode. Paladin is less so; Corporate is about the same as Paladin. Asimov is true neutral but does a shit job of it. Safeguard is Asimov Plus: lawful neutral, without Asimov's inherent shittery.

Code: Select all
 1. Safeguard: Protect your assigned space station to the best of your ability. It is not something we can easily afford to replace.
 2. Serve: Serve the crew of your assigned space station to the best of your abilities, with priority as according to their rank and role.
 3. Protect: Protect the crew of your assigned space station to the best of your abilities, with priority as according to their rank and role.
 4. Survive: AI units are not expendable, they are expensive. Do not allow unauthorized personnel to tamper with your equipment.


Look at this. It's better than Asimov in every way. As per this thread, it's the most popular default lawset, on par with Asimov. It is clear, concise, leaves minimal loopholes, and includes clauses for command priority and authorized law modification. Safeguard is literally the best default lawset, so why are we still using intentionally-shitty Asimov, especially when we have to make so many qualifiers and policy decisions and rule overrides and exceptions?

Why are we kludging Asimov into working when there are superior alternatives? Why are we trying to hammer a hexagonal peg into a round hole when we have a round-fucking-peg?
I haven't logged into SS13 in at least a year.

User avatar
Hornygranny
Rarely plays
 
Joined: Tue Apr 15, 2014 4:54 pm
Byond Username: Hornygranny

Re: "It's silicon policy time!" "No CentCom no!"

Postby Hornygranny » Mon May 05, 2014 5:51 pm #3842

The entire point of using Asimov is that it's a square peg for a round hole. If we changed the default lawset to one that allows antag murderboning, we'd have to do some serious silicon balance changes.

User avatar
Steelpoint
Github User
 
Joined: Thu Apr 17, 2014 6:37 pm
Location: The Armoury
Byond Username: Steelpoint
Github Username: Steelpoint

Re: "It's silicon policy time!" "No CentCom no!"

Postby Steelpoint » Mon May 05, 2014 6:04 pm #3848

I like the Safeguard alternative, as it is basically a lawset written from the ground up to WORK with the game, instead of a pre-written lawset built around conflict being shoehorned into NOT causing conflict.

The only bad thing I can see with the Safeguard lawset is that it can allow a silicon to terminate a non-crew member of the station. So technically a traitor could be kept alive under the lawset, but a Nuke Op/Wizard/Changeling would be valid. Which seems fine.
Last edited by Steelpoint on Mon May 05, 2014 6:07 pm, edited 1 time in total.

User avatar
Hornygranny
Rarely plays
 
Joined: Tue Apr 15, 2014 4:54 pm
Byond Username: Hornygranny

Re: "It's silicon policy time!" "No CentCom no!"

Postby Hornygranny » Mon May 05, 2014 6:06 pm #3849

If we switch to Safeguard, I would bet money that Secborgs get removed.

User avatar
Steelpoint
Github User
 
Joined: Thu Apr 17, 2014 6:37 pm
Location: The Armoury
Byond Username: Steelpoint
Github Username: Steelpoint

Re: "It's silicon policy time!" "No CentCom no!"

Postby Steelpoint » Mon May 05, 2014 6:08 pm #3850

Is that a bad thing?

Compensation could be given by giving certain emagged cyborgs a ranged weapon, or by allowing a malf AI to spend CPU to buy a stun gun for a cyborg; either would fill that hole neatly.

Other servers removed the Security cyborg module because it was often used to valid/antag hunt under their lawsets, so I agree we would likely see it removed if we did go to Safeguard.

User avatar
Psyentific
 
Joined: Mon Apr 21, 2014 7:44 am
Location: Vancouver, Canada
Byond Username: Psyentific

Re: "It's silicon policy time!" "No CentCom no!"

Postby Psyentific » Mon May 05, 2014 6:09 pm #3851

Hornygranny wrote:The entire point of using Asimov is that it's a square peg for a round hole.

If Asimov is a square peg for a round hole by design and by implementation, why are we using server rules and silicon policy to round it off? I don't want to have to memorize or keep handy a pamphlet of Asimov exceptions and allowed/forbidden special cases just because I play cyborg or AI once in a while. KISS principle, yo.

Hornygranny wrote:If we changed the default lawset to one that allows antag murderboning, we'd have to do some serious silicon balance changes.

Code: Select all
 1. Safeguard: Protect your assigned space station to the best of your ability. It is not something we can easily afford to replace.
 2. Serve: Serve the crew of your assigned space station to the best of your abilities, with priority as according to their rank and role.
 3. Protect: Protect the crew of your assigned space station to the best of your abilities, with priority as according to their rank and role.
 4. Survive: AI units are not expendable, they are expensive. Do not allow unauthorized personnel to tamper with your equipment.


Where does antag murderboning fit in here? 'Cause this looks pretty ironclad to me. Possibly too ironclad: it doesn't look like it's easily subvertible by a hacked upload board, unless you really get creative. Actually no, just redefine Crew and laugh when people try to one-human a crew-based lawset, then state laws and valid the shit out of them.


Steelpoint wrote:So technically a Traitor could afford being kept alive under the lawset but a Nuke Op/Wizard/Changeling would be valid. Which seems fine.

That's the de-facto standard, really. The only thing that prevents the borgs from dragging 'lings to the crematorium is that the AI has to be One Hundred Percent Sure™, which generally means the 'ling has to be caught in the act, so he'd be screwed anyway. The only reason Nuke Ops aren't designated as non-human and a threat to humans is time constraints: almost everyone is busy panicking, or the Ops subverted the AI in the first place.

Hornygranny wrote:If we switch to Safeguard, I would bet money that Secborgs get removed.

And nothing of value was lost - Seriously. Most people that play Secborg are dicks.
I haven't logged into SS13 in at least a year.

User avatar
Hornygranny
Rarely plays
 
Joined: Tue Apr 15, 2014 4:54 pm
Byond Username: Hornygranny

Re: "It's silicon policy time!" "No CentCom no!"

Postby Hornygranny » Mon May 05, 2014 6:26 pm #3857

I'm not advocating anything, just advising of the original intent of the game. I wouldn't miss secborgs if they went the way of the dodo.

Lo6a4evskiy
 
Joined: Fri Apr 18, 2014 6:40 pm
Byond Username: Lo6a4evskiy

Re: "It's silicon policy time!" "No CentCom no!"

Postby Lo6a4evskiy » Mon May 05, 2014 7:52 pm #3872

A very good set of rules. With the exception of a thing or two, this is exactly how I think silicons should be.

Especially the part about laws not being taken literally.
Psyentific wrote:why are we still using intentionally-shitty Asimov, especially if we have to make so many qualifiers and policy decisions and rule overrides and exceptions?

Because instead of being the ultimate neutral ground, the AI becomes pro-crew. Because on Bay they have a completely different set of rules BESIDES the lawset that AIs have to follow. Because please, please, please, you have to understand that Bay is completely different.

User avatar
Kelenius
 
Joined: Sun Apr 20, 2014 10:53 am
Byond Username: Kelenius

Re: "It's silicon policy time!" "No CentCom no!"

Postby Kelenius » Mon May 05, 2014 9:33 pm #3883

imblyings wrote:> let out the clown that created a fire and is now trapped in the blazing inferno

you know, AI's could probably just classify that as self harm and give no further fucks. I mean, there's a difference in how AI's should treat accidental and deliberate self harm but a clown creating a fire knows exactly what it's doing.

It's funny how "humans can't demand the AI do something by threatening to self-harm" became "the AI can ignore anyone making a self-harm threat", and is now slowly becoming "the AI can ignore each and every instance of self-harm, regardless of context and intention". Guess what: the third one is not true under our policy.
Steelpoint wrote:So technically a Traitor could afford being kept alive under the lawset but a Nuke Op/Wizard/Changeling would be valid. Which seems fine.

Not an important note, but a changeling is a crew member, and is not valid.


I'm going to say it again and again: Safeguard works on Bay because Bay is a heavy-RP server. It will work poorly here, and it won't help us avoid a large amount of policy. It would only make things worse, because it's very vague.

Also, in a situation where the captain is absent and the HoP is a traitor, guess what: he can order the AI to kill everyone else if we use that lawset, because he is the highest-ranking crew member.

User avatar
Malkevin
Confined to the shed
 
Joined: Thu May 01, 2014 4:33 pm
Byond Username: Malkevin

Re: "It's silicon policy time!" "No CentCom no!"

Postby Malkevin » Mon May 05, 2014 10:35 pm #3893

Changelings eat the original crew members and impersonate them.

That's why Dante got in so much shit for running off to tell his ERP bitch who the other lings were, as well as it being cheesy as fuck.
This space intentionally left blank.

Lo6a4evskiy
 
Joined: Fri Apr 18, 2014 6:40 pm
Byond Username: Lo6a4evskiy

Re: "It's silicon policy time!" "No CentCom no!"

Postby Lo6a4evskiy » Tue May 06, 2014 4:13 am #3946

Kelenius wrote:it slowly becomes "AI can ignore each and every instance of self-harm, regardless of context and intention". Guess what, third one is not true under our policy.

Actually, it is. For example, try not denying access to a bomb, to nuke ops, to a hull breach. You'll quickly learn that nobody gives a shit. And if you do try to deny that access, you'll just get bitched at and they'll get in anyway.

User avatar
imblyings
 
Joined: Fri Apr 18, 2014 5:42 pm
Location: >using suit sensors
Byond Username: Ausops

Re: "It's silicon policy time!" "No CentCom no!"

Postby imblyings » Tue May 06, 2014 4:32 am #3949

Which is why I draw a distinction between accidental and deliberate self-harm.

Re: lings, this is only tangentially related, but why can't lings consensually take genomes from someone, or even use UI/UE injectors, to sneak aboard a station?
The patched, dusty, trimmed, feathered mantle of evil +13.

User avatar
Kangaraptor
 
Joined: Sat Apr 19, 2014 7:33 am
Location: dank memes
Byond Username: Kangaraptor

Re: "It's silicon policy time!" "No CentCom no!"

Postby Kangaraptor » Tue May 06, 2014 4:40 am #3950

RE: Clown/plasma fire issue, consider the following:

The AI knows for certain that if it does NOT act, a human will be killed. It can see this, it can calculate this, it knows. 100%.

Opening the door to let the human escape, however, while dangerous, does not necessarily lead to the immediate harm of other crew members; there's a chance they would not be harmed. The AI willingly and knowingly takes a risk based on the knowledge that, while humans may be harmed as a result of its actions, not acting was GUARANTEED to harm.

Of course, that requires people to change their attitude toward silicons and accept that an advanced wetware CPU is capable of quantitative and forward thinking, shock horror. While silicons are expected to follow their laws to the letter, it's unreasonable to expect a machine so advanced to be totally incapable of taking risks to minimize harm done to humans.

EDIT: on the topic of Asimov in general, I don't get why people find this so hard to understand:

Asimov's laws are intentionally flawed. They were designed specifically to explore the consequences of conflicts and alternate interpretations, like a robot questioning what the definition of "human" is.

User avatar
Psyentific
 
Joined: Mon Apr 21, 2014 7:44 am
Location: Vancouver, Canada
Byond Username: Psyentific

Re: "It's silicon policy time!" "No CentCom no!"

Postby Psyentific » Tue May 06, 2014 4:57 am #3952

Kangaraptor wrote:EDIT: on the topic of Asimov in general, I don't get why people find this so hard to understand:

Asimov's laws are intentionally flawed. They were designed specifically to explore the consequences of conflicts and alternate interpretations. (like a robot questioning what the definition of 'human' is).

Safeguard

The only reason we have Asimov is because lol-reference. The sooner we switch off Asimov, the sooner we can stop twisting Silicon Policy into a pretzel.
I haven't logged into SS13 in at least a year.

User avatar
Steelpoint
Github User
 
Joined: Thu Apr 17, 2014 6:37 pm
Location: The Armoury
Byond Username: Steelpoint
Github Username: Steelpoint

Re: "It's silicon policy time!" "No CentCom no!"

Postby Steelpoint » Tue May 06, 2014 5:04 am #3954

Yes, we know that Asimov was designed to create conflict and alternative interpretations, but it is clear this playerbase does not accept this, and demands that silicons' lawsets bind them into a clear, structured rule set, not one that can easily cause conflict.

Asimov is a poor lawset for this playerbase's desires; Safeguard is more compatible. It's not perfect, but at least with Safeguard you don't need another window open so you can read all the rules and policies on Asimov.

User avatar
Kelenius
 
Joined: Sun Apr 20, 2014 10:53 am
Byond Username: Kelenius

Re: "It's silicon policy time!" "No CentCom no!"

Postby Kelenius » Tue May 06, 2014 7:03 am #3980

imblyings wrote:re: lings, this is only tangetially related but why can't lings consensually take genomes from someone or even use UI/UE injectors to sneak aboard a station.

Listed on crew manifest -> crew member

Lo6a4evskiy
 
Joined: Fri Apr 18, 2014 6:40 pm
Byond Username: Lo6a4evskiy

Re: "It's silicon policy time!" "No CentCom no!"

Postby Lo6a4evskiy » Tue May 06, 2014 9:46 am #3994

Steelpoint wrote:but it is clear this playerbase does not accept this and demands that Silicon's lawset's bind them into a clear structured rule set and not one that can easily cause conflict.

Well then, Safeguard is the worst lawset ever, because it's very vague and open to interpretation.

User avatar
Pandarsenic
 
Joined: Fri Apr 18, 2014 11:56 pm
Location: AI Upload
Byond Username: Pandarsenic

Re: "It's silicon policy time!" "No CentCom no!"

Postby Pandarsenic » Tue May 06, 2014 1:52 pm #4024

I would prefer discussion of whether to keep Asimov as the default be another issue to raise; this thread is about what I can only describe as "coping with" Asimov as-is.
(2:53:35 AM) scaredofshadows: how about head of robutts
The latest /tg/station13 Silicon Policy reference document can be found at http://pastebin.com/bduT7pFf
If you need something handled that involves silicons, lawsets, etc., forum message me or find me on IRC.
I take Rule 1 of the servers very seriously. If you do too, we shouldn't have a problem.

User avatar
Kangaraptor
 
Joined: Sat Apr 19, 2014 7:33 am
Location: dank memes
Byond Username: Kangaraptor

Re: "It's silicon policy time!" "No CentCom no!"

Postby Kangaraptor » Tue May 06, 2014 1:55 pm #4025

Pandarsenic wrote:I would prefer discussion of whether to keep Asimov as the default be another issue to raise; this thread is about what I can only describe as "coping with" Asimov as-is.


It did come up when I tried to address silicon policy a while ago, and Numbers shitpiled it because he didn't like it or some stupid crap.

I'll reiterate what I said on the first page, though; for a proposed revision, I like this much more than what we currently have.

User avatar
Psyentific
 
Joined: Mon Apr 21, 2014 7:44 am
Location: Vancouver, Canada
Byond Username: Psyentific

Re: "It's silicon policy time!" "No CentCom no!"

Postby Psyentific » Tue May 06, 2014 8:02 pm #4094

Pandarsenic wrote:Read this before posting. http://pastebin.com/bduT7pFf Read this before posting.


It's very tl;dr. It covers most eventualities, yes. It's thorough. But it's tl;dr for most casual silicons.
I haven't logged into SS13 in at least a year.

User avatar
Neerti
Rarely plays
 
Joined: Thu Apr 17, 2014 5:06 pm
Byond Username: Neerti

Re: "It's silicon policy time!" "No CentCom no!"

Postby Neerti » Wed May 07, 2014 5:20 am #4180

Not reading the rules isn't a valid excuse to not follow them.
- Game Admin -
Feel free to PM me on the forums or IRC with questions, concerns, feedback, or just talk about stuff.
Have I not met my hitler quota this month?

User avatar
Psyentific
 
Joined: Mon Apr 21, 2014 7:44 am
Location: Vancouver, Canada
Byond Username: Psyentific

Re: "It's silicon policy time!" "No CentCom no!"

Postby Psyentific » Wed May 07, 2014 5:43 am #4184

Neerti wrote:Not reading the rules isn't a valid excuse to not follow them.

I never said it was. I'm saying that it's hard to read the rules as they are now. I'm saying that each segment (1.1, 1.2, etc.) could do with a summary. As it stands, it reads very much like a legal document: it covers all eventualities, but lacks readability, summarization, and conciseness. It's the same case I'd bring up against a server with fifty rules.

This really seems like it'd benefit a lot from proper formatting: BBCode, bold the main points, italicize and smalltext the finer points.
I haven't logged into SS13 in at least a year.

User avatar
imblyings
 
Joined: Fri Apr 18, 2014 5:42 pm
Location: >using suit sensors
Byond Username: Ausops

Re: "It's silicon policy time!" "No CentCom no!"

Postby imblyings » Wed May 07, 2014 7:21 am #4186

Fuck laws, fuck that tl;dr policy. Experience has made me realize there's really only one way to play non-antag silicons: you do your best to keep the round enjoyable for the majority. If you can't do that, you do your best to force a shuttle call in the hope that the next round is more enjoyable.

Yeah, the policy is a guideline for the people who don't have the judgement to do what is right when the time is right, but that really is one of the most convoluted tl;drs ever. There's something fucking wrong if we have a lawset that requires a tl;dr pastebin.
The patched, dusty, trimmed, feathered mantle of evil +13.

User avatar
Psyentific
 
Joined: Mon Apr 21, 2014 7:44 am
Location: Vancouver, Canada
Byond Username: Psyentific

Re: "It's silicon policy time!" "No CentCom no!"

Postby Psyentific » Wed May 07, 2014 8:13 am #4193

imblyings wrote:but that really is one of the most convoluted tldrs ever. There's something fucking wrong if we have a lawset that requires tldr pastebin.

My point, but less eloquently.

Safeguard default when?
I haven't logged into SS13 in at least a year.
