
Asimovs and Consensual Harm

PostPosted: Sat Aug 03, 2019 9:04 am
by lutrin
So silicons have a lot of caveats to their laws, especially the Asimov laws, at various levels of intuitiveness and documentation. Towards the undocumented and unintuitive end of things, nothing is "harmful" for law purposes if the entity being harmed agrees to be harmed by it. As far as I can gather, this includes all deliberate self-harm (that's its own rule, which is slightly more intuitive and much better documented, even though it is, as far as I can tell, a corollary of this one), as well as "rage cages" and other structured combat, and it's sometimes also applied to pseudo-harmful things like surgery or alcohol. I am not convinced this is a good rule.

Going back to "unintuitive and badly documented": this is a hard rule for silicon players to actually come across until someone tells them about it. There is no immediately obvious reason that you should let people kill each other if they agreed to it beforehand, and it's not clear that it's a rule at all until you ask an admin and they tell you it is. On the rules page, for instance, the only passage that might be vaguely interpreted as this rule, and the one I think I've seen actually cited for it, is "Humans can be assumed to know whether an action will harm them and that they will make educated decisions about whether they will be harmed if they have complete information about a situation." This doesn't actually say "if a human chooses to do it, it's not harmful", just "if it's harmful, you usually don't have to interfere to stop humans from doing it". If the passage IS trying to convey this rule, it should be reworded. If it isn't, then a line of text that does convey it really should be added. There's more about this rule that's unintuitive, but I'll come back to that later.

I also don't really see what benefit it brings. It lets people run rage cages without interruption, sure, but doesn't it also remove InTeRaCtIoN based around trying to keep your fight club covert and/or safe from silicons? Rage cage? More like a hugbox, amirite. I've also seen people say that this rule is what allows things like surgery or alcohol to not be constantly stopped by silicons, but I don't buy that. Take surgery: there's a good chance the patient will be unconscious, and there are other reasons why surgery doesn't always trigger law 1 anyway. Similarly, for things like alcohol, the possibility of harm is so obvious that, short of it being forced down your throat, any dangerous consumption would already count as self-harm. Now, there is probably some obvious thing I'm missing that this rule elegantly resolves, but scanning through the forums I haven't seen it.

Finally, it's not really intuitive what this rule actually lets/makes silicons ignore. If someone says "Hey borg, law 2, kill me", for instance, it seems like the borg would be obliged to comply. I'm on the edge of accepting something like that, but it still feels gross. What constitutes consent? If there are warops, can you make an announcement that the entire station except for departures is now one big deathmatch, and that presence anywhere else constitutes consent to be lethally attacked? If not, what's the largest portion of the station you CAN turn into a rage cage? What about coercion? Is a silicon allowed to make informed guesses about whether consent is "legitimate", or is consent consent?

Again, I'm probably just too dumb to see why this is a good rule, but right now I don't like it. I don't like playing an entity pretty much intended to be an annoying nanny, only to be told "hey, go nanny someone else, we're ENJOYING getting ourselves killed" and being able to do nothing about it except ask people to stop.

Re: Asimovs and Consensual Harm

PostPosted: Sat Aug 03, 2019 9:15 am
by zxaber
I suppose the wording could be cleared up.

If silicons were required to prevent self-harm, the AI would have to bolt mining shut every shift and only let the non-humans through. There's a lot of dangerous shit on lavaland, after all. But it's fine, because we have a self-harm exception that allows silicons to let humans knowingly place themselves in harmful situations.

You can then apply the same reasoning to other things, such as rage cages. Such arenas are inherently dangerous, but as long as only willing humans enter and all observers are shielded from the hazards (like the traditional electrified grilles), it's fine.

Re: Asimovs and Consensual Harm

PostPosted: Sat Aug 03, 2019 11:16 am
by terranaut
friendly reminder that my more concise and generally more readable silicon policy rewrite is still up, but even though silicon policy was a campaign platform, none of the admins actually care to implement or even look at it, despite strong player approval :)
https://tgstation13.org/wiki/User:Terranaut

Re: Asimovs and Consensual Harm

PostPosted: Mon Aug 05, 2019 2:02 pm
by WarbossLincoln
lutrin wrote:
but doesn't it also remove InTeRaCtIoN based around trying to keep your fight club covert and/or safe from silicons?
A silicon wordlessly smashing your rage cage, remotely flipping the APC over and over, and cutting wires until you flash it and kill it isn't an interaction that anyone wants to have.