Asimovs and Consensual Harm

lutrin
Joined: Thu Jul 20, 2017 1:06 am
Byond Username: Lutrin
Location: evenly distributed across the entire universe

Asimovs and Consensual Harm

Post by lutrin » #505942

So silicons have a lot of caveats to their laws, especially Asimov laws, at varying levels of intuitiveness and documentation. Toward the undocumented and unintuitive end of things: nothing is "harmful" for law purposes if the entity being harmed agrees to be harmed by it. As far as I can gather, this covers all deliberate self-harm (that's its own rule, which is slightly more intuitive and much better documented, even though, as far as I can tell, it's a corollary of this one), as well as "rage cages" and other structured combat, and it's sometimes also applied to pseudo-harmful things like surgery or alcohol. I am not convinced this is a good rule.

Going back to "unintuitive and badly documented": this is a hard rule for silicon players to actually come across until someone tells them about it. There is no immediately obvious reason you should let people kill each other just because they agreed to it beforehand, and it's not clear this is a rule at all until you ask an admin and they tell you it is. On the rules page, the only passage that might be vaguely interpreted as this rule, and the one I think I've seen actually cited for it, is "Humans can be assumed to know whether an action will harm them and that they will make educated decisions about whether they will be harmed if they have complete information about a situation." That doesn't actually say "if a human chooses to do it, it's not harmful", just "if it's harmful, you usually don't have to interfere to stop humans from doing it". If the passage IS trying to convey this rule, it should be reworded; if not, a line of text should be added that does. There are more unintuitive parts of this rule, but I'll come back to those later.

I also don't really see what benefit it brings. It lets people run rage cages without interruption, sure, but doesn't it also remove InTeRaCtIoN based around trying to keep your fight club covert and/or safe from silicons? rage cage more like a hugbox amirite. I've also seen people say this rule is what keeps things like surgery or alcohol from being constantly stopped by silicons, but I don't buy that. Take surgery: there's a good chance the patient is unconscious and can't be consenting to anything in the moment, so there must already be other reasons surgery doesn't trigger law 1. Similarly, with alcohol, the possibility of harm is so obvious that, short of it being forced down your throat, any dangerous consumption would count as self-harm anyway. There is probably some obvious case I'm missing that this rule elegantly resolves, but scanning through the forums I haven't found it.

Finally, it's not really intuitive what this rule actually lets (or makes) silicons ignore. If someone says "Hey borg, law 2, kill me", it seems like the borg would be obliged to comply. I'm on the edge of accepting something like that, but it still feels gross. What constitutes consent? If there are war ops, can you make an announcement that the entire station except departures is now one big deathmatch, and that presence anywhere else constitutes consent to be lethally attacked? If not, what's the largest portion of the station you CAN turn into a rage cage? What about coercion? Is a silicon allowed to make informed guesses about whether consent is "legitimate", or is consent consent?

Again, I'm probably just too dumb to see why this is a good rule, but right now I don't like it. I don't like playing an entity pretty much intended to be an annoying nanny only to be told "hey, go nanny someone else, we're ENJOYING getting ourselves killed" and having nothing I can actually do about it except ask people to stop.
zxaber
In-Game Admin
Joined: Mon Sep 10, 2018 12:00 am
Byond Username: Zxaber

Re: Asimovs and Consensual Harm

Post by zxaber » #505943

I suppose the wording could be cleared up.

If silicons were required to prevent self-harm, the AI would have to bolt mining shut every shift and only let the non-humans through. There's a lot of dangerous shit on lavaland, after all. But it's fine, because we have a self-harm exception that allows silicons to let humans knowingly place themselves in harmful situations.

You can then apply that same reasoning to other things, such as rage cages. Such arenas are inherently dangerous, but as long as only willing humans enter and all observers are shielded from the hazards (like the traditional electrified grilles), it's fine.
terranaut
Joined: Fri Jul 18, 2014 11:43 pm
Byond Username: Terranaut

Re: Asimovs and Consensual Harm

Post by terranaut » #505953

friendly reminder that my more concise and generally more readable silicon policy rewrite is still up, but even though silicon policy is a campaign platform, none of the admins actually care to implement it or even look at it, despite strong player approval :)
https://tgstation13.org/wiki/User:Terranaut
WarbossLincoln
Joined: Wed Feb 10, 2016 11:14 pm
Byond Username: WarbossLincoln

Re: Asimovs and Consensual Harm

Post by WarbossLincoln » #506307

lutrin wrote:
but doesn't it also remove InTeRaCtIoN based around trying to keep your fight club covert and/or safe from silicons?
A silicon wordlessly smashing your rage cage, flipping the APC remotely over and over, and cutting wires until you flash it and kill it isn't an interaction that anyone wants to have.