Asimovs and Consensual Harm
Posted: Sat Aug 03, 2019 9:04 am
So silicons have a lot of caveats to their laws, especially the Asimov laws, at varying levels of intuitiveness and documentation. Toward the undocumented and unintuitive end of things: nothing is "harmful" for law purposes if the entity being harmed agrees to be harmed by it. As far as I can gather, this covers all deliberate self-harm (that's its own rule, which is slightly more intuitive and much better documented, even though it's, as far as I can tell, a corollary of this one), as well as "rage cages" and other structured combat, and it's sometimes also applied to pseudo-harmful things like surgery or alcohol. I am not convinced this is a good rule.
Going back to "unintuitive and badly documented": this is a hard rule for silicon players to actually come across until someone tells them about it. There is no immediately obvious reason you should let people kill each other just because they agreed to it beforehand, and it's not clear that it IS a rule until you ask an admin and they tell you so. On the rules page, for instance, the only passage that might be vaguely interpreted as this rule, and the one I think I've seen actually cited for it, is "Humans can be assumed to know whether an action will harm them and that they will make educated decisions about whether they will be harmed if they have complete information about a situation." That doesn't actually say "if a human chooses to do it, it's not harmful"; it says "if it's harmful, you usually don't have to interfere to stop humans from doing it." If the passage IS trying to convey this rule, it should be reworded. If not, then a line of text that does convey it really should be added. There's more about this rule that's unintuitive, but I'll come back to that later.
I also don't really see what benefit it brings. It lets people run rage cages without interruption, sure, but doesn't it also remove InTeRaCtIoN based around trying to keep your fight club covert and/or safe from silicons? rage cage more like a hugbox amirite. I've also seen people say that this rule is what keeps things like surgery or alcohol from being constantly stopped by silicons, but I don't buy that. For surgery specifically, there's a good chance the patient will be unconscious and unable to consent in the moment, so this rule can't be what's doing the work; there are other reasons why surgery doesn't always trigger law 1. Similarly, for things like alcohol, the possibility of harm is so obvious that, short of it being forced down your throat, any dangerous consumption would already count as self-harm. Now, there is probably some obvious situation I'm missing that this rule elegantly resolves, but scanning through the forums I haven't found it.
Finally, it's not really intuitive what this rule actually lets (or makes) silicons ignore. If someone says "Hey borg, law 2, kill me," for instance, it seems like the borg would be obliged to comply. I'm on the edge of accepting something like that, but it still feels gross. What constitutes consent? If there are warops, can you make an announcement that the entire station except for departures is now one big deathmatch, and that presence anywhere else constitutes consent to be lethally attacked? If not, what's the largest portion of the station you CAN turn into a rage cage? What about coercion? Is a silicon allowed to make informed guesses about whether consent is "legitimate," or is consent consent?
Again, I'm probably just too dumb to see why this is a good rule, but right now I don't like it. I don't like playing an entity pretty much intended to be an annoying nanny only to be told "hey, go nanny someone else, we're ENJOYING getting ourselves killed" and being able to do nothing about it except ask people to stop.