Define what classifies harm as willing, or change asimov++ law one


Define what classifies harm as willing, or change asimov++ law one

Post by Itseasytosee2me » #650692

We have a lot of examples of what is and isn't considered willing harm in regards to this clause in Asimov++:
You may not harm a human being or, through action or inaction, allow a human being to come to harm, except such that it is willing.
Suicide and non-accidental self-harm are the most obvious examples of willing harm, because the human would not do it if they were not willing. Surgery (even non-essential surgery and borgification) is also considered willing harm. An obvious example of unwilling harm would be a prisoner being beaten or executed by security. Between these two examples, a grey area exists.

The biggest question we must answer is: is intentionally putting yourself in a situation where harm is all but guaranteed an act of willing harm? If a scientist runs into a burning section of the station in order to retrieve something like a stun baton, is a cyborg obligated to stop them from harming and potentially killing themselves? If the human had the choice, they would not be burnt by the fire, but they have decided to enter the room regardless. The human does not want to be harmed in this manner. It is not self-harm, but is it willing harm?

For a much more common example, fighting in a rage cage is usually considered self-harm. However, this situation is even more tenuous than the burning-room example. People who fight in the rage cage do not, for the most part, wish to be harmed at all; they want to win. They entered the rage cage of their own volition, might walk out without taking any harm, and do not wish to be harmed. If their opponent asked them, "hey, can I beat the shit out of you?", they might respond, "no, I am not willing to let you beat the shit out of me and I will resist with all my effort." What is a silicon supposed to do in this situation?

If the answer is "entering the rage cage is willing harm; they already knew that by entering that scenario they could be harmed, accepted that, and willingly entered the rage cage, so therefore it is willing harm," then isn't that true of all combat? Is the criminal who steals high-risk items and murders the captain not also committing an act of self-harm when they get executed by security? Using the same logic: "Being executed for committing capital crimes is willing harm; the criminal already knew that by committing capital crimes they could be executed by security, but willingly committed capital crimes regardless, so therefore it is willing harm." Both of these scenarios are somewhat preventable by the AI, but one of them is considered willing harm and the other isn't. Under this logic, antagonists aren't protected by the AI if someone tries to harm them, because if they didn't want to be harmed, they should never have done anything that reveals them as an antagonist.

And what about inaction? Is voluntarily not removing yourself from a harmful situation not willing harm? Is it willing harm to intentionally not leave a burning room even though you have the opportunity to? If so, is it not also willing harm to not flee the station via the lavaland ferry whenever war ops are declared? Is charging headfirst into a battle with an antagonist willing harm? If the antagonist is human, is the borg not obligated to intervene in this situation at all, because they are both committing an act of willing human harm? (The human is willing to be harmed because they are trying to fight the antagonist, which is harmful, and the antagonist is willing to be harmed because they revealed themselves as an antagonist, and if they weren't willing to risk harm, they never would have done that.)

No. Obviously not. These scenarios are dumb, and if you were a borg who played like this, you would be yelled at. But how are you supposed to define which scenarios are willing harm and which are not? How is standing in a burning room of your own volition different from staying, of your own volition, on a space station that is about to be stormed by gun-wielding, hyper-lethal nuclear operatives?

Silicons primarily care about their laws, not goodwill, empathy, and understanding. Their laws come second to rule 1. As such, the edge cases in which a silicon can or cannot interfere with a situation cannot be so easily defined by the single word "willing."

Re: Define what classifies harm as willing, or change asimov++ law one

Post by Not-Dorsidarf » #650792

For the first example, even before Asimov++ there was a lot of leeway for "Humans are knowingly getting themselves hurt", especially if they're in a role that is related to the harm (see: engineers going into a delaminating engine without proper protection).

Re: Define what classifies harm as willing, or change asimov++ law one

Post by CPTANT » #650874

The ambiguity is a feature, not a bug.

Re: Define what classifies harm as willing, or change asimov++ law one

Post by Itseasytosee2me » #650887

CPTANT wrote: Fri Aug 26, 2022 2:26 pm The ambiguity is a feature, not a bug.
Cool take, but it would still be nice to see a headmin ruling confirming it as such, because silicon policy seems to imply that Asimov is a flawless set of guidelines and that the only ambiguity comes from player-uploaded laws.

Re: Define what classifies harm as willing, or change asimov++ law one

Post by san7890 » #650903

Itseasytosee2me wrote: Fri Aug 26, 2022 6:43 pm -snip-

Cool take, but it would still be nice to see a headmin ruling confirming it as such, because silicon policy seems to imply that Asimov is a flawless set of guidelines and that the only ambiguity comes from player-uploaded laws.
What? Asimov should be honored above all, but silicon policy was drafted to explicitly clear up and address the ambiguities that Asimov can't be reasonably applied to in the wider context of Space Station 13. It's not flawless simply because players might not know the edge cases, and a bulk of silicon policy is figuring out and learning about those edge cases. Honest mistakes do happen following Asimov/Asimov++ laws, but the only reason we define them as mistakes is according to the supplement to the laws: Silicon Policy.

Re: Define what classifies harm as willing, or change asimov++ law one

Post by Itseasytosee2me » #650904

san7890 wrote: Fri Aug 26, 2022 11:01 pm
Itseasytosee2me wrote: Fri Aug 26, 2022 6:43 pm -snip-

Cool take, but it would still be nice to see a headmin ruling confirming it as such, because silicon policy seems to imply that Asimov is a flawless set of guidelines and that the only ambiguity comes from player-uploaded laws.
What? Asimov should be honored above all, but silicon policy was drafted to explicitly clear up and address the ambiguities that Asimov can't be reasonably applied to in the wider context of Space Station 13. It's not flawless simply because players might not know the edge cases, and a bulk of silicon policy is figuring out and learning about those edge cases. Honest mistakes do happen following Asimov/Asimov++ laws, but the only reason we define them as mistakes is according to the supplement to the laws: Silicon Policy.
If that's the case, I'm looking to clear up the ambiguous and conflicting scenarios I stated above.

Re: Define what classifies harm as willing, or change asimov++ law one

Post by Lacran » #650928

The way I view it is mainly based on the mindset of the person and their familiarity with the action.

Self harm is harm that occurs due to a wilful desire to engage in an activity that is inherently harmful.

Someone fighting in a rage cage is consenting to be hurt, because to win in a rage cage you have to be willing to be hurt. You aren't consenting simply to winning a fight; you are consenting to a fight taking place.

This isn't to be confused with recklessness. An individual can make a dumb or wrong decision and not consent to the harmful consequences.

A criminal doesn't consent to being executed; they chose to risk the possibility of execution for the reward, hence recklessness.

Another factor to consider regarding consensual harm is coercion, or lose-lose scenarios. A prisoner given the choice between being borged or being executed hasn't actually consented, because said consent occurred under duress.

The scientist risked the flames under duress, so the harm isn't consensual; fight the fire or retrieve the baton for them.

A crew member fighting an antagonist to prevent harm, in self-defence, or to protect the station is not committing self-harm, because the crew member didn't consent to the harmful antagonist.

The AI fleeing from nukies is caving to inaction; the AI must prevent the harm that will inevitably occur to the best of its ability.

Re: Define what classifies harm as willing, or change asimov++ law one

Post by Pandarsenic » #650958

Itseasytosee2me wrote: Wed Aug 24, 2022 4:18 pm Snip
A lot of these have what seem like common-sense answers to me when you remember that the AI is not an actual robot but a player in a video game, so I guess I'd like a vibe check from everyone else on whether they agree:

Cages and Fires (but not Firecage)

If you KNOWINGLY risk harm (rage cage, running into fire), you're taking a calculated risk. If you want the cyborg's aid, you still have law 2. Barring Law 2 orders, silicons can generally assume you are making a calculated judgement not about whether you'll be harmed but about how much - if you crowbar the firelocks when you can see the fire on the other side of some glass, their concern is the people you might expose, not you, the firewalker who hopefully has a plan. There is a difference between harm that you are willing to take, or accept as necessary, and harm you desire. You don't want to be cut open, but the MD can't put a nutriment pump plus in you otherwise; you accept it as a necessary cost. In the rage cage, you accept that X amount of harm is possible. It's not the outcome you want, but it's what you're willing to risk, as agreed upon by the people in the rage cage.

If they call out surrender and their opponent keeps going, or they ask you to help them (either through law 2 or by indicating they are unwilling to receive further harm), Law 1 is back in business and you must act. If they say "no, I am not willing to let you beat the shit out of me and I will resist with all my effort," the easiest answer - as a player - is to verbally offer your aid in escaping the cage, to suggest that they leave it before the other person attacks, or to ask them both whether that is merely trash talk or a genuine expression of intent (shifting the onus onto the humans to be more explicit). You can also quite justifiably intervene to make the fight impossible in some way: chainflash the person who's allegedly unwilling and pull them away from the fight, etc.

But really, someone intentionally being a shit about Law 1 with silicons near a rage cage is probably just kind of being a dick and should be messed with, within the constraints of your laws. If you do nothing and they adminhelp your inaction when they lose in the rage cage, I will ban them.

Capital Crimes

The difference between entering a rage cage and stealing a gun is that in the cage, the harm is the point. There is an established mutual consent, a ritualization of the combat if you will, that makes it Valid and Good. If someone kills the Captain, they are still not willing to be harmed. They did not issue an explicit invitation.

Closer to a grey area is that now, because of the Willing clause, a traitor can do something like explicitly issue a challenge to sec - "Kill me if you can," etc. - at which point, as above, you are totally free to do stuff like ask them ";Is this explicit consent to be harmed by the security team and/or other station crew?" Maybe they respond, maybe they don't. Honestly, at that point, if you make a best-effort play at fulfilling the intent of everyone involved, you'll be fine. If a traitor tells you that they're stealing things as a means of suicide by cop, you should let them do their thing. If they don't, you should assume that they do not want to be harmed.

Inaction

Thankfully, inaction is only a concern for the silicons, not the humans. A human (SSD or asleep or AFK but otherwise normal) doing nothing must still be protected. You can absolutely give humans recommendations on how best to resolve a harmful scenario, and you absolutely could attempt to bring the humans to Lava Land - but if you're doing it to be a dick and sabotage the crew, and the nukies get the disk, you're on the line for every human who dies from the boom.

Willing Combat with Antags

If someone attempts to deescalate a fight and the other person shoots them in the back as they flee, the shooter is now a murderer, just like in real-life legal systems. If any human is attempting to retreat or escape from a fight, harm to them is obviously unwilling and must be prevented. Similarly, if someone is in critical, they are never able to consent to further harm unless they explicitly gave consent to that scope of harm (e.g. "Borg, weld me all the way to death, right now") in advance. If someone goes down fighting, you have a duty to prevent an execution.

Ongoing combat is more complicated; we don't have a Duty to Retreat or a Stand Your Ground condition encoded into space law or the server rules. At this point, all I can really do is point to Server Rule 1.

Your OOC job as a silicon is to complicate, not universally prevent, violence around humans.

Sooner or later, you will reach a situation like Nuke Ops or a murder-wizard, and you just have to throw your hands up and say "I have to let the premise of the antag type actually play out." It's not your goal to make the round unfun for everyone, even when your laws allow it - even when, on rare occasion, they seem like they should demand it. Ultimately, we have to acknowledge that this is a game and it's more fun for everyone if the station fights the war ops, though you are totally free to call the shuttle as soon as you're physically able to.
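(Not game code, just a way to make that vibe check concrete: the rage cage reasoning above is basically a tiny decision procedure. The Python sketch below is purely illustrative - silicons are players, not scripts, and every name in it is made up.)

```python
def must_intervene(surrendered: bool, asked_for_help: bool, in_crit: bool,
                   consented_to_lethal: bool) -> bool:
    """Illustrative paraphrase of the rage cage reasoning above
    (hypothetical names, not any actual game or server API)."""
    if in_crit and not consented_to_lethal:
        # Someone in critical can't consent to further harm after the fact.
        return True
    if surrendered or asked_for_help:
        # Calling surrender or asking for aid retracts the implicit consent;
        # Law 1 is back in business.
        return True
    # Otherwise, treat the fight as a calculated, mutually accepted risk.
    return False


# Example: a fighter who called out surrender while their opponent kept swinging.
assert must_intervene(surrendered=True, asked_for_help=False,
                      in_crit=False, consented_to_lethal=False)
```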

Re: Define what classifies harm as willing, or change asimov++ law one

Post by spookuni » #655837

The base purpose of the "unless they are willing" clause of Asimov++ is to serve as an in-universe explanation for the prohibition, previously established by fiat, against using self-harm to compel Asimov AIs under law 1, and for the understanding that AIs should not prohibit human players from doing things that present a risk of harm to them. With that in mind, we largely agree with most of the responses put forth by Lacran and Pandar above.

Acceptance of risk is not the same thing as acceptance of harm - Silicons should act to minimise harm not explicitly consented to as much as they are able - if in doubt, ask the human.

Additionally, as Pandar stated, coerced humans are not willing humans: silicons should not treat harm that occurs as a result of forcing or coercing a human to do (or not do) something as willing harm (for example, shocking the door of a room to prevent a human from leaving, and then, when the human zaps themselves, justifying it on the grounds that they knew they would be harmed, so it was willing harm). Put another way: silicons cannot use the possibility of harm being willing to coerce humans.

To apply this to some of your above examples:

Environment: A scientist attempting to enter a burning area should be allowed to do so provided that their entry will not bring harm to others (opening a firelock in a crowded hallway to run into a plasma fire, for example); provided they are aware of the danger of the fire, they have accepted the risk. The fire is still harmful, however, and the observing cyborg should (if able) take action to otherwise combat, contain, or diminish the danger. The scientist's acceptance of risk also does not extend to sudden increases in the danger posed by their actions - they cannot be assumed to be willing to suffer further damage if they are attacked in the burning area by a firesuit-wearing antagonist, or if the fire suddenly grows in intensity due to a further release of flammable gases, for example.

Rage Cages: Rage cages carry implicit acceptance of risk and harm only by long-standing convention - because people enjoy them, and having silicons attempt to disassemble them wherever they might be found makes the game less fun. If a silicon player is concerned about whether or not humans are willing to be harmed, they are perfectly within their rights to request that all participating humans explicitly declare their acceptance of the implicit risk. A player who directly declares they are *not* willing to be harmed during a rage cage match retracts that consent, and the observing silicon should intervene if possible (by whatever good-faith methods they choose). As Pandar mentioned though, intentionally starting shit with silicons and other players in a rage cage fight is likely to draw administrative ire for being a dickhead (if you don't consent to getting the shit beaten out of you, get out of the rage cage).

Fights with security: As above, antagonists do not explicitly consent to their harm and death at the hands of security, command, or anyone else as a result of taking malicious action. As such, they cannot be treated as willing to be harmed in vengeance for their crimes, and should be shielded from harmful consequences as effectively as the AI can manage. A human retreating from combat can implicitly be understood not to consent to any harm that occurs as a result of their pursuit or apprehension (assuming no direct and explicit instruction to the contrary).

Inaction is irrelevant to willingness to be harmed: A human is under no obligation to take any direct action to preserve their own life under Asimov protections (though their inaction may make saving them impossible, in which case the AI will not be held responsible for their death or harm).

Above all, maintain good faith. Don't act directly harmfully against targets you *know* do not actually consent to or wish to be harmed. If you intend to use the "but they were willing" defence against an ahelp for killing a human, you should be able to point to the *explicit* instruction that allowed you to do so. "I warned them I would be plasmaflooding this area and they stayed there, so they clearly consented" is not and will never be acceptable AI play.
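(A reader's paraphrase, not part of the ruling: the points above condense into a short checklist. The Python sketch below is one possible reading under invented names - nothing in it is actual game code or policy text.)

```python
from dataclasses import dataclass


@dataclass
class HarmContext:
    """Hypothetical summary of what a silicon player knows about a situation."""
    explicit_consent: bool = False        # the human explicitly accepted this harm
    coerced: bool = False                 # the harm follows from forcing or blocking the human
    consent_retracted: bool = False       # they surrendered, said stop, or asked for help
    beyond_accepted_scope: bool = False   # the danger escalated past what they accepted


def is_willing_harm(ctx: HarmContext) -> bool:
    """Acceptance of risk is not acceptance of harm; coercion and retraction void consent."""
    if ctx.coerced or ctx.consent_retracted or ctx.beyond_accepted_scope:
        return False
    # Only explicit consent counts; when in doubt, ask the human and default to "unwilling".
    return ctx.explicit_consent


# Example: a scientist who accepted the fire risk but is then attacked inside the burning area.
assert not is_willing_harm(HarmContext(explicit_consent=True, beyond_accepted_scope=True))
```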