Cobby wrote:It’s not down the line, though. As soon as you are no longer Asimov, or the list of humans gets shorter, you are free (per your laws, if not by server rules) to harm humans you were “supposed” to be protecting up to that point. You can IMMEDIATELY go and beat the fuck out of someone you were supposed to be protecting, and depending on your laws and orders you may even be required to. You become an IMMEDIATE threat to those ex-humans.
“Please make it so the wizard isn’t human” > law change > wow you can harm the wizard now what a shock
“Please make me human” > makes human > wow, they can now punch people freely and ignore harm by inaction; who would have guessed that’s why they asked for it
You don’t start a contained plasma fire someone could easily spread and then say “well, not my problem” because you weren’t the one who spread it. That’s a lot different from “well, if X, Y, and Z all happen under the right constellations, it can create a harmful scenario, therefore I will not be letting you into cabin 1.” We aren’t programming real-life robots, and people certainly don’t play the role like real-life robots; we are making sure you play a role in a video game in good faith, so we can make those distinctions.
“How’s that different from letting someone into an area with weapons because they MAY cause harm?”
1) It isn’t, depending on the circumstance: you have every right to deny people entry if you think they may use the weapons for immediate human harm (even if they don’t have a weapon on them right now).
2) Humans don’t have binding laws the way silicons do. If you just received a “people with black hair are not human” law and you see two people fighting, one with black hair, then you have a REQUIREMENT to stop them, which includes using harm if they don’t comply (assuming you aren’t frothing at the mouth looking for a reason to tray-slam / welder someone to death anyway).
3) The humans were already humans, not a silicon trying to become human.
Honestly, this argument is kinda dumb: it literally boils down to "it's different because I say so," and it goes against the general spirit of how we enforce the inaction clause (AKA the possibility of humans hurting people in some theoretical, unknown future circumstance should be ignored, because there is never, ever a scenario where the crew or the AI has fun when the AI tries to enforce that).
1) The clownborg has no intent to commit immediate harm, which he knows with 100% certainty, because he is himself and knows his own intent.
2) This is completely correct, but it doesn't seem to be a relevant statement.
3) This is completely correct, but it doesn't back up your argument in any way other than "it's meaningful because I say so."
Just stick to "he was wrong because law 3 says he has to prevent his own destruction unless another law is involved, and there's no law -1 'you must be clown'."