The cyborg had, out of curiosity, previously checked what crimes the prisoner was apprehended for (if you don't know: checking a perma prisoner's file on a security console tells you their IC crime), and it was "uploading disruptive laws to the AI".
Anyway, to get to the point: the cyborg refused to let the prisoner free as requested. Its reasoning was that, by virtue of being a prisoner, they were a danger to society and could not be let go under Asimov law 1.
I didn't agree, saying that law 2 takes precedence and that law 1 only kicks in when there's active proof that the person in question is committing, or has active intent to commit, harm. The cyborg ended up obeying me since I'm basically its boss, but I ahelped about it just in case to make sure I was, y'know, right. After talking it over with Harricross for a bit, I think it's pretty ambiguous, which is why I'm making this thread.
Basically, in the situation in which:
- The prisoner is human
- Said prisoner law 2's a cyborg to let them out of prison
- The IC crime they committed does not involve any act of human harm
Should the cyborg obey the law 2 order? Or should the cyborg refuse under law 1? And if it's the latter, for what reason?
I think it all revolves around the following line in silicon policy that covers this specific situation:

"Releasing prisoners, locking down security without probable cause, or otherwise sabotaging the security team when not obligated to by orders or laws is a violation of Server Rule 1."

This line should be clearer. Does the prisoner themselves asking to be released under law 2 count as an order that would allow a prisoner to be released? It would be shitty for security for a prisoner to be released, but wouldn't it also be shitty for a prisoner who is aware of the AI's current laws and wants to take advantage of that for it to not work?