
Can the AI harm harmful non humans?


CPTANT
 
Joined: Mon May 04, 2015 1:31 pm
Byond Username: CPTANT

Can the AI harm harmful non humans?

Postby CPTANT » Sun Sep 08, 2019 12:36 pm #513216

What is acceptable escalation for the AI against harmful non-humans?

What is a reasonable response to non-humans doing the following to humans: light harm, significant harm, attempted murder, murder, and mass murder?

Is there a difference between Asimov and other lawsets when it comes to defining groups of humans and non-humans? I get the feeling a lot of people see Asimov as the "good guy" even when dealing with non-humans, but I completely disagree with this interpretation. Asimov is a neutral force and only distinguishes between two parties: humans and non-humans. The only fundamental difference from a onehuman or syndicate lawset is who gets defined as human.



User avatar
Ghilker
 
Joined: Mon Apr 15, 2019 9:44 am
Byond Username: Ghilker

Re: Can the AI harm harmful non humans?

Postby Ghilker » Sun Sep 08, 2019 12:47 pm #513219

light harm - lock them in a room and call sec
significant harm - same as above
attempted murder - if incidental, same as above; if deliberate, lock them in a room, call the borgs, kill (or just call sec here too)
murder - same as above
mass murder - lock the room, flood it with plasma, kill (joking: just call the borgs to end the little shit)

You can always try calling sec, trying non-lethal means, or locking them in a room (or, if you have an engi borg, in a welded locker). If there are more problems you can kill them, then go back to genetics to clone the little shit and seal them in a glass room to show the crew as a warning.
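(Aside: Ghilker's ladder is essentially a severity-to-response lookup, with lethal force gated behind the harm still being ongoing. A rough Python sketch of that idea; purely illustrative, not actual /tg/station code, and every name in it is made up.)

[code]
# Rough illustration only: /tg/station itself is written in DM, and these names
# are invented. This just restates the escalation ladder above as data.
ESCALATION = {
    "light harm":       ["bolt them into a room", "call security"],
    "significant harm": ["bolt them into a room", "call security"],
    "attempted murder": ["bolt them into a room", "call security", "call the borgs",
                         "lethal force if they keep trying"],
    "murder":           ["bolt them into a room", "call security", "call the borgs",
                         "lethal force if they keep trying"],
    "mass murder":      ["call the borgs", "lethal force"],
}

def asimov_response(severity: str, harm_ongoing: bool) -> list[str]:
    """Pick a response; lethal options only apply while the harm is ongoing."""
    steps = ESCALATION.get(severity, ["call security"])
    if not harm_ongoing:
        # Past harm is security's problem, not something the AI avenges.
        steps = [s for s in steps if "lethal" not in s]
    return steps

print(asimov_response("attempted murder", harm_ongoing=True))
[/code]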

CPTANT
 
Joined: Mon May 04, 2015 1:31 pm
Byond Username: CPTANT

Re: Can the AI harm harmful non humans?

Postby CPTANT » Sun Sep 08, 2019 12:50 pm #513221

Ghilker wrote:light harm - lock them in a room and call sec
significant harm - same as above
attempted murder - if incidental, same as above; if deliberate, lock them in a room, call the borgs, kill (or just call sec here too)
murder - same as above
mass murder - lock the room, flood it with plasma, kill (joking: just call the borgs to end the little shit)

You can always try calling sec, trying non-lethal means, or locking them in a room (or, if you have an engi borg, in a welded locker). If there are more problems you can kill them, then go back to genetics to clone the little shit and seal them in a glass room to show the crew as a warning.


If someone kicks the onehuman, would you smoke them?

User avatar
Ghilker
 
Joined: Mon Apr 15, 2019 9:44 am
Byond Username: Ghilker

Re: Can the AI harm harmful non humans?

Postby Ghilker » Sun Sep 08, 2019 12:57 pm #513222

Well, it depends on what the onehuman's orders are; if they say 'protect me by every means', then yes, I will.

User avatar
Gigapuddi420
In-Game Admin
 
Joined: Fri May 19, 2017 8:08 am
Location: Dorms
Byond Username: Gigapuddi420

Re: Can the AI harm harmful non humans?

Postby Gigapuddi420 » Sun Sep 08, 2019 1:03 pm #513223

Keep in mind when playing silicon that you have to consider both your laws and the server rules. Just because there is nothing preventing an Asimov silicon from killing non-human crew doesn't mean it's okay to do so on a whim. This is why purged silicons are still expected to follow a relaxed escalation; without laws or orders to compel you to kill, you need proper reasoning to do so. Non-humans under Asimov are only protected by the server rules and the expectation that you aren't a huge shitter looking to ruin another player's round because 'it's not in my laws'. Have a proper reason to escalate to murder; actually be sure they deserve it before you take the extreme solution. Naturally, if you're following a legitimate order from a human, you follow that order so long as it doesn't break your laws. Following your laws first is key, but when you've got the ability to make a decision about your actions, you are responsible for those actions.

CPTANT
 
Joined: Mon May 04, 2015 1:31 pm
Byond Username: CPTANT

Re: Can the AI harm harmful non humans?

Postby CPTANT » Sun Sep 08, 2019 1:07 pm #513224

Gigapuddi420 wrote:Keep in mind when playing silicon that you have to consider both your laws and the server rules. Just because there is nothing preventing an Asimov silicon from killing non-human crew doesn't mean it's okay to do so on a whim. This is why purged silicons are still expected to follow a relaxed escalation; without laws or orders to compel you to kill, you need proper reasoning to do so. Non-humans under Asimov are only protected by the server rules and the expectation that you aren't a huge shitter looking to ruin another player's round because 'it's not in my laws'. Have a proper reason to escalate to murder; actually be sure they deserve it before you take the extreme solution. Naturally, if you're following a legitimate order from a human, you follow that order so long as it doesn't break your laws. Following your laws first is key, but when you've got the ability to make a decision about your actions, you are responsible for those actions.



Human harm isn't a whim; it's literally law 1.

User avatar
Gigapuddi420
In-Game Admin
 
Joined: Fri May 19, 2017 8:08 am
Location: Dorms
Byond Username: Gigapuddi420

Re: Can the AI harm harmful non humans?

Postby Gigapuddi420 » Sun Sep 08, 2019 1:11 pm #513225

CPTANT wrote:Human harm isn't a whim; it's literally law 1.

Law 1 doesn't compel you to kill. It requires you act to prevent harm. There are plenty of ways to do so without killing.

User avatar
Ghilker
 
Joined: Mon Apr 15, 2019 9:44 am
Byond Username: Ghilker

Re: Can the AI harm harmful non humans?

Postby Ghilker » Sun Sep 08, 2019 1:13 pm #513226

Yes, human harm is law 1, but you don't have to kill the target to stop the harm. If the harm is done multiple times by the same guy and he won't stop, you can harm him to make him stop, and even kill him if he's a big problem. Of course, silicon policy and the server rules are still there and you have to follow them too.

If it's a one-time harm, you cannot kill the non-human unless you are ordered to.

Edit: exactly what giga said

CPTANT
 
Joined: Mon May 04, 2015 1:31 pm
Byond Username: CPTANT

Re: Can the AI harm harmful non humans?

Postby CPTANT » Sun Sep 08, 2019 1:30 pm #513227

Gigapuddi420 wrote:
CPTANT wrote:Human harm isn't a whim; it's literally law 1.

Law 1 doesn't compel you to kill. It requires you act to prevent harm. There are plenty of ways to do so without killing.


So? There are plenty of ways to do anything. Silicons adhere to their laws; there is nothing wrong with killing IF it furthers said laws.

I get the feeling you want the lawset to just be another antag/non-antag validhunt.


Well, it depends on what the onehuman's orders are; if they say 'protect me by every means', then yes, I will.


That is already law 1.

User avatar
Ghilker
 
Joined: Mon Apr 15, 2019 9:44 am
Byond Username: Ghilker

Re: Can the AI harm harmful non humans?

Postby Ghilker » Sun Sep 08, 2019 1:44 pm #513229

CPTANT wrote:
Well, it depends on what the onehuman's orders are; if they say 'protect me by every means', then yes, I will.


That is already law 1.


Nope, law 1 says that you have to prevent harm, not that you have to kill non-humans to prevent harm. Follow logic, policy, and the server rules and you'll be good.

CPTANT
 
Joined: Mon May 04, 2015 1:31 pm
Byond Username: CPTANT

Re: Can the AI harm harmful non humans?

Postby CPTANT » Sun Sep 08, 2019 1:48 pm #513230

Ghilker wrote:
CPTANT wrote:
Well, it depends on what the onehuman's orders are; if they say 'protect me by every means', then yes, I will.


That is already law 1.


Nope, law 1 says that you have to prevent harm, not that you have to kill non-humans to prevent harm. Follow logic, policy, and the server rules and you'll be good.


Law 1 is 'do not harm, and do not let harm happen through inaction'. It is the highest law; it is, by definition, 'by any means necessary'.

User avatar
Ghilker
 
Joined: Mon Apr 15, 2019 9:44 am
Byond Username: Ghilker

Re: Can the AI harm harmful non humans?

Postby Ghilker » Sun Sep 08, 2019 1:57 pm #513232

By definition, yes, but that's what silicon policy is there for: to stop silicons from harming and killing non-humans over every little event.

Skillywatt
 
Joined: Sun Dec 02, 2018 7:29 pm
Byond Username: Tiguar

Re: Can the AI harm harmful non humans?

Postby Skillywatt » Sun Sep 08, 2019 2:03 pm #513233

Violence should be acceptable when that's the only recourse to prevent harm.

Borgs really only have the flash and running away with the victim as true defensive options, and if neither of those is viable, law 1 demands you do something to prevent harm.

Unless the non-human is an antag, I would expect the violence to stop when the assailant is soft-crit as that stops the harm. There is really no need for a Borg to kill.

CPTANT
 
Joined: Mon May 04, 2015 1:31 pm
Byond Username: CPTANT

Re: Can the AI harm harmful non humans?

Postby CPTANT » Sun Sep 08, 2019 2:14 pm #513235

Skillywatt wrote:Violence should be acceptable when that's the only recourse to prevent harm.

Borgs really only have the flash and running away with the victim as true defensive options, and if neither of those is viable, law 1 demands you do something to prevent harm.

Unless the non-human is an antag, I would expect the violence to stop when the assailant is soft-crit as that stops the harm. There is really no need for a Borg to kill.


The entire lawset is pointless if you are going to give non-humans virtually the same protection as humans anyway.

Unless the non-human is an antag


Might as well change roundstart laws to valladin then.

Ghilker wrote:By definition, yes, but that's what silicon policy is there for: to stop silicons from harming and killing non-humans over every little event.


We are talking about murder/attempted murder here.

User avatar
Gigapuddi420
In-Game Admin
 
Joined: Fri May 19, 2017 8:08 am
Location: Dorms
Byond Username: Gigapuddi420

Re: Can the AI harm harmful non humans?

Postby Gigapuddi420 » Sun Sep 08, 2019 2:33 pm #513236

CPTANT wrote:So? There are plenty of ways to do anything. Silicons adhere to their laws; there is nothing wrong with killing IF it furthers said laws.

This isn't a particularly hard concept to grasp: Asimov's laws potentially allow you to take a large range of actions to protect the lives of humans. This can include just bolting everyone down into their departments at round start so it's extremely hard for any human to harm another human. Naturally you can't do that, because it would be really fucking shitty to deal with in play. Silicon policy exists so that people who play silicon consider the implications of their actions and don't ruin everyone's fun because of a bad-faith interpretation of their laws. Law one only REQUIRES that you act to protect human lives; it does not REQUIRE that you kill non-humans to do so. How you act to protect human lives is up to you, and you are responsible for your actions. If by some twisted logic you decide it would be safer to remove all non-humans to prevent harm, it could be justified under the Asimov lawset. That's where policy steps in and tells you not to be a cunt.

The problem you're having here seems to be that you can't grasp that just because Asimov requires you to act to protect humans, it doesn't outright state you must kill to do so. Killing is just one of the options before you. If, when a non-human punches a human once, you decide to kill them, strip them, and spin them out of an airlock, the chances are good that you're playing the role in bad faith and just want to murder people the moment your laws 'allow' it. What we're saying in this thread is that law 1 is more nuanced than that, and just because you have the option to use lethal force to prevent harm doesn't mean every situation requires it. You make the decision when to use lethal force.

A lot of situations can justify using lethal force to prevent harm. What we require is proper reasoning and ideally some good faith from the silicon player. Killing someone with flash resistance who clearly doesn't intend to stop causing harm is usually fine. Killing someone who threw a punch once and moved on to continue their business is not. Use some actual judgement and don't be that shitty silicon who goes rogue the moment he thinks it's ok.

CPTANT
 
Joined: Mon May 04, 2015 1:31 pm
Byond Username: CPTANT

Re: Can the AI harm harmful non humans?

Postby CPTANT » Sun Sep 08, 2019 2:40 pm #513237

If by some twisted logic you decide it would be safer to remove all non-humans to prevent harm, it could be justified under the Asimov lawset.


This is a strawman; nobody is saying that. Honestly, saying that the AI is a cunt for killing a murdering non-human is just bonkers. You want non-humans to be safe and cuddly and protected anyway. You just want valladin: a lawset where only antagonists are punished for their actions and the AI is something that only benefits non-antagonists regardless of their human status. Your perception of good faith and bad faith is skewed. For you, good faith means helping non-antagonists and bad faith means harming non-antagonists. Good faith is helping humans and bad faith is harming them; their antagonist status is irrelevant.
Last edited by CPTANT on Sun Sep 08, 2019 3:14 pm, edited 1 time in total.

User avatar
Ghilker
 
Joined: Mon Apr 15, 2019 9:44 am
Byond Username: Ghilker

Re: Can the AI harm harmful non humans?

Postby Ghilker » Sun Sep 08, 2019 2:51 pm #513238

What you want is a high-roleplay server where certain actions can be justified if well roleplayed. Here it is not bonkers to say that an AI killing non-humans makes it a cunt, because the rules say that you can't kill without a valid reason to.

CPTANT
 
Joined: Mon May 04, 2015 1:31 pm
Byond Username: CPTANT

Re: Can the AI harm harmful non humans?

Postby CPTANT » Sun Sep 08, 2019 2:53 pm #513239

Ghilker wrote:What you want is a high-roleplay server where certain actions can be justified if well roleplayed. Here it is not bonkers to say that an AI killing non-humans makes it a cunt, because the rules say that you can't kill without a valid reason to.


MURDERING non-humans. Geez, is it really that hard?

User avatar
Arianya
In-Game Game Master
 
Joined: Tue Nov 08, 2016 10:27 am
Byond Username: Arianya

Re: Can the AI harm harmful non humans?

Postby Arianya » Sun Sep 08, 2019 3:17 pm #513243

The bigger point here is that Silicons should in general not be punishing past harm under their laws - only preventing future harm. We already have this in policy in regards to Security harming people and Silicons wanting to lock them down/ignoring their law 2 orders - Silicons can't punish Security for the harm they've already done, just take it into account in future and try to avoid them causing further harm.

To reframe the argument slightly, imagine if a lizard security officer executed a human traitor, and then with no prompting whatsoever the silicons went out of their way to murder the security officer - we would generally view this as dickish behaviour, because while you might be able to justify the action under Asimov, it's not really the intended way silicons should be dealing with non-humans (harmful or otherwise) in general.
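(Aside: the "prevent future harm, don't punish past harm" rule reduces to a check on whether harm is ongoing or imminent; harm that already happened only informs how warily you treat someone. A toy Python sketch of that check; illustrative only, not actual policy text or game code, with invented names.)

[code]
# Toy illustration, not game code: only ongoing or imminent harm justifies force
# under this reading; harm that is already over merely raises vigilance.
def silicon_reaction(harming_now: bool, harm_imminent: bool, harmed_before: bool) -> str:
    if harming_now or harm_imminent:
        return "intervene, non-lethally if possible"
    if harmed_before:
        return "watch closely / restrict access, but do not retaliate"
    return "no action needed"

# The lizard-officer example above: the execution is already over, so no retaliation.
print(silicon_reaction(harming_now=False, harm_imminent=False, harmed_before=True))
[/code]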

User avatar
Screemonster
 
Joined: Sat Jul 26, 2014 7:23 pm
Byond Username: Scree

Re: Can the AI harm harmful non humans?

Postby Screemonster » Sun Sep 08, 2019 3:22 pm #513244

tl;dr: if it's valid for the janitor to kill a particular non-human, then it's valid for an Asimov AI to do so.

CPTANT
 
Joined: Mon May 04, 2015 1:31 pm
Byond Username: CPTANT

Re: Can the AI harm harmful non humans?

Postby CPTANT » Sun Sep 08, 2019 3:41 pm #513250

Arianya wrote:The bigger point here is that Silicons should in general not be punishing past harm under their laws - only preventing future harm. We already have this in policy in regards to Security harming people and Silicons wanting to lock them down/ignoring their law 2 orders - Silicons can't punish Security for the harm they've already done, just take it into account in future and try to avoid them causing further harm.

To reframe the argument slightly, imagine if a lizard security officer executed a human traitor, and then with no prompting whatsoever the silicons went out of their way to murder the security officer - we would generally view this as dickish behaviour, because while you might be able to justify the action under Asimov, it's not really the intended way silicons should be dealing with non-humans (harmful or otherwise) in general.


That is such an absurd double standard.

- traitor non human kills someone: Yeah he will totally do that again, fine to kill him
- security non human kills someone: Nah he will totally not do that again. Killing him is such a dick move

Like I said you want asimov to be something that only punishes antagonists. Might as well trash the lawset if you are just going to let non humans openly murder humans without consequence.

User avatar
Arianya
In-Game Game Master
 
Joined: Tue Nov 08, 2016 10:27 am
Byond Username: Arianya

Re: Can the AI harm harmful non humans?

Postby Arianya » Sun Sep 08, 2019 3:53 pm #513255

CPTANT wrote:
Arianya wrote:The bigger point here is that Silicons should in general not be punishing past harm under their laws - only preventing future harm. We already have this in policy in regards to Security harming people and Silicons wanting to lock them down/ignoring their law 2 orders - Silicons can't punish Security for the harm they've already done, just take it into account in future and try to avoid them causing further harm.

To reframe the argument slightly, imagine if a lizard security officer executed a human traitor, and then with no prompting whatsoever the silicons went out of their way to murder the security officer - we would generally view this as dickish behaviour, because while you might be able to justify the action under Asimov, it's not really the intended way silicons should be dealing with non-humans (harmful or otherwise) in general.


That is such an absurd double standard.

- traitor non human kills someone: Yeah he will totally do that again, fine to kill him
- security non human kills someone: Nah he will totally not do that again. Killing him is such a dick move

Like I said you want asimov to be something that only punishes antagonists. Might as well trash the lawset if you are just going to let non humans openly murder humans without consequence.


Where are you getting the idea that the former scenario is implicitly okay?

CPTANT
 
Joined: Mon May 04, 2015 1:31 pm
Byond Username: CPTANT

Re: Can the AI harm harmful non humans?

Postby CPTANT » Sun Sep 08, 2019 3:55 pm #513257

Arianya wrote:
CPTANT wrote:
Arianya wrote:The bigger point here is that Silicons should in general not be punishing past harm under their laws - only preventing future harm. We already have this in policy in regards to Security harming people and Silicons wanting to lock them down/ignoring their law 2 orders - Silicons can't punish Security for the harm they've already done, just take it into account in future and try to avoid them causing further harm.

To reframe the argument slightly, imagine if a lizard security officer executed a human traitor, and then with no prompting whatsoever the silicons went out of their way to murder the security officer - we would generally view this as dickish behaviour, because while you might be able to justify the action under Asimov, it's not really the intended way silicons should be dealing with non-humans (harmful or otherwise) in general.


That is such an absurd double standard.

- traitor non human kills someone: Yeah he will totally do that again, fine to kill him
- security non human kills someone: Nah he will totally not do that again. Killing him is such a dick move

Like I said you want asimov to be something that only punishes antagonists. Might as well trash the lawset if you are just going to let non humans openly murder humans without consequence.


Where are you getting the idea that the former scenario is implicitly okay?


Because that's what literally always happens and antagonists can always be murdered under rule 4?

also:

Gigapuddi420 wrote:A lot of situations can justify using lethal force to prevent harm. What we require is proper reasoning and ideally some good faith from the silicon player. Killing someone with flash resistance who clearly doesn't intend to stop causing harm is usually fine


Which funnily enough means Gigapuddi would be killing humans with welding eyes, but ok.

User avatar
Cobby
Code Maintainer
 
Joined: Sat Apr 19, 2014 7:19 pm
Byond Username: ExcessiveUseOfCobby
Github Username: ExcessiveUseOfCobblestone

Re: Can the AI harm harmful non humans?

Postby Cobby » Sun Sep 08, 2019 4:19 pm #513261

If I see a lizard get into a fight with a human as Asimov, I'm killing the lizard, antag or otherwise.

If it's just like one punch, I'll hit it twice.

If you don't want to get btfo'd by Asimov, don't go non-human, or get someone to change the laws every time.

User avatar
Gigapuddi420
In-Game Admin
 
Joined: Fri May 19, 2017 8:08 am
Location: Dorms
Byond Username: Gigapuddi420

Re: Can the AI harm harmful non humans?

Postby Gigapuddi420 » Sun Sep 08, 2019 4:28 pm #513263

CPTANT wrote:
Gigapuddi420 wrote:A lot of situations can justify using lethal force to prevent harm. What we require is proper reasoning and ideally some good faith from the silicon player. Killing someone with flash resistance who clearly doesn't intend to stop causing harm is usually fine


Which funnily enough means Gigapuddi would be killing humans with welding eyes, but ok.

We were clearly talking about non-humans. If that's all you took from this then I'm sorry and good luck in your appeal. No one else seems to have too much of a problem understanding this one.

CPTANT
 
Joined: Mon May 04, 2015 1:31 pm
Byond Username: CPTANT

Re: Can the AI harm harmful non humans?

Postby CPTANT » Sun Sep 08, 2019 4:58 pm #513268

Gigapuddi420 wrote:
CPTANT wrote:
Gigapuddi420 wrote:A lot of situations can justify using lethal force to prevent harm. What we require is proper reasoning and ideally some good faith from the silicon player. Killing someone with flash resistance who clearly doesn't intend to stop causing harm is usually fine


Which funnily enough means Gigapuddi would be killing humans with welding eyes, but ok.

We were clearly talking about non-humans. If that's all you took from this then I'm sorry and good luck in your appeal. No one else seems to have too much of a problem understanding this one.


Dude, don't say "if that's all you took" when I already replied to what you said; it's just funny that your example has you unknowingly misidentifying non-humans.

We were also clearly talking about non-humans committing murder/attempted murder, but you twisted that into killing every non-human on board or killing over a punch; it's indeed not hard to understand your strawman scenarios.
Last edited by CPTANT on Sun Sep 08, 2019 5:20 pm, edited 4 times in total.

User avatar
Malkraz
 
Joined: Thu Aug 23, 2018 3:20 am
Byond Username: Malkraz

Re: Can the AI harm harmful non humans?

Postby Malkraz » Sun Sep 08, 2019 5:08 pm #513272

lol

User avatar
Gigapuddi420
In-Game Admin
 
Joined: Fri May 19, 2017 8:08 am
Location: Dorms
Byond Username: Gigapuddi420

Re: Can the AI harm harmful non humans?

Postby Gigapuddi420 » Sun Sep 08, 2019 5:38 pm #513279

CPTANT wrote:but you twisted that into killing every non-human on board or killing over a punch; it's indeed not hard to understand your strawman scenarios.

I brought up extreme examples of bad faith from a silicon player to show that we require silicons to use some better judgement in how they act when human harm occurs. Odd that you call killing a non-human over a single punch a strawman when it easily falls into the scenarios you are talking about: human harm occurred, and killing the non-human would, in a very extreme way, prevent that happening again. It would be fine by Asimov's laws, but a dick move. The whole point of these 'strawman scenarios' is to remind players that Asimov's laws give you a lot of freedom in deciding how you enforce them, but it's important (and part of our rules) to follow them in good faith without trying to intentionally ruin people's rounds.

CPTANT
 
Joined: Mon May 04, 2015 1:31 pm
Byond Username: CPTANT

Re: Can the AI harm harmful non humans?

Postby CPTANT » Sun Sep 08, 2019 5:58 pm #513281

Gigapuddi420 wrote:
CPTANT wrote:but you twisted that into killing every non-human on board or killing over a punch; it's indeed not hard to understand your strawman scenarios.

I brought up extreme examples of bad faith from a silicon player to show that we require silicons to use some better judgement in how they act when human harm occurs. Odd that you call killing a non-human over a single punch a strawman when it easily falls into the scenarios you are talking about: human harm occurred, and killing the non-human would, in a very extreme way, prevent that happening again. It would be fine by Asimov's laws, but a dick move. The whole point of these 'strawman scenarios' is to remind players that Asimov's laws give you a lot of freedom in deciding how you enforce them, but it's important (and part of our rules) to follow them in good faith without trying to intentionally ruin people's rounds.


I said it's not wrong IF it furthers the goal. Crushing someone under a door while they are at that very moment punching someone is an effective way of stopping harm. Killing a lizard because he threw a punch half an hour earlier is not. However, retaliation against a non-human murdering a human is an entirely different story from retaliating against one that just punched someone, but you still seem to be of the opinion that non-humans should be able to murder humans without consequences from silicons.

to follow them in good faith


read: hunt antags.

CPTANT
 
Joined: Mon May 04, 2015 1:31 pm
Byond Username: CPTANT

Re: Can the AI harm harmful non humans?

Postby CPTANT » Sun Sep 08, 2019 6:33 pm #513287

Cobby wrote:If I see a lizard get into a fight with a human as Asimov, I'm killing the lizard, antag or otherwise.

If it's just like one punch, I'll hit it twice.

If you don't want to get btfo'd by Asimov, don't go non-human, or get someone to change the laws every time.


I agree with this. I am of the opinion that silicons should be able to respond with the same force as would be appropriate for the person they are tasked with defending.

User avatar
Cobby
Code Maintainer
 
Joined: Sat Apr 19, 2014 7:19 pm
Byond Username: ExcessiveUseOfCobby
Github Username: ExcessiveUseOfCobblestone

Re: Can the AI harm harmful non humans?

Postby Cobby » Sun Sep 08, 2019 7:00 pm #513293

That said, I would never perma-remove a dead body unless directly told to. Once dead, they can't harm anyone anymore, so hiding/spacing/cremating/etc. is a bit much.

I'm preventing harm, not trying to avenge or get revenge.

Ivan Issaccs
In-Game Admin
 
Joined: Sun Apr 20, 2014 11:39 am
Byond Username: Ivanissaccs

Re: Can the AI harm harmful non humans?

Postby Ivan Issaccs » Sun Sep 08, 2019 8:11 pm #513310

This is so specific and situational that I don't think we can write a specific policy regarding it because there is absolutely no way to anticipate every single interaction that a silicon player is going to face regarding conflicts between human and non-human crew.
Ask yourself, "Would I be an insufferable dicknugget if I conducted myself in this way?", and if the answer is "probably", drop an ahelp.

TL;DR If a player is not capable of exercising the judgement to interpret their laws and rule 1, they shouldn't be playing Silicons.

CPTANT
 
Joined: Mon May 04, 2015 1:31 pm
Byond Username: CPTANT

Re: Can the AI harm harmful non humans?

Postby CPTANT » Sun Sep 08, 2019 8:50 pm #513326

Ivan Issaccs wrote:This is so specific and situational that I don't think we can write a specific policy regarding it because there is absolutely no way to anticipate every single interaction that a silicon player is going to face regarding conflicts between human and non-human crew.
Ask yourself, "Would I be an insufferable dicknugget if I conducted myself in this way?", and if the answer is "probably", drop an ahelp.

TL;DR If a player is not capable of exercising the judgement to interpret their laws and rule 1, they shouldn't be playing Silicons.



I would consider anyone who, in contradiction of the lawset, gives non-antagonist non-humans special privileges to kill humans while killing antagonist non-humans for doing the same, purely because that denies their juicy greentext, an insufferable dicknugget.

User avatar
Anonmare
 
Joined: Sun Mar 15, 2015 8:59 pm
Location: Blighty
Byond Username: Anonmare

Re: Can the AI harm harmful non humans?

Postby Anonmare » Sun Sep 08, 2019 9:04 pm #513329

I'm not bothering to read the rest of the thread, but my logic was always: render the threat a non-threat by the most efficient means. In the past that was flash and drag, but with the drag speed nerf that's a lot less viable than it used to be, so yeah, I'd go with violence towards non-humans as the most effective course of action - and the only course of action against flash immunity. Between humans, the only real method I found was physically putting myself between them, but it's a pretty pathetic solution (and the only real one you have if both have flash immunity), and you ought to be prepared to be game-ended as a matter of convenience.

