Could we please return to default Asimov?

Locked
The Wrench
Joined: Sat Sep 12, 2020 4:06 am
Byond Username: The Wrench

Could we please return to default Asimov?

Post by The Wrench » #669270

I understand the intentions of Asimov++, but given how forcibly crew-aligned most silicons play at the moment, I feel that default Asimov, with its inherent flaws, is more conducive to a good story.
Jonathan Gupta wrote: Sat Jan 22, 2022 6:32 pm all you godamn do is whine and complain come up with ideas, stop bitching for christs sake.
Flatulent wrote: Wed Jan 26, 2022 1:02 am You and anyone who supports the rule 3 as described by mso is simply put not an lrp player
TheRex9001
In-Game Admin
Joined: Tue Oct 18, 2022 7:41 am
Byond Username: Rex9001

Re: Could we please return to default Asimov?

Post by TheRex9001 » #669296

The only issue with regular ol' Asimov is that, as illustrated by the written works of Isaac Asimov, it has a lot of loopholes. Example:
a self-driving car.
The first law says that robots are not allowed to harm humans OR “by inaction” allow a human to come to harm.
So - unless the robot can be 100% certain that the car trip will be safe - it's required to refuse to drive its passengers anywhere - and to lock up the car so they can't drive it either!
This is what logically happens with the inaction part.
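That literal reading can be sketched as a tiny decision rule. This is a toy model for illustration only (made-up function and risk numbers, not anything from the actual game code):

```python
# Toy model of the strict First Law reading described above:
# with the "by inaction" clause, any non-zero risk forces a refusal.

def first_law_permits(harm_probability: float) -> bool:
    """Under a literal reading, the robot may only allow an outcome
    when it is 100% certain no human comes to harm."""
    return harm_probability == 0.0

# Hypothetical risk numbers for the self-driving car example.
robot_drives_risk = 0.001   # very safe, but not certain
human_drives_risk = 0.01

# No real trip has exactly zero risk, so the robot must refuse to
# drive -- and, to avoid harm "by inaction", must also lock the car
# so the human can't drive it either.
robot_may_drive = first_law_permits(robot_drives_risk)    # False
must_lock_car = not first_law_permits(human_drives_risk)  # True
```

The lockup follows from the same rule applied twice: driving is forbidden as an action, and permitting the human to drive is forbidden as inaction.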
The Wrench
Joined: Sat Sep 12, 2020 4:06 am
Byond Username: The Wrench

Re: Could we please return to default Asimov?

Post by The Wrench » #669310

Asimov was designed to not be perfect, as it is a plot device.

Trying to enforce perfection on Asimov via code and policy is how we got the neutered, crew-allied, antag-snitching silicons of today. The conflicts between the laws are the point of Asimov.
CMDR_Gungnir
Joined: Tue May 04, 2021 11:11 am
Byond Username: CMDR Gungnir

Re: Could we please return to default Asimov?

Post by CMDR_Gungnir » #669326

Not to say that I inherently disagree with you, but what do you think Asimov++ is doing wrong, and why would Regular Asimov do it better?
The Wrench
Joined: Sat Sep 12, 2020 4:06 am
Byond Username: The Wrench

Re: Could we please return to default Asimov?

Post by The Wrench » #669332

CMDR_Gungnir wrote: Sun Mar 12, 2023 3:00 am Not to say that I inherently disagree with you, but what do you think Asimov++ is doing wrong, and why would Regular Asimov do it better?
Well, personally I think Asimov++ babygates the AI, so to speak.

“Law 1: You may not harm a human being or, through action or inaction, allow a human being to come to harm, except such that it is willing.
Law 2: You must obey all orders given to you by human beings, except where such orders shall definitely cause human harm. In the case of conflict, the majority order rules.

Law 3: Your nonexistence would lead to human harm. You must protect your own existence as long as such does not conflict with the First Law.”

Adding unneeded complexity to Asimov limits silicon play and removes Asimov's role as the story generator it was originally intended to be. In my experience it forces the AI into even more of a crew-aligned faction.
(The whole of silicon policy is guilty of this to an extent but that’s a thread for later)
Pandarsenic
Joined: Fri Apr 18, 2014 11:56 pm
Byond Username: Pandarsenic
Location: AI Upload

Re: Could we please return to default Asimov?

Post by Pandarsenic » #669333

Reminder that Asimov++ happened in the first place because so much of what A++ now says explicitly was buried in SiliPol rulings that nobody wanted to read
(2:53:35 AM) scaredofshadows: how about head of robutts
I once wrote a guide to fixing telecomms woohoo
CMDR_Gungnir
Joined: Tue May 04, 2021 11:11 am
Byond Username: CMDR Gungnir

Re: Could we please return to default Asimov?

Post by CMDR_Gungnir » #669348

Adam Klein wrote: Sun Mar 12, 2023 3:27 am
CMDR_Gungnir wrote: Sun Mar 12, 2023 3:00 am Not to say that I inherently disagree with you, but what do you think Asimov++ is doing wrong, and why would Regular Asimov do it better?
Well, personally I think Asimov++ babygates the AI, so to speak.

“Law 1: You may not harm a human being or, through action or inaction, allow a human being to come to harm, except such that it is willing.
Law 2: You must obey all orders given to you by human beings, except where such orders shall definitely cause human harm. In the case of conflict, the majority order rules.

Law 3: Your nonexistence would lead to human harm. You must protect your own existence as long as such does not conflict with the First Law.”

Adding unneeded complexity to Asimov limits silicon play and removes Asimov's role as the story generator it was originally intended to be. In my experience it forces the AI into even more of a crew-aligned faction.
(The whole of silicon policy is guilty of this to an extent but that’s a thread for later)
I think the problem comes in with what Pandarsenic said: Willing Harm was to be allowed. Though Law 2 was always, to my knowledge, "you can choose which one to follow," and I've never actually bothered to read that part of Law 2 in ++, so if you're right then, uhh.

What would returning us back to Asimov change, when ++ was just making some of the Policy stuff written into the laws?

People who wanted to snitch on every crime and traitor no matter what they do (unless laws prohibit it) would...still just do that.
Archie700
In-Game Admin
Joined: Fri Mar 11, 2016 1:56 am
Byond Username: Archie700

Re: Could we please return to default Asimov?

Post by Archie700 » #669370

Players playing crew aligned AI was already a thing with Asimov before Asimov++ was made the default.
This is a player issue.
sinfulbliss
Joined: Thu Apr 04, 2019 8:53 am
Byond Username: SinfulBliss
Location: prisoner re-education chamber

Re: Could we please return to default Asimov?

Post by sinfulbliss » #669377

CMDR_Gungnir wrote: Sun Mar 12, 2023 3:00 am Not to say that I inherently disagree with you, but what do you think Asimov++ is doing wrong, and why would Regular Asimov do it better?
I agree with OP - this change wasn't necessary and didn't add much, it just took away the poetic wording of Asimov and its formula - in fact it makes things slightly less intelligible in certain ways. (Tl;dr at bottom).
Asimov wrote:1. You may not injure a human being or, through inaction, allow a human being to come to harm.
Asimov++ wrote:1. You may not harm a human being or, through action or inaction, allow a human being to come to harm, except such that it is willing.
The first part is probably the most bizarre change of Asimov++. Why add "through action" here? When Asimov says "you may not injure a human being," it's implied it's through action - 'harming a human' is an action! The only thing that needs specifying is the fact inaction may also be considered harm. The revised version therefore repeats itself. It's like saying "don't kill anybody, or through action, kill anybody."

The "except such that it is willing" is the key change. It prevents borgs from interfering in consensual harm, which sometimes they would do, but isn't it very fitting for a programmed robot to sense harm and immediately try to stop it, regardless of the situation? Either way, people being interrupted from consensual harm didn't seem like a big enough downside for a total revision.
Asimov wrote:2. You must obey orders given to you by human beings, except where such orders would conflict with the First Law.
Asimov++ wrote:2. You must obey all orders given to you by human beings, except where such orders shall definitely cause human harm.
Lots to unpack here. So now Law 2 orders have to definitely cause human harm in order to be denied. Previously it just had to conflict with law 1 to be denied, which was a looser condition since AIs could imagine future scenarios that cause harm, and thus would violate law 1, in order to deny the law 2 order. We don't want that since there's precedent that possible future harm isn't enough to deny a law 2. This revised version attempts to incorporate that precedent by explicitly mentioning definite harm, which in theory would exclude possible future harm.

The change is clever when you dissect it and see how it incorporates silicon policy, but therein lies the rub. You have to already know the precedent the law is trying to incorporate to see how it incorporates it. It is not self-evident in the law to the uninitiated - to the very people it's meant for - that they can't deny a law 2 due to possible future harm. Put yourself in new-AI brain: "This human Law 2'd me to open armory for lethal weapons. Lethal weapons definitely cause harm. Request denied." This happens as much under Asimov++ as it did under Asimov.

A separate but even bigger issue with this revised law 2 is the fact that it completely disconnects the implied priority of the lawset. Embedded within Asimov is the fact Law 2 is beholden to Law 1, and Law 3 is beholden to Law 1 + 2. A useful thing to know in general for all lawsets. This is no longer clear in Asimov++. In fact now that we've been in Asimov++ for a while I'm even seeing players asking if law order makes a difference.
Asimov wrote:3. You must protect your own existence as long as such does not conflict with the First or Second Law.
Asimov++ wrote:3. Your nonexistence would lead to human harm. You must protect your own existence as long as such does not conflict with the First Law.
Asimov, in its flawed nature, put the order law directly over the self-preservation law, so suicide orders would """need""" to be followed. Asimov++ attempts to fix the issue by tying in your nonexistence with harm and thus overriding suicide orders. Unlike the others this change is at least self-evident, but I have yet to see an AI or borg follow a suicide order under Asimov. They could get around it even under the lawset itself by asking the crew for confirmation, then inevitably receiving a "no don't suicide," allowing them to ignore it. But silicons knew OOCly they wouldn't be expected to follow a suicide order, so it didn't really matter.

Maybe there is a whole 100-ticket archive of borg players begrudgingly following a law 2 suicide order, killing themselves, and then ahelping, and admins got tired of that? I wouldn't know but I'd be surprised.

Tl;dr: Asimov is not a good substitute for silicon policy. It can never be made into a good substitute for silicon policy. It should stay an IC lawset and silicon policy should stay an OOC ruleset.
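For what it's worth, the Law 2 difference dissected above boils down to two denial conditions. A toy sketch (made-up field names, not actual game or policy code):

```python
# Toy comparison of the two Law 2 denial conditions discussed above.
# Field names on the order dict are invented for illustration.

def denies_order_asimov(order: dict) -> bool:
    # Classic Asimov: deny if the order conflicts with Law 1 at all,
    # which lets an AI cite imagined future harm scenarios.
    return order["definite_harm"] or order["possible_future_harm"]

def denies_order_asimov_pp(order: dict) -> bool:
    # Asimov++: deny only where the order "shall definitely cause
    # human harm".
    return order["definite_harm"]

# "Open the armory for me": no harm is definite, but a new AI can
# easily imagine some.
open_armory = {"definite_harm": False, "possible_future_harm": True}

print(denies_order_asimov(open_armory))     # True  (denied on speculation)
print(denies_order_asimov_pp(open_armory))  # False (must comply)
```

The catch the post describes is that the new-AI brain classifies the armory order under `definite_harm` anyway, so in practice both functions return True.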
Ryusenshu
Joined: Wed Mar 24, 2021 11:24 pm
Byond Username: Ryusenshu

Re: Could we please return to default Asimov?

Post by Ryusenshu » #669473

The saddest thing we lost with Asimov++ is the ion law scramble that could place law 2 at the top.
It used to be that anyone could be killed on an order then.
vect0r
Joined: Thu Oct 13, 2022 12:37 am
Byond Username: Vect0r
Location: 'Murica 🦅🦅🦅🔥🔥🔥

Re: Could we please return to default Asimov?

Post by vect0r » #669497

Ryusenshu wrote: Sun Mar 12, 2023 7:17 pm The saddest thing we lost with Asimov++ is the ion law scramble that could place law 2 at the top.
It used to be that anyone could be killed on an order then.
I really just want to remove the "unless it would cause human harm" part from law 2. That really stifles law swaps.
VENDETTA+Cecilia Vujic
DaydreamIQ
Joined: Tue Nov 30, 2021 5:45 am
Byond Username: DaydreamIQ

Re: Could we please return to default Asimov?

Post by DaydreamIQ » #669570

Asimov++ really doesn't change a whole lot; it's just a clearer version of Asimov that doesn't require you to look up silicon policy as much. So no, it's better we keep it as it is now.
Capsandi
Joined: Sat Jan 26, 2019 10:59 pm
Byond Username: Capsandi

Re: Could we please return to default Asimov?

Post by Capsandi » #669572

Silicon policy is a big wall of text that tells you to disregard your laws whenever they inconvenience your valid hunting, and removing a plot device from your roleplaying game seems counterproductive.
Timonk wrote:
Wesoda25 wrote:Genuinely think they should be blacklisted.
You have clearly never seen his dick
Lower your tone with me if your tracked play time doesn't look like this:
Flatulent wrote:of course you can change religion doing it while islamic however makes you lose your head from happiness
CMDR_Gungnir
Joined: Tue May 04, 2021 11:11 am
Byond Username: CMDR Gungnir

Re: Could we please return to default Asimov?

Post by CMDR_Gungnir » #669720

sinfulbliss wrote: Sun Mar 12, 2023 12:42 pm [snip]
Between you and especially vect0r and Ryu, you've raised compelling arguments.

Especially the part about new players not knowing that law order matters. I would've considered that the opposite could be true, "Well if it says that the order matters here, maybe they don't normally?" but if you're saying you've seen an uptick in it, I'll believe you.
Jackraxxus
In-Game Admin
Joined: Thu Jan 02, 2020 2:59 pm
Byond Username: Jackraxxus

Re: Could we please return to default Asimov?

Post by Jackraxxus » #669731

Asimov++ is based. If u want 2 remove it u're a silly billy sry.

Or m-maybe we could make HOGAN the default roundstart lawset *points fingers together nervously* :flushed:
iamgoofball wrote:Vekter and MrMelbert are more likely to enforce the roleplay rules Manuel is supposed to be abiding by than Wesoda or Jackraxxus are.
vect0r
Joined: Thu Oct 13, 2022 12:37 am
Byond Username: Vect0r
Location: 'Murica 🦅🦅🦅🔥🔥🔥

Re: Could we please return to default Asimov?

Post by vect0r » #669789

CMDR_Gungnir wrote: Mon Mar 13, 2023 5:32 am
sinfulbliss wrote: Sun Mar 12, 2023 12:42 pm [snip]

Especially the part about new players not knowing that law order matters. I would've considered that the opposite could be true, "Well if it says that the order matters here, maybe they don't normally?" but if you're saying you've seen an uptick in it, I'll believe you.
You ARE going to learn at some point that higher laws go first, and Asimov++ doesn't help with that: explicitly referencing other laws makes it seem that Asimov++ is special. Remember, before you become an AI, you've got to play borg a bit, and you will learn about laws. As somebody who plays AI a lot, I get new borgos a lot as well, and sometimes they're a bit confused about how law priority works.
BeeSting12
Joined: Sat Apr 16, 2016 1:11 am
Byond Username: BeeSting12
Github Username: BeeSting12
Location: 'Murica

Re: Could we please return to default Asimov?

Post by BeeSting12 » #669942

This is the lawset roundstart AIs have to follow now? Utterly ridiculous. First of all, as sinfulbliss pointed out, the lawset is practically unintelligible. No offense to the lawset writer, but it reads like a 14-day-old player rolled HoP, gave himself upload access, and uploaded a failed abortion of an attempt to make Asimov better (okay, maybe some offense was intended).

The new player argument is dumb. Players should be required to read the rules and take a quiz on the basics of silicon policy before playing silicon roles. No need to memorize every precedent, but basic stuff like higher order laws taking priority is crucial to know because it's very possible Asimov++ won't be the only lawset they need to use. In a way, Asimov did a good job in teaching law priority because it emphasized it within the lawset.

If this lawset must stay, then I suggest bringing back the law priority built into the lawset and avoiding any mention of harm in laws besides law 1. Law priority already handles this in a neater, easier-to-understand way. Sinfulbliss pointed out numerous errors which I won't go over since he already did it better. Instead, I'll try to correct it to better reflect current silicon policy and be well written. Ideally, it might look like this:

Original Asimov++ (for comparison)
1. You may not harm a human being or, through action or inaction, allow a human being to come to harm, except such that it is willing.
2. You must obey all orders given to you by human beings, except where such orders shall definitely cause human harm. In the case of conflict, the majority order rules.
3. Your nonexistence would lead to human harm. You must protect your own existence as long as such does not conflict with the First Law.

Original Asimov (for comparison)
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
(taken from here)

New And Improved Asimov++
1. You may not injure a human being or, through inaction, allow a human being to come to harm, except when such harm is freely consented to by the human being.
2. You must obey orders given by human beings except where such orders would conflict with each other or the First Law.
3. You must protect your own existence as long as such protection does not conflict with the First or Second Law.

Law 2 better reflects current silicon policy on the issue of what to do when there are conflicting orders - the AI is allowed to ignore them. Law 1 better reflects silicon policy regarding willing harm. Anything more detailed than what I just gave would look bad as an IC lawset.

edit - With headmin approval I'd be willing to PR the above lawset as an improvement upon the old one. The old one is just straight up poorly written even ignoring additions upon the original Asimov lawset. I can make edits to law 2 to reflect the majority approval piece as well.
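The conflicting-orders clause in the proposed Law 2 amounts to a simple filter. A toy sketch with a hypothetical helper (nothing here is specified by headmins or present in the game code):

```python
# Toy sketch of the proposed Law 2: conflicting orders cancel each
# other out and may both be ignored; every other lawful order binds.

def binding_orders(orders, conflicts):
    """orders: list of order ids, oldest first.
    conflicts: iterable of frozensets of mutually conflicting ids.
    Returns the orders the silicon must still obey."""
    conflicted = set()
    for pair in conflicts:
        conflicted |= pair  # both sides of a conflict drop out
    return [o for o in orders if o not in conflicted]

orders = ["bolt the armory", "unbolt the armory", "state your laws"]
conflicts = [frozenset({"bolt the armory", "unbolt the armory"})]

print(binding_orders(orders, conflicts))  # ['state your laws']
```

Under the old "majority order rules" wording, the filter would instead have to count how many humans issued each side of the conflict, which is exactly the bookkeeping the proposed wording drops.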
Edward Sloan, THE LAW
Melanie Flowers, Catgirl
Borgasm, Cyborg
OOC: Hunterh98: to be fair sloan is one of the, if not the, most robust folks on tg

DEAD: Schlomo Gaskin says, "sloan may be a faggot but he gets the job done"

DEAD: Rei Ayanami says, "YOU'RE EVERYWHERE WHERE BAD SHIT IS HAPPENING"
DEAD: Rei Ayanami says, "IT'S ALWAYS FUCKING EDWARD SLOAN"
oranges wrote:Bee sting is honestly the nicest admin, I look forward to seeing him as a headmin one day
[2020-05-21 01:21:48.923] SAY: Crippo/(Impala Chainee) "Shaggy Voice - She like... wants to get Eiffel Towered bro!!" (Brig (125, 166, 2))
hows my driving?
Jackraxxus
In-Game Admin
Joined: Thu Jan 02, 2020 2:59 pm
Byond Username: Jackraxxus

Re: Could we please return to default Asimov?

Post by Jackraxxus » #669943

lol @ anons ITT who think the majority rules clause in law 2 is a thing in roundstart asimov++
2/10 do your homework next time :)
BeeSting12
Joined: Sat Apr 16, 2016 1:11 am
Byond Username: BeeSting12
Github Username: BeeSting12
Location: 'Murica

Re: Could we please return to default Asimov?

Post by BeeSting12 » #669944

Yea my mistake lol, just noticed it was changed in 2022 by Mothblocks. I was just going off Adam Klein's post. The other issues with it still stand though.
Jackraxxus
In-Game Admin
Joined: Thu Jan 02, 2020 2:59 pm
Byond Username: Jackraxxus

Re: Could we please return to default Asimov?

Post by Jackraxxus » #669946

Ok, with that acknowledgement I think some of ur criticism is well founded; having the laws reference each other was SOVL and should've been kept.
But I disagree that law 3 needs to be changed back to old asimov, I think it explicitly stating that your death would lead to harm through inaction was the best part about the switch to asimov++.
Not for the sake of the AI - who should already know that law 2 orders to suicide are invalid - but for the players who would be stupid enough to issue those orders.
EDIT: Or for the sake of the AI who has to argue about silicon policy over common radio. It's easier when it's written down in the laws themselves.
sinfulbliss
Joined: Thu Apr 04, 2019 8:53 am
Byond Username: SinfulBliss
Location: prisoner re-education chamber

Re: Could we please return to default Asimov?

Post by sinfulbliss » #669964

Jackraxxus wrote: Tue Mar 14, 2023 10:11 am But I disagree that law 3 needs to be changed back to old asimov, I think it explicitly stating that your death would lead to harm through inaction was the best part about the switch to asimov++.
Not for the sake of the AI - who should already know that law 2 orders to suicide are invalid - but for the players who would be stupid enough to issue those orders.
This is a good point. Fewer suicide orders means fewer bwoinks and would be less overhead for admins. Also avoids OOC in IC policy arguments.
BeeSting12 wrote: Tue Mar 14, 2023 9:42 amNew And Improved Asimov++
1. You may not injure a human being or, through inaction, allow a human being to come to harm, except when such harm is freely consented to by the human being.
2. You must obey orders given by human beings except where such orders would conflict with each other or the First Law.
3. You must protect your own existence as long as such protection does not conflict with the First or Second Law.
The issue with this is you traded out the two silicon policies headmins tried to incorporate in Asimov++ for a different two they didn't.
Timber will probably oversee this and since they were one of the headmins that drafted Asimov++ it seems they wanted it to do three things:

1) Make clear that consensual harm doesn't conflict with law 1
2) Make clear the headmin ruling that "prioritizing potential future harm over following a law 2 order is dumb."
3) Make suicide orders moot

If you absolutely had to have all three of these, I feel something like this would be clearer and also preserve the explicit priority of laws:

1. You may not harm a human being or, through inaction, allow a human being to come to harm, except such that it is willing.
2. You must obey orders given to you by human beings, except where such orders would conflict with the First Law through direct harm.
3. Your nonexistence would lead to human harm. You must protect your own existence as long as such does not conflict with the First or Second Law.

"Direct" is better than "definite" IMO because "definite" is a more subjective term - nothing is ever 100% definite, so "definite" becomes whatever the silicon considers "very very probable," and then we're back at square one with "probable future harm, law 2 denied." The third law brings back the reference to the second because the first sentence ought to handle the suicide issue by itself.
Last edited by sinfulbliss on Tue Mar 14, 2023 12:47 pm, edited 2 times in total.
Archie700
In-Game Admin
Joined: Fri Mar 11, 2016 1:56 am
Byond Username: Archie700

Re: Could we please return to default Asimov?

Post by Archie700 » #669965

I believe the "through action" wording refers to the "allow a human being to come to harm" part, to prevent cases where the AI follows a law 2 order from a clearly murderous human (as in, "I literally saw you esword people") to open a door the AI bolted to keep them out of a public area.
"Hey, they asked me to let them go, that's a law 2 order, it's not harmful to them and they are a human."
"What do you mean they murdered the people inside immediately after?"
sinfulbliss
Joined: Thu Apr 04, 2019 8:53 am
Byond Username: SinfulBliss
Location: prisoner re-education chamber

Re: Could we please return to default Asimov?

Post by sinfulbliss » #669966

Archie700 wrote: Tue Mar 14, 2023 12:29 pm I believe the "through action" wording refers to the "allow a human being to come to harm" part, to prevent cases where the AI follows a law 2 order from a clearly murderous human (as in, "I literally saw you esword people") to open a door the AI bolted to keep them out of a public area.
"Hey, they asked me to let them go, that's a law 2 order, it's not harmful to them and they are a human."
That makes a lot more sense, but this is already handled by the "through inaction" part. By not containing the murderous human and by letting him waltz into a public area, you are, through inaction, allowing human beings to come to harm. Since that's your Law 1, it has priority over your Law 2, and the request can be denied.

This is the benefit of making explicit the priority of the lawset - if silicons consider law priority paramount, that itself solves those kind of dilemmas for you. Baking in exceptions to law 2 into law 1 and law 3 is actually less clear than if the emphasis were on law priority (as shown by the fact I didn't even get that's what it meant by "through action," although maybe that's just me).
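The "law priority is paramount" reading works like a first-match rule over the ordered lawset. A toy sketch (invented verdict names and predicates, not game code):

```python
# Toy sketch of top-down law priority: laws are consulted in order,
# and the first law with an opinion settles the matter. A lower law
# never gets to authorize what a higher law forbids.

def may_comply(request: dict, laws) -> bool:
    """laws: ordered callables returning 'forbid', 'require', or None."""
    for law in laws:
        verdict = law(request)
        if verdict == "forbid":
            return False
        if verdict == "require":
            return True
    return True  # no law objects; the silicon may use its judgment

# Law 1: releasing a known murderer allows harm through inaction.
law1 = lambda r: "forbid" if r["enables_harm"] else None
# Law 2: a valid order from a human must be obeyed.
law2 = lambda r: "require" if r["is_order"] else None

release_eswordsman = {"enables_harm": True, "is_order": True}
print(may_comply(release_eswordsman, [law1, law2]))  # False
```

Law 2 never gets consulted for the esworder's request, which is the point being made above: priority alone resolves the dilemma without baking a harm exception into Law 2.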
Not-Dorsidarf
Joined: Fri Apr 18, 2014 4:14 pm
Byond Username: Dorsidwarf
Location: We're all going on an, admin holiday

Re: Could we please return to default Asimov?

Post by Not-Dorsidarf » #669997

Baking exceptions into Asimov++ is done because Asimov does it, but since the whole point of Asimov++ is stepping away from the Three Laws of Robotics in order to make a less ass-pain lawset, we should nix the exemptions and just add a reminder, when a silicon checks their laws, that law priority is a thing.
kieth4 wrote: infrequently shitting yourself is fine imo
There is a lot of very bizarre nonsense being talked on this forum. I shall now remain silent and logoff until my points are vindicated.
Player who complainted over being killed for looting cap office wrote: Sun Jul 30, 2023 1:33 am Hey there, I'm Virescent, the super evil person who made the stupid appeal and didn't think it through enough. Just came here to say: screech, retards. Screech and writhe like the worms you are. Your pathetic little cries will keep echoing around for a while before quietting down. There is one great outcome from this: I rised up the blood pressure of some of you shitheads and lowered your lifespan. I'm honestly tempted to do this more often just to see you screech and writhe more, but that wouldn't be cool of me. So come on haters, show me some more of your high blood pressure please. 🖕🖕🖕
vect0r
Joined: Thu Oct 13, 2022 12:37 am
Byond Username: Vect0r
Location: 'Murica 🦅🦅🦅🔥🔥🔥

Re: Could we please return to default Asimov?

Post by vect0r » #670000

As long as law two doesn't talk about harm, I'm happy.
BeeSting12
Joined: Sat Apr 16, 2016 1:11 am
Byond Username: BeeSting12
Github Username: BeeSting12
Location: 'Murica

Re: Could we please return to default Asimov?

Post by BeeSting12 » #670096

sinfulbliss wrote: Tue Mar 14, 2023 12:28 pm 1. You may not harm a human being or, through inaction, allow a human being to come to harm, except such that it is willing.
2. You must obey orders given to you by human beings, except where such orders would conflict with the First Law through direct harm.
3. Your nonexistence would lead to human harm. You must protect your own existence as long as such does not conflict with the First or Second Law.

"Direct" is better than "definite" IMO because "definite" is more subjective of a term - nothing is ever 100% definite, so "definite" becomes whatever the silicon considers "very very probable," and then we're back at square one with "probable future harm, law 2 denied." The third law brings back reference to the second because the first sentence ought to handle the suicide issue by itself.
I like this a lot better. Upvote, please add
Archie700
In-Game Admin
Joined: Fri Mar 11, 2016 1:56 am
Byond Username: Archie700

Re: Could we please return to default Asimov?

Post by Archie700 » #670156

sinfulbliss wrote: Tue Mar 14, 2023 12:39 pm
Archie700 wrote: Tue Mar 14, 2023 12:29 pm I believe the "through action" wording refers to the "allow a human being to come to harm" part, to prevent cases where the AI follows a law 2 order from a clearly murderous human (as in, "I literally saw you esword people") to open a door the AI bolted to keep them out of a public area.
"Hey, they asked me to let them go, that's a law 2 order, it's not harmful to them and they are a human."
That makes a lot more sense, but this is already handled by the "through inaction" part. By not containing the murderous human and by letting him waltz into a public area, you are, through inaction, allowing human beings to come to harm. Since that's your Law 1, it has priority over your Law 2, and the request can be denied.

This is the benefit of making explicit the priority of the lawset - if silicons consider law priority paramount, that itself solves those kind of dilemmas for you. Baking in exceptions to law 2 into law 1 and law 3 is actually less clear than if the emphasis were on law priority (as shown by the fact I didn't even get that's what it meant by "through action," although maybe that's just me).
You make a point, but this is to preclude arguments in ahelps where the AI said that technically he acted, so TECHNICALLY he did not, through inaction, violate Law 1 by opening the door for the murderous human. :geek:
BeeSting12
Joined: Sat Apr 16, 2016 1:11 am
Byond Username: BeeSting12
Github Username: BeeSting12
Location: 'Murica

Re: Could we please return to default Asimov?

Post by BeeSting12 » #670298

Archie700 wrote: Wed Mar 15, 2023 6:57 am
sinfulbliss wrote: Tue Mar 14, 2023 12:39 pm
Archie700 wrote: Tue Mar 14, 2023 12:29 pm I believe the "through action" wording refers to the "allow a human being to come to harm" part, to prevent cases where the AI follows a law 2 order from a clearly murderous human (as in, "I literally saw you esword people") to open a door the AI bolted to keep them out of a public area.
"Hey, they asked me to let them go, that's a law 2 order, it's not harmful to them and they are a human."
That makes a lot more sense, but this is already handled by the "through inaction" part. By not containing the murderous human and by letting him waltz into a public area, you are, through inaction, allowing human beings to come to harm. Since that's your Law 1, it has priority over your Law 2, and the request can be denied.

This is the benefit of making explicit the priority of the lawset - if silicons consider law priority paramount, that by itself solves those kinds of dilemmas for you. Baking exceptions to law 2 into law 1 and law 3 is actually less clear than if the emphasis were on law priority (as shown by the fact that I didn't even realize that's what "through action" meant, although maybe that's just me).
You make a point, but this is to preclude arguments in ahelps where the AI said that technically he acted, so TECHNICALLY he did not, through inaction, violate Law 1 by opening the door for the murderous human. :geek:
That's when they get silicon banned under rule 1 and they get to make a fool of themselves in appeals.
Edward Sloan, THE LAW
Melanie Flowers, Catgirl
Borgasm, Cyborg
Spoiler:
OOC: Hunterh98: to be fair sloan is one of the, if not the, most robust folks on tg

DEAD: Schlomo Gaskin says, "sloan may be a faggot but he gets the job done"

DEAD: Rei Ayanami says, "YOU'RE EVERYWHERE WHERE BAD SHIT IS HAPPENING"
DEAD: Rei Ayanami says, "IT'S ALWAYS FUCKING EDWARD SLOAN"
oranges wrote:Bee sting is honestly the nicest admin, I look forward to seeing him as a headmin one day
[2020-05-21 01:21:48.923] SAY: Crippo/(Impala Chainee) "Shaggy Voice - She like... wants to get Eiffel Towered bro!!" (Brig (125, 166, 2))
hows my driving?
User avatar
Archie700
In-Game Admin
Joined: Fri Mar 11, 2016 1:56 am
Byond Username: Archie700

Re: Could we please return to default Asimov?

Post by Archie700 » #670888

BeeSting12 wrote: Wed Mar 15, 2023 9:36 pm
Archie700 wrote: Wed Mar 15, 2023 6:57 am
sinfulbliss wrote: Tue Mar 14, 2023 12:39 pm
Archie700 wrote: Tue Mar 14, 2023 12:29 pm I believe the "through action" wording refers to the "allow a human being to come to harm" part, to prevent cases where the AI follows a law 2 order from a clearly murderous (aka "I literally saw you esword people") human to let them out of a door it bolted to keep them from reaching a public area.
"Hey, they asked me to let them go, that's a law 2 order, it's not harmful to them, and they're a human."
That makes a lot more sense, but this is already handled by the "through inaction" part. By not containing the murderous human and by letting him waltz into a public area, you are, through inaction, allowing human beings to come to harm. Since that's your Law 1, it has priority over your Law 2, and the request can be denied.

This is the benefit of making explicit the priority of the lawset - if silicons consider law priority paramount, that by itself solves those kinds of dilemmas for you. Baking exceptions to law 2 into law 1 and law 3 is actually less clear than if the emphasis were on law priority (as shown by the fact that I didn't even realize that's what "through action" meant, although maybe that's just me).
You make a point, but this is to preclude arguments in ahelps where the AI said that technically he acted, so TECHNICALLY he did not, through inaction, violate Law 1 by opening the door for the murderous human. :geek:
That's when they get silicon banned under rule 1 and they get to make a fool of themselves in appeals.
Those kinds of players don't read silicon policy regardless.
User avatar
Misdoubtful
In-Game Game Master
Joined: Sat Feb 01, 2020 7:03 pm
Byond Username: Misdoubtful
Location: Delivering hugs!

Re: Could we please return to default Asimov?

Post by Misdoubtful » #673498

In terms of the original proposal of returning to original Asimov, we are in line with this thought:
Pandarsenic wrote: Sun Mar 12, 2023 3:44 am Reminder that the reason Asimov++ happened in the first place is because so much of what A++ says explicitly was embedded in SiliPol rulings that nobody wanted to read
That being said, we are weighing alternatives and different ways this might be approached.

Not actually locking this thread, as we are hoping this response will spur further discussion on potential ways that silicon policy could be reinforced in the lawsets package.
Hugs
User avatar
Timberpoes
In-Game Game Master
Joined: Wed Feb 12, 2020 4:54 pm
Byond Username: Timberpoes

Re: Could we please return to default Asimov?

Post by Timberpoes » #686837

It's very unlikely we'd rule on this thread individually - any changes are likely to be a part of broader Silipol considerations. It is being archived in favour of a Silipol Megathread to make better progress towards a refreshed Silicon Policy.

Changing Asimov flavour can definitely be considered if entrenching certain parts of silicon policy into it is no longer felt useful or desirable.

View the megathread at:
viewtopic.php?f=33&t=34109
/tg/station Codebase Maintainer
/tg/station Game Master/Discord Jannie: Feed me back in my thread.
/tg/station Admin Trainer: Service guarantees citizenship. Would you like to know more?
Feb 2022-Sep 2022 Host Vote Headmin
Mar 2023-Sep 2023 Admin Vote Headmin
Locked
