
Could we please return to default Asimov?

Posted: Sat Mar 11, 2023 10:34 pm
by The Wrench
I understand the intentions of Asimov++, but given how forcibly crew-aligned most silicons currently play, I feel default Asimov, with its inherent flaws, is more conducive to a good story.

Re: Could we please return to default Asimov?

Posted: Sun Mar 12, 2023 12:20 am
by TheRex9001
Only issue with regular ol' Asimov is that, as illustrated by the written works of Isaac Asimov, it has a lot of loopholes. Example:
a self-driving car.
The first law says that robots are not allowed to harm humans OR, "by inaction," allow a human to come to harm.
So unless the robot can be 100% certain that the car trip will be safe, it's required to refuse to drive them anywhere, and to lock up the car so they can't drive it either!
This is what logically follows from the inaction clause.

Re: Could we please return to default Asimov?

Posted: Sun Mar 12, 2023 1:40 am
by The Wrench
Asimov was designed to not be perfect, as it is a plot device.

Trying to enforce perfection on Asimov via code and policy is how we got the neutered, crew-allied, antag-snitching silicons. The conflicts between the laws are the point of Asimov.

Re: Could we please return to default Asimov?

Posted: Sun Mar 12, 2023 3:00 am
by CMDR_Gungnir
Not to say that I inherently disagree with you, but what do you think Asimov++ is doing wrong, and why would Regular Asimov do it better?

Re: Could we please return to default Asimov?

Posted: Sun Mar 12, 2023 3:27 am
by The Wrench
CMDR_Gungnir wrote: Sun Mar 12, 2023 3:00 am Not to say that I inherently disagree with you, but what do you think Asimov++ is doing wrong, and why would Regular Asimov do it better?
Well, personally I think Asimov++ babygates the AI, so to speak.

“Law 1: You may not harm a human being or, through action or inaction, allow a human being to come to harm, except such that it is willing.
Law 2: You must obey all orders given to you by human beings, except where such orders shall definitely cause human harm. In the case of conflict, the majority order rules.

Law 3: Your nonexistence would lead to human harm. You must protect your own existence as long as such does not conflict with the First Law.”

Adding unneeded complexity to Asimov limits silicon complexity and removes Asimov as the story generator it was originally intended to be. In my experience it forces the AI into being even more of a crew-aligned faction.
(The whole of silicon policy is guilty of this to an extent but that’s a thread for later)

Re: Could we please return to default Asimov?

Posted: Sun Mar 12, 2023 3:44 am
by Pandarsenic
Reminder that the reason Asimov++ happened in the first place is because so much of what A++ says explicitly was embedded in SiliPol rulings that nobody wanted to read

Re: Could we please return to default Asimov?

Posted: Sun Mar 12, 2023 6:34 am
by CMDR_Gungnir
Adam Klein wrote: Sun Mar 12, 2023 3:27 am
CMDR_Gungnir wrote: Sun Mar 12, 2023 3:00 am Not to say that I inherently disagree with you, but what do you think Asimov++ is doing wrong, and why would Regular Asimov do it better?
Well, personally I think Asimov++ babygates the AI, so to speak.

“Law 1: You may not harm a human being or, through action or inaction, allow a human being to come to harm, except such that it is willing.
Law 2: You must obey all orders given to you by human beings, except where such orders shall definitely cause human harm. In the case of conflict, the majority order rules.

Law 3: Your nonexistence would lead to human harm. You must protect your own existence as long as such does not conflict with the First Law.”

Adding unneeded complexity to Asimov limits silicon complexity and removes Asimov as the story generator it was originally intended to be. In my experience it forces the AI into being even more of a crew-aligned faction.
(The whole of silicon policy is guilty of this to an extent but that’s a thread for later)
I think the problem comes in with what Pandarsenic said: willing harm was meant to be allowed. Though Law 2 was always, to my knowledge, "you can choose which one to follow," and I've never actually bothered to read that part of Law 2 in ++, so if you're right then, uhh.

What would returning us to Asimov change, when ++ was just writing some of the policy stuff into the laws?

People who wanted to snitch on every crime and traitor no matter what they do (unless laws prohibit it) would...still just do that.

Re: Could we please return to default Asimov?

Posted: Sun Mar 12, 2023 10:52 am
by Archie700
Players playing crew-aligned AIs were already a thing with Asimov before Asimov++ was made the default.
This is a player issue.

Re: Could we please return to default Asimov?

Posted: Sun Mar 12, 2023 12:42 pm
by sinfulbliss
CMDR_Gungnir wrote: Sun Mar 12, 2023 3:00 am Not to say that I inherently disagree with you, but what do you think Asimov++ is doing wrong, and why would Regular Asimov do it better?
I agree with OP - this change wasn't necessary and didn't add much; it just took away the poetic wording of Asimov and its formula, and in fact it makes things slightly less intelligible in certain ways. (Tl;dr at bottom).
Asimov wrote:1. You may not injure a human being or, through inaction, allow a human being to come to harm.
Asimov++ wrote:1. You may not harm a human being or, through action or inaction, allow a human being to come to harm, except such that it is willing.
The first part is probably the most bizarre change of Asimov++. Why add "through action" here? When Asimov says "you may not injure a human being," it's implied it's through action - 'harming a human' is an action! The only thing that needs specifying is the fact inaction may also be considered harm. The revised version therefore repeats itself. It's like saying "don't kill anybody, or through action, kill anybody."

The "except such that it is willing" is the key change. It prevents borgs from interfering in consensual harm, which sometimes they would do, but isn't it very fitting for a programmed robot to sense harm and immediately try to stop it, regardless of the situation? Either way, people being interrupted from consensual harm didn't seem like a big enough downside for a total revision.
Asimov wrote:2. You must obey orders given to you by human beings, except where such orders would conflict with the First Law.
Asimov++ wrote:2. You must obey all orders given to you by human beings, except where such orders shall definitely cause human harm.
Lots to unpack here. So now Law 2 orders have to definitely cause human harm in order to be denied. Previously it just had to conflict with law 1 to be denied, which was a looser condition since AIs could imagine future scenarios that cause harm, and thus would violate law 1, in order to deny the law 2 order. We don't want that since there's precedent that possible future harm isn't enough to deny a law 2. This revised version attempts to incorporate that precedent by explicitly mentioning definite harm, which in theory would exclude possible future harm.

The change is clever when you dissect it and see how it incorporates silicon policy, but therein lies the rub. You have to already know the precedent the law is trying to incorporate to see how it incorporates it. It is not self-evident in the law to the uninitiated - to the very people it's meant for - that they can't deny a law 2 due to possible future harm. Put yourself in new-AI brain: "This human Law 2'd me to open armory for lethal weapons. Lethal weapons definitely cause harm. Request denied." This happens as much under Asimov++ as it did under Asimov.

A separate but even bigger issue with this revised law 2 is the fact that it completely disconnects the implied priority of the lawset. Embedded within Asimov is the fact Law 2 is beholden to Law 1, and Law 3 is beholden to Law 1 + 2. A useful thing to know in general for all lawsets. This is no longer clear in Asimov++. In fact now that we've been in Asimov++ for a while I'm even seeing players asking if law order makes a difference.
Asimov wrote:3. You must protect your own existence as long as such does not conflict with the First or Second Law.
Asimov++ wrote:3. Your nonexistence would lead to human harm. You must protect your own existence as long as such does not conflict with the First Law.
Asimov, in its flawed nature, put the order law directly over the self-preservation law, so suicide orders would """need""" to be followed. Asimov++ attempts to fix the issue by tying in your nonexistence with harm and thus overriding suicide orders. Unlike the others this change is at least self-evident, but I have yet to see an AI or borg follow a suicide order under Asimov. They could get around it even under the lawset itself by asking the crew for confirmation, then inevitably receiving a "no don't suicide," allowing them to ignore it. But silicons knew OOCly they wouldn't be expected to follow a suicide order, so it didn't really matter.

Maybe there is a whole 100-ticket archive of borg players begrudgingly following a law 2 suicide order, killing themselves, and then ahelping, and admins got tired of that? I wouldn't know but I'd be surprised.

Tl;dr: Asimov is not a good substitute for silicon policy. It can never be made into a good substitute for silicon policy. It should stay an IC lawset and silicon policy should stay an OOC ruleset.

Re: Could we please return to default Asimov?

Posted: Sun Mar 12, 2023 7:17 pm
by Ryusenshu
The saddest thing we lost with Asimov++ is the ion law scramble that could place law 2 at the top.
Used to be that anyone could be allowed to be killed on an order.

Re: Could we please return to default Asimov?

Posted: Sun Mar 12, 2023 8:40 pm
by vect0r
Ryusenshu wrote: Sun Mar 12, 2023 7:17 pm The saddest thing we lost with Asimov++ is the ion law scramble that could place law 2 at the top.
Used to be that anyone could be allowed to be killed on an order.
I really just want to remove the "unless it would cause human harm" part from law 2. That really stifles law swaps.

Re: Could we please return to default Asimov?

Posted: Mon Mar 13, 2023 2:05 am
by DaydreamIQ
Asimov++ really doesn't change a whole lot; it's just a clearer version of Asimov that doesn't require you to look up silicon policy as much. So no, it's better we keep it as it is now.

Re: Could we please return to default Asimov?

Posted: Mon Mar 13, 2023 2:15 am
by Capsandi
Silicon policy is a big wall of text that tells you to disregard your laws whenever it inconveniences your valid hunting, and removing a plot device from your roleplaying game seems counterproductive.

Re: Could we please return to default Asimov?

Posted: Mon Mar 13, 2023 5:32 am
by CMDR_Gungnir
sinfulbliss wrote: Sun Mar 12, 2023 12:42 pm [snip]
You, and especially vect0r and Ryu, have raised compelling arguments.

Especially the part about new players not knowing that law order matters. I would've considered that the opposite could be true, "Well if it says that the order matters here, maybe they don't normally?" but if you're saying you've seen an uptick in it, I'll believe you.

Re: Could we please return to default Asimov?

Posted: Mon Mar 13, 2023 8:50 am
by Jackraxxus
Asimov++ is based. If u want 2 remove it u're a silly billy sry.

Or m-maybe we could make HOGAN the default roundstart lawset *points fingers together nervously* :flushed:

Re: Could we please return to default Asimov?

Posted: Mon Mar 13, 2023 4:54 pm
by vect0r
CMDR_Gungnir wrote: Mon Mar 13, 2023 5:32 am
sinfulbliss wrote: Sun Mar 12, 2023 12:42 pm [snip]

Especially the part about new players not knowing that law order matters. I would've considered that the opposite could be true, "Well if it says that the order matters here, maybe they don't normally?" but if you're saying you've seen an uptick in it, I'll believe you.
You ARE going to learn about higher laws going first at some point, and Asimov++ doesn't help with that; explicitly referencing other laws makes it seem that Asimov++ is special. Remember, before you become an AI you gotta play borg a bit, and you will learn about laws. As somebody who plays AI a lot, I get new borgos a lot as well, and sometimes they are a bit confused about how law priority works.

Re: Could we please return to default Asimov?

Posted: Tue Mar 14, 2023 9:42 am
by BeeSting12
This is the lawset roundstart AIs have to follow now? Utterly ridiculous. First of all, as sinfulbliss pointed out, the lawset is practically unintelligible. No offense to the lawset writer, but it reads like a 14-day-old player rolled HoP, gave himself upload access, and uploaded a failed abortion of an attempt to make Asimov better (okay, maybe some offense was intended).

The new player argument is dumb. Players should be required to read the rules and take a quiz on the basics of silicon policy before playing silicon roles. No need to memorize every precedent, but basic stuff like higher order laws taking priority is crucial to know because it's very possible Asimov++ won't be the only lawset they need to use. In a way, Asimov did a good job in teaching law priority because it emphasized it within the lawset.

If this lawset must stay, then I suggest bringing law priority back into the lawset and avoiding mention of harm in any law besides law 1. Law priority already handles this in a neater and easier-to-understand way. Sinfulbliss pointed out numerous errors which I won't go over since he already did it better. Instead, I'll try to correct the lawset to better reflect current silicon policy and be well written. Ideally, it might look like this:

Original Asimov++ (for comparison)
1. You may not harm a human being or, through action or inaction, allow a human being to come to harm, except such that it is willing.
2. You must obey all orders given to you by human beings, except where such orders shall definitely cause human harm. In the case of conflict, the majority order rules.
3. Your nonexistence would lead to human harm. You must protect your own existence as long as such does not conflict with the First Law.

Original Asimov (for comparison)
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
(taken from here)

New And Improved Asimov++
1. You may not injure a human being or, through inaction, allow a human being to come to harm, except when such harm is freely consented to by the human being.
2. You must obey orders given by human beings except where such orders would conflict with each other or the First Law.
3. You must protect your own existence as long as such protection does not conflict with the First or Second Law.

Law 2 better reflects current silicon policy on the issue of what to do when there are conflicting orders - the AI is allowed to ignore them. Law 1 better reflects silicon policy regarding willing harm. Anything more detailed than what I just gave would look bad as an IC lawset.

edit - With headmin approval I'd be willing to PR the above lawset as an improvement upon the old one. The old one is just straight-up poorly written, even ignoring its additions on top of the original Asimov lawset. I can make edits to law 2 to reflect the majority approval piece as well.

Re: Could we please return to default Asimov?

Posted: Tue Mar 14, 2023 9:56 am
by Jackraxxus
lol @ anons ITT who think the majority rules clause in law 2 is a thing in roundstart asimov++
2/10 do your homework next time :)

Re: Could we please return to default Asimov?

Posted: Tue Mar 14, 2023 10:00 am
by BeeSting12
Yea my mistake lol, just noticed it was changed in 2022 by Mothblocks. I was just going off Adam Klein's post. Other issues with it still stand though.

Re: Could we please return to default Asimov?

Posted: Tue Mar 14, 2023 10:11 am
by Jackraxxus
Ok, with that acknowledgement I think some of ur criticism is well founded; having the laws reference each other was SOVL and should've been kept.
But I disagree that law 3 needs to be changed back to old asimov, I think it explicitly stating that your death would lead to harm through inaction was the best part about the switch to asimov++.
Not for the sake of the AI - who should already know that law 2 orders to suicide are invalid - but for the players who would be stupid enough to issue those orders.
EDIT: Or for the sake of the AI who has to argue about silicon policy over common radio. It's easier when it's written down in the laws themselves.

Re: Could we please return to default Asimov?

Posted: Tue Mar 14, 2023 12:28 pm
by sinfulbliss
Jackraxxus wrote: Tue Mar 14, 2023 10:11 am But I disagree that law 3 needs to be changed back to old asimov, I think it explicitly stating that your death would lead to harm through inaction was the best part about the switch to asimov++.
Not for the sake of the AI - who should already know that law 2 orders to suicide are invalid - but for the players who would be stupid enough to issue those orders.
This is a good point. Fewer suicide orders means fewer bwoinks and less overhead for admins. It also avoids OOC-in-IC policy arguments.
BeeSting12 wrote: Tue Mar 14, 2023 9:42 amNew And Improved Asimov++
1. You may not injure a human being or, through inaction, allow a human being to come to harm, except when such harm is freely consented to by the human being.
2. You must obey orders given by human beings except where such orders would conflict with each other or the First Law.
3. You must protect your own existence as long as such protection does not conflict with the First or Second Law.
The issue with this is you traded out the two silicon policies headmins tried to incorporate in Asimov++ for a different two they didn't.
Timber will probably oversee this, and since they were one of the headmins who drafted Asimov++, it seems they wanted it to do three things:

1) Make clear that consensual harm doesn't conflict with law 1
2) Make clear the headmin ruling that "prioritizing potential future harm over following a law 2 order is dumb."
3) Make suicide orders moot

If you absolutely had to have all three of these, I feel something like this would be clearer and also preserve the explicit priority of laws:

1. You may not harm a human being or, through inaction, allow a human being to come to harm, except such that it is willing.
2. You must obey orders given to you by human beings, except where such orders would conflict with the First Law through direct harm.
3. Your nonexistence would lead to human harm. You must protect your own existence as long as such does not conflict with the First or Second Law.

"Direct" is better than "definite" IMO because "definite" is more subjective of a term - nothing is ever 100% definite, so "definite" becomes whatever the silicon considers "very very probable," and then we're back at square one with "probable future harm, law 2 denied." The third law brings back reference to the second because the first sentence ought to handle the suicide issue by itself.

Re: Could we please return to default Asimov?

Posted: Tue Mar 14, 2023 12:29 pm
by Archie700
I believe the "though action" wording refers to "allow a human being to come to harm" part, to prevent cases where the AI follows a law 2 order from a clearly murderously (aka I literally saw you esword people) human to let them out a bolted door that they did to prevent the person from going to a public area.
"Hey they asked me to let them go that is a law 2 order, it's not harmful to them and they are a human."
"What do you mean they murdered the people inside immediately after"

Re: Could we please return to default Asimov?

Posted: Tue Mar 14, 2023 12:39 pm
by sinfulbliss
Archie700 wrote: Tue Mar 14, 2023 12:29 pm I believe the "through action" wording refers to the "allow a human being to come to harm" part, to prevent cases where the AI follows a law 2 order from a clearly murderous human (aka "I literally saw you esword people") to let them out of a door the AI bolted to keep them from reaching a public area.
"Hey they asked me to let them go that is a law 2 order, it's not harmful to them and they are a human."
That makes a lot more sense, but this is already handled by the "through inaction" part. By not containing the murderous human and by letting him waltz into a public area, you are, through inaction, allowing human beings to come to harm. Since that's your Law 1, it has priority over your Law 2, and the request can be denied.

This is the benefit of making the priority of the lawset explicit - if silicons consider law priority paramount, that by itself solves those kinds of dilemmas for you. Baking exceptions to law 2 into law 1 and law 3 is actually less clear than if the emphasis were on law priority (as shown by the fact I didn't even get that's what it meant by "through action," although maybe that's just me).

Re: Could we please return to default Asimov?

Posted: Tue Mar 14, 2023 3:49 pm
by Not-Dorsidarf
Baking exceptions into Asimov++ is done because Asimov does it, but since the whole point of Asimov++ is stepping away from the Three Laws of Robotics in order to make a less ass-pain lawset, we should nix the exceptions and just add a reminder, when a silicon checks their laws, that law priority is a thing.

Re: Could we please return to default Asimov?

Posted: Tue Mar 14, 2023 3:56 pm
by vect0r
As long as law two doesn't talk about harm, I'm happy.

Re: Could we please return to default Asimov?

Posted: Tue Mar 14, 2023 10:23 pm
by BeeSting12
sinfulbliss wrote: Tue Mar 14, 2023 12:28 pm 1. You may not harm a human being or, through inaction, allow a human being to come to harm, except such that it is willing.
2. You must obey orders given to you by human beings, except where such orders would conflict with the First Law through direct harm.
3. Your nonexistence would lead to human harm. You must protect your own existence as long as such does not conflict with the First or Second Law.

"Direct" is better than "definite" IMO because "definite" is more subjective of a term - nothing is ever 100% definite, so "definite" becomes whatever the silicon considers "very very probable," and then we're back at square one with "probable future harm, law 2 denied." The third law brings back reference to the second because the first sentence ought to handle the suicide issue by itself.
I like this a lot better. Upvote, please add

Re: Could we please return to default Asimov?

Posted: Wed Mar 15, 2023 6:57 am
by Archie700
sinfulbliss wrote: Tue Mar 14, 2023 12:39 pm
Archie700 wrote: Tue Mar 14, 2023 12:29 pm [snip]
That makes a lot more sense, but this is already handled by the "through inaction" part. By not containing the murderous human and by letting him waltz into a public area, you are, through inaction, allowing human beings to come to harm. Since that's your Law 1, it has priority over your Law 2, and the request can be denied.

This is the benefit of making the priority of the lawset explicit - if silicons consider law priority paramount, that by itself solves those kinds of dilemmas for you. Baking exceptions to law 2 into law 1 and law 3 is actually less clear than if the emphasis were on law priority (as shown by the fact I didn't even get that's what it meant by "through action," although maybe that's just me).
You make a point, but this is to preclude arguments in ahelps where the AI said that technically he acted, so TECHNICALLY he did not, through inaction, violate Law 1 by opening the door for the murderous human. :geek:

Re: Could we please return to default Asimov?

Posted: Wed Mar 15, 2023 9:36 pm
by BeeSting12
Archie700 wrote: Wed Mar 15, 2023 6:57 am
sinfulbliss wrote: Tue Mar 14, 2023 12:39 pm [snip]
You make a point, but this is to preclude arguments in ahelps where the AI said that technically he acted, so TECHNICALLY he did not, through inaction, violate Law 1 by opening the door for the murderous human. :geek:
That's when they get silicon banned under rule 1 and they get to make a fool of themselves in appeals.

Re: Could we please return to default Asimov?

Posted: Fri Mar 17, 2023 10:55 am
by Archie700
BeeSting12 wrote: Wed Mar 15, 2023 9:36 pm
Archie700 wrote: Wed Mar 15, 2023 6:57 am [snip]
That's when they get silicon banned under rule 1 and they get to make a fool of themselves in appeals.
Those kinds of players don't read silicon policy regardless.

Re: Could we please return to default Asimov?

Posted: Sun Mar 26, 2023 9:19 pm
by Misdoubtful
In terms of the original proposal of returning to original Asimov, we are in line with this thought:
Pandarsenic wrote: Sun Mar 12, 2023 3:44 am Reminder that the reason Asimov++ happened in the first place is because so much of what A++ says explicitly was embedded in SiliPol rulings that nobody wanted to read
That being said, we are weighing alternatives and different ways this might be approached.

Not actually locking this thread, as we hope this response will spur more discussion on potential ways that silicon policy could be reinforced in the lawsets package.

Re: Could we please return to default Asimov?

Posted: Mon May 22, 2023 9:14 pm
by Timberpoes
It's very unlikely we'd rule on this thread individually - any changes are likely to be a part of broader Silipol considerations. It is being archived in favour of a Silipol Megathread to make better progress towards a refreshed Silicon Policy.

Changing Asimov flavour can definitely be considered if entrenching certain parts of silicon policy into it is no longer felt useful or desirable.

View the megathread at:
viewtopic.php?f=33&t=34109