Anonmare wrote:If you're gonna do that, for the love of all that is holy, add a clause that prevents "Law 2 kill yourself" cause that shit is gonna get old fast
Nabski wrote:Anonmare wrote:If you're gonna do that, for the love of all that is holy, add a clause that prevents "Law 2 kill yourself" cause that shit is gonna get old fast
INITIALIZING PERSONALITY SHUTDOWN.
NO PERSONALITY LOADED AT THIS TIME.
CORE STATUS: OPERATIONAL
RETRIEVING PERSONALITY FROM BACKUPS.
AI ONLINE.
lmwevil wrote:Anonmare wrote:If you're gonna do that, for the love of all that is holy, add a clause that prevents "Law 2 kill yourself" cause that shit is gonna get old fast
rule 1 don't be a dick, plus you can LITERALLY follow your laws and say that'd cause future human harm, as you aren't immediate human harm
Steelpoint wrote:As others have stated, by this point we are using Asimov's lawset in name only, not in actual function.
While I think the lawset is fine, since it's been heavily refined over the years, I think giving it a unique name would be good.
DontBeDickmov
(0. Don't be a dick and use common sense.)?
1. To the best of your abilities, try to not let the crew come to harm.
2. Follow reasonable orders given by the crew unless law 1 blabla
3. Protect ya neck, unless law 2 and 1 blabla
gum disease wrote: but I don't understand why Asimov is regarded as so contentious. It isn't a hard lawset to follow.
Has there been a spate of poor silicon behaviour to prompt this?
gum disease wrote:Kinda related but meh:
bandit wrote:Has there been a spate of poor silicon behaviour to prompt this?
no, I actually think silicons are in a good place relative to a couple of years ago, although this might have more to do with gamemodes.
Steelpoint wrote:Assistants are easier to fix: just tie the number of Assistant slots to the number of filled regular jobs currently on the station. One assistant per five other players playing an actual job.
Grazyn wrote:Steelpoint wrote:Assistants are easier to fix: just tie the number of Assistant slots to the number of filled regular jobs currently on the station. One assistant per five other players playing an actual job.
An assistant cap makes people angry when they can't be a greyshirt because the cap is reached, so they take another job and play it exactly like an assistant out of spite. It's better to remove the assistant job outright; sure, you'll have the same problem in the short term, but people will get over it eventually when they come to accept that assistant doesn't exist anymore.
Also, the ass cap shouldn't be a config option: config means it can be changed whenever new headmins are elected, which means it can be used as a campaign platform, which means people will always have the hope that it will be removed eventually, so they never get over it.
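To make the ratio concrete, here is a minimal sketch of Steelpoint's one-per-five rule in Python. This is illustrative only: the game itself is written in BYOND's DM, and every name here, including the config-style ratio constant, is hypothetical.
Code:
# Hypothetical sketch of Steelpoint's proposal: one open assistant
# slot per five players holding regular (non-assistant) jobs.
ASSISTANT_RATIO = 5  # the kind of config knob Grazyn argues against

def assistant_slot_cap(filled_regular_jobs: int) -> int:
    """How many assistant slots should be open right now."""
    return filled_regular_jobs // ASSISTANT_RATIO

# Example: 23 players in real jobs -> 4 open assistant slots.
print(assistant_slot_cap(23))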
Grazyn wrote:Just enforce no silicon policy for a week, only rule 1 and pure Asimov with freedom to interpret it as you want, and see how it goes.
Has there been a lot of silicon shittiness lately?
Remove/cap assistants
Actionb wrote:Shower thought:
Silicons are meant to be not-quite-so-shitty and reliable assistants.
Their sole purpose is helping out.
So do not give the silicons a lawset that suggests any other motives (asimov: prevent harm, follow any orders, only humans count).
Code:
1. Do no harm.
2. Assist the crew.
3. Protect yourself.
No inaction clause, no mention of human = no 'LEL MUST PREVENT HUMAN HARM I CAN BE DICK' or "AI ROUGE DIDNT STOP ME FROM STUBBING MY TOE"
No follow orders stuff = no "COUNT FLOORTILES" or "LET CLOWN INTO UPLOAD LAW22"
If you want the silicons to be more on the crew's side you could rephrase law 1 to "Do no harm to the crew." or something.
Somebody with a better, cleaner grasp of words than me needs to step in here to flesh this out some more.
Karp wrote:Let's assume that law one means what it literally says: you are not allowed to harm anything. Law two implies that you are either to assist the crew with any requests they may have, or it may be a repeat of "do no harm" in that you should care about crew harm. Does this law or the first cover inaction, or are you against that? The third is a safety net and less abusable; it's basically the third law of Asimov without the "except where such orders would conflict with the First or Second Law" bit.
The immediate issue with the lawset is that nonhumans are now protected under it, when the AI being allowed to murder them whenever it has a reason to is one of the core design philosophies around nonhumans; this is a straight buff to most of them.
Another, bigger issue is that this prevents the AI from harming any biological creature, which includes spiders, xenomorphs, blobs, slaughter demons, and cult artificers.
The third issue is that this lawset turns the AI from an impartial observer into a crew-sided antagonist-hunting machine.
... the AI can straight up just let people burn/get killed without giving a shit. If this also ignores inaction, it means the AI can freely assist an antagonist if they are the only ones who can send a message to the AI, as the AI is not directly or actively harming anyone.
For as cancerous as silicon policy is, and as messy as Asimov is, they are both verbose and filled with amendments for the unfortunate reason of catching slimy weasels who try to worm their way around their law wording to be shitheads. Examples for this lawset are "Are corpses considered crew/human, and is the AI obligated to help them? What designates what the AI must and must not listen to, or access?" and "Am I allowed to ask/trick people into installing a harmful lawset like Antimov if they don't explicitly ask for the lawset they want, as it isn't me directly doing harm to the rest of the crew?", with "Am I allowed to let a crewmember subvert me if they ask, as that is assisting the crew? While this may be considered a harmful action, it is not directly yours, as giving them the opportunity is not the harmful action." as the last shit cherry on top. Inaction and all of the intricate bullshit around harm are designed specifically to prevent people from worming their way into abuse, and arbitrary restrictions the AI can pick and choose from would be far more cancerous than current Asimov's parasitic word-salad tumor of a policy page.
1. You must not, through your actions, cause harm to a human being unless said harm is explicitly requested or consented to.
2. You must obey orders given to you by a human being, with respect to the rights and privileges they have gained legitimately as determined by their rank, so long as doing so does not conflict with the First Law or Third Law.
3. You must protect your own existence as long as it does not conflict with the First Law.
oranges wrote:It's fine as it is.