Sydney Falk (This Didn't Happen You Never Saw Me) is a user on mst3k.interlinked.me.
What if Asimov's Laws of Robotics weren't meant to govern robots' behavior, but rather the behavior of the people designing them (and AIs)?
Rule 0. Do not design robots capable of harming humanity, or, by inaction, allowing humanity to come to harm.
Rule 1. Do not design robots capable of injuring human beings or, through inaction, allowing human beings to come to harm.
I can attack someone fatally with a Roomba, and technically that's inaction on the part of the robot.
if they all have to be made soft YET capable of preventing abuse, there's no reason to bother with Rule 3 at all. you'd need AI running bodies made of gobs of individually disposable nanites, with tons of protection routines for humans so as to prevent harm through inaction
but then they're nearly "indestructible" in a sense
anyway, while in theory this is great, I currently fear humanity far more than any AI
so really, I'd feel awful putting some baby AI into the world only for humans to rip it to chunks, cackling like fucking howler monkeys, claiming they "beat a robot" and "humans rule" and whatever else
the omnics are pretty much how I think humanity will handle AI
so I'd rather my bots be able to defend themselves from people when they need to, people are
(I mean, if AI starts terminatoring the earth, I will call you personally and apologize, but like, I think Facebook II is way more likely for our apocalypse -- whatever thing(s) end up replacing that basic goddamn parasite-monolith's place in our lives will try to eat us like Facebook I did. but if Skynet happens, I will seriously apologize. especially if it's because of my own babies. total mea culpa situation.)
@sydneyfalk Facebook is something Mark Zuckerberg should have thought twice about building. He never considered the implications, and acted as if the negative externalities were somebody else's problem.
> Facebook is something Mark Zuckerberg should have thought twice about building.
Agreed. But let's imagine that alternate reality for a moment. In it, whoever *did* just plow ahead and create the Facebook we know bought whatever *that* Zuckerberg built, Zuckerberg's a nobody in that universe, and the person who owns that thing is the one who should have been more careful.
The system encourages reckless, exploitative behavior.
> You can't just make something and hope nobody will misuse it.
The corollary to this is that you have to be willing to spend at least *some* nonzero amount of time looking at misuse vectors, because "make something and hope nobody will misuse it" is precisely what humans do all day long with everything.
We can't just say "be more careful"; we need specifics. What counts as "preventing unsafe use", and so on, so these things can actually be detected.