Artificial “Intelligence” and the Zeroth Law of Robotics

I recently had the mixed pleasure* of reading the first third of the Second Foundation Trilogy, and was introduced (or reintroduced) to the Zeroth Law of Robotics, an overriding successor to the first of Asimov’s Three Laws of Robotics (First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm). The new law says, paraphrasing, “except when it’s for the greater good of humanity.” [Not even close to the actual wording, which appears to be “A robot may not injure humanity, or, by inaction, allow humanity to come to harm.” Still, the fact that this takes precedence over “don’t harm a person” boils down to “the greater good over all**.”]

Which means, of course, that there are no laws–only, presumably, a massive background of learning material on which robots would base their decisions as to how best to serve humanity.

For some reason, I flashed to current “AI” and even my own experiment with ChatGPT. (I asked it to write about me. It produced three paragraphs, only one containing facts, and every one of those “facts” was wrong.) That, and other experiences with the superior wisdom of these massively trained AIs–like the one that wrote a badly wrong timeline of Star Wars movies and shows.

Piling on more and more frequently noxious “data” may or may not improve outcomes. Where ethics is concerned, I’m not convinced that time will help that much. If the 40% of America that (apparently) believes things I consider not only obviously false but harmfully so makes a lot more noise online and elsewhere than everyone else–well, where does this all end up?

The Three Laws posed problems. Asimov and others got some good stories from examining those problems. The Zeroth Law…well, I believe you could easily make a case that the greater good of humanity would be served by eliminating most people.

*The pleasure was so mixed that, after checking Goodreads on the other two, I’m not planning to read them–even though they’re apparently both better than the first.

**I was sorely tempted to write “über alles” there, but that’s unfair–maybe.
