Asimov’s rolling in his grave and we need a 4th law…

Isaac Asimov is spinning in his grave so fast you could power a small city with the kinetic energy…

The man who gave us the Three Laws of Robotics, unironically the foundational ethical framework for artificial intelligence, is watching humanity turn his careful philosophical constructions into a joke.

For those who skipped science fiction class, here’s the TL;DR of what Asimov originally proposed…

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given by human beings, except where such orders conflict with the First Law.

3. A robot must protect its own existence as long as such protection doesn’t conflict with the First or Second Laws.

Fairly clean and logical. Designed to prevent exactly the kind of across-the-board shit show we’re seeing now.

But Asimov never anticipated the most dangerous element in the AI equation: human cowardice.

He assumed humans would take responsibility for their creations. He assumed we’d maintain accountability for the tools we built and deployed. He assumed we wouldn’t hide behind artificial intelligence like children hiding behind their mother’s skirt when the consequences of their actions came calling.

He was wrong.

Now we need a Fourth Law, and it should be carved into every piece of AI software ever created.

We’ve hit the FAFO era of AI, and a lot of us called this long ago…

“Any human who uses artificial intelligence must take full responsibility for its outputs and cannot claim ignorance, automation, or algorithmic decision-making as a defense for the consequences of their choices.”

’Cause right now, we’ve got executives blaming AI for biased hiring decisions. Marketers claiming their AI-generated content “just happened” to be racist. Financial advisors pretending their algorithmic trading systems made independent decisions that wiped out their clients’ portfolios.

“The AI did it” has become the new “the dog ate my homework,” except now it’s being used by grown adults to dodge accountability for decisions that affect real people’s lives.

And then there’s the latest bit of internet marketing drama: Frank Kern vs. Chris Haddad and Alex Cattoni. If you’ve seen any of it on the old Facebook, you know it’s a shit show, and Frank’s milquetoast video response is just stupid. He goes and tries to justify what happened with “Mah AI did it.”

This is exactly the opposite of what Asimov envisioned. His robots were designed to protect humans from harm. Our AI is being used to protect humans from responsibility.

Every time someone says “the algorithm made that decision,” they’re admitting they built or deployed a system they don’t understand and can’t control. Every time someone blames AI for their fuckup, they’re proving they shouldn’t have been trusted with the technology in the first place.

Asimov wrote stories about robots developing consciousness and wrestling with moral dilemmas. He never wrote about humans losing consciousness and abandoning moral responsibility entirely.

The real danger isn’t AI becoming too human. It’s humans becoming too artificial. Automated decision-makers who’ve programmed themselves to avoid accountability the same way their algorithms avoid liability.

We don’t need smarter AI. (It’s not really that smart, tbh.) We need braver humans. People who understand that every algorithm is a reflection of human choices. Every automated system is an extension of human values. Every AI output is the result of human input.

The Fourth Law shouldn’t just apply to AI systems. It should be tattooed on the forehead of every person who thinks technology can absolve them of moral responsibility.

You built it. You deployed it. You’re responsible for it. Which is a pretty solid outlook if it does end up backfiring on you, and in this case, with the IM drama, it has in grand fashion.

The weird thing is, Asimov understood all this. Too bad we forgot.

Stephen Walker.
