Squeak Toys

Is AI going to kill us?

Well, this generation won’t, but now that I have your attention, I want to talk about one AI-related problem that is not with AI itself: anthropomorphizing. That is, our capacity to infuse other species and objects with human-like motivations, emotions, and capabilities.

This is uniquely human. You don’t see dogs looking at spiders – or for that matter, car engines – and imbuing them with feelings. “Aww, look at that sweet spider, he’s feeling guilty because he tried to bite me” has never crossed a canine’s mind, nor has “That thing that makes the roaring sound is angry at me because I peed on the carpet.”

We do this all the time. We do this so brilliantly that…well, Pixar. It’s a cool (or not so cool, depending on your viewpoint) skill we humans have, the ability to relate to other species and objects by thinking of them as being more like us.

But this skill becomes particularly dangerous when it comes to generative AI.

LLMs are, in the most reductive definition, very clever squeak toys. We press the button, and they talk back to us in a way that makes them sound eerily human. Their ability to generate contextual, human-sounding noises that please us is truly remarkable, and we are still at the very early stages; they are only going to get exponentially better from here.

So what are we going to do? I mean, isn’t language the one thing humans can do that no other species can emulate? We talk to each other. It’s what makes us stand apart. Our big brains have made it so, and we don’t even have to think about it; we just know it. I’m fully aware of research showing other species communicating in a form of language, but it’s not the same thing, unless someone suddenly proves that the sharks on the shores of Australia “speak” a different language than those on the California coast, and dolphins serve as interpreters in worldwide shark conferences on how to deal with pesky humans.

Of course we are going to pretend LLMs are partially human.

Anthropomorphizing them is natural.

But they are not human. More importantly, they will never be human. Even when the general AI barrier is crossed, which could happen this decade or much further in the future (really, nobody knows), and AI then continues its exponential growth and becomes millions or billions of times smarter than us, it will never be human. It will never feel the way we feel. It will never intuit the way we intuit.

Where this becomes dangerous is when public pressure leads to bad policy outcomes.

We are already, at this early stage, seeing misinformed calls for treating AI as sentient and equal to humans. These calls are bound to increase the more human-sounding AI becomes. But no matter how human these systems sound, they are computers, and shutting one off doesn’t equate to murder. It won’t.

Ever.

It’s a different species.

Yes, we must at some point face issues related to, for example, cross-species crimes between AIs and humans. For my part, I think that, as part of human-species risk management, we need to start laying down the policy frameworks today, so that we are not caught with our pants down. Sadly, humans are about as good at managing risk as they are at accepting that puppies don’t feel guilt the way humans do, regardless of how they look back at us.

In other words, not so much.

So I don’t expect much on this front, and hence the danger. Even at this early squeak-toy stage, these models show an outstanding ability to project emotionality, simply because we, the humans who trained them on everything we’ve ever done, are so good at projecting it ourselves. The more we fall for it, the more likely we are to pursue the wrong policy solutions.

Something to consider, perhaps, before we slap googly eyes on server racks in nuclear-powered data centers on the moon.
