As a member of the Theia Institute, a think tank focused on areas such as AI Ethics, I often find myself tackling headline topics. The whole “what are we missing?” question is a big part of the institute’s mission.
Here is one: as AI advances at a speed that doesn’t even give you time to catch your breath before it’s taken away again, I see a lot of discussion about the potential of AI to destroy humanity. Some folks consider it inevitable. Others, like Elon Musk, put the chance much lower – “only” 20% – but think that risk is totally worth taking. Just Google this stuff and, even if you limit yourself to experts (whatever that means), watch days of your life vanish as you reach the inevitable conclusion that the real answer is…
Nobody knows.
But a question occurred to me which doesn’t seem to be discussed much at all, or really anywhere as far as I can tell. In other words, perfect think-tank bait.
What does “destroy” mean?
Because the answer to that is one absolutely, positively Brobdingnagian (gawd I love that word) hidden assumption.
Try it yourself. Read carefully between the lines of all these predictions, and see if you can suss out what each expert believes the word “destroy” means. You’ll quickly notice that they don’t agree; they just assume everybody means the same thing.
You will also discover two very interesting patterns:
First, the percentage risk that “it” might happen is tied directly to the internal severity assessment – if you’re in security, think of it as a sort of CVSS score – of the outcome following said destruction, and the relationship is inverse: the more catastrophic the imagined outcome, the lower the assessed probability. If you think of it in terms of the vast majority of humanity being put in vats to serve as a power source for the Matrix, then your assessment of the chance of it happening tends to be low to non-existent. If you think of it in terms of many humans dying, but not catastrophically, because they will likely be the ones already poor and suffering, so whatever, no big deal, you gotta break some eggs to make an omelette, then your number might be higher. While I do not know him, the latter appears to capture the sentiments of Mr. Musk.
Second, and the reason I decided to write this column, is that as far as I can tell, everyone is focusing on physical outcomes. In mathematical form, you could write it like so: Destroy = Fewer Humans, with the “fewer” determining the severity assessment.
A few folks talk about human enslavement by AI, but they are mostly science fiction authors and fans (lots of techies love SF!), and even they tend to focus on humans’ physical state of being.
But – woo-woo alert! – I ask us to reconsider.
Why are we ignoring the mental and emotional outcomes of the word “destroy”?
Human bodies are there to enable the brain to survive, function and thrive. Biologically, they don’t have any other evolutionary purpose. And the brain itself feels and thinks, often guided by bodily sensations. It’s the whole mind, heart, body triad.
Where is the heart and mind in all of this? Why are we ignoring them? I would like to redefine “destroy” to explicitly include them.
Once we do that, the question becomes far more complex but also far more interesting. It also leads to a very different sort of risk assessment, so let me get that one out of the way: I believe that AI will, with 100% certainty, destroy humanity as we know it. It’s simply a question of when. Yes, yes, you may say “transform” while I say “destroy”.
Potato, po-tah-toh.
The important point is that, just like, say, fire and the wheel changed humanity forever, ultimately destroying what was before, so will AI.
So far, so benign, right? I am not saying anything you don’t already know.
But let’s not gloss over the details. The speed here matters, because we face an explosive combination of transition speed and societal scale unlike anything we’ve ever experienced in human history.
The effects of AI will not be felt locally or regionally, over decades. They will be felt all at once all over the world. The scale of potential disruption is enormous, and that’s before we even get to what that disruption means.
As David Graeber pointed out, most jobs in the workplace are simply not necessary; he called them “bullshit jobs”. We’ve been ignoring this at our peril for a long time, but AI is forcing us to acknowledge this reality much faster than anyone is ready to admit. Political discussion keeps focusing on the unemployment rate. Here is a fun question to ponder: are we ready for a time, which could arrive as early as the next decade, when 40-50% structural unemployment is the norm and most skilled jobs are no longer needed?
Even if super-genius AI never directly harms our physical existence, doesn’t that fit under the “destroy” umbrella? Because humans are not very good at just sitting around being idle; this is the sort of thing that starts revolutions and wars. AI needs to do nothing more than what it is already doing, which is make it painfully obvious that a huge army of skilled white-collar workers in almost any field isn’t needed. That includes jobs like accountants, paralegals, and even triage nurses; a recent study out of Israel showed that ChatGPT 4 – an extremely limited large language model – surpasses humans in passing medical exams. It includes, for example, programmers, a job I predict will mostly disappear over the next decade.
What is going to happen to humanity’s collective state of heart and mind if we don’t tackle these implications now?
Because all we seem to be doing right now is doubling down on more. Elon Musk is justified in seeing the world the way he does, because the outcome he envisions as beneficial also implies a much more extreme version of control… for him and his peers. You think we’re dealing with a wealth concentration problem now? Imagine a world in which half of humanity has nothing useful to do, enabled by control of a technology that is instrumental to everyday life and is effectively in the hands of very few.
You don’t need to imagine much here, because even without AI, you’re already seeing it all around you in the modern, technology-enabled world. The societal pain just hasn’t been quite acute enough yet. Graeber’s bullshit jobs are still there, but they won’t be for much longer. And policymakers are still driven by notions that are decades out of date.
Let me propose one practical solution that can at least give us a bit of breathing room: change full-time employment tomorrow from 40 hours a week to 20, without any reduction in pay. At the very least, this would force the adjustment to take a few years longer. It’s perhaps the gentlest form of redistributive economic policy I can think of, but in reality it’s just a stopgap, a band-aid. The truth is that even if AI isn’t destroying us physically, and is in fact providing the benefits without the harm, we must prepare for a very near future where most people simply don’t need to work.
You have to understand: this is the good outcome!
If you think companies will suddenly all “realize” that keeping employees on the payroll is the “right thing to do” when it clearly hurts their potential profits, I don’t know how to help you. If you think politicians are ready to go out there and promote policies that take into account this reality – which means, yes, things like universal basic income and truly aggressive redistribution of wealth – then I don’t know how to help you.
Hell, society clearly isn’t even ready to think in terms other than a person’s value being tied to their work. We gotta change that, but how?
We’re not even ready to talk about it!
There is so much more to say here – I could easily write a book on the near-future implications of disruption at this scale – but I like to keep my column relatively short. Still, it seems to me that when even the good outcomes involve “destruction”, thanks to our inability as humans to agree on the basic need to prepare, destruction is certain to occur.
And humanity – that is, all of us, all over the world, rather than just a privileged few – is going to suffer mightily.
So, you know… kinda feels like the word “destroy” needs a bit of redefining. Wouldn’t you agree?