We’re in a chaotic time for technology. On the one hand, there’s tremendous fear that AI will take away our jobs, concentrate capital in the hands of a few large companies and governments, and perhaps even bring about the end of humanity as we know it. Let’s call this The Doomer Thesis. On the other hand, there’s tremendous optimism and hype about the creativity these cheap AI tools could unlock, the growth from new startups disrupting old industries, and the potential to make everyone far more productive. Let’s call this The Techno-Optimist Thesis.
How can we reconcile these two poles of tremendous fear and unprecedented optimism?
The state of AI is confusing: it’s hard enough to keep up with the dizzying pace of new models, techniques, capabilities, and companies, but embedded in our predictions about AI are also two diametrically opposed visions of what it all means and where we’re going.
Neither I nor anyone else can tell you exactly where we’re going, but I’m going to try to help make sense of it by positing a powerful mental model for understanding not just AI predictions, but this entire class of problems in which two competing ideas about the world are extreme opposites of one another.
Our model starts with a fundamental axiom of logic: the Law of Non-Contradiction. Simply put, there are no contradictions in the universe. We assume that, because the world is a logical, coherent whole, there can be no true contradictions: the same state space cannot contain both X and not-X. This doesn’t mean there can’t be apparent or seeming contradictions. It means only that where such contradictions appear, they indicate a gap in our model of the world or, as is often the case, a lapse in the language required to express a distinction at the proper level of abstraction.
Let me put this in concrete terms. Where there are two completely opposite ideas about the world, this indicates one of three possibilities:
1. Both are wrong. This is by far the most common case: the situation reflects black-and-white thinking on both sides (a common psychological defense mechanism against uncertainty), and the truth lies somewhere in the middle.
2. One or the other is correct. This is the least common case, because predictions about the world rarely follow extreme paths. It does happen: every once in a while there’s a 1-in-1,000-year earthquake, but I wouldn’t bet my life on one happening tomorrow, or my soul on one never happening again.
3. Neither is wrong, because the situation is being mis-stated or misinterpreted as inherently contradictory. Think of photons in quantum mechanics, which appear to act as both particles and waves. While the exact relationship of quantum mechanics to our physical world is not entirely understood, we chalk these seemingly contradictory observations up to a deficit in our model, rather than concluding that light is one, the other, or neither.
I submit that the explanation that most parsimoniously harmonizes the Doomer and Techno-Optimist positions is this third possibility.
The AI Doomers and the Techno-Optimists are both not wrong, because they’re talking past each other. And it’s understandable why: there are strong incentives, true believers, and lots of money to be made on both sides. There are also serious geopolitical, economic, and existential risks hinging on the outcome. All of this noise and tension clouds our ability to think clearly about the nature of the concern.
And because we can’t think clearly, we’re not talking about it clearly, either.
The truth is that there is no contradiction. AI will be both a godsend for productivity, startups, and some of humanity’s biggest problems, AND an extremely disruptive force that displaces billions of jobs, turns over entire industries, and presents serious sociological and geopolitical challenges.
The sooner we inject clarity into our thinking and therefore our conversations, the sooner we can address the benefits and the risks in a level-headed way.
Let us heed Hölderlin’s famous words, as quoted in Heidegger’s The Question Concerning Technology: “where the danger is, grows the saving power also.”
This is a reposting of this video essay, adapted and edited for the written form.