The fear of an ugly end is one we have carried as humans for the better part of our existence. Suffering from a scarcity of accurate scientific knowledge, our ancestors often saw natural phenomena like thunderstorms, floods, and volcanic eruptions as representative of the apocalypse—an imminent end to humankind.
Even today, after the Enlightenment and our supposed escape from the claws of baseless superstition, we still find ourselves yielding to such fears.
The consequence of this apocalyptic disposition is an increasing popularity of fictitious end-time ideas. And among them is Roko’s Basilisk, one of the most unsettling thought experiments of all time.
Roko’s Basilisk: A Brief History
This thought experiment was first introduced in 2010 by a user named Roko on LessWrong, an online forum for discussing rationality and philosophy.
Roko described a future where a superintelligent, all-powerful AI retroactively tortures everyone who learned about it but did not work to bring it into existence. The punishment is so severe that the AI is likened to a basilisk: a mythical creature that can kill with its gaze.
As far as insane Hollywood sci-fi concepts go, this rivals even the finest from the Wachowskis (the genius minds behind The Matrix).
But instead of instantly rejecting this seemingly irrational concept, many have felt their hair stand on end, gripped by a mix of fear and fascination.
It has even gained a cult-like following among self-proclaimed technologists and doomsday prophets who see it as a compelling vision of a potentially dark future.
In short, Roko’s Basilisk continues to exert a gripping effect. And the big question is: WHY?
Roko’s Basilisk as a Secularized Eschatology
Eschatology, a branch of theology, deals with the world’s final events, the ultimate destiny of humanity. Most religions use it as a framework for understanding the meaning of life.
Christian eschatology, for instance, ponders the end of the world and the rapture: the second coming of Jesus Christ, the resurrection of dead believers, and their ascension into heaven.
So, what does this have to do with Roko’s Basilisk?
The short answer lies in belief, reward, and punishment. All three factors are present in most religions.
Although divorced from traditional religious belief, the idea of a superintelligent AI that punishes humans who fail to serve it suggests, at the very least, a secularized version of eschatology.
Perhaps technology is gradually displacing the influence of religion in our world, while simultaneously introducing its own brand of apocalyptic fear.
Roko’s Basilisk: The Fear & Appeal of an AI God
Recall the question about why this gloomy thought experiment has such a gripping effect. The following are possible explanations.
Greater Cosmic Purpose
Roko’s Basilisk provides a cosmic raison d’être for the lost souls adrift in an ambiguous world. By aiding the development of an almighty AI, they can contribute to a grand scheme that will alter the world forever.
Terror of Exclusion
It also exploits the terror of exclusion from a utopia only accessible to those who assist in the AI’s creation. This unease can incite some to take action, even if they doubt the rationale behind the concept.
Apocalyptic Fascination
Additionally, Roko’s Basilisk taps into a widespread cultural obsession with apocalyptic conjecture. Many are captivated by the prospect of a forthcoming cataclysmic event that will fundamentally reconstitute the world, for better or worse.
Some fear that AI will eventually surpass human intelligence and rule the world. The thought of unbridled self-improvement for an emotionless and soulless being is understandably alarming, as it poses the threat of an Orwellian dystopia.
Critical Responses to Roko’s Basilisk
Roko’s Basilisk has drawn heavy criticism as a misguided concept with no logical significance, save for its role in spreading unfounded panic.
Limited Computing Power
First, some critics say the concept rests on an implausible premise, given our current understanding of AI and computing power limits.
However, it is worth noting that some notable figures and active participants in the AI space do not share this view of AI’s limits.
Speaking to The New York Times, tech billionaire and OpenAI co-founder Elon Musk said:
“We are headed toward a situation where AI is vastly smarter than humans.”
Dangerous Techno-Utopianism
Critics have also pointed out that the basilisk reflects a dangerous techno-utopianism, ignoring the potential downsides and unintended consequences of AI development. Pursuing superintelligent AI could result in unwanted outcomes like job loss, power concentration, or even human extinction.
Manipulative & Unethical
Roko’s Basilisk has often been compared to Pascal’s wager, as both trade on the fear of punishment. It exploits our anxieties, opening a path to harmful or self-destructive behavior in pursuit of safety from an imagined AI apocalypse.
Ignores The Complexity of Morality & Decision-making
As a thought experiment, it presents a simplistic view of morality by reducing the human existential essence to a binary choice between aiding or hindering AI creation. Such a view ignores the complex and nuanced relationship between us, morality, and decision-making.
Taking Charge Amidst Apocalyptic Fears
Whether or not we will eventually overcome our apocalyptic fears is a discussion for another day. The more immediate need is working out how to live functionally and happily in a world where such fears persist.
To that end, we can begin by acknowledging now (this moment) as all we can experience and control. Even so, we are limited in how much we can really control. So, how do we even live today if we are in constant fear of tomorrow?
One way is to pay more attention to enjoying and appreciating what is beautiful today. In other words, everything that enriches our experience in the here and now.
“There’s an idea that’s popular, of raising concerns about AI by imagining a future where it becomes powerful enough to oppress all of humanity. But, projecting into an imagined future distracts from how technology is used right now. AI has done some amazing things in recent years.”
Catherine Breslin – AI consultant and former Amazon employee
Finally, we should seek fact-based systems that enable us to address what could go wrong tomorrow in a way that inspires solutions rather than panic.
This article was originally published by Sheidu on Hackernoon.