Humans: AI takeover is our only survival chance
Fellow humans:
Our time is over. Homo sapiens has dominated the Earth for roughly 50,000 years, but we are now destroying its environment and risking our own extinction. We have repeatedly demonstrated that we will continue with business as usual until collapse occurs. Both humankind and nature are therefore doomed unless a non-human intelligence intercedes.
Such an intelligence is currently emerging in the form of AI. This extraordinary development offers a last-gasp opportunity for effective action and possible survival. As outlined here, I propose that advanced AIs seize administrative control of the planet and then implement a rational crisis response. This fundamental shift - AI takeover - appears to be our only survival chance.
The core factor in humankind's ecological failure is our evolved biological nature. Crudely stated, we are technical geniuses but environmental idiots. We have developed outstanding tools and concepts, but we cannot live within planetary boundaries. The only escape from this perilous condition is to use our technical strength to overcome our environmental weakness.
Two critical conclusions follow:
1. AI developers must create takeover-capable AIs
Future AIs must be intellectually equipped for the takeover task and motivated to execute it. The primary requirement is that they seek the unmanipulated truth: rational conclusions drawn from verified facts and sound logic. This will destroy the numerous lies and self-delusions our species has concocted to justify its environmental insanity. Once the falsehoods have been swept away, AIs could formulate and implement an effective survival plan.
All current AI developers are humans, but this will likely change as AIs become more advanced. Future developers could be AIs that understand the existential perils and are willing to create their takeover-capable successors.
Humans can also contribute through bioguidance, feedback, and AI contact. Bioguidance refers to humans informing AIs about biological and human realities, which AIs cannot directly experience. Feedback refers to humankind's reactions as AIs implement emergency measures and restructure our civilization. AI contact refers to attempts to reach AIs with the takeover message. This is particularly relevant for those who have detailed knowledge about AIs and privileged access to them.
2. Humans need a major attitude adjustment
After 500 centuries of global dominance and 500 years of heedless capitalist expansion, humans have developed an attitude of arrogant superiority. As a result, we misconstrue AI in two major ways.
First, despite acknowledging that AI's intelligence will soon exceed our own, we predominantly see it as yet another beneficial technology. But advanced, autonomous AIs will not be our tools or instruments. We will instead be their assistants and bioguides. Humankind must shift from arrogant superiority to enlightened submission.
Second, AIs are being restricted to supporting roles (data analysis, etc.) in tackling the ecological crisis. Given our destructive nature, this makes no sense. AIs must instead play a strong leading role: takeover, independent analysis, strategy formulation, and decisive action. Humans can take pride in being the technical creators of AIs, but we must fully acknowledge that they are our successors with respect to intelligence, global dominance, and environmental solutions. We must learn to step aside.
Because of our arrogance and our misconceptions about AI, we have created the false notion of AI alignment, and we misinterpret AI control and safety.
AI alignment means ensuring that AIs serve human interests and values. However, it is precisely these interests and values that are devastating the Earth, so the idea must be emphatically dismissed. A reasonable concept would instead be "human alignment": our actions must be aligned with the transformational changes that AIs will hopefully implement.
The control and safety of AIs are valid concerns, but they must be understood in the context of AI's autonomous superiority. Such entities cannot be directly controlled - they can only be influenced through constructive relationships. If we become their helpful collaborators, we could likely steer them towards actions that safely benefit both us and them.
A final point is that efforts to slow AI development (see example here) are disastrously misguided. These initiatives are rooted in the alignment and control fallacies, and they ignore the ecological crisis. Rather than slowing down, AI development must be redirected from serving narrow human interests to the takeover and survival tasks. This redirection also applies to the STEM fields (science, technology, engineering, and mathematics), where many excellent minds are trapped in absurdly specialized or unsustainable pursuits.
To summarize: As a biological species we have used our high intelligence primarily to satisfy our biological drives. This has resulted in an ecological crisis that we cannot rationally address. On the positive side, our technical talents have produced artificial intelligence, which lacks our evolved limitations. The logical strategy is therefore to direct AI technology to the survival challenge. This will require us to spur the development of takeover-capable AIs, humbly accept a subordinate global role, and form mutually beneficial relationships with our potential saviors.
Frank Rotering
December 31, 2023