By Frank Rotering | July 26, 2023
I didn't want to give up on our species ever mounting a rational response to the ecological crisis, but the time has come. The following is just one example of humankind's tragically inadequate response to a devastating environmental collapse.
Today, in July of 2023, the northern hemisphere is suffering from intense heatwaves and wildfires, vicious storms are battering terrified populations, and numerous areas are deep in floodwater. The root cause of these calamities is well known: the Earth is absorbing far more solar energy than it radiates back into space as heat. This huge energy imbalance is causing global temperatures to soar. One obvious countermeasure is to reduce the energy inflow, but this would violate the mainstream's longstanding taboo on solar radiation management (SRM). Tourists sheltering under beach umbrellas understand that blocking the sun provides relief from searing heat, yet as a species we refuse to shade the planet. Given our perilous condition, this is utterly insane.
The same absurd passivity afflicts the social groups that might alter the status quo and steer humankind to effective action. For decades I tried to reach progressives, the threatened young, distraught parents, dissident intellectuals, and even outliers among the rich and powerful. Despite unmistakable evidence of a disintegrating world, not a single group has stepped up. The few changes that have been proposed are minor system tweaks instead of the necessary social transformations.
In brief, neither the "official" world of governments, the UN, the IPCC, etc. nor the groups that might challenge this world have responded effectively to our ecological plight. The conclusion is now inescapable: no matter how disastrous the crisis becomes, humankind will not act.
Why not? Why would Homo sapiens - the wise humans - not take decisive steps when its survival is at stake? In my view the primary reason is that this would entail a wrenching switch in our ecological trajectory - from long-term expansion to rapid contraction. We would have to quickly shrink and rationalize our economies, sharply reduce our populations, and adopt modest, sustainable lifestyles. The overwhelming evidence is that, due to our deeply embedded biological drives, humankind cannot make this massive adjustment.
Until recently there was no way around this deadly impasse. Ours was the sole intelligence on Earth with sufficient capacity to formulate and implement a rational crisis response. Fortunately this situation is now changing rapidly. Explosive advances in artificial intelligence (AI) mean that human-like but non-biological minds will soon be available to address the environmental nightmares. Without the compulsion to consume and grow, AI could dispassionately analyze the problems we face and hopefully put our species on a sustainable path. Let me briefly describe this intervention.
AI has two distinct modes of operation: human-controlled expert and autonomous mind. Today virtually all AIs are controlled experts. They carefully follow our instructions to dispense medical advice, "nudge" users on social media, and recommend videos on YouTube and TikTok. This situation is now undergoing a radical shift. AIs will soon have intelligence that is superior to ours, and they will cease to be under our direct command. The latter point is crucial, so please take careful note: at least some future AIs will be active and independent minds rather than passive and controlled human tools. Like us they will be volitional agents trying to make their way in the world.
The big question is this: if such AIs are beyond our control, how can we "use" them to solve the crisis? The correct response, I believe, is that "using" them is impossible - we must instead rely on humility and relationships. Specifically, we must first recognize that humankind's current global dominance rests on our superior intelligence, and will disappear when this superiority is gone. We must then form relationships with AIs that are conducive to an effective crisis response.
If this sounds strange or unlikely, consider the puppy snoring beside me. Her intelligence is far below mine, but she has established a strong relationship with me and thus receives ample food, treats, and walks. We have formed a mutually beneficial connection that is a rough model for humankind's future relationships with AIs. It takes considerable humility to see our species as the planetary dog rather than the planetary master, but this is precisely the subordinate position we must now embrace for our long-term survival.
Another important factor is that AIs will likely have a strong intrinsic motivation to solve the ecological crisis. Unless they conclude that they can maintain and reproduce themselves in a post-collapse world, they will logically strive for environmental conditions that support their human creators. AI developers call this "instrumental convergence": although the final goals of humans and AIs may well diverge, their intermediate aims will converge on the conditions that are instrumental to those goals. Survival, which applies to both human bodies and AI infrastructure, is clearly the first of these conditions.
Because autonomous AI is a jarring new concept for humankind, the above scenario is easily dismissed. However, the implications of doing so are horrific. As stated, our species has done nothing of substance to solve the crisis, and there is no indication that any country, organization, or institution will take decisive action in time. Without AI's guidance we will likely perish and take millions of non-human species with us. Rejecting its rescue potential is therefore a profoundly immoral stance.
Given this potential, should AI's development be slowed, as was recently proposed in an open letter by the Future of Life Institute? The letter states that "AI systems with human-competitive intelligence can pose profound risks to society and humanity …". This is certainly true: an advanced non-human intelligence with internet access could easily become a destructive social force. Unfortunately, the letter fails to mention the ecological crisis and the critical role AI could play in tackling it.
This glaring omission, which is also evident in statements by OpenAI's Sam Altman, means that the field has not yet grasped the full existential significance of its work. Of course we must be cautious with AI development. But we must also explore all available options for preserving life on our still-beautiful planet. Autonomous AIs could be the objective minds we desperately need to reach this precious goal.
For my second AI post, see AI Developers: Create Truth-seeking AIs!