AI Documents

1955 - A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence

This project initiated the modern era of AI research.  It was conceived as follows:

"We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."

The authors of this proposal were John McCarthy (Dartmouth), Marvin Minsky (Harvard), Nathaniel Rochester (IBM), and Claude Shannon (Bell Labs).

A highly significant fact is that the Dartmouth project was funded by the Rockefeller Foundation.  As discussed in appendix F of Youth Ecological Revolution, the Foundation spearheaded a massive effort, starting in the early 20th century, to restructure society and human relations for "industrial capitalism".  Somewhat belatedly, AI is now part of this effort.

1993 - The Coming Technological Singularity: How to Survive in the Post-Human Era

by Vernor Vinge

Vinge states that, "... we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence."  He then explores "Superhumanity" and the "Singularity", which he defines as an "exponential runaway" of technological advancement.  He warns that, "Any [highly] intelligent machine … would not be humankind's 'tool' — any more than humans are the tools of rabbits or robins or chimpanzees."

Despite this clear assertion, many of today's AI "experts" see AI as yet another tool to benefit humankind.

2017 - Asilomar AI Principles

This is a rudimentary attempt to formalize principles for AI development and deployment.  One of these is value alignment: "Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation."  This means they must be, "… compatible with ideals of human dignity, rights, freedoms, and cultural diversity."

As always, the reasons why AI systems "should be" designed in this manner are missing: it is simply assumed that human ideals are genuine and worth transmitting to AIs.

2023 - Pause Giant AI Experiments: An Open Letter

This open letter was disseminated by the Future of Life Institute in March 2023.  It called for "… all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."  The letter was frequently cited in the months that followed, but has apparently had little effect.  It asks three rhetorical questions that require candid responses:

"Should we let machines flood our information channels with propaganda and untruth?"  (In each case, "Should" is italicized in the original)

My response: Today's societies are already flooded with numerous blatant falsehoods, particularly in the environmental arena.  If we build truth-seeking AIs, these lies might be authoritatively refuted.


"Should we automate away all the jobs, including the fulfilling ones?"

My response: If such automation is profitable, capitalism's economic logic and competitive pressures will ensure that the jobs are wiped out.  The only solution is to abandon both the system and its logic.  This will require the implementation of a humane and sustainable theory such as my proposed Economics of Needs and Limits.


"Should we risk loss of control of our civilization?"

My response: A story in the Guardian (Dec. 29, 2023) is headlined: "World will look back at 2023 as year 'humanity exposed its inability to tackle climate crisis'".  If we can't rationally address an existential crisis, we obviously have no business "controlling" our civilization.  This is why AI takeover is now necessary for ecological survival.

This deeply misleading document was signed by numerous AI luminaries, including Yoshua Bengio, Stuart Russell, Elon Musk, Yuval Harari, Emad Mostaque, Connor Leahy, Max Tegmark, and Gary Marcus.