
Hitting the Books: Why a Dartmouth professor coined the term ‘artificial intelligence’

If Wu-Tang had produced it in ’23 instead of ’93, they would have called it DREAM, because data rules everything around me. Where once our society traded power based on the strength of our arms and the size of our pocketbooks, the modern world is driven by data, which empowers algorithms to classify, isolate, and sell us. These black-box oracles of imperious and imperceptible decision-making decide who gets home loans, who gets bail, who finds love, and who has their children taken away by the state.

In their new book, How Data Happened: A Story from the Age of Reason to the Age of Algorithms, which builds on their existing curriculum, Columbia University professors Chris Wiggins and Matthew L. Jones examine how data is turned into actionable insight and used to shape everything from our political views and social mores to our military responses and economic activities. In the excerpt below, Wiggins and Jones look at the work of mathematician John McCarthy, the junior Dartmouth professor who single-handedly coined the term “artificial intelligence” … as part of his strategy to secure summer research funding.

Book cover: How Data Happened (W. W. Norton)

Taken from How Data Happened: A Story from the Age of Reason to the Age of Algorithms by Chris Wiggins and Matthew L. Jones. Published by W. W. Norton. Copyright © 2023 by Chris Wiggins and Matthew L. Jones. All rights reserved.


Crafting “Artificial Intelligence”

A passionate advocate of symbolic approaches, the mathematician John McCarthy is often credited with inventing the term “artificial intelligence,” including by himself: “I invented the term artificial intelligence,” he explained, “when we were trying to raise money for a summer study” to aim at “the long-term goal of achieving intelligence at the human level.” The “summer study” in question was titled “The Dartmouth Summer Research Project on Artificial Intelligence,” and the requested funding came from the Rockefeller Foundation. At the time a junior professor of mathematics at Dartmouth, McCarthy was aided in his approach to Rockefeller by his former mentor Claude Shannon. As McCarthy describes the term’s positioning, “Shannon thought artificial intelligence was too flashy a term and could attract unfavorable attention.” McCarthy, however, wanted to avoid overlap with the existing field of “automata studies” (including “neural networks” and Turing machines) and took a stand to declare a new field. “So I decided not to fly any more false flags.”

The ambition was huge; the 1955 proposal stated that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” McCarthy ended up with more brain modelers than the axiomatic mathematicians he had wanted at the 1956 meeting, which came to be known as the Dartmouth Workshop. The event brought together various, often conflicting, efforts to make digital computers perform tasks considered intelligent, yet as the historian of artificial intelligence Jonnie Penn argues, the absence of psychological expertise at the workshop meant that the account of intelligence was “informed primarily by a set of specialists working outside the human sciences.” Each participant saw the roots of the enterprise differently. McCarthy recalled, “anybody who was there was pretty stubborn about pursuing the ideas that he had before he came, nor was there, as far as I could see, any real exchange of ideas.”

Like Turing’s 1950 paper, the 1955 proposal for a summer workshop on artificial intelligence seems incredibly prescient in retrospect. The seven problems that McCarthy, Shannon, and their collaborators set out to study became mainstays of computer science and the field of artificial intelligence:

  1. “Automatic computers” (programming languages)

  2. “How can you program a computer to use a language?” (natural language processing)

  3. “Neuron Nets” (neural networks and deep learning)

  4. “Theory of the Size of a Calculation” (computational complexity)

  5. “Self-improvement” (machine learning)

  6. “Abstractions” (feature engineering)

  7. “Randomness and Creativity” (Monte Carlo methods including stochastic learning).

In 1955, the term “artificial intelligence” named an aspiration rather than a commitment to any one method. AI, in this broad sense, involved both an effort to discover what constitutes human intelligence by attempting to create it artificially and a less philosophically fraught effort simply to get computers to perform difficult activities a human might attempt.

Only a few of these aspirations fueled the work that, in today’s usage, has become synonymous with artificial intelligence: the idea that machines can learn from data. Among computer scientists, learning from data would be looked down on for generations.

For most of its first half century, artificial intelligence focused on combining logic with knowledge encoded in machines. Data gathered from everyday activities was hardly the focus; it paled in prestige next to logic. Only in the last five years have artificial intelligence and machine learning come to be used interchangeably; it’s a powerful thought exercise to remind yourself that it didn’t have to be this way. For the first few decades in the life of artificial intelligence, learning from data seemed like the wrong approach, an unscientific approach, used by those unwilling to “just program” the knowledge into the computer. Before data ruled, rules did.

For all their enthusiasm, most of the participants in the Dartmouth workshop brought few concrete results. One group was different. A team from the RAND Corporation, led by Herbert Simon and Allen Newell, had brought the goods in the form of an automated theorem prover. This algorithm could produce proofs of basic arithmetical and logical theorems. But mathematics was only a test case for them. As the historian Hunter Heyck has emphasized, the group started less from computing or mathematics than from the study of how to understand large bureaucratic organizations and the psychology of the people who solve problems within them. For Simon and Newell, human brains and computers were problem solvers of the same kind.

Our position is that the appropriate way to describe a part of problem-solving behavior is in terms of a program: a specification of what the organism will do under various environmental circumstances in terms of certain elementary information processes that it is capable of performing. Digital computers enter the picture only because they can, through proper programming, be induced to execute the same sequences of information processing that humans execute when solving problems. Therefore, as we will see, these programs describe both human and mechanical problem solving at the level of information processes.

Although they provided many of the first important successes in early artificial intelligence, Simon and Newell’s focus was a practical investigation of the organization of humans. They were interested in human problem solving that mixed what Jonnie Penn calls a “composite of early 20th century British symbolic logic and the American administrative logic of a hyper-rationalized organization.” Before adopting the AI moniker, they positioned their work as the study of “information processing systems” comprising humans and machines alike, drawing on the best understanding of human reasoning at the time.

Simon and his collaborators were deeply involved in debates about the nature of humans as rational animals. Simon later received the Nobel Prize in Economics for his work on the limits of human rationality. He was concerned, alongside a group of postwar intellectuals, with refuting the notion that human psychology should be understood as animal reaction to positive and negative stimuli. Like others, he rejected a behaviorist vision of the human as driven by reflexes, almost automatically, in which learning referred primarily to the accumulation of facts acquired through such experience. Great human capacities, like speaking a natural language or doing advanced mathematics, could never emerge from experience alone; they required far more. To focus only on data was to misunderstand human spontaneity and intelligence. Central to the development of cognitive science, this generation of intellectuals stressed abstraction and creativity over the analysis of data, sensory or otherwise. The historian Jamie Cohen-Cole explains: “Learning was not so much a process of acquiring data about the world as it was the development of a skill or the acquisition of proficiency with a conceptual tool that could then be creatively implemented.”

This stress on the conceptual was central to Simon and Newell’s Logic Theorist program, which did not just grind through logical processes but deployed human-like “heuristics” to accelerate the search for a means to an end. Scholars such as George Pólya, who had investigated how mathematicians solve problems, had stressed the creativity involved in using heuristics to tackle mathematical problems. Mathematics, then, was not drudgery; it was not like doing lots and lots of long division or reducing large amounts of data. It was a creative activity and, in the eyes of its creators, a bulwark against totalitarian visions of human beings, whether from the left or the right. (And so, too, was life in a bureaucratic organization; in this picture it need not be drudgery; it could be a place for creativity. Just don’t tell the employees.)
