also called AI, the scientific effort to design and build intelligent artifacts. Since the effort inevitably presupposes and tests theories about the nature of intelligence, it has implications for the philosophy of mind – perhaps even more than does empirical psychology. For one thing, actual construction amounts to a direct assault on the mind–body problem; should it succeed, some form of materialism would seem to be vindicated. For another, a working model, even a limited one, requires a more global conception of what intelligence is than do experiments to test specific hypotheses. In fact, psychology’s own overview of its domain has been much influenced by fundamental concepts drawn from AI.
Although the idea of an intelligent artifact is old, serious scientific research dates only from the 1950s, and is associated with the development of programmable computers. Intelligence is understood as a structural property or capacity of an active system; i.e., it does not matter what the system is made of, as long as its parts and their interactions yield intelligent behavior overall. For instance, if solving logical problems, playing chess, or conversing in English manifests intelligence, then it is not important whether the ‘implementation’ is electronic, biological, or mechanical, just as long as it solves, plays, or talks. Computers are relevant mainly because of their flexibility and economy: software systems are unmatched in achievable active complexity per invested effort.
Despite the generality of programmable structures and the variety of historical approaches to the mind, the bulk of AI research divides into two broad camps – which we can think of as language-oriented and pattern-oriented, respectively. Conspicuous by their absence are significant influences from the conditioned-response paradigm, the psychoanalytic tradition, the mental picture idea, empiricist (atomistic) associationism, and so on. Moreover, both AI camps tend to focus on cognitive issues, sometimes including perception and motor control. Notably omitted are such psychologically important topics as affect, personality, aesthetic and moral judgment, conceptual change, mental illness, etc. Perhaps such matters are beyond the purview of artificial intelligence; yet it is an unobvious substantive thesis that intellect can be cordoned off and realized independently of the rest of human life.
The two main AI paradigms emerged together in the 1950s (along with cybernetic and information-theoretic approaches, which turned out to be dead ends); and both are vigorous today. But for most of the sixties and seventies, the language-based orientation dominated attention and funding, for three signal reasons. First, computer data structures and processes themselves seemed language-like: data were syntactically and semantically articulated, and processing was localized (serial). Second, twentieth-century linguistics and logic made it intelligible that and how such systems might work: automatic symbol manipulation made clear, powerful sense. Finally, the sorts of performance most amenable to the approach – explicit reasoning and ‘figuring out’ – strike both popular and educated opinion as particularly ‘intellectual’; hence, early successes were all the more impressive, while ‘trivial’ stumbling blocks were easier to ignore.

The basic idea of the linguistic or symbol manipulation camp is that thinking is like talking – inner discourse – and, hence, that thoughts are like sentences. The suggestion is venerable; and Hobbes even linked it explicitly to computation. Yet it was a major scientific achievement to turn the general idea into a serious theory. The account does not apply only, or even especially, to the sort of thinking that is accessible to conscious reflection. Nor is the ‘language of thought’ supposed to be much like English, predicate logic, LISP, or any other familiar notation; rather, its detailed character is an empirical research problem. And, despite fictional stereotypes, the aim is not to build superlogical or inhumanly rational automata. Our human tendencies to take things for granted, make intuitive leaps, and resist implausible conclusions are not weaknesses that AI strives to overcome but abilities integral to real intelligence that AI aspires to share.

In what sense, then, is thought supposed to be language-like? Three items are essential. First, thought tokens have a combinatorial syntactic structure; i.e., they are compounds of well-defined atomic constituents in well-defined (recursively specifiable) arrangements. So the constituents are analogous to words, and the arrangements are analogous to phrases and sentences; but there is no supposition that they should resemble any known words or grammar. Second, the contents of thought tokens, what they ‘mean,’ are a systematic function of their composition: the constituents and forms of combination have determinate significances that together determine the content of any well-formed compound. So this is like the meaning of a sentence being determined by its grammar and the meanings of its words. Third, the intelligent progress or sequence of thought is specifiable by rules expressed syntactically – rules that can be carried out by processes sensitive only to syntactic properties. Here the analogy is to proof theory: the formal validity of an argument is a matter of its according with rules expressed formally. But this analogy is particularly treacherous: it immediately suggests the rigor of logical inference, whereas, if intelligence is specifiable by formal rules, these must be far more permissive, context-sensitive, and so on, than those of formal logic. Syntax as such is perfectly neutral as to how the constituents are identified (by sound, by shape, by magnetic profile) and arranged (in time, in space, via address pointers).
It is, in effect, a free parameter: whatever can serve as a bridge between the semantics and the processing. The account shares with many others the assumptions that thoughts are contentful (meaningful) and that the processes in which they occur can somehow be realized physically. It is distinguished by the two further theses that there must be some independent way of describing these thoughts that mediates between (simultaneously determines) their contents and how they are processed, and that, so described, they are combinatorially structured. Such a description is syntactical.
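The three features can be made concrete in a small illustration. The following sketch (in Python, with wholly invented constituents and a single toy rewrite rule; it models no actual AI proposal) exhibits a combinatorial syntax, a compositional assignment of content, and a processing rule sensitive only to syntactic shape:

    # Syntax: a thought token is either an atom (a string) or a compound
    # (connective, left, right) -- well-defined constituents in
    # recursively specifiable arrangements.
    AND, OR = "AND", "OR"

    def meaning(token, interpretation):
        # Compositional semantics: the content of a compound is a function
        # of the contents of its constituents and its form of combination.
        if isinstance(token, str):              # atomic constituent
            return interpretation[token]
        connective, left, right = token         # compound
        if connective == AND:
            return meaning(left, interpretation) and meaning(right, interpretation)
        if connective == OR:
            return meaning(left, interpretation) or meaning(right, interpretation)
        raise ValueError("ill-formed token")

    def simplify(token):
        # A processing rule that consults syntactic shape alone:
        # rewrite (AND, p, p) to p, never asking what p means.
        if isinstance(token, str):
            return token
        connective, left, right = token
        left, right = simplify(left), simplify(right)
        if connective == AND and left == right:
            return left
        return (connective, left, right)

    thought = (AND, "rain", (OR, "rain", "snow"))
    print(meaning(thought, {"rain": True, "snow": False}))   # True
    print(simplify((AND, "rain", "rain")))                   # rain

The structural point is that simplify never consults what ‘rain’ means; yet, because content is fixed compositionally, its purely syntactic rewriting is guaranteed to preserve that content.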
We can distinguish two principal phases in language-oriented AI, each lasting about twenty years. Very roughly, the first phase emphasized processing (search and reasoning), whereas the second has emphasized representation (knowledge). To see how this went, it is important to appreciate the intellectual breakthrough required to conceive AI at all. A machine, such as a computer, is a deterministic system, except for random elements. That is fine for perfectly constrained domains, like numerical calculation, sorting, and parsing, or for domains that are constrained except for prescribed randomness, such as statistical modeling. But, in the general case, intelligent behavior is neither perfectly constrained nor perfectly constrained with a little random variation thrown in. Rather, it is generally focused and sensible, yet also fallible and somewhat variable. Consider, e.g., chess playing (an early test bed for AI): listing all the legal moves for any given position is a perfectly constrained problem, and easy to program; but choosing the best move is not. Yet an intelligent player does not simply determine which moves would be legal and then choose one randomly; intelligence in chess play is to choose, if not always the best, at least usually a good move. This is something between perfect determinacy and randomness, a ‘between’ that is not simply a mixture of the two. How is it achievable in a machine?
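The contrast can be made vivid in a few lines. The following sketch (in Python, using tic-tac-toe rather than chess purely for brevity; the board representation is invented for the illustration) programs the perfectly constrained part – enumerating the legal moves – while conspicuously leaving the intelligent part, choosing among them, untouched:

    def legal_moves(board):
        # board: a dict mapping squares 0..8 to 'X', 'O', or None (empty)
        return [square for square, mark in board.items() if mark is None]

    board = {square: None for square in range(9)}
    board[4] = 'X'
    print(legal_moves(board))   # [0, 1, 2, 3, 5, 6, 7, 8]

A line such as random.choice(legal_moves(board)) would complete the program, but not intelligently; the open question is how a deterministic machine can do systematically better than that.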
The crucial innovation that first made AI concretely and realistically conceivable is that of a heuristic procedure. (The term ‘heuristic’ derives from the Greek word for discovery, as in Archimedes’ exclamation ‘Eureka!’) The relevant point for AI is that discovery is a matter neither of following exact directions to a goal nor of dumb luck, but of looking around sensibly, being guided as much as possible by what you know in advance and what you find along the way. So a heuristic procedure is one for sensible discovery, a procedure for sensibly guided search. In chess, e.g., a player does well to bear in mind a number of rules of thumb: other things being equal, rooks are more valuable than knights, it is an asset to control the center of the board, and so on. Such guidelines, of course, are not valid in every situation; nor will they all be best satisfied by the same move. But, by following them while searching as far ahead through various scenarios as possible, a player can make generally sensible moves – much better than random – within the constraints of the game. This picture even accords fairly well with the introspective feel of choosing a move, particularly for less experienced players.

The essential insight for AI is that such rough-and-ready (ceteris paribus) rules can be deterministically programmed. It all depends on how you look at it. One and the same bit of computer program can be, from one point of view, a deterministic, infallible procedure for computing how a given move would change the relative balance of pieces, and, from another, a generally sensible but fallible procedure for estimating how ‘good’ that move would be (the sketch below makes this duality concrete). The substantive thesis about intelligence – human and artificial alike – is then that our powerful but fallible ability to form ‘intuitive’ hunches, educated guesses, etc., is the result of (largely unconscious) search, guided by such heuristic rules.

The second phase of language-inspired AI, dating roughly from the mid-1970s, builds on the idea of heuristic procedure, but dramatically changes the emphasis. The earlier work was framed by a conception of intelligence as finding solutions to problems (good moves, e.g.). From such a perspective, the specification of the problem (the rules of the game plus the current position) and the provision of some heuristic guides (domain-specific rules of thumb) are merely a setting of the parameters; the real work, the real exercise of intelligence, lies in the intensive guided search undertaken in response.
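To make the dual aspect concrete, here is a minimal sketch (in Python; the weights are invented for the illustration, and legal_moves and apply_move are hypothetical helpers standing in for a real move generator). Viewed one way, evaluate is a deterministic, infallible procedure for computing a weighted sum; viewed another, it encodes the fallible rules of thumb mentioned above – material values and control of the center:

    PIECE_VALUES = {'pawn': 1, 'knight': 3, 'bishop': 3, 'rook': 5, 'queen': 9}
    CENTER = {'d4', 'd5', 'e4', 'e5'}

    def evaluate(position, player):
        # position: an iterable of (owner, piece, square) triples
        score = 0.0
        for owner, piece, square in position:
            sign = 1 if owner == player else -1
            score += sign * PIECE_VALUES.get(piece, 0)   # material balance
            if square in CENTER:
                score += sign * 0.5                      # center control, ceteris paribus
        return score

    def choose_move(position, player, legal_moves, apply_move):
        # One-ply guided search: prefer the move whose resulting position
        # the heuristic scores highest; generally sensible, but fallible.
        return max(legal_moves(position, player),
                   key=lambda move: evaluate(apply_move(position, move), player))

    position = [('white', 'rook', 'a1'), ('white', 'pawn', 'e4'),
                ('black', 'knight', 'd5')]
    print(evaluate(position, 'white'))   # 5 + 1 + 0.5 - 3 - 0.5 = 3.0

Deeper search – applying the same estimate several plies ahead – improves the choices without changing their ultimately heuristic character.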