connectionism an approach to modeling cognitive systems which utilizes networks of simple processing units that are inspired by the basic structure of the nervous system. Other names for this approach are neural network modeling and parallel distributed processing. Connectionism was pioneered in the period 1940–65 by researchers such as Frank Rosenblatt and Oliver Selfridge. Interest in using such networks diminished during the 1970s because of limitations encountered by existing networks and the growing attractiveness of the computer model of the mind (according to which the mind stores symbols in memory and registers and performs computations upon them). Connectionist models enjoyed a renaissance in the 1980s, partly as the result of the discovery of means of overcoming earlier limitations (e.g., development of the back-propagation learning algorithm by David Rumelhart, Geoffrey Hinton, and Ronald Williams, and of the Boltzmann-machine learning algorithm by David Ackley, Geoffrey Hinton, and Terrence Sejnowski), and partly as limitations encountered with the computer model rekindled interest in alternatives. Researchers employing connectionist-type nets are found in a variety of disciplines including psychology, artificial intelligence, neuroscience, and physics. There are often major differences in the endeavors of these researchers: psychologists and artificial intelligence researchers are interested in using these nets to model cognitive behavior, whereas neuroscientists often use them to model processing in particular neural systems.
A connectionist system consists of a set of processing units that can take on activation values. These units are connected so that particular units can excite or inhibit others. The activation of any particular unit will be determined by one or more of the following: inputs from outside the system, the excitations or inhibitions supplied by other units, and the previous activation of the unit. There are a variety of different architectures invoked in connectionist systems. In feedforward nets units are clustered into layers and connections pass activations in a unidirectional manner from a layer of input units to a layer of output units, possibly passing through one or more layers of hidden units along the way. In these systems processing requires a single pass through the network. Interactive nets exhibit no directionality of processing: a given unit may excite or inhibit another unit, and it, or another unit influenced by it, might excite or inhibit the first unit. After an input has been supplied to some or all of the units, a number of processing cycles ensue until the network settles into one state or cycles through a small set of such states. One of the most attractive features of connectionist networks is their ability to learn. This is accomplished by adjusting the weights on the connections between the various units of the system, thereby altering the manner in which the network responds to inputs. To illustrate the basic process of connectionist learning, consider a feedforward network with just two layers of units and one layer of connections. One learning procedure (commonly referred to as the delta rule) first requires the network to respond, using current weights, to an input. The activations on the units of the second layer are then compared to a set of target activations, and detected differences are used to adjust the weights coming from active input units.
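The delta-rule procedure just described can be sketched in a few lines of Python. This is a minimal illustration, not any particular published model: the network size, learning rate, and toy training pattern are invented for the example.

```python
# Minimal two-layer feedforward net trained with the delta rule.
# Sizes, learning rate, and the toy target mapping are invented for illustration.

def activate(inputs, weights):
    """Output activations: weighted sums of the input activations."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

def delta_rule_step(inputs, targets, weights, lr=0.1):
    """Adjust each weight in proportion to (target - actual) times input activation."""
    outputs = activate(inputs, weights)
    for j, (t, o) in enumerate(zip(targets, outputs)):
        for i, x in enumerate(inputs):
            weights[j][i] += lr * (t - o) * x  # weights from active inputs change most
    return weights

# Train a 2-input, 1-output net to respond with 1.0 to the pattern (1, 1).
weights = [[0.0, 0.0]]
for _ in range(50):
    delta_rule_step([1.0, 1.0], [1.0], weights)

print(activate([1.0, 1.0], weights))  # output approaches the target 1.0
```

Each step moves the actual output a fraction of the way toward the target, which is why repeated presentations of the training pattern gradually close the gap.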
Such a procedure gradually reduces the difference between the actual response and the target response. In order to construe such networks as cognitive models it is necessary to interpret the input and output units. Localist interpretations treat individual input and output units as representing concepts such as those found in natural language. Distributed interpretations correlate only patterns of activation across a number of units with ordinary-language concepts. Sometimes (but not always) distributed models will interpret individual units as corresponding to microfeatures. In one interesting variation on distributed representation, known as coarse coding, each symbol is assigned to a different subset of the units of the system, and the symbol is viewed as active only if a predefined number of the assigned units are active. In addition to their ability to learn from experience, a number of features of connectionist nets make them particularly attractive for modeling cognitive phenomena. They are extremely efficient at pattern-recognition tasks and often generalize very well from training inputs to similar test inputs. They can often recover complete patterns from partial inputs, making them good models for content-addressable memory. Interactive networks are particularly useful in modeling cognitive tasks in which multiple constraints must be satisfied simultaneously, or in which the goal is to satisfy competing constraints as well as possible. When not all the constraints on a problem can be satisfied, such networks naturally override some of them, thus treating the constraints as soft. While cognitive connectionist models are not intended to model actual neural processing, they suggest how cognitive processes can be realized in neural hardware.
They also exhibit a feature demonstrated by the brain but difficult to achieve in symbolic systems: their performance degrades gracefully as units or connections are disabled or the capacity of the network is exceeded, rather than crashing.
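Pattern completion and graceful degradation can both be seen in a tiny interactive net. The sketch below uses a Hopfield-style network, one standard instance of the interactive nets described above; the stored pattern and network size are invented for the example.

```python
# A tiny interactive (Hopfield-style) net storing one pattern of +1/-1 activations.
# It completes the pattern from a partial cue, and still does so after some
# connections are deleted. Pattern and sizes are invented for illustration.

PATTERN = [1, 1, -1, -1, 1, -1]
N = len(PATTERN)

# Hebbian weights: units that are active together excite each other
# (no self-connections).
weights = [[0 if i == j else PATTERN[i] * PATTERN[j] for j in range(N)]
           for i in range(N)]

def settle(state, sweeps=10):
    """Repeatedly update each unit from the signed sum of its weighted inputs."""
    state = list(state)
    for _ in range(sweeps):
        for i in range(N):
            net = sum(weights[i][j] * state[j] for j in range(N))
            state[i] = 1 if net >= 0 else -1
    return state

cue = [1, 1, -1, 0, 0, 0]   # partial input: the last three units are unknown
print(settle(cue))           # → [1, 1, -1, -1, 1, -1], the stored pattern

for j in range(N):           # lesion: disable every connection into unit 0
    weights[0][j] = 0
print(settle(cue))           # still [1, 1, -1, -1, 1, -1]: recall survives the lesion
```

The cue acts as a content address: the network is driven toward the stored pattern most consistent with it, and deleting connections weakens rather than destroys that behavior.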
Serious challenges have been raised to the usefulness of connectionism as a tool for modeling cognition. Many of these challenges have come from theorists who have focused on the complexities of language, especially the systematicity it exhibits. Jerry Fodor and Zenon Pylyshyn, for example, have emphasized the manner in which the meaning of complex sentences is built up compositionally from the meaning of components, and argue both that compositionality applies to thought generally and that it requires a symbolic system. Therefore, they maintain, while cognitive systems might be implemented in connectionist nets, these nets do not characterize the architecture of the cognitive system itself, which must have capacities for symbol storage and manipulation. Connectionists have developed a variety of responses to these objections, including emphasizing the importance of cognitive functions such as pattern recognition, which have not been as successfully modeled by symbolic systems; challenging the need for symbol processing in accounting for linguistic behavior; and designing more complex connectionist architectures, such as recurrent networks, capable of responding to or producing systematic structures.
See also ARTIFICIAL INTELLIGENCE, COGNITIVE SCIENCE, PHILOSOPHY OF MIND. W.B.
