while other supervenience relations are too strong to serve in formulating non-reductive materialism, since they imply reducibility. According to Kim, non-reductive materialism is therefore an unstable position.
Materialism as a supervenience thesis. Several philosophers have in recent years attempted to define the thesis of materialism using a global supervenience thesis. Their aim is not to formulate a brand of non-reductive materialism; they maintain that their supervenience thesis may well imply reducibility. Their aim is, rather, to formulate a thesis to which anyone who counts as a genuine materialist must subscribe. David Lewis has maintained that materialism is true if and only if any non-alien possible worlds that are physically indiscernible are mentally indiscernible as well. Non-alien possible worlds are worlds that instantiate no perfectly natural properties other than those instantiated in the actual world. Frank Jackson has offered this proposal: materialism is true if and only if any minimal physical duplicate of the actual world is a duplicate simpliciter of the actual world. A world is a physical duplicate of the actual world if and only if it is exactly like the actual world in every physical respect (physical particular for physical particular, physical property for physical property, physical relation for physical relation, etc.); and a world is a duplicate simpliciter of the actual world if and only if it is exactly like the actual world in every respect. A minimal physical duplicate of the actual world is a physical duplicate that contains nothing else (by way of particulars, kinds, properties, etc.) than it must in order to be a physical duplicate of the actual world. Two questions arise for any formulation of the thesis of materialism. Is it adequate to materialism? And, if it is, is it true?

Functionalism. The nineteenth-century British philosopher George Henry Lewes maintained that while not every neurological event is mental, every mental event is neurological. He claimed that what makes certain neurological events mental events is their causal role in the organism. This is a very early version of functionalism, nowadays a leading approach to the mind–body problem. Functionalism implies an answer to the question of what makes a state token a mental state of a certain kind M: namely, that it is an instance of some functional state type identical with M. There are two versions of this proposal (sketched schematically below). On one, a mental state type M of a system will be identical with the state type that plays a certain causal role R in the system. The description ‘the state type that plays R in the system’ will be a nonrigid designator; moreover, different state types may play R in different organisms, in which case the mental state is multiply realizable. On the second version, a mental state type M is identical with a second-order state type: the state of being in some first-order state that plays causal role R. More than one first-order state may play role R, and thus M may be multiply realizable. On either version, if the relevant causal roles are specifiable in physical or topic-neutral terms, then the functional definitions of mental state types will be, in principle, physically reductive. Since the roles would be specified partly in topic-neutral terms, there may well be possible worlds in which the mental states are realized by non-physical states; thus, functionalism does not imply token physicalism. However, functionalists typically maintain that, on the empirical evidence, mental states are realized (in our world) only by physical states. Functionalism comes in many varieties.
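The contrast between the two versions can be put schematically. The notation below is illustrative rather than drawn from the literature; Plays(P, R, S) abbreviates "first-order state type P plays causal role R in system S".

\[
\text{Version 1:}\qquad M_S \;=\; \text{the } P \text{ such that } \mathrm{Plays}(P, R, S)
\]
\[
\text{Version 2:}\qquad M \;=\; \text{the state of being in some } P \text{ such that } \mathrm{Plays}(P, R, \text{one's own system})
\]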
Smart’s topic-neutral analysis of our talk of sensations is in the spirit of functionalism. And Armstrong’s central state materialism counts as a kind of functionalism since it maintains that mental states are states apt to produce a certain range of behavior, and thus identifies states as mental states by their performing this causal role. However, functionalists today typically hold that the defining causal roles include causal roles vis-à-vis input state types, as well as output state types, and also vis-à-vis other internal state types of the system in question.
In the 1960s David Lewis proposed a functionalist theory, analytical functionalism, according to which definitions of mental predicates such as ‘belief’, ‘desire’, and the like (though not predicates such as ‘believes that p’ or ‘desires that q’) can be obtained by conjoining the platitudes of commonsense psychology and formulating the Ramsey sentence for the conjunction. The relevant Ramsey sentence is a second-order quantificational sentence obtained by replacing the mental predicates in the conjunction of commonsense psychological platitudes with variables bound by existential quantifiers, and from it one can derive definitions of the mental predicates. On this view, it will be analytic that a certain mental state (e.g., belief) is the state that plays a certain causal role vis-à-vis other states; and it is a matter of empirical investigation what state plays the role. Lewis claimed that such investigation reveals that the state types that play the roles in question are physical states.
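The construction can be illustrated schematically. Suppose, purely for illustration, that the platitudes are condensed into a single conjunction T(belief, desire, pain) relating these mental state terms to inputs, outputs, and one another; the three-place form and the choice of terms are invented here for illustration only.

\[
\text{Ramsey sentence:}\qquad \exists x_1 \exists x_2 \exists x_3\; T(x_1, x_2, x_3)
\]
\[
\text{Derived definition:}\qquad \mathit{belief} \;=\; \text{the } x_1 \text{ such that } \exists x_2 \exists x_3\; T(x_1, x_2, x_3)
\]

Which state types in fact occupy the quantified positions is then the empirical question just described.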
In the early 1960s, Putnam proposed a version of scientific functionalism, machine state functionalism: according to this view, mental states are types of Turing machine table states. Turing machines are abstract computing devices consisting of a tape with squares on it that either are blank or contain symbols, and an executive that can move one square to the left, move one square to the right, or stay where it is; it can also write a symbol on a square, erase a symbol from a square, or leave the square as it is. (According to the Church-Turing thesis, every computable function can be computed by a Turing machine.) Such a machine is specified by two functions: one from internal states and inputs to outputs, the other from internal states and inputs to internal states. These functions are expressible by counterfactuals (e.g., ‘If the machine is in state s1 and receives input I, it will emit output O and enter state s2’). Machine tables are specified by the counterfactuals that express the functions in question. So the main idea of machine state functionalism is that any given mental state type is definable as the state type that participates in certain counterfactual relationships specified in terms of purely formal, and so not semantically interpreted, state types. Any system whose inputs, outputs, and internal states are counterfactually related in the way characterized by a machine table is a realization of that table. This version of machine state functionalism has been abandoned: no one maintains that the mind has the architecture of a Turing machine. However, computational psychology, a branch of cognitive psychology, presupposes a scientific functionalist view of cognitive states: it takes the mind to have a computational architecture. (See the section on cognitive psychology below.)
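Such a machine table can be sketched in a few lines. The following toy table, with invented states, inputs, and outputs, simply records the two functions just described; any system whose inputs, outputs, and internal states satisfy the corresponding counterfactuals would realize it.

```python
# Toy machine table in the spirit of machine state functionalism. The table
# records, for each (internal state, input) pair, an output and a next state.
# States, inputs, and outputs are uninterpreted labels invented for illustration.

MACHINE_TABLE = {
    # (state, input): (output, next_state)
    ("s1", "I"): ("O", "s2"),    # 'if in s1 and receives I, emit O and enter s2'
    ("s2", "I"): ("O*", "s1"),
}

def step(state, symbol):
    """One transition of the machine the table describes."""
    return MACHINE_TABLE[(state, symbol)]

# Any system (electronic, biological, or otherwise) whose inputs, outputs,
# and internal states are counterfactually related as the table says
# realizes the table.
state = "s1"
for symbol in ["I", "I", "I"]:
    output, state = step(state, symbol)
    print(output, state)   # O s2 / O* s1 / O s2
```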
Functionalism – the view that what makes a state a realization of a mental state is its playing a certain causal role – remains a leading theory of mind. But functionalism faces formidable difficulties. Block has pinpointed one. On the one hand, if the input and output states that figure in the causal role alleged to define a certain mental state are specified in insufficient detail, the functional definition will be too liberal: it will mistakenly classify certain states as instances of that mental type when they are not. On the other hand, if the input and output states are specified in too much detail, the functional definition will be chauvinistic: it will fail to count as instances of the mental state certain states that in fact are such instances. Moreover, it has also been argued that functionalism cannot capture conscious states, since types of conscious states do not admit of functional definitions.

Cognitive psychology, content, and consciousness

Cognitive psychology. Many claim that one aim of cognitive psychology is to provide explanations of intentional capacities: capacities to be in intentional states (e.g., believing) and to engage in intentional activities (e.g., reasoning). Fodor has argued that classical cognitive psychology postulates a cognitive architecture that includes a language of thought: a system of mental representations with a combinatorial syntax and semantics, and computational processes defined over these representations in virtue of their syntactic structures. On this view, cognition is rule-governed symbol manipulation. Mental symbols have meanings, but they participate in computational processes solely in virtue of their syntactic or formal properties. The mind is, so to speak, a syntactic engine. The view implies a kind of content parallelism: syntax-sensitive causal transitions between symbols will preserve semantic coherence. Fodor has maintained that, on this language-of-thought view of cognition (the classical view), being in a belief-that-p state can be understood as consisting in bearing a computational relation (one that is constitutive of belief) to a sentence in the language of thought that means that p; and similarly for desire, intention, and the like. The explanation of intentional capacities will be provided by a computational theory for mental sentences in conjunction with a psychosemantic theory, a theory of meaning for mental sentences.
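The "syntactic engine" idea can be illustrated with a toy sketch, not drawn from any particular theorist's model: the rule below transforms symbol structures purely by their form, yet, under the intended interpretation of the symbols, it takes truths only to truths.

```python
# Toy illustration of rule-governed symbol manipulation. The rule operates on
# the form of the symbol structures alone, never on what they mean; yet under
# the intended interpretation it preserves truth. All names and structures are
# invented for illustration.

# "Mental sentences" are tuples: ("IF", p, q) is a conditional; a bare string
# is an atomic symbol.

def apply_modus_ponens(store):
    """Purely syntactic rule: whenever the store contains ("IF", X, Y) and X,
    add Y. The rule matches shapes, not meanings."""
    derived = set(store)
    for s in store:
        if isinstance(s, tuple) and len(s) == 3 and s[0] == "IF" and s[1] in store:
            derived.add(s[2])
    return derived

# A belief store: the system bears a computational relation to these symbols.
beliefs = {("IF", "rain", "wet-streets"), "rain"}
print(apply_modus_ponens(beliefs))  # now also contains "wet-streets"

# Content parallelism: if the premises are true under the interpretation, so is
# anything the syntactic rule adds.
```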
A research program in cognitive science called connectionism postulates networks of neuron-like units. The units can be either on or off, or can have continuous levels of activation. Units are connected, the connections have various degrees of strength, and the connections can be either inhibitory or excitatory. Connectionism has provided fruitful models for studying how neural networks compute information. Moreover, connectionists have had much success in modeling pattern recognition tasks (e.g., facial recognition) and tasks consisting of learning categories from examples. Some connectionists maintain that connectionism will yield an alternative to the classical language-of-thought account of intentional states and capacities. However, some favor a mixed-models approach to cognition: some cognitive capacities are symbolic, some connectionist. And some hold that connectionism will yield an implementational architecture for a symbolic cognitive architecture, one that will help explain