quantum logic, the logic whose models are certain non-Boolean algebras derived from the mathematical representation of quantum mechanical systems. (The models of classical logic are, formally, Boolean algebras.) This is the central notion of quantum logic in the literature, although the term covers a variety of modal logics, dialogics, and operational logics proposed to elucidate the structure of quantum mechanics and its relation to classical mechanics. The dynamical quantities of a classical mechanical system (position, momentum, energy, etc.) form a commutative algebra, and the dynamical properties of the system (e.g., the property that the position lies in a specified range, or the property that the momentum is greater than zero) form a Boolean algebra. The transition from classical to quantum mechanics involves the transition from a commutative algebra of dynamical quantities to a noncommutative algebra of so-called observables. One way of understanding the conceptual revolution from classical to quantum mechanics is in terms of a shift from the class of Boolean algebras to a class of non-Boolean algebras as the appropriate relational structures for the dynamical properties of mechanical systems, hence from a Boolean classical logic to a non-Boolean quantum logic as the logic applicable to the fundamental physical processes of our universe. This conception of quantum logic was developed formally in a classic 1936 paper by G. Birkhoff and J. von Neumann (although von Neumann first proposed the idea in 1927). The features that distinguish quantum logic from classical logic vary with the formulation. In the Birkhoff-von Neumann logic the distributive law of classical logic fails, but this is by no means a feature of all versions of quantum logic. It follows from Gleason's theorem (1957) that the non-Boolean models do not admit two-valued homomorphisms in the general case: there is no partition of the dynamical properties of a quantum mechanical system into those possessed by the system and those not possessed by it that preserves algebraic structure, and, equivalently, no assignment of values to the observables of the system that preserves algebraic structure. This result was proved independently for finite sets of observables by S. Kochen and E. P. Specker (1967). It follows that the probabilities specified by the Born interpretation of the state function of a quantum mechanical system, for the results of measurements of observables, cannot be derived from a probability distribution over the different possible sets of dynamical properties of the system, or over the different possible sets of values assignable to the observables (of which one set is presumed to be actual), determined by hidden variables in addition to the state function, if these sets of properties or values are required to preserve algebraic structure. Whereas Bell's theorem (1964) excludes hidden variables satisfying a certain locality condition, the Kochen-Specker theorem relates the non-Booleanity of quantum logic to the impossibility of hidden-variable extensions of quantum mechanics in which value assignments to the observables satisfy constraints imposed by the algebraic structure of the observables. See also BOOLEAN ALGEBRA, PHILOSOPHY OF SCIENCE, QUANTUM MECHANICS. J.Bub
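A standard illustration of the failure of distributivity in the Birkhoff-von Neumann logic uses spin. For a spin-½ particle, let p be the proposition that spin along the x-axis is up, q that spin along the z-axis is up, and r that spin along the z-axis is down. In the lattice of subspaces of the state space, q ∨ r is the whole space (the trivially true proposition 1), while p shares no state with q or with r, so that p ∧ q = p ∧ r = 0 (the trivially false proposition). Hence
\[
p \wedge (q \vee r) \;=\; p \wedge \mathbf{1} \;=\; p \;\neq\; \mathbf{0} \;=\; (p \wedge q) \vee (p \wedge r),
\]
and the Boolean identity p ∧ (q ∨ r) = (p ∧ q) ∨ (p ∧ r) fails.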
quantum mechanics, also called quantum theory, the science governing objects of atomic and subatomic dimensions. Developed independently by Werner Heisenberg (as matrix mechanics, 1925) and Erwin Schrödinger (as wave mechanics, 1926), quantum mechanics breaks with classical treatments of the motions and interactions of bodies by introducing probability and acts of measurement in seemingly irreducible ways. In the widely used Schrödinger version, quantum mechanics associates with each physical system a time-dependent function, called the state function (alternatively, the state vector or ψ-function). The evolution of the system is represented by the temporal transformation of the state function in accord with a master equation, known as the Schrödinger equation. Also associated with a system are 'observables': (in principle) measurable quantities, such as position, momentum, and energy, including some with no good classical analogue, such as spin. According to the Born interpretation (1926), the state function is understood instrumentally: it enables one to calculate, for any possible value of an observable, the probability that a measurement of that observable would find that particular value.
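In modern notation, the Born interpretation may be stated schematically as follows: if an observable A has the possible value a, with normalized eigenstate |a⟩, and the system's state vector is |ψ⟩, then the probability that a measurement of A yields a is
\[
\Pr(A = a \mid \psi) \;=\; \bigl|\langle a \mid \psi \rangle\bigr|^{2}.
\]
The state function thus delivers probabilities for measurement outcomes rather than definite values of the observables themselves.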
The formal properties of observables and state functions imply that certain pairs of observables (such as linear momentum in a given direction, and position in the same direction) are incompatible in the sense that no state function assigns probability 1 to the simultaneous determination of exact values for both observables. This is a qualitative statement of the Heisenberg uncertainty principle (alternatively, the indeterminacy principle, or just the uncertainty principle). Quantitatively, that principle places a precise limit on the accuracy with which one may simultaneously measure a pair of incompatible observables. There is no corresponding limit, however, on the accuracy with which a single observable (say, position alone, or momentum alone) may be measured. The uncertainty principle is sometimes understood in terms of complementarity, a general perspective proposed by Niels Bohr according to which the connection between quantum phenomena and observation forces our classical concepts to split into mutually exclusive packages, both of which are required for a complete understanding but only one of which is applicable under any particular experimental conditions. Some take this to imply an ontology in which quantum objects do not actually possess simultaneous values for incompatible observables; e.g., do not have simultaneous position and momentum. Others would hold, e.g., that measuring the position of an object causes an uncontrollable change in its momentum, in accord with the limits on simultaneous accuracy built into the uncertainty principle. These ways of treating the principle are not uncontroversial.
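In its standard modern form, for position x and momentum p in the same direction, the quantitative principle reads
\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\]
where Δx and Δp are the standard deviations of position and momentum in a given state and ħ is the reduced Planck constant. No quantum state makes both Δx and Δp arbitrarily small at once, although either alone may be made as small as one likes.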
Philosophical interest arises in part from where the quantum theory breaks with classical physics: namely, from the apparent breakdown of determinism (or causality) that seems to result from the irreducibly statistical nature of the theory, and from the apparent breakdown of observer-independence or realism that seems to result from the fundamental role of measurement in the theory. Both features relate to the interpretation of the state function as providing only a summary of the probabilities for various measurement outcomes. Einstein, in particular, criticized the theory on these grounds, and in 1935 suggested a striking thought experiment to show that, assuming no action-at-a-distance, one would have to consider the state function an incomplete description of the real physical state of an individual system, and therefore quantum mechanics merely a provisional theory. Einstein's example involved a pair of systems that interact briefly and then separate, but in such a way that the outcomes of various measurements performed on each system, separately, show an uncanny correlation. In 1951 the physicist David Bohm simplified Einstein's example, and later (1957) indicated that it might be realizable experimentally. The physicist John S. Bell then formulated a locality assumption (1964), similar to Einstein's, that constrains factors which might be used in describing the state of an individual system, so-called hidden variables. Locality requires that in the Einstein-Bohm experiment the hidden variables not allow the measurement performed on one system in a correlated pair immediately to influence the outcome obtained in measuring the other, spatially separated system. Bell demonstrated that locality (in conjunction with other assumptions about hidden variables) restricts the probabilities for measurement outcomes according to a system of inequalities known as the Bell inequalities, and that the probabilities of certain quantum systems violate these inequalities. This is Bell's theorem. Subsequently several experiments of the Einstein-Bohm type have been performed to test the Bell inequalities. Although the results have not been univocal, the consensus is that the experimental data support the quantum theory and violate the inequalities. Current research seeks to evaluate the implications of these results, including the extent to which they rule out local hidden variables. (See J. Cushing and E. McMullin, eds., Philosophical Consequences of Quantum Theory, 1989.)
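The form of the Bell inequalities most often tested experimentally is the CHSH inequality (Clauser, Horne, Shimony, and Holt, 1969). Writing E(a, b) for the expectation of the product of the two ±1-valued outcomes at detector settings a and b, any local hidden-variable theory requires
\[
\bigl| E(a,b) - E(a,b') + E(a',b) + E(a',b') \bigr| \;\le\; 2,
\]
whereas quantum mechanics predicts correlations reaching 2√2 ≈ 2.83 for suitably chosen settings on an entangled pair, in agreement with the experimental results just mentioned.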
The descriptive incompleteness with which Einstein charged the theory suggests other problems. A particularly dramatic one arose in correspondence between Schrödinger and Einstein: namely, the 'gruesome' Schrödinger cat paradox. Here a cat is confined in a closed chamber containing a radioactive atom with a fifty-fifty chance of decaying in the next hour. If the atom decays it triggers a relay that causes a hammer to fall and smash a glass vial holding a quantity of prussic acid sufficient to kill the cat. According to the Schrödinger equation, after an hour the state function for the entire atom + relay + hammer + glass vial + cat system is such that, if we observe the cat, the probability of finding it alive (dead) is 50 percent. However, this evolved state function is one for which there is no definite result; according to it, the cat is neither alive nor dead. How then does any definite fact of the matter arise, and when? Is the act of observation itself instrumental in bringing about the observed result, does that result come about by virtue of some special random process, or is there some other account compatible with definite results of measurements? This is the so-called quantum measurement problem, and it too is an active area of research.
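Schematically, and suppressing the relay, hammer, and vial, the evolved state function after the hour is an entangled superposition of the form
\[
|\Psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\Bigl( |\text{undecayed}\rangle\,|\text{alive}\rangle \;+\; |\text{decayed}\rangle\,|\text{dead}\rangle \Bigr).
\]
Each branch receives probability ½ under the Born interpretation, but the Schrödinger evolution by itself never selects one branch over the other; that is the measurement problem in formal dress.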
See also DETERMINISM, EINSTEIN, FIELD THEORY, PHILOSOPHY OF SCIENCE, RELATIVITY. A.F.