The claim that any theory appealing to posited unobservable features of the world in its explanatory apparatus will always encounter rival theories incompatible with the original theory, but equally compatible with all possible observational data that might be taken as confirmatory of the original theory, is the underdetermination thesis.
A generalization taken to have ‘lawlike force’ is called a law of nature. Some suggested criteria for generalizations having lawlike force are the ability of the generalization to back up the truth of claims expressed as counterfactual conditionals; the ability of the generalization to be confirmed inductively on the basis of evidence that is only a proper subset of all the particular instances falling under the generalization; and the generalization having an appropriate place in the simple, systematic hierarchy of generalizations important for fundamental scientific theories of the world.

The application of a scientific law to a given actual situation is usually hedged with the proviso that, for the law’s predictions to hold, ‘all other, unspecified, features of the situation are normal.’ Such a qualifying clause is called a ceteris paribus clause. Such ‘everything else being normal’ claims cannot usually be fully ‘filled out,’ revealing important problems concerning the ‘open texture’ of scientific claims.

The claim that the full specification of the state of the world at one time is sufficient, along with the laws of nature, to fix the full state of the world at any other time, is the claim of determinism. This is not to be confused with claims of total predictability, since even if determinism were true the full state of the world at a time might be, in principle, unavailable for knowledge.

Concepts of the foundations of physical theories. Here, finally, are a few concepts that are crucial in discussing the foundations of physical theories, in particular theories of space and time and quantum theory:

The doctrine that space and time must be thought of as a family of spatial and temporal relations holding among the material constituents of the universe is called relationism. Relationists deny that ‘space itself’ should be considered an additional constituent of the world over and above the world’s material contents. The doctrine that ‘space itself’ must be posited as an additional constituent of the world over and above the ordinary material things of the world is substantivalism.

Mach’s principle is the demand that all physical phenomena, including the existence of the inertial forces used by Newton to argue for a substantivalist position, be explainable in purely relationist terms. Mach speculated that Newton’s explanation of the forces in terms of acceleration with respect to ‘space itself’ could be replaced with an explanation resorting to the acceleration of the test object with respect to the remaining matter of the universe (the ‘fixed stars’).

In quantum theory the claim that certain ‘conjugate’ quantities, such as position and momentum, cannot be simultaneously ‘determined’ to arbitrary degrees of accuracy is the uncertainty principle. The issue of whether such a lack of simultaneous exact ‘determination’ is merely a limitation on our knowledge of the system or is, instead, a limitation on the system’s having simultaneous exact values of the conjugate quantities, is a fundamental one in the interpretation of quantum mechanics.
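For concreteness, the uncertainty relation for the conjugate pair position and momentum can be written out explicitly. The display below is a standard textbook formulation, added here only as an illustration and not part of the original entry:

```latex
% Heisenberg uncertainty relation for position x and momentum p:
% \Delta x and \Delta p are the standard deviations of the two quantities
% in a given quantum state, and \hbar is the reduced Planck constant.
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

No quantum state makes both standard deviations arbitrarily small at once; this is the formal content of the claim that the two quantities cannot be simultaneously ‘determined’ to arbitrary accuracy.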
Bell’s theorem is a mathematical result aimed at showing that the statistical correlations holding between causally non-interacting systems cannot always be explained by positing that, when the systems did causally interact in the past, independent values were fixed for some feature of each of the two systems that determined their future observational behavior. The existence of such ‘local hidden variables’ would contradict the correlational predictions of quantum mechanics. The result shows that quantum mechanics has a profoundly ‘non-local’ nature.
Can quantum probabilities and correlations be obtained as averages over variables at some deeper level than those specifying the quantum state of a system? If such quantities exist they are called hidden variables. Many different types of hidden variables have been proposed: deterministic, stochastic, local, non-local, etc. A number of proofs exist to the effect that positing certain types of hidden variables would force probabilistic results at the quantum level that contradict the predictions of quantum theory.
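The tension between quantum correlations and local hidden variables can be made concrete with the CHSH form of Bell’s inequality. The sketch below is illustrative only and not part of the original entry: it assumes NumPy and the standard CHSH measurement settings, and contrasts the singlet-state correlation E(a, b) = -cos(a - b), which gives a CHSH value of 2√2, with a simple deterministic local-hidden-variable toy model, which cannot exceed the local bound of 2.

```python
import numpy as np

# Standard CHSH measurement settings (in radians) for the two sides.
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4

def E_qm(x, y):
    # Quantum-mechanical correlation for the spin-singlet state.
    return -np.cos(x - y)

S_qm = E_qm(a, b) - E_qm(a, b2) + E_qm(a2, b) + E_qm(a2, b2)

# Toy deterministic local-hidden-variable model: each pair carries a shared
# hidden angle lam, fixed when the systems interacted at the source; each
# side's +/-1 outcome depends only on its own setting and lam.
rng = np.random.default_rng(0)
lam = rng.uniform(0.0, 2.0 * np.pi, 200_000)

def E_lhv(x, y):
    # Monte Carlo estimate of the toy model's correlation.
    A = np.sign(np.cos(x - lam))
    B = -np.sign(np.cos(y - lam))   # perfectly anticorrelated at equal settings
    return float(np.mean(A * B))

S_lhv = E_lhv(a, b) - E_lhv(a, b2) + E_lhv(a2, b) + E_lhv(a2, b2)

print(f"quantum CHSH value |S| = {abs(S_qm):.3f} (2*sqrt(2), above the local bound)")
print(f"toy LHV estimate   |S| = {abs(S_lhv):.3f} (exactly 2 in the infinite-sample "
      f"limit; Bell's bound for any local model is |S| <= 2)")
```

The point of the contrast is the one made above: no assignment of pre-fixed local values can reproduce correlations as strong as those quantum mechanics predicts for suitably chosen measurement settings.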
Complementarity was the term used by Niels Bohr to describe what he took to be a fundamental structure of the world revealed by quantum theory. Sometimes it is used to indicate the fact that magnitudes occur in conjugate pairs subject to the uncertainty relations. Sometimes it is used more broadly to describe such aspects as the ability to encompass some phenomena in a wave picture of the world and other phenomena in a particle picture, but implying that no one picture will do justice to all the experimental results.
The orthodox formalization of quantum theory posits two distinct ways in which the quantum state can evolve. When the system is ‘unobserved,’ the state evolves according to the deterministic Schrödinger equation. When ‘measured,’ however, the system suffers a discontinuous ‘collapse of the wave packet’ into a new quantum state determined by the outcome of the measurement process. Understanding how to reconcile the measurement process with the laws of dynamical evolution of the system is the measurement problem.

Conservation and symmetry. A number of important physical principles stipulate that some physical quantity is conserved, i.e., that its total amount remains invariant over time. Early conservation principles were those of matter (mass), of energy, and of momentum. These became assimilated together in the relativistic principle of the conservation of momentum-energy. Other conservation laws (such as the conservation of baryon number) arose in the theory of elementary particles. A symmetry in physical theory expresses the invariance of some structural feature of the world under some transformation. Examples are translation and rotation invariance in space and the invariance under transformation from one uniformly moving reference frame to another. Such symmetries express the fact that systems related by symmetry transformations behave alike in their physical evolution. Some symmetries are connected with space-time, such as those noted above, whereas others (such as the symmetry of electromagnetism under so-called gauge transformations) are not. A very important result of the mathematician Emmy Noether shows that each conservation law is derivable from the existence of an associated underlying symmetry.

Chaos theory and chaotic systems. In the history of the scientific study of deterministic systems, the paradigm of explanation has been the prediction of the future states of a system from a specification of its initial state. In order for such a prediction to be useful, however, nearby initial states must lead to future states that are close to one another. This is now known to hold only in exceptional cases. In general, deterministic systems are chaotic systems, i.e., even initial states very close to one another will lead, in short intervals of time, to future states that diverge quickly from one another. Chaos theory has been developed to provide a wide range of concepts useful for describing the structure of the dynamics of such chaotic systems. The theory studies the features of a system that determine whether its evolution is chaotic or non-chaotic and provides the necessary descriptive categories for characterizing types of chaotic motion.

Randomness. The intuitive distinction between a sequence that is random and one that is orderly plays a role in the foundations of probability theory and in the scientific study of dynamical systems. But what is a random sequence? Subjectivist definitions of randomness focus on the inability of an agent to determine, on the basis of his knowledge, the future occurrences in the sequence. Objectivist definitions of randomness seek to characterize it without reference to the knowledge of any agent. Some approaches to defining objective randomness are those that require probability to be the same in the original sequence and in subsequences ‘mechanically’ selectable from it, and those that define a sequence as random if it passes every ‘effectively constructible’ statistical test for randomness.
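To make the sensitive dependence on initial conditions characteristic of chaotic systems concrete, the following sketch (not part of the original entry; the logistic map is used purely as a standard textbook example of a chaotic system) iterates one fixed deterministic rule from two initial states that differ by only one part in a million:

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4: a deterministic
# rule whose orbits show sensitive dependence on initial conditions.
def logistic_orbit(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

orbit_a = logistic_orbit(0.400000)      # one initial state
orbit_b = logistic_orbit(0.400001)      # a nearby state, shifted by 1e-6

# The gap between the two orbits grows from 1e-6 to order 1 within a few
# dozen steps, even though every step is fixed by the same equation.
for n in (0, 10, 20, 30, 40):
    print(f"step {n:2d}: |x_a - x_b| = {abs(orbit_a[n] - orbit_b[n]):.6f}")
```

Each orbit is fully determined by its initial state, yet prediction of the far future from a finitely precise specification of that state fails quickly; this is the sense in which a system can be both deterministic and chaotic.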
Another important attempt to characterize objective randomness compares the length of a sequence to the length of a computer program that can be used to generate the sequence. The basic idea is that a sequence is random if the shortest computer program that generates it is essentially as long as the sequence itself. See also CONFIRMATION, DUHEM, EXPLANATION, HYPOTHETICO-DEDUCTIVE METHOD, LAWLIKE GENERALIZATION, PHILOSOPHY OF THE SOCIAL SCIENCES, SCIENTIFIC REALISM, THEORETICAL TERM. L.S.