induction, (1) in the narrow sense, inference to a generalization from its instances; (2) in the broad sense, any ampliative inference – i.e., any inference where the claim made by the conclusion goes beyond the claim jointly made by the premises. Induction in the broad sense includes, as cases of particular interest: argument by analogy, predictive inference, inference to causes from signs and symptoms, and confirmation of scientific laws and theories. The narrow sense covers one extreme case that is not ampliative: mathematical induction, where the premises of the argument necessarily imply the generalization that is its conclusion.

Inductive logic can be conceived most generally as the theory of the evaluation of ampliative inference. In this sense, much of probability theory, theoretical statistics, and the theory of computability are parts of inductive logic. In addition, studies of scientific method can be seen as addressing in a less formal way the question of the logic of inductive inference. The name ‘inductive logic’ has also, however, become associated with a specific approach to these issues deriving from the work of Bayes, Laplace, De Morgan, and Carnap. On this approach, one’s prior probabilities in a state of ignorance are determined or constrained by some principle for the quantification of ignorance, and one learns by conditioning on the evidence. A recurrent difficulty with this line of attack is that the way in which ignorance is quantified depends on how the problem is described, with different logically equivalent descriptions leading to different prior probabilities.

Carnap laid down as a postulate for the application of his inductive logic that one should always condition on one’s total evidence. This rule of total evidence is usually taken for granted, but what justification is there for it? Good pointed out that the standard Bayesian analysis of the expected value of new information provides such a justification: pure cost-free information always has non-negative expected value, and if there is positive probability that it will affect a decision, its expected value is positive. Ramsey had made the same point in an unpublished manuscript. The proof generalizes to various models of learning uncertain evidence.

On a deductive account sometimes given, induction proceeds by elimination of possibilities that would make the conclusion false. Thus Mill’s methods of experimental inquiry are sometimes analyzed as proceeding by elimination of alternative possibilities. In a more general setting, the hypothetico-deductive account of science holds that theories are confirmed by their observational consequences – i.e., by elimination of the possibilities that this experiment or that observation falsifies the theory. Induction by elimination is sometimes put forth as an alternative to probabilistic accounts of induction, but at least one version of it is consistent with – and indeed a consequence of – probabilistic accounts. It is an elementary fact of probability that if F, the potential falsifier, is inconsistent with T, and both have probability strictly between 0 and 1, then the probability of T conditional on not-F is higher than the unconditional probability of T.
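A minimal derivation of this last fact (a sketch, in our notation rather than the entry’s, assuming only the ratio definition of conditional probability): since F is inconsistent with T, P(T ∧ ¬F) = P(T); and since P(F) > 0, P(¬F) < 1. Hence

\[
P(T \mid \neg F) \;=\; \frac{P(T \wedge \neg F)}{P(\neg F)} \;=\; \frac{P(T)}{P(\neg F)} \;>\; P(T).
\]

Thus surviving a potential falsifier always raises the probability of the theory, though only slightly when P(F) is small.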
In a certain sense, inductive support of a universal generalization by its instances may be a special case of the foregoing, but the point must be treated with some care. In the first place, the universal generalization must have positive prior probability. (It is worth noting that Carnap’s systems of inductive logic do not satisfy this condition, although systems of Hintikka and Niiniluoto do.) In the second place, the notion of instance must be construed so that the ‘instances’ of a universal generalization are in fact logical consequences of it. Thus ‘If A is a swan then A is white’ is an instance of ‘All swans are white’ in the appropriate sense, but ‘A is a white swan’ is not. The latter statement is logically stronger than ‘If A is a swan then A is white’, and a complete report on the species, weight, color, sex, etc., of individual A would be stronger still. Such statements are not logical consequences of the universal generalization, and the confirmation theorem does not hold for them, as the sketch below makes explicit. For example, the report of a man 7 feet 11¾ inches tall might actually reduce the probability of the generalization that all men are under 8 feet tall.
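The positive case rests on the same elementary fact (again a sketch in our notation): if the generalization H logically entails the instance statement E, then P(H ∧ E) = P(H), so whenever P(H) > 0 and P(E) < 1,

\[
P(H \mid E) \;=\; \frac{P(H \wedge E)}{P(E)} \;=\; \frac{P(H)}{P(E)} \;>\; P(H).
\]

When E is logically stronger than any consequence of H – as with a complete report on individual A – the first equality fails and no such guarantee remains, which is why the report of the 7-foot-11¾-inch man can lower the probability of the generalization.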
Residual queasiness about the foregoing may be dispelled by a point made by Carnap apropos of Hempel’s discussion of the paradoxes of confirmation. ‘Confirmation’ is ambiguous. ‘E confirms H’ may mean that the probability of H conditional on E is greater than the unconditional probability of H, in which case deductive consequences of H confirm H under the conditions set forth above. Or ‘E confirms H’ may mean that the probability of H conditional on E is high (e.g., greater than .95), in which case if E confirms H, then E confirms every logical consequence of H. Conflation of the two senses can lead one to the paradoxical conclusion that E confirms E & P, and thus P, for any statement P: in the first sense E does confirm E & P (since P(E & P | E) = P(P | E) > P(E)·P(P | E) = P(E & P) whenever 0 < P(E) < 1 and P(E & P) > 0), but only in the second sense would that confirmation transmit to the consequence P. See also CONFIRMATION, MATHEMATICAL INDUCTION, MILL’S METHODS, PROBLEM OF INDUCTION. B.Sk.