Bernoulli’s theorem, also called the (weak) law of large numbers, the principle that if a series of trials is repeated n times where (a) there are two possible outcomes, 0 and 1, on each trial, (b) the probability p of 0 is the same on each trial, and (c) this probability is independent of the outcomes of other trials, then, for arbitrary positive ε, as the number n of trials is increased, the probability that the absolute value |r/n – p| of the difference between the relative frequency r/n of 0’s in the n trials and p is less than ε approaches 1. The first proof of this theorem was given by Jakob Bernoulli in Part IV of his posthumously published Ars Conjectandi of 1713. Simpler proofs were later constructed, and his result has been generalized in a series of ‘weak laws of large numbers.’ Although Bernoulli’s theorem derives a conclusion about the probability of the relative frequency r/n of 0’s for a large number n of trials given the value of p, in Ars Conjectandi and in correspondence with Leibniz, Bernoulli thought it could be used to reason from information about r/n to the value of p when the latter is unknown. Speculation persists as to whether Bernoulli anticipated the inverse inference of Bayes, the confidence interval estimation of Peirce, J. Neyman, and E. S. Pearson, or the fiducial argument of R. A. Fisher. See also PROBABILITY. I.L.
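The convergence the theorem asserts can be illustrated by simulation. The sketch below (an illustration, not part of the original entry; the parameter values p = 0.3 and ε = 0.05 are arbitrary choices) estimates, for several values of n, the probability that |r/n – p| < ε by repeating the whole n-trial experiment many times and counting how often the relative frequency lands within ε of p.

```python
import random

def relative_frequency(p: float, n: int, rng: random.Random) -> float:
    """Relative frequency r/n of outcome 0 in n Bernoulli trials,
    where each trial yields outcome 0 with probability p."""
    r = sum(1 for _ in range(n) if rng.random() < p)
    return r / n

# Estimate P(|r/n - p| < eps) for increasing n; Bernoulli's theorem
# says this probability approaches 1 as n grows.
rng = random.Random(1713)        # fixed seed for reproducibility
p, eps, runs = 0.3, 0.05, 500    # illustrative, arbitrary choices
results = {}
for n in (10, 100, 1000):
    hits = sum(1 for _ in range(runs)
               if abs(relative_frequency(p, n, rng) - p) < eps)
    results[n] = hits / runs
    print(f"n={n:5d}  estimated P(|r/n - p| < {eps}) = {results[n]:.3f}")
```

Running this shows the estimated probability climbing toward 1 as n increases, which is exactly the behavior the theorem guarantees; it says nothing, of course, about inferring an unknown p from an observed r/n, which is the inverse problem Bernoulli hoped the theorem would address.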