the study of ontological and epistemological problems raised by the content and practice of mathematics. The present agenda in this field evolved from critical developments, notably the collapse of Pythagoreanism, the development of the modern calculus, and an early twentieth-century foundational crisis, which forced mathematicians and philosophers to examine mathematical methods and presuppositions.
Greek mathematics. The Pythagoreans, who represented the height of early demonstrative Greek mathematics, believed that all scientific relations were measurable by natural numbers (1, 2, 3, etc.) or ratios of natural numbers, and thus they assumed discrete, atomic units for the measurement of space, time, and motion. The discovery of irrational magnitudes scotched the first of these beliefs. Zeno’s paradoxes showed that the second was incompatible with the natural assumption that space and time are infinitely divisible. The Greek reaction, ultimately codified in Euclid’s Elements, included Plato’s separation of mathematics from empirical science and, within mathematics, the distinction between number theory – a study of discretely ordered entities – and geometry, which concerns continua. Following Aristotle (and employing methods perfected by Eudoxus), Euclid’s proofs used only ‘potentially infinite’ geometric and arithmetic procedures. The Elements’ axiomatic form and its constructive proofs set a standard for future mathematics. Moreover, its dependence on visual intuition (whose consequent deductive gaps were already noted by Archimedes), together with the challenge of Euclid’s infamous fifth postulate (about parallel lines) and the famous unsolved problems of compass and straightedge construction, established an agenda for generations of mathematicians.
The calculus. The two millennia following Euclid saw new analytical tools (e.g., Descartes’s geometry) that wedded arithmetic and geometric considerations and toyed with infinitesimally small quantities. These, together with the demands of physical application, tempted mathematicians to abandon the pristine Greek dichotomies. Matters came to a head with Newton’s and Leibniz’s (almost simultaneous) discovery of the powerful computational techniques of the calculus. While these unified physical science in an unprecedented way, their dependence on unclear notions of infinitesimal spatial and temporal increments emphasized their shaky philosophical foundation. Berkeley, for instance, condemned the calculus for its unintuitability. However, this time the power of the new methods inspired a decidedly conservative response. Kant, in particular, tried to anchor the new mathematics in intuition. Mathematicians, he claimed, construct their objects in the ‘pure intuitions’ of space and time. And these mathematical objects are the a priori forms of transcendentally ideal empirical objects. For Kant this combination of epistemic empiricism and ontological idealism explained the physical applicability of mathematics and thus granted ‘objective validity’ (i.e., scientific legitimacy) to mathematical procedures.
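Berkeley’s complaint is vivid in the standard early computation of a derivative by evanescent increments. To differentiate $x^2$, one takes a small nonzero increment $o$ and forms the ratio

$$\frac{(x+o)^2 - x^2}{o} = \frac{2xo + o^2}{o} = 2x + o,$$

then lets $o$ ‘vanish,’ leaving $2x$. The increment is treated as nonzero when one divides by it and as zero when one discards it; such vanished increments are, in Berkeley’s famous phrase from The Analyst, ‘the ghosts of departed quantities.’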
Two nineteenth-century developments undercut this Kantian constructivism in favor of a more abstract conceptual picture of mathematics. First, János Bolyai, Carl F. Gauss, Bernhard Riemann, Nikolai Lobachevsky, and others produced consistent non-Euclidean geometries, which undid the Kantian picture of a single a priori science of space and once again opened a rift between pure mathematics and its physical applications. Second, Cantor and Dedekind defined the real numbers (i.e., the elements of the continuum) as infinite sets of rational (and ultimately natural) numbers. Thus they founded mathematics on the concepts of infinite set and natural number. Cantor’s set theory made the first concept rigorously mathematical, while Peano and Frege (both of whom advocated securing rigor by using formal languages) did the same for the second. Peano axiomatized number theory, and Frege ontologically reduced the natural numbers to sets (indeed sets that are the extensions of purely logical concepts). Frege’s Platonistic conception of numbers as unintuitable objects and his claim that mathematical truths follow analytically from purely logical definitions – the thesis of logicism – are both highly anti-Kantian.
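Dedekind’s construction shows what such a reduction looks like. A real number can be identified with a cut: a nonempty, proper, downward-closed set of rationals with no greatest element. For example,

$$\sqrt{2} = \{\, q \in \mathbb{Q} : q < 0 \text{ or } q^2 < 2 \,\},$$

so each element of the continuum is literally an infinite set of rationals, and the rationals are in turn built from pairs of natural numbers.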
Foundational crisis and movements. But anti-Kantianism had its own problems. For one thing, Leopold Kronecker, who (following Peter Dirichlet) wanted mathematics reduced to arithmetic and no further, attacked Cantor’s abstract set theory on doctrinal grounds. Worse yet, the discovery of internal antinomies challenged the very consistency of abstract foundations. The most famous of these, Russell’s paradox (the set of all sets that are not members of themselves both is and is not a member of itself; see the derivation below), undermined Frege’s basic assumption that every well-formed concept has an extension. This was a full-scale crisis. To be sure, Russell himself (together with Whitehead) preserved the logicist foundational approach by organizing the universe of sets into a hierarchy of levels so that no set can be a member of itself. (This is type theory.)
However, the crisis encouraged two explicitly Kantian foundational projects. The first, Hilbert’s Program, attempted to secure the ‘ideal’ (i.e., infinitary) parts of mathematics by formalizing them and then proving the resultant formal systems to be conservative (and hence consistent) extensions of finitary theories. Since the proof itself was to use no reasoning more complicated than simple numerical calculations – finitary reasoning – the whole metamathematical project belonged to the untainted (‘contentual’) part of mathematics. Finitary reasoning was supposed to update Kant’s intuition-based epistemology, and Hilbert’s consistency proofs mimic Kant’s notion of objective validity. The second project, Brouwer’s intuitionism, rejected formalization, and was not only epistemologically Kantian (resting mathematical reasoning on the a priori intuition of time), but ontologically Kantian as well. For intuitionism generated both the natural and the real numbers by temporally ordered conscious acts. The reals, in particular, stem from choice sequences, which exploit Brouwer’s epistemic assumptions about the open future.
These foundational movements ultimately failed. Type theory required ad hoc axioms to express the real numbers; Hilbert’s Program foundered on Gödel’s incompleteness theorems; and intuitionism remained on the fringes because it rejected classical logic and standard mathematics. Nevertheless the legacy of these movements – their formal methods, indeed their philosophical agenda – still characterizes modern research on the ontology and epistemology of mathematics. Set theory, e.g. (despite recent challenges from category theory), is the lingua franca of modern mathematics. And formal languages with their precise semantics are ubiquitous in technical and philosophical discussions. Indeed, even intuitionistic mathematics has been formalized, and Michael Dummett has recast its ontological idealism as a semantic antirealism that defines truth as warranted assertability. In a similar semantic vein, Paul Benacerraf proposed that the philosophical problem with Hilbert’s approach is its inability to provide a uniform realistic (i.e., referential, non-epistemic) semantics for the allegedly ideal and contentual parts of mathematics, and the problem with Platonism is that its semantics makes its objects unknowable.
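In modern notation, the paradox that undid Frege’s assumption takes two lines. Naive comprehension lets any condition determine a set; applying it to the condition $x \notin x$ gives

$$R = \{\, x : x \notin x \,\}, \qquad\text{whence}\qquad R \in R \leftrightarrow R \notin R,$$

a contradiction. Type theory blocks the derivation by making $x \in x$ (and hence $x \notin x$) ill-formed.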
Ontological issues. From this modern perspective, the simplest realism is the outright Platonism that attributes a standard model consisting of ‘independent’ objects to classical theories expressed in a first-order language (i.e., a language whose quantifiers range over objects but not properties). But in fact realism admits variations on each aspect.
For one thing, the Löwenheim-Skolem theorem shows that formalized theories can have non-standard models. There are expansive non-standard models: Abraham Robinson, e.g., used infinitary non-standard models of Peano’s axioms to reintroduce infinitesimals rigorously. (Roughly, an infinitesimal is the reciprocal of an infinite element in such a model; see the illustration below.) And there are also ‘constructive’ models, whose objects must be explicitly definable. Predicative theories (inspired by Poincaré and Hermann Weyl), whose stage-by-stage definitions refer only to previously defined objects, produce one variety of such models. Gödel’s constructible universe, which uses less restricted definitions to model apparently non-constructive axioms like the axiom of choice, exemplifies another variety.
But there are also views (various forms of structuralism) which deny that formal theories have unique standard models at all. These views – inspired by the fact, already sensed by Dedekind, that there are multiple equally valid realizations of formal arithmetic – allow a mathematical theory to characterize only a broad family of models and deny unique reference to mathematical terms. Finally, some realistic approaches advocate formalization in second-order languages, and some eschew ordinary semantics altogether in favor of substitutional quantification. (These latter are still realistic, for they still distinguish truth from knowledge.)
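The parenthetical remark about infinitesimals can be made precise. In a non-standard model of analysis, an element $N$ is infinite when $N > n$ for every standard natural number $n$; its reciprocal then satisfies

$$0 < \frac{1}{N} < \frac{1}{n} \quad \text{for every standard } n,$$

so $1/N$ is a positive quantity smaller than every standard positive quantity – an infinitesimal in just the sense the early calculus needed.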
Strict finitists – inspired by Wittgenstein’s more stringent epistemic constraints – reject even the open-futured objects admitted by Brouwer, and countenance only finite (or even only ‘feasible’) objects. In the other direction, A. A. Markov and his school in Russia introduced a syntactic notion of algorithm from which they developed the field of ‘constructive analysis.’ And the American mathematician Errett Bishop, starting from a Brouwer-like disenchantment with mathematical realism and with strictly formal approaches, recovered large parts of classical analysis within a non-formal constructive framework.
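Bishop’s recovery of analysis illustrates the non-formal constructive style. He identifies a real number with a regular sequence of rationals, i.e., a sequence $(q_n)$ satisfying

$$|q_m - q_n| \le \frac{1}{m} + \frac{1}{n} \quad \text{for all } m, n \ge 1,$$

so that every real carries an explicit rate of convergence, and assertions about reals can be cashed out as computations on their rational approximations.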
All of these approaches assume abstract (i.e., causally isolated) mathematical objects, and thus they have difficulty explaining the wide applicability of mathematics (constructive or otherwise) within empirical science. One response, Quine’s ‘indispensability’ view, integrates mathematical theories into the general network of empirical science. For Quine, mathematical objects – just like ordinary physical objects – exist simply in virtue of being referents for terms in our best scientific theory. By contrast Hartry Field, who denies that any abstract objects exist, also denies that any purely mathematical assertions are literally true. Field attempts to recast physical science in a relational language without mathematical terms and then to use Hilbert-style conservative extension results to explain the evident utility of abstract mathematics. Hilary Putnam and Charles Parsons have each suggested views according to which mathematics has no objects proper to itself, but rather concerns only the possibilities of physical constructions. Recently, Geoffrey Hellman has combined this modal approach with structuralism.
Epistemological issues. The equivalence (proved in the 1930s) of several different representations of computability – Turing machines, λ-definability, general recursiveness – to the reasoning representable in elementary formalized arithmetic led Alonzo Church to suggest that the notion of finitary reasoning had been precisely defined. Church’s thesis (so named by Stephen Kleene) inspired Georg Kreisel’s investigations (in the 1960s and 70s) of the general conditions for rigorously analyzing other informal philosophical notions like semantic consequence, Brouwerian choice sequences, and the very notion of