computer theory

the theory of the design, uses, powers, and limits of modern electronic digital computers. It has important bearings on philosophy, as may be seen from the many philosophical references herein. Modern computers are a radically new kind of machine, for they are active physical realizations of formal languages of logic and arithmetic. Computers employ sophisticated languages, and they have reasoning powers many orders of magnitude greater than those of any prior machines. Because they are far superior to humans in many important tasks, they have produced a revolution in society that is as profound as the industrial revolution and is advancing much more rapidly. Furthermore, computers themselves are evolving rapidly.
When a computer is augmented with devices for sensing and acting, it becomes a powerful control system, or a robot. To understand the implications of computers for philosophy, one should imagine a robot that has basic goals and volitions built into it, including conflicting goals and competing desires. This concept first appeared in Karel Čapek’s play Rossum’s Universal Robots (1920), where the word ‘robot’ originated.
A computer has two aspects, hardware and programming languages. The theory of each is relevant to philosophy.
The software and hardware aspects of a computer are somewhat analogous to the human mind and body. This analogy is especially strong if we follow Peirce and consider all information processing in nature and in human organisms, not just the conscious use of language. Evolution has produced a succession of levels of sign usage and information processing: self-copying chemicals, self-reproducing cells, genetic programs directing the production of organic forms, chemical and neuronal signals in organisms, unconscious human information processing, ordinary languages, and technical languages. But each level evolved gradually from its predecessors, so that the line between body and mind is vague.

The hardware of a computer is typically organized into three general blocks: memory, processor (arithmetic unit and control), and various input-output devices for communication between machine and environment. The memory stores the data to be processed as well as the program that directs the processing. The processor has an arithmetic-logic unit for transforming data, and a control for executing the program. Memory, processor, and input-output devices communicate with one another through a fast switching system.
The memory and processor are constructed from registers, adders, switches, cables, and various other building blocks. These in turn are composed of electronic components: transistors, resistors, and wires. The input and output devices employ mechanical and electromechanical technologies as well as electronics. Some input-output devices also serve as auxiliary memories; floppy disks and magnetic tapes are examples. For theoretical purposes it is useful to imagine that the computer has an indefinitely expandable storage tape. So imagined, a computer is a physical realization of a Turing machine. The idea of an indefinitely expandable memory is similar to the logician’s concept of an axiomatic formal language that has an unlimited number of proofs and theorems.

The software of a modern electronic computer is written in a hierarchy of programming languages. The higher-level languages are designed for use by human programmers, operators, and maintenance personnel. The ‘machine language’ is the basic hardware language, interpreted and executed by the control. Its words are sequences of binary digits or bits. Programs written in intermediate-level languages are used by the computer to translate the languages employed by human users into the machine language for execution. A programming language has instructional means for carrying out three kinds of operations: data operations and transfers, transfers of control from one part of the program to another, and program self-modification. Von Neumann designed the first modern programming language. A programming language is general purpose, and an electronic computer that executes it can in principle carry out any algorithm or effective procedure, including the simulation of any other computer. Thus the modern electronic computer is a practical realization of the abstract concept of a universal Turing machine. What can actually be computed in practice depends, of course, on the state of computer technology and its resources.
It is common for computers at many different spatial locations to be interconnected into complex networks by telephone, radio, and satellite communication systems. Insofar as users in one part of the network can control other parts, either legitimately or illegitimately (e.g., by means of a ‘computer virus’), a global network of computers is really a global computer. Such vast computers greatly increase societal interdependence, a fact of importance for social philosophy.

The theory of computers has two branches, corresponding to the hardware and software aspects of computers. The fundamental concept of hardware theory is that of a finite automaton, which may be expressed either as an idealized logical network of simple computer primitives, or as the corresponding temporal system of input, output, and internal states. A finite automaton may be specified as a logical net of truth-functional switches and simple memory elements, connected to one another by idealized wires. These elements function synchronously, each wire being in a binary state (0 or 1) at each moment of time t = 0, 1, 2, . . . . Each switching element (or ‘gate’) executes a simple truth-functional operation (not, or, and, nor, not-and, etc.) and is imagined to operate instantaneously (compare the notions of sentential connective and truth table). A memory element (flip-flop, binary counter, unit delay line) preserves its input bit for one or more time-steps.
A well-formed net of switches and memory elements may not contain cycles that pass through switches alone, though it typically has feedback cycles that pass through memory elements. The wires of a logical net are of three kinds: input, internal, and output. Correspondingly, at each moment of time a logical net has an input state, an internal state, and an output state. A logical net or automaton need not have any input wires, in which case it is a closed system.
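Such a net can be sketched in a few lines of code. The net below is an illustration of my own, not one from the text: a single input wire, one ‘or’ gate, and one unit-delay memory element whose feedback cycle makes the net remember whether a 1 has ever appeared on its input.

```python
# A minimal synchronous logical net (illustrative; names are my own):
# one input wire x, one "or" gate, and one unit-delay memory element m.
# The gate's output feeds back through the delay, so the net remembers
# whether a 1 has ever appeared on its input wire (a one-bit latch).

def step(x, m):
    """One time-step: the gate fires instantaneously; the delay
    element preserves its input bit for the next step."""
    gate_out = x | m      # truth-functional "or" switch
    next_m = gate_out     # unit delay stores the gate's output
    return gate_out, next_m

def run_net(inputs, m0=0):
    """Drive the net synchronously with a sequence of input states."""
    m, outputs = m0, []
    for x in inputs:
        y, m = step(x, m)
        outputs.append(y)
    return outputs

print(run_net([0, 0, 1, 0, 0]))  # -> [0, 0, 1, 1, 1]
```

Note that the feedback cycle passes through the memory element, never through the gate alone, as the well-formedness condition requires.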
The complete history of a logical net is described by a deterministic law: at each moment of time t, the input and internal states of the net determine its output state and its next internal state. This leads to the second definition of ‘finite automaton’: it is a deterministic finite-state system characterized by two tables. The transition table gives the next internal state produced by each pair of input and internal states. The output table gives the output state produced by each input state and internal state.
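The two tables can be written out directly. The particular machine below is a hypothetical example of my own devising (states ‘A’ for ‘last input was 0’ and ‘B’ for ‘last input was 1’); it outputs 1 exactly when the current input bit differs from the previous one.

```python
# A finite automaton specified by its two tables (hypothetical example).
transition = {  # (input, internal state) -> next internal state
    (0, 'A'): 'A', (1, 'A'): 'B',
    (0, 'B'): 'A', (1, 'B'): 'B',
}
output = {      # (input, internal state) -> output state
    (0, 'A'): 0, (1, 'A'): 1,
    (0, 'B'): 1, (1, 'B'): 0,
}

def run_automaton(inputs, state='A'):
    """At each moment t, input and internal state determine the
    output state and the next internal state (the deterministic law)."""
    outs = []
    for x in inputs:
        outs.append(output[(x, state)])  # output table
        state = transition[(x, state)]   # transition table
    return outs

print(run_automaton([0, 1, 1, 0]))  # -> [0, 1, 0, 1]
```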
The state analysis approach to computer hardware is of practical value only for systems with a few elements (e.g., a binary-coded decimal counter), because the number of states increases exponentially with the number of elements. Such a rapid rate of increase of complexity with size is called the combinatorial explosion, and it applies to many discrete systems. However, the state approach to finite automata does yield abstract models of law-governed systems that are of interest to logic and philosophy. A correctly operating digital computer is a finite automaton. Alan Turing defined the finite part of what we now call a Turing machine in terms of states. It seems doubtful that a human organism has more computing power than a finite automaton.
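The arithmetic behind the combinatorial explosion is simple: a net of n two-state memory elements has 2 to the power n internal states, so state-by-state analysis becomes infeasible even for modest hardware.

```python
# n two-state elements give 2**n internal states; even a single
# 64-bit register is far beyond state-by-state enumeration.
for n in (4, 16, 64):
    print(f"{n:>3} elements -> {2**n} states")
# ->   4 elements -> 16 states
#     16 elements -> 65536 states
#     64 elements -> 18446744073709551616 states
```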
A closed finite automaton illustrates Nietzsche’s law of eternal return. Since a finite automaton has a finite number of internal states, at least one of its internal states must occur infinitely many times in any infinite state history. And since a closed finite automaton is deterministic and has no inputs, a repeated state must be followed by the same sequence of states each time it occurs. Hence the history of a closed finite automaton is periodic, as in the law of eternal return.

Idealized neurons are sometimes used as the primitive elements of logical nets, and it is plausible that for any brain and central nervous system there is a logical network that behaves the same and performs the same functions. This shows the close relation of finite automata to the brain and central nervous system.

The switches and memory elements of a finite automaton may be made probabilistic, yielding a probabilistic automaton. These automata are models of indeterministic systems. Von Neumann showed how to extend deterministic logical nets to systems that contain self-reproducing automata. This is a very basic logical design relevant to the nature of life.

The part of computer programming theory most relevant to philosophy contains the answer to Leibniz’s conjecture concerning his characteristica universalis and calculus ratiocinator. He held that ‘all our reasoning is nothing but the joining and substitution of characters, whether these characters be words or symbols or pictures.’ He thought therefore that one could construct a universal, arithmetic language with two properties of great philosophical importance. First, every atomic concept would be represented by a prime number. Second, the truth-value of any logically true-or-false statement expressed in the characteristica universalis could be calculated arithmetically, and so any rational dispute could be resolved by calculation.
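The first property can be sketched arithmetically. In the miniature below the concept names and prime assignments are illustrations of my own: each atomic concept is assigned a prime, a composite concept is the product of its atoms’ primes, and conceptual containment becomes a divisibility test. It is the second property, as explained next, that cannot be realized.

```python
# Leibniz's arithmetical scheme in miniature (concept names and prime
# assignments are my own illustration): atomic concepts are primes,
# composite concepts are products of primes, and conceptual
# containment reduces to a divisibility test.

atoms = {'animal': 2, 'rational': 3, 'mortal': 5}

human = atoms['animal'] * atoms['rational']  # composite concept = 6

def contains(composite, atom):
    """Does the composite concept contain the atomic concept?"""
    return composite % atoms[atom] == 0

print(contains(human, 'rational'))  # -> True
print(contains(human, 'mortal'))    # -> False
```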
Leibniz expected to do the computation by hand with the help of a calculating machine; today we would do it on an electronic computer. However, we know now that Leibniz’s proposed language cannot exist, for no computer (or computer program) can calculate the truth-value of every logically true-or-false statement given to it. This fact follows from a logical theorem about the limits of what computer programs can do. Let E be a modern electronic computer with an indefinitely expandable memory, so that E has the power of a universal Turing machine. And let L be any formal language in which every arithmetic statement can be expressed, and which is consistent. Leibniz’s proposed characteristica universalis would be such a language. Now a computer that is operating correctly is an active formal language, carrying
