Evolutionary Epistemology
Evolutionary epistemology takes into account the fact that human beings belong within the natural order. It is our embodied relationships within the natural order which shape our categories of thought and our very expectations. Contrary to Leibniz, it is evolution—and not God or logical necessity—that is responsible for the “pre-established harmony” between ordinary cognition and natural reality. And it is this belonging that must ultimately also account for the possible harmony between mathematics, scientific thought, and the natural world. Einstein called the capacity of reason and mathematics to grasp reality “miraculous.” Following Hume, he saw no logically necessary connection between experience or thought and external reality. What was missing in his generation’s understanding of that relationship is today called evolutionary epistemology, according to which the organism establishes the connections between its own cognition and objective reality in such ways as to permit its survival. In other words, cognition is shaped by natural selection. While evolutionary epistemologists may use logic to reflect on the human condition shared with other creatures, logic itself must then be an evolutionary product. This brings evolutionary epistemology full circle, as a discipline that cannot elude self-reference. It is therefore a prime candidate for a second-order science, which considers its own methodology as part of its subject matter. An evolutionary general theory of intelligence and cognition might help account for the astonishing effectiveness of logic and mathematics, which would then simply reflect the broader capacity to model, abstract, and generalize that characterizes human cognition. It would also be better positioned than first-order sciences (such as psychology presently still is) to shed light on the age-old problems of mind-body dualism and the “hard problem of consciousness.” Only such a reflexive approach can hope to resolve the contradiction involved in trying to understand consciousness using tools historically devised to exclude it.
Digital Physics
Digital physics is associated with the notion that physical reality itself consists of nothing but information, mathematics, simulation, computation, or geometry—also known as the computational metaphor, pancomputationalism, and the “it from bit” philosophy. Without that metaphysical premise, digital physics merely represents an arbitrary restriction on the kind of math to employ in physics. Specifically, for convenience it aims to exclude the continuum and problems involving infinities. While this restriction facilitates computation, there is no guarantee that it corresponds to reality. To declare physical reality digital in order to facilitate computation is rather the tail wagging the dog. For the universe to be thus computable in principle would mean that it is somehow artificial rather than natural—made rather than found. It would be a product of definition, like a program. Moreover, though mathematics effectively describes and anticipates many features of the world, to assert that physical reality is reducible to mathematics (let alone to digital computation) is pure metaphysics. While dressed up in the latest technology, such notions as digital physics, the “mathematical universe,” or the world as a simulation or computation simply hark back to the idealism of Plato and Pythagoras. They reflect the wide influence of the computer on modern life and thought. But it is no more plausible that physical reality corresponds to the technology of the 21st century than that it corresponds to the clocks and waterworks of the 18th century or the steam engines of the 19th! Information is a new catchword in science, likewise a product of modern technology. Its significance for physics arose first because the basic equation of communication theory resembles the basic equation of thermodynamics, “information” playing the role in one that “entropy” plays in the other. It has proven a useful concept in physics and in computation, as a non-material quantity divorced from its familiar connotation of conveying meaning. But to assert it as a fundamental new physical entity—even the basis of physics—is an essentially idealist move and a symptom of lingering dualism.
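For concreteness, the resemblance in question is between Shannon’s measure of information in communication theory and the Gibbs form of thermodynamic entropy (standard textbook expressions, not drawn from the original text):

\[
H = -\sum_i p_i \log_2 p_i \qquad\text{(communication theory)},
\]
\[
S = -k_B \sum_i p_i \ln p_i \qquad\text{(statistical thermodynamics)}.
\]

The two differ essentially only by the constant $k_B$ and the base of the logarithm, which is what made the analogy so tempting.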
Determinism in Physics
Determinism in physics means that the state of a real system at a given time is fixed by its prior states. This presumes time and a notion of system. The state of the system is expressed by an equation as a function of time. But it is the equation that is “deterministic,” not the real system. The notion of physical determinism confuses the logical relationships of variables in the equation with some physical or metaphysical power of one state to “fix” another. After all, the word determine is ambiguous, meaning either to fix or to ascertain. One is an ontological claim, presuming some causally continuous process relating one state to a later one. The other is an epistemological claim, presuming a conscious agent, such as a scientist. The equation enables such claims, but only at the cost of describing an idealized fiction.
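As a minimal illustration (a standard textbook case, chosen here for concreteness rather than taken from the text), consider exponential decay, whose equation fixes the state at every time once the initial state is given:

\[
\frac{dx}{dt} = -k\,x \quad\Longrightarrow\quad x(t) = x_0\, e^{-kt}.
\]

The determinism here belongs to the formalism: given $x_0$ and $k$, the value $x(t)$ follows by logic alone. Whether any real system conforms exactly to this relation is a separate, empirical question.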
The obvious advantage of determinism in physics is that one can predict the future or retrodict the past. In order to be calculable in the required way, the system must be well defined and follow mechanical rules. In other words, it must be a deductive system, an idealization, which can only correspond approximately to a natural system. The ability to accurately predict the future of planetary motions, for example, results from the fact that such real systems coincide, for all practical purposes, with deductive systems. (In such massive systems, deviations on a micro scale cancel out to yield a statistically precise averaged macroscopic pattern, which is then codified as a law.) A deterministic system always evolves the same way from a given starting point. The equations involved in the early studies of dynamics were linear, and the phenomena deliberately selected for study were well-behaved ones that could be expressed by such equations. The notion of determinism afforded a point of view outside the system under study, from which to regard change without being affected by it. But this has obvious limits in dealing with the universe as a whole.
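A toy numerical sketch, assuming nothing from the text, of what “deterministic” means operationally: the same rule applied to the same starting point always yields the same trajectory. The central-force model, the function name orbit, and all parameter values are arbitrary illustrative choices.

```python
# Toy illustration: a deterministic rule applied to an initial state always
# reproduces the same trajectory. Here, a "planet" orbiting a fixed central
# mass, integrated with a semi-implicit (symplectic) Euler scheme.
# All quantities are in arbitrary units with GM = 1.

def orbit(x, y, vx, vy, dt=0.001, steps=10000):
    trajectory = []
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -x / r3, -y / r3            # inverse-square attraction
        vx, vy = vx + ax * dt, vy + ay * dt  # update velocity first
        x, y = x + vx * dt, y + vy * dt      # then position
        trajectory.append((x, y))
    return trajectory

run1 = orbit(1.0, 0.0, 0.0, 1.0)
run2 = orbit(1.0, 0.0, 0.0, 1.0)
print(run1 == run2)  # True: same starting point, same computed future
```

Of course, the sameness here is a property of the model, which is precisely the point made above: it is the deductive system, not the planet, that is deterministic by construction.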
Deductionism is ultimate reduction
Deductionism is the belief that all of physical reality can be mapped by formal constructs, such as mathematical models. In practice, these are often the actual objects of scientific study. Deductionism amounts to the premise that nature is ultimately reducible to the terms of a deductive system. (Greek deductionism had held that one should be able to deduce natural details, like theorems of geometry, strictly from first principles.) Such a belief, often tacit, is an example of what Gerald Holton called thematic content: an implicit intuitive assumption. In its extreme form, deductionism holds that models work because they are identical to the natural realities they model. Models are usually defined by equations, and the systems studied are chosen and re-defined in such a way that they can be described by (preferably simple) equations. Deductionism thus blurs the very distinction between mathematics and the physical world. However, one basic difference is that knowledge of physical structures is contingent, or “synthetic,” while mathematical structures are “analytic.” That is, physics depends on empirical evidence, while mathematical concepts depend only on reason and how they are defined. It is only because physical systems have first been re-defined as mathematical structures that there seems to be a built-in correspondence between model and reality. The correspondence between mathematics and physics is actually a correspondence of one deductive system to another. The correspondence with reality remains an article of faith. But it is often a self-confirming faith, since phenomena that can be treated with existing mathematics are the ones generally selected for study. One reason for this is that a deductive system is equivalent to a machine. A deductive approach serves technology in particular, since mathematical models can often be translated into literal machines or devices. But seeing the world through deductive eyes limits what can be perceived as significant for investigation. It may also limit the demand for new mathematics.
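As an illustration of the claim that a deductive system is equivalent to a machine, here is a sketch of a trivial formal system whose “theorems” a program enumerates mechanically. The axiom and rewrite rules are invented solely for this example.

```python
# Toy illustration: a deductive system as a machine. Given an axiom and purely
# formal rewrite rules, a program can enumerate every derivable string without
# any understanding of what the symbols "mean".

AXIOM = "A"
RULES = [
    lambda s: s + "B" if s.endswith("A") else None,            # xA -> xAB
    lambda s: s.replace("AB", "BA", 1) if "AB" in s else None,  # AB -> BA
]

def derive(depth):
    theorems, frontier = {AXIOM}, {AXIOM}
    for _ in range(depth):
        new = set()
        for s in frontier:
            for rule in RULES:
                t = rule(s)
                if t is not None and t not in theorems:
                    new.add(t)
        theorems |= new
        frontier = new
    return theorems

print(sorted(derive(5)))  # everything the "machine" can deduce in 5 steps
```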
Cosmological Self-organization
There is more to the evolution of living matter than natural selection of random mutations, which is a passive mechanical process. Similarly, there may be more to the evolution of “inert” matter than the passive processes of efficient causality that have dominated science since Galileo. In particular, processes of active self-organization have until recently remained largely ignored and beyond the reach of that science. Yet, self-organization in the universe at large may be a necessary backdrop to the arising of life. Cosmological self-organization may be the key to resolving conundrums such as the apparently highly improbable fine-tuning of the universe to life. A cosmos that appears unlikely from a mechanist perspective could be probable in the light of processes of self-organization.
The founding fathers of science had assumed that nature is necessarily comprehensible. For related reasons, they assumed that matter lacks inherent powers of self-organization. Matter was simple, and could be moved or organized by outside forces, but was incapable of moving or organizing itself. Mechanist thought concerns simple linear causes and effects. One system is presumed to act on another from without, in a potentially endless chain of cause and effect. It is little wonder such thought does not readily accommodate the complex feedback loops, non-linear processes, multiple causation, and holism involved in what are now recognized as self-organizing processes. Mechanism, especially as an engineering stance implying an imposition from the top down, is the very thing that cannot explain processes that self-organize from the bottom up. Some processes of cosmic self-organization are currently recognized—in galaxy formation, for example. Broader questions include what role self-organization plays at the level of the universe as a whole, what role it plays in leading to the kind of universe we live in, and whether it leads inevitably to life and intelligence.
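A minimal sketch of what bottom-up self-organization looks like in the simplest possible setting: each cell obeys only a local rule, yet a global ordered pattern emerges with no outside designer. The particular rule (elementary cellular automaton rule 90) and the grid size are arbitrary illustrative choices and model no cosmological process.

```python
# Each cell updates from purely local information (its two neighbors), yet a
# large-scale ordered pattern emerges over time. Rule 90: new cell = left XOR right.

WIDTH, STEPS = 63, 31
row = [0] * WIDTH
row[WIDTH // 2] = 1  # a single "seed" cell

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    row = [row[(i - 1) % WIDTH] ^ row[(i + 1) % WIDTH] for i in range(WIDTH)]
```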
Cosmological Constant
The cosmological constant was initially a mathematical fudge factor in Einstein’s field equation for general relativity, inserted to allow a static universe before Edwin Hubble’s discovery of expansion. Now it is taken seriously as a physical force (dark energy) driving the accelerating expansion of the universe. Some idealist-minded cosmologists thought the cosmological constant should be exactly zero. (But then, some even found reason to marvel that the gravitational constant differs from zero!) However, despite perennial interest in tidy numbers, there is no a priori reason to expect nature to involve them. It is nature that must determine theory, not the other way around. To determine the cosmological constant empirically requires many assumptions regarding methods for gathering data from observations. Such a measurement, in other words, is highly theory dependent. A blatant problem posed by the cosmological constant is the huge difference between its value as inferred from astronomical observations and the predictions of theoretical physics regarding the quantum vacuum, to which the cosmological constant is assumed to correspond. It is possible to explain away this discrepancy using anthropic reasoning—in other words, as a selection effect. That is, a typical theoretical value would lead to a universe that could not support life (for example, because it would be too short-lived for evolutionary time scales). So, because in fact we are present in it, our “region” of space must exist as an extremely unlikely part of a vastly larger meta-space, most of which cannot support life. The discrepancy between theory and observation spans many orders of magnitude. It is literally astronomical. On this view, it implies the extravagance of a universe many orders of magnitude larger than the known universe (our region). While such is possible, it seems ontologically more economical to question quantum field theory on the one hand, and the measured values, on the other.
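For a sense of scale (commonly quoted rough figures, supplied here for illustration rather than drawn from the text): the vacuum energy density inferred from observation is of order $10^{-47}\,\mathrm{GeV}^4$, while a naive quantum-field-theoretic estimate with a Planck-scale cutoff gives roughly $M_{\mathrm{Pl}}^{4} \sim 10^{76}\,\mathrm{GeV}^4$, so that

\[
\frac{\rho_{\text{theory}}}{\rho_{\text{obs}}} \sim \frac{10^{76}\ \mathrm{GeV}^4}{10^{-47}\ \mathrm{GeV}^4} \sim 10^{123},
\]

the famous discrepancy of roughly 120 orders of magnitude.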
Cosmic Natural Selection
On the analogy of biological natural selection, cosmic natural selection is a hypothesized process whereby a universe with the properties of our universe arises out of a vast number of other possibilities. In order to provide those possibilities, cosmologists propose the so-called “multiverse.” And in order to choose among them, they may propose some selection mechanism. An example is Lee Smolin’s black hole selection theory, which proposes small random changes in basic parameters between “bangs” (such as the Big Bang at the origin of our universe). While designed to avoid the inadequacies of anthropic reasoning, such a theory is still metaphysically extravagant in requiring either a spatial or a temporal context of many other universes. The concept is modeled on biological natural selection over many generations, which operates through the mechanism of genetic mutation, to account for the evolution of life. But just as biological natural selection may not be the sole factor involved in the evolution of life, so cosmological natural selection may not be the sole factor in the evolution of our universe. Smolin’s theory requires an absolute time as the context for a suite of universes arising from black holes. It provides no rationale for the “small” random fluctuations in parameters required on the analogy of genetic mutations. Present physics posits quantum fluctuations as a possible mechanism, but what reason is there to assume that such a mechanism applies in what are by definition potentially arbitrarily different worlds? Moreover, genetic mutations can be large, producing nonviable offspring. If there can be large as well as small changes in parameters between “bangs,” the number of nonviable universes multiplies accordingly. Natural selection is an essentially passive process. It is part of the mechanist vision that dominated the thinking of Darwin’s time. The whole question could be approached instead in terms of a more active concept of self-organization.
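To make the analogy concrete, here is a toy caricature of selection among “universes”: each universe has a single parameter, produces offspring (“black holes”) in proportion to how close that parameter is to an arbitrarily chosen fecund value, and passes the parameter on with a small mutation. Every name and number below is an invented illustrative assumption, not part of Smolin’s theory.

```python
# Toy caricature of selection among "universes" reproducing via black holes.
import random

FECUND_P = 0.7    # arbitrary parameter value that maximizes black-hole production
MUTATION = 0.02   # "small" random change between bangs

def offspring_count(p):
    # more black holes (hence offspring universes) the closer p is to FECUND_P
    return max(0, int(5 - 40 * abs(p - FECUND_P)))

population = [random.random() for _ in range(100)]   # initial universes
for generation in range(20):
    next_gen = []
    for p in population:
        for _ in range(offspring_count(p)):
            next_gen.append(p + random.gauss(0, MUTATION))
    if next_gen:                                      # keep the population bounded
        population = random.sample(next_gen, min(100, len(next_gen)))

mean_p = sum(population) / len(population)
print(f"mean parameter after selection: {mean_p:.2f}")  # drifts toward ~0.7
```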
Computational Metaphor
For half a century, the computer has been a dominant metaphor for understanding the operations of the brain. Even Descartes and Leibniz had intuited the potential of mechanism for understanding mental processing. Drawing on computational metaphors, representational theories of mind attempt to explore the organism’s role in cognition. The computer programmer can thereby put herself in the organism’s shoes. More recently, computation has served to understand the operations of the physical world at large. The computational metaphor thus reflects the development of technology—and of the mechanist metaphor—from simpler mechanical systems to computers. The laws of physics are now regarded by some as computer algorithms and the hardware of the universe as a computer. The underlying metaphysical premise tacitly remains the same: nature is equivalent to an artifact, a machine, a text, a program, a product of definition. To qualify as a scientific hypothesis, such an assumption would have to be experimentally testable. It does not have to be tested, however—let alone proven—to qualify as the basis for research programs. But, whether applied to the cosmos or to the brain, the computer metaphor continues to ignore the most obvious fact of physical reality, which is that it is not programmed by some external intelligence, but is somehow self-programming and self-activating. It is not an artifact, a product of our definitions (nor of the gods’). We may be deluded by our ability to create convincing simulations. But reality and simulation are distinct things. One is found, the other made. Nature itself and our scientific models of it are also distinct things. Computer animation can be entertaining, especially when we indulge the illusion that it is real. Similarly, scientific modeling can be fruitful for specific purposes, which tempts us to think it provides an image of the world as it “really” is.
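To show concretely what it means to regard a law of physics “as an algorithm,” here is a sketch of the one-dimensional heat (diffusion) equation recast as a local update rule on a grid. The discretization, the grid size, and the constants D, DX, and DT are arbitrary modeling choices made for this illustration, not claims about nature.

```python
# The diffusion law du/dt = D * d2u/dx2, treated as an update rule on a grid.
# The rule is applied step by step, like a program, to an initial state.

D, DX, DT = 1.0, 1.0, 0.2   # chosen so that D*DT/DX**2 <= 0.5 (numerically stable)
N = 50
u = [0.0] * N
u[N // 2] = 100.0           # initial concentration of "heat" at one point

for _ in range(200):
    u = [
        u[i] + D * DT / DX**2 * (u[(i + 1) % N] - 2 * u[i] + u[(i - 1) % N])
        for i in range(N)
    ]

print(round(max(u), 3), round(sum(u), 3))  # the peak spreads out; the total is conserved
```

The simulation is a made thing, run step by step from a definition; whatever the real process of heat flow is, it is found, not run.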
Book of Nature
In medieval Europe, the natural world was referred to as the “Book of Nature,” which was considered to supplement the Bible as a guide to divine will. Scripture and nature were alternative expressions of God’s message and purpose for humanity. There were thus two testimonials to the Creation: the Bible and the natural world itself. Study of nature meant attention to the miraculous and portentous; like scripture, the Book of Nature was read for its prophetic value, not out of dispassionate curiosity. Protestant scientists in particular, who rejected the priesthood and hierarchy of the Church and slavish devotion to past scholarship, wanted to read and interpret the Bible for themselves—and equally the Book of Nature. Both were to be taken more literally and freed from the fancies of medieval interpretation. Early science thus hardly broke with the medieval tradition concerning the Book of Nature, for it implicitly regarded nature as a text to be deciphered. New interest in the study of languages encouraged a more formal approach to the natural world. Galileo proclaimed the Book of Nature to be written in the language of mathematics. Equations and theories are literally texts. To the degree we identify them with natural reality, the world itself is implicitly conceived as a text. The advantage of this way of seeing the world is control: scientific models and mathematical equations facilitate prediction and the control necessary for technology. Like machines, models and equations are human constructions. They are finite and well defined, products of definition. Natural phenomena are sought out that can be treated effectively in terms of such constructs. But this may represent only a small part of natural reality. Only recently, for example—with the aid of the computer—have complex systems begun to be studied. Previously, simple systems were considered because they could be modeled by equations that could be solved with pencil and paper. The world may turn out not to resemble a book—or any other text, such as a computer program.