Category Archives: Nuggets

Natural Agency

We naturally conceive of agency in human terms. Even our modern ideas of causality stem ultimately from (early childhood) experiences of causing our own bodies or other material things to move. We see that other objects can affect each other in “mechanical” ways—but this leaves open the question of what initiates the causal chain. There is a built-in dualism in this (mechanistic) way of seeing things, since a non-material first cause is required to set a material system in motion. A soul, mind, or “will” is required outside the physical system, if only to observe it. This is the background of what we mean by “agent”—someone who intentionally acts toward an end. But the concepts of agency and intentionality could reasonably be broadened to include any convergence of means toward ends. Excluding “final” cause (so dear to Aristotle) from the explanation of behavior is merely a prejudice of the modern era. An extended meaning of agency could include, for example, the adaptations of organisms to their environments. Thus the brain is plausibly an agent, even when viewed as a physical system. While causal, the internal connections within it can also be described as intentional: they converge toward ends and often involve the sort of “directedness” usually associated with intentionality. If the brain can be viewed in both causal and intentional terms, why not other complex physical systems? The concept of agency in nature has been sadly neglected through a bias originating in the physical sciences. Natural agency does not have to mean human agency. Nor, perhaps, should it be restricted to living matter. The concept of self-organization involves the convergence of means toward ends, and applies to the non-living world as well. A corollary is that the natural world could be entitled to legal rights in the human world without being defined as a human player.

Multiverse

For the Universe as a whole system, there is no environment and no history to set boundary conditions. This has led many cosmologists to speculate that the universe we perceive might be a tiny “bubble” in a much vaster plurality of universes. If such a “multiverse” were large enough, perhaps infinite or eternal, the problem of initial conditions would disappear: enough trial runs of random “universes” would inevitably produce one like ours. Many scientists are not satisfied, however, to let the beginning go thus unexplained in traditional causal terms. Some question the wisdom of invoking a metaphysically extravagant multiverse to contend with improbable initial conditions, a move that simply defers the issue of the origin. It would be ontologically more economical to question the standard model or the ways in which it is applied to cosmological observations. Astronomy is a highly inferential science: nominally empirical measurements are actually highly theory-dependent, and the estimations of improbability they inspire may be spurious. Indeed, the very notion of a randomly generated universe is nonsensical if there is but one roll of the cosmic die. Supposed implications of “fine-tuning” may reflect the state of science more than the state of the universe.
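
The “enough trial runs” intuition can be made explicit with elementary probability (a minimal formalization, supplied here for illustration rather than drawn from the original argument). If each universe independently has some small probability $p$ of being hospitable to life, then the chance that at least one of $N$ universes is hospitable is

$$P(\text{at least one hospitable}) \;=\; 1 - (1 - p)^N \;\longrightarrow\; 1 \quad \text{as } N \to \infty,$$

for any fixed $p > 0$, however tiny. Note, though, that the formula presupposes a well-defined probability distribution over universes, which is exactly what is in doubt if there is but one roll of the cosmic die.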

Multiverses come in several varieties. One concept has “universes,” such as ours, as particular regions in a meta-space. Given special relativity, these would most likely be causally isolated from each other, which makes such a scheme difficult to falsify. Another concept has successive universes in a meta-time. These would also be causally isolated if information is not preserved from one universe to the next. Other versions might combine features of such schemes—for example, Lee Smolin’s black hole cosmic selection theory. One motivation for all such schemes is to solve certain conundrums in cosmology, such as the apparent fine-tuning of various theoretical parameters required to predict a universe, like ours, hospitable to life.

Mechanist Philosophy

Information technology is the latest expression of the mechanist philosophy. Machines (even robots) had been conceived by the ancient Greeks, though their prejudice against manual labor mostly prevented them from realizing their fancies in technology. The prevailing models for understanding nature were themselves still natural: organisms and “elements” such as fire, air, earth, and water. Nevertheless, the Greek ideal of deductive certainty informed the body of scientific theory that underpins modern technology. It inspired the Scientific Revolution, which arose in the context of medieval European interest in machines. The Enlightenment understanding of efficient causality favored a mechanist view of nature and the rise of machines in society, whose successful use then reinforced the mechanist view. The sort of cause arising within things themselves was absent in the mechanist cosmos, since it had been transferred to the divine will operating from outside nature.

Computation is the latest mechanist metaphor by which to understand nature. The concept of mechanism is projected back upon nature as the organizing principle behind the very life and consciousness that create the concept of mechanism! Yet, far from being a reasonable model for understanding nature, the machine is the very antithesis of the natural. Insofar as it implies an imposition of design and manufacture from the top down, mechanism is the very thing that cannot explain physical reality as a self-organizing process from the bottom up. The philosophy of mechanism reduces nature to simple artifacts (models, equations). Under its sway, science turned a blind eye to the complex interconnectedness within nature that might account for teleology and apparent design. This was obviously fruitful in many ways, as demonstrated by the success of technology. Yet the mechanist philosophy has crippled science in certain areas, and has indirectly forced it into a defensive stance in regard to the intelligent design movement.

Mathematical Platonism

It is a general truism that all forms of cognition are produced jointly by the world and the mind. Mathematics is a form of cognition, and thus no exception. The scientist does not circumvent this truism by claiming objectivity, nor does the mathematician by embracing mathematical Platonism. Yet this does not prevent some mathematicians from holding that mathematics has a reality independent both of nature and of human minds. Mathematical Platonism is the belief that mathematical truths exist apart from mathematicians and their particular constructs in any given era. Mathematics is discovered as a pre-existing realm, not created by mere mortals. The totality of mathematical truth or logical possibility is a finished and timeless structure. It is not a question of empirical experience, but something transcendent, eternal, objective, fixed. While Gödel evidently believed in such a thing, he ironically showed that any constructed part of mathematics (of a certain complexity) cannot map the whole. A given mathematical construct is a product of definition. As such, it may be considered “timeless.” Yet someone constructs it at a particular time and place. The corpus of mathematics, too, has a history and is a work in progress. On the one hand, mathematics is a matter of formal definition and of logic, to which time seems irrelevant. Yet even logic is not fixed for all time if it is a product of human evolution. Its very game-like nature, as a voluntarily adopted set of rules for manipulating a set of symbols, means it can be altered at will. That is, as a complete abstraction mathematics is an arbitrary convention. Its relation to physical reality, on the other hand, is a function of the evolution of human cognition. This relation too could change. To insist that it is fixed for all time and that mathematics has nothing to do with cognition is simply naïve.
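
The Gödel result alluded to above is the first incompleteness theorem. A standard modern formulation (supplied here for reference, not the essay’s own wording): for any consistent, effectively axiomatized formal theory $T$ that interprets elementary arithmetic, there is a sentence $G_T$ in the language of $T$ such that

$$T \nvdash G_T \quad \text{and} \quad T \nvdash \neg G_T.$$

No such constructed system, then, proves all truths of arithmetic, which is the irony noted in the passage: the theorem often cited in support of Platonism also shows that any formal construction falls short of the supposed totality.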

Intentionality in Nature

In the mechanist vision, the non-living world does not possess its own intentionality or agency. Even in the living world, natural intentionality or teleology is ideally reduced to some version of efficient cause. There is no place in the mechanist vision for intentionality in nature at large. Modern science continues to view inanimate matter as passively obedient to externally imposed laws—like a computer obeying its program. It is not seen as imbued with its own powers of self-organization, as organisms are. Living things provide our model for self-organizing systems, but our model for intentionality remains largely our own consciousness. Consciousness is now scientifically recognized as a product of natural self-organization. However, owing perhaps to the profound human desire to be separate from nature and above it, we still seem unwilling to admit either consciousness or intentionality as broad natural categories. (Similarly—if indeed it exists—the broad role of self-organization in inanimate matter is not yet recognized.) Nevertheless, intentionality might be a useful concept outside its human context. It is only a matter of convenient habit, evolutionary conditioning, and historical circumstance that we regard the world fundamentally from a third-person point of view: as a passive “it” incapable of responding on an equal footing to our interventions. In science this means a certain kind of description, which I call first-order science. Characteristically it describes the world as it is supposed to be “in itself,” without reference to the describer. The fact that consciousness is only reluctantly an object of scientific investigation mirrors the exclusion of the scientist’s consciousness from her accounts. The scientific interaction with nature (as in experiment) may be described in third-person terms, insofar as it might affect the data. But the scientist’s intentionality is excluded from the description as irrelevant. To include intentionality in nature as an object for scientific study would imply including the human intentionality behind science. And that would be to open, as they say, a can of worms.

Immanent Reality of Nature

Nature’s immanence is an ancient and intuitive idea. But it was at odds with the Platonic and Pythagorean philosophies, which insisted that reality must be accessible to a rational mind, whose order is forcibly imposed upon a chaotic or amorphous substratum. It is the Ideal that is ultimately “real.” Accordingly, the apparent reality of nature is derivative and not immanent. From that perspective, the resistance of physical reality to the impositions of thought and action renders the natural world at best imperfect, at worst evil. This idea appealed to the Christian and Muslim mind, for which the reality and properties of nature depended entirely on divine will. The significance of the natural world for medieval Christians lay in being a divine creation, the venue for the spiritual journey, and a “book” to read like scripture for spiritual guidance. This mindset continued into the modern era to influence scientific ideas about nature. Non-living matter, at least, was considered passive, subject to rational analysis because it was essentially no more than a (divine) product of definition in the first place. The derivative reality of nature implies that the mathematical laws of nature could be considered transcendent, if not divinely decreed: they exist somehow a priori and apart from the universe itself. The immanent reality of nature, however, implies that laws of nature are not transcendent; they do not exist independently of matter. They are neither immutable nor governing. Not only the details, but the laws too must be immanent in matter, not imposed on it externally either by God or the theorist. The apparent universality and constancy of physical laws, for example, are empirical findings, not necessary or transcendent truths. The laws themselves are no more than compact expressions of empirical findings, not directives that matter must obey, like a computer obeying lines of programming code.

Identity of Indiscernibles

Leibniz’s principle of the “identity of indiscernibles” holds that things which share all possible relationships or properties are identical; conversely, distinct things have distinct relationships to the rest of the world and thus at least one differing property. Whether things can be held to be identical therefore depends on the possibility of exhaustively enumerating a finite list of those properties or relationships. But such a complete list is possible only for deductive systems, where the properties and relationships are definitional. That is, they are made, not found. As a principle, the identity of indiscernibles already assumes that the world is such a system.
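
In standard second-order notation (a textbook formulation, supplied here for reference), the principle reads

$$\forall F \, \bigl( F(x) \leftrightarrow F(y) \bigr) \;\rightarrow\; x = y,$$

where $F$ ranges over all properties whatsoever. The antecedent quantifies over every property without exception, which is precisely the exhaustive enumeration argued above to be possible only within deductive systems.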

The identity of an individual thing is relative to its parts as well as to the wholes of which it is a part. The collected properties or parts of a natural thing do not (as assumed in reductionism) constitute the thing as a whole. But they do constitute an artifact; indeed, they define it. Thus an artifact can be exhaustively described, while a natural thing cannot. Two artifacts can be identical because it is possible to exhaustively search and compare their lists of defining properties. Any list of properties which thought could assign to a natural thing, however, cannot exhaust its being, since it may have indefinite properties.

The fact that elementary particles cannot be marked or tagged as individuals leads to characteristically different statistics for quantum entities. In quantum physics, there are difficulties of principle involved in locating the set of all objects that satisfy the definition of a given particle type. In fact, in creating the statistics, it is not objects that are counted but measurement events—which may not represent objects at all, but merely quantities. Quantum “objects” tread a line between natural things, with indefinite properties, and artifacts that are no more than what they are defined to be.
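
The difference in counting can be illustrated with a standard textbook example (supplied here for illustration, not part of the original essay): distributing $n$ particles over $g$ single-particle states. The number of distinct microstates is

$$W_{\mathrm{MB}} = g^{\,n}, \qquad W_{\mathrm{BE}} = \binom{n+g-1}{n}, \qquad W_{\mathrm{FD}} = \binom{g}{n},$$

for distinguishable (Maxwell–Boltzmann), bosonic (Bose–Einstein), and fermionic (Fermi–Dirac) counting respectively. With $n = g = 2$ the counts are 4, 3, and 1: because the particles cannot be tagged as individuals, the quantum statistics treat permuted arrangements as one and the same state.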

Idealism in Physics

Science developed as a synergy of concerns, with an ever-shifting balance between idealism and realism (or materialism). The realist thread in physics focuses on the role of the external world in supplying empirical data and driving their interpretation. The thread of idealism in physics emphasizes the role of the theorist in creating theory and in defining, collecting, selecting, and interpreting data. Pushed to the extreme, scientific idealism is deductionism—the belief that the world can be reduced to a logical system, an idea. The empiricism and ontological materialism of science necessarily rest upon a bedrock of idealism, occasionally exposed in the upheavals of scientific advance. This is evident, for example, in the hope of a final theory: the insistence by some theorists that physical phenomena can be exhaustively and definitively mapped in mathematical theory. However, this faith that reality can be reduced to deductive systems implies that it is not conceived as material at all, but as ultimately ideal. For example, the classical “realism” Einstein espoused when he charged quantum theory with “incompleteness” is not realist at all. Rather, it insists that physical reality should correspond perfectly to human definition. In that way, it is guaranteed to be comprehensible, with an identifiable cause for every effect. Despite Einstein’s secularism, this axiom of completeness harks back to the religious heritage of science, which viewed material reality as the product of definition by a rational divine mind, a role taken over in modernity by the theorist. (This suggests that some of Einstein’s aphorisms invoking God might be more than mere figures of speech.) Religion is essentially idealist in outlook. While idealism is not in itself religious, its versions in science tend to side with religion against the autonomy and immanent reality of nature. But if nature is real, no theory can be complete.

First-order Science

“First-order” science is an account of events in the physical world. It is not concerned with analyzing theory and methodology, nor with the state of mind of the scientist. Such considerations are generally left to philosophers and historians of science. This restriction serves to keep science within currently defined bounds, and is responsible for much of its success. Yet it also represents a limit on the kind and terms of reflection that may be undertaken. One symptom of this limit is the tendency to recycle a domain of description to serve as its own rationale, which I call “the problem of cognitive domains.” This dilemma results from attempting to use a conceptual framework that does not accommodate reflexivity in a world in which reflexivity is an everyday, unavoidable, and essential aspect. Another symptom is the inability of first-order science to question its own assumptions and methods—for example, the focus on isolated systems and linear equations. Of course, in order to fully understand the results of experiment or observation, the physics of the apparatus—and of intervening physical processes involved in observation—may be considered along with the physics of the defined system under study. But any other kind of role that thought might play in shaping scientific results is not discussed. Despite reference to “observers,” one remains within the bounds of first-order description, which focuses on what the observer sees rather than on what the observer brings to observation. For good reasons, the Scientific Revolution served above all to redefine natural philosophy as first-order science: a science whose exclusive focus is the external world, not its own process. The nature of the scientist’s participation is excluded from discussion. The very existence of consciousness therefore poses a problem for first-order science, which deals strictly in objectivist accounts. The ability of cosmology now to consider the universe as a whole poses a similar problem. For there is no longer a place outside the system for the observer to stand.

Fine-tuning Argument

“Fine-tuning” names a group of problems in cosmology and theoretical physics that suggest the apparent implausibility of certain physical facts. For example, how do we explain the extreme range of values among fundamental parameters or the ratios between them? How to explain the apparent dependence of the actual world on critical values of certain theoretical parameters? The question, however, may not be why the world is unlikely but why we should perceive it so. From that point of view, the challenge is to understand the conditioning in our thought that makes the values of such parameters strike us as improbable. Is fine-tuning a false problem, merely a setup? The very fact of defining some theory with “variables” that would allow the world to be entirely different merely by “tweaking” them is already suspicious. Such variables may be no more than theoretical artifacts, motivated by the theorist’s ambition to create the world from scratch! Religious thinkers have embraced fine-tuning arguments to suggest that the special conditions required for the actual world could not be accidental. Yet both religion and science ignore the possibility that there might exist processes through which the universe actively tunes itself. A fine-tuning argument is like examining a complex machine and noticing that changing any dimension or detail even slightly would interfere with its proper functioning. Organisms, in contrast, actively strive to maintain their state. While organisms, within limits, are naturally robust, machines are essentially vulnerable to accident and decay. (This is why the mechanist universe conceived by the founding fathers of modern science periodically required divine maintenance.) There is but one way for a machine to work properly and an infinite number of ways to fail. Thus it seems improbable for cosmic order to have come about accidentally, or to have persisted through time in the face of the second law of thermodynamics. But this means no more than that we have not been able to imagine a universe that self-organizes and self-maintains.