Intentionality is an elusive concept that fundamentally means the reference of something to something else. Reference, however, is not a property, state, or relationship inhering in things or symbols, nor between them; it is rather an action performed by an agent, who should be specified. It is an operation of relating or mapping one thing or domain to another. These domains may differ in their character (again, as defined by some agent). A picture, for example, might be a representation of a real landscape, in the domain of painted images. As such it refers to the landscape, and it is the painter who does the referring. Similarly, a word or sentence might represent a person’s thought, perception, or intention. The relevant agents, the domains, and the nature of the mappings must all be specified before intentionality can be properly characterized.
In these terms, the rings of a tree, for example, may seem to track or indicate the age of the tree or periods favorable to growth. Yet, it is the external observer, not the tree, who establishes this connection and who makes the reference. Connections made by the tree itself (if such exist) are of a different sort. In all likelihood, the tree rings involve causal but not intentional connections.
A botanist might note connections she considers salient and may conclude that they are causal. Thus, changing environmental conditions can be deemed a cause of tree-ring growth. By contrast, it would stretch the imagination to suppose that the tree intended to put on growth in response to favorable conditions, or that God (or Nature) intended to produce the tree-ring pattern in response to weather conditions. These suppositions would project human intentionality where it doesn’t belong. Equally, it would be far-fetched to think that the tree deliberately created the rings in order to store in itself a record of those environmental changes, either for its own future use or for the benefit of human observers. The tree is simply not the kind of system that can do that. The intentionality we are dealing with is rather that of the observer. On the other hand, there are systems besides human beings that can do the kind of things we mean by referring, intending, and representing. In the case of such systems, it is paramount to distinguish clearly the intentionality of the system itself from that of the observer. This issue arises frequently in artificial intelligence, where the intentionality of the programmer is supposed to transfer to the automated system.
The traditional understanding of intentionality generally fails to make this distinction, largely because it is tied to human language usage. “Reference” is taken for granted to mean linguistic reference or something modeled on it. Intentionality is thus often considered inherently propositional even though, as far as we know, only people formulate propositions. If we wish to indulge a more abstract notion of ‘proposition’, we must concede that in some sense the system makes assertions itself, for its own reasons and not those of the observer. If ‘proposition’ is to be liberated from human statements and reasoning, the intention behind it must be conceived in an abstract sense, as a connection or mapping (in the mathematical sense) made by an agent for its own purposes.
Human observers make assertions of causality according to human intentions, whereas intentional systems in general make their own internal (and non-verbal) connections, for their own reasons, regardless of whatever causal processes a human observer happens to note. Accordingly, an ‘intentional system’ is not merely one to which a human observer imputes her own intentionality as an explanatory convenience (as in Dennett’s “intentional stance”). Such a definition excludes systems from having their own intentionality, and it reflects the mechanist bias that Western science has carried since its inception: that matter inherently lacks the power of agency we attribute to ourselves, and can only passively suffer the transmission of efficient causes.
An upshot of all this is that the project to explain consciousness scientifically requires careful distinctions that are often glossed over. One must distinguish the observer’s speculations about causal relations—between brain states and environment—from speculations about the brain’s tracking or representational activities, which are intentional in the sense used here. The observer may propose either causal or intentional connections, or both, occurring between a brain (or organism) and the world. But, in both cases, these are assertions made by the observer, rather than by the brain (organism) in question. The observer is at liberty to propose specific connections that she believes the brain (organism) makes, in order to try to understand the latter’s intentionality. That is, she may attempt to model brain processes from the organism’s own point of view, attempting as it were to “walk in the shoes of the brain.” Yet, such speculations are necessarily in the domain of the observer’s consciousness and intentionality. In trying to understand how the brain produces phenomenality (the “hard problem of consciousness”), one must be clear about which agent is involved and which point of view.
In general, one must distinguish phenomenal experience itself from propositions (facts) asserted about it. I am the direct witness (subject, or experiencer) of my own experience, about which I may also have thoughts in the form of propositions I could assert regarding the content of the experience. These could be proposed as facts about the world or as facts about the experiencing itself. Along with other observers, I may speculate that my brain, or some part of it, is the agent that creates and presents my phenomenal experience to “me.” Other people might also have thoughts (assert propositions) about my experience as they imagine it; they may also observe my behavior and propose facts about it that they associate with what they imagine my experience to be. All these possibilities involve the intentionality of different agents in differing contexts.
One might think that intentionality necessarily involves propositions or something like them. This is effectively the basis on which an intentional analysis of brain processes inevitably proceeds, since it is a third-person description in the domain of scientific language. This is least problematic when dealing with human cognition, since humans are language users who normally translate their thoughts and perceptions into verbal statements. It is more problematic when dealing with other creatures. However, in all cases such propositions are in fact put forward by the observer rather than by the system observed. (Unless, of course, these happen to be the same individual; but even then, there are two distinct roles.)
The observer can do no better than to propose, theoretically, the operations of the system in question, formulated in ordinary or some symbolic language. The theorist puts herself in the place of the system to try to fathom its strategies: what she would do, given what she conceives as its aims. This hardly implies that the system in question (the brain) “thinks” in human-language sentences (let alone equations) any more than a computer does. But, with these caveats, we can say that it is a reasonable strategy to translate the putative operations of a cognitive system into propositions constructed by the observer.
In the perspective presented here, phenomenality is grounded in intentionality, rather than the other way around. This does not preclude intentionality from being about representations themselves or phenomenal experience per se (rather than about the world), since the phenomenal content as such can be the object of attention. The point to bear in mind is that two domains of description are involved, which should not be conflated. Speculation about a system’s intentionality is an observer’s third-person description, whereas a direct expression of experience is a first-person description by the subject. This is so even when subject and observer happen to be the same person. It is nonsense to talk of phenomenality (qualia) as though it were a public domain like the physical world, to which multiple subjects can have access. It is the external world that offers common access. We are free to imagine the experience of agents similar to ourselves. But there is no verifiable common inner world.
All mental activity, conscious or unconscious, is necessarily intentional, insofar as the connections involved are made by the organism for its own purposes. (They may simultaneously be causal, as proposed by an observer.) But not all intentional systems are conscious. Phenomenal states are thus a subset of intentional states. All experience depends on intentional connections (for example, between neurons); but not all intentional connections result in conscious experience.