Words, spoken or written, are finite symbols that can only very selectively reflect the richness of sensory experience. Words are inherently ambiguous because they “chunk” thought. They divide the continuity of sound in speech. More importantly, they divide the continuity of possible thought into named categories. Whereas sensory fields are relatively continuous, perception itself is chunked: we identify specific colours of the rainbow, for example (conventionally: red, orange, yellow, green, blue, indigo, violet). And we see the world in terms of recognizable objects (cats, dogs). Language enhances this coarse-graining effect by providing fixed labels. If you look closely at the rainbow, there is continuity between the “distinct” colours. It is possible to make finer distinctions and apply labels to them, such as “cadmium yellow,” “aquamarine,” or “chartreuse.” Or you can concentrate on the sensory distinctions and forget about the labels.
Speech is temporally linear, face-to-face verbal communication. It relies on many contextual cues besides the literal semantic content conveyed: tone, inflection, pause and parsing, body language, facial expression, eye contact, etc. Text, by definition, conveys only the literal content, as a free-standing artifact. It exists outside time. It allows meaning to be taken out of context.
Text represents a message in a graphic symbol system. Speech, when literally transcribed, does not necessarily follow grammar or a consistent presentation; it may ramble and repeat, taking a “shotgun” approach to conveying meaning. (Very few people speak with the relative precision of their writing.) Speech is adjusted to the listener in real time, to establish mutual understanding. Text, however grammatically precise, remains ambiguous in meaning, not only because words carry multiple meanings, but also because text lacks the live feedback that, in speech, helps to clarify them.
Language is a product of history—i.e., of actual usages over time. Words have an evolutionary history that is partly logical extension and association, and partly accident. Confusingly, the same word may come to represent quite different things. (Dog, for example, is both a noun and a verb; it can mean a canine, a gripping tool, or a kind of sandwich.) Often there is a connection between these meanings hidden in the etymology.
Words mostly represent categories (such as ‘dog’), and may fail to distinguish between the category and an individual member (‘Lassie’). On the other hand, dog may elicit an image or memory of a particular creature or encounter, which serves as the experiential referent for the category. Thus, even unconsciously, a remembered experience can stand as the symbol for an abstraction in a given personal lexicon. These referents of words are different for everyone, so that words have individualized associations that colour and play havoc with supposedly general meanings. If you were bitten by a strange dog in childhood, the word may elicit something quite different for you than for someone who had a cuddly pet. We know that all men are mortal and that some are criminals. But man, mortal, and criminal are categories (labels), not individuals.
Even in the animal world, communication makes deception possible. Camouflage is a form of communication and of deception. Some birds fake injury to lead predators away from their young. Chimpanzees deliberately create a distraction in order to seize food or a mate while competitors are not looking. Naturally, for humans, an early use of grammatical language was outright lying, with its demand to keep a straight face while telling a falsehood. The written word greatly expanded the possibilities of falsehood, since there was no longer a face to keep straight.
Language is an engine of creativity. Along with explicit lying, it makes counterfactual proposals of all sorts possible, of which imagination and supposition (“what if…”) are examples. This enables storytelling, myth, and fiction—which we regard as inventions rather than deceptions. It also enables abstraction, which depends on extending concrete examples to unseen possibilities, and idealization, which is a fictive representation that generalizes experience. Another form of invention facilitated by language is the machine. Of course, language makes possible the collective collaboration behind technological invention. In addition, the very structure of grammatical language serves as a template for formal propositional thought, especially mathematics and logic. A formal (or axiomatic) system effectively has the elements of a language: primitive symbols, rules for combining them, and rules for deriving new expressions from old. And these correspond to the elements and principles of a conceptual system that can potentially be realized physically as a machine.
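To make the machine analogy concrete, consider a minimal sketch of a toy formal system, Hofstadter’s well-known MIU system (the choice of system and the Python rendering here are illustrative, not drawn from the text above). A handful of symbols, one axiom, and four purely syntactic rewrite rules suffice for a program to churn out “theorems” mechanically:

```python
# A toy formal system realized as a machine: Hofstadter's MIU system.
# Alphabet {M, I, U}; one axiom ("MI"); four purely syntactic rewrite rules.

def successors(s: str) -> set[str]:
    """Apply every rule at every position where it fits."""
    out = set()
    if s.endswith("I"):                      # Rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):                    # Rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):              # Rule 3: III -> U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):              # Rule 4: UU -> (deleted)
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def theorems(axiom: str = "MI", steps: int = 3) -> set[str]:
    """All strings derivable from the axiom in at most `steps` rule applications."""
    derived = {axiom}
    frontier = {axiom}
    for _ in range(steps):
        frontier = {t for s in frontier for t in successors(s)} - derived
        derived |= frontier
    return derived

print(sorted(theorems(), key=len))  # e.g. ['MI', 'MII', 'MIU', 'MIIU', ...]
```

The program manipulates marks according to rules, with no comprehension of what, if anything, the strings mean; any meaning is assigned from outside. That is precisely the sense in which a formal language can be realized as a machine.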
The very structure of language (nouns, verbs, adjectives, subjects, objects, word order, etc.) shapes the structure of thought and perception, in such a way that vague abstractions can be reified as tangible things, or defined as elements of a formalizable system. In addition, grammar makes explicit nonsense possible. You can say things that appear to make sense, yet are gibberish. (Think of “Jabberwocky,” the poem by Lewis Carroll.) That capability has another consequence as well: the possibility of making relatively empty statements, which are perfunctory and grammatically correct, yet neither truths nor untruths but evasions. They contain information in the technical (Shannon) sense, but they fail to inform. Politicians are skilled at this. And so, by default, are “large language models” (AI chatbots).
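The point about Shannon information admits a small illustration (the sentences and the crude character-frequency model below are hypothetical choices of mine, not taken from the text): measured purely as statistical surprise, an evasive platitude carries roughly as many bits as a substantive claim.

```python
# Shannon information measures statistical surprise, not meaningfulness.
from collections import Counter
from math import log2

def total_information_bits(text: str) -> float:
    """Total self-information of `text` under its own character frequencies."""
    counts = Counter(text)
    n = len(text)
    return -sum(c * log2(c / n) for c in counts.values())

substantive = "The bridge fails at loads above twelve tonnes."
evasive = "We remain fully committed to stakeholder value."

# Comparable bit counts, yet only one of these sentences informs.
print(round(total_information_bits(substantive), 1))
print(round(total_information_bits(evasive), 1))
```

On this measure the platitude is no emptier than the fact; what it lacks is not information in the technical sense but the intent to inform.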
While useful for some purposes, chatbots tend toward glibness. They are rich in cursory superficiality and in empty, fictive, or motherhood statements, such as you might find in advertising or on the back covers of some books. Their reports may be “accurate” insofar as they are not literally false; yet they may fail to capture or re-present the essential information, or any real intentions behind the information presented. (On the other hand, chatbots are capable of blatant fabrication, politely known as “hallucination.”) These limitations are understandable because chatbots and other AI are not minds with intentions. They have no comprehension of their verbal claims or other outputs, which are inherently empty for them, since AI lacks intentionality of its own. Meaning or content is coincidental, derived from human meanings and usages. This is because, for LLMs, there is no real solution to the symbol grounding problem—how minds assign meaning to input—short of the AI becoming a real, embodied mind.
There may, however, be real dangers for human beings in increasing dependence on chatbots, which by definition can only regurgitate an amalgam of earlier human expressions. Glibness, like mediocrity, is contagious. The thought, language, and creative expression of people who rely on LLMs as substitutes for their own original thought may come to resemble the vapid chatbot style. Quite apart from Terminator scenarios, and long before we facilitate artificial consciousness, we may have found one more way to debilitate the species, whose hallmark is language-based reason. If we can only think glibly, or defer to artificial agents that don’t genuinely think at all, what is to become of us?
There is much discussion these days about the “extended mind”: tools that help us with our cognitive tasks, be they calculators, chatbots, or neural implants. In a general sense, all technology extends our being. Yet an identity crisis looms in all of this. Where is the subject (“I”) located in relation to these extensions? The traditional relationship between tool-user and tool reflects the normal relationship between subject and object (I am here, it is there, perhaps literally at arm’s length). This relationship is blurred when we depend on “it” to do our thinking or cognizing for us, even when “it” remains outside the human skin. The change will be all the greater with implants and other cyborg modifications to the body and brain, especially as they connect to the Internet. As others have pointed out, our dependence on language is our Achilles heel, the vulnerability through which LLMs could dominate humanity. But perhaps we will be able to reassure ourselves (with “our own” verbal thoughts) that the new normal is “natural” and how things ought to be, perhaps even how they’ve always been. After all, thought is mostly self-talk, if not self-hypnosis. Words R Us.