For most of us, it is almost impossible to imagine a world devoid of language. The words we use and the worlds we build with them are so fundamental to our understanding that to strip them back and envision a bare universe, devoid of symbolic thought, would be to abandon the very thing that makes us human. We are a species of ‘worldbuilders’, both figuratively, in the sense of how we collectively imagine the world, and literally, in that we mould it to resemble these shared fictions. Over the last fifty thousand years, we have become an epochal, disruptive force in the history of evolution. Since our ancestors scraped their way through the last ice age, we have collectively dismantled the environment and rebuilt it to suit our interests, either assimilating organisms into our evolving technicity or driving them to oblivion. In that geologically brief span of time, we have gone from launching spears to launching nuclear warheads.
What was it that triggered this metamorphosis? The short answer is: language. The long answer will form a series of blogs over the coming months, where I’d like to look at this question in more detail and explore the nature of language in relation to both the nature of the self and the nature of technology. Indeed, I hope to argue that language is a technology, from which we have built both self-identities and worldviews; ancient tools to aid navigation, comprehension and manipulation of the physical universe.
The most dominant model of language is still Noam Chomsky’s ‘Universal Grammar’: the idea that the capacity for language is hard-wired into human brains. The biologist E. O. Wilson once quipped that Chomsky’s famously complex model thrived precisely because it “seldom suffered the indignity of being understood”. To proponents of newer models of linguistics, such as Construction Grammar, it is a needlessly complex theory. Instead, they propose a view of language as not ‘inbuilt’, as such, but as a solution to a common problem that utilises existing biological ‘hardware’ and takes different forms depending on the environmental and social context. Some proponents, such as developmental psychologist Michael Tomasello, believe language is, in this sense, a tool: words are assembled from sounds as tools were assembled from raw materials. Likewise, the linguist and philosopher Marcelo Dascal argues that languages are “cognitive technologies”.
Certainly all of the physiological components of language, such as the tongue, lungs and teeth, are ‘dual use’, though they have been fine-tuned for communication over time via natural selection. Even the language centres of the brain, such as Broca’s and Wernicke’s areas, are not, strictly speaking, ‘language centres’, for they serve a multitude of functions, only some of which are recruited in the construction of language. The ‘language gene’ FOXP2 is not just there for language; it also regulates lung development. Indeed, it is the conceit of language itself that elements of the world should so neatly fall into our categorical constructs, however precisely we define them. In the words of philosopher Alan Watts, the world is much more “wiggly” than that.
This shift towards thinking of language as a technology comes at a time when several authors have begun to view technology as a natural process. Kevin Kelly, in ‘What Technology Wants’ (2010), and W. Brian Arthur, in ‘The Nature of Technology’ (2009), have both argued that technology follows evolutionary processes similar to those of organic matter, sharing common patterns with the emergence of other structures in the natural world, such as trees and galaxies. Kelly argues that technology is the ‘Seventh Kingdom’ of life: the Technium. So which is it: is technology naturally occurring, or is language in some sense ‘artificial’? This dichotomy itself tells us much about the strengths and weaknesses of language and its bias towards binary thinking in a ‘wiggly’, analogue universe. As the Polish-American philosopher Alfred Korzybski succinctly stated, “the map is not the territory”. The way we imagine the world is not the world, just an approximation in perpetual revision.
The notion that the conceptual world is an illusory construct has haunted us for thousands of years, and is a recurring theme in the writings of mystics, philosophers and scientists. The ancient Vedic religions of India refer to this conceptual world as Māyā – the power of illusion – and consider it to be a manifestation of a yet deeper substratum of reality that, by definition, we cannot conceive. Pseudo-Dionysius the Areopagite, the sixth-century Christian theologian, developed a method of apophatic inquiry, which seeks to expose paradoxes in the nature of language in order to break down conceptual thought as a means of understanding the nature of God. This form of Christian mysticism survived until at least the 14th century in the anonymously published medieval manuscript ‘The Cloud of Unknowing’, which warned students that “on account of pride, knowledge may often deceive you”.
The empirical sciences have also been subject to skepticism through the ages. Immanuel Kant believed it impossible to know “things in themselves”, only “things as we perceive them”; knowledge was something we could only approximate by conceiving of what he called the “noumenal” world: the realm of ideas. The Standard Model of physics, foundational to all modern technology, notoriously fails to account for gravity, and cannot explain observational data such as ‘dark energy’ and ‘dark matter’. The quest to devise a new physical model of the universe has been a driving force in modern science, but one fraught with frustrations. This does not imply that the Standard Model is not ‘real’, merely that it is incomplete; a tool that has limitations. We grasp at a shape in the dark, but can only make out parts of its form.
In 2010, the physicist Stephen Hawking proposed ‘model-dependent realism’: the idea that the physical laws we have derived over the centuries are not parts of the universe itself, but simply cognitive models undergoing revision – conceptual technologies that we can use to accurately predict events in the universe. Although a supporter of M-theory, a candidate for the long-sought unified theory of physics, he remains open to the idea that ultimate knowledge may continually hover in our collective vision; coming into sharper focus, yet remaining forever out of our grasp.
And what of mathematics? Is it simply a cognitive tool, or ‘the language of nature’, as some philosophers of science maintain? Neither position has been proven for certain, but both suggest mind-bending possibilities about the deeper nature of reality. If mathematics is the language of nature, it speaks of something that stretches the limits of the human mind. In the late 19th century, the mathematician Georg Cantor proved, through his work on transfinite numbers, that there exist “infinities of infinities”: an endless hierarchy of ever-larger infinite sets. We also know through set theory (which Cantor helped to found) that all of mathematics can be derived from the ‘empty set’. In the words of Ian Stewart at the University of Warwick, “the dreadful secret of mathematics is that it’s all based on nothing.” Thus we have a substratum of reality that both equates to 0 and is infinitely infinite. The alternative, however, is even stranger: that mathematics is not the language of nature but merely a sophisticated human technology, and that, in the remote future, we will evolve whole new symbolic models of reality that are, right now, literally beyond our comprehension.
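To make the “all based on nothing” point concrete, here is a minimal sketch (my own illustration, not from the mathematics itself beyond the standard von Neumann construction) of how the counting numbers can be conjured out of nothing but the empty set:

```python
# Von Neumann construction of the natural numbers: each number is
# defined as the set of all smaller numbers, built up from the
# empty set alone.
def successor(n: frozenset) -> frozenset:
    """The successor of n is n together with the set containing n."""
    return n | frozenset([n])

zero = frozenset()        # 0 = {}
one = successor(zero)     # 1 = {0}
two = successor(one)      # 2 = {0, 1}
three = successor(two)    # 3 = {0, 1, 2}

# Each number's size equals the number itself, and "less than"
# is simply set membership.
assert len(three) == 3
assert zero in one and one in two and two in three
```

Everything here – and, in principle, the rest of arithmetic built on top of it – is assembled from the empty set: nothing, structured.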
The long-term trend is that, through technologies – both cognitive and physical – we can infer the causal processes of the universe in ever sharper detail and, in doing so, develop ever more sophisticated means of transforming our environment. These new technologies spread through groups and confer an evolutionary advantage, giving rise to new vocabularies that can be applied to different domains within society, and in turn to new ways of thinking, new technological developments, and so forth. Throughout history, developments in technology have correlated with shifts in the ways we make sense of the universe, from the earliest times during the Middle and Upper Palaeolithic, to more recent times, during the Reformation and the Enlightenment. I hope to argue that this is the result of a feedback loop in which language emerges to describe and make sense of the environment – aiding navigation, comprehension and prediction – and ultimately results in the physical transformation of that environment, which in turn inspires new ways of thinking.
This process is self-similar in nature, occurring fractally across all scales and throughout time. Like the paradigm shifts Thomas Kuhn observed in the scientific community, it is often localised to small groups or subcultures, giving the appearance of overall stability in the society or culture as a whole. When such trends pass certain tipping points, they lead to large-scale technological and social revolutions, often occurring in tandem. The printing press is a classic example of a technology that enabled new worldviews, and perhaps even new forms of cognition, which in turn led to new technologies and to social and political upheavals lasting centuries.
This is a sobering thought, given the current technological and social revolutions sweeping the globe in unexpected ways and leading to unpredictable consequences. How will new generations, born to an age of such devastation and wonder, make sense of the world they have inherited? Thrown into an information ecology of unfathomable complexity and caught between the tidal forces of competing realities, how will they search for meaning? What heroes’ journeys will be made in the coming centuries as our descendants make sense of the world we leave behind? The final theme I want to cover in this series of blogs is that of self-discovery; how our identity is forged by the environment around it, and how our quests of self-realisation within these contexts act as the atomic process of the mechanism outlined above.