The Myth of AI Omniscience: AI's Epistemological Limits
We’re distracted from real AI issues by myths of superintelligence.
Our AI discourse is charged with apocalyptic prophecy: AI will either save us or destroy us. And this isn’t just empty rhetoric. OpenAI, the company behind ChatGPT, recently announced a massive investment in research on “superalignment,” which seeks ways to manage the profound risks to humanity posed by artificial “superintelligence.” OpenAI’s announcement pairs a utopian vision with a grave warning: “superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”
This leaves us humans wondering: in what ways, exactly, could a superintelligence have “vast power”?
In his recent announcement of xAI, a venture whose ambitious aim is to “understand the true nature of the universe,” Elon Musk points to one way a superintelligence could conceivably wield such vast power. xAI conjures the image of a near-omniscient superintelligence, perhaps capable of resolving the enigmas of existence. On its own, xAI would be a harmless, if flawed, attempt by a billionaire tech mogul to realize his science-fiction dream. The problem, though, is that many take xAI’s claim seriously, promoting the perception of advanced AI models as quasi-deities with unbounded powers and diverting scarce attention from the AI issues that truly matter for our society.
So, let’s tackle the question raised by xAI: is it even possible for an AI to “understand the true nature of the universe”?
My argument can be outlined as follows:
1. In order for an AI model to “understand the true nature of the universe,” the model must be able to discern new things.
2. But the ways in which an LLM can “talk about” the universe (and everything it contains) are limited to the ways in which humans have previously talked about it.
3. Future LLMs, regardless of model architecture, are fundamentally constrained in this way, by virtue of the fact that they are trained on human-written texts.
4. Therefore, the outputs of an LLM, at best, reflect our current understanding of the universe and nothing more.
This conclusion holds more generally for any AI model, which can only ever learn from data that has been structured through the lens of human understanding.
“It is we who project order into the world by selecting objects and tracing relations so as to gratify our intellectual interests. We carve out order by leaving the disorderly parts out; and the world is conceived thus after the analogy of a forest or a block of marble from which parks or statues may be produced by eliminating irrelevant trees or chips of stone.”
- William James
In his Pragmatism lectures, the philosopher William James argued against a conception of Truth as being independent of human minds. What we tend to call the true, for James, isn’t “out there” in the world, waiting to be discovered. Instead, the true is at least partially constructed: the order that we have carved out of disorder. Our order comprises everything we claim to know, from our ability to discern dogs from wolves, to our most abstract scientific theories. It is a tool that we have been honing since a time deep in our evolutionary past, at first through natural selection and later through the emergence of language and complex culture.
Our order is at all times provisional – we continue to hone the true through our cumulative experience in the world. This tool has generally worked for us because there are, to be sure, empirical regularities “out there”, but also, equally, because it is inseparable from our various interests as humans. Said differently, the true emerges only through contact between an external reality and our human aims. We can only ever “see” empirical regularities through the lenses of our uniquely human interests, hardwired perceptual apparatuses, and mental concepts. There is no way for us to completely take off these lenses and see the world, unmediated, in some “absolutely true” sense. And neither, as I argue, can our AI models.
To put it more concretely, we humans discern – against a deluge of sensory input – particular kinds of objects and relations among them. For example, we discern and name some places as “valleys”, because “valley” has proven to be a useful shorthand for distinguishing a certain type of place, perhaps one that tends to be favorable for foraging and settlements. Valleys exist only in the sense that they exhibit empirical regularities that have been useful for humans to distinguish from other types of places. But valleys don’t exist “out there”, independent of and external to human minds (the same can also be said of the more abstract concept “places”).
This ability to discern (literally to separate, set apart, divide) has long been thought to be fundamental to, even synonymous with, intelligence. In ancient Greece, the philosopher Anaxagoras introduced the concept of nous, sometimes translated as mind or intellect. In his telling, the cosmos began as a mixture, and it was nous that separated out the entities we know today. This conception of intelligence has a close parallel in the first creation story in Genesis. God – intelligence par excellence – takes an earth that is “without form” and gives order to it through separation, when “God divided the light from the darkness” and “God made the firmament, and divided the waters.” Intelligence carves order from disorder.
Coming back to LLMs, if an AI model is to help us deepen our understanding of the universe, then the model must discern new things, carve new order from disorder, hone new truths.
LLMs are machine-learning models that, given an input prompt, output “statistically reasonable” – though possibly fictional – text. They perform this computational feat by learning regularities in language from vast amounts of training text, including a snapshot of the internet.
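To make this concrete, here is a minimal sketch of the autoregressive loop at the heart of text generation. The `next_word_probs` function is a hypothetical stand-in for the real model; in an actual LLM, that distribution comes from a neural network whose parameters were fit to human-written text.

```python
# Minimal sketch of autoregressive generation (illustrative only).
import random

def next_word_probs(context_words):
    # Hypothetical stand-in: a real LLM computes this distribution with a
    # neural network trained on vast amounts of human-written text.
    return {"the": 0.5, "a": 0.3, "universe": 0.2}

def generate(prompt, n_words=5):
    words = prompt.split()
    for _ in range(n_words):
        probs = next_word_probs(words)
        # Sample the next word in proportion to its predicted probability.
        words.append(random.choices(list(probs), weights=list(probs.values()))[0])
    return " ".join(words)

print(generate("Tell me about"))
```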
At multiple levels, what an LLM can “say” about anything, including the universe, is fundamentally constrained to what humans have already said. To start with, consider the model’s vocabulary – the list of words (technically, tokens, which can be thought of as chunks of words) that it can draw from to form an output. A language model’s vocabulary is limited to the words that appear in its training texts, which means an LLM can only refer to objects and relations that we humans have already discerned, named, and written about.
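A toy sketch makes the vocabulary constraint concrete. The whitespace “tokenizer” and the one-line training text below are deliberate simplifications (real models use subword tokenizers trained on enormous corpora), but the basic point carries: output is assembled from an inventory derived entirely from what humans have written.

```python
# Toy illustration: an LLM's vocabulary is fixed before generation ever starts.
# A naive whitespace "tokenizer" stands in for real subword tokenizers here.
training_text = "we carve out order by leaving the disorderly parts out"
vocabulary = sorted(set(training_text.split()))
print(vocabulary)
# ['by', 'carve', 'disorderly', 'leaving', 'order', 'out', 'parts', 'the', 'we']

# Every output is assembled from this inventory; a term no one wrote in the
# training data has no entry to be emitted with.
print("order" in vocabulary)   # True
print("quasar" in vocabulary)  # False
```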
At another level, most of the computational heavy lifting in an LLM is performed on numerical-array representations of text called word embeddings, or word vectors. Embeddings map text to points in a multi-dimensional space, with the property that points closer together in space represent texts with more similar meanings. So, for example, the embedding for “dog” is closer in space to the one for “puppy” than to the one for “democracy”. Embedding models are themselves trained on large quantities of text. In effect, embeddings constitute a map of concepts that is thoroughly grounded in the way humans have used and understood those concepts, as reflected in the training texts. To the extent that an LLM can ever truly “understand” concepts, it is limited, through embeddings, to understanding them in the way that humans already do.
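A toy example makes the geometry concrete. The three-dimensional vectors below are made up for illustration; real embedding models learn vectors with hundreds or thousands of dimensions from large text corpora, but the similarity computation is the same in spirit.

```python
# Toy illustration of embedding geometry with made-up 3-d vectors.
import math

embeddings = {
    "dog":       [0.90, 0.80, 0.10],
    "puppy":     [0.85, 0.75, 0.15],
    "democracy": [0.10, 0.20, 0.90],
}

def cosine_similarity(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way (similar meaning).
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

print(cosine_similarity(embeddings["dog"], embeddings["puppy"]))      # high (~0.99)
print(cosine_similarity(embeddings["dog"], embeddings["democracy"]))  # low  (~0.30)
```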
Finally, the outputs of an LLM are “statistically reasonable” in that they reflect language regularities inherent in the training texts – models like OpenAI’s GPT-4 are trained to predict the most likely next word, given a sequence of words as input. Said differently, an LLM cannot “say” anything for which no statistical regularities already exist in its training texts. Therefore, the model can only ever “talk about” the universe in the ways humans already tend to talk about it.
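The simplest illustration of this dependence is a bigram “language model,” which predicts the next word purely from counts of what followed each word in its training text. Real LLMs are vastly more sophisticated, but the toy makes the point explicit: the model literally cannot continue a word in a way its training text never did.

```python
# Toy bigram "language model": next-word probabilities are just counts of what
# followed each word in the training text, so the model can only ever continue
# a word in ways the training text already did.
from collections import Counter, defaultdict

corpus = "the universe is vast . the universe is old . the earth is small ."
words = corpus.split()

counts = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def next_word_distribution(word):
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_distribution("universe"))  # {'is': 1.0}
print(next_word_distribution("is"))        # {'vast': 0.33..., 'old': 0.33..., 'small': 0.33...}
```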
To sum up: because LLMs are fundamentally limited to i) using our vocabulary, ii) “understanding” concepts in the ways we do, and iii) “talking” in the ways we do, an LLM can, at best, only mirror back to us the order we have carved, the truth we have honed.
What about advanced AIs that aren’t LLMs? A similar conclusion holds. Essentially, any data used for model training are structured in some way, and structure is premised on a way of seeing the world. In other words, all data are theory-laden, with the result that all models – whether we label them as “AI” or not – are fundamentally tethered to the ways we humans understand the world. Our models are stuck “seeing” the universe through human eyes.
While AI models are fundamentally limited in what they can “say,” there is still enormous potential for models like LLMs to dramatically impact virtually every field of human endeavor that deals with knowledge, from scientific research to tax advice. But we can dispense with the mythical image of an AI that, like some oracle, holds (or withholds) the ultimate answers about reality. Such images only serve to inspire awe and fear, and they are a complete distraction from the AI issues that matter. To name just a few: model bias, deep fakes and misinformation, and job displacement from AI are all pressing societal issues. If we as a society are to meet the very real challenges and opportunities afforded by AI, we need to do the work to dispel myths and commit to rational dialogue about what advanced AI models can and cannot do.