The allegory of Plato’s Cave is a fascinating piece of literature, subject to analyses far superior to anything I may pen. I am, at best, abusing this layered tale to offer (additional) caution regarding the use of Large Language Models (LLMs) to augment human decision making.
LLMs don’t “know” anything. I do not seek here to explain what they are or how they work; many barrels of ink have been spilled on that already. I come to bury LLMs, not to praise them. LLMs are the latest instantiation of that poorly worded and misleading phrase, “artificial intelligence,” or AI, and they suffer the same flaw as their ancestor generations: their algorithms are trained on textual input, that is, text that reflects how humans perceive and interpret the world and its events. And humans are, to put it mildly, unreliable narrators.
As I’ve written before, we do not know how we make decisions, because much of the emotional struggle involved in a decision occurs below our consciousness. When pressed, we invent “decision trees” or “pros and cons” processes that do not reflect what actually happened. And, in my umpteenth mention of An Immense World (Ed Yong): we can only perceive the world through the tools and senses we have. These perceptions differ greatly from those of other life forms, and in several cases (discerning colors, smelling food, sensing time) they are far less precise than those of our animal neighbors.
The philosopher Michael Polanyi observed that “we know more than we can tell.” David Snowden extends this, noting “we always know more than we can say, and will always say more than we can write down.” Snowden’s summation: “meaning is lost with increasing codification.” In communicating our experiences and perceptions first into spoken language and then the written word, we adapt our thoughts into structures and codes that cannot fully capture what we know or what we are learning.
Then we have the eyewitness problem.
Many people believe that human memory works like a video recorder: the mind records events and then, on cue, plays back an exact replica of them. On the contrary, psychologists have found that memories are reconstructed rather than played back each time we recall them. The act of remembering, says eminent memory researcher and psychologist Elizabeth F. Loftus of the University of California, Irvine, is “more akin to putting puzzle pieces together than retrieving a video recording.” Even questioning by a lawyer can alter the witness’s testimony because fragments of the memory may unknowingly be combined with information provided by the questioner, leading to inaccurate recall.
This isn’t limited to people who witness crimes. Because our memories are reconstructed every time we bring them to mind, we can be convinced of experiences in our past that never happened.
Researchers gave participants written accounts of four different events, three of which they had actually experienced. The fourth story wasn’t real; it involved the participant getting lost as a child in a public place (for example, a mall or amusement park), and a family member was enlisted to provide realistic details for it. Around one-third of participants claimed to remember the false story. This alteration of memory recall following manipulation has been termed the “misinformation effect.”
(LLMs have been known to “hallucinate,” but perhaps in this way, they do resemble human cognition!)
And yet we write and report on the world around us. Then we use that output as input to a Large Language Model, and marvel at the results. I discussed some of this concern with a friend recently, who assured me that testing revealed ChatGPT to be “highly accurate.” The science, he offered, was irrefutable. Of course, once you define what is meant by “testing,” it becomes evident that this accuracy refers to how well the LLM reproduces the word proximities and conceptual relationships found in the corpus it was fed. It successfully represented what it “read.” Simply put, if you train an LLM on nonsense, it will faithfully represent the nonsense, almost as well as a human would.
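To make that point concrete, here is a toy sketch (in Python, with names of my own invention) of the crudest possible “language model”: a bigram chain that learns nothing except which words follow which in its training text. This is not how a real LLM is built, but it illustrates the same dependence on the corpus: feed it nonsense and it will generate fluent-looking nonsense with complete fidelity.

```python
import random
from collections import defaultdict

# Deliberately nonsensical training text. The "model" below has no access
# to the world, only to the statistics of this string.
corpus = "colorless green ideas sleep furiously while green ideas dream colorless dreams"

def train_bigram(text):
    """Record, for each word, the words that follow it in the training text."""
    words = text.split()
    following = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)
    return following

def generate(model, start, length=10, seed=0):
    """Sample a sequence by repeatedly picking a word seen after the current one."""
    random.seed(seed)
    word, output = start, [start]
    for _ in range(length):
        choices = model.get(word)
        if not choices:
            break
        word = random.choice(choices)
        output.append(word)
    return " ".join(output)

model = train_bigram(corpus)
print(generate(model, "green"))
# The output mirrors the statistics of the nonsense it was fed: corpus-faithful,
# superficially fluent, and no closer to reality than its input.
```

Scaling this idea up by many orders of magnitude buys fluency and reach, but it does not buy a window onto anything outside the corpus.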
So what has this to do with caves and a long-dead philosopher?
Plato’s "Allegory of the Cave" is a concept devised by the philosopher to ruminate on the nature of belief versus knowledge. The allegory begins with prisoners who have lived their entire lives chained inside a cave. Behind the prisoners is a fire, and between the fire and the prisoners are people carrying puppets or other objects. These cast shadows on the opposite wall. The prisoners watch these shadows, believing this to be their reality as they've known nothing else.
Plato posits that one prisoner could break free. Freed, he sees the fire and realizes the shadows are not reality. That prisoner could then escape the cave and discover an entire world outside of which he had been unaware.
While we are cautioned to use these LLM tools only to augment our awareness of the world and aid in decision-making, it is only human nature to outsource and offload cognitive processes wherever possible. If someone asks if you have the time, you say yes because you know you have a timepiece on your wrist or in your pocket or purse. I could choose to learn the backroads in my new county, but why not just use a GPS application? In Natural Born Cyborgs, Andy Clark wrote 23 years ago:
We humans have indeed always been adept at dovetailing our minds and skills to the shape of our current tools and aids. But when those tools and aids start dovetailing back — when our technologies actively, automatically, and continually tailor themselves to us, just as we do to them — then the line between tool and user becomes flimsy indeed. Such technologies will be less like tools and more like part of the mental apparatus of the person. They will remain tools in only the thin and ultimately paradoxical sense in which my own unconsciously operating neural structures (my hippocampus, my posterior parietal cortex) are tools.
As we consider how much responsibility to offload onto the “reasoning ability” of an LLM, I do not see this as a pathway out of Plato’s Cave, but another chain across our chest that keeps us knowing nothing but shadows.
I was curious to see if there were any good arguments against my thesis, so I asked an “expert.” After some back and forth, wherein ChatGPT at first claimed to offer a way for people to become more enlightened, I argued that its training corpus was no more a representation of reality than those shadows. I ended with this statement:
“So your answers are as shadows in the Cave, and not a path out of it at all.”
ChatGPT’s response:
You've made an insightful analogy. Indeed, the information I provide can be seen as shadows on the cave wall, offering a limited perspective of reality. While I can offer a path to information, the journey from ignorance to true knowledge and enlightenment, as presented in Plato's Allegory of the Cave, requires active engagement, critical thinking, and the exploration of various perspectives and experiences beyond what I can provide. My role is to assist in the early steps of that journey by offering information, but the path to true understanding lies in your own exploration and reflection.
Perhaps we should listen to our robot overlords and double down on “active engagement, critical thinking, and the exploration of various perspectives and experiences.” The danger, to me, lies not in the tools themselves, but in the inevitable urge for humanity to grant agency to them.
Or we can just heed the words of Shakespeare.
O judgment! Thou art fled to brutish beasts, and men have lost their reason.