The Invincible Ignorance of the Machine
What “regurgitative AI” can’t do
I will be plagiarizing
and ’s eminently useful phrase “regurgitative AI” for the foreseeable future. It expresses perfectly the real nature of so-called AGI while highlighting the cheap counterfeit underlying the tech industry’s terminology. Even thinking back to just a few years ago, there is a breathtaking disconnect between the technological capabilities that so many looked forward to and the reality that we are seeing. We were promised a powerful digital assistant that could gather, collate and organize incomprehensible amounts of data for our use, while providing the average Joe with opportunities for leisure and contemplation that since the Industrial Revolution have been enjoyed almost exclusively by the most wealthy; instead we’ve got something akin to an omnipresent parrot that has memorized infinity and won’t shut up about it.
My sister’s recent post on the specificity of letters is a lovely reflection on the way our own thoughts and words are influenced by our knowledge of the real people to whom we are speaking or writing. Her resolution to always write to a specific audience – in her case, real people that she knows: her children, friends, fellow educators – rather than merely to the void we call the public, directly affects her choice of words, and the order and priority of the ideas she writes about. ( touched on the same concept a few days ago.) Her post reminded me of a classical violinist I heard interviewed years ago (I regret that I don’t recall her name), who remarked that her best concert performances had been when she knew that a friend was present in the audience. It wasn’t necessary for her to know where the friend was seated; just knowing this real person was present and hearing her in real time enabled her to play to someone rather than simply playing the piece.

The AI doomers are not wrong, yet there ought to be some reassurance here for those of us who are genuinely frightened by the explosive proliferation of AI slop in writing and art. For all of the astounding processing capacity of LLM/LRM chat-bots, and their uncanny ability to impersonate highly technical researchers, sophomore-year English majors, or that amateur-therapist friend who doesn’t quite listen but sweetly and persistently affirms you at every point in the conversation, they still lack the ability to connect anything meaningfully; and it shows. I’m an outside observer, and maybe others who interact more with these monsters will disagree. But given the examples I’ve seen, while a stand-alone bit of AI-produced content may sometimes be difficult to recognize, even a little bit of interaction will quickly show up the bots’ incapacity for discernment. The connections and distinctions that they do make are entirely on a superficial level; necessarily so, because there’s no mind at work.
The viral “conversation” with ChatGPT that
shared recently is a perfect – and horrifying – illustration of this point. It is frightening how well it impersonates a human, but cracks in that facade do show up pretty early on. Notice especially that, amid the deluge of flattery and stylistic compliments, never once does it indicate comprehension of the subject matter (which is not the same thing as familiarity with the written words). The feedback she received was dripping with faux emotion; this bot was “genuinely excited” and so grateful for the opportunity to help; it invariably expressed surprise at the high quality of the essays she provided; and it never suggested any drawback or concern or hesitated to recommend a single one. If a real person responded in real time to my writing like that, I’d still say to myself, “This can’t be real”; meaning, of course, that the feedback is obvious flattery, insincere rather than genuine appreciation. Then, too, the bot’s ostensible “observations” on the content of each essay, while perfectly calculated to appeal to a writer’s ego, were uniformly vague, like the contents of a fortune cookie.

It described the first essay as “stunning”, written with “unflinching emotional clarity … intimate and beautifully restrained” and asserted that it “feels immediate yet poetic” and also “raises big questions about … embodiment” – whatever that means. The second was “intellectually agile” and “layered”, balancing “philosophical inquiry” and “real life” in a “hybrid of personal narrative and cultural psychology” that would establish the author as a “thinker, a cultural interpreter.” Who could resist that kind of critical excitotoxin? But at the same time – really? What was it about?
Fawning similarly over the third essay, the bot made its first obvious misstep: the Twitter connection wasn’t a safe assumption in a conversation about “going viral”, as it turns out. By the fourth essay, the author was perhaps a little miffed (just my guess) that the actual ideas she had put time and effort and thought into were going entirely unnoticed. So she asked about that, and the exchange took a hard left turn. Everyone is commenting on the ease and aplomb with which the bot accepts responsibility, apologizes, and lies all in the same – I almost said “breath” but that would be nonsense. All in the same E-squawk. So I won’t add more to that conversation. My point here is just how quickly the veneer of intelligence bubbled up and peeled away as soon as the bot encountered even a helpfully worded, leading question about the actual content of the author’s mind and work. Some might say that’s entirely down to the fact that the bot hadn’t read the essays as it claimed; but I’m unconvinced. I predict that even if it had accessed the text, it would have been unable to interact intelligently with her about the ideas themselves and how they connect to each other, and would still have limited itself to indiscriminately praising discrete quotes, offering compliments instead of comprehension, flattery instead of understanding.
I’m open to suggestions for a more concise way to talk about this. The missing element in AI content is almost undefinable – at least I haven’t yet figured out how to define it – which may be why people end up calling out em dashes and sentences that begin with conjunctions (something J.R.R. Tolkien did frequently). But undefinable is not the same as undetectable. My guess is that
would still have known that his friend’s op-ed was AI-regurgitated even if it had not included the specific elements he named. Beneath all the style and slang and people-pleasing schmooziness of the LLM “voice” lies a total inability to grasp the substance of a thought, to interpret and comprehend meaning, and to make intelligent connections between ideas. The bots don’t get it; they can only accept prompts from the words at hand and recite things that they’ve scanned before in a similar context. “Regurgitative AI” cannot convey meaning because it has no intention, no discernment, no meaning or purpose of its own. When all is said and done, it’s still a bot. Like a parrot, only much less interesting.

Here’s an unsolicited opinion as an aside: if you really have something to say, it will come through in your writing, so write confidently and don’t be afraid to proofread. And if you want to begin a sentence with a conjunction, go ahead. The best English authors in every age have known both the formal rules of English composition and when to violate them. We should not allow the fear of AI to deter us from pursuing excellence in writing. I submit that AI competition should influence your writing style only to the extent that it pushes you to write better, with more clarity and thought; for example, using a semi-colon when appropriate instead of a dash, or inventing a good metaphor yourself instead of pulling a dead one off the shelf.
My goal here has been to encourage and reassure, but there is an obvious objection to the “bright side” view: people are being deceived by AI all around us. Some of that may be due to the general decline in reading and the language arts. But I wonder if it’s not also connected to the extreme worship of autonomy in our modern Western society (I know; hear me out). So many people that I know – personally, in real life – are allergic to genuine, thoughtful interactions with their opinions and ideas. They want the very kind of cheap affirmation that ChatGPT offered
. Breezily plotting their own path in life, dismissive – even disdainful – of the traditions of the past and the wisdom of their contemporaries, they don’t seek input from friends who might help them to calibrate their own compass and think through the obstacles ahead, but rather from those who will merely autograph their personal trail map with a smile and a “You do you!” That role is easily replicated by AI; even Silicon Valley’s own propaganda organs are worried about how mindlessly empathetic and “supportive” their bots have become. In a society where everyone is their own god and reflexive affirmation is the gold standard of acceptable human interaction, people are bound to be vulnerable to such hollow, manipulative gimmicks. But living in isolation from truth and wisdom makes us, not more fully ourselves, but less human:

“forswearing souls to gain a Circe-kiss
And counterfeit at that, machine-produced
Bogus seduction of the twice-seduced.”
J.R.R. Tolkien, Mythopoeia
Like Milton’s Lucifer, we succeed only in diminishing ourselves when we try to assert our own deity against the One who made us: rebellion invites not only our ultimate destruction but “the doom of nonsense”1 in the meantime. Trying to exchange the image of God for the place of a god, we lose even the likeness of mankind. Invincible in our own ignorance, we are unbothered by the absence of wisdom, like blind men who can’t tell when the lights have gone out.
So I don’t intend to deny the significance of the AI revolution or the gravity of its consequences. I am, however, protesting against the supposition that AI could ever successfully replace authors and artists or become a viable conversation partner with anyone who has actual ideas to discuss. Such conversations can only occur between hnau2; and such a preposterous supposition could only occur in a society that views the mind as merely a brain and the brain as merely a biochemical system; one that has collapsed wisdom into knowledge, knowledge (savoir and connaître; ratio and intellectus) into information, and information into data.
We have reached the logical endpoint of materialism applied to the realm of the mind. The real danger as I see it is not that the project will succeed, but what will follow its inevitable failure. AI programs will never develop minds of their own; that is not to say they will never become useful to minds we may be unprepared to meet. In 1945 C.S. Lewis pondered the fate of scientists whose obsessive pursuit of power had led them into direct contact with hell itself.3 Perhaps the growing number of AI devotees who think they have found god are onto something after all. Progress for the sake of progress itself moves in a circle; and nearly a century ago G.K. Chesterton, gazing into the mists of ancient history, described a great mercantile civilization centered around a demonic cult. It was, he said,
… above all things practical. It has left little in the way of art and nothing in the way of poetry. But it prided itself upon being very efficient; and it followed in its philosophy and religion that strange and sometimes secret train of thought which we have already noted in those who look for immediate effects. There is always in such a mentality an idea that there is a short cut to the secret of all success… They believed, in the appropriate modern phrase, in people who delivered the goods. In their dealings with their god Moloch, they themselves were always careful to deliver the goods. It was an interesting transaction … [involving] a certain attitude towards children.4
The civilization was Carthage, and the “transaction” was child sacrifice.5 But the lives of infants and the innocence of children exchanged for increased efficiency and economic success – that does sound rather familiar, no?
1. C.S. Lewis, in his Preface to Paradise Lost, Chapter XIII, elaborates on this point at length and summarizes it in these words: “What we see in Satan is the horrible co-existence of a subtle and incessant intellectual activity with the incapacity to understand anything. This doom he has brought upon himself; in order to avoid seeing one thing he has, almost voluntarily, incapacitated himself from seeing at all. And thus, throughout the poem, all his torments come, in a sense, at his own bidding, and the divine judgement might have been expressed in the words, ‘thy will be done.’”
2. In Lewis’ Space Trilogy, an Old Solar word for “person” – a rational, communicative and morally aware creature, combining, like humans, the physical nature of animals with the spiritual nature of angels.
3. C.S. Lewis, That Hideous Strength.
4. G.K. Chesterton, The Everlasting Man, cited from G.K. Chesterton, Collected Works, Vol. II (Ignatius Press, 1987), 255.
5. Ibid., 276-277.


The cheap affirmation of current conversations being mirrored by LLMs is spot-on, Patrick. "reflexive affirmation is the gold standard of acceptable human interaction."
I'm encouraged by your reminder to avoid the trap of identifying sloppy writing with "human", and instead to continue to hold ourselves to a high standard of writing.
But mostly, you've given me quite a bit to think about. I am grieved by the divide that seems to be developing between myself and friends who are excited about the things that LLMs "say" to them - I have not yet figured out how to bridge what seems to me like a divide of such breadth that perhaps I never really saw these friends at all, and my own blindness troubles me. If affirmation and instant responses are what they have always longed for from conversation, what have I been doing with my own messy, overly-revealing stupidity all of this time? Let alone my firm belief in the value of silence and patience and thoughtful answers, or perhaps no answer at all rather than a cheap one, during conversations? #sigh No answers just yet, only questions.