AI Crutchery
AI, LLMs, and the loss of critical thinking skills
My Technology Bona Fides
You can skip this section if you’re just interested in the meat of the article: here, I just lay the foundation for showing that I’m not stupid about technology.
It’s no secret to those who know me — and shouldn’t be to those who read much of my writing lately — that I’ve been doing a deep dive into artificial intelligence. My specific focus has been on what’s most accessible to me, which means the kind of generative AI that creates images, video, and even bits of writing: Enhancor, Midjourney, ChatGPT, and Claude, to name just a few.
I’ve done even deeper dives into ComfyUI, Kohya SS, and Ollama, where I’ve actually done some work on a kind of “training”: creating LoRAs. Some can take images (such as pictures of myself) and regenerate “me”; others can take massive amounts of my writing (my more than 1,800 blog posts written over the years on various websites; hundreds of motions, writs, and appeals; and even journals, both my private journals and legal journals like CACJ’s Forum) and work to produce pieces that look like they were written by me.
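For the technically curious, the writing side of that “training” boils down to something like the sketch below. This is a minimal illustration, assuming the Hugging Face transformers and peft libraries; the base model name and hyperparameters are placeholders, not my actual Kohya SS or Ollama configuration, and peft is just the most compact way to show the idea.

```python
# Minimal sketch of a writing-style LoRA, assuming the Hugging Face
# "transformers" and "peft" libraries. Model name and hyperparameters are
# illustrative placeholders, not my actual setup.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # hypothetical base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# A LoRA doesn't retrain the whole model; it learns small low-rank "adapter"
# matrices layered onto the attention weights, which is what makes this
# feasible on consumer hardware.
lora = LoraConfig(
    r=16,                                 # rank of the adapter matrices
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the weights

# From here, the blog posts, motions, and journals get tokenized into a
# dataset and run through a standard training loop; the resulting adapter is
# saved separately from (and is tiny compared to) the base model.
```

The image-side LoRAs work on the same principle; the adapters just attach to a diffusion model’s layers instead of a language model’s.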
Many people know that before I was a lawyer, I was the Director of Information Systems for Valley Yellow Pages. This was no small thing: my department’s budget was around $2 million/year, and I worked with, and was responsible for, a staff of MUMPS programmers. Before that, I had a hand in helping build the first two ISPs in Central California: Cybergate (which, to my knowledge, no longer exists) and Valleynet Communications, which later became Protosource. Protosource was publicly traded and I think still is, at least from what this shows (LOL).
Google shows Cybergate may still exist somewhere on Shaw Avenue, but I have my doubts. The website just shows a page with “John Companies, Inc.” as the only content. I have a vague recollection that someone with that name (John) bought the company at some point. If it’s the guy I’m thinking of, by now his son must own whatever is left of it. And Protosource…I don’t think it lasted much longer after I left, but I don’t really know.
I previously held numerous certifications from Microsoft and Cisco, and I was a Linux Certified Professional. I also taught in these areas.
Lastly, I was a technical editor (while working at Valley Yellow Pages) for the Coriolis Group and wrote one chapter of Migrating Windows NT4 to Windows 2000 (Amazon affiliate link) for them.

I say all that just to make it clear I’m not just another lawyer with a pretty face talking about AI. I have some technological bona fides.
The Perilous Problem of Pervasive Predictive Processors
This morning I was, for about the millionth time, greatly frustrated by the bullshit ChatGPT was spouting during a brainstorming session. And — again for the millionth time — I asked myself: Why? Why? Why does everyone think that AI is “all that and a bowl of chili”? (I’m actually working on another post with that as the subtitle and will link it when/if I ever get it done.)
We talk about confabulation. You might know it as “hallucination”, but that’s a poor choice of word to describe what AI really does, as I’ve explained previously in Naming AI’s “Problem”: Confabulation, Bullshit, or Both?
Whoever Hilarius Bookbinder is (he? she? I don’t know), they might agree. I didn’t know it at the time I wrote Naming AI’s “Problem”, but Hilarius had written something very similar some months before me. I found that article while researching the one you’re reading now.
Anyway, as I again pondered this morning why so many people buy the AI hype — the sales material — I also started thinking about how much this is hurting us as thinkers.
As more people buy the AI hype, AI is showing up virtually (no pun intended) everywhere. Of course we have the chatbots. And of course we have the generative AI (which, unabashedly, I use to create images for my posts). But we also have AI infiltrating health care, law, stock markets, and pretty much any other domain you can imagine.
And everywhere AI shows up, problems procreate. The reason why is intimately connected to what I’ve been writing about regarding confabulation.
In health care, for example,
Increased trust in inaccurate or inappropriate medical advice generated by artificial intelligence (AI) may result in misdiagnosis and potentially harmful consequences for individuals seeking medical help, according to findings published in the New England Journal of Medicine.
— Ron Goldberg, Nonexpert Trust in AI-Generated Medical Advice Leads to Possible Misdiagnosis (June 30, 2025)
In human resources,
AI systems are increasingly used in hiring decisions, performance evaluations and promotions. If these systems rely solely on accurate but incomplete data, they risk reinforcing biases and ignoring critical human factors, resulting in unfair or ineffective decisions.
— Tshilidzi Marwala, Never Assume That the Accuracy of Artificial Intelligence Information Equals the Truth (July 18, 2024)
Yeah, the use of the word “accuracy” is a bit confusing there.
I’ve already written extensively on how AI has permeated the law, and I’ve particularly highlighted the ways in which it goes awry. I’ve written about numerous examples of lawyers submitting legal briefs containing AI-generated references to cases that don’t exist — which, frankly, blows my mind, because I can’t believe lawyers still haven’t learned from the mistakes. And I’ve also pointed out problems with using AI in other parts of the legal system.
One of the great myths of modern legal reform is the idea that algorithms are neutral. That they take the bias out of human decision-making. That they bring consistency to a process distorted by subjectivity. But this way of framing it collapses under scrutiny — especially in the pretrial context.
— Rick Horowitz, Pretrial Release: The Illusion of Algorithmic Neutrality (June 8, 2025)
Worse than all of the above, AI is polluting real professional scholarly writing:
That’s because articles which include references to nonexistent research material — the papers that don’t get flagged and retracted for this use of AI, that is — are themselves being cited in other papers, which effectively launders their erroneous citations. This leads to students and academics (and any large language models they may ask for help) identifying those “sources” as reliable without ever confirming their veracity. The more these false citations are unquestioningly repeated from one article to the next, the more the illusion of their authenticity is reinforced. Fake citations have turned into a nightmare for research librarians, who by some estimates are wasting up to 15 percent of their work hours responding to requests for nonexistent records that ChatGPT or Google Gemini alluded to.
— Miles Klee, AI Is Inventing Academic Papers That Don’t Exist — And They’re Being Cited in Real Journals, Rolling Stone (Dec. 17, 2025)
The perilous problem of pervasive predictive processors is not just that AI often is wrong or that it’s polluting our scholarly writing pool — far more often than anyone wants to admit — but that, increasingly, it’s become a substitute for real thinking.
Crutchery and the Loss of Critical Thinking Skills
While thinking on all this at 3:30 this morning, it occurred to me that the thing that concerns me the most about AI is “crutchery”. And when I thought that, I had to stop myself and ask, “Is that a real word?” So I googled it and found disagreement on whether it is. Google AI told me it is not, but proposed a possible meaning that actually turned out to be partially true.
Then I found a sermon from 2021 that used the word in exactly the way I had intended. And its story nails what bothers me about AI so pointedly that I quote the entire sermon (it’s not that long) here:
When an accident deprived the village headman of the use of his legs, he took to walking on crutches. He gradually developed the ability to move with speed even to dance and execute little pirouettes for the entertainment of his neighbours.
Then he took it into his head to train his children in the use of crutches. It soon became a status symbol in the village to walk on crutches and before long everyone was doing so.
By the fourth generation no one in the village could walk without crutches. The village school included “Advanced Crutchery” in its curriculum and the village craftsmen became famous for the quality of the crutches they produced. There was even talk of developing an electronic, battery-operated set of crutches!
One day a young buck presented himself before the village elders and demanded to know why everyone had to walk on crutches since God had provided people with legs to walk on. The village elders were amused that this upstart should think himself wiser than them so they decided to teach him a lesson. “Why don’t you show us how?” they said.
“Agreed,” cried the young man.
A demonstration was fixed for ten o’clock on the following Sunday at the village square. Everyone was there when the young man hobbled on his crutches to the middle of the square and, when the village clock began to strike the hour, stood upright and dropped his crutches. A hush fell on the crowd as he took a bold step forward, and fell flat on his face.
With that everyone was confirmed in their belief that it was quite impossible to walk without the help of crutches.
— “Trinity 5 2021”, Compton, Hursley, and Otterbourne Benefice (July 4, 2021)
Claude and ChatGPT both tell me this sermon illustration dates back at least to the late 20th century, with Claude adding that it’s the kind of illustrative story that circulated among preachers in the tradition of Henry Ward Beecher and Charles Spurgeon. Neither LLM, though, could provide an example “without access to physical sermon illustration collections from the late 1800s and early 1900s”.
Explanatory, Exploratory, Depilatory
Regardless of the provenance of the crutchery story, it makes the point.
AI is everywhere. Everyone is using it. It is quickly turning into the crutches without which we cannot do what we’ve done for millennia.
New evidence suggests that AI comes with an invisible trade-off. It helps us complete tasks faster, but has the potential to cut our engagement in real learning and erode cognitive skills. Essentially, we are swapping long-term cognitive ability for short-term efficiency. We have to weigh the costs of AI and consider how we can use it as a tool without jeopardizing our engagement in cognitive processes.
— Lauren Leffer, Too Much Trust in AI Poses Unexpected Threats to the Scientific Process, Sci. Am. (Mar. 18, 2024) (edited by Ben Guarino & Clara Moskowitz)
The study Leffer talks about found that “delegating mental tasks to external aids such as large language models can lead to a decline in cognitive engagement and skill development”.
In the sermon illustration above, the “young buck” fell flat on his face because generations of villagers had never walked without crutches. Muscles atrophied. Maybe their brains also “forgot” how to walk.
At least one study I found seems to show the same thing may be happening to our brains when we learn to rely too much on AI.
A lot of the reports about the study — like the one I just linked in the last sentence — say things like “LLMs are making us dumber.” So the study’s authors have posted an FAQ page on the Internet. One of the FAQs asks if it’s safe to say that LLMs are making us dumber. Their answer is:
No! Please do not use the words like “stupid”, “dumb”, “brain rot”, “harm”, “damage”, “passivity”, “trimming” and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it.
— Nataliya Kosmyna & Eugene Hauptmann, FAQ for “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task” (2025)
Now — and there’s a huge bit of irony in what I’m about to say — I asked ChatGPT to summarize the 206-page report. Because while I am interested enough that I think I’m going to read it, I’m clearly not going to read it before I publish this post. And, I’ll be honest, I don’t yet know if I’ll ever read the entire report. There are a lot of statistical things, charts, images, etc., which frankly I’m not all that interested in.
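If you’re curious what that kind of delegation looks like outside the chat window, here’s roughly the programmatic equivalent. This is a minimal sketch, assuming the pypdf and openai Python libraries; the file name and model are placeholders (I actually just pasted the report into the chat).

```python
# Minimal sketch of asking an LLM to summarize a long report, assuming the
# "pypdf" and "openai" libraries. File name and model are placeholders.
from pypdf import PdfReader
from openai import OpenAI

reader = PdfReader("your_brain_on_chatgpt.pdf")  # hypothetical local copy
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# A 206-page paper can exceed a model's context window; a real script would
# chunk the text and summarize chunk by chunk. This keeps it simple.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Summarize the key findings of this paper:\n\n" + text,
    }],
)
print(response.choices[0].message.content)
```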
But here’s the response from ChatGPT when I asked it the same FAQ:
Not in the crude, tabloid sense — but yes in the precise, defensible sense that matters.
The paper does not claim LLMs reduce IQ or intelligence. What it does show is:
Reduced germane cognitive load (the kind required for learning and schema formation)
Systematic cognitive offloading
Weakened memory consolidation
Diminished self-authorship and agency
Lower neural integration during complex reasoning tasks
If you define “dumber” as:
less able to recall, synthesize, explain, or independently reconstruct what one has just produced
then the data squarely supports that conclusion.
— Response of ChatGPT to my query as to whether it was safe to say, based on the report, that LLMs are making people dumber (December 26, 2025)
Chat went on to add:
This paper gives you solid empirical footing for a careful but devastating claim:
LLMs don’t make people stupid overnight.
They make thinking optional — and optional thinking atrophies.
— Response of ChatGPT to my query as to whether it was safe to say, based on the report, that LLMs are making people dumber (December 26, 2025)
Among the concerns raised in Too Much Trust in AI Poses Unexpected Threats to the Scientific Process, quoted near the start of this section, the greatest involved the development of “two kinds of scientific monocultures” through the use of AI. For brevity’s sake, I’m not going to talk about the monocultures; I leave it as an exercise for the reader to follow the link and read about them.
After all, the whole point of my article is that we all need to make sure we get enough exercise!
But the reason for the concern about the two kinds of scientific monocultures is not just about weakening brain connections — or making thinking optional.
It’s that they “[b]oth…could lead to cognitive illusions.” One of those is the illusion of explanatory depth; another is the illusion of exploratory breadth; a third is the illusion of objectivity — what I jokingly referred to in my section heading as “depilatory” because it removes the hairy subjectivity from scientific argumentation.
Supposedly.
The illusion [of explanatory depth] has two parts. First, the “explanatory” part refers to our belief that we can provide a clear, detailed explanation of how something works. The “depth” part reflects our assumption that the explanation will be thorough and complex enough to convey what we’re trying to explain. In reality, however, when we try to dig into the details, we often find that our explanatory knowledge is much shallower than we initially thought. What we actually end up with is the reality of explanatory shallowness.
— The Decision Lab, Why Do We Think We Understand the World More Than We Actually Do? The Illusion of Explanatory Depth, Explained (last visited December 26, 2025)
Lisa Messeri, a Yale University sociocultural anthropologist interviewed for the Too Much Trust article, defines it slightly differently: thinking you know something just because someone else in your community knows it. Or, if I understood her correctly, because an LLM knows it. One of her concerns is that we put too much faith in AI.
Messeri says the illusions come into play because of an incorrect or improper over-reliance on AI in trying to gain knowledge.
The illusion of exploratory breadth further complicates the picture of advancing scientific knowledge because it involves thinking that we’re “examining more than we really are”. The concern here is that AI is well-suited only for certain kinds of questions, but we might mistakenly assume that those questions are all the questions that need to be asked. That puts a limit on our acquisition of knowledge, shrinking it to the realm of what AI handles well; blocking out — locking out — what it does not.
Machines — as I’ve noted in my Pretrial Release: The Illusion of Algorithmic Neutrality quoted above — are often trusted as “neutral” or “objective”. To those who lack understanding — you’ll frequently find them hanging out on Mount Olympus wearing black robes — the machines strip away subjectivity.
But as Messeri notes, “at the end of the day, AI tools are created by humans coming from a particular perspective”. In other words, subjectivity — and bias — are baked in. All the depilatories in the world aren’t going to remove that hairy fact.
The Problem Is Crutchery, Not the Crutch
The problem with crutches isn’t that people use them. When the village headman lost the use of his legs, crutches became an absolute necessity for him. The problem came from over-reliance on crutches — especially where crutches were not necessary.
As Messeri noted, you don’t have to hate AI to be concerned about improper use of it, or over-reliance on it. Messeri herself uses AI. I also use AI. Loads of really smart people with plenty of synaptic action going on use AI.
The problem is when we start teaching courses in Advanced Crutchery.
As David Brooks put it in an article with a title that would make the authors of the MIT study, Your Brain on ChatGPT, cringe:
A.I. isn’t going anywhere, so the crucial question is one of motivation. What do students, and all of us, really care about — clearing the schedule or becoming educated? If you want to be strong, you have to go to the gym. If you want to possess good judgment, you have to read and write on your own. Some people use A.I. to think more — to learn things, to explore new realms, to cogitate on new subjects. It would be nice if there were more stigma and more shame attached to the many ways it’s possible to use A.I. to think less.
— David Brooks, Are We Really Willing to Become Dumber? (July 3, 2025)
Now let’s everybody grab your books; your pens, pencils, markers; and notebooks.
And let’s hit the gym.