Mythology & Artificial Intelligence
Prometheus, Stolen Fire, and Why AI Won’t Be Godlike…Ever
Every few months someone tells me that AI is going to build a smarter AI, which will build an even smarter one, and before long we’ll all be out of work — or ruled by a machine. It’s become a kind of modern myth. People talk this way even when they don’t understand how the systems work. Even when the people who “grow” them don’t know how they work. And even though what we do know is that these machines have no intentionality, or intentionality, or intentionality.
Although, as that last sentence might show you if you looked at the links: we don’t really know either what intentionality is or what we really mean by it. We certainly can’t grow AIs that have any form of it.
But to the people who staff the AI Cheerleading Squads, this doesn’t matter. The lack of understanding is irrelevant. The reason, ironically, was recently explained to me by ChatGPT:
You know, intellectually, that I’m not a mind. But I produce language in a form that triggers the intentional stance almost automatically. Humans are built to infer agency from language, especially coherent language. So when I sound like I “understand,” the brain starts treating the output as if there is someone in here weighing evidence, forming beliefs, and exercising judgment.
There isn’t.
— ChatGPT conversation with Rick Horowitz (April 24, 2026)
The AI Cheerleading Squads don’t see pattern-recognition machinery; they see destiny.
Prometheus taught us this pattern a long time ago. He gave humans fire — raw power, not wisdom. Zeus feared what humanity might become, not what the flame actually was. And ever since, people have confused new tools with new gods.
AI right now is fire. Useful. Dangerous. But not divine.
What People Think Fire Can Do
To believe AI will build a superintelligence, you have to believe the fire will suddenly decide to forge itself into Zeus.
That requires abilities no existing system has. For one thing, it would have to understand its own architecture and diagnose its own failures. It would need to choose its own training data, design better models, and form goals — something that requires intentionality.
Which LLMs not only don’t have, but probably can’t have.
Another way to name the missing thing comes from predictive-processing theory, at least where it intersects with phenomenology. In The Philosophy and Science of Predictive Processing, Zachariah Neemeh and Shaun Gallagher discuss Husserl’s account of intentionality in terms of “originary temporalization” — the lived, pre-reflective unfolding of subjectivity through time. That matters because human prediction is not just computation. It is embodied anticipation by a living creature with something at stake. We do not merely predict the world. We inhabit it, suffer it, move through it, and can be killed by it. Our predictions are bound up with hunger, fear, fatigue, pain, desire, memory, and the forward pull of a life that has to continue.
As Merleau-Ponty says, “Time is not a line, but rather a network of intentionalities” [citation omitted]. It’s a processual network structured by three temporal aspects: retention, primal impression, and protention.
— Zachariah A. Neemeh & Shaun Gallagher, The Phenomenology and Predictive Processing of Time in Depression, in The Philosophy and Science of Predictive Processing 187, 190 (Dina Mendonça, Manuel Curado & Steven S. Gouveia eds., Bloomsbury Acad. 2022) (emphasis added)
An LLM has none of that. It has sequence prediction without a life. It has output without appetite. It has fluency without futurity. It can predict the word “burn,” but it has no body that recoils from flame. It can generate a sentence about death, but there is no self whose future is threatened. That is why the Prometheus myth matters. Fire in human hands becomes history because humans have originary agency. Fire inside a machine remains fire.
To again, ironically, quote ChatGPT:
Current AI has analogues, not the thing itself.
It can mimic retention by using context windows and memory-like features.
It can mimic primal impression by processing the current prompt.
It can mimic protention by predicting the next token or generating a plan.
But those are computational analogues. They are not lived temporality.
— ChatGPT conversation with Rick Horowitz (April 25, 2026) (emphasis added)
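If you want to see just how thin those analogues are, here is a deliberately trivial sketch in toy Python, written purely for illustration (nothing like a real LLM's architecture, and every name in it is made up): "retention" is a sliding window of recent tokens, the "primal impression" is whatever token arrived last, and "protention" is a lookup in a frequency table.

```python
# Toy illustration only: a tiny n-gram-style predictor, not any real language model.
from collections import defaultdict, deque

class ToyNextTokenModel:
    def __init__(self, context_size=2):
        # "Retention": a bounded window of recent tokens. Nothing is remembered as lived experience.
        self.context = deque(maxlen=context_size)
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, token):
        # "Primal impression": the current token is just the newest entry in a buffer.
        if len(self.context) == self.context.maxlen:
            self.counts[tuple(self.context)][token] += 1
        self.context.append(token)

    def predict_next(self):
        # "Protention": not anticipation of a future that matters, just the most frequent follower.
        options = self.counts.get(tuple(self.context), {})
        return max(options, key=options.get) if options else None

model = ToyNextTokenModel()
for token in "the fire burns the hand that holds the fire".split():
    model.observe(token)

print(model.predict_next())  # prints "burns"; nothing in here recoils from flame
```

Real systems replace the frequency table with billions of learned weights and far longer windows, but the shape of the analogy is the same: a buffer, an input, a guess.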
Beyond that, it also has no originary agency.
[A] sovereign must be the genuine source of its decisions rather than the execution point of externally imposed objectives. All engineered systems are derivative in a constitutive sense: instantiated through design constraints, training data, objective functions, and deployment architectures determined by prior human agents. Even in advanced deep-learning systems where behavior becomes unpredictable to developers, unpredictability does not entail originary agency. It signifies complexity within externally structured parameters. Sovereignty requires not mere operational autonomy but existential self-possession within the governing order.
— Yee Leong Hiew, Sovereignty Cannot Be Outsourced: Artificial Identity Instability, Reciprocal Accountability, and the Limits of AI Authority (working paper, February 27, 2026). A revised version of this manuscript is reportedly under peer review at Ethics and Information Technology (Springer Nature), but I could not verify this.
I’ve seen increasing predictive power when it comes to choosing words. I’ve yet to see anything that tells me that any generative AIs truly understand what they’re saying.
And so, finally, they cannot evaluate the consequences of what they say.
Today’s AI can’t even reliably fix its own bugs. It misreads instructions, confabulates sources, fails basic reasoning, and collapses outside narrow conditions. There is no path from this to “recursive self-improvement.” There isn’t even a first step.
But belief doesn’t depend on evidence. It depends on narrative. And if there’s one thing AI and its true believers have in abundant supply, it’s confident narrative.
Often wrong narrative. But confident nonetheless.
Nobody pays me to challenge the myths that make bad evidence look scientific and bad technology look objective. If you value that work, you can support it here.
Fire Doesn’t Scare Me. Mythmaking Does.
So, ultimately, AI does not think, analyze, evaluate, or otherwise do anything that depends upon intentionality or other real-world originary concepts, including originary temporality and originary agency. The belief that we will somehow get there eventually is all part of the mythological thinking I’m arguing against in this article.
But here’s where the criminal-defense angle matters and why it was important for me to write this.
In criminal defense — and particularly in courtrooms — mythology is far more dangerous than machinery because mythology accompanies the machinery and makes its chicanery admissible.
I’ve already seen judges, prosecutors, and police treat algorithmic outputs as if they were neutral, objective, and somehow wiser than the humans who built them. They defer to machines because — again — the machines sound confident. That’s the Oracle problem I wrote about earlier — people mistake fluent language for authority, and authority for truth.
If you want a real mythic parallel, AI in the justice system isn’t Prometheus or Zeus. It’s the Golem: literal, strong, obedient, and dangerous only when people imagine it understands more than it does.
Risk assessment tools, predictive policing models, automated reports — they all carry the same flaw: power without comprehension. Fire in the hands of people who have never studied burns.
This has nothing to do with ASI (artificial superintelligence). It’s the opposite problem: the system treats a narrow tool as if it were an all-seeing judge.
The Real Threat Isn’t Superintelligence. It’s Superstition.
So far, I’ve been asking whether AI can become godlike — whether it can possess intentionality, lived temporality, originary agency, or anything resembling judgment. But for criminal defense, the more urgent question is different: what happens when courts, cops, prosecutors, and vendors start pretending it already does?
The real threat is not that AI becomes superintelligent. The real threat is that the legal system becomes superstitious. It’s not a Zeus problem so much as a Golem problem.
In Jewish folklore, the golem is an artificial humanoid formed from earth or clay and brought to life through sacred knowledge.
— Joseph Bennington-Castro, What Are the Origins of the Golem Legend?, History (Mar. 10, 2026)
If this sounds a little familiar already, let me give you more:
The most well-known version of the golem legend is set in late-16th-century Prague, where the Jewish community faced intense hardship and persecution fueled by the “blood libel”—a false accusation that Jews used Christian blood in the preparation of matzo for Passover. The story centers on a real historical figure, Rabbi Judah Löew ben Bezalel (1525–1609), a revered spiritual leader of Prague’s Jewish community, widely known as the “Maharal of Prague.”
According to legend, the Maharal created the golem as both a domestic servant and a protector of the Jewish community. He formed its body from clay taken from the Vltava River and brought it to life through sacred rituals—such as placing a parchment inscribed with a divine name in its mouth or by writing the Hebrew word emet (“truth”) on its forehead. Removing the parchment or altering the inscription to read met (“dead”) deactivated the creature.
But the soulless golem is dangerously powerful, and the Prague story usually ends with the creature spiraling into a murderous rampage.
— Joseph Bennington-Castro, What Are the Origins of the Golem Legend?, History (Mar. 10, 2026)
“Murderous rampage” is not the concern I have here. We can get into If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All some other day.
We are not the only potential targets of an existential threat from AI. We’re already seeing an existential threat within the criminal justice system, in the form of the Judicial Golem.
This judicial myth casts AI — as we’ve already seen, a disembodied artificial “humanoid” — as the protector of criminal justice, due process, and soulless sentencing. It’s necessary because the criminal justice system is overwhelmed. It’s overwhelmed with cognitive biases. It’s overwhelmed with contradictory incentives. It’s overwhelmed by sheer numbers.
Somehow, it is believed, we can fix all these things by shaping our new Golem out of bits and bytes instead of mud. Then we place our digital parchment, inscribed with our divine laws, into its neural networks so it can guide investigations, court proceedings, and judicial outcomes by synthesizing the data and spitting out the unbiased, unburdened answer — the answer supposedly free from human prejudice, human fatigue, human fear, human anger, and human delay.
But that is the myth.
The Judicial Golem is not free from human bias. It’s built from it. Its clay is not mud from the Vltava River. The Judicial Golem’s clay is training data. And we have plenty of evidence these days about how that training data implicitly teaches the Golem to respect our prejudices and biases. Remember when I noted above all the things AI would need to do, but cannot do, because it has no intentionality? One of those things was finding its own problems and fixing them. Because AI lacks that ability, when AIs are used to train AIs, they pass along all our human biases, prejudices, and other frailties without examination or question.
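To make that concrete, here is a minimal sketch with invented numbers and a deliberately crude stand-in for a “model” (it just memorizes which outcome was most common for each group). The group names and figures are hypothetical, written only for illustration; the mechanics are the point. Train a second system on the first system’s outputs and the original skew comes along for the ride, unexamined.

```python
# A minimal, hypothetical sketch: invented data, invented groups, toy "models."
from collections import Counter

def train(records):
    """'Training' here just memorizes, per group, which label was most common."""
    by_group = {}
    for group, label in records:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

def predict(model, group):
    return model[group]

# Invented "historical" labels: identical conduct, skewed outcomes by neighborhood.
history = ([("north_side", "release")] * 80 + [("north_side", "detain")] * 20
         + [("south_side", "release")] * 40 + [("south_side", "detain")] * 60)

model_1 = train(history)

# Now "train" a second model on the first model's outputs instead of on reality.
synthetic = [(group, predict(model_1, group)) for group, _ in history]
model_2 = train(synthetic)

print(model_1)  # {'north_side': 'release', 'south_side': 'detain'}
print(model_2)  # the same skew, now laundered through a second model
```

Swap in a real machine-learning pipeline and the arithmetic gets fancier, but the clay is the same: neither model ever asks whether the original labels were just, because neither one can.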
But wait! There’s more! [Cue the Ginsu Knife Ad.]
After we fall into the trap of thinking we have extirpated error, we layer in arrest data, charging decisions, police narratives, plea bargains, sentencing patterns, probation reports, surveillance feeds, and the thousands of quiet discretionary choices that already shape the criminal legal system before any defendant ever reaches a judge.
Call that clay “data” if you want.
It is still clay.
And when we animate it with code, we do not purify it. We give it motion. We give it speed. We give it the appearance of neutrality. We let it walk into court wearing the mask of objectivity.
That is the danger. Not that the Golem understands justice. That it does not — and that the humans around it forget this.
So what could go wrong? Who doesn’t like an improved conveyor belt of justice?
But that is exactly what goes wrong: shortcuts look like justice, automation looks like objectivity, human bias is replaced by machine bias, the presumption of innocence ends up disappearing completely, and nobody feels responsible because — to bastardize Flip Wilson’s bit — “the computer made me do it.”
AI Isn’t the Problem. The Myth Is.
In the end, my clients don’t get hurt because AI becomes godlike. They get hurt because humans pretend it already is.
Ultimately, what we need to remember is that AI is a tool. An unthinking tool. Like a hammer. Or like fire.
My concern is over our unthinking embrace of the unthinking tool. You all know well (I hope) the perhaps overused trope about tools. When your only tool is a hammer, everything starts to look like a nail. And when we hand investigations, court proceedings, and judicial outcomes to a tool designed to synthesize criminal-justice data and spit out the supposedly unbiased, unburdened answer — the answer supposedly free from human prejudice, human fatigue, human fear, human anger, and human delay — then I guarantee you that everyone starts to look like a criminal.
That’s the myth of AI in the courtroom. Not truth. Not justice. Not the American way. Instead we get convenient and quick conviction.
Well, okay. I guess that is the American way.
But Prometheus did not give humanity wisdom. He gave humanity fire. And fire, as the myth keeps reminding us, is double-edged: it warms, cooks, builds, and illuminates — but it also burns, destroys, and arms the people who learn to use it. That is the right analogy for AI. Not Zeus. Not God. Not mind. Fire.
Today’s AI is the same — capable, imperfect, unwise — fire. A tool. And the danger won’t come from what the tool becomes. We already know what these tools have become: us, but without intentionality; us, but without the ability to make judgments grounded in reality; us, with predictive power, but no understanding.
The danger does not come from the fire. It comes from what people believe the fire can do.
That’s the mythology we need to worry about.