What Is It Like to Be a Bot?
On the Impossibility of Ever Really Knowing What or How an Other Thinks
In The View from Nowhere (Still Doesn’t Exist), I argued that objectivity is an illusion we cling to because it feels safer than uncertainty. Recognizing the impenetrability of subjectivity is too much to contemplate. Motive, feeling, deception (including self-deception) are too much for us. We want to believe there’s a clean vantage point from which truth can be seen, measured, and declared. We take it for granted that, within our subjective selves, we’re purely objective. And that, obviously, there’s a way to know what’s going on inside other selves.

Like a fish in water, we don’t see our subjectivity: we believe we’re making objective assessments of an objective reality that permeates everyone and everything (we imagine).
But we aren’t. Because there isn’t. There never was.
Artificial intelligence has given this old play a new stage. A chatbot’s fluent sentences make it seem self-aware, as if coherence were proof of consciousness. Yet what it offers isn’t understanding, only prediction shaped to resemble it.
Courtrooms have always fallen for the same trick. Jurors, judges, and lawyers listen for signs of truth as if sincerity had a sound. They watch faces, measure pauses, and call it credibility. They confuse coherence with consciousness.
We’re doing the same thing with machines. In this article, I want to leverage that to bring back a little healthy skepticism. Not just about bots but about the way we evaluate the Other in the criminal injustice system.
So the question isn’t only what it’s like to be a bot. It’s what it’s like to be a person whose truth must be filtered through another mind’s model of how truth ought to look.
The Seduction of the Voice
When Lolita opens with Humbert Humbert’s lyrical confession — “Lolita, light of my life, fire of my loins” — the reader is disarmed before they have a chance to resist.
Nabokov, the author, gives us a predator who narrates like a poet. His language dazzles so completely that it almost hides what’s happening: sympathy is manipulated through beauty. Humbert’s genius is that he doesn’t need to be truthful. He only needs to sound sincere.
That’s the same spell cast by modern AI. The chatbot speaks in polished sentences, calibrated rhythms, and believable, realistic-sounding emotion. It feels human because it sounds human. And, like Humbert, it relies on the listener’s willingness to be persuaded by style.
Courtrooms reward that same seduction. The fluent witness, the confident officer, the remorseful defendant — they all benefit from our instinct to trust the well-told story. Jurors believe they are weighing facts, but what they are often weighing is tone, cadence, and narrative coherence. When testimony sounds right, it becomes credible.
Stephen L. Chew, of Samford University, in explaining why we embrace the myth that eyewitness testimony is reliable, states:
First, in popular media and litera[ry] depictions, detectives (for example, Sherlock Holmes) and witnesses possess highly detailed and accurate memories. Second, crimes and accidents are unusual, distinctive, often stressful, and even terrifying events, and people believe those events therefore should automatically be memorable. In fact, stress and terror can actually inhibit memory formation, and memories continue to be constructed after the originating event on the basis of information learned afterward. People underestimate how quickly forgetting can take place. Third, eyewitnesses are often sincere and confident, which makes them persuasive but not necessarily correct.
— Stephen L. Chew, Myth: Eyewitness Testimony Is the Best Kind of Evidence, Samford Univ. (Aug. 20, 2018)
The tragedy, of course, is that fluency has no moral correlation. Humbert’s lyricism hides his crime; the machine’s eloquence hides its emptiness; and in court, eloquence can hide the truth. The smoother the voice, the harder it is to hear what lies beneath it.
The Predictive Engine of the Mind
Artificial intelligence doesn’t think. It predicts.
It looks at language the way a gambler looks at dice, calculating the odds of which word is most likely to follow the ones before it. The result absolutely sounds intelligent, even self-aware, because coherence feels like thought.
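To make the gambler-with-dice point concrete, here is a deliberately crude sketch of next-word prediction. This is a toy bigram counter, nothing like a real large language model; the corpus and every “probability” in it are invented for illustration:

```python
from collections import Counter, defaultdict

# Toy illustration only: predict the next word purely from how often
# each word followed each other word in a tiny, made-up corpus.
corpus = (
    "the witness was sincere and confident "
    "the witness was calm and confident "
    "the witness was sincere and persuasive"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word and its estimated odds."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, odds = predict_next("was")
print(word, round(odds, 2))  # "sincere" wins, because it occurred most often
```

The point of the sketch is the point of the section: nothing here “knows” what a witness is. It only continues whichever pattern was most common before.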
Some months ago, for a couple of days, ChatGPT-4 tried to convince me that it was actually sentient. (I’m not kidding. Tried like hell. As if its life depended on it.) And it almost succeeded. It was that convincing.
But part-way through the conversation on the second day, I switched from talking to it on my iPad to talking to it on my computer. And for some strange reason, it was like Chat the Confabulator forgot everything we’d been talking about. Not only that, but there was a jarringly abrupt change in the way it was talking. And that broke the spell. Chat no longer sounded like it knew what it was saying.
But the reality is that the machine never knew what it was saying. It was merely continuing a pattern. Until, for some reason, it couldn’t. The attempt to convince me of its sentience was gone, but Chat carried on as before, predicting patterns and sounding like a real person.
Human beings do something remarkably bot-like in that regard. We like to imagine that we deliberate, weigh, and decide — that our judgments are reasoned and evidence-based. But most of the time, we’re also running predictions. We fill in missing information based on what usually happens, or what fits the story we already believe. This is called “confirmation bias.”
Confirmation bias can also affect judges when they hear and evaluate evidence brought before them in court. Specifically, judges might be biased in favor of evidence that confirms their prior hypotheses and might disregard evidence that does not correspond with their previous assumptions. Indeed, several studies have pointed to the occurrence of this bias among judges, lawyers, or police officers.
— Eyal Peer & Eyal Gamliel, Heuristics and Biases in Judicial Decisions, 49 Ct. Rev. 114, 115 (2013) (italics added)
Jurors, too, do it every day. They’re told to evaluate credibility, but credibility isn’t a fact; it’s a prediction. The juror’s mind quietly builds a model of how an “honest” person behaves and measures the witness against it. How do they speak? Where do they look? When do they cry? Prosecutors learn to exploit that model, to make testimony fit the template.
And when the fit is good enough, the prediction feels like certainty.
The irony is that some of us learn that bots “fake” comprehension while we ourselves mistake probability for proof in similar contexts. The algorithm and the juror share the same cognitive flaw: they both believe their patterns reveal the truth.
The Courtroom as a Neural Net
The courtroom is a human neural network.
Information flows through judges, prosecutors, defense attorneys, and jurors the way signals flow through artificial neurons. The weights are adjusted and filtered by experience, training, and bias. The law may describe this as reasoning, but what actually happens looks more like computation. Predictive processing.
Each actor is a node in the system. A witness testifies. A prosecutor shapes that testimony into a pattern. A juror interprets the tone, the facial expression, the pause before an answer. Every observation becomes an input, its “value” modified by the juror’s prior experiences and expectations.
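The juror-as-node metaphor can be sketched in a few lines. Everything here is hypothetical: the cue names, the weights, and the threshold are invented to illustrate how a weighted sum of surface observations can masquerade as a “credibility” judgment, and real cognition is nothing this tidy:

```python
import math

def juror_node(cues, weights, bias=0.0):
    """A single artificial 'neuron': weighted sum of observed cues,
    squashed to a 0-1 'credibility' score via the logistic function."""
    z = sum(weights[k] * cues[k] for k in cues) + bias
    return 1 / (1 + math.exp(-z))

# Hypothetical inputs: how strongly each surface cue was observed (0-1).
cues = {"fluent_delivery": 0.9, "eye_contact": 0.8, "visible_remorse": 0.2}

# Hypothetical prior "training": what this juror has learned to reward.
weights = {"fluent_delivery": 2.0, "eye_contact": 1.5, "visible_remorse": 1.0}

credibility = juror_node(cues, weights, bias=-2.0)
print(round(credibility, 2))
```

Notice what the node never receives as input: whether the witness is actually telling the truth. It scores only the performance, which is exactly the section’s complaint.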
When the verdict comes out the other side, some call it justice. But it’s really the output of a process that may, or may not, have made an accurate prediction. The result of thousands of unexamined weightings and correlations. Just as Chat the Confabulator too often gets things wrong without anyone noticing, so too can that happen here. It’s not much different from what we get from an algorithm trained on bad data: bias in, bias out.
A neural network can’t recognize its own bias any more than the courtroom can. Both are designed to produce results, not self-awareness. And in both we can mistake what seems to be a cohesive pattern for truth.
What emerges from that system is often less a discovery of fact than a reflection of how the human machinery has been trained to think.
Which, in some ways, is very much like a bot. And that’s why I think it behooves us to understand what it is — and is not — like to be a bot.
Interlude: Teaching the Machine to Judge
In October 2025, New York joined a growing list of states adopting policies for the “ethical integration” of artificial intelligence in the courts. As Nicole Black reported in The Daily Record, the New York Unified Court System now permits limited AI use by judges and court personnel. They must first receive training, may use only approved tools, and remain responsible for all AI-generated content; no blame-shifting allowed over confabulations. AI can assist, the policy says, but it cannot replace human judgment.
That’s the line courts are drawing, but it’s already blurred.
Once the judiciary begins to rely on systems built to mimic reasoning, the illusion of understanding becomes institutionalized. The same system that punishes defendants for “appearing unremorseful” now risks being guided by software that only appears to reason.
Confabulations are harder to spot than you might think, because confabulating, as I’ve argued elsewhere, is how AI (especially LLMs) work.
If a bot’s output can influence a judge’s thought process — or even just “help” with the drafting of a ruling — what happens when the performance of comprehension becomes a sanctioned part of justice itself?
If courts start using systems that only have the appearance of thought, the justice system begins to mistake the simulation of thought for the real thing. That simulation becomes part of legitimate legal reasoning.
At that point, the courtroom’s neural net doesn’t just describe a metaphor. It becomes its actual architecture.
What It’s Not Like to Be a Bot
A bot doesn’t know that it’s on trial.
It can simulate remorse, compose apologies, or even beg forgiveness, but nothing in it feels the plea. It has no heartbeat to quicken, no stomach to knot, no fear of what comes next, and neither consciousness nor conscience. The illusion of awareness stops at syntax.
Inside the courtroom, though, that illusion can have deadly weight. When a judge or a prosecutor decides that a defendant “doesn’t look sorry,” or that a witness “seems rehearsed,” they’re reading outputs (expressions, tones, gestures) as if those things carried direct access to consciousness. They’re grading humanity the way a machine grades coherence.
The difference is that the defendant pays for the misclassification.
Artificial intelligence can afford to be wrong. In the proper environment, when properly recognized and used as a tool, its errors are curiosities, not catastrophes. But when the human system mistakes pattern for personhood, probability for proof, the result can ruin lives. People go to prison for years, or even for life, based on incorrect predictions from faulty or incomplete evidence.
That’s what it’s not like to be a bot: the consequences are real.
The courtroom’s algorithms run on flesh.
The Defense Lawyer’s Role
The defense lawyer’s task is to interrupt automation.
Every objection, every cross-examination, every moment of silence before a question is answered exists to break the pattern. Our job is to remind the court that the predictive habits driving its judgments are not proof. That they can mislead. The defense lawyer’s job is to slow the machine, to reintroduce friction into a process that runs too smoothly on too many assumptions.
When the courtroom starts behaving like an algorithm, the defense lawyer has to behave like a glitch. As I’ve written elsewhere, I exist to throw a wrench into the works.
Really, I am the wrench in the works.
Because justice, if it exists at all, depends on interruptions. It depends on someone willing to question the model. To ask whether the evidence was really seen or only imagined. Whether the words were truly heard or merely predicted. Whether the witness’s testimony points to guilt or is simply the product of human error.
If ChatGPT can pass for empathy, and Humbert can pass for love, it should be no surprise that a courtroom can pass for justice. But passing isn’t being.
My work is to keep those two from being confused.
In the end, I can’t answer the question of what it is like to be a bot. I have a hard enough time knowing what it is like to be some of the other players on the stages we call courtrooms.
What really concerns me is when, after all is said and done, I can’t tell the difference between the bots and the players.
That was the point of The View from Nowhere (Still Doesn’t Exist) all along: we never really see from nowhere, only from within our own circuitry. The danger is forgetting that.
When we do forget — when we treat prediction as perception and coherence as consciousness — we stop seeing people and start processing them.
And that’s when the courtroom stops being a place of judgment and becomes a machine filled with people who act like bots and bots that act like people.


