In Google We Trust
How Google’s AI Turned Legal Knowledge Into Legal Fiction
I spent years building a library of free legal information. Articles people could trust — explanations of rights, procedure, bail, evidence, cross-examination.
For nearly two decades, that work kept my office open. My website consistently ranked in the top five of Google's search results because I wrote constantly. When people were looking for criminal defense lawyers in my area, that helped. The site received up to 300 visits per day.
Then Google AI came along. Google decided it could explain criminal law better than I could. After Google AI started handing out “answers” at the top of every page, my visits dropped from an average of around 250-300 a day to somewhere between zero and 30.
If I’m lucky.
The impact on my practice has been catastrophic. Now, for the first time in almost twenty years — longer, if you count the three years when I was still in law school but ran an “independent law clerk” office doing research and writing for criminal defense lawyers — I’m looking at the possibility of paying the bills to wind down my law office out of my own savings.
Ironically, Google did this by essentially stealing my work. And not just mine, but any criminal defense lawyer who was blogging on criminal defense topics.
It’s Not All About Me
But there are also two sad aspects to this, and they hurt Google users just as much as they have hurt me.
Google doesn’t always use the work it has stolen from criminal defense writers in its entirety. Google’s AI synthesizes the information it was trained on. It’s much like the generative AI that creates images: models trained on billions of images don’t spit out the exact images upon which they were trained.
That’s not how it works. Instead, both image generators and LLMs — large language models like ChatGPT, Claude, Gemini, and Google’s AI answerer (whatever that really is) — learn “concepts”. Those concepts sometimes include pieces of the original works: if you use generative AI, you may have seen what look like “artist signatures” show up in a generated image.
And, as will become more important below, that happens with pilfered legal “concepts”, too.
This isn’t the place to get into the details of how that works. I do have other articles, on my Substack and on my criminal defense blog, that talk about AI — how it works, and how it doesn’t. You can check those out by visiting the links to my main writing sites.
The point is that the same mimicry I described above applies to law and to AI’s “legal answers”. The machine doesn’t understand a statute; it just repeats the “shape” (that’s an AI term) of understanding. And when Google repackages that imitation as an “answer,” it gives the illusion of legal authority without the accountability that makes real expertise safe to rely on.
“Learning Concepts”
Now, I don’t know all the details of how Google AI learns how to give answers. I’m sure some of that is “proprietary”. And, in fact, it’s so “proprietary” that only the real proprietors — and I’m not talking about Google or Google engineers, because they’re even further removed — could know. (But we don’t even know if the true proprietors are sentient.)
It’s just doing whatever its transformer architecture does — which, as even AI scientists admit, we don’t fully understand.
— Rick Horowitz, Naming AI’s “Problem”: Confabulation, Bullshit, or Both? (September 24, 2025)
Or, as I further explained in Confabulations Cause Hallucinations (May 10, 2025), even Google’s engineers don’t fully understand how its AI systems reach the answers they give. They can describe the circuitry — the layers, the weights, the probabilities — but not the moment when prediction turns into persuasion. The process is a black box, and like all black boxes, it hides both error and bias.
At any rate, it is my understanding that Google’s AI learns from things written on the Internet. How those are curated — or even if they are curated — I’ve no idea.
But here’s the problem: to the extent Google AI learns from the Internet, there’s no reason to believe it learns only from “good data”.
There’s an old saying about this sort of thing: GIGO — Garbage In, Garbage Out.
That’s where we are now — garbage in, garbage out — except the garbage has a glossy interface and the backing of a trillion-dollar corporation.
When you type a legal question into Google, you’re no longer getting search results that connect you to people who’ve spent decades studying the law, trying cases, and explaining them in plain English. You’re getting an AI summary trained on those people’s unpaid work — some from actual criminal defense lawyers like me and some from people who don’t know wtf they’re talking about — blended, averaged, and stripped of accountability.
The Damage to the Law
The real cost of this isn’t just that small offices like mine are disappearing. It’s that the public’s relationship to the law is being rewritten by machines that don’t understand what they’re saying.
When Google’s AI hands out legal “answers,” it trains people to treat the law as a consumer good — a quick predictable result, not a discipline. Clients come in convinced they already know the outcome of their case because Google told them so. Judges see filings that echo AI phrasing — or made-up (confabulated) AI case law that snags even experienced lawyers who rely on it. Even prosecutors quote snippets that feel suspiciously “machine-written.” We’ve replaced legal reasoning with auto-complete.
Google’s AI can’t reason about a statute; it can only sound like it’s reasoning. And when that imitation gets repackaged as an “answer,” the illusion of legal authority arrives without the accountability that makes real expertise safe to rely on.
That illusion of authority corrodes everything downstream. The courtroom runs on credibility — not charisma, not style, but the ability to show how a claim connects to real evidence and existing (not confabulated) law. AI can’t do that. It can only sound like it did.
LLMs feel like they understand because they speak fluently. But fluency is not thought. Language is not consciousness. And projection is not perception.
— Rick Horowitz, Ghosts in the Machine: Why Language Models Seem Conscious (April 15, 2025)
Once Google’s AI summary becomes the public’s first stop for “legal information,” the adversarial system loses one of its cornerstones: informed clients who know enough to ask questions. Instead, people arrive armed with machine-spun certainty. They’ve been persuaded by a system that, as I explained a couple months ago, is indifferent to truth:
AI hallucinations aren’t glitches to be patched or programmed out. Hallucinations are the output of a system that’s doing exactly what it was built to do all the time.
And that’s why I believe “hallucination” is actually the wrong term. The correct term is “confabulation”.
— Rick Horowitz, Naming AI’s ‘Problem’: Confabulation, Bullshit, or Both? (September 24, 2025)
That indifference is fatal in law. The legal process depends on verification. Every fact must be checked (by a real lawyer), every citation traceable. Google’s AI does the opposite: it blends, averages, and smooths. Context, jurisdiction, statutory nuance — in other words, the parts that validate the authority, the parts that matter — are exactly what get sanded away.
We’re already seeing the consequences. Lawyers sanctioned for filing briefs with phantom cases. Clients walking into court ready to argue with the judge’s decision based on their “research”. The contagion spreads because the machine sounds right.
“Confabulations cause hallucinations. Not just in machines, but in us.”
— Rick Horowitz, Confabulations Cause Hallucinations: AI Lies Fool More Than Our Eyes (May 10, 2025)
That’s the heart of it. The law can survive mistakes. What it can’t survive is the slow erosion of trust in the very idea that truth can be known — that there’s a difference between a persuasive sentence and a proven fact. Forgetting that AI sometimes falls victim to GIGO, and often simply confabulates, hurts everyone. And it can come with real costs.
I’d like to hear what you think — not what Google’s AI thinks. Have you seen AI answers go wrong in your own work or searches? Drop a comment below.
The Damage to Justice
The damage to law is structural. The damage to justice is human.
Justice happens in context. It involves people who can weigh motives, history, and harm. People who can make actual judgments. It lives in the space between the law’s abstractions and the lived reality of a person standing in front of a judge. That’s a space Google can’t see.
When someone accused of a crime types a question into Google — “Will I go to jail for a first-time DUI in California?” or “What happens if the victim doesn’t show up to court?” — the answer they get isn’t filtered through experience. It isn’t shaped by the facts of their case, or the county they’re in, or the temperament of the judge who’ll handle it. It’s a statistical echo of a million other sentences, potentially drawn from hundreds of different jurisdictions, each under a different set of laws and circumstances. And people are treating those blended echoes as legal advice.
That’s not an inconvenience; it’s a constitutional problem. The right to counsel presumes that advice will come from a thinking, accountable human being. It requires someone who can ask follow-up questions, test evidence, and give an opinion tailored to all of it: all that came before, all that the client presents, and all that only experience teaches will come after, in a particular case, in a particular jurisdiction, in a particular court.
Google’s AI does none of that. It generates confidence without context. But not without consequence.
I see the fallout in real cases. Just last week, a potential client came to see me who had delayed hiring a lawyer because “Google said” they could handle it. Only on hitting the wall of reality in the courtroom did that person realize they needed a real lawyer.
Families make plea decisions or tell me what the outcome of the case is going to be before I’ve even read the discovery.
Every part of the system adjusts itself to a fiction.
It’s tempting to call that ignorance, but it’s not. It’s trust misplaced. The people relying on these machine answers aren’t being lazy; they’re doing what technology has trained them to do and what AI promoters and influencers tell them they can do — believe the first confident voice of Google AI, or ChatGPT, or Claude.
Pick your poison.
That’s how justice erodes: not with a new law or a bad ruling, but with a shift in who we believe and how we make important constitutionally-impoverished decisions.
The courtroom was built to test stories against evidence. It’s adversarial, with a defense attorney on one side and, on the other, a prosecutor from the DA’s office and a prosecutor in a black robe.
Google has replaced that process with a single-sentence summary, as if the adversarial part had already reached a conclusion. But it hasn’t even started, and that is why trusting in Google leads us astray.
Before long, In Google We Trust stops being a metaphor. It becomes the closing argument for an age that no longer remembers why the lawyers Google is working (perhaps unintentionally; perhaps not) to dispose of are not so disposable, after all.
In Google We Trust
When I started writing about criminal defense, the idea was simple. I wanted to give people real information from someone with actual experience in court, standing beside clients, and fighting for actual outcomes. It worked. For nearly twenty years, people who needed help could find me because Google connected them to what I wrote.
That was supposed to be the point of the Internet. Merit, not marketing. If you built something useful, people could find it.
Then Google decided the law was content. It stopped ranking lawyers by what they wrote and started ranking them by what they paid. So-called “organic” visibility collapsed. Now, if I want to appear where I used to — in searches I earned through my writing and my contribution to the online community — I have to pay a minimum of $1,500 a month for the privilege.
That’s not access; that’s extortion.
And it’s worse than just financial. It’s moral. Because when AI answers replace search results for real lawyers writing about real law, the public isn’t just being sold ads — they’re being sold false hope and false certainty. And certainty without accountability isn’t truth; it’s bullshit.
When people start trusting Google AI over lawyers, the cost isn’t just to the profession. It’s to justice itself. Because real defense work isn’t about slogans or summaries or salesmanship; it’s about context, judgment, and responsibility.
A search engine can scrape words. It can’t stand next to you in court. It can’t scrape out a good result in your very real case.
The irony is that, after all this, I’m still doing what I always did — writing to help people understand the law, even if they now have to dig a little deeper to find it. That’s why I’ve moved from writing on my own website — which no longer benefits me, and is no longer found, so it no longer benefits anyone else — to Substack. My posts are already being seen and read by more people here than my website draws now that Google AI has scraped my content and scratched out my traffic.
If Google wants to rank “authority,” it should start where authority actually lives: in the people who put their names, reputations, and licenses behind what they say. Not in the machines that remix it, not in the law firms that pay to be seen, and not in the corporations that profit from exploitation without remuneration.
Until that happens, “In Google We Trust” will keep being what it already is — a warning label, not a motto.