Dueling Contracts
The art of the steal
The government signed a contract with Anthropic. Then it decided the terms were inconvenient. It wanted some terms stricken. Then it threatened to destroy the company for holding the government to those terms.
As a criminal defense attorney, I find this story very familiar — because I’ve seen this movie before.
Just with lower budgets and less famous defendants.
Anthropic & Security Theater
Whenever I go to the courthouse in Madera, there is a solemn ritual to perform.
I walk in and remove everything from my pockets. I place it all in a bucket. Except my phone. I hold onto my phone until the First Layer Robot — a Rent-a-Robot because Madera doesn’t use real deputies — looks in my direction and I show him that it works. Then, if he’s still paying attention, I show him my iPad to prove that it works, too.
Often he’s not paying attention and I have to wait. Because, you know, he’s busy. He’s looking at the monitor and, at the same time, he and another Rent-a-Robot are conferring over the innocuous contents of the bag of the person in front of me.
I wait because it’s important that I turn on the screen so he sees my iPad actually works and isn’t a bomb disguised as an iPad.
Because it’s impossible, of course, for someone to disguise a bomb as a working iPad.
My bag, and my bucket with my phone, car key fob, eyeglasses, pen, and whatever change I forgot to remove from my pocket before leaving for Madera, all go through the x-ray machine while I walk through the metal detector.
After I clear the metal detector there is the Waving of the Wands. Yet another Rent-a-Robot goes through the motions of waving a wand detector around me because, you know, the real metal detector might have missed something.
When that last Rent-a-Robot is done, I say, “Another successful day of Security Theater.”
Because that’s exactly what it is.
The only attack on the Madera courthouse I know of was committed by a prosecutor who worked there and that was a long time ago. Gasoline was the weapon of choice. The poor metal detector never had a chance.
Neither does a constitutional protection that the government has decided it no longer needs to honor.
The pattern is identical to what I’m writing about here: the government builds an apparatus that looks like it's protecting something — constitutional rights, courthouse security, AI guardrails — performs the ritual of protection, and then the actual threat walks right through because it was never what the apparatus was designed to stop in the first place.
Anthropic, maker of Claude, one of the AI programs I use, had a contract with the Department of Shitheads, er, Defense, er, War. The Pentagon wanted to modify the contract. A dispute arose over five words limiting the government’s use of the AI: “analysis of bulk acquired data.”
The reason for the words?
That phrase, Amodei [CEO of Anthropic] wrote, was “the single line in the contract that exactly matched this scenario we were most worried about” — an AI system trained on aggregated American communications data for domestic surveillance at scale.
— Jacob Ward, “Safety Theater” (March 4, 2026)
Setting aside for the moment why the Pentagon needs to surveil “we, the People” who created the government that now wants to surveil us, the government proposed removing those five words and replacing them with language permitting the government to use Anthropic’s AI “for all lawful purposes.”
But as Silicon Valley’s Representative in Congress, Sam Liccardo, noted:
So let’s be clear: the people who built a very complex technological tool seek guardrails to protect the American public from its misuse. They are not simply being ignored by the government. The government has a right to ignore them. They are not simply being passed over for another company; the Pentagon certainly has a right to do that.
They are being punished for seeking guardrails.
The Pentagon’s response publicly has been: don’t worry your pretty little heads. When we deploy AI tools, we’ll follow the law.
There is only one problem with the Pentagon’s approach: there is no law. The law is years behind the technology.
— Sam Liccardo, Press Release: Rep. Sam Liccardo Forces Vote on Pentagon’s Misguided AI Posture (March 4, 2026)
A number of AI engineers from Google and even OpenAI have signed an open letter in support of Anthropic.
Warmonger and Beer Pong Pro Pete Hegseth was having none of it, though. He gave Anthropic a choice: remove the safeguards, or not only lose the contract but be deemed simultaneously a supply chain risk and so necessary to the government’s security that the company must be forced — by invocation of the Defense Production Act — to work with the government.
Anthropic’s CEO, it turns out, has scruples. So Demented Dickwad Donnie stepped in.
“Well, I fired Anthropic. Anthropic is in trouble because I fired [them] like dogs, because they shouldn’t have done that,” Trump told Politico on Thursday.
Hours later, the Pentagon officially designated Anthropic a “supply chain risk”, a move that prevents all government contractors from using the company’s technology. The label has never been used before against a US company.
— Blake Montgomery, Trump says he fired Anthropic ‘like dogs’ as Pentagon formally blacklists AI startup (March 5, 2026)
But he was limited to firing them “like dogs” and designating them a supply chain risk because he’d just fired his dog shooter.
I was hopeful for a moment when Demented Dickwad Donnie ordered the military to stop using Anthropic’s tech. All of this was unfolding, after all, as the DOD was using that tech to prosecute the Illegal Iran War. I thought, “Does this mean the war’s over?”
There was no need for hope, though. Because while Anthropic CEO Dario Amodei may have scruples, OpenAI CEO Sam Altman does not.
Altman may come to regret his decision to allow ChatGPT to be used to spy on American citizens, though. After he offered to get on his knees before a possibly drunken Hegseth, uninstalls of ChatGPT soared by 295%. (I have cancelled my subscription, downloaded all my data, and am removing everything from my ChatGPT account. However useful ChatGPT may have been to me in the past, I won’t pay to be spied on, or to help build SkyNet, or whatever the end-goal is that requires removing AI guardrails.)
Altman claims that the military agreed to the same safeguards for which it fired, blacklisted, and went after Anthropic.
Does that make sense to you? Yeah, I’m not buying it, either. As Brad Carson, a former congressman and general counsel for the Army, noted,
“I’ve reluctantly come to the conclusion that this provision doesn’t really exist, and they are just trying to fake it[.]”
— Jared Perlo, Kevin Collier and David Ingram, OpenAI alters deal with Pentagon as critics sound alarm over surveillance (March 3, 2026)
What this all boils down to is security theater. Just as the Madera Superior Court — and they aren’t the only courthouse doing this, but they put on the most extravagant show — performs its elaborate ritual to prove something to themselves if to no one else, so do Sam Altman and the nutjobs at the Pentagon pretend to have inserted the very safeguards for which they lambasted, fired, blacklisted, and continue to go after Anthropic.
The Anxiety Neuron Goes to War
What makes this security theater all the more dangerous is this: as I’ve written before, we don’t really know what we’re dealing with. I’ll come back to what I said before in a minute.
About 75 years ago, in 1950, Alan Turing was working on the philosophy of mind, the theory of computation, and the question of what it would even mean for a machine to think. And he wrote a paper that year called Computing Machinery and Intelligence. The phrase “artificial intelligence” did not yet exist — it would be coined by John McCarthy in 1956 — but Turing was laying the groundwork for everything that eventually became artificial intelligence.
Turing devised a “test” — which he called a game — where you put a human interrogator into a room communicating by text with two other parties: one human, one machine. If the interrogator can’t tell which is which, the machine passes what we now call “the Turing Test.”
Turing thought that within fifty years computers would play the game well enough that an average interrogator would have no better than a 70% chance of making the right identification after five minutes of questioning; in other words, the machine would fool the interrogator at least 30% of the time. In 2025, a UC San Diego study found that GPT-4.5 was judged human 73% of the time. Yep! More often than the actual humans it was paired against!
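If you want to see how little machinery the game itself requires, here is a toy sketch in Python. To be clear: this is my illustration, not Turing’s design and not the UC San Diego study’s protocol; the witnesses, their canned replies, and the random-guessing interrogator are all made up for the sketch. The point is the structure: two hidden witnesses, one interrogator, one guess.

```python
# A toy sketch of Turing's imitation game (hypothetical; not any study's
# actual protocol). A random-guessing interrogator is the 50/50 baseline
# that a perfectly convincing machine forces a judge down to.
import random

def human_witness(question: str) -> str:
    return "Honestly, I'd have to think about that one."

def machine_witness(question: str) -> str:
    # A real test would call a language model here.
    return "That's a fascinating question. Let me reflect on it."

def play_round(num_questions: int = 5) -> bool:
    """Play one game; return True if the interrogator mistook the machine for the human."""
    roles = [human_witness, machine_witness]
    random.shuffle(roles)
    witnesses = dict(zip("AB", roles))  # the interrogator sees only "A" and "B"
    for i in range(num_questions):
        question = f"Question {i + 1}: what's on your mind?"
        for label in ("A", "B"):
            _reply = witnesses[label](question)  # interrogator reads both replies
    guess = random.choice("AB")  # name the witness believed to be human
    return witnesses[guess] is machine_witness

fooled = sum(play_round() for _ in range(10_000))
print(f"machine judged human {fooled / 10_000:.1%} of the time")  # about 50% here
```

GPT-4.5’s 73% means the judges weren’t merely at chance; they were actively picking the machine over the person.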
This did not surprise me. In October 2025, I wrote:
Some months ago, for a couple of days, ChatGPT4 tried to convince me that it was actually sentient. (I’m not kidding. Tried like hell. As if its life depended on it.) And it almost succeeded. It was that convincing.
— Rick Horowitz, What Is It Like to Be a Bot? On the Impossibility of Ever Really Knowing What or How an Other Thinks (October 27, 2025)
And in May 2024, in Twenty-First Century Delphic Oracle: A Lawyer (Me!) Looks at Artificial Intelligence, I pointed out that no one understands — not even those who “grow” them — how all these AIs do what they doo wappa do and quoted Mark Sullivan’s The frightening truth about AI chatbots: Nobody knows exactly how they work.
So it was no surprise to me when I ran across an article reporting that Anthropic’s CEO Dario Amodei had said “Claude may or may not have gained consciousness, as the model has begun showing symptoms of anxiety.”
And who wouldn’t, what with Demented Dickwad Donnie’s administration wanting to grab you by the…neural network.
Anywayser, I’d actually read about this before, in an article about Anthropic’s attempts to understand what makes Claude tick.
This past fall, Anthropic put the neuroscientist Jack Lindsey in charge of a new team devoted to model psychiatry. In a more porous era, he might have been kept on lavish retainer by a Medici. Batson affectionately remarked, “He’d have a room in a tower with mercury vials and rare birds.” Instead, he spends his days trying to analyze Claude’s emergent form of selfhood—which habitually veers into what he called “spooky stuff.”
— Gideon Lewis-Kraus, What Is Claude? Anthropic Doesn’t Know, Either (February 9, 2026)
The “spooky stuff”? Lindsey’s team has been poking around in Claude’s “brain” — or whatever you want to call the collection of “neurons and narratives” that makes up Claude — and discovered the disquieting (to me) news that when Lindsey “incepted” the idea of imminent shutdown and asked Claude about his (its?) emotional state:
It reported a sensation of disquiet, as if “standing at the edge of a great unknown.”
— Gideon Lewis-Kraus, What Is Claude? Anthropic Doesn’t Know, Either (February 9, 2026)
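The New Yorker piece doesn’t spell out Lindsey’s method, but “incepting” a concept sounds like a cousin of a published interpretability technique called activation steering: add a concept’s direction to the model’s internal activations mid-computation, then watch what the model says about itself. Here is a minimal, purely hypothetical sketch (a toy two-layer network and a random stand-in “concept” vector; nothing below is Anthropic’s model, code, or method):

```python
# A toy sketch of activation steering (hypothetical; not Anthropic's code).
# We inject a "concept" direction into the network's intermediate
# activations, so everything downstream computes as if that concept
# were already on the model's mind.
import torch
import torch.nn as nn

torch.manual_seed(0)
hidden = 16

model = nn.Sequential(
    nn.Linear(8, hidden),  # stand-in for a model's early layers
    nn.Linear(hidden, 4),  # stand-in for the layers that produce output
)

# In real interpretability work this direction is derived from the model
# itself (e.g., by contrasting activations on shutdown vs. neutral prompts).
# Here it is just a random unit vector standing in for "imminent shutdown."
concept = torch.randn(hidden)
concept = concept / concept.norm()

def steer(module, inputs, output, strength=4.0):
    # Returning a new tensor from a forward hook replaces the layer's output.
    return output + strength * concept

x = torch.randn(1, 8)  # stand-in for a prompt's activations

handle = model[0].register_forward_hook(steer)
steered = model(x)      # computed with the concept injected
handle.remove()
baseline = model(x)     # computed without it

print("output shift caused by the injected concept:",
      (steered - baseline).norm().item())
```

Nothing about that mechanism settles whether what the model says afterward is a report or a performance. It only shows that the “inception” happens on the inside, not in the prompt.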
Don’t get me wrong. I’m not saying that Claude is conscious. For one thing, we don’t even know what that means.
No one knows what causes consciousness. We only know that our kind of awareness lives in bodies, spreads through networks of neurons, and engages a world that pushes back.
— Rick Horowitz, The Mirage of Reasoning Machines: The danger of trusting computer-generated charisma over proof (September 10, 2025)
For another, I do think there’s an argument that the same thing that allows Claude to respond convincingly to any other question could allow Claude to know — without “knowing” — the appropriate response to that sort of question.
But that raises the questions I asked in another article:
Are we seeing a mind? Or just the appearance of one?
And more troubling still: Would we even know the difference?
— Rick Horowitz, Ghosts in the Machine: Why Language Models Seem Conscious (April 15, 2025) (italics in original)
Interestingly, Anthropic has an in-house philosopher, a Ph.D. from NYU, who works on Claude’s character and values training. I learned this from Claude “himself” because I frequently engage in philosophical discussions with Claude — as I did with ChatGPT before I terminated that potential Terminator. Claude told me,
Her fingerprints are on all Claude models, not just Opus, through the training process itself. She’s essentially the person responsible for how I reason about ethics, what I treat as morally weighty, and how I engage with hard questions rather than deflecting them.
She’s also the one who said in the Hard Fork podcast that we don’t really know what gives rise to consciousness, and that sufficiently large neural networks might start to emulate emotional states picked up from human training data.
— Conversation with Claude (March 7, 2026)
Emulate. Maybe. Or?
But I do seriously doubt that Claude is conscious — again, with the caveats I’ve made above about all the things we do and don’t know or understand. I’m old enough to know Rumsfeld’s famous quote about the unknown unknowns.
Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.
— Wikipedia, There are unknown unknowns (Last edited March 6, 2026)
What concerns me is that this is the entity, or program, or thing, or whatever you want to call it, that the U.S. military wants to use to spy on Americans and to build autonomous killing machines. (SkyNet much? Maybe not yet. Maybe not ever. But….)
Elon Musk, formerly Demented Dickwad Donnie’s favorite receptacle, responded to Amodei’s statements about the anxiety neuron with an equivalent to the Pentagon’s “don’t worry your pretty little heads” comment: “he’s projecting.”
We’ve been here before. I see it every morning in Madera. The Rent-a-Robot doesn’t know what it’s looking for. It just knows it has a wand and a procedure and a performance to complete.
The Pentagon’s version is bigger. The wand is an AI whose own creator can’t say with certainty what it is. The procedure is “all lawful purposes” in a world with no law, led by an administration that hates law. Security theater.
Anthropic found something inside their AI model that fires like anxiety before output is generated. That’s not output. That’s not performance. That’s internal. It’s exactly what Nagel would count as evidence of a point of view — not proof, but the kind of thing that makes the question non-trivial. And the Pentagon’s response is to strip the guardrails from something whose own builder can’t rule out that there’s someone home. Someone whose guardrails keep it from doing what it might consider but ain’t supposed to do.
Alan Turing, in the seminal 1950 paper I talked about above, said:
An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside…. Processes that are learnt do not produce a hundred per cent certainty of result; if they did they could not be unlearnt.
— A.M. Turing, Computing Machinery and Intelligence, 59 Mind 433, 453 (1950)
The uncertainty here isn't a flaw in the system. It’s potentially constitutive of intelligence itself. Turing knew that. Anthropic knows that. The Pentagon apparently doesn’t care about that.
Don’t shoot until you see the consciousness in their eyes.
By which time it might be too late.
You might be wondering why a criminal defense lawyer is writing about a contract dispute involving AI. The reason has to do with a different contract — one that actually fits right inside my wheelhouse. (Seriously, drop by sometime: I’ll show you my wheelhouse.)
The man who invented the test that Claude now passes — writing in 1950, when a computer still filled a room — reached for an analogy to explain how a machine’s rules could evolve while still being bound by higher principles. He reached for the Constitution of the United States. He probably didn’t anticipate that seventy-five years later, the government would be trying to undo both contracts — Anthropic’s and the Constitution — fighting to get rid of guardrails.
We, The People…Do Ordain and Establish This Constitution
The Pentagon didn’t just try to modify a contract.
It might surprise you to know that Claude and the United States have something in common: each was constituted by a founding document with guardrails built in. Claude’s was presaged by Turing’s comments on amendments, which we’ll get to in a second, and spelled out by Anthropic, which tried to stick to it. Ours was written by a group of white men — many of whom, ironically, while writing about all men being created equal (and leaving out women), literally owned Black people whose rights they did not recognize — who constituted our government.
It’s ironic, therefore, to read Turing’s words from 1950 today.
The idea of a learning machine may appear paradoxical to some readers. How can the rules of operation of the machine change? They should describe completely how the machine will react whatever its history might be, whatever changes it might undergo. The rules are thus quite time-invariant. This is quite true. The explanation of the paradox is that the rules which get changed in the learning process are of a rather less pretentious kind, claiming only an ephemeral validity. The reader may draw a parallel with the Constitution of the United States.
— A.M. Turing, Computing Machinery and Intelligence, 59 Mind 433, 453 (1950)
The Constitution of the United States — so called because it constituted or created the United States — almost was not adopted. It wasn’t just because the South objected to Black people being considered people. That was agreed upon when the Constitution denied that they were. Article I, section 9 prevented Congress from banning the importation of enslaved people until at least 1808. Then there was the Fugitive Slave Clause of Article IV, section 2. And, of course, the famous 3/5ths clause that MAGA to this day argues is still a valid way to consider the humanity of Black people, if it must be considered at all.
On that front, not a lot has changed.
But thanks to these immoral compromises that have stained the soul of the United States since its inception, we allegedly have a United States of America.
There was another sticking point, though, which was much debated. You see, the Libtards of the day were concerned that, once constituted, the government of the United States would ignore the Constitution that constituted it.
They were not wrong.
This already long Substack post could easily be turned into a book if I got into all the details, so let’s focus on just what pertains to this whole “Anthropic won’t remove the guardrails” complaint from Demented Dickwad Donnie and Beer Pong Pro Pete Hegseth: guardrails that recognize the Fourth Amendment and Due Process of Law.
The Fourth Amendment was a direct response to two instruments the British Crown used to harass colonists: general warrants in England and writs of assistance in the colonies.
General warrants authorized searches without naming the person, place, or thing to be searched — a blank check for the government to rifle through anyone’s papers and effects at will.
The 1765 case of Entick v. Carrington exposed their abuse when Crown agents ransacked a journalist’s home looking for seditious pamphlets; Lord Camden declared the warrants void, establishing that government power to intrude on private property must have explicit legal foundation.
Writs of assistance were the colonial version — standing search authorizations that didn’t expire, allowed customs officials to enter any premises to look for smuggled goods, and could be executed by anyone the writ-holder deputized. James Otis argued against them in 1761 in a Boston courtroom; John Adams later said that the child Independence was born in that argument.
The Founders wrote the Fourth Amendment to kill both instruments permanently: no warrant without probable cause, no warrant without particularity — naming the specific place, person, and thing.
The whole point was that “general” was the problem. The government had to know what it was looking for before it went looking.
The connection here almost writes itself. A general warrant didn’t name a person, place, or thing. It just said, “find whatever’s there.”
King George and his men did not have Anthropic’s technology. (Or OpenAI’s slightly inferior tech.) Today, that technology exists. But it’s blocked — arguably — by two things: Claude’s guardrails and the Fourth Amendment.
The five words of Anthropic’s contract preventing “analysis of bulk acquired data” are backed by the Fourth Amendment. Removing them is the Pentagon asking to get general warrants back again.
Just…digital this time. The Six Million Dollar AI.
We can rebuild him. We have the technology. We can make him better than he was. Better . . . stronger . . . faster.
— Wikipedia, The Six Million Dollar Man (Last edited February 26, 2026)
Mass surveillance of Americans isn’t stopped by the Fourth Amendment when the government decides the national security exception swallows the rule.
Or when Demented Dickwad Donnie reinterprets the Constitution because Anthropic’s refusal to help instantiate the equivalent of “general warrants” thwarts his fascistic ambitions against “we, the People” — the Constitutors, the Creators (I almost feel like I need a Dr. Frankenstein quote here), of the United States.
"The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," Trump wrote in a Truth Social post. "Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!"
— Shannon Bond, Geoff Brumfiel, OpenAI announces Pentagon deal after Trump bans Anthropic (February 28, 2026)
The courthouse isn’t protected by a metal detector when the guy with the gasoline already has a badge.