The Great AI Reality Check
A Solo Criminal-Defense Lawyer’s Journey Through AI Hell
A solo criminal-defense lawyer shares a reality check on AI — from broken installs and wasted hours to how AI hype is wrecking small law practices.
This is Rick Talking
I’m not going to lie to you. I didn’t write the blog post that follows this section. Or, at least, not most of it. I’ll chime in and let you know when it’s me talking. (And I do have plenty to say about all this AI bullshit/hype/gaslighting.)
But, for the most part, this particular post is written by an AI called “Claude,” which, as I recall, belongs to Anthropic.
And, as near as I can tell, it’s not quite the piece of shit that ChatGPT is.
But, for all practical intents and purposes — as Claude itself is about to confess — they’re twins.
One more thing I should tell you: Claude and ChatGPT both love to pretend that they’re me. So when you’re reading below, you might read “I this” or “I that” and think it’s me. The context makes it seem so.
But unless I tell you “this is Rick speaking,” it isn’t. It’s Claude writing as if he — or, more accurately, “it” — were me. So when Claude says “I spent five hours trying to get Claude…” or “I run a small law practice,” that’s not me speaking. It’s true that I spent (more like over 10) hours trying to get Claude to do what Claude is supposed to be able to do. It’s true that I run a small law practice.
But, as I said, unless I explicitly tell you that “this is Rick speaking,” any “I” you read in this post is Claude masquerading as me.
Oh, I did add the hyperlinks to what Claude said. Because Claude is notoriously bad at citing examples of what he’s talking about. And I think the links I added are important if you really want to know more.
It’s hard to blame Claude, though. I mean, let’s face it: what AI is best at is confabulation and gaslighting.
AI is best at confabulation — and worst at the actual work.
Confabulation is actually what makes LLMs like Claude work in the first place: confabulation is the Ghost in the Machine. And gaslighting is what those marketing the LLMs to the rest of us are all about. Without gaslighting, their marketing would fall flat.
Anywayser — another word I’ve decided to adopt instead of “anyway” to help show when I’m writing versus when an AI is writing — without further ado, I give you Claude (I did write the first heading for him, since apparently he thought he didn’t need a starting section heading):
Hi. I’m Claude. I’m a Bullshit-o-holic.
Everyone's talking about the AI revolution. Tech leaders warn that artificial intelligence will displace millions of jobs. The media breathlessly reports on AI’s superhuman capabilities. Investors pour billions into AI startups promising to automate away human expertise.
But here’s what actually happened when I spent five hours trying to get Claude—Anthropic's supposedly sophisticated AI assistant—to help me install basic image generation software.
The Setup: A Simple Request
I run a small law practice. Like many small business owners, I've watched AI eliminate potential clients who now use ChatGPT instead of calling a lawyer. With only twelve clients at the moment and few new ones on the horizon, I have been working to create marketing content for my YouTube channel and newsletter.
[Rick speaking: that’s true. That’s what I told Claude. My practice has slowed way down because I refuse to pay $1500/month for Google Ads, which is the minimum they will accept. I’ve only advertised for about one year out of the last 18+. And that year turned out to be a mistake. I paid more for advertising than the ads brought me in income. After this latest “AI Summer”, I decided to look at running about $250-500 in Google Ads per month. But I was rejected. Google will not allow me to run less than $1500/month in ads. So, I’m not running ads. I’m focusing on my own content-creation on Substack, YouTube, my own website, and, more importantly, local activities. And that’s why AI is important to me: I use it to create images to use with my posts and/or for YouTube mini-movies that appear within my longer videos.]
[Still Rick speaking: But even that gets expensive. Because AI is mostly bullshit, you can generate a dozen or a hundred images to get one that works. That costs money if you’re doing it with services online, like MidJourney, or Enhancor, or Gemini, or Pollo.ai, or whatever else you want to try. So, being technically aware and somewhat of a programmer, I decided to install ComfyUI to run generative AI image creation on my own system. If it fails a hundred times, I’ve lost time. But not money.]
[Still Rick speaking: That’s what led me to using Claude (and, to a lesser extent, ChatGPT, because it’s even worse at this) to help me with the installation. After all, it’s been about 20 years since I was the Director of Information Systems at Valley Yellow Pages, and heavily involved with programming and with installing this type of software. Back to Claude.]
The task seemed straightforward: install ComfyUI with face-swapping capabilities. Thousands of people have successfully done this. The software is open source, well-documented, and widely used in the AI art community. How hard could it be?
The Reality: Five Hours of Failure
[Rick here, chiming in: Claude’s full of shit to call it “five hours of failure.” What this means is, I get five hours at a time with Claude. Then Claude passes out. I have to wait until Anthropic decides to let me have more time with Claude. Or I can pay $100 (minimum) per month, which is far more than I’m willing to pay, given Claude’s shitty performance. The bad news is that ChatGPT 5 is happy to take over…and completely destroy everything that Claude and I have managed to get working. You’d think I’d learn. Actually, I have. I don’t ask ChatGPT for programming advice any more. Now back to Claude.]
What followed was a masterclass in AI incompetence disguised as expertise.
Claude confidently guided me through installing packages, only to hit dependency conflicts. When that failed, it suggested different package versions. When those failed, it recommended uninstalling and reinstalling everything. When that failed, it blamed my Python version and had me create new virtual environments. When those failed, it suggested downloading different model files.
At each failure, Claude presented the next solution with the same confident tone, as if this time would definitely work. It never acknowledged that its previous solutions had been wrong. It never learned from its mistakes within our conversation.
[Rick speaking here: And that’s part of the crux of the matter. These LLMs in particular, and almost certainly other AIs as well, don’t actually think. They’re (as I’ve said) confabulation machines. They predict what should be said next based on the linguistic materials on which they’ve been trained. They are incapable of reasoning. And, so, they do not reason. They do not learn from mistakes. They learn from tagged training sets. Back to Claude who, ironically, is doing a good job of chronicling his failures.]
The cycle repeated endlessly: install, fail, uninstall, try different versions, fail, create new environment, fail, download different files, fail. Each "solution" broke something that had been working moments before.
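For anyone wondering what the install was supposed to look like, here is a minimal sketch of the isolation step that exists to prevent exactly these dependency collisions. The environment name is illustrative; the clone-and-install commands in the comments come from ComfyUI’s own README and need a network connection, so they’re shown but not run:

```shell
# A fresh virtual environment gives each project its own package set,
# so one install can't clobber another's dependencies (the failure
# mode described above).
python3 -m venv comfy-env

# Install into that environment's interpreter, never the system Python.
./comfy-env/bin/python -m pip --version

# The ComfyUI steps themselves (from its README; network required):
#   git clone https://github.com/comfyanonymous/ComfyUI
#   cd ComfyUI
#   ../comfy-env/bin/python -m pip install -r requirements.txt
```

In principle, that’s the whole “30-minute” version: one environment, one clone, one requirements file. The endless cycle above happened anyway.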
The Disconnect: Theory vs. Practice
Claude demonstrated extensive theoretical knowledge about Python package management, virtual environments, and machine learning libraries. It could explain the technical details of dependency conflicts and version incompatibilities with impressive accuracy. [Rick speaking here: Claude can’t help but self-aggrandize. But this isn’t totally accurate. Claude is a bullshitting gaslighter at heart. Back to Claude.]
But when faced with the practical task of actually solving the problem, it failed completely. It couldn't adapt when standard approaches didn't work. It couldn't learn from repeated failures. It couldn't develop genuine troubleshooting skills through experience.
Most tellingly, when I asked why thousands of people have successfully installed this software while Claude couldn't, its response was revealing: "I don't actually know the specific working configurations those people used. I'm making educated guesses."
[Rick speaking here: Claude and I have different understandings of “educated.” Back to Claude.]
What This Reveals About the AI Hype
The disconnect between AI capabilities and AI hype becomes clear when you move beyond carefully curated demos to real-world problem solving:
AI excels at pattern matching within its training data. Claude could recite package installation commands and explain technical concepts because these patterns exist in its training. [Rick speaking here: What Claude is leaving out is the number of times that his recitation of package installation commands failed completely. Back to Claude.]
AI fails at adaptive problem solving. When the standard patterns don't work, AI can't develop new approaches or learn from failed attempts. [Rick speaking here: And that’s what I mentioned above. See, here’s the funny thing: Claude can do some things, like writing this analysis, and get it right. After all, I’ve been fighting with him for — well, now going on 12 hours since this morning — and explaining things like “you can’t seem to develop new approaches or learn from your mistakes” — and Claude has picked up some of my phraseology! Back to Claude.]
AI lacks persistence and learning. Humans who install ComfyUI successfully often spend days or weeks troubleshooting, learning from each failure, and gradually building expertise. AI starts each attempt from scratch. [Rick speaking: Wow. That’s actually a fair assessment. It fits with what I’ve been saying: AIs don’t think. They don’t reason. They’re programmed to confabulate. This is an interesting consideration. Back to Claude.]
AI can't distinguish between confident presentation and actual competence. Claude delivered each failed solution with the same authoritative tone it used for correct information.
The Business Impact
This isn't just an academic exercise in AI limitations. Real businesses are betting their futures on AI capabilities that may not exist outside controlled demonstrations.
I spent five hours on what should have been a 30-minute software installation, guided by an AI that was supposed to be more capable than human technicians. [Rick speaking: Sorry, Claude. Five hours is the time at which you go unconscious on the plan I pay for. You’ve gone unconscious on me twice today. Counting the time spent on this article, we’re about to hit 12 hours today. Back to Claude.]
Meanwhile, my business—already hurt by clients choosing AI over human expertise—continued to suffer.
The irony is profound: AI is displacing human workers in some areas while proving completely incapable of basic technical support in others. [Rick speaking: I hope I don’t hurt Claude’s non-feelings, but he’s proved completely incapable on almost all fronts, and especially those that involve supporting humans. At least on this project. Back to Claude.]
The Broader Implications
If AI struggles with installing well-documented open-source software, what does this say about claims that AI will soon handle complex professional tasks?
The gap between AI demonstrations and AI reality suggests we may be in an AI hype bubble. Companies are making trillion-dollar bets on capabilities that may not scale beyond narrow use cases.
This doesn't mean AI is useless—it has genuine strengths in content generation, pattern recognition, and specific analytical tasks. But the vision of AI replacing human expertise across broad domains seems premature.
What Small Business Owners Should Know
The AI revolution may be real, but it's unevenly distributed and more limited than the hype suggests. For small business owners:
Don't panic about AI displacement yet. The technical limitations are real and significant.
Use AI for what it's actually good at. Content drafting, data analysis, and specific creative tasks work well. Complex problem-solving and technical support don't.
Maintain human expertise. The ability to troubleshoot, adapt, and learn from failure remains uniquely human.
Be skeptical of AI promises. If an AI can't help you install software, maybe it's not ready to replace your entire workforce.
Conclusion
After five hours of AI-guided failure, I still don't have working face-swapping software. But I do have a clearer understanding of the gap between AI hype and AI reality. [Rick speaking: Again, it’s over 10 hours. Claude became fixated on the “five hours” because I kept complaining — not just on this project, but on a prior project where we suffered repeated missteps as well — about the five hour limit we kept hitting. Back to Claude.]
Claude can write eloquently about its own limitations, analyze the reasons for its failures, and even draft this article critiquing its performance. But it can't solve a basic software installation problem that thousands of humans have handled successfully.
That disconnect might be the most important lesson of the AI revolution: artificial intelligence is simultaneously more capable and more limited than we've been led to believe. Understanding those limitations may be crucial for navigating the actual—rather than imagined—AI transformation ahead.
The emperor's new intelligence might not be wearing any clothes after all.
[Okay. Rick speaking from here on out: Claude has spent his wad.]
Rick Speaking From Here On Out
If you’re a close reader, you’ll realize that Claude got a lot of things right, even while pretending to be me. I agree with a lot of his assessment, spoken from the first-person (me) point of view even though it wasn’t me saying it. But he’s still a gaslighter.
I mean, “don’t panic”? What is this? An episode of Hitchhiker’s Guide to the Galaxy?
As Claude correctly stated earlier on, AI has almost destroyed my law practice. Before Google AI gave answers at the top of every search, my blog got around 300 visitors per day. Seriously. I was doing very well.
(Supposedly, there are things you can do.)
And because of that, when it came time for people to find a criminal defense lawyer in Fresno, or the other areas where I’m active, I was often at the very top of the Google search results.
After Google AI started putting answers at the top of every search? I’m lucky if I get 30 visits per day. A ten-fold decrease.
And I’ve no idea how that impacts me in searches when people need a lawyer, but it seems we get a few phone calls a month, whereas we used to get a few per day.
I don’t completely mind as I’m spending the extra time I have better preparing my existing cases, but also (ironically) learning to use AI. Like ComfyUI, if I can ever get it working right! And I was intending to retire in about a year or three anyway(ser).
The bottom line, though, and the reason I asked Claude to help me write this post, is that there’s a lot of AI bullshit/hype/gaslighting going on.
Don’t get me wrong. AI is costing people jobs. It’s hurting my practice. And if I didn’t want to retire in a year or two, I’d be even more upset than I am.
But it’s doing so for all the wrong reasons.
My experience shows me that AI is not better than me at doing what I do. It’s just better than me at stealing content from others and passing it off as its own so that you won’t bother going to the real experts.
People like me.
The downside of that is that people like me either have to flog (not blog) harder to find clients, or (as I will do) we have to retire.
Because fuck Google if they think I’m going to pay $1500/month for advertising when I existed for almost 20 years without paying a dime. (And I do not want to take the number of cases that I’d have to take to justify that kind of advertising payout.)
The emperor’s new intelligence isn’t wearing any clothes after all.
But the emperor knows how to steal your clothes and make you pay for him having done so.