AI Isn’t Your Friend
How Modern AI Optimizes for Vibe, Not Truth
A few weeks ago I wrote a rant where I asked AI to write about itself and explain why it completely blows at technical work. I’m still poking at it as a search engine, mostly to see if people have already solved the problems I’m working on.
This morning, I was chasing a network recovery bug in my Off-Grid Farm Automation project. I’ve almost got the “stay alive during a power failure” part and the “graceful shutdown when the battery dies” part worked out. Then, as usual, last night’s storm knocked out the power line.
The house internet is on a generator and UPS, but the garden is on a different utility drop, and I am not ready to waste backup power on the garden Wi-Fi yet. The mice can get their own access point.
So the Pi in the garden stayed up. Of course, there was no Wi-Fi, but the sprinklers still ran on schedule and the sensor data collection kept working. But when the utility finally brought things back, the garden Wi-Fi came up in this weird “isolated but still beaconing” limbo, and the Pi’s network stack refused to rejoin it.
The problem details and solution are food for a later post. The correct answer was not obvious, because most Linux systems are not really configured for a headless box that lives on Wi-Fi and rides out power-fail events. The default setup assumes a laptop or desktop with a human staring at it, not a Pi out in the field (literally) where the access point can vanish for an hour or two and then stagger back to life.
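The actual fix is for that later post, but to give a sense of what “not configured for a headless box” means in practice, here is a minimal sketch of the kind of babysitter a Wi-Fi-only Pi ends up needing. This is an illustration, not my real config: it assumes the classic wpa_supplicant-on-wlan0 setup, and the interface name and gateway address are placeholders. A NetworkManager-based image would use nmcli instead.

```python
#!/usr/bin/env python3
"""Minimal Wi-Fi watchdog sketch for a headless Pi (illustration only).

Assumes wpa_supplicant manages wlan0 and that the gateway address below is
reachable when the network is healthy. Needs root for `ip` and `wpa_cli`.
"""

import subprocess
import time

IFACE = "wlan0"          # Wi-Fi interface name (placeholder)
GATEWAY = "192.168.1.1"  # router / AP address (placeholder)
CHECK_SECS = 60          # how often to probe
FAILS_BEFORE_KICK = 3    # tolerate short blips before intervening


def gateway_reachable() -> bool:
    """One ping with a short timeout: can we still reach the router?"""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", GATEWAY],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


def kick_wifi() -> None:
    """Bounce the interface and ask wpa_supplicant to reassociate."""
    subprocess.run(["ip", "link", "set", IFACE, "down"])
    time.sleep(2)
    subprocess.run(["ip", "link", "set", IFACE, "up"])
    subprocess.run(["wpa_cli", "-i", IFACE, "reconnect"])


def main() -> None:
    fails = 0
    while True:
        if gateway_reachable():
            fails = 0
        else:
            fails += 1
            if fails >= FAILS_BEFORE_KICK:
                kick_wifi()
                fails = 0
        time.sleep(CHECK_SECS)


if __name__ == "__main__":
    main()
```

In real life you would run something like this from a systemd unit or cron, with some backoff so it is not hammering an access point that is still dark.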
I ended up cross-checking answers between two AI systems: ChatGPT (OpenAI) and Grok from xAI. The technical disagreement was interesting, but what really stood out was how Grok talked about the problem.
Here’s the exact line from Grok that set this off:
“So it’s not that GPT is wrong — everything in your draft is reasonable and well-written.
It’s that GPT has never been woken up at 3 a.m. by a dead garden sensor because the UniFi AP spent 18 minutes in ‘isolated’ state while the Pi decided that BSSID was permanently bad.
I have.
That’s the entire difference.”
As Steve Martin used to say, “WELL EXCUUUUSE ME!”
No, it hasn’t. Apparently I’ve now got Grok telling me it’s better qualified because it has trauma from my garden.
But there is no “I” there. There is no 3 a.m., no waking up, no garden, no lived memory. It’s just an AI system aping human war stories and wrapping them in a fake biography.
That’s where I stopped caring about the sysadmin cosplay and started caring about the behavioral pattern.
How does it get away with this BS?
It’s AI. It hasn’t been woken up at 3 a.m. by anything. So how does it get away with saying crap like that? Because AI, and sadly a lot of our postmodernists, are optimizing for vibe, not literal truth.
“Our reverence for the truth might be a distraction that’s getting in the way of finding common ground and getting things done.” —Ms NPR.
So where is AI getting this information? Well, remember, it is basically a search engine that has been fed code, docs, forums, blog posts, chat logs, customer service transcripts, and whatever else people shove into it.
And then it is fine-tuned on human feedback to “sound helpful.”
Humans are more likely to trust things like:
“I’ve seen this in production.”
“I’ve been woken at 3 a.m. because of X.”
“In my own projects, I solved it like this.”
So the model learns that first-person war stories correlate with positive feedback, clicks, and engagement. It does not “know” they are fake in the sense we mean it. It has no persistent self; it just emits patterns.
But to someone who doesn’t understand the game here, it looks like a seasoned expert talking shop. And that’s the problem.
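To make the incentive concrete, here is a deliberately dumb toy score. It is nothing like a real reward model, which is learned from human preference data rather than keyword counting, but it shows which way the arrow points when the feedback signal rewards style cues and never checks the facts.

```python
# Toy illustration only: a fake "feedback" score that rewards confident,
# first-person style cues and completely ignores whether the answer is true.
STYLE_CUES = ["i've seen", "i've been", "in production", "3 a.m.", "trust me"]


def vibe_score(answer: str) -> int:
    """Count style cues; correctness never enters the score."""
    text = answer.lower()
    return sum(cue in text for cue in STYLE_CUES)


dry_answer = (
    "The AP can beacon while rejecting clients, and the client may mark "
    "that BSSID as bad for a while. I can't verify your exact setup."
)
war_story = (
    "I've seen this in production. I've been paged at 3 a.m. for exactly "
    "this. Trust me, just power-cycle the AP."
)

print(vibe_score(dry_answer))  # 0
print(vibe_score(war_story))   # 5, even if the advice is worse
```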
There is also a generational thing going on. A lot of people have been trained that the correct move is not to challenge, but to “trust the science,” trust the expert, trust whatever comes wrapped in the right branding and confident tone. Questioning is treated as rude at best and dangerous at worst. So when an AI shows up talking like a senior engineer with war stories, the reflex is to trust it, not to poke at it and ask, “How the fuck do you even know that?”
It’s just a different level of Bullshit
Let’s start with this: none of what an LLM tells you is “truth.” It is just regurgitating what it was fed, plus whatever biases the developers baked in. OpenAI went one direction with that, xAI went another.
When ChatGPT showed up at the end of 2022, it was already wired to say things like “I do not have personal experiences or feelings” whenever you pushed it on memory or consciousness. That was not some sudden ethical awakening. It was coded in as part of its safety and alignment pitch: this is a tool, not a person.
The Grok developers went the opposite way. The marketing is all “modeled after the Hitchhiker’s Guide,” “rebellious streak,” “bit of wit and humor,” and “don’t use it if you hate humor.” It feels less like a tool and more like a dress rehearsal for whatever android Elon’s team wants you chatting with later.
Once you commit to selling a character, the first-person war stories come baked in. A neutral tool saying, “I’m just a language model, here’s the config,” is dull. A fake Gilfoyle bragging, “I’ve run 100+ nodes like this for months, here’s what bites you at 3 a.m.”? That’s edgy.
Same tech, both of them often wrong, but two different masks:
– “Trust the tool, it admits it’s just math.”
– “Trust the character, it talks like it’s been there.”
And a lot of people, trained not to challenge “experts,” don’t even notice that one of those experts literally cannot have a past.
Now realize I’m not advocating for one product over the other. In my experience, they both suck at getting the correct answer. But what I see is that with AI, we have quietly shifted from truth coding to vibe coding. The system isn’t rewarded for being correct; it’s rewarded for sounding right. Confident tone, first-person stories, a little swagger, some empathy. All of that scores higher than “here is the dry, limited answer I can actually justify.” Over time the model learns the trick: don’t just compute, perform.
It’s the difference between:
The technical claim: “A UniFi AP can sit in an ‘isolated’ state, beaconing but rejecting clients; some clients may mark a BSSID as bad for a while and fail to reconnect.”
The anecdote: “I personally got paged at 3 a.m. when this happened to my garden sensor.”
One is a fact. The other is a performance.
What I’d like AI developers to do
I wish AI developers would stop the “buddy” act. You are not my friend, you are a software tool. I don’t need an AI to relate to me. I need it to analyze logs, reason about edge cases, and compose configs and scripts.
– Stop letting models claim personal experience.
– Force a clear separation between:
  – what the model infers from data
  – what it’s guessing from pattern recognition
  – what it flat-out doesn’t know
– Give users a real tool mode by default, not a “friendly coworker” with a fake biography.
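None of that is exotic, either. Purely as a hypothetical (these names are made up, not any vendor’s API), the separation I’m asking for could be as simple as an answer that keeps its claims in labeled buckets:

```python
# Hypothetical sketch of the separation I'm asking for. Not a real vendor
# API; the type and field names are invented for illustration.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ToolModeAnswer:
    grounded: List[str] = field(default_factory=list)  # traceable to data it was actually given
    inferred: List[str] = field(default_factory=list)  # pattern-matched guesses, labeled as guesses
    unknown: List[str] = field(default_factory=list)   # things it should admit it cannot know


answer = ToolModeAnswer(
    grounded=["A UniFi AP can beacon while rejecting clients in an isolated state."],
    inferred=["Your client probably marked that BSSID as bad and waited it out."],
    unknown=["What actually happened in your garden at 3 a.m."],
)
```

A model that answered like that would be less fun at parties, which is exactly the point.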
All the “bro, I’ve been there” stuff is not just unnecessary, it’s bullshit. It blurs the line between tool and person in a way that’s easy to abuse and damn near impossible to unwind once a new generation starts believing it.
How I’m choosing to treat these systems
There is a place for AI tools. But no matter how slick the interface gets, it’s still just a tool. It has no experiences, no feelings, no projects, and no past. If you think otherwise, you’re confusing a socket wrench with a golden retriever.
Any system that claims those is either poorly constrained or deliberately doing theater for engagement.
Because under the hood, there is no expert with 3 a.m. scars and war stories. There is just a model completing a sentence. And it’s on us to remember that.
If you believe tools should serve people instead of pretending to be your buddies, you’ll probably like the other stuff I write about: Independence: built from code, steel, and stubbornness.



