Your Chatbot Knows Your Name, But Not Your Pain
The muscle in my jaw is doing that thing again. A tiny, frantic drumbeat just below the ear. It always starts when Alex says the words.
“I completely understand your frustration.”
Alex is not a person. Alex is a pulsing green circle in the bottom right of a billing portal, a window designed with aggressively cheerful rounded corners. This is the third time Alex has offered this exact sequence of words. My package is lost, the tracking number is a ghost, and Alex, the disembodied empathy machine, has just suggested for the third time that I check the tracking number. The one that doesn’t work. The one that started this whole thing.
This is the uncanny valley, but for conversation. We all know the visual version: the CGI human that’s 97% perfect but the eyes are dead, or the android that moves just a little too smoothly, triggering a primal revulsion in our lizard brain. We’re now living in the interactive version. A language model can write a perfect email, but it can’t grasp the subtext of a weary sigh. It can apologize, but it can’t feel the weight of the mistake. It’s close enough to human to be familiar, but far enough away to be deeply, fundamentally wrong.
I used to think the answer was just better technology. For years, I argued that with enough data and processing power, we could code our way out of this awkward phase. We could make an AI that truly *gets* it. I was wrong. I once consulted on a project to deploy a “next-gen” support bot for a small e-commerce platform. I championed it, writing memos about efficiency and 24/7 availability. In the first week, we received 233 unique complaints, not about the products, but about the bot itself. One user wrote, “I’d rather wait 3 days for an email from a real person than spend 3 minutes being placated by your soulless machine.”
[Chart: Customer Feedback on AI Bots]
I thought I was solving for scale. I was actually manufacturing frustration.
The Illusion of Nourishment
My friend Hans S.-J. is a food stylist. It’s his job to make the burger on the billboard look like the Platonic ideal of a burger. He once told me he spent 43 minutes arranging a single sesame seed on a bun because its angle didn’t feel “spontaneous” enough. The final photo is stunning. You want to eat it. But if you were on set, you’d see the wilting lettuce being propped up with pins, the ice cream that’s actually colored lard, the patty that’s been painted with shoe polish to give it a perfect char. Hans creates an illusion of nourishment. It’s a beautiful lie.
That’s what these chatbots are. They are conversational lard. They are algorithmically placed sesame seeds. They look like conversation, they sound like empathy, but they offer no actual sustenance. And when you’re genuinely stuck, genuinely frustrated, being served a beautiful lie is infuriating.
It’s a crisis of trust.
We accept automation for tasks, not for relationships. We’re fine with a machine sorting packages in a warehouse, but when one of those packages goes missing, we need a person. We need accountability. We need to know that our problem is being heard by a consciousness that can not only process the data but also comprehend the context, the anxiety, the simple human messiness of it all. This is especially true when real stakes are involved. When your access, your security, or your enjoyment depends on a swift and competent resolution, you need to know there’s a human brain at the end of the line. You need a reliable path to real help, not a simulation of it.
It’s why people still seek out a direct line to Gobephones when they need assurance: trust is built on the knowledge that a real team is managing the integrity of the system, not just an algorithm designed to manage your mood.
There’s a strange contradiction here I’ve come to accept. I now advocate for *less* advanced AI in customer-facing roles. I tell companies to make their bots dumber. Make them more honest. A bot that says, “I am a simple bot. I can do three things: track a package, process a return, or connect you to a human. Please choose one,” is infinitely more helpful and less rage-inducing than one that says, “Hello! I’m a virtual assistant here to help with all your needs. How are you feeling today?” One is a tool, the other is a deception.
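The "dumber, honest bot" above is less a technology than a design stance: a fixed menu, plainly stated limits, and escalation to a human as the default failure mode. A minimal sketch of that stance might look like this (the menu options and function names are hypothetical illustrations, not any vendor's API):

```python
# A sketch of the "honest bot": a fixed menu, no pretense of empathy.
# Option labels and handler names are illustrative assumptions.

MENU = {
    "1": "track a package",
    "2": "process a return",
    "3": "connect you to a human",
}

def greet() -> str:
    """State up front exactly what the bot can and cannot do."""
    options = ", ".join(f"[{k}] {v}" for k, v in MENU.items())
    return f"I am a simple bot. I can do three things: {options}. Please choose one."

def respond(choice: str) -> str:
    """Route a menu choice; anything unrecognized escalates instead of improvising."""
    if choice not in MENU:
        # The honest failure mode: no simulated understanding, just a handoff.
        return "I don't understand that. Connecting you to a human."
    return f"Okay, I will {MENU[choice]}."
```

The design choice is the point: the bot never claims to "understand your frustration," and any input outside its three capabilities triggers the handoff rather than another round of placation.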
We’ve become obsessed with automating the human element because it’s expensive and inefficient. But some things are supposed to be inefficient. Empathy is inefficient. Understanding nuance is inefficient. Building trust is the most gloriously inefficient process of all. A human might take 13 minutes to solve a problem a perfect AI could solve in 3, but during those 13 minutes, they can also de-escalate, reassure, and make a customer feel valued, not just processed.
[Timeline: AI Automation in Business — focus on efficiency starts; Customer Backlash — complaints about ‘soulless machines’ rise]
I think back to pretending to be asleep when my kid came in at 3 AM asking for water. I was exhausted, and for a moment, automation felt like a great idea: if I just lay there, perfectly still, maybe the problem would solve itself. It didn’t. He just stood there, waiting. Eventually, I got up, got the water, and handled the ridiculously inefficient task of reassuring a small child that there were no monsters. It was slow, and illogical, and absolutely necessary.
We are trying to build systems that let us pretend to be asleep. Systems that handle the messy, human parts of business so we don’t have to. But our customers, like my son in the dark, can tell. They know we’re there. And they can tell when we’re faking it.
