Have humans already replaced AI?
Before we start: this article is adapted from my first Fork it! podcast in French, titled “Summer Chronicles”.
And yes, this is a chronicle with a strong humor angle. Don’t take everything in it at face value.

Everyone is losing their minds over AI replacing humans. Will robots take our jobs? Will ChatGPT make us obsolete? Will Skynet finally rise? Wrong questions. The real question nobody wants to ask: Have humans already replaced AI?
I’m not talking about some Black Mirror scenario. I’m talking about something way more disturbing, something I’ve watched for years in offices, meetings, and projects. Some colleagues, despite their fancy diplomas and apparent competence, operate exactly like AI systems. They follow patterns without understanding them. They produce outputs but have no idea what the output actually means. And they fake intelligence so well that nobody questions it, sometimes not even themselves.
The journey from the streets to the boardroom
I’m a kid from the ghetto. Death or prison seemed like the only exits. Four walls, four boards, as Kery James would say (a legendary French rapper; the line comes from a song that was hugely popular in France in my generation).
No connections to the business world, just raw reality where you hope to escape. Fast forward through college, years of study, a future within reach. Then this communication teacher whose name I forgot but whose words I remember:
“Don’t think that business people are professionals. If you knew how many people have no idea what they’re doing.”
Faking competence
That sentence stayed with me. First internship, first real professional experience, first shock. The guy was right. But I was an intern. I didn’t understand everything either. Maybe that’s just how business works? Pretending to understand? Like in the hood where you pretend to be stronger than bullets?
Then came my awakening. I met Ivan, a true genius. Strange guy who understands everything fast, talks faster, makes fun of me, and predicts the future. He taught me something crucial:
Understand what you are doing, why you are doing it, not just execute.
All the problems he warned me about happened, every single one. That’s when I created my model. Simple and binary. The world is divided into two categories: the competent and the idiots. The competent have the will to be good. The idiots don’t even try. It worked 99% of the time.
Then came the anomaly: someone with visible force, awareness of the situation, but zero results. The guy was competent, or at least he seemed to be. Did my model break?
But in June 2023, I read Blindsight by Peter Watts. Science fiction, my favorite genre. The book introduced me to something called the Chinese Room. Everything clicked. The core take is simple: intelligence can exist without consciousness. Then the reverse question hits: can consciousness exist without intelligence?
The Chinese Room and your colleague
John Searle invented this thought experiment in 1980. Imagine someone locked in a room with a rulebook for responding to Chinese sentences. The person doesn’t speak Chinese, but the rulebook is perfect. Someone outside sends questions. The person inside follows the rules, sends back Chinese answers. From outside, it looks like someone who speaks Chinese. But the person inside understands nothing. Zero. They just match patterns.
Picture source: https://cerveauxetrobots.fr/chatgpt-chambre-chinoise/
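To make the mechanism concrete, here is a toy sketch of the room in Python. The "person" inside is nothing but a dictionary lookup; the rulebook entries and phrases are invented for illustration, not from Searle’s paper.

```python
# A toy Chinese Room: the "person" inside is a dict lookup.
# The rulebook entries below are made up for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "天气很好。",    # "How's the weather?" -> "The weather is nice."
}

def person_in_the_room(question: str) -> str:
    # Pure pattern matching: no translation, no comprehension,
    # just "when you see this symbol string, emit that one".
    return RULEBOOK.get(question, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."
```

From outside, the answers look fluent. Inside, there is no Chinese speaker at all, only a table and a rule for using it.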
Now think about ChatGPT, then remove the screen and put a human in front of you. Same mechanism, different packaging: a plausible answer, delivered with confidence, without real grasp of what is being said. I’ve seen people forward AI-generated answers in a thread, add one sentence on top, and send the whole thing as if judgment had happened. It’s the same dynamic as the colleague who says a project is finished while tasks are still in “doing”, or the one who swears every CV is in English or French until you open the folder and find German, or the one who builds an Excel template where the data must be entered in comments instead of cells. These are not caricatures. These are field notes.
These people aren’t stupid. Many have excellent diplomas. They show logic in other areas of their lives. But somewhere along the way, their brain got trained to match stimuli to expected answers. Not to understand the question, just to produce something that looks like the right response. Like an AI. Skynet is already here. You just didn’t notice.
The new model: three categories
So I had to update my taxonomy. Two categories were not enough anymore.
The Awake. These are the ones who understand what they’re doing and why. You give them a new situation, they adapt. You ask them “why did you do it this way?”, they have a real answer, not a rehearsed one. Ivan is awake. My best developers are awake. You probably know who they are in your own team because they’re the ones you actually trust when things go sideways at 11pm on a Friday.
The Idiots. No will to be competent. They choose ignorance, choose carelessness. It’s intentional. I don’t lose much sleep over these ones, at least they’re easy to spot. Everybody has a colleague like this. The one who somehow survives every restructuration (yes, in French we say restructuration, not restructuring, and I prefer our version).
The AI. This is the category that broke my model. They have the will. They put in the effort. But the results never come. They didn’t learn the meaning of things, they learned to produce expected outputs. Like a student who memorizes the exam answers but cannot solve a problem that is formulated differently. I had a project manager once who could repeat everything from the methodology book, every framework, every process name. Perfect vocabulary. But put her in front of an actual crisis and she would just… loop. Repeat the same phrases, the same steps, like a script running with no error handling.
The test is simple. Ask them to explain why, not what. The Awake can tell you why. The AI can only tell you what. Hence the following deductions:
Will + effort + results = Awake.
No will = Idiot.
Will + effort + no results = AI.
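The deductions above are simple enough to write as a function. A minimal sketch (the model itself doesn’t say what happens when there is will but no effort, so that case is labeled explicitly):

```python
def classify(will: bool, effort: bool, results: bool) -> str:
    """Direct encoding of the three-category model."""
    if not will:
        return "Idiot"         # no will to be competent = Idiot
    if effort and results:
        return "Awake"         # will + effort + results = Awake
    if effort and not results:
        return "AI"            # will + effort + no results = AI
    return "Unclassified"      # will without effort: the model is silent here
```

The "why vs what" test is how you estimate the inputs in practice: the Awake can explain why, the AI can only recite what.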
What does this change? Everything
You can’t communicate the same way with all three categories. Explaining nuance to someone operating on pattern matching is like arguing with ChatGPT about why it’s wrong. You can try. You won’t get far.
And this opens a much bigger question. How many decision makers who push “AI everywhere” are themselves operating on pattern matching? They heard “AI equals good” enough times, so now they repeat it. Think about Apple getting pressure from shareholders to add AI features nobody needs. Companies rushing to say “we put AI in our products” like they used to say “we put our products in the cloud” without understanding either. The pattern is the same.
Now scale that up. How much of the billions spent on software projects that never get used comes from human AIs building solutions to problems they don’t comprehend? You get a double layer of non-comprehension, and honestly, that explains a lot of what I’ve seen in this industry.
I used to be cynical about this. Five months ago, I would have ended here with a joke about planning a genocide of idiots, or calculating the carbon balance between keeping a human AI versus a computer AI running.
But I’m a father now. That changes things.
My legacy
My son, if you ever read this, check that you’re not an AI. Question yourself. Do you understand why or just what? Can you adapt or only follow procedures? Do you parrot concepts or comprehend them?
For everyone else, same question.
The theory has risks. It could oversimplify complex cognitive issues, could lead to unfairly dismissing people. I get that. But it also explains otherwise incoherent behavior. It gives you a practical lens for management and communication. And it’s based on legitimate philosophy (Searle’s Chinese Room), not just my frustration.
Can human AIs learn true understanding? I don’t know. What systemic changes would prevent creating them? Better question. If humans already function as AI, what does developing digital AI even mean?
The mirror
I’m not offering solutions. I’m offering a mirror.
We built systems that reward execution over understanding. Get the diploma, get the credential, match the pattern, get the promotion. Our education system optimizes for this. Get good grades by providing expected answers. Don’t question. Don’t understand deeply. Just match the pattern.
The real test isn’t whether AI can pass as human. It’s whether humans can remember how to be more than pattern matchers. Next time a colleague gives you a nonsensical answer despite clear competence, don’t just get frustrated. Ask yourself: are they struggling, or are they a Chinese Room made of flesh, responding to stimuli they’ll never truly understand?
Because if that’s the case, everything you thought about intelligence, competence, and the future of work needs updating. We replaced ourselves before the machines even had the chance to try. And most of us didn’t notice because, well, the outputs looked correct.
Funny enough, that’s exactly how a Chinese Room is supposed to work.
Rudy Baer
Founder and CTO of BearStudio
Co-founder of the Fork it! community