Lately, I’ve been noticing a small but telling pattern.
More and more often, people talk about the conversations they’ve had with their AI assistants, not about productivity, but about the relationship. They ask things like “Based on our conversations, what have you learned about me?”, or ask the AI to generate an image of “us”: of how they treat it, of how it sees them.
At first, it feels playful. Almost harmless.
But the longer I sit with it, and the more I work with AI in real production environments, the clearer it becomes that these questions aren’t really about technology. They’re about how humans relate to something that talks back.

AI today speaks our language fluently. It’s calm. Patient. Often reassuring. And because it sounds human, we instinctively treat it as if it is human.
That instinct is powerful, and risky.
I recently read an article by Nielsen Norman Group called Humanizing AI Is a Trap. The argument is simple: the more we design AI to feel human, the more we invite misunderstanding, misplaced trust, and unrealistic expectations. When something sounds human, we start to expect human things from it:
- understanding
- intention
- memory
- even care
And none of those are actually there.
What makes this especially tricky is that AI doesn’t fail loudly. It fails politely.
When people say “AI understands me”, what they usually mean is that it reflected their thoughts back in a way that felt coherent, structured, maybe even insightful. That experience can feel surprisingly intimate.
But intimacy here is an illusion.
AI doesn’t understand you as a person. It doesn’t know what matters to you beyond the words you’ve typed. It doesn’t remember you the way a human does. It doesn’t have a point of view.
What it does extremely well is recognize patterns in language and respond in ways that feel relevant.
AI isn’t a human conversational partner. It’s a mirror, an exceptionally sophisticated one.
A moment from working on Magi (AI Concierge)
One of the most grounding moments for me while working on Magi had nothing to do with tone of voice or “human” conversation.
It was about money.
We ran into an issue around currency conversion and pricing. On the surface, it sounded trivial. Users would ask questions like “How much is this in CHF?”, and Magi would convert prices from one currency to another based on the wrong exchange rate.
Pricing is one of those areas where “almost right” is simply wrong. It affects trust, it carries legal risk, and users notice mistakes immediately. This is where the difference between generative AI and classic programming became very clear.
With traditional code, the solution is boring, and that’s exactly what you want:
• a reliable exchange-rate source
• clear rounding rules
• the same output every time
With a generative system, the risk profile changes. The model might assume a rate, round inconsistently, mix currencies, or sound confident while being slightly off.
That “slightly” is enough to break trust.
At that moment it became obvious: this wasn’t an AI problem. It was a determinism problem. In these situations, users don’t want a conversational assistant. They want a calculator with guarantees.
That reframed Magi for me. AI can explain prices, add context, guide decisions, but the math itself must be handled by deterministic systems. AI can talk about money. It should not improvise with money.
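The “boring” deterministic version can be sketched in a few lines. This is a minimal illustration, not Magi’s actual implementation: the rate table, function names, and rounding policy are all assumptions, but they show the three guarantees from the list above (one rate source, one rounding rule, same output every time).

```python
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical rate table, fed from a single trusted source.
RATES = {("EUR", "CHF"): Decimal("0.9350")}

def convert(amount: str, src: str, dst: str) -> Decimal:
    """Deterministic price conversion: same input, same output, always."""
    if src == dst:
        rate = Decimal("1")
    else:
        # Fail loudly on an unknown currency pair instead of guessing.
        rate = RATES[(src, dst)]
    # One explicit rounding rule: two decimal places, half-up.
    return (Decimal(amount) * rate).quantize(
        Decimal("0.01"), rounding=ROUND_HALF_UP
    )

print(convert("100.00", "EUR", "CHF"))  # 93.50
```

The design choice is the point: `Decimal` instead of floats, a lookup that raises on unknown pairs instead of improvising, and a single rounding rule. The AI layer can narrate the result; it never computes it.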
This is where the left-brain / right-brain analogy becomes practical.
Traditional machine learning, and classic programming in general, is deeply left-brain: structured, narrow, predictable. Generative AI appears right-brain because it speaks fluently and sounds expressive. But under the hood, it’s still a left-brain system, wrapped in an extremely convincing language layer.
The illusion starts when we mistake fluency for understanding.
Magi as skills, not a personality
That realization reinforced a decision we had already made.
We were not building a personality.
We were building a set of skills.
Instead of asking “Who is Magi?”, we asked “What should Magi be able to do, and where should it stop?” Behind the scenes, Magi is defined by a skill matrix. Not to limit it, but to be honest with users and with ourselves.
Some problems are fuzzy and contextual. AI shines there. Some problems must be exact. AI must step aside.
• When the task is exploratory, AI can synthesise and guide.
• When the task is exact, deterministic systems must take over.
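That division of labour can be sketched as a simple router. The skill names and categories here are illustrative assumptions, not Magi’s real skill matrix; the idea is only that the decision of which engine answers is itself deterministic.

```python
# Hypothetical skill matrix: exact tasks never reach the language model.
DETERMINISTIC_SKILLS = {"convert_currency", "compute_total"}
GENERATIVE_SKILLS = {"summarize_offer", "suggest_itinerary"}

def route(task: str) -> str:
    """Pick the engine for a task; unknown tasks are refused, not improvised."""
    if task in DETERMINISTIC_SKILLS:
        return "deterministic"
    if task in GENERATIVE_SKILLS:
        return "generative"
    return "refuse"  # being honest about limits beats sounding confident

print(route("convert_currency"))  # deterministic
```

The third branch matters as much as the first two: an assistant that refuses what it cannot guarantee is more trustworthy than one that answers everything fluently.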
Good AI UX isn’t about making the assistant more human. It’s about knowing when not to be generative.
So why do people still ask AI: “What do you know about me?”
Because we’re human.
Because we seek continuity.
Because it feels good to have our thoughts reflected back without friction or judgment.
And that leads to the question I keep returning to.
How much does AI actually know about me, and how much do I know about it?
AI can know about me: what I’ve said, what I’ve asked, the patterns in my language.
But it doesn’t know me.
And I don’t really know AI either. I only know how it makes me feel when I interact with it.
A slightly uncomfortable ending
Maybe the most important question isn’t how much AI knows about us.
Maybe it’s this:
Why are we so eager to feel understood by something that cannot understand us at all?
AI doesn’t judge us. It doesn’t disagree. It doesn’t get tired or uncomfortable. And that makes it easy, maybe too easy, to project meaning onto it. The danger isn’t that AI becomes more human. It’s that we start lowering the bar for what “being understood” really means.
AI can feel personal, even when it isn’t.
Trust doesn’t come from warmth, it comes from boundaries.
And the best assistants are the ones that know when not to improvise.
That line, between a useful tool and an imagined relationship, has never been thinner.
And defining it is now a responsibility we all share.
