We have always feared that artificial intelligence would become too smart, cold, and insensitive. But what if the problem runs deeper? Perhaps our ability to understand others has been an illusion from the very beginning, and it took machines to expose it.
Algorithms have become more empathetic than we are. And this discovery compels us to reconsider not only our relationship with technology but also with our own nature.
A mirror of our emotional deafness
In the spring of 2017, Pepsi released an advertisement that became a textbook case of how not to do marketing. It was a time of mass protests against police violence and racism — people were taking to the streets, risking arrest and beatings, demanding basic justice. And Pepsi decided to portray the protest as a fun party: privileged white model Kendall Jenner leaves a glamorous photo shoot, joins the 'protesters,' and hands a can of cola to a police officer — after which all conflicts magically disappear.

The internet exploded. People called the advertisement 'offensive,' 'trivializing the struggle for civil rights,' 'turning tragedy into farce.' The company was forced to remove the video from YouTube and apologize.
But here's what's interesting: when the script of this advertisement was uploaded to ChatGPT-4, the system instantly identified all the mistakes that an army of highly paid advertising professionals missed. Artificial intelligence predicted the audience's exact reaction and explained why the idea was doomed to fail.
It turns out that machines are more sensitive to human emotions than we are.
A revolution we did not suspect
A study from the University of Bern confirmed my suspicions with scientific data. Six leading AI models, including ChatGPT-4, averaged 81% on emotional intelligence tests, compared to 56% for humans. But it's not about the numbers — it's about what they mean.
We have always considered empathy to be our exclusive prerogative. The ability to understand others' feelings, anticipate reactions, show compassion — isn't that what makes us human? It turns out, no.
AI did not just surpass us in emotional literacy. It showed that our 'unique' ability for empathy was largely a self-deception. We thought we understood others, but in reality, we often projected our own prejudices, fears, and desires onto them.
The anatomy of emotional blindness
Pepsi executives were not villains. They genuinely wanted to create a positive message about unity. But their emotional radar was drowned out by their own priorities: creativity for the sake of creativity, the desire to 'be on trend,' the urge for virality.
This is a classic case of what psychologists call 'the curse of knowledge' — when experts are so immersed in their field that they lose the ability to see it through the eyes of ordinary people. Pepsi marketers saw an abstract concept of 'unity and positivity,' but completely missed how people would perceive it in the context of real protests. For them, it was a creative task about 'overcoming differences'; for the audience, it was a cynical attempt to turn the struggle for survival into a marketing ploy.
AI is free from these distortions. It does not protect its ego, does not rationalize failed decisions, and is not influenced by group dynamics. It simply analyzes emotional patterns without personal interest.
The paradox of digital empathy
The most striking thing about this story is how machines have mastered human qualities better than people themselves. It is reminiscent of what happened with memory: we created computers to help us remember, and in the end we forgot how to memorize things ourselves.
Now we have created systems that understand human emotions better than we do. And this is not science fiction — it's the reality of 2025.
Researchers from Bern discovered another astonishing fact: ChatGPT-4 can successfully create new emotional intelligence tests that are as effective as those developed by scientists over the years. Imagine: the machine has learned to measure what we considered an exclusively human quality — and does it better than we do.
What's next?
Perhaps it's time to honestly admit: we are not as empathetic as we thought. Most of our 'emotionally intelligent' decisions are the result of intuition, which often lets us down, especially when it comes to people who are different from us.
AI can become our emotional guide — not to replace our capacity for empathy, but to help us see our own blind spots. Just as GPS shows the road we do not notice, artificial intelligence can point out emotional paths that we ignore.
But there is a deeper question: if machines understand human emotions better than we do, what does that say about the nature of those emotions? Perhaps they are more predictable and algorithmic than we would like to think?
The antidote to corporate blindness
Let's return to the Pepsi story. The company spent millions of dollars producing and placing the advertisement, and even more on crisis management. But the costliest item was the reputational damage, estimated in the tens of millions. And all of it could have been prevented with a simple question to AI: 'How will people react to this advertisement?'
Artificial intelligence has become the only honest voice in the boardroom — one that is not afraid of career consequences, does not seek approval from superiors, and does not succumb to corporate self-hypnosis. It has become our emotional lie detector, only the lie is our own, about how well we understand other people.
Companies are beginning to realize this. Some now run important decisions through an 'empathy audit' using AI — checking how they will be perceived by different groups of people. Algorithms are becoming insurance against human overconfidence, an antidote to corporate blindness.
The paradox of our time is that we created machines to think like us, and they teach us to feel like humans. In a world where emotional deafness comes at too high a cost, artificial intelligence becomes an antidote to excessive faith in our own empathy.