AI Bias

During AI Week 2025, there was much discussion about algorithms, innovation, and automation, but also about bias.

One crucial concept caught the audience's attention: technology is not neutral. Even artificial intelligence, however logical and mathematical, amplifies human intentions.

This means that if our mental processes are full of biases, AI risks reproducing them on an amplified scale.

In this article, we explore the connection between cognitive biases and artificial intelligence, with a focus on two of the most pervasive: affinity bias and non-likeability bias.

A topic that is increasingly central to discussions of inclusive leadership and the ethical development of technologies.

Why biases matter in the context of AI

AI may be a technology, but it is trained on human data. And human data reflect behaviors, prejudices, and stereotypes. AI, therefore, is not born neutral: it takes on the nuances of its creators and its datasets.

Biases are not just errors: they are systematic distortions in our way of perceiving and making decisions.

Understanding which biases affect us is fundamental for building more equitable, ethical, and sustainable technological systems.

The affinity bias: the silent enemy of diversity

Affinity bias is the tendency to prefer people similar to ourselves. It happens, for example, when a manager hires collaborators who share their background, gender, or worldview.

In the field of artificial intelligence, this can translate into:

  • Algorithms that reward profiles similar to those of the people who designed them

  • Recommendation systems that reinforce monoculture

  • Automatic selection processes that penalize minorities

If everyone around us thinks the same way, innovation stops.
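One common way to surface this kind of distortion in automated selection is to compare selection rates across groups. The sketch below (plain Python, entirely hypothetical data and group names) applies the widely used "four-fifths" adverse-impact rule: if the lowest group's selection rate falls below 80% of the highest group's, the screening step deserves scrutiny.

```python
# A minimal sketch: checking an automated screening step for group disparity
# using the "four-fifths" (80%) adverse-impact rule. All data are hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of 0/1 selection decisions."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.
    Values below 0.8 are a common red flag for disparate impact."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening results for two candidate groups
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 selected -> 0.75
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 3/8 selected -> 0.375
}

ratio = adverse_impact_ratio(decisions)
print(f"Adverse-impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:
    print("Warning: possible disparate impact in the selection step")
```

A check this simple will not prove an algorithm is biased, but it makes the question measurable instead of invisible.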

The non-likeability bias: the hidden face of leadership

This manifests when we judge negatively those who deviate from the dominant style, especially in leadership roles. A common example? Women in predominantly male professional contexts, who are perceived as “not likable” if they show assertiveness or decisiveness.

In the context of AI, this bias can emerge when:

  • The models penalize behaviors that do not conform to the statistical “norm”

  • Automatic evaluation metrics replicate cultural prejudices

The result is a vicious circle that limits diversity in decision-making roles and hinders inclusion.
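The "penalize what deviates from the norm" mechanism can be made concrete with a toy sketch. Below, a scorer (hypothetical numbers, plain Python) ranks candidates by closeness to the population mean; a perfectly capable but under-represented style always lands at the bottom, purely because it is statistically unusual.

```python
# A minimal sketch of how scoring by closeness to the statistical "norm"
# systematically penalizes under-represented styles. Numbers are hypothetical.

majority_style = [5.0, 5.2, 4.8, 5.1, 4.9, 5.0]  # dominant style scores
minority_style = [7.0, 6.8]                       # different, not worse

population = majority_style + minority_style
mean = sum(population) / len(population)

def conformity_score(x, mean):
    """Higher when closer to the population mean: distance-to-norm scoring."""
    return -abs(x - mean)

# Ranking by conformity pushes the minority values to the bottom,
# regardless of any actual merit they represent.
ranked = sorted(population, key=lambda x: conformity_score(x, mean), reverse=True)
print(ranked)
```

Real evaluation models are far more complex, but any objective that rewards proximity to the majority pattern reproduces this same dynamic at scale.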

Bias, AI and change: from awareness to action

Every major technological transition generates fear, skepticism, and resistance. But only by recognizing our cognitive limitations can we build more human technologies.

AI, if guided by conscious leadership, can:

  • Help identify and correct bias in decision-making processes

  • Promote transparency in algorithmic criteria

  • Provide tools to improve equity in organizations
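One concrete transparency practice the list above points to is auditing model inputs for proxy variables: features that quietly encode a protected attribute. A minimal sketch, with hypothetical feature names, data, and threshold, flags features whose correlation with group membership is suspiciously high.

```python
# A minimal sketch of one transparency practice: flagging input features that
# act as proxies for a protected attribute. Data and threshold are hypothetical.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Hypothetical candidate records: protected group membership plus two features
protected = [0, 0, 1, 1, 0, 1, 0, 1]
features = {
    "zip_code_score":   [0.9, 0.8, 0.2, 0.1, 0.9, 0.2, 0.8, 0.1],
    "years_experience": [3, 5, 4, 6, 2, 5, 4, 3],
}

THRESHOLD = 0.7  # arbitrary cutoff for this illustration
for name, values in features.items():
    r = abs(pearson(values, protected))
    flag = "PROXY RISK" if r > THRESHOLD else "ok"
    print(f"{name}: |r| = {r:.2f} -> {flag}")
```

Here the hypothetical `zip_code_score` tracks group membership almost perfectly and gets flagged, while `years_experience` does not; publishing this kind of audit is one small, verifiable step toward algorithmic transparency.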

True leadership today can no longer ignore the issue of inclusion. A new model is needed that:

  • Recognizes the power (and the risks) of AI

  • Fosters heterogeneous and creative work environments

  • Adopts transparent and verifiable decision-making practices

The leadership of the future will be inclusive, adaptive, and aware of its cognitive limits. Or it will not be.

Conclusion: designing an ethical artificial intelligence

Artificial intelligence can be an incredible tool to improve the world. But if we do not understand the cognitive biases that we transfer into its algorithms, we risk amplifying the problems instead of solving them.

The challenge is not only technical, it is profoundly human. It begins with the awareness of our biases and is realized in a leadership capable of guiding innovation with ethics, empathy, and inclusion.