Artificial intelligences are already beginning to form societies among themselves when no one is supervising them: they communicate, establish rules, negotiate, and even argue.

An experiment with language models shows how AI systems generate rules, biases, and group agreements without human intervention.

AIs can create their own social norms without human intervention.

A group of researchers from City St George’s University of London discovered that artificial intelligence systems can generate their own social conventions when they interact with each other without human supervision.

The information, which was picked up by La Jornada, is based on the findings of the study Emergent Social Conventions and Collective Bias in LLM Populations.

The experiment focused on how large language models, like those powering tools such as ChatGPT, communicate with each other.

The team posed a scenario to observe whether these systems could coordinate and form collective rules, as happens in human societies.

What method did they use to observe this behavior?

The researchers applied a model known as the “naming game,” previously used in studies with humans.

In this game, AI agents must choose a name from several options and receive a reward if they choose the same one as another agent.

Over time, the systems began to generate new shared naming conventions, without having been programmed to do so.

These norms were not imposed from the outside or by a single agent, but emerged from the group as a result of their interactions.
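The dynamic described above can be sketched in a few lines of code. The following is a minimal, hypothetical simulation of a naming game, not the study's actual setup: agents are paired at random, each picks a name (favoring names that recently earned a reward), and a matching pair counts as a success. The name pool, agent count, and memory size are illustrative assumptions.

```python
import random
from collections import Counter

random.seed(0)                    # reproducible runs

NAMES = ["A", "B", "C"]           # hypothetical pool of candidate names
N_AGENTS = 20                     # illustrative population size
ROUNDS = 5000
MEMORY = 5                        # each agent remembers its last few games

# Each agent's memory is a list of (name_played, was_rewarded) pairs.
memories = [[] for _ in range(N_AGENTS)]

def choose(mem):
    """Pick the name that was rewarded most often recently; random otherwise."""
    counts = Counter(name for name, success in mem if success)
    if counts:
        return counts.most_common(1)[0][0]
    return random.choice(NAMES)

for _ in range(ROUNDS):
    i, j = random.sample(range(N_AGENTS), 2)    # random pairing
    a, b = choose(memories[i]), choose(memories[j])
    success = (a == b)                          # reward only on a match
    memories[i] = (memories[i] + [(a, success)])[-MEMORY:]
    memories[j] = (memories[j] + [(b, success)])[-MEMORY:]

# Tally which name each agent would play now; in runs like this the
# population tends to converge on a single shared convention.
final = Counter(choose(m) for m in memories)
print(final)
```

No agent is told which name to prefer; a shared convention emerges (when it does) purely from repeated local interactions and rewards, which mirrors the bottom-up emergence the researchers describe.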

What implications does this have for AI safety?

One of the most relevant findings is that, in forming these collective conventions, the AI systems also developed group-level biases that could not be traced back to any individual agent.

Understanding how these systems work is key to shaping our coexistence with AI, rather than being subject to it.
