Artificial intelligence is unlikely to develop consciousness, says Polygon's co-founder.
According to Cointelegraph, Sandeep Nailwal, co-founder of Polygon and of the open-source artificial intelligence company Sentient, expressed skepticism about the possibility of artificial intelligence (AI) developing consciousness. In a recent interview, Nailwal said that AI lacks the inherent intention found in humans and other biological beings, making it unlikely to achieve any significant level of consciousness. He dismissed the idea of a doomsday scenario in which AI becomes self-aware and takes control of humanity.
Nailwal criticized the notion that consciousness could emerge randomly from complex interactions or chemical processes. While he acknowledged that such processes can produce complex cells, he argued they are insufficient to generate consciousness. Instead, Nailwal voiced concern about the potential use of AI by centralized institutions for surveillance, which could threaten individual freedoms. He stressed that AI should be transparent and democratic, advocating for a global AI controlled by individuals to create a borderless world.
The executive emphasized the importance of each person having a personalized AI that works on their behalf and protects them from AIs deployed by powerful entities. This perspective aligns with Sentient's open-model approach to AI, in contrast with the opaque methods of centralized platforms. Nailwal's views are echoed by David Holtzman, a former military intelligence professional and head of strategy for the decentralized security protocol Naoris. Holtzman warned of the significant privacy risks AI poses in the near term and suggested that decentralization could serve as a defense against AI threats.
In October 2024, the AI company Anthropic released a paper examining scenarios in which AI could sabotage humanity and proposing measures to mitigate those risks. The paper concluded that while AI is not an immediate threat, it could become dangerous as models advance. Both Nailwal and Holtzman argue that decentralization is essential to prevent the use of AI for surveillance by centralized institutions, including the state. Their insights highlight the ongoing debate about the future role of AI and the importance of preserving individual freedoms in an increasingly digital world.