According to Cointelegraph, Sandeep Nailwal, co-founder of Polygon and the open-source AI company Sentient, expressed skepticism about the possibility of artificial intelligence (AI) developing consciousness. In a recent interview, Nailwal said that AI lacks the inherent intention found in humans and other biological beings, making it unlikely to attain any meaningful degree of consciousness. He rejected the notion of an apocalyptic scenario in which AI becomes self-aware and dominates humanity.

Nailwal dismissed the idea that consciousness could emerge randomly from complex interactions or chemical processes. While he acknowledged that such processes can produce complex cells, he argued they are insufficient to generate consciousness. Instead, Nailwal voiced concern about the potential misuse of AI by centralized institutions for surveillance, which could threaten individual freedoms. He called for AI to be transparent and democratized, advocating a global AI controlled by individuals to create a borderless world.

The executive stressed that each person should have a personalized AI that works on their behalf and protects them from AIs deployed by powerful entities. This perspective aligns with Sentient's open-model approach to AI, in contrast to the opaque methods of centralized platforms. Nailwal's views are echoed by David Holtzman, a former military intelligence professional and director of strategy at the decentralized security protocol Naoris. Holtzman warned that AI poses significant privacy risks in the near term and suggested that decentralization could serve as a defense against AI threats.

In October 2024, the AI company Anthropic released a paper outlining scenarios in which AI could sabotage humanity and proposing measures to mitigate those risks. The paper concluded that while AI is not an immediate threat, it could become dangerous as models advance. Both Nailwal and Holtzman argue that decentralization is crucial to preventing AI from being used for surveillance by centralized institutions, including the state. Their remarks underscore the ongoing debate about AI's future role and the importance of preserving individual freedoms in an increasingly digital world.