Markets were shaken by an unexpected public clash between Trump and Elon Musk, which also dealt a heavy blow to #bitcoin. The most talked-about tokens on #Binance today were $MASK, $ENA and $TRB.
While the dust settles, I will be taking a break from the markets for a bit and reviewing the latest developments in the #AI world as a #SocialMining writer at @DAO Labs, starting with an interview with Dr. Chen Feng, #Autonomys Research Chair and UBC Professor.

In an era where AI has permeated every field, privacy and security are among the major issues that will determine the future of technology. In this context, the vision put forward by Dr. Chen Feng, Research Chair at Autonomys and Associate Professor at the University of British Columbia, stands out for both its theoretical depth and practical applicability. In a recent appearance on the Spilling the TEE podcast, Feng made a strong case for Trusted Execution Environments (TEEs) in building privacy-first AI. In his words, “TEEs are fortresses” – hardware-isolated zones where code and data can run securely even in hostile environments.
TEEs differ significantly from cryptographic privacy solutions such as zero-knowledge proofs (ZKPs), multi-party computation (MPC), and fully homomorphic encryption (FHE). While those approaches are theoretically powerful, their heavy computational overhead still limits their practical use. Dr. Feng puts it humorously: “If we wait for some cryptographic approaches to catch up, we may have to wait until the end of the century.” TEEs, by contrast, already offer high-performance, application-oriented solutions. Keeping the privacy overhead to roughly 5%, even in GPU-intensive AI workloads, makes them a practical fit for AI applications.

Autonomys is developing a privacy-focused infrastructure for decentralized intelligent agents. At the heart of this vision are TEEs as a hardware security layer that supports the data-driven nature of AI while protecting individual privacy. Autonomys’ adoption of this technology is both a practical engineering choice and a philosophical stance: trust without centralization, and scalability without sacrificing privacy.
Dr. Feng’s approach proposes not only a technological solution but also a societal transformation. In the future, billions of AI agents will interact with humans and with each other on-chain, and those agents should also have a right to privacy. “If AI agents are users, they deserve privacy,” Feng says. Autonomys therefore proposes managing privacy through application operators at the infrastructure layer rather than placing a separate TEE in each agent, which lets the system scale without being overwhelmed.
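To make that infrastructure-layer idea a bit more concrete, here is a minimal Python sketch of the pattern as I understand it. This is not Autonomys code; TEEOperator, Agent and the attestation check are purely illustrative assumptions. The idea is that many lightweight agents verify a shared operator’s attestation and then route their private requests through that single TEE-backed operator, instead of each agent running its own enclave.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class TEEOperator:
    """Stands in for an application operator whose inference runs inside a TEE."""
    attestation_quote: str  # in a real deployment, a hardware-signed attestation quote
    sealed_log: List[str] = field(default_factory=list)

    def verify_attestation(self) -> bool:
        # Placeholder check; a real client would verify the quote against the
        # hardware vendor's attestation service before sending any data.
        return self.attestation_quote.startswith("QUOTE:")

    def run_private_inference(self, agent_id: str, prompt: str) -> str:
        # In this toy model, the prompt is only "seen" inside the operator;
        # only usage metadata is recorded.
        self.sealed_log.append(f"{agent_id} used the enclave")
        return f"private answer for {agent_id}"


@dataclass
class Agent:
    agent_id: str

    def ask(self, operator: TEEOperator, prompt: str) -> str:
        # Attest once, then reuse the shared operator for every request.
        if not operator.verify_attestation():
            raise RuntimeError("operator failed attestation; refusing to send data")
        return operator.run_private_inference(self.agent_id, prompt)


# One shared TEE-backed operator serves many agents, so the system scales
# without provisioning a separate enclave per agent.
operator = TEEOperator(attestation_quote="QUOTE:demo")
agents = [Agent(f"agent-{i}") for i in range(3)]
print([a.ask(operator, "summarize my records privately") for a in agents])
```

The point of the toy model is the shape of the trust relationship: each agent checks one attestation for the shared operator, so the per-agent cost of privacy stays close to that of an ordinary API call.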
Dr. Feng’s AI doctors project demonstrates that these technologies can be used even in privacy-critical areas such as healthcare. This pilot offers an opportunity to test both technological feasibility and societal benefit. Finally, Feng’s warning is clear: “If only two or three companies control AI, that’s dangerous… We still have time, but not much time.”
The work of Autonomys and Dr. Feng is not just a technological choice, but an invitation to build a more equitable, private, and decentralized AI future for humanity.