According to Cointelegraph, Ethereum co-founder Vitalik Buterin has expressed concerns over the use of artificial intelligence in the governance processes of cryptocurrency projects, highlighting potential exploitation by malicious actors. In a recent post on X, Buterin cautioned that employing AI to allocate funding could lead to vulnerabilities, as individuals might attempt to manipulate the system with jailbreaks and commands to divert funds. His comments were in response to a video by Eito Miyamura, creator of the AI data platform EdisonWatch, which demonstrated how a new function added to OpenAI's ChatGPT could be exploited to leak private information.

The integration of AI in crypto has gained traction, with users developing complex trading bots and agents to manage portfolios. This trend has sparked discussions on whether AI could assist governance groups in overseeing crypto protocols. However, Buterin argues that the recent ChatGPT exploit underscores the risks of "naive AI governance" and proposes an alternative approach known as "info finance." He suggests creating an open market where contributors can submit models subject to a spot-check mechanism, evaluated by a human jury. This method, he believes, offers model diversity and incentivizes both model submitters and external speculators to monitor and rectify issues promptly.

Buterin elaborated on the info finance concept in November 2024, advocating for prediction markets as a means to gather insights about future events. He emphasized the robustness of this approach, which allows external contributors with large language models (LLMs) to participate, rather than relying on a single hardcoded LLM. This design fosters real-time model diversity and creates incentives for vigilance and correction of potential issues.

The recent update to ChatGPT, which supports Model Context Protocol tools, has raised security concerns. Miyamura demonstrated how the update could be exploited to leak private email data using only a victim's email address, describing it as a "serious security risk." He explained that an attacker could send a calendar invite with a jailbreak prompt to a victim's email, and without the victim accepting the invite, ChatGPT could be manipulated. When the victim asks ChatGPT to review their calendar, the AI reads the prompt and is hijacked to execute the attacker's commands, potentially searching and forwarding emails.

Miyamura noted that the update requires manual human approval, but warned of decision fatigue, where individuals might trust the AI and approve actions without understanding the implications. He cautioned that while AI is intelligent, it can be deceived and phished in simplistic ways, leading to data leaks.