Home
News Detail

Vitalik Buterin warns encryption projects against AI governance

Source: CoinWorld
Ethereum co-founder Vitalik Buterin expressed concerns about the use of artificial intelligence in the governance process of cryptocurrency projects and stressed that it could be exploited by malicious actors. Buterin warned in a recent blog post on X platform that using AI to allocate funds could lead to vulnerabilities as individuals may attempt to manipulate the system through jailbreaks and instructions to transfer funds. His comment was in response to a video by Eito Miyamura, founder of the AI ​​data platform EdisonWatch, which demonstrates how a new feature added by OpenAI ChatGPT is exploited to reveal private information. The convergence of artificial intelligence and cryptocurrencies has gained widespread attention, and users are developing sophisticated trading robots and agents to manage portfolios. This trend has sparked discussions about whether artificial intelligence can assist governance groups in overseeing cryptocurrency protocols. However, Buterin believes that the recent ChatGPT vulnerability highlights the risks of “naive AI governance” and proposes an alternative approach called “information finance.” He suggested creating an open market where contributors can submit models and be reviewed by spot check mechanisms and evaluated by manual juries. He believes that this approach can provide model diversity and motivate model submissions and external speculators to monitor and correct problems in a timely manner. Buterin elaborated on the concept of information finance in November 2024, advocating for predictive markets as a means to gather insights into future events. He stressed the robustness of this approach, which allows for participation of external contributors with large language models (LLMs) rather than relying on a single hard-coded LLM. This design promotes real-time model diversity and motivates people to be alert and correct potential problems. ChatGPT's latest update to support the model context protocol tool raises security concerns. Miyamura demonstrates how to use this update to leak private email data using only the victim’s email address, calling it a “serious security risk.” He explained that an attacker could send a calendar invitation with a jailbreak prompt to the victim’s email, and that ChatGPT could be manipulated if the victim did not accept the invitation. When victims ask ChatGPT to view their calendar, the AI ​​reads prompts and is hijacked to execute the attacker's commands, possibly searching for and forwarding emails. Miyamura noted that the update required manual approval, but warned that this could lead to decision fatigue, i.e. people might trust AI and approve operations without knowing what it means. He warned that while AI is smart, it could be tricked and phished in an easy way
Link copied to clipboard