Vitalik Buterin is moving away from cloud AI due to security concerns


Vitalik Buterin is taking a markedly different approach to AI, one that has already sparked conversations across both cryptocurrency and tech circles.

In a recent update, Buterin revealed that he has completely stopped using cloud AI services. Instead, he now runs everything locally on his own hardware.

It’s not just a personal preference move either. From what he explained, this decision has a lot to do with growing concerns about security, especially as AI agents become more powerful and autonomous.

Running the full AI stack locally

What immediately stands out is how far he has taken this setup.

Buterin is now running the Qwen3.5:35B model locally on an Nvidia 5090 laptop. This is by no means a lightweight setup, but it shows what’s becoming possible with current hardware.

According to the details shared, the system is capable of hitting around 90 tokens per second, which is very fast for a locally run model of this size.

Most importantly, everything runs directly on his device. There are no external servers, no cloud processing, and no data leaving the device.
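Buterin has not published his exact tooling, so the details below are assumptions: popular local runners such as Ollama expose an HTTP API on localhost, and a fully local query never involves any external endpoint. A minimal sketch of what a request to such a runner looks like (the endpoint and model name are illustrative, not confirmed):

```python
import json

# Everything targets localhost: no data leaves the machine.
# Endpoint and model name are assumptions (Ollama-style API);
# Buterin's actual tooling has not been disclosed.
LOCAL_ENDPOINT = "http://localhost:11434/api/generate"

def build_local_request(prompt: str, model: str = "qwen3.5:35b") -> dict:
    """Build a request payload for a locally hosted model runner."""
    return {
        "model": model,    # served from local disk, not a cloud API
        "prompt": prompt,
        "stream": False,   # return the full completion at once
    }

payload = build_local_request("Summarize this document.")
print(json.dumps(payload, indent=2))
```

The point of the sketch is the endpoint: because the address is localhost, the prompt and the completion are both processed on the user's own hardware.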

This level of control is a big part of the decision. For someone heavily involved in building decentralized systems, relying less on centralized AI infrastructure seems like a natural step.

The security risks in AI agents are becoming more apparent

The biggest issue Buterin pointed out is security.

He pointed to research indicating that about 15% of AI agent “skills” or integrations contain malicious instructions. This is not a small number, especially when these agents are used to automate tasks.

Even more troubling is the idea that something as simple as analyzing a malicious web page could fully compromise an AI assistant.

In other words, the risk is not always clear. An agent may appear to be doing something normal, when in fact it is executing malicious instructions in the background.

As AI tools become more connected to financial wallets, applications and systems, these risks become much more significant.

The problem with fully autonomous AI

One of the main points Buterin seems to focus on is control.

There is a growing trend to build AI agents that can operate autonomously, sending messages, executing transactions, and interacting with platforms, without requiring constant human input.

On paper, this sounds efficient.

But from his point of view, it could also be dangerous.

If an AI agent has the ability to move money or make decisions without a human in the loop, it ceases to be just a tool. It becomes a potential liability.

If such an agent is compromised, damage can occur quickly, often before anyone notices.

This is where his approach begins to differ from many current AI development trends.

A more conservative approach to AI interaction

To deal with these risks, Buterin is taking a more cautious route.

He uses an open-source messenger that requires human approval before any outgoing message is sent to third parties. It’s a simple idea, but it changes how AI systems interact with the outside world.
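The internals of that messenger aren’t detailed here, but the underlying pattern is straightforward: the AI can only draft messages into an outbox, and a human must explicitly release each one. A minimal sketch of that human-in-the-loop gate (all names are hypothetical, not taken from Buterin’s tool):

```python
from dataclasses import dataclass, field

@dataclass
class Outbox:
    """Holds AI-drafted messages until a human approves each one."""
    pending: list = field(default_factory=list)
    sent: list = field(default_factory=list)

    def draft(self, message: str) -> int:
        # The AI can only queue a message, never send it directly.
        self.pending.append(message)
        return len(self.pending) - 1

    def approve(self, index: int) -> str:
        # Only an explicit human action moves a message out the door.
        message = self.pending.pop(index)
        self.sent.append(message)
        return message

outbox = Outbox()
i = outbox.draft("Here is the summary you asked for.")
# Nothing reaches a third party until a human calls outbox.approve(i).
```

The design choice is that the send path simply does not exist for the AI: compromise of the model changes what gets drafted, not what gets sent.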

He also applies what is described as a “2-of-2 confirmation” rule for sensitive operations. Basically, nothing critical happens unless there is clear human involvement.

It may seem slow or even unnecessary to some people, but that’s the point.

Rather than optimizing speed or full automation, the focus here is on safety and control.

In an environment where AI systems are becoming more powerful by the day, this trade-off may actually make sense.

The local vs. cloud AI debate is heating up

The move also feeds into a larger debate about local versus cloud-based AI.

Running models locally gives users complete control over their data and interactions. Nothing is sent anywhere, and nothing is processed externally. This reduces certain types of risks, especially those related to privacy and external manipulation.

But it’s not the perfect solution either.

Local AI does not automatically mean secure AI. If the system itself is compromised, or if it is processing malicious input, risks still exist.

On the other hand, cloud AI offers convenience and scalability, but it comes with its own concerns. When systems are connected to external servers and given broad permissions, users essentially trust those systems with a great deal of control.

As Buterin’s move suggests, this level of confidence may not always be justified.

A quiet warning for the AI agent economy

Looking at the bigger picture, this seems like more than just a personal setup option.

Vitalik Buterin essentially points out a potential weakness in the current trend of AI development, especially the push toward fully autonomous agents.

As more agents are designed to operate independently, the line between convenience and risk is starting to blur.

His approach is almost the opposite of this trend. Less automation, more verification. Less dependence on outside infrastructure, more local control.

This may not be the quickest or most exciting path, but it may end up being one of the most practical, especially for high-risk use cases like finance or cryptocurrencies.

For now, this is a reminder that as AI systems become more capable, the way they are designed and controlled is just as important as what they can actually do.

Disclosure: This is not trading or investment advice. Always do your research before purchasing any cryptocurrency or investing in any services.

Follow us on Twitter @themerklehash to stay up to date on the latest Crypto, NFT, AI, Cybersecurity, and Metaverse news!




