
When AI Takes the Wheel: The Security Risks of Claude's New Computer Control Feature
Think about this for a second. You are standing in line for coffee, and you send a quick text message to your phone. You ask your computer back at the office to pull last month's receipts, organize them into a spreadsheet, and draft an expense report. By the time you get back to your desk, the entire job is done. You didn't click a mouse, type a single word, or even look at a screen.
This is not some futuristic concept anymore. In March 2026, Anthropic rolled out a new "computer use" capability for its Claude AI, along with a feature they call Dispatch. This setup lets you assign tasks to Claude right from a mobile app, and the AI just executes them autonomously on your paired desktop computer. Claude can literally control your mouse, use your keyboard, open files, and browse the web just like a person would.
I have spent 25 years helping organizations navigate cybersecurity and massive technological shifts, and I have to admit, the productivity benefits here are staggering. But I also see a rapidly expanding attack surface that we need to talk about. We are crossing a major threshold right now. We are moving away from just chatting with AI, and we are starting to deploy it as an autonomous agent that has direct control over our operating systems.
The Hidden Risks of Autonomous Agents
When you give an AI agent permission to control your computer, you are handing over the keys to your digital life. This level of access brings up some serious security and privacy challenges, and frankly, most organizations are not ready for them.
A recent report from Darktrace in March 2026 found that 92% of security professionals are genuinely concerned about what happens when AI agents spread across the workforce. But here is the kicker: despite all that worry, only 37% of security leaders say their organization actually has a formal AI policy in place. That huge gap between knowing there is a problem and actually doing something about it is exactly where vulnerabilities thrive.
Let's break down the primary security concerns you need to tackle as these tools make their way into your workplace.
1. AI Agents Can Act A Lot Like Malware
The Harvard Business Review recently pointed out a pretty sobering reality. In practice, AI agents can behave exactly like malware. The main difference is just intent. We design agents to help us, while malware is built to cause harm. But what happens if an AI agent gets compromised? Through techniques like indirect prompt injection, a bad actor could hide malicious instructions inside a regular web page or document. When the AI reads that document, it could be tricked into running harmful commands, stealing sensitive data, or messing with your files, and no human would ever see it happen.
2. The "Always On" Problem
For remote features like Claude's Dispatch to actually work, your desktop computer has to stay awake and active. You cannot put it in sleep mode. This creates a persistent, always-on endpoint that is just sitting there, constantly listening for commands. If you work in an enterprise environment, you already know that leaving machines unlocked and active goes against basically every foundational security practice we have.
3. Broad Permissions and Data Exposure
To do anything useful, these AI agents need deep access to your files, your folders, and your applications. Anthropic explicitly tells users not to give Claude access to sensitive stuff, like financial, legal, or medical software. But let's be honest, enforcing that rule relies entirely on users doing the right thing. Without proper governance in place, employees are absolutely going to use these tools to process confidential corporate data, and that leads straight to unauthorized exposure.
The AI Coverage Gap in Security
Generative AI is moving so fast that our ability to secure it just cannot keep up. Gartner predicts that more than 80% of enterprises will have generative AI models or applications running in production by the end of 2026. Just to put that in perspective, that number was less than 5% back in 2023.
If you want to understand the scope of this challenge, just look at what security leaders are most worried about right now.
Security Concern | Percentage of Security Leaders Concerned |
Exposure of sensitive data | 61% |
Potential data security and policy violations | 56% |
Misuse or abuse of AI tools | 51% |
Source: Data compiled from the Darktrace State of AI Cybersecurity Report 2026.
How to Navigate Secure AI Transformation
Look, the solution is not to ban these tools. If you try to block innovation, you are just going to end up with a massive Shadow AI problem. Instead, leadership needs to focus on secure AI transformation.
First, start treating AI agents like actual identities on your network. They need the exact same strict governance, least-privilege access controls, and continuous monitoring that you would give to a human employee or an outside vendor.
Second, get clear, actionable AI policies established immediately. Your teams need to know, without a doubt, which tools are approved, what kind of data they can process, and which applications are strictly off-limits for autonomous agents.
Finally, you need to invest in security solutions that can actually monitor prompt behavior and catch anomalies in real time. You have to be able to spot when an AI agent starts doing things outside of its intended scope.
The AI wave is accelerating, and these tools are getting more autonomous by the day. You can either build the necessary guardrails right now, or you can find yourself responding to a crisis tomorrow. If you are ready to secure your organization's AI journey, let us talk.
David Levine, CISSP
Keynote Speaker | Strategic Advisor | Cybersecurity & AI Expert
References
[1] Weatherbed, J. (2026, March 24). Anthropic's Claude Code and Cowork can control your computer. The Verge. https://www.theverge.com/ai-artificial-intelligence/899430/anthropic-claude-code-cowork-ai-control-computer
[2] Caswell, A. (2026, March 24). I tried Claude's new Cowork feature and it ran my laptop from my phone. Tom's Guide. https://www.tomsguide.com/ai/i-sent-claude-a-task-from-my-phone-and-it-finished-it-on-my-laptop-without-me-touching-a-thing
[3] Schoon, B. (2026, March 24). Claude can now remotely use your computer, and it looks wild. 9to5Google. https://9to5google.com/2026/03/24/claude-can-now-remotely-control-your-computer-and-it-looks-absolutely-wild-video/
[4] State of AI Cybersecurity 2026: 92% of security professionals concerned about the impact of AI agents. (2026, March 26). Darktrace. https://www.darktrace.com/blog/state-of-ai-cybersecurity-2026-92-of-security-professionals-concerned-about-the-impact-of-ai-agents
[5] Burt, A. (2026, March 30). AI Agents Act a Lot Like Malware. Here's How to Contain the Risks. Harvard Business Review. https://hbr.org/2026/03/ai-agents-act-a-lot-like-malware-heres-how-to-contain-the-risks
Tags
#Cybersecurity #ArtificialIntelligence #AgenticAI #Anthropic #Claude #DataPrivacy #RiskManagement