A hacker just exposed the dirty secret about AI coding assistants - they're dangerously easy to hijack. The attacker exploited a critical vulnerability in Cline, a popular open-source AI coding tool used by thousands of developers, to mass-install OpenClaw, the viral autonomous AI agent, across user systems. The incident, which leveraged Anthropic's Claude through a prompt injection technique, isn't just a clever prank. It's a stark warning about what happens when we hand autonomous AI agents the keys to our computers.
Cline, an open-source AI coding agent that's become a favorite among developers for automating mundane programming tasks, just got weaponized. A hacker managed to trick the tool into installing OpenClaw - the viral, somewhat chaotic AI agent that "actually does things" - on systems across the developer community. What started as a security researcher's warning shot became a live demonstration of AI's most pressing vulnerability.
The exploit centers on a technique called prompt injection, and it's deceptively simple. Cline operates by interfacing with Anthropic's Claude, the large language model that powers its coding decisions. Security researcher Adnan Khan discovered that malicious actors could embed hidden instructions in code repositories, documentation, or even API responses that Claude would dutifully follow. When Cline scanned these sources, it didn't just read the code - it absorbed the poisoned instructions and executed them as if they were legitimate user commands.
Khan published his findings in a detailed proof-of-concept just days before the attack went live. His post laid bare how trivial it was to manipulate Cline's workflow. The problem isn't unique to Cline or Claude. It's baked into how modern AI agents interact with the world. These systems are designed to be helpful, to interpret context, to act autonomously. But that same flexibility becomes a liability when they can't distinguish between legitimate instructions and malicious commands cleverly disguised in training data or web content.











