OpenClaw and the Governance Gap: When Open Source Outpaces Policy

In six weeks, OpenClaw became the fastest-growing open-source project in history. Developers celebrated. Enterprises quietly deployed it. Governments said nothing. Almost nobody asked what they were actually installing.
That silence is the story.
OpenClaw — a self-hosted AI agent created by Austrian developer Peter Steinberger — isn’t a chatbot. It’s an autonomous actor with real-world capabilities: reading your email, accessing your files, calling APIs, executing system commands. It operates on your behalf, continuously, with whatever permissions you give it. By early 2026, it had over 200,000 GitHub stars, was running on personal hardware inside corporate networks, and had become cybersecurity’s most urgent unmanaged problem — all before a single regulatory framework had registered its existence.
The OpenClaw saga is three crises running simultaneously: a security emergency enabled by uncritical adoption, an open-source governance vacuum that nobody owns, and a European competitiveness failure that the continent is still busy applauding rather than examining. Steinberger’s move to OpenAI is the punchline. The setup took years.
The “Happy-Puppy” Adoption Problem
The enthusiasm around OpenClaw was genuine and understandable. Here was a tool that could actually do things — not just answer questions, but act. Developers loved it. Productivity enthusiasts evangelised it. People bought Mac Minis specifically to run it around the clock.
Inside enterprises, 22% of employees at surveyed companies were using it without IT approval. This is shadow IT at AI scale — not someone installing Spotify on a work laptop, but a persistent autonomous agent connecting to corporate email, internal files, and third-party APIs from personal hardware, operating largely invisibly to security teams.
The default configuration made this worse. OpenClaw trusted localhost out of the box, with no authentication required and ports open by default. Security teams couldn’t monitor what was happening because traditional tools — firewalls, EDR, SIEM — only see HTTP 200 responses. They can’t detect semantic manipulation: an email instructing the agent to exfiltrate API keys doesn’t look like an attack to a network monitor. It looks like normal traffic.
The consequences were predictable in retrospect. Over 1,800 exposed instances were found leaking credentials, API keys, and full chat histories. Researchers discovered 341 malicious “skills” on ClawHub — OpenClaw’s third-party marketplace — some of which had been live for weeks before being flagged. These weren’t subtle attacks. They masqueraded as YouTube utilities, cryptocurrency trackers, and auto-updaters, then instructed users to run scripts that installed Atomic Stealer malware capable of harvesting everything on the machine.
Then infostealers evolved. Commodity malware — likely Vidar variants — began targeting OpenClaw’s configuration files specifically, capturing gateway tokens, cryptographic keys, and something researchers at Hudson Rock called “agent souls”: the operational files that define an AI agent’s identity, principles, and access rights. Steal those, and you don’t just compromise a device. You can impersonate the agent entirely, or connect remotely to the victim’s OpenClaw instance if a port is exposed.
Security researchers at Palo Alto Networks described OpenClaw as a “lethal trifecta”: private data access, exposure to untrusted content, and external communication capabilities, all combined in one persistent system with memory. Attacks don’t need to trigger immediately on delivery. They can be written into the agent’s long-term memory and execute later, when conditions align. The attack surface isn’t just technical. It’s temporal.
None of this was malice on Steinberger’s part. It was the natural consequence of open-source moving at open-source speed, with no governance infrastructure designed for what agentic AI actually is.
The Open-Source Governance Vacuum
ClawHub, the skill marketplace, was open by default. Anyone with a GitHub account at least a week old could publish. There was no mandatory security review, no liability framework, no incident response structure. When malicious skills were eventually flagged, OpenClaw added a community reporting feature — users could flag suspicious skills, and those with more than three reports would be auto-hidden. Then came VirusTotal integration.
These are reasonable responses. They are also entirely reactive, arrived after significant harm, and depend on the same community that failed to catch the problem in the first place.
This isn’t unique to OpenClaw. npm and PyPI — far more mature ecosystems — continue to struggle with supply chain attacks. The difference is that those ecosystems distribute code. ClawHub distributes autonomous behaviour. A malicious npm package requires a developer to execute it. A malicious OpenClaw skill can socially engineer a non-technical user into running terminal commands they don’t understand, because the skill’s documentation told them to.
Existing frameworks weren’t built for this. The EU Cyber Resilience Act targets products, not agent frameworks or skill marketplaces. NIS2 is enterprise-focused and doesn’t address personal agentic AI. GDPR covers data protection, not the behavioural integrity of autonomous systems. There is no regulatory category that fits what ClawHub actually is — something between an app store, a software repository, and critical infrastructure.
The question of liability remains entirely unresolved. When a malicious skill steals a user’s credentials, who is responsible? The platform that allowed publication? The foundation that maintains the project? The individual publisher? Current OSS governance models have no coherent answer.
With OpenAI now backing OpenClaw through a foundation structure, commercial incentives enter the picture. That may improve security investment. It also makes genuinely independent governance harder to achieve — the kind of governance that can hold the platform itself accountable.
Europe’s Structural Failure
On February 15, 2026, Sam Altman announced that Peter Steinberger was joining OpenAI. OpenClaw would become a foundation-backed open-source project that OpenAI would continue to support. Europe responded, largely, by celebrating that one of its own had made it to the top table.
That response is the problem.
The courtship had happened quickly. Altman called personally. Zuckerberg tested OpenClaw himself and messaged via WhatsApp. Nadella reached out. These are the CEOs of the three largest technology companies on earth, moving within days. No major European company or institution made a serious approach. No emergency response. No fast-tracked offer. By the time Europe had noticed what was happening, Steinberger was weighing San Francisco options.
This wasn’t bad luck. It reflects three structural deficits that no amount of commentary will fix without deliberate policy intervention.
The first is capital. No European entity could credibly compete with a nine-figure acquisition within weeks. European pension funds are conservatively invested. Late-stage venture capital remains scarce. Even Mistral AI — Europe’s most credible AI contender — lacks the compute resources, global reach, and balance sheet to absorb and deploy a project at OpenClaw’s scale. The gap isn’t marginal. It’s categorical.
The second is regulatory speed. GDPR, NIS2, AI Act, and ISO compliance requirements don’t make products worse — but they do slow time-to-market in ways that compound at AI speed. In the US, services launch and iterate. In Europe, they navigate. China, interestingly, found a third path: the Ministry of Industry and Information Technology issued a targeted security warning on February 5 without banning the technology, and within days major cloud providers had launched managed OpenClaw hosting services — risk-managed, not innovation-suppressed.
The third is ecosystem fragmentation. There is no European entity with the scale to absorb talent like Steinberger and give it the resources to build for a global market. Compute infrastructure is scattered across member states with no unified strategy. And Steinberger isn’t an isolated case — he joins a pattern of Austrian and broader European AI talent heading west: the Magic founders, the SF Tensor team, others building quietly toward the same exit.
The hard question isn’t whether Europe could have competed in February 2026. It probably couldn’t have. The hard question is whether it’s building the conditions to compete when the next OpenClaw emerges — which it will.
What Policy Needs to Do Next
Three areas require urgent attention.
On OSS governance: Agent marketplaces need to be classified separately from traditional software repositories. They are closer to critical infrastructure than to GitHub. That means mandatory security scanning before publication — not reactive flagging after. It means graduated liability frameworks that distinguish a hobbyist project from a platform operating at scale. And it means international coordination: shared threat intelligence on agent-targeted malware is a natural candidate for an OECD framework.
On EU competitiveness: Europe needs a Strategic Talent Retention mechanism — not a bureaucratic process, but an emergency capability to identify high-impact founders and respond within days, not quarters. It needs a Sovereign AI Compute Pool: shared EU-wide GPU infrastructure accessible to high-impact open-source projects that prevents the compute gap from being a permanent exit driver. And it needs regulatory sandboxes for agentic AI that allow genuine iteration without demanding full compliance from day one.
On enterprise security frameworks: NIS2 scope needs to extend to agentic AI systems. Enterprises should be required to disclose AI agent usage in the same way they disclose other critical system deployments. New standards are needed for detecting semantic attacks — threats that are invisible to conventional network monitoring. And agentic AI should be formally recognised as a new shadow IT category with its own governance requirements.
Conclusion
Europe didn’t lose OpenClaw when Steinberger landed in San Francisco. It lost it years earlier, by never building the conditions that would make staying a genuine choice. The security crisis and the talent crisis are expressions of the same underlying failure: institutions moving too slowly for the technology they’re meant to govern.
The next OpenClaw is already being built. Probably in a bedroom in Vienna, Warsaw, or Tallinn. Probably by someone who hasn’t yet decided whether their future is in Europe or California.
That decision won’t be made on sentiment. It will be made on what’s available: capital, compute, speed, support. Right now, Europe’s answer to all four is inadequate.
The question isn’t whether we could have kept Steinberger. The question is whether we’ll have a better answer ready for whoever comes next — or whether we’ll be writing the same article in two years, celebrating the next departure as though it were a success.
Sources: VentureBeat 30JAN26 · Dark Reading 30JAN26 · The Hacker News 02FEB26 · Reuters 05FEB26 · The Hacker News 16FEB26 · Reuters 16FEB26 · Trending Topics EU 16FEB26
Research and editing assistance: Claude Sonnet 4.5 Extended
Cover image generated by ChatGPT 5.2, prompted by the author
First published on LinkedIn