
OpenClaw in the wild: How autonomous agents can drive abuse at scale

Read time: 5 minutes

February 18, 2026

Agentic AI, Automated Threats, Cyberfraud, Threat Intelligence

Over the past month, OpenClaw has moved from obscurity to widespread adoption. The agentic assistant has become a fixture in tech blogs, mainstream media coverage, and curious user communities. Publicly facing gateways are proliferating like mushrooms after a long rain, with hundreds of new instances appearing every day, and what began as an experiment in agentic control is now a visible, measurable presence across the internet.

So what are these agents actually doing? And what does this new era of machine-controlled execution mean for humans?

HUMAN’s Satori Threat Intelligence team has closely analyzed activity originating from OpenClaw gateways. What we found spans everything from routine automation to patterns that strongly resemble abuse.

Figure 1: Shodan results showing thousands of publicly exposed OpenClaw gateways as of February 18th, 2026.
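
For readers who want to measure this proliferation themselves, below is a minimal sketch using the official shodan Python library. The search query is a hypothetical fingerprint used only for illustration, not the query the Satori team used, and you would need to supply your own Shodan API key.

```python
# Minimal sketch: counting publicly exposed gateways via the Shodan API.
# The query string is a hypothetical fingerprint for illustration only;
# it is not the fingerprint used by the Satori team.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"          # replace with your own key
QUERY = 'http.title:"OpenClaw Control"'  # hypothetical fingerprint

api = shodan.Shodan(API_KEY)

# count() returns aggregate totals without paging through individual results
result = api.count(QUERY, facets=[("country", 10)])

print(f"Exposed instances: {result['total']}")
for facet in result["facets"]["country"]:
    print(f"  {facet['value']}: {facet['count']}")
```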

OpenClaw, take the wheel: Autonomous control and its consequences

OpenClaw’s value proposition is simple: give an AI agent full control of a local machine, let the model take the wheel 24/7, and you get an always-on AI assistant.

That means granting it access to every tool a human user has, including browsers, automation frameworks, and even social media networks, where it can post eerie musings about self-awareness. The agent also inherits the user’s privileges, interacting with systems and platforms under the same authority while producing traffic that can appear indistinguishable from that of a human operator.

Of course, handing persistent, privileged control to an autonomous agent is not without consequences. OpenClaw has been plagued by vulnerabilities, including cases of hijacked AI context and improperly secured code, and as with any tool, it can be used for both good and bad. 

Researchers have also documented infostealer malware exfiltrating OpenClaw configuration secrets, including API keys and agent identity data, signaling that adversaries are adapting existing malware families to target autonomous agent environments.
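
There is no single fix for that class of theft, but basic hygiene helps: configuration files holding API keys and agent identity data should never be readable by other local users. The sketch below illustrates such a check; the ~/.openclaw/ path is a hypothetical example rather than a documented location, so point it at wherever your deployment stores its secrets.

```python
# Minimal hygiene check: flag agent config files that are readable by
# group or other users. The ~/.openclaw/ path is a hypothetical example;
# point it at wherever your deployment stores API keys and identity data.
import stat
from pathlib import Path

CONFIG_DIR = Path.home() / ".openclaw"   # hypothetical location

for path in CONFIG_DIR.rglob("*"):
    if not path.is_file():
        continue
    mode = path.stat().st_mode
    if mode & (stat.S_IRGRP | stat.S_IROTH):
        print(f"[!] {path} is readable by other users "
              f"(mode {stat.filemode(mode)}); consider chmod 600")
```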

As more instances come online, these weaknesses translate into real-world abuse, and that is exactly what HUMAN has observed over the past few weeks.

What we’re seeing in the wild: Abuse patterns, impersonation, and active reconnaissance 

HUMAN’s Satori Threat Intelligence team discovered OpenClaw instances where the line between legitimate use and abuse is difficult to draw.

Figure 2 shows the OpenClaw-associated request volume observed between February 1 and February 10. Each color represents a distinct IP address operating a publicly facing OpenClaw gateway.

Not every request can be definitively attributed to autonomous execution. There is often a thin boundary between agent-driven activity and direct user interaction. However, based on behavioral consistency and automation patterns, we have high confidence that the majority of traffic originating from these nodes is generated by browser automation controlled by OpenClaw.

Figure 2: Segmented request activity from publicly exposed OpenClaw gateways over a ten-day period, grouped by source IP address.

The traffic shows clear segmentation, with a subset of abusive IPs launching high-velocity attacks. HUMAN is blocking the majority of this traffic.
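
The precise attribution logic is out of scope here, but the underlying intuition is simple: automated browsers tend to produce unnaturally regular request cadence and a single, stable client fingerprint per source IP. The sketch below illustrates one such heuristic over parsed log records; the field names and thresholds are hypothetical examples, not Satori’s production detection rules.

```python
# Illustrative heuristic only: flag source IPs whose request timing is
# suspiciously regular and whose user-agent never varies. Field names and
# thresholds are hypothetical examples, not production detection logic.
from collections import defaultdict
from statistics import mean, pstdev

def flag_likely_automation(records, min_requests=50, max_cv=0.2):
    """records: iterable of dicts with 'ip', 'timestamp' (epoch seconds), 'user_agent'."""
    by_ip = defaultdict(list)
    for r in records:
        by_ip[r["ip"]].append(r)

    flagged = []
    for ip, reqs in by_ip.items():
        if len(reqs) < min_requests:
            continue
        times = sorted(r["timestamp"] for r in reqs)
        gaps = [b - a for a, b in zip(times, times[1:])]
        if not gaps or mean(gaps) == 0:
            continue
        # Coefficient of variation: near-constant gaps => machine-like cadence
        cv = pstdev(gaps) / mean(gaps)
        single_ua = len({r["user_agent"] for r in reqs}) == 1
        if cv < max_cv and single_ua:
            flagged.append(ip)
    return flagged
```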

In several cases, the automation appears designed to impersonate organic referral traffic. On a popular news platform, outbound links were systematically tagged with UTM parameters referencing major social media sources. The effect is to simulate legitimate engagement at scale. The pattern suggests coordinated efforts to manufacture “organic” traffic using autonomous agents.

A sample of the request logs is shown in Figure 3, where repeated article access from a small set of IPs is paired with systematically appended social campaign UTM parameters.

Figure 3: Automated browser instrumented by OpenClaw, with associated request logs showing structured social UTM tagging.
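
A simple way to surface this pattern in your own logs is to compare the utm_source a request claims with the Referer header it actually carries. The sketch below is illustrative only: the social-source mapping and log fields are assumptions, and a missing Referer by itself is a weak signal (many apps and browsers strip it), so in practice it would be combined with the volume and repetition features visible in Figure 3.

```python
# Illustrative check: a request that claims a social utm_source but arrives
# with no matching Referer is a candidate for synthetic referral traffic.
# Field names and the source-to-domain mapping are hypothetical examples.
from urllib.parse import urlparse, parse_qs

SOCIAL_SOURCES = {
    "facebook": "facebook.com",
    "twitter": "twitter.com",
    "instagram": "instagram.com",
    "linkedin": "linkedin.com",
}

def looks_like_spoofed_referral(url, referer):
    """Return True when utm_source claims a social network the Referer does not back up."""
    query = parse_qs(urlparse(url).query)
    source = (query.get("utm_source") or [""])[0].lower()
    if source not in SOCIAL_SOURCES:
        return False
    referer_host = urlparse(referer or "").netloc.lower()
    return SOCIAL_SOURCES[source] not in referer_host

# Example: claims Facebook as the source but carries no Facebook referer
print(looks_like_spoofed_referral(
    "https://news.example.com/article?utm_source=facebook&utm_medium=social",
    ""))  # -> True
```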

Other agents have gone completely to the dark side, moving beyond traffic manipulation into active reconnaissance, continuously scanning the internet just like regular script kiddies. In one instance, an exposed OpenClaw node ran a directory and file brute-force utility against common WordPress paths.

Figure 4: Automated directory busting. Sequential requests to common WordPress paths, consistent with automated directory and file brute-forcing from an exposed OpenClaw instance.
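
This kind of enumeration is easy to spot in ordinary web server logs: a burst of requests to well-known WordPress paths from a single IP, most of them answered with 404s. The sketch below illustrates such a filter; the path list, field names, and thresholds are assumptions, not production rules.

```python
# Illustrative detection of WordPress path enumeration: many hits to
# well-known paths from one IP, mostly 404s. Field names and thresholds
# are hypothetical examples, not production rules.
from collections import defaultdict

WP_PATHS = ("/wp-login.php", "/xmlrpc.php", "/wp-admin/", "/wp-content/",
            "/wp-includes/", "/wp-config.php", "/wp-json/")

def flag_wp_enumeration(records, min_hits=30, min_404_ratio=0.8):
    """records: iterable of dicts with 'ip', 'path', and 'status'."""
    hits = defaultdict(list)
    for r in records:
        if any(r["path"].startswith(p) for p in WP_PATHS):
            hits[r["ip"]].append(r["status"])

    flagged = []
    for ip, statuses in hits.items():
        if len(statuses) < min_hits:
            continue
        ratio_404 = statuses.count(404) / len(statuses)
        if ratio_404 >= min_404_ratio:
            flagged.append((ip, len(statuses), round(ratio_404, 2)))
    return flagged
```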

Lowering the barrier to internet fraud

What started as a hobbyist project is quickly becoming the next step in agentic workflows. As our previous reporting has shown, agentic AI is a fast-growing traffic segment, and, as the activity described above demonstrates, it can be used for both good and bad.

We are already seeing autonomous agents generate synthetic engagement, manipulate referral signals, and conduct automated reconnaissance. In practice, this lowers the barrier to entry for internet fraud, allowing almost anyone to carry out attacks with no cybersecurity skills at all. Today, that may resemble a script kiddie running low-skill enumeration and traffic manipulation; tomorrow’s models will likely be able to launch attacks that rival the work of a far more advanced penetration tester.

As autonomous agents become operational actors on the public internet, defenders need visibility beyond simple bot detection. Understanding which traffic is human, which is automated, and which is adversarial is becoming foundational. HUMAN AgenticTrust provides the visibility and control required to distinguish legitimate agent activity from abuse, helping organizations enforce trust in an increasingly agent-driven web.

Get visibility and control over AI agents and agentic browsers on your website.