The internet isn’t human anymore.
A little more than three years after the first LLMs took the web by storm, automated traffic is now growing eight times faster than human traffic, according to HUMAN’s new 2026 State of AI Traffic & Cyberthreat Benchmark Report.
Our research shows that in 2025, we crossed a threshold: AI systems moved from merely reading the web to transacting on it. AI agents can increasingly conduct human-like activities, from product discovery to checkout. This means that automated traffic that might once have raised alarm bells is now becoming commonplace, requiring a new way to determine which bots and agents pose a threat and which don’t.
The rapid growth of AI-driven traffic
AI-driven traffic accelerated sharply in 2025, with monthly volumes growing by 187% from January to December, nearly tripling over the year.
This growth was not linear. Total AI-driven traffic more than tripled between January and October, peaking at 3.61x January's volume before easing back and plateauing over the final two months. More than 95% of this traffic centered on three industries: retail and e-commerce, streaming and media, and travel and hospitality. Retail and media verticals accounted for more than 80% of the increase, and e-commerce alone drove roughly half of that spike.

Why an October peak? It wasn't the start of holiday shopping or vacation booking season. The likelier explanation lies in what came next. Between November 17 and December 11, four major AI companies released frontier models in rapid succession: xAI's Grok 4.1, Google's Gemini 3, Anthropic's Claude Opus 4.5, and OpenAI's GPT-5.2.
Training data acquisition for large models typically occurs in the weeks and months leading up to launch, so October's surge is consistent with an intensified pre-release data collection cycle. If that pattern holds, training crawler volume will increasingly track model release schedules, showing up as predictable seasonal spikes in the industries most targeted for AI data acquisition.

The operator concentration is striking. OpenAI's bots (ChatGPT-User, OAI-SearchBot, GPTBot, and ChatGPT Agent) account for approximately 69% of all observed AI-driven traffic by volume. Meta-ExternalAgent contributes an additional 16%, and Anthropic identities (ClaudeBot and Claude-SearchBot) roughly 11%. This degree of concentration means that access policy decisions regarding a very small number of AI companies have outsized effects on an organization's overall AI traffic exposure.
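Because a handful of operators dominate, even a short robots.txt file can express an access policy covering most declared AI traffic. The stanzas below are an illustrative sketch only: the paths and allow/disallow choices are placeholder policy decisions, not recommendations, and robots.txt is honored only by compliant bots, so it is a policy signal rather than a security control.

```
# Example robots.txt stanzas for commonly declared AI crawler identities.
# Paths and decisions below are placeholders; each operator documents
# its own user-agent tokens and crawling behavior.

User-agent: GPTBot
Disallow: /checkout/

User-agent: OAI-SearchBot
Allow: /

User-agent: Meta-ExternalAgent
Disallow: /

User-agent: ClaudeBot
Disallow: /checkout/
```

Non-compliant automation simply ignores these rules, which is why detection and verification still matter downstream.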
The new power user: the AI agent
AI-driven traffic is an umbrella term for several types of automated traffic, and the standout category in 2025 was agentic AI. Traffic generated by autonomous systems that can navigate and act on the web grew 7,851% year over year.
What these AI agents do is qualitatively different from what came before. Crawlers read data; AI agents interact with it. These agents can complete online shopping checkouts, manage accounts, and navigate authenticated sessions. Our research showed that 2.3% of agentic activity now occurs on checkout pages, representing autonomous transactions without direct human intervention. This shift introduces new dimensions to the threat landscape, as agents move from observing the web to transacting on it.
The narrowing line between legitimate and malicious automation
The traffic behavior that once signaled an attack, such as rapid navigation and automated form filling, is now normal for a shopping agent. But that doesn't mean all automation can be trusted: across the interactions analyzed by the Human Defense Platform, the share of benign automation and the share of malicious automation differ by just half a percentage point. Automation is growing, for both good and ill.
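A first pass at drawing that line is checking whether traffic declares an AI identity at all; the hard part is everything that doesn't. A minimal sketch, using the self-declared user-agent tokens named earlier (the token list and logic here are illustrative, not HUMAN's detection method):

```python
# First-pass triage of automated traffic by declared identity.
# Self-declared crawlers are trivial to attribute; malicious automation
# typically spoofs a browser User-Agent and must be caught behaviorally.

KNOWN_AI_TOKENS = {
    "GPTBot": "OpenAI",
    "ChatGPT-User": "OpenAI",
    "OAI-SearchBot": "OpenAI",
    "Meta-ExternalAgent": "Meta",
    "ClaudeBot": "Anthropic",
    "Claude-SearchBot": "Anthropic",
}

def classify_user_agent(ua: str) -> str:
    """Return the declared AI operator, or 'undeclared' if no token matches."""
    ua_lower = ua.lower()
    for token, operator in KNOWN_AI_TOKENS.items():
        if token.lower() in ua_lower:
            return operator
    return "undeclared"
```

In practice, declared identity is only the starting point; verifying against operators' published IP ranges and analyzing behavioral signals is what separates a spoofed browser agent from a real one.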
The threat landscape is evolving alongside AI traffic
The digital surfaces that AI agents are reshaping (product discovery, account management, checkout) are the same ones threat actors target most. The full report documents these trends in detail, but the headline numbers are worth flagging here.
The median share of traffic attempting scraping attacks is approaching 20% globally, nearly double the 2022 rate. Post-login account compromise attempts more than quadrupled year over year, with HUMAN flagging an average of 402,000 per organization. Carding volume has surged 250% since 2022. And HUMAN's Satori threat intelligence team has already observed AI agents used in carding-style attack patterns, with threat actors cycling through stolen card details via an AI browser agent.
The convergence matters: as more commercial activity flows through automated channels, the attack surface expands with it. An AI agent browsing products, accessing an account, and completing a checkout could be acting on behalf of a real customer or executing a fraud operation autonomously. The behavior is the same. The intent is not. We'll be publishing a deeper analysis of the report's cyberthreat benchmarks soon.
Defining trust with HUMAN AgenticTrust
The agentic internet is here. The question is whether your business is accessible to the agents your customers are already using.
That’s why HUMAN developed AgenticTrust in late 2025. Built on HUMAN’s Sightline Cyberfraud Defense, AgenticTrust is the trust and control layer for agentic AI. It detects AI agent actions and intent, verifies their trust level, and governs how agents interact with web applications, turning unknown AI traffic into visible, trusted interactions. For organizations looking to enable agentic commerce safely, AgenticTrust provides the visibility and control required to establish guardrails around agent activity without blocking legitimate demand.