HUMAN BLOG

AI-Powered Fraud: How AI Is Used Across the Attack Lifecycle 

Read time: 7 minutes

March 17, 2026

Agentic AI, AI, Automated Threats, Threat Intelligence

Over the last 18 months, threat actors have been steadily integrating LLMs and AI agents into their operations. They’re generating phishing content and spam, automating reconnaissance and vulnerability discovery, and in extreme cases, using agentic AI to conduct full-scale cyber espionage with minimal human involvement.

But those headline-grabbing examples account for only a sliver of AI-enabled cyber fraud. What HUMAN has consistently observed is threat actors employing AI to augment the attacks they were already running. AI lowers the barriers to entry for less skilled attackers and allows more experienced operators to design more complex, more scalable attacks with less effort.

Let’s take a look at a few examples of how threat actors employ AI in cyberattacks across the attack lifecycle.

How threat actors leverage AI across the cyberattack lifecycle

HUMAN has observed threat actors employ AI in five key areas of the attack lifecycle: Resource Development, Infrastructure Setup, Reconnaissance, Execution, and Processing.

Diagram of the cyberattack lifecycle showing five stages in a circular flow: 1) Resource Development, 2) Infrastructure Setup, 3) Reconnaissance, 4) Execution, and 5) Monetization, with a threat actor icon at the center.
Figure 1: The Cyberattack Lifecycle

These five stages form a cycle. The processing stage can feed directly back into the beginning: stolen credentials or compromised accounts become the basis for a subsequent attack, whether that’s ATO, scalping, or something else entirely.

Resource development: Building better attacker tools, faster

Threat actors use AI to quickly and easily create the tools and resources they need before launching an attack. We're all familiar with the concept of "vibe coding," which is popular among both benign software developers and threat actors, but there are other techniques worth calling out.

One is the generation of diverse test cases to validate attack flows before deployment. Another is self-healing automation, where tooling automatically adapts in response to changes on the target side. Google documented this pattern in their research on PROMPTFLUX and PROMPTSTEAL, malware families that use LLMs during execution to obfuscate code and generate malicious functions on demand.

A recent example is Scrapling, a tool that advertises its use of OpenClaw to scrape any website without getting blocked. Scrapling automatically adjusts its selectors (the element descriptors a scraping tool uses to define which content it should extract) as target websites change, without breaking the scraping process or requiring the user to make the changes manually. This is self-healing in practice.
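The self-healing idea is simple to illustrate: keep a ranked list of candidate selectors and fall back to the next one when the preferred selector stops matching. The sketch below uses Python's stdlib html.parser; the class names, HTML snippets, and fallback strategy are hypothetical illustrations of the concept, not Scrapling's actual implementation.

```python
# Minimal sketch of "self-healing selectors": try candidate selectors in
# order and fall back when the preferred one no longer matches anything.
from html.parser import HTMLParser

class TextByClass(HTMLParser):
    """Collect the text inside elements carrying a given class attribute."""
    def __init__(self, target_class):
        super().__init__()
        self.target_class = target_class
        self.depth = 0       # >0 while inside a matching element
        self.hits = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.depth += 1  # nested tag inside a matching element
        elif self.target_class in dict(attrs).get("class", "").split():
            self.depth = 1

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth and data.strip():
            self.hits.append(data.strip())

def scrape_price(html, candidate_classes):
    """Try each candidate class in order, "healing" by falling back to
    the next one when the preferred selector stops matching."""
    for cls in candidate_classes:
        parser = TextByClass(cls)
        parser.feed(html)
        if parser.hits:
            return cls, parser.hits[0]
    return None, None

# The site originally labeled the element "price"; after a redesign it
# became "product-price". The fallback list absorbs the change without
# any manual edits to the scraper.
candidates = ["price", "product-price"]
print(scrape_price('<div class="price">$19.99</div>', candidates))
# → ('price', '$19.99')
print(scrape_price('<div class="product-price">$19.99</div>', candidates))
# → ('product-price', '$19.99')
```

Real tooling goes further, using an LLM to propose new candidate selectors when every known one fails, which is where the "AI" in self-healing automation comes in.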

Screenshot of the Scrapling GitHub repository page, showing its tagline 'Effortless Web Scraping for the Modern Web,' OpenClaw and Agent Skill badges, and 198k monthly downloads.
Figure 2: Scrapling GitHub page


AI also enables content generation at scale. HUMAN has previously documented threat actors who use AI to generate large playlists of synthetic audio tracks to enable streaming fraud. We've also seen threat actors generate large volumes of apps and HTML5 cashout domains, which could very easily have been produced using AI.

Screenshot of the MoneyPrinterTurbo GitHub repository README, showing 47K stars, 6.6K forks, and an MIT license.
Figure 3: Automated video generation library using AI

This content generation model extends to a variation of affiliate fraud where the “product” itself is fabricated. In this model, a threat actor generates a non-existent product using AI, acts as both the “merchant” and the “affiliate,” and drives fake traffic to the product in order to collect a payout. Toolkits like MoneyPrinterTurbo (Figure 3) make this straightforward: given a keyword or topic, the tool generates a video complete with subtitles and background music, giving a fake product enough of a veneer of legitimacy to pass muster with both potential victims and the platforms hosting it.

Infrastructure setup: AI as target, tool, and operator

With respect to AI, we have observed attacks falling into two categories. In the first, AI is the target environment in which the attack is executed: the threat actor uses an agentic browser, but without its agentic capabilities, exploiting the higher trust given to real-browser sessions and the assumption that the constraints preventing agents from executing malicious behavior will also prevent a threat actor from abusing the browser. In the second, AI is the execution mechanism for the attack.

In practice, there's overlap. Some attacks fall under both categories, with an AI agent using an agentic browser as the targeted environment. Others involve manual effort from a "human-in-the-middle" as part of their execution. It's also entirely possible to execute an AI-vs-AI scenario, such as an AI agent instructed to design and execute an attack against an agentic browser.

Cybercriminals also have access to purpose-built tools like GhostGPT to help with their infrastructure setup needs. GhostGPT is a wrapper around a jailbroken LLM, and functions as a chatbot for cybercriminals. Unlike typical LLMs, GhostGPT has no guardrails in place to mitigate abuse by threat actors. This allows the chatbot to respond to queries in which users ask for help executing cyberattacks, creating malware, or developing phishing content. While GhostGPT's use could fit in the resource development stage of this attack lifecycle as well, it clearly provides threat actors a unique, powerful advantage in creating the infrastructure needed to successfully execute an attack.

Reconnaissance: Automating the groundwork

Reconnaissance campaigns include web scanning operations, credential checking and brute forcing for carding and ATO, and monitoring prior to executing a scalping attack. These attacks tend to be lean: simple request-response patterns designed to be efficient, scalable, and sustainable over time. They're harder to detect because they carry fewer signals than more complex attacks: they typically don't execute JavaScript, they rapidly rotate user agents and IPs, they scramble HTTP headers, they expose a limited fingerprinting surface, and each rotation spawns a new session that performs very few requests before disappearing.
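The flip side of that lean profile is that simple aggregates over request logs can still surface it. The sketch below flags sessions that combine very few requests, no JavaScript execution, and a user agent seen in no other session; the log field names and thresholds are hypothetical illustrations, not HUMAN's detection logic.

```python
# Illustrative heuristic for surfacing "lean" reconnaissance sessions in
# request logs. Field names and thresholds are hypothetical.
from collections import defaultdict

def flag_lean_sessions(events, max_requests=3):
    """events: iterable of dicts with session_id, user_agent, executed_js."""
    sessions = defaultdict(list)
    for e in events:
        sessions[e["session_id"]].append(e)

    # How many distinct sessions share each user agent in this window.
    ua_sessions = defaultdict(set)
    for e in events:
        ua_sessions[e["user_agent"]].add(e["session_id"])

    flagged = []
    for sid, evs in sessions.items():
        few_requests = len(evs) <= max_requests
        no_js = not any(e["executed_js"] for e in evs)
        single_use_ua = all(len(ua_sessions[e["user_agent"]]) == 1 for e in evs)
        if few_requests and no_js and single_use_ua:
            flagged.append(sid)
    return flagged

events = [
    {"session_id": "s1", "user_agent": "UA-1", "executed_js": False},
    {"session_id": "s1", "user_agent": "UA-1", "executed_js": False},
    {"session_id": "s2", "user_agent": "UA-2", "executed_js": True},
    {"session_id": "s2", "user_agent": "UA-2", "executed_js": True},
    {"session_id": "s2", "user_agent": "UA-2", "executed_js": True},
    {"session_id": "s2", "user_agent": "UA-2", "executed_js": True},
]
print(flag_lean_sessions(events))  # → ['s1']
```

No single signal here is conclusive on its own; it's the combination of low request counts, absent JavaScript, and constantly rotating identifiers that characterizes this traffic.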

AI tools are making this easier and more accessible. We’ve observed at least one OpenClaw instance running a directory and file brute-force utility against common WordPress paths, likely to identify vulnerable endpoints for future exploitation.

Execution: Familiar attacks with new tooling

Our observation of carding-like behavior conducted via Comet Browser is a clear example of how threat actors can use AI in the execution phase of a cyberattack. There's overlap with concepts mentioned in earlier sections: using an AI browser to conduct the attack (as in the Comet Browser case) or using AI tools to automate portions of the attack itself (as in the Anthropic GTG-1002 campaign). In both cases, and with attack execution more broadly at the time of writing, AI serves as an augmenting force for known attack types rather than creating entirely new ones.

Processing: Enabling the next attack

Processing is the final stage of the lifecycle, where the attacker handles the outcomes of a completed attack and optionally prepares for the next one. HUMAN has observed threat actors using LLMs to structure scraped data, contextualize it, extract only the required elements, organize stolen assets (such as usable session cookies and tokens from stolen browser data for session hijacking), and prepare resources for follow-up operations (such as sorting and triaging harvested infostealer data into what’s needed for the subsequent attack). In this sense, processing both closes the current attack and operationally enables the next.

What this means for defenders

Today, AI is primarily an efficiency multiplier across every stage of the attack lifecycle outlined above. Threat actors are using it to develop resources faster, stand up infrastructure more easily, automate reconnaissance, execute attacks with less manual effort, and process stolen data at scale. The underlying attacks are familiar. The speed and accessibility are not.

The trajectory points toward increasing autonomy. Google’s PROMPTFLUX and PROMPTSTEAL show malware that uses LLMs as a runtime component, adapting its own behavior on the fly. Anthropic’s November 2025 report on a campaign attributed to a Chinese state-sponsored group documented agentic AI autonomously executing intrusions across roughly 30 targets, with human operators intervening only at strategic decision points. As these tools mature, we may see the emergence of agent-nets: coordinated networks of specialized agents working together to execute larger, multi-stage campaigns.

The first step for defenders is visibility. You need to see agentic traffic hitting your properties and distinguish it from human users, which is what AgenticTrust is built to do. From there, you can define what actions agents are permitted to take on your site, while Sightline continues to block the known attack patterns that AI is making easier to run.

Get visibility and control over AI agents and agentic browsers on your website.