
AI-Powered Streaming Fraud: How to Make a Hit Song Nobody Listens To

Read time: 11 minutes

October 21, 2025

AI, Audio Fraud, Threat Intelligence


Last year, billions of music streams were generated by bots, diverting millions of dollars in royalties away from real artists.

This is streaming fraud: artificial plays generated to mimic genuine listener engagement, such as bots or listener farms looping tracks to collect royalties without any real audience. And with the rise of generative AI, the problem is about to get exponentially worse.

In this Satori Threat Intelligence post, we’ll break down how fraudsters are using generative AI to mass-produce music, generate fake streams, and manipulate playlists, and how the industry can fight back. 

The Micro-Cents Paradox: Why Streaming Fraud is So Profitable

At the center of most large-scale streaming fraud schemes are organized fraud operators, ranging from individual actors to well-resourced groups. When a track is streamed, the payout to the rightsholder is typically around $0.003–$0.005 per play. This minuscule payout may seem insignificant, but when a threat actor can generate millions of tracks and billions of streams, the potential profit escalates quickly. And because royalties are drawn from a finite pool, every fraudulent claim directly dilutes the income of legitimate artists, songwriters, and rightsholders.
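To make the math concrete, here is a back-of-the-envelope sketch in Python. The catalog size and per-track play counts are purely illustrative assumptions, not figures from any specific case:

```python
# Illustrative economics of large-scale streaming fraud.
# All quantities below are hypothetical assumptions chosen for scale.

PAYOUT_PER_STREAM = 0.004     # midpoint of the ~$0.003-$0.005 per-play range
TRACKS = 50_000               # size of an AI-generated catalog (assumed)
PLAYS_PER_TRACK_PER_DAY = 60  # kept low per track to avoid obvious spikes

daily_streams = TRACKS * PLAYS_PER_TRACK_PER_DAY
daily_revenue = daily_streams * PAYOUT_PER_STREAM

print(f"Daily streams:  {daily_streams:,}")            # 3,000,000
print(f"Daily revenue:  ${daily_revenue:,.2f}")        # $12,000.00
print(f"Annual revenue: ${daily_revenue * 365:,.2f}")  # $4,380,000.00
```

In that sketch, each individual track earns about 24 cents a day; the profit comes entirely from scale.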

Historically, the biggest obstacle for would-be fraudsters was producing enough music to make streaming scams profitable. Writing, recording, and uploading tracks required significant time and skill. Generative AI has eliminated that barrier to entry, allowing threat actors to create thousands of unique, royalty-eligible tracks in minutes. This shift has turned streaming fraud from a niche scheme into a large-scale operation. But how exactly do threat actors perform streaming fraud at a technical level?

The Tactics Powering Streaming Fraud

A successful streaming fraud operation has two key components: first, the scalable creation of content using AI; second, the generation of seemingly authentic traffic with fake engagement such as likes, views, comments, and clicks, all designed to mimic legitimate user behavior and evade detection.

Content Generation: AI as the New Music Mill

AI has created a fundamental shift in streaming fraud, as threat actors can now generate a near-infinite supply of content for pennies. While threat actors have historically used simple, repetitive loops, HUMAN has seen an overall move toward advanced generative AI models. These models can produce thousands of unique tracks in passively consumed genres like ambient, lo-fi, and instrumental music.

HUMAN’s Satori team is seeing threat actors use AI to create large playlists of generated tracks under dozens of fake artist names, with each track made subtly different so that it bypasses basic audio-fingerprinting detection. As detection methods improve, fraudsters evolve in kind: we have already seen AI music employ more sophisticated techniques to mimic human composition, making it even harder to distinguish legitimate artistry from fraudulent content. This content can include:

  • Music and Podcasts: In audio streaming, fraudsters have been caught uploading hundreds of thousands of AI-generated songs with or without lyrics. The music itself is often bland and repetitive, as its purpose is not artistic expression but to serve as a vehicle for generating streams. Similarly, threat actors create AI-generated podcasts with bot voices reading scraped online text.
Figure 1: A fake artist’s “Verified Artist” Spotify profile with AI-generated songs and inflated stream counts: 90,743 monthly listeners, with the top track at 776,179 streams.
  • Video Clips: Threat actors also populate video platforms with AI-generated content. These are often low-quality, generic, or repetitive videos; however, sometimes they’re very realistic AI videos, which are hard to spot right away. The audio is typically generic stock music or AI-generated narration.
Figure 2: The same fake artist’s YouTube channel (42.6K subscribers, 67 videos) filled with AI-generated music videos, some showing inflated counts such as 1.7M and 2.1M views within six months.

This content is then uploaded to major streaming platforms like Spotify, Apple Music, and YouTube through open digital distribution services such as DistroKid or TuneCore, often under numerous fake artist or creator profiles.

Generating Fake Traffic and Engagement

Once the content is uploaded, the next step is to make it appear popular. Threat actors employ several methods to generate fake traffic and engagement, including traditional botnets, fake accounts and account takeover, click farms, and proxies. Fraudsters also employ streaming-specific techniques such as playlist manipulation and “low and slow” playback:

  • “Low and slow” playback: a large number of different songs or videos are each played a relatively small number of times to appear organic. The fraud is hidden in plain sight, distributed across a vast surface area to avoid creating any obvious spike in plays (see the sketch after this list).
  • Playlist manipulation: fraudulent streams are used to artificially boost a song or video’s rank on popular playlists, which are key drivers of discovery and organic growth on many platforms.
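A toy simulation makes the contrast with old-school botting visible. The per-track alert threshold here is an invented stand-in for whatever volume checks a platform might run:

```python
# Toy model: the same 1,000,000 fraudulent daily plays, delivered two ways.
# The threshold is an illustrative assumption, not a real platform value.

TOTAL_PLAYS = 1_000_000
SPIKE_THRESHOLD = 5_000  # naive per-track daily alert level (assumed)

# "High and fast": concentrate all plays on a handful of tracks.
high_and_fast = [TOTAL_PLAYS // 10] * 10         # 10 tracks, 100,000 plays each

# "Low and slow": spread plays thinly across a huge AI-generated catalog.
low_and_slow = [TOTAL_PLAYS // 20_000] * 20_000  # 20,000 tracks, 50 plays each

def tracks_flagged(catalog: list[int]) -> int:
    return sum(1 for plays in catalog if plays > SPIKE_THRESHOLD)

print(f"High and fast: {tracks_flagged(high_and_fast)}/10 tracks flagged")    # 10
print(f"Low and slow:  {tracks_flagged(low_and_slow)}/20000 tracks flagged")  # 0
```

Both catalogs earn identical royalties, but only the first trips a per-track volume check, which is why defenders increasingly aggregate signals at the account and catalog level.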


How Fraudsters Manage Botnets with AI 

The role of AI extends beyond content creation; sophisticated fraudsters now use AI to manage the bot networks themselves. These AI systems automate the spoofing of digital identities, rotate through proxies and VPNs to simulate geographically diverse listeners, and mimic human-like listening patterns to better bypass the anti-fraud systems employed by streaming platforms. This creates a dynamic where AI is not only the factory for the fraudulent product (the music) but also the engine of its fraudulent consumption (the streams):


Figure: “Evolving Music Streaming Fraud” — traditional botting (pre-AI) versus AI-driven fraud (post-AI), compared across five categories:

  • Content source: traditional operations use a limited catalog of stolen or repetitive tracks; AI-driven operations use a virtually unlimited catalog of unique tracks generated by AI music platforms.
  • Scale of catalog: traditional is small (dozens to a few hundred tracks); AI-driven is massive (hundreds of thousands to millions of tracks).
  • Streaming strategy: traditional is “high and fast” (concentrates millions of streams on a small number of tracks); AI-driven is “low and slow” (distributes billions of streams across a vast catalog, with each track receiving a relatively low number of plays).
  • Evasion tactics: traditional uses basic IP spoofing and simple proxies; AI-driven uses sophisticated, large-scale VPNs and AI-managed bots that mimic human behavior.
  • Detection difficulty: traditional is relatively low (obvious, anomalous spikes are easily flagged by algorithms); AI-driven is high (avoids large single-track spikes and blends in with legitimate streams).
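Even AI-managed bots tend to leave statistical fingerprints that defenders can exploit. One example is unnaturally regular gaps between plays; the sketch below is a hypothetical illustration of that idea, not a production detector:

```python
from statistics import mean, pstdev

# Toy heuristic: scripted playback produces unnaturally regular gaps between
# plays, while human listening is bursty. The interpretation of "low" vs
# "normal" variation here is an illustrative assumption.

def gap_regularity(play_gaps_sec: list[float]) -> float:
    """Coefficient of variation of inter-play gaps (near zero = robotic)."""
    return pstdev(play_gaps_sec) / mean(play_gaps_sec)

bot_gaps = [181.0, 180.5, 181.2, 180.8, 181.1]  # a bot looping a ~3 min track
human_gaps = [95.0, 240.0, 30.0, 600.0, 185.0]  # bursty, irregular listening

print(f"bot:   {gap_regularity(bot_gaps):.4f}")    # ~0.001 (suspiciously steady)
print(f"human: {gap_regularity(human_gaps):.2f}")  # ~0.86  (normal variation)
```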

Listener Farms & Automated Playlist Stuffing

On underground marketplaces, including the dark web and private forums, threat actors offer a range of sophisticated services designed to generate fraudulent streams. These offerings include everything from selling pre-loaded, “ready to use” accounts on major streaming platforms to comprehensive fraud-as-a-service solutions.


Figure 3: A listing titled “MUSIC STREAMING PREMIUM ACCOUNTS” on a dark marketplace, offering premium accounts (with ID, name, rate per 1K, minimum, and maximum order columns) for services including Amazon Music USA, USA Deezer Premium, US Apple Music Premium, USA Qobuz Music, UK SoundCloud Go+, and Audiomack Premium, at rates of roughly $2.50–$6.50 per 1,000 accounts.

At the core of these operations are listener farms, which, unlike purely bot-based traffic, utilize large arrays of real physical devices, often hundreds of smartphones programmed to stream music 24/7. 

Figures 4-5: A dark web shop selling Spotify plays, starting at $0.005 per play with “Quality Plays from US, CA, UK, EU & Asia” and claiming “120M+ Plays Delivered” to “1.5K+ Clients.” Packages include Spotify plays ($6.90 for 1,000–8,000 plays), monthly listeners ($7.20 for 500–50,000, at a 1:1 ratio of plays to listeners), and “advanced plays” from 10,000 plays at $54, marketed as “100% compliant and safe with fast delivery.”

To evade detection and appear legitimate, these farms are backed by advanced botnets that leverage residential proxies, VPNs, and browser-automation frameworks such as Selenium and Puppeteer. This makes fraudulent activity appear as legitimate streams originating from countless IP addresses across varied geographic locations. The result is a scalable, high-volume fraud infrastructure that clients can hire for two primary purposes: to artificially inflate an artist’s popularity metrics, or to directly siphon royalties from the platform’s pro-rata shared pool, diverting income from legitimate creators.
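Those farms also create account-level artifacts that no human listener would. A minimal sketch of two such checks, assuming per-account daily telemetry is available; both thresholds are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AccountDay:
    account_id: str
    listening_hours: float  # total audible playback in one day
    distinct_ips: int       # distinct IPs observed for the account that day

# Hypothetical thresholds; a real system would tune these on labeled data.
MAX_PLAUSIBLE_HOURS = 20.0  # sustained near-24/7 playback is nonhuman
MAX_PLAUSIBLE_IPS = 15      # heavy same-day proxy rotation

def is_suspicious(day: AccountDay) -> bool:
    return (day.listening_hours >= MAX_PLAUSIBLE_HOURS
            or day.distinct_ips >= MAX_PLAUSIBLE_IPS)

for day in [
    AccountDay("farm-device-017", listening_hours=23.8, distinct_ips=42),
    AccountDay("regular-listener", listening_hours=3.5, distinct_ips=2),
]:
    print(day.account_id, "->", "suspicious" if is_suspicious(day) else "ok")
```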

Fraudsters also perform playlist stuffing, in which they create playlists with popular-sounding titles (such as “Rainy Day Lo-Fi” or “Focus Music for Studying”) and populate them with their AI-generated tracks alongside a few legitimate hits. They then use listener farms to drive traffic to these playlists, which can occasionally trick the platform’s recommendation algorithm into surfacing them to real users. One simple structural giveaway is sketched below.
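Stuffed playlists tend to be dominated by a small cluster of unknown artists, whereas organic playlists draw from many. A minimal sketch of that check (the artist names and track mix are invented examples):

```python
from collections import Counter

def top3_artist_share(track_artists: list[str]) -> float:
    """Fraction of a playlist owned by its three most frequent artists."""
    counts = Counter(track_artists)
    top3 = sum(n for _, n in counts.most_common(3))
    return top3 / len(track_artists)

# Invented examples, 50 tracks each.
organic = [f"Artist {i}" for i in range(50)]  # 50 distinct artists
stuffed = (["Ghost Artist A"] * 18 + ["Ghost Artist B"] * 15
           + ["Ghost Artist C"] * 12 + [f"Hit Artist {i}" for i in range(5)])

print(f"organic: {top3_artist_share(organic):.0%} from top 3 artists")  # 6%
print(f"stuffed: {top3_artist_share(stuffed):.0%} from top 3 artists")  # 90%
```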

How to Identify Nonhuman Artists

The most telling characteristic of a fraudulent or nonhuman artist is their existence as a “digital ghost”: a phantom that exists only within the streaming platform’s ecosystem.

  • Limited verifiable online presence: legitimate artists, even emerging ones, typically have a multi-faceted online presence. A fraudulent artist, by contrast, will almost certainly have no official website, non-existent or threadbare social media accounts, no press coverage, no tour history, and no verifiable biography outside of Spotify (and even that biography is most likely generated).
Figure 6: A fake artist’s generic Instagram profile.

Figure 7: A bare-bones “Verified Artist” Spotify profile for a fake artist named “Thrills,” with a blank avatar, 11 monthly listeners, and a largely empty page that serves only as a container for tracks.
  • Association with “music mills”: a significant number of these nonhuman artists are linked to a small cohort of production companies or labels that specialize in the mass production of generic “mood music”. Some companies have been identified as sources for hundreds of pseudonymous artists that populate Spotify’s instrumental playlists.  

Beyond profiles, streaming platform data is a powerful diagnostic tool for identifying potential artificial streaming activity. Red flags include the following (minimal versions of these checks are sketched after the list):

  • Sudden, unexplained spikes: the most obvious indicator is a dramatic and sudden spike in streams that does not correlate with any promotional activity, press coverage, social media trend, or other real-world event. This spike is often followed by an equally abrupt drop-off as the fraudulent campaign ends.
Figures 8-9: Fake artists’ Spotify profiles with anomalous spikes on single tracks. One profile (51,920 monthly listeners) shows a top track with 1,854,022 plays while its other popular tracks range from 3,035 to 16,244; another has only 316 monthly listeners yet 59,922 plays on its top track, with the next tracks at 1,663 and 1,393.
  • Anomalous geographic sources: if an artist’s analytics show a sudden surge in streams from a country or city where they have no established fanbase or have conducted no marketing, it is a strong sign of bot activity, which is often routed through servers in specific regions.  

Figure 10: A fake US-based artist’s Spotify analytics showing anomalous listener geography: all five top cities are in India (Pune, Mumbai, New Delhi, Jaipur, and Delhi), together accounting for nearly 14,000 of the profile’s 51,920 monthly listeners.
  • Abnormal stream sources: the “source of streams” data can be revealing. A healthy stream profile is typically diverse, with plays coming from listeners’ own playlists and library, editorial playlists, and algorithmic sources. A profile dominated by streams from “other” or a handful of obscure, suspicious third-party playlists is a significant red flag.  
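Minimal versions of these three checks might look like the following sketch; every threshold is an illustrative assumption, not a value any platform is known to use:

```python
from statistics import mean

def spike_flag(daily_streams: list[int], ratio: float = 10.0) -> bool:
    """Flag a day whose volume dwarfs the trailing average."""
    baseline = mean(daily_streams[:-1]) or 1
    return daily_streams[-1] / baseline >= ratio

def geo_flag(streams_by_country: dict[str, int], home: str,
             share: float = 0.8) -> bool:
    """Flag catalogs where most plays come from a single non-home market."""
    total = sum(streams_by_country.values())
    top = max(streams_by_country, key=streams_by_country.get)
    return top != home and streams_by_country[top] / total >= share

def source_flag(streams_by_source: dict[str, int], share: float = 0.7) -> bool:
    """Flag profiles dominated by 'other'/unknown third-party playlist sources."""
    total = sum(streams_by_source.values())
    return streams_by_source.get("other", 0) / total >= share

# Invented example data mirroring the figures above.
print(spike_flag([120] * 28 + [15_000]))                                # True
print(geo_flag({"IN": 13_852, "US": 400}, home="US"))                   # True
print(source_flag({"other": 8_000, "editorial": 300, "library": 700}))  # True
```

In practice no single flag is conclusive; platforms weigh combinations of these signals against an artist’s promotional history.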

How the Industry is Fighting Back

The music industry is joining forces to battle this growing threat. A notable initiative is the formation of the Music Fights Fraud Alliance, a coalition of major and independent record labels, distributors, and streaming services. This alliance includes leading organizations such as Universal Music Group, Warner Music Group, and Sony Music Entertainment, alongside Spotify, Amazon Music, and YouTube Music. Through coordinated information sharing and joint strategies, the alliance seeks to improve the identification and prevention of fraudulent activity across digital streaming.

Into the Future

While AI presents exciting creative possibilities for musicians, it also plays a major role in streaming fraud, and that role is set to escalate as generative models advance. Beyond the current flood of low-effort, generic tracks, we are likely to encounter highly convincing deepfakes: AI-generated audio and video that closely emulates the unique styles of real, popular artists, directly challenging the integrity of royalty distribution.

Fraudulent operations will also evolve, deploying sophisticated bots that replicate nuanced behaviors of genuine listeners. These bots will no longer simply inflate play counts; instead, they will curate seemingly authentic listening histories, interact with playlists, and alternate between devices, making themselves almost indistinguishable from legitimate users.

Moreover, the scope of these threats is expanding beyond music into podcasts and other similar formats, where artificially inflated audience metrics could mislead advertisers and falsify revenue streams. In response, streaming platforms will be forced to adopt advanced AI-driven detection methods—such as content analysis and behavioral biometrics—moving well beyond traditional bot-detection strategies. There may also be a need to reconsider and redesign royalty models to safeguard the creative economy in the face of these evolving automated threats.
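On the content-analysis side, one plausible direction is catalog-level similarity screening: music mills tend to emit near-duplicate tracks, so tight clusters in audio-embedding space are a signal. The sketch below assumes embeddings already exist (how they are computed, e.g. by a learned audio encoder, is outside its scope), and the similarity threshold is an invented example:

```python
import numpy as np

def near_duplicate_pairs(embeddings: np.ndarray, threshold: float = 0.98) -> int:
    """Count track pairs whose cosine similarity exceeds the threshold."""
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = unit @ unit.T
    pairs = sims[np.triu_indices(len(embeddings), k=1)]
    return int((pairs >= threshold).sum())

# Stand-in data: a "music mill" catalog of 20 tracks that are tiny
# perturbations of one template, versus 20 unrelated tracks.
rng = np.random.default_rng(0)
template = rng.normal(size=128)
mill = np.stack([template + rng.normal(scale=0.01, size=128) for _ in range(20)])
varied = rng.normal(size=(20, 128))

print(near_duplicate_pairs(mill))    # ~190 of 190 pairs are near-duplicates
print(near_duplicate_pairs(varied))  # ~0 pairs
```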

The fight against AI-driven streaming fraud is not just about protecting royalties; it’s about preserving the value of human creativity in music.

This post is part of our Satori Perspectives series, where HUMAN’s threat researchers share timely insights into the tools, tactics, and procedures shaping the threat landscape. 

Explore more research and intelligence from the Satori team here.
