A guilty plea in an $8 million streaming-royalties fraud case is sharpening scrutiny on how AI-generated music can be weaponized to game platform payouts. Prosecutors say a North Carolina man coordinated the upload of hundreds of thousands of synthetic tracks and used automated bot farms—up to 10,000 fake accounts, plus VPNs and falsified records—to generate massive play counts across major services including Spotify, Apple Music, Amazon Music and YouTube Music. The case underscores an escalating integrity battle for streaming services as AI lowers the cost of producing content at scale, and raises questions about where terms-of-service violations end and federal criminal fraud begins.
Aisha Malik / TechCrunch : Spotify is beta testing Artist Profile Protection, allowing artists to review releases before they go live to prevent AI tracks from being attributed to them — At a time when AI slop is flooding music streaming platforms, Spotify is beta testing a new “Artist Profile Protection” …
A man pleaded guilty to deploying roughly 1,000 bots that streamed AI-generated songs and fraudulently inflated royalty payments, prosecutors say. The scheme generated about $8 million in illicit payouts by creating fake listening activity across streaming platforms and monetizing AI-created music. Authorities say the operator used automated accounts and shell entities to launder proceeds and exploit royalty systems designed for legitimate artists. The case highlights vulnerabilities in music streaming economics and bot detection, raising concerns for platforms, rights organizations, and AI-generated content governance. It matters because streaming fraud undermines payouts for real creators and exposes gaps in platform security and payment verification as AI lowers barriers to mass content generation.
A North Carolina man, Michael Smith, pleaded guilty to running an $8 million fraud scheme that used AI-generated music and thousands of bot accounts to rig streaming royalties across major platforms including Spotify, Apple Music, Amazon Music and YouTube Music. Prosecutors say Smith, working with a co-conspirator and the CEO of an AI music firm, acquired vast catalogs of computer-generated tracks, then deployed automated software, VPNs and bulk-created fake accounts—at times with 10,000 accounts active concurrently—to produce billions of fraudulent streams from 2017 to 2024. The scheme diverted royalty payments away from legitimate artists; Smith also lied to platforms and rights organizations and faces up to five years in prison. The case highlights platform vulnerabilities and is spurring industry moves on AI detection and metadata transparency.
A North Carolina man, Michael Smith, pleaded guilty to running a years-long fraud that used AI-generated music and thousands of bot accounts to siphon more than $8 million in streaming royalties from platforms including Spotify, Apple Music, Amazon Music and YouTube Music. Prosecutors say Smith, working with a co-conspirator and the CEO of an AI music firm, uploaded hundreds of thousands of synthetic tracks between 2017 and 2024, used automated software and VPNs to generate billions of plays across up to 10,000 active fake accounts, and falsified records to conceal the scheme. The case highlights rising industry concerns over synthetic content, platform integrity, and efforts by services like Deezer and Apple to detect and label AI music. Smith faces up to five years in prison.
A man pleaded guilty to running an $8 million fraud that used AI-generated music and bot farms to inflate streaming counts and revenues. According to the indictment, the defendant operated up to 10,000 active bot accounts, attempted to sell the streaming fraud as a service to other musicians, and partnered with executives at an AI music company and a promoter to produce hundreds of thousands of AI-created tracks for fraudulent streaming. Prosecutors say the scheme relied on fake or manipulated accounts and generated illicit royalties, raising enforcement questions about when platform terms-of-service breaches become federal crimes. The case highlights risks at the intersection of AI content, streaming economics, and platform abuse.