Spotify Removes Over 75 Million “Spam” Tracks and Tightens Rules on AI-Generated Music

Spotify has taken a major step to protect artists and listeners by removing over 75 million low-quality or AI-generated tracks from its platform. The company confirmed that this massive cleanup targeted “spam” uploads, duplicated songs, ultra-short clips made to exploit royalty systems, and voice-cloned recordings created without consent.

What Sparked the Decision?

The rise of artificial intelligence tools has made it easier than ever for bad actors to flood streaming platforms with low-effort content or AI deepfakes imitating real artists. Spotify said that these uploads, though only a small fraction of total streams, distort catalog data and skew royalty payouts. The cleanup, which began earlier this year, marks the company's strongest stance yet against the abuse of AI in music.

Stronger Rules and New Safeguards

Alongside the mass removal, Spotify introduced a set of new policies designed to ensure transparency and authenticity on the platform:

  • An updated voice-impersonation policy that lets artists report and remove unauthorized deepfakes.

  • An improved anti-spam filter that detects duplicate uploads, keyword-stuffed titles, and ultra-short tracks designed to game per-stream royalties.

  • Industry collaboration on clear AI-usage labels, delivered through DDEX metadata standards.

Spotify emphasized that it is not banning AI-assisted music altogether, but that such music must be created responsibly and disclosed properly.

Industry Reaction

The move has been praised across the music industry. Analysts see it as a necessary effort to restore fairness in the streaming economy, while major labels like Universal and Warner have publicly backed Spotify’s stance. However, experts warn that this will be an ongoing challenge as generative tools continue to evolve.

What This Means for Artists and Listeners

Spotify said that legitimate artists and real human listeners remain largely unaffected, but that fraudulent uploads undermine trust in the platform. The new safeguards give creators more control over their voice and likeness, and put pressure on distributors to verify content sources more rigorously.