Behind the Playlist: Why AI Can’t Curate Music (Even If It Sounds Like It Can)
Most people think AI can curate music. It can’t. It just predicts what’s average, and average is invisible.
By Ross Woodhams, Founder, Audalize
There’s a lot of hype right now about AI music. Platforms like Suno and Udio can write entire songs in seconds. Other systems, used by mass-market background music providers, claim they can build the perfect playlist using “AI-powered curation.”
And yes, it’s clever. It’s fast. It’s scalable.
It also completely misses the point.
Because while these systems can assemble music, recommend tracks, and even generate melodies… they don’t understand music at all.
What AI Music Curation Actually Does
Let’s pull back the covers.
Whether it’s generating a playlist or writing an original track, AI music systems are built on the same principles:
Train a model on massive datasets of audio, tags, and listener behaviour
Turn those songs into data (tempo, genre, valence, skip rate, etc.)
Use statistical patterns to predict what comes next
It’s not curating. It’s not composing. It’s predicting.
When AI builds a playlist, it’s not asking, “What’s the emotional journey here?” It’s asking, “What other songs share similar metadata?”
It sees music as numbers in a spreadsheet:
BPM: 90
Valence: 0.8
Energy: 0.6
Genre: chill-electronic-pop-with-male-vocal
It doesn’t know why that track makes you feel something. It just knows other people didn’t skip it. That’s not music intelligence. That’s audio autocomplete.
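To make that concrete, here’s a minimal sketch of what metadata-based “curation” boils down to: every track becomes a short feature vector, and the “playlist” is whatever sits geometrically closest to a seed track. The track names and feature values below are invented for illustration; real systems use far bigger vectors, but the logic is the same.

```python
# Toy illustration of metadata-similarity "curation": tracks are feature
# vectors, and recommendations are just the nearest neighbours of a seed.
# Track names and feature values are made up for this example.
import numpy as np

# columns: tempo (normalised), valence, energy
catalogue = {
    "Track A": np.array([0.45, 0.80, 0.60]),
    "Track B": np.array([0.47, 0.75, 0.62]),
    "Track C": np.array([0.90, 0.20, 0.95]),
    "Track D": np.array([0.50, 0.82, 0.55]),
}

def cosine(a, b):
    # cosine similarity: 1.0 means the metadata points the same way
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def autocomplete_playlist(seed, n=3):
    # rank every other track purely by how close its numbers are to the seed's
    scores = {name: cosine(catalogue[seed], vec)
              for name, vec in catalogue.items() if name != seed}
    return sorted(scores, key=scores.get, reverse=True)[:n]

print(autocomplete_playlist("Track A"))  # nearest neighbours, nothing more
```

Nothing in that loop asks about the room, the hour, or the mood. The only signal is distance in feature space.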
A Quick Experiment That Tells You Everything
A few years ago, I ran a simple experiment using the Spotify API.
I searched for playlists containing a phrase like “Two for Dinner” and scraped every track URI across every one. Each time a song showed up, it scored a point. The more lists it appeared in, the higher its rank.
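If you want to picture the mechanics, it was roughly this. A simplified sketch using the spotipy client library; you’d need your own Spotify API credentials, and scaling to thousands of playlists means paginating the search with its offset parameter, which is omitted here.

```python
# Rough sketch of the experiment: find playlists matching a phrase, then count
# how many of them each track appears in. Assumes spotipy is installed and
# SPOTIPY_CLIENT_ID / SPOTIPY_CLIENT_SECRET are set in the environment.
from collections import Counter

import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())

def rank_tracks(phrase, max_playlists=50):
    results = sp.search(q=phrase, type="playlist", limit=max_playlists)
    counts = Counter()
    for playlist in results["playlists"]["items"]:
        if not playlist:
            continue
        items = sp.playlist_items(playlist["id"], fields="items.track.uri")
        for item in items["items"]:
            track = item.get("track")
            if track and track.get("uri"):
                counts[track["uri"]] += 1  # one point per appearance
    return counts.most_common()

for uri, score in rank_tracks("Two for Dinner")[:10]:
    print(score, uri)
```

That’s the whole trick. No model, no training, just counting how often different humans, each with their own intent, reached for the same song.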
I ran this on thousands of playlists. The result? A single, massive playlist with over 2,500 songs ranked by cultural consensus.
And the top 100 or so?
Unreal. Nina Simone. Nat King Cole. Sinatra. Sade. Even a bit of Lionel Richie made the cut. Warm, nostalgic, textured… exactly what the phrase “Two for Dinner” should feel like.
Further down? You start to get fringe picks. Sade followed by John Mayer, then a weirdly confident bossa nova version of “Careless Whisper.” Not wrong. Just… bold. But it still makes emotional sense.
And then, right at the bottom? Absolute chaos. Tracks that sound like someone fed an AI “romantic dinner music” and it returned a dubstep remix of Slayer recorded on a potato. A reminder that humans are beautifully, bafflingly inconsistent.
And that’s the difference.
Every one of those playlists was made by a person. There was intent. Flawed, messy, brilliant intent. That intent gave the whole thing structure, even if it drifted.
AI Doesn’t Do Intent
AI doesn’t know what “Two for Dinner” means. It doesn’t know time of day. Or lighting. Or whether the wine’s open or still in the bottle.
It doesn’t ask, “What fits here emotionally?” It asks, “What’s statistically similar to other tracks in this cluster?”
That’s why most AI-powered playlists feel hollow. They’re not wrong. But they’re not right, either. They’re optimised for consistency, not connection.
The Risk of Optimising for the Average
What happens when background music becomes machine-chosen? You get what’s statistically safest. You get playlists built on collective meh.
You get spaces that sound like everyone else, because they’ve all bought into the same automated logic.
This is what’s happening in background music right now.
Mass-market platforms are automating curation. They’re building music experiences using models trained on Spotify skip data, genre tags, and popularity scores.
It’s neat. It’s scalable. And it’s utterly soulless. Because in the end, these models don’t know music.
They just know data that once came from music.
Ross is CEO and Founder of Audalize, the company helping venues sound unforgettable. Learn more at audalize.com.