This week I logged on to Suno, an artificial intelligence music platform. I had just read a new study that found that most participants couldn’t distinguish Suno’s music from human compositions, and I wanted to try it for myself. I thought of a song that meant something to me—Buffalo Springfield’s “For What It’s Worth.” I’d first heard the tune when I was 17 years old, sitting in my stepfather’s kitchen in rural Virginia as he sang and strummed a guitar he’d made by hand. Released 30 years earlier, in December 1966, the song was a response to the Sunset Strip curfew riots—counterculture-era clashes between police and young people in Los Angeles. With my own guitar in hand, I’d set to learning the chords, trying to understand the feeling it had given me.

Now, at the computer, I prompted the AI to create a “folk-rock protest song, 1960s vibe ... male vocals with earnest tone.” The generation took seconds.

With my headphones on, I listened, imagining myself in a cafe as the song came on over the sound system. Knowing it was AI-generated made me search for signs of artificiality, but I doubted I could have distinguished it from a human-made song in a blind comparison. And though it didn't give me a frisson or make me want to play it on repeat, most songs don't.

The paper on AI music, a preprint that has not yet been peer-reviewed, drew its AI samples from thousands of songs posted to a Reddit board where users share Suno-generated music. The researchers presented participants with pairs of songs and asked them to identify which one had been generated by AI. Participants chose correctly 53 percent of the time, barely better than chance, though when the human and AI songs in a pair were stylistically similar, accuracy rose to 66 percent. But AI generation models update frequently, and by the time the study was released as a preprint, a more advanced Suno model was already available.
