Ugh, algorithms

Hit and Miss #171

We’re living in one of the least interesting timelines, as far as consumer algorithms go. Allow me to weave a few threads together…

1. Revealed preference

Spotify Wrapped, the service’s personalized yearly summary, recently came out. If you’re not familiar, it summarizes your year in listening to music (on Spotify): your top songs, top artists, and so on.

Here are my top artists:

  1. Bruce Springsteen
  2. Billy Joel
  3. Pavarotti
  4. Elton John
  5. Supertramp

None of these are surprising. (Springsteen is my #1 every year.) Yet they puzzled me: surely male artists don’t dominate my listening; surely I have a bit more genre-istic variety than this?

Listing those I turn to most, I’d include Cher, Tina Turner, the original Mamma Mia! cast, and so on. (Springsteen and Pavarotti would remain; the other three—while great—wouldn’t rank so high.)

This might seem a classic case of revealed preference. Despite the values I think I hold, my listening history reveals my “true” self. But this isn’t the case: your listening history, depending on how you use Spotify, likely isn’t a set of intentional choices.

2. Choosin’ or cruisin’?

Courtney made an observation that perfectly captures this distinction:

i wish the top songs playlist would not count plays from spotify autoplay playlists; that it was a capsule of active listening instead of passive

i don’t listen to a lot of justin bieber but i listen to justin bieber a lot if you know what i mean

I mostly use Spotify’s “radios” to listen to music: you pick a starting point (a song, an artist, a playlist, whatever) and then The Algorithm™ plays an endless list of similar tunes. There are only two “choosins”: setting the starting point, and skipping any unwanted songs. For the rest, I’m cruisin’, taking what turns up.

Such algorithm-driven listening is passive—and I’m okay with that! While I’ll still throw on an album now and again, I genuinely enjoy the radio experience.

But it can also problematize the data that supposedly portrays your music tastes. A few years ago, Simone Rebaudengo spoke at CanUX about playfully training consumer algorithms: he fed wildly different genres to his Spotify account to coax out more interesting recommendations. (If you’ve spent enough time with Spotify, you may start to feel that its algorithmic tools churn out the same music ad nauseam; an “algorithmic cul-de-sac”, if you will.) Either way, all that passive listening throws off your counts. Sure, it’s what you’re listening to, but is it what you’re deciding to listen to?

Similarly, if you’re an avid exerciser, your running or exercise playlist may dominate your stats, even though that’s not necessarily your taste “for the sake of music”. (I’m assured by more active friends that this is a thing.)

The simple counts of my listening history only tell one version of the story. But what of a subtler, more nuanced one: to which artists did I choose to listen?

3. Boring algorithms, boring analysis—brighter futures?

Spotify Wrapped is an interesting example of consumer-facing data analysis. Your data is visualized, becoming its own product. (Nowadays, Wrapped is only available on mobile—explicitly built for social media sharing, to be consumed and circulated.)

And what an unimaginative example it is!

What if your Spotify Wrapped weren’t just “check out my stats!”, but gave you control over the input, creating a more expressive Wrapped? Still informed by your data, but now curated—since data, after all, can’t really speak for itself.

Take the example of an exercise playlist. What if you could filter those “listens” out and reconstruct the Wrapped?

Or: What if you could filter out the automatically-suggested songs, only showing your “intentional” Wrapped?
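As a sketch of what that could look like: assume a hypothetical export of plays, each tagged with how it started. (Spotify exposes no per-play “context” field like this; the field names and data are entirely invented for illustration.)

```python
from collections import Counter

# Hypothetical listening-history records. Spotify provides no such
# per-play "context" tag; this structure is purely illustrative.
plays = [
    {"artist": "Bruce Springsteen", "context": "user_choice"},
    {"artist": "Justin Bieber",     "context": "autoplay"},
    {"artist": "Justin Bieber",     "context": "autoplay"},
    {"artist": "Pavarotti",         "context": "user_choice"},
    {"artist": "Cher",              "context": "user_choice"},
    {"artist": "Cher",              "context": "workout_playlist"},
]

def top_artists(plays, exclude_contexts=()):
    """Count plays per artist, optionally dropping whole contexts."""
    counts = Counter(
        p["artist"] for p in plays if p["context"] not in exclude_contexts
    )
    return [artist for artist, _ in counts.most_common()]

# The raw Wrapped, versus an "intentional" Wrapped with autoplay
# and the workout playlist filtered out:
print(top_artists(plays))
print(top_artists(plays, exclude_contexts={"autoplay", "workout_playlist"}))
```

The point isn’t the code, which is trivial, but the input: one boolean-ish tag per play, which the service surely has internally, would be enough to let you curate your own Wrapped.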

Ugh, algorithms. They guide so much digital activity, yet we can’t see or tweak their inputs, can’t trace or influence their logics.

Though it’s likely not meant for situations like this, I wonder whether the proposed Consumer Privacy Protection Act’s provisions on transparency for automated decision-making could illuminate these algorithms:

63(3): If the organization has used an automated decision system to make a prediction, recommendation or decision about the individual, the organization must … provide them with an explanation of the prediction, recommendation or decision….

Our digital consumer economy is enabled by data—why can’t we more explicitly and intentionally engage with it?

Imagine being able to tweak your Twitter (or Facebook, or LinkedIn, or or or) timeline, to reweight its hidden factors. (Or to just disable it and have an actual chronological feed. What a concept!) Genuine value! Of course, the platforms claim that the magic algorithm is part of the service (serving up sponsored content at regular intervals). Well, for one, that doesn’t reflect well on the content. And, crucially, need it be so secretive?
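A toy sketch of what user-adjustable reweighting might look like. The factor names and weights are invented (no platform exposes anything like this), which is rather the point:

```python
from datetime import datetime, timezone

# Invented posts with invented signals; real timeline algorithms
# keep their inputs and weights opaque.
posts = [
    {"text": "chronologically first", "likes": 2,   "is_sponsored": False,
     "posted": datetime(2021, 1, 1, tzinfo=timezone.utc)},
    {"text": "viral sponsored post",  "likes": 900, "is_sponsored": True,
     "posted": datetime(2021, 1, 2, tzinfo=timezone.utc)},
    {"text": "friend's quiet update", "likes": 5,   "is_sponsored": False,
     "posted": datetime(2021, 1, 3, tzinfo=timezone.utc)},
]

def timeline(posts, w_likes=1.0, w_sponsored=0.0):
    """Rank posts by a user-tunable score instead of a hidden one."""
    def score(p):
        return w_likes * p["likes"] + w_sponsored * (100 if p["is_sponsored"] else 0)
    return sorted(posts, key=score, reverse=True)

def chronological(posts):
    """The 'what a concept' option: newest first, no scoring at all."""
    return sorted(posts, key=lambda p: p["posted"], reverse=True)
```

With the default weights, engagement wins and the sponsored post floats to the top; set `w_sponsored` negative and it sinks; call `chronological` and the scoring disappears entirely. None of this is hard; it’s just withheld.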

Anyway, that’s more than enough from me. All the best for the week ahead!