The Sound of Spotify’s 'Algotorial' Rhythms

The world’s leading music streaming platform has perfected the art of ‘ultra-personalisation’. But has curiosity been lost in modern patterns of algorithmic music consumption?

Spotify LA music studio photographed by Sean Michon

Spotify dominates the UK’s music streaming market, commanding some 33 million users, a position owed in large part to a decade of algorithmic advancement. The outcome of these refinements is what the company calls 'algotorial' technology, a portmanteau that folds editorial input into data-driven automation: an experience shaped by ‘humans + machines’.

On their blog they make plain that ‘at Spotify, personalization, AI, and recommendations are core product features’, offering in-house guidance through the millions of tracks available. What personalisation at this scale offers in convenience, it may quietly cost in discovery.


Distribution strategists hail the Swedish streaming platform’s 'ultra-personalised customer service' as its greatest success and a masterclass in customer retention. For many listeners, the system is a genuine gateway, and without algorithmic guidance, many would simply default to what they already know. Yet that same guiding hand has begun to attract official scrutiny. In 2023, the Centre for Data Ethics and Innovation (CDEI) examined the mechanics of music recommendation systems, warning that algorithms might lead to the homogenisation of taste and produce the ‘musical equivalent of echo chambers’. Praise and unease have grown in lockstep with every iteration of the platform's recommendation system.

Spotify’s recommendation engine is not unlike those behind Netflix or TikTok, built around the same ambition of optimising time spent on the platform. The algorithmic system rests on two pillars: track representations and user representations. Listeners inform their own ‘taste profile’ through searching, listening, skipping or saving, each action feeding the machine a little more signal. On the track side, the algorithm first digests metadata supplied by the artist, then subjects the audio itself to acoustic analysis. The precise mechanics are only partially disclosed; some metrics are straightforward sonic characteristics, while others attempt something more holistic, measuring qualities like ‘danceability, valence, and energy’. More recently, the system has evolved beyond audio analysis, incorporating cultural context such as cover art, press coverage, and critical reception, so that it can ‘reason over both how the track sounds and where it culturally belongs’.
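Spotify's actual models are proprietary and far more elaborate, but the two-pillar idea can be caricatured in a few lines. The sketch below is purely illustrative: the track names, the three feature dimensions (danceability, valence, energy), and the averaging are hypothetical stand-ins, not Spotify's implementation. A 'taste profile' is built by averaging the feature vectors of tracks a user has played, and unheard tracks are ranked by cosine similarity to that profile.

```python
import math

# Hypothetical per-track feature vectors (danceability, valence, energy),
# loosely modelled on the kinds of audio features Spotify has described publicly.
TRACKS = {
    "track_a": [0.90, 0.80, 0.70],
    "track_b": [0.20, 0.30, 0.40],
    "track_c": [0.85, 0.75, 0.90],
}

def taste_profile(listened):
    """A crude 'user representation': the mean of the vectors of tracks played."""
    vecs = [TRACKS[t] for t in listened]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def recommend(listened):
    """Return the unheard track whose vector sits closest to the taste profile."""
    profile = taste_profile(listened)
    candidates = [t for t in TRACKS if t not in listened]
    return max(candidates, key=lambda t: cosine(profile, TRACKS[t]))
```

The toy makes the article's point visible: whatever you have already played defines the centre of the search, and the system's 'best' recommendation is, by construction, the thing most like what you know.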

One mechanism of the ‘algotorial’ system, elucidated on Spotify's engineering blog, is its contextual bandit approach to curation. This system calibrates the balance of content types within recommendation lists and deliberately discourages large deviations from predicted listening patterns. New recommendations must not stand out too much from your existing tastes, and arrive swaddled in well-worn tracks and on-repeat artists. That carefully kept comfort and convenience raises a more uncomfortable question: if an algorithm is engineered to avoid testing its users or disrupting a session, is it not also quietly narrowing the aperture through which new music can pass?
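The explore-versus-exploit trade-off at the heart of any bandit system can be caricatured with a simple epsilon-greedy sketch. To be clear, this is not Spotify's algorithm: a real contextual bandit conditions on user and session features and learns its rates from feedback, whereas the playlist names and the fixed 20% exploration rate below are invented for illustration. It shows only the calibration the article describes: most slots 'exploit' the familiar, and the occasional 'explore' slot smuggles in something new.

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

FAMILIAR = ["known_1", "known_2", "known_3"]  # tracks the user already plays
NOVEL = ["new_1", "new_2", "new_3"]           # candidates outside their history

def build_session(length=10, explore_rate=0.2):
    """Fill a session slot by slot: exploit familiar content with probability
    1 - explore_rate, otherwise explore with an unfamiliar track."""
    session = []
    for _ in range(length):
        arm = NOVEL if random.random() < explore_rate else FAMILIAR
        session.append(random.choice(arm))
    return session

session = build_session()
novel_share = sum(t.startswith("new") for t in session) / len(session)
```

Even in this toy form, the design choice is legible: novelty is rationed, and each new track arrives surrounded by the familiar, exactly the 'swaddling' described above.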

The result, for many users, is an experience that closely resembles a broader modern phenomenon of a 'filter bubble', a term coined by Eli Pariser to describe the personalised information ecosystems that digital platforms quietly erect around their users. One definition captures it precisely as ‘a state of intellectual isolation that allegedly can result from personalized searches when a website algorithm selectively guesses what information a user would like to see based on information about the user’.

Personalisation of this kind now underpins most consumption platforms, and users can find themselves trapped in the small radius around their preferences, impoverished of divergent ideas and in a position from which it is increasingly difficult to broaden interests and information horizons. On Spotify, 33% of new artist discoveries occur within its ‘Made for You’ sessions, so a third of new music comes pre-approved, sitting comfortably within your listening patterns, its edges smoothed by an algorithm's best guess at who you are.

These filter bubbles coalesce into what critic Kyle Chayka calls a ‘filterworld’, a realm in which algorithmic predictions have ‘flattened’ the cultural ecosystem at large. In his investigation of the 'cultural middle zone' that so many now inhabit, one observation is particularly apt: 'Algorithmic feeds stand between the human creators and the human consumers, making an infinite series of decisions about culture.' The implication for Spotify is pointed. Taste, long considered something formed through curiosity, accident, exposure, and connection, is increasingly the output of a system optimised for retention.

In our audio bubbles, are we 'shuffling' collectively towards a listening culture that is ‘fed’ to us, narrower, and less inclined to look up? 

credit: Spotify

An algorithmic audio diet has, predictably, produced demand for solutions. A Forbes guide titled ‘How To Stop Spotify Feeding You The Same Old Songs’ offers practical advice on how to harness your algorithm for 'a well-trained Spotify', presenting ways of tweaking your listening behaviour to coax better results from the machine. The instinct, it seems, is not to look beyond the algorithm but to submit to its rhythms, to become a more legible listener rather than to seek out recommendations unmediated. Yet overtly editorial alternatives do exist. Human curators, music journalists, radio programmes, and apps such as hums can deliver something less predictable; they just need slightly more seeking out than what algorithmic convenience feeds us.

Of course, human curation has its own limitations. It is slower, less scalable, and historically less democratic in whose taste is reflected and whose music is championed. The editorial alternative also asks something of the listener that algorithmic convenience does not: time, patience, and a tolerance for the occasionally misjudged recommendation. Not every listening session is an act of cultural exploration, nor should it be.

Personalisation now permeates almost every feature of Spotify. Even the search bar – the most direct access point to active, self-directed discovery – returns results shaped by listening history rather than neutral relevance. What is more, Spotify's answer to the radio DJ, whose traditional role was to educate and curate beyond the listener's existing frame of reference, is what the platform heralds as 'the very best of Spotify's personalization': an AI DJ in your pocket.

It is Spotify’s latest feature that most completely captures the trajectory this article has traced. The 'prompted playlist' allows users to generate a personalised playlist from a written description and is presented as liberation: 'Until now, unless you were a developer at Spotify and could write your own playlist algorithm, your best ideas might have stayed in your head… There's no longer a tradeoff between control and convenience. You finally get both.' Among Spotify’s many innovations in their algotorial project, this one asks the least of the listener, and marks the furthest point yet in the steady transfer of intentionality and curatorial agency from human to machine.


Spotify did not invent passivity, and the system it has built is remarkably impressive: convenient, and so attuned to individual taste that it can feel almost telepathic. It is demonstrably capable of broadening taste and connecting listeners to new and brilliant music. The question is what becomes of taste when its defining characteristic becomes predictability. Comfort is a perfectly reasonable feeling to curate for, but a listening culture that has grown content within the radius of its own preferences must look and listen outside its own sounds.