In 2017, the industry website Music Business Worldwide ran a scoop on a number of ‘artists’ on Spotify that it had identified as potentially fake. Its reporters found songs on some of the streaming platform’s most-listened-to playlists – Ambient Chill, Piano In The Background and Music For Concentration, for example – that had racked up millions of plays, yet the artists they were attributed to had no footprint outside of Spotify: no social media accounts, no websites, no concerts, no reviews. Couched in terms of Spotify’s ongoing wrangles with labels and artists over royalty payments, the implication was that the streaming giant was avoiding costly fees by bringing the production of content on its platform in-house; that either the ‘artists’ stocking these popular background music playlists were guns-for-hire on its payroll churning out generic tunes, or that it had gone a step further and tasked an algorithm with the dirty work.
Beyond the financial impact on hardworking musicians, the affair raised the question of whether making music – and art in general – should be a purely human concern. A range of music-making software on the market now employs machine learning to generate original tracks on demand. Amper Music, Jukedeck, Watson Beat and Flow Machines are among the products that enable you, regardless of your musical competency, to ‘create’ a piece of music to a specific set of parameters.
It’s at this interface between human and machine creativity that we present AI AUDIO LAB – a month-long interactive installation that places the Liverpool public within this new world of augmented creativity. Working with the University of Liverpool’s senior music lecturer Dr Robert Strachan (author of Sonic Technologies: Popular Music, Digital Culture And The Creative Process, which assesses the consequences of digitisation for music making), we invite you to step into a virtual recording studio and shape the creation of an artificial intelligence-composed piece of music, in a genre of your choice. We will also be inviting a selection of artists to create new work within the installation, challenging them to pit their own ingenuity against the software in a bid to probe its capabilities and limitations. The project will be hosted within SEVENSTORE’s new Baltic Triangle location and is intended to encourage participants to critically engage with the idea of a musical future built on a partnership between AI and human creativity.
Ahead of the installation going live, I met Robert Strachan to talk about the wider implications of this technology – not just in the way music is produced, but in the way it is consumed. It’s a conversation that has implications that go beyond music, and reach to the way we view progress in general. “On the one hand there’s the idea of artificial intelligence producing music,” Strachan says as we sit down for our conversation, “but then there’s artificial intelligence as a tool towards creativity. So some people are calling it augmented creativity. And in a sense that’s what technology does, right? It’s a trigger for creativity.”
A lot of the AI music software available today is marketed at people with no existing musical knowledge or skill, which, in some ways, is very democratic. Beyond its impact on trained musicians, this must also have an effect on the kind of music produced.
RS: Yeh. If you think about ways we value music and musicianship, on the one hand there’s the authenticity idea – that you’ve lived a certain thing and you’ve paid your dues and you’ve learnt things and developed your virtuosity – but on the other hand it’s the promise of the idea, right? So, ultimately, could it be a completely punk rock thing that actually you don’t need to have any musical skill, all you need is good ideas to put into the algorithm? The problem with that is what is in the algorithm is generated through existing work. Those deep neural learning technologies group things together and they create new things by having likelihoods. And actually, in itself, that’s kind of overly generic. So, if you listen to AI music in the way that it’s already being done, it can do a pretty good job of producing some generic background music in certain respects, but in terms of ‘can it actually create new creative trajectories?’, that’s another issue.
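Strachan’s description of generation ‘by having likelihoods’ can be made concrete with a toy sketch: a first-order Markov chain counts how often each note follows another in some training melodies, then samples new sequences from those counts. This is a deliberately simplified illustration (the note names and melodies here are made up), not how any of the products mentioned above actually works.

```python
import random
from collections import defaultdict

# Toy training data: note sequences standing in for a corpus of melodies.
melodies = [
    ["C", "E", "G", "E", "C"],
    ["C", "E", "G", "A", "G", "E"],
    ["E", "G", "A", "G", "E", "C"],
]

# Count how often each note follows each other note - the "likelihoods".
transitions = defaultdict(list)
for melody in melodies:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

def generate(start="C", length=8, seed=None):
    """Walk the chain, sampling each next note from its observed followers."""
    rng = random.Random(seed)
    notes = [start]
    for _ in range(length - 1):
        followers = transitions.get(notes[-1])
        if not followers:  # dead end: no observed continuation
            break
        notes.append(rng.choice(followers))
    return notes

print(generate(seed=42))
```

Because the chain can only recombine transitions it has already seen, its output stays firmly inside the idiom of its training data – which is precisely the ‘overly generic’ limitation Strachan points to.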
There’s an inherent scepticism towards machine-learning software, possibly born of a lack of knowledge about how an algorithm actually works. Do you think the onus is on musicians and artists to show that this is not some dystopian world where the machines are going to take over, but just another tool?
Yeh… in a way. Think about the history of different technology within music – synthesisers, multi-tracks, MIDI controllers, samplers – they’ve all been seen as somehow inauthentic or duplicitous, or that there’s a trick of some sort, or that the technology can fake musical competency, or whatever. I think about the way that synthesisers were understood by the Musicians’ Union in the 1970s and 80s, that they weren’t seen as ‘real musical instruments’ [the MU tried to ban synths from sessions and live performances in 1982], and it took people using those in creative ways for them to be understood as technologies that were valued and inherently part of musical genres. But that occurs over time, right? And it takes years for those tools to be incorporated. But, I think, at the core of the dystopia thing is the artificial intelligence, that idea that human agency is completely taken away. It’s got this natural dystopian ring to it. If you think about Orwell’s 1984… think of the scene with the washer woman: she’s hanging up the clothes and she’s singing this song, and Winston’s first thinking, ‘Oh, it’s a song which has been generated by a computer to create this false emotion’, and it’s that idea of the powers that be producing false consciousness through control and a machine producing music – how can it possibly have any kind of human resonance?
The irony being, as with all ‘artificial’ processes, that they’ve been created by humans to mimic a lot of the things we want them to do.
Yeh, but I’m saying that example because that’s the mythology about artificial intelligence; Orwell wrote that in the 1940s! It’s already establishing a kind of dystopian message about computers taking over.
It’s never really gone away, has it?
Exactly. Although, if you think about the way that people celebrate the idea of the post-human you could say that, actually, we’re very comfortable with technology. We use it in our daily lives to do all sorts of things – to work, to play. It’s already an extra part of our consciousness.
AI already has its fingerprints on the way we consume music, so the idea of it making the leap into other areas of creativity and art shouldn’t surprise us.
We’re already in the AI era in terms of music – that’s just a fact of life. Artificial intelligence is all around us; it’s part of the way we interact with technologies and the way we live our lives, and we use it to our advantage all the time. Spotify uses deep-learning technologies to understand what we like and recommend music to us. It draws not only on recommendations from people who have similar musical taste, but also analyses the sonic properties of the music itself in order to build personalised playlists. I think Spotify is interesting because it taps into a fundamental change in the way that a lot of people listen to music. Music becomes not a thing that we own but a service that we consume. It’s about experience. In terms of the two notions of artificial intelligence – making music on the one hand, consuming it on the other – Spotify is almost starting to meld the two together.
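The content-based side of what Strachan describes – analysing the sonic properties of tracks to find similar ones – can be sketched with cosine similarity over feature vectors. The track names and feature values below are hypothetical stand-ins; Spotify’s actual audio analysis is far richer than three numbers per song.

```python
import math

# Hypothetical audio-feature vectors (say: tempo, energy, acousticness),
# normalised to 0-1 - stand-ins for whatever a real platform extracts.
tracks = {
    "ambient_a": [0.2, 0.1, 0.9],
    "ambient_b": [0.25, 0.15, 0.85],
    "techno_x":  [0.9, 0.95, 0.1],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means the feature vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def most_similar(name):
    """Recommend the track whose features are closest to the given one."""
    return max(
        (other for other in tracks if other != name),
        key=lambda other: cosine(tracks[name], tracks[other]),
    )

print(most_similar("ambient_a"))  # the other ambient track, not the techno one
```

A real recommender would blend this sonic similarity with the collaborative signal Strachan also mentions – what listeners with similar taste play – rather than rely on either alone.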
François Pachet, who spent 20 years working on artificial intelligence at Sony, now heads Spotify’s technology research lab, creating AI music. For a platform like Spotify, AI makes total sense, because music is no longer something that we physically own; it’s like water – we can turn it on and off, access it when we need it. What we want to listen to is regulated by the patterns of our daily life – we want to be relaxed or stimulated or asleep or motivated or whatever. And that’s always been the case in terms of the way we use music, but Spotify really distils that in the way that it groups music together. So musical classification comes through use-value rather than… a sociological value, almost, or its cultural use. And then the idea of developing AI technologies kind of makes sense, because if you can make convincing music that you can relax to or go to the gym to, then you don’t have to pay royalties to the artist who created it, because it’s been developed in-house by artificial intelligence. For a significant part of your content, then, it makes complete sense, because Spotify has ownership over everything in that process.
It’s also providing the consumer with what the consumer wants.
Exactly. So it’s reducing music to a service economy. And, I suppose, the idea of monetising experience is at the heart of what the platform does. If you can monetise that through owning the musical experience – a track, say – and not have to pay creators from that, that makes complete sense as music as a switch-on-and-off-able mood regulator. So you can see why they’re investing in it. And I suppose the other point is, well, does it matter? Because the whole moral panic around artificially intelligent music is that human agency is taken out of it – but on the other hand, music makes us cry and makes us feel part of something and gives us moments that are really special and it makes the hair stand up on our arms, or whatever. Does it matter if a computer does that? Or are there differences? And I think that’s the worry that people have about artificial intelligence.
Well, I think people of a certain generation would initially rail against that and say it’s simply not an option – that the idea of algorithms generating music rather than humans is just categorically unacceptable. Does an artist even really need to be the vehicle that delivers music?
It depends what context you’re looking at. Because if you reduce music to a service economy, which you use to do certain things, then I don’t think it necessarily matters to those consumers as long as it does the job. But, by the same token, we want performers to speak to us and speak for us about human experience. And that’s been a fundamental facet of music throughout the ages. We want our musicians to speak to us about the human condition and human experiences that reflect ours. Music is kind of like a mirror, in that context. So in terms of identity, I can’t see the death of the artist per se. There may well be artificial intelligence-generated artists; you can see in Japan, with Vocaloid singers, that people are accepting that those kinds of highly-mediated technologies can actually mean something to them, and are very popular.
But another thing about post-digitisation is the proliferation of niches, and, as we become more technological, the personal becomes entwined within communications technology. What it means is that you might have mega corporations who are producing artificial intelligence music in that functional role to fulfil music in games, music in adverts, television, mood music for Spotify. When everything becomes content, what does it matter where it comes from?
I wonder if the initial wariness to the idea of artificial intelligence creeping into all forms of creativity comes from artists, who are seeing their space being encroached upon a little?
Yes. Most musicians either don’t make money or they have portfolio careers, where they’re working in different sectors of music and utilising their skills in a variety of different ways. If a chunk of that then gets siphoned off to artificial intelligence – through large corporations employing it, and it being easily available to all sorts of content providers – then that sucks away a lot of potential income streams. There’s a potential problem there for musicians – things are difficult enough as they are! I don’t think it’s necessarily dystopian for humans as listeners – that’s a separate issue. But as somebody who knows hundreds of musicians and is involved in educating them – and being a musician myself – that kind of worries me, in a way. But that’s more of a pragmatic thing rather than, ‘Oh god, the computers are taking over’. And, you know, humans and musicians are extremely resourceful and will use technology in particular ways. All musicians – today’s, tomorrow’s, those around in 20 years’ time – will have a vested interest in human agency being part of the creative process. There’s that other question, isn’t there: because the technology now is quite limited, people go, ‘Oh well, it can never replace human ingenuity’. I think the technology will get better, will be developed to the point where it’s even more convincing than it is now, and it just becomes this kind of service economy.
There could conceivably, then, be a split in how we view music, between music as pure content – a service – and music as experience. Could AI even creep into that, and kill the live industry?
No, I don’t think so. That’s another facet of digitisation, that as people’s lives become more digitised, actually what people crave is experience. And the commodification of experience in late capitalism is a massive thing. So I think there will always be room for that face-to-face interaction. But at the same time, if people use artificial intelligence to create something differently, which moves people, there’s no reason why that should be seen as in any way inauthentic than anything else.
It’s about collectivity: performance is a collective thing. Even with mediated technologies like the hologram – Elvis’ band performing with Elvis as a hologram – you’re still seeing all those musicians who played with him in Las Vegas in the 70s on stage. There’s a suspension of disbelief there that people want to engage with. But, by the same token, people want a connection with… something. It depends what environment it is.
Could you ever see an AI artist performing on the Pyramid Stage at Glastonbury? Would you go and see that?
But a staged performance like that would still relate to some kind of human agency, because either somebody’s got to set the parameters for making that an interesting experience, or somebody’s got to say, ‘Yeh, that’s the interesting bit of the experience, that’s what we’ll present to humans’. I mean, the whole idea is interesting: does it open up creative possibilities? Because, as musicians and artists, we work with the materials that we’ve got, we use deep neural thinking ourselves – that’s what creativity is. We amass information about music and we understand that there are similarities between different types of music and then we process them into new things. It’s exactly the same process that AI uses. In one way, artificial intelligence could push us into parameters that we wouldn’t necessarily think of. All creativity is about trial and error and using systems to push the boundaries of something – but knowing when that works is an instinctive human thing.
AI Audio Lab launches at SEVENSTORE on 29th May, with a conversation event with Dr Robert Strachan. A number of workshops will take place in the lab throughout June, where musicians and non-musicians will be invited to take part in the process. Join us on 1st June for some special artist Q&As around the future of creativity in conjunction with Baltic Weekender.