Words by Lois Browne

Whilst AI is something we hear about all the time, it's still quite foreign to most of us. We tend to think of computers taking over the world; for me, images of Jude Law's overly doctored, chiselled face in Steven Spielberg's A.I. Artificial Intelligence pop up.

Fast-forward to 2020 and technology's hold on our lives can't be denied. In music, where developments are constantly ongoing and morphing, artificial intelligence has become a playground for exploration and experimentation. These advancements led me on a curious journey to find out how AI could realistically integrate with electronic music in a live setup and interact with human participants.

Taking over the Southbank Centre's Purcell Sessions for one week in February, ardent collaborators Ben Hayes (AI researcher and musician) and Hector Plimmer (musician and graphic designer) did just this, employing an AI system as a third member of their group.

First, they hosted a workshop to delve into and explain the technology they had been using. The next day, they closed their residency with a robust live audio-visual performance, the stage kitted out with a plethora of keyboards, synths, effects pedals and a drum kit, and joined by two human participants: harpist Maria Osuchowska and saxophonist Axel Kaner-Lindstrom.

Equal admirers and musical confidants of each other's sounds, the pair had had the project in the works for a while. “Yeah, it's been weirdly organic in its progression. A lot of serendipity,” according to Hector. He had been developing a sample pack for another project when he received some unexpected news from Ben: “…and then literally through that conversation, that's where it started.” The majority of the show's development happened over the rehearsal period. “The music aspect of it we did during the four-day residency; we just had a couple of loops for little bits. All the music came together during the time in the Purcell rooms, but Ben was working on the AI part of it for some time,” Hector shares.

As an AI researcher, Ben had set out to build tools to let AI interface with electronic music. Building the system posed its own set of challenges: “It can be hard to know how well AI and deep learning will cope with a task. You might have great results in a day, or you might be unknowingly embarking on a three-year research project. By the time we started the residency, I was nowhere near happy, as the output was pretty soupy. The hardest thing was actually switching from building it as a piece of software to using it as a musician. It was really difficult during the residency to resist the temptation to nerd out and tweak it for two hours, and to accept its imperfect, flawed form.”

Yet it was the unpredictable nature of the AI that acted as a catalyst for fun for the duo. “It's going to do things that you didn't expect it to do, and that's really exciting. We decided what we wanted the AI to play by creating synth patches and sound clips in Ableton, then fed them into the neural network. Ideally, we'd have a lot more training examples for it to learn from, but it just wasn't feasible at the time. These examples are found through the latent space, which can be very patchy. It'll have no idea what you're trying to do. It might give you very musically unrelated things, and so there's this back and forth where you have to adapt your ideas to suit what it's capable of,” Ben explains.
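
For readers curious what “exploring the latent space” might look like in code, here is a minimal, hypothetical sketch in PyTorch. None of it is from Ben's actual system; the decoder, its dimensions, and the interpolation routine are illustrative assumptions about how a generative audio model of this kind is typically probed: a small latent vector is decoded into an audio buffer, and walking between latent points yields candidate sounds that may or may not be musically related.

```python
import torch
import torch.nn as nn

# Illustrative toy decoder standing in for a trained generative audio model.
# LATENT_DIM and AUDIO_SAMPLES are assumed values, not from the real project.
LATENT_DIM = 16
AUDIO_SAMPLES = 16384  # roughly 0.37 s of mono audio at 44.1 kHz

decoder = nn.Sequential(
    nn.Linear(LATENT_DIM, 512),
    nn.ReLU(),
    nn.Linear(512, AUDIO_SAMPLES),
    nn.Tanh(),  # keeps samples in [-1, 1], like normalised audio
)

# "Exploring the latent space": walk a straight line between two latent points.
# Nearby points can decode to related sounds -- or to musically unrelated
# things, which is the "patchiness" Ben describes.
z_a = torch.randn(LATENT_DIM)
z_b = torch.randn(LATENT_DIM)
with torch.no_grad():
    for t in torch.linspace(0.0, 1.0, steps=5):
        z = (1 - t) * z_a + t * z_b   # linear interpolation in latent space
        clip = decoder(z)             # one candidate sound per latent point
        print(f"t={t.item():.2f} -> clip of {clip.shape[0]} samples")
```

Linear interpolation is only one way to traverse such a space; the back and forth Ben mentions comes from auditioning these candidate clips and adjusting the inputs until something usable emerges.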

This led to some extensive jam sessions that wouldn't always go anywhere but gave them ample space to investigate and push their ideas once they had decided on building a set of five untitled tracks for the show. The project follows previous attempts at using machine learning to produce music, such as Google's Magenta, which has used multi-track models to create compositions, and Sony's Paris-based Computer Science Laboratory (CSL), which has been playing with the technology on everything from Renaissance polyphony to contemporary popular music.

While the tech giants have clearly been having their fun, Ben noted how accessible AI now is, as it has become cheaper and easier to incorporate into everyday practice.

Ben and Hector aimed to do something that wasn't wholly conceptual, wanting to carve out a space for AI as a collaborative entity rather than an alien force. Both were curious about using it as an improvisational tool, to demonstrate its abilities in an unfiltered context: “Doing it live and interacting in real time with the AI, with no sort of preparation, with the potential for things going wrong, was the ideologically purest way of testing that human-AI interaction, actually seeing what occurred at the point of contact between these two types of creativity.”

The inclusion of live musicians Maria (harp) and Axel (saxophone), two very visceral human players, became a standout, integral feature of the show. They performed individually on three of the five songs, then all together for the finale, their free-flowing playing juxtaposed against the straight-edged nature of the AI. “We wanted extra human elements, just to show a variety of ways in which people might interact with it,” Hector shares.

The audio-visual component of the presentation also served as a physical embodiment of the AI at work. Hector developed it using Resolume software, which he taught himself in the lead-up to the week-long residency, programming it to trigger a graphic any time the AI played a note and displaying the visuals in sets of twelve, one for each note of the octave.
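
As a rough sketch of that mapping, imagine the AI's notes arriving as MIDI: each note number folds onto one of twelve pitch classes, and each class fires a visual. The hypothetical Python snippet below uses the mido library; the print call stands in for whatever actually triggers the Resolume layer (typically an OSC or MIDI message), a detail the article doesn't cover.

```python
import mido  # assumes the AI's output is exposed on a MIDI input port

NUM_VISUALS = 12  # one graphic per pitch class -- the "octave" of visuals

def visual_slot(midi_note: int) -> int:
    """Fold any MIDI note (0-127) onto a pitch class from 0 to 11."""
    return midi_note % NUM_VISUALS

with mido.open_input() as port:   # opens the default system MIDI input
    for msg in port:              # blocks, yielding messages as they arrive
        if msg.type == "note_on" and msg.velocity > 0:
            print(f"trigger visual {visual_slot(msg.note)}")  # stand-in for the Resolume trigger
```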

Our relationship with technology, seeing it as an extension of the human likeness, is a theme that resonates with us all, as it's so intertwined with our daily lives, and one that Nam June Paik's Tate Modern exhibition focuses on exactly.

It's a relevant conversation to have now, as they both point out, given the common use of AI in workplaces, and there is plenty of scope to take the show in different directions: “We'd love to do it in a club, so we could spend more time really building things and do longer, extended tracks. Plus there are so many factors we can play with: gaining biofeedback data from the crowd, such as the temperature or the amount of pressure on the dance floor, and keying that back into the AI so the composition responds to the way people are dancing and interacting with the space.” They do, however, intend to refine the set to make it clearer what is being contributed by the AI and what by the humans.

Hector and Ben's performance is an amalgamation of humans collaborating with machines, sometimes clashing, sometimes in harmony, and often in unexpected ways. Nevertheless, it's these sorts of experiments that continue to push musical exploits forward along a bumpy path that still has many stones unturned. I pose them a final question: do they think AI will ever get to the point where it overtakes humans? Jokingly, Hector responds, “I'm already preparing to bow down to the AI overlords.”

On the contrary, if there's one thing they both agree on, and one thing this project has taught them, it's that everything depends on how we choose to interact with the technology.