Music has always been linked with emerging technologies, but AI is something new: it has the power to transform how we make and experience music, and it is already changing the world.
We spoke to 8 artists from MUTEK Montreal 2019 to discuss their wildest predictions for the future of music, AI, and their own workflow.
I think there’s great potential for AI development in musical creation. But the first thing that comes to mind is who owns the rights to the composition—the software creators or the AI?
If an AI is trained to create, it needs to work on pre-existing music… So another question arises: is the AI the author, or is it just copying from the music it used as a reference?
Is there any possible originality in AI at all? I guess it’s still an early stage to answer all these questions as it’s unknown territory.
My musical trajectory is very much rooted in classical instruments. I have played violin and piano since I was a kid and had the opportunity to play in orchestras, a collective artistic experience of sharing that I consider of great value.
I think each musician can bring a different touch through their original interpretation to make a piece of art come to life, something that is beyond technicality and virtuosity.
It’s an invisible variable, more of a personality and sensibility thing… In this sense I have certain doubts about how an AI can detach from “data” and algorithmic training in order to bring musicality to a creation, whether as interpreter or author.
I’d be interested in working with AI as long as the situation doesn’t become a musical dialogue where the algorithm behaves exactly like me and takes me back to the same point.
If the AI can’t create on its own, rather than just copying artists’ styles, I wouldn’t be interested in adding it to my creative process, as I feel I wouldn’t be adding anything new.
From my perspective, AI is one of many algorithmic approaches that could be useful for making new work. Based on the task at hand, it could be the most appropriate or inappropriate tool.
It certainly offers some exciting possibilities for less-than-thrilling preparation tasks. For instance, the ability to speed up sophisticated similarity-recognition processes could be very useful when working with large sample banks, whether in studio practice or a live application.
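The kind of similarity recognition described here—finding clips in a sample bank that sound alike—can be sketched in a few lines. This is a hypothetical, minimal illustration, not any artist's actual tool: it synthesizes stand-in clips, uses a log-magnitude spectrum as a crude fingerprint, and ranks bank entries by cosine similarity (real systems would use learned embeddings over actual audio files).

```python
# Minimal sketch of similarity-based sample retrieval over a small bank
# of mono clips at a fixed sample rate. All names here are illustrative.
import numpy as np

SR = 22050  # sample rate in Hz

def tone(freq, dur=0.5):
    """Synthesize a sine tone as a stand-in for a sample-bank clip."""
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * freq * t)

def features(clip):
    """Log-magnitude spectrum as a crude, unit-norm timbre fingerprint."""
    mag = np.abs(np.fft.rfft(clip, n=4096))
    v = np.log1p(mag)
    return v / np.linalg.norm(v)

def most_similar(query, bank):
    """Return the bank key whose fingerprint is closest (cosine) to the query."""
    q = features(query)
    return max(bank, key=lambda name: float(np.dot(q, features(bank[name]))))

bank = {
    "low_hum": tone(110.0),
    "a4_tone": tone(440.0),
    "hiss": np.random.default_rng(0).normal(size=SR // 2),
}
print(most_similar(tone(441.0), bank))  # a 441 Hz query should match "a4_tone"
```

The fingerprint here is deliberately simple; the point is only that a distance measure over spectral features is enough to retrieve the perceptually nearest clip, which is the preparation task the artist is describing.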
In terms of using AI for generating musical structures (such as phrases, rhythms, etc.), it’s important to recognize that the result is more dependent on the data it is trained on than on the particularities of the algorithm itself.
Using such generative tools would involve tailoring datasets, as well as building multiple custom macro-compositional layers to ensure that the generated material retains a character that is not easily achievable by training the AI alone.
I feel that despite the exciting possibilities of AI in music, the quality of the music will still be much more dependent on one’s ears and imagination than on any specific algorithm or application.
I would like to see how AI can work with non-copyrighted material. I know there are algorithms out there that can take an input and find something that sounds at least partially similar to it. But what if the search results came exclusively from databases of public domain, free-access sounds?
For example, let’s say I want to add something in a track that I know I can sing, but I don’t want to use my voice; such an algorithm could come in very handy. I can only imagine the exciting, unpredictable material it could take me to—fragments of speeches, videos, field recordings, and so on.
Now, what if we included literally every sound online that isn’t copyrighted? Add to your results every instance of home video on YouTube where people are just hanging out, doing their thing. That would be a tool I’d love to experiment with.
I hear AI technologies are already capable of making full pieces of music based on pre-existing, copyrighted material, but that really doesn’t interest me.
I would rather support projects that expand our possibilities as human creators, instead of gradually dehumanizing the process, stealing our jobs, and cashing in on our talent, time, and effort. I really hope there are leading AI developers out there with this kind of vision in mind.
I see AI just as another significant advance in this never-ending, exponential quest of pushing and questioning technology’s potential and boundaries in this prehistoric digital age.
Democratization of AI is, in my mind, highly connected to the democratization of art practice in our society.
More than ever, powerful, complex, human brain/mind-inspired and—I am tempted to say—organic technology is now accessible for people to get their creative vision out of their heads, without being musically trained or knowing anything about computers or programming.
But just like any creative technology, adapted to music creation or not, I’m interested in what a creative mind can do with those tools, not the opposite.
I am a painter. And I use music as a “clear paint” for my audiovisual performance.
AI will be a good entrance guide for people whose main expressive technique was not music. It will help us in shaping that inspiration. Or, with luck, it will create unexpected chaos.
From the age of ‘tools used by humans’, I have hope in the age of tools that walk with humans.
I’m very interested in this subject and I read as much as I can about this.
The human brain and ear will always come up with more interesting and “better” results than any AI could ever do, I’m sure of that.
There’s this famous example of Daddy’s Car which always pops up when people are speaking about artificial intelligence and music making.
The story goes that the track was written by AI. But if you actually care to check, it wasn’t really done by AI: it was arranged and produced by a human being. Also, the track isn’t even interesting.
For me personally, making music is the most fun thing I know, and I love spending hours and hours in the studio just trying out new things.
I know a lot of people who more or less try to skip certain steps in the process of making music. For instance, they say that no one who listens to a certain techno tune can hear whether it’s a sample of a 909 drum machine or the actual vintage Roland instrument playing.
Well, again, I love the moments when I make music, and for me it’s more fun to use the hardware drum machine than to browse tons of 909 samples. Meaning the end result is not all that counts.
Having fun is the most important thing for me and the reason I make music to begin with. Experimenting with various algorithms and generative sequencers is super enjoyable for me, but it will never replace actually coming up with melodies and drum patterns myself.
Having said that, I’m a huge fan of not having full control.
The sole reason I think the TB-303 is the most fun sequencer ever is that I still cannot master it, even after all these years. There’s always a certain degree of randomness in the basses and melodies I write due to the fact that the 303 sequencer is almost impossible to master. Ghost in the machine.
Chloe Alexandra Thompson
I see AI being used as a blanket term: a means to explore automation through machine learning, as well as to create instruments and applications that allow us to work with the systems we’ve trained to explore interactivity with data, objects, and human collaborators.
Later this year I will be working as a spatial sound designer for an artist named James Sprang, who is using AI speech-interpretation software as a means to explore legibility, experience, and identity. While I will not directly be using this in my own personal work, I am excited to work with this AI as it is trained on the recorded voices of many poets, which we will play back and run through it in a spatial audio array.
How I see this fitting into music creation as a whole is through trained, intuitive FX chains driven by automation. At present, AI programs seem to be trained to work alongside composers, taking parts of the composition they may or may not typically focus on and expanding them by following the user’s typical choice patterns.
In my own practice, I see machine-learning potential in the integration of smart panning protocols: the panning automation I construct through code, or play manually, could be mapped out by a program rather than by the individual panning protocols I apply to each instrument or movement of a work.
I would like to work with an AI that could translate selected frequencies into poetic text, spoken messages, or “lyrics” based on the writings of the many critical theorists and poets whose work I often reference in my praxis.
As an artist who focuses on abstraction rather than a more discernible sung-lyric song structure, direct messages do not make it into my compositions; this could be a way to apply those abstracted principles and find new avenues of entry into this realm of personal interest.