Entry tags: scattered thoughts on the use of AI tools for music creation
This is my reply to someone else's vlog, which I am posting here rather than there because it got quite long.
I am not absolutely for or against AI tools being used for making music; they are good in some ways and bad in others. AI is already causing changes to the music industry, and that will continue whether we like it or not. Once the cat is out of the bag, as they say, there's no putting it back.
There are already AI models that can mimic people's voices very well, including their style of speech and inflections. Even if the big companies have safeguards to prevent users from doing that, they won't be able to stop it from happening elsewhere. I've already been served AI-generated ads on YouTube, seemingly showing celebrities endorsing things I am quite sure they never approved. I am sure there will be fraudsters who release music claiming to be from popular artists, using fake AI-generated vocals.
I think AI will even be able to copy a particular artist's vibe and musical style. I'm not too concerned about that, because artists can make it known which songs are authentic, so that anyone who follows them would know anything not on that list is likely fake. I am much more concerned that AI-generated video and audio are being used to sway people's opinions on politics and other matters. But people creating misleading videos and "news" was already a big problem before AI started being used for it, and before famous people's likenesses were used without their consent.
AI enables more non-musical people to make music. That isn't necessarily a bad thing, but it will likely make it harder for individual artists to make money from music, much less make a living from it, simply because there will be more music available. Music has been affected by advances in technology like this before. Auto-tune enabled more people to become singers. Synthesizers made it possible to replicate many kinds of instruments, so that complex music no longer required a whole orchestra of musicians.
How different is it to fiddle around with synthesizer settings until you find a sound you like, versus telling an AI to generate sounds until you find some you like? How different is it to brainstorm your own lyrics, versus telling an AI to generate lyrics and then mixing and matching from what it gives you to produce a song you like?
I don't consider myself to be very musical. I can't sing in tune, can't identify notes by ear, and don't play any instrument well, except maybe a hand-drum, and I'm not exceptional at that either. But in the past I made my own songs for fun, using software tools to add echoes, change pitch, and layer on sound effects. I did that many years ago, with software I coded myself. (But using code libraries created by others... Hah!)
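To give an idea of what that kind of tinkering looks like, here is a minimal sketch of an echo effect in Python with numpy. It is purely illustrative, not the code I actually wrote back then: an echo is just a delayed, quieter copy of the sound mixed back into itself, and pitch changes and layering work by manipulating the sample array in similar ways.

```python
import numpy as np

def add_echo(samples, sample_rate, delay_s=0.3, decay=0.5):
    """Mix a delayed, quieter copy of the signal back into itself."""
    sig = np.asarray(samples, dtype=np.float64)
    delay = int(delay_s * sample_rate)
    out = sig.copy()
    if 0 < delay < len(sig):
        # Add the decayed copy, offset by the delay.
        out[delay:] += decay * sig[:-delay]
    # Scale down if the mix exceeds the [-1, 1] range, to avoid clipping.
    peak = np.max(np.abs(out))
    if peak > 1.0:
        out /= peak
    return out

# Example: a two-second 440 Hz tone with a 0.3 s echo at 44.1 kHz.
rate = 44100
t = np.linspace(0, 2.0, 2 * rate, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
echoed = add_echo(tone, rate)
```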
Nowadays, if I were doing that, I might also use AI. I might imagine a particular tune in my head and use AI to try to generate that tune in a professional-sounding way, which I'd never be able to accomplish on my own. Or I might play around, randomly generating sounds and combining them with other things.
I don't see anything wrong with other people using AI like that, and if the result is good enough, why shouldn't they be able to sell it too? But if someone creates a song from scratch using AI with very little input, say just the prompt "make a fun song for a day at the beach" and no further edits, should they be able to claim the resulting song as their own and sell it? That seems more problematic. Where should the line be drawn, and how could it be enforced? I don't think a requirement for some arbitrary amount of human input would be enforceable.
How different is it when a person, who has grown up listening to all kinds of music, uses that background as the basis and inspiration for their own music, versus an AI tool being exposed to a bunch of music and then generating music based on what it learned? I don't think AI tools generate music by simply recombining clips and sounds "stolen" from human artists. I think they analyze and learn how songs are made: what tonal combinations are possible and popular, and what kinds of beats to put together with what kinds of melodies and vocals for various genres. I think that is similar to what humans do.
One possible benefit of AI is in more easily identifying songs which plagiarize others. It is ironic that AI can be used both to make music based on what it has learned from human music and to make sure the music it makes is not too similar to any one song. It would also help people like me, who might inadvertently copy some tune or lyric heard long ago without even realizing it. And it could help the original artists get a fair share of any profits.
Human-written lyrics are often vague or unclear in their meaning, and I may interpret them in a way that was never intended. I think AI could come up with meaningful and logical lyrics as well as vague and unclear ones. I could probably derive as much meaning from AI-written lyrics as from human-written ones, although if I know the lyrics are AI-written to begin with, I may be less likely to care about finding meaning in them. In the case of a human using AI to help write lyrics, I don't think that would bother me: the human would still be choosing which parts to use or enhance, so the artist would still be imbuing them with meaning.
How much AI involvement is too much? If a song is created entirely by AI with no human input, I'm probably not going to connect much with it. But if I like the sound, I suppose I will like it no differently than any other song. Even if there was no human consciousness behind its creation to imbue it with meaning, maybe I imbue it with meaning when I listen to it.
There's a song I bought which I afterwards came to suspect was AI-generated. That hasn't stopped me from liking it when I hear it. The lyrics are few and not in English; possibly a made-up language. (There are human artists, too, who release songs in made-up languages.) I feel the same listening to this world-fusion-style song as I do listening to world-fusion songs from the 80s or 90s.
I'm still not sure how I feel about that - about enjoying a song which may have had little or no human input. There are names credited on the song, but I can't tell if they are real people or not.
Will being suspicious of songs like this reduce my enjoyment of genuinely human-made songs, since I may become suspicious of them too? I don't know.
And suppose a song is AI-generated but remixed by a human; would that make me enjoy it more than one created solely by AI? I don't know.
With another artist, I've already discovered that, in rare cases, I can like a song remixed by AI even better than the original.