Artificial intelligence is changing the professional world, including that of musicians. But what do music creators have to fear from AI? Will AI use their works without paying for them? Will it even replace them and devalue their art? And what about works created with the help of AI: do they enjoy copyright protection? An attempt to approach a complex phenomenon that will occupy all music creators in the coming years.
Many conceivable applications
Someone who knows all about the status quo of AI is Emilia Gómez. She is a pianist, AI expert and researcher at the interface between machine learning and music. Five years ago she joined the Joint Research Centre, the European Commission's in-house science service, where she set up a team that studies AI-based algorithms and their impact on our cognitive, social and emotional development and on our lives.
She and her team inform the European Commission about the technical status quo of AI, and "there is a lot that needs to be understood first". Then come evaluation and, where necessary, regulation. Gómez's job covers the first two of these three tasks: she has to bring the science closer to the Commission and the Council, ensure it is understood, and evaluate it in order to lay the groundwork for any legal regulation that may be needed.
Gómez is fundamentally positive about the technology: "In music in particular, it opens up countless possibilities for musicians and composers," she says. "Composing with computers has a long tradition, which the new technology is taking to a new level." In the "Phenicx" project, for example, in which she played a leading role, such systems were used to make recommendations in a very complex kind of music: symphonic music. "These systems can analyze the music and explain things to you, introduce you to new music step by step, explain it to you so that you can understand it in all its complexity." A wide range of applications in the education sector is therefore conceivable.
Fairness, transparency and trust
But Gómez, who is also a leading researcher in the field of music recommendation systems and has programmed a whole host of such machine learning algorithms herself, raises a completely different aspect as well: in her view, the question is not just whether these systems work, "but also whether they are fair and transparent and whether they can be trusted." What can fall by the wayside, she says, is diversity. We remember Zandi's criticism of the Western bias of programming platforms. This lack of diversity is also evident in recommendation systems, with negative effects. "If the same music is always suggested to you, you will eventually become very specialized in this genre, but you lose sight of the big picture." The view narrows. Keeping an eye on diversity is therefore very important. "If more diverse recommendations are made, you will get to know more different styles and broaden your musical taste. And that's where we have to be very careful, because such recommendations can have short-term and long-term consequences."
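The trade-off Gómez describes can be illustrated with a small, hypothetical sketch: a re-ranker in the spirit of "maximal marginal relevance" that balances a track's predicted relevance against its similarity to tracks already recommended. All names, scores and the similarity function here are invented for illustration; real recommendation systems are of course far more complex.

```python
def rerank(candidates, similarity, weight=0.7):
    """Greedy diversity-aware re-ranking.

    candidates: list of (track, relevance) pairs, relevance in 0..1.
    similarity: function (track_a, track_b) -> 0..1.
    weight: 1.0 means pure relevance ranking; lower values
            penalize tracks similar to what was already picked.
    """
    selected = []
    pool = list(candidates)
    while pool:
        def score(item):
            track, relevance = item
            # How close is this track to anything already selected?
            max_sim = max(
                (similarity(track, s) for s, _ in selected), default=0.0
            )
            return weight * relevance - (1 - weight) * max_sim
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return [track for track, _ in selected]


# Toy example: two symphonic tracks and one jazz track.
genre = {"sym1": "classical", "sym2": "classical", "jazz1": "jazz"}
sim = lambda a, b: 1.0 if genre[a] == genre[b] else 0.0
candidates = [("sym1", 0.9), ("sym2", 0.85), ("jazz1", 0.6)]

# Pure relevance (weight=1.0) keeps the listener inside one genre;
# with weight=0.7 the jazz track is surfaced second despite its
# lower relevance score: ["sym1", "jazz1", "sym2"].
diverse_order = rerank(candidates, sim, weight=0.7)
```

With `weight=1.0` the ranking collapses to pure relevance and the listener's "view narrows", exactly the effect Gómez warns about; lowering the weight surfaces other styles earlier.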
However, recommendation algorithms are now being developed and used by companies such as Apple, Amazon and Spotify. How much interest in diversity can we expect?
Of course there are conflicting interests, Gómez confirms. On the one hand, there is the need to play a certain type of music on the radio, or to do business with a certain type of music. "On the other hand, there is the consumer's need for recommendations that are as tailored as possible," says Gómez. "In all of these recommendation systems there are therefore recommendations based on user behavior, and the platform's own business models can be embedded in the algorithm. But getting at this information is impossible from our research perspective." And that is a huge problem. Siegfried Handschuh shares this view. "OpenAI" is a funny name, says the St. Gallen professor, "because exactly nothing about the company is open." A little is revealed, but you only know roughly what data they have; you are not allowed to inspect it. There is therefore no transparency whatsoever with regard to the data processed.
Emilia Gómez is of course aware of all this; it is her day-to-day business, so to speak. Nevertheless, she remains positive in the face of this opaque dominance. Her team may be small, she says - at the moment fifteen people work with her as senior researchers at the European Commission's Joint Research Centre - but it is "broadly based and involved in many networks." The research is generally cooperative: "We work closely with external research teams and the European Network of AI Excellence Centres and contribute to projects with a broader scope. We have partnerships with universities and scientific centers. We cooperate and also try to learn from other people." But what is needed is time. "We need to invest time in order to be able to assess whether the technology is safe. Just like with a car: you don't want the first prototype to be sent out on the road. You want it to have been sufficiently tested for safety beforehand. And the faster development progresses, the more there is to evaluate." For a long time, there was only one rule in the scientific community: performance, better performance and even better performance! "Terms such as evaluation, fairness or transparency were hardly ever used. But there has been a change in thinking. The communities are spending more and more time on analysis."
Light and shadow of the "AI Act"
But what is the legal situation? As the author of a piece of music, do I have to put up with AI using my work as a data set without asking me? And how will a work that I have created with the help of AI be judged? Does it enjoy protection?
Jeannette Gorzala works as a lawyer specializing in AI and sits on the board of AI Austria, an independent think tank that aims to promote Austria as a business location for AI: to create a good environment for AI research and development, and to give AI start-ups the opportunity to develop their solutions and contribute to a flourishing economy. As a lawyer, she represents companies in the HR sector, for example, but also in the financial industry, i.e. in banking and insurance. She describes the legal situation they are confronted with as follows: "In Europe, we have 27 different legal systems. In some cases there are no regulations on AI at all, which makes things difficult at the moment. Every time you cross a border as a company, you have to deal with a completely new legal framework." The unsatisfactory situation Gorzala describes is now to be remedied by the EU's planned "AI Act". With it, the EU wants to regulate artificial intelligence across all sectors and create a legal basis for the development and use of AI in order to avert or minimize potential harm caused by AI. The "AI Act" also aims to introduce uniform rules for the AI market throughout the EU so that the technology can gain a better foothold there.
The status quo of the efforts is as follows: The EU Commission's draft of the AI Act initially served as a basis for negotiations between the EU Parliament and the EU Council. The Council finally adopted its version of the AI Act, the so-called general approach, at the end of 2022. In June 2023, the members of the EU Parliament then agreed on a "common position". The three institutions then began the so-called "trilogue negotiations". The aim is to produce a final version of the law by the end of 2023. It is highly doubtful that this will happen this year. So: A draft is available. But: nothing is fixed. And before this act is passed, there will be a lot of lobbying - as we remember from the difficult birth of the EU Copyright Directive in 2019. Because the (US) tech companies will know how to defend their investments with everything at their disposal.
Data mining free ride for AI?
But what about the quality of the current draft? Opinions differ widely. Gómez, for example, sees it as a step in the right direction, even if, as she concedes, this is not her area of expertise. "In Europe, we have a great tradition when it comes to human rights. We put people at the center, and with the 'AI Act' a big discussion has emerged. The AI Act will have a good impact because it forces platforms to think about the implications."
Gernot Schödl, Managing Director of the Verwertungsgesellschaft der Filmschaffenden (VdFS) and Chairman of the Initiative Urheberrecht, takes a more nuanced view. While he approves of the proposed disclosure and transparency obligations towards consumers, he criticizes the fact that the draft contains nothing at all on copyright, even though the need for regulation is huge. The problem Schödl outlines arose with the 2019 Copyright Directive, which provides for an exception to the basic obligation to pay remuneration for text and data mining. Put simply, the reproduction or extraction of lawfully accessible works for the purposes of text and data mining constitutes free use. It was only at the last minute, and as a compromise, that the directive was amended so that commercial providers, too, can make use of this free use of works.
The problem now, according to Schödl, is that some believe AI operators can invoke this data mining exception. The consequences would be far-reaching, as the use of works by AI would then be free of charge. But in 2019, generative AI was not even an issue, says Schödl; the free use of works for text and data mining was meant to apply only to science and research. The proposed solution - that rights holders can opt out of having their works machine-read - is impractical in AI cases. "From the rights holders' point of view, it is difficult to find out whether their works have been scanned, used or edited, because no one knows whether and, if so, how and how often a work has been processed in the big sausage machine."
Authors and performers, however, must retain control and, if necessary, be able to prohibit the use of their works or demand appropriate remuneration by way of a license fee. According to Schödl, free use of works under the data mining provision would not pass the so-called "three-step test", because any free use of works reaches its limits where significant interests of the rights holders are affected - which would clearly be the case here. Rights holders must always be able to establish whether and, if so, how their work has been used, and they must then be able to enforce a claim, says Schödl. There must be no abuse.
However, a request I made to the EU Commission led to a rather frustrating answer: the "newly created Text and Data Mining exceptions provided for in Articles 3 and 4 of the Directive (EU) 2019/790 on copyright and related rights in the Digital Single Market" are, I am told, "the most relevant exception that could apply to AI". The data mining provision could therefore also apply to AI, according to the succinct reply. The door opened by the directive thus remains wide open, despite justified concerns.
Jeannette Gorzala takes a similar view to Schödl. The real question is whether the scale of today's generative AI was foreseeable at the time. The tech industry may at best have been experimenting with such things, but those affected, who had no inkling of future developments, certainly did not foresee them. After all: "Who would have thought five years ago that you could clone a voice 1:1 by recording me speaking for just ten minutes and then training a model with it?"
A question of compensation in the end?
In recent weeks, the media have been abuzz with a lawsuit brought against OpenAI by US writers, most notably bestselling author Jonathan Franzen, over the fact that their works have been incorporated into the AI system without authorization. The assumption that the same is already happening in music is obvious: if I ask ChatGPT to compose a piece in the style of Nick Cave, then it must have been fed pieces by Nick Cave or similar material beforehand - otherwise the request would make little sense.
So the question, according to Gorzala, is: "Where did these models get all the data they were trained with?" That is exactly what is being litigated in the pending lawsuits: the allegation that copyrighted material was used without authorization to train these models. "There have already been the first lawsuits for damages, especially in the image sector, and I suspect there will soon be similar cases in the video and audio sectors." But the problem is: "The things that are included in these systems cannot be removed afterwards." As Schödl said: opting out doesn't work here. What follows from this? If AI models have been unlawfully trained on a certain body of data, is licensing the only remedy - must the rights holders or injured parties be remunerated or compensated accordingly? "Exactly that," says Gorzala. "In the end, it's a question of compensation."
The difficulty, however, lies in proof: those whose rights have been infringed must prove that their works were used illegally. Compensation can then only take the form of damages or licensing, and both first require appropriate transparency, which the EU's "AI Act" could enshrine in law for Europe. This creates a strangely skewed situation: I can stop a label that illegally presses my recording onto sound carriers by means of an injunction; the use of the work can be prohibited by legal action. But in the case of unlawful use by AI, the protected work - whether a piece of music, a sequence of notes played on an instrument, or one's own voice, one's own instrument, so to speak - is like a pinch of salt in a sea of data. Its use can no longer be prohibited, because it is technically impossible to extract the respective data again and unreasonable to switch the machine off completely. The only option is compensation.
Practical transparency
In terms of transparency, Gorzala says, the idea now is that generative AI models should disclose in summarized form what data was used to train them. "If we consider what is protected by copyright, then we have compositions, paintings, sculptures, texts, photographs, films, music and sound recordings, and so on," says Gorzala. "But what would not be included is the voice - that is not protected by copyright. The obligation would have to be extended accordingly." The question is then how to solve this in a way that is practicable for everyone involved. "If I post the URL of a YouTube video for a voice I've used, that won't be of any use to anyone. So it will be necessary to get the industries, the speakers' associations and the artists involved. We need proof that really is proof, a practicable solution."
In addition to transparency, copyright and personality rights, there is another legally relevant dimension: civil law. Gorzala describes a case in which someone had contractually transferred the rights to their voice to a large technology company, indefinitely and for all purposes; the voice was then used to create a synthesized product. In the culture, media and entertainment industry, artists must therefore look closely at the purposes for which they transfer the rights to their voice.
In an interview, actress Nina Hoss cited this as her reason for supporting the Hollywood screenwriters' strike. She doesn't want to "one day find, in one of those piles of contracts, under the small print, that the face and body are sold with the role - and then everything the producers don't like can be digitally changed in every scene." According to Gorzala, what Hoss fears is already happening. And all of this applies 1:1 to music: in future, musicians will have to be particularly careful when assigning rights to ensure that the clauses, which are usually too broad as it is, do not also cover use by AI.
According to Schödl, there will soon be two versions of every song: an original and an AI version. That, at least, is how he interprets the shift in opinion among the major labels, which initially rejected AI versions with a clear "no", only to retreat quietly to a more open position immediately afterwards. A look at streaming services and Amazon, which are already flooded with AI-generated content, bots and fake reviews, suggests he is probably right. The disadvantage for musicians is obvious: streaming royalties, already marginal from an artist's point of view, will probably shrink further, because an ever-increasing share will go to AI-generated content jazzed up by fake bots.
Markus Deisenberger
Markus Deisenberger, born in Salzburg in 1971, is a lawyer and freelance journalist who lives and works in Salzburg and Vienna. He is editor-in-chief of a Salzburg city magazine and publishes regularly in German and Austrian magazines. He also writes novels, most recently "Winter in Vienna".
Article translations are machine translated and proofread.