Will AI Companies Respect Creators' Rights? (with Ed Newton-Rex)
Why this matters
Governance capacity is now part of the technical safety stack; this episode helps translate risk into policy with implementation value.
Summary
This conversation examines governance through Will AI Companies Respect Creators' Rights? (with Ed Newton-Rex), surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Perspective map
The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.
An explanation of the Perspective Map framework can be found here.
Episode arc by segment
Early → late · height = spectrum position · colour = band
Risk-forward · Mixed · Opportunity-forward
Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
Across 87 full-transcript segments: median 0 · mean 0 · spread -10–10 (p10–p90 0–0) · 0% risk-forward, 100% mixed, 0% opportunity-forward slices.
Mixed leaning, primarily in the Governance lens. Evidence mode: interview. Confidence: medium.
- Emphasizes governance
- Emphasizes policy
- Full transcript scored in 87 sequential slices (median slice 0).
Editor note
A high-leverage addition to the AI Safety Map that clarifies one important safety bottleneck.
Play on sAIfe Hands
Episode transcript
YouTube captions (auto or uploaded) · video esGcdqPk1-s · stored Apr 2, 2026 · 2,231 caption segments
Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.
It said, "We think that training on people's copyrighted work without a license is fair use." And that just goes against everything I stand for. No one trained a commercial model on copyrighted work without a license for a very long time. Everyone knew it was illegal. What this whole copyright fight has shown me, maybe more than anything, is that a lot of the people at the forefront of building this stuff honestly seem willing to trample on people's rights in the pursuit of personal gain and profit. Trying to shift the Overton window, trying to move the needle towards any kind of outcome that is fairer than the current circumstances for creators, I think is really important.

Welcome to the Future of Life Institute podcast. My name is Gus Docker and I'm here with Ed Newton-Rex. Ed, welcome to the podcast.

Hey, great to be here.

Fantastic. Could you tell us a little bit about your background?

I am a composer, a classical composer, and I've worked in AI for a long time. When I left university, I started an AI startup, what we'd now call a generative AI startup, but this was in 2010, so we didn't call it that back then. We used to call these things creative AI startups. It was a music generation startup, long before the invention of the transformer. I started off by hand-coding rules to compose music, and eventually we replaced that with recurrent neural networks, and we built it out, but it took eight or nine years. I'd say we were probably about 12 years too early to the generative AI trend. We ended up selling the company to ByteDance, the owner of TikTok, and I went there and took on a product role working on the For You feed, which was very interesting, a totally new kind of thing. I ended up going via Snapchat as well, and ending up at a company called Stability AI, a big AI company in the UK, running the audio generation team there.
So basically doing what we'd done at Jukedeck, but 12 years later.

How had the tech improved in those 12 years? How was it different working at Stability?

It improved leaps and bounds. When we were doing this in 2010, we were really making things up as we went along. Generative music had been around for a while. I don't think there had been any startups in the space, but people had been working on it academically since the 1950s. You basically had things like rule-based systems, classical AI, Markov chains, hidden Markov models, these kinds of things. And the tech was really rudimentary. We were basically composing music note by note in a symbolic fashion. We created the notes and the chords, and we then used an automated production system we'd built to turn that into audio; it was basically the back end of a digital audio workstation. So the actual AI element was symbolic: it was creating notes and chords on a page. And that's a totally different approach from the cutting-edge models today. When I joined Stability, they were generating raw audio samples, which means there's a lot more variety that can come out of these outputs. They're much more powerful systems now, and that's just in the nature of these models getting bigger, plus new innovations like the transformer, diffusion, all of these things coming along. So it's a totally different world. But what's interesting is that the product visions aren't actually that dissimilar. What people are doing now with AI music is really the same kind of product we were trying to design back in 2010; it's just that the tech has got a lot better.
And for listeners who haven't heard AI-generated music, I can say that it's gotten incredibly good. I'm not a musical person, but I've been fooled several times: listening to an AI-generated song and thinking it was actually produced by a team of humans. So it's gotten really advanced.

Yeah, look, AI music has come on a long way. I think it's reasonable to say that in many instances it's pretty indistinguishable from human-composed music. Lots of people wouldn't know the difference hearing a couple of new songs if they didn't know the artists involved. It's not necessarily true in all styles of music, but it's pretty true. Interestingly, it's still not true in classical music, which makes sense, right? These models are optimized to create music that's popular now, to create pop music. But pop music, including with vocals and lyrics, you can generate really convincing stuff now, which is already out there in the market competing with human musicians, which I think is a big problem. So it's pretty indistinguishable. Classical music, though, turns out to be harder. The intricacy of something like a huge work by J.S. Bach from hundreds of years ago, that's not yet doable. So maybe classical music somehow is safe, but for most of the market right now, AI music is absolutely here.

Why is it that we still have human musicians, then, if AI-generated music is indistinguishable from human-produced music? Why hasn't this wave rolled over the music market yet?

Well, it's still very early, right?
I mean, even though there are people like me who've been in this field for 15 years now, the generative AI wave really capturing the public consciousness started in 2022, and AI music entering the public consciousness really started right at the end of 2023 and into 2024. So we're very early, and a lot of people aren't noticing. The impacts are there, but they're invisible to a lot of people at the moment. For instance, there are already reports coming in from around the world of AI music being used in huge quantities to replace human-composed music in stores and retail outlets, often in countries that people thinking about this maybe aren't visiting that much. It's already happening a huge amount. So I think we're in the very early stages, and we don't yet have the reporting that I think we will have, reporting that will actually show the extent of what is already going on in mid-2025. That's one reason. There's definitely another reason, which is that human musicians will survive; a category of human musician will be fine. Everyone's going to be affected to a degree, but to put it very simply, Taylor Swift will emerge from the AI age relatively unscathed. The problem is most people are not Taylor Swift.
And so there's this argument from big AI advocates. I'm not someone who is against AI per se at all, but I'm also not what I call an AI booster, someone who just relentlessly tries to elevate any and every AI advance, and there are a lot of those people at the moment. An argument that a lot of these people put forward is: Taylor Swift's going to be fine, and musicians are going to be fine. There are still going to be pop musicians, and of course people are still going to want to go and see live music, to connect with these musicians. But the issue is the long tail of how musicians make money, and that is obviously the huge majority of musicians who are not household names. It's a massive problem for them already. It also affects the household names to an extent, in the long tail of how they make money. It's these hidden areas of the music industry, and the creative industries more generally, which are a massive part of those industries, where you're already seeing the rug being pulled out from under people's feet. And I think that is the issue.

What are the reasons that Taylor Swift is going to be fine?

We love human connection in the art we consume. That is particularly true in music. It's more true, I think, in music than it is in literature, where we immerse ourselves in the story and often, for better or worse, don't necessarily think much about who the author is while we're consuming it. When we consume music, when we go to a Taylor Swift gig... and I confess I haven't been to a Taylor Swift gig, but I'm going to see Oasis, who are reforming; I'm going to see them in London in August.
It's the human connection. No one would be remotely interested in robots playing an Oasis concert, right? You'll get a few AI artists, I think, who get big, as sort of a circus sideshow. It'll be kind of interesting. But I don't think there's any risk of fake AI musicians taking over the charts per se. What is going to happen, and what is already happening, is that a lot of the musicians who contribute to the songs going into the charts will find themselves outcompeted. A great example is songwriters. I've done some songwriting sessions; I mostly write classical music, but I've done some. You all get in a room together, maybe a few songwriters. You might be with the artist, you might not be. You're improvising, you're writing songs. Ultimately, people will write hundreds and hundreds of songs just for one album, so many song ideas and full songs get rejected. And already artists are starting, not all artists by any means, but I've heard stories of artists starting to turn to AI song generators, because it's just easy and cheap for them to go and get song ideas. And so you get to a stage where there's going to be no audit trail. You won't know it's happening. It's almost certainly already happening. What you have is songwriters not being fully put out of work, but their work gradually being eroded away. And I think that's the point: we'll keep filling Wembley, we'll keep filling these stadiums, people want to see pop stars. But most musicians are not pop stars.
That's the fundamental issue.

Yeah. We should talk about why you decided to resign from Stability AI.

So, I was at Stability in 2022 and 2023. I was really excited about building out the audio team and releasing Stable Audio, which was an AI music generator we released in, I think, September 2023. It went down very well. We licensed all of our training data. I've built a whole bunch of AI music systems over the years, and key to all of them has been: if you're using people's work to train these models, you pay them. You figure out a deal that works for them. You ask their permission. And that's what we did for Stable Audio, which I was really proud of. I think at the time it was one of the first big, what I'd call contemporary, generative AI models trained on fully licensed data. Unfortunately, the wider company, and frankly the wider industry, showed no signs of following that lead. I like to say that I didn't really resign from Stability so much as I resigned from the wider industry, because Stability were not the only company taking this approach. But it was them I was working for. It was in October 2023, I think, and I woke up and read an article, I think in The Verge, about all of these AI companies responding to the US Copyright Office, which had just put out a request for comments on AI and copyright. Interestingly, just a few days ago, the final stage of their report finally came out. We should talk about that; I think it's a great report.
But this was when they were gathering evidence and asking for submissions, and all of these tech companies made public submissions. There was a list of them, and I saw in this article that Stability was listed, and I thought, okay, I'll have a look at that. I was on the leadership team at Stability. And I read it, and basically on the first page it said something to the effect of: we think that training on people's copyrighted work without a license is fair use; they think there is this exception that covers it. And that just goes against everything I stand for, everything I had stood for in the audio team, everything I stand for in general. So that was the trigger, this public statement. Stability had been training their image models for a while, and I knew the attitudes of the rest of the company. But honestly, I had hoped that by building a model that went down very well... I mean, our audio model was named one of Time's best inventions of 2023; I think it was an industry-leading music generation model at the time, and it was licensed. I had hoped that we could show you could do that, and I still truly believe this: I think most of the reason you don't see leading models trained on licensed data is that people just can't be bothered. They're leaning on the fair use defense. The US Copyright Office basically said this in their report: licensing is hampered because so many people are relying on this fair use defense. Obviously, if you've got a whole industry copying a few big players who rely on this fair use defense, who refuse to go and license their training data, licensed models are going to suffer as a result.
And that's, I think, what's happened.

Would you say the general industry attitude is just: hey, we can train on copyrighted material, this is covered under fair use, we don't have to license anything?

Yeah, that's absolutely the standard industry attitude right now. It's really interesting, because I've actually been in the industry longer than probably almost anyone, so I saw it develop throughout the 2010s and into the early 2020s. And no one trained a commercial model on copyrighted work without a license for a very long time. Everyone knew it was illegal. It was standard common knowledge. It's interesting if you look at Google. Google had this fantastic team, launched a little bit after we publicly launched our AI music company. We launched in 2014; I think in 2015 Google launched this team called Magenta, a really cool project run by some really smart people, basically looking at creative AI, as we called it back then. And they trained these generative AI models. Honestly, when they launched, we were super worried. We thought, oh my god, we've got competition. We needn't have worried at all, because we were all a decade too early. But they launched these models, they wrote papers and blog posts about them, and they call out in those papers and blog posts: this is our data, here's where we've got it, we've gone and commissioned training data. They built this AI drummer and commissioned all these pieces of training data. And that's what we did as well.
We commissioned training data. Obviously, you compare that to Google's approach to generative AI now, which is very different. So I think what happened, and this is my impression, is that by 2022 you'd had all these research models. People were researching, and there's always a better argument for using copyrighted work unlicensed if you're not doing anything commercial; for research, especially in academia, I think there's a good argument for that. So people did this research, and they found these things worked really well. And then two or three companies that are now among the most famous companies in the world threw caution to the wind and thought: let's just release this, let's see what happens. And the story from there is that the industry took off and everyone copied them immediately. There was this gold rush. Everyone saw them relying on fair use, and everyone assumed: well, they've raised billions of dollars, they're among the most valuable companies in the world, they're showing what can be done. If they can get away with it, surely we can too. And it's rapidly become the standard approach. And it has massive issues for people. Through my work with Fairly Trained, and my work in general over the last few years, I know a lot of people who run AI companies that are trying to take a fairer approach, companies that are licensing all their training data and really working with creators. Every company says: we're all about creators, we want to democratize creativity, we want to treat them well. With most of these companies, that's garbage. But there are a few who are actually licensing their training data.
And that's what creators want, right? They want to be asked permission before their work is used. But those companies are having a really tough time. One reason is that when they try to raise capital, investors look at their pitch deck, stroke their beards and say: well, hang on, your expenses are going to be higher than the people who are taking their training data for free, so you're not going to win, so we're not going to invest in you. And so you have this horrible cycle where it's not just the AI companies who are, in my view, basically stealing all this work and training on it. You've now got a whole industry around it that is desperate for fair use to prevail in these lawsuits, for the AI companies to prevail, because if it doesn't, they're worried the whole thing falls apart. So we've got into a really bad position, unfortunately. And it didn't have to go this route, which is what's so annoying.

And we should say: if you're a company trying to produce a licensed model, you're also competing with open-source models that are trained on all of the data available on the internet, and some data that's not even publicly available. This is not unique to music. This is also images, text, books and articles, videos, movies. All of the top companies have collected all of the data available, they're training on it, and they're now producing synthetic data from that data. They're doing everything they can to gather as much data as possible, basically.

Yeah, agreed. I think the open-source thing is interesting, right?
Because, for a start, lots of these models obviously aren't open in the traditional sense, because they're not revealing their training data, and they're not revealing their training data because they know they'll be immediately sued into oblivion if they do. But open-weights models, I guess we can call them, I think are interesting as well. In general, open source obviously has benefits; open source has led to a lot of innovation. In this context, though, there's this almost religious adherence to the idea that open must be good. What you have with a lot of open-source models is companies or organizations going out there, training and releasing an open-source model, and arguing, at least externally and certainly internally, that because it's open-sourced there's less reason to license; maybe they're not directly commercializing one of these open models. But I think this is incredibly misleading, for a couple of reasons. One, the companies that are really invested in building open source are often doing it commercially. They may not be charging for the models, but they're absolutely doing it commercially. They're doing it because they're massive trillion-dollar companies, so they can attract the best engineers in the world who want to work on open source, and so they can build out the ecosystem around their products and models. It's 100% a commercial thing, not just some philanthropic exercise, which I think is important. And secondly, truly open models can of course be used for anything.
On a truly open model, there is no downstream limitation on how the model can be used. And that throws a big spanner in the works for fair use defenses, because, as the Copyright Office made clear just this last weekend when they put out their report on AI training, how a model is ultimately used comes to bear on the question of whether it is a fair use of the data that trained it. You can't just train a model and say: well, we're training a model, we're not creating music; other people are creating the music with the model. That doesn't fly. It's obviously about what the model is used for. And with an open model, you can't put restrictions in. If you try to build in guardrails so that it won't output the copyrighted work you trained on, those guardrails will just be removed. A lot of this comes down to potential as well: with fair use, it's not just about what is actually being done, it's about what you are potentially facilitating. So I think a lot of the fair use arguments have a lot of trouble with open models. And in general I'm really wary of open models as regards creators, part of the reason being that open models are irreversible. You can't take them back; a closed model you can turn off. I strongly believe, as the US Copyright Office basically said, that some AI training is probably fair use and some isn't. So I think we should expect that some rights holders' lawsuits will be successful and some won't. Fair use is determined on a case-by-case basis; you'd expect different cases to go different ways. So you should expect that some AI companies are going to have to turn off their models.
They're going to have to retract them. Right now, a closed AI company can do that: it can turn the model off, and then it's not accessible anymore. But once you've got an open model out there, there's nothing you can do to get it back. You can make use of it illegal, but that's going to be very hard to police. So while I am a big advocate of open source in some areas, I think there are real issues with open source and copyright, basically.

You've launched this organization called Fairly Trained, which is trying to set a new standard for the industry. Could you tell us what you're trying to do here?

Yeah. Fairly Trained really came out of conversations I had immediately after leaving Stability, when it ended up blowing up a bit in the news. I think that's partly because, while a lot of creators, who I really applaud, had been flying the flag for creators' rights and trying to shine a light on this issue, most people in the AI world had been pretty silent on it. And here was someone from the AI world saying: actually, no, this is not legit; we should not be just stealing people's work to make money off it. This is terrible. I think because of that it got a bit of news coverage, and one of the things I found was that journalists were saying to me: that's interesting, I thought AI could only be built by stealing people's work. I was kind of shocked, but there's no reason they would have known otherwise. A lot of the handful of models doing things legitimately were not that well known. As I say, some of them have been struggling to raise money. Not all of them, but some have. It's hard to take the right path.
And so I thought: we should do something about this. We should highlight the fact that there are these companies. We should try to help them. We should also try to help people understand that this is a viable option: you don't have to use plagiarism machines; you can go and use models that are built fairly. So that's where it came from. The idea we landed on was a very simple certification for AI models that aren't trained on copyrighted work without a license. We have a certification process that these companies go through; I think we've certified 19 to date, across a range of modalities. There's music, there's voice, we've actually got one large language model, and there are other modalities. And that's the purpose. There are some companies who have said: look, we are only going to use AI models that are Fairly Trained certified, or that at least meet this bar. Fairly Trained is a nonprofit; I don't pay myself. I'm not in it to make a big success of Fairly Trained, I'm in it to try to elevate these companies. So I don't mind whether people take the certification badge as gospel or do the diligence themselves and make sure these companies hit the same bar. Some use our certification mark; some just do their own diligence but try to hit our criteria. These companies are basically saying: if we're going to use AI as a company, we're only going to use fairly trained models. And I think that's really good. But at the same time, we're not going to affect the public's feelings on this.
The public are always going to use the best and easiest model available to them, and I don't fault people for that at all. Before there were legal ways of streaming music, a whole bunch of people used Napster and the like, right? If it's easy, you're just going to go use it until there are really viable alternatives. So I don't think we're going to affect the public consciousness that much, but I think that's okay. What we provide is something that people and companies who care about this can turn to, and so we try to gradually change views that way. But it's also something that legislators around the world can look to and point to as an example that this is possible. That's one of the things that honestly frustrated me the most two years ago, a year and a half ago: there was this idea, generally put forward by AI companies, that it was impossible not to do what they're doing, that they have to. I hope that what we're doing at Fairly Trained shows that that's not the case.

How is it possible to certify the data that goes into training a model? What are you doing at a technical level?

Well, like many certification schemes, I guess you could describe it as a self-certification scheme. These companies aren't certifying themselves, but it's based on information they provide to us. There are ways you could technically scan datasets, but that would still be based on trust, because you'd have to trust that the company was giving you all their data. There is at the moment no way of taking a model and just reverse-engineering a list of all the data it was trained on. So we can't do that; that's off the table.
So we have to have some trust-based mechanism. We have a process where companies submit a bunch of information in response to questions we pose, including lists of their training data, and then we go and check that. Sometimes it's very easy, because some of the companies we've certified aren't, for instance, large language models and don't use a ton of different data. Sometimes it's difficult and there's a ton we have to go through: we look at all the sources, and we drill down and drill down until we have a high level of confidence that this data is clean, essentially. There are other parts of the certification too, like having good internal processes to make sure these standards keep being met going forward. But the crux of it is: what is your training data? Do you think a certification process could ever scale to some of the big players, OpenAI, Google DeepMind, Anthropic and so on? Yeah, I think so. Fundamentally, the biggest thing stopping that is a lack of transparency. If these companies would just reveal their training data publicly, then of course you could check it. It might take time depending on the kind of data they've used and where they've got it, but you could automate parts of that relatively easily. There's no issue with the checking side; the issue is the lack of disclosure, and there's a fight going on in various places around this right now.
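The disclosure-then-check workflow described here could, under heavy assumptions, be partly automated once a company publishes a training-data manifest. Below is a minimal sketch of that idea; all names and data are hypothetical, and a real audit would use robust perceptual or audio fingerprints that tolerate near-duplicates, not exact text hashes:

```python
import hashlib


def fingerprint(text: str) -> str:
    """Toy content fingerprint: SHA-256 of whitespace-normalized, lowercased text.
    Real systems would use fuzzy/perceptual fingerprinting instead."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()


def works_in_manifest(my_works: dict[str, str], disclosed: list[str]) -> list[str]:
    """Return titles of a rights-holder's works whose fingerprints appear
    in a company's disclosed training-data manifest."""
    disclosed_set = set(disclosed)
    return [title for title, text in my_works.items()
            if fingerprint(text) in disclosed_set]


# Hypothetical rights-holder catalogue and disclosed manifest
catalogue = {"Song A": "la la la", "Song B": "do re mi"}
manifest = [fingerprint("la la la"), fingerprint("something else entirely")]
print(works_in_manifest(catalogue, manifest))  # ['Song A']
```

This is the shape of the "checking side" he says is not the hard part: given honest disclosure, matching a catalogue against a manifest is straightforward to automate.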
In the UK, where I'm from, the House of Lords has twice proposed to the government a really simple addition to a piece of legislation that basically says AI companies must disclose the training data they use. That's basically it, which to me seems like common sense. AI companies argue that their training data is their secret sauce, but it's not. What you do to the training data, how you augment it, how you filter it, what you ultimately choose to use, there may be some trade secrets there. But the sources, where you got it, that's not a secret. For a start, everyone is just getting as much as they can; there's no secret to that. Secondly, if you run an AI music company, for instance, it's absolutely trivial to come up with a list of all the people you could license music from, all the big companies you'd go to. I know, I've done it; you can do it in an afternoon. There is no secret to where you get data. So I don't buy the argument at all that your training data is a trade secret. Obviously these companies are saying that because they know that if they reveal their training data, they get sued. That's what would happen, and I think a bunch of people would win those lawsuits. That's why they don't want to reveal their training data, but that's all it would take. So in the UK, the House of Lords has proposed this a couple of times, and the government has rejected it, using arguments that are basically based on procedure.
But really, the reason they're rejecting it is clear. It's because they are very close to the big tech companies. They want the big tech companies to keep opening 100-person offices in London, boosting the job count a little. I wouldn't go so far as to say they've been bought by tech companies, but they clearly place US tech companies' interests over their own creative industries and their own country's creators. So they're rejecting amendments to bills that would literally just make AI companies fess up to what they're training on. That's all it would do. And you've got to bear in mind that training on copyrighted work in the UK is straight-up illegal. There's no fair use debate; there's not even debate around this. You just can't do it in the UK at the moment, which is good law, and shows the strength of our copyright system. So yeah, there are big fights going on around this kind of stuff. I agree that if the companies showed what they had been training on, it would be revealed that they have been training on copyrighted data. But I also think there might be special cases in which the companies have paid scientists or researchers a lot of money to produce very valuable training data that's not publicly available, or they might have generated a bunch of synthetic data that's also difficult to produce. Would that fall into the category of a trade secret, the secret sauce that they can't reveal? No, I think there's a difference between actually revealing your training data, as in sharing the actual data in an S3 bucket and letting people go through the actual words you're training on, and revealing lists of training data.
And I think that's a critical difference. We do this with Fairly Trained: we don't ask to see all of the data you've commissioned. What we ask is to know that you've commissioned that data. You don't have to show us all the words that are in it, and that's totally fair enough; I can see why that would be secret, and there's no reason people need to know it. But ultimately that's not the argument being had. The argument is: should we have transparency over lists of training data? The key is that copyright holders need enough information to know whether their work is in the training set, and at the moment they don't remotely have that, because there is just no transparency at all. They have to go and red-team the models to try to find out, and it's hard work, and obviously that stops them exercising their rights. So I totally agree with you, and the same goes, potentially, for synthetic data. Synthetic data is interesting, because synthetic data itself, I think, can be a way of laundering copyright. If you use a model that's trained on a ton of copyrighted books, then create a load of synthetic data, then train a new model on that synthetic data, in my mind you are infringing copyright just as much, and doing as much harm to the authors, as you would be if you cut out the middle step of the synthetic data training.
So at Fairly Trained, when we evaluate synthetic data, you have to meet the same criteria with the whole chain of models that was used to create that data. You can't just wipe your hands of it and say, well, we only use synthetic data, because we ask: where does that synthetic data come from? I think that should be included, and actually that's a major issue with a lot of the transparency regulation that has been proposed. In general, legislators have missed the synthetic data problem. They will say, provide us with a list of the copyrighted works you've trained on, and I don't think that's nearly enough. We need a list of the training data, and if a bunch of that is synthetic data, attached to that should be an explanation of the models it came from and, similarly, the training data that went into those models. If you didn't train that model yourself, which you might not have, you at least need to disclose what the model is. The ultimate test should be: can a third party, looking at the list you provide, reliably go and check it themselves and find out whether their work is anywhere in that training stack? Anything short of that is not good enough, and it's a very simple bar to try to hit when companies provide a list of the data they've trained on. Wouldn't it be fairly easy to simply exclude data that they don't want anyone to know they've trained on? You can't prove a negative; you can only look at the information they've provided on the data they've trained on, but they might be running another training run using a whole bunch of copyrighted data. How would you deal with that issue? Yeah, I think you deal with that in two ways.
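The "whole chain of models" criterion for synthetic data has a natural recursive structure. A minimal sketch of that rule (data structures and names are hypothetical, not Fairly Trained's actual process): a source passes if it is licensed original data, or if it is synthetic and every model in its generating chain passes the same test.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class DataSource:
    name: str
    licensed: bool                          # licensed, public domain, or owned
    generated_by: Optional["Model"] = None  # set if this data is synthetic


@dataclass
class Model:
    name: str
    training_data: list[DataSource] = field(default_factory=list)


def source_is_clean(src: DataSource) -> bool:
    """Synthetic data inherits the status of its whole generating chain."""
    if src.generated_by is not None:
        return chain_is_clean(src.generated_by)
    return src.licensed


def chain_is_clean(model: Model) -> bool:
    """A model passes only if every source, recursively, is clean."""
    return all(source_is_clean(s) for s in model.training_data)


# Laundering example: unlicensed books -> base model -> synthetic text -> new model
upstream = Model("base", [DataSource("scraped books", licensed=False)])
synthetic = DataSource("synthetic text", licensed=True, generated_by=upstream)
laundered = Model("new-model", [synthetic])
print(chain_is_clean(laundered))  # False: the upstream model is tainted
```

The point of the recursion is exactly the one made above: labelling the middle step "synthetic" does not launder the unlicensed data at the bottom of the stack.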
Ultimately, at the societal level, if we actually get transparency legislation, then as part of that you've got to have audits. I think audits are key to that kind of legislation. And then this is also where red-teaming can come in, because if you say you haven't trained on Harry Potter and people can get Harry Potter out of your system, then you're obviously lying, right? So I think a combination of audits and red-teaming can solve that issue pretty well. On the point of synthetic data, do you think it might be too late to fight this fight if we have open-source models that can generate fairly high-quality text and images and potentially music, which can then be used to train other models? In some sense the cat might be out of the bag, because, as you mentioned earlier, you can't remove these open-source models from the world. I don't think so, and that's obviously an opinion AI companies love: it's too late, don't bother regulating this. I don't think so, for a couple of reasons. One: yes, there are open-source models out there, but a bunch of them, in my view, will almost certainly be found to be breaking the law in how they're built. You can't take them back, but you can forbid people from using them. You can make it illegal to disseminate them, to host them, to use them. It's not going to stop all use, but honestly it's going to stop a lot of it. You solve a lot of the problem just by saying, no, you can't host that. Most people don't want to break the law. Some people do, but most don't. So I think that's one reason it's not too late.
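The red-teaming idea, "if you say you haven't trained on Harry Potter and people can get Harry Potter out of your system, you're obviously lying", can be sketched as a simple extraction probe: prompt the model with the opening of a protected text and measure how much of the true continuation it reproduces. This is a toy illustration under strong assumptions (`generate` is a stand-in for any text model; real memorization audits use far more careful statistics and many prompts):

```python
def ngram_overlap(a: str, b: str, n: int = 5) -> float:
    """Fraction of word-level n-grams of b that also appear in a."""
    def grams(s: str) -> set:
        w = s.lower().split()
        return {tuple(w[i:i + n]) for i in range(len(w) - n + 1)}
    ga, gb = grams(a), grams(b)
    return len(ga & gb) / len(gb) if gb else 0.0


def extraction_probe(generate, protected: str, prefix_words: int = 8,
                     threshold: float = 0.5) -> bool:
    """Feed the model the opening of a protected text; flag it if the
    completion overlaps heavily with the real continuation."""
    words = protected.split()
    prefix = " ".join(words[:prefix_words])
    truth = " ".join(words[prefix_words:])
    completion = generate(prefix)
    return ngram_overlap(completion, truth) >= threshold


# Toy example: a "model" that has memorized the text verbatim gets flagged.
protected = ("the quick brown fox jumps over the lazy dog "
             "and runs far away home tonight")


def memorized(prefix: str) -> str:
    return " ".join(protected.split()[8:])  # regurgitates the continuation


print(extraction_probe(memorized, protected))  # True
```

A model that produces unrelated text would score near zero overlap and not be flagged; the threshold and n-gram length are knobs an auditor would tune, not principled constants.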
Another is the current issue of model collapse. Now, I'm actually not very bought into the idea of model collapse. There's a kind of hope among what I'd call the anti-AI crowd, which I don't really consider myself part of, though my views on copyright align with a lot of theirs, so I know quite a lot of them. There's this hope that you could never train a model on purely synthetic data because it would lead to model collapse: the data just wouldn't be good enough, and the model would do really badly. And there are some signs that that's maybe true at the moment, or has been true recently. But I don't see any reason why it would hold as a general rule in the long term. I think it's similar to this: frankly, 10 or 12 years ago, no one believed you when you said AI would one day be able to create art and write music and text in a way that is as convincing as people. They thought there was no way it could happen, and they were wrong. I think model collapse is another thing like that; it seems to me like a very temporary limitation. So I do think we'll get to a stage where you can train highly performant models purely on synthetic data. I think that's likely at some point; there are already signs it's possible. So I don't think we can rely on model collapse as a get-out-of-jail-free card, and it's another reason I think rapid regulation, rapid holding of people to account, is important. So I don't think it's too late, but I absolutely think time is of the essence.
On the model collapse point, we shouldn't rely on the intrinsic features of a technology, crossing our fingers and hoping it all works out because the models will be limited. I agree with you that we are seeing early signs, in reasoning models for example, that synthetic data can lead to quite impressive results. So I wouldn't hold out hope if your perspective is that model collapse will prevent the models from ever violating copyright in bad ways. Yeah. It's funny: while I have spent the last year and a half really advocating for AI development to take a pause, to say, hang on, why are we all building our models based on theft, at the same time, I don't know if I'd call myself a futurist, but I tend to think that technology is going to advance a lot and bring a lot of benefits. It'll bring massive risks as well. One of my general opinions is that with AI, we know humans are intelligent; intelligence has already come about once. So to me there's no reason that everything we can do won't one day be physically possible in machines. It seems pretty self-evident that it's not impossible, because it's already been achieved once. That's my starting point, and it's why, back in 2010, I was convinced that AI would be able to write music before long. I was a bit off in my timelines: I thought it would happen sooner, and I also thought music would come before art, and I was wrong about that; art slightly beat music, for interesting reasons, I think. But ultimately the tech is going to be able to do this stuff.
And so we have this problem where we get really hung up on the issues of today, and they're going to be solved, probably from corners we don't expect. It might not be OpenAI or Anthropic who solves these things; it might be some new startup. But they're going to be solved, and they'll be solved in the next few years, and then what? You can't rest on your laurels and think it's fine because machines can't do this. That is always a bad argument and a dangerous path to go down, because almost always they will be able to do that thing, probably within a few years. It's very important here to notice the pace of change. If you say in 2020 that models can't do some simple task and you just wait a couple of years, well, then maybe they can. And I expect the same thing to happen over and over again. Basically, I agree; it feels like we're still in a cycle of that happening. I remember meetings at Stability, it still feels like yesterday I was there, in 2022, where we were still talking about when the year would be that AI music was going to break out. I said 2023, and by then it was very close; it was kind of easy to predict. I got my prediction wrong back in 2010, when I thought it would be done quicker than it was. But even back in 2022, AI music was far from solved; it just wasn't really working. And this is true across modalities. It's happened in video; we're currently in the video cycle, I guess. I think it's happening pretty quickly in robotics,
where you're seeing, in my mind, robotics advancing very rapidly, in many ways thanks to training models in simulations and then transposing that over to the real world. I wouldn't be surprised to see it happen in other technologies too, like brain-computer interfaces. It seems to be almost part of the innate human condition that we extrapolate out from where we are now and find it very hard to picture and believe that rapid change is possible. We take today's limitations and imagine that some version of them will still be there in a decade's time, and I think that's a mistake. This is quite interesting, because a key question here is: how does the debate around copyright and fair use fit into these larger, grander questions about the future role of people and the future of humanity? What's the connection there? Well, I think there are a couple. There's a big question around work. I am someone who worries about the downsides of AI; I'm excited about some of the upsides and anxious about some of the downsides. I personally think that one of the biggest risks in the near term from, let's call it general intelligence or superintelligence or hyper-intelligent AI, basically AI systems that are supremely capable, is the potential mass displacement of labor. And I think the creative sector is the canary in the coal mine here. Generative AI in this form has only been around a couple of years, and we are already seeing data that shows that creatives are being outcompeted. There's data on this already from Upwork.
There are a bunch of papers showing that freelance writing tasks and freelance graphic design tasks fell in the wake of some of these models being released and never recovered. I know people whose income has just totally fallen, and they've been told it's because the people who previously employed them are now using AI. So I think creatives are the canary in the coal mine here. I worry when I see, just a month ago, a company announce its intentions; this company is called Mechanize. You may have seen it; there was a big kerfuffle around it about a month ago. It's got investment from Jeff Dean at Google, from Nat Friedman, from Dwarkesh Patel, all of these big names in the world of tech or tech-adjacent fields. And I interviewed one of the founders of the company recently. There we go. I confess I haven't heard that; I will listen to it. But the mission is to automate all work. And if you look at the investors behind this, this is not some fringe movement, not some fringe idea in Silicon Valley.
There is a real body of thought that says: given the combination of general intelligence, possibly superintelligence, and general-purpose robotics, there is a real possibility that we can automate, if not all work, and let's be honest, it's probably not all work, it's not going to be politicians, priests, or sportspeople, then a huge amount of it. And we're going to try, because this is the Marc Andreessen "software is eating the world" idea. It's ultimately about making money, about trying to replace different sectors. And I use "Silicon Valley" loosely here; I happen to be here, but what I mean is this kind of philosophy, this idea in the tech industry. There is this idea that, hang on, maybe this is our chance: all of these holdout industries where we haven't really been able to make inroads, maybe this is how we get into them, maybe this is how we take them over. So that, I think, is a big thing, and something I'm worried about. Now, there are arguments about whether we'll get there with the current paradigm of large language models, reasoning models, robotics, where we are now, and I think that's a sensible debate to have. I'm not sure whether we will; I wouldn't say we necessarily will. But I'm concerned about two things irrespective of that.
One: there is such a huge amount of investment being poured into automation that I think it's very likely we get new innovations coming along, such that even if large language models don't end up being the route to AGI, something else might well be in the near term. I don't think we should rule it out; I think we should consider it a significant possibility. The second thing is the desire to do it at all, the fact that it is people's aim to go and automate all work. And I understand the position that says, well, fully automated luxury communism, let's all just relax the whole time, let's basically have early retirement. But I think that's a very naive view, given how political establishments work and given how obviously, totally unprepared the world, the US, any major economy is for the rapid displacement of labor. So that's a big concern for me. Perhaps a glib question here is to ask: why is this a bad thing? If you think of a peasant 300 years ago, maybe there would have been some worries about automating agriculture to a large extent. But historically, this has turned out great: we've been able to massively increase productivity and living standards and so on. Why isn't this just the next turn of the wheel of that trend? In a sense, why should we be pessimistic here? Well, I'm not saying we should only be pessimistic, by the way. I think we need to be very alert to this possibility and act accordingly; that doesn't mean we have to be pessimistic about it. One difference, which I think is pretty obvious, is the nature of the technology we're now building, which is much more general, where the intention is to be general-purpose.
And this makes it, I think, very different from many of the revolutions of the past. Again, simply the fact that you have some of the biggest names in the tech industry investing in a company whose mission is to automate all work shows that there is a different idea at play here than there has been previously. So I think we should take that seriously. Then the question is: okay, if you assume we hit that, why is that an issue? Wouldn't it be great to all relax? One of my concerns, and this comes back to the copyright question, is that what this whole copyright fight has shown me, maybe more than anything, is that a lot of the people at the forefront of building this stuff honestly seem willing to trample on people's rights in the pursuit of personal gain and profit. That's basically what's happening here, in my mind. They see an opportunity for vast wealth, vast riches, and they look at copyright and think, well, that's getting in the way. The people whose livelihoods depend on copyright, the people they're putting out of work, are not really an issue for them. And that worries me, because if at this stage the people building this technology aren't going to respect people's rights, aren't going to take it seriously when a whole chorus from an entire industry turns around and says, "What are you doing?", why would it be any different later? Why would they have any more respect for people? They're not paying copyright holders; why will they ultimately pay anyone? How is this money going to be distributed to other people if it's not being distributed at the moment? So I don't see the political will there to take seriously the idea that there might need to be that kind of mass redistribution.
Demonstrably, the AI companies themselves don't have that will. So that's why, again, I'm not saying we should think we're all doomed, or be purely pessimistic about this. But when highly funded, highly motivated, very smart people say we're going to try to automate all labor, I think we should take them seriously. Basically, what do you think the future of culture looks like? What will be the long-term effects of having these technologies that can basically create replicas or copies of different styles? When the price of generating text, imagery, video, and audio drops massively, what happens to culture? I think one of the most immediate effects is that a lot more people find it a lot harder to get into the creative industries, because a lot of the long-tail jobs that would have supported them through their early years will go. That might be writing ad jingles if you're a musician, or writing production music, or doing some sort of copywriting if you're a writer. All these jobs are already on a downward trajectory, largely thanks to AI, and I think that's a major issue, because there is going to be a big blow to the creative industries, and that will have knock-on effects. So I think that's going to affect culture. I also think we'll see, and there's a question as to how popular this will be, a further rise of remix culture. At the moment you are not allowed, for the most part, to replicate people's voices or to take a copyrighted song and rework it into something else without permission, and it can be very hard to get that permission.
But media companies, rights holders, and creators are open to licensing their voices, their likenesses, their music, their works, under the right conditions. And as soon as licensing gets done at scale, you will have the ability, as a consumer, to remix. I think the barrier between being a consumer and a creator will start to, not fully disappear, but at least weaken. So you'll be able to say, that Taylor Swift song is cool, but can I hear Noel Gallagher singing it? That'll be a possibility; of course it will. You'll be able to have the AI model do it, as long as the licensing and the permissions are in place. I think that's all going to be possible. Now, there's an open question as to how popular that kind of thing will be. A lot of people in tech assume it's the future, and while I think it will be a part of the future, I actually have quite high faith that concrete works, we can call them, so recorded music, or a book in its final form, or whatever it is, will for a long time remain the norm in creative culture. Some of that will be generated in the first place by AI, but I think that ultimately we are very attached to the idea of these concrete works that we can all share, that we can all talk about, that can be part of the public consciousness in a way that I think hyper-personalized content won't be as exciting to people. So I think that's a big reason it will stick around. You don't think consumers would be interested in, for example, talking to an AI Taylor Swift, having a video call with her while she plays music for you, and saying, I want to hear a different song?
I want to hear more of this, less of that. Or you could imagine a book that expands in the sections you're interested in, so it changes as you're reading it. But of course then you don't have, as you mentioned, the cultural common knowledge of what's in a work. Would that be the barrier to creating this more interactive form of entertainment? Yeah, I think there are two things at play there. There's the ability to interact with an avatar of a creator, and there's the ability to have personalized content, in the media form that creator is known for, written for you. And I suspect the former will be exciting and the latter less so. I don't know whether you'll be speaking to your Taylor Swift avatar or whatever, but in general I'm a big believer in the future of the voice-and-language interface, in being able to talk to your computer, to digital devices, to avatars. That's clearly the way things are going. So I can absolutely see music fans wanting to have some kind of virtual conversation with their favorite artists.
But ultimately my bet would be that when they then say, can you now play this, it will be one of the artist's songs. Look at some of the AI music generators: one of the ways they advertise themselves is "write a song about anything", your trip to the coffee shop, your mom's birthday, whatever it is. And in general my impression so far is that it's an absolute moment of magic the first time you hear it, you cannot believe it's possible, and then you never use it again. My bet is that the retention figures for that kind of usage are atrocious, because people just don't really want that use case. There are use cases for AI music, 100%, but personalized music in that way, "write me a song about X", I don't see becoming a very big part of music culture. I think the song as an entity, as a thing that's set in stone over time, will remain a really important part of music culture, and I think that extends to all the arts, with their own forms set in stone. Do you think AI could change the winner-take-all dynamics we see in music, where the most famous and influential musicians get most of the plays and most of the views? If you had the ability to create your own music, maybe you would see more of a broad market.
I don't think so, frankly, because there's already a broad market. This is one of the things about generative AI: it lets you create music from scratch, but when we say it democratizes creativity, that's only true to an extent. Creativity is already pretty democratized. It's not perfect by any means; it helps if you have rich parents and go to a school that lets you study it, and there are all these things that really help. But ultimately a lot of people can write music, can learn how to write music, can produce music, and a lot of people do. The amount of music being released every day before generative AI came along was absolutely astonishing. There is no shortage of music, no shortage of options. And despite this huge abundance of music, you still have a few people rising to the top. Why is that? One, to an extent it's inevitable in the kind of culture we have: people get popular. And connected to that, two, a lot of it doesn't come from how the music is made; it comes from recommendation systems. A group of Spotify researchers wrote a paper in 2020 where they looked at one month, July 2019, of listener data from around 100 million Spotify users. They found, and this is predictable, but they showed it with the data, that when people listen to recommended songs and recommended playlists, the diversity of the music they listen to is far smaller than when they take what the researchers call user-directed action. When people just go searching for music themselves, they find a load of cool stuff. It's really diverse.
When you go down these recommendation paths, everything becomes homogenized and you end up listening to the same thing over and over again. So that's already a trend we're on: recommendation systems through YouTube and TikTok and Spotify, this is already a path we're on. And so I don't think that AI letting more people, we can say create, but letting more people generate music from scratch, really does anything to affect those winner-take-all dynamics. Do you think there's a general effect of culture becoming more homogenized over time, where perhaps in the future most cultural products will have this kind of AI style, influenced by, say, how culture works in Silicon Valley and the values that are put into the models there? I don't know. I worked on the TikTok recommendation algorithm; that was my job. The flip side of these kinds of models is that different people end up having very, very different feeds. Now, that can turn into filter bubbles, which isn't necessarily great, and you can go down some bad paths. But ultimately these recommendation algorithms, especially with short-form content that you're using a lot, which most people do, very quickly learn the kind of thing you like, and they show different things to you to try to work out what that is. That's why you have different pockets of TikTok, all of these different styles emerging. So I think recommendation systems can, if they're constructed right, take you down these good paths of discovery, and in that sense culture doesn't become too homogenized.
Now, maybe that will always be fringe, niche communities, but one of the exciting things about where we are, I think, is that a lot of it comes down to the user. If you as a consumer want to go and find some interesting stuff to listen to, to watch, to read, it's never been easier. That's what's exciting. So I think we're in this interesting time where you've got two extremes. On the one hand, if you just can't be bothered as a consumer, everything will be homogenized and you'll just hear the lowest common denominator stuff, a lot of which will end up being AI slop. It'll be awful. But if you can be bothered, you can go and find really great stuff, and the tools are there at your disposal to do that. And I think the extension of all of this is that the backlash to generative AI is just massive. I think people in tech still underestimate this; they underestimate the huge strength of feeling against generative AI. It partly comes from the fact that artists' work is being stolen to build it, it partly comes from the fact that these companies are out-competing artists, and it partly comes from the fact that people consider generative AI to be dumbing down their professions and art in general. It's a whole host of reasons, but it's really strong. People in tech like to point to the introduction of the camera or the synthesizer and say: look, people rejected recorded music when it came out, they thought it would be the death of music, and it wasn't; people got over it. And they look at AI and say the same thing. I think that's a mistake, because I think the strength of feeling is much, much greater with AI.
And I think what that's going to lead to is what I kind of think of as a new humanist movement in the arts, which I suspect will entail a rejection not just of AI but, as a result of that, probably of some other technologies as well. I wouldn't be surprised to see in music, for instance, a humanist movement emerging that is maybe more acoustic in nature, that favors less production, that favors live music, acoustic instruments, things a machine couldn't do, being there in person with someone. I think that could be a strong artistic movement, and I'd be surprised if it doesn't strengthen over the coming years. Is what people are searching for perhaps a sense of authenticity in what they're consuming, that what they're enjoying is something coming directly from another person and not something overly produced and perhaps AI-enhanced? I think so. Look, at the moment you wouldn't necessarily get that impression from the press, or if you're on social media, just because the AI chorus is so loud and so hard to avoid.
There is just so much money flowing around the AI ecosystem right now that it benefits people to become, basically, AI influencers who will constantly share content and say: look, this is changing the world, Hollywood is dead, everything will be generative in two years' time. Mostly people do this because they'll get more followers that way, they'll make money that way; it's all a money-making game. But ultimately there's a huge rejection of many of the AI companies' practices from musicians, for instance. Look at what we've been doing in the UK: we organized a protest album called Is This What We Want?, and a thousand British musicians co-created and co-sponsored this silent album in protest at the government's plans to give their music away to AI companies for free. And it included absolutely huge artists, like Kate Bush and Max Richter. The same is happening across all the arts: you're getting some of the biggest creatives in the world coming out pretty strongly, not necessarily against all generative AI, but against some of the very common practices that AI companies are utilizing. And I think inevitably that turns into a movement towards the authentic, towards the natural, and ultimately towards the human, and I suspect that will get bigger and bigger. What would be the principles of this new type of humanism? What would such a movement be trying to promote, and trying to reject? I think fundamentally, at its core, any kind of new humanist movement would essentially be about putting humans first.
That's obviously very high-level and vague, but I think initially it will be more about a rejection of a few specific practices than a specific creed or set of guidelines. And I should say, I'm not sure this movement exists yet; I just see it as a broad direction of travel. There are things that would clearly not align with a new humanist movement in the arts, let's say. One of those is obviously taking people's work and training models on it without their permission when they are all, en masse, telling you they consider that theft. That's not humanist. Building models that are designed to out-compete humans in the creative space, for instance, would not, I think, fall into this category either. Now, I think it will remain pretty vague and pretty all-encompassing, but there will be certain aspects of the current technological paradigm that will not be accepted by this group. And as I say, I think what you'll see is real-time interaction between people, a rejection of some of the most modern technologies, and a reversion to traditional practices. And look, the best songs ever written were recorded on modern technology. I don't think any new humanism would totally reject modern technology. But ultimately, you take Yesterday by the Beatles: you can just sit in a room with a guitar and play it to one other person. And I think that is humanism, right?
It's that versus a song that was composed on GPUs by a model trained on the Beatles' work without permission, which someone then shoves onto Spotify to make money and take it away from the royalty pool of other musicians. That is the difference between these two movements. I think one worry with such a new kind of humanism would be that this set of values would face competitive pressures from other groups, other companies, other countries, and so on. Perhaps a humanistic approach is not the most efficient approach, and therefore it will fade away, because, as you mentioned earlier, consumers will grab what is best and easiest, and there will be demand for whatever most efficiently produces what consumers want. Yeah, I think in general that's true, but in creativity it's not necessarily true. Creativity isn't all about efficiency. The most widely loved songs and films in the world were not made in a manner where efficiency was the bar. That's not what people are going for; they're going for creating something great. And when you look at the AI race, at countries worrying about whether China will get AGI before us, whoever us is, I don't think the politicians who are worried about that are really thinking about creativity, right?
They're not thinking about the creative industries in all of this. Politicians don't mind that much whether the next AI music generator is built in their country. Take the UK at the moment: the UK government is just all-out favoring AI companies. It's astonishing. This is a party that is called Labour; it's meant to stand for working people, and it's basically in the pocket of big tech companies. The reason they're in the pocket of big tech companies, I think, is not that they're desperate for the next AI music company or the next AI image company to be built in the UK. Frankly, I think they'd probably rather it wasn't. They do value human creators, and they certainly don't want human creators' work to be stolen in the way these companies are stealing it. Their primary concern, I think, is AGI, and that's a very different thing; they don't want to be left behind, and that, I think, is driving a lot of the politics. So when you speak about creativity, I don't think efficiency really comes into it or is what's driving decisions, or ultimately driving consumers' decisions either. You're never going to listen to a piece of music because it was made efficiently. You'll have an AI influencer saying, "This is wild. Ten songs I made in under ten seconds. You'd never tell they weren't human." Cool. You'll get fifty retweets, but no one's ever going to listen to that song again. No one's going to care, and I think that's good.
Yeah, I agree that when politicians worry about competition with China and so on, they're thinking about national security, autonomous weapons, AGI and superintelligence, and perhaps not as much about generative AI. But depending on the scenario we're in, if we're in a longer-timeline scenario, it might be the case that having control over culture is a kind of soft power in the world, as it has been, I think, during the 20th century, for example. Do you think countries will seek to influence how culture is produced through AI with the goal of projecting soft power? I don't know; I don't immediately think so. This is actually why I think it's so crazy that governments like the UK's are so strongly considering basically upending copyright law to favor AI companies and penalize creators. What you're going to do by that is increase the amount of slop out there. You're going to eat into real creators' royalty streams, into the royalty pools. You're going to make it harder and harder to build a career as an actual creator. You're going to undermine what is a very strong industry at the moment. I suspect soft power through the creative arts will largely continue to come from where it does now, which is from having supremely talented people backed by great industries, and I think that's what countries like the US and the UK have at the moment. Anything you do to undermine that is probably a very bad idea. That's kind of where I come out on it. But I don't know.
As a last question here, what are your priorities for the next few years with Fairly Trained? What do you find the most important to do? Honestly, right now, the creative industries and the people who make them up, the creators, the rights holders, are a group I care a lot about. I'm a member of it myself, and my entire career has been working with this in mind. I think what they face right now is an existential threat to their industries, to people's ability to make money from being creative, and therefore to the art that we all consume. I think this is the biggest threat these industries have faced, certainly in living memory, and probably going back a long way before that. Trying to shift the Overton window, trying to move the needle towards any outcome that is fairer for creators than the current circumstances, I think is really important. And there are a bunch of strategic questions about how you do that. We're trying to do it with Fairly Trained by showing the kinds of models that are possible without theft. There are lots of other ways you can do this as well. There are great companies building marketplaces of training data. There are people building public domain datasets. There are people doing all this work to make it easier to train models that aren't based on theft. There are people working on trying to disentangle, or shine a light on, the black box of these models, such that you can start to understand which training data has gone into a particular output, more or less. There are people doing research in all these areas.
I think all of this is really important. But in general, if AI companies win, God forbid, all of the lawsuits in this space, and/or it becomes settled law that you're just allowed to take people's work to build AI models that will compete with them, I think that is a terrible world for creators. And there are lots of people, myself included, basically 100% focused on how we try to guide the path in a slightly different direction. I don't think we're going to stop AI development, and I don't think we necessarily should stop all AI development; AI is a very broad field. There are people who say ban AI from the creative industries. I think that's unrealistic, and I also don't think it's the right approach. But I certainly think we don't want to end up where the big tech lobbyists at the moment want us to end up. Ed, thanks for chatting with me. Good to chat.