Will modern AI break the music industry? Join episode 12 of Mixture of Experts, and listen to host Tim Hwang along with experts Chris Hay, Marina Danilevsky and Brent Smolinski discuss the week’s biggest AI news. Start the podcast by listening to the experts review the Goldman Sachs report on investment in gen AI, “too much spend, too little benefit.” Next, hear the experts break down Claude Engineer 2.0 and the future of coding agents. Finally, dive into the debate around the lawsuits filed by the Recording Industry Association of America (RIAA) against two generative music companies. Tune in to hear our experts’ takes on these issues!
The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
Tim Hwang: Hello, and happy Friday. You’re listening to Mixture of Experts; I’m your host, Tim Hwang. Each week, Mixture of Experts brings together a wide range of specialists to separate the AI signal from the AI noise. We tackle the biggest stories of the week and distill them down to just what you need to know. This week on the show, three top headlines. First, the banks weigh in: Goldman Sachs is out with a harsh report on the future of generative AI, claiming the space still has a long way to go to prove its value. Are the bankers out of touch, or do we think they’ve got some good points? “Like, they went from one extreme to the other a little bit, right? Like, I almost feel like there’s something in the middle.” Second, AI developer Pietro Schirano is out with Claude Engineer 2.0, which adds a code editor and code execution agents to an already powerful command line interface tool. What does the next stage of coding assistants look like, and who’s currently winning in the Anthropic versus OpenAI matchup? “The interesting thing about Claude Engineer is it’s really embraced the agent methodology. It’s agentic, it is goal oriented, and I think that’s where we really are going to be going as an industry.” Third, the Recording Industry Association of America, or RIAA, has launched a lawsuit against generative AI music companies Suno and Udio, claiming mass copyright infringement. How might copyright shape the generative AI space, and what does it mean for the future of training data?
Tim Hwang: As always, I’m joined by an incredible group of panelists that will navigate what has been another action-packed week in AI. Today we’ve got Chris Hay, distinguished engineer, CTO customer transformation; Marina Danilevsky, senior research scientist; and, joining us for the very first time, Brent Smolinski, global head of tech, data and AI strategy. So first up, I want to change a little bit of what we typically do and ask you all a yes or no question, and then we’re going to dive into the story of the week, which is the Goldman Sachs report. The question is this: is AI a bubble?
Brent Smolinski: Thanks for having me.
Chris Hay: No definitely not.
Tim Hwang: Marina, what do you think?
Marina Danilevsky: AI, no. Generative AI, a little bit.
Brent Smolinski: Yeah sort of.
Tim Hwang: Well, with those extremely definitive answers, let’s move on to our story for the week.
Brent Smolinski: Listen, I think if you look back a little over a year ago, when Goldman Sachs first published their article on the impact that generative AI would have on the market, they predicted something like a 7% GDP lift, which, just as context, is the size of the North America healthcare market. That’s a massive impact. I think what we’re starting to see now, in their latest publication, is that they’re beginning to backpedal on that.
Tim Hwang: Yeah, and I think that’s a great intro; it’s exactly what I wanted to talk about. Just a few weeks ago now, Goldman Sachs released its update, Brent, that you’re referring to, and they kind of fessed up. The conclusion of that report is that the current state of generative AI is, quote, “too much spend for too little benefit.” This follows on the heels of some other cautious statements coming out of Sequoia, obviously a prominent VC fund, and McKinsey, which works with a huge number of companies and has been at the forefront of pushing generative AI as an enterprise use case. And that’s where I want to start today: this moment of hesitation. The last 24 months have seen crazy growth in generative AI and crazy excitement, but I think now the industry is almost starting to ask, okay, so what happens next? Marina, maybe I’ll throw it to you next. One of the numbers that was most striking to me in the Goldman Sachs report was that they talked to Daron Acemoglu, the prominent MIT economist, who estimates only about 5% of tasks are really genuinely at risk from what’s been happening in generative AI.
I guess, Marina, do you buy that as an estimate, as someone who works on the technical side of all this? Do you see capability really becoming much broader over time, or do you think this estimate is inaccurate about the kinds of tasks that are really going to be at risk in the economy?
Marina Danilevsky: Given the state the tech is at right now, I’m actually not that far off from what Daron says. There’s still a lot to be thought of as far as things we could do with this technology, but it’s very clear that we haven’t quite thought of it yet. So when it comes to, is it a bubble right now? Yeah, a little bit, as far as the hype versus what the capabilities actually are, what the reliability actually is. Something that’s always really interesting about core technological research is that you sometimes don’t know what the applications are until later, so it’s always very interesting to push those boundaries. But yeah, there’s got to be a difference between hype and actual usability, especially when it comes to things that are reliable. At the moment it’s good as an accelerant; it’s good at speeding people up at the tasks they’re already doing. All right, but is that enough to justify the valuation of Nvidia as the most valuable company in the world?
Tim Hwang: I guess, Chris, I recall the last few times you’ve been on the show, you’ve always been our hard-bitten cynic. I don’t know if you agree with Brent and Marina here, or if you’re more of a contrarian, like you actually feel more optimistic than what these bankers are saying, because what do bankers know anyway? I love the hype; we wouldn’t have this podcast if there wasn’t any hype.
Chris Hay: So no, I enjoy this; every few weeks, the hype has to stay. I did read the report, though, and I can’t remember who said it, but one of the guys said, ah, nothing’s going to happen for the next 10 years, this generative AI is a complete waste of time. And I was thinking back to the 1960s, when someone said there’s only a need for maybe five mainframes in the entire world, and I’m just like, oh my goodness, I wouldn’t want to be writing that in that report. I wouldn’t want to be quoted in 10 years’ time as the guy that said generative AI was a waste of time. Now, it is early, obviously, but the technology is progressing so fast. I was talking to a customer yesterday and brought up the models that were popular this time last year. Think about it: this time last year, Llama 2 had just come out, there was no such model from Mistral, the Granite models were just out, Claude 2 had just come out, never mind Claude 3.5 Sonnet, there were no Turbos from OpenAI, and we were talking about the Falcon models, we were talking about Vicuna. Nobody talks about these anymore. Everything has moved so fast, and then this year we’re like, agents, agents, agents. So if I look at the kind of time frame they’re talking about, the next three to five years, 10 years: this industry is moving so fast, and the capabilities are getting so much better. I’m happy for them to say it’s a bubble, because that’s going to create more space for people to get on and do the work. More opportunity.
Tim Hwang: Yeah, exactly.
Brent Smolinski: But this is not going away, that’s for sure. It feels like they went from one extreme to the other a little bit, right? I almost feel like there’s something in the middle. I think one of the analysts said that there’s no killer application of AI. That to me seems like an odd statement, because, well, I think what he meant, what they’re really getting at when they say AI, is large language models. But the reality is that AI permeates so much of our applications today. Even in this session we’re having right now, AI is being used to do signal processing, clean up the video, and so on and so forth. It permeates just about every application we interact with today. So I don’t quite get that statement; I think what he meant was large language models.
Unidentified Speaker: I agree with you a lot, actually. AI has been around for a very long time and does a lot of interesting and good things. AI just means, hey, the computer’s doing something useful; there’s some kind of statistical processing happening. And gen AI is actually a relatively small part of that, so it’s not that fair to take gen AI and say, okay, this is the only AI that matters anymore. Yeah, it’s the one we’re paying attention to, but it’s built on the shoulders of giants in some ways; there’s been so much work going on for so many decades. The relentless march of AI progress: the technology continues to evolve and continues to permeate our application landscape in ways people don’t even realize.
Tim Hwang: And I think that is something we do forget quite a bit. For a long time it was like, eh, what’s happening in NLP? Everything’s about computer vision, that’s the really exciting thing; or everything’s about reinforcement learning, that’s really how we’re going to get to next-generation systems. And then everything sort of flipped in a very unexpected way. It reminds me of a tweet I saw that was amazing. There’s this adage in financial markets that the market can stay irrational longer than you can stay solvent, and the person’s tweet was basically that a sigmoid can stay exponential for longer than you can stay solvent, which I think is beautiful in some ways. One thing I did want to touch on in the report is that it focuses on some interesting potential constraints on growth, which I think are really genuine. We debate about how far the tech can go in the economy, but one of the most interesting stats they cited was the idea that, per query, the power draw for something like OpenAI is about 10 times the power draw for a Google query. And it is true that energy is becoming kind of a constraint on these things: if you want to run mega, mega, mega clusters, it turns out that in the United States there are actually only a few places that have the physical plant necessary to do this. I’m curious if you all buy that. Whether we’re demand constrained is kind of anyone’s guess, but we may very well become supply constrained.
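The per-query comparison Tim cites can be made concrete with a quick back-of-envelope calculation. The figures below are rough public estimates, not numbers from the report itself: roughly 0.3 Wh for a Google search and roughly 2.9 Wh for a ChatGPT-style query.

```python
# Rough public per-query energy estimates (assumptions, not measured values):
google_wh = 0.3   # ~0.3 Wh per Google search
llm_wh = 2.9      # ~2.9 Wh per large-language-model query

# The ratio behind the "roughly 10x" figure discussed above.
ratio = llm_wh / google_wh  # about 9.7x

# Scaling up shows why energy becomes a constraint: one billion LLM
# queries per day would draw about 2.9 GWh/day versus 0.3 GWh for search.
daily_gwh = 1_000_000_000 * llm_wh / 1e9  # Wh -> GWh

print(f"{ratio:.1f}x per query, {daily_gwh:.1f} GWh/day at 1B queries")
```

Even at these rough numbers, the gap compounds quickly at web scale, which is the supply-side worry the panel is debating.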
Brent Smolinski: I mean, listen, these same kinds of arguments were applied to cloud computing like 10 years ago. People were worried about power consumption, yet we were able to build out the infrastructure and solve those power problems. I would argue even the algorithms underlying a lot of these models are improving and becoming much more efficient, which translates into computational efficiency, which translates into energy. So I think these problems will get solved. In my mind, the biggest problem to figure out is how do I get value out of this. Once we begin cracking the AI value problem, applying some of these large transformer-based architectures to real-world business problems, I think that’s going to unlock a floodgate of demand. And at that point we could begin talking about the supply constraint, but right now I think it’s a second-order problem, and I’m very confident it will get solved.
Tim Hwang: Marina, I’m curious if I could turn to you for a last thought on this story. Is it right to say, and I don’t know if you agree with the statement, that there may very well be a bubble in something like language models, but we should doubt whether there’s a bubble in AI at large? I don’t know if you’d agree with that as a way of framing up what’s going on here.
Marina Danilevsky: I don’t think there’s a bubble in AI. I think there are seasons, Ecclesiastes-style: we have winters, we have summers, and on it goes. Right now there’s a lot of attention, but also I’m like, all right, I was doing NLP before it was cool; I’m going to be doing NLP after it stops being cool. Those of us on the ground are just going to continue to push, and that’s where these things come from. Sometimes it becomes of interest to people, sometimes it doesn’t. To the thing you said before: does it make sense to throw a large language model at every single query? Maybe not, but right now, because the technology is early, everybody’s just seeing what it can do, testing it as much as they can. It will eventually settle; it’ll stop being a hammer in search of a nail and settle into something we’re just as comfortable with as search. When search first started being a big thing, everybody was like, oh, we’re done, no more information organization necessary, we’ve solved it. No, but it’s very, very useful nonetheless. So yeah, I think we’re in a season, and the season will pass.
Tim Hwang: So for our next segment, I want to focus on a cool little thing that was being passed around Twitter fairly recently. Pietro Schirano, who’s an AI engineer and kind of a serial entrepreneur based out in the Bay Area, just updated an open source project that he maintains called Claude Engineer. It’s basically an open source project that allows coders to use Claude 3.5 Sonnet from the command line, and what I love about this project is there’s a bunch of creative features running under the hood. He’s playing around with agents, and he’s playing around with a bunch of little quality-of-life improvements. Again, this is not a big release from an Anthropic or an OpenAI, the kinds of things we’ve talked about in the past, but I do think it’s interesting, because we’ve been so locked into Copilot as the model for how coding assistants work with generative AI, and what Claude Engineer is playing around with is saying, well, actually, in the future we might want to do more than just predictive code, kind of like Stack Exchange on demand, basically. So Chris, I see you nodding, maybe I’ll go to you first. I’m wondering if you can explain, for listeners who are maybe non-experts not coding day in, day out: what is the promise here? Do you see anything interesting in what’s happening with Claude Engineer? What does the future of coding assistance with AI look like? And if there are particular things you think are cool in the Claude Engineer project or otherwise, I’d just be curious how you think this whole interface evolves.
Chris Hay: Yeah, I think it’s really interesting what he’s done with Claude Engineer. It is so simple: it is literally just a command line application. You run it in the terminal in VS Code, so no extensions or anything like that; you put in your Claude key, and then it uses all of the tools you would normally have, with agents running in the background. So you give it a task, a goal, and it can create folders on your machine, it can go and create entire files, and then it can stitch that all together to help you build entire applications. When I think about this for a second: Copilot is typically a kind of prescriptive model, in the same way we chat with our interfaces, but the interesting thing about Claude Engineer is it’s really embraced the agent methodology. It’s agentic, it is goal oriented, and I think that’s where we really are going to be going as an industry. So rather than me sitting there typing in a couple of letters, waiting for Copilot to come back with a response, getting a bit of a code segment I don’t like, deleting it, and sitting and pausing again, that Copilot pause is going to go away. We’re going to give these agents goals and tasks, and they’re going to come back and help us build our applications, and really start to orchestrate and build workflows there. And what’s really going to happen? I love Claude Engineer, but I suspect Copilot’s just going to steal all of that and build it into their extension anyway.
Tim Hwang: Yeah, that’s right. That is one really interesting element of this: how much can a project like this survive going forward, when they just get absorbed directly into the product? I guess, Chris, maybe I can turn the screw one more time. There’s one comment you just made about Copilot being very prescriptive in nature; do you want to talk a little bit more about that? I guess what you’re saying is that when you use Copilot, it literally recommends the code that you should be using as an engineer, versus, where you’re contrasting with Schirano’s project, you’re specifying more of an objective, and it’s guiding you toward that objective. Is that the distinction?
Chris Hay: Exactly, it’s assistive as opposed to agentic. When I’m in Copilot, I will type a comment or the first couple of letters, and then I kind of wait. It’s not giving me an end-to-end goal; it’s not building me an entire application. It’s really just a smart IntelliSense prompt, to be honest. It’s just going to complete the piece of code I’m writing, so maybe it deals with it at a function level, maybe at a line level, whereas with an agentic approach, and with Claude Engineer, it’s starting to scaffold entire applications and entire workflows and orchestration. That’s a completely different mindset from what Copilot has today, and I think that’s the big shift that’s happening. Again, it’s really simple, it just runs on a command line, it’s beautiful, but I think a lot of people are going to riff off of that, and we’re going to get tons of tools. I’m excited.
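The assistive-versus-agentic distinction Chris describes can be sketched as a simple loop. This is an illustrative skeleton, not Claude Engineer’s actual code: the tool names (`create_folder`, `create_file`) and the canned `fake_planner` are assumptions standing in for real LLM tool-use calls.

```python
from pathlib import Path

# Hypothetical tools the agent may call; Claude Engineer exposes similar
# file-system tools, but these names are illustrative.
def create_folder(path: str) -> str:
    Path(path).mkdir(parents=True, exist_ok=True)
    return f"created folder {path}"

def create_file(path: str, content: str) -> str:
    Path(path).write_text(content)
    return f"wrote {len(content)} chars to {path}"

TOOLS = {"create_folder": create_folder, "create_file": create_file}

def run_agent(goal: str, plan_step) -> list:
    """The agentic loop: keep asking the planner (normally an LLM call)
    for the next tool invocation until it signals the goal is done."""
    transcript = []
    while True:
        action = plan_step(goal, transcript)
        if action["tool"] == "done":
            return transcript
        result = TOOLS[action["tool"]](**action["args"])
        transcript.append(result)

# A canned "planner" that scaffolds a tiny project, to show the loop's
# shape; a real agent would derive these steps from the goal itself.
def fake_planner(goal, transcript):
    steps = [
        {"tool": "create_folder", "args": {"path": "demo_app"}},
        {"tool": "create_file",
         "args": {"path": "demo_app/main.py", "content": "print('hello')\n"}},
        {"tool": "done"},
    ]
    return steps[min(len(transcript), len(steps) - 1)]

log = run_agent("scaffold a hello-world app", fake_planner)
```

The contrast with completion-style assistance is the ownership of the goal: a Copilot-style tool only ever proposes the next few lines, while here the loop keeps executing tool calls until the planner declares the whole task finished.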
Tim Hwang: I’m curious if you have any thoughts on where some of this goes. In particular, what’s cool about it, and I think it’s what Chris keeps coming back to, is that it’s just in the command line. It’s almost unfancy: we don’t have to make a big deal about the AI being part of your coding experience; it’s just in the command line. But I’m curious what else you think might be coming down the pike with this kind of project. One of the reasons we were excited to have you on the panel for this episode is that you can start to combine stuff: okay, we have agents, and then we’ve also got RAG, and there’s a bunch of things you can start to connect together. I’m just curious how you think these types of patterns evolve going forward for coding assistants.
Marina Danilevsky: Two directions. One is by engineers, for engineers, which this very much is. Why do we have IDEs? Why do we have more than one? A lot of these things really got created by people who say, look, I know my workflow better than you know my workflow; I’m going to create tools that work for me, and other people are then going to be able to make use of them and say, yeah, that’s great. People used to have the Emacs versus Vim fight; now we have Eclipse versus VS Code versus whatever. But most of those features do come from people saying, this is something that’s helpful to me, and I’m going to do it. That’s where this project falls for me. On the flip side, when you start to be able to combine things, we might finally have something interesting going on in the low-code/no-code space, which up to now has been like, isn’t it great, we can arrange some visual blocks and stick them together, and that’s like programming? You’re programming now? No, you’re not. So we might actually finally be seeing something kind of interesting there, although again the persona is different, so you do have to design different things. But this goes to the fact that most of these things really come from individuals. Even if Microsoft adopts it later, it’s still the people on the ground that come up with the idea and go, okay, this is what actually works, guys, here, do it this way. That’s where that kind of innovation comes from, in my opinion.
Tim Hwang: Yeah, for sure. What I’m looking forward to is gen AI creating a new generation of endless nerd fights that are like Vim versus Emacs; but I’m dating myself. Brent, I’m curious if you want to zoom out for us a little bit and talk about this in the context of the broader competition. At least for me, the vibe shift has been that Anthropic is now a little bit ahead of OpenAI, right? They’re the cool cats; they’re doing the really interesting things. But a big part of the battle is what we’re seeing here with Claude Engineer: third-party engineers saying, this is so cool, I’m going to design my own third-party product on top of the services these foundation models are providing. So I’m curious about your take on this evolving competition between OpenAI and Anthropic, ultimately for the hearts and minds of the engineers producing code out there in the world, and whether you’ve got a feeling on who’s winning, who’s advantaged, who’s up, who’s down.
Brent Smolinski: Well, it certainly feels like things are shifting toward Anthropic and Claude, that’s for sure. I think a big part of it is the economics: these Claude models are significantly more cost effective than OpenAI’s, and so many of my clients are actually moving away from OpenAI toward Claude for that very reason, which is really interesting. All I have to say is, we recently did an engagement with a senior executive team at one of our clients, and the team developed this amazing prototype, an RFP generator, in like three weeks, some incredibly short period of time. It was very powerful; you could almost use it outright to generate these RFPs, with a few tweaks at the edges. I think everybody was blown away by how quickly they were able to pull this application together, and again, they built it on Claude, using a lot of these code-generation tools as well.
Tim Hwang: Well, I’m going to move us to the final segment of today, and I apologize: as a person who trained as an attorney, I’m always watching the legal side of all this. I was very curious to get the opinions of the panel on a story from a few weeks back. The Recording Industry Association of America, or RIAA, is basically the music industry’s representative, lobbyist, and advocate in the United States, and it launched a high-profile lawsuit against two companies, Suno and Udio, which are in the generative music space. The idea, if you’ve played around with a product like Suno: you download the app, you basically say, I want a song that matches the following characteristics, and it just generates the song, and it’s actually quite good. It produces this kind of really strange world where, you like Taylor Swift? Cool, you can just get a hundred hours, a thousand hours of Taylor Swift-sounding noise, basically, going forward. The RIAA sued both of these companies, essentially claiming copyright infringement, and a big part of their claim leans on the fact that these companies are training on music that is ostensibly owned by rights holders. So we’re about to see this big showdown. Similar versions of this lawsuit have popped up around OpenAI and Anthropic and other companies, but I think this is the first time we’ve seen a really high-profile one happen around music, which is very interesting. And the other thing that’s very interesting to me is how it’s going to evolve.
For example, in the book space, with something like Kindle, I feel like there was a period of time where piracy didn’t really take off, and so we have certain norms around ebooks that we don’t have around, say, music. And I think we’re starting to see that evolution happen around different generative AI applications. I guess, Marina, I want to toss it to you: as someone who’s a researcher in the space, training models in the space, the big question for me is how you think about these kinds of lawsuits. There’s one point of view which says, look, if the RIAA gets its way, there’s kind of no way to build these products, just because of the sheer number of music files you need to put into these kinds of models to get them to high performance. Do you think that’s the case, or am I overstating the risks here?
Marina Danilevsky: I think this, as usual, is going to revolve around discussions of fair use and derivative information: if you train on the music, just as if you train on the text, and then you throw it away and only keep the features and the weights of the model, what if that counts, and what if it doesn’t? But first of all, again, the cat’s out of the bag; people are going to do it anyway, so you’ve got to figure out a way to do it. Second, this reminds me of discussions of, well, what if somebody posts something on an internet platform? I’m going to misremember what the legal thing is, the DMCA, thank you, yes: the idea that you can’t sue me just because somebody put something bad on my platform. It reminds me of the same kind of thing. People are going to keep building the technology, you’re going to have to find a way around it, and the RIAA is going to push for what they’re going to push for. They’re not going to get their way, and at some point they’re going to have to learn to live with it, because again, you can’t stop people from doing it.
Tim Hwang: Yeah, it does remind me a little bit of the early 2000s, when Napster came up and file sharing became a thing, and the RIAA did the same thing: high-profile lawsuits against file sharers. And then, I guess, Marina, to your point, it didn’t really stop file sharing, but it also didn’t break the music industry, right?
Chris Hay: Uh-huh, that’s right. But I think the RIAA, or whatever their acronym is, is going to rip apart Suno and Udio. They are going to win this. I spent this morning reading the complaint, and they went after the angle I thought they would go after. Of course they went with the inputs, but if you look at the actual complaint, they focus in on the outputs, the individual songs. So they brought up, I think one of them was Chuck Berry’s “Johnny B. Goode,” and some other one in one of the other complaints, and they brought up the musical chords and said, this has this style and this has this style, these notes are identical. And this is why they’re going to win. The reason is there are prior cases, and Tim, you can speak about this a lot more: the Ed Sheeran case, where he had to prove that he didn’t steal from another song, and the one where I think the Verve had taken some lick from the 1960s and couldn’t play their song for however long. So there is prior case law on not being able to use outputs that have similar chords, similar musical progressions. They’re going to hit them with that, and they’re going to win, and there’s literally nothing Suno and Udio will be able to do about it. Even if you make the fair use argument that you make with books, that’s not going to hold true on the outputs, because you’re just going to point to prior case law and say, well, actually, this was a copyright infringement, this was a copyright infringement, and then you’re going to have to pay for all of those outputs. What will probably happen with generative AI there is they’re going to have to start checking the outputs of songs to make sure they don’t infringe existing songs. So I think it’s going to get super messy, but the RIAA is going to win big.
Tim Hwang: Yeah, and I think the messiness is really interesting, because I used to do, and still do, a bunch of work around trust and safety in AI. There, you’re trying to say, well, we’re going to use RLHF and create all these mechanisms to try to constrain the behavior of the model, and a lot of what you learn is that for anything you try to do to prevent model behavior, or to block impermissible behavior, there are lots and lots of ways of subverting it, particularly against a user that’s adversarial. And part of the worry I have here is, sure, you’re setting up a world where your model can output stuff that sounds almost exactly like this copyrighted training data, but then basically you’re saying, okay, company, now you’re responsible for preventing that. And I would ask the question: is that actually possible? From a technical standpoint, we don’t have a whole lot of examples of being able to really categorically block this, or at the very least it raises the question: when is an output so close to the training data that it really should be a copyright violation? I think that’s an open question.
Chris Hay: I think it’s a hard thing, right, because there’s so much music. But as far as the record companies are concerned, it doesn’t matter for them; any time they find an infringement, they’re just going to sue the company, and it’s going to be so difficult that it’s not going to be worth anyone’s while.
So as I said, I think it's going to be interesting and hard. Maybe it turns out not to be the case. Maybe these models will generate so much music, and maybe synthetic data will actually be the solution, because you invent a completely new style of music that isn't based on the past, and then the outputs won't infringe. Maybe that's the solution. But it's definitely going to get messy. I just don't see these companies surviving it. The actual technology is going to survive, but they might kill these two.
Marina Danilevsky: I agree with what you said at the end there, actually, Chris. So you find some way to figure out what is a dissimilar-enough distance between pieces of music that it's okay, and you do that by looking at all the music that's out there and saying, well, these two are this far apart, so you can't claim anything. There have already been years and years of study on this. We've got Pandora and Spotify, and how do we do radio, how do we do recommendations, and all the rest of it. We've got an embedding space to work with, we've got things to do there. So they'll just keep pushing to the point that it's absurd for the RIAA to complain about a really specific thing, and that's where we're going to end up.
Tim Hwang: Marina's right, that is the way around this, because then you get to make the fair use argument again. You're saying, well, I'm sampling across these different cases, I'm not infringing anybody's copyright. So I totally agree. I think we're going to end up with a new style of music, and that will be the interesting thing.
Brent Smolinski: Chris, you bring up a very interesting argument, and the question I have is: how creative can these platforms actually be? Is what they create truly original? Can it truly be original? And the other question I have is about the platform providers. They're just platform providers, right? So should they be held liable for the content that's created, or should it be the people creating the content?
Tim Hwang: Yeah, for sure, and I think we will eventually see things like what's on YouTube right now, which is this really interesting system called Content ID. The idea is that if you want to use copyrighted music, you can have it in your video, and a royalty gets paid out if it's detected that you're using that audio. So these really interesting models could emerge where, you know, you're allowed to do a Katy Perry sound-alike, and she just gets some kind of payout. And then, Marina, to your comment, the really interesting question becomes how you figure out the compensation based on closeness in the embedding space. Is this 10% Kanye, 30% Katy Perry, 10% Taylor? How we actually go about designing that kind of embedding space is going to be a super interesting engineering problem. As usual, we have more things to talk about than we have time for, but we are out of time for today. Chris, Marina, Brent, thank you for coming on the show. As always, it's been an awesome discussion, and we'll have to have you all back at some point. Thanks for joining us. If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify and podcast platforms everywhere, and we will see you all next week.
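Tim's back-of-the-envelope idea of splitting compensation by closeness in an embedding space could be sketched roughly like this. This is a minimal illustration only, not anything the companies discussed actually do: the artist names, the three-dimensional "style" vectors, and the proportional-split rule are all made up for the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def attribution_shares(output_vec, catalog):
    """Split attribution across catalog artists in proportion to
    their embedding similarity to the generated output."""
    sims = {name: max(cosine_similarity(output_vec, vec), 0.0)
            for name, vec in catalog.items()}
    total = sum(sims.values())
    return {name: s / total for name, s in sims.items()} if total else sims

# Hypothetical 3-d "style" embeddings, invented for illustration.
catalog = {
    "artist_a": [0.9, 0.1, 0.0],
    "artist_b": [0.1, 0.9, 0.0],
    "artist_c": [0.5, 0.5, 0.7],
}
generated = [0.8, 0.3, 0.1]  # embedding of a generated track

shares = attribution_shares(generated, catalog)
```

In practice the embeddings would come from a learned audio model with hundreds or thousands of dimensions, and raw cosine similarity is far too crude a proxy for the legal notion of substantial similarity, which is exactly the open question the panel raises.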