OpenAI Structured Outputs, character.ai "acquisition," and is it an AI bubble?

Is it an AI bubble? In episode 15 of Mixture of Experts, host Tim Hwang is joined by our veteran panel: Marina Danilevsky, Kush Varshney and Shobhit Varshney. Tune in to hear the experts chat about the stock market crash and the involvement of AI companies. Then, dive into a discussion of OpenAI's Structured Outputs, and hear the experts analyze how it can support enterprise implementation of AI. Finally, hear the experts discuss Google's "acquisition" of character.ai and whether it makes any sense. It's a full breakdown of what's happening in AI.

The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.

Episode transcript

Tim Hwang: Wall Street spooks pretty easily and hypes pretty easily, and they're also on a cycle that research certainly is not. "Structured Outputs" is probably the sexiest release of this summer. It's like you're breaking this bucking bronco that just came out of the blue. Does the acquisition of Character.ai make any sense at all? You have to know what your value add is and how much of that is a differentiator with high moats, so others can't just come in and do what you do. All that and more on today's episode of Mixture of Experts. I'm Tim Hwang, and I'm joined today, as I am every Friday, by a genius panel of technologists, engineers, and more to help make sense of another hectic week in AI land.

On the panel today, we’ve got three guests: Marina Danilevsky, who is a senior research scientist; Kush Varshney, an IBM Fellow working on issues surrounding AI governance; and Shobhit Varshney, a senior partner consulting on AI for the US, Canada, and Latin America.

All right, so let’s just get into it. The first story of the week is a big one, but I want to start with kind of a round-the-horn question. Let’s just start with a quick yes or no to kick off the discussion, which is: are AI companies going to bring down the American economy? Kush, yes or no? What do you think?

Kush Varshney: No.

Shobhit Varshney: No.

Marina Danilevsky: No.

Tim Hwang: Okay, we have uniform skepticism on that position, and that's actually what I wanted to get into. If you've been keeping your eyes on the financial news this week, markets were massively down across the board internationally, and there was a lot of speculation as to why. People proposed everything from the unwinding of exotic financial positions to concerns about the Fed not cutting rates, but one thing a number of people argued was: should we blame AI, the hype around AI, for this? Part of this claim was based on the idea that the companies really leading the downturn, and arguably a big drag on indexes like the S&P 500, were tech companies that have made big bets on AI in the last 12 to 24 months.

So I want to get the panel's opinion. Kush, maybe we'll toss it over to you first. Do we buy this as a theory? Why should we or shouldn't we believe that AI is a contributor to this downturn, and that what we're seeing is a popping of, or at least growing skepticism toward, the idea that AI will have these big macro effects? I'm curious why you said no to the first question there.

Kush Varshney: Yeah, I mean, there are clearly hype cycles with everything, but I think the economy has a lot more to offer. It's a very broad-based sort of thing. AI is kind of the cherry on top or the icing on the cake. So, yes, it affects perception, but less so the fundamentals, and I'm of the view that it is really about the fundamentals at this point. I think that'll change over time, but not right now.

Tim Hwang: I think part of this is also following on the tails of what we've been talking about for the last few episodes: these reports coming out of banks and other financial firms raising some skepticism around the excitement about AI. There's the Goldman Sachs one that we talked about a few weeks back, and also the Sequoia report that some people might have seen. It is true, though, that the tech companies have made a genuinely big bet on the market for AI. And I guess I'm curious, maybe Shobhit, I'll throw it to you: are you seeing clients following those jitters? Are they reading these reports and saying, "Well, maybe AI is not providing what we thought it would. Should we be a little bit more cautious about how we make these investments?"

Shobhit Varshney: So I don't think the clients—and I'm talking about the Fortune 500 companies—have to read these reports to realize that in certain areas AI has been overhyped, and in certain areas it is under-utilized. Right? There's absolutely no confusion about the fact that AI is going to have—is having—a seismic impact on businesses going forward. No CEO can say that the next five years are not going to be massively impacted by what AI can do. It's a question of how you apply AI surgically in your processes, and how you think about a data strategy that then leads to an AI strategy that then delivers value for you. Right? So the conversation has changed to, "All right, after experimenting for two years, we have a good sense of where AI and GenAI are working well. We now need a good mechanism to figure out the high-value unlocks in the business, appreciate that it's a combination of AI, automation, and generative AI—it's not all GenAI handling the entire process end-to-end—and make sure that our data estate, our people, and our processes are aligned to unlock that value." So I think there's a significant appreciation of the value it can bring, but also of the fact that it's a journey and you need to take steps along the way to make sure you're getting that value unlocked. That's very clear to all my Fortune 500 clients.

Tim Hwang: Yeah, I think that's one thing our listeners would benefit from your expertise on, Shobhit: you put forward the idea that there are under-hyped areas of AI. I'm curious, when you say that, whether you have particular areas in mind where you're like, "This is where businesses aren't looking. There's a lot of hype in the space, but this seems to be where some of the hidden value is." I'm curious if you can speak to that a little bit.

Shobhit Varshney: Yes, I think it's the mundane tasks. How do you make sure that every employee across an organization can experiment in their day-to-day workflows with AI, with generative AI, in a very secure and governed way? Within IBM Consulting, for example, we have 160,000 consultants who wake up in the morning and do all kinds of varied tasks. There is a small subset of people who are AI gurus; they feel that if it's been 20 minutes since LLaMA 3.1 landed and you don't have it running locally, you're an embarrassment to society. There's a small portion of those. But the other 85% of consultants are doing things like code creation and testing, or the marketing campaigns and finance workflows they've been running for the last 11 years. So, "I'm going to get an invoice, I'm going to marry it against the contract and the purchase order, and I'm going to approve it or disapprove it." Right? Those kinds of mundane workflows have a human in the loop. Think about how Excel got embedded in those workflows; everybody figured out how to use Excel to improve their day-to-day work. We're at that same point today with generative AI getting embedded. So you need to get to a point where every IBM consultant has an assistant—we call ours "Consulting Assistant," as an example; it could be Microsoft's Copilot, Amazon Q, or Google's equivalent—but you need to democratize people actually messing with their day-to-day work, figuring out, "Oh, this email that I write 100 times a month can be automated." And that's the value unlock: get your end employees to start experimenting in a governed way, so that Kush doesn't have a heart attack, making sure we're doing this in a way that doesn't get us into trouble.

Tim Hwang: Yeah, in some ways what you're describing, Shobhit, has ended up being the 800-pound gorilla of the AI world. You know, I love the joke that you start OpenAI because you really want to create AGI, but slowly but surely the gravitational well of being a B2B SaaS and offering that as a service is where the gigantic amount of money is. Marina, I did have a question for you based on what Shobhit just talked about. I know Shobhit made the distinction with people saying, "Okay, if you can't implement LLaMA 3 on day one and revolutionize all your business processes, you're a waste of society." I'm curious: there's one standout company here, which is NVIDIA, which is hardware, and that is a company that has been hit rather hard in the stock market, and I think was one of the examples people pointed to and said, "See, this is why AI is hyped." Do you buy that? I mean, is NVIDIA indeed the most valuable company in the whole world? And how should we think about hardware in this picture? Will hardware continue to be the most valuable piece of this AI pie, at least as far as the stock market is concerned?

Marina Danilevsky: I mean, it's a dependency. Just talking in pure engineering terms, you are pretty much tied to it. As far as NVIDIA going up in value and crashing in value: Wall Street spooks pretty easily and hypes pretty easily, and they're also on a cycle that research certainly is not. They want to know, "All right, Q1, what do you got? Q2, what do you got? Q3, what do you got?" That's not the rate at which research actually happens. So when you have preliminary results, Wall Street will get over-excited, and then the results next time are not as good, and they get over-depressed. And we actually have the same thing in research: "You've got to deliver your Q2 breakthrough, Marina!" I can't promise my Q2 breakthrough. I can't guarantee that research breakthroughs are going to happen in three months on the dot; that's not how this works. So I would say this is, to some extent, a mismatch between the schedule of Wall Street and the schedule of research in a new area, an area that we don't yet understand very well, and that's a lot of what we're actually seeing here.

Tim Hwang: Yeah, that’s fascinating. It’s almost like you’re saying we should not be looking to the stock market to judge the value of the AI space, in part because the market doesn’t know how to value it at the moment. Is that kind of what you’re saying?

Marina Danilevsky: I don’t think it’s very clear yet. I don’t know if Kush disagrees, but I actually don’t think we know very well yet how to value AI properly.

Shobhit Varshney: I'm with Marina on this, and I don't think the common stock investor understands the impact, especially in the enterprise space, and what it can do for people. We've just been dunking on AI stocks, saying, "Hey, you are leading to the downfall of the economy." We should look at the positive side too: AI is also contributing insanely to the overall economy. So you should give AI enough credit for lifting the entire stock market up as well, not just look at one trading week and say, "Hey, the market is down X points because of the large NVIDIA swing." Just look at the world we live in: in the last few months, NVIDIA has swung a trillion dollars in market cap. A trillion! Just pause and realize how much of an impact that's having on people. It is people reacting to "Oh my God, I don't want to miss out," but also not knowing at what point you're investing in the fundamentals, or whether you're pulling out of the stock too early. Even massive firms like Ark Invest ended up missing the boat on NVIDIA and lost a billion dollars of opportunity there. So you need to understand the fundamentals and stay long in the market, versus reacting to these quarterly ups and downs. I'm not arguing this, but you could almost make the argument that the biggest meme stock in the whole world is NVIDIA. It's not GameStop; it's not anything like that.

Kush Varshney: I was just gonna agree with Marina, and with what Shobhit was saying as well: this is a long game. We don't really know how to value these things yet. It's not like some commodity where you can grab it, hold on to it, and see what it's doing. So I think we'll get better, just like we've had trouble valuing data. Valuing the models, and what we can do with them, is going to be part of this as well.

Tim Hwang: I'm going to move us on to our second segment of the day. OpenAI this week announced a new feature they call "Structured Outputs," and this is huge, although it might not seem like it on the surface for people who are not in the day-to-day work of AI. Effectively, what they're offering is that, for the very first time, developers can constrain the model's outputs to match specific schemas defined by engineers. And this is a little bit nerdy, but I think it's worth walking through the technical points here, because it's one of those areas where, if you dive a little into the technical details, you may recognize why, out of a summer of lots and lots of AI announcements, this may actually end up being the biggest announcement of the summer in some ways.

So I’m going to try to explain this, and then I think Marina, you’ll keep me honest. You should be like, “That’s completely wrong, Tim. You’ve completely misunderstood what they’re trying to do.” The way I understand it is that language models are of course very powerful; they can do all sorts of remarkable things. But the problem is that they kind of output in non-deterministic ways; they produce outputs that are difficult to constrain and standardize. And this has been a really tough problem because you have to take AI and then you have to connect it to all these other traditional systems that are expecting structured data. There’s a computer just being like, “Well, I’m expecting a table that has the following elements within it.” And it’s been very hard to integrate language models with that. And is what OpenAI is saying here that you can finally, for the first time, do that reliably? Correct me if I’m wrong; I’m just kind of thinking through this.
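For readers who want to see the mechanics Tim is describing, here is a minimal sketch of a schema-constrained request using OpenAI's Python SDK, following the shape of the Structured Outputs announcement. The prompt, schema, and field names are illustrative placeholders, not examples from the episode.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A minimal schema-constrained request. With "strict": True, the API
# guarantees the reply parses as JSON matching the schema below.
response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # the model version announced with Structured Outputs
    messages=[
        {"role": "user", "content": "List two fruits and their colors."},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "fruit_list",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "fruits": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "name": {"type": "string"},
                                "color": {"type": "string"},
                            },
                            "required": ["name", "color"],
                            "additionalProperties": False,
                        },
                    },
                },
                "required": ["fruits"],
                "additionalProperties": False,
            },
        },
    },
)

# Valid JSON every time, e.g. {"fruits":[{"name":"apple","color":"red"}, ...]}
print(response.choices[0].message.content)
```

The point of the feature is that the downstream system can parse this output directly, with no regex cleanup and no risk of a free-text blurb where a table was expected.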

Marina Danilevsky: The thing I'm actually going to push back on is this whole "finally, for the first time" thing. This is not the first time. Before, we were all saying structured and semi-structured outputs are where it's at. What you do with unstructured data—this is work that I've done for years—is try to turn it into something more structured, so it becomes features you can feed into a classifier, feed into ML, and go from there. Then everybody said, "Oh, foundation models! Now it doesn't matter. No more structure is needed, no more structured data is needed. Unstructured data is going to be everything." You work with that for a while and you go, "No, guess not." All right, we're going to walk it back a little. Let's go back to the fact that, especially if you're trying to mix and match a heterogeneous system, you do need structured output, because these things don't know how to talk to each other. So I'm going to pretty strongly push back on the "for the first time," and say that now that we're trying to be practical about it, we've come back to the fact that you need to impose a bit of structure.

I would also connect this with the success of code models, where there already is a lot more structure imposed on what kinds of things can go in and come out. There are some lessons being learned there again: "Oh, maybe we don't do just generally unstructured text," and we're going to go back to having a bit of a mix. Kush, would you agree with that? That we're kind of back?

Kush Varshney: Yeah, no, I think that’s exactly right. I mean, one way to look at it is you’re kind of breaking this bucking bronco that just came out of the blue in the last couple of years and bringing it back to where it should be. Right? The control and the governance, all of that is part of making these things practical. And I think another way to look at it is, one good thing about these language models is that they’re very creative; they’re coming up with all sorts of different things. But it’s really a trade-off: safety versus creativity. And the control, the constraint, is bringing us back to that safety aspect. If you’re inspiring a poet, go ride that bronco, it’s all good. But for all of the enterprise use cases that we care about that are going to make the productivity differences and all that sort of stuff, then that extra control is where it’s at. Yeah.

Tim Hwang: For sure. So Shobhit, am I just being an OpenAI shill here, really hyping this feature where I guess Marina is just telling us like, “This has all been said and done before; they’re just selling something that everybody has known how to do for a long time”?

Shobhit Varshney: So, Tim, hot take on this: this is OpenAI appreciating and admitting, for the first time, that the whole workflow end-to-end won't be done by an LLM. By releasing this, they have admitted that at step number three, somebody's going to call an LLM and expect it to behave in a structured manner, so it can be one part of a team that does an end-to-end flow. Other aspects will be automation, RPA, some regular AI, and plain old API calls. The LLM is now down at a subtask level, rather than being the thing that does the entire process end-to-end. Right? So I think it's a really important signal about what they're doing.

For practical deployments for me in the field: we're launch partners with OpenAI, and we do a ton of OpenAI with clients in our workflows. Last Monday, actually, we were working with a large healthcare client where we're reading reams of different documents and extracting things from them. So, talking about my healthcare coverage, I need to know what's in-network, what's out-of-network, what's family coverage, what's single coverage, and so forth. When we use an LLM to extract these things and run it against our rubric for checking accuracy, quite often it responds with a blurb instead of giving me the "in-network" and "out-of-network" amounts. Historically, the way we solved this was to phrase the questions carefully and provide some coaching, saying, "Just respond with the actual dollar amount." The problem there was that it would respond with "14.9," and in three out of ten cases it would forget to include the "million." Right? There are practical issues with leveraging these large language models. And then we'd say, "Okay, fine, just give me the entire thing," and then, to Marina's point, use a small regex somewhere to extract what I need and plug it back in. That was a horrible way of doing things in production. Yeah, that's awful.

Having a commitment now that says, "This is the JSON I'm going to get, and if the model can't fill in a number—if it doesn't know what the single coverage is for out-of-network—it'll be null, it'll be blank," means I can do something in a structured manner, raise some alerts, and build a workflow accordingly. I think it's brilliant that they're allowing you to do this. This, combined with the price drop that came with it—a 50% decrease on inputs, 33% on outputs—makes it very, very easy for us to plug in. The GPT-4o mini price is just rock bottom; it's slow, but it's very inexpensive to deploy. And even the fine-tuned versions of mini: they're now letting you fine-tune these models very easily and have structured output around them. So they've understood that instead of a generic, top-down "I'll take care of the entire thing," it comes down to the subtask level: the model has to be fine-tuned for that task, it has to be super inexpensive, and there has to be a good contract on the structure of what goes in and what comes out. Right?
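As a concrete illustration of the pattern Shobhit describes (the field names here are hypothetical, not from the client engagement), strict mode expresses "fill it in or leave it null" by giving a field a union type with null while still listing it as required:

```python
# Hypothetical schema for a coverage-extraction task like the one Shobhit
# describes. In strict mode, every property must be listed in "required" and
# "additionalProperties" must be false; a field the model may not find gets a
# union type with null, so an unknown amount comes back as null rather than
# as a free-text blurb that a regex has to clean up downstream.
coverage_schema = {
    "name": "coverage",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "in_network_single": {"type": ["number", "null"]},
            "in_network_family": {"type": ["number", "null"]},
            "out_of_network_single": {"type": ["number", "null"]},
            "out_of_network_family": {"type": ["number", "null"]},
            # Forcing an explicit unit field guards against the
            # missing-"million" failure mode Shobhit mentions.
            "unit": {"type": "string"},
        },
        "required": [
            "in_network_single",
            "in_network_family",
            "out_of_network_single",
            "out_of_network_family",
            "unit",
        ],
        "additionalProperties": False,
    },
}
```

Passed as the json_schema portion of response_format, as in the earlier sketch, a null value can then raise an alert or route to human review instead of silently corrupting the downstream workflow.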

Tim Hwang: In other words, a good tool to be used in the enterprise. Super interesting takes on this; it definitely went in a direction I wasn't expecting, but one that I think is very helpful in thinking through why OpenAI did this. The final aspect I want to touch on was very funny to me. As a software engineer turned lawyer, I read this very long blog post about Structured Outputs, and at the very end it says, "Oh, by the way, it's not eligible for zero data retention." I thought that was a very interesting part of the announcement. Normally there's the promise that OpenAI will not train on any data you send in through the API on an enterprise basis, but in this one case, if you send in a schema, they're going to train on that.

And I guess for our listeners, I think it’d be useful for them to hear some intuitions for why it is that OpenAI sees this data as so uniquely valuable, that they’re going to say, “We’ve got this general policy of zero data retention, but for this tiny little segment we’re going to cut out a hole, and if you send us your schemas, we definitely want to train on that.” Kush, I see you nodding; I don’t know if you want to speak to why they would do something like this.

Kush Varshney: Yeah, I mean, I was reading the announcement as well, and I think they're taking two different technical approaches to make this work. Right? One is just training on more and more of these schemas; the second is constrained decoding using a context-free grammar, to make sure that what comes out really matches the schema. On the first of the two, it's really hard to get the variety of schemas that are going to be out there. This is not something you can just download from the web. In some of our work, we also look at very unique enterprise policy documents and other material like that, and it's just not easy. I was talking with one of my group members yesterday; we were trying to figure out what the policies or guidelines are for different professions, and I was looking, "Can I get the New York State barber license guidelines? What does a barber need to do to do their job?" There's tons of material like that that is really not out there. So just the uniqueness of it is the key, I think.
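To make the second approach concrete, here is a toy sketch of the general idea behind grammar-constrained decoding. This is not OpenAI's implementation, just the core mechanism: at each generation step, any token that would take the output outside the grammar has its logit masked to negative infinity, so it can never be sampled.

```python
import math

def mask_invalid_tokens(logits, vocab, output_so_far, grammar_allows_prefix):
    """Set the logit of any token that would violate the grammar to -inf,
    so constrained sampling can never choose it. In a real system,
    grammar_allows_prefix would be derived from the JSON schema compiled
    into a context-free grammar, as the announcement describes."""
    masked = list(logits)
    for i, token in enumerate(vocab):
        if not grammar_allows_prefix(output_so_far + token):
            masked[i] = -math.inf
    return masked

# Toy "grammar": the output must be a string of digits.
vocab = ["0", "1", "a", "{"]
logits = [1.2, 0.7, 3.5, 2.0]   # the raw model actually prefers "a"
allows = lambda s: s.isdigit()  # stand-in for a real grammar checker
print(mask_invalid_tokens(logits, vocab, "4", allows))
# -> [1.2, 0.7, -inf, -inf]: only digit tokens remain eligible
```

In a real system, the validity check comes from the schema compiled into a grammar and is evaluated incrementally over token sequences, rather than by re-checking whole strings at every step.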

Tim Hwang: I think that’s absolutely right, and I think that will be the increasing battle, it seems like. As all of the easy-to-get data is now accessible, the question is like, who’s got access to very hard-to-get data? And these schemas, they’re valuable tokens; they’re unique tokens in a lot of ways.

Shobhit Varshney: This has been a big struggle for us with our clients in enterprise settings. When we bring in a new product, we go through enterprise security governance; we have to make sure it's being used in a particular way, everybody signs off on it, and so on and so forth. When you outsource your API calls to a third party, then every time the API calls change, or they do something differently, or, as in this case, there's a retention issue with the schemas, you need to go back through the whole process. And I don't think enterprises have a good mechanism to understand, capture, and then act on each of these incremental updates. So it scares me a little that enterprises will approve a product in a particular state, but it evolves so rapidly that you won't be able to go back and say, "This small incremental thing has to be done differently." The data scientists will get super excited about these function calls and structured outputs and start using them, and that's where Kush and team are going to come in and say, "Guys, time out. There has to be good discipline around how you govern the incremental updates happening to these, so you don't get yourself into trouble." So I think that's a very unaddressed issue with at least my enterprise clients.

Tim Hwang: So I’m going to move us on to our final story of the day. It was announced last week that Noam Shazeer, who was the CEO of Character.ai, was going to rejoin Google along with a core team from his company, and also that Google was going to acquire a license to all Character.ai IP. This is widely seen, though it’s disputed, as an acquisition ultimately of Character.ai, which had raised something like $150 million and was basically building personalized companion AIs.

I really want to go into this story because it's part of a trend of acquisitions in the space that I think is very interesting, and it gets us thinking about how this market is going to evolve and what we really anticipate from AI startups in the next 12 to 24 months. Kush, I wanted to turn to you first. Why is a company like Google interested in a company like Character.ai at all? Google has all the resources in the world to do all the AI; why is it acquiring companies at great cost? Couldn't they just build a character product on their own? We'd love to get your thoughts on what's motivating this in the first place.

Kush Varshney: Yeah, I think that's similar to the question of why IBM Research exists: why don't we just keep acquiring a lot of startups? I think there's always going to be a balance between organic growth and acquisition. There's always a spark of some idea, and you can't assume that you're going to have all of them. In these cases there is something unique, a market that they've touched on, something that maybe only a startup can tap into because they have a different pulse on the scene. So I think it makes sense for a company like Google to have a mix of ways that it grows. Yeah, for sure.

Tim Hwang: To push you a little bit further on that, do you think it’s because there’s some kind of complement? What’s the angle that you think Google’s trying to chase after here? Because I mean, it’s a search company, right? Ultimately, this feels very consumer in some ways, what they’re trying to do.

Kush Varshney: Yeah, I mean, maybe they don’t think they’re a search company going forward. I don’t know. Maybe they’re edging on to more things or other things. But I think just, once you get something interesting, something exciting, that just draws customers to you, draws consumers to you, and then you can keep them and get them into other stuff. So yeah, as part of a pivot, for sure.

Tim Hwang: So maybe we could take the other angle on the story. You can see it from the perspective of the acquirer—why would Google do something like this—but I think it's also worth investigating from the perspective of the startup. Marina, there was a bunch of commentary online where people were saying, look, you've seen Adept go through a similar transaction; there's another company called Inflection that went through a similar transaction. These are companies that raised an enormous amount of money and were, by all accounts, very successful—maybe some of the most successful startups in the AI space—and yet the founders are choosing, effectively, to sell. They're choosing to go and join the big tech companies. Do you have a theory for that? If you're sitting there as Noam Shazeer and you've raised $150 million—that's certainly more money than I've ever raised—what motivates these founders to say, "Okay, actually, I want to throw in with the big companies," rather than trying to make it on their own? And does it suggest problems in the startup market, do you think?

Marina Danilevsky: I mean, even $150 million can be burned through pretty quickly if you're doing a whole bunch of your own training. There might be a case here of wanting a pre-baked user base, or pre-baked access to a whole bunch of resources, which a company like Google or a company like Meta is going to be really quite good at providing. And, again, potentially other people to collaborate with. I really will second what Kush said: you've had one or two or three good ideas; it doesn't mean you're going to have 40. And there really are a ton of extremely interesting, smart people working at these companies. So it may be that there's a desire to have that partnership be much closer in order to see those ideas through.

Tim Hwang: Yeah, zooming out to the macro level, Shobhit, do you think that... what does this presage for startups in the AI space in general? Are you seeing more AI startups over time? Because I think there’s almost one way of reading this, which is, well, even if these companies that have raised so much money can’t make it independently, no one can make it, right? Like, we’re about to see a lot of consolidation in the AI startup space.

Shobhit Varshney: I think the core values, the fundamentals, haven't changed. You can't have a thin wrapper around an OpenAI API call and expect it to keep drawing customers. You realize that the intellectual property you've built is what people are going to pay for, and the talent you've assembled—that particular team—is what's golden now. Big companies will also get very creative to structure deals around outright acquisitions and work around antitrust rules and things of that nature. So in this case, they're not acquiring it; they're hiring some people and licensing some IP, and so on and so forth. You can see there's some motivation to not just acquire it outright.

But on the flip side, just like in any startup environment, you'll also see companies walk away. Who was it? Wiz, which Google was trying to acquire, and Wiz walked away from a $23 billion offer. I'm laughing because that is a literally hilarious amount of money; it's insane. And the co-founder of Wiz wrote a very humble letter to all the employees explaining, essentially, why you're not getting rich today, why he's not taking the offer: it's a very humbling offer, but here are the reasons we believe going IPO is a bigger value-add, and so on and so forth. Historically, we have seen a lot of misses and hits—Google trying to sell itself to Yahoo, or Netflix to Blockbuster. All of these are reminders that you have to know what your value add is and how much of that is a differentiator with high moats, so others can't just come in and do what you're doing. It takes a while to understand where you lie in the competitive landscape.

And my forecast: I think we put undue pressure on founders who are just passionate about building a product, but all of a sudden we surround them with venture capitalists who have objectives other than "I need to build a business." I think they need to bring back Silicon Valley, the show, with episodes set in today's world of LLMs.

Tim Hwang: For sure. I saw this great Twitter thread about what Silicon Valley would look like if we modernized it: basically, everybody's in AI. It goes to a point Marina raised in our first segment, though. It almost feels like this is the micro version of the market not being able to price these startups properly. In a lot of these cases, big companies like Google are ultimately acquiring the talent rather than the product. Character.ai you can maybe debate, because it actually had a big install base, but at the core of it is simply a team of people who seem to be able to get what they want out of the AI, and that ends up being a huge value almost separate from "Did you have a blockbuster AI product release?" And it goes to these interesting questions about how you actually value these companies, because it's just so unclear in such a fluid environment.

Any final thoughts on this? Super, super interesting. And, to argue against myself, this is also the same week we saw a bunch of top leadership leave OpenAI, right? So it's not necessarily all consolidation; it's possible that people are moving between big companies and also creating new startups of their own. Any final thoughts to round this out for today?

Kush Varshney: Just one. A conversation I was having with my brother-in-law last week, not related to this, was about the difference between running your own business and doing a job at a big company, and the lifestyle issues there. And like the point you were making before, Tim, about wanting to make one product versus building a business: maybe a lot of the folks getting into this right now are not in it for that business-building way of going about things. So maybe this is just a way for them to return to their natural state. That could be driving it as well: more of a lifestyle issue. Yeah.

Tim Hwang: I believe that for sure. I mean, personally, it's crazy to do a startup. I've got a friend who was a founder who says, "It's literally an irrational act to do a startup." So... Well, great. On that note—no shade to anyone else who has already been on Mixture of Experts as a panelist—but I have to say, this is my favorite panel. The Marina, Kush, Shobhit power trio just gets the best conversations every time. I appreciate all three of you coming on the show. And for all you listeners, thanks for joining us this week. If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere, and we will see you same time next week.
