Your brain on ChatGPT, human-like AI for safer AVs, and AI-generated ads

Watch the episode

In episode 61 of Mixture of Experts, host Tim Hwang is joined by Kaoutar El Maghraoui, Gabe Goodhart and, joining us for the first time, Ann Funai. First up, a new paper from MIT: “Your brain on ChatGPT”. Are we using AI and LLMs to augment our intelligence, or are we becoming optimally lazy? Next, our experts explore the surprising evolution of autonomous vehicles: they are driving more aggressively, and the results might actually be... safer? Finally, a conversation about AI-generated ads, AI video generation and the risks that come with them. All that and more on today’s episode of Mixture of Experts.

  • 00:01 – Intro
  • 01:11 – Your brain on ChatGPT
  • 13:38 – Safer autonomous vehicles?
  • 24:39 – AI-generated ads

The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.

Listen on Apple Podcasts, Spotify, YouTube or Casted.

Episode transcript

Ann Funai: Is the human baseline for driving what autonomous vehicles should be trained on going forward? I mean, if a Robotaxi acts one way, a Zoox acts another, and a Waymo acts a third way, are they expecting a human response from every other vehicle? They have a known response from other vehicles in their own network. But now you’ve got this whole other set of variables. How do you even train for that?

Tim Hwang: All that and more on today’s Mixture of Experts. I’m Tim Hwang, and welcome to Mixture of Experts.

Tim Hwang: Each week, MoE brings together a lovely team of researchers, product leaders, and deep thinkers to distill and navigate the high-speed and ever more complex landscape of artificial intelligence. Today, I’m joined by Gabe Goodhart, Chief Architect for AI Open Innovation; Kaoutar El Maghraoui, Principal Research Scientist and Manager for Hybrid Cloud Platform; and joining us for the very first time is Ann Funai, CIO and VP for Business Platform Transformation. We have an action-packed episode today.

Tim Hwang: But first, let’s talk about “Your Brain on ChatGPT.” I really want to cover this interesting paper that came out of MIT. A team of researchers published a paper literally called “Your Brain on ChatGPT,” and it’s a pretty fun read. But first, I want to start with an around-the-horn question: simply, do you feel smarter or dumber in the age of LLMs? Gabe, maybe I’ll start with you. How do you feel about this?

Gabe Goodhart: Sure. If I’m doing something I already feel smart at, like writing code, I feel smarter. It’s awesome. If I’m doing something I feel really dumb at, like writing for other people to read, I actually feel a lot dumber because I don’t fully comprehend what I’m getting from the AI.

Tim Hwang: That’s a great answer.
Ann Funai: Actually, I generally feel almost neutral about it—like it’s a validation of some of my insecurities. I know I’m terrible at writing; words are hard for me. And the LLMs, the AIs, they write the emails I am terrible at writing. I almost feel validated about the things I’m not smart at. But it also frees up brain space for the stuff I am intrigued by and that I enjoy pursuing.

Tim Hwang: That’s great. We have such a nuanced panel. Everyone’s like, “smarter, dumber...” But Kaoutar, what do you think? How do you feel about all this?

Kaoutar El Maghraoui: Yeah. I don’t think I feel anything specific here. But I think maybe the question isn’t whether LLMs make us smarter or dumber, but whether we choose to engage with them in ways that sharpen or soften our minds. So it’s really about how you engage with these LLMs.

Tim Hwang: Well, we’ll get into all of this in our discussion. So let me just set up the paper a little bit and I’d love your responses. This is a fun paper. They used brain-scanning technology (EEG) and divided their research participants into a couple of cohorts. They said, “Okay, we’re going to have you all do a series of tasks where you write an essay.” For the people who had been using LLMs, there was a cohort they called “LLM-to-brain,” who were told, “On this next task, you’re going to write the essay just by hand, with no AI assistance.” What they claim is—and I’ll quote them directly—“LLM-to-brain participants showed weaker neural connectivity and under-engagement of alpha and beta networks.” To put that in more human-readable language, the idea is their brains were actually less active while accomplishing this task when shifting from an LLM-assisted scenario to one where they had to do it all by themselves.
So I guess, Ann—you’re on the show for the first time—maybe I’ll turn to you for your hot take. How much do we take from this? I know there are a lot of hot takes on the internet like, “it’s killing our minds,” but do you read it that way?

Ann Funai: You know, I... it’s actually... I’ll just say I’m not surprised by the take. I think the world is trying to figure out how to use AI in the best and most advantageous way possible. But what it actually reminded me of is cycles of human-computer evolution. Like when tablets and phones became ubiquitous, it was, “Oh, it’s killing our minds. I can look everything up instantaneously.” If you go even further back, to when books were new, it was, “I rue the world for the next generation. They’re going to ruin it.” You know, you can go back through Renaissance authors and read that. I kind of almost put it in that context. Yeah, there’s real science behind it with the brain activity, but is this just another “I rue the world; AI is going to make us all dumber” moment? At the end of the day, it is what we make of it. We can take the—I don’t know, maybe this is too much of a Gen-X reference—we can take the Idiocracy track and just get stupider and let it replace our brains. But I’m stealing an analogy someone else used: if we become like Tony Stark with the Iron Man suit on and let it be an amplification of our brainpower and an educational tool, that’s goodness. That should be pushing our brainpower further, I think, if we use it properly.

Tim Hwang: Yeah, for sure. I was talking with a friend when I read this paper, and I was like, “Imagine the first person to invent a book, and they’re basically like, ‘Oh, well, people don’t have to memorize anything anymore. You know, it’s so bad for us to have all these books.’” It’s funny. Recently, a group of us on the leadership team under the CFO went through the IBM archives, and they were showing the original accounting books. And it’s like, “Well, are our accountants dumber because they have spreadsheets and other technology now?” And the answer is no. I think it’s about detail and nuance. You can really dig into problems in a different way. Yeah, for sure.
Gabe, I want to bring you into this conversation because I think you had such an interesting response to the kind of hot-take, around-the-horn question. My hope was that people would be like, “Oh, I feel dumber,” or “I feel smarter.” But you said it depends on the task—for things I’m good at, I feel more engaged; for things I’m less good at, I feel differently. How do you think that applies to some of the results here?

Gabe Goodhart: Yeah, I definitely teased it in that answer, but that was really my read of this. One thing you didn’t mention in the intro is they also did the inverse: the brain-to-LLM group. And the brain-to-LLM group actually showed really good engagement. I think the way I have found myself using LLMs is primarily as a coding assistant, but where I am completely in control of the code. What I use them for is to accelerate my ability to explore an area that I don’t have prepped and ready to go. In that context, I am still very actively engaged in the act of creation, and that’s a brain space in which my intelligence is moving faster.

Tim Hwang: The brain is firing.
Gabe Goodhart: Yeah, exactly. So if the LLM can remove the time where my brain had to swap out and go figure out the right Google search, that keeps my brain in the “hot zone” longer and better, and it builds faster. So in that case, I feel way smarter. Where I feel like it makes me dumber is when I’m trying to get it to replace something I don’t like to do and I’m not very good at to begin with. So I occasionally write blog articles, and if I get in the right Zen, I can actually sit down and write expository writing. But it’s not my sweet spot. So I could try to come up with a prompt, slam it into an LLM, get some text out, and skim it more in consumer mode and critic mode rather than creator mode. My brain never hits that hot zone. I never hit that place where I’m actually really thinking and framing and coming up with the right connections. In that case, the thing I get at the end—yes, it took me a fraction of the time—but I don’t feel the same sort of ownership and deep engagement with what I just created. I think that’s one thing I found really interesting about this study: that difference between these two different ways of engaging. Either you’re already deep in with your brain and you’re using the LLM to boost it, or you’re just starting with the LLM doing it for you, and then you’re trying to apply your brain to what the LLM already did. I think those are really different ways of using LLMs.

Tim Hwang: Yeah. I’d love to bring these two comments together. Kaoutar, kind of bringing you into this conversation... You know, I’m old enough to remember the discourse around graphing calculators. The teacher always being like, “Well, it’s important to understand how you graph a function before you do it automatically on your calculator.” I think, Gabe, what you’re pointing out is that exact dynamic: brain-to-LLM versus LLM-to-brain. So I guess, Kaoutar, I know you said you kind of don’t feel any way about this, but wondering... how new is this, in some sense? Do you think this is just LLMs repeating what we’ve already gone through with stuff like a graphing calculator?

Kaoutar El Maghraoui: Yeah, actually, I like to think of it as mirroring the historical effects of industrial automation. You know, as machines relieved humans of physical labor, physical strength and endurance declined for the majority of us—unless you really work hard to exercise those muscles. If you look at the majority of people back then, most were stronger because they had to do a lot of physical labor. But as we relied more on cars and machinery to clean our houses, our muscles got weaker. I worry a little bit: are we getting into a similar cognitive automation risk, a similar atrophy here? Not for our muscles, but for our minds. Just as cars made us walk less, AI systems could make us think less deeply. So we’re not just outsourcing tasks; we’re externalizing cognition. I think that’s what this paper is: a crucial wake-up call regarding the uncritical adoption of these AI tools for complex cognitive tasks. So I think it depends on how you engage. When I said it’s about how you engage with these tools: are you going to over-rely on them for deep thinking without really engaging your brain, or use them, like Gabe mentioned, to augment you for tasks you’re really good at? I think it depends. The concept of “cognitive debt” mentioned here is particularly compelling, suggesting a subtle but profound long-term impact on how our brain functions. I think for individuals, especially in educational and professional settings, the takeaway isn’t to abandon AI, but to cultivate cognitive resilience. Meaning, using AI strategically for brainstorming, fact-checking, summarizing—boosting our performance—but consciously engaging in deep thinking, analysis, or original synthesis ourselves. So it’s more about how we treat these AI tools to augment, not to replace, our fundamental cognitive processes. It’s about finding that critical balance.

Tim Hwang: Yeah, that’s a great point. And maybe I’ll kick it to you, Ann, because we could go much longer, but I need to move on. Just to play skeptic for a moment: it’s all well and good to tell people they need to use their critical faculties with this technology. But people are lazy, right? We can’t expect everyone to do that. Is it hoping against hope that people will use this technology in a way that looks more like brain-to-LLM versus LLM-to-brain?

Ann Funai: No, and exactly—my hope would be the brain-to-LLM path. The comment I made about how we learn to use it... my hope is that we shift to that. And I absolutely agree humans are lazy—myself included. I use it for emails; I put in the words “make it usable.” But, you know, I joke that I’m an optimist, but a cynical optimist because I can see every way it could go wrong before you get to the most optimistic outcome. Where I would put my hope and optimism in this case is that, at the end of the day, we’re still human beings with things that interest and drive us. I love to read technology papers, play with things, toys, whatever. That’s always going to drive me, and AI is actually going to help me go further and deeper in that. It could be the same with someone who’s a doctor, a lawyer... maybe retail, shopping, marketing... My hope and optimism would be that it makes you lazy in the tasks that don’t drive the things that interest you.

Tim Hwang: Right. It’s optimally lazy.
Ann Funai: It’s optimally lazy! I love that! That’s perfect.

Tim Hwang: That’s great. Well, much more to talk about. We’ll be paying attention to this story; I’m sure there’s gonna be a lot more to come on this kind of research. But I want to move us to our next topic. A super interesting story came out of the SF Chronicle—the local metro paper in San Francisco. In San Francisco, and I don’t know if some of our listeners are in cities where these robotaxis are rolling out... autonomous driving is a thing where you can just call a robotaxi and it will take you to your location. These are all run by Waymo right now, which is part of the Alphabet/Google network of companies. The article is really fascinating because it focuses on the idea that now that they’ve seen such great success, the Waymos are driving a little bit more ‘aggressively.’ One great example they give is that now the Waymos will do a little rolling start where it’s about to go through an intersection and, much like a human would, it kind of loosens up on the brake as a signal to the rest of the road: “I’m getting in here.” I think this is so fascinating. At least what the Waymo folks say in the article is that it turns out having a robotaxi that’s a lot more brisk and decisive—dare I say, kind of like a jerk a little bit on the road—actually makes things safer. Which I think is just such a funny outcome.
So, Kaoutar, I would love to bring you in on this. How should we think about this? Because normally, I think in the chatbot world, we’ve tended to make our AIs very accommodating. But this is almost an example where we’re getting better results from having AIs that are much more assertive when they interact with humans. Any hot takes about that?

Kaoutar El Maghraoui: Yeah, it’s very interesting. I found it fascinating how Waymo is now prioritizing human-like driving behavior to better integrate into real-world urban environments. I think what this is saying is safety doesn’t just mean rule-following and being very strict. It also means fitting into a human-centered system. Overly cautious autonomous vehicles can be disruptive, as they’ve seen on the roads. This shift reflects a delicate trade-off between algorithmic perfection and social compatibility. It seems like we’re entering an “uncanny valley” of behaviors—cars that are smart enough to mimic our bad habits. What does it say when AI becomes more trustworthy by being less perfect? It’s a very interesting paradox here. We need less perfection to really fit these social norms, and that translates into safer systems, because these cars have to act in human environments, so they have to adapt to how we behave. This also highlights a critical challenge in designing AI systems for the future that operate in complex and unpredictable real-world environments. How human-like should they be? That’s the key question. Waymo’s approach suggests that really strict adherence to rules might not be the safest, and that is very interesting.

Tim Hwang: Gabe, I lived in San Francisco for many years and then spent a few years in LA. I remember when I moved to LA, I was like, “These drivers are unhinged.” The culture of driving is just aggressive in a way that’s not very familiar after driving around San Francisco for close to a decade. One interesting question, building off what Kaoutar just said, is that it kind of suggests we’re almost going to have to localize these systems to cultural practices. Is that the right way of thinking about it? It’s very different from how we typically think about rolling out these systems.

Gabe Goodhart: Well, the analogy that immediately jumped to my mind reading this article was the shift from pre-GenAI chatbots to GenAI chatbots. Before Transformers, we built chatbots by crafting a very deep decision tree and trying to figure out at every point where the person was trying to go. If any of you have walked through one of those trees on the phone with customer support, it’s really clunky. You’re trying to reverse-engineer the tree in your head. A rules-based vehicle is very analogous. It’s trying to follow exactly the right structured path in its trajectory of possible actions at every given point. I would love to unbox what Waymo is doing here, because my guess is they’re starting to apply a much more free-form decision-making space, akin to “generate the next token” or “generate the next thing that needs to happen.” I wouldn’t be surprised if they’ve got a reinforcement learning transformer on top of their rule system that has a wider space of possible next actions and generates stuff on the fly. So when you say “localize to different geos,” that’s a different system prompt. It’s basically few-shot prompting your car with a bunch of examples of crazy LA drivers. So, in some ways, if we start applying this more flexible way of adapting behavior to the environment, it may actually make the vehicle fit in a whole lot better. Honestly, that’s one of the things—going back to the chatbot analogy—that powered the AI explosion: suddenly, the consumption experience jumped over that “uncanny valley” Kaoutar mentioned. You feel like you’re talking to a real entity, not reverse-engineering in your head. You feel comfortable, exactly the same way drivers would now feel comfortable in a space mixed with humans and autonomous vehicles, because those vehicles fit their mental model.

Tim Hwang: Yeah, I love the idea that somewhere hidden in Waymo’s cloud there’s a prompt like, “You’re a San Francisco driver, 25 to 35, you live in the Mission District...” Ann, I want to bring you in. Where does this all go? This is a multi-round game now. Imagine a future with multiple car companies operating autonomous vehicles, and one of them thinks, “I can get my consumer to their location faster if my car is a little bit of a jerk.” I’m really interested in how you think this evolves.

Ann Funai: It’s actually funny—this was not planned at all—but that’s exactly where my head went. As an aside, I’m in Austin, Texas, and the San Francisco/LA analogy is Austin and Houston. Houston is a whole other game; there are entire social media feeds about ridiculous Houston driving. It’s funny you targeted me that way because what we have going in Austin is interesting. We have Waymo, we have Zoox, and as of this week, we have the Tesla robotaxis. The irony is, right before we got this article to discuss, I was coming back from a trip to the airport. My partner and I are in the car, and we’re both commenting, looking at a Waymo driving like a maniac. It was actually going above the speed limit, doing a little bit of zoom... maybe it’s not there yet, but it kind of... and he’s a tech person too, so it led us into this weird conversation: “Okay, where is this going? What’s it doing? How is it learning?” Then I saw this article and thought, “Gosh, what happens?” Because as there are more autonomous vehicles on the road, they are trained on human behavior, not the behavior of each other. There was another piece I saw around the same time, I think a New York Times article, talking about how Waymo, in the first six months of 2025, has already done double the rides they did in all of 2024—and 2024 was 5x 2023, partly due to expansion. It really got me thinking: Is the human baseline for driving what they should be trained on going forward? I mean, if a Robotaxi acts one way, Zoox acts another, and Waymo acts a third way, are they expecting a human response from every other vehicle? They have a known response from other vehicles in their own network, but now you’ve got this whole other set of variables. How do you even train for that? They’re all going to have proprietary systems; they’re all going to learn differently. So I actually think in five years or less—looking at how fast it’s doubling—it’s going to be very important to adapt. How do these trained models adapt on the fly? Maybe we need more tiny models or more capable local models on device that can make decisions and retrain or fine-tune on the fly in real time, especially since driving changes depending on location. Like you said, San Francisco is one thing; if I go to Morocco, the driving is way different, much more aggressive.

Tim Hwang: You would win “aggressive” in Morocco, for sure.
Kaoutar El Maghraoui: Oh my God, I can’t drive there myself. So I can imagine a car trained in the US put in Morocco—it needs to adapt completely to a much more aggressive behavior. So I think we need more of that going forward. We just can’t rely on statically trained models; they have to adapt constantly.

Ann Funai: I could even see something interesting happening here: a lot of open-source consortiums have started because of similar problems. You want your proprietary piece as a company, but you recognize there’s an area where you have to have common understanding. Maybe it’s okay to use an open-source piece so we all train in the same way and we’re not all crashing into each other, and then each company puts proprietary pieces on top for its business model.

Tim Hwang: Yeah, the handshake will be very interesting. Think about different brands of autonomous vehicles. Your car’s computer vision model is like, “Oh, that’s a Tesla robotaxi; we have to navigate around it differently than a Waymo.” The easier way is if there’s just some technical handshake that says, “Hey, I’m signaling to everybody on the road that I’m from this company and have these attributes.” That’ll be very interesting to see.
Well, great. I’m going to move us on to our next topic. I am, by admission, not really a sports guy, but I was roped into watching the NBA finals, which were great. I think I’m now a basketball guy. I caught this really interesting ad that was widely talked about in the ad industry. A prediction market company called Kalshi did this completely surreal, mind-bending ad that played during game three of the NBA finals—a lot of crazy scenes. I remember looking at it and thinking, “This really looks like GenAI.” And lo and behold, it came out later that it was a GenAI ad. I think it’s one of the most high-profile, end-to-end GenAI ads we’ve really seen in the media. I wanted to bring it up because, in the past, we’ve often talked about generative AI for ads as something for more bargain-bin ad inventory—the kinds of things you encounter online. But this is high-prestige, what marketing people call brand advertising—an ad you’d see in the New York Times. So, Gabe, I’m curious how we should read this. The use of these technologies is so good now that a big company like Kalshi will spend a huge amount of money and use this technology to generate an ad for a really high-profile event. It’s a signal of some kind, right?

Gabe Goodhart: Yeah, I mean, I have three different reactions to it: one on the technical front, one on the consumer front, and one on the skeptic front.
On the technical front, one thing I thought was really compelling was that I watched the ad and there were very few of the blemishes you might expect from GenAI. They did a good job making it a fast-paced ad, so your eye isn’t going to pick up on one random person in the crowd having six fingers. The actual quality of what was generated was really good, and merged with some clever expertise on how to cut the ad together, it produced a good-looking ad. It didn’t smack of duct tape hiding the “gorp.”
From a consumer standpoint, it’s a good ad. If it lowered the cost of creating it for the company while still making something cool, that sounds like a good optimization for the industry.
My biggest take, though, is on the skeptic/worrier front: who were those humans in the video? Obviously, they were not recorded humans, but we all know GenAI models are based on a boatload of training data. As this becomes more ubiquitous, what are the odds that somebody’s face—who did not give permission to be in an ad for Company X—shows up on screen with absolutely no way of validating whether that’s happening? It’s a huge gulf between the training data and what pops out. Right now, it’s a needle in a haystack. Do you think anyone in the background scenes is going to be someone who watches it and says, “Hey, wait a minute, that’s me. I’m going to sue your pants off?” No. But as the number of GenAI ads balloons, it’s going to happen. The odds will shake out that somebody will realize their face is popping up in ads they have nothing to do with and they’re getting no compensation for. It’s a different but related element to the copyright issues around authors’ books—snippets popping up if they’re sufficiently popular. I think it’s going to go down that same rabbit hole of ownership of likeness, ownership of content, where the content in this case is your actual persona in visual space.

Tim Hwang: Yeah. Ann, maybe I’ll turn to you. Gabe raises a really good point. One thing I want to investigate is how mainstream this becomes. How much is a one-off novelty, where everybody’s surprised it can be done? I have a friend in the ad industry who’s like, “I just don’t think it’s a very good ad.” But then you layer on everything Gabe’s talking about—the risks that come with this technology. Do people want to take on that risk? In sharper terms: in 3 or 4 years, do we feel like every ad for game three of the NBA finals will be AI-generated? How far do you think this is going to go?

Ann Funai: Yeah, well... and plus one to everything Gabe said. There are so many things that can go in any direction. What I kind of went back to when looking through that article was: at the end of the day, marketing is still a data-driven exercise, and it’s not so much about “are we going to have more AI ads?” but “what are the outcomes businesses are trying to drive?” Is it just awareness? Like, “Hey, people haven’t been paying attention to us, our awareness is going down, our revenues are dropping. We need to do something flashy that gets our name out there.” Or are we trying to sell a specific product? Again, it goes back to: what is your goal with the ad and the outcome you’re trying to drive? So I think there’s a little TBD there. But at the same time, going back to that first conversation about brain-to-LLM and LLM-to-brain: you may have a clear outcome, a clear vision of what you’re trying to do, and the AI may be able to create the advertisement faster and better than a human could. That’s the brain-to-LLM versus the LLM-to-brain distinction we were talking about before. So again, I would lean on: marketing is always going to be outcome-driven. It’s going to be a flashy thing, but the direction of the flashy thing used for the right purpose could get really interesting.

Tim Hwang: Yeah, I think that’s right. Kaoutar, one final bit I think you’d be well-positioned to talk about: in the past, when I’ve heard discussions about AI-generated ads, it’s been about everyone having their own custom ad—using GenAI to create your favorite movie star telling you to use a service. This is interesting because GenAI is being used for everyone to see the same ad. Do you think it’s more likely people will want ultra-targeted stuff (building on Ann’s theme), or is there something fundamental to advertising where, no matter how it’s created, we still want it to be shared culture? I think about Super Bowl ads that became cultural movements. Maybe that’s preserved in a world of generative AI.

Kaoutar El Maghraoui: Yeah, that’s a very interesting point. Personalization is an important aspect; some people would like that, some don’t because they want the shared advertising experience. In the world of generative AI, it’s really possible because there’s so much data collected on each of us. If they can generate a generic ad, they might as well generate personalized ads based on your historical preferences and purchase data. So I think we will see both.
Looking at this new ad and the statistics: they had 300 to 400 generated results, about 15 usable clips. The cost was USD 2,000, which is about 95% cheaper than traditional production. It took two to four days using one creator for the full ad, and an estimated 18 million views in about 48 hours. That’s really huge. So what’s this telling us? Of course, more marketers and companies will use these tools. But what are the implications? AI here isn’t just replacing creatives; it’s fragmenting the creative task stack. The bottleneck is no longer in production, but in ideation and originality. Yes, we can generate all these things—maybe with faces, as Gabe mentioned, randomly picked up—but how creative are these ads? What Kalshi is highlighting is both the promise and the peril: democratizing content creation at industrial speed, but also the risk of homogenized, hyper-targeted media. We could soon be flooded with highly personalized ads, but are they going to move us? Are we going to find them creative? Where’s the “wow factor”? Is it going to be there? That’s the key question. Can generative AI do that, or do we need additional human creativity to really make it or break it for the viewers?

Tim Hwang: Yeah, I hope they get it right. Otherwise, it’s a pretty dark future of being flooded with slapdash ads you just don’t like. Well, that’s all the time we have for today. I want to end with two special notes. Ann, I know this is your first time on the show. If people want to find you and keep up with your work, where should they go?

Ann Funai: Funny enough, if you enjoy podcasts, we started a podcast called Transformers. Our goal is to show people across industries—technical and non-technical roles—what it takes to transform a company, a business, open source, closed source, fintech, tech. So come find me over there. We have a lot of fun with really interesting guests from a lot of fun places, and hopefully the conversation is entertaining.

Tim Hwang: It’s really good. You should subscribe, listeners.
Finally, I want to take a personal moment to thank our producers: Hans Buetow, Mike Rugnetta, and Michael Simonelli. They’ve been fearlessly working behind the scenes ever since MoE started a year ago. We owe a huge amount of this show’s success to them. We will miss you guys; this is your last show working with us here at MoE. Thanks for all you’ve done.
Listeners, if you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere. We will see you next week on Mixture of Experts.
