Is open source winning the AI race? This week on Mixture of Experts, we analyze three major model releases that dropped in the final weeks of 2025: Mistral 3, DeepSeek-V3.2 and Claude Opus 4.5. Our experts discuss what makes each model unique—from Mistral’s multimodal capabilities to DeepSeek’s reasoning-first approach and Claude’s developer focus. Are there too many good models?
Next, a provocative blog post from Theory Ventures argues that Gemini 3 proves scaling laws still hold: throwing more compute at the problem keeps producing better models. We debate whether scaling laws are a universal truth.
Finally, Amazon just blocked ChatGPT’s shopping research agent from accessing product data. We discuss the business incentives threatening the agent dream. Join host Tim Hwang and panelists Aaron Baughman, Abraham Daniels and Gabe Goodhart on this week’s Mixture of Experts for more!
The opinions expressed in this podcast are solely the views of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
An artificial intelligence (AI) agent is a system or program that can autonomously perform tasks on behalf of a user or another system. It does this by designing its own workflow and using the tools available to it.
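As a rough illustration of that loop, not tied to any particular framework, the sketch below shows an agent that plans a sequence of tool calls for a goal and then executes them. The planner is a hard-coded stand-in for a model, and every function name here is hypothetical.

```python
# Minimal, illustrative agent loop: plan a workflow for a goal, then run
# each step with an available tool. All names below are hypothetical.
from typing import Callable, Dict, List


def search_web(query: str) -> str:
    """Hypothetical tool: pretend to search and return a snippet."""
    return f"Top result for '{query}'"


def summarize(text: str) -> str:
    """Hypothetical tool: pretend to condense text."""
    return f"Summary: {text[:60]}"


# Registry of tools the agent is allowed to use.
TOOLS: Dict[str, Callable[[str], str]] = {
    "search_web": search_web,
    "summarize": summarize,
}


def plan(goal: str) -> List[str]:
    """Stand-in for the model's planning step: choose which tools to run."""
    return ["search_web", "summarize"]


def run_agent(goal: str) -> str:
    """Execute the planned workflow, feeding each result into the next step."""
    result = goal
    for tool_name in plan(goal):
        result = TOOLS[tool_name](result)
    return result


if __name__ == "__main__":
    print(run_agent("compare Mistral 3 and Claude Opus 4.5"))
```

In a real agent, the planning step would be delegated to a foundation model and the tools would call live APIs; the structure of the loop, though, stays the same.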
Applications and devices equipped with AI can see and identify objects. They can understand and respond to human language. They can learn from new information and experience. But what is AI?
Developers build AI assistants on top of foundation models—for example, IBM Granite, Meta’s Llama models, or OpenAI’s models. Large language models (LLMs), which specialize in text-related tasks, represent a subset of foundation models.