OpenAI goes open, Anthropic on interpretability, Apple Intelligence updates and Amazon AI agents


Will OpenAI be fully open source by 2027? In episode 49 of Mixture of Experts, host Tim Hwang is joined by Aaron Baughman, Ash Minhas and Chris Hay to analyze Sam Altman’s latest move toward open source. Next, we explore Anthropic’s mechanistic interpretability results and the progress the AI research community is making. Then, can Apple catch up? We analyze the latest critiques of Apple Intelligence. Finally, Amazon enters the chat with AI agents. How does this raise the competitive stakes? All that and more on today’s Mixture of Experts.

Key takeaways:

  • 00:00 – Intro  
  • 00:48 – OpenAI goes open
  • 11:36 – Anthropic interpretability results
  • 24:55 – Daring Fireball on Apple Intelligence
  • 34:22 – Amazon’s AI agents

The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.

View all Mixture of Experts episodes
Listen on Apple Podcasts, Spotify, YouTube and Casted.
Explore more episodes

DeepSeek-V3-0324, Gemini Canvas and GPT-4o image generation

What’s the best open-source model? In episode 48 of Mixture of Experts, we discuss a new release of DeepSeek V3, Google’s Gemini 2.5 and Canvas, Extropic’s thermodynamic chip and OpenAI’s GPT-4o image generation.

NVIDIA GTC, Baidu reasoning models and Gemini AI image generation

In episode 47 of Mixture of Experts, we discuss NVIDIA GTC announcements, Baidu reasoning models, chain-of-thought flaws and Gemini image generation.

Manus, vibe coding, scaling laws and Perplexity’s AI phone

Is Manus a second DeepSeek moment? In episode 46 of Mixture of Experts, we discuss Manus AI, vibe coding, scaling laws and Perplexity’s AI phone.

Watch all episodes from Mixture of Experts

Learn more about AI

What is artificial intelligence (AI)?

Applications and devices equipped with AI can see and identify objects. They can understand and respond to human language. They can learn from new information and experience. But what is AI?

What is fine-tuning?

Fine-tuning has become a fundamental deep learning technique, particularly in the training process of foundation models used for generative AI. But what is fine-tuning and how does it work?
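
For a concrete flavor, here is a minimal fine-tuning sketch, not code from the linked explainer: it adapts a pretrained encoder to a sentiment task with Hugging Face transformers. The model and dataset names are common public choices assumed for illustration, not anything the article prescribes.

```python
# Illustrative fine-tuning sketch (assumed example, not from the article).
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Start from a general-purpose pretrained model...
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# ...and a small labeled dataset for the target task.
dataset = load_dataset("imdb", split="train[:2000]")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True,
                         padding="max_length", max_length=256),
    batched=True,
)

# Fine-tuning = continuing training on task data at a small learning rate,
# so the pretrained weights are nudged toward the task rather than overwritten.
args = TrainingArguments(output_dir="ft-demo", num_train_epochs=1,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=dataset).train()
```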

How to build an AI-powered multimodal RAG system with Docling and Granite

In this tutorial, you will use IBM’s Docling and open-source IBM® Granite® vision, text-embedding and generative AI models to create a retrieval-augmented generation (RAG) system.
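
For a sense of what the tutorial covers, here is a minimal sketch of such a pipeline, not the tutorial’s exact code: the input file name is hypothetical, the Granite embedding model ID is an assumption (swap in any embedding model you prefer), and the final generation call is left as a stub.

```python
# Minimal RAG sketch (assumptions noted above); requires the docling and
# sentence-transformers packages.
from docling.document_converter import DocumentConverter
from sentence_transformers import SentenceTransformer, util

# 1. Parse a source document (PDF, DOCX, HTML, ...) into markdown with Docling.
converter = DocumentConverter()
result = converter.convert("annual_report.pdf")  # hypothetical input file
markdown = result.document.export_to_markdown()

# 2. Naive chunking: split the markdown on blank lines.
chunks = [c.strip() for c in markdown.split("\n\n") if c.strip()]

# 3. Embed the chunks with a Granite embedding model (assumed model ID).
embedder = SentenceTransformer("ibm-granite/granite-embedding-125m-english")
chunk_vectors = embedder.encode(chunks, convert_to_tensor=True)

# 4. Retrieve the chunks most similar to the user's question.
question = "What were the key findings in the report?"
query_vector = embedder.encode(question, convert_to_tensor=True)
scores = util.cos_sim(query_vector, chunk_vectors)[0]
top_k = min(3, len(chunks))
top_chunks = [chunks[int(i)] for i in scores.topk(k=top_k).indices]

# 5. Build the augmented prompt for a Granite generative model
#    (generation call omitted; any chat-completion API works here).
prompt = ("Answer using only this context:\n\n"
          + "\n\n".join(top_chunks)
          + f"\n\nQuestion: {question}")
print(prompt)
```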

Stay on top of AI news with our experts

Follow us on Apple Podcasts and Spotify.

Subscribe to our playlist on YouTube.