Is OpenAI going to enter the social media game? In episode 52 of Mixture of Experts, Gabe Goodhart, Kate Soule and Marina Danilevsky join host Tim Hwang. First, Sam Altman is rumored to be testing an internal prototype social network; why is this a potential next move for the AI giant? Next, for our paper of the week, we analyze Anthropic’s study on chain-of-thought reasoning, “Reasoning models don’t always say what they think.” Then, AI scraping puts a strain on Wikimedia; what’s the impact of this? Finally, China held a humanoid robot half-marathon, where humans raced alongside robot competitors. Who wins this AI race? All that and more on today’s Mixture of Experts.
In episode 51 of Mixture of Experts, Chris Hay, Vyoma Gajjar and special guest John Willis, owner of Botchagalupe Technologies, join host Tim Hwang. We analyze OpenAI’s new models, o3 and o4-mini.
In episode 50 of Mixture of Experts, we break down a wave of announcements: IBM z17, Meta's Llama 4, Google's Gemini 2.5 Pro and more from Google Cloud Next.
In episode 49, our experts unpack Altman's open source push, Anthropic’s AI insights, Apple’s AI race and Amazon’s new AI agents. What’s next in AI? Tune in to Mixture of Experts for the full scoop.
Applications and devices equipped with AI can see and identify objects. They can understand and respond to human language. They can learn from new information and experience. But what is AI?
Fine-tuning has become a fundamental deep learning technique, particularly in the training of foundation models used for generative AI. But what is fine-tuning, and how does it work?
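In brief, fine-tuning starts from a model already trained on a broad dataset and continues training its existing weights on a smaller, task-specific dataset. The toy sketch below (not from the article; all data and the one-parameter "model" are hypothetical) shows the idea with plain gradient descent: pretrain a weight on a general task, then take a few extra steps on a new task.

```python
# Toy "model": y = w * x. We "pretrain" w on one task, then fine-tune it
# on a related task, reusing the pretrained weight as the starting point.
def train(w, data, lr=0.05, steps=100):
    """Gradient descent on mean-squared error for the scalar model y = w * x."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

pretrain_data = [(x, 2.0 * x) for x in range(1, 6)]   # general task: y = 2x
w_pretrained = train(0.0, pretrain_data)              # converges near 2.0

# Fine-tuning: start from w_pretrained, not from scratch, and adapt
# with only a few steps on a small task-specific dataset.
finetune_data = [(x, 2.5 * x) for x in range(1, 4)]   # new task: y = 2.5x
w_finetuned = train(w_pretrained, finetune_data, steps=30)
```

Real fine-tuning applies the same principle to billions of parameters, often updating only a small subset of them, but the core move is identical: reuse pretrained weights rather than learning from zero.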
In this tutorial, you will use IBM’s Docling and open source IBM® Granite® vision, text-based embeddings and generative AI models to create a retrieval augmented generation (RAG) system.
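Conceptually, a RAG system embeds a collection of documents, retrieves the chunks most similar to a user's query, and passes them to a generative model alongside the question. The sketch below illustrates only the retrieval-and-prompt-assembly step with a toy bag-of-words embedding; the tutorial itself uses Docling for document parsing and Granite models for embeddings and generation, which are not reproduced here.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'. A real RAG system uses a trained
    embedding model (the tutorial uses a Granite embedding model)."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Granite is a family of open source IBM foundation models.",
    "Docling converts PDFs and other documents into structured text.",
    "Retrieval augmented generation grounds model answers in documents.",
]
question = "What does Docling do with documents?"
context = retrieve(question, docs)

# Assemble the grounded prompt; in the tutorial this would be sent
# to a Granite generative model to produce the final answer.
prompt = "Answer using this context:\n" + "\n".join(context) + \
         "\nQuestion: " + question
```

The retrieval step ranks the Docling document first because it shares the most query terms; swapping in learned embeddings changes only the `embed` function, not the overall pipeline shape.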