OpenAI just dropped o3 and o4-mini. In episode 51 of Mixture of Experts, host Tim Hwang is joined by Chris Hay, Vyoma Gajjar and special guest John Willis, Owner of Botchagalupe Technologies, to analyze OpenAI's new models.
Next, Google announced that by Q3, enterprises will be able to run Gemini on-premises. What does this mean for enterprise AI adoption? Then, John takes us through AI evaluation tools and why we need them. Finally, NVIDIA is planning to move AI chip manufacturing to the US. Can they pull it off?
The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.