Quantum leap, Model Context Protocol, CoreWeave IPO and AI voice companion

Episode 45: Quantum leap, Model Context Protocol, CoreWeave IPO and AI voice companion

When can we expect quantum to reach consumer devices? In episode 45 of Mixture of Experts, host Tim Hwang is joined by special guest Blake Johnson to debrief the quantum noise in the news. Blake helps us understand the intersection between quantum and AI, and how far this technology is from being realized. Then, veteran experts Chris Hay and Volkmar Uhlig hash out some other news in AI this week.

We cover Anthropic’s Model Context Protocol, CoreWeave filing for an initial public offering (IPO) and Sesame AI’s new voice companion. All that and more on today’s Mixture of Experts.

Key takeaways:

  • 00:01 – Intro
  • 01:06 – Quantum leap
  • 20:08 – Model Context Protocol
  • 28:24 – CoreWeave IPO
  • 40:12 – Sesame AI voice companion

The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.

Listen on Apple Podcasts, Spotify, Casted or YouTube

Episode transcript

Tim Hwang: How many years do you think it’ll be until quantum computing finds its way into a consumer device? Blake Johnson is a Distinguished Engineer and Quantum Engine Lead. Blake, welcome to the show for the very first time. What do you think?

Blake Johnson: I might say that in some ways it’s already happened. Some of the early explorations with quantum were fun things like games, and you can make those available on a phone. I think I’ve seen a demo of a quantum-powered game on a phone.

Tim Hwang: Volkmar Uhlig is Vice President, AI Infrastructure Portfolio Lead. Volkmar, welcome back. Do you have a prediction here?

Volkmar Uhlig: Quantum hardware itself won’t be in a phone; it’s a big fridge, so I guess it will be connected over the internet. I can see that once there are actual applications that get a benefit out of it, adoption will go very fast.

Tim Hwang: Finally, last but not least, Chris Hay is a Distinguished Engineer and CTO of Customer Transformation. Chris, I can usually rely on you for the wildest estimates. What do you think here?

Chris Hay: I think it will be available on a consumer device and not available on a consumer device at the same time.

Tim Hwang: All right. All that and more on today’s Mixture of Experts. I’m Tim Hwang, and welcome to Mixture of Experts. Each week, MOE helps you navigate the biggest headlines in technology with a set of brilliant minds from research, product, engineering, and more. As always, we have a slew of AI news to get through. We’re going to talk about Anthropic’s new Model Context Protocol, CoreWeave filing for an IPO, and a new voice demo from a company called Sesame.

But uniquely today, we’re actually going to step a little adjacent to our usual AI topic to talk about quantum because we have Blake here on the show with us. Blake, maybe you can just kick it off a little. If you’ve been reading the headlines, quantum is weird—it disappears and then occasionally comes back in force. We’re in a “quantum spring” where everybody’s talking about it suddenly.

To my opening question, it’s sometimes hard to get a sense of how close or far this technology is from becoming something we practically feel the impacts of as consumers or in enterprises. A good place to start: cut through the hype. Where are we now? Are we very close, is quantum nigh, or is it still in basic R&D?

Blake Johnson: Yeah, I think something quite interesting has happened in the past two years or so: quantum has entered a new era. The first quantum computers IBM put online were educational tools, research tools for teaching quantum computing and quantum mechanics, useful to students, educators, and researchers. What limited them was the size of the computation we could execute before quantum noise overwhelmed the result.

But in the last couple of years, we’re now at a state where we can do computations with our most powerful quantum computers that we cannot simulate with classical simulation. This is a moment we at IBM refer to as “quantum utility.” You at least need this property for something to be useful because if I can simulate it with a classical computer, I don’t need a quantum one. So we’re finally in a regime where I can do something unique on a quantum device.

Now, the hunt is on to connect that power to an application that someone really cares about and has value—that’s the sprint towards “quantum advantage,” the moment when we can do something faster, cheaper, or better.

Tim Hwang: What are the most promising areas for that? It sounds like the technology is looking for its demo—a place where it’s really better than traditional computers. You have to find a quantum-shaped problem.

Blake Johnson: Sure. There are things we know about: with the most powerful machines, capable of arbitrary-sized computations, people can write mathematical proofs that certain algorithms scale better as quantum algorithms than as classical ones. In particular, there’s simulating nature; Richard Feynman’s original idea for the quantum computer came from thinking about simulating nature. That has applications in chemistry, materials design, drug discovery, and so on. That’s a rich area.

Then you have mathematical problems with structure, like factoring or machine learning. And then you have optimization problems where we have weaker mathematical proofs about the advantage, but because of its importance to business, it still gets a lot of attention.

In terms of where we’re placing our bets, there are areas ripe for early quantum advantage. Simulating the time dynamics of a quantum system seems very possible. Chemistry is another area where the field has had ebbs and flows—people get optimistic, then pessimistic because it’s harder than thought. But last year, by combining quantum and classical computing—something IBM calls “quantum-centric supercomputing,” where you split a problem and have the quantum and classical computers work together—we were able to show real headway on chemistry problems with quantum and be competitive with classical methods for certain molecules. Now we’re expanding to larger molecules to reach parity with state-of-the-art classical methods. The hope is that by pushing hard enough, we finally enter that territory of advantage.
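To make the quantum-classical split concrete, here is a toy variational sketch, assuming Qiskit 1.x and SciPy; the two-qubit Hamiltonian and ansatz are illustrative. A classical optimizer proposes circuit parameters, and a simulated quantum estimator evaluates the energy. It shows the general division of labor, not IBM’s specific chemistry method.

```python
# Toy quantum-centric loop: classical optimizer outside, quantum estimate inside.
# Assumes Qiskit 1.x and SciPy; Hamiltonian and ansatz are illustrative only.
import numpy as np
from scipy.optimize import minimize
from qiskit.circuit.library import RealAmplitudes
from qiskit.quantum_info import SparsePauliOp
from qiskit.primitives import StatevectorEstimator

hamiltonian = SparsePauliOp.from_list([("ZZ", 1.0), ("XI", 0.5), ("IX", 0.5)])
ansatz = RealAmplitudes(num_qubits=2, reps=1)  # parameterized trial circuit
estimator = StatevectorEstimator()             # stand-in for real quantum hardware

def energy(params):
    # "Quantum" step: estimate the expectation value <H> for these parameters.
    result = estimator.run([(ansatz, hamiltonian, params)]).result()
    return float(result[0].data.evs)

# "Classical" step: minimize the energy over the circuit parameters.
opt = minimize(energy, np.zeros(ansatz.num_parameters), method="COBYLA")
print(f"estimated ground-state energy: {opt.fun:.4f}")
```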

Tim Hwang: That’s really exciting. I want to make sure we bring Chris and Volkmar in. One last question to get us there: MOE typically focuses on AI. At conferences, people always lump quantum, AI, blockchain, and other hype technologies together. Is there an actual overlap between AI and quantum? If so, what is it? A lot of our listeners work in machine learning day in, day out. People have suggested that quantum might overlap with AI, but it’s still unclear. What are people talking about in your world? Is there a genuinely interesting overlap?

Blake Johnson: I think there are two interesting directions: using quantum to make AI better, and using AI to make quantum better. They look pretty different today.

A lot of the early hope was about using quantum for AI. This is very interesting, but we know a lot less. It wasn’t until a couple of years ago that we could find formal mathematical proofs that you could do something better with a quantum machine, but it required a contrived quantum dataset. Usually, people apply AI to classical data, so that’s an area where we have heuristic methods that are harder to make proofs about. We have a different computational paradigm, so can you do something better with it? The jury is still out. People are trying, and we’re partnering with startups and other companies focused on that. You can use a quantum-powered AI tool on our quantum platform.

The other direction, applying AI to quantum, is happening now, and we’re finding a lot of value. Two new tools we released last year are directly enabled by AI. One is optimizing quantum circuits. When you execute a quantum program, it takes the form of a quantum circuit, and you need to optimize it for performance on quantum hardware. We provide a compiler in Qiskit for that. Last year, we upgraded it with AI-powered passes, trained with reinforcement learning, that recognize patterns and find good circuit reductions automatically.
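For a feel of what that compilation step looks like in practice, here is a minimal sketch using Qiskit’s standard transpiler, assuming Qiskit 1.x. The AI-powered passes Blake describes ship separately (for example, in the qiskit-ibm-transpiler package), and the generic backend below is a simulated stand-in, not real hardware.

```python
# Build a small circuit and let Qiskit's transpiler rewrite it for a target device.
from qiskit import QuantumCircuit, transpile
from qiskit.providers.fake_provider import GenericBackendV2

qc = QuantumCircuit(3)
qc.h(0)       # put qubit 0 in superposition
qc.cx(0, 1)   # entangle qubits 0 and 1
qc.cx(1, 2)   # extend entanglement to qubit 2
qc.measure_all()

# Rewrite the circuit for the backend's native gates and qubit connectivity;
# optimization_level=3 applies the most aggressive built-in optimization passes.
backend = GenericBackendV2(num_qubits=3)
optimized = transpile(qc, backend=backend, optimization_level=3)
print(optimized.depth(), optimized.count_ops())
```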

The other was using our watsonx code assistant tool, fine-tuned with quantum programming patterns from Qiskit problems and tutorials, to build the Qiskit Code Assistant. This lives in your development environment and helps you learn quantum programming and write quantum programs directly.

Tim Hwang: I want to bring in Volkmar and Chris. Volkmar, maybe I’ll turn to you first. You work with enterprises and think about AI infrastructure. I’m curious on two fronts: are customers starting to ask, “I keep reading about quantum, are you guys going to support that soon?” And second, is that in the long-term forecast for infrastructure? Do you predict in four or five years we’ll need quantum computers online, or is that not really in your long-term planning?

Volkmar Uhlig: I don’t think there’s a clear path to unify the two. But on the flip side, we’re getting away from traditional x86 boxes to a world where compute capacity is more specialized for specific tasks. We have AI computers now; there are companies focusing only on AI capacity. Similarly with quantum, there will be players with quantum computers online.

This goes back to my prior life in self-driving cars. When building an AI model, you need ground truth to train it. We see this with large language models—initially, they just downloaded the internet as ground truth and did next-token prediction. In self-driving, you need massive observational data. The model is an approximation of reality.

If you go into biology, chemistry, or physical phenomena, you need a good sample set to train a neural network as an approximation. If producing that data takes decades or centuries, that’s where a quantum computer can be extremely useful. You can use it to explore the solution space because it’s fast, produce a bunch of data, and then use that to train a neural network as an approximator. Then you can work without the quantum computer and look at phenomena on a desktop machine. CERN has been doing this for decades—training neural networks as physics approximations. This is where the two can really come together.

Tim Hwang: So, applying AI to the quantum space. Blake, you talked about training a code assistant for quantum. Volkmar, you’re raising another interesting point: to take advantage of applications, we might need to generate data, and AI is a way to get there.

Chris, you’ve been uncharacteristically quiet. I’m curious about your view on future prospects for quantum. As someone who plays with models a lot, there’s an interesting story about models getting more specialized, but I’m curious about your take.

Chris Hay: I was really quiet because I feel really dumb on this subject with super smart guys talking about quantum. But I’ll take my super dumb approach: I think you’re going to need AI to interact with quantum machines because AI is really good at explaining things like you’re a six-year-old, and I’m going to need that if I’m going to program a quantum computer. I’m going to need to “vibe code” it. So, code assistance for vibe coding quantum is a path.

A more serious one: if I think about what quantum is really good at—and I don’t really understand quantum—it’s about probability, error correction, and sampling. AI is about probability, next-token prediction, and sampling. We have two separate things focused on probability, sampling, and prediction. I can’t help but think they’ll come together, whether AI helps quantum predict better or quantum helps AI predict better. There’s a Venn diagram somewhere that brings these together. If you ask me further questions, please don’t, Blake, because then I’ll look really super dumb, but I think there’s something there.

Blake Johnson: There’s a fun take there. Maybe combine a bit of what Volkmar and Chris added. We don’t see a world where quantum replaces classical computing; it’s an accelerator for certain problems. Something exciting for the future is bringing different methods together. You see this pattern in computational science where people study a system, but the computational space is overwhelmingly large, so they use AI models to identify interesting regions of parameter space and then plug in detailed simulation models. An obvious upgrade is plugging in a quantum simulation model for the detailed simulation of an AI-identified interesting problem. The future really is quantum and AI.

Tim Hwang: It’s particularly interesting in the context of computing history. Computing started with all these specialized devices, converged towards general-purpose machines, and now in 2025, we’re suddenly saying we’ll have special hardware for AI, and quantum might be a specific platform for certain problems. There’s a re-divergence from the general-purpose model.

Volkmar Uhlig: There was a paper by Google 10-15 years ago, an allusion to the Watson statement that the world needs only five computers. Google said there’s a general-purpose computer, there’s search, and we don’t know what the other three are. I’ve been keeping track, and I think number three is the AI training supercomputer, number four is probably quantum, and who knows what number five is. These are very different compute patterns. When you can optimize something by a factor of a thousand or 10,000, it’s worthwhile to relook at the architecture. Quantum is one of those: if you can do something in minutes that would take a hundred years classically, it’s worth a completely different architecture.

Tim Hwang: It’s hard to call these replacements, because they’re so different in design space and each solves one problem very well.

Blake, producer Hans said we have to cover all of quantum in 15-20 minutes, so we’ve done our best. I know you need to go, but one final question: you’ve done a great job parsing what’s important. How can our audience cut through the noise in quantum news? What’s important to pay attention to? Any final parting recommendations?

Blake Johnson: I would caution listeners against the narrative that quantum computers can’t do anything until we have error correction. The most general-purpose algorithms are large computations needing large systems, but we’re already in a realm where we can execute circuits we can’t simulate. It’s harder to believe that nature doesn’t permit anything useful between now and something a billion times larger.

The thing to pay attention to is the steady march of progress in machine performance as people build fundamental ingredients to do larger computations. What we can do is directly connected to the scale of computation we can reliably execute.

Tim Hwang: Great to keep in mind. Blake, thanks for joining us and spending time this morning. Hopefully, we’ll get you back on a future episode because I’m sure there will be more quantum news this year.

Well, that was great. I’m going to move us to our next topic. The thing dominating my group chats and AI social media this week was the Model Context Protocol (MCP) released by Anthropic. Anthropic describes it as “a universal open standard for connecting AI systems with data sources.” People have been frothing at the mouth with excitement.

Chris, let me turn it over to you. When I read “universal open standard for connecting AI systems with data sources,” are they just talking about APIs? Why is MCP important? What do you think about the release?

Chris Hay: MCP has been around for a bit, but what’s made it super cool is that it’s hooked into editors like Cursor or Cline (my favorite). You can access MCP from there. Under the hood, it’s just JSON-RPC (remote procedure calls), so nothing magical, but they’ve standardized it in three important ways (a sketch of the wire format follows the list):

  1. Resources: Exposing resources like a database schema or a GitHub repository, so the model can look at an individual file.
  2. Tool calling: Defining which tools are available and their parameters, then executing them. This is important because traditional “function calling” requires the functions to live locally on your machine. With MCP, servers can serve up tools remotely, so you can mix and match. For example, in Cursor there might be a tool server for sequence diagrams (Mermaid) or bar charts, so I can say, “Generate an architectural diagram for this code.” Or an MCP server for AWS to deploy code. This ability to access tools in a federated fashion, with marketplaces building around it, supercharges models and coding environments like VS Code. I’m no longer restricted to my own code; I have access to tools and ecosystems, and the model or agent orchestrates it all, which is super cool.
  3. Prompts: Reusable prompt templates that a server can expose, so clients can surface them to users directly.
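Since, as Chris notes, the protocol is just JSON-RPC under the hood, the sketch below shows representative messages a client (such as an editor) sends to an MCP server. The method names follow the MCP specification; the “render_diagram” tool and its arguments are hypothetical.

```python
# Representative MCP traffic, expressed as Python dicts for readability.
import json

# Ask the server which tools it exposes.
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Invoke one of the advertised tools (the tool name here is hypothetical).
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "render_diagram",                     # e.g. a Mermaid renderer
        "arguments": {"source": "graph TD; A-->B;"},  # tool-specific parameters
    },
}

print(json.dumps(call_tool, indent=2))
```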

Tim Hwang: Were we always going to end up here? The hyped dream of agents 12-18 months ago was that they’d get good enough to integrate without a special standard. Was that always a pipe dream? Would we always need standardization for agents to use tools effectively?

Chris Hay: We were always going to end up here. A few shifts made it possible:

  1. MCP itself.
  2. Function calling: We needed a standardized way for models to interact with APIs.
  3. Structured output: Models used to be bad at generating exact formats for APIs; they’d hallucinate schemas. Now they can output exact syntax.
  4. Context length: The model’s short-term working memory. If it’s too small, you can’t do much with agents. Now 128k tokens is standard, newer models offer 256k, and Google’s models reach into the millions, so working memory is huge.

Models understand standards and how to make function calls, and now we’ve standardized at the API level, much as REST did for microservices (a representative tool definition is sketched below). This opens up tool marketplaces. The next step is agent marketplaces and multi-agent collaboration. MCP and tool marketplaces are the first step, but there’s more to come.
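For context, this is roughly what that standardized function-calling layer looks like: a tool described once in JSON Schema that any conforming model can fill in with structured output. The tool name and fields here are illustrative, not tied to any particular vendor’s API.

```python
# An illustrative tool definition in the JSON-Schema style common to chat APIs.
# A model with structured output emits arguments matching "parameters" exactly.
get_weather = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Paris'"},
            "units": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}
```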

Tim Hwang: My question is, does Anthropic have the juice? This is a battle of standards. They’ve thrown out their standard, and it’s popular and integrated into editors, but it’s not a given that the model creator gets to define the standard. I can imagine a world where databases say, “If you want to talk to our protocol, this is the standard.” In this competitive landscape, do you think Anthropic has the edge? Will this become the base standard?

Volkmar Uhlig: I think we should ask differently. Anthropic is hitting a point where their model’s value is higher if it can interface with an ecosystem because they can’t build every application. They’re opening access to information locked away to make their model more useful. Someone has to drive the standard. OpenAI hasn’t stepped up, so it’s coming from elsewhere.

OpenAI’s answer a year and a half ago was, “Point us at the Swagger API, and we’ll integrate.” This is an indicator that models will become more autonomous, not human-supervised. The traditional interface is English, but now it’s remote procedure calls: computers talking to computers. If two models talk to each other, should they use English or gibberish? We’re enabling software to be invoked directly, letting models access the rest of the world. Search is already integrated, but search engines are written for humans; queries are natural language. This is the next logical step.

You also need standardization for quality assurance. Without a standard, how do you verify that your model is doing the right thing? It’s hard, because the model gives back tokens that can be malformed. You want unit tests for correct syntax. Standardization allows things to talk to each other; it’s a natural progression.

Tim Hwang: I’m surprised it hasn’t happened before—maybe it was delayed.

Well, great. I’m going to move us to our next topic. One big news story this week for me was CoreWeave. If you haven’t been watching the AI hardware and cloud space, CoreWeave is an exciting upstart. They started as a crypto infrastructure company building specialized clouds for mining, noticed AI was a big market, and went all-in on AI. They’ve benefited from a close relationship with NVIDIA and early access to next-generation chips. As a result, CoreWeave has grown hugely and is now filing for an IPO.

Volkmar, maybe I’ll turn to you. You’re the obvious person to respond. I’m interested in how you think about the market for companies specializing in AI compute. There are 800-pound gorillas dominating the cloud market. Is it believable that a company specializing in one area can survive and become gigantic? What are the prospects for specialized compute companies in AI?

Volkmar Uhlig: This goes back to the “five computers” we talked about. There’s a wave of new companies entering the cloud space to serve the niche of supercomputers. CoreWeave isn’t just giving AI capacity; they specifically give you an AI training cluster. When you go to CoreWeave, you’re buying 10,000 H100s wired into a single supercomputer, and they run that for you.

IBM, for example, announced a relationship with CoreWeave, and we run training jobs there. It’s a natural progression in compute demands—I don’t want the asset on my books or to build in-house capability to operate these large machines. There’s an economy of scale, similar to the cloud, to operate this well. CoreWeave is leading; they’re really good at their job.

It’s a natural progression, but a new market of high-performance computing hosting companies will evolve. The big question is, how are traditional ones like Azure, Google, and AWS doing? So far, CoreWeave has an edge because they don’t worry about virtual private cloud networks internally; they just give you a computer with a lot of GPUs.

Tim Hwang: It’s still counterintuitive. Azure’s deep capital makes it seem like they could offer that too; they have more money than the smaller providers. It seems like CoreWeave has unique know-how in deploying clusters that makes it difficult for the traditional players to shove them out. Is that the right reading?

Volkmar Uhlig: Traditional cloud vendors rent out thousands of individual computers; their DNA is not clusters of a thousand machines working in concert. Their approach is a thousand machines limping along: if one fails, they give you another. For training workloads, that’s not sufficient; you need a thousand machines that stay up. Any fault or network congestion has a dramatic impact.

NVIDIA had challenges with HBM (high bandwidth memory). If one GPU has a silent HBM error, your training job fails or forgets what it learned. CoreWeave monitors every wire in their cluster—connectivity from CPU to PCIe switch to the card, dealing with link flapping, etc., to keep that one computer up. The traditional cloud approach is to take it offline and give you a different one. There’s a different DNA needed as an operator to run these machines, pervasive in your control plane and monitoring. In the cloud, taking one machine offline is fine.

Tim Hwang: Chris, you want to jump in with your hot take?

Chris Hay: If I think about Bitcoin, everybody started on CPUs, moved to GPUs, then FPGAs, then ASICs. For inference, we’re seeing the same thing: everybody’s building their own inference chips. What does that leave? Training compute. But we discussed last week: is the era of pre-training dead? We’ve moved to reasoning models that take a base pre-trained model, so a few companies do the massive training runs, and then everybody else is into inference-time compute.

So, where is the market? There’s a big market now, but does it stay? Then look at the desktop market: yesterday, Apple announced M4 Mac Studios, where you can run models like DeepSeek-R1 on your desktop. And we discussed desktop boxes a few months ago where you can train. Will the fine-tuning market stay in the cloud or move to devices? And will AWS and Azure let somebody else eat their lunch? They’ll say, “No! We’ll put that capability in and squish you.”

Tim Hwang: With the shift to reasoning models, it’s interesting to think it favors the incumbents, because an inference world looks less like the training world; effectively, it’s “broken, pull it out, put a new one in.” You don’t think so?

Volkmar Uhlig: Post-training is now much bigger than pre-training, but it’s a mix of training and inferencing. Post-training is sometimes 5-10x more expensive. In post-training, your model still lives in a cluster, but your loss function went from milliseconds to minutes. That’s the challenge. There’s a bigger balance between training cost and the infrastructure for computing the loss function. The loss function needs the weights you just trained, so you still have large training clusters running a mixed workload: the computational cost has shifted, but the fundamental need for a big HPC machine hasn’t.

From my perspective, the big question is whether the market is big enough that Google, Amazon, and Microsoft decide it’s critical, because otherwise workloads move to specialized vendors, creating a drag they don’t want. They have two options: build or buy. The heads of the training clusters at Google, Amazon, and Microsoft are ex-HPC guys, so they have the talent. How fast are they moving? Do they see this as big enough? CoreWeave is worth a couple of billion; Microsoft is a couple of trillion: three orders of magnitude difference in market cap. It may not be critical to them right now, or, now that these specialized companies are online, the big players may just do it themselves.

Chris, you’re right: the chances that trillion-dollar businesses take out billion-dollar businesses are high, and their negotiating power is better. But fundamentally, a different compute paradigm allowed this market to exist because it was underserved. Because it was underserved, this company exists. Now let’s see if the incumbents close the gap.

Chris Hay: I agree it’s an underserved market. But every AI provider or cloud provider is investing in designing their own chips to bring down cost. Latency on inference is key, and that’s the biggest focus: getting inference chips right. So, having massive clusters for big training runs and post-training phases is a different mix of workload, with new techniques. CoreWeave is saying, “Here’s my big cluster, go for it.” I can’t help thinking any AI model provider will invest in that space themselves, with their own chips and infrastructure. I get buy vs. build, but as you say, these are small numbers for now. I don’t see the big cloud providers handing over cash to a third party.

Tim Hwang: We could go on at length; it’s interesting how the infrastructure landscape will look with these pressures. The third path where big companies say, “What’s a few billion?” and leave the market alone is a path I hadn’t thought of. We shall see.

Well, great. For the last segment, we only have a minute, so I’ll quickly touch on a story that popped up and got a lot of chatter online. A startup called Sesame, launched by an Oculus co-founder, has been working on synthetic voice. They released a demo that, for me, has crossed the uncanny valley of voice interfaces. To be clear, I don’t really use voice on OpenAI or Anthropic, but this is the first time a demo felt smooth enough to feel like interacting with a human.

Quick around the horn: Chris, Volkmar, have you played with it? Is it worth checking out, overhyped? Do you think we’re finally there from a voice standpoint? Quick takes before we close.

Chris Hay: Oh my goodness, that model got me in trouble with my wife. I put it on at 11 p.m. to interact, and my wife said, “Who are you speaking to? I hear a woman’s voice!” I had to switch it off because it was so realistic. That model is incredible. They’ve solved latency and the utterance problem—the silence, the waiting. It feels like a natural interaction, like talking to somebody else. This will change everything—contact centers, customer service experiences. They’ll kick off agents to do workflows. The model is incredible. If you’ve interacted with other voice models and thought, “It’s not quite there,” check this out—what they’ve done is incredible.

Tim Hwang: All right. Volkmar, parting shots. Hyped or overhyped?

Volkmar Uhlig: I agree with Chris; it’s amazing. I tried it in the office, so my wife wasn’t listening. It shows the other end of the spectrum: we’ve had military-style, command-driven Siri conversations, and this was smooth, chatty, friendly, funny. Now we have both ends of the spectrum and can populate all the points in between. You can make models for pretty much any human interaction.

I can’t wait for when I call an airline and they don’t tell me to wait for an agent—they can just pick up and talk. This is a great extension to the spectrum. It’s also good that someone is nice to you when you’re driving the wrong way.

Tim Hwang: It might be too sassy for airlines. You call, “My flight is delayed,” and it says, “Ah, but did you actually get there on time, Volkmar? Did you plan enough?” Maybe too chatty for that scenario. It’s a good way to see where we can go. If you can do that, you can do anything. The really interesting part is how they express emotion: how they get those emotions into the model mathematically. If you get that dial, it’s powerful.

Chris Hay: “Emotional” is the most important word. When I interacted with it, it felt real; there was a feeling no other voice model has given me. This is something different. They’re open about their techniques, and I think the weights are being opened soon. This is a game-changer.

Tim Hwang: Well, you heard it here first—check out the Sesame demo. That’s all the time we have today. Chris, Volkmar, thanks for joining us as always. We’ll have to do the duo show again. Thanks to all you listeners for tuning into Mixture of Experts. If you like what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere. We’ll see you next week here on MOE.

 

Explore more episodes

Claude 3.7 Sonnet, BeeAI agents, Granite 3.2 and emergent misalignment
IBM® Granite™ 3.2 is officially here. In episode 44 of Mixture of Experts, join host Tim Hwang and experts Kate Soule, Maya Murad and Kaoutar El Maghraoui to debrief a few big AI announcements.
Deep Research, OpenAI inference chip, small VLMs and AI agent job posting
What is the hype with Deep Research? In episode 43 of Mixture of Experts, we cover Deep Research, OpenAI’s inference chip rumors, small vision language models (VLMs) and an AI agent job posting.
Paris AI Summit, Altman’s “Three Observations,” Anthropic’s Economic Index
Live from Paris, Tim Hwang is at AI Action Summit 2025. In episode 42 of Mixture of Experts, we welcome Anastasia Stasenko, CEO and co-founder of Pleias, along with our veteran experts. We analyze the Paris AI Summit, s1: Simple test-time scaling, Sam Altman’s “Three Observations,” and Anthropic’s Economic Index.
OpenAI’s deep research and o3-mini, AI Action Summit and Anthropic’s Constitutional Classifiers
What does Sam Altman have up his sleeve? In episode 41 of Mixture of Experts, host Tim Hwang along with experts Nathalie Baracaldo, Marina Danilevsky and Chris Hay dissect OpenAI’s deep research and o3-mini, and the AI Action Summit. They also discuss Anthropic’s Constitutional Classifiers and Microsoft’s unit to study AI’s impact.
Watch all episodes from Mixture of Experts

Learn more about AI

What is artificial intelligence (AI)?

Applications and devices equipped with AI can see and identify objects. They can understand and respond to human language. They can learn from new information and experience. But what is AI?

What is fine-tuning?

It has become a fundamental deep learning technique, particularly in the training process of foundation models used for generative AI. But, what is fine-tuning and how does it work?

Build an AI-powered multimodal RAG system with Docling and Granite

In this tutorial, you will use IBM's Docling and open source IBM Granite vision, text-based embeddings and generative AI models to create a RAG system.
