Large language models usually give great answers, but because they're limited to the training data used to create them, over time their answers can become incomplete, or worse, just plain wrong. One way of improving LLM results is called retrieval-augmented generation, or RAG. In this video, IBM Senior Research Scientist Marina Danilevsky explains the LLM/RAG framework and how this combination delivers two big advantages: the model draws on the most up-to-date and trustworthy facts, and you can see where the model got its information, lending more credibility to what it generates.
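To make the idea concrete, here is a minimal sketch of the RAG flow described above: retrieve supporting passages, put them into the prompt as context, and have the model answer while citing its sources. Everything in it is illustrative; the document store, the keyword-overlap retrieval, and the generate() stub are assumed placeholders, not watsonx.ai or any specific library's API. A real system would use a vector index and a hosted LLM.

```python
# Illustrative sketch of a retrieval-augmented generation (RAG) loop.
# The corpus, retrieval, and generate() stub are simplified stand-ins.

DOCUMENTS = [
    {"id": "doc-1", "text": "Jupiter has 95 confirmed moons as of 2023."},
    {"id": "doc-2", "text": "Saturn has 146 confirmed moons as of 2023."},
]

def retrieve(question: str, k: int = 2) -> list[dict]:
    """Rank documents by naive word overlap with the question (stand-in for vector search)."""
    q_words = set(question.lower().split())
    scored = sorted(
        DOCUMENTS,
        key=lambda d: len(q_words & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, passages: list[dict]) -> str:
    """Prepend retrieved evidence so the model answers from it rather than from memory alone."""
    context = "\n".join(f'[{p["id"]}] {p["text"]}' for p in passages)
    return (
        "Answer using only the context below and cite the source id.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

def generate(prompt: str) -> str:
    """Placeholder for a call to an LLM (e.g., a model hosted in a studio like watsonx.ai)."""
    return "Saturn, with 146 confirmed moons [doc-2]."

if __name__ == "__main__":
    question = "Which planet has the most moons?"
    print(generate(build_prompt(question, retrieve(question))))
```

Because the answer is grounded in the retrieved passages, it can reflect facts newer than the model's training data, and the cited document ids show where the information came from, which are the two advantages highlighted in the video.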
IBM watsonx.ai is the next-generation enterprise studio for AI builders – bringing together new generative AI capabilities and traditional machine learning into a powerful studio spanning the AI lifecycle. Tune and guide models with your data to meet your needs with easy-to-use tools for building and refining performant prompts.
Put AI to work in your business with IBM’s industry-leading AI expertise and portfolio of solutions at your side.
Reinvent critical workflows and operations by adding AI to maximize experiences, real-time decision-making and business value.