Vector embeddings capture the semantic content of unstructured information. Combined with a vector database or a similarity-search algorithm, these embeddings offer a valuable means of retrieving contextually relevant data for an LLM.
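To make this concrete, here is a minimal sketch of embedding-based retrieval. It assumes the sentence-transformers library and uses a small in-memory numpy array in place of a real vector database; the model name and documents are illustrative placeholders.

```python
# Minimal embedding-based retrieval sketch (assumes sentence-transformers).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

documents = [
    "RAG retrieves context before generation.",
    "Vector databases index embeddings for similarity search.",
    "LLMs can hallucinate without grounding.",
]

# Embed the corpus once; normalized vectors make dot product = cosine similarity.
doc_vecs = model.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q
    top = np.argsort(-scores)[:k]
    return [documents[i] for i in top]

print(retrieve("How do I ground an LLM's answers?"))
```

A production system would swap the numpy array for a vector database, but the retrieval logic is the same.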
By dynamically linking vector embeddings to specific information in the database, LLMs gain access to an up-to-date and ever-expanding knowledge base. This continuous updating ensures that LLMs remain capable of generating accurate and contextually appropriate outputs, even as the underlying information changes. Because the generated output is augmented with retrieved context, this approach is called Retrieval-Augmented Generation (RAG).
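The "augmentation" step is just prompt assembly. Here is a sketch that reuses retrieve() from the snippet above; llm_complete is a hypothetical stand-in for whatever LLM completion API you use.

```python
# Sketch of the augmented-generation step; llm_complete is hypothetical.
def rag_answer(question: str) -> str:
    """Retrieve context and ask the LLM to answer grounded in it."""
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm_complete(prompt)  # replace with your LLM API call
```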
This talk will introduce the topic of RAG, show examples, provide code, and discuss the tradeoffs of various use cases.
Talk
Slides
You can download the slides for the talk here.