This is a very good answer, but I’ll try to elaborate to make things clearer:
RAG is done by:
Taking a long text and splitting it into chunks of a certain size/length.
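A minimal sketch of that splitting step, assuming simple fixed-size character chunks with overlap (the sizes are arbitrary choices here; real splitters often also try to respect sentence or paragraph boundaries):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap, so a
    sentence cut at a chunk boundary still appears whole in a neighbor."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

chunks = chunk_text("some long document " * 50)
```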
You run each chunk of text through a function which turns the text into a vector representation. This vector representation is called an embedding, and the function used is an embedding function/model, e.g. OpenAIEmbeddings(). You then generally store these vectors in a vector database (e.g. Qdrant, Weaviate).
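To make that concrete without calling a real API, here is a sketch where `toy_embed` is a stand-in for a real embedding model (it only counts hashed words, so it captures word overlap rather than meaning), and the "vector database" is just a list of (vector, text) pairs:

```python
import hashlib
import math

def toy_embed(text: str, dim: int = 8) -> list[float]:
    """Stand-in for a real embedding model such as OpenAIEmbeddings():
    hashes each word into a bucket of a fixed-size vector, then
    normalizes to unit length."""
    vec = [0.0] * dim
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

# A minimal "vector database": each embedding stored next to its source text.
vector_db = [(toy_embed(chunk), chunk) for chunk in ["cats purr", "dogs bark"]]
```

In practice the database side would be a client call (e.g. an upsert into Qdrant or Weaviate), but the idea is the same: the vector and its original text travel together.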
When someone asks a question, create an embedding for the question.
Since your question is now a vector (embedding), and your data is represented as vectors (embeddings) in your vector db (from the indexing step above), you can compare your question vector with your data vectors. Technically, you measure the distance between your question vector and the vectors in your vector db. Vectors closer to your question vector are likely to contain data relevant to your question.
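The comparison step can be sketched with cosine similarity, one common distance measure vector databases use (the vectors below are toy 2-d stand-ins for stored chunk embeddings):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy stored embeddings; a real db would hold one per chunk.
db = {"chunk about cats": [1.0, 0.0], "chunk about dogs": [0.0, 1.0]}
question_vec = [0.9, 0.1]  # pretend this came from embedding the question

# Rank stored chunks by similarity to the question, most similar first.
ranked = sorted(db, key=lambda k: cosine_similarity(question_vec, db[k]),
                reverse=True)
```

Here the question vector points mostly in the "cats" direction, so that chunk ranks first.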
You grab the text corresponding to the (e.g.) 3 closest vectors from your vector db; the text is often stored along with the vector for exactly this retrieval purpose. You send that text + question to your LLM (e.g. GPT-4), and essentially say: “Answer this question based only on these 3 chunks of text.” That way you limit the language model's knowledge to what you explicitly give it.
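That final step is just prompt assembly. A sketch of how the retrieved chunks and the question might be stitched together (the exact wording of the instruction is a design choice, not a fixed API):

```python
def build_rag_prompt(question: str, top_chunks: list[str]) -> str:
    """Assemble the prompt sent to the LLM: retrieved context plus the
    question, with an instruction restricting answers to that context."""
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(top_chunks))
    return (
        "Answer the question using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_rag_prompt("Do cats purr?",
                          ["Cats purr when content.", "Dogs bark."])
```

The resulting string is what actually gets sent as the user (or system) message; the "limiting" of the model's knowledge is nothing more than this instruction plus the selected chunks.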