Yeah. And it can’t answer Bilbo’s riddle either!
I asked it “What do I have in my pockets” and it was wrong!!!
word2vec is a library used to create embeddings.
Embeddings are vector representations (a list of numbers) that represent the meaning of some text.
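For a rough idea, here’s a minimal sketch of creating word embeddings with gensim’s Word2Vec implementation (the toy corpus and parameter values are just placeholders):

```python
# Minimal sketch: train word2vec on a toy corpus and look up a vector.
from gensim.models import Word2Vec

sentences = [
    ["embeddings", "represent", "the", "meaning", "of", "text"],
    ["what", "do", "i", "have", "in", "my", "pockets"],
]

# vector_size is the length of each embedding vector; min_count=1 keeps
# every word even in this tiny corpus.
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1)

vec = model.wv["embeddings"]   # a list of 50 numbers representing the word
print(vec[:5])
```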
RAG is retrieval augmented generation. It is a method to get better answers from GPT by giving it relevant pieces of information along with the question.
RAG is done by:
1. Splitting your documents into sections and creating an embedding for each section.
2. Creating an embedding for the question.
3. Finding the sections whose embeddings are most similar to the question’s embedding.
4. Passing those sections to GPT along with the question.
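In code, those steps look roughly like this. It’s only a sketch: embed() and ask_gpt() are placeholders for whatever embedding model and chat API you actually use.

```python
# Rough RAG sketch: similarity search with numpy, then ask the model.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: return the embedding vector for `text`
    # (from an embeddings API or a local model).
    raise NotImplementedError

def ask_gpt(prompt: str) -> str:
    # Placeholder: send the prompt to GPT (or a local LLM), return the answer.
    raise NotImplementedError

def answer(question: str, sections: list[str], top_k: int = 3) -> str:
    section_vecs = np.array([embed(s) for s in sections])
    q_vec = embed(question)

    # Cosine similarity between the question and every section.
    sims = section_vecs @ q_vec / (
        np.linalg.norm(section_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    best = np.argsort(sims)[::-1][:top_k]

    context = "\n\n".join(sections[i] for i in best)
    return ask_gpt(f"Answer using this context:\n{context}\n\nQuestion: {question}")
```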
A vector DB is a way to store those section embeddings and to search for the most relevant sections. You do not need a vector DB to perform RAG, but it can help, particularly if you want to store and search a very large amount of information.
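As a sketch of what that looks like, here’s FAISS used as a simple vector index (FAISS is just one option, and the dimension and random vectors below are stand-ins for real embeddings):

```python
# Store section embeddings in a FAISS index and find the nearest ones.
import faiss
import numpy as np

dim = 384                                   # assumed embedding width
index = faiss.IndexFlatL2(dim)              # exact nearest-neighbour search

section_vecs = np.random.rand(1000, dim).astype("float32")  # stand-in embeddings
index.add(section_vecs)                     # store every section's embedding

query_vec = np.random.rand(1, dim).astype("float32")        # embedded question
distances, ids = index.search(query_vec, 3)                 # 3 closest sections
print(ids[0])                               # indices of the most relevant sections
```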
It’s not really about fairness though, it’s about knowing where things stand.
I’ve used GPT 4 a lot so I have a rough idea of what it can do in general, but I’ve almost no experience with local LLMs. That’s something I’ve only played a little with recently after seeing the advances in the past year.
So I don’t really see it as a question that disparages local LLMs, and I don’t see fairness as an issue - it’s not a competition to me.