I haven’t exactly chosen my specific loss function yet. From what people have told me, looking up iBOT’s loss and DINOv2’s loss, as well as a loss from a paper by Google, might be helpful I think. But I might just end up summing multiple loss functions, if they’re useful, and then checking whether they work.
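To make the "summing multiple losses" idea concrete, here’s a minimal sketch of a weighted sum of loss terms. The loss names (`ibot`, `dino`, `koleo`) and values are placeholders, not the real iBOT/DINOv2 implementations:

```python
# Hypothetical sketch: combining several self-supervised loss terms
# (e.g. an iBOT-style masked-token loss and a DINOv2-style
# self-distillation loss) as a weighted sum. The per-term values
# below are made up for illustration.

def combined_loss(losses, weights):
    """Weighted sum of individual loss terms.

    losses  -- dict mapping loss name -> scalar loss value
    weights -- dict mapping loss name -> weight (missing names default to 1.0)
    """
    return sum(weights.get(name, 1.0) * value
               for name, value in losses.items())

# Example: pretend per-term losses from one training step
step_losses = {"ibot": 2.5, "dino": 1.0, "koleo": 0.1}
total = combined_loss(step_losses, {"koleo": 0.1})  # down-weight one term
```

The weights give you a cheap way to check whether an individual term actually helps: set it to 0 and compare runs.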
As for my objective, I don’t really have a specific application in mind right now other than a chatbot of sorts (with moderate-to-high logic/reasoning capability), but running on my CPU.
This is currently a rough idea of how I want it to work, tbh:
E.g.: Q. How should I take a rectangular door outside if all I have is a square window? Possible queries:
Possible/acceptable answers:
Sorry, from what I’ve seen I couldn’t find the answer. (This option would be chosen if the model doesn’t find the answer within a limit of n queries.)
Rectangles are more general than squares, and windows are generally smaller than doors, so depending on your exact sizes you might just be able to fit it through; but if the door and window are anything like standard sizes, I don’t think you’ll be able to.
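The "give up after n queries" behaviour above can be sketched as a simple retry loop. `run_query` here is a hypothetical stand-in for whatever retrieval or reasoning step the model performs per query:

```python
# Hypothetical sketch of the fallback described above: try up to n
# queries, and return the apology if none of them produce an answer.
# `run_query` is a placeholder callable, not part of any real API.

FALLBACK = "Sorry, from what I've seen I couldn't find the answer."

def answer_with_limit(question, run_query, n=3):
    """Try up to n queries; fall back to an apology if none succeed."""
    for _ in range(n):
        result = run_query(question)
        if result is not None:
            return result
    return FALLBACK

# Usage: a toy run_query that never finds anything
print(answer_with_limit("door vs window", lambda q: None))
```

The point is that the query budget n is a first-class parameter you can tune against latency on CPU.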
NLP. I’m trying to take Llama 2 Chat and compress it down so that it can be run on a mid-to-high-end CPU without losing too much accuracy.
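One common compression route for this is quantization. Here’s a minimal sketch of symmetric int8 post-training quantization of a weight vector; real CPU deployments of Llama 2 typically use more sophisticated schemes (e.g. grouped 4-bit quantization), so this only illustrates the principle, and the weight values are made up:

```python
# Hypothetical sketch: symmetric int8 quantization. Each float weight
# is mapped to an integer in [-127, 127] plus one shared scale factor,
# cutting storage from 32 bits to 8 bits per weight.

def quantize_int8(weights):
    """Map float weights to int8 values plus a scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Approximate reconstruction of the original floats."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.03, 1.0]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # approximately recovers the originals
```

The accuracy question then becomes: how much error does the rounding introduce, and does the model tolerate it? That is exactly the "without losing too much accuracy" trade-off.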
They seem to have distilled knowledge from a larger, general model into a smaller, specialised model that outperforms the larger model on a single task. Thanks for the paper. I wonder if I can specialise it to a subset of the original tasks and then try to outperform the original model.
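For reference, the core of that kind of distillation is usually a temperature-softened KL term between teacher and student outputs. A minimal sketch, with made-up logits (this is the standard Hinton-style distillation loss, not necessarily the exact loss from the paper):

```python
import math

# Hypothetical sketch: the student is trained to match the teacher's
# softened output distribution. Temperature T > 1 flattens the
# distributions so "dark knowledge" in the small logits is visible.

def softmax(logits, T=1.0):
    exps = [math.exp(x / T) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def distill_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

loss = distill_loss([3.0, 1.0, 0.2], [2.5, 1.2, 0.1])
```

Specialising to a task subset would then mean distilling only on data from those tasks, so the student spends its capacity where it matters.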
For me, the approach can be generalized to different tasks.
Can you elaborate?