I’m looking for insights and advice on extending the context window of LLMs (specifically Mistral).
Whether you’re a researcher, developer, or enthusiast in the field, I’d love to hear about your experiences and recommendations. Are there any specific techniques, methodologies, or tools you’ve found effective in extending the context window for LLMs?
Additionally, if you’ve encountered challenges in this area, how did you overcome them? Any resources, papers, or community discussions you can point me to would be greatly appreciated.
I have been able to expand the context window of multimodal models like GPT-4 simply by rendering the text to images at a small font size and then feeding it in as images. I haven’t done large-scale studies to measure the increase in perplexity or anything, but my empirical results have been great. Plus, you get the ability to analyze non-standard text.
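For anyone who wants to try this, here’s a rough sketch of the rendering step using Pillow. The page size, margins, and line metrics are arbitrary choices on my part, not anything tuned; you’d swap in a real TTF font at a small point size for actual use.

```python
# Sketch: render long text onto "pages" of images so a multimodal model
# can read it as image input. Page dimensions and line metrics are
# assumptions; tune them (and the font) for your model's vision resolution.
import textwrap
from PIL import Image, ImageDraw, ImageFont

def render_text_to_images(text, page_width=1024, page_height=1024,
                          margin=10, line_height=10, chars_per_line=200):
    """Render `text` onto one or more white-background images."""
    font = ImageFont.load_default()  # swap for ImageFont.truetype(path, size)
    lines = []
    for paragraph in text.splitlines():
        lines.extend(textwrap.wrap(paragraph, width=chars_per_line) or [""])
    lines_per_page = (page_height - 2 * margin) // line_height
    pages = []
    for start in range(0, len(lines), lines_per_page):
        img = Image.new("RGB", (page_width, page_height), "white")
        draw = ImageDraw.Draw(img)
        y = margin
        for line in lines[start:start + lines_per_page]:
            draw.text((margin, y), line, fill="black", font=font)
            y += line_height
        pages.append(img)
    return pages

pages = render_text_to_images("some long document " * 500)
print(len(pages))  # number of image "pages" produced
```

Each page then gets sent to the model as a separate image attachment alongside your actual prompt.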
If it were me, I would LoRA-adapt a model to take in image input. There’s a lot of space in the token-embedding space that is completely barren and could be used for reasoning.
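To make the suggestion concrete, here’s a minimal, hypothetical sketch of the two pieces involved: a projector that maps image features into the LM’s token-embedding space (so images land in otherwise-unused regions of it), and a bare-bones LoRA wrapper around a frozen linear layer. All dimensions and class names here are illustrative assumptions, not any particular model’s architecture.

```python
import torch
import torch.nn as nn

class ImageToTokenProjector(nn.Module):
    """Maps pooled image features (e.g. from a ViT) into `num_tokens`
    soft tokens living in the language model's embedding space."""
    def __init__(self, image_dim=768, embed_dim=2048, num_tokens=16):
        super().__init__()
        self.num_tokens = num_tokens
        self.embed_dim = embed_dim
        self.proj = nn.Linear(image_dim, embed_dim * num_tokens)

    def forward(self, image_features):  # (batch, image_dim)
        out = self.proj(image_features)
        return out.view(-1, self.num_tokens, self.embed_dim)

class LoRALinear(nn.Module):
    """Minimal LoRA: frozen base weight plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the low-rank factors train
        self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A @ self.B) * self.scale

# Demo: project fake image features into 16 soft tokens, then run them
# through a LoRA-wrapped linear standing in for an attention projection.
projector = ImageToTokenProjector()
lora_layer = LoRALinear(nn.Linear(2048, 2048))
image_feats = torch.randn(4, 768)
soft_tokens = projector(image_feats)   # (4, 16, 2048)
out = lora_layer(soft_tokens)
print(out.shape)  # torch.Size([4, 16, 2048])
```

In practice you’d splice those soft tokens into the model’s input embeddings and train only the projector and the LoRA factors, keeping the base weights frozen.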
Wait, WHAT