I was hoping to run some context-window retrieval testing on open-source long-context models such as Yarn-Mistral-128k, but I'm only working with a 16GB Mac M2. Does anyone have experience with inference on such a setup?

I have an automated evaluation script that generates various contexts and retrieval prompts, iterating over context lengths. I was hoping to be able to call the model iteratively from this script; what would be your preferred way to achieve this? llama.cpp? oobabooga? Anything else?
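For reference, here's a minimal sketch of the kind of loop I have in mind, assuming llama-cpp-python and a local GGUF quant; the model path, context sizes, and the build_prompt/check_answer helpers are placeholders for whatever the eval script actually does:

```python
# Hypothetical sketch: iterate a retrieval eval over context lengths with
# llama-cpp-python (pip install llama-cpp-python). Paths and helpers are
# placeholders, not a working benchmark.
from llama_cpp import Llama

MODEL_PATH = "yarn-mistral-7b-128k.Q4_K_M.gguf"  # any local GGUF quant

for n_ctx in (4096, 8192, 16384, 32768):
    # Re-load per context length; n_gpu_layers=-1 offloads all layers to Metal,
    # 0 keeps everything on the CPU if 16 GB of unified memory is too tight.
    llm = Llama(model_path=MODEL_PATH, n_ctx=n_ctx, n_gpu_layers=-1, verbose=False)
    prompt = build_prompt(n_ctx)            # your context + retrieval question
    out = llm(prompt, max_tokens=64, temperature=0.0)
    answer = out["choices"][0]["text"]
    print(n_ctx, check_answer(answer))      # your scoring function
    del llm                                 # free memory before the next size
```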

  • FlishFlashman@alien.topB · 10 months ago

    When I load yarn-mistral-64k in Ollama (which uses llama.cpp) on my 32GB Mac, it allocates 16.359 GB for the GPU. I don't remember how much the 128k-context version needs, but it was more than the 21.845 GB macOS allows for GPU use on a 32GB machine. You aren't going to get very far on a 16GB machine.

    Maybe if you don't send any layers to the GPU and force it to run on the CPU you could eke out a little more. On Apple Silicon, CPU-only inference only seems to be about a 50% hit relative to GPU speeds, if I remember right.
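    If you go the llama.cpp / llama-cpp-python route, CPU-only just means offloading zero layers (the equivalent of `-ngl 0` on the llama.cpp CLI). A rough sketch, with the path and context size as placeholders:

    ```python
    # Minimal sketch: keep all layers on the CPU so inference isn't limited by
    # the Metal/GPU memory cap. Path and n_ctx are placeholders.
    from llama_cpp import Llama

    llm = Llama(
        model_path="yarn-mistral-7b-64k.Q4_K_M.gguf",
        n_ctx=8192,
        n_gpu_layers=0,   # 0 = no layers offloaded to the GPU; runs on CPU
    )
    ```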