I am running a LLaMA 13B instance (via GPT4All) and am finding inference times to be quite slow, especially for summarization. Does anyone have recommendations for models that can summarize 4k+ token inputs extremely quickly?

  • FlishFlashman@alien.top · 1 year ago

    Please be specific. What's "quite slow"? What's "extremely quickly"? Give numbers with a unit of time, e.g. tokens per second.
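
    A minimal sketch of getting such a number, assuming the GPT4All Python bindings; the model filename is a placeholder for whatever .gguf file you actually run:

    ```python
    # Rough generation-speed measurement with the GPT4All Python bindings.
    import time

    from gpt4all import GPT4All

    model = GPT4All("llama-13b.Q4_0.gguf")  # placeholder filename

    start = time.perf_counter()
    output = model.generate("Summarize the following text:\n...", max_tokens=256)
    elapsed = time.perf_counter() - start

    # Whitespace word count only approximates the token count,
    # but it is good enough for a rough rate.
    words = len(output.split())
    print(f"~{words} words in {elapsed:.1f} s -> ~{words / elapsed:.1f} words/s")
    ```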

    What hardware are you running on? Without changing hardware, your best bet is a smaller model (fewer parameters), a more heavily quantized 13B model, or both.
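
    A minimal sketch of that suggestion, assuming the GPT4All Python bindings; the 7B model filename is an example, not a specific recommendation:

    ```python
    # Trying a smaller model at 4-bit quantization with the GPT4All Python
    # bindings. A 7B Q4 model needs roughly half the memory and compute of
    # a 13B model, so it should generate noticeably faster on the same
    # hardware.
    from gpt4all import GPT4All

    # Example filename only; check the GPT4All model list for what exists.
    model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")

    with model.chat_session():
        summary = model.generate(
            "Summarize the following text in three sentences:\n...",
            max_tokens=300,
        )
        print(summary)
    ```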