I'm not super knowledgeable about the specs of the different Orange Pi and Raspberry Pi models. I'm looking for something relatively cheap that can connect to WiFi and USB. I want to be able to run at least 13B models at a decent tok/s.

Also open to other solutions. I have a Mac M1 (8 GB RAM), and upgrading the computer itself would be cost-prohibitive for me.

  • ThinkExtension2328@alien.top · 1 year ago

    Honestly, the M1 is probably the cheapest solution you have. Get yourself LM Studio and try out a 7B `K_M` model; you're going to struggle with anything larger than that. But that will let you experience what we're all playing with.
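
    The "7B is the ceiling on 8 GB" claim can be sanity-checked with back-of-envelope arithmetic. The bits-per-weight figures below are rough assumptions, not exact (quant formats carry block scales, so e.g. Q4_0 works out to about 4.5 bits per weight in llama.cpp, and Q4_K_M lands a bit higher); treat this as a sketch, and remember the KV cache and the OS need RAM on top of the weights.

    ```python
    # Rough RAM estimate for the weights of a GGUF-quantized model.
    # Bits-per-weight values are approximate assumptions for illustration.

    def model_size_gib(n_params_billion: float, bits_per_weight: float) -> float:
        """Approximate size of the weights alone, in GiB."""
        total_bytes = n_params_billion * 1e9 * bits_per_weight / 8
        return total_bytes / 2**30

    for label, bpw in [("Q4_0", 4.5), ("Q4_K_M", 4.8), ("Q5_0", 5.5)]:
        print(f"7B @ {label}: ~{model_size_gib(7, bpw):.1f} GiB")

    print(f"13B @ Q4_K_M: ~{model_size_gib(13, 4.8):.1f} GiB")
    ```

    A 7B 4-bit-class quant comes out around 4 GiB, which fits (tightly) in 8 GB of unified memory; a 13B quant is already past 7 GiB before any overhead, which is why 13B on an 8 GB M1 is not realistic.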

    • ClassroomGold6910@alien.top (OP) · 1 year ago

      3B models work amazingly and super smoothly, but 7B models, while running at a fair 15 tokens per second, prevent me from using any other application at the same time and occasionally freeze my mouse and screen temporarily until the response is finished.

    • ClassroomGold6910@alien.top (OP) · 1 year ago

      What's the difference with `K_M` models? Also, why is `Q4_0` legacy but not `Q4_1`? It would be great if someone could explain that lol

      • Sea_Particular_4014@alien.top · 1 year ago

        Q4_0 and Q4_1 would both be legacy.

        The `K_M` is the new "k-quant" (I guess it's not that new anymore; it's been around for months now).

        The idea is that the more important layers are done at a higher precision, while the less important layers are done at a lower precision.

        It seems to work well, which is why it has become the new standard for the most part.

        Q4_K_M does the most important layers at 5-bit and the less important ones at 4-bit.

        It is closer in quality/perplexity to Q5_0, while being closer in size to Q4_0.
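
        The precision trade-off above can be sketched with a toy round-trip quantizer. This is illustrative only: real k-quants use per-block scales and minimums, not this naive whole-tensor scheme. It just shows that the same weights reconstructed at 5-bit land closer to the originals than at 4-bit, which is why spending the extra bit on the important layers pays off.

        ```python
        # Toy symmetric quantization: map floats to `bits`-bit integers and back.
        # Illustrative only -- not the actual GGUF/k-quant scheme.

        def quantize_dequantize(values, bits):
            """Round-trip `values` through a naive symmetric `bits`-bit grid."""
            levels = 2 ** (bits - 1) - 1          # 7 levels for 4-bit, 15 for 5-bit
            scale = max(abs(v) for v in values) / levels
            return [round(v / scale) * scale for v in values]

        weights = [0.13, -0.72, 0.05, 0.91, -0.34, 0.27, -0.58, 0.44]

        for bits in (4, 5):
            restored = quantize_dequantize(weights, bits)
            err = sum(abs(a - b) for a, b in zip(weights, restored)) / len(weights)
            print(f"{bits}-bit mean abs error: {err:.4f}")
        ```

        The 5-bit grid halves the spacing between representable values, so its reconstruction error is roughly half that of the 4-bit grid on the same data.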