I’m fascinated by the whole ecosystem popping up around llama and local LLMs. I’m also curious what everyone here is up to with the models they are running.

Why are you interested in running local models? What are you doing with them?

Secondarily, how are you running your models? Are you actually running them on local hardware, or on a cloud service?

  • Equal-Bug1591@alien.topB
    1 year ago

    I haven’t started yet, but I come here every day to read and learn. I want to see if it’s possible to develop a single-board computer that can run a usable local model, plus voice, all without any wireless connection.

    Really curious what hardware would be needed to run the model plus the voice recognition and synthesis. I can do all the HW design, PCB routing, low-level C, asm, Verilog, etc., but there’s still so much to understand about modern AI tech. Really exciting to learn new things and have a hobby project outside of work.
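    For the "what hardware would be needed" question, a common rule of thumb (an approximation, not a spec) is that a quantized model needs roughly its parameter count times the bits per weight, plus some runtime overhead for the KV cache and activations. A quick sketch of that estimate, with the ~4.5 bits/weight figure assumed as typical for 4-bit quantization formats:

    ```python
    def model_ram_gb(params_billions, bits_per_weight=4.5, overhead_gb=1.0):
        """Rough RAM estimate: quantized weights plus a flat runtime overhead.

        bits_per_weight ~4.5 approximates common 4-bit quant formats
        (assumption); overhead_gb covers KV cache/activations (assumption).
        """
        weights_gb = params_billions * bits_per_weight / 8
        return weights_gb + overhead_gb

    # Ballpark figures for a few popular model sizes:
    for params_b in (3, 7, 13):
        print(f"{params_b}B @ ~4.5 bits/weight: ~{model_ram_gb(params_b):.1f} GB RAM")
    ```

    By this estimate a 7B model at 4-bit quantization wants roughly 5 GB, which is why 8 GB boards come up so often as the practical floor for a usable offline assistant, before adding the speech models on top.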