I’m fascinated by the whole ecosystem popping up around llama and local LLMs. I’m also curious what everyone here is up to with the models they are running.

Why are you interested in running local models? What are you doing with them?

Secondarily, how are you running your models? Are you truly running them on local hardware, or on a cloud service?

    • ttkciar@alien.topB · 1 year ago

      I don’t know why you’re getting downvoted. By my best reckoning, about two-thirds of this sub’s regulars use LLM inference for smut.

      It’s not one of my use-cases, but to each their own, and it’s undeniably helping advance the state of the art (much as the online porn industry helped advance web development).