I’m fascinated by the whole ecosystem popping up around llama and local LLMs. I’m also curious what everyone here is up to with the models they are running.

Why are you interested in running local models? What are you doing with them?

Secondly, how are you running your models? Are you truly running them on local hardware, or on a cloud service?

  • Aperturebanana@alien.topB · 11 months ago

    Such a well-written post. But boy, your gear is so much pricier than a ChatGPT subscription.

    • thetaFAANG@alien.topB · 11 months ago

      Tax deductible, if you use your imagination.

      And you get to play with gear you already wanted.

      And you get experience for super-high-paying jobs.

      It just comes down to fitting it within your budget to begin with.

      • Infamous_Charge2666@alien.topB · 11 months ago

        To deduct anything on your taxes, you have to earn. And most users here are students (undergrads/PhDs/masters) who earn too little to deduct $10k in PC hardware.

        The best way is to ask your program (PhD) for sponsorship, or if you're an undergrad, to apply for scholarships.