Curious if anyone bought the whole rig and then realized they didn’t really need it, etc.

  • iwishilistened@alien.top · 1 year ago

    I was building an app and then realized it was cheaper to just call the inference API for Llama on Azure lol. Put my local Llama on hold for now
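
    For context, "just call the inference API" usually comes down to a single HTTPS request. Here is a minimal sketch, assuming an OpenAI-compatible chat-completions endpoint for a hosted Llama deployment on Azure; the endpoint URL, environment variable names, and response shape are placeholders/assumptions, not the commenter's actual setup:

        import os
        import requests

        # Placeholder env vars for a hosted Llama deployment (assumed setup):
        #   AZURE_LLAMA_ENDPOINT - full URL of the chat-completions route
        #   AZURE_LLAMA_KEY      - API key for that deployment
        ENDPOINT = os.environ["AZURE_LLAMA_ENDPOINT"]
        API_KEY = os.environ["AZURE_LLAMA_KEY"]

        payload = {
            # OpenAI-style chat messages (assumed the endpoint is OpenAI-compatible)
            "messages": [
                {"role": "user", "content": "Give me one reason a hosted API can beat a local rig on cost."}
            ],
            "max_tokens": 256,
            "temperature": 0.7,
        }

        resp = requests.post(
            ENDPOINT,
            headers={
                "Authorization": f"Bearer {API_KEY}",
                "Content-Type": "application/json",
            },
            json=payload,
            timeout=60,
        )
        resp.raise_for_status()

        # Assumes an OpenAI-style response body: choices[0].message.content
        print(resp.json()["choices"][0]["message"]["content"])

    The cost trade-off is essentially this: you pay per token instead of paying up front for GPUs that may sit idle, which is why low or bursty usage often favors the API route.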