I already tried setting up fastchat-t5 on a DigitalOcean virtual server with 32 GB RAM and 4 vCPUs for $160/month, running CPU inference. The performance was horrible: about 5 seconds to the first token, then roughly 1 word per second.

Any ideas how to host a small LLM like fastchat-t5 economically?

  • Amgadoz@alien.topB
    1 year ago

    How about a T4 GPU, or something like a 3090 from RunPod? The 3090 costs around $0.50 per hour, which comes to roughly $350 per month, and it gives you 24 GB of VRAM, which should be plenty for fastchat-t5.
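
    The cost estimate above can be sketched as a quick back-of-the-envelope calculation (the $0.50/hour rate is the approximate RunPod 3090 price quoted in the comment, assuming the instance runs 24/7):

    ```python
    # Rough monthly cost of an on-demand GPU instance, assuming 24/7 uptime.
    hourly_rate = 0.50        # USD/hour, approximate RunPod 3090 price from the comment
    hours_per_month = 24 * 30 # 720 hours in a 30-day month
    monthly_cost = hourly_rate * hours_per_month
    print(f"${monthly_cost:.0f}/month")  # → $360/month
    ```

    At a 31-day month (744 hours) this rises to about $372, so the "around $350" figure holds only if the instance is stopped occasionally; pay-per-hour billing makes it much cheaper if you only run it part of the day.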