I’m working to perfect a dev tool that lets Python developers scale their code out to thousands of cloud resources with a single line of code.
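
To make that pitch concrete, the pattern looks roughly like the sketch below. I’m using Ray’s `@ray.remote` as a stand-in because it’s a well-known example of the same idea; it is not my tool’s actual API.

```python
# Sketch of the "one line to scale" pattern, using Ray's @ray.remote as a
# stand-in -- an analogy for the idea, not my tool's actual API.
import ray

ray.init()  # connects to a local or remote cluster

@ray.remote  # the "one line": this function can now run on any worker in the cluster
def run_task(x: int) -> int:
    return x * x

# Fan 1,000 tasks out across whatever resources the cluster has.
futures = [run_task.remote(i) for i in range(1_000)]
results = ray.get(futures)
```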

I’m looking for project ideas so I can build useful tutorials for running inference on and fine-tuning open-source LLMs.

A few weeks back I published a tutorial teaching people to massively parallelize inference with Mistral-7B. It delivered real value to a small group of people and helped me better understand the flaws in my tool.
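
For context, the core of that tutorial boils down to data-parallel generation: shard a list of prompts across workers, each holding its own copy of the model. Here’s a minimal local sketch with plain transformers (one process per GPU); the sharding scheme and generation settings are simplifications, not the tutorial’s exact code.

```python
# Minimal data-parallel inference sketch: one worker process per local GPU,
# each loading its own copy of Mistral-7B and handling a shard of the prompts.
# Simplified local analogue of the tutorial, not its exact code.
import multiprocessing as mp
from concurrent.futures import ProcessPoolExecutor

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "mistralai/Mistral-7B-Instruct-v0.2"  # public HF checkpoint (assumed)

def generate_shard(args):
    gpu_id, prompts = args
    device = f"cuda:{gpu_id}"
    tok = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16).to(device)
    completions = []
    for prompt in prompts:
        inputs = tok(prompt, return_tensors="pt").to(device)
        out = model.generate(**inputs, max_new_tokens=128)
        completions.append(tok.decode(out[0], skip_special_tokens=True))
    return completions

if __name__ == "__main__":
    prompts = [f"Write a one-line summary of topic {i}." for i in range(64)]
    n_gpus = torch.cuda.device_count()
    assert n_gpus > 0, "this sketch assumes at least one CUDA GPU"
    shards = [(g, prompts[g::n_gpus]) for g in range(n_gpus)]
    # "spawn" avoids CUDA re-initialization issues in forked workers.
    with ProcessPoolExecutor(max_workers=n_gpus, mp_context=mp.get_context("spawn")) as ex:
        results = [c for shard in ex.map(generate_shard, shards) for c in shard]
```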

Anyway, I want to open this up to the community before I decide which tutorials to prioritize. Please drop any project or tutorial ideas, and if you think someone’s idea is good, please upvote it so I know you’d find it valuable.

  • CocksuckerDynamo@alien.top · 10 months ago
    What is different or better about what you’re building compared to existing prominent solutions such as vLLM, TensorRT-LLM, etc.?

    It’s not clear to me exactly what the value proposition of what you’re offering is.