Hello,

With the new GPT-4 and its 128k context window, is there still an advantage to running a local LLM?

  • Cost Considerations: Is there a cost benefit to running a local LLM compared to OpenAI?

  • Specialized Training: Is it possible to train the model for specific tasks, similar to ‘Code Llama - Python’, but perhaps for areas like ‘Code Llama - Unreal Engine’?

I understand that for some applications, avoiding the content restrictions of OpenAI might be a plus. However, when it comes to using an LLM as a coding assistant, are there any specific advantages to running it locally?

  • Chaosdrifer@alien.top · 10 months ago

    Cost is really the main issue. You can fine-tune a local LLM, or you can fine-tune OpenAI's models as well. I wouldn't be surprised if someone is already building a custom GPT for helping with Unity or Unreal Engine projects.
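    The local route would look roughly like a LoRA fine-tune on top of Code Llama with the Hugging Face stack. Untested sketch; the base model ID is a real Hugging Face repo, but the dataset file and hyperparameters are just placeholders:

    ```python
    # Minimal LoRA fine-tuning sketch for a "Code Llama - Unreal Engine" style model.
    # Assumes the Hugging Face transformers/peft/datasets stack; "unreal_snippets.jsonl"
    # and the hyperparameters below are placeholders, not a recipe.
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    base = "codellama/CodeLlama-7b-hf"
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.eos_token

    model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
    model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                             target_modules=["q_proj", "v_proj"],
                                             task_type="CAUSAL_LM"))

    # One "text" column of UE C++/Blueprint examples, tokenized for causal LM training.
    data = load_dataset("json", data_files="unreal_snippets.jsonl")["train"]
    data = data.map(lambda x: tok(x["text"], truncation=True, max_length=1024),
                    remove_columns=data.column_names)

    Trainer(model=model,
            args=TrainingArguments(output_dir="codellama-unreal-lora",
                                   per_device_train_batch_size=1,
                                   gradient_accumulation_steps=8,
                                   num_train_epochs=1, learning_rate=2e-4,
                                   fp16=True, logging_steps=10),
            train_dataset=data,
            data_collator=DataCollatorForLanguageModeling(tok, mlm=False)).train()
    ```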

    For privacy, companies with money will use a private instance on Azure. It's roughly 2-3 times the cost, but your data is safe because you have a contract with MS to keep it safe and private, with large financial penalties if it isn't.
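    For what it's worth, pointing a coding assistant at a private Azure deployment instead of the public OpenAI endpoint is mostly a client/config change. Rough sketch with the openai Python SDK (v1+); the endpoint, key and deployment name are placeholders for your own Azure resources:

    ```python
    from openai import AzureOpenAI  # same SDK, different client class

    # Endpoint, key and deployment name are placeholders for your own Azure resources.
    client = AzureOpenAI(
        azure_endpoint="https://my-company.openai.azure.com",
        api_key="<azure-api-key>",
        api_version="2023-12-01-preview",
    )

    resp = client.chat.completions.create(
        model="my-gpt4-deployment",  # your deployment name, not the public model name
        messages=[{"role": "user",
                   "content": "Explain UObject lifetime in Unreal Engine."}],
    )
    print(resp.choices[0].message.content)
    ```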

    Also, running an LLM locally isn't zero cost, depending on electricity prices in your area. GPUs consume a LOT of power; the 4090 draws around 450 watts under load.
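    Back-of-the-envelope, assuming ~450 W under load and an example rate of $0.30/kWh (plug in your own numbers):

    ```python
    watts = 450           # RTX 4090 board power under load
    price_per_kwh = 0.30  # example electricity rate, USD
    hours_per_day = 8     # example usage

    kwh_per_day = watts / 1000 * hours_per_day
    print(f"~${kwh_per_day * price_per_kwh:.2f}/day, "
          f"~${kwh_per_day * price_per_kwh * 30:.2f}/month")
    # ~$1.08/day, ~$32.40/month at these numbers
    ```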

    • krazzmann@alien.top · 10 months ago

      Azure allows you to opt out of content logging for its OpenAI services; you need to apply for this option. This is what most larger companies, like my employer, are doing.