Not sure, but it seems they fine-tuned gpt-3.5-turbo-16k, which is faster than GPT-4, hence the claim of GPT-3.5 speed with a 16K context limit.

They’re dubiously naming it Phind V7. Also, they’ve previously ripped off WizardLM’s code and rebranded it to secure seed funding.

I doubt it’s based on CodeLlama 34B. Unless they trained on a specific dataset that makes the model hallucinate as if it’s GPT-3.5 Turbo.

    • donotdrugs@alien.topB · 11 months ago

      Yeah and it baffles me how many people, even in the tech community, take LLM output as hard facts.

    • Xhehab_@alien.topOPB · 11 months ago

      Yeah, but at the very least Phind should have cleaned the training dataset rows that mention “gpt-3.5-turbo” or “gpt-3”… lol
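
      That kind of cleaning is a simple filter pass. A minimal sketch (the row format, field name, and regex are my assumptions, not anything Phind has published):

      ```python
      import re

      # Hypothetical filter: drop training rows that mention OpenAI model names,
      # so a fine-tuned model doesn't learn to claim it is GPT-3.5.
      LEAK_PATTERN = re.compile(r"gpt-?3(\.5)?(-turbo)?|openai", re.IGNORECASE)

      def clean_rows(rows):
          """Keep only rows whose text doesn't mention the leaked identifiers."""
          return [row for row in rows if not LEAK_PATTERN.search(row["text"])]

      rows = [
          {"text": "As an AI developed by OpenAI..."},
          {"text": "Here is a Python quicksort implementation."},
          {"text": "I am GPT-3.5-Turbo, a large language model."},
      ]
      print(clean_rows(rows))  # only the quicksort row survives
      ```

      Real pipelines would likely do something fancier (substituting the model’s own name instead of deleting whole rows), but even this catches the obvious “I am GPT-3.5” leaks.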