Hi. So I am a bit new to NLP and ML as a whole and I am looking to create a text classification model. I have tried it with DeBERTa and the results are decent (about 70% accuracy), but I need more. Are generative models a better alternative, or should I stick to smaller models like BERT, or maybe even non-NN classifiers, and work on better dataset quality?
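(For context on the non-NN route mentioned above: a classic baseline is bag-of-words Naive Bayes, which needs no GPU and trains in milliseconds. A minimal stdlib-only sketch, with a made-up toy dataset and labels purely for illustration:)

```python
# Minimal bag-of-words Naive Bayes text classifier (stdlib only).
# Toy data and label names below are made up for illustration.
import math
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (text, label) pairs. Returns the model state."""
    label_counts = Counter()                 # class priors
    word_counts = defaultdict(Counter)       # per-class word frequencies
    vocab = set()
    for text, label in samples:
        label_counts[label] += 1
        for word in text.lower().split():
            word_counts[label][word] += 1
            vocab.add(word)
    return label_counts, word_counts, vocab

def predict(model, text):
    label_counts, word_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_lp = None, float("-inf")
    for label, count in label_counts.items():
        # log prior + log likelihood with add-one (Laplace) smoothing
        lp = math.log(count / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            lp += math.log((word_counts[label][word] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

samples = [
    ("great product love it", "pos"),
    ("excellent quality very happy", "pos"),
    ("terrible waste of money", "neg"),
    ("awful quality very disappointed", "neg"),
]
model = train(samples)
print(predict(model, "love the quality"))  # → pos
```

If a baseline like this already gets close to 70%, that is a hint the dataset (not the model) is the bottleneck.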

  • sshh12@alien.topB
    1 year ago

    +1, when in doubt, LLM it out.

    You could also ask the model to explain its answers, so that when it gets something wrong you can modify your prompts/examples to get better performance.
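    (A minimal sketch of what such a prompt might look like: the label set and template wording here are made-up placeholders, and the resulting string would be sent to whichever LLM API you use.)

    ```python
    # Sketch of a classification prompt that also requests an explanation,
    # so wrong answers can be inspected to refine the prompt or examples.
    # LABELS and the template wording are hypothetical; substitute your own.

    LABELS = ["billing", "technical", "other"]

    def build_prompt(text, labels=LABELS):
        label_list = ", ".join(labels)
        return (
            f"Classify the following text into exactly one of: {label_list}.\n"
            "Answer with the label on the first line, then a one-sentence "
            "explanation of your reasoning on the second line.\n\n"
            f"Text: {text}"
        )

    print(build_prompt("I was charged twice this month"))
    ```

    Parsing the second line of the response gives you the explanation to audit when the first line is wrong.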

    Potentially you wouldn’t want to do this if:

    • Your classification problem is very unusual/cannot be explained by a prompt
    • You want to be able to run this extremely fast or on a ton of data
    • You want to learn non-LLM deep learning/NLP (in which case I would suggest basically some form of fine-tuning BERT)