What is the best approach to achieve multimodality using an instruct fine-tuned model?

  • sshh12@alien.topB
    1 year ago

    The best way is still open to some research, but my understanding is that the current open-source SOTA is ShareGPT4V, which uses a high-quality dataset built with GPT-4V and, I believe, a LLaVA-like architecture. This works by essentially encoding the other modality as embeddings that the LLM can consume alongside its text embeddings (a minimal sketch of that projection idea is below).
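
    This is not the ShareGPT4V/LLaVA code itself, just a minimal PyTorch sketch of the LLaVA-1.5-style idea: a small MLP projects features from a frozen vision encoder into the LLM's token embedding space, and the projected "image tokens" are concatenated in front of the text embeddings. The encoder, dimensions, and variable names here are illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn

    class MultimodalProjector(nn.Module):
        """Two-layer MLP (LLaVA-1.5 style) mapping vision features into the LLM embedding space."""
        def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
            super().__init__()
            self.proj = nn.Sequential(
                nn.Linear(vision_dim, llm_dim),
                nn.GELU(),
                nn.Linear(llm_dim, llm_dim),
            )

        def forward(self, vision_features: torch.Tensor) -> torch.Tensor:
            # vision_features: (batch, num_patches, vision_dim) from a frozen CLIP-like encoder
            return self.proj(vision_features)  # (batch, num_patches, llm_dim)

    # Hypothetical usage: splice the projected "image tokens" in front of the text embeddings.
    # image_tokens  = projector(vision_encoder(pixel_values))   # (B, N, llm_dim)
    # text_embeds   = llm.get_input_embeddings()(input_ids)     # (B, T, llm_dim)
    # inputs_embeds = torch.cat([image_tokens, text_embeds], dim=1)
    # output        = llm(inputs_embeds=inputs_embeds)
    ```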

    If you are interested, I have a library for more easily training these on custom modalities: https://github.com/sshh12/multi_token (it uses basically the same idea as the LLaVA 1.5 paper).

  • DeliciousFriedPanda@alien.topB
    1 year ago

    The field is broad, with no easy answer and many nuances; you could fill countless postdoc and PhD positions with work here.

    Judging by the way you phrased your question, very generically and without essentially any detail, I’m going to take a wild guess and say that you’re either a beginner in ML or haven’t read/studied anything on the subject.

    I’d encourage you to start by reading recent and not-so-recent papers dealing with inherently multimodal tasks, like scene text recognition, VQA, and the like. The big problem in the field is that models will, generally speaking, overfit on whichever of the vision and language modalities is most information-dense for your task. In current SoTA methods, the best way to mitigate this seems to be fusing the two modalities via a gradual mechanism, for example the tanh-gated cross-attention of Flamingo (a rough sketch of that gating idea follows below).
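
    Not Flamingo’s actual implementation, just a minimal PyTorch sketch of the tanh-gating idea: a cross-attention block whose output is scaled by tanh of a learnable gate initialized at zero, so the pretrained LM is untouched at the start of training and the visual signal is blended in gradually. Dimensions, head counts, and names are illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn

    class GatedCrossAttentionBlock(nn.Module):
        """Flamingo-style gated cross-attention: text attends to vision features,
        and the result is scaled by tanh(gate), with the gate initialized to zero."""
        def __init__(self, dim: int = 4096, num_heads: int = 8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.ffw = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            # Gates start at 0 -> tanh(0) = 0 -> the block is an identity at initialization,
            # so the language modality dominates early and vision is fused in gradually.
            self.attn_gate = nn.Parameter(torch.zeros(1))
            self.ffw_gate = nn.Parameter(torch.zeros(1))

        def forward(self, text: torch.Tensor, vision: torch.Tensor) -> torch.Tensor:
            # text:   (batch, text_len, dim) hidden states from the (frozen) LM
            # vision: (batch, vision_len, dim) projected visual features
            attn_out, _ = self.attn(query=text, key=vision, value=vision)
            text = text + torch.tanh(self.attn_gate) * attn_out
            text = text + torch.tanh(self.ffw_gate) * self.ffw(text)
            return text
    ```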

    Happy reading!