I have an 8GB M1 MacBook Air and a 16GB MBP (that I haven’t turned in for repair) that I’d like to run an LLM on, to ask questions and get answers from the notes in my Obsidian vault (hundreds of markdown files). I’ve been lurking this subreddit, but I’m not sure whether I could run sub-7B LLMs in 1–4GB of RAM, or whether models that small would be too low quality.

  • artisticMink@alien.topB · 10 months ago

    Quick answer: No.

    Longer answer: It depends. Passing the whole vault as context won’t work; it’s far too much data, among other things. What you could do instead is use a model that builds a SQL query over a database of your notes based on the input, then either return the results directly or have a second model (a quantized 7B) interpret them.
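    For what it’s worth, here’s a minimal sketch of that two-step idea in Python, assuming llama-cpp-python and a notes table already indexed into SQLite. The model path, schema, and prompts are placeholders, not a tested setup:

    ```python
    # Minimal sketch: one model call builds SQL over an indexed notes table,
    # a second call interprets the retrieved rows. All names are illustrative.
    import sqlite3
    from llama_cpp import Llama

    # Hypothetical path -- any small quantized GGUF works in principle.
    llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048, verbose=False)

    def complete(prompt: str) -> str:
        # Single completion call; the stop sequence keeps output short.
        out = llm(prompt, max_tokens=256, stop=["\n\n"])
        return out["choices"][0]["text"].strip()

    # Assumes the vault was indexed into SQLite beforehand (title + body).
    db = sqlite3.connect("vault.db")
    db.execute("CREATE TABLE IF NOT EXISTS notes (title TEXT, body TEXT)")

    question = "What did I write about spaced repetition?"

    # Step 1: the model turns the question into a SQL query over the table.
    sql = complete(
        "Schema: notes(title TEXT, body TEXT)\n"
        f"Write a single SQLite SELECT statement answering: {question}\n"
        "SQL:"
    )

    # Step 2: run the query and have the model summarize the rows.
    # Real code should validate model-generated SQL before executing it.
    rows = db.execute(sql).fetchmany(5)
    answer = complete(f"Question: {question}\nRetrieved rows: {rows!r}\nAnswer:")
    print(answer)
    ```

    The same split would work with any local runner; the point is just that the small model only ever sees the question plus a handful of retrieved rows, never the whole vault.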

    But generally, I see the idea of the ‘AI Assistant’ come up here regularly, and the real question is whether you want to rely on an LLM that just ‘makes things up’ when accessing your notes. I guess that depends on how important the subject is.