Hi,

I was using my search engine to look for available Emacs integrations for the open (and local) https://gpt4all.io/ when I realized that I could not find a single one.

Is anybody already using GPT4All with Emacs who just hasn’t published their integration?

    • ahyatt@alien.top · 1 year ago

      I’ve now added this to the llm package, although I have to say it’s not nearly as complete as Ollama: in particular, it lacks embedding functionality and streaming.
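
      For reference, a minimal sketch of the setup (the constructor and keyword names here follow the llm README as I remember it; treat them as assumptions and check the docs for your installed version):

      ;; Sketch: an llm provider backed by the local GPT4All API server.
      ;; Assumes the desktop app's server mode is listening on port 4891.
      (require 'llm)
      (require 'llm-gpt4all)

      (defvar my-gpt4all-provider
        (make-llm-gpt4all :host "localhost"
                          :port 4891
                          :chat-model "gpt4all-j-v1.3-groovy"))

      ;; Synchronous chat call; no streaming, per the limitation above.
      (llm-chat my-gpt4all-provider
                (llm-make-simple-chat-prompt "Hello from Emacs!"))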

    • ahyatt@alien.top · 1 year ago

      Yes, the llm package does not have this, but it does have Ollama, which seems pretty similar; I’m curious what the differences are. If anyone thinks this is worth adding, it can be done, which would make it available to any package integrating with the llm package.
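
      To illustrate what a client package gets once a provider implements it, here is a hedged sketch of the embedding call through the existing Ollama provider (exact keyword names are assumptions; see the llm README):

      (require 'llm-ollama)

      ;; Sketch: an Ollama provider with an embedding model configured.
      (defvar my-ollama-provider
        (make-llm-ollama :chat-model "llama2"
                         :embedding-model "llama2"))

      ;; Returns a vector of floats; any package built on llm would get
      ;; the same call for GPT4All if its API ever exposes embeddings.
      (llm-embedding my-ollama-provider "text to embed")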

  • karthink@alien.top · 1 year ago

    I can add this to gptel quite easily, but I can’t find instructions on how to use it. Does it run a local HTTP server? Where can I find these details?

    • publicvoit@alien.top (OP) · 1 year ago

      Hi,

      I personally would have expected that the desktop app has to run in the background anyway. ;-)

      Any “gpt4all.el”-like mode would help me write my queries in Emacs and receive the output directly in Emacs (Babel/Org mode preferred, I suppose). Currently I do a lot of copy & paste for that purpose.

      • karthink@alien.top · 1 year ago

        In that case you can use it right now with gptel, which supports an Org interface for chat.

        Enable the server mode in the desktop app, and in Emacs, run

        ;; Point gptel at the GPT4All desktop app's local API server.
        ;; The API key is a dummy value; the local server ignores it.
        (setq-default gptel-model "gpt4all-j-v1.3-groovy"
                      gptel-host "http://localhost:4891/v1"
                      gptel-api-key "--")
        

        Then you can spawn a dedicated chat buffer with M-x gptel or chat from any buffer by selecting a region of text and running M-x gptel-send.
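
        If gptel can’t connect, here’s a quick sanity check (a sketch; it assumes GPT4All’s server mode exposes the OpenAI-compatible GET /v1/models endpoint on port 4891):

        (require 'url)

        ;; Sketch: ask the local GPT4All server which models it reports.
        ;; `url-http-end-of-headers' is set by url-retrieve in the
        ;; response buffer where this callback runs.
        (url-retrieve
         "http://localhost:4891/v1/models"
         (lambda (_status)
           (goto-char url-http-end-of-headers)
           (message "GPT4All replied: %s"
                    (buffer-substring (point) (point-max)))))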

          • karthink@alien.top · 1 year ago

            In the meantime I’ve added explicit support for GPT4All, so the above instructions may be incorrect by the time you get to them. The README should have updated instructions (if it mentions support for local LLMs at all).
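
            For anyone reading this later, the explicit setup now looks roughly like the following (a sketch based on the gptel README; the model file name is only an example, and newer gptel versions may expect symbols rather than strings):

            ;; Sketch: register GPT4All as a named gptel backend.
            (gptel-make-gpt4all "GPT4All"
              :protocol "http"
              :host "localhost:4891"
              :models '("mistral-7b-openorca.Q4_0.gguf"))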

          • nickanderson5308@alien.top · 1 year ago

            I have played with this a bit in the last few days.

            It’s nice and minimal, but I’m hitting out-of-memory issues. It seems the server loads whatever model gptel specifies, and I don’t have enough memory to run the model the GPT4All desktop app loads by default plus the one gptel requests.
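
            One possible workaround, sketched below, is to point gptel at the exact model the desktop app already has loaded, so the server isn’t asked to load a second one (this is an assumption about the server’s behavior, not something verified):

            ;; Sketch: reuse the model the GPT4All UI shows as loaded.
            ;; The name must match the desktop app's model name exactly.
            (setq-default gptel-model "gpt4all-j-v1.3-groovy")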

        • ahyatt@alien.top · 1 year ago

          It isn’t actually the same, though: GPT4All doesn’t support streaming. How are you getting around this?