I’m using Mistral OpenOrca with GPT4All, which claims to be private. I opted out of sharing my conversations for privacy reasons, but I don’t think this is actually honored. See my conversation in the attached picture. Any feedback is appreciated; I’d like to hear from other people.

  • damian6686@alien.top
    1 year ago

    I agree on testing with Wireshark, great suggestion! But how can you know it doesn’t know anything about its environment? This LLM is a 4 GB file, and a network scan only needs a few lines of code to return your entire system’s network configuration. How does it know how to automatically run, download, store, and install updates? Why are there updates in the first place? Any time you get something for free, chances are you’re giving away your data in return. Nothing is free.
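    For what it’s worth, the “few lines of code” point is true of any program you run, model frontend or not. A minimal sketch using only Python’s standard library (no model involved):

```python
# A few lines of stdlib Python are enough to read local network
# details -- the point being that any program you run could do this,
# so trust in the application matters more than the weights file.
import socket

hostname = socket.gethostname()  # this machine's host name
# Resolve loopback addresses; a malicious program could enumerate far more.
addresses = {info[4][0] for info in socket.getaddrinfo("localhost", None)}
print(hostname, addresses)
```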

    • ----Val----@alien.top
      1 year ago

      but how can you know it doesn’t know anything about its environment? This LLM is a 4 GB file, and a network scan only needs a few lines of code to return your entire system’s network configuration.

      Though HF models can contain code to be executed, this is usually heavily scrutinized by the community. Plus, not all models are equally flexible.
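      A concrete illustration of how a model file can carry code: older pickle-based checkpoints are deserialized with Python’s `pickle`, and unpickling can invoke arbitrary callables. A minimal sketch, using a harmless `eval` payload as a stand-in for anything malicious:

```python
# Sketch: why pickle-based model files can execute code on load.
# Unpickling calls whatever __reduce__ specifies -- here a harmless
# eval, but it could just as well be os.system.
import pickle

class Payload:
    def __reduce__(self):
        return (eval, ("40 + 2",))  # executed during pickle.loads

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # the payload runs here, during "model load"
print(result)  # 42
```

This is why safetensors and weights-only formats exist: they remove the deserializer’s ability to run code at all.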

      For example, GGUF files are essentially all weights, with no executable code. That said, it isn’t impossible that some parser exploit results in remote code execution, so the risk isn’t zero.
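      To illustrate that GGUF is plain data: its header is just a magic string plus a version and two counts, which you can read without executing anything. A sketch assuming the documented GGUF layout (ASCII magic `GGUF`, uint32 version, uint64 tensor and metadata-entry counts), demonstrated on a fabricated header rather than a real model:

```python
# Sketch: parse a GGUF header -- it's pure data, not code.
import struct
import tempfile

def read_gguf_header(path):
    with open(path, "rb") as f:
        if f.read(4) != b"GGUF":          # 4-byte ASCII magic
            raise ValueError("not a GGUF file")
        (version,) = struct.unpack("<I", f.read(4))        # uint32
        tensor_count, kv_count = struct.unpack("<QQ", f.read(16))  # 2x uint64
    return version, tensor_count, kv_count

# Fabricated 24-byte header standing in for a real model file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"GGUF" + struct.pack("<I", 3) + struct.pack("<QQ", 2, 5))
    path = f.name

print(read_gguf_header(path))  # (3, 2, 5)
```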

      It’s also important to consider that the people releasing these models, be it the original authors or quantizers like TheBloke, risk their grants and research funding if they act maliciously.

      How does it know how to automatically run and download updates, store them and install?

      That’s up to GPT4All, which is essentially just a wrapper around llama.cpp. You’re conflating a local LLM with the frontend used to interact with it: the model file itself doesn’t download or install anything; the application around it does.