There was a post asking people their opinions about Edge, and many people seemed to like the idea of Edge and seemed to be OK having it on Linux (Blasphemy).

Also, can we all agree on how fast Edge went from a joke to a threat? I mean, it’s good now, alright! It was good back then, but it’s better now. Money, man!!! Money! Personally I hate MS, but I can’t ignore the fact that there is no alternative on Linux to Bing GPT and the many features Bing offers.

If there were an open-source ChatGPT, what would it look like? Who would bear the costs? How would we solve the server problem? I.e., it would take a ton of server space and bandwidth. Just wondering.

I am pretty sure MS products will improve greatly due to their integration with GPT, so what do us poor folks on Linux do?

Just want to know the answers; I don’t want to discuss (aka can’t comment, I need to study), I’m just curious!

  • lloram239@feddit.de · 27 points · 1 year ago

    what do us poor folks on Linux do?

    Run llama.cpp and any of the models listed here; that stuff has been around for months.

    TheBloke has a lot of models converted to GGUF format, which is what you need for llama.cpp.

    Quick Start Guide (requires Nix, otherwise compile llama.cpp manually):

    $ GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/TheBloke/guanaco-7B-GGUF
    $ cd guanaco-7B-GGUF
    $ git lfs pull --include=Guanaco-7B.Q4_0.gguf
    $ nix run github:ggerganov/llama.cpp -- -m Guanaco-7B.Q4_0.gguf --instruct
    > Write haiku about a penguin
     A penguin walks on ice,
     Takes a plunge in the sea,
     Hides his feet from me!
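
    If you’d rather not touch Nix, the manual route is just a clone and a make. A rough sketch (CPU-only, and assuming the model folder from above sits next to the llama.cpp checkout):

    $ git clone https://github.com/ggerganov/llama.cpp
    $ cd llama.cpp
    $ make
    $ ./main -m ../guanaco-7B-GGUF/Guanaco-7B.Q4_0.gguf --instruct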
    
    • 257m@lemmy.ml · 1 point · 1 year ago

      I ran it on my PC with a GTX 1070, compiled with the CUDA compile flag and with CUDA enabled, but it ran really slowly. How do you get it to run fast?

      • lloram239@feddit.de · 1 point · 1 year ago

        To make use of GPU acceleration you have to compile it with the proper support (CUDA, OpenCL, ROCm) and add --gpu-layers 16 (or a larger number, however much your VRAM can handle). If that’s not enough, then the GPU/CPU is probably too slow.

        You can try a smaller model; those run faster but give worse results.
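
        For example, a CUDA build with the Makefile flags looks roughly like this (just a sketch, since the flags move around between releases, and -ngl is the short spelling of --n-gpu-layers; paths assume the checkout layout from the quick start above):

        $ cd llama.cpp
        $ make clean && make LLAMA_CUBLAS=1    # LLAMA_CLBLAST=1 for OpenCL, LLAMA_HIPBLAS=1 for ROCm
        $ ./main -m ../guanaco-7B-GGUF/Guanaco-7B.Q4_0.gguf --instruct -ngl 16
        # raise 16 until your VRAM is full; if it still crawls, the card itself is the bottleneck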

  • fubo@lemmy.world · 24 points · 1 year ago

    KoboldAI has a “KoboldGPT” mode, which is for answering questions. You can try it at https://lite.koboldai.net/ or run it on your own.

    Here’s an example conversation I just had with it (emoji icons added):


    🤖 Hello, I am KoboldGPT, your personal AI assistant. What would you like to know?

    🧑 What is the name of the original version of the game Monopoly?

    🤖 The original version of the game Monopoly is called “The Landlord’s Game.” It was created by Elizabeth Magie in the early 20th century. However, the most well-known version of Monopoly today is based on a variant called “Monopoly” that was designed by Charles Darrow in 1935.

    🧑 Did “The Landlord’s Game” have any purpose besides being a board game?

    🤖 Yes, “The Landlord’s Game” had a specific purpose beyond being a board game. Elizabeth Magie created it as a way to illustrate the negative aspects of concentrating land in private monopolies. She believed that it could be used as a tool for teaching economic concepts such as rent, taxes, and the distribution of wealth.

    🧑 What economic ideology was associated with this game?

    🤖 Elizabeth Magie was influenced by Henry George’s philosophy of Georgism. Georgists believe that land should belong to all people equally, and that taxation should be based on the value of land rather than on income or production.
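
    If you want to self-host instead of using the public instance, one light option is koboldcpp (the llama.cpp-based sibling project, which bundles the same web UI). A rough sketch, assuming a GGUF model like the one mentioned elsewhere in this thread and whatever flags the current version accepts:

    $ git clone https://github.com/LostRuins/koboldcpp
    $ cd koboldcpp
    $ make
    $ python koboldcpp.py --model ../guanaco-7B-GGUF/Guanaco-7B.Q4_0.gguf
    # then open http://localhost:5001 and pick the KoboldGPT scenario in the UI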

  • juliebean@lemm.ee · 17 points · 1 year ago

    i don’t know a single thing about whatever these things you’re talking about are, but i just want to say, you’re the first person i’ve ever seen suggest that edge isn’t a joke.

  • db0@lemmy.dbzer0.com · 16 points · 1 year ago

    As others said, you can host KoboldAI on your own, and if you don’t have a powerful GPU, anyone can use powerful LLM models via the AI Horde.
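
    If you’d rather script against the horde than use the web UI, the API is plain REST. Roughly like this with curl and the anonymous key (just a sketch; double-check the exact endpoints and payload against the API docs at https://aihorde.net):

    $ curl -s -X POST https://aihorde.net/api/v2/generate/text/async \
        -H 'apikey: 0000000000' -H 'Content-Type: application/json' \
        -d '{"prompt": "Write a haiku about a penguin", "params": {"max_length": 80}}'
    # returns a request id; poll until it finishes
    $ curl -s https://aihorde.net/api/v2/generate/text/status/<request-id>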

  • russjr08@outpost.zeuslink.net · 7 points · 1 year ago

    I think it’s far more likely that easier solutions for hosting your own LLM/ChatGPT/etc. will be pushed, rather than someone (or a group of someones) offering and maintaining one; as you mentioned, that takes a lot of resources. Think about the issues that various Fediverse software has had in keeping up with the load sometimes, and LLMs use way more resources than most Fediverse applications, especially if you want them to be as snappy as the current ones already out there.

    • PlexSheep@feddit.de · 2 points · 1 year ago

      HuggingChat is pretty good, and the model is open source. No phone number is needed for an account, and it’s not run by a company whose very name is hypocritical.

  • pezhore@lemmy.ml · 4 points (3 down) · 1 year ago

    At work today I heard the term “federated machine learning”, where individual corporations keep their proprietary training data but make the models shareable. I wonder if something similar could be done with ActivityPub?

    (Feel free to steal this idea if it’s worth stealing)

    • KRAW@linux.community · 5 points · 1 year ago

      This doesn’t work when an important part of the process is making sure your data is actually good. If the data is proprietary, there is no way to make sure it is usable.

  • Nicbudd@beehaw.org · 4 points (4 down) · 1 year ago

    I installed Edge on Linux because it’s fun watching Linux people get angry about it.