Mateys! We have plundered the shores of TV shows and movies while these corporations flounder in stopping us from seeding and spreading their files, without regard for the flag of copyright. We have long plundered the shores of gaming and broken the DRM that has been plaguing modern games, making them accessible in countries where a single game can cost a week or even a month of wages (I was once in this situation myself, so I am grateful to the pirating community for letting me enjoy the golden era of games back in 2012-2015).

But there, upon the horizon, lies a larger plunder. A kraken who guards a lair of untouched gold and emeralds, ready for the taking.

Closed-source AI models.

These corporations have stolen what was once ours, our own data, and baked it into their AI models so that only they can profit from it. These corporations raze the internet with their spiders and their bots, gathering every morsel of data from us to feed their shiny new toys. We might not be able to stop them from stealing our data, but we have proven ourselves adept at copying things and leaking software, and that is exactly what we need to do here. AI is already too dangerous and too powerful for a select few corporations to control.

As long as AI is in the hands of corporations rather than people, it will serve their goals, not ours. This needs to change, so here is what I propose for our next voyage.

  • MalReynolds@slrpnk.net · 1 year ago

    Akshually, while training models requires (at the moment) massive parallelization, and consequently stacks of A100s, inference can be distributed pretty well (see Petals, for example). A pirate ‘ChatGPT’ network of people sharing consumer graphics cards could indeed work if the weights were sourced. It bears thinking about. It really does.
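
    To make that concrete, here is a minimal sketch using the real Petals library, which shards a big model’s layers across volunteers’ GPUs; the model name and prompt are placeholders for whatever the swarm happens to be serving:

    ```python
    # Sketch: distributed inference over a public Petals swarm.
    # Assumes the petals and transformers packages are installed and
    # that volunteers are currently serving the chosen model.
    from transformers import AutoTokenizer
    from petals import AutoDistributedModelForCausalLM

    model_name = "petals-team/StableBeluga2"  # placeholder: a swarm-hosted model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

    # Each forward pass is routed through other people's consumer GPUs.
    inputs = tokenizer("To sail the high seas, you first", return_tensors="pt")["input_ids"]
    outputs = model.generate(inputs, max_new_tokens=30)
    print(tokenizer.decode(outputs[0]))
    ```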

    • wolfshadowheart@kbin.social · 1 year ago

      You definitely can train models locally, I am doing so myself on a 3080, and we wouldn’t see as many public ones shared online if you couldn’t! But in terms of speed you’re definitely right, it’s a slow process for us.

      • MalReynolds@slrpnk.net · 1 year ago

        I was thinking more of training the base models, LLaMA(2) and, more topically, GPT-4 etc. You’re doing LoRA, or augmenting with a local corpus of documents, no?
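
        For anyone following along, LoRA freezes the base weights and trains only small low-rank adapter matrices, which is why it fits on a consumer card while full pretraining doesn’t. A rough sketch with Hugging Face’s PEFT library; the base model and hyperparameters here are illustrative, not a recipe:

        ```python
        # Sketch: LoRA fine-tuning setup with Hugging Face PEFT.
        # Assumes transformers and peft are installed; the base model is a placeholder.
        import torch
        from transformers import AutoModelForCausalLM
        from peft import LoraConfig, get_peft_model

        model = AutoModelForCausalLM.from_pretrained(
            "meta-llama/Llama-2-7b-hf",       # placeholder base model
            torch_dtype=torch.float16,
        )

        lora = LoraConfig(
            r=8,                                  # rank of the adapter matrices
            lora_alpha=16,                        # scaling factor for the update
            target_modules=["q_proj", "v_proj"],  # attention projections to adapt
            lora_dropout=0.05,
            task_type="CAUSAL_LM",
        )
        model = get_peft_model(model, lora)
        model.print_trainable_parameters()  # typically well under 1% of all weights
        ```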

        • wolfshadowheart@kbin.social · 1 year ago

          Ah yeah, my mistake, I’m always mixing up language and image based AI models. Training text-based models is much less feasible locally lol.

          There’s no model for my art, so I’m training a checkpoint model, using xformers to cut down the VRAM requirement, and from there I’ll be able to speed up variants of my process using LoRAs. But that won’t be for some time; I want a good model first.
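
          In case it helps anyone, the xformers trick is one line in the diffusers library; a rough sketch, with the checkpoint and prompt as placeholders:

          ```python
          # Sketch: memory-efficient attention via xformers in diffusers.
          # Assumes diffusers and xformers are installed and a CUDA GPU is available.
          import torch
          from diffusers import StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
              torch_dtype=torch.float16,
          ).to("cuda")

          # Swap standard attention for xformers' memory-efficient kernels,
          # which is what lets generation (and training) fit in consumer VRAM.
          pipe.enable_xformers_memory_efficient_attention()

          image = pipe("a kraken guarding a hoard of gold and emeralds").images[0]
          image.save("plunder.png")
          ```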