Greg Rutkowski, a digital artist known for his epic fantasy style, opposes AI art, but his name and style have frequently been used by AI art generators without his consent. In response, Stable Diffusion removed his work from its training dataset in version 2.0. However, the community has now created a tool to emulate Rutkowski’s style against his wishes using a LoRA model. While some argue this is unethical, others justify it since Rutkowski’s art has already been widely used in Stable Diffusion 1.5. The debate highlights the blurry line between innovation and infringement in the emerging field of AI art.

  • doeknius_gloek@feddit.de

    While some argue this is unethical, others justify it since Rutkowski’s art has already been widely used in Stable Diffusion 1.5.

    What kind of argument is that supposed to be? We’ve stolen his art before so it’s fine? Dickheads. This whole AI thing is already sketchy enough, at least respect the artists that explicitly want their art to be excluded.

      • grue@lemmy.ml

        That’s true, but only in the sense that theft and copyright infringement are fundamentally different things.

Generating stuff from ML training datasets that included works without permissive licenses is copyright infringement though, just as much as simply copying and pasting parts of those works in would be. The legal definition of a derivative work doesn’t care about the technological details.

        (For me, the most important consequence of this sort of argument is that everything produced by Github Copilot must be GPL.)

        • Rikudou_Sage@lemmings.world

That’s incorrect in my opinion. AI learns patterns from its training data. So do humans, by the way. It’s not copy-pasting parts of images or code.

          • grue@lemmy.ml

            By the same token, a human can easily be deemed to have infringed copyright even without cutting and pasting, if the result is excessively inspired by some other existing work.

          • Samus Crankpork@beehaw.org

AI doesn’t “learn” anything, it’s not even intelligent. If you show a human artwork of a person they’ll be able to recognize that they’re looking at a human, how their limbs and expression work, what they’re wearing, the materials, how gravity should affect it all, etc. AI doesn’t and can’t know any of that, it just predicts how things should look based on images that have been put in its database. It’s a fancy Xerox.

            • Rikudou_Sage@lemmings.world

Why do people who have no idea how something works feel the urge to comment on how it works? It’s not just AI, it’s pretty much everything.

AI does learn, that’s the whole shtick and that’s why it’s so good at stuff computers used to suck at. AI is pretty much just a buzzword; the correct abbreviation is ML, which stands for Machine Learning - it’s even in the name.

AI also recognizes when it’s looking at a human! It can also recognize what they’re wearing and the material. AI is also better at many, many things than humans are. It also sucks compared to humans at many other things.

              No images are in its database, you fancy Xerox.
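To make the “no images in its database” point concrete, here is a minimal sketch. This is a toy single-parameter model of my own, with made-up data, and has nothing to do with Stable Diffusion’s actual architecture; it only illustrates that after training, what survives is a learned parameter, not a stored copy of the training examples.

```python
# Toy illustration, NOT Stable Diffusion: the trained "model" here is a single
# learned parameter, not a stored copy of its training examples.
# We fit y ≈ w * x by gradient descent on a handful of made-up points.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # hypothetical training set

w = 0.0    # the model's only parameter
lr = 0.05  # learning rate
for _ in range(200):
    # mean-squared-error gradient over the training set
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

del data  # the training examples are gone; only the learned pattern remains
print(round(w, 3))  # w ≈ 2.0, the pattern "y is twice x"
```

Whether that distinction settles the ethical question is exactly what this thread is arguing about; the sketch only shows what “learned parameters” means mechanically.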

              • Samus Crankpork@beehaw.org

                And I wish that people who didn’t understand the need for the human element in creative endeavours would focus their energy on automating things that should be automated, like busywork, and dangerous jobs.

                If the prediction model actually “learned” anything, they wouldn’t have needed to add the artist’s work back after removing it. They had to, because it doesn’t learn anything, it copies the data it’s been fed.

                • Rikudou_Sage@lemmings.world

                  Just because you repeat the same thing over and over it doesn’t become truth. You should be the one to learn, before you talk. This conversation is over for me, I’m not paid to convince people who behave like children of how things they’re scared of work.

        • Otome-chan@kbin.social

          It’s actually not copyright infringement at all.

          Edit: and even if it was, copyright infringement is a moral right, it’s a good thing. copyright is theft.

          • grue@lemmy.ml

            Edit: …copyright infringement is a moral right, it’s a good thing. copyright is theft.

            Except when it’s being used to enforce copyleft.

      • Samus Crankpork@beehaw.org

        Aside from all the artists whose work was fed into the AI learning models without their permission. That art has been stolen, and is still being stolen. In this case very explicitly, because they outright removed his work, and then put it back when nobody was looking.

        • I_Has_A_Hat@lemmy.ml

Let me give you a hypothetical that’s close to reality. Say an artist gets very popular, but doesn’t want their art used to teach AI. Let’s even say there’s legislation that prevents all this artist’s work from being used in AI.

          Now what if someone else hires a bunch of cheap human artists to produce works in a style similar to the original artist, and then uses those works to feed the AI model? Would that still be stolen art? And if so, why? And if not, what is this extra degree of separation changing? The original artist is still not getting paid and the AI is still producing works based on their style.

          • Samus Crankpork@beehaw.org

            Comic book artists get in shit for tracing other peoples’ work all the time. Look up Greg Land. It’s shitty regardless of whether it’s a person doing it directly, or if someone built software to do it for them.

          • Samus Crankpork@beehaw.org

            So you hire people to trace the original art, that’s still copying it, and nobody is learning anything. It’s copying.

          • wizardbeard@lemmy.dbzer0.com

            Fine, you win the semantic argument about the use of the term “stealing”. Despite arguments about word choice, this is still a massively disrespectful and malicious action against the artist.

          • CallumWells@lemmy.ml

Strictly speaking it wouldn’t exactly be stealing, but I would still consider it about equal to it, especially with regard to economic benefits. It may not be producing exact copies (which strictly speaking isn’t stealing either, but is violating copyright) or actually stealing, but it’s exploiting a style that most people would assume means that specific artist made it, and thus depriving that artist of the benefit of people wanting art from that artist/in that style.

            Now, I’m not conflicted about people who have made millions off their art having people make imitations or copies, those people live more than comfortably enough. But in your example there are still other human artists benefiting, which is not the case for computationally generated works. It’s great for me to be able to have computers create art for a DnD campaign or something, but I still recognize that it’s making it harder for artists to earn a living from their skills. And to a certain degree it makes it so people who never would have had any such art now can. It’s in many ways like piracy with the same ethical framing. And as with piracy it may be that people that use AI to make them art become greater “consumers” of art made by humans as well, paying it forward. But it may also not work exactly that way.

            • Otome-chan@kbin.social

              People aren’t allowed to produce similar styles to other humans? So do you support disney preventing anyone from making cartoons?

              • CallumWells@lemmy.ml

                Now you’re making a strawman. Other humans that are actually making art generally don’t fully copy a specific style, they draw inspiration from different sources and that amalgamation is their style.

                Your comment reads as bad-faith to me. If it wasn’t meant as such you’re free to explain your stance properly instead of making strawman arguments.

    • FaceDeer@kbin.social

      His art was not “stolen.” That’s not an accurate word to describe this process with.

      It’s not so much that “it was done before so it’s fine now” as “it’s a well-understood part of many peoples’ workflows” that can be used to justify it. As well as the view that there was nothing wrong with doing it the first time, so what’s wrong with doing it a second time?

      • Pulse@dormi.zone

        Yes, it was.

One human artist can, over a lifetime, learn from a few artists to inform their style.

These AI setups are taking ALL the art from ALL the artists and using it as part of a for-profit business.

There is no ethical stance for letting billion-dollar tech firms hoover up all the art ever created to try and remix it for profit.

        • FaceDeer@kbin.social

          No, it wasn’t. Theft is a well-defined word. When you steal something you take it away from them so that they don’t have it any more.

          It wasn’t even a case of copyright violation, because no copies of any of Rutkowski’s art were made. The model does not contain a copy of any of the training data (with an asterisk for the case of overfitting, which is very rare and which trainers do their best to avoid). The art it produces in Rutkowski’s style is also not a copyright violation because you can’t copyright a style.

          There is no ethical stance for letting billion dollar tech firms hoover up all the art ever created to the try and remix it for profit.

          So how about the open-source models? Or in this specific instance, the guy who made a LoRA for mimicking Rutkowski’s style, since he did it free of charge and released it for anyone to use?
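For readers unfamiliar with the term: a LoRA like the one mentioned here is a small low-rank add-on trained on top of a frozen base model. A rough sketch with made-up sizes follows; real matrices are vastly larger, and none of these numbers come from the actual Rutkowski LoRA.

```python
# Rough sketch of the LoRA idea with made-up sizes, not the real model:
# a LoRA leaves the big frozen weight matrix W untouched and learns two small
# matrices B (d x r) and A (r x d), applying W_eff = W + B @ A at run time.

def matmul(X, Y):
    """Plain-Python matrix multiply, just for the sketch."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d, r = 4, 1  # hypothetical layer width and LoRA rank

W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen base
B = [[0.1] for _ in range(d)]       # d x r, trained for the new style
A = [[0.2, 0.0, 0.0, 0.0]]          # r x d, trained for the new style

BA = matmul(B, A)
W_eff = [[W[i][j] + BA[i][j] for j in range(d)] for i in range(d)]

# The downloadable "style" is only B and A: 2*d*r numbers versus d*d for W.
print(2 * d * r, d * d)  # 8 versus 16 here; in a real diffusion model the gap is huge
```

The practical upshot is that a style LoRA is a small file distributed separately from the base model, which is why removing an artist from the base dataset didn’t stop the community from recreating the style.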

          • Pulse@dormi.zone

            Yes copies were made. The files were downloaded, one way or another (even as a hash, or whatever digital asset they claim to translate them into) then fed to their machines.

            If I go into a Ford plant, take pictures of their equipment, then use those to make my own machines, it’s still IP theft, even if I didn’t walk out with the machine.

            Make all the excuses you want, you’re supporting the theft of other people’s life’s work then trying to claim it’s ethical.

            • FaceDeer@kbin.social

              Yes copies were made. The files were downloaded, one way or another (even as a hash, or whatever digital asset they claim to translate them into) then fed to their machines.

              They were put on the Internet for that very purpose. When you visit a website and view an image there a copy of it is made in your computer’s memory. If that’s a copyright violation then everyone’s equally boned. When you click this link you’re doing exactly the same thing.

              • Pulse@dormi.zone

                By that logic I can sell anything I download from the web while also claiming credit for it, right?

                Downloading to view != downloading to fuel my business.

                • FaceDeer@kbin.social

                  No, and that’s such a ridiculous leap of logic that I can’t come up with anything else to say except no. Just no. What gave you that idea?

          • zeus ⁧ ⁧ ∽↯∼@lemm.ee

            i’m not making a moral comment on anything, including piracy. i’m saying “but it’s part of my established workflow” is not an excuse for something morally wrong.

            only click here if you understand analogy and hyperbole

            if i say “i can’t write without kicking a few babies first”, it’s not an excuse to keep kicking babies. i just have to stop writing, or maybe find another workflow

            • FaceDeer@kbin.social

              The difference is that kicking babies is illegal whereas training and running an AI is not. Kind of a big difference.

                • FaceDeer@kbin.social

                  You’re using an analogy as the basis for an argument. That’s not what analogies are for. Analogies are useful explanatory tools, but only within a limited domain. Kicking a baby is not the same as creating an artwork, so there are areas in which they don’t map to each other.

                  You can’t dodge flaws in your argument by adding a “don’t respond unless you agree with me” clause on your comment.

    • ParsnipWitch@feddit.de

      We will probably all have to get used to this soon because I can see the same happening to authors, journalists and designers. Perhaps soon programmers, lawyers and all kinds of other people as well.

It’s interesting how people on Lemmy pretend to be all against big corporations and capitalism and then happily indulge in the process of making artists jobless because “Muh technology cool!”. I don’t know the English word to describe this situation. In German I would say “Tja…”

    • FaceDeer@kbin.social

      Just as quickly as people disregard the human art enjoyer, who now has access to a powerful tool to create art undreamed of a year ago.

      I have found over the years that forums that claim to be about various forms of art are almost always really about the artists that make that art, and have little to no regard for the people who are there just for the art itself. The AI art thing is just the latest and most prominent way of revealing this.

    • teichflamme@lemm.ee

      Nothing was stolen.

      Drawing inspiration from someone else by looking at their work has been around for centuries.

      Imagine if the Renaissance couldn’t happen because artists didn’t want their style stolen.

    • Mossy Feathers (She/They)@pawb.social

      Pretty much. There are ways of using it that most artists would be okay with. Most of the people using it flat out refuse to use it like that though.

      Edit: To expand on this:

      Most artists would be okay with AI art being used as reference material, inspiration, assisting with fleshing out concepts (though you should use concept artists for that in a big production), rapid prototyping and whatnot. Most only care that the final product is at least mostly human-made.

      Artists generally want you to actually put effort into what you’re making because, at the end of the day, typing a prompt into stable diffusion has more in common with receiving a free commission from an artist than it has with actually being an artist. If you’re going to claim that something AI had a hand in as being your art, then you need to have done the majority of the work on it yourself.

      The most frustrating thing to me, however, is that there are places in art that AI could participate in which would send artists over the moon, but it’s not flashy so no one seems to be working on making AI in those areas.

      Most of what I’m personally familiar with has to do with 3d modeling, and in that discipline, people would go nuts if you released an AI tool that could do the UV work for you. Messing with UVs can be very tedious and annoying, to the point where most artists will just use a tool using conventional algorithms to auto-unwrap and pack UVs, and then call it a day, even if they’re not great.

      Another area is in rigging and weight painting. In order to animate a model, you have to rig it to a skeleton (unless you’re a masochist or trying to make a game accurate to late 90s-early 00s animation), paint the bone weights (which bones affect which polygons, and by how much), add constraints, etc. Most 3d modelers would leap at the prospect of having high-quality rigging and UVs done for them at the touch of a button. However, again, because it’s not flashy to the general public, no one’s put any effort into making an AI that can do that (afaik at least).
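To give a sense of what automated weight painting might even mean, here is a deliberately naive sketch: assign each vertex per-bone weights by inverse distance and normalize. This is a toy heuristic of my own with made-up coordinates, not what any real auto-rigging tool (or any shipped ML system) actually does, but it is exactly the kind of tedious assignment being described.

```python
import math

def auto_weights(vertex, bones):
    """Toy automatic weight painting: per-bone weights from inverse distance,
    normalized so each vertex's weights sum to 1 (a naive stand-in, not how
    production heat-map weighting works)."""
    dists = [math.dist(vertex, b) for b in bones]
    inv = [1.0 / max(d, 1e-6) for d in dists]  # nearer bones influence more
    total = sum(inv)
    return [x / total for x in inv]

# Hypothetical rig: two bone heads on a line, one vertex between them.
bones = [(0.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
weights = auto_weights((0.0, 0.5, 0.0), bones)

print([round(x, 2) for x in weights])  # [0.75, 0.25]: the nearer bone dominates
```

A real tool would have to handle joints, falloff, and mesh topology far more carefully, which is precisely why artists would welcome a learned solution that gets it right.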

Finally, even if you do use an AI in ways that most artists would accept as valid, you’ll still have to prove it because there are so many people who put a prompt into stable diffusion, do some minor edits to fix hands (in older versions), and then try to pass it off as their own work.

      • DekkerNSFW@kbin.social

Sadly, AI isn’t as good with sparse data like vertices and bones, so most attempts to use AI on 3D stuff are via NeRFs, which are closer to a “photo” you can walk around in than to an actual 3D scene.
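To unpack the “photo you can walk around in” description: a NeRF learns a function from 3D position to color and density, and renders an image by sampling that function along each camera ray and compositing the samples. A toy sketch of the idea follows; the “field” below is a hard-coded stand-in, not a trained network.

```python
import math

def toy_field(p):
    """Hard-coded stand-in for the learned network: a fuzzy ball at the origin."""
    density = max(0.0, 1.0 - math.dist(p, (0.0, 0.0, 0.0)))  # denser near center
    color = 0.8                                              # constant grey
    return color, density

def render_ray(origin, direction, steps=64, t_far=4.0):
    """Simplified front-to-back volume rendering along one camera ray."""
    dt = t_far / steps
    transmittance, out = 1.0, 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        p = tuple(o + t * d for o, d in zip(origin, direction))
        color, density = toy_field(p)
        alpha = 1.0 - math.exp(-density * dt)  # opacity of this sample
        out += transmittance * alpha * color
        transmittance *= 1.0 - alpha
    return out

# A ray through the ball accumulates color; a ray that misses it stays at 0.0.
print(render_ray((0.0, 0.0, -2.0), (0.0, 0.0, 1.0)) > 0.0)
print(render_ray((0.0, 3.0, -2.0), (0.0, 0.0, 1.0)))
```

Nothing in this representation is a mesh with vertices and bones, which is why NeRF output is hard to drop into a normal 3D art pipeline.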

    • kboy101222@lemm.ee

      Welcome to the wonderful world of the silicon valley tech era! Everything must be profitable at all costs! Everything must steal every tiny fact about you! Everything must include ! Everything must go through enshittification!

  • CapedStanker@beehaw.org

    Here’s my argument: tough titties. Everything Greg Rutkowski has ever drawn or made has been inspired by other things he has seen and the experiences of his life, and this applies to all of us. Indeed, one cannot usually have experiences without the participation of others. Everyone wants to think they are special, and of course we are to someone, but to everyone no one is special. Since all of our work is based upon the work of everyone who came before us, then all of our work belongs to everyone. So tough fucking titties, welcome to the world of computer science, control c and control v is heavily encouraged.

    In that Beatles documentary, Paul McCartney said he thought that once you uttered the words into the microphone, it belonged to everyone. Little did he know how right he actually was.

    You think there is a line between innovation and infringement? Wrong, They are the same thing.

    And for the record, I’m fine with anyone stealing my art. They can even sell it as their own. Attribution is for the vain.

    • ParsnipWitch@feddit.de

      I think people forget the reality when they take their supposedly brave and oh so altruistic stance of “there should be no copyright”.

      When people already know they won’t even have a small chance of getting paid for the art they create, we will run out of artists.

      Because most can not afford to learn and practice that craft without getting any form of payment. It will become a very rare hobby of a few decadent rich people who can afford to learn something like illustration in their free time.

    • smart_boy@beehaw.org

      If a company stole your art and copyrighted it such that it no longer belonged to everyone, in the same way that a Beatles record cannot be freely and openly shared, would you be fine with that?

    • meseek #2982@lemmy.ca

A sad fact but undeniable truth. I work in the industry. It’s standard for us to do mood boards. I have a love-hate relationship with them because they can be helpful for honing the design to a client’s liking and getting your bearings. But the fact is, it’s essentially what AI is doing by “borrowing” existing art as a reference. It’s the exact same thing. And that’s why I hate doing it. Because I don’t want to take someone’s button or background pattern.

Regardless of how I feel, I still can’t see how AI counts as “stealing” when industry-accepted practices that do the exact same thing don’t.

  • AzureDusk10@kbin.social

    The real issue here is the transfer of power away from the artist. This artist has presumably spent years and years perfecting his craft. Those efforts are now being used to line someone else’s pockets, in return for no compensation and a diminishment in the financial value of his work, and, by the sounds of it, little say in the matter either. That to me seems very unethical.

    • millie@beehaw.org

      Personally, as an artist who spends the vast majority of their time on private projects that aren’t paid, I feel like it’s put power in my hands. It’s best at sprucing up existing work and saving huge amounts of time detailing. Because of stable diffusion I’ll be able to add those nice little touches and flashy bits to my work that a large corporation with no real vision has at their disposal.

      To me it makes it much easier for smaller artists to compete, leveling the playing field a bit between those with massive resources and those with modest resources. That can only be a good thing in the long run.

      But I also feel like copyright more often than not rewards the greedy and stifles the creative.

    • moon_matter@kbin.social

But that’s sort of the nature of the beast when you put your content up for free on a public website. Does Kbin or Beehaw owe us money for our comments on this thread? What about everyone currently reading? At least Kbin and Beehaw aren’t making a profit off of this.

      The argument is not as clear cut as people are making it sound and it has potential to up-end some fundamental expectations around free websites and user-generated content. It’s going to affect far more than just AI.

  • trashhalo@beehaw.orgOP

Re: the “stolen” / “not stolen” comments: copyright law as interpreted by judges is still being worked out for AI. Stay tuned to see whether it’s defined as stolen or not. But even if the courts decide that existing copyright law defines training on artists’ work as legitimate use, the law can change, and it could still swing the artists’ way if Congress got involved.

My personal opinion, which may not reflect what happens legally, is that I hope we all get more control over our data and how it’s used and sold, whether that’s personal data like my comments and location, or artistic data like my paintings. I think that would be a better world.

  • Melody Fwygon@beehaw.org

    AI art is factually not art theft. It is creation of art in the same rough and inexact way that we humans do it; except computers and AIs do not run on meat-based hardware that has an extraordinary number of features and demands that are hardwired to ensure survival of the meat-based hardware. It doesn’t have our limitations; so it can create similar works in various styles very quickly.

Copyright, on the other hand, is an entirely different and very sticky subject. By default, “All Rights Are Reserved” is what these laws usually protect. These laws, however, are not grounded in modern times. They are grounded in the past, before the information age truly began its upswing.

    Fair use generally encompasses all usage of information that is one or more of the following:

    • Educational; so long as it is taught as a part of a recognized class and within curriculum.
    • Informational; so long as it is being distributed to inform the public about valid, reasonable public interests. This is far broader than some would like; but it is legal.
    • Transformative; so long as the content is being modified in a substantial enough manner that it is an entirely new work that is not easily confused for the original. This too, is far broader than some would like; but it still is legal.
• Narrative or Commentary purposes; so long as you’re not copying a significant amount of the whole content and passing it off as your own. Short clips with narration and lots of commentary interwoven between them are typically protected. Copyright is not intended to be used to silence free speech. This also tends to include satire; as long as it doesn’t tread into defamation territory.
    • Reasonable, ‘Non-Profit Seeking or Motivated’ Personal Use; People are generally allowed to share things amongst themselves and their friends and other acquaintances. Reasonable backup copies, loaning of copies, and even reproduction and presentation of things are generally considered fair use.

    In most cases AI art is at least somewhat Transformative. It may be too complex for us to explain it simply; but the AI is basically a virtual brain that can, without error or certain human faults, ingest image information and make decisions based on input given to it in order to give a desired output.

    Arguably; if I have license or right to view artwork; or this right is no longer reserved, but is granted to the public through the use of the World Wide Web…then the AI also has those rights. Yes. The AI has license to view, and learn from your artwork. It just so happens to be a little more efficient at learning and remembering than humans can be at times.

    This does not stop you from banning AIs from viewing all of your future works. Communicating that fact with all who interact with your works is probably going to make you a pretty unpopular person. However; rightsholders do not hold or reserve the right to revoke rights that they have previously given. Once that genie is out of the bottle; it’s out…unless you’ve got firm enough contract proof to show that someone agreed to otherwise handle the management of rights.

    In some cases; that proof exists. Good luck in court. In most cases however; that proof does not exist in a manner that is solid enough to please the court. A lot of the time; we tend to exchange, transfer and reserve rights ephemerally…that is in a manner that is not strictly always 100% recognized by the law.

    Gee; Perhaps we should change that; and encourage the reasonable adaptation and growth of Copyright to fairly address the challenges of the information age.

          • Deniz Opal@syzito.xyz

            @raccoona_nongrata

Actually, it is necessary. The process of creativity is much, much more a synergy of past consumption than we think.

It took 100,000 years to get from cave drawings to Leonardo da Vinci.

            Yes we always find ways to draw, but the pinnacle of art comes from a shared culture of centuries.

              • Deniz Opal@syzito.xyz

                @raccoona_nongrata

                A machine will not unilaterally develop an art form, and develop it for 100,000 years.

                Yes I agree with this.

                However, they are not developing an art form now.

                Nor did Monet, Shakespeare, or Beethoven develop an art form. Or develop it for 100,000 years.

                So machines cannot emulate that.

                But they can create the end product based on past creations, much as Monet, Shakespeare, and Beethoven did.

                • ParsnipWitch@feddit.de

No, humans create and develop styles in art from “mistakes” that AI would not continue pursuing, because they personally like them or have a strange addiction to their own creative process. The current hand mistakes, for example, were perhaps one of the few interesting things AI has done…

                  Current AI models recreate what is most liked by the majority of people.

        • Ben from CDS@dice.camp

@selzero @raccoona_nongrata @fwygon But human creativity is not ONLY a combination of past creativity. It is filtered through a lifetime of subjective experience and combined knowledge. Two human artists schooled on the same art history can still produce radically different art. Humans are capable of going beyond what has been done before.

          Before going too deep on AI creation spend some time learning about being human. After that, if you still find statistical averages interesting, go back to AI.

          • Deniz Opal@syzito.xyz

            @glenatron @raccoona_nongrata @fwygon

            I mean, yes, you are right, but essentially, it is all external factors. They can be lived-through external factors, or data-fed external factors.

            I don’t think there is a disagreement here, other than that you are placing a lot of value on “the human experience” being an in-real-life thing rather than a read thing. Which is not even fully true of the great masters. It’s a form of puritan fetishisation, I guess.

            • Ben from CDS@dice.camp
              link
              fedilink
              arrow-up
              7
              ·
              1 year ago

              @selzero @raccoona_nongrata @fwygon I don’t think it’s even controversial. Will sentient machines ever have an equivalent experience? Very probably. Will they be capable of creating art? Absolutely.

              Can our current statistical bulk reincorporation tools make any creative leap? Absolutely not. They are only capable of plagiarism. Will they become legitimate artistic tools? Perhaps, when the people around them start taking artists seriously instead of treating them with disdain.

              • Deniz Opal@syzito.xyz
                link
                fedilink
                arrow-up
                5
                ·
                1 year ago

                @glenatron @raccoona_nongrata @fwygon

                This angle is very similar to a debate going on in the cinema world, with Scorsese famously ranting that Marvel movies are “not movies”

                The point being that without a director’s message being portrayed, these cookie-cutter cinema experiences, with algorithmically developed story lines, should not be classified as proper movies.

                But the fact remains, we consume them as movies.

                We consume AI art as art.

    • Thevenin@beehaw.org
      link
      fedilink
      arrow-up
      16
      ·
      1 year ago

      It doesn’t change anything you said about copyright law, but current-gen AI is absolutely not “a virtual brain” that creates “art in the same rough and inexact way that we humans do it.” What you are describing is called Artificial General Intelligence, and it simply does not exist yet.

      Today’s large language models (like ChatGPT) and diffusion models (like Stable Diffusion) are statistics machines. They copy down a huge amount of example material, process it, and use it to calculate the most statistically probable next word (or pixel), with a little noise thrown in so they don’t make the same thing twice. This is why ChatGPT is so bad at math and Stable Diffusion is so bad at counting fingers – they are not making any rational decisions about what they spit out. They’re not striving to make the correct answer. They’re just producing the most statistically average output given the input.
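      The “most statistically probable next word, with a little noise” behaviour described above can be sketched as a toy sampler. The vocabulary and counts below are made up purely for illustration; real models use learned probabilities over tens of thousands of tokens:

```python
import random

# Hypothetical counts of which word followed "the" in some training data.
counts = {"cat": 70, "dog": 25, "xylophone": 5}

def sample_next(counts, temperature=1.0):
    """Pick the next token mostly by frequency, with noise set by temperature."""
    # Sharpen or flatten the distribution: low temperature favours the top token.
    weights = {tok: c ** (1.0 / temperature) for tok, c in counts.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point edge cases

word = sample_next(counts)
```

      Run repeatedly, this mostly emits “cat” but occasionally surprises you, which is the whole trick: plausible-looking variation without any model of what a cat actually is.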

      Current-gen AI isn’t just viewing art, it’s storing a digital copy of it on a hard drive. It doesn’t create, it interpolates. In order to imitate a person’s style, it must make a copy of that person’s work; describing the style in words is insufficient. If human artists (and by extension, art teachers) lose their jobs, AI training sets stagnate, and everything they produce becomes repetitive and derivative.

      None of this matters to copyright law, but it matters to how we as a society respond. We do not want art itself to become a lost art.

      • Fauxreigner@beehaw.org
        link
        fedilink
        arrow-up
        7
        ·
        1 year ago

        Current-gen AI isn’t just viewing art, it’s storing a digital copy of it on a hard drive.

        This is factually untrue. For example, Stable Diffusion models are in the range of 2GB to 8GB, trained on a set of 5.85 billion images. If it was storing the images, that would allow approximately 1 byte for each image, and there are only 256 possibilities for a single byte. Images are downloaded as part of training the model, but they’re eventually “destroyed”; the model doesn’t contain them at all, and it doesn’t need to refer back to them to generate new images.
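        The arithmetic behind that comparison is easy to check (the sizes are approximate; checkpoints vary):

```python
# Back-of-envelope: if a ~4 GB checkpoint "stored" its 5.85 billion training
# images, how many bytes would each image get?
model_bytes = 4 * 1024**3          # ~4 GB, a mid-range checkpoint size
training_images = 5_850_000_000    # LAION-5B scale
bytes_per_image = model_bytes / training_images
print(f"{bytes_per_image:.2f} bytes per image")  # less than one byte each
```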

        It’s absolutely true that the training process requires downloading and storing images, but the product of training is a model that doesn’t contain any of the original images.

        None of that is to say that there is absolutely no valid copyright claim, but it seems like either option is pretty bad, long term. AI generated content is going to put a lot of people out of work and result in a lot of money for a few rich people, based off of the work of others who aren’t getting a cut. That’s bad.

        But the converse, where we say that copyright is maintained even if a work is only stored as weights in a neural network is also pretty bad; you’re going to have a very hard time defining that in such a way that it doesn’t cover the way humans store information and integrate it to create new art. That’s also bad. I’m pretty sure that nobody who creates art wants to have to pay Disney a cut because one time you looked at some images they own.

        The best you’re likely to do in that situation is say it’s ok if a human does it, but not a computer. But that still hits a lot of stumbling blocks around definitions, especially where computers are used to create art constantly. And if we ever hit the point where digital consciousness is possible, that adds a whole host of civil rights issues.

        • Thevenin@beehaw.org
          link
          fedilink
          arrow-up
          4
          ·
          1 year ago

          It’s absolutely true that the training process requires downloading and storing images

          This is the process I was referring to when I said it makes copies. We’re on the same page there.

          I don’t know what the solution to the problem is, and I doubt I’m the right person to propose one. I don’t think copyright law applies here, but I’m certainly not arguing that copyright should be expanded to include the statistical matrices used in LLMs and DPMs. I suppose plagiarism law might apply for copying a specific style, but that’s not the argument I’m trying to make, either.

          The argument I’m trying to make is that while it might be true that artificial minds should have the same rights as human minds, the LLMs and DPMs of today absolutely aren’t artificial minds. Allowing them to run amok as if they were is not just unfair to living artists… it could deal irreparable damage to our culture because those LLMs and DPMs of today cannot take up the mantle of the artists they hedge out or pass down their knowledge to the next generation.

          • Fauxreigner@beehaw.org
            link
            fedilink
            arrow-up
            2
            ·
            1 year ago

            Thanks for clarifying. There are a lot of misconceptions about how this technology works, and I think it’s worth making sure that everyone in these thorny conversations has the right information.

            I completely agree with your larger point about culture; to the best of my knowledge we haven’t seen any real ability to innovate, because the current models are built to replicate the form and structure of what they’ve seen before. They’re getting extremely good at combining those elements, but they can’t really create anything new without a person involved. There’s a risk of significant stagnation if we leave art to the machines, especially since we’re already seeing issues with new models including the output of existing models in their training data. I don’t know how likely that is; I think it’s much more likely that we see these tools used to replace humans for more mundane, “boring” tasks, not really creative work.

            And you’re absolutely right that these are not artificial minds; the language models remind me of a quote from David Langford in his short story Answering Machine: “It’s so very hard to realize something that talks is not intelligent.” But we are getting to the point where the question of “how will we know” isn’t purely theoretical anymore.

      • Zyansheep@lemmy.ml
        link
        fedilink
        arrow-up
        2
        ·
        1 year ago
        1. How do you know human brains don’t work in roughly the same way chatbots and image generators work?

        2. What is art? And what does it mean for it to become “lost”?

          • Zyansheep@lemmy.ml
            link
            fedilink
            arrow-up
            1
            ·
            1 year ago

            No, he just said AI isn’t like human brains because it’s a “statistical machine”. What I’m asking is how he knows that human brains aren’t statistical machines?

            Human brains aren’t that good at direct math calculation either!

            Also he definitely didn’t explain what “lost art” is.

    • ParsnipWitch@feddit.de
      link
      fedilink
      arrow-up
      10
      ·
      1 year ago

      Current AI models do not learn the way human brains do. And the way current models learn to “make art” is very different from how human artists do it. Repeatedly trying to recreate the work of other artists is something beginners do, and posting those works online was always shunned in artist communities. You also don’t learn to draw a hand by remembering where a thousand different artists put the lines so it looks like a hand.

    • shiri@foggyminds.com
      link
      fedilink
      arrow-up
      3
      ·
      edit-2
      1 year ago

      @fwygon all questions of how AI learns aside, it’s not legally theft but philosophically the topic is debatable and very hot button.

      I can however comment pretty well on your copyright comments which are halfway there, but have a lot of popular inaccuracies.

      Fair use is a very vague topic; the law deliberately avoids explicit terms for what is allowed and instead describes the intents of what is to be allowed. We’ve got some firm categories, not because of specific laws but from an abundance of case law.

      * Educational; so long as it is taught as a part of a recognized class and within curriculum.
      * Informational; so long as it is being distributed to inform the public about valid, reasonable public interests. This is far broader than some would like; but it is legal.
      * Narrative or Commentary purposes; so long as you’re not copying a significant amount of the whole content and passing it off as your own. Short clips with narration and lots of commentary interwoven between them is typically protected. Copyright is not intended to be used to silence free speech. This also tends to include satire; as long as it doesn’t tread into defamation territory.

      These are basically all the same category and includes some misinformation about what it does and does not cover. It’s permitted to make copies for purely informational, public interest (ie. journalistic) purposes. This would include things like showing a clip of a movie or a trailer to make commentary on it.

      Education doesn’t get any special treatment here, but research might (ie. making copies that are kept to a restricted environment, and only used for research purposes, this is largely the protection that AI models currently fall under because the training data uses copyrighted data but the resulting model does not).

      * Transformative; so long as the content is being modified in a substantial enough manner that it is an entirely new work that is not easily confused for the original. This too, is far broader than some would like; but it still is legal.

      “Easily confused” is a rule from Trademark Law, not copyright. Copyright doesn’t care about consumer confusion, but does care about substitution. That is, if the content could be a substitute for the original (ie. copying someone else’s specific painting is going to be a violation up until the point where it can only be described as “inspired by” the painting)

      * Reasonable, ‘Non-Profit Seeking or Motivated’ Personal Use; People are generally allowed to share things amongst themselves and their friends and other acquaintances. Reasonable backup copies, loaning of copies, and even reproduction and presentation of things are generally considered fair use.

      This is a very very common myth that gets a lot of people in trouble. Copyright doesn’t care about whether you profit from it, more about potential lost profits.

      Loaning is completely disconnected from copyright because no copies are being made (“digital loaning” is a nonsense attempt to claiming loaning, but is just “temporary” copying which is a violation).

      Personal copies are permitted so long as you keep the original copy (or the original copy is explicitly irrecoverably lost or destroyed) as you already acquired it and multiple copies largely are just backups or conversions to different formats. The basic gist is that you are free to make copies so long as you don’t give any of them to anyone else (if you copy a DVD and give either the original or copy to a friend, even as a loan, it’s illegal).

      It’s not good to rely on use being “non-profit” as a copyright excuse, as that’s more an area of leniency than a hard line. People far too often think it allows them to get away with copying things; it’s really just for topics like making backups of your movies or copying your CDs to mp3s.

      … All that said, fun fact: AI works are not covered by copyright law.

      To be copyrighted a human being must actively create the work. You can copyright things made with AI art, but not the AI art itself (ie. a comic book made with AI art is copyrighted, but the AI art in the panels is not, functioning much like if you made a comic book out of public domain images). Prompts and set up are not considered enough to allow for copyright (example case was a monkey picking up a camera and taking pictures, those pictures were deemed unable to be copyrighted because despite the photographer placing the camera… it was the monkey taking the photos).

      • Harrison [He/Him]@ttrpg.network
        link
        fedilink
        arrow-up
        1
        ·
        1 year ago

        This is true in US law, but it should probably be noted that a lot of the “misconceptions” you’re outlining in the OP’s comment are things that are legal in other jurisdictions.

    • joe_vinegar@slrpnk.net
      link
      fedilink
      arrow-up
      3
      ·
      1 year ago

      This is a very nice and thorough comment! Can you provide a reputable source for these points? (no criticism intended: as you seem knowledgeable, I’d trust you could have such reputable sources already selected and at hand, that’s why I’m asking).

      • throwsbooks@lemmy.ca
        link
        fedilink
        arrow-up
        3
        ·
        1 year ago

        Not the poster you’re replying to, but I’m assuming you’re looking for some sort of source that neural networks generate stuff, rather than plagiarize?

        Google scholar is a good place to start. You’d need a general understanding of how NNs work, but it ends up leading to papers like this one, which I picked out because it has neat pictures as examples. https://arxiv.org/abs/1611.02200

        What this one is doing is taking an input in the form of a face, and turning it into a cartoon. They call it an emoji, cause it’s based on that style, but it’s the same principle as how AI art is generated. Learn a style, then take a prompt (image or text) and do something with the prompt in the style.
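        The “learn a style, then apply it to a new input” loop can be illustrated with a deliberately tiny toy: no neural network, just matching channel statistics, loosely in the spirit of AdaIN-style transfer. All the images here are hypothetical 4×4 arrays:

```python
import numpy as np

def learn_style(style_images):
    # "Learn" a style as the mean and spread of pixel values across examples.
    stacked = np.stack(style_images).astype(float)
    return stacked.mean(), stacked.std()

def apply_style(content, style):
    # Re-normalize the content image to match the learned style statistics.
    mean, std = style
    c = content.astype(float)
    c_std = c.std() if c.std() > 0 else 1.0
    out = (c - c.mean()) / c_std * std + mean
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical data: two dark "style" images, one bright content image.
style = learn_style([np.full((4, 4), 30), np.full((4, 4), 50)])
result = apply_style(np.full((4, 4), 200), style)
```

        Real diffusion models learn far richer statistics than a single mean and spread, but the shape of the process is the same: fit statistics to example images, then push new content toward them.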

  • Dizzy Devil Ducky@lemm.ee
    link
    fedilink
    English
    arrow-up
    18
    ·
    edit-2
    1 year ago

    All this proves to me, based on the context from this post, is that people are willing to commit copyright infringement in order to make a machine produce art in a specific style.

    • Hawk@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      8
      ·
      1 year ago

      It doesn’t say anywhere they used copyrighted art though?

      Seems the new model might use art inspired by him, not his art itself.

      It’s a moral gray zone. If you add enough freely available works inspired by someone, the model can produce a similar style without using any original works.

      Is it still copyright infringement at that point?

      • UnknownCircle@kbin.social
        link
        fedilink
        arrow-up
        8
        ·
        edit-2
        1 year ago

        It’s unlikely that this did not use his work; these models require input data. Even if they took similar art, that would only resolve the issue for Greg himself and would shift it to those other artists. Unless there is some sort of unspoken artistic genealogical purity that prevents artists with similar or inspired styles from having equal claim on their own creations when inspired by another.

        It also could be outputs generated from another AI model. But I don’t think people who see ethical problems in this care about the number of steps removed and processing that occurs when the origin is his artwork and it ultimately outputs the same or similar style. The result is what bothers people, no matter how disparate or disconnected the source’s influence is. If the models had simply found the Greg Rutkowski latent space through random chance people would still take issue with it.

        The ability and willingness to generate images in a style associated with a person, without consent, is a threat to that person’s job security and shows a lack of value for them as a human. As if their creative expression is worth nothing but as a commodity to be consumed.

        The people supporting this don’t care, though. They want to consume this person’s style in far greater quantities and variations than a human is capable of or willing to fulfill. That’s why these debates are so fierce: the two sides have incentives that are in direct conflict with one another.

        We currently lack the economic ingenuity or willingness to create a system that will satisfy both parties. The barrier to entry for AI is low, and someone at home has every incentive to maintain the status quo or even actively rail against artists. Artists will need a heavy-handed approach from the government, or to act as a collective, to combat this effectively.

        • Harrison [He/Him]@ttrpg.network
          link
          fedilink
          arrow-up
          2
          ·
          1 year ago

          The ability and willingness to generate images in a style associated with a person, without consent, is a threat to that persons job security and shows a lack of value for them as a human. As if their creative expression is worth nothing but as a commodity to be consumed.

          You can’t own an art style. Copyright only extends to discrete works and characters. If I pay a street artist to draw a portrait of me in the style of Picasso, I’m not devaluing Picasso as a person.

          • UnknownCircle@kbin.social
            link
            fedilink
            arrow-up
            1
            ·
            1 year ago

            I agree that you can’t own an art style in the US and I don’t know if there’s any other legal basis for artist’s claims.

            Legality doesn’t automatically deal with problems that are not based on whether something is legal or not. Losing money is losing money, regardless of whether it’s the result of something legal. And people can feel devalued by something that is legal. It just means that the government will not use force to intervene in what you’re doing, and may in fact use force to support you.

            Picasso is dead, so he has no ability to feel devalued. Artists who are alive do have that ability and other living people who value his works do as well.

            I myself support and love this technology. But it is clear that it causes problems for some people. I would prefer for it to exist in a form where artists could get value from and be happy with it too, but that is just not the case at present.

      • Dizzy Devil Ducky@lemm.ee
        link
        fedilink
        English
        arrow-up
        4
        ·
        1 year ago

        If it’s inspired, then at that point I guess it might not be copyright-infringing unless it’s an accurate enough recreation of a copyrighted piece… And it looks like my mind filled in the gaps to assume it was copyrighted work being used.

  • arvere@lemmy.ml
    link
    fedilink
    arrow-up
    13
    ·
    1 year ago

    my take on the subject, as someone who worked both in design and arts, and in tech, is that the difficulty in discussing this is rooted more in what is art as opposed to what is theft

    we mistakenly call illustrator/design work art work. art is hard to define, but most would agree it requires some level of expressiveness that emanates from the artist (from the condition of human existence, to social criticism, to beauty by itself) and that’s what makes it valuable. with SD and other AIs, the control of this aspect is actually in the hands of the AI illustrator (or artist?)

    whereas design and illustration are associated with product development and market. while they can contain art in a way, they have to adhere to a specific pipeline that is generally (if not always) for profit. to deliver the best-looking imagery for a given purpose in the shortest time possible

    designers and illustrators were always bound to be replaced one way or another, as the system is always aiming to maximize profit (much like the now-old discussions between taxis and uber). they have every right to whine about it, but my guess is that this won’t save their jobs. they will have to adopt it as a very powerful tool in their workflow or change careers

    on the other hand, artists that are worried, if they think the worth of their art lies solely in a specific style they’ve developed, they are in for an epiphany. they might soon realise they aren’t really artists, but freelance illustrators. that’s also not to mention other posts stating that we always climb on the shoulders of past masters - in all areas

    both artists and illustrators that embrace this tool will benefit from it, either to express themselves quicker and skipping fine arts school or to deliver in a pace compatible with the market

    all that being said, I would love to live in a society where people cared more about progress instead of money. imagine artists and designers actively contributing to this tech instead of wasting time fighting over IP and copyright…

    • Harrison [He/Him]@ttrpg.network
      link
      fedilink
      arrow-up
      5
      ·
      1 year ago

      Artists don’t own their styles, so it’s interesting to see them fight to protect them.

      The only thing that makes anything valuable is that someone wants it, or at least wants it to exist. Nothing has intrinsic value because value itself is a human construction. This necessarily includes art.

  • Storksforlegs@beehaw.org
    link
    fedilink
    arrow-up
    9
    ·
    edit-2
    1 year ago

    There’s a lot of disagreement here on what is theft, what is art, what is copyright… etc

    The main issue people have with AI is, fundamentally, how is it going to be used? I know there isn’t much we can do about it now, and it’s a shame because it has so much potential for good. Everyone defending AI is making a lot of valid points.

    But at the end of the day it is a tool that is going to be misused by the rich and powerful to eliminate hundreds of millions of well paying careers, permanently. MOST well paying jobs in fact, not just artists. What the hell are people supposed to do? How is any of this a good thing?

    • sapient [they/them]@infosec.pub
      link
      fedilink
      arrow-up
      8
      ·
      1 year ago

      What the hell are people supposed to do?

      Eat the rich :)

      More concretely, there are a number of smaller and larger sociopolitical changes that can be fought for. On the smaller side, there’s rethinking the way our society values people and pushing for some kind of UBI; on the larger side, there’s shifting to postcapitalist economics and organisation to various degrees :)

      • boff@lemmy.one
        link
        fedilink
        arrow-up
        3
        ·
        1 year ago

        But the rich are the ones buying a lot of the art! Who will pay the artists if you eat the people with the money?

    • Harrison [He/Him]@ttrpg.network
      link
      fedilink
      arrow-up
      6
      ·
      1 year ago

      The rich and powerful must go away, or everyone else will suffer.

      Soon enough they will succeed in eliminating most jobs, and the moment will come where action must be taken. Them or us.

  • SmoochyPit@beehaw.org
    link
    fedilink
    arrow-up
    9
    ·
    1 year ago

    If an image is represented as a network of weighted values describing subtle patterns in the image rather than a traditional grid of pixel color values, is that copy of the image still subject to copyright law?

    How much would you have to change before it isn’t? Or if you merged it with another representation, would that change your rights to that image?

    • whelmer@beehaw.org
      link
      fedilink
      arrow-up
      14
      ·
      1 year ago

      It doesn’t matter how you recreate an image: if you recreate someone else’s work, that is a violation of copyright.

      Stealing someone’s style is a different matter.

    • Steeve@lemmy.ca
      link
      fedilink
      arrow-up
      11
      ·
      edit-2
      1 year ago

      This person has no idea what machine learning actually is. And they hate such a generic concept on a “gut feeling” and come up with the reasons later?

      If you want good reasons to hate AI generated art you won’t find them in this shitty blogpost.

      • liminalDeluge@beehaw.org
        link
        fedilink
        English
        arrow-up
        6
        ·
        1 year ago

        Apparently your comment really got to them, because the blogpost now contains a direct quote of you and a response.

        • Steeve@lemmy.ca
          link
          fedilink
          arrow-up
          5
          ·
          edit-2
          1 year ago

          Someone I don’t get along with very well wrote:

          Hahaha yikes. Pretty cowardly to post their unhinged response on their blog where nobody can actually respond.

          Also, why the hell would this person who hates the very general concept of machine learning (because of their gut lol) get a degree in a field that significantly utilizes machine learning? Computational linguistics is essentially driven by machine learning, so that’s uh… probably bullshit.