OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling’s Harry Potter series: A new research paper laid out ways in which AI developers should try to avoid showing that LLMs have been trained on copyrighted material.

          • kmkz_ninja@lemmy.world · 1 year ago

            His point is equally valid. Can an artist be compelled to show the methods of their art? Is it right to force an artist to give up their methods because another artist thinks they are using AI to derive copyrighted work? Haven’t we already seen that LLMs are really poor at evaluating whether or not something was created by an LLM? Wouldn’t making strong laws on such an already opaque and difficult-to-prove issue be more of a burden on smaller artists than on large studios with lawyers in tow?

      • Asuka@sh.itjust.works · 1 year ago

        If I read Harry Potter and wrote a novel of my own, ideas from it could no doubt consciously or subconsciously influence my novel and be incorporated into it. How is that any different from what an LLM does?

      • TwilightVulpine@lemmy.world · 1 year ago

        You joke, but AI advocates seem to forget that people have fundamentally different rights than tools and objects. A photocopier doesn’t get the same right to “memorize” and “learn” from a text that a human being has. As much as people may argue that AIs work differently, AIs are still not people.

        And if they ever become people, the situation will be much more complicated than whether they can imitate some writer. But we aren’t there yet; even their advocates just use them as tools.

          • TwilightVulpine@lemmy.world · 1 year ago

            But this falls exactly under what I just said. To say that using Machine Learning to imitate an artist without permission is fine, because humans are allowed to learn from each other, is making the mistake of assigning personhood to the system, assuming it ought to have the same rights that human beings do. There is a distinction between the rights of humans and those of tools, so saying that an AI can’t be trained on someone’s works to replicate their style doesn’t need to apply to people.

            Even if you support that reasoning, it still doesn’t help the writers and artists whose jobs are threatened by AI models based on their work. That the output isn’t an exact reproduction doesn’t change that the model relied on their works to begin with, and it doesn’t change that it serves as a way to undercut them, providing a cheaper replacement for their work. Copyright law as it stands wasn’t envisioned for a world where Machine Learning exists. It doesn’t really solve the problem to say that it technically isn’t supposed to cover ideas and styles. The creators will be struggling just the same.

            Either the law will need to emphasize the value of human authorship first, or we will need to go through drastic socioeconomic changes to ensure that these creators will be able to keep creating despite losing market share to AI. Otherwise, simply saying that AI gets to do this while changing nothing else will cause enormous damage to all sorts of creative careers and to wider culture. Even AI will become more limited with fewer fresh new creators to learn elements from.

        • kmkz_ninja@lemmy.world · 1 year ago

          How do you see that as a difference? Tools are extensions of ourselves.

          Restricting the use of LLMs is only restricting people.

          • TwilightVulpine@lemmy.world · 1 year ago

            When we get to the realm of automation and AI, calling tools just an “extension of ourselves” doesn’t make sense.

            Especially not when the people being “extended” by Machine Learning models did not want to be “extended” to begin with.

    • TropicalDingdong@lemmy.world · 1 year ago

      Exactly. If I write some Looney Tunes fan fiction, Warner doesn’t own that. This ridiculous view of copyright (which isn’t being challenged in public discourse) needs to be confronted.

      • wmassingham@lemmy.world · 1 year ago

        They can own it, actually. If you use the characters of Bugs Bunny, etc., or the setting (do they have a canonical setting?) then Warner does own the rights to the material you’re using.

        For example, see how the original Winnie the Pooh material just entered public domain, but the subsequent Disney versions have not. You can use the original stuff (see the recent horror movie for an example of legal use) but not the later material like Tigger or Pooh in a red shirt.

        Now, if your work is satire or parody, then you can argue that it’s fair use. But generally, most companies don’t care about fan fiction because it doesn’t compete with their sales. If you publish your Harry Potter fan fiction on LiveJournal, it wouldn’t be worth the money to pay the lawyers to take it down. But if you publish your Larry Cotter and the Wizard’s Rock story on Amazon, they’ll take it down, because now it’s a competing product.

          • Sethayy@sh.itjust.works · 1 year ago

            Can’t, but they’re pretty open about how they trained the model, so it’s almost an admission of guilt (though they weren’t hosting the pirated content, it’s still out there and would be trained on). Because unless they trained it on a paid Netflix account, there’s no way to get it legally.

            Idk where this lands legally, but I’d assume not in their favour

    • CoderKat@lemm.ee · 1 year ago

      It’s honestly a good question. It’s perfectly legal for you to memorize a copyrighted work. In some contexts, you can recite it, too (particularly under the perilous doctrine of fair use). And even if you don’t recite a copyrighted work directly, you are most certainly allowed to learn to write from reading copyrighted books, then try to come up with your own writing based on what you’ve read. You’ll probably try your best to avoid copying anyone, but you might still make mistakes, simply by forgetting that some idea isn’t your own.

      But can AI? If we want to view AI as basically an artificial brain, then shouldn’t it be able to do what humans can do? Though at the same time, it’s not actually a brain nor is it a human. Humans are pretty limited in what they can remember, whereas an AI could be virtually boundless.

      If we’re looking at intent, the AI companies certainly aren’t trying to recreate copyrighted works. As we can see, they’ve actively tried to stop it. And LLMs don’t directly store the copyrighted works, either. They’re basically just storing extremely hard-to-understand sets of weights, which are a challenge even for experienced researchers to explain. The companies aren’t denying that the models read copyrighted works (like all of us do), but arguably they aren’t trying to write copyrighted works.

    • SubArcticTundra@lemmy.ml · 1 year ago

      No, because you paid for a single viewing of that content with your cinema ticket. And frankly, I think that the price of a cinema ticket (i.e., a single viewing, which is what it was) should be what OpenAI is made to pay.