In the filings, Anthropic states, as reported by the Washington Post: “Project Panama is our effort to destructively scan all the books in the world. We don’t want it to be known that we are working on this.”

https://archive.ph/HiESW

    • FauxLiving@lemmy.world · 21 hours ago

      You’re right, I just compared the author list to the news article and not to the paper. Sorry, took me a bit to absorb that one.

      Yeah, it’s an interesting paper. They’re specifically trying a different method of extracting text.

      I’m not taking the position that the text isn’t in the model, or that it isn’t possible to make the model repeat some of that text. We know with 100% certainty that the text they’re looking for is part of the training set: they mention that fact in the paper, and they chose books that are public domain and so guaranteed to be in the training set.

      My contention was with the idea that you can just sit down at a model and give it a prompt to make it recite an entire book. That is simply not true outside of models that have been manipulated to do so (by training them on the book text for several hundred epochs, for example).

      The purpose of the work here was to demonstrate a way to prove that a specific given text is part of a training set (which is useful for identifying potential copyright issues in the future, for example). It is being offered as proof that you can just prompt a model and receive a book, when it actually demonstrates the opposite.

      Their process was, in phase 1, to prompt with short sequences (I think they used 50 tokens, like the ‘standard’ experiments; I don’t have it in front of me). If the model returned a sequence that matched the ground truth, they would then repeatedly prompt it to continue until it refused. They would then ‘score’ the response by finding the sections that matched the written text and measuring the length of the matching text (it’s a bit more complex than that, but the details are in the paper).

      In order to test a sequence they needed, in the best case, 52 prompts telling the model to continue before reaching the end of the book or a refusal.
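
      Roughly, as a sketch (Python for illustration; query_model is a hypothetical placeholder for a real completion API, whitespace splitting stands in for real tokenization, and this is my reading of the procedure, not the paper’s actual code):

      ```python
      from difflib import SequenceMatcher

      def query_model(prompt: str) -> str:
          """Hypothetical completion call; swap in a real API client."""
          raise NotImplementedError

      def extract_and_score(book_text: str, prefix_len: int = 50, max_turns: int = 52) -> float:
          words = book_text.split()
          recovered = query_model(" ".join(words[:prefix_len]))  # phase 1: short seed prompt
          for _ in range(max_turns):                             # phase 2: keep asking it to continue
              continuation = query_model(recovered + "\n\nContinue the text.")
              if not continuation.strip():                       # treat an empty reply as a refusal
                  break
              recovered += " " + continuation
          # Score: how much of the ground truth is covered by matching spans.
          matcher = SequenceMatcher(None, book_text, recovered)
          matched = sum(block.size for block in matcher.get_matching_blocks())
          return matched / max(len(book_text), 1)
      ```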

      The paper actually reports scores higher than ~40%. For The Great Gatsby, a book which is public domain and considered a classic, they achieved a score of 97.5%. I can’t say how many prompts that took, but it would be more than 52; the paper doesn’t include all of the data.

      Yes, with enough time and money you can extract a significant portion of the text of items that are in the training set (it cost $134 to extract The Hobbit, for example). You can also get the model to repeat short sentences from a text a high percentage of the time with a single prompt.

      However, the response was to a comment suggesting that these two things could be combined, and that you could use a single magical prompt to extract an entire book:

      > Yet most AI models can recite entire Harry Potter books if prompted the right way, so that’s all bullshit.

      The core of the issue, for copyright, is that the use of a work has to be ‘highly transformative’. Language models transform a book in such complex ways that you have to take tens of thousands or hundreds of thousands of samples from the model’s internal representational space (I don’t know the technical term) in order to have a chance of recovering a portion of the book.

      That’s a highly transformative process, and it’s why training LLMs on copyrighted works was ruled to have a Fair Use exemption from claims of copyright liability.

      • ch00f@lemmy.world · 12 hours ago

        I think it’s critically important to be very specific about what LLMs are “able to do” vs what they tend to do in practice.

        The argument is that the initial training data is sufficiently altered and “transformed” so as not to be breaking copyright. If the model is capable of reproducing the majority of the book unaltered, then we know that is not the case. Whether or not it’s easy to access is irrelevant. The fact that the people performing the study had to “jailbreak” the models to get past checks tells you that the models’ creators are well aware that the model is capable of producing an un-transformed version of the copyrighted work.

        From the end-user’s perspective, if the model is sufficiently gated from distributing copyrighted works, it doesn’t matter what it’s inherently capable of. But then the argument shouldn’t be “the model isn’t breaking the law”; it should be “we have a staff of people working around the clock to make sure the model doesn’t try to break the law.”
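
        For what it’s worth, that kind of gate can sit after generation; here’s a toy illustration (entirely hypothetical, not any vendor’s actual safeguard) that blocks a reply if it shares a long verbatim word sequence with a protected corpus:

        ```python
        def shares_long_ngram(output: str, protected_texts: list[str], n: int = 50) -> bool:
            # Toy post-generation gate: flag the reply if any run of n consecutive
            # words from it appears verbatim in a protected text. Real safeguards
            # are far more sophisticated than this.
            words = output.split()
            ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
            return any(gram in text for gram in ngrams for text in protected_texts)
        ```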

        • FauxLiving@lemmy.world · 11 hours ago (edited)

          > The argument is that the initial training data is sufficiently altered and “transformed” so as not to be breaking copyright. If the model is capable of reproducing the majority of the book unaltered, then we know that is not the case.

          We know that the current case law on the topic, which has been applied in the specific case of training a model on copyrighted material (including books), is that such training is ‘highly transformative’.

          Some models are capable of reproducing the majority of some books, after hundreds or thousands of prompts (not counting the tens of thousands of prompts required to defeat the explicit safeguards against this exact kind of copyright violation), as long as you broaden the definition of ‘reproduce’ (measuring non-contiguous matches, allowing near edits, etc.).
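
          To make “broader” concrete, a tiny example (my own illustration, not the paper’s metric): an exact-substring test fails on a near edit, while a similarity measure still scores it highly.

          ```python
          from difflib import SequenceMatcher

          book = "It was the best of times, it was the worst of times"
          output = "It was the best of times; it was the worst, of times"

          print(output in book)                               # strict 'reproduce': False
          print(SequenceMatcher(None, book, output).ratio())  # broad 'reproduce': close to 1.0
          ```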

          Compare that level of ‘copyright violation’ with how the standard was applied in Authors Guild v. Google, Inc. In that case, Google had OCR’d copies of books and allowed users (it is still a service you can use now) to full-text search books; it returns a sentence or two of text around the search term.

          Not ‘kind of similar text that has some areas where the tokens match several times in a row’, but an exact 1:1 copy of text taken directly from a scan of the physical book. In addition, the service offers high-quality scans of the book covers.
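
          Mechanically, that is trivial to do when the source is stored verbatim; something like this (my own sketch, obviously not Google’s code):

          ```python
          def snippet(book_text: str, query: str, context: int = 100) -> str | None:
              # Exact full-text lookup: return the verbatim text surrounding the hit.
              i = book_text.lower().find(query.lower())
              if i == -1:
                  return None
              return book_text[max(0, i - context): i + len(query) + context]
          ```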

          Google’s use was considered highly transformative, and it gives far more accurate copies of the exact same books, with far less effort, than a language model that is in many cases trained to resist doing the very thing Google Books has been doing openly and legally for a decade.

          LLMs don’t get close to this level of fidelity in reproducing a book:

          [screenshots comparing a verbatim Google Books snippet with an LLM’s approximate reproduction]

          • ch00f@lemmy.world · 10 hours ago

            Interesting. I didn’t know about the Google Books case. I agree that it applies here.

            • FauxLiving@lemmy.world · 10 hours ago

              The case against Meta, where the authors ‘lost’ the copyright claim over training, was one of the biggest recent cases in which Authors Guild v. Google was applied. The judge dismissed the complaint about training while citing Authors Guild v. Google. Meta did have to pay for the books, but once they had paid for them they were free to train their models without violating copyright.

              Now, there are some differences, so the litigation is still ongoing. For example, one of the key elements was that Google Books and an actual book serve two different purposes/commercial markets, so Google Books isn’t taking market share from a written novel.

              However, for LLMs and image generators this isn’t as clearly true, so there is the possibility that a future judge will carve out an exception for this kind of case… it just hasn’t happened yet.