• logicbomb@lemmy.world · 1 day ago

      Yeah, I was surprised when they said it could summarize the plot and talk about the characters. To my knowledge, an LLM’s only memory is its prompt (the context window), so it shouldn’t be able to analyze an entire novel. I’m guessing that if an LLM could do something like this, it would only be because the plot was already summarized at the end of the novel.
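
      To make the context-window point concrete, here’s a back-of-the-envelope sketch. The ~4 characters-per-token ratio and the 8,192-token window are illustrative assumptions, not any particular model’s real numbers:

```python
# Rough arithmetic for why a full novel may not fit in a context window.
def estimate_tokens(text: str) -> int:
    """Crude heuristic: English text averages roughly 4 characters per token."""
    return len(text) // 4

CONTEXT_LIMIT = 8_192      # hypothetical context window, in tokens

novel = "word " * 90_000   # stand-in for a ~90,000-word novel

needed = estimate_tokens(novel)
print(f"novel needs ~{needed:,} tokens; the window holds {CONTEXT_LIMIT:,}")
print("fits" if needed <= CONTEXT_LIMIT else "does not fit: must be chunked or summarized")
```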

      • Tar_Alcaran@sh.itjust.works · 16 hours ago

        Summarizing is entirely different from analyzing, though. It’s a “skill” that’s baked into LLMs, because that’s how they manage all information. But any analysis would be based on a summary, which loses a massive amount of resolution.

      • Frezik@lemmy.blahaj.zone · 1 day ago (edited)

        I once asked ChatGPT for an opinion on my blog and gave the web address. It summarized some historical posts accurately enough. It was definitely making use of the content, and not just my prompt. It flattered me by saying “the author shows a curious mind”. ChatGPT is good at flattery; in fact, it seems to be trained specifically to do it, and that’s part of OpenAI’s marketing strategy.

        For the record, yes, this is a bit narcissistic, just like googling yourself. Except you do need to google yourself every once in a while to know what certain people, like employers, are going to see when they do it. Unfortunately, I think we’re going to have to start doing the same with ChatGPT and other popular models. No, I don’t like that, either.

        • ruan@lemmy.eco.br · 5 hours ago

          > It was definitely making use of the content, and not just my prompt.

          Ok, being simplistic about the actual workings: anything an LLM outputs is based only on the training data or the prompt; an LLM does not “create” anything.

          I really doubt your blog is represented significantly enough in the training data, so I can only assume that yes, the blog URL you referenced was scraped by ChatGPT (along with any URLs linked from that main page that the scraper deemed relevant to the prompt), and all of that text was in fact added to the full internal prompt processed by the actual LLM.
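
          A minimal sketch of that flow, with fetch_page_text and call_llm as hypothetical stand-ins for whatever the real service actually uses:

```python
# Sketch: fetch a page, strip it to plain text, splice it into the prompt.
# The model never "browses"; it just receives the scraped text as input.
import re
from urllib.request import urlopen

def fetch_page_text(url: str) -> str:
    """Download a page and crudely strip HTML tags down to plain text."""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    return re.sub(r"<[^>]+>", " ", html)

def call_llm(prompt: str) -> str:
    # placeholder for the real inference call
    return f"(model sees a {len(prompt):,}-char prompt)"

def answer_about_url(question: str, url: str) -> str:
    page = fetch_page_text(url)[:20_000]   # truncate to fit the context limit
    prompt = f"Context from {url}:\n{page}\n\nQuestion: {question}"
    return call_llm(prompt)                # the model only "knows" what's in the prompt

print(answer_about_url("What is this blog about?", "https://example.com"))
```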

        • oddlyqueer@lemmy.ml · 16 hours ago

          I just had a horrifying vision of AI social media tools that help you optimize your public presentation: get AI critiques, plus tips for appearing more favorable. People do it because you need to be well received by AI evaluators to get a job. Gradually, social pressure turns all public figures (famous or not) into polished cartoon characters. The real horror of the dead internet is that we’ll do it to ourselves.

      • baguettefish@discuss.tchncs.de · 1 day ago

        chatbots also usually have a database of key facts to query (retrieval), and modern context windows can get very, very long (with the right chatbot). but yeah, the author probably imagined a lot of complexity and nuance and understanding that isn’t there
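
        a minimal sketch of the key-facts idea (retrieval). real systems use embeddings and vector search; plain word overlap here is just the simplest stand-in:

```python
# Sketch: pick stored facts that best overlap the question, prepend to prompt.
FACTS = [
    "The novel's narrator is unreliable.",
    "The story is set in a coastal town in the 1950s.",
    "The protagonist's sister disappears in chapter three.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank stored facts by word overlap with the question; return the top k."""
    q_words = set(question.lower().split())
    ranked = sorted(FACTS, key=lambda f: -len(q_words & set(f.lower().split())))
    return ranked[:k]

question = "What happens to the protagonist's sister?"
prompt = "\n".join(retrieve(question)) + "\n\nQ: " + question
print(prompt)   # the model answers from these retrieved facts, not from "memory"
```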

      • L0rdMathias@sh.itjust.works · 1 day ago

        Yes, but actually no. LLMs can be set up in such a way that they remember previous prompts; most if not all of the AI web services don’t enable this by default, if they even offer it as an option.

        • logicbomb@lemmy.world · 1 day ago

          > LLMs can be set up in such a way that they remember previous prompts

          All of that stuff is just added to their current prompt. That’s how that function works.
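
          A minimal sketch of how that’s typically implemented: the service keeps a transcript and re-sends the whole thing every turn. call_llm is a placeholder for the real model call; the model itself stays stateless:

```python
# Sketch: "memory" is just the accumulated transcript, re-sent each turn.
def call_llm(prompt: str) -> str:
    # stand-in for the real model call
    return f"(reply to a {len(prompt)}-char prompt)"

history: list[str] = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    prompt = "\n".join(history)   # the model's entire "memory" is this one string
    reply = call_llm(prompt)
    history.append(f"Assistant: {reply}")
    return reply

print(chat("Who wrote the novel?"))
print(chat("And what did I just ask you?"))  # works only because turn 1 is re-sent
```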