• reksas@sopuli.xyz
    2 days ago

    I think he means that it's a bit pointless to nitpick little things like this when there are bigger and more severe problems with AI. At least that is how I see it. And isn't it a bit bad to use the slop machine to prove the obvious, when it wastes resources?

    Though I hope you share this outwards too, so people outside this community also see it. So whether it's pointless or not depends on how much effect it has on the actual LLM hype. I doubt anyone here needs any convincing.

    • Spezi@feddit.org
      2 days ago

      The little things are indicative of larger-scale problems, though. If an LLM gets simpler things wrong, what happens with more complex topics like science, medicine, etc., where the operator doesn't understand the full extent of the result?

      • reksas@sopuli.xyz
        1 day ago

        Well, yeah. LLMs are unreliable all the way down. While they do have some uses, trusting them at all is always a mistake. The problem is that so many people seem to trust them, to the point of getting a psychosis.