• Mortoc@lemmy.world · 207 points · 1 day ago

    This was legitimately fascinating. It’s crazy that Bloomberg reported on the same issue and didn’t find anything. Either their reporters are terrible at their jobs, or they were prevented from doing them properly. Either way, Bloomberg looks inept or corrupt.

    • scarabic@lemmy.world · 64 points · 1 day ago

      It’s not that surprising that an outlet that makes its entire living on a certain segment of the economy would do a better job in that segment than generalist journalists.

      If you’ve ever seen a news article about something you have real world expertise in, you know what I mean. Every time this happens to me I’m like “but they’re giving it such a surface treatment, missing the real point, and getting lots of little things wrong.”

      Then I turn to the next article and read it like it’s gospel. It’s a cognitive dissonance I don’t know how to deal with except by becoming an expert in everything, which is impossible.

      • JoeyJoeJoeJr@lemmy.ml · 25 points · 1 day ago

        I hate to bring up AI, but this is exactly what I keep trying to explain to people - when you ask any of these bots questions about things you’re an expert in, you see all the flaws. The trouble is people tend not to ask questions about things they already know…

        • pirat@lemmy.world · 8 points · 1 day ago

          Since we first got easy access to various LLMs, I’ve been doing the opposite: asking obscure questions I know the answer to, trying to get a better understanding of what various models are really (not) capable of, and what data they’re (not) trained on. But it seems you’re right and I’m in a minority.

          Most people treat the only LLM they know of as an oracle, and don’t seem to understand that it can write with confidence and still be incorrect. I’ve seen countless examples of just that, some funnier than others, so to me it has always been very obvious.

          It’s possible that using GPT-2 (back in the talktotransformer days), which was not configured for chat-style conversation but simply generated a continuation of the user’s input text, has actually helped me understand LLMs better and avoid that common naive usage. But I’m not sure how to make it just as clear to everyone else…

          • JoeyJoeJoeJr@lemmy.ml · 4 points · edited · 24 hours ago

            What bugs me the most is I’ve pointed it out to people in conversations that basically go like this:

            Me: You used it for X and caught mistakes - why are you trusting it for Y?
            Them: That’s a good point.

            And then they keep doing it anyway.

            I’m not an AI hater at all - it can be a great way to accelerate work you are capable of doing on your own. But using it for things you don’t understand, and/or not double-checking its work, is insanity.

          • 18107@aussie.zone · 4 points · 1 day ago

            I tried to use an LLM to write a script for me. It confidently told me I could slice a string in OpenSCAD with [1:]. That syntax works in Python, but it isn’t an OpenSCAD feature.

            Fortunately, programming has a good way of letting you know when the LLM is completely wrong.
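            For reference, the slice the model was pattern-matching on is plain Python syntax; a minimal sketch (the string value here is just an illustrative placeholder):

            ```python
            # Python: s[1:] is slice syntax that drops the first character.
            s = "module"
            print(s[1:])  # prints "odule"
            ```

            OpenSCAD strings support single-character indexing like s[0] and len(s), but no slice syntax, which is exactly the kind of gap an LLM trained mostly on Python code will confidently paper over.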

            The worrying part is that the LLMs can sometimes produce code that runs, but has massive security issues that you don’t notice if you just run the code and don’t closely analyse it.

          • ngdev@lemmy.zip · 3 points · 1 day ago

            they’re basically just reddit commenter summarizers imo, so yeah. garbage.