• MoonManKipper@lemmy.world · 2 days ago

    If true, they’re all idiots, but I don’t believe the story anyway. All the data-question-answering LLM tools I’ve seen use the LLM to write SQL queries against your database and then wrap the output in a summary, so the summary is easy to check and very unlikely to be significantly wrong. AI/ML/statistics and code are tools: use them for what they’re good at, don’t use them for what they’re not, and treat hype with skepticism.
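    Roughly, the pattern looks like the sketch below (a minimal Python illustration; `ask_llm` is a stand-in for whatever model API is actually used, and the schema handling is simplified). The model only writes the query and the prose; the numbers come straight from the database, which is what makes the summary easy to audit.

    ```python
    import sqlite3

    def ask_llm(prompt: str) -> str:
        # Placeholder: swap in whatever model/provider you actually use.
        raise NotImplementedError

    def answer_data_question(question: str, conn: sqlite3.Connection) -> str:
        # Give the model the table definitions so it can write a sensible query.
        schema = "\n".join(
            row[0] or ""
            for row in conn.execute("SELECT sql FROM sqlite_master WHERE type = 'table'")
        )
        sql = ask_llm(f"Schema:\n{schema}\n\nWrite one SQL query that answers: {question}")

        # The database, not the model, produces the numbers.
        rows = conn.execute(sql).fetchall()

        # The model only wraps the verifiable rows in a readable summary.
        return ask_llm(
            f"Question: {question}\nSQL used: {sql}\nResult rows: {rows}\n"
            "Summarise the answer in plain English."
        )
    ```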

    • mayabuttreeks@lemmy.ca (OP) · 2 days ago

      Honestly, I was leaning toward “funny but probably fake” myself until I checked out OP’s post history, which mentions “startups” and namedrops a few SaaS tools used heavily in marketing. If you’ve worked with marketers (or a fair few startup bros, honestly), you’ll know this isn’t beyond the bounds of reason for some of them 😂

    • skisnow@lemmy.ca · 2 days ago

      Writing a syntactically correct SQL statement is not the same as doing accurate data analytics.

    • drosophila@lemmy.blahaj.zone · 2 days ago

      I am reminded of this story:

      https://retractionwatch.com/2024/02/05/no-data-no-problem-undisclosed-tinkering-in-excel-behind-economics-paper/

      Heshmati told the student he had used Excel’s autofill function to mend the data. He had marked anywhere from two to four observations before or after the missing values and dragged the selected cells down or up, depending on the case. The program then filled in the blanks. If the new numbers turned negative, Heshmati replaced them with the last positive value Excel had spit out.

      Of course that guy didn’t need fancy AI to act like an idiot; he used good old-fashioned autofill.
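      For the curious, here’s a rough re-creation in Python of what that “mending” amounts to (the data and function name are invented for illustration): a straight-line fill extended from a handful of neighbouring observations, with any negative result overwritten by the last positive number.

      ```python
      import numpy as np

      def excel_style_mend(values, anchor_idx, fill_idx):
          # Fit a straight line through the few marked observations,
          # roughly what dragging Excel's fill handle does.
          vals = np.asarray(values, dtype=float)
          slope, intercept = np.polyfit(anchor_idx, vals[anchor_idx], deg=1)
          last_positive = vals[anchor_idx][-1]
          for i in fill_idx:
              guess = slope * i + intercept
              if guess <= 0:
                  # "If the new numbers turned negative, [he] replaced them
                  # with the last positive value..."
                  guess = last_positive
              else:
                  last_positive = guess
              vals[i] = guess
          return vals

      # Four real observations, then three missing years fabricated from the trend.
      data = [5.0, 4.1, 3.0, 2.2, np.nan, np.nan, np.nan]
      print(excel_style_mend(data, anchor_idx=[0, 1, 2, 3], fill_idx=[4, 5, 6]))
      ```

      Nothing in this is imputation in any statistical sense; it just manufactures values that look plausible.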

    • sp3ctr4l@lemmy.dbzer0.com · 1 day ago

      Clearly you’ve never worked as a data analyst, or you would know that the vast majority of upper management and the C-suite are, in fact, fucking idiots.

      They’re generally where they are because of mutual secrets, nepotism, and who’s on their contact list.

    • Blackmist@feddit.uk · 2 days ago

      The problem is you’ve got people using the tools who don’t understand the output or the method used to get there.

      Take Excel’s Copilot function. You need to pass in a range of cells for the slop prompt to work on, but it’s an optional parameter. If you don’t pass it in, the function returns results anyway; they’re just complete bollocks.

      • TrippyHippyDan@lemmy.world · 1 day ago

        It’s even worse than that. The ones who should understand the tools decide the ease is good enough and give in to the AI brain rot.

        I’ve watched good co-workers turn into people I can’t trust anything from, because I know they just threw it at an AI and didn’t check the result.

        What’s worse is that when you come to them as an engineer and tell them they’re wrong, you have to prove to them the AI is wrong; they don’t have to prove to you the AI is right.

        And when you point them to the documentation, they can’t be bothered to read it: the AI didn’t say that, so the documentation must be wrong.

      • MoonManKipper@lemmy.world · 2 days ago

        At least it’ll self-correct in a couple of years: use a tool, look like an idiot, stop using the tool.

    • jj4211@lemmy.world · 2 days ago

      I’m on the fence, but I will say that if, for whatever reason, it was never actually connected to the data, or the connection had some flaw, I could totally believe it would just fabricate a report that looks consistent with what the request asked for. Maybe it never conveyed that an error occurred. Or maybe it did report the lack of data, and the user, rather than trying to understand the problem himself, just told the AI to fix it, triggering it to generate a narrative consistent with having fixed it without actually being able to.

      Sure, if you do a sanity check it should fall apart, but that assumes they bother. Some people have crazy confidence in LLMs and never even check.