• badgermurphy@lemmy.world

    I’m sure the quality of the LLM output does vary a lot based on the size of the scope it covers and the training data set.

    However, I believe that if it were possible to make an LLM “quite accurate” in any context, a path to profitability for that tool would be easy to find, and I don’t think we have seen that materialize anywhere.

    I believe that the best they can get is “more accurate” than the mean, but still not accurate enough to reliably make anyone money*.

    *Nvidia notwithstanding

    • Routhinator@startrek.website

      Moreover, until an LLM can reliably produce the same output for the same input, the entire tech is unreliable garbage.
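A minimal sketch of where that run-to-run variation comes from (the vocabulary and logits here are toy values, not from any real model): LLMs typically pick each token by sampling from a probability distribution, so at temperature > 0 the same prompt can yield different outputs, while greedy decoding (the temperature → 0 limit) is deterministic. Note that in practice even temperature-0 inference can still vary slightly across runs because of non-deterministic floating-point reductions on GPUs.

```python
import math
import random

# Toy next-token distribution over a tiny vocabulary (illustrative only --
# a real LLM scores tens of thousands of tokens at every step).
VOCAB = ["the", "cat", "sat", "mat"]
LOGITS = [2.0, 1.0, 0.5, 0.1]

def softmax(logits, temperature):
    """Convert logits to probabilities, sharpened or flattened by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(temperature, rng):
    """Temperature sampling: draws randomly, so repeated calls can differ."""
    probs = softmax(LOGITS, temperature)
    return rng.choices(VOCAB, weights=probs, k=1)[0]

def greedy_token():
    """Greedy decoding: always picks the highest-logit token, so the
    same input always yields the same output."""
    return VOCAB[max(range(len(VOCAB)), key=lambda i: LOGITS[i])]

# Greedy decoding is deterministic across repeated calls...
print(all(greedy_token() == "the" for _ in range(5)))

# ...while sampling at temperature 1.0 produces several distinct tokens.
rng = random.Random(0)  # seeding makes even sampling reproducible per-run
draws = {sample_token(1.0, rng) for _ in range(200)}
print(len(draws) > 1)
```

Seeding the RNG is one way APIs try to offer reproducibility, but it only pins down the sampling step, not the floating-point non-determinism upstream of it.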