• KeenFlame@feddit.nu
    4 days ago

    ? If you build and work with ML, you are in a field of research; it’s not a technology that you just “use”. And if you hand over the output of your “ML”, that is exactly the same as handing over an LLM’s output. They aren’t conflating anything. ChatGPT is also the output of “ML”.

    • thespcicifcocean@lemmy.world
      3 days ago

      When I say the output of my ML, I mean I give the prediction and its confidence score. For instance, if a process has a high probability of being late based on the inputs, I’ll say it will be late, along with the confidence. That’s completely different from feeding the figures into a GPT and repeating whatever the LLM says.

      And when I say “ML”, I mean a model I trained on specific data to do one very specific thing. There’s no prompting and no chat-like output; it’s not a language model. Roughly, it’s the kind of setup sketched below.
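
      A minimal sketch of that kind of setup, assuming a scikit-learn classifier and made-up feature names (queue length, staffing, backlog); the model choice and data are illustrative assumptions, not the commenter’s actual pipeline:

      ```python
      # Minimal sketch of the "ML" described above: a small classifier trained on
      # specific tabular data, reporting a prediction plus a confidence score.
      # The features, data, and model choice here are assumptions for illustration.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      # Hypothetical features: [queue_length, staffing_level, backlog_hours] -> late (1) / on time (0)
      X_train = np.array([[12, 3, 8], [2, 5, 1], [9, 2, 6], [1, 6, 0], [14, 2, 10], [3, 4, 2]])
      y_train = np.array([1, 0, 1, 0, 1, 0])

      model = RandomForestClassifier(n_estimators=100, random_state=0)
      model.fit(X_train, y_train)

      # Score a new process: report whether it will be late, and with what confidence.
      new_process = np.array([[10, 3, 7]])
      p_late = model.predict_proba(new_process)[0, 1]
      print(f"late: {p_late >= 0.5}, confidence: {p_late:.2f}")
      ```

      No prompt, no chat output: just a predicted label and a probability, which is what gets reported.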

      • KeenFlame@feddit.nu
        6 hours ago

        Yeah, but there’s no fundamental difference; you could just as well take any language-model stack and train it on the same data.