Seriously, 15 times is my limit on correcting an LLM.

The name in question? Rach. Google absolutely cannot pronounce it any way other than as if I were referring to Louise Fletcher in the diminutive.

Specifying “long a” did nothing, and now I’m past livid. If you can’t handle a common English name, why would I trust you with anything else?

This is my breaking point with LLMs. They’re fucking idiotic and can’t learn how to pronounce English words in English.

I hope the VCs also die in a fire.

    • Powderhorn@beehaw.org OP · 1 day ago
      I know IPA (the phonetic alphabet, not the beer … OK, I also know the beer, but that’s not important right now) … and, yeah, I tried that, but on a laptop without a numpad, entering those symbols is a bit of a slog.

      What was maddening was that the LLM got it right only around 10% of the time after I corrected it. This was a voice conversation, so every correction should have been clear data. Aren’t these systems simply supposed to be pattern recognition? How is it outputting wildly different pronunciations (N > 5) with constant inputs?

      • TehPers@beehaw.org · 22 hours ago

        The models’ output is nondeterministic: responses are sampled from a probability distribution rather than computed the same way every time. Also, they tend to take a random seed (sometimes hidden, sometimes visible) as an input to that sampling.
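
        Rough sketch of what that means in practice (toy scores and a made-up candidate list, not anything Google actually runs): a plain softmax/temperature sampler draws from a probability distribution, so the same input can land on different outputs when the seed changes.

        ```python
        # Toy temperature sampling: the model scores candidates, and the
        # sampler draws from the resulting distribution using a random seed.
        import numpy as np

        def sample_token(logits, temperature=0.8, seed=None):
            rng = np.random.default_rng(seed)
            scaled = np.array(logits, dtype=float) / temperature
            probs = np.exp(scaled - scaled.max())
            probs /= probs.sum()
            return rng.choice(len(probs), p=probs)

        # Hypothetical pronunciation candidates for "Rach" with made-up scores;
        # the "wrong" reading is only slightly less likely than the right one.
        candidates = ["RAYCH (long a)", "RATCH (as in Ratched)", "RASH"]
        logits = [2.0, 1.7, 0.3]

        for seed in range(5):
            print(seed, candidates[sample_token(logits, seed=seed)])
        ```

        The input never changes; only the seed does, and the pick still flips between readings now and then.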

        • Powderhorn@beehaw.org OP · 16 hours ago

          How delightful. I mean, I knew there were reasons you don’t get the same results twice, but I’ve not dived into how all this works, as it seems to be complete bullshit. But it’s nice to hear that’s a feature.