• pyre@lemmy.world · 10 points · 11 hours ago

      i recently got access to the paid version of Claude at my job. they wanted us to automate some routine tasks, fine. i had it make something, then asked how i could save it as a skill for future use. it said it doesn’t have skills or macros. i said what, yes there are skills right there in the customize section. it came back with the usual “you’re right! let me check… oh yes indeed there is such a function. my bad. here’s more information from the web: …”

      like… oh my god. imagine if this were an unpaid intern. they would be immediately shot into the atmosphere. but instead we pay for this shit.

      • schnurrito@discuss.tchncs.de · 3 points · 11 hours ago

        Yes, such things can happen… I once asked an LLM a few questions about myself (under my real name), information that is publicly available on the Internet (i.e. should be in its training data). It answered a simple yes-or-no question wrongly. Then I asked a follow-up question, which it answered more correctly, but that answer contradicted the wrong one, and the model itself noted “this seems to contradict my previous answer that…”.

    • VinegarChunks@lemmus.org · 5 points · 13 hours ago

      In my experience, Microsoft Copilot is wildly inaccurate about facts concerning Microsoft’s own products, like Teams, or even Copilot itself.

      • schnurrito@discuss.tchncs.de · +4 / −1 · 12 hours ago

        All AI does is generate plausible-sounding text. It doesn’t care about whether it is true or false.

        I am not generally anti-AI, nor generally pro-AI. There are good uses of AI and bad uses. For example, I used AI to generate my profile picture here; the creation of art (as long as there is human review) is one of the best uses of AI I can think of…

        But asking it for factual information, expecting it to be correct, and making decisions based on it? Anyone who does that deserves whatever negative consequences follow.

        • HalfSalesman@lemmy.world · 1 point · 12 hours ago

          AI is good for quickly generating “realistic enough” stat sheets for pen-and-paper campaigns. Not for actual research that affects people.