• AceBonobo@lemmy.world
    2 days ago

    From reading the study, it seems like the workers didn’t even use it. Less than 2 queries per day? A third of participants used it once per week?

    This is a study of resistance to change or of malicious compliance. Or maybe it’s a study of how people react when you’re obviously trying to take their jobs.

    • Echo Dot@feddit.uk
      2 days ago

      I don’t think it’s people being resistant to change; I think it’s people understanding that the technology isn’t useful. The tagline explains it best:

      AI tech shows promise writing emails or summarizing meetings. Don’t bother with anything more complex

      It’s a gimmick, not a fully fleshed-out productivity tool, so of course no one uses it. That’s like complaining that no one uses MS Paint to produce high-quality graphics.

      • SpicyLizards@reddthat.com
        2 days ago

        Absolutely, and it’s a massive and undeserved cash cow for AI companies (e.g. Sam “Sister-Lovin’” Altman).

        AI is never an investment for businesses or individual users. It’s a bloated and unfulfillable promise that just makes users dumb and dependent, and destroys the very environment we need to survive.

        It also produces bad products (it’s easy to tell which devs use it from reviewing poor quality code).

        Not to mention the centralisation of power with the rich who are the problem in this world.

    • thehatfox@lemmy.world
      2 days ago

      The figures are the averages for the full trial period.

      So it’s possible they were making more queries at the start of the trial, but then mostly stopped when they found using Copilot was more a hindrance than a help.

      • Elvith Ma'for@feddit.org
        2 days ago

        I have a Copilot license at work. We also have an in-house “ChatGPT clone” - basically a private deployment of that model so that (hopefully) no input data gets used to train the models.

        There are some use cases that are neat. E.g. we’re a multilingual team, so having it transcribe, translate (and summarize) a meeting makes it easier to finalize and check the minutes. Coming back from a vacation and just asking it to summarize everything you missed in a specific area of your work (to get on track before checking everything chronologically) can be nice, too.

        Also, we fine-tuned a model to assist us in writing and explaining code in a domain-specific language with many strange quirks that we use for a tool, and that has poor support from off-the-shelf LLMs.

        But all of these cases have one thing in common: they do not replace the actual work, and they cover things that will be checked anyway (even the code one - as we know, there are still many flaws, but it’s usually great at explaining the code now, not so much at writing it). It’s just a convenient way to check your own work - and LLM hallucinations will usually be caught anyway.

    • r00ty@kbin.life
      2 days ago

      I think the issue is that there are usually competing targets here. From the top, they wanted to see use of Copilot. But their direct reports likely expected the same performance as before.

      So are you going to a) do it the way you’ve done it for 10 years, which you know will deliver on time, or b) ask HAL 9000 for help, potentially slowing yourself down, at least initially?

      I think there are gains to be made with AI for some jobs in some cases. But most people probably just got on with their work.