• FauxLiving@lemmy.world

    To add to this, we already know that context switching causes a loss in performance.

    A person who’s thinking about how to solve a problem one way and then has to suddenly think about solving it in another way will perform worse.

    https://medium.com/@codewithmunyao/the-hidden-cost-of-context-switching-why-your-most-productive-hours-are-disappearing-43c5b501de19

    The Neuroscience Behind the Pain

    Context switching isn’t just annoying — it’s neurologically expensive. When you shift from debugging a race condition to answering emails, your brain doesn’t simply “change tabs.” It goes through a complex process:

    - Memory consolidation: Storing your current mental model
    - Attention disengagement: Breaking focus from the current task
    - Cognitive reloading: Building a new mental model for the next task
    - Re-engagement: Getting back into flow

    Research from Carnegie Mellon shows that even brief interruptions can increase task completion time by up to 23%. For complex cognitive work like programming, this cost multiplies dramatically.

    Here’s another article from CMU discussing the same thing: https://www.sei.cmu.edu/blog/addressing-the-detrimental-effects-of-context-switching-with-devops/

    What this study shows is that a person who is faced with an unexpected context switch performs worse on a task than a person who has spent the last 12 questions performing the task the same way.

    This exact problem would happen if you replaced AI with a calculator, or made a person swap from using paper to doing mental math. The problem here is context switching, not AI.

    The way to ensure that the problem is AI and not the context switch would be to continue the test and see whether the first group reverts to baseline after 12 questions, since 12 questions is how long the control group had to acclimate to the task before their last context swap at the start of the test.

    Also of note: this is a paper on arXiv, not a published one, so it has not gone through a peer-review process, which would certainly catch the failure to set up a proper control group.

    • chunes@lemmy.world

      Context switching isn’t just X — it’s Y.

      Are we sure this was written by a human?

        • chunes@lemmy.world

          Thanks.

          And I’m all for em dashes. After all, I started using them after reading enough books. It’s just that particular construct that strikes me as especially LLM-y.

          • luciferofastora@feddit.org

            AI was trained on human writing. If it produces a certain tone, then that’s probably a result of the material that was favoured in training it. That construction was common in human writing before it became common in AI too.

            What makes it stick out is when AI uses it in contexts where humans normally wouldn't, but this kind of assertion is common in scientific papers and articles. It would make sense to train an AI on scientific writing, since that tone sounds authoritative, like you have some idea of what you're talking about.

            So I don’t think this is an LLM-construct; it’s an instance of the original style that LLMs copy.

          • FauxLiving@lemmy.world

            I’d like to see a study on that; I see it mentioned so often that it’s almost achieved meme status.

            It could very well be a Baader–(👀)Meinhof phenomenon.