cross-posted from: https://lemmy.today/post/52276726

Dawkins points out how the goalposts have been moved from the Turing test without justification and claims it can be viewed as a test of consciousness.

  • unexposedhazard@discuss.tchncs.de

    I talked to an acquaintance earlier who is writing a computer science paper. His university advisor told him to chuck it into Claude, and if Claude says it would pass, he'll accept it for a conference. Claude didn't accept it, and the worst part is, my acquaintance just agreed with this outcome, saying it's a valid test because Claude is "state of the art".

    Shit's fucked, man. LLMs are automated thought termination.

    • Blue_Morpho@lemmy.world

      ? Claude can’t accept anything.

      Did it find badly worded sentences that needed correction? Did it find prior work that the paper unknowingly duplicated?

      • unexposedhazard@discuss.tchncs.de

        Obviously in the wider sense of "accept". As in, they asked it whether the paper would pass review and whether there were any major issues with it. But the answer is also completely irrelevant; it's the process that's the problem.

    • Doorbook@lemmy.world

      Is he an undergraduate? Because this only makes sense if he's an undergraduate and the paper is just practice in how to research something.

      • jacksilver@lemmy.world

        I mean, I don't expect much from advisors, but you're not really advising much if you're leaving your job up to an LLM.