• zieg989@programming.dev · 4 points · 16 days ago

    I would not be surprised if Anthropic actually hired a real developer to make these PRs as a marketing stunt.

    • BestBouclettes@jlai.lu · 4 points · 16 days ago

      Well, if the model detected an issue, and a human tested it to make sure it was real and then fixed it, I think that’s an acceptable use of AI tools.

  • SkunkWorkz@lemmy.world · 2 points · 15 days ago

    The ffmpeg team was mad at Google when it reported a bug that had been found automatically by an AI. Google reported the bug without providing a fix and gave an ultimatum: it would publicize the bug report after 60 days. That’s what pissed off the ffmpeg devs. Not to mention it was a very obscure bug, like ffmpeg not decoding a video file from a ’90s video game correctly.

    Anthropic, on the other hand, found a bug and provided a fix. So why would they be mad if the fix is properly written and actually fixes the bug?

  • railcar@midwest.social · 1 point · 16 days ago

    It’s OK to hate AI slop and recognize the immediate threat it poses to cybersecurity. At least they are trying to mitigate it; there has been no similar action from the other frontier labs. They are deliberately helping open source projects with little funding keep pace.

    https://www.anthropic.com/glasswing

    • sunbeam60@feddit.uk · 0 points · 16 days ago

      Anthropic right now are the good people.

      That probably won’t last. But out of a bad bunch they’re the least bad.

      • 0xDREADBEEF@programming.dev · 1 point · 16 days ago · edited

        > the good people.

        You are limiting your own intelligence by thinking companies can be described in those words.

        They are not good. They are profit-seeking. Profit-seeking doesn’t necessarily mean evil, but it can never mean good. A non-profit whose goal is to improve the community around it, a co-op whose goal is to treat its workers with respect, and so on can all be described as ‘good’ to varying degrees, but no for-profit entity, especially a publicly traded one, can ever be described as ‘good’.

        • hitmyspot@aussie.zone · 1 point · 16 days ago

          Hence their point about being the best of a bad bunch. Remember that the people making decisions are people. A corporation has no soul and only seeks profit, but people work for corporations, and they can make good decisions and be good people whomever they work for.

          There were good people who worked for the Nazis. Unless you think the cleaner of the Nazi headquarters, for instance, cleaned as a way to do evil.

          However, I take your point. I just think that’s not the point of this discussion, and it’s no different from “both sides are bad” in politics. It lacks nuance.

  • spectrums_coherence@piefed.social · 1 point · 16 days ago · edited

    LLMs are very good at programming when there are strong guardrails around them. For example, exploit testing is a great use case because getting a shell is getting a shell.

    They kind of act as a smarter version of the infinite monkeys, one that can try and iterate much more efficiently than a human does.

    On the other hand, in tasks that require creativity or architecture, and in projects without guardrails, they tend to do a terrible job, often yielding solutions that are more convoluted than they need to be, or just plain incorrect.

    I find it is yet another replacement for “pure labor”, where the least intelligent part of programming, i.e. writing the code, is automated away. While I will still write code from scratch when I am trying to learn, I likely will be able to automate some code writing if I know exactly how to implement it in my head and I have access to plenty of testing to guarantee correctness.
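
    To make that loop concrete, here’s a rough sketch of the idea (the model call is stubbed out, and every name is made up for illustration):

    ```python
    import subprocess
    import sys

    def propose_candidate(attempt: int) -> str:
        """Stand-in for an LLM call; a real harness would query a model API."""
        return f"print({attempt} * 2)"  # hypothetical generated snippet

    def guardrail_passes(snippet: str) -> bool:
        """Deterministic oracle. For exploit testing the check would be
        'did we get a shell?'; here it is just 'did the program print 84?'."""
        result = subprocess.run(
            [sys.executable, "-c", snippet],
            capture_output=True, text=True, timeout=5,
        )
        return result.stdout.strip() == "84"

    # The "smarter infinite monkey" loop: retries are cheap, and
    # acceptance is fully mechanical, so failed candidates cost nothing.
    for attempt in range(100):
        candidate = propose_candidate(attempt)
        if guardrail_passes(candidate):
            print(f"accepted on attempt {attempt}: {candidate}")
            break
    ```

    The oracle does all the judging; the model only proposes.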

    • Serinus@lemmy.world · 1 point · 16 days ago

      People have trouble with the middle ground. AI is useful in coding, but it’s not a full replacement. That should be fine, except you’ve got the AI tech bros and CEOs on one end thinking it will replace all labor, and you’ve got the backlash to that on the other end wanting to constantly talk about how useless it is.

  • CannonFodder@lemmy.world · 1 point · 16 days ago

    AI tools can detect potential vulnerabilities and suggest fixes. You can still go in by hand, verify the problem, and carefully apply a fix.
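
    For instance (a generic illustration, not one of the actual findings): a scanner flags string-built SQL, a human confirms the injection is actually reachable, and then applies the parameterized-query fix by hand.

    ```python
    import sqlite3

    def get_user_unsafe(conn: sqlite3.Connection, name: str) -> list:
        # The kind of pattern a scanner flags: user input interpolated
        # straight into the SQL, so name = "x' OR '1'='1" dumps every row.
        return conn.execute(
            f"SELECT id FROM users WHERE name = '{name}'"
        ).fetchall()

    def get_user_fixed(conn: sqlite3.Connection, name: str) -> list:
        # The hand-verified fix: a parameterized query, so the driver
        # treats the input as data, never as SQL.
        return conn.execute(
            "SELECT id FROM users WHERE name = ?", (name,)
        ).fetchall()
    ```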

    • shirasho@feddit.online · 1 point · 16 days ago

      AI is actually SUPER good at this, and it’s one of the few places I think AI should be used (as one of many tools, ignoring the awful environmental impacts of AI and assuming an on-prem model). AI is also good at detecting code performance issues.

      With that said, all of the recommended fixes should be applied by hand.
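
      A toy example of the kind of performance issue these tools reliably catch (hypothetical code, not from any real review):

      ```python
      def common_slow(a: list, b: list) -> list:
          # O(len(a) * len(b)): every membership test rescans the list b.
          return [x for x in a if x in b]

      def common_fast(a: list, b: list) -> list:
          # O(len(a) + len(b)): build a set once, then each lookup is O(1).
          b_set = set(b)
          return [x for x in a if x in b_set]
      ```

      Easy for a model to spot, and just as easy for a human to verify with a quick benchmark before merging.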

  • sun_is_ra@sh.itjust.works · 0 points · 16 days ago

    Maybe he meant the code quality was so good it’s like a human wrote it.

    After all, if the code is good and follows all the best practices of the project, why reject it just because it was an AI that wrote it? That’s racism against machines.