I use AI to develop software. However when I’m looking for libraries to do things if I see a CLAUDE.md file I have to look and see when it was added and hold it against the library if it’s early in the history.
Interesting. I’m unfamiliar with the purpose of that file as originally intended, but if I understand it right, it probably can be used to detect PRs with ai-made code. Tell Claude to write a particular string in every PR, and then reject all PRs with that string.
It’s not hypocritical. Because you use AI to code, you know how easy it is to just let the AI do it’s thing and not check it’s work. It’s almost like a sirens song. So you know the odds that a library that was coded with AI probably wasn’t checked by a human. That’s just called experience.
It’s the difference between checking for questions in stack overflow and implementing solutions VS pasting every SO solution blindly until something works.
I do use autocomplete and ask plenty questions, sometimes even use an agent so it makes small changes that I then review and test, but I would never commit unchecked changes, and a claude.md implies that the AI is coding AND committing without supervision.
I can’t stress enough how different those scenarios are.
You know what’s funny?
I use AI to develop software. However when I’m looking for libraries to do things if I see a CLAUDE.md file I have to look and see when it was added and hold it against the library if it’s early in the history.
It’s like prewar steel.
I also recognize it’s hypocritical.
https://codeberg.org/irelephant/kittygram/src/branch/main/CLAUDE.md
meow
Interesting. I’m unfamiliar with the purpose of that file as originally intended, but if I understand it right, it probably can be used to detect PRs with ai-made code. Tell Claude to write a particular string in every PR, and then reject all PRs with that string.
I love this sabotage. Has this worked against anyone that you know of yet?
It’s not hypocritical. Because you use AI to code, you know how easy it is to just let the AI do it’s thing and not check it’s work. It’s almost like a sirens song. So you know the odds that a library that was coded with AI probably wasn’t checked by a human. That’s just called experience.
It’s the difference between checking for questions in stack overflow and implementing solutions VS pasting every SO solution blindly until something works.
I do use autocomplete and ask plenty questions, sometimes even use an agent so it makes small changes that I then review and test, but I would never commit unchecked changes, and a claude.md implies that the AI is coding AND committing without supervision.
I can’t stress enough how different those scenarios are.