

The issue is the scale. One comment can be fact-checked in under an hour. Thousands, not so much.
Also, it’s not purely about accuracy. I want to be having discussions with other humans, not software.
Thanks for bringing this up to the group, I appreciate it! edit: typo
Bots could be used to spam LLM comments, but users can effectively act as manual bots with an LLM assisting them.
Unless the prompter goes out of their way to obfuscate the text manually, which sort of defeats the purpose, LLM comments tend to be very samey. The generated text would stand out if multiple users were using the same or even similar prompts. And OP's stands out even without the admission.
edit: to clarify, I mean stand out to the human eye; human mods would have to be the ones removing the comments