• 0 Posts
  • 3 Comments
Joined 22 hours ago
Cake day: February 28th, 2026

  • there are many use cases, and you’ve neglected one: linguistic analysis can be used to identify a person and to link them to other accounts. i’m not saying it’s likely or apocalyptic, but it’s a real and present risk. using an LLM to “sanitize” your outputs can prevent this.

    from a privacy perspective, everyone should do this using a locally hosted LLM. from a person-that-uses-the-internet perspective, i would absolutely hate it if every article and every comment looked like an identical brand of ai slop.
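
    to make the linkage risk concrete, here’s a toy sketch (not a real attribution system — those use far richer features and models). it compares the relative frequencies of a handful of common function words, a classic stylometric signal, between two text samples using cosine similarity; the word list and samples are made up for illustration.

    ```python
    # toy stylometric linkage sketch: two samples that say different things
    # but share a writing "fingerprint" in their function-word usage.
    from collections import Counter
    import math

    # a tiny, illustrative set of function words (real systems use hundreds)
    FUNCTION_WORDS = ["the", "a", "of", "and", "to", "i", "it", "is", "you", "that"]

    def profile(text: str) -> list[float]:
        """relative frequency of each function word in the text."""
        words = text.lower().split()
        counts = Counter(words)
        total = max(len(words), 1)
        return [counts[w] / total for w in FUNCTION_WORDS]

    def cosine(a: list[float], b: list[float]) -> float:
        """cosine similarity between two frequency vectors, in [0, 1] here."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        if na == 0 or nb == 0:
            return 0.0
        return dot / (na * nb)

    # different content, similar style -> high similarity score
    sample_a = "i think that it is the best of the options and you know it"
    sample_b = "i believe it is the finest of the choices and you see it"
    similarity = cosine(profile(sample_a), profile(sample_b))
    ```

    the point of “sanitizing” through an LLM is that it perturbs exactly these distributional habits, so profiles from your different accounts stop correlating.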


  • a layperson cannot be relied upon to draw meaningful conclusions from a scholarly article. i learned this when i tried to do it. have you ever tried to read a spanish book, without knowing spanish, with nothing but an english-spanish dictionary? it’s very slow going, and it works alright until someone speaks in idiom or metaphor; even then you can mostly still get it. with scholarly articles, you often can’t.

    moreover, it’s a waste of time. if it takes you 30 hours to look up every term and graph, but your biology friend could have synthesized it for you in 20 minutes, there’s an obvious solution here. if an LLM can save you those 30 hours, and your biology friend those 20 minutes, it’s a useful tool.


  • hi friends i hope you’re well.

    i worked a laborious job and experienced a phenomenon i refer to as “parasitic thought”: someone provides you all of the information a person would require to reach the correct conclusion, and then stares at you. they want you to crunch the info for them.

    i feel like one of those parasites in my agent interactions. i know i COULD think, but you can do it too, lil buddy. go on. do it for me.

    i don’t know about “reasonable” or “ethical” or “polite,” but in my experience: if someone just regurgitates some clank clank slop slop, it reads as hostile. “i can’t be bothered to communicate with you, here, read this wall of gpt-vomit”

    my instinct is to copy and paste, “LLM agent of my choice, what’s this person trying to say to me?” and then skim the ai synthesized summary of the ai composed body text generated from some idiot’s faint echoes of thought.

    in the words of your highschool biology teacher, the human is the powerhouse of the agentic loop. in my unimportant opinion, responsible use of genai agents means the output should be indistinguishable from, if not better than, something you wrote by hand.

    there are privacy implications. linguistic analysis can be used to identify you. from a privacy perspective, the internet would be a better place if everyone fed their carefully formed thoughts to an LLM and said “make this look like chatgpt 3 wrote it.”