Not efficient enough:
https://customhydronutrients.com/Calcium-Carbonate-50-lb-_p_23351.html
Learn to shitpost, kid.
I mean it’ll still be a joke even if he gets elected, but it’s a bigger joke that’s largely independent of him.
Preprint journalism fucking bugs me because the journalists themselves can’t actually judge whether anything is worth discussing, so they just look for clickbait shit.
This methodology for discovering what interventions do in human environments seems particularly deranged to me, though:
We address this question using a novel method – generative social simulation – that embeds Large Language Models within Agent-Based Models to create socially rich synthetic platforms.
LLM agents trained on social media dysfunction recreate it unfailingly. No shit. I understand they gave them personas to adopt as prompts, but prompts cannot and do not override training data, as we’ve seen over and over. LLMs fundamentally cannot maintain an identity from a prompt. They are context engines.
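For concreteness, here is a minimal sketch of the setup the quoted paper seems to describe: each agent gets a persona injected as a system prompt and takes turns posting into a shared feed. `call_llm` is a hypothetical stand-in for a real model API, not anything from the paper; the point is that the persona only ever arrives as prompt text, so nothing constrains the model's training-driven tendencies.

```python
# Hedged sketch of persona-prompted LLM agents inside an agent-based model.
# All names here (Agent, call_llm, step) are illustrative, not the paper's.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    persona: str                              # delivered only via system prompt
    history: list = field(default_factory=list)

def call_llm(system_prompt: str, feed: list) -> str:
    # Placeholder: a real implementation would call an actual model here.
    # The persona is just text in the prompt; pretraining still dominates
    # the output distribution, which is the objection raised above.
    return f"[{system_prompt}] reacting to {len(feed)} posts"

def step(agents: list, feed: list) -> list:
    # One simulation round: every agent reads the feed and posts once.
    for agent in agents:
        post = call_llm(agent.persona, feed)
        agent.history.append(post)
        feed.append((agent.name, post))
    return feed

agents = [Agent("a1", "cheerful gardener"), Agent("a2", "angry pundit")]
feed = []
for _ in range(3):
    step(agents, feed)
print(len(feed))  # 2 agents x 3 rounds = 6 posts
```

With a stub like this the "social dynamics" you observe are whatever the generator tends to produce, which is exactly the circularity problem with drawing conclusions from the simulation.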
Particularly concerning are the silo claims. LLMs riffing on a theme over extended interactions, because the tokens keep coming up that way, is expected behavior. LLMs are fundamentally incurious and even more prone than humans to locking into one line of text, since the longer conversation reinforces it.
Validating what the authors describe as a novel approach might be more warranted than drawing conclusions from it.
Cool tech, but I question its usefulness. They focus on clinical use in their language, but anybody who’s on telemetry orders needs waveforms, not beats per minute. I care if they’re suddenly in afib, not that they’re a little tachy after getting up to go to the bathroom.