• 1 Post
  • 25 Comments
Joined 3 years ago
Cake day: July 5th, 2023

  • Their worldview quite literally stems from the idea that there should be Out-Groups whom the law binds but does not protect, and In-Groups whom the law protects but does not bind.

    That’s how they square their “don’t tread on me” attitude with their appetite for big government. They recognize that in order to properly humiliate and subjugate “the others,” they need a big, militarized state police force and constant surveillance, but if they ever fall victim to it themselves, they cry foul. They can’t learn the lesson until they’re caught up in it, and usually, instead of updating their opinion and voting accordingly, they simply drop out of the democratic process altogether, saying that “both parties are the same.”

    They are the uneducated simpletons that the founding fathers feared, which is why they originally wanted only educated people to vote. I’m, uh, starting to come around to the idea myself.




  • Oh, awesome: Palantir, the same company profiting off the genocide in Gaza.

    CEO Alex Karp has been a vocal supporter of Israel. In November 2023, he stated, “I am proud that we are supporting Israel in every way we can.”

    Since October 2023, Palantir has provided Israel with multiple AI-powered data analytics tools for military and intelligence purposes.

    In January 2024, Palantir held its board meeting in Israel and entered into a “strategic partnership” with Israel’s Ministry of Defense to help Israel’s “war effort.”

    In October 2024, Norway’s largest asset manager, Storebrand, divested its Palantir shares, worth $24 million, due to concerns that Palantir’s work for Israel might implicate Storebrand in violations of international humanitarian law and human rights.





  • But that’s the problem: AI people are pushing it as a universal tool. The huge push we saw to put AI in everything is kind of proof of that.

    People taking LLM responses at face value is a problem.

    So we can’t trust it, but in addition to that, we also can’t trust people on TV, or people writing articles for official-sounding websites, or the White House, or pretty much anything anymore. And that’s the real problem. We’ve cultivated an environment where facts and realities are twisted to fit a narrative, and then demanded that we give equal air time and consideration to literal false information peddled by hucksters.

    These LLMs probably wouldn’t be so bad if we didn’t feed them the same derivative and nonsensical BS we consume on a daily basis, but at this point we’ve introduced, and now rely on, a flawed tool that bases its knowledge on flawed information, and it just creates a positive feedback loop of bullshit. People are using AI to write BS articles that are then referenced by AI. It won’t ever get better; it will only get worse.


  • Here’s a news article about this, and what the snipped image doesn’t tell you is that it did actually give dosage recommendations.

    It gave him specific doses of illegal substances, and in one chat, it wrote, “Hell yes—let’s go full trippy mode,” before recommending Sam take twice as much cough syrup so he would have stronger hallucinations.

    It’s one thing to be so isolated from your community that you rely extensively on online relationships, but it’s quite a bit different to take that a step further and rely on a machine. Like, what do you think pets are for, my guy? Get a dog, man.


  • I don’t think using an inaccurate tool gives you extra insight into anything. If I asked you to measure the size of objects around your house and gave you a tape measure with inaccurate markings, would that make you better at measuring things? We learn by asking questions and getting answers. If the answers given are wrong, then you haven’t learned anything. It, in fact, makes you dumber.

    People who rely on AI are dumber, because using the tool makes them dumber. QED?





  • I think AI being used by teachers and administrators to off-load menial tasks is great. Teachers are often working something like 90 hours a week just to meet all the requirements put upon them, and a lot of those tasks don’t require much thought, just a lot of time.

    In that respect, yeah, sure, go for it. But at this point it seems like they’re encouraging students to use these programs as a way to off-load critical thinking and learning, and that… well, that’s horrifyingly stupid.


  • When I was in medical school, the one thing that surprised me the most was how often a doctor would see a patient, get their history and work-up, and then step out into the hallway to google the symptoms. It was alarming.

    Of course, the doctor is far more aware of ailments, and his googling is more sophisticated than just typing in whatever the patient says (you have to know which info in the patient history is important, because patients will include or leave out all sorts of things), but still. It was unnerving.

    I also saw a study way back when that found that hanging a decision-tree flowchart in emergency rooms and having nurses work through all the steps drastically improved patient care. Additionally, new programs can spot a cancerous mass on a radiograph or CT scan far earlier than the human eye can discern it, and that’s great, but… we still need educated and experienced doctors, because a lot of stuff looks like other stuff, and sometimes the best way to tell things apart is through weird tricks like “smell the wound: does it smell fruity? Then it’s this. Does it smell earthy? Then it’s this.”


  • I gotta be honest: whenever I find out that someone uses any of these LLMs or AI chatbots, hell, even Alexa or Siri, my respect for them instantly plummets. What these things are doing to our minds is akin to how your diet and cooking habits change once you start using DoorDash extensively.

    I say this with full understanding that I’m coming off as just some Luddite, but I don’t care. A tool is only useful insofar as it improves your life, and off-loading critical thinking does not improve your life. It actively harms your brain’s higher functions, making you a much easier target for propaganda and conspiratorial thinking. Letting children use this is exponentially worse than letting them use social media, and we all know how devastating the effects of that are… This would be catastrophically worse.

    But hey, good thing we dismantled the Department of Education! Wouldn’t want kids to be educated! Just make sure they know how to write a good AI prompt, because that will be so fucking useful.



  • No, any time she mentioned it I yelled and hit her. Of course I acknowledged it, mate, what kind of question is that? She made that point during every conversation about anything; there was nothing we could talk about that wasn’t, evidently, worse for women of color. It’s an invalidating response that tells your partner you don’t actually care about their troubles, because someone else has it worse.


  • This is a well-thought-out explanation of an amorphous feeling I would always get when discussing stuff with my now ex-wife. We’d be talking about some terrible thing the republikkkans were undertaking, and she’d always (always) inevitably point out, “…and it’s even worse for black and brown women.”

    And while, yeah, it is a true statement, it also kind of shuts down the conversation, ya know? So now are we only allowed to discuss the plight of black and brown women? Do bad things not happen to the rest of us?

    I don’t know. Every time it happened, it felt like she was trying to make me feel bad for thinking about anything BUT the plight of women of color, and I couldn’t really explain why it wasn’t adding to the discussion, but this write-up gave me the words.