• rozodru@piefed.social · 7 days ago

    It feels like all these companies are desperately trying to push the camel that is LLMs through the eye of a needle. None of them work anymore, and all of them have seen a very noticeable decline since last fall. What’s the point of opening more data centers and funneling yet more power to them when 9 times out of 10 the “solution” is going to be a hallucination? Just look at the massive decline of Anthropic, for example. Claude is absolutely useless now. You could tell it “hey, my Linux distro is summoning the chaos god Nurgle and now my cat has some crazy disease where it vomits relentlessly whenever I launch the GNOME DE on Arch,” and it’ll reply “this is a known issue…” No, it’s not. Not EVERY issue I have is a known issue, and the solutions it hands out are hallucinated, because to the LLM a logical-sounding answer is good enough even if it leans on libraries that don’t exist.

    ChatGPT is even worse, because it’ll give you some bullshit rant before a “solution,” and again, that rant AND solution are a hallucination 9 times out of 10. They’re all like this. They’re a complete waste of time. And anyone who keeps investing in this slop is hoping, praying, begging that it gets better and gets embraced, but it’s not going to happen. At one point it was decent, briefly, and then they all started their massive decline.

    It’s like going to Home Depot and asking the CSR which hammer to buy, and they say, “Well, this hammer right here has some major investments behind it, it’s been developed for years, BUT it has a 90% failure rate.” Why would I buy that hammer? Why keep forcing a tool on users that simply doesn’t work, a tool that no one wants for the very reason that it doesn’t work?

    In a shocking twist, it turns out people don’t like being lied to.

    • reksas@sopuli.xyz · 7 days ago

      The only reason I can think of is that if LLMs became really widely used, the ones who control them would gain enormous control over everything. They can spy through the models, and they can use the models to influence the people who use them.

      I don’t think it’s ultimately even about the money. Who cares about money if you have control, which you can use to get more money and to pursue whatever other horrible goals they have in mind?

      I don’t even want to think about what kind of world we would get if LLMs became successful… It’s bad enough already.

    • altphoto@lemmy.today · 6 days ago

      Search used to be about finding needles in a haystack. Now AI is all about filling the haystack with needles so you can find the one that agrees the most with your own best guess… yada yada yada, this is worth $10,000,000.58 because I purchased more toilet paper?

      They tell us they’re selling our data to these horrible companies that are working with the US gov to disappear people in broad daylight. Murderers separating kids from their parents, or even killing their friends and family members right in front of them and in front of all of us… but you need the latest Android or our tracking app won’t work as well.

      I used to peddle Android to everyone I knew because it was Linux-ish. Well, now I suggest avoiding technology as much as possible. Don’t feed the machine anymore. Let that whole industry dry up past raisin and into cardboard territory. We’ll be better for it.