• krunklom@lemmy.zip

    I really don’t understand this perspective. I truly don’t.

    You see a new technology with flaws and just assume that those flaws will always be there and the technology will never progress.

    Like. Do you honestly think this is the one technology that researchers are just going to say “it’s fine as-is, let’s just stop improving it”?

    You don’t understand the first thing about how it works, but people like you are SO certain that the way it is now is how it will always be, and that because there are flaws, developing it further is pointless.

    I just don’t get it.

    • CubitOom@infosec.pub

      I’ve actually worked professionally in the field for a couple of years, since it was interesting to me originally. I’ve built RAG architecture backends for self-hosted FOSS LLMs, I’ve fine-tuned LLMs on new data, and I’ve even taken the opposite approach and embraced the hallucinations, since I thought they could be useful for more creative tasks (I think this area still warrants research). I also enjoy TTS and STT use cases and have FOSS models for those on most of my devices.
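      Since I mentioned RAG backends, here’s a minimal, self-contained sketch of what the retrieval step looks like, just to illustrate the idea. The embed() function and the toy corpus are stand-ins for whatever self-hosted FOSS embedding model and document store a real backend would use; nothing here is tied to a specific library.

      ```python
      import hashlib
      import numpy as np

      def embed(text: str, dim: int = 8) -> np.ndarray:
          # Toy stand-in for a real embedding model: hash the text into a
          # deterministic unit vector so the sketch runs with no model installed.
          seed = int(hashlib.sha256(text.encode()).hexdigest(), 16) % (2**32)
          v = np.random.default_rng(seed).normal(size=dim)
          return v / np.linalg.norm(v)

      def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
          # Rank documents by cosine similarity to the query embedding.
          q = embed(query)
          doc_vecs = np.stack([embed(d) for d in corpus])
          scores = doc_vecs @ q  # unit vectors, so dot product == cosine similarity
          return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

      def build_prompt(query: str, corpus: list[str]) -> str:
          # Put retrieved passages into the prompt so the LLM answers from the
          # provided context instead of relying on (possibly hallucinated) recall.
          context = "\n".join(retrieve(query, corpus))
          return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

      docs = ["The server listens on port 8080.", "Backups run nightly at 02:00."]
      print(build_prompt("When do backups run?", docs))
      ```

      A production version would swap embed() for the actual model call and keep the vectors in a persistent store, but the overall shape really is about that simple.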

      I’ll admit that the term AI is extremely vague; it’s like saying you study medicine: it’s a big field. But I keep coming to the conclusion that LLMs, and predictive generative models in general, simply do not work for the use cases they’re being marketed for to consumers, CEOs, and governments alike.

      This " AI race" happened because Deepseek was able to create a model that was more or less equivalent to OpenAI and Anthropic models. It should have been seen as a race between proprietary and open source since deep seek is one of the more open models at that performance level. But it became this weird nationalist talking point on both countries instead.

      There are a lot of things the US actually is in a race with China on, many of which would have an immediate impact: renewable energy, international respect, healthcare advances, military sufficiency, human rights, food supplies, and affordable housing, just to name a few.

      The promise of AI is that it can somehow help in the above categories eventually, and that’s cool. But we don’t need AI to make improvements to them right now.

      I think AI is a giant distraction, while the talk of nationalistic races is just being used for investor buy-in.