Not sure if this is the best community to post in; please let me know if there’s a more appropriate one. AFAIK Aii@programming.dev is meant for news and articles only.
AI is just a tool like anything else. What’s the saying again? "AI doesn’t kill people, capitalism kills people"?
I do AI research for climate and other things and it’s absolutely widely used for so many amazing things that objectively improve the world. It’s the gross profit-above-all incentives that have ruined “AI” (in quotes because the general public sees AI as chatbots and funny pictures, when it’s so much more).
The quotes are because “AI” doesn’t exist. There are many programs and algorithms being used in a variety of ways. But none of them are “intelligent”.
There is literally no intelligence in a climate model. It’s just data + statistics + compute. Please stop participating in the pseudo-scientific grift.
And this is where you show your ignorance. You’re using the colloquial definition of intelligence and applying it incorrectly.
By definition, a worm has intelligence. The academic, or biological, definition of intelligence is the ability to make decisions based on a set of available information. It doesn’t mean that something is “smart”, which is how you’re using it.
“Artificial Intelligence” is a specific definition we typically apply to an algorithm that’s been modelled after the real world structure and behaviour of neurons and how they process signals. We take large amounts of data to train it and it “learns” and “remembers” those specific things. Then when we ask it to process new data it can make an “intelligent” decision on what comes next. That’s how you use the word correctly.
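To make that concrete, here’s a toy sketch (Python, all names mine, not any real library) of a single artificial neuron: a weighted sum of incoming signals pushed through an activation function. Training adjusts the weights so the outputs match known examples; inference applies the learned weights to new data.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum + sigmoid activation.

    Loosely modelled on a biological neuron integrating incoming
    signals and 'firing' when the total is high enough.
    """
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Squash to (0, 1); close to 1 means the neuron fires strongly.
    return 1 / (1 + math.exp(-total))

# "Training" would nudge these weights to fit example data;
# here they're just hand-picked for illustration.
print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))  # ~0.668
```

Real networks stack millions of these and learn the weights automatically, but the per-unit mechanics are this simple.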
Your ignorance didn’t make you right.
Except the Neural Net model doesn’t actually reproduce everything real, living neurons do. A mathematician in the 70s said, “hey what if this is how brains work?” He didn’t actually study brains, he just put forward a model. It’s a useful model. But it’s also an extreme misrepresentation to say it approximates actual neurons.
lol ok buddy you definitely know more than me
FWIW I think you’re conflating AGI with AI, maybe learn up a little
The term AGI had to be coined because the things they called AI weren’t actually AI. Artificial Intelligence originates from science fiction. It has no strict definition in computer science!
Maybe you learn up a little. Go read Isaac Asimov
We have the term AGI because we sometimes want to communicate something more specific, and AI is too broad of a term.
lol Again, you definitely know more than me
I always get such a kick reading comments from extremely overly confident people who know nothing about a topic that I’m an expert in, it’s really just peak social media entertainment
Please tell me you don’t actually think “AGI” is possible.
Are you talking about AI, or LLMs branded as AI?
Actual AI is accurate and efficient because it is designed for specific tasks. Unlike LLMs, which are just fancy autocomplete.
LLMs are part of AI, so I think you’re maybe confused. You can say anything is just fancy anything, that doesn’t really hold any weight. You are familiar with autocomplete, so you try to contextualize LLMs in your narrow understanding of this tech. That’s fine, but you should actually read up because the whole field is really neat.
Literally, LLMs are extensions of the techniques developed for autocomplete in phones. There’s a direct lineage. Same fundamental mathematics under the hood, but given a humongous scope.
That’s not true.
How is this untrue? Generative pre-training is literally training the model to predict what might come next in a given text.
That’s not what an LLM is. That’s part of how it works, but it’s not the whole process.
They never claimed that it was the whole thing. Only that it was part of it.
You might keep hearing people say this, but that doesn’t make it true (and it isn’t true).
Even LLMs are useful for coding, if you keep them in their autocomplete lane instead of expecting them to think for you.
Just don’t pay a capitalist for it; a tiny, power-efficient model that runs on your own PC is more than enough.
Yes technology can be useful but that doesn’t make it “intelligent.”
Seriously why are people still promoting auto-complete as “AI” at this point in time? It’s laughable.
FTFY.