The letters that make up words are a common blind spot for AIs: since they are trained on strings of tokens (roughly words), they don’t have a good concept of which letters are inside those words or what order they are in.
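To make that concrete, here is a toy sketch (not a real tokenizer, and the merge table is made up for illustration) of how a word can reach a model as a couple of opaque chunks rather than as a sequence of letters:

```python
# Toy illustration (NOT a real tokenizer): the merge table below is
# invented for this example. Real tokenizers learn merges from data.
def toy_tokenize(text):
    merges = ["straw", "berry", "er"]  # hypothetical learned chunks
    tokens = []
    i = 0
    while i < len(text):
        for m in merges:
            if text.startswith(m, i):
                tokens.append(m)  # the model sees this chunk as one unit
                i += len(m)
                break
        else:
            tokens.append(text[i])  # fall back to a single character
            i += 1
    return tokens

print(toy_tokenize("strawberry"))  # ['straw', 'berry']
```

A model operating on `['straw', 'berry']` never directly sees the individual letters, which is why "how many Rs?" is surprisingly hard for it.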
I find it bizarre that people find these obvious cases to prove the tech is worthless. Like saying cars are worthless because they can’t go under water.
Not bizarre at all.
The point isn’t “they can’t do word games therefore they’re useless”, it’s “if this thing is so easily tripped up on the most trivial shit that a 6-year-old can figure out, don’t be going round claiming it has PhD level expertise”, or even “don’t be feeding its unreliable bullshit to me at the top of every search result”.
A six-year-old can read and write Arabic, Chinese, Ge’ez, etc., and yet most people with PhD-level expertise probably can’t, and it’s probably useless to them. LLMs can do this too. You can count the number of letters in a word, but so can a program written in a few hundred bytes of assembly. It’s completely pointless to make LLMs do that, as it would just make them way less efficient while adding nothing useful.
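The point about letter counting being trivial for ordinary code is easy to demonstrate; in Python it is a one-liner, no model required:

```python
# Counting letters is trivial for ordinary code, even though it can
# trip up an LLM that sees tokens rather than characters.
word = "strawberry"
print(word.count("r"))  # 3
```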
So if the AI can’t do it, then that’s just proof that the AI is too smart to be able to do it? That’s your argument, is it? Nah, it’s just crap.
You think that just because you attached it to an analogy, it makes sense. That’s not how it works. Look, I can do it too:
My car is way too technologically sophisticated to be able to fly, therefore AI doesn’t need to be able to work out how many Rs are in “strawberry”.
See how that made literally no sense whatsoever.
LOL, it seems like every time I get into a discussion with an AI evangelical, they invariably end up asking me to accept some really poor analogy that, much like an LLM’s output, looks superficially clever at first glance but doesn’t stand up to the slightest bit of scrutiny.
I don’t want to defend AI again, but it’s a technology: it can do some things and can’t do others. By now this should be obvious to everyone, except to the people who believe everything commercials tell them.
358 instances (so far) of lawyers in Australia submitting AI-generated evidence that “hallucinated”.
And this week one was finally punished.
Ok? So what you are saying is that some lawyers are idiots. I could have told you that before AI existed.
How many people do you think know that AIs are “trained on tokens”, and understand what that means? It’s clearly not obvious to those who don’t, which is roughly everyone.
You don’t have to know about tokens to see what AI can and cannot do.
Go to an art museum and somebody will say ‘my 6 year old can make this too’, in my view this is a similar fallacy.
That makes no sense. That has nothing to do with it. What are you on about.
That’s like watching tv and not knowing how it works. You still know what to get out of it.
Then why is Google using it for questions like that?
Surely it should be advanced enough to realise its weakness with this kind of question and just not give an answer.
They are using it for every question. It’s pointless. The only reason they are doing it is to blow up their numbers.
… they are trying to be in front, so that some future AI search doesn’t capture their market share. It’s a safety thing, even if it’s not working for all types of questions.
Ding ding ding.
It’s so they can have impressive metrics for shareholders.
“Our AI had n interactions this quarter! Look at that engagement!”, with no thought put into what user problems it solves.
It’s the same as web results in the Windows start menu. “Hey shareholders, Bing received n interactions through the start menu, isn’t that great? Look at that engagement!”, completely obfuscating that most of the people who clicked are probably confused elderly users who clicked on a web result without realising.
Line on chart must go up!
Yeah, but … they also can’t just do nothing and possibly miss out on something, especially if they’ve already invested a lot.
Understanding the bounds of tech makes it easier for people to gauge its utility. The only people who desire ignorance are those who profit from it.
Saying “it’s worth trillions of dollars huh” isn’t really promoting that attitude.
Sure. But you can literally test almost all frontier models for free. It’s not like there is some conspiracy or secret. Even my 73-year-old mother uses it and knows its general limits.
Well it also can’t code very well either
Removed by mod
I feel like that was supposed to be an insult but because it made literally no sense whatsoever, I really can’t tell.
No, not really, just an observation. It literally said you are a boring person. Not sure what’s not to get.
Bye.
You need to get back on the dried frog pills.