As a counter: you only think you know what an apple is. You have had experience interacting with N instances of objects which share a sufficient set of “apple” characteristics. I have had similar experiences, but not identical ones. You and I merely agree that there are some imprecise bounds of observable traits that make something “apple-ish”.
Imagine someone who has never even heard of an apple. We put them in a class for a year and train them on all possible, quantifiable traits of an apple. We expose them, in isolation, to:
- all textures an apple can have, from watery and crisp to mealy
- all shades of apple colors appearing in nature and at different stages of their existence
- a 1:1 holographic projection of various shapes of apple
- sample weights, similar to the volume and density of an apple
- distilled/artificial flavors and smells for all apple variations
- and extend this training to all parts of the apple (stem, skin, core, seeds)…
You can go as far as you like, giving this person a PhD in botanical sciences, just as long as nothing they experience is a combination of traits that would normally be described as an apple.
Now take this person out of the classroom and give them some fruit. Do they know it’s an apple? At what point did they gain the knowledge; could we have pulled them out earlier? If we only covered Granny Smith green apples, is their tangential expertise useless in a conversation about Gala apples?
This isn’t even so far-fetched. We have many expert paleontologists, and nobody has ever seen a dinosaur. Hell, they generally don’t even have real, organic pieces of the animals. Just rocks in the shape of bones, footprints, and other tangential evidence we can find in the strata. But just from that narrow study, they can make useful contributions to other fields like climatology or evolutionary theory.
We could, with our current tech and enough resources, make something that matches the complexity of the human brain. You just need a shit ton of processing power and lots of well-groomed data. With even more dedication we might match the dynamic behavior, mirroring the growth and development of the brain (though that’s much harder). Would it be as efficient and robust as a normal brain? Probably not. But it could be indistinguishable in function; just as fallible as any human working from the same sensory input.
At a higher complexity it ceases being a toy Chinese Room and turns into a Philosophical Zombie. But if it can replicate the reactions of a human… does intentionality, personhood or “having a mind” matter? Is it any less useful than, say, an average employee who might fuck up an email or occasionally fail to grasp a problem or be sometimes confidently incorrect?
An LLM only happens to be trained on text because it’s cheap and plentiful, but the framework of a neural network could be applied to any data. The human brain consumes about 125MB/s in sensory data, conscious thought grinds along at about 10 bits/s, and each synapse could store about 4.7 bits of information, for a total memory capacity in the range of ~1 petabyte. That system is certainly several orders of magnitude more powerful than any random LLM we have running in a datacenter, but building something comparable is not out of the realm of possibility.
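The arithmetic behind those figures is easy to sanity-check. A rough sketch, assuming a ballpark synapse count of ~10^15 (not stated in the post; published estimates span roughly 10^14 to 10^15):

```python
# Back-of-envelope check of the brain figures quoted above.
SYNAPSES = 1e15          # assumed synapse count (estimate, not from the post)
BITS_PER_SYNAPSE = 4.7   # per the post

total_bits = SYNAPSES * BITS_PER_SYNAPSE
total_petabytes = total_bits / 8 / 1e15   # bits -> bytes -> petabytes
print(f"memory capacity ~{total_petabytes:.2f} PB")  # ~0.59 PB, i.e. the ~1 PB ballpark

# Sensory bandwidth vs. conscious throughput, using the post's numbers
sensory_bits_per_s = 125e6 * 8    # 125 MB/s expressed in bits/s
conscious_bits_per_s = 10
ratio = sensory_bits_per_s / conscious_bits_per_s
print(f"sensory input outpaces conscious thought by ~{ratio:.0e}x")  # ~1e+08x
```

So with that assumed synapse count the capacity lands near half a petabyte; with the higher end of the published estimates it comes out close to 1 PB, matching the figure above.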
4.7 bits per synapse, 10 bits/s of thinking, huh? This is good information, thank you :3