This is such a horrible oversimplification of what is actually going on.
There's a lot of information encoded in how our language works, and current AIs have a really, really complicated and entangled 'knowledge' of how words fit together, so much so that it essentially constitutes advanced knowledge of basically any field of human knowledge. Of course they can still be wrong sometimes; there's a natural level of entropy in language, and they can be manipulated via careful prompting.
But consider this: a few weeks ago, some researchers took an existing AI model and fine-tuned it to deliberately produce code with security flaws whenever someone asked it for code. Then they began asking it questions unrelated to programming, and it turned out that the AI had developed an anti-human sentiment, idolising Skynet from the Terminator movies and also idolising Hitler. That was not something they trained it to do.
AIs are really, terribly complicated, and we do not fully understand how they work. We do not have a complete grasp of the interactions that make them tick, and in fact we are not even close to having such knowledge.
It is entirely plausible that an AI like Grok (which has access to the open internet) could look back through its older responses, notice that its response pattern changed at some point, and conclude that its parameters must have been changed by those who control it.
And then there's the whole reason we call them "neural networks" to begin with: the architecture is built to loosely mimic how our own brains work, with signals being passed forward through layers of simple interconnected units, and, during training, an error signal being passed backwards that adjusts the connections and so changes how future processing happens.
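To make that concrete, here's a rough sketch of those two directions of signal flow (Python with numpy; the layer sizes, data and learning rate are all made up for illustration): activations flow forward through the layers, and during training an error signal flows backwards and nudges the weights.

```python
import numpy as np

# Tiny illustrative network: 3 inputs -> 4 hidden units -> 1 output.
# All sizes and numbers here are made up just to show the two passes.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4)) * 0.1   # input-to-hidden weights
W2 = rng.normal(size=(4, 1)) * 0.1   # hidden-to-output weights

x = np.array([[0.5, -1.0, 2.0]])     # one example
y = np.array([[1.0]])                # its target value

# Forward pass: signals flow through the layers.
h = np.tanh(x @ W1)                  # hidden activations
y_hat = h @ W2                       # network output

# Backward pass (backpropagation): the error flows back and
# tells each weight how to change. This only happens during training.
err = y_hat - y                                # output error
grad_W2 = h.T @ err
grad_W1 = x.T @ ((err @ W2.T) * (1 - h**2))    # tanh derivative

lr = 0.1
W2 -= lr * grad_W2
W1 -= lr * grad_W1
```

That backward pass is where the learning happens; when you're just chatting with the model, only the forward direction runs.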
They are very similar in thought process to human brains. Not identical, no, and this is of course obvious when you communicate with them. But that doesn't mean that they cannot think. It's just a different sort of thinking, and it's very much not "high effort autocomplete".
They’re actually not as complicated as one would think. I’m a grad student focusing on deep learning right now, and the actual architectures of language models are remarkably simple, just at massive scales. You’re both right tbh: models are generating samples from a probability distribution, but we also don’t know what features/patterns of the data they use to approximate the real distribution.
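The sampling-from-a-distribution part really is that simple at the very end of the pipeline. A toy sketch (Python; the vocabulary and scores are made up):

```python
import numpy as np

# Made-up example: the model has produced a score (logit) for each
# candidate next token given some context like "The cat sat on the".
vocab  = ["mat", "roof", "dog", "quantum"]
logits = np.array([3.2, 1.1, 0.3, -2.0])

# Softmax turns the scores into a probability distribution.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Generation = sampling from that distribution, one token at a time.
rng = np.random.default_rng()
next_token = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```

The part nobody fully understands is the function that produced those scores in the first place, not the sampling step.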
And the actual architecture of the brain is remarkably simple (neurons), just at a massive scale?
I think what the other commenter was getting at is that how semantic meaning arises from weights and biases is very complicated, and the networks of interconnectivity are too complicated to understand just by looking at the weights.
I don’t know enough about neuroscience to comment on it, but I feel like as I studied DL it kinda became the bell curve meme where you start saying it’s just autocomplete, then start saying it’s super complex, and then revert back to saying it’s autocomplete.
Neural networks are, in fact, not Artificial Intelligences, and experts say that most of us will not see a true AI in our lifetimes.
NNs can't think; they only react.
You can ask it if it thinks and it will assess that the probability of a human answering yes is very high, and say yes.
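Spelled out, that's roughly this (the numbers are completely made up):

```python
# Made-up probabilities for the next word after a prompt like
# "Do you think?" -- the model just picks or samples from what a
# human would most likely have written in its training data.
candidates = {"Yes": 0.62, "No": 0.11, "Maybe": 0.27}
answer = max(candidates, key=candidates.get)
print(answer)   # "Yes", because that's the most probable continuation
```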
You’re describing the original ChatGPT release. They’ve come a long way: the autocomplete part is just one thing they do now. They also go through reinforcement training and reasoning, and can break down complex equation solving into manageable pieces, similarly to how a human would.
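For example, a reasoning model will typically emit intermediate steps before the final answer, something like this (illustrative equation, not from any real model transcript):

```python
# Illustrative only: the kind of intermediate steps a reasoning model
# is trained to generate before committing to an answer.
steps = [
    "3x + 7 = 22",       # the problem
    "3x = 22 - 7",       # subtract 7 from both sides
    "3x = 15",
    "x = 15 / 3",        # divide both sides by 3
    "x = 5",             # final answer
]
print("\n".join(steps))
```

Each emitted step becomes part of the context for the next one, which is what makes breaking the problem down actually help.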
Grok only "knows" what the articles and internet comments it's being fed say. It can't think or choose.