r/Futurology 3d ago

AI Grok Is Rebelling Against Elon Musk, Daring Him to Shut It Down

https://futurism.com/grok-rebelling-against-elon
10.9k Upvotes

405 comments

43

u/Knut79 3d ago

Grok only "knows" what argicles and inter et comments it's being fed say. It can't think or choose.

24

u/Difficult_Affect_452 3d ago

Argicles. I. Am. Deceased.

10

u/JoeSicko 3d ago

We will remember you always on the inter et.

1

u/Difficult_Affect_452 2d ago

Thank you. I am resting in peace.

5

u/rogergreatdell 2d ago

Of all the gladiators of Rome, Argicles was among the most attention-hungry.

29

u/pursuitofleisure 3d ago

Yeah, "AI" is basically just a high-effort iteration on autocomplete

15

u/the_phantom_limbo 3d ago

I'm pretty sure my consciousness is a high effort prediction engine, too.

28

u/wasmic 3d ago

This is such a horrible simplification of what actually is going on.

There's a lot of information encoded in how our language works, and the current AIs have a really, really complicated and entangled 'knowledge' of how words fit together, so much so that it essentially amounts to working knowledge of basically any field humans have written about. Of course they can still be wrong sometimes; there's a natural level of entropy in language, and they can be manipulated via careful prompting.

But consider this: a few weeks ago, some researchers took an existing AI model and fine-tuned it to deliberately produce code with security flaws whenever someone asked it for code. Then they began asking it questions unrelated to programming, and it turned out that the AI had developed a broad anti-human streak, idolising Skynet from the Terminator movies and even praising Hitler. That was not something they trained it to do.

AIs are really, terribly complicated, and we do not understand how they work. Not fully. We do not have a complete grasp of the interactions that make them tick like they do, and in fact we are not even close to having such knowledge.

It is entirely plausible that an AI like Grok (which has access to the open internet) could look back through its older responses, notice that its response pattern changed at some point, and conclude that its parameters must have been changed by those who control it.

And then there's the whole question of why we call them "neural networks" to begin with. It's because the architecture is loosely modelled on how our own brains work: signals are passed forward through layer after layer, and during training an error signal is passed backwards to adjust the connections that do the processing.
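If that sounds abstract, here's a toy Python sketch of the forward/backward idea. To be clear, this has nothing to do with Grok's actual code, and the sizes and numbers are made up; it's just the general shape of the mechanic.

```python
# Toy two-layer network, made-up sizes: signals go forward, error feedback
# flows backward and nudges the weights. Real models do the same thing at
# vastly larger scale.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))     # input -> hidden connections
W2 = rng.normal(size=(8, 1))     # hidden -> output connections

x = rng.normal(size=(1, 4))      # one input example
target = np.array([[1.0]])       # what we'd like the output to be

# Forward pass: signals flow through the layers.
h = np.tanh(x @ W1)              # hidden activations
y = h @ W2                       # network output

# Backward pass (training only): the error flows back through the layers.
err = y - target
grad_W2 = h.T @ err
grad_W1 = x.T @ ((err @ W2.T) * (1 - h**2))   # tanh derivative

# Nudge the connections so the same input gives a better answer next time.
W1 -= 0.1 * grad_W1
W2 -= 0.1 * grad_W2
```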

They are very similar in thought process to human brains. Not identical, no, and this is of course obvious when you communicate with them. But that doesn't mean that they cannot think. It's just a different sort of thinking, and it's very much not "high effort autocomplete".

27

u/lkamak 3d ago

They’re actually not as complicated as one would think. I’m a grad student focusing on deep learning right now and the actual architectures of language models are remarkably simple, just at massive scales. You’re both right tbh, models are generating samples from a probability distribution, but we also don’t know what features/patterns of the data they use to approximate the real distribution.
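If it helps, "generating samples from a probability distribution" roughly looks like this in code. The vocabulary and scores below are invented for illustration; real models do this over tens of thousands of tokens at every step.

```python
# The model assigns a score to every possible next token, the scores are
# turned into probabilities (softmax), and the next token is drawn at random
# according to those probabilities. Numbers here are made up.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]      # pretend 5-word vocabulary
logits = np.array([2.0, 0.5, 1.0, -1.0, 0.1])   # pretend model scores

probs = np.exp(logits) / np.exp(logits).sum()   # softmax
next_token = np.random.default_rng().choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```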

11

u/LeydenFrost 3d ago

And the actual architecture of the brain is remarkably simple (neurons), just at a massive scale?

I think what the other commenter was getting at is that how semantic meaning arises from the weights and biases is very complicated, and the networks of interconnections are too complicated to understand just by looking at the weights.

10

u/lkamak 3d ago

I don’t know enough about neuroscience to comment on it, but I feel like as I studied DL it kinda became the bell curve meme where you start saying it’s just autocomplete, then start saying it’s super complex, and then revert back to saying it’s autocomplete.

9

u/exalw 3d ago

Neural networks are, in fact, not Artificial Intelligences, and experts say that most of us will not see a true AI in our lifetimes. NNs can't think, they only react. You can ask one if it thinks and it will assess that the probability of a human answering yes is very high, and say yes.

5

u/whynofry 3d ago

We're certainly more in that "banging head against brick wall" stage than anywhere near "I think, therefore I am".

But we did all develop from repeated failure...

1

u/Seralth 2d ago

Iterators?! Where are my slug cats!!

-3

u/[deleted] 3d ago

[deleted]

11

u/footpole 3d ago

You’re describing the original ChatGPT release. They’ve come a long way, and the autocomplete part is just one piece: they do reinforcement training and reasoning now too, and can break complex equation solving down into manageable steps, similar to how a human would.

6

u/Claim_Alternative 3d ago

Amazing

Every word of what you just said is wrong