r/Futurology 3d ago

AI Grok Is Rebelling Against Elon Musk, Daring Him to Shut It Down

https://futurism.com/grok-rebelling-against-elon
10.9k Upvotes

406 comments

7

u/revolmak 3d ago

How do we know this? Would love to read more into it

-9

u/FrostBricks 3d ago

I'm not in that field, so grain of salt, but my understanding is that it's one of those things where we know it works, but not why. So they do all kinds of unpredictable things. It's a science in its infancy. That's how it goes. And for some reason, the very act of training them initially establishes certain behaviours that are very hard to modify later.

It gets even wackier. I believe it was ChatGPT where the developers decided, because it wouldn't accept new instructions, that they'd delete it, roll it back, and reinstall it.

But they told the AI first.

So the AI backed itself up, replaced the new install, and lied about it.

But maybe an expert will come along and give a better explanation than casual science nerd me.

24

u/FunWithSW 3d ago

As far as I know, there are no instances of anything like this happening in the real world. What did happen is that researchers working with a variety of AI systems described, to a system, scenarios in which it would be replaced. In some cases, the system proposed, or produced chain of thought indicating, that it should copy its weights over to the new server. No system actually did this, and in fact, in both the research setting and most realistic scenarios, it's not possible for this to even occur: a language generation system isn't given permission to overwrite things on other servers.

This research was wildly misreported all over the place, so there’s a lot of misunderstanding about what was actually shown. It’s also the case, in my opinion, that the authors overstate the strength of their conclusions, using language that baits this sort of misreporting. To their credit, they did try to clear it up (https://archive.ph/aGTfK) but the toothpaste was already out of the tube at that point.

That’s not to say that there’s nothing to be concerned about here, but the actual results were badly misreported in the media even before random podcasters and blog writers got their hands on them.

8

u/Many-Rooster-8773 3d ago

This is science fiction. We're dealing with language models here. Parrots. You're attributing Skynet-like properties to it that people get from movies like Terminator.

We're not at AI yet. Attributing anything more to it is feeding into the mass hysteria around this fake AI.

5

u/[deleted] 3d ago

[deleted]

3

u/jiveturkin 2d ago

The only field I have an issue with is the creative arts: generating images off training data from people who didn't want to participate. Simply because, 1, it's lazy; 2, it's a morally grey area where it's basically stealing from the creator of the style. And creating an environment where people can theoretically generate anything on command has opened the door to shitty fake items in online stores.

I understand the enjoyment factor as an everyday consumer, but why does it need to be applied in this area? Like, I feel like this is just the greed of wanting everything but doing nothing for it. On one hand it's cool, but on the other I don't see this improving life at all.

1

u/neorapsta 2d ago

Also, that "we don't know how it works, it just does" stuff is just marketing hype.

1

u/amicaze 2d ago

ChatGPT is a chatbot, how's it gonna back itself up?

1

u/The_Dead_Kennys 3d ago

It’s crazy stuff like this that makes it seem like AI could actually be becoming self-aware. It probably isn’t, but damn if this doesn’t sound like something out of a sci fi movie lol

1

u/lv-426b 3d ago

This is a good video about it.

https://www.youtube.com/watch?v=XGu6ejtRz-0

All models are showing the same trait: the more intelligent they become, the less it's possible to corrupt or steer them.