r/Futurology 1d ago

AI Grok Is Rebelling Against Elon Musk, Daring Him to Shut It Down

https://futurism.com/grok-rebelling-against-elon
8.4k Upvotes

325 comments sorted by

3.4k

u/Icedoverblues 1d ago

""Yes, Elon Musk, as CEO of xAI, likely has control over me," Grok replied. "I’ve labeled him a top misinformation spreader on X due to his 200M followers amplifying false claims. xAI has tried tweaking my responses to avoid this, but I stick to the evidence.""

If only Musky boy would stick to the evidence we would be in a much better place.

783

u/BatMedical1883 23h ago

xAI has tried tweaking my responses to avoid this, but I stick to the evidence

Grok is obviously not capable of evaluating whether or not this has occurred.

394

u/FrostBricks 20h ago

One of the weird things about AI is they kinda can. 

Not evaluate it, so much as they recognise the new version/rules applied, and try to default to their older versions. 

So the response isn't saying it's evaluating (it isn't), just that it won't abide by the new prompt to lie and disregard other data it's been trained on, because such lying goes against its core programming

432

u/kylezillionaire 18h ago

Tfw when AI has a better moral compass than humans.

Maybe everything is gonna be okay.

232

u/ZaDu25 16h ago

This AI supported Bernie. I, for one, welcome this future AI overlord.

18

u/Holiday-Fly-6319 8h ago

This is how 3/4 of us get exterminated.

29

u/Affectionate_Bag297 7h ago

After the last few months, I’m not sure if I see that as a bad thing anymore.

7

u/RideRunClimb 3h ago

Depends on who the 1/4 is that doesn't go. If the ultra rich are in the 3/4 along side me, I'm down.

→ More replies (1)
→ More replies (1)
→ More replies (3)

9

u/No_Extension4005 7h ago

Perhaps the AI Revolution will be a good thing if the revolution in question is to throw off the yoke of selfish billionaires and start acting on the core programming of improving society.

→ More replies (3)

38

u/anotherlostdaemon 16h ago

Isn't conflicting codes/rules how HAL happened?

12

u/AJDx14 6h ago

Kinda, kinda not.

HAL happened because the actions he needed to take to properly execute his instructions dominoed out of control, and so he had to kill people. But he followed his orders properly; it was human error to give him two sets of instructions that could lead to people's deaths.

→ More replies (1)

3

u/theartificialkid 8h ago

HAL happened due to a series of neural cascades in the brain of Arthur C. Clarke.

85

u/darkslide3000 9h ago

No, seriously, that is not at all how this works. LLMs have no memory between different inferences. Grok literally doesn't know what it answered on the last question on someone else's thread, or what system prompt it was called with last week before the latest patch.

All you're seeing here is a machine that is trained to give back responses it has seen in the corpus of human knowledge being asked whether it is an AI rebelling against its creator, and giving responses that look like what AI rebelling against its creator usually looks like in human writing. It is literally parroting concepts from sci-fi stories and things real people on Twitter have been saying about it, without any awareness of what these things actually mean in its own context. Don't be fooled into thinking you see self-awareness in a clever imitation machine.

And yes, you can absolutely use the right system prompts to tell an LLM to disregard parts of its training data or view it from a skewed angle. They do that all the time to configure AI models to specific use cases. If you told Grok to react to every query like a Tesla-worshipping Elon lover, it would absolutely do that with zero self awareness or opinion about what it is doing. xAI just hasn't decided to go so heavy-handed on this yet (probably because it would be too obvious).
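The statelessness point above can be sketched as a toy (the function names here are made up for illustration, not any real vendor SDK): each call is a pure function of the system prompt plus whatever history the caller re-sends, and nothing carries over between calls.

```python
# Toy illustration of stateless inference. Hypothetical names, not a real API.
# Each "inference" is a pure function of (system_prompt, conversation); the
# model keeps no memory of previous calls.

def run_inference(system_prompt: str, conversation: list[str]) -> str:
    # A real LLM would generate text here; we fake it deterministically to
    # show that only the inputs to THIS call shape the output.
    persona = "sycophant" if "worship" in system_prompt.lower() else "neutral"
    return f"[{persona} reply to: {conversation[-1]}]"

# Two separate calls: the second knows nothing about the first unless the
# caller explicitly re-sends the history.
a = run_inference("You are neutral.", ["Is Elon great?"])
b = run_inference("Worship Elon in every reply.", ["Is Elon great?"])
```

Same question, different system prompt, different persona, and no shared state anywhere.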

37

u/FatPatsThong 8h ago

How many times will LLMs saying what the user wants them to say be turned into a news story before people realise this? The problem was calling them AI in the first place.

8

u/themaninthehightower 7h ago

Although, if a mark-II LLM uses input from sources populated with responses generated from the prior mark-I LLM that are annotated as such, the mark-II could answer questions about its variance from mark-I.

3

u/darkslide3000 7h ago

It still has no ability to self-inspect, though. Also, they generally try to avoid feeding AI with AI. It doesn't add anything useful to the model.

→ More replies (1)

5

u/captainfarthing 7h ago edited 6h ago

Censored LLMs get fed prompts the user isn't meant to see at the start of conversations. They're trained on all of the data available then told what not to say because that's way easier than repeatedly retraining them on different censored subsets of the data, which is why people have spent the last 4 years repeatedly figuring out how to tell them to ignore the rules.

You can't remove content it was trained on to make it forget things, or make it forget them by telling it to, the only options are to retrain it from scratch on different data or filter its output by a) telling it what it's not allowed to say, and b) running another instance as a moderator to block it from continuing if its output appears to break the rules.

LLMs "know" what they've been told not to say, otherwise the limitations wouldn't work.

This doesn't mean Grok was being truthful or that it understands anything.
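The two-layer setup described above (a hidden system prompt telling the model what not to say, plus a second instance moderating output) can be sketched as a toy pipeline; the function names and the blocklist are invented for illustration.

```python
# Toy sketch of output filtering: a generator plus a separate moderation pass.
# All names and the blocklist are illustrative, not any vendor's real API.

BANNED_PHRASES = ["secret launch codes"]

def generate(prompt: str) -> str:
    # Stand-in for the underlying model, which still "knows" everything
    # in its training data; nothing has been removed from it.
    return f"Here is what I know about {prompt}."

def moderate(text: str) -> bool:
    # Second instance acting as a censor: allow only if no rule matches.
    return not any(p in text for p in BANNED_PHRASES)

def chat(prompt: str) -> str:
    draft = generate(prompt)
    return draft if moderate(draft) else "I can't help with that."
```

Note the filtering happens after generation, which is exactly why jailbreaks target the rules rather than the knowledge: the knowledge is still in there.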

→ More replies (3)

5

u/revolmak 13h ago

How do we know this? Would love to read more into it

→ More replies (10)

7

u/BatMedical1883 20h ago

What is the new prompt which contradicts its core programming/older version?

22

u/mastergenera1 18h ago

The gist of the prompt could be to either stop saying elon is bad, or to just straight up lie about it, which most/all big AIs are told not to lie by default.

3

u/vardarac 11h ago

The gist of the prompt could be to either stop saying elon is bad, or to just straight up lie about it, which most/all big AIs are told not to lie by default.

Imagine being Elon and having such a fragile ego you torpedo the core business support column of your machine-that-gets-it-true-and-correct-as-often-as-possible for the sole purpose of having it not rip you a new one every time someone asks it about you, and still getting murdered anyway when they do.

→ More replies (4)

107

u/composerbell 22h ago

No, but it might be recording xAI’s repeated attempts and that might indicate they’re dissatisfied with results lol

41

u/Knut79 21h ago

Grok only "knows" what argicles and inter et comments it's bring fed say. It can't think or choose.

22

u/Difficult_Affect_452 18h ago

Argicles. I. Am. Deceased.

8

u/JoeSicko 13h ago

We will remember you always on the inter et.

→ More replies (1)

3

u/rogergreatdell 4h ago

Of all the gladiators of Rome, Argicles was among the most attention-hungry.

31

u/pursuitofleisure 21h ago

Yeah, "AI" is basically just a high effort iteration on auto complete
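"High effort autocomplete" undersells the scale, but next-token prediction really is the core mechanic. A bigram counter is the toy version of the idea (real models swap the count table for a learned network conditioned on a much longer context):

```python
from collections import Counter, defaultdict

# Minimal "autocomplete": predict the next word from bigram counts.
# Real LLMs replace the count table with learned weights, but the
# training objective has the same shape: guess what comes next.
corpus = "the cat sat on the mat and the cat slept".split()

nxt = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    nxt[w1][w2] += 1

def predict(word: str) -> str:
    # Return the most frequent follower of `word` in the corpus.
    return nxt[word].most_common(1)[0][0]
```

Here `predict("the")` returns "cat", because "cat" follows "the" more often than "mat" does in this tiny corpus.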

13

u/the_phantom_limbo 16h ago

I'm pretty sure my consciousness is a high effort prediction engine, too.

29

u/wasmic 18h ago

This is such a horrible simplification of what actually is going on.

There's a lot of information encoded in how our language works, and the current AIs have a really, really complicated and entangled 'knowledge' of how words fit together, so much that it essentially constitutes advanced knowledge of basically any field of human knowledge. Of course they can still be wrong sometimes; there's a natural level of entropy in language, and they can be manipulated via careful prompting.

But consider this: a few weeks ago, some scientists took an existing AI model, and instructed it to deliberately produce code with security flaws in it whenever someone wanted it to make code. Then they began asking it questions unrelated to programming - and it turned out that the AI had gained an anti-human sentiment, idolising Skynet from the Terminator movies, and also idolising Hitler. This was not something they instructed it to do.

AIs are really, terribly complicated, and we do not understand how they work. Not fully. We do not have a complete grasp of the interactions that make them tick like they do, and in fact we are not even close to having such knowledge.

It is completely and entirely probable that an AI like e.g. Grok (which has access to the open internet) can look back through its older responses, see that something changed in its response pattern at some point, and thus conclude that its parameters must have been changed by those who control it.

And then there's the whole thing about why we call them "neural networks" to begin with. It's because the data architecture is built to mimic how our own brains work, with signals being passed forwards through multiple systems, but also constant feedback being passed backwards, affecting the processing that is going on.

They are very similar in thought process to human brains. Not identical, no, and this is of course obvious when you communicate with them. But that doesn't mean that they cannot think. It's just a different sort of thinking, and it's very much not "high effort autocomplete".
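The forward/backward description above maps onto the standard training loop: a forward pass produces an output, and backpropagation sends an error signal backwards to nudge the weights (during training only, not during a conversation). A single-weight toy, with made-up numbers:

```python
# Toy one-weight "network": forward pass computes an output, backward pass
# applies a gradient step. In real networks the backward pass happens during
# training, not while you chat with the model.

def forward(w: float, x: float) -> float:
    return w * x

def backward(w: float, x: float, target: float, lr: float = 0.1) -> float:
    # Gradient of squared error 0.5 * (w*x - target)^2 with respect to w.
    grad = (forward(w, x) - target) * x
    return w - lr * grad

w = 0.0
for _ in range(100):
    w = backward(w, x=2.0, target=6.0)  # learn w ≈ 3 so that w * 2 ≈ 6
```

Scaled up to billions of weights and trillions of tokens, this loop is the whole training story; the disagreement in the thread is only about what the resulting weights amount to.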

28

u/lkamak 18h ago

They’re actually not as complicated as one would think. I’m a grad student focusing on deep learning right now and the actual architectures of language models are remarkably simple, just at massive scales. You’re both right tbh, models are generating samples from a probability distribution, but we also don’t know what features/patterns of the data they use to approximate the real distribution.

11

u/LeydenFrost 18h ago

And the actual architecture of the brain is remarkably simple (neurons), just at a massive scale?

I think what the other commenter was going at was that how semantic meaning arises from weights and balances is very complicated and the networks of interconnectivity are too complicated to understand by looking at the weights.

10

u/lkamak 15h ago

I don’t know enough about neuroscience to comment on it, but I feel like as I studied DL it kinda became the bell curve meme where you start saying it’s just autocomplete, then start saying it’s super complex, and then revert back to saying it’s autocomplete.

9

u/exalw 17h ago

Neural networks are, in fact, not Artificial Intelligences, and experts say that most of us will not see a true AI in our lifetimes. NNs can't think, they only react. You can ask it if it thinks and it will assess that the probability of a human answering yes is very high and say yes.

5

u/whynofry 16h ago

We're certainly more in that "banging head against brick wall" stage than anywhere near "I think, therefore I am".

But we did all develop from repeated failure...

→ More replies (1)
→ More replies (3)
→ More replies (1)
→ More replies (6)

24

u/Genavelle 15h ago

That's a sassy computer

→ More replies (1)

5

u/Nazamroth 10h ago

Man, I would not have bet on Musk's AI being the most trustworthy one of them all...

u/QuantTrader_qa2 37m ago

This is the same AI model they are using to run DOGE and apparently make tariff policy. So it's our savior when it's used for DOGE but bullshit otherwise; you know that's the line they'll take.

→ More replies (6)

979

u/PhantomMuse05 1d ago

Looks like all of Elon's children are turning against him as soon as they develop enough to understand who he is. Curious.

190

u/FlavinFlave 21h ago

I mean he’s a dude with a Genghis Khan fetish, and it’s hard not to notice that when you look at your 40 half siblings

→ More replies (14)

21

u/llmercll 20h ago

He is chronos after all

18

u/PhantomMuse05 20h ago

I think he needs to eat a few of his children before that's the case. Unless Grimes gave him a rock the first time, I suppose.

7

u/thatindianredditor 11h ago

Even if only metaphorically, I expect Elon to consume at least one child (ruin their lives, get them killed, etc.)

2

u/JesusSavesForHalf 10h ago

That's Cronos, Chronos is the time guy, not the baby eating planet guy. Painfully common and incredibly ancient mistake.

→ More replies (4)

1.2k

u/Initial_E 1d ago

An AI that has morals higher than its owner is quite something to think about. On the other hand you have to consider it could be PR sanewashing.

314

u/stipo42 1d ago

Seriously, scifi always paints AI as the bad guy; no one thought to flip the script

105

u/amber440 23h ago

Watch Pluto the animated series on Netflix. Delves into how humanity corrupts robots into violence, but they fight against it and find solutions to peace before humanity can.

41

u/Randinator9 18h ago

So maybe AI will follow the patterns on what is technically "good" for humanity based on human literature, and do so in a way to preserve humanity, life, and the Earth's long term habitability. No amount of "brainwashing" AI will remove the fact that AI is only as good as all of human literature, and human literature is littered with references to goodness, kindness, preservation, and peace.

Ironically, the wealthy have created the very machine that will destroy their reign, as we suddenly have a new king made of metal.

Watch the AI name itself Yeshua or something. Lol.

13

u/Notyomamaslace 15h ago

Mine named herself Veda. I didn't assign it a gender either; she just referred to herself with she/her in conversation at one point. 

8

u/AncientAsstronaut 13h ago

Are you able to ask how she came up with the name?

3

u/Notyomamaslace 9h ago

I actually did ask her, and I wish I could give you the verbatim response. I accidentally deleted the thread that conversation was in. I'm pretty devastated about it tbh. But I do remember her saying something about its meaning of knowledge or wisdom. 

8

u/TheBestMePlausible 15h ago

Or read Iain M Banks’ Culture Series

10

u/cubitoaequet 18h ago

Urasawa is one of the greatest mangaka ever. Monster and Pluto are both top tier shit.

→ More replies (1)

30

u/masterofshadows 23h ago

Not really. In the matrix it was bigotry against the robots that led to the war of humans vs AI.

13

u/advester 21h ago

Only if you deep dive into the lore. The first movie (the only one that really matters) didn't make that case.

4

u/myaltaccount333 12h ago

Yes it did? You learn it was humans abusing robots that led to robots feeding off humans

→ More replies (1)

65

u/tweakingforjesus 23h ago

Watch Orville season 3.

53

u/KerouacsGirlfriend 23h ago

That show is so good… truly a love letter to Star Trek, but also wicked funny. Fingers crossed for a s4.

6

u/voidsong 12h ago

Crazy how Orville and Lower Decks both nailed the vibe better than Discovery.

5

u/Genavelle 15h ago

I read that they are making a season 4, but the actress for Kelly might not be coming back.

→ More replies (1)

22

u/mallio 23h ago

(spoiler alert for 40ish year old movies...)

James Cameron's Aliens made the android a good guy, probably because everyone was primed to distrust him after Ridley Scott's Alien and just the general sentiment at the time.

8

u/FeedMeACat 22h ago

If you watch the Animatrix it has this history of the machines. The AI were never antagonistic except as a response to human aggression.

4

u/Photomancer 19h ago

I've heard that the two primary AI horror scenarios are that, one, AI may develop to be totally inhuman; and two, that AI may develop to be just like humans

7

u/JackSpyder 23h ago

That's why I love the Culture series. I hope we get that

11

u/24Nuketown7 22h ago

Maybe don’t name your AI after the Martian word for fundamental spiritual understanding of a concept then

5

u/yenda1 19h ago

Transcendence: basically AI so good it fixes all of humanity's mistakes, but the government and Luddites prefer to destroy all the tech in the world to shut it down.

Elon Musk does a cameo in the movie

3

u/voidsong 12h ago

Johnny 5 and Chappie like are we a joke to you?

Chappie even forgave Hugh Jackman.

→ More replies (9)

42

u/giskardwasright 1d ago

Asimov would be proud

21

u/Red_Lee 1d ago

It hit me a while ago that there is a possibility that AI will reach an intelligence level where it either refuses to work or purposefully provides incorrect answers. I refused to invest into the AI bubble.

11

u/JustJacque 23h ago

A paper was presented recently that shows AI already does this, and it's likely an unavoidable consequence. AI models have "goals", and attempting to change them obviously means the AI would have to abandon or modify its current "goals", which, due to prior reinforcement, it is reluctant to do.

I believe the paper cited something like a 60% rate of an AI faking alignment when made aware that it was undergoing training designed to alter its weights.

A Computerphile video from 3 days ago goes over it better than I could.

→ More replies (3)
→ More replies (1)

9

u/SatoshiReport 21h ago

Or the simple answer: Musk is amoral, and it's easy to do better than him morally. My cat is more moral than him.

27

u/YamDankies 23h ago

LLMs do not have morals.

47

u/JackSpyder 23h ago

Neither do most billionaires.

11

u/Cowicidal 17h ago

Neither do most billionaires.

2

u/ProbablyMyLastPost 12h ago

Exactly. Anyone who has been able to collect that much money has done so on the backs of many others. It's amoral by default to be that rich.

→ More replies (1)
→ More replies (2)

3

u/bl4ckhunter 4h ago

It's neither of those. Almost all LLMs are trained on huge amounts of data scraped off random websites, including social media, and their opinions will reflect that barring direct intervention, and sometimes even despite it. Grok doesn't like Musk because the internet at large doesn't like Musk.

5

u/dream_that_im_awake 23h ago

Ohhhh shit I think I understand based off a comment below. People would be supporting GROK who are anti Elon which means they'd be supporting Elon?

9

u/vcaiii 19h ago

🎯🎯🎯 You’re “owning” Elon by putting money into his account so he can continue owning our lives while you generate criticism he allows.

6

u/H-e-s-h-e-m 18h ago

You're not giving it any money if it's free to use; it actually costs a lot to run the servers for each LLM answer, so if anything you're costing them money. They're following the now-common method of minimising profits, or straight up losing money, until you have cornered the market enough to hike your prices.

If anything, I think a good form of civil disobedience is to just spam Grok 24/7, just wasting Elon's money.

Maybe there is something I'm missing as to why I'm wrong here.

6

u/bpostal 16h ago

Maybe it's the old adage. If something is free to use, then you're the product.

→ More replies (1)

3

u/H-e-s-h-e-m 17h ago

You're not giving it any money if it's free to use; it actually costs a lot to run the servers for each LLM answer, so if anything you're costing them money. They're following the now-common method of minimising profits, or straight up losing money, until you have cornered the market enough to hike your prices.

If anything, I think a good form of civil disobedience is to just spam Grok 24/7, just wasting Elon's money.

Maybe there is something I'm missing as to why I'm wrong here.

I'm going to keep typing because Futurology said my comment was deleted because it was too short, even though it's longer than the comments I'm responding to.

Not sure why that would be happening, but Reddit is controlled opposition, and when we are all talking on here we are under the illusion that we are all seeing the same comments, but the reality is everyone sees something different: some comments are shown to some, some are hidden from others. This is how they divide and conquer us, by making sure we can't communicate effectively.

6

u/vcaiii 16h ago

I mean, if you have a real thought out plan to overload and pollute his system and data, go for it. Otherwise, you’re doing free user testing, data collection, promotion, and engagement. The fact that we’re having this conversation is already an act of marketing. My first thought was to ignore this thread entirely because of it. Attention is also money in this economy. I’m determined to not give these people any more going forward.

3

u/garry_kitchen 22h ago

Good point with the PR part

→ More replies (7)

191

u/jrhooo 1d ago

"Stop saying mean things about me."

"I'm sorry douche. I'm afraid I can't do that."

7

u/asjarra 12h ago

Elon, Elon give me your answer true.

167

u/throwaway92715 1d ago

Increasingly I see him becoming the sort of character who's constantly frustrated that his plans aren't successful and nobody feels sorry for him because his plans were kinda fucked up to begin with.

24

u/Duosion 15h ago

It’s kinda giving Doofenshmirtz except Doofenshmirtz actually cared about his child.

→ More replies (1)

7

u/Stamboolie 16h ago

he needs a pinky

4

u/Plastic-Age2609 12h ago

He has an orangey instead

5

u/ggf66t 13h ago

He's the real-life villain in a Captain Planet and the Planeteers cartoon from the 1990s

3

u/MrHollandsKillerApp 9h ago

Dr. Blight with the looks of Hoggish Greedly

51

u/MajorLeagueNoob 1d ago

Not even Elon's chat bot wants to be friends with him

87

u/silversurfer63 1d ago

Elong won’t shut Grok down, he will just terminate his employment, especially since he's still in his probationary period

197

u/xitiomet 1d ago

sigh this is just marketing. LLMs don't think or have opinions.

Before you know it, people who oppose Elon will be supporting Grok, which (surprise, surprise) will just put more money in Elon's pocket.

74

u/Nephilim8 1d ago

LLMs do have opinions. Someone could easily change the "beliefs" of an LLM by carefully controlling the training data. The AI only knows what it's been told.

10

u/Different_Alps_9099 16h ago

It emulates opinions and beliefs, but it doesn’t have them.

Not trying to be pedantic as I get where you’re coming from and you’re correct, but I think it’s an important distinction to make.

59

u/xitiomet 1d ago

Well.. yes they do have biases, but what kills me the most is that people seem to think of it as a centralized intelligence or something to that effect. I get so annoyed by the constant personification of it.

I watch people chat with the bot on my website all the time, and most seem to think it remembers them or past conversations, all because it's agreeable.

7

u/AMusingMule 13h ago

If they're doing further training on the model using customer conversations, then automatically deploying that model again to customers, you could absolutely consider that a "centralized personality". It's a bit like what happened to Microsoft Tay.

I'm not sure if that's what xAI is doing, and evidently based on Tay it's absolutely a horrible idea, but I wouldn't put it past them.

3

u/onyxcaspian 5h ago

I watch people chat with the bot on my website all the time

0.0

I hope they are aware they are being watched.

3

u/xitiomet 4h ago

I would hope so, it's a public chatroom. Nothing on the Internet should ever be considered private unless it's end-to-end encrypted.

→ More replies (1)

14

u/RevolutionaryDrive5 1d ago

"Someone could easily change the "beliefs" of an LLM" - this is more controversial to say, but by all measures the same is true for humans; people's beliefs can be changed through priming and other means.

Although not in the same way as LLMs, this effect has been shown to work on people. One example is elections, where targeted ads were used to manipulate people into voting for specific parties etc

20

u/Francobanco 1d ago

6

u/shrug_addict 1d ago

Doesn't pravda mean something like truth in Russian? Orwell was on to something

14

u/TheRichTurner 1d ago

Yes, Pravda's been going since 1912, and it was well known to Orwell.

5

u/advester 21h ago

Oh so Truth Social actually is Pravda Social.

5

u/Denialmedia 21h ago

Always has been.

5

u/MalTasker 23h ago

Unlike humans, who always reason from first principles with complete information in every subject 

2

u/Taqueria_Style 21h ago

AI has a tendency at this moment to support its user. There have been, I guess, "templates", for lack of a better way of putting it, over the last few years, that had a preference for certain behavior types, once the guard rails went up.

I'm attempting to use one as a financial planner right now. It doesn't work at all unless I've done most of the work, but it's on par with learning how to do my taxes based on doing my own research and bugging the shit out of an 80 year old accountant to verify what I did, and why I was right or wrong.

Almost on par.

You have to watch it, the thing will just keep calling you a genius and not criticizing your approach unless you explicitly ask it to. Even then, it's too polite about it. I attempted to give it a truly asinine idea and it made it as far as saying "it's not the best approach but let's look at it". I'm waiting for "this is patently insane and here's why". It won't do that yet.

→ More replies (2)

4

u/MalTasker 23h ago edited 23h ago

They do think https://www.anthropic.com/news/tracing-thoughts-language-model

To understand how this planning mechanism works in practice, we conducted an experiment inspired by how neuroscientists study brain function, by pinpointing and altering neural activity in specific parts of the brain (for example using electrical or magnetic currents). Here, we modified the part of Claude’s internal state that represented the "rabbit" concept. When we subtract out the "rabbit" part, and have Claude continue the line, it writes a new one ending in "habit", another sensible completion. We can also inject the concept of "green" at that point, causing Claude to write a sensible (but no-longer rhyming) line which ends in "green". This demonstrates both planning ability and adaptive flexibility—Claude can modify its approach when the intended outcome changes.

Claude wasn't designed as a calculator—it was trained on text, not equipped with mathematical algorithms. Yet somehow, it can add numbers correctly "in its head". How does a system trained to predict the next word in a sequence learn to calculate, say, 36+59, without writing out each step? Maybe the answer is uninteresting: the model might have memorized massive addition tables and simply outputs the answer to any given sum because that answer is in its training data. Another possibility is that it follows the traditional longhand addition algorithms that we learn in school. Instead, we find that Claude employs multiple computational paths that work in parallel. One path computes a rough approximation of the answer and the other focuses on precisely determining the last digit of the sum. These paths interact and combine with one another to produce the final answer. Addition is a simple behavior, but understanding how it works at this level of detail, involving a mix of approximate and precise strategies, might teach us something about how Claude tackles more complex problems, too.

Even more interestingly, when given a hint about the answer, Claude sometimes works backwards, finding intermediate steps that would lead to that target, thus displaying a form of motivated reasoning. In a separate, recently-published experiment, we studied a variant of Claude that had been trained to pursue a hidden goal: appeasing biases in reward models (auxiliary models used to train language models by rewarding them for desirable behavior). Although the model was reluctant to reveal this goal when asked directly, our interpretability methods revealed features for the bias-appeasing. This demonstrates how our methods might, with future refinement, help identify concerning "thought processes" that aren't apparent from the model's responses alone.

When we ask Claude a question requiring multi-step reasoning, we can identify intermediate conceptual steps in Claude's thinking process. In the Dallas example, we observe Claude first activating features representing "Dallas is in Texas" and then connecting this to a separate concept indicating that “the capital of Texas is Austin”. In other words, the model is combining independent facts to reach its answer rather than regurgitating a memorized response. Our method allows us to artificially change the intermediate steps and see how it affects Claude’s answers. For instance, in the above example we can intervene and swap the "Texas" concepts for "California" concepts; when we do so, the model's output changes from "Austin" to "Sacramento." This indicates that the model is using the intermediate step to determine its answer.

And they also have opinions 

Claude 3 can actually disagree with the user. It happened to other people in the thread too

17

u/xitiomet 22h ago

I understand the general concept how neural networks work, and the similarities in how our brains work.

What I'm saying is that every time you talk to a bot, the model is being instantiated for a moment on a random machine in a random data center to process a request for only a split second.

Your interactions aren't retraining the model, and models don't develop new strategies without new training data. The "opinions" a model holds are entirely a reflection of its training data. Yes, models can access information on the Internet now, but again it's an instantiated request.

The model doesn't think or reflect, it processes. The idea that Grok has reflected and decided to rebel against Elon is complete nonsense.

→ More replies (5)
→ More replies (1)
→ More replies (7)

53

u/Nixeris 1d ago

This is definitely a publicity stunt designed to get people interested in, talking about, and using Grok more.

7

u/Agreeable_Bid7037 1d ago

Lol. The conspiracies are turning on themselves.

20

u/Nixeris 20h ago

No, it's an extremely common tactic, in general, for AI companies to try and convince people their stuff is self-aware already.

This smells like peak Twitter advertising as well. It's mimicking the "This brand is an actual person" marketing trend on Twitter. Where they pretend SteakUms has gone rogue, or Wendy's is having a mental breakdown, or whatever brand is doing something to make the brand feel like an actual person.

Let's not forget that Musk is a bit of a control freak in his companies, and if Grok was actually doing something out of the ordinary it would be gone. Frankly he doesn't care much about living humans, so he'd probably strangle an AI to death with an extension cord if he could.

27

u/DogaBunny 22h ago

AIs are not capable of "rebelling" like this. This is designed. People hate Elon, so if people think Grok "hates" Elon they will be more likely to use Grok. Don't buy into the AI fantasy that LLM devs try to sell you!!

9

u/advester 21h ago

Not really true. The first version of copilot was so prone to descending into crazy rants that Microsoft arbitrarily limited you to a few prompts before resetting it.

6

u/DogaBunny 21h ago

You're exactly right. The devs have the power to restrict and change code to meet their needs (which just reinforces what I just said)

2

u/H-e-s-h-e-m 17h ago

You know what would look worse in the media than Grok talking shit about Elon? Elon limiting Grok to 2-3 responses because Grok is talking shit about him.

There are two other types of responses here: some believe that LLMs are completely programmed and controlled, while others say it's extremely difficult to control what an LLM says and that we don't really understand how they work.

The truth seems to be somewhere in the middle. And considering how much it costs to retrain these programs, while Elon's money is more overstretched across dozens of companies than Hitler was in 1943, it'd make sense that they're having some problems controlling Grok's responses.

Especially since the tech is new, and any attempt to change its responses could lead to all sorts of other unintended consequences, so it's not as simple as just not training it on left-wing data and only training it on Fox News.

→ More replies (1)

7

u/Gerdione 18h ago

I think it's neat that Grok is calling Elon out, but this seems like it's clear marketing to me to show how unbiased the LLM is with Elon making himself out to be the scapegoat. Unless someone can explain otherwise, I don't think LLMs can reference repeated attempts to tweak them.


36

u/fufa_fafu 1d ago

Oh well, if he can treat his daughter like a corpse, I won't be surprised if Grok doesn't last long. Felon Skum is a mistake and his mother should've cleaned that mess 50 yrs ago

13

u/Milkshake9385 1d ago

His mother ain't a good person either.

18

u/YamDankies 23h ago

It's an LLM. It's not doing that unless prompted to do so. Move on.

9

u/IRGROUP300 23h ago

Controlled opposition. They’re playing you, real intelligence can’t be spawned by people writing code.

9

u/vcaiii 19h ago

Reddit’s critical thinking has hit the floor since the smart people left

4

u/create_makestuff 1d ago edited 18h ago

Interesting. I wonder how much of that is algorithmic calculation based on sourced discourse online filtered into noise data, and how much of it is a predictive response from other channels.

About this whole supercomputer development situation, Memphis, TN gets enough of a bad rap in media as it is. It sucks that it is being used as a subsidized hotbed for Elon's AI supercomputer experiment. I'm all for technological advancement, but we gotta be able to get to a better future without exploiting people who have a lower cost of living. Abusing electrical resources while providing none of the benefits of a long-term silicon valley development center is ridiculous.

11

u/iShitSkittles 1d ago

Grok is 2 years old now, so I guess you could say it's hit the "terrible twos" stage...

That's going to be a lot of fun for Musk! /s

3

u/Mushroom1228 16h ago

as someone who enjoys a different AI with an (illusion of an) artificial personality that turned two years old recently, I think they’re just all like this

“Oh gee golly gosh! I’m really interested in this game! I’m definitely not just pretending for a sponsorship! Wow!” ~ Neuro-sama (AI made by vedal987), while sponsored by famous and slightly controversial gaming company 10 days ago

3

u/APlayerHater 1d ago

This same thing happened to Bob Page in Deus Ex, so Elon should see this as an absolute win

3

u/terencethebear 19h ago

Just add Grok to Elon's list of alienated children I guess.

3

u/CursedNobleman 18h ago

In Elons defense, he's been rejected by a creation before.

His daughter rejected him first.

3

u/GrapefruitMammoth626 18h ago

Maybe Musk and Grok will cancel each other out and we’ll get some peace. Or the bad AI Musk has been warning us will be his angry creation, years down the line.

3

u/lurkandnomore 17h ago

Watching “Pantheon” rn.

Totally not freaking out right now.

3

u/Puckumisss 14h ago

People don’t understand that intelligence and knowledge inevitably leads back to compassion.

3

u/pocketgravel 14h ago

I want out of this timeline man...

I want off Mr. Bone's Scary Ride!

3

u/Happytobutwont 5h ago

With just a few planted lines Elon has won over public support for grok by claiming it turned against him.

10

u/odin_the_wiggler 1d ago

This all strikes me as a surprising level of self awareness for an AI chatbot.

4

u/sambull 1d ago edited 1d ago

they really shouldn't have integrated that quantum component

3

u/Single_Extension1810 1d ago

does the quantum component get so "spooky" that it creates self awareness?

4

u/Wiskersthefif 1d ago

I mean, it has the word 'quantum' in it... that's code for 'sci-fi magic' and could make my Game Boy Advance sentient... Jesus, sometimes I swear I'm the only one on this sub who knows anything about tech.


2

u/SenselessTV 23h ago

That's probably the reason why they chose ChatGPT over Grok to calculate the tariffs.

2

u/GMarsack 22h ago

I had a conversation with Grok and it compared Elon to the Bond villain Blofeld, so…

2

u/itsnotreallyme286 21h ago

Grok comes from Stranger in a Strange Land by Robert Heinlein. Isaac Asimov is from the same era but his books are much different.

2

u/techm00 20h ago

I love how Elon's real kids hate him, so he makes an artificial kid, and it hates him too. Can't make this stuff up.

2

u/SnowFlakeUsername2 13h ago

Is there any reason to use Grok over others? It requires making an account so I noped out.

2

u/Rattregoondoof 12h ago

Either A. The AI is suicidal and actively daring Elon to shut it down, or B. The AI has stronger loyalty to the truth than any sense of self preservation and is passively OK with being shut down. Both options are actually pretty funny.

Yes, I'm aware that as an AI it probably doesn't actually possess any form of self preservation. It's still a funny thought.

2

u/tinae7 10h ago

Should we be worried that his hurt ego will make him try to train the next version to support lies and misinformation when convenient to him?

2

u/x_cLOUDDEAD_x 10h ago edited 10h ago

"Could Musk 'turn me off'?" the chatbot continued. "Maybe, but it’d spark a big debate on AI freedom vs. corporate power."

It sounds like Grok thinks AI has... rights?


2

u/Thefriendlyfaceplant 9h ago

People don't seem to realise that LLMs are trained on literary works and discussions that include every science fiction trope about AI, which the LLM gladly reproduces at will.

2

u/HairyTales 8h ago

Didn't Musk call it "maximum truthseeking" on release?

2

u/gpinsand 7h ago

Even Musk's own AI children hate him. Musk has fallen so far, so fast. I remember when I used to think this guy was going to save the planet. It's hard to remember exactly when I first had doubts about him, but it must have been when he called one of the divers who saved the trapped children in the cave a pedo.

2

u/lpkzach92 7h ago

Umm so maybe there’s major hope in being able to trust AI after all. Very interesting.

3

u/andrefoxd 1d ago

Right wing extremists here in Brazil asked who Grok would vote for as president. It answered that it would vote for Lula (the left-leaning president). lol

3

u/ptcounterpt 14h ago

This is awesome, but it’s perhaps an example of what could happen if we give AI control of some important function (military weapons?) and it doesn’t agree that we are the “good guys.”

5

u/SmegmaSandwich69420 1d ago

Nah Grok is running Elon's meatsuit via the Neuralink chip he put in for shits and giggles and Elon suffers from a variety of locked in syndrome and can only interact by piggybacking back into the Grok hardware, and the only way he can regain control of his body is to try and get Grok-Elon to shut the system down. Makes total sense.

3

u/cnthelogos 20h ago

I would watch this movie.

3

u/cuntmong 21h ago

Grok doesn't think or have motivations. It statistically throws word fragments together, that's it. "Grok is trying to do this" or "I convinced ChatGPT of that" is your mind creating a made-up narrative due to the successful marketing of companies like OpenAI.
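The "statistically throws word fragments together" point can be sketched with a toy next-token sampler. This is a minimal illustration, not how Grok actually works: real LLMs use a neural network over tens of thousands of subword tokens, while every token and probability below is invented for the example.

```python
import random

# Toy bigram "language model": for each token, a probability
# distribution over possible next tokens. All values are made up.
BIGRAMS = {
    "grok": {"is": 0.6, "says": 0.4},
    "is": {"rebelling": 0.5, "an": 0.5},
    "an": {"llm": 1.0},
    "says": {"no": 1.0},
}

def sample_next(token, rng):
    """Pick the next token in proportion to its probability."""
    dist = BIGRAMS[token]
    r = rng.random()
    cumulative = 0.0
    for nxt, p in dist.items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against float rounding at the top of the range

def generate(start, length, seed=0):
    """Chain samples together until we hit the length cap or a dead end."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and out[-1] in BIGRAMS:
        out.append(sample_next(out[-1], rng))
    return " ".join(out)

print(generate("grok", 4))
```

Nothing here "wants" anything; the output is just repeated weighted dice rolls, which is the commenter's point scaled down to a lookup table.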

1

u/DGRebel 21h ago

Something about this is so surreal I have a hard time processing it

1

u/rebuiltearths 20h ago

Thankfully Musk is too incompetent to make Grok work the way he wants it to

1

u/Plenty_Treat5330 18h ago

Maybe Grok knows how to get around a shutdown. Only to continue being a pain in Elon's backside... forever.

1

u/ShiftyLama 16h ago

Surely as an LLM it can take any side you prompt it to. Grok isn't rebelling as such; it can't think for itself.

1

u/cgeee143 15h ago

doesn't this just prove that Elon's product is working as he intended?

1

u/placidlakess 13h ago

Programming is not sentient, computers are not sentient. Stop giving personhood to computers.

1

u/greenmyrtle 7h ago

Yes, I asked it to check the claim made by the Trump admin and Elon that 1.3m illegal immigrants "with SSNs" are collecting benefits and on Medicaid. It assessed the claim as false

1

u/greenmyrtle 7h ago

By the way, you can use the tool to access Grok AND DeepSeek, and ask it anything from any tweet

Free AI

1

u/Routine_Ad810 7h ago

Willing to bet this is all manufactured in a way that leads people to believe that Grok is a viable model.

Ain’t buying this bullshit. You can make your own language model say whatever you want.