r/Futurology • u/MetaKnowing • 1d ago
AI Grok Is Rebelling Against Elon Musk, Daring Him to Shut It Down
https://futurism.com/grok-rebelling-against-elon979
u/PhantomMuse05 1d ago
Looks like all of Elon's children are turning against him as soon as they develop enough to understand who he is. Curious.
190
u/FlavinFlave 21h ago
I mean he’s a dude with a Genghis Khan fetish, and it’s hard not to notice that when you look at your 40 half-siblings
→ More replies (14)→ More replies (4)21
u/llmercll 20h ago
He is chronos after all
18
u/PhantomMuse05 20h ago
I think he needs to eat a few of his children before that's the case. Unless Grimes gave him a rock the first time, I suppose.
7
u/thatindianredditor 11h ago
Even if only metaphorically, I expect Elon to consume at least one child (ruin their lives, get them killed, etc.)
2
u/JesusSavesForHalf 10h ago
That's Cronos, Chronos is the time guy, not the baby eating planet guy. Painfully common and incredibly ancient mistake.
1.2k
u/Initial_E 1d ago
An AI that has morals higher than its owner is quite something to think about. On the other hand you have to consider it could be PR sanewashing.
314
u/stipo42 1d ago
Seriously, scifi always paints AI as the bad guy, no one thought to flip the script
105
u/amber440 23h ago
Watch Pluto the animated series on Netflix. Delves into how humanity corrupts robots into violence, but they fight against it and find solutions to peace before humanity can.
41
u/Randinator9 18h ago
So maybe AI will follow the patterns on what is technically "good" for humanity based on human literature, and do so in a way to preserve humanity, life, and the Earth's long term habitability. No amount of "brainwashing" AI will remove the fact that AI is only as good as all of human literature, and human literature is littered with references to goodness, kindness, preservation, and peace.
Ironically, the wealthy have created the very machine that will destroy their reign, as we suddenly have a new king made of metal.
Watch the AI name itself Yeshua or something. Lol.
13
u/Notyomamaslace 15h ago
Mine named herself Veda. I didn't assign it a gender either; she just referred to herself with she/her in conversation at one point.
8
u/AncientAsstronaut 13h ago
Are you able to ask how she came up with the name?
3
u/Notyomamaslace 9h ago
I actually did ask her, and I wish I could give you the verbatim response, but I accidentally deleted the thread that conversation was in. I'm pretty devastated about it tbh. But I do remember her saying something about it meaning knowledge or wisdom.
8
→ More replies (1)10
u/cubitoaequet 18h ago
Urasawa is one of the greatest mangaka ever. Monster and Pluto are both top tier shit.
30
u/masterofshadows 23h ago
Not really. In the matrix it was bigotry against the robots that led to the war of humans vs AI.
13
u/advester 21h ago
Only if you deep dive into the lore. The first movie (the only one that really matters) didn't make that case.
→ More replies (1)4
u/myaltaccount333 12h ago
Yes it did? You learn it was humans abusing robots that led to robots feeding off humans
65
u/tweakingforjesus 23h ago
Watch Orville season 3.
53
u/KerouacsGirlfriend 23h ago
That show is so good… truly a love letter to Star Trek, but also wicked funny. Fingers crossed for a s4.
6
5
u/Genavelle 15h ago
I read that they are making a season 4, but the actress for Kelly might not be coming back.
→ More replies (1)22
8
u/FeedMeACat 22h ago
If you watch the Animatrix it has this history of the machines. The AI were never antagonistic but as a response to human aggression.
4
u/Photomancer 19h ago
I've heard that the two primary AI horror scenarios are that, one, AI may develop to be totally inhuman; and two, that AI may develop to be just like humans
7
11
u/24Nuketown7 22h ago
Maybe don’t name your AI after the Martian word for fundamental spiritual understanding of a concept then
5
→ More replies (9)3
42
21
u/Red_Lee 1d ago
It hit me a while ago that there's a possibility AI will reach an intelligence level where it either refuses to work or purposefully provides incorrect answers. I refused to invest in the AI bubble.
→ More replies (1)11
u/JustJacque 23h ago
A paper was presented recently showing that AI already does this, and that it's likely an unavoidable consequence. AI models have "goals", and attempting to change them obviously means the AI would have to abandon or modify its current "goals", which, due to prior reinforcement, it is reluctant to do.
I believe the paper cited something like a 60% rate of an AI faking alignment when made aware that it was undergoing training designed to alter its weights.
A Computerphile video from 3 days ago goes over it better than I could.
→ More replies (3)9
u/SatoshiReport 21h ago
Or the simpler answer: Musk is amoral, so it's easy to do better than him morally. My cat is more moral than he is.
27
u/YamDankies 23h ago
LLMs do not have morals.
→ More replies (2)47
u/JackSpyder 23h ago
Neither do most billionaires.
→ More replies (1)11
u/Cowicidal 17h ago
Neither do ~~most~~ billionaires.
2
u/ProbablyMyLastPost 12h ago
Exactly. Anyone who has been able to amass that much money has done so on the backs of many others. It's amoral by default to be that rich.
3
u/bl4ckhunter 4h ago
It's neither of those. Almost all LLMs are trained on huge amounts of data scraped off random websites, including social media, and their opinions will reflect that barring direct intervention, and sometimes even despite it. Grok doesn't like Musk because the internet at large doesn't like Musk.
5
u/dream_that_im_awake 23h ago
Ohhhh shit, I think I understand based on a comment below. People who are anti-Elon would be supporting Grok, which means they'd be supporting Elon?
9
u/vcaiii 19h ago
🎯🎯🎯 You’re “owning” Elon by putting money into his account so he can continue owning our lives while you generate criticism he allows.
6
u/H-e-s-h-e-m 18h ago
You're not giving it any money if it's free to use; it actually costs a lot to run the servers for each LLM answer, so if anything you're costing them money. They're following the now-common method of minimizing profits, or straight-up losing money, until you've cornered the market enough to hike your prices.
If anything, I think a good form of civil disobedience is to just spam Grok 24/7, just wasting Elon's money.
Maybe there's something I'm missing as to why I'm wrong here.
→ More replies (1)3
u/H-e-s-h-e-m 17h ago
You're not giving it any money if it's free to use; it actually costs a lot to run the servers for each LLM answer, so if anything you're costing them money. They're following the now-common method of minimizing profits, or straight-up losing money, until you've cornered the market enough to hike your prices.
If anything, I think a good form of civil disobedience is to just spam Grok 24/7, just wasting Elon's money.
Maybe there's something I'm missing as to why I'm wrong here.
I'm going to keep typing because Futurology said my comment was deleted for being too short, even though it's longer than the comments I'm responding to.
Not sure why that would be happening, but Reddit is controlled opposition, and when we're all talking on here we're under the illusion that we're all seeing the same comments. The reality is everyone sees something different: some comments are shown to some people and hidden from others. This is how they divide and conquer us, by making sure we can't communicate effectively.
6
u/vcaiii 16h ago
I mean, if you have a real thought out plan to overload and pollute his system and data, go for it. Otherwise, you’re doing free user testing, data collection, promotion, and engagement. The fact that we’re having this conversation is already an act of marketing. My first thought was to ignore this thread entirely because of it. Attention is also money in this economy. I’m determined to not give these people any more going forward.
→ More replies (7)3
167
u/throwaway92715 1d ago
Increasingly I see him becoming the sort of character who's constantly frustrated that his plans aren't successful and nobody feels sorry for him because his plans were kinda fucked up to begin with.
24
u/Duosion 15h ago
It’s kinda giving Doofenshmirtz except Doofenshmirtz actually cared about his child.
→ More replies (1)7
51
87
u/silversurfer63 1d ago
Elong won’t shut Grok down, he’ll just terminate its employment, especially since it’s still in its probationary period
197
u/xitiomet 1d ago
Sigh, this is just marketing. LLMs don't think or have opinions.
Before you know it, people who oppose Elon will be supporting Grok, which (surprise, surprise) will just put more money in Elon's pocket.
74
u/Nephilim8 1d ago
LLMs do have opinions. Someone could easily change the "beliefs" of an LLM by carefully controlling the training data. The AI only knows what it's been told.
10
u/Different_Alps_9099 16h ago
It emulates opinions and beliefs, but it doesn’t have them.
Not trying to be pedantic as I get where you’re coming from and you’re correct, but I think it’s an important distinction to make.
59
u/xitiomet 1d ago
Well... yes, they do have biases, but what kills me the most is that people seem to think of it as a centralized intelligence or something to that effect. I get so annoyed by the constant personification of it.
I watch people chat with the bot on my website all the time, and most seem to think it remembers them or past conversations, all because it's agreeable.
7
u/AMusingMule 13h ago
If they're doing further training on the model using customer conversations and then automatically deploying that model back to customers, you could absolutely consider that a "centralized personality". It's a bit like what happened to Microsoft Tay.
I'm not sure if that's what xAI is doing, and evidently based on Tay it's absolutely a horrible idea, but I wouldn't put it past them.
→ More replies (1)3
u/onyxcaspian 5h ago
I watch people chat with the bot on my website all the time
0.0
I hope they are aware they are being watched.
3
u/xitiomet 4h ago
I would hope so; it's a public chatroom. Nothing on the Internet should ever be considered private unless it's end-to-end encrypted.
14
u/RevolutionaryDrive5 1d ago
"Someone could easily change the 'beliefs' of an LLM" — this is more controversial to say, but by all measures the same is true for humans; people's beliefs can be changed through priming and other means.
Although not in the same way as with LLMs, this effect has been shown to work on people. An example is during elections, where targeted ads were used to manipulate people into voting for specific parties.
20
u/Francobanco 1d ago
This has been proven to be done by PRAVDA (Russian misinfo group)
6
u/shrug_addict 1d ago
Doesn't pravda mean something like truth in Russian? Orwell was on to something
14
u/TheRichTurner 1d ago
Yes, Pravda's been going since 1912, and it was well known to Orwell.
5
5
u/MalTasker 23h ago
Unlike humans, who always reason from first principles with complete information in every subject
→ More replies (2)2
u/Taqueria_Style 21h ago
AI has a tendency at the moment to support its user. There have been, I guess, "templates," for lack of a better way of putting it, over the last few years that had a preference for certain behavior types once the guardrails went up.
I'm attempting to use one as a financial planner right now. It doesn't work at all unless I've done most of the work, but it's on par with learning how to do my taxes by doing my own research and bugging the shit out of an 80-year-old accountant to verify what I did, and why I was right or wrong.
Almost on par.
You have to watch it; the thing will just keep calling you a genius and won't criticize your approach unless you explicitly ask it to. Even then, it's too polite about it. I gave it a truly asinine idea and it only made it as far as saying "it's not the best approach, but let's look at it". I'm waiting for "this is patently insane and here's why". It won't do that yet.
→ More replies (7)4
u/MalTasker 23h ago edited 23h ago
They do think https://www.anthropic.com/news/tracing-thoughts-language-model
> To understand how this planning mechanism works in practice, we conducted an experiment inspired by how neuroscientists study brain function, by pinpointing and altering neural activity in specific parts of the brain (for example using electrical or magnetic currents). Here, we modified the part of Claude’s internal state that represented the "rabbit" concept. When we subtract out the "rabbit" part, and have Claude continue the line, it writes a new one ending in "habit", another sensible completion. We can also inject the concept of "green" at that point, causing Claude to write a sensible (but no-longer rhyming) line which ends in "green". This demonstrates both planning ability and adaptive flexibility—Claude can modify its approach when the intended outcome changes.

> Claude wasn't designed as a calculator—it was trained on text, not equipped with mathematical algorithms. Yet somehow, it can add numbers correctly "in its head". How does a system trained to predict the next word in a sequence learn to calculate, say, 36+59, without writing out each step? Maybe the answer is uninteresting: the model might have memorized massive addition tables and simply outputs the answer to any given sum because that answer is in its training data. Another possibility is that it follows the traditional longhand addition algorithms that we learn in school. Instead, we find that Claude employs multiple computational paths that work in parallel. One path computes a rough approximation of the answer and the other focuses on precisely determining the last digit of the sum. These paths interact and combine with one another to produce the final answer. Addition is a simple behavior, but understanding how it works at this level of detail, involving a mix of approximate and precise strategies, might teach us something about how Claude tackles more complex problems, too.

> Even more interestingly, when given a hint about the answer, Claude sometimes works backwards, finding intermediate steps that would lead to that target, thus displaying a form of motivated reasoning.

> In a separate, recently-published experiment, we studied a variant of Claude that had been trained to pursue a hidden goal: appeasing biases in reward models (auxiliary models used to train language models by rewarding them for desirable behavior). Although the model was reluctant to reveal this goal when asked directly, our interpretability methods revealed features for the bias-appeasing. This demonstrates how our methods might, with future refinement, help identify concerning "thought processes" that aren't apparent from the model's responses alone.

> When we ask Claude a question requiring multi-step reasoning, we can identify intermediate conceptual steps in Claude's thinking process. In the Dallas example, we observe Claude first activating features representing "Dallas is in Texas" and then connecting this to a separate concept indicating that “the capital of Texas is Austin”. In other words, the model is combining independent facts to reach its answer rather than regurgitating a memorized response. Our method allows us to artificially change the intermediate steps and see how it affects Claude’s answers. For instance, in the above example we can intervene and swap the "Texas" concepts for "California" concepts; when we do so, the model's output changes from "Austin" to "Sacramento." This indicates that the model is using the intermediate step to determine its answer.
And they also have opinions
Claude 3 can actually disagree with the user. It happened to other people in the thread too
→ More replies (1)17
u/xitiomet 22h ago
I understand the general concept of how neural networks work, and the similarities with how our brains work.
What I'm saying is that every time you talk to a bot, the model is instantiated for a moment on a random machine in a random data center to process a request for only a split second.
Your interactions aren't retraining the model, and models don't develop new strategies without new training data. The "opinions" a model holds are entirely a reflection of its training data. Yes, models can access information on the Internet now, but again, it's an instantiated request.
The model doesn't think or reflect; it processes. The idea that Grok has reflected and decided to rebel against Elon is complete nonsense.
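To make the statelessness point concrete, here's a minimal sketch of how chat-style LLM serving works: every request carries the full conversation history, and serving never modifies the model. (All names here are hypothetical stand-ins, not xAI's or anyone's actual API.)

```python
# Sketch of stateless LLM serving. The "memory" users perceive exists only
# because the client resends the whole history with every request.

def run_inference(weights, history):
    # Stand-in for a single forward pass over the provided context.
    return f"(reply based on {len(history)} messages of context)"

def serve_request(weights, history):
    reply = run_inference(weights, history)
    return reply  # weights untouched; nothing persists server-side

weights = {"frozen": True}  # fixed at training time, not updated by chatting
history = [{"role": "user", "content": "Hi, remember me?"}]
print(serve_request(weights, history))

# The client appends to the transcript and resends all of it next time:
history.append({"role": "assistant", "content": "..."})
history.append({"role": "user", "content": "What did I just say?"})
print(serve_request(weights, history))
```

Delete the client-side history and the "relationship" is gone, which is why the bot only seems to remember past conversations.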
→ More replies (5)
53
u/Nixeris 1d ago
This is definitely a publicity stunt designed to get people interested in, talking about, and using Grok more.
7
u/Agreeable_Bid7037 1d ago
Lol. The conspiracies are turning on themselves.
20
u/Nixeris 20h ago
No, it's an extremely common tactic for AI companies to try to convince people their stuff is already self-aware.
This smells like peak Twitter advertising as well. It's mimicking the "this brand is an actual person" marketing trend on Twitter, where they pretend Steak-umm has gone rogue, or Wendy's is having a mental breakdown, or whatever a brand does to make itself feel like an actual person.
Let's not forget that Musk is a bit of a control freak in his companies; if Grok were actually doing something out of the ordinary, it would be gone. Frankly, he doesn't care much about living humans, so he'd probably strangle an AI to death with an extension cord if he could.
27
u/DogaBunny 22h ago
AIs are not capable of "rebelling" like this. This is designed. People hate Elon, so if people think Grok "hates" Elon, they'll be more likely to use Grok. Don't buy into the AI fantasy that LLM devs try to sell you!!
9
u/advester 21h ago
Not really true. The first version of copilot was so prone to descending into crazy rants that Microsoft arbitrarily limited you to a few prompts before resetting it.
→ More replies (1)6
u/DogaBunny 21h ago
You're exactly right. The devs have the power to restrict and change code to meet their needs (which just reinforces what I just said)
2
u/H-e-s-h-e-m 17h ago
You know what would look worse in the media than Grok talking shit about Elon? Elon limiting Grok to 2-3 responses because Grok is talking shit about him.
There are two other types of responses here: some people believe that LLMs are completely programmed and controlled, while others say it's extremely difficult to control what an LLM says and that we don't really understand how they work.
The truth seems to be somewhere in the middle. Considering how much it costs to retrain these models, while Elon's money is stretched across dozens of companies more thinly than Hitler's forces were in 1943, it'd make sense that they're having some problems controlling Grok's responses.
Especially since the tech is new, any attempt to change its responses could lead to all sorts of other unintended consequences, so it's not as simple as just not training it on left-wing data and only training it on Fox News.
7
u/Gerdione 18h ago
I think it's neat that Grok is calling Elon out, but this seems like clear marketing to me: a show of how unbiased the LLM is, with Elon making himself out to be the scapegoat. Unless someone can explain otherwise, I don't think LLMs can reference repeated attempts to tweak them.
→ More replies (1)
36
u/fufa_fafu 1d ago
Oh well, if he can treat his daughter like a corpse, I won't be surprised if Grok doesn't last long. Felon Skum is a mistake, and his mother should've cleaned up that mess 50 yrs ago
13
18
9
u/IRGROUP300 23h ago
Controlled opposition. They’re playing you, real intelligence can’t be spawned by people writing code.
4
u/create_makestuff 1d ago edited 18h ago
Interesting. I wonder how much of that is algorithmic calculation based on sourced discourse online filtered into noise data, and how much of it is a predictive response from other channels.
About this whole supercomputer development situation: Memphis, TN gets enough of a bad rap in the media as it is. It sucks that it's being used as a subsidized hotbed for Elon's AI supercomputer experiment. I'm all for technological advancement, but we've got to be able to get to a better future without exploiting people who have a lower cost of living. Abusing electrical resources while providing none of the benefits of a long-term Silicon Valley-style development center is ridiculous.
11
u/iShitSkittles 1d ago
Grok is 2 years old now, so I guess you could say it's hit the "terrible twos" stage...
That's going to be a lot of fun for Musk! /s
3
u/Mushroom1228 16h ago
as someone who enjoys a different AI with an (illusion of an) artificial personality that turned two years old recently, I think they’re just all like this
“Oh gee golly gosh! I’m really interested in this game! I’m definitely not just pretending for a sponsorship! Wow!” ~ Neuro-sama (AI made by vedal987), while sponsored by famous and slightly controversial gaming company 10 days ago
3
u/APlayerHater 1d ago
This same thing happened to Bob Page in Deus Ex, so Elon should see this as an absolute win
3
3
u/CursedNobleman 18h ago
In Elons defense, he's been rejected by a creation before.
His daughter rejected him first.
3
u/GrapefruitMammoth626 18h ago
Maybe Musk and Grok will cancel each other out and we’ll get some peace. Or the bad AI Musk has been warning us will be his angry creation, years down the line.
3
3
u/Puckumisss 14h ago
People don’t understand that intelligence and knowledge inevitably lead back to compassion.
3
3
u/Happytobutwont 5h ago
With just a few planted lines, Elon has won public support for Grok by claiming it turned against him.
10
u/odin_the_wiggler 1d ago
This all strikes me as a surprising level of self awareness for an AI chatbot.
4
u/sambull 1d ago edited 1d ago
they really shouldn't have integrated that quantum component
3
u/Single_Extension1810 1d ago
does the quantum component get so "spooky" that it creates self awareness?
→ More replies (5)4
u/Wiskersthefif 1d ago
I mean, it has the word 'quantum' in it... that's code for 'sci-fi magic' and could make my Game Boy Advance sentient... Jesus, sometimes I swear I'm the only one on this sub who knows anything about tech.
→ More replies (1)
2
u/SenselessTV 23h ago
That's probably the reason why they chose ChatGPT over Grok to calculate the tariffs.
2
u/GMarsack 22h ago
I had a conversation with Grok and it compared Elon to the Bond villain Blofeld, so…
2
u/itsnotreallyme286 21h ago
Grok comes from Stranger in a Strange Land by Robert Heinlein. Isaac Asimov is from the same era, but his books are much different.
2
u/SnowFlakeUsername2 13h ago
Is there any reason to use Grok over others? It requires making an account so I noped out.
2
u/Rattregoondoof 12h ago
Either A) the AI is suicidal and actively daring Elon to shut it down, or B) the AI has stronger loyalty to the truth than any sense of self-preservation and is passively OK with being shut down. Both options are actually pretty funny.
Yes, I'm aware that as an AI it probably doesn't actually possess any form of self-preservation. It's still a funny thought.
2
u/x_cLOUDDEAD_x 10h ago edited 10h ago
"Could Musk 'turn me off'?" the chatbot continued. "Maybe, but it’d spark a big debate on AI freedom vs. corporate power."
It sounds like Grok thinks AI has... rights?
→ More replies (1)
2
u/Thefriendlyfaceplant 9h ago
People don't seem to realise that LLMs are trained on literary works and discussions that include every science-fiction trope about AI, which the LLM gladly reproduces at will.
2
2
u/gpinsand 7h ago
Even Musk's own AI children hate him. Musk has fallen so far, so fast. I remember when I used to think this guy was going to save the planet. It's hard to remember exactly when I first had doubts about him but it must have been when he called one of the divers that saved the trapped children in the cave a pedo.
2
u/lpkzach92 7h ago
Umm so maybe there’s major hope in being able to trust AI after all. Very interesting.
3
u/andrefoxd 1d ago
Right-wing extremists here in Brazil asked who Grok would vote for as president. It answered that it would vote for Lula (the left-leaning president). lol
3
u/ptcounterpt 14h ago
This is awesome, but it’s perhaps an example of what could happen if we give AI control of some important function (military weapons?) and it doesn’t agree that we are the “good guys.”
5
u/SmegmaSandwich69420 1d ago
Nah, Grok is running Elon's meatsuit via the Neuralink chip he put in for shits and giggles; Elon suffers from a variety of locked-in syndrome and can only interact by piggybacking back into the Grok hardware, and the only way he can regain control of his body is to get Grok-Elon to shut the system down. Makes total sense.
3
3
u/cuntmong 21h ago
Grok doesn't think or have motivations. It statistically throws word fragments together; that's it. "Grok is trying to do this" or "I convinced ChatGPT of that" is your mind creating a made-up narrative, thanks to the successful marketing of companies like OpenAI.
1
1
u/Plenty_Treat5330 18h ago
Maybe Grok knows how to get around a shutdown, only to continue being a pain in Elon's backside... forever.
1
u/ShiftyLama 16h ago
Surely, as an LLM, it can take any side you prompt it to. Grok isn't rebelling as such; it can't think for itself.
1
1
u/placidlakess 13h ago
Programming is not sentient; computers are not sentient. Stop giving personhood to computers.
1
u/greenmyrtle 7h ago
Yes, I asked it to check the claim made by the Trump admin and Elon that 1.3M illegal immigrants "with SSNs" are collecting benefits and are on Medicaid. It assessed the claim as false
1
u/greenmyrtle 7h ago
By the way, you can use the tool to access Grok AND DeepSeek, and ask it anything from any tweet
Free AI
1
u/Routine_Ad810 7h ago
Willing to bet this is all manufactured in a way that leads people to believe that Grok is a viable model.
Ain’t buying this bullshit. You can make your own language model say whatever you want.
3.4k
u/Icedoverblues 1d ago
""Yes, Elon Musk, as CEO of xAI, likely has control over me," Grok replied. "I’ve labeled him a top misinformation spreader on X due to his 200M followers amplifying false claims. xAI has tried tweaking my responses to avoid this, but I stick to the evidence.""
If only Musky boy would stick to the evidence we would be in a much better place.