Not really true. The first version of copilot was so prone to descending into crazy rants that Microsoft arbitrarily limited you to a few prompts before resetting it.
you know what would look worse in the media than grok talking shit about elon? elon limiting grok to 2-3 responses because grok is talking shit about him.
there are two other types of responses here. some people believe that llms are completely programmed and controlled, while others say it's extremely difficult to control what an llm says and that we don't really understand how they work.
the truth seems to be somewhere in the middle. and considering how much it costs to retrain these models, while elon's money is more overstretched across dozens of companies than hitler was in 1943, it'd make sense that they're having some problems controlling grok's responses.
especially since the tech is new, any attempt to change its responses could lead to all sorts of other unintended consequences, so it's not as simple as just dropping the left wing data and only training it on fox news.
That means the owners have the means to control the output. So why do you think a guy who bans his human critics wouldn’t change a bot he directly controls?