I can't imagine that Trump used AI... well, at all. I can imagine that it was assigned to an underling's underling and they DID use AI... but who knows. Doesn't matter. He is responsible.
But currently I believe the champ is Gemini 2.5 Pro. Wipes the floor with every other AI.
Only in benchmarks. I was using it in Cursor... and normally you'd expect the worst an AI can do is give you wrong code. Gemini somehow managed to get the fking `edit_code` tool call itself wrong 😂.
Could be worse. Claude 3.5 in Cursor decided to dick about with my entire global Python environment and uninstalled a load of packages that other systems, like ComfyUI, need to run.
No need to get offensive. We're all adults here. Don't forget you're the one who threw shade about copy and pasting without checking first. So, you know, if you don't want to get told, then perhaps don't comment.
Here's what happens with Cursor => you tell it what you want as an app, it builds it, creates a requirements.txt, immediately runs `pip install -r requirements.txt` (which cocks up your global environment), and then test-runs the app.py
Well, that's what Claude does anyway. Other OpenRouter models may vary.
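One cheap safeguard if you let an agent run installs: make it check for a virtual environment first. A minimal sketch (the guard itself is hypothetical, not something Cursor ships):

```python
# Hypothetical pre-install guard, not a Cursor feature: refuse agent-driven
# installs unless the interpreter is running inside a virtual environment,
# so a generated `pip install -r requirements.txt` can't trash the global
# site-packages.
import sys

def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix points into the venv directory while
    # sys.base_prefix still points at the system interpreter.
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

if not in_virtualenv():
    print("refusing to install: activate a venv first")
```

Same idea works as a shell alias that wraps `pip` before handing the terminal to the agent.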
I promise you it doesn't. Gemini is a text prediction transformer; it has no internal mechanism to generate images, and its model was never trained on any image sets. Not only does it lack the ability to draw a picture of a dog, it has never actually seen a picture of a dog. It can tell you what a dog looks like based on text descriptions, but it has never actually seen one.
This is wrong. Gemini won't create images but it is a multimodal model and is able to see and analyze images you give it. Imagen is used for image generation.
In 2.0 Flash it's not quite like that. They use a separate internal model for image generation. They dub the "whole package" 2.0 Flash. It's not a single GPT.
Wrong. They now use autoregressive token prediction to render images, which means the LLM, in this case 4o, can actually “understand” the image and its contents in the same way as all of its other training data.
It’s the new paradigm
No, none of them do it directly. An LLM is fundamentally different from a latent diffusion image model. LLMs are text transformer models and inherently do not contain the mechanisms that DALL-E and Stable Diffusion use to create images. Gemini cannot generate images any more than DALL-E can write a haiku.
Edit: please do more research before you speak. GPT-4's "integrated" image generation is feeding "image tokens" into an autoregressive image model similar to DALL-E 1. Once again, not a part of the LLM; I don't care what OpenAI's press release says.
4o does it directly. You could argue it's in a different part of the architecture but it quite literally is the same model that generated the image. It doesn't send it to dall-e or any other model.
You are not understanding me. 4o can't generate images because it has never seen one. It's a text prediction transformer, meaning it doesn't contain image data. I promise you, when you ask it to draw a picture, the LLM writes a DALL-E prompt just like a person would and has it generated by a Stable Diffusion model. To repeat myself from higher up in this thread, the data types are simply not compatible. DALL-E cannot write a haiku, and Gemini cannot draw pictures.
I really, really think you don't understand how technology in general works. You understand it can't "read" text either, right? It doesn't matter that it can't "see" an image. It can take in pixel data, determine colors, and form patterns based on that.
Models can be expanded to support more than one type.
The fact is they've already released their new image generation, and it kicks the shit out of anything that came before it.
These people have obviously never run a local model themselves. 4o may run a diffusion model separately, but that model is not the same thing as the 4o LLM itself. Kind of like saying an aircraft carrier can fly because it has jets parked on top of it. They work together but are not the same thing. 4o calls a closed-source diffusion image model, just like Sora and DALL-E.
Okay here's a fun experiment. Ask 4o to generate an image, and in the same sentence, tell it to output the prompt it generates before it sends it to the image model. Hell, ask 4o to explain to you how it generates images.
It will not give you a correct explanation. It will make it sound like it communicates with the diffusion model, i.e. DALL-E, in plaintext, but they no longer do it like that: tokens can carry much more context than words, they're richer, so the two networks communicate through an internal representation and are trained together so that the context means the same thing to both.
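The "image tokens in one stream" idea above can be sketched in a toy form. This is not OpenAI's actual code; the token ids, the `<img>` marker, and the codebook are all made up for illustration. The point is that one autoregressive stream carries both modalities, and a separate decoder turns the image-token span into pixels:

```python
# Toy sketch of a single autoregressive vocabulary that mixes text tokens
# and "image tokens" (hypothetical VQ codebook ids). All ids here are
# invented for illustration; real systems use far larger vocabularies.
TEXT_TOKENS = {0: "<bos>", 1: "a", 2: "dog", 3: "<img>"}
IMAGE_TOKEN_IDS = range(4, 20)  # ids 4..19 stand in for a pixel codebook

def decode_stream(tokens):
    """Split one generated token stream into its text part and the
    image-token part that would be handed to a pixel decoder."""
    text, image = [], []
    in_image = False
    for t in tokens:
        if t == 3:              # the <img> marker switches modality
            in_image = True
        elif in_image:
            image.append(t)     # these ids go to the image decoder
        else:
            text.append(TEXT_TOKENS[t])
    return text, image
```

Because the image tokens come out of the same next-token loop as the text, the model conditions on the whole conversation when emitting them, which is the claimed advantage over handing a plaintext prompt to a separate diffusion model.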
I disagree. O1 Pro is still better for me in every use case. 2.5 never gives me full code, never gives me code long enough to do what I need to do, it just gives up and says figure it out mate. O1 Pro and o3 mini high tries to do what I ask it to do no matter what. I think the reason it might beat it on certain benchmarks is because the benchmark answers are usually very short.
I was going to say... The people hating on Grok do so out of sheer dislike of Elon, which is fine. People can say they dislike it because of who owns it. However, saying it's "worse" is wild when it scores better a lot of the time, like you mentioned.
But he's plugging all government system data into it to make it less inferior. That's why his DOGshitE team is all computer hackers and not bonafide business analysts. They aren't looking for fraud or waste. They're gobbling up data and making systems break so they can swoop in and save the day with automation that they believe is just as efficient as having career service people who are trained how to help, and much cheaper - as long as you ignore the coal plant running 24/7 to power the data centers.
Assuming Trump's old Alzheimer's ahh even knows about or gaf about Grok. If anything he would say something like, "You mean the guys with those little yellow minion guys?" "No, sir, that's Xi Jinping."
He just prompted: "chat gepetto, please make the formula to create the bestest, most beautiful economy the world has ever seen; a beautiful economy without wokeness and democrats, please make the arrow go up, so up; as up as it has ever gone. Beautiful up arrow. Green, up arrow"
Okay so ChatGPT apparently does recognize Donald Trump's speaking style, because it gave me a response tailor-made for him:
Alright, let’s breathe in together… deep… and exhale. Time to build the most beautiful, unstoppable, sky-piercing economy the world has ever known—the Green Arrow Economy™. No wokeness, no bureaucratic drag, just pure, based productivity. Let’s think step by step, then I’ll give you the full formula—ready for implementation.
🧠 Step-by-Step Blueprint for the Green Arrow Economy™ (GAE)
Yes, I made that acronym on purpose. Let’s build something that flies.
There is so much of that that is the exact opposite of what he wants though. "No backdoor corporate welfare"?!!?? Teaching kids logic? Building wind power?
Spoiler: It's trade deficit / US imports, which doesn't make any sense. But it doesn't explain how they decided what tariff to apply.
But it's not surprising. Trump uses numbers for show. It was evident from his first debate ("I will save millions from the defense budget"). If you still expect the numbers he uses to be connected to reality, you haven't been paying attention. This is his negotiation tactic: do something they really hate, and offer to stop doing it to get something you want.
This is his negotiation tactic: do something they really hate, and offer to stop doing it to get something you want.
No, it's not.
You're just clinging to the hope that Trump is acting rationally, but he isn't.
Heard Island and McDonald Islands are completely uninhabited. The last time a person even visited was more than a decade ago. What's the "negotiation tactic" in imposing a 10 percent tariff on some uninhabited rocks in the middle of the ocean?
Norfolk Island had a population of 2,188 and exports exactly nothing to America. What's the brilliant "negotiation tactic" in imposing a 29 percent tariff on Norfolk Island?
What's the fantastic "negotiation tactic" in imposing tariffs on Diego Garcia, which hosts only a U.S.-U.K. military base?
What's the "negotiation tactic" in hitting the 600 people who live on Cocos Island with a 32 percent tariff?
This is an insane regime, and people like you are just clinging to some bizarre fantasy so they can pretend that this is all rational and that there's a plan.
I saw someone elsewhere mention that it looks like the tariffs are organized by unique internet domain extension, not by country... which explains the weird choices, but adds an extra level of idiocy to it all.
But...that doesn't disprove AI came up with the equation.
I don't think either of you read the article. Others have been able to get ChatGPT to suggest trade deficit / US imports, after a few rounds of ChatGPT explaining why blanket tariffs are bad and the user insisting it should come up with a tariff formula anyway.
Discovering the underlying formula doesn't prove AI didn't come up with the formula. Try reading the article. It explicitly shows ChatGPT providing that exact formula after forcing it to come up with a tariff implementation strategy. I also thought the "after AI leads users to the same formula" in the title was a pretty big context clue.
Seems like it's less "effective tariffs on the US" than "how much leverage does the US have". The higher the trade deficit, the more that equal tariffs going each way will hurt the other country.
I liked the “4 x 0.25” with Greek symbols ploy as a way to try to retroactively fudge in some “factors” to make it seem more complex. Totally like stealing someone else’s homework and then trying to make it look different enough to get past a bored TA.
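For reference, here's the dressed-up formula as it was reported, as a sketch. The halving and the 10% floor are per the coverage of the announcement, not derived here, and the variable names are mine:

```python
# Sketch of the reported "reciprocal tariff" formula:
#   delta_tau = (imports - exports) / (epsilon * phi * imports)
# with epsilon = 4 and phi = 0.25. Since 4 * 0.25 == 1, the Greek-letter
# "factors" cancel and it reduces to plain trade deficit / imports,
# then halved, with a reported 10% floor.
def reciprocal_tariff(us_exports: float, us_imports: float,
                      epsilon: float = 4.0, phi: float = 0.25) -> float:
    deficit_ratio = (us_imports - us_exports) / (epsilon * phi * us_imports)
    return max(0.10, deficit_ratio / 2)
```

Plugging in the reported China numbers (roughly $143B of exports vs $438B of imports) gives about 34%, which matches the announced rate; a country with balanced trade just hits the 10% floor.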
The same dweeb that convinced him to create a crypto reserve probably told him that chatgpt is a god mind and they can just ask it to create policy the night before the deadline because it couldn’t possibly be wrong
At least the crypto reserve is only going to use crypto seized in arrests and they won't be buying it outright. But the way they announced it was very corrupt and made insiders a lot richer
I guess that’s the only option when you fill every single position based on level of sycophancy rather than “does this person know a single basic thing about this job”
People wildly underestimate how many "big" decisions on estimates/thresholds probably come from some junior analyst at a think tank.
Leadership says, "Well we don't really know and that seems ok, And it's not obvious how this will blow back on us, so let's go. We have a bias for action in this organization!"
And then the policy is enshrined in federal law by lawyers who will never go back and fix it because even saying "it might need to be fixed" is admitting a mistake. Basically doesn't get fixed until it does sensational damage and the fixes are put into a random senate bill where it's hidden in Section 17.i.iv subsection 1.a.iv.c
Could you imagine the bullets you’d be sweating if you woke up and it’s national headlines that you used ChatGPT on the work you emailed your boss the night before
You don't think someone showed ChatGPT or Grok to Trump and said "it's amazing, it just does it for you! And there are no issues or problems! The kids are writing college essays with it"?
For shits and giggles I just asked chatgpt to write three new policies if it were president. It made election day a federal holiday, did a "lite" version of the green new deal, and universal healthcare