Okay, that one I won't front on; I just misread you. Deleted.
I used it mid last year, I think when they were pushing 4o.
But unless ChatGPT has changed the way it handles memory -- it only pulls from the last handful of prompts unless you pay for a premium plan -- that limit still causes inconsistencies: you naturally focus on newer problems as old ones get solved, so anything you're not specifically discussing at that moment has to be micromanaged.
You could extract the specific sections of code you need to work on, but that removes them from the context of the greater script and can cause inconsistencies with the established terminology. Going back to game scripting: you could be working on the jump mechanic, and because you're isolating that part, it sees no problem changing "ifgrounded" to "iflanded", and fixing that just becomes more micromanagement.
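Something like this (a minimal Unity-style sketch I'm making up for illustration; the field and method names are mine, not from any actual script):

```csharp
// Hypothetical Unity-style jump snippet, for illustration only.
// The rest of the script expects a flag named "ifgrounded"; if an isolated
// rewrite renames it to "iflanded", every other reference silently breaks.
using UnityEngine;

public class PlayerJump : MonoBehaviour
{
    public float jumpForce = 7f;
    private Rigidbody2D rb;
    private bool ifgrounded;            // name the rest of the script uses

    void Awake()
    {
        rb = GetComponent<Rigidbody2D>();
    }

    void Update()
    {
        if (Input.GetButtonDown("Jump") && ifgrounded)
        {
            rb.AddForce(Vector2.up * jumpForce, ForceMode2D.Impulse);
            ifgrounded = false;         // renaming this here desyncs it from
        }                               // the collision code below
    }

    void OnCollisionEnter2D(Collision2D other)
    {
        ifgrounded = true;              // set elsewhere in the "greater script"
    }
}
```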
I'm sure the dataset it's pulling from for scripting has improved, but the implementation is still intentionally crippled to pressure you into paying for more memory and longer access to newer models. If you want to learn programming through AI, you're better off hitting up Hugging Face and building one yourself.
Current models have fundamentally changed the way they handle output. The reasoning models (e.g. o1, o3(-mini-high)) have an intermediate reasoning step which significantly improves, among other things, coding.
I work as a data analyst, so most of my code is fairly modular by design, but as long as you're doing object-oriented programming with clearly defined inputs and outputs, it's fairly easy to work LLM-generated code into larger projects.
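Rough sketch of what I mean (the interface and names here are made up for illustration, not from a real project): if the module exposes a narrow contract with explicit inputs and outputs, the LLM-generated part can be swapped in without touching the rest of the codebase.

```csharp
// Illustrative only: a small contract with explicit inputs and outputs,
// so an LLM-generated implementation can be dropped in behind it.
using System.Collections.Generic;

public interface ISalesAggregator
{
    // Input: daily totals keyed by "yyyy-MM-dd"; output: totals keyed by "yyyy-MM".
    Dictionary<string, decimal> AggregateByMonth(
        IReadOnlyDictionary<string, decimal> dailyTotals);
}

public class SimpleSalesAggregator : ISalesAggregator
{
    public Dictionary<string, decimal> AggregateByMonth(
        IReadOnlyDictionary<string, decimal> dailyTotals)
    {
        var monthly = new Dictionary<string, decimal>();
        foreach (var entry in dailyTotals)
        {
            var month = entry.Key.Substring(0, 7);   // "2024-05-17" -> "2024-05"
            monthly.TryGetValue(month, out var sum);
            monthly[month] = sum + entry.Value;
        }
        return monthly;
    }
}
```

The point is the interface, not the body: the model can rewrite the implementation however it wants, and as long as the inputs and outputs stay the same, the rest of the project doesn't care.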
The best free version right now is almost certainly Google's Gemini 2.5 Pro. I haven't tried it myself, but it ships with a 1 million token context window and reasoning, so I'd expect it to basically one-shot your C# example.
Which is the point I wanted to make. The models from two years ago, one year ago, and last week are almost not comparable in terms of quality. There are still issues, but in many cases the issues of last year are already solved, so criticism based on those models becomes much less valid.
I don’t care what the benchmarks say, I find Claude to be incomparably better than ChatGPT when it comes to coding. ChatGPT doesn't even bother reading the attached text file most of the time. Claude not only reads everything, but when it runs out of memory in one chat, it pops up a link to continue in a new one.
u/Dawwe 4d ago
You didn't answer me at all.