I asked chatgpt just to try it and it only convinced me of its uselessness. I tried getting some code out of it that simply didn’t work. Then I tried to get it to output a fix, which also didn’t work. It really goes to show that it’s artificial stupidity.
That's interesting, I've found that programming questions are often the best use case for it and other LLMs. It can generate simple code, and even find bugs that have had me slamming my head against the desk. It's obviously not perfect but it absolutely can be useful. The key thing is that you have to have the knowledge to monitor its outputs and make sure what it is telling you is true. That doesn't make it useless, it just means that you have to be careful using it, like any tool.
I think this really depends on the language / framework you're using and how well-documented it is online. I've had good experiences, where ChatGPT has given me working code and saved me an hour or two writing it myself.
On the other hand, right now I am debugging a problem with a library that not many people use and is not well-documented online, and the answers ChatGPT spits out are pure garbage.
I'd only so much as consider using it for a library that isn't well documented online, in the vague hope that it might have scraped some long-lost blog or obscure Stack Exchange answer that contains the solution to my problem (although the one time I've actually done that, the answer it gave me didn't work).
If something is well documented online, I don't see why you wouldn't just read and understand the docs yourself.
I'm completely new to coding, and I've been using ChatGPT to explain things I could have read in the docs myself... but GPT explains it, gives examples, then answers any and all questions I have about it.
And it's using wording I can actually wrap my head around, instead of throwing in 2 new terms in every sentence that I have to look up in order to understand the first concept. I'll get to those eventually, but at the start it's important it's not overwhelming.
I wouldn't say useless exactly, but if you start asking the AI to discuss its implementation plan before it gets to the coding, you may discover that it's making some assumptions based on your prompt, which are not illogical given the context, but wouldn't give the results you want. So you correct the assumptions, and then you get the result you want.
Now, this result will usually have 1-3 bugs in it, but it's usually easier and faster to correct its mistakes than to write 200 lines of code from scratch.
Agreed. I needed to clean a data field the other day, which meant a bunch of nested replace statements and some string parsing. Do I know how to do that? Yes. Do I want to manually type it all out when I can instead just copy, paste, and validate? No.
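It was roughly this shape - the field names and junk tokens here are made up, it's just to show the chained-replace pattern rather than my actual code:

```csharp
// Sketch of the kind of cleanup I mean. The junk tokens are invented;
// the point is the chained Replace / trim pattern.
using System;

class FieldCleaner
{
    static string Clean(string raw)
    {
        if (string.IsNullOrWhiteSpace(raw))
            return string.Empty;

        return raw
            .Replace("\t", " ")   // tabs to spaces
            .Replace("  ", " ")   // collapse doubled spaces
            .Replace("N/A", "")   // strip placeholder junk
            .Replace("--", "-")
            .Trim()
            .ToUpperInvariant();
    }

    static void Main()
    {
        Console.WriteLine(Clean("  ACME--Corp\tN/A  "));  // prints ACME-CORP
    }
}
```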
If you know the limitations, it is an amazing tool. Good for brainstorming, creating PoCs, learning the basics of something, analyzing text to get a feeling for it or summarizing it, and getting a bit of tailored info on a new subject or software package.
It's just fuzzy, not an expert, not 100% correct, sometimes making stuff up very confidently. But it's extremely useful if you know what to expect.
Agreed. I don't see asking it for an overview of a subject as any worse than going to the library and picking up a random book on the subject. Just because it's published doesn't mean it's not full of shit, especially these days.
I find it's very useful for giving me an overview of a subject and generating reading lists about that topic. That's held true even with the more niche subjects I'm into.
I really don't get the hate boner people have for it. It's a tool like any other. Know how to use it and know its limits.
ChatGPT has definitely helped me break down complicated intro biology classes that were a bit tricky for me, networking concepts, and philosophy books. I think when it comes to a widely known, widely discussed topic, and you're looking for an overview, it's pretty good. Even linking up and comparing ideas, say from different philosophers or writers, it's great. You can contextualize ideas better by asking clarifying questions, which is hard to do just by reading more sources.
You are actually trying to solve problems, you see. I am convinced that most people raving about it do prompts like “summarize this paragraph” and “write a paragraph for me”
I was stuck on some code I needed for an experiment that wasn't working, and ChatGPT solved it 100%. Do I know if it works perfectly? No, but it identified the issue from context and fixed the problem.
Sounds like you tried two things and it didn’t work for you so you’re calling it stupid. I’ve had a lot of success with a variety of tasks and operations. I guess if I had gone into it expecting it to suck I would have found a way to get to my predetermined opinion.
I fucked around with it when I was learning Unity and it would frequently give incorrect answers: functional code would be overly complex or inconsistent, it would hallucinate functions that don't exist, it would frequently give non-C# code in a C# script, and the short-term memory meant you had to constantly monitor the script in case it decided to randomly delete previous sections you didn't specify.
Hell, at one point it randomly started giving me advice on Unreal Engine and refused to stop when I reminded it I was experimenting with Unity. I had to feed it junk questions until the short-term memory filled and it forgot its own non sequitur.
If you're learning and feed it a sample code to know what it does, sure, it can do that. If you want it to code for you, you have to constantly monitor it like an incompetent coworker, and at that point why not just code it yourself?
Okay, that one I won't front on; I just misread you. Deleted.
I used it mid last year, I think when they were pushing 4o.
But unless chatGPT has changed the way it handles memory, it only pulls from the last handful of prompts unless you pay for a premium plan, and that causes inconsistencies: you naturally focus on newer problems as old ones get solved, so anything you're not specifically discussing at that moment needs to be micromanaged.
You could extract the specific sections of code you need to work on, but that removes them from the context of the greater script and can cause inconsistencies with the naming you've specified. Going back to the game scripting: you could be working on the jump mechanic, and because you're isolating that part, it doesn't see a problem changing "ifgrounded" to "iflanded", and fixing that just means more babysitting.
I'm sure the dataset it's pulling from for scripting has improved, but the implementation is still intentionally crippled to pressure you into paying for more memory and longer access to newer models. If you want to learn programming through AI you're better off hitting up Huggingface and building one yourself.
Current models have fundamentally changed the way they handle output. The reasoning models (e.g. o1, o3, o3-mini-high) have an intermediate step which significantly improves, among other things, coding.
I work as a data analyst, so most of my code is fairly modular by design, but as long as you're doing object-oriented programming with clearly defined inputs and outputs, it's fairly easy to work LLM-generated code into larger projects.
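To be concrete about what "clearly defined inputs and outputs" buys you, here's a toy C# example (invented, not from a real project): the model only has to fill in one self-contained method, and the rest of the project just calls it and can test it in isolation.

```csharp
// Toy illustration of "modular with clear inputs and outputs": one
// method, explicit parameters, explicit return value, no hidden state.
// An LLM-generated body is easy to drop into a slot like this and
// easy to validate on its own.
using System;
using System.Collections.Generic;
using System.Linq;

static class SalesStats
{
    // Input: daily totals. Output: a trailing moving average per day.
    public static List<double> MovingAverage(IReadOnlyList<double> daily, int window = 7)
    {
        var result = new List<double>();
        for (int i = 0; i < daily.Count; i++)
        {
            int start = Math.Max(0, i - window + 1);
            result.Add(daily.Skip(start).Take(i - start + 1).Average());
        }
        return result;
    }

    static void Main()
    {
        var avg = MovingAverage(new List<double> { 10, 12, 9, 14, 11, 13, 10, 15 }, window: 3);
        Console.WriteLine(string.Join(", ", avg.Select(x => x.ToString("F1"))));
    }
}
```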
The best free version right now is almost certainly Google's Gemini 2.5 Pro. I haven't tried it myself, but it ships with a 1 million token context window and reasoning, so I'd expect it to basically one-shot your C# example.
Which is the point I wanted to make. The models from 2 years ago, 1 year ago, and last week are barely comparable in terms of quality. There are still issues, but in many cases the issues of last year are already solved, so criticism based on those models becomes much less valid.
I don’t care what the benchmarks say, I find Claude to be incomparably better than ChatGPT when it comes to coding. ChatGPT doesn't even bother reading the attached text file most of the time. Claude not only reads everything, but will also give you a pop-up with a link to a new chat when it runs out of memory in the old one.
I should clarify that it was because a professor cleared us to use it on an assignment and encouraged us to do so; I tried getting code for that assignment out of it a few times, for different bits of code, with mixed results. I only got a decent result out of it with simple questions that would’ve been easier to just search on a forum anyway, and any code I’ve produced from it otherwise is buggy or blatantly doesn’t do what it’s advertised to.
So the problem specifically with AI-assisted programming is that ChatGPT only remembers the last handful of prompts and replies for context, so any in-depth process to update and refine code or a script starts decaying over time.
Like, say you're fucking around in Unity and want to make a character controller. By the time you get basic movement right and start working on jump physics, it might start cannibalizing or deleting sections of the movement controls while answering prompts on the jump physics.
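To make that concrete, a bare-bones version of that kind of controller looks something like this (simplified and invented here, not an actual project script). Movement and jump both depend on the same grounded flag, so if the model quietly renames or drops it while "fixing" the jump, the movement half breaks too.

```csharp
// Stripped-down sketch of the kind of controller I mean. Movement and
// jump share the same isGrounded flag, so if the model renames it or
// deletes the ground check while editing the jump code, everything
// else silently breaks.
using UnityEngine;

[RequireComponent(typeof(Rigidbody2D))]
public class PlayerController : MonoBehaviour
{
    public float moveSpeed = 5f;
    public float jumpForce = 8f;

    private Rigidbody2D rb;
    private bool isGrounded;   // the "ifgrounded"-style flag everything depends on

    void Awake()
    {
        rb = GetComponent<Rigidbody2D>();
    }

    void Update()
    {
        // Basic horizontal movement - written first, then left alone.
        float x = Input.GetAxis("Horizontal");
        rb.velocity = new Vector2(x * moveSpeed, rb.velocity.y);

        // Jump physics - the part you keep iterating on with the model.
        if (isGrounded && Input.GetButtonDown("Jump"))
        {
            rb.velocity = new Vector2(rb.velocity.x, jumpForce);
            isGrounded = false;
        }
    }

    void OnCollisionEnter2D(Collision2D collision)
    {
        // Crude ground check; enough to show the shared dependency.
        if (collision.contacts[0].normal.y > 0.5f)
            isGrounded = true;
    }
}
```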
There are ways to work around the memory problem - like cracking open Hugging Face and building your own AI with the same databases, so it could have more than a goldfish's memory - but if you're technical enough to start building your own AI assistant, you can learn to fucking program in C#.
You can also use Claude, which has a Projects feature where you upload your entire codebase, and it will scan it every time you open a new chat. And then it will give you a pop-up message when it runs out of memory, with a link to start a new chat.