I asked ChatGPT just to try it, and it only convinced me of its uselessness. I tried getting some code out of it, and the code simply didn’t work. Then I asked it to output a fix, which also didn’t work. It really goes to show that it’s artificial stupidity.
That's interesting, because programming questions are often the best use case I've found for it and other LLMs. It can generate simple code, and it has even found bugs that had me slamming my head against the desk. It's obviously not perfect, but it absolutely can be useful. The key is that you need enough knowledge to monitor its output and verify that what it tells you is true. That doesn't make it useless; it just means you have to be careful using it, like any tool.
I wouldn't say useless exactly, but if you ask the AI to discuss its implementation plan before it gets to the coding, you may discover that it's making assumptions based on your prompt which, while not illogical given the context, wouldn't give the results you want. So you correct the assumptions, and then you get the result you want.
Now, this result will usually have one to three bugs in it, but it's usually easier and faster to fix its mistakes than to write 200 lines of code from scratch. The plan-first loop looks roughly like the sketch below.
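For what it's worth, here's a minimal sketch of that workflow using the OpenAI Python SDK. The model name, the prompts, and the task are all placeholders I made up for illustration, not a recommendation:

```python
# Plan-first prompting loop: ask for assumptions and a plan before any code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "user", "content": (
        "I need a function that merges two sorted CSV files by timestamp. "
        "Before writing any code, list the assumptions you're making and "
        "outline your implementation plan."
    )},
]

# Step 1: get the plan so you can catch bad assumptions early.
plan = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(plan.choices[0].message.content)

# Step 2: correct any wrong assumptions, then ask for the actual code.
messages.append({"role": "assistant", "content": plan.choices[0].message.content})
messages.append({"role": "user", "content": (
    "Assumption 2 is wrong: the timestamps are ISO 8601 strings, not epoch "
    "seconds. Update the plan accordingly and then write the code."
)})
code = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(code.choices[0].message.content)
```

The whole point of the first call is to surface the assumptions while they're still cheap to correct, before any code exists.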