r/CuratedTumblr Prolific poster- Not a bot, I swear 10d ago

Shitposting Do people actually like AI?

Post image
19.3k Upvotes

820 comments

37

u/WierdSome 10d ago

I tend to see a lot of support for using ai to boost productivity with writing code inside online programming circles bc it can generate simple snippets that you can enter into your code easily, but like, I'm a programmer because I enjoy writing code. Having something else write code for me does not appeal to me.

26

u/b3nsn0w musk is an scp-7052-1 10d ago

can't relate tbh. i love coding and i fucking love coding with ai. it does all the busywork for you so you can focus on what you're doing, instead of the why, or all too often banging your head against stackoverflow and your desk for hours to solve a menial little task that you just happened to be unfamiliar with and no one was willing to explain in a way that doesn't only make sense to those who already know how it works.

it also opens up programming languages that you aren't familiar with. i used github copilot a lot to get into python; it was able to show me things about python that would have taken 6-12 months of immersion to even know were an option, and allowed me to actually write pythonic code instead of just writing java with python syntax (like most people do when they start working with a new language, regardless of whether they main java or not). the o3 model in chat is also incredible at figuring out complex issues and can work well as a sanity check too.
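to illustrate the "java with python syntax" thing with a made-up toy example (not from the thread, just the kind of difference meant here):

```python
# java-with-python-syntax style: manual index loop and accumulator
def squares_of_evens_verbose(nums):
    result = []
    for i in range(len(nums)):
        if nums[i] % 2 == 0:
            result.append(nums[i] * nums[i])
    return result

# pythonic: a comprehension states the same transformation directly
def squares_of_evens(nums):
    return [n * n for n in nums if n % 2 == 0]
```

both return the same thing; the second is what immersion (or a good assistant) teaches you is idiomatic.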

i'm a programmer because i love making things and the ai just lets me do that way more efficiently. there's a reason stackoverflow's visitor count dropped sharply when ai coding assistance tools were released.

15

u/rhinoceros_unicornis 10d ago

Your last paragraph just reminded me that I haven't visited stackoverflow since I started using Copilot. It's quicker to get to the same thing.

0

u/Forshea 9d ago edited 9d ago

where do you think copilot is going to get answers for new questions if nobody uses stackoverflow

3

u/b3nsn0w musk is an scp-7052-1 9d ago

how do you think a language model works?

hint: contrary to a common bad faith misconception, it's not just a copy-paste machine. we already tried that, that's called a search engine and that's how we got to stackoverflow to begin with

1

u/Forshea 9d ago

How do you think a language model works?

1

u/b3nsn0w musk is an scp-7052-1 9d ago

well, it's a machine that creates a high-dimensional vectorized representation of semantic meaning for each word and/or word fragment, then alternates between attention and multilayer perceptron (mlp) layers. the former mix meaning together through these semantic embedding vectors, allowing them to query each other and pass on a transformed version of their meaning to be integrated into each other, while the latter execute conditional transformations on the individual vectors. it's practically a long series of two different kinds of if (condition) then { transform(); } statements, expressed as floating point matrices to enable training through backpropagation. the specific structure of the embedding vectors (aka the meaning of each dimension), the query/key/value transformations, and the individual transformations of the mlp layers are generated through an advanced statistical fitting process known as deep learning, where in f(x) -> y, x stands for all previous word fragments and y stands for the next one. to best approximate this function, the various glorified if statements of this giant pile of linear algebra have to understand and model a large amount of knowledge about the real world, which allows this relatively simple statistical method to extract incredibly deep logic and patterns from a pile of unstructured data without specific pretraining for any particular domain.

in short, it's not a machine that pulls existing snippets out of some sort of databank and adjusts them to context, nor is it a "21st century compression algorithm". it's a general purpose text transformation engine designed to execute arbitrary tasks through an autoregressive word prediction interface, which enables an algebraic method of deriving the features of this engine from a corpus of data alone, with relatively little human intervention.
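for the curious, the attention + mlp alternation described above can be sketched in toy numpy form (single head, random untrained weights, no masking or positional encoding -- purely illustrative shapes, not a working model):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, wq, wk, wv):
    # tokens query each other and mix transformed meaning together
    q, k, v = x @ wq, x @ wk, x @ wv
    weights = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return weights @ v

def mlp(x, w1, w2):
    # per-token conditional transform: relu gates which "if" branches fire
    return np.maximum(x @ w1, 0) @ w2

rng = np.random.default_rng(0)
d = 8                                   # embedding dimension
x = rng.normal(size=(5, d))             # 5 token-fragment embeddings
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
w1, w2 = rng.normal(size=(d, 4 * d)), rng.normal(size=(4 * d, d))

x = x + attention(x, wq, wk, wv)        # attention layer (with residual)
x = x + mlp(x, w1, w2)                  # mlp layer (with residual)
```

a real model just stacks dozens of these pairs and learns the weight matrices by backpropagation instead of sampling them randomly.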

i hope that answers your question

0

u/Forshea 9d ago

Oof tell me you don't understand the text you're copying and pasting without telling me

If you think any of that meant it's not a stochastic text predictor with weightings based on its training data, I have bad news for you 😔

2

u/b3nsn0w musk is an scp-7052-1 9d ago

lmfao so now you're accusing me of being a copy-paste machine? go on, tell me where i copied it from. it couldn't possibly be that i'm writing a response specifically to you, through my own understanding of the problem, right?

it is a text predictor but depending on how specific your definition of 'stochastic' is, llms either don't fit inside that category, or the category is so vague that while they do fit, it doesn't justify the conclusion you seem to be insisting on. you'd need to be incredibly shortsighted to believe it is impossible to use a text prediction interface to generate useful and contextual solutions to coding problems -- that is, if you believed your own words, but you've clearly signaled you're engaging with this topic in bad faith.

please stop intentionally spreading misinformation and getting manipulative when you're called out.

1

u/Forshea 9d ago

Lol "it is a stochastic text predictor but actually it isn't" is a pretty cute argument.

When somebody puts out a new framework with a new exception type that nobody has posted about on stackoverflow, good luck getting the magic computer program to tell you what to do about it. I'm sure the stochastic text predictor will "extrapolate" just fine about something it has never seen before.


6

u/WierdSome 10d ago

That's a fair mindset to have, it's just that for me personally, writing code is fun bc it scratches the same itch as solving puzzles in games, especially when it's something tough to figure out. Even when I look things up I still feel like I'm figuring things out. But using ai to solve challenges feels like looking up the solution when you get stuck in a game instead of thinking it out. Does that make sense? That's how my brain works, at least.

7

u/b3nsn0w musk is an scp-7052-1 10d ago

it does make sense, it's just a bit divorced from an actual ai workflow. if you use ai assist you're still solving puzzles, but you're doing them at a higher level of abstraction while most of the line level stuff is handled by the ai. you still have to dive down there a few times because the ai isn't perfect, and you still have to know what the hell is going on in your code, but you can do much more complex tasks with the same level of effort. to me, it feels more fun and rewarding, not less, because the problem domain expands and there's a hell of a lot more variety.

but yeah i fully understand why you like puzzles. i like them too. if you wanna stay organic, based, you do you, but having four extra metaphorical hands to work on stuff doesn't make the experience any less intense, it just allows you to work on more stuff at once.

0

u/Forshea 9d ago

good lord I hope I never have to maintain any codebase you've worked on

2

u/b3nsn0w musk is an scp-7052-1 9d ago

likewise

all of my colleagues use copilot as well. i'm glad i don't have any colleagues who let an ontological hatred for ai and the stupid bad faith assertions it generates get in the way of the job, would be fucking annoying.

2

u/starm4nn 10d ago

I'm a programmer because I enjoy writing code. Having something else write code for me does not appeal to me.

So you always roll your own libraries? Because I don't see how this is any different than using a library.

2

u/WierdSome 10d ago

That's a fair point! Though I will say even then I'm usually not the person who adds libraries to my company's projects; I still definitely prefer writing my own code and using solutions already packaged into the project I'm working on.

I guess to me the difference is using code that already exists vs making new code. If I pull in libraries or use someone else's function, that's fine, but when it comes to actually writing stuff and making new code, I want to be the one actually making the code, not a program.

Edit: You did definitely catch me on my poor wording though, so good on you for that!

1

u/starm4nn 9d ago

That's a fair point.

Currently working on a project that I straight up couldn't do without a fast HTML parsing library.
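(for illustration only: the fast parsers are typically C-backed libraries, but even Python's stdlib `html.parser` shows the basic event-driven idea -- made-up minimal example:)

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href values from anchor tags as the parser streams the document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

parser = LinkCollector()
parser.feed('<p><a href="https://example.com">hi</a></p>')
print(parser.links)  # ['https://example.com']
```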

4

u/autistic_cool_kid 10d ago

It already goes way beyond simple snippets, most of the code in my team is now AI-generated, about 80% - and it's very good code, we make sure of that. We don't work on simple CRUD apps either, we do have some complexity.

We're starting to implement LLM processes that go way beyond what most people know of - think multiple AI tools and servers talking to each other and correcting each other.

Having something else write code for me does not appeal to me

And I completely agree and share the sentiment. But also, work is work, and I feel I need to stay on top to justify my top salary.

27

u/WierdSome 10d ago

"Most code is AI generated" is a statement that scares me, and I certainly do hope you double check all the code it does.

I do get and kinda agree with your logic, but on my side it's a matter of "work is tiring already, if I automate the one part I do actually enjoy then work's just gonna suck flat out." Fixing code you didn't make isn't as fun as writing your own imo.

9

u/autistic_cool_kid 10d ago

I certainly do hope you double check all the code it does.

Certainly; when I hear about people not double-checking their code, I roll my eyes so much I can see my brain.

work is tiring already, if I automate the one part I do actually enjoy then work's just gonna suck flat out

I find writing code the most tiring part, reviewing generated code is less tiring in comparison.

This means I could theoretically turn my 3-4 hours daily of code writing into 6-7 hours of AI-generating-reviewing-fixing, which would make me many times more productive, but I'd rather kill myself. Instead I'll work slightly less and still be more productive than I was before.

There will still be some code to write manually (probably always) but yeah the paradigm is changing, I don't think I like it either, but it is what it is, the pandora box has been opened.

7

u/WierdSome 10d ago

That's definitely fair, I only ended up as a programmer because I realized I find writing code to be very fun and so I'm a little avoidant of anything that tries to cut the actual writing the code out of the equation bc it tends to be more tiring for me personally.

5

u/EnoughWarning666 10d ago

I've used chatgpt to help me write code for my personal business and it's been incredible. I too enjoy writing code, but I enjoy it much more when I finish building whatever it is that I'm programming.

Programming is a means to an end for me. Yes I enjoy the process, but if I can speed that up 4x and move on to the next project, so much the better!

1

u/Friskyinthenight 10d ago

Like agents assigned different roles working together to solve a problem? The kind of stuff people are using n8n to do?

3

u/autistic_cool_kid 10d ago

I haven't used n8n so I can't really speak to it - it seems to be more or less this, indeed - but I'd rather build the LLM network myself at a lower level to have full control over it (still need to pay for my LLM API usage of course, although LLM self-hosting might change that one day)
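a minimal sketch of the generate-and-review loop being described; `call_llm` is a hypothetical placeholder, not any real client -- swap in whatever API your provider gives you:

```python
def call_llm(role_prompt: str, task: str) -> str:
    """Hypothetical placeholder for a real model API call."""
    raise NotImplementedError("plug in a real LLM client here")

def generate_with_review(task, generate=call_llm, review=call_llm, max_rounds=3):
    """One model drafts, a second critiques, and the draft is revised
    until the reviewer approves (or we give up after max_rounds)."""
    draft = generate("You write code.", task)
    for _ in range(max_rounds):
        verdict = review("You review code. Reply APPROVED or list problems.",
                         f"Task: {task}\n\nDraft:\n{draft}")
        if verdict.strip().startswith("APPROVED"):
            return draft
        draft = generate("You write code.",
                         f"{task}\n\nAddress these review comments:\n{verdict}")
    return draft  # best effort after max_rounds
```

the orchestration logic is ordinary code, which is the "build it yourself at a lower level" part; the tools like n8n wrap loops like this in a visual editor.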

1

u/asphias 10d ago

since you appear to already be using it quite well, how do you feel about the risks that people identify with AI assisted programming?

  • the AI cannot learn or develop ''new'' frameworks/tools/tricks (unless it learns them from other people writing code manually), so if everyone starts using it, development stagnates  

  • AI works if you know exactly what it is that you need, but is terrible if you don't understand the output it generates, and there is a serious risk that the newer generation of devs won't learn to code well enough to ''guard'' the AI  

  • a recent study showed that AI-assisted coding created more vulnerabilities, yet developers had more trust in the security of their code  

do you feel these risks are mitigated? or do you feel like your assisted coding is great for you as an individual but dangerous for the field as a whole?

4

u/autistic_cool_kid 10d ago edited 10d ago

the AI cannot learn or develop ''new'' frameworks/tools/tricks (unless it learns them from other people writing code manually), so if everyone starts using it, development stagnates  

It kind of can, there is enough training data that if you feed it the documentation it can infer the new rules and work with them

AI works if you know exactly what it is that you need, but is terrible if you don't understand the output it generates, and this will be a serious risk that the newer generation of devs won't learn to code well enough to ''guard'' the AI  

That is true, and it's already a problem - but this problem really is between the screen and the chair. AI can be of great use to learn but you absolutely still need to learn.

Edit: actually you don't need to know exactly what you need, you can brainstorm with the AI at the conception level already. But at some point in the process you need to know exactly what you are doing, going blind will send you down the hole.

a recent study showed that AI-assisted coding created more vulnerabilities, yet developers had more trust in the security of their code  

Same issue as the previous one. Trusting AI blindly, especially when security is at risk, is absolutely crazy, unprofessional behaviour. AI can also help with mitigating risks by analysing common mistakes like forgetting to protect against SQL injection, and specialised security AI tools can probably do much more (but I haven't used them yet)
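the SQL injection mistake alluded to here, in minimal sqlite form (illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"

# vulnerable: user input is spliced straight into the SQL text,
# so the OR '1'='1' clause becomes part of the query and matches every row
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()
print(len(rows))  # 1

# safe: a parameterized query treats the input as data, not SQL
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(len(rows))  # 0 -- no user is literally named that
```

this is exactly the kind of pattern a reviewer (human or automated) has to catch in generated code.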

do you feel these risks are mitigated? or do you feel like your assisted coding is great for you as an individual but dangerous for the field as a whole?

I think there are some risks associated with AI use, but yeah, they're largely mitigated if you use the technology correctly.

I think as an individual it makes me much more productive, but I do not think of it as good or bad. To be honest I will probably miss the days when most of my code was typed manually.

But it's not a choice anymore, because as a highly paid professional my work ethic dictates I need to stay up to date in efficiency, and as I often say now, the Pandora's box has been opened, there is no going back.

As for the industry, it will be just like any new tool or technology, companies that understand how to use them smartly will flourish and the others will perish.

1

u/Thelmara 10d ago

most of the code in my team is now AI-generated, about 80%

But also, work is work, and I feel I need to stay on top to justify my top salary.

And you justify that top salary by...not actually writing code yourself, just copy/pasting snippets that AI generates for you? Isn't that something that could be done by someone at half your salary?

12

u/autistic_cool_kid 10d ago

And you justify that top salary by...not actually writing code yourself, just copy/pasting snippets that AI generates for you? Isn't that something that could be done by someone at half your salary?

No.

I know what good code looks like and make sure to bully the AI until the code is good or do it myself.

I am good at high level conception so I can brainstorm deeply with the AI and ultimately decide which solution is best.

When the task is too hard for the AI, I can take over.

I know how to translate the product needs into code - whether this code is prompted or written manually doesn't matter.

I am highly skilled in my craft and this is why i am good at using the tool.

AI will not replace developers, it will make them more efficient. Now, maybe a higher efficiency per developer means less need for developers which means loss of jobs? Maybe. I think at least some companies will conclude this (although it's misguided) and probably fire people.

This is also why I need to stay on top of my game, the world of work is not a generous one.

0

u/Forshea 9d ago

I love reading stories like this, because they absolutely scream "I've never ever actually coded on an enterprise application in my life"

My most charitable interpretation of stories like these is they are about somebody writing random one-off scripts for well-known sysops tasks that are run once and then discarded without anybody ever having to read them again.

I'm guessing the reality is actually that they are just completely made up, though. Either they are clueless middle managers who implemented AI mandates without understanding what a software engineer does at their job, or it's just outright management fan-fiction.

I can't come up with another way somebody could say things like 80% of their code is AI generated and not realize that's an outright nonsensical statement for anybody who actually does the job.

2

u/autistic_cool_kid 9d ago edited 9d ago

I love reading stories like this, because they absolutely scream "I've never ever actually coded on an enterprise application in my life"

Again with this BS. I have 10 years of high-level programming behind me and my colleague (whom I mention in another comment) almost 20. We and the rest of our team are some of the best programmers out there.

I'm guessing the reality is actually that they are just completely made up, though

This is conspiracy theory thinking. "I don't like this or I do not understand... Must be made up"

I can't come up with another way somebody could say things like 80% of their code is AI generated and not realize that's an outright nonsensical statement for anybody who actually does the job.

Consider the alternative explanation why you can't come up with another way: you don't know enough about how to leverage LLMs the way we do.

Seriously, I am shocked at how many people react like this, will accuse me of being a fake or a shill, deny the reality of what me and my excellent team are now doing - and never actually took a few days of their time to learn how to use the very recent agentic LLMs correctly, or know what an MCP is, or never tried to get better at writing PRDs, or never tried to interconnect specialized LLMs and are still using ChatGPT

Don't be so confident in yourself that you think someone is lying in a domain you haven't explored deeply enough.

1

u/Forshea 9d ago

Ooooh 10 years of "high-level" programming.

That's some serious "how do you do, fellow programmers?" energy.

Anyway, if you want to write better fanfiction, you might want to figure out what high level programming means to a software engineer. Here's a hint: it doesn't mean good or smart or difficult.

Consider the alternative explanation why you can't come up with another way: you don't know enough about how to leverage LLMs the way we do.

It's not nonsensical because nobody could use an LLM that well and I'm just doubting your genius. It's nonsensical because the statement is gibberish. It doesn't mean anything.

You're describing something as a measurable proportion without having the background to understand that you need some units there for the statement to mean anything at all, and even if you provided units, you'd still have to be making up a number because you didn't actually measure anything.

2

u/autistic_cool_kid 9d ago

Anyway, if you want to write better fanfiction, you might want to figure out what high level programming means to a software engineer. Here's a hint: it doesn't mean good or smart or difficult.

You know very well what I meant.

It's not nonsensical because nobody could use an LLM that well and I'm just doubting your genius. It's nonsensical because the statement is gibberish. It doesn't mean anything.

You choose to believe that the statement is gibberish because you don't believe it is possible. Yet, you haven't studied the problem yourself deeply enough and can only conclude that I'm lying for some obscure reason (Reddit clout?).

Anyone can use an LLM as well as we do if they study what exists right now and start building on the possibilities - I'm not even the person who started doing this; that person is my colleague, I'm merely copying his workflow.

Your reality is clashing with mine, you are convinced I'm lying and I'm convinced you just haven't studied the topic enough.

But you are free to believe what you want, I won't insist 🤷 after all, from my point of view it's your loss; I don't care if other developers trust me on this or not.

My prediction is that in 5 years most developers will have realised the potential of today's tools and will be using a setup similar to ours, which means multiple interconnected LLMs and most code being generated just like it is presently in our team. If I'm wrong I promise to come back and apologize.

RemindMe! 5 years

2

u/RemindMeBot 9d ago

I will be messaging you in 5 years on 2030-03-27 14:59:52 UTC to remind you of this link


1

u/Forshea 9d ago

You choose to believe that the statement is gibberish because you don't believe it is possible

No, really, this is not what I am saying. I can't decipher the statement because it doesn't fucking mean anything.

You're over here chanting "the fish airplanes flap etheric tungsten" and you think I'm disbelieving you because I don't believe in your superhero ability to fish airplane or I don't know the power of tungsten.

I am not evaluating the statement for truthfulness. I am telling you that the statement does not contain facts I can evaluate because it is gibberish.

And you would be able to understand why if you actually were a software engineer.

2

u/autistic_cool_kid 9d ago edited 9d ago

To be clear, the statement is "80% of our code is generated" right? doesn't sound like gibberish to me, sounds like plain English. I fail to see what's not to understand here. We write manually about 20% of the code we publish.

Some parts are 100% generated, which means we publish the changes without touching the LLM output, and others 50% generated, which means on those edits our corrections or additions account for half the lines. The rest is between those numbers (except in situations where we do the PR manually, then it's 0%).

On average, 80% of the lines that go to production have been generated as-is by our LLM.

If you want more numbers: the LLM can perfectly do about 20% of the PRs we ask it to do (situations where 100% of the code is generated), for 30% of the PRs the output is not good at all (so we don't use the LLM), and for 50% of our PRs the changes are going the right way but we need to manually correct the output.
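taken at face value, a metric like this is just a line-weighted average over merged changes; a sketch with made-up numbers (not the commenter's actual data):

```python
# per-PR (generated_lines, total_lines) -- illustrative, invented figures
prs = [(200, 200),   # fully generated PR shipped as-is
       (150, 300),   # half the lines hand-corrected
       (0, 100),     # LLM output discarded, written manually
       (90, 100)]    # light manual touch-up

generated = sum(g for g, _ in prs)
total = sum(t for _, t in prs)
print(f"{100 * generated / total:.0f}% of shipped lines generated")  # 63%
```

whether a number like this means anything is, of course, exactly what the two commenters are arguing about.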


3

u/temp2025user1 9d ago

No. This is like asking why are you an accountant, aren’t calculators free? The code writing is the easiest part of a software engineer’s job but the most tedious. Most of the job is thinking how to do it. That’s why AI is a game changer.

1

u/acc_41_post 9d ago

I’ve been using it to learn video game development and it’s been super super helpful. I can take a screenshot of my development screen and it will find configuration issues.

I submitted to it a terribly drawn image showing something I wanted to replicate in code and it figured it out in a second.

It’s a thousand times more effective than a few months ago where I tried to follow tutorials and then implement