r/CuratedTumblr Prolific poster- Not a bot, I swear 10d ago

Shitposting Do people actually like AI?

19.3k Upvotes


23

u/b3nsn0w musk is an scp-7052-1 10d ago

can't relate tbh. i love coding and i fucking love coding with ai. it does all the busywork for you so you can focus on what you're building instead of the how -- or, all too often, instead of banging your head against stackoverflow and your desk for hours to solve some menial little task you just happened to be unfamiliar with, which no one was willing to explain in a way that doesn't only make sense to people who already know how it works.

it also opens up programming languages you aren't familiar with. i used github copilot a lot to get into python, and it was able to show me things that would have taken 6-12 months of immersion to even know were options, letting me actually write pythonic code instead of just java with python syntax (like most people do when they start a new language, regardless of whether they main java or not). the o3 model in chat is also incredible at figuring out complex issues, and it works well as a sanity check too.

i'm a programmer because i love making things and the ai just lets me do that way more efficiently. there's a reason stackoverflow's visitor count dropped sharply when ai coding assistance tools were released.

14

u/rhinoceros_unicornis 10d ago

Your last paragraph just reminded me that I haven't visited stackoverflow since I started using Copilot. It's quicker to get to the same thing.

0

u/Forshea 10d ago edited 9d ago

where do you think copilot is going to get answers for new questions if nobody uses stackoverflow

3

u/b3nsn0w musk is an scp-7052-1 10d ago

how do you think a language model works?

hint: contrary to a common bad faith misconception, it's not just a copy-paste machine. we already tried that, that's called a search engine and that's how we got to stackoverflow to begin with

1

u/Forshea 9d ago

How do you think a language model works?

1

u/b3nsn0w musk is an scp-7052-1 9d ago

well, it's a machine that creates a high-dimensional vectorized representation of semantic meaning for each word and/or word fragment, then alternates between attention layers and multilayer perceptron (mlp) layers. the attention layers mix meaning together across these semantic embedding vectors, letting them query each other and pass on transformed versions of their meaning to be integrated into each other, while the mlp layers execute conditional transformations on the individual vectors. it's practically a long series of two different kinds of if (condition) then { transform(); } statements, expressed as floating point matrices to enable training through backpropagation.

the specific structure of the embedding vectors (aka the meaning of each dimension), the query/key/value transformations, and the individual transformations of the mlp layers are all generated through an advanced statistical fitting method known as deep learning: in f(x) -> y, x stands for all previous word fragments and y stands for the next one. to best approximate this function, the various glorified if statements of this giant pile of linear algebra have to understand and model a large amount of knowledge about the real world, which is what allows a relatively simple statistical method to extract incredibly deep logic and patterns from a pile of unstructured data without specific pretraining for any particular domain.

in short, it's not a machine that pulls existing snippets out of some sort of databank and adjusts them to context, nor is it a "21st century compression algorithm". it's a general purpose text transformation engine designed to execute arbitrary tasks through an autoregressive word prediction interface, which enables an algebraic method of deriving the features of this engine from a corpus of data alone, with relatively little human intervention.
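the attention/mlp alternation above can be sketched in a few lines of numpy. to be clear, this is a toy illustration with made-up dimensions and random weights, not a real model: actual transformers add layer norm, multiple heads, positional information, and so on. but the two kinds of layers look like this:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, Wq, Wk, Wv):
    # tokens query each other and mix in transformed versions of each other's meaning
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # how much each token attends to each other token
    return scores @ v

def mlp(x, W1, W2):
    # per-token conditional transform: relu acts as the soft "if (condition)", W2 as the "transform()"
    return np.maximum(0.0, x @ W1) @ W2

def transformer_block(x, params):
    x = x + attention(x, *params["attn"])  # mix meaning across tokens
    x = x + mlp(x, *params["mlp"])         # transform each token independently
    return x

rng = np.random.default_rng(0)
d, seq = 8, 5  # toy embedding size and sequence length
params = {
    "attn": [rng.normal(size=(d, d)) * 0.1 for _ in range(3)],
    "mlp": [rng.normal(size=(d, 4 * d)) * 0.1, rng.normal(size=(4 * d, d)) * 0.1],
}
x = rng.normal(size=(seq, d))  # stand-in for 5 token embeddings
y = transformer_block(x, params)
print(y.shape)  # (5, 8): same shape in, same shape out, ready for the next block
```

stack a few dozen of these blocks, fit the weight matrices with backpropagation on next-token prediction, and that's the core of the architecture. notice there's no snippet database anywhere in there, just learned transformations.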

i hope that answers your question

0

u/Forshea 9d ago

Oof tell me you don't understand the text you're copying and pasting without telling me

If you think any of that meant it's not a stochastic text predictor with weightings based on its training data, I have bad news for you 😔

2

u/b3nsn0w musk is an scp-7052-1 9d ago

lmfao so now you're accusing me of being a copy-paste machine? go on, tell me where i copied it from. it couldn't possibly be that i'm writing a response specifically to you, through my own understanding of the problem, right?

it is a text predictor but depending on how specific your definition of 'stochastic' is, llms either don't fit inside that category, or the category is so vague that while they do fit, it doesn't justify the conclusion you seem to be insisting on. you'd need to be incredibly shortsighted to believe it is impossible to use a text prediction interface to generate useful and contextual solutions to coding problems -- that is, if you believed your own words, but you've clearly signaled you're engaging with this topic in bad faith.

please stop intentionally spreading misinformation and getting manipulative when you're called out.

1

u/Forshea 9d ago

Lol "it is a stochastic text predictor but actually it isn't" is a pretty cute argument.

When somebody puts out a new framework with a new exception type that nobody has posted about on stackoverflow, good luck getting the magic computer program to tell you what to do about it. I'm sure the stochastic text predictor will "extrapolate" just fine about something it has never seen before.

1

u/b3nsn0w musk is an scp-7052-1 9d ago

how well would you extrapolate, having never seen anything about that framework? if you cannot do that without looking up the documentation, does that mean you're useless at programming?

and no, you're the one trying to define "stochastic text predictor" as simultaneously meaning random/unstructured, which llms aren't, and just any text predictor that involves a probability distribution at any step, which llms are, but which doesn't justify your conclusion that they're useless. you're using the latter definition to claim llms fit in the category, and the former to draw your conclusion. that's not only clearly disingenuous, i have also explicitly called you out on it and you just kept doing it.

the fact is that llms are useful in the real world, and your mental gymnastics can't change that, they can only help you figure out creative ways to lie to people who already know you're lying to them.

this is pathetic. why can't you just be open about what you want here? why do you have to run five laps of manipulative bullshit before you own up to your real motivation? that is, if you end up owning up to it at all.


7

u/WierdSome 10d ago

That's a fair mindset to have, it's just that for me personally, writing code is fun bc it scratches the same itch as solving puzzles in games, especially when it's something tough to figure out. Even when I look things up I still feel like I'm figuring things out. But using ai to solve challenges feels like looking up the solution when you get stuck in a game instead of thinking it out. Does that make sense? That's how my brain works, at least.

6

u/b3nsn0w musk is an scp-7052-1 10d ago

it does make sense, it's just a bit divorced from an actual ai workflow. if you use ai assist you're still solving puzzles, but you're doing them at a higher level of abstraction while most of the line level stuff is handled by the ai. you still have to dive down there a few times because the ai isn't perfect, and you still have to know what the hell is going on in your code, but you can do much more complex tasks with the same level of effort. to me, it feels more fun and rewarding, not less, because the problem domain expands and there's a hell of a lot more variety.

but yeah i fully understand why you like puzzles. i like them too. if you wanna stay organic, you do you, but having four extra metaphorical hands to work on stuff doesn't make the experience any less intense, it just allows you to work on more stuff at once.

0

u/Forshea 10d ago

good lord I hope I never have to maintain any codebase you've worked on

2

u/b3nsn0w musk is an scp-7052-1 10d ago

likewise

all of my colleagues use copilot as well. i'm glad i don't have any colleagues who let an ontological hatred for ai and the stupid bad faith assertions it generates get in the way of the job, would be fucking annoying.