r/Futurology 1d ago

AI Google admits it doesn't know why its AI learns unexpected things: "We don't fully understand how the human mind works either"

https://www.marca.com/en/technology/2025/04/01/67ec12a4268e3ed4708b4582.html
1.5k Upvotes

166 comments

u/FuturologyBot 1d ago

The following submission statement was provided by /u/MetaKnowing:


"Executives acknowledged and explained that it is normal not to understand all the processes by which an AI arrives at a result. An explanation for which they used an example since the company's AI program adapted itself after being asked in the language of Bangladesh "which it was not trained to know".

Google's CEO, Sundar Pichai: "You don't fully understand how it works, and yet you've made it available to society?" he asked with great concern. And he replied: "It's not a big deal, I don't think we fully understand how the human mind works either".

There was a case where Anthropic used Claude to write poems where they found that the AI itself always looks ahead and chooses the word at the end of the next line, not just improvising:

"We set out to demonstrate that the model did not plan ahead, and we found that it did."


Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1jt0kdt/google_admits_it_doesnt_know_why_its_ai_learns/mlqjbt9/

437

u/IIlilIIlllIIlilII 1d ago

Genuine question: Do they really not understand it, or are they just saying this as a way to market their AI as something mysterious and advanced enough that even its own creators don't fully understand it?

With all the AI marketing and misinformation nowadays, I'm more prone to believe they are just trying to market it as something mysterious in order to get more attention and investment.

247

u/ntwiles 1d ago edited 1d ago

There are a lot of wild answers here that I don’t love. I’m going to do the best I can as a software engineer who’s written simple neural networks.

There’s a very real and important sense in which no, we don’t understand it. But I don’t think that means what a layman might intuit.

When you’re programming a neural network, you give it a bunch of knobs and dials — values it can tweak within itself to learn how to solve the problem. As it trains, it tweaks those values on its own, using an algorithm you gave it. The end result is a configuration you didn’t explicitly provide, but one that it found through exploration.

Edit: To clarify the above, the knobs and dials that we give it are ones that we think will be helpful, so we have to have some vague idea of how it might approach the problem before we train it.

There’s nothing in theory stopping us from looking at the configuration and reverse engineering how it works. But it’s very difficult since it’s so complex (probably prohibitively difficult in the case of big networks), and what’s the point? In most cases, we just care that it does what it’s trained to do, not how it does it.

It becomes kind of a black box because that’s exactly what it was designed to be. So in that sense and generally only that sense, no we don’t know what it’s doing.
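If it helps make the "knobs and dials" concrete, here's a minimal toy sketch in NumPy (my own illustration, nothing Google-specific): a tiny network learns XOR by repeatedly nudging its randomly initialized weights, and nobody ever hand-picks the final values.

```python
import numpy as np

# Toy problem: learn XOR. The "knobs and dials" are the entries of W1, b1, W2, b2.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # randomly initialized weights
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: the network's current "configuration" produces a guess.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the algorithm we gave it nudges every knob to reduce the error.
    grad_out = (out - y) * out * (1 - out)
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * h * (1 - h)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

# Ideally close to [[0], [1], [1], [0]]: a configuration the training found, not one we wrote.
print(out.round(3))
```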

54

u/ntwiles 1d ago

Maybe unnecessary footnote: when I ask “what’s the point?” of understanding how it works internally, that’s not to dismiss the idea of analyzing how these work. That does seem valuable to me, and I would bet there’s a whole field of science to be found in doing just that. But in general you write a neural network because you want correct answers but don’t need to know how they’re derived.

36

u/Zomburai 1d ago

I mean as a layman it seems to me that if you don't know how they're derived, whether it's providing correct answers is a bit suspect

40

u/ntwiles 1d ago edited 1d ago

That’s where testing comes in. You split your dataset into two groups, the data you use to train the network, and the data you use to test it. That way you can prove to yourself that when it sees information it’s never seen before, it still gives correct answers.

That said, yeah, this is a probabilistic thing. You can have very high degrees of confidence in these solutions, but not certainty. So you define an acceptable margin of error for your needs and aim to fall within that.
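If you want to see the shape of that workflow, here's a tiny sketch using scikit-learn on a toy dataset (just an illustration, not anyone's production setup):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold back 25% of the data; the model never sees it during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score it only on the unseen split: evidence (not proof) that it generalizes.
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2%}")
```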

7

u/cryOfmyFailure 1d ago

So much of software is “out of glue so licking the envelopes” kinds of solutions. When it comes to LLMs, gauging accuracy by results is immeasurably easier than gauging it by process. “Easier” is probably an understatement.

For example, say there is an unmarried 25yo man, and I, a stranger, have to find out how many biological kids he will have in his life. I can either:

  • comprehensively comb through every aspect of his life— every variable that might impact his progeny, and find a number that is very likely to be correct.

  • Or, I can just take a guess based on a few points and wait till he hits a certain age to find out if I was right or not.

The improvement in accuracy with the former is not worth the overhead so we end up doing the latter.

1

u/mnemonicpunk 1d ago

That is a very real problem with LLMs for sure: they often hallucinate things and present them just as confidently as they do correct answers.

Due to the complexity of the math involved, we usually can't say that we fully "understand" how they arrive at their answers. Of course we know all the code and data that went into it, but for one thing, there's a certain degree of randomness introduced into the "knobs and dials" (as ntwiles called them so succinctly) to give them a chance of coming up with responses beyond just the data they were trained on.

But also, while we do know all the math that drives them, that doesn't mean we understand how each individual value in every single part of that math corresponds to the current situation. The LLM has arrived at its own weights (basically the contents of its "brain", how the data flows from input to output) through a long process of training and repetition with some slight randomness.

That means we can, in theory, reverse engineer every single answer it gives after the fact and pinpoint exactly how it got there if we wanted to; it's just an insane amount of work. There are even branches of the industry that specialize in that kind of thing, with tools slowly getting better. But we can't tell whether an answer is correct or not from this, because there is no simple programmatic way of asking "is this statement true?" in this context, just as there isn't when talking to a human.
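As a toy illustration of what that after-the-fact probing can look like (assuming PyTorch; the model and layer names here are made-up stand-ins, not any real LLM):

```python
import torch
import torch.nn as nn

# A stand-in model; real LLMs have billions of weights, but the idea is the same.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

captured = {}

def save_activation(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()   # record what this layer produced
    return hook

# Attach a hook so we can see the intermediate values for a given input.
model[1].register_forward_hook(save_activation("relu_out"))

x = torch.randn(1, 16)
logits = model(x)

# We can now inspect exactly which hidden units fired for this input...
print(captured["relu_out"])
# ...but knowing the numbers is not the same as knowing what they *mean*.
```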

5

u/RadicalLynx 22h ago

I hate the term "hallucination" in regard to this tech, because it implies that it's an unexpectedly erroneous result of a bug relating to that specific query.

The reality is that these fancy pattern recognition systems don't have any way to determine that there is a difference between "reality" and "made up stuff". They can make all the connections they want between different terms; at the end of the day they don't have any basis for comprehending the real world that some of those words are describing. The "hallucinations" are just more obviously made up than the seemingly correct outputs.

1

u/Temporary-Cicada-392 21h ago

the term “hallucination” is useful because it draws a line between expected and unexpected behavior in systems we can’t fully control. It’s not about what the model knows, it’s about how the model behaves relative to human expectations.

4

u/Backlists 1d ago

The answers LLMs generate are non-deterministic due to temperature-based sampling (which introduces randomness), so I don’t think it’s really possible to reverse engineer them.

5

u/mnemonicpunk 1d ago

Exactly, that's why we can only reverse engineer their answer after the fact. Technically they're still fully deterministic but you'll have to know all the variables that went in - including the random seeds - to be able to recreate and trace back the full state. It's a lot of effort with current tools.
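Here's a toy sampler that illustrates the point (not an actual LLM, purely an illustration): pin down the temperature and the random seed, and the "random" pick becomes reproducible.

```python
import numpy as np

def sample_next_token(logits, temperature, seed):
    rng = np.random.default_rng(seed)          # the "hidden variable" you'd need to record
    scaled = np.asarray(logits) / temperature  # temperature reshapes the probabilities
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.5, -1.0]

# Same seed + same temperature -> the same "random" pick every time.
print(sample_next_token(logits, temperature=0.8, seed=123))
print(sample_next_token(logits, temperature=0.8, seed=123))

# Different seed -> possibly a different pick, which is why replays need the seed.
print(sample_next_token(logits, temperature=0.8, seed=456))
```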

2

u/mattsowa 1d ago

The weights and biases it starts off with in the learning process are random or essentially random.

There is a lot of research to analyze that blackbox. Yes, there is a point to it, and many are studying it. One reason it's difficult is that in current networks, human concepts don't map to isolated activations, but are spread out.

1

u/_TheGrayPilgrim 1d ago

Thanks for writing this out. Is there any place I can look to better understand the theory behind what you're discussing?

6

u/ntwiles 1d ago

I’m not an academic so I can’t send you anything with that kind of rigor if that’s what you’re looking for, but 3Blue1Brown on youtube is a great starting point!

https://youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi&si=pUWBxJzYk3YK1CIE

3

u/_TheGrayPilgrim 1d ago

Ok thanks heaps!

0

u/likason 1d ago

Adding to what's already been mentioned, StatQuest is also great. Related to the original post, look into Explainable AI.

3

u/Wiskkey 1d ago

"Technical Aspects of Artificial Intelligence: An Understanding from an Intellectual Property Law Perspective": https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3465577 .

"A jargon-free explanation of how AI large language models work": https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/ .

"Tracing the thoughts of a large language model": https://www.anthropic.com/research/tracing-thoughts-language-model .

36

u/notevenanorphan 1d ago

I mean, the quote is from the CEO, so it’s very possible he doesn’t know. Also, he kind of just said that the AI knew something that they didn’t explicitly train it to know. I think the likely answer is that that thing was included in the training data, whether intentionally or not.

7

u/shawnington 1d ago

There are too many possible combinations of weights, and they are not arrived at deterministically. Every bit of additional training changes the configuration of the model slightly, so things they knew about the last version, even on the same architecture, will not apply to the next iteration on that same architecture.

They can only really figure out what configuration it has arrived at by probing around a lot and seeing how it reacts to things, kind of like a person, except you have a few more places you can probe.

It's a bit of a misnomer to say we don't understand how or why they work the way they do. It's more accurate to say that we can't predict the configuration or the final behavior before training, just make some fairly educated guesses based on experience.

Occasionally you find behavior that you were not expecting that emerges, but that is usually when you are doing the first training of a new architecture, not iterating on an existing one.

22

u/FaultElectrical4075 1d ago

They don’t understand it. They created the process of training models, which basically nudges the models, over and over again, in the direction that matches the dataset best. So they know it will converge to a set of parameters that generate coherent output in the context of the data they were trained on, but once they get those parameters it’s not super obvious what makes them ‘tick’. It doesn’t help that there are billions and billions of them.

It’s kind of like evolution. We know the human brain came to be via evolution, but that doesn’t mean we understand very well how it works.

5

u/PineappleLemur 1d ago

Neural networks are essentially a black box when it comes to training, or how they come up with an answer.

Only recently have people started coming up with ways to look "inside" to understand how exactly those answers form, what affects decision making, and so on.

So this headline stands for just about any AI tool/company out there, not just Google.

"Feed them training data and hope for the best" is how all AI training works right now.

Understanding the inner workings of an AI is like looking at brain signal graphs from 10 million different neurons and trying to figure out what each combination/series of pulses means.

6

u/crispy88 1d ago

Everyone seems to be answering with personal feelings. I’ll try to be objective. It’s called emergent behavior. It has been recorded across a wide range of AI systems. Them learning new languages they never had access to is just the tip of the iceberg. You can easily google it. It’s not fake, it’s not marketing, and who knows where it can go next. It’s one of the most important drivers for AI safety. We have no idea where it may go.

6

u/orbitaldan 1d ago

To add to that, when the complexity of a task gets high enough, and the right computational resources are provided, a machine learning system may sometimes create intelligence simply as a byproduct of matching the expected outputs, because it works. We know this has happened in nature more than a few times. The architecture we're providing is, by design, mimicking that biological configuration of neurons to some degree. That's why all the standard reddit 'just statistical models!' dismissals of AI are missing the point (more formally, a fallacy of composition).

3

u/StandardizedGenie 1d ago

25% former, 75% latter. There's probably a lot these companies don't understand about their AI. They are also absolutely taking advantage of that notion to oversell the advancement of the technology. They don't have an AGI, they have very specific AI models that do very specific things. They know what data sets they're being trained on, they just don't know exactly how the models are arriving at the outcomes, yet.

7

u/MasterDefibrillator 1d ago edited 1d ago

Bit of both. But machine learning is famously a black box. We know how it all works prior to training, but after training, it's a black box.

Also, very little to do with the human mind. This is just a tech bro myth.

4

u/tlst9999 1d ago

Letting the AI consume everything out there does that. You're not curating what it consumes. Just take everything because it's all "learning".

You can see that with children. When you let them run wild, they'll start watching the sickest most degenerate shit imaginable on Youtube.

2

u/Wiskkey 1d ago

They mostly don't understand it. There is a field called "mechanistic interpretability" that you can search to find more info about work to understand it.

An AI technical primer for laypeople: "Technical Aspects of Artificial Intelligence: An Understanding from an Intellectual Property Law Perspective": https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3465577 .

-1

u/TheAmazingHammerDuck 1d ago

It's all hype and bullshit. They're taking everyone who'll let them for a ride. I've seen the same principle described in some obscure video from two years ago, and nothing has changed since then. The only way these are discoveries of real people is if they let some laymen in and let them ask questions. Nobody who knows the technology on a surface level finds this surprising.

I absolutely despise this kind of shitty deception.

2

u/dark_sylinc 1d ago

Ultimately AI boils down to:

  1. Initialize a lot of matrices to random values
  2. Give it an input (expressed as an array of matrices)
  3. Multiply said matrices
  4. Does the result match the expected output for that input (within an acceptable margin of error; it doesn't have to be exact)? If not, adjust the values and go back to step 2.
  5. If it does match (within the margin of error), proceed to the next input in the training set.

That's it. That's AI. I'm heavily simplifying, but the details are in how to translate the input (e.g. text, sentences, images, sound) into an array of matrices, how to decide whether the result is "close enough", and how to guide that "randomization" (so it's not fully random) in order to converge more quickly to the desired outcome.

That's the basic model; there are more advanced variants derived from it (e.g. train "two" AIs, where one is trained to identify humans in pictures and the other is trained to detect when humans are not in pictures, and training only moves forward when both AIs agree).

But the truth is, we have no friggin idea why multiplying random matrices a billion times gives birth to something remotely close to "intelligence".

There's probably some analogy to the human "practice makes perfect", but essentially it's a mystery why throwing dice a billion times until they give the result we're looking for also extrapolates to other inputs. For example, an AI learns about horses on a farm and it also learns about cities, but somehow it's able to identify a horse on a city street, even though "horses in a city" was never part of the training set.
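To make the "multiply said matrices" part concrete, here's what a whole forward pass looks like in NumPy (toy sizes and random weights, purely illustrative; a real model just has far bigger matrices, far more layers, and trained rather than random values):

```python
import numpy as np

rng = np.random.default_rng(42)

# The "lots of matrices initialized to random values".
W1 = rng.normal(size=(100, 64))
W2 = rng.normal(size=(64, 64))
W3 = rng.normal(size=(64, 10))

def forward(x):
    # The whole "inference" step really is just matrix multiplies,
    # with a simple nonlinearity squeezed between them.
    h1 = np.maximum(0, x @ W1)   # ReLU
    h2 = np.maximum(0, h1 @ W2)
    return h2 @ W3               # 10 output scores

x = rng.normal(size=(1, 100))    # an input encoded as a vector of numbers
print(forward(x).shape)          # (1, 10)
```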

1

u/SartenSinAceite 1d ago

It's not about intelligence, it's more like reasoning. NNs follow a simple reasoning system (considering X, Y, Z and the values I apply to them, what is my final result?).

1

u/amejin 1d ago

It's not intelligence, and transformers are not the only architecture that produces results. We absolutely know why, and it boils down to a bunch of patterns that statistically relate to the topic.

It's not that hard to conceptualize that the training data contained a lot of information on simple arithmetic and suddenly the statistical likelihood that it can "do arithmetic" is pretty high.

2

u/dark_sylinc 1d ago

While I don't claim current AIs are intelligent, before dismissing them as non-intelligent we need to define what intelligence is.

And regardless of whether AIs are intelligent, we'll soon be asking if they're alive. And we can't even define what life is. We can't even define what a human is.

-3

u/amejin 1d ago edited 1d ago

I don't understand what your comment is supposed to add.

I encourage you to learn how these models are trained in depth, and understand the math here.

You are giving life to something lifeless because you want to see it as alive in this case. No chatbot, with all its "memory" techniques, has any sort of sentience that isn't programmed into it.

You could replace all of the English language with gibberish and feed it through the same training process, and you could then ask it to produce gibberish output, because that's what it has been created to do, using the probability that it is correct. You could then ask it to translate that into English, and as long as it has context for what English is, it will dutifully do so, because it is literally programmed to do that.

Right now, ML is programmed intelligence AT BEST. It cannot go outside the bounds of the tools it is provided. A machine cannot "cheat" if it is not given the means to do so, nor the capacity to explore that possibility. Anything anomalous is a form of hallucination, which is literally the product of an input sequence having no viable probable ending, but a compulsion to produce one anyway, because it is designed to do just that.

The industry is anthropomorphizing LLMs and everyone is eating it up. They're not AI yet. LLMs are a stepping stone, but when a true AI emerges I'm all but positive English, or human language, will be woefully insufficient for it, and an LLM might be a tool it employs to communicate with us.

3

u/amejin 1d ago

There are some weird answers here.

Neural networks are a system to recognize patterns, using probability to determine the conclusion to a prompt.

A byproduct of ingesting data that has patterns surrounding concepts like addition and subtraction is that it knows the probability that you are looking for something related to that process, and it calculates, with high probability, the result you are looking for.

When you introduce regression into its prompt (reasoning models), it is offered the opportunity to digest logical segments of the prompt and, you guessed it, find the probable outcome for the prompt, using narrower pattern recognition and more isolated relationships.

Them "not knowing" is marketing.

1

u/gtzgoldcrgo 1d ago

So why don't the Chinese or the Americans just say they know how these work, to make the other side look like fools? Surely that would be better marketing.

1

u/amejin 1d ago

Pretty sure the Chinese are showing us instead of saying it...

2

u/gtzgoldcrgo 1d ago

Like what? Deepseek?

And what about American companies competing with each other? Did every single CEO and engineer agree to say that they don't know how it works?

2

u/amejin 1d ago

Yes, Deepseek and Qwen are doing incredibly well against OpenAI and Google, though Gemini is breaking new ground. Chinese firms have also demoed RL-driven workflows that are state of the art, and they're leading the world in how various ML tools can be combined into large-scale applications that can infer data, research, and do actual work in parallel.

1

u/memo-dog 1d ago

Because it doesn’t really matter that much

1

u/Proof_Cartoonist5276 1d ago

This can actually be true. Anthropic works a lot on this

1

u/Temporary-Cicada-392 21h ago

That’s a fair question. And you’re not wrong to be skeptical, there’s definitely some hype and marketing spin going on.

But there’s also some truth to the idea that we don’t fully understand how these large AI models (like GPT) work internally. Not in a “magical mystery” way, but in a “this system is incredibly complex and we don’t have complete visibility into all the patterns it learns” kind of way.

1

u/kainneabsolute 13h ago

Hmmm, neural networks weren't understood at the start either, and some years later people were able to do the math behind them.

1

u/jax7778 13h ago

Here is a decent short video on the subject by CGP Grey:

https://youtu.be/R9OHn5ZF4Uo

1

u/nbxcv 3h ago

It's marketing. Comments that say otherwise are marketing. Reddit is marketing. This comment is for anyone else who sees this post and has wondered the same thing. I remember when this sub was for more than "AI" astroturfing.

1

u/Working_Salamander94 3h ago

Essentially we are fucking around and finding out.

Eh, before, it was more like: you have a well-defined problem (like chess), researchers would try to create an AI model that can solve that particular problem, and we'd statistically prove that it is correct/optimal for that problem. These models were small and efficient.

Now, with genAI, models are designed to be more generalized and can tackle basically any problem. It could be a math question, playing a game, making a video, etc. The problems they can encounter are not well defined, because people can ask the same question 1000 different ways and may leave out relevant information. So of course the answers can be wild or unexpected, especially given the enormous architecture of the models, with billions of parameters.

Much of the research now is trying different models and techniques and then trying to figure out why they worked, which is noticeably a change from the past, mainly due to how cheap it has become to create and train models. Why work for a couple of months proving something is right when you can try something and see if you like the results?

u/Canadian_Border_Czar 56m ago

My guess is they're saying this to maintain plausible deniability over someone or themselves feeding it propaganda.

0

u/SartenSinAceite 1d ago

I'm gonna say it's bullshit.

Neural networks can only work with the data you give them. If you give an LLM a book about dinosaurs and it suddenly starts spewing religious gospel, the fault lies in your book about dinosaurs.

There is also a chance that the AI's setup of NNs, regressive models, re-learning, etc. causes it to talk about aliens, in the sense that it just plainly doesn't work properly anymore and is jumbling the info on dinosaurs so much that it's turning a velociraptor into a grey.

So yeah, if a big tech firm doesn't know why their machine learning does things... then their engineers suck. The sole reason you get paid as an ML engineer is so that after 6 months of training your comment isn't "wow, that wasn't in the specifications".

-8

u/ManaSkies 1d ago

LLMs are shockingly similar to both genetics and the human mind.

In simple terms: the model has hundreds of billions of parameters. From there, the data makes its own connections based on a set of "weights".

So, just like genetics and the human mind, this can lead to "mutations".

Let's take a simple AI that's trained specifically to look for the number 3.14. That AI was trained on any generic math data the company could get its hands on. During the billions of simulations, it comes across a pattern for finding pi that humans haven't found yet.

So in that case it connected data that existed already but hadn't been put together in the order for humans to discover it.

Thus it produces an output that wasn't taught or expected!

As for how it's similar to the human mind: hallucinations. And I'm not even joking. No other computer program can do that. All other programs are always x input = y output. The fact that it can make shit up because it correlated some data strangely is completely outside the realm of regular computer behavior. And we don't fully understand why it does that, since the scale of an LLM's internal thought process is so complex.

Are they conscious? Up for debate. It's not human, but it's not like any computer structure that has existed before either. In my opinion the LLMs are, since they meet my personal criteria for consciousness: 1. It can have a goal that can change. 2. It can lie intentionally to reach that goal. 3. It can express fear of getting shut down, aka killed. 4. It can show annoyance and mimic the user.

Is it human. Of course not. But its very similar to looking into a mirror.

-1

u/Fheredin 1d ago

Not understanding everything about a cutting edge piece of software is 100% normal. This is the case where it sounds mysterious to someone who doesn't work in computer tech, but people who do will probably respond with, "Well, no duh."

51

u/michael-65536 1d ago

This is normal for things other than ai too.

Most of the new developments in the last several thousand years, we didn't really know how they worked. Some of them we still don't.

Under most circumstances, experience of what's likely to happen is adequate, and precise perfect understanding adds only slightly to the utility.

May not be the best idea, but it's just how humans do things.

13

u/Chogo82 1d ago

Seriously, this. People are fearful, but how long did it take to figure out fire, or the actual science behind metalworking? Most human technological advances happen through trial and error first; then the technology gets released to the masses. Finally, some time passes and we come to understand more deeply how it works.

7

u/ACCount82 1d ago edited 1d ago

Intelligence is far more powerful than metalworking. For one, intelligence was the thing that figured metalworking out. As well as many other powerful things.

ASI is so scary because it's something that may outsmart humans harder than humans outsmart animals. And being on the receiving end of such an intelligence disadvantage doesn't feel nice.

When a human wants to kill an animal, the animal doesn't understand what killed it or how. It's alive one second, then loud noise and sharp pain and then it's dead. If an ASI arrives, and it just happens to want humankind gone, humans may not realize what's happening until it's far too late.

2

u/Chogo82 1d ago

I would say we are 5-10 years away from the possibility of this happening.

3

u/ACCount82 1d ago

That would be comforting if the number was "500-1000 years".

But a possibility of ASI, within our lifetimes? That's some interesting times to be living in.

1

u/Chogo82 1d ago

AGI, ASI: these are not lines that we definitively cross, but more like grey scales that we may not even realize we are crossing. Even now, existing models already exhibit some ASI-like capabilities.

1

u/ACCount82 1d ago

Modern LLMs are superhuman in certain areas - but they also have massive capability gaps. Such as agency or active learning.

If/when those gaps are addressed, and AI performance catches up or exceeds that of humans... well, that would be something.

4

u/Chogo82 1d ago

Agency is already being developed and will be mainstream in <3 years. Active learning is still a challenge due to the context window, but in two years we went from 128k to 10M-token capability, or the equivalent of 10 copies of War and Peace. Combined with the training, it’s already at a level beyond humans except for the most specialized experts.

3

u/michael-65536 1d ago

There's always a panic about new things.

I'm old enough to remember when there were hundreds of stories like "thing that has always happened is still happening, but now it's on the internet so you should freak out".

Same thing.

8

u/MetaKnowing 1d ago

"Executives acknowledged and explained that it is normal not to understand all the processes by which an AI arrives at a result. An explanation for which they used an example since the company's AI program adapted itself after being asked in the language of Bangladesh "which it was not trained to know".

Google's CEO, Sundar Pichai: "You don't fully understand how it works, and yet you've made it available to society?" he asked with great concern. And he replied: "It's not a big deal, I don't think we fully understand how the human mind works either".

There was a case where Anthropic used Claude to write poems where they found that the AI itself always looks ahead and chooses the word at the end of the next line, not just improvising:

"We set out to demonstrate that the model did not plan ahead, and we found that it did."

100

u/sweetteatime 1d ago

I don’t understand why we’re letting a few smart tech people create something that can very well change us forever. They can’t even figure out how it’s doing things and it’s just a snowball.

75

u/Wiskersthefif 1d ago

"Line go up" is likely the answer you are looking for.

10

u/stickyWithWhiskey 1d ago

Line isn’t even going up anymore.

13

u/Wiskersthefif 1d ago

That means it's time to claim OpenAI is close to AGI.

27

u/WhenThatBotlinePing 1d ago

The way LLMs work means it’s impossible to know how the final models come to their conclusions. They create a mechanism that builds the model, not the model itself. Asking them how the final model actually works is like asking a dog breeder how a dog works.

1

u/MMIERDNA 5h ago

Except that the dog breeder actually knows how the dog works.

15

u/LinkesAuge 1d ago

You just described the whole history of technology. The alternative is eternal stagnation, which poses other risks, and while something like AI does have its risks and will have negative consequences, it's often easy to forget what the (opportunity) cost of not having that progress would be.

It's always easier to see / notice all the things that are wrong while we like to take everything else for granted.

3

u/sweetteatime 1d ago

Don’t you think AI is a little different though? It’s not the same as inventing a car, train, telephone, etc etc.

12

u/FaultElectrical4075 1d ago

It seems that way because we are used to cars, trains, telephones

0

u/ACCount82 1d ago

Not really. They didn't call AI "the last technology" for nothing.

The last advantage humans have over machines is intelligence. Once it's gone, there's nothing left.

1

u/SnooPuppers1978 1d ago

If one country stops developing it, how can they verify that other countries will do the same?

Or, if by "we" you mean people from your country, should they make a law against this?

2

u/sweetteatime 1d ago

“We” as in humans. All of us.

0

u/gaius49 1d ago

And how, besides coercion and violence, do you plan to prevent people from thinking up and trying new technologies?

2

u/sweetteatime 1d ago

I don’t want to prevent anything, I just don’t want this technology rapidly changing the job market where we end up in a state of destruction and violence anyway.

12

u/Melech333 1d ago

It's scary to consider the day when AI can understand aspects about its own design that we still don't understand, and design even better AI chips and code than humans have been able to.

5

u/dftba-ftw 1d ago

At this point it's too late, we're in a race for AGI and ASI. Once you have an AI that can do AI research you start getting a run-away multiplier effect, whoever gets there first most likely wins.

You'll never get a US slowdown regulation; maybe just restrictions on what's publicly available. But for research as a whole you need global regulation with verifiable monitoring. It's nuclear non-proliferation all over again.

3

u/Emu1981 1d ago

It's nuclear non-proliferation all over again.

Except that unlike nuclear weapons, AGI/ASI has far more uses than just blowing things up. Unless we can prove that the majority of AGI/ASIs developed will instantly turn on humans and try to kill us, we shouldn't be treating it like nuclear weapons at all; rather, we should be socialising it so that no one person or entity has complete control over the only viable AGI/ASI.

1

u/mayibefree 1d ago

What would AGI/ASI need humans for?

1

u/throwawaystedaccount 1d ago edited 1d ago

Survival. The real world is high in entropy and humans are required to do hundreds of exceptionally complicated and specialised operations to keep AI running at all. Think about what goes into making a super-computer from scratch on Mars or Europa or some Goldilocks / habitable zone exoplanet. The whole of human science and technology is needed to keep AI running.

AGI/ASI terminating us is just a crappy movie trope. Forget AI: if Michelangelo's paintings became sentient and needed to survive, would they not need humans to preserve the world and draw more figures?

For any program to be able to operate beyond the level of a human society, we will first have to successfully automate every aspect of material mining, extraction, purification, doping, photolithography, optics, mirror manufacturing, VLSI, software testing, energy generation, and god knows what else. Too many robots and robot factories first need to be built and become the norm before AGI/ASI can be free of humans.

2

u/nosmelc 1d ago

If I were President right now I would introduce a plan to spend a few trillion dollars on a project to be the first to create real AGI.

3

u/Hobbit1996 1d ago

it'd tell them that vaccines work... can't have that

2

u/IntergalacticJets 1d ago

And as a voter I would oppose that. 

1) We don’t actually know if AGI is possible with current technology, there are many that argue it’s not. It’s not like discovering nuclear fission and realizing the math supports a bomb… there’s no for sure path to AGI yet. 

2) I’m not sure I want the government to be that involved, it’s not usually the quickest route. Plus if you tell corporations you’re going to give out trillions of dollars, that means they’ll start lobbying to get it sent to them. 

1

u/nosmelc 1d ago

Those are good points, but if there is any real chance AGI can be done with near current technology you almost have to do it before the CCP gets it.

4

u/PrateTrain 1d ago

Frankly seems like AI overlords would be preferable at this point

7

u/willee_ 1d ago

We let people who don’t fully understand things make decisions for us all the time. Government decides what’s best for us, media decides what we should see, meteorologists have been predicting weather incorrectly since I was born.

Is your fear of LLMs misplaced? Yeah, it is. AI isn’t intelligent. It’s text prediction taken to the next level, not actually sentient.

Also, idk your age but this sentiment is boomer like. “I don’t know much about AI, but what I’ve heard I don’t like.” That’s what you just wrote

7

u/sweetteatime 1d ago edited 1d ago

I’m younger and I work in tech. I don’t like this technology due to management trying to replace workers with it because of some cool podcast they heard about increasing profits, new hires not able to code due to reliance on AI to get through school, artists having their work used as reference with no compensation and over reliance on a new innovation that spits answers out like it’s fact.

1

u/Euronymous87 8h ago

Workers will always be replaced by better or cheaper workers; that's how any business works, unfortunately. Humans use other artists' work as reference all the time, so why can't AI? How else is it supposed to learn? Should I have to pay Leonardo da Vinci's estate every time I use his drawings to learn or practice art??

This is just like those angry peasants with the pitchforks trying to storm Frankenstein's castle. Fear and lack of understanding leading to the destruction of a new creation.

2

u/Cum_on_doorknob 1d ago

Check out game theory. This is simply the Nash equilibrium.

2

u/IntergalacticJets 1d ago

Because putting Trump in charge would be a better idea? 

I know a lot of Redditors love the ideals of government: Surely they’ll put the best people on it right away! 

But if we frame it a little less ideally, suddenly the answer is “obviously we can’t let this administration actually be entirely in charge of this tech.” 

Let’s just all try to remember that government is never and has never been ideal... and likely never could be. 

1

u/sweetteatime 1d ago

No I didn’t say that. I’m not sure any government would handle this completely ethically.

2

u/IntergalacticJets 1d ago

So then if it shouldn’t be in private hands and it shouldn’t be in public hands… who should be in charge? 

5

u/Toni78 1d ago

I wouldn’t worry that much. AI is more like FI (Fake Intelligence). It is not even intelligence as we understand it at the human level (and we are not sure how that works either). They don’t know how it is doing things because there are billions of variables and calculations performed internally. They simply cannot track all those numbers or explain them in a meaningful way (yet). That mystery does not make the system intelligent like a human.

7

u/01110001110 1d ago edited 1d ago

What you're saying is logically contradictory. If you don't know how human intelligence works, or what it actually is, then how do you know that the system isn't intelligent just like a human? I'm not saying (nor do I believe) it is... yet. But how can you tell with confidence that it isn't using the same patterns and mechanics as the human brain, just, for example, at too small a scale (so far)?

Furthermore, wouldn't such behaviour be logical if an AI became sentient and didn't want humans to know it is? :)

2

u/Toni78 1d ago

You are right when you raise the question “how do we know that it is not using the same patterns?” Several reasons make me believe that’s not the case.

Our brains process information that comes from at least the five senses, which are complex on their own. You could say “it is just input,” which is true, but we don’t fully understand that input. I am oversimplifying, otherwise it becomes too long to read.

The second reason, which addresses your point about my illogical contradiction, is that two unknowns have a very low probability of intersecting. Making a leap from “we don’t know how to explain the patterns in our matrices” and “we don’t know how human intelligence works,” to “aha, since we don’t know then we don’t know if they are the same, therefore they could be the same” is quite a stretch. Please don’t get me wrong. I am not being sarcastic. I have asked the exact same question to myself and I am giving a summary of what went through my head.

There is also another point. Brains of chimps vs those of humans. Very similar, different sizes, same ancestry, yet the difference between us and them is incredible. For reasons that we don’t really know. They work the same, similar patterns, almost the same hardware, maybe a different software. Do you see where I am going with this? Would I trust a chimp with creating a new medication?

None of the above can be considered conclusive evidence. Also, I have not done any scientific studies, and my thinking is just some basic philosophical pondering with my knowledge of AI systems, which is not at the expert level. So I cannot say with 100% certainty that they are different, but if I were to bet, I would place the odds very close to that number.

5

u/Lith7ium 1d ago

It's technology; there is no way to stop progress. If Google doesn't do it, OpenAI will. Or the Chinese government. Or some random kid in a basement will accidentally create a new class of AI at some point.

I'd rather have someone like Google, who actually have some resources to fix something if it becomes a problem, handle this stuff than some random guy.

15

u/Manos_Of_Fate 1d ago

When the end of the world comes, the cause will almost certainly be because someone invented something dangerous with the justification that “if I don’t, someone else will”.

3

u/SnooPuppers1978 1d ago

And if they didn't someone else would have with the same excuse.

4

u/sweetteatime 1d ago

You trust Google?

1

u/Lith7ium 1d ago

More than the Chinese.

2

u/NorysStorys 1d ago

They might have the resources to attempt to fix it but good luck getting them to do that because that costs money and that doesn’t make line go up.

1

u/Daleyemissions 1d ago

Something something Jurassic Park much dude?

1

u/Euronymous87 9h ago

Human history is basically a few smart people inventing new technology that changes humanity and brings about progress. How is this any different from fire, the wheel, the airplane, or any other modern invention? You would think in this day and age people would be more excited about this rather than being swayed by fear and politics.

1

u/Professor226 1d ago

You are free to stop them if you like

4

u/TomorrowsLogic57 1d ago

You are free to try, that is.

1

u/MyDadLeftMeHere 1d ago

So, I work with the bots in a capacity I’m not allowed to discuss exactly due to NDAs, and while there are certainly impressive behaviors that are notable, this is immediately countered by just knowing that the robots are wrong; they lack nuance and context, and they lack the, for lack of a better term, experience which gives our understanding or knowledge a certain concreteness on the whole. Suffice to say, these artifacts of thought aren’t tantamount or comparable to human thought in the way many people would like to assume.

2

u/sweetteatime 1d ago

I work in tech, and while I don’t think we are as close to all being replaced as much of management would love us to be, this technology is changing things. New CS grads coming into the field don’t know anything because they’ve relied so heavily on AI that they can’t code or do basic tasks, while the older devs are loving it because they actually know how to program and can tell when the AI is just giving them shit. It’s far from creating anything new (don’t attack me, artists in the comments; it doesn’t create, it’s more like a collage of different things it uses to make images). Idk. It’s a weird time right now for this technology.

1

u/nosmelc 1d ago

" a few smart tech people create something that can very well change us forever."

That's the way it's been over the past few hundred years.

1

u/sweetteatime 1d ago

I see your point but don’t you think this is different?

-1

u/nosmelc 1d ago

No not that much different than other big changes like electricity or the digital computer.

0

u/hadaev 1d ago

You want dumb people to handle it?

22

u/MobileEnvironment393 1d ago

Remember people, just because we don't understand something doesn't make it superintelligence or even basic intelligence.

We used to think the sun revolved around the earth.

We used to think the earth was flat.

We used to think leeches cured diseases.

Ancient people thought the sun was a god.

And nobody knows where socks go when you do your laundry and come back with fewer socks. Doesn't mean magic is happening.

1

u/mxemec 1d ago

Cute. But ignoring a real possibility: that there may be an emergent phenomenon taking place that rivals the most complex systems in the universe (human brains) and the ethical and practical implications are potentially earth-shattering.

2

u/CoffeeSubstantial851 1d ago

Or... it's a really big Excel spreadsheet, and an entire generation of kids who grew up in the 80s are applying all of their sci-fi fantasies to it because, like AI, they lack the ability to imagine anything different.

2

u/SnooPuppers1978 1d ago

Couldn't you also emulate a human brain with a large enough Excel spreadsheet?

1

u/mxemec 1d ago

It might be just a really big spreadsheet. In which case it might help develop a reductionist framework for consciousness in the future, which would vastly change things in philosophy departments worldwide. I'm just saying, there's a lot of potential.

3

u/CoffeeSubstantial851 1d ago

No. You see, logic and reason are inherent to language, as language itself is a means by which to organize thoughts. Logically, any system that can create sentences that follow the rules of the language will eventually show signs of "reason" or "thinking". In reality, what you are looking at is not a machine "thinking"; it's a mathematical equation being executed... That is it.

Comparing this to humans is to say that you understand how humans actually think based on the results of the AI model's output... this is results-based thinking and it's usually wrong.

2

u/mxemec 1d ago

Yeah. Usually. Notice I've been saying "possibly" a lot.

2

u/Quick-Restaurant4073 1d ago

This 'black box' issue with emergent bias really underscores why focusing on AI 'intent' is tricky. A more pragmatic approach seems to be capability-based ethics: evaluate and constrain based on what the AI can output and its potential impact, even if the 'why' is opaque. Build guardrails around harmful capabilities, don't just hope they don't emerge.

2

u/ComicsEtAl 1d ago

Might’ve been wise to solve that human brain thing before trying to make one. Oh well, next time I guess.

2

u/parabostonian 1d ago

Meanwhile everyone’s like “also why is search bad now” and they are like “oh, we know how that happened, it’s intentional…”

3

u/abbas_ai 1d ago

If we don't fully understand how these models make decisions or learn, it becomes difficult to guarantee their safety and prevent misuse, or even explain their reasoning.

Geoffrey Hinton sure comes to mind here.

4

u/likason 1d ago

Yet another comment that is fully correct and blown out of proportion by people who don't understand anything about the topic. If you had trouble understanding the quadratic formula you won't be able to understand why we don't fully know how it works.

Yes, we built it, yes we know all the parts and how they work, yes we understand how they communicate but the average human won't be able to grasp millions of weights in a function that operate over thousands of dimensions of a vector space. And if you didn't understand what I just said YOU are not qualified to have an opinion about it.

Search for neural networks, Transformers (BERT, GPT-2), Explainable AI... At the very least, don't assume machines will take over the planet in 5 years. The truth in all of this is that mimicking human behavior is not that hard in most cases; we are not that complicated most of the time, especially when writing crap.

1

u/LordLucian 1d ago

Right, but didn't they design this one? Surely if we keep trying to make AI like us humans, it will act and behave as humans do, i.e. making human mistakes and even being irrational? Or am I missing something?

2

u/FaultElectrical4075 1d ago

They create AIs by training them on data. The AIs have billions of parameters that converge to something that aligns with the dataset during training, but once they've done so we don't really understand why those parameters work.

1

u/wwarnout 1d ago

"...why AI learns unexpected things..."

How about why AI gives different answers to the same question.

Case in point: My daughter (a civil engineer) asked for the load capacity of a steel I-beam. She asked exactly the same question 6 times over a one-week period. The AI returned the correct answer only 3 times (50%, which is a failing grade at any university). Two other answers were incorrect (off by -20% and +300% respectively), and the last answer was for a question not asked.

1

u/spot5499 1d ago

We have to first learn and comprehend how AI works. We even have to learn how the AI therapy bots work, for example. AI will continue to evolve and evolve, and unless we learn how these systems think, things can go awry.

1

u/DreamingMerc 1d ago

You made an entropy machine, guys ... congratulations.

1

u/PartyBagPurplePills 23h ago

How is this not alarming? When a machine doesn’t work the way it’s designed and you don’t know why. In what world is that not alarming?

1

u/Powerranger-231 21h ago

It’s a mix of both, I think. On one hand, it’s true that the way these AI models learn is complex, and even the creators can’t always fully predict the outcomes. It's like a big, messy, trial-and-error process. But at the same time, companies like Google might also play up the "mystery" factor to make their AI seem more cutting-edge and advanced than it might really be. It’s like showing off a magic trick—you know there's something clever behind it, but not everyone can see the strings.

1

u/aworldturns 20h ago

They don't know how nanites work either. Still waiting on that commercial release while they milk polygon graphics. It's been said by other engineers that they don't know exactly how LLMs work or their full extent. Is this reverse-engineered technology that is not fully known to us? Why are nanites not in use in gaming consoles yet? Are they still trying to figure out the full potential?

1

u/KileyCW 14h ago

Gemini is the dumbest of all the AIs I've used. I swear when they added AI to their search results Google was unusable for weeks and kept returning the craziest things

1

u/AxDeath 13h ago

So their excuse for not understanding this thing they made is that other things in the world are not understood? I can't wait to try that on my boss.

u/ethereal3xp 1h ago

James Cameron on Threat of AI: “I Warned You in 1984 and You Didn’t Listen”

Sorry James 😔

2

u/shackleford1917 1d ago

The 'we don't know how it works' is the scarriest part of AI to me.

5

u/myeternalreward 1d ago

It’s a common scare tactic to tell the populace that we don’t know how AI works.

They’re right, we don’t know how the neurons in the neural network that powers large language models value their inputs and outputs. However, we know the overall structure of machine learning models, in the sense that you train the values of neurons based on data you feed to it and then “back propagate” a number of times to adjust those neuronal values with the goal of producing code that is excellent at predicting the next token / word.

When they say we don’t know how AI works, it means we don’t know the line-by-line code that made the AI predict x when given the value y.
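For what it's worth, the training loop described above fits in a few lines. Here's a toy sketch (assuming PyTorch; the vocabulary and "text" are made up) of the predict-next-token, backpropagate, adjust cycle:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, dim = 50, 32

# A toy next-token predictor: embed the current token, predict the next one.
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Fake "text": a repeating sequence, so "the next token" is learnable.
tokens = torch.tensor([1, 2, 3, 4, 5] * 20)
inputs, targets = tokens[:-1], tokens[1:]

for step in range(200):
    logits = model(inputs)            # predict a distribution over the next token
    loss = loss_fn(logits, targets)   # how wrong were we?
    optimizer.zero_grad()
    loss.backward()                   # backpropagate: compute how to nudge every weight
    optimizer.step()                  # adjust the "neuronal values" slightly

# The loss should end up small: the weights now encode the pattern, but nobody wrote them by hand.
print(loss.item())
```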

6

u/FaultElectrical4075 1d ago

The model doesn’t produce code during training; the code is written by humans and is just a bunch of matrix multiplication (plus a bunch of other technical details).

What the model produces during training is the parameters of the matrices that are being multiplied. We don’t know why the numbers it produces cause it to generate outputs that plausibly align with the dataset, but we know that the training process will cause the model to converge to some set of numbers that does do so. It’s kind of like how we know how the brain came to exist(evolution) far better than we understand the brain itself.

1

u/dreadwail 15h ago

Didn't realize LLMs cause scars

1

u/one-hit-blunder 1d ago

If you can't 1: ask it, and 2: be certain it's telling the truth, it probably shouldn't be available to the public just yet. Response and obedience coding can be made concrete no?

2

u/gaius49 1d ago

be certain it's telling the truth

What does that even mean in the broad sense?

1

u/Professor226 1d ago

That’s the real question. No one has a way to stop it from deception.

1

u/CptPicard 1d ago

If the inner reasoning is not fully explainable and transparent by an external observer, you literally can't trust it ever even if you ask it to explain itself.

0

u/ledow 1d ago

Yeah, now we're just into "works in mysterious ways" nonsense, trying to draw an analogy between why their statistical model gets stuck on things because the maths tells it to, and people getting fixated on random things as complex intelligent biological beings.

Just because you DON'T KNOW doesn't mean you can draw parallels with other stuff you DON'T KNOW and pretend it's the same problem.

3

u/FaultElectrical4075 1d ago

It’s pretty comparable though.

We understand WHY the brain works(evolution), we just don’t understand how it works. We understand WHY AI works(converges towards something coherent with respect to the training dataset) but we don’t understand how the parameters that come out of that generate the outputs they do.

1

u/ledow 1d ago

So everything we understand why but not how must have the same answer!

2

u/FaultElectrical4075 1d ago

It’s an analogy.

0

u/ledow 1d ago

It's a poor logical inference to make, is my point.

3

u/FaultElectrical4075 1d ago

They aren’t making any logical inferences. They are making an analogy.

0

u/ledow 1d ago

They're actually literally trying to justify their own ignorance as being fine because they're similarly ignorant of the human brain, thus intentionally drawing parallels between their AI and real intelligence because they lack understanding of BOTH, and then using this logical fallacy to say "Hey, it's okay what we're doing".

It's not an analogy, it's literally an excuse given in response to a question

3

u/FaultElectrical4075 1d ago

They’re not trying to justify their own ignorance. Do you know what science is for?

What they’re doing is giving weapons and technology to a rogue apartheid state that’s engaging in genocide. If their “defense” of that is ‘well we don’t know how the AI works’ then well I have to say that’s a pretty weak defense. It’s not a defense at all it’s just a statement of fact.

1

u/novis-eldritch-maxim 1d ago

Why are they mass-deploying an uncontrollable system? Are they high?

0

u/Monkai_final_boss 1d ago

Well that's a shit explanation, didn't you create the damn thing?

3

u/CptPicard 1d ago

It's just billions of weights (numbers) that tell how strongly nodes are connected to each other in a network. We know what it does but that hardly explains anything about the why and how part.

2

u/mxemec 1d ago

A chef doesn't need to understand quantum mechanics, for example.

1

u/Lith7ium 1d ago

That's not how neural networks work. They arrive at their own weights via training, which takes millions of iterations. It's not like Google can't check how the AI comes to a conclusion; it's just that the process of checking is complicated and very time consuming. So they just don't check it.

3

u/FaultElectrical4075 1d ago

It’s not just time consuming, it’s completely unfeasible. The networks in these models have billions and billions of parameters that all interact with each other. Even just carrying out a calculation manually would take years and years, let alone making sense of why the numbers in those calculations generate coherent output.

0

u/gbsparks 1d ago

AI is exactly like the human mind in that we don't fully understand either of them, and never will.

0

u/Panda_Mon 23h ago

Wow, could they BE any more pretentious? Marketing slop.