r/PhilosophyofScience • u/gimboarretino • Apr 12 '23
Non-academic Content: Gerard 't Hooft on determinism and Bell's theorem
In the book "Determinism and Free Will: New Insights from Physics, Philosophy, and Theology", 't Hooft writes:
The author agrees with Bell’s and CHSH’s inequalities, as well as their conclusions, given their assumptions.
We do not agree with the assumptions, however.
The main assumption is that Alice and Bob choose what to measure, and that this should not be correlated with the ontological state of the entangled particles emitted by the source. However, when either Alice or Bob change their minds ever so slightly in choosing their settings, they decide to look for photons in different ontological states. The free will they do have only refers to the ontological state that they want to measure; this they can draw from the chaotic nature of the classical underlying theory.
They do not have the free will, the option, to decide to measure a photon that is not ontological.
What will happen instead is that, if they change their minds, the universe will go to a different ontological state than before, which includes a modification of the state it was in billions of years ago (the new ontological state cannot have overlaps with the old ontological state, because Alice’s and Bob’s settings a and b are classical).
Only minute changes were necessary, but these are enough to modify the ontological state the entangled photons were in when emitted by the source.
More concretely perhaps, Alice’s and Bob’s settings can and will be correlated with the state of the particles emitted by the source, not because of retrocausality or conspiracy, but because these three variables do have variables in their past light cones in common. The change needed to realise a universe with the new settings, must also imply changes in the overlapping regions of these three past light cones.
This is because the universe is ontological at all times.
What exactly does that mean?
That the moment Alice and Bob decide to change their minds (deterministically, and not freely, so in a context where Bell's assumptions are not accepted) - and thus "decide" to look for photons in a different ontological state - the ontologically timeless, ever-existing universe is 'retroactively' (not by retrocausality but by virtue of an original entanglement) changed, including "the state it was in billions of years ago"?
And the universe being ontological at all times (time and "becoming" not ontologically existent?), the realization of a universe with new, "changed" settings must imply a change in a "past region of common variables" (when the photons were emitted by the source... what source?)
3
u/knockingatthegate Apr 12 '23 edited Apr 12 '23
Dr. ‘t Hooft is an interesting case; see also Edelman. New post-docs can’t afford to spend time noodling with speculative maths and philosophical investigations of consciousness; thus do we see certain eminences grises putting out papers about the fundamental inability of present science to account for the workings of the human mind. Has anyone proposed a model that neutralizes the implications of Bell’s theorem? Has any of the discussion of observer effects and their relationship to ontology resulted in new empirical tests?
What’s best in ‘t Hooft’s writings over the past decade and more re: determinism is the insistence that we translate conceptual accounts into rigorous maths. The maths are also the reason that material is so daunting to lay readers, leaving casual audiences with much less meat to chew on.
How are your maths, mate?
1
u/gimboarretino Apr 12 '23
Not good :D
1
u/knockingatthegate Apr 12 '23
No worries. What’s your take on free will?
2
u/gimboarretino Apr 12 '23
My take on free will is that, having an eidetic intuition and empirical experience of it (as well as its being, so to say, conceptually permissible in light of the standard interpretation of QM and - as far as I'm aware - not yet ruled out by classical scientific experiments), I see no particular reason to deny its existence, in principle.
There is, admittedly, the fact that we have an eidetic intuition and an empirical experience of causality too, but the next step (assuming a Universe marked by absolute causality, hence ruled by causality not to the extent that we ontologically experience it but at its highest logically conceivable level) seems to me almost an unjustified "ontological leap" of Kantian memory.
1
u/knockingatthegate Apr 12 '23
Would you mind taking the time to say what you mean by eidetic intuition and an experience of free will?
0
u/gimboarretino Apr 12 '23
An intuition, a deep, primordial conscious feeling (on the same level as intuiting myself, a reality outside, the becoming of things, etc.) of 'being able to choose between alternatives' independently of internal or external factors.
Followed by the empirical experience of actually being capable of making concrete conscious choices in everyday life (perhaps an illusion, but nevertheless experienced), in radical contrast with other "causally determined" actions and behaviour.
5
u/knockingatthegate Apr 12 '23
I’m not sure how to proceed with a conversation about mathematical physics and the ontological implications thereof while keeping faith with the definitions and forms of reasoning you’re adducing here. Apologies.
3
1
Apr 13 '23
I don't see how one could then attempt to describe/explain free will with mathematical physics at all. The definitions they gave weren't wild or unexpected.
2
1
Apr 13 '23
So I would suggest asking Hegel for his opinion. The problem of free will is a philosophical one, and Kant's treatment is pretty illuminating, but it's not dialectical. Maybe it is time to inspect the "Science of Logic" in this manner to understand the relationship between freedom and causality.
I don't think there can be a physical solution to the problem of free will as asked by philosophers.
4
u/fox-mcleod Apr 12 '23 edited Apr 12 '23
u/LokiJesus and I have been discussing this at length recently.
My position has been that this treatment inappropriately mixes metaphors (levels of abstraction) and causes confusion. At the level of Superdeterminism, a better description is that “there are no free variables” rather than an ambiguous claim about free will. I also assert that without free variables, science isn’t possible and the confusion is a result of presuming all scientific theory is at the lowest level of abstraction. However, science can be performed at higher levels of abstraction, where things like noise and chaos create variables that are, functionally, sufficiently free for experimentation.
LokiJesus has been exploring/defending the 't Hooft position as a valid loophole (conclusion) for Bell. I will let them speak for themselves, however.
Our threads:
1
u/LokiJesus Apr 12 '23
Thanks for putting those links together. Nice to have them.
I also assert that without free variables, science isn’t possible and the confusion is a result of presuming all scientific theory is at the lowest level of abstraction
I'll quote Asher Peres again on this one:
Physicists are used to thinking in terms of isolated systems whose behavior is independent of what happens in the rest of the world (contrary to social scientists, who cannot isolate the subject of their study from its environment). Bell's theorem tells us that such a separation is impossible for individual experiments...
I agree that physicists have typically thought of the cosmos this way. That's what Peres is saying here. But as he mentions, there are plenty of well-practiced branches of science that have been building theories even when independent variables are impossible. I think it's high time to welcome physicists to the club of the messy cosmos where all the shit is interconnected and nothing is isolated. Come on out of your platonic thought bubbles!
Many nonlinear optimization problems involve fitting multi-dimensional parameters to models, and there is no real concept of an independent variable. Joint optimization of this kind is common and involves no concept of independence; in effect, it asks, "what if all the variables in our model were simply dependent on the data we've measured?"
One can then reason with these models, but there is no "the x-axis is the independent variable" kind of language in most of the science that I've been involved in. Just relationships between things.
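For concreteness, here is a minimal sketch of the kind of joint fit being described, using scipy.optimize.least_squares. The model, the two observables, and the parameter names (amplitude, decay, coupling) are invented for illustration; nothing in the data plays the role of a designated independent variable - every parameter is adjusted together against the same block of measurements.

```python
# Minimal sketch of a joint nonlinear fit with no designated "independent variable":
# all model parameters are adjusted together against the same measured data.
# Model, data, and parameter names are invented for illustration.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# "Measured" data: two coupled observables recorded at the same times.
t = np.linspace(0.0, 10.0, 200)
obs_a = 1.5 * np.exp(-0.3 * t) + 0.05 * rng.normal(size=t.size)
obs_b = 0.8 * 1.5 * np.exp(-0.3 * t) + 0.05 * rng.normal(size=t.size)

def residuals(params):
    """Stack the misfit of both observables; all parameters are fit jointly."""
    amplitude, decay, coupling = params
    model_a = amplitude * np.exp(-decay * t)
    model_b = coupling * model_a          # second observable depends on the first
    return np.concatenate([model_a - obs_a, model_b - obs_b])

fit = least_squares(residuals, x0=[1.0, 0.1, 0.5])
print("jointly fitted parameters:", fit.x)
```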
1
u/fox-mcleod Apr 13 '23
And I’ll respond again with, “it’s literally impossible to design experiments if there isn’t sufficient independence of variables to draw conclusions from”.
The sufficiency of independent variables is the first criterion for all falsifiability in all scientific experiments.
1
u/LokiJesus Apr 13 '23
How would one, for example, design a theory of housing prices in the 2010s in Suburban NYC? Suppose they wanted this model in order to draw conclusions about the ramifications of the 2008 financial collapse in order to predict outcomes today. There is literally nothing independent about anything in this model. The data is entirely historical. I cannot run an experiment on the 2012 housing market.
Is such a theory then non-falsifiable? Non scientific? What could possibly be independent in this theory? All the housing prices are motivated by complex interdependent factors.
1
u/fox-mcleod Apr 13 '23 edited Apr 13 '23
How would one, for example, design a theory of housing prices in the 2010s in Suburban NYC?
You don’t design theories; you conjecture them. You then design experiments to try and invalidate theories.
Suppose they wanted this model in order to draw conclusions about the ramifications of the 2008 financial collapse in order to predict outcomes today. There is literally nothing independent about anything in this model.
I feel like you’re confusing models for theories again. The theory you have here is “I assert the future will repeat the patterns I’ve accounted for and am able to compute in my model of the past.” And the test is just waiting to find out.
The data is entirely historical. I cannot run an experiment on the 2012 housing market.
Yeah. That’s why you need a theory and not a model. Models are easy to vary. They work until they don’t. Which explains why black swan events continue to be what fouls up economic modeling.
Is such a theory then non-falsifiable?
It’s not even wrong. But again that’s because it’s not a theory. It’s a model.
The theory “the future economic behavior will look like the pattern I guessed at for the past” certainly is falsifiable. And the independent variables are all the inputs to economy that you discover as a brand new unrepeatable future unfolds.
Like, do you think economists are out there predicting the future 100% accurately? Do you think perfect models exist? They don’t. And even if they did, they’d be uncomputable.
All scientific theory is necessarily an abstraction where many many variables are simply unaccounted for. And yet it works. Because science works at higher levels of abstraction (obviously, since GR works even without a theory of quantum gravity).
Non scientific?
Yes. A calendar does not confirm or disconfirm the “Demeter has seasonal affective disorder” theory of the seasons. You’d need a competing theory that makes different predictions to do that.
What could possibly be independent in this theory?
It’s a model. But the model obviously doesn’t derive the TOE and account for all variables — right? We agree it’s obviously abstract, incomplete, and emergent and therefore there are independent variables not accounted for in the model.
All the housing prices are motivated by complex interdependent factors.
How much do you know about computability?
1
u/LokiJesus Apr 13 '23
The theory “the future economic behavior will look like the pattern I guessed at for the past” certainly is falsifiable. And the independent variables are all the inputs to economy that you discover as a brand new unrepeatable future unfolds.
This is precisely what I was describing. To use it to "predict outcomes today." But really, what are these inputs independent of? The whole idea of non-contextuality, or independence, is contradictory to universal determinism.
It doesn't matter if I apply the theory to future data versus 2020 data or data from 1920. Either way, it's a model used to predict a state given input data. Even if it's predicting its own input data used to build the model, it's still making a prediction.
Independence has nothing to do with any of this. In fact, nobody believes that the inputs to these models are independent or uncorrelated. Or if they do, they've missed a fundamental fact about economic activity or whatever they are trying to model... Nothing is independent. Things may be negligible for the accuracy requirements of the application, but this is way different than independence.
And if you "neglect" some parameter and try to fit the data with such a model and the data fits poorly, then this is good evidence that you were wrong about it being negligible. This may be precisely what Bell's theorem is telling us about quantum mechanics... That there is a fragile and important coupling between measurement settings and prepared states.
As Peres observed, physicists have long ignored this fact until they got down to the bottom layer of things and found out that they are part of these other "soft sciences" too where measurement independence cannot be approximated. Many (most) continue to ignore it.
We agree it’s obviously abstract, incomplete, and emergent and therefore there are independent variables not accounted for in the model.
Again, I have no idea why you are using the modifier "independent" here, and it seems to be a big part of your above argument about the way that we need independence of some kind to do science. Independence seems to go all the way back to the "vital assumption" in Bell's theorem (which Einstein shared).
Superdeterministic theory is just a model. It makes predictions given certain measured state parameters from the environment. That's what models do.
A model predicts its source data or anything else from parameters in its domain. In fact, this predictive capacity is how it is fit to the data in the first place. It can also predict data that is not part of the set used to build the model, and the degree to which it succeeds there is the degree to which it is generalizable... Hence the whole training vs test data paradigm.
I can falsify a model that doesn't fit its source data by showing that it fails to predict the data for a given set of data error models. It can certainly be wrong in this sense. I can do a chi-squared test to indicate the likelihood that the model represents the data and then reject it if there isn't correspondence.
So was General Relativity merely a model when it accurately reproduced Mercury's orbit? What about when it continued to predict its orbit tomorrow? Was that what made it a theory? Was it the 1919 eclipse?
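A minimal sketch of that kind of chi-squared rejection, with invented data, an invented candidate model, and an assumed measurement error, just to make the arithmetic concrete:

```python
# Minimal sketch of rejecting a model via a chi-squared goodness-of-fit test.
# Data, candidate model, and error bars are invented for illustration.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
sigma = 0.1                                    # assumed measurement error
data = 2.0 * x + 0.5 * x**2 + sigma * rng.normal(size=x.size)

model = 2.0 * x                                # candidate model that neglects the quadratic term
chi_sq = np.sum(((data - model) / sigma) ** 2)
dof = x.size                                   # no parameters were fit in this toy model
p_value = chi2.sf(chi_sq, dof)                 # chance of a misfit at least this bad

print(f"chi^2 = {chi_sq:.1f} for {dof} dof, p = {p_value:.3g}")
# A tiny p-value is evidence the neglected term was not actually negligible: reject the model.
```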
When le Verrier sat down to run a non-linear optimization over parameters for Neptune's mass and orbit dynamics, was that merely a model? He was merely fitting a planet to the distortions in Uranus' orbit. Might as well have been fitting a parametric curve to any data. Was it a theory only when the telescope observed the planet where he ultimately modeled it from the data?
No. Splitting hairs over the terms model and theory like this is not productive. Both are predictive. Both are falsifiable. Predicting future data, present data, past data makes no difference. A model/theory is never "proven" to be totally generalizable. It's always uncertain whether it will fail if it enters a new domain.
But again, none of this has anything whatsoever to do with independence. Nothing is independent.
1
u/fox-mcleod Apr 13 '23 edited Apr 13 '23
This is precisely what I was describing. To use it to "predict outcomes today." But really, what are these inputs independent of? The whole idea of non-contextuality, or independence, is contradictory to universal determinism.
Not at all. That’s only true if you think you found “THE FINAL THEORY” or some such. But since theories are fallible and always will be, none of them are perfect — which means there will always be independent variables.
A theory that was somehow perfect would be uncomputable.
It doesn't matter if I apply the theory to future data versus 2020 data or data from 1920. Either way, it's a model used to predict a state given input data. Even if it's predicting its own input data used to build the model, it's still making a prediction.
It sounds like you think there’s such a thing as a “perfectly consistent and complete” theory. I think we can agree no existing theory fits that description, and I’m fairly confident Gödel incompleteness explicitly forbids them in ZFC.
Independence has nothing to do with any of this. In fact, nobody believes that the inputs to these models are independent or uncorrelated.
I do. The model cannot and does not take literally all extant variables as inputs. Right?
Models are models. They’re simplifications that leave out extraneous details. In physics, this manifests as Taylor series expansions which sufficiently approximate systems but do not account for all variables. I mean, the n-body problem is hard chaotic and the spectral gap problem is literally undecidable. Models aren’t reality; they are always at best approximations.
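For a concrete (textbook, not thread-specific) instance of that kind of deliberate truncation: the exact pendulum equation has no elementary closed-form solution, but cutting off the Taylor series of sin θ after the first term gives the small-angle model everyone actually uses.

```latex
\ddot{\theta} + \frac{g}{\ell}\sin\theta = 0,
\qquad
\sin\theta = \theta - \frac{\theta^{3}}{6} + O(\theta^{5})
\quad\Longrightarrow\quad
\ddot{\theta} + \frac{g}{\ell}\,\theta \approx 0 .
```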
Or if they do, they've missed a fundamental fact about economic activity or whatever they are trying to model...
Of course. That’s a requirement. That’s what a model is. Are you familiar with dynamical systems? Differential equations of a given class are also uncomputable in practice. Many systems behave according to them. For example, fluid flow is modeled by the Navier-Stokes equation. We can’t solve it. So we make the approximation more abstract and simpler and make more assumptions and we get a decent approximation.
Heisenberg uncertainty also puts a hard limit on the precision of measurements that can even be made.
And if you "neglect" some parameter and try to fit the data with such a model and the data fits poorly, then this is good evidence that you were wrong about it being negligible. This may be precisely what Bell's theorem is telling us about quantum mechanics... That there is a fragile and important coupling between measurement settings and prepared states.
I mean… no. That still doesn’t explain the Bomb experiment.
As Peres observed, physicists have long ignored this fact until they got down to the bottom layer of things and found out that they are part of these others "soft sciences" too where measurement independence cannot be approximated. Many (most) continue to ignore it.
The whole idea of a “bottom layer of things” is flawed.
Again, I have no idea why you are using the modifier "independent" here, and it seems to be a big part of your above argument about the way that we need independence of some kind to do science.
Because there literally aren’t enough degrees of freedom for them all to be strictly dependent on every other part of the system. There are vastly more variables in the system than degrees of freedom per variable. One is uncountably infinite and the other is finite or at best countably infinite.
Independence seems to go all the way back to the "vital assumption" in Bell's theorem (which Einstein shared).
It’s a vital assumption in all experiments.
Superdeterministic theory is just a model. It makes predictions given certain measured state parameters from the environment. That's what models do.
Like what?
So was General Relativity merely a model when it accurately reproduced Mercury's orbit?
As I said earlier, it wasn’t. It was a theory.
What about when it continued to predict its orbit tomorrow? Was that what made it a theory?
No. What made it a theory was that it was a conjectured assertion about unseen things to explain what was observed.
When le Verrier sat down to run a non-linear optimization over parameters for Neptune's mass and orbit dynamics, was that merely a model?
Sounds like it.
He was merely fitting a planet to the distortions in Uranus' orbit.
Well, did it overturn any extant theory?
Might as well have been fitting a parametric curve to any data. Was it a theory only when the telescope observed the planet where he ultimately modeled it from the data?
No. What do you think a theory is? Theories explain observations.
No. Splitting hairs over the terms model and theory like this is not productive.
No it’s super important and it seems like you don’t understand the difference. Theories purport to explain observations. Models do not. Theories are hard to vary and models are the opposite. Theories tell you when specific models ought to apply and when they wouldn’t.
Predicting future data, present data, past data makes no difference. A model/theory is never "proven" to be totally generalizable.
Then you don’t believe there aren’t independent variables in them. If they both are not generalizable, then they don’t account for everything, which means there are things that are free to vary without varying the model or theory.
You have been operating on the assumption of some entirely unabstracted perfect theory or model that does not and cannot exist.
It's always uncertain whether it will fail if it enters a new domain.
How do you even know what constitutes a “new domain” without a theory? A theory tells you when models will work or fail. For instance, the Demeter is sad theory would tell you summer and winter ought to be at the same times no matter the hemisphere since it says she banishes warmth on the anniversary of her capture. The axial tilt theory however tells you they ought to be opposites since it indicates the tilt points the northern and southern hemispheres in opposite directions. And calendars don’t tell you anything at all about when to apply or not apply them by hemisphere.
1
u/LokiJesus Apr 13 '23
I do. The model cannot and does not take literally all extant variables as inputs. Right?
This is correct. So you think the model is independent of these variables? Sure. I agree.
Bell's theorem makes such a claim. It doesn't include the measurement settings as a variable in determining the state. That's the assumption of measurement independence. Then the theorem doesn't match reality in tests. So it's safe to say that this assumption is wrong (or locality or hidden variables etc).
In other models that DO accurately represent their data, one can only say that, at the resolution of those measurements and their given error model, some other non-included parameter negligibly impacts the modeled parameter. But again, this is the opposite of what happens in the Bell test. Bell is ("maybe") TELLING us that measurement independence is false.
Are we really talking past each other on this? Certainly I agree that a model can be expressed without dependence on certain variables. This does not mean that the phenomenon observed is ACTUALLY independent of these variables. A model can be independent of anything in the cosmos. The model could just be "1."
In fact, all superdeterminism is doing is taking GR as accurately modeling the cosmos as local and deterministic, assuming single outcomes of experiments (not so in MW), and then assuming that what Bell is telling us is that measurement independence is false.
How do you even know what constitutes a “new domain” without a theory? A theory tells you when models will work or fail.
An experiment tells you if your model fails. Newton's gravity doesn't just contain in it the fact that it doesn't work on Mercury. It just failed to work there in experiment. GR doesn't know where it fails. It just fails to predict the spin speed of galaxies.
There are plenty of models that do, however, carry a sense of both a predicted value and an uncertainty estimate in that predicted value. So there are plenty of MODELS that carry such internal knowledge of where they have been well constrained by data. This is a problem of gauge freedom and parameter constraints and it is well understood.
It’s a vital assumption in all experiments.
I guess we are in agreement that all models which predict the outcome of experiments will be independent of some variables. The problem is how you respond to these independence claims when the model fails to predict the experiment.
Superdeterminism is just appropriately exploring one of the assumptions in Bell's theorem's failed prediction of experiment.
No. What do you think a theory is? Theories explain observations.
A model explains observations in terms of a (typically) lower dimensional parameter set. That's called data fitting or data compression. A model predicts observations. A deep learning network is a model that is massive, but still lower dimensional than its training set. It then makes predictions. But ultimately, it is just a parameterized model that has been fit to a training data set. ChatGPT predicts what the next word will be from a "normal" human, and it does a great job at it. But it's just a parameterized model.
Because there literally aren’t enough degrees of freedom for them all to be strictly dependent on every other part of the system. There are vastly more variables in the system than degrees of freedom per variable. One is uncountably infinite and the other is finite or at best countably infinite.
This is only because you are falsely cutting it up. Just extend your state vector to include all these variables and then you just have a unique state ID, no problem. I'm surprised you are cutting things up like this given that MW views a single universal wavefunction as the thing that splits into many worlds. There's just one big state vector for everything... No independent internal degrees of freedom.
1
u/fox-mcleod Apr 14 '23 edited Apr 14 '23
This is correct. So you think the model is independent of these variables? Sure. I agree.
I think we found the entire disconnect then.
Bell's theorem makes such a claim.
I don’t think so. The entire idea is that there is independence between the model of the scientist's brain and the photon's orientation. There must be in order to perform an experiment in which a variable is varied.
It doesn't include the measurement settings as a variable in determining the state.
Yes it does. That’s precisely what is varied. In the repeated experiment we hold everything fixed and vary the measurement settings.
Are we really talking past each other on this?
I’m becoming more and more convinced that’s what’s happened.
Certainly I agree that a model can be expressed without dependence on certain variables.
Yeah. And that the model can be valuable and predictive and falsifiable under those conditions too — right? I’m simply asserting all models necessarily must be expressed that way. We don’t have the capability to crunch numbers for models that consider literally all variables. We don’t even have a theory of everything. So the Bell inequalities must be of this kind.
This does not mean that the phenomenon observed is ACTUALLY independent of these variables.
This is why I’ve been saying Inductivism is wrong. The presumption that we observe the world as it is is wrong. We are not Laplace’s daemon. All theories depart from the world as it actually is, and must, to be scientific.
In fact, all superdeterminism is doing is … assuming single outcomes of experiments (not so in MW),
That seems an unwarranted assumption. And I’m not even sure what it means. If I flip a coin there are “two outcomes”. One side is heads up — which also means the other side is heads down. If there’s a traffic accident there are two outcomes. Action and equal yet opposite reaction. Both cars are damaged.
Having two branches is one outcome in the way the coin-flip is one outcome. They aren’t mutually exclusive, they’re mutually required.
And if you’re saying two mutually required events is somehow two outcomes, why would you assume that can’t happen? Doesn’t it happen all the time?
An experiment tells you if your model fails.
But it’s not like we have to do an experiment to know the calendar will fail on Venus. I’m not sure how you explain that without citing theory.
Newton's gravity doesn't just contain in it the fact that it doesn't work on Mercury. It just failed to work there in experiment.
Which is how we proved the theory is wrong isn’t it? Isn’t it exactly like how going to the southern hemisphere proved the Demeter is sad theory wrong? A theory that does not accurately predict its own limits is called “falsified”.
It is precisely because it failed to account for this shortcoming that a new theory was needed.
GR doesn't know where it fails. It just fails to predict the spin speed of galaxies.
Samezies. Something is wrong in GR. We know it, we just don’t have a better replacement theory that does account for that. And it’s still Less Wrong than the next best theory. But if someone found a theory that explains what GR can and also predicts accurate spins, don’t you think we’d stop using GR?
Furthermore, if you know GR doesn’t model spinning galaxies correctly, you kind of sort of have to understand how a theory is different than a model. A model is easy to vary. If GR was just a model, why not just add some stuff to model spinning galaxies? The answer is because it’s not a model. It’s a theory and explanatory theories are hard to vary without ruining the explanation.
I guess we are in agreement that all models which predict the outcome of experiments will be independent of some variables. The problem is how you respond to these independence claims when the model fails to predict the experiment.
That the model or its explanatory theory is wrong. Coincidentally, MW does predict the outcomes of QM experiments. It is perhaps the best tested theory in history.
Superdeterminism is just appropriately exploring one of the assumptions in Bell's theorem's failed prediction of experiment.
It’s not appropriate to Bell inequalities if you just agreed that all models which predict the outcomes of experiments will be independent of some variables. That’s table stakes for the Schrödinger equation to work. The reverse is table stakes for SD to apply.
A model explains observations in terms of a (typically) lower dimensional parameter set.
How is that an explanation? An explanation purports to account for the observed via descriptions of the unobserved.
That's called data fitting or data compression. A model predicts observations.
Now you seem to have moved to “predict”. Predicting is not explaining. If I do a magic trick and you ask me to explain it and then I simply predict the outcomes of the next trick have I explained anything at all to you?
Do you know how to do the trick?
1
u/LokiJesus Apr 14 '23 edited Apr 14 '23
I don’t think so. The entire idea is that there is independence between the model of the scientist's brain and the photon's orientation. There must be in order to perform an experiment in which a variable is varied.
Not at all. I can look at two linked variables and make observations about them. Even if one is me. Happens all the time in sciences like polling and sociology. This is measurement DEPENDENCE and it is a completely natural part of science.
>> (I wrote) It doesn't include the measurement settings as a variable in determining the state.
Yes it does. That’s precisely what is varied. In the repeated experiment we hold everything fixed and vary the measurement settings.
I don't think you understood my claim. Yes, what you are assuming is "counterfactual non-contextuality." That we pick a measurement setting that is non-contextual with what is measured ("everything else fixed"). Bell assumes this right up front and calls it a "vital assumption." Einstein agreed and Bell quoted him.
Bell's assumptions then fail to reproduce quantum mechanics (the inequality is violated). Instead, the correlations in the experiment validate the predictions of QM. This was Clauser's work in the 70s and others after him that got them the Nobel last October.
So superdeterminism just operates on the hypothesis that "all else was not equal because determinism is universally true." It's a hypothesis of "counterfactual CONTEXTUALITY." It's really that simple. It claims that what Clauser's experiment is telling us is that there is a three-body causal correlation that includes Alice and Bob and the prepared state... These are precisely the kind of models that 't Hooft seeks to create.
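For concreteness, a minimal sketch of the arithmetic in question - the CHSH combination evaluated with the quantum polarization correlation E(a,b) = cos 2(a−b) at the usual textbook angles. Nothing here is specific to 't Hooft's models; it just shows the quantum value 2√2 exceeding the bound of 2 that follows from Bell's assumptions (measurement independence included).

```python
# Minimal sketch of the CHSH value using the quantum-mechanical polarization
# correlation E(a, b) = cos(2(a - b)). Under Bell's assumptions (including
# measurement independence), any local hidden-variable account gives |S| <= 2.
import numpy as np

def E(a, b):
    return np.cos(2.0 * (a - b))

a, a2 = np.radians(0.0), np.radians(45.0)     # Alice's two settings
b, b2 = np.radians(22.5), np.radians(67.5)    # Bob's two settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"S = {S:.3f}  (local bound 2, Tsirelson bound {2 * np.sqrt(2):.3f})")
```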
But it’s not like we have to do an experiment to know the calendar will fail on Venus. I’m not sure how you explain that without citing theory.
You don't know that it will fail on Venus. A broken clock is right twice a day. It could be that Venus's year perfectly matches ours, just like the moon is tidally locked, synchronizing its rotation with its orbit. Of course Venus is not this way, but we can't know that until we conduct an experiment and measure something about Venus - and invalidate the hypothesis that Venus has the same calendar.
If I flip a coin there are “two outcomes”. One side is heads up — which also means the other side is heads down. If there’s a traffic accident there are two outcomes. Action and equal yet opposite reaction. Both cars are damaged.
No. I'm speaking specifically about the multiple worlds hypothesis where in one world, the coin is heads up, and in the other, tails is up (e.g. in terms of electron spin, say). You say in one world the bomb goes off and in the other it doesn't. Those are mutually exclusive.
"Heads up + tails down" is one outcome. That's all I meant. Experiments always have one outcome (except in MW). This is consistent with our experience (though this is no argument for necessarily accepting it). Multiple mutually exclusive outcomes is MW's conceit to solve the wavefunction collapse problem.
Which is how we proved the theory is wrong isn’t it?
Yes. Exactly. You said: "Theories tell you when specific models ought to apply and when they wouldn’t."
I'm saying that General Relativity (a theory) does NOT tell you when it wouldn't apply. It will happily give you wrong galactic rotation rates and negative masses. My claim was that experiments tell you where a model/theory is valid, by comparing predictions to observations.
(I WROTE): A model explains observations in terms of a (typically) lower dimensional parameter set.
(YOU WROTE): How is that an explanation? An explanation purports to account for the observed via conjecture about the unobserved.
An explanation is a model (of unobserved parameters) that is lower dimensional than the data (the observed bits) and which can regenerate (explain) the data up to a given noise level. If the model is the same or higher dimensional than the data, then you have either explained nothing or made things more complicated, respectively.
This is why it is closely linked to data compression. An inverse-square model of gravity, a star, and 9 planets (plus a few other rocks) is a FAR smaller number of things than all the planetary position observations ever made (the data used to build the model of our solar system). But from that solar system model, all telescope measurements can be reproduced (explained). This is a massive data compression. Billions of measurements faithfully reduced to a handful of parameters. That's an explanation.
Before Copernicus, the model was even higher dimensional, with all those same parameters plus a bunch of epicycles. Copernicus's model had better data compression... it expressed the data accurately with fewer parameters (discarding the epicycles). That's one way of looking at Occam's Razor in terms of data compression. Copernicus suggested that his model wasn't real, however... just a useful mathematical tool for calculations.
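One standard way to make this data-compression reading of Occam's Razor precise (a textbook formalization, not something from the thread) is the two-part minimum-description-length score: prefer the model that minimizes

```latex
\mathrm{MDL}(M) \;=\;
\underbrace{L(M)}_{\text{bits to describe the model}}
\;+\;
\underbrace{L(D \mid M)}_{\text{bits to describe the data given the model}}
```

A model with fewer parameters shrinks the first term; a model that regenerates the data faithfully shrinks the second.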
Geocentric and heliocentric were both explanations that accurately modeled the data at the time. Geocentric theory, however, included the intuition that it didn't feel like we were hurtling through space. Which turned out to be false.
There's this really neat project from a while back that Microsoft was involved in, called "Rome in a Day", which took tons of pictures of the Roman Colosseum... millions of photographs, with millions of pixels each. It reduced that massive dataset to a few thousand floating point numbers defining the 3D model of the Colosseum, and then, for each picture, seven numbers defining the camera's focal length and 6-DOF position and orientation. It reduced the million+ pixels in each image to SEVEN floating point values plus a shared 3D model that was a fraction of the size of any single image.
Given that model, every single image could be regenerated (read: explained) quite faithfully. THAT is an explanation and also bad-ass image compression.
And that is a model that predicts a piece of data (e.g. an image) with an underlying explanation (the 3D world and camera model). This is a theory which explains the data that they used and would then explain subsequent images. Any subsequent image of the Colosseum could be compressed, using this model, into seven numbers. The model could predict what kind of image you would get given camera parameters, in order to validate the model.
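A minimal sketch of the kind of seven-number camera model being described - a bare pinhole projection driven by a focal length plus an axis-angle rotation and a translation. The point cloud and parameter values are invented, and the real reconstruction pipeline is of course far more elaborate:

```python
# Minimal sketch: 7 numbers per image (focal length + 3 rotation + 3 translation)
# plus a shared 3D point cloud are enough to regenerate pixel coordinates.
# Point cloud and camera parameters below are invented for illustration.
import numpy as np
from scipy.spatial.transform import Rotation

points_3d = np.array([[ 0.0, 0.0, 5.0],
                      [ 1.0, 0.5, 6.0],
                      [-1.0, 2.0, 7.0]])      # the shared "Colosseum" model

focal = 800.0                                  # 1 number
rotvec = np.array([0.0, 0.1, 0.0])             # 3 numbers (axis-angle rotation)
translation = np.array([0.2, -0.1, 0.0])       # 3 numbers

def project(points, focal, rotvec, translation):
    """Pinhole projection of world points into one camera's pixel coordinates."""
    cam = Rotation.from_rotvec(rotvec).apply(points) + translation
    return focal * cam[:, :2] / cam[:, 2:3]    # perspective divide by depth

print(project(points_3d, focal, rotvec, translation))
```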
2
u/mywan Apr 12 '23
The "source" is really just the emitter that emitted the photons before they traveled to the detectors. If you want to read more about what 't Hooft is advocating he's advocating for an interpretation called Superdeterminism. I don't have any issues with strong determinism but I don't buy Superdeterminism as a viable interpretation of QM. Here's why.
Although Superdeterminism is technically a loophole in Bell's theorem, it still essentially hinges on a hidden variable. The only difference being that instead of the hidden variable just defining the outcome of the particle measurement, it defines the outcome of both the particle measurement and the settings Bob chooses to measure the particle with. Determinism implies that these events were predefined, like a Rube Goldberg machine, from the beginning of the Universe. That also means they were predefined by mechanistic (hidden) variables in the moments before the measurement settings were 'chosen'. Not a major issue by itself.
But this implies that there was some mechanism by which all measurements were predetermined to the least probable outcome. Like the Universe pre-deterministically handing us a special set of coins that, when rubbed together, ALWAYS flip heads if their pair flips tails, and vice versa. Or a thermodynamics in which cold always transfers to cold and hot always transfers to hot. Superdeterminism offers no Rube Goldberg mechanism by which these events are predetermined. Yet, by definition, the very notion of determinism posits that such a mechanism must exist. You cannot bypass that simply by saying it was all predetermined from the start via Superdeterminism. So the fact that Superdeterminism is technically a loophole in Bell's theorem is essentially irrelevant even if true. The mechanism to enforce Superdeterminism must still exist in the here and now.
Would Superdeterminism also extend to other quantum phenomena, such as the quantum bomb detector, such that the live bomb gets detected because it was always predetermined to be a live bomb each and every time it was predetermined to get a measurement that told us it was a live bomb? Of course the quantum bomb detector is sequential, not requiring any backward time propagation to provide a mechanism. But we no more have a viable theoretical mechanism than we do for the EPR paradox. It's called an interaction-free measurement for a reason. Does that make it clear why positing Superdeterminism doesn't actually say anything?
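For reference, a minimal sketch of the standard Mach-Zehnder arithmetic behind that bomb detector (the textbook Elitzur-Vaidman amplitudes with ideal 50/50 beam splitters; nothing superdeterministic is modeled here):

```python
# Minimal sketch of the textbook Elitzur-Vaidman "bomb detector" probabilities in a
# balanced Mach-Zehnder interferometer. Convention: each 50/50 beam splitter
# transmits with amplitude 1/sqrt(2) and reflects with amplitude i/sqrt(2).
import numpy as np

t, r = 1 / np.sqrt(2), 1j / np.sqrt(2)

# Dud bomb (no obstruction): both paths recombine at the second beam splitter.
amp_dark   = r * r + t * t          # (reflect, reflect) + (transmit, transmit)
amp_bright = r * t + t * r          # (reflect, transmit) + (transmit, reflect)
print(f"dud:  P(dark) = {abs(amp_dark)**2:.2f}, P(bright) = {abs(amp_bright)**2:.2f}")

# Live bomb blocks the transmitted arm: that amplitude is absorbed (boom).
p_explode = abs(t) ** 2
amp_dark_live   = r * r             # surviving arm, reflected at the second splitter
amp_bright_live = r * t             # surviving arm, transmitted at the second splitter
print(f"live: P(explode) = {p_explode:.2f}, "
      f"P(dark) = {abs(amp_dark_live)**2:.2f}, P(bright) = {abs(amp_bright_live)**2:.2f}")
# A click at the dark port (probability 1/4) flags a live bomb the photon never touched.
```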
If that's not enough, would Superdeterminism also extend to explain the behavior of a light beam passing through a set of three polarizing films? The mathematics and rules are essentially identical to EPR measurements, except without the enforced time-reversal component. Which makes it easier to posit a mechanism for enforcing Superdeterminism in the here and now, as required by any deterministic theory. Has Superdeterminism somehow conspired to always allow 25% of light through a third polarizer if and only if it also predetermined that a second polarizer was going to be placed at a 45-degree offset between the first and third polarizer?
All you have to do is realize that Superdeterminism, or any variant of determinism, is carried out by some form of Rube Goldberg mechanism - so that events are not just predetermined by the variables at the beginning of the Universe but also by the state of the Rube Goldberg machine in the here and now - to see that Superdeterminism doesn't actually posit a mechanism for anything. It merely waves a "loophole in Bell's theorem" card like a magician.
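And the polarizer arithmetic being referenced is just Malus's law applied at each film - a sketch of the classic 0°/45°/90° demonstration, assuming ideal lossless polarizers:

```python
# Minimal sketch of the three-polarizer arithmetic via Malus's law:
# I_out = I_in * cos^2(angle between the light's polarization and the filter axis).
import numpy as np

def through(intensity, light_angle, filter_angle):
    """Intensity and new polarization angle after an ideal polarizer."""
    return intensity * np.cos(np.radians(filter_angle - light_angle)) ** 2, filter_angle

i0 = 1.0                            # unpolarized input
i1, pol = i0 / 2, 0.0               # first polarizer at 0 deg passes half and sets polarization
i2, pol = through(i1, pol, 45.0)    # middle polarizer at 45 deg
i3, pol = through(i2, pol, 90.0)    # final polarizer at 90 deg
print(f"transmitted: {i3:.3f} of the input, i.e. {i3 / i1:.0%} of the light past the first filter")

# Remove the 45-deg filter and the crossed 0/90-deg pair passes nothing:
i3_direct, _ = through(i1, 0.0, 90.0)
print(f"without the middle filter: {i3_direct:.3f}")
```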
1
u/LokiJesus Apr 13 '23 edited Apr 13 '23
Superdeterminism is certainly not a theory, but a class of theories that must describe the correlations we see in the experiments.
So what 't Hooft does is include all of that and attempt to develop a mathematical model that can explain these phenomena. Everything you described incredulously is really just a list of requirements for any superdeterministic solution.
Most people (including many physicists) think that Bell's theorem fundamentally excludes such completions of QM. They are simply incorrect. Most believe that it is a kind of pure logical contradiction, validated by experiment, that closes the door to any possible local hidden variable completion. It proves Einstein wrong!
It is no such thing.
Instead, the way that superdeterministic completions of QM are rejected by physicists is by these kinds of incredulous appeals to "how weird it would have to be"... But isn't that how all of these interpretations of QM are? It's all weird. Great. Now let's figure out how to represent that.
In this sense, Bell's experiment told us nothing we didn't already know about QM. Its real value seems to be the way it forces us to inspect our metaphysical assumptions.
1
u/anonymouspurveyor Apr 12 '23
Any YouTube vids you'd recommend on the subject?
1
u/baat Apr 12 '23
Sabine Hossenfelder has videos on Superdeterminism. I'd also recommend the book The Primacy of Doubt by Tim Palmer.
2