
  • I’m pretty sure this goes against the properties proven of entanglement (Bell test) and how far entanglement can propagate, but I don’t know enough about quantum mechanics to explain why this explanation is incompatible with entanglement.

    If you don’t know anything about the topic then maybe you shouldn’t speak on it. Especially when claiming you have debunked peer-reviewed papers from Harvard physicists like Jacob Barandes.

    However, I don’t currently see how this at all explains computing with superpositions; if it’s just statistics a superposition can never exist

    Superposition is a property of statistics. Even classical statistics commonly represents the system’s statistical state as a linear combination of basis states. That’s just what a probability distribution is. If you take any course in statistics, you will superimpose things all the time. This is a mathematical property.
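
    For instance, a minimal NumPy sketch (purely illustrative) of a classical probability distribution written as a superposition of basis states:

    ```python
    import numpy as np

    # Basis states for a single classical bit.
    b0 = np.array([1.0, 0.0])  # definitely 0
    b1 = np.array([0.0, 1.0])  # definitely 1

    # A classical statistical state is a linear combination (superposition)
    # of basis states, weighted by probabilities that sum to 1.
    p = 0.25 * b0 + 0.75 * b1
    print(p)  # [0.25 0.75]
    ```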

    so entanglement doesn’t exist; so quantum algorithms wouldn’t be possible, but we know they are.

    Quantum advantage obviously comes from the phase of the quantum state. If you remove the phase from the quantum state then all you are left with is a probability distribution, and so there would be nothing to distinguish it from a classical statistical theory. But the phase is, again, a sufficient statistic over the system’s history. The quantum advantage comes from the fact that you are ultimately operating with a much larger information space, since each instruction in the computer is a function over the whole algorithm’s history back to the start of the quantum circuit, rather than just the current state of the computer’s memory at that present moment.


  • What if two packets interact with each other? If you claim a collapse occurs, then entanglement could never happen, and so such a viewpoint is logically ruled out. If you say a collapse does not occur but only happens when you introduce a measurement device, then this is vague without rigorously defining what a measurement device is, and providing any additional physical definition will then introduce something into the dynamics which is not there in orthodox quantum mechanics, so you’ve now moved into a new theory and are no longer talking about textbook QM.


  • In any statistical theory, the statistical distribution, which is typically represented by a vector that is a superposition of basis states, evolves deterministically. That is just a feature of statistics generally. But no one in their right mind would interpret the deterministic evolution of the statistical state as a physical object deterministically evolving in the real world. Yet, when it comes to QM, people insist we must change how we interpret statistics, and nobody can give a good argument as to why.

    We only “don’t fully understand where the probabilistic measurement happens” if you deny it is probabilistic to begin with. If you just start with the assumption that it is a statistical theory, then there is no issue. You just interpret it like you interpret any old statistical theory. There are no invisible “probability waves.” The quantum state is an epistemic state, based on the observer’s knowledge, their “best guess,” about a system that is in a definite state in the real world, which they cannot know because it evolves randomly. Their measurement of that state just reveals what was already there. No “collapse” happens.

    The paradox where we “don’t know” what happens at measurement only arises if you deny this, i.e. if you insist that the probability distribution is somehow a physical object. If you do, then, yes, we “don’t know” how this infinite-dimensional physical object, which doesn’t even exist anywhere in physical space, can possibly translate itself into the definite values that we observe when we look. Neither Copenhagen nor Many Worlds has a coherent and logically consistent answer to that question.

    But there is no good reason to believe the claim to begin with that the statistical distribution is a physical feature of the world. The fact that the statistical distribution evolves deterministically is, again, a feature of statistics generally. This is also true of classical statistical models. The probability vector for a classical probabilistic computer is mathematically described as evolving deterministically throughout an algorithm, but no sane person takes that to mean that the bits in the computer’s memory don’t exist when you aren’t looking at them, or that an infinite-dimensional object that doesn’t exist anywhere in physical space is somehow evolving through the computer.
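
    As a minimal sketch of that point (illustrative NumPy, with a made-up noisy gate): the distribution of a classical random bit evolves deterministically even though the bit itself is always definitely 0 or 1 on any given run.

    ```python
    import numpy as np

    # Column-stochastic matrix for a made-up gate that flips a bit
    # with probability 0.3 and leaves it alone with probability 0.7.
    G = np.array([[0.7, 0.3],
                  [0.3, 0.7]])

    p = np.array([1.0, 0.0])  # the bit starts definitely at 0
    for _ in range(3):
        p = G @ p             # the *distribution* evolves deterministically
    print(p)  # [0.532 0.468]
    ```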

    Indeed, the quantum state is entirely decomposable into a probability distribution. Complex numbers aren’t magic; they just represent something with two degrees of freedom, so we can always decompose the quantum state into two real-valued terms and ask what those two degrees of freedom represent. If you decompose the quantum state into polar form, you find that one of the degrees of freedom is just a probability vector, the same kind you’d see in classical statistics. The other is a phase vector.
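
    A minimal numerical sketch of that decomposition (illustrative NumPy):

    ```python
    import numpy as np

    # An arbitrary normalized single-qubit state.
    psi = np.array([1 + 1j, 1 - 1j]) / 2.0

    # Polar decomposition: psi_j = sqrt(p_j) * exp(i * theta_j)
    p = np.abs(psi) ** 2   # probability vector, as in classical statistics
    theta = np.angle(psi)  # phase vector

    print(p)      # [0.5 0.5] -- an ordinary probability distribution
    print(theta)  # [ 0.7853... -0.7853...]

    # Recompose to verify nothing was lost.
    assert np.allclose(np.sqrt(p) * np.exp(1j * theta), psi)
    ```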

    The phase vector seems mysterious until you write down the time evolution rules for both the probability vector and the phase vector in quantum systems. The rules, of course, take into account the previous values and the definition of the operator being applied to them. You then just have to recursively substitute the phase vector’s evolution rule into the probability vector’s. You then find that the phase vector disappears, because it decomposes into a function over the system’s history, i.e. a function over all operators and probability vectors at all previous time intervals going back to a division event. The phase is therefore just a sufficient statistic over the system’s history and is not a physical object, as it can be defined in terms of the system’s statistical history.
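
    In symbols, a sketch of that substitution: write each amplitude in the polar form just described, sqrt(p_j) times a phase factor, and expand the Born rule for one step of a unitary U:

    ```latex
    p'_j = \Big|\sum_k U_{jk}\sqrt{p_k}\,e^{i\theta_k}\Big|^2
         = \underbrace{\sum_k |U_{jk}|^2\,p_k}_{\text{stochastic update}}
         + \underbrace{\sum_{k\neq l} U_{jk}U^{*}_{jl}\sqrt{p_k p_l}\,
             e^{i(\theta_k-\theta_l)}}_{\text{interference term}}
    ```

    The first term is an ordinary stochastic update of the probability vector; the second depends only on phase differences, and the phases can in turn be rewritten in terms of earlier operators and probabilities, which is the recursion described above.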

    That is to say, without modifying it in any way, quantum mechanics is mathematically equivalent to a statistical theory with history dependence. The Harvard physicist Jacob Barandes also wrote a proof of this fact that you can read here. The history dependence does make it behave in ways that are a bit counterintuitive, as it inherently implies a non-spatiotemporal aspect to how the statistics evolve, as well as interference effects due to interference in its history, but they are still just statistics all the same. You don’t need anything but the definition of the operators and the probability distributions to compute the evolution of a quantum circuit. A quantum state is not even necessary; it is just convenient.

    If you just accept that it is statistics and move on, there is no “measurement problem.” There would be no claim that the particles do not have definite states in the real world, only that we cannot know them, because our model is not a deterministic model but a statistical one. If we go measure a particle’s position and find it to be at a particular location, the explanation for why we find it at that location is just that that’s where it was before we went to measure it. There is only a “measurement problem” if you claim the particle was not there before you looked, because then you have difficulty explaining how it got there when you looked.

    But no one has presented a compelling argument in the scientific literature that we should deny that it is there before we look. We cannot know what its value is before we look as its dynamics are (as far as we know) random, but that is a very different claim than saying it really isn’t there until we look. This idea that the particles aren’t there until we look has, in my view, been largely ruled out in the academic literature, and should be treated as an outdated view like believing in the Rutherford model of the atom. Yet, people still insist on clinging to it.

    They pretend that Copenhagen and Many Worlds are logically consistent by writing an enormous sea of papers upon papers upon papers, where it only seems “consistent” because it becomes so complicated that hardly anyone bothers to follow along anymore, but if you actually go through the arguments with a fine-tooth comb, you can always show them to be inconsistent and circular. There is only a vague aura of logical and mathematical consistency on the surface. The more you engage with the mathematics and the academic literature on quantum foundations, the clearer it becomes how incoherent and contrived the attempts to make Copenhagen and Many Worlds consistent actually are, and how no one in the literature has actually achieved it, even though many falsely pretend they have.





  • Technically, aether theory was never ruled out. People love to claim that the Michelson-Morley experiment ruled it out, but this is historical revisionism. The MM experiment was conducted in 1887; Hendrik Lorentz proposed his aether model in 1904. Obviously Lorentz was not such a moron that he failed to take the findings of MM into account, but that is what people are unironically suggesting when they say MM somehow retrocausally ruled out his model. Indeed, neither Michelson nor Morley believed their own experiment ruled the aether out, and both continued to promote such models.

    Lorentz’s aether model and Einstein’s relativity are actually mathematically equivalent, so they make all the same predictions, and no possible experiment could rule out Lorentz’s aether theory without also ruling out Einstein’s relativity. Indeed, if you read Einstein’s 1905 paper introducing special relativity, his criticism of Lorentz’s model is only a philosophical objection; he never posited that an experiment could rule it out. MM only rules out some very early aether models, not Lorentz’s.

    I would also recommend checking out John Bell’s paper “How to Teach Special Relativity,” where he discusses this fact, and how the mathematics of special relativity is perfectly consistent with a reality with absolute space and time. Taking space and time to be relative only enters at the level of metaphysical interpretation.



  • It’s amazing how nonsensical the actual foundational axioms of modern day economics are.

    Classical economics tried to tie economics to functions of physical things we can measure. Adam Smith, for example, proposed that because you can recursively decompose every product into the physical units of time it takes to produce, all the way down the supply chain, any stable economy should on average (not in the individual case) buy and sell in a way that roughly reflects that time, or else there would necessarily be physical time shortages or waste, which would lead to economic problems. We may thus be able to use this time parameter to make quantifiable predictions about the economy.

    Many people had philosophical objections to this because it violates free will. If you can roughly predict what society will do based on physical factors, then you are implying that people’s decisions are determined by physical parameters. Humans have the “free will” to just choose to buy and sell at whatever price they want, and so the economy cannot be reduced beyond the decisions of the human spirit. There was thus a second school of economics which tried to argue that maybe you could derive prices from measuring how much people subjectively desire things, measured in “utils.”

    “Utils” are of course such ambiguous nonsense that eventually these economists realized that this cannot work, so they proposed a different idea instead, which is to focus on marginal rates of substitution. Rather than saying there is some quantifiable parameter of “utils,” you say that every person would be willing to trade some quantity of object X for some quantity of object Y, and then you try to define the whole economy in terms of these substitutions.

    However, there are two obvious problems with this.

    The first problem is that to know how people would be willing to substitute things rigorously, you would need an incredibly deep and complex understanding of human psychology, which the founders of neoclassical economics did not have. Without a rigorous definition, you could not fit it to mathematical equations. It would just be vague philosophy.

    How did they solve this? They… made it up. I am not kidding you. Look up the axioms of consumer preference theory whenever you have the chance. It is a bunch of made-up axioms about human psychology, many of which are quite obviously not even correct (for example, you have to assume that every person has evaluated and ranked every product in the entire economy, and that every person would always be more satisfied with more of any given object), but you have to adopt those axioms in order to derive any of the mathematics at all.

    The second problem was first pointed out, to my knowledge, by the economist Nikolai Bukharin: an economic model based around human psychology cannot even be predictive, because there is no logical reason to believe that the behavior of everything in the economy, including all social structures, is purely derivative of human psychology. That is, you cannot rule out a back-reaction whereby preexisting social structures and environmental factors that people are born into shape their psychology, and he gives a good proof by contradiction that this back-reaction must exist.

    The idea that you can derive everything from some arbitrary set of immutable mathematical laws, made up in someone’s armchair one day, that supposedly details human behavior rigorously and irreducibly, is just nonsense. No one has ever even tested any of these laws that supposedly govern human psychology.


  • Surprisingly that is a controversial view. Most physicists insist QM has nothing to do with probability! But then why does it only give you probabilistic predictions? Ye old measurement problem, an entirely fabricated problem because physicists cannot accept that a theory that gives you probabilities is obviously a probabilistic theory.




  • QM is a lot easier to understand when we stop pretending a theory that only gives you statistical results somehow has no relevance to statistics. Every “paradox” can always be understood and resolved by applying a statistical analysis. If you apply such a statistical analysis to entangled systems, let’s say you have two qubits with their own bit values b1 and b2, you find that if you apply a unitary operator to just b1, there are cases where the way in which this stochastically perturbs b1 has a dependence upon the value of b2.

    You could not send a signal to b2 by perturbing b1 because perturbing b1 has no effect on b2, rather, the way in which b1 stochastically changes merely depends upon the current state of b2. You might think maybe you could send a signal the other way. If b1 depends upon b2, then you could perturb b2 to alter b1. But the dependence is always symmetrical, such that if you apply a stochastic perturbation to b2 the way in which it will change will depend upon the value of b1, and so it becomes a vicious circle.

    It is non-local in the sense that the way in which one changes depends upon the value of the other far away, but not in the sense that perturbing one locally alters the value of the one far away, and the dependence is always symmetrically mutual, so there is no way to signal between them.
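
    A minimal sketch of that symmetry (illustrative NumPy, using a standard Bell state): a local unitary applied to the first qubit never changes the second qubit’s marginal statistics.

    ```python
    import numpy as np

    # Bell state (|00> + |11>)/sqrt(2); amplitude order |00>, |01>, |10>, |11>.
    psi = np.array([1, 0, 0, 1]) / np.sqrt(2)

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
    I = np.eye(2)

    psi2 = np.kron(H, I) @ psi  # perturb the first qubit only

    def marginal_b2(state):
        p = np.abs(state) ** 2
        return np.array([p[0] + p[2], p[1] + p[3]])  # P(b2=0), P(b2=1)

    print(marginal_b2(psi))   # [0.5 0.5]
    print(marginal_b2(psi2))  # [0.5 0.5] -- unchanged, so no signaling
    ```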


  • As it is normally explained, it’s definitely fake. There is no reason to believe particles turn into waves when you’re not looking and turn back into particles when you look, and believing this demonstrably leads to irreconcilable paradoxes. Dmitry Blokhintsev was correct that the particles are just particles, and the “wave” is a property of its stochastic dynamics over an ensemble of systems. The wave is part of the nomology: it tells you how the particles stochastically behave in the aggregate, but the particles are still particles at all times. Ontologically, they are particles. Nomologically, their stochastic dynamics in an ensemble of systems converges to wave-like behavior.



  • Also Bell experiments have proven the indeterminacy which you say is absurd. No theory of local hidden variables can describe quantum mechanics.

    You say Bell’s theorem disproves realism, but then you immediately follow it up by saying it disproved local realism. Do you see how those are not the same statement? It never even crossed Bell’s mind to deny reality. He believed that the conclusion of his own theorem is just that nature is not local.

    (Technically, anything explained non-locally can also be explained non-temporally instead, so it is more accurate methinks to say spatiotemporal realism is ruled out. I am not as big of a fan of thinking about it non-temporally but there are some respectable people like Avshalom Elitzur who do. Thinking about it non-locally is far more intuitive.)

    Also, again, this is not about indeterminacy and determinacy, but about indefiniteness and definiteness, i.e. anti-realism vs realism. These are not the same things. To say something is indeterminate is merely to imply it is random. To say something is indefinite is to say it doesn’t even have a value at all. This is also sometimes called realism because it’s about object permanence: definiteness is just object permanence, the idea that systems still possess observable properties even when they are not being directly observed in the moment.

    He’s asking where the line is between this indeterminacy and determinacy. At what scale do things move from quantum to “real” and why?

    You could in principle make this non-realism make sense if you imposed some sort of well-defined physical conditions as to when particles take on real values. Bell described this as a kind of “flash” ontology because you would not have continuous definite values but “flashes” of definite values under certain conditions. But it turns out that you cannot do this without contradicting the mathematics of quantum mechanics.

    These are called physical collapse models, like GRW theory, but these transitions are non-reversible even though all evolution operators in quantum mechanics are reversible, and so in principle if you rigorously define what conditions would cause this transition, you could conduct an experiment where you set up those conditions, and then try to reverse it. Orthodox quantum theory and the physical collapse model would make different predictions at that point.

    These models never end up being local, anyways.

    The reason I say value indefiniteness is absurd as a way to interpret quantum mechanics is that it is not necessitated by the mathematics at all, and if you believe it:

    1. It devolves into solipsism if you do not rigorously define a mathematical criterion as to when definite values arise, because then nothing has real values outside of you directly looking at it.
    2. If you do rigorously define a criterion, then it is no longer quantum mechanics but an alternative theoretical model.

    So, either it devolves into solipsism, or it is a different theory to begin with.

    Bell was fine with #2 as long as people were honest about that being what they were doing. He wrote an article “Against ‘Measurement’” where he criticized the vagueness of people who claim there is a transition “at measurement” but then do not even rigorously define what qualifies as a “measurement.” He wrote positively of GRW theory in his paper “Are there Quantum Jumps?” precisely because they do give a rigorous mathematical definition of how this process takes place.

    But Bell also didn’t particularly believe there was any reason to believe in value indefiniteness to begin with. You can just interpret quantum mechanics as a kind of stochastic mechanics, just one with non-local features, where it is random but particles still have definite values at all times. In the same year, 1964, that he published his famous theorem in the paper “On the Einstein Podolsky Rosen Paradox,” he also published the paper “On the Problem of Hidden Variables,” debunking von Neumann’s supposed proof that you cannot interpret quantum mechanics in value-definite terms. He also wrote a paper “Beables for Quantum Field Theory,” where he shows QFT can be represented as a stochastic theory. He also wrote a paper “On the Impossible Pilot Wave,” where he promoted pilot wave theory, not necessarily because he believed it, but because he saw it as a counterexample to all the supposed “proofs” that quantum mechanics cannot be interpreted as a value-definite theory.

    My point isn’t about randomness/indeterminacy. It is about “indefiniteness,” the claim that things have no values until you look. This either devolves into solipsism, or into a theory which is not quantum mechanics. It is far simpler to just say the systems have values when you’re not looking, you just don’t know what they are, because the random evolution of the system prevents you from tracking them. It is sort of like, if I hit a fork in the road and take either the left or right path, and you don’t know which, you wouldn’t conclude I didn’t take a path at all until you look. You would conclude that you just don’t know which it is, and maybe assign probabilities to them. The fact that the probability distribution doesn’t contain a definite value does not demonstrate that the real world doesn’t contain a definite value, and believing there is no definite value unnecessarily over-complicates things. And definite ≠ deterministic. Maybe the path taken is truly random, but there is a path taken.


  • Not to be the 🤓 but just so we’re clear, the point of Schrödinger’s cat was to illustrate that you can’t know a quantum state until you measure it. Basically just saying “probability exists.”

    That wasn’t Schrödinger’s point at all.

    Schrödinger was responding to people in Bohr and von Neumann’s camp who claimed that particles described mathematically by a superposition of states literally have no real observables in the real world at all. It is not just that they are random or probabilistic: people in the “anti-realist” camp argue that they effectively no longer even exist when they are described mathematically by a superposition of states. This position is sometimes called value indefiniteness.

    Schrödinger was criticizing this position by pointing out that you cannot separate your beliefs about the microworld from the macroworld, because macroscopic objects like cats are also made up of particles and should follow the same rules. Hence, he puts forward a thought experiment whereby a cat would also be described mathematically in a superposition of states.

    If you think a superposition of states means it no longer has real definite properties in the real world, then the cat wouldn’t have real definite properties in the real world until you open the box. Schrödinger’s point was that this is such an obvious absurdity that we should reject value indefiniteness for individual particles as well.

    You say:

    The reason it’s a big deal is that this probability is a real property. One that is supposed to be only one of two states. But instead it isn’t really in a state at all until you measure it, and that’s weird.

    But that is exactly the point Schrödinger was criticizing, not supporting.

    Value indefiniteness / anti-realism ultimately amounts to solipsism: if particles lack real, definite, observable properties in the real world when you are not looking at them, then, since other people are also made up of particles, other people also wouldn’t have real, definite, observable properties in the real world when you are not looking at them.

    He was trying to illustrate that this position reduces to an absurdity and so we should not believe in that position.

    The point is that instead of assuming it is in one state or the other, you can and often should think of both possibilities at once. This is what makes quantum computing useful.

    If you perform a polar decomposition on the quantum state, you are left with a probability vector and a phase vector. The probability vector is the same kind of probability vector you use in classical probabilistic computing. Its update rule in quantum computing differs only by an additional non-linear term that depends upon the phase vector.

    The "advantage’ comes from the phase vector. For N qubits, there are 2^N phases. A system of 300 qubits would have 2^300 phases, which is far greater than the number of atoms in the observable universe. A single logic gate thus can manipulate far more states of the system at once because it can manipulate these phases, which the stochastic dynamics of the bits have a dependence upon the phases, and thus you can not only manipulate the phases to do calculations but, if you are clever, you can write the algorithm in such a way that the effect it has on the probability distribution allows you to read off the results from the probability distribution.

    The phase vector does not contain anything probabilistic, so it contains nothing that looks like the qubit being in two places at once. That is contained in the probability vector, but there is no better reason to interpret a probability distribution as the system being in two places at once in quantum mechanics than there is in classical mechanics. The advantage comes from the phases, and the phases can influence the stochastic perturbations of the bits, and thus the probability distribution.
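
    A minimal numerical sketch of that point (illustrative NumPy): two states with the same probability vector but different phase vectors produce different outcomes, and no purely classical stochastic update of the probability vector alone could distinguish them.

    ```python
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

    # Same probability vector [0.5, 0.5], different phase vectors:
    plus  = np.array([1,  1]) / np.sqrt(2)  # phases (0, 0)
    minus = np.array([1, -1]) / np.sqrt(2)  # phases (0, pi)

    print(np.abs(H @ plus) ** 2)   # [1. 0.] -- interferes to outcome 0
    print(np.abs(H @ minus) ** 2)  # [0. 1.] -- interferes to outcome 1
    ```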

    So you simply apply operations that increase or decrease the chances of certain outcomes and repeat until the answer you want has an incredibly high probability and the rest are nearly zero. Then you measure your qubit, collapsing the wave function, with a high probability that collapse will give you the answer you wanted.

    Again, perform a polar decomposition on the quantum state, break it apart into the probability vector and a phase vector. Then, apply a Bayesian knowledge update using Bayes’ theorem to the probability vector, exactly the way you’d do it in classical probabilistic computing. Then, simply undo the polar decomposition, i.e. recompose it back into a single complex-valued vector in Cartesian form.

    What you find is that this is mathematically equivalent to the collapse of the wavefunction. The so-called “collapse of the wavefunction” is literally just a Bayesian knowledge update on the degree of freedom of the quantum state associated with the probability distribution of the bits.
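
    A minimal sketch of that equivalence (illustrative NumPy, for a computational-basis measurement of the first qubit of a Bell state):

    ```python
    import numpy as np

    # Bell state (|00> + |11>)/sqrt(2); suppose we measure b1 and get 0.
    psi = np.array([1, 0, 0, 1]) / np.sqrt(2)

    # Route 1: textbook "collapse" (project onto b1 = 0, renormalize).
    collapsed = np.diag([1, 1, 0, 0]) @ psi
    collapsed = collapsed / np.linalg.norm(collapsed)

    # Route 2: polar decomposition + Bayes' theorem + recomposition.
    p, theta = np.abs(psi) ** 2, np.angle(psi)
    likelihood = np.array([1, 1, 0, 0])          # P(see b1=0 | basis state)
    p = likelihood * p / np.sum(likelihood * p)  # Bayesian knowledge update
    recomposed = np.sqrt(p) * np.exp(1j * theta)

    assert np.allclose(collapsed, recomposed)  # identical states
    ```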

    It’s less like “the cat is both alive and dead” and more that “the terms ‘alive’ and ‘dead’ do not apply to the cat till you open the box”

    Sure, but that position reduces to solipsism, because then you don’t exist with a definite value until I look at you, either. But clearly you are thinking definite thoughts when I’m not looking, right?


  • They do have values. Their position is just a superposition, rather than one discrete one, which can be described as a wave. Their value is effectively a wave until it’s needed to be discrete.

    To quote Dmitry Blokhintsev: “This is essentially a trivial feature known to any experimentalist, and it needs to be mentioned only because it is stated in many textbooks on quantum mechanics that the wave function is a characteristic of the state of a single particle. If this were so, it would be of interest to perform such a measurement on a single particle (say an electron) which would allow us to determine its own individual wave function. No such measurement is possible.”

    When I say “real values” I do not mean pure abstract mathematics. We do not live in a Platonic realm. The mathematics are just a tool for predicting what we observe in the real world. Don’t confuse the map for the territory. The abstract wave has no observable properties, it is pure mathematics. If the whole world was just one giant wave in Hilbert space, then this would be equivalent to claiming that the entire world is just one big mathematical function without any observable properties at all, which obviously makes no sense as we can clearly observe the world.

    To quote Rovelli: “The gigantic, universal ψ wave that contains all the possible worlds is like Hegel’s dark night in which all cows are black: it does not account, per se, for the phenomenological reality that we actually observe. In order to describe the phenomena that we observe, other mathematical elements are needed besides ψ: the individual variables, like X and P, that we use to describe the world.”

    Again, as I said in my first comment, any mathematical theory that describes the world needs to, at some point, include symbols which directly refer to something we can observe. An abstract mathematical function contains no such symbols. If you really believe that particles transform into purely mathematical waves, then you need some process to transform them back, or else you cannot explain what we observe at all, and so far the only process you have put forward is “it happens at every interaction” which is just objectively and empirically wrong because then entanglement would be impossible.

    This is why you run into contradictions like the “Wigner’s friend” paradox where Wigner would describe his friend in a superposition of states, and if you believe that this literally means that all that exists inside the room is an abstract function, then you cannot explain how the observer in the room can perceive anything that they later claim they do, because there would be no observables inside of the room.

    You cannot get around criticisms of solipsism by just promoting purely abstract mathematical entities to “objective reality,” as if objects transform into purely Platonic mathematical functions. At least, if you are going to claim this, then you need some rigorous process to transform them back into something described in mathematical language where some of the symbols refer to things we can actually observe, so that we can explain how it is that we observe it to have the properties that it does when we look at it.

    Sure. That doesn’t make the general understanding of the thought experiment accurate. Once the decay of the atom that triggers the poison is detected, it’s no longer in a superposition. It has to not be in order for the detection to occur.

    Please scroll up and read my actual comment. You seem to have skipped all the important technical bits, because you are claiming something which is mathematically incompatible with the predictions of quantum mechanics. The personal self-theory you are inventing here would literally render entanglement impossible.

    The double slit experiment shows that an interaction can change the result from wave-like to particle-like behavior.

    Decoherence is not relevant here. Decoherence theory works like this (a minimal sketch follows the list):

    1. Assume that the system+environment become entangled.
    2. Assume that the observer loses track of the environment.
    3. Trace out the environment.
    4. This leaves you with a reduced density matrix for the system where the coherence terms have dropped to 0.
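
    Here is that recipe as a minimal numerical sketch (illustrative NumPy, with one system qubit and one environment qubit):

    ```python
    import numpy as np

    # Step 1: system and environment entangle, e.g. into (|00> + |11>)/sqrt(2).
    psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())            # full density matrix

    # Steps 2-3: "lose track" of the environment and trace it out.
    rho = rho.reshape(2, 2, 2, 2)              # indices: [sys, env, sys', env']
    rho_sys = np.trace(rho, axis1=1, axis2=3)  # partial trace over env

    print(rho_sys)
    # [[0.5 0. ]
    #  [0.  0.5]] -- step 4: the coherence (off-diagonal) terms are 0,
    # even though the global system+environment state is still coherent.
    ```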

    Notice that step #2 is entirely subjective. We are just assuming that the observer has lost track of the environment in terms of their subjective epistemic access, and step #3 is then akin to statistically marginalizing over the environment in order to then remove it from consideration.

    This isn’t an actual physical transition but an epistemic one. The system+environment are still in a coherent superposition of states, and decoherence theory merely shows that it looks like it has decohered if you only have subjective knowledge on a small portion of the much larger coherent superposition of states.

    If you believe that a superposition of states means it has no observable properties and is just purely a mathematical function, then decoherence does not solve your problem at all, because it is ultimately a subjective process and not a physical one. If you had studied the environment thoroughly enough before running the experiment to include it in your model, then decoherence would not occur.

    I’m literally not. My entire point is that it isn’t a solipsism. Any interaction causes the waveform to collapse.

    Which, again, renders entanglement impossible, since objects must interact to become entangled.

    If we accepted your personal self-theory, then quantum computers should be impossible, because the qubits all need to interact many many times over as the algorithm progresses for them to all become entangled and to create a superposition of states of the whole computer’s memory.

    You are not listening, and you are advocating things that are trivially wrong.

    yet you give no explanation of an alternative. Something is happening. How do you explain it?

    I just don’t deny value definiteness. That’s it. There is nothing beyond this.

    Consider a perfectly classical world that is nonetheless fundamentally random. The randomness of interactions would prevent us from tracking the definite values of particles at any given moment in time, so we could only track them with an evolving probability distribution. We can represent this probability distribution with a vector and represent interactions with stochastic matrices. Given that the model does not include observable definite values, would it then be rational to claim that particles suddenly transform into an infinite-dimensional vector in configuration space when you’re not looking at them and lose all their observable properties? No, of course not. The particles still have real observable properties in the real world; you just lose track of them in the model due to their random evolution.

    You could create a simulation where you assign definite values and permute them stochastically at each interaction, and this would produce the same statistical results if you make a measurement at any given step. It is the same with quantum mechanics. It is just a form of non-classical statistical mechanics. There is no empirical, mathematical, or philosophical reason to claim that particles stop possessing real values when you are not looking at them. It is not hard to put together a simulation where the qubits are assigned definite bit values at all times and each logic gate just stochastically permutes those bit values. I even created one myself here. John Bell also showed you can do this with quantum field theory in his paper “Beables for Quantum Field Theory.”
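
    A minimal sketch of that kind of simulation (illustrative NumPy, with a made-up noisy gate): the bit holds a definite value at every step, each gate stochastically permutes it, and the run statistics match the deterministically evolving probability vector.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    G = np.array([[0.7, 0.3],   # column-stochastic matrix for a gate that
                  [0.3, 0.7]])  # flips the bit with probability 0.3

    # Ensemble description: the distribution evolves deterministically.
    p = G @ (G @ np.array([1.0, 0.0]))

    # Trajectory description: a definite bit value at all times.
    def run_once():
        bit = 0                               # definite initial value
        for _ in range(2):                    # two gate applications
            bit = rng.choice(2, p=G[:, bit])  # stochastic permutation
        return bit

    samples = [run_once() for _ in range(100_000)]
    print(p[1], np.mean(samples))  # ~0.42 for both: same statistics
    ```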