• 0 Posts
  • 140 Comments
Joined 2 years ago
Cake day: July 7th, 2024

  • Dogmatism goes all ways. The Soviets temporarily threw out evolutionary biology for Lysenkoism because they believed there was an ideological connection between Darwinism and social Darwinism, and thus saw it as an ideology used to justify capitalism. The adoption of Lysenkoism was devastating to their agriculture, and it wasn't abandoned until the mid-1960s.

    The main lesson that China learned from the Cold War is that countries should be less dogmatic and more pragmatic. That does not mean abandoning ideology, because you still need ideology to tell you what even constitutes a pragmatic decision and to guide the overall direction, but you should not adopt policies that will unambiguously harm your society and work against your own goals purely out of ideological or moralistic justification.

    Americans seemed to have gone in this pragmatic direction under FDR, who responded to the Great Depression by recognizing that one should not take a dogmatic approach to liberalism either, and expanded public programs, state-owned enterprises, and economic planning. But when the USSR started to fall apart, if you read Chinese vs US texts on the subject, the Americans drew literally the opposite lesson from it that China did.

    The Americans used the USSR’s collapse as “proof” that we have reached the “end of history,” that their liberal ideology is absolutely perfect, and, in fact, that they are not dogmatic enough. It is not a coincidence that the decline of the USSR throughout the 1980s directly corresponded with the rise of the neoliberal Reagan era. The USSR’s collapse was used by Americans to justify becoming hyperdogmatic.

    You can just read any text from western economists on China’s “opening up” to private markets, and you will see that they universally refuse to acknowledge that the state-owned enterprises, public ownership of land, or economic planning play any positive role in the economy. They credit the economic growth solely to the introduction of private enterprise, and thus they always criticize China from the angle that “they have not privatized enough” and insist the economy would be even better off if the rest of the public sector were abolished.

    I wrote an article before defending the public sector in China as being important to its rapid development. Nowhere in the article do I attack the role the private sector played; I simply defended the notion that the public sector also played a crucial role, citing economic papers from China as well as quotes from books by top Chinese economists.

    My article was reposted in /r/badeconomics, and the person who reposted it went through every one of my claims about the public sector playing an important role and tried to “debunk” each of them. They could not acknowledge that the public sector played ANY beneficial role at all. This is what I mean when I say the west has become hyperdogmatic. They went from the FDR era to believing that it is literally impossible for the public sector to play any positive role at all, and this has led to the Reaganite era in the USA as well as waves of austerity throughout western Europe as they cut back on public programs and public policy.

    In my opinion, the decline of the western world we have been seeing as of late is very much a result of westerners taking the exact opposite lessons from the Cold War and becoming hyperdogmatic, repeating the same mistakes the USSR made but in the opposite direction. In most of the western world these days, expanding public control in the economy is not even a tenable political position. In just about every western country, the “left” parties want merely to maintain the current level of public control, the “right” want austerity to shrink it, and parties which want to increase it are viewed as unelectable.

    Any economics or sociology which suggests it might be a good thing in certain cases to expand public control is denounced as “flat-earth economics” and not taken seriously, and this refusal to grapple with an objective science of human socioeconomic development is harming the west as its public programs crumble, wealth inequality skyrockets, its infrastructure falls apart, and it cannot self-criticize its own dogmatism.


  • bunchberry@lemmy.world to Lefty Memes@lemmy.dbzer0.com: No one is illegal
    1 day ago

    Basically no one believes in open borders, only some fringe anarchists who post memes like the one above and who are largely irrelevant in the real world. It has always just been a straw man from the right, or a position held by weird online fringe anarchists.

    The reason communists are critical of the US/European hostility towards immigrants is not because we want open borders, but because western countries bomb, sanction, and coup these countries, cause a refugee crisis, and then turn around and cry about those immigrants coming to their countries.




  • Depends upon what you mean by realism. If you just mean belief in a physical reality independent of a conscious observer, I am not really of the opinion you need MWI to have a philosophically realist perspective.

    For some reason, everyone intuitively accepts the relativity of time and space in special relativity as an ontological feature of the world, but when it comes to the relativity of the quantum state, people’s brains explode and they start treating it like it has to do with “consciousness” or “subjectivity” or something and that if you accept it then you’re somehow denying the existence of objective reality. I have seen this kind of mentality throughout the literature and it has never made sense to me.

    Even Eugene Wigner did this: when he proposed the “Wigner’s friend” thought experiment, he pointed out how two different observers can come to describe the same system differently, and then concluded that this proves quantum mechanics is deeply connected to “consciousness.” But we have known that two observers can describe the same system differently since Galileo first introduced the concept of relativity back in 1632. There is no reason to take it as having anything to do with consciousness or subjectivity or anything like that.

    (You can also treat the wavefunction nomologically as well, and then the nomological behavior you’d expect from particles would be relative, but the ontological-nomological distinction is maybe getting too much into the weeds of philosophy here.)

    I am partial to the way the physicist Francois-Igor Pris puts it. Reality exists independently of the conscious observer, but not independently of context. You have to specify the context in which you are making an ontological claim for it to have physical meaning. This context can be the perspective of a conscious observer, but nothing about the observer is intrinsic here; what is intrinsic is the context, and that is just one of many possible contexts in which an ontological claim can be made. Two observers can describe the same train as traveling at different velocities, not because they are conscious observers, but because they are describing the same train from different contexts.

    The philosopher Jocelyn Benoist and the physicist Francois-Igor Pris have argued that the natural world does have a kind of inherent observer-observed divide, but that these terms are misleading, since “subject” tends to imply a human subject and “observer” tends to imply a conscious observer. A lot of the confusion is cleared up once you figure out how to describe this divide in a more neutral, non-anthropomorphic way, which they do by talking about the “reality” and the “context.” The reality of the velocity of the train will be different in different contexts. You don’t have to invoke “observer-dependence” to describe relativity. Hence, you can indeed describe quantum theory as a theory of physical reality independent of the observer.


  • bunchberry@lemmy.world to Science Memes@mander.xyz: I'm good, thanks
    7 days ago

    MWI very specifically commits to the existence of a universal wavefunction. Everett’s original paper is literally titled “The Theory of the Universal Wave Function.” If you instead only take relative states seriously, that position is much closer to relational quantum mechanics. In fact, Carlo Rovelli explicitly describes RQM as adopting Everett’s relative-state idea while rejecting the notion of a universal quantum state.

    MWI claims there exists a universal quantum state, but quantum theory works perfectly well without this assumption if quantum states are taken to be fundamentally relative. Every quantum state is defined in relation to something else, which is made clear by the Wigner’s friend scenario where different observers legitimately assign different states to the same system. If states are fundamentally relative, then a “universal” quantum state makes about as much sense as a “universal velocity” in Galilean relativity.

    You could arbitrarily choose a reference frame in Galilean relativity and declare it universal, but this requires an extra postulate, is unnecessary for the theory, and is completely arbitrary. Likewise, you could pick some observer’s perspective and call that the universal wavefunction, but there is no non-arbitrary reason to privilege it. That wavefunction would still be relative to that observer, just with special status assigned by fiat.

    Worse, such a perspective could never truly be universal because it could not include itself. To do that you would need another external perspective, leading to infinite regress. You never obtain a quantum state that includes the entire universe. Any state you define is always relative to something within the universe, unless you define it relative to something outside of the universe, but at that point you are talking about God and not science.

    The analogy to Galilean relativity is actually too kind. Galilean relativity relies on Euclidean space as a background, allowing an external viewpoint fixed to empty coordinates. Hilbert space is not a background space at all; it is always defined in terms of physical systems, what is known as a constructed space. You can transform perspectives in spacetime, but there is no transformation to a background perspective in Hilbert space because no such background exists. The closest that exists is a statistical transformation to different perspectives within Liouville space, but this only works for objects within the space; you cannot transform to the perspective of the background itself, as it is not a background space.

    One of the papers I linked also provides a no-go theorem as to why a universal quantum state cannot possibly exist in a way that would be consistent with relative perspectives. There are just so many conceptual and mathematical problems with a universal wavefunction. Even if you somehow resolve them all, your solution will be far more convoluted than just taking the relative states of quantum mechanics at face value. There is no need to “explain measurement” or introduce many worlds or a universal wavefunction if you just accept the relative nature of the theory at face value and move on, rather than trying to escape it (for some reason).

    But this is just one issue. The other elephant in the room is the fifth point: even if you construct a theory that is at least mathematically consistent, it would still contain no observables. MWI is a “theory” which lacks observables entirely.


  • bunchberry@lemmy.world to Science Memes@mander.xyz: I'm good, thanks
    7 days ago
    1. Entanglement is just a mathematical property of the theory. If it is sufficient to explain measurement, then there is nothing particularly unique about MWI, since you can employ this explanation within any interpretation. You also say I missed your point while repeating exactly what I said.
    2. You’re the one giving this bullet point list as if you are debunking all of my points one-by-one. If you agree there is nothing especially “more local” about MWI than any other interpretation then why not just ignore that point and move on?
    3. A relative state is not an entangled state. Again you need to read the papers I linked. We are talking about observer-dependence in the sense of how the velocity of a train in Galilean relativity can be said to have a different value simultaneously for two different observers. I drew the direct comparison here in order to explain that in my first comment. This isn’t about special relativity or general relativity, but about “relativity” in a more abstract sense of things which are only meaningfully defined as a relational property between systems. The quantum state observer A assigns to a system can be different from the quantum state observer B assigns to the system (see the Wigner’s friend thought experiment). The quantum state in quantum mechanics is clearly relative in this sense, and to claim there is a universal quantum state requires an additional leap which is never mathematically justified.
    4. Please, for the love of god, just scroll up, read what I actually wrote in that first post, and respond to it. Or don’t. You clearly seem to be entirely uninterested in a serious conversation. I assume you have an emotional attachment to MWI without even having read Everett’s papers and are so defensive that you refuse to engage seriously with anything I say, so I am ending this conversation here. You don’t even know what a universal wavefunction is, despite that being the title of Everett’s paper, and you are trying to lecture me about this subject without reading a word I have written, claiming that the opinions of the cited academics here are “not even worth taken seriously.” This is an enormous level of arrogance that isn’t worth engaging with.

  • bunchberry@lemmy.world to Science Memes@mander.xyz: I'm good, thanks
    7 days ago
    1. Not sure what this first point means. To describe decoherence you need something like density matrix notation or Liouville notation, which is mathematically much more complicated. For example, an N-qubit state vector grows as 2^N, but if you represent it in Liouville notation then the vector grows as 4^N. It is a far more mathematically complicated description, but I don’t really see why that matters anyway, as it’s not like I reject such notation. Your second point also agrees with me. We know the Born rule is real because we can observe real outcomes on measurement devices, something which MWI denies exists and something you will go on to deny in your point #4.
    2. This is also true in Copenhagen. Again, if that’s your criterion for locality then Copenhagen is also local.
    3. I think you should read Everett’s papers “‘Relative State’ Formulation of Quantum Mechanics” and “The Theory of the Universal Wave Function” to see the difference between wavefunctions defined in a relative sense vs a universal sense. You will encounter this with any paper on the topic. I’m a bit surprised you genuinely have never heard of the concept of the universal wavefunction yet are defending MWI?
    4. That quotation does not come one iota close to even having the air of giving the impression of loosely responding to what I wrote. You are not seriously engaging with what I wrote at all. You denying the physical existence of real-world discrete outcomes is exactly what I am criticizing, so just quoting yourself denying it is only confirming my point.

  • bunchberry@lemmy.world to Science Memes@mander.xyz: I'm good, thanks
    7 days ago

    The Many Worlds interpretation is rather unconvincing to me for many reasons.

    1. It claims it is “simpler” just by dropping the Born rule, but it is mathematically impossible to derive the Born rule from the Schrodinger equation alone. You must include some additional assumption to derive it, and so it ends up necessarily having to introduce an additional postulate at some point from which to derive the Born rule. Its number of assumptions thus always equals that of any other interpretation, but with additional mathematical complexity caused by the derivation.

    2. It claims to be “local” because there is no nonlocal wavefunction collapse. But the EPR paper already proves it’s mathematically impossible for something to match the predictions of quantum theory and be causally local if there are no hidden variables. This is obscured by the fact that MWI proponents like to claim the Born rule probabilities are a subjective illusion and not physically real, but illusions still have a physical cause that needs to be physically explained, and any explanation you give must reproduce Born rule probabilities, and thus must violate causal locality. Some MWI proponents try to get around this by redefining locality in terms of relativistic locality, but even Copenhagen is local in that sense, so you end up with no benefits over Copenhagen if you accept that redefinition.

    3. It relies on the belief that there exists an additional mathematical entity Ψ as opposed to just ψ, but there exists no mathematical definition or derivation of this entity. Even Everett agreed that all the little ψ we work with in quantum theory are relative states, but then he proposed that there exists an absolute universal Ψ, which to me makes about as much sense as claiming there exists a universal velocity in Galilean relativity. There is no way to combine relative velocities to give you a universal velocity; they are just fundamentally relative. Similarly, wavefunctions in quantum mechanics are fundamentally relative. A universal wavefunction does not meaningfully exist.

    4. You describe MWI as kind of a copying of the world into different branches where different observers see different outcomes of the experiment, but that is not what MWI actually claims. MWI claims the Born rule is a subjective illusion and all that exists is the Schrodinger equation, but the Schrodinger equation never branches. If, for example, a photon hits a beam splitter with a 50% chance of passing through and a 50% chance of being reflected and you have a detector on either side, the Schrodinger equation will never evolve into a state that looks anything like the photon having passed through or having been reflected, nor will it ever evolve into a state that looks anything like it having passed through and having been reflected. The state it evolves into is entirely disconnected from the discrete states we actually observe except through the Born rule. Indeed, even those probabilities I gave you come from the Born rule.

    This was something Einstein pointed out in relation to atomic decay: no matter how long you evolve the Schrodinger equation, it never evolves into a state that looks anything like decay vs non-decay. You never get to a state that looks like one or the other, both, or neither. You end up with something entirely unrecognizable from what we would actually observe in an experiment, connected back to the probabilities of decay vs non-decay only by the Born rule. If the universe really is just the Schrodinger equation, you simply cannot say that it branches into two “worlds” where in one you see one outcome and in another you see a different outcome, because the Schrodinger equation never gives you that. You would have to claim that the entire world consists of a single evolving infinite-dimensional universal wavefunction that is nothing akin to anything we have ever observed before.
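    To make the beam splitter case concrete in standard ket notation (the 50/50 split and the state labels are just the example's assumptions), the unitary evolution only ever gives something like

    |in⟩ → (1/√2)(|transmitted⟩ + |reflected⟩)

    which is a single superposed state, not “the photon passed through” or “the photon was reflected.” It only becomes “50% at one detector, 50% at the other” once you separately apply the Born rule, P = |1/√2|² = 1/2 for each branch, which is exactly the extra postulate at issue.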

    There is a good lecture below by Maudlin on this problem: that MWI presents a theory which has no connection to observable reality because nothing within the theory contains any observables.

    Rovelli also comments on it:

    The gigantic, universal ψ wave that contains all the possible worlds is like Hegel’s dark night in which all cows are black: it does not account, per se, for the phenomenological reality that we actually observe. In order to describe the phenomena that we observe, other mathematical elements are needed besides ψ: the individual variables, like X and P, that we use to describe the world. The Many Worlds interpretation does not explain them clearly. It is not enough to know the ψ wave and Schrödinger’s equation in order to define and use quantum theory: we need to specify an algebra of observables, otherwise we cannot calculate anything and there is no relation with the phenomena of our experience. The role of this algebra of observables, which is extremely clear in other interpretations, is not at all clear in the Many Worlds interpretation.

    — Carlo Rovelli, “Helgoland: Making Sense of the Quantum Revolution”


  • People don’t believe him because there is no reason to take his view on this issue seriously. Just because a person is smart in one area doesn’t mean they are a genius in all areas. There is an old essay from the 1800s called “Natural Science and the Spirit World” where the author takes note of the strange phenomenon of otherwise brilliant scientists being very nutty in other areas, one example being Alfred Russel Wallace, who co-discovered evolution by natural selection but also believed he could communicate with and photograph the ghosts of dead people.

    People don’t take Penrose’s theory on consciousness seriously because it is not based on any reasonable arguments at all. Penrose’s argument is so bizarre that it is amazing even Penrose takes it seriously. His argument is basically just:

    (P1) There are certain problems whose answers cannot be computed.
    (P2) Humans can believe in an answer anyway.
    (C1) Therefore, humans can believe things that cannot be computed.
    (P3) The outcome of quantum experiments is fundamentally random.
    (C2) Therefore, the outcome of quantum experiments cannot be computed.
    (C3) Therefore, human consciousness must be related to quantum mechanics.

    He then goes out with this preconception to desperately search for any evidence that the brain is a quantum mechanical system, even though most physicists don’t take this seriously, because quantum effects don’t scale up easily for massive objects, warm objects, or objects not isolated from their environment, and all three of those apply to the human brain.

    In his desperate search to grasp onto anything, he has found very loose evidence that quantum effects might be scaled up a little bit inside of microtubules. The one paper showing this as a possibility, which hasn’t even been replicated, has been plastered everywhere by his team as proof they were right, but this ignores the obvious elephant in the room: microtubules are structural, are found throughout the body, and have little to do with information processing in the brain, and thus little to do with consciousness.

    The argument he presents that motivates the whole thing also just makes no sense. The fact that humans can choose to believe in things that cannot be computed doesn’t prove human decisions cannot be computed. It just means humans are capable of believing things that they have no good reason to believe… I mean, that is literally a problem with LLMs, sometimes called “hallucinations”: they just make things up and state them with confidence.

    The idea that it is impossible for a computer to reach conclusions that cannot be proven is silly, because the algorithm by which it settles on an answer to a question is not one that rigorously validates the truth of the answer; it just activates a black-box network of neurons and settles on whatever answer the network outputs with the highest confidence level. If you ask an AI whether the earth orbits the sun and it says yes, it is not because it ran some complex proof at that moment and established with certainty that the earth orbits the sun before saying it. That is not how artificial intelligence works, so there is no reason to think that is how human intelligence works either, and so there is no reason to expect that humans couldn’t believe things without absolute proof in the first place.



  • There is a variant of classical computers called probabilistic computers where logic gates can randomly perturb a bit. These bits are called p-bits. Since the system is random, you can’t represent it by specifying each bit value individually, because you don’t know the bit values. You specify them with a vector where each entry represents the probability of the bit being a particular value. For example, you would represent a single p-bit with a two-vector where the first number represents the probability of 0 and the second the probability of 1.

    You also have to use a single vector for the whole system, called the state vector. If you have two p-bits, you don’t use two two-vectors, but a single four-vector. This is because two p-bits can become statistically correlated with one another, and so you need to represent them together to keep track of correlations. If you represent each p-bit individually, you will lose information about correlations.

    The state vector grows in scale exponentially because it holds the probabilities for all possible outcomes, and for N bits there are 2^N possible outcomes. If we knew the state of the machine at a given time, we could represent it with separate two-vectors for each bit, giving us a complexity of 2N (linear) for all the two-vectors combined, but the fact that we are ignorant of its state at a given time requires us to represent it with a single vector with a complexity of 2^N (exponential).
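    Here is a minimal sketch of that bookkeeping in Python (the particular vectors and the “copying” correlation are illustrative choices of mine). Two perfectly correlated p-bits have exactly the same marginals as two independent ones, so only the joint 2^N-entry vector keeps the correlation information.

    ```python
    import numpy as np

    # One p-bit is a 2-vector [P(0), P(1)]
    a = np.array([0.5, 0.5])
    b = np.array([0.5, 0.5])

    # If the two p-bits were independent, the joint 4-vector over outcomes
    # 00, 01, 10, 11 would just be the Kronecker product of the marginals
    independent = np.kron(a, b)                  # [0.25, 0.25, 0.25, 0.25]

    # But a gate that copies the first p-bit onto the second gives perfect correlation
    correlated = np.array([0.5, 0.0, 0.0, 0.5])  # only 00 and 11 ever occur

    # Both joint distributions have exactly the same per-bit marginals...
    print(correlated.reshape(2, 2).sum(axis=1))  # marginal of the first p-bit: [0.5, 0.5]
    print(correlated.reshape(2, 2).sum(axis=0))  # marginal of the second p-bit: [0.5, 0.5]

    # ...so two separate 2-vectors cannot tell them apart; only the joint vector,
    # with 2**N entries for N p-bits, keeps track of the correlations.
    ```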

    What even is a probability distribution? Suppose I say this biased coin has a 25% chance of landing heads and a 75% chance of landing tails. What does that even mean? One answer is that the probability distribution represents an ensemble. An ensemble is an idealized experiment where you flip the same coin an infinite number of times and then tally the distribution of the results. Such a distribution should have precisely the ratio 25:75. An experiment run an infinite number of times will clearly have greater complexity than an experiment run just one time. The exponential complexity of the statistical description is usually understood to come from the fact that it represents an ensemble and not an individual system.

    It turns out that the “logic” that underlies quantum mechanics can be described by mathematics that looks exactly like the mathematics of normal probability theory, but with the introduction of negative signs. These probabilities that can be negative are called quasi-probabilities. Despite a common misconception, imaginary numbers are not necessary for quantum mechanics. You can reproduce all of quantum information science with the introduction of negative numbers alone.

    When you build out a probability tree, negative numbers allow certain paths to cancel out with other paths that have positive quasi-probabilities. This cannot happen in classical probability theory, where probabilities are either positive or zero and thus can only accumulate. These cancellations are called interference effects and are the hallmark of quantum mechanics. Even entanglement effects, such as those shown in violations of Bell inequalities, are just interference effects across statistically correlated systems.
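    As a toy illustration of how a negative entry lets paths cancel (this is a deliberately artificial matrix of my own, not a standard representation from the literature): a single “quasi-stochastic” gate can both randomize a bit and, applied a second time, undo that randomization, which no ordinary stochastic matrix can do.

    ```python
    import numpy as np

    # A quasi-stochastic gate: each column sums to 1, but one entry is negative
    Q = np.array([[0.5,  1.5],
                  [0.5, -0.5]])

    start = np.array([1.0, 0.0])   # the bit is definitely 0

    once = Q @ start               # [0.5, 0.5] - looks like a fair random flip
    twice = Q @ once               # [1.0, 0.0] - the "1" branch has cancelled out
    print(once, twice)

    # The two paths to outcome 1 after two steps carry quasi-probabilities
    # (0.5)(0.5) = +0.25 and (0.5)(-0.5) = -0.25, so they cancel exactly.
    # With non-negative probabilities, paths can only accumulate: no stochastic
    # matrix S satisfies both S @ [1, 0] = [0.5, 0.5] and S @ S @ [1, 0] = [1, 0].
    ```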

    Here is where it gets a bit weird. Recall how we said that the exponential complexity of the state vector of a probabilistic computer is assumed to be derivative of a combination of an infinite number of samples from a simpler, linear, deterministic system, a combination we call an ensemble. It turns out that you can prove it is impossible for a system described by quasi-probabilities to be decomposed in the same way. There is no simpler, linear, deterministic system from which an infinite number of samples can give you a “quasi-ensemble.”

    This means one of two things. (1) Quasi-probabilistic systems are fundamentally random, unlike classically probabilistic systems, because there simply cannot be a simpler underlying deterministic state, or (2) there does exist an underlying deterministic state, but it has similar complexity to the “quasi-ensemble” itself. That is to say, the underlying, deterministic, physical state of the system really is exponentially complex: as you add N qubits, the complexity of the internal dynamics of the system really does grow as 2^N.

    Whichever conclusion you draw, the outcome is the same: unlike classical probability theory, where we assume that the exponential complexity of the statistical description is ultimately derivative of our ignorance of an underlying state, in quantum mechanics you either have to assume such an underlying state does not exist, or that the system really is just exponentially complex. In either case, you can only work with the exponentially complex description. There is no other description to work with.

    This makes it impossible to efficiently simulate quantum computers with a classical computer, since the underlying complexity grows exponentially with each qubit you have. If you have 300 qubits, then the size of the state vector is 2^300, which is greater than the number of atoms in the observable universe. Whether you believe #1 or #2, the outcome is, again, the same: there is simply no way to break this apart into an infinite number of samples of a linearly complex system. To simulate it correctly and losslessly, you must indeed use a vector of this size, and so a lossless simulation of even 300 qubits on a classical computer is impossible.
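    The scale is easy to check directly (taking the usual ~10^80 order-of-magnitude estimate for the atom count):

    ```python
    # 2**300 is roughly 2 x 10**90, far more state-vector entries than the
    # ~10**80 atoms usually estimated for the observable universe
    print(2**300)
    print(2**300 > 10**80)   # True
    ```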

    This extra complexity means that the internal dynamics of a quantum computer are much more complicated than those of a classical computer. Much more “stuff is going on,” so to speak, and so you can in principle get much more compute out of it if you figure out how to leverage that in a useful way.

    Quasi-probabilities aren’t actually used often in practice, because introducing negative signs into classical probability theory breaks L1 normalization unless you double up the size of the state vector. It is more mathematically concise to also introduce imaginary numbers, which lets you keep the state vector the same size as it is in classical probability theory but containing complex numbers. These are called probability amplitudes. That is why imaginary numbers are used in quantum mechanics. They are not necessary, just more concise. What is absolutely necessary and indispensable is the negative numbers, as these are what allow certain paths on the probability tree to cancel out with other paths.

    Yes, you do just work with a vector and matrices that apply to the vector. The vector can contain either quasi-probabilities or probability amplitudes. But, besides that, it pretty much just works like normal probability theory. Each entry is associated with the likelihood of a particular outcome. If you have 3 qubits, you need an eight-vector because 2^3 = 8, where the probability amplitudes are associated with the likelihoods of observing 000, 001, 010, 011, 100, 101, 110, or 111 respectively. Unlike the stochastic matrices of a classical probabilistic computer, you use unitary matrices, which can also contain complex numbers.

    Besides the use of complex numbers, the mathematics is, again, pretty much identical to regular old probability theory.
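    As a small concrete sketch in Python (the choice of gate and initial state are just mine for illustration): a three-qubit register is an 8-entry amplitude vector, gates are unitary matrices built up with Kronecker products, and the Born rule turns the final amplitudes into outcome probabilities.

    ```python
    import numpy as np

    # |000> as an 8-entry vector of probability amplitudes (2**3 = 8)
    state = np.zeros(8, dtype=complex)
    state[0] = 1.0

    # Hadamard gate on the first qubit, identity on the other two,
    # combined into a single 8x8 unitary via the Kronecker product
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    U = np.kron(H, np.eye(4))

    state = U @ state

    # Born rule: the probability of each outcome 000..111 is |amplitude|^2
    probs = np.abs(state) ** 2
    for i, p in enumerate(probs):
        if p > 1e-12:
            print(format(i, "03b"), round(float(p), 3))   # 000 and 100, each 0.5
    ```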

    It may seem “abstract” because, in classical probability theory, you assume that the state vector is an abstract, idealized description of an ensemble, which is different from the underlying physical state. But in quantum mechanics, the state vector cannot possibly be merely an idealized description of an ensemble. Either such an underlying state does not physically exist, or whatever physical state does exist must be related to the quantum state vector and have similar complexity, and indeed some physicists interpret the quantum state vector to actually be the underlying physical state of the system.

    There is no agreement or academic consensus on how to “interpret” the physical meaning of the mathematics in terms of natural philosophy. Everyone agrees on the mathematics itself and how to make predictions with it, but if you ask what the mathematics actually physically represents, what is “really going on” inside a quantum computer, or, as Bell called them, what the “beables” of the theory are, this is a rather controversial topic without agreement.

    Personally, I tend to be torn between two different answers to this.

    One possibility is the answer of the physicists Carlo Rovelli and Francois-Igor Pris, which takes position #1: outcomes are truly random and there is no underlying physical state, because physical states are only meaningful when defined relative to another system during an interaction, and so it makes no coherent sense to speak of the particle as having an autonomous state as a property of itself. All the “paradoxes” in quantum mechanics disappear if you stop making absolute statements about particles, like “the spin of the electron is down,” and always instead append relative to what.

    Another answer may be something akin to David Bohm’s pilot wave theory, which takes position #2, where you assume that there is an underlying, simpler, deterministic state, but that it exists alongside the quantum state. The quantum state is a separate thing, a separate “beable,” which influences how the particles behave. This gives you a picture that feels fairly Newtonian. I used to be more skeptical of this approach because the physicist John Bell proved such a picture cannot be compatible with special relativity, which, if true, might make it impossible for such an approach to reproduce the predictions of quantum field theory. However, the physicist Ward Struyve showed that while it is indeed not compatible with special relativity, it is wrong to conclude that it therefore cannot reproduce the predictions of quantum field theory, and that this is not actually an issue.

    There are many other views in the literature as well.



  • The reason quantum computers are theoretically faster is because of the non-separable nature of quantum systems.

    Imagine you have a classical computer where some logic gates flip bits randomly, and multi-bit logic gates can flip them randomly but in a correlated way. These kinds of computers exist and are called probabilistic computers, and you can represent all the bits using a vector and the logic gates with matrices called stochastic matrices.

    The vector is necessarily non-separable, meaning you cannot get the right predictions if you describe the statistics of the computer with a vector assigned to each p-bit separately; you must assign a single vector to all p-bits taken together. This is because the statistics can become correlated with each other, i.e. the statistics of one p-bit depend upon another, and thus if you describe them using separate vectors you will lose information about the correlations between the p-bits.

    The p-bit vector grows in complexity exponentially as you add more p-bits to the system (complexity = 2^N where N is the number of p-bits), even though the total states of all the p-bits only grows linearly (complexity = 2N). The reason for this is purely an epistemic one. The physical system only grows in complexity linearly, but because we are ignorant of the actual state of the system (2N), we have to consider all possible configurations of the system (2^N) over an infinite number of experiments.

    The exponential complexity arises from considering what physicists call an “ensemble” of individual systems. We are not considering the state of the physical system as it currently exists right now (which only has a complexity of 2N) precisely because we do not know the values of the p-bits, but we are instead considering a statistical distribution which represents repeating the same experiment an infinite number of times and distributing the results, and in such an ensemble the system would take every possible path and thus the ensemble has far more complexity (2^N).
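    A small Python sketch of that picture (the particular gates are illustrative choices of mine): the whole machine is one 2^N-entry probability vector, gates are column-stochastic matrices, and a correlating gate produces a joint distribution that cannot be split back into per-p-bit vectors.

    ```python
    import numpy as np

    # Joint distribution over two p-bits, ordered 00, 01, 10, 11
    state = np.array([1.0, 0.0, 0.0, 0.0])   # both p-bits start at 0

    # A gate that flips one p-bit with probability 1/2 (column-stochastic)
    NOISY_FLIP = np.array([[0.5, 0.5],
                           [0.5, 0.5]])

    # A two-p-bit gate that copies the first bit onto the second (also stochastic)
    COPY = np.array([[1, 1, 0, 0],
                     [0, 0, 0, 0],
                     [0, 0, 0, 0],
                     [0, 0, 1, 1]], dtype=float)

    # Randomize the first p-bit, then copy it onto the second
    state = np.kron(NOISY_FLIP, np.eye(2)) @ state
    state = COPY @ state

    print(state)   # [0.5, 0, 0, 0.5]: perfectly correlated, so the joint vector
                   # is not the Kronecker product of two separate 2-vectors
    ```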

    This is a classical computer with p-bits. What about a quantum computer with q-bits? It turns out that you can represent all of quantum mechanics simply by allowing probability theory to have negative numbers. If you introduce negative numbers, you get what are called quasi-probabilities, and this is enough to reproduce the logic of quantum mechanics.

    You can imagine that quantum computers consist of q-bits that can be either 0 or 1 and logic gates that randomly flip their states, but rather than representing the q-bit in terms of the probability of being 0 or 1, you can represent the qubit with four numbers, the first two associated with its probability of being 0 (summing them together gives you the real probability of 0) and the second two associated with its probability of being 1 (summing them together gives you the real probability of 1).

    Like normal probability theory, the numbers have to all add up to 1, being 100%, but because you have two numbers assigned to each state, you can have some quasi-probabilities be negative while the whole thing still adds up to 100%. (Note: we use two numbers instead of one to describe each state with quasi-probabilities because otherwise the introduction of negative numbers would break L1 normalization, which is a crucial feature to probability theory.)

    Indeed, with that simple modification, the rest of the theory just becomes normal probability theory, and you can do everything you would normally do in normal classical probability theory, such as build probability trees and whatever to predict the behavior of the system.

    However, this is where it gets interesting.

    As we said before, the exponential complexity of classical probability is assumed to be merely epistemic, because we are considering an ensemble of systems, even though the physical system in reality only has linear complexity. Yet it is possible to prove that the exponential complexity of a quasi-probabilistic system cannot be treated as epistemic. There is no classical system with linear complexity such that an ensemble of that system will give you quasi-probabilistic behavior.

    As you add more q-bits to a quantum computer, its complexity grows exponentially in a way that is irreducible to linear complexity. For a classical computer to keep up, every time an additional q-bit is added, you have to increase the number of bits used to simulate it in a way that grows exponentially. At just 300 q-bits, the complexity would be 2^N = 2^300, which means the number of bits you would need to simulate it would exceed the number of atoms in the observable universe.

    This is what I mean by quantum systems being inherently “non-separable.” You cannot take an exponentially complex quantum system and imagine it as separable into an ensemble of many individual linearly complex systems. Even if it turns out that quantum mechanics is not fundamental and there are deeper deterministic dynamics, the deeper deterministic dynamics must still have exponential complexity for the physical state of the system.

    In practice, this increase in complexity does not mean you can always solve problems faster. The system might be more complex, but it requires clever algorithms to figure out how to actually translate that into problem solving, and currently there are only a handful of known algorithms you can significantly speed up with quantum computers.

    For reference: https://arxiv.org/abs/0711.4770


  • If you have a very noisy quantum communication channel, you can combine a second procedure called entanglement distillation with quantum teleportation to effectively bypass the quantum communication channel and send a qubit over a classical communication channel. That is the main utility I see for it. Basically, it is very useful for transmitting qubits over a noisy quantum network.


  • The people who named it “quantum teleportation” had in mind Star Trek teleporters which work by “scanning” the object, destroying it, and then beaming the scanned information to another location where it is then reconstructed.

    Quantum teleportation is basically an algorithm that performs a destructive measurement (kind of like “scanning”) of the quantum state of one qubit and then sends the information over a classical communication channel (could even be a beam if you wanted) to another party which can then use that information to reconstruct the quantum state on another qubit.

    The point is that there is still the “beaming” step, i.e. you still have to send the measurement information over a classical channel, which cannot exceed the speed of light.
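    For anyone who wants to see the whole protocol spelled out, here is a minimal simulation in numpy (the input state, variable names, and qubit ordering are my own choices; this is a sketch of the standard textbook protocol, not production code):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Single-qubit gates
    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

    # CNOT with control = qubit 0 (leftmost), target = qubit 1
    CNOT01 = np.array([[1, 0, 0, 0],
                       [0, 1, 0, 0],
                       [0, 0, 0, 1],
                       [0, 0, 1, 0]], dtype=complex)

    # The state to teleport, held on Alice's qubit 0 (any normalized amplitudes)
    psi = np.array([0.6, 0.8j], dtype=complex)

    # A Bell pair shared between Alice (qubit 1) and Bob (qubit 2)
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

    # Full 3-qubit state vector, 8 entries, basis ordering |q0 q1 q2>
    state = np.kron(psi, bell)

    # Alice: CNOT(0 -> 1), then Hadamard on qubit 0 (the "scanning" step)
    state = np.kron(CNOT01, I2) @ state
    state = np.kron(H, np.eye(4)) @ state

    # Alice measures qubits 0 and 1; the Born rule gives the outcome probabilities
    probs = np.array([np.sum(np.abs(state[b * 2 : b * 2 + 2]) ** 2) for b in range(4)])
    outcome = rng.choice(4, p=probs)
    m0, m1 = outcome >> 1, outcome & 1

    # Project onto the measured branch; what remains is Bob's (unnormalized) qubit
    bob = state[outcome * 2 : outcome * 2 + 2] / np.sqrt(probs[outcome])

    # The two classical bits (m0, m1) travel over an ordinary channel to Bob,
    # who applies the corresponding corrections: X if m1 = 1, then Z if m0 = 1
    if m1:
        bob = X @ bob
    if m0:
        bob = Z @ bob

    print("original:  ", psi)
    print("teleported:", bob)   # matches the original amplitudes
    ```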


  • Obvious answer is that the USA is the world’s largest economy while Russia is not. If the USA says “if you trade with Russia then you can’t trade with me,” most countries will happily accept ceasing trade with Russia to remain in the US market, but if Russia says the same about the USA, people would just laugh and go trade with the USA.

    The only country that might have some leverage in sanctioning the US is China, but China has historically had a “no allies” policy. Chinese leadership hate the idea of alliances because they would then feel obligated to defend their allies, and defending another country is viewed very poorly in Chinese politics. They thus only ever form trade relations, never alliances, meaning that if your country is attacked they have no obligation to you. Chinese politicians may verbally condemn the attack, but they won’t do anything like impose sanctions, let alone provide their own military support in return.