

What is the distinction you are making between knowing the math and understanding it?


There are nonlocal effects in quantum mechanics, but I am not sure I would consider quantum teleportation to be one of them. Quantum teleportation may look at first glance to be nonlocal, but it can be trivially reproduced in local hidden variable models, such as Spekkens’ toy model, which makes it seem, at least to me, to belong in the class of local algorithms.
You have to remember that what is being “transferred” is a statistical description, not something physically tangible, and it is only observable in a large sample size (an ensemble). Hence, it would be strange to think that the qubit holds a register of its entire quantum state and that this register disappears and reappears on another qubit. The total information in the quantum state only exists in an ensemble.
In an individual run of the experiment, clearly, the joint measurement of 2 bits of information and its transmission over a classical channel is not transmitting the entire quantum state, but the quantum state is not something that exists in an individual run of the experiment anyway. The total information transmitted over an ensemble is much greater and would provide sufficient information to move the statistical description of one qubit to another entirely locally.
The complete quantum state is transmitted through the classical channel over the whole ensemble, and not in an individual run of the experiment. Hence, it can be replicated in a local model. It only looks like more than 2 bits of data is moving from one qubit to the other if you treat the quantum state as if it actually is a real physical property of a single qubit, because obviously that is not something that can be specified with 2 bits of information, but an ensemble can indeed encode a continuous distribution.
This is essentially a trivial feature known to any experimentalist, and it needs to be mentioned only because it is stated in many textbooks on quantum mechanics that the wave function is a characteristic of the state of a single particle. If this were so, it would be of interest to perform such a measurement on a single particle (say an electron) which would allow us to determine its own individual wave function. No such measurement is possible.
— Dmitry Blokhintsev
Here’s a trivially simple analogy. We describe the statistical distribution of a single bit with [a; b], where a is the probability of 0 and b is the probability of 1. This is a continuous distribution and thus cannot be specified with just 1 bit of information. But we set up a protocol where I measure this bit and send you the bit’s value, and then you set your own bit to match what you received. The statistics on your bit will now also be guaranteed to be [a; b]. How is it that we transmitted a continuous statistical description that cannot be specified in just 1 bit with only 1 bit of information? Because we didn’t. In every single individual trial, we are always just transmitting 1 single bit. The statistical descriptions refer to an ensemble, and so you have to consider the amount of information actually transmitted over the ensemble.
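To make the analogy concrete, here is a minimal simulation sketch (my own illustration, with an arbitrary example distribution, not anything from the original discussion): each trial transmits exactly one classical bit, yet the receiving side’s ensemble statistics converge to [a; b].

```python
# Minimal sketch of the single-bit analogy above (illustrative only).
# Alice's bit has statistics [a, b] over many trials. Each trial she sends
# Bob exactly ONE bit (her measured value) and Bob sets his bit to match.
# Over the ensemble, Bob's statistics converge to [a, b], even though no
# single trial ever carried the continuous description (a, b).

import random

def run_protocol(a: float, trials: int = 100_000) -> float:
    """Return the empirical probability of 0 on Bob's side."""
    bob_zeros = 0
    for _ in range(trials):
        alice_bit = 0 if random.random() < a else 1  # Alice's bit, with P(0) = a
        bob_bit = alice_bit                          # the single classical bit sent this trial
        if bob_bit == 0:
            bob_zeros += 1
    return bob_zeros / trials

if __name__ == "__main__":
    a = 0.3  # arbitrary example distribution [a; b] = [0.3; 0.7]
    print(f"Bob's empirical P(0) = {run_protocol(a):.3f} (expected {a})")
```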
A qubit’s quantum state has 2 degrees of freedom, as it can be specified on the Bloch sphere with just two angles. The amount of data transmitted over the classical channel is 2 bits. Over an ensemble, those 2 bits become 2 continuous values, and thus the classical channel over an ensemble carries exactly the degrees of freedom needed to describe the complete quantum state of a single qubit.
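And here is a minimal sketch of the standard teleportation protocol itself, simulated with a plain NumPy statevector (my own illustration, not code from any referenced paper or library): each run transmits exactly the two classical measurement bits, and Bob’s corrected qubit ends up in the state specified by the same two Bloch angles Alice started with.

```python
# Standard quantum teleportation on a 3-qubit statevector (illustrative sketch).
# Qubit 0 holds Alice's state |psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>;
# qubits 1 and 2 form the shared Bell pair. Alice measures qubits 0 and 1 and
# sends the two classical bits (m0, m1); Bob applies X^m1 then Z^m0 to qubit 2.

import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P0 = np.array([[1, 0], [0, 0]], dtype=complex)   # projector onto |0>
P1 = np.array([[0, 0], [0, 1]], dtype=complex)   # projector onto |1>

def kron_all(mats):
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def single(gate, qubit, n=3):
    mats = [I2] * n
    mats[qubit] = gate
    return kron_all(mats)

def cnot(control, target, n=3):
    a = [I2] * n; a[control] = P0
    b = [I2] * n; b[control] = P1; b[target] = X
    return kron_all(a) + kron_all(b)

def teleport(theta, phi, rng):
    alice = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    zero = np.array([1, 0], dtype=complex)
    psi = kron_all([alice, zero, zero])
    psi = single(H, 1) @ psi        # create the Bell pair on qubits 1 and 2
    psi = cnot(1, 2) @ psi
    psi = cnot(0, 1) @ psi          # Alice's Bell-basis measurement circuit
    psi = single(H, 0) @ psi
    joint = (np.abs(psi) ** 2).reshape(2, 2, 2).sum(axis=2)   # P(m0, m1)
    m0, m1 = divmod(int(rng.choice(4, p=joint.flatten())), 2)
    bob = psi.reshape(2, 2, 2)[m0, m1, :]
    bob = bob / np.linalg.norm(bob)
    if m1: bob = X @ bob            # corrections determined by the 2 classical bits
    if m0: bob = Z @ bob
    return (m0, m1), np.abs(np.vdot(alice, bob)) ** 2

rng = np.random.default_rng(0)
(m0, m1), fidelity = teleport(theta=1.1, phi=0.4, rng=rng)
print(f"classical bits sent: ({m0}, {m1}); fidelity of Bob's qubit: {fidelity:.6f}")
```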
Superdeterminism isn’t just the idea that if you know the initial state of everything you can predict the outcome with certainty. It’s more bizarre than that. Superdeterminism suggests that particles have pre-existing correlations created at the Big Bang that would allow you to predict the behavior of a system that the particle never interacted with without knowing anything about it, at least in contrived cases.
For example, you can imagine setting up an experiment whereby the particles will be measured by an experimenter, and the experimenter can freely choose the measurement basis as a personal conscious decision. Superdeterminism would suggest that measuring the hidden variables on the particles before using them in the experiment, just from the particles alone, even if they have never interacted with the experimenter since the Big Bang, would be sufficient to predict the experimenter’s conscious choice of measurement basis.
Indeed, if you knew that single hidden variable from the particles, you would know the experimenter’s decision ahead of time with certainty, even if you knew nothing about the experimenter and had never met them before. You could even go up to the experimenter and insist they should use a different measurement basis, but you would never be able to change their mind, because it is absolutely predetermined that they would make that decision, and that decision is also correlated with the hidden variable of the particle, so knowing the hidden variable of the particle alone is equivalent to knowing what their decision will be ahead of time.
Nothing is logically impossible about it, it is just a bit strange.


I got interested in quantum computing as a way to combat quantum mysticism. Quantum mystics love to use quantum mechanics to justify their mystical claims, like quantum immortality, quantum consciousness, quantum healing, etc. Some mystics use quantum mechanics to “prove” things like we all live inside of a big “cosmic consciousness” and there is no objective reality, and they often reference papers published in the actual academic literature.
These papers on quantum foundations are almost universally framed in terms of a quantum circuit, because they deal with quantum information science, giving you a logical argument for something “weird” about quantum mechanics’ logical structure, as shown in things like Bell’s theorem, the Frauchiger-Renner paradox, the Elitzur-Vaidman paradox, etc.
If a person claims something mystical and sends you a paper, and you can’t understand the paper, how are you supposed to respond? But you can use quantum computing as a tool to help you learn quantum information science so that you can eventually parse the paper, and then you can know how to rebut their mystical claims. But without actually studying the mathematics you will be at a loss.
You have to put some effort into understanding the mathematics. If you just go vaguely off of what you see in YouTube videos, then you’re not going to understand what is actually being talked about. You can, for example, go through IBM’s courses on the basics of quantum computing and read a textbook on the subject, and that gives you the foundations in quantum information science needed to actually parse the logical arguments in these papers and what they are really trying to say.


There is a tendency, not just among laymen but also among academics, to continually conflate contextuality with subjectivity.
If I ask you the best music genre, and you ask me the best music genre, we will likely give different answers, because the question is subjective. It depends upon the subject and there is no “objectively” best music genre in the world. If I ask you the velocity of an object, and you ask me the velocity, it is conceivable we might give different answers if we are both perceiving it in two different reference frames. Is that because velocity is subjective? That it is all in our heads and just a personal opinion?
I find this hard to believe, because you can conduct an experiment where two observers use radar guns to measure the velocity of the object they are perceiving, and you can later compare the readings and see that the radar guns do indeed record different velocities. If it were purely subjective, why would a purely mechanical device like a radar gun, which is not a conscious observer or a subject at all, also record the difference?
I believe that velocity can ontologically differ between observers. Meaning, it is part of objective reality that it differs. But this difference is not because they are observers. If the observers observed the same object in the same frame of reference they would perceive it at the same velocity. The difference is not reducible to them being observers, so calling it “observer-dependent” is misleading. The difference goes beyond them being observers and into objective reality: that they perceive the object in different measurement contexts.
This is what I mean by the distinction between contextuality and subjectivity. Some properties of the natural world really do ontologically realize themselves differently in objective reality depending upon the context of their realization.
Basically, what these academics who claim quantum mechanics disproves objective reality do is conflate subjectivity with contextuality, demonstrate that two observers can in principle give different descriptions of the same system in quantum theory, and then conclude that this means there is no objective reality because the description of the quantum system must be subjective, as it differs between the two observers. But, again, that does not follow. One can just interpret the difference as context-dependent rather than observer-dependent, and then there is no trouble interpreting it as a physical theory of the natural world independent of the observer. It is just not independent of context.
There is in fact a whole philosophical school called contextual realism, based on Wittgensteinian philosophy and originated by the French philosopher Jocelyn Benoist, which argues that much of the confusion in philosophy (such as the “hard problem”) ultimately has its origins in continually confusing the contextual with the subjective, and that if the distinction is adhered to clearly from the get-go then the issue goes away. The physicist Francois-Igor Pris has written extensively on the relationship between this philosophical school and interpretative problems in quantum mechanics.
The logic of quantum theory does allow for two different observers to give different descriptions of the same system, but (1) the differences are never empirically relevant, as the theory also predicts that if they were to become empirically relevant both observers would agree on what they perceive, and (2) the theory predicts those deviations in the description, kind of like how Galilean relativity predicts, via a Galilean transformation, that two observers will record different velocities in different frames of reference.
The fact that the theory predicts these differences (point #2) makes them hardly subjective, as the deviations are predicted by the objective theory, and they are always consistent with one another (point #1).
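A tiny illustration of the Galilean analogy (my own, not from the post): two observers in different frames record different velocities for the same object, and the objective transformation itself predicts the difference, so the disagreement is contextual rather than subjective.

```python
# Two observers measure the same object from different reference frames.
# The theory predicts the disagreement via the Galilean transformation
# v' = v - u, so the difference is context-dependent, not subjective.

def galilean_transform(v_object: float, u_frame: float) -> float:
    """Velocity of the object as seen from a frame moving at u_frame."""
    return v_object - u_frame

v = 30.0                                       # object's velocity in the ground frame (m/s)
observer_at_rest = galilean_transform(v, 0.0)
observer_moving = galilean_transform(v, 20.0)  # observer moving at 20 m/s
print(observer_at_rest, observer_moving)       # 30.0 and 10.0: different, yet both predicted
```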


I don’t find it useful to prepend adjectives to reality. Reality just is what it is.


since when do the demon rats care about popular opinions?


I personally don’t believe it’s real in the way Chalmers defines it. You can define it in another way where it can be considered real, but his definition I don’t find convincing. Indirect realism is the belief that what we perceive is not reality itself but a kind of veil that blocks us from seeing true reality. True reality, then, is by definition fundamentally impossible to observe, not by tools and not under any counterfactual circumstances.


If my options are to vote for genocide or not vote at all and thus allow the USA to be destroyed by a madman, then hate me all you want but I’ll take the latter option. Let fire and brimstone rain down on the USA. If there is no option to vote against the genocide, then the USA dismantling its own legitimacy as a world power is infinitely better for harm reduction as the USA being a functional and respectable state in the global arena allows it to do even greater harm.


Well, I am of the same opinion as the philosophers Alexandr Bogdanov and Jocelyn Benoist, which is that indirect realism is based on very bad arguments in the first place, and this is the first premise of Chalmers’ argument for the “hard problem,” so to drop it as a premise is to drop the “problem.” I would recommend Bogdanov’s book The Philosophy of Living Experience and Benoist’s book Toward a Contextual Realism. The uniting theme is that they both reject the existence of a veil that blocks us from seeing reality, and thus Chalmers’ notion of “consciousness” is rejected, and so there is no “hard problem” in the first place.
The “hard problem” is really just a reformulation of the mind-body problem, and Feuerbach had originally pointed out in his essay “On Spiritualism and Materialism” that the mind-body problem is not solvable because to derive it, one has to start from an assumption that there is a gulf between the mind and the body (the phenomena and the noumena, “consciousness” and physical reality), and so to then solve it would be to bridge that gulf, which contradicts oneself, as that would mean a gulf didn’t exist in the first place. He thus interprets the mind-body problem (later reformulated as the hard problem) as a proof by contradiction that indirect realism is not tenable, and so materialists should abandon this gulf at the very axiomatic basis of their philosophy.
There will never be a “solution” because it’s better understood as a logical proof that indirect realism is wrong. That means, no matter how intuitive indirect realism may seem and no matter how many arguments you think you can come up with off the top of your head to defend it, you should step back and actually rigorously evaluate those arguments as they cannot actually be correct and you must be making a mistake somewhere.


Survivorship bias as an argument doesn’t really work because you are already presupposing you are the one who survived. Of course, if you assume that there is a multiverse of infinite copies of yourself and at least one of them survived an incredibly unlikely event, then by definition you would not die and would be the person who survives the event.
But it’s kind of circular. You cannot apply survivorship bias prior to conducting the experiment, because you have no reason to believe that what you call “you” would be one of the survivors. It is much more likely, even if we assume the multiverse theory is true (see my criticism of it here), that what you would call “you” after the splitting of worlds would not be one of the survivors.
Let me give an analogy. Replace the very likely event of dying with something else, like losing the lottery. In at least one branch of the multiverse, you would win the lottery. Yes, if we bias it so we only consider the branch where you win the lottery, then by definition you are guaranteed to win the lottery if you play it. But that biasing makes no sense prior to actually playing the lottery. It is much more likely that what you call “you” after you play the lottery would be someone who sees themselves as having lost the lottery.


Quantum immortality was a concept in quantum mysticism invented by Hugh Everett, the guy who originated the Many Worlds Interpretation. It’s not even taken seriously by defenders of Many Worlds. Major proponents of Many Worlds like Sean Carroll even admit it is nonsensical and silly.
Imagine if a company perfectly cloned you. If you then died, do you expect that your consciousness would suddenly hop into the clone and take control over them? No, it makes no sense. The clone is effectively another person. If you die, you would just die. The clone would keep living on because the clone ultimately isn’t you.
The obvious problem with quantum immortality is that if you truly believe in Many Worlds, then the other branches of yourself in other copies of the universe are effectively like clones of yourself. You dying in this branch of the multiverse doesn’t somehow magically imply your consciousness can hop into another branch where you are still alive. “You” as in the “you” on this branch where you die would just die, and the other “yous” would continue to live on.
Penrose’s ideas are not taken seriously either, because the arguments for them are comedically bad. Pretty much all physicists are in unanimous agreement that quantum computing needs to be well-isolated from the environment and incredibly cold, the opposite of a human brain, and so there is zero chance the brain is utilizing quantum computing effects.
Penrose’s argument is, and I kid you not, that it is possible for humans to believe things they cannot prove, for example, we cannot currently prove Goldbach’s Conjecture but you can choose to believe it, and therefore he concludes human consciousness must transcend what is computable. Since no algorithm can compute the outcome of the collapse of the wavefunction with absolute certainty (as it is random), he then thinks that the human brain must therefore be using quantum processes.
I genuinely don’t know how anyone can find that argument convincing. The barrier towards creating artificial intelligence obviously isn’t that AI has a tendency to only believe things that are rigorously computable. In fact, it is quite the opposite, AI constantly hallucinates and makes statements that are obviously false and nonsensical. The physical implementation of the neural network can be captured by a rigorous mathematical model without the output of what the neural network does or says being all rigorous mathematical statements. There is no contradiction between believing the human brain is not a quantum computer and that humans are capable of believing or saying things that they did not rigorously compute.
Penrose then partnered with Hameroff to desperately search for any evidence that there are any coherent quantum states in the brain at all. They start with the conclusion they want and desperately seek out something that might fit it. All they have found is that there might be brief coherent quantum states in microtubules, but microtubules are not unique to the brain; they are a feature of eukaryotic cells generally, and they play a structural role as a kind of lattice that maintains the cell’s shape. Even if they are right that microtubules can briefly have a coherent quantum state, that does not get you one iota closer to proving that the human brain is a quantum computer in the sense that coherent quantum states actually play a role in decision making or conscious thought.


Moore’s law died a long time ago. Engineers pretended it was still going for years by abusing the nanometer metric: if they cleverly found a way to use the space more effectively, they treated it as if they had packed more transistors into the same area, and so they would call it a smaller-nanometer process node, even though they quite literally did not shrink the transistor size or increase the number of transistors per unit area.
This actually started to happen around 2015. These clever tricks were always exaggerated, because there isn’t an objective metric to say that a particular trick on a 20nm node really gets you performance equivalent to a 14nm node, so it gave them huge leeway for exaggeration. In reality, actual performance gains have drastically slowed down since then, and the cracks have really started to show when you look at the 5000 series GPUs from Nvidia.
The 5090 is only super powerful because the die size is larger, so it fits more transistors on the die, not because they actually fit more per unit area. If you account for the die size, it’s actually even less efficient than the 4090 and significantly less efficient than the 3090. In order to pretend there have been upgrades, Nvidia has been releasing software for the GPUs for AI frame rendering and artificially locking the AI software behind the newer series GPUs. The program Lossless Scaling proves that you can in theory run AI frame rendering on any GPU, even ones from over a decade ago, and that Nvidia locking it behind a specific GPU is not a hardware limitation but an attempt to make up for the lack of actual improvements in the GPU die.
Chip improvements have drastically slowed down for over a decade now, and the industry just keeps trying to paper it over.
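To make the “account for the die size” comparison concrete, here is a trivial sketch of the arithmetic (the numbers are made-up placeholders, not real GPU specifications): divide the raw metric by die area before comparing generations.

```python
# Normalizing a raw metric by die area before comparing chip generations.
# The values below are made-up placeholders, NOT real GPU specs.

def per_mm2(value: float, die_area_mm2: float) -> float:
    """Normalize a raw metric (transistor count, benchmark score, ...) by die area."""
    return value / die_area_mm2

# hypothetical (score, die area in mm^2) pairs for two made-up generations
hypothetical_gpus = {
    "older_gpu": (100.0, 600.0),
    "newer_gpu": (130.0, 800.0),   # higher raw score, but a much larger die
}

for name, (score, area) in hypothetical_gpus.items():
    print(f"{name}: raw score {score:.0f}, score per mm^2 {per_mm2(score, area):.3f}")
# A higher raw score paired with a lower score-per-mm^2 is exactly the pattern
# described above: the gain comes from a larger die, not a denser process.
```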


There is no limit to entanglement, as everything is constantly interacting with everything else and spreading the entanglement around. That is in fact what decoherence is about: spreading the entanglement throughout trillions of particles in the environment dilutes it such that quantum interference effects are too subtle to notice, but they are all technically entangled. So if you think entanglement means things are one entity, then you pretty much have to treat the whole universe as one entity. That was the position of Bohm and Blokhintsev.


the world is run by PDF files


ChatGPT just gives the correct answer that the limit doesn’t exist.


Speed of light limitation. Andromeda is 2.5 million light years away. Even if someone debunks special relativity and finds you could go faster than light, you would be moving so fast relative to cosmic dust particles that it would destroy the ship. So, either way, you cannot practically go faster than the speed of light.
The only way we could have intergalactic travel is a one-way trip, and humanity here on Earth would be long gone by the time it reached its destination, so we could never know if it succeeded or not.
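A quick back-of-the-envelope check (the 2.5 million light-year distance is from above; the speed fractions are arbitrary examples): Earth-frame travel time is simply distance divided by speed, so even a near-light-speed trip takes over 2.5 million years as measured here.

```python
# Earth-frame travel time to Andromeda: time = distance / speed.
DISTANCE_LY = 2_500_000  # distance to Andromeda in light years

for fraction_of_c in (0.01, 0.1, 0.5, 0.99):
    years = DISTANCE_LY / fraction_of_c  # light years divided by (fraction of c) gives years
    print(f"at {fraction_of_c:.2f}c: ~{years:,.0f} years as measured on Earth")
```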

Dogmatism goes all ways. The Soviets temporarily threw out evolutionary biology for Lysenkoism because they believed there was an ideological connection between Darwinism and social Darwinism and thus thought it was an ideology used to justify capitalism, and the adoption of Lysenkoism was devastating to their agriculture and wasn’t abandoned until the mid-1960s.
The main lesson that China learned from the Cold War is that countries should be less dogmatic and more pragmatic. That does not mean an abandonment of ideology because you still need ideology to even tell you what constitutes a pragmatic decision or not and what guides the overall direction, but you should not adopt policies that will unambiguously harm your society and work against your own goals just out of a pure ideological/moralistic justification.
Americans seemed to have gone in this pragmatic direction under FDR, who responded to the Great Depression by recognizing that one should not take a dogmatic approach to liberalism either, and expanded public programs, state-owned enterprises, and economic planning in the economy. But when the USSR started to fall apart, if you read Chinese vs US texts on the subject, the Americans took literally the opposite lesson from it than China did.
The Americans used the USSR’s collapse as “proof” that we have reached the “end of history,” that their liberal ideology is absolutely perfect, and that, in fact, we are not dogmatic enough. It is not a coincidence that the decline of the USSR throughout the 1980s directly corresponded with the rise of the neoliberal Reagan era. The USSR’s collapse was used by Americans to justify becoming hyperdogmatoids.
You can just read any text from any western economist on China’s “opening up” to private markets, and you will see that every single western economist universally refuses to acknowledge that any of the state-owned enterprises, public ownership of land, or economic planning plays any positive role in the economy. They all credit the economic growth solely to the introduction of private enterprise and nothing else, and thus they always criticize China from the angle of “they have not privatized enough” and insist their economy would be even better off if they abolished the rest of the public sector.
I wrote an article before defending the public sector in China as being important to its rapid development, and never in the article do I attack the role the private sector played; I simply defended the notion that the public sector also played a crucial role, citing economic papers from China as well as quotes from books by top Chinese economists.
My article was reposted in /r/badeconomics, and the person who reposted it went through every single one of my claims regarding the public sector playing an important role and tried to “debunk” each of them. They could not acknowledge that the public sector played ANY beneficial role at all. This is what I mean when I say the west has become hyperdogmatoid. They went from the FDR era to believing that it is literally impossible for the public sector to play any positive role at all, and this has led to the Reaganite era in the USA as well as waves of austerity throughout western Europe as they have been cutting back on public programs and public policy.
In my opinion, the decline of the western world we have been seeing as of late is very much a result of westerners taking the exact opposite lessons from the Cold War and becoming hyperdogmatoids, adopting the same mistakes the USSR made but in the opposite direction. In most of the western world these days, expanding public control in the economy is not even a tenable economic position. In just about every western country, the “left” political parties want to just maintain the current level of public control, and the “right” want austerity to shrink it, but parties which want to increase it are viewed as unelectable.
Any economics or sociology which suggests maybe it is a good thing in certain cases to expand public control in certain areas is denounced as “flat-earth economics” and not taken seriously, and this refusal to grapple with an objective science of human socioeconomic development is harming the west as their public programs crumble, wealth inequality skyrockets, their infrastructure falls apart, and they cannot self-criticize their own dogmatism.
Basically no one believes in open borders, only some weird fringe anarchists who post memes like the one above and who are largely irrelevant in the real world. It has always just been a straw man from the right, or a position held only by weird online fringe anarchists.
The reason communists are critical of the US/European hostility towards immigrants is not because we want open borders but because western countries bomb, sanction, and coup these countries, causing a refugee crisis, and then turn around and cry about those immigrants coming to their country.
“Why” implies an underlying ontology. Maybe there is something underneath it, but that is as far down as it goes, as far as we currently know. If we don’t at least tentatively accept that our current most fundamental theories are the fundamental ontology of nature, at least as far as we currently know, then we can never believe anything about nature at all, because it would be an infinite regress. Every time we discover a new theory we can ask “well, why that?” and so it would be impossible to actually believe anything about nature.