• 0 Posts
  • 163 Comments
Joined 2 years ago
cake
Cake day: July 7th, 2024

help-circle
  • Sadly it’s all nonsense. If Einstein were alive today he would denounce special relativity. Modern-day physics has devolved into a religious cult. Yes, please read Einstein, because everyone lies about what he believed and fails to represent his concerns with modern physics accurately.

    When Einstein introduced his theory of relativity in 1905, it made zero new predictions, because it was mathematically equivalent to a theory Lorentz presented in 1904. Einstein’s critique of Lorentz’s theory was simply that it contained a privileged reference frame, these days called a preferred foliation, which was undetectable. If you can’t detect it, it is redundant for experimental predictions, and so it should be thrown out. This gives you a different mental picture of what is going on, since Lorentz’s theory was one of absolute spacetime, where deviations in rods and clocks are treated as physical effects that cause them to deviate from absolute length and absolute time. If I mess with your clock so it runs slower, that doesn’t prove time slows down. Einstein’s theory, by dropping the postulate that there exists a preferred foliation, drops the concept of absolute time, and this inevitably leads you to the conclusion that time and space really do deviate.
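The quantitative content shared by both theories is the same Lorentz factor; a minimal numerical sketch (a generic illustration, not taken from either Lorentz's or Einstein's papers):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v, c=C):
    """Lorentz factor: how much a clock moving at speed v (m/s)
    runs slow relative to the chosen frame."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# A clock moving at 80% of light speed: both Lorentz's and Einstein's
# theories predict the identical dilation factor; they differ only in
# whether the effect is read as a physical distortion relative to
# absolute time, or as time itself being relative.
print(round(gamma(0.8 * C), 4))  # 1.6667
```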

    Good. Einstein removed something unnecessary for predictions. What is the problem? The problem is that in 1964 the physicist John Bell published a paper proving that special relativity simply lacks sufficient structure to account for objective reality once you factor in quantum mechanical predictions. Bell’s theorem has nothing to do with determinism, contrary to how it is often misunderstood. When Bell talked about “hidden variables” he was not talking about some additional hidden parameter that would make quantum theory deterministic. He was talking about the very concept of object permanence, the basis of philosophical realism.

    If I observe an object at time t=0, t=1, and then again at t=2, then run the entire experiment again from the beginning with the same initial conditions and observe the object at t=0 and t=2 but not t=1, clearly, this time I didn’t observe it at t=1, but that was just by happenstance. I could have counterfactually observed it at t=1, and I know, from other experiments, that I would perceive something there if I looked at t=1 under this counterfactual, even though I just so happened not to. We thus must conclude that the system has an observable property at t=1, even though we did not observe it, because we could have under a counterfactual.

    This is the bedrock of philosophical realism, the very notion of objective reality: that things exist when we are not looking, as long as you can make an argument that they could be observed under some counterfactual. Directly being observed in the present moment is not necessary to say something exists. Even if I cannot see you right now, I can still believe you exist in objective reality, because I can imagine a counterfactual scenario where I do observe you, such as if you were my friend and had invited me over.

    When Bell was talking about “hidden variables,” this is what he was talking about: the idea that particles can be considered to have physical properties even when you are not looking at them. These properties are sometimes called in the literature the “ontic state.” The ontic state may evolve deterministically, or in a way that is fundamentally random. It does not matter. The notion of philosophical realism is that systems possess ontic states when you are not looking at them and then these ontic states explain what shows up on your measurement device when you look.

    What Bell demonstrates in his 1964 paper is that special relativity does not have sufficient structure to include the ontic states of particles when you factor in the statistical predictions of quantum mechanics. There is thus an incompatibility between objective reality and special relativity. It turns out that the exact additional structure you need is the preferred foliation, which was originally deleted by Einstein. But if you read Einstein’s work, his main concern with Bohr’s interpretation of quantum theory was not lack of determinism, but lack of realism. He gave an example of atomic decay: if you leave a radioactive atom in a box to decay and a certain amount of time has passed, it must have either emitted or not emitted a particle within that time frame. It must have done one or the other, but Bohr’s interpretation did not allow you to admit that it did, because that is equivalent to adopting an ontic state for the atom in the box.

    When confronted with the contradiction between special relativity and objective reality, physicists chose to abandon objective reality. That is religion, not science. The Copenhagen interpretation then became dominant, which holds that physics is purely about what shows up on measuring devices and can say nothing about reality independently of what you measure. And don’t even get me started on Many Worlds, which is a cope that tries to find a middle ground by claiming the abstract mathematical realm used to predict what shows up on measuring devices is the objective reality, devolving into a logically incoherent Platonism.

    It’s clearly a religious cult, as no evidence was even needed to establish this position. Niels Bohr convinced physicists to adopt the Copenhagen interpretation at the Solvay conference in 1927. This was decades before the publication of Bell’s theorem, and more decades still before the Nobel Prize was awarded for confirming it. Almost a century before Bell’s theorem was experimentally verified, the physics community had already decided objective reality doesn’t exist.

    Part of this nonsense comes back to Occam’s razor. Occam’s razor only makes sense as a principle if we just so happen to be lucky to be born into a universe without any physical redundancies. If there are any physical redundancies in the physical world itself, then the mathematical description of the world will also contain mathematical redundancies. If you then remove the mathematical redundancies, you then have an incorrect picture of the world, even though it still technically makes the right predictions. Physicists have taken Occam’s razor to such an extreme that they have found that they can remove objective reality from the mathematics and only care about what shows up on measuring devices and make the right predictions.

    They then go around lying and indoctrinating people into their superstition that this is how we should really think: that we should really believe there is no cat in the box until you open the lid and look, or even that time and space are unquestionably fundamentally relative, and if you don’t believe these things unquestioningly then you are a “science denier,” even though there is not a shred of empirical evidence for them. Special relativity never once made a single original prediction verifiable in experiment that was not already predicted by Lorentz’s theory, and taking it too seriously forces you to either drop belief in the very existence of an objective reality independent of observation/measurement, or devolve into incoherent talk about objective reality being the Platonic realm of pure mathematics itself, as if we all live inside of a giant invisible infinite-dimensional wave.

    If Einstein had seen Bell’s theorem, he would have renounced his own theory of special relativity, as Einstein was a committed realist. He falsely believed he could make realism compatible with relativistic locality, and there is just no way he would have opted to deny the very existence of objective reality had he seen Bell prove that these views are fundamentally incompatible. He would have conceded that special relativity does indeed need additional structure and that removing it was a mistake. We already know that the objectively real universe has a preferred foliation and we have measured it, so the claim that it is absurd to think one exists, when one is necessary to make the empirical evidence consistent with a model of objective reality, is rather unconvincing.

    All of the supposed “quantum weirdness” stems from this obsession with refusing to add back the structure needed to make quantum theory into a realist theory. When you add the structure back, you can then fit the empirical predictions of relativistic quantum mechanics to a theory of point particles moving in Newtonian spacetime. You end up with a picture that is actually coherent and comprehensible. But a regular old boring realist theory of nature doesn’t sell books or get clicks on articles. You gotta keep brainwashing people into believing that physics is “weird” to keep the money flowing, and painting everyone who disagrees as “science deniers” even though we all agree on the same empirical evidence and no one is calling that into question.

    But why then had Born not told me of this “pilot wave?” If only to point out what was wrong with it? Why did von Neumann not consider it? More extraordinarily, why did people go on producing “impossibility” proofs, after 1952, and as recently as 1978? When even Pauli, Rosenfeld, and Heisenberg, could produce no more devastating criticism of Bohm’s version than to brand it as “metaphysical” and “ideological?” Why is the pilot wave picture ignored in text books? Should it not be taught, not as the only way, but as an antidote to the prevailing complacency? To show that vagueness, subjectivity, and indeterminism, are not forced on us by experimental facts, but by deliberate theoretical choice?

    — John Bell


  • There is no mystery. Realism requires object permanence, and object permanence requires that you believe in counterfactual statements. If I measure something at t=0 and t=2, I could have measured it at t=1, and so you have to believe it had a value at t=1 or else you devolve into solipsism. If you believe an objective reality exists at all then you have to uphold these kinds of counterfactuals or else you have no basis to believe that objective reality exists independently of you measuring or observing it.

    Bell’s theorem clearly proves that special relativity simply lacks the structure needed to give a realist account of the world. Special relativity is not compatible with objective reality. Rather than accepting this conclusion and admitting special relativity needs additional structure added to it, physicists almost universally came to the consensus that we should reject the very idea that there exists an objective reality independent of observation, in order to preserve the sacred status of special relativity.

    This became the dominant Copenhagen interpretation. Physics is just about what shows up in measuring devices, during observation, not about objective reality. Many Worlds then showed up later as a cope. It arose as a middle-ground by arguing the mathematics used to predict what shows up on measuring devices is objective reality, as if we live in a Platonic realm of mathematics given by the idealized state vector in infinite dimensional Hilbert space.

    This coping mechanism is not even coherent. You can’t derive an “ought” statement from a lot of “is” statements. The conclusion can never be stronger than the premises. Similarly, you cannot derive observability by starting from pure mathematics where nothing is observable. Many Worlds has no algebra of observables and it is logically impossible to derive them. You must begin with objects defined in terms of their observables and fit models to their dynamics. You cannot logically start from the Platonic realm of pure mathematics.

    It is just a coping mechanism to avoid questioning the completeness of special relativity while also saying you don’t deny objective reality by turning the pure mathematics into objective reality.

    If you just admit that a contradiction between special relativity and objective reality means we should call into question the completeness of special relativity, then you can add a little bit of additional structure to it, something called a preferred foliation, and then you suddenly discover that you can fit relativistic quantum mechanics to a realist theory of point particles moving deterministically in 3D space with well defined values at all times independently of the observer.

    The theory suddenly becomes intuitive and clear without any mystery. Decoherence was literally discovered through analyzing a realist model of quantum mechanics, because such a model gives so much intuitive clarity about what is going on that it finally looks like you are analyzing a coherent physical theory and not an incoherent mess that only has something to say about what shows up on measuring devices.


  • bunchberry@lemmy.world to Science Memes@mander.xyz: big facts

    If you appeal to heat death then you cannot say brains pop back into existence either because “matter has a finite life,” and so it is self-defeating. If brains can pop back into existence due to random fluctuations then surely planets and stars could as well given enough time.



  • Einstein didn’t even get a Nobel Prize for special relativity because it was considered too radical at the time.

    He shouldn’t have gotten one for SR specifically anyways, because Hendrik Lorentz had already developed a mathematically equivalent theory and presented it a year prior to Einstein.

    The speed of light can be derived from Maxwell’s equations, which is weird: just by analyzing how electromagnetism works, anyone in any reference frame would derive the same speed, which implies the existence of a universal speed. But if the speed is universal, what is it universal relative to?
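That derivation can be sketched numerically: the wave speed falls straight out of the vacuum constants appearing in Maxwell's equations, with no reference frame entering anywhere (illustrative sketch using CODATA 2018 values):

```python
import math

# Vacuum constants appearing in Maxwell's equations (CODATA 2018):
mu_0 = 1.25663706212e-6       # vacuum permeability, N/A^2
epsilon_0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Wave speed of the electromagnetic field: c = 1 / sqrt(mu_0 * epsilon_0).
# Note that no velocity of any observer appears in this calculation.
c = 1.0 / math.sqrt(mu_0 * epsilon_0)
print(round(c))  # 299792458 (m/s)
```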

    Physicists prior to Einstein believed there might be a universal reference frame which defines absolute time and absolute space, these days called a preferred foliation. The Michelson-Morley experiment was an attempt to detect this preferred foliation, since most theories of how it worked would render it detectable in principle, but it found no evidence for one.

    Most physicists these days retell this experiment as having debunked the idea and led to its replacement with Einstein’s special relativity. But the truth is more complicated than that, because Lorentz found you could patch the idea by just assuming objects physically contract based on their motion relative to the preferred foliation. Lorentz’s theory was presented in 1904, a year before Einstein’s, and was mathematically equivalent, so it makes all the same predictions: anything Einstein’s theory would predict, Lorentz’s theory would also have predicted.

    Lorentz’s theory fell by the wayside because, by explaining the null result of the Michelson-Morley experiment (which was meant to detect the preferred foliation), it rendered the foliation undetectable in principle, and so people preferred Einstein’s theory, which threw out this undetectable aspect. But it would still be weird to give Einstein the Nobel Prize for what is ultimately just a simplification of Lorentz’s theory. (Einstein also already received one for something he did deserve anyways.)

    But there are also good reasons these days to consider putting the preferred foliation back in and to think Lorentz was right. The Friedmann solution to Einstein’s general relativity (the solution associated with the universe we actually live in) spontaneously gives rise to a preferred foliation which is actually empirically observable. You can measure your absolute motion relative to the universe by looking at the dipole in the cosmic microwave background radiation. Since we know you can measure it, and have actually measured our absolute motion in the universe, the argument against Lorentz’s theory is much weaker.

    An even stronger argument, however, comes from quantum mechanics. A famous theorem by the physicist John Bell proves the impossibility of “local realism,” and in this case locality means locality in terms of special relativity, and realism means belief that particles have real states in the real physical world independently of you looking at them (called the ontic states) which explain what shows up on your measurement device when you try to measure them. Since many physicists are committed to the idea of special relativity, they conclude that Bell’s theorem must debunk realism, that objective reality does not exist independently of you looking at it, and devolve into bizarre quantum mysticism and weirdness.
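Bell's theorem is usually stated concretely via the CHSH inequality: any model with ontic states and relativistic locality satisfies |S| ≤ 2, while quantum mechanics predicts up to 2√2. A minimal sketch of the quantum prediction for a singlet pair (standard textbook angles, not tied to any particular paper):

```python
import math

def E(a, b):
    """Quantum-mechanical correlation for spin measurements along
    angles a and b on a singlet pair: E(a, b) = -cos(a - b)."""
    return -math.cos(a - b)

# Standard CHSH measurement angle choices (radians)
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# Any local realist (ontic-state) model obeys |S| <= 2.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(round(abs(S), 3))  # 2.828, i.e. 2*sqrt(2): the local-realist bound is violated
```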

    But you can equally interpret this to mean that special relativity is wrong and that the preferred foliation needs to be put back in. The physicist Hrvoje Nikolic, for example, published a paper titled “Relativistic QFT from a Bohmian perspective: A proof of concept” showing that you can fit quantum mechanics to a realist theory that reproduces the predictions of relativistic quantum mechanics if you add back in a preferred foliation.


  • “Why” implies an underlying ontology. Maybe there is something underneath it, but as far as we currently know, that is as far down as it goes. If we don’t at least tentatively accept that our current most fundamental theories are the fundamental ontology of nature, at least as far as we currently know, then we can never believe anything about nature at all, because it would be an infinite regress. Every time we discover a new theory we could ask “well, why does it work like that?” and so it would be impossible to actually believe anything about nature.



  • There are nonlocal effects in quantum mechanics but I am not sure I would consider quantum teleportation to be one of them. Quantum teleportation may look at first glance to be nonlocal but it can be trivially fit to local hidden variable models, such as Spekkens’ toy model, which makes it at least seem to me to belong in the class of local algorithms.

    You have to remember that what is being “transferred” is a statistical description, not something physically tangible, and it is only observable in a large sample size (an ensemble). Hence, it would be strange to think that the qubit is like a register holding its entire quantum state, and that this register disappears and reappears on another qubit. The total information in the quantum state only exists in an ensemble.

    In an individual run of the experiment, clearly, the joint measurement of 2 bits of information and its transmission over a classical channel is not transmitting the entire quantum state, but the quantum state is not something that exists in an individual run of the experiment anyways. The total information transmitted over an ensemble is much greater, and would provide sufficient information to move the statistical description of one of the qubits to another entirely locally.

    The complete quantum state is transmitted through the classical channel over the whole ensemble, and not in an individual run of the experiment. Hence, it can be replicated in a local model. It only looks like more than 2 bits of data is moving from one qubit to the other if you treat the quantum state as if it actually is a real physical property of a single qubit, because obviously that is not something that can be specified with 2 bits of information, but an ensemble can indeed encode a continuous distribution.

    This is essentially a trivial feature known to any experimentalist, and it needs to be mentioned only because it is stated in many textbooks on quantum mechanics that the wave function is a characteristic of the state of a single particle. If this were so, it would be of interest to perform such a measurement on a single particle (say an electron) which would allow us to determine its own individual wave function. No such measurement is possible.

    — Dmitry Blokhintsev

    Here’s a trivially simple analogy. We describe a system in a statistical distribution of a single bit with [a; b] where a is the probability of 0 and b is the probability of 1. This is a continuous distribution and thus cannot be specified with just 1 bit of information. But we set up a protocol where I measure this bit and send you the bit’s value, and then you set your own bit to match what you received. The statistics on your bit now will also be guaranteed to be [a; b]. How is it that we transmitted a continuous statistical description that cannot be specified in just 1 bit with only 1 bit of information? Because we didn’t. In every single individual trial, we are always just transmitting 1 single bit. The statistical descriptions refer to an ensemble, and so you have to consider the amount of information actually transmitted over the ensemble.
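That one-bit analogy can be simulated directly: each trial moves exactly one bit, yet over the ensemble the receiver's statistics reproduce the sender's continuous distribution (an illustrative sketch; the function name and parameters are made up for this example):

```python
import random

def run_protocol(p_one, trials=100_000, seed=42):
    """Each trial: the sender samples a bit with P(1) = p_one and sends
    that single bit over a classical channel; the receiver just copies it.
    Only one bit moves per trial, yet over the ensemble the receiver's
    statistics reproduce the continuous distribution [1 - p_one; p_one]."""
    rng = random.Random(seed)
    ones = 0
    for _ in range(trials):
        sent = 1 if rng.random() < p_one else 0  # sender measures their bit
        received = sent                          # exactly 1 bit over the channel
        ones += received
    return ones / trials

print(run_protocol(0.3))  # close to 0.3: the distribution itself is never sent in any single trial
```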

    A qubit’s quantum state has 2 degrees of freedom, as it can be specified on the Bloch sphere with just two angles (a polar and an azimuthal angle). The amount of data transmitted over the classical channel is 2 bits. Over an ensemble, those 2 bits become 2 continuous values, and thus the classical channel over an ensemble contains exactly the degrees of freedom needed to describe the complete quantum state of a single qubit.
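Those two degrees of freedom can be made explicit: a pure qubit state is fully determined by the two Bloch-sphere angles θ and φ (a minimal sketch using the standard parameterization):

```python
import cmath
import math

def bloch_state(theta, phi):
    """Pure qubit state from its two Bloch-sphere angles:
    |psi> = cos(theta/2)|0> + e^(i*phi) * sin(theta/2)|1>.
    Two real parameters fully specify the state."""
    return (math.cos(theta / 2), cmath.exp(1j * phi) * math.sin(theta / 2))

# theta = pi/2, phi = 0 gives the |+> state: equal weight on |0> and |1>.
amp0, amp1 = bloch_state(math.pi / 2, 0.0)
print(round(abs(amp0) ** 2, 3), round(abs(amp1) ** 2, 3))  # 0.5 0.5
```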


  • Superdeterminism isn’t just the idea that if you know the initial state of everything you can predict the outcome with certainty. It’s more bizarre than that. Superdeterminism suggests that particles have pre-existing correlations created at the Big Bang that would allow you to predict the behavior of a system that the particle never interacted with without knowing anything about it, at least in contrived cases.

    For example, you can imagine setting up an experiment whereby the particles will be measured by an experimenter, and the experimenter can freely choose the measurement basis as a personal conscious decision. Superdeterminism suggests that measuring the hidden variables on the particles before using them in the experiment, just from the particles alone, even if they have never interacted with the experimenter since the Big Bang, would be sufficient to predict the experimenter’s conscious choice of measurement basis.

    Indeed, if you knew that single hidden variable from the particles, you would know the experimenter’s decision ahead of time with certainty, even if you know nothing about the experimenter and have never met them. You could even go up to the experimenter and insist they should use a different measurement basis, but you should never be able to change their mind, because it is absolutely predetermined that they will make that decision, and that decision is also correlated with the hidden variable of the particle, so knowing the hidden variable alone is equivalent to knowing what their decision will be ahead of time.

    Nothing is logically impossible about it, it is just a bit strange.


  • I got interested in quantum computing as a way to combat quantum mysticism. Quantum mystics love to use quantum mechanics to justify their mystical claims, like quantum immortality, quantum consciousness, quantum healing, etc. Some mystics use quantum mechanics to “prove” things like we all live inside of a big “cosmic consciousness” and there is no objective reality, and they often reference papers published in the actual academic literature.

    These papers on quantum foundations are almost universally framed in terms of a quantum circuit, because this deals with quantum information science, giving you a logical argument as to something “weird” about the logical structure of quantum mechanics, as shown in things like Bell’s theorem, the Frauchiger-Renner paradox, the Elitzur-Vaidman paradox, etc.

    If a person claims something mystical and sends you a paper, and you can’t understand the paper, how are you supposed to respond? But you can use quantum computing as a tool to help you learn quantum information science so that you can eventually parse the paper, and then you can know how to rebut their mystical claims. But without actually studying the mathematics you will be at a loss.

    You have to put some effort into understanding the mathematics. If you just go vaguely off of what you see in YouTube videos then you’re not going to understand what is actually being talked about. You can go through for example IBM’s courses on the basics of quantum computing and read a textbook on quantum computing and it gives you the foundations in quantum information science needed to actually parse the logical arguments in these papers and what they are really trying to say.


  • There is a tendency, not just among laymen but also among academics, to continually conflate contextuality with subjectivity.

    If I ask you the best music genre, and you ask me the best music genre, we will likely give different answers, because the question is subjective. It depends upon the subject and there is no “objectively” best music genre in the world. If I ask you the velocity of an object, and you ask me the velocity, it is conceivable we might give different answers if we are both perceiving it in two different reference frames. Is that because velocity is subjective? That it is all in our heads and just a personal opinion?

    I find this hard to believe, because you can conduct an experiment where two observers use radar guns to measure the velocity of the object they are perceiving, and you can later compare them and see that the radar gun does indeed agree that the velocity was different. If it was purely subjective, why would a purely mechanical device like a radar gun also record the difference, which is not a conscious observer or a subject at all?

    I believe that velocity can ontologically differ between observers. Meaning, it is part of objective reality that it differs. But this difference is not because they are observers. If the observers observed the same object in the same frame of reference they would perceive it at the same velocity. The difference is not reducible to them being observers, so calling it “observer-dependent” is misleading. The difference goes beyond them being observers and into objective reality: that they perceive the object in different measurement contexts.

    This is what I mean by the distinction between contextuality and subjectivity. Some properties of the natural world really do ontologically realize themselves differently in objective reality depending upon the context of their realization.

    Basically, what these academics who claim quantum mechanics disproves objective reality do is conflate subjectivity with contextuality. They demonstrate that two observers can, in principle, give different descriptions of the same system in quantum theory, and then conclude that this means there is no objective reality, because the description of the quantum system must be subjective if it differs between the two observers. But, again, that does not follow. One can just interpret the difference as context-dependent rather than observer-dependent, and then there is no trouble interpreting quantum mechanics as a physical theory of the natural world independent of the observer. It is just not independent of context.

    There is in fact a whole philosophical school called contextual realism, based on Wittgensteinian philosophy and originated by the French philosopher Jocelyn Benoist, which argues that much of the confusion in philosophy (such as the “hard problem”) ultimately has its origins in continually confusing the contextual for the subjective, and that if the distinction is adhered to clearly from the get-go then the issue goes away. The physicist Francois-Igor Pris has written extensively on the relationship between this philosophical school and interpretative problems in quantum mechanics.

    The logic of quantum theory does allow for two different observers to give different descriptions of the same system, but (1) the differences are never empirically relevant as the theory also predicts that if they were to become empirically relevant they would both agree on what they would both perceive, and (2) the theory predicts those deviations in the description, kind of like how Galilean relativity predicts that two observers will record different velocities in different frames of reference by using a Galilean transformation.

    The fact that theory predicts these differences (in point #2) makes it hardly subjective, as the deviations are predicted by the objective theory, and they are always consistent with one another (in point #1).
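The Galilean analogy in point #2 is easy to make concrete: one objective transformation rule predicts different, mutually consistent velocity measurements for observers in different frames (illustrative numbers only):

```python
def galilean_velocity(v_object, v_frame):
    """1D Galilean transformation: the velocity an observer moving at
    v_frame assigns to an object moving at v_object in the ground frame."""
    return v_object - v_frame

# The same object, measured from two different frames of reference:
v_ground = galilean_velocity(30.0, 0.0)   # observer standing still
v_moving = galilean_velocity(30.0, 20.0)  # observer driving at 20 m/s
print(v_ground, v_moving)  # 30.0 10.0: different values, both predicted by one objective rule
```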




  • I personally don’t believe it’s real in the way Chalmers defines it. You can define it in another way where it can be considered real, but his definition I don’t find convincing. Indirect realism is the belief that what we perceive is not real but kind of a veil that blocks us from seeing true reality. True reality then by definition is fundamentally impossible to observe, not by tools and not under any counterfactual circumstances.


  • If my options are to vote for genocide or not vote at all and thus allow the USA to be destroyed by a madman, then hate me all you want but I’ll take the latter option. Let fire and brimstone rain down on the USA. If there is no option to vote against the genocide, then the USA dismantling its own legitimacy as a world power is infinitely better for harm reduction as the USA being a functional and respectable state in the global arena allows it to do even greater harm.


  • Well I am of the same opinion of the philosopher Alexandr Bogdanov and the philosopher Jocelyn Benoist which is that indirect realism is based on very bad arguments in the first place, and this is the first premise of Chalmers’ argument for the “hard problem,” and so to drop it as a premise drops the “problem.” I would recommend Bogdanov’s book The Philosophy of Living Experience and Benoist’s book Toward a Contextual Realism. The uniting theme is that they both reject the existence of a veil that blocks us from seeing reality, and thus Chalmers’ notion of “consciousness” is rejected, and so there is no “hard problem” in the first place.

    The “hard problem” is really just a reformulation of the mind-body problem, and Feuerbach had originally pointed out in his essay “On Spiritualism and Materialism” that the mind-body problem is not solvable because to derive it, one has to start from an assumption that there is a gulf between the mind and the body (the phenomena and the noumena, “consciousness” and physical reality), and so to then solve it would be to bridge that gulf, which contradicts oneself, as that would mean a gulf didn’t exist in the first place. He thus interprets the mind-body problem (later reformulated as the hard problem) as a proof by contradiction that indirect realism is not tenable, and so materialists should abandon this gulf at the very axiomatic basis of their philosophy.

    There will never be a “solution” because it’s better understood as a logical proof that indirect realism is wrong. That means, no matter how intuitive indirect realism may seem and no matter how many arguments you think you can come up with off the top of your head to defend it, you should step back and actually rigorously evaluate those arguments as they cannot actually be correct and you must be making a mistake somewhere.


  • Survivorship bias as an argument doesn’t really work because you are already presupposing you are the one who survived. Of course if you assume that there is a multiverse of infinite copies of yourself and at least one of them survived an incredibly incredibly unlikely event, then by definition you would not die and would be the person who survives the event.

    But it’s kind of circular. You cannot apply survivorship bias prior to conducting the experiment, because you have no reason to believe that what you call “you” would be one of the survivors. It is much more likely, even if we assume the multiverse theory is true (see my criticism of it here), that what you would call “you” after the splitting of worlds would not be one of the survivors.

    Let me give an analogy. Replace the very likely event of dying with something else, like losing the lottery. In at least one branch of the multiverse, you would win the lottery. Yes, if we bias the analysis so we only consider the branch where you win the lottery, then by definition you are guaranteed to win the lottery if you play it. But that biasing makes no sense prior to actually playing the lottery. It is much more likely that what you call “you” after playing the lottery would be someone who sees themselves as having lost it.
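    The lottery analogy can be made concrete with a toy simulation. This is a sketch under my own assumption that we just count branches and treat each as equally likely to be the one “you” end up in; the odds (1 in 1000) and branch count are made up for illustration.

```python
import random

# Toy branch-counting model (an assumption for illustration, not physics):
# after playing a 1-in-1000 lottery, treat each resulting "world" as
# equally likely to be the one you find yourself in.
random.seed(0)
branches = 100_000
wins = sum(1 for _ in range(branches) if random.random() < 1 / 1000)

print(f"branches where 'you' won: {wins} of {branches}")
print(f"fraction: {wins / branches:.4f}")
```

    Almost every successor “you” sees a losing ticket; conditioning on the rare winning branch before playing is exactly the biasing step the argument smuggles in.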


  • Quantum immortality was a concept in quantum mysticism invented by Hugh Everett, the guy who originated the Many Worlds Interpretation. It’s not even taken seriously by defenders of Many Worlds. Major proponents of Many Worlds like Sean Carroll even admit it is nonsensical and silly.

    Imagine if a company perfectly cloned you. If you then died, do you expect that your consciousness would suddenly hop into the clone and take control over them? No, it makes no sense. The clone is effectively another person. If you die, you would just die. The clone would keep living on because the clone ultimately isn’t you.

    The obvious problem with quantum immortality is that if you truly believe in Many Worlds, then the other branches of yourself in other copies of the universe are effectively clones of yourself. You dying in this branch of the multiverse doesn’t somehow magically let your consciousness hop into another branch where you are still alive. The “you” on this branch where you die would just die, and the other “yous” would continue to live on.

    Penrose’s ideas are not taken seriously either, because the arguments for them are comedically bad. Physicists near-unanimously agree that quantum computing requires hardware that is well-isolated from the environment and incredibly cold, the opposite of a human brain, so there is essentially zero chance the brain is exploiting quantum computing effects.

    Penrose’s argument is, and I kid you not, that it is possible for humans to believe things they cannot prove. For example, we cannot currently prove Goldbach’s Conjecture, yet we can choose to believe it, and from this he concludes that human consciousness must transcend what is computable. Since no algorithm can compute the outcome of the collapse of the wavefunction with absolute certainty (it is random), he then concludes that the human brain must be using quantum processes.

    I genuinely don’t know how anyone can find that argument convincing. The barrier to creating artificial intelligence obviously isn’t that AI has a tendency to only believe things that are rigorously computable. Quite the opposite: AI constantly hallucinates and makes statements that are obviously false and nonsensical. The physical implementation of a neural network can be captured by a rigorous mathematical model without everything the network outputs being a rigorous mathematical statement. There is no contradiction between believing that the human brain is not a quantum computer and that humans are capable of believing or saying things they did not rigorously compute.

    Penrose then partnered with Hameroff to desperately search for any evidence of coherent quantum states in the brain at all. They start with the conclusion they want and then seek out anything that might fit it. All they have found is that there might be brief coherent quantum states in microtubules, but microtubules are not a feature of the brain specifically: they are part of eukaryotic cells generally, playing a structural role as a kind of lattice that holds cells together. Even if they are right that microtubules can briefly sustain a coherent quantum state, that does not get you one iota closer to proving that the human brain is a quantum computer in the sense that coherent quantum states actually play a role in decision making or conscious thought.
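    The gap in the Gödel-style argument is easy to exhibit: a trivially computable procedure can also come to “believe” Goldbach’s Conjecture without proving it. This toy sketch (my own illustration, not anything Penrose wrote) checks finitely many cases and then asserts the conjecture anyway.

```python
def is_prime(n):
    # Trial division; fine for the small range we check here.
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def computable_believer(limit=1000):
    # A fully computable procedure that ends up asserting Goldbach's
    # Conjecture after checking only finitely many cases -- no proof.
    for n in range(4, limit, 2):
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            return f"counterexample at {n}"
    return "I believe Goldbach's conjecture"

print(computable_believer())
```

    A rigorously specified, fully computable system outputs an unproven (and unprovable-so-far) belief, which is all it takes to show that “believes things it cannot prove” does not imply “transcends computation.”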


  • Moore’s law died a long time ago. Engineers pretended it was still going for years by abusing the nanometer metric: if they found a clever way to use the space more effectively, they would market it as a smaller process node, as if they had packed more transistors into the same area, even though they quite literally did not shrink the transistors or increase the number of transistors on the die.

    This actually started to happen around 2015. These clever tricks were always exaggerated, because there is no objective metric for saying that a particular trick on a 20nm node really gets you performance equivalent to a 14nm node, which left huge leeway for exaggeration. In reality, actual performance gains have drastically slowed since then, and the cracks really started to show with the 5000 series GPUs from Nvidia.

    The 5090 is only super powerful because the die is larger, so it fits more transistors in total, not because Nvidia actually fit more transistors per unit area. If you account for die size, it is actually less efficient than the 4090 and significantly less efficient than the 3090. To keep up the appearance of upgrades, Nvidia has been releasing AI frame-rendering software for its GPUs and artificially locking it behind the newer series. The program Lossless Scaling proves that AI frame rendering can in principle run on any GPU, even ones from over a decade ago, so Nvidia’s locking it behind specific GPUs is not a hardware limitation but a way to make up for the lack of actual improvements in the GPU die.

    Chip improvements have drastically slowed down for over a decade now, and the industry just keeps trying to paper it over.
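    As a rough sanity check on the density point: the figures below are commonly cited public numbers for transistor count and die size (they are my assumptions, treat them as approximate), and dividing them out shows density essentially flat between the last two flagship generations, so the extra performance tracks the bigger die rather than a denser process.

```python
# Back-of-envelope transistor density check.
# Figures are commonly cited public specs (assumed approximate):
# (transistor count in billions, die size in mm^2)
dies = {
    "RTX 4090 (AD102)": (76.3, 608.5),
    "RTX 5090 (GB202)": (92.2, 750.0),
}

# Millions of transistors per square millimeter.
density = {name: billions * 1000 / mm2 for name, (billions, mm2) in dies.items()}

for name, mt_per_mm2 in density.items():
    print(f"{name}: ~{mt_per_mm2:.0f}M transistors/mm^2")
```

    Both work out to roughly 120-125M transistors/mm², with the 5090 slightly below the 4090, which is exactly what you would expect if generational gains now come from die area rather than Moore’s-law scaling.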