Archive for the ‘Foundations Of Physics’ Category

CONSCIOUSNESS, ATOM OF THOUGHT, Atom of Computing: All Found In Electrons?

May 7, 2018

Consciousness: we know we have it, we know many other animals have it, but we don’t know what it is.

Before we can answer this, a question naturally arises: so what is it, to know what it is? What is it, to be? “To be” is something our consciousness knows, when it perceives it. But we also need to know when something “is” to know when, how and if our consciousness is. 

In order to simplify our thinking on this arduous subject, existence entangled with consciousness, consider our most fundamental, hence simplest, theory. Consider Quantum Physics. Surely “existence” is defined there, as Quantum Physics deals with what is most fundamental. Take the simplest examples: photon, electron. What is an electron? In Quantum Physics, an electron is what one electron does. Isn’t that enlightening?

Shouldn’t consciousness be, what consciousness does?

Initially, electrons were just negatively charged particles. So it was, at least, until Bohr. Then the description of the electron became much more complex: it turned out that electrons occupied only certain energy levels. Then came De Broglie, who said electrons did as waves he attached to them did. And indeed, it was found that electrons did so. PAM Dirac then proposed the simplest “relativistic” equation for the electron (a more complicated, second-degree PDE had been proposed before and couldn’t be made to predict what was observed). That required something called “spinor space”…. The equation in turn predicted electronic spin and the anti-electron, and both were observed.

(Important aside: the French mathematician Cartan had invented spinors earlier, in pure geometry. Yes, invented: he built, in his brain, the relevant neurological connections, that is, the relevant geometry.)

Thus what we now call the electron has become higher dimensional in logical space (logical space is the space spanned by independent axioms; I just made it up; that means there is a connection between logic and geometry… thus, in particular, arithmetic and geometry…).

By adding axioms to its description, the concept of electron has become richer… The electron is a richer concept in our consciousness.

Confronted with two slits, the electron acts as if it were choosing where to go after them. Is that, not just a computation, but a primitive form of consciousness? What consciousness is made of? Hard to say for sure, at this point, but certainly a guess worth exploring: any theory of consciousness may have to take into account that the electron acts as if it were conscious.
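The two-slit behavior alluded to above can be sketched in a few lines. Here is a minimal toy model (all numbers are illustrative, not from any experiment discussed here): the quantum rule adds the complex amplitudes from the two slits *before* squaring, while a classical particle picture adds the probabilities, and only the former produces fringes.

```python
import numpy as np

# Toy far-field two-slit model. The electron's arrival probability on the
# screen is |amplitude_1 + amplitude_2|^2 -- amplitudes add, THEN we square.
# A classical particle picture adds probabilities instead, and gets no fringes.
wavelength = 1.0                      # arbitrary units
d, L = 5.0, 1000.0                    # slit separation, distance to screen
x = np.linspace(-200.0, 200.0, 2001)  # positions on the screen

# Far-field path-length difference between the two slits: delta ~ d * x / L
phase = 2 * np.pi * (d * x / L) / wavelength
amp1 = np.ones_like(x, dtype=complex)  # slit 1, taken as phase reference
amp2 = np.exp(1j * phase)              # slit 2, shifted by the path difference

quantum = np.abs(amp1 + amp2) ** 2                 # oscillates between 0 and 4
classical = np.abs(amp1) ** 2 + np.abs(amp2) ** 2  # flat: 2 everywhere

print(quantum.min(), quantum.max(), classical.min(), classical.max())
```

The zeros of the quantum pattern, places where the electron never lands although either slit alone would allow it, are the signature of the electron acting on both paths at once.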

We evolved as living beings, and the more complex we became, the more conscious. Jean-Baptiste Lamarck’s law of increasing complexity applies to, and is exemplified by, the evolution of consciousness. Consciousness is probably a law of physics, not an accident of history.

Some say: ‘oh, well, consciousness may not be that important’. Well, first, at least three different phyla evolved it independently on Earth, vertebrates being only one of them. (As all trout fishers know, trout act as if they were conscious; that’s why the experienced ones are so hard to catch when the water is clear…)

But there is a much deeper objection to considering consciousness unimportant: what is the connection of consciousness to thinking? Could the atom of consciousness be the atom of thinking…. And precisely defined as Quantum Computation?

Indeed, consider programming as presently done with electronic computers: one thing after the other, just very fast, yet fundamentally, desperately dumb. Present-day computing, pre-Quantum Computing, can result in desperately slow computations. Whereas the electron can compute instantaneously (says a hopefully naive Quantum theory) solutions to problems too complicated for our (pre-Quantum!) computers to handle, finding out where the low-energy solution is. That’s the superiority of Quantum Computing: tremendous, instantaneous, stupendous computation.
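To put a rough number on “desperately slow”: a classical machine simulating n two-state quantum systems (qubits) must explicitly store 2^n complex amplitudes, all of which nature handles at once. The figures below are standard order-of-magnitude illustrations, not taken from the essay.

```python
# Sketch: why classical simulation of quantum systems chokes. A register of
# n qubits is described by 2**n complex amplitudes; a classical computer
# must track every one of them explicitly.

def classical_cost_bytes(n_qubits: int) -> int:
    """Memory to store the full state vector: 2**n amplitudes at 16 bytes
    each (one complex128 per amplitude)."""
    return (2 ** n_qubits) * 16

for n in (10, 30, 50, 300):
    print(n, classical_cost_bytes(n))

# Around n = 50 the state vector already outgrows the largest supercomputers;
# at n = 300 the count of amplitudes exceeds the number of atoms in the
# observable universe.
```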

So, what looks like a type of consciousness, found in the translating electron, is not just an incredibly efficient way of computing, it is at the core of the efficiency of the world. Could it be the most primitive form, the atom of thinking?

Identifying fundamental quantum and fundamental thinking is an idea whose time has come… Philosophically speaking, in the most practical manner, it means that discursive logic will never cover the last mile…

Patrice Aymé

***

***

Very Tangential Observations:

  1. Albert Einstein ascribed properties to the photon, and the electron, which, I claim, have not been observed (thus leading physics astray, straight into the Multiverse). However, the later formalism sort of implemented Einstein’s design (which is older than Einstein), attributing (sort of, or maybe not) a strict position to elementary particles… and was found to give excellent results (namely QED, QCD, the “Standard Model”…). But Ptolemy, too, gave good results. Thus, now, elementary particles are endowed with properties which, if I am right, are fake… It has often happened in science that a fake, or grossly incomplete, theory masquerades as true for a very long time: math is full of them (the neglect of Non-Euclidean geometry, etc.).
  2. The example of Non-Euclidean geometry is revealing: it was abandoned for brain-dead Euclidean geometry… Why did those Hellenistic-regime Greeks opt for that silly form of mathematics? Because their superiors, various kings and tyrants, preferred silly. Because geometry in the plane was easier: a case of looking for the keys only below the lamppost, because it’s simpler, and one is drunk. Let’s not repeat the mistake of having only simple thoughts when pondering consciousness, just because our superiors prefer simple thoughts, and are drunk on their power… soon to be extinguished in great balls of nuclear fire…

LOGIC IS MATERIAL

April 11, 2018

Logic doesn’t just matter, it is matter.

FUNDAMENTAL PROCESSES, INCLUDING COMPUTATIONS, LOGIC, ARE MATERIAL OBJECTS:

Is there something besides matter? No. What is matter? Ah, two types of things, corresponding to wave-particle duality… Or, as I put it often, process-quanta duality.

***

We should have come a long way in 24 centuries, yet some keep repeating the ideas of Plato, an Athenian plutocrat. Plato (and his teacher Socrates and student Aristotle) had an extreme right-wing agenda, much of it pursued later in the “Hellenistic” regimes (dictatorships), the imperial fascist Roman Principate, and the rage against innovation. Plato’s metaphysics has much in common, if not everything, with Christianism (this explains its survival…)

And now for a word from this essay’s sponsor, the gentleman contradicting me. Robin Herbert replied to me: …”many don’t seem to grasp that the classical logics are not tied to any physical assumptions… I think the problem is that we have this term “classical physics” and another term “classical logic” and people think they are related. They aren’t.”

Are we that stupid? I guess, our enemies wish we were…

***

Only those who have never heard of Platonism would not be familiar with the notion that logic is not “material”: it is at the core of Plato’s view of the universe. And also at the core of Christianism, so help me not god!

I beg to oppose the dematerialization of logic. Unlike Plato, I have careful observation of nature, Quantum theory, the mechanics of atomic theory, to back me up. Frankly, relative to what we know now, Plato is an ignorant twerp. So why the reverence for his antique antics? My unforgiving mood is driven in part by the observation that the Ancient Greeks had plenty of holes in their axiomatics… especially in mathematics (where they made several ludicrous mistakes, such as forgetting Non-Euclidean geometry generations after discovering it).

If logic is not tied to “physics”, or what’s material, we want to know what it is tied to. But, as I am going to show, all that does is take us back to the Gospel of John as the ultimate authority (itself straight out of Plato!)

Twentieth Century physics has revealed that physics is made of “Fundamental Processes” (see the very nice, pre-QCD book by that title from Feynman)… And Quanta. The former, the processes, are described by waves, the second, those lumps of energy, by particles.

Thus, saying that “logic is not physics” is tantamount to saying that logic is neither a fundamental process (or set thereof), nor quanta (or set thereof).

Orbitals of an electron around a proton (the Hydrogen atom), visualized in 2013 (Physical Review). What you are looking at is one electron, when it is delocalized. The electron is the cloud. The cloud is a process. The process is what an atom of hydrogen is, 99.9999999% of the time… at least…

There are several problems with such a claim: far from being immaterial, any logic shows up as quanta (aka “symbols”), and is itself a process (classical logic rests on implication, the simplest process: ”if A then B”, and chains therefrom). Logic shows up as nothing else, so that’s what it is: a bunch of fundamental processes and quanta. This is the modern philosophy of physics in action! (It originated with Newton and Laplace, and was then amplified by Jules Henri Poincaré.)

There was a famous exchange between Heisenberg and Einstein; the latter, at the peak of his glory, accused the young Quantum physicist of having put only observables in his matrix quantum theory. Heisenberg coolly smirked back that it was Einstein who had taught him to do so! (Constructively infuriated, ten years later Einstein rolled out the EPR thought experiment, alleging a contradiction between Quantum Mechanics and LOCAL “elements of reality“. The effect was relabeled “entanglement” by Schrödinger, and is now the central notion in Quantum theory… Einstein should have realized that it was this very delocalization which made atoms wholes…)

So what’s “material”? What’s observable! And what is observable? (Delocalized) fundamental processes and (localized, yet ephemeral) quanta. Claiming that the logos is neither is (implicitly) done in the first sentence of the Gospel of John, and John adds that its name is god. We of the natural school shall excommunicate those evoking god. Those who claim that “logic”, the logos, escapes nature (= physis) are just followers of whom John followed, namely Plato. They are Platocrats, a particular prototype of plutocrats…

Fundamental processes are described by equations, but that doesn’t mean the equations are “real” beyond being symbols (“quanta”) in a medium. First of all, equations are approximations: a classical computer can make only a finite number of operations (differently from a full Quantum computer, which works with a continuum, the circle S1). What is really real is the fundamental process(es) the equations approximate.

Indeed, consider atoms: they are real, “indivisible” (sort of)… and yet mostly made of delocalized processes known as electronic orbitals.  It is the delocalization which creates the substance: see the picture above… 
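That delocalization can be put in numbers with the standard textbook hydrogen 1s wavefunction, in atomic units (a sketch of the orthodox result, not of this essay’s own formalism): the density |ψ|² ∝ exp(−2r/a0) has no orbit anywhere, yet the radial probability 4πr²|ψ|² peaks at exactly one Bohr radius.

```python
import numpy as np

# Hydrogen 1s "cloud" in numbers (atomic units, a0 = Bohr radius).
# |psi_1s|^2 = exp(-2 r / a0) / (pi a0^3); the RADIAL probability density
# P(r) = 4 pi r^2 |psi|^2 peaks at r = a0, and integrates to 1.
a0 = 1.0
r = np.linspace(1e-6, 10.0, 100_000)       # radial grid, in units of a0
psi_sq = np.exp(-2 * r / a0) / (np.pi * a0 ** 3)
radial = 4 * np.pi * r ** 2 * psi_sq       # probability per unit radius

r_peak = r[np.argmax(radial)]              # most probable radius: ~1 a0
# trapezoidal integration by hand: total probability should be ~1
total = float(np.sum((radial[1:] + radial[:-1]) * np.diff(r)) / 2)

print(r_peak, total)
```

There is a definite most-probable radius, but the electron is the whole cloud: exactly the “process, not thing” picture of the caption above.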

So is a classical computation a real object, in the aforementioned sense? Yes, because it is a FINITE set of fundamental processes (moving electrons and photons around). However, if the proposed computation, or logical deduction, takes an infinite amount of time, it becomes something that never comes to exist. (That’s an allusion to a classical computer trying to duplicate Quantum computers; in the case of the chlorophyll molecule, no classical computer could do what the molecule, viewed as a Quantum computer, does!)

In this view, call it material logic, time, whether we want it or not, whether logicians realize it or not, is an essential part of logic: the time-energy principle de facto granulates time (we need infinite energy for infinitely small time intervals, hence for would-be infinite logical computations). To say time is not part of logic is another of these oversights (such as Archimedes made, implicitly using what non-standard analysts, Robinson et al., called the “Archimedes Axiom”, which excludes infinitely small (or large) integers). Any piece of logic comes with its own duration, namely how many steps it needs in its simplest form.
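The granulation claim can be made quantitative with the standard time-energy relation ΔE·Δt ≥ ħ/2: resolving an ever shorter interval Δt demands an ever larger energy spread ΔE. A minimal sketch in SI units, with purely illustrative sample intervals:

```python
# Time-energy uncertainty: dE * dt >= hbar / 2. The shorter the time
# interval one wants to resolve, the larger the minimum energy spread --
# which is why "infinitely fine" time (hence infinite computation in finite
# time) would cost infinite energy.
HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s (CODATA value)

def min_energy_spread(dt_seconds: float) -> float:
    """Smallest energy spread (J) compatible with resolving an interval dt."""
    return HBAR / (2 * dt_seconds)

for dt in (1e-15, 1e-21, 5.39e-44):   # femtosecond, zeptosecond, Planck time
    print(dt, min_energy_spread(dt))
```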

Quantum computing uses one (hypothesized) infinity: the assumed instantaneity of what I call the Quantum Interaction (aka Quantum Collapse). That makes it possible to delocalize Quantum logic (which has no distributive law of propositional logic!) as delocalized Quantum processes, and this is why it can’t be classically duplicated (aka “Quantum supremacy”).

Happy processes!

Patrice Aymé

Dwarf Galaxies Contradict Standard Cosmology, BUT NOT SQPR!

February 21, 2018

Standard Cosmology Threatened, SQPR Proven?

Cosmology matters; it has always mattered, ever since there have been reasons, and we humans have tried to refine them. Cosmology is the laboratory of pure reason.

The standard cosmological model is called the Lambda Cold Dark Matter model. “Lambda” is for the Cosmological Constant, an invention of Albert Einstein (hey, you see, Albert invented a few things…). Lambda basically says that space, spacetime itself, could have an energy independent of the mass-energy tensor (the energy of all and any particles). Dark Matter, in that model, is assumed to be some, so far mysterious thing, spread all about, right from the start. A type of particle, so far undiscovered.

After the Big Bang, in ΛCDM, the universe expands: light takes ever longer to travel between the developing clumps of matter which will end up as galactic clusters. In these clumps, Dark Matter concentrates, like the rest. Dark Matter, reacting only to gravity, ends up forming the next generation of more concentrated clumps (it’s not held back by radiation pressure from shining stars, etc.). These Dark Matter kernels in turn attract material which ends up more or less rotating (the bigger, the more rotation), and we call that galaxies. Dwarf galaxies stay irregular and often don’t rotate as flat disks. Giant galaxies such as the Milky Way, Andromeda and Centaurus A rotate mightily, and find themselves with dozens of smaller galaxies as satellites.

Centaurus A (NGC 5128) is an unusual giant elliptical galaxy crossed by a dust lane. The yellow halo is made of billions of yellow stars. It is about 13 million light-years away (5 times farther than Andromeda), and is the closest giant galaxy we can see after Andromeda (others may be hidden by dust). It is accompanied by 16 Dwarf Galaxies rotating in the same plane as Centaurus A itself, something absolutely not predicted by ΛCDM. The width of the picture is 16 arc minutes, half of the full moon (which is 30 arc minutes, half a degree).

***

The ΛCDM model is, at first sight, impressive. Computer simulations of the model are considered very successful when compared with observations on very large scales (larger than galactic clusters, up to the observable horizon). But it has a “small-scale crisis”: too many dwarf galaxies, too much dark matter in the innermost regions of galaxies, too many Dark Matter halos (which are not observed). These small scales are harder to resolve in computer simulations, so it is not yet clear whether the problem is the simulations, non-standard properties of dark matter, or a more radical error in the model.

However, worse is now surfacing: the distribution of dwarf galaxies in a flat disk around their mother galaxy is absolutely not predicted by the ΛCDM paradigm.

ΛCDM predicts Dwarf Galaxies around a giant galaxy, but also predicts that their orbits should be left to chance: there is not enough time since the Big Bang to develop a huge rotation of the supergalactic cloud. ΛCDM says galaxies formed nearly instantaneously, after being torn at the outskirts by Dark Matter clumps which then make Dwarf Galaxies.

An international team of astronomers has determined that Centaurus A, a massive elliptical galaxy 13 million light-years from Earth, is accompanied by a number of dwarf satellite galaxies orbiting the main body in a narrow disk. This is the first time such a galactic arrangement has been observed outside the Local Group, home to the Milky Way, and anchored by it, Andromeda and the much smaller Triangulum galaxy. (By the way, it turns out that Andromeda is roughly the same size as the giant Milky Way, and not larger, as previously thought. The error came from overestimation of the Dark Matter in Andromeda, from too gross an application of the Virial Theorem. All this may have consequences for life in the universe, as it is easy to find reasons for zones in giant galaxies more hospitable for life, which less organized galaxies won’t have… But I digress.)

***

Dwarf galaxies move in unexpected ways around the Milky Way, Andromeda and Centaurus A. This contradicts Standard Cosmology:

Giant galaxies like our Milky Way are orbited by satellite dwarf galaxies. Standard cosmological simulations of galaxy formation predict that these satellites should move randomly around their host. Müller et al. examined the satellites of the nearby elliptical galaxy Centaurus A. They found that the satellites are distributed in a planar arrangement, and 14 members of the plane (out of 16) are demonstrably orbiting in the same direction. This is inconsistent with more than 99.5% of comparable galaxies in simulations. Centaurus A, the Milky Way, and Andromeda all have highly statistically unlikely satellite systems. This observational evidence suggests that something is wrong with standard cosmological simulations.
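The “99.5%” figure can be sanity-checked with a deliberately naive null model (my simplification for illustration; Müller et al. actually compare against full cosmological simulations): treat each satellite’s orbital direction as a fair coin flip and ask how often 14 or more of 16 would agree.

```python
from math import comb

# Naive null model: each of the 16 plane members independently orbits one
# way or the other with probability 1/2. How often do >= 14 agree?
n, k = 16, 14
tail = sum(comb(n, i) for i in range(k, n + 1))  # ways to get 14, 15 or 16
p_one_way = tail / 2 ** n          # >= 14 co-rotating in a FIXED direction
p_either_way = 2 * p_one_way       # the two directions are disjoint events here

print(p_either_way)   # about 0.004, i.e. under 0.5%
```

Even this crude coin-flip model lands in the same ballpark as the quoted “inconsistent with more than 99.5% of comparable galaxies in simulations”.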

In other words, ΛCDM predicts that there should be a halo of Dark matter and Dwarf Galaxies. There is not. (Whereas SQPR predicts planar structures, see below!)

“The significance of this finding is that it calls into question the validity of certain cosmological models and simulations as explanations for the distribution of host and satellite galaxies in the universe,” said co-author Marcel Pawlowski, “Hubble Fellow” in the Department of Physics & Astronomy at the University of California, Irvine.

He said that under the Lambda Cold Dark Matter model, smaller systems of stars should be more or less randomly scattered around their anchoring galaxies and should move in all directions. Yet Centaurus A is the third documented example, behind the Milky Way and Andromeda, of a “vast polar structure” in which satellite dwarves co-rotate around a central galactic mass in what Pawlowski calls “preferentially oriented alignment.”

The difficulty of studying the movements of dwarf satellites around their hosts varies according to the target galaxy group. It’s relatively easy for the Milky Way. “You get proper motions,” Pawlowski said. “You take a picture now, wait three years or more, and then take another picture to see how the stars have moved; that gives you the tangential velocity.”

Using this technique, scientists have measurements for 11 Milky Way satellite galaxies, eight of which are orbiting in a tight disk perpendicular (!) to the spiral galaxy’s plane. There are probably other satellites in the system that can’t be seen from Earth because they’re blocked by the Milky Way’s dusty disk.
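The proper-motion technique Pawlowski describes amounts to: tangential velocity = angular drift on the sky × distance. The standard astronomical conversion is v_t [km/s] = 4.74 × μ [arcsec/yr] × d [pc]. A minimal sketch with illustrative values (not measurements from the article):

```python
# Proper motion -> tangential velocity. The constant 4.74 converts
# (arcsec / yr) * parsec into km / s (it is 1 AU per year in km/s).
def tangential_velocity_kms(mu_arcsec_per_yr: float, distance_pc: float) -> float:
    return 4.74 * mu_arcsec_per_yr * distance_pc

# Hypothetical example: a satellite dwarf at ~80 kpc whose stars drift
# 0.0005 arcsec/yr moves at roughly 190 km/s across the line of sight.
v = tangential_velocity_kms(0.0005, 80_000)
print(v)
```

The tiny angular drifts involved are why Pawlowski’s “wait three years or more” is needed for Milky Way satellites, and why the method fails entirely at Centaurus A’s distance.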

***

SQPR Versus ΛCDM:

To avoid the concept of Dark Matter, MOdified Newtonian Dynamics (MOND) has been suggested. It seems clear to me that it doesn’t work. Moreover, MOND is an ad hoc explanation: have problem, invent specific axiomatics to solve problem. Besides solving what looks like Dark Matter without Dark Matter, and this only around galaxies, not during collisions, MOND has no reason for being. The more evidence piles up, the less plausible it looks.

My own theory, SQPR, is quite the opposite. It is a MOdified Quantum Dynamics (MOQD): it posits a Sub Quantum Reality, to make Quantum Mechanics logically complete and causal, with a nonlocality that is not as “spooky” (to use Einstein’s bon mot). SQPR predicts Dark Matter, and it predicts that Dark Matter is CREATED inside giant galaxies, just as Black Holes are created inside giant galaxies (at ten times the rate of growth inside smaller galaxies). So, with me, Dark Matter becomes a Quantum effect. The exact predictions are these:

Young giant galaxies will have little Dark Matter. Dark Matter is emergent.

Dark Matter will form in disks… And Dwarf Galaxies too.

SQPR’s predictions are completely different. But they fit observations…

My scenario is this: giant gas clouds of normal matter, galactic size, coalesce first from the pull of gravity. As they do, conservation of angular momentum augments the rotation speed (there will always be some rotation to start with; it’s nearly the same phenomenon as in cyclone formation). Implosion of the galactic-size cloud, in conjunction with the rise of angular speed, creates a flat disk. This disk will contain lumps in the outer zone: dwarf galaxies, similar to planet formation in a solar system. Meanwhile, the Quantum Interaction, at cosmological distances, will churn out Dark Matter.
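The spin-up step of that scenario is just conservation of angular momentum, L = Iω. Treating the cloud crudely as a uniform sphere (I = (2/5)MR², a deliberate simplification; masses and radii below are arbitrary illustrative values), halving the radius quadruples the angular speed:

```python
# Conservation of angular momentum for a contracting cloud of fixed mass:
# I0 * omega0 = I1 * omega1, with I proportional to M * R**2, so
# omega1 = omega0 * (r0 / r1)**2.  Contracting 100x spins it up 10,000x.
def spin_up(omega0: float, r0: float, r1: float) -> float:
    """Angular speed after contracting from radius r0 to r1 (same mass)."""
    return omega0 * (r0 / r1) ** 2

omega_final = spin_up(omega0=1e-15, r0=1.0, r1=0.01)  # contract 100-fold
print(omega_final)
```

This is the same mechanism that makes a cyclone, or a figure skater pulling in her arms, rotate faster, and it is why the imploding cloud flattens into a disk.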

So we will typically end up with a flat disk of Dwarf Galaxies rotating in the same plane as the growing disk of Dark Matter of the giant galaxy. (Notice that I predict Dwarf Galaxies will have less Dark Matter, in the typical case).

Objectors may brandish the fact that the Dwarf Galaxy disk of the Milky Way is perpendicular, a glaring contradiction with my model. Well, my retort to that: something happened which yanked one relative to the other. The Local Group contains more than 54 galaxies, and it’s not even clear that all the large ones have been found, because of Milky Way dust: so a large galaxy passing by could have disrupted the dynamics of the Milky Way and its Dwarf Galaxy disk. There are plenty of observations of such vast distortions between galaxies. (And in the Solar System, Uranus can be contemplated, whose rotation axis is perpendicular to that of all the other planets, and to where common sense would put it, perpendicular to the ecliptic: clearly something big and weird happened which rotated the rotation axis spectacularly. By the way, Mars’ rotation axis also wobbles spectacularly, although it coincidentally sits at the exact same angle to the ecliptic as Earth’s, right now, another spectacular coincidence.) Strange occurrences are not a proof of the existence of gods; however, the case of Dwarf Galaxies considered here is 3 out of 3… and actually more, and it becomes very statistically significant, if we look at the set of all Dwarf Galaxies around the Milky Way, Andromeda, and Centaurus A.

***

Mavericks, such as yours truly, argue that, like much modern physics, the ΛCDM model is built upon an intricate foundation of conventionalist stratagems, rendering it unfalsifiable in the sense promoted by Karl Popper. Mavericks have to be taken seriously: several experts howled, for many decades, that there was Dark Matter. They were viewed as having fallen to the Dark Side (naturally enough). Then a serious mathematician called Segal pointed out that there was a Dark Energy problem: the cosmic acceleration itself accelerated, he insisted, and he wrote an entire, very serious book about it. In spite, or because, of these grave accusations, the entire field was ignored for more than 50 years (entire books about Dark Matter and the accelerating acceleration of the universe were dismissed as crankery): governments preferred to finance militarily useful physics (“high energy” physics) rather than potentially revolutionary physics.

Anyway, things are quickly coming to a head. Astronomy is finally getting financed much more than it used to be. Astronomy, experimentation contemplated on the largest scale, is shattering physics. Noble high-energy physicists were studying only 5% of the universe, says astronomy…

ΛCDM says Dark Matter was always there. I suggest instead that it was created, by standard Mass-Energy and how (as Black Holes were created, albeit from a Quantum, not gravitational, mechanism). We will see. First we see, then we think.

Patrice Aymé

Perverse Logic: Saving the Multiverse with Unhinged Cosmic Inflation!

February 1, 2018

When The Unobservable Universe Is Used To Justify Various Follies, Such As The Multiverse, Civilization Is In A Bad Way:

Physics is the laboratory of reason. This is where the most advanced, most subtle logics are forged (even more so than in pure mathematics, where the navel’s importance is too great). So what physicists ponder matters to the entire civilization which nurtures them. When physics goes to the dogs, so does civilization. The follies of state-of-the-art theoretical physics reflect an ambient madness which pervades civilization. (If you don’t believe this, may I sell you some imaginary bitcoins for millions of dollars?)

Astrophysicist Ethan Siegel, a continual source of excellent articles in physics, wrote an interesting essay which I disagree with. His reasons are interesting, and have the merit of honesty. My answers are even more striking, and I bring the full weight of 24 centuries of history as meta-evidence to crush the feeble, pathetic, short-sighted considerations of my fellow physicists. Ethan’s essay is entitled: “Yes, The Multiverse Is Real, But It Won’t Fix Physics. Surprisingly, the evidence points towards the existence of the unobservable multiverse. But it isn’t the answer you’re looking for.”

Ethan proposes to use cosmic inflation to provide for the proliferation of Schrödinger cats and Wigner’s friends. One folly would thus provide for the other, and they would thus stay up, like two drunks falling into each other’s arms. I will instead humbly suggest to do away with madness altogether. But first a little recap.

The universe is expanding. This experimental evidence was established around 1920 by a number of astronomers in Europe and the USA, the most famous of whom was the lawyer turned astronomer Edwin Hubble. Hubble had the biggest telescope. The expansion is presumed to look the same everywhere, and this is what seems to be observed. That also means that, if one looks far away, galaxies will seem to be receding from us at speeds ever closer to the speed of light. As the apparent speed of these galaxies approaches c, their light gets shifted to lower and lower frequencies, until they become invisible (for the same reason that Black Holes are blacker than black).
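That dimming-to-invisibility can be sketched with the standard relativistic Doppler factor for a receding source: f_obs = f_emit × √((1 − β)/(1 + β)), with β = v/c. As β → 1 the factor goes to zero, and the light is shifted out of any detector’s band:

```python
import math

# Relativistic Doppler factor for a source receding at v = beta * c.
# Observed frequency = emitted frequency * sqrt((1 - beta) / (1 + beta)).
def doppler_factor(beta: float) -> float:
    return math.sqrt((1 - beta) / (1 + beta))

for beta in (0.1, 0.9, 0.999, 0.999999):
    print(beta, doppler_factor(beta))
```

At β = 0.999 the observed frequency is already about 2% of the emitted one; the factor keeps plunging toward zero, which is the quantitative content of the “event horizon” discussed next.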

Where the transition to invisibility occurs is called the “event horizon”. Beyond the event horizon is the unobservable universe (we can’t even detect it gravitationally, as gravity propagates at the speed of light, a theoretical prediction now experimentally verified).

The observed universe is “flat” (namely there is no detected distortion in the distribution of clouds, filaments and superclusters of galaxies). That sounds unlikely, and indicates that the observed universe is a tiny portion of a much larger whole.

This unobservable universe has nothing to do with the “Multiverse” brandished recently by many theoretical physicists who have apparently run out of imagination for something more plausible. Eighty years ago, Schrödinger pointed out that Quantum Mechanics, as formalized then (and now!), was observer dependent, and filled up the universe with waves of dead and live cats (when applied to macroscopic objects). That’s called the Schrödinger Cat Paradox. Instead of calling for a re-thinking of Quantum Mechanics (as I do!), Ethan Siegel (and many other physicists and astrophysicists) embraces the dead-and-alive cats, settling them in “parallel universes”. So basically they reenact Solomon’s Judgment: instead of cutting the baby in two, they cut the universe in two. Zillions of times per second, in zillions of smaller places than you can possibly imagine… Here is a picture of the Schrödinger cat: as the branches separate in that movie, two universes are created. This is what Ethan Siegel wants to justify, thanks to cosmic inflation…

Ethan’s revealing comment: “The idea of parallel Universes, as applied to Schrödinger’s cat. As fun and compelling as this idea is, without an infinitely large region of space to hold these possibilities in, even inflation won’t create enough Universes to contain all the possibilities that 13.8 billion years of cosmic evolution have brought us. Image credit: Christian Schirm.”
To explain crazy, we will go more crazy, thus making the previous crazy sound more rational, relatively speaking…

“The Multiverse”, with baby universes all over the universe, has more to do with the “Many Worlds Interpretation” of Quantum Mechanics, a theory so absurd that the great popes of physics ruling around 1960 rejected it outright. Wheeler was ashamed of himself for having had a PhD student, Everett, who suggested this folly. (Everett couldn’t get an academic job, at a time when academic employment in physics was booming!)

Ethan wrote: “In the region that became our Universe, which may encompass a large region that goes far beyond what we can observe, inflation ended all-at-once. But beyond that region, there are even more regions where it didn’t end.”

This sort of statement, and I say this with all due respect to the divine, is equivalent to saying: ”Me, Ethan, having checked all that exists, observable by simple humans or not, thereby inform you that I am either God, or that She is an interlocutor of mine. We checked that cosmic inflation thing, and saw it all over all the possible universes. Don’t talk, just learn.”

There is no way for us humans to know, for sure or not, what is going on beyond the observable universe (aside from seeing no gravitational field distortions when approaching the event horizon, as I said above when considering “flatness”).

Ethan notices that Many Worlds fanatics have tried to use cosmic inflation to save their (ridiculous) theory. (“Many Worlds” is ridiculous, as Schrödinger tried to show long ago, because there would be as many ways to cut the universes into “Many Worlds” as there are observers. So, so to speak, the “Many Worlds Interpretation”, call it MWI, is actually MWI ^ {Observers}: MWI to the power of the set of all possible Observers, the latter set being itself something like an uncountably infinite function of MWI.)

Ethan says: “But just because variants of the Multiverse are unfalsifiable, and just because the consequences of its existence are unobservable, doesn’t mean that the Multiverse isn’t real. If cosmic inflation, General Relativity, and quantum field theory are all correct, the Multiverse likely is real, and we’re living in it.”

What Ethan is saying is that if a number of crazy (cosmic inflation) or incomplete (Quantum Field Theory) ideas are “all correct”, then something as useful as angels on pin heads is real. Yes, indeed, if one believes that Muhammad flew to Jerusalem on a winged horse (!), one may as well believe all the rest of the Qur’an. That is a proof by crystal ball. After Ptolemy and company had established their (half-correctly) predicting “epicycles” theory, one could have used it in turn to “prove” Aristotle’s ridiculous theory of motion.

Twenty-three centuries ago a much saner theory existed, that of Aristarchus. It was rejected at the time precisely because it was not insane, even though it was used to make a nearly correct prediction of the distance of the Moon. Aristarchus underestimated the distance of the Sun, but a telescope could have changed this (by showing more precisely the angle of the terminator on the Moon). If astronomers at the time had accepted heliocentrism as a possibility, it would have led them to invent the telescope. Similarly, right now, rejecting Many Worlds and the Multiverse will lead to the development of instruments which don’t exist yet (I have proposed at least one).

Astrophysicist Ethan Siegel suggests that: “The Multiverse is real, but provides the answer to absolutely nothing.” My opinion is that the Multiverse is worse than useless: the unhinged mood it provides prevents the development of more fruitful avenues of research, both theoretical and experimental.

Insanity is the rule in crowds (Nietzsche). Thus follies are the truths crowds love, at first sight, before being corrected by higher minds. Why? Follies bind, because they are so special.

https://patriceayme.wordpress.com/2015/02/20/commonly-accepted-delusions-follies-that-bind/

In Aristarchus’ times, heliocentrism, the fact that the Earth and its Moon rotate around the Sun, should have been obvious. Indeed, people, let’s think for a moment: where was the Sun supposed to be, considering the phases of the Moon? If the Sun turned around the Earth, the Moon’s illumination should have changed all day long! It didn’t require much geometrical analysis to discover that this source of light could only be where Aristarchus computed it to be, far away from the Earth-Moon system.
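Aristarchus’ own geometric trick can be sketched in a few lines of Python: at exact half Moon, the Earth-Moon-Sun angle is a right angle, so measuring the Moon-Earth-Sun elongation gives the ratio of the two distances. The 87° below is his historical estimate; the ~89.85° is the modern value; both are plugged in purely for illustration.

```python
import math

def sun_moon_distance_ratio(elongation_deg):
    """At exact half Moon the Earth-Moon-Sun angle is 90 degrees, so
    (Earth-Sun)/(Earth-Moon) = 1/cos(elongation), where the elongation
    is the Moon-Earth-Sun angle measured from Earth."""
    return 1.0 / math.cos(math.radians(elongation_deg))

print(sun_moon_distance_ratio(87.0))    # Aristarchus' measurement: Sun ~19x farther than Moon
print(sun_moon_distance_ratio(89.85))   # modern elongation: Sun ~380x farther
```

The point of the sketch: the method was sound, and only the precision of the angle (exactly what a telescope would have supplied) was lacking.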

It took 19 centuries to correct that (obvious!) mistake. Interestingly, Jean Buridan, circa 1350 CE, did it in the most theoretical fashion.

https://patriceayme.wordpress.com/2016/03/20/momentum-force-inertia-middle-ages-buridan/

Buridan first showed that Aristotle’s ridiculous theory of motion made no sense, and had to be replaced by inertia and momentum (what Buridan called “impetus”). Having done this, the motion of the planets in a heliocentric system could be explained by “circular impetus”, Buridan pointed out (then he observed sardonically that we couldn’t observe the difference between epicycles and heliocentrism, so one may as well go for “Scripture”).

Similarly, nowadays, instead of arguing with the “angels on a multiverse pinhead” authorities, we had better point out the glaring inconsistencies of Quantum Mechanics.

Civilization without reason is like a chicken without a head: it can run, but not forever.

Patrice Aymé

Discrepancy In Universe’s Expansion & Quantum Interaction

January 17, 2018

In “New Dark Matter Physics Could Solve The Expanding Universe Controversy“, Ethan Siegel points out that:

“Multiple teams of scientists can’t agree on how fast the Universe expands. Dark matter may unlock why.
There’s an enormous controversy in astrophysics today over how quickly the Universe is expanding. One camp of scientists, the same camp that won the Nobel Prize for discovering dark energy, measured the expansion rate to be 73 km/s/Mpc, with an uncertainty of only 2.4%. But a second method, based on the leftover relics from the Big Bang, reveals an answer that’s incompatibly lower at 67 km/s/Mpc, with an uncertainty of only 1%. It’s possible that one of the teams has an unidentified error that’s causing this discrepancy, but independent checks have failed to show any cracks in either analysis. Instead, new physics might be the culprit. If so, we just might have our first real clue to how dark matter might be detected.

20 years ago, it was published, peer-reviewed, by a number of teams, that we are in an ever faster expanding universe. The Physics Nobel was given for that to a Berkeley team and to an Australian team. There are now several methods to prove this accelerating expansion, and they (roughly) agree.

Notice the striking differences between different models in the past; only a Universe with dark energy matches our observations. Possible fates of the expanding Universe which used to be considered were, ironically enough, only the three on the left, which are now excluded.  Image credit: The Cosmic Perspective / Jeffrey O. Bennett, Megan O. Donahue, Nicholas Schneider and Mark Voit.

Three main classes of possibilities for why the Universe appears to accelerate have been considered:

  1. Vacuum energy, like a cosmological constant, is energy inherent to space itself, and drives the Universe’s expansion. (This idea goes back to Einstein, who introduced a “Cosmological Constant” in the basic gravitational equation… to make the universe static, a weird idea akin to the crystal spheres of Ptolemaic astronomy; later Einstein realized that, had he not done that, he could have posed as real smart by predicting the expansion of the universe… So he called it, in a self-congratulating way, his “greatest mistake”… However, in the last 20 years, the “greatest mistake” has come to be viewed as a master stroke…).
  2. Dynamical dark energy, driven by some kind of field that changes over time, could lead to differences in the Universe’s expansion rate depending on when/how you measure it. (Also called “quintessence”; not really different from 1), from my point of view!)
  3. General Relativity could be wrong, and a modification to gravity might explain what appears to us as an apparent acceleration. (However, the basic idea of the theory of gravitation is so simple that it’s hard to see how it could be wrong, as long as one doesn’t introduce Quantum effects… Which is exactly what I do! In my own theory, said effects occur only at large cosmic distances, on the scale of large galaxies.)

Ethan: “At the dawn of 2018, however, the controversy over the expanding Universe might threaten that picture. Our Universe, made up of 68% dark energy, 27% dark matter, and just 5% of all the “normal” stuff (including stars, planets, gas, dust, plasma, black holes, etc.), should be expanding at the same rate regardless of the method you use to measure it. At least, that would be the case if dark energy were truly a cosmological constant, and if dark matter were truly cold and collisionless, interacting only gravitationally. If everyone measured the same rate for the expanding Universe, there would be nothing to challenge this picture, known as standard (or “vanilla”) ΛCDM.

But everyone doesn’t measure the same rate.”

The standard, oldest, method of measuring the Hubble cosmic expansion rate is through a method known as the cosmic distance ladder. The simplest version only has three rungs. First, you measure the distances to nearby stars directly, through parallax, the variation of the angle of elevation during the year, as the Earth goes around its orbit. Most specifically, you measure this way the distances to long-period Cepheid stars. Cepheids are “standard candles”: they are stars whose luminosities vary, but whose peak power is fixed (by their pulsation period), so we can know how far they are by looking at how much they appear to shine. Second, you then measure other properties of those same types of Cepheid stars in nearby galaxies, learning how far away those galaxies are. And lastly, in some of those galaxies, you’ll have a specific class of supernovae known as Type Ia supernovae. Those supernovae explode exactly when they accrete 1.4 solar masses from another orbiting star (a theory of the Indian-born Nobel laureate Chandrasekhar, who taught at the University of Chicago). One can see these Ia supernovae all over the universe, inside the Milky Way as well as many billions of light years away. With just these three steps, you can measure the expanding Universe, arriving at a result of 73.24 ± 1.74 km/s/Mpc.
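The last rung of the ladder can be sketched numerically. This is a toy calculation with round, made-up numbers (the apparent magnitude, the absolute magnitude, and the recession velocity below are illustrative, not survey data): the distance modulus gives the distance, and recession velocity over distance gives a Hubble rate.

```python
def distance_mpc(m, M):
    """Distance from the distance modulus: m - M = 5*log10(d_parsec) - 5."""
    d_parsec = 10 ** ((m - M + 5) / 5)
    return d_parsec / 1.0e6  # convert parsecs to megaparsecs

# Toy numbers: a standard candle of absolute magnitude M = -19.3
# (a typical SN Ia peak) observed at apparent magnitude m = 15.7,
# in a galaxy receding at 7300 km/s.
m, M, v_km_s = 15.7, -19.3, 7300.0
d = distance_mpc(m, M)   # 100 Mpc for these numbers
H0 = v_km_s / d          # Hubble rate in km/s/Mpc
print(d, H0)
```

With these illustrative inputs the rate comes out at 73 km/s/Mpc, the ballpark of the distance-ladder value quoted above; the real measurement differs only in using thousands of calibrated objects instead of one invented one.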

The other method makes all sorts of suppositions about the early universe. I view it as a miracle that its result is as close as it is: 66.9 km/s/Megaparsec…

Ethan concludes that: “Currently, the fact that distance ladder measurements say the Universe expands 9% faster than the leftover relic method is one of the greatest puzzles in modern cosmology. Whether that’s because there’s a systematic error in one of the two methods used to measure the expansion rate or because there’s new physics afoot is still undetermined, but it’s vital to remain open-minded to both possibilities. As improvements are made to parallax data, as more Cepheids are found, and as we come to better understand the rungs of the distance ladder, it becomes harder and harder to justify blaming systematics. The resolution to this paradox may be new physics, after all. And if it is, it just might teach us something about the dark side of the Universe.”

My comment: The QUANTUM INTERACTION CHANGES EVERYTHING:

My own starting point is a revision of Quantum Mechanics: I simply assume that Newton was right (that’s supposed to be a joke, but with wisdom attached). Newton described his own theory of gravitation as absurd. (The basic equation, F = G M1 M2/d², where d is the distance, was from a French astronomer, Ismaël Boulliau, as Newton himself said. Actually this “Bullialdus” then spoiled his basically correct reasoning with a number of absurdities, which Newton corrected.)

Newton was actually scathing about his own theory. He said no one with the slightest understanding of philosophy would assume that gravitation was instantaneous.

Newton’s condemnation was resolved by Laplace, a century later. Laplace just introduced a finite speed for the propagation of the gravitational field. That implied gravitational waves, for the same reason as a whip makes waves.

We are in a similar situation now. Present Quantum Physics assumes that the Quantum Interaction (the one which carries Quantum Entanglement) is instantaneous. This is absurd for exactly the same reason Newton presented, and Laplace took seriously, for gravitation.

Suppose that the Quantum Interaction has a finite speed (it could be bigger than 10^23 c, where c is the speed of light).

This supposition implies (after a number of logical and plausible steps) both Dark Matter and Dark Energy. It is worth looking at. But let’s remember that the telescope (which could have been invented in antiquity) was invented not to prove that the Moon was not a crystal ball, but simply to make money (by distinguishing first which sort of cargo was coming back from the Indies).

We see what we want to see, because that’s what we have been taught to see; we search for what we want to search for, because that’s what we have been taught to search for. Keeping an open mind is great, but a fully open mind is a most disturbing thing…

Patrice Aymé

“Proof” That Faster Than Light Communications Are Impossible Is False

December 16, 2017

There are theories everywhere, and the more ingrained they are, the more suspiciously they should be looked at. From the basic equations of relativity it is clear that if one adds speeds less than the speed of light, one will get a speed less than the speed of light. It is also clear that adding impulse to a mass will make it more massive, while its speed will asymptotically approach that of light (and, as I explained, the reason is intuitive, from Time Dilation).
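The addition rule alluded to above can be checked directly. A minimal sketch, in units where c = 1:

```python
def add_velocities(u, v, c=1.0):
    """Einstein velocity addition: w = (u + v) / (1 + u*v/c^2).
    For |u|, |v| < c the result always stays below c."""
    return (u + v) / (1 + u * v / c**2)

w = add_velocities(0.9, 0.9)   # stack two boosts of 0.9c
print(w)                       # ~0.9945c: still below the speed of light
```

However close to c the two inputs are, the denominator grows just fast enough that the sum never reaches c, which is the algebraic content of the claim in the paragraph above.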

The subject is not all sci-fi: modern cosmology brazenly assumes that space itself, after the alleged Big Bang, expanded at a speed of at least 10^23 c (something like one hundred thousand billion billion times the speed of light c). The grossest, yet simplest, illustration is this: the observable universe is roughly 100 billion light years across, and it is ten billion years old. Thus it expanded at a minimum average clip of ten billion light years every billion years: 100c/10 = 10c, according to standard cosmology. One could furiously imagine a spaceship somehow surfing on a wave of warped space, expanding for the same obscure reason.

The question naturally arises whether velocities which are greater than that of light could ever possibly be obtained in other ways. For example, are there communication speeds faster than light? (Throwing some material across will not work: its mass will increase, while its speed stays less than c.)

Textbooks say it’s not possible. There is actually a “proof” of that alleged impossibility, dating all the way back to Einstein (1907) and Tolman (1917). The mathematics are trivial (they are reproduced in my picture below). But the interpretation is apparently less so. Wikipedia weirdly claims that faster than light communications would allow travel back in time. No. One could synchronize all clocks on all planets in the galaxy, and having faster than light communications would not change anything. Why? Time is local; faster than light data travel is nonlocal.

The problem of faster than light communications can be attacked in the following manner.

Consider two points A and B on the X axis of the system S, and suppose that some impulse originates at A, travels to B with the velocity u and at B produces some observable phenomenon, the starting of the impulse at A and the resulting phenomenon at B thus being connected by the relation of cause and effect. The time elapsing between the cause and its effect as measured in the units of system S will evidently be as follows in the calligraphy below. Then I use the usual Relativity formula (due to Lorentz) of time as it elapses in S’:

Equations help, but they are neither the beginning, nor the end of a story. Just an abstraction of it. The cult of equations is naive, interpretation is everything. The same thing, more generally, holds for models.
As Tolman put it in 1917: “Let us suppose now that there are no limits to the possible magnitude of the velocities u and V, and in particular that the causal impulse can travel from A to B with a velocity u greater than that of light. It is evident that we could then take a velocity u great enough that uV/c² will be greater than one, so that Δt would become negative. In other words, for an observer in system S’ the effect which occurs at B would precede in time its cause which originates at A.”
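Tolman’s sign flip is easy to verify numerically. Setting Δx = uΔt in the Lorentz transformation gives Δt′ = γΔt(1 − uV/c²), which goes negative exactly when uV > c². A sketch, in units where c = 1:

```python
import math

def dt_prime(dt, u, V, c=1.0):
    """Time interval between cause (at A) and effect (at B), as seen
    from a frame moving at speed V < c, for a signal of speed u.
    From the Lorentz transform with dx = u*dt."""
    gamma = 1.0 / math.sqrt(1 - (V / c) ** 2)
    return gamma * dt * (1 - u * V / c**2)

print(dt_prime(1.0, 0.5, 0.6))  # subluminal signal: effect still follows cause
print(dt_prime(1.0, 2.0, 0.6))  # u = 2c, V = 0.6c: u*V > c^2, order reversed
```

The algebra says nothing about paradox; it only says the time ordering of the two events is frame-dependent once uV exceeds c², which is the point argued in the surrounding text.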

I quote Tolman, because he is generally viewed as the one having definitively established the impossibility of faster than light communications. Tolman, though, is not so sure; in his next sentence he turns wishy-washy: “Such a condition of affairs might not be a logical impossibility; nevertheless its extraordinary nature might incline us to believe that no causal impulse can travel with a velocity greater than that of light.”

Actually it is an effect those who have seen movies running in reverse are familiar with. Causality apparently running in reverse is no more surprising than the fact that two events at x1 and x2 which are simultaneous in S are separated in S’ by a time: (x1 − x2)(V/c²)/√(1 − V²/c²). That introduces a sort of fake, or apparent, causality: sometimes this before that, sometimes that before this.

(The computation is straightforward and found in Tolman’s own textbook; it originated with Henri Poincaré.[9][10] In 1898 Poincaré argued that the postulate of light speed constancy in all directions is useful to formulate physical laws in a simple way. He also showed that the definition of simultaneity of events at different places is only a convention.[11]) Notice that, in the case of simultaneity, the signs of V and (x1-x2) matter. Basically, depending upon how V moves, light in S going to S’ takes more time to catch up with the moving frame, and the more so, the further it is: the same exact effect which explains the nil result in the Michelson-Morley interferometer. There is an underlying logic below all of this, and it’s always the same.

Tolman’s argumentation about the impossibility of faster than light communications is, in the end, purely philosophical, and fully inconsistent with the closely related, and fully mainstream, relativity of simultaneity.

Poincaré in 1900 proposed the following convention for defining clock synchronisation: two observers A and B, moving through space (which Poincaré called the aether), synchronise their clocks by means of optical signals. They believe themselves to be at rest in space (“the aether”), from not moving relative to distant galaxies or the Cosmic Radiation Background, and assume that the speed of light is constant in all directions. Therefore, they have to consider only the transmission time of the signals, and then cross their observations to examine whether their clocks are synchronous.

“Let us suppose that there are some observers placed at various points, and they synchronize their clocks using light signals. They attempt to adjust the measured transmission time of the signals, but they are not aware of their common motion, and consequently believe that the signals travel equally fast in both directions. They perform observations of crossing signals, one traveling from A to B, followed by another traveling from B to A.” 

In 1904 Poincaré illustrated the same procedure in the following way:

“Imagine two observers who wish to adjust their timepieces by optical signals; they exchange signals, but as they know that the transmission of light is not instantaneous, they are careful to cross them. When station B perceives the signal from station A, its clock should not mark the same hour as that of station A at the moment of sending the signal, but this hour augmented by a constant representing the duration of the transmission. Suppose, for example, that station A sends its signal when its clock marks the hour 0, and that station B perceives it when its clock marks the hour t. The clocks are adjusted if the slowness equal to t represents the duration of the transmission, and to verify it, station B sends in its turn a signal when its clock marks 0; then station A should perceive it when its clock marks t. The timepieces are then adjusted. And in fact they mark the same hour at the same physical instant, but on the one condition, that the two stations are fixed. Otherwise the duration of the transmission will not be the same in the two senses, since the station A, for example, moves forward to meet the optical perturbation emanating from B, whereas the station B flees before the perturbation emanating from A. The watches adjusted in that way will not mark, therefore, the true time; they will mark what may be called the local time, so that one of them will be slow of the other.”[13]
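Poincaré’s crossed-signal procedure can be simulated in a few lines. A sketch, in units where c = 1: for two stations a distance L apart, both drifting at speed v along the line joining them, half the difference of the two one-way light times is exactly the “local time” offset he describes, which reduces to vL/c² to first order.

```python
def sync_offset(L, v, c=1.0):
    """Half the difference of the two one-way light travel times between
    stations A and B, a distance L apart, both moving at speed v along AB
    (through Poincare's 'aether')."""
    t_AB = L / (c - v)   # the signal must chase a receding station B
    t_BA = L / (c + v)   # station A advances to meet the return signal
    return (t_AB - t_BA) / 2

v, L = 0.1, 1.0
offset = sync_offset(L, v)
print(offset)  # exactly v*L/(c^2 - v^2); ~ v*L/c^2 to first order in v/c
```

This is the quantitative content of the quote: clocks adjusted by crossed signals while in motion end up staggered by this offset, Lorentz’s “local time”.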

This Poincaré (“–Einstein”) synchronisation was used by telegraphers as soon as the mid-nineteenth century. It would allow covering the galaxy with synchronized clocks (although local times would differ a bit depending upon the motion of stars, and in particular where in the galactic rotation curve a star sits). Transmitting instantaneous signals in that network would not affect causality. Ludicrously, Wikipedia asserts that faster than light signals would make “Bertha” rich (!!!). That comes simply from Wikipedia getting thoroughly confused, allowing faster than light signals for some data, and not for others, thus giving an advantage to some, and not others.

***

Quantum Entanglement (QE) enables at-a-distance changes of Quantum states:

(It comes in at least three types of increasing strength.) Quantum Entanglement, as known today, goes from within a Quantum state to within a Quantum state, but we cannot control in which Quantum state the particle will be to start with, so we cannot use QE for communicating faster than light (because we don’t control what we write, so to speak: as we write with states, we send gibberish).
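The statistical point can be checked against the standard singlet-state formulas (nothing here beyond textbook QM): Bob’s marginal statistics are flat whatever measurement angle Alice chooses, which is why no readable message rides on the entanglement alone.

```python
import math

def singlet_joint(a, b, alpha, beta):
    """Quantum prediction for a spin-1/2 singlet pair: outcomes a, b in
    {+1, -1}, measured along directions at angles alpha and beta.
    P(a, b) = (1 - a*b*cos(alpha - beta)) / 4."""
    return (1 - a * b * math.cos(alpha - beta)) / 4

def bob_marginal(b, alpha, beta):
    """What Bob alone can observe: sum over Alice's outcomes."""
    return sum(singlet_joint(a, b, alpha, beta) for a in (+1, -1))

# Bob sees 50/50 whatever angle alpha Alice picks:
print(bob_marginal(+1, 0.0, 1.0), bob_marginal(+1, 2.5, 1.0))
```

The ±1 sum over Alice’s outcomes cancels the cosine term exactly, so Alice’s setting leaves no trace in Bob’s local statistics; this is the formal core of the no-signalling argument discussed (and disputed) in the surrounding text.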

This argument is formalized in a “No Faster Than Light Communication theorem”. However, IMHO, the proof contains massive loopholes (it assumes that there is no Sub Quantum Reality whatsoever, nor could there ever be one, and thus that the unlikely QM axioms are forever absolutely true, beyond all possible redshifts one could imagine, inter alia). So this is not the final story here. QE enables, surprisingly, the Quantum Radar (something I didn’t see coming). And it is not clear to me that we have absolutely no statistical control of states, thus that we can’t use what Schrödinger, building on the EPR thought experiment, called “Quantum Steering” to communicate at a distance. Quantum Radar and Quantum Steering are now enacted through real devices. They use faster-than-light effects in their inner machinery.

As the preceding showed, the supposed contradiction of faster-than-light communications with Relativity is just an urban legend. It makes the tribe of physicists more priestly, as they evoke a taboo nobody can understand, for the good reason that it makes no sense. It is intellectually comfortable, as it simplifies brainwork, as taboos always do, but it is a lie. And it is high time this civilization switched to the no-more-lies theorem, lest it finish roasted, poisoned, flooded, weaponized and demonized.

Patrice Ayme’

Technical addendum:

https://en.wikipedia.org/wiki/Relativity_of_simultaneity

As Wikipedia itself puts it, weasel-style, to try to insinuate that Einstein brought something very significant to the debate, namely the eradication of the aether (but the aether came back soon after, and there are now several “reasons” for it; the point being that, as Poincaré suspected, there is a notion of absolute rest, and now we know this for several reasons: CRB, Unruh effect, etc.):

In 1892 and 1895, Hendrik Lorentz used a mathematical method called “local time” t’ = t – v x/c2 for explaining the negative aether drift experiments.[5] However, Lorentz gave no physical explanation of this effect. This was done by Henri Poincaré who already emphasized in 1898 the conventional nature of simultaneity and who argued that it is convenient to postulate the constancy of the speed of light in all directions. However, this paper does not contain any discussion of Lorentz’s theory or the possible difference in defining simultaneity for observers in different states of motion.[6][7] This was done in 1900, when Poincaré derived local time by assuming that the speed of light is invariant within the aether. Due to the “principle of relative motion”, moving observers within the aether also assume that they are at rest and that the speed of light is constant in all directions (only to first order in v/c). Therefore, if they synchronize their clocks by using light signals, they will only consider the transit time for the signals, but not their motion in respect to the aether. So the moving clocks are not synchronous and do not indicate the “true” time. Poincaré calculated that this synchronization error corresponds to Lorentz’s local time.[8][9] In 1904, Poincaré emphasized the connection between the principle of relativity, “local time”, and light speed invariance; however, the reasoning in that paper was presented in a qualitative and conjectural manner.[10][11]

Albert Einstein used a similar method in 1905 to derive the time transformation for all orders in v/c, i.e., the complete Lorentz transformation. Poincaré obtained the full transformation earlier in 1905 but in the papers of that year he did not mention his synchronization procedure. This derivation was completely based on light speed invariance and the relativity principle, so Einstein noted that for the electrodynamics of moving bodies the aether is superfluous. Thus, the separation into “true” and “local” times of Lorentz and Poincaré vanishes – all times are equally valid and therefore the relativity of length and time is a natural consequence.[12][13][14]

… Except of course, absolute relativity of length and time is not really true: everywhere in the universe, locally at rest frames can be defined, in several manners (optical, mechanical, gravitational, and even using a variant of the Quantum Field Theory Casimir Effect). All other frames are in trouble, so absolute motion can be detected. The hope of Einstein, in devising General Relativity, was to explain inertia, but he ended up with just a modification of the 1800 CE Bullialdus-Newton-Laplace theory… (Newton knew his instantaneous gravitation made no sense, and condemned it severely, so Laplace introduced a gravitation speed, thus gravitational waves, and Poincaré made them relativistic in 1905… Einstein got the applause…)

CONTINUUM FROM DISCONTINUUM

December 1, 2017

Discontinuing The Continuum, Replacing It By Quantum Entanglement Of Granular Substrate:

Is the universe granular? Discontinuous? Is spacetime somehow emergent? I do have an integrated solution to these quandaries, using basic mass-energy physics, and quantum entanglement. (The two master ideas I use here are mine alone, and if I am right, will change physics radically in the fullness of time.)  

First let me point out that worrying about this is not just a pet lunacy of mine. Edward Witten is the only physicist to have got a top mathematics prize, and is viewed by many as the world’s top physicist (I have met with him). He gave a very interesting interview to Quanta Magazine: A Physicist’s Physicist Ponders the Nature of Reality.

“Edward Witten reflects on the meaning of dualities in physics and math, emergent space-time, and the pursuit of a complete description of nature.”

Witten ponders, I answer.

Quantum Entanglement enables building existence over extended space, with a wealth growing exponentially beyond granular space.

Witten: “I tend to assume that space-time and everything in it are in some sense emergent. By the way, you’ll certainly find that that’s what Wheeler expected in his essay.” [“Information, Physics, Quantum” is Wheeler’s 1989 essay propounding the idea that the physical universe arises from information, which he dubbed “it from bit.” He should have called it: “It from Qubit”. But the word “Qubit” didn’t exist yet; nor really the concept, as physicists had not yet realized the importance of entanglement and nonlocality in building the universe: they viewed them more as “spooky” oddities on the verge of self-contradiction…]

Edward Witten: As you’ll read, he [Wheeler] thought the continuum was wrong in both physics and math. He did not think one’s microscopic description of space-time should use a continuum of any kind — neither a continuum of space nor a continuum of time, nor even a continuum of real numbers. On the space and time, I’m sympathetic to that. On the real numbers, I’ve got to plead ignorance or agnosticism. It is something I wonder about, but I’ve tried to imagine what it could mean to not use the continuum of real numbers, and the one logician I tried discussing it with didn’t help me.”

***

Well, I spent much more time studying logic than Witten, a forlorn, despised and alienating task. (Yet, when one is driven by knowledge, nothing beats an Internet-connected cave in the desert, far from the distracting trivialities!) Studying fundamental logic, an exercise mathematicians, let alone physicists, tend to detest, brought me enlightenment, mostly because it shows how relative logic is, and how it can take thousands of years to make simple, obvious steps. How to solve this lack of logical imagination affecting the tremendous mathematician cum physicist Witten? Simple. From energy considerations, there is an event horizon to how large an expression can be written. Thus, in particular, there is a limit to the size of a number. Basically, a number can’t be larger than the universe.

https://patriceayme.wordpress.com/2011/10/10/largest-number/

This also holds for the continuum: just as numbers can’t be arbitrarily large, neither can the digital expression of a given number be arbitrarily long. In other words, irrational numbers don’t exist (I will detail in the future what is wrong with the 24 century old proof, step by step).

As the world consists of sets of entangled quantum states (also known as “qubits”), the number of states can get much larger than the world of numbers. For example, a set of 300 entangled up or down spins presents 2^300 states (much larger than the number of atoms in the observable universe, which is 100 billion light years across). Such sets (“quantum simulators”) have been basically implemented in the lab.
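The arithmetic is easy to verify. (The ~10^80 atom count is the usual order-of-magnitude estimate, assumed here rather than derived.)

```python
n_states = 2 ** 300   # distinct basis states of 300 entangled two-level systems
n_atoms = 10 ** 80    # rough standard estimate of atoms in the observable universe

print(len(str(n_states)))   # 2^300 has 91 digits: ~2.04e90
print(n_states > n_atoms)   # the entangled state space dwarfs the atom count
```

So a pocket-sized collection of 300 spins already spans a state space ten billion times larger than the count of atoms in the visible cosmos, which is the quantitative heart of the paragraph above.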

Digital computers only work with finite expressions. Thus practical, effective logic already uses only finite mathematics, and finite logic. Thus there is no difficulty in using only finite mathematics. Physically, it presents the interest of removing many infinities (although not renormalization!).

Quantum entanglement creates a much richer spacetime than the granular subjacent space. Thus an apparently continuous spacetime is emergent from granular space. Let’s go back to the example above: 300 spins, in a small space, once quantum entangled, give a much richer quantum space of 2^300 states.

Consider again a set S of 300 particles (a practical case would be 300 atoms with spins up or down). If a set of “particles” are all entangled together, I will call that an EQN (Entangled Quantum Network). Now consider an incoming wave W (typically a photonic or gravitational wave; but it could be a phonon, etc.). Classically, if the 300 particles were… classical, W would have little probability of interacting with S, because it has ONLY 300 “things”, 300 entities, to interact with. Quantum Mechanically, though, it has 2^300 “things”, all the states of the EQN, to interact with. Thus, a much higher probability of interacting. Certainly the wave W is more likely to interact with 2^300 entities than with 300, in the same space! (The classical computations can’t be made from scratch by me, or anybody else; but the classical computation, depending on the “transparency” of a film of 300 particles, would actually depend upon the Quantum computation nature makes discreetly, yet pervasively!)

EQNs make (mathematically at least) an all-pervasive “volume”-occupying wave. I wrote “volume” in quotes, because some smart asses, very long ago (nearly a century), pointed out that Quantum Waves are in “PHASE” space, thus are NOT “real” waves. Whatever that means: Quantum volumes/spaces in which Quantum Waves compute can be very complicated, beyond the electoral gerrymandering of congressional districts in the USA! In particular, they don’t have to be 3D “volumes”. That doesn’t make them less “real”. To allude to well-established mathematics: a segment is a one-dimensional volume. A space-filling curve is also a sort of volume, as is a fractal (and has a fractal dimension).

Now quantum entanglement has been demonstrated over thousands of kilometers, and mass (so to speak) quantum entanglement has been demonstrated over 500 nanometers (5,000 times the size of an atom). One has to understand that solids are held by quantum entanglement. So there is plenty enough entanglement to generate spaces of apparently continuous possibilities and even consciousness… from a fundamentally granular space.

Entanglement, or how to get continuum from discontinuum. (To sound like Wheeler.)

The preceding seems pretty obvious to me. Once those truths get around, everybody will say: ‘But of course, that’s so obvious! Didn’t Witten say that first?’

No, he didn’t.

You read it here first.

Granular space giving rise to practically continuous spacetime is an idea where deep philosophy proved vastly superior to the shortsightedness of vulgar mathematics.

Patrice Ayme’

WHY LIGHT & GRAVITATION GO AT SAME SPEED

November 2, 2017

As long as one does not have a simple explanation, and, or description, of a subject, one does not understand it fully.

The present essay presents a direct proof, found by me, from basic principles, that gravitational waves go at the speed of light.

The essay also presents the direct experimental proof of the same fact that we got a few days ago, when the explosion of a “kilonova” was observed (kilonovae are very rare, but crucial in the appearance of life as we know it; details below).

A consequence of the preceding is that the MOND theories are false. MOND was a philosophical horror, something full of ad hoc hypotheses, so I am happy it’s out the window. MOND eschewed the simplest description of gravity, the basics of which, the 1/d² law, preceded the birth of Newton himself.

***

First things first: WHY GRAVITATIONAL WAVES?

When two sources of a field of type 1/d² (such as gravitation or electromagnetism) rotate around each other, they generate waves which go to infinity (even if I don’t believe in infinity as an absolute, it works fine as a figure of speech…).

That’s simply because the field changes, as sometimes the charges are aligned, sometimes sideways. As the field changes it moves the objects it acts on. Now the point is that this disturbance of the field propagates indefinitely.
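The mechanism can be illustrated with a naive static sum of 1/d² contributions from two orbiting sources. This is a non-relativistic sketch that ignores propagation delay (the delay is precisely what the rest of the essay adds back); the point is only that the far field oscillates as the pair turns.

```python
import math

def field_at_observer(phase, r_orbit=1.0, d_obs=20.0):
    """Magnitude of the summed 1/d^2 field at a distant observer on the
    x axis, from two equal sources on opposite sides of a circular orbit."""
    sources = [(r_orbit * math.cos(phase), r_orbit * math.sin(phase)),
               (-r_orbit * math.cos(phase), -r_orbit * math.sin(phase))]
    fx = fy = 0.0
    for (x, y) in sources:
        dx, dy = d_obs - x, -y          # vector from source to observer
        r = math.hypot(dx, dy)
        fx += dx / r**3                  # 1/d^2 law times unit direction
        fy += dy / r**3
    return math.hypot(fx, fy)

vals = [field_at_observer(2 * math.pi * k / 100) for k in range(100)]
print(max(vals) - min(vals))  # nonzero: the field at the observer oscillates
```

Sometimes the pair is aligned with the observer, sometimes sideways, and the summed field strength breathes accordingly; that periodic disturbance, propagating outward, is the wave.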

At this point, a philosophical question may arise: do the disturbances of the field carry away energy? Well, in a way, it’s an idiotic question, because we know it does, that’s an experimental fact.

This experimental fact shows fields are real.

Now, let’s slow down a bit: one century of experimentation with electromagnetic fields had shown, by 1900 CE, that electromagnetic fields carried away energy.

What about gravitation? Well, theories were made in which a waving gravitational field carried away energy, such as Poincaré’s theories of gravitation, and, in particular, Einstein’s.

The experimental proof came when closely rotating stars, which should have been emitting copious amounts of gravitational field energy, were observed to lose energy just as predicted. But first the theory:

Orbiting Masses Generate Gravitational Waves (on top). If the gravitational waves lagged behind the light, many reference frames would observe non-conservation of energy after a collision event (bottom) between aforesaid masses. This is my thought experiment, and it’s also what happened 130 million years ago, in a galaxy not that far away.

***

HERE IS WHY GRAVITATIONAL WAVES GO AS FAST AS LIGHT WAVES:

Patrice Thought Experiment Demonstrating Gravitation & Electromagnetic Waves Go At the Same Speed:

So now visualize this. Say, to simplify, that two equal masses rotate around each other. Call them M1 and M2. Say M1 is matter, and M2 antimatter, each of mass m. The system M1-M2 emits more and more gravitational energy as the two masses approach each other. Finally they collide. At this point, the system M1-M2 becomes pure electromagnetic radiation, of energy E = 2mc^2.

Now what does one see at a distance?

Suppose the electromagnetic energy E, going at the speed of light c, travelled faster than the gravitational wave of energy G, travelling at speed g.

Then suppose also one is in a reference frame R travelling at uniform speed V, away from the M1-M2 collision event. As g is less than c, V can be more than g.

And then what?

The gravitational wave of energy G going at speed g, CANNOT catch up with the reference frame R.

However, before the collision, some of the energy of the system was inside G. And it’s easy to compute how much: it’s equal to the potential energy of the rotating system before the collision. In the scenario we constructed, that energy is never seen again, from the point of view of R. Let me repeat slowly: before the collision, M1 and M2 can be seen, orbiting each other. The potential energy of the system, P, can be computed using this VISUAL information (visual, hence travelling at the speed of light, c). So then the energy of the system is 2mc^2 + P.

All of P is transformed into G, the energy of the gravitational wave. If the speed g of the wave is less than the speed of light c, there are reference frames, namely those with V > g, in which P will be seen to have disappeared.
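The kinematics of the argument is elementary enough to put in a few lines (speeds in units of c; the particular numbers are illustrative choices of mine):

```python
def wave_catches_up(wave_speed, observer_speed, observer_start=1.0):
    """1800-style kinematics: a wavefront leaves the origin at t=0 with
    speed wave_speed; an observer starts at observer_start > 0 and recedes
    at observer_speed. Does the front ever reach the observer?
    Front is at wave_speed*t, observer at observer_start + observer_speed*t:
    they meet only if the front gains ground."""
    return wave_speed > observer_speed

c = 1.0    # speed of light (units of c)
g = 0.9    # hypothetical gravitational wave speed with g < c (the assumption to refute)
V = 0.95   # recession speed of frame R, chosen with g < V < c

# Light from the collision reaches R, so R can account for the energy 2mc^2...
assert wave_catches_up(c, V)
# ...but the gravitational wave carrying the potential energy P never arrives:
assert not wave_catches_up(g, V)
# From R's viewpoint, P has vanished: energy conservation fails unless g = c.
```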

Thus if the speed of gravitational waves were less than the speed of light, there would be frames in which one could observe distant events where energy would not be conserved.

Now let’s make it realistic.  The preceding situation is not just possible, but common:

***

Closely Orbiting Annihilating Stars Were Just Observed:

Instead of making the preceding M2 out of antimatter, one can simply make M1 and M2 into neutron stars. That’s exactly what happened 130 million years ago, when dinosaurs roamed the Earth: in a galaxy far away—NGC 4993, to be exact—two neutron stars spiraled into each other, by emitting gravitational radiation, and emitting more, the more tightly they spiraled (the detected waves were later converted into sound for the public). The stars then went into a frantic dance, and collided.

Had this happened inside our own Milky Way, the present gravitational wave detectors, the U.S.-built LIGO and the European-built Virgo observatories, would have detected the gravitational waves for minutes, or maybe hours. But the gravitational waves we got were diluted in energy flux by a factor of order 10^8 (flux falls as 1/d^2) relative to what they would have been if the collision had happened just 10,000 light years away, inside the Milky Way.
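The dilution is a simple inverse-square estimate. A minimal sketch, using the two distances stated above (strain, which the detectors measure, falls as 1/d; energy flux as 1/d^2):

```python
# Distances in light years: NGC 4993 vs. a hypothetical source inside the Milky Way.
d_far  = 130e6   # ~130 million light years
d_near = 10e3    # 10,000 light years

# Gravitational wave strain (amplitude) falls as 1/d,
# so the energy flux falls as 1/d^2.
amplitude_dilution = d_far / d_near          # 1.3e4 in strain
flux_dilution      = amplitude_dilution**2   # ~1.7e8 in energy flux
print(amplitude_dilution, flux_dilution)
```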

After billions of years spent slowly circling each other, in their last moments the two neutron-degenerate stars spiraled around each other thousands of times in a frenzy before finally smashing together at a significant fraction of light speed, likely creating a black hole (neutron stars are remnants of massive stars; two of those packed in a small volume make a black hole).

Such an event is called a “kilonova” (because it has the energy of 1,000 novas). Kilonovae are rare cosmic events: once every 10,000 years or so in a giant galaxy like the Milky Way. That’s because neutron stars are produced by supernovae. To boot, supernovae explode asymmetrically, giving a hefty “kick” to those remnants, strong enough to eject a neutron star entirely from its galaxy (the Crab pulsar moves at 375 km/s relative to its explosion nebula).

***

Exit MOND:

MOND, or MOdified Newtonian Dynamics, is a somewhat ridiculous class of theories invented in the last few decades to deny the existence of DARK MATTER. Instead, the MOND monkeys devised an ad hoc theory which basically claims that gravity is stronger at low accelerations (whatever), as was more or less observed (sort of) inside galaxies (it didn’t work so well, or at all, for clusters).

You see, gravitation’s basic behavior is simple. Kepler thought it was an attractive force in 1/d. However, Bullialdus suggested the law was 1/d^2, in analogy with the behavior of… light (Bullialdus didn’t understand that, in combination with Buridan’s mechanics from 1350 CE, this could explain Kepler’s laws; but Hooke, and then Newton, did).

***

The collision of the two neutron stars, and the black hole they created, also emitted electromagnetic radiation. That light comes from the fact that material falls at enormous speeds. Thus both gravitational waves and electromagnetic waves were captured from a single source. The first light from the merger was a brief, brilliant burst of gamma rays, the birth scream of the black hole. The gamma-ray flash was picked up by NASA’s Fermi Gamma-Ray Space Telescope 1.7 seconds after the arrival of the gravitational waves (dust would have delayed the light a bit at the onset, but not the gravitational waves). Hours later astronomers using ground-based telescopes detected more light from the merger, the “kilonova” produced by the expansion of debris. The kilonova faded from view over the following weeks.

As expected, astronomers saw in the aftermath various wavelengths corresponding to the many heavy elements formed instantly during the collision (it was an old prediction that merging neutron stars would form the heaviest elements, such as gold and titanium, neutron-rich metals that are not known to form in (normal) stars).

(Caveat: I hold that star theory is incomplete for super hyper giant stars with hundreds of solar masses and very reduced lifetimes; that has been one of my arguments against standard Big Bang theory.) But let’s go back to my thought experiment. What about the other aspect I envisioned, being on a frame R travelling at a very large speed? That, too, is very realistic.

***

Frames Travelling At Close To Speed Of Light Are Common:

… Not just a figment of my imagination. That’s also very realistic: as one approaches the cosmological event horizon, entire galaxies recede ever closer to the speed of light; there is the V I was talking about above.

***

Simple science is deep science

All treatises on Gravitation tend to be the same: hundreds of pages of computation, and one wrong equation could well sink the ship (Quantum Field Theory is worse, as few fundamental hypotheses therein make any sense; hence the famous prediction from QFT that the energy of the vacuum should be 10^120 times greater than observed…)

I believe instead in a modular approach: from incontrovertible facts, quick reasonings give striking conclusions. This makes science logically compartmentalized, thus avoiding having any subfield of science follow the Titanic down the abyss from a single breach. It also makes science easier to teach, and even easier to think about. For example, the reality of Quantum Waves comes not just from all sorts of variants of the 2-slit experiment, but also from the Casimir Effect, direct evidence for the reality of Quantum waves in empty space, which is observed so reliably that it has to be taken into account in the engineering of any nano-machinery (I also suggested a device to extract energy from this “vacuum”).

***

Conclusion: Just from the necessity of apparent conservation of energy in all inertial frames, rather simple physics shows that the speed of gravitational waves has to be exactly the speed of light. No need for hundreds of pages of obscure computations and devious logic. No need even for Relativity, just basic kinematics from 1800 CE.

Patrice Ayme’

LEARN TO LEARN: Henri Poincaré, Not Einstein, Discovered Gravitational Waves, 111 Years Ago

October 3, 2017

Physics Nobel Committee Should Learn Physics! And the notion of truth!

The truth shall not just make us free, but also safe, and moral. Teaching thinking is to teach truth and how to get to it. One should start by not deliberately lying. And understanding when it is that humanity started to understand something.

Intellectuals should revere the truth. If Satan speaks the truth, intellectuals should quote him approvingly. Why? Because ethics is truth! The Nobel in Physics was given to screwdriver turners “for decisive contributions to the LIGO detector and the observation of gravitational waves”.

However, the rest of the press release from the Nobel committee on physics is a lie: it attributes the original idea of gravitational waves to a German. Surely the physicists who sit on the Nobel Committee are knowledgeable enough to know this is a lie. That sort of lie may sound innocuous; it’s not: it’s anti-scientific, and proto-Nazi. It teaches the youth wrong. It teaches present-day Nazis wrong.

The generation of waves by a central source field is easy to understand in primary school.

It’s because of these sorts of nationalistic distortions that Germans, a century ago, got so full of hubris that they went mad: everybody told them they invented everything! Everybody told Germans they were the superior race! And Max Planck was one of the prophets of this German superiority! And the hated French were nothing, because that “inferior race” had invented nothing! Thus, naturally enough, since they were told from everywhere that they were so smart, the Germans decided to subjugate the rest of humanity, be it only to enlighten it (that was the idea of Keynes in “The Economic Consequences of the Peace”).

Actually, it’s not a German who discovered, and named, “Relativity”, but a Frenchman.    

In press releases announcing the detection of gravitational waves, the collaborations LIGO and VIRGO, as well as the Centre National de la Recherche Scientifique (CNRS, France), explicitly (and WRONGLY) attributed to the German Albert Einstein the original prediction of the existence of gravitational waves in 1916. A similar comment is made in the Physical Review Letters article by LIGO and VIRGO.

But actually, gravitational waves traveling at the speed of light, were clearly predicted by Henri Poincaré on June 5, 1905, as a relativistic requirement. Poincaré made this requirement explicit in his academic note Sur la dynamique de l’électron (On electron dynamics, June 5, 1905) published by the French Académie des Sciences.

At the time, Poincaré was already world famous, and Einstein, nothing. Planck, a German nationalist, would make Einstein everything by allowing Einstein to publish articles without any reference to the preceding work he knew about, and parroted. This was sheer propaganda.

After explicitly formulating special relativity in this fundamental article, Poincaré further develops the requirement suggested by Hendrik Antoon Lorentz that the new space-time transformation leading to special relativity should apply to all existing forces, and not just to the electromagnetic interaction. (At the insistence of Poincaré, Lorentz got the Nobel in 1902.)

Henri Poincaré concludes that, as a consequence of the new space-time geometry, gravitation must generate waves traveling at the speed of light in a similar way to electromagnetism.

Following the pre-Nazi German nationalistic propaganda contained in the press releases of scientific collaborations and institutions, almost all media attribute to Albert Einstein the original prediction of gravitational waves.

The Physical Review Letters article by LIGO and VIRGO, “Observation of Gravitational Waves from a Binary Black Hole Merger”, PRL 116, 061102 (11 February 2016), https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.116.061102 , explicitly states: “In 1916, the year after the final formulation of the field equations of general relativity, Albert Einstein predicted the existence of gravitational waves”. What, then, about the work done by Henri Poincaré 11 years before the Einstein finding?

Actually, the situation seems quite clear. In his short article of 5 June 1905 Sur la dynamique de l’électron, C.R. T.140 (1905) 1504-1508 (Comptes Rendus de l’Académie des Sciences, France), http://www.academie-sciences.fr/pdf/dossiers/Poincare/Poincare_pdf/Poincare_CR1905.pdf , the French mathematician and physicist Henri Poincaré explicitly formulated special relativity upgrading the space-time transformations that he called “Lorentz transformations” and to which he referred as the “Lorentz group”. After having worked out and discussed the new space-time geometry, Poincaré writes:

… But that is not all: Lorentz, in the cited work, judged it necessary to complete his hypothesis by supposing that all forces, whatever their origin, are affected by a translation [a change of inertial frame in Poincaré’s language] in the same way as the electromagnetic forces, and that, consequently, the effect produced on their components by the Lorentz transformation is still defined by equations (4).

It was important to examine this hypothesis more closely, and in particular to investigate what modifications it would oblige us to bring to the laws of gravitation [HOW TO MODIFY GRAVITATION]. This is what I sought to determine; I was first led to suppose that the propagation of gravitation is not instantaneous, but happens at the speed of light. (…)

So when we speak of the position or velocity of the attracting body, it will be that position or velocity at the instant when the gravific wave [GRAVITATIONAL WAVE] left that body; when we speak of the position or velocity of the attracted body, it will be that position or velocity at the instant when this attracted body was reached by the gravific wave emanating from the other body; it is clear that the first instant precedes the second… [End of quote; translation mine]

Gravitational waves were thus explicitly predicted by Henri Poincaré in his 5 June 1905 article formulating special relativity. All of these ideas got incorporated in the gravitational wave equation of Einstein (who worked closely, day by day, with a number of top mathematicians at the time, including crack mathematician David Hilbert, who found a different approach).

In special relativity, as already defined explicitly, with all its equations, by Poincaré and Lorentz, the speed of light c is not just the speed of a specific object (light) but a universal constant defining (local) space-time geometry. As a consequence, no physical object, signal, or correlation can travel faster than c. Poincaré explained in extreme detail the philosophy behind it (if something is always true, it’s a law of nature), in a book which Einstein and his student friends studied thoroughly (although Einstein didn’t quote Poincaré in his famous 1905 parrot work, naturally enough for a nationalistic parrot; later Einstein would have a falling-out with another French Nobel laureate, Bergson, about Relativity).

According to Poincaré in his article of 5 June 1905, the requirement of a universal space-time geometry with the speed of light c as the critical speed implies that the gravitational force must be propagated by gravitational waves with a speed equal to c , just as electromagnetic waves carry the electromagnetic interaction.

As Henri Poincaré explicitly underlines, the space-time geometry defined by Lorentz transformations applies to all existing forces, including the gravitational ones. Thus, gravitation cannot propagate instantaneously and must instead propagate at the speed of light. The same argument clearly applies to any object associated to gravitation.

Considering as a simple example the gravitational interaction between two bodies, Poincaré introduces a “gravific wave” leaving the first body, traveling at the speed of light and reaching the second body at a later time. This was the original formulation of the prediction of gravitational waves in a context where its general scope was obvious. Poincaré had been working for years on electromagnetism, and knew perfectly well that more sophisticated scenarios than the example he was providing could be imagined without altering the role of c as the critical speed.

A decade later, with general relativity, Albert Einstein considered in detail more involved scenarios than the one made explicit by Poincaré, incorporating in particular an effective space-time curvature generated by gravitation in a static universe. But this does not invalidate the basic principle discovered and formulated by Henri Poincaré in 1905.

In his article, Poincaré also refers to the previous work by Pierre-Simon de Laplace, Count of Laplace (1749-1827), one of the main French scientists of the period of Napoléon Bonaparte. Laplace had already considered the possibility that gravitation propagates at some finite speed, but he did not question the basic space-time geometry.

Poincaré had demonstrated and published E = mc^2… in 1900, more than 5 years before Einstein plagiarized it.

I have talked about this for years. I am happy that Science 2.0 picked up the notion in “Henri Poincaré Predicted The Existence Of Gravitational Waves As Early As June 5, 1905”

Correct attribution of civilization defining discoveries is fundamental. Example: India discovered numbers & zero as used today.

The chronological hierarchy of discoveries reflects, in general, the logical hierarchy of evidence supporting these discoveries, whether in science or in global thinking. Thus who discovered what, when, how and why is not just anecdotal. It’s logical, according to the most natural logic.

As it turns out, few places in spacetime made most civilization-defining discoveries, and then they made plenty of them, and that was related to political processes: a few Greek city-states (especially the Ionian cities and Athens), and Paris and its satellites, are obvious examples.

One can learn to learn better, one can learn to think better: this is what the existence of concentrations of civilizational genesis shows.

It’s crucially important to understand what made these places tick, and how, with the aim of reproducing such circumstances. Paris was the pioneering place in science, worldwide, for around a millennium, and this was the core mental skeleton of Europe, and even civilization. Buridan discovered in particular inertia, and thus the heliocentric system (attributed to Copernicus, well after the Catholic Church made studying Buridan into a capital crime!); Lamarck discovered evolution (taught in Paris while forbidden in England); etc. The same crowd probably wants us to believe in Donald Trump and Neo Liberalism, as no good idea could possibly come from anywhere not Germanoido-Anglo-Saxon. The Nobel Committee is dominated by US physicists anxious to demonstrate US superiority and, in particular, the superiority of US universities, because there is beaucoup money in it, and it could please their sponsors (the tax-free plutocrats).

It’s also important to make correct attributions, because the original authors are always clearer about their reasonings, and how they got there. Plagiarists tend to be more obscure, because they hide their tracks.

Re-attributing the correct discoveries can be shattering, and teaches us how obscurantism proceeds to eradicate knowledge. The disappearance, for two millennia, of non-Euclidean geometry is a case in point. So is that of atomism, and of “Brownian” motion. The suppression of Buridan and the heliocentric system by the Christian church is a particularly sinister instance: it was vicious, deliberate, and motivated by hatred for thinking.

So let’s celebrate the discovery of gravitational waves. My little drawing above shows that one does not need even relativity to make waves. A big motion of the source will do, as anybody watching a tsunami on TV knows.

The gravitational wave detectors inaugurate a new sort of measuring instrument. However, the idea is at least as old as the Michelson and Morley interferometer of the Nineteenth Century. There is nothing new to it. (That’s why I called the laureates “screwdriver turners”.)

And what of Planck, Einstein’s unhinged sponsor? Planck signed a disgusting message in World War One denying Germany had committed war crimes (he later denounced it, when the war was over). The French made one of Planck’s sons prisoner in World War One, and the other son was caged and executed by Hitler. That Hitler interlocutor, Max Planck, got his just deserts, unfortunately not just for him, but for all humanity. But let’s not keep on having them now. Want Relativity? Think Henri Poincaré, forget about his parrots!

Planck enabled Einstein to publish in the Annalen der Physik, the oldest journal in physics (1799), WITHOUT any reference, on the three most famous subjects in physics at the time. It was vicious and deliberate, to serve the satanic god of hyper-nationalism of the racist type. Playing with hyper-nationalism, Planck ended up losing, and Einstein, and the German Jews, became double losers (they lost as Germans and as Jews). So here is a case of the losers writing history… German hyper-nationalism was encouraged by Einstein and Planck, with a false-flag attribution, and they, and their kind, lost twice.

Truth is not seen just with the eyes. Truth is seen through the mind of a thorough debate.

Patrice Ayme’

 

SUB-QUANTUM GRAVITATIONAL COLLAPSE 2 SLIT Thought Experiment

September 23, 2017

A Proposed Lab SUB QUANTUM TEST: SQPR, Patrice Aymé Contra Albert Einstein: GRAVITATIONALLY DETECTING QUANTUM COLLAPSE! 

Einstein claimed that a “particle” was a lump of energy, even while in translation. He had no proof of this assertion, it underlies all modern fundamental physics, and I believe it’s false. As I see it, this error, duplicated by 99.99% of Twentieth Century theoretical physicists, led the search for the foundations of physics astray in the Twentieth Century. How could one prove my idea, and disprove Einstein?

What Einstein wrote is this, in what is perhaps his most famous work (1905 CE): “Energy, during the propagation of a ray of light, is not continuously distributed over steadily increasing spaces, but it consists of a finite number of energy quanta LOCALIZED AT POINTS IN SPACE, MOVING WITHOUT DIVIDING…” [What’s in capital letters, I view as extremely probably false. Einstein then added nine words, four of them explaining the photoelectric effect, for which he got the Nobel Prize. Those nine words were entirely correct, but physically independent of the preceding quote!]

If those “energy quanta” are “localized at points in space“, they concentrate onto themselves all the mass-energy.

It’s simple. According to me, the particle disperses while it is in translation (roughly following, and becoming a nonlinear variant of, its De Broglie/Matter Wave dispersion, the bedrock of Quantum Physics as everybody knows it). That means its mass-energy disperses. According to Einstein, it doesn’t.

However, a gravitational field can be measured. In my theory, SQPR, the matter waves are real. What can “real” mean, in its simplest imaginable form? Something is real if that something has mass-energy-momentum. So one can then do a thought experiment. Take the traditional Double Slit experiment, and install a gravitational needle (two masses linked by a rigid rod, like a hydrogen molecule at absolute zero) in the middle of the usual interference screen.

Sub Quantum Patrice Reality Is Experimentally Discernible From Einstein’s Version of Quantum Physics! Notice in passing that none of the physics super minds of the Twentieth Century seem to have noticed Einstein’s Axiom, which is ubiquitously used all over Quantum Physics and QFT!

According to Einstein, the gravitational needle will move before the process of interference is finished and the self-interfering particle hits the screen (some may object that, because photons travel at c, and so do gravitons, one can’t really gravitationally point at the photon; however, that’s not correct: there should be a delayed field moving the needle).

According to me, the particle is dispersed during the self-interfering process: it’s nowhere in particular. Thus the mass-energy is dispersed before the collapse/singularization. Thus a gravitational field from the self-interfering particle can’t be measured from inside the self-interfering geometry.
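The two predictions differ in a way a toy Newtonian calculation can illustrate. Below is a minimal sketch in toy units (the geometry, masses and positions are my illustrative choices, not part of the proposed experiment): a localized lump sitting off the needle’s axis of symmetry exerts a net transverse pull, while the same mass-energy dispersed symmetrically about that axis exerts none.

```python
import math

G, m = 1.0, 1.0   # toy units

def transverse_pull(sources):
    """Net transverse (y) Newtonian 1/d^2 pull at the origin (the needle)
    from point sources given as [(mass, x, y), ...]."""
    fy = 0.0
    for mass, x, y in sources:
        r = math.hypot(x, y)
        fy += G * mass * y / r**3
    return fy

# Einstein's picture: the whole mass-energy m is a lump at one point,
# e.g. off-axis at (2.0, 0.5) -> the needle feels a net transverse pull.
lump = [(m, 2.0, 0.5)]

# The dispersed picture (as argued here): during self-interference the
# mass-energy is spread symmetrically about the axis -> pulls cancel.
dispersed = [(m / 8, 2.0, y) for y in (0.5, -0.5, 1.0, -1.0, 1.5, -1.5, 2.0, -2.0)]

print(transverse_pull(lump))       # nonzero
print(transverse_pull(dispersed))  # zero by symmetry
```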

Could the experiment be done?

Yes. But it won’t be easy.

Molecules consisting of 5000 protons, 5000 neutrons and 5000 electrons have exhibited double-slit behavior. That’s plenty enough mass to turn a gravitational needle made of two hydrogen atoms. However, with such a large object, my theory may well fail to be experimentally checked (the molecule probably re-localizes continually, thus the needle will move before impact). Ideally, one should best check this Sub Quantum Reality with a simple unique particle, such as a photon, or an electron.
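For scale, here is a hedged estimate of the De Broglie wavelength of such a molecule (the beam speed of 100 m/s is my illustrative assumption, of the order used in molecule interferometry, not a figure from the experiments themselves):

```python
h = 6.626e-34     # Planck's constant (J s)
amu = 1.661e-27   # atomic mass unit (kg)

# ~5000 protons + 5000 neutrons; the electron masses are negligible.
mass = 10_000 * amu
v = 100.0         # m/s, assumed beam speed

wavelength = h / (mass * v)   # De Broglie: lambda = h / p
print(wavelength)             # ~4e-13 m: hundreds of femtometers, hence hard interferometry
```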

Why did I long believe Einstein was wrong on this point, what I called “Einstein’s Axiom” above?

First, he had no proof of what he said. Allure can’t replace reason.

Second, localization into a point is contrary to the philosophical spirit, so to speak, of Quantum Physics. The basic idea of Quantum Physics is that one can’t localize physics into points in space… or into points in energy (this was Planck’s gist). Both space and energy come in LUMPS. For example, an electron delocalizes around a proton, creating an atom of hydrogen.

The lump thing for emissions of energy is Planck’s great discovery (a blackbody sends energy packets hf, where f is the frequency and h, Planck’s constant). The non-relevance of points is De Broglie’s great intuition: De Broglie introduced the axiom that one can compute everything about the translation behavior of an object from the waves associated to the energy-momentum of said object.
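Planck’s packet is a one-line formula; a minimal numeric illustration (the frequency chosen, that of green light, is just an example):

```python
h = 6.626e-34   # Planck's constant (J s)

def quantum_energy(f):
    """Planck's lump: a blackbody emits energy only in packets E = h*f."""
    return h * f

# Green light, f ~ 5.6e14 Hz: each packet carries ~3.7e-19 J (~2.3 eV).
E = quantum_energy(5.6e14)
print(E, "J =", E / 1.602e-19, "eV")
```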

So Einstein was wrong on the philosophy, as he himself concluded after thirty years of thinking hard about Quantum Physics, as one of its two founders, with his discovery of what he called “spooky interaction at a distance” (the “EPR”, which has turned from thought experiment to real experiment, checked by now in hundreds of different experiments). If “elements of reality” (to use the Einstein EPR language) are “spooky action at a distance”, why not so when the particle is in flight, which is precisely the gist of the EPR… (After I thought of this, I found a paper by Zurek et al. who seem to draw a similar conclusion.)

The philosophy of Quantum Physics in one sentence: small is big, or even, everywhere.

Third, Einstein’s hypothesis of points particles being always localized has led to lots of problems, including the so-called “Multiverse” or the “Many Worlds Interpretation of Quantum Mechanics” (at least, according to yours truly…).

Fourth, the development of Twentieth Century physics according to Einstein’s roadmap, has led to theories on 5% or so of known mass-energy, at most: an epic failure. Whereas my own Sub Quantum Reality readily predicts the apparition of Dark Matter and the joint apparition of Dark Energy, as observed.

Fifth: If Einstein were right, the which-path information in the 2-slit experiment would be readily available, at least as a thought experiment, and that can’t work. The entire subject is still highly controversial: contemplate the massive paper in the Proceedings of the National Academy of Sciences, “Finally making sense of the double-slit experiment”, March 20, 2017, whose lead author is Yakir Aharonov, from the extremely famous and important Aharonov-Bohm effect. The Aharonov-Bohm effect pointed out that the potentials, not the fields themselves, were the crucial inputs of Quantum Physics. That should have been obvious to all and any who studied Quantum Physics. Yet it was overlooked by all the super minds for nearly 40 years!

Sixth: This is technical, so I won’t give the details (which are not deep). One can modify Einstein’s original EPR experiment (which had to do with pairs of particles in general, not just photon polarization à la Bohm-Bell). One can introduce, in the EPR 1935 set-up, an ideal gravity detector. If Einstein were right about the particle being always localized, determinism would always hold for particle A of an {A,B} interaction pair. Thus particle A could be tracked, gravitationally, always. But that would grossly violate the free will of a lab experimenter deciding to tinker with B’s path, through an experiment of her choosing. (How do large particles do it, then? Well, they tend to partly localize continually, thanks to their own size and random singularizations.)

The naked truth can be in full view, yet, precisely because it’s naked, nobody dares to see it!

Richard Feynman famously said that the double-slit experiment was central to physics, and that no one understood it. He considered it carefully. Gravitation should stand under it, though! The preceding experiment was obvious to propose. Yet no one proposed it, because they just couldn’t seriously envision Quantum Collapse, and thus its impact on gravitation. Yet, I do! And therein lies the connection between Quantum Physics and Gravitation, the quest for the Grail of modern physicists…

So let’s have an experiment, Mr. Einstein!

Patrice Ayme’