Archive for the ‘Non Locality’ Category

ENTANGLEMENT, FASTER THAN LIGHT COMMUNICATIONS, CAUSALITY, etc.

January 25, 2023

Abstract: Faster Than Light Particle Transfer? Not Possible According To Special Relativity. But Faster Than Light Communications? Some Day, Probably. Using Quantum Entanglement… Not Particle Transport.

Tyranosopher: Folklore, based on a vague reasoning of Einstein’s, says Faster Than Light Communications are impossible (a variant supposedly breaks the universe… see below). Having read Einstein carefully, yours truly determined that Einstein’s reasoning was flimsy (Albert himself hints at that in his original paper).

Most of Special Relativity stays intact if Faster Than Light Communications (FTLC) are possible. ALL the equations, and thus the verifying experiments, of Special Relativity stay intact. (See answers to objections towards the end.)

Simplicia: Many people will write you off because you wrote off Einstein. They won’t read any further.

Tyranosopher: OK, I will detail in another essay, in great detail, my objections to the packaging of Special Relativity which forbids FTLC. Below is just a sketch.

Now about Einstein: he is not God. Actually, there is no God. When I was young and naive, I approved of (all of) Einstein’s critiques of Quantum theory, a theory to which he crucially contributed as number two, Planck being number one. Planck said emission of radiation was in grains, quanta, and explained two facts this way. Einstein explained that supposing absorption of radiation also came in quanta explained the photoelectric effect. Planck condemned the latter, but Einstein was right. Then other physicists contributed. The next huge conceptual breakthrough was De Broglie’s matter waves. Then CIQ (Copenhagen Interpretation of the Quantum) arose, with correct physics, admirable math, but a sick, un-realistic metaphysics. De Broglie objected and rolled out a realistic model of fundamental physics. Einstein seconded De Broglie, but they were overwhelmed by the launch of QED by Dirac. Then came all sorts of strange and marvellous high energy zoo particles, then QFT, etc.

Nevertheless, after exchanges with Karl Popper, Einstein wrote the EPR paper on nonlocality, in 1935… EPR criticized Quantum Physics and its nonlocality from the “realistic” point of view. I am also all for Sub Quantum Physical Reality (SQPR), but I have an axiom neither De Broglie nor Einstein had. Science progresses one axiom at a time…

However, as the decades passed, and I deepened my understanding, I realized that Einstein’s admirable work was not as revolutionary and crazy as needed. 

Simplicia: The funny thing is that Einstein discovered nonlocality in the 1935 EPR paper. Which is one of the top ten papers in theoretical physics, and very hot today, as Quantum Computers use nonlocality.

Tyranosopher: Einstein was honest enough to not throw nonlocality out of the window. Maybe his conversation with the philosopher Karl Popper helped: Popper did contribute to the discovery of nonlocality. Einstein called nonlocality “spooky action at a distance”.

Simplicia: Now nonlocality is a proven experimental fact.

Tyranosopher: Yes, the “SPOOKY ACTION AT A DISTANCE”, which initially was a purely theoretical fact coming out of the axiomatics of Quantum Theory, has been PROVEN over distances of many kilometers. One has to know the crucial difference between QUANTUM SPIN and classical spin to see nonlocality clearly.

Chinese scientists have measured a minimum speed for this “spooky action at a distance”. I call it the QUANTUM INTERACTION, and assign to it a finite speed, TAU. This supplementary axiom contradicts Quantum Theory.

Instead, classical Twentieth Century Quantum Physics says that Quantum Entanglement proceeds at infinite speed.

So this supplementary axiom of propagating finite speed nonlocality should be experimentally testable. I claim the proof of a finite speed for the Quantum Interaction is all around us: Dark Matter and Dark Energy are the results of this experiment, conducted for free by the universe itself. Amen.

Simplicia: What do you mean that nonlocality has been proven? Your friend Ian Miller, who is a physical chemist, denies a proof was achieved.

***

Tyranosopher: I admire Ian, he is a renaissance man, but I don’t understand his arguments in this particular case. There are countless variants and proofs under the label “Bell’s theorem”, in a jungle of tweaked axiomatics. Ian uses the classical Noether’s theorem… which doesn’t apply to Quantum situations. For once I will use an authority argument. The Nobel Prize was given for nonlocality in 2022; it should have been given to Alain Aspect at least two decades earlier. That could have helped physics.

To understand the simplest quantifiable proof of nonlocality one has to know about Quantum Spin and what has been experimentally discovered. Quantum Spin does NOT behave like Classical Spin. Classical Spin can be measured in all directions simultaneously, but Quantum Spin can be measured in only one direction at a time, and that measurement erases the preceding one.
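Since the original contains no formulas, here is a minimal numpy sketch (an illustration of the standard spin-1/2 formalism, not material from the original) of why two spin directions cannot be measured simultaneously: the corresponding operators do not commute.

```python
import numpy as np

# Pauli matrices: spin along z and along x (in units of hbar/2)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

# The commutator [sz, sx] is NOT zero: z-spin and x-spin cannot both
# be sharp at once, so measuring one scrambles the other.
commutator = sz @ sx - sx @ sz
print(commutator)  # equals 2i * sigma_y, not the zero matrix
```

A nonzero commutator is exactly the statement that a z-measurement erases an x-measurement, and vice versa.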

https://patriceayme.wordpress.com/2022/11/10/proof-of-no-local-hidden-variables-from-magnets/

Building on Einstein’s 1935 EPR, the simplest Quantum Entanglement which can be studied over a distance was elaborated by David Bohm in the 1950s, and then studied in detail by a very small group of physicists, including the CERN theoretical high energy physicist John Bell, in the 1960s, to produce an experimentally testable inequality… work rewarded with the 2022 Physics Nobel.

Simplicia: OK, many people have thought this instantaneous nonlocality could be used for Faster Than Light, FTL.

Tyranosopher:  Maybe. But one has to distinguish FTL and FTL Communication. FTL for massive objects is impossible, except by transporting a space bubble, which is pure science fiction of the extravagant type.

However if SQPR is correct and TAU is finite, one should be able, theoretically speaking, to create energy imbalances at a distance, after an elaborate technological setup, and thus create FTLC channels. 

***

QUANTUM ENTANGLEMENT SEEMS TO PRODUCE FTLC: 

Suppose we produce a state of total spin zero shared by two particles. (Particle streams, in practice.)

We keep one going in circles around Earth, and send the other to Proxima Centauri, 4 lightyears away.

Now say that, after 4 years, we measure the spin in the z direction in the Earth neighborhood, and we find |+>. Then we know that the other particle has spin |-> at Proxima.

So our measurement at Earth created a spin down at Proxima… Instantaneously

Now, with particle streams and synchronized clocks one could easily transform this into an FTL Morse code….

Except for one Quantum difficulty: we do not know how to get a |+> state to start with. We have the same probability of creating a |-> state… We can’t make a stream of |+> states to start with, so we can’t type our FTL Morse code to start with! It’s as if we told a cosmic monkey in another room to type, but he can’t select letters.
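A toy simulation makes the obstruction concrete (a sketch: the anti-correlation itself is quantum, but the bookkeeping here is classical). Each Earth outcome is perfectly anti-correlated with its Proxima partner, yet the Earth-side sequence is unbiased noise that nobody chose, so no Morse code rides on it.

```python
import random

random.seed(0)  # reproducible illustration

def measure_singlet_pair():
    """One z-measurement on each member of a spin-zero pair.
    The Earth outcome is random (we cannot choose it);
    the Proxima outcome is guaranteed to be the opposite."""
    earth = random.choice([+1, -1])
    return earth, -earth

trials = [measure_singlet_pair() for _ in range(10_000)]
assert all(e == -p for e, p in trials)          # perfect anti-correlation
frac_up = sum(e == +1 for e, _ in trials) / len(trials)
print(f"Earth got |+> in {frac_up:.1%} of trials")  # ~50%: pure noise, no message
```

The correlations are there, but without the ability to *prepare* or *select* the Earth-side state, nothing controllable is transmitted.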

***

Hence the impossibility of Faster Than Light Communications rests only upon claiming to know something we know nothing about: can one NEVER EVER prepare, and/or NEVER EVER select, Quantum states before measuring them? In other words, do Quantum States have tails?

There is a so-called “No Cloning” [of states] theorem… But the “proof” has a loophole (it depends upon assuming a unitary operator, thus denying there are quantum tails, which is exactly what it wants to prove). In truth, it’s an experimental problem: if what the prestigious French physicist Devoret at Yale and his collaborators claim is true, it has been possible to prepare some (contrived) Quantum states… but, SO FAR, it has not been possible to prepare Quantum states which happen to be ENTANGLED.

***

When some physicists pretend Faster Than Light Communications are impossible, they pontificate, because, in truth, we don’t know. And science doesn’t progress one pontifex at a time, but one correct intuition at a time. The intuitive case for FTLC is growing as the Quantum amazes us ever more.

***

What we know is that something we thought to be completely impossible, SWAPPING QUANTUM ENTANGLEMENT, is not only possible, but now so amply demonstrated that it is central to various developing Quantum technologies.

***

SQPR assumes particles have complex structures, a linear part (the guiding wave) and a nonlinear part (the “particle”), the entire structure being unstable and prone to contracting at TAU, the collapse and entanglement speed. 

***

However, Quantum Swapping shows that, somehow, one can have Quantum Interactions without collapse, namely the propagation of QE.

***

Thus it is starting to smell as if one could interact with a particle’s extended presence without inducing collapse, and then select the type we like…

***

Simplicia: Hence FTLC should be possible?

Tyranosopher: FTLC through Quantum Entanglement would not contradict Relativity, because it would change nothing for light clocks, nor for the equation Force = d(mv/√(1-vv/cc))/dt. There would be no mass transport.

***

It all smells as if FTLC will become possible. That does not mean that Faster Than Light matter transport should be possible. The latter is impossible without warp drives.

Simplicia: Wait, don’t go. It is well known that FTL Communication leads to the breakdown of causality, and thus, sheer madness. Consider the excellent video:

https://www.youtube.com/watch?v=an0M-wcHw5A

Tyranosopher: Yes, beautiful video. Minkowski spacetime diagrams. Einstein didn’t like them; he liked neither Minkowski nor “spacetime”. It was reciprocal: Minkowski, who was Einstein’s mathematics professor at Zurich Polytechnic, ETH, called Albert a “lazy dog” and made sure he couldn’t get an academic appointment. Instead a friend got Einstein a job at the Patent Office in Bern.

Simplicia: Can we get to the point? You don’t like spacetime as a concept, so what?

Tyranosopher: Notice that they draw these spacetime diagrams all over the galaxy’s real space, in various places, and then they draw a contradiction. 

Simplicia: Yes, so what?

Tyranosopher: Relativity was invented by Henri Poincaré to describe local effects. Basically, local speed makes the local time of the speeding object run slow. A fast-traveling light clock runs slow when its light bounces along the direction of motion, at that speed. From there, after quite a bit of half-hidden logic, plus Michelson-Morley type experiments, which showed the undetectability of speed from within a ship’s cabin without looking outside (the original Galileo imagery), one deduced that length also contracted, and so did the local time of the moving device.

Simplicia: Thanks for the two sentences recap of Relativity.

Tyranosopher: The slowing down of local time was amply confirmed with fast particles like muons, and, in a slightly different context, GPS computations crucially depend upon the time dilation of the orbiting satellites’ clocks.
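The muon case fits in a few lines (textbook values; the speed is illustrative). Without dilation, a muon created in the upper atmosphere, roughly 15 km up, would almost never reach the ground; with dilation, many do.

```python
import math

c = 299_792_458.0
tau0 = 2.197e-6                  # muon proper (rest-frame) lifetime, seconds

v = 0.9994 * c                   # illustrative cosmic-ray muon speed
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
decay_length_naive = v * tau0    # without time dilation: ~660 m
decay_length = gamma * v * tau0  # with time dilation: ~19 km

print(f"gamma = {gamma:.1f}")
print(f"mean decay length: {decay_length_naive:.0f} m naive, "
      f"{decay_length / 1000:.1f} km dilated")
```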

Simplicia: And then? Why are spacetime diagrams bad?

Tyranosopher: Spacetime diagrams are tangent space objects. They are, at best, local approximations. Extending a spacetime diagram to Vega has degraded meaning. Einstein knew this; he mentioned somewhere that General Relativity violates the constancy of the speed of light. And that’s fairly obvious, as light can be put in orbit around a black hole. Now the silly ones cry that time would be in orbit around said black hole, and bite its own tail, etc. Grandchildren would kill all their grandparents, etc. Silly stuff: they confuse local and global, although that distinction is the bedrock of differential geometry. Differential geometry is locally flat (aka “Euclidean”) and globally curved (or even twisted). But this is not even the worst…

Simplicia: How come this is all not well-known?

T: Long ago I gave a seminar along these lines at Stanford. Many of the best and brightest were in attendance, Hawking, Penrose, Yau, Susskind, etc., and they were not too happy with what I said. But my point about General Relativity making no sense without the Quantum is viewed as trivially obvious nowadays.

Simplicia: So you are saying one can’t just rotate the spacetime axes of a moving spaceship and make deductions?

T: One can make deductions, but one can’t make deductions where the local time of a moving ship becomes global time, as in the video I linked above. Earth can synchronize time with Vega; Henri Poincaré described how that can be done. But one can’t synchronize time with a moving spaceship (as those who claim to have demonstrated that FTLC breaks causality do).

If one sends an FTL message to a moving spaceship, it does not get it in our past. It gets it in our future. Our past and our future are local… to us, and… Vega, if we synchronized time with Vega. A really silly mistake. 

Simplicia: Please stop insulting fellow intellectuals, or they are not going to be fellows anymore. And why did you link to a false video?

Tyranosopher: Right, let me rephrase this: it has been known since the onset of Relativity that, at speed, simultaneity is violated. So cause and effect can look inverted in a moving ship relative to what they are in a co-moving frame. That’s basic. The video misses the point, although it looks so reasonable, with great graphics.

Therefore, in the Special Theory of Relativity, causality can only be established and defined in the co-moving frame. (The same holds for mass, be it said in passing. Even the otherwise excellent Richard Feynman makes that mistake in his lectures. The video I linked above makes that mistake.)

So claiming Faster Than Light Communications violates causality is erroneous! 

***

Simplicia: If and when do you think we can realize FTLC?

Tyranosopher: We are tantalizingly close. Some physicists (Devoret) adorned with prizes, glory and long careers claim that they can detect the preparation of a Quantum jump, and even that they can revert it. If that’s true, and we can apply that kind of selection to Quantum Spin, FTLC could be installed with Mars before humanity lands on the planet.

Simplicia: Are you serious?

Tyranosopher: Absolutely. 

Patrice Ayme  

Proof Of NO LOCAL Hidden Variables From Magnets

November 10, 2022

Abstract: Looked at the right way, the Stern-Gerlach experiment, with three consecutive magnets oriented various ways, shows that there can’t be LOCAL hidden variables. To exhibit nonlocality, there is no need for the precise but obscure logic of the Bell Inequality. The argument here is less mathematically precise, but more intuitive.

***

Stern-Gerlach Magnets (SGM) reveal an aspect of the Quantum, namely the quantization of (some sort of) angular momentum. The SGM launched, in 1922, the saga of Quantum Spin (it turns out to be connected to deep pure geometry which had been developed independently by Élie Cartan, a decade earlier). Drive an electron, or an (appropriate) atomic beam, through an SGM, and one will get two dots, one up, one down. Whatever the axis of the SGM [1]. (The SGM is just a magnetic field with a specific linear direction.)

That means, at the emotional level, that, at the smallest scale, spin, the electronic (sort of) angular momentum, or the orbital (sort of) angular momentum, reduces to UP and DOWN. First surprise. (This is actually the case of Spin 1/2, the simplest, such as for an electron; we are trying to learn the most from the simplest case.)

Say the first SGM is vertical (magnetic field along the “z axis”) and a second SGM is horizontal (magnetic field along the “x axis”). Call them respectively SGMV and SGMH. So SGMH produces LEFT-RIGHT beams. Put SGMH across the UP beam coming out of SGMV. One could call that beam SGMV (UP). Once it goes through SGMH, one will get 50-50 LEFT-RIGHT. No surprise there.

Now say one selects the RIGHT beam coming out of SGMH. Call that beam SGMH (UP; RIGHT)… because first the beam went up, then the beam went right.

Naively, one would expect from classical mechanics that SGMH (UP; RIGHT) would have kept a memory of its initial source as SGMV (UP).

That would be to assume that beam SGMV (UP) and its descendant SGMH (UP; RIGHT) have kept some memory; in other words, that the beams going through the first magnet SGMV and then the second magnet SGMH harbor some LOCAL HIDDEN VARIABLES.

But that’s not the case. 

Indeed, run SGMH (UP; RIGHT) into a second SGMV, that is, a Stern-Gerlach Magnet parallel to the first magnet SGMV. Call that second vertical Stern-Gerlach Magnet SGMV2. One gets fifty-fifty UP and DOWN coming out of SGMV2. It is as if the initial Stern-Gerlach magnet, SGMV, never happened. (This set-up is presented in Feynman Lectures on Physics III, Chapter 6, Spin 1/2.)

So if there were local hidden variables carried after SGMV, that is, in the beam SGMV (UP), they got somehow completely eradicated by getting into the beam SGMH (RIGHT).

So these proposed local hidden variables do not stay hidden inside the “particle”: an outside agent can erase them… Thus those putative local hidden variables aren’t really “local” anymore: the environment, outside of the particle, impacts them, and drastically so, just as the potential impacts the phase of an electron in the Aharonov-Bohm experiment… nonlocally.

***

One can rerun the experiment using both beams, SGMH (LEFT) and SGMH (RIGHT), mixing them up. Then it turns out that SGMV2 deflects ONLY UP. So simply going through magnet SGMH, WITHOUT selecting a beam (either SGMH (LEFT) or SGMH (RIGHT)), doesn’t do anything: a collapsing of the Quantum space available to the Quantum wave, selecting either left or right space, is what does something.
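The whole three-magnet story fits in a two-component state vector. Here is a minimal numpy sketch of the set-up just described, using the standard spin-1/2 formalism (not from the original): selecting a beam is a projection (collapse); letting both beams through and recombining them is a coherent sum.

```python
import numpy as np

up = np.array([1, 0], dtype=complex)                   # SGMV "UP"
right = np.array([1, 1], dtype=complex) / np.sqrt(2)   # SGMH "RIGHT"
left = np.array([1, -1], dtype=complex) / np.sqrt(2)   # SGMH "LEFT"

# Case 1: select only the RIGHT beam out of SGMH (a collapse).
collapsed = right                           # state after the selection
p_up_after = abs(np.vdot(up, collapsed)) ** 2
print(p_up_after)   # 0.5: SGMV2 gives fifty-fifty, the UP "memory" is erased

# Case 2: let BOTH beams through and recombine them coherently (no selection).
recombined = np.vdot(right, up) * right + np.vdot(left, up) * left
p_up_both = abs(np.vdot(up, recombined)) ** 2
print(p_up_both)    # 1.0: SGMV2 deflects ONLY UP, as if SGMH never happened
```

The arithmetic shows the point of the text: it is the *selection* of one beam, not the mere passage through SGMH, that destroys the UP information.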

Conventional Quantum Physics, in its newish path integral version, phrases this by saying that one can’t say which path has been followed [2], so the information SGMV UP or SGMV DOWN is not kept. The Copenhagen Interpretation of the Quantum (CIQ) simply says that selecting beam SGMH (RIGHT) is a measurement, and thus collapses the wave function… SQPR says roughly the same thing.

In any case, this eradication of the influence of SGMH on the “particle”, by just keeping open the OTHER beam, the one which the putative local hidden variable “particle” is by definition NOT taking, is itself a NONLOCAL effect, thus once again demolishing the “LOCAL Hidden Variable” concept. (One could say that one beam is entangled with the other…)

The advantage of this conceptual approach is that it exhibits directly the nonlocality… without hermetic complications [3]. It also shows the interest of a more philosophical rather than purely formalistic approach to physics.

Patrice Ayme
***

[1] Wolfgang Pauli in 1924 was the first to propose a doubling of the number of available electron states due to a two-valued non-classical “hidden rotation”. In 1925, George Uhlenbeck and Samuel Goudsmit suggested the simple physical interpretation: a particle spinning around its own axis… But clearly that doesn’t fit what is observed. Pauli built a mathematical machinery which reflected the observed SGM behavior. It turned out to be a particular case of deep mathematical work by the French mathematician Élie Cartan, who was born and initially educated in the small Alpine coal mining village of La Mure, and rose through merit in the republican educational system. It’s a bit like taking the square root of space. I don’t understand it; neither did the extremely famous mathematician Atiyah…

It is easy to be blinded by the math. But actually the math describes an observed physical behavior. Now this behavior may arise from a deeper geometrical reason.

***

[2] In SQPR, the “particles” are preceded by the linear guiding waves. Blocking some of them triggers “collapse”. By selecting SGMH (RIGHT) one clearly collapses the linear guidance.

***

[3] Stern-Gerlach Magnets also directly illustrate Spin, as shown in the first few lines above (magnetic field -> two dots!). The Pauli machinery is often how Spin is introduced in Quantum Physics courses, but that, philosophically, is confusing the formalism derived from what is observed with the observation itself.

“Proof” That Faster Than Light Communications Are Impossible Is False

December 16, 2017

There are theories everywhere, and the more ingrained they are, the more suspiciously they should be looked at. From the basic equations of relativity it is clear that if one adds speeds less than the speed of light, one will get a speed less than the speed of light. It is also clear that adding impulse to a mass will make it more massive, while its speed will asymptotically approach that of light (and, as I explained, the reason is intuitive, from Time Dilation).
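The sub-light closure of velocity addition is easy to verify (a sketch in units where c = 1; an illustration, not from the original):

```python
def add_velocities(u, v, c=1.0):
    """Relativistic velocity addition: w = (u + v) / (1 + u v / c^2)."""
    return (u + v) / (1.0 + u * v / c ** 2)

print(add_velocities(0.9, 0.9))    # ~0.9945, still below c = 1

w = 0.0
for _ in range(10):                # ten successive boosts of 0.5 c
    w = add_velocities(w, 0.5)
print(w)                           # ~0.99997: creeps toward 1, never crosses it
```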

The subject is not all sci-fi: modern cosmology brazenly assumes that space itself, after the alleged Big Bang, expanded at a speed of at least 10^23 c (something like one hundred thousand billion billion times the speed of light c). The grossest, yet simplest, argument is this: the observable universe is roughly 100 billion light years across, and it is roughly ten billion years old. Thus it expanded at a minimum average clip of ten billion light years every billion years, that is, 10 c, according to standard cosmology’s own numbers. (One could furiously imagine a spaceship somehow surfing on a wave of warped space, expanding for the same obscure reason as the Big Bang itself, that is…)

The question naturally arises whether velocities which are greater than that of light could ever possibly be obtained in other ways. For example, are there communication speeds faster than light? (Throwing some material across will not work: its mass will increase, while its speed stays less than c.)

Textbooks say it’s not possible. There is actually a “proof” of that alleged impossibility, dating all the way back to Einstein (1907) and Tolman (1917). The mathematics are trivial (they are reproduced in my picture below). But the interpretation is apparently less so. Wikipedia weirdly claims that faster than light communications would allow one to travel back in time. No. One could synchronize all clocks on all planets in the galaxy, and having faster than light communications would not change anything. Why? Time is local, faster than light data travel is nonlocal.

The problem of faster than light communications can be attacked in the following manner.

Consider two points A and B on the X axis of the system S, and suppose that some impulse originates at A, travels to B with the velocity u and at B produces some observable phenomenon, the starting of the impulse at A and the resulting phenomenon at B thus being connected by the relation of cause and effect. The time elapsing between the cause and its effect as measured in the units of system S will evidently be as follows in the calligraphy below. Then I use the usual Relativity formula (due to Lorentz) of time as it elapses in S’:

Equations help, but they are neither the beginning, nor the end of a story. Just an abstraction of it. The cult of equations is naive, interpretation is everything. The same thing, more generally, holds for models.
As Tolman put it in 1917: “Let us suppose now that there are no limits to the possible magnitude of the velocities u and V, and in particular that the causal impulse can travel from A to B with a velocity u greater than that of light. It is evident that we could then take a velocity u great enough that uV/C^2 will be greater than one, so that Delta(t) would become negative. In other words, for an observer in system S’ the effect which occurs at B would precede in time its cause which originates at A.”

I quote Tolman because he is generally viewed as the one having definitively established the impossibility of faster than light communications. Tolman, though, was not so sure; in his next sentence he turns wishy-washy: “Such a condition of affairs might not be a logical impossibility; nevertheless its extraordinary nature might incline us to believe that no causal impulse can travel with a velocity greater than that of light.”

Actually it is an effect those who have seen movies running in reverse are familiar with. Causality apparently running in reverse is no more surprising than the fact that two events at x1 and x2 which are simultaneous in S are separated in S’ by a time: (x1-x2)(V/C^2)/square root(1-VV/CC). That introduces a sort of fake, or apparent, causality: sometimes this before that, sometimes that before this.
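Tolman’s computation reduces to one line, and the sign flip he describes is easy to exhibit numerically (a sketch in units where c = 1; u is the causal signal speed, V the speed of frame S’):

```python
import math

def dt_prime(dt, u, V, c=1.0):
    """Interval between cause and effect in frame S', given the interval dt
    in S, signal speed u, and frame speed V:
        dt' = dt * (1 - u V / c^2) / sqrt(1 - V^2 / c^2)
    """
    return dt * (1.0 - u * V / c ** 2) / math.sqrt(1.0 - (V / c) ** 2)

print(dt_prime(1.0, u=0.8, V=0.6))  # positive: effect follows cause in S'
print(dt_prime(1.0, u=5.0, V=0.6))  # uV/c^2 = 3 > 1: negative, order LOOKS reversed
```

The negative value is the whole of the “causality violation”: an ordering in the moving frame’s coordinates, not an event in anyone’s co-moving past.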

(The computation is straightforward and found in Tolman’s own textbook; it originated with Henri Poincaré.[9][10] In 1898 Poincaré argued that the postulate of light speed constancy in all directions is useful to formulate physical laws in a simple way. He also showed that the definition of simultaneity of events at different places is only a convention.[11] Notice that, in the case of simultaneity, the signs of V and (x1-x2) matter. Basically, depending upon how V moves, light in S going to S’ takes more time to catch up with the moving frame, the more so the further it is; the same exact effect explains the nil result in the Michelson-Morley interferometer. There is an underlying logic below all of this, and it’s always the same.)

Tolman’s argumentation about the impossibility of faster than light communications is, in the end, purely philosophical, and fully inconsistent with the closely related, and fully mainstream, relativity of simultaneity.

Poincaré in 1900 proposed the following convention for defining clock synchronisation: two observers A and B, which are moving in space (which Poincaré called the aether), synchronise their clocks by means of optical signals. They believe themselves to be at rest in space (“the aether”), from not moving relative to distant galaxies or the Cosmic Radiation Background, and assume that the speed of light is constant in all directions. Therefore, they have to consider only the transmission time of the signals and then cross their observations to examine whether their clocks are synchronous.

“Let us suppose that there are some observers placed at various points, and they synchronize their clocks using light signals. They attempt to adjust the measured transmission time of the signals, but they are not aware of their common motion, and consequently believe that the signals travel equally fast in both directions. They perform observations of crossing signals, one traveling from A to B, followed by another traveling from B to A.” 

In 1904 Poincaré illustrated the same procedure in the following way:

“Imagine two observers who wish to adjust their timepieces by optical signals; they exchange signals, but as they know that the transmission of light is not instantaneous, they are careful to cross them. When station B perceives the signal from station A, its clock should not mark the same hour as that of station A at the moment of sending the signal, but this hour augmented by a constant representing the duration of the transmission. Suppose, for example, that station A sends its signal when its clock marks the hour 0, and that station B perceives it when its clock marks the hour t. The clocks are adjusted if the slowness equal to t represents the duration of the transmission, and to verify it, station B sends in its turn a signal when its clock marks 0; then station A should perceive it when its clock marks t. The timepieces are then adjusted. And in fact they mark the same hour at the same physical instant, but on the one condition, that the two stations are fixed. Otherwise the duration of the transmission will not be the same in the two senses, since the station A, for example, moves forward to meet the optical perturbation emanating from B, whereas the station B flees before the perturbation emanating from A. The watches adjusted in that way will not mark, therefore, the true time; they will mark what may be called the local time, so that one of them will be slow of the other.”[13]
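Poincaré’s crossed-signal procedure, and the “local time” error it incurs when the stations are moving, can be sketched numerically (units where c = 1; the asymmetry is relative to the rest frame he describes; an illustration, not from the original):

```python
def transit_times(L, v, c=1.0):
    """One-way light transit times between stations A and B a distance L apart,
    both drifting at speed v through the rest frame (A trailing B)."""
    t_ab = L / (c - v)     # the signal chases a receding B
    t_ba = L / (c + v)     # the signal meets an approaching A
    return t_ab, t_ba

# Stations at rest: both directions take the same time, clocks adjust exactly.
print(transit_times(1.0, 0.0))     # (1.0, 1.0)

# Stations moving: assuming equal transit times (as the observers do) leaves
# a residual offset, Poincaré's "local time" discrepancy v*L/(c^2 - v^2).
t_ab, t_ba = transit_times(1.0, 0.5)
offset = (t_ab - t_ba) / 2
print(t_ab, t_ba, offset)
```

The offset is exactly the “slow of the other” effect in the quotation: the procedure works perfectly only for stations at mutual rest.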

This Poincaré (“–Einstein”) synchronisation was used by telegraphers as early as the mid-nineteenth century. It would allow covering the galaxy with synchronized clocks (although local times will differ a bit depending upon the motion of stars, and in particular where in the galactic rotation curve a star sits). Transmitting instantaneous signals in that network would not affect causality. Ludicrously, Wikipedia asserts that faster than light signals would make “Bertha” rich (!!!). That comes simply from Wikipedia getting thoroughly confused, allowing faster than light signals for some data, and not for other data, thus giving an advantage to some, and not others.

***

Quantum Entanglement (QE) enables at-a-distance changes of Quantum states:

(It comes in at least three types of increasing strength.) Quantum Entanglement, as known today, goes from within a Quantum state to within a Quantum state, but we cannot control which Quantum state the particle will be in, to start with, so we cannot use QE for communicating faster than light (because we don’t control what we write, so to speak: as we write with states, we send gibberish).

This argument is formalized in a “No Faster Than Light Communication theorem”. However, IMHO, the proof contains massive loopholes (the proof assumes that there is no Sub Quantum Reality, whatsoever, nor could there ever be some, ever, and thus that the unlikely QM axioms are forever absolutely true beyond all possible redshifts you could possibly imagine, inter alia). So this is not the final story here. QE enables, surprisingly, the Quantum Radar (something I didn’t see coming). And it is not clear to me that we have absolutely no control on states statistically, thus that we can’t use what Schrödinger, building on the EPR thought experiment, called “Quantum Steering” to communicate at a distance. Quantum Radar and Quantum Steering are now enacted through real devices. They use faster-than-light in their inner machinery.

As the preceding showed, the supposed contradiction of faster-than-light communications with Relativity is just an urban legend. It makes the tribe of physicists more priestly, as they evoke a taboo nobody can understand, for the good reason that it makes no sense. And it is intellectually comfortable, as it simplifies brainwork, as taboos always do. But it is a lie. And it is high time this civilization switched to the no-more-lies theorem, lest it finish roasted, poisoned, flooded, weaponized and demonized.

Patrice Ayme’

Technical addendum:

https://en.wikipedia.org/wiki/Relativity_of_simultaneity

As Wikipedia itself puts it, weasel-style, to try to insinuate that Einstein brought something very significant to the debate, namely the eradication of the aether (but the aether came back soon after, and there are now several “reasons” for it; the point being that, as Poincaré suspected, there is a notion of absolute rest, and now we know this for several reasons: CBR, Unruh effect, etc.):

In 1892 and 1895, Hendrik Lorentz used a mathematical method called “local time” t’ = t – v x/c2 for explaining the negative aether drift experiments.[5] However, Lorentz gave no physical explanation of this effect. This was done by Henri Poincaré who already emphasized in 1898 the conventional nature of simultaneity and who argued that it is convenient to postulate the constancy of the speed of light in all directions. However, this paper does not contain any discussion of Lorentz’s theory or the possible difference in defining simultaneity for observers in different states of motion.[6][7] This was done in 1900, when Poincaré derived local time by assuming that the speed of light is invariant within the aether. Due to the “principle of relative motion”, moving observers within the aether also assume that they are at rest and that the speed of light is constant in all directions (only to first order in v/c). Therefore, if they synchronize their clocks by using light signals, they will only consider the transit time for the signals, but not their motion in respect to the aether. So the moving clocks are not synchronous and do not indicate the “true” time. Poincaré calculated that this synchronization error corresponds to Lorentz’s local time.[8][9] In 1904, Poincaré emphasized the connection between the principle of relativity, “local time”, and light speed invariance; however, the reasoning in that paper was presented in a qualitative and conjectural manner.[10][11]

Albert Einstein used a similar method in 1905 to derive the time transformation for all orders in v/c, i.e., the complete Lorentz transformation. Poincaré obtained the full transformation earlier in 1905 but in the papers of that year he did not mention his synchronization procedure. This derivation was completely based on light speed invariance and the relativity principle, so Einstein noted that for the electrodynamics of moving bodies the aether is superfluous. Thus, the separation into “true” and “local” times of Lorentz and Poincaré vanishes – all times are equally valid and therefore the relativity of length and time is a natural consequence.[12][13][14]

… Except of course, absolute relativity of length and time is not really true: everywhere in the universe, locally at-rest frames can be defined, in several manners (optical, mechanical, gravitational, and even using a variant of the Quantum Field Theory Casimir Effect). All other frames are in trouble, so absolute motion can be detected. Einstein’s hope, in devising General Relativity, was to explain inertia, but he ended up with just a modification of the 1800 CE Bullialdus-Newton-Laplace theory… (Newton knew his instantaneous gravitation made no sense, and condemned it severely, so Laplace introduced a gravitation speed, thus gravitational waves, and Poincaré made them relativistic in 1905… Einstein got the applause…)
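The “local time” formula quoted above is just the first-order approximation of the full Lorentz time transformation. A minimal numerical sketch (Python; the event coordinates and observer speed are illustrative values I chose, not taken from the sources above):

```python
import math

c = 299_792_458.0  # speed of light, m/s

def local_time(t, x, v):
    """Lorentz's 1895 'local time': t' = t - v*x/c**2 (first order in v/c)."""
    return t - v * x / c**2

def lorentz_time(t, x, v):
    """Full Lorentz transformation: t' = gamma * (t - v*x/c**2)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (t - v * x / c**2)

# Event: t = 1 s, x = one light-second; observer speed v = 1% of c.
t, x, v = 1.0, c * 1.0, 0.01 * c
t_local = local_time(t, x, v)
t_full = lorentz_time(t, x, v)

# The two agree to first order in v/c; the discrepancy is of order (v/c)**2.
print(t_local, t_full, abs(t_full - t_local))
```

At 1% of c the two expressions differ only in the fifth decimal, which is why Lorentz’s “local time” worked so well for the aether-drift experiments.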

CONTINUUM FROM DISCONTINUUM

December 1, 2017

Discontinuing The Continuum, Replacing It By Quantum Entanglement Of Granular Substrate:

Is the universe granular? Discontinuous? Is spacetime somehow emergent? I do have an integrated solution to these quandaries, using basic mass-energy physics, and quantum entanglement. (The two master ideas I use here are mine alone, and if I am right, will change physics radically in the fullness of time.)  

First let me point out that worrying about this is not just a pet lunacy of mine. Edward Witten is the only physicist to have got a top mathematics prize, and is viewed by many as the world’s top physicist (I have met with him). He gave a very interesting interview to Quanta Magazine: A Physicist’s Physicist Ponders the Nature of Reality.

“Edward Witten reflects on the meaning of dualities in physics and math, emergent space-time, and the pursuit of a complete description of nature.”

Witten ponders, I answer.

Quantum Entanglement enables the building of existence over extended space, with a wealth of states growing exponentially beyond granular space

Witten: “I tend to assume that space-time and everything in it are in some sense emergent. By the way, you’ll certainly find that that’s what Wheeler expected in his essay” [Information, Physics, Quantum, Wheeler’s 1989 essay propounding the idea that the physical universe arises from information, which he dubbed “it from bit.” He should have called it: “It from Qubit”. But the word “Qubit” didn’t exist yet; nor, really, the concept, as physicists had not yet realized the importance of entanglement and nonlocality in building the universe: they viewed them more as “spooky” oddities on the verge of self-contradiction.]

Edward Witten: As you’ll read, he [Wheeler] thought the continuum was wrong in both physics and math. He did not think one’s microscopic description of space-time should use a continuum of any kind — neither a continuum of space nor a continuum of time, nor even a continuum of real numbers. On the space and time, I’m sympathetic to that. On the real numbers, I’ve got to plead ignorance or agnosticism. It is something I wonder about, but I’ve tried to imagine what it could mean to not use the continuum of real numbers, and the one logician I tried discussing it with didn’t help me.”

***

Well, I spent much more time studying logic than Witten did, a forlorn, despised and alienating task. (Yet, when one is driven by knowledge, nothing beats an Internet-connected cave in the desert, far from the distracting trivialities!) Studying fundamental logic, an exercise mathematicians, let alone physicists, tend to detest, brought me enlightenment, mostly because it shows how relative logic is, and how it can take thousands of years to make simple, obvious steps. How to remedy this lack of logical imagination affecting the tremendous mathematician cum physicist Witten? Simple. From energy considerations, there is an event horizon to how large an expression can be written. Thus, in particular, there is a limit to the size of a number. Basically, a number can’t be larger than the universe.

https://patriceayme.wordpress.com/2011/10/10/largest-number/

This also holds for the continuum: just as numbers can’t be arbitrarily large, neither can the digital expression of a given number be arbitrarily long. In other words, irrational numbers don’t exist (I will detail in the future what is wrong, step by step, with the 24-century-old proof).

As the world consists of sets of entangled quantum states (also known as “qubits”), the number of states can get much larger than the world of numbers. For example, a set of 300 entangled up-or-down spins presents 2^300 states (much larger than the number of atoms in the observable universe, some 100 billion light-years across). Such sets (“quantum simulators”) have basically been implemented in the lab.
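A minimal sketch of the counting argument (Python; the ~10^80 figure for atoms in the observable universe is a standard order-of-magnitude estimate, not from the text above):

```python
# Number of basis states of 300 entangled two-level systems (spins up/down).
n_spins = 300
n_states = 2 ** n_spins  # exact big integer in Python

# Standard order-of-magnitude estimate of atoms in the observable universe.
atoms_in_universe = 10 ** 80

print(n_states > atoms_in_universe)  # the state space dwarfs the atom count
print(len(str(n_states)))            # 2**300 has 91 decimal digits
```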

Digital computers only work with finite expressions. Thus practical, effective logic already uses only finite mathematics, and finite logic. So there is no difficulty in using only finite mathematics. Physically, it presents the interest of removing many infinities (although not renormalization!).

Quantum entanglement creates a much richer spacetime than the granular subjacent space. Thus an apparently continuous spacetime emerges from granular space. Let’s go back to the example above: 300 spins, in a small space, once quantum entangled, give a much richer quantum state space of 2^300 states.

Consider again a set S of 300 particles (a practical case would be 300 atoms with spins up or down). If a set of “particles” are all entangled together, I will call that an EQN (Entangled Quantum Network). Now consider an incoming wave W (typically a photonic or gravitational wave; but it could be a phonon, etc.). Classically, if the 300 particles were… classical, W would have little probability of interacting with S, because it has ONLY 300 “things”, 300 entities, to interact with. Quantum Mechanically, though, it has 2^300 “things”, all the states of the EQN, to interact with. Thus, a much higher probability of interacting. Certainly the wave W is more likely to interact with 2^300 entities than with 300, in the same space! (The classical computation can’t be made from scratch by me, or anybody else; but the classical computation, depending on the “transparency” of a film of 300 particles, would actually depend upon the Quantum computation nature makes discreetly, yet pervasively!)

EQNs make (mathematically at least) an all-pervasive, “volume”-occupying wave. I put “volume” in quotes because some smart asses, very long ago (nearly a century), pointed out that the Quantum Waves live in “PHASE” space, thus are NOT “real” waves. Whatever that means: the Quantum volumes/spaces in which Quantum Waves compute can be very complicated, beyond the electoral gerrymandering of congressional districts in the USA! In particular, they don’t have to be 3D “volumes”. That doesn’t make them less “real”. To allude to well-established mathematics: a segment is a one-dimensional volume. A space-filling curve is also a sort of volume, as is a fractal (and it has a fractal dimension).

Now quantum entanglement has been demonstrated over thousands of kilometers, and mass (so to speak) quantum entanglement has been demonstrated over 500 nanometers (5,000 times the size of an atom). One has to understand that solids are held by quantum entanglement. So there is plenty enough entanglement to generate spaces of apparently continuous possibilities and even consciousness… from a fundamentally granular space.

Entanglement, or how to get continuum from discontinuum. (To sound like Wheeler.)

The preceding seems pretty obvious to me. Once those truths get around, everybody will say: ‘But of course, that’s so obvious! Didn’t Witten say that first?’

No, he didn’t.

You read it here first.

Granular space giving rise to practically continuous spacetime is an idea where deep philosophy proved vastly superior to the shortsightedness of vulgar mathematics.

Patrice Ayme’

SUB-QUANTUM GRAVITATIONAL COLLAPSE 2 SLIT Thought Experiment

September 23, 2017

A Proposed Lab SUB QUANTUM TEST: SQPR, Patrice Aymé Contra Albert Einstein: GRAVITATIONALLY DETECTING QUANTUM COLLAPSE! 

Einstein claimed that a “particle” was a lump of energy, even while in translation. He had no proof of this assertion, it underlies all modern fundamental physics, and I believe it’s false. As I see it, this error, duplicated by 99.99% of Twentieth Century theoretical physicists, led the search for the foundations of physics astray. How could one prove my idea, and disprove Einstein?

What Einstein wrote is this, in what is perhaps his most famous work (1905 CE): “Energy, during the propagation of a ray of light, is not continuously distributed over steadily increasing spaces, but it consists of a finite number of energy quanta LOCALIZED AT POINTS IN SPACE, MOVING WITHOUT DIVIDING…” [What’s in capital letters, I view as extremely probably false. Einstein then added nine words, four of which explain the photoelectric effect, and for which he got the Nobel Prize. Those nine words were entirely correct, but physically independent of the preceding quote!]

If those “energy quanta” are “localized at points in space“, they concentrate onto themselves all the mass-energy.

It’s simple. According to me, the particle disperses while it is in translation (roughly following its De Broglie/Matter Wave dispersion, the bedrock of Quantum Physics as everybody knows it, and becoming a nonlinear variant of it). That means its mass-energy disperses. According to Einstein, it doesn’t.

However, a gravitational field can be measured. In my theory, SQPR, the matter waves are real. What can “real” mean, in its simplest imaginable form? Something is real if that something has mass-energy-momentum. So one can then do a thought experiment. Take the traditional Double Slit experiment, and install a gravitational needle (two masses linked by a rigid rod, like a hydrogen molecule at absolute zero) in the middle of the usual interference screen.

Sub Quantum Patrice Reality Is Experimentally Discernible From Einstein’s Version of Quantum Physics! Notice in passing that none of the physics super minds of the Twentieth Century seem to have noticed Einstein’s Axiom, which is ubiquitously used all over Quantum Physics and QFT!

According to Einstein, the gravitational needle will move before the process of interference is finished and the self-interfering particle hits the screen (some may object that, because photons travel at c, and so do gravitons, one can’t really gravitationally point at the photon; however, that’s not correct: there should be a delayed field moving the needle).

According to me, the particle is dispersed during the self-interfering process: it’s nowhere in particular. Thus the mass-energy is dispersed before the collapse/singularization. Thus a gravitational field from the self-interfering particle can’t be measured from inside the self-interfering geometry.

Could the experiment be done?

Yes. But it won’t be easy.

Molecules composed of 5,000 protons, 5,000 neutrons and 5,000 electrons have exhibited double-slit behavior. That’s plenty enough mass to turn a gravitational needle made of two hydrogen atoms. However, with such a large object, my theory may well fail to be experimentally checked (the molecule probably re-localizes continually, thus the needle would move before impact). Ideally, one should check this Sub Quantum Reality with a single simple particle, such as a photon, or an electron.
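For scale, a minimal back-of-the-envelope sketch (Python; the beam speed and needle separation are illustrative values I chose, not from the text):

```python
# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
h = 6.626e-34        # Planck's constant, J s
amu = 1.6605e-27     # atomic mass unit, kg

# Molecule from the text: 5000 protons + 5000 neutrons (~10,000 amu;
# the 5000 electrons contribute negligible mass).
m_molecule = 10_000 * amu

# De Broglie wavelength lambda = h / (m v), for an assumed beam speed.
v = 100.0                                  # m/s (illustrative)
wavelength = h / (m_molecule * v)
print(f"de Broglie wavelength: {wavelength:.2e} m")

# Gravitational pull of the molecule on one hydrogen atom of the needle,
# at an assumed separation of 1 micron.
m_H = 1.674e-27      # hydrogen atom mass, kg
r = 1e-6             # separation, m (illustrative)
F = G * m_molecule * m_H / r**2
print(f"gravitational force on needle atom: {F:.2e} N")
```

The resulting force, on the order of 10^-48 newtons, gives a feel for why, as conceded above, the experiment won’t be easy.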

Why did I long believe Einstein was wrong on this point, what I called “Einstein’s Axiom” above?

First, he had no proof of what he said. Allure can’t replace reason.

Second, localization into a point is contrary to the philosophical spirit, so to speak, of Quantum Physics. The basic idea of Quantum Physics is that one can’t localize physics into points in space… or into points in energy (this was Planck’s gist). Both space and energy come in LUMPS. For example, an electron delocalizes around a proton, creating an atom of hydrogen.

The lump thing for emissions of energy is Planck’s great discovery (a blackbody sends energy packets hf, where f is the frequency and h, Planck’s constant). The non-relevance of points is De Broglie’s great intuition: De Broglie introduced the axiom that one can compute everything about the translational behavior of an object from the waves associated to the energy-momentum of said object.

So Einstein was wrong on the philosophy, as he himself concluded after thirty years of thinking hard about Quantum Physics, as one of its two founders, with his discovery of what he called “Spooky Interaction At A Distance” (the “EPR”, which has turned from thought experiment to real experiment, checked by now in hundreds of different experiments). If “elements of reality” (to use the Einstein EPR language) are “spooky action at a distance”, why not so when the particle is in flight, which is precisely the gist of the EPR… (After I thought of this, I found a paper by Zurek et al. who seem to draw a similar conclusion.)

The philosophy of Quantum Physics in one sentence: small is big, or even, everywhere.

Third, Einstein’s hypothesis of points particles being always localized has led to lots of problems, including the so-called “Multiverse” or the “Many Worlds Interpretation of Quantum Mechanics” (at least, according to yours truly…).

Fourth, the development of Twentieth Century physics along Einstein’s roadmap has led to theories covering at most 5% or so of the known mass-energy: an epic failure. Whereas my own Sub Quantum Reality readily predicts the appearance of Dark Matter and the joint appearance of Dark Energy, as observed.

Fifth: If Einstein were right, the which-path information in the 2-slit experiment would be readily available, at least as a thought experiment, and that can’t work. The entire subject is still highly controversial: contemplate the massive paper in the Proceedings of the National Academy of Sciences, “Finally making sense of the double-slit experiment”, March 20, 2017, whose lead author is Yakir Aharonov, of the extremely famous and important Aharonov-Bohm effect. The Aharonov-Bohm effect pointed out that the potentials, not the fields themselves, are the crucial inputs of Quantum Physics. That should have been obvious to any and all who studied Quantum Physics. Yet it was overlooked by all the super minds for nearly 40 years!

Sixth: This is technical, so I won’t give the details (which are not deep). One can modify Einstein’s original EPR experiment (which had to do with pairs of particles in general, not just photon polarization à la Bohm-Bell). One can introduce, in the EPR 1935 set-up, an ideal gravity detector. If Einstein were right about the particle being always localized, determinism would always hold for particle A of an {A,B} interaction pair. Thus particle A could always be tracked, gravitationally. But that would grossly violate the free will of a lab experimenter deciding to tinker with B’s path, through an experiment of her choosing. (How do large particles do it, then? Well, they tend to partly localize continually, thanks to their own size and random singularizations.)

The naked truth can be in full view, yet, precisely because it’s naked, nobody dares to see it!

Richard Feynman famously said that the double slit experiment was central to physics, and that no one understood it. He considered it carefully. Gravitation should stand under it, though! The experiment proposed above was obvious to propose. Yet no one proposed it, because they just couldn’t seriously envision Quantum Collapse, and thus its impact on gravitation. Yet I do! And therein lies the connection between Quantum Physics and Gravitation, the quest for the Grail of modern physicists…

So let’s have an experiment, Mr. Einstein!

Patrice Ayme’

DARK MATTER PROPULSION Proposed

December 10, 2016

In Sub-Quantum Patrice’s Reality (SQPR), Matter Waves are real (in the Quantum Theory Copenhagen Interpretation (QTCI), the Matter Waves are just probability waves of… knowledge… hence the insistence that “it came from bit”). There has been no direct evidence that Matter Waves are real. So far. But the times, they are a-changin’, as the other one, Bob Dylan, a gifted yet not too deep singer who got his Nobel today, said.

Both Dark Matter and Dark Energy are consequences of SQPR. So: observing both Dark Matter and Dark Energy constitutes proof of SQPR.

“General Relativity” predicted twice the deviation of light by the Sun that Newtonian Mechanics predicted. The effect was minute, and detected only in grazing starlight, during the solar eclipse of 29 May 1919 (by the ultra-famous British astronomer and physicist Eddington). Thus, as 95% of the universe’s mass-energy is Dark Matter or Dark Energy, my prediction carries more weight.
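The factor of two just mentioned can be checked with the standard textbook formulas (a minimal sketch in Python; the solar constants are standard reference values, not from the text): the “Newtonian” estimate of the grazing deflection is 2GM/(c²R), while General Relativity gives 4GM/(c²R).

```python
import math

# Standard solar values (SI units)
GM_sun = 1.32712e20   # gravitational parameter of the Sun, m^3/s^2
R_sun = 6.957e8       # solar radius, m (impact parameter of a grazing ray)
c = 299_792_458.0     # speed of light, m/s

rad_to_arcsec = 180.0 / math.pi * 3600.0

# Deflection of a light ray grazing the Sun:
newton = 2 * GM_sun / (c**2 * R_sun) * rad_to_arcsec  # "Newtonian" value
gr = 4 * GM_sun / (c**2 * R_sun) * rad_to_arcsec      # General Relativity

print(f'Newtonian: {newton:.2f}"  GR: {gr:.2f}"')  # GR is exactly twice
```

This reproduces the famous ~1.75 arcseconds Eddington’s expedition confirmed.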

SQPR also predicts “fuel-less” propulsion, in a variant of the effect which produces Dark Matter in SQPR (also written PSQR):

Dark Matter Pushes, Patrice Ayme Says. Explaining NASA's Findings?

How does Dark Matter create propulsion? Well, that it does is evident: just look at galactic clusters (more details another day). A Matter Wave will expand, until it singularizes. If it expands enough, it will become so big that it loses a (smaller) piece of itself during re-singularization. That piece is the Dark Matter.

Thus visualize this: take a cavity C, and bounce a Matter Wave around it (there is plenty of direct theoretical and experimental evidence that this can be arranged).

Make a hole H in the boundary of C (this is not different from the Black Body oven the consideration of which led Planck to discover the Quantum).

Some Dark Matter then escapes, through the hole.

However, Dark Matter carries energy momentum (evidence from Galaxies, Galactic Clusters, etc.).

Hence a push. A Dark Matter push. (Notice: the Dark Matter is created inside the device, it doesn’t have to be “gathered”. DM propellant speed could be many times the speed of light, hence great efficiency…)

The (spectacular) effect has been apparently observed by NASA.

Does this violate Newton’s Third Law? (As it has been alleged.)

No. I actually just used Newton’s Third Law, the Action = Reaction law. So SQPR explains the observed effect in combination with the Action= Reaction Law, “proving” both.

How could we prove SQPR? There should be a decrease of energy-momentum after a while, and the decrease should equal the observed push exactly.
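The proposed check is just momentum-energy bookkeeping: whatever momentum the escaping Dark Matter carries away per second must show up as the push on the cavity, and the cavity’s stored mass-energy must drop by exactly what leaves. A minimal sketch (Python; all numbers are made-up illustrative values, not measurements or predictions):

```python
# Momentum-conservation bookkeeping for the proposed cavity test.
# All values are hypothetical illustrations.
dm_dt = 1e-12     # mass-energy leaving through the hole, kg/s (made up)
v_dm = 3e8        # effective propellant speed, m/s (made up)

# Newton's Third Law: thrust equals the expelled momentum per second.
thrust = dm_dt * v_dm
print(f"predicted push: {thrust:.1e} N")

# Over an hour, the cavity must lose exactly the mass-energy carried away;
# that equality is the proposed experimental signature.
t = 3600.0
mass_lost = dm_dt * t
print(f"mass-energy lost in 1 hour: {mass_lost:.1e} kg")
```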

Patrice Ayme’

***

Warning: The preceding considerations are at the edge of plausible physics. (Groups of dissenting physicists are always busy making theories in which Dark Matter does not exist, and they should! Should they be right, the preceding is nonsense. The consensus, though, is that Dark Matter exists, but is explained by a variant of the so-called “Standard Model”, using “Supersymmetry”, “WIMPs”, or “Axions”. My own theory, SQPR, is by far the most exotic, as it uses a hypothesized Sub Quantum Reality, obtained by throwing the Quantum Theory Copenhagen Interpretation, QTCI, out the window as a first-order theory.)

DARK GALAXY (Explained?)

October 1, 2016

A giant galaxy made nearly entirely of Dark Matter has been discovered. Theories of Dark Matter proposed by people salaried for professing physics cannot explain (easily, if at all!) why there would be so much Dark Matter in one galaxy. I can. In my own theory, Dark Matter is not really matter, although matter gives birth to it, under some particular geometrical conditions. In my theory, in some geometrodynamic situations, a galaxy will churn inordinate amounts of Dark Matter quickly. So I was not surprised by the find.

There are many potential theories of Dark Matter. Most are fairly conventional. They typically hypothesize new particles (some of these new particles could come from new symmetries, such as supersymmetry). I do not see how they can predict why these particular particles appear in some places, and not others. However, the importance of location, of geometry, is a crucial feature of my own theory.

I posit that the Quantum Interaction (copyright myself) does not have infinite range. Thus, quantum interactions, in some conditions of low mass-energy density, leave behind part of the Quantum Wave. Such debris have mass-energy, so they exert gravitational pull, but they have little else besides (most of the characteristics of the particles they were part of concentrate somewhere else).

I Can Explain This Dark Galaxy, By Changing The Foundations Of Physics. No Less.

[From the Hawaiian Gemini telescope.]

In my own theory, one can imagine that the geometry of a galaxy is, at some point, extremely favorable to the creation of Dark Matter: it is just a question of dispersing the matter just so. The Dark Galaxy has 1% of the stars of our Milky Way, or less. In my theory, once Dark Matter has formed, it does not seem possible to make visible matter again out of it (broken Quantum Wave debris float around like a cosmic fog).

All past science started as a mix of philosophy and science-fiction (Aristarchus, Lucretius, Giordano Bruno, Immanuel Kant, Lamarck are examples). One can only surmise it will be the same in the future, and this is supported by very good logic: guessing always comes before knowing. Those who claim that science will never again be born from philosophy and fantasy are saying that really new science will never happen again. They say that all the foundations of science are already known. So they are into preaching, just like religious fanatics.

It was fashionable to say so, among physicists in the 1990s, the times of the fable known as TOE, the so-called Theory Of Everything. Shortly after this orgasm of self-satisfaction by self-appointed pontiffs, the evidence became clear that the universe’s mass-energy was mostly Dark Energy, and Dark Matter.

This is an interesting case of a shared meta-mood: also in the 1990s, clever idiots (Fukuyama, etc.) claimed history had ended: a similar claim from the same period, permeating the same mood of stunted imagination. The advantage, for those who pontificated that way? They could claim they knew everything: they had become gods, living gods.

I had known about Dark Matter all along (the problem surfaced nearly a century ago). I considered it a huge problem: it holds galaxies and galactic clusters together. But maybe something had been overlooked. Meanwhile Main Stream Physics (MSP) dutifully, studiously, ignored it. For decades. Speaking of Dark Matter made one despicable, a conspiracy theorist.

Another thing MSP ignored was the foundations of physics. Only the most prestigious physicists, such as Richard Feynman, could afford to repeat Einstein’s famous opinion that “nobody understands Quantum Mechanics”. I made trying to understand what nobody wanted to understand, what nobody thought they could afford to understand, the real foundations of physics, the main axis of my intellectual life’s reflection. (So doing, I was forced to reflect on why it is that people do not want to understand the most fundamental things, even while professing they do. It is particularly blatant in, say, economics.)

I have long discovered that the real foundations of physics are entangled with those of mathematics (it is not just that physics, nature, is written in mathematics, as Galileo wrote; there is a dialogue between the mathematics that we invent and the universe that we discover; they lead to each other). For example, whether the infinity axiom is allowed in mathematics changes the physics radically (the renormalization problem of physics is solved if one removes the infinity axiom).

Right now, research at the foundations of (proper) physics is hindered by our lack of nonlinear mathematics: Quantum Mechanics, as it is, is linear (waves add up in the simplest way). However the “collapse of the wave packet” is obviously nonlinear (this is why it’s outside of existing physics, from lack of math). From that Quantum collapse, when incomplete due to the great distances involved, comes Dark Matter. At least, so I propose.

Patrice Ayme’

DARK MATTER-ENERGY, Or How Inquiry Proceeds

September 7, 2016

How to find really new knowledge? How do you find really new science? Not by knowing the result: that is what we don’t have yet. Any really new science will not be deduced from pre-existing science. Any really new knowledge will come out of the blue. Poetical and/or emotional logic will help before linear logic does.

A top lawyer, admitted to practice before the US Supreme Court and in several countries, told me that the best judges know, emotionally, where they want to go, and then build a logical case for it.

The case of Dark Matter is telling: this increasingly irritating elephant in the bathroom has been in evidence for 80 years, lumbering about, smashing the most basic concepts of physics. As the encumbering beast did not fit existing science, it was long religiously ignored by the faithful of the church of standard physics, as a subject not worthy of deep inquiry by very serious physicists. Now Dark Matter, five times more massive than Standard Model matter, is clearly sitting heavily outside of the Standard Model, threatening to crush it into irrelevance. Dark Matter obscures the lofty pretense of known physics to explain everything (remember the grandly named TOE, the so-called “Theory Of Everything”? That TOE was a fraud, snake oil, because mainstream physics celebrities crowed about TOE while knowing perfectly well that Dark Matter dwarfed standard matter, and sat completely outside of the Standard Model).

Physicists are presently looking for Dark Matter knowing what they know, namely that nature has offered them a vast zoo of particles, many of them without rhyme or reason, or symmetries to “explain” them (indeed, some have rhyme, a symmetry, a mathematical group such as SU(3) acting upon them; symmetries have sometimes revealed new particles).

Bullet Cluster, 100 Million Years Old. Two Galaxy Clusters Colliding. The Dark Matter, In Blue, Is Physically Separated From the Hot, Standard Matter Gas, in Red.

The sort of picture above is most of what we presently have to guess what Dark Matter could be. The physical separation of DM and SM is most telling to me: it seems to indicate that SM and DM do not respond to the same forces, something my Quantum theory predicts. It’s known that Dark Matter causes gravitational lensing, as one would expect, since it was first found by its gravitational effects, in the 1930s…

However, remember: a truly, completely new (piece of) science cannot be deduced from a pre-existing paradigm. Thus, if Dark Matter were really about finding a new particle type, it would be interesting, but not as interesting as it would be if it were not, after all, a new particle type, but instead a consequence of a completely new law of physics.

This is the quandary about finding truly completely new science. It can never be deduced from ruling paradigms, and may actually overthrow them. What should then be the method to use? Can Descartes and Sherlock Holmes help? The paradigm presented by Quantum Physics helps. The Quantum looks everywhere in space to find solutions: this is where its (“weird”) nonlocality comes in. Nonlocality is crucial for interference patterns and for finding lowest energy solutions, as in the chlorophyll molecule. This suggests that our minds should go nonlocal too, and we should look outside of a more extensive particle zoo to find what Dark Matter is.

In general, searching for new science should be by looking everywhere, not hesitating to possibly contradict what is more traditional than well established.

An obvious possibility to explain Dark Matter is, precisely, that Quantum Physics is itself incomplete, and generating Dark Matter, and Dark Energy, in places where said incompleteness (of the present Quantum theory) would be most blatant: large cosmic distances.

More precisely, Quantum processes, stretched over cosmic distances, instead of being perfectly efficient and nonlocal over gigantically cosmic locales, could leave a Quantum mass-energy residue, precisely in the places where extravagant cosmic stretching of Quanta occurs (before “collapse”, aka “decoherence”). (I call this theory of mine SQPR, Sub Quantum Patrice Reality.)

This would happen if what one should call the “Quantum Interaction” proceeds at a finite speed (much faster than c, by a factor of at least 10^23…). It’s enough.

The longer one fails to find a conventional explanation (namely a new type of particle) for Dark Matter, the more likely my style of explanation becomes. How could one demonstrate it? Not by looking for new particles, but by conducting new and more refined experiments on the foundations of Quantum Physics.

If this guess is correct, whatever is found askew in the axioms of present Quantum Physics could actually help future Quantum Computer technology (because the latter works with Quantum foundations directly, whereas conventional high energy physics tend to eschew the wave aspects, due to the high frequencies involved).

Going on a tangent is what happens when the central, attractive force, is let go. A direct effect of freedom. Free thinking is tangential. We have to learn to produce tangential thinking.

René Descartes tried to doubt the truth of all his beliefs to determine which beliefs he could be certain were true. However, at the end of “The Meditations” he hastily concluded that we can distinguish between dream and reality. It is not that simple. The logic found in dreams is all too similar to the logic used by full-grown individuals in society.

Proof? Back to Quantum Physics. On the face of it, the axioms of Quantum Physics have a dream-like quality (there is no “here”, nor “there”; “now” is everywhere; and, mysteriously, the experiment is Quantum, whereas the “apparatus” is “classical”). Still, most physicists, after insinuating they have figured out the universe, eschew the subject carefully. The specialists of Foundations are thoroughly confused: see Sean Carroll, http://www.preposterousuniverse.com/blog/2013/01/17/the-most-embarrassing-graph-in-modern-physics/

However unbelievable Quantum Physics is, however dream-like, physicists believe in it, and don’t question it any more than cardinals would Jesus. Actually, it’s this dream-like nature, shared by all, which defines the community of physicists. Cartesian doubt, pushed further than Descartes took it, will question not just the facts, the allegations, but the logic itself. And even the mood behind it.

Certainly, in the case of Dark Matter, some of the questions civilization has to ask should be:

  1. How sure are we of the Foundations of Quantum Physics? Answer: very sure, all too sure!
  2. Could it not be that Dark Matter is a cosmic-size experiment in the Foundations of Quantum Physics?

Physics, properly done, does not just question the nature of nature. Physics, properly done, questions the nature of how we find out the nature of anything. Physics, properly done, even questions the nature of why we feel the way we do. And the way we did. About anything, even poetry. In the end, indeed, even the toughest logic is a form of poetry, hanging out there, justified by its own beauty, and nothing else. Don’t underestimate moods: they call what beauty is.

Patrice Ayme’

Entangled Universe: Bell Inequality

May 9, 2016

Abstract: The Bell Inequality shatters the picture of reality civilization previously established. A simple proof is produced.

What is the greatest scientific discovery of the Twentieth Century? Not Jules Henri Poincaré’s Theory of Relativity and his famous equation: E = mc². Although a spectacular theory, it stemmed from Galileo’s Principle of Relativity, extended to Electromagnetism: to save electromagnetism globally, Poincaré made time and length local, in order to keep the speed of light constant.

So was the discovery of the Quantum by Planck the greatest discovery? To explain two mysteries of academic physics, Planck posited that energy was emitted in lumps. Philosophically, though, the idea was just to extend to energy the basic philosophical principle of atomism, which was two thousand years old. Energy itself was discovered by Émilie Du Châtelet in the 1730s.

Quantum Entanglement Is NOT AT ALL Classically Predictable

Just as matter came in lumps (strict atomism), so did energy. In light of Poincaré’s E = mc², matter and energy are the same, so this is not surprising (by a strange coincidence (?), Poincaré demonstrated and published E = mc² a few months apart, in the same year, 1900, as Max Planck published E = hf; Einstein used both formulas in 1905).

The greatest scientific discovery of the Twentieth Century was Entanglement… which is roughly the same as Non-Locality. Non-Locality would have astounded Newton: he was explicitly very much against it, and viewed it, correctly, as the greatest flaw of his theory. My essay “Non-Locality” entangles Newton, Émilie Du Châtelet, and the Quantum, because therefrom the ideas first sprung.

***

Bell Inequality Is Obvious:

John Bell, of the Theory Division at CERN, discovered an inequality so trivial, so basic, so incredibly obvious, reflecting the most elementary common sense, that it should always be true. Ian Miller (PhD, Physical Chemistry) provided a very nice perspective on all this. Here it is, cut and pasted (with his agreement):

Ian Miller: A Challenge! How can Entangled Particles violate Bell’s Inequalities?

Posted on May 8, 2016 by ianmillerblog           

  The role of mathematics in physics is interesting. Originally, mathematical relationships were used to summarise a myriad of observations, thus from Newtonian gravity and mechanics, it is possible to know where the moon will be in the sky at any time. But somewhere around the beginning of the twentieth century, an odd thing happened: the mathematics of General Relativity became so complicated that many, if not most physicists could not use it. Then came the state vector formalism for quantum mechanics, a procedure that strictly speaking allowed people to come up with an answer without really understanding why. Then, as the twentieth century proceeded, something further developed: a belief that mathematics was the basis of nature. Theory started with equations, not observations. An equation, of course, is a statement, thus A equals B can be written with an equal sign instead of words. Now we have string theory, where a number of physicists have been working for decades without coming up with anything that can be tested. Nevertheless, most physicists would agree that if observation falsifies a mathematical relationship, then something has gone wrong with the mathematics, and the problem is usually a false premise. With Bell’s Inequalities, however, it seems logic goes out the window.

Bell’s inequalities are applicable only when the following premises are satisfied:

Premise 1: One can devise a test that will give one of two discrete results. For simplicity we label these (+) and (-).

Premise 2: We can carry out such a test under three different sets of conditions, which we label A, B and C. When we do this, the results between tests have to be comparable, and the simplest way of doing this is to represent the probability of a positive result at A as A(+). The reason for this is that if we did 10 tests at A, 10 at B, and 500 at C, we cannot properly compare the results simply by totalling results.

Premise 1 is reasonably easily met. John Bell used as an example, washing socks. The socks would either pass a test (e.g. they are clean) or fail, (i.e. they need rewashing). In quantum mechanics there are good examples of suitable candidates, e.g. a spin can be either clockwise or counterclockwise, but not both. Further, all particles must have the same spin, and as long as they are the same particle, this is imposed by quantum mechanics. Thus an electron has a spin of either +1/2 or -1/2.

Premises 1 and 2 can be combined. By working with probabilities, we can say that each particle must register once, one way or the other (or each sock is tested once), which gives us

A(+) + A(-) = 1; B(+) + B(-) = 1;   C(+) + C(-) = 1

i.e. the probability of one particle tested once and giving one of the two results is 1. At this point we neglect experimental error, such as a particle failing to register.

Now, let us do a little algebra/set theory by combining probabilities from more than one determination. We might take two pieces of apparatus, and with one determine the (+) result at condition A, and the (-) result at condition B. If so, we take the product of these, because probabilities are multiplicative, and we can write

A(+) B(-) = A(+) B(-) [C(+) + C(-)]

because the bracketed term [C(+) + C(-)] equals 1, the sum of the probabilities of results that occurred under conditions C.

Similarly

B(+)C(-)   = [A(+) + A(-)] B(+)C(-)

By adding and expanding

A(+) B(-) + B(+)C(-) = A(+) B(-) C(+) + A(+) B(-) C(-) + A(+) B(+)C(-) + A(-)B(+)C(-)

=   A(+)C(-) [B(+) + B(-)] + A(+)B(-)C(+) + A(-)B(+)C(-)

Since the bracketed term [B(+) + B(-)] equals 1 and the last two terms are positive numbers, or at least zero, we have

A(+) B(-) + B(+)C(-) ≧ A(+)C(-)

This is the simplest form of a Bell inequality. In Bell’s sock-washing example, he showed how socks washed at three different temperatures had to comply.

An important point is that, provided the samples tested give only one result from only two possible results, and provided the tests are applied under three sets of conditions, the mathematics says the results must comply with the inequality. Further, only premise 1 relates to the physics of the samples tested; the second is merely a requirement that the tests are done competently. The problem is, modern physicists say entangled particles violate the inequality. How can this be?

Non-compliance by entangled particles is usually considered a consequence of the entanglement being non-local, but that makes no sense because in the above derivation, locality is not mentioned. All that is required is that premise 1 holds, i.e. measuring the spin of one particle, say, means the other is known without measurement. So, the entangled particles have properties that fulfil premise 1. Thus violation of the inequality means either one of the premises is false, or the associative law of sets, used in the derivation, is false, which would mean all mathematics are invalid.

So my challenge is to produce a mathematical relationship that shows how these violations could conceivably occur. You must come up with a mathematical relationship or a logic statement that falsifies the above inequality, and it must include a term that specifies when the inequality is violated. So, any takers? My answer in my next Monday post.

[Ian Miller.]
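Miller’s claim that the inequality is forced by the premises can be checked mechanically. Here is a minimal Python sketch (my own illustration, not part of Miller’s post): any local hidden-variable model amounts to a probabilistic mixture of the 8 deterministic assignments of (+)/(-) to the three conditions A, B, C, so it suffices to verify the inequality for each assignment separately.

```python
# Check A(+)B(-) + B(+)C(-) >= A(+)C(-) for every deterministic
# hidden-variable assignment: each particle carries predetermined
# answers (a, b, c), one per test condition. Any mixture of these
# 8 assignments then satisfies the inequality as well.
from itertools import product

for a, b, c in product([+1, -1], repeat=3):
    ab = (a == +1) and (b == -1)   # contributes to A(+)B(-)
    bc = (b == +1) and (c == -1)   # contributes to B(+)C(-)
    ac = (a == +1) and (c == -1)   # contributes to A(+)C(-)
    assert ab + bc >= ac, (a, b, c)

print("inequality holds for all 8 deterministic assignments")
```

The key case is a = +1, c = -1: whatever b is, one of the two left-hand terms fires, so the inequality can never fail classically.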

***

The treatment above shows how ludicrous it should be that reality violates that inequality… BUT IT DOES! This is something which nobody had seen coming. No philosopher ever imagined something as weird. I gave an immediate answer to Ian:

‘Locality is going to come in the following way: A is going to be in the Milky Way, B and C on Andromeda. A(+) B(-) is going to be ½ cos²(b-a). Therefrom the contradiction. There is more to be said. But first of all, I will re-blog your essay, as it makes the situation very clear.’
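To make the contradiction concrete, here is a minimal sketch using the correlation quoted in the reply above, P[A(+)B(-)] = ½ cos²(b-a), with illustrative analyzer angles. (The exact angular dependence, e.g. whether a half-angle appears, depends on the system measured; the violation occurs either way.)

```python
import math

# Quantum joint probability from the reply: P[A(+)B(-)] = (1/2) cos^2(b - a),
# where a, b are analyzer angles. (For spin-1/2 singlets the half-angle
# (b - a)/2 appears instead; either convention violates the inequality.)
def p_plus_minus(x, y):
    return 0.5 * math.cos(y - x) ** 2

a, b, c = 0.0, math.pi / 2, 0.2   # analyzer settings, in radians (illustrative)

lhs = p_plus_minus(a, b) + p_plus_minus(b, c)   # A(+)B(-) + B(+)C(-)
rhs = p_plus_minus(a, c)                        # A(+)C(-)

print(f"lhs = {lhs:.4f}, rhs = {rhs:.4f}")      # lhs ≈ 0.0197 < rhs ≈ 0.4803
assert lhs < rhs                                # Bell inequality violated
```

Because the probability depends on the *difference* of settings chosen at the two distant stations, and not on any predetermined per-particle answer, premise 1 quietly fails: there is no single assignment of (+)/(-) to A, B, C independent of which pair is actually measured.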

Patrice Ayme’

TO BE AND NOT TO BE? Is Entangled Physics Thinking, Or Sinking?

April 29, 2016

Frank Wilczek, a physics Nobel laureate, wrote an article in Quanta magazine, “Entanglement Made Simple”, which is first soporific, then baffling. Yes, all too simple: it sweeps the difficulties under the rug. After a thorough description of classical entanglement, we are swiftly told at the end that classical entanglement supports the Many Worlds Interpretation of Quantum Mechanics. However, classical entanglement (from various conservation laws) has been known since the seventeenth century.

Skeptical founders of Quantum physics (such as Einstein, De Broglie, Schrödinger, Bohm, Bell) knew classical entanglement very well. David Bohm found the Aharonov-Bohm effect, which demonstrated the importance of the (nonlocal) potential. John Bell found his inequality, which demonstrated, with the help of experiments (Alain Aspect, etc.), that Quantum physics is nonlocal.

Differently From Classical Entanglement, Which Acts As One, Quantum Entanglement Acts At A Distance: It Interferes With Measurement, At A Distance

The point about the cats is that everybody, even maniacs, ought to know that cats are either dead, or alive. Quantum mechanics make the point they can compute things about cats, from their point of view. OK.

Quantum mechanics, in their busy shops, compute with dead and live cats as possible outcomes. No problem. But then does that mean there is a universe, a “world”, with a dead cat, happening, and then one with a live cat, also happening simultaneously?

Any serious philosopher, somebody endowed with common sense, the nemesis of a Quantum mechanic, will say no: in a philosopher’s opinion, a cat is either dead, or alive. To be, or not to be. Not to be, and thus not not to be.

A Quantum mechanic can compute with dead and live cats, but that does not mean she creates worlds, by simply rearranging her computation, this way, or that. Her various dead and live cats arrangements just mean she has partial knowledge of what she computes with, and that Quantum measurements, even from an excellent mechanic, are just partial, mechanic-dependent measurements.

For example, if one measures spin, one needs to orient a machine (a Stern Gerlach device). That’s just a magnetic field going one way, like a big arrow, a big direction. Thus one measures spin in one direction, not another.
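A minimal sketch of that direction dependence (standard quantum mechanics, not specific to this essay; numpy is assumed): for a spin-½ particle prepared “up” along z, the probability of getting + along an axis tilted by angle θ is cos²(θ/2).

```python
import numpy as np

# Pauli matrices
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def p_plus_along(theta, state):
    """Probability of the + outcome when the Stern-Gerlach axis is
    tilted by angle theta away from z (in the x-z plane)."""
    n_dot_sigma = np.cos(theta) * sz + np.sin(theta) * sx
    vals, vecs = np.linalg.eigh(n_dot_sigma)
    plus = vecs[:, np.argmax(vals)]          # eigenvector with eigenvalue +1
    return abs(np.vdot(plus, state)) ** 2

up_z = np.array([1, 0], dtype=complex)       # prepared spin-up along z

for theta in (0.0, np.pi / 3, np.pi / 2, np.pi):
    p = p_plus_along(theta, up_z)
    print(f"theta = {theta:.3f}  P(+) = {p:.3f}")
    assert abs(p - np.cos(theta / 2) ** 2) < 1e-9   # matches cos^2(theta/2)
```

Orienting the magnet is choosing θ: the apparatus setting is part of what the outcome probabilities mean, which is exactly why “measuring spin” is always measuring spin *along some direction*.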

What’s more surprising is that, later on, thanks to a nonlocal entanglement, one may be able to determine that, at this point in time, the particle had a spin that could be measured, from far away, in another direction. So far, so good: this is like classical mechanics.

However, whether or not that measurement at a distance has occurred, roughly simultaneously and way outside the causality light cone, AFFECTS the first measurement.

This is what the famous Bell Inequality means.

And this is what the problem with Quantum Entanglement is. Quantum Entanglement implies that wilful action somewhere disturbs a measurement beyond the reach of the five known forces. It brings in all sorts of questions of a philosophical nature, and makes them into burning physical subjects. For example, does the experimenter at a distance have real free will?

Calling the world otherworldly, or many worldly, does not really help to understand what is going on. Einstein’s “Spooky Interaction At A Distance” seems a more faithful, honest rendition of reality than supposing that each and any Quantum mechanic in her shop, creates worlds, willy-nilly, each time it strikes her fancy to press a button.

What Mr. Wilczek did is what manyworldists and multiversists always do: they jump into their derangement (cats alive AND dead) after saying there is no problem. Details are never revealed.

Here is, in extenso, the fully confusing and unsupported conclusion of Mr. Wilczek:

“Everyday language is ill suited to describe quantum complementarity, in part because everyday experience does not encounter it. Practical cats interact with surrounding air molecules, among other things, in very different ways depending on whether they are alive or dead, so in practice the measurement gets made automatically, and the cat gets on with its life (or death). But entangled histories describe q-ons that are, in a real sense, Schrödinger kittens. Their full description requires, at intermediate times, that we take both of two contradictory property-trajectories into account.

The controlled experimental realization of entangled histories is delicate because it requires we gather partial information about our q-on. Conventional quantum measurements generally gather complete information at one time — for example, they determine a definite shape, or a definite color — rather than partial information spanning several times. But it can be done — indeed, without great technical difficulty. In this way we can give definite mathematical and experimental meaning to the proliferation of “many worlds” in quantum theory, and demonstrate its substantiality.”

Sounds impressive, but the reasons are either well-known, or else they rest on a sleight of hand.

Explicitly: “take both of two contradictory property-trajectories into account”: just read Feynman’s QED, first chapter. Feynman invented the ‘sum over histories’, and Wilczek is his parrot; but Feynman did not go crazy over his ‘sum over histories’: Richard smirked when his picturesque evocation was taken literally, decades later…

And now the sleight of hand: …”rather than [gather] partial information spanning several times. But it can be done — indeed, without great technical difficulty.” This is nothing new: it is the essence of the double slit discovered by that Medical Doctor and polymath, Young, around 1800 CE: when one runs lots of ‘particles’ through it, one sees the (wave) patterns. This is what Wilczek means by “partial information”. Guess what? We knew that already.
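Young’s point can be sketched numerically: sample single ‘particle’ hits one at a time from the far-field two-slit intensity I(x) ∝ cos²(πdx/λL); no individual hit looks like a wave, but the accumulated hits do. All parameters below are illustrative.

```python
import math
import random

random.seed(0)

# Far-field two-slit intensity (small-angle form): I(x) ∝ cos^2(pi*d*x/(lam*L))
d, lam, L = 1e-4, 500e-9, 1.0   # slit spacing, wavelength, screen distance (illustrative)
fringe = lam * L / d            # fringe spacing: 5 mm with these numbers

def intensity(x):
    return math.cos(math.pi * d * x / (lam * L)) ** 2

# Rejection-sample single 'particle' hits on the screen, one at a time
hits = []
while len(hits) < 20000:
    x = random.uniform(-0.02, 0.02)          # screen position, metres
    if random.random() < intensity(x):
        hits.append(x)

# No single hit is a wave, but the aggregate shows fringes:
bright = sum(1 for x in hits if abs(x) < fringe / 8)             # around a maximum
dark = sum(1 for x in hits if abs(x - fringe / 2) < fringe / 8)  # around a minimum
print(bright, dark)
assert bright > 5 * dark
```

Each hit is a single dot at one position; only the statistics over many runs reconstruct the interference pattern, which is the “partial information” in question.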

Believing that one can be, while not to be, putting that at the foundation of physics, is a new low in thinking. And it impacts the general mood, making it more favorable towards unreason.

If anything can be, without being, if anything not happening here is happening somewhere else, then is not anything permitted? Dostoyevsky had a Russian aristocrat suggest that, if god did not exist, anything was permitted. And, come to think of it, the argument was at the core of Christianism. Or, more exactly, of the Christian reign of terror which started in the period 363 CE-381 CE, from the reign of emperor Jovian to the reign of emperor Theodosius. To prevent anything from being permitted, a god had to enforce the law.

What we have now is way worse: the new nihilists (Wilczek and his fellow manyworldists) do not just say that everything is permitted. They say: it does not matter if everything is permitted, or not. It is happening, anyway. Somewhere.

Thus Many-Worlds physics endangers not just the foundations of reason, but the very justification for morality: namely, that what is undesirable should be avoided. Even the Nazis agreed with that principle. Many-Worlds physics says it does not matter, because it is happening anyway. Somewhere, out there.

So what is going on, here, at the level of moods? Well, professor Wilczek teaches at Harvard. Harvard professors advised president Yeltsin of Russia to set up a plutocracy. It ruined Russia. Same professors made a fortune from it, while others were advising president Clinton to do the same; meanwhile, Prime Minister Balladur in France was mightily impressed, and followed this new enlightenment by the Dark Side, as did British leaders, and many others. All these societies were ruined in turn. Harvard was the principal spirit, the Geist, behind the rise of plutocracy, and the engine propelling that rise was the principle that morality did not matter, because, because, well, Many-Worlds!

How does one go from the foundations of physics to the foundations of plutocracy? Faculty members in the richest, most powerful universities meet in mutual admiration societies known as “faculty clubs” (I was there!), and in lots of other you-scratch-my-back, I-scratch-yours social occasions, which they spend much of their time indulging in. So they influence each other, at the very least in the atmospheres of moods they create, and then breathe together.

Remember? It is not that everything is permitted, they chuckle: it’s happening anyway, so we may as well profit from it too. Many-Worlds physics feeds a mood favorable to ever more plutocracy, by fostering confused thinking, and that’s all there is to it. (But that, of course, is a lot, all too much.)

Patrice Ayme’


