Posts Tagged ‘Non Locality’

Entangled Universe: Bell Inequality

May 9, 2016

Abstract: The Bell Inequality shatters the picture of reality that civilization had previously established. A simple proof is presented.

What is the greatest scientific discovery of the Twentieth Century? Not Jules Henri Poincaré’s Theory of Relativity and his famous equation: E = mc². Although a spectacular theory, since Poincaré made time local in order to keep the speed of light constant, it stemmed from Galileo’s Principle of Relativity, extended to Electromagnetism. To save electromagnetism globally, Jules Henri Poincaré made time and length local.

So was the discovery of the Quantum by Planck the greatest discovery? To explain two mysteries of academic physics, Planck posited that energy was emitted in lumps. Philosophically, though, the idea was just to extend to energy the basic philosophical principle of atomism, which was two thousand years old. Energy itself had been discovered by Émilie Du Châtelet in the 1730s.

Quantum Entanglement Is NOT AT ALL Classically Predictable


Just as matter went in lumps (strict atomism), so did energy. In light of Poincaré’s E = mc², matter and energy are the same, so this is not surprising (by a strange coincidence (?), Poincaré demonstrated and published E = mc² within a few months of the same year, 1900, in which Max Planck published E = hf; Einstein used both formulas in 1905).

The greatest scientific discovery of the Twentieth Century was Entanglement… which is roughly the same as Non-Locality. Non-Locality would have astounded Newton: he was explicitly very much against it, and viewed it, correctly, as the greatest flaw of his theory. My essay “Non-Locality” entangles Newton, Émilie Du Châtelet, and the Quantum, because therefrom the ideas first sprang.


Bell Inequality Is Obvious:

John Bell, of CERN’s Theory Division, discovered an inequality which is trivial and apparently so basic, so incredibly obvious, reflecting the most elementary common sense, that it should always be true. Ian Miller (PhD, Physical Chemistry) provided a very nice perspective on all this. Here it is, cut and pasted (with his agreement):

Ian Miller: A Challenge! How can Entangled Particles violate Bell’s Inequalities?

Posted on May 8, 2016 by ianmillerblog           

  The role of mathematics in physics is interesting. Originally, mathematical relationships were used to summarise a myriad of observations, thus from Newtonian gravity and mechanics, it is possible to know where the moon will be in the sky at any time. But somewhere around the beginning of the twentieth century, an odd thing happened: the mathematics of General Relativity became so complicated that many, if not most physicists could not use it. Then came the state vector formalism for quantum mechanics, a procedure that strictly speaking allowed people to come up with an answer without really understanding why. Then, as the twentieth century proceeded, something further developed: a belief that mathematics was the basis of nature. Theory started with equations, not observations. An equation, of course, is a statement, thus A equals B can be written with an equal sign instead of words. Now we have string theory, where a number of physicists have been working for decades without coming up with anything that can be tested. Nevertheless, most physicists would agree that if observation falsifies a mathematical relationship, then something has gone wrong with the mathematics, and the problem is usually a false premise. With Bell’s Inequalities, however, it seems logic goes out the window.

Bell’s inequalities are applicable only when the following premises are satisfied:

Premise 1: One can devise a test that will give one of two discrete results. For simplicity we label these (+) and (-).

Premise 2: We can carry out such a test under three different sets of conditions, which we label A, B and C. When we do this, the results between tests have to be comparable, and the simplest way of doing this is to represent the probability of a positive result at A as A(+). The reason for this is that if we did 10 tests at A, 10 at B, and 500 at C, we cannot properly compare the results simply by totalling results.

Premise 1 is reasonably easily met. John Bell used as an example, washing socks. The socks would either pass a test (e.g. they are clean) or fail, (i.e. they need rewashing). In quantum mechanics there are good examples of suitable candidates, e.g. a spin can be either clockwise or counterclockwise, but not both. Further, all particles must have the same spin, and as long as they are the same particle, this is imposed by quantum mechanics. Thus an electron has a spin of either +1/2 or -1/2.

Premises 1 and 2 can be combined. By working with probabilities, we can say that each particle must register once, one way or the other (or each sock is tested once), which gives us

A(+) + A(-) = 1; B(+) + B(-) = 1;   C(+) + C(-) = 1

i.e. the probability of one particle tested once and giving one of the two results is 1. At this point we neglect experimental error, such as a particle failing to register.

Now, let us do a little algebra/set theory by combining probabilities from more than one determination. We might take two pieces of apparatus, and with one determine the (+) result at condition A, and the negative one at (B). If so, we take the product of these, because probabilities are multiplicative, and we can write

A(+) B(-) = A(+) B(-) [C(+) + C(-)]

because the bracketed term [C(+) + C(-)] equals 1, the sum of the probabilities of results that occurred under conditions C.


Similarly,

B(+)C(-) = [A(+) + A(-)] B(+)C(-)

By adding and expanding

A(+) B(-) + B(+)C(-) = A(+) B(-) C(+) + A(+) B(-) C(-) + A(+) B(+)C(-) + A(-)B(+)C(-)

= A(+)C(-) [B(+) + B(-)] + A(+)B(-)C(+) + A(-)B(+)C(-)

Since the bracketed term [B(+) + B(-)] equals 1 and the last two terms are positive numbers, or at least zero, we have

A(+) B(-) + B(+)C(-) ≧ A(+)C(-)

This is the simplest form of a Bell inequality. In Bell’s sock-washing example, he showed how socks washed at three different temperatures had to comply.
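The derivation above can be checked numerically. As a sketch (mine, not Bell's own proof): sample random probability distributions over the 8 ways of pre-assigning (+)/(-) outcomes to conditions A, B and C, and verify that no classical distribution ever violates the inequality.

```python
import itertools
import random

# Sketch: for ANY classical model -- any probability distribution over the
# 8 joint pre-assignments of (+)/(-) to conditions A, B, C -- the quantity
# A(+)B(-) + B(+)C(-) - A(+)C(-) is never negative. Illustrative check only.
random.seed(1)
assignments = list(itertools.product([+1, -1], repeat=3))  # (a, b, c) signs

def joint(p, i, si, j, sj):
    # probability that condition i yields sign si AND condition j yields sj
    return sum(q for o, q in p.items() if o[i] == si and o[j] == sj)

violations = 0
for _ in range(10_000):
    w = [random.random() for _ in assignments]             # random weights
    total = sum(w)
    p = {o: wi / total for o, wi in zip(assignments, w)}   # a distribution
    lhs = joint(p, 0, +1, 1, -1) + joint(p, 1, +1, 2, -1)  # A+B- + B+C-
    rhs = joint(p, 0, +1, 2, -1)                           # A+C-
    if lhs < rhs - 1e-12:
        violations += 1

print(violations)   # -> 0: no sampled classical model violates the inequality
```

The reason no violation can occur is exactly the algebra above: lhs − rhs equals the probability of A(+)B(-)C(+) plus that of A(-)B(+)C(-), both non-negative.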

An important point is that provided each sample tested gives exactly one of only two possible results, and provided the tests are applied under three sets of conditions, the mathematics says the results must comply with the inequality. Further, only premise 1 relates to the physics of the samples tested; the second is merely a requirement that the tests are done competently. The problem is, modern physicists say entangled particles violate the inequality. How can this be?

Non-compliance by entangled particles is usually considered a consequence of the entanglement being non-local, but that makes no sense because in the above derivation, locality is not mentioned. All that is required is that premise 1 holds, i.e. measuring the spin of one particle, say, means the other is known without measurement. So, the entangled particles have properties that fulfil premise 1. Thus violation of the inequality means either one of the premises is false, or the associative law of sets, used in the derivation, is false, which would mean all mathematics are invalid.

So my challenge is to produce a mathematical relationship that shows how these violations could conceivably occur. You must come up with a mathematical relationship or a logic statement that falsifies the above inequality, and it must include a term that specifies when the inequality is violated. So, any takers? My answer in my next Monday post.

[Ian Miller.]


The treatment above shows how ludicrous it should be that reality violates that inequality… BUT IT DOES! This is something which nobody saw coming. No philosopher ever imagined anything so weird. I gave an immediate answer to Ian:

‘Locality is going to come in the following way: A is going to be in the Milky Way, B and C, on Andromeda. A(+) B(-) is going to be (1/2)[cos(b-a)]². Therefrom the contradiction. There is more to be said. But first of all, I will re-blog your essay, as it makes the situation very clear.’
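To make that answer concrete, here is a minimal numeric sketch: plug the quantum prediction A(+)B(-) = (1/2)cos²(b−a) for entangled pairs into the inequality above. The analyzer angles below are my illustrative choice, not from the original comment.

```python
import math

# Sketch: insert the quantum joint probability A(+)B(-) = (1/2)cos^2(b - a)
# into the classical inequality A(+)B(-) + B(+)C(-) >= A(+)C(-).
def q(angle_1, angle_2):
    # quantum probability of (+) at the first setting and (-) at the second
    return 0.5 * math.cos(angle_2 - angle_1) ** 2

a, b, c = 0.0, 3 * math.pi / 8, 3 * math.pi / 4   # illustrative angles
lhs = q(a, b) + q(b, c)   # A(+)B(-) + B(+)C(-)
rhs = q(a, c)             # A(+)C(-)
print(f"lhs = {lhs:.4f}, rhs = {rhs:.4f}")   # lhs ~ 0.1464 < rhs = 0.2500
```

With these angles the "obvious" inequality is broken: the left side is about 0.146 while the right side is 0.25.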

Patrice Ayme’

Points Against Multiverses

December 31, 2015

Physics, the study of nature, is grounded not just in precise facts, but also in a loose form of logic called mathematics, and in even more general reasonings we know as “philosophy”. For example, the rise of Quantum Field Theory required massive Effective Ontology: define things by their effects. The reigning philosophy of physics became “shut up and calculate”. But it’s not that simple. Even the simplest Quantum Mechanics, although computable, is rife with mind-numbing mysteries (about the nature of matter, time and non-locality).

Recently the (increasing) wild wackiness of the Foundations of Physics, combined with the fact that physics, as it presently officially exists, cannot understand Dark Energy and Dark Matter, most of the mass-energy out there, has led some Europeans to organize conferences where physicists meet with reputable philosophers.

Einstein Was Classical, The World Is Not. It's Weirder Than We Have Imagined. So Far.


[Bell, of CERN’s Theory Division, discovered a now famous inequality expressing locality, which Quantum Physics violates. Unfortunately he died of a cerebral hemorrhage in 1990.]

Something funny happened in these conferences: many physicists came out of them persuaded, more than ever, or so they claimed, that they were on the right track. Like little rodents scampering out into the daylight, now sure that there was no big philosophical eagle to swoop down on them. They made many of the little reasonings in the back of their minds official (thus offering now juicy targets).

Coel Hellier below thus wrote clearly what has been in the back of the minds of the Multiverse Partisans. I show “his” argument in full below. Coel’s (rehashing of what has become the conventional Multiverse) argument is neat, cogent, powerful.

However I claim that it is not as plausible, not as likely, as the alternative, which I will present. Coel’s argument rests on a view of cosmology which I claim is neither mathematically necessary, nor physically tenable (in light of the physics we know).

To understand what I say, it’s better to read Coel first. Especially as I believe famous partisans of the Multiverse have been thinking along the same lines (maybe not as clearly). However, to make it fast, those interested by my demolition of it can jump directly to my counter, at the end: NO POINTS, And Thus No Multiverse.


Multiverses Everywhere: Coel Hellier’s Argument:

Coel Hellier, a professional astrophysicist of repute, wrote: “How many Big Bangs? A philosophical argument for a multiverse”:

“Prompted by reading about the recent Munich conference on the philosophy of science, I am reminded that many people regard the idea of a multiverse as so wild and wacky that talking about it brings science into disrepute.”

Well, being guided by non-thinking physicists will do that. As fundamental physicist Mermin put it, decades ago:

The Philosophy "Shut Up And Calculate" Is A Neat Example Of Intellectual Fascism. It Is Increasingly Undermined By The Effort Toward Quantum Computing, Where Non-Locality Reigns


Coel, claiming to have invented something which has been around for quite a while, probably decades: “My argument here is the reverse: that the idea of multiple Big Bangs, and thus of a multiverse, is actually more mundane and prosaic than the suggestion that there has only ever been one Big Bang. I’m calling this a “philosophical” argument since I’m going to argue on very general grounds rather than get into the details of particular cosmological models.

First, let me clarify that several different ideas can be called a “multiverse”, and here I am concerned with only one. That “cosmological multiverse” is the idea that our Big Bang was not unique, but rather is one of many, and that the different “universes” created by each Big Bang are simply separated by vast amounts of space.

Should we regard our Big Bang as a normal, physical event, being the result of physical processes, or was it a one-off event unlike anything else, perhaps the origin of all things? It is tempting to regard it as the latter, but there is no evidence for that idea. The Big Bang might be the furthest back thing we have evidence of, but there will always be a furthest-back thing we have evidence of. That doesn’t mean its occurrence was anything other than a normal physical process.

If you want to regard it as a one-off special event, unlike any other physical event, then ok. But that seems to me a rather outlandish idea. When physics encounters a phenomenon, the normal reaction is to try to understand it in terms of physical processes.”

Then Coel exposes some of the basic conclusions of the Standard Big Bang model:

“So what does the evidence say? We know that our “observable” universe is a region of roughly 13.8 billion light years in radius, that being the distance light can have traveled since our Big Bang. (Actually, that’s how we see it, but it is now bigger than that, at about 90 billion light years across, since the distant parts have moved away since they emitted the light we now see.) We also know that over that time our observable universe has been steadily expanding.”
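Coel's 13.8 vs roughly 90 billion light year figures can be reproduced with a short numerical integration of the standard expansion history. A sketch, assuming flat ΛCDM with illustrative parameter values (H₀ = 67.7, Ωm = 0.31); these numbers are my assumptions, not taken from Coel's post.

```python
import math

# Sketch: comoving radius of the observable universe in flat Lambda-CDM,
# via D = (c/H0) * Integral_0^zmax dz / E(z). Parameters are assumptions.
H0 = 67.7                # Hubble constant, km/s/Mpc (assumed)
Om, OL = 0.31, 0.69      # matter and dark-energy fractions (assumed)
c = 299792.458           # speed of light, km/s

def E(z):
    # dimensionless expansion rate H(z)/H0; radiation neglected
    return math.sqrt(Om * (1.0 + z) ** 3 + OL)

zmax, n = 3000.0, 300_000                  # integrate far past recombination
dz = zmax / n
integral = 0.5 * dz * (1.0 / E(0.0) + 1.0 / E(zmax))   # trapezoid rule
integral += dz * sum(1.0 / E(i * dz) for i in range(1, n))

D_mpc = (c / H0) * integral                # comoving distance, Mpc
D_gly = D_mpc * 3.2616 / 1000.0            # Mpc -> billions of light years
print(round(D_gly, 1))                     # ~46 Gly radius, so ~90+ Gly across
```

The light-travel figure (13.8 billion light years) and the comoving figure (about 46 billion light years radius) differ precisely because space kept expanding after the light was emitted, which is the point of Coel's parenthesis.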

Then astrophysicist Coel starts to consider as necessary something about the geometry of the universe which, in my opinion, is not so. Coel:

“At about 1 second after the Big Bang, what is now our observable universe was only a few light years across, and so would have fitted into (what is now) the space between us and the nearest star beyond our Sun. Before that it would have been yet smaller.”

What’s wrong? Coel assumes implicitly that the universe started from a POINT. But that does not have to be the case. Suppose the universe started as an elastic table. As we go back in time, the table shrinks, distances diminish. Coel:

“We can have good confidence in our models back to the first seconds and minutes, since the physics at that time led to consequences that are directly observable in the universe today, such as the abundance of helium-4 relative to hydrogen, and of trace elements such as helium-3, deuterium, and lithium-7.[1] Before that time, though, our knowledge gets increasingly uncertain and speculative the further back we push.”

These arguments about how the elements were generated have a long history. They could actually be generated in stars (I guess, following Hoyle and company). Star physics is not so well known that we can be sure they can’t be (stars as massive as 600 Suns seem to have been discovered; usual astrophysics says they are impossible; such stars would be hotter than the hottest stars known for sure).

Big Bangists insist that there would have been no time to generate these elements in stars, because the universe is 13.8 billion years old. But that 13.8 billion is from their Big Bang model. So their argument is circular: it explodes if the universe is, actually, 100 billion years old.

But back to Coel’s Multiverses All Over. At that point, Coel makes a serious mistake, the one he was drifting towards above:

“One could, if one likes, try to extrapolate backwards to a “time = zero” event at which all scales go to zero and everything is thus in the same place. But trying to consider that is not very sensible since we have no evidence that such an event occurred (from any finite time or length scale, extrapolating back to exactly zero is an infinite extrapolation in logarithmic space, and making an infinite extrapolation guided by zero data is not sensible). Further, we have no physics that would be remotely workable or reliable if applied to such a scenario.[2]”

…”all scales go to zero and everything is thus in the same place.” is not true, in the sense that it does not have to be. Never mind, Coel excludes it, although he claims “extrapolating back in time” leads there. It does not.

Instead, Coel invites us to Voodoo (Quantum) Physics:

“So what is it sensible to consider? Well, as the length scale decreases, quantum mechanics becomes increasingly important. And quantum mechanics is all about quantum fluctuations which occur with given probabilities. In particular, we can predict that at about the Planck scale of 10⁻³⁵ metres, quantum-gravity effects would have dominated.[3] We don’t yet have a working theory of quantum gravity, but our best guess would be that our Big Bang originated as a quantum-gravity fluctuation at about that Planck-length scale.”
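The Planck-scale figure in the quote above follows directly from three fundamental constants, l_P = √(ħG/c³). A quick check with CODATA values:

```python
import math

# Quick check of the Planck length, l_P = sqrt(hbar * G / c^3),
# from standard CODATA constants.
hbar = 1.054571817e-34   # reduced Planck constant, J s
G = 6.67430e-11          # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

l_planck = math.sqrt(hbar * G / c ** 3)
print(f"{l_planck:.3e} m")   # ~1.616e-35 m: the 10^-35 metre scale quoted
```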

Well, this is conventional pata-physics. Maybe it’s true, maybe not. I have an excellent reason why it should not (details another time). At this point, Coel is firmly in the conventional Multiverse argument (come to think of it, he did not invent it). The universe originated in a Quantum fluctuation at a point, thus:

“So, we can either regard our Big Bang as an un-natural and un-physical one-off event that perhaps originated absolutely everything (un-natural and un-physical because it would not have been a natural and physical process arising from a pre-existing state), or we can suppose that our Big Bang started as something like a quantum-gravity fluctuation in pre-existing stuff. Any physicist is surely going to explore the latter option (and only be forced to the former if there is no way of making the latter work).

At times in our human past we regarded our Solar System as unique, with our Earth, Sun and Moon being unique objects, perhaps uniquely created. But the scientific approach was to look for a physical process that creates stars and planets. And, given a physical process that creates stars, it creates not just one star, but oodles of them strewn across the galaxy. Similarly, given a physical process that creates Earth-like planets, we get not just one planet, but planets around nearly every star.”

Coel then gets into the famous all-is-relative mood, rendered famous by “French Theory”:

“It was quite wrong to regard the Sun and Earth as unique; they are simply mundane examples of common physical objects created by normal physical processes that occur all over the galaxy and indeed the universe.

But humans have a bias to a highly anthropocentric view, and so we tend to regard ourselves and what we see around us as special, and generally we need to be dragged kicking and screaming to the realisation that we’re normal and natural products of a universe that is much the same everywhere — and thus is strewn with stars like our Sun, with most of them being orbited by planets much like ours.

Similarly, when astronomers first realised that we are in a galaxy, they anthropocentrically assumed that there was only one galaxy. Again, it took a beating over the head with evidence to convince us that our galaxy is just one of many.”

Well, it’s not because things we thought were special turned out not to be that nothing is special. The jury is still out about how special Earth, or, for that matter, the Solar System, are. I have argued Earth is what it is because of the Moon and the powerful nuclear fission reactor inside Earth. The special twist being that radioactive elements tend to gather close to the star, and not in the habitable zone. So Earth may be, after all, special.

At this point, Coel is on a roll: multiverses all over. Says he:

“ So, if we have a physical process that produces a Big Bang then likely we don’t get just one Big Bang, we get oodles of them. No physical process that we’re aware of happens once and only once, and any restriction to one occurrence only would be weird and unnatural. In the same way, any physical process that creates sand grains tends to create lots of them, not just one; and any physical process that creates snowflakes tends to create lots of them, not just one.

So, we have three choices: (1) regard the Big Bang as an unnatural, unphysical and unexplained event that had no cause or precursor; (2) regard the Big Bang as a natural and physical process, but add the rider that it happened only once, with absolutely no good reason for adding that rider other than human parochial insularity; or (3) regard the Big Bang as a natural and physical event, and conclude that, most likely, such events have occurred oodles of times.

Thus Big Bangs would be strewn across space just as galaxies, stars and planets are — the only difference being that the separation between Big Bangs is much greater, such that we can see only one of them within our observable horizon.

Well, I don’t know about you, but it seems to me that those opting for (3) are the ones being sensible and scientifically minded, and those going for (1) or (2) are not, and need to re-tune their intuition to make it less parochial.”

To make sure you get it, professor Coel repeats the argument in more detail, and I will quote him there, because as I say, the Multiverse partisans have exactly that argument in the back of their mind:

“So, let’s assume we have a Big Bang originating as a quantum-gravity fluctuation in a pre-existing “stuff”. That gives it a specific length scale and time scale, and presumably it would have, as all quantum fluctuations do, a particular probability of occurring. Lacking a theory of quantum gravity we can’t calculate that probability, but we can presume (on the evidence of our own Big Bang) that it is not zero.

Thus the number of Big Bangs would simply be a product of that probability times the number of opportunities to occur. The likelihood is that the pre-existing “stuff” was large compared to the quantum-gravity fluctuation, and thus, if there was one fluctuation, then there would have been multiple fluctuations across that space. Hence it would likely lead to multiple Big Bangs.

The only way that would not be the case is if the size of the pre-existing “stuff” had been small enough (in both space and time) that only one quantum fluctuation could have ever occurred. Boy, talk about fine tuning! There really is no good reason to suppose that.

Any such quantum fluctuation would start as a localised event at the Planck scale, and thus have a finite — and quite small — spatial extent. Its influence on other regions would spread outwards, but that rate of spreading would be limited by the finite speed of light. Given a finite amount of time, any product of such a fluctuation must then be finite in spatial extent.

Thus our expectation would be of a pre-existing space, in which there have occurred multiple Big Bangs, separated in space and time, and with each of these leading to a spatially finite (though perhaps very large) universe.

The pre-existing space might be supposed to be infinite (since we have no evidence or reason for there being any “edge” to it), but my argument depends only on it being significantly larger than the scale of the original quantum fluctuation.

One could, of course, counter that since the initial quantum fluctuation was a quantum-gravity event, and thus involved both space and time, then space and time themselves might have originated in that fluctuation, which might then be self-contained, and not originate out of any pre-existing “stuff”.[5] Then there might not have been any pre-existing “stuff” to argue about. But if quantum-gravity fluctuations are a process that can do that, then why would it happen only once? The natural supposition would be, again, that if that can happen once, then — given the probabilistic nature of physics — it would happen many times producing multiple different universes (though these might be self-contained and entirely causally disconnected from each other).”

Then, lest you don’t feel Multiversal enough, professor Coel rolls out the famous argument which brings the Multiverse out of Cosmic Inflation. Indeed, the universe-out-of-nothing Quantum fluctuation is basically the same as that of Cosmic Inflation. It’s the same general mindset: I fluctuate, therefore I am (close to the motto of Paris, Fluctuat Nec Mergitur…). Coel:

“In order to explain various aspects of our observed universe, current cosmological models suggest that the initial quantum fluctuation led — early in the first second of its existence — to an inflationary episode. As a result the “bubble” of space that arose from the original quantum-fluctuation would have grown hugely, by a factor of perhaps 10³⁰. Indeed, one can envisage some quantum-gravity fluctuations leading to inflationary episodes, but others not doing so.

The inflationary scenario also more or less requires a multiverse, and for a similar reason to that given above. One needs the region that will become our universe to drop out of the inflationary state into the “normal” state, doing so again by a quantum fluctuation. Such a quantum fluctuation will again be localised, and so can only have a spatially finite influence in a finite time.

Yet, the inflationary-state bubble continues to expand so rapidly, much more rapidly than the pocket of normal-state stuff within it, that its extent does not decrease, but only increases further. Therefore whatever process caused our universe to drop out of the inflationary state will cause other regions of that bubble to do the same, leading to multiple different “pocket universes” within the inflationary-state bubble.

Cosmologists are finding it difficult to construct any model that successfully transitions from the inflationary state to the normal state, that does not automatically produce multiple pocket universes.[6] Again, this follows from basic principles: the probabilistic nature of quantum mechanics, the spatial localisation of quantum fluctuations, and the finite speed at which influence can travel from one region to another.”

The driver of the entire Multiverse thinking is alleged Quantum Fluctuations in a realm of which we know nothing. Those who are obsessed by fluctuations may have the wrong obsession. And professor Coel concludes with more fluctuations:

“The dropping out of the inflationary state is what produces all of the energy and matter that we now have in our universe, and so effectively that dropping-out event is what we “see” as our Big Bang. This process therefore produces what is effectively a multiverse of Big Bangs strewn across that inflationary bubble. Thus we have a multiverse of multiverses! Each of the (very large number of?) quantum-gravity fluctuations (that undergo an inflationary state) then itself produces a whole multiverse of pocket universes.

The point I am trying to emphasize is that any process that is at all along the lines of current known physics involves the probabilistic nature of quantum mechanics, and that means that more or less any conceivable process for creating one Big Bang is going to produce not just a single event, but almost inevitably a vast number of such events. You’d really have to try hard to fine-tune and rig the model to get only one Big Bang.

As with any other physical process, producing multiple Big Bangs is far more natural and in-line with known physics than trying to find a model that produces only one. Trying to find such a model — while totally lacking any good reason to do so — would be akin to looking for a process that could create one snowflake or one sand grain or one star or galaxy, but not more than one.”


Did the universe expand from one point? Not necessarily. It could have been from a line, a plane, a volume, even something with a crazy topology. The Big Bang is the time zero limit of the FLRW metric. Then the spacing between every point in the universe becomes zero and the density goes to infinity.
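For reference, the statement about the time-zero limit can be read off the standard FLRW line element and the matter-density scaling (standard cosmology formulas, not specific to any position in this debate):

```latex
% FLRW line element; a(t) is the scale factor, k the spatial curvature.
\[
  ds^2 = -c^2\,dt^2
       + a(t)^2\left[\frac{dr^2}{1-kr^2} + r^2\,d\Omega^2\right],
  \qquad \rho_{\mathrm{matter}} \propto a(t)^{-3}.
\]
% As t -> 0+, a(t) -> 0: every comoving separation shrinks to zero and the
% density diverges -- whether the underlying space is a point, a line, a
% plane, or an infinite volume.
```

Note that the divergence concerns the scale factor a(t), not the topology of the spatial sections, which is why the limit need not be a point.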

Did the Universe expand from Quantum Gravity? Beats me, I don’t have a theory of Quantum Gravity.

What I know is that, extrapolating from what’s known of gravity, if the universe expanded from a “point”, that point would be smaller than the Planck volume, thus the universe would be within a Black Hole. From what we know about those, no expansion.
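As a rough numeric sketch of that worry: compute the Schwarzschild radius r_s = 2GM/c² for the matter content of the observable universe. The mass M ~ 10⁵³ kg below is my order-of-magnitude assumption, not a measured value from the text.

```python
# Sketch: Schwarzschild radius r_s = 2*G*M/c^2 for a mass comparable to the
# observable universe's matter content. M is an assumed order of magnitude.
G = 6.67430e-11          # m^3 kg^-1 s^-2
c = 2.99792458e8         # m/s
M = 1.0e53               # kg (assumption, order of magnitude)

r_s = 2.0 * G * M / c ** 2          # Schwarzschild radius, metres
r_s_gly = r_s / 9.4607e15 / 1e9     # metres -> billions of light years
print(round(r_s_gly, 1))            # ~15.7 Gly: a cosmologically large radius
```

Anything that compresses such a mass inside a radius far smaller than this would, classically, sit inside its own horizon, which is the point being made.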

Once we don’t have the universe expanding from a point, we cannot argue that it expanded from one point in some sort of “stuff”. If the universe is the “stuff” itself, and it’s everywhere, and expanding from everywhere, exit the argument about a “point”.

The argument about a “point” was that: why this particular point? Why not another “Quantum Fluctuation” from another “point” in the “stuff”. Why should our “point” be special? Is it not scientific to believe in the equality of points? Except points have measure zero in three dimensional space, and thus it’s more “scientific”, “mathematical” to suppose the universe expanded from a non-measure zero set, namely a volume (and it better be bigger than the Planck Volume).

So the argument that there should be many universes because there are many points and many Quantum (Gravity) fluctuations flies apart.

Remains the argument that we need Cosmic Inflation. Yes, but if the universe expands from all over, there is only one such inflation. Cosmic Inflation does not have to appear at all points generating baby universes. It becomes more like Dark Energy.

Speaking of which, why should we have two Cosmic Inflations when we already have one? Even my spell checker does not like the idea of two inflations. It does not like the “s”. Ah, yes, the existing Big Bang needs its own Inflation.

Yet if there is only one inflation, presto, no more standard Big Bang. But then what of Helium, Lithium, etc.? How do we synthesize enough of those? Well, maybe we would have much more time to synthesize them, inside stars… Especially supergiant stars.

Another word about these Quantum Fluctuations. Are they the fundamental lesson of Quantum Physics (as the Multiversists implicitly claim)? No.

Why? There are several most fundamental lessons of Quantum Physics. Most prominent: the DYNAMICAL universe is made of waves. That fact, by itself, implies NON-LOCALITY. It also implies that neighborhoods, not points, are the fundamental concepts (one cannot localize a wave at a point). This is the origin of the “Quantum Fluctuations”.
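That waves cannot be localized at a point can be made quantitative: for a Gaussian wave packet, the position spread Δx and the wavenumber spread Δk obey Δx·Δk = 1/2, the Fourier minimum. A small numeric sketch (my construction, with an arbitrary packet width):

```python
import cmath
import math

# Sketch: for a Gaussian wave packet, the position spread dx and the
# wavenumber spread dk satisfy dx * dk = 1/2 (the Fourier minimum).
# Grid size and packet width below are arbitrary illustrative choices.
N, L = 256, 30.0
sigma = 1.3
xs = [(i - N // 2) * L / N for i in range(N)]
psi = [math.exp(-x * x / (4.0 * sigma ** 2)) for x in xs]  # |psi|^2 width sigma

def rms(vals, weights):
    # root-mean-square spread of a zero-centred distribution
    return math.sqrt(sum(v * v * w for v, w in zip(vals, weights)) / sum(weights))

dx = rms(xs, [a * a for a in psi])

# discrete Fourier transform sampled at wavenumbers k_n = 2*pi*n/L
ks = [2.0 * math.pi * (n - N // 2) / L for n in range(N)]
phi = [abs(sum(a * cmath.exp(-1j * k * x) for a, x in zip(psi, xs))) for k in ks]
dk = rms(ks, [b * b for b in phi])

print(round(dx * dk, 3))   # -> 0.5, the minimum uncertainty product
```

Squeezing Δx toward zero forces Δk to blow up: localization at a mathematical point is incompatible with being a wave.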

So we just saw that “Quantum Fluctuations” may not be the most fundamental concept. Fundamental, yes, but not most fundamental. When debating fundamentals with the Devil, you better bring exquisite logic, and a Non-Local spoon, otherwise you will be Quantum fluctuated out.

Patrice Ayme’

Is “Spacetime” Important?

November 3, 2015

Revolutions spawn from, and contribute to, the revolutionary mood. It is no coincidence that many revolutionary ideas in science: Chemistry (Lavoisier), Biological Evolution (Lamarck), Lagrangians, Black Holes, Fourier Analysis, Thermodynamics (Carnot), Wave Optics (Young, Poisson), Ampère’s Electrodynamics, spawned roughly at the same time and place, around the French Revolution.

In the Encyclopédie, under the term dimension, Jean le Rond d’Alembert speculated that time might be considered a fourth dimension… if the idea was not too novel. Joseph Louis Lagrange, in his Theory of Analytic Functions (1797), wrote: “One may view mechanics as a geometry of four dimensions…” The idea of spacetime is to view reality as a four dimensional manifold, something measured by the “Real Line” going in four directions.

There is, it turns out, a huge problem with this: R, the real line, has what is called a separated topology: distinct points have disjoint neighborhoods. However, the QUANTUM world is not like that, not at all. Countless experiments, and the most basic logic, show this:

Reality Does Not Care About Speed, & The Relativity It Brings


Manifolds were defined by Bernhard Riemann in his 1854 habilitation lecture (he died young, of tuberculosis, in 1866). A manifold is made of chunks (technically: neighborhoods), each of them diffeomorphic to a neighborhood in R^n (thus a deformed piece of R^n, see tech annex).
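Riemann's idea can be stated compactly in standard modern notation (not Riemann's own):

```latex
% An n-manifold M: a union of neighborhoods U_i, each identified with an
% open piece of R^n by a chart, with smooth overlaps.
\[
  M = \bigcup_i U_i, \qquad
  \varphi_i : U_i \to V_i \subseteq \mathbb{R}^n,
\]
\[
  \varphi_j \circ \varphi_i^{-1} :
  \varphi_i(U_i \cap U_j) \to \varphi_j(U_i \cap U_j)
  \quad \text{a diffeomorphism, for all overlapping } i, j.
\]
```

The charts φ_i are exactly the "deformed pieces of R^n": locally the manifold looks like R^n, even when globally it does not.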

Einstein admitted that there was a huge problem with the “now” in physics (even if one confines oneself to his own set-ups in Relativity theories). Worse: the Quantum completely changes the problem of the “now”… Let alone the “here”.

In 1905, Henri Poincaré showed that by taking time to be an imaginary fourth spacetime coordinate (√−1 c t), a Lorentz transformation can be regarded as a rotation of coordinates in a four-dimensional Euclidean space with three real coordinates representing space, and one imaginary coordinate, representing time, as the fourth dimension.
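
Poincaré’s observation can be checked numerically: with the fourth coordinate w = ict, a Lorentz boost becomes a “rotation” whose cosine is γ and whose sine is iβγ (so cos² + sin² = γ² − β²γ² = 1), and the Euclidean quantity x² + w² is preserved. A minimal sketch, with illustrative numbers:

```python
import cmath
import math

beta = 0.6                      # illustrative boost velocity v/c
gamma = 1 / math.sqrt(1 - beta**2)
c = 1.0                         # units with c = 1

x, t = 2.0, 5.0                 # an arbitrary event
w = 1j * c * t                  # Poincare's imaginary fourth coordinate, w = ict

# The boost as a "rotation": cos(psi) = gamma, sin(psi) = i*beta*gamma.
cos_psi, sin_psi = gamma, 1j * beta * gamma
x2 = cos_psi * x + sin_psi * w
w2 = -sin_psi * x + cos_psi * w

# The Euclidean "length" x^2 + w^2 is invariant (it equals x^2 - c^2 t^2):
assert cmath.isclose(x**2 + w**2, x2**2 + w2**2)

# And (x2, w2) reproduces the standard Lorentz transformation:
t2 = (w2 / (1j * c)).real
assert math.isclose(x2.real, gamma * (x - beta * c * t))
assert math.isclose(t2, gamma * (t - beta * x / c))
```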

Hermann Minkowski, Einstein’s professor in Zurich, concluded in 1908: “The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.”

This remark rests on Lorentz’s work: how to go from coordinates (x, t) to (x’, t’). In the simplest case (a boost at speed v along the x axis), the Lorentz transformation reads:

x’ = (x – vt)/√(1 – v^2/c^2)

t’ = (t – vx/c^2)/√(1 – v^2/c^2)

Here c is the speed of light. Lorentz found one needed such transformations to respect electrodynamics. If v/c is zero (as it is if one supposes the speed v negligible relative to c, or the speed of light infinite), one gets:

t = t’

x’ = x – vt

The first equation exhibits universal time: time does not depend upon the frame of reference. But notice that the second equation mixes space and time already. Thus, philosophically speaking, proclaiming “spacetime” could have been done before. Now, in so-called “General Relativity”, there are problems with “time-like” geodesics (but they would surface long after Minkowski’s death).
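
The limit just described can be checked numerically: for v/c small, the full Lorentz transformation collapses onto t’ ≈ t and x’ ≈ x – vt. A quick sketch, with illustrative numbers:

```python
import math

def lorentz(x, t, v, c):
    """Full one-dimensional Lorentz boost of an event (x, t)."""
    gamma = 1 / math.sqrt(1 - (v / c)**2)
    return gamma * (x - v * t), gamma * (t - v * x / c**2)

c = 299_792_458.0        # speed of light, m/s
x, t = 1_000.0, 2.0      # an arbitrary event: 1 km away, 2 s
v = 30.0                 # a car-like speed: v/c ~ 1e-7

xp, tp = lorentz(x, t, v, c)

# Galilean limit: universal time, and x' = x - vt.
assert math.isclose(tp, t)                # time barely budges
assert math.isclose(xp, x - v * t)        # Galilean space transformation
```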

Another problem with conceptually equating time and space is that time is not space: in the metric, space dimensions come with a plus sign, time with a minus sign (something Quantum Field Theory computations often ignore, by putting pluses everywhere).

In any case, I hope this makes clear that, philosophically, just looking at the equations, “spacetime” does not have to be an important concept.

And Quantum Physics seems to say that it is not: the QUANTUM INTERACTION (QI; my neologism) is (apparently, so far) INSTANTANEOUS (like old-fashioned time).

As we saw previously (“Can Space Be Faster Than Light“), the top cosmologists are arguing whether the speed of space can be viewed as faster than light. Call that the Cosmic Inflation Interaction (CII; it has its own hypothesized exchange particle, the “Inflaton”). We see that c, the speed of light, is less than CII, and may, or may not, be related to QI (standard Quantum Physics implicitly assumes that the speed of the Quantum Interaction QI is infinite).

One thing is sure: we are very far from TOE, the “Theory Of Everything”, which physicists, anxious to appear as the world’s smartest organisms, with all the power and wealth to go with it, have touted for decades.

Patrice Ayme’

Tech Annex: R is the real line, RxR = R^2 the plane, RxRxR = R^3 the usual three dimensional space, etc. Spacetime was initially viewed as just RxRxRxR = R^4. What does diffeomorphic mean? It means a copy which can be shrunk or dilated somewhat, in all imaginable ways, but without breaks, and so that all points can be tracked (a diffeomorphism does this, and so do all its derivatives).
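
For instance, tanh: R → (−1, 1) is a diffeomorphism of the whole real line onto a bounded neighborhood: it shrinks R without breaks, its derivative never vanishes, and every point can be tracked back through the inverse. A small illustrative check:

```python
import math

# tanh maps all of R smoothly onto the interval (-1, 1)...
points = [-5.0, -1.0, 0.0, 0.5, 3.0]
images = [math.tanh(x) for x in points]
assert all(-1 < y < 1 for y in images)

# ...its derivative 1 - tanh(x)^2 is everywhere positive (no folding, no tearing)...
assert all(1 - math.tanh(x)**2 > 0 for x in points)

# ...and atanh tracks every point back: the map is invertible.
assert all(math.isclose(math.atanh(y), x) for x, y in zip(points, images))
```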


September 11, 2015

Feynman: “I think I can safely say that nobody understands Quantum Mechanics.”

Einstein: “Insanity is doing the same thing over and over and expecting different results.”

Nature: “That’s how the world works.”

Wilczek (Physics Nobel Prize): “Naïveté is doing the same thing over and over, and always expecting the same result.”

Parmenides, the ancient Greek philosopher, theorized that reality is unchanging and indivisible and that movement is an illusion. Zeno, a student of Parmenides, devised four famous paradoxes to illustrate the logical difficulties in the very concept of motion. Zeno’s arrow paradox starts and ends this way:

  • If you know where an arrow is, you know everything about its physical state….
  • The arrow does not move…

Classical Mechanics found the first point to be erroneous. To know the state of a particle, one must know not only its position X, but also its velocity and mass (its momentum P). Something similar happens with Quantum Physics. To know the state of a particle, we need to know whether the state of what it interacted with before… exists, or not. According to old-fashioned metaphysics, that’s beyond weird. It’s simply incomprehensible.

The EPR Interaction: Sein Und Zeit. For Real.

[The Nazi philosopher Heidegger, an ex-would-be priest, wrote a famous book, “Being And Time“. However, rather than being a fascist fantasy, the EPR is exactly about that level of depth: how existence and time come to be! And how those interact with our will…]

With that information, X and P, position and momentum, for each particle, classical mechanics predicts a set of particles’ future evolution completely. (Formally, dynamic evolution satisfies a second order differential equation. That was thoroughly checked by thousands of officers of gunnery, worldwide, over the last five centuries.)

Highly predictive classical mechanics is the model of Einstein Sanity.

Aristotle had ignored the notion of momentum, P. For Aristotle, one needed a force to maintain motion (an objective proof of Aristotle’s stupidity; no wonder Aristotle supported, and instigated, fascist dictatorship as the best system of governance). Around 1320 CE, the Parisian genius Buridan declared that Aristotle was completely wrong and introduced momentum P, calling it “IMPETUS”.

May we be in a similar situation? Just as the Ancient Greeks ignored P, is Quantum Wave Mechanics incomplete, because of an inadequate concept of what a complete description of the world is?

Einstein thought so, and demonstrated it to his satisfaction in his EPR Thought Experiment. The EPR paper basically observed that, according to the Quantum Axiomatics, two particles, after they interacted, still formed JUST ONE WAVE. Einstein claimed that there had to exist hidden “elements of reality”, not yet identified in the (Copenhagen Interpretation of) quantum theory. Those heretofore hidden “elements of reality” would re-establish Einstein Sanity, Einstein feverishly hoped.

According to Einstein, following his friend Prince Louis De Broglie (to whom he had conferred the Doctorate), and maybe the philosopher Karl Popper (with whom he had corresponded earlier on non-locality), Quantum Mechanics appears random only because of our ignorance of those “hidden variables.” Einstein’s demonstration rested on the impossibility of what he labelled “spooky action at a distance”.

That was an idea too far. The “spooky action at a distance” has been (amply) demonstrated in the meantime. Decades of experimental tests, including a “loophole-free” test published on the scientific preprint site last month, show that the world is like that: completely non-local everywhere.

In 1964, the physicist John Bell, CERN’s theory chief, working with David Bohm’s version of Einstein’s EPR thought experiment, identified an inequality obeyed by any physical theory that is both local — meaning that interactions don’t travel faster than light — and where the physical properties usually attributed to “particles” exist prior to “measurement.”
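
In its CHSH form, the inequality says that any theory with outcomes fixed in advance of measurement obeys |S| ≤ 2, while the quantum prediction for the spin singlet reaches 2√2. An illustrative sketch of both sides (the analyzer angles are the standard optimal choices):

```python
import itertools
import math

# Local hidden variables: outcomes A1, A2, B1, B2 = +/-1, fixed in advance.
# Over all 16 deterministic strategies, the CHSH combination
# S = A1*B1 - A1*B2 + A2*B1 + A2*B2 never exceeds 2 in absolute value.
lhv_max = max(
    abs(a1*b1 - a1*b2 + a2*b1 + a2*b2)
    for a1, a2, b1, b2 in itertools.product([1, -1], repeat=4)
)
assert lhv_max == 2

# Quantum prediction for the spin singlet: E(a, b) = -cos(a - b).
def E(a, b):
    return -math.cos(a - b)

# Standard optimal analyzer angles (radians):
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
assert math.isclose(abs(S), 2 * math.sqrt(2))   # ~2.83 > 2: Bell violated
```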

(As an interesting aside, Richard Feynman tried to steal Bell’s result, at a time when Bell was not famous, at least in the USA: a nice example of “French Theory” at work! And I love Feynman…)

Einstein’s hidden “elements of reality” probably exist, but they are NON-LOCAL. (Einstein was obsessed by locality; but that’s an error. All that can be said in favor of locality is that mathematics, and Field Theory, so far, are local: that’s the famous story of the drunk who looks for his keys under the lamp post, because that’s the only place he sees.)

Either some physical influences travel faster than light, or some properties don’t exist before measurement. Or both.

I believe both happen. Yes, both: reality is both faster than light, and it is pointwise fabricated by interactions (“measurement”). Because:

  1. The EPR Thought Experiment established the faster than light influence (and that was checked experimentally).
  2. But then some properties cannot exist prior to “EPR style influence”. Because, if they did, why do they have no influence whatsoever, once the EPR effect is launched?

Now visualize the “isolated” “particle”. It’s neither truly “isolated” nor truly a “particle”, as some of its properties have not come in existence yet. How to achieve this lack of existence elegantly? Through non-localization, as observed in the one-slit and two-slit experiments.

Why did I say that the “isolated” “particle” was not isolated? Because it interfered with some other “particle” before. Of course. Thus it’s EPR entangled with that prior “particle”. And when that “particle” is “measured” (namely INTERACTS with another “particle”), the so-called “isolated” “particle” gets changed, by the “spooky action at a distance”, at a speed much faster than light.

(This is no flight of fancy of mine, consecutive to some naïve misinterpretation; Zeilinger et al., in Austria, back-checked the effect experimentally; Aspect in Paris and Zeilinger got the Wolf prize for their work on non-locality, so the appreciation for their art is not restricted to me!)

All these questions are extremely practical: they are at the heart of the difficulties in engineering a Quantum Computer.

Old physics is out of the window. The Quantum Computer is not here yet, because the new physics is not understood enough, yet.

Patrice Ayme’


January 22, 2015

Far From Being Absurd, Life May Be A Quantum Force That Gets Ever More Complex

The most striking feature of the Quantum is that, by doing something somewhere, one can change the state of something else, somewhere else. Einstein found this “spooky”. Philosophically, it just says that, whatever the universe is about, it’s not about old-fashioned “points”, as found in old-fashioned mathematics.

This “Quantum Entanglement” and the related, yet diluted “Quantum Discord” constitute the true architecture of the universe. This revelation ought to impact everything. Not just philosophy, but also psychology.

I propose the following. Life, its gathering complexity, adaptability, progress, ethology, meaning, are all animated by the very nature of the Quantum. It’s neither weird, nor absurd, it’s a force that proceeds.

Let me backtrack a bit.

“There is only one really serious philosophical problem,” Camus says in his book, The Myth of Sisyphus, “and that is suicide. Deciding whether or not life is worth living is to answer the fundamental question in philosophy. All other questions follow from that.”

What does this mean? Not much. Besides the admission that apparently much of Camus’ life was absurd. Camus should have done like Nietzsche, and gone climbing mountains solo. There, as Nietzsche did, he would have found meaning.

One does not decide if life is worth living, most of the time, because, most of the time, life is not a choice. One does not choose to breathe. One breathes. When one is thirsty, one drinks, and so on. There is a mechanical aspect to animals, who are machines which live. Most of the time, an animal’s systems are on automatic, best described by inertia.

Animals find meaning by experiencing the life that they are made for.

Recent studies have shown that young lions get neurological damage, if they don’t chew hard on tough flesh. Being a lion is meant to be tough, to be fulfilling. Camus and company lived too soft, in their hour of glory.

Lamarck believed that two forces acted on evolution. One had to do with adapting to the environment; the other was the “Pouvoir de Vie”. This “Life Power” brought increasing complexity to biological evolution. It goes without saying that this increasing complexity is observed. It is an open question whether life started on Mars (it probably did). What is clear, though, is that fortunes are spent to sterilize landers sent to other planets (including the Moon), because exobiologists are worried that today’s Earth life would take over: Earth life has become so complex, it can adapt to what space can throw at it.

This “Life Power” made reductionists spiteful, because they saw no science based reason for it. However, had they been smarter, they would have seen that it was a fact. They knew too much Classical and Thermo Dynamics… While the true nature of Quantum Physics was hidden by the siren song of the Copenhagen Interpretation.

Quantum Physics depends upon law (unfortunately, that “law” varies; it can be an infuriatingly parachuted wave equation, or another, or, more generally in the “Standard Model”, some manipulation of a hyper-complicated Lagrangian; in any case, it has to do with waves… Non-linear waves, in the general case!), initial conditions, and also the final space (a Hilbert space generated by eigenstates). This makes Quantum Mechanics somewhat teleological, an inconceivable horror for the classic-mechanical minded.

It means the Quantum looks far ahead, and everywhere, as if it were a god in the machine. It is a god, the god, in the machine…

The final space for genes is the environment. Genes are Quantum machines (a bit like Turing machines, but operated by the Quantum). This interaction between the genetic machinery and the environment means that we have a Quantum mechanism for fast adaptation to the environment.

Someday, soon, Quantum Biology may well become the queen of sciences… Ruling even mathematics.

But not only this. The Quantum force operates through Quantum Entanglement… Entanglement creates a complexity at a distance, and that complexity propagates, as the Quantum Entanglement does.

So it is as if life progressed by extending Quantum tendrils in all spaces that it can reach, and it can reach a lot. There is Lamarck’s Life Power, there is the increasing complexity, and there is progress. If biology itself progresses, a fortiori culture, the minds’ tendrils.

Why was Camus so obsessed by absurdity? Because he got surrounded by absurdity. He came from a dirt poor environment in Algeria, and, in exchange for valor and work, was given everything by the Republic. This testimony, a celebration of human rights and equal opportunity, was then confronted with “intellectuals” who inverted, and buried, all these values… In their names. Camus was told to follow Comrade Stalin, instead. When he begged to differ, he was called a colonial racist.

What is teleology? It’s the logic of the ends, the logic of purposes, logic at a distance. Socrates believed in it. Plato and Aristotle had their own versions. During the Enlightenment, dominated, and inspired, as it was by Classical Mechanics, teleology got assimilated to the discredited Christian god, and fell into contempt.

We know more now, and we can afford different, more sophisticated teleologies. I claim that life is teleological, because it evolves not just haphazardly (“stochastically”), but also teleologically (thanks to Quantum Physics, which provides eyes and a feeling… for what is going on at a distance).

Teleology at the level of hydrogen bonds? Most probably (surprise, surprise). Modify the DNA’s environment, and Quantum Computational pressure is exerted on DNA’s hydrogen bonds (among other bonds). Thus the DNA will evolve much faster than (classical “Darwinian”) haphazard mutations would have it. It is such an obvious mechanism that evolution is bound to have selected for it. Life’s little secret is the Quantum!

Experiments are planned. All this will be probably viewed as obvious, all along, within ten years.

What this teleology does is to make life ever more adapted and ever more adaptable. If one measures progress by adaptability, progress there has been, as adaptability has progressed.

Philosophically, it means that, in the deepest sense, life, thanks to the Quantum, is behaving as if it were making value judgments. For example, at the molecular level, lowest energy solutions can be evaluated, and selected.

What is the aim of that teleology? Survival of the life form adapting. A question which immediately arises is: what is life? One thing that is clear, though, is the definition of goodness. For a given lifeform, goodness means survival of said lifeform. So, naturally enough, goodness will vary according to species, but also tribes, and even individuals.

So let the biggest goodness, and the goodness of the strongest lifeform win (as Nietzsche insisted… and this is the way life always has had it… as Nietzsche himself pointed out, following Sade, who was even clearer!)

Experiments in ethology are starting to test this (EJ Winner ought to consider them! ;-)). Basic psychology, such as a sense of fairness, has obvious survival value in social species such as primates.

Intelligence is also teleological. Philosophically, one can argue that intelligence, and even culture, are an extension of the adaptability of life at the nanometer scale, harnessing the Quantum. The extension probably uses the same Quantum machinery that starts to be put in evidence at the molecular level (say in the chlorophyll molecule).

If Homo Is Aware, Is the Universe Aware? It’s a bit like the question of pondering whether a planet harboring life is alive, or not. Earth is certainly alive, because life enables the very conditions on Earth that allow its on-going existence, so far. The advent of oxygen producing lifeforms enabled the progress of complexity, hence the appearance of intelligence and advanced ethology of the conscious type.

Speaking of this, a question naturally arises: what is the definition of life? Life, so far, has no definition. The greatest minds have been left speechless.

Considering the preceding, clearly, such a definition will have to involve the Quantum, and Entanglement, besides reproduction. Crystals, and Quasi-crystals, can reproduce an Entanglement architecture, but they are intensely boring. They can be described by just one equation.

Life is any Quantum Entanglement architecture which can approximately self-reproduce and adapt while being described by a potentially growing set of equations.

Probably, we are aspects of the Entanglements and Delocalization that the Quantum is capable of, at least in a little corner of the universe. We don’t need no Sisyphus: we can operate on it, at a distance.

Spooky, admittedly, but that’s what we are.

All these revelations change the overall mood towards the purpose of life. Life is not absurd; it simply is a growing, entangled complexity, and our morality and intelligence, hopes, meaning, and consciousness are entangled with it. But the solutions we cling to, all too long, can well be, indeed, all too disjointed, point-wise, disconnected, and thoroughly absurd.

Equally clearly, the (meta, or final!) solution is to have all the absurdities gobbled up by life itself.

Patrice Ayme’


December 28, 2014

Non-Locality, acting at a distance, without intermediaries, is the stuff of legends in tales for little children. A sorcerer does something somewhere, and something happens, or is felt, somewhere else. Newton himself rejected it. Isaac said the gravitation theory which he had helped to elaborate was “absurd”, precisely because it implicitly used action “upon another at a distance”:

“It is inconceivable that inanimate Matter should, without the Mediation of something else, which is not material, operate upon, and affect other matter without mutual Contact… That Gravity should be innate, inherent and essential to Matter, so that one body may act upon another at a distance thro’ a Vacuum, without the Mediation of any thing else, by and through which their Action and Force may be conveyed from one to another, is to me so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it.” (Isaac Newton, Letters to Bentley, 1692/3.)

Du Châtelet Discovered Energy, Infrared Radiation, Correcting Newton On His Confusion Of Momentum (Buridan) and Energy, Which She Established

[Yes, one of civilization’s most important physicists and thinkers was a woman; but don’t ask the French, they have never heard of her… because she was a woman.]

However Émilie Du Châtelet pointed out that: “…hypotheses eventually become truths for us if their probability increases to such a point that this probability can morally pass for certainty…. In contrast, an hypothesis becomes improbable in proportion to the number of circumstances found for which the hypothesis does not give a reason. And finally, it becomes false when it is found to contradict a well-established observation.” (Du Châtelet’s Lectures on Physics, 1740. Notice the subtlety of the thinking.)

Every Quantum process contradicts Locality, thus, Émilie Du Châtelet would say, Locality is a false hypothesis.

Gravitation got better described (not much) by making gravitation into a field propagating at the speed of light. It is not a trivial modification: it immediately predicts gravitational waves. If two huge star-like objects (such as pulsars) rotate around each other, they should generate such waves, the waves should carry energy away, and those two objects ought to fall towards each other at a predictable rate. Said rate is indeed observed, thus Einstein’s gravitational equation (obtained by talking a lot with others, such as Hilbert, Grossmann, etc.) seems correct.
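
The predicted energy loss can be estimated from the standard quadrupole formula: for a circular binary of masses m1, m2 and separation a, the radiated power is P = (32/5)(G^4/c^5)(m1 m2)^2 (m1 + m2)/a^5. A rough sketch with Hulse-Taylor-like numbers (two 1.4 solar-mass neutron stars, separation about two million kilometers; all figures approximate, and the real, eccentric orbit radiates roughly an order of magnitude more than this circular estimate):

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg

m1 = m2 = 1.4 * M_sun  # two neutron stars, Hulse-Taylor-like
a = 1.95e9             # approximate orbital separation, m

# Quadrupole formula for the gravitational-wave luminosity of a
# circular binary (the eccentric real orbit radiates ~10x more):
P = (32 / 5) * G**4 / c**5 * (m1 * m2)**2 * (m1 + m2) / a**5

assert 1e23 < P < 1e24   # ~6e23 W: the orbit measurably decays
```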

Einstein’s main motivation for his theory of “General Relativity” was that he wanted to explain inertia (why fast rotating planets develop a bulge at the equator, or, more generally, an acceleration v^2/r). That worry, called Mach’s Principle, actually originated 100% with Newton. Newton put water in a pail, twisted and twisted and twisted a rope from which the pail was suspended, and let go: the pail rotated faster and faster, and the water inside crawled up.

Einstein’s basic wishful logic was that gravitation = inertia (he called that the “Principle of Equivalence”). So, by making a theory of gravitation, Einstein would make one of inertia, and become a giant among giants (of Du Châtelet’s caliber, say).

Silly. Silly idea, doomed to fail.

Why silly? Once gravitation was made into a field, Einstein and company made it into curvature in a manifold (called “spacetime”; the basic idea was elaborated by genius Riemann, two generations earlier, although implicitly attributed to Einstein by the ignorant ones).

So gravitation is locally determined: once at a point A, gravitation, that is, curvature of spacetime, is determined in a(ny) neighborhood of A (call it N).

The distant stars do not influence N much, if at all. Yet, inertia is clearly determined by the distant galactic clusters.  Einstein could not understand this.

But now physicists understand better that Einstein was deluded, and (Soviet physicist) Fock’s critique, that Einstein’s General Relativity is just a theory of gravitation, is universally (albeit silently) accepted.

So let me repeat slowly, as I suspect many readers will not understand this either: inertia, as far as present day physics can see, is a Non-Local effect. Inertia has been Non-Local, ever since Buridan discovered it, seven centuries ago (1320 CE; time flies!)

Einstein completely failed at understanding inertia. Einstein even failed to realize that it was a Non-Local effect, although that is completely obvious. So he came out obsessed by Non-Locality, while being angry at it (so he was open to the Non-Local objection of philosopher-physicist Sir Karl Popper! Hence the EPR paper, more or less lifted from Popper.)

All this to say that I am not shocked by Non-Locality: I just have to go out, and look at the stars, move about, and I see Non-Locality.

Many, if not most physicists are horrified by Non-Locality.

Philosophically, though, being afraid of Non-Locality makes no sense. Once I was broaching Quantum Physics with my dad. I explained what I understood of the problem of Non-Locality to him.

My dad did not know much physics, but he was a scientist. Admitted to the famed ENA (the school of conspirators from which the present leaders of France come), he declined it, and, instead, following the path of his own father, an amateur-professional geologist, he himself became a (highly successful) non-academic geologist (he discovered Algeria’s fortune).

My Dad said: “Non-Locality is obvious. To think things would get ever smaller, just the same, made no sense.”

With this philosophical perspective, the following arises: physical space is not made of points (although Quantum Field Theory assumes it is, one of its many problems).

When physicists talk about Non-Locality, they feel the urge to get into the “Bell Inequality”. But it’s a convoluted, over-specialized, contrived way to get at Non-Locality (I say this, although I respect the late John Bell as much as I despise Feynman when he tried to steal Bell’s work… Although, in general I do respect and love Feynman, especially in light of his appreciation for my own ideas).

Bell’s theorem says that Local Hidden Variable theories imply an Inequality that Quantum Physics violates. So Bell’s is a work which predicts that something false is not true.

My approach to Non-Locality is made for Primary School. It goes first through:

  • The Uncertainty Principle:

Suppose you want to know where an object is. Suppose all you have is touch. So you kick it. However, if you kick it, it goes somewhere else. That’s the Uncertainty Principle.

Why touch? Because light is touch. It turns out that light carries energy and momentum. Anybody who lies in the sun will agree about the energy. To demonstrate the momentum of light requires a bit more experimental subtlety.

Could you kick the object gently? No. That’s where the Wave Principle kicks in. Waves ignore objects which are smaller than themselves: they just turn around them, as anybody who has seen a twenty meter tsunami wave enter a Japanese port will testify.

So, to detect a small object, one needs a small wavelength, high frequency wave. However the energy of a Quantum wave (at least a light wave) is proportional to its frequency.

So the more precise the determination of (position of) the object, the higher the frequency of the wave, the greater the energy and momentum conferred to the object, etc.
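
This chain of reasoning is just arithmetic: a photon of wavelength λ carries energy E = hc/λ and momentum p = h/λ, so locating an object to within Δx ≈ λ kicks it by Δp ≈ h/λ, and the product Δx·Δp stays of order h no matter which λ one picks. A sketch (the wavelengths are illustrative):

```python
import math

h = 6.62607015e-34       # Planck's constant, J*s (exact by SI definition)
c = 299_792_458.0        # speed of light, m/s

for lam in (500e-9, 500e-12, 500e-15):   # visible light, x-ray, gamma
    E = h * c / lam                      # photon energy E = hf = hc/lambda
    p = h / lam                          # photon momentum p = h/lambda
    dx = lam                             # best position resolution ~ wavelength
    # Shorter wavelength: better resolution, but a bigger momentum kick...
    # ...and the product is always of order Planck's constant:
    assert math.isclose(dx * p, h)
```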

  • Conservation of Momentum: 

One has axioms, in physics, as in mathematics. Modern physics axioms include the conservation of energy and momentum. Newton knew of the latter, and confused it with the former. A French woman, Gabrielle Émilie Le Tonnelier de Breteuil, marquise du Châtelet, discovered (kinetic) energy (“force vive”). As she also discovered infrared radiation, she obviously could have done more, had she not died from a fever, at age 43, after giving birth to her fourth child. (Her lover Voltaire, also a physicist, quipped that “Émilie du Châtelet was a great man whose only defect was to be a woman.”)

Fundamental hypotheses in contemporary physics are conservation of energy and momentum (something the Multiverse violates, thus, into the bin of silly ideas).

  • The Non-Local Interaction:

So say two particles, such as a positron-electron pair, are created together and have total momentum zero (a completely realistic situation: machines do this, for medicine).

Knowing the momentum of (say) the electron E, gives that of the positron P (the vector is exactly opposite to that of the electron). Classical and Quantum mechanics say the same.

So, without having disturbed P (it could be next to Alpha Centauri, 4 light years away), we know its momentum. Should one measure it later, one will find it as said. (The latter experiment, retrospective checking of entanglement, was actually accomplished by the Austrian Zeilinger and his team!)
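
A minimal numerical sketch of this bookkeeping (the electron momentum vector is illustrative; the 511 keV figure is just the rest energy m_e c^2 expressed in keV, which is why PET medical scanners look for back-to-back 511 keV photons):

```python
m_e = 9.1093837015e-31     # electron (= positron) mass, kg
c = 299_792_458.0          # speed of light, m/s
keV = 1.602176634e-16      # one kilo-electronvolt, in joules

# A pair created with total momentum zero (illustrative vector, SI units):
p_electron = (3.0e-24, -1.0e-24, 2.0e-24)

# Measuring the electron's momentum fixes the positron's, however far away:
p_positron = tuple(-p for p in p_electron)
total = tuple(e + p for e, p in zip(p_electron, p_positron))
assert total == (0.0, 0.0, 0.0)

# The rest energy m_e c^2 is the famous 511 keV of PET imaging:
E_keV = m_e * c**2 / keV
assert round(E_keV) == 511
```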

However, the basic set-up of Quantum Physics says that the measurement creates the state (my formulation; you will not read that in textbooks, although it’s clearly what Bohr wanted to say, but he did not dare, lest his academic reputation get vilified: he had only a Nobel Prize in physics, after all…).

So the state of P, maybe a few light years away, was created by measuring E.

How come?

The basic Quantum set-up was designed for laboratory experiments, not Cosmological Quantum effects. So it did not need to consider all the consequences of this.

Following Du Châtelet, I will say that we are in obvious need of a new hypothesis, the QUANTUM INTERACTION (ex “Collapse of the Wave Packet”). It explains what we observe (instead of trying desperately to say that we cannot possibly observe what we observe).

Following Newton, I will say it is absurd to suppose that the effect of E on P is instantaneous. So this Quantum Interaction goes at a speed I call TAU (it’s at least 10^10 times the speed of light: 10,000,000,000 times c).
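
Bounds of this type come from simple arithmetic: if the two measurements are separated by a distance d and complete within a time window Δt, any influence linking them must travel at least d/Δt. The numbers below are purely illustrative, not those of any particular experiment:

```python
c = 299_792_458.0    # speed of light, m/s

d = 1.0e4            # hypothetical detector separation: 10 km
dt = 1.0e-12         # hypothetical timing window: 1 picosecond

v_min = d / dt       # minimum speed of any influence linking the two events
ratio = v_min / c

assert ratio > 1e7   # already tens of millions of times faster than light
```

Tightening the timing window, or widening the separation, pushes the lower bound up; it never pins the speed down from above.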

New physics coming to a Quantum Computer near you.

And of course, said new physics will have giant impacts on philosophy (be it only by presenting new models of how things may be done), or Free Will (is it really free if it takes its orders from Andromeda?). This is going to be fun.

Patrice Ayme’

Free Will & Quantum

December 27, 2014

It is natural to suspect that those who evoke the Quantum at every turn are a bit deranged. Has a Quantum obsession replaced God? God died, but not the need to obsess? (Dominique Deux made a wry remark in that direction.)

Nietzsche himself is an example. Having “killed” (his father’s) “God“, Nietzsche obsessed about the (Indian based) “Eternal Return of the Same”, something from the Zeitgeist. Henri Poincaré soon demonstrated that some dynamical systems roughly do this (although I certainly do not believe all Solar Systems will; recent observations have confirmed my hunch: many Solar Systems are very unstable, the Sun-Jupiter harmony may be rare…)

Quasar & Host Galaxy [NASA-ESA Hubble]

[The picture, from 1996, is poor, as the Quasar is very far. We need another telescope, but plutocrats don’t want it, because they would have to pay more taxes, thus rendered unable to treat the Commons as dirt as much as they desire. Yet, in spite of the plutocratically imposed low resolution, one can see the mighty ultra-relativistic jets arising from the Quasar’s core.]

Obsessing about the Quantum is obsessing about the true nature of Nature. As it turns out it’s much simpler and magical than the classical picture.

Nature is the Quantum writ large. Relativity, the Standard model, the Big Bang: these are all amusements of dubious veracity. The Quantum is the Real Thing. And it’s everywhere. Most people just don’t know it yet.

Even Biological Evolution Theory, or Free Will, are going to be revealed to be within the Quantum’s empire.

There is something called “Free Will Skepticism”, as massaged in Gregg Caruso’s Scientia Salon essays, and his (celebrity) TED talk. It is not so much skepticism about the existence of Free Will, as skepticism that those who loudly believe in “Free Will” have a constructive, progressive attitude in the society of the USA.

Ultimately, the problem of Free Will will have to tackle the problem of what are exactly the free agents in Quantum Physics.

Well, nobody knows for sure. What the free agents are is the central problem of Quantum Computing, and the high energy physicists’ wild goose chase for high energy processes went the other way, for two generations, so we don’t know what determines the evolution of the Quantum systems.

High energy processes are of interest only in high energy places, none of which are found where the biosphere lies. In other words, much of physics, high energy physics, used the Quantum, but did not try to figure it out.

Not knowing what the free agents, if any, of Quantum Physics are implies that we do not know what determines the evolution of the simplest processes.

The simplest processes are, by definition, the Quantum processes.

As long as we do not really know what controls simplest systems, talking about whether there is Free Will, or not, is shooting the breeze.

Free Will is even a problem in Quantum Non-Local analysis.

Experiments on non-locality are ongoing in some hard-core physics labs. Those experiments aim to get around the problem that we may have no Free Will.

The situation is this: doing a measurement at point A was found to have an influence at point B. The influence propagates orders of magnitude faster than the speed of light (as the formalism of basic Quantum Physics theory predicts).

French physicist Alain Aspect was able to show this with crafty acousto-optic devices (he got the Wolf Prize for this, and, clearly, ought to get the Physics Nobel). The question remained, though, that maybe Alain Aspect himself was a pre-determined phenomenon deprived of Free Will.

To check this, Aspect’s experiment is going to be re-run with distant quasars in charge (rather than just some French guys). MIT physics department is doing this.

Free Will is the last major loophole of Bell’s Inequality — a 50-year-old theorem on Spin which, as it is violated by experiments, means that the universe is based not on the (topologically separated) laws of classical physics, but on Non-Locality.

Actually this is all very simple. (No need for the fancy high school math of Bell’s theorem, a particular case of Non-Locality with spins.)

Two quasars on opposite sides of heavens are so distant from each other, that they would have been out of causal contact since the (semi-mythical) Big Bang some 14 billion years ago: there are no possible means for any third party to communicate with both of them since the (semi-mythical) beginning of the universe…

Now, of course, if my own version of the universe is true, and the universe is actually 100 billion years old, the “loophole” re-opens…

But of course, as a philosopher, I know perfectly well that I have Free Will, and, as a momentarily benevolent soul, I extend the courtesy to Alain Aspect.

The universe is Non-Local, even my Free Will is Non-Local, it does not have to be like long dead gentlemen thought it should be.

Patrice Ayme’

QUANTUM ENTANGLEMENT: Nature’s Faster Than Light Architecture

November 22, 2014

A drastically back-to-basic reasoning shows that the universe is held together and ordered by a Faster Than Light Interaction, QUANTUM ENTANGLEMENT. Nature is beautifully simple and clever.

(For those who spurn Physics, let me point out that Quantum Entanglement, being the Fundamental Process, occurs massively in the brain. Thus explaining the non-local nature of consciousness.)


The Universe is held together by an entangled, faster than light interaction. It is time to talk about it, instead of the (related) idiocy of the “multiverse”. OK, it is easier to talk idiotically than to talk smart.

Entanglement Propagates, Says the National Science Foundation (NSF)

I will present Entanglement so simply that nobody has spoken of it this way before.

Suppose that out of an interaction, or system S, come two particles, and only two particles, X and Y. Suppose the energy of S is known, that its position is the origin of the coordinates one is using, and that its momentum is zero.

By conservation of momentum, momentum of X is equal to minus momentum of Y.

In Classical Mechanics, knowing where X is tells us immediately where Y is.

One can say that the system made of X and Y is entangled. Call that CLASSICAL ENTANGLEMENT.

This is fully understood, and not surprising: even Newton would have understood it perfectly.
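
For the skeptical, this Classical Entanglement can be sketched in a few lines of code. A toy one-dimensional decay follows; the numbers and the function name are illustrative, not from any physics library:

```python
import random

def decay_two_body():
    """Toy classical decay: a parent system S at rest splits into X and Y.

    Total momentum is zero, so p_Y is fully determined by p_X
    (a hypothetical 1-D sketch; units are arbitrary).
    """
    p_x = random.uniform(-10.0, 10.0)  # momentum carried off by X
    p_y = -p_x                         # conservation of momentum
    return p_x, p_y

p_x, p_y = decay_two_body()
# Measuring X "tells us immediately" everything about Y:
assert p_x + p_y == 0.0
```

Even Newton would have found this code boring: knowing one momentum is knowing the other.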

The same situation holds in Quantum Physics.

This is not surprising: Quantum Physics ought not to contradict Classical Mechanics, because the latter is fully demonstrated, at least for macroscopic objects X and Y. So why not for smaller ones?

So far, so good.

In Quantum Physics, Classical Entanglement gets a new name. It is called QUANTUM ENTANGLEMENT. It shows up as a “paradox”, the EPR.

That paradox makes the greatest physicists freak out, starting with Einstein, who called QUANTUM ENTANGLEMENT “spooky action at a distance”.

Why are physicists so shocked that what happens in Classical Mechanics would also be true in Quantum Physics?

Some say John Bell, chief theorist at CERN, “solved” the EPR Paradox, in 1964. Not so. Bell, who unfortunately died of a heart attack at 64, showed that the problem was real.

So what’s the problem? We have to go back to what is the fundamental axiom of Quantum Physics (Note 1). Here it is:

De Broglie decreed in 1924 that any and all particles X of energy-momentum (E,p) are associated to a wave W. That wave W is uniquely defined by E and p. So one can symbolize this by: W(E,p).

W(E,p) determines in turn the behavior of X. In particular all its interactions.

De Broglie’s obscure reasoning seems to have been understood by (nearly) no one to this day. However, it was checked right away for electrons, and De Broglie got the Nobel, all for himself, within five years of his thesis.

Most of basic Quantum Mechanics is in De Broglie’s insight. Not just the “Schrödinger” equation, but the Uncertainty Principle.


Take a “particle X”. Let’s try to find out where it is. Well, that means we will have to interact with it. Wait, if we interact, it is a wave W. How does one find the position of a wave? Well the answer is that one cannot: when one tries to corner a wave, it becomes vicious, as everybody familiar with the sea will testify. Thus to try to find the position of a particle X makes its wave develop great momentum.

A few years after De Broglie’s seminal work, Heisenberg explained that in detail in the particular case of trying to find where an electron is, by throwing a photon on it.

This consequence of De Broglie’s Wave Principle was well understood in several ways, and got to be known as the Heisenberg Uncertainty Principle:

(Uncertainty of Position) × (Uncertainty of Momentum) ≥ ħ/2 (half the reduced Planck Constant)


The Quantum Wave, and thus the Uncertainty, applies to any “particle” (it could be a truck).
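
To see why the truck’s wave never bothers anybody, one can plug numbers into De Broglie’s relation λ = h/p. The masses and speeds below are my own illustrative choices:

```python
# De Broglie wavelength: lambda = h / p  (h = Planck's constant).
# Illustrative numbers only: a ~10-tonne truck at highway speed
# versus a modestly fast electron.
h = 6.626e-34        # Planck constant, J*s
m_e = 9.109e-31      # electron mass, kg

lam_truck = h / (1.0e4 * 30.0)     # ~2e-39 m: utterly unobservable
lam_electron = h / (m_e * 1.0e6)   # ~7e-10 m: atomic scale, easily diffracted

print(f"truck:    {lam_truck:.2e} m")
print(f"electron: {lam_electron:.2e} m")
```

Same wave principle for both; only the scale differs, by some thirty orders of magnitude.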

It is crucial to understand what the Uncertainty Principle says. In light of all particles being waves (so to speak), the Uncertainty Principle says that, AT NO MOMENT DOES A PARTICLE HAVE, EVER, A PERFECTLY DEFINED MOMENTUM and POSITION.

It would contradict the “particle’s” wavy nature. It’s always this question of putting a wave into a box: you cannot reduce the box to a point. There are NO POINTS in physics.
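
This “wave in a box” picture can be checked numerically: take a Gaussian wave packet, compute its position spread directly and its momentum spread via a Fourier transform, and the product sits at the Heisenberg lower bound. A minimal sketch, in units where ħ = 1; the grid sizes are arbitrary choices:

```python
import numpy as np

# Numerical check of the Uncertainty Principle (hbar = 1) for a Gaussian
# wave packet: the spreads satisfy sigma_x * sigma_p >= 1/2, with
# equality for a Gaussian.
N, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

psi = np.exp(-x**2 / 4.0)                    # Gaussian with sigma_x = 1
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize

sigma_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)

# Momentum-space amplitudes via FFT; p = hbar * k with hbar = 1.
phi = np.fft.fft(psi)
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)
prob_p = np.abs(phi)**2 / np.sum(np.abs(phi)**2)
sigma_p = np.sqrt(np.sum(p**2 * prob_p))

print(sigma_x * sigma_p)  # ~0.5: squeezing sigma_x would inflate sigma_p
```

Shrink the packet in x, and the momentum spread blows up: the cornered wave gets vicious, as promised.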

Now we are set to understand why Quantum Entanglement created great anxiety. Let’s go back to our two entangled particles, X and Y, sole, albeit not lonely, daughters of system S. Suppose X and Y are a light year apart.

Measure the momentum of X, at universal time t (Relativity allows one to do this, thanks to a process of slow synchronization of clocks described by Poincare’ and certified later by Einstein). The momentum of Y is equal and opposite.

But, wait, at same time t, the position of Y could be determined.

Thus the Uncertainty Principle would be violated at time t at Y: one could retrospectively fully determine Y’s momentum and position, and Y would have revealed itself to be, at that particular time t, a vulgar point-particle… as in Classical Mechanics. (But there are no point-particles in Quantum Physics: that is, no points in Nature; that’s the whole point!)


(This contradiction is conventionally called the “EPR Paradox”; it probably ought to be called the De Broglie-Einstein-Popper Paradox, or, simply, the Non-Locality Paradox.)

This is the essence of why Quantum Entanglement makes physicists with brains freak out. I myself have thought of this problem, very hard, for decades. However, very early on, I found none of the solutions by the great names presented to be satisfactory. And so I developed my own. The more time passes, the more I believe in it.

A difficulty I had is that my theory, if true, creates lots of cosmic garbage (;-)).

At this point, Albert Einstein and his sidekicks (one of them was just used to translate from Einstein’s German) wrote:

“We are thus forced to conclude that the quantum-mechanical description of physical reality given by wave functions is not complete.” [Einstein, A; B Podolsky; N Rosen (1935-05-15). “Can Quantum-Mechanical Description of Physical Reality be Considered Complete?”. Physical Review 47 (10): 777–780.]

The EPR paper ends by saying:

“While we have thus shown that the wave function does not provide a complete description of the physical reality, we left open the question of whether or not such a description exists. We believe, however, that such a theory is possible.”

This is high lawyerese: even as vicious a critic as your humble servant cannot find anything wrong with this craftily composed conceptology.

Einstein had corresponded on the subject with the excellent philosopher Karl Popper earlier (and Popper found his own version of the EPR). This is no doubt why he was more circumspect than he had been before.

Let’s recapitulate the problem, my way.

After interacting, according to the WAVE PRINCIPLE, both widely separating particles X and Y share the SAME WAVE.

I talk, I talk, but this is what the equations that all physicists write say: SAME WAVE. They can write all the equations they want, I think about them.

That wave is non-local, and yes, it could be a light year across. Einstein had a problem with that? I don’t.

Those who cling to the past, tried everything to explain away the Non-Locality Paradox.

Einstein was a particular man, and the beginning of the EPR paper clearly shows he wanted to cling back to particles, which I view as his error of 1905: namely, that particles are particles during fundamental processes (he got the Physics Nobel for it in 1922; however, as I will not get the Nobel, I am not afraid to declare the Nobel Committee in error; Einstein deserved several Nobels, yet he made a grievous error in 1905, which has led most physicists astray, to this day… hence the striking madness of the so-called “multiverse”).

The Bell Inequality (which Richard Feynman stole for himself!) conclusively demonstrated that experiments could be made to check whether the Quantum Non-Local effects would show up.

The experiments were conducted, and the Non-Local effects were found.

That they would not have been found would have shattered Quantum Physics completely. Indeed, all the modern formalism of Quantum Physics is about Non-Locality, right from the start.

So what is my vision of what is going on? Simple: when one determines, through an interaction I, the momentum of particle X, the wave made of X and Y, W(X,Y), so to speak, “collapses”, and transmits the fact of I to particle Y at a faster than light speed TAU. (I have computed that TAU is more than 10^10 times the speed of light, c; Chinese scientists have established a minimum value for TAU of 10^4 c.)

Then Y reacts as if it had been touched. Because, well, it has been touched: amoebae-like, it may have extended a light year, or more.

Quantum Entanglement will turn into Einstein’s worst nightmare. Informed, and all around, quasi-instantaneously. Tell me, Albert, how does it feel to have thought for a while one had figured out the universe, and then, now, clearly, not at all?

(Why not? I did not stay stuck, as Einstein did, making metaphors from moving trains, clocks, etc; a first problem with clocks is that Quantum Physics does not treat time and space equivalently. Actually the whole Quantum conceptology is an offense to hard core Relativity.)

Faster than light entanglement is a new way to look at Nature. It will have consequences all over. Indeed particles bump into each other all the time, so they get entangled. This immediately implies that topology is important to classify, and uncover hundreds of states of matter that we did not suspect existed. None of this is idle: Entanglement  is central to Quantum Computing.

Entanglement’s consequences, from philosophy to technology, are going to dwarf all prior science.

Can we make predictions, from this spectacular, faster than light, new way to look at Nature?


Dark Matter. [2]

Patrice Ayme’


[1]: That the De Broglie Principle, the Wave Principle implies Planck’s work is my idea, it’s not conventional Quantum as found in textbooks.

[2]: Interaction density depends upon matter density. I propose that Dark Matter is the remnants of waves that were too spread-out to be fully brought back by Quantum Wave Collapse. In low matter density, thus, will Dark Matter be generated. As observed.


August 8, 2013

Abstract: simple considerations of a philosophical, non computational, nature, on Space, Time and the Quantum show that the former two are not basic (and that some apparently most baffling traits of the Quantum are intuitive!). Progress in knowledge of the interdependence of things should not be hampered by traditional prejudices. (Not an easy essay: readers are encouraged to jump around it like kangaroos!)


What is time? Today’s physics does not answer that question, it just computes with the notion as if it were obvious. To find out what time could be, a little bout of metaphysics different from the tentative one in today’s understanding of nature, is needed.

Einstein amplified the notion that the universe is about spacetime (x,t) in a reference frame F. He, and his friends Hilbert and Besso, used the mathematical and physical ideas created by Riemann (and his Italian successors: Ricci, Levi-Civita, etc.).

Riemann: “Solitary and Uncomprehended Genius” (Poincaré said)

Lorentz discovered one had to assume that (x’,t’) in a moving frame F’ cruising by at a steady speed v is related to (x,t) in frame F according to the Lorentz transformations.

Lorentz got the Nobel Prize for finding these (thanks to the recommendation of the towering Henri Poincaré); I am not pointing this out to compare the relative merits of celebrities, but to establish the hierarchy of the discoveries they made, and thus the logic therein. (Poincaré’s 1904 “Principe de Relativité” was firmly established before Einstein showed up on the scene, and the latter’s contributions, although enlightening, have been vastly overestimated.)

Not that the initial logic of a discovery always perdures, but sometimes it’s important. The Einstein cult has been obscuring reality; Einstein would have been the first one to decry it (Einstein basically ran away with the idea of Poincaré that the constancy of the speed of light, c, being always observed, was thus a fundamental law of physics, and made it the foundation of what Poincare’ called “Relativite'”).

Only by using the Lorentz transformations are the equations of electrodynamics preserved. In other words: only thus is the speed of light measured to be c in both F, using (x,t) and F’, using (x’,t’).
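
One can verify this invariance mechanically: apply the Lorentz transformations to a light pulse moving at c in frame F, and it still moves at c in F’. A minimal sketch in units where c = 1; the boost speed is an arbitrary choice:

```python
import math

def lorentz(x, t, v, c=1.0):
    """Lorentz transformation into a frame moving at speed v (units with c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    x_prime = gamma * (x - v * t)
    t_prime = gamma * (t - v * x / c**2)
    return x_prime, t_prime

# A light pulse satisfies x = c * t in frame F...
t = 3.0
x = 1.0 * t
x_p, t_p = lorentz(x, t, v=0.6)
# ...and is still found moving at c in frame F':
print(x_p / t_p)  # 1.0
```

A Galilean transformation (x’ = x − vt, t’ = t) would have given a different speed, which is exactly why Lorentz and Poincaré needed local time.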

So what is time t?

According to the scheme in Relativity, it’s simple: given the sanctity of the speed of light, c, and space x, time can be measured by having a photon of light going between two perfect mirrors, and counting the impacts (that’s what is called a light clock; it’s very useful to derive most equations of Relativity).
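
The light clock computation is short enough to write out: seen from a frame where the clock moves at v, the photon travels a hypotenuse, and solving for the tick time yields exactly the Lorentz factor. A sketch with illustrative numbers:

```python
import math

# Toy light clock: a photon bounces between two mirrors a distance d apart.
# At rest, one tick takes 2d/c. Seen from a frame where the clock moves at v,
# in half a tick T/2 the mirror advances v*T/2 while the photon covers
# sqrt(d^2 + (v*T/2)^2) at speed c; solving gives T = (2d/c) * gamma.
c, d, v = 1.0, 1.0, 0.8   # illustrative values, units with c = 1

tick_rest = 2 * d / c
tick_moving = (2 * d / c) / math.sqrt(1 - (v / c) ** 2)

gamma = 1.0 / math.sqrt(1 - (v / c) ** 2)
print(tick_moving / tick_rest)  # equals gamma: the moving clock ticks slower
```

Most equations of Relativity indeed fall out of this one triangle.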

Indeed space is measured by the time it takes light to go back and forth. This sounds like a circular logic: time is needed to measure space and space is needed, to measure time.

Does that mean one of the two, say, time, is derivative?

I used to think so (propped by the lack of time in Quantum Theory, see below). But, actually, no.

Indeed, time can be localized down to the proton scale.

One can measure time at that scale with how long it takes some elementary particle to decay. Or because to any particle is associated its De Broglie wave, hence a frequency (and that particle can be confined in as small a space as a proton).

Basically time can be measured at a point.

However, space, by definition is… non local (space is always an extent, all the more if time is used to measure it, thanks to c; technically my idea is that space depends upon the holonomy group, time does not; thus Minkowski’s “spacetime” belongs to the dustbin!).

Thus the conceptual universe in which electromagnetism basks makes it look as if, somehow, time were more fundamental.

The situation is the exact opposite in Quantum Theory. Quantum Theory is full of entangled situations. Measure such a situation somewhere, and it changes all over. “Measure such a situation somewhere, and it changes all over” means that a Quantum Process is all over it. Whatever “it” is. Einstein called that “spooky interaction at a distance”. I call it the QUANTUM INTERACTION.

Einstein tried to escape the spookiness. Instead, I claim it should be embraced. After all, Quantum spookiness makes life possible.

We indeed know now that this spooky Quantum interaction is fundamental to life. It allows life to be more efficient than any understanding from classical mechanics could have it. Vision and the chlorophyll molecule use Quantum spookiness at a distance. This recent discovery did not surprise me at all. I fully expected it, just as I fully expect that consciousness will be revealed to be a Quantum effect (an easy prediction, at this point, in this Quantum universe!)

A computer using the Quantum Theory would be more efficient, for the same reason: the Quantum computer computes all over, in a non local way. (The computers we have now are just sleek electron-using versions of the classical computers the ancient Greeks had, with their little teethed wheels; the Quantum computer is founded on a completely different process.)

This “spooky” non locality has alarmed many a thinker. But notice this simple fact: space itself, even the classical space used in electromagnetism, is non local (as one uses light travel, plus time, to determine space).

So it’s only natural that space in Quantum Theory be non local too.

The “spookiness” is easily understood thus: spacetime physics a la Einstein and company singles out a particular interaction, electromagnetism, and the sanctity of c, to measure the universe with. Why this one, and not another of the fundamental interactions we know?

Quantum Theory (QT) gets out of this would-be choice by choosing none of the traditional forces to measure space with!

As QT has it, as it stands, QT does not need to measure the universe. (I believe it does, using the Quantum Interaction, and I can support that with impossible simultaneous measurements at great distances, but that’s another, more advanced set of considerations.)

Those who think thinking is reduced to computing will object that it is not the same type of non-locality (the one I claim to see in classical space and the “spooky” one of Quantum space). Whatever: the non-locality in Quantum Theory does not depend upon light speed. That’s the important point.

There, the lesson cannot be emphasized enough: on the face of it, the basic set-up of Quantum Theory tells us that light, and, in particular light speed, is NOT fundamental.

These few observations above, should they prove to be as deep and correct as I believe they are, show the power of the philosophical method, even in today’s physics. Some will scoff, but without considering carefully all the philosophy behind spacetime a la Einstein.

A warning for those who scoff about the importance of meta-physics: the founding paper of differential geometry in mathematics, and physics, was a lecture by Bernhard Riemann. It’s full of metaphysics and metamathematics, for the best.

The paper had just one equation (and it is a definition!)

That lecture was entitled “Über die Hypothesen, welche der Geometrie zu Grunde liegen” (“On The Hypotheses Which Underlie Geometry”). (Call these “hypotheses” meta-geometrical, metamathematical, or metaphysical.)

The lecture was published in 1868, two years after its author’s death (and 14 years after he gave it). Riemann’s main idea was to define manifolds and curvature. (Riemannian) manifolds were defined by a metric. Curvature ought to be a tensor, Riemann said, not just a simple number (a scalar, as with Gaussian curvature).

From top to bottom: positive, negative and no curvature.

Riemann generalized the notion of curvature to any dimension, thanks to the Riemann Curvature Tensor (the simplified Ricci form of which appears in Einstein’s gravitational field equation).

Here is some meta-physics; Riemann: “It is quite conceivable that the geometry of space in the very small does not satisfy the axioms of [Euclidean] geometry… The properties which distinguish space from other conceivable triply-extended magnitudes are only to be deduced from experience.”

Gauss, Riemann’s teacher, knew this so well that he had tried to measure the curvature of space, if any, using a triangle of tall peaks. Gauss found no curvature, but now we know that gravitation is best described as curved spacetime.

(This lack of measured Gaussian curvature shows that it’s not because a phenomenon is not found under some conditions that it is absent under others; in biology, the proof by Medawar that Lamarckism was false, using mice, for which he got the Nobel (being British, ;-)), comes to mind: no Lamarckism in Medawar’s experiments did not prove that there would be no Lamarckism in other experiments; now four Lamarckist mechanisms are known!)

Twentieth Century physics, in particular the theory of gravitation, exploits the following fact, understood by Riemann as he lay dying from tuberculosis in Italy. Force is a tautology for geodesics coming closer (or not). Thus curvature is force.

Einstein remarkably said: “Only the genius of Riemann, solitary and uncomprehended, had already won its way by the middle of the last century to a new conception of space, in which space was deprived of its rigidity, and in which its power to take part in physical events was recognized as possible.”

(I find this statement all the more remarkable and prophetic in that it is not in Einstein’s physics, and could not be, but rather in the one I would like to have, where fundamental dynamic processes literally create space…)

The fact that a tautology is at the heart of Einstein’s Theory of Relativity means that it explains nothing much! (Relativity fanatics are going to hate that statement!…although it describes very well what happens to objects evolving in spacetime, especially GPS, let it be said in passing.)

“Only to be deduced from experience”, said mathematician Riemann. What’s the ultimate experience we have? Quantum Theory. And what did we find QT said? You can’t measure with space, you can’t measure with time (although clearly the Quantum depends upon the differential topology of the situation, see the Aharonov-Bohm effect! where, by the way, the space metric is made fun of once again!)

Last splendid idea from Riemann (1854-1866):

“Researches starting from general notions, like the investigation we have just made, can only be useful in preventing this work from being hampered by too narrow views, and progress in knowledge of the interdependence of things from being checked by traditional prejudices.”



Patrice Ayme


September 1, 2011



Abstract: Why Quantum Physics violates locality. Twenty-second century primary school version.


 LOCALITY: What does locality mean? It means that what happens at a point is determined by what is happening in a neighborhood of that point within a small enough distance, as determined by light. Moreover, it means that the universe U is made of points: U = Union points. Points, by definition, are singletons (they have no elements in the sense of set theory), and they have dimension zero: nothing belongs to a point.

 SPACETIME: Generally the universe is called “spacetime”. However, this concept, spacetime, introduces the assumptions of Einstein’s Special Relativity, as boosted by Minkowski, established before Quantum Physics.

In particular, the spacetime hypothesis assumes that the universe is a product of what is called in mathematics the “real line”, which assumes, among other things, what is called a T2, or Hausdorff, topology: two different points are separated by disjoint neighborhoods (to use the appropriate concepts from general topology).

 Quantum Physics violates both LOCALITY and SPACETIME.

 How do we know this? When one analyzes the smallest processes, one finds that, in plenty of cases, the SMALLEST PROCESSES, THE INDIVISIBLE PROCESSES, SPREAD IN TIME OVER ARBITRARY BIG REGIONS, ON THEIR OWN (THAT IS WITHOUT ANY INTERACTION WITH THE REST OF THE UNIVERSE). Are they then big, or are they small? Verily, therein a mystery of the Quantum.

In this innocuous-sounding observation I just uttered, that the smallest processes spread as big as they want while remaining as small, hence as indivisible, as there is, one finds the entire origin of Quantum non-locality. No need for fancy mathematics, or even any equation. The idea is as dramatic as can be.

 Indeed, non locality boils down to a matter of definition. As the indivisible process spreads out, it stays one, well, by definition. It means that touching it anywhere is like touching it everywhere.

When two particles come out of such an indivisible process, they are called “ENTANGLED”. The semantics gets in the way. What we have is not actually two particles, but two possible experimental channels, which can be widely separated, where, if we experiment, two particles will show up, widely separated if the channels are so.

 Thus we see that the two channels are entangled, and touching one is also touching the other.

 What are some of these cases where the smallest, indivisible processes spread out macroscopically? Well, they are so common, that they seem to be the rule, not the exception: 

Diffraction (the 1-slit experiment) is such a case: the slit is small, the diffraction spreads big. Arbitrarily big.

 The famous 2 slit experiment is another case: the slits are close by, the interference screen is at a large distance. Hey, the 2-slit could be a galactic cluster. A cluster made of galaxies, each 200,000 light years across. 

 Any fundamental process where two particles separate after interacting. (In particular the simple set-up of the Einstein Podolski Rosen thought experiment, such as the Bohm total spin zero variant.)
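
The two-slit case above can be sketched numerically: in the far field, two narrow slits give an intensity proportional to cos²(π·d·sinθ/λ), a pattern that spreads arbitrarily wide from an arbitrarily small source. The wavelength and slit separation below are illustrative choices:

```python
import numpy as np

# Two-slit interference on a distant screen (far-field sketch, narrow slits):
# intensity ~ cos^2(pi * sep * sin(theta) / lam). Small close-by slits,
# a pattern as wide as the screen you care to build.
lam, sep = 500e-9, 2e-6            # illustrative: green light, 2-micron spacing
theta = np.linspace(-0.5, 0.5, 9)  # angles on the screen, radians

intensity = np.cos(np.pi * sep * np.sin(theta) / lam) ** 2
print(np.round(intensity, 3))      # fringes, with a maximum at theta = 0
```

One indivisible process per photon, yet the fringes cover the whole screen: the spread is in the process itself.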

 It is highly likely that such an effect is used all over biology, to transport energy close to 100% efficiency over macroscopic distances (an allusion to the fact that this is not only about pure science, but the economic fall-out will be considerable, once this is so well understood that we can dominate the processes involved).

Not all Quantum processes spread all over space. Bohr got the Nobel for his patchy, haphazard atomic theory, which worked because electronic matter waves interfere coherently with themselves (these matter waves, the de Broglie waves, are called orbitals; they make the body of atoms, what we call matter, and sharing orbitals is much of what we call chemistry).


 Thus we have found the following, from the most basic set-up of Quantum theory:

 A Fundamental Quantum Process, is one, until interacted with, even if it is spread over space. This is what Quantum Non Locality is all about.

Some crystals can make, out of one photon, two photons with opposite polarizations, which could then be sent into two different channels, a light year apart.

Parallel transporting along the two channels the polarization directions, we would always find them opposite. A more subtle relation between the polarizations holds, and was found to be true even when the polarization angles are moved randomly during the photons flight time (Aspect experiment, for which Alain Aspect got the Wolf Prize in 2010).

By making all sorts of supplementary hypotheses about local hidden parameters and local measurements of polarizations, though, one finds that this should not be the case. This contradiction is called the Bell Inequality. (I like Bell very much, and I approve of his quest, which is also my quest. I apologize to his many admirers for presenting his efforts in an arguably demeaning light.)
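
The contradiction can be checked by arithmetic alone. In the CHSH form of the Bell Inequality, any local-hidden-parameter model must satisfy |S| ≤ 2, while the Quantum correlation for a spin singlet, E(a,b) = −cos(a−b), pushes S to 2√2. A minimal check; the angles below are the standard optimal choice:

```python
import math

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Local hidden parameters force |S| <= 2; the Quantum singlet
# correlation E(a, b) = -cos(a - b) reaches 2*sqrt(2).
def E(a, b):
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828 > 2: the classical bound is violated
```

Experiments side with the cosine, not with the bound; hence Non-Locality.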

The preceding, most simple way to look at Non-Locality gives an excellent reason not to make such supplementary hypotheses: the logic of the Quantum is as simple as it gets. As long as I am left alone, says the Quantum, I am one. And indivisible.


 What does it all mean? First Einstein and company in their “EPR” paper, talked about “elements of reality”. They did NOT talk about ELEMENTS OF SPACE. They did not have the notion. I will argue they should have, but of course, the fact that they did not have it was central to their (erroneous) reasoning. 

 Einstein and company wondered how a particle could communicate with another, even across light years. Wrong amazement. Particles are not “communicating“. Actually, they are not “particles” to start with.

For decades I have advocated a radical solution, as exposed above, taking the definitions at face value: the two particles are one and the same; they are in the same place at the time of the Quantum interaction, and stop being so as a result. The topology used in physics, the same that the dinosaurs used, the T2, separated topology, is not appropriate to the real universe. OK, it was appropriate for pterosaurs. But it’s not appropriate across the universe. BTW, the pterosaurs, the best fliers, by far, that this planet has known, went extinct, although they were obviously very smart.

Is this the end all, be all? Quantum Physics a la Bohr reigns, and nothing else can be said? No. If the bare-bones theory above is true, the entire theory of spacetime is false: space is not made of points, and it is constructed interactively. The case of time is even more so. Imagine there are no heavens made of points, only the sky you make, etc. Thus more has been said.


Patrice Ayme


 Note 1: An enlightening analogy: The question of using the Quantum set-up to transmit information superluminally, or what Einstein called “Spooky Interaction At A Distance” has come up. The preceding, as it is, sticking to strict Quantum theory, demolishes both views, with crushing simplicity.

 How? OK, let’s make an experimental metaphor. Suppose we have an infinitely rigid bar between the two entangled particles: each time we experiment with one, we turn the bar, and so it turns at the other end too. Simple. Some will say: ha ha ha, but then I can look at the bar, and I see the bar turn, and so information has been transmitted. Not so fast. We are dealing with as elementary a Quantum process as possible, which means the particle was not observed, before the bar turned. So we see the bar turn, but we do not know if it was, or not, turning before. To tell if a signal was sent, one has first to define a state where one can say no signal was received.  


 Note 2: I was rough to the point of inaccuracy with the Bell Inequality above. There is a subtlety, which can be seen easily, say, in the case of spin. Spin measurements in various directions are not independent of each other. Thus, if one measures spin in the close channel, a measurement of spin in another, random direction in the distant channel will show that influence, and a local determination of spin in the distant channel by parallel transport will not exhibit this. BTW, introducing the notion of parallel transport into the conversation, which is the whole point of the “local hidden parameter” debate, is from yours truly.

 Note 3: And let’s not forget to smile about the naïve who developed frantically supersymmetric superstrings super budgeted super having-nothing-to-do-with reality… While forgetting to think about the fundamentals as described above.