Archive for the ‘Non Locality’ Category


December 10, 2016

In Patrice’s Sub-Quantum Reality (PSQR), Matter Waves are real (in the Quantum Theory Copenhagen Interpretation (QTCI), Matter Waves are just probability waves). There has been no direct evidence, so far, that Matter Waves are real. But the times, they are a-changing, as the other one, who got his Nobel today, said.

Both Dark Matter and Dark Energy are consequences of PSQR. So: observing both Dark Matter and Dark Energy constitutes a proof of PSQR.

“General Relativity” predicted twice the deflection of light by the Sun that Newtonian Mechanics did. The effect was minute, detectable only in grazing starlight, and was first measured during the solar eclipse of 29 May 1919 (by the ultra famous British astronomer and physicist Eddington). Thus, as 95% of the universe’s matter-energy is Dark Matter or Dark Energy, my prediction carries even more weight.
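The factor of two is easy to check. Here is a small sketch using the standard weak-field formulas (θ_GR = 4GM/(c²R), twice the Newtonian corpuscular value 2GM/(c²R); the constants are the usual ones, nothing here is specific to PSQR):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m (grazing ray)
c = 2.998e8        # speed of light, m/s

rad_to_arcsec = 180 / math.pi * 3600

theta_newton = 2 * G * M_sun / (c**2 * R_sun)  # Newtonian, light as corpuscles
theta_gr = 2 * theta_newton                    # General Relativity: twice as much

print(f"Newtonian deflection: {theta_newton * rad_to_arcsec:.2f} arcsec")  # ~0.87"
print(f"GR deflection:        {theta_gr * rad_to_arcsec:.2f} arcsec")      # ~1.75"
```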

PSQR also predicts “fuel-less” propulsion, via a variant of the effect which produces Dark Matter in PSQR:

Dark Matter Pushes, Patrice Ayme Says. Explaining NASA’s Findings?

How does Dark Matter create propulsion? Well, that it does is evident: just look at galactic clusters (more details another day). A Matter Wave will expand, until it singularizes. If it expands enough, it will become so big that it will lose a (smaller) piece of itself during re-singularization. That  piece is the Dark Matter.

Thus visualize this: take a cavity C, and bounce a Matter Wave around it (there is plenty of direct theoretical and experimental evidence that this can be arranged).

Make a hole H in the boundary of C (this is no different from the Black Body oven, the consideration of which led Planck to discover the Quantum).

Some Dark Matter then escapes. By the hole. 

However, Dark Matter carries energy momentum (evidence from Galaxies, Galactic Clusters, etc.).

Hence a push. A Dark Matter push.

The (spectacular) effect has apparently been observed by NASA.

Does this violate Newton’s Third Law? (As it has been alleged.)

No. I actually just used Newton’s Third Law, the Action = Reaction law. So PSQR explains the observed effect in combination with the Action = Reaction law, “proving” both.

How could we prove PSQR? The cavity system’s energy-momentum should decrease after a while, and the decrease should equal the observed push exactly.
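Here is a hedged back-of-the-envelope version of that Action = Reaction bookkeeping. The 1.2 mN/kW figure is the thrust-to-power ratio reported in the NASA Eagleworks tests; treating the escaping residue as radiation-like, carrying momentum p = E/c, is purely my illustrative assumption, not a statement of how PSQR’s Dark Matter actually carries momentum:

```python
c = 2.998e8                       # speed of light, m/s

# Reported thrust-to-power ratio (NASA Eagleworks tests, 2016): ~1.2 mN per kW
reported = 1.2e-3 / 1e3           # N per W

# If the cavity leaked a radiation-like residue (p = E/c), the best possible
# thrust-to-power ratio would be the photon-rocket value, 1/c:
photon_rocket = 1.0 / c           # N per W

print(f"reported ratio:      {reported:.2e} N/W")
print(f"photon-rocket limit: {photon_rocket:.2e} N/W")
print(f"reported / limit:    {reported / photon_rocket:.0f}x")
# ~360x: if the push is real, the residue must carry momentum far more
# 'cheaply' than light does -- exactly the kind of discrepancy the proposed
# energy-momentum accounting test would have to settle.
```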

Patrice Ayme’


Warning: The preceding considerations are at the edge of plausible physics. (A tiny group of dissenting physicists is even busy making theories where Dark Matter does not exist. The consensus is that Dark Matter exists, but is explained by a variant of the so-called “Standard Model”, using “Supersymmetry”, “WIMPs”, or “Axions”. My own theory, PSQR, is by far the most exotic, as it throws the Quantum Theory Copenhagen Interpretation, QTCI, out the window.)

DARK GALAXY (Explained?)

October 1, 2016

A giant galaxy made nearly entirely of Dark Matter has been discovered. Theories of Dark Matter proposed by people salaried for professing physics cannot explain (easily, if at all!) why there would be so much Dark Matter in one galaxy. I can. In my own theory, Dark Matter is not really matter, although matter gives birth to it, under some particular geometrical conditions. In my theory, in some geometrodynamic situations, a galaxy will churn inordinate amounts of Dark Matter quickly. So I was not surprised by the find.

There are many potential theories of Dark Matter. Most are fairly conventional. They typically hypothesize new particles (some of these new particles could come from new symmetries, such as supersymmetry). I do not see how they can predict why these particular particles appear in some places, and not others. However, the importance of location, of geometry, is a crucial feature of my own theory.

I posit that the Quantum Interaction (copyright myself) does not have infinite range. Thus, quantum interactions, in some conditions of low mass-energy density, leave behind part of the Quantum Wave. Such debris have mass-energy, so they exert gravitational pull, but they have little else besides (most of the characteristics of the particles they were part of concentrate somewhere else).

I Can Explain This Dark Galaxy, By Changing The Foundations Of Physics. No Less.

[From the Hawaiian Gemini telescope.]

In my own theory, one can imagine that the geometry of a galaxy is, at some point, extremely favorable to the creation of Dark Matter: it is just a question of dispersing the matter just so. The Dark Galaxy has 1% of the stars of our Milky Way, or less. In my theory, once Dark Matter has formed, it does not seem possible to make visible matter out of it again (broken Quantum Wave debris float around like a cosmic fog).

All past science started as a mix of philosophy and science-fiction (Aristarchus, Lucretius, Giordano Bruno, Immanuel Kant, Lamarck are examples). One can only surmise it will be the same in the future, and this is supported by very good logic: guessing always comes before knowing. Those who claim that science will never again be born from philosophy and fantasy are saying that really new science will never happen again. They say that all the foundations of science are already known. So they are into preaching, just like religious fanatics.

It was fashionable to say so, among physicists in the 1990s, the times of the fable known as TOE, the so-called Theory Of Everything. Shortly after this orgasm of self-satisfaction by self-appointed pontiffs, the evidence became clear that the universe’s mass-energy was mostly Dark Energy, and Dark Matter.

This is an interesting case of a shared meta-mood: also in the 1990s, clever idiots (Fukuyama, etc.) claimed history had ended: a similar claim from the same period, permeated by the same mood of stunted imagination. The advantage, for those who pontificated that way? They could claim they knew everything: they had become gods, living gods.

I had known about Dark Matter all along (the problem surfaced nearly a century ago). I considered it a huge problem: it held galaxies and galactic clusters together. But maybe something had been overlooked. Meanwhile Main Stream Physics (MSP) dutifully, studiously, ignored it. For decades. Speaking of Dark Matter made one despicable, a conspiracy theorist.

Another thing MSP ignored was the foundations of physics. Only the most prestigious physicists, such as Richard Feynman, could afford to repeat the famous opinion that “nobody understands Quantum Mechanics”. I made the real foundations of physics the main axis of my intellectual life’s reflection: trying to understand what nobody wanted to understand, what nobody thought they could afford to understand. (So doing, I was forced to reflect on why it is that people do not want to understand the most fundamental things, even while professing they do. It is particularly blatant in, say, economics.)

I have long discovered that the real foundations of physics are entangled with those of mathematics (it is not just that physics, nature, is written in mathematics, as Galileo wrote; there is a dialogue between the mathematics that we invent and the universe that we discover: they lead to each other). For example, whether the infinity axiom is allowed in mathematics changes the physics radically (the renormalization problem of physics is solved if one removes the infinity axiom).

Right now, research on the foundations of (proper) physics is hindered by our lack of nonlinear mathematics: Quantum Mechanics, as it is, is linear (waves add up in the simplest way). However the “collapse of the wave packet” is obviously nonlinear (this is why it’s outside of existing physics, for lack of math). From that Quantum collapse, when incomplete due to the great distances involved, comes Dark Matter. At least, so I propose.

Patrice Ayme’

DARK MATTER, Or How Inquiry Proceeds

September 7, 2016

How to find really new knowledge? How do you find really new science? Not by knowing the result: this is what we don’t have yet. Any really new science will not be deduced from pre-existing science. Any really new knowledge will come out of the blue. Poetical logic will help before linear logic does.

The case of Dark Matter is telling: this increasingly irritating elephant in the bathroom has been in evidence for 80 years, lumbering about. As the encumbering beast did not fit existing science, it was long religiously ignored by the faithful, as a subject not worthy of serious inquiry by very serious physicists. Now Dark Matter, five times more massive than Standard Model matter, is clearly sitting heavily outside of the Standard Model, threatening to crush it into irrelevance. Dark matter obscures the lofty pretense of known physics to explain everything (remember the grandly named TOE, the so-called “Theory Of Everything“? That was a fraud, snake oil, because main stream physics celebrities crowed about TOE, while knowing perfectly well that Dark Matter dwarfed standard matter, and was completely outside of the Standard Model).

Physicists are presently looking for Dark Matter, knowing what they know, namely that nature has offered them a vast zoo of particles, many of them without rhyme or reason (some have rhyme, a symmetry, a mathematical group such as SU(3) acting upon them; symmetries revealed new particles, sometimes).

Bullet Cluster, 100 Million Years Old. Two Galaxy Clusters Colliding. The Dark Matter, In Blue, Is Physically Separated From the Hot, Standard Matter Gas, in Red.

[This sort of picture is most of what we presently have to guess what Dark Matter could be; the physical separation of DM and SM is most telling to me: it seems to indicate that SM and DM do not respond to the same forces, something my Quantum theory predicts. It’s known that Dark Matter causes gravitational lensing, as one would expect, since it was first found by its gravitational effects, in the 1930s…]

However, remember: a truly completely new piece of science cannot be deduced from a pre-existing paradigm. Thus, if Dark Matter really came down to finding a new particle type, it would be interesting, but not as interesting as if it came, not from a new particle type after all, but from a completely new law of physics.

This is the quandary about finding truly completely new science. It can never be deduced from ruling paradigms, and may actually overthrow them. What should then be the method to use? Can Descartes and Sherlock Holmes help? The paradigm presented by Quantum Physics helps. The Quantum looks everywhere in space to find solutions: this is where its (“weird”) nonlocality comes in. Nonlocality is crucial for interference patterns and for finding lowest energy solutions, as in the chlorophyll molecule. This suggests that our minds should go nonlocal too, and we should look outside of a more extensive particle zoo to find what Dark Matter is.

In general, searching for new science should be done by looking everywhere, not hesitating to possibly contradict what is more traditional than well established.

An obvious possibility is, precisely, that Quantum Physics is itself incomplete, and generating Dark Matter in places where said incompleteness would be most blatant. More precisely, Quantum processes, stretched over cosmic distances, instead of being perfectly efficient and nonlocal over gigantically cosmic locales, could leave a Quantum mass-energy residue, precisely in the places where extravagant cosmic stretching of Quanta occurs (before “collapse”, aka “decoherence”).

The longer a conventional explanation (namely a new type of particle) for Dark Matter is not found, the more likely my style of explanation becomes. How could one demonstrate it? Not by looking for new particles, but by conducting new and more refined experiments on the foundations of Quantum Physics.

If this guess is correct, whatever is found askew in the axioms of present Quantum Physics could actually help future Quantum Computer technology (because the latter works with Quantum foundations directly, whereas conventional high-energy physics tends to eschew the wave aspects, due to the high frequencies involved).

Going on a tangent is what happens when the central, attractive force, is let go. A direct effect of freedom. Free thinking is tangential. We have to learn to produce tangential thinking.

René Descartes tried to doubt the truth of all his beliefs, to determine which beliefs he could be certain were true. However, at the end of the “Meditations”, he hastily concluded that we can distinguish between dream and reality. It is not that simple. The logic found in dreams is all too similar to the logic used by full-grown individuals in society.

Proof? Back to Quantum Physics. On the face of it, the axioms of Quantum Physics have a dream-like quality (there is no “here”, nor “there”; “now” is everywhere; and, mysteriously, the experiment is Quantum, whereas the “apparatus” is “classical”). Still, most physicists, after insinuating they have figured out the universe, carefully eschew the subject. The specialists of Foundations are thoroughly confused: see Sean Carroll.

However unbelievable Quantum Physics is, however dream-like, physicists believe in it, and don’t question it any more than cardinals would Jesus. Actually, it’s this dream-like nature which, shared by all, defines the community of physicists. Cartesian doubt, pushed further than Descartes pushed it, will question not just the facts and the allegations, but the logic itself. And even the mood behind it.

Certainly, in the case of Dark Matter, some of the questions civilization has to ask should be:

  1. How sure are we of the Foundations of Quantum Physics? (Answer: very sure, all too sure!)
  2. Could it not be that Dark Matter is a cosmic-size experiment on the Foundations of Quantum Physics?

Physics, properly done, does not just question the nature of nature. Physics, properly done, questions the nature of how we find out the nature of anything. Physics, properly done, even questions the nature of why we feel the way we do. And the way we did. About anything, even poetry. In the end, indeed, even the toughest logic is a form of poetry, hanging out there, justified by its own beauty, and nothing else. Don’t underestimate moods: they call what beauty is.

Patrice Ayme’

Entangled Universe: Bell Inequality

May 9, 2016

Abstract: The Bell Inequality shatters the picture of reality civilization previously established. A simple proof is produced.

What is the greatest scientific discovery of the Twentieth Century? Not Jules Henri Poincaré’s Theory of Relativity and his famous equation: E = mc². Although a spectacular theory, since Poincaré made time local, in order to keep the speed of light constant, it stemmed from Galileo’s Principle of Relativity, extended to Electromagnetism. To save electromagnetism globally, Jules Henri Poincaré made time and length local.

So was the discovery of the Quantum by Planck the greatest discovery? To explain two mysteries of academic physics, Planck posited that energy was emitted in lumps. Philosophically, though, the idea was just to extend to energy the basic philosophical principle of atomism, which was two thousand years old. Energy itself was discovered by Émilie Du Châtelet in the 1730s.

Quantum Entanglement Is NOT AT ALL Classically Predictable

Just as matter comes in lumps (strict atomism), so does energy. In light of Poincaré’s E = mc², matter and energy are the same, so this is not surprising (by a strange coincidence (?), Poincaré demonstrated and published E = mc² a few months apart, in the same year, 1900, as Max Planck did E = hf; Einstein used both formulas in 1905).

The greatest scientific discovery of Twentieth Century was Entanglement… which is roughly the same as Non-Locality. Non-Locality would have astounded Newton: he was explicitly very much against it, and viewed it, correctly, as the greatest flaw of his theory. My essay “Non-Locality” entangles Newton, Émilie Du Châtelet, and the Quantum, because therefrom the ideas first sprung.


Bell Inequality Is Obvious:

John Bell, of CERN’s Theory Division, discovered an inequality which is trivial, and apparently so basic, so incredibly obvious, reflecting the most elementary common sense, that it should always be true. Ian Miller (PhD, Physical Chemistry) provided a very nice perspective on all this. Here it is, cut and pasted (with his agreement):

Ian Miller: A Challenge! How can Entangled Particles violate Bell’s Inequalities?

Posted on May 8, 2016 by ianmillerblog           

  The role of mathematics in physics is interesting. Originally, mathematical relationships were used to summarise a myriad of observations, thus from Newtonian gravity and mechanics, it is possible to know where the moon will be in the sky at any time. But somewhere around the beginning of the twentieth century, an odd thing happened: the mathematics of General Relativity became so complicated that many, if not most physicists could not use it. Then came the state vector formalism for quantum mechanics, a procedure that strictly speaking allowed people to come up with an answer without really understanding why. Then, as the twentieth century proceeded, something further developed: a belief that mathematics was the basis of nature. Theory started with equations, not observations. An equation, of course, is a statement, thus A equals B can be written with an equal sign instead of words. Now we have string theory, where a number of physicists have been working for decades without coming up with anything that can be tested. Nevertheless, most physicists would agree that if observation falsifies a mathematical relationship, then something has gone wrong with the mathematics, and the problem is usually a false premise. With Bell’s Inequalities, however, it seems logic goes out the window.

Bell’s inequalities are applicable only when the following premises are satisfied:

Premise 1: One can devise a test that will give one of two discrete results. For simplicity we label these (+) and (-).

Premise 2: We can carry out such a test under three different sets of conditions, which we label A, B and C. When we do this, the results between tests have to be comparable, and the simplest way of doing this is to represent the probability of a positive result at A as A(+). The reason for this is that if we did 10 tests at A, 10 at B, and 500 at C, we cannot properly compare the results simply by totalling results.

Premise 1 is reasonably easily met. John Bell used as an example, washing socks. The socks would either pass a test (e.g. they are clean) or fail, (i.e. they need rewashing). In quantum mechanics there are good examples of suitable candidates, e.g. a spin can be either clockwise or counterclockwise, but not both. Further, all particles must have the same spin, and as long as they are the same particle, this is imposed by quantum mechanics. Thus an electron has a spin of either +1/2 or -1/2.

Premises 1 and 2 can be combined. By working with probabilities, we can say that each particle must register once, one way or the other (or each sock is tested once), which gives us

A(+) + A(-) = 1; B(+) + B(-) = 1;   C(+) + C(-) = 1

i.e. the probability of one particle tested once and giving one of the two results is 1. At this point we neglect experimental error, such as a particle failing to register.

Now, let us do a little algebra/set theory by combining probabilities from more than one determination. By combining, we might take two pieces of apparatus, and with one determine the (+) result at condition A, and the negative one at B. We take the product of these, because probabilities are multiplicative. If so, we can write

A(+) B(-) = A(+) B(-) [C(+) + C(-)]

because the bracketed term [C(+) + C(-)] equals 1, the sum of the probabilities of results that occurred under conditions C.


B(+)C(-) = [A(+) + A(-)] B(+)C(-)

By adding and expanding

A(+) B(-) + B(+)C(-) = A(+) B(-) C(+) + A(+) B(-) C(-) + A(+) B(+)C(-) + A(-)B(+)C(-)

= A(+)C(-) [B(+) + B(-)] + A(+)B(-)C(+) + A(-)B(+)C(-)

Since the bracketed term [B(+) + B(-)] equals 1 and the last two terms are positive numbers, or at least zero, we have

A(+) B(-) + B(+)C(-) ≧ A(+)C(-)

This is the simplest form of a Bell inequality. In Bell’s sock-washing example, he showed how socks washed at three different temperatures had to comply.

An important point is that provided the samples in the tests give only one result from only two possible results, and provided the tests are applied under three sets of conditions, the mathematics say the results must comply with the inequality. Further, only premise 1 relates to the physics of the samples tested; the second is merely a requirement that the tests are done competently. The problem is, modern physicists say entangled particles violate the inequality. How can this be?

Non-compliance by entangled particles is usually considered a consequence of the entanglement being non-local, but that makes no sense because in the above derivation, locality is not mentioned. All that is required is that premise 1 holds, i.e. measuring the spin of one particle, say, means the other is known without measurement. So, the entangled particles have properties that fulfil premise 1. Thus violation of the inequality means either one of the premises is false, or the associative law of sets, used in the derivation, is false, which would mean all mathematics are invalid.

So my challenge is to produce a mathematical relationship that shows how these violations could conceivably occur. You must come up with a mathematical relationship or a logic statement that falsifies the above inequality, and it must include a term that specifies when the inequality is violated. So, any takers? My answer in my next Monday post.

[Ian Miller.]


The treatment above shows how ludicrous it should be that reality violates that inequality… BUT IT DOES! This is something nobody saw coming. No philosopher ever imagined something as weird. I gave an immediate answer to Ian:

‘Locality is going to come in the following way: A is going to be in the Milky Way, B and C on Andromeda. A(+)B(−) is going to be ½ cos²(b−a). Therefrom the contradiction. There is more to be said. But first of all, I will re-blog your essay, as it makes the situation very clear.’
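The numbers behind that answer can be sketched. The classical half below enumerates the eight deterministic hidden-variable assignments (each pair carries fixed answers for A, B, C, the same for both particles); the inequality holds for every one of them, hence for any statistical mixture. The quantum half uses the textbook prediction for a maximally entangled photon pair in the correlated convention, P(+ at x, − at y) = ½ sin²(y − x); the ½ cos²(b − a) above is the complementary, anti-correlated convention:

```python
import itertools, math

# --- Classical: all 8 deterministic hidden-variable assignments comply ---
for a, b, c in itertools.product([+1, -1], repeat=3):
    lhs = (a == +1 and b == -1) + (b == +1 and c == -1)
    rhs = (a == +1 and c == -1)
    assert lhs >= rhs   # holds for every assignment, hence for any mixture

# --- Quantum: P(+ at x, - at y) = 0.5 * sin^2(y - x) for suitable photon pairs ---
def p_plus_minus(x, y):
    return 0.5 * math.sin(y - x) ** 2

a, b, c = 0.0, math.pi / 6, math.pi / 3          # analyzer angles: 0, 30, 60 degrees
lhs = p_plus_minus(a, b) + p_plus_minus(b, c)    # A(+)B(-) + B(+)C(-)
rhs = p_plus_minus(a, c)                         # A(+)C(-)
print(f"lhs = {lhs:.3f}, rhs = {rhs:.3f}, inequality holds: {lhs >= rhs}")
# lhs = 0.250, rhs = 0.375 -> the Bell inequality is violated
```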

Patrice Ayme’

TO BE AND NOT TO BE? Is Entangled Physics Thinking, Or Sinking?

April 29, 2016

Frank Wilczek, a physics Nobel laureate, wrote a first soporific, then baffling, article in Quanta Magazine: “Entanglement Made Simple”. Yes, all too simple: it sweeps the difficulties under the rug. After a thorough description of classical entanglement, we are swiftly told at the end that classical entanglement supports the Many Worlds Interpretation of Quantum Mechanics. However, classical entanglement (from various conservation laws) has been known since the seventeenth century.

Skeptical founders of Quantum Physics (such as Einstein, De Broglie, Schrödinger, Bohm, Bell) knew classical entanglement very well. David Bohm found the Aharonov-Bohm effect, which demonstrated the importance of the (nonlocal) potential; John Bell found his inequality, which demonstrated, with the help of experiments (Alain Aspect, etc.), that Quantum Physics is nonlocal.

Differently From Classical Entanglement, Which Acts As One, Quantum Entanglement Acts At A Distance: It Interferes With Measurement, At A Distance

The point about the cats is that everybody, even maniacs, ought to know that cats are either dead, or alive. Quantum mechanics make the point they can compute things about cats, from their point of view. OK.

Quantum mechanics, in their busy shops, compute with dead and live cats as possible outcomes. No problem. But then does that mean there is a universe, a “world“, with a dead cat, happening, and then one with a live cat, also happening simultaneously?

Any serious philosopher, somebody endowed with common sense, the nemesis of a Quantum mechanic, will say no: in a philosopher’s opinion, a cat is either dead, or alive. To be, or not to be. Not: to be, and not to be.

A Quantum mechanic can compute with dead and live cats, but that does not mean she creates worlds, by simply rearranging her computation, this way, or that. Her various dead and live cats arrangements just mean she has partial knowledge of what she computes with, and that Quantum measurements, even from an excellent mechanic, are just partial, mechanic-dependent measurements.

For example, if one measures spin, one needs to orient a machine (a Stern-Gerlach device). That’s just a magnetic field going one way, like a big arrow, a big direction. Thus one measures spin in one direction, not another.

What’s more surprising is that, later on, thanks to a nonlocal entanglement, one may be able to determine that, at this point in time, the particle had a spin that could be measured, from far away, in another direction. So far, so good: this is like classical mechanics.

However, whether or not that measurement at a distance has occurred, roughly simultaneously and way outside the causality light cone, AFFECTS the first measurement.

This is what the famous Bell Inequality means.

And this is what the problem with Quantum Entanglement is. Quantum Entanglement implies that wilful action somewhere disturbs a measurement beyond the reach of the five known forces. It brings all sorts of questions of a philosophical nature, and makes them into burning physical subjects. For example, does the experimenter at a distance have real free will?

Calling the world otherworldly, or many-worldly, does not really help to understand what is going on. Einstein’s “Spooky Action At A Distance” seems a more faithful, honest rendition of reality than supposing that each and any Quantum mechanic in her shop creates worlds, willy-nilly, each time it strikes her fancy to press a button.

What Mr. Wilczek did is what manyworldists and multiversists always do: they jump into their derangement (cats alive AND dead) after saying there is no problem. Details are never revealed.

Here is, in extenso, the fully confusing and unsupported conclusion of Mr. Wilczek:

“Everyday language is ill suited to describe quantum complementarity, in part because everyday experience does not encounter it. Practical cats interact with surrounding air molecules, among other things, in very different ways depending on whether they are alive or dead, so in practice the measurement gets made automatically, and the cat gets on with its life (or death). But entangled histories describe q-ons that are, in a real sense, Schrödinger kittens. Their full description requires, at intermediate times, that we take both of two contradictory property-trajectories into account.

The controlled experimental realization of entangled histories is delicate because it requires we gather partial information about our q-on. Conventional quantum measurements generally gather complete information at one time — for example, they determine a definite shape, or a definite color — rather than partial information spanning several times. But it can be done — indeed, without great technical difficulty. In this way we can give definite mathematical and experimental meaning to the proliferation of “many worlds” in quantum theory, and demonstrate its substantiality.”

Sounds impressive, but the reasons are either well-known, or else they rest on a sleight of hand.

Explicitly: “take both of two contradictory property-trajectories into account”: just read Feynman’s QED, first chapter. Feynman invented the “sum over histories”, and Wilczek is his parrot; but Feynman did not become crazy from his “sum over histories”: Richard smirked when his picturesque evocation was taken literally, decades later…

And now the sleight of hand: …”rather than [gather] partial information spanning several times. But it can be done — indeed, without great technical difficulty.” This is nothing new: it is the essence of the double slit discovered by that Medical Doctor and polymath, Young, around 1800 CE: when one runs lots of ‘particles’ through it, one sees the (wave) patterns. This is what Wilczek means by “partial information“. Guess what? We knew that already.

Believing that one can be, while not to be, putting that at the foundation of physics, is a new low in thinking. And it impacts the general mood, making it more favorable towards unreason.

If anything can be, without being, if anything not happening here is happening somewhere else, then is not anything permitted? Dostoevsky had a Russian aristocrat suggest that, if God did not exist, anything was permitted. And, come to think of it, the argument was at the core of Christianism. Or, more exactly, of the Christian reign of terror which started in the period 363 CE-381 CE, from the reign of emperor Jovian to the reign of emperor Theodosius. To prevent everything from being permitted, a god had to enforce the law.

What we have now is way worse: the new nihilists (Wilczek and his fellow manyworldists) do not just say that everything is permitted. They say: it does not matter if everything is permitted, or not. It is happening, anyway. Somewhere.

Thus Many-Worlds physics endangers not just the foundations of reason, but the very justification for morality: namely, that what is undesirable should be avoided. Even the Nazis agreed with that principle. Many-Worlds physics says it does not matter, because it is happening anyway. Somewhere, out there.

So what is going on, here, at the level of moods? Well, professor Wilczek teaches at Harvard. Harvard professors advised president Yeltsin of Russia to set up a plutocracy. It ruined Russia. The same professors made a fortune from it, while others advised president Clinton to do the same; meanwhile Prime Minister Balladur in France was mightily impressed, and followed this new enlightenment by the Dark Side, as did British leaders, and many others. All these societies were ruined in turn. Harvard was the principal spirit behind the rise of plutocracy, and the engine propelling that rise was the principle that morality did not matter. Because, because, well, Many-Worlds!

How does one go from the foundations of physics to the foundations of plutocracy? Faculty members in the richest, most powerful universities meet in mutual admiration societies known as “faculty clubs”, and in lots of other I-scratch-your-back, you-scratch-my-back social occasions they spend much of their time indulging in. So they influence each other, at the very least through the atmospheres of moods they create, and then breathe together.

Remember? It is not that everything is permitted: it’s happening anyway, so we may as well profit from it first. Many-Worlds physics feeds a mood favorable to many plutocrats, and that’s all there is to it. (But that, of course, is a lot, all too much.)

Patrice Ayme’

Crazy Physics Helps With Overall Madness?

April 27, 2016

Quantum Physics has long been a circus. When De Broglie proposed his thesis, his thesis jury (which comprised top physicists, including a Nobel Laureate) did not know what to make of it, and consulted Einstein. Einstein was enthusiastic, saying de Broglie had “lifted a piece of the veil”. Three years later, de Broglie proposed his pilot wave theory (and got the Nobel in 1929). Pauli made an objection; de Broglie replied to it with the consummate politeness of the Prince he was, and thus the reply was not noticed. Five years after, the great mathematician Von Neumann asserted a “proof” that there was no Quantum Mechanics but for the one elaborated in Copenhagen. De Broglie’s objections were not listened to. Another two decades later, David Bohm presented de Broglie’s theory at the Institute for Advanced Study in Princeton. But Bohm was drowned out by questions about why he had refused to testify before the House Un-American Activities Committee in Congress (the American-born Bohm promptly lost his job at Princeton University and his US passport, and would leave the US forever).

The usual interpretation of Quantum Physics considers that the De Broglie Matter Waves therein are only probability waves. This idea of Nobel Laureate Born has mostly escaped controversy. However Einstein sourly remarked: “God does not play dice.” To which Nobel Laureate Bohr smartly replied: “Stop telling God what to do!”

Qubits Are Real. But The Multiverse Is Madness. And Madness Is Contagious.

De Broglie suggested a “Double Solution” theory, which was promptly forgotten as Dirac launched Quantum ElectroDynamics by starting from the simplest relativistic wave, and building the (spinor) space he needed to have said wave in it. Bohm revived (some of) De Broglie’s ideas by proposing to guide an always well-defined particle with a (nonlocal) “quantum potential”.


And The Madness Set In:

Nowadays, descriptions of Quantum Physics are keen to assert that something can be in two places at the same time, that there are many worlds, or universes, created each time something happens, that cats are dead and alive, that the observer creates reality, etc…

All this derangement affecting physicists has something to do with a collective madness similar to the pseudo-scientific theories behind the Slave Trade, Stalinism, or Nazism.

No, I am not exaggerating. The theory behind enslaving Black Africans (going all the way back to the Middle Ages) was that Black Africans were, somehow, the missing link between man and ape. That’s why the Pope allowed the slave trade.

Neither am I exaggerating about fascism: the Nazis were actually obsessed by the new physics, a world where everything seemed possible. They called it “Jewish Physics”, and several Nobel laureates (Lenard, etc.), and top mathematicians (say Teichmüller, who died in combat on the Eastern Front), were its opponents.

It helped suggest an overall mood: ‘if anything is possible, why not surrealism, fascism, Stalinism, Nazism?’

Germany had long led, intellectually (not to say France did not lead too, but it was the great opponent). Thus when top physicists became Nazis even before Hitler did, they no doubt impressed the latter with their attacks on “Jewish Science”.

The madness was not confined to the Nazis, stricto sensu. An excellent example is Max Planck, discoverer of the Quantum.

Planck accepted Einstein’s paper on “The Electrodynamics of Moving Bodies” without references… even though it is certain that Planck knew about the work of Poincaré, Lorentz, Fitzgerald, Michelson-Morley, etc. on Relativity. Poincaré was a star, and had toured the USA, delivering lectures on “Relativity”, the year prior.

So what was Planck up to? Promoting the German arriviste at the cost of the most accomplished mathematician and physicist, because the latter was a Frenchman. (Poincaré, who was as elevated a character as can be found, nevertheless complained later about Einstein’s plagiarism.) Not only was Poincaré French, but his family were refugees from the occupation of Lorraine by the Prussians. Raymond Poincaré, who was prime minister of France several times and president of the French Republic during World War I, was Henri’s cousin.

This is of some import for the understanding of ideas, to this day: Poincaré discovered the idea of gravitational waves, and explained why all interactions should go at the speed of light. Scientists who published (stole) the same ideas later could not copy all of Poincaré’s arguments, it would have been too obvious (that they stole the ideas), so those important details of Poincaré’s have been forgotten… And this haunts physics to this day.

I believe that this is how the extremely, all too relative, theory of Relativity à la Einstein appeared: Einstein could not duplicate all of Poincaré’s details, so he omitted (some of) them… resulting in a (slick) theory with a glaring defect: all classes of frames in uniform motion are supposed to be equivalent, a blatant absurdity (as even the Big Bang theory imposes a unique class of comoving frames). This brought a lot of (ongoing) confusion (say, about “rest” mass).

Planck did not stop with stealing Relativity from Poincaré and offering it to the Great German Empire.

Planck endorsed the general excitement of the German public, when Germany attacked the world on August 1, 1914. He wrote that, “Besides much that is horrible, there is also much that is unexpectedly great and beautiful: the smooth solution of the most difficult domestic political problems by the unification of all parties (and) … the extolling of everything good and noble.”

Planck also signed the infamous “Manifesto of the 93 Intellectuals”, a pamphlet of war propaganda (while Einstein, at the academy in Berlin, retained a pacifistic attitude which almost led to his imprisonment; he was saved by his Swiss citizenship). The Manifesto, ironically enough, enumerated German war crimes, while denying (“not true”) that they had happened. It did not occur to the idiots who signed it that just denying this long litany of crimes was itself proof that they had occurred… And it’s telling that they had to deny them: the German population was obviously debating whether those crimes had happened, now that the war was not going well.

Planck got punished for his nationalism: his second son Erwin was taken prisoner by the French in 1914. His eldest son Karl died at Verdun (along with another 305,000 soldiers). When he saw Hitler was destroying Germany, Planck went to see the dictator, to try to change his mind, bringing to his attention that he was demolishing German universities. But to no avail. In January 1945, Erwin, to whom he had been particularly close, was sentenced to death by the obscene and delirious Nazi “people’s” court, the Volksgerichtshof, because Erwin had participated in the failed July 1944 coup attempt against the criminal Hitler. Erwin was executed on 23 January 1945 (along with around 5,000 German army officers, all the way up to Feldmarschall).

So what to think of the “Multiverse”, “Dead and Alive Cats”, Things which are in different places at the same time, etc.? Do they have to do with suggesting, even promoting, a global reign of unreason?

I think they do. I think the top mood contaminates lesser intellectuals, political advisers, even politicians themselves. Thus political and social leaders feel anything goes, so, next thing you know, they suggest crazy things, like self-regulating finance, trade treaties where plutocrats can sue states (apparently one of the features of the TPP and TTIP), or a world which keeps on piling up CO2, because everything is relative, dead, thus alive, and everywhere is the same, here, there and everywhere, since at the same place, in space, time, or whatever.

Physics, historically, was not just a model of knowledge, but of rational rectitude. This has been lost. And it was lost for technical reasons, discarding other approaches, in part because of sheer nationalism.

In the 1960s, John Bell, the Irishman who worked in CERN’s Theory Division, proved his famous theorem on nonlocality; it was later collected in his book “Speakable and Unspeakable in Quantum Mechanics”. A title full of hidden sense.

Patrice Ayme

The Quantum Puzzle

April 26, 2016


Is Quantum Computing Beyond Physics?

More exactly, do we know, can we know, enough physics for (full) quantum computing?

I have long suggested that the answer to this question was negative, and smirked at physicists sitting billions of universes on a pinhead, as if they had nothing better to do, the children they are. (Just as with their Christian predecessors in the Middle Ages, their motives are not pure.)

Now an article in the Notices of the American Mathematical Society of May 2016 repeats (some of) the arguments I had in mind: “The Quantum Computer Puzzle”. Here are some of the arguments. One often hears that Quantum Computers are a done deal. Here is the explanation from Justin Trudeau, Canada’s Prime Minister, which reflects perfectly the official scientific conventional wisdom on the subject:

(One wishes all our great leaders would be as knowledgeable… And I am not joking as I write this! Trudeau did engineering and ecological studies.)

… Supposing, Of Course, That One Can Isolate And Manipulate Qubits As One Does Normal Bits…

Before some object that physicists are better qualified than mathematicians to talk about the Quantum, let me point towards someone who is perhaps the most qualified experimentalist in the world on the foundations of Quantum Physics. Serge Haroche is a French physicist who got the Nobel Prize for figuring out how to count photons without seeing them. It’s the most delicate Quantum Non-Demolition (QND) method I have heard of. It involved making the world’s most perfect mirrors. The punch line? Serge Haroche does not believe Quantum Computers are feasible. However, Haroche does not say how he reached that conclusion. The article in the AMS does make plenty of suggestions to that effect.

Let me hasten to add that some form of Quantum Computing (or Quantum Simulation) called “annealing” is obviously feasible. D-Wave, a Canadian company, is selling such devices. In my view, Quantum Annealing is just the two-slit experiment writ large. Thus the counter-argument can be made that conventional computers can simulate annealing (and that has been the argument against D-Wave’s machines).
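The counter-argument is easy to make concrete, since classical simulated annealing fits in a few lines. A minimal sketch on a toy Ising chain (the couplings are random and purely illustrative; this is a classical stand-in, not a model of D-Wave’s hardware):

```python
import math, random

random.seed(0)
n = 30
J = [random.choice([-1.0, 1.0]) for _ in range(n - 1)]  # random couplings (illustrative)
spins = [random.choice([-1, 1]) for _ in range(n)]

def energy(s):
    return -sum(J[i] * s[i] * s[i + 1] for i in range(n - 1))

T = 5.0
while T > 0.01:                       # anneal: slowly lower the temperature
    for _ in range(200):
        i = random.randrange(n)
        old = energy(spins)
        spins[i] = -spins[i]          # propose flipping one spin
        dE = energy(spins) - old
        # Metropolis rule: always keep downhill moves, sometimes uphill ones
        if dE > 0 and random.random() > math.exp(-dE / T):
            spins[i] = -spins[i]      # reject: undo the flip
    T *= 0.95

print("final energy:", energy(spins), " ground state:", -sum(abs(j) for j in J))
```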

Full Quantum Computing (also called  “Quantum Supremacy”) would be something completely different. Gil Kalai, a famous mathematician, and a specialist of Quantum Computing, is skeptical:

“Quantum computers are hypothetical devices, based on quantum physics, which would enable us to perform certain computations hundreds of orders of magnitude faster than digital computers. This feature is coined “quantum supremacy”, and one aspect or another of such quantum computational supremacy might be seen by experiments in the near future: by implementing quantum error-correction or by systems of noninteracting bosons or by exotic new phases of matter called anyons or by quantum annealing, or in various other ways…

A main reason for concern regarding the feasibility of quantum computers is that quantum systems are inherently noisy. We will describe an optimistic hypothesis regarding quantum noise that will allow quantum computing and a pessimistic hypothesis that won’t.”

Gil Kalai rolls out a couple of theorems which suggest that Quantum Computing is very sensitive to noise (those are similar to finding out which slit a photon went through). Moreover, he uses a philosophical argument against Quantum Computing:

It is often claimed that quantum computers can perform certain computations that even a classical computer of the size of the entire universe cannot perform! Indeed it is useful to examine not only things that were previously impossible and that are now made possible by a new technology but also the improvement in terms of orders of magnitude for tasks that could have been achieved by the old technology.

Quantum computers represent enormous, unprecedented order-of-magnitude improvement of controlled physical phenomena as well as of algorithms. Nuclear weapons represent an improvement of 6–7 orders of magnitude over conventional ordnance: the first atomic bomb was a million times stronger than the most powerful (single) conventional bomb at the time. The telegraph could deliver a transatlantic message in a few seconds compared to the previous three-month period. This represents an (immense) improvement of 4–5 orders of magnitude. Memory and speed of computers were improved by 10–12 orders of magnitude over several decades. Breakthrough algorithms at the time of their discovery also represented practical improvements of no more than a few orders of magnitude. Yet implementing Boson Sampling with a hundred bosons represents more than a hundred orders of magnitude improvement compared to digital computers.

In other words, it is unrealistic to expect such a, well, quantum jump…

“Boson Sampling” is a hypothetical, and simplest, way proposed to implement a Quantum Computer. (It is neither known whether it could be made, nor whether it would be good enough for Quantum Computing; yet it’s intensely studied nevertheless.)
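For a feel of Kalai’s orders of magnitude: the natural classical route to simulating Boson Sampling goes through matrix permanents, and the best known general algorithm, Ryser’s formula, costs on the order of 2^n·n operations for an n × n matrix. A hedged sketch of that scaling (the 10^18 operations per second, roughly an exascale machine, is my assumption):

```python
# Ryser's formula for an n x n permanent costs ~ 2^n * n operations.
OPS_PER_SECOND = 1e18          # assumed: roughly an exascale supercomputer
SECONDS_PER_YEAR = 3.15e7

for n in (30, 50, 70, 100):
    ops = 2.0**n * n
    years = ops / OPS_PER_SECOND / SECONDS_PER_YEAR
    print(f"n = {n:3d}: ~{ops:.1e} ops, ~{years:.1e} years")
# n = 100 gives ~1.3e32 operations: about four million years at exascale,
# for a single permanent -- the kind of 'quantum jump' Kalai is talking about.
```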


Quantum Physics Is The Non-Local Engine Of Space, and Time Itself:

Here is Gil Kalai again:

“Locality, Space and Time

The decision between the optimistic and pessimistic hypotheses is, to a large extent, a question about modeling locality in quantum physics. Modeling natural quantum evolutions by quantum computers represents the important physical principle of “locality”: quantum interactions are limited to a few particles. The quantum circuit model enforces local rules on quantum evolutions and still allows the creation of very nonlocal quantum states.

This remains true for noisy quantum circuits under the optimistic hypothesis. The pessimistic hypothesis suggests that quantum supremacy is an artifact of incorrect modeling of locality. We expect modeling based on the pessimistic hypothesis, which relates the laws of the “noise” to the laws of the “signal”, to imply a strong form of locality for both. We can even propose that spacetime itself emerges from the absence of quantum fault tolerance. It is a familiar idea that since (noiseless) quantum systems are time reversible, time emerges from quantum noise (decoherence). However, also in the presence of noise, with quantum fault tolerance, every quantum evolution that can experimentally be created can be time-reversed, and, in fact, we can time-permute the sequence of unitary operators describing the evolution in an arbitrary way. It is therefore both quantum noise and the absence of quantum fault tolerance that enable an arrow of time.”

Just for future reference, let’s “note that with quantum computers one can emulate a quantum evolution on an arbitrary geometry. For example, a complicated quantum evolution representing the dynamics of a four-dimensional lattice model could be emulated on a one-dimensional chain of qubits.

This would be vastly different from today’s experimental quantum physics, and it is also in tension with insights from physics, where witnessing different geometries supporting the same physics is rare and important. Since a universal quantum computer allows the breaking of the connection between physics and geometry, it is noise and the absence of quantum fault tolerance that distinguish physical processes based on different geometries and enable geometry to emerge from the physics.”
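What “quantum fault tolerance” buys, and why noise thresholds matter, can be illustrated with the simplest toy available: a three-copy repetition code with majority vote. The sketch below is a classical stand-in for that intuition (not Kalai’s model, and not a real quantum code): encoding helps only while the per-copy error rate p stays below a threshold; past it, the “protection” makes things worse.

```python
import random

random.seed(1)

def logical_error_rate(p, trials=100_000):
    """Flip each of 3 copies with probability p; majority vote decodes."""
    errors = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(3))
        if flips >= 2:            # majority corrupted: decoding fails
            errors += 1
    return errors / trials

for p in (0.01, 0.1, 0.4, 0.6):
    pl = logical_error_rate(p)
    # exact value: 3p^2(1-p) + p^3 = 3p^2 - 2p^3; threshold at p = 0.5
    print(f"p = {p:.2f}: logical error ~ {pl:.4f} "
          f"({'better' if pl < p else 'worse'} than no code)")
```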


I have proposed a theory which explains the preceding features, including the emergence of space. Let’s call it Sub-Quantum Physics (SQP). The theory breaks a lot of sacred cows. Besides, it brings an obvious explanation for Dark Matter. If I am correct, the Dark Matter Puzzle is directly tied in with the Quantum Puzzle.

In any case, it is a delight to see in print part of what I have been severely criticized for saying, for all too many decades… The gist of it all is that present-day physics is completely incomplete.

Patrice Ayme’

Gravitational Waves Directly Detected

February 11, 2016

How Were Gravitational Waves detected?

By two detectors in the USA, one in Washington State, the other in Louisiana (detecting in one place would have been enough; detecting in two places at the same time makes the finding overwhelmingly certain; the National Science Foundation of the USA had spent $1.1 billion, over 40 years, on that research). The detectors were simplicity themselves in concept: just a light interferometer to measure the distance between mirrors. Light is split, sent in two perpendicular directions, and then re-united with itself. If one of the branches varies slightly in length relative to the other as a gravitational wave passes, an interference will show up. However, mirrors hanging from pendulums, themselves hanging from pendulums, five stages deep, the whole contraption sitting inside an anti-vibration machine, had to be realized in half a dozen places along a chain of reflections and interferences.
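To get a feel for the precision involved, a hedged sketch (the strain of ~10⁻²¹ and the 4 km arm length are the publicly stated LIGO figures; the proton radius is the standard value):

```python
strain = 1e-21            # dimensionless strain h of a GW150914-scale event
arm_length = 4_000.0      # LIGO arm length, meters
proton_radius = 0.84e-15  # meters

delta_L = strain * arm_length
print(f"arm length change: {delta_L:.1e} m "
      f"(~1/{proton_radius / delta_L:.0f} of a proton radius)")
# ~4e-18 m: roughly a two-hundredth of a proton's radius, resolved over 4 km
```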

What are these Gravitational Waves?

As far as existing gravitation theory has it, distortion in space (and, thus time: time and space are related by the speed of light, c).

A Field Carries Away A Wave Just As A Whip Does

What Was Detected:

Einstein’s Gravitation Theory says that gravitation “is” the deformation of space(time) it brings. It is this deformation which was directly detected: a part of space in one direction was made shorter than in another direction. That meant a huge gravitational wave had passed.

The formidable event that caused it was the crash and collapse of two black holes into each other, each around 30 solar masses (many more details are known).

Gravitational Waves Were Certain Theoretically, & Already Detected:

We already had evidence for the existence of gravitational waves, both theoretical and experimental. Einstein’s name was rolled out, naturally enough, because Einstein contributed to the present Theory of Gravitation (I am not anti-Einstein, far from it; but he worked closely with a number of other people, including the towering mathematician David Hilbert, who published his own approach to gravity within weeks of Einstein).

Einstein tends to appear as the cherry on many a cake. Those who celebrate the photogenic cherry, and ignore the cake, will go hungry.


Actually, once one has hypothesized that gravitation is a field propagating at a finite speed, the appearance of waves is automatic.

The reasoning was made first by British and French Eighteenth Century physicists, in the framework of the electromagnetic force; the mathematics is exactly the same with gravitation, as both fields vary with the inverse square of the distance. This is what happens in a radio antenna, with electrons going back and forth: the electric field that those electrons create is deformed in such a way that it moves other electrons at a distance, back and forth.

The Gravitational Energy Loss Detection Method:

Thus, how do the waves show up? By shaking things at a distance. Using conservation of energy, it means that the field-creating system, by moving just so, loses energy to its waves. An obvious case is two neutron stars (“pulsars”) rotating around each other: as they move back and forth, because of said rotation, they create gravitational waves which carry energy away from their system. As this happens, their system loses energy, the two stars spiral into each other, thus rotate ever faster, and this should be observable, and computable exactly. This, indeed, was thoroughly observed, so we knew the waves were there.
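That computation can be sketched. The textbook quadrupole-formula result for a circular binary (Peters, 1964) is P = (32/5) G⁴ (m₁m₂)² (m₁+m₂) / (c⁵a⁵). Below, Hulse-Taylor-like numbers; note that I ignore the orbit’s strong eccentricity, which in reality boosts the loss by roughly an order of magnitude:

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

m1 = m2 = 1.4 * M_sun        # Hulse-Taylor-like neutron stars
a = 1.95e9                   # orbital semi-major axis, m

# Quadrupole-formula power radiated by a circular binary (Peters 1964):
P = (32 / 5) * G**4 * (m1 * m2)**2 * (m1 + m2) / (c**5 * a**5)
print(f"gravitational-wave luminosity: {P:.1e} W")
# ~6e23 W in the circular approximation; eccentricity raises the real
# figure to ~7e24 W -- the steady drain that makes the pulsars spiral in.
```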

Einstein’s Gravitation Theory is a sleight of hand:

It affects to identify space(time) deformations with gravitation. The idea actually originated with the awesome German mathematician Bernhard Riemann, who invented manifold theory in part to point out that any force could be viewed as a convergence, or divergence, of geodesics (an idea that physics has been milking ever since).

This, though, does not answer Newton’s deeper query about the nature of gravitation (see below). It’s a bit as if a creature asked: ‘What is an arm?’ And one answered: ‘An arm is what pushes things, and we can detect the deformation the arm brought.’

What is the discovery good for?

Well, first, one has to make sure. Science is about making 100% sure. The present experiment improved some technology far beyond what anything else required (but then, it does open some possibilities!). Just as importantly, now we will be able to check the details of the Gravitation Theory (the big picture was not in doubt; the details are). Ultimately it may be possible to communicate through gravitational waves, etc. (although right now the deformations are only the size of a fraction of a nucleus, and yet we could detect them!)

Who were the originators of that idea? First Newton himself pointed out that his own theory of gravitation was grotesque (I am paraphrasing). Newton:

“that one body may act upon another at a distance thro’ a Vacuum, without the Mediation of any thing else, by and through which their Action and Force may be conveyed from one to another, is to me so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it.”

There were actually two problems: that the action was instantaneous, and that it was at a distance without intermediaries. Newton paid attention to the second one; physics, in the last two centuries, solved the first (which was implicit in Newton’s observations).

As I mentioned in passing above, part of Newton’s worries were addressed by the invention of the concept of field. And then by the realization that fields carry energy away in waves. At that point, gravitational waves were automatic… Riemann’s introduction of manifolds, and of how to conceptualize forces in them, gave gravitation the manifestation of its nature we presently have: a distortion of the space metric (once again, time follows automatically).

It’s important to know who invented what, and contributed most. Because it unveils how ideas appear and evolve. Then, in turn, one can make theories of that, accelerating innovation (don’t forget there is a horse race between innovation and oblivion, on the scale of the entire biosphere!)

Curiously, this is all very useful: GPS with a precision of 30 centimeters has allowed finding out that baboon society is more democratic than ours, in fundamental ways. “General Relativistic” effects (the fact that clocks run slow in a gravitational field) make crucial corrections to the GPS computations (otherwise GPS would be pretty useless). So this is not all academic. GPS will soon allow robotic agriculture… among other things.
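The GPS corrections are a classic back-of-the-envelope check. A hedged sketch (standard textbook values; the two leading effects are the gravitational blueshift, GM/c² (1/R_E − 1/r), and the special-relativistic slowing, v²/2c²):

```python
import math

G, c = 6.674e-11, 2.998e8
M_earth, R_earth = 5.972e24, 6.371e6
r_sat = 2.6561e7                         # GPS orbit radius, ~20,200 km altitude
v_sat = math.sqrt(G * M_earth / r_sat)   # circular orbital speed, ~3.87 km/s
day = 86_400                             # seconds

grav = G * M_earth / c**2 * (1 / R_earth - 1 / r_sat)  # clock runs FAST in orbit
vel = v_sat**2 / (2 * c**2)                            # clock runs SLOW from motion
net = (grav - vel) * day

print(f"gravitational: +{grav * day * 1e6:.1f} us/day")
print(f"velocity:      -{vel * day * 1e6:.1f} us/day")
print(f"net drift:     +{net * 1e6:.1f} us/day "
      f"-> ~{net * c / 1000:.0f} km/day of range error if uncorrected")
```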

We still don’t know what gravitation is. However, we can predict more things than Newton did… Even if he did not suspect they were there. This is just the beginning of what could be revealed, if our satanic impulses are kept in check.

Patrice Ayme’

Is “Spacetime” Important?

November 3, 2015

Revolutions spawn from, and contribute to, the revolutionary mood. It is no coincidence that many revolutionary ideas in science: Chemistry (Lavoisier), Biological Evolution (Lamarck), Lagrangians, Black Holes, Fourier Analysis, Thermodynamics (Carnot), Wave Optics (Young, Poisson), Ampère’s Electrodynamics, spawned at roughly the same time and place, around the French Revolution.

In the Encyclopédie, under the term “dimension”, Jean le Rond d’Alembert speculated that time might be considered a fourth dimension… if the idea was not too novel. Joseph Louis Lagrange wrote that “One may view mechanics as a geometry of four dimensions…” (Theory of Analytic Functions, 1797). The idea of spacetime is to view reality as a four-dimensional manifold, something measured by the “Real Line” going in four directions.

There is, it turns out, a huge problem with this: R, the real line, has what is called a separated topology: points have distinct neighborhoods. However, the QUANTUM world is not like that, not at all. Countless experiments, and the most basic logic, show this:

Reality Does Not Care About Speed, & The Relativity It Brings

Manifolds were defined by Bernhard Riemann in his 1854 habilitation lecture (he died in 1866, still young, of tuberculosis). A manifold is made of chunks (technically: neighborhoods), each of them diffeomorphic to a neighborhood in R^n (thus a deformed piece of R^n, see tech annex).

Einstein admitted that there was a huge problem with the “now” in physics (even if one confines oneself to his own set-ups in Relativity theories). Worse: the Quantum changes completely the problem of the “now”… Let alone the “here”.

In 1905, Henri Poincaré showed that by taking time to be an imaginary fourth spacetime coordinate (√−1 c t), a Lorentz transformation can be regarded as a rotation of coordinates in a four-dimensional Euclidean space with three real coordinates representing space, and one imaginary coordinate, representing time, as the fourth dimension.

Hermann Minkowski, Einstein’s professor in Zurich, concluded in 1908: “The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.”

This remark rests on Lorentz’s work: how to go from coordinates (x, t) to (x’, t’). In the simplest case (a “boost” at speed v along the x axis):

x’ = (x − vt) / √(1 − v²/c²)

t’ = (t − vx/c²) / √(1 − v²/c²)

Here c is the speed of light. Lorentz found one needed such transformations for the equations of electrodynamics to keep the same form in all inertial frames. If v/c is taken to be zero (as it is if one supposes the speed v negligible relative to c, or the speed of light infinite), one gets:

t = t’

x’ = x – vt

The first equation exhibits universal time: time does not depend upon the frame of reference. But notice that the second equation already mixes space and time. Thus, philosophically speaking, “spacetime” could have been proclaimed long before. Now, in so-called “General Relativity”, there are problems with “time-like” geodesics (but those would surface long after Minkowski’s death).
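A tiny numerical sketch of that limit (the numbers are illustrative choices of mine): the full Lorentz boost collapses onto the Galilean pair above as soon as v ≪ c.

```python
# Minimal sketch: the Lorentz boost and its Galilean (v << c) limit.
import math

C = 2.998e8  # speed of light, m/s

def lorentz(x, t, v):
    """Boost the event (x, t) into a frame moving at speed v along x."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / C**2)

def galilei(x, t, v):
    """Newton's universal-time version of the same change of frame."""
    return x - v * t, t

x, t = 1.0e4, 1.0  # one event: 10 km away, 1 second in

for v in (10.0, 1.0e3, 0.9 * C):  # walking pace, artillery shell, relativistic
    xl, tl = lorentz(x, t, v)
    xg, tg = galilei(x, t, v)
    print(f"v = {v:9.3e} m/s | Lorentz t' = {tl:.12f} | Galilei t' = {tg:.12f}")
```

At walking or even artillery speeds the two transformations agree to about a part in a trillion; only the relativistic line shows time refusing to stay universal.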

Another problem with conceptually equating time and space is that time is not space: in the metric, space dimensions carry a plus sign, time a minus sign (something Quantum Field Theory often ignores by putting pluses everywhere in computations).
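Concretely, in the convention where space carries the plus signs, the Minkowski interval reads (standard notation, supplied by me):

$$ds^2 = -\,c^2\,dt^2 + dx^2 + dy^2 + dz^2.$$

That lone minus sign is the entire difference between “time-like” and “space-like” separations; no rotation of the three space axes can turn one into the other.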

In any case, I hope this makes clear that, philosophically, just looking at the equations, “spacetime” does not have to be an important concept.

And Quantum Physics seems to say that it is not: the QUANTUM INTERACTION (QI; my neologism) is (apparently, so far) INSTANTANEOUS (like old-fashioned time).

As we saw previously (“Can Space Be Faster Than Light”), the top cosmologists are arguing whether the speed of space expansion can be viewed as faster than light. Call that the Cosmic Inflation Interaction (CII; it has its own hypothesized exchange particle, the “Inflaton”). We see that c, the speed of light, is less than CII, and may, or may not, be related to QI (standard Quantum Physics implicitly assumes that the speed of the Quantum Interaction QI is infinite).

One thing is sure: we are very far from TOE, the “Theory Of Everything”, which physicists anxious to appear as the world’s smartest organisms, with all the power and wealth to go with it, have touted for decades.

Patrice Ayme’

Tech Annex: R is the real line, RxR = R^2 the plane, RxRxR = R^3 the usual three dimensional space, etc. Spacetime was initially viewed as just RxRxRxR = R^4. What does diffeomorphic mean? It means a copy which can be shrunk or dilated somewhat, in all imaginable ways, but without breaks, and so that all points can be tracked (a diffeomorphism does this, and so do all its derivatives).
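A concrete one-dimensional example, of my own choosing, of such a shrink-and-dilate copy:

$$f(x) = x + \tfrac{1}{2}\sin x, \qquad f'(x) = 1 + \tfrac{1}{2}\cos x \geq \tfrac{1}{2} > 0.$$

Since f is smooth, strictly increasing, and covers all of R, it is a diffeomorphism of the real line onto itself: the line gets stretched and squeezed, but never torn, folded, or punctured.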


September 11, 2015

Feynman:”It is safe to say that no one understands Quantum Mechanics.” 

Einstein: “Insanity is doing the same thing over and over and expecting different results.”

Nature: “That’s how the world works.”

Wilczek (Physics Nobel Prize): “Naïveté is doing the same thing over and over, and always expecting the same result.”

Parmenides, the ancient Greek philosopher, theorized that reality is unchanging and indivisible and that movement is an illusion. Zeno, a student of Parmenides, devised four famous paradoxes to illustrate the logical difficulties in the very concept of motion. Zeno’s arrow paradox starts and ends this way:

  • If you know where an arrow is, you know everything about its physical state….
  • The arrow does not move…

Classical Mechanics found the first point to be erroneous: to know the state of a particle, one must know not only its position X, but also its velocity and mass (combined into what’s called its momentum P). Something similar happens with Quantum Physics. To know the state of a particle, we need to know whether the state of what it has interacted with before… exists, or not. According to old-fashioned metaphysics, that’s beyond weird. It’s simply incomprehensible.

The EPR Interaction: Sein Und Zeit. For Real.

[The Nazi philosopher Heidegger, an ex would-be priest, wrote a famous book “Being And Time“. However, rather than a fascist fantasy, the EPR is exactly about that level of depth: how existence and time come to be! And how those interact with our will…]

With that information, X and P, position and momentum, for each particle, classical mechanics predicts the future evolution of a set of particles completely. (Formally, the dynamical evolution satisfies a second order differential equation. That was thoroughly checked by thousands of officers of gunnery, worldwide, over the last five centuries.)
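To illustrate what “complete prediction from X and P” means in practice, here is a minimal gunnery-style sketch (muzzle speed and elevation are illustrative values of mine), integrating Newton’s second law step by step:

```python
# Minimal sketch of classical determinism: given position X and momentum P
# at t = 0, Newton's second law fixes the whole trajectory.
import math

g = 9.81    # gravitational acceleration, m/s^2
m = 10.0    # projectile mass, kg
dt = 1e-3   # time step, s

# Initial state: X and P fully specify the system.
x, y = 0.0, 0.0
speed, angle = 300.0, math.radians(45)  # muzzle speed and elevation
px, py = m * speed * math.cos(angle), m * speed * math.sin(angle)

t = 0.0
while y >= 0.0:          # integrate until the shell lands
    x += (px / m) * dt   # dX/dt = P/m
    y += (py / m) * dt
    py -= m * g * dt     # dP/dt = F (here: gravity only, no drag)
    t += dt

print(f"range ~ {x / 1000:.1f} km after {t:.1f} s")  # about 9.2 km, about 43 s
```

Change the initial X or P, and the computed impact point changes accordingly; nothing else about the universe needs to be consulted. That determinism is exactly what the Quantum refuses to grant.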

Highly predictive classical mechanics is the model of Einstein Sanity.

Aristotle had ignored the notion of momentum, P. For Aristotle, one needed a force to maintain motion (an objective proof of Aristotle’s stupidity; no wonder Aristotle supported, and instigated, fascist dictatorship as the best system of governance). Around 1320 CE, the Parisian genius Buridan declared that Aristotle was completely wrong and introduced momentum P, calling it “IMPETUS”.

May we be in a similar situation? Just as the Ancient Greeks ignored P, is Quantum Wave Mechanics incomplete because of an inadequate concept of what a complete description of the world is?

Einstein thought so, and demonstrated it to his satisfaction in his EPR Thought Experiment. The EPR paper basically observed that, according to the Quantum Axiomatics, two particles, after they interacted, still formed JUST ONE WAVE. Einstein claimed that there had to exist hidden “elements of reality”, not yet identified in the (Copenhagen Interpretation of) quantum theory. Those heretofore hidden “elements of reality” would re-establish Einstein Sanity, Einstein feverishly hoped.
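In modern notation (which I am supplying), the “just one wave” statement is explicit. For two spin-½ particles that have interacted, a typical (singlet) state is:

$$|\psi\rangle = \frac{1}{\sqrt{2}} \left( |\uparrow\rangle_A |\downarrow\rangle_B - |\downarrow\rangle_A |\uparrow\rangle_B \right).$$

This cannot be factored into (a state of A) times (a state of B): mathematically there is only one wave, however far apart A and B fly.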

According to Einstein, following his friend Prince Louis De Broglie (to whom he had effectively conferred the Doctorate) and maybe the philosopher Karl Popper (with whom he had corresponded earlier about non-locality), Quantum Mechanics appears random only because of our ignorance of those “hidden variables”. Einstein’s demonstration rested on the impossibility of what he labelled “spooky action at a distance”.

That was an idea too far. The “spooky action at a distance” has been amply demonstrated in the meantime. Decades of experimental tests, including a “loophole-free” test published on the scientific preprint site last month, show that the world is like that: completely non-local, everywhere.

In 1964, the physicist John Bell, CERN’s theory chief, working with David Bohm’s version of Einstein’s EPR thought experiment, identified an inequality obeyed by any physical theory that is both local — meaning that interactions don’t travel faster than light — and where the physical properties usually attributed to “particles” exist prior to “measurement.”
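Here is a minimal numerical sketch of the CHSH form of Bell’s inequality (the measurement angles are the standard optimal ones, supplied by me): any local theory with pre-existing properties keeps |S| ≤ 2, while the quantum singlet correlation E(a, b) = −cos(a − b) reaches 2√2.

```python
# Minimal sketch of the CHSH form of Bell's inequality.
# Local hidden-variable theories obey |S| <= 2; quantum mechanics does not.
import math

def E(a, b):
    """Quantum correlation of spin measurements at angles a, b (singlet state)."""
    return -math.cos(a - b)

# Standard measurement angles that maximize the quantum violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"|S| = {abs(S):.4f}  (local bound: 2, quantum max: 2*sqrt(2) ~ 2.8284)")
```

The experiments cited above measure values of S well above 2, close to the quantum prediction: locality with pre-existing properties loses.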

(As an interesting aside, Richard Feynman tried to steal Bell’s result, at a time when Bell was not famous, at least in the USA: a nice example of “French Theory” at work! And I love Feynman…)

Einstein’s hidden “elements of reality” probably exist, but they are NON-LOCAL. (Einstein was obsessed by locality; but that’s an error. All that can be said in favor of locality is that mathematics, and Field Theory, so far, are local: that’s the famous story of the drunk who looks for his keys under the lamp post, because that’s the only place he can see.)

Either some physical influences travel faster than light, or some properties don’t exist before measurement. Or both.

I believe both happen. Yes, both: reality is both faster than light, and it is pointwise fabricated by interactions (“measurement”). Because:

  1. The EPR Thought Experiment established the faster than light influence (and that was checked experimentally).
  2. But then some properties cannot exist prior to “EPR style influence”. Because, if they did, why do they have no influence whatsoever, once the EPR effect is launched?

Now visualize the “isolated” “particle”. It’s neither truly “isolated” nor truly a “particle”, as some of its properties have not come in existence yet. How to achieve this lack of existence elegantly? Through non-localization, as observed in the one-slit and two-slit experiments.
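That non-localization is quantitative. A minimal sketch of the two-slit pattern (wavelength and slit separation are illustrative values of mine) that a single “particle”, click after click, builds up:

```python
# Minimal sketch of two-slit interference: one quantum, two indistinguishable
# paths, and the probability pattern their single wave produces on a screen.
import math

lam = 500e-9  # wavelength, m (e.g. green light, or a slow electron's de Broglie wave)
d = 2e-6      # slit separation, m

def prob(theta):
    """Two-slit detection probability (ideal point slits), normalized to 1 at center."""
    # Path difference d*sin(theta) gives a phase difference 2*pi*d*sin(theta)/lam;
    # the single wave through both slits yields cos^2 of half that phase.
    return math.cos(math.pi * d * math.sin(theta) / lam) ** 2

for deg in (0.0, 3.6, 7.2, 10.9, 14.5):
    print(f"theta = {deg:4.1f} deg  ->  P = {prob(math.radians(deg)):.3f}")
```

Each detection is one localized click, yet the clicks accumulate into exactly this pattern: the “particle” was not at one slit or the other; its wave went through both.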

Why did I say that the “isolated” “particle” was not isolated? Because it interacted with some other “particle” before. Of course. Thus it’s EPR entangled with that prior “particle”. And when that prior “particle” is “measured” (namely INTERACTS with yet another “particle”), the so-called “isolated” “particle” gets changed, by the “spooky action at a distance”, at a speed much faster than light.

(This is no flight of fancy of mine, consecutive to some naïve misinterpretation; Zeilinger et al. in Austria back-checked the effect experimentally; Aspect in Paris and Zeilinger got the Wolf Prize for their work on non-locality, so the appreciation for their art is not restricted to me!)

All these questions are extremely practical: they are at the heart of the difficulties in engineering a Quantum Computer.

Old physics is out of the window. The Quantum Computer is not here yet, because the new physics is not understood enough, yet.

Patrice Ayme’