Posts Tagged ‘Matter Waves’

How To Generate Matter Waves In Large Objects

June 14, 2020

L = h/P is the De Broglie hypothesis, where L is the wavelength of the matter wave, P is the momentum of the object, and h is Planck's constant.
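To see why matter waves stay hidden for everyday objects, here is the formula at work, as a quick numerical sketch (the baseball mass and speed are merely illustrative values):

```python
# de Broglie: L = h / P. Compare an electron to a baseball.
h = 6.626e-34                    # Planck's constant, J*s

m_e, v_e = 9.109e-31, 1.0e6      # electron mass (kg) at 1000 km/s
lam_electron = h / (m_e * v_e)   # ~7e-10 m: atomic scale, visible in diffraction

m_ball, v_ball = 0.145, 40.0     # a thrown baseball (illustrative values)
lam_ball = h / (m_ball * v_ball) # ~1e-34 m: far smaller than a proton, unobservable

print(f"electron wavelength: {lam_electron:.2e} m")
print(f"baseball wavelength: {lam_ball:.2e} m")
```

The P in the denominator is, in general, the relativistic momentum; mv is just its slow-poke limit, as noted below.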

The relationship was known in the case of the photon [1].

What I viewed as a mystery: the validity of the formula for any object. It seemed to me to make all particles, and all masses, sort of fundamental. How could that be? Meanwhile, the mystery of mass got thicker. I heard of the Higgs; it's mostly a gimmick to create mass out of friction from an assumed universal field, now supposedly observed… for particles which, in the Standard Model, wouldn't otherwise have any. How much of the mass of a nucleon is produced by simply harnessing E = mcc is unknown to me, but it could be most of it.

De Broglie matter waves, simplest version. In general p, the momentum, is relativistic, and "mv" is just the slow poke version of it… So all I am saying is that the zillions of linear parts of all these waves add up… OK, I should make my own drawing, severely more sophisticated: this one gives the impression that the linear tails (the outside, guiding parts) of the wave are nonlinear… In SQPR, the center is highly nonlinear, and the outer, guiding part is ready to transform itself into Dark Matter, given the right geometry.

Then SQPR appeared. In SQPR, waves are everything, but they are additive, nonlinear… And they are all that space is.

So visualize an object O. It's made of a zillion elementary particles; call that number Z. Each is actually an elementary wave, expanding, entangled, then nonlinearly collapsing, then expanding again after that interaction, entangled all over, etc. On average, though, the expanding quantum waves of each of these constituents will have momentum p/Z and mass m/Z. This clarity of mind escapes Quantum Field Theorists (QFTists), because their version of space is haunted by so-called "vacuum energy"… for which there is only anti-evidence (namely, the universe exists; if vacuum energy existed, the universe wouldn't, because it would be collapsed all the time! [2])

Now SQPR says those linear parts all add up, each an average p/Z, and then constitute a mass m and momentum p. It's all very simple, and now it sounds intuitive… A return of intuition in physics would be welcome, instead of complete insanity…

Patrice Ayme



[1]. Planck had discovered E = hF, where F is the frequency of light. Einstein proposed to generalize it to quanta of light ("Lichtquanten") in flight, and immediately explained the photoelectric effect that way (he got the Nobel for that; by the way, SQPR immediately explains Dark Matter).
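The Planck-Einstein relation is easy to put to work on the photoelectric effect; a sketch with approximate textbook values (the sodium work function is an assumed round number):

```python
# Planck/Einstein: a photon of frequency F carries energy E = h*F.
h, c, eV = 6.626e-34, 2.998e8, 1.602e-19

lam = 400e-9               # violet light, 400 nm
E_photon = h * c / lam     # joules, using F = c / lam
E_eV = E_photon / eV       # ~3.1 eV

W_sodium = 2.3             # work function of sodium, eV (approximate)
K_max = E_eV - W_sodium    # max kinetic energy of an ejected electron, ~0.8 eV
print(f"photon: {E_eV:.2f} eV, ejected electron up to {K_max:.2f} eV")
```

One photon, one electron: intensity changes the count of ejected electrons, not their maximum energy, which is what Einstein's quanta explained.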


[2] One piece of evidence for the vacuum energy is the "CASIMIR EFFECT"… which is thoroughly demonstrated in very practical nanophysics. I explained once how to make it produce much energy. Nobel laureate S. Weinberg, in his book on Gravitation, rolls out Casimir as a proof of vacuum energy (instead, I roll out the universe to disprove vacuum energy!). However, it turns out one doesn't need the full vacuum energy to explain Casimir…


The Casimir Effect and the Quantum Vacuum
R. L. Jaffe
Center for Theoretical Physics, Laboratory for Nuclear Science and Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139

"In discussions of the cosmological constant, the Casimir effect is often invoked as decisive evidence that the zero point energies of quantum fields are 'real'. On the contrary, Casimir effects can be formulated and Casimir forces can be computed without reference to zero point energies. They are relativistic, quantum forces between charges and currents. The Casimir force (per unit area) between parallel plates vanishes as α, the fine structure constant, goes to zero, and the standard result, which appears to be independent of α, corresponds to the α → ∞ limit."

SQPR has its own version of the vacuum: it’s truly empty, devoid of mass-energy. All the space that is physical is made of matter waves… Matter waves and pieces thereof all have mass-energy, making them real.

Not An Infinity Of Angels On Pinheads

July 1, 2016

Thomas Aquinas and other ludicrous pseudo-philosophers (in contradistinction with real philosophers such as Abelard) used to ponder questions about angels, such as whether they can interpenetrate (as bosons do).

Are today's mathematicians just as ridiculous? The existence of infinity has been "proven" by the simplest reasoning ever: if n is the largest number, clearly (n+1) is larger. I have long disagreed with that hare-brained sort of certainty, and it's not a matter of shooting the breeze. (My point of view has been spreading in recent years!) Just saying something exists does not make it so (or else one would believe Hitler and Brexiters). If I say: "I am emperor of the galaxy known as the Milky Way!", that has a nice ring to it, but it does not make it so (too bad, that would be fun).

Given n symbols, each labelled by something, can one always find a new something to label (n+1) with? I say: no. Why? Because reality prevents it. Somebody (see below) objected that I confused "map" and "territory". But I am a differential geometer, and the essential idea there, from the genius B. Riemann, is that maps allow one to define the "territory":

Fundamental Idea Of Riemann: the Maps At the Bottom Are Differentiable

The reason has to do with discoveries made between 1600 and 1923. Around 1600, Kepler tried to make concrete the attraction of planets to the Sun (with a 1/d law). Ishmael Boulliau (or Bullialdus) loved eclipses (a top astronomer; a crater on the Moon is named after him). But Boulliau strongly disagreed with 1/d and gave a simple but strong argument that it should be 1/dd, the famous inverse square law.

Newton later (supposedly) established the equivalence between the 1/dd law and Kepler's three laws of orbital motion, thus demonstrating the former (there is some controversy as to whether Newton fully demonstrated that he could assume planets were point-masses, what's now known as Gauss' law).
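The equivalence is easy to check numerically in the circular-orbit case: the 1/dd law gives T = 2π√(r³/GM), which is exactly Kepler's third law, T² ∝ r³. A sketch with standard constants:

```python
import math

G, M_sun = 6.674e-11, 1.989e30   # SI units

def period_days(r):
    # Inverse-square law on a circular orbit: GM/r^2 = v^2/r
    # => T = 2*pi*sqrt(r^3 / (G*M)), i.e. T^2 proportional to r^3 (Kepler III)
    return 2 * math.pi * math.sqrt(r**3 / (G * M_sun)) / 86400

T_earth = period_days(1.496e11)  # ~365 days
T_mars = period_days(2.279e11)   # ~687 days
print(f"Earth: {T_earth:.1f} d, Mars: {T_mars:.1f} d")
```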

I insist upon the 1/dd law, because we have no better (roll over, Einstein…) at small scales.

Laplace (and, before him, the British thinker John Michell) pointed out in the late 18th century that this 1/dd law implied Black Holes.

In 1900, Jules Henri Poincaré demonstrated that energy had inertial mass. That’s the famous E = mcc.

So famous, it could only be attributed to a member of the superior Prussian race.

The third ingredient in the annihilation of infinity was De Broglie's assertion that to every particle a wave should be associated. The simple fact that, in some sense, a particle was a wave (or "wave-packet") made the particle delocalized, thus attached to a neighborhood, not a point. At this point, points exited reality.

Moreover, the frequency of the wave is given by its momentum-energy, said De Broglie (and that was promptly demonstrated in various ways). That latter fact prevents making a particle too much of a point. Because, to have a short wavelength, it needs a high frequency, thus a high energy, and if that's high enough, it becomes a Black Hole, and, even worse, a Whole Hole (gravity falls out of sight, physics implodes).
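One can put a number on that limit: the wavelength stops shrinking roughly where the reduced Compton wavelength ħ/(mc) meets the Schwarzschild radius 2Gm/c², i.e. around the Planck scale. A back-of-the-envelope sketch:

```python
import math

hbar, G, c = 1.0546e-34, 6.674e-11, 2.998e8  # SI units

# Setting hbar/(m*c) = 2*G*m/c^2 gives m = sqrt(hbar*c / (2*G)),
# about the Planck mass; shorter waves than this would be black holes.
m_limit = math.sqrt(hbar * c / (2 * G))      # ~1.5e-8 kg
lam_limit = hbar / (m_limit * c)             # ~2.3e-35 m, near the Planck length

print(f"mass: {m_limit:.2e} kg, wavelength: {lam_limit:.2e} m")
```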

To a variant of the preceding, in "Solution: 'Is Infinity Real?'", Pradeep Mutalik says:

July 1, 2016 at 12:31 pm

@Patrice Ayme: It seems that you are making the exact same conflation of “the map” and “the territory” that I’ve recommended should be avoided. There is no such thing as the largest number in our conceptual model of numbers, but there is at any given point, a limit on the number of particles in the physical universe. If tomorrow we find that each fermion consists of a million vibrating strings, we can easily accommodate the new limit because of the flexible conceptual structure provided by the infinite assumption in our mathematics.


I know very well the difference between "maps" and territory: all of post-Riemann mathematics rests on it. Abstract manifolds (the "territories") are defined by "maps Fi" (such that Fi composed with Fj is itself a differentiable map from an open set in Rx…xR to another, the number of real lines R being the dimension). Instead of arrogantly pointing out that I have all the angles covered, I replied:

Dear Pradeep Mutalik:

Thanks for the answer. What limits the number of particles in a (small enough) neighborhood is density: if mass-energy density gets too high, according to (generally admitted) gravity theory, not even a graviton could come out (that’s even worse than having a Black Hole!)

According to Quantum Theory, to each particle is associated a wave, itself computed from, and expressing, the momentum-energy of said particle.

Each neighborhood could be of (barely more than) Planck radius. Tessellate the entire visible universe this way. If to each distinct wave one attaches an integer, it is clear that one will run out of waves, at some point, to label integers with. My view does not depend upon strings, super or not: I just incorporated the simplest model of strings.
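The count is finite but gigantic; here is a rough sketch of the tessellation, taking the comoving radius of the visible universe as about 4.4×10^26 m (an assumed standard value):

```python
R_universe = 4.4e26     # radius of the visible universe, m (~46.5 Gly)
l_planck = 1.616e-35    # Planck length, m

# Number of Planck-radius cells tiling the visible universe:
N_cells = (R_universe / l_planck) ** 3   # ~1e184 -- huge, but finite
print(f"{N_cells:.1e} cells")
```

So any labelling scheme grounded in physically distinct waves tops out near 10^184, which is the point of the argument above.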

Another mathematician just told me: 'Ah, but the idea of infinity is like that of God'. Well, right. Precisely the point. Mathematics, ultimately, is abstract physics. We don't need God in physics, as Laplace pointed out to Napoleon ("Sire, je n'ai pas besoin de cette hypothèse"). (I know well that Plato and his elite, tyrant-friendly friends and students replied to all of this that such objects were not of this world, a view known as "Platonism", generally embraced by mathematicians, especially if they are from plutocratic Harvard University… And I also know why this sort of self-serving, ludicrous opinion, similar to those of so-called "Saint" Thomas, a friend of the Inquisition, and various variants of Satanism, has been widely advocated by those who call for self-respect for their class of haughty persons…)

The presence of God, aka infinity, in mathematics, is not innocuous. Many mathematical brain teasers become easier, or solvable if one assumes only a largest number (this is also how computers compute, nota bene). Assuming infinity, aka God, has diverted mathematical innovation away from the real world (say fluid flow, plasma physics, nonlinear PDEs, nonlinear waves, etc.) and into questions akin to assuming that an infinity of angels can hold on a pinhead. Well, sorry, but modern physics has an answer: only a finite number.

Patrice Aymé



May 18, 2016

Demolishing The Quantum COPENHAGEN MISINTERPRETATION With Its Own Instruments:

The nature of reality fascinates true philosophers. Do we have to understand the Quantum to understand dreams? The naive will say no. But, well, in truth, probably. The brain is no analogue computer, it's a QUANTUM computer. So, to understand dreams, one has to try to understand the Quantum. However, to go deeper than the foundations of physics is, by definition, to suggest new physics.

I have said for years, nay decades, that Quantum Waves are real, and obsolete physics is not. OK, just kidding: obsolete physics is heavy. I should not joke: physicists are rarely amused by the foundations of physics: they know they don't work.

Universe Is Not Empty: It’s Full Of Stiff, Superluminal Quantum Waves

[The picture, made in 2013 by a fundamental physics institute in the Netherlands, was obtained by statistical sampling. Some call that technique a “Quantum Microscope”.]

Quantum Waves are of course real objects. Proof? Well, experimental proofs are coming.

However, I will roll out here a slick philosophical proof which I have seen, or even seen alluded to, nowhere. It's disarmingly simple, of the order, in baffling simplicity, of the celebrated, 26-centuries-old "this sentence is false" (a precise mathematical dressing of that brain twister is Gödel's first Incompleteness Theorem).

The first mention of the Copenhagen Interpretation was in Heisenberg’s 1930 book on Quantum Mechanics which, he wrote, “contributes somewhat to the diffusion of that ‘Kopenhagener Geist der Quantentheorie’ [i.e., Copenhagen spirit of quantum theory] if I may so express myself, which has directed the entire development of modern atomic physics”.

So here I am, fighting a "spirit" ("Geist"). (When confronted with the De Broglie-Bohm theory in the 1950s, the ex-Nazi Heisenberg called the "Copenhagen Geist" an "Interpretation"… a term he came to regret… Nowadays, people attached to sanity have to fight the "Many Worlds"/"Multiverse" Interpretation, a collective madness worse than smoking.)

Even the most closed minded physicist recognizes that (“elementary”) particles are (“somewhat”) real. In the Copenhagen Interpretation, the property of “wave” and that of “particle” are viewed as “dual” or “complementary” (one or the other).

However, the Copenhagen Interpretation then proceeds to contradict said duality. Indeed, if the wave-particle duality is correct (as the Copenhagenists claim), and if particles are real (something has got to be real!), then surely waves are real too.

However the Born Interpretation of the Quantum waves is that they are PROBABILITY waves. But a probability wave is not real. Hence a blatant, fuming, red hot, grotesque, contradiction.

This is an extremely elementary philosophical reasoning, however, it seems to have escaped ALL the physicists who considered the subject. (Do parrots think? Yes, they do… all the same.)

Reciprocally, if one admits that the real world is really made, somehow of particles, then the reasoning I just made suggests that the Quantum Waves are real.

Here is a completely independent demonstration of the latter: it turns out matter is mostly, all the time, launched in dynamical quantum processes. Actually, most of the mass is generated by the quick motions of quarks and gluons within hadrons, thanks to Poincaré's relationship, Energy = Mass ("E = mc²"). During these displacements, matter is under the form of Quantum Waves (or of dynamical quantum fields, as some will want to say, to sound real cool). An example is electronic orbitals in atoms: they have substance… because they are delocalized waves. Thus, matter is clearly made, 99.999% of the time, of delocalized quantum waves.
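The claim that most mass is dynamical can be checked against standard numbers: the rest masses of the proton's three valence quarks (approximate Particle Data Group values) account for only about 1% of its mass:

```python
m_u, m_d = 2.2, 4.7        # up/down current-quark masses, MeV (approximate)
m_proton = 938.3           # proton mass, MeV

rest_mass = 2 * m_u + m_d  # proton = uud
fraction = rest_mass / m_proton   # ~0.01: the other ~99% is quark/gluon dynamics
print(f"valence-quark rest mass: {100 * fraction:.1f}% of the proton")
```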

Patrice Aymé

Zero Point Energy Machine

December 22, 2013

Are there science fiction technologies nobody expects, over the horizon? Yes. Let me present the CASIMIR FORCE. And suggest, with my characteristic generosity, how to make an energy producing machine from it.

To understand what I propose to make, one has to visualize this: fundamentally, there are no particles, nor fields, only waves. (De Broglie's) Matter Waves (invented in 1923). Those matter waves are also all over. In particular, the waves are in between two electrically conducting plates, and outside them, too.

Between the plates there are fewer waves. Why? Those waves basically die when they touch an electrically conductive surface. So the only waves present between the plates are basically like strings attached on both sides (the exact same picture as the "harmonic oscillator"). Thus the frequencies permitted in between the plates are few. But outside the plates, the frequencies can be anything.
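The "strings attached on both sides" picture gives a discrete ladder of allowed frequencies between the plates, f_n = n·c/(2d). A sketch (the micron gap is illustrative):

```python
c = 2.998e8      # speed of light, m/s
d = 1e-6         # plate separation, 1 micron (illustrative)

# Nodes at both conducting plates => wavelength_n = 2d/n => f_n = n*c/(2d).
allowed = [n * c / (2 * d) for n in (1, 2, 3)]   # only this ladder inside;
print([f"{f:.3e} Hz" for f in allowed])          # outside, any frequency goes
```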

The waves are related to the phenomenon known as "particles": particles carry momentum. As they bounce around, they push. But fewer push from the inside. Hence there is a net force that pushes the plates closer.

Fewer Quantum Matter Waves Inside = Less Pressure, So Plates Squeeze Until More Photons Are Trapped Inside

Basically, one can say that the vacuum is more vacuous inside, between the plates, than outside. The resulting attraction was measured, and it's called the Casimir Force, after its discoverer (Casimir was a Dutch physicist).
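The measured attraction follows the standard parallel-plate formula F/A = π²ħc/(240 d⁴); at a 100 nm gap it is already a very tangible pressure:

```python
import math

hbar, c = 1.0546e-34, 2.998e8
d = 100e-9   # plate gap, 100 nm

# Standard Casimir pressure between ideal parallel conducting plates.
pressure = math.pi**2 * hbar * c / (240 * d**4)   # ~13 Pa
print(f"{pressure:.1f} Pa")
```

The d⁻⁴ scaling is why the effect is invisible at everyday separations and dominant at the nanoscale.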

This is often (a bit abusively) described as the "Zero Point Energy". (Einstein and a colleague introduced the notion of "Nullpunktsenergie" in 1913.) I used the term, though, because it makes an excellent slogan.

Can one generate energy from Zero Point Energy? Yes. Even gecko lizards do it.

How to go to the stars with ZPE? By generating motion. Motion will allow magnets to be moved, hence electricity to be generated. To generate motion, connect the plates to springs keeping them apart (but not so far apart that the Casimir force would disappear). Suppose the plates are made of semiconductor material. Plug in a base current turning them conductive. At that point, the Casimir force turns on, and the plates come closer. When they are close enough, cut the base current: the springs pull the plates back, tripping an escape interrupter, and the cycle repeats.
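The energy budget of one such cycle can be sketched from the standard plate interaction energy U(d) = -π²ħcA/(720 d³); the plate area and the two gap settings below are illustrative assumptions:

```python
import math

hbar, c = 1.0546e-34, 2.998e8
A = 1e-4                         # plate area, 1 cm^2 (illustrative)
d_far, d_near = 200e-9, 100e-9   # spring-set and closed gaps (illustrative)

def U(d):
    # Casimir interaction energy of ideal parallel conducting plates
    return -math.pi**2 * hbar * c * A / (720 * d**3)

W_cycle = U(d_far) - U(d_near)   # energy released as the plates close, ~4e-11 J
print(f"{W_cycle:.1e} J per closing stroke")
```

The yield per stroke is minute, which is consistent with the point below: harvesting anything useful would demand vast arrays of nanoscale devices cycling very fast.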

The rest, ladies and gentlemen, is just technological details, like building tiny enough magnets… But that could take a while, as it would require mastery of constructing nanoscale devices. Technology can take a long time to develop: it took 150 years to go from the discovery of the photovoltaic effect to the first practical photocells. The example of controlled thermonuclear fusion shows the same: it took 60 years to reach breakeven in pure controlled thermonuclear fusion energy production, in 2013… although poorly controlled thermonuclear fusion was demonstrated by spectacular explosions in the early 1950s (it was so poorly controlled that, because of a conceptual mistake, not understanding that Lithium 7 would breed tritium during the explosion, which then fused, the Castle Bravo explosion released 15 megatons of TNT, instead of the expected maximum of 5 megatons).

In any case, progress is being made in harnessing the Casimir Force.

In truth, the Casimir Effect is nothing new. It's all about waves. The Casimir Force, in a sense, was well-known to mariners of old. Two tall ships in a long swell, but without wind, parallel to each other, would be irresistibly attracted to each other, and crash, with catastrophic consequences.

Old Mariners Knew The Casimir Force All Too Well, as an indisputable danger.

The mathematics, and even the physics are exactly the same. This is a clue that, after all, Quantum Physics may be more natural than usually depicted. And it’s all very natural: geckos, the famous lizards who can run upside down along ceilings, actually use (a variant of) the Casimir Force.

In the title, I used “Zero Point Energy” (ZPE). This is a bit dishonest. Einstein himself so strongly disagreed with some forms of ZPE, that he refused to learn Quantum Electro-Dynamics (QED; although he had the best, most dedicated and patient, professor). However, the basic form, as displayed here, is completely uncontroversial (and an experimental fact).

Waves: Little Understood Yet

The only question is this: as the Casimir Force, following the recipe above, would provide enormous amounts of energy, with no pollution whatsoever, even in interstellar space, and as it looks to be just a question of precision nanotechnological construction to make it work, why is no massive technological ZPE program in evidence in the most advanced countries?

It's not like the CO2 is not building up, and the methane not exuding from the permafrost in humongous quantities. And it's not like the Chinese are not catching up, multiplying Jade Rabbits on the Moon, either…

Patrice Aymé


July 9, 2010


Warning: A subquantal conceptual big bang is applied to the Big Bang itself, sparks fly…


Abstract: Conventional Big Bang Theory depends upon some unproven physics at the Quantum level. Although experiments, so far, show Quantum Mechanics to be 100% true, there is a good reason to believe that this will not perdure. The problem with Quantum Mechanics is that it violates Nothing Instantaneous at Distance ("NID"), an undeclared physics metaprinciple which has always triumphed, ever since the ape came down from the tree to become man.

NID is not a physical law in the sense of the laws that allow one to make computations, but it has always been found to be true… until today's official formulation of conventional Quantum Mechanics, which blatantly violates it.

The author boldly sketches his own theory, which is driven by respect for NID. After rendering Quantum Mechanics obsolete, it is an easy task to dispose of one of the paradoxes of the present Big Bang Theory. We have nothing to fear, but fun itself. This is not for those whom the philosophy of Quantum Theory frightens.


Physicist Tamara Davis, writing in Scientific American, July 2010, tries to solve a paradox of Big Bang theory by getting rid of the law of conservation of energy. Quite a feat, since conservation of energy is exactly the deepest foundation of physics. Whatever I am going to do next to perspectives in physics in the present essay, it will not be as ridiculous. Thus encouraged, I will go boldly where no mind has gone before.

Dr. Davis exposes the problem this way: "Almost all of our information about outer space comes in the form of light, and one of light’s key features is that it gets redshifted—its electromagnetic waves get stretched—as it travels from distant galaxies through our ever expanding universe, in accordance with Albert Einstein’s general theory of relativity. But the longer the wavelength, the lower the energy. Thus, inquisitive minds ask: When light is redshifted by the expansion of the universe, where does its energy go? Is it lost, in violation of the conservation principle?"

She then advocates that the cosmological redshift can be thought of as a photon making many tiny little Doppler shifts along its trajectory. According to her, Doppler shifts do not represent a true loss of photon energy, only a change of perspective (from one galaxy to another, receding galaxy).

Verily, in the (creationist) Big Bang Theory, physicists said: "Let There Be Light!", and so all was light in the beginning. Some of the light, in Big Bang Theory (BBT), turned into matter; some kept on going, and we receive the latter now as a diffuse 3 degree Kelvin radiation.

In more detail, the paradox is this: say somewhat after the BB, some of the energy was light, E(L), and some was matter, E(M). E(L) will be made of a given number of photons, say N, with average energy A. So E(L) = N A. Now, according to Planck's inauguration of Quantum Mechanics, A = h V, where V is the average photon frequency. So E(L) = h N V. But, according to BBT, V goes down as the universe expands. Nowadays V corresponds to very, very cold light. But initially V was extremely, unimaginably incandescent gamma ray light. Thus E(L) has gone down from enormous to negligible! In other words, looking at BBT in the simplest fashion, a gross violation of energy conservation is in full evidence. Part of the problem is that in Einstein's Relativity, spacetime has no physical reality (in contradistinction with the old ether theory, old ether being what electromagnetic waves were supposed to wave). Thus the energy lost by Big Bang light cannot be transferred to something else, since the only thing around is spacetime, and spacetime has no reality (not that simple, see P/S 4).
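The size of the apparent loss is easy to quantify: each photon's energy E = hV falls by the same factor (1+z) ≈ 1100 by which the universe has stretched since recombination. A sketch using Wien's displacement law:

```python
h = 6.626e-34
f_peak_per_K = 5.879e10    # Wien's law, peak frequency per kelvin, Hz/K

T_emission, T_today = 3000.0, 2.725   # recombination vs the present CMB
E_emission = h * f_peak_per_K * T_emission   # typical photon energy then
E_today = h * f_peak_per_K * T_today         # typical photon energy now

ratio = E_emission / E_today   # ~1100: the factor each photon has "lost"
print(f"energy ratio: {ratio:.0f}")
```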

From these sorts of simple contradictions great scientific progress is made. Anybody could have pointed out to Aristotle that he had neglected air (or water) resistance. But one had to wait for Buridan, 17 centuries later, to do so, discovering Newton's First Law more than three centuries before Newton was born… But I digress…

Dr. Tamara Davis escapes with a pirouette. She simply states that laws such as energy conservation do not apply to the universe as such. (A useless pirouette it is, because a moment's reflection shows that energy conservation will then also be violated for arbitrarily small subsections of the universe.)

I have a simpler suggestion: to remake all of physics from scratch, while respecting the conservation of energy. Here is a sketch. First, I do away with localized particle trajectories. So, when a particle goes from X to Y, according to me, it is not localized. A particle is not a particle until it has been localized. Proof: well, first, we have no proof that they are localized, so why suppose they are, as most physicists, even many Quantum physicists, do? It's not because, as monkeys, we found 20 million years ago that stones were localized when flying towards our opponents, that this is still true when the stone is a particle of light. Actually, it is the opposite which is obvious.

The reasons to believe photons delocalize is the fact that photons (and all particles) take the entire geometry into account as they propagate: wherever they can go influences where they will end up. Propagating particles embrace the whole. They always end up in a particular place, but that place is computed by the implied order of the whole. This is the most basic idea in Optics, and Quantum Mechanics.

One way to partly say this is that light behaves as a wave. So light goes around a sphere from everywhere, goes through two slits, etc. The idea that light could be a wave came initially from Huyghens, but he did not have the wealth of examples that would be found in the next two centuries (Young's slits and Poisson's spot). This wave behavior is used in lenses.

So it was long anticipated that light would delocalize: a wave is intrinsically delocalized. So far, so good. Newton preferred to think of light as a particle (he was a great man, and wanted to be greater than Huyghens that way, so he had to contradict him!). It is easy to see why: the ancient Greeks had anticipated atoms, the smallest possible pieces. Newton just assumed there would be atoms of light. The experiments of Hertz, discovering the photoelectric effect, in combination with Planck's atomization of light energy (one now says "quantization"), led Einstein to suggest the "heuristic viewpoint" that therein was a proof that light was made of particles.

So wave or particle? The situation became more intriguing when photons (or, in general, particles) were fired in the apparatus (whatever it is), one photon (or particle) at a time. Photons (or particles) still behaved like waves.

The largest optical apparatuses (please excuse the Anglicized Latin grammar…) known are galactic clusters. They lens the light, using their formidable gravitation to do so. According to Einstein's theory of gravitation, light follows geodesics of spacetime, and those are bent by mass. (Newton's theory of light produces a gravitational lens effect of the same order, half the Einsteinian deflection, as Laplace, who predicted black holes using Newton's particle theory of light, would have pointed out.)
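For a point mass the Einsteinian deflection is α = 4GM/(c²b); at the Sun's limb this gives the famous 1.75 arcseconds, and the same formula, scaled up, describes galactic-cluster lenses. A sketch:

```python
import math

G, c = 6.674e-11, 2.998e8
M_sun = 1.989e30
b = 6.96e8     # impact parameter = solar radius, m

alpha = 4 * G * M_sun / (c**2 * b)     # deflection angle, radians
arcsec = math.degrees(alpha) * 3600    # ~1.75 arcseconds
print(f"{arcsec:.2f} arcsec")
```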

So far so good, but what does that mean? That means photons delocalize around galaxies themselves… since they interfere with themselves, around galaxies themselves. This, of course boggles the mind, so common minds do not like to consider the possibility. But there is no alternative.

Hence the atom of light, the photon, is, most of the time, quite far from being at a single point. Instead it can "localize" at points which are quasi-infinitely large geometries. The astute mathematician will be reminded of Alain Connes' "Non Commutative Geometry", where points can be spaces.

I say "most of the time" because a cosmological photon is out there for billions of years… in its delocalized state. I am just observing this, as it is. Most physicists, including the honorable Tamara above, represent photons following trajectories, as if they were Newton's particles. But they are not. Even Einstein made that elementary conceptual mistake (he did not need to go into the subtleties of the EPR thought experiment to find delocalization!).

Usual Quantum Mechanics is an abstraction of what is observed in human-sized laboratories. Recently, though, photon delocalization experiments were conducted over distances up to dozens of kilometers. The results respected the QM predictions scrupulously. However, I am persuaded that this will not be the case as the distances become astronomical. I have a reasoning for this that I borrowed from Newton: namely, nothing can be instantaneous and at a distance. Such was the objection of Newton to his own theory of gravitation (which was instantaneous, and at a distance; "Einstein's" theory of gravitation uses Faraday's field concept and the speed of light to address Newton's worries).

That nothing can be instantaneous and at a distance ("NID") was already the core of the Einstein Podolski Rosen paradox ("EPR"). EPR pointed out that "elements of reality", according to QM, could be spread out arbitrarily wide, and that makes no sense (because they took it for granted that everybody believed in NID).

However, the Bell inequalities were checked by Aspect and others, showing that this is exactly what happens. So, now getting inspiration from Henri Poincaré, I will paraphrase him faithfully: if something is exactly what always happens, then it is a law of nature. Poincaré brandished this metaprinciple to justify his postulate that the speed of light would always be measured to be c (this idea is attributed to Einstein, who actually read it in a book of Poincaré's; Poincaré, like Buridan, was French, so he could not possibly have had a deep idea, according to the Anglo-Saxon conspiracy which considers that French culture has to do with wine and cheese. Only).

So let's be clear: experimentally, and from its very formulation, Quantum Mechanics violates NID, Nothing Instantaneous at Distance.

Thus, if NID is made into a metaprinciple, one has to deduce that Quantum Theory, as it is, is false. Or, more exactly, incomplete, the way Newton's gravitation is incomplete.

In this view, to complete QT one has to do away with its instantaneous at a distance aspect, thus, one has to impose the existence of a SUBQUANTAL INTERACTION.

Some hypocrites will scream that I do not respect the metaprinciple of minimum logic ("Ockham’s razor"), that this is not worthy physical speculation anymore. But actually Big Bang Theory supposes an unobserved, and unobservable field, the inflaton. At least my subquantal field is observable, and I claim that a lot of the 3 degree Kelvin cosmological radiation is just such an observation (3K radiation actually sets detection level for detection of the subquantal field).

In truth, any and all Quantum processes are about widely spread elements of reality that QM claims instantaneously convert to the singular. This is the old "Collapse of the Wave Packet". I just say it proceeds at some speed, TAU (at least ten to the ten times the speed of light). To simplify, I will also hypothesize that this is the speed at which the linear quantum guiding wave also spreads. That wave is known as the Quantum Potential in David Bohm's refurbished version of De Broglie's guiding wave theory (oops, De Broglie was another Frenchman; he invented the full blown Quantum Theory in 1923; QM was attributed to others later, although de Broglie got the Nobel 6 years after writing his thesis).

The difference between me and Bohm is that I have no particle anymore, and the Quantum Potential spreads at TAU. The potential is actually a MATTER WAVE. De Broglie seems to have believed in the physics of matter waves all along (Schrodinger adopted the idea, but was subsequently ridiculed by over-smart types such as Von Neumann; Von Neumann claimed to have demonstrated that no hidden variables could underlie Quantum Mechanics, but it is increasingly understood that this is not correct). De Broglie had tried a particle-less theory too, the "double solution".

I have my own version of the "double solution". It exploits the instability of non linear waves. A stable non linear wave, such as a soliton, is a fine balance between linear dispersion and non linear singularization. I view elementary particles, including photons, as a dance between the two aspects: when there is linear propagation (at TAU), linear dispersion, what we hypothesize to be "particle" propagation, dominates. When the matter wave field becomes locally too strong, having interacted with a subquantal field, it singularizes itself, localizing itself in one point.

A number of thought experiments, and real, practical experiments with interfering very-low-intensity lasers, show that matter waves are real. The matter wave from one laser guides, through interference, the photons from the other laser.

OK, let’s back down from the conceptual edge and return to our cosmological photons. How do the guiding wave and its delocalization fit into all this? What does this theory of mine mean? As a photon’s linear guiding matter wave approaches a galactic cluster at TAU, imagine the scene: the linearized, delocalized photon matter wave, ten million light years across, bearing down on a galactic cluster at ten billion times the speed of light. The delocalized photon’s matter wave has a high probability of encountering a (still hypothetical) graviton’s matter wave, or the matter wave of some other particle hanging around the cluster.
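To give a sense of scale, here is a back-of-the-envelope computation of the scene above. It is only a sketch under the speculative figures already stated in this post: a guiding wave ten million light years across, and a collapse front sweeping it at TAU = 10^10 c (a hypothesized lower bound, not established physics):

```python
# Back-of-the-envelope: how long would a delocalized photon matter wave,
# spread over ten million light years, take to collapse if the collapse
# front propagates at TAU = 10^10 * c? Both figures are the speculative
# values used in the text above.

LIGHT_YEAR_S = 365.25 * 24 * 3600   # one light year expressed as seconds of light travel
WAVE_EXTENT_LY = 1.0e7              # assumed extent of the guiding wave, in light years
TAU_OVER_C = 1.0e10                 # assumed collapse speed, as a multiple of c

# Time for the collapse to sweep the whole wave, in seconds:
collapse_time_s = WAVE_EXTENT_LY * LIGHT_YEAR_S / TAU_OVER_C

print(f"Collapse time: {collapse_time_s:.0f} s (~{collapse_time_s / 3600:.1f} hours)")
```

So even at ten billion times the speed of light, sweeping a ten-million-light-year wave would take on the order of nine hours, rather than being instantaneous as in the Copenhagen picture.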

The sudden local nonlinearity in the photon’s guiding wave leads to a collapse of said delocalized, linearized photon. Then the photon will suddenly singularize, namely appear and interact somewhere. However, over the cosmological distances the delocalized photon was spread about, NID says that some of the photon will be unable to singularize in the spot where the singularity started. Thus a distant piece of the delocalized photon will get separated from the rest of the singularizing photon, and hang around as cosmological flotsam. The photon will have reddened. In the next cycle, the photon, now a bit weakened, will delocalize again, and repeat the process. If this is correct, and the mean free (delocalized) path of cosmological photons varies (according to whether they pass near regions full of matter), photons flying mostly through extremely empty space will be more redshifted (which is contrary to common sense, and will compete with the fact that photons zigzagging within clusters get redshifted just from said zigzagging; so the two effects will have to be carefully distinguished).
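The cumulative reddening this mechanism implies can be sketched with a toy model. Suppose each delocalize-and-collapse cycle strips a fixed fraction f of the photon’s energy as flotsam; after n cycles the surviving energy is E0(1 − f)^n, giving a redshift z = (1 − f)^(−n) − 1. The values of f and n below are purely illustrative assumptions, not measurements:

```python
# Toy model of the reddening described above: each delocalization/collapse
# cycle leaves behind a fixed (hypothetical) fraction f of the photon's
# energy as "cosmological flotsam". After n cycles the surviving energy is
# E0 * (1 - f)**n, so the accumulated redshift is z = (1 - f)**(-n) - 1.

def redshift_after_cycles(f: float, n: int) -> float:
    """Redshift z accumulated after n collapse cycles, each losing fraction f."""
    return (1.0 - f) ** (-n) - 1.0

# Illustrative numbers: a tiny loss per cycle, many cycles over cosmological distances.
z = redshift_after_cycles(f=1e-6, n=1_000_000)
print(f"z = {z:.3f}")   # roughly e - 1 = 1.718, since f * n = 1
```

The point of the toy model is that the observed redshift would depend on the product f·n, i.e., on how many collapse cycles a photon undergoes along its path, which is exactly why the mean free delocalized path matters in the paragraph above.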

Some will say that my theory violates relativity in spirit, if not in outright computation. Sure. That Relativity’s equations have proven extremely precise, for example for GPS, says nothing about whether the theory remains obviously valid at the scale of galactic clusters.

Anyway, there is much more to say, but not today.


Patrice Ayme


P/S 1: So, if light ages, for (sub)Quantum reasons, is the Big Bang completely false? Well, I do not know. Galaxies very far away seem younger, and all Quasars are very far away (at least 800 million light years), showing that, even if there was no Big Bang, the universe has, in any case, been changing, if not aging.

A reason to be extremely suspicious of the Big Bang is that the prima facie evidence for it, the expansion speed of the universe, is truly unknown: supernova studies have shown it to be (incredibly!) accelerating (to be confirmed!). Moreover, conventional Big Bang Theory has to hypothesize inflation, an expansion of the entire universe at a gigantic multiple of the speed of light. Differently from mine here, that reasoning is ad hoc, and not from first, time-honored principles. My motivation, as Nietzsche would insist, comes from the highest principles, saving the principle of energy conservation and NID, whereas the motivation of Big Bangers is as low as it can get, because they had to invent a field to save their creationism.

P/S 2: The famous Dirac pontifically declared in his textbook that photons interfered (ONLY!) with themselves. But that was before the invention of lasers, which made it possible to demonstrate that this statement was not correct. The fact that matter waves are real, if 100% confirmed, will probably be viewed, in the future, as the greatest discovery of Twentieth Century science.

P/S 3: The sketch of a theory above was presented to some of the heroes of physics (LdB & RF).

P/S 4: "Spacetime is no real substance", hardcore relativists love to claim, sounding a bit like hardcore Muslims about the moon. But this is not clear, even in conventional Relativity. Indeed, spacetime can wave. By shaking the source of a field, any field with a finite propagation speed, one can shake said field at a distance, and thus shake an object responsive to said field, at a distance, after a while. Thus a field with finite propagation speed carries energy away, and Einstein’s gravitational field does not escape that rule.

But Planck had discovered that electromagnetic energy was quantized, i.e., made of lumps: quanta, particles. Those particles are called photons. By logical simplicity, one assumes the same for gravitational energy. Hence the prediction of gravitons, in analogy with photons. But now gravitons are supposed to be particles like any other boson. Do they make spacetime, or not? Do photons make the electromagnetic field, or are they just its quantal manifestations? Thus the question of spacetime as a real substance becomes the question of the reality of the electromagnetic field as a real substance.

Simple questions, deep answers still unknown: for more than 30 years, it was obvious that potentials (by contradistinction with fields) could have a direct effect, appearing as they do on the right-hand side of the Schrodinger equation (which came from de Broglie). But one had to wait for Bohm and his student Aharonov to notice that (and it was immediately verified experimentally). By then the American-born Bohm had been banned from the good old USA (for practicing all-too-advanced philosophy, apparently a no-no for the US Congress and Princeton University, in spite of Einstein wanting him as an assistant)…

P/S 5: The theory above in particular, and Quantum Theory in general, have absolute bearing on what philosophers call ontology, the study of existence. Indeed, Bohm’s posthumously published last book was: "The Undivided Universe: An Ontological Interpretation of Quantum Theory" (1993).

P/S 6: Naturellement, the theory above applies to gravitons (expected to exist by the field-wave-particle principle). So gravitons ought to age, hence weaken, as they get away from dense sources of matter, far out, for the same reason as the photons above.

This may relate to "Dark Energy": if there is less gravitational force to block the expanding force, expansion will accelerate. Notice in passing that this subquantal field of mine, which is each propagating particle, is expanding tremendously, at TAU (>>> c). So we may have the reason for the expansion of the universe below our noses, or more exactly between our eyelashes, as we see light waves interfere there… Any Quantum propagation is an inflationary universe, reduced to its simplest case, with, de facto, zero gravity (otherwise an interaction with a graviton would bring decoherence).

In the present morass of General Relativity, gravitons are supposed not to interact with themselves (which makes no sense: they would be the only such particles). Speaking of morass, I did not stoop to mentioning the Copenhagen Interpretation (where TAU is hypothesized to be infinite, among other radical simplifications), or the Many-Worlds Interpretation (obviously a schizoid absurdity).