Archive for the ‘Physics’ Category

Mathematical Beauty, Physics, And Truth

June 23, 2018

Mathematical beauty can guide physics: this is what happened when Dirac founded QED. At least, so it looks at first sight, and so he said. However, Dirac was guided by an intuition deeper than “beauty”: finding an equation of maximum simplicity to describe the electron. Knowing that the Klein-Gordon relativistic equation didn’t describe the electron, his search was guided by looking for a simpler, first-order PDE that would still be “relativistic”. Then see what happened. He knew that the simplest wave equations are first order (although conventional vibrating strings obey second-order PDEs). Doing so, Dirac unknowingly re-invented part of Cartan’s spinor theory, a pure mathematical theory invented 15 years earlier. The Dirac equation he found led to experimental predictions, which were found to be true.
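
For the record, the contrast Dirac exploited is easy to display in standard textbook form (natural units, ℏ = c = 1):

$$\left(\partial_t^2 - \nabla^2 + m^2\right)\phi = 0 \qquad \text{(Klein-Gordon: second order)}$$

$$\left(i\gamma^\mu \partial_\mu - m\right)\psi = 0 \qquad \text{(Dirac: first order)}$$

The price of first order is that the coefficients must satisfy γ^μγ^ν + γ^νγ^μ = 2η^{μν}, which no ordinary numbers can do; only matrices can. So ψ is forced to be a multi-component object: a spinor.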

General Relativity too had a mathematical origin: Riemann, in the 1860s, got the idea that force would manifest itself as a deviation of geodesics. The idea is actually even older, in 3 dimensions, going back to Buridan (1350). That’s how Buridan superseded Aristotelian physics with his “impetus” theory (the first order of the mechanics we have now).

Special Relativity was invented differently: a number of equations were found to explain observed effects, until Poincaré built a coherent logical whole resting on the idea that the speed of light should always be measured to be c. In particular, electromagnetism was found to be the essence of Relativity.

The picture is from CERN. The waves are from beaches of Western North America. Ultimately, it seems likely to me that nonlinear phenomena are needed to understand hydraulics in full. But present-day hydraulics, like Quantum Physics (away from collapse), is linear…

So the opposition is not so much between mathematics and physics as between shallow ideas and deeper ideas. Physicists have had no deeply new ideas, ideas which can stand-under, understand, for generations. Much of that has to do with denying that the Foundations of Quantum Physics are worthy of consideration.

Mathematical beauty can guide physics: but who guides mathematical beauty? 23 centuries ago, the mathematicians then in power decided that Euclidean mathematics was beautiful, and that non-Euclidean mathematics (invented prior) was ugly. Let’s not talk of the ugly anymore, or, at least, of the too complicated, they opined. After a few generations of pounding that notion, it became the claim that nothing existed in geometry but the beauty of geometry in a plane. Mathematicians got so dumb they forgot that the axiom of parallels was just an axiom, not a theorem (they tried to demonstrate it for nearly 20 centuries, whereas it would take ten seconds to explain to them what idiots they were, had they a brain in that direction…)

Indeed, never mind that Pytheas of Marseilles and his successors had, thanks to spherical geometry, computed the size of the rounded Earth most precisely. So, clearly, mathematics on a sphere was extremely useful! In particular, true, and in existence!

Some say equations are beautiful. Equations themselves are subject to interpretation. For example, Henri Poincaré’s E = mc², rolled out at the Sorbonne in 1899, is not clear. Similarly, Einstein’s GR equation, basically Curvature = Mass-Energy, is not clear, as Einstein pointed out: the right side is ill-defined. After Dirac discovered his equation, he realized it had to live in “spinor space”. So interpreting an equation gave the space where it had meaning.
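
For reference, the two equations in their standard modern form:

$$E = mc^2, \qquad R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu}$$

In the field equation, the left side is pure geometry (curvature), while the right side, the stress-energy tensor T_{μν}, has to be imported from non-gravitational physics: that is the ill-defined side Einstein complained about.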

***

Right now the most fundamental problems in mathematics and physics are clear to yours truly:

First, mathematics uses an infinity axiom, namely that infinity exists. In the formal language of the Zermelo–Fraenkel axioms, the axiom reads: there is a set I (the set which is postulated to be infinite), such that the empty set is in I, and such that whenever any x is a member of I, the set formed by taking the union of x with its singleton {x} is also a member of I. Such a set is sometimes called an inductive set.

https://en.wikipedia.org/wiki/Axiom_of_infinity
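
In symbols, the axiom quoted above reads:

$$\exists I\,\Big(\varnothing \in I \;\wedge\; \forall x\,\big(x \in I \rightarrow (x \cup \{x\}) \in I\big)\Big)$$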

This Infinity Axiom, in my opinion, supposes too much, relative to the physical and practical realms, just as Euclidean geometry supposed too much relative to the practical and physical realms. Indeed, in practice, flat geometry does not exist. Same for infinity: in practice, infinity cannot exist (there are not enough particles to count all the numbers). The Infinity Axiom introduces infinities in physics which are a mathematical artefact. This philosophical point is too hard for most top theorists to understand, the ones the Wall Street Journal is in love with (because, if there are leading minds officially sanctioned in physics, posing as higher principles, so it is in economics and sociology, hence plutocracy is rightfully supreme; see below).

Second, Quantum Physics is about WAVES. This enormous conceptual breakthrough was Louis de Broglie’s. Waves are beautiful, especially Quantum Waves. Yet, in practice, waves are NOT linear. They are often nearly linear, true, but not quite (just as Euclidean geometry doesn’t quite exist, except as a figment of the imagination, and even then…). However, present-day mathematics has not focused on nonlinear waves, so we don’t have a notion of “mathematical beauty” of nonlinear waves.
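
A minimal numerical sketch of that point, using Burgers’ equation, u_t + u·u_x = ν·u_xx, chosen here only as the simplest nonlinear wave toy (no claim that it models Quantum collapse): evolve two wave packets separately and together, and superposition fails.

```python
import numpy as np

def evolve_burgers(u0, nu=0.05, dx=0.05, dt=0.001, steps=2000):
    """Evolve u_t + u*u_x = nu*u_xx on a periodic domain (explicit finite differences)."""
    u = u0.copy()
    for _ in range(steps):
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)         # centered u_x
        uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2   # centered u_xx
        u = u + dt * (-u * ux + nu * uxx)
    return u

x = np.arange(0, 10, 0.05)
u1 = np.exp(-(x - 3.0) ** 2)        # one wave packet
u2 = 0.5 * np.exp(-(x - 7.0) ** 2)  # another wave packet

# A LINEAR equation obeys superposition: evolve(u1 + u2) == evolve(u1) + evolve(u2).
mismatch = np.max(np.abs(evolve_burgers(u1 + u2)
                         - (evolve_burgers(u1) + evolve_burgers(u2))))
print(f"superposition violated by up to {mismatch:.3f}")  # nonzero: the wave is nonlinear
```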

And guess what? The formalism of quantum Physics itself says that the “collapse” it can’t do without is nonlinear.

And now for a word of wisdom from that rather tall little thief friend of ours, Richard P. Feynman: “Physics is to math what sex is to masturbation.” There has been too much self-dealing in physics, too much nonsense at the highest level! Bohr’s philosophy, which underlay his satisfaction with the Copenhagen Interpretation of Quantum Physics, is a surrealistic horror: he thought that clarity contradicted truth (or some idea to this effect… actually the exact contradiction of the beautiful idea of the equation).

Want new physics? Do like Buridan, Oresme, and their friends and students, seven centuries ago: invent new mathematics (they invented the second page of calculus; the first one was from Archimedes himself, 16 centuries before). That’s done by working on the axioms, introducing new ones.

So when is a system of thought X deeper than another Y? When X implies Y, by under-standing it, namely introducing deeper (“under”) reasons for its standing.

String Theory has been the equivalent of the crystal spheres and epicycles construction, which replaced the evidence all could see: that Earth, the small thing, turned around the Sun, the big thing (the Greeks knew from computations, looking at the Moon and shadows, that the Sun was millions of kilometers away…). Right now the big thing is Quantum Collapse; that’s what needs to be understood. String Theory does a few things, like cancelling some infinities that were a problem (my proposal above is much more radical… also, unavoidable…).

Meanwhile, while those self-esteemed super brains make super theories of supersymmetries of superstrings (their concepts involve the word “super” very much…), to make a theory of Quantum Gravity, little Patrice has noticed this: there is NO experiment, and, a fortiori, no theory, of gravity in the double slit… Why? Because the super minds, too busy being super, have not noticed that we lack experiments there (after they read this, they will steal the idea, and run to the closest physics journal edited by their friends to publish it as their great insight).

Patrice Ayme

***

***

Note 1: the preceding was inspired by the following WSJ article:

“Einstein’s character was more like that of an artist than a scientist, his older son, Hans, said: The great physicist reserved his highest praise for theories that are beautiful, rather than ones that merely fit the facts. When, in the latter half of his career, Einstein spent most of his time trying to discover a unified theory of gravity and electromagnetism, he paid little attention to new experiments and focused mainly on trying to find the best mathematical structure. Alas, the strategy got him nowhere.

According to the physicist and prolific blogger Sabine Hossenfelder, Einstein and others who work in a similar way are “lost in math,” the title of her lively and provocative book. Until the early 1970s, few theoreticians fitted such a description—most of them were taking inspiration from the results of experiments. It was this strategy that led them to the so-called Standard Model, which describes the inner workings of atoms with remarkable success. Over the past four decades, however, theoretical physics has gone astray, in Ms. Hossenfelder’s view. Part of the problem, she feels, is that so many theoreticians have allowed themselves to be seduced by the aesthetic appeal of mathematical theories that are going nowhere.

As she explains, the use of beauty as a proxy for truth has an impressive pedigree: Not only was it espoused by Einstein, it also became the obsession of the almost comparably brilliant English quantum physicist Paul Dirac. In 1975 he wrote: “If you are receptive and humble, mathematics will lead you by the hand . . . along an unexpected path, a path where new vistas open up . . . from which one can survey the surroundings and plan future progress.” Toward the end of his life, he declared that any theoretical physicist who disagreed with him should give up research and do something else.

As a result of this misguided focus on beauty, Ms. Hossenfelder says, her generation of theoretical physicists has been “stunningly unsuccessful.” The multiverse—the idea that our universe is only one of a vast number—is one of the fashionable concepts that she believes is a dud… 

Ms. Hossenfelder believes string theorists are deluded. “Nature doesn’t care” about mathematical beauty, she declares. Clever physicists have been led up the garden path before, she stresses, pointing to the once-fashionable theories of the ether that Einstein later demonstrated to be redundant.

Ms. Hossenfelder has paid a high price for her counter-orthodoxy…”

And the WSJ concludes by discreetly celebrating the Führerprinzip which Hossenfelder violated:

“The best string theorists are confident that they are heading in the right direction not only because of the theory’s mathematical beauty but because of its huge potential, despite its formidable challenges.

When Ms. Hossenfelder reiterates in her final chapter that many of the world’s most accomplished theorists are “lost in math,” we cannot help wondering whether it is she who is lost. Time will tell whether many of the world’s leading theoretical physicists have spent decades barking up the wrong tree. Meanwhile, it is pleasing to read that Ms. Hossenfelder now has a research grant and has resumed work on the subject she plainly cares deeply about, no doubt steering well clear of what she regards as bandwagons. In that respect, at least, Einstein would have been proud of her.”

***

***

After the plutocratic horror of the critique above, I must re-establish some justice for Sabine (and myself, indirectly). Here is Nature:

Lost in Math: How Beauty Leads Physics Astray. Sabine Hossenfelder. Basic (2018).

“Why should the laws of nature care about what I find beautiful?” With that statement, theoretical physicist and prolific blogger Sabine Hossenfelder sets out to tell a tale both professional and personal in her new book, Lost in Math. It explores the morass in which modern physics finds itself, thanks to the proliferation of theories devised using aesthetic criteria, rather than guidance from experiments. It also charts Hossenfelder’s own struggles with this approach.

Hossenfelder — a research fellow specializing in quantum gravity and modifications to the general theory of relativity at the Frankfurt Institute for Advanced Studies in Germany — brings a trenchant new voice to concerns that have been rumbling in physics for at least two decades. In 2006, Lee Smolin’s The Trouble with Physics and Peter Woit’s Not Even Wrong fired the first salvos at the trend of valuing mathematical elegance over empirical evidence. Both books took on string theory, a ‘theory of everything’ in which the fundamental constituents of nature are strings vibrating in many more spatial dimensions than the familiar three. Since its entry into mainstream physics in the mid-1980s, the theory has failed to make predictions that would unambiguously verify or falsify it.

Hossenfelder, too, tackles string theory, but her broadsides are more basic. She points to the paucity of experimental data, exacerbated as the machines needed to probe ever higher energies and smaller distances become more costly to build. Given that, she is worried that too many theorists are using mathematical arguments and subjective aesthetics to judge a theory’s validity.”

By the way, my own theory of Quantum Foundations predicts Dark Matter and Dark Energy… It also predicts unpredicted mass behavior, in contradiction with Einstein, in, say, the 2-slit experiment… namely a dispersion of mass during translation…

Here is more of Nature:

For example, Hossenfelder questions the desire for naturalness — the idea that a theory should not be contrived or have parameters that have to be fine-tuned to fit observations. The standard model of particle physics feels like such a contrivance to many physicists, despite its spectacular success in predicting particles such as the Higgs boson, discovered at the Large Hadron Collider (LHC) at CERN, Europe’s particle-physics laboratory near Geneva, Switzerland. In the theory, to prevent the mass of the Higgs from ballooning beyond reasonable bounds, certain parameters have to be set just so, rather than be derived from first principles. This smacks of unnaturalness.

To get rid of this ugliness, physicists developed supersymmetry — an elegant theory in which every known particle has a hypothetical partner particle. Supersymmetry made the Higgs mass natural. It also showed how three of the four fundamental forces of nature would have been one at energies that existed shortly after the Big Bang (an aesthetically pleasing scenario). It even unexpectedly provided a particle, the neutralino, that could explain dark matter — matter that is unseen, yet thought to exist because of its observed gravitational effect on galaxies and galactic clusters. Hossenfelder explains that in combining everything that theoretical physicists value (symmetry, naturalness, unification and unexpected insights), supersymmetry has become “what biologists fittingly call a ‘superstimulus’ — an artificial yet irresistible trigger”.

CONSCIOUSNESS, ATOM OF THOUGHT, Atom of Computing: All Found In Electrons?

May 7, 2018

Consciousness: we know we have it, we know many other animals have it, but we don’t know what it is.

Before we can answer this, a question naturally arises: what is it, to know what something is? What is it, to be? “To be” is something our consciousness knows, when it perceives it. But we also need to know when something “is”, to know when, how, and if our consciousness is.

In order to simplify our thinking on this arduous subject, existence entangled with consciousness, consider our most fundamental, hence simplest, theory. Consider Quantum Physics. Surely “existence” is defined there, as Quantum Physics deals with what is most fundamental. Take the simplest examples: photon, electron. What is an electron? In Quantum Physics, an electron is what one electron does. Isn’t that enlightening?

Shouldn’t consciousness be, what consciousness does?

Initially, electrons were just negatively charged particles. At least, so it was until Bohr. Then the description of the electron became much more complex. It turned out that electrons occupy only certain energy levels. Then came De Broglie, who said electrons did as the waves he attached to them did. And it was found, indeed, that electrons did so. PAM Dirac then proposed the simplest “relativistic” equation for the electron (a more complicated, second-order PDE had been proposed before, and couldn’t be made to predict what was observed). That required something called “spinor space”… The equation in turn predicted electron spin and the anti-electron, and both were observed.

(Important aside: the French mathematician Cartan had invented spinors earlier, in pure geometry. Yes, invented: he built, in his brain, the relevant neurological connections, that is, the relevant geometry.)

Thus what we now call the electron has become higher-dimensional in logical space (logical space is the space spanned by independent axioms; I just made that up; it means there is a connection between logic and geometry… thus, in particular, between arithmetic and geometry…).

By adding axioms to its description, the concept of electron has become richer… The electron is a richer concept in our consciousness.

Confronted with 2 slits, the electron acts as if it were choosing where to go after them. Is that, not just a computation, but a primitive form of consciousness? Is that what consciousness is made of? Hard to say for sure, at this point, but it is certainly a guess worth exploring: any theory of consciousness may have to take into account that the electron acts as if it were conscious.

We evolved as living beings, and the more complex we became, the more conscious. Jean-Baptiste Lamarck’s law of increasing complexity applies to, and is exemplified by, the evolution of consciousness. Consciousness is probably a law of physics, not an accident of history.

Some say: ‘oh, well, consciousness may not be that important’. Well, first, at least three different phyla evolved it, independently, on Earth, vertebrates being only one of them. (As all trout fishers know, trout act as if they were conscious; that’s why the experienced ones are so hard to catch, when the water is clear…)

But there is a much deeper objection to considering consciousness unimportant: what is the connection of consciousness to thinking? Could the atom of consciousness be the atom of thinking…. And precisely defined as Quantum Computation?

Indeed, consider programming as presently done with electronic computers: one thing after the other, just very fast; yet it is fundamentally, desperately dumb. Present-day computing, pre-Quantum Computing, can result in desperately slow computations. Whereas the electron can compute instantaneously (says a hopefully naive Quantum theory) problems too complicated for our (pre-Quantum!) computers to handle, and find out where the low-energy solution is. That’s the superiority of Quantum Computing: tremendous, instantaneous, stupendous computation, right.
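
To make “desperately slow” concrete, here is a minimal sketch of what a classical, one-thing-after-the-other computer must do to find a low-energy solution: enumerate every one of the 2^n configurations of a small toy spin system. The couplings are invented random numbers; the point is the exponential wall, not the physics.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 16                       # 2**16 = 65,536 states; n = 60 would already be hopeless
J = rng.normal(size=(n, n))
J = (J + J.T) / 2            # symmetric toy couplings (an invented spin-glass instance)

def energy(s):
    """Ising-style energy E(s) = -s.J.s for spins s_i = +/-1."""
    return -s @ J @ s

# "One thing after the other": check every configuration sequentially.
best = min((np.array(bits) * 2 - 1 for bits in itertools.product((0, 1), repeat=n)),
           key=energy)
print("brute-force ground-state energy:", energy(best))
# Each added spin DOUBLES the classical work: 2**n sequential energy evaluations.
```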

So, what looks like a type of consciousness, found in the translating electron, is not just an incredibly efficient way of computing, it is at the core of the efficiency of the world. Could it be the most primitive form, the atom of thinking?

Identifying fundamental quantum and fundamental thinking is an idea whose time has come… Philosophically speaking, in the most practical manner, it means that discursive logic will never cover the last mile…

Patrice Ayme

***

***

Very Tangential Observations:

  1. Albert Einstein ascribed properties to the photon, and the electron, which, I claim, have not been observed (thus leading physics astray, straight into the Multiverse). However, the later formalism sort of implemented Einstein’s design (which is older than Einstein), attributing (sort of, or maybe not) a strict position to elementary particles… and was found to give excellent results (namely QED, QCD, the “Standard Model”…). But Ptolemy, too, gave good results. Thus, now, elementary particles are endowed with properties which, if I am right, are fake… It has often happened in science that a fake, or grossly incomplete, theory masquerades as true for a very long time: math is full of them (non-Euclidean geometry, etc.).
  2. The example of non-Euclidean geometry is revealing: it was abandoned for brain-dead Euclidean geometry… Why did those Hellenistic-regime Greeks opt for that silly form of mathematics? Because their superiors, various kings and tyrants, preferred silly. Because geometry in the plane was easier: a case of looking for the keys only below the lamppost, because it’s simpler, and one is drunk. Let’s not repeat the mistake of having only simple thoughts, in the case of pondering consciousness, just because our superiors prefer simple thoughts, and are drunk on their power… soon to be extinguished in great balls of nuclear fire…

LOGIC IS MATERIAL

April 11, 2018

Logic doesn’t just matter, it is matter.

FUNDAMENTAL PROCESSES, INCLUDING COMPUTATIONS, LOGIC, ARE MATERIAL OBJECTS:

Is there something besides matter? No. What is matter? Ah, two types of things, corresponding to wave-particle duality… Or, as I put it often, process-quanta duality.

***

We should have come a long way in 24 centuries, yet some keep repeating the ideas of Plato, an Athenian plutocrat. Plato (and his teacher Socrates, and student Aristotle) had an extreme right-wing agenda, much of it pursued later in the “Hellenistic” regimes (dictatorships), the imperial fascist Roman Principate, and the rage against innovation. Plato’s metaphysics has much in common, if not everything, with Christianism (this explains its survival…)

And now for a word from this essay’s sponsor, the gentleman contradicting me. Robin Herbert replied to me: …“many don’t seem to grasp that the classical logics are not tied to any physical assumptions… I think the problem is that we have this term “classical physics” and another term “classical logic” and people think they are related. They aren’t.”

Are we that stupid? I guess, our enemies wish we were…

***

Only those who have never heard of Platonism would not be familiar with the notion that logic is not “material”: it is at the core of Plato’s view of the universe. And also at the core of Christianism, so help me not god!

I beg to oppose the dematerialization of logic. Differently from Plato, I have careful observation of nature, Quantum theory, the mechanics of atomic theory, to back me up. Frankly, relative to what we know now, Plato is an ignorant twerp. So why the reverence for his antique antics? My unforgiving mood is driven in part by the observation that the Ancient Greeks had plenty of holes in their axiomatics… especially in mathematics (where they made several ludicrous mistakes, such as forgetting non-Euclidean geometry, generations after discovering it).

If logic is not tied to “physics”, or to what’s material, we want to know what it is tied to. But, as I am going to show, all we then do is go back to the Gospel of John as the ultimate authority (itself straight out of Plato!)

Twentieth Century physics has revealed that physics is made of “Fundamental Processes” (see the very nice, pre-QCD book by that title from Feynman)… and Quanta. The former, the processes, are described by waves; the latter, those lumps of energy, by particles.

Thus, saying that “logic is not physics” is tantamount to saying that logic is neither a fundamental process (or set thereof), nor quanta (or set thereof).

Orbitals of an electron around a proton (the hydrogen atom), visualized in 2013 (Phys. Rev.). What you are looking at is one electron, when it is delocalized. The electron is the cloud. The cloud is a process. The process is what an atom of hydrogen is, 99.9999999% of the time… at least…

There are several problems with such a claim: far from being immaterial, any logic shows up as quanta (aka “symbols”), and is itself a process (classical logic rests on implication, the simplest process: “if A then B”, and chains therefrom). Logic shows up as nothing else, so that’s what it is: a bunch of fundamental processes and quanta. This is the modern philosophy of physics, in action! (It originated with Newton and Laplace, and was then amplified by Jules Henri Poincaré.)

There was a famous exchange between Heisenberg and Einstein; the latter, at the peak of his glory, accused the young Quantum physicist of having put only observables in his matrix quantum theory. Heisenberg coolly smirked back that it was Einstein who had taught him to do so! (Constructively infuriated, ten years later Einstein rolled out the EPR thought experiment, alleging a contradiction between Quantum Mechanics and LOCAL “elements of reality”. The effect was relabeled “entanglement” by Schrödinger, and is now the central notion in Quantum theory… Einstein should have realized that it was this very delocalization which made atoms wholes…)

So what’s “material”? What’s observable! And what is observable? (Delocalized) fundamental processes and (localized, yet ephemeral) quanta. Claiming that the logos is neither is (implicitly) done in the first sentence of the Gospel of John, and John adds that its name is god. We of the natural school shall excommunicate those evoking god. Those who claim that “logic”, the logos, escapes nature (= physis) are just followers of whom John followed, namely Plato. They are Platocrats, a particular prototype of plutocrats…

Fundamental processes are described by equations, but that doesn’t mean the equations are “real”, beyond symbols (“quanta”) of a medium. First of all, equations are approximations: a classical computer can only make a finite number of operations (differently from a full Quantum computer, which works with a continuum, the circle S¹). Instead, what is really real is the fundamental process(es) the equations approximate.

Indeed, consider atoms: they are real, “indivisible” (sort of)… and yet mostly made of delocalized processes known as electronic orbitals.  It is the delocalization which creates the substance: see the picture above… 

So is a classical computation a real object, in the aforementioned sense? Yes, because it is a FINITE set of fundamental processes (moving electrons and photons around). However, if the proposed computation, or logical deduction, takes an infinite amount of time, it becomes something that never comes to exist. (That’s an allusion to a classical computer trying to duplicate Quantum computers; in the case of the chlorophyll molecule, no classical computer could do what the molecule, viewed as a Quantum computer, does!)

In this view, call it material logic, time, whether we want it or not, whether logicians realize it or not, is an essential part of logic: the time-energy principle de facto granulates time (we need infinite energy for infinitely small time intervals, hence for would-be infinite logical computations). To say time is not part of logic is another of these oversights (as Archimedes made, implicitly using what non-standard analysts, Robinson et al., called the “Archimedes Axiom”, which excludes infinitely small (or large) integers). Any piece of logic comes with its own duration, namely how many steps it needs in its simplest form.
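
The granulation invoked here is the standard time-energy relation,

$$\Delta E\,\Delta t \;\gtrsim\; \frac{\hbar}{2},$$

which prices a logical step confined to a time interval Δt at an energy of order ℏ/(2Δt): infinitely fine time slicing, hence an infinite logical computation in finite time, would demand infinite energy.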

Quantum computing uses one (hypothesized) infinity: the assumed instantaneity of what I call the Quantum Interaction (aka Quantum Collapse). That enables one to delocalize Quantum logic (no distributive law of propositional logic!), as delocalized Quantum processes, and this is why it can’t be classically duplicated (aka “Quantum supremacy”).

Happy processes!

Patrice Aymé

Dwarf Galaxies Contradict Standard Cosmology, BUT NOT SQPR!

February 21, 2018

Standard Cosmology Threatened, SQPR Proven?

Cosmology matters; it has always mattered, ever since there have been reasons, and we humans have tried to refine them. Cosmology is the laboratory of pure reason.

The standard cosmological model is called the Lambda Cold Dark Matter (ΛCDM) model. “Lambda” is for the Cosmological Constant, an invention of Albert Einstein (hey, you see, Albert invented a few things on his own, contrary to what he claimed in self-derision…). Lambda basically says that space, spacetime itself, could have an energy independent of the mass-energy tensor (the energy of all and any particles). Dark Matter, in that model, is assumed to be some so-far-mysterious thing, spread all about, right from the start. A type of particle, so far undiscovered (standard physicists would guess).

After the Big Bang, in ΛCDM, the universe expands: light takes ever longer to go between the developing clumps of matter which will end up as galactic clusters. In these clumps, Dark Matter concentrates, like the rest. Dark Matter, reacting only to gravity, ends up forming the next generation of more concentrated clumps (it’s not held back by radiation pressure from stars lighting up, etc.). These Dark Matter kernels in turn attract material which ends up more or less rotating (the bigger, the more rotation), and we call these galaxies. Dwarf galaxies stay irregular and often don’t rotate as flat disks. Giant galaxies such as the Milky Way, Andromeda and Centaurus A rotate mightily, and find themselves with dozens of smaller galaxies as satellites.

Centaurus A (NGC 5128) is an unusual giant elliptical galaxy crossed by a dust lane. The yellow halo is made of billions of yellow stars. It is 13 million light-years away (5 times further than Andromeda), and is the closest giant galaxy we can see, after Andromeda (others may be hidden by dust). It is accompanied by 16 Dwarf Galaxies rotating in the same plane as Centaurus A itself. Something absolutely not predicted by ΛCDM. The width of the picture is 16 arc minutes, half the full moon (which is 30 arc minutes, half a degree).

***

The ΛCDM model is, at first sight, impressive. Computer simulations of the model, compared with observations, are considered very successful on very large scales (larger than galactic clusters, up to the observable horizon). But it has a “small-scale crisis”: too many dwarf galaxies, too much dark matter in the innermost regions of galaxies, too many Dark Matter halos (which are not observed). These small scales are harder to resolve in computer simulations, so it is not yet clear whether the problem is the simulations, non-standard properties of dark matter, or a more radical error in the model.

However, worse is now surfacing: the distribution of dwarf galaxies in a flat disk around their mother galaxy is absolutely not predicted by the ΛCDM paradigm.

ΛCDM predicts Dwarf Galaxies around a giant galaxy, but it also predicts that their orbits should be left to chance: there is not enough time since the Big Bang to develop a huge rotation of the supergalactic cloud. ΛCDM says galaxies formed nearly instantaneously, after being torn on the outskirts by Dark Matter clumps which then make Dwarf Galaxies.

An international team of astronomers has determined that Centaurus A, a massive elliptical galaxy 13 million light-years from Earth, is accompanied by a number of dwarf satellite galaxies orbiting the main body in a narrow disk. This is the first time such a galactic arrangement has been observed outside the Local Group, home to the Milky Way, and anchored by it, Andromeda, and the much smaller Triangulum galaxy. (By the way, it turns out that Andromeda is roughly the same size as the giant Milky Way, and not larger, as previously thought. The error came from overestimation of the Dark Matter in Andromeda, from too gross an application of the Virial Theorem. All this may have consequences for life in the universe, as it is easy to find reasons why zones in giant galaxies would be more hospitable to life, which less organized galaxies won’t have… But I digress.)

***

Dwarf galaxies move in unexpected ways around the Milky Way, Andromeda and Centaurus A. This contradicts Standard Cosmology:

Giant galaxies like our Milky Way are orbited by satellite dwarf galaxies. Standard cosmological simulations of galaxy formation predict that these satellites should move randomly around their host. Müller et al. examined the satellites of the nearby elliptical galaxy Centaurus A. They found that the satellites are distributed in a planar arrangement, and 14 members of the plane (out of 16) are demonstrably orbiting in the same direction. This is inconsistent with more than 99.5% of comparable galaxies in simulations. Centaurus A, the Milky Way, and Andromeda all have highly statistically unlikely satellite systems. This observational evidence suggests that something is wrong with standard cosmological simulations.

In other words, ΛCDM predicts that there should be a halo of Dark matter and Dwarf Galaxies. There is not. (Whereas SQPR predicts planar structures, see below!)

“The significance of this finding is that it calls into question the validity of certain cosmological models and simulations as explanations for the distribution of host and satellite galaxies in the universe,” said co-author Marcel Pawlowski, “Hubble Fellow” in the Department of Physics & Astronomy at the University of California, Irvine.

He said that under the lambda cold dark matter model, smaller systems of stars should be more or less randomly scattered around their anchoring galaxies and should move in all directions. Yet Centaurus A is the third documented example, behind the Milky Way and Andromeda, of a “vast polar structure” in which satellite dwarves co-rotate around a central galactic mass in what Pawlowski calls a “preferentially oriented alignment.”

The difficulty of studying the movements of dwarf satellites around their hosts varies according to the target galaxy group. It’s relatively easy for the Milky Way. “You get proper motions,” Pawlowski said. “You take a picture now, wait three years or more, and then take another picture to see how the stars have moved; that gives you the tangential velocity.”

Using this technique, scientists have measurements for 11 Milky Way satellite galaxies, eight of which are orbiting in a tight disk perpendicular (!) to the spiral galaxy’s plane. There are probably other satellites in the system that can’t be seen from Earth because they’re blocked by the Milky Way’s dusty disk.
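
A minimal sketch of the “two pictures” arithmetic just described, using the standard conversion v_t ≈ 4.74 μd (v_t in km/s, μ in arcsec/yr, d in parsecs); all the observational numbers in it are invented for illustration, not real measurements.

```python
# Tangential velocity from proper motion: v_t [km/s] ~= 4.74 * mu[arcsec/yr] * d[pc]
# (4.74 km/s = 1 AU/yr). Every input number below is hypothetical.
shift_arcsec = 3.0e-4   # apparent shift measured between the two pictures
baseline_yr = 3.0       # "take a picture now, wait three years or more"
distance_pc = 8.0e4     # a satellite roughly 80 kpc away

mu = shift_arcsec / baseline_yr   # proper motion, arcsec/yr
v_t = 4.74 * mu * distance_pc     # tangential velocity, km/s
print(f"mu = {mu:.1e} arcsec/yr  ->  v_t = {v_t:.0f} km/s")
```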

***

SQPR Versus ΛCDM:

To avoid the concept of Dark Matter, MOdified Newtonian Dynamics (MOND) has been suggested. It seems clear to me that it doesn’t work. Moreover, MOND is an ad hoc explanation: have a problem, invent a specific axiomatics to solve that problem. Besides solving what looks like Dark Matter, without Dark Matter, and this only around galaxies, not during collisions, MOND has no reason for being. The more evidence piles up, the less plausible it looks.

My own theory, SQPR, is quite the opposite. It is a MODIFIED Quantum Dynamics (MOQD): it posits a Sub Quantum Reality, to make Quantum Mechanics logically complete, and causal, with a nonlocality that is not as “spooky” (to use Einstein’s bon mot). SQPR predicts Dark Matter, and it predicts that Dark Matter is CREATED inside giant galaxies, just as Black Holes are created inside giant galaxies (at ten times the rate of growth inside smaller galaxies). So, with me, Dark Matter becomes a Quantum effect. The exact predictions are these:

Young giant galaxies will have little Dark Matter. Dark Matter is emergent.

Dark Matter will form in disks… And Dwarf Galaxies too.

SQPR’s predictions are completely different. But they fit observations…

My scenario is this: giant gas clouds of galactic size, made of normal matter, coalesce first from the pull of gravity. As they do, conservation of angular momentum augments the rotation speed (there will always be some rotation to start with; it’s nearly the same phenomenon as in cyclone formation). Implosion of the galactic-size cloud, in conjunction with the rise of angular speed, creates a flat disk. This disk will contain lumps in the outer zone: dwarf galaxies, similar to planet formation in a solar system. Meanwhile, the Quantum Interaction, at cosmological distances, churns out Dark Matter.

So we will typically end up with a flat disk of Dwarf Galaxies rotating in the same plane as the growing disk of Dark Matter of the giant galaxy. (Notice that I predict Dwarf Galaxies will have less Dark Matter, in the typical case).

Objectors may brandish the fact that the Dwarf Galaxy disk of the Milky Way is perpendicular, a glaring contradiction with my model. Well, my retort to that: something happened which yanked one relative to the other. The Local Group contains more than 54 galaxies, and it’s not even clear the large ones have all been found, because of Milky Way dust: so a large galaxy passing by could have disrupted the dynamics of the Milky Way with its Dwarf Galaxy disk. There are plenty of observations of such vast distortions between galaxies. (And in the Solar System, Uranus can be contemplated, whose rotation axis is perpendicular to that of all the other planets, and to where common sense would put it, perpendicular to the ecliptic: clearly something big and weird happened which rotated the rotation axis spectacularly. By the way, Mars’ rotation axis also wobbles spectacularly, although it is coincidentally at the exact same angle to the ecliptic as Earth’s, right now, another spectacular coincidence.) Strange occurrences are not a proof of the existence of gods; however, the case of the Dwarf Galaxy disks considered here is 3 out of 3… and actually more, and it becomes very statistically significant, if we look at the set of all Dwarf Galaxies around the Milky Way, Andromeda, and Centaurus A.

***

Mavericks, such as yours truly, argue that, like much of modern physics, and related to that, the ΛCDM model is built upon an intricate foundation of conventionalist stratagems, rendering it unfalsifiable in the sense promoted by Karl Popper. Mavericks have to be taken seriously: several experts howled, for many decades, that there was Dark Matter. They were viewed as having fallen to the Dark Side (naturally enough). Then a serious mathematician called Segal pointed out that there was a Dark Energy problem: the cosmic acceleration itself accelerated, he insisted, and he wrote an entire, very serious book about it. In spite, or because, of these grave accusations, the entire field was ignored for more than 50 years (entire books about Dark Matter and the accelerating acceleration of the universe were discarded as cranks): governments preferred to finance militarily useful physics (“high energy” physics) rather than potentially revolutionary physics.

Anyway, things are quickly coming to a head. Astronomy is finally getting financed much more than it used to be. Astronomy, experimentation contemplated on the largest scale, is shattering physics. Noble high-energy physicists were studying only 5% of the universe, says astronomy…

ΛCDM says Dark Matter was always there. I suggest instead that it was created, by standard Mass-Energy, and how (as Black Holes were created, albeit from a Quantum, not gravitational, mechanism). We will see. First we see, then we think.

Patrice Aymé

Perverse Logic: Saving the Multiverse with Unhinged Cosmic Inflation!

February 1, 2018

When The Unobservable Universe Is Used To Justify Various Follies, Such As The Multiverse, Civilization Is In A Bad Way:

Physics is the laboratory of reason. This is where the most advanced, most subtle logics are forged (even more so than in pure mathematics, where the navel’s importance is too great). So what physicists ponder matters to the entire civilization which nurtures them. When physics goes to the dogs, so does civilization. The follies of state-of-the-art theoretical physics reflect an ambient madness which pervades civilization. (If you don’t believe this, may I sell you some imaginary bitcoins for millions of dollars?)

Astrophysicist Ethan Siegel, a continual source of excellent articles in physics, wrote an interesting essay which I disagree with. His reasons are interesting, and have the merit of honesty. My answers are even more striking, and I bring the full weight of 24 centuries of history as meta-evidence for crushing the feeble, pathetic, short-sighted considerations of my fellow physicists. Ethan’s essay is entitled: “Yes, The Multiverse Is Real, But It Won’t Fix Physics. Surprisingly, the evidence points towards the existence of the unobservable multiverse. But it isn’t the answer you’re looking for.”

Ethan proposes to use cosmic inflation to provide for the proliferation of Schrödinger cats and Wigner’s friends. One folly would thus provide for the other, and they would thus stay up, like two drunks falling into each other’s arms. I will instead humbly suggest to do away with madness altogether. But first a little recap.

The universe is expanding. This experimental evidence was established around 1920, by a number of astronomers in Europe and the USA, the most famous of whom was the lawyer turned astronomer, Edwin Hubble. Hubble had the biggest telescope. The expansion is presumed to look everywhere the same, and this is what seems to be observed. That also means that, if one looks far away, galaxies will seem to be receding from us at speeds ever closer to the speed of light. As the apparent speed of these galaxies approaches c, their light gets shifted to lower and lower frequencies, until they become invisible (same reason as why Black Holes are blacker than black).

Where the transition to invisibility occurs is called the “event horizon”. Beyond the event horizon is the unobservable universe (we can’t detect it gravitationally, as gravity goes at the speed of light, a theoretical prediction now experimentally verified).

The observed universe is “flat” (namely there is no detected distortion in the distribution of clouds, filaments and superclusters of galaxies). That sounds unlikely, and indicates that the observed universe is a tiny portion of a much larger whole.

This unobservable universe has nothing to do with the “Multiverse” brandished recently by many theoretical physicists who have apparently run out of imagination for something more plausible. Eighty years ago, Schrödinger pointed out that Quantum Mechanics, as formalized then (and now!), was observer-dependent, and filled up the universe with waves of dead and live cats (when applied to macroscopic objects). That’s called the Schrödinger Cat Paradox. Instead of calling for a re-thinking of Quantum Mechanics (as I do!), Ethan Siegel (and many other physicists and astrophysicists) embraces the dead-and-alive cats, settling them in “parallel universes”. So basically they reenact Solomon’s Judgment: instead of cutting the baby in two, they cut the universe in two. Zillions of times per second, in zillions of smaller places than you can possibly imagine… Here is a picture of Schrödinger’s cat: as the branches separate in that movie, two universes are created. This is what Ethan Siegel wants to justify, thanks to cosmic inflation…

Ethan’s revealing comment: “The idea of parallel Universes, as applied to Schrödinger’s cat. As fun and compelling as this idea is, without an infinitely large region of space to hold these possibilities in, even inflation won’t create enough Universes to contain all the possibilities that 13.8 billion years of cosmic evolution have brought us. Image credit: Christian Schirm.”
To explain crazy, we will go more crazy, thus making the previous crazy sound more rational, relatively speaking…

“The Multiverse”, with baby universes all over the universe, has more to do with the “Many Worlds Interpretation” of Quantum Mechanics, a theory so absurd that the great popes of physics ruling around 1960 rejected it outright. Wheeler was ashamed of himself for having had a PhD student, Everett, who suggested this folly. (Everett couldn’t get an academic job, at a time when academic employment in physics was booming!)

Ethan wrote: “In the region that became our Universe, which may encompass a large region that goes far beyond what we can observe, inflation ended all-at-once. But beyond that region, there are even more regions where it didn’t end.”

This sort of statement, and I say this with all due respect to the divine, is equivalent to saying:”Me, Ethan, having checked all that exists, observable by simple humans, or not, thereby informs you that I am either God, or that She is an interlocutor of mine. We checked that cosmic inflation thing, and saw it all over all the possible universes. Don’t talk, just learn.”

There is no way for us humans to know, for sure, or not, what is going on beyond the observable universe (aside from having no gravitational field distortions when approaching the event horizon, as I said above when considering “flatness”).

Ethan notices that Many Worlds fanatics have tried to use cosmic inflation to save their (ridiculous) theory. (“Many Worlds” is ridiculous, as Schrödinger tried to show, long ago, because there would be as many ways to cut the universes into “Many Worlds” as there are observers. So, so to speak, the “Many Worlds Interpretation”, call it MWI, is actually MWI ^ {Observers}: MWI to the power of the set of all possible Observers, the latter set being itself something of an uncountably infinite function of MWI.)

Ethan says: “But just because variants of the Multiverse are unfalsifiable, and just because the consequences of its existence are unobservable, doesn’t mean that the Multiverse isn’t real. If cosmic inflation, General Relativity, and quantum field theory are all correct, the Multiverse likely is real, and we’re living in it.”

What Ethan is saying is that if a number of crazy (cosmic inflation) or incomplete (Quantum Field Theory) ideas are “all correct”, then something as useful as angels on pinheads is real. Yes, indeed: if one believes that Muhammad flew to Jerusalem on a winged horse (!), one may as well believe all the rest of the Qur’an. That is a proof by crystal balls. After Ptolemy and company had established their (half correctly) predicting “epicycles” theory, one could have used it in turn to “prove” Aristotle’s ridiculous theory of motion.

23 centuries ago a much saner theory existed, that of Aristarchus. It was rejected at the time, precisely because it was not insane, and even though it was used to make a nearly correct prediction of the distance of the Moon. Aristarchus underestimated the distance of the Sun, but a telescope could have changed this (by showing more precisely the angle of the terminator on the Moon). If astronomers at the time had accepted heliocentrism as a possibility, it would have led them to invent the telescope. Similarly, right now, rejecting Many Worlds and the Multiverse will lead us to develop instruments which don’t exist yet (I have proposed at least one).

Astrophysicist Ethan Siegel suggests that: “The Multiverse is real, but provides the answer to absolutely nothing.” My opinion is that the Multiverse is worse than useless: the unhinged mood it provides prevents the development of more fruitful avenues of research, both theoretical and experimental.

Insanity is the rule in crowds (Nietzsche). Thus follies are the truths crowds love, at first sight, before being corrected by higher minds. Why? Follies bind, because they are so special.

https://patriceayme.wordpress.com/2015/02/20/commonly-accepted-delusions-follies-that-bind/

In Aristarchus’ times, heliocentrism, the fact that the Earth and its Moon rotate around the Sun, should have been obvious. Indeed, people, let’s think for a moment: where was the Sun supposed to be, considering the phases of the Moon? If the Sun turned around the Earth, the Moon’s illumination should have changed all day long! It didn’t require much geometrical analysis to discover that this source of light could only be where Aristarchus computed it to be, far away from the Earth-Moon system.
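
Aristarchus’ trick, in compact form: at exact half moon, the Earth-Moon-Sun triangle is right-angled at the Moon, so measuring the Moon-Earth-Sun angle α gives the distance ratio directly as 1/cos α. A two-line check:

```python
import math

def sun_to_moon_distance_ratio(alpha_deg):
    """At exact half moon the triangle is right-angled at the Moon,
    so d_sun / d_moon = 1 / cos(alpha), alpha being the Moon-Earth-Sun angle."""
    return 1.0 / math.cos(math.radians(alpha_deg))

print(sun_to_moon_distance_ratio(87.0))    # Aristarchus' measured angle: Sun ~19x farther
print(sun_to_moon_distance_ratio(89.85))   # the true angle: ~380x (hence his underestimate)
```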

It took 19 centuries to correct that (obvious!) mistake. Interestingly, Jean Buridan, circa 1350 CE, did it in the most theoretical fashion.

https://patriceayme.wordpress.com/2016/03/20/momentum-force-inertia-middle-ages-buridan/

Buridan first showed that Aristotle’s ridiculous theory of motion made no sense, and had to be replaced by inertia and momentum (what Buridan called “impetus”). Having done this, the motion of the planets in a heliocentric system could be explained by “circular impetus”, Buridan pointed out (then he observed sardonically that we couldn’t observe the difference between epicycles and heliocentrism, so we may as well go for “Scripture”).

Similarly, nowadays, instead of arguing with the “angels on a multiverse pinhead” authorities, we had better point out the glaring inconsistencies of Quantum Mechanics.

Civilization without reason is like a chicken without a head: it can run, but not forever.

Patrice Aymé

Discrepancy In Universe’s Expansion & Quantum Interaction

January 17, 2018

In “New Dark Matter Physics Could Solve The Expanding Universe Controversy”, Ethan Siegel points out that:

“Multiple teams of scientists can’t agree on how fast the Universe expands. Dark matter may unlock why.
There’s an enormous controversy in astrophysics today over how quickly the Universe expands. One camp of scientists, the same camp that won the Nobel Prize for discovering dark energy, measured the expansion rate to be 73 km/s/Mpc, with an uncertainty of only 2.4%. But a second method, based on the leftover relics from the Big Bang, reveals an answer that’s incompatibly lower at 67 km/s/Mpc, with an uncertainty of only 1%. It’s possible that one of the teams has an unidentified error that’s causing this discrepancy, but independent checks have failed to show any cracks in either analysis. Instead, new physics might be the culprit. If so, we just might have our first real clue to how dark matter might be detected.”

20 years ago, it was published, peer-reviewed, by a number of teams, that we are in an ever-faster expanding universe (right). The Physics Nobel was given for that, to a Berkeley team and to an Australian team. There are now several methods to prove this accelerating expansion, and they (roughly) agree.

Notice the striking differences between different models in the past; only a Universe with dark energy matches our observations. Possible fates of the expanding Universe which used to be considered were, ironically enough, only the three on the left, which are now excluded.  Image credit: The Cosmic Perspective / Jeffrey O. Bennett, Megan O. Donahue, Nicholas Schneider and Mark Voit.

Three main classes of possibilities for why the Universe appears to accelerate have been considered:

  1. Vacuum energy, like a cosmological constant, is energy inherent to space itself, and drives the Universe’s expansion. (This idea goes back to Einstein, who introduced a “Cosmological Constant” in the basic gravitational equation… to make the universe static, a weird idea akin to the crystal spheres of Ptolemaic astronomy. Later Einstein realized that, had he not done that, he could have posed as real smart by predicting the expansion of the universe… So he called it, in a self-congratulating way, his “greatest mistake”… However, in the last 20 years, the “greatest mistake” has come to be viewed as a master stroke…)
  2. Dynamical dark energy, driven by some kind of field that changes over time, could lead to differences in the Universe’s expansion rate depending on when/how you measure it. (Also called “quintessence”; not really different from 1, from my point of view!)
  3. General Relativity could be wrong, and a modification of gravity might explain what appears to us as an apparent acceleration. (However, the basic idea of the theory of gravitation is the simplest possible, so it’s hard to see how it could be wrong, as long as one doesn’t introduce Quantum effects… which is exactly what I do! In my own theory, said effects occur only at large cosmic distances, on the scale of large galaxies.)

Ethan: “At the dawn of 2018, however, the controversy over the expanding Universe might threaten that picture. Our Universe, made up of 68% dark energy, 27% dark matter, and just 5% of all the “normal” stuff (including stars, planets, gas, dust, plasma, black holes, etc.), should be expanding at the same rate regardless of the method you use to measure it. At least, that would be the case if dark energy were truly a cosmological constant, and if dark matter were truly cold and collisionless, interacting only gravitationally. If everyone measured the same rate for the expanding Universe, there would be nothing to challenge this picture, known as standard (or “vanilla”) ΛCDM.

But everyone doesn’t measure the same rate.”

The standard, oldest method of measuring the Hubble cosmic expansion rate is known as the cosmic distance ladder. The simplest version has only three rungs. First, you measure the distances to nearby stars directly, through parallax, the variation of the angle of elevation during the year, as the Earth goes around its orbit. Most specifically, you measure the distance to long-period Cepheid stars like this. Cepheids are “standard candles”: they are stars whose luminosities vary, but whose maximum power doesn’t, so we can tell how far they are by looking at how much they shine. Second, you then measure other properties of those same types of Cepheid stars in nearby galaxies, learning how far away those galaxies are. And lastly, in some of those galaxies, you’ll have a specific class of supernovae known as Type Ia supernovae. Those supernovae explode exactly when they accrete 1.4 solar masses from another orbiting star (a theory of the Indian-American Nobel laureate Chandrasekhar, who taught at the University of Chicago). One can see these Ia supernovae all over the universe, inside the Milky Way, as well as many billions of light-years away. With just these three steps, you can measure the expanding Universe, arriving at a result of 73.24 ± 1.74 km/s/Mpc.
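
The last rung in miniature: once the supernovae give you distances, the Hubble constant is just the slope of velocity against distance, v = H₀d. A sketch with invented data points (not real survey numbers) scattered around the distance-ladder value:

```python
import numpy as np

# Invented (distance [Mpc], recession velocity [km/s]) pairs scattered around a
# 73 km/s/Mpc trend, illustrative only, NOT real survey data.
d = np.array([50.0, 120.0, 200.0, 310.0, 450.0, 600.0])
v = np.array([3700.0, 8700.0, 14500.0, 22800.0, 32700.0, 44100.0])

H0 = np.sum(d * v) / np.sum(d * d)  # least-squares slope of v = H0 * d through the origin
print(f"H0 = {H0:.1f} km/s/Mpc")    # ~73, the distance-ladder value
```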

The other method makes all sorts of suppositions about the early universe. I view it as a miracle that it comes as close as it does: 66.9 km/s/Megaparsec…

Ethan concludes that: “Currently, the fact that distance ladder measurements say the Universe expands 9% faster than the leftover relic method is one of the greatest puzzles in modern cosmology. Whether that’s because there’s a systematic error in one of the two methods used to measure the expansion rate or because there’s new physics afoot is still undetermined, but it’s vital to remain open-minded to both possibilities. As improvements are made to parallax data, as more Cepheids are found, and as we come to better understand the rungs of the distance ladder, it becomes harder and harder to justify blaming systematics. The resolution to this paradox may be new physics, after all. And if it is, it just might teach us something about the dark side of the Universe.”

My comment: The QUANTUM INTERACTION CHANGES EVERYTHING:

My own starting point is a revision of Quantum Mechanics: I simply assume that Newton was right (that’s supposed to be a joke, but with wisdom attached). Newton described his own theory of gravitation as absurd. (The basic equation, F = M1·M2/d², where d is the distance, was from a French astronomer, Ishmael Boulliau, as Newton himself said. Actually this “Bullialdus” then spoiled his basically correct reasoning with a number of absurdities, which Newton corrected.)

Newton was actually insulting about his own theory. He said no one with the slightest understanding of philosophy would assume that gravitation was instantaneous.

Newton’s condemnation was resolved by Laplace, a century later. Laplace just introduced a finite speed for the propagation of the gravitational field. That implied gravitational waves, for the same reason as a whip makes waves.

We are in a similar situation now. Present Quantum Physics assumes that the Quantum Interaction (the one which carries Quantum Entanglement) is instantaneous. This is absurd for exactly the same reason Newton presented, and Laplace took seriously, for gravitation.

Suppose, then, that the Quantum Interaction has a finite speed (it could be bigger than 10^23 c, where c is the speed of light).

Supposing this implies (after a number of logical and plausible steps) both Dark Matter and Dark Energy. It is worth looking at. But let’s remember: the telescope (which could have been invented in antiquity) was invented not to prove that the Moon was not a crystal ball, but simply to make money (by distinguishing first which sort of cargo was coming back from the Indies).

We see what we want to see, because that’s what we have been taught to see; we search for what we want to search for, because that’s what we have been taught to search for. Keeping an open mind is great, but a fully open mind is a most disturbing thing…

Patrice Aymé

“Proof” That Faster Than Light Communications Are Impossible Is False

December 16, 2017

There are theories everywhere, and the more ingrained they are, the more suspiciously they should be looked at. From the basic equations of relativity it is clear that if one adds speeds less than the speed of light, one gets a speed less than the speed of light. It is also clear that adding impulse to a mass will make it more massive, while its speed asymptotically approaches that of light (and, as I explained, the reason is intuitive, from Time Dilation).
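
The “adding speeds” statement is the Einstein velocity-addition law:

$$w = \frac{u + v}{1 + uv/c^2}$$

For u, v < c this always yields w < c: for example, u = v = 0.9c gives w = 1.8c/1.81 ≈ 0.994c.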

The subject is not all sci-fi: modern cosmology brazenly assumes that space itself, after the alleged Big Bang, expanded at a speed of at least 10^23 c (something like one hundred thousand billion billion times the speed of light c). The grossest, yet simplest, proof of that is this: the observable universe is roughly 100 billion light-years across, and it is ten billion years old. Thus it expanded at a minimum average clip of ten billion light-years every billion years: 100c/10 = 10c, according to standard cosmology. (One could furiously imagine a spaceship somehow surfing on a wave of warped space, expanding for the same obscure reason as the Big Bang itself, that is…)

The question naturally arises whether velocities which are greater than that of light could ever possibly be obtained in other ways. For example, are there communication speeds faster than light? (Throwing some material across will not work: its mass will increase, while its speed stays less than c.)

Textbooks say it’s not possible. There is actually a “proof” of that alleged impossibility, dating all the way back to Einstein (1907) and Tolman (1917). The mathematics are trivial (they are reproduced in my picture below). But the interpretation is apparently less so. Wikipedia weirdly claims that faster-than-light communications would allow travel back in time. No. One could synchronize all clocks on all planets in the galaxy, and having faster-than-light communications would not change anything. Why? Time is local; faster-than-light data travel is nonlocal.

The problem of faster than light communications can be attacked in the following manner.

Consider two points A and B on the X axis of the system S, and suppose that some impulse originates at A, travels to B with the velocity u and at B produces some observable phenomenon, the starting of the impulse at A and the resulting phenomenon at B thus being connected by the relation of cause and effect. The time elapsing between the cause and its effect as measured in the units of system S will evidently be as follows in the calligraphy below. Then I use the usual Relativity formula (due to Lorentz) of time as it elapses in S’:
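(The picture is not reproduced here; the standard computation it contains runs as follows, with u = Δx/Δt the speed of the causal impulse and V the speed of S′:)

\[
\Delta t = t_{B} - t_{A}, \qquad
\Delta t' = \frac{\Delta t - V\,\Delta x / c^{2}}{\sqrt{1 - V^{2}/c^{2}}}
          = \Delta t \, \frac{1 - uV/c^{2}}{\sqrt{1 - V^{2}/c^{2}}}.
\]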

Equations help, but they are neither the beginning, nor the end of a story. Just an abstraction of it. The cult of equations is naive, interpretation is everything. The same thing, more generally, holds for models.
As Tolman put it in 1917: “Let us suppose now that there are no limits to the possible magnitude of the velocities u and V, and in particular that the causal impulse can travel from A to B with a velocity u greater than that of light. It is evident that we could then take a velocity u great enough that uV/c^2 will be greater than one, so that Δt′ would become negative. In other words, for an observer in system S′ the effect which occurs at B would precede in time its cause which originates at A.”

I quote Tolman because he is generally viewed as the one having definitively established the impossibility of faster-than-light communications. Tolman, though, is not so sure; in his next sentence he turns out wishy-washy: “Such a condition of affairs might not be a logical impossibility; nevertheless its extraordinary nature might incline us to believe that no causal impulse can travel with a velocity greater than that of light.”

Actually it is an effect with which those who have seen movies run in reverse are familiar. Causality apparently running in reverse is no more surprising than the fact that two events at x1 and x2 which are simultaneous in S are separated, in S′, by the time (V(x1 − x2)/c^2)/sqrt(1 − V^2/c^2). That introduces a sort of fake, or apparent, causality: sometimes this before that, sometimes that before this.

(The computation is straightforward and found in Tolman’s own textbook; it originated with Henri Poincaré.[9][10] In 1898, Poincaré argued that the postulate of light-speed constancy in all directions is useful to formulate physical laws in a simple way. He also showed that the definition of simultaneity of events at different places is only a convention.[11]) Notice that, in the case of simultaneity, the signs of V and (x1 − x2) matter. Basically, depending upon how V moves, light in S going to S′ takes more time to catch up with the moving frame, and the more so the further it is: the same exact effect which explains the nil result of the Michelson-Morley interferometer. There is an underlying logic below all of this, and it is always the same.

Tolman’s argumentation about the impossibility of faster-than-light communications is, in the end, purely philosophical, and fully inconsistent with the closely related, and fully mainstream, relativity of simultaneity.

Poincaré, in 1900, proposed the following convention for defining clock synchronisation: two observers, A and B, moving in space (which Poincaré called the aether), synchronise their clocks by means of optical signals. They believe themselves to be at rest in space (“the aether”), from not moving relative to distant galaxies or the Cosmic Radiation Background, and they assume that the speed of light is constant in all directions. Therefore, they have to consider only the transmission time of the signals, and then cross their observations to examine whether their clocks are synchronous.

“Let us suppose that there are some observers placed at various points, and they synchronize their clocks using light signals. They attempt to adjust the measured transmission time of the signals, but they are not aware of their common motion, and consequently believe that the signals travel equally fast in both directions. They perform observations of crossing signals, one traveling from A to B, followed by another traveling from B to A.” 

In 1904 Poincaré illustrated the same procedure in the following way:

“Imagine two observers who wish to adjust their timepieces by optical signals; they exchange signals, but as they know that the transmission of light is not instantaneous, they are careful to cross them. When station B perceives the signal from station A, its clock should not mark the same hour as that of station A at the moment of sending the signal, but this hour augmented by a constant representing the duration of the transmission. Suppose, for example, that station A sends its signal when its clock marks the hour 0, and that station B perceives it when its clock marks the hour t. The clocks are adjusted if the slowness equal to t represents the duration of the transmission, and to verify it, station B sends in its turn a signal when its clock marks 0; then station A should perceive it when its clock marks t. The timepieces are then adjusted. And in fact they mark the same hour at the same physical instant, but on the one condition, that the two stations are fixed. Otherwise the duration of the transmission will not be the same in the two senses, since the station A, for example, moves forward to meet the optical perturbation emanating from B, whereas the station B flees before the perturbation emanating from A. The watches adjusted in that way will not mark, therefore, the true time; they will mark what may be called the local time, so that one of them will be slow of the other.”[13]

This Poincaré (“–Einstein”) synchronisation was used by telegraphers as early as the mid-nineteenth century. It would allow one to cover the galaxy with synchronized clocks (although local times will differ a bit, depending upon the motion of stars, and in particular where in the galactic rotation curve a star sits). Transmitting instantaneous signals in that network would not affect causality. Ludicrously, Wikipedia asserts that faster-than-light signals would make “Bertha” rich (!!!). That comes simply from Wikipedia getting thoroughly confused, allowing faster-than-light signals for some data and not for other data, thus giving an advantage to some, and not others.
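The crossed-signal bookkeeping Poincaré describes fits in a few lines (a minimal sketch, assuming both stations at rest so the one-way transit time is the same in both directions; all names and numbers are hypothetical):

def sync_offset(a1, b1, b2, a2):
    # a1: A's clock when A sends; b1: B's clock when B receives;
    # b2: B's clock when B replies; a2: A's clock when A receives.
    # With equal transit time tau each way and constant offset d = B - A:
    #   b1 = a1 + tau + d  and  a2 = b2 + tau - d.
    offset = ((b1 - a1) - (a2 - b2)) / 2.0   # d: how far B's clock runs ahead
    tau = ((b1 - a1) + (a2 - b2)) / 2.0      # one-way transmission time
    return offset, tau

# Hypothetical example: true transit 1.3 s, B's clock 0.5 s ahead.
print(sync_offset(0.0, 1.8, 10.0, 10.8))     # -> (0.5, 1.3)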

***

Quantum Entanglement (QE) enables at-a-distance changes of Quantum states:

(It comes in at least three types of increasing strength.) Quantum Entanglement, as known today, goes from Quantum state to Quantum state; but we cannot control which Quantum state the particle will be in to start with, so we cannot use QE for communicating faster than light (because we don’t control what we write, so to speak, as we write with states; so we send gibberish).

This argument is formalized in a “No Faster Than Light Communication theorem”. However, IMHO, the proof contains massive loopholes (it assumes that there is no Sub Quantum Reality whatsoever, nor could there ever be one, and thus that the unlikely QM axioms are forever absolutely true, beyond all possible redshifts you could possibly imagine, inter alia). So this is not the final story here. QE enables, surprisingly, the Quantum Radar (something I didn’t see coming). And it is not clear to me that we have absolutely no statistical control of states, and thus that we can’t use what Schrödinger, building on the EPR thought experiment, called “Quantum Steering” to communicate at a distance. Quantum Radar and Quantum Steering are now enacted through real devices. They use faster-than-light in their inner machinery.
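What the theorem itself does establish can be checked numerically (a minimal numpy sketch, my own illustration, not part of the original proof): Alice measuring her half of a Bell pair, with her outcome unknown to Bob, leaves Bob’s reduced density matrix exactly as it was, so no message gets through.

import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2.0)
rho = np.outer(bell, bell)        # 4x4 density matrix, qubit order (Alice, Bob)

def bob_reduced(rho4):
    # Partial trace over Alice: rho_B[b, d] = sum_a rho[(a, b), (a, d)].
    return np.einsum('abad->bd', rho4.reshape(2, 2, 2, 2))

before = bob_reduced(rho)         # equals I/2 for a Bell pair

theta = 0.7                       # arbitrary measurement basis angle for Alice
m = np.array([np.cos(theta), np.sin(theta)])
m_perp = np.array([-np.sin(theta), np.cos(theta)])
I2 = np.eye(2)
# Unconditional post-measurement state: sum_i (P_i x I) rho (P_i x I).
rho_after = sum(np.kron(np.outer(v, v), I2) @ rho @ np.kron(np.outer(v, v), I2)
                for v in (m, m_perp))

print(np.allclose(before, bob_reduced(rho_after)))   # True: no signal to Bob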

As the preceding showed, the supposed contradiction between faster-than-light communications and Relativity is just an urban legend. It makes the tribe of physicists more priestly, as they evoke a taboo nobody can understand, for the good reason that it makes no sense. It is also intellectually comfortable, as it simplifies brainwork, as taboos always do. But it is a lie. And it is high time this civilization switched to the no-more-lies theorem, lest it finish roasted, poisoned, flooded, weaponized and demonized.

Patrice Ayme’

Technical addendum:

https://en.wikipedia.org/wiki/Relativity_of_simultaneity

As Wikipedia itself puts it, weasel-style, trying to insinuate that Einstein brought something very significant to the debate, namely the eradication of the aether (but the aether came back soon after, and there are now several “reasons” for it; the point being that, as Poincaré suspected, there is a notion of absolute rest, and we now know this for several reasons: CRB, Unruh effect, etc.):

In 1892 and 1895, Hendrik Lorentz used a mathematical method called “local time” t’ = t – vx/c^2 for explaining the negative aether drift experiments.[5] However, Lorentz gave no physical explanation of this effect. This was done by Henri Poincaré who already emphasized in 1898 the conventional nature of simultaneity and who argued that it is convenient to postulate the constancy of the speed of light in all directions. However, this paper does not contain any discussion of Lorentz’s theory or the possible difference in defining simultaneity for observers in different states of motion.[6][7] This was done in 1900, when Poincaré derived local time by assuming that the speed of light is invariant within the aether. Due to the “principle of relative motion”, moving observers within the aether also assume that they are at rest and that the speed of light is constant in all directions (only to first order in v/c). Therefore, if they synchronize their clocks by using light signals, they will only consider the transit time for the signals, but not their motion in respect to the aether. So the moving clocks are not synchronous and do not indicate the “true” time. Poincaré calculated that this synchronization error corresponds to Lorentz’s local time.[8][9] In 1904, Poincaré emphasized the connection between the principle of relativity, “local time”, and light speed invariance; however, the reasoning in that paper was presented in a qualitative and conjectural manner.[10][11]

Albert Einstein used a similar method in 1905 to derive the time transformation for all orders in v/c, i.e., the complete Lorentz transformation. Poincaré obtained the full transformation earlier in 1905 but in the papers of that year he did not mention his synchronization procedure. This derivation was completely based on light speed invariance and the relativity principle, so Einstein noted that for the electrodynamics of moving bodies the aether is superfluous. Thus, the separation into “true” and “local” times of Lorentz and Poincaré vanishes – all times are equally valid and therefore the relativity of length and time is a natural consequence.[12][13][14]

… Except, of course, absolute relativity of length and time is not really true: everywhere in the universe, locally-at-rest frames can be defined, in several ways (optical, mechanical, gravitational, and even using a variant of the Quantum Field Theory Casimir Effect). All other frames are in trouble, so absolute motion can be detected. The hope of Einstein, in devising General Relativity, was to explain inertia, but he ended up with just a modification of the 1800 CE Bullialdus-Newton-Laplace theory… (Newton knew his instantaneous gravitation made no sense, and condemned it severely, so Laplace introduced a gravitation speed, thus gravitational waves, and Poincaré made them relativistic in 1905… Einstein got the applause…)

CONTINUUM FROM DISCONTINUUM

December 1, 2017

Discontinuing The Continuum, Replacing It By Quantum Entanglement Of Granular Substrate:

Is the universe granular? Discontinuous? Is spacetime somehow emergent? I do have an integrated solution to these quandaries, using basic mass-energy physics and quantum entanglement. (The two master ideas I use here are mine alone and, if I am right, will change physics radically in the fullness of time.)

First let me point out that worrying about this is not just a pet lunacy of mine. Edward Witten is the only physicist to have received a top mathematics prize (the Fields Medal), and is viewed by many as the world’s top physicist (I have met with him). He gave a very interesting interview to Quanta Magazine: A Physicist’s Physicist Ponders the Nature of Reality.

“Edward Witten reflects on the meaning of dualities in physics and math, emergent space-time, and the pursuit of a complete description of nature.”

Witten ponders, I answer.

Quantum Entanglement enables one to build existence over extended space, with a wealth growing exponentially beyond granular space

Witten: “I tend to assume that space-time and everything in it are in some sense emergent. By the way, you’ll certainly find that that’s what Wheeler expected in his essay [Information, Physics, Quantum, Wheeler’s 1989 essay propounding the idea that the physical universe arises from information, which he dubbed “it from bit.” He should have called it: “It from Qubit”. But the word “Qubit” didn’t exist yet; nor really the concept, as physicists had not realized yet the importance of entanglement and nonlocality in building the universe: they viewed them more as “spooky” oddities on the verge of self-contradiction…]

Edward Witten: “As you’ll read, he [Wheeler] thought the continuum was wrong in both physics and math. He did not think one’s microscopic description of space-time should use a continuum of any kind — neither a continuum of space nor a continuum of time, nor even a continuum of real numbers. On the space and time, I’m sympathetic to that. On the real numbers, I’ve got to plead ignorance or agnosticism. It is something I wonder about, but I’ve tried to imagine what it could mean to not use the continuum of real numbers, and the one logician I tried discussing it with didn’t help me.”

***

Well, I spent much more time studying logic than Witten: a forlorn, despised and alienating task. (Yet, when one is driven by knowledge, nothing beats an Internet-connected cave in the desert, far from distracting trivialities!) Studying fundamental logic, an exercise mathematicians, let alone physicists, tend to detest, brought me enlightenment, mostly because it shows how relative logic is, and how it can take thousands of years to make simple, obvious steps. How to solve this lack of logical imagination affecting the tremendous mathematician-cum-physicist Witten? Simple. From energy considerations, there is an event horizon to how large an expression can be written. Thus, in particular, there is a limit to the size of a number. Basically, a number can’t be larger than the universe.

https://patriceayme.wordpress.com/2011/10/10/largest-number/

This also holds for the continuum: just as numbers can’t be arbitrarily large, neither can the digital expression of a given number be arbitrarily long. In other words, irrational numbers don’t exist (I will detail in the future what is wrong with the 24-century-old proof, step by step).

As the world consists in sets of entangled quantum states (also known as “qubits”), the number of states can get much larger than the world of numbers. For example, a set of 300 entangled up-or-down spins presents 2^300 states (much larger than the number of atoms in the observable universe, some 100 billion light-years across). Such sets (“quantum simulators”) have been basically implemented in the lab.
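A two-line check of that arithmetic, using the standard rough estimate of 10^80 atoms in the observable universe:

states = 2 ** 300         # basis states of 300 entangled up-or-down spins
print(states > 10 ** 80)  # True: about 2 x 10^90, dwarfing the atom count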

Digital computers only work with finite expressions. Thus practical, effective logic already uses only finite mathematics, and finite logic. Thus there is no difficulty in using only finite mathematics. Physically, it presents the interest of removing many infinities (although not renormalization!).

Quantum entanglement creates a much richer spacetime than the granular subjacent space. Thus an apparently continuous spacetime is emergent from granular space. Let’s go back to the example above: 300 spins, in a small space, once quantum entangled, give a much richer quantum space of 2^300 states.

Consider again a set S of 300 particles (a practical case would be 300 atoms with spins up or down). If a set of “particles” are all entangled together, I will call that an EQN (Entangled Quantum Network). Now consider an incoming wave W (typically a photonic or gravitational wave; but it could be a phonon, etc.). Classically, if the 300 particles were… classical, W would have little probability of interacting with S, because it has ONLY 300 “things”, 300 entities, to interact with. Quantum Mechanically, though, it has 2^300 “things”, all the states of the EQN, to interact with: thus, a much higher probability of interacting. Certainly the wave W is more likely to interact with 2^300 entities than with 300, in the same space! (The classical computation can’t be made from scratch by me, or anybody else; but the classical computation, depending on the “transparency” of a film of 300 particles, would actually depend upon the Quantum computation nature makes discreetly, yet pervasively!)

EQNs make (mathematically at least) an all-pervasive, “volume”-occupying wave. I wrote “volume” in quotes, because some smart asses, very long ago (nearly a century), pointed out that the Quantum Waves are in “PHASE” space, thus are NOT “real” waves. Whatever that means: the Quantum volumes/spaces in which Quantum Waves compute can be very complicated, beyond the electoral gerrymandering of congressional districts in the USA! In particular, they don’t have to be 3D “volumes”. That doesn’t make them less “real”. To allude to well-established mathematics: a segment is a one-dimensional volume. A space-filling curve is also a sort of volume, as is a fractal (and it has a fractal dimension).

Now quantum entanglement has been demonstrated over thousands of kilometers, and mass (so to speak) quantum entanglement has been demonstrated over 500 nanometers (5,000 times the size of an atom). One has to understand that solids are held by quantum entanglement. So there is plenty enough entanglement to generate spaces of apparently continuous possibilities and even consciousness… from a fundamentally granular space.

Entanglement, or how to get continuum from discontinuum. (To sound like Wheeler.)

The preceding seems pretty obvious to me. Once those truths get around, everybody will say: ‘But of course, that’s so obvious! Didn’t Witten say that first?’

No, he didn’t.

You read it here first.

Granular space giving rise to practically continuous spacetime is an idea where deep philosophy proved vastly superior to the shortsightedness of vulgar mathematics.

Patrice Ayme’

Science and Philosophy: two aspects of the same thing. Why they are separated.

November 22, 2017

 

Separating philosophy from science is like separating breathing in, from breathing out.

Philosophy is how one guesses, science is how one makes sure.

To this “Jan Sand” retorted: ‘Science is how one attempts to make sure.’

Well, no. Attempting is no science. Hope enables one to live, but it’s not life. “One makes sure” comes with a context, the context enabling one to express the problem and the answer attached to it.

Science is both a method, and a field of knowledge. Both are relative to the context at hand. The method consists in using only elements of reality one is sure of.

In their context, for example, classical optics, mechanics, electromagnetism and thermodynamics are all appropriate and correct. Yet, they don’t work next to a Black Hole: a Black Hole is the wrong context for them.

The first interstellar asteroid is a shard, probably a metallic one. It was observed to cover the Earth-Moon distance in less than three hours. With the new telescopes being built, it is the first of many.

Consider how the first interstellar asteroid was observed passing by the Sun, on a highly hyperbolic trajectory. Speed: 139,000 kilometers per hour. Color: the deep red of severely irradiated material (an orange-like picture was obtained). No water or other volatile elements. The assumed albedo (reflectivity) varies by a factor of ten. Making a definite hypothesis about the albedo, its size would be on the order of one hundred meters across and a kilometer long. Found first by a Hawaiian telescope, its name is 1I ʻOumuamua (Hawaiian for, roughly, “first scout reaching out from afar”; “1I” for First Interstellar).
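A quick check of the caption’s claim above, using the standard Earth-Moon distance of about 384,400 km:

print(384400 / 139000)   # ~2.77: under three hours at 139,000 km/h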

This is all science, because many telescopes, including Europe’s VLT (Very Large Telescope) in Chile, observed the object, and more than four centuries of science have made telescopes highly reliable (although cardinals initially demurred).

Rubbing sticks vigorously, just so, will generate temperatures high enough to start a fire: that’s science. (The fundamental science of humanity, 1.3 million years old.)

But not all “attempts” at “making sure” turn out to be science. Philosophy is what organizes these attempts.

For “superstrings”, it was felt that, instead of supposing point-particles, one could suppose strings, and some problems would disappear. Other problems would disappear if one supposed a symmetry between fermions and bosons. Thus “superstrings” came to be.

Superstrings is certainly a sort of logic, but not science. In particular, it makes no distinctive predictions, aside from the hypotheses it started with!

Similarly, Euclidean geometry, pushed all the way, is unending logic, not science (because it has nothing to do with reality; it says nothing relevant to reality, once pushed far enough).

Most famously, epicycle theory was a sort-of logic, with some truths mixed in, but not science: it turned out to be 100% false (although the Fourier analysis hidden therein gave it some respectability, because parts of a lie can be true).

I have my own proposal for Sub Quantum Reality (“SQPR”). It is an attempt. It is astoundingly smart. It does make predictions, and explains some significant phenomena, for example Dark Matter, Dark Energy. So it looks good. However, it is not science.

Why?

Because my theory makes extraordinary claims giving a completely different picture of physics, extremely far from the facts and moods which give meaning to both Relativity and Quantum Mechanics.

So SQPR would need extraordinary proofs.

One could be simply that all other explanations for Dark Matter fail, and only SQPR is left standing.

A more direct proof would be that SQPR predicts, for the famous 2-slit experiment, a measurable difference in energy distribution from the prediction Albert Einstein explicitly made. If it turned out that my prediction is correct on this, pretty much all of existing physics becomes false, or, let’s say more precisely, it becomes a very good approximation to a deeper theory.

And then SQPR would become a science (if all other testable predictions turn out to be in accord with it).

Elements of science have to be certain, within a particular context, or “universe” (in the logic sense of “universe”) which, itself, is part of the real world.

For example, Quantum Field Theory makes probabilistic predictions which can boil down to very precise numbers which can be measured. Quantum Computers will also make probabilistic predictions (about the real world, even the world of numbers).

In the latter case, it’s just a guess. In other words, philosophy.

Those who claim science does not depend upon philosophy, just as those who claim philosophy does not depend upon science are, at best, trivially correct: they have got to be looking at small subfields of these activities, cleaning the corners.  

In the grand scheme of things, science and philosophy are roughly the same activity: twisting logic any which way, to get testable consequences. Thus discovering new logics along the way, not just new facts.

***

One may ask: why did philosophy and science get separated?

Because our masters, the plutocrats, want to keep on ruling. That means they don’t want us to understand what they are doing. Thus, smarts are their enemy. Hence people have to be kept in little mental boxes, so stupid, just so.

This is nothing new. When Rome was at its apogee, very learned Greek slaves educated the youth of the elite. As they were slaves, they knew their place. This helps to explain why Rome stagnated intellectually, and thus was unable to solve its pressing strategic, technological, economic, health and ecological problems. Stupidly educated youth makes stupid, and obedient, adults.

Specialization is a way for plutocrats to keep on ruling. After all, to run a civilization, one needs special capabilities. The ultimate specialization is to pretend that certain knowledge, that is science, is independent from guessing new sure knowledge, that is, philosophy.

Actually the latter is intrinsically dangerous to the rulers since, if it were thoroughly applied, it would allow We The People to understand how plutocracy works. Thus philosophy was strongly encouraged to degenerate, by being cut off from knowledge, be it certain, or historical, etc.

If society wants to survive, it will have to forge ahead in the way of understanding. Failing to comprehend or to implement this has led many civilizations and states to collapse (Maya, Sumer, Egypt, the Abbasid Caliphate, the Jin dynasty, the Western Xia, the Dali Kingdom, the Southern Song, the Aztecs, etc.).

Thus sustainable plutocracy is a balancing act between understanding and obedience. This time, though, understanding has to be maximized, be it only to solve the climate crisis (there are many other crises). Thus plutocracy has to foster understanding (quite a bit as Jeff Bezos is doing with Amazon, hence his success).

We may be unable to get rid of plutocracy, because We The Sheep People out there are so supine. The next best thing, which is also the necessary thing, is to realize that it is in everybody’s interest to let philosophy roll, and thus get reacquainted with science. And reciprocally.

Patrice Ayme

WHY LIGHT & GRAVITATION GO AT SAME SPEED

November 2, 2017

As long as one does not have a simple explanation, and, or description, of a subject, one does not understand it fully.

The present essay presents a direct proof, found by me, from basic principles, that gravitational waves go at the speed of light.

The essay also presents the direct experimental proof of the same fact that we got a few days ago, when the explosion of a “kilonova” was observed (kilonovae are very rare, but crucial in the emergence of life as we know it; details below).

A consequence of the preceding is that the MOND theories are false. MOND was a philosophical horror, full of ad hoc hypotheses, so I am happy it’s out the window. MOND eschewed the simplest description of gravity, the basics of which, the 1/d^2 law, preceded the birth of Newton himself.

***

First things first: WHY GRAVITATIONAL WAVES?

When two sources of a field of type 1/d^2 (such as gravitation or electromagnetism) rotate around each other, they generate waves which go to infinity (even if I don’t believe in infinity as an absolute, it works fine as a figure of speech…).

That’s simply because the field changes, as sometimes the charges are aligned, sometimes sideways. As the field changes it moves the objects it acts on. Now the point is that this disturbance of the field propagates indefinitely.
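One can see the oscillation with a toy computation (a minimal sketch, my illustration, in made-up units: a static Newtonian 1/d^2 field evaluated at a distant point as two equal masses orbit, ignoring propagation delay):

import math

G, m, a = 1.0, 1.0, 1.0   # toy units: coupling, each mass, orbital radius
xp = 20.0                 # distant field point on the x axis

def field_x(phi):
    # x-component of the total 1/d^2 pull at the field point,
    # with the two masses at orbital phase phi.
    total = 0.0
    for sign in (+1.0, -1.0):
        mx, my = sign * a * math.cos(phi), sign * a * math.sin(phi)
        d2 = (xp - mx) ** 2 + my ** 2
        total += G * m * (xp - mx) / d2 ** 1.5
    return total

for k in range(9):
    print(round(field_x(k * math.pi / 4), 6))
# The field rises and falls twice per orbit; that periodic disturbance,
# propagating outward, is the wave.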

At this point, a philosophical question may arise: do the disturbances of the field carry away energy? Well, in a way, it’s an idiotic question, because we know it does, that’s an experimental fact.

This experimental fact shows fields are real.

Now, let’s slow down a bit: one century of experimentation with electromagnetic fields had shown, by 1900 CE, that electromagnetic fields carried away energy.

What about gravitation? Well, theories were made in which a waving gravitational field carried away energy, such as Poincaré’s theories of gravitation and, in particular, Einstein’s.

The experimental proof came when closely rotating stars, which should have been emitting copious amounts of gravitational field energy, were observed to lose energy just as predicted. But first the theory:

Orbiting masses generate gravitational waves (top). If the gravitational waves lagged behind the light, many reference frames would observe non-conservation of energy after a collision event (bottom) between the aforesaid masses. This is my thought experiment, and it’s also what happened 130 million years ago, in a galaxy not that far away.

***

HERE IS WHY GRAVITATIONAL WAVES GO AS FAST AS LIGHT WAVES:

Patrice’s Thought Experiment Demonstrating Gravitational & Electromagnetic Waves Go At The Same Speed:

So now visualize this. Say, to simplify, that two equal masses rotate around each other. Call them M1 and M2. Say M1 is matter, and M2 antimatter, each of mass m. The system M1-M2 emits more and more gravitational energy as the two masses approach each other. Finally they collide. At this point, the system M1-M2 becomes pure electromagnetic radiation, of energy E = 2(mc^2).

Now what does one see at a distance?

Suppose the electromagnetic energy E going at the speed of light, c, travelled faster than the gravitational wave of energy G, travelling at speed g.

Then suppose also one is in a reference frame R travelling at uniform speed V, away from the M1-M2 collision event. As g is less than c, V can be more than g.

And then what?

The gravitational wave of energy G going at speed g, CANNOT catch up with the reference frame R.

However, before the collision, some of the energy of the system was inside G. And it’s easy to compute how much: it’s equal to the potential energy of the rotating system before the collision. In the scenario we constructed, that energy is never seen again, from the point of view of R. Let me repeat slowly: before the collision, M1 and M2 can be seen, orbiting each other. The potential energy of the system, P, can be computed using this VISUAL information (visual, hence travelling at the speed of light, c). So the total energy of the system is 2mc^2 + P.

All of P is transformed into G, the energy of the gravitational wave. If the speed g of the wave were less than the speed of light c, there would be reference frames, namely those with V > g, where P would be seen to have disappeared.

Thus if the speed of gravitational waves was less than the speed of light, there would be frames in which one could observe distant events where energy would not be conserved. 
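In compact form (my condensation of the argument above, with P the orbital potential energy radiated as gravitational waves):

\[
E_{\text{before}} = 2mc^{2} + P, \qquad
E_{\text{after}}\big|_{R,\,V>g} = 2mc^{2}, \qquad
\Delta E = P \neq 0 \ \text{ unless } g = c.
\]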

Now let’s make it realistic.  The preceding situation is not just possible, but common:

***

Closely Orbiting Annihilating Stars Were Just Observed:

Instead of making the preceding M2 out of antimatter, one can simply make M1 and M2 into neutron stars. That’s exactly what happened 130 million years ago, when dinosaurs roamed the Earth: in a galaxy far away (NGC 4993, to be exact), two neutron stars spiraled into each other from emitting gravitational radiation, emitting more the more they spiraled (the detected waves are famously rendered as sound, a rising chirp). The stars then went into a frantic dance, and collided.

Had this happened inside our own Milky Way, the present gravitational wave detectors, the U.S.-built LIGO and the European-built Virgo observatories, would have detected the gravitational waves for minutes, or maybe hours. But the gravitational waves we got were diluted by a factor of 10^10 (!) relative to what they would have been had the collision been just 10,000 light-years away, inside the Milky Way.

After billions of years spent slowly circling each other, in their last moments the two neutron-degenerate stars spiraled around each other thousands of times in a frenzy, before finally smashing together at a significant fraction of light-speed, likely creating a black hole (neutron stars are remnants of massive stars; two of those packed in a small volume make a black hole).

Such an event is called a “kilonova” (because it has the energy of 1,000 novae). Kilonovae are rare cosmic events: once every 10,000 years in a giant galaxy like the Milky Way. That’s because neutron stars are produced by supernovae. To boot, supernovae explode asymmetrically, giving a hefty “kick” to those remnants, strong enough to eject a neutron star entirely from its galaxy (the Crab Nebula remnant goes at 375 km/s relative to the explosion nebula).

***

Exit MOND:

MOND, or MOdified Newtonian Dynamics, is a somewhat ridiculous class of theories invented in the last few decades to deny the existence of DARK MATTER. Instead, the MOND monkeys devised an ad hoc theory, which basically claims that gravity is stronger at very low accelerations (whatever), as was more or less observed (sort of) inside galaxies (it didn’t work so well, or at all, for clusters).

You see, gravitation’s basic behavior is simple. Kepler thought it was an attractive force in 1/d. However, Bullialdus suggested the law was 1/d^2, in analogy with the behavior of… light. (Bullialdus didn’t understand that, in combination with Buridan’s mechanics from 1350 CE, the 1/d^2 law could explain Kepler’s laws; but Hooke, and then Newton, did.)

***

The collision of the two neutron stars, and the black hole they created, also emitted electromagnetic radiation. That light comes from the fact that materials fall at enormous speeds. Thus both gravitational waves and electromagnetic waves were captured from a single source. The first light from the merger was a brief, brilliant burst of gamma rays, the birth scream of the black hole. The gamma-ray flash was picked up by NASA’s Fermi Gamma-Ray Space Telescope, 1.7 seconds after the arrival of the gravitational waves (dust would have delayed the light a bit at the onset, but not the gravitational waves). Hours later, astronomers using ground-based telescopes detected more light from the merger, the “kilonova” produced by the expansion of debris. The kilonova faded from view over the following weeks.

As expected, astronomers saw in the aftermath light at the various wavelengths corresponding to the many heavy elements formed instantly during the collision (it was an old prediction that merging neutron stars would form the heaviest elements, such as gold and titanium, neutron-rich metals that are not known to form in (normal) stars).

(Caveat: I hold that star theory is incomplete for super-hyper-giant stars with hundreds of solar masses and a very reduced lifetime; that has been one of my arguments against standard Big Bang theory.) But let’s go back to my thought experiment. What about the other aspect I envisioned, being on a frame R travelling at a very large speed? That, too, is very realistic:

***

Frames Travelling At Close To Speed Of Light Are Common:

… Not just a figment of my imagination. That’s also very realistic: as one approaches the cosmological event horizon, entire galaxies recede ever closer to the speed of light; there is the V I was talking about above.

***

Simple science is deep science

All treatises on Gravitation tend to be the same: hundreds of pages of computation, where one wrong equation could sink the ship (Quantum Field Theory is worse, as few fundamental hypotheses therein make any sense; hence the famous prediction from QFT that the energy of the vacuum should be 10^120 times greater than observed…).

I believe instead in a modular approach: from incontrovertible facts, quick reasonings give striking conclusions. This makes science logically compartmentalized, thus avoiding that any subfield of science follow the Titanic down the abyss from a single breach. It also makes science easier to teach, and even easier to think about. For example, the reality of Quantum Waves comes not just from all sorts of variants of the 2-slit experiment, but also from the Casimir Effect, direct evidence for the reality of Quantum waves in empty space, which is observed so much that it has to be taken into account in the engineering of any nano-machinery (I also suggested a device to extract energy from this “vacuum”).

***

Conclusion: Just from the necessity of apparent conservation of energy in all inertial frames, rather simple physics shows that the speed of gravitational waves has to be exactly the speed of light. No need for hundreds of pages of obscure computations and devious logics. No need even for Relativity, just basic kinematics from 1800 CE.

Patrice Ayme’