Archive for the ‘Quantum Field Theory’ Category

Not An Infinity Of Angels On Pinheads

July 1, 2016

Thomas Aquinas and other ludicrous pseudo-philosophers (in contradistinction with real philosophers such as Abelard) used to ponder questions about angels, such as whether they can interpenetrate (as bosons do).

Are today’s mathematicians just as ridiculous? The assumption of infinity has been “proven” by the simplest reasoning ever: if n is the largest number, clearly, (n+1) is larger. I have long disagreed with that hare-brained sort of certainty, and it’s not a matter of shooting the breeze. (My point of view has been spreading in recent years!) Just saying something exists does not make it so (or then one would believe Hitler and Brexiters). If I say: “I am emperor of the galaxy known as the Milky Way!”, that has a nice ring to it, but it does not make it so (too bad, as that would be fun).

Given n symbols, each labelled by something, can one always find a new something to label (n+1) with? I say: no. Why? Because reality prevents it. Somebody (see below) objected that I confused “map” and “territory”. But I am a differential geometer, and the essential idea there, from the genius B. Riemann, is that maps allow one to define the “territory”:

Fundamental Idea Of Riemann: the Maps At the Bottom Are Differentiable


The reason has to do with discoveries made between 1600 and 1923. Around 1600, Kepler tried to make concrete the attraction of planets to the Sun (with a 1/d law). Ismael Boulliau (or Bullialdus) was a top astronomer who loved eclipses (a crater on the Moon is named after him). But Boulliau strongly disagreed with 1/d, and gave a simple, yet strong, argument for why the law should be 1/dd, the famous inverse square law.
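
Boulliau’s reasoning was by analogy with light: whatever emanates from the Sun spreads over a sphere whose area grows as dd. A minimal numeric sketch of that dilution argument (the function name and the values are illustrative assumptions, not from the text):

```python
import math

# Boulliau-style dilution: an isotropic source's output spreads over a
# sphere of area 4*pi*d**2, so the intensity per unit area falls as 1/d**2.
def intensity(total_output, d):
    """Flux per unit area at distance d from an isotropic source."""
    return total_output / (4 * math.pi * d**2)

L = 1.0  # arbitrary source strength (illustrative)
# Doubling the distance quarters the intensity: the inverse square law.
assert math.isclose(intensity(L, 2.0), intensity(L, 1.0) / 4)
print(intensity(L, 1.0), intensity(L, 2.0))
```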

Newton later (supposedly) established the equivalence between the 1/dd law and Kepler’s three laws of orbital motion, thus demonstrating the former (there is some controversy as to whether Newton fully demonstrated that he could treat planets as point-masses, the result now known as the shell theorem, a consequence of what’s now called Gauss’s law).

I insist upon the 1/dd law because, on a small scale, we have nothing better (roll over, Einstein…).

Laplace (and, before him, the British natural philosopher John Michell) pointed out in the late 18C that this 1/dd law implied Black Holes.

In 1900, Jules Henri Poincaré demonstrated that energy had inertial mass. That’s the famous E = mcc.

So famous, it could only be attributed to a member of the superior Prussian race.

The third ingredient in the annihilation of infinity was De Broglie’s assertion that to every particle a wave should be associated. The simple fact that, in some sense a particle was a wave (or “wave-packet”), made the particle delocalized, thus attached to a neighborhood, not a point. At this point, points exited reality.

Moreover, the frequency of the wave is given by its momentum-energy, said De Broglie (and that was promptly demonstrated in various ways). That latter fact prevents making a particle too much into a point. Because, to get a short wave, one needs a high frequency, thus a high energy; and if that energy is high enough, the would-be point becomes a Black Hole, and, even worse, a Whole Hole (gravity falls out of sight, physics implodes).
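
The scale at which this happens can be estimated in a few lines (a rough order-of-magnitude sketch; the constants, and the criterion that the Schwarzschild radius reaches the wavelength, are assumptions supplied here, not spelled out above):

```python
import math

# A photon of wavelength lam carries energy E = h*c/lam, hence an
# effective mass m = E/c**2 and a Schwarzschild radius
# r_s = 2*G*m/c**2 = 2*G*h/(c**3 * lam).
# Setting r_s ~ lam gives the wavelength below which the would-be
# "point" collapses into a black hole: lam**2 ~ 2*G*h/c**3.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s

lam_critical = math.sqrt(2 * G * h / c**3)
print(f"critical wavelength ~ {lam_critical:.1e} m")  # of order 1e-35 m, the Planck scale
```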

To a variant of the preceding, in “Solution: ‘Is Infinity Real?’”, Pradeep Mutalik says:

July 1, 2016 at 12:31 pm

@Patrice Ayme: It seems that you are making the exact same conflation of “the map” and “the territory” that I’ve recommended should be avoided. There is no such thing as the largest number in our conceptual model of numbers, but there is at any given point, a limit on the number of particles in the physical universe. If tomorrow we find that each fermion consists of a million vibrating strings, we can easily accommodate the new limit because of the flexible conceptual structure provided by the infinite assumption in our mathematics.

***

I know very well the difference between “maps” and “territory”: all of post-Riemann mathematics rests on it. Abstract manifolds (the “territories”) are defined by maps Fi (such that Fi composed with the inverse of Fj is itself a differentiable map from an open set in Rx…xR to another, the number of real lines R being the dimension). Instead of arrogantly pointing out that I have all the angles covered, I replied:

Dear Pradeep Mutalik:

Thanks for the answer. What limits the number of particles in a (small enough) neighborhood is density: if mass-energy density gets too high, according to (generally admitted) gravity theory, not even a graviton could come out (that’s even worse than having a Black Hole!)

According to Quantum Theory, to each particle is associated a wave, itself computed from, and expressing, the momentum-energy of said particle.

Each neighborhood could be of (barely more than) Planck radius. Tessellate the entire visible universe this way. If to each distinct wave one attaches an integer, it is clear that, at some point, one will run out of waves to label integers with. My view does not depend upon strings, super or not: I just incorporated the simplest model of strings.
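
The tessellation count can be sketched numerically (the radius of the observable universe and the Planck length below are rough standard figures, assumed here purely for illustration):

```python
import math

# Tessellate the visible universe into cells of roughly Planck radius
# and count them: that is the largest supply of physically distinct
# "labels" this argument allows.
R_UNIVERSE = 4.4e26    # m, approximate radius of the observable universe
L_PLANCK = 1.616e-35   # m, Planck length

n_cells = (4 / 3) * math.pi * (R_UNIVERSE / L_PLANCK) ** 3
print(f"available Planck-size cells: about 10^{math.log10(n_cells):.0f}")
```

A gigantic number, but a finite one: any scheme that attaches one integer per distinct physical wave runs out of waves around there.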

Another mathematician just told me: ‘Ah, but the idea of infinity is like that of God’. Well, right. Precisely the point. Mathematics, ultimately, is abstract physics. We don’t need god in physics, as Laplace pointed out to Napoleon (“Sire, je n’ai pas besoin de cette hypothèse” — “Sire, I had no need of that hypothesis”). (I know well that Plato and his elite, tyrant-friendly friends and students replied to all of this that they were not of this world, a view known as “Platonism”, generally embraced by mathematicians, especially if they are from plutocratic Harvard University… And I also know why this sort of self-serving, ludicrous opinion, similar to those of so-called “Saint” Thomas, a friend of the Inquisition, and various variants of Satanism, has been widely advocated by those who demand self-respect for their class of haughty persons…)

The presence of God, aka infinity, in mathematics, is not innocuous. Many mathematical brain teasers become easier, or solvable if one assumes only a largest number (this is also how computers compute, nota bene). Assuming infinity, aka God, has diverted mathematical innovation away from the real world (say fluid flow, plasma physics, nonlinear PDEs, nonlinear waves, etc.) and into questions akin to assuming that an infinity of angels can hold on a pinhead. Well, sorry, but modern physics has an answer: only a finite number.
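
The parenthetical remark about computers is easy to make concrete (a minimal sketch; the 64-bit width is simply the common machine word size, assumed for illustration):

```python
import sys

# Fixed-width machine integers have a largest value and wrap around
# (modular arithmetic); IEEE-754 floats saturate at a finite maximum.
BITS = 64
LARGEST_UINT = 2**BITS - 1

def machine_add(a, b):
    """Addition as a 64-bit register performs it: modulo 2**64."""
    return (a + b) % 2**BITS

assert machine_add(LARGEST_UINT, 1) == 0    # "n + 1" wraps back to 0
print(sys.float_info.max)                   # largest finite float
print(sys.float_info.max * 2)               # overflows to inf
```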

Patrice Ayme’

 

QUANTUM FLUCTUATIONS & ARROW OF TIME

January 18, 2016

What is time? Quantum Physics gives an answer; classical physics does not. Quantum Physics suggests that time is the set of all irreversible processes. This is a world first, so it requires some explanations. I have been thinking, hard, about these things all my life. Sean Carroll, bless his soul, called my attention to the new development that mainstream physicists are starting to pay attention to my little kingdom (so I thank him).

***

SCIENCE IS WHAT WE DO:

Sean Carroll in “Quantum Fluctuations”:

“Let’s conjure some science up in here. Science is good for the soul.”

Patrice Ayme’: Why is science good for the soul? Because the human soul is centered on finding truth. Science is truth, thus science is human. Nothing is more human than science. Science is what humans do. Another thing humans do is art, and it tries to duplicate, distort, and invent new nature, or interpretations, interpolations, and suggestions, of and from, nature:

Claim: Quantum Interference Is An Irreversible Process, Time’s Arrows All Over. Quantum Interference Goes From Several Waves, To One Geometry. Soap Bubbles Brim With Quantum Interference.


SC: …what are “quantum fluctuations,” anyway? Talk about quantum fluctuations can be vague. There are really 3 different types of fluctuations: Boltzmann, Vacuum, & Measurement. Boltzmann Fluctuations are basically classical: random motions of things lead to unlikely events, even in equilibrium.

Patrice Ayme’: As we will see, or we have already seen in my own “Quantum Wave”, Quantum Fluctuations are just the Quantum Waves. Richard Feynman, at the end of his chapter on entropy in the Feynman Lectures on Physics, ponders how to get an arrow of time in a universe governed by time-symmetric underlying laws. Feynman:

“So far as we know, all the fundamental laws of physics, such as Newton’s equations, are reversible. Then where does irreversibility come from? It comes from order going to disorder, but we do not understand this until we know the origin of the order. Why is it that the situations we find ourselves in every day are always out of equilibrium?”

Patrice Ayme’: Is that really true? Are equations time-symmetric? Not really. First, equations don’t stand alone. Differential equations depend upon initial conditions. Obviously, even if the equations are time-symmetric, the initial conditions are not: the final state cannot be exchanged with the initial state.

Quantum Physics makes this observation even more important. The generic Quantum set-up depends upon a geometric space S in which the equation(s) of motion will evolve. Take for example the 2-slit: the space one generally considers, S, is the space AFTER the 2-slit. The one before the 2-slit, C (for coherence), is generally ignored. S is ordered by Quantum interference.

The full situation is made of (C, S & Quantum interference). It’s not symmetric. The Quantum depends upon the space (it could be a so-called “phase space”) in which it deploys. That makes it time-asymmetric. An example: the Casimir Effect.

***

QUANTUM PHYSICS IS ABOUT WAVES:

Sean Carroll: “Nothing actually “fluctuates” in vacuum fluctuations! The system can be perfectly static. Just that quantum states are more spread out.”

Indeed. Quantum states are, intrinsically, more spread out. They are NON-LOCAL. Why?

One has to go back to the basics. What is Quantum Physics about? Some, mostly the “Copenhagen Interpretation” followers, claim Quantum Physics is a subset of functional analysis. (The famous mathematician Von Neumann, one of the creators of Functional Analysis, was the founder of this system of thought; this scion of plutocrats, famously, yet satanically, claimed that De Broglie and Bohmian mechanics were impossible… Von Neumann had made a logical mistake; maybe that had to do with being involved with the satanic part of the American establishment, as, by then, that Hungarian had migrated to the USA and wanted to be called “Johnny”!).

The Quantum-as-functional-analysis school became dominant. It had great successes in the past. It allows one to view Quantum Physics as “Non-Commutative Geometry”. However, contrary to its repute, it’s not the most fundamental view. (I have my own approach, which eschews Functional Analysis.)

But let’s backtrack. Where does Quantum-as-functional-analysis come from? A Quantum system is made of a (“configuration”) space S and an equation E (which is a Partial Differential Equation). Out of S and E is created a Hilbert Space with a basis, the “eigenstates”.

In practice, the eigenstates are fundamental waves. They can be clearly seen, with the mind’s eye, in the case of the Casimir Effect with two metallic plates: there is a maximal size for the electromagnetic wavelengths between the plates (as they have to zero out where they touch the metal).
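
The mode counting between the plates can be made explicit (a minimal sketch; the plate separation is an assumed illustrative value, not from the text):

```python
# A standing wave between two conducting plates a distance d apart must
# vanish at both plates, so only wavelengths lam_n = 2*d/n fit,
# for n = 1, 2, 3, ...  Nothing longer than 2*d is allowed: that is the
# maximal wavelength mentioned above.
d = 1e-6  # plate separation in metres (illustrative assumption)

allowed = [2 * d / n for n in range(1, 6)]  # first five allowed wavelengths
assert max(allowed) == 2 * d                # the maximal wavelength
print(allowed)
```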

The notion of wave is more general than the notion of eigenstate (Dirac pushed, successfully, the notion of wave so far that it created a space, Spinor Space; and Quantum Field Theory has done more of the same, extending the general mood of De Broglie-Dirac to ever fancier Lagrangians, the energy expressions guiding the waves according to De Broglie’s scheme).

Historically, De Broglie suggested in 1923 (in several publications to the French Academy of Science) that to each particle was associated a (relativistic) wave. De Broglie’s reasons were looked at by Einstein, who was impressed (few, aside from Einstein, could understand what De Broglie said; actually De Broglie’s French thesis jury, which included two Nobel laureates, was so baffled by the thesis that they sent it to Einstein, to ask him what he thought. Einstein replied with the greatest compliment he ever made to anyone: “De Broglie has started to lift the great veil,” etc…).

De Broglie’s wave appears on page 111 of his 1924 thesis, which has 118 pages (and contains, among other things, the Schrödinger wave equation, and, of course, the uncertainty principle). The latter is something obvious: De Broglie said all particles were guided by waves whose wavelengths depended upon their (relativistic) energy. An uncertainty automatically appears when one tries to localize a particle (that is, a wave) with another particle (that is, another wave!).

***

CLASSICAL PHYSICS HAS NO ARROW OF TIME:

Consider an empty space S. If the space S is made available to (classical) Boltzmann particles, S is progressively invaded by (classical) particles occupying ever more states.

Classical physicists (Boltzmann, etc.) postulated the Second Law of Thermodynamics: a quantity called entropy augments during any process. Problem, rather drastic: all classical laws of physics are reversible! So, how can reversible physics generate a time-irreversible law? Classical physicists have found no answer. But I did, a knight in shining armor, mounted on my powerful Quantum Monster:

***

QUANTUM PROCESSES CREATE IRREVERSIBLE GEOMETRIES:

When the same space S is made available as part of a Quantum System, the situation is strikingly different. As Sean Carroll points out, the situation is immediately static; it provides an order (as Bohm insisted it did). The observation is not new: the De Broglie waves provided an immediate explanation of the stability of electronic waves around atoms (thus supporting Bohr’s “First, or Semi-Classical, Quantum Theory”).

What is the difference between a Quantum System and a classical system? The classical system evolves from a given order to ever more disorder. The Quantum system does not evolve through increasing disorder. Instead, the space S, once accessed, becomes not so much an initial condition as a global order.

The afore-mentioned Hilbert Space with its eigenstates is that implicit, or implicate (Bohm), order. So the Quantum System is static in an important sense (made of standing Quantum Waves, it sort of vibrates through time).

Thus Quantum Systems have an intrinsic time-asymmetry (at least when dealing with cavities). When there are no cavities, entanglement causes the asymmetry: once an interaction has happened, until observation, there is entanglement; before the interaction, there was no entanglement. Two classical billiard balls are not entangled either before or after they interact, so the interaction by collision is fully time-reversible.

Entanglement is also something waves exhibit once they have interacted, and not before; classical particles are deprived of it.
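
This before/after asymmetry can be illustrated on the smallest possible example (a toy sketch; the state names and the specific states are illustrative assumptions). A two-qubit pure state with 2×2 coefficient matrix M is a product (unentangled) state exactly when det(M) = 0, and 2|det(M)| (the “concurrence”) measures the entanglement:

```python
import math

def concurrence(m):
    """2*|det| of the 2x2 coefficient matrix of a two-qubit pure state:
    0 for a product (unentangled) state, 1 for a maximally entangled one."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return 2 * abs(det)

s = 1 / math.sqrt(2)
before = [[1.0, 0.0], [0.0, 0.0]]  # |00>: no interaction yet, product state
after  = [[s,   0.0], [0.0, s  ]]  # (|00> + |11>)/sqrt(2): after interaction

print(concurrence(before))  # 0.0 -> not entangled
print(concurrence(after))   # ~1  -> maximally entangled
```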

Once more we see the power of the Quantum mindset for explaining the world in a much more correct, much simpler, and thus much more powerful way. The Quantum even decides what time is.

So far as we know, all the classical fundamental laws of physics, such as Newton’s equations, are reversible. Then where does irreversibility come from? It does NOT come, as was previously suggested, from order going to disorder.

Quite the opposite: irreversibility comes from disorder (several waves) going to order (one wave, ordered by its surrounding geometry). And we do understand the origin of the order: it’s the implicit order of Quantum Waves deployed.

You want to know the world? Let me introduce you to the Quantum, a concept of wealth, taste and intelligence.

Last and not least: if I am right, the Quantum brings the spontaneous apparition of order, the exact opposite of the picture which has constituted the manger in which the great cows of physics have found their sustenance. Hence the fact that life, and many other complicated naturally occurring physical systems, are observed to create order in the universe is not so baffling anymore. Yes, they violate the Second Law of Thermodynamics. However, fundamentally, it is that Law which violated the spirit, the principle, of the universe: the Quantum itself.

Patrice Ayme’

Quantum Fluctuates (Not That Much)

January 3, 2016

The Multiverse fanatics use “Quantum Fluctuations” to justify the existence of the… Universe. Their logic rests on the famous, and deep, inequality:

(Time Uncertainty) (Energy Uncertainty) > (Planck Constant).

I have an accompanying drawing of sorts which relates the preceding to the better known inequality called the “Uncertainty Principle”:

(Uncertainty Position) (Uncertainty Momentum) > (Planck Constant = h).

Uncertainty actually is not so much a “Principle” as a theorem (both inequalities are demonstrated below). The entire subject is very interesting philosophically, as we will see. The lessons are far-ranging, and all over. Yet recent physics textbooks have been eschewing the philosophical character of what is done, within the logic of physics, and stick to soulless formalism. The result has been an entire generation ill-equipped to handle philosophical questions (and yet, they are now forced to do so). Before I get into the philosophy, which appears later, let me roll out the basic physics.

Time And Energy Are Entangled, And This Is The Easiest Proof


OK, let’s give a few more details (hidden by implication arrows above). The Position-Momentum inequality is rather obvious, once one has got the basic quantum picture of the photon as a wave, and how it relates to energy.

  1. To locate an object V, one needs to see it. That means ricocheting a photon off it (we have nothing better than photons to see with… Although the French physicist Serge Haroche got the Nobel for detecting photons through atomic phase changes, but that’s another story).
  2. So throw photon P on V. To hit V, P needs a wavelength W smaller than L, the diameter of V. Otherwise P, being a wave when it moves (or, more exactly, when it explores space supraluminally), will go around V.
  3. The momentum of the photon P is inverse to W. [This is Energy = h (Frequency)]
  4. So the smaller L, the harder the photon P will hit the object V. That is, the tighter the localization of V, the greater the momentum imparted to V.
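
Numerically, the kick grows fast as the probe’s wavelength shrinks. A sketch of steps 1-4 above (the wavelengths are illustrative assumptions):

```python
# The probing photon's momentum is p = h / wavelength, so halving the
# wavelength doubles the kick delivered to the object being located.
h = 6.626e-34  # Planck constant, J s

for wavelength in (1e-6, 1e-9, 1e-12):  # micron, nanometre, picometre
    p = h / wavelength
    print(f"wavelength {wavelength:.0e} m -> momentum kick {p:.1e} kg m/s")
```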

So localizing a particle kicks it. How do we get to Energy-Time Uncertainty from there? The Standard Model (which is proven and consistent in its present very restricted domain: no gravity, etc.) has three classes of particles, one of them the class of force carriers. Force carriers go at the speed of light, c, and (thus) have zero mass (the Higgs gives them the appearance of mass as an afterthought).

So what do I do? Well, momentum is basically energy (make c = 1), and time is space (thanks to c, measuring time is measuring space, and reciprocally). Thus Position-Momentum becomes Time-Energy (the “real” proof, as found in Messiah’s basic QM textbook, involves functional-analytic manipulations, but I doubt it really says more!)
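
Written out, the substitution is a one-liner (a heuristic sketch, using the convention of the drawing above, ΔxΔp > h rather than the textbook ħ/2, and assuming a light-speed carrier, so that E = pc and x = ct):

```latex
% Heuristic: for a massless carrier, x = ct and E = pc, hence
% \Delta x = c\,\Delta t and \Delta p = \Delta E / c. Then
\Delta x \,\Delta p > h
\;\Longrightarrow\;
(c\,\Delta t)\,\frac{\Delta E}{c} \;=\; \Delta t\,\Delta E \;>\; h .
```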

[There are slick derivations of the Time-Energy relationship using functional analysis. I am not so sure they make sense… As time is not really an observable in Quantum Physics. My primitive derivation found in the drawing is extremely basic, thus much more powerful. Their main advantage would be to mesmerize undergraduates.]

How Quantum Field Theory (QFT) Blossomed:

Philosophically, the rise of QFT is all about inventing new weird logics. Modern logic comprises Classical Logic, but has gone much further (multivalued logic, fuzzy logic, paradoxal logic, to quote just a few). Basically it has gone in realms where all the rules of classical logic fail. And physics has not come short, but made equally impressive contributions in weirdness.

Let me hasten to add that I find all this very valuable. De Broglie made reasonings I still do not understand. Dirac got the idea that the wave (equation) should be the primary axiom (getting spinor space, where electrons roam, from it, and then spin, anti-matter, etc.).

In QFT the Time-Energy Uncertainty plays a central role, and what is done is actually philosophically fascinating, and should inform the rest of philosophy:

  1. Time-Energy Uncertainty prevents knowing fundamental processes if the product of the uncertainty in Time and the uncertainty in Energy is less than a constant (h).
  2. Thus, should such HIDDEN Fundamental Processes (HFP) occur, we won’t be able to detect them directly.
  3. Hence let’s suppose such HFP happen. Then let’s compute. We discover renormalization, and find end results which are different from those without the HFP.
  4. Check experimentally. What is found is that physics with HFP is correct, and physics without HFP is not.
  5. Einstein tried, but gave up on all this, after his friend Ehrenfest tried for three weeks at Princeton to teach it to him.
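
Point 1 fixes how long a hidden process may last. A sketch with textbook constants, taking as an illustrative example (my choice, not the text’s) the energy needed to conjure an electron-positron pair:

```python
# A "hidden" fluctuation borrowing energy dE stays undetectable only
# for about dt ~ h / dE (using the convention above, h rather than hbar).
h = 6.626e-34     # Planck constant, J s
m_e = 9.109e-31   # electron mass, kg
c = 2.998e8       # speed of light, m/s

dE = 2 * m_e * c**2   # rest energy of a virtual electron-positron pair, J
dt = h / dE           # maximal flicker time, s (of order 1e-21 s)
print(f"dE = {dE:.2e} J -> dt ~ {dt:.1e} s")
```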

Philosophical lesson? Something can be hidden, in principle, and still have indirect, observable effects. (Application in politics? Think of the plutocrats’ most vicious ways, unobservable, in principle, as the media they control make sure of it. Yet, indirectly they are poisoning the world, and the world is dying.)

Some of Today’s Physicists Are Easily Philosophically Confused:

But let’s go back to pataphysics; it’s a lot of fun. In the so-called Big Boom, time is supposed to go to zero. Pataphysicists reason that, then, as the uncertainty in time goes down to zero, the uncertainty in energy has to tend to infinity. First problem: it’s not because the uncertainty on something goes to infinity that this thing itself goes to infinity.

But the main problem is the easy way in which the time-energy uncertainty was derived above. If that reasoning makes sense, it applies to particles, and even to virtual particles (although some fully active physicists consider that those virtual particles do not exist, only fields do; and Feynman himself was not sure, private conversation). Thus the reasoning above justifies Quantum Fluctuations as they are used in Quantum Field Theory… and, indeed, they are clearly a safe and effective tool there. They work so well that, according to EFFECTIVE ONTOLOGY, those virtual particles ought to exist (I am aware of the arguments against them; more on that another time).

Thus, that particles can flicker in and out of existence because of Quantum Fluctuations, I have not only demonstrated in my very primitive (and thus very safe and effective) way, but nobody in the know can deny it happens, since QFT works, and proves the concept. During their brief existence, those virtual particles (or field fluctuations represented by particles, some sophists will insist) affect charge, mass, etc., and these renormalizations have been observed.

Notice that I said: flicker in and OUT of existence. Why OUT of existence?  These particles flicker OUT of existence because of ENERGY CONSERVATION. Notice also that the universe does not flicker out of existence.

Pataphysicists Throw The Baby Out, And Drink The Dirty Water:

Physics is the search of basic axioms and the logic to bring them to life. One of these basic axioms is energy conservation.

This is what the pataphysicists propose to violate, as if they were Saudi paedophiles. Now violations can be justified in extraordinary circumstances (after all Aisha, who Muhammad married when she was six, came to love the Prophet more than any of his followers, and defended his work with her life, after His passing).

However the Big Boom theory of the creation of the universe is not such a great miracle, that it has to be preserved at all cost.

One should not throw the baby out with the bathwater. Nor should one throw the baby out to preserve the dirty bath water. The precious baby is the principle of energy conservation. The dirty bath water is the Big Boom theory. That Big Bang already requires space to expand at zillions of times the speed of light. I have nothing against it, except that it looks ad hoc. Pataphysicists have also smelled a rotten rat there, with that one and only, ad hoc inflation, too, so they say:

“Look at a blade of grass. What do you see? A blade of grass. But look beyond: here is another one blade of grass, and another, and another. Zillions of blades of grass. Then look at planets: zillions, And at stars: zillions, and galaxies too: zillions. Thus universes? Zillions too!”

It reminds me of the fable of the frog who wanted to make itself bigger than an ox. It was doing well, inflating itself, until it exploded in a Big Bang. Pataphysicists can inflate their minds as much as they want, it’s still all wind inside. Time-Energy uncertainty applies to Quantum Fields, inasmuch as it respects energy conservation. Agreed, it is only natural that those who got reputations out of nothing, feel now confident that they can get a universe out of nothing. After all, it’s what their existence is all about.

And the weirdest thing? There is a simple, a simpler, alternative to all the madness: the 100 billion years universe. We will see who wins. This is going to be fun.

Patrice Ayme’

Points Against Multiverses

December 31, 2015

Physics, the study of nature, is grounded not just in precise facts, but also in a loose form of logic called mathematics, and in even more general reasonings we know as “philosophy”. For example, the rise of Quantum Field Theory required massive Effective Ontology: define things by their effects. The reigning philosophy of physics became “shut up and calculate”. But it’s not that simple. Even the simplest Quantum Mechanics, although computable, is rife with mind-numbing mysteries (about the nature of matter, time and non-locality).

Recently the (increasing) wild wackiness of the Foundations of Physics, combined with the fact that physics, as it presently officially exists, cannot understand Dark Energy and Dark Matter, most of the mass-energy out there, has led some Europeans to organize conferences where physicists meet with reputable philosophers.

Einstein Was Classical, The World Is Not. It's Weirder Than We Have Imagined. So Far.


[Bell, of CERN’s Theory Division, discovered a now famous inequality expressing locality, which Quantum Physics violates. Unfortunately he died of a cerebral hemorrhage thereafter.]

Something funny happened in these conferences: many physicists came out of them, persuaded, more than ever, or so they claimed, that they were on the right track. Like little rodents scampering out in the daylight,  now sure that there was nothing like a big philosophical eagle to swoop down on them. They made many of these little reasonings in the back of their minds official (thus offering now juicy targets).

Coel Hellier below thus wrote clearly what has been in the back of the minds of the Multiverse Partisans. I show “his” argument in full below. Coel’s (rehashing of what has become the conventional Multiverse) argument is neat, cogent, powerful.

However I claim that it is not as plausible, not as likely, as the alternative, which I will present. Coel’s argument rests on a view of cosmology which I claim is neither mathematically necessary, nor physically tenable (in light of the physics we know).

To understand what I say, it’s better to read Coel first. Especially as I believe famous partisans of the Multiverse have been thinking along the same lines (maybe not as clearly). However, to make it fast, those interested by my demolition of it can jump directly to my counter, at the end: NO POINTS, And Thus No Multiverse.

***

Multiverses Everywhere: Coel Hellier’s Argument:

Coel Hellier, a professional astrophysicist of repute, wrote “How many Big Bangs? A philosophical argument for a multiverse”:

“Prompted by reading about the recent Munich conference on the philosophy of science, I am reminded that many people regard the idea of a multiverse as so wild and wacky that talking about it brings science into disrepute.”

Well, being guided by non-thinking physicists will do that. As fundamental physicist Mermin put it, decades ago:

The Philosophy "Shut Up And Calculate" Is A Neat Example Of Intellectual Fascism. It Is Increasingly Undermined By The Effort Toward Quantum Computing, Where Non-Locality Reigns

The Philosophy “Shut Up And Calculate” Is A Neat Example Of Intellectual Fascism. It Is Increasingly Undermined By The Effort Toward Quantum Computing, Where Non-Locality Reigns.

Coel, claiming to have invented something which has been around for quite a while, probably decades: “My argument here is the reverse: that the idea of multiple Big Bangs, and thus of a multiverse, is actually more mundane and prosaic than the suggestion that there has only ever been one Big Bang. I’m calling this a ‘philosophical’ argument since I’m going to argue on very general grounds rather than get into the details of particular cosmological models.

First, let me clarify that several different ideas can be called a “multiverse”, and here I am concerned with only one. That “cosmological multiverse” is the idea that our Big Bang was not unique, but rather is one of many, and that the different “universes” created by each Big Bang are simply separated by vast amounts of space.

Should we regard our Big Bang as a normal, physical event, being the result of physical processes, or was it a one-off event unlike anything else, perhaps the origin of all things? It is tempting to regard it as the latter, but there is no evidence for that idea. The Big Bang might be the furthest back thing we have evidence of, but there will always be a furthest-back thing we have evidence of. That doesn’t mean its occurrence was anything other than a normal physical process.

If you want to regard it as a one-off special event, unlike any other physical event, then ok. But that seems to me a rather outlandish idea. When physics encounters a phenomenon, the normal reaction is to try to understand it in terms of physical processes.”

Then Coel exposes some of the basic conclusions of the Standard Big Bang model:

“So what does the evidence say? We know that our “observable” universe is a region of roughly 13.8 billion light years in radius, that being the distance light can have traveled since our Big Bang. (Actually, that’s how we see it, but it is now bigger than that, at about 90 billion light years across, since the distant parts have moved away since they emitted the light we now see.) We also know that over that time our observable universe has been steadily expanding.”

Then astrophysicist Coel starts to consider as necessary something about the geometry of the universe which is not so, in my opinion. Coel:

“At about 1 second after the Big Bang, what is now our observable universe was only a few light years across, and so would have fitted into (what is now) the space between us and the nearest star beyond our Sun. Before that it would have been yet smaller.”

What’s wrong? Coel assumes implicitly that the universe started from a POINT. But that does not have to be the case. Suppose the universe started as an elastic table: as we go back in time, the table shrinks, distances diminish, yet the table never becomes a point. Coel:

“We can have good confidence in our models back to the first seconds and minutes, since the physics at that time led to consequences that are directly observable in the universe today, such as the abundance of helium-4 relative to hydrogen, and of trace elements such as helium-3, deuterium, and lithium-7.[1] Before that time, though, our knowledge gets increasingly uncertain and speculative the further back we push.”

These arguments about how the elements were generated have a long history. The elements could actually be generated in stars (I guess, following Hoyle and company). Star physics is not so well known that we can be sure they can’t be (stars as massive as 600 Suns seem to have been discovered; the usual astrophysics says such stars are impossible; they would be hotter than the hottest stars known for sure).

Big Bangists insist that there would have been no time to generate these elements in stars, because the universe is 13.8 billion years old. But that 13.8 billion comes from their Big Bang model. So their argument is circular: it explodes if the universe is, actually, 100 billion years old.

But back to Coel’s Multiverses All Over. At that point, Coel makes a serious mistake, the one he was drifting towards above:

“One could, if one likes, try to extrapolate backwards to a “time = zero” event at which all scales go to zero and everything is thus in the same place. But trying to consider that is not very sensible since we have no evidence that such an event occurred (from any finite time or length scale, extrapolating back to exactly zero is an infinite extrapolation in logarithmic space, and making an infinite extrapolation guided by zero data is not sensible). Further, we have no physics that would be remotely workable or reliable if applied to such a scenario.[2]”

…”all scales go to zero and everything is thus in the same place” is not true, in the sense that it does not have to be true. Never mind: Coel excludes it, although he claims that “extrapolating back in time” leads there. It does not.

Instead, Coel invites us to Voodoo (Quantum) Physics:

“So what is it sensible to consider? Well, as the length scale decreases, quantum mechanics becomes increasingly important. And quantum mechanics is all about quantum fluctuations which occur with given probabilities. In particular, we can predict that at about the Planck scale of 10⁻³⁵ metres, quantum-gravity effects would have dominated.[3] We don’t yet have a working theory of quantum gravity, but our best guess would be that our Big Bang originated as a quantum-gravity fluctuation at about that Planck-length scale.”

Well, this is conventional pata-physics. Maybe it’s true, maybe not. I have an excellent reason why it should not (details another time). At this point, Coel is firmly in the conventional Multiverse argument (come to think of it, he did not invent it). The universe originated in a Quantum fluctuation at a point, thus:

“So, we can either regard our Big Bang as an un-natural and un-physical one-off event that perhaps originated absolutely everything (un-natural and un-physical because it would not have been a natural and physical process arising from a pre-existing state), or we can suppose that our Big Bang started as something like a quantum-gravity fluctuation in pre-existing stuff. Any physicist is surely going to explore the latter option (and only be forced to the former if there is no way of making the latter work).

At times in our human past we regarded our Solar System as unique, with our Earth, Sun and Moon being unique objects, perhaps uniquely created. But the scientific approach was to look for a physical process that creates stars and planets. And, given a physical process that creates stars, it creates not just one star, but oodles of them strewn across the galaxy. Similarly, given a physical process that creates Earth-like planets, we get not just one planet, but planets around nearly every star.”

Coel then gets into the famous all-is-relative mood, rendered famous by “French Theory”:

“It was quite wrong to regard the Sun and Earth as unique; they are simply mundane examples of common physical objects created by normal physical processes that occur all over the galaxy and indeed the universe.

But humans have a bias to a highly anthropocentric view, and so we tend to regard ourselves and what we see around us as special, and generally we need to be dragged kicking and screaming to the realisation that we’re normal and natural products of a universe that is much the same everywhere — and thus is strewn with stars like our Sun, with most of them being orbited by planets much like ours.

Similarly, when astronomers first realised that we are in a galaxy, they anthropocentrically assumed that there was only one galaxy. Again, it took a beating over the head with evidence to convince us that our galaxy is just one of many.”

Well, it’s not because things we thought were special turned out not to be, that nothing is special. The jury is still out about how special Earth, or, for that matter, the Solar System, are. I have argued Earth is what it is because of the Moon and the powerful nuclear fission reactor inside Earth. The special twist being that radioactive elements tend to gather close to the star, and not in the habitable zone. So Earth may be, after all, special.

At this point, Coel is on a roll: multiverses all over. Says he:

“ So, if we have a physical process that produces a Big Bang then likely we don’t get just one Big Bang, we get oodles of them. No physical process that we’re aware of happens once and only once, and any restriction to one occurrence only would be weird and unnatural. In the same way, any physical process that creates sand grains tends to create lots of them, not just one; and any physical process that creates snowflakes tends to create lots of them, not just one.

So, we have three choices: (1) regard the Big Bang as an unnatural, unphysical and unexplained event that had no cause or precursor; (2) regard the Big Bang as a natural and physical process, but add the rider that it happened only once, with absolutely no good reason for adding that rider other than human parochial insularity; or (3) regard the Big Bang as a natural and physical event, and conclude that, most likely, such events have occurred oodles of times.

Thus Big Bangs would be strewn across space just as galaxies, stars and planets are — the only difference being that the separation between Big Bangs is much greater, such that we can see only one of them within our observable horizon.

Well, I don’t know about you, but it seems to me that those opting for (3) are the ones being sensible and scientifically minded, and those going for (1) or (2) are not, and need to re-tune their intuition to make it less parochial.”

To make sure you get it, professor Coel repeats the argument in more detail, and I will quote him there, because as I say, the Multiverse partisans have exactly that argument in the back of their mind:

“So, let’s assume we have a Big Bang originating as a quantum-gravity fluctuation in a pre-existing “stuff”. That gives it a specific length scale and time scale, and presumably it would have, as all quantum fluctuations do, a particular probability of occurring. Lacking a theory of quantum gravity we can’t calculate that probability, but we can presume (on the evidence of our own Big Bang) that it is not zero.

Thus the number of Big Bangs would simply be a product of that probability times the number of opportunities to occur. The likelihood is that the pre-existing “stuff” was large compared to the quantum-gravity fluctuation, and thus, if there was one fluctuation, then there would have been multiple fluctuations across that space. Hence it would likely lead to multiple Big Bangs.

The only way that would not be the case is if the size of the pre-existing “stuff” had been small enough (in both space and time) that only one quantum fluctuation could have ever occurred. Boy, talk about fine tuning! There really is no good reason to suppose that.

Any such quantum fluctuation would start as a localised event at the Planck scale, and thus have a finite — and quite small — spatial extent. Its influence on other regions would spread outwards, but that rate of spreading would be limited by the finite speed of light. Given a finite amount of time, any product of such a fluctuation must then be finite in spatial extent.

Thus our expectation would be of a pre-existing space, in which there have occurred multiple Big Bangs, separated in space and time, and with each of these leading to a spatially finite (though perhaps very large) universe.

The pre-existing space might be supposed to be infinite (since we have no evidence or reason for there being any “edge” to it), but my argument depends only on it being significantly larger than the scale of the original quantum fluctuation.

One could, of course, counter that since the initial quantum fluctuation was a quantum-gravity event, and thus involved both space and time, then space and time themselves might have originated in that fluctuation, which might then be self-contained, and not originate out of any pre-existing “stuff”.[5] Then there might not have been any pre-existing “stuff” to argue about. But if quantum-gravity fluctuations are a process that can do that, then why would it happen only once? The natural supposition would be, again, that if that can happen once, then — given the probabilistic nature of physics — it would happen many times producing multiple different universes (though these might be self-contained and entirely causally disconnected from each other).”

Then, lest you don’t feel Multiversal enough, professor Coel rolls out the famous argument which brings the Multiverse out of Cosmic Inflation. Indeed, the universe-out-of-nothing Quantum fluctuation is basically the same as that of Cosmic Inflation. It’s the same general mindset: I fluctuate, therefore I am (that’s close to the motto of Paris, Fluctuat Nec Mergitur…). Coel:

“In order to explain various aspects of our observed universe, current cosmological models suggest that the initial quantum fluctuation led — early in the first second of its existence — to an inflationary episode. As a result the “bubble” of space that arose from the original quantum-fluctuation would have grown hugely, by a factor of perhaps 10³⁰. Indeed, one can envisage some quantum-gravity fluctuations leading to inflationary episodes, but others not doing so.

The inflationary scenario also more or less requires a multiverse, and for a similar reason to that given above. One needs the region that will become our universe to drop out of the inflationary state into the “normal” state, doing so again by a quantum fluctuation. Such a quantum fluctuation will again be localised, and so can only have a spatially finite influence in a finite time.

Yet, the inflationary-state bubble continues to expand so rapidly, much more rapidly than the pocket of normal-state stuff within it, that its extent does not decrease, but only increases further. Therefore whatever process caused our universe to drop out of the inflationary state will cause other regions of that bubble to do the same, leading to multiple different “pocket universes” within the inflationary-state bubble.

Cosmologists are finding it difficult to construct any model that successfully transitions from the inflationary state to the normal state, that does not automatically produce multiple pocket universes.[6] Again, this follows from basic principles: the probabilistic nature of quantum mechanics, the spatial localisation of quantum fluctuations, and the finite speed at which influence can travel from one region to another.”

The driver of the entire Multiverse way of thinking is alleged Quantum Fluctuations, in a realm of which we know next to nothing. Those who are obsessed by fluctuations may have the wrong obsession. And professor Coel concludes with more fluctuations:

“The dropping out of the inflationary state is what produces all of the energy and matter that we now have in our universe, and so effectively that dropping-out event is what we “see” as our Big Bang. This process therefore produces what is effectively a multiverse of Big Bangs strewn across that inflationary bubble. Thus we have a multiverse of multiverses! Each of the (very large number of?) quantum-gravity fluctuations (that undergo an inflationary state) then itself produces a whole multiverse of pocket universes.

The point I am trying to emphasize is that any process that is at all along the lines of current known physics involves the probabilistic nature of quantum mechanics, and that means that more or less any conceivable process for creating one Big Bang is going to produce not just a single event, but almost inevitably a vast number of such events. You’d really have to try hard to fine-tune and rig the model to get only one Big Bang.

As with any other physical process, producing multiple Big Bangs is far more natural and in-line with known physics than trying to find a model that produces only one. Trying to find such a model — while totally lacking any good reason to do so — would be akin to looking for a process that could create one snowflake or one sand grain or one star or galaxy, but not more than one.”

Patrice Says: NO POINTS, AND THUS NO MULTIVERSE(s):

Did the universe expand from one point? Not necessarily. It could have been from a line, a plane, a volume, even something with a crazy topology. The Big Bang is the time-zero limit of the FLRW metric. In that limit the spacing between every pair of points in the universe goes to zero and the density goes to infinity.
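This point can be made with the FLRW line element itself (standard textbook notation, not from the original post):

```latex
ds^2 = -c^2\,dt^2 + a(t)^2\left[\frac{dr^2}{1-kr^2} + r^2\,d\Omega^2\right],
\qquad a(t)\;\xrightarrow{\,t\to 0^+\,}\;0 .
```

The limit sends the scale factor a(t) to zero, not the spatial sections themselves: for curvature k = 0 or k = −1 the spatial sections can be infinite at every t > 0, so "shrinking back in time" never has to pass through a single point.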

Did the Universe expand from Quantum Gravity? Beats me, I don’t have a theory of Quantum Gravity.

What I know is that, extrapolating from what’s known of gravity, if the universe expanded from a “point”, that point would be smaller than the Planck volume, thus the universe would be within a Black Hole. From what we know about those, no expansion.
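A back-of-the-envelope check of the scales involved. This is my own sketch; the mass of the observable universe (~1.5 × 10⁵³ kg) is a rough literature estimate, an assumption, not a figure from the post:

```python
import math

# Constants in SI units.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J s

# Planck length: the scale below which quantum gravity should dominate.
l_planck = math.sqrt(hbar * G / c**3)

# Schwarzschild radius 2GM/c^2 for an assumed observable-universe mass.
M_univ = 1.5e53     # kg, rough literature estimate (assumption)
r_s = 2 * G * M_univ / c**2

print(f"Planck length        ~ {l_planck:.1e} m")
print(f"Schwarzschild radius ~ {r_s:.1e} m")
# Classically, any mass confined well inside its own Schwarzschild
# radius is inside a black hole -- which is the objection above to a
# sub-Planck-volume "point" start.
```

The Schwarzschild radius for that mass comes out around 10²⁶ m, some sixty orders of magnitude above the Planck length: squeezing that mass into a Planck-scale "point" puts it overwhelmingly deep inside its own horizon.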

Once we don’t have the universe expanding from a point, we cannot argue that it expanded from one point in some sort of “stuff”. If the universe is the “stuff” itself, and it’s everywhere, and expanding from everywhere, exit the argument about a “point”.

The argument about a “point” was that: why this particular point? Why not another “Quantum Fluctuation” from another “point” in the “stuff”? Why should our “point” be special? Is it not scientific to believe in the equality of points? Except that points have measure zero in three-dimensional space, and thus it’s more “scientific”, “mathematical” to suppose the universe expanded from a set of non-zero measure, namely a volume (and it had better be bigger than the Planck Volume).

So the argument that there should be many universes because there are many points and many Quantum (Gravity) fluctuations flies apart.

Remains the argument that we need Cosmic Inflation. Yes, but if the universe expands from all over, there is only one such expansion. Cosmic Inflation does not have to appear at all points, generating baby universes. It becomes more like Dark Energy.

Speaking of which, why should we have two Cosmic Inflations when we already have one? Even my spell checker does not like the idea of two inflations. It does not like the “s”. Ah, yes, the existing Big Bang needs its own Inflation.

Yet if there is only one inflation, presto, no more standard Big Bang. But then what of Helium, Lithium, etc.? How do we synthesize enough of those? Well, maybe we would have much more time to synthesize them, inside stars… Especially supergiant stars.

Another word about these Quantum Fluctuations. Are they the fundamental lesson of Quantum Physics (as the Multiversists implicitly claim)? No.

Why? There are several most fundamental lessons of Quantum Physics. Most prominent: the DYNAMICAL universe is made of waves. That fact, by itself, implies NON-LOCALITY. It also implies that neighborhoods, not points, are the fundamental concepts (one cannot localize a wave at a point). This is the origin of the “Quantum Fluctuations”.

So we just saw that “Quantum Fluctuations” may not be the most fundamental concept. Fundamental, yes, but not most fundamental. When debating fundamentals with the Devil, you better bring exquisite logic, and a Non-Local spoon, otherwise you will be Quantum fluctuated out.

Patrice Ayme’

Black Hole Paradox

September 1, 2015

Photons are the carriers of the electromagnetic field. Each single photon is endowed with a given energy, hf, where f is the frequency of said photon. In some circumstances, the energy a photon possesses is less than the one it needs to get out of a gravitational well. So it cannot get out: a black hole forms.

Essentially, this comes from the fact a photon’s energy is finite, whereas the energy of a gravitational field can grow infinitely… Or so I, and others, used to think, until I became skeptical.
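The "cannot get out" statement can be made quantitative with the standard gravitational-redshift factor from general relativity. This is a sketch of my own using textbook formulas; the ten-solar-mass object and the photon frequency are illustrative choices, not figures from the text:

```python
import math

# Physical constants (SI) and the solar mass.
G, c, h = 6.674e-11, 2.998e8, 6.626e-34
M_sun = 1.989e30

def schwarzschild_radius(M):
    """Horizon radius r_s = 2GM/c^2 of a mass M."""
    return 2 * G * M / c**2

def redshifted_energy(f_emit, M, r):
    """Energy hf of a photon emitted at radius r > r_s, as received
    by a distant observer (standard Schwarzschild redshift factor)."""
    rs = schwarzschild_radius(M)
    return h * f_emit * math.sqrt(1 - rs / r)

M = 10 * M_sun                      # illustrative ten-solar-mass object
rs = schwarzschild_radius(M)        # about 30 km
for x in (10, 2, 1.1, 1.001):
    E = redshifted_energy(5e14, M, x * rs)   # visible-light photon
    print(f"emitted at r = {x:>6} r_s : received energy {E:.3e} J")
```

The received energy hf falls smoothly to zero as the emission radius approaches the Schwarzschild radius: the photon's finite energy budget is exhausted climbing out of the well, which is the horizon in energy terms.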

No Doubt There Are Black Holes. Question: How Come?


Simulated view of a black hole in front of the Large Magellanic Cloud, one of many small galaxies satellite to the giant Milky Way. This Black Hole is assumed to be alone, without accretion disk (accretion would make the Black Hole very luminous!). The ratio between the black hole Schwarzschild radius and the observer distance to it is 1:9. Of note is the gravitational lensing effect known as an Einstein ring, which produces a set of two fairly bright and large but highly distorted images of the Cloud as compared to its actual angular size.

The two arcs of circle top and bottom are actually the Large Magellanic Cloud, appearing in two places, as light goes above and below the Black Hole. The Milky Way appears above, strongly distorted by gravitational lensing. [2006 image by French physicist Alain R.]

Gravitons are the (alleged) carriers of the gravitational field. Each of them has some energy. At some point the energy gravitons individually possess ought to be less than the potential energy needed to get out of a gravitational well. (The reasoning is the same as for photons.)

But then what?

In the case of photons, what is blocked is the electromagnetic field: light, in another word.

What is blocked when gravitons get blocked? The gravitational field itself! Thus a black hole would then not just show up as a black, “frozen star”. A Black Hole should outright violate (apparent) matter conservation. It should disconnect gravitationally.

Following this simple logic, at some point a mass collapsing gravitationally should disappear, not just visually, but gravitationally.

Yet, astronomical observations reveal hyper massive black holes at the center of galaxies. This tends to indicate that physics may happen inside a black hole that we can neither observe, nor predict.

I presented these simple ideas a very long time ago in Stanford, a private university in California, personally or in seminars, to some of the household names in the field. The reaction to my iconoclasm was close to indignant anger. It’s easy to see why. We human beings live lives which are endowed with sense only by forgetting that we make little sense individually, absent others.

A way to make sense is by giving love and care. Another, mostly the obverse, is the will to power. A scientific, or, more generally, an intellectual career (philosopher, poet, writer, etc.) marries both love and power. Science, in particular, unites a potent hierarchy akin to priesthood with the pretense of great magic vis-à-vis the public, and of being a gift to humanity. Or so it is perceived by its participants. Break the spell, and scientists feel like insects instead of semi-gods, and the absurdity of their position, that of thieves in full sight, exposed to the pillory, is too much to bear.

Yet, a quick glance at the history of science shows that great errors and lack of understanding, spectacularly erroneous theories could have been detected easily, with simple observations.

I am not saying that science is always simple. Far from it. For example, the heliocentric theory could be demonstrated with 100% certainty only after a careful study of the phases of Venus, through increasingly powerful telescopes, during the middle of the seventeenth century. Before that, geocentrism failed the smell test (it was too contrived, and the sun was so much bigger). True, the smell test is philosophical in nature. Before that, one could only say that it was un-scientific to rule out the most likely theory (heliocentrism), just because one could not prove it, and because it enraged so many people in high places.

It cannot be any different today: the very idea of the priesthood, scientific or not, is making some humans into quasi-gods. Out of this divine hierarchy comes the certainty that metaphysics has been solved.

Thus, when I suggested that, on the face of it, ultimately, Black Holes ought to disconnect gravitationally from the rest of the universe, I undermined the belief that the greatest scientists (I will not write their names, so as not to enrage them further), covered as they are with great medals, understand much more about gravitation than we did, say, three centuries ago.

I caused these people existential pain: no, you are not the greatest of the great, having achieved greater understanding than anyone did before you, colossally dominating history and humanity, and deservedly so. What you call greatest of the greatness, seems, after all, to be just errors of the smugly ignorant.

Einstein was not that way. He said:

All these fifty years of conscious brooding have brought me no nearer to the answer to the question, ‘What are light quanta?’ Nowadays every Tom, Dick and Harry thinks he knows it, but he is mistaken. (Albert Einstein, 1954)

Most importantly, Albert Einstein also suspected that Matter could not be described by field theory:

I consider it quite possible that physics cannot be based on the field concept, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air, gravitation theory included, [and of] the rest of modern physics. (Albert Einstein, 1954)

In my theory, elementary particles are not only non-local (Einstein’s error was to suppose that they were local), but they break (giving rise to Dark Matter). But I will not go as far as to say that “nothing remains”. Far from it, my dear Uncle Albert. Quantum Field Theory remains, as an approximation. Just as the epicycle theory remains, as a sort of Fourier Analysis of a periodic motion.

Some physicists will laugh at the simplicity of the preceding reasoning, and just exasperatedly utter: “that’s ridiculous” as some of the most prestigious specialists of the field did to me, decades ago. Maybe it is. Just tell me why. I am humbly waiting.

Patrice Ayme’