Posts Tagged ‘Foundations Physics’

DARK MATTER-ENERGY, Or How Inquiry Proceeds

September 7, 2016

How does one find really new knowledge? How does one find really new science? Not by knowing the result: that is what we don’t have yet. Any really new science will not be deduced from pre-existing science. Any really new knowledge will come out of the blue. Poetical and/or emotional logic will help before linear logic does.

A top lawyer, admitted to the bar of the US Supreme Court and of several countries, told me that the best judges know, emotionally, where they want to go, and then build a logical case for it.

The case of Dark Matter is telling: this increasingly irritating elephant in the bathroom has been in evidence for 80 years, lumbering about, smashing the most basic concepts of physics. As the encumbering beast did not fit existing science, it was long religiously ignored by the faithful of the church of standard physics, as a subject not worthy of deep inquiry by very serious physicists. Now Dark Matter, five times more massive than Standard Model matter, is clearly sitting heavily outside of the Standard Model, threatening to crush it into irrelevance. Dark Matter obscures the lofty pretense of known physics to explain everything (remember the grandly named TOE, the so-called “Theory Of Everything”? That TOE was a fraud, snake oil: mainstream physics celebrities crowed about it while knowing perfectly well that Dark Matter dwarfed standard matter, and sat completely outside the Standard Model).

Physicists are presently looking for Dark Matter knowing what they know, namely that nature has offered them a vast zoo of particles, many of them without rhyme or reason, or symmetries to “explain” them (indeed, some have rhyme, a symmetry, a mathematical group such as SU(3) acting upon them; symmetries have sometimes revealed new particles).

Bullet Cluster, 100 Million Years Old. Two Galaxies Colliding. The Dark Matter, In Blue, Is Physically Separated From the Hot, Standard Matter Gas, in Red.

This sort of picture is most of what we presently have to guess what Dark Matter could be. The physical separation of Dark Matter and Standard Matter is most telling to me: it seems to indicate that the two do not respond to the same forces, something my Quantum theory predicts. It is known that Dark Matter causes gravitational lensing, as one would expect, since it was first found by its gravitational effects, in the 1930s…

However, remember: a truly, completely new (piece of) science cannot be deduced from a pre-existing paradigm. Thus, if Dark Matter really came down to finding a new particle type, that would be interesting, but not as interesting as discovering that it is not a new particle type at all, but instead a consequence of a completely new law of physics.

This is the quandary of finding truly, completely new science. It can never be deduced from ruling paradigms, and may actually overthrow them. What, then, should be the method? Can Descartes and Sherlock Holmes help? The paradigm presented by Quantum Physics itself helps. The Quantum looks everywhere in space to find solutions: this is where its (“weird”) nonlocality comes in. Nonlocality is crucial for interference patterns and for finding lowest-energy solutions, as in the chlorophyll molecule. This suggests that our minds should go nonlocal too, and that we should look outside of a more extensive particle zoo to find what Dark Matter is.

In general, searching for new science should be by looking everywhere, not hesitating to possibly contradict what is more traditional than well established.

An obvious possibility to explain Dark Matter is, precisely, that Quantum Physics is itself incomplete, and generates Dark Matter, and Dark Energy, in the places where said incompleteness (of the present Quantum theory) would be most blatant: across large cosmic distances.

More precisely, Quantum processes, stretched over cosmic distances, instead of being perfectly efficient and nonlocal over gigantically cosmic locales, could leave a Quantum mass-energy residue, precisely in the places where extravagant cosmic stretching of Quanta occurs (before “collapse”, aka “decoherence”). (I call this theory of mine SQPR, Sub Quantum Patrice Reality.)

This would happen if what one should call the “Quantum Interaction” proceeds at a finite speed (much faster than c, by a factor of at least 10^23…). It’s enough.
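For a sense of scale, here is a minimal back-of-the-envelope sketch. The factor 10^23 is this essay's own estimate, and the cosmic distance used is a round number, not a measurement:

```python
# Rough scales for a "Quantum Interaction" propagating at 10^23 times c,
# the essay's own lower-bound estimate (all figures are round numbers).
c = 3.0e8                       # speed of light, m/s
v_qi = 1e23 * c                 # hypothesized quantum-interaction speed, m/s
d_universe = 8.8e26             # approximate diameter of the observable universe, m

t_sweep = d_universe / v_qi     # time for the interaction to cross that distance
print(f"{t_sweep:.1e} s")       # about 3e-5 s: tens of microseconds
```

In other words, at such a speed a nonlocal "collapse" could sweep the entire observable universe in a few tens of microseconds, which is why the finite-speed hypothesis remains empirically hard to distinguish from instantaneity.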

The longer one fails to find a conventional explanation (namely a new type of particle) for Dark Matter, the more likely my style of explanation becomes. How could one demonstrate it? Not by looking for new particles, but by conducting new and more refined experiments in the foundations of Quantum Physics.

If this guess is correct, whatever is found askew in the axioms of present Quantum Physics could actually help future Quantum Computer technology (because the latter works with Quantum foundations directly, whereas conventional high-energy physics tends to eschew the wave aspects, due to the high frequencies involved).

Going on a tangent is what happens when the central, attractive force, is let go. A direct effect of freedom. Free thinking is tangential. We have to learn to produce tangential thinking.

René Descartes tried to doubt the truth of all his beliefs to determine which of them he could be certain were true. However, at the end of the “Meditations”, he hastily concluded that we can distinguish between dream and reality. It is not that simple. The logic found in dreams is all too similar to the logic used by full-grown individuals in society.

Proof? Back to Quantum Physics. On the face of it, the axioms of Quantum Physics have a dream like quality (there is no “here”, nor “there”, “now” is everywhere, and, mysteriously, the experiment is Quantum, whereas the “apparatus” is “classical”). Still, most physicists, after insinuating they have figured out the universe, eschew the subject carefully.  The specialists of Foundations are thoroughly confused: see Sean Carroll, http://www.preposterousuniverse.com/blog/2013/01/17/the-most-embarrassing-graph-in-modern-physics/

However unbelievable Quantum Physics is, however dream-like, physicists believe in it, and don’t question it any more than cardinals would Jesus. Actually, it is this dream-like nature which, shared by all, defines the community of physicists. Cartesian doubt, pushed further than Descartes pushed it, will question not just the facts and the allegations, but the logic itself. And even the mood behind it.

Certainly, in the case of Dark Matter, some of the questions civilization has to ask should be:

  1. How sure are we of the Foundations of Quantum Physics? Answer: very sure, all too sure!
  2. Could it not be that Dark Matter is a cosmic-size experiment in the Foundations of Quantum Physics?

Physics, properly done, does not just question the nature of nature. Physics, properly done, questions the nature of how we find out the nature of anything. Physics, properly done, even questions the nature of why we feel the way we do. And the way we did. About anything, even poetry. In the end, indeed, even the toughest logic is a form of poetry, hanging out there, justified by its own beauty, and nothing else. Don’t underestimate moods: they call what beauty is.

Patrice Ayme’

Happy In the Sky With New Logics: Einstein’s Error II

August 6, 2016

Einstein assumed reality was localized and definite in one of his famous 1905 papers, and physics never recovered from that ridiculous, out-of-the-blue, wanton, gratuitous error. (The present essay complements the preceding one found in the link). 

At the origin of Quantum Mechanics is Max Planck’s train of thought. Max demonstrated that supposing electromagnetic energy was EMITTED as packets of energy hf explained the two obvious outstanding problems of physics; h is a constant (since then named after Planck), and f is the frequency of the light.

Then came, five years later, Einstein. He explained the photoelectric effect’s mysterious features by reciprocating Planck’s picture: light’s energy was RECEIVED as packets of energy hf. Fine.   

However, in so doing, Einstein claimed that light, LIGHT IN TRANSIT, was made of “LICHT QUANTEN” (quanta of light), which he described as localized. He had absolutely no proof of that. Centuries of observation stood against it. And the photoelectric effect did not necessitate this grainy feature in flight, so it did not justify it.

Thus Einstein introduced the assumption that the ultimate description of nature was that of grains of mass-energy. That was, in a way, nothing new, but the old hypothesis of the Ancient Greeks, the atomic theory. So one could call this the Greco-Einstein hypothesis. The following experiment, conducted in 1921, demonstrated Einstein was wrong. Thus its perpetrator, Walther Gerlach, did not get the Nobel, and the Nobel Committee never mentioned the importance of the experiment. Arguably, Gerlach’s experiment was more important than any work of Einstein, and thus deserved punishment. Otto Stern, a Jewish physicist and an assistant of Einstein, got the Nobel alone in 1944, when Sweden was anxious to make friends with the winning “United Nations”:

Two Points. The Classical Prediction Is A Vertical Smear. It Is Also Einstein’s Prediction. And That Smear Is Incomprehensible In Einstein’s View Of The World.

Yet, Einstein’s advocacy of nature as made of grains was obviously wrong: since the seventeenth century, it had been known that wave effects ruled light (diffraction, refraction, Newton’s rings). So true was this that Huygens proposed light was made of waves. Around 1800 CE, Young and Fresnel produced proofs of the wave nature of light (the two-slit experiment and Poisson’s dot). The final proof of the wave theory was Maxwell’s completion and synthesis of electromagnetism, which showed light was an electromagnetic wave (always travelling at the same speed, c).

Einstein’s hypothesis of light as made of grains is fundamentally incompatible with the wave theory. The wave theory was invented precisely to explain DELOCALIZATION. A grain’s definition is the exact opposite.

There is worse.

Spin was discovered as an experimental fact in the 1920s. Interestingly, it had been discovered mathematically, by the French Alpine mathematician Élie Cartan, before World War One (in his theory of spinors), and was later stumbled upon again through Dirac’s invention of his eponymous equation.

The simplest case is the spin of an electron. What is it? When an electron is put in a magnetic field M, it deviates either along the direction of M (call it M!) or the opposite direction (-M). This sounds innocuous enough, until one realizes that it is the OBSERVER who selects the direction “M” of M. Also there are two angles of deviation only. (The Gerlach experiment was realized with silver (Ag) atoms, but the deviation was caused by a single electron therein.)
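A minimal numerical sketch of this two-outcome behavior. The Born rule P(+M) = cos²(θ/2) is the standard quantum prediction for a spin prepared "up" along z and measured along an axis M tilted by angle θ; the angle and sample size below are illustrative:

```python
import math
import random

def stern_gerlach(theta, n=100_000, seed=0):
    """Simulate measuring a spin-1/2 prepared 'up' along z, with the
    apparatus field M tilted by angle theta. Standard quantum rule:
    P(+M) = cos^2(theta/2), and only two outcomes ever occur."""
    rng = random.Random(seed)
    p_up = math.cos(theta / 2) ** 2
    return [+1 if rng.random() < p_up else -1 for _ in range(n)]

out = stern_gerlach(math.pi / 3)             # observer picks M at 60 degrees
frac_up = sum(o == +1 for o in out) / len(out)
print(sorted(set(out)))                      # [-1, 1]: two deflections, no smear
print(round(frac_up, 2))                     # close to cos^2(30 deg) = 0.75
```

Whatever direction the observer picks for M, the simulation (like the experiment) yields exactly two deflections, with frequencies set by the angle alone.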

Einstein would have us believe that the electron is a grain. Call it G. Then G would have its own spin. A rotating charged particle G generates a magnetic field. Call it m. If Einstein were correct, as the direction of M varies, the interaction between M and the grain’s magnetic field m should vary. But that is not the case: it is as if m did not count. At all. Whatsoever. It’s all about M, the direction of M.

So Einstein was wrong: there is no grain G with an independent existence, an independent magnetic field m.

Bohr was right: Einstein was, obviously, wrong. That does not mean that Bohr and his followers, who proclaimed the “Copenhagen Interpretation” were right on other issues. Just like Einstein hypothesized something he did not need, so did the Copenhagists.

Backtrack above: M is determined by the observer, I said (so bleated the Copenhagen herd). However, although M can be changed by an observer, clearly an observer is NOT necessary to create a magnetic field M and its direction.

Overlooking that blatant fact, that not all magnetic fields are created by observers, is the source of Copenhagen confusion.

We saw above that correct philosophical analysis is crucial to physics. Computations are also crucial, but less so: a correct computation giving correct results can be made from false hypotheses (the paradigm here is epicycle theory: false axiomatics, the Sun did not turn around the Earth, yet, roughly correct computations produced what was observed).

Out of Quantum Theory came Quantum ElectroDynamics (QED), and, from there, Quantum Field Theory (QFT).  

QED is one of the most precise scientific theories ever. However, there are far more precise measurements: the mass of the photon is determined to be no more than 10^(-60) kilograms (by checking whether the electromagnetic field of Jupiter falls off as 1/d^2…).
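To put that bound in perspective, a one-line computation. The electron mass is the standard value; the 10^-60 kg figure is the one quoted above:

```python
import math

m_photon_bound = 1e-60      # the photon-mass upper bound quoted above, kg
m_electron = 9.109e-31      # electron rest mass, kg (standard value)

# How many orders of magnitude below the electron mass is that bound?
orders = math.log10(m_electron / m_photon_bound)
print(round(orders))        # about 30 orders of magnitude
```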

Nevertheless, QED is also clearly the most erroneous physical theory ever (off by some 60 orders of magnitude). Indeed, it predicts, or rather uses, the obviously false hypothesis that there is a finite energy at each point of space. Ironically enough, it was Einstein and Stern (see above) who introduced the notion of “zero point energy” (so, when Einstein later could not understand, or refused to understand, Quantum Electrodynamics, it was not because the weirdest concepts therein were not of his own making…).

The debate on the Foundations of Quantum Physics is strong among experts, all over the map, and permeated with philosophy. So don’t listen to those who scoff at the idea that philosophy is the master of science: it always has been, it frantically is, and it always will be. It is a question of method: the philosophical method uses anything to construct a logic. The scientific method can be used only when one knows roughly what one is talking about. Otherwise, as in Zeroth Century, or Twentieth Century, physics, one can go on imaginary wild goose chases.

From my point of view, Dark Matter itself is a consequence of the True Quantum Physics. This means that experiments could be devised to test it. The belief that some scientific theory is likely incites beholders to make experiments to test it. Absent the belief, there would be no will, hence no financing. Testing for gravitational waves was long viewed as a wild goose chase. However, the Federal government of the USA invested more than one billion dollars in the experimental field of gravitational wave detection, half a century after an early pioneer (who was made fun of). It worked, in the end, splendidly: several Black Hole (-like) events were detected, and their nature was unexpected, bringing new fundamental questions.

Some will say that all this thinking, at the edges of physics and philosophy, is irrelevant to their lives, now. Maybe they cannot understand the following. Society can either put its resources into making the rich richer, more powerful and domineering, or society can pursue higher aims, such as understanding more complex issues. If nothing else, the latter pursuit brings new technology which nothing else will bring (the World Wide Web was developed by CERN physicists).

Moreover, such results change the nature not just of what we believe reality to be, but also of the logic we have developed to analyze it. Even if interest in all the rest faded away, the newly found diamonds of more sophisticated, revolutionary logics would not fade away.

Patrice Ayme’

 

TO BE AND NOT TO BE? Is Entangled Physics Thinking, Or Sinking?

April 29, 2016

Frank Wilczek, a physics Nobel laureate, wrote a soporific, then baffling, article in Quanta Magazine: “Entanglement Made Simple”. Yes, all too simple: it sweeps the difficulties under the rug. After a thorough description of classical entanglement, we are swiftly told at the end that classical entanglement supports the Many Worlds Interpretation of Quantum Mechanics. However, classical entanglement (from various conservation laws) has been known since the seventeenth century.

Skeptical founders of Quantum Physics (such as Einstein, De Broglie, Schrödinger, Bohm, Bell) knew classical entanglement very well. David Bohm found the Aharonov-Bohm effect, which demonstrated the importance of (nonlocal) potentials; John Bell found his inequality, which demonstrated, with the help of experiments (Alain Aspect, etc.), that Quantum Physics is nonlocal.

Differently From Classical Entanglement, Which Acts As One, Quantum Entanglement Acts At A Distance: It Interferes With Measurement, At A Distance

The point about the cats is that everybody, even maniacs, ought to know that cats are either dead or alive. Quantum mechanics make the point that they can compute things about cats, from their point of view. OK.

Quantum mechanics, in their busy shops, compute with dead and live cats as possible outcomes. No problem. But then does that mean there is a universe, a “world“, with a dead cat, happening, and then one with a live cat, also happening simultaneously?

Any serious philosopher, somebody endowed with common sense, the nemesis of a Quantum mechanic, will say no: in a philosopher’s opinion, a cat is either dead or alive. To be, or not to be. Not: to be and not to be.

A Quantum mechanic can compute with dead and live cats, but that does not mean she creates worlds, by simply rearranging her computation, this way, or that. Her various dead and live cats arrangements just mean she has partial knowledge of what she computes with, and that Quantum measurements, even from an excellent mechanic, are just partial, mechanic-dependent measurements.

For example, if one measures spin, one needs to orient a machine (a Stern Gerlach device). That’s just a magnetic field going one way, like a big arrow, a big direction. Thus one measures spin in one direction, not another.

What’s more surprising is that, later on, thanks to a nonlocal entanglement, one may be able to determine that, at this point in time, the particle had a spin that could be measured, from far away, in another direction. So far, so good: this is like classical mechanics.

However, whether or not that measurement at a distance has occurred, roughly simultaneously and way outside the causality light cone, AFFECTS the first measurement.

This is what the famous Bell Inequality means.
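The content of the inequality can be checked numerically. In this sketch, E(a, b) = -cos(a - b) is the standard quantum prediction for singlet-state spin correlations; any local hidden-variable theory keeps the CHSH combination S within [-2, 2], while the quantum value reaches 2√2:

```python
import math

def E(a, b):
    """Quantum correlation of spin measurements along angles a and b
    on a singlet pair (standard textbook result)."""
    return -math.cos(a - b)

# Angles that maximize the CHSH combination.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(round(abs(S), 3))   # 2.828 = 2*sqrt(2), exceeding the local bound of 2
```

Aspect-style experiments measure |S| above 2, which is why the nonlocality of Quantum Physics is considered experimentally established.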

And this is what the problem with Quantum Entanglement is. Quantum Entanglement implies that wilful action somewhere disturbs a measurement beyond the reach of the five known forces. It brings all sorts of questions of a philosophical nature, and makes them into burning physical subjects. For example, does the experimenter at a distance have real free will?

Calling the world otherworldly, or many worldly, does not really help to understand what is going on. Einstein’s “Spooky Interaction At A Distance” seems a more faithful, honest rendition of reality than supposing that each and any Quantum mechanic in her shop, creates worlds, willy-nilly, each time it strikes her fancy to press a button.

What Mr. Wilczek did is what manyworldists and multiversists always do: they jump into their derangement (cats alive AND dead) after saying there is no problem. Details are never revealed.

Here is, in extenso, the fully confusing and unsupported conclusion of Mr. Wilczek:

“Everyday language is ill suited to describe quantum complementarity, in part because everyday experience does not encounter it. Practical cats interact with surrounding air molecules, among other things, in very different ways depending on whether they are alive or dead, so in practice the measurement gets made automatically, and the cat gets on with its life (or death). But entangled histories describe q-ons that are, in a real sense, Schrödinger kittens. Their full description requires, at intermediate times, that we take both of two contradictory property-trajectories into account.

The controlled experimental realization of entangled histories is delicate because it requires we gather partial information about our q-on. Conventional quantum measurements generally gather complete information at one time — for example, they determine a definite shape, or a definite color — rather than partial information spanning several times. But it can be done — indeed, without great technical difficulty. In this way we can give definite mathematical and experimental meaning to the proliferation of “many worlds” in quantum theory, and demonstrate its substantiality.”

Sounds impressive, but the reasons are either well-known, or else they rest on a sleight of hand.

Explicitly: “take both of two contradictory property-trajectories into account”: just read Feynman’s QED, first chapter. Feynman invented the ‘sum over histories’, and Wilczek is his parrot; but Feynman did not go crazy from his ‘sum over histories’: Richard smirked when his picturesque evocation was taken literally, decades later…

And now the sleight of hand: …“rather than [gather] partial information spanning several times. But it can be done — indeed, without great technical difficulty.” This is nothing new: it is the essence of the double slit discovered by that Medical Doctor and polymath, Young, around 1800 CE: when one runs lots of ‘particles’ through it, one sees the (wave) patterns. This is what Wilczek means by “partial information”. Guess what? We knew that already.
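Young's point, that single detections accumulate into a wave pattern, can be sketched in a few lines. The fringe scale k and the screen coordinates are illustrative, not physical values:

```python
import math
import random

def sample_fringe(n, k=10.0, seed=1):
    """Draw n single-'particle' detection positions on a screen whose
    intensity follows a two-slit fringe pattern I(x) ~ cos^2(k*x),
    using rejection sampling."""
    rng = random.Random(seed)
    hits = []
    while len(hits) < n:
        x = rng.uniform(-1.0, 1.0)
        if rng.random() < math.cos(k * x) ** 2:   # accept with prob ~ I(x)
            hits.append(x)
    return hits

hits = sample_fringe(50_000)
# Count hits near the central bright fringe (x = 0) versus near the
# first dark fringe (cos(k*x) = 0, i.e. x = pi/(2k)).
bright = sum(abs(x) < 0.05 for x in hits)
dark = sum(abs(x - math.pi / 20) < 0.05 for x in hits)
print(bright > 3 * dark)   # True: the wave pattern emerges from single hits
```

Each detection is a single localized event, yet the histogram of many events reproduces the wave interference pattern: exactly the "partial information" at issue.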

Believing that one can be, while not to be, putting that at the foundation of physics, is a new low in thinking. And it impacts the general mood, making it more favorable towards unreason.

If anything can be, without being, if anything not happening here is happening somewhere else, then is not anything permitted? Dostoyevsky had a Russian aristocrat suggest that, if god did not exist, anything was permitted. And, come to think of it, the argument was at the core of Christianism. Or, more exactly, of the Christian reign of terror which started in the period 363 CE-381 CE, from the reign of emperor Jovian to the reign of emperor Theodosius. To prevent everything from being permitted, a god had to enforce the law.

What we have now is way worse: the new nihilists (Wilczek and his fellow manyworldists) do not just say that everything is permitted. They say: it does not matter if everything is permitted, or not. It is happening, anyway. Somewhere.

Thus Many-Worlds physics endangers not just the foundations of reason, but the very justification for morality: namely, that what is undesirable should be avoided. Even the Nazis agreed with that principle. Many-Worlds physics says it does not matter, because it is happening anyway. Somewhere, out there.

So what is going on, here, at the level of moods? Well, professor Wilczek teaches at MIT, next door to Harvard. Harvard professors advised president Yeltsin of Russia to set up a plutocracy. It ruined Russia. The same professors made a fortune from it, while others were advising president Clinton to do the same; meanwhile, Prime Minister Balladur in France was mightily impressed and followed this new enlightenment by the Dark Side, as did British leaders, and many others. All these societies were ruined in turn. Harvard was the principal spirit behind the rise of plutocracy, and the engine propelling that rise was the principle that morality did not matter. Because, because, well, Many-Worlds!

How does one go from the foundations of physics to the foundations of plutocracy? Faculty members of the richest, most powerful universities meet in mutual admiration societies known as “faculty clubs”, and in lots of other I-scratch-your-back, you-scratch-my-back social occasions in which they spend much of their time indulging. So they influence each other, at the very least through the atmospheres of moods they create, and then breathe together.

Remember? It is not that everything is permitted: it’s happening anyway, so we may as well profit from it first. Many-Worlds physics feeds a mood favorable to many plutocrats, and that’s all there is to it. (But that, of course, is a lot, all too much.)

Patrice Ayme’

The Quantum Puzzle

April 26, 2016

CAN PHYSICS COMPUTE?

Is Quantum Computing Beyond Physics?

More exactly, do we know, can we know, enough physics for (full) quantum computing?

I have long suggested that the answer to this question was negative, and smirked at physicists sitting billions of universes on a pinhead, as if they had nothing better to do, the children they are. (Just as their Christian predecessors in the Middle Ages, their motives are not pure.)

Now an article in the Notices of the American Mathematical Society of May 2016 repeats some of the arguments I had in mind: “The Quantum Computer Puzzle”. Here are some of the arguments. One often hears that Quantum Computers are a done deal. Here is the explanation from Justin Trudeau, Canada’s Prime Minister, which reflects perfectly the official scientific conventional wisdom on the subject: https://youtu.be/rRmv4uD2RQ4

(One wishes all our great leaders would be as knowledgeable… And I am not joking as I write this! Trudeau did engineering and ecological studies.)

… Supposing, Of Course, That One Can Isolate And Manipulate Qubits As One Does Normal Bits…

Before some object that physicists are better qualified than mathematicians to talk about the Quantum, let me point towards someone who is perhaps the most qualified experimentalist in the world on the foundations of Quantum Physics. Serge Haroche is a French physicist who got the Nobel Prize for figuring out how to count photons without seeing them. It is the most delicate Quantum Non-Demolition (QND) method I have heard of. It involved making the world’s most perfect mirrors. The punch line? Serge Haroche does not believe Quantum Computers are feasible. However, Haroche does not say how he reached that conclusion. The article in the AMS does make plenty of suggestions to that effect.

Let me hasten to add that some form of Quantum Computing (or Quantum Simulation), called “annealing”, is obviously feasible. D-Wave, a Canadian company, is selling such devices. In my view, Quantum Annealing is just the two-slit experiment writ large. Thus the counter-argument can be made that conventional computers can simulate annealing (and that has been the argument against D-Wave’s machines).
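That counter-argument is easy to illustrate: plain classical simulated annealing, a few lines on a conventional computer, already minimizes a rugged cost landscape. The landscape below is an illustrative toy, not D-Wave's actual problem class:

```python
import math
import random

def simulated_annealing(cost, x0, steps=20_000, t0=2.0, seed=0):
    """Plain classical simulated annealing: always accept downhill moves,
    accept uphill moves with probability exp(-delta/T) as T cools."""
    rng = random.Random(seed)
    x = best = x0
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9          # linear cooling schedule
        x_new = x + rng.gauss(0, 0.2)            # local random move
        delta = cost(x_new) - cost(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = x_new
        if cost(x) < cost(best):
            best = x
    return best

# A rugged 1-D landscape (illustrative): global minimum at x = 0,
# surrounded by many local minima from the oscillatory term.
rugged = lambda x: x * x + 2 * math.sin(5 * x) ** 2
x_min = simulated_annealing(rugged, x0=4.0)
print(round(rugged(x_min), 3))   # a low cost, found despite the local minima
```

Nothing quantum is involved, which is precisely why classical simulability has been the standard objection to annealing-based claims of quantum speedup.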

Full Quantum Computing (also called “Quantum Supremacy”) would be something completely different. Gil Kalai, a famous mathematician and a specialist of Quantum Computing, is skeptical:

“Quantum computers are hypothetical devices, based on quantum physics, which would enable us to perform certain computations hundreds of orders of magnitude faster than digital computers. This feature is coined “quantum supremacy”, and one aspect or another of such quantum computational supremacy might be seen by experiments in the near future: by implementing quantum error-correction or by systems of noninteracting bosons or by exotic new phases of matter called anyons or by quantum annealing, or in various other ways…

A main reason for concern regarding the feasibility of quantum computers is that quantum systems are inherently noisy. We will describe an optimistic hypothesis regarding quantum noise that will allow quantum computing and a pessimistic hypothesis that won’t.”

Gil Kalai rolls out a couple of theorems which suggest that Quantum Computing is very sensitive to noise (the effects of noise being similar to finding out which slit a photon went through). Moreover, he uses a philosophical argument against Quantum Computing:

“It is often claimed that quantum computers can perform certain computations that even a classical computer of the size of the entire universe cannot perform! Indeed it is useful to examine not only things that were previously impossible and that are now made possible by a new technology but also the improvement in terms of orders of magnitude for tasks that could have been achieved by the old technology.

Quantum computers represent enormous, unprecedented order-of-magnitude improvement of controlled physical phenomena as well as of algorithms. Nuclear weapons represent an improvement of 6–7 orders of magnitude over conventional ordnance: the first atomic bomb was a million times stronger than the most powerful (single) conventional bomb at the time. The telegraph could deliver a transatlantic message in a few seconds compared to the previous three-month period. This represents an (immense) improvement of 4–5 orders of magnitude. Memory and speed of computers were improved by 10–12 orders of magnitude over several decades. Breakthrough algorithms at the time of their discovery also represented practical improvements of no more than a few orders of magnitude. Yet implementing Boson Sampling with a hundred bosons represents more than a hundred orders of magnitude improvement compared to digital computers.”

In other words, it is unrealistic to expect such a, well, quantum jump…

“Boson Sampling” is a hypothetical, and the simplest, way proposed to implement a Quantum Computer. (It is known neither whether it could be built nor whether it would be good enough for Quantum Computing; yet it is intensely studied nevertheless.)

***

Quantum Physics Is The Non-Local Engine Of Space, and Time Itself:

Here is Gil Kalai again:

“Locality, Space and Time

The decision between the optimistic and pessimistic hypotheses is, to a large extent, a question about modeling locality in quantum physics. Modeling natural quantum evolutions by quantum computers represents the important physical principle of “locality”: quantum interactions are limited to a few particles. The quantum circuit model enforces local rules on quantum evolutions and still allows the creation of very nonlocal quantum states.

This remains true for noisy quantum circuits under the optimistic hypothesis. The pessimistic hypothesis suggests that quantum supremacy is an artifact of incorrect modeling of locality. We expect modeling based on the pessimistic hypothesis, which relates the laws of the “noise” to the laws of the “signal”, to imply a strong form of locality for both. We can even propose that spacetime itself emerges from the absence of quantum fault tolerance. It is a familiar idea that since (noiseless) quantum systems are time reversible, time emerges from quantum noise (decoherence). However, also in the presence of noise, with quantum fault tolerance, every quantum evolution that can experimentally be created can be time-reversed, and, in fact, we can time-permute the sequence of unitary operators describing the evolution in an arbitrary way. It is therefore both quantum noise and the absence of quantum fault tolerance that enable an arrow of time.”

Just for future reference, let’s “note that with quantum computers one can emulate a quantum evolution on an arbitrary geometry. For example, a complicated quantum evolution representing the dynamics of a four-dimensional lattice model could be emulated on a one-dimensional chain of qubits.

This would be vastly different from today’s experimental quantum physics, and it is also in tension with insights from physics, where witnessing different geometries supporting the same physics is rare and important. Since a universal quantum computer allows the breaking of the connection between physics and geometry, it is noise and the absence of quantum fault tolerance that distinguish physical processes based on different geometries and enable geometry to emerge from the physics.”

***

I have proposed a theory which explains the preceding features, including the emergence of space. Let’s call it Sub Quantum Physics (SQP). The theory breaks a lot of sacred cows. Besides, it brings an obvious explanation for Dark Matter. If I am correct, the Dark Matter Puzzle is directly tied in with the Quantum Puzzle.

In any case, it is a delight to see in print part of what I have been severely criticized for saying for all too many decades… The gist of it all is that present-day physics is drastically incomplete.

Patrice Ayme’

QUANTUM FLUCTUATIONS & ARROW OF TIME

January 18, 2016

What is time? Quantum Physics gives an answer, classical physics does not. Quantum Physics suggests that time is the set of all irreversible processes. This is a world first, so it requires some explanations. I have been thinking, hard, of these things all my life. Sean Carroll, bless his soul, called my attention to the new development that mainstream physicists are starting to pay attention to my little kingdom (so I thank him).

***

SCIENCE IS WHAT WE DO:

Sean Carroll in “Quantum Fluctuations”:

“Let’s conjure some science up in here. Science is good for the soul.”

Patrice Ayme’: Why is science good for the soul? Because the human soul is centered on finding truth. Science is truth, thus science is human. Nothing is more human than science. Science is what humans do. Another thing humans do is art, and it tries to duplicate, distort, and invent new nature, or interpretations, interpolations, and suggestions of, and from, nature:

Claim: Quantum Interference Is An Irreversible Process, Time’s Arrows All Over. Quantum Interference Goes From Several Waves, To One Geometry. Soap Bubbles Brim With Quantum Interference.


SC: …what are “quantum fluctuations,” anyway? Talk about quantum fluctuations can be vague. There are really 3 different types of fluctuations: Boltzmann, Vacuum, & Measurement. Boltzmann Fluctuations are basically classical: random motions of things lead to unlikely events, even in equilibrium.

Patrice Ayme’: As we will see, or we have already seen in my own “Quantum Wave”, Quantum Fluctuations are just the Quantum Waves. Richard Feynman, at the end of his chapter on entropy in the Feynman Lectures on Physics, ponders how to get an arrow of time in a universe governed by time-symmetric underlying laws. Feynman:

“So far as we know, all the fundamental laws of physics, such as Newton’s equations, are reversible. Then where does irreversibility come from? It comes from order going to disorder, but we do not understand this until we know the origin of the order. Why is it that the situations we find ourselves in every day are always out of equilibrium?”

Patrice Ayme’: Is that really true? Are equations time-symmetric? Not really. First, equations don’t stand alone. Differential equations depend upon initial conditions. Obviously, even if the equations are time-symmetric, the initial conditions are not: the final state cannot be exchanged with the initial state.

Quantum Physics makes this observation even more important. The generic Quantum set-up depends upon a geometric space S in which the equation(s) of motion will evolve. Take, for example, the 2-slit experiment: the space one generally considers, S, is the space AFTER the 2-slit. The space before the 2-slit, C (for coherence), is generally ignored. S is ordered by Quantum interference.

The full situation is made of (C, S, and Quantum interference). It is not symmetric. The Quantum depends upon the space (it could be a so-called “phase space”) in which it deploys. That makes it time-asymmetric. An example: the Casimir Effect.
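The “order” that interference imposes on S can be seen in the textbook two-slit pattern: amplitudes from the two slits add, and their squared sum fixes the fringe geometry on the screen once and for all. A small sketch under idealized assumptions (slit separation and wavelength are arbitrary illustrative values):

```python
import math

def two_slit_intensity(angle, slit_sep, wavelength):
    """Idealized two-slit pattern: I = cos^2(pi * d * sin(theta) / lambda)."""
    phase = math.pi * slit_sep * math.sin(angle) / wavelength
    return math.cos(phase) ** 2

d, lam = 1.0e-6, 5.0e-7  # micron-spaced slits, green light (illustrative)
assert two_slit_intensity(0.0, d, lam) == 1.0  # central bright fringe
theta_dark = math.asin(lam / (2 * d))          # first dark fringe direction
assert two_slit_intensity(theta_dark, d, lam) < 1e-12
```

The fringe positions depend only on the geometry after the slits, which is the sense in which S is “ordered” by the interference.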

***

QUANTUM PHYSICS IS ABOUT WAVES:

Sean Carroll: “Nothing actually “fluctuates” in vacuum fluctuations! The system can be perfectly static. Just that quantum states are more spread out.”

Indeed. Quantum states are, intrinsically, more spread out. They are NON-LOCAL. Why?

One has to go back to the basics. What is Quantum Physics about? Some, mostly the “Copenhagen Interpretation” followers, claim Quantum Physics is a subset of functional analysis. (The famous mathematician Von Neumann, one of the creators of Functional Analysis, was the founder of this system of thought; this scion of plutocrats, famously, yet satanically, claimed that De Broglie and Bohmian mechanics were impossible… Von Neumann had made a logical mistake; maybe that had to do with being involved with the satanic part of the American establishment, as, by then, that Hungarian had migrated to the USA and wanted to be called “Johnny”!).

The Quantum-as-functional-analysis school became dominant. It had great successes in the past. It allows one to view Quantum Physics as “Non-Commutative Geometry”. However, contrary to its repute, it is not the most fundamental view. (I have my own approach, which eschews Functional Analysis.)

But let’s backtrack. Where does Quantum-as-functional-analysis come from? A Quantum system is made of a (“configuration”) space S and an equation E (which is a Partial Differential Equation). Out of S and E is created a Hilbert Space with a basis, the “eigenstates”.

In practice, the eigenstates are fundamental waves. They can be clearly seen, with the mind’s eye, in the case of the Casimir Effect with two metallic plates: there is a maximal size for the electromagnetic wavelengths between the plates (as they have to zero out where they touch the metal).
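This mind’s-eye picture can be computed: standing waves pinned to zero at two plates a distance L apart can only have the wavelengths 2L/n, so the longest wave that fits is exactly twice the gap, the “maximal size” just mentioned. A sketch with an arbitrary illustrative gap:

```python
def allowed_wavelengths(plate_gap, n_modes):
    """Standing waves vanishing at both plates: lambda_n = 2 L / n."""
    return [2.0 * plate_gap / n for n in range(1, n_modes + 1)]

gap = 1.0e-6  # plates one micron apart (illustrative)
waves = allowed_wavelengths(gap, 4)
print(waves)                    # 2e-06 is the longest wave that fits
assert max(waves) == 2.0 * gap  # nothing longer than twice the gap survives
```

Outside the plates all wavelengths are allowed; the mismatch between the two mode sets is what generates the Casimir force.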

The notion of wave is more general than the notion of eigenstate (Dirac pushed the notion of wave so far, successfully, that it created a new space, Spinor Space; and Quantum Field Theory has done more of the same, extending the general mood of De Broglie-Dirac to ever fancier Lagrangians, the energy expressions guiding the waves according to De Broglie’s scheme).

Historically, De Broglie suggested in 1923 (in several publications to the French Academy of Science) that to each particle was associated a (relativistic) wave. De Broglie’s reasoning was examined by Einstein, who was impressed (few, aside from Einstein, could understand what De Broglie said; indeed De Broglie’s French thesis jury, which included two Nobel Prize winners, was so baffled by the thesis that it sent it to Einstein, to ask him what he thought. Einstein replied with the greatest compliment he ever made to anyone: “De Broglie has started to lift the great veil,” etc…).

De Broglie’s wave appears on page 111 of his 1924 thesis, which has 118 pages (and contains, among other things, the Schrödinger wave equation and, of course, the uncertainty principle, something obvious in this picture: De Broglie said all particles were guided by waves whose wavelengths depended upon their (relativistic) energy; an uncertainty automatically appears when one tries to localize a particle, that is, a wave, with another particle, that is, another wave!).
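De Broglie’s rule fits in one line: λ = h/p, with p the (relativistic) momentum γmv. A numerical sketch for an electron (the speed chosen is merely illustrative; the constants are the standard CODATA values):

```python
import math

H = 6.62607015e-34      # Planck constant (J s)
M_E = 9.1093837015e-31  # electron mass (kg)
C = 2.99792458e8        # speed of light (m/s)

def de_broglie_wavelength(mass, speed):
    """lambda = h / p, with relativistic momentum p = gamma * m * v."""
    gamma = 1.0 / math.sqrt(1.0 - (speed / C) ** 2)
    return H / (gamma * mass * speed)

lam = de_broglie_wavelength(M_E, 1.0e6)  # an electron at 10^6 m/s
print(lam)  # about 7.3e-10 m: atomic scale, hence electron diffraction
```

The wavelength shrinks as the momentum grows, which is why localizing a particle ever more finely demands ever more energetic probes.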

***

CLASSICAL PHYSICS HAS NO ARROW OF TIME:

Consider an empty space S. If the space S is made available to (classical) Boltzmann particles, S is progressively invaded by (classical) particles occupying ever more states.

Classical physicists (Boltzmann, etc.) postulated the Second Law of Thermodynamics: something called entropy augments during any process. Problem, rather drastic: all classical laws of physics are reversible! So how can reversible physics generate a time-irreversible law? Classical physicists have found no answer. But I did, knight in shining armor, mounted on my powerful Quantum Monster:

***

QUANTUM PROCESSES CREATE IRREVERSIBLE GEOMETRIES:

When the same space S is made available as part of a Quantum System, the situation is strikingly different. As Sean Carroll points out, the situation is immediately static; it provides an order (as Bohm insisted it did). The observation is not new: the De Broglie waves provided an immediate explanation of the stability of electronic waves around atoms (thus supporting Bohr’s “First, or Semi-Classical, Quantum Theory”).

What is the difference between a Quantum System and a classical system? The classical system evolves, from a given order, to one more disordered. The Quantum system does not evolve through increasing disorder. Instead, the space S, once accessed, becomes not so much an initial condition as a global order.

The afore-mentioned Hilbert Space with its eigenstates is that implicit, or implicate (Bohm), order. So the Quantum System is static in an important sense (made of standing Quantum Waves, it sort of vibrates through time).

Thus Quantum Systems have an intrinsic time-asymmetry (at least when dealing with cavities). When there are no cavities, entanglement causes the asymmetry: once an interaction has happened, until observation, there is entanglement. Before the interaction, there was none. Two classical billiard balls are not entangled either before or after they interact, so the interaction by collision is fully time-reversible.

Entanglement is also something waves exhibit once they have interacted, and not before; classical particles are deprived of it.

Once more we see the power of the Quantum mindset for explaining the world in a much more correct, much simpler, and thus much more powerful way. The Quantum even decides what time is.

So far as we know, all the classical fundamental laws of physics, such as Newton’s equations, are reversible. Then where does irreversibility come from? It does NOT come, as was previously suggested, from order going to disorder.

Quite the opposite: irreversibility comes from disorder (several waves) going to order (one wave, ordered by its surrounding geometry). And we do understand the origin of the order: it is the implicit order of Quantum Waves deployed.

You want to know the world? Let me introduce you to the Quantum, a concept of wealth, taste and intelligence.

Last and not least: if I am right, the Quantum brings the spontaneous apparition of order, the exact opposite of the picture which has constituted the manger in which the great cows of physics have found their sustenance. Hence the fact that life, and many other complicated naturally occurring physical systems, are observed to create order in the universe is not so baffling anymore. Yes, they violate the Second Law of Thermodynamics. But fundamentally, it is that Law which violated the spirit, the principle, of the universe: the Quantum itself.

Patrice Ayme’

Is “Spacetime” Important?

November 3, 2015

Revolutions spawn from, and contribute to, the revolutionary mood. It is no coincidence that many revolutionary ideas in science: Chemistry (Lavoisier), Biological Evolution (Lamarck), Lagrangians, Black Holes, Fourier Analysis, Thermodynamics (Carnot), Wave Optics (Young, Poisson), and Ampère’s Electrodynamics, spawned at roughly the same time and place, around the French Revolution.

In the Encyclopédie, under the term dimension, Jean le Rond d’Alembert speculated that time might be considered a fourth dimension… if the idea was not too novel. Joseph-Louis Lagrange, in his Theory of Analytic Functions (1797), wrote that: “One may view mechanics as a geometry of four dimensions…” The idea of spacetime is to view reality as a four-dimensional manifold, something measured by the “Real Line” going in four directions.

There is, it turns out, a huge problem with this: R, the real line, has what is called a separated (Hausdorff) topology: distinct points have disjoint neighborhoods. However, the QUANTUM world is not like that, not at all. Countless experiments, and the most basic logic, show this:

Reality Does Not Care About Speed, & The Relativity It Brings


Manifolds were introduced by Bernhard Riemann in his 1854 habilitation lecture (published only in 1868, two years after he died, still young, of tuberculosis). A manifold is made of chunks (technically: neighborhoods), each of them diffeomorphic to a neighborhood in R^n (thus a deformed piece of R^n; see the tech annex).

Einstein admitted that there was a huge problem with the “now” in physics (even if one confines oneself to his own set-ups in Relativity theories). Worse: the Quantum changes completely the problem of the “now”… Let alone the “here”.

In 1905, Henri Poincaré showed that by taking time to be an imaginary fourth spacetime coordinate (√−1 c t), a Lorentz transformation can be regarded as a rotation of coordinates in a four-dimensional Euclidean space with three real coordinates representing space, and one imaginary coordinate, representing time, as the fourth dimension.
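Written out, Poincaré’s observation is the identity that the substitution x₄ = ict turns the Lorentz-invariant interval into a Euclidean sum of squares, so that a boost becomes an ordinary rotation in the (x, x₄) plane:

```latex
x^2 + y^2 + z^2 - c^2 t^2 \;=\; x^2 + y^2 + z^2 + x_4^2,
\qquad x_4 \equiv \sqrt{-1}\, c\, t .
```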

Hermann Minkowski, Einstein’s professor in Zurich, concluded in 1908: “The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.”

This remark rests on Lorentz’s work: how to go from coordinates (x, t) to (x’, t’). In the simplest case (a boost at speed v along the x axis):

t’ = γ(t − vx/c²)

x’ = γ(x − vt)

with γ = 1/√(1 − v²/c²), where c is the speed of light. Lorentz found one needed such transformations for electrodynamics to keep the same form in all frames. If v/c is taken to be zero (as it is if one supposes the speed v negligible relative to c, that is, the speed of light infinite), then γ = 1 and one gets:

t = t’

x’ = x – vt

The first equation exhibits universal time: time does not depend upon the frame of reference. But notice that the second equation mixes space and time already. Thus, philosophically speaking, proclaiming “spacetime” could have been done before. Now, in so-called “General Relativity”, there are problems with “time-like” geodesics (but they would surface long after Minkowski’s death).
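The Galilean limit just described, and the interval invariance behind Minkowski’s remark, can both be checked numerically. A sketch (the sample event and the speeds are arbitrary):

```python
import math

C = 2.99792458e8  # speed of light (m/s)

def lorentz_boost(x, t, v):
    """Transform event (x, t) into a frame moving at speed v along x."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / C ** 2)

x, t = 1.0e3, 1.0e-4  # an arbitrary event: 1 km away, 0.1 ms in

# Slow frame (10 m/s): Lorentz collapses to Galileo, time stays universal.
xp, tp = lorentz_boost(x, t, 10.0)
assert abs(tp - t) < 1e-12 and abs(xp - (x - 10.0 * t)) < 1e-6

# Fast frame (0.8 c): t' differs from t, but c^2 t^2 - x^2 is preserved.
xp, tp = lorentz_boost(x, t, 0.8 * C)
assert abs(((C * tp) ** 2 - xp ** 2) - ((C * t) ** 2 - x ** 2)) < 1e-3
```

Even at 10 m/s the x’ equation already mixes space and time; only the universality of t’ survives the slow limit, which is the philosophical point above.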

Another problem with conceptually equating time and space is that time is not space: space dimensions come with a plus sign, time with a minus sign (something Quantum Field Theory often ignores by putting pluses everywhere in computations).
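In the usual sign convention (signature −, +, +, +), this difference appears directly in the Minkowski interval:

```latex
ds^2 = -\,c^2\, dt^2 + dx^2 + dy^2 + dz^2 .
```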

In any case, I hope this makes clear that, philosophically, just looking at the equations, “spacetime” does not have to be an important concept.

And Quantum Physics seems to say that it is not: the QUANTUM INTERACTION (QI; my neologism) is (apparently, so far) INSTANTANEOUS (like old-fashioned time).

As we saw previously (“Can Space Be Faster Than Light“), the top cosmologists are arguing whether the speed of space can be viewed as faster than light. Call that the Cosmic Inflation Interaction (CII; it has its own hypothesized exchange particle, the “Inflaton”). We see that c, the speed of light, is less than CII, and may, or may not, be related to QI (standard Quantum Physics implicitly assumes that the speed of the Quantum Interaction QI is infinite).

One thing is sure: we are very far from TOE, the “Theory Of Everything” which physicists, anxious to appear as the world’s smartest organisms, with all the power and wealth to go with it, touted for decades.

Patrice Ayme’

Tech Annex: R is the real line, RxR = R^2 the plane, RxRxR = R^3 the usual three-dimensional space, etc. Spacetime was initially viewed as just RxRxRxR = R^4. What does diffeomorphic mean? It means a copy which can be shrunk or dilated somewhat, in all imaginable ways, but without breaks, and so that all points can be tracked; a diffeomorphism does this, and so do all its derivatives.
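A concrete instance of the definition above: the hyperbolic tangent is a diffeomorphism from the whole real line onto the open interval (−1, 1), since it is smooth and bijective, its derivative never vanishes, and its inverse (artanh) is smooth as well:

```latex
\tanh : \mathbb{R} \to (-1, 1), \qquad
\tanh'(x) = 1 - \tanh^2(x) > 0 .
```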