Archive for the ‘Foundation mathematics’ Category

DOING AWAY WITH INFINITY SOLVES MUCH MATH & PHYSICS

January 11, 2018

Particle physics: "Fundamental physics is frustrating physicists: No GUTs, no glory," intones The Economist, January 11, 2018. Is this caused by a fundamental flaw in logic? That is what I have long suggested.

Says The Economist:“Persistence in the face of adversity is a virtue… physicists have been nothing if not persistent. Yet it is an uncomfortable fact that the relentless pursuit of ever bigger and better experiments in their field is driven as much by belief as by evidence. The core of this belief is that Nature’s rules should be mathematically elegant. So far, they have been, so it is not a belief without foundation. But the conviction that the truth must be mathematically elegant can easily lead to a false obverse: that what is mathematically elegant must be true. Hence the unwillingness to give up on GUTs and supersymmetry.”

Mathematical elegance? What is mathematics, anyway? What may be at fault is the logic brought to bear in present-day theoretical physics. And I will say even more: all of today's logic may be at fault. It is not just physics which should tremble. The Economist gives a good description of the developing situation, arguably the greatest standstill in physics in four centuries:

“In the dark

GUTs are among several long-established theories that remain stubbornly unsupported by the big, costly experiments testing them. Supersymmetry, which posits that all known fundamental particles have a heavier supersymmetric partner, called a sparticle, is another creature of the seventies that remains in limbo. ADD, a relative newcomer (it is barely 20 years old), proposes the existence of extra dimensions beyond the familiar four: the three of space and the one of time. These other dimensions, if they exist, remain hidden from those searching for them.

Finally, theories that touch on the composition of dark matter (of which supersymmetry is one, but not the only one) have also suffered blows in the past few years. The existence of this mysterious stuff, which is thought to make up almost 85% of the matter in the universe, can be inferred from its gravitational effects on the motion of galaxies. Yet no experiment has glimpsed any of the menagerie of hypothetical particles physicists have speculated might compose it.

Despite the dearth of data, the answers that all these theories offer to some of the most vexing questions in physics are so elegant that they populate postgraduate textbooks. As Peter Woit of Columbia University observes, “Over time, these ideas became institutionalised. People stopped thinking of them as speculative.” That is understandable, for they appear to have great explanatory power.”

A lot of the theories found in theoretical physics "go to infinity", and a lot of their properties depend upon infinite computations (for example "renormalization"). Also, a lot of the problems which appear, and which, say, "supersymmetry" tries to "solve", have to do with taming infinite computations which go mad for all to see. For example, the plethora of virtual particles makes Quantum Field Theory miss reality by a factor of roughly 10^120 (the predicted vacuum energy versus the observed value). Thus, curiously, Quantum Field Theory is both the most precise and the most false theory ever devised. Confronted with all this, physicists have tried to do what has worked in the past, like looking for the keys below the same lighted lamp post, and counting the same angels on the same pinhead.

A radical way out presents itself. It is simple. And it is global, clearing much of logic, mathematics and physics of a dreadful madness which has seized those fields: INFINITY. Observe that infinity is not just a mathematical hypothesis, it is a mathematically impossible hypothesis: infinity is not an object. Infinity has been used as a device (for computations in mathematics). But what if that device is not an object, is not constructible?

Then many of the problems theoretical physics tries to solve, many of these "infinities", simply disappear.

Colliding Galaxies In the X Ray Spectrum (Spitzer Telescope, NASA). Very very very big is not infinity! We have no time for infinity!

The conventional way is to cancel particles with particles: “as a Higgs boson moves through space, it encounters “virtual” versions of Standard Model particles (like photons and electrons) that are constantly popping in and out of existence. According to the Standard Model, these interactions drive the mass of the Higgs up to improbable values. In supersymmetry, however, they are cancelled out by interactions with their sparticle equivalents.” Having a finite cut-off would do the same.
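To make the "finite cut-off" remark concrete, here is a deliberately crude toy sketch of my own (not the Standard Model computation): a divergent sum standing in for virtual-particle contributions, which becomes perfectly finite the moment an upper bound on the number of modes is imposed.

```python
# Toy illustration only: a "loop correction" modeled as a sum over virtual-mode
# contributions. Without a cutoff the sum grows without bound; any finite
# cutoff yields a finite, computable answer.

def toy_correction(cutoff_modes):
    """Sum a crude stand-in for virtual-particle contributions up to a cutoff."""
    return sum(k for k in range(1, cutoff_modes + 1))

for cutoff in (10, 1_000, 100_000):
    print(f"cutoff = {cutoff:>7} modes -> correction = {toy_correction(cutoff):.3e}")
```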

A related logic creates the difficulty with Dark Matter, in my opinion. Here is why. Usual Quantum Mechanics assumes the existence of infinity in its basic formalism. This brings about the non-prediction of Dark Matter. Some physicists will scoff: infinity? In Quantum Mechanics? However, the Hilbert spaces which the Quantum Mechanical formalism uses are often infinite-dimensional. Crucial to the Quantum Mechanical formalism, yet extraneous to it, festers a ubiquitous instantaneous collapse (semantically partly erased as "decoherence" nowadays). "Instantaneous" is the inverse of "infinity" (in perverse infinity logic). If the latter has got to go, so does the former. As it stands, Quantum Mechanics depends upon infinity; removing the latter requires us to change the former.
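To illustrate what removing infinity from the formalism can look like in practice, here is a standard numerical trick, offered only as a sketch of the idea (not my full theory): truncate the infinite-dimensional Hilbert space of the harmonic oscillator to its first N levels, so that every operator becomes a finite matrix and nothing in the computation ever touches "infinity".

```python
# Sketch: truncating the (infinite-dimensional) harmonic-oscillator Hilbert
# space to N levels. All operators become finite N x N matrices.
import numpy as np

N = 12                                   # dimension of the truncated Hilbert space
n = np.arange(1, N)
a = np.diag(np.sqrt(n), k=1)             # annihilation operator, truncated to N x N
H = a.conj().T @ a + 0.5 * np.eye(N)     # H = a†a + 1/2  (in units of ħω)

energies = np.linalg.eigvalsh(H)
print(energies[:5])                      # ≈ [0.5, 1.5, 2.5, 3.5, 4.5], as expected
```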

Laplace did exactly this with gravity around 1800 CE. Laplace removed the infinity in gravitation, which had aggravated Isaac Newton, a century earlier. Laplace made gravity into a field theory, with gravitation propagating at finite speed, and thus predicted gravitational waves (relativized by Poincaré in 1905).

Thus, doing away with infinity makes GUTs' logic faulty, and predicts Dark Matter, and even Dark Energy, in one blow.

If one new hypothesis puts in a new light, and explains, much of physics in one blow, it has got to be right.

Besides, doing away with infinity would clean out a lot of hopelessly over-sophisticated mathematics, which shouldn't even exist, IMHO. By the way, computers don't use infinity (as I said, infinity can't be defined, let alone constructed).

Sometimes one has to let go of the past, drastically. Theories of infinity should go the way of the crystal-sphere theories which were once supposed to explain the universe: silly stuff, collective madness.

Patrice Aymé

Notes: What do I mean by infinity not being constructible? There are two approaches to mathematics: 1) counting on one's digits, out of which comes all of arithmetic. If one counts on one's digits, one runs out of digits after a while, as any computer knows; I have made this into a global objection, by observing that, de facto, there is a largest number (contrary to what the fake, yet time-honored, 25-century-old "proofs" pretend to demonstrate; basically the "proof" assumes what it pretends to demonstrate, by claiming that, once one has "N", there is always "N + 1").
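A concrete illustration of "running out of digits": fixed-width machine integers have a largest representable value, and adding 1 to it wraps around instead of producing a new, bigger number. (A minimal sketch using NumPy's fixed-width integers; Python's own unbounded int hides the effect, but it too is ultimately limited by finite memory.)

```python
# Fixed-width integers have a genuine largest number: "N + 1" does not always exist.
import numpy as np

largest = np.iinfo(np.int64).max           # 9223372036854775807
print(largest)
print(np.int64(largest) + np.int64(1))     # wraps to the most negative value
                                           # (NumPy may also emit an overflow warning)
```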

2) Set theory. Set theory is about sets. An example of a "set" could be the set of all atoms in the universe. That may, or may not, be "infinite". In any case, it is not "constructible", nor even open to extended consideration, precisely because it is so considerable (conventional Special Relativity, let alone basic practicality, prevents that; Axiomatic Set Theory à la Bertrand Russell has tried to get around infinity with the notion of a proper class…)

In both 1) and 2), the infinite can't be fully considered, precisely because it never finishes.

Some will scoff that I am going back to Zeno's paradox, being baffled by what baffled Zeno. But I know Zeno, he is a friend of mine. My own theory explains Zeno's paradox. And, in any case, so does Cauchy's theory of limits (which depends upon infinity only superficially; even infinitesimal theory, aka non-standard analysis, from Leibniz plus Model Theory, survives my scathing refounding of all of logic, math and physics).
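The point about Cauchy-style limits can be made with finite computations only: every partial sum of Zeno's halving series is a finite calculation, and the partial sums get as close to 1 as one wishes, without any completed infinity ever being manipulated. A minimal sketch:

```python
# Zeno's halving series: each partial sum is a finite computation, and the
# partial sums approach 1. No "completed infinity" is ever manipulated.
total = 0.0
for k in range(1, 31):
    total += 0.5 ** k
    if k in (1, 2, 5, 10, 30):
        print(f"after {k:>2} steps: {total:.10f}")
# after 30 steps the sum is within a billionth of 1
```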

By the way, this is all so true that mathematicians have developed still another notion, which makes logic, de facto, local, and spurns infinity, namely Category Theory. Category Theory is very practical, but also an implicit admission that mathematicians don't need infinity to make mathematics. Category Theory has now become fashionable in some corners of theoretical physics.

3) The famous mathematician Brouwer threw out some of the celebrated mathematical results he had himself established, on grounds somewhat similar to those evoked above, when he promoted "Intuitionism". The latter field was started by Émile Borel and Henri Lebesgue (of the Lebesgue integral), two important French analysts viewed as semi-intuitionists. They elaborated a constructive treatment of the continuum (the real line, R), leading to the definition of the Borel hierarchy. For Borel and Lebesgue, considering the set of all sets of real numbers is meaningless, and therefore has to be replaced by a hierarchy of subsets that do have a clear description. My own position is much more radical, and can be described as ultra-finitism: it does away even with so-called "potential infinity" (this is how I get rid of many infinities in physics, which truly are artefacts of mathematical infinity). I expect no sympathy: thousands of mathematicians live off infinity.

4) Let me help those who want to cling to infinity. I would propose two sorts of mathematical problems: 1) those which can be solved when considered in Ultra Finite mathematics ("UF"); 2) those which stay hard, not yet solved, even in UF mathematics.

The Quantum Puzzle

April 26, 2016

CAN PHYSICS COMPUTE?

Is Quantum Computing Beyond Physics?

More exactly, do we know, can we know, enough physics for (full) quantum computing?

I have long suggested that the answer to this question was negative, and smirked at physicists sitting billions of universes on a pinhead, as if they had nothing better to do, the children they are. (Just as with their Christian predecessors in the Middle Ages, their motives are not pure.)

Now an article in the American Mathematical Society Journal of May 2016 repeats some of the arguments I had in mind: The Quantum Computer Puzzle. Here are some of the arguments. One often hears that Quantum Computers are a done deal. Here is the explanation from Justin Trudeau, Canada's Prime Minister, which reflects perfectly the official scientific conventional wisdom on the subject: https://youtu.be/rRmv4uD2RQ4

(One wishes all our great leaders would be as knowledgeable… And I am not joking as I write this! Trudeau did engineering and ecological studies.)

… Supposing, Of Course, That One Can Isolate And Manipulate Qubits As One Does Normal Bits…

Before some object that physicists are better qualified than mathematicians to talk about the Quantum, let me point towards someone who is perhaps the most qualified experimentalist in the world on the foundations of Quantum Physics. Serge Haroche is a French physicist who got the Nobel Prize for figuring out how to count photons without seeing them. It is the most delicate Quantum Non-Demolition (QND) method I have heard of. It involved making the world's most perfect mirrors. The punch line? Serge Haroche does not believe Quantum Computers are feasible. However, Haroche does not say how he reached that conclusion. The article in the AMS does make plenty of suggestions to that effect.

Let me hasten to add that some form of Quantum Computing (or Quantum Simulation) called "annealing" is obviously feasible. D-Wave, a Canadian company, is selling such devices. In my view, Quantum Annealing is just the two-slit experiment writ large. Thus the counter-argument can be made that conventional computers can simulate annealing (and that has been the argument against D-Wave's machines).
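For what it is worth, here is a minimal classical simulated-annealing sketch (ordinary Python, nothing quantum about it; the energy landscape is an arbitrary toy of mine). It shows the kind of optimization a conventional computer can already run, which is precisely why skeptics argue that annealing alone does not demonstrate quantum supremacy.

```python
# Minimal classical simulated annealing: minimize a toy energy function by
# accepting uphill moves with a temperature-dependent probability.
import math
import random

def energy(x):
    return (x - 3.0) ** 2 + 2.0 * math.sin(5.0 * x)    # toy landscape with local minima

x = random.uniform(-10, 10)
temperature = 5.0
for step in range(20000):
    candidate = x + random.gauss(0.0, 0.5)
    delta = energy(candidate) - energy(x)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate                                    # accept the move
    temperature *= 0.9995                                # cool down slowly

print(f"found x ≈ {x:.3f}, energy ≈ {energy(x):.3f}")
```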

Full Quantum Computing (also called "Quantum Supremacy") would be something completely different. Gil Kalai, a famous mathematician and a specialist in Quantum Computing, is skeptical:

“Quantum computers are hypothetical devices, based on quantum physics, which would enable us to perform certain computations hundreds of orders of magnitude faster than digital computers. This feature is coined “quantum supremacy”, and one aspect or another of such quantum computational supremacy might be seen by experiments in the near future: by implementing quantum error-correction or by systems of noninteracting bosons or by exotic new phases of matter called anyons or by quantum annealing, or in various other ways…

A main reason for concern regarding the feasibility of quantum computers is that quantum systems are inherently noisy. We will describe an optimistic hypothesis regarding quantum noise that will allow quantum computing and a pessimistic hypothesis that won’t.”

Gil Kalai rolls out a couple of theorems which suggest that Quantum Computing is very sensitive to noise (those are similar to finding out which slit a photon went through). Moreover, he uses a philosophical argument against Quantum Computing:

It is often claimed that quantum computers can perform certain computations that even a classical computer of the size of the entire universe cannot perform! Indeed it is useful to examine not only things that were previously impossible and that are now made possible by a new technology but also the improvement in terms of orders of magnitude for tasks that could have been achieved by the old technology.

Quantum computers represent enormous, unprecedented order-of-magnitude improvement of controlled physical phenomena as well as of algorithms. Nuclear weapons represent an improvement of 6–7 orders of magnitude over conventional ordnance: the first atomic bomb was a million times stronger than the most powerful (single) conventional bomb at the time. The telegraph could deliver a transatlantic message in a few seconds compared to the previous three-month period. This represents an (immense) improvement of 4–5 orders of magnitude. Memory and speed of computers were improved by 10–12 orders of magnitude over several decades. Breakthrough algorithms at the time of their discovery also represented practical improvements of no more than a few orders of magnitude. Yet implementing Boson Sampling with a hundred bosons represents more than a hundred orders of magnitude improvement compared to digital computers.

In other words, it is unrealistic to expect such a, well, quantum jump…

"Boson Sampling" is a hypothetical, and the simplest, way proposed to implement a Quantum Computer. (It is known neither whether it could be built, nor whether it would be good enough for Quantum Computing; yet it is intensely studied nevertheless.)

***

Quantum Physics Is The Non-Local Engine Of Space, and Time Itself:

Here is Gil Kalai again:

“Locality, Space and Time

The decision between the optimistic and pessimistic hypotheses is, to a large extent, a question about modeling locality in quantum physics. Modeling natural quantum evolutions by quantum computers represents the important physical principle of “locality”: quantum interactions are limited to a few particles. The quantum circuit model enforces local rules on quantum evolutions and still allows the creation of very nonlocal quantum states.

This remains true for noisy quantum circuits under the optimistic hypothesis. The pessimistic hypothesis suggests that quantum supremacy is an artifact of incorrect modeling of locality. We expect modeling based on the pessimistic hypothesis, which relates the laws of the “noise” to the laws of the “signal”, to imply a strong form of locality for both. We can even propose that spacetime itself emerges from the absence of quantum fault tolerance. It is a familiar idea that since (noiseless) quantum systems are time reversible, time emerges from quantum noise (decoherence). However, also in the presence of noise, with quantum fault tolerance, every quantum evolution that can experimentally be created can be time-reversed, and, in fact, we can time-permute the sequence of unitary operators describing the evolution in an arbitrary way. It is therefore both quantum noise and the absence of quantum fault tolerance that enable an arrow of time.”

Just for future reference, let’s “note that with quantum computers one can emulate a quantum evolution on an arbitrary geometry. For example, a complicated quantum evolution representing the dynamics of a four-dimensional lattice model could be emulated on a one-dimensional chain of qubits.

This would be vastly different from today’s experimental quantum physics, and it is also in tension with insights from physics, where witnessing different geometries supporting the same physics is rare and important. Since a universal quantum computer allows the breaking of the connection between physics and geometry, it is noise and the absence of quantum fault tolerance that distinguish physical processes based on different geometries and enable geometry to emerge from the physics.”

***

I have proposed a theory which explains the preceding features, including the emergence of space. Let's call it Sub Quantum Physics (SQP). The theory breaks a lot of sacred cows. Besides, it brings an obvious explanation for Dark Matter. If I am correct, the Dark Matter Puzzle is directly tied in with the Quantum Puzzle.

In any case, it is a delight to see in print part of what I have been severely criticized for saying for all too many decades… The gist of it all is that present-day physics is thoroughly incomplete.

Patrice Ayme’

BEING FROM DOING: EFFECTIVE ONTOLOGY, Brain & Consciousness

December 29, 2015

Thesis: Quantum Waves themselves are what information is (partly) made of. Consciousness, being Quantum, shows up as information. Reciprocally, information gets translated into the Quantum, and then builds the brain, then the mind, thus consciousness. So the brain is a machine debating with the Quantum. Let me explain a bit, while expounding along the way the theory of the General Relativity of Ontological Effectiveness, "GROE":

***

What is the relationship between the brain and consciousness? Some will point out we have to define our terms: what is the brain, what is consciousness? We can roll out an effective definition of the brain (it’s where most neurons are). But consciousness eludes definition.

Still, that does not mean we cannot say more. And, from saying more, we will define more.

Relationships between definitions, axioms, logic and knowledge are a matter of theory:

Take Euclid: he starts with points. What is a point? Euclid does not say, he does not know, he has to start somewhere. However, where exactly that somewhere lies may itself be full of untoward consequences (in the 1960s, mathematicians working in Algebraic Geometry found that points caused problems; they have caused problems in Set Theory too; vast efforts were directed at, and around, points). Effectiveness defines. Consider this:

Effective Ontology: I Compute, Therefore That’s What I Am

Schematic of a nanoparticle network (about 200 nanometres in diameter). By applying electrical signals at the electrodes (yellow), and using artificial evolution, this disordered network can be configured into useful electronic circuits.

Read more at: http://phys.org/news/2015-09-electronic-circuits-artificial-evolution.html#jCp

All right, more on my General Relativity of Ontological Effectiveness:

Modern physics talks of the electron. What is it? Well, we don't know, strictly speaking. But, fuzzy thinking aside, we do have a theory of the electron, and it is so precise it can be put in equations. So it is the theory of the electron which defines the electron. As the former could, and did, vary, so did the latter (at some point the physicist Wheeler and his student Feynman suggested the entire universe was peopled by just one electron going back and forth in time).

Hence the important notion: concepts are defined by EFFECTIVE THEORIES OF THEIR INTERACTION with other concepts (General Relativity of Ontological Effectiveness: GROE).

***

NATURALLY Occurring Patterns Of Matter Can Recognize Patterns, Make Logic:

Random assemblies of gold nanoparticles can perform sophisticated calculations. Thus Nature can start computing, all by itself. There is no need for the carefully arranged patterns of silicon.

Classical computers rely on ordered circuits where electric charges follow preprogrammed rules, but this strategy limits how efficient they can be. Plans have to be made in advance, yet the number of possibilities grows so fast that the human brain cannot envision them all. The alternative is to do as evolution itself does when it creates intelligence: select the fittest. In this case, a selection of the fittest electronic circuits (see the sketch below).
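Here is a toy sketch of that "selection of the fittest circuits" principle. The real experiments evolve voltage settings on a nanoparticle network, which is not reproduced here; the encoding and the XOR target below are my own illustrative assumptions.

```python
# Toy sketch of "selection of the fittest circuits": evolve a 4-entry truth
# table (a candidate "circuit") toward the XOR function by mutation and selection.
import random

TARGET = [0, 1, 1, 0]                      # XOR over inputs (0,0), (0,1), (1,0), (1,1)

def fitness(circuit):
    return sum(int(a == b) for a, b in zip(circuit, TARGET))

population = [[random.randint(0, 1) for _ in range(4)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 4:        # perfect circuit found
        break
    survivors = population[:10]            # keep the fittest half
    children = [[bit ^ (random.random() < 0.1) for bit in parent]   # mutate
                for parent in survivors]
    population = survivors + children

print("best circuit:", population[0], "fitness:", fitness(population[0]))
```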

(Selection of the fittest was well known to the Ancient Greeks, 25 centuries ago, long before the Christian superstition. The Ancient Greeks used artificial and natural selection explicitly to create new breeds of domestic animals. However, Anglo-Saxons prefer to name things after themselves, so they can feel they exist; thus selection of the fittest is known by Anglo-Saxons as "Darwinian". Hence soon we will hear about "Darwinian electronics", for sure!)

"The best microprocessors you can buy in a store now can do 10 to the power 11 (10^11; one hundred billion) operations per second and use a few hundred watts," says Wilfred van der Wiel of the University of Twente in the Netherlands, a leader of the gold circuitry effort. "The human brain can do orders of magnitude more and uses only 10 to 20 watts. That's a huge gap in efficiency."

To close the gap, one goes back to basics. The first electronic computers, in the 1940s, tried to mimic what were thought at the time to be brain operations. So the European Union and the USA are trying more of the same, to develop "brain-like" computers that do computations naturally, without their innards having been specifically laid out for the purpose. For a few years now, the candidate material found to reliably perform real calculations has been gold.

Van der Wiel and colleagues have observed that clumps of gold grains handle bits of information (=electric charge) in the same way that existing microprocessors do.

Clumps of computing grains operate as a unit, in parallel, much as neurons seem to do in the brain. This should improve pattern recognition. A pattern, after all, is characterized by a dimension higher than one, and so is a clump operating together. A mask to recognize a mask.

Patterns are everywhere; logic itself is made of patterns.

***

WE ARE WHAT WE DO:

So what am I saying, philosophically? I am proposing a (new) foundation for ontology which makes explicit what scientists and prehistoric men have been doing all along. 

The theory of the nature of being is ontology, the "Logic of Being". Many philosophers, or pseudo-philosophers, have wrapped themselves up in knots about what "Being" is. (For example Heidegger, trained as a Catholic seminarian, who later blossomed as a fanatical professional Nazi, wrote a famous book called "Sein und Zeit", Being and Time. Heidegger tries at some point to obscurely mumble feelings not far removed from some explicit notions in the present essay.)

Things are defined by what they do. And they do what they do in relation with other things.

Where does it stop? Well, it does not. What we have done is define being by effectiveness. This is what mathematicians have been doing all along. Defining things by how they work produces things, and theories, which work. The obvious example is mathematics: it may be a castle in the sky, but this castle is bristling with guns, and its cannon balls are exquisitely precise, thanks to the science of ballistics, a mathematical creation.

Things are what they do. Fundamental things do few things, sophisticated things do many things, and thus have many ways of being.

Some will say: ‘all right, you have presented an offering to the gods of wisdom, so now can we get back to the practical, such as the problems Europe faces?’

Be reassured, creatures of little faith: Effective Ontology is very practical. First of all, it is what all of physics and mathematics, and actually all of science, rests upon (and it defines them beyond Karl Popper's feeble attempt).

Moreover, watch Europe. Some, including learned, yet nearly hysterical commenters who have graced this site, are desperately yelling to be spared from a “Federal Europe“, the dreaded “European Superstate“. The theory of Effective Ontology focuses on the essence of Europe. According to Effective Ontology, Europe is what it does.

And what does Europe do? Treaties. A treaty, in Latin, is "foedus". Its genitive is foederis, and it gives foederatus, hence the French fédéral and from there, 150 years later in the USA, "federal". Europe makes treaties (with the Swiss (Con)federation alone, the European Union has more than 600 treaties). Thus Europe IS a Federal State.

Effective Ontology has been the driver of Relativity, Quantum Physics, and Quantum Field Theory. And this is precisely why those theories have made so many uncomfortable.

Patrice Ayme’