If It’s Physically Impossible, It’s Impossible: THE INTEGERS USED SO FAR ARE INCONSISTENT WITH THE (known) UNIVERSE.

***

Abstract: The senior, extremely experienced, and justly famous Princeton mathematics professor Edward Nelson tried to prove that arithmetic was inconsistent. But while deriving his attempted proof he assumed something which turned out not to be true, and the proof collapsed as a result.

I have more basic, and much more drastic claims:

**There is a largest number. Or, more exactly, numbers cannot be too large (in sheer size of the number of digits needed to express them). All and any logic is bounded, and local.** Full real logic involves qubits, not bits. Only thus is infinity recovered, through non local methods. A situation exists with realistic logic which closely parallels that encountered in geometry before the invention of local differential geometry. Local logic can be integrated, using a connection.

In other words, if you can’t build them, don’t pretend unobservable castles in the air exist, and compute with them, to boot! Basic number theory and logic have to become much more subtle.

***

THERE IS A LARGEST NUMBER. AND LOGIC IS LOCAL.

A theorem well known since primary school is that there is an infinity of numbers. Indeed, suppose there is not, and N is the largest number. Then the number (N + 1) is even larger. Quod Erat Demonstrandum.

Simple. That’s what all mathematicians say. But is the reasoning truly valid? Indeed, what is N?

In Cantor’s theory of cardinals, N is the set of sets which have, well, N elements. This is not exactly as circular as it sounds. As John von Neumann pointed out, one can build up a set with no element (by decree: we just say there is such a thing; it’s an axiom, the axiom of the empty set). Symbolize it by 0.

Then we can consider the set whose only element is the empty set: symbolize it by {0}. So when you look inside, inside the brackets, all you see is 0, the empty set. Call that set “one”, or 1. Then look at the set having as elements only 0 and 1. One can symbolize it by {0, 1}, that is: {0, {0}}. Call it 2. And so forth.

N+1 would be the set having as elements 0 and N: {0, N}. (The official von Neumann successor is rather N ∪ {N} = {0, 1, …, N}, which requires even more symbols to write out; the simpler {0, N} rule suffices for the argument here.) This way we get all the numbers and the successor operation, +1. So far, this is standard fare, known to all research mathematicians.
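As a toy illustration (a sketch of mine, not from the original text), one can build these bracket numerals mechanically and verify that the numeral for n contains exactly n closing brackets, which is the counting the argument relies on:

```python
def numeral(n):
    """Build the bracket symbol for n using the successor rule above:
    0 is written '0', 1 is '{0}', and n+1 is written '{0,n}'."""
    if n == 0:
        return "0"
    s = "{0}"
    for _ in range(n - 1):
        s = "{0," + s + "}"
    return s

for n in range(5):
    s = numeral(n)
    # the numeral for n uses exactly n closing brackets
    assert s.count("}") == n
    print(n, "->", s)
```

Running it prints 0 -> 0, 1 -> {0}, 2 -> {0,{0}}, and so on: the symbol for n physically contains n closing brackets, so symbolizing G* takes G* of them.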

However, suppose G* is the apparent number of particles, virtual or not, in the known universe. (Using the Planck length, which terminates renormalization, and bounds on energy density coming from bounds on gravitational curvature, one can estimate G*; G* is not infinite, because the knowable universe is bounded, be it only because, far away enough, space recedes beyond light speed.) Contemporary logic and mathematics have ignored this situation, just as Euclid ignored the fact that he did not have a non local definition of a straight line (although he needed it).
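For concreteness, here is a back-of-the-envelope sketch of such an estimate; the Planck length, the radius of the observable universe, and the particle count below are standard order-of-magnitude figures I am supplying for illustration, not values from the text:

```python
import math

# Rough standard figures (assumptions of mine, for illustration only):
PLANCK_LENGTH_M = 1.616e-35   # Planck length, in metres
OBS_RADIUS_M = 4.4e26         # comoving radius of the observable universe, metres
PARTICLE_EST = 1e80           # common estimate of the number of baryons

# How many Planck lengths fit along one radius, then how many
# Planck-sized cells fit in the whole observable volume.
radii = OBS_RADIUS_M / PLANCK_LENGTH_M
planck_volumes = (4 / 3) * math.pi * radii ** 3

print("Planck-sized cells ~ 10^%.0f" % math.log10(planck_volumes))
print("particle estimate  ~ 10^%.0f" % math.log10(PARTICLE_EST))
```

Whatever precise figures one prefers, both counts are finite, which is the only property the argument needs.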

Now, in the preceding construction of G*, written only as a symbol made of 0s and the brackets { and }, one gets, on the right hand side of G*, well, a large number of closing brackets }: namely, G* of them! That means **one would have as many brackets } as there are particles, virtual or not, in the universe. But what are the }s made of? Particles, virtual or not.**

**So just thinking of G* is impossible: G* would require all the particles in the universe to symbolize it.**

Some will say: hey, wait a minute, you are confusing mathematics and engineering. In mathematics one generally proves that a would-be mathematical object, BAD, does not exist by arriving at a contradiction: given a set of axioms, AXIOM, supposing the existence of that object as a supplementary axiom, one gets to a proposition A such that: A –> Non A.

In other words, honorable mathematical proofs consist in demonstrating that the theory made of AXIOM + BAD is “*inconsistent*”.

Another thing mathematicians do a lot of, as Terry Tao just did to professor Nelson, *who was his logic professor at Princeton*, is to show that a proposed reasoning does not work, because something which was supposed to make that reasoning work, and was viewed as obvious, is not obvious, or is even wrong.

Tao seems to have found, by an enumeration argument, that a sub-theory had to have a greater *Kolmogorov complexity* than Nelson had supposed. Nelson’s perfect answer: “*You [Terry] are quite right, and my original response was wrong. Thank you for spotting my error. I withdraw my claim*” [that Peano Arithmetic is inconsistent].

My main reasoning here, to establish the existence of the largest number G*, is the ultimate enumeration argument: one cannot construct (G* + 1), because one has run out of… matter.

Some will say: ah, but to prove mathematics, one uses only inner experience, whereas you used a mixed approach. Well, mathematicians do the same. Euclid famously supposed a number of hidden hypotheses besides his axioms; for example, that two circles intersect. The only way to justify that is through *Analytic Geometry* (established in the 17th Century), resting on the concept of the continuum (19th Century)… In other words, on the construction of the real numbers, in the second half of the nineteenth century, itself resting on the conventional (and, as we saw, erroneous) construction of the integers.

To hammer the point some more: Princeton’s Andrew Wiles proved Fermat’s Last Theorem by using some powerful hypotheses about infinity. It is supposed to be a heroic task, beyond human achievement, to convert the proof into first order logic… And, in any case, it is not clear what axiomatics Wiles really used (did he use an “*inaccessible cardinal*”, in a vital way, or not?). However, as long as the axiomatics is not clear, one cannot assert one has a proof, but just the sketch of one.

Notice that **the main strategy in philosophy, over the millennia, has been precisely to show that a time honored reasoning does not work, because something viewed as obvious is not actually obvious, or is actually completely wrong. It’s naturally one of the main ways a philosophical attitude by civilization class scientists impacts science.**

But here we have done something more radical. We have a *symbol which cannot possibly exist*. **No axiomatics can build it.** How could something one cannot even symbolize exist in mathematics?

The limitations on logical systems are also severe, and go beyond simply being limited to coding with a finite number of symbols or numbers. **The lengths of the implication chains**, and the lengths of the descriptions of the propositions themselves, or of the numbers describing them, are all bounded. (So all diagonalization arguments à la Cantor, including all the Gödel theorems, fail, etc.)

Thus any logical language is limited: there is a limit to any (local) logical universe.

We will call that the *Logical Horizon*, or *Golo Horizon* (“Golo” being the dominant male baboon in a West African language; there is only so much that a Golo can understand, due to the nature of his neurological universe; Golo is also the nickname of somebody dear to me).

**The situation with the Logical Horizon is analogous to the horizon given, in a differentiable manifold, by the exponential map. Except here it applies to logic itself. Conclusion: arithmetic and logic are both local.**

(This will have consequences for all domains of thought which use mathematics, either technically, or as a source of models or inspiration; that includes philosophy.)

So what happens to the various notions of infinity found in logic? Well, they will have to be reconsidered carefully.

Another notion which can be wiped out is that information is more important than matter. Wheeler famously said at some point that he wanted to reduce physics to information. Or, as he put it, “*it from bit*”.

This is a bad joke if there ever was one. Wheeler knew plenty of Quantum Physics (he was Feynman’s teacher, and co-conspirator, at Princeton, after all). Plenty enough to know his joke was deeply misleading. I am myself often reduced to dubious jokes of kindergarten level, such as Bushama and Obabla. “*It from bit*” is much worse. Whereas the Bush-Obama era is solid evidence for reducing taxes on the superrich, giving public money to banksters, warring in Afghanistan, and throwing away the constitution, and civilization, as obsolete, while describing the whole thing as the opposite of what it is, there is no evidence whatsoever for “*It From Bit*”.

All the evidence there is, consists in people thinking that “*digital”* is superior to “*analogue*“. True, monkeys have digits, and they are superior, but that’s roughly where the analogy, and the fun, stops.

“*It from Bit*” is exactly the erroneous conclusion to draw out of Quantum Physics. “*Bit*” is an artificial idea. The real world does not have “bits”, any more than it has “digits”. As we just saw, numbers are very limited. This means that any physical theory, even a classical one, is indeterminate, just from that.

Any “*bit*”, the smallest piece of information, is an agreed-upon packet of energy. In its smallest form it is the presence, or absence, of a photon, neutrino or electron. So any information stream is actually an energy stream. There is a finite number of bits. Fundamentally, because they are about particles, namely, in my vision of the Quantum, very special manifestations of the continuous Quantum reality.

**Reality is all about Quantum Physics, which deals in “qubits”, not bits.**

**Qubits entangle with each other, are non local, and provide an infinity beyond the integers.** These are three complexities that *qubits* have, and that *simple bits* are deprived of. And, of course, these three complexities are essential ingredients of non local logic.
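To make the contrast concrete, here is a small Python sketch of mine, using the standard amplitude formalism (not anything from the text above): a bit has exactly two states, a single qubit ranges over a continuum of states, and a pair of qubits can be entangled, detected here by the standard nonzero-determinant test for two-qubit product states:

```python
import cmath
import math
import random

# A classical bit: exactly two states, nothing in between.
BIT_STATES = (0, 1)

def random_qubit():
    """A pure qubit state a|0> + b|1> with |a|^2 + |b|^2 = 1.
    (theta, phi) range over a continuum (the Bloch sphere), so a qubit
    has uncountably many distinct states, unlike a bit."""
    theta = random.uniform(0, math.pi)
    phi = random.uniform(0, 2 * math.pi)
    return math.cos(theta / 2), cmath.exp(1j * phi) * math.sin(theta / 2)

def is_entangled(c00, c01, c10, c11, tol=1e-12):
    """A two-qubit state (c00, c01, c10, c11) is a product state exactly
    when c00*c11 - c01*c10 == 0; a nonzero value means entanglement."""
    return abs(c00 * c11 - c01 * c10) > tol

a, b = random_qubit()
assert abs(abs(a) ** 2 + abs(b) ** 2 - 1) < 1e-12  # properly normalized

# The Bell state (|00> + |11>)/sqrt(2): the canonical entangled pair.
s = 1 / math.sqrt(2)
assert is_entangled(s, 0, 0, s)       # entangled: no product decomposition
assert not is_entangled(1, 0, 0, 0)   # |00> is a plain product state
```

The point of the sketch is only the count: two bit states, a continuum of qubit states, plus correlations (entanglement) that no assignment of separate bits can reproduce.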

Information is made of energy and energy is bounded, locally and to infinity, and so are mathematics and logic.

Dedekind famously entitled his work on numbers: “*Was sind und was sollen die Zahlen?*” (“What are, and what ought to be, the numbers?”). It was Kronecker, though, who made the famous commentary: “God created the positive integers, and the rest is the work of man.” Dedekind made “cuts”… each one, so to speak, a Quantum event (there are no classical events, except in an approximate sense).

However, we just saw that the constraints of the real world are so strong that the numbers cannot be whatever we please. Maybe, as god does not exist, it could not even create the numbers. Or is it that man created the integers, but, since he was not god, could not finish the task?

Or maybe we just found a proof of the inexistence of god? Behind this joke is a serious point: the idea of god contained that of infinity. However, we just saw that infinity cannot be obtained on the cheap, by piling up numbers in one spot.

***

HOW LOGIC WILL BECOME LOCAL: THE GEOMETRICAL ANALOGY.

The situation as it is in logic, and as I expect it to evolve, is similar to what happened with Euclid. Euclid strictly made geometry on an infinite flat plane, something which obviously did not exist in his world, or in any world at all. Similarly, we just saw that conventional logic and arithmetic do not exist in any world at all. However, qubits are non local, entangled. That allows us to do the same with logic (demonstration some other time).

Let’s go back to the genesis of full geometry. Let’s suppose Euclid honestly tried to draw straight lines on a sphere. Suppose the Earth was an ideally smooth sphere, and one had a bit of straight line on the ground, B(1), and a point y off it. Euclid’s postulates said two strange things.

First, that the bit of straight line, B(1), could be extended into a full straight line, L(1). That *seemed* obvious on the plane, but it was NOT obvious on a sphere (so Euclid spoke of easier things).

To do this properly, Greek mathematicians would have needed first to find the essence of the idea of a line: that it minimizes length. Then the ancient Greeks would have had to find out what lines minimized length locally, *on a sphere*. As it turns out, those lines are what are called great circles.

To figure those out, and to extend the notion of straight line to a sphere, several new notions, several subtleties, a whole new style of logic had to be introduced, establishing what is now known as *differential geometry*. This immense field of mighty subtleties started in the first half of the Nineteenth Century, with the work of Gauss, Bolyai and Lobachevsky, but fully blossomed only a century later, with the implementation of Riemann’s program for gravitation by many mathematicians (and to which Einstein contributed enthusiastically).

The notion of tangent vector was indispensable: this is the direction V in which Euclid would have pointed, when at point x on that sphere called the Earth. The great circle tangent to V is the intersection of the sphere with the plane in (normal three dimensional) space containing V and the vector from the center of the Earth to x.
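The construction just described can be computed directly. Here is a minimal Python sketch of mine (assuming the unit sphere): project the pointing direction V onto the tangent plane at x, then sweep cos(t)·x + sin(t)·V̂, which traces exactly the great circle cut by the plane through x and V:

```python
import math

def great_circle(x, v, t):
    """Point at arc-length t along the geodesic through x with direction v,
    on the unit sphere: gamma(t) = cos(t)*x + sin(t)*v_hat, where v_hat is
    v projected onto the tangent plane at x and normalized. This is the
    intersection of the sphere with the plane spanned by x and v.
    (v must not be parallel to x.)"""
    dot = sum(a * b for a, b in zip(x, v))
    tangent = [vi - dot * xi for vi, xi in zip(v, x)]  # tangent part of v at x
    norm = math.sqrt(sum(c * c for c in tangent))
    v_hat = [c / norm for c in tangent]
    return [math.cos(t) * xi + math.sin(t) * vi for vi, xi in zip(v_hat, x)]

x = [1.0, 0.0, 0.0]          # a point on the unit sphere
v = [0.0, 1.0, 0.0]          # a direction Euclid points in
p = great_circle(x, v, math.pi / 2)
# every point of the geodesic stays on the sphere
assert abs(sum(c * c for c in p) - 1) < 1e-12
```

Sweeping t from 0 to 2π returns to the starting point: the "straight line" on the sphere closes up into a great circle.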

This can all be demonstrated in various ways, the most modern being that the connection on the sphere is the trace of the (“Levi-Civita”) connection in normal three dimensional space, when the latter is equipped with the basic distance known to the Egyptians (the square root of the sum of the squares of the differences of coordinates).

So poor Euclid, trying to extend his bit of line B(1) into a full line L(1), on the sphere, would have been forced to invent geodesics (but that taxed Euclid’s imagination, so he decided to ignore the obvious fact that the Earth was not flat, just like the obnoxious servants of militarized plutocracy nowadays.)

After discovering that great circles locally minimized distance, our imaginary Euclid, had he tried to implement his fifth postulate (“*Through a point y there is one and only one line, L(2), which never meets L(1)*”), would have encountered miserable failure. However, the very nature of *geodesics-as-great-circles* would have made clear why: great circles always intersect.
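Why great circles always intersect can itself be computed: each great circle is the unit sphere cut by a plane through the center, and two distinct such planes always share a line through the center. A small Python sketch of mine, for illustration:

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def intersection_of_great_circles(n1, n2):
    """Each great circle is the unit sphere cut by a plane through the
    origin with normal n. Two distinct such planes always meet in a line
    through the origin (direction n1 x n2), so the circles always meet
    in two antipodal points: there are no 'parallels' on the sphere."""
    d = cross(n1, n2)
    norm = math.sqrt(sum(c * c for c in d))
    if norm == 0:
        raise ValueError("the two normals define the same great circle")
    p = [c / norm for c in d]
    return p, [-c for c in p]

# The equator (normal z-axis) and a meridian circle (normal y-axis):
p, q = intersection_of_great_circles([0, 0, 1], [0, 1, 0])
assert abs(sum(c * c for c in p) - 1) < 1e-12  # intersection lies on the sphere
```

So Euclid's fifth postulate has no chance on the sphere: any candidate "parallel" L(2) meets L(1) in two antipodal points.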

The ancient Greeks could have found out much of the preceding. Actually, *Euclid’s immediate predecessors had introduced the first elements of Non Euclidean geometry, with subtle considerations of various angles in possible triangles*. Euclid’s obsessive development of plane geometry was made to the exclusion of the mathematics of his predecessors. It was a rigorous step forward into backwardness.

Why did Euclid do his flat Euclidean geometry, exclusively? Well, I believe, because of the conquest of the Hellenistic world by the fascist plutocratic generals of Alexander the Great, who established dictatorships that would last centuries (with similar successor regimes which lasted millennia). A mood was set on intellectuals which made it clear that revolutionary thinking was out. And it stayed pretty much out until the European Middle Ages, when the rise of local effective democracy progressively reconstituted the combative originality of the Greek City-States, prior to the Hellenistic degeneracy (while socialized fascism, friendly to demography, but not to revolutionary thinking, installed itself over Vietnam, China, Korea and Japan).

Euclidean geometry was more fascist than the Non Euclidean sort. After all, fascism wants rigid, flat, or, better, uninformed, uncritical, unidimensional minds, just obsessed by corporate monetary profits. That is why Tom Friedman publishes best seller after best seller, and editorial after editorial in the New York Times, while that august publication seems to have wisely decided to block my comments since the *“Occupy Wall Street”* movement blossomed. More than 50 comments blocked already, and counting… It was the same in 2003, with the Iraq war…

Euclid’s geometry was a physical impossibility on the ground, and that should have given a hint to Euclid’s contemporaries (instead of having to wait 21 centuries, for the obvious). But they had other worries.

We have a similar situation with numbers now. Logic is bounded, finite, and so are numbers, locally. To reach global implications, we have to connect local logics in a global whole.

We have an advantage over the Greeks, to figure out more advanced mathematics (and civilization!): we have the Internet, disseminator of truth! And it is, so far, just out of reach of fascist government, in most places. However, have no illusions: so it was with Athens, until the well named Antipater took control, after striking a deal with the plutocrats.

**Real numbers are not real. Really.**

***

Patrice Ayme

***