IT’S ALL IN THE MIND, AND THE MIND IS LOCALLY COUNTABLE, BUT THE UNIVERSE IS NOT.

***

In a nutshell: The discovery of a general theory of incompleteness, in the last century, is one of the greatest advances in civilization, 25 centuries in coming. It ought to have a gigantic impact on general human understanding, and action, greater than any other scientific theory, but it has failed to do so, so far, because it has stayed all too esoteric.

I give a new, **neurological approach to incompleteness**, designed in part to remedy this. Verily, mathematics is not out there, but thoroughly inside the mind (this contradicts Plato). Just as symbolic systems are limited, so is the mind, in the same exact way, it turns out (although the mind found a way out of this limitation). And the limit is countability. And that’s where incompleteness comes from.

***

Introduction and abstract:

**Incompleteness is a common characteristic of all axiomatic systems** (Turing demonstrated this by a variant of Cantor diagonalization). Therein a formidable weapon against human hubris. Too bad so few philosophers, and, a fortiori, politicians, have heard of it. (If the politician realized how incomplete his mind is, he might think enough before going out and killing innocent people with robots, to the point that he might find other ways, like talking to people with the discourse that kills the conflict, rather than the babies.)

In any case, incompleteness has been hard to understand, because it is a major advance in understanding, and, like all such advances, it leaves the savage mind behind. New, and fresher generations will be exposed to incompleteness early, and find it as natural as zero. The concept zero. How? By finding, as usual, a natural approach to the existence of the new concept, and how to build a reproduction of it in one’s mind, hard wired from the start.

Indeed, **zero is, because zero does**. **Zero does what nothing else does, namely introduces nothingness into the computational realm** (take that, Sartre! Initially the Babylonians, 3,000 years ago, just left an empty space for zero. Later they put various marks.)

This is how, and why, people find the concept of zero, and many other basic mathematical concepts, so natural, although, well, they took the best minds millennia to develop. People find them natural, because the new concepts do natural, and helpful things. It took millennia to find the correct approach to zero. In the end, __zero is an axiom__. But a very useful one. We are going to do the same thing for incompleteness in this essay, with a simple observation which has vast consequences, and will be taken as completely obvious in the future (just as zero is now obvious, except zero is more of a convention, whereas we are going to make an observation).
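What zero "does" can be sketched in a few lines. This is a minimal illustration of mine, not the essay’s formalism: zero is the placeholder that makes positional notation compute correctly, exactly the role the Babylonian empty space failed to play reliably.

```python
# A minimal sketch: zero as the placeholder that makes
# positional notation work.

def positional_value(digits, base=10):
    """Interpret a list of digits, most significant first."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

# With the zero placeholder, the marks 1-0-5 mean one hundred and five:
assert positional_value([1, 0, 5]) == 105
# Drop the "nothing" marker, as in early Babylonian practice,
# and the very same marks collapse to fifteen:
assert positional_value([1, 5]) == 15
```

Zero introduces nothingness into the computational realm precisely here: without it, a column that holds nothing cannot be recorded, and the whole number changes value.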

To make a new concept self obvious, one always has first to find the correct approach, that is, the simplest and most elegant one.

Gödel’s work on incompleteness was complicated in its details (because he imposed onto himself a minimalist setting, working only with integers).

Much less complicated is Cantor’s much more crucial breakthrough (reproduced below), and others’ work on incompleteness (Turing, Chaitin). Complication went down, as intelligence went up. Therein this essay a new demonstration of incompleteness in mathematics, using the (author’s) neuromathematical approach (taken for granted here as a background, an admittedly unfair but necessary short cut).

***

NEUROLOGY IS MORE SOPHISTICATED THAN EXISTING LOGIC:

The **neuromathematical approach claims that any mathematical theory is a neuronal geometry** (you can call this an axiom, if you wish, but, one day it will be proven in the lab in minute detail, so it’s a conjecture, Tyranosopher’s conjecture). This goes much beyond the usual theory of neural networks, which has no geometry, and the simplest of topologies (mathematical semantics is used here, common readers can ignore it, or look up Wikipedia).

**Neuronal geometry is given by a METRIC, itself given by the time it takes to process neuronal** **logic** (see annex; dendrites, neuroglia, synapses, firing rates, and the finite speed of action potentials are involved in this delay in communication, hence in the distance function; neurological signal speed plays the role of the speed of light in physics, which is also the distance function).
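The metric described above can be sketched computationally. This is a toy model of my own (the neuron names and millisecond delays are hypothetical, chosen purely for illustration): the "distance" between two neurons is taken as the least total signal delay along the connecting chains, computed by a standard shortest-path search.

```python
import heapq

# A toy sketch of the delay metric: distance between neurons is the
# least total signal time (ms) along axonal/synaptic links.

def signal_distance(delays, source):
    """Dijkstra: shortest processing time from source to every neuron."""
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        t, u = heapq.heappop(queue)
        if t > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, delay in delays.get(u, []):
            if t + delay < dist.get(v, float("inf")):
                dist[v] = t + delay
                heapq.heappush(queue, (t + delay, v))
    return dist

# Hypothetical delays (ms) between neurons A, B, C, Z:
delays = {"A": [("B", 2.0), ("C", 5.0)], "B": [("Z", 4.0)], "C": [("Z", 0.5)]}
dist = signal_distance(delays, "A")
assert dist["Z"] == 5.5  # fastest route is A -> C -> Z, not A -> B -> Z
```

Note that, as with the speed of light in physics, the distance is defined by the fastest signal, whatever path it takes.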

The crucial point is that **any neuronal geometry rests on a countable network. Hence it is clearly incomplete.** This is the essence of my argument. One cannot make it simpler than that.

Of course some will smirk that it cannot be that simple. How to generate the richness of mathematics (let alone poetry!) from this madly simple picture? Well the point is that it is not that simple, it’s an immensely complicated configuration space of very high dimension (from neurotransmitters), endowed with geometrodynamics (right there, it’s much more complex than General Relativity, a very simple geometrodynamics, with just 4 dimensions and a fixed topology).

The geometrodynamics allows each neuronal geometry to morph into a neurology next door, topologically inequivalent to it (with a different genus; so neurology is also endowed with a topologicodynamics, differently from the much poorer General Relativity, which is stuck with just one topology). This is how the space of all neuronal geometries can mimic (what Cantor called) the power of the continuum (see below).

***

THE IMPORTANCE OF INCOMPLETENESS:

The realization that **Incompleteness Is A Non Compressible Feature Of Understanding** has been a major philosophical and scientific advance (arguably the greatest, and not just of the twentieth century). It has been a new notion, so enormous that it surfaced slowly over 25 centuries (!). Only now have we reached a final understanding of what is going on.

In its modern version, due initially to Gödel, incompleteness showed up as Gödel’s incompleteness theorem, which states that the theory of numbers includes undecidable propositions. So propositions exist that can neither be proven, nor disproven (this is similar to the parallel postulate being neither provable nor disprovable from the other axioms of Euclid; it’s just saying there will always be the equivalent of the parallel postulate, propositions that can neither be proven, nor disproven, from any previous set of axioms, in any thought system.)

***

HOW THE GREEKS STUMBLED REPEATEDLY OVER INCOMPLETENESS: LIAR, IRRATIONALS, PARALLELS, ZERO.

Incompleteness is the opposite of the all knowing god, it shows that one such being could never be (that is why the Greeks had gods all over, probably; they were smart, they guessed the truth). The first inklings that something was amiss in the theory of human knowledge came from the paradox of the liar. The paradox of the liar surfaces in self referential statements that literally make logic short-circuit. An example of the liar paradox is the statement: *"this sentence is false!"* Indeed, if it is true, it is false, so it is false, but then that is true, etc.
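The short circuit can be made mechanical. Here is a minimal sketch (mine, not a standard formalization): the liar sentence demands a truth value v satisfying v == (not v), and an exhaustive check of two-valued logic shows no such value exists.

```python
# A sketch of the liar's short circuit: "this sentence is false"
# asks for a truth value v with v == (not v).

solutions = [v for v in (True, False) if v == (not v)]
assert solutions == []  # no consistent truth value exists in two-valued logic
```

The logic does not merely fail to decide the sentence; it has no consistent assignment at all, which is why the paradox worried anyone trying to build perfect logic.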

It sounds stupid to worry about such things, but it is not, when one tries to establish **perfect logic** as some Greeks, and later, philosophers in Paris around 1100 CE, tried to do (the philosophers in Paris, being also theologians, were trying to find the thinking of god, which had to be perfect, hence their great rigor).

The paradox of the liar resurfaced brutally in the heart of mathematics around 1900, after Gottlob Frege wrote down what he thought was a perfect axiomatics for arithmetic. Bertrand Russell found the paradox of the liar just below Frege’s surface, causing a serious crisis (which is not fully resolved yet: when set theory is taught, what is taught is so called "naïve set theory", which ignores the most serious problems; see annex, where it is revealed that the foundations of mathematics are rather fluid…).

Another way in which incompleteness appeared, a little while later, was with the appearance of irrational numbers (as they came to be known). The Greek mathematicians, building on the work of their Babylonian and Egyptian predecessors, thought they had a full axiomatics of arithmetic, with just their pathetic little integers. They had connected that with their axiomatization of geometry, through the concept of length.

The Egyptians lined up their pyramids perfectly with the true north (within three-sixtieths of a degree), thus demonstrating they knew how to measure stuff. Geometry was used mostly to determine property extent, obviously important in the periodically flooded rich arable land of the Nile valley. It had come to be that numbers were used to measure length, and both concepts had been identified, through the concept of ratio of integers (giving fractions of a measuring unit).

Pythagoras, a Greek in Southern Italy, proved the theorem that the square of the hypotenuse is equal to the sum of the squares of the other two sides of a right triangle. Soon after, he and his students found that the diagonal of a square of side one, whatever it was, was not the ratio of two integers. It could be calculated with an arbitrary precision, but the process was never ending.
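Both halves of the Pythagorean discovery can be sketched in a few lines. This is an illustration under my own choices (the search bound and the Babylonian/Newton averaging step are mine): no ratio of small integers squares exactly to 2, yet ratios approximate the diagonal as closely as one likes, without the process ever ending.

```python
from fractions import Fraction

# Sketch of the Pythagorean discovery: no ratio of integers squares
# to 2, yet ratios approximate the diagonal arbitrarily well.

# Exhaustive search over denominators up to 200 finds no exact ratio:
exact = [(p, q) for q in range(1, 200) for p in range(1, 2 * q)
         if Fraction(p, q) ** 2 == 2]
assert exact == []

# Yet the never-ending approximation closes in on the diagonal.
# Babylonian (Heron) step: x -> (x + 2/x) / 2, in exact rationals:
approx = Fraction(1)
for _ in range(6):
    approx = (approx + 2 / approx) / 2
assert abs(approx ** 2 - 2) < Fraction(1, 10 ** 20)
```

Each rational in the sequence is a ratio of integers; the diagonal itself, provably, is not, which is exactly the crack in the Greek identification of number and length.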

The Greeks had thought that all numbers were "rational numbers", which supposedly made sense because they were… well, maybe not numbers that one could count on one’s fingers, the integers, but, at least, ratios of integers.

Thus the notion of rational number was incomplete, in the following sense. The Greeks had hoped that all and any length was a number, AND they had also hoped that any number was the ratio of two integers. That was a lot of hope they wanted to believe in. Suddenly the world was not something the Greeks mastered anymore.

If one made the esthetic decision that a number was always a ratio of integers, as the Greeks did initially, then, they found to their dismay that not all lengths were numbers. But then, if one made the decision that all and any length had to correspond to a "number", the notion of number had to be extended, beyond "ratio", to include all the hypotenuses of all and any triangle (thus the ir-ratio numbers). That was philosophically maddening, completely, well, irrational. Indeed then what was a number? Were there still other definitions extending further the notion of number? Where did these extensions stop? OK, so suppose, as the Greeks ended up doing, that any length was a number, and that so was any ratio of lengths.

According to this new definition, the ratio of circumference to diameter of a circle, named pi, is a number. Could it be represented as a length obtained by Greek instruments, line and compass? If not, how to compute it?

Meanwhile the Greeks stumbled on the concept of zero. Instead of completing their mathematics in that direction, they passed the concept to the Indians (who, with a more numerically aggressive religion, friendly to big numbers, were not afraid to develop it, while using a more advanced notation perhaps following the Chinese).

Meanwhile Euclid posited the postulate that one, and only one, parallel to a given line passes through a given point exterior to it; he could NOT deduce it from the rest of his axioms.

This so called "parallel postulate" nagged the Greeks, and everybody else, for 2,100 years (it was not as “self obvious” as the other axioms of Euclid, so people had a feeling that it ought to be a theorem, hence demonstrable from the “self obvious” axioms).

It should not have nagged all mathematicians for 2,100 years, but it did. People tried, in vain, to prove the postulate (from the other axioms). Finally, starting in 1829 with Lobachevski, geometries were found that satisfied all of Euclid’s axioms, except the parallel postulate. (Exotic geometries were so scandalous that Gauss claimed he did not publish his research because he feared the "cries of the Boeotians"; the Boeotians were peasants north of Athens known for being civilizational retards.)

Exotic geometries should have been obvious, as long as one had made the esthetic, not to say hedonistic, decision to make Euclidean geometry on a pillow, or a saddle, or a sphere…
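On the sphere, the failure of the parallel postulate can even be computed. This is a minimal sketch of mine: the "straight lines" of spherical geometry are great circles, each determined by a normal vector, and any two distinct great circles are forced to meet (at two antipodal points, in the direction of the cross product of their normals), so no parallels exist at all.

```python
import math

# Sketch: on a sphere, "straight lines" are great circles (unit vectors
# orthogonal to a normal n), and any two distinct ones must intersect.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

n1 = (0.0, 0.0, 1.0)   # the equator's normal
n2 = (0.0, 1.0, 0.0)   # a meridian circle's normal
p = cross(n1, n2)      # intersection direction (with its antipode, -p)
norm = math.sqrt(dot(p, p))
assert norm > 0        # distinct great circles: the meeting points exist
point = tuple(x / norm for x in p)
# The meeting point lies on BOTH circles (orthogonal to both normals):
assert abs(dot(point, n1)) < 1e-12 and abs(dot(point, n2)) < 1e-12
```

So the moment one allows geometry on a sphere, the postulate is visibly a choice about the surface, not a necessary truth.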

Why restrict oneself to a plane? Was the world flat? No, and the Greeks knew it was not; they had even measured the size of the Earth with great precision (so big was Earth’s size that Columbus was not believed when he said he could sail to China, because it was known that China was beyond the range of existing sail boats… But America was not, and the Vikings had traded ivory, and even timber, from there for 5 centuries…)

So it was irrational to restrict oneself to flat geometry (and Euclid’s predecessors knew this, but as the world veered into Macedonian fascist domination, full blown thinking became the enemy of the sovereign, and thus was forced to adopt a low profile: in a world where Gold Man Sacks, little men learn to be stupid servants).

***

NEW GEOMETRIC MODELS COMPLETED THE PICTURE:

In the meantime, "numbers" which, when multiplied by themselves, gave a negative product, had been found to be useful to solve equations. Those "imaginary" numbers led to real solutions, and had many other esthetic advantages. For example any polynomial equation of degree n had exactly n roots, counted with multiplicity (d’Alembert’s theorem). Very pretty, very handy. Finally a magnificent, trivial and beautiful interpretation of "imaginary" numbers was found (1806). That was more or less coincidental with the discovery by Faraday that a moving magnet created an electric current (1831).

A stupid journalist asked Faraday what was the use of that effect, to which the great man replied: *"What is the use of a new born baby?"* Faraday’s law of induction is of course at the basis of all of the world’s industry now. Tellingly a madly rotating turbine in an electric power plant, or windmill, describes a geometry that exactly depicts imaginary numbers (Argand’s diagram, 1806). So an extension of the concept of number that would have driven the Greeks completely mad, at first sight, had a natural geometric description… Simply geometry was not just about the technology of line and compass. Now we have the technology of turbines, and, or, Quantum Mechanics, and those are all about “imaginary” numbers, which are not imaginary any more than turbines or Quantum Mechanics.

(Both turbines and Quantum have to do with electromagnetic waves, that’s their connection: the famous 2-slit experiment, in optics or electromagnetism, is also the basis of Quantum Mechanics.)
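The Argand picture above can be sketched directly. This is a minimal illustration of mine: multiplying by i rotates the plane a quarter turn, so i times i, two quarter turns, is a half turn, landing exactly on -1, which is all the "impossible" equation i² = -1 ever meant.

```python
import cmath

# Sketch of the Argand diagram: multiplying by i rotates the plane
# a quarter turn; two quarter turns is a half turn, i.e. -1.

z = 1 + 0j                       # a point on the real axis
quarter_turn = 1j * z            # rotate 90 degrees: the imaginary axis
half_turn = 1j * quarter_turn    # rotate 90 degrees again: lands on -1
assert half_turn == -1 + 0j

# The same turbine-like rotation, measured as an angle:
angle_before = cmath.phase(z)
angle_after = cmath.phase(quarter_turn)
assert abs(angle_after - angle_before - cmath.pi / 2) < 1e-12
```

Once rotation is seen, a number whose square is negative is no longer maddening; it is simply a turn of the plane, the very motion a turbine describes.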

So the parallel postulate was solved by being more open to what one meant by geometry… In general understanding further is similar to what happened with the parallel postulate: suppose more stuff, to get richer abstractions, abstractions that can do more. The world is rich, a richer mind, correctly made, can model it better.

Then it turned out that pi, although calculable (its square being given by an infinite series of predictable rationals), was transcendental (that is, it is not the solution of any polynomial equation with rational coefficients).
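The "infinite series of predictable rationals" can be sketched concretely. This is my illustration using Euler’s series (one classical choice among several): the sum 1/1 + 1/4 + 1/9 + … converges to pi squared over 6, so truncating it computes pi to any desired precision, without pi ever being a root of a rational polynomial.

```python
import math

# Sketch: Euler's series sum(1/n^2) converges to pi**2 / 6, so pi is
# calculable from predictable rationals, term by term.

partial = sum(1 / n ** 2 for n in range(1, 200000))
pi_estimate = math.sqrt(6 * partial)
assert abs(pi_estimate - math.pi) < 1e-4  # better with more terms
```

Computability and transcendence thus coexist: the approximating process is fully predictable, yet it never closes into a finite algebraic equation.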

***

THEN CANTOR INVENTED THE CANTOR DIAGONALIZATION PROCESS:

Mathematicians massaged the Cantor diagonalization process for the century that followed its establishment in Cantor’s mind, extracting juicy theorems and spectacular results from it (with generally trivial proofs, see annex). It is very simple.

Cantor supposed that all the real numbers could be counted like sheep, from top to bottom, and so he lined them across, developing each real number in its full decimal expansion horizontally. OK, the bottom was down to infinity. That gigantic array he obtained is also called a matrix.

Cantor ended visually with a gigantic matrix of digits, let’s call it the CANTOR MATRIX. Cantor labeled that gigantic matrix as R(n, m): R(n,m) being the mth digit in the decimal expansion of the nth real in the Cantor matrix.

Then Cantor built a real C by considering the diagonal R(n,n) of his giant Cantor matrix. He defined C by giving an algorithm for its decimal expansion, namely a way to compute C: the nth digit of C would be C(n), where C(n) would be R(n,n) plus a (perhaps variable, or not) *non zero* integer. To define things precisely, say: C(n) = R(n,n) + 1 (wrapping around to 0 when R(n,n) = 9, so that C(n) stays a digit). In other words, the nth decimal of the made up number C can never be the nth decimal of… the nth number in the list: at this point, the conclusion is obvious: C cannot be in the list. But let’s pound it, the way mathematicians like to do.

Indeed, since Cantor had supposed that the reals could be lined up like sheep, the number C ought to be in the list, as the kth (say) number. In other words: C = R(k). Hence we should have the nth digit of C, C(n), equal to the nth digit of R(k). But that is R(k,n). But C(n) was constructed to be R(n,n) + 1. In particular, C(k) is then both R(k,k) and R(k,k) + 1. So, either 0 = 1, or the initial hypothesis at the root of the whole contraption, that it was possible to build the Cantor Matrix, containing all the reals, was FALSE.
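The diagonal trick above can be run on a finite toy version of the Cantor matrix. This is a sketch of mine (the five listed "reals" are arbitrary digit rows chosen for illustration, and the wrap of 9 back to 0 keeps every C(n) a digit): however the rows are listed, the diagonal-modified number is absent from the list.

```python
# Sketch of Cantor diagonalization on a finite toy Cantor matrix:
# any attempted list of decimal expansions misses the diagonal number.

def diagonal_number(matrix):
    """Digit n of C is R(n, n) + 1, wrapping 9 back to 0."""
    return [(row[n] + 1) % 10 for n, row in enumerate(matrix)]

# A hypothetical enumeration: each row is one real's decimal digits.
R = [
    [3, 1, 4, 1, 5],
    [2, 7, 1, 8, 2],
    [1, 4, 1, 4, 2],
    [6, 1, 8, 0, 3],
    [5, 7, 7, 2, 1],
]
C = diagonal_number(R)
# C differs from row k at digit k, hence from every listed number:
assert all(C[k] != R[k][k] for k in range(len(R)))
assert C not in R
```

The finite version is not the proof, of course; it is the proof's engine, which Cantor ran against an infinite list, forcing the contradiction 0 = 1.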

A more intuitive way to look at the proof is this: suppose each number is viewed as a mountain range, each point in the decimal expansion being viewed as an altitude, anything between 0 and 9. A Cantor mountain range is made up by modifying one of the heights of each mountain range at some point, and gluing all such modifications along, to obtain a mountain range guaranteed to be different from all those lined up initially. This means that the Cantor modification is geometric in nature (height being a distance). As we will see, neurology can do more, because it can make not only geometric changes, but topological ones (changing its genus with all sorts of surgery).

What has exactly happened here, in this Cantor diagonalization trick? Well, I claim, **something neurological happened**.

***

FORMAL INCOMPLETENESS:

Now go forward another generation or two, to Gödel and Turing. Gödel demonstrated that, as long as one had basic integers, with multiplication and addition, a sentence could be made that would say: "I am not demonstrable".

Turing generalized this, and Chaitin, generalizing in turn ideas of Leibniz and Borel, found a probabilistic approach. Borel had observed that chance could not be defined, because, if it were, it would not be chance anymore. This may sound too philosophical, but, remember, mathematics is about philosophy. Or, as I point out, neurophilosophy.

***

INCOMPLETENESS IS HOW WE REACH FOR THE STARS:

For 2,000 years, mathematicians were mystified by parallels, but all they had to do was look at any curved surface to realize that they were mystified erroneously: the problem was not what parallels did, or did not do, but how they should be defined. Same for the concept of numbers, same for the concept of chance.

The paradox of the liar was a big subject of (non trivial) reflection in the depth of the Middle Ages, between Paris and Oxford (circa 1100-1400 CE). It is still alive and well; Bertrand Russell used a variant of it to show that the axiomatics of mathematics were self contradictory (circa 1900). He considered the set B whose elements are all the sets which are not elements of themselves. Now if B is an element of B, it is not an element of B. And if it is not an element of B, it is an element of B.

Gödel used a variant of the liar argument.

OK, so what is the verdict? Can we progress by introducing much more powerful semantics and abstraction? What do I mean? Imaginary numbers were hard, until it was realized that they corresponded to rotations in the plane (Argand diagrams, rotating electric fields). Then the fact that a number multiplied by itself could be negative became trivial. Similarly, curved geometry, and irrational numbers, became obvious, once looked at the right way. This is also the case for zero, or negative numbers; everybody takes those for granted.

Abstraction consists in forgetting the details, and concentrating on an essence, which becomes the new definition. Incompleteness makes this work in reverse.

Neurologically, abstraction corresponds to establishing a shorter, less energetically and less temporally costly neural network going more directly to the meat of the matter (don’t forget you are dealing with the mind of a killer ape, meat is where it’s at).

It is my opinion that pieces of mathematics correspond to subsets of neuronal architecture (I should say neuroglial architecture, because glial cells, which make up 90% of the human brain, are involved). Any subset of neuronal architecture is countable (actually, although large, it is finite, say involving a skeleton of at most a trillion trillion pieces of networks (yes, ten to the power 24), counting everything, even dendrites). So basically a mathematical reasoning is a neural network (a subset of all paths possible with a trillion trillion pieces). But the neural network can be changed in a non trivial way, __TOPOLOGICALLY__ speaking (topology is the science of neighborhoods, forgetting about distance measured by number: distance gets measured only by the notion of neighborhood, literally, not by a number).

Although any given neural network is countable, it can readily morph into something completely different, geometrically, or topologically. [Neural] countability thus sits next to the infinity of the continuum. According to me, **this is the essence of Cantor diagonalization: any countable array gives rise to elements not in it.**

**And it is the essence of incompleteness: any mathematical theory is, by essence a COUNTABLE neural network, and thus misses most of math. Realized mathematics will always be of measure zero in the set of all possible mathematics.**

Notice that the neural networks can vary geometrically (which is in a way what Cantor did), but can also do much more, because they can morph into some which are *not* topologically equivalent (and do this all the time, since their connectivity varies through new neuronal, axonal, dendritic and glial geometry).

Now, of course, if even mathematical reasoning is that incomplete, a fortiori all and any reasoning. Thus the preceding result has impact on all knowledge and cognition.

***

LIAR AND NEUROLOGY:

And what of the paradox of the liar in all this? (Another version of it is: "The following sentence is true. The preceding sentence is false.")

Well, Russell solved it with his theory of "types", a hierarchy avoiding self reference. I think the solution is just **to realize that neurology has a hierarchical organization that can be called "meta"** (loosely corresponding to Russell’s hierarchy, or to the one founded by Von Neumann, which starts with the empty set, then uses the axiom of infinity, or inductive axiom: if y is in the set, so is y U {y}).

"Meta" enables abstraction. It basically consists, given a neuronal set S, in a set of higher neurons, H, which, observing the quasi-simultaneity of some sorts of firings, draw the consequences, in the form of new axonal chains between S and H that short-circuit the long axonal chains confined to S (this corresponds to the logician Alonzo Church’s definition of abstraction).

Say neuron A, after a long chain of intermediaries, always makes neuron Z fire; then a neuron B appears that connects A and Z directly, shortening the axiomatic/program structure: such is the abstracting process, reproduced in mathematics by forgetting (some of) the details (of course, it is the same abstracting process which is used all over).
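The A-to-Z shortening can be sketched numerically. This is a toy model of mine (the neuron names and millisecond delays are hypothetical): the long chain through intermediaries carries a large total delay, and the new "meta" neuron B replaces it with a far cheaper direct route, which is the energetic and temporal payoff of abstraction described above.

```python
# Toy sketch of the abstraction step: a higher neuron B wires A
# directly to Z, cutting the signal time of the long chain.

def chain_delay(chain, delays):
    """Total signal time along a chain of neurons."""
    return sum(delays[(a, b)] for a, b in zip(chain, chain[1:]))

# Hypothetical delays (ms) along the long chain A -> M1 -> M2 -> M3 -> Z:
delays = {("A", "M1"): 3.0, ("M1", "M2"): 3.0,
          ("M2", "M3"): 3.0, ("M3", "Z"): 3.0}
long_way = chain_delay(["A", "M1", "M2", "M3", "Z"], delays)

# The higher neuron B observes that A's firing always ends in Z firing,
# and grows a direct connection:
delays[("A", "B")] = 2.0
delays[("B", "Z")] = 2.0
short_way = chain_delay(["A", "B", "Z"], delays)

assert long_way == 12.0 and short_way == 4.0  # abstraction = shorter path
```

In the metric defined earlier, abstraction literally shrinks the distance between A and Z.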

The liar paradox disappears in neurology, because neuromathematics eliminates self referential loops. These cannot happen (neuro)logically (neurons don’t short circuit, be it only because they cannot fire immediately again, let alone the fact that neurology sees no interest in close by, loopy circuitry).

***

INCOMPLETELY YOURS:

In any case, such is my resolution of incompleteness. **All and any theory is countable, but the universe is not. And neither is the mind (thus, neither is the mathematics).** The related liar’s problem is disposed of by the geometrodynamics.

Some will say: what sort of proof is that? But what is the proof of zero? Or the proof of irrationals? Or the proof of hyperbolic geometry? Of course, there is none, they are just choices, and then observations we make in life.

I realize this is incomplete, and (not yet) demonstrable in its entirety, but, as I was saying, all and any theory is incomplete…

***

Patrice Ayme

***

1) __Annex on why neurology has intrinsic geometry__: Generally, neurology is viewed as a set of neural networks. Neural networks are almost trivial things: a directed graph with edge weights, and perhaps a "transfer function" at each vertex. The interesting content is in algorithms that progressively improve a solution to an inverse problem: calculating edge weights that result in desired couplings between input and output edges. The picture here goes completely beyond that, since NEUROLOGY BECOMES VARIABLE GEOMETRY, AND EVEN VARIABLE TOPOLOGY.

Indeed, **neuronal logic incorporates a temporal hierarchy, given by the time it takes to process the logic**. Neurology, among other things, is **logic + time delayed causality** (notice the analogy with special relativity, and, or, field theory, be it electrodynamic or gravitational, where a crucial point is delayed, hence local, causality).

Neuronal geometry is thus given by the time it takes for logical processes (neuronal firing, and the propagation of signals down axonal-dendritic-glial chains, is far from instantaneous, because not only are nerve impulses slow, but the signal is reprocessed along the way, with typically a glial cell’s foot interfering with each synapse, which is itself, all by itself, a geometrical computer).

*

2) __Annex on how the debate progressed in Paris circa 1100 CE__: The notions above address, and are an attempt at solving directly, once and for all, the debate between realism (Champeaux, Archdeacon of Paris, teacher of Abelard) versus nominalism (Roscelin, preceding teacher of Abelard), versus conceptualism (Abelard). Those thinkers, circa 1100 CE, all knew each other, and were busy going well beyond Aristotle’s metaphysical uselessness in the debate on "universals" (ideas). Champeaux thought "universals" were real, out there (a position started with Plato, I guess, where it made strictly no sense). Roscelin thought "universals" were all in the mind. Abelard was in between. The position above, neuromathematics, is that universals are real, but all in the mind. This is how the universe teaches us to become human. (BTW, this shows that the European thought system had gone well beyond the Greeks by 1100 CE, and thus was not dependent on getting reacquainted with the Greeks, contrary to what is generally depicted. The argument can even be made that forgetting the forgettable details of Greek thought is exactly what the doctor ordered.)

*

3) __Cantor diagonalization has many spectacular applications__. For example, suppose we considered a number m, and then suppose we enumerated its properties: P(1) could be whether it is even, P(2) could be whether it is prime, P(3) could be whether it is normal, etc… Each property could be expressed by an expansion as a sequence of 0s and 1s, as is done in computer science. Then one could consider diagonals, and tweak them as Cantor did, getting properties not found in the original list. Conclusion: the properties of any given number are not countable. (That puts an ironical light on the physicists searching for a "theory of everything", and the believers who believe in just one god: they should take the diagonal of god, see what happens…)
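This annex’s diagonal can be sketched on a finite toy table. This is my illustration (the 0/1 rows are arbitrary placeholders for properties like "even" or "prime", chosen only to show the mechanism): flipping the diagonal produces a property-sequence that disagrees with every listed one.

```python
# Sketch of the annex's diagonal on properties: encode each property
# as a 0/1 sequence, list the sequences, and flip the diagonal.

def diagonal_property(table):
    """Entry n of the new sequence is the opposite of table[n][n]."""
    return [1 - row[n] for n, row in enumerate(table)]

# Hypothetical property table for some number m (rows = P(1), P(2), ...):
P = [
    [0, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
]
D = diagonal_property(P)
# D disagrees with listed property k at position k, so it was never listed:
assert all(D[k] != P[k][k] for k in range(len(P)))
assert D not in P
```

As with the original diagonalization, the finite run only exhibits the mechanism; the annex’s conclusion comes from running it against any proposed complete enumeration.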

*

4) __The foundations of mathematics have proven to be a jungle__: Many foundational systems have been elaborated (ZFC, MK, T-G, NBG, etc.), to try to have enough logical power to support the elaborate reasonings of some mathematicians (such as Grothendieck), or theories such as category and model theory, while avoiding paradoxes. The final word is not in, but the implicit morality modern mathematicians have extracted from it is that **your foundation depends upon your construction**, just as in the building industry. **All foundations are local, only the mind is global.**